Description
A leading global financial institution is seeking a Data Engineer at the intermediate level to contribute to the development, implementation, and optimization of large-scale data platforms. The role involves working closely with the technology team to analyze, develop, and enhance data-driven applications and systems. The ideal candidate will bring expertise in big data architecture, data warehousing, and distributed computing to support complex business needs.

Responsibilities:
- Conduct feasibility studies, estimate time and costs, and assist in IT planning, risk technology, and applications development.
- Oversee all phases of the development process, including analysis, design, construction, testing, and implementation.
- Provide user and operational support for applications and ensure smooth functionality for business users.
- Analyze complex problems, evaluate business processes, and assess industry standards to deliver innovative solutions.
- Develop and implement security measures in post-implementation analysis to maintain system integrity.
- Consult with stakeholders and technology teams to recommend advanced programming solutions and resolve issues.
- Define and ensure adherence to essential operating standards and procedures.
- Act as a mentor or advisor to junior team members and provide subject matter expertise.
- Work independently with minimal supervision while maintaining high-quality deliverables.
- Assess risk when making business decisions and ensure compliance with relevant laws, regulations, and policies.

Qualifications:
- 5 years of experience in application development, systems analysis, or data engineering.
- Strong background in statistical modeling and handling large data sets.
- Hands-on experience in Big Data architecture, with expertise in troubleshooting performance and development issues on Hadoop (Cloudera preferred) or Snowflake.
- Extensive experience working with Core SQL, Hive, Impala, HBase, Kudu, and Spark for data curation and processing.
- Experience with Hadoop, Spark, and Java required.
- 5 years of experience with Spark, Storm, Kafka, or equivalent batch/streaming processing frameworks.
- Proficiency in PySpark (Python API) and Scala Spark for data engineering workflows.
- Expertise in data warehousing solutions and technologies.
- Strong knowledge of data structures, algorithms, and distributed systems.
- Experience managing and implementing successful projects.
- Demonstrated leadership, project management, and problem-solving skills.
- Ability to quickly adjust priorities and work in fast-paced environments.
- Strong written and verbal communication skills for cross-functional collaboration.

Education:
- Bachelor's degree in a relevant field or equivalent experience.