Description:
Looking for a skilled Scala Developer with experience building data engineering frameworks on Apache Spark.
Responsible for developing and optimizing data processing pipelines to ensure efficiency, scalability, and support for the bank's data-driven goals.
Key responsibilities include developing and maintaining scalable data processing pipelines using Scala and Apache Spark.
Requires a solid foundation in software engineering, including Object-Oriented Design (OOD) and design patterns.
Exposure to the Cloudera or Hortonworks Hadoop distributions, including HDFS, YARN, and Hive.
Write clean, efficient, and maintainable code that meets project requirements.
Optimize Spark jobs for performance and cost efficiency.
Collaborate with the data architecture team to implement data engineering best practices.
Troubleshoot and resolve technical issues related to data processing.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Minimum of 3 years of professional experience in Scala programming.
Demonstrated experience with Apache Spark and building data engineering pipelines.
Strong knowledge of data structures, algorithms, and distributed computing concepts.
Benefits:
Full-time position with the flexibility of remote work.
Opportunity to work on public cloud offerings and delivery in Cloud Data Services.
Chance to contribute to the development and optimization of data processing pipelines.
Exposure to a bank's data-driven goals and to implementing data engineering best practices.