This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
The Data Engineer will design, implement, test, and maintain data models within the data lake architecture to support advanced analytics.
They will ensure that these data models are scalable, maintainable, and performant.
The role also involves designing and managing workflows with Apache Airflow and collaborating with team members across the project.
Requirements:
Bachelor’s degree in Computer Science, Engineering, or a related field is required.
The candidate should have at least 3 years of experience in a data engineering role, preferably with exposure to data lake environments.
Proficiency in SQL and Python for complex queries and analytics is essential.
Experience with ETL processes and tools is necessary.
Strong understanding of the Hadoop ecosystem, including Hive and Apache Spark, for processing large datasets is required.
Familiarity with Apache Airflow for workflow management and major cloud platforms (AWS, Azure) is expected.
Strong teamwork and problem-solving skills are expected; certifications in Big Data technologies and cloud platforms are strongly preferred.
Benefits:
Competitive salary.
The company provides a friendly, pleasant, and creative working environment.
Remote work available for the Data Engineer role.
Development opportunities to enhance skills and career growth.
Private health insurance is provided as a benefit.