This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
Collaborate with cross-functional teams to design and build scalable data solutions supporting product innovations and business insights.
Develop and maintain robust data pipelines for complex data collection, ingestion, and transformation processes.
Work closely with R&D teams in data science and AI to supply the data their work requires.
Design data validation, quality, and reliability mechanisms that ensure data sources are accurate and trustworthy.
Diagnose and resolve data-related issues to enhance performance and scalability of data systems.
Contribute to creating data governance and security protocols to safeguard sensitive information.
Requirements:
Master's degree in Computer Science, Engineering, or a related field.
Minimum of 5 years of experience in data engineering or a related field.
Proficiency in relational databases (e.g., MySQL, PostgreSQL).
Strong experience with data pipeline and workflow management tools (e.g., dbt).
Familiarity with big data technologies like Spark and streaming technologies such as Kafka.
Knowledge of cloud services and architecture (AWS, Azure, or GCP) with a focus on data storage and computing services.
Experience with data integration, ETL processes, and data warehousing solutions.
Proficiency in Python with a focus on data-centric programming.
Experience with machine learning algorithms, data modeling techniques, and large language models (LLMs).
Familiarity with Graph Databases (e.g., Neo4j).
Benefits:
Competitive salary and benefits package.
Opportunity to work with cutting-edge AI technology.
Collaborative work environment with cross-functional teams.
Continuous learning and professional development opportunities.
Chance to contribute to innovative projects in a dynamic industry.