This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
Design, build, and maintain scalable data pipelines and ETL processes to support data-driven decision-making.
Develop and optimize data storage solutions, including databases and data warehouses.
Ensure data quality and consistency across various data sources.
Collaborate with data scientists and analysts to provide data for analysis and modeling.
Troubleshoot and resolve data-related issues and performance bottlenecks.
Work remotely on an hourly basis.
Requirements:
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Master’s degree in Data Engineering, Computer Science, or a related field (preferred).
2-4 years of experience in data engineering or a related role.
Proficiency in SQL and programming languages such as Python, Java, or Scala.
Experience with data management tools and technologies such as Hadoop, Spark, or Amazon Redshift.
Knowledge of data warehousing solutions and database design.
Familiarity with data modeling and schema design.
Benefits:
Fully remote work.
Location independence.
Great opportunity for growth.
Join a high-performing, fast-paced team working with exciting businesses and projects.