Description:
Design, build, and maintain scalable data pipelines and ETL processes to support data-driven decision-making.
Develop and optimize data storage solutions, including databases and data warehouses.
Ensure data quality and consistency across various data sources.
Collaborate with data scientists and analysts to provide data for analysis and modeling.
Troubleshoot and resolve data-related issues and performance bottlenecks.
Work remotely on an hourly basis, transforming data into actionable insights that support business growth.
Requirements:
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Master’s degree in Data Engineering, Computer Science, or a related field (preferred).
2-4 years of experience in data engineering or a related role.
Proficiency in SQL and programming languages such as Python, Java, or Scala.
Experience with big data and data warehousing tools such as Hadoop, Spark, or Amazon Redshift.
Knowledge of data warehousing solutions, database design, data modeling, and schema design.
Benefits:
Work fully remotely with complete location independence.
Great opportunity for personal and professional growth.
Join a skilled, fast-paced team working on exciting businesses and projects.