Description:
Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Redshift, RDS, and S3.
Collaborate with data scientists, analysts, and stakeholders to understand data requirements and create robust data solutions.
Optimize and manage ETL processes to ensure timely and accurate data availability for analytics.
Monitor, troubleshoot, and enhance data pipelines for performance, scalability, and reliability.
Develop and maintain data models in Redshift to support various business reporting needs.
Implement data quality checks and ensure compliance with data governance standards.
Manage and optimize AWS infrastructure for cost-effective resource utilization, following security best practices.
Collaborate with DevOps and infrastructure teams to automate data pipeline deployments and integrate with CI/CD processes.
Requirements:
5+ years' experience as a Data Engineer with expertise in AWS services such as Glue, Redshift, RDS, and S3.
Proficiency in SQL, data modeling, schema design, and query optimization.
Experience with ETL tools, particularly AWS Glue, and data warehousing concepts, especially Redshift.
Knowledge of Python, Spark, or other data processing languages and frameworks.
Familiarity with data governance, data quality, and data security best practices.
Strong problem-solving skills and ability to work in a fast-paced environment.
Benefits:
100% Remote Work
Health Care Plan (Medical, Dental & Vision) for employee and family