Description:
Design, build, and maintain efficient, reliable data pipelines using DBT and Airflow (see the first sketch following this list).
Develop and optimize ETL processes to ensure data quality and integrity in Redshift.
Collaborate with data analysts and other stakeholders to understand data requirements and deliver solutions.
Implement and maintain data warehousing solutions to support business intelligence and analytics.
Monitor and troubleshoot data pipeline performance and issues.
Support existing data pipelines and handle requests from internal users, such as schema changes and new data sources.
Assist in implementing components of a Data Lakehouse architecture using Apache Spark and Apache Iceberg (see the second sketch following this list).
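For orientation, here is a minimal, hypothetical sketch of the Airflow-orchestrated DBT pattern the first responsibility describes. The DAG id, schedule, and project path are assumptions rather than details from this posting, and it presumes Airflow 2.x with the dbt CLI available on the worker:

```python
# Hypothetical illustration only: a daily Airflow DAG that builds and tests
# dbt models against the warehouse (e.g., Redshift). Names and paths are
# assumptions, not details from this posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_run",    # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",         # Airflow 2.4+; use schedule_interval on older 2.x
    catchup=False,
) as dag:
    # Build the models, then run dbt's data-quality tests on the results.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/project",  # hypothetical path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )
    dbt_run >> dbt_test
```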
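Likewise, a minimal sketch of the Spark + Iceberg lakehouse work mentioned in the last responsibility. The catalog name, warehouse path, table name, and runtime package version are all assumptions:

```python
# Hypothetical illustration only: create an Apache Iceberg table from Spark
# and append a batch of rows. Requires the Iceberg Spark runtime on the
# classpath; the package version below is an assumption.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp

spark = (
    SparkSession.builder
    .appName("iceberg-lakehouse-sketch")
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    # Register a Hadoop-type Iceberg catalog backed by a local warehouse path.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Create an Iceberg table (hypothetical name) and append one row.
spark.sql(
    "CREATE TABLE IF NOT EXISTS lake.db.events (id BIGINT, ts TIMESTAMP) USING iceberg"
)
df = (
    spark.createDataFrame([(1, "2024-01-01 00:00:00")], ["id", "ts"])
    .withColumn("ts", to_timestamp("ts"))  # cast the string to a proper timestamp
)
df.writeTo("lake.db.events").append()
```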
Requirements:
3+ years of experience in a data engineering, data analytics, or data science role.
Bachelor's degree in Computer Science, Engineering, or a related field.
Knowledge of big data technologies (e.g., Apache Iceberg, Spark).
Proficiency in SQL and experience with relational databases (e.g., Redshift).
Strong programming skills in Python.
Experience with data pipeline and workflow management tools (e.g., Apache Airflow, DBT).
Strong problem-solving skills and attention to detail.
Ability to communicate clearly and directly about complex technical topics.
Passion for building the best version of whatever you’re working on.
A track record of working autonomously, with strong organizational and time management skills.
A desire to keep up with advancements in data engineering, analytics, and data science practices.
Benefits:
Comprehensive healthcare, retirement, and voluntary benefits including medical, dental, vision, health savings accounts, 401k, and more.
Personalized care and tools to support your mental health and wellness goals.