This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
The Data Engineer will need to have strong technical experience with languages and tools such as Python, Spark, DBT, and Databricks.
They should also have experience with and knowledge of AWS services like Redshift, Athena, Glue, S3, and EMR.
The role involves collaborating with team members to gather business requirements, define successful analytics outcomes, and design data models.
Providing clean and appropriate data sets/views for data and reporting requirements is a key responsibility.
Applying appropriate statistical calculations and models to data sets in GROW’s Data Lake is required.
Continuously improving the performance of all data pipelines and data stores is essential, applying engineering best practices to the code.
The Data Engineer will provide input during the discovery and solution phases for business/client needs, assessing the risk of exposing data and working in line with GROW’s risk framework.
Requirements:
Strong technical experience with Python, Spark, DBT, and Databricks.
Experience with and knowledge of AWS services such as Redshift, Athena, Glue, S3, and EMR.
Familiarity with CI/CD tools and practices.
Ability to collaborate with team members to collect business requirements and design data models.
Proficiency in providing clean and appropriate data sets/views for data and reporting requirements.
Knowledge of applying statistical calculations and models to data sets.
Experience in performance optimization of data pipelines and data stores.
Understanding of engineering best practices like versioning, automated testing, and CI/CD.
Ability to provide input during the discovery and solution phases for business/client needs.
Benefits:
One day of birthday leave each year.
Health insurance for the employee and one dependent.