This job post is closed and the position is probably filled. Please do not apply.
Description:
As a Data Engineer at GROW Inc, you will work with languages and tools such as Python, Spark, DBT, and Databricks.
The role requires experience with AWS services such as Redshift, Athena, Glue, S3, and EMR.
You will collaborate with team members to gather business requirements, define successful analytics outcomes, and design data models.
Providing clean, fit-for-purpose data sets and views for data and reporting needs is essential.
You will apply statistical calculations and models to data sets in GROW's Data Lake.
You are expected to continuously improve and optimize the performance of all data pipelines and data stores, following engineering best practices.
You will also offer input during the discovery and solution phases for business and client needs, assess the risks of exposing data, and align with GROW's risk framework.
Requirements:
Strong technical experience with Python, Spark, DBT, and Databricks.
Experience and knowledge of AWS services such as Redshift, Athena, Glue, S3, and EMR.
Familiarity with CI/CD tools and practices.
Ability to collaborate with team members to gather business requirements and design data models.
Proficiency in providing clean data sets/views for reporting needs.
Knowledge of applying statistical calculations and models to data sets.
Experience in performance optimization of data pipelines and data stores.
Understanding of risk assessment in exposing data and working within a risk framework.
Benefits:
10 'ME' days and 1 day of birthday leave each year in addition to annual leave.
Health insurance coverage for the employee and 1 dependent.
Government Contributions paid on top of the salary.
Flexible remote working environment.
Diverse, friendly, and transparent culture.
Opportunities for on-the-job learning and training.