This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
The Sr Azure Data Engineer - Team Lead will participate in business discussions and assist in gathering data requirements.
The role requires strong analytical and problem-solving skills to address data challenges.
Proficiency in writing complex SQL queries for data extraction, transformation, and analysis is essential.
Knowledge of SQL functions, joins, subqueries, and performance tuning is required.
The candidate should be able to navigate source systems with minimal guidance to understand data relationships and use data profiling for better data understanding.
Hands-on experience with PySpark, including Spark SQL, is necessary.
The position involves creating and managing data pipelines using Azure Data Factory.
An understanding of data integration, transformation, and workflow orchestration in Azure environments is important.
Knowledge of data engineering workflows and best practices in Databricks is required.
The candidate should be able to understand existing templates and patterns for development.
Hands-on experience with Unity Catalog and Databricks Workflows is necessary.
Proficiency in using Git for version control and collaboration in data projects is required.
The ability to work effectively in a team environment, especially in agile or collaborative settings, is essential.
Clear and effective communication skills are needed to articulate findings and recommendations to team members.
The candidate should be able to document processes, workflows, and data analysis results effectively.
A willingness to learn new tools, technologies, and techniques as the field of data analytics evolves is important.
The candidate must be adaptable to changing project requirements and priorities.
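A concrete sense of the SQL work described above — complex joins, subqueries, and data profiling — can be sketched with a small, self-contained example. The table and column names below are hypothetical, and SQLite stands in for the production source system.

```python
import sqlite3

# In-memory database with hypothetical customers/orders tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         amount REAL, status TEXT);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC'), (3, 'EMEA');
    INSERT INTO orders VALUES
        (10, 1, 120.0, 'shipped'),
        (11, 1,  80.0, 'cancelled'),
        (12, 2, 200.0, 'shipped'),
        (13, 3, NULL,  'shipped');
""")

# Join + subquery: per-region revenue from shipped orders only,
# keeping regions whose revenue exceeds the average shipped amount.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'shipped'
    GROUP BY c.region
    HAVING SUM(o.amount) > (
        SELECT AVG(amount) FROM orders WHERE status = 'shipped'
    )
""").fetchall()

# Quick data profiling: row count, distinct customers, NULL amounts.
profile = conn.execute("""
    SELECT COUNT(*), COUNT(DISTINCT customer_id),
           SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END)
    FROM orders
""").fetchone()
```

Profiling queries like the last one are a quick way to spot missing values and cardinality issues before building a pipeline on top of the data.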
Requirements:
The candidate must have expertise in Azure Databricks, Data Lakehouse architectures, and Azure Data Factory.
A strong background in optimizing data workflows and predictive modeling is required.
The role requires designing and implementing data pipelines using Databricks and Spark.
Expertise in batch and streaming data solutions is necessary.
The candidate should have experience automating workflows with CI/CD tools like Jenkins and Azure DevOps.
Ensuring data governance and reliability with Delta Lake (e.g., ACID transactions and schema enforcement) is a key requirement.
Advanced SQL expertise is essential for this position.
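As one illustration of the advanced SQL expected here, a common data-engineering pattern is window-function deduplication (keeping the latest record per key). The events table below is hypothetical, and SQLite again stands in for the warehouse; `ROW_NUMBER()` requires SQLite 3.25 or later.

```python
import sqlite3

# In-memory database with a hypothetical events table containing
# multiple rows per user.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, event_ts TEXT, payload TEXT);
    INSERT INTO events VALUES
        (1, '2024-01-01', 'a'),
        (1, '2024-01-03', 'b'),
        (2, '2024-01-02', 'c');
""")

# Deduplicate to the latest event per user with ROW_NUMBER().
latest = conn.execute("""
    SELECT user_id, event_ts, payload
    FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY user_id ORDER BY event_ts DESC
        ) AS rn
        FROM events
    )
    WHERE rn = 1
    ORDER BY user_id
""").fetchall()
```

The same pattern is routinely used in Spark SQL and Databricks to collapse change-data-capture feeds down to current state.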
Benefits:
The position offers the flexibility of remote work.
It is a contract role aimed at experienced professionals.
The role allows for participation in innovative data projects and collaboration with a skilled team.
There is potential for professional growth and development in the evolving field of data analytics.
The company supports continuous learning and adaptation to new tools and technologies.