Remote Sr Azure Data Engineer

Please let Zealogics.com know you found this job on RemoteYeah. This helps us grow 🌱.

Description:

  • The Sr Azure Data Engineer will participate in business discussions and assist in gathering data requirements.
  • The role requires good analytical and problem-solving skills to address data challenges.
  • Proficiency in writing complex SQL queries for data extraction, transformation, and analysis is essential.
  • Knowledge of SQL functions, joins, subqueries, and performance tuning is required.
  • The candidate should be able to navigate source systems with minimal guidance, understand data relationships, and use data profiling to build a fuller picture of the data.
  • Hands-on experience with PySpark and Spark SQL is necessary.
  • The engineer must have experience in creating and managing data pipelines using Azure Data Factory.
  • An understanding of data integration, transformation, and workflow orchestration in Azure environments is important.
  • Knowledge of data engineering workflows and best practices in Databricks is required.
  • The candidate should be able to understand existing templates and patterns for development.
  • Hands-on experience with Unity Catalog and Databricks Workflows is necessary.
  • Proficiency in using Git for version control and collaboration in data projects is required.
  • The ability to work effectively in a team environment, especially in agile or collaborative settings, is essential.
  • Clear and effective communication skills are needed to articulate findings and recommendations to team members.
  • The candidate must be able to document processes, workflows, and data analysis results effectively.
  • A willingness to learn new tools, technologies, and techniques as the field of data analytics evolves is expected.
  • The candidate should be adaptable to changing project requirements and priorities.
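
As a rough illustration of the SQL skills the description calls for (joins, subqueries, and aggregation), here is a minimal, self-contained sketch using Python's built-in sqlite3. The tables and data are hypothetical, purely for illustration — the role itself targets Azure source systems and Databricks, not SQLite:

```python
import sqlite3

# Hypothetical tables and data, used only to demonstrate the kind of
# join + subquery work described above. Not part of the actual stack.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# Join customers to their orders, aggregate per customer, and keep only
# customers whose total spend exceeds the average order amount (subquery).
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    HAVING SUM(o.amount) > (SELECT AVG(amount) FROM orders)
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Acme', 200.0)]
```

The same query shape (join, group, filter on an aggregate via a subquery) carries over directly to Spark SQL in Databricks, which is where this role would actually apply it.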

Requirements:

  • The candidate must have expertise in Azure Databricks, Data Lakehouse architectures, and Azure Data Factory.
  • Expertise in optimizing data workflows and predictive modeling is required.
  • The role requires designing and implementing data pipelines using Databricks and Spark.
  • The candidate should have expertise in batch and streaming data solutions.
  • Experience in automating workflows with CI/CD tools like Jenkins and Azure DevOps is necessary.
  • Ensuring data governance with Delta Lake is a requirement.
  • Proficiency in Spark, PySpark, Delta Lake, Azure DevOps, and Python is essential.
  • Advanced SQL expertise is required.

Benefits:

  • The position offers the flexibility of remote work.
  • It is a contract role suited to experienced professionals.
  • The role allows for participation in diverse business discussions and data projects.
  • The candidate will have the opportunity to work with cutting-edge technologies in data engineering.
  • There is potential for professional growth and learning in the evolving field of data analytics.