
Remote Sr Azure Data Engineer

at Zealogics.com

Posted 11 hours ago · 5 applied

Description:

  • The Sr Azure Data Engineer will participate in business discussions and assist in gathering data requirements.
  • The role requires strong analytical and problem-solving skills to address data challenges effectively.
  • Proficiency in writing complex SQL queries for data extraction, transformation, and analysis is essential.
  • The candidate should have knowledge of SQL functions, joins, subqueries, and performance tuning.
  • The engineer must be able to navigate source systems with minimal guidance to understand data relationships and utilize data profiling for better data comprehension.
  • Hands-on experience with PySpark/Spark SQL is required.
  • The position involves creating and managing data pipelines using Azure Data Factory.
  • An understanding of data integration, transformation, and workflow orchestration in Azure environments is necessary.
  • Knowledge of data engineering workflows and best practices in Databricks is expected.
  • The candidate should be able to understand existing templates and patterns for development.
  • Hands-on experience with Unity Catalog and Databricks Workflows is required.
  • Proficiency in using Git for version control and collaboration in data projects is essential.
  • The ability to work effectively in a team environment, especially in agile or collaborative settings, is important.
  • Clear and effective communication skills are necessary to articulate findings and recommendations to team members.
  • The candidate should be able to document processes, workflows, and data analysis results effectively.
  • A willingness to learn new tools, technologies, and techniques as the field of data analytics evolves is required.
  • The candidate must be adaptable to changing project requirements and priorities.
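To illustrate the SQL skills called out above (joins, subqueries, aggregation), here is a minimal self-contained sketch using Python's built-in sqlite3 module; the tables and data are hypothetical examples, not from the posting:

```python
import sqlite3

# Hypothetical two-table schema: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 300.0), (3, 2, 200.0);
""")

# Join + subquery: customers with orders above the overall average amount.
rows = conn.execute("""
    SELECT c.name, o.amount
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    WHERE o.amount > (SELECT AVG(amount) FROM orders)
    ORDER BY o.amount DESC
""").fetchall()
print(rows)  # → [('Acme', 300.0)]
conn.close()
```

The same join/subquery patterns carry over directly to Spark SQL, where performance tuning additionally involves partitioning and broadcast-join hints.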

Requirements:

  • The candidate must have expertise in Azure Databricks, Data Lakehouse architectures, and Azure Data Factory.
  • Expertise in optimizing data workflows and predictive modeling is required.
  • The role requires designing and implementing data pipelines using Databricks and Spark.
  • The candidate should have expertise in batch and streaming data solutions.
  • Experience in automating workflows with CI/CD tools like Jenkins and Azure DevOps is necessary.
  • Ensuring data governance with Delta Lake is a key requirement.
  • Proficiency in Spark, PySpark, Delta Lake, Azure DevOps, and Python is essential.
  • Advanced SQL expertise is required for this position.
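As a rough illustration of the Azure Data Factory pipeline work described above, a minimal pipeline definition with a single Copy activity might look like the following JSON sketch; the pipeline and dataset names are placeholders, not from the posting:

```json
{
  "name": "CopySalesToLake",
  "properties": {
    "activities": [
      {
        "name": "CopyFromSqlToAdls",
        "type": "Copy",
        "inputs": [
          { "referenceName": "SourceSqlDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "LakeParquetDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "AzureSqlSource" },
          "sink": { "type": "ParquetSink" }
        }
      }
    ]
  }
}
```

In practice such a pipeline would be parameterized, triggered on a schedule or event, and deployed through CI/CD (e.g. Azure DevOps), matching the automation requirements listed above.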

Benefits:

  • The position offers opportunities for professional growth and development in the field of data engineering.
  • Employees will have access to the latest tools and technologies in data analytics.
  • The role promotes a collaborative and agile work environment.
  • There are opportunities to work on innovative projects that impact business decisions.
  • The company supports continuous learning and adaptation to new technologies and methodologies.