Remote Senior Data Engineer / Architect

at Enable Data Incorporated


Description:

  • The position is for a Senior Data Engineer / Architect with 8+ years of experience.
  • The role is remote, allowing for flexible work arrangements.
  • Responsibilities include designing, developing, and maintaining scalable and robust data solutions in the cloud using Apache Spark and Databricks.
  • The candidate will gather and analyze data requirements from business stakeholders and identify opportunities for data-driven insights.
  • Building and optimizing data pipelines for data ingestion, processing, and integration using Spark and Databricks is a key task (see the PySpark sketch after this list).
  • Ensuring data quality, integrity, and security throughout all stages of the data lifecycle is essential.
  • The role involves collaborating with cross-functional teams to design and implement data models, schemas, and storage solutions.
  • The candidate will optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features (illustrated in the tuning sketch below).
  • Providing technical guidance and expertise to junior data engineers and developers is expected.
  • Staying up to date with emerging trends and technologies in cloud computing, big data, and data engineering is important.
  • The candidate will contribute to the continuous improvement of data engineering processes, tools, and best practices.
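
For illustration, here is a minimal PySpark sketch of the kind of ingestion-and-integration pipeline described above. Every name in it (the landing path, the orders columns, the curated Delta location) is a hypothetical placeholder, not a detail of Enable Data's actual stack:

    from pyspark.sql import SparkSession, functions as F

    # Minimal illustrative sketch; paths and column names are hypothetical.
    spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

    # Ingest raw JSON events from a landing zone.
    raw = spark.read.json("/mnt/raw/orders/")

    clean = (
        raw
        .filter(F.col("order_id").isNotNull())            # basic data-quality gate
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])                     # idempotent re-ingestion
    )

    # Write as Delta (the default table format on Databricks), partitioned
    # by date so downstream queries can prune partitions.
    (clean.write
          .format("delta")
          .mode("append")
          .partitionBy("order_date")
          .save("/mnt/curated/orders/"))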
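
And a companion sketch of the tuning side: enabling Spark's Adaptive Query Execution (on by default in recent Databricks runtimes) and broadcasting a small dimension table so the large fact table is never shuffled for the join. The table locations and join key are again hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    # AQE coalesces shuffle partitions and mitigates skewed joins at runtime.
    spark.conf.set("spark.sql.adaptive.enabled", "true")
    spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

    orders = spark.read.format("delta").load("/mnt/curated/orders/")   # large fact table
    stores = spark.read.format("delta").load("/mnt/curated/stores/")   # small dimension

    # Broadcasting the small side avoids shuffling the large one.
    joined = orders.join(broadcast(stores), "store_id")

    # Inspect the physical plan to confirm a BroadcastHashJoin was chosen.
    joined.explain()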

Requirements:

  • A Bachelor’s or Master’s degree in computer science, engineering, or a related field is required.
  • The candidate must have 10+ years of experience as a Data Engineer, Software Engineer, or in a similar role, focusing on building cloud-based data solutions.
  • Strong knowledge of and experience with the Azure cloud platform, Databricks, Azure Event Hubs, Spark, Kafka, ETL pipelines, Python/PySpark, SQL, and data architecture are necessary.
  • Proficiency in Apache Spark and Databricks for large-scale data processing and analytics is required.
  • Experience in designing and implementing data processing pipelines using Spark and Databricks is essential.
  • Strong knowledge of SQL and experience with relational and NoSQL databases are required.
  • Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services is necessary (see the Airflow sketch after this list).
  • A good understanding of data modeling and schema design principles is required.
  • Experience with data governance and compliance frameworks is essential.
  • Excellent problem-solving and troubleshooting skills are necessary.
  • Strong communication and collaboration skills to work effectively in a cross-functional team are required.
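
As a concrete example of the orchestration experience requested above, here is a minimal Apache Airflow DAG (assuming Airflow 2.4+, where the "schedule" parameter replaced "schedule_interval"). The two task callables are hypothetical stand-ins for real ingestion and transformation steps:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical placeholders for real pipeline steps.
    def ingest():
        ...

    def transform():
        ...

    with DAG(
        dag_id="orders_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        # Run the transform only after ingestion succeeds.
        ingest_task >> transform_task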

Benefits:

  • The position offers the flexibility of remote work.
  • Candidates can join immediately or within a specified notice period, with options for permanent or contract roles.
  • The position offers the opportunity to work with cutting-edge technologies in cloud computing and big data.
  • The role provides a chance to contribute to the continuous improvement of data engineering processes and best practices.
  • The candidate will have the opportunity to provide mentorship and guidance to junior team members.