Description:
The Sr. Data Engineer will lead the design and implementation of robust data pipelines spanning data ingestion, processing, storage, and consumption.
This role involves engaging with internal stakeholders to understand their data requirements and translating them into scalable pipeline architectures.
The engineer will architect and optimize data solutions using AWS technologies to support business needs.
Responsibilities include developing and maintaining comprehensive data models (conceptual, logical, and physical) to support business applications and analytics.
The position requires evaluating and implementing best practices for data architecture, ensuring scalability, maintainability, and performance.
The engineer will leverage AWS services such as S3, Redshift, RDS, Glue, and Lambda to design and manage large-scale data infrastructure.
Utilizing orchestration systems like EMR, Airflow, or Dagster to automate and manage complex data workflows is also a key responsibility.
The role includes optimizing cloud-based databases for performance, availability, and reliability.
The Sr. Data Engineer will guide and mentor a team of data engineers, fostering a culture of continuous learning and improvement.
Providing technical leadership and direction in data architecture projects is essential.
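The ingestion-to-consumption pipeline stages described above can be sketched in miniature. This is an illustrative sketch only: the field names (`user_id`, `event`), the CSV source, and the SQLite table are assumptions made for the example, and a production pipeline of the kind this role owns would replace them with AWS services such as S3, Glue, and Redshift.

```python
import csv
import sqlite3

def ingest(csv_path):
    """Ingestion: read raw records from a CSV source."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

def transform(records):
    """Processing: normalize fields and drop incomplete rows."""
    cleaned = []
    for row in records:
        if not row.get("user_id"):
            continue  # skip rows missing the key field
        cleaned.append({
            "user_id": int(row["user_id"]),
            "event": row["event"].strip().lower(),
        })
    return cleaned

def store(records, db_path):
    """Storage: load cleaned records into a queryable table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user_id INTEGER, event TEXT)"
    )
    conn.executemany(
        "INSERT INTO events (user_id, event) VALUES (:user_id, :event)",
        records,
    )
    conn.commit()
    return conn
```

In a real deployment each stage would typically run as a separate task in an orchestrator such as Airflow or Dagster, so failures can be retried per stage rather than rerunning the whole pipeline.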
Requirements:
Candidates must have 4+ years of experience in data engineering, with a strong focus on AWS technologies, Databricks, and end-to-end data pipeline design.
A proven track record in designing and implementing scalable data architectures from the ground up is required.
Experience with Python, SQL, and data modeling is necessary.
Candidates should have experience working with machine learning teams and deploying ML models.
Familiarity with orchestration systems such as EMR, Airflow, or Dagster is expected.
Experience in AWS services (S3, Redshift, RDS, Glue, Lambda) and infrastructure as code (Terraform, CloudFormation) is required.
A strong understanding of data modeling, ETL processes, and database optimization is essential.
Proficiency in designing and managing large-scale distributed systems is necessary.
Strong communication skills with the ability to articulate complex technical concepts to non-technical stakeholders are required.
Excellent problem-solving skills and the ability to work in a fast-paced environment are essential.
Strong experience in team building, mentoring, and fostering a collaborative work environment is necessary.
Hands-on experience supporting the deployment and maintenance of ML models within production data pipelines is a plus.
Familiarity with graph databases such as Neo4j or Amazon Neptune, and experience with social media data and the corresponding APIs, are beneficial.
Benefits:
BENlabs offers competitive benefits and an inclusive culture that fosters personal and professional growth.
Employees will have the opportunity to learn from industry experts in a results-oriented and client-centric environment.
The company values Passion, Accountability, Teamwork, Inclusion, Balance, and Empowerment.
Joining BENlabs promises a rewarding and impactful journey in the dynamic world of AI-driven marketing and content creation.