Please let Clutch know you found this job on RemoteYeah. This helps us grow 🌱.
Description:
As a Senior Data Engineer, you will design and implement scalable, secure, and well-structured data infrastructure.
You will lead the adoption of best practices in data engineering, ensuring efficient and resilient data pipelines for analysts, data scientists, and external customers.
Your responsibilities include establishing and maintaining data catalogs, governance frameworks, and best practices for managing and transforming data.
You will utilize your expertise in Databricks, Unity Catalog, Terraform (AWS + Databricks), Docker, Python, and DBT or flypipe to drive the team's success.
You will mentor and guide engineers in creating high-performance streaming and batch pipelines and in optimizing transformations.
The role requires a hands-on leader who is willing to challenge the status quo and foster a culture of innovation.
You will assess and prioritize data needs, improve data pipelines and governance, identify and solve technical challenges, and engage with key stakeholders.
You will own the technical strategy and direction of data engineering, lead and mentor the team as it scales, adapt data systems to evolving business needs, ensure reliable and well-structured data models, and optimize performance and cost efficiency while maintaining day-to-day data operations.
Requirements:
You must have 5+ years of experience as a data engineer, preferably in a fast-paced Fintech environment.
Strong experience in building, optimizing, and maintaining data pipelines with a focus on scalability, security, and performance is required.
You should have deep knowledge of Databricks and Unity Catalog, with hands-on experience in data governance best practices.
Proficiency in Terraform and cloud infrastructure (AWS) is necessary for managing Databricks and AWS setup.
Extensive experience in Python for data engineering tasks and DBT for transformation and data modeling is essential.
Proven ability to design efficient data models and implement data dictionaries/catalogs is required.
You must have the ability to guide and mentor data engineers, ensuring high technical standards within the team.
Familiarity with Docker and CI/CD pipelines is necessary for managing data engineering workflows.
Understanding of data security and access management for both internal and external stakeholders is required.
Knowledge of ML pipeline deployment and optimization for data scientists is a plus.
Benefits:
You will enjoy remote flexibility, allowing you to work from anywhere and balance life and career seamlessly.
The company offers unforgettable off-sites twice a year to bond with colleagues in exciting destinations.
You will receive 20 days of paid time off (PTO) per year, along with national holidays, for relaxation and rejuvenation.
Stock options are included in your compensation package, giving you a stake in the company's success.
A dedicated budget for home office essentials will help you create your ideal workspace.
You will have a budget for work-related trips and co-working spaces to support your personal and professional growth.
Apply now