The Senior Data and DevOps Engineer is responsible for the performance, scalability, and reliability of Drest’s data infrastructure and pipelines.
This role ensures the efficient ingestion, transformation, and delivery of high-volume data while maintaining robust, cost-effective cloud operations.
As a hands-on engineer, the Senior Data and DevOps Engineer is actively involved in building pipelines, defining infrastructure as code, and supporting critical systems.
Reporting to the DevOps Lead Engineer, they work closely with the data science, backend, and platform teams to enable high-quality analytics and drive technical excellence across data and infrastructure.
Key responsibilities include designing, building, and maintaining robust data pipelines capable of handling tens of millions of events per day in both batch and real-time contexts (an illustrative sketch follows these responsibilities).
The engineer will manage and optimize AWS cloud infrastructure, ensuring high availability, performance, cost-efficiency, and security.
They will develop infrastructure as code using Terraform, supporting scalable and maintainable deployments.
The role involves building and monitoring data warehouse solutions (e.g., Redshift), ensuring data is accessible, clean, and well-modeled for analytics and product teams.
The engineer will drive system performance and operational excellence by improving observability, uptime, and deployment processes across data and platform systems.
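To give a concrete flavor of the pipeline work described above, the following is a minimal, illustrative Python sketch of a step that drains one batch of events from a Kinesis shard and lands it in S3. It is a sketch under assumptions, not Drest's actual pipeline: the stream name, bucket, shard ID, and key layout are hypothetical placeholders.

    import json
    import time

    import boto3  # AWS SDK for Python

    # Hypothetical resource names -- placeholders, not real Drest resources.
    STREAM_NAME = "events-stream"
    BUCKET = "data-lake-raw"

    kinesis = boto3.client("kinesis")
    s3 = boto3.client("s3")

    def drain_shard_once(shard_id="shardId-000000000000", max_records=500):
        """Read one batch of events from a Kinesis shard and land it in S3."""
        iterator = kinesis.get_shard_iterator(
            StreamName=STREAM_NAME,
            ShardId=shard_id,
            ShardIteratorType="TRIM_HORIZON",
        )["ShardIterator"]

        resp = kinesis.get_records(ShardIterator=iterator, Limit=max_records)
        events = [json.loads(record["Data"]) for record in resp["Records"]]

        if events:
            # Writing one object per batch keeps downstream bulk loads cheap.
            key = f"raw/events-{int(time.time())}.json"
            s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(events).encode())
        return len(events)

A production version would run continuously with checkpointing (or lean on a managed service such as Kinesis Data Firehose), but the sketch captures the shape of the batch and real-time work involved.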
Requirements:
Candidates must have 5+ years of experience in data engineering, including multiple end-to-end pipeline builds.
Solid experience with AWS cloud services, including S3, EC2, Lambda, RDS, Redshift, Glue, and Kinesis, is required, as is experience with MongoDB and PostgreSQL databases.
Proven expertise in Terraform and infrastructure-as-code practices is essential.
Strong SQL and data modeling skills, along with experience with both SQL and NoSQL data stores, are necessary.
A strong understanding of dbt (or equivalent) and Tableau is required.
Hands-on experience with Python for data processing and automation tasks is needed (a brief warehouse-loading sketch follows this list).
A background working in environments with high-throughput data (millions of events per hour) is preferred.
Candidates should have an understanding of best practices around security, scalability, and maintainability in cloud-native systems.
Comfort working independently in a fast-paced, highly collaborative environment is essential.
Strong communication skills are necessary, including the ability to explain complex systems clearly to both technical and non-technical stakeholders.
Familiarity with Docker, Kubernetes, or other container orchestration platforms is required.
A willingness to be available outside standard working hours when needed to support critical issues or key deliveries is expected.
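As a small illustration of the Python automation requirement above, here is a hedged sketch of a warehouse-loading task that issues a Redshift COPY to pull batched files from S3 into a staging table. The connection details, table name, S3 prefix, and IAM role ARN are all invented placeholders.

    import psycopg2  # PostgreSQL driver; Redshift speaks the same wire protocol

    # Placeholder connection details -- in practice, fetch these from a
    # secrets manager rather than hard-coding them.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="loader",
        password="REPLACE_ME",
    )

    # Bulk-load the batched JSON files landed by the ingestion step.
    COPY_SQL = """
        COPY staging.events
        FROM 's3://data-lake-raw/raw/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
        FORMAT AS JSON 'auto';
    """

    # The context managers commit on success and roll back on error.
    with conn, conn.cursor() as cur:
        cur.execute(COPY_SQL)
    conn.close()

Using COPY rather than row-by-row inserts is the standard Redshift bulk-loading pattern; a tool such as dbt would then transform the staged data into well-modeled analytics tables.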
Benefits:
The position offers the opportunity to work in a dynamic and innovative environment focused on data and infrastructure excellence.
Employees will have the chance to collaborate with cross-functional teams, enhancing their skills and knowledge in data engineering and DevOps practices.
The role provides a platform for professional growth and development in a rapidly evolving field.
Employees can expect a competitive salary and benefits package, including health insurance and retirement plans.
The company promotes a culture of work-life balance, offering flexible working hours and remote work options.