We are seeking an experienced data engineering professional to lead the design, development, and optimization of modern data platforms that empower data-driven decision-making.
In this role, you will partner with clients to understand their unique challenges, create robust architectures, and implement secure, scalable solutions.
You will play a key role in building high-quality data pipelines, integrating diverse data sources, and delivering impactful visualizations.
This position offers the opportunity to work in a collaborative, client-facing environment, contribute to pre-sales activities, and stay at the forefront of emerging cloud and data technologies.
You will have a direct hand in shaping data strategies while driving innovation and best practices in analytics.
Responsibilities:
Lead discovery sessions with clients to understand requirements and deliver state-of-the-art data architectures.
Design, build, and optimize data pipelines to process structured and unstructured data in both batch and streaming modes.
Implement secure, scalable data storage solutions such as warehouses and databases.
Develop validation and testing processes to ensure data accuracy and reliability.
Automate data collection, processing, and reporting workflows to improve efficiency.
Produce high-quality documentation, including requirements, solution designs, and technical specifications.
Support pre-sales activities by contributing to solution architecture and proposals.
Create reusable technical assets and participate in knowledge sharing across the team.
Maintain up-to-date certifications and technical expertise in leading cloud technologies.
Collaborate with marketing to produce content promoting the data engineering practice.
Requirements:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
5–9 years of experience in data engineering, database architecture, or data management.
Strong understanding of relational databases, SQL, and data integration techniques.
Proven experience working with multiple data sources (structured and unstructured) in both batch and streaming environments.
Expertise with cloud platforms such as AWS, GCP, or Azure.
Hands-on experience with ETL tools or cloud equivalents (Azure Data Factory, AWS Glue, dbt, Matillion, etc.).
Proficiency with data warehousing solutions such as Snowflake, Redshift, BigQuery, or Azure Synapse.
Experience with visualization tools such as Power BI, Tableau, Looker, or QuickSight.
Familiarity with Docker, Kubernetes, and at least one programming language (Python, Java, or Scala).
Excellent problem-solving, communication, and collaboration skills.
Bonus: certifications in cloud/data technologies, experience with AI/ML integration, or familiarity with legacy systems such as Hadoop.
Benefits:
Competitive salary and performance-based incentives.
Fully remote-first work environment with flexible schedules.
Comprehensive healthcare, dental, and vision coverage.
Retirement plans with company contributions.
Generous paid time off and leave allowances.
Professional growth opportunities with ongoing training and certifications.
An inclusive, collaborative culture that values diversity and innovation.