As a Principal Data Engineer, you will lead the technical vision, architecture, and delivery of the next-generation GRC data platform.
You will work at the intersection of AI and cloud-native architecture to build custom data solutions that address critical user problems and help scale intelligent, secure, and resilient data systems across multiple global regions.
Your role will enable AI-driven insights, automate compliance workflows, and ensure the platform remains robust and audit-ready for regulated industries.
You will identify and analyze the client’s business requirements and conduct gap analyses through document analysis, interviews, workshops, and workflow analysis.
You will guide and support the data engineering team in executing technology upgrades and implementing optimizations.
You will define data architecture for scalability, performance, and cost-efficiency.
You will evaluate and implement new technologies that simplify the end-to-end data lifecycle for reporting and AI use cases.
You will document information gathered during data requirements workshops and prepare solution architecture documents.
You will partner with internal stakeholders, including heads of product, engineering, and customer success, to identify and streamline priorities.
You will act as a technical authority providing guidance on data modeling, schema design, metadata management, and governance.
You will mentor data engineers and help grow the technical capabilities of the data team.
You will promote best practices in ELT processes and the accompanying CI/CD and DevOps for data.
You will develop and maintain SQL queries using tools such as DBT, Azure Synapse, and SQL Server, in line with business and customer requirements.
You will develop and maintain reports in various business intelligence (BI) platforms, including Yellowfin.
You will use project management methodologies to develop project plans and estimate required effort and resources.
You will work with the broader engineering and product team on troubleshooting, investigating, and remediating reporting bugs and issues.
You will analyze data and coordinate with stakeholders to identify inefficiencies in business processes.
You will implement enterprise data warehouse infrastructure and manage data pipelines feeding into analytics and advanced AI-powered analytics subsystems.
Requirements:
You must have at least 8 years of relevant work experience.
You should have prior experience in a principal, staff, or lead engineer role within high-growth SaaS or enterprise tech environments.
A strong understanding of data security, compliance, and multi-region cloud deployments is required.
Proven experience in designing and developing ETL pipelines using cloud-based technologies is necessary.
You should have experience integrating data from multiple sources, including SQL databases, Excel spreadsheets, and APIs.
Excellent communication and collaboration skills are essential, with the ability to effectively interact with stakeholders at all levels of the organization.
You must be detail-oriented with strong analytical and problem-solving skills.
Self-motivation and the ability to work independently as well as part of a team are required.
You should be able to relate to people, understand their needs, and align those needs with proposed solutions.
Benefits:
The position offers the opportunity to lead the development of innovative data solutions in a remote work environment.
You will have the chance to work with cutting-edge technologies in AI and cloud-native architecture.
The role provides a platform for professional growth and mentorship opportunities within the data engineering team.
You will collaborate with various internal stakeholders, enhancing your networking and collaboration skills.
The position allows for the development of skills in data architecture, compliance, and advanced analytics.
You will be part of a dynamic team focused on delivering impactful data solutions for regulated industries.