This job post is closed and the position is probably filled; it was closed automatically after the apply link was detected as broken. Please do not apply.
Description:
We are seeking a Senior Data Engineer to join our dynamic team in the ad tech industry.
The role involves transforming and optimizing high-velocity ad log streams and creating a unified data lake for seamless downstream data consumption.
The company processes billions of ad events daily, providing real-time and batch data for comprehensive reporting and troubleshooting solutions.
Responsibilities include designing, implementing, and maintaining scalable data pipelines to ingest and transform large volumes of data from various sources.
The Senior Data Engineer will optimize streaming and batch data processing and storage solutions for performance and scalability.
Key tasks include monitoring system performance, troubleshooting issues, and implementing solutions to ensure high availability and reliability.
Collaboration with product managers and other stakeholders is essential to understand data requirements and deliver solutions that meet business needs.
The role requires effective communication of technical concepts and solutions to non-technical stakeholders.
Staying current with emerging technologies and best practices in data engineering is important for identifying opportunities to improve data processes, tools, and infrastructure.
Documentation of data pipelines, processes, and systems is necessary to ensure clarity and maintainability.
The Senior Data Engineer will share knowledge and best practices with team members to foster a culture of learning and collaboration.
Mentoring junior Data Engineers is also part of the responsibilities.
Other duties and responsibilities may be assigned as needed.
Requirements:
Candidates must have 5+ years of hands-on experience in software development and/or big data.
At least 2 years of hands-on experience in building and operating large-scale data processing systems is required.
Solid programming skills are necessary, with fluency in Scala.
Familiarity with big data processing frameworks/tools such as Spark, Spark Streaming, Databricks, and Flink is essential.
Experience with data lakehouse table formats, specifically Delta Lake and Apache Iceberg, is required.
Candidates should possess solid SQL skills, with experience writing complex queries and stored procedures and optimizing query performance.
Experience with data warehousing, data modeling techniques, ETL processes, and relational databases (MySQL, PostgreSQL) is necessary.
Familiarity with AWS services (EC2, S3, etc.) and proficiency in managing cloud-based data platforms such as Snowflake are required.
An upper-intermediate level of English is necessary for effective communication.
Benefits:
Employees have the flexibility to work remotely.
The opportunity to work with cutting-edge technology in a fast-paced environment is provided.
Joining a stellar team that values collaboration and knowledge sharing is a key benefit.
The role offers the chance to make a significant impact in the ad tech industry.
Opportunities for professional growth and mentorship are available within the team.