Please let Voodoo know you found this job on RemoteYeah. This helps us grow 🌱.
Description:
Voodoo is a tech company founded in 2013 that creates mobile games and apps, aiming to entertain the world.
The company has 800 employees, 7 billion downloads, and over 200 million active users, making it the #3 mobile publisher worldwide in terms of downloads after Google and Meta.
The Engineering & Data team builds innovative tech products and platforms to support the growth of gaming and consumer apps.
The Data team includes the Ad-Network Team, an autonomous squad of around 30 people composed of software engineers, infrastructure engineers, data engineers, mobile engineers, and data scientists.
The team focuses on monetizing inventory directly with advertising partners using advanced technological solutions for real-time bidding.
The Senior Data Engineer will design, implement, and optimize real-time data pipelines handling billions of events per day with strict SLAs.
Responsibilities include architecting data flows, building scalable event ingestion systems, operating data infrastructure on Kubernetes, and collaborating with backend teams.
The role also involves ensuring high-throughput processing, managing event schemas, implementing observability, and mentoring other engineers.
This position can be done fully remotely from any EMEA country.
Requirements:
Candidates must have extensive experience in data or backend engineering, including at least 2 years building real-time data pipelines.
Proficiency with stream processing frameworks like Flink, Spark Structured Streaming, Beam, or similar is required.
Strong programming experience in Java, Scala, or Python, focusing on distributed systems, is essential.
A deep understanding of event streaming and messaging platforms such as GCP Pub/Sub, AWS Kinesis, Apache Pulsar, or Kafka is necessary, including performance tuning and schema management.
Solid experience operating data services in Kubernetes, including Helm and resource tuning, is required.
Candidates should have experience with Protobuf/Avro and best practices around schema evolution in streaming environments.
Familiarity with CI/CD workflows and infrastructure-as-code, using tools such as Terraform, ArgoCD, or CircleCI, is expected.
Strong debugging skills and a bias for building reliable, self-healing systems are essential.
Nice to have: Knowledge of stream-native analytics platforms, understanding of frequency capping and fraud detection, exposure to service mesh and auto-scaling, and contributions to open-source projects.
Benefits:
The position offers best-in-class compensation.
Additional benefits will be provided according to the country of residence.
Apply now