Remote Senior Data Engineer (Europe, Remote, m/f/d)
Description:
We are looking for a world-class Senior Data Engineer to help shape and scale our data platform, powering insights and decision-making across our suite of SaaS digital advertising products.
You will play a key role in designing and building robust, scalable data pipelines and services that fuel analytics, machine learning, and customer-facing features.
You will collaborate closely with Product Managers and Engineers to ensure our data systems are performant, reliable, and aligned with business goals.
You will contribute hands-on to critical projects, lead technical discussions, conduct code reviews, and guide others in best practices for data engineering.
Strong expertise in building and managing a modern data stack is required, particularly familiarity with Lakehouse architecture and cloud-based data platforms.
Experience with the AWS cloud platform and technologies such as dbt, Apache Spark, Apache Kafka/Amazon Kinesis, and Apache Airflow/Dagster is highly valuable; a sketch of the kind of batch job this stack runs follows this description.
You should be passionate about clean, efficient data architecture and have a proven track record of mentoring others and improving engineering culture.
An accomplished Senior Data Engineer at Factor Eleven consistently raises the bar by delivering high-quality solutions, strengthening team collaboration, and leaving lasting improvements in system design, data reliability, and team growth.
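To ground the stack named above, here is a minimal, hypothetical sketch of the kind of batch job such a platform might run: a PySpark aggregation rolling raw ad events up into daily campaign metrics. The bucket paths, column names, and table layout are illustrative assumptions, not Factor Eleven's actual schema.

# Hypothetical example only: paths, columns, and layout are assumptions,
# not Factor Eleven's actual platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_ad_metrics").getOrCreate()

# Read raw ad events from the lake (assumed Parquet layout on S3).
events = spark.read.parquet("s3://example-lake/raw/ad_events/")

# Roll events up into one row per campaign per day.
daily_metrics = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "campaign_id")
    .agg(
        F.count("*").alias("impressions"),
        F.sum(F.col("click").cast("int")).alias("clicks"),
    )
)

# Write back to the curated zone, partitioned for downstream analytics.
(daily_metrics.write
    .mode("overwrite")
    .partitionBy("day")
    .parquet("s3://example-lake/curated/daily_campaign_metrics/"))

Partitioning the curated output by day is one common choice for this kind of table, since most downstream analytics queries filter on a date range.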
Requirements:
You must have 5+ years of professional experience in data engineering, with a deep focus on building and scaling data platforms and infrastructure.
An excellent understanding of different data modeling techniques and their trade-offs is required.
You should have experience building and maintaining a modern data stack from scratch.
Advanced proficiency in Python and SQL is necessary, along with experience using tools such as Apache Spark, Apache Kafka/Amazon Kinesis, Apache Airflow/Dagster, and dbt (a minimal streaming sketch follows this list).
You must be able to build resilient, well-tested data systems with a focus on performance, security, and maintainability.
Excellent communication skills are essential, as you will need to explain complex technical topics to both technical and non-technical stakeholders.
A solid foundation in computer science is required, with expertise in data structures, distributed systems, data modeling, and big data processing frameworks.
You should have a proven track record of influencing strategic decisions through architectural leadership and cross-functional collaboration.
Experience organizing technical workshops or sessions and nurturing a growth mindset and a culture of engineering excellence is important.
A deep understanding of scalable data system design, including data lakes, data warehouses (e.g., Snowflake, BigQuery), and real-time streaming systems (e.g., Kafka, Kinesis) is necessary.
You must be a strong advocate of automated testing and CI/CD for data pipelines and infrastructure (an example test follows this list).
You must be highly collaborative, with exceptional problem-solving skills and the ability to mentor others in a supportive and respectful way.
Excellent organizational and project management skills are necessary to juggle priorities and keep projects on track.
A passion for mentoring and knowledge sharing is essential, as you will thrive in helping others grow and succeed.
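On the streaming tools named in the requirements: the following is a minimal sketch using the kafka-python client of how a consumer might pull ad events off a topic. The topic name, brokers, and payload shape are assumptions for illustration, not the job's actual setup.

# Hypothetical sketch using the kafka-python client; topic name,
# brokers, and payload fields are assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ad_events",                                  # assumed topic name
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,
)

for message in consumer:
    event = message.value
    # In a real pipeline this step would validate the event and land it
    # in the lake; here it is just printed.
    print(event.get("campaign_id"), event.get("event_type"))

Disabling auto-commit, as above, is one way to keep offset commits tied to successful processing rather than to message receipt.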
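And on the automated-testing requirement: one hedged illustration of what unit tests for pipeline logic can look like when transformations are kept as pure Python functions. The function and test data below are invented for this example.

# Hypothetical example: dedupe_events and its test data are invented to
# illustrate unit-testing pipeline logic; run with pytest.
def dedupe_events(events):
    """Keep the first occurrence of each event_id, preserving order."""
    seen = set()
    result = []
    for event in events:
        if event["event_id"] not in seen:
            seen.add(event["event_id"])
            result.append(event)
    return result


def test_dedupe_events_drops_repeated_ids():
    events = [
        {"event_id": "a", "clicks": 1},
        {"event_id": "a", "clicks": 1},  # duplicate delivery
        {"event_id": "b", "clicks": 0},
    ]
    assert dedupe_events(events) == [
        {"event_id": "a", "clicks": 1},
        {"event_id": "b", "clicks": 0},
    ]

Keeping transformations pure in this way lets them be tested without spinning up Spark, Kafka, or a warehouse, which is what makes CI/CD for pipelines practical.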