Please let Jobgether know you found this job on RemoteYeah. This helps us grow 🌱.
Description:
Jobgether is a pioneering HR Tech startup operating entirely remotely, focused on empowering individuals to discover job opportunities that align with their lifestyles.
The company operates the largest job search engine designed exclusively for remote workers, with flexibility as a cornerstone of the future of work.
As a Senior Data Engineer, you will join the dynamic Content and Data team and play a pivotal role in developing and scaling the company's AI-driven matching algorithm.
You will be responsible for designing, building, and maintaining scalable data pipelines and ETL processes to handle large volumes of structured and unstructured data.
Your role will involve developing and optimizing data scraping and extraction solutions to gather job data efficiently while ensuring data quality and scalability.
You will collaborate with data scientists to implement and optimize AI-driven matching algorithms for remote job opportunities.
Ensuring data integrity, accuracy, and reliability through robust validation and monitoring mechanisms will be a key responsibility.
You will analyze and optimize system performance, addressing bottlenecks and scaling challenges within the data infrastructure.
Working with cross-functional teams, you will deploy machine learning models into production environments efficiently and reliably.
You will stay current with emerging technologies and best practices in data engineering and AI, and recommend and implement new tools and methodologies.
You will partner with Product, Engineering, and Operations teams to understand data requirements and translate them into technical solutions.
You will develop and maintain comprehensive documentation for data pipelines, system architecture, and processes.
Requirements:
A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field is required.
A minimum of 5 years of experience in data engineering, with a focus on building and managing data pipelines and infrastructure, is mandatory.
You must have at least 5 years of programming experience in Python.
Hands-on experience with big data frameworks and tools such as Hadoop, Spark, or Kafka is essential.
Proficiency with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases, particularly MongoDB, is required.
Experience with the AWS cloud platform is necessary.
A strong understanding of data modeling, schema design, and data warehousing concepts is expected.
Excellent analytical and troubleshooting abilities are required for problem-solving.
Fluency in both English and Spanish is mandatory.
Preferred qualifications include experience deploying machine learning models into production environments and familiarity with CI/CD pipelines, containerization (Docker), and orchestration tools (Kubernetes).
Knowledge of data privacy regulations and security best practices is a plus.
Excellent verbal and written communication skills are necessary to explain complex technical concepts to non-technical stakeholders.
Benefits:
You will enjoy the freedom to work remotely from anywhere in the world.
You will be part of a forward-thinking, innovative team that is shaping the future of remote work.
You will have opportunities for professional growth, working with cutting-edge technologies on challenging projects that drive real impact.
You will collaborate with a diverse team of passionate professionals from around the globe.
Apply now