Please let Rackspace know you found this job on RemoteYeah.
This helps us grow 🌱.
Description:
Rackspace is looking for a Senior Big Data Engineer with expertise in developing batch processing systems within the Apache Hadoop ecosystem.
The role involves working with technologies such as MapReduce, Oozie, Hive, Pig, HBase, and Storm, as well as Java and Machine Learning pipelines.
The position is remote and requires strong communication skills and the ability to solve complex problems independently.
Responsibilities include developing scalable code for batch processing systems, managing batch pipelines for Machine Learning workloads, and using GCP for big data processing.
The role also involves implementing automation and DevOps practices such as CI/CD and IaC.
Requirements:
Proficiency in Apache Hadoop ecosystem components such as MapReduce, Oozie, Hive, Pig, HBase, and Storm.
Strong programming skills in Java and Python, plus hands-on experience with Spark.
Knowledge of public cloud services, specifically GCP.
Experience with infrastructure and applied DevOps principles, including CI/CD and IaC tools such as Terraform.
Ability to tackle complex challenges, think critically, and propose innovative solutions.
Effective communication skills for remote collaboration and understanding technical requirements.
Proven experience in engineering batch processing systems at scale and working with public cloud platforms.
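For candidates less familiar with the MapReduce model named in the requirements above, here is a minimal pure-Python sketch of its map/shuffle/reduce phases using word counting as the example. This is an illustration of the programming model only, not Hadoop API code; all function names here are made up for the sketch.

```python
from collections import defaultdict

def map_phase(records):
    # Emit (word, 1) pairs, as a word-count mapper would.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, mimicking the framework's shuffle/sort step.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data batch", "batch processing at scale", "big batch jobs"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)
```

In real Hadoop, each phase runs distributed across a cluster and the framework performs the shuffle; here all three run locally in one process to show the data flow.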
Benefits:
Opportunity to work remotely.
Competitive starting pay ranges based on location.
Potential for variable compensation based on performance.
Access to benefits offered by the company.
Commitment to equal employment opportunity, without discrimination.
Apply now