Description:
We are seeking a highly skilled and experienced Senior Big Data Engineer to join our dynamic team.
The ideal candidate will have a strong background in Java, experience developing and scaling both stream and batch processing systems, and a solid understanding of public cloud technologies, especially GCP.
This role involves working in a remote environment, requiring excellent communication skills and the ability to solve complex problems independently and creatively.
Responsibilities include designing and developing scalable data processing solutions, leveraging advanced Java techniques to ensure performance, reliability, and maintainability across streaming and batch workflows.
The engineer will build reusable and reliable code for stream and batch processing systems at scale.
The role involves working with technologies such as Pub/Sub, Kafka, Dataflow, Flink, Hadoop, Pig, Oozie, Hive, and Spark.
The engineer will implement automation/DevOps best practices for CI/CD, IaC, containerization, and related tooling.
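For a sense of the stream/batch programming model behind Dataflow, here is a minimal batch word-count sketch using the Apache Beam Java SDK. This is illustrative only; the bucket paths and class name are hypothetical placeholders, not part of this role's actual codebase.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountPipeline {
    public static void main(String[] args) {
        // Standard Beam options; on GCP these would select the DataflowRunner.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);

        p.apply("ReadLines", TextIO.read().from("gs://example-bucket/input/*.txt")) // hypothetical bucket
         .apply("SplitWords", FlatMapElements.into(TypeDescriptors.strings())
                 .via((String line) -> Arrays.asList(line.split("\\W+"))))
         .apply("DropEmpty", Filter.by((String word) -> !word.isEmpty()))
         .apply("CountWords", Count.perElement())
         .apply("Format", MapElements.into(TypeDescriptors.strings())
                 .via((KV<String, Long> kv) -> kv.getKey() + "," + kv.getValue()))
         .apply("WriteCounts", TextIO.write().to("gs://example-bucket/output/counts"));

        p.run().waitUntilFinish();
    }
}
```

The same pipeline code can run locally for testing or at scale on Dataflow simply by switching the runner, which is the portability the Beam model is designed around.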
Requirements:
Candidates must be driven and self-motivated, able to clarify problem statements thoroughly and develop well-structured plans to solve challenges effectively.
Proficiency in Java with solid experience in asynchronous programming paradigms is required, including a strong understanding of API futures and the ability to implement asynchronous APIs for scalable solutions.
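As a rough illustration of the asynchronous, future-based style this requirement describes, the sketch below exposes a `CompletableFuture`-based API in plain Java; the method names and the backend call are invented for the example.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncLookup {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    // Expose the result as a future so callers can compose work without blocking.
    static CompletableFuture<String> fetchProfile(String userId) {
        return CompletableFuture.supplyAsync(() -> slowBackendCall(userId), POOL);
    }

    // Hypothetical stand-in for a remote service call.
    static String slowBackendCall(String userId) {
        return "profile:" + userId;
    }

    public static void main(String[] args) {
        fetchProfile("42")
            .thenApply(String::toUpperCase)     // transform when the value arrives
            .exceptionally(ex -> "fallback")    // handle failure without blocking
            .thenAccept(System.out::println)
            .join();                            // block only at the program edge
        POOL.shutdown();
    }
}
```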
Hands-on experience with reactive programming techniques, focusing on non-blocking data flow and real-time data handling, is necessary.
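One of several reactive toolkits a candidate might use is the JDK's built-in `java.util.concurrent.Flow` API; the sketch below shows a non-blocking producer/consumer with explicit backpressure, with the values and class name purely illustrative.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // backpressure: pull one item at a time
                }
                @Override public void onNext(Integer item) {
                    System.out.println("got " + item);
                    subscription.request(1); // request the next item only when ready
                }
                @Override public void onError(Throwable t) { t.printStackTrace(); }
                @Override public void onComplete() { System.out.println("done"); }
            });

            for (int i = 0; i < 5; i++) publisher.submit(i); // asynchronous handoff to the subscriber
        } // closing the publisher triggers onComplete
        Thread.sleep(200); // give the async consumer time to drain (demo only)
    }
}
```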
The ability to design, debug, and optimize concurrent processes within distributed systems is essential.
A proactive problem-solving approach is required, with the ability to analyze problem statements, anticipate corner cases, and devise a comprehensive solution plan prior to coding.
Expertise in public cloud services, particularly in GCP, is mandatory.
Candidates should have experience in building reusable and reliable code for stream and batch processing systems at scale.
Experience with GCP managed services and an understanding of cloud-based messaging/stream processing systems (e.g., Dataproc, Cloud Composer, GCS, Dataflow) are required.
Knowledge of infrastructure and applied DevOps principles, including CI/CD and IaC (e.g., Terraform), is necessary.
Familiarity with containerization technologies like Docker and Kubernetes to enhance scalability and efficiency is required.
Candidates must have strong programming abilities in Java and Python.
Proven experience in engineering stream/batch processing systems at scale is essential.
A technical degree in Computer Science, Software Engineering, or a related field is required.
Benefits:
The anticipated starting pay range for Colorado is $116,100 - $170,280.
The anticipated starting pay range for Hawaii and New York (not including NYC) is $123,600 - $181,280.
The anticipated starting pay range for California, New York City, and Washington is $135,300 - $198,440.
The role may include variable compensation in the form of bonuses, commissions, or other discretionary payments based on company and/or individual performance.
Actual compensation is influenced by a wide array of factors including skill set, level of experience, licenses and certifications, and specific work location.
Rackspace Technology is committed to equal employment opportunity without regard to legally protected characteristics.
The company is dedicated to creating a diverse and inclusive work environment, welcoming applicants from all backgrounds.