Remote Staff Hadoop Administrator - Cloudera / BigData

This job is closed

This job post is closed and the position is probably filled. Please do not apply. It was automatically closed by a robot after the apply link was detected as broken.

Description:

  • The Staff DevOps Engineer-Hadoop Admin will be part of the Big Data Federal Team, providing 24x7 support for the Private Cloud infrastructure.
  • The role involves ensuring that ServiceNow exceeds the availability and performance SLAs of customer instances powered by the ServiceNow Platform, across the ServiceNow cloud and Azure cloud.
  • Responsibilities include deploying, monitoring, maintaining, and supporting Big Data infrastructure and applications on ServiceNow Cloud and Azure environments.
  • The position requires architecting and driving end-to-end Big Data deployment automation using tools such as Ansible, Puppet, Terraform, Jenkins, Docker, and Kubernetes (a minimal automation sketch follows this list).
  • The engineer will automate Continuous Integration / Continuous Deployment (CI/CD) data pipelines for applications.
  • Performance tuning and troubleshooting of various Hadoop components and data analytics tools will be required.
  • The role includes providing production support to resolve critical Big Data pipeline and application issues, collaborating with Site Reliability Engineers, Customer Support, Developers, QA, and System engineering teams.
  • The engineer will also be responsible for enforcing data governance policies in Commercial and Regulated Big Data environments.
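As a rough illustration of the deployment-automation duties above, the following is a minimal, hypothetical Python sketch, not the team's actual tooling: it simply drives Terraform and Ansible from a single script. The file names main.tf, inventory.ini, and site.yml are assumptions.

    #!/usr/bin/env python3
    """Hypothetical deployment-automation driver (illustrative only).

    Assumes Terraform and Ansible are installed, and that main.tf,
    inventory.ini, and site.yml (all placeholder names) describe the
    Hadoop cluster hosts and roles.
    """
    import subprocess
    import sys

    def run(cmd):
        """Run one shell command, aborting the whole deployment on failure."""
        print("+ " + " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(1)

    def main():
        # Provision or update the cluster hosts from Terraform definitions.
        run(["terraform", "init", "-input=false"])
        run(["terraform", "apply", "-auto-approve", "-input=false"])
        # Configure Hadoop/Cloudera roles on the provisioned hosts.
        run(["ansible-playbook", "-i", "inventory.ini", "site.yml"])

    if __name__ == "__main__":
        main()

In practice such a driver would typically run as a Jenkins pipeline stage rather than by hand, but the sequence (provision, then configure) is the same.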

Requirements:

  • Candidates must have 6+ years of overall experience, with at least 4 years of DevOps experience in building and administering Hadoop clusters.
  • A deep understanding of the Hadoop/Big Data ecosystem is required, including experience querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark Streaming (a query sketch follows this list).
  • Experience securing the Hadoop stack with Sentry, Ranger, LDAP, and a Kerberos KDC is necessary (a Kerberos sketch also follows this list).
  • Candidates should have experience supporting CI/CD pipelines on Cloudera in native cloud and Azure/AWS environments.
  • Demonstrated expert-level experience in delivering end-to-end deployment automation using Puppet, Ansible, Terraform, Jenkins, Docker, Kubernetes, or similar technologies is required.
  • Good knowledge of programming languages such as Perl, Python, Bash, Groovy, and Java is essential.
  • In-depth knowledge of Linux internals (CentOS 7.x) and shell scripting is required.
  • The ability to learn quickly in a fast-paced, dynamic team environment is necessary.
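To make the Hive/Spark querying requirement concrete, here is a minimal PySpark sketch; the table name analytics.web_events and the HDFS path are hypothetical. It runs a Hive SQL aggregate and reads Parquet data directly from HDFS.

    from pyspark.sql import SparkSession

    # Hive-enabled Spark session; assumes the cluster's Hive metastore is reachable.
    spark = (
        SparkSession.builder
        .appName("hdfs-hive-query-sketch")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Aggregate query against an existing Hive table (table name is a placeholder).
    spark.sql("""
        SELECT event_date, COUNT(*) AS events
        FROM analytics.web_events
        GROUP BY event_date
        ORDER BY event_date
    """).show(10)

    # Read raw Parquet files directly from HDFS (path is a placeholder).
    raw = spark.read.parquet("hdfs:///data/raw/web_events/")
    raw.groupBy("status_code").count().show()

    spark.stop()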
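Likewise, the Kerberos requirement usually comes down to obtaining a ticket from a keytab before issuing Hadoop commands. The sketch below is a hypothetical example; the principal and keytab path are placeholders.

    import subprocess

    # Placeholder principal and keytab path; real values come from the cluster's KDC.
    KEYTAB = "/etc/security/keytabs/svc_bigdata.keytab"
    PRINCIPAL = "svc_bigdata@EXAMPLE.COM"

    # Obtain a Kerberos ticket from the keytab before touching kerberized HDFS.
    subprocess.run(["kinit", "-kt", KEYTAB, PRINCIPAL], check=True)

    # With a valid ticket, Hadoop CLI commands against secured HDFS succeed.
    subprocess.run(["hdfs", "dfs", "-ls", "/data"], check=True)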

Benefits:

  • The position offers a base pay range of $152,000 - $266,000, plus equity (when applicable), variable/incentive compensation, and benefits.
  • Health plans are included, along with flexible spending accounts and a 401(k) Plan with company match.
  • Employees can participate in an Employee Stock Purchase Plan (ESPP) and matching donations.
  • A flexible time away plan and family leave programs are available, subject to eligibility requirements.
  • Compensation is based on geographic location and may change if the work location changes.