Remote Big Data Engineer

This job is closed

This job post is closed and the position is probably filled. Please do not apply. It was automatically closed by a robot after the apply link was detected as broken.

Description:

We are seeking an experienced and innovative Big Data Engineer to join our data analytics team. In this role, you will design, implement, and maintain our big data infrastructure and processing systems. Key responsibilities include:

  • Designing and developing scalable big data solutions and data pipelines.
  • Implementing and managing distributed computing systems using technologies such as Hadoop, Spark, and Kafka.
  • Creating and maintaining ETL (Extract, Transform, Load) processes for large datasets.
  • Optimizing data storage and retrieval systems for performance and scalability.
  • Collaborating with data scientists and analysts to support their data needs.
  • Ensuring data security, privacy, and compliance with relevant regulations.
  • Developing and implementing data retention policies.
  • Conducting performance tuning and optimization of big data systems.
  • Designing data architecture to support the organization's analytical needs.
  • Integrating various data sources and APIs into our data ecosystem.
  • Implementing and managing cloud-based big data solutions (e.g., AWS, Azure, Google Cloud).

Requirements:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • 3+ years of experience in big data engineering or a similar role.
  • Strong proficiency in programming languages such as Python, Java, or Scala.
  • Extensive experience with big data technologies (e.g., Hadoop, Spark, Hive, HBase).
  • Solid understanding of distributed computing principles.
  • Experience with data warehousing and ETL processes.
  • Proficiency in SQL and NoSQL databases.
  • Familiarity with cloud platforms (AWS, Azure, or Google Cloud).
  • Knowledge of data modeling and architecture design.
  • Experience with stream processing technologies (e.g., Kafka, Flink).
  • Strong problem-solving and analytical skills.
  • Excellent communication and teamwork abilities.

Benefits:

  • Competitive salary commensurate with experience.
  • Health, dental, and vision insurance.
  • 401(k) retirement plan with company match.
  • Flexible work arrangements.
  • Professional development opportunities.
  • Exciting projects at the forefront of big data innovation.