
Remote Senior Data Engineer

at Blend360

Posted 3 days ago

Description:

  • As a Senior Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines that power AI and analytics solutions for clients.
  • You will work on the architecture and deployment of data infrastructure, ensuring efficiency, reliability, and security across large-scale datasets.
  • The position is 100% remote if you are based in LATAM, with the option to work from the office in Montevideo, Uruguay.
  • Your tasks will include designing, developing, and maintaining scalable ETL/ELT data pipelines for AI and analytics applications.
  • You will optimize data architectures and storage solutions using Databricks, Snowflake, and cloud-based platforms.
  • You will develop big data processing jobs using PySpark, Spark SQL, and distributed computing frameworks (see the sketch after this list).
  • Ensuring data quality, governance, and security best practices across all environments will be part of your responsibilities.
  • You will implement CI/CD workflows for automated deployment and infrastructure as code (IaC).
  • Collaboration with cross-functional teams (data scientists, software engineers, analysts) to build end-to-end data solutions will be essential.
  • You will lead troubleshooting and performance tuning efforts for high-volume data processing systems.
  • Developing and maintaining Python-based backend services to support data infrastructure will be required.
  • You will use Apache Airflow, Dataplane, or similar orchestration tools to automate and monitor workflows.
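
To make the pipeline work concrete, here is a minimal PySpark ETL sketch of the kind of job described above; the dataset, schema, and storage paths are hypothetical placeholders rather than details from this posting.

```python
# Minimal PySpark ETL sketch; bucket paths, schema, and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON events from cloud storage (path is hypothetical).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: basic cleansing plus a daily aggregate.
clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)
daily = clean.groupBy("event_date", "event_type").agg(F.count("*").alias("events"))

# Load: write partitioned Parquet for downstream analytics (path is hypothetical).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"
)
```

In practice, a job like this would typically run on Databricks or another managed Spark platform and be triggered by an orchestrator rather than by hand.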

Requirements:

  • Strong proficiency in SQL for data processing and transformation is required.
  • You must have strong knowledge of object-oriented programming in at least one language (Python, Scala, or Java).
  • Hands-on experience deploying and managing large-scale data pipelines in production environments is necessary.
  • Expertise in workflow orchestration tools like Apache Airflow, Dataplane, or equivalent is required (see the DAG sketch after this list).
  • A deep understanding of cloud-based data platforms such as Databricks and Snowflake (Databricks preferred) is essential.
  • Knowledge of CI/CD pipelines and infrastructure as code for data workflows is required.
  • Familiarity with cloud environments (AWS preferred; Azure or GCP also considered) and cloud-native data processing is necessary.
  • Expertise in Spark, PySpark, and Spark SQL, along with a solid understanding of distributed computing frameworks, is required.
  • You must have a proven ability to lead projects and mentor junior engineers in a fast-paced, collaborative environment.
  • Excellent written and verbal English for clear and effective communication is a must.
  • At least 4 years of experience working as a Data Engineer or in a related role is required.
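
For a concrete picture of the orchestration requirement, here is a minimal Apache Airflow DAG sketch, assuming Airflow 2.4+; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+); all names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Placeholder: pull raw files from object storage."""


def transform():
    """Placeholder: run a Spark job or SQL transformation."""


def load():
    """Placeholder: publish curated tables to the warehouse."""


with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Express the extract -> transform -> load dependency chain as a DAG.
    t_extract >> t_transform >> t_load
```

Equivalent tools such as Dataplane model the same dependency chain in their own configuration; the skill the posting asks for is designing these workflows, not mastery of any single tool.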

Benefits:

  • You will have access to learning opportunities, including certifications in AWS, Databricks, and Snowflake.
  • AI learning paths will help you stay up to date with the latest technologies.
  • Study plans, courses, and additional certifications tailored to your role will be provided.
  • English lessons will be offered to support your professional communication.
  • Career development plans and mentorship programs will help shape your career path.
  • You will receive anniversary and birthday gifts as part of the company culture.
  • Company-provided equipment for remote work will be available.
  • Other benefits may vary according to your location in LATAM.