Remote Data Engineer – Data Pipelines & Modeling

at RYZ Labs

Description:

  • This position is only for professionals based in Argentina or Uruguay.
  • We are looking for a data engineer to join one of our clients' teams.
  • You will help enhance and scale the data transformation and modeling layer.
  • This role will focus on building robust, maintainable pipelines using dbt, Snowflake, and Airflow to support analytics and downstream applications.
  • You will work closely with the data, analytics, and software engineering teams to create scalable data models, improve pipeline orchestration, and ensure trusted, high-quality data delivery.
  • Key responsibilities include designing, implementing, and optimizing data pipelines that extract, transform, and load data into Snowflake from multiple sources using Airflow and AWS services (a rough orchestration sketch follows this list).
  • You will build modular, well-documented dbt models with strong test coverage to serve business reporting, lifecycle marketing, and experimentation use cases.
  • You will partner with analytics and business stakeholders to define source-to-target transformations and implement them in dbt.
  • You will maintain and improve the orchestration layer (Airflow/Astronomer) to ensure reliability, visibility, and efficient dependency management.
  • You will collaborate on data model design best practices, including dimensional modeling, naming conventions, and versioning strategies.
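
As a rough illustration of the kind of pipeline described above (a sketch only, not the client's actual setup), a minimal Airflow DAG with retry handling and an Airflow-to-dbt handoff might look like the following, assuming Airflow 2.x; the DAG id, schedule, script paths, and dbt project location are hypothetical placeholders.

    # Minimal sketch; all ids and paths below are placeholders, not the client's code.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    default_args = {
        "retries": 2,                         # retry a failed task twice
        "retry_delay": timedelta(minutes=5),  # wait five minutes between attempts
    }

    with DAG(
        dag_id="load_and_transform",          # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
        default_args=default_args,
    ) as dag:
        # Hypothetical extract/load step into a Snowflake staging schema.
        load_raw = BashOperator(
            task_id="load_raw",
            bash_command="python scripts/load_to_snowflake.py",  # placeholder script
        )

        # Run dbt models only after the raw load succeeds.
        run_dbt = BashOperator(
            task_id="run_dbt",
            bash_command="dbt run --project-dir /usr/local/airflow/dbt",  # placeholder path
        )

        load_raw >> run_dbt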

Requirements:

  • You must have hands-on experience developing dbt models at scale, including the use of macros, snapshots, testing frameworks, and documentation (an illustrative model sketch follows this list).
  • You should be familiar with dbt Cloud or CLI workflows.
  • Strong SQL skills and an understanding of Snowflake architecture, including query performance tuning and cost optimization, are required.
  • You must have solid experience managing Airflow DAGs, scheduling jobs, and implementing retry logic and failure handling; familiarity with Astronomer is a plus.
  • Proficiency in dimensional modeling and building reusable data marts that support analytics and operational use cases is necessary.
  • Familiarity with AWS services such as DMS, Kinesis, and Firehose for ingesting and transforming data is a nice-to-have.
  • Familiarity with event data and related flows, including piping data in and out of Segment, is also a nice-to-have.
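
To illustrate the kind of dbt work in scope (again a sketch under assumed names, not the client's codebase), a modular incremental model might look like the following; the model, columns, and the cents_to_dollars macro are hypothetical.

    -- models/marts/fct_orders.sql  (hypothetical file)
    {{ config(materialized='incremental', unique_key='order_id') }}

    select
        o.order_id,
        o.customer_id,
        o.ordered_at,
        {{ cents_to_dollars('o.amount_cents') }} as order_amount  -- hypothetical project macro
    from {{ ref('stg_orders') }} as o                             -- upstream staging model
    {% if is_incremental() %}
      -- on incremental runs, only pick up rows newer than what is already loaded
      where o.ordered_at > (select max(ordered_at) from {{ this }})
    {% endif %}

Column-level tests (for example, unique and not_null on order_id) and model descriptions would typically live in an accompanying schema.yml file, which is what the testing and documentation expectations above refer to.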

Benefits:

  • The position offers the chance to work with cutting-edge data engineering technologies.
  • You will be part of a collaborative team that values innovation and continuous improvement.
  • The role provides room to deepen your skills in data modeling and pipeline orchestration.
  • You will work closely with analytics and business stakeholders, directly informing business decisions.
  • The position allows for professional growth in a dynamic and supportive environment.