Please let Talkiatry know you found this job on RemoteYeah. This helps us grow 🌱.
Description:
As a Senior Data Engineer, your main goal is to help build and maintain our data infrastructure, including data pipelines, the data warehouse, and data integrations.
You will contribute to architectural decisions and build new data products and services to serve our business.
Your role includes contributing to the continuous improvement of our development workflows and processes.
You will collaborate with cross-functional stakeholders, including the Product, Infrastructure, and Operations teams, to deliver clear outcomes.
You will create custom data products for production-grade system integrations as needed.
You will help monitor the overall health and integrity of data pipelines and data systems.
You will support, maintain, and extend our existing dbt data models and Snowflake data warehouse, including performance optimizations.
You will maintain and build CI/CD pipelines using GitHub Actions.
You will work closely with the infrastructure team to anticipate infrastructure needs based on planned work for new data products and articulate those needs.
You will ingest new data sources into the data warehouse and create reverse ETL solutions as needed.
You will migrate independently scheduled tasks and pipelines into Airflow DAGs.
You will join the rest of the Data Engineering team in the on-call rotation.
You will architect and build innovative new data solutions that interact with internal and external APIs as well as our data warehouse.
Requirements:
A Bachelor’s degree in computer science or a similar field is preferred but not required.
You must have 3+ years of experience working as a Data Engineer.
Advanced SQL experience is required.
You should have 3+ years of experience as a developer, with Python preferred but other languages such as Java or Scala acceptable.
You need 2+ years of experience working with cloud data warehouses, with Snowflake preferred, including performance optimization through warehouse sizing and caching strategies.
Experience with data modeling in dbt is required, including performance optimization techniques that speed up data processing and minimize cloud costs while preserving data accuracy and reliability.
You should have experience using CI/CD tools such as GitHub Actions, Jenkins, or CircleCI.
Familiarity with cloud computing environments such as AWS, Azure, or Google Cloud is necessary.
Experience writing Infrastructure as Code in Terraform or AWS CloudFormation is useful but not required.
Experience working with an orchestration tool like Airflow is a plus but not required.
Experience with other ETL tools such as Fivetran, Rivery, or Stitch is a plus.
Experience working with APIs and message queues is also beneficial.
Benefits:
You will be part of a top-notch team that is diverse and experienced, motivated to make a difference in mental health care.
The position offers a collaborative environment where you can be part of building something from the ground up at a fast-paced startup.
You will have the flexibility to work remotely across the U.S. or from the company's HQ in NYC.
The company provides excellent benefits, including medical, dental, and vision coverage effective from day one of employment, a 401(k) with match, generous PTO plus paid holidays, and paid parental leave.
You will have opportunities to grow your career, hone your skills, and build new ones with the Learning team as Talkiatry expands.
The company prioritizes the well-being of its team, reflecting its commitment to mental health care.
Apply now