Please let Titan Cloud know you found this job on RemoteYeah.
This helps us grow 🌱.
Description:
Titan Cloud is a market-leading provider of Fuel Asset Optimization, assisting large convenience stores, fleets, and suppliers in reducing compliance risk, lowering asset maintenance costs, and increasing revenue and fuel yield.
The company serves as the enterprise software platform and system of record, connecting clients' fuel, environmental, store operations, and maintenance departments.
Customers save millions annually through reduced lost sales, improved customer experiences, fuel loss mitigation, and reduced environmental reserves and fines.
The Data Engineer will design, implement, and maintain standardized data models that align with business needs and analytical use cases.
Responsibilities include optimizing data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
The role involves designing, developing, and maintaining robust, scalable ETL/ELT data pipelines that transform raw data into structured datasets optimized for analysis.
Collaboration with data scientists is essential to streamline feature engineering and improve the accessibility of high-value data assets.
The Data Engineer will design, build, and maintain the data architecture needed to support business decisions and data-driven applications, utilizing AWS, Azure, and on-premises tools and services.
The position requires developing and enforcing data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
Ensuring data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms is crucial.
The role includes implementing data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
The Data Engineer will work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
Evaluating and integrating emerging cloud-based data technologies to improve performance, scalability, and cost efficiency is part of the job.
The position involves helping design and implement automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platforms.
Collaboration with DevOps Engineers is required to integrate new code into existing pipelines and to troubleshoot functional and performance issues.
The candidate must be a team player to work effectively in an agile environment.
Requirements:
A Bachelor’s degree in computer science or a related technical field is required.
The candidate must have 4+ years of relevant employment experience.
At least 4 years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture is necessary.
Proficiency in MySQL, Microsoft SQL Server, PostgreSQL, and Python is required.
Experience operating very large data warehouses or data lakes is essential.
The candidate should have experience building data pipelines and applications that stream and process datasets at low latency.
Experience with AWS Glue, Lambda, AWS Database Migration Service, AWS Batch, ECS, Fargate, Kinesis, S3, and Apache Iceberg is a plus.
Familiarity with Terraform or CloudFormation is a plus.
Benefits:
The position offers a remote work environment.
Employees enjoy flexible time off.
Medical Insurance is provided, including HSA/FSA accounts.
Dental Insurance is included.
Group Term Life Insurance is offered.
Vision Insurance is available.
Disability Insurance is provided.
Maternity/Paternity Leave is included.
A 401(k) plan is available.
Additional and voluntary benefits are offered.
Apply now