The Data Science team at TRACTIAN focuses on extracting valuable insights from vast amounts of industrial data.
This team uses advanced statistical methods, algorithms, and data visualization techniques to transform raw data into actionable intelligence.
The insights drive decision-making across engineering, product development, and operational strategies.
The team works on optimizing prediction models, identifying trends, and providing data-driven solutions to enhance operational efficiency and product quality.
The Mid-Level Machine Learning Engineer will bridge the gap between data science and production systems.
Responsibilities include owning the end-to-end deployment of machine learning models, working with real-time sensor data, and building reliable diagnostic services for industrial equipment.
This is a hands-on role with real impact, ideal for engineers looking to grow their systems design and MLOps skills.
Requirements:
Candidates must have 2–4 years of experience in software or machine learning engineering.
A Bachelor’s degree in Computer Science, Engineering, or a related technical field is required.
A solid background in math, statistics, and machine learning concepts is necessary.
Strong Python skills and experience with ML libraries such as scikit-learn or PyTorch are essential.
Experience deploying models in production environments is required.
Familiarity with event-driven platforms and message queues, such as Kafka or Redis Streams, is necessary.
Candidates should be comfortable working with streaming or time-series data (a minimal sketch of such a pipeline follows this list).
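To give a concrete flavor of the streaming-inference work described above, here is a minimal sketch that consumes sensor readings from Kafka and scores them with a pre-trained scikit-learn model. The topic name, payload shape, and model file are illustrative assumptions, not a description of TRACTIAN's actual stack.

```python
# Minimal sketch: score streaming vibration readings with a saved model.
# The topic name, payload fields, and model file are hypothetical.
import json

import joblib
from kafka import KafkaConsumer  # pip install kafka-python

model = joblib.load("anomaly_model.joblib")  # e.g. a fitted IsolationForest

consumer = KafkaConsumer(
    "vibration-readings",                     # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    reading = message.value
    # scikit-learn expects a 2D array: one row per sample.
    label = model.predict([reading["features"]])[0]
    if label == -1:  # IsolationForest marks anomalies with -1
        print(f"anomaly detected on asset {reading['asset_id']}")
```

The same consume-deserialize-score loop applies to Redis Streams or any other message queue; only the client library changes.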
Benefits:
The position offers the opportunity to work with large-scale time-series datasets from vibration and sensor systems.
Engineers will have the chance to improve the performance and reliability of model serving pipelines.
The role includes monitoring system health and implementing logging, alerting, and fallback mechanisms (a brief sketch of this pattern appears at the end of this posting).
There is an opportunity to contribute to architectural decisions and collaborate across teams.
Preferred Qualifications:
Experience with containerization (Docker) and cloud deployment is a plus.
Exposure to real-time or low-latency systems is valued.
An interest in optimizing inference latency and resource usage is also welcome.
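As a sketch of the logging, alerting, and fallback pattern mentioned under Benefits, the snippet below wraps an inference call with a latency check and a safe default. The latency budget, fallback value, and function name are illustrative assumptions, not TRACTIAN's production design.

```python
# Minimal sketch: guarded inference with logging, a latency alert,
# and a fallback value. All thresholds and names are hypothetical.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-serving")

LATENCY_BUDGET_S = 0.050  # hypothetical 50 ms inference budget


def predict_with_fallback(model, features, fallback=0):
    """Return a model prediction, falling back to a safe default on error."""
    start = time.perf_counter()
    try:
        prediction = model.predict([features])[0]
    except Exception:
        # Fallback path: record the failure and return a conservative default.
        logger.exception("inference failed; returning fallback value")
        return fallback
    latency = time.perf_counter() - start
    if latency > LATENCY_BUDGET_S:
        # Alerting hook: in production this might page on-call instead.
        logger.warning("inference latency %.1f ms exceeded budget", latency * 1e3)
    return prediction
```

Returning a conservative default keeps downstream diagnostics available when a model misbehaves, which is the essence of the reliability work this role involves.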