We are seeking a skilled ML Ops Engineer for a short-term contract role (90–120 days) to help build and operationalize scalable machine learning infrastructure in the cloud.
This role is critical to enabling enterprise ML solutions that support supply chain planning, design, and execution.
The ideal candidate will have hands-on experience with Databricks, MLflow, PySpark, and Unity Catalog, with a strong foundation in building cloud-native ML pipelines and enforcing data/model governance at scale.
Key responsibilities:
Designing and implementing scalable ML pipelines on cloud platforms (Azure).
Using Databricks, PySpark, and MLflow to build and manage the ML lifecycle.
Applying Unity Catalog for data and model governance.
Building and maintaining CI/CD workflows.
Refactoring ML code for production readiness.
Automating testing and monitoring for production models.
Working closely with cross-functional teams.
Documenting technical solutions.
Requirements:
Proficiency in Python.
Strong experience with Databricks, MLflow, and PySpark for distributed data processing and ML lifecycle management.
Familiarity with Unity Catalog for data security and governance in Databricks.
Experience using Terraform or similar infrastructure-as-code (IaC) tools for provisioning and managing cloud infrastructure.
Experience deploying ML pipelines in cloud platforms (Azure).
Hands-on experience with Docker and Kubernetes for containerization and orchestration.
Familiarity with ML frameworks like scikit-learn, TensorFlow, Keras, or PyTorch.
Solid understanding of DevOps, CI/CD practices, and test automation in data science environments.
Excellent collaboration and communication skills.
Benefits:
Competitive rate based on experience.
Flexible work hours with remote or hybrid options.
Work on mission-critical ML initiatives in a high-impact supply chain environment.
Collaborate with an experienced, agile team using modern ML Ops tooling.