Description:
The MLOps Engineer will establish and maintain AI/ML infrastructure, ensuring models are efficiently trained, deployed, and monitored.
This role focuses on automating ML workflows, optimizing AI operations, and improving model reliability.
The MLOps Engineer will work closely with the ML team and IT/Engineering to streamline AI deployment at Canibuild.
Key responsibilities include implementing CI/CD pipelines for model training, testing, and deployment.
The engineer will develop scalable ML infrastructure to ensure reliable AI model performance.
The engineer will automate model retraining, versioning, and monitoring using tools such as MLflow, Kubeflow, or Airflow.
The role involves deploying ML models on cloud platforms (AWS, Azure, GCP) and managing Kubernetes/Docker environments.
The engineer will assist in optimizing data pipelines and integrating AI models with production systems.
Ensuring AI deployments adhere to security, governance, and compliance standards is also a key responsibility.
Requirements:
A Bachelor’s or Master’s degree in Computer Science, AI, or a related field is required.
The candidate must have 4 or more years of experience in MLOps, AI infrastructure, or DevOps.
Strong expertise in CI/CD tools for ML, such as MLflow, Kubeflow, or Airflow, is necessary.
Experience with cloud ML services, including AWS SageMaker, Google Vertex AI, or Azure ML, is required.
Proficiency in container orchestration tools like Docker and Kubernetes is essential.
An understanding of AI model monitoring, logging, and explainability frameworks is needed.
Benefits:
The position offers flexible remote work along with career development opportunities.
Employees will engage with a supportive and collaborative global team.