Overview
Required Skills
- Python: 5/5
- AWS Cloud services: 5/5
- Kubernetes & Docker: 5/5
- ML Orchestration: 5/5
- Data Engineering Tools: 5/5
Requirements
- 3+ years of software development experience in Python
- Strong experience with AWS Cloud services (Lambda, S3, ECS, EKS, EC2, etc.)
- Expertise in Kubernetes & Docker for containerized ML model deployments
- Experience in orchestrating Machine Learning solutions for large-scale production
- Deep understanding of CI/CD pipelines for ML models (GitHub Actions, etc.)
- Experience in Machine Learning Orchestration (data version control, MLflow)
- Experience with ML Model Monitoring (e.g., Seldon, Grafana)
- Knowledge of Data Engineering Tools (Airflow, Spark, or similar)
- Independence & proactiveness: a self-starter who pushes boundaries and drives projects to completion
- Strong communication & leadership skills: ability to work across teams and drive MLOps best practices
- Experience with MLOps frameworks (ClearML / SageMaker / W&B) – advantage
- Experience with TensorFlow – advantage
- Familiarity with GPU-based model deployment and optimization – advantage
- Background in computer vision and deep learning workflows – advantage
- M.Sc. in Computer Science or equivalent – advantage
Responsibilities
- Collaborate with machine learning engineers and data managers to improve, validate, and deploy ML models at a large scale
- Design and implement large-scale data pipelines using cloud computing
- Design, implement, and deploy large-scale pipelines for ML models in production
- Maintain and monitor the performance, reliability, and scalability of production systems
- Work with us to continually grow and improve our ML workflows, tools, and data

