
Deploying Continental R&D’s First Predictive ML Model: How an Industrial Giant Scaled Multilingual Machine Learning Into Production
by Toni Perämäki | on December 02, 2025

When a global industrial manufacturer deploys its first predictive ML model into production, it's never just a technical milestone. It reshapes how the organization thinks about data, engineering, and the future of its products.
Continental Tires' recent work exemplifies this transformation. As one of the world's oldest and most respected tire companies, with 150 years of history, millions of test results, and deep engineering expertise, Continental treats machine learning as a practical tool that accelerates development cycles and improves product performance.
This article summarizes an MLOpsWorld 2025 keynote by Claudia Peñaloza of Continental Tires, in which she explained how their R&D organization deployed its first predictive ML model into production and what it took to make it work in a real industrial environment.
Watch the talk
Claudia's full conference keynote is available here:
Below is a structured summary of that journey and the key lessons for industrial teams building production-grade ML workflows.
From local notebooks to industrial pipelines
Early ML experiments in manufacturing often resemble academic prototypes: teams test ideas locally, version control is inconsistent, data variations are tracked manually (if at all), collaboration is difficult, and scaling beyond a single laptop is nearly impossible.
Continental's early steps followed this familiar pattern: ML workflows living inside notebooks, versioning that was more "wishful thinking" than practice, and dependencies that differed from developer to developer. Scaling the work into something reliable and repeatable required a fundamental shift.
When the team set out to build a predictive model for tire performance, the limitations became clear. A single tire contains more than 200 components, and manufacturing variables, test results, and historical datasets all influence performance. Turning this complexity into a reliable predictive system required a structured, production-grade ML workflow.
They identified several essential requirements:
- reliable version control across code, data, and models
- dependency management across multiple languages
- repeatability in every step
- full traceability for all inputs and outputs (sketched after this list)
- orchestration for both sequential and parallel tasks
- seamless execution in their cloud environment
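To make the version-control and traceability requirements concrete, here is a minimal, hypothetical sketch of a per-step run manifest in Python. The function names and manifest fields are illustrative assumptions, not Continental's actual implementation; in practice an orchestration platform would typically capture this metadata automatically.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Content hash of a file, so any change to data or models is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def write_run_manifest(inputs: list[Path], outputs: list[Path], manifest_path: Path) -> None:
    """Record code version plus input/output hashes for one pipeline step (hypothetical)."""
    manifest = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # The git commit pins the exact code version used for this run.
        "git_commit": subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip(),
        "inputs": {str(p): sha256_of(p) for p in inputs},
        "outputs": {str(p): sha256_of(p) for p in outputs},
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))
```

With a manifest like this per step, any prediction can be traced back to the exact code, data, and model artifacts that produced it.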
Beyond these technical requirements, the team also needed to prepare for organizational change.
Multilingual ML in practice: when the team changed, the pipeline had to survive
After 18 months of development, the project was approved for industrialization. At the same time, the core development team changed completely: three experienced R developers went on two-year parental leave, and their replacements were primarily Python developers.
This is where many ML initiatives stall, or worse, descend into panic. A new team inherits thousands of lines of code written in an unfamiliar language, with a production timeline that isn't waiting around.
A complete rewrite wasn't realistic or necessary. Instead of forcing everyone to speak the same language, Continental chose a practical path:
- standardized Parquet as the data exchange format between pipeline steps (example below)
- allowed R and Python to coexist in harmony (or at least mutual tolerance)
- introduced cross-language code reviews so the whole team understood the evolving codebase
This pragmatic approach produced a multilingual ML architecture that could survive team changes and varying skill sets.
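As a rough illustration of the Parquet handoff, a Python step can write its output with pandas (backed by pyarrow), and a downstream R step can read the same file with the arrow package. The file and column names below are invented for the example:

```python
import pandas as pd

# Output of a hypothetical Python feature-engineering step.
features = pd.DataFrame(
    {
        "compound_id": ["A12", "B07", "C33"],
        "tread_depth_mm": [8.2, 7.9, 8.5],
        "rolling_resistance": [6.1, 5.8, 6.4],
    }
)

# Parquet preserves column types across languages, unlike CSV.
features.to_parquet("features.parquet", index=False)

# A downstream R step reads the same file, e.g.:
#   features <- arrow::read_parquet("features.parquet")
```

Because both ecosystems share the same columnar format, neither side needs to know which language produced the data.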
Why orchestration became the foundation
Once it became clear that multiple languages would remain in the pipeline, the team needed a platform that could orchestrate everything reliably.
The team needed to:
- run both R and Python scripts in isolated, containerized environments (see the sketch after this list)
- connect dozens of workflow steps with full transparency
- store intermediate outputs in S3 with complete traceability
- support both sequential and parallel execution
- dynamically spin up and shut down compute resources
- integrate with AWS Aurora and internal systems
- maintain an audit trail across the entire workflow
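To ground what orchestrating R and Python in containers means, here is a deliberately simplified, generic sketch; it is not Valohai's actual API, and the images, scripts, and paths are assumptions for illustration. Each step runs in its own Docker container, with a shared data directory standing in for object storage:

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Shared I/O directory mounted into every step's container.
DATA_DIR = os.path.abspath("data")


def run_step(image: str, command: list[str]) -> None:
    """Run one pipeline step in an isolated container."""
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{DATA_DIR}:/data", image, *command],
        check=True,
    )


# Sequential: a Python preprocessing step feeds an R modelling step.
run_step("python:3.11", ["python", "/data/preprocess.py"])
run_step("rocker/r-ver:4.3.1", ["Rscript", "/data/fit_model.R"])

# Parallel: independent evaluation steps fan out across containers.
with ThreadPoolExecutor() as pool:
    list(
        pool.map(
            run_step,
            ["python:3.11", "python:3.11"],
            [
                ["python", "/data/evaluate.py", "--metric", "wet_grip"],
                ["python", "/data/evaluate.py", "--metric", "tread_wear"],
            ],
        )
    )
```

A production platform adds what this sketch omits: provisioning and shutting down compute, persisting intermediate outputs to S3, and recording the audit trail for every step.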
This is where Valohai came in.
Continental had already experimented with MLflow and SageMaker, but needed a system that was more flexible and adaptable to their environment. They needed support for teams with mixed experience levels and a platform that didn't require standardizing on a single language or rigid workflow.
Valohai became the orchestration backbone, providing a language-agnostic and cloud-agnostic platform that connected every part of the pipeline and every member of the development team.

From two months to overnight
With their multilingual pipeline fully deployed on Valohai, Continental now delivers overnight predictions to more than 100 tire developers. Instead of waiting two months for physical tire test results, engineers wake up to model predictions in their inbox.
This shift delivered several key benefits:
- faster R&D cycles, allowing quicker iteration on new designs
- earlier insight into expected performance, with physical tests still carried out for verification
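Conceptually, the overnight delivery is a scheduled pipeline run that scores the latest designs and mails a summary. A minimal sketch, with paths, addresses, and the model format all invented for illustration:

```python
import smtplib
from email.message import EmailMessage

import joblib
import pandas as pd

# Score the latest pending designs with the persisted model (placeholder paths).
designs = pd.read_parquet("pending_designs.parquet")
model = joblib.load("model.pkl")  # hypothetical scikit-learn-style model
designs["predicted_performance"] = model.predict(designs)

# Email the results so developers see predictions first thing in the morning.
msg = EmailMessage()
msg["Subject"] = "Overnight tire performance predictions"
msg["From"] = "ml-pipeline@example.com"
msg["To"] = "tire-developers@example.com"
msg.set_content(designs.to_string(index=False))

with smtplib.SMTP("smtp.example.com") as smtp:
    smtp.send_message(msg)
```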
Machine learning didn't replace expert testing—it amplified it. Orchestration turned complex ML development into a repeatable, traceable process suitable for industrial scale.
Lessons for industrial teams
Continental's experience highlights several principles for enterprises deploying ML into production:
- multilingual ML is normal—R and Python can and should coexist when needed
- standardized data exchange formats like Parquet are critical
- containerization is essential for reproducibility
- orchestration is what makes ML production-ready
- pipelines must survive team turnover and changing skill sets
Industrial ML is demanding, but with the right architecture and platform, it becomes practical and transformative.
Ready for the next step?
Talk to Valohai about orchestrating ML in industrial environments
If your team is working with multilingual pipelines, complex data flows, or production-grade workflow challenges, our team can walk you through relevant architectures and examples.