MLOps for AI Consultancies

Henrik Skogström

How can MLOps make consultant-client relationships more productive?

There's no doubt that AI and machine learning are here to stay, and most companies building software have realized it. This realization isn't limited to companies developing technology-first software products; it extends to the vast majority of internal applications and background systems. Teams everywhere are figuring out which problems would be better solved with machine learning than with fixed, pre-defined logic.

However, for many organizations, getting into machine learning means founding new teams and hiring suitable leadership, since data science isn't yet an established function outside the Fortune 500. Starting with machine learning is therefore a massive strategic undertaking, and many companies are turning to consultancies and contractors to take their first steps with AI.

AI, ML, and data science consulting are booming. Many AI teams for hire are overwhelmed and can't hire fast enough to take on new clients. Unfortunately, MLOps can't help consultancies find talent any quicker, but it can make data scientists, and their co-operation with the customer, more productive in many ways. First, though, let's look at why MLOps doesn't seem relevant to consultancies at first glance.

Why doesn't MLOps seem relevant?

The primary reason MLOps doesn't seem relevant to many data scientists doing consulting work is that their projects tend to be POCs. Proof-of-concept projects are often short and fuzzy in scope, and there's an implicit understanding that much of the work may be thrown away.

We often hear that for many consulting projects, pushing a model to production isn't even the goal, and to most people, MLOps implies a strong production focus. MLOps tooling is thought of as a necessity only for projects where continuity between model development and operations is a hard requirement rather than a nice-to-have.

However, this perspective can be too narrow, as MLOps platforms such as Valohai offer many capabilities for continuity during model development, too. Let's look at some of these capabilities.

MLOps isn't just about operations

Now, let's be clear: MLOps is not a single tool or platform but a shared process. Tooling such as Valohai is, however, a practical way to implement much of the MLOps playbook.

Valohai helps transfer knowledge to clients.

In a machine learning project, a trained model that delivers results is just the tip of the iceberg. Below the surface are the scripts and notebooks used to clean the data and train the model, along with the chosen hyperparameters. Further down are many more layers that get more meta as you go, from the progression of each experiment all the way to each data scientist's individual workflow.

While the client deliverable might be a model and a set of scripts, the value often lies much more in the learnings. An anecdote a data science consultant shared with me illustrates this: "Sometimes the most valuable deliverable is telling the client a problem shouldn't be solved with ML and why."


MLOps tooling is an excellent way to quantify that knowledge and pass it on to the client. At the core of Valohai are features such as automatic experiment tracking and automatic storing of metadata and artifacts, all of which help document and deliver the more implicit aspects of data science work.
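
For instance, Valohai picks up JSON lines printed to standard output and stores them as execution metadata. A minimal sketch of what that looks like inside a training script, with illustrative metric names:

```python
import json

def log_metadata(epoch, accuracy, loss):
    # Valohai collects JSON printed to stdout as execution metadata,
    # making metrics searchable and comparable across experiments.
    print(json.dumps({"epoch": epoch, "accuracy": accuracy, "loss": loss}))

for epoch in range(1, 4):
    # ... the actual training step would run here ...
    log_metadata(epoch, accuracy=0.80 + epoch * 0.03, loss=0.5 / epoch)
```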

Valohai makes infrastructure accessible without compromising security.

Increasingly complex uses of machine learning have been made possible by advances in hardware, but at the same time, managing that hardware has become more complicated. For most machine learning work, you'll ideally use something other than your laptop. Your team will either have to know how to operate AWS, Azure, or GCP, or you'll want an MLOps platform in place that automatically manages cloud or on-premise infrastructure.

Most AI consultancies work on several client projects at the same time, which makes things trickier. A customer may require that their data never leaves their environment and that computation is done on their hardware. Valohai makes this type of configuration easy: each project can connect to a different environment. Additionally, you can limit a project's access to a specific user, team, or organization.


Each available backend can be accessed without changing any code by either selecting the environment from a dropdown or specifying it in an API call.
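
As a rough sketch, here is what starting an execution on a specific environment might look like through the REST API. The endpoint path, payload fields, and environment name below are assumptions for illustration; check Valohai's API documentation for the exact contract.

```python
import requests

# Hypothetical sketch: launching an execution on a chosen backend via
# Valohai's REST API. Field names and the environment slug here are
# illustrative assumptions, not a verified contract.
response = requests.post(
    "https://app.valohai.com/api/v0/executions/",
    headers={"Authorization": "Token <YOUR_API_TOKEN>"},
    json={
        "project": "<project-id>",
        "commit": "main",
        "step": "train-model",
        "environment": "client-a-on-premise-gpu",  # swap per client, code unchanged
    },
)
response.raise_for_status()
print(response.json())
```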

Valohai ensures a smooth path to production and beyond.

Finally, the path to production and beyond is there without any additional work. Valohai includes the ability to deploy a model to a scalable Kubernetes cluster out of the box.
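
To make that concrete, a deployed model ultimately runs as an ordinary HTTP service. Below is a minimal sketch of a prediction endpoint, assuming FastAPI and a pickled scikit-learn-style model; neither is a Valohai requirement, just one common setup:

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Illustrative assumption: a pickled model file shipped alongside the code.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(request: PredictRequest):
    # A service like this can be served from the Kubernetes cluster that
    # Valohai manages; the route and payload shape are examples only.
    prediction = model.predict([request.features])
    return {"prediction": prediction.tolist()}
```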

Seamless continuity from experimentation to production reduces the risk that everything gets re-engineered by the client when the project ends. If adopted by both sides, the MLOps platform functions as a natural integration point for future development.

To learn more about MLOps, read our MLOps eBook and check out our upcoming collection of case studies. If you have a client project you'd like to kick off on Valohai, book a call with us to get a free trial.

Free booklet: MLOps & AI Trailblazers. How do trailblazers implement MLOps with Valohai?