Empowering machine learning teams to train and deploy their models

Preview: We provide initial credit for all users; no credit card needed. You can apply for additional credit for research and non-profit purposes.

Say goodbye to
manual record keeping

Everything that an execution ingests and produces is recorded and accessible through the web browser or the command line. Parameters, inputs, logs, errors, metadata, outputs, you name it.

Find out more about tracking results

Real-time exploratory research

Experiment metadata is visible in real time and visualized alongside other concurrently running experiments to support exploratory research. You can run as many parallel experiments as you want!

See how running experiments works

Get started

Zero setup infrastructure

We maintain scalable fleets of CPU and GPU workers so you don't have to, at prices up to 50% below market rates, billed per second. We also support private workers that extend your personal worker fleet. Get running in seconds, scale when you need to.

Explore pricing in more detail

Use the tools you already love

Runtime environments use GPU-enabled Docker images, so running virtually any language or machine learning library is possible. We provide a set of ready-made images, but we support any of your favorite tools as long as they run on Linux.

NumPy Caffe TensorFlow Keras Theano Torch PyTorch Darknet MXNet
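For example, a step in a valohai.yaml file (see valohai-yaml below) can pin one of these images to your own script. This is a minimal sketch; the step name, script, and parameter are illustrative, not a complete configuration:

```yaml
- step:
    name: train-model
    image: tensorflow/tensorflow:1.13.1-gpu-py3  # any Docker image works
    command: python train.py {parameters}
    parameters:
      - name: learning-rate
        type: float
        default: 0.001
```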

Empowering professionals

Iterate on many models in parallel, or squeeze the most out of a specific model with various styles of hyperparameter optimization.
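One of the simpler optimization styles is grid search: every combination of hyperparameter values becomes its own experiment, and the experiments can run in parallel. A minimal sketch of how such a grid expands into configurations (the parameter names and values are illustrative):

```python
import itertools

# Hypothetical hyperparameter grid; names and values are illustrative.
grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [32, 64],
}

def grid_configs(grid):
    """Yield one dict per combination of hyperparameter values."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(grid))
# 3 learning rates x 2 batch sizes -> 6 configurations,
# each of which could be launched as a separate parallel experiment.
```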

See how optimizations work

Get empowered

GitHub of machine learning

Octocat overlords have given us a lot of inspiration. All experiments run on the platform are accessible and reproducible by those with project collaborator privileges. Or make your project public for the world to see. Working together on machine learning projects has never been so straightforward.

See collaboration features

From command line to the cloud

Some people prefer the graphical user interface, others like using the command line; we embrace students of both schools. We also offer the whole platform as a REST API for third-party applications.

Open source at heart

We use a lot of open source and love contributing back.
If it makes sense to open source something, we will do it.
You can find us as @valohai on GitHub. Here's some of the code we maintain.

django-safespace – a middleware for exception processing
koodaus – encoding utilities
ml-logos – machine learning library logos as SVG
ulid2 – a better library for ULID encoding and decoding
valohai-yaml – parser and linter for valohai.yaml files
valohai-cli – the official CLI for Valohai


Interested in machine learning? Join our community Slack for open discussion about everything from ML research to concrete use cases.