Blog

December 18, 2024 · Tarek Oraby
2024 in Review (Part 1)

Let's take a look back at the past year. In this first part of our annual review, we'll recap the key additions and improvements to our end-to-end MLOps platform, ecosystem integrations, and more. Stick around to find out what to expect in the coming year and beyond!

November 27, 2024 · Tarek Oraby
Boosting Velocity in Data Science Teams: A Practical Guide

Create structured, efficient workflows that help your data science team work faster and smarter: maximize business impact and increase the speed of experimentation and delivery without compromising quality.

November 20, 2024 · Tarek Oraby
Stop wasting your GPUs with Valohai's Dynamic GPU Allocation

Our latest feature is built to help you make the most out of your on-prem hardware: utilize idle GPUs, adjust GPU usage for every ML job, and forget about managing priority queues. It’s live and ready for you to give it a spin (no pun intended).

November 06, 2024 · Tarek Oraby
Valohai's Audit Log: Traceability built for AI governance

Introducing an out-of-the-box solution that gives all Valohai users automatic, immutable, and secure audit logs, ensuring the traceability needed to navigate compliance requirements, debug issues, and improve accountability within teams.

October 31, 2024 · Eero Laaksonen
AMD GPU Performance for LLM Inference: A Deep Dive

AMD's MI300X GPU can outperform Nvidia's H100 in LLM inference benchmarks, offering larger memory and higher bandwidth. Read our benchmark in full, get the details, and discover how this impacts AI hardware performance and model capabilities.

September 18, 2024 · Tarek Oraby
Simplify and automate the machine learning model lifecycle

We’ve built the Model Hub to help you streamline and automate model lifecycle management. Leverage Valohai for lineage tracking, performance comparison, workflow automation, access control, regulatory compliance, and more.

September 11, 2024 · Alexander Rozhkov
3 things to look forward to in MLOps (or maybe 4)

Don’t miss out on Valohai’s upcoming updates on AI governance and the EU AI Act, examples of machine learning pipelines in production, new features, and GPU benchmarks. Subscribe to our newsletter.

September 04, 2024 · Tarek Oraby
Stop waiting for your training data to download (again)

Valohai’s new experimental feature selects compute instances based on where the data has been cached already, helping you reduce data transfer overhead and increase model iteration speed.

August 28, 2024 · Toni Perämäki
Solve the GPU shortage and control cloud costs: Valohai’s partnership with OVHcloud

Our new partnership enables you to seamlessly access OVHcloud’s scalable and secure environments from the Valohai MLOps platform without changing your preferred ML workflows.

August 20, 2024 · Tarek Oraby
Save time and avoid recomputation with Pipeline Step Caching

Valohai’s latest feature helps you avoid unnecessary costs by reusing the results of matching pipeline steps from previous executions. This feature is already available to all Valohai users!

July 10, 2024 · Tarek Oraby
New Features for Optimizing MLOps Efficiency and Resource Utilization

We’ve built significant enhancements into our platform to further empower data science teams to accelerate time-to-market and optimize operational costs. These enhancements tackle model iteration speed, efficient resource utilization, and dataset management.

July 01, 2024 · Alexander Rozhkov
Stop paying for the compute resources that you’re not using anymore

Our new feature monitors CPU, GPU, and memory usage and alerts you when your machines operate below 50% capacity. This allows you to optimize resource usage and reduce costs.

May 22, 2024 · Tarek Oraby
Track and Manage the Lifecycle of ML Models with Valohai’s Model Registry

Valohai’s Model Registry is a centralized hub for managing model lifecycle from development to production. Think of it as a single source of truth for model versions and lineage.

May 15, 2024 · Tarek Oraby
Introducing Kubernetes Support for Streamlined Machine Learning Workflows

We designed our new Kubernetes support so that data science teams can effortlessly manage and scale their workflows on top of Kubernetes and enhance their overall machine learning operations.

April 02, 2024 · Tarek Oraby
Introducing Slurm Support: Scale Your ML Workflows with Ease

We're excited to announce that Valohai now supports Slurm, an open-source workload manager widely used in HPC environments. Valohai users can now scale their ML workflows on Slurm-based clusters with unprecedented ease and efficiency.
