2024 in Review (Part 1)
Let's take a look back at the past year. In this first part of our annual review, we'll recap the key additions and improvements to our end-to-end MLOps platform, our ecosystem integrations, and more. Stick around and you'll also find out what to expect in the year ahead and beyond!
Boosting Velocity in Data Science Teams: A Practical Guide
Create structured, efficient workflows that help your data science team work faster and smarter: maximize business impact and increase the speed of experimentation and delivery without compromising quality.
Stop wasting your GPUs with Valohai's Dynamic GPU Allocation
Our latest feature is built to help you make the most of your on-prem hardware: utilize idle GPUs, adjust GPU usage for every ML job, and forget about managing priority queues. It’s live and ready for you to give it a spin (no pun intended).
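To make the idea concrete, here's a toy sketch of dynamic allocation from a shared GPU pool: each job grabs however many idle GPUs it needs and hands them back when it finishes. This is purely illustrative Python, not Valohai's actual implementation or API.

```python
# Illustrative only: a toy scheduler that hands out idle GPUs per job.
# Not Valohai's actual implementation or API.
from dataclasses import dataclass, field

@dataclass
class GPUPool:
    total: int
    in_use: set = field(default_factory=set)

    def idle(self) -> list[int]:
        return [i for i in range(self.total) if i not in self.in_use]

    def allocate(self, count: int) -> list[int]:
        """Grab `count` idle GPUs, or raise if the pool can't satisfy the job."""
        free = self.idle()
        if len(free) < count:
            raise RuntimeError(f"only {len(free)} of {self.total} GPUs idle")
        granted = free[:count]
        self.in_use.update(granted)
        return granted

    def release(self, gpus: list[int]) -> None:
        self.in_use.difference_update(gpus)

pool = GPUPool(total=8)
job_gpus = pool.allocate(2)   # e.g. a training job that needs 2 GPUs
# ... run the job, then hand the GPUs back for the next job in line
pool.release(job_gpus)
```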
Valohai's Audit Log: Traceability built for AI governance
Introducing an out-of-the-box solution that gives all Valohai users automatic, immutable, and secure audit logs, ensuring traceability for navigating compliance requirements, debugging issues, and improving accountability within teams.
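For a feel of what "immutable" means in practice, here's a minimal, hypothetical sketch of a tamper-evident log: each entry embeds a hash of the previous one, so any retroactive edit breaks the chain. It illustrates the general technique, not Valohai's internals.

```python
# A minimal sketch of one way to make an audit trail tamper-evident.
# Illustrative only, not Valohai's implementation.
import hashlib, json, time

log: list[dict] = []

def append_entry(actor: str, action: str, target: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "actor": actor,
        "action": action,
        "target": target,
        "prev_hash": prev_hash,  # chains this entry to the previous one
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain() -> bool:
    """Recompute every hash; any tampered entry invalidates the rest."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev_hash"] != prev or recomputed != e["hash"]:
            return False
        prev = e["hash"]
    return True

append_entry("alice@example.com", "execution.start", "train-model #421")
assert verify_chain()
```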
Simplify and automate the machine learning model lifecycle
We’ve built the Model Hub to help you streamline and automate model lifecycle management. Leverage Valohai for lineage tracking, performance comparison, workflow automation, access control, regulatory compliance, and more.
Stop waiting for your training data to download (again)
Valohai’s new experimental feature selects compute instances based on where the data is already cached, helping you reduce data transfer overhead and increase model iteration speed.
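Conceptually, the idea is simple: prefer the machine that already holds the most input bytes in its cache. Here's a toy sketch of that selection logic, with made-up instance and file names; it's not the feature's actual implementation.

```python
# Illustrative sketch of data-locality-aware instance selection: prefer the
# instance that already caches the most input bytes. Names are hypothetical.
def pick_instance(instances: dict[str, set[str]],
                  inputs: dict[str, int]) -> str:
    """instances: instance name -> set of cached file names.
    inputs: file name -> size in bytes. Returns the warmest instance."""
    def cached_bytes(cache: set[str]) -> int:
        return sum(size for name, size in inputs.items() if name in cache)
    return max(instances, key=lambda name: cached_bytes(instances[name]))

instances = {
    "gpu-node-1": {"train.parquet"},   # warm cache
    "gpu-node-2": set(),               # cold cache
}
inputs = {"train.parquet": 50_000_000_000, "labels.csv": 2_000_000}
print(pick_instance(instances, inputs))  # gpu-node-1, skips a 50 GB download
```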
Save time and avoid recomputation with Pipeline Step Caching
Valohai’s latest feature helps you avoid unnecessary costs by reusing the results of matching pipeline steps from previous executions. This feature is already available to all Valohai users!
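Under the hood, step caching is essentially memoization: if a step's command, parameters, and inputs match a previous run exactly, its outputs can be reused instead of recomputed. Here's a hypothetical sketch of the idea; the cache-key fields are ours for illustration, not Valohai's.

```python
# A minimal sketch of pipeline step caching: hash the step's command,
# parameters, and input identifiers into a cache key, and reuse stored
# outputs on a hit. Illustrative only; field names are hypothetical.
import hashlib, json

_cache: dict[str, dict] = {}  # cache key -> previously produced outputs

def run_step(command: str, params: dict, input_ids: list[str]) -> dict:
    key = hashlib.sha256(json.dumps(
        {"command": command, "params": params, "inputs": sorted(input_ids)},
        sort_keys=True,
    ).encode()).hexdigest()
    if key in _cache:               # an identical step already ran:
        return _cache[key]          # reuse its outputs, skip the compute
    outputs = {"model": f"model-{key[:8]}.bin"}  # stand-in for real work
    _cache[key] = outputs
    return outputs

first = run_step("python train.py", {"lr": 0.001}, ["dataset-v3"])
again = run_step("python train.py", {"lr": 0.001}, ["dataset-v3"])
assert first is again  # cache hit: no recomputation
```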
New Features for Optimizing MLOps Efficiency and Resource Utilization
We’ve built significant enhancements into our platform to further empower data science teams to accelerate time-to-market and optimize operational costs. These enhancements tackle model iteration speed, resource utilization, and dataset management.
Track and Manage the Lifecycle of ML Models with Valohai’s Model Registry
Valohai’s Model Registry is a centralized hub for managing model lifecycle from development to production. Think of it as a single source of truth for model versions and lineage.
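To picture what a single source of truth looks like, here's a toy registry sketch: versioned entries that carry lineage back to the run and dataset that produced them. All names and fields here are hypothetical, not Valohai's API.

```python
# A toy single-source-of-truth model registry with versions and lineage.
# Illustrative only; not Valohai's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVersion:
    name: str
    version: int
    artifact_uri: str
    training_run: str       # lineage: which execution produced it
    dataset: str            # lineage: which data it was trained on
    stage: str = "staging"  # e.g. staging -> production

registry: dict[tuple[str, int], ModelVersion] = {}

def register(mv: ModelVersion) -> None:
    registry[(mv.name, mv.version)] = mv

def latest(name: str, stage: str = "production") -> ModelVersion | None:
    candidates = [m for (n, _), m in registry.items()
                  if n == name and m.stage == stage]
    return max(candidates, key=lambda m: m.version, default=None)

register(ModelVersion("churn-predictor", 1, "s3://models/churn/1",
                      training_run="exec-841",
                      dataset="customers-2024-q3",
                      stage="production"))
print(latest("churn-predictor"))
```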
Introducing Kubernetes Support for Streamlined Machine Learning Workflows
We designed our new Kubernetes support so that data science teams can effortlessly manage and scale their workflows on top of Kubernetes and enhance their machine learning operations overall.
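For context on what running ML work on Kubernetes involves, here's a minimal sketch that submits one training step as a Kubernetes Job using the official Python client (pip install kubernetes). The image, command, and GPU count are placeholders; Valohai handles this wiring for you.

```python
# A minimal sketch of running a training step as a Kubernetes Job.
# Image, command, and GPU count are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

container = client.V1Container(
    name="train",
    image="my-registry/trainer:latest",       # hypothetical image
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"}        # request one GPU
    ),
)
job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="train-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(restart_policy="Never",
                                  containers=[container])
        ),
        backoff_limit=0,
    ),
)
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```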
Introducing Slurm Support: Scale Your ML Workflows with Ease
We're excited to announce that Valohai now supports Slurm, an open-source workload manager used in HPC environments. Valohai users can now scale their ML workflows on Slurm-based clusters with unprecedented ease and efficiency.
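For readers new to Slurm, here's roughly what the handoff looks like when done by hand: write a batch script with resource directives and submit it with sbatch. The paths and partition name below are hypothetical; Valohai automates this step for you.

```python
# A minimal sketch of submitting an ML job to Slurm: write an sbatch
# script with resource directives, then queue it. Names are hypothetical.
import pathlib
import subprocess

script = """\
#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --partition=gpu            # hypothetical partition name
#SBATCH --gres=gpu:1               # one GPU for this job
#SBATCH --time=02:00:00            # two-hour wall clock limit
#SBATCH --output=train-%j.log      # %j expands to the Slurm job id

python train.py --epochs 10
"""
path = pathlib.Path("train.sbatch")
path.write_text(script)

# sbatch prints e.g. "Submitted batch job 12345" and queues the job
result = subprocess.run(["sbatch", str(path)], capture_output=True, text=True)
print(result.stdout.strip())
```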