Blog
New Features for Optimizing MLOps Efficiency and Resource Utilization
We’ve added significant enhancements to our platform to help data science teams accelerate time-to-market and reduce operational costs. These enhancements address model iteration speed, resource utilization, and dataset management.
Stop paying for compute resources you’re not using
Our new feature monitors CPU, GPU, and memory usage and alerts you when your machines operate below 50% capacity. This allows you to optimize resource usage and reduce costs.
Track and Manage the Lifecycle of ML Models with Valohai’s Model Registry
Valohai’s Model Registry is a centralized hub for managing the model lifecycle from development to production. Think of it as a single source of truth for model versions and lineage.
Introducing Kubernetes Support for Streamlined Machine Learning Workflows
We designed our new Kubernetes support so that data science teams can effortlessly manage and scale their workflows on top of Kubernetes and improve their overall machine learning operations.
Introducing Slurm Support: Scale Your ML Workflows with Ease
We're excited to announce that Valohai now supports Slurm, an open-source workload manager used in HPC environments. Valohai users can now scale their ML workflows with Slurm-based clusters with unprecedented ease and efficiency.
Taking GenAI and LLMs from POCs to Production
LLMs and other generative models are making ripples everywhere, from established enterprises to innovative startups and beyond. But what did successful adoption look like in 2023? And what can we expect in 2024?
Easiest way to fine-tune Mistral 7B
We’ve built a template for fine-tuning Mistral 7B on Valohai. Mistral is an excellent combination of size and performance, and by fine-tuning it using a technique called LoRA, we can be very cost-efficient.
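The teaser mentions LoRA as the technique behind the cost-efficiency. As background, the core idea of LoRA (Low-Rank Adaptation) can be sketched in a few lines of NumPy; this is an illustrative toy, not Valohai’s actual template, and the dimensions and scaling factor below are made-up example values:

```python
import numpy as np

# LoRA freezes the pretrained weight matrix W and learns a low-rank
# update B @ A instead, so only r * (d_in + d_out) parameters are
# trained rather than d_in * d_out.
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4                  # r << d_in: low-rank bottleneck
alpha = 8                                   # LoRA scaling factor

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; because B starts at 0,
    # the adapted model is identical to the base model before training.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# Trainable parameter count: ~2*r*d versus d*d for full fine-tuning.
full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # 0.125 here; far smaller at LLM scale
```

At Mistral-7B scale the same ratio drops by orders of magnitude, which is why LoRA fine-tuning fits on much cheaper GPUs than full fine-tuning.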
Dive into Valohai with our new serverless trial
We’re thrilled to announce our new free trial for all aspiring ML pioneers! With the new free trial, we’ve made it easy to kickstart your journey with our handpicked templates.
Why closed-source LLMs are not suited for production
ChatGPT continues to capture public attention, and many are looking to incorporate similar functionality into their products. But is it a safe route for production-grade applications?
Tap into the most extensive open-source model library with Valohai’s Hugging Face templates
We've built a set of Hugging Face templates that make it super simple to use the latest and greatest in open-source ML. These templates are available through the Valohai Ecosystem.
KONUX x Valohai Datasets: Ensuring Traceability and Eliminating Data Inconsistency
The key takeaways from a presentation by Andres Hernandez, Principal Data Scientist at KONUX, about how their team streamlines operations using the Valohai datasets feature.
Using OpenAI’s GPT APIs to generate data for your NLP project
Collecting, cleaning, and labeling data is one of the most time-consuming tasks in data science, and this is especially true in NLP. Recently, we've seen data scientists use large language models such as OpenAI's GPT-4 to produce datasets for training smaller NLP models that solve a more specific task.
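The workflow described above boils down to prompting an LLM for structured examples and validating what comes back before it enters a training set. A minimal sketch, assuming a hypothetical `call_llm` client and a made-up sentiment-label schema (neither comes from the post):

```python
import json

# Ask the model for strict JSON so the reply can be parsed programmatically.
# Sending PROMPT via your API client of choice is left out here.
PROMPT = (
    "Generate {n} short product reviews as a JSON list of objects "
    'with keys "text" and "label" (label is "positive" or "negative"). '
    "Reply with JSON only."
)

def parse_examples(reply: str) -> list[tuple[str, str]]:
    """Validate an LLM reply and return (text, label) training pairs."""
    records = json.loads(reply)
    pairs = []
    for rec in records:
        label = rec["label"].strip().lower()
        if label not in {"positive", "negative"}:
            continue  # drop anything outside the label schema
        pairs.append((rec["text"].strip(), label))
    return pairs

# Example of the kind of reply an LLM might return for PROMPT.format(n=2):
reply = (
    '[{"text": "Works great!", "label": "Positive"},'
    ' {"text": "Broke in a day.", "label": "negative"}]'
)
print(parse_examples(reply))
```

The validation step matters in practice: LLM output drifts from the requested schema often enough that unchecked generations will quietly corrupt a training set.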
Large Language Models for the Rest of Us
With the popularization of LLMs, developers and product folks are flocking to the space and testing out novel concepts. How will LLM products evolve over time?
Business Value of MLOps
In 2020, Forbes estimated that the market for MLOps solutions would reach $4 billion by 2025; a recent VentureBeat article claims it will grow to over $6 billion by 2028. Let's look at what is driving the demand for MLOps.
Hannes Heikinheimo, Speechly: Voice is the New Touch
Hannes is working on making voice the new touch: ubiquitous and intuitive for everyone. Together with his team, Hannes is pioneering not only voice interfaces but also voice moderation.