
Announcing Charmed Kubeflow 1.10

Stefano Fioravanzo, 8 April 2025


We are thrilled to announce the release of Charmed Kubeflow 1.10, Canonical’s latest update to the widely-adopted open source MLOps platform. This release integrates significant improvements from the upstream Kubeflow 1.10 project, while also bringing a suite of additional capabilities targeted towards enterprise deployments. Charmed Kubeflow 1.10 empowers machine learning practitioners and teams to operationalize machine learning workflows more efficiently, securely, and seamlessly than ever.

Highlights from upstream Kubeflow 1.10

Advanced hyperparameter tuning with Trainer 2.0 and Katib

Kubeflow Trainer 2.0 introduces enhanced capabilities designed to simplify hyperparameter optimization. In combination with Katib, a new high-level API specifically supports hyperparameter tuning for large language models (LLMs), reducing manual intervention and accelerating fine-tuning workflows. Additionally, Katib now supports:

  • Multiple parameter distribution types, including log-uniform, normal, and log-normal distributions (a brief sketch follows this list).
  • A push-based metrics collection mechanism, which enhances performance and simplifies administration.
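
To make the new search-space options concrete, here is a minimal sketch of a Katib Experiment that samples a learning rate from a log-uniform distribution, applied through the Kubernetes Python client. The namespace, training image, and command are illustrative assumptions, and the feasibleSpace "distribution" field name should be verified against the Katib CRD version shipped with your deployment.

```python
# A minimal sketch, assuming kubeconfig points at a cluster running Kubeflow 1.10.
# The namespace, image, and training command are illustrative; verify the
# feasibleSpace "distribution" field against the Katib CRD version you have installed.
from kubernetes import client, config

experiment = {
    "apiVersion": "kubeflow.org/v1beta1",
    "kind": "Experiment",
    "metadata": {"name": "lr-search", "namespace": "kubeflow-user"},
    "spec": {
        "objective": {"type": "minimize", "objectiveMetricName": "loss"},
        "algorithm": {"algorithmName": "random"},
        "maxTrialCount": 12,
        "parallelTrialCount": 3,
        "parameters": [
            {
                "name": "learning_rate",
                "parameterType": "double",
                "feasibleSpace": {
                    "min": "1e-5",
                    "max": "1e-2",
                    # New distribution options: uniform, logUniform, normal, logNormal.
                    "distribution": "logUniform",
                },
            }
        ],
        "trialTemplate": {
            "primaryContainerName": "training",
            "trialParameters": [
                {"name": "learningRate", "reference": "learning_rate"}
            ],
            "trialSpec": {
                "apiVersion": "batch/v1",
                "kind": "Job",
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [
                                {
                                    "name": "training",
                                    "image": "my-registry/trainer:latest",  # illustrative image
                                    "command": [
                                        "python",
                                        "train.py",
                                        "--lr=${trialParameters.learningRate}",
                                    ],
                                }
                            ],
                            "restartPolicy": "Never",
                        }
                    }
                },
            },
        },
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1beta1",
    namespace="kubeflow-user",
    plural="experiments",
    body=experiment,
)
```

Once created, trial progress can be followed with `kubectl get experiment lr-search -n kubeflow-user` or from the Katib UI.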

Improved scalability and flexibility in Kubeflow Pipelines

Kubeflow Pipelines 2.4.1 includes key enhancements such as:

  • Support for placeholders in resource limits, allowing dynamic and adaptable pipeline configurations.
  • Loop parallelism with configurable parallelism limits, facilitating massively parallel execution while maintaining system stability (see the example after this list).
  • Reliable resolution of outputs from nested DAG components, simplifying pipeline management and reuse.
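
As a rough illustration of the loop parallelism and resource-limit placeholders mentioned above, here is a minimal sketch using the KFP v2 Python SDK. The component, item list, and default memory value are illustrative assumptions, and whether a given resource field accepts a runtime placeholder depends on your SDK and backend versions.

```python
# A minimal sketch using the KFP v2 Python SDK (kfp>=2.x). The component, item list,
# and default memory value are illustrative; passing a pipeline input to
# set_memory_limit relies on the newer placeholder support and may not work on
# older SDK/backend versions.
from kfp import compiler, dsl


@dsl.component
def train_shard(shard: int) -> float:
    """Stand-in training step; returns a dummy metric."""
    return float(shard)


@dsl.pipeline(name="parallel-training")
def parallel_training(memory_limit: str = "2Gi"):
    # Fan out over shards, but run at most three trials at a time.
    with dsl.ParallelFor(items=[0, 1, 2, 3, 4, 5], parallelism=3) as shard:
        task = train_shard(shard=shard)
        # Resource limits can take a runtime placeholder instead of a
        # hard-coded string, so the same pipeline adapts per run.
        task.set_memory_limit(memory_limit)


if __name__ == "__main__":
    compiler.Compiler().compile(parallel_training, "parallel_training.yaml")
```

Compiling produces a pipeline package that you can upload through the Kubeflow Pipelines UI or submit with the KFP client.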

Next-level model serving with KServe

KServe 0.14.1 introduces powerful features to further streamline model deployment:

  • New Python SDK with asynchronous inference capabilities.
  • Stable OCI storage integration for robust model management.
  • Model caching leveraging local node storage for rapid deployment of large models.
  • Direct integration with Hugging Face, allowing seamless deployment using the Hugging Face Hub (a short example follows this list).
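
As an example of the Hugging Face integration, the sketch below creates an InferenceService with the KServe Python SDK that pulls a model straight from the Hugging Face Hub. It assumes kserve>=0.14 is installed, your kubeconfig points at a cluster where the Hugging Face serving runtime is available, and the model id, namespace, and resource limits shown are illustrative.

```python
# A minimal sketch, assuming kserve>=0.14 is installed, kubeconfig points at the
# cluster, and the Hugging Face serving runtime is available. The model id,
# namespace, and resource limits are illustrative.
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1ModelFormat,
    V1beta1ModelSpec,
    V1beta1PredictorSpec,
    constants,
)

isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind="InferenceService",
    metadata=client.V1ObjectMeta(name="sentiment-demo", namespace="kubeflow-user"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(name="huggingface"),
                # Pull the model straight from the Hugging Face Hub.
                args=[
                    "--model_name=sentiment",
                    "--model_id=distilbert-base-uncased-finetuned-sst-2-english",
                ],
                resources=client.V1ResourceRequirements(
                    limits={"cpu": "2", "memory": "4Gi"},
                ),
            )
        )
    ),
)

KServeClient().create(isvc, namespace="kubeflow-user")
```

For generative models, the same runtime can expose OpenAI-compatible endpoints; see the KServe documentation for the supported tasks and flags.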

I am very excited to see continued collaborations and new features from KServe being integrated in the Kubeflow 1.10 release, particularly the model cache feature and integration with Hugging Face, which enables more streamlined deployment and efficient autoscaling for both predictive and generative models. We are actively working with ecosystem projects and communities like vLLM, Kubernetes WG Serving, and Envoy to tackle the growing challenges of serving LLMs.

Yuan Tang
Kubeflow Steering Committee member

The Kubeflow ecosystem is also growing – it recently welcomed the Spark Operator, and the Feast community is actively working on a donation plan as well.

Feast is the reference open-source feature store for AI/ML, and when combined with Kubeflow, it provides a seamless end-to-end MLOps experience. I am excited to see the two projects working more closely together to unlock powerful use cases, especially for Generative AI and Retrieval-Augmented Generation (RAG). Kubeflow and Feast will enable data scientists to efficiently manage features, accelerate model development, and accelerate getting models to production.

— Francisco Javier Arceo
Kubeflow Steering Committee member & Feast Maintainer

Added value of Charmed Kubeflow 1.10

We don’t just package upstream components: we take the care needed to ensure a seamless production deployment experience for our customers. We develop open source solutions for improved orchestration and integration with ancillary services, and of course we always take our customers’ feedback into consideration. This is how we have improved Charmed Kubeflow 1.10 even further:

  • Added an automated and simplified way to manage your Kubeflow profiles via GitOps, with our new GitHub Profile Automator charm. This mechanism allows you to declaratively define your Kubeflow profiles in a single place (see the sketch after this list). This work also lays the foundation for a seamless authentication experience with external identity providers, which can be particularly useful when deploying Kubeflow in the public cloud.
  • We’ve enabled a high availability option for the Istio ingress, to improve the resilience of your deployments and make sure you can handle a high traffic volume with confidence.
  • You can now leverage more application health-check endpoints and alerting rules for KServe, Istio, and other components. With every release, we strive to provide more ways to monitor the health status of your deployment.
  • Charmed Kubeflow is more secure than ever. Most of our images are now based on Ubuntu and our Rocks technology, leveraging Canonical’s security patching pipelines to keep the number of CVEs as low as possible.
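
To give a feel for the declarative profile management mentioned in the first bullet, here is a minimal sketch of a Kubeflow Profile resource, using the standard upstream Profile CRD. With the GitHub Profile Automator charm you would keep manifests like this in a Git repository that the charm reconciles; applying it directly with the Kubernetes Python client below is only to keep the example self-contained, and the profile name and owner email are illustrative.

```python
# A minimal sketch of a declarative Kubeflow Profile, using the standard upstream
# Profile CRD. The profile name and owner email are illustrative; with the GitHub
# Profile Automator charm, manifests like this live in a Git repository that the
# charm reconciles, rather than being applied by hand as done here.
from kubernetes import client, config

profile = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "Profile",
    "metadata": {"name": "data-science-team"},
    "spec": {
        "owner": {"kind": "User", "name": "alice@example.com"},
    },
}

config.load_kube_config()
# Profiles are cluster-scoped resources.
client.CustomObjectsApi().create_cluster_custom_object(
    group="kubeflow.org", version="v1", plural="profiles", body=profile
)
```

Each Profile maps to a namespace with the owner granted access, so adding or removing entries in the repository adds or removes user workspaces accordingly.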

Canonical’s AI/ML Ecosystem

Canonical works closely with a broad range of partners to enable open source technology at every scale and in any environment. Charmed Kubeflow runs seamlessly on any CNCF-certified Kubernetes distribution, giving you the flexibility to choose the environment that best fits your needs. Additionally, we’re working towards bringing Kubeflow to the public cloud as a managed offering, significantly cutting deployment time and operational costs. For data scientists looking to start experimenting right on their Ubuntu laptops or workstations, our Data Science Stack provides a straightforward, ready-to-use solution. Lastly, we’re developing a robust, standalone model-serving solution built on Kubernetes, ideal for secure, mission-critical deployments and for extending reliable inference capabilities to the edge.

Get started with Charmed Kubeflow 1.10

Whether you’re a seasoned MLOps practitioner or new to Kubeflow, now is the perfect time to experience these enhancements firsthand. Install Charmed Kubeflow 1.10 today and elevate your machine learning workflows.

Explore the full details and installation instructions in our release notes.

Contact Canonical for enterprise support or managed services. 

To learn more about Canonical’s AI solutions, visit canonical.com/solutions/ai.
