Amazon SageMaker AI MLflow Apps Now Support MLflow Version 3.10
Today, we are excited to announce that Amazon SageMaker AI MLflow Apps now support MLflow version 3.10. This update brings enhanced capabilities for generative AI development and streamlined experiment tracking, accelerating the journey from experimentation to production for data scientists and ML engineers.
In this post, we cover the new features introduced in MLflow v3.10, show how to get started with SageMaker AI MLflow applications, and explore how to use these enhancements to build generative AI applications.
What’s New in MLflow v3.10
MLflow 3.10 introduces a suite of targeted enhancements to the MLflow ecosystem, building on the tracing and observability capabilities established in MLflow 3.0. The focus is on generative AI application development and agentic workflows. This release enhances tracing for complex, multi-round workflows, offers tighter integration with popular LLM frameworks and libraries, and streamlines logging for generative AI interactions.
The evaluation process receives a significant boost with the mlflow.genai.evaluate() API. This feature provides a programmatic interface to systematically measure and maintain the quality of generative AI applications across the lifecycle, with built-in metrics covering relevance, fidelity, correctness, and safety. These integrate seamlessly with SageMaker AI workflows, providing a comprehensive evaluation framework.
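The evaluation workflow can be sketched as follows. This is a minimal sketch, not an official sample: the dataset, the predict function, and the answer text are hypothetical placeholders, and because the built-in scorers rely on a configured LLM judge, the evaluate() call itself is shown commented.

```python
# A minimal sketch of evaluating a generative AI app with built-in scorers.
# The dataset and predict_fn below are hypothetical placeholders.

# Each record pairs inputs with expectations for the LLM-judge scorers.
eval_data = [
    {
        "inputs": {"question": "What is MLflow?"},
        "expectations": {"expected_response": "MLflow is an open source MLOps platform."},
    },
]

def predict_fn(question: str) -> str:
    # Stand-in for your real application (for example, an Amazon Bedrock call).
    return "MLflow is an open source MLOps platform for the ML lifecycle."

# With MLflow 3.10 installed and a judge model configured, you would run:
# import mlflow
# from mlflow.genai.scorers import Correctness, RelevanceToQuery, Safety
#
# results = mlflow.genai.evaluate(
#     data=eval_data,
#     predict_fn=predict_fn,
#     scorers=[Correctness(), RelevanceToQuery(), Safety()],
# )
```

The scorers receive each record's inputs and expectations alongside the output of predict_fn, so the same dataset can be re-evaluated as your application evolves.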
Observability enhancements include granular trace filtering and searching, enriched metadata capture for debugging and root cause analysis, and pre-built performance dashboards. These dashboards present workload-level metrics such as latency distributions, request counts, quality scores, and token usage at a glance, eliminating the need for manual chart setup. This empowers teams running production workloads with clear visibility into operational costs, while MLflow Workstations offer a structured way to organize MLflow artifacts across teams and projects.
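The granular trace filtering described above can be sketched with MLflow's trace search grammar. The field names below (attributes.status, attributes.execution_time_ms) follow MLflow's documented search syntax rather than this post, so treat them as assumptions; the search call itself is shown commented because it needs a tracking URI and recorded traces.

```python
# Sketch: filter strings for narrowing traces with MLflow's trace search.
# Field names follow MLflow's documented search grammar (an assumption here).

# Traces that ended in an error:
error_filter = "attributes.status = 'ERROR'"

# Traces slower than five seconds:
slow_filter = "attributes.execution_time_ms > 5000"

# With MLflow installed and your tracking URI set to the app ARN, you would run:
# import mlflow
# error_traces = mlflow.search_traces(filter_string=error_filter, max_results=50)
```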
These enhancements, coupled with SageMaker AI, provide an enterprise-grade generative AI infrastructure, simplifying the tracking of experiments, monitoring of generative AI performance, and governance of AI applications at scale.
Getting Started with the SageMaker AI MLflow v3.10 App
For new users, creating a SageMaker AI MLflow application is straightforward through the SageMaker Studio console, the AWS Command Line Interface (AWS CLI), or the API. The default configuration automatically provisions MLflow 3.10, giving you immediate access to the latest features.
Prerequisites
To get started, you need an AWS account and a SageMaker AI Studio domain with permissions to create MLflow applications.
Next, navigate to the Amazon SageMaker AI Studio console and select the MLflow application.
Choose Create an MLflow application and enter a name. An AWS Identity and Access Management (IAM) role and an Amazon Simple Storage Service (Amazon S3) bucket are preconfigured for you using the SageMaker AI Studio domain defaults; modify them under Advanced settings only if necessary.
Once created, the application has an Amazon Resource Name (ARN) you can connect to. You can immediately start using the new SageMaker AI MLflow app with MLflow v3.10 from your existing code, or follow the instructions below to connect your code to SageMaker AI MLflow apps.
To start tracking your experiments with your new SageMaker AI MLflow application, install both MLflow and the AWS SageMaker MLflow plugin in your environment. You can use JupyterLab in SageMaker Studio, the SageMaker Studio Code Editor, a local integrated development environment (IDE), or any other supported environment where your AI workloads run with SageMaker AI MLflow applications.
To install the two Python packages using pip:
pip install mlflow==3.10.1 sagemaker-mlflow==0.3.0
To connect and start recording your AI experiments, settings, and models directly in your SageMaker AI MLflow application, use the code snippet below. Note: Replace the empty placeholder with the Amazon Resource Name (ARN) of your SageMaker AI MLflow application.
import mlflow
# Connect to your SageMaker MLflow application
mlflow_app_arn = ""
mlflow.set_tracking_uri(mlflow_app_arn)
# Define your experiment
mlflow.set_experiment("your_genai_experiment")
# Your existing code continues to work with improved features
# New features are automatically available
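Once connected, MLflow's tracing decorator can capture a span for each generative AI call. The sketch below is illustrative: build_prompt and the LLM invocation are hypothetical placeholders, and the decorated function is shown commented because it requires MLflow installed and the tracking URI set as shown above.

```python
# Sketch: tracing a generative AI call once the tracking URI is set.
# build_prompt and the LLM invocation are illustrative placeholders.

def build_prompt(question: str, context: str) -> str:
    # Assemble a simple RAG-style prompt; real apps might use a template library.
    return (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\nQuestion: {question}"
    )

# With MLflow installed, decorating your call sites captures a span per call:
# import mlflow
#
# @mlflow.trace
# def generate_answer(question: str, context: str) -> str:
#     prompt = build_prompt(question, context)
#     return invoke_your_llm(prompt)  # e.g., an Amazon Bedrock invocation
```

Each traced call then appears in the MLflow app's trace view, where the filtering and dashboards described earlier apply.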
Migration
If you have an MLflow tracking server or application hosted on SageMaker or elsewhere, you can migrate to a new 3.10 application by following the instructions in the Migrate MLflow tracking servers to Amazon SageMaker AI with serverless MLflow blog post.
Conclusion
The introduction of MLflow v3.10 to Amazon SageMaker AI MLflow applications represents a significant step forward in making enterprise AI development more efficient, observable, and manageable. Get started with Amazon SageMaker AI MLflow apps by visiting Amazon SageMaker AI Studio and creating your first MLflow app.
MLflow v3.10 is also supported in Amazon SageMaker AI and SageMaker Unified Studio serverless MLflow, offering additional workflow flexibility.
Share your feedback with us through AWS re:Post for SageMaker or your usual AWS Support contacts.
About the Authors
Sandeep Raveesh
Sandeep Raveesh is a Specialized GenAI GTM Solutions Architect at AWS. He works with customers on LLM training, inference, and observability, and focuses on product development, helping AWS create solutions to industry challenges in generative AI. You can connect with Sandeep on LinkedIn to learn more about generative AI solutions.
Dana Benson
Dana Benson is a Software Development Manager working in SageMaker AI ML Observability and LLM. Before joining AWS, Dana developed Smart Home behaviors for Alexa.
Ruidi Peng
Ruidi Peng is a software development engineer at AWS. He works on the Amazon SageMaker MLflow team, focusing on AI/ML and LLM observability. Ruidi is passionate about building scalable infrastructure that helps customers monitor and gain insights into their machine learning workloads. In his free time, he enjoys hiking and exploring the outdoors.