
MLflow 3.0 traditional ML workflow (Beta)


This feature is in Beta.

Example notebook

The example notebook runs a model training job, tracked as an MLflow Run, that produces a trained model, tracked as an MLflow Logged Model.

MLflow 3.0 traditional ML model notebook

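The core pattern in the notebook looks roughly like the following. This is a minimal sketch, not the notebook's actual code: it uses synthetic scikit-learn data and assumes MLflow 3's `name=` argument to `log_model` and `model_id=` argument to `log_metric` for linking evaluation metrics to the Logged Model.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the notebook's dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

params = {"alpha": 0.5, "l1_ratio": 0.5}

# Training run: tracked as an MLflow Run that produces a Logged Model.
with mlflow.start_run(run_name="train"):
    mlflow.log_params(params)
    model = ElasticNet(**params).fit(X_train, y_train)
    mlflow.log_metric(
        "train_rmse", mean_squared_error(y_train, model.predict(X_train)) ** 0.5
    )
    model_info = mlflow.sklearn.log_model(model, name="elasticnet")

# Evaluation run: metrics logged with model_id are linked to the same Logged Model.
with mlflow.start_run(run_name="test"):
    mlflow.log_metric(
        "test_rmse",
        mean_squared_error(y_test, model.predict(X_test)) ** 0.5,
        model_id=model_info.model_id,
    )
```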

Explore model parameters and performance using the MLflow UI

To explore the model in the MLflow UI:

  1. Click Experiments in the workspace sidebar.

  2. Find your experiment in the experiments list. You can select the Only my experiments checkbox or use the Filter experiments search box to filter the list of experiments.

  3. Click the name of your experiment. The Runs page opens. The experiment contains two MLflow runs, one to train the model and one to test the model.

    MLflow 3 runs tab showing training and test runs.

  4. Click the Models tab. This tab shows the logged model (elasticnet), including all of its parameters and metadata, as well as the metrics linked from the training and evaluation runs.

    MLflow 3 models tab showing trained model with metrics and parameters.

  5. Click the model name to display the model page.

    MLflow 3 model details page.

  6. The notebook registers the model to Unity Catalog (a minimal registration sketch follows this list). As a result, all model parameters and performance data are available on the model version page in Catalog Explorer.

    Model version page in Catalog Explorer.
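The registration step mentioned in step 6 can be sketched as follows. The three-level Unity Catalog name is hypothetical, and `model_info` refers to the object returned by `mlflow.sklearn.log_model` in the training sketch above.

```python
import mlflow

# Point the MLflow model registry at Unity Catalog.
mlflow.set_registry_uri("databricks-uc")

# Hypothetical three-level Unity Catalog name: <catalog>.<schema>.<model>.
uc_model_name = "main.default.elasticnet_example"

# Register the Logged Model produced by the training run.
model_version = mlflow.register_model(model_info.model_uri, name=uc_model_name)
print(f"Registered version {model_version.version} of {uc_model_name}")
```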

What's the difference between the Models tab on the MLflow experiment page and the model version page in Catalog Explorer?

The Models tab of the experiment page and the model version page in Catalog Explorer show similar information about the model, but the two views play different roles in the model development and deployment lifecycle.

  • The Models tab of the experiment page presents the results of logged models from an experiment on a single page. The Charts tab on this page provides visualizations to help you compare models and select the model versions to register to Unity Catalog for possible deployment.
  • In Catalog Explorer, the model version page provides an overview of all model performance and evaluation results. This page shows model parameters, metrics, and traces across all linked environments including different workspaces, endpoints, and experiments. This is useful for monitoring and deployment, and works especially well with deployment jobs. The evaluation task in a deployment job creates additional metrics that appear on this page. The approver for the job can then review this page to assess whether to approve the model version for deployment.
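The information shown on the model version page can also be retrieved programmatically through the MLflow client. A minimal sketch, assuming the hypothetical Unity Catalog model name used in the registration sketch above:

```python
import mlflow
from mlflow import MlflowClient

mlflow.set_registry_uri("databricks-uc")
client = MlflowClient()

# Hypothetical Unity Catalog model name registered earlier.
uc_model_name = "main.default.elasticnet_example"

# List all versions of the registered model.
for mv in client.search_model_versions(f"name = '{uc_model_name}'"):
    print(mv.version, mv.status, mv.source)

# Inspect a single version, e.g. before approving it for deployment.
version = client.get_model_version(uc_model_name, "1")
print(version.description)
```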