Log, load, register, and deploy MLflow Models

An MLflow Model is a standard format for packaging machine learning models for use in a variety of downstream tools, such as batch inference on Apache Spark or real-time serving through a REST API. The format defines a convention that lets you save a model in different flavors (python-function, pytorch, sklearn, and so on) that can be understood by different model serving and inference platforms.

Log and load models

API commands

To log a model to the MLflow tracking server, use mlflow.<model-type>.log_model(model, ...).

To load a previously logged model for inference or further development, use mlflow.<model-type>.load_model(modelpath), where modelpath is one of the following:

  • a run-relative path (such as runs:/{run_id}/{model-path})
  • a DBFS path
  • a registered model path (such as models:/{model_name}/{model_stage})

For a complete list of options for loading MLflow models, see Referencing Artifacts in the MLflow documentation.

For Python MLflow models, an additional option is to use mlflow.pyfunc.load_model() to load the model as a generic Python function.

Automatically generated code snippets in the MLflow UI

When you log a model in a Databricks notebook, Databricks automatically generates code snippets that you can copy and use to load and run the model. To view these code snippets:

  1. Navigate to the Runs screen for the run that generated the model. (See View notebook experiment for how to display the Runs screen.)
  2. Scroll to the Artifacts section.
  3. Click the name of the logged model. A panel opens to the right showing code you can use to load the logged model and make predictions on Spark or pandas DataFrames.
(Image: code snippets shown in the artifact panel)


For examples of logging models, see Track machine learning training runs. For an example of loading a logged model for inference, see the following example.

Register models in the Model Registry

You can register models in the MLflow Model Registry, a centralized model store that provides a UI and set of APIs to manage the full lifecycle of MLflow Models. For general information about the Model Registry, see MLflow Model Registry on Databricks. For instructions on how to use the Model Registry to manage models in Databricks, see Manage models.

To register a model using the API, use mlflow.register_model("runs:/{run_id}/{model-path}", "{registered-model-name}").

Save models to DBFS

To save a model locally (rather than logging it to the tracking server), use mlflow.<model-type>.save_model(model, modelpath), where modelpath must be a DBFS path. For example, if you use a DBFS location dbfs:/my_project_models to store your project work, you must use the model path /dbfs/my_project_models:

  # alpha, l1_ratio, and the fitted estimator lr come from the preceding training step
  modelpath = "/dbfs/my_project_models/model-%f-%f" % (alpha, l1_ratio)
  mlflow.sklearn.save_model(lr, modelpath)

Deploy models

To deploy a model to third-party serving frameworks, use mlflow.<deploy-type>.deploy(). See the following examples.

You can also use MLflow Model Serving on Databricks.