Log, load, and deploy MLflow Models

An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools—for example, batch inference on Apache Spark or real-time serving through a REST API. The format defines a convention that lets you save a model in different flavors (python-function, pytorch, sklearn, and so on) that can be understood by different model serving and inference platforms.

  • To log a model to the MLflow tracking server, use mlflow.<model-type>.log_model(model, ...).
  • To load a model, use mlflow.<model-type>.load_model(modelpath).
  • To deploy a model, use the mlflow.deployments APIs—for example, mlflow.deployments.get_deploy_client(target_uri) returns a client whose create_deployment() method deploys a logged model to the target.

You can register models in the MLflow Model Registry, a centralized model store that provides a UI and set of APIs to manage the full lifecycle of MLflow Models.

See Track machine learning training runs for examples of logging models, and see the notebooks in this article for examples of loading and deploying models.

You can also save models locally.

  • To save a model locally, use mlflow.<model-type>.save_model(model, modelpath). modelpath must be a DBFS path expressed through the /dbfs FUSE mount. For example, if you use the DBFS location dbfs:/my_project_models to store your project work, you must use the model path /dbfs/my_project_models:

    modelpath = "/dbfs/my_project_models/model-%f-%f" % (alpha, l1_ratio)
    mlflow.sklearn.save_model(lr, modelpath)