mlflow-model-registry-example(Python)


Overview

The MLflow Model Registry component is a centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of MLflow Models. It provides model lineage (which MLflow Experiment and Run produced the model), model versioning, stage transitions, annotations, and deployment management.

In this notebook, you use each of the MLflow Model Registry's components to develop and manage a production machine learning application. This notebook covers the following topics:

  • Track and log models with MLflow
  • Register models with the Model Registry
  • Describe models and make model version stage transitions
  • Integrate registered models with production applications
  • Search and discover models in the Model Registry
  • Archive and delete models

Requirements

  • Databricks Runtime for Machine Learning.

Machine learning application: Forecasting wind power

In this notebook, you use the MLflow Model Registry to build a machine learning application that forecasts the daily power output of a wind farm. Wind farm power output depends on weather conditions: generally, more energy is produced at higher wind speeds. Accordingly, the machine learning models used in the notebook predict power output based on weather forecasts with three features: wind direction, wind speed, and air temperature.

This notebook uses altered data from the National WIND Toolkit dataset provided by NREL, which is publicly available and cited as follows:

Draxl, C., B.M. Hodge, A. Clifton, and J. McCaa. 2015. Overview and Meteorological Validation of the Wind Integration National Dataset Toolkit (Technical Report, NREL/TP-5000-61740). Golden, CO: National Renewable Energy Laboratory.

Draxl, C., B.M. Hodge, A. Clifton, and J. McCaa. 2015. "The Wind Integration National Dataset (WIND) Toolkit." Applied Energy 151: 355366.

Lieberman-Cribbin, W., C. Draxl, and A. Clifton. 2014. Guide to Using the WIND Toolkit Validation Code (Technical Report, NREL/TP-5000-62595). Golden, CO: National Renewable Energy Laboratory.

King, J., A. Clifton, and B.M. Hodge. 2014. Validation of Power Output for the WIND Toolkit (Technical Report, NREL/TP-5D00-61714). Golden, CO: National Renewable Energy Laboratory.

Load the dataset

The following cells load a dataset containing weather data and power output information for a wind farm in the United States. The dataset contains wind direction, wind speed, and air temperature features sampled every eight hours (once at 00:00, once at 08:00, and once at 16:00), as well as daily aggregate power output (power), over several years.

Display a sample of the data for reference.
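
The following is a minimal sketch of this loading step, assuming the dataset is available as a CSV file indexed by date; the file path and the train/validation date ranges are illustrative placeholders:

import pandas as pd

# Hypothetical location of the wind farm dataset; substitute your own copy.
wind_farm_data = pd.read_csv("/path/to/windfarm_data.csv", index_col=0, parse_dates=True)

def get_training_data():
    # Use the earlier years for training; the exact cutoff dates are illustrative.
    training_data = wind_farm_data["2014-01-01":"2018-01-01"]
    X = training_data.drop(columns="power")
    y = training_data["power"]
    return X, y

def get_validation_data():
    validation_data = wind_farm_data["2018-01-01":"2019-01-01"]
    X = validation_data.drop(columns="power")
    y = validation_data["power"]
    return X, y

# Display a sample of the data for reference.
wind_farm_data.head()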

    Train a power forecasting model and track it with MLflow

    The following cells train a neural network to predict power output based on the weather features in the dataset. MLflow is used to track the model's hyperparameters, performance metrics, source code, and artifacts.

    Define a power forecasting model using TensorFlow Keras.

    Train the model and use MLflow to track its parameters, metrics, artifacts, and source code.
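
    A minimal sketch of this training cell, assuming the get_training_data() helper from the loading step above; the network architecture and hyperparameters are illustrative:

    import mlflow
    import mlflow.tensorflow
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.models import Sequential

    def train_keras_model(X, y):
        # Feed-forward network with a single hidden layer.
        model = Sequential([
            Dense(100, input_shape=(X.shape[1],), activation="relu"),
            Dense(1),
        ])
        model.compile(loss="mse", optimizer="adam")
        model.fit(X, y, epochs=100, batch_size=64, validation_split=0.2)
        return model

    with mlflow.start_run():
        # Autologging records hyperparameters, per-epoch metrics, and the model artifact.
        mlflow.tensorflow.autolog()
        X_train, y_train = get_training_data()
        train_keras_model(X_train, y_train)
        run_id = mlflow.active_run().info.run_id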

    /databricks/spark/python/pyspark/sql/context.py:77: DeprecationWarning: Deprecated in 3.0.0. Use SparkSession.builder.getOrCreate() instead.
    Epoch 1/100 19/19 [==============================] - 0s 9ms/step - loss: 10067302.0000 - val_loss: 7770970.0000
    Epoch 2/100 19/19 [==============================] - 0s 6ms/step - loss: 9562933.0000 - val_loss: 7257168.5000
    ...
    Epoch 99/100 19/19 [==============================] - 0s 2ms/step - loss: 1232648.1250 - val_loss: 979178.9375
    Epoch 100/100 19/19 [==============================] - 0s 2ms/step - loss: 1185373.6250 - val_loss: 942745.7500

    Register the model with the MLflow Model Registry API

    Now that a forecasting model has been trained and tracked with MLflow, the next step is to register it with the MLflow Model Registry. You can register and manage models using the MLflow UI or the MLflow API.

    The following cells use the API to register your forecasting model, add rich model descriptions, and perform stage transitions. See the documentation for the UI workflow.

      Create a new registered model using the API

      The following cells use the mlflow.register_model() function to create a new registered model whose name begins with the string power-forecasting-model. This also creates a new model version (for example, Version 1 of power-forecasting-model).
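
      A sketch of the registration call, assuming run_id is the ID of the training run above and that the Keras model was logged under the artifact path model:

      import mlflow

      model_name = "power-forecasting-model"

      # Build a runs:/ URI pointing at the logged model artifact and register it.
      model_uri = f"runs:/{run_id}/model"
      model_details = mlflow.register_model(model_uri=model_uri, name=model_name)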

      Successfully registered model 'power-forecasting-model'.
      2021/02/04 20:13:51 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation. Model name: power-forecasting-model, version 1
      Created version '1' of model 'power-forecasting-model'.

      After creating a model version, it may take a short period of time to become ready. Certain operations, such as model stage transitions, require the model to be in the READY state. Other operations, such as adding a description or fetching model details, can be performed before the model version is ready (for example, while it is in the PENDING_REGISTRATION state).

      The following cell uses the MlflowClient.get_model_version() function to wait until the model is ready.
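
      A sketch of the polling loop, assuming model_details is the ModelVersion returned by the registration step above:

      import time
      from mlflow.tracking import MlflowClient

      def wait_until_ready(model_name, model_version, timeout_s=300):
          client = MlflowClient()
          for _ in range(timeout_s):
              model_version_details = client.get_model_version(name=model_name, version=model_version)
              if model_version_details.status == "READY":
                  break
              time.sleep(1)
          print(f"Model status: {model_version_details.status}")

      wait_until_ready(model_details.name, model_details.version)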

      Model status: READY

      Add model descriptions

      You can add descriptions to registered models as well as model versions:

      • Model version descriptions are useful for detailing the unique attributes of a particular model version (such as the methodology and algorithm used to develop the model).
      • Registered model descriptions are useful for recording information that applies to multiple model versions (such as a general overview of the modeling problem and dataset).

      Add a high-level description to the registered model, including the machine learning problem and dataset.
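
      A sketch of this step using MlflowClient.update_registered_model(); the description text mirrors the output below:

      from mlflow.tracking import MlflowClient

      client = MlflowClient()
      client.update_registered_model(
          name=model_details.name,
          description="This model forecasts the power output of a wind farm based on weather data. "
                      "The weather data consists of three features: wind speed, wind direction, and air temperature.",
      )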

      Out[9]: <RegisteredModel: creation_timestamp=1612469631555, description=('This model forecasts the power output of a wind farm based on weather data. ' 'The weather data consists of three features: wind speed, wind direction, and ' 'air temperature.'), last_updated_timestamp=1612469638094, latest_versions=[], name='power-forecasting-model', tags={}>

      Add a model version description with information about the model architecture and machine learning framework.
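
      A sketch using MlflowClient.update_model_version(), with client and model_details as above:

      client.update_model_version(
          name=model_details.name,
          version=model_details.version,
          description="This model version was built using TensorFlow Keras. "
                      "It is a feed-forward neural network with one hidden layer.",
      )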

      Out[10]: <ModelVersion: creation_timestamp=1612469631744, current_stage='None', description=('This model version was built using TensorFlow Keras. It is a feed-forward ' 'neural network with one hidden layer.'), last_updated_timestamp=1612469638175, name='power-forecasting-model', run_id='41c0dd1acaf74ad8b35b76169fdebe41', run_link='', source='dbfs:/databricks/mlflow-tracking/2314812274044967/41c0dd1acaf74ad8b35b76169fdebe41/artifacts/model', status='READY', status_message='', tags={}, user_id='1486628617178110', version='1'>

      Perform a model stage transition

      The MLflow Model Registry defines several model stages: None, Staging, Production, and Archived. Each stage has a unique meaning. For example, Staging is meant for model testing, while Production is for models that have completed the testing or review processes and have been deployed to applications.

      Users with appropriate permissions can transition models between stages. In private preview, any user can transition a model to any stage. In the near future, administrators in your organization will be able to control these permissions on a per-user and per-model basis.

      If you have permission to transition a model to a particular stage, you can make the transition directly by using the MlflowClient.transition_model_version_stage() function. If you do not have permission, you can request a stage transition using the REST API; for example:

      %sh curl -i -X POST \
        -H "X-Databricks-Org-Id: <YOUR_ORG_ID>" \
        -H "Authorization: Bearer <YOUR_ACCESS_TOKEN>" \
        https://<YOUR_DATABRICKS_WORKSPACE_URL>/api/2.0/preview/mlflow/transition-requests/create \
        -d '{"comment": "Please move this model into production!", "model_version": {"version": 1, "registered_model": {"name": "power-forecasting-model"}}, "stage": "Production"}'
      

      Now that you've learned about stage transitions, transition the model to the Production stage.
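
      A sketch of the direct transition using MlflowClient.transition_model_version_stage():

      client.transition_model_version_stage(
          name=model_details.name,
          version=model_details.version,
          stage="Production",
      )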

      Out[11]: <ModelVersion: creation_timestamp=1612469631744, current_stage='Production', description=('This model version was built using TensorFlow Keras. It is a feed-forward ' 'neural network with one hidden layer.'), last_updated_timestamp=1612469638268, name='', run_id='41c0dd1acaf74ad8b35b76169fdebe41', run_link='', source='dbfs:/databricks/mlflow-tracking/2314812274044967/41c0dd1acaf74ad8b35b76169fdebe41/artifacts/model', status='READY', status_message='', tags={}, user_id='1486628617178110', version='1'>

      Use the MlflowClient.get_model_version() function to fetch the model's current stage.

      The current model stage is: 'Production'

      The MLflow Model Registry allows multiple model versions to share the same stage. When referencing a model by stage, the Model Registry will use the latest model version (the model version with the largest version ID). The MlflowClient.get_latest_versions() function fetches the latest model version for a given stage or set of stages. The following cell uses this function to print the latest version of the power forecasting model that is in the Production stage.
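
      A sketch of that lookup:

      model_name = model_details.name
      latest_versions = client.get_latest_versions(model_name, stages=["Production"])
      latest_production_version = latest_versions[0].version
      print(f"The latest production version of the model '{model_name}' is '{latest_production_version}'.")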

      The latest production version of the model 'power-forecasting-model' is '1'.

      Integrate the model with the forecasting application

      Now that you have trained and registered a power forecasting model with the MLflow Model Registry, the next step is to integrate it with an application. This application fetches a weather forecast for the wind farm over the next five days and uses the model to produce power forecasts. For example purposes, the application consists of a simple forecast_power() function (defined below) that is executed within this notebook. In practice, you may want to execute this function as a recurring batch inference job using the Databricks Jobs service.
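
      A minimal sketch of such an application function; get_weather_forecast() is a hypothetical helper that returns a DataFrame of the three weather features for the forecast horizon:

      import mlflow.pyfunc

      def forecast_power(model_name, model_stage):
          # Load the latest model version in the given stage as a generic Python function.
          model_uri = f"models:/{model_name}/{model_stage}"
          model = mlflow.pyfunc.load_model(model_uri)

          # get_weather_forecast() is a hypothetical helper returning wind direction,
          # wind speed, and air temperature for the next five days.
          weather_data = get_weather_forecast()
          return model.predict(weather_data)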

      The following section demonstrates how to load model versions from the MLflow Model Registry for use in applications. The Forecast power output with the production model section uses the Production model to forecast power output for the next five days.

      Load versions of the registered model

      The MLflow Models component defines functions for loading models from several machine learning frameworks. For example, mlflow.tensorflow.load_model() is used to load TensorFlow Keras models that were saved in MLflow format, and mlflow.sklearn.load_model() is used to load scikit-learn models that were saved in MLflow format.

      These functions can load models from the MLflow Model Registry.

      You can load a model by specifying its name (for example, power-forecasting-model) and version number (in this case, 1). The following cell uses the mlflow.pyfunc.load_model() API to load Version 1 of the registered power forecasting model as a generic Python function.
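
      A sketch of loading a specific version:

      import mlflow.pyfunc

      model_version_uri = f"models:/{model_name}/1"
      print(f"Loading registered model version from URI: '{model_version_uri}'")
      model_version_1 = mlflow.pyfunc.load_model(model_version_uri)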

      Loading registered model version from URI: 'models:/power-forecasting-model/1'
      WARNING:tensorflow:From /databricks/python/lib/python3.7/site-packages/mlflow/keras.py:461: set_learning_phase (from tensorflow.python.keras.backend) is deprecated and will be removed after 2020-10-11. Instructions for updating: Simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.

      You can also load a specific model stage. The following cell loads the Production stage of the power forecasting model.
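
      A sketch of loading by stage:

      model_production_uri = f"models:/{model_name}/production"
      print(f"Loading registered model version from URI: '{model_production_uri}'")
      model_production = mlflow.pyfunc.load_model(model_production_uri)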

      Loading registered model version from URI: 'models:/power-forecasting-model/production'

      Forecast power output with the production model

      In this section, the production model is used to evaluate weather forecast data for the wind farm. The forecast_power() application loads the latest version of the forecasting model from the specified stage and uses it to forecast power production over the next five days.
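
      With the forecast_power() sketch above, this step reduces to a single call:

      power_forecast = forecast_power(model_name, "production")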

        Create and deploy a new model version

        The MLflow Model Registry enables you to create multiple model versions corresponding to a single registered model. By performing stage transitions, you can seamlessly integrate new model versions into your staging or production environments. Model versions can be trained in different machine learning frameworks (such as scikit-learn and TensorFlow); MLflow's python_function (pyfunc) model flavor provides a consistent inference API across machine learning frameworks, ensuring that the same application code continues to work when a new model version is introduced.

        The following sections create a new version of the power forecasting model using scikit-learn, perform model testing in Staging, and update the production application by transitioning the new model version to Production.

        Create a new model version

        Classical machine learning techniques are also effective for power forecasting. The following cell trains a random forest model using scikit-learn and registers it with the MLflow Model Registry via the mlflow.sklearn.log_model() function.
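
        A sketch of this cell, assuming the get_training_data() and get_validation_data() helpers from the loading step; the hyperparameters are illustrative:

        import mlflow
        import mlflow.sklearn
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error

        with mlflow.start_run():
            n_estimators = 100
            model = RandomForestRegressor(n_estimators=n_estimators)

            X_train, y_train = get_training_data()
            model.fit(X_train, y_train)

            X_val, y_val = get_validation_data()
            val_mse = mean_squared_error(y_val, model.predict(X_val))
            mlflow.log_param("n_estimators", n_estimators)
            mlflow.log_metric("mse", val_mse)
            print(f"Validation MSE: {round(val_mse)}")

            # Passing registered_model_name creates a new version of the existing registered model.
            mlflow.sklearn.log_model(
                sk_model=model,
                artifact_path="sklearn-model",
                registered_model_name=model_name,
            )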

        /databricks/spark/python/pyspark/sql/context.py:77: DeprecationWarning: Deprecated in 3.0.0. Use SparkSession.builder.getOrCreate() instead.
        Validation MSE: 44960
        Registered model 'power-forecasting-model' already exists. Creating a new version of this model...
        2021/02/04 20:14:07 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation. Model name: power-forecasting-model, version 2
        Created version '2' of model 'power-forecasting-model'.

        Fetch the new model version ID using MLflow Model Registry Search

        The MlflowClient.search_model_versions() function searches for model versions by model name, MLflow run ID, or artifact source location. All model versions satisfying a particular filter query are returned.

        The following cell uses this search function to fetch the version ID of the new model. It searches for the maximum value of the version ID (that is, the most recent version).
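
        A sketch of the search:

        from mlflow.tracking import MlflowClient

        client = MlflowClient()
        model_version_infos = client.search_model_versions(f"name = '{model_name}'")
        new_model_version = max(int(v.version) for v in model_version_infos)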

        Wait for the new model version to become ready.

          Model status: READY

          Add a description to the new model version

          Out[21]: <ModelVersion: creation_timestamp=1612469647314, current_stage='None', description=('This model version is a random forest containing 100 decision trees that was ' 'trained in scikit-learn.'), last_updated_timestamp=1612469656897, name='power-forecasting-model', run_id='d45b1ac942e34a0ca59d408309605840', run_link='', source='dbfs:/databricks/mlflow-tracking/2314812274044967/d45b1ac942e34a0ca59d408309605840/artifacts/sklearn-model', status='READY', status_message='', tags={}, user_id='1486628617178110', version='2'>

          Transition the new model version to Staging

          Before deploying a model to a production application, it is often best practice to test it in a staging environment. The following cells transition the new model version to Staging and evaluate its performance.
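
          A sketch of the Staging transition:

          client.transition_model_version_stage(
              name=model_name,
              version=new_model_version,
              stage="Staging",
          )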

          Out[22]: <ModelVersion: creation_timestamp=1612469647314, current_stage='Staging', description=('This model version is a random forest containing 100 decision trees that was ' 'trained in scikit-learn.'), last_updated_timestamp=1612469656979, name='', run_id='d45b1ac942e34a0ca59d408309605840', run_link='', source='dbfs:/databricks/mlflow-tracking/2314812274044967/d45b1ac942e34a0ca59d408309605840/artifacts/sklearn-model', status='READY', status_message='', tags={}, user_id='1486628617178110', version='2'>

          Evaluate the new model's forecasting performance in Staging
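
          Using the forecast_power() sketch from earlier, evaluating the new version amounts to pointing the same application code at the Staging stage:

          staging_forecast = forecast_power(model_name, "staging")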

            Transition the new model version to Production

            After verifying that the new model version performs well in staging, the following cells transition the model version to Production and use the exact same application code from the Forecast power output with the production model section to produce a power forecast.

            There are now two model versions of the forecasting model in the Production stage: the model version trained in TensorFlow Keras and the version trained in scikit-learn.

            When referencing a model by stage, the MLflow Model Registry automatically uses the latest production version. This enables you to update your production models without changing any application code.

            See the documentation for how to transition the model to Production using the UI.

            Transition the new model version to Production using the API
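
            A sketch of the Production transition; by default the earlier Production version stays in Production, which is why two versions share the stage (pass archive_existing_versions=True to archive it instead):

            client.transition_model_version_stage(
                name=model_name,
                version=new_model_version,
                stage="Production",
            )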

            Out[24]: <ModelVersion: creation_timestamp=1612469647314, current_stage='Production', description=('This model version is a random forest containing 100 decision trees that was ' 'trained in scikit-learn.'), last_updated_timestamp=1612469659618, name='', run_id='d45b1ac942e34a0ca59d408309605840', run_link='', source='dbfs:/databricks/mlflow-tracking/2314812274044967/d45b1ac942e34a0ca59d408309605840/artifacts/sklearn-model', status='READY', status_message='', tags={}, user_id='1486628617178110', version='2'>

              Archive and delete models

              When a model version is no longer being used, you can archive it or delete it. You can also delete an entire registered model; this removes all of its associated model versions.

              Archive Version 1 of the power forecasting model

              Archive Version 1 of the power forecasting model because it is no longer being used. You can archive models in the MLflow Model Registry UI or via the MLflow API. See the documentation for the UI workflow.

              Archive Version 1 using the MLflow API

              The following cell uses the MlflowClient.transition_model_version_stage() function to archive Version 1 of the power forecasting model.
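
              A sketch of the archival call:

              client.transition_model_version_stage(
                  name=model_name,
                  version=1,
                  stage="Archived",
              )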

              Out[26]: <ModelVersion: creation_timestamp=1612469631744, current_stage='Archived', description=('This model version was built using TensorFlow Keras. It is a feed-forward ' 'neural network with one hidden layer.'), last_updated_timestamp=1612469661931, name='', run_id='41c0dd1acaf74ad8b35b76169fdebe41', run_link='', source='dbfs:/databricks/mlflow-tracking/2314812274044967/41c0dd1acaf74ad8b35b76169fdebe41/artifacts/model', status='READY', status_message='', tags={}, user_id='1486628617178110', version='1'>

              Delete Version 1 of the power forecasting model

              You can also use the MLflow UI or MLflow API to delete model versions. Model version deletion is permanent and cannot be undone.

              The following cells provide a reference for deleting Version 1 of the power forecasting model using the MLflow API. See the documentation for how to delete a model version using the UI.

              Delete Version 1 using the MLflow API

              The following cell permanently deletes Version 1 of the power forecasting model.
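
              A sketch of the deletion call using MlflowClient.delete_model_version():

              client.delete_model_version(
                  name=model_name,
                  version=1,
              )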

              Delete the power forecasting model

              If you want to delete an entire registered model, including all of its model versions, you can use the MlflowClient.delete_registered_model() function to do so. This action cannot be undone. You must first transition all model version stages to None or Archived.

              Warning: The following cell permanently deletes the power forecasting model, including all of its versions.
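
              A sketch of the full deletion, assuming all versions have already been moved out of Staging and Production:

              client.delete_registered_model(name=model_name)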