This article contains examples of tracking model development in Databricks. You can log and track models automatically with MLflow autologging or manually with the MLflow logging API.
The model development process is iterative, and it can be challenging to keep track of your work as you develop and optimize a model. In Databricks, MLflow tracking helps you record the model development process, including the parameter settings and combinations you have tried and how they affected the model's performance.
MLflow tracking uses experiments and runs to log and track your model development. A run is a single execution of model code. During an MLflow run, you can log model parameters and results. An experiment is a collection of related runs. Within an experiment, you can compare and filter runs to understand how your model performs and how its performance depends on the parameter settings, input data, and so on.
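As a minimal sketch of these concepts, the following code starts a run inside an experiment and logs a few parameters and a metric. The experiment path, parameter names, and metric value are placeholders for illustration.

```python
import mlflow

# Placeholder experiment path; runs started below are grouped under it
mlflow.set_experiment("/Shared/my-experiment")

with mlflow.start_run(run_name="baseline"):
    # Log the parameter settings used for this run
    mlflow.log_param("max_depth", 5)
    mlflow.log_param("n_estimators", 100)

    # ... train and evaluate the model here ...

    # Log the resulting metric so runs can be compared within the experiment
    mlflow.log_metric("rmse", 0.72)
```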
The notebooks in this article provide simple examples that can help you quickly get started using MLflow to track your model development. For more details on using MLflow tracking in Databricks, see Track machine learning training runs.
MLflow autologging can automatically log parameters, metrics, and models when you train with many popular ML frameworks. This is the easiest way to get started with MLflow tracking.
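For example, a minimal autologging sketch with scikit-learn might look like the following; the dataset and model choice are placeholders for illustration.

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Enable autologging for supported frameworks
mlflow.autolog()

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100, max_depth=5)
    # Calling fit() triggers autologging of the model's parameters,
    # training metrics, and the fitted model itself
    model.fit(X_train, y_train)
```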
This notebook illustrates how to use the MLflow logging API. Using the logging API gives you more control over the metrics logged and lets you log additional artifacts such as tables or plots.
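To illustrate the kind of control the logging API provides, the following sketch logs a parameter, a custom metric, a plot, and a small dictionary artifact; all names and values are placeholders.

```python
import mlflow
import matplotlib.pyplot as plt

with mlflow.start_run():
    # Log a parameter and a custom metric explicitly
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_accuracy", 0.91)

    # Log a plot as an artifact of the run
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [0.80, 0.88, 0.91])
    ax.set_xlabel("epoch")
    ax.set_ylabel("val_accuracy")
    mlflow.log_figure(fig, "val_accuracy.png")

    # Log structured data (for example, a config) as an artifact
    mlflow.log_dict({"classes": ["a", "b"], "threshold": 0.5}, "config.json")
```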
This tutorial notebook presents an end-to-end example of training a model in Databricks. It covers loading data, visualizing the data, running a parallel hyperparameter optimization, and using MLflow to review the results, register the model, and perform inference on new data with the registered model in a Spark UDF.
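The final inference step can be sketched as follows, assuming you run it in a Databricks notebook (where `spark` and `display` are available) and have already registered a model; the model name, version, DataFrame, and column names are placeholders for illustration.

```python
import mlflow.pyfunc
from pyspark.sql.functions import struct

# Load a registered model version as a Spark UDF.
# "my_model" and version 1 are placeholders for a model you have registered.
predict_udf = mlflow.pyfunc.spark_udf(spark, model_uri="models:/my_model/1")

# Apply the model to a Spark DataFrame of new data (placeholder names).
feature_cols = ["feature_1", "feature_2"]
scored_df = new_data_df.withColumn("prediction", predict_udf(struct(*feature_cols)))
display(scored_df)
```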