View, manage, and analyze Foundation Model Training runs


This feature is in Public Preview. Reach out to your Databricks account team to enroll in the Public Preview.

This article describes how to view, manage, and analyze Foundation Model Training runs using APIs or the Databricks UI.

For information on creating runs, see Create a training run using the Foundation Model Training API and Create a training run using the Foundation Model Training UI.

Use Foundation Model Training APIs to view and manage training runs

The Foundation Model Training APIs provide the following functions for managing your training runs.

Get a run

Use the get() function to return a single run, either by name or by passing a run object you have launched.

from databricks.model_training import foundation_model as fm

# Retrieve a run by name (replace with the name of your run)
run = fm.get('<run-name>')


List runs

Use the list() function to see the runs you have launched. The following optional filters narrow the results:

- A list of runs to get. Defaults to selecting all runs.
- If shared runs are enabled for your workspace, the user who submitted the training run. Defaults to no user filter.
- before: A datetime or datetime string; returns only runs created before that time. Defaults to all runs.
- after: A datetime or datetime string; returns only runs created after that time. Defaults to all runs.

from databricks.model_training import foundation_model as fm

# List all runs you have launched
fm.list()

# Filtering example: up to 50 runs created before the given date
fm.list(before='01012023', limit=50)
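The before and after filters select on run creation time. Their semantics can be sketched locally in plain Python (the run records and the created_at field below are illustrative assumptions, not the SDK's actual return type):

```python
from datetime import datetime

def filter_runs(runs, before=None, after=None):
    """Filter run records by creation time.

    runs: list of dicts with a 'created_at' datetime (illustrative
    schema -- the SDK returns its own run objects). before/after may
    be datetime objects or ISO-format strings.
    """
    def to_dt(value):
        return datetime.fromisoformat(value) if isinstance(value, str) else value

    before, after = to_dt(before), to_dt(after)
    kept = []
    for run in runs:
        if before is not None and run["created_at"] >= before:
            continue  # created at or after the 'before' cutoff
        if after is not None and run["created_at"] <= after:
            continue  # created at or before the 'after' cutoff
        kept.append(run)
    return kept
```

With the SDK, the equivalent cut is applied for you when you pass before or after to fm.list().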

Cancel training runs

To cancel a run, use the cancel() function and pass a single run or a list of training runs.

from databricks.model_training import foundation_model as fm

run_to_cancel = '<name-of-run-to-cancel>'
fm.cancel(run_to_cancel)

Delete training runs

Use delete() to delete training runs by passing a single run or a list of runs.

from databricks.model_training import foundation_model as fm

# Delete a run by name (replace with the name of your run)
fm.delete('<name-of-run-to-delete>')

Review status of training runs

The following table lists the events created by a training run. Use the get_events() function anytime during your run to see your run’s progress.

| Example event message | Description |
| --- | --- |
| Run created. | Training run was created. If resources are available, the run starts. Otherwise, it enters the Pending state. |
| Run started. | Resources have been allocated, and the run has started. |
| Training data validated. | Validated that training data is correctly formatted. |
| Model data downloaded and initialized for base model meta-llama/Llama-2-7b-chat-hf. | Weights for the base model have been downloaded, and training is ready to begin. |
| [epoch=1/1][batch=50/56][ETA=5min] Train loss: 1.71 | Reports the current training batch, epoch, or token; the estimated time for training to finish (not including checkpoint upload time); and the train loss. This event is updated when each batch ends. If the run configuration specifies max_duration in tok units, progress is reported in tokens. |
| Training completed. | Training has finished. Checkpoint uploading begins. |
| Run completed. Final weights uploaded. | The checkpoint has been uploaded, and the run is complete. |
| Run canceled. | The run is canceled if fm.cancel() is called on it. |
| One or more train dataset samples has unknown keys. Please check the documentation for supported data formats. | The run failed. Check event_message for actionable details, or contact support. |
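Because a run can take hours, it is common to poll the event stream until a terminal event appears. A minimal sketch of such a loop, decoupled from the SDK so the shape is clear (the terminal messages follow the table above; the event dictionaries and the fetch_events callable are illustrative assumptions, not the documented schema):

```python
import time

# Terminal event messages, per the table above.
TERMINAL_MESSAGES = {
    "Run completed. Final weights uploaded.",
    "Run canceled.",
}

def wait_for_run(fetch_events, poll_seconds=30, max_polls=120):
    """Poll fetch_events() until a terminal event appears.

    fetch_events is a zero-argument callable returning a list of event
    dicts with a 'message' key (illustrative schema, not the SDK's).
    Returns the final event list, or raises TimeoutError if no terminal
    event is seen within max_polls polls.
    """
    for _ in range(max_polls):
        events = fetch_events()
        if any(e["message"] in TERMINAL_MESSAGES for e in events):
            return events
        time.sleep(poll_seconds)
    raise TimeoutError("run did not reach a terminal state")
```

With the real API, this might be invoked as wait_for_run(lambda: fm.get_events(run)), assuming get_events() returns the run's full event history.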

from databricks.model_training import foundation_model as fm

# Inspect the event log for a run (replace with your run or its name)
fm.get_events('<run-name>')

Use the UI to view and manage runs

To view runs in the UI:

  1. Click Experiments in the left nav bar to display the Experiments page.

  2. In the table, click the name of your experiment to display the experiment page. The experiment page lists all runs associated with the experiment.

    experiment page
  3. To display additional information or metrics in the table, click the plus sign and select the items to display from the menu:

    add metrics to chart
  4. Additional run information is available in the Chart tab:

    chart tab
  5. You can also click on the name of the run to display the run screen. This screen gives you access to additional details about the run.

    run page

Checkpoint folder

To access the checkpoint folder, click the Artifacts tab on the run screen. Open the experiment name, and then open the checkpoints folder.

checkpoint folder on artifacts tab

The epoch folders (named ep<n>-xxx) contain the weights at each checkpoint and can be used to start another training run from those weights.
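If you automate restarts, you may want to locate the most recent epoch checkpoint programmatically. A small sketch that picks the highest-numbered ep<n>-xxx folder (the folder-name parsing is an assumption based on the naming convention above):

```python
import re
from pathlib import Path

def latest_epoch_folder(checkpoints_dir):
    """Return the ep<n>-xxx folder with the highest epoch number, or None."""
    best = None
    best_epoch = -1
    for entry in Path(checkpoints_dir).iterdir():
        match = re.match(r"ep(\d+)-", entry.name)  # e.g. 'ep2-...' -> epoch 2
        if match and int(match.group(1)) > best_epoch:
            best_epoch = int(match.group(1))
            best = entry
    return best
```

The returned path can then be supplied as the starting weights for a follow-on training run.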

You can download the contents of the huggingface folder and use it as a Hugging Face model.