Notebook outputs and results

After you attach a notebook to a cluster and run one or more cells, your notebook has state and displays outputs. This section describes how to manage notebook state and outputs.

Clear notebook state and outputs

To clear the notebook state and outputs, select one of the Clear options at the bottom of the Run menu.

  • Clear all cell outputs: Clears the cell outputs. This is useful if you are sharing the notebook and do not want to include any results.

  • Clear state: Clears the notebook state, including function and variable definitions, data, and imported libraries.

  • Clear state and outputs: Clears both cell outputs and the notebook state.

  • Clear state and run all: Clears the notebook state and starts a new run.

Show results

When a cell is run, table results return a maximum of 10,000 rows or 2 MB, whichever is less.

By default, text results return a maximum of 50,000 characters. With Databricks Runtime 12.1 and above, you can increase this limit by setting the Spark configuration property spark.databricks.driver.maxReplOutputLength.
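As a sketch, you can set this property from a notebook cell with spark.conf.set; the value below (100,000 characters) is illustrative, not a recommendation:

```python
# Illustrative only: raise the text-output limit to 100,000 characters.
# Requires Databricks Runtime 12.1 or above. `spark` is the SparkSession
# that Databricks notebooks provide automatically.
spark.conf.set("spark.databricks.driver.maxReplOutputLength", "100000")
```

The setting can also be applied in the cluster's Spark configuration so it takes effect for every notebook attached to the cluster.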

Explore SQL cell results in Python notebooks natively using Python

You can load data using SQL and explore it using Python. In a Databricks Python notebook, table results from a SQL language cell are automatically made available as a Python DataFrame. For details, see Explore SQL cell results in Python notebooks.
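As a sketch of the workflow, the most recent SQL cell result is exposed to later Python cells as the implicit DataFrame `_sqldf`; the table name below is illustrative:

```python
# Cell 1 is a SQL cell (shown here as comments; it runs as SQL, not Python):
#   %sql
#   SELECT fare_amount, trip_distance FROM samples.nyctaxi.trips LIMIT 1000

# Cell 2 is a Python cell. The SQL result is available as the implicit
# PySpark DataFrame `_sqldf`, which you can explore like any other DataFrame.
summary = _sqldf.select("fare_amount", "trip_distance").describe()
display(summary)
```

Note that `_sqldf` is reassigned after each SQL cell runs, so save it to another variable if you need the result later.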

Download results

By default, downloading results is enabled. To toggle this setting, see Manage the ability to download results from notebooks.

You can download a cell result that contains tabular output to your local machine. Click the downward-pointing arrow next to the tab title. The menu options depend on the number of rows in the result and on the Databricks Runtime version. Downloaded results are saved on your local machine as a CSV file named export.csv.


View multiple outputs per cell

Python notebooks and %python cells in non-Python notebooks support multiple outputs per cell. For example, the output of the following code includes both the plot and the table:

import pandas as pd
from sklearn.datasets import load_iris

# Load the iris dataset into a pandas DataFrame.
data = load_iris()
iris = pd.DataFrame(data.data, columns=data.feature_names)

# Plotting produces one output; the DataFrame itself can render as another.
ax = iris.plot()

In Databricks Runtime 7.3 LTS, you must enable this feature by setting spark.databricks.workspace.multipleResults.enabled to true.
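A minimal sketch of enabling the setting from a notebook cell on Databricks Runtime 7.3 LTS:

```python
# Databricks Runtime 7.3 LTS only: enable multiple outputs per cell.
# `spark` is the SparkSession provided by the notebook environment.
spark.conf.set("spark.databricks.workspace.multipleResults.enabled", "true")
```

Alternatively, add the property to the cluster's Spark configuration so it applies to all attached notebooks.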

Commit notebook outputs in Databricks Repos

To learn about committing .ipynb notebook outputs, see Allow committing .ipynb notebook output.

  • The notebook must be an .ipynb file.

  • Workspace admin settings must allow notebook outputs to be committed.