Notebook-scoped libraries let you create, modify, save, reuse, and share custom Python environments that are specific to a notebook. When you install a notebook-scoped library, only the current notebook and any jobs associated with that notebook have access to that library. Other notebooks attached to the same cluster are not affected.
Notebook-scoped libraries do not persist across sessions. You must reinstall notebook-scoped libraries at the beginning of each session, or whenever the notebook is detached from a cluster.
There are two methods for installing notebook-scoped libraries:
- Run the `%pip` or `%conda` magic command in a notebook. The `%pip` command is supported on Databricks Runtime 7.1 (Unsupported) and above. Both `%pip` and `%conda` are supported on Databricks Runtime 6.4 ML and above and Databricks Runtime 6.4 for Genomics and above. Databricks recommends using this approach for new workloads. This article describes how to use these magic commands.
- Invoke Databricks library utilities. Library utilities are supported only on Databricks Runtime, not Databricks Runtime ML or Databricks Runtime for Genomics. See Library utilities.
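For comparison, here is a minimal sketch of the library utilities approach on Databricks Runtime (the package name and version are illustrative):

```python
# Install a PyPI package scoped to the current notebook session,
# then restart the Python process so the new version is picked up.
dbutils.library.installPyPI("matplotlib", version="3.3.4")
dbutils.library.restartPython()
```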
Notebook-scoped libraries using magic commands are enabled by default in Databricks Runtime 7.1 and above, Databricks Runtime 7.1 ML and above, and Databricks Runtime 7.1 for Genomics and above.
They are also available using a configuration setting in Databricks Runtime 6.4 ML to 7.0 ML and Databricks Runtime 6.4 for Genomics to Databricks Runtime 7.0 for Genomics. Set the Spark configuration `spark.databricks.conda.condaMagic.enabled` to `true` when you create the cluster.
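In the cluster's Spark config field, the setting is typically entered as a key and value separated by a space, as sketched here:

```
spark.databricks.conda.condaMagic.enabled true
```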
On a High Concurrency cluster running Databricks Runtime 7.4 ML or Databricks Runtime 7.4 for Genomics or below, notebook-scoped libraries are not compatible with table access control or credential passthrough. An alternative is to use Library utilities on a Databricks Runtime cluster, or to upgrade your cluster to Databricks Runtime 7.5 ML or Databricks Runtime 7.5 for Genomics or above.
Using notebook-scoped libraries might result in more traffic to the driver node as it works to keep the environment consistent across executor nodes. When you use a cluster with 10 or more nodes, Databricks recommends these specs as a minimum requirement for the driver node:
- For a 100 node CPU cluster, use i3.8xlarge.
- For a 10 node GPU cluster, use p2.xlarge.
For larger clusters, use a larger driver node.
You can use `%pip` magic commands to create and manage notebook-scoped libraries on Databricks Runtime. On Databricks Runtime ML and Databricks Runtime for Genomics, you can also use `%conda` magic commands. Databricks recommends using `pip` to install libraries, unless the library you want to install recommends using `conda`. For more information, see Understanding conda and pip.
- You should place all `%pip` and `%conda` commands at the beginning of the notebook. The notebook state is reset after any `%pip` or `%conda` command that modifies the environment. If you create Python methods or variables in a notebook and then use `%pip` or `%conda` commands in a later cell, the methods or variables are lost.
- If you must use both `%pip` and `%conda` commands in a notebook, see Interactions between pip and conda commands.
- In Databricks Runtime ML, uninstalling or modifying core Python packages (for example, IPython or conda) with `%pip` or `%conda` may cause some features to stop working as expected. If you experience such problems, reset the environment by detaching and re-attaching the notebook or by restarting the cluster.
The `%pip` command is equivalent to the `pip` command and supports the same API. The following sections show examples of how you can use `%pip` commands to manage your environment. For more information on installing Python packages with `pip`, see the pip install documentation and related pages.
In this section:
- Install a library with `%pip`
- Install a wheel package with `%pip`
- Uninstall a library with `%pip`
- Install a library from a version control system with `%pip`
- Install a private package with credentials managed by Databricks secrets with `%pip`
- Install a package from DBFS with `%pip`
- Save libraries in a requirements file
- Use a requirements file to install libraries
Install a library with `%pip`:

```python
%pip install matplotlib
```
Install a wheel package with `%pip`:

```python
%pip install /path/to/my_package.whl
```
You cannot uninstall a library that is included in Databricks Runtime or a library that has been installed as a cluster library. If you have installed a different library version than the one included in Databricks Runtime or the one installed on the cluster, you can use `%pip uninstall` to revert the library to the default version in Databricks Runtime or the version installed on the cluster, but you cannot use a `%pip` command to uninstall the version of a library included in Databricks Runtime or installed on the cluster.
Uninstall a library with `%pip`:

```python
%pip uninstall -y matplotlib
```

The `-y` option is required.
Install a library from a version control system with `%pip`:

```python
%pip install git+https://github.com/databricks/databricks-cli
```
You can add parameters to the URL to specify things like the version or git subdirectory. Refer to the VCS support for more information and for examples using other version control systems.
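For example, pip's VCS URL syntax accepts a branch, tag, or commit after `@`, and a `#subdirectory=` fragment for packages that live inside a larger repository. The ref and repository layout below are illustrative:

```python
# Install from a specific branch or tag (the ref is illustrative)
%pip install git+https://github.com/databricks/databricks-cli@main

# Install a package located in a subdirectory of a repository (hypothetical layout)
%pip install git+https://github.com/example/monorepo#subdirectory=libs/mypackage
```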
Pip supports installing packages from private sources with basic authentication, including private version control systems and private package repositories, such as Nexus and Artifactory. Secret management is available via the Databricks Secrets API, which allows you to store authentication tokens and passwords. Use the DBUtils API to access secrets from your notebook. Note that you can use `$variables` in magic commands.
To install a package from a private repository, specify the repository URL with the `--index-url` option to `%pip install` or add it to the `pip` config file at `~/.pip/pip.conf`.

```python
token = dbutils.secrets.get(scope="scope", key="key")
%pip install --index-url https://user:$token@example.org/path/to/repo <package>==<version>
```
Similarly, you can use secret management with magic commands to install private packages from version control systems.
```python
token = dbutils.secrets.get(scope="scope", key="key")
%pip install git+https://user:$token@example.com/path/to/repo
```
You can use `%pip` to install a private package that has been saved on DBFS.
When you upload a file to DBFS, it automatically renames the file, replacing spaces, periods, and hyphens with underscores. pip requires that the name of the wheel file use periods in the version (for example, 0.1.0) and hyphens instead of spaces or underscores. To install the package with a `%pip` command, you must rename the file to meet these requirements.
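For example, if DBFS renamed an uploaded wheel to use underscores, you could restore a pip-compatible name with `dbutils.fs.mv` before installing. Both file names here are hypothetical:

```python
# Restore periods in the version number and hyphens between name components
dbutils.fs.mv(
    "dbfs:/mypackage_0_0_1_py3_none_any.whl",
    "dbfs:/mypackage-0.0.1-py3-none-any.whl",
)
```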
```python
%pip install /dbfs/mypackage-0.0.1-py3-none-any.whl
```
Save libraries in a requirements file:

```python
%pip freeze > /dbfs/requirements.txt
```
Any subdirectories in the file path must already exist. If you run `%pip freeze > /dbfs/<new-directory>/requirements.txt`, the command fails if the directory `/dbfs/<new-directory>` does not already exist.
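To use a requirements file to install libraries, pass it to `%pip install` with the standard `-r` option. This assumes the file saved by the freeze example above:

```python
%pip install -r /dbfs/requirements.txt
```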
The `%conda` command is equivalent to the `conda` command and supports the same API with some restrictions noted below. The following sections contain examples of how to use `%conda` commands to manage your environment. For more information on installing Python packages with `conda`, see the conda install documentation.
`%conda` magic commands are not available on Databricks Runtime. They are only available on Databricks Runtime ML and Databricks Runtime for Genomics. Certain conda commands, such as `activate` and `env create`, are not supported when used with `%conda`.
In this section:
- Install a library with `%conda`
- Uninstall a library with `%conda`
- List the Python environment of a notebook
Install a library with `%conda`:

```python
%conda install matplotlib
```
Uninstall a library with `%conda`:

```python
%conda uninstall matplotlib
```
To show the Python environment associated with a notebook, use `%conda list`:
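```python
%conda list
```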
To avoid conflicts, follow these guidelines when using `pip` or `conda` to install Python packages and libraries.
- Libraries installed using the API or using the cluster UI are installed using `pip`. If any libraries have been installed from the API or the cluster UI, you should use only `%pip` commands when installing notebook-scoped libraries.
- If you use notebook-scoped libraries on a cluster, init scripts run on that cluster can use either `conda` or `pip` commands to install libraries. However, if the init script includes `pip` commands, use only `%pip` commands in notebooks (not `%conda` commands).
- It's best to use either `pip` commands exclusively or `conda` commands exclusively. If you must install some packages using `conda` and some using `pip`, run the `conda` commands first, and then run the `pip` commands. For more information, see Using Pip in a Conda Environment.
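As a sketch of that ordering in a notebook (each magic runs in its own cell; the package names are illustrative):

```python
# Cell 1: install conda packages first
%conda install numpy

# Cell 2: install pip packages afterwards
%pip install requests
```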
- How do libraries installed from the cluster UI/API interact with notebook-scoped libraries?
- How do libraries installed using an init script interact with notebook-scoped libraries?
- Can I use `%pip` and `%conda` commands in job notebooks?
- Can I use `%pip` and `%conda` commands in R or Scala notebooks?
- Can I use `%sh pip`, `!pip`, or `pip`?
- Can I update R packages using `%conda` commands?
Libraries installed from the cluster UI or API are available to all notebooks on the cluster. These libraries are installed using `pip`; therefore, if libraries are installed using the cluster UI, use only `%pip` commands in notebooks.
Libraries installed using an init script are available to all notebooks on the cluster.
If you use notebook-scoped libraries on a cluster running Databricks Runtime ML or Databricks Runtime for Genomics, init scripts run on the cluster can use either `conda` or `pip` commands to install libraries. However, if the init script includes `pip` commands, then use only `%pip` commands in notebooks.
For example, this notebook code snippet generates a script that installs fast.ai packages on all the cluster nodes.
```python
dbutils.fs.put("dbfs:/home/myScripts/fast.ai", "conda install -c pytorch -c fastai fastai -y", True)
```
Databricks does not recommend using `%sh pip` because it is not compatible with `%pip` usage.
- On Databricks Runtime 7.0 ML and below as well as Databricks Runtime 7.0 for Genomics and below, if a registered UDF depends on Python packages installed using `%pip` or `%conda`, it won't work in `%sql` cells. Use `spark.sql` in a Python command shell instead (see the sketch after this list).
- On Databricks Runtime 7.2 ML and below as well as Databricks Runtime 7.2 for Genomics and below, when you update the notebook environment using `%conda`, the new environment is not activated on worker Python processes. This can cause issues if a PySpark UDF function calls a third-party function that uses resources installed inside the Conda environment.
- When you use `%conda env update` to update a notebook environment, the installation order of packages is not guaranteed. This can cause problems for the `horovod` package, which requires that `tensorflow` and `torch` be installed before `horovod` in order to use `horovod.tensorflow` or `horovod.torch` respectively. If this happens, uninstall the `horovod` package and reinstall it after ensuring that the dependencies are installed.
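For the `%sql` limitation in the first item above, here is a minimal sketch of querying a registered UDF through `spark.sql` from a Python cell; the UDF and table names are hypothetical:

```python
from pyspark.sql.types import StringType

# In practice the UDF body would call the package installed with %pip or %conda
spark.udf.register("my_udf", lambda s: s.upper(), StringType())

# Query the UDF from Python instead of a %sql cell
display(spark.sql("SELECT my_udf(name) AS result FROM my_table"))
```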