Libraries

To make third-party or custom code available to notebooks and jobs running on your clusters, you can install a library. Libraries can be written in Python, Java, Scala, and R. You can upload Java, Scala, and Python libraries and point to external packages in PyPI, Maven, and CRAN repositories.

This article focuses on performing library tasks in the workspace UI. You can also manage libraries using the Libraries CLI or the Libraries API 2.0.
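
For example, installing a cluster library through the Libraries API 2.0 is a single POST request. The sketch below assumes a personal access token in the DATABRICKS_TOKEN environment variable; the workspace URL, cluster ID, and package pin are placeholders.

```python
# Minimal sketch: install a PyPI package as a cluster library via the
# Libraries API 2.0. The workspace URL and cluster ID are placeholders.
import os
import requests

host = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.post(
    f"{host}/api/2.0/libraries/install",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "cluster_id": "1234-567890-abcde123",  # placeholder cluster ID
        "libraries": [{"pypi": {"package": "simplejson==3.18.0"}}],
    },
)
resp.raise_for_status()
```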

Tip

Databricks includes many common libraries in Databricks Runtime. To see which libraries are included in Databricks Runtime, look at the System Environment subsection of the Databricks Runtime release notes for your Databricks Runtime version.
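
If you would rather check the running cluster than the release notes, one quick way is to query the installed distributions from a Python notebook cell; the package names below are just examples.

```python
# Check whether specific packages ship with the attached runtime by querying
# the installed distributions. Package names here are illustrative.
from importlib.metadata import PackageNotFoundError, version

for pkg in ["numpy", "pandas", "matplotlib"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")
```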

You can install libraries in three modes: workspace, cluster-installed, and notebook-scoped.

  • Workspace libraries serve as a local repository from which you create cluster-installed libraries. A workspace library might be custom code created by your organization, or might be a particular version of an open-source library that your organization has standardized on.
  • Cluster libraries can be used by all notebooks running on a cluster. You can install a cluster library directly from a public repository such as PyPI or Maven, or create one from a previously installed workspace library.
  • Notebook-scoped libraries, available for Python and R, allow you to install libraries and create an environment scoped to a notebook session. These libraries do not affect other notebooks running on the same cluster. Notebook-scoped libraries do not persist and must be reinstalled for each session. Use notebook-scoped libraries when you need a custom environment for a specific notebook, as in the sketch after this list.
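
As a quick illustration of the notebook-scoped flavor, the cell below installs a library visible only to the current notebook session; the package and version pin are illustrative.

```python
# Run in a Python notebook cell: installs a notebook-scoped library that other
# notebooks on the same cluster do not see. Package and version are examples.
%pip install matplotlib==3.5.1
```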

Python environment management

The following table provides an overview of options you can use to install Python libraries in Databricks.

Note

  • Notebook-scoped libraries using magic commands are enabled by default in Databricks Runtime 7.1 and above, Databricks Runtime 7.1 ML and above, and Databricks Runtime 7.1 for Genomics and above. They are also available via a configuration setting in Databricks Runtime 6.4 ML through 7.0 ML and in Databricks Runtime 6.4 for Genomics through 7.0 for Genomics. See Requirements for details.
  • Notebook-scoped libraries with the library utility are available in Databricks Runtime only. They are not available on Databricks Runtime ML or Databricks Runtime for Genomics.

| Python package source | Notebook-scoped libraries with %pip | Notebook-scoped libraries with the library utility | Cluster libraries | Job libraries with Jobs API |
| --- | --- | --- | --- | --- |
| PyPI | Use %pip install. See example. | Use dbutils.library.installPyPI. | Select PyPI as the source. | Add a new pypi object to the job libraries and specify the package field. |
| Private PyPI mirror, such as Nexus or Artifactory | Use %pip install with the --index-url option. Secret management is available. See example. | Use dbutils.library.installPyPI and specify the repo argument. | Not supported. | Not supported. |
| VCS, such as GitHub, with raw source | Use %pip install and specify the repository URL as the package name. See example. | Not supported. | Select PyPI as the source and specify the repository URL as the package name. | Add a new pypi object to the job libraries and specify the repository URL as the package field. |
| Private VCS with raw source | Use %pip install and specify the repository URL with basic authentication as the package name. Secret management is available. See example. | Not supported. | Not supported. | Not supported. |
| DBFS | Use %pip install. See example. | Use dbutils.library.install(dbfs_path). | Select DBFS/S3 as the source. | Add a new egg or whl object to the job libraries and specify the DBFS path as the package field. |
| S3 | Use %pip install together with a pre-signed URL. Paths with the S3 protocol s3:// are not supported. | Use dbutils.library.install(s3_path). | Select DBFS/S3 as the source. | Add a new egg or whl object to the job libraries and specify the S3 path as the package field. |
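
To make the table concrete, the cells below sketch several of the %pip variants; all package names, repository URLs, and file paths are placeholders.

```python
# Illustrative notebook cells for several rows of the table above. Package
# names, repository URLs, and file paths are placeholders.

# PyPI:
%pip install requests==2.28.1

# Private PyPI mirror (Nexus/Artifactory); the index URL is a placeholder:
%pip install my-internal-package --index-url https://nexus.example.com/repository/pypi/simple

# VCS with raw source, using the repository URL as the package name:
%pip install git+https://github.com/databricks/databricks-cli

# DBFS path to a wheel file:
%pip install /dbfs/path/to/my_package-0.1.0-py3-none-any.whl
```

On Databricks Runtime (not Databricks Runtime ML or Databricks Runtime for Genomics), the library utility offers an equivalent for the PyPI row; the version pin is again an example.

```python
# Library utility equivalent of the PyPI row (Databricks Runtime only).
dbutils.library.installPyPI("requests", version="2.28.1")
dbutils.library.restartPython()  # restart Python so the new library is importable
```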