REST API 2.0

The Databricks REST API 2.0 supports services to manage your workspace, DBFS, clusters, instance pools, jobs, libraries, users and groups, tokens, and MLflow experiments and models.

This article provides an overview of how to use the REST API. Links to each API reference, authentication options, and examples are listed at the end of the article.

For information about authenticating to the REST API, see Authentication using Databricks personal access tokens. For API examples, see API examples.

Rate limits

To ensure high quality of service under heavy load, Databricks enforces rate limits for all REST API calls. Limits are set per endpoint and per workspace to ensure fair usage and high availability. To request a limit increase, contact your Databricks representative.

Requests that exceed the rate limit return a 429 response status code.
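
If an automated workflow can hit these limits, it should check for the 429 status and retry with backoff. The following is a minimal bash sketch, assuming your personal access token is stored in a DATABRICKS_TOKEN environment variable; the endpoint and retry policy are illustrative, not prescribed by the API:

STATUS=429
ATTEMPT=0
while [ "$STATUS" -eq 429 ] && [ "$ATTEMPT" -lt 5 ]; do
  # Write the response body to response.json and capture only the HTTP status code
  STATUS=$(curl -s -o response.json -w "%{http_code}" \
    -H "Authorization: Bearer $DATABRICKS_TOKEN" \
    https://<databricks-instance>/api/2.0/clusters/list)
  ATTEMPT=$((ATTEMPT + 1))
  # Back off exponentially before retrying a rate-limited call
  [ "$STATUS" -eq 429 ] && sleep $((2 ** ATTEMPT))
done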

Parse output

It can be useful to parse out parts of the JSON output. In these cases, we recommend that you use the utility jq. For more information, see the jq Manual. You can install jq on macOS using Homebrew by running brew install jq.
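
For example, to extract only the cluster names from a clusters/list response, pipe the response through a jq filter. Here, as in the examples later in this article, ... stands for your authentication arguments:

curl ... https://<databricks-instance>/api/2.0/clusters/list | jq '[.clusters[].cluster_name]'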

Some STRING fields (which contain error and descriptive messaging intended to be consumed by the UI) are unstructured, and you should not depend on the format of these fields in programmatic workflows.

Invoke a GET using a query string

While most API calls require that you specify a JSON body, GET calls take their parameters as a query string instead.

In the following examples, replace <databricks-instance> with the workspace URL of your Databricks deployment.

To get the details for a cluster, run:

curl ... https://<databricks-instance>/api/2.0/clusters/get?cluster_id=<cluster-id>

To list the contents of the DBFS root, run:

curl ... https://<databricks-instance>/api/2.0/dbfs/list?path=/
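
In these examples, ... stands for your authentication arguments. As a sketch of one complete form, assuming a personal access token stored in a DATABRICKS_TOKEN environment variable, you can also let curl build and URL-encode the query string for you with --get and --data-urlencode:

curl --get \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  --data-urlencode "path=/" \
  https://<databricks-instance>/api/2.0/dbfs/list

The --data-urlencode option is helpful when a parameter value, such as a DBFS path containing spaces, needs percent-encoding.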

Runtime version strings

Many API calls require you to specify a Databricks runtime version string. This section describes the structure of a version string in the Databricks REST API.

<M>.<F>.x[-cpu][-esr][-gpu][-ml][-hls][-conda]-scala<scala-version>

where

  • M - Databricks Runtime major release
  • F - Databricks Runtime feature release
  • cpu - CPU version (with -ml only)
  • esr - Extended Support
  • gpu - GPU-enabled
  • ml - Machine learning
  • hls - Genomics
  • conda - with Conda (no longer available)
  • scala-version - version of Scala used to compile Spark: 2.10, 2.11, or 2.12

For example:

  • 7.6.x-gpu-ml-scala2.12 represents Databricks Runtime 7.6 for Machine Learning, which is GPU-enabled and uses Scala version 2.12 to compile Spark version 3.0.1
  • 6.4.x-esr-scala2.11 represents Databricks Runtime 6.4 Extended Support, which uses Scala version 2.11 to compile Spark version 2.4.5

The Supported Databricks runtime releases and support schedule and Unsupported releases tables map Databricks Runtime versions to the Spark version contained in the runtime.

You can get a list of available Databricks runtime version strings by calling the Runtime versions API.
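
For example, the following call lists the version strings available in your workspace, using jq to keep only the key field of each entry (again, ... stands for your authentication arguments):

curl ... https://<databricks-instance>/api/2.0/clusters/spark-versions | jq '[.versions[].key]'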

Databricks Light

Databricks Light version strings use a similar structure:

apache-spark-<M>.<F>.x-scala<scala-version>

where

  • M - Apache Spark major release
  • F - Apache Spark feature release
  • scala-version - version of Scala used to compile Spark: 2.10 or 2.11

For example, apache-spark-2.4.x-scala2.11.