REST API 2.0

The Databricks REST API 2.0 supports services to manage your workspace, DBFS, clusters, instance pools, jobs, libraries, users and groups, tokens, and MLflow experiments and models.

This article provides an overview of how to use the REST API. Links to each API reference, authentication options, and examples are listed at the end of the article.

For information about authenticating to the REST API, see Authentication using Databricks personal access tokens. For API examples, see Examples.

Rate limits

The Databricks REST API supports a maximum of 30 requests/second per workspace. Requests that exceed the rate limit will receive a 429 response status code.
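If a request is rejected with 429, back off and retry. A minimal sketch in Bash (the retry count and backoff interval are illustrative, and DATABRICKS_TOKEN is assumed to hold a personal access token):

for attempt in 1 2 3 4 5; do
  status=$(curl -s -o response.json -w '%{http_code}' \
    -H "Authorization: Bearer $DATABRICKS_TOKEN" \
    "https://<databricks-instance>/api/2.0/clusters/list")
  [ "$status" != "429" ] && break   # stop retrying unless rate limited
  sleep $((attempt * 2))            # simple linear backoff
done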

Parse output

It can be useful to parse out parts of the JSON output. In these cases, we recommend that you use the utility jq. For more information, see the jq Manual. You can install jq on macOS using Homebrew by running brew install jq.
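For example, to extract just the cluster IDs from a clusters/list response, pipe the output to jq:

curl ... https://<databricks-instance>/api/2.0/clusters/list | jq '.clusters[].cluster_id'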

Some STRING fields (which contain error/descriptive messaging intended to be consumed by the UI) are unstructured, and you should not depend on the format of these fields in programmatic workflows.

Invoke a GET using a query string

While most API calls require that you specify a JSON body, for GET calls you can pass parameters as a query string.

To get the details for a cluster, run:

curl ... https://<databricks-instance>/api/2.0/clusters/get?cluster_id=<cluster-id>

To list the contents of the DBFS root, run:

curl ... https://<databricks-instance>/api/2.0/dbfs/list?path=/
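The ... in these examples stands for your authentication details. Spelled out with a personal access token (a sketch; DATABRICKS_TOKEN is assumed to hold your token), the DBFS call looks like:

curl -s -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  "https://<databricks-instance>/api/2.0/dbfs/list?path=/" | jq '.files[].path'

Quote the URL so that the shell does not interpret characters such as ? and & in the query string.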

Runtime version strings

Many API calls require you to specify a Databricks runtime version string. This section describes the structure of a version string in the Databricks REST API.

Databricks Runtime versions 3.x and above

<M>.<F>.x[-cpu][-gpu][-ml][-hls][-conda]-scala<scala-version>

where

  • M - Databricks Runtime major release
  • F - Databricks Runtime feature release
  • cpu - CPU version (with -ml only)
  • gpu - GPU-enabled
  • ml - Machine learning
  • hls - Genomics
  • conda - with Conda (no longer available)
  • scala-version - version of Scala used to compile Spark: 2.10, 2.11, or 2.12

For example, 5.5.x-scala2.10 and 6.3.x-gpu-scala2.11. The Supported releases and End-of-support history tables map Databricks Runtime versions to the Spark version contained in the runtime.
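To see which version strings your workspace currently accepts, query the Clusters API runtime versions endpoint, for example:

curl ... https://<databricks-instance>/api/2.0/clusters/spark-versions | jq '.versions[].key'

Each key in the response is a version string that you can pass to calls such as clusters/create.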

Databricks Runtime versions 2.x and below (unsupported)

<M>.<F>.<m>-db<n>-scala<scala-version>

where

  • M - Apache Spark major release
  • F - Apache Spark feature release
  • m - Apache Spark maintenance update
  • n - Databricks Runtime version
  • scala-version - version of Scala used to compile Spark: 2.10 or 2.11

For example, 2.1.1-db6-scala2.11.

Databricks Light

apache-spark-<M>.<F>.x-scala<scala-version>

where

  • M - Apache Spark major release
  • F - Apache Spark feature release
  • scala-version - version of Scala used to compile Spark: 2.10 or 2.11

For example, apache-spark-2.4.x-scala2.11.
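Wherever an API call takes a runtime version, pass the string in the spark_version field. For example, a minimal clusters/create request (the cluster name, node type, and worker count are illustrative; node types vary by cloud):

curl ... -X POST https://<databricks-instance>/api/2.0/clusters/create --data '{
  "cluster_name": "example-cluster",
  "spark_version": "apache-spark-2.4.x-scala2.11",
  "node_type_id": "i3.xlarge",
  "num_workers": 1
}'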