api command group
Note
This information applies to Databricks CLI versions 0.100 and higher, which are in Private Preview. To try them, reach out to your Databricks contact. To find your version of the Databricks CLI, run databricks -v.
The api command group within the Databricks CLI enables you to call any available Databricks REST API.

Run the api command only for advanced scenarios, such as calling preview releases of specific Databricks REST APIs that the Databricks CLI does not yet wrap in a related command. For a list of wrapped command groups, see CLI command groups.
Important
Before you use the Databricks CLI, be sure to set up the Databricks CLI and set up authentication for the Databricks CLI.
Run api commands (for advanced scenarios only)

You run api commands by appending them to databricks api. To display help for the api command, run databricks api -h.
To call the api command, use the following format:
databricks api <http-method> <rest-api-path> [--json {<request-body> | @<filename>}]
In the preceding call:
- Replace <http-method> with the HTTP method for the Databricks REST API that you want to call, such as delete, get, head, patch, post, or put. For example, to return the list of available clusters for a workspace, use get. To get the correct HTTP method for the Databricks REST API that you want to call, see the Databricks REST API documentation.
- Replace <rest-api-path> with the path to the Databricks REST API that you want to call. Do not include https:// or the workspace instance name. For example, to return the list of available clusters for a workspace, use /api/2.0/clusters/list. To get the correct syntax for the Databricks REST API that you want to call, see the Databricks REST API documentation.
- If the Databricks REST API that you want to call requires a request body, include --json and <request-body>, replacing <request-body> with the request body in JSON format. Alternatively, you can store the request body in a separate JSON file. To do so, include --json and @<filename>, replacing <filename> with the JSON file's name. To get the correct syntax for the request body that you want to include, see the Databricks REST API documentation. A worked example that combines all three pieces follows this list.
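As a worked sketch that combines all three pieces, the following call creates a Databricks personal access token through the Token API, which takes a POST request with a JSON body. The comment text and lifetime value here are placeholders; check the Databricks REST API documentation for the endpoint's exact request fields.

databricks api post /api/2.0/token/create --json '{
  "comment": "example token created through the api command",
  "lifetime_seconds": 3600
}'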
For authentication with Databricks workspaces, the api command always searches for authentication information in the following order, stopping its search after it finds what it needs:

1. The DATABRICKS_HOST and DATABRICKS_TOKEN environment variables.
2. The DATABRICKS_HOST, DATABRICKS_USERNAME, and DATABRICKS_PASSWORD environment variables.
3. The DEFAULT profile within your .databrickscfg file. This profile must contain either the host and token fields or the host, username, and password fields. A sketch of the environment-variable and profile approaches follows this list.
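As a minimal sketch of the first and third options (all values are placeholders, not real credentials), you could either export the environment variables in your shell before running a command:

export DATABRICKS_HOST="https://<your-workspace-instance-name>"
export DATABRICKS_TOKEN="<your-personal-access-token>"
databricks api get /api/2.0/clusters/list

or define a DEFAULT profile in your .databrickscfg file:

[DEFAULT]
host  = https://<your-workspace-instance-name>
token = <your-personal-access-token>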
For authentication with Databricks accounts instead of workspaces, the api command always searches for authentication information in the following order, stopping its search after it finds what it needs:

1. The DATABRICKS_HOST, DATABRICKS_USERNAME, DATABRICKS_PASSWORD, and DATABRICKS_ACCOUNT_ID environment variables.
2. The DEFAULT profile within your .databrickscfg file. This profile must contain the host, username, password, and account_id fields. A sketch of such a profile follows this list.
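A minimal sketch of such an account-level DEFAULT profile in .databrickscfg, assuming the account console host for your cloud (all values are placeholders):

[DEFAULT]
host       = https://accounts.cloud.databricks.com
username   = <your-account-admin-username>
password   = <your-account-admin-password>
account_id = <your-account-id>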
If the api command still cannot find the authentication information it needs, the api command fails. The api commands (and all other commands) support a --profile option for specifying a profile other than the DEFAULT one.
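For example, assuming a hypothetical profile named MYPROFILE in your .databrickscfg file, the following call authenticates with that profile instead of the DEFAULT one:

databricks api get /api/2.0/clusters/list --profile MYPROFILE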
Note
The api command does not use OAuth for authentication.
Examples
Get the list of available clusters in the workspace.
databricks api get /api/2.0/clusters/list
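The command prints the raw JSON response to standard output. If you have the jq tool installed, you can pipe the response through it to pretty-print it or extract specific fields; this sketch assumes the documented response shape, in which clusters is an array of objects that each contain a cluster_id field:

databricks api get /api/2.0/clusters/list | jq -r '.clusters[].cluster_id'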
Get information about the specified cluster in the workspace.
databricks api post /api/2.0/clusters/get --json '{
"cluster_id": "1234-567890-abcde123"
}'
Update settings for the specified cluster in the workspace.
databricks api post /api/2.0/clusters/edit --json '{
"cluster_id": "1234-567890-abcde123",
"cluster_name": "my-changed-cluster",
"num_workers": 1,
"spark_version": "11.3.x-scala2.12",
"node_type_id": "i3.xlarge"
}'
Update settings for the specified cluster in the workspace. Get the request body from a file named edit-cluster.json within the current working directory.
databricks api post /api/2.0/clusters/edit --json @edit-cluster.json
edit-cluster.json:
{
"cluster_id": "1234-567890-abcde123",
"cluster_name": "my-changed-cluster",
"num_workers": 1,
"spark_version": "11.3.x-scala2.12",
"node_type_id": "i3.xlarge"
}