Workspace API
The Workspace API allows you to list, import, export, and delete notebooks and folders. The maximum allowed size of a request to the Workspace API is 10 MB. See Workspace examples for a how-to guide on this API.
Important
To access Databricks REST APIs, you must authenticate.
Delete
Endpoint | HTTP Method
---|---
`2.0/workspace/delete` | `POST`
Delete an object or a directory (and, optionally, all objects in the directory recursively).

If `path` does not exist, this call returns the error `RESOURCE_DOES_NOT_EXIST`. If `path` is a non-empty directory and `recursive` is set to `false`, this call returns the error `DIRECTORY_NOT_EMPTY`.

Object deletion cannot be undone, and deleting a directory recursively is not atomic.
Example of request:

    {
      "path": "/Users/user@example.com/project",
      "recursive": true
    }
Request structure

Field Name | Type | Description
---|---|---
`path` | `STRING` | The absolute path of the notebook or directory. This field is required.
`recursive` | `BOOL` | Whether to delete the object recursively. Defaults to `false`. Note that deleting a directory is not atomic: if the operation fails partway through, some objects under the directory may already have been deleted, and those deletions cannot be undone.
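As an illustration, the delete request can be assembled in Python. This is a minimal sketch under stated assumptions: `build_delete_request` is a hypothetical helper, `https://<databricks-instance>` is a placeholder for your workspace URL, and the resulting body would be sent as an authenticated `POST` with any HTTP client.

```python
import json

# Sketch: build the URL and JSON body for POST /api/2.0/workspace/delete.
# build_delete_request is a hypothetical helper name, not part of any SDK.
def build_delete_request(instance: str, path: str, recursive: bool = False):
    url = f"{instance}/api/2.0/workspace/delete"
    body = json.dumps({"path": path, "recursive": recursive})
    return url, body

url, body = build_delete_request(
    "https://<databricks-instance>",
    "/Users/user@example.com/project",
    recursive=True,
)
```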
Export
Endpoint | HTTP Method
---|---
`2.0/workspace/export` | `GET`
Export a notebook or the contents of an entire directory.

If `path` does not exist, this call returns the error `RESOURCE_DOES_NOT_EXIST`. You can export a directory only in `DBC` format. If the exported data exceeds the size limit, this call returns the error `MAX_NOTEBOOK_SIZE_EXCEEDED`. This API does not support exporting a library.
Example of request:

    {
      "path": "/Users/user@example.com/project/ScalaExampleNotebook",
      "format": "SOURCE"
    }
Example of response, where `content` is base64-encoded:

    {
      "content": "Ly8gRGF0YWJyaWNrcyBub3RlYm9vayBzb3VyY2UKMSsx"
    }
Alternatively, you can download the exported file by enabling `direct_download`. In the following examples, replace `<databricks-instance>` with the workspace URL of your Databricks deployment.

    curl -n -o example.scala \
      'https://<databricks-instance>/api/2.0/workspace/export?path=/Users/user@example.com/ScalaExampleNotebook&direct_download=true'
Request structure

Field Name | Type | Description
---|---|---
`path` | `STRING` | The absolute path of the notebook or directory. Exporting a directory is supported only for `DBC`. This field is required.
`format` | `ExportFormat` | The format of the exported file. Defaults to `SOURCE`; may be one of `SOURCE`, `HTML`, `JUPYTER`, or `DBC`. The value is case sensitive.
`direct_download` | `BOOL` | Flag to enable direct download. If `true`, the response is the exported file itself. Otherwise, the response contains the content as a base64-encoded string. See Export a notebook or folder for more information about how to use it.
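To sketch the round trip in Python: build the export query string, then base64-decode the `content` field of the response. The decoded sample below is taken from the example response above; the URL host is a placeholder.

```python
import base64
from urllib.parse import urlencode

# Sketch: query string for GET /api/2.0/workspace/export.
params = urlencode({
    "path": "/Users/user@example.com/project/ScalaExampleNotebook",
    "format": "SOURCE",
})
url = f"https://<databricks-instance>/api/2.0/workspace/export?{params}"

# Decode the base64 "content" from the example response above.
sample_content = "Ly8gRGF0YWJyaWNrcyBub3RlYm9vayBzb3VyY2UKMSsx"
source = base64.b64decode(sample_content).decode("utf-8")
# source is the notebook text: "// Databricks notebook source\n1+1"
```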
Get status
Endpoint | HTTP Method
---|---
`2.0/workspace/get-status` | `GET`
Gets the status of an object or a directory. If `path` does not exist, this call returns the error `RESOURCE_DOES_NOT_EXIST`.
Example of request:

    {
      "path": "/Users/user@example.com/project/ScalaExampleNotebook"
    }
Example of response:

    {
      "path": "/Users/user@example.com/project/ScalaExampleNotebook",
      "language": "SCALA",
      "object_type": "NOTEBOOK",
      "object_id": 789
    }
Request structure

Field Name | Type | Description
---|---|---
`path` | `STRING` | The absolute path of the notebook or directory. This field is required.
Response structure

Field Name | Type | Description
---|---|---
`object_type` | `ObjectType` | The type of the object. It could be `NOTEBOOK`, `DIRECTORY`, or `LIBRARY`.
`object_id` | `INT64` | Unique identifier for a `NOTEBOOK` or `DIRECTORY`.
`path` | `STRING` | The absolute path of the object.
`language` | `Language` | The language of the object. This value is set only if the object type is `NOTEBOOK`.
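A short sketch of consuming a get-status response: parse the JSON (the example response above) and branch on `object_type`, remembering that `language` is present only for notebooks.

```python
import json

# The example get-status response from above, as raw JSON text.
response_text = """
{
  "path": "/Users/user@example.com/project/ScalaExampleNotebook",
  "language": "SCALA",
  "object_type": "NOTEBOOK",
  "object_id": 789
}
"""
info = json.loads(response_text)
is_notebook = info["object_type"] == "NOTEBOOK"
# "language" is only set for notebooks, so use .get() to avoid KeyError
# on DIRECTORY or LIBRARY objects.
language = info.get("language")
```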
Import
Endpoint | HTTP Method
---|---
`2.0/workspace/import` | `POST`
Import a notebook or the contents of an entire directory.

If `path` already exists and `overwrite` is set to `false`, this call returns the error `RESOURCE_ALREADY_EXISTS`. You can use only the `DBC` format to import a directory.
Example of request, where `content` is the base64-encoded string of `1+1`:

    {
      "content": "MSsx\n",
      "path": "/Users/user@example.com/project/ScalaExampleNotebook",
      "language": "SCALA",
      "overwrite": true,
      "format": "SOURCE"
    }
Alternatively, you can import a local file directly. In the following examples, replace `<databricks-instance>` with the workspace URL of your Databricks deployment.

    curl -n -F path=/Users/user@example.com/project/ScalaExampleNotebook -F language=SCALA \
      -F content=@example.scala \
      https://<databricks-instance>/api/2.0/workspace/import
Request structure

Field Name | Type | Description
---|---|---
`path` | `STRING` | The absolute path of the notebook or directory. Importing a directory is supported only for the `DBC` format. This field is required.
`format` | `ExportFormat` | The format of the file to be imported. Defaults to `SOURCE`; may be one of `SOURCE`, `HTML`, `JUPYTER`, or `DBC`. The value is case sensitive.
`language` | `Language` | The language. If `format` is set to `SOURCE`, this field is required; otherwise, it is ignored.
`content` | `BYTES` | The base64-encoded content, limited to 10 MB. If the limit is exceeded, this call returns the error `MAX_NOTEBOOK_SIZE_EXCEEDED`. This parameter may be absent, in which case a posted file is used instead. See Import a notebook or directory for more information about how to use it.
`overwrite` | `BOOL` | Whether to overwrite an existing object. Defaults to `false`. For the `DBC` format, `overwrite` is not supported, because the archive may contain a directory.
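The base64 encoding of `content` can be sketched in Python with the standard library. Encoding the source string `1+1` yields `MSsx`, matching the example request above (which appends a trailing newline escape); the body built here would be sent as an authenticated `POST`.

```python
import base64
import json

# Sketch: base64-encode notebook source for the "content" field of
# POST /api/2.0/workspace/import.
source = "1+1"
content = base64.b64encode(source.encode("utf-8")).decode("ascii")

body = json.dumps({
    "content": content,
    "path": "/Users/user@example.com/project/ScalaExampleNotebook",
    "language": "SCALA",
    "overwrite": True,
    "format": "SOURCE",
})
```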
List
Endpoint | HTTP Method
---|---
`2.0/workspace/list` | `GET`
List the contents of a directory, or the object itself if it is not a directory. If the input path does not exist, this call returns the error `RESOURCE_DOES_NOT_EXIST`.
Example of request:

    {
      "path": "/Users/user@example.com/"
    }
Example of response:

    {
      "objects": [
        {
          "path": "/Users/user@example.com/project",
          "object_type": "DIRECTORY",
          "object_id": 123
        },
        {
          "path": "/Users/user@example.com/PythonExampleNotebook",
          "language": "PYTHON",
          "object_type": "NOTEBOOK",
          "object_id": 456
        }
      ]
    }
Request structure

Field Name | Type | Description
---|---|---
`path` | `STRING` | The absolute path of the notebook or directory. This field is required.
Response structure

Field Name | Type | Description
---|---|---
`objects` | An array of ObjectInfo | List of objects.
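A common follow-up is to split the `objects` array by `object_type`. A minimal sketch using the example response above:

```python
# The "objects" array from the example list response above.
objects = [
    {"path": "/Users/user@example.com/project",
     "object_type": "DIRECTORY", "object_id": 123},
    {"path": "/Users/user@example.com/PythonExampleNotebook",
     "language": "PYTHON", "object_type": "NOTEBOOK", "object_id": 456},
]

# Partition the listing into notebooks and directories by object_type.
notebooks = [o["path"] for o in objects if o["object_type"] == "NOTEBOOK"]
directories = [o["path"] for o in objects if o["object_type"] == "DIRECTORY"]
```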
Mkdirs
Endpoint | HTTP Method
---|---
`2.0/workspace/mkdirs` | `POST`
Create the given directory and necessary parent directories if they do not exist.

If there exists an object (not a directory) at any prefix of the input path, this call returns the error `RESOURCE_ALREADY_EXISTS`. If this operation fails, it may have succeeded in creating some of the necessary parent directories.
Example of request:

    {
      "path": "/Users/user@example.com/project"
    }
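To make the prefix rule concrete: every prefix of the target path must already be a directory or not exist at all, or the call fails with `RESOURCE_ALREADY_EXISTS`. A small local sketch of computing those prefixes:

```python
from pathlib import PurePosixPath

# Sketch: enumerate the prefixes of the target path that mkdirs touches.
# Each of these must be a directory (or absent) for the call to succeed.
target = PurePosixPath("/Users/user@example.com/project")
prefixes = [str(p) for p in reversed(target.parents) if str(p) != "/"]
prefixes.append(str(target))
```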
Data structures
ObjectInfo
The information of an object in the workspace. It is returned by `list` and `get-status`.
Field Name | Type | Description
---|---|---
`object_type` | `ObjectType` | The type of the object. It could be `NOTEBOOK`, `DIRECTORY`, or `LIBRARY`.
`object_id` | `INT64` | Unique identifier for a `NOTEBOOK` or `DIRECTORY`.
`path` | `STRING` | The absolute path of the object.
`language` | `Language` | The language of the object. This value is set only if the object type is `NOTEBOOK`.
ExportFormat
The format for notebook import and export.
Format | Description
---|---
`SOURCE` | The notebook will be imported/exported as source code.
`HTML` | The notebook will be imported/exported as an HTML file.
`JUPYTER` | The notebook will be imported/exported as a Jupyter/IPython Notebook file.
`DBC` | The notebook will be imported/exported as Databricks archive format.
Language
The language of the notebook.

Language | Description
---|---
`SCALA` | Scala notebook.
`PYTHON` | Python notebook.
`SQL` | SQL notebook.
`R` | R notebook.
ObjectType
The type of an object in the workspace.

Type | Description
---|---
`NOTEBOOK` | Notebook
`DIRECTORY` | Directory
`LIBRARY` | Library