Work with files on Databricks
Databricks provides multiple utilities and APIs for interacting with files in the following locations:
Unity Catalog volumes.
Workspace files.
Cloud object storage.
DBFS mounts and DBFS root.
Ephemeral storage attached to the driver node of the cluster.
This article provides examples for interacting with files in these locations for the following tools:
Apache Spark.
Spark SQL and Databricks SQL.
Databricks file system utilities (dbutils.fs or %fs).
Databricks CLI.
Databricks REST API.
Bash shell commands (%sh).
Notebook-scoped library installs using %pip.
Pandas.
OSS Python file management and processing utilities.
Important
File operations that require FUSE access to data cannot directly access cloud object storage using URIs. Databricks recommends using Unity Catalog volumes to configure access to these locations for FUSE.
Scala does not support FUSE for Unity Catalog volumes or workspace files on compute configured with single user access mode or clusters without Unity Catalog. Scala supports FUSE for Unity Catalog volumes and workspace files on compute configured with Unity Catalog and shared access mode.
Do I need to provide a URI scheme to access data?
Data access paths in Databricks follow one of the following standards:
URI-style paths include a URI scheme. For Databricks-native data access solutions, URI schemes are optional for most use cases. When you directly access data in cloud object storage, you must provide the correct URI scheme for the storage type.
POSIX-style paths provide data access relative to the driver root (/). POSIX-style paths never require a scheme. You can use Unity Catalog volumes or DBFS mounts to provide POSIX-style access to data in cloud object storage. Many ML frameworks and other OSS Python modules require FUSE and can only use POSIX-style paths.
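For example, the same file might be read with a URI-style path using Apache Spark, or with a POSIX-style path through a Unity Catalog volume, as in the following sketch (bucket, catalog, schema, volume, and file names are placeholders):
# URI-style path: the scheme identifies the cloud object storage service directly.
df = spark.read.format("csv").load("s3://<bucket>/<path>/data.csv")
# POSIX-style path: a Unity Catalog volume exposes the same data without a URI scheme,
# so FUSE-dependent tools such as pandas can read it too.
import pandas as pd
pdf = pd.read_csv("/Volumes/<catalog>/<schema>/<volume>/<path>/data.csv")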
Work with files in Unity Catalog volumes
Databricks recommends using Unity Catalog volumes to configure access to non-tabular data files stored in cloud object storage. See Create volumes.
Tool | Example
---|---
Apache Spark | spark.read.format("json").load("/Volumes/<catalog>/<schema>/<volume>/<path>/data.json")
Spark SQL and Databricks SQL | SELECT * FROM read_files('/Volumes/<catalog>/<schema>/<volume>/<path>/data.json')
Databricks file system utilities | dbutils.fs.ls("/Volumes/<catalog>/<schema>/<volume>/<path>/") or %fs ls /Volumes/<catalog>/<schema>/<volume>/<path>/
Databricks CLI | databricks fs cp dbfs:/Volumes/<catalog>/<schema>/<volume>/<path>/data.csv /local/destination/
Databricks REST API | Use /Volumes/<catalog>/<schema>/<volume>/<path>/<file_name> paths in request paths and payloads
Bash shell commands | %sh ls /Volumes/<catalog>/<schema>/<volume>/<path>/
Library installs | %pip install /Volumes/<catalog>/<schema>/<volume>/<path>/my_library.whl
Pandas | df = pd.read_csv('/Volumes/<catalog>/<schema>/<volume>/<path>/data.csv')
OSS Python | os.listdir('/Volumes/<catalog>/<schema>/<volume>/<path>/')
Note
The dbfs:/ scheme is required when working with the Databricks CLI.
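The following sketch shows the same volume path used from Databricks Utilities, Apache Spark, and pandas (catalog, schema, volume, and file names are placeholders):
# List files in the volume with Databricks Utilities.
dbutils.fs.ls("/Volumes/<catalog>/<schema>/<volume>/<path>/")
# Read a JSON file with Apache Spark and write the result back to the volume as Parquet.
df = spark.read.format("json").load("/Volumes/<catalog>/<schema>/<volume>/<path>/data.json")
df.write.mode("overwrite").parquet("/Volumes/<catalog>/<schema>/<volume>/<path>/output/")
# Read a CSV file with pandas over the same POSIX-style path.
import pandas as pd
pdf = pd.read_csv("/Volumes/<catalog>/<schema>/<volume>/<path>/data.csv")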
Work with workspace files
You can use workspace files to store and access data and other files saved alongside notebooks and other workspace assets. Because workspace files have size restrictions, Databricks recommends storing only small data files here, primarily for development and testing.
Tool | Example
---|---
Apache Spark | spark.read.format("json").load("file:/Workspace/Users/<user_folder>/data.json")
Spark SQL and Databricks SQL | SELECT * FROM read_files('file:/Workspace/Users/<user_folder>/data.json')
Databricks file system utilities | dbutils.fs.ls("file:/Workspace/Users/<user_folder>/") or %fs ls file:/Workspace/Users/<user_folder>/
Databricks CLI | databricks workspace list /Users/<user_folder>/
Databricks REST API | Use /Users/<user_folder>/<file_name> paths with the Workspace API
Bash shell commands | %sh ls /Workspace/Users/<user_folder>/
Library installs | %pip install /Workspace/Users/<user_folder>/my_library.whl
Pandas | df = pd.read_csv('/Workspace/Users/<user_folder>/data.csv')
OSS Python | os.listdir('/Workspace/Users/<user_folder>/')
Note
The file:/ scheme is required when working with Databricks Utilities, Apache Spark, or SQL.
You cannot use Apache Spark to read or write to workspace files on clusters configured with shared access mode.
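The following sketch contrasts the file:/ scheme required by Databricks Utilities and Apache Spark with the plain POSIX path used by pandas and OSS Python (the user folder and file names are placeholders):
# Databricks Utilities and Apache Spark require the file:/ scheme for workspace files.
# (Spark access to workspace files is not available on shared access mode clusters.)
dbutils.fs.ls("file:/Workspace/Users/<user_folder>/")
df = spark.read.format("csv").load("file:/Workspace/Users/<user_folder>/data.csv")
# pandas and OSS Python use the POSIX path directly.
import os
import pandas as pd
pdf = pd.read_csv("/Workspace/Users/<user_folder>/data.csv")
os.listdir("/Workspace/Users/<user_folder>/")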
Work with files in cloud object storage
Databricks recommends using Unity Catalog volumes to configure secure access to files in cloud object storage. If you choose to directly access data in cloud object storage using URIs, you must configure permissions. See Manage external locations, external tables, and external volumes.
The following examples use URIs to access data in cloud object storage:
Tool | Example
---|---
Apache Spark | spark.read.format("json").load("s3://<bucket>/<path>/data.json")
Spark SQL and Databricks SQL | SELECT * FROM read_files('s3://<bucket>/<path>/data.json')
Databricks file system utilities | dbutils.fs.ls("s3://<bucket>/<path>/") or %fs ls s3://<bucket>/<path>/
Databricks CLI | Not supported
Databricks REST API | Not supported
Bash shell commands | Not supported
Library installs | %pip install s3://<bucket>/<path>/my_library.whl
Pandas | Not supported
OSS Python | Not supported
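The following sketch assumes the compute already has permission on the external location and uses placeholder bucket and file names:
# Read a JSON file directly from cloud object storage with a URI-style path.
df = spark.read.format("json").load("s3://<bucket>/<path>/data.json")
# List objects under a prefix with Databricks Utilities.
dbutils.fs.ls("s3://<bucket>/<path>/")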
Work with files in DBFS mounts and DBFS root
DBFS mounts are not securable using Unity Catalog and are no longer recommended by Databricks. Data stored in the DBFS root is accessible by all users in the workspace. Databricks recommends against storing any sensitive or production code or data in the DBFS root. See What is the Databricks File System (DBFS)?.
Tool | Example
---|---
Apache Spark | spark.read.format("json").load("/mnt/<mount_name>/<path>/data.json")
Spark SQL and Databricks SQL | SELECT * FROM read_files('/mnt/<mount_name>/<path>/data.json')
Databricks file system utilities | dbutils.fs.ls("/mnt/<mount_name>/<path>/") or %fs ls /mnt/<mount_name>/<path>/
Databricks CLI | databricks fs cp dbfs:/mnt/<mount_name>/<path>/data.csv /local/destination/
Databricks REST API | Use /mnt/<mount_name>/<path>/<file_name> paths with the DBFS API
Bash shell commands | %sh ls /dbfs/mnt/<mount_name>/<path>/
Library installs | %pip install /dbfs/mnt/<mount_name>/<path>/my_library.whl
Pandas | df = pd.read_csv('/dbfs/mnt/<mount_name>/<path>/data.csv')
OSS Python | os.listdir('/dbfs/mnt/<mount_name>/<path>/')
Note
The dbfs:/ scheme is required when working with the Databricks CLI.
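The following sketch reads from a DBFS mount and copies a file out of the DBFS root into a Unity Catalog volume (mount, catalog, schema, volume, and file names are placeholders):
# Read from a DBFS mount with Apache Spark; DBFS is the default filesystem, so no scheme is required.
df = spark.read.format("json").load("/mnt/<mount_name>/<path>/data.json")
# Copy a file from the DBFS root into a Unity Catalog volume with Databricks Utilities.
dbutils.fs.cp("dbfs:/<path>/data.csv", "/Volumes/<catalog>/<schema>/<volume>/<path>/data.csv")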
Work with files in ephemeral storage attached to the driver node
The ephemeral storage attached to the driver node is block storage with native POSIX-based path access. Any data stored in this location disappears when a cluster terminates or restarts.
Tool | Example
---|---
Apache Spark | Not supported
Spark SQL and Databricks SQL | Not supported
Databricks file system utilities | dbutils.fs.ls("file:/<path>") or %fs ls file:/<path>
Databricks CLI | Not supported
Databricks REST API | Not supported
Bash shell commands | %sh ls /<path>
Library installs | Not supported
Pandas | df = pd.read_csv('/<path>/data.csv')
OSS Python | os.listdir('/<path>/')
Note
The file:/ scheme is required when working with Databricks Utilities.
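The following sketch downloads a file to the driver's ephemeral storage and reads it locally (the URL and file names are placeholders):
# Download a file to the driver's local disk, then read it with pandas.
import urllib.request
import pandas as pd
urllib.request.urlretrieve("https://<address>/data.csv", "/tmp/data.csv")
pdf = pd.read_csv("/tmp/data.csv")
# The same location requires the file:/ scheme with Databricks Utilities.
dbutils.fs.ls("file:/tmp/")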
Move data from ephemeral storage to volumes
You might want to access data downloaded or saved to ephemeral storage using Apache Spark. Because ephemeral storage is attached to the driver and Spark is a distributed processing engine, not all operations can directly access data here. If you need to move data from the driver filesystem to Unity Catalog volumes, you can copy files using magic commands or the Databricks utilities, as in the following examples:
dbutils.fs.cp("file:/<path>", "/Volumes/<catalog>/<schema>/<volume>/<path>")
%sh cp /<path> /Volumes/<catalog>/<schema>/<volume>/<path>
%fs cp file:/<path> /Volumes/<catalog>/<schema>/<volume>/<path>
Local file API limitations
The following lists the limitations in local file API usage with cloud object storage in Databricks Runtime.
Does not support Amazon S3 mounts with client-side encryption enabled.
Does not support random writes. For workloads that require random writes, perform the operations on local disk first and then copy the result to Unity Catalog volumes. For example:
import xlsxwriter
from shutil import copyfile

# Perform the random-write operation (building an Excel workbook) on local disk first.
workbook = xlsxwriter.Workbook('/local_disk0/tmp/excel.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write(0, 0, "Key")
worksheet.write(0, 1, "Value")
workbook.close()

# Then copy the finished file to a Unity Catalog volume.
copyfile('/local_disk0/tmp/excel.xlsx', '/Volumes/my_catalog/my_schema/my_volume/excel.xlsx')
No sparse files. To copy sparse files, use cp --sparse=never:
$ cp sparse.file /Volumes/my_catalog/my_schema/my_volume/sparse.file
error writing '/Volumes/my_catalog/my_schema/my_volume/sparse.file': Operation not supported
$ cp --sparse=never sparse.file /Volumes/my_catalog/my_schema/my_volume/sparse.file