What are ACID guarantees on Databricks?

Databricks uses Delta Lake by default for all reads and writes and builds upon the ACID guarantees provided by the open source Delta Lake protocol. ACID stands for atomicity, consistency, isolation, and durability.

  • Atomicity means that all transactions either succeed or fail completely.

  • Consistency guarantees relate to how a given state of the data is observed by simultaneous operations.

  • Isolation refers to how simultaneous operations potentially conflict with one another.

  • Durability means that committed changes are permanent.

While many data processing and warehousing technologies describe having ACID transactions, specific guarantees vary by system, and transactions on Databricks might differ from other systems you’ve worked with.

Note

This page describes guarantees for tables backed by Delta Lake. Other data formats and integrated systems might not provide transactional guarantees for reads and writes.

All Databricks writes to cloud object storage use transactional commits, which create metadata files starting with _started_<id> and _committed_<id> alongside data files. You do not need to interact with these files, as Databricks routinely cleans up stale commit metadata files.
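If you are curious, you can list these marker files yourself; a minimal sketch in Python, assuming a notebook where dbutils is available and a hypothetical table path:

    # List commit marker files alongside data files; the path
    # "/mnt/datalake/events" is a hypothetical example.
    files = dbutils.fs.ls("/mnt/datalake/events/")
    markers = [f.name for f in files
               if f.name.startswith(("_started_", "_committed_"))]
    print(markers)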

How are transactions scoped on Databricks?

Databricks manages transactions at the table level. Transactions always apply to one table at a time. For managing concurrent transactions, Databricks uses optimistic concurrency control. This means that there are no locks on reading or writing against a table, and deadlock is not a possibility.
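Because no locks are taken, concurrent writers do not block one another. A minimal sketch, assuming a hypothetical Delta table named events and a notebook-provided spark session; blind appends like these do not conflict with each other:

    from threading import Thread

    spark.sql("CREATE TABLE IF NOT EXISTS events (id BIGINT)")

    def append_batch(start):
        # Each writer commits its own transaction; neither blocks the other.
        df = spark.range(start, start + 1000)
        df.write.format("delta").mode("append").saveAsTable("events")

    writers = [Thread(target=append_batch, args=(i * 1000,)) for i in range(2)]
    for t in writers:
        t.start()
    for t in writers:
        t.join()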

By default, Databricks provides snapshot isolation on reads and write-serializable isolation on writes. Write-serializable isolation provides stronger guarantees than snapshot isolation, but it applies that stronger isolation only for writes.
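The write isolation level can be tightened per table through the delta.isolationLevel table property; a minimal sketch, assuming a hypothetical table named events:

    # Accepts 'Serializable' (strongest) or 'WriteSerializable' (the default).
    spark.sql("""
        ALTER TABLE events
        SET TBLPROPERTIES ('delta.isolationLevel' = 'Serializable')
    """)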

Read operations referencing multiple tables return the current version of each table at the time of access, but do not interrupt concurrent transactions that might modify referenced tables.
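This means a query that joins several tables is not a single cross-table snapshot. If you need a repeatable view across tables, one approach is to pin each table to an explicit version with time travel; a sketch, assuming hypothetical orders and customers tables (the versions are still captured one table at a time, so this gives repeatable reads, not a cross-table transaction):

    # Capture the latest committed version of each table.
    orders_v = spark.sql("DESCRIBE HISTORY orders LIMIT 1").first()["version"]
    customers_v = spark.sql("DESCRIBE HISTORY customers LIMIT 1").first()["version"]

    # Re-reading with versionAsOf returns the same snapshots every time.
    orders = spark.read.option("versionAsOf", orders_v).table("orders")
    customers = spark.read.option("versionAsOf", customers_v).table("customers")
    joined = orders.join(customers, "customer_id")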

Databricks does not have BEGIN/END constructs that allow multiple operations to be grouped together as a single transaction. Applications that modify multiple tables commit transactions to each table in a serial fashion. You can combine inserts, updates, and deletes against a table into a single write transaction using MERGE INTO.
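For example, a single MERGE INTO statement can delete, update, and insert in one atomic commit; a sketch, assuming hypothetical target and updates tables:

    # All three actions succeed or fail together as one table version.
    spark.sql("""
        MERGE INTO target AS t
        USING updates AS u
        ON t.id = u.id
        WHEN MATCHED AND u.is_deleted = true THEN DELETE
        WHEN MATCHED THEN UPDATE SET t.value = u.value
        WHEN NOT MATCHED THEN INSERT (id, value) VALUES (u.id, u.value)
    """)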

How does Databricks implement atomicity?

The transaction log controls commit atomicity. During a transaction, data files are written to the file directory backing the table. When the transaction completes, a new entry is committed to the transaction log that includes the paths to all files written during the transaction. Each commit increments the table version and makes new data files visible to read operations. The current state of the table comprises all data files marked valid in the transaction logs.
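You can observe these commits with DESCRIBE HISTORY, which shows one entry per committed transaction; a sketch, assuming a hypothetical table named events:

    # Each row is one commit; the version column increments per transaction.
    spark.sql("DESCRIBE HISTORY events") \
        .select("version", "timestamp", "operation") \
        .show()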

Data files are not tracked unless the transaction log records a new version. If a transaction fails after writing data files to a table, these data files will not corrupt the table state, but the files will not become part of the table. The VACUUM operation deletes all untracked data files in a table directory, including remaining uncommitted files from failed transactions.
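A minimal VACUUM sketch, assuming a hypothetical table named events; 168 hours spells out the default 7-day retention threshold explicitly:

    # Deletes files in the table directory that are no longer referenced
    # by the transaction log and are older than the retention window.
    spark.sql("VACUUM events RETAIN 168 HOURS")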

How does Databricks implement durability?

Databricks uses cloud object storage to store all data files and transaction logs. Cloud object storage has high availability and durability. Because transactions either succeed or fail completely and the transaction log lives alongside data files in cloud object storage, tables on Databricks inherit the durability guarantees of the cloud object storage on which they’re stored.

How does Databricks implement consistency?

Delta Lake uses optimistic concurrency control to provide transactional guarantees between writes. Under this mechanism, writes operate in three stages:

  1. Read: Reads (if needed) the latest available version of the table to identify which files need to be modified (that is, rewritten).

    • Writes that are append-only do not read the current table state before writing. Schema validation leverages metadata from the transaction log.

  2. Write: Writes data files to the directory used to define the table.

  3. Validate and commit:

    • Checks whether the proposed changes conflict with any other changes that may have been concurrently committed since the snapshot that was read.

    • If there are no conflicts, all the staged changes are committed as a new versioned snapshot, and the write operation succeeds.

    • If there are conflicts, the write operation fails with a concurrent modification exception. This failure prevents corruption of data.

Optimistic concurrency control assumes that most concurrent transactions on your data will not conflict with one another, but conflicts can occur. See Isolation levels and write conflicts on Databricks.
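A failed writer can usually retry against the fresh table snapshot. A minimal retry sketch, assuming a hypothetical events table; the concrete exception class varies by runtime and conflict type, so this checks the message instead:

    import time

    def delete_with_retry(max_attempts=3):
        for attempt in range(max_attempts):
            try:
                # Hypothetical write that may conflict with concurrent commits.
                spark.sql("DELETE FROM events WHERE status = 'stale'")
                return
            except Exception as e:
                # Conflict exceptions have names like ConcurrentAppendException.
                if "Concurrent" not in str(e) or attempt == max_attempts - 1:
                    raise
                time.sleep(2 ** attempt)  # back off, then retry on the new snapshot

    delete_with_retry()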

How does Databricks implement isolation?

Databricks uses write-serializable isolation by default for all table writes and updates. Snapshot isolation is used for all table reads.

Write serializability and optimistic concurrency control work together to provide high throughput for writes. The current valid state of a table is always available, and a write can be started against a table at any time. Concurrent reads are only limited by throughput of the metastore and cloud resources.

See Isolation levels and write conflicts on Databricks.

Does Delta Lake support multi-table transactions?

Delta Lake does not support multi-table transactions; transactions are scoped to a single table.

Primary key and foreign key relationships on Databricks are informational and not enforced. See Declare primary key and foreign key relationships.
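A sketch of declaring informational constraints, assuming hypothetical tables in a Unity Catalog schema (primary and foreign key constraints require Unity Catalog):

    spark.sql("""
        CREATE TABLE customers (
          customer_id BIGINT NOT NULL,
          region STRING,
          CONSTRAINT customers_pk PRIMARY KEY (customer_id)
        )
    """)
    spark.sql("""
        CREATE TABLE orders (
          order_id BIGINT NOT NULL,
          customer_id BIGINT,
          CONSTRAINT orders_pk PRIMARY KEY (order_id),
          CONSTRAINT orders_fk FOREIGN KEY (customer_id)
            REFERENCES customers (customer_id)
        )
    """)

These constraints are metadata only; Databricks does not reject rows that violate them.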

What does it mean that Delta Lake supports multi-cluster writes?

Delta Lake prevents data corruption when multiple clusters write to the same table concurrently. Some write operations can conflict during simultaneous execution, but don’t corrupt the table. See Isolation levels and write conflicts on Databricks.

Note

Delta Lake on S3 has several limitations not found on other storage systems. See Delta Lake limitations on S3.