Clean up files associated with a table. There are different versions of this command for Delta and Apache Spark tables.
Recursively vacuums directories associated with the Delta table and removes data files that are no longer
in the latest state of the transaction log for the table and are older than a retention threshold.
Files are deleted according to the time they were logically removed from Delta’s transaction log plus the retention period,
not their modification timestamps on the storage system. The default threshold is 7 days. Databricks does not automatically trigger
VACUUM operations on Delta tables. See Remove files no longer referenced by a Delta table.
If you run
VACUUM on a Delta table, you lose the ability to time travel back to a
version older than the specified data retention period.
Databricks recommends that you set the retention interval to at least 7 days,
because old snapshots and uncommitted files can still be in use by concurrent
readers or writers to the table. If
VACUUM cleans up active files,
concurrent readers can fail or, worse, tables can be corrupted when
VACUUM deletes files that have not yet been committed. You must choose an interval
that is longer than the longest running concurrent transaction and the longest
period that any stream can lag behind the most recent update to the table.
VACUUM table_identifier [RETAIN num HOURS] [DRY RUN]
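For example, instantiating the syntax above (the table name `events` is hypothetical), the DRY RUN clause previews which files would be deleted without removing anything, and RETAIN overrides the default 7-day threshold:

```sql
-- Preview the files that would be deleted, without removing anything
VACUUM events DRY RUN;

-- Remove unreferenced files, keeping the last 30 days (720 hours) of history
VACUUM events RETAIN 720 HOURS;
```

A longer RETAIN value preserves more time travel history at the cost of extra storage.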
Recursively vacuums directories associated with the Spark table and removes uncommitted files older
than a retention threshold. The default threshold is 7 days. Databricks automatically triggers
VACUUM operations as data is written. See Clean up uncommitted files.
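Although Databricks triggers this cleanup automatically, the command can also be issued manually. A sketch, assuming the Spark-table form accepts the same RETAIN clause shown above (the table name `spark_events` is hypothetical):

```sql
-- Explicitly clean up uncommitted files older than 48 hours
VACUUM spark_events RETAIN 48 HOURS;
```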