ANALYZE TABLE … COMPUTE STORAGE METRICS

Applies to: Databricks Runtime 18.0 and above

The ANALYZE TABLE … COMPUTE STORAGE METRICS command computes total storage size metrics for a table. It provides detailed storage breakdown for cost analysis and optimization. To optimize query performance instead, see ANALYZE TABLE … COMPUTE STATISTICS.

Syntax

ANALYZE TABLE table_name COMPUTE STORAGE METRICS

Parameters

  • table_name: Identifies the table for which to compute storage metrics.

Description

Computes total storage size metrics for a specific table. This command returns comprehensive storage information including total bytes, active bytes, vacuumable bytes, and time travel bytes, along with the associated number of files for each category.

Use this command to identify large or unused tables, optimize storage costs, and understand why the total storage size differs from the active table size. This is particularly useful for platform administrators who need to analyze storage patterns across multiple tables or track storage changes over time.

Output metrics

The command returns the following metrics in a single row:

  • total_bytes: Total storage size for the table. This equals transaction log size + active bytes + vacuumable bytes + time travel bytes.
  • num_total_files: Total number of files, including delta log files, active files, vacuumable files, and time travel files.
  • active_bytes: Size of data files actively referenced by the table (same as sizeInBytes).
  • num_active_files: Number of files actively referenced by the table.
  • vacuumable_bytes: Size of data that can be removed by running VACUUM or enabling predictive optimization.
  • num_vacuumable_files: Number of vacuumable files.
  • time_travel_bytes: Size of historical data retained for rollbacks and time travel operations. Also known as tombstoned bytes or failsafe bytes.
  • num_time_travel_files: Number of files retained for time travel.

Details

  • The command recursively lists the files in the table's storage location to compute storage information. Execution time is typically within minutes, but can take up to several hours for very large tables.
  • This command works for both Unity Catalog managed and external tables.
  • The storage metrics are computed at the time the command is run and aren't stored in Unity Catalog or returned by DESCRIBE EXTENDED.
  • To track storage changes over time, run this command periodically and store the results in a table. Run this command in a loop across multiple tables to analyze storage patterns across your data estate.
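The tracking workflow described above can be sketched in PySpark. This is a minimal sketch, assuming a Databricks notebook where a `SparkSession` is available as the `spark` global; the tracking table name `ops.storage_metrics_history` and the helper names are hypothetical, not part of the command's API:

```python
from datetime import datetime, timezone


def storage_metrics_sql(table_name: str) -> str:
    """Build the ANALYZE TABLE ... COMPUTE STORAGE METRICS statement."""
    return f"ANALYZE TABLE {table_name} COMPUTE STORAGE METRICS"


def snapshot_storage_metrics(spark, tables, history_table="ops.storage_metrics_history"):
    """Run the command for each table and append each result row, tagged
    with the table name and a capture timestamp, to a tracking table.

    `spark` is the SparkSession global provided in Databricks notebooks;
    `history_table` is a hypothetical name for the tracking table.
    """
    from pyspark.sql import functions as F  # available on Databricks clusters

    for table_name in tables:
        # The command returns its metrics as a single-row result set.
        metrics = spark.sql(storage_metrics_sql(table_name))
        (metrics
         .withColumn("table_name", F.lit(table_name))
         .withColumn("captured_at", F.lit(datetime.now(timezone.utc).isoformat()))
         .write.mode("append")
         .saveAsTable(history_table))
```

Scheduling `snapshot_storage_metrics(spark, ["main.my_schema.my_table", ...])` as a periodic job yields a history table you can query to chart storage growth per table.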

Table type considerations

For materialized views and streaming tables, total_bytes includes the size of the table and its associated metadata. The active_bytes metric excludes the vacuumable and time travel portions of the table.

Examples

SQL
> ANALYZE TABLE main.my_schema.my_table COMPUTE STORAGE METRICS;
total_bytes num_total_files active_bytes num_active_files vacuumable_bytes num_vacuumable_files time_travel_bytes num_time_travel_files
----------- --------------- ------------ ---------------- ---------------- -------------------- ----------------- ---------------------
 5368709120            1250   4294967296             1000        805306368                  150         268435456                   100

The output shows:

  • Total storage: 5.37 GB across 1,250 files
  • Active data: 4.29 GB in 1,000 files (current table version)
  • Vacuumable data: 805 MB in 150 files (can be reclaimed with VACUUM)
  • Time travel data: 268 MB in 100 files (for historical queries)
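The arithmetic behind that summary can be checked directly from the output row. A minimal Python sketch (the field names mirror the command's output; the helper names are my own) that formats the byte counts and derives the transaction log size from the total_bytes identity given earlier:

```python
# Example row from the output above.
row = {
    "total_bytes": 5_368_709_120,
    "active_bytes": 4_294_967_296,
    "vacuumable_bytes": 805_306_368,
    "time_travel_bytes": 268_435_456,
}


def transaction_log_bytes(r):
    """total_bytes = transaction log + active + vacuumable + time travel,
    so the log size is whatever the three data categories don't cover."""
    return r["total_bytes"] - (
        r["active_bytes"] + r["vacuumable_bytes"] + r["time_travel_bytes"]
    )


def human_gb(n_bytes):
    """Format a byte count in decimal gigabytes, as in the summary above."""
    return f"{n_bytes / 1e9:.2f} GB"


print(human_gb(row["total_bytes"]))    # 5.37 GB total
print(human_gb(row["active_bytes"]))   # 4.29 GB active
print(transaction_log_bytes(row))      # 0 -- the log is negligible in this example
```

In this example the three data categories sum exactly to total_bytes, so the transaction log contribution is effectively zero; for real tables expect a small positive remainder.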