Databricks SQL release notes
The following Databricks SQL features and improvements were released in 2025.
July 2025
Preset date ranges for parameters in the SQL editor
July 31, 2025
In the new SQL editor, you can now choose from preset date ranges, such as This week, Last 30 days, or Last year, when using timestamp, date, and date range parameters. These presets make it faster to apply common time filters without manually entering dates.
Inline execution history in SQL editor
July 24, 2025
Inline execution history is now available in the new SQL editor, allowing you to quickly access past results without re-executing queries. Easily reference previous executions, navigate directly to past query profiles, or compare run times and statuses—all within the context of your current query.
Databricks SQL version 2025.20 is now available in Current
July 17, 2025
Databricks SQL version 2025.20 is rolling out in stages to the Current channel. For features and updates in this release, see 2025.20 features.
SQL editor updates
July 17, 2025
- Improvements to named parameters: Date-range and multi-select parameters are now supported.
- Updated header layout in SQL editor: The run button and catalog picker have moved to the header, creating more vertical space for writing queries.
Git support for alerts
July 17, 2025
You can now use Databricks Git folders to track and manage changes to alerts. To track alerts with Git, place them in a Databricks Git folder. Newly cloned alerts appear in the alerts list page and API only after a user interacts with them. Their schedules are paused and must be explicitly resumed by users.
Databricks SQL version 2025.20 is now available in Preview
July 3, 2025
Databricks SQL version 2025.20 is now available in the Preview channel. Review the following section to learn about new features and behavioral changes.
SQL procedure support
SQL scripts can now be encapsulated in a procedure stored as a reusable asset in Unity Catalog.
You can create a procedure using the CREATE PROCEDURE command, and then call it using the CALL command.
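The two commands can be combined as in the following sketch; the procedure name, parameter, and tables are hypothetical, and the full set of CREATE PROCEDURE options is described in the SQL reference:

```sql
-- Hypothetical example: names and schema are illustrative only.
CREATE OR REPLACE PROCEDURE main.default.archive_events(IN cutoff DATE)
LANGUAGE SQL
AS BEGIN
  -- Move rows older than the cutoff into an archive table.
  INSERT INTO main.default.events_archive
    SELECT * FROM main.default.events WHERE event_date < cutoff;
  DELETE FROM main.default.events WHERE event_date < cutoff;
END;

-- Invoke the stored procedure by name.
CALL main.default.archive_events(DATE'2025-01-01');
```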
Set a default collation for SQL Functions
Using the new DEFAULT COLLATION clause in the CREATE FUNCTION command defines the default collation used for STRING parameters, the return type, and STRING literals in the function body.
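A minimal sketch of the clause; the function name and collation choice are illustrative, and exact clause placement follows the CREATE FUNCTION reference:

```sql
-- With UTF8_LCASE as the default collation, STRING comparisons
-- inside the function body are case-insensitive.
CREATE OR REPLACE FUNCTION main.default.is_admin(role STRING)
RETURNS BOOLEAN
DEFAULT COLLATION UTF8_LCASE
RETURN role = 'Admin';  -- matches 'admin', 'ADMIN', etc. under UTF8_LCASE
```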
Recursive common table expressions (rCTE) support
Databricks SQL now supports navigation of hierarchical data using recursive common table expressions (rCTEs). Use a self-referencing CTE with UNION ALL to follow the recursive relationship.
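A minimal self-referencing CTE, assuming the WITH RECURSIVE form; this example simply generates the integers 1 through 5:

```sql
WITH RECURSIVE counter(n) AS (
  SELECT 1          -- anchor member
  UNION ALL
  SELECT n + 1      -- recursive member references the CTE itself
  FROM counter
  WHERE n < 5       -- termination condition
)
SELECT n FROM counter;
```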
ANSI SQL enabled by default
The default SQL dialect is now ANSI SQL. ANSI SQL is a well-established standard and will help protect users from unexpected or incorrect results. Read the Databricks ANSI enablement guide for more information.
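As an illustration of the stricter behavior, some operations that previously returned NULL or wrapped around now raise runtime errors under ANSI SQL:

```sql
-- Under ANSI SQL, these raise errors instead of silently returning NULL:
SELECT 1 / 0;                  -- division-by-zero error
SELECT CAST(1000 AS TINYINT);  -- overflow error (TINYINT max is 127)
```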
Support ALL CATALOGS in SHOW SCHEMAS
The SHOW SCHEMAS syntax is updated to accept the following syntax:
SHOW SCHEMAS [ { FROM | IN } { catalog_name | ALL CATALOGS } ] [ [ LIKE ] pattern ]
When ALL CATALOGS is specified in a SHOW query, the execution iterates through all active catalogs that support namespaces using the catalog manager (DSv2). For each catalog, it includes the top-level namespaces.
The output attributes and schema of the command have been modified to add a catalog column indicating the catalog of the corresponding namespace. The new column is added to the end of the output attributes, as shown below:
Previous output
| Namespace |
|------------------|
| test-namespace-1 |
| test-namespace-2 |
New output
| Namespace | Catalog |
|------------------|----------------|
| test-namespace-1 | test-catalog-1 |
| test-namespace-2 | test-catalog-2 |
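For example, a single statement can now enumerate matching schemas across every active catalog (catalog and pattern names below are illustrative):

```sql
-- List schemas matching 'test*' across all active catalogs.
SHOW SCHEMAS FROM ALL CATALOGS LIKE 'test*';

-- Scope to a single catalog, as before.
SHOW SCHEMAS IN `test-catalog-1`;
```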
Liquid clustering now compacts deletion vectors more efficiently
Delta tables with Liquid clustering now apply physical changes from deletion vectors more efficiently when OPTIMIZE is running.
Allow non-deterministic expressions in UPDATE/INSERT column values for MERGE operations
Databricks SQL now allows the use of non-deterministic expressions in updated and inserted column values of MERGE operations. However, non-deterministic expressions in the conditions of MERGE statements are not supported.
For example, you can now generate dynamic or random values for columns:
MERGE INTO target USING source
ON target.key = source.key
WHEN MATCHED THEN UPDATE SET target.value = source.value + rand()
This can be helpful for data privacy by obfuscating actual data while preserving the data properties (such as mean values or other computed columns).
Support VAR keyword for declaring and dropping SQL variables
SQL syntax for declaring and dropping variables now supports the VAR keyword in addition to VARIABLE. This change unifies the syntax across all variable-related operations, which improves consistency and reduces confusion for users who already use VAR when setting variables.
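With the unified syntax, VAR can be used across the full variable lifecycle; a short sketch:

```sql
DECLARE VAR greeting STRING DEFAULT 'hello';  -- previously required VARIABLE
SET VAR greeting = 'hello, world';            -- SET already accepted VAR
SELECT greeting;
DROP TEMPORARY VAR greeting;                  -- previously required VARIABLE
```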
June 2025
Databricks SQL Serverless engine upgrades
June 11, 2025
The following engine upgrades are now rolling out globally, with availability expanding to all regions over the coming weeks.
- Lower latency: Mixed workloads now run faster, with up to 25% improvement. The upgrade is automatically applied to serverless SQL warehouses with no additional cost or configuration.
- Predictive Query Execution (PQE): PQE monitors tasks in real time and dynamically adjusts query execution to help avoid skew, spills, and unnecessary work.
- Photon vectorized shuffle: Keeps data in compact columnar format, sorts it within the CPU's high-speed cache, and processes multiple values simultaneously using vectorized instructions. This improves throughput for CPU-bound workloads such as large joins and wide aggregations.
User interface updates
June 5, 2025
- Query insights: Visiting the Query History page now emits the listHistoryQueries event. Opening a query profile now emits the getHistoryQuery event.