
Databricks SQL release notes

Applies to: AWS, GCP, and Azure

The following Databricks SQL features and improvements were released recently.

March 2026

Databricks SQL version 2026.10 is now available in Preview

March 26, 2026

Databricks SQL version 2026.10 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.

Observation metric errors no longer fail queries

Errors during observation metric collection no longer cause query execution failures. Previously, errors in OBSERVE clauses (such as division by zero) could block or fail the entire query. Now, the query completes successfully and the error is raised when you call observation.get.

FILTER clause for MEASURE aggregate functions

MEASURE aggregate functions now support FILTER clauses. Previously, filters were silently ignored.

Optimized writes for Unity Catalog CRTAS operations

CREATE OR REPLACE TABLE AS SELECT (CRTAS) operations on partitioned Unity Catalog tables now apply optimized writes by default, producing fewer, larger files. To disable, set spark.databricks.delta.optimizeWrite.UCTableCRTAS.enabled to false.
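To opt out of the new default for a session, disable the flag named above:

SET spark.databricks.delta.optimizeWrite.UCTableCRTAS.enabled = false;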

Timestamp partition values use session timezone

Timestamp partition values now use the SQL warehouse session timezone. If you have timestamp partitions written before Databricks SQL version 2025.40, run SHOW PARTITIONS to verify your partition metadata before writing new data.

DESCRIBE FLOW reserved keyword

The DESCRIBE FLOW command is now available. If you have a table named flow, use DESCRIBE schema.flow, DESCRIBE TABLE flow, or DESCRIBE `flow` with backticks.
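For example, assuming a hypothetical schema my_schema that contains a table named flow, each of the following still describes the table rather than a flow:

DESCRIBE my_schema.flow;
DESCRIBE TABLE flow;
DESCRIBE `flow`;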

SpatialSQL boolean set operations

ST_Difference, ST_Intersection, and ST_Union use a new implementation with approximately 2x faster performance. Valid input geometries always produce a result and no longer raise errors. Results are normalized for consistent, comparable output.

Exception types for SQLSTATE

Exception types now include SQLSTATE values. If your code parses exception messages with string matching or catches specific exception types, update your error handling logic accordingly.

Schema evolution with INSERT statements

Use the WITH SCHEMA EVOLUTION clause with SQL INSERT statements to automatically evolve the target table's schema during insert operations. The clause is supported for INSERT INTO, INSERT OVERWRITE, and INSERT INTO ... REPLACE forms. The target Delta Lake table's schema is updated to accommodate additional columns or widened types from the source.
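A minimal sketch, assuming hypothetical tables target and source where source carries an extra column new_col; the clause placement shown here is an assumption modeled on the MERGE WITH SCHEMA EVOLUTION convention:

INSERT INTO target WITH SCHEMA EVOLUTION
SELECT id, name, new_col FROM source;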

Preserved NULL struct values in INSERT operations

INSERT operations with schema evolution or implicit casting now preserve NULL struct values when the source and target tables have differing struct field orders.

parse_timestamp SQL function

The parse_timestamp SQL function parses timestamp strings using multiple patterns and runs on the Photon engine for improved performance when parsing timestamps in multiple formats.

max_by and min_by with optional limit

The aggregate functions max_by and min_by now accept an optional third argument limit (up to 100,000). When provided, the functions return an array of up to limit values corresponding to the largest (or smallest) values of the ordering expression, simplifying top-K and bottom-K queries without window functions or CTEs.
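For example, a top-3-per-group query over a hypothetical sales table:

SELECT store,
       max_by(product, revenue, 3) AS top_products_by_revenue
FROM sales
GROUP BY store;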

Vector aggregate and scalar functions

New SQL functions operate on ARRAY<FLOAT> vectors for embedding and similarity workloads:

Aggregate functions:

  • vector_avg: Returns the element-wise average of vectors in a group.
  • vector_sum: Returns the element-wise sum of vectors in a group.

Scalar functions:

  • vector_cosine_similarity: Returns the cosine similarity of two vectors.
  • vector_inner_product: Returns the inner (dot) product of two vectors.
  • vector_l2_distance: Returns the Euclidean (L2) distance between two vectors.
  • vector_norm: Returns the Lp norm of a vector (1, 2, or infinity).
  • vector_normalize: Returns a vector normalized to unit length.
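A short sketch combining both kinds, assuming a hypothetical embeddings table with a label column and an ARRAY<FLOAT> column vec; argument shapes are inferred from the descriptions above:

-- per-label centroid, scaled to unit length
SELECT label, vector_normalize(vector_avg(vec)) AS centroid
FROM embeddings
GROUP BY label;

-- similarity of each row to a fixed query vector
SELECT id, vector_cosine_similarity(vec, array(0.1F, 0.2F, 0.3F)) AS sim
FROM embeddings
ORDER BY sim DESC;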

SQL cursor support in compound statements

Compound statements now support cursor processing. Use DECLARE CURSOR to define a cursor, then OPEN, FETCH, and CLOSE to run the query and consume rows one at a time. Cursors can use parameter markers and condition handlers such as NOT FOUND for row-by-row processing.
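A sketch of the shape, modeled on SQL/PSM conventions; the exact handler and assignment syntax is an assumption, and my_table is hypothetical:

BEGIN
  DECLARE done BOOLEAN DEFAULT false;
  DECLARE v_name STRING;
  DECLARE c CURSOR FOR SELECT name FROM my_table;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = true;
  OPEN c;
  fetch_loop: LOOP
    FETCH c INTO v_name;
    IF done THEN
      LEAVE fetch_loop;
    END IF;
    -- process v_name one row at a time here
  END LOOP fetch_loop;
  CLOSE c;
END;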

Approximate top-k sketch functions

New functions enable building and combining approximate top-K sketches for distributed top-K aggregation:

  • approx_top_k_accumulate: Builds a sketch per group.
  • approx_top_k_combine: Merges sketches.
  • approx_top_k_estimate: Returns the top K items with estimated counts.
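A typical two-phase pattern, assuming a hypothetical events table with item and shard columns; the argument shapes are inferred from the descriptions above:

WITH partial AS (
  SELECT approx_top_k_accumulate(item) AS sk
  FROM events
  GROUP BY shard
)
SELECT approx_top_k_estimate(approx_top_k_combine(sk), 10) AS top_10
FROM partial;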

Tuple sketch functions

New aggregate and scalar functions for tuple sketches support distinct counting and aggregation over key-summary pairs.

Custom dependencies for Unity Catalog Python UDTFs

Unity Catalog Python user-defined table functions (UDTFs) can now use custom dependencies for external libraries, so you can use packages beyond what's available in the default SQL warehouse environment.

New geospatial functions

The following geospatial functions are now available:

  • st_estimatesrid: Estimates the best projected spatial reference identifier (SRID) for an input geometry.
  • st_force2d: Converts a geography or geometry to its 2D representation.
  • st_nrings: Counts the total number of rings in a polygon or multipolygon.
  • st_numpoints: Counts the number of non-empty points in a geography or geometry.

Photon support for geospatial functions

The following geospatial functions now run on the Photon engine for faster performance: st_difference, st_intersection, and st_union.

February 2026

Databricks SQL version 2025.40 is rolling out in Current

February 23, 2026

Databricks SQL version 2025.40 is rolling out to the Current channel.

Databricks SQL version 2025.40 is now available in Preview

February 11, 2026

Databricks SQL version 2025.40 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.

SQL scripting is generally available

SQL scripting is now generally available. Write procedural logic with SQL, including conditional statements, loops, local variables, and exception handling.

Parameter markers now supported in more SQL contexts

You can now use named (:param) and unnamed (?) parameter markers anywhere a literal value of the appropriate type is allowed. This includes DDL statements such as CREATE VIEW v AS SELECT ? AS c1, column types such as DECIMAL(:p, :s), and comments such as COMMENT ON t IS :comment.
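For example, the DDL case above can be driven through EXECUTE IMMEDIATE, binding values at run time; the view and table names are hypothetical:

EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v AS SELECT ? AS c1' USING 42;
EXECUTE IMMEDIATE 'COMMENT ON TABLE t IS :comment' USING 'nightly load' AS comment;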

IDENTIFIER clause expanded to more SQL contexts

The IDENTIFIER clause, which casts strings to SQL object names, is now supported in nearly every context where an identifier is permitted. Combined with expanded parameter marker and literal string coalescing support, you can parameterize anything from column aliases (AS IDENTIFIER(:name)) to column definitions (IDENTIFIER(:pk) BIGINT NOT NULL).
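A small sketch using a session variable rather than a parameter marker, with hypothetical names:

DECLARE tab STRING DEFAULT 'main.default.events';
CREATE TABLE IDENTIFIER(tab) (id BIGINT NOT NULL, name STRING);
INSERT INTO IDENTIFIER(tab) VALUES (1, 'first');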

Literal string coalescing supported everywhere

Sequential string literals such as 'Hello' ' World' now coalesce into 'Hello World' in any context where string literals are allowed, including COMMENT 'This' ' is a ' 'comment'.

New BITMAP_AND_AGG function

A new BITMAP_AND_AGG function is now available to complement the existing library of BITMAP functions.

New Theta Sketch functions for approximate distinct counts

A new library of functions for approximate distinct count and set operations using Datasketches Theta Sketch is now available, including theta_sketch_agg, theta_union_agg, theta_intersection_agg, and related functions.

New KLL Sketch functions for approximate quantiles

A new library of functions for building KLL Sketches for approximate quantile computation is now available, including kll_sketch_agg_bigint, kll_sketch_agg_double, kll_sketch_agg_float, and related merge and query functions.

SQL window functions in metric views

You can now use SQL window functions in metric views to calculate running totals, rankings, and other window-based calculations.

New geospatial functions

The following new geospatial functions are now available:

  • st_azimuth: Returns the north-based azimuth from the first point to the second in radians.
  • st_boundary: Returns the boundary of the input geometry.
  • st_closestpoint: Returns the 2D projection of a point on the first geometry closest to the second geometry.
  • st_geogfromewkt: Parses an Extended Well-Known Text (EWKT) description of a geography.
  • st_geomfromewkt: Parses an Extended Well-Known Text (EWKT) description of a geometry.

EWKT input support for existing geometry and geography functions

The try_to_geography, try_to_geometry, to_geography, and to_geometry functions now accept Extended Well-Known Text (EWKT) as input.

Improved performance for repeated queries over tables with row filters and column masks

Repeated eligible queries over tables with row filters and column masks now benefit from improved query result caching, resulting in faster execution times.

Improved geospatial function performance

Spatial join performance is improved with shuffled spatial join support. The st_isvalid, st_makeline, and st_makepolygon functions now have Photon implementations.

FSCK REPAIR TABLE includes metadata repair by default

FSCK REPAIR TABLE now includes an initial metadata repair step before checking for missing data files, allowing it to work on tables with corrupt checkpoints or invalid partition values.

DESCRIBE TABLE output includes metadata column

The output of DESCRIBE TABLE [EXTENDED] now includes a metadata column for all table types. This column contains semantic metadata (display name, format, and synonyms) defined on the table as a JSON string.

NULL structs preserved in MERGE, UPDATE, and write operations

NULL structs are now preserved as NULL in Delta Lake MERGE, UPDATE, and write operations that include struct type casts. Previously, NULL structs were expanded to structs with all fields set to NULL.

Partition columns materialized in Parquet files

Partitioned Delta Lake tables now materialize partition columns in newly written Parquet data files. Previously, partition values were stored only in the Delta Lake transaction log metadata.

Timestamp partition values respect session timezone

Timestamp partition values are now correctly adjusted using the spark.sql.session.timeZone configuration. Previously, they were incorrectly converted to UTC using the JVM timezone.

Time travel restrictions updated

Databricks now blocks time travel queries beyond the deletedFileRetentionDuration threshold for all tables. The VACUUM command ignores the retention duration argument except when the value is 0 hours. You cannot set deletedFileRetentionDuration larger than logRetentionDuration.

SHOW TABLES DROPPED respects LIMIT clause

SHOW TABLES DROPPED now correctly respects the LIMIT clause.

November 2025

Databricks SQL alerts are now in Preview

November 14, 2025

The latest version of Databricks SQL alerts, with a new editing experience, is now in Preview.

SQL Editor visualization fix

November 6, 2025

Resolved an issue where tooltips were hidden behind the legend in Notebook and SQL Editor visualizations.

October 2025

Databricks SQL version 2025.35 is now available in Preview

October 30, 2025

Databricks SQL version 2025.35 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.

EXECUTE IMMEDIATE using constant expressions

You can now pass constant expressions as the SQL string and as arguments to parameter markers in EXECUTE IMMEDIATE statements.
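For example, both the statement string and the arguments can now be constant expressions rather than single literals; my_table is hypothetical:

EXECUTE IMMEDIATE 'SELECT * FROM ' || 'my_table' || ' WHERE id = ?' USING 1 + 2;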

LIMIT ALL support for recursive CTEs

You can now use LIMIT ALL to remove the total size restriction on recursive common table expressions (CTEs).
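A sketch, assuming the clause is attached to the recursive CTE body; the exact placement may differ:

WITH RECURSIVE nums AS (
  SELECT 1 AS n
  UNION ALL
  SELECT n + 1 FROM nums WHERE n < 2000000
  LIMIT ALL
)
SELECT max(n) FROM nums;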

st_dump function support

You can now use the st_dump function to get an array containing the single geometries of the input geometry.

Polygon interior ring functions are now supported

You can now use the following functions to work with polygon interior rings:

  • st_numinteriorrings: Get the number of inner boundaries (rings) of a polygon.
  • st_interiorringn: Extract the n-th inner boundary of a polygon and return it as a linestring.

Add metadata column to DESCRIBE QUERY and DESCRIBE TABLE

Databricks now includes a metadata column in the output of DESCRIBE QUERY and DESCRIBE TABLE for semantic metadata.

For DESCRIBE QUERY, when describing a query with metric views, semantic metadata propagates through the query if dimensions are directly referenced and measures use the MEASURE() function.

For DESCRIBE TABLE, the metadata column appears only for metric views, not other table types.

Default mode change for FSCK REPAIR TABLE command

The FSCK REPAIR TABLE command now includes an initial metadata repair step that validates checkpoints and partition values before removing references to missing data files.

Correct handling of null structs when dropping NullType columns

When writing to Delta tables, Databricks now correctly preserves null struct values when dropping NullType columns from the schema. Previously, null structs were incorrectly replaced with non-null struct values where all fields were set to null.

New alert editing experience

October 20, 2025

Creating or editing an alert now opens in the new multi-tab editor, providing a unified editing workflow.

Visualizations fix

October 9, 2025

Legend selection now works correctly for charts with aliased series names in SQL editor and notebooks.

September 2025

Databricks SQL version 2025.30 is now available in Preview

September 25, 2025

Databricks SQL version 2025.30 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.

UTF8 based collations now support LIKE operator

You can now use LIKE with columns that have one of the following collations enabled: UTF8_Binary, UTF8_Binary_RTRIM, UTF8_LCASE, UTF8_LCASE_RTRIM.

ST_ExteriorRing function is now supported

You can now use the ST_ExteriorRing function to extract the outer boundary of a polygon and return it as a linestring.

Declare multiple session or local variables in a single DECLARE statement

You can now declare multiple session or local variables of the same type and default value in a single DECLARE statement.
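For example, three integer counters initialized in one statement:

DECLARE x, y, z INT DEFAULT 0;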

Support TEMPORARY keyword for metric view creation

You can now use the TEMPORARY keyword when creating a metric view. Temporary metric views are visible only in the session that created them and are dropped when the session ends.

DESCRIBE CONNECTION shows environment settings for JDBC connections

Databricks now includes user-defined environment settings in the DESCRIBE CONNECTION output for JDBC connections that support custom drivers and run in isolation. Other connection types remain unchanged.

Correct results for split with empty regex and positive limit

Databricks now returns correct results when using the split function with an empty regex and a positive limit. Previously, the function incorrectly truncated the remaining string instead of including it in the last element.

Fix url_decode and try_url_decode error handling in Photon

In Photon, try_url_decode() and url_decode() with failOnError = false now return NULL for invalid URL-encoded strings instead of failing the query.

August 2025

Default warehouse setting is now available in Preview

August 28, 2025

Set a default warehouse that will be automatically selected in the compute selector across the SQL editor, Alerts, and Catalog Explorer. Individual users can override this setting by selecting a different warehouse before running a query. They can also define their own user-level default warehouse to apply across their sessions.

Databricks SQL version 2025.25 is rolling out in Current

August 21, 2025

Databricks SQL version 2025.25 is rolling out to the Current channel from August 20 to August 28, 2025. See features in 2025.25.

Databricks SQL version 2025.25 is now available in Preview

August 14, 2025

Databricks SQL version 2025.25 is now available in the Preview channel. Review the following section to learn about new features and behavioral changes.

Recursive common table expressions (rCTE) are generally available

Recursive common table expressions (rCTEs) are generally available. Navigate hierarchical data using a self-referencing CTE with UNION ALL to follow the recursive relationship.
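A minimal hierarchy walk, assuming a hypothetical employees table with id, manager_id, and name columns:

WITH RECURSIVE org AS (
  SELECT id, manager_id, name
  FROM employees
  WHERE manager_id IS NULL              -- anchor: the root of the hierarchy
  UNION ALL
  SELECT e.id, e.manager_id, e.name
  FROM employees e
  JOIN org o ON e.manager_id = o.id     -- recursive step: direct reports
)
SELECT * FROM org;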

Support for schema and catalog level default collation

You can now set a default collation for schemas and catalogs. This allows you to define a collation that applies to all objects created within the schema or catalog, ensuring consistent collation behavior across your data.

Support for Spatial SQL expressions and GEOMETRY and GEOGRAPHY data types

You can now store geospatial data in built-in GEOMETRY and GEOGRAPHY columns for improved performance of spatial queries. This release adds more than 80 new spatial SQL expressions, including functions for importing, exporting, measuring, constructing, editing, validating, transforming, and determining topological relationships with spatial joins.

Better handling of JSON options with VARIANT

The from_json and to_json functions now correctly apply JSON options when working with top-level VARIANT schemas. This ensures consistent behavior with other supported data types.

Support for TIMESTAMP WITHOUT TIME ZONE syntax

You can now specify TIMESTAMP WITHOUT TIME ZONE instead of TIMESTAMP_NTZ. This change improves compatibility with the SQL Standard.

Resolved subquery correlation issue

Databricks no longer incorrectly correlates semantically equal aggregate expressions between a subquery and its outer query. Previously, this could lead to incorrect query results.

Error thrown for invalid CHECK constraints

Databricks now throws an AnalysisException if a CHECK constraint expression cannot be resolved during constraint validation.

New SQL editor is generally available

August 14, 2025

The new SQL editor is now generally available. The new SQL editor provides a unified authoring environment with support for multiple statement results, inline execution history, real-time collaboration, enhanced Databricks Assistant integration, and additional productivity features.

July 2025

Preset date ranges for parameters in the SQL editor

July 31, 2025

In the new SQL editor, you can now choose from preset date ranges, such as This week, Last 30 days, or Last year, when using timestamp, date, and date range parameters. These presets make it faster to apply common time filters without manually entering dates.

Inline execution history in SQL editor

July 24, 2025

Inline execution history is now available in the new SQL editor, allowing you to quickly access past results without re-executing queries. Easily reference previous executions, navigate directly to past query profiles, or compare run times and statuses—all within the context of your current query.

Databricks SQL version 2025.20 is now available in Current

July 17, 2025

Databricks SQL version 2025.20 is rolling out in stages to the Current channel. For features and updates in this release, see 2025.20 features.

SQL editor updates

July 17, 2025

  • Improvements to named parameters: Date-range and multi-select parameters are now supported.

  • Updated header layout in SQL editor: The run button and catalog picker have moved to the header, creating more vertical space for writing queries.

Git support for alerts

July 17, 2025

You can now use Databricks Git folders to track and manage changes to alerts. To track alerts with Git, place them in a Databricks Git folder. Newly cloned alerts appear in the alerts list page or API only after a user interacts with them. Cloned alerts are created with paused schedules, which users must explicitly resume.

Databricks SQL version 2025.20 is now available in Preview

July 3, 2025

Databricks SQL version 2025.20 is now available in the Preview channel. Review the following section to learn about new features and behavioral changes.

SQL procedure support

SQL scripts can now be encapsulated in a procedure stored as a reusable asset in Unity Catalog. You can create a procedure using the CREATE PROCEDURE command, and then call it using the CALL command.
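A minimal sketch, with hypothetical catalog, schema, and table names; optional clauses are omitted:

CREATE OR REPLACE PROCEDURE main.default.log_event(IN msg STRING)
LANGUAGE SQL
AS BEGIN
  INSERT INTO main.default.event_log VALUES (current_timestamp(), msg);
END;

CALL main.default.log_event('backfill started');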

Set a default collation for SQL Functions

Using the new DEFAULT COLLATION clause in the CREATE FUNCTION command defines the default collation used for STRING parameters, the return type, and STRING literals in the function body.
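A sketch, assuming the clause sits alongside the other function properties; the function name is hypothetical:

CREATE OR REPLACE FUNCTION greet(name STRING)
RETURNS STRING
DEFAULT COLLATION UTF8_LCASE
RETURN 'Hello, ' || name;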

Recursive common table expressions (rCTE) support

Databricks now supports navigation of hierarchical data using recursive common table expressions (rCTEs). Use a self-referencing CTE with UNION ALL to follow the recursive relationship.

Support ALL CATALOGS in SHOW SCHEMAS

The SHOW SCHEMAS syntax is updated to accept the following syntax:

SHOW SCHEMAS [ { FROM | IN } { catalog_name | ALL CATALOGS } ] [ [ LIKE ] pattern ]

When ALL CATALOGS is specified in a SHOW SCHEMAS query, execution iterates through all active catalogs that support namespaces using the catalog manager (DSv2). For each catalog, the top-level namespaces are included.

The output attributes and schema of the command have been modified to add a catalog column indicating the catalog of the corresponding namespace. The new column is added to the end of the output attributes, as shown below:

Previous output

| Namespace        |
|------------------|
| test-namespace-1 |
| test-namespace-2 |

New output

| Namespace        | Catalog        |
|------------------|----------------|
| test-namespace-1 | test-catalog-1 |
| test-namespace-2 | test-catalog-2 |

Liquid clustering now compacts deletion vectors more efficiently

Delta tables with Liquid clustering now apply physical changes from deletion vectors more efficiently when OPTIMIZE is running.

Allow non-deterministic expressions in UPDATE/INSERT column values for MERGE operations

Databricks now allows the use of non-deterministic expressions in updated and inserted column values of MERGE operations. However, non-deterministic expressions in the conditions of MERGE statements are not supported.

For example, you can now generate dynamic or random values for columns:

MERGE INTO target USING source
ON target.key = source.key
WHEN MATCHED THEN UPDATE SET target.value = source.value + rand()

This can be helpful for data privacy by obfuscating actual data while preserving the data properties (such as mean values or other computed columns).

Support VAR keyword for declaring and dropping SQL variables

SQL syntax for declaring and dropping variables now supports the VAR keyword in addition to VARIABLE. This change unifies the syntax across all variable-related operations, which improves consistency and reduces confusion for users who already use VAR when setting variables.
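For example, the two spellings are now interchangeable:

DECLARE VAR counter INT DEFAULT 0;
DROP TEMPORARY VAR counter;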

June 2025

Databricks SQL Serverless engine upgrades

June 11, 2025

The following engine upgrades are now rolling out globally, with availability expanding to all regions over the coming weeks.

  • Lower latency: Mixed workloads now run faster, with up to 25% improvement. The upgrade is automatically applied to serverless SQL warehouses with no additional cost or configuration.
  • Predictive Query Execution (PQE): PQE monitors tasks in real time and dynamically adjusts query execution to help avoid skew, spills, and unnecessary work.
  • Photon vectorized shuffle: Keeps data in compact columnar format, sorts it within the CPU's high-speed cache, and processes multiple values simultaneously using vectorized instructions. This improves throughput for CPU-bound workloads such as large joins and wide aggregation.

User interface updates

June 5, 2025

  • Query insights: Visiting the Query History page now emits the listHistoryQueries event. Opening a query profile now emits the getHistoryQuery event.