Databricks SQL release notes 2026
The following Databricks SQL features and improvements were released in 2026.
March 2026
Databricks SQL version 2026.10 is now available in Preview
March 26, 2026
Databricks SQL version 2026.10 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.
Observation metric errors no longer fail queries
Errors during observation metric collection no longer cause query execution failures. Previously, errors in OBSERVE clauses (such as division by zero) could block or fail the entire query. Now, the query completes successfully and the error is raised when you call observation.get.
FILTER clause for MEASURE aggregate functions
MEASURE aggregate functions now support FILTER clauses. Previously, filters were silently ignored.
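A minimal sketch of the new behavior, assuming a hypothetical metric view sales_metrics with a revenue measure and region and order_year dimensions:

```sql
-- Aggregate the measure over a filtered subset of rows.
-- Before this release, the FILTER clause here was silently ignored.
SELECT
  region,
  MEASURE(revenue) FILTER (WHERE order_year = 2025) AS revenue_2025
FROM sales_metrics
GROUP BY region;
```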
Optimized writes for Unity Catalog CRTAS operations
CREATE OR REPLACE TABLE AS SELECT (CRTAS) operations on partitioned Unity Catalog tables now apply optimized writes by default, producing fewer, larger files. To disable, set spark.databricks.delta.optimizeWrite.UCTableCRTAS.enabled to false.
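For example, to opt out for a session and then run the CRTAS statement (table and column names hypothetical):

```sql
-- Disable optimized writes for partitioned Unity Catalog CRTAS in this session
SET spark.databricks.delta.optimizeWrite.UCTableCRTAS.enabled = false;

CREATE OR REPLACE TABLE main.sales.events_by_day
PARTITIONED BY (event_date)
AS SELECT * FROM main.sales.raw_events;
```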
Timestamp partition values use session timezone
Timestamp partition values now use the SQL warehouse session timezone. If you have timestamp partitions written before Databricks SQL version 2025.40, run SHOW PARTITIONS to verify your partition metadata before writing new data.
DESCRIBE FLOW reserved keyword
The DESCRIBE FLOW command is now available, so FLOW is now treated as a keyword after DESCRIBE. If you have a table named flow, qualify or quote the name: use DESCRIBE schema.flow, DESCRIBE TABLE flow, or wrap the name in backticks (DESCRIBE `flow`).
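The workarounds side by side (flow name and schema name hypothetical):

```sql
DESCRIBE FLOW my_flow;     -- new command
DESCRIBE my_schema.flow;   -- qualified table name still resolves to the table
DESCRIBE TABLE flow;       -- explicit TABLE keyword
DESCRIBE `flow`;           -- backticks escape the keyword
```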
SpatialSQL boolean set operations
ST_Difference, ST_Intersection, and ST_Union use a new implementation with the following improvements:
- Valid input geometries always produce a result and no longer raise errors. Invalid inputs don't raise errors but might not produce valid results.
- Approximately 2x faster performance.
- Results can differ after the 15th decimal place for line-segment intersections due to different formulas and order of operations.
- Results are normalized for consistent, comparable output:
  - Points are sorted by coordinate values.
  - Linestrings are built from the longest possible paths.
  - Polygon rings are rotated so the first point has the smallest coordinate values.
  - This normalization applies in all cases except when calling ST_Difference with two non-overlapping geometries, where the first geometry is returned unmodified.
Exception types for SQLSTATE
Exceptions now carry SQLSTATE values. If your code parses exception messages by string matching or catches specific exception types, update your error handling logic accordingly.
DATETIMEOFFSET data type support for Microsoft Azure Synapse
The DATETIMEOFFSET data type is now available for Microsoft Azure Synapse connections.
Google BigQuery table comments
Google BigQuery table descriptions are resolved and exposed as table comments.
Schema evolution with INSERT statements
Use the WITH SCHEMA EVOLUTION clause with SQL INSERT statements to automatically evolve the target table's schema during insert operations. The clause is supported for INSERT INTO, INSERT OVERWRITE, and INSERT INTO ... REPLACE forms. For example:
INSERT WITH SCHEMA EVOLUTION INTO students TABLE visiting_students_with_additional_id;
The target Delta Lake table's schema is updated to accommodate additional columns or widened types from the source. For details, see schema evolution and INSERT statement syntax.
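The SELECT-based form works the same way; a sketch, assuming the source carries a visiting_id column not yet present on the target (table and column names hypothetical):

```sql
INSERT WITH SCHEMA EVOLUTION INTO students
SELECT id, name, visiting_id   -- visiting_id is added to students' schema
FROM visiting_students_with_additional_id;
```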
Preserved NULL struct values in INSERT operations
INSERT operations with schema evolution or implicit casting now preserve NULL struct values when the source and target tables have differing struct field orders.
parse_timestamp SQL function
The parse_timestamp SQL function parses timestamp strings using multiple patterns and runs on the Photon engine for improved performance when parsing timestamps in multiple formats. See Datetime patterns for information about datetime pattern formatting.
max_by and min_by with optional limit
The aggregate functions max_by and min_by now accept an optional third argument limit (up to 100,000). When provided, the functions return an array of up to limit values corresponding to the largest (or smallest) values of the ordering expression, simplifying top-K and bottom-K queries without window functions or CTEs.
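For example, a top-3 query per group without window functions or CTEs (table and column names hypothetical):

```sql
-- Returns an array of up to 3 product names with the highest revenue per region
SELECT region, max_by(product, revenue, 3) AS top3_products
FROM sales
GROUP BY region;
```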
Vector aggregate and scalar functions
New SQL functions operate on ARRAY<FLOAT> vectors for embedding and similarity workloads:
Aggregate functions:
- vector_avg: Returns the element-wise average of vectors in a group.
- vector_sum: Returns the element-wise sum of vectors in a group.
Scalar functions:
- vector_cosine_similarity: Returns the cosine similarity of two vectors.
- vector_inner_product: Returns the inner (dot) product of two vectors.
- vector_l2_distance: Returns the Euclidean (L2) distance between two vectors.
- vector_norm: Returns the Lp norm of a vector (p = 1, 2, or infinity).
- vector_normalize: Returns a vector normalized to unit length.
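A sketch combining the aggregate and scalar forms, assuming a hypothetical documents table with an ARRAY&lt;FLOAT&gt; embedding column:

```sql
-- Average embedding per category, then its similarity to a stand-in query vector
SELECT
  category,
  vector_cosine_similarity(
    vector_avg(embedding),
    CAST(ARRAY(0.1, 0.2, 0.3) AS ARRAY<FLOAT>)
  ) AS sim_to_query
FROM documents
GROUP BY category;
```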
See Built-in functions.
SQL cursor support in compound statements
SQL scripting compound statements now support cursor processing. Use DECLARE CURSOR to define a cursor, then the OPEN, FETCH, and CLOSE statements to run the query and consume rows one at a time. Cursors can use parameter markers and condition handlers such as NOT FOUND for row-by-row processing.
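A minimal sketch of the shape, assuming a hypothetical customers table; the exact loop and NOT FOUND handler syntax should be checked against the SQL scripting documentation:

```sql
BEGIN
  DECLARE cur_id BIGINT;
  DECLARE cur_name STRING;
  DECLARE c CURSOR FOR SELECT id, name FROM my_schema.customers;
  OPEN c;
  FETCH c INTO cur_id, cur_name;  -- fetch the first row
  -- ...process the row, typically inside a loop guarded by a NOT FOUND handler...
  CLOSE c;
END;
```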
Approximate top-k sketch functions
New functions enable building and combining approximate top-K sketches for distributed top-K aggregation:
- approx_top_k_accumulate: Builds a sketch per group.
- approx_top_k_combine: Merges sketches.
- approx_top_k_estimate: Returns the top K items with estimated counts.
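A sketch of a distributed top-K flow: build a sketch per partition, merge the sketches, then estimate. The argument shapes (the K parameter on each call) and table names here are assumptions:

```sql
-- One sketch per day, merged into an overall top-10 estimate
WITH daily AS (
  SELECT approx_top_k_accumulate(item, 10) AS sketch
  FROM events
  GROUP BY event_date
)
SELECT approx_top_k_estimate(approx_top_k_combine(sketch), 10)
FROM daily;
```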
For more information, see approx_top_k aggregate function and Built-in functions.
Tuple sketch functions
New aggregate and scalar functions for tuple sketches support distinct counting and aggregation over key-summary pairs.
Aggregate functions:
- tuple_sketch_agg_double aggregate function
- tuple_sketch_agg_integer aggregate function
- tuple_union_agg_double aggregate function
- tuple_union_agg_integer aggregate function
- tuple_intersection_agg_double aggregate function
- tuple_intersection_agg_integer aggregate function
Scalar functions:
- tuple_sketch_estimate
- tuple_sketch_summary
- tuple_sketch_theta
- tuple_union
- tuple_intersection
- tuple_difference
See Built-in functions.
Custom dependencies for Unity Catalog Python UDTFs
Unity Catalog Python user-defined table functions (UDTFs) can now use custom dependencies for external libraries, so you can use packages beyond what's available in the default SQL warehouse environment. See Extend UDFs using custom dependencies.
New geospatial functions
The following geospatial functions are now available:
- st_estimatesrid function: Estimates the best projected spatial reference identifier (SRID) for an input geometry.
- st_force2d function: Converts a geography or geometry to its 2D representation.
- st_nrings function: Counts the total number of rings in a polygon or multipolygon, including both exterior and interior rings.
- st_numpoints function: Counts the number of non-empty points in a geography or geometry.
Photon support for geospatial functions
The following geospatial functions now run on the Photon engine for faster performance:
February 2026
Databricks SQL version 2025.40 is rolling out in Current
February 23, 2026
Databricks SQL version 2025.40 is rolling out to the Current channel. See features in 2025.40.
Databricks SQL version 2025.40 is now available in Preview
February 11, 2026
Databricks SQL version 2025.40 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.
SQL scripting is generally available
SQL scripting is now generally available. Write procedural logic with SQL, including conditional statements, loops, local variables, and exception handling.
Parameter markers now supported in more SQL contexts
You can now use named (:param) and unnamed (?) parameter markers anywhere a literal value of the appropriate type is allowed. This includes DDL statements such as CREATE VIEW v AS SELECT ? AS c1, column types such as DECIMAL(:p, :s), and comments such as COMMENT ON t IS :comment. This allows you to parameterize a large variety of SQL statements without exposing your code to SQL injection attacks. See Parameter markers.
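One way to bind such markers from SQL itself is EXECUTE IMMEDIATE; a sketch of both new contexts (object and parameter names hypothetical):

```sql
-- Unnamed marker in a DDL statement, bound positionally
EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v AS SELECT ? AS c1' USING (42);

-- Named markers parameterizing a column type
EXECUTE IMMEDIATE
  'CREATE OR REPLACE TABLE t (amount DECIMAL(:p, :s))'
  USING (10 AS p, 2 AS s);
```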
IDENTIFIER clause expanded to more SQL contexts
The IDENTIFIER clause, which casts strings to SQL object names, is now supported in nearly every context where an identifier is permitted. Combined with expanded parameter marker and literal string coalescing support, you can parameterize anything from column aliases (AS IDENTIFIER(:name)) to column definitions (IDENTIFIER(:pk) BIGINT NOT NULL). See IDENTIFIER clause.
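For example, parameterizing a column definition with a bound name (statement text, table, and parameter names hypothetical):

```sql
-- The string bound to :pk becomes the column name
EXECUTE IMMEDIATE
  'CREATE OR REPLACE TABLE t (IDENTIFIER(:pk) BIGINT NOT NULL)'
  USING ('order_id' AS pk);
```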
Literal string coalescing supported everywhere
Sequential string literals such as 'Hello' ' World' now coalesce into 'Hello World' in any context where string literals are allowed, including COMMENT 'This' ' is a ' 'comment'. See STRING type.
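For example (table name hypothetical):

```sql
SELECT 'Hello' ' World' AS greeting;          -- returns 'Hello World'
COMMENT ON TABLE t IS 'This' ' is a ' 'comment';
```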
New BITMAP_AND_AGG function
A new BITMAP_AND_AGG function is now available to complement the existing library of BITMAP functions.
New Theta Sketch functions for approximate distinct counts
A new library of functions for approximate distinct count and set operations using Datasketches Theta Sketch is now available:
- theta_sketch_agg aggregate function
- theta_union_agg aggregate function
- theta_intersection_agg aggregate function
- theta_sketch_estimate function
- theta_union function
- theta_difference function
- theta_intersection function
New KLL Sketch functions for approximate quantiles
A new library of functions for building KLL Sketches for approximate quantile computation is now available:
- kll_sketch_agg_bigint aggregate function
- kll_sketch_get_quantile_bigint function
- kll_sketch_merge_bigint function
- kll_sketch_agg_double aggregate function
- kll_sketch_get_quantile_double function
- kll_sketch_merge_double function
- kll_sketch_agg_float aggregate function
- kll_sketch_get_quantile_float function
- kll_sketch_merge_float function
- kll_sketch_get_n_bigint function
- kll_sketch_get_rank_bigint function
- kll_sketch_to_string_bigint function
- kll_sketch_get_n_double function
- kll_sketch_get_rank_double function
- kll_sketch_to_string_double function
- kll_sketch_get_n_float function
- kll_sketch_get_rank_float function
- kll_sketch_to_string_float function
You can merge multiple KLL sketches in an aggregation context using kll_merge_agg_bigint, kll_merge_agg_double, and kll_merge_agg_float.
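A sketch of the two-stage pattern: build per-group sketches, merge them, then read a quantile. The exact argument shapes are assumptions, and the table and column names are hypothetical:

```sql
-- Per-day latency sketches, merged into an overall p99 estimate
WITH daily AS (
  SELECT kll_sketch_agg_double(latency_ms) AS sk
  FROM requests
  GROUP BY request_date
)
SELECT kll_sketch_get_quantile_double(kll_merge_agg_double(sk), 0.99) AS p99
FROM daily;
```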
SQL window functions in metric views
You can now use SQL window functions in metric views to calculate running totals, rankings, and other window-based calculations.
New geospatial functions
The following new geospatial functions are now available:
- st_azimuth function: Returns the north-based azimuth from the first point to the second in radians in [0, 2π).
- st_boundary function: Returns the boundary of the input geometry.
- st_closestpoint function: Returns the 2D projection of a point on the first geometry that is closest to the second geometry.
- st_geogfromewkt function: Parses an Extended Well-Known Text (EWKT) description of a geography.
- st_geomfromewkt function: Parses an Extended Well-Known Text (EWKT) description of a geometry.
EWKT input support for existing geometry and geography functions
The following functions now accept Extended Well-Known Text (EWKT) as input:
Improved performance for repeated queries over tables with row filters and column masks
Repeated eligible queries over tables with row filters and column masks now benefit from improved query result caching, resulting in faster execution times.
Improved geospatial function performance
Spatial join performance is improved with shuffled spatial join support. The following ST functions now have Photon implementations:
FSCK REPAIR TABLE includes metadata repair by default
FSCK REPAIR TABLE now includes an initial metadata repair step before checking for missing data files, allowing it to work on tables with corrupt checkpoints or invalid partition values. Additionally, the dataFilePath column in the FSCK REPAIR TABLE DRY RUN output schema is now nullable to support new issue types where the data file path is not applicable.
DESCRIBE TABLE output includes metadata column
The output of DESCRIBE TABLE [EXTENDED] now includes a metadata column for all table types. This column contains semantic metadata (display name, format, and synonyms) defined on the table as a JSON string.
NULL structs preserved in MERGE, UPDATE, and streaming write operations
NULL structs are now preserved as NULL in Delta Lake MERGE, UPDATE, and streaming write operations that include struct type casts. Previously, NULL structs were expanded to structs with all fields set to NULL.
Partition columns materialized in Parquet files
Partitioned Delta Lake tables now materialize partition columns in newly written Parquet data files. Previously, partition values were stored only in the Delta Lake transaction log metadata. Workloads that directly read Parquet files written by Delta Lake see additional partition columns in newly written files.
Timestamp partition values respect session timezone
Timestamp partition values are now correctly adjusted using the spark.sql.session.timeZone configuration. Previously, they were incorrectly converted to UTC using the JVM timezone.
Time travel restrictions updated
Databricks now blocks time travel queries beyond the deletedFileRetentionDuration threshold for all tables. The VACUUM command ignores the retention duration argument except when the value is 0 hours. You cannot set deletedFileRetentionDuration larger than logRetentionDuration.
SHOW TABLES DROPPED respects LIMIT clause
SHOW TABLES DROPPED now correctly respects the LIMIT clause.