Serverless compute release notes
Applies to: AWS, GCP, and Azure
This section includes release notes for serverless compute. Release notes are organized by release version and date. Serverless compute always runs using the most recently released version listed here.
Version 17.3
October 28, 2025
This serverless compute release roughly corresponds to Databricks Runtime 17.3 LTS.
New features
- LIMIT ALL support for recursive CTEs: You can now use the `LIMIT ALL` clause with recursive common table expressions (rCTEs) to explicitly specify that no row limit should be applied to the query results (see the sketch after this list).
- Appending to files in Unity Catalog volumes returns correct error: Attempting to append to existing files in Unity Catalog volumes now returns a more descriptive error message to help you understand and resolve the issue.
- `st_dump` function support: You can now use the `st_dump` function to decompose a geometry object into its constituent parts, returning a set of simpler geometries.
- Polygon interior ring functions are now supported: You can now use the following functions to work with polygon interior rings:
  - `st_numinteriorrings`: Get the number of inner boundaries (rings) of a polygon.
  - `st_interiorringn`: Extract the n-th inner boundary of a polygon and return it as a linestring.
- EXECUTE IMMEDIATE using constant expressions: The `EXECUTE IMMEDIATE` statement now supports using constant expressions in the query string, allowing for more flexible dynamic SQL execution.
- Allow `spark.sql.files.maxPartitionBytes` in serverless compute: You can now configure the `spark.sql.files.maxPartitionBytes` Spark configuration parameter on serverless compute to control the maximum number of bytes to pack into a single partition when reading files (also shown in the sketch after this list).
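A minimal sketch of two of these additions, assuming a serverless notebook where `spark` is predefined; the counter CTE and the 256MB setting are illustrative values only.

```python
# LIMIT ALL explicitly states that no row limit applies to the recursive query's results.
spark.sql("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 100
    )
    SELECT n FROM counter
    LIMIT ALL
""").show()

# spark.sql.files.maxPartitionBytes can now be set on serverless compute to control how
# many bytes are packed into a single partition when reading files (value is illustrative).
spark.conf.set("spark.sql.files.maxPartitionBytes", "256MB")
```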
Behavior changes
- Add metadata column to DESCRIBE QUERY and DESCRIBE TABLE: The `DESCRIBE QUERY` and `DESCRIBE TABLE` commands now include a metadata column in their output, providing additional information about each column's properties and characteristics (see the example after this list).
- Default mode change for FSCK REPAIR TABLE command: The default mode for the `FSCK REPAIR TABLE` command has changed to provide more consistent behavior when repairing table metadata.
- Correct handling of null structs when dropping NullType columns: SAP Databricks now correctly handles null struct values when dropping columns with `NullType`, preventing potential data corruption or unexpected behavior.
- Improved handling of null structs in Parquet: This release includes improvements to how null struct values are handled when reading from and writing to Parquet files, ensuring more consistent and correct behavior.
- Upgrade aws-msk-iam-auth library for Kafka: The `aws-msk-iam-auth` library used for Amazon MSK IAM authentication has been upgraded to the latest version, providing improved security and compatibility.
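A quick way to inspect the expanded DESCRIBE output, assuming a notebook where `spark` is predefined; `main.default.orders` is a hypothetical table name.

```python
# DESCRIBE QUERY and DESCRIBE TABLE now carry an additional metadata column in their output.
spark.sql("DESCRIBE QUERY SELECT 1 AS id, 'a' AS label").show(truncate=False)
spark.sql("DESCRIBE TABLE main.default.orders").show(truncate=False)  # hypothetical table
```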
Version 17.2
September 25, 2025
This serverless compute release roughly corresponds to Databricks Runtime 17.2.
New features
- `ST_ExteriorRing` function is now supported: You can now use the `ST_ExteriorRing` function to extract the outer boundary of a polygon and return it as a linestring.
- Support TEMPORARY keyword for metric view creation: You can now use the `TEMPORARY` keyword when creating a metric view. Temporary metric views are visible only in the session that created them and are dropped when the session ends.
- Use native I/O for `LokiFileSystem.getFileStatus` on S3: `LokiFileSystem.getFileStatus` now uses the native I/O stack for Amazon S3 traffic and returns `org.apache.hadoop.fs.FileStatus` objects instead of `shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileStatus`.
- Auto Loader infers partition columns in singleVariantColumn mode: Auto Loader now infers partition columns from file paths when ingesting data as a semi-structured variant type using the `singleVariantColumn` option (see the sketch after this list). Previously, partition columns were not automatically detected.
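A minimal Auto Loader sketch of the `singleVariantColumn` change, assuming JSON files stored under a `date=YYYY-MM-DD` directory layout in a Unity Catalog volume; the paths, the `payload` column name, and the target table are hypothetical.

```python
# Ingest each JSON record as a single VARIANT column named `payload`; on 17.2 the
# `date` partition column is also inferred from the file path.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/raw/_schemas/events")
    .option("singleVariantColumn", "payload")
    .load("/Volumes/main/default/raw/events/")
)

# Expected columns: payload (VARIANT) plus the inferred partition column (for example, date).
(
    df.writeStream
    .option("checkpointLocation", "/Volumes/main/default/raw/_checkpoints/events")
    .toTable("main.default.events_bronze")
)
```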
Behavior changes
- DESCRIBE CONNECTION shows environment settings for JDBC connections: SAP Databricks now includes user-defined environment settings in the `DESCRIBE CONNECTION` output for JDBC connections that support custom drivers and run in isolation. Other connection types remain unchanged.
- Option to truncate Uniform history during managed tables migration: You can now truncate Uniform history when migrating tables with Uniform/Iceberg enabled using `ALTER TABLE ... SET MANAGED`. This simplifies migrations and reduces downtime compared to disabling and re-enabling Uniform manually.
- Correct results for `split` with empty regex and positive limit: SAP Databricks now returns correct results when using the `split` function with an empty regex and a positive limit (see the checks after this list). Previously, the function incorrectly truncated the remaining string instead of including it in the last element.
- Fix `url_decode` and `try_url_decode` error handling in Photon: In Photon, `try_url_decode()` and `url_decode()` with `failOnError = false` now return `NULL` for invalid URL-encoded strings instead of failing the query (also shown after this list).
- Shared execution environment for Unity Catalog Python UDTFs: SAP Databricks now shares the execution environment for Python user-defined table functions (UDTFs) from the same owner and Spark session. An optional `STRICT ISOLATION` clause is available to disable sharing for UDTFs with side effects, such as modifying environment variables or executing arbitrary code.
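Two quick checks of the corrected behaviors, runnable in a notebook where `spark` is predefined; the expected results in the comments follow the descriptions above.

```python
# Empty regex with a positive limit: the remaining string is now kept in the last element.
spark.sql("SELECT split('abc', '', 2) AS parts").show()      # expected: ["a", "bc"]

# Invalid URL-encoded input: try_url_decode now yields NULL instead of failing the query.
spark.sql("SELECT try_url_decode('%zz') AS decoded").show()  # expected: NULL
```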
Version 17.1
August 19, 2025
This serverless compute release roughly corresponds to Databricks Runtime 17.1.
New features
- Reduced memory usage for wide schemas in Photon writer: Enhancements were made to the Photon engine that significantly reduce memory usage for wide schemas, addressing scenarios that previously resulted in out-of-memory errors.
Behavior changes
- Error thrown for invalid CHECK constraints: SAP Databricks now throws an `AnalysisException` if a `CHECK` constraint expression cannot be resolved during constraint validation (see the example after this list).
- Pulsar connector no longer exposes Bouncy Castle: The Bouncy Castle library is now shaded in the Pulsar connector to prevent classpath conflicts. As a result, Spark jobs can no longer access `org.bouncycastle.*` classes from the connector. If your code depends on Bouncy Castle, install the library manually in your serverless environment.
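A small illustration of the constraint change, assuming a hypothetical Delta table `main.default.orders` that has no column named `no_such_col`.

```python
# On 17.1, a CHECK constraint whose expression cannot be resolved surfaces as an AnalysisException.
try:
    spark.sql(
        "ALTER TABLE main.default.orders "
        "ADD CONSTRAINT bad_check CHECK (no_such_col > 0)"
    )
except Exception as e:
    print(type(e).__name__, e)  # expect AnalysisException
```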
Serverless environment version 4
August 13, 2025
Environment version 4 is now available in your serverless notebooks and jobs. This environment version includes library upgrades and API updates.
Version 17.0
July 24, 2025
This serverless compute release roughly corresponds to Databricks Runtime 17.0.
New features
- SQL procedure support: SQL scripts can now be encapsulated in a procedure stored as a reusable asset in Unity Catalog. You can create a procedure using the `CREATE PROCEDURE` command, and then call it using the `CALL` command.
- Set a default collation for SQL functions: Using the new `DEFAULT COLLATION` clause in the `CREATE FUNCTION` command defines the default collation used for `STRING` parameters, the return type, and `STRING` literals in the function body.
- Recursive common table expressions (rCTE) support: SAP Databricks now supports navigation of hierarchical data using recursive common table expressions (rCTEs). Use a self-referencing CTE with `UNION ALL` to follow the recursive relationship (see the sketch after this list).
- PySpark and Spark Connect now support the DataFrame `df.mergeInto` API: PySpark and Spark Connect now support the `df.mergeInto` API (also shown in the sketch after this list).
- Support ALL CATALOGS in SHOW SCHEMAS: The `SHOW SCHEMAS` syntax is updated to accept `ALL CATALOGS`, allowing you to iterate through all active catalogs that support namespaces. The output attributes now include a `catalog` column indicating the catalog of the corresponding namespace.
- Liquid clustering now compacts deletion vectors more efficiently: Delta tables with liquid clustering now apply physical changes from deletion vectors more efficiently when `OPTIMIZE` is running.
- Allow non-deterministic expressions in `UPDATE`/`INSERT` column values for `MERGE` operations: SAP Databricks now allows the use of non-deterministic expressions in updated and inserted column values of `MERGE` operations. For example, you can now generate dynamic or random values for columns using expressions like `rand()`.
- Change Delta MERGE Python APIs to return DataFrame instead of Unit: The Python `MERGE` APIs (such as `DeltaMergeBuilder`) now also return a DataFrame like the SQL API does, with the same results.
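A sketch of two of these features, assuming a notebook where `spark` is predefined; the `employees` and `dim_customer` tables, their columns, and the `src` alias are hypothetical, and the `mergeInto` builder chain follows the PySpark 4.0 API.

```python
from pyspark.sql import functions as F

# Recursive CTE: walk an employee -> manager hierarchy with a self-referencing CTE and UNION ALL.
spark.sql("""
    WITH RECURSIVE reports(employee_id, manager_id, depth) AS (
        SELECT employee_id, manager_id, 0
        FROM main.default.employees
        WHERE manager_id IS NULL
        UNION ALL
        SELECT e.employee_id, e.manager_id, r.depth + 1
        FROM main.default.employees AS e
        JOIN reports AS r ON e.manager_id = r.employee_id
    )
    SELECT * FROM reports
""").show()

# DataFrame mergeInto API: upsert a small batch of updates into a target table.
updates = spark.createDataFrame([(1, "renamed")], "id INT, name STRING").alias("src")
(
    updates.mergeInto("main.default.dim_customer", F.expr("dim_customer.id = src.id"))
    .whenMatched().updateAll()
    .whenNotMatched().insertAll()
    .merge()
)
```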