Serverless compute release notes
Applies to: AWS, GCP
This section includes release notes for serverless compute. Release notes are organized by year and week of year. Serverless compute always runs using the most recently released version listed here.
Version 17.1
August 19, 2025
This serverless compute release roughly corresponds to Databricks Runtime 17.1.
New features
- Reduced memory usage for wide schemas in Photon writer: Enhancements were made to the Photon engine that significantly reduce memory usage for wide schemas, addressing scenarios that previously resulted in out-of-memory errors.
Behavior changes
- Error thrown for invalid `CHECK` constraints: SAP Databricks now throws an `AnalysisException` if a `CHECK` constraint expression cannot be resolved during constraint validation. See the sketch after this list.
- Pulsar connector no longer exposes Bouncy Castle: The Bouncy Castle library is now shaded in the Pulsar connector to prevent classpath conflicts. As a result, Spark jobs can no longer access `org.bouncycastle.*` classes from the connector. If your code depends on Bouncy Castle, install the library manually in your serverless environment.
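Below is a minimal sketch of the new `CHECK` constraint behavior, assuming a Databricks notebook where `spark` is predefined; the table, column, and constraint names are hypothetical.

```python
from pyspark.errors import AnalysisException

spark.sql("CREATE TABLE IF NOT EXISTS main.default.orders (id INT, amount DOUBLE)")

try:
    # `amnt` cannot be resolved against the table's columns, so the
    # constraint is now rejected with an AnalysisException.
    spark.sql(
        "ALTER TABLE main.default.orders "
        "ADD CONSTRAINT positive_amount CHECK (amnt > 0)"
    )
except AnalysisException as e:
    print(f"Constraint rejected: {e}")
```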
Serverless environment version 4
August 13, 2025
Environment version 4 is now available in your serverless notebooks and jobs. This environment version includes library upgrades and API updates.
Version 17.0
July 24, 2025
This serverless compute release roughly corresponds to Databricks Runtime 17.0.
New features
- SQL procedure support: SQL scripts can now be encapsulated in a procedure stored as a reusable asset in Unity Catalog. You can create a procedure using the `CREATE PROCEDURE` command, and then call it using the `CALL` command. See the first sketch after this list.
- Set a default collation for SQL functions: Using the new `DEFAULT COLLATION` clause in the `CREATE FUNCTION` command defines the default collation used for `STRING` parameters, the return type, and `STRING` literals in the function body. See the collation sketch after this list.
- Recursive common table expressions (rCTE) support: SAP Databricks now supports navigation of hierarchical data using recursive common table expressions (rCTEs). Use a self-referencing CTE with `UNION ALL` to follow the recursive relationship, as in the rCTE sketch after this list.
- PySpark and Spark Connect now support the `df.mergeInto` DataFrame API. See the merge sketch after this list.
- Support `ALL CATALOGS` in `SHOW SCHEMAS`: The `SHOW SCHEMAS` syntax now accepts `ALL CATALOGS`, allowing you to iterate through all active catalogs that support namespaces. The output attributes now include a `catalog` column indicating the catalog of the corresponding namespace. See the catalog sketch after this list.
- Liquid clustering now compacts deletion vectors more efficiently: Delta tables with liquid clustering now apply physical changes from deletion vectors more efficiently when `OPTIMIZE` is running.
- Allow non-deterministic expressions in `UPDATE`/`INSERT` column values for `MERGE` operations: SAP Databricks now allows non-deterministic expressions in the updated and inserted column values of `MERGE` operations. For example, you can now generate dynamic or random values for columns using expressions like `rand()`, as in the `MERGE` sketch after this list.
- Change Delta MERGE Python APIs to return DataFrame instead of Unit: The Python `MERGE` APIs (such as `DeltaMergeBuilder`) now also return a DataFrame like the SQL API does, with the same results. See the final sketch after this list.
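First, a minimal sketch of creating and calling a SQL procedure, assuming a Databricks notebook where `spark` is predefined; the procedure, schema, and table names are hypothetical, and the characteristic clauses follow the `CREATE PROCEDURE` syntax.

```python
# Define a reusable procedure in Unity Catalog (hypothetical names).
spark.sql("""
CREATE OR REPLACE PROCEDURE main.default.archive_orders(IN cutoff DATE)
LANGUAGE SQL
SQL SECURITY INVOKER
AS BEGIN
  DELETE FROM main.default.orders WHERE order_date < cutoff;
END
""")

# Invoke the stored procedure.
spark.sql("CALL main.default.archive_orders(DATE'2024-01-01')")
```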
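Next, a collation sketch: a SQL function whose `STRING` comparisons default to a case-insensitive collation. The function name is hypothetical, and the clause placement after `RETURNS` is an assumption.

```python
spark.sql("""
CREATE OR REPLACE FUNCTION main.default.is_yes(answer STRING)
RETURNS BOOLEAN
DEFAULT COLLATION UTF8_LCASE
RETURN answer = 'yes'
""")

# 'YES' matches the literal 'yes' because UTF8_LCASE compares
# case-insensitively under the function's default collation.
spark.sql("SELECT main.default.is_yes('YES') AS matched").show()
```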
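The rCTE sketch below walks a hypothetical `employees` table (`id`, `name`, `manager_id`) from the top-level manager down, using a self-referencing CTE with `UNION ALL`.

```python
spark.sql("""
WITH RECURSIVE org AS (
  -- anchor: employees with no manager
  SELECT id, name, manager_id, 0 AS depth
  FROM main.default.employees
  WHERE manager_id IS NULL
  UNION ALL
  -- recursive step: follow the self-reference one level down
  SELECT e.id, e.name, e.manager_id, org.depth + 1
  FROM main.default.employees e
  JOIN org ON e.manager_id = org.id
)
SELECT * FROM org ORDER BY depth
""").show()
```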
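The merge sketch below shows how the `df.mergeInto` API chains; the target table name and the aliasing in the join condition are assumptions for illustration.

```python
from pyspark.sql.functions import expr

updates = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"]).alias("src")

# Build the merge clause by clause, then call merge() to execute it.
(updates.mergeInto("main.default.target", expr("target.id = src.id"))
    .whenMatched().updateAll()
    .whenNotMatched().insertAll()
    .merge())
```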
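The catalog sketch: enumerate schemas across every active catalog. The `catalog` output column comes from the release note; the rest is ordinary DataFrame handling.

```python
schemas = spark.sql("SHOW SCHEMAS IN ALL CATALOGS")

# Each row now carries a `catalog` column next to the namespace.
schemas.select("catalog").distinct().show()
```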
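The `MERGE` sketch below writes non-deterministic values with `rand()`; the table names are hypothetical.

```python
spark.sql("""
MERGE INTO main.default.users AS t
USING main.default.user_updates AS s
ON t.id = s.id
-- rand() in SET/VALUES was previously rejected as non-deterministic
WHEN MATCHED THEN UPDATE SET t.score = rand()
WHEN NOT MATCHED THEN INSERT (id, score) VALUES (s.id, rand())
""")
```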
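Finally, a sketch of the updated Delta return type, with hypothetical table names: `execute()` now hands back a DataFrame with the same results the SQL `MERGE` produces.

```python
from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "main.default.users")
source = spark.table("main.default.user_updates")

metrics_df = (
    target.alias("t")
    .merge(source.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()  # previously returned nothing in Python
)
metrics_df.show()
```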