Serverless compute release notes

Applies to: AWS, GCP, Azure

This section includes release notes for serverless compute. Release notes are organized by release version and date. Serverless compute always runs using the most recently released version listed here.

Version 17.3

October 28, 2025

This serverless compute release roughly corresponds to Databricks Runtime 17.3 LTS.

New features

  • LIMIT ALL support for recursive CTEs: You can now use the LIMIT ALL clause with recursive common table expressions (rCTEs) to explicitly specify that no row limit should be applied to the query results. See the sketch after this list.

  • Appending to files in Unity Catalog volumes returns correct error: Attempting to append to existing files in Unity Catalog volumes now returns a more descriptive error message to help you understand and resolve the issue.

  • st_dump function support: You can now use the st_dump function to decompose a geometry object into its constituent parts, returning a set of simpler geometries. See the sketch after this list.

  • Polygon interior ring functions are now supported: You can now use the following functions to work with polygon interior rings (see the sketch after this list):

    • st_numinteriorrings: Get the number of inner boundaries (rings) of a polygon.
    • st_interiorringn: Extract the n-th inner boundary of a polygon and return it as a linestring.
  • EXECUTE IMMEDIATE using constant expressions: The EXECUTE IMMEDIATE statement now supports using constant expressions in the query string, allowing for more flexible dynamic SQL execution. An example follows this list.

  • Allow spark.sql.files.maxPartitionBytes in serverless compute: You can now configure the spark.sql.files.maxPartitionBytes Spark configuration parameter on serverless compute to control the maximum number of bytes to pack into a single partition when reading files. A configuration example follows this list.
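
The sketch below shows LIMIT ALL on a query over a recursive CTE, assuming a Databricks notebook or any PySpark session. The numbers CTE is purely illustrative, and placing LIMIT ALL on the outer query is an assumption based on the description above.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Recursive CTE that generates the numbers 1 through 100.
    # LIMIT ALL explicitly requests that no row limit be applied to the result.
    df = spark.sql("""
        WITH RECURSIVE numbers(n) AS (
          SELECT 1
          UNION ALL
          SELECT n + 1 FROM numbers WHERE n < 100
        )
        SELECT n FROM numbers
        LIMIT ALL
    """)
    df.show()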
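
The following is a hedged sketch of st_dump and the interior ring functions. It assumes st_geomfromtext is available to build a geometry value from WKT and that st_interiorringn uses 1-based indexing; the polygon literal (a square with one square hole) is made up for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # A polygon with one interior ring (hole), built from WKT (assumed constructor).
    spark.sql("""
        WITH g AS (
          SELECT st_geomfromtext(
            'POLYGON((0 0, 10 0, 10 10, 0 10, 0 0), (2 2, 4 2, 4 4, 2 4, 2 2))'
          ) AS poly
        )
        SELECT
          st_numinteriorrings(poly) AS num_rings,  -- expected: 1
          st_interiorringn(poly, 1) AS hole,       -- the inner ring as a linestring
          st_dump(poly)             AS parts       -- constituent geometries
        FROM g
    """).show(truncate=False)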
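
A short sketch of EXECUTE IMMEDIATE taking a constant expression (a concatenation of string literals) as the query string. The samples.nyctaxi.trips table is the Databricks sample dataset; substitute any table you can read.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # The query string is a constant expression rather than a single literal
    # or a session variable.
    spark.sql(
        "EXECUTE IMMEDIATE 'SELECT * FROM ' || 'samples.nyctaxi.trips' || ' LIMIT 10'"
    ).show()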
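
A sketch of setting the configuration from a notebook session. The 64 MB value and the volume path are only examples; the default for this parameter is 128 MB.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Lower the per-partition target so file scans produce more, smaller partitions.
    spark.conf.set("spark.sql.files.maxPartitionBytes", "64MB")

    # Subsequent file reads pick up the new setting (path is hypothetical).
    df = spark.read.parquet("/Volumes/main/default/raw/events/")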

Behavior changes

  • Add metadata column to DESCRIBE QUERY and DESCRIBE TABLE: The DESCRIBE QUERY and DESCRIBE TABLE commands now include a metadata column in their output, providing additional information about each column's properties and characteristics.

  • Default mode change for FSCK REPAIR TABLE command: The default mode for the FSCK REPAIR TABLE command has changed to provide more consistent behavior when repairing table metadata.

  • Correct handling of null structs when dropping NullType columns: SAP Databricks now correctly handles null struct values when dropping columns with NullType, preventing potential data corruption or unexpected behavior.

  • Improved handling of null structs in Parquet: This release includes improvements to how null struct values are handled when reading from and writing to Parquet files, ensuring more consistent and correct behavior.

  • Upgrade aws-msk-iam-auth library for Kafka: The aws-msk-iam-auth library used for Amazon MSK IAM authentication has been upgraded to the latest version, providing improved security and compatibility.

Version 17.2

September 25, 2025

This serverless compute release roughly corresponds to Databricks Runtime 17.2.

New features

  • ST_ExteriorRing function is now supported: You can now use the ST_ExteriorRing function to extract the outer boundary of a polygon and return it as a linestring. See the sketch after this list.

  • Support TEMPORARY keyword for metric view creation: You can now use the TEMPORARY keyword when creating a metric view. Temporary metric views are visible only in the session that created them and are dropped when the session ends.

  • Use native I/O for LokiFileSystem.getFileStatus on S3: LokiFileSystem.getFileStatus now uses the native I/O stack for Amazon S3 traffic and returns org.apache.hadoop.fs.FileStatus objects instead of shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileStatus.

  • Auto Loader infers partition columns in singleVariantColumn mode: Auto Loader now infers partition columns from file paths when ingesting data as a semi-structured variant type using the singleVariantColumn option. Previously, partition columns were not automatically detected. A streaming example follows this list.
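
A hedged sketch of ST_ExteriorRing. It assumes st_geomfromtext and st_astext are available to convert between WKT and geometry values; the polygon is made up.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Extract the outer boundary of a polygon and render it back as WKT.
    # Expected result: LINESTRING (0 0, 10 0, 10 10, 0 10, 0 0)
    spark.sql("""
        SELECT st_astext(
          st_exteriorring(st_geomfromtext('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))'))
        ) AS outer_boundary
    """).show(truncate=False)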
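
A sketch of an Auto Loader stream that ingests JSON files as a single variant column. All paths are hypothetical; with this release, partition directories under the load path (for example date=2025-09-01) are inferred as partition columns alongside the variant column.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Ingest each JSON record into a single VARIANT column named "data".
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/Volumes/main/default/chk/events_schema")
        .option("singleVariantColumn", "data")
        .load("/Volumes/main/default/raw/events/")  # contains date=.../ subdirectories
    )
    # df now exposes the inferred partition column(s) in addition to "data".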

Behavior changes

  • DESCRIBE CONNECTION shows environment settings for JDBC connections: SAP Databricks now includes user-defined environment settings in the DESCRIBE CONNECTION output for JDBC connections that support custom drivers and run in isolation. Other connection types remain unchanged.

  • Option to truncate uniform history during managed tables migration: You can now truncate uniform history when migrating tables with Uniform/Iceberg enabled using ALTER TABLE...SET MANAGED. This simplifies migrations and reduces downtime compared to disabling and re-enabling Uniform manually.

  • Correct results for split with empty regex and positive limit: SAP Databricks now returns correct results when using the split function with an empty regex and a positive limit. Previously, the function incorrectly truncated the remaining string instead of including it in the last element.

  • Fix url_decode and try_url_decode error handling in Photon: In Photon, try_url_decode() and url_decode() with failOnError = false now return NULL for invalid URL-encoded strings instead of failing the query. Examples of this fix and the split fix above follow this list.

  • Shared execution environment for Unity Catalog Python UDTFs: SAP Databricks now shares the execution environment for Python user-defined table functions (UDTFs) from the same owner and Spark session. An optional STRICT ISOLATION clause is available to disable sharing for UDTFs with side effects, such as modifying environment variables or executing arbitrary code.
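
Small illustrations of the split and try_url_decode fixes, assuming a notebook session; the expected outputs noted in the comments follow from the descriptions above.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Empty regex with a positive limit: the remainder of the string is now kept
    # in the last element, so the expected result is ["a", "bc"].
    spark.sql("SELECT split('abc', '', 2) AS parts").show()

    # Invalid URL-encoded input now yields NULL instead of failing the query.
    spark.sql("SELECT try_url_decode('%zz') AS decoded").show()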

Version 17.1

August 19, 2025

This serverless compute release roughly corresponds to Databricks Runtime 17.1.

New features

  • Reduced memory usage for wide schemas in Photon writer: Enhancements were made to the Photon engine that significantly reduce memory usage for wide schemas, addressing scenarios that previously resulted in out-of-memory errors.

Behavior changes

  • Error thrown for invalid CHECK constraints: SAP Databricks now throws an AnalysisException if a CHECK constraint expression cannot be resolved during constraint validation. See the sketch after this list.

  • Pulsar connector no longer exposes Bouncy Castle: The Bouncy Castle library is now shaded in the Pulsar connector to prevent classpath conflicts. As a result, Spark jobs can no longer access org.bouncycastle.* classes from the connector. If your code depends on Bouncy Castle, install the library manually in your serverless environment.
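
One plausible way to hit the new error, sketched with hypothetical names: the CHECK expression references a column that does not exist, so it cannot be resolved and an AnalysisException is raised.

    from pyspark.sql import SparkSession
    from pyspark.errors import AnalysisException

    spark = SparkSession.builder.getOrCreate()

    spark.sql("CREATE TABLE IF NOT EXISTS main.default.orders (id BIGINT, quantity INT)")

    try:
        # "quantty" is a deliberate typo, so the constraint expression is unresolvable.
        spark.sql(
            "ALTER TABLE main.default.orders "
            "ADD CONSTRAINT positive_qty CHECK (quantty > 0)"
        )
    except AnalysisException as err:
        print(err)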

Serverless environment version 4

August 13, 2025

Environment version 4 is now available in your serverless notebooks and jobs. This environment version includes library upgrades and API updates.

Version 17.0

July 24, 2025

This serverless compute release roughly corresponds to Databricks Runtime 17.0.

New features

  • SQL procedure support: SQL scripts can now be encapsulated in a procedure stored as a reusable asset in Unity Catalog. You can create a procedure using the CREATE PROCEDURE command, and then call it using the CALL command. See the sketch after this list.

  • Set a default collation for SQL functions: Using the new DEFAULT COLLATION clause in the CREATE FUNCTION command defines the default collation used for STRING parameters, the return type, and STRING literals in the function body. See the sketch after this list.

  • Recursive common table expressions (rCTE) support: SAP Databricks now supports navigation of hierarchical data using recursive common table expressions (rCTEs). Use a self-referencing CTE with UNION ALL to follow the recursive relationship. See the sketch after this list.

  • PySpark and Spark Connect now support the DataFrame df.mergeInto API: You can now merge a source DataFrame into a target table directly from PySpark and Spark Connect using the df.mergeInto API. See the sketch after this list.

  • Support ALL CATALOGS in SHOW SCHEMAS: The SHOW SCHEMAS syntax is updated to accept ALL CATALOGS, allowing you to iterate through all active catalogs that support namespaces. The output attributes now include a catalog column indicating the catalog of the corresponding namespace.

  • Liquid clustering now compacts deletion vectors more efficiently: Delta tables with liquid clustering now apply physical changes from deletion vectors more efficiently when OPTIMIZE is running.

  • Allow non-deterministic expressions in UPDATE/INSERT column values for MERGE operations: SAP Databricks now allows the use of non-deterministic expressions in updated and inserted column values of MERGE operations. For example, you can now generate dynamic or random values for columns using expressions like rand(). See the sketch after this list.

  • Change Delta MERGE Python APIs to return DataFrame instead of Unit: The Python MERGE APIs (such as DeltaMergeBuilder) now also return a DataFrame like the SQL API does, with the same results. See the sketch after this list.
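
A minimal sketch of creating and calling a SQL procedure. The three-level names, the parameter, and the body are illustrative, and optional clauses (comments, SQL security, and so on) are omitted.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Store a reusable procedure in Unity Catalog.
    spark.sql("""
        CREATE OR REPLACE PROCEDURE main.default.log_event(IN event_name STRING)
        LANGUAGE SQL
        AS BEGIN
          INSERT INTO main.default.events (name, logged_at)
          VALUES (event_name, current_timestamp());
        END
    """)

    # Invoke it.
    spark.sql("CALL main.default.log_event('pipeline_started')")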
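
A hedged sketch of DEFAULT COLLATION in CREATE FUNCTION. The clause placement shown and the choice of the case-insensitive UTF8_LCASE collation are assumptions for illustration; names are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # With UTF8_LCASE as the default collation, the comparison in the body
    # is case-insensitive for the STRING parameter and literal.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.default.is_affirmative(answer STRING)
        RETURNS BOOLEAN
        DEFAULT COLLATION UTF8_LCASE
        RETURN answer = 'yes'
    """)

    spark.sql("SELECT main.default.is_affirmative('YES') AS result").show()  # expected: true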
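
A sketch of a recursive CTE that walks a hypothetical employees hierarchy: the anchor member selects the root rows, and the recursive member joins back to the CTE with UNION ALL.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    spark.sql("""
        WITH RECURSIVE reports(id, name, manager_id, depth) AS (
          SELECT id, name, manager_id, 0
          FROM main.default.employees
          WHERE manager_id IS NULL                  -- anchor: top-level manager(s)
          UNION ALL
          SELECT e.id, e.name, e.manager_id, r.depth + 1
          FROM main.default.employees AS e
          JOIN reports AS r ON e.manager_id = r.id  -- recursive step
        )
        SELECT * FROM reports ORDER BY depth, id
    """).show()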
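
A hedged sketch of both Python MERGE paths: the DataFrame df.mergeInto writer and the delta-spark DeltaMergeBuilder, whose execute() now returns a DataFrame of merge results. The target table name, the aliasing, and how columns are qualified inside the mergeInto condition are assumptions; delta-spark must be available in the environment.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import expr

    spark = SparkSession.builder.getOrCreate()
    updates = spark.createDataFrame([(1, "a"), (2, "b")], "id INT, value STRING")

    # DataFrame-native merge of the updates into a target table.
    (
        updates.alias("s")
        .mergeInto("main.default.target", expr("target.id = s.id"))
        .whenMatched().updateAll()
        .whenNotMatched().insertAll()
        .merge()
    )

    # Equivalent delta-spark merge; execute() now returns a DataFrame of results
    # instead of Unit, mirroring the SQL MERGE output.
    target = DeltaTable.forName(spark, "main.default.target")
    results = (
        target.alias("t")
        .merge(updates.alias("s"), "t.id = s.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
    results.show()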
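
A sketch of a MERGE that inserts a random value with rand(); the tables and columns are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # rand() in the inserted values is now accepted; previously, non-deterministic
    # expressions were rejected in MERGE column assignments.
    spark.sql("""
        MERGE INTO main.default.experiments AS t
        USING main.default.new_subjects AS s
        ON t.subject_id = s.subject_id
        WHEN NOT MATCHED THEN
          INSERT (subject_id, assignment_score) VALUES (s.subject_id, rand())
    """)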