What's coming?

Learn about features and behavioral changes in upcoming Databricks releases.

Behavioral change for working with Delta table history and VACUUM

Databricks Runtime 18.0 will change how time travel queries and VACUUM work in Delta Lake for more predictable behavior.

Current behavior:

Time travel availability depends on when VACUUM last ran, which can be difficult to predict.

Changes in Databricks Runtime 18.0:

The following updates will make time travel deterministic and aligned with retention settings:

  • Time travel queries (SELECT, RESTORE, CDC, and CLONE with AS OF syntax) are blocked if they exceed delta.deletedFileRetentionDuration.
  • The retention period (RETAIN num HOURS) in VACUUM is ignored with a warning. The exception is RETAIN 0 HOURS, which permanently removes all history from a Delta table.
  • delta.logRetentionDuration must be greater than or equal to delta.deletedFileRetentionDuration if you modify either property.

These changes will be released on the following timeline:

  • Mid-December 2025: Applies to all Delta tables on Databricks Runtime 18.0.
  • January 2026: Extends to serverless compute, Databricks SQL, and Databricks Runtime 12.2 and above for Unity Catalog managed tables.

For Unity Catalog managed tables, the changes apply to Databricks Runtime 12.2 and above. For all other Delta tables, changes apply to Databricks Runtime 18.0 and above.

Action required:

Take the following steps to verify that your time travel queries continue to work after the Databricks Runtime 18.0 release:

  • Review and update delta.deletedFileRetentionDuration to match your time travel needs. Verify that it’s less than or equal to delta.logRetentionDuration.
  • Stop setting the retention period in the VACUUM command. Use delta.deletedFileRetentionDuration instead, as in the sketch after this list.
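
The following is a minimal sketch of both steps, run in a notebook where spark is the active SparkSession. The table name main.sales.orders and the 30-day window are hypothetical example values; match the interval to your own time travel needs.

```python
# Minimal sketch, assuming a hypothetical table main.sales.orders and an
# example 30-day time travel window.

# Step 1: align retention settings. deletedFileRetentionDuration must be
# less than or equal to logRetentionDuration, so set both explicitly.
spark.sql("""
    ALTER TABLE main.sales.orders SET TBLPROPERTIES (
        'delta.deletedFileRetentionDuration' = 'interval 30 days',
        'delta.logRetentionDuration' = 'interval 30 days'
    )
""")

# Step 2: run VACUUM without RETAIN; the table property now controls how
# much history is kept. (In Runtime 18.0, RETAIN is ignored with a warning,
# except RETAIN 0 HOURS.)
spark.sql("VACUUM main.sales.orders")

# Time travel within the retention window continues to work; queries beyond
# it will be blocked in Databricks Runtime 18.0.
spark.sql(
    "SELECT * FROM main.sales.orders TIMESTAMP AS OF date_sub(current_date(), 7)"
).show()
```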

Column mask automatic type casting

Unity Catalog attribute-based access control (ABAC) is currently in Beta. Starting in Public Preview, Databricks will automatically cast the output of column mask functions resolved from ABAC policies to match the target column's data type. This enhancement ensures type consistency and improves query reliability when using ABAC column masking. For more information on ABAC, see Unity Catalog attribute-based access control (ABAC).

important

Existing ABAC column mask implementations might experience query failures if mask function return types are incompatible with target column types. Review your mask functions before upgrading to Public Preview.
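
As a hedged illustration of the kind of mismatch to look for (the function and schema names below are hypothetical):

```python
# Hypothetical example of a mask whose return type clashes with its column.
# A mask declared to return STRING...
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.redact_int(val INT)
    RETURNS STRING
    RETURN 'REDACTED'
""")
# ...can fail on an INT column once its output is automatically cast to the
# column's type, because 'REDACTED' cannot be cast to INT.

# A compatible mask returns the column's own type instead:
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.mask_int(val INT)
    RETURNS INT
    RETURN CAST(NULL AS INT)
""")
```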

Alerts (Beta) updates

Databricks is releasing changes that improve alerts and include breaking API changes. These changes do not affect legacy alerts.

For more information on alerts, see Databricks SQL alerts.

API breaking changes

The following Alerts V2 API fields and values are being renamed or removed:

  • In the Create, Get, Update, and List APIs, the run_as_user_name field will be removed.
    • Use run_as (request) and effective_run_as (response) instead.
  • In the List API, the results field will be renamed to alerts.
  • In the Create, Get, Update, and List APIs, the TRASHED value in the lifecycle_state field will be renamed to DELETED.
  • In the Create and Update APIs, UNKNOWN will no longer be supported for empty_result_state.

important

Update any integrations using these APIs before October 23, 2025. If you use the SDKs or Terraform, upgrade to the latest version.

For the API reference, see Alerts V2 API.
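
As a hedged sketch of a client that tolerates these renames during the transition (the endpoint path below is an assumption; confirm it against the Alerts V2 API reference):

```python
import os
import requests

# Hedged sketch: list alerts while tolerating the upcoming renames.
# The endpoint path is an assumption; confirm it in the Alerts V2 API reference.
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/alerts",  # assumed Alerts V2 list endpoint
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
payload = resp.json()

# The list field is being renamed from `results` to `alerts`; accept both.
for alert in payload.get("alerts", payload.get("results", [])):
    # `TRASHED` is being renamed to `DELETED`; treat both as deleted.
    if alert.get("lifecycle_state") in ("TRASHED", "DELETED"):
        continue
    # `run_as_user_name` is being removed; read `effective_run_as` instead.
    print(alert.get("display_name"), alert.get("effective_run_as"))
```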

Simplification of alert states

Currently, an alert’s state is set to UNKNOWN for one of the following reasons:

  • It has never been run.
  • The associated query returns no rows in the result and you have chosen UNKNOWN as the desired state for the If query result is empty setting.

To remove this ambiguity, the following updates will be made:

  • Starting October 14, 2025

    • Never-run alerts will be shown as Not Run in the UI and null in the API.
    • For new alerts, UNKNOWN cannot be selected for empty results. The default will remain Error, with the option to use OK or Triggered.
  • Starting October 23, 2025

    • The Create and Update APIs will no longer accept UNKNOWN as empty_result_state.
  • Starting December 3, 2025

    • All existing alerts set to UNKNOWN will be updated to default to Error.

important

If you use UNKNOWN in your alerts, update them to use OK, Triggered, or Error.
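
A hedged sketch that flags alerts still set to UNKNOWN ahead of the December 3, 2025 update (the endpoint path and the location of empty_result_state in the response are assumptions; confirm both in the Alerts V2 API reference):

```python
import os
import requests

# Hedged sketch: flag alerts that still use UNKNOWN for empty results.
# The endpoint path and the evaluation.empty_result_state field location
# are assumptions; confirm both in the Alerts V2 API reference.
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/alerts",  # assumed Alerts V2 list endpoint
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for alert in resp.json().get("alerts", []):
    if alert.get("evaluation", {}).get("empty_result_state") == "UNKNOWN":
        print(f"Update before December 3, 2025: {alert.get('display_name')}")
```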

New editing experience

Starting October 14, 2025, if you create or edit alerts, they will open in the new multi-tab editor experience. See Write queries and explore data in the new SQL editor. This change improves consistency across Databricks but does not alter functionality.

Lakehouse Federation sharing and default storage

Delta Sharing on Lakehouse Federation is in Beta, allowing Delta Sharing data providers to share foreign catalogs and tables. By default, data must be temporarily materialized and stored on default storage. Currently, users must manually enable the Delta Sharing for Default Storage – Expanded Access feature in the account console to use Lakehouse Federation sharing.

After Delta Sharing for Default Storage – Expanded Access is enabled by default for all Databricks users, Delta Sharing on Lakehouse Federation will automatically be available in regions where default storage is supported.

See Default storage in Databricks and Add foreign schemas or tables to a share.

Reload notification in workspaces

In an upcoming release, a message prompting you to reload your workspace tab will appear if the tab has been open for a long time without refreshing. This will help ensure you are always using the latest version of Databricks with the newest features and fixes.

SAP Business Data Cloud (BDC) Connector for Databricks will soon be generally available

The SAP Business Data Cloud (BDC) Connector for Databricks is a new feature that allows you to share data from SAP BDC to Databricks and from Databricks to SAP BDC using Delta Sharing. This feature will be generally available soon.

Delta Sharing for tables on default storage will soon be enabled by default (Beta)

This default storage update for Delta Sharing expands sharing capabilities, allowing providers to share tables backed by default storage with any Delta Sharing recipient (open or Databricks), including recipients using classic compute. This feature is currently in Beta and requires providers to manually enable Delta Sharing for Default Storage – Expanded Access in the account console. Soon, this will be enabled by default for all users.

See Limitations.

Behavior change for the Auto Loader incremental directory listing option

note

The Auto Loader cloudFiles.useIncrementalListing option is deprecated. Although this note discusses a change to the option's default value and how to continue using it after this change, Databricks recommends against using this option in favor of file notification mode with file events.

In an upcoming Databricks Runtime release, the value of the deprecated Auto Loader cloudFiles.useIncrementalListing option will, by default, be set to false. Setting this value to false causes Auto Loader to perform a full directory listing each time it's run. Currently, the default value of the cloudFiles.useIncrementalListing option is auto, instructing Auto Loader to make a best-effort attempt at detecting if an incremental listing can be used with a directory.

To continue using the incremental listing feature, set the cloudFiles.useIncrementalListing option to auto. When you set this value to auto, Auto Loader makes a best-effort attempt to do a full listing once every seven incremental listings, which matches the behavior of this option before this change.
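
A minimal sketch of opting back in to incremental listing (the file format, paths, and schema location below are placeholders):

```python
# Minimal sketch: explicitly opt back in to incremental listing once the
# default changes to false. Format, paths, and schema location are placeholders.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useIncrementalListing", "auto")  # pre-change behavior
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/checkpoints")
    .load("/Volumes/main/default/landing/events")
)
```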

To learn more about Auto Loader directory listing, see Auto Loader streams with directory listing mode.

Behavior change when dataset definitions are removed from Lakeflow Declarative Pipelines

An upcoming release of Lakeflow Declarative Pipelines will change the behavior when a materialized view or streaming table is removed from a pipeline. With this change, the removed materialized view or streaming table will not be deleted automatically when the next pipeline update runs. Instead, you will be able to use the DROP MATERIALIZED VIEW command to delete a materialized view or the DROP TABLE command to delete a streaming table. After dropping an object, running a pipeline update will not recover the object automatically. A new object is created if a materialized view or streaming table with the same definition is re-added to the pipeline. You can, however, recover an object using the UNDROP command.
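
As a hedged sketch of the manual cleanup and recovery this change implies (object names are hypothetical):

```python
# Hedged sketch: after removing a dataset definition from a pipeline, the
# underlying objects are no longer deleted automatically. Names are hypothetical.

# Delete a removed materialized view or streaming table yourself:
spark.sql("DROP MATERIALIZED VIEW main.pipelines.daily_summary")
spark.sql("DROP TABLE main.pipelines.events_stream")

# A pipeline update will not recover a dropped object automatically, but you
# can recover a dropped streaming table with UNDROP:
spark.sql("UNDROP TABLE main.pipelines.events_stream")
```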

End of support timeline for legacy dashboards

  • Official support for the legacy version of dashboards has ended as of April 7, 2025. Only critical security issues and service outages will be addressed.
  • November 3, 2025: Databricks will begin archiving legacy dashboards that have not been accessed in the past six months. Archived dashboards will no longer be accessible, and the archival process will occur on a rolling basis. Access to actively used dashboards will remain unchanged.

Databricks will work with customers to develop migration plans for active legacy dashboards after November 3, 2025.

To help transition to AI/BI dashboards, upgrade tools are available in both the user interface and the API. For instructions on how to use the built-in migration tool in the UI, see Clone a legacy dashboard to an AI/BI dashboard. For tutorials about creating and managing dashboards using the REST API, see Use Databricks APIs to manage dashboards.

The sourceIpAddress field in audit logs will no longer include a port number

Due to a bug, certain authorization and authentication audit logs include a port number in addition to the IP in the sourceIPAddress field (for example, "sourceIPAddress":"10.2.91.100:0"). The port number, which is logged as 0, does not provide any real value and is inconsistent with the rest of the Databricks audit logs. To enhance the consistency of audit logs, Databricks plans to change the format of the IP address for these audit log events. This change will gradually roll out starting in early August 2024.

If the audit log contains a sourceIpAddress of 0.0.0.0, Databricks might stop logging it.
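
If downstream tooling parses this field, a small normalizer (a hypothetical helper, not part of any Databricks library) keeps it working before and after the rollout:

```python
# Hypothetical helper: normalize sourceIPAddress values so processing works
# both before and after the ":0" suffix is removed from these events.
def normalize_source_ip(value: str) -> str:
    if value and value.endswith(":0"):
        return value.rsplit(":", 1)[0]
    return value

assert normalize_source_ip("10.2.91.100:0") == "10.2.91.100"
assert normalize_source_ip("10.2.91.100") == "10.2.91.100"
```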

External support ticket submission will soon be deprecated

Databricks is transitioning the support ticket submission experience from help.databricks.com to the help menu in the Databricks workspace. Support ticket submission via help.databricks.com will soon be deprecated. You'll continue to view and triage your tickets at help.databricks.com.

The in-product experience, which is available if your organization has a Databricks Support contract, integrates with Databricks Assistant to help address your issues quickly without having to submit a ticket.

To access the in-product experience, click your user icon in the top bar of the workspace, and then click Contact Support or type “I need help” into the assistant.

The Contact support modal opens.

If the in-product experience is down, send requests for support with detailed information about your issue to help@databricks.com. For more information, see Get help.