What's coming?
Learn about features and behavioral changes in upcoming Databricks releases.
Column mask automatic type casting
Unity Catalog attribute-based access control (ABAC) is currently in Beta. Starting in Public Preview, Databricks will automatically cast the output of column mask functions resolved from ABAC policies to match the target column's data type. This enhancement ensures type consistency and improved query reliability when using ABAC column masking. For more information on ABAC, see Unity Catalog attribute-based access control (ABAC).
Existing ABAC column mask implementations might experience query failures if mask function return types are incompatible with target column types. Review your mask functions before upgrading to Public Preview.
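For example, a mask function declared to return a type different from the column it protects is the case to audit. The following is a minimal sketch (the catalog, schema, function, and group names are hypothetical) of a mask whose `STRING` output will be cast to the target column's `INT` type once automatic casting is in effect, so every branch must yield a value that casts cleanly:

```python
# Hypothetical mask function: declared return type is STRING, but it may be
# applied to an INT column. After automatic casting rolls out, the output is
# cast to INT, so avoid branches like 'REDACTED' that cannot be cast.
spark.sql("""
  CREATE OR REPLACE FUNCTION main.default.mask_salary(salary INT)
  RETURNS STRING
  RETURN CASE
    WHEN is_account_group_member('payroll_admins') THEN CAST(salary AS STRING)
    ELSE NULL  -- NULL casts safely to INT; a string literal would not
  END
""")
```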
Alerts (Beta) updates
Databricks is releasing improvements to alerts that include breaking API changes. These changes do not affect legacy alerts.
For more information on alerts, see Databricks SQL alerts.
API breaking changes
The following Alerts V2 API fields and values are being renamed or removed:
- In the Create, Get, Update, and List APIs, the `run_as_user_name` field will be removed. Use `run_as` (request) and `effective_run_as` (response) instead.
- In the List API, the `results` field will be renamed to `alerts`.
- In the Create, Get, Update, and List APIs, the `TRASHED` value in the `lifecycle_state` field will be renamed to `DELETED`.
- In the Create and Update APIs, `UNKNOWN` will no longer be supported for `empty_result_state`.
Update any integrations using these APIs before October 21, 2025. If you use the SDKs or Terraform, upgrade to the latest version.
For the API reference, see Alerts V2 API.
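As a rough sketch of what a List API consumer looks like after these renames, the following assumes the Alerts V2 list endpoint is `GET /api/2.0/alerts` and that credentials come from environment variables; treat the endpoint path and payload shapes as assumptions and confirm them against the Alerts V2 API reference.

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.get(
    f"{host}/api/2.0/alerts",  # assumed Alerts V2 list endpoint
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# The list payload key is "alerts" (formerly "results"), and the run-as
# identity is read from "effective_run_as" (replacing "run_as_user_name").
for alert in resp.json().get("alerts", []):
    print(alert.get("display_name"), alert.get("effective_run_as"))
```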
Deprecating `UNKNOWN` status for alerts
Currently, alerts can show as `UNKNOWN` either when they have never run or when `UNKNOWN` is selected for handling empty SQL results. To remove this ambiguity, the following updates will be made:
- Starting October 7, 2025:
  - Never-run alerts will be shown as Not Run in the UI and `null` in the API.
  - For new alerts, `UNKNOWN` will no longer be selectable for empty results. The default will remain `Error`, with the option to use `OK` or `Triggered`.
- Starting October 21, 2025:
  - The Create and Update APIs will no longer accept `UNKNOWN` as `empty_result_state`.
- Starting November 7, 2025:
  - All existing alerts set to `UNKNOWN` will be updated to default to `Error`.

If you use `UNKNOWN` in your alerts, update them to use `OK`, `Triggered`, or `Error`.
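To get ahead of the November 7, 2025 migration, you can update affected alerts yourself. The following sketch reuses the REST conventions from the listing example above; the `update_mask` convention and the `evaluation.empty_result_state` field path are assumptions based on the Alerts V2 API reference, and the alert ID is a placeholder.

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
alert_id = "<alert-id>"  # placeholder

# Move the alert's empty-result handling off UNKNOWN (here, to OK).
resp = requests.patch(
    f"{host}/api/2.0/alerts/{alert_id}",
    headers={"Authorization": f"Bearer {token}"},
    params={"update_mask": "evaluation.empty_result_state"},
    json={"evaluation": {"empty_result_state": "OK"}},
)
resp.raise_for_status()
```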
New editing experience
Starting October 7, 2025, if you create or edit alerts, they will open in the new multi-tab editor experience. See Write queries and explore data in the new SQL editor. This change improves consistency across Databricks but does not alter functionality.
The Data Science Agent will also use models served through Amazon Bedrock
The Databricks Assistant will soon use models served through Amazon Bedrock as part of the Data Science Agent when partner-powered AI features are enabled.
Lakehouse Federation sharing and default storage
Delta Sharing on Lakehouse Federation is in Beta, allowing Delta Sharing data providers to share foreign catalogs and tables. By default, data must be temporarily materialized and stored on default storage. Currently, users must manually enable the Delta Sharing for Default Storage – Expanded Access feature in the account console to use Lakehouse Federation sharing.
After Delta Sharing for Default Storage – Expanded Access is enabled by default for all Databricks users, Delta Sharing on Lakehouse Federation will automatically be available in regions where default storage is supported.
See Default storage in serverless workspaces and Add foreign schemas or tables to a share.
Reload notification in workspaces
In an upcoming release, a message prompting you to reload will display if your workspace tab has been open for a long time without refreshing. This helps ensure you are always using the latest version of Databricks with the newest features and fixes.
SAP Business Data Cloud (BDC) Connector for Databricks will soon be generally available
The SAP Business Data Cloud (BDC) Connector for Databricks is a new feature that allows you to share data from SAP BDC to Databricks and from Databricks to SAP BDC using Delta Sharing. This feature will be generally available soon.
Delta Sharing for tables on default storage will soon be enabled by default (Beta)
This update expands Delta Sharing capabilities for default storage, allowing providers to share tables backed by default storage with any Delta Sharing recipient (open or Databricks), including recipients using classic compute. This feature is currently in Beta and requires providers to manually enable Delta Sharing for Default Storage – Expanded Access in the account console. Soon, this will be enabled by default for all users.
See Limitations.
Behavior change for the Auto Loader incremental directory listing option
The Auto Loader `cloudFiles.useIncrementalListing` option is deprecated. Although this note describes a change to the option's default value and how to continue using the option after that change, Databricks recommends against using this option in favor of file notification mode with file events.
In an upcoming Databricks Runtime release, the value of the deprecated Auto Loader `cloudFiles.useIncrementalListing` option will, by default, be set to `false`. Setting this value to `false` causes Auto Loader to perform a full directory listing each time it's run. Currently, the default value of the `cloudFiles.useIncrementalListing` option is `auto`, instructing Auto Loader to make a best-effort attempt at detecting if an incremental listing can be used with a directory.
To continue using the incremental listing feature, set the `cloudFiles.useIncrementalListing` option to `auto`. When you set this value to `auto`, Auto Loader makes a best-effort attempt to do a full listing once every seven incremental listings, which matches the behavior of this option before this change.
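For example, a stream that opts back in to the pre-change behavior might look like the following sketch; the file format, schema location, and source path are placeholders.

```python
# Pin the deprecated option to "auto" so incremental listing keeps working
# after the default flips to "false". Prefer file notification mode with
# file events for new workloads.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useIncrementalListing", "auto")  # default will become "false"
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/checkpoints/schema")
    .load("/Volumes/main/default/landing")
)
```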
To learn more about Auto Loader directory listing, see Auto Loader streams with directory listing mode.
Behavior change when dataset definitions are removed from Lakeflow Declarative Pipelines
An upcoming release of Lakeflow Declarative Pipelines will change the behavior when a materialized view or streaming table is removed from a pipeline. With this change, the removed materialized view or streaming table will not be deleted automatically when the next pipeline update runs. Instead, you will be able to use the `DROP MATERIALIZED VIEW` command to delete a materialized view or the `DROP TABLE` command to delete a streaming table. After dropping an object, running a pipeline update will not recover the object automatically. A new object is created if a materialized view or streaming table with the same definition is re-added to the pipeline. You can, however, recover an object using the `UNDROP` command.
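A minimal sketch of the new cleanup flow, with hypothetical object names:

```python
# After removing the dataset definitions from the pipeline source, drop the
# objects explicitly (names are hypothetical).
spark.sql("DROP MATERIALIZED VIEW main.default.daily_sales_mv")
spark.sql("DROP TABLE main.default.orders_st")

# If a streaming table was dropped by mistake, it can be recovered:
spark.sql("UNDROP TABLE main.default.orders_st")
```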
End of support timeline for legacy dashboards
- Official support for the legacy version of dashboards has ended as of April 7, 2025. Only critical security issues and service outages will be addressed.
- November 3, 2025: Databricks will begin archiving legacy dashboards that have not been accessed in the past six months. Archived dashboards will no longer be accessible, and the archival process will occur on a rolling basis. Access to actively used dashboards will remain unchanged.
Databricks will work with customers to develop migration plans for active legacy dashboards after November 3, 2025.
To help transition to AI/BI dashboards, upgrade tools are available in both the user interface and the API. For instructions on how to use the built-in migration tool in the UI, see Clone a legacy dashboard to an AI/BI dashboard. For tutorials about creating and managing dashboards using the REST API, see Use Databricks APIs to manage dashboards.
The sourceIpAddress field in audit logs will no longer include a port number
Due to a bug, certain authorization and authentication audit logs include a port number in addition to the IP in the `sourceIPAddress` field (for example, `"sourceIPAddress":"10.2.91.100:0"`). The port number, which is logged as `0`, does not provide any real value and is inconsistent with the rest of the Databricks audit logs. To enhance the consistency of audit logs, Databricks plans to change the format of the IP address for these audit log events. This change will gradually roll out starting in early August 2024.
If the audit log contains a `sourceIpAddress` of `0.0.0.0`, Databricks might stop logging it.
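If downstream queries key off this field, a small normalization step can keep them stable across the rollout. A sketch, assuming IPv4 values; the function name is hypothetical:

```python
def normalize_source_ip(value: str) -> str:
    # Strip the meaningless ":0" port suffix from historical records so values
    # before and after the format change compare equal. Assumes IPv4; IPv6
    # addresses that legitimately end in ":0" would need a stricter parser.
    return value[:-2] if value.endswith(":0") else value

assert normalize_source_ip("10.2.91.100:0") == "10.2.91.100"
assert normalize_source_ip("10.2.91.100") == "10.2.91.100"
```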
External support ticket submission will soon be deprecated
Databricks is transitioning the support ticket submission experience from `help.databricks.com` to the help menu in the Databricks workspace. Support ticket submission via `help.databricks.com` will soon be deprecated. You'll continue to view and triage your tickets at `help.databricks.com`.
The in-product experience, which is available if your organization has a Databricks Support contract, integrates with Databricks Assistant to help address your issues quickly without having to submit a ticket.
To access the in-product experience, click your user icon in the top bar of the workspace, and then click Contact Support or type “I need help” into the assistant.
The Contact support modal opens.
If the in-product experience is down, send requests for support with detailed information about your issue to help@databricks.com. For more information, see Get help.