Databricks on AWS GovCloud release notes 2025

The following platform features, improvements, and fixes were released on Databricks on AWS GovCloud in 2025.

August 2025

The following features and updates were released on Databricks on AWS GovCloud in August 2025.

Account SCIM 2.0 updates

August 29, 2025

To enhance stability and support future growth, Databricks has updated the Account SCIM API for identity management as follows:

  • Calling GET with the filter parameter filter=displayName eq value_without_quotes results in a syntax error. To prevent this error, wrap the value in quotation marks (for example, filter=displayName eq "value_with_quotes").

  • Calling GET /api/2.0/accounts/{account_id}/scim/v2/Groups no longer returns members. Instead, retrieve membership information by getting each group's details individually. See get group details.

  • Calling PATCH /api/2.0/accounts/{account_id}/scim/v2/Groups/{id} returns a 204 response instead of a 200 response.

These changes apply only to new accounts; existing accounts are unaffected. Current integrations will continue to work without disruption.
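
A minimal sketch of the new behavior using the Python requests library; the account ID, token, and group name are placeholders, and the accounts host shown assumes the AWS GovCloud account console endpoint:

    import requests

    ACCOUNT_ID = "<account-id>"  # placeholder
    TOKEN = "<oauth-token>"      # placeholder
    BASE = f"https://accounts.cloud.databricks.us/api/2.0/accounts/{ACCOUNT_ID}/scim/v2"
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # Filter values must now be wrapped in double quotes to avoid a syntax error.
    groups = requests.get(
        f"{BASE}/Groups",
        headers=HEADERS,
        params={"filter": 'displayName eq "data-engineers"'},
    ).json()

    # Group listings no longer include members, so fetch each group by ID
    # to read its membership.
    for group in groups.get("Resources", []):
        detail = requests.get(f"{BASE}/Groups/{group['id']}", headers=HEADERS).json()
        print(group["displayName"], detail.get("members", []))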

Track and navigate notebook runs with the new cell execution minimap

August 28, 2025

Use the cell execution minimap to track your notebook’s progress at a glance. The minimap appears in the right margin and shows each cell’s execution state (skipped, queued, running, success, or error). Hover to see cell details, or click to jump directly to a cell.

For information about using the cell execution minimap, see Navigate the Databricks notebook and file editor.

Migrating Lakeflow Declarative Pipelines from legacy publishing mode is now GA

August 28, 2025

Lakeflow Declarative Pipelines has a legacy publishing mode that allows publishing only to a single catalog and schema. The default publishing mode enables publishing to multiple catalogs and schemas. Migration from the legacy publishing mode to the default publishing mode is now generally available.

See Enable the default publishing mode in a pipeline.

Governed tags are in Public Preview

August 26, 2025

You can now create governed tags to enforce consistent tagging across data assets such as catalogs, schemas, and tables. Using governed tags, admins define the allowed keys and values and control which users and groups can assign them to objects. This helps standardize metadata for data classification, cost tracking, access control, and automation.

See Governed tags.
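
Once an admin has defined a governed tag, assigning it uses the same SQL as other tags. A minimal sketch, with hypothetical catalog, schema, table, and tag names:

    # Assign an admin-defined tag key to a table. With governed tags, the
    # allowed values for `classification` and who may assign it are
    # enforced by policy.
    spark.sql("""
        ALTER TABLE main.sales.customers
        SET TAGS ('classification' = 'pii')
    """)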

Selectively and atomically replace data with INSERT REPLACE USING and INSERT REPLACE ON (GA)

August 26, 2025

INSERT REPLACE USING and INSERT REPLACE ON are now generally available in Databricks Runtime 17.2. Both SQL commands atomically replace part of a table with the result of a query.

INSERT REPLACE USING replaces rows whose USING column values are equal in the target and the source. INSERT REPLACE ON replaces rows that match a user-defined condition.

See INSERT in the SQL language reference and Selectively overwrite data with Delta Lake.
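
A minimal sketch of both commands from a notebook, with hypothetical table names; see the INSERT reference for the full syntax:

    # Atomically replace every row of `sales` whose `region` value matches a
    # region present in the incoming batch; all other rows are untouched.
    spark.sql("""
        INSERT INTO sales REPLACE USING (region)
        SELECT * FROM sales_updates
    """)

    # Replace rows matching an explicit condition instead of column equality.
    spark.sql("""
        INSERT INTO sales REPLACE ON sales.region = 'EMEA'
        SELECT * FROM emea_restated
    """)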

OAuth token federation is now GA

August 26, 2025

OAuth token federation is now generally available. Token federation enables you to securely access Databricks APIs using tokens from your identity provider (IdP). You can configure token federation policies directly in the Databricks UI, or using the Databricks CLI or REST API.

See Configure a federation policy.

New table property to control Delta table compression

August 26, 2025

You can now explicitly set the compression codec for a Delta table using the delta.parquet.compression.codec table property. This property ensures that all future writes to the table use the chosen codec. See Delta table properties reference.
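
For example, to pin a hypothetical table to zstd compression:

    # All future Parquet files written for this table will use zstd.
    spark.sql("""
        ALTER TABLE main.sales.events
        SET TBLPROPERTIES ('delta.parquet.compression.codec' = 'zstd')
    """)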

Automatic liquid clustering is now available for Lakeflow Declarative Pipelines

August 25, 2025

You can now use automatic liquid clustering with Lakeflow Declarative Pipelines. Use automatic liquid clustering with CLUSTER BY AUTO, and Databricks intelligently chooses clustering keys to optimize query performance.

See Automatic liquid clustering, create_streaming_table, table, CREATE MATERIALIZED VIEW (Lakeflow Declarative Pipelines), and CREATE STREAMING TABLE (Lakeflow Declarative Pipelines).
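
A minimal Python sketch of a pipeline dataset that opts in; the cluster_by_auto parameter follows the referenced create_streaming_table and table APIs, and the source table name is hypothetical:

    import dlt

    # Databricks chooses and evolves the clustering keys for this dataset
    # automatically.
    @dlt.table(cluster_by_auto=True)
    def orders_clustered():
        return spark.read.table("main.sales.orders")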

Enhanced autocomplete for complex data types in notebooks

August 22, 2025

Notebook autocomplete now supports enhanced suggestions for complex data types including structs, maps, and arrays in SQL cells. Additionally, when referencing common table expressions (CTEs) that use SELECT *, autocomplete provides column recommendations based on the underlying table structure.

See Personalized autocomplete.

Set the run-as user for Lakeflow Declarative Pipelines

August 18, 2025

You can now change the identity that a pipeline uses to run updates and the owner of tables published by the pipeline. This feature allows you to set a service principal as the run-as identity, which is safer and more reliable than using user accounts for automated workloads. Common use cases include recovering pipelines when the original owner was deactivated and deploying pipelines with service principals as a best practice.

For information about setting the run-as user, see Set the run-as user.

Single-node compute on standard access mode is now GA

August 12, 2025

Single-node compute resources with standard access mode are now generally available. This configuration allows multiple users to share a single-node compute resource with full user isolation. Single-node compute is useful for small jobs or non-distributed workloads.

See Compute configuration reference.

Column masks now retained when replacing a table

August 12, 2025

When you replace a table, a column in the new table that matches a column name from the original table now retains its existing column mask, even if no mask is specified. This change prevents accidental removal of column-level security policies during table replacement. Previously, replacing a table dropped all existing column masks, and only newly defined masks were applied.

This change affects SQL commands ([CREATE OR] REPLACE TABLE), DataFrame APIs (saveAsTable, replace, createOrReplace), and other similar table replacement operations.

See Manually apply row filters and column masks.
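
A sketch of the new behavior with hypothetical names: the ssn column keeps its mask across the replacement even though the new definition does not restate it.

    # A masking function and a table with a masked column.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.sec.mask_ssn(ssn STRING)
        RETURNS STRING
        RETURN CASE WHEN is_account_group_member('hr') THEN ssn ELSE '***' END
    """)
    spark.sql("""
        CREATE OR REPLACE TABLE main.hr.people (
            name STRING,
            ssn STRING MASK main.sec.mask_ssn
        )
    """)

    # Replacing the table without restating the mask: because `ssn` matches
    # a column from the original table, its mask is retained.
    spark.sql("""
        CREATE OR REPLACE TABLE main.hr.people (
            name STRING,
            ssn STRING,
            dept STRING
        )
    """)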

Databricks ODBC driver 2.9.2

August 5, 2025

The Databricks ODBC Driver version 2.9.2 is now available for download from the ODBC driver download page.

This release includes the following fixes and new features:

  • The process name is now used as the default UserAgentEntry if the UserAgentEntry is not explicitly set.
  • Added support for Databricks domains cloud.databricks.us and cloud.databricks.mil.
  • Enhanced recognition and handling of timestamp_ntz columns across multiple data source functions including SQLGetTypeInfo, SQLColumns, and SQLColAttribute.
  • Added certificate revocation list (CRL) cache support on Windows when UseSystemTruststore is enabled.
  • Added VOID type column support, so that VOID columns are now correctly listed in SQLGetColumns.
  • Enabled OAuth token exchange for IdPs different from the host, which allows the exchange of OAuth access tokens (including BYOT) for Databricks in-house tokens.
  • Added support for Windows Server 2025.
  • Fixed a memory leak in the driver.

This release includes upgrades to several third-party libraries:

  • OpenSSL upgraded from 3.0.15 to 3.0.16
  • libcURL upgraded from 8.11.0 to 8.12.1
  • Expat upgraded from 2.6.3 to 2.7.1

This release includes the following behavior changes:

  • The connector no longer supports Databricks Runtime version 10.4 LTS.
  • The default maximum catalog name length and maximum schema name length have been changed from 128 to 1024.
  • Ubuntu 20.04 is no longer supported.

For complete configuration information, see the Databricks ODBC Driver Guide installed with the driver download package.

Jobs in continuous mode can now have task-level retries for failed tasks

August 1, 2025

Jobs that are set to run in continuous mode now have the option to retry individual tasks on task failure.

See Run jobs continuously.
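
A sketch of the relevant Jobs API settings; the field names follow the Jobs API, and the job and notebook names are hypothetical:

    # A continuous job whose single task retries up to 3 times on failure.
    job_settings = {
        "name": "continuous-ingest",
        "continuous": {"pause_status": "UNPAUSED"},
        "tasks": [
            {
                "task_key": "ingest",
                "notebook_task": {"notebook_path": "/Jobs/ingest"},
                "max_retries": 3,
                "min_retry_interval_millis": 60_000,
            }
        ],
    }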

Databricks Runtime 17.1 is now GA

August 1, 2025

Databricks Runtime 17.1 is now generally available. See Databricks Runtime 17.1.

July 2025

The following features and updates were released on Databricks on AWS GovCloud in July 2025.

New compute policy form (Public Preview)

July 31, 2025

The new compute policy form uses UI elements to help you configure policy definitions, making it simpler to write compute policies in the UI.

The new form includes the following changes:

  • New definition dropdown menus allow you to configure rules without needing to reference the policy syntax.
  • Max compute resources per user, max DBUs per hour, and cluster type settings have moved under the Advanced options section.
  • Tagging definitions now have their own separate section.
  • Policy permission settings have moved out of the policy form and are now set using the permissions modal in the policy overview page.

See Configure policy definitions using the new policy form (Public Preview).

Sharing streaming tables and materialized views is GA

July 30, 2025

Using Delta Sharing to share streaming tables and materialized views is generally available. There are fewer limitations for share recipients and providers when sharing streaming tables and materialized views.

See Add streaming tables to a share, Add materialized views to a share, and Read shared streaming tables and materialized views.

Jobs & Pipelines list now includes Databricks SQL pipelines

July 29, 2025

The Jobs & Pipelines list now includes pipelines for materialized views and streaming tables that were created with Databricks SQL.

Organization name required to enable Delta Sharing on metastore

July 29, 2025

When you enable Delta Sharing on your metastore, you must specify an organization name if you share data with a Databricks recipient outside your account. When possible, existing provider names without an organization name are automatically updated to include account details, making them more readable. A readable organization name helps recipients identify their share providers.

See Enable Delta Sharing on a metastore and View providers.

One-time job runs now correctly record the job name in usage system table

July 28, 2025

The usage_metadata.job_name value in the system.billing.usage table now contains the run names for runs triggered through the one-time run API. If a run name isn't provided in the request body, the job_name field is recorded as Untitled.
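
For example, to spot-check the recorded names (query shape only; results depend on your account's usage):

    spark.sql("""
        SELECT usage_metadata.job_name, SUM(usage_quantity) AS dbus
        FROM system.billing.usage
        WHERE usage_metadata.job_run_id IS NOT NULL
        GROUP BY usage_metadata.job_name
        ORDER BY dbus DESC
    """).show()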

Serverless compute runtime updated to 17.0

July 28, 2025

Serverless compute for notebooks and jobs now uses an upgraded runtime, which roughly corresponds to Databricks Runtime 17.0. See Serverless compute release notes.

Unity Catalog attribute-based access control now available in AWS GovCloud (Beta)

July 28, 2025

Databricks now supports attribute-based access control (ABAC) in Unity Catalog, enabling dynamic, tag-driven access policies across catalogs, schemas, and tables. ABAC uses tags and user-defined functions (UDFs) to enforce fine-grained access controls based on data attributes such as sensitivity, region, or business domain.

Using ABAC, you can define scalable policies once and apply them across large sets of data assets. Policies inherit across the object hierarchy and can include row-level filters or column masking logic. This simplifies governance, supports centralized policy management, and improves security posture. See Unity Catalog attribute-based access control (ABAC).
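
The masking logic an ABAC policy references is an ordinary SQL UDF. A minimal sketch with hypothetical names; attaching the function to a tag-based policy is configured as described in the ABAC documentation:

    # Redact values unless the caller belongs to a privileged group.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.gov.redact_sensitive(v STRING)
        RETURNS STRING
        RETURN CASE
            WHEN is_account_group_member('compliance') THEN v
            ELSE 'REDACTED'
        END
    """)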

Disable DBFS root and mounts is in Public Preview

July 28, 2025

You can now disable access to the Databricks Filesystem (DBFS) root and mounts in existing Databricks workspaces. See Disable access to DBFS root and mounts in your existing Databricks workspace.

Improvements to the notebook editing experience

July 25, 2025

The following improvements have been made to the notebook editing experience:

  • Add a split view to edit notebooks side by side. See Edit notebooks side by side.
  • Pressing Cmd + F (Mac) or Ctrl + F (Windows) in a notebook now opens the native Databricks find-and-replace tool. This allows you to quickly search and replace text throughout your entire notebook, including content outside the current viewport. See Find and replace text.
  • Quickly switch between tab groups based on authoring contexts using the Home, Query editor, and Pipeline icons at the top left of the editor. See Switch between authoring contexts.

The CAN VIEW permission on SQL warehouses is now generally available

July 15, 2025

The CAN VIEW permission allows users to view SQL warehouses, including query history and query profiles. These users cannot run queries on the warehouse.

See SQL warehouse ACLs.

Must be metastore admin to transfer share ownership for Delta Sharing

July 14, 2025

To change the ownership of a share for Delta Sharing, you must now be the metastore admin. Share owners can no longer transfer ownership. See Update shares.

Parent tasks (Run job and For each) now have a separate limit

July 4, 2025

Tasks that wait on child processes (Run job and For each tasks) now have a separate limit for the number of tasks that can run simultaneously, and do not count against the overall limit.

See Resource limits.

June 2025

The following features and updates were released on Databricks on AWS GovCloud in June 2025.

Databricks Runtime 17.0 is GA

June 24, 2025

Databricks Runtime 17.0 and Databricks Runtime 17.0 ML are now generally available.

See Databricks Runtime 17.0 and Databricks Runtime 17.0 for Machine Learning.

OIDC federation for Databricks-to-open Delta Sharing is generally available

June 24, 2025

Using OpenID Connect (OIDC) federation for Delta Sharing, where recipients use JSON Web Tokens (JWT) from their own IdP as short-lived OAuth tokens for secure, federated authentication, is generally available.

See Use OpenID Connect (OIDC) federation to enable authentication to Delta Sharing shares (open sharing).

Jobs & Pipelines in the left navigation menu

June 18, 2025

The Jobs & Pipelines item in the left navigation is the entry point to Lakeflow, Databricks' unified data engineering features. The Pipelines and Workflows items in the left navigation have been removed, and their functionality is now available from Jobs & Pipelines.

Automatic liquid clustering is now GA

June 12, 2025

Automatic liquid clustering is now generally available. You can enable automatic liquid clustering on Unity Catalog managed tables. Automatic liquid clustering intelligently selects clustering keys to optimize data layout for your queries. See Automatic liquid clustering.

Monitor and revoke personal access tokens in your account (GA)

June 11, 2025

The token report page enables account admins to monitor and revoke personal access tokens (PATs) in the account console. Databricks recommends you use OAuth access tokens instead of PATs for greater security and convenience. See Monitor and revoke personal access tokens in the account.

AUTO CDC APIs replace APPLY CHANGES

June 11, 2025

The new AUTO CDC APIs create flows that support change data capture (CDC) in Lakeflow Declarative Pipelines. Databricks recommends replacing usage of the APPLY CHANGES APIs with AUTO CDC.

For information about the SQL AUTO CDC API, see AUTO CDC INTO in the Lakeflow Declarative Pipelines SQL language reference. For information about the Python create_auto_cdc_flow APIs, see the Lakeflow Declarative Pipelines Python language reference.
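
A minimal Python sketch, assuming create_auto_cdc_flow takes the same arguments as the apply_changes API it replaces; the source view and column names are hypothetical:

    import dlt

    dlt.create_streaming_table("customers")

    # Apply a CDC feed to the target, keyed by customer_id and ordered by
    # event_ts. (Argument names assumed to mirror apply_changes.)
    dlt.create_auto_cdc_flow(
        target="customers",
        source="customers_cdc_feed",
        keys=["customer_id"],
        sequence_by="event_ts",
    )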

Databricks Jobs is now Lakeflow Jobs

June 11, 2025

The product known as Databricks Jobs is now Lakeflow Jobs. No migration is required to use Lakeflow Jobs. See Lakeflow Jobs.

DLT is now Lakeflow Declarative Pipelines

June 11, 2025

The product known as DLT is now Lakeflow Declarative Pipelines. No migration is required to use Lakeflow Declarative Pipelines. See Lakeflow Declarative Pipelines.

MLflow 3.0 is generally available

June 10, 2025

MLflow 3.0 is now generally available.

MLflow 3.0 on Databricks delivers state-of-the-art experiment tracking, observability, and performance evaluation for machine learning models, generative AI applications, and agents on the Databricks Lakehouse. See Get started with MLflow 3.

Cross-platform view sharing is now GA

June 9, 2025

Cross-platform view sharing via Delta Sharing is now generally available. The data access and billing methods used when sharing views have been updated. See How do I incur and check Delta Sharing costs?.

Account admins can now configure the time-to-live (TTL) of data materialization. See Configure TTL of data materialization.

New consumer entitlement is generally available

June 5, 2025

Workspace admins can now grant consumer access as an entitlement to users, service principals, and groups. This allows for more fine-grained control over what users can do in a Databricks workspace. Key details:

  • Consumer access enables limited workspace UI access, querying SQL warehouses using BI tools, and viewing dashboards with embedded or viewer credentials.

  • Useful for business users who need access to shared content and dashboards but not to author or manage workspace objects.

  • This entitlement is more restrictive than workspace access or Databricks SQL access. To assign it independently, remove broader entitlements from the users group and configure them per user or group.

See Manage entitlements.

Billing and audit system tables are now available in AWS GovCloud DoD

June 5, 2025

The system.billing schema and the system.access.audit table are now supported on AWS GovCloud DoD. System tables provide a Databricks-hosted analytical store of your account's operational data, accessible in the system catalog. System tables in schemas other than billing and access.audit are not available on AWS GovCloud. For more information, see Monitor account activity with system tables.
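
For example, recent audit events can be read with a standard query:

    # The audit log is an ordinary table in the system catalog.
    spark.sql("""
        SELECT event_time, user_identity.email, action_name
        FROM system.access.audit
        ORDER BY event_time DESC
        LIMIT 100
    """).show()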

Corrected job_name values in system.billing.usage

June 3, 2025

The usage_metadata.job_name value in the system.billing.usage table now correctly contains job names. Previously, this value was populated with task keys instead of the user-provided job names. This change does not apply to one-time job runs, which continue to be logged with the task key.

See Billable usage system table reference.

History sharing now enabled by default to improve table read performance for Databricks-to-Databricks Delta Sharing (GA)

June 3, 2025

History sharing is enabled by default (for Databricks Runtime 16.2 and above) to improve table read performance for Databricks-to-Databricks Delta Sharing. See Improve table read performance with history sharing.

May 2025

The following features and updates were released on Databricks on AWS GovCloud in May 2025.

Improved UI for managing notebook dashboards

May 30, 2025

Quickly navigate between a notebook and its associated dashboards by clicking the Dashboard icon in the top right.

See Navigate between a notebook dashboard and a notebook.

SQL authoring improvements

May 30, 2025

The following improvements were made to the SQL editing experience in the SQL editor and notebooks:

  • Filters applied to result tables now also affect visualizations, enabling interactive exploration without modifying the underlying query or dataset. To learn more about filters, see Filter results.
  • In a SQL notebook, you can now create a new query from a filtered results table or visualization. See Create a query from filtered results.
  • You can hover over * in a SELECT * query to expand the columns in the queried table.
  • Custom SQL formatting settings are now available in the new SQL editor and the notebook editor. Click View > Developer Settings > SQL Format. See Customize SQL formatting.

Dashboards, alerts, and queries are supported as workspace files

May 20, 2025

Dashboards, alerts, and queries are now supported as workspace files, which means you can programmatically interact with these Databricks objects like any other file, from anywhere the workspace filesystem is available. See What are workspace files? and Programmatically interact with workspace files.

Workflow task repair now respects transitive dependencies

May 19, 2025

Previously, repaired tasks were unblocked once their direct dependencies completed. Now, repaired tasks wait for all transitive dependencies. For example, in a graph A → B → C, repairing A and C will block C until A finishes.

You can now collaborate with multiple parties with Databricks Clean Rooms

May 14, 2025

Databricks Clean Rooms now supports:

  • Up to 10 collaborators for more complex, multi-party data projects.
  • A new notebook approval workflow that enhances security and compliance by supporting designated runners and requiring explicit approval before execution.
  • Auto-approval options for trusted partners.
  • Difference views for easy review and auditing.

These updates enable more secure, scalable, and auditable collaboration.

Databricks Runtime 16.4 LTS is GA

May 13, 2025

Databricks Runtime 16.4 LTS and Databricks Runtime 16.4 LTS ML are now generally available.

See Databricks Runtime 16.4 LTS and Databricks Runtime 16.4 LTS for Machine Learning.

Databricks JDBC driver 2.7.3

May 12, 2025

The Databricks JDBC Driver version 2.7.3 is now available for download from the JDBC driver download page.

This release includes the following enhancements and new features:

  • Added support for Azure Managed Identity OAuth 2.0 authentication. To enable this, set the Auth_Flow property to 3.
  • Added support for OAuth token exchange for IdPs different from the host. OAuth access tokens (including BYOT) will be exchanged for a Databricks access token.
  • OAuth browser (Auth_Flow=2) now supports token caching for Linux and Mac operating systems.
  • Added support for VOID, Variant, and TIMESTAMP_NTZ data types in getColumns() and getTypeInfo() APIs.
  • The driver now lists columns with unknown or unsupported types and maps them to SQL VARCHAR in the getColumns() metadata API.
  • Added support for cloud.databricks.us and cloud.databricks.mil domains when connecting to Databricks using OAuth (AuthMech=11).
  • Upgraded to netty-buffer 4.1.119 and netty-common 4.1.119 (previously 4.1.115).

This release resolves the following issues:

  • Compatibility issues when deserializing Apache Arrow data with Java JVMs version 11 or higher.
  • Issues with date and timestamp before the beginning of the Gregorian calendar when connecting to specific Spark versions with Arrow result set serialization.

For complete configuration information, see the Databricks JDBC Driver Guide installed with the driver download package.


Databricks GitHub App adds workflow scope to support authoring GitHub Actions

May 9, 2025

Databricks has made a change that may result in an email request for the Read and write access to Workflows scope for the Databricks GitHub app. This change makes the scope of the Databricks GitHub app consistent with the required scope of other supported authentication methods, and allows users to commit GitHub Actions from Databricks Git folders using the Databricks GitHub app for authorization.

If you are the owner of a Databricks account where the Databricks GitHub app is installed and configured to support OAuth, you may receive a notification from GitHub in an email titled "Databricks is requesting updated permissions". (This is a legitimate request from Databricks.) Accept the new permissions to enable committing GitHub Actions from Databricks Git folders with the Databricks GitHub app.

Query snippets are now available in the new SQL editor, notebooks, files, and dashboards

May 9, 2025

Query snippets are segments of queries that you can share and trigger using autocomplete. You can now create query snippets through the View menu in the new SQL editor, and also in the notebook and file editors. You can use your query snippets in the SQL editor, notebook SQL cells, SQL files, and SQL datasets in dashboards.

See Query snippets.

Billing and audit system tables are now available in AWS GovCloud

May 9, 2025

The system.billing schema and the system.access.audit table are now supported on AWS GovCloud. System tables provide a Databricks-hosted analytical store of your account's operational data, accessible in the system catalog. System tables in schemas other than billing and access.audit are not available on AWS GovCloud. System tables are not available in AWS GovCloud DoD. For more information, see Monitor account activity with system tables.

You can now create views in ETL pipelines

May 8, 2025

The CREATE VIEW SQL command is now available in ETL pipelines. You can create a dynamic view of your data. See CREATE VIEW (Lakeflow Declarative Pipelines).

Configure Python syntax highlighting in Databricks notebooks

May 8, 2025

You can now configure Python syntax highlighting in notebooks by placing a pyproject.toml file in the notebook's ancestor path or your home folder. Through the pyproject.toml file, you can configure ruff, pylint, pyright, and flake8 linters, as well as disable Databricks-specific rules. This configuration is supported for clusters running Databricks Runtime 16.4 or above, or Client 3.0 or above.

See Configure Python syntax highlighting.
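
A minimal pyproject.toml sketch; the keys follow each linter's own configuration format, and the specific rule shown is illustrative:

    # pyproject.toml, placed in the notebook's ancestor path or home folder
    [tool.ruff]
    line-length = 120

    [tool.ruff.lint]
    # Example: silence a specific rule notebook-wide.
    ignore = ["F841"]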

April 2025

The following features and updates were released on Databricks on AWS GovCloud in April 2025.

Deletion vectors on DLT tables now follow workspace settings

April 28, 2025

New streaming tables and materialized views will follow the workspace settings for deletion vectors. See Auto-enable deletion vectors and What are deletion vectors?.

Strict enforcement of row-level security and column masking policies in Delta Sharing

April 21, 2025

Delta Sharing now consistently enforces row-level security and column masking policies applied to tables a shared data asset is dependent on, whether those policies were applied before or after the data asset was shared. Recipients may experience differences in query behavior when accessing shared data that depends on tables with row-level security or column masking policies. This ensures that data access aligns with the provider's intended security controls at all times.

See Row filters and column masks.

Run a subset of tasks within a job

April 21, 2025

You can now run a subset of the tasks when manually triggering a job. See Run a job with different settings.

Python type error highlighting

April 14, 2025

Python code in notebooks and file editors now highlights type errors for non-existent attributes, missing arguments, and mismatched arguments. See Python error highlighting.

Reference SQL output in downstream tasks of a job

April 14, 2025

You can now use dynamic values to reference the output of a SQL task in downstream tasks in the same job. For each tasks can iterate over the rows of data in the output.

See What is a dynamic value reference?.

Access UDF context information using TaskContext

April 14, 2025

The TaskContext PySpark API now allows you to retrieve context information, such as user identity and cluster tags, while running Batch Unity Catalog Python UDFs or PySpark UDFs. This feature lets you pass user-specific details, like identity, to authenticate external services within UDFs. See Get task context in a UDF.
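
A minimal PySpark sketch: TaskContext.get() and getLocalProperty() are standard PySpark APIs, but the specific property keys exposed for identity and cluster tags are documented in Get task context in a UDF, so the key below is illustrative only:

    from pyspark.sql.functions import udf
    from pyspark.taskcontext import TaskContext

    @udf("string")
    def whoami():
        # Read a context property from inside the UDF; "user" is an
        # illustrative key, not a confirmed one.
        ctx = TaskContext.get()
        return ctx.getLocalProperty("user")

    df = spark.range(1).withColumn("caller", whoami())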

The BROWSE privilege is GA

April 1, 2025

The BROWSE privilege is now generally available. The BROWSE privilege allows you to grant users, service principals, and account groups permission to view a Unity Catalog object's metadata. This enables users to discover data without having read access to the data. A user can view an object's metadata using Catalog Explorer, the schema browser, search results, the lineage graph, information_schema, and the REST API.

See BROWSE.
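
For example, granting metadata-only discovery on a catalog (hypothetical principal and catalog names):

    # Members of `data-analysts` can see object metadata in `main` without
    # being able to read the underlying data.
    spark.sql("GRANT BROWSE ON CATALOG main TO `data-analysts`")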