Databricks on AWS GovCloud release notes 2025
The following platform features, improvements, and fixes were released on Databricks on AWS GovCloud in 2025.
July 2025
The following features and updates were released on Databricks on AWS GovCloud in July 2025.
Parent tasks (Run job and For each) now have a separate limit
July 4, 2025
Tasks that wait on child processes (Run job and For each tasks) now have a separate limit for the number of tasks that can run simultaneously, and do not count against the overall limit.
See Resource limits.
June 2025
The following features and updates were released on Databricks on AWS GovCloud in June 2025.
Databricks Runtime 17.0 is GA
June 24, 2025
Databricks Runtime 17.0 and Databricks Runtime 17.0 ML are now generally available.
See Databricks Runtime 17.0 and Databricks Runtime 17.0 for Machine Learning.
OIDC federation for Databricks-to-open Delta Sharing is generally available
June 24, 2025
OpenID Connect (OIDC) federation for Delta Sharing, in which recipients use JSON Web Tokens (JWTs) from their own IdP as short-lived OAuth tokens for secure, federated authentication, is now generally available.
Jobs & Pipelines in the left navigation menu
June 18, 2025
The Jobs & Pipelines item in the left navigation is the entry point to Lakeflow, Databricks' unified data engineering features. The Pipelines and Workflows items in the left navigation have been removed, and their functionality is now available from Jobs & Pipelines.
Automatic liquid clustering is now GA
June 12, 2025
Automatic liquid clustering is now generally available. You can enable automatic liquid clustering on Unity Catalog managed tables. Automatic liquid clustering intelligently selects clustering keys to optimize data layout for your queries. See Automatic liquid clustering.
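For example, statements along these lines enable it on a new or an existing managed table (the catalog, schema, and column names are illustrative):

```python
# Illustrative: let Databricks choose and evolve clustering keys automatically
# on a Unity Catalog managed table. `spark` is the ambient SparkSession in a
# Databricks notebook.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.orders (
        order_id BIGINT,
        customer_id BIGINT,
        order_ts TIMESTAMP
    )
    CLUSTER BY AUTO
""")

# Automatic liquid clustering can also be enabled on an existing managed table.
spark.sql("ALTER TABLE main.sales.orders CLUSTER BY AUTO")
```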
Monitor and revoke personal access tokens in your account (GA)
June 11, 2025
The token report page enables account admins to monitor and revoke personal access tokens (PATs) in the account console. Databricks recommends you use OAuth access tokens instead of PATs for greater security and convenience. See Monitor and revoke personal access tokens in the account.
AUTO CDC APIs replace APPLY CHANGES
June 11, 2025
The new `AUTO CDC` APIs create flows that support change data capture (CDC) in Lakeflow Declarative Pipelines. Databricks recommends replacing usage of the `APPLY CHANGES` APIs with `AUTO CDC`. For details, see the documentation for the SQL `AUTO CDC` API and the Python `create_auto_cdc_flow` APIs.
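As a rough sketch of the Python API, assuming `create_auto_cdc_flow` takes the same arguments as the `apply_changes` API it replaces (dataset and column names are illustrative):

```python
import dlt
from pyspark.sql.functions import col

# Target streaming table that the CDC flow keeps up to date.
dlt.create_streaming_table("users_current")

# Sketch: apply a change feed to the target, assuming create_auto_cdc_flow
# mirrors the older apply_changes signature (keys, sequencing, SCD type).
dlt.create_auto_cdc_flow(
    target="users_current",
    source="users_cdc_feed",          # illustrative source dataset
    keys=["user_id"],                 # primary key column(s)
    sequence_by=col("sequence_num"),  # ordering column for out-of-order events
    stored_as_scd_type=1,             # keep only the latest row per key
)
```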
Databricks Jobs is now Lakeflow Jobs
June 11, 2025
The product known as Databricks Jobs is now Lakeflow Jobs. No migration is required to use Lakeflow Jobs. See Lakeflow Jobs.
DLT is now Lakeflow Declarative Pipelines
June 11, 2025
The product known as DLT is now Lakeflow Declarative Pipelines. No migration is required to use Lakeflow Declarative Pipelines. See Lakeflow Declarative Pipelines.
MLflow 3.0 is generally available
June 10, 2025
MLflow 3.0 is now generally available.
MLflow 3.0 on Databricks delivers state-of-the-art experiment tracking, observability, and performance evaluation for machine learning models, generative AI applications, and agents on the Databricks Lakehouse. See Get started with MLflow 3.
Cross-platform view sharing is now GA
June 9, 2025
Cross-platform view sharing via Delta Sharing is now generally available. The data access and billing methods for shared views have been updated. See How do I incur and check Delta Sharing costs?.
Account admins can now configure the time-to-live (TTL) of data materialization. See Configure TTL of data materialization.
New consumer entitlement is generally available
June 5, 2025
Workspace admins can now grant consumer access as an entitlement to users, service principals, and groups. This allows for more fine-grained control over what users can do in a Databricks workspace. Key details:
- Consumer access enables limited workspace UI access, querying SQL warehouses using BI tools, and viewing dashboards with embedded or viewer credentials.
- Useful for business users who need access to shared content and dashboards but not to author or manage workspace objects.
- This entitlement is more restrictive than workspace access or Databricks SQL access. To assign it independently, remove broader entitlements from the `users` group and configure them per user or group.
See Manage entitlements.
Billing and audit system tables are now available in AWS GovCloud DoD
June 5, 2025
The `system.billing` schema and the `system.access.audit` table are now supported on AWS GovCloud DoD. System tables provide a Databricks-hosted analytical store of your account's operational data, accessible in the `system` catalog. System tables in schemas other than `billing` and `access.audit` are not available on AWS GovCloud. For more information, see Monitor account activity with system tables.
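For example, a query along these lines (the column selection is illustrative) surfaces recent audit events from a notebook:

```python
# Illustrative query against the audit log system table; requires SELECT
# access on system.access.audit. `spark` is the ambient SparkSession.
recent_events = spark.sql("""
    SELECT event_time, user_identity.email, service_name, action_name
    FROM system.access.audit
    ORDER BY event_time DESC
    LIMIT 100
""")
recent_events.show(truncate=False)
```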
Corrected `job_name` values in `system.billing.usage`
June 3, 2025
The `usage_metadata.job_name` value in the `system.billing.usage` table now correctly contains job names. Previously, this value was populated with task keys instead of the user-provided job names. This change does not apply to one-time job runs, which continue to be logged with the task key.
See Billable usage system table reference.
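A quick spot-check along these lines (the aggregation is illustrative) shows job names alongside job IDs:

```python
# Illustrative spot-check: job names should now appear next to job IDs.
job_usage = spark.sql("""
    SELECT usage_metadata.job_id,
           usage_metadata.job_name,
           SUM(usage_quantity) AS total_usage
    FROM system.billing.usage
    WHERE usage_metadata.job_name IS NOT NULL
    GROUP BY ALL
""")
job_usage.show(truncate=False)
```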
History sharing now enabled by default to improve table read performance for Databricks-to-Databricks Delta Sharing (GA)
June 3, 2025
History sharing is enabled by default (for Databricks Runtime 16.2 and above) to improve table read performance for Databricks-to-Databricks Delta Sharing. See Improve table read performance with history sharing.
May 2025
The following features and updates were released on Databricks on AWS GovCloud in May 2025.
Improved UI for managing notebook dashboards
May 30, 2025
Quickly navigate between a notebook and its associated dashboards using the button in the top right of the notebook.
See Navigate between a notebook dashboard and a notebook.
SQL authoring improvements
May 30, 2025
The following improvements were made to the SQL editing experience in the SQL editor and notebooks:
- Filters applied to result tables now also affect visualizations, enabling interactive exploration without modifying the underlying query or dataset. To learn more about filters, see Filter results.
- In a SQL notebook, you can now create a new query from a filtered results table or visualization. See Create a query from filtered results.
- You can hover over `*` in a `SELECT *` query to expand the columns in the queried table.
- Custom SQL formatting settings are now available in the new SQL editor and the notebook editor. Click View > Developer Settings > SQL Format. See Customize SQL formatting.
Dashboards, alerts, and queries are supported as workspace files
May 20, 2025
Dashboards, alerts, and queries are now supported as workspace files, which means you can programmatically interact with these Databricks objects like any other file, from anywhere the workspace filesystem is available. See What are workspace files? and Programmatically interact with workspace files.
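As a minimal sketch, assuming a workspace path like the one below, these objects now appear alongside notebooks when you list a folder from a cluster:

```python
import os

# Illustrative: dashboards, alerts, and queries now show up as files in the
# workspace filesystem, alongside notebooks (the path is an example).
folder = "/Workspace/Users/someone@example.com"
for entry in sorted(os.listdir(folder)):
    print(entry)
```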
Workflow task repair now respects transitive dependencies
May 19, 2025
Previously, repaired tasks were unblocked once their direct dependencies completed. Now, repaired tasks wait for all transitive dependencies. For example, in a graph A → B → C, repairing A and C will block C until A finishes.
Databricks Runtime 16.4 LTS is GA
May 13, 2025
Databricks Runtime 16.4 LTS and Databricks Runtime 16.4 LTS ML are now generally available.
See Databricks Runtime 16.4 LTS and Databricks Runtime 16.4 LTS for Machine Learning.
Databricks JDBC driver 2.7.3
May 12, 2025
The Databricks JDBC Driver version 2.7.3 is now available for download from the JDBC driver download page.
This release includes the following enhancements and new features:
- Added support for Azure Managed Identity OAuth 2.0 authentication. To enable this, set the `Auth_Flow` property to 3.
- Added support for OAuth token exchange for IdPs different from the host. OAuth access tokens (including BYOT) will be exchanged for a Databricks access token.
- OAuth browser authentication (`Auth_Flow=2`) now supports token caching for Linux and Mac operating systems.
- Added support for the `VOID`, `Variant`, and `TIMESTAMP_NTZ` data types in the `getColumns()` and `getTypeInfo()` APIs.
- The driver now lists columns with unknown or unsupported types and maps them to SQL `VARCHAR` in the `getColumns()` metadata API.
- Added support for the cloud.databricks.us and cloud.databricks.mil domains when connecting to Databricks using OAuth (`AuthMech=11`).
- Upgraded to netty-buffer 4.1.119 and netty-common 4.1.119 (previously 4.1.115).
This release resolves the following issues:
- Compatibility issues when deserializing Apache Arrow data with Java JVMs version 11 or higher.
- Issues with date and timestamp before the beginning of the Gregorian calendar when connecting to specific Spark versions with Arrow result set serialization.
For complete configuration information, see the Databricks JDBC Driver Guide installed with the driver download package.
You can now collaborate with multiple parties with Databricks Clean Rooms
May 14, 2025
Databricks Clean Rooms now supports:
- Up to 10 collaborators for more complex, multi-party data projects.
- A new notebook approval workflow that enhances security and compliance by allowing designated runners and requiring explicit approval before execution.
- Auto-approval options for trusted partners.
- Difference views for easy review and auditing.
These updates enable more secure, scalable, and auditable collaboration.
Databricks GitHub App adds workflow scope to support authoring GitHub Actions
May 9, 2025
Databricks made a change that may result in an email request for the Read and write access to Workflows scope for the Databricks GitHub app. This change makes the scope of the Databricks GitHub app consistent with the required scope of other supported authentication methods and allows users to commit GitHub Actions from Databricks Git folders using the Databricks GitHub app for authorization.
If you are the owner of a Databricks account where the Databricks GitHub app is installed and configured to support OAuth, you may receive the following notification in an email titled "Databricks is requesting updated permissions" from GitHub. (This is a legitimate email request from Databricks.) Accept the new permissions in order to enable committing GitHub Actions from Databricks Git folders with the Databricks GitHub app.
Query snippets are now available in the new SQL editor, notebooks, files, and dashboards
May 9, 2025
Query snippets are segments of queries that you can share and trigger using autocomplete. You can now create query snippets through the View menu in the new SQL editor, and also in the notebook and file editors. You can use your query snippets in the SQL editor, notebook SQL cells, SQL files, and SQL datasets in dashboards.
See Query snippets.
Billing and audit system tables are now available in AWS GovCloud
May 9, 2025
The `system.billing` schema and the `system.access.audit` table are now supported on AWS GovCloud. System tables provide a Databricks-hosted analytical store of your account's operational data, accessible in the `system` catalog. System tables in schemas other than `billing` and `access.audit` are not available on AWS GovCloud. System tables are not available in AWS GovCloud DoD. For more information, see Monitor account activity with system tables.
You can now create views in ETL pipelines
May 8, 2025
The `CREATE VIEW` SQL command is now available in ETL pipelines. You can create a dynamic view of your data. See CREATE VIEW (Lakeflow Declarative Pipelines).
Configure Python syntax highlighting in Databricks notebooks
May 8, 2025
You can now configure Python syntax highlighting in notebooks by placing a `pyproject.toml` file in the notebook's ancestor path or your home folder. Through the `pyproject.toml` file, you can configure `ruff`, `pylint`, `pyright`, and `flake8` linters, as well as disable Databricks-specific rules. This configuration is supported for clusters running Databricks Runtime 16.4 or above, or Client 3.0 or above.
See Configure Python syntax highlighting.
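As a minimal sketch, assuming your home folder path, the following writes a `pyproject.toml` that relaxes one `ruff` rule (the rule choice is illustrative):

```python
# Illustrative: place a pyproject.toml in your workspace home folder (the path
# is an example) so notebook linting picks up the configuration below.
pyproject = """\
[tool.ruff.lint]
# Example: ignore unused-import warnings in exploratory notebooks.
ignore = ["F401"]
"""

with open("/Workspace/Users/someone@example.com/pyproject.toml", "w") as f:
    f.write(pyproject)
```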
April 2025
The following features and updates were released on Databricks on AWS GovCloud in April 2025.
Deletion vectors on DLT tables now follow workspace settings
April 28, 2025
New streaming tables and materialized views will follow the workspace settings for deletion vectors. See Auto-enable deletion vectors and What are deletion vectors?.
Strict enforcement of row-level security and column masking policies in Delta Sharing
April 21, 2025
Delta Sharing now consistently enforces row-level security and column masking policies applied to tables that a shared data asset depends on, whether those policies were applied before or after the data asset was shared. Recipients may experience differences in query behavior when accessing shared data that depends on tables with row-level security or column masking policies. This ensures that data access always aligns with the provider's intended security controls.
See Row filters and column masks.
Run a subset of tasks within a job
April 21, 2025
You can now run a subset of the tasks when manually triggering a job. See Run a job with different settings.
Python type error highlighting
April 14, 2025
The notebook and file editors now highlight Python type errors, such as references to nonexistent attributes and missing or mismatched arguments. See Python error highlighting.
Reference SQL output in downstream tasks of a job
April 14, 2025
You can now use dynamic values to reference the output of a SQL task in downstream tasks in the same job. For each tasks can iterate over the rows of data in the output.
See What is a dynamic value reference?.
Access UDF context information using TaskContext
April 14, 2025
The TaskContext PySpark API now allows you to retrieve context information, such as user identity and cluster tags, while running Batch Unity Catalog Python UDFs or PySpark UDFs. This feature lets you pass user-specific details, like identity, to authenticate external services within UDFs. See Get task context in a UDF.
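As an illustration, a UDF along these lines reads the calling user; the `"user"` property key is an assumption, so check the linked documentation for the supported keys:

```python
from pyspark.sql.functions import udf
from pyspark.taskcontext import TaskContext

@udf("string")
def current_user_tag():
    # Assumption: "user" is one of the local properties exposed to UDFs; see
    # the TaskContext documentation for the supported keys.
    ctx = TaskContext.get()
    return ctx.getLocalProperty("user")

# `spark` is the ambient SparkSession in a Databricks notebook.
df = spark.range(1).withColumn("run_as", current_user_tag())
df.show()
```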
The `BROWSE` privilege is GA
April 1, 2025
The `BROWSE` privilege is now generally available. The `BROWSE` privilege allows you to grant users, service principals, and account groups permission to view a Unity Catalog object's metadata. This enables users to discover data without having read access to it. A user can view an object's metadata using Catalog Explorer, the schema browser, search results, the lineage graph, `information_schema`, and the REST API.
See BROWSE.
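For example, a grant like the following (the catalog and group names are illustrative) makes metadata discoverable without exposing the underlying data:

```python
# Illustrative: allow a group to discover metadata in a catalog without
# granting read access to the data itself.
spark.sql("GRANT BROWSE ON CATALOG main TO `data-discovery-analysts`")
```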