October 2025
These features and Databricks platform improvements were released in October 2025.
Releases are staged. Your Databricks account might not be updated until a week or more after the initial release date.
Compatibility Mode (Public Preview)
October 21, 2025
Compatibility Mode is now in Public Preview. Compatibility Mode generates a read-only version of a Unity Catalog managed table, streaming table, or materialized view that is automatically synced with the original table. This enables external Delta Lake and Iceberg clients, such as Amazon Athena, Snowflake, and Microsoft Fabric, to read your tables and views without sacrificing performance on Databricks. You can configure how often your read-only versions are refreshed, up to near real-time.
See Compatibility Mode.
Zstd is now the default compression for new Delta tables
October 21, 2025
All newly created Delta tables in Databricks Runtime 16.0 and above now use Zstandard (Zstd) compression by default instead of Snappy.
Existing tables continue to use their current compression codec. To change the compression codec for an existing table, set the delta.parquet.compression.codec table property. See Delta table properties reference.
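As a minimal sketch, the property can be set with a one-line ALTER TABLE statement. The table name main.sales.events below is a hypothetical placeholder; the property name comes from the Delta table properties reference.

```python
# Minimal sketch: opt an existing Delta table into Zstd compression.
# `main.sales.events` is a hypothetical table name used for illustration.
spark.sql("""
    ALTER TABLE main.sales.events
    SET TBLPROPERTIES ('delta.parquet.compression.codec' = 'zstd')
""")

# Only newly written Parquet files pick up the new codec; existing files
# are rewritten when operations such as OPTIMIZE compact them.
```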
Unified runs list (Public Preview)
October 20, 2025
The unified runs list is in Public Preview. Monitor both job and pipeline runs in a single unified list.
See What changes are in the Unified Runs List preview?.
Databricks Connector for Google Sheets offers additional features (Public Preview)
October 17, 2025
The Databricks Connector for Google Sheets introduces improved query management options. Users can save queries within a Google Sheet spreadsheet, enabling easy data refresh, query reuse, and query editing.
See Connect to Databricks from Google Sheets.
Dashboard and Genie spaces tagging (Public Preview)
October 16, 2025
You can now add tags to dashboards and Genie spaces to improve organization across your workspace. Tags can also be used for automation. For example, you can tag a dashboard as “Work in progress,” and an overnight process can use the API to retrieve all dashboards with that tag and assign them to a temporary warehouse until they’re tagged as “Certified.” Searching by dashboard tags is not supported.
See Manage dashboard tags and Add tags.
Jobs can now be triggered on source table update
October 16, 2025
You can now create triggers that run a job when a source table is updated. See Trigger jobs when source tables are updated.
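As a hedged sketch of what that configuration can look like through the Jobs API, the following adds a table-update trigger to an existing job. The field names (trigger.table_update, table_names, condition) follow the Jobs API trigger settings but should be verified against the current API reference; the host, token, and job ID are placeholders.

```python
# Hedged sketch: add a table-update trigger to an existing job via the
# Jobs API. Verify field names against the Jobs API reference.
import requests

HOST = "https://<workspace-hostname>"   # placeholder
TOKEN = "<databricks-token>"            # placeholder

payload = {
    "job_id": 123,  # hypothetical job ID
    "new_settings": {
        "trigger": {
            "pause_status": "UNPAUSED",
            "table_update": {
                # Fully qualified Unity Catalog names of the source tables.
                "table_names": ["main.sales.orders"],
                # Run when any of the listed tables is updated.
                "condition": "ANY_UPDATED",
            },
        }
    },
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
```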
Databricks Asset Bundles in the workspace is GA
October 16, 2025
Databricks Asset Bundles in the workspace is now generally available (GA). This feature allows you to collaborate with other users in your organization to edit, commit, test, and deploy bundle updates through the UI.
See Collaborate on bundles in the workspace.
Create backfill job runs
October 14, 2025
Job backfills allow you to trigger job runs that backfill data from the past. This is useful for loading older data or for repairing data when processing failures occur. For more details, see Backfill jobs.
SQL MCP server now available (Beta)
October 10, 2025
Databricks now provides a managed SQL MCP server that allows AI agents to execute SQL queries directly against Unity Catalog tables using SQL warehouses. The SQL MCP server is available at https://<workspace-hostname>/api/2.0/mcp/sql. See Use Databricks managed MCP servers.
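As a rough sketch, an MCP client can connect to this endpoint and list the tools it exposes. The example below uses the open-source mcp Python SDK; bearer-token authentication is an assumption here, so check the managed MCP server docs for the recommended auth flow.

```python
# Hedged sketch: connect an MCP client to the SQL managed MCP server and
# list its tools. Auth via a bearer token is an assumption to verify.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

URL = "https://<workspace-hostname>/api/2.0/mcp/sql"           # placeholder
HEADERS = {"Authorization": "Bearer <databricks-token>"}        # placeholder

async def main():
    async with streamablehttp_client(URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)

asyncio.run(main())
```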
Data Classification (Public Preview)
October 13, 2025
Databricks Data Classification is now in Public Preview. It supports all catalog types, consolidates all classification results into a single system table, and adds a new UI for reviewing and auto-tagging classifications. See Data Classification.
Multimodal support is now available
October 13, 2025
Mosaic AI Model Serving now supports multimodal inputs for Databricks hosted foundation models. See Query vision models.
This multimodal support is available through the following features:
- Foundation Model APIs pay-per-token.
- Foundation Model APIs provisioned throughput.
- AI Functions, for both real-time inference and batch inference workloads.
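For real-time inference, a request can pass images as base64-encoded content blocks through the OpenAI-compatible client. The sketch below is illustrative only: the endpoint name, token, and image path are placeholders, and the exact request shape is documented in Query vision models.

```python
# Hedged sketch: send an image to a Databricks-hosted vision model through
# the OpenAI-compatible serving endpoint. All names are placeholders.
import base64
from openai import OpenAI

client = OpenAI(
    api_key="<databricks-token>",                                # placeholder
    base_url="https://<workspace-hostname>/serving-endpoints",
)

with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="<vision-model-endpoint>",  # hypothetical endpoint name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```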
Context-based ingress control (Beta)
October 9, 2025
Context-based ingress control is now in Beta. This enables account admins to set allow and deny rules that combine who is calling, from where they are calling, and what they can reach in Databricks. Context-based ingress control ensures that only trusted combinations of identity, request type, and network source can reach your workspace. A single policy can govern multiple workspaces, ensuring consistent enforcement across your organization.
See Context-based ingress control.
The billable usage table now records the performance mode of serverless jobs and pipelines
October 9, 2025
Billing logs now record the performance mode of serverless jobs and pipelines. The workload's performance mode is logged in the product_features.performance_target column and can contain the values PERFORMANCE_OPTIMIZED, STANDARD, or null.
For billing log reference, see Billable usage system table reference.
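For example, a query along these lines summarizes the last 30 days of serverless usage by performance mode. The column names follow the billable usage system table reference; treat the product filter values as assumptions to verify against your billing data.

```python
# Hedged sketch: DBUs by performance mode over the last 30 days.
df = spark.sql("""
    SELECT
      product_features.performance_target AS performance_mode,
      SUM(usage_quantity) AS total_dbus
    FROM system.billing.usage
    WHERE billing_origin_product IN ('JOBS', 'DLT')
      AND usage_date >= current_date() - INTERVAL 30 DAYS
    GROUP BY 1
""")
display(df)
```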
The Data Science Agent can now also use models served through Amazon Bedrock
October 8, 2025
The Databricks Assistant can now also use models served through Amazon Bedrock as part of the Data Science Agent when partner-powered AI features are enabled.
Databricks Runtime maintenance updates
October 7, 2025
New maintenance updates are available for supported Databricks Runtime versions. These updates include bug fixes, security patches, and performance improvements. For details, see Databricks Runtime maintenance updates.
Databricks Runtime 17.3 LTS and Databricks Runtime 17.3 LTS ML are in Beta
October 6, 2025
Databricks Runtime 17.3 LTS and Databricks Runtime 17.3 LTS ML are now in Beta, powered by Apache Spark 4.0.0. The release includes new configuration options, improved error handling, and enhanced Spark Connect support.
See Databricks Runtime 17.3 LTS (Beta) and Databricks Runtime 17.3 LTS for Machine Learning (Beta).
Mosaic AI Model Serving now supports OpenAI GPT-5 models (Public Preview)
October 6, 2025
Model Serving now supports the OpenAI GPT-5, GPT-5 mini, and GPT-5 nano models in Public Preview. Reach out to your account team to access these models during the preview.
These models are optimized for AI Functions, which means you can perform batch inference using these models with AI Functions such as ai_query().
For real-time inference workloads, see the Model Serving documentation for these models.
Partition metadata is generally available
October 6, 2025
You can now enable partition metadata logging, a partition discovery strategy for external tables registered to Unity Catalog. See Use partition metadata logging.
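Enabling it is a session-level setting that applies to tables created while it is active. A minimal sketch follows, assuming the Spark conf name documented in Use partition metadata logging; verify the name in your Databricks Runtime version before relying on it.

```python
# Hedged sketch: enable partition metadata logging for external tables
# created in this session. The conf name is an assumption taken from the
# partition metadata docs; confirm it for your runtime version.
spark.conf.set("spark.databricks.nonDelta.partitionLog.enabled", "true")

# Tables created while the conf is set use partition metadata logging;
# the setting does not retroactively change existing tables.
```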
Delta Sharing recipients can apply row filters and column masks (GA)
October 6, 2025
Delta Sharing recipients can now apply their own row filters and column masks on shared tables and shared foreign tables. However, Delta Sharing providers still cannot share data assets that have row-level security or column masks.
For details, see Apply row filters and column masks.
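On the recipient side, this uses the standard Unity Catalog row filter syntax. A minimal sketch, with hypothetical catalog, schema, and column names:

```python
# Hedged sketch (recipient side): create a row filter function and attach
# it to a table shared via Delta Sharing. All object names are placeholders.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.filters.us_only(region STRING)
    RETURN region = 'US'
""")

spark.sql("""
    ALTER TABLE shared_catalog.sales.orders
    SET ROW FILTER main.filters.us_only ON (region)
""")
```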
Certification status system tag is in Public Preview
October 6, 2025
You can now apply the system.certification_status governed tag to catalogs, schemas, tables, views, volumes, dashboards, registered models, and Genie spaces to indicate whether a data asset is certified or deprecated. This improves governance, discoverability, and trust in analytics and AI workloads. See Flag data as certified or deprecated.
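Applying the tag uses the standard SET TAGS syntax. A minimal sketch with a hypothetical table; check the governed tag documentation for the exact allowed values and casing.

```python
# Hedged sketch: mark a table as certified using the governed tag.
# The tag value is an assumption; confirm allowed values in the docs.
spark.sql("""
    ALTER TABLE main.sales.orders
    SET TAGS ('system.certification_status' = 'certified')
""")
```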
Prompt caching is now supported for Claude models
October 3, 2025
Prompt caching is now supported for Databricks-hosted Claude models. You can specify the cache_control parameter in your query requests to cache the following:
- Thinking message content in the messages.content array.
- Image content blocks in the messages.content array.
- Tool use, results, and definitions in the tools array.
See Foundation model REST API reference.
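As an illustrative sketch, the request below caches a large text block following Anthropic's cache_control content-block convention. The endpoint name and token are placeholders, and the exact request shape accepted by the endpoint is defined in the Foundation model REST API reference.

```python
# Hedged sketch: prompt caching with a Databricks-hosted Claude model.
# Follows Anthropic's cache_control convention; verify the request shape
# against the Foundation model REST API reference.
from openai import OpenAI

client = OpenAI(
    api_key="<databricks-token>",                                # placeholder
    base_url="https://<workspace-hostname>/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-claude-sonnet-4-5",  # assumed endpoint name
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                # Large, reusable context is the usual caching candidate.
                "text": "<long reference document>",
                "cache_control": {"type": "ephemeral"},
            },
            {"type": "text", "text": "Summarize the document above."},
        ],
    }],
)
print(response.choices[0].message.content)
```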
Anthropic Claude Sonnet 4.5 now available as a Databricks-hosted model
October 3, 2025
Mosaic AI Model Serving now supports Anthropic Claude Sonnet 4.5 as a Databricks-hosted model. You can access this model using Foundation Model APIs pay-per-token.
Notebook improvements
October 3, 2025
The following notebook improvements are now available:
- The cell execution minimap now appears in the right margin of notebooks. Use the minimap to get a visual overview of your notebook's run status and quickly navigate between cells. See Cell execution minimap.
- Use Databricks Assistant to help diagnose and fix environment errors, including library installation errors. See Debug environment errors.
- When reconnecting to serverless notebooks, sessions are automatically restored with the notebook's Python variables and Spark state. See Automated session restoration for serverless notebooks.
- PySpark authoring completion now supports agg, withColumns, withColumnsRenamed, and filter/where clauses. See Get inline code suggestions: Python and SQL examples.
- Databricks now supports importing and exporting IPYNB notebooks up to 100 MB. Revision snapshot autosaving, manual saving, and cloning are supported for all notebooks up to 100 MB. See Notebook sizing.
- When cloning and exporting notebooks, you can now choose whether to include cell outputs or not. See Manage notebook format.
Anthropic Claude Sonnet 4 is available for batch inference in US regions
October 3, 2025
Mosaic AI Model Serving now supports Anthropic Claude Sonnet 4 for batch inference workflows. You can now use databricks-claude-sonnet-4 in your ai_query requests to perform batch inference.
- See Use ai_query with foundation models for an example.
- See Supported foundation models on Mosaic AI Model Serving for region availability.
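A minimal sketch of such a batch inference call, assuming a hypothetical reviews table:

```python
# Hedged sketch: batch inference with Claude Sonnet 4 via ai_query().
# `main.reviews.customer_reviews` is a placeholder table.
spark.sql("""
    SELECT
      review_id,
      ai_query(
        'databricks-claude-sonnet-4',
        CONCAT('Summarize this review in one sentence: ', review_text)
      ) AS summary
    FROM main.reviews.customer_reviews
""").display()
```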
Convert to Unity Catalog managed table from external table
October 2, 2025
The ALTER TABLE ... SET MANAGED command is now generally available. This command seamlessly converts Unity Catalog external tables to managed tables, letting you take full advantage of managed table features such as enhanced governance, reliability, and performance. See Convert an external table to a managed Unity Catalog table.
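A minimal sketch, with a hypothetical table name:

```python
# Hedged sketch: convert a Unity Catalog external table in place.
# `main.sales.legacy_orders` is a placeholder external table.
spark.sql("ALTER TABLE main.sales.legacy_orders SET MANAGED")

# Verify the result: the Type field should now report MANAGED.
spark.sql("DESCRIBE TABLE EXTENDED main.sales.legacy_orders").display()
```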
Git email identity configuration for Git folders
October 1, 2025
You can now specify a Git provider email address, separate from your username, when creating Git credentials for Databricks Git folders. This email is used as the Git author and committer identity for all commits made through Git folders, ensuring proper attribution in your Git provider and better integration with your Git account.
The email you provide becomes the GIT_AUTHOR_EMAIL and GIT_COMMITTER_EMAIL for commits, allowing Git providers to properly associate commits with your user account and display your profile information. If no email is specified, Databricks uses your Git username as the email address (legacy behavior).
See Git commit identity and email configuration.
New permissions for the Databricks GitHub App
October 1, 2025
If you own a Databricks account with the Databricks GitHub app installed, you may receive an email titled "Databricks is requesting updated permissions" from GitHub.
This is a legitimate request from Databricks. It asks you to approve a new permission that allows Databricks to read your GitHub account email(s). Granting this permission will let Databricks retrieve and save your primary GitHub account email to your Linked Git credential in Databricks. In an upcoming feature, this will ensure that commits made from Databricks are properly linked to your GitHub identity.
If you don't accept the new permission, your Linked Git credential will still authenticate with GitHub. However, future commits from this credential will not be associated with your GitHub account identity.