
February 2026

These features and Databricks platform improvements were released in February 2026.

Note

Releases are staged. Your Databricks account might not be updated until a week or more after the initial release date.

Deploy Databricks apps from Git repositories (Beta)

February 6, 2026

You can now deploy Databricks apps directly from Git repositories without uploading files to the workspace. Configure a repository for your app and deploy from any branch, tag, or commit. See Deploy from a Git repository.

Query tags for SQL warehouses (Public Preview)

February 6, 2026

You can now apply custom key-value tags to SQL workloads on Databricks SQL warehouses for grouping, filtering, and cost attribution. Query tags appear in the system.query.history table and on the Query History page of the Databricks UI, allowing you to attribute warehouse costs by business context and identify sources of long-running queries. Tags can be set using session configuration parameters, the SET QUERY_TAGS SQL statement, or through connectors including dbt, Power BI, Tableau, Python, Node.js, Go, JDBC, and ODBC. See Query tags.
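
As an illustration of the session-level flow, a minimal sketch using the SET QUERY_TAGS statement named above (the tag string format, the sample table, and the query_tags column in system.query.history are assumptions; see the linked Query tags page for exact syntax):

    -- Tag every statement that runs in this session (key:value format is an assumption).
    SET QUERY_TAGS = 'team:finance,dashboard:monthly_close';

    -- Statements issued afterward on the warehouse carry the tags.
    SELECT order_date, SUM(amount) AS revenue
    FROM sales.orders          -- hypothetical example table
    GROUP BY order_date;

    -- Attribute warehouse usage by tag in query history
    -- (the query_tags column name is an assumption).
    SELECT statement_id, executed_by, total_duration_ms
    FROM system.query.history
    WHERE query_tags LIKE '%team:finance%';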

Google Ads connector

February 5, 2026

The managed Google Ads connector in Lakeflow Connect allows you to ingest data from Google Ads into Databricks. See Google Ads connector.

Applying filters, masks, tags, and comments to pipeline-created datasets is now GA

February 5, 2026

Using CREATE, ALTER, or the Lakeflow UI to modify ETL and ingestion pipelines (Lakeflow Spark Declarative Pipelines and Lakeflow Connect) is now GA. You can modify pipelines to apply row filters, column masks, table and column tags, column comments, and (for materialized views only) table comments.

See ALTER STREAMING TABLE and ALTER MATERIALIZED VIEW. For general information about using ALTER with Lakeflow Spark Declarative Pipelines, see Use ALTER statements with pipeline datasets.
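
As a rough sketch of the kinds of statements this enables (table, column, and function names are placeholders; see the linked ALTER pages for the exact supported clauses):

    -- Tag and comment a streaming table created by an ingestion pipeline.
    ALTER STREAMING TABLE main.bronze.events
      SET TAGS ('cost_center' = 'ingest', 'contains_pii' = 'true');

    ALTER STREAMING TABLE main.bronze.events
      ALTER COLUMN email COMMENT 'Customer email address';

    -- Apply a row filter and a column mask using existing governance UDFs (placeholders).
    ALTER STREAMING TABLE main.bronze.events
      SET ROW FILTER main.governance.region_filter ON (region);

    ALTER STREAMING TABLE main.bronze.events
      ALTER COLUMN email SET MASK main.governance.mask_email;

    -- Materialized views additionally support table comments.
    COMMENT ON TABLE main.gold.daily_revenue IS 'Daily revenue rollup maintained by the pipeline';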

Anthropic Claude Opus 4.6 now available as a Databricks-hosted model

February 5, 2026

Mosaic AI Model Serving now supports Anthropic Claude Opus 4.6 as a Databricks-hosted model.

To access this model, use:
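
The endpoint name isn't listed above. As a hedged sketch, assuming the name follows the databricks-claude-* convention used for other hosted Claude models, a call through the SQL ai_query function could look like this (the endpoint name and the example table are assumptions):

    -- 'databricks-claude-opus-4-6' is an assumed endpoint name; confirm it in the Serving UI.
    SELECT ai_query(
      'databricks-claude-opus-4-6',
      'Summarize the key risks in this incident report: ' || report_text
    ) AS summary
    FROM ops.incident_reports    -- hypothetical example table
    LIMIT 10;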

Default SQL warehouse settings (General Availability)

February 5, 2026

Default SQL warehouse settings are now generally available. Workspace administrators can set a default SQL warehouse that is automatically selected in SQL authoring surfaces, including the SQL editor, AI/BI dashboards, AI/BI Genie, Alerts, and Catalog Explorer. Individual users can also override the workspace default by setting their own user-level default warehouse. See Set a default SQL warehouse for the workspace and Set a user-level default warehouse.

View warehouse activity details (Beta)

February 5, 2026

You can now view detailed annotations on the Running clusters chart in the SQL warehouse monitoring UI to understand why warehouses remain active. The Activity details toggle displays color-coded bars that show query activity, fetching queries, open sessions, and idle states. Hover over bars to see metadata, or click on fetching activity to filter the query history table. See Monitor a SQL warehouse.

Connect Databricks Assistant to MCP servers

February 4, 2026

You can now connect Databricks Assistant in agent mode to external tools and data sources through the Model Context Protocol (MCP). The Assistant can use any MCP servers that have been added to your workspace and that you have permission to use.

See Connect Databricks Assistant to MCP servers.

Select tables and create pivot tables in Google Sheets

February 3, 2026

You can now directly select Databricks tables from the catalog explorer and import data as pivot tables in Google Sheets using the Databricks Connector. See Connect to Databricks from Google Sheets.

Regional model hosting for Genie in Japan

February 2, 2026

For workspaces in asia-northeast1 (Tokyo, Japan), Genie now uses models hosted in the same region. Users in this region no longer require cross-Geo processing to use Genie.

See Availability of Designated Services in each Geo for more information on which features require cross-geo processing.

Automatically provision users (JIT) is now GA

February 2, 2026

You can now enable just-in-time (JIT) provisioning to automatically create new user accounts during first-time authentication. When a user logs in to Databricks for the first time using single sign-on (SSO), Databricks checks if the user already has an account. If not, Databricks instantly provisions a new user account using details from the identity provider. See Automatically provision users (JIT).

New GCP regions support serverless compute

February 2, 2026

Serverless compute is now available in workspaces deployed in me-central2 and southamerica-east1. For a list of supported regions, see Serverless availability.

Tag Databricks Apps (Public Preview)

February 2, 2026

You can now apply tags to Databricks apps to organize and categorize them. See Apply tags to Databricks apps. Searching by tag is not supported for Databricks apps.

Zendesk Support connector (Beta)

February 2, 2026

The Zendesk Support connector allows you to ingest ticket data, help center content, and community forum data from Zendesk Support. See Zendesk Support connector overview.

Data Quality Monitoring Anomaly Detection (Public Preview)

February 2, 2026

Databricks Data Quality Monitoring Anomaly Detection is now in Public Preview. The feature is enabled at the schema level and learns from historical data patterns to detect data quality anomalies. The health of all monitored tables is consolidated into a single system table and a new UI. See Anomaly detection.