
February 2026

These features and Databricks platform improvements were released in February 2026.

Note

Releases are staged. Your Databricks account might not be updated until a week or more after the initial release date.

Databricks SQL pipelines support notifications and performance mode (Beta)

February 5, 2026

Materialized views and streaming tables defined and scheduled in Databricks SQL now support failure notifications and serverless performance mode configuration in Beta.

See Schedule refreshes in Databricks SQL.

JAR tasks on serverless compute are now in Public Preview

February 20, 2026

Running JAR jobs on serverless compute is now in Public Preview. See JAR task for jobs.

ADBC driver is now the default driver for new Power BI connections

February 20, 2026

New connections created in Power BI Desktop or Power BI Service now automatically use the Arrow Database Connectivity (ADBC) driver by default. Existing connections continue to use ODBC unless you manually update them to ADBC. You can still switch to ODBC drivers for new connections. See Configure ADBC or ODBC driver for Power BI.

Google Gemini 3.1 Pro Preview now available as a Databricks-hosted model

February 19, 2026

Mosaic AI Model Serving now supports Google Gemini 3.1 Pro Preview as a Databricks-hosted model.

You can access this model using Foundation Model APIs.
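As a sketch of what a request to a Databricks-hosted chat model looks like: the payload below follows the OpenAI-style chat format used by Foundation Model APIs, but the endpoint name is an assumption; check the Serving page of your workspace for the exact model identifier.

```python
import json

# Hypothetical endpoint name -- confirm the exact identifier in your
# workspace's Serving page.
ENDPOINT = "databricks-gemini-3-1-pro"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat payload for a serving endpoint."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize our Q4 sales notes.")
# POST this JSON to
#   https://<workspace-host>/serving-endpoints/<ENDPOINT>/invocations
# with an "Authorization: Bearer <token>" header.
print(json.dumps(payload, indent=2))
```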

Enhanced Security and Compliance add-on is now generally available

February 19, 2026

The Enhanced Security and Compliance add-on, including the compliance security profile, enhanced security monitoring, and automatic cluster update, is now generally available. See Configure enhanced security and compliance settings.

TikTok Ads connector (Beta)

February 18, 2026

Lakeflow Connect now supports a managed connector for ingestion from TikTok Ads. See TikTok Ads connector.

Discover page and domains (Beta)

February 18, 2026

The Discover page provides a centralized interface for searching, browsing, and previewing data assets governed by Unity Catalog. This Beta release introduces:

  • Domains: Business-aligned organization layer that groups data assets by functional area (for example, Marketing or Finance) to improve discovery and stewardship
  • Custom curation: Curators can customize and highlight specific assets in their organization or domain
  • AI-powered recommendations: Surface popular and valuable datasets to enable curation at scale
  • Unified discovery: Access tables, dashboards, Genie spaces, and more in one place

Domains are built on governed tags, allowing you to organize data assets in a way that's best suited for your business needs.

Scoped personal access tokens (Beta)

February 18, 2026

You can now limit the permissions of personal access tokens by selecting a token type and adding API scopes. This restricts each token to only the API operations you specify. See Authenticate with Databricks personal access tokens (legacy). This feature is in Beta.

Qwen3-Embedding-0.6B now available in Public Preview as a Databricks-hosted model

February 17, 2026

Mosaic AI Model Serving now supports Qwen3-Embedding-0.6B in Public Preview as a Databricks-hosted model. You can access this model using Foundation Model APIs pay-per-token.
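A common downstream use of an embedding model is similarity scoring between texts. The sketch below assumes the endpoint returns OpenAI-style vectors (`{"data": [{"embedding": [...]}, ...]}`); the two short vectors stand in for returned embeddings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-ins for two vectors returned by the embedding endpoint.
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.1, 0.4]
print(round(cosine_similarity(v1, v2), 3))
```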

Anthropic Claude Sonnet 4.6 now available as a Databricks-hosted model

February 17, 2026

Mosaic AI Model Serving now supports Anthropic Claude Sonnet 4.6 as a Databricks-hosted model. You can access this model using Foundation Model APIs pay-per-token.

Databricks Runtime 18.1 and Databricks Runtime 18.1 ML are in Beta

February 17, 2026

Databricks Runtime 18.1 and Databricks Runtime 18.1 ML are now in Beta, powered by Apache Spark 4.1.0.

See Databricks Runtime 18.1 (Beta) and Databricks Runtime 18.1 for Machine Learning (Beta).

AI Gateway (Beta)

February 12, 2026

AI Gateway (Beta) is the enterprise control plane for governing LLM endpoints and coding agents with enhanced features, including a rich UI, improved observability, and expanded API coverage.

See AI Gateway (Beta).

File events enabled by default on new external locations

February 12, 2026

File events are now enabled by default when you create new external locations. The Databricks file events service sets up cloud resources to detect file changes, enabling more efficient, event-driven storage jobs and ingestion pipelines.

When you create an external location using the UI, Databricks checks for the required permissions and guides you to fix any issues or proceed with a force create. This check does not run in the API, but ingestion pipelines automatically fall back to prevent breakage.

You can disable file events for an external location after creation if needed. For more information, see Set up file events for an external location.

HubSpot connector (Beta)

February 12, 2026

The managed HubSpot connector in Lakeflow Connect allows you to ingest data from HubSpot Marketing Hub into Databricks. See HubSpot connector.

Configure default Python package repositories for Lakeflow Spark Declarative Pipelines (Public Preview)

February 11, 2026

Workspace admins can now configure private or authenticated package repositories within workspaces as the default pip configuration for Lakeflow Spark Declarative Pipelines. This allows users in the workspace to install packages from internal Python repositories without explicitly defining index-url or extra-index-url values.
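To illustrate the difference this makes, here is a hypothetical pipeline dependency list before and after an admin configures a workspace default repository (the internal index URL and package name are made up for illustration):

```python
# Before: each pipeline environment had to spell out the private index
# alongside its packages (URL and package are hypothetical).
deps_before = [
    "--index-url=https://pypi.internal.example.com/simple",
    "internal-utils==1.4.0",
]

# After: with a workspace-level default repository configured, the
# pipeline lists only packages; pip resolves them against the default index.
deps_after = ["internal-utils==1.4.0"]

print(deps_after)
```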

See Configure default Python package repositories for more details.

New documentation for authoring and deploying agents on Databricks Apps

February 10, 2026

New documentation and project templates are available for building and deploying AI agents on Databricks Apps. Author agents with popular libraries like LangGraph, PyFunc, and OpenAI Agent SDK, then deploy them on Databricks Apps. See Author an AI agent and deploy it on Databricks Apps.

Serverless egress control is now generally available

February 9, 2026

You can now manage outbound network connections from serverless compute resources using network policies. Control egress with restricted access to specific destinations, FQDN filtering, and dry-run mode for testing. See Manage network policies for serverless egress control.
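As a rough sketch of what such a policy expresses: restricted access, an FQDN allowlist, and dry-run enforcement. The field names below are illustrative assumptions, not the exact schema; consult the network policies documentation before calling the API.

```python
import json

# Illustrative serverless egress policy. Field names are assumptions
# for the sake of the sketch -- check the API reference for the real schema.
policy = {
    "egress": {
        "network_access": {
            # Restrict outbound traffic to an explicit allowlist.
            "restriction_mode": "RESTRICTED_ACCESS",
            # FQDN filtering: only these destinations are reachable.
            "allowed_internet_destinations": [
                {
                    "destination": "api.example.com",
                    "internet_destination_type": "DNS_NAME",
                }
            ],
            # Dry-run mode logs would-be denials without blocking traffic.
            "policy_enforcement": {"enforcement_mode": "DRY_RUN"},
        }
    }
}
print(json.dumps(policy, indent=2))
```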

Deploy Databricks apps from Git repositories (Beta)

February 6, 2026

You can now deploy Databricks apps directly from Git repositories without uploading files to the workspace. Configure a repository for your app and deploy from any branch, tag, or commit. See Deploy from a Git repository.

Query tags for SQL warehouses (Public Preview)

February 6, 2026

You can now apply custom key-value tags to SQL workloads on Databricks SQL warehouses for grouping, filtering, and cost attribution. Query tags appear in the system.query.history table and on the Query History page of the Databricks UI, allowing you to attribute warehouse costs by business context and identify sources of long-running queries. Tags can be set using session configuration parameters, the SET QUERY_TAGS SQL statement, or through connectors including dbt, Power BI, Tableau, Python, Node.js, Go, JDBC, and ODBC. See Query tags.
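A minimal sketch of building a SET QUERY_TAGS statement from a tag map. The statement name comes from the release note above, but the comma-separated `key:value` value format is an assumption for illustration; check the Query tags documentation for the exact syntax.

```python
def query_tags_statement(tags: dict[str, str]) -> str:
    """Format a SET QUERY_TAGS statement from key-value tags.

    The 'key:value,key:value' format is an illustrative assumption.
    """
    body = ",".join(f"{k}:{v}" for k, v in tags.items())
    return f"SET QUERY_TAGS = '{body}'"

# Run the statement in the session before the queries you want tagged.
stmt = query_tags_statement({"team": "finance", "workload": "nightly_etl"})
print(stmt)
```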

Google Ads connector (Beta)

February 5, 2026

The managed Google Ads connector in Lakeflow Connect allows you to ingest data from Google Ads into Databricks. See Google Ads connector.

Applying filters, masks, tags, and comments to pipeline-created datasets is now GA

February 5, 2026

Applying row filters, column masks, table and column tags, column comments, and (for materialized views only) table comments to datasets created by ETL and ingestion pipelines (Lakeflow Spark Declarative Pipelines and Lakeflow Connect) is now GA. You can apply these modifications using CREATE or ALTER statements or the Lakeflow UI.

See ALTER STREAMING TABLE and ALTER MATERIALIZED VIEW. For general information about using ALTER with Lakeflow Spark Declarative Pipelines, see Use ALTER statements with pipeline datasets.
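The statements below sketch the kinds of modifications this enables; the table, column, and filter-function names are hypothetical, and the exact clause syntax is covered in the linked ALTER references.

```python
# Illustrative ALTER statements for pipeline-created datasets.
# Names (raw_orders, daily_sales, region_filter) are hypothetical.
statements = [
    # Tag a streaming table created by an ingestion pipeline.
    "ALTER STREAMING TABLE raw_orders SET TAGS ('source' = 'lakeflow')",
    # Comment a column on a materialized view.
    "ALTER MATERIALIZED VIEW daily_sales "
    "ALTER COLUMN region COMMENT 'Sales region code'",
    # Attach a row filter function to restrict visible rows.
    "ALTER STREAMING TABLE raw_orders SET ROW FILTER region_filter ON (region)",
]
for s in statements:
    print(s)
```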

Anthropic Claude Opus 4.6 now available as a Databricks-hosted model

February 5, 2026

Mosaic AI Model Serving now supports Anthropic Claude Opus 4.6 as a Databricks-hosted model.

You can access this model using Foundation Model APIs.

Default SQL warehouse settings (General Availability)

February 5, 2026

Default SQL warehouse settings are now generally available. Workspace administrators can set a default SQL warehouse that is automatically selected in SQL authoring surfaces, including the SQL editor, AI/BI dashboards, AI/BI Genie, Alerts, and Catalog Explorer. Individual users can also override the workspace default by setting their own user-level default warehouse. See Set a default SQL warehouse for the workspace and Set a user-level default warehouse.

View warehouse activity details (Beta)

February 5, 2026

You can now view detailed annotations on the Running clusters chart in the SQL warehouse monitoring UI to understand why warehouses remain active. The Activity details toggle displays color-coded bars that show query activity, fetching queries, open sessions, and idle states. Hover over bars to see metadata, or click on fetching activity to filter the query history table. See Monitor a SQL warehouse.

Connect Databricks Assistant to MCP servers

February 4, 2026

You can now connect Databricks Assistant in agent mode to external tools and data sources through the Model Context Protocol (MCP). The Assistant can use any MCP servers that have been added to your workspace and that you have permission to use.

See Connect Databricks Assistant to MCP servers.

Select tables and create pivot tables in Google Sheets

February 3, 2026

You can now directly select Databricks tables from the Catalog Explorer and import data as pivot tables in Google Sheets using the Databricks Connector. See Connect to Databricks from Google Sheets.

Regional model hosting for Genie in Japan

February 2, 2026

For workspaces in asia-northeast1 (Tokyo, Japan), Genie now uses models hosted in the same region. Users in this region no longer require cross-geo processing to use Genie.

See Availability of Designated Services in each Geo for more information on which features require cross-geo processing.

Automatically provision users (JIT) is now generally available

February 2, 2026

You can now enable just-in-time (JIT) provisioning to automatically create new user accounts during first-time authentication. When a user logs in to Databricks for the first time using single sign-on (SSO), Databricks checks if the user already has an account. If not, Databricks instantly provisions a new user account using details from the identity provider. See Automatically provision users (JIT).

New GCP regions support serverless compute

February 2, 2026

Serverless compute is now available in workspaces deployed in me-central2 and southamerica-east1. For a list of regional availability, see Serverless availability.

Tag Databricks Apps (Public Preview)

February 2, 2026

You can now apply tags to Databricks apps to organize and categorize them. See Apply tags to Databricks apps. Searching for apps by tag is not supported.

Zendesk Support connector (Beta)

February 2, 2026

The Zendesk Support connector allows you to ingest ticket data, help center content, and community forum data from Zendesk Support. See Zendesk Support connector overview.

Data Quality Monitoring Anomaly Detection (Public Preview)

February 2, 2026

Databricks Data Quality Monitoring Anomaly Detection is now in Public Preview. The feature is enabled at the schema level and learns from historical data patterns to detect data quality anomalies. The health of all monitored tables is consolidated into a single system table and a new UI. See Anomaly detection.