Serverless compute

With the serverless compute version of the Databricks platform architecture, the compute layer exists in your Databricks account rather than your AWS account.

Databricks SQL Serverless

Databricks SQL Serverless brings serverless compute to Databricks SQL. Workspace admins can create serverless SQL warehouses that enable instant compute and are managed by Databricks. Serverless SQL warehouses use compute clusters in your Databricks account. Use them with Databricks SQL queries just as you would the original customer-hosted SQL warehouses, which are now called classic SQL warehouses.

New SQL warehouses are serverless by default when you create them from the UI. Warehouses created with the SQL Warehouses API are not serverless by default; the API requires you to explicitly specify a serverless warehouse. You can also create new pro or classic SQL warehouses using either method. For more information about warehouse type defaults, see What are the warehouse type defaults?.
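For illustration, a minimal sketch of creating a serverless warehouse through the SQL Warehouses API follows. It assumes DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, and the field names (enable_serverless_compute, warehouse_type) should be verified against the current SQL Warehouses API reference:

    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]    # for example, https://<workspace>.cloud.databricks.com
    token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

    # Create a serverless SQL warehouse. Unlike the UI, the API does not
    # default to serverless, so the flag must be set explicitly.
    response = requests.post(
        f"{host}/api/2.0/sql/warehouses",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "name": "analytics-serverless",  # hypothetical warehouse name
            "cluster_size": "Small",
            "warehouse_type": "PRO",         # serverless warehouses use the PRO type
            "enable_serverless_compute": True,
        },
    )
    response.raise_for_status()
    print("Created warehouse:", response.json()["id"])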

You can upgrade a pro or classic SQL warehouse to a serverless SQL warehouse or a classic SQL warehouse to a pro SQL warehouse. You can also downgrade from serverless to pro or classic.
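As a hedged sketch, the same API can change an existing warehouse's type. The example below assumes the /edit endpoint accepts a partial payload; confirm this behavior, and the exact field names, in the SQL Warehouses API reference:

    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]
    token = os.environ["DATABRICKS_TOKEN"]
    warehouse_id = "1234567890abcdef"  # hypothetical ID of an existing pro warehouse

    # Upgrade the warehouse to serverless; setting the flag to False instead
    # would downgrade it back to pro.
    response = requests.post(
        f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
        headers={"Authorization": f"Bearer {token}"},
        json={"enable_serverless_compute": True},
    )
    response.raise_for_status()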

Using serverless SQL warehouses only affects your use of Databricks SQL. It does not affect how Databricks Runtime clusters work with notebooks and jobs in the Data Science & Engineering or Databricks Machine Learning workspace environments. Databricks Runtime clusters continue to run in the classic data plane in your AWS account. See Compare serverless compute to other Databricks architectures.

If your account must accept updated terms of use before using serverless SQL warehouses, workspace admins are prompted to accept them in the Databricks SQL UI.

If your workspace has an AWS instance profile, you might need to update the trust relationship to support serverless SQL warehouses, depending on how and when it was created.
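For example, the update typically adds a statement to the role's trust policy that lets a Databricks-owned principal assume the role. The sketch below uses boto3; the principal ARN and external ID are placeholders that you must take from the Databricks documentation for your workspace:

    import json
    import boto3

    # Placeholders: substitute the Databricks-owned role ARN and the external ID
    # documented for serverless SQL warehouses in your region and workspace.
    SERVERLESS_PRINCIPAL_ARN = "arn:aws:iam::<DATABRICKS_ACCOUNT_ID>:role/<SERVERLESS_ROLE>"
    EXTERNAL_ID = "<DOCUMENTED_EXTERNAL_ID_FOR_YOUR_WORKSPACE>"

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Existing statement: lets EC2 assume the role for classic compute.
                "Effect": "Allow",
                "Principal": {"Service": "ec2.amazonaws.com"},
                "Action": "sts:AssumeRole",
            },
            {
                # New statement: lets the serverless data plane assume the role.
                "Effect": "Allow",
                "Principal": {"AWS": SERVERLESS_PRINCIPAL_ARN},
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
            },
        ],
    }

    boto3.client("iam").update_assume_role_policy(
        RoleName="my-instance-profile-role",  # hypothetical role backing the instance profile
        PolicyDocument=json.dumps(trust_policy),
    )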

For serverless SQL warehouses, support for the compliance security profile varies by region. See Serverless SQL warehouses support the compliance security profile in some regions.

For regional support, see Databricks clouds and regions.

Model Serving

Databricks Model Serving deploys your MLflow machine learning (ML) models and exposes them as REST API endpoints that run in your Databricks account. The serverless compute resources run as Databricks AWS resources in what is known as the serverless data plane.

In contrast, the legacy model serving architecture is a single-node cluster that runs in your AWS account within the classic data plane. Model Serving provides:

  • Easy configuration and compute resource management: Databricks automatically prepares a production-ready environment for your model and makes it easy to switch its compute configuration.

  • High availability and scalability: Serverless model endpoints autoscale, which means that the number of server replicas automatically adjusts based on the volume of scoring requests.

  • Dashboards: Use the built-in serverless model endpoint dashboard to monitor the health of your model endpoints using metrics such as queries-per-second (QPS), latency, and error rate.
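Once deployed, clients score a model by sending REST requests to its endpoint. A minimal sketch follows; the endpoint name is hypothetical, and the request schema (dataframe_records here) depends on the Model Serving version, so check the Model Serving documentation:

    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]
    token = os.environ["DATABRICKS_TOKEN"]
    endpoint_name = "churn-classifier"  # hypothetical serving endpoint name

    # Send two rows of features to the served model for scoring.
    response = requests.post(
        f"{host}/serving-endpoints/{endpoint_name}/invocations",
        headers={"Authorization": f"Bearer {token}"},
        json={"dataframe_records": [{"tenure": 12.0}, {"tenure": 48.0}]},
    )
    response.raise_for_status()
    print(response.json())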

For regional support, see Databricks clouds and regions.

Serverless quotas

Serverless quotas are a safety measure for serverless compute: they restrict how many serverless compute resources a customer can have at any given time. The quota is enforced at the regional level for all workspaces in your account, and quotas currently apply only to serverless SQL warehouses. See Serverless quotas.

Compare serverless compute to other Databricks architectures

Databricks operates out of a control plane and a data plane:

  • The control plane includes the backend services that Databricks manages in its own AWS account. Databricks SQL queries, notebook commands, and many other workspace configurations are stored in the control plane and encrypted at rest.

  • The data plane is where data is processed by clusters of compute resources.

There are important differences between the classic data plane (the original Databricks platform architecture) and the serverless data plane:

  • For a classic data plane, Databricks compute resources run in your AWS account. Clusters perform distributed data analysis using queries (in Databricks SQL) or notebooks (in the Data Science & Engineering or Databricks Machine Learning environments):

    • New clusters are created within each workspace’s virtual network in the customer’s AWS account.

    • A classic data plane has natural isolation because it runs in each customer’s own AWS account.

  • For a serverless data plane, Databricks compute resources run in a compute layer within your Databricks account:

    • The serverless data plane is used for serverless SQL warehouses and Model Serving. Enabling serverless compute does not change how Databricks Runtime clusters work in the Data Science & Engineering or Databricks Machine Learning environments.

    • To protect customer data within the serverless data plane, serverless compute runs within a network boundary for the workspace, with various layers of security to isolate different Databricks customer workspaces and additional network controls between clusters of the same customer.

Databricks creates a serverless data plane in the same AWS region as your workspace’s classic data plane.

Worker nodes are private, which means they do not have public IP addresses.

For communication between the Databricks control plane and the serverless data plane:

  • For Databricks SQL Serverless, the communication uses private connectivity.

  • For Model Serving, the communication uses mTLS, with connections initiated from the control plane and access limited to control plane IP addresses.

When reading from or writing to AWS S3 buckets in the same region as your workspace, serverless SQL warehouses access S3 directly through AWS gateway endpoints. This applies both when a serverless SQL warehouse reads and writes your workspace’s root S3 bucket in your AWS account and when it accesses other S3 data sources in the same region.

The following diagrams show important differences between the serverless data plane and the classic data plane for both serverless features.

[Diagram: Compare classic and serverless data plane for Databricks SQL]
[Diagram: Compare classic and serverless data plane for Model Serving]

For more information about secure cluster connectivity, which is mentioned in the diagram, see Secure cluster connectivity.

The table below summarizes differences between serverless compute and the classic data plane architecture of Databricks, focusing on product security. It is not a complete explanation of those security features or a detailed comparison. For more details about serverless compute security, or if you have questions about items in this table, contact your Databricks representative.

In the comparison below, “Serverless” refers to the serverless data plane (AWS only) and “Classic” refers to the classic data plane (AWS and Azure).

Location of control plane resources
  • Serverless: Databricks cloud account
  • Classic: Databricks cloud account

Location of data plane compute resources
  • Serverless: Serverless data plane (VPC in the Databricks AWS account)
  • Classic: Classic data plane (VPC in the customer’s cloud provider account)

Data plane compute resources
  • Serverless: Databricks-managed Kubernetes (EKS) clusters
  • Classic: Databricks-managed standalone VMs

Customer access to data plane
  • Serverless: Access through the Databricks control plane
  • Classic (AWS): Direct access in the customer’s AWS account, plus indirect access through the Databricks control plane
  • Classic (Azure): Direct read-only access to clusters, even with VNet injection (customer-managed VNet), plus indirect access through the Databricks control plane

Who pays for unassigned VMs for Databricks SQL?
  • Serverless: Databricks
  • Classic: Not applicable. For pro and classic SQL warehouses, there is no concept of unassigned VMs; in Databricks SQL, there is no direct equivalent to warm instance pools for notebooks and jobs.

Who pays for VMs after starting a warehouse or running a query in Databricks SQL?
  • Serverless: The customer pays Databricks based on DBUs until Auto Stop stops the SQL warehouse.
  • Classic: The customer pays AWS for the VMs and pays Databricks based on DBUs.

Virtual private cloud (VPC) for the data plane
  • Serverless: VPC in the Databricks AWS account, with network boundaries between workspaces and between clusters
  • Classic (AWS): Exclusive; the VPC is in the customer’s account
  • Classic (Azure): Exclusive; the VNet is in the customer’s account

OS image
  • Serverless: Databricks-modified, cloud-managed Amazon Linux 2
  • Classic: Databricks-managed Ubuntu or CentOS

Technology that manages default egress from the VPC
  • Serverless: Databricks-created AWS internet gateway
  • Classic: Default internet gateway or load balancer provided by the cloud

Customize VPC and firewall settings
  • Serverless: No
  • Classic: Yes

Customize CIDR ranges
  • Serverless: No
  • Classic: Yes

Public IPs
  • Serverless: No
  • Classic: Depends on secure cluster connectivity: when enabled (the default), no public IPs for VMs; when disabled, one public IP for each VM.

Container-level network isolation for Databricks Runtime clusters
  • Serverless: Kubernetes network policy
  • Classic: Databricks-managed iptables rules

VM-level network isolation for Databricks Runtime clusters
  • Serverless: Security group isolation
  • Classic: Security group isolation plus isolation of the VPC (AWS) or VNet (Azure)

VM isolation
  • Serverless: VMs in a cluster can communicate among themselves, but no ingress traffic is allowed from other clusters.
  • Classic: VMs in a cluster can communicate among themselves, but no ingress traffic is allowed from other clusters.

Communication between control plane and data plane
  • Serverless: For Databricks SQL Serverless, communication uses private connectivity. For Model Serving, communication uses direct mTLS, with connections initiated from the control plane and access limited to control plane IP addresses.
  • Classic: Depends on secure cluster connectivity: when enabled (the default for AWS E2 and Azure), individual VMs connect to the SCC relay in the control plane during cluster creation; when disabled, the control plane connects to individual VMs using public IPs.

Credential for initial deployment
  • Serverless: Databricks internal IAM roles
  • Classic (AWS): IAM roles provided by customers
  • Classic (Azure): None required

Credential for regular data plane operations
  • Serverless: Databricks invokes sts:AssumeRole on a customer-provided IAM role.
  • Classic (AWS): VMs run with instance profiles that are provided by customers (sts:PassRole).
  • Classic (Azure): First-party application token.

Location of the storage for DBFS root and workspace system data
  • Serverless: The customer creates the S3 bucket in the customer account as part of workspace creation.
  • Classic (AWS): The customer creates the S3 bucket in the customer account as part of workspace creation.
  • Classic (Azure): Databricks creates storage in the customer account as part of workspace creation.