Compute

Databricks compute refers to the selection of computing resources available on Databricks to run your data engineering, data science, and analytics workloads. Choose from serverless compute for on-demand scaling, classic compute for customizable resources, or SQL warehouses for optimized analytics.

You can view and manage compute resources in the Compute section of your workspace.

Serverless compute

On-demand, automatically managed compute that scales based on your workload requirements.

Serverless compute for notebooks: Interactive Python and SQL execution in notebooks with automatic scaling and no infrastructure management.

Serverless compute for jobs: Run Lakeflow Jobs without configuring or deploying infrastructure. Compute resources are automatically provisioned and scaled.

Serverless pipelines: Run Lakeflow Declarative Pipelines without configuring or deploying infrastructure. Compute resources are automatically provisioned and scaled.

Serverless compute limitations: Limitations and requirements for serverless workloads and supported configurations.

Classic compute

Provisioned compute resources that you create, configure, and manage for your workloads.

Classic compute overview: Overview of who can access and create classic compute resources.

Configure compute: Create and configure compute for interactive data analysis in notebooks or for automated workflows with Lakeflow Jobs (see the sketch after this list).

Standard compute: Multi-user compute with shared resources for cost-effective collaboration. Lakeguard provides secure user isolation.

Dedicated compute: A compute resource assigned to a single user or group.

Instance pools: Pre-configured instances that reduce compute startup time and provide cost savings for frequent workloads.
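
As a rough illustration of configuring classic compute programmatically, the following is a minimal sketch using the Databricks SDK for Python (databricks-sdk), assuming authentication is already set up. The compute name, runtime version, node type, and sizing values are placeholders, not recommendations.

```python
from databricks.sdk import WorkspaceClient

# Uses your local Databricks authentication configuration (for example,
# environment variables or ~/.databrickscfg).
w = WorkspaceClient()

# Create a small classic compute resource and wait until it is running.
# The runtime version and node type below are placeholders; list valid
# values for your workspace with w.clusters.spark_versions() and
# w.clusters.list_node_types().
cluster = w.clusters.create_and_wait(
    cluster_name="example-classic-compute",  # placeholder name
    spark_version="15.4.x-scala2.12",        # placeholder runtime version
    node_type_id="i3.xlarge",                # placeholder, cloud-specific node type
    num_workers=2,
    autotermination_minutes=30,              # terminate after 30 idle minutes
)

print(f"Created {cluster.cluster_id} in state {cluster.state}")
```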

SQL warehouses

Optimized compute resources for specific use cases and advanced functionality. SQL warehouses can be configured as serverless or classic.

SQL warehouses: Optimized compute for SQL queries, analytics, and business intelligence workloads, with serverless or classic options.

SQL warehouse types: The differences between serverless and classic SQL warehouse options, to help you choose the right type for your workloads.
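
One way to see which type the warehouses in a workspace use is to list them with the Databricks SDK for Python, as in the sketch below; exact field names can vary by SDK version, so treat it as illustrative rather than definitive.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# List SQL warehouses and report whether each has serverless compute enabled.
for wh in w.warehouses.list():
    kind = "serverless" if wh.enable_serverless_compute else "classic"
    print(f"{wh.name}: {kind} ({wh.state})")
```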

Additional topics

What is Photon? A high-performance query engine that accelerates SQL workloads and provides faster data processing.

What is Lakeguard? A security framework that provides data governance and access control for compute resources.

For information about working with compute using the command line or APIs, see What is the Databricks CLI? and the Databricks REST API reference.
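
For example, here is a minimal sketch that lists classic compute resources using the Databricks SDK for Python, which wraps the REST API; the CLI command `databricks clusters list` covers the same ground. It assumes the SDK is installed and authentication is configured.

```python
from databricks.sdk import WorkspaceClient

# The SDK wraps the Databricks REST API; the CLI exposes the same
# operations (for example, `databricks clusters list`).
w = WorkspaceClient()

for c in w.clusters.list():
    print(f"{c.cluster_name}: {c.state}")
```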