Get started with Terraform for Lakebase

This guide helps you get started with Terraform to manage Lakebase resources using the Databricks Terraform provider. You'll create a project, add a development branch and endpoint, and then delete them when finished. This is a typical workflow for managing development and testing environments.

For the complete resource reference and all available configuration options, see the Databricks provider documentation on the Terraform Registry.

Prerequisites

Before you begin, you need:

  • A Databricks workspace
  • A service principal with OAuth credentials (a client ID and secret) and permission to manage Lakebase resources
  • Terraform version 1.0 or later installed

Lakebase Autoscaling Terraform semantics

Lakebase Autoscaling resources follow declarative spec/status semantics in Terraform. The spec field defines your desired state, while the status field reflects the current state reported by the service.
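
For example, using the project resource from this quickstart, spec carries what you declare and status carries what you read back (a minimal sketch; the resource name example is illustrative):

Hcl
resource "databricks_postgres_project" "example" {
  project_id = "example"
  spec = {
    # Desired state: the configuration you declare
    pg_version   = 17
    display_name = "Example"
  }
}

output "observed_pg_version" {
  # Current state: reported by the service after provisioning
  value = try(databricks_postgres_project.example.status.pg_version, null)
}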

info

Important: Drift detection and changes outside of Terraform

Changes made to Lakebase resources outside of Terraform (using the UI, CLI, or API) are not detected by Terraform's standard drift detection.

For complete details on how spec/status fields work, drift detection behavior, and state management requirements, see the databricks_postgres_project resource documentation.

Resource hierarchy

Understanding the Lakebase resource hierarchy helps you manage dependencies in Terraform. Resources have parent-child relationships: you create parent resources before children, and delete children before parents.

Project
└── Branches (main, development, staging, etc.)
    ├── Endpoints (compute for executing queries)
    ├── Roles (Postgres roles)
    └── Databases (Postgres databases)

In this quickstart, you follow this hierarchy by creating a project first, then a development branch, then an endpoint for your development branch. Branches allow you to create isolated development and testing environments and test applications against realistic data sets.

Quickstart: Manage a Lakebase project with Terraform

Follow these steps to create a complete working project with a development branch and compute endpoint:

1. Set up authentication

Configure the Databricks provider to authenticate using the service principal you configured in the prerequisites. Lakebase resources require OAuth authentication, so you set environment variables for your service principal's OAuth credentials:

Bash
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_CLIENT_ID="your-service-principal-client-id"
export DATABRICKS_CLIENT_SECRET="your-service-principal-secret"

Then configure your provider to use these environment variables:

Terraform
terraform {
  required_version = ">= 1.0"

  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "~> 1.0"
    }
  }
}

provider "databricks" {
  # Automatically uses DATABRICKS_HOST, DATABRICKS_CLIENT_ID,
  # and DATABRICKS_CLIENT_SECRET from environment variables
}

For more authentication options and details about OAuth configuration, see Authorize service principal access to Databricks with OAuth and Databricks Terraform provider.
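
If you prefer explicit configuration over environment variables, the provider accepts the same OAuth settings as arguments. A sketch, assuming you pass the secret through a sensitive Terraform variable instead of hard-coding it:

Hcl
variable "databricks_client_secret" {
  type      = string
  sensitive = true
}

provider "databricks" {
  host          = "https://your-workspace.cloud.databricks.com"
  client_id     = "your-service-principal-client-id"
  client_secret = var.databricks_client_secret
}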

2. Create a project

A project is the top-level resource that contains branches, endpoints, databases, and roles.

note

When you create a project, Databricks automatically provisions a default production branch with a read-write compute endpoint. Both the branch and endpoint are created with auto-generated IDs.

Create a basic project:

Hcl
resource "databricks_postgres_project" "app" {
project_id = "my-app"
spec = {
pg_version = 17
display_name = "My Application"
}
}

Run these commands to initialize the working directory (first run only), format your configuration, and create the project:

Bash
terraform init
terraform fmt
terraform apply

3. Get a project

Get information about the project you just created using a data source:

Hcl
data "databricks_postgres_project" "this" {
name = databricks_postgres_project.app.name
}

output "project_name" {
value = data.databricks_postgres_project.this.name
}

output "project_pg_version" {
value = try(data.databricks_postgres_project.this.status.pg_version, null)
}

output "project_display_name" {
value = try(data.databricks_postgres_project.this.status.display_name, null)
}
tip

Data sources return values in the status field. Use try() to safely access fields that might not be available in all provider versions.

Run these commands to apply the configuration and view the project details:

Bash
terraform apply
terraform output

4. Create a branch

Branches provide isolated database environments within a project.

note

A default production branch is created automatically with your project and includes a read-write endpoint. When you create additional branches for development, staging, or other environments, they do not include an endpoint automatically. You must create endpoints, as shown in step 5.

In this example, you create a development branch:

Hcl
resource "databricks_postgres_branch" "dev" {
branch_id = "dev"
parent = databricks_postgres_project.app.name
spec = {
no_expiry = true
}
}

output "dev_branch_name" {
value = databricks_postgres_branch.dev.name
}

Run these commands to create the branch and view its name:

Bash
terraform apply
terraform output dev_branch_name

5. Create an endpoint

Endpoints provide compute resources for executing queries against a branch.

note

The default production branch created with your project already includes a read-write endpoint. This section shows how to create an endpoint for the development branch you created in the previous step.

Create a read-write endpoint for the dev branch:

Hcl
resource "databricks_postgres_endpoint" "dev_primary" {
endpoint_id = "primary"
parent = databricks_postgres_branch.dev.name
spec = {
endpoint_type = "ENDPOINT_TYPE_READ_WRITE"
}
}

output "dev_endpoint_name" {
value = databricks_postgres_endpoint.dev_primary.name
}

Run these commands to create the endpoint and view its name:

Bash
terraform apply
terraform output dev_endpoint_name

6. List endpoints

List the endpoints in your development branch to view details about the read-write endpoint you created:

Hcl
data "databricks_postgres_endpoints" "dev" {
parent = databricks_postgres_branch.dev.name
}

output "dev_endpoint_names" {
value = [for e in data.databricks_postgres_endpoints.dev.endpoints : e.name]
}

output "dev_endpoint_types" {
value = [
for e in data.databricks_postgres_endpoints.dev.endpoints :
try(e.status.endpoint_type, null)
]
}
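
If a map is easier to scan than two parallel lists, you can combine the same fields into a single name-to-type map (a sketch using only the fields referenced above):

Hcl
output "dev_endpoints_by_type" {
  value = {
    for e in data.databricks_postgres_endpoints.dev.endpoints :
    e.name => try(e.status.endpoint_type, null)
  }
}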

Run these commands to apply the configuration and view the endpoint details:

Bash
terraform apply
terraform output dev_endpoint_names
terraform output dev_endpoint_types
tip

When you run terraform apply and only outputs change (no infrastructure changes), Terraform shows "Changes to Outputs" and updates the state without modifying resources.

7. List branches

List all branches in your project. This returns two branches: the production branch that was created automatically with your project, and the development branch you created in a preceding step:

Hcl
data "databricks_postgres_branches" "all" {
parent = databricks_postgres_project.app.name
}

output "branch_names" {
value = [for b in data.databricks_postgres_branches.all.branches : b.name]
}
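
As a quick sanity check, you can also count the branches the data source returns; with the default production branch plus your development branch, this should be 2:

Hcl
output "branch_count" {
  value = length(data.databricks_postgres_branches.all.branches)
}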

Run these commands to apply the configuration and view the branch names:

Bash
terraform apply
terraform output branch_names

8. Delete a branch

Now delete the development branch you created earlier. This is a typical workflow: create a branch for development or testing, and delete it when you're finished.

When deleting a branch, first destroy any associated endpoints, and then destroy the branch, following the children-before-parents order described in the resource hierarchy.
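
Because the endpoint's parent attribute references the branch, Terraform already tracks this dependency, and a targeted destroy of the branch includes resources that depend on it. The two steps below could therefore be combined into one command; this quickstart runs them separately so each change is easy to review:

Bash
# Plans the destroy of the branch plus the endpoint that depends on it
terraform destroy -target=databricks_postgres_branch.dev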

8.1 Destroy the endpoint

Destroy the endpoint for the development branch:

Bash
terraform destroy -target=databricks_postgres_endpoint.dev_primary

8.2 Destroy the branch

Destroy the development branch:

Bash
terraform destroy -target=databricks_postgres_branch.dev

8.3 Remove from configuration

After the targeted destroy operations complete, remove or comment out the resource blocks from your configuration files to prevent Terraform from recreating them:

  • Remove databricks_postgres_branch.dev and its outputs
  • Remove databricks_postgres_endpoint.dev_primary and its outputs
  • Update any data sources that reference the deleted branch (e.g., list_endpoints.tf)

Then reconcile the state:

Bash
terraform apply
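
To confirm that the branch and endpoint are no longer tracked, list what remains in state:

Bash
terraform state list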
tip

Alternative: Remove all at once

You can also remove the resource blocks from your configuration first, then run terraform apply. Terraform will plan to destroy the resources. This approach shows you the full destruction plan before executing.
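
For example, after removing or commenting out the two resource blocks and their outputs:

Bash
terraform plan  # review the planned destroy actions
terraform apply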

Next steps