
Lakebase Autoscaling API guide

info

Lakebase Autoscaling is available in the following regions: us-east-1, us-east-2, eu-central-1, eu-west-1, eu-west-2, ap-south-1, ap-southeast-1, ap-southeast-2.

Lakebase Autoscaling is the latest version of Lakebase with autoscaling compute, scale-to-zero, branching, and instant restore. For a feature comparison with Lakebase Provisioned, see choosing between versions.

This page provides an overview of the Lakebase Autoscaling API, including authentication, available endpoints, and common patterns for working with the REST API, Databricks CLI, Databricks SDKs (Python, Java, Go), and Terraform.

For the complete API reference, see the Postgres API documentation.

important

The Lakebase Postgres API is in Beta. API endpoints, parameters, and behaviors are subject to change.

Authentication

The Lakebase Autoscaling API uses workspace-level OAuth authentication for managing project infrastructure (creating projects, configuring settings, etc.).

note

Two types of connectivity: This API is for platform management (creating projects, branches, and computes). For database access (connecting to a database to query data), use one of the following:

  • SQL clients (psql, pgAdmin, DBeaver): Use Lakebase OAuth tokens or Postgres passwords. See Authentication.
  • Data API (RESTful HTTP): Use Lakebase OAuth tokens. See Data API.
  • Programming language drivers (psycopg, SQLAlchemy, JDBC): Use Lakebase OAuth tokens or Postgres passwords. See Quickstart.

For a complete explanation of these two authentication layers, see Authentication architecture.

Set up authentication

Authenticate using the Databricks CLI:

Bash
databricks auth login --host https://your-workspace.cloud.databricks.com

Follow the browser prompts to log in. The CLI caches your OAuth token at ~/.databricks/token-cache.json.

Then choose your access method (REST API, CLI, SDK, or Terraform).

The Databricks SDK uses unified authentication and automatically handles OAuth tokens:

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

For more details, see Authorize user access to Databricks with OAuth.

Available endpoints (Beta)

All endpoints use the base path /api/2.0/postgres/.

Projects

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create project | POST | /projects | Create a project |
| Update project | PATCH | /projects/{project_id} | General settings |
| Delete project | DELETE | /projects/{project_id} | Delete a project |
| Get project | GET | /projects/{project_id} | Get project details |
| List projects | GET | /projects | List projects |
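
These endpoints can be called directly through the Python SDK's generic REST client when a typed helper isn't available. The sketch below is illustrative only: it assumes the api_client.do helper and an example project ID of my-app, and the response payload shape is defined by the API reference, not this guide.

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# GET /projects: list all projects in the workspace.
projects = w.api_client.do("GET", "/api/2.0/postgres/projects")
print(projects)

# GET /projects/{project_id}: fetch one project by its resource ID
# ("my-app" is an example value, not a real project).
project = w.api_client.do("GET", "/api/2.0/postgres/projects/my-app")
print(project)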

Branches

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create branch | POST | /projects/{project_id}/branches | Create a branch |
| Update branch | PATCH | /projects/{project_id}/branches/{branch_id} | Update branch settings |
| Delete branch | DELETE | /projects/{project_id}/branches/{branch_id} | Delete a branch |
| Get branch | GET | /projects/{project_id}/branches/{branch_id} | View branches |
| List branches | GET | /projects/{project_id}/branches | List branches |
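
As a minimal sketch of the branch endpoints, the call below issues a POST against the documented path. The request body field is a hypothetical placeholder based on the branch_id parameter described under Resource IDs; check Create a branch for the authoritative request schema.

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# POST /projects/{project_id}/branches: create a branch in project "my-app".
# The body below is an assumed shape; the real request fields are listed
# in the Create a branch reference.
response = w.api_client.do(
    "POST",
    "/api/2.0/postgres/projects/my-app/branches",
    body={"branch_id": "development"},  # hypothetical request field
)
print(response)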

Endpoints (Computes and Read Replicas)

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create endpoint | POST | /projects/{project_id}/branches/{branch_id}/endpoints | Create a compute / Create a read replica |
| Update endpoint | PATCH | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | Edit a compute / Edit a read replica |
| Delete endpoint | DELETE | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | Delete a compute / Delete a read replica |
| Get endpoint | GET | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | View computes |
| List endpoints | GET | /projects/{project_id}/branches/{branch_id}/endpoints | View computes |
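
Endpoint (compute) operations follow the same nesting, one level deeper under the branch. A small sketch, assuming the example IDs my-app and development:

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# GET /projects/{project_id}/branches/{branch_id}/endpoints: list the
# computes and read replicas attached to the "development" branch.
endpoints = w.api_client.do(
    "GET",
    "/api/2.0/postgres/projects/my-app/branches/development/endpoints",
)
print(endpoints)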

Database Credentials

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Generate database credential | POST | /credentials | OAuth token authentication |
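
A hedged sketch of credential generation is shown below. The request body is a placeholder assumption; the actual fields (for example, which endpoint or identity the credential is scoped to) are defined in the OAuth token authentication documentation.

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# POST /credentials: mint a short-lived database credential.
# The body is a hypothetical placeholder; see OAuth token authentication
# for the real request fields.
credential = w.api_client.do(
    "POST",
    "/api/2.0/postgres/credentials",
    body={"endpoint_id": "my-app-endpoint"},  # assumed field and example value
)
print(credential)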

Operations

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Get operation | GET | /projects/{project_id}/operations/{operation_id} | See example below |

Get operation

Check the status of a long-running operation by its resource name.

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start an operation (example: create project)
operation = w.postgres.create_project(...)
print(f"Operation started: {operation.name}")

# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")

Common patterns

Resource naming

Resources follow a hierarchical naming pattern where child resources are scoped to their parent.

Projects use this format:

projects/{project_id}

Child resources like operations are nested under their parent project:

projects/{project_id}/operations/{operation_id}

This means you need the parent project ID to access operations or other child resources.
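
For example, given the documented name format, you can recover the parent project ID from an operation name with plain string handling:

Python
# An operation name embeds its parent project ID, per the format above.
operation_name = "projects/my-app/operations/abc123"

parts = operation_name.split("/")
assert parts[0] == "projects" and parts[2] == "operations"

project_id = parts[1]    # "my-app"
operation_id = parts[3]  # "abc123"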

Resource IDs:

When creating resources, you must provide a resource ID (like my-app) for the project_id, branch_id, or endpoint_id parameter. This ID becomes part of the resource path in API calls (such as projects/my-app/branches/development).

You can optionally provide a display_name to give your resource a more descriptive label. If you don't specify a display name, the system uses your resource ID as the display name.

Finding resources in the UI

To locate a project in the Lakebase UI, look for its display name in the projects list. If you didn't provide a custom display name when creating the project, search for your project_id (such as "my-app").

note

Resource IDs cannot be changed after creation.

Requirements:

  • Must be 1-63 characters long
  • Lowercase letters, digits, and hyphens only
  • Cannot start or end with a hyphen
  • Examples: my-app, analytics-db, customer-123
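
A minimal client-side check mirroring these rules might look like the following; the API remains the authoritative validator.

Python
import re

# 1-63 characters; lowercase letters, digits, and hyphens only;
# must not start or end with a hyphen.
RESOURCE_ID = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

for candidate in ["my-app", "analytics-db", "customer-123", "-bad-", "Bad_ID"]:
    status = "valid" if RESOURCE_ID.match(candidate) else "invalid"
    print(f"{candidate}: {status}")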

Long-running operations (LROs)

Create, update, and delete operations return a databricks.longrunning.Operation object that provides a completion status.

Example operation response:

JSON
{
  "name": "projects/my-project/operations/abc123",
  "done": false
}

Poll for completion using GetOperation:

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start an operation
operation = w.postgres.create_project(...)

# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")

Update masks

Update operations require an update_mask parameter specifying which fields to modify. This prevents accidentally overwriting unrelated fields.

Format differences:

| Method | Format | Example |
| --- | --- | --- |
| REST API | Query parameter | ?update_mask=spec.display_name |
| Python SDK | FieldMask object | update_mask=FieldMask(field_mask=["spec.display_name"]) |
| CLI | Positional argument | update-project NAME spec.display_name |
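
As an illustration of the REST format, the sketch below sends a PATCH with update_mask as a query parameter. The body shape (a spec object containing display_name) mirrors the mask path in the examples above but is an assumption; see General settings for the exact fields.

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# PATCH /projects/{project_id}?update_mask=spec.display_name
# Only the fields named in update_mask are modified.
response = w.api_client.do(
    "PATCH",
    "/api/2.0/postgres/projects/my-app",
    query={"update_mask": "spec.display_name"},
    body={"spec": {"display_name": "My application"}},  # assumed body shape
)
print(response)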

Additional resources

SDKs and infrastructure-as-code