
Lakebase Autoscaling API guide

info

Lakebase Autoscaling is the latest version of Lakebase, with autoscaling compute, scale-to-zero, branching, and instant restore. For supported regions, see Region availability. If you are a Lakebase Provisioned user, see Lakebase Provisioned.

This page provides an overview of the Lakebase Autoscaling API, including authentication, available endpoints, and common patterns for working with the REST API, Databricks CLI, and Databricks SDKs (Python, Java, Go).

For the complete API reference, see the Postgres API documentation.

important

The Lakebase Postgres API is in Beta. API endpoints, parameters, and behaviors are subject to change.

Authentication

The Lakebase Autoscaling API uses workspace-level OAuth authentication for managing project infrastructure (creating projects, configuring settings, etc.).

note

Two types of connectivity: This API is for platform management (creating projects, branches, and computes). For database access (connecting to a database to query data), use one of the following:

  • SQL clients (psql, pgAdmin, DBeaver): Use Lakebase OAuth tokens or Postgres passwords. See Authentication.
  • Data API (RESTful HTTP): Use Lakebase OAuth tokens. See Data API.
  • Programming language drivers (psycopg, SQLAlchemy, JDBC): Use Lakebase OAuth tokens or Postgres passwords. See Quickstart.

For a complete explanation of these two authentication layers, see Authentication architecture.

Set up authentication

Authenticate using the Databricks CLI:

Bash
databricks auth login --host https://your-workspace.cloud.databricks.com

Follow the browser prompts to log in. The CLI caches your OAuth token at ~/.databricks/token-cache.json.

Then choose your access method. For example, the Databricks SDK uses unified authentication and automatically handles OAuth tokens:

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

For more details, see Authorize user access to Databricks with OAuth.

Available endpoints (Beta)

All endpoints use the base path /api/2.0/postgres/.
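Each resource path in the tables below is appended to this base path. As a quick illustration, a full request URL can be assembled like this (the helper and workspace URL are hypothetical, not part of the SDK):

```python
# Hypothetical helper: joins a workspace URL, the Lakebase base path,
# and a resource path into a full request URL.
BASE_PATH = "/api/2.0/postgres"

def lakebase_url(workspace_url: str, resource_path: str) -> str:
    """Build the full URL for a Lakebase API resource path."""
    return workspace_url.rstrip("/") + BASE_PATH + "/" + resource_path.lstrip("/")

url = lakebase_url("https://your-workspace.cloud.databricks.com", "projects/my-app")
# → https://your-workspace.cloud.databricks.com/api/2.0/postgres/projects/my-app
```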

Projects

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create project | POST | `/projects` | Create a project |
| Update project | PATCH | `/projects/{project_id}` | General settings |
| Delete project | DELETE | `/projects/{project_id}` | Delete a project |
| Get project | GET | `/projects/{project_id}` | Get project details |
| List projects | GET | `/projects` | List projects |

Branches

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create branch | POST | `/projects/{project_id}/branches` | Create a branch |
| Update branch | PATCH | `/projects/{project_id}/branches/{branch_id}` | Update branch settings |
| Delete branch | DELETE | `/projects/{project_id}/branches/{branch_id}` | Delete a branch |
| Get branch | GET | `/projects/{project_id}/branches/{branch_id}` | View branches |
| List branches | GET | `/projects/{project_id}/branches` | List branches |

Endpoints (Computes and Read Replicas)

In the API, a compute is called an endpoint. For a full mapping of UI and API terms, see Computes and endpoints.

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create endpoint | POST | `/projects/{project_id}/branches/{branch_id}/endpoints` | Create a compute / Create a read replica |
| Update endpoint | PATCH | `/projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}` | Edit a compute / Edit a read replica |
| Delete endpoint | DELETE | `/projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}` | Delete a compute / Delete a read replica |
| Get endpoint | GET | `/projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}` | View computes |
| List endpoints | GET | `/projects/{project_id}/branches/{branch_id}/endpoints` | View computes |

Roles

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| List roles | GET | `/projects/{project_id}/branches/{branch_id}/roles` | View Postgres roles |
| Create role | POST | `/projects/{project_id}/branches/{branch_id}/roles` | Create an OAuth role / Create a password role |
| Get role | GET | `/projects/{project_id}/branches/{branch_id}/roles/{role_id}` | View Postgres roles |
| Update role | PATCH | `/projects/{project_id}/branches/{branch_id}/roles/{role_id}` | Update a role |
| Delete role | DELETE | `/projects/{project_id}/branches/{branch_id}/roles/{role_id}` | Delete a role |

Catalogs

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Register database with Unity Catalog | POST | `/catalogs` | Register a database |
| Get catalog registration | GET | `/catalogs/{catalog_id}` | Check registration status |
| Delete catalog registration | DELETE | `/catalogs/{catalog_id}` | Unregister a database |

note

Register and delete are long-running operations. Poll the returned operation until done: true. See Long-running operations.

Deleting a catalog registration does not drop the underlying Postgres database.

Synced Tables

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create synced table | POST | `/synced_tables` | Create a synced table |
| Get synced table | GET | `/synced_tables/{table_name}` | Check sync status |
| Delete synced table | DELETE | `/synced_tables/{table_name}` | Delete a synced table |

note

The table_name in the path uses the format catalog.schema.table.

Create and delete are long-running operations. Poll the returned operation until done: true. See Long-running operations.

Deleting a synced table removes the Unity Catalog registration only. Drop the Postgres table separately to free up space.
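Because the path segment is a three-part name, it is worth validating before building the request. A minimal sketch (the helper is illustrative, not part of the SDK):

```python
def synced_table_path(table_name: str) -> str:
    """Build the synced-table resource path from a catalog.schema.table name."""
    parts = table_name.split(".")
    if len(parts) != 3 or not all(parts):
        raise ValueError(f"Expected catalog.schema.table, got: {table_name!r}")
    return f"/synced_tables/{table_name}"

print(synced_table_path("main.analytics.orders"))
# → /synced_tables/main.analytics.orders
```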

Database Credentials

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Generate database credential | POST | `/credentials` | OAuth token authentication |

Operations

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Get operation | GET | `/projects/{project_id}/operations/{operation_id}` | See example below |

Permissions

Project ACL permissions use the standard Databricks Permissions API, not the /api/2.0/postgres/ base path. Set the request_object_type to database-projects and request_object_id to the project ID.

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Get project permissions | GET | `/api/2.0/permissions/database-projects/{project_id}` | Permissions API reference |
| Update project permissions | PATCH | `/api/2.0/permissions/database-projects/{project_id}` | Permissions API reference |
| Replace project permissions | PUT | `/api/2.0/permissions/database-projects/{project_id}` | Permissions API reference |

The grantable permission levels for Lakebase projects are CAN_USE and CAN_MANAGE. CAN_CREATE is an inherited level and cannot be set via the API. See Permission levels.

For usage examples and CLI/SDK/Terraform equivalents, see Grant permissions programmatically.
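For example, granting CAN_USE on a project via the REST API is a PATCH against the permissions path with an `access_control_list` body. A sketch of the request shape (the user name and project ID are placeholders; verify the body against the Permissions API reference):

```python
import json

project_id = "my-app"  # placeholder project ID
path = f"/api/2.0/permissions/database-projects/{project_id}"

# Standard Permissions API body: grant CAN_USE to one user.
body = {
    "access_control_list": [
        {"user_name": "someone@example.com", "permission_level": "CAN_USE"}
    ]
}
print(path)
print(json.dumps(body))
```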

Get operation

Check the status of a long-running operation by its resource name.

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start an operation (example: create project)
operation = w.postgres.create_project(...)
print(f"Operation started: {operation.name}")

# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")

Common patterns

Resource naming

Resources follow a hierarchical naming pattern where child resources are scoped to their parent.

Projects use this format:

projects/{project_id}

Child resources like operations are nested under their parent project:

projects/{project_id}/operations/{operation_id}

This means you need the parent project ID to access operations or other child resources.
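A resource name can also be split back into its component IDs. A small illustrative parser (not part of the SDK):

```python
def parse_operation_name(name: str) -> dict:
    """Extract project_id and operation_id from a hierarchical resource name."""
    parts = name.split("/")
    if len(parts) != 4 or parts[0] != "projects" or parts[2] != "operations":
        raise ValueError(f"Not an operation resource name: {name!r}")
    return {"project_id": parts[1], "operation_id": parts[3]}

print(parse_operation_name("projects/my-app/operations/abc123"))
# → {'project_id': 'my-app', 'operation_id': 'abc123'}
```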

Resource IDs:

When creating resources, you must provide a resource ID (like my-app) for the project_id, branch_id, or endpoint_id parameter. This ID becomes part of the resource path in API calls (such as projects/my-app/branches/development).

You can optionally provide a display_name to give your resource a more descriptive label. If you don't specify a display name, the system uses your resource ID as the display name.

Finding resources in the UI

To locate a project in the Lakebase UI, look for its display name in the projects list. If you didn't provide a custom display name when creating the project, search for your project_id (such as "my-app").

note

Resource IDs cannot be changed after creation.

Requirements:

  • Must be 1-63 characters long
  • Lowercase letters, digits, and hyphens only
  • Cannot start or end with a hyphen
  • Examples: my-app, analytics-db, customer-123
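These requirements map directly to a regular expression. A quick illustrative check (the helper is not part of the SDK; the server performs its own validation):

```python
import re

# 1-63 chars, lowercase letters/digits/hyphens, no leading or trailing hyphen.
RESOURCE_ID = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_valid_resource_id(resource_id: str) -> bool:
    return RESOURCE_ID.fullmatch(resource_id) is not None

for candidate in ["my-app", "analytics-db", "customer-123", "-bad", "Bad", ""]:
    print(candidate, is_valid_resource_id(candidate))
```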

Long-running operations (LROs)

Create, update, and delete operations return a databricks.longrunning.Operation object that provides a completion status.

Example operation response:

JSON
{
  "name": "projects/my-project/operations/abc123",
  "done": false
}

Poll for completion using GetOperation:

Python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start an operation
operation = w.postgres.create_project(...)

# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")
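If you call the REST API directly instead of using the SDK's `wait()`, the same pattern is a loop on the operation's `done` flag. A generic sketch, where `fetch` is whatever callable you use to GET `/projects/{project_id}/operations/{operation_id}` and return the response as a dict:

```python
import time

def poll_operation(fetch, interval=2.0, timeout=600.0):
    """Poll a long-running operation until its 'done' field is true.

    `fetch` is any callable returning the operation as a dict, e.g. a GET
    on /projects/{project_id}/operations/{operation_id}.
    """
    deadline = time.monotonic() + timeout
    while True:
        op = fetch()
        if op.get("done"):
            return op
        if time.monotonic() > deadline:
            raise TimeoutError(f"Operation {op.get('name')} did not finish in {timeout}s")
        time.sleep(interval)
```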

Update masks

Update operations require an update_mask parameter specifying which fields to modify. This prevents accidentally overwriting unrelated fields.

Format differences:

| Method | Format | Example |
| --- | --- | --- |
| REST API | Query parameter | `?update_mask=spec.display_name` |
| Python SDK | FieldMask object | `update_mask=FieldMask(field_mask=["spec.display_name"])` |
| CLI | Positional argument | `update-project NAME spec.display_name` |
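In the REST case the mask is just a query parameter, so the request URL can be assembled directly. An illustrative sketch (the helper name and IDs are placeholders; the example uses a single field):

```python
from urllib.parse import urlencode

def update_project_url(workspace_url: str, project_id: str, fields: list) -> str:
    """Build a PATCH URL with an update_mask query parameter."""
    query = urlencode({"update_mask": ",".join(fields)})
    return f"{workspace_url}/api/2.0/postgres/projects/{project_id}?{query}"

print(update_project_url(
    "https://your-workspace.cloud.databricks.com", "my-app", ["spec.display_name"]
))
# → https://your-workspace.cloud.databricks.com/api/2.0/postgres/projects/my-app?update_mask=spec.display_name
```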

Error handling

The Lakebase API returns standard HTTP status codes.

409: Conflicting operations

Error message:

project already has running conflicting operations, scheduling of new ones is prohibited

What it means:

Lakebase sometimes schedules internal maintenance operations on projects. If a client request arrives while one of these internal operations is in progress, Lakebase can reject the new request with a 409 Conflict error.

This is expected behavior. Clients should be prepared to retry requests when this error occurs.

What to do:

Retry the request. When the internal operation completes, Lakebase accepts new requests for the project.

Use exponential backoff for retries: wait a short interval before the first retry, then double the wait on each subsequent attempt. A starting interval of 100 milliseconds with a maximum of 30 seconds is a reasonable default.

Python
import time
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import ResourceConflict
from databricks.sdk.service.postgres import Branch, BranchSpec

w = WorkspaceClient()

def retry_on_conflict(fn, max_attempts=5, base_delay=0.1):
    """Retry a Lakebase API call when a conflicting operation is in progress."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ResourceConflict:
            if attempt == max_attempts - 1:
                raise
            wait = base_delay * (2 ** attempt)
            print(f"Conflicting operation in progress. Retrying in {wait}s...")
            time.sleep(wait)

# Example: create a branch with retry
branch = retry_on_conflict(
    lambda: w.postgres.create_branch(
        parent="projects/my-project",
        branch=Branch(spec=BranchSpec(no_expiry=True)),
        branch_id="my-branch",
    ).wait()
)

note

A 409 Conflict on a Lakebase API request means the request was not accepted, not that it was applied. Always verify the resource state after a successful retry by calling the corresponding GET endpoint.

SDKs and infrastructure-as-code