Connect external app to Lakebase using SDK

info

Lakebase Autoscaling is available in the following regions: us-east-1, us-east-2, us-west-2, eu-central-1, eu-west-1, ap-south-1, ap-southeast-1, ap-southeast-2.

Lakebase Autoscaling is the latest version of Lakebase with autoscaling compute, scale-to-zero, branching, and instant restore. For feature comparison with Lakebase Provisioned, see choosing between versions.

This guide shows how to connect external applications to Lakebase Autoscaling using standard Postgres drivers (psycopg, pgx, JDBC) with OAuth token rotation. You use the Databricks SDK with a service principal and a connection pool that calls generate_database_credential() when opening each new connection, so you get a new token (60-minute lifetime) each time you connect. Examples are provided for Python, Java, and Go. For easier setup with automatic credential management, consider Databricks Apps instead.

What you'll build: A connection pattern that uses OAuth token rotation to connect to Lakebase Autoscaling from an external application, along with a quick check that the connection works.

You need the Databricks SDK (Python v0.89.0+, Java v0.73.0+, or Go v0.109.0+). Complete the following steps in order:

Other Languages

For languages without Databricks SDK support (Node.js, Ruby, PHP, Elixir, Rust, etc.), see Connect external app to Lakebase using API.

How it works

The Databricks SDK simplifies OAuth authentication by handling workspace token management automatically:

SDK OAuth flow

Your application calls generate_database_credential() with the endpoint parameter. The SDK obtains the workspace OAuth token internally (no code needed), requests the database credential from the Lakebase API, and returns it to your application. You then use this credential as the password when connecting to Postgres.

Both the workspace OAuth token and database credential expire after 60 minutes. Connection pools handle automatic refresh by calling generate_database_credential() when creating new connections.
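If you cache a credential outside a connection pool, you need to track that 60-minute lifetime yourself. A minimal freshness check is sketched below; the 5-minute safety margin is an arbitrary buffer of our choosing, not something the API requires:

```python
from datetime import datetime, timedelta, timezone

TOKEN_LIFETIME = timedelta(minutes=60)  # documented credential lifetime
REFRESH_MARGIN = timedelta(minutes=5)   # arbitrary safety buffer

def needs_refresh(issued_at, now=None):
    """True if a credential issued at issued_at should be regenerated."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at >= TOKEN_LIFETIME - REFRESH_MARGIN
```

A connection pool makes this unnecessary, since each new pooled connection fetches a fresh credential, but the check is useful for long-lived single connections.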

1. Create service principal with OAuth secret

Create a Databricks service principal with an OAuth secret. Full details are in Authorize service principal access. For building an external app, keep in mind:

  • Set the secret lifetime to your preferred value, up to 730 days. This determines how often you must rotate the secret, which your app uses to generate database credentials.
  • Enable "Workspace access" for the service principal (Settings → Identity and access → Service principals → {name} → Configurations tab). It is required for generating new database credentials.
  • Note the client ID (a UUID). You use it when creating the matching Postgres role in your app setup and for PGUSER.

2. Create Postgres role for the service principal

The Lakebase UI only supports password-based roles. Create an OAuth role in the Lakebase SQL Editor using the client ID from step 1 (not the display name; role name is case-sensitive):

SQL
-- Enable the auth extension (if not already enabled)
CREATE EXTENSION IF NOT EXISTS databricks_auth;

-- Create OAuth role using the service principal client ID
SELECT databricks_create_role('{client-id}', 'SERVICE_PRINCIPAL');

-- Grant database permissions
GRANT CONNECT ON DATABASE databricks_postgres TO "{client-id}";
GRANT USAGE ON SCHEMA public TO "{client-id}";
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "{client-id}";
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO "{client-id}";

Replace {client-id} with your service principal client ID. See Create OAuth roles.

3. Get connection details

From your project in the Lakebase Console, click Connect, select branch and endpoint, and note host, database (usually databricks_postgres), and endpoint name (format: projects/<project-id>/branches/<branch-id>/endpoints/<endpoint-id>).

Or use the CLI:

Bash
databricks postgres list-endpoints projects/<project-id>/branches/<branch-id>

See Connection strings for details.
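Since the endpoint name format is easy to get wrong, you may want to validate it before wiring it into environment variables. A small sketch, assuming only the three-part format shown above (the function name is ours, not part of any SDK):

```python
import re

def parse_endpoint_name(endpoint_name):
    """Split a Lakebase endpoint name into its project, branch, and endpoint IDs."""
    m = re.fullmatch(
        r"projects/([^/]+)/branches/([^/]+)/endpoints/([^/]+)", endpoint_name
    )
    if m is None:
        raise ValueError(f"Unexpected endpoint name format: {endpoint_name!r}")
    project_id, branch_id, endpoint_id = m.groups()
    return {"project": project_id, "branch": branch_id, "endpoint": endpoint_id}
```

The endpoint ID returned here should match the first label of the host you noted from the console.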

4. Set environment variables

Set these environment variables before running your application:

Bash
# Databricks workspace authentication
export DATABRICKS_HOST="https://your-workspace.databricks.com"
export DATABRICKS_CLIENT_ID="<service-principal-client-id>"
export DATABRICKS_CLIENT_SECRET="<your-oauth-secret>"

# Lakebase connection details (from step 3)
export ENDPOINT_NAME="projects/<project-id>/branches/<branch-id>/endpoints/<endpoint-id>"
export PGHOST="<endpoint-id>.database.<region>.cloud.databricks.com"
export PGDATABASE="databricks_postgres"
export PGUSER="<service-principal-client-id>" # Same UUID as step 1
export PGPORT="5432"
export PGSSLMODE="require" # Python only
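Before running the app, you can sanity-check that the required variables are actually set. A minimal sketch (the list mirrors the exports above; PGPORT and PGSSLMODE are omitted because the step 5 code falls back to defaults for them):

```python
import os

REQUIRED_VARS = [
    "DATABRICKS_HOST", "DATABRICKS_CLIENT_ID", "DATABRICKS_CLIENT_SECRET",
    "ENDPOINT_NAME", "PGHOST", "PGDATABASE", "PGUSER",
]

def check_env(env=os.environ):
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling this at startup and failing fast with the returned list gives a clearer error than a mid-request KeyError.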

5. Add connection code

This example uses psycopg3 with a custom connection class that generates a fresh token when the pool creates each new connection.

Python
import os
from databricks.sdk import WorkspaceClient
import psycopg
from psycopg_pool import ConnectionPool

# Initialize Databricks SDK
workspace_client = None

def _get_workspace_client():
    """Get or create the workspace client for OAuth."""
    global workspace_client
    if workspace_client is None:
        workspace_client = WorkspaceClient(
            host=os.environ["DATABRICKS_HOST"],
            client_id=os.environ["DATABRICKS_CLIENT_ID"],
            client_secret=os.environ["DATABRICKS_CLIENT_SECRET"],
        )
    return workspace_client

def _get_endpoint_name():
    """Get endpoint name from environment."""
    name = os.environ.get("ENDPOINT_NAME")
    if not name:
        raise ValueError(
            "ENDPOINT_NAME must be set (format: projects/<id>/branches/<id>/endpoints/<id>)"
        )
    return name

class OAuthConnection(psycopg.Connection):
    """Custom connection class that generates a fresh OAuth token per connection."""

    @classmethod
    def connect(cls, conninfo="", **kwargs):
        endpoint_name = _get_endpoint_name()
        client = _get_workspace_client()
        # Generate database credential (tokens are workspace-scoped)
        credential = client.postgres.generate_database_credential(
            endpoint=endpoint_name
        )
        kwargs["password"] = credential.token
        return super().connect(conninfo, **kwargs)

# Create connection pool with OAuth token rotation
def get_connection_pool():
    """Get or create the connection pool."""
    database = os.environ["PGDATABASE"]
    user = os.environ["PGUSER"]
    host = os.environ["PGHOST"]
    port = os.environ.get("PGPORT", "5432")
    sslmode = os.environ.get("PGSSLMODE", "require")

    conninfo = f"dbname={database} user={user} host={host} port={port} sslmode={sslmode}"

    return ConnectionPool(
        conninfo=conninfo,
        connection_class=OAuthConnection,
        min_size=1,
        max_size=10,
        open=True,
    )

# Use the pool in your application
pool = get_connection_pool()
with pool.connection() as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT current_user, current_database()")
        print(cur.fetchone())

Dependencies: databricks-sdk>=0.89.0, psycopg[binary,pool]>=3.1.0

6. Run and verify the connection

Install dependencies:

Bash
pip install databricks-sdk "psycopg[binary,pool]"

Run:

Python
# Save all the code from step 5 (above) as db.py, then run:
from db import get_connection_pool

pool = get_connection_pool()
with pool.connection() as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT current_user, current_database()")
        print(cur.fetchone())

Expected output:

('c00f575e-d706-4f6b-b62c-e7a14850571b', 'databricks_postgres')

If current_user matches your service principal client ID from step 1, OAuth token rotation is working.

Note: First connection after idle may take longer as Lakebase Autoscaling starts compute from zero.
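One way to absorb that cold-start delay is a small retry wrapper around whatever opens your connection. This is a sketch of our own, not part of the SDK or psycopg; the function name, attempt count, and linear backoff are all illustrative choices:

```python
import time

def connect_with_retry(open_conn, attempts=3, delay=2.0):
    """Call open_conn(), retrying on failure to ride out a cold start.

    open_conn is any zero-argument callable that returns a connection,
    for example: lambda: pool.getconn()
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return open_conn()
        except Exception as exc:  # in real code, catch psycopg.OperationalError
            last_exc = exc
            time.sleep(delay * (attempt + 1))  # linear backoff between attempts
    raise last_exc
```

Alternatively, psycopg_pool's own timeout on pool.connection() may be enough if the compute usually starts within that window.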

Troubleshooting

  • "API is disabled for users without workspace-access entitlement" — Enable "Workspace access" for the service principal (step 1).
  • "Role does not exist" or auth fails — Create the OAuth role via SQL (step 2), not the UI.
  • "Connection refused" or "Endpoint not found" — Use the ENDPOINT_NAME format projects/<id>/branches/<id>/endpoints/<id>; the endpoint ID appears in the host.
  • "Invalid user" or "User not found" — Set PGUSER to the service principal client ID (UUID), not the display name.