Get started with Lakebase Postgres
Lakebase Postgres (Autoscaling Beta) is the next version of Lakebase, available for evaluation only. For production workloads, use Lakebase Public Preview. See choosing between versions to understand which version is right for you.
Get up and running with Lakebase Postgres in minutes. Create your first project, connect to your database, and explore key features including Unity Catalog integration.
Enable Lakebase Postgres (Autoscaling Beta)
If you don't see Lakebase Postgres (Autoscaling Beta) in your workspace, a workspace admin can enable it through Settings > Workspace settings > Preview features. For more information, see Lakebase Beta access.
Create your first project
Open the Lakebase App from the apps switcher.

Click New project. Give your project a name and select your Postgres version. Your project is created with two branches (production and development), a default databricks_postgres database, and compute resources configured for each branch.

It may take a few moments for your compute to activate.
The region for your project is automatically set to your workspace region. For detailed configuration options, see Create a database project.
Connect to your database
From your project, select the production branch and click Connect. You can connect using your Databricks identity with OAuth authentication, or create a native Postgres password role. Connection strings work with standard Postgres clients like psql, pgAdmin, or any Postgres-compatible tool.

When you create a project, a Postgres role for your Databricks identity (for example, user@databricks.com) is automatically created. This role owns the default databricks_postgres database and is a member of databricks_superuser, giving it broad privileges to manage database objects.
To connect using your Databricks identity with OAuth, copy the psql connection snippet from the connection dialog.
psql 'postgresql://your-email@databricks.com@ep-abc-123.databricks.com/databricks_postgres?sslmode=require'
After entering the psql connection command in your terminal, you are prompted to provide an OAuth token. Get your token by clicking the Copy OAuth token option in the connection dialog.
For connection details and authentication options, see Connection overview.
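If your application can't use OAuth, you can instead create a native Postgres password role. The following is a hedged sketch using standard Postgres role management; the role name, password, and grants are placeholders, and your project's privilege model may differ:

```sql
-- Hypothetical example: a native Postgres role for an application that
-- cannot authenticate with OAuth. Name, password, and grants are placeholders.
CREATE ROLE app_user WITH LOGIN PASSWORD 'replace-with-a-strong-password';

-- Allow the role to read and write existing tables in the public schema.
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
```

Because your Databricks identity role is a member of databricks_superuser, it can run these statements against the default databricks_postgres database.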
Create your first table
The Lakebase SQL Editor comes preloaded with sample SQL to help you get started. From your project, select the production branch, open the SQL Editor, and run the provided statements to create a playing_with_lakebase table and insert sample data. You can also use the Tables Editor for visual data management or connect with external Postgres clients.

Learn more about querying options: SQL Editor | Tables Editor | Postgres clients
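If you prefer to write the statements yourself, the preloaded sample is roughly equivalent to the following sketch. The column names and sample values here are assumptions for illustration; the exact statements in your SQL Editor may differ:

```sql
-- Illustrative sketch of the preloaded sample; columns are placeholders.
CREATE TABLE playing_with_lakebase (
  id SERIAL PRIMARY KEY,
  note TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);

INSERT INTO playing_with_lakebase (note)
VALUES ('hello from Lakebase'), ('low-latency transactional queries');

-- Confirm the inserted rows.
SELECT * FROM playing_with_lakebase;
```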
Register in Unity Catalog
Now that you've created a table on your production branch, let's register the production database in Unity Catalog so you can query that data from Databricks SQL Editor.
- Use the apps switcher to navigate to Lakehouse.
- In Catalog Explorer, click the plus icon and Create a catalog.
- Enter a catalog name (for example, lakebase_catalog).
- Select Lakebase Serverless as the catalog type.
- Select your project, the production branch, and the databricks_postgres database.
- Click Create.
Query the playing_with_lakebase table you just created from Databricks SQL Editor using a SQL warehouse:
SELECT * FROM lakebase_catalog.public.playing_with_lakebase;
This enables federated queries that join your Lakebase transactional data with lakehouse analytics. For details, see Register in Unity Catalog.
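For example, once the catalog is registered, a single Databricks SQL query can join rows from both sides. The lakehouse table name below (main.default.daily_metrics) and its columns are hypothetical placeholders:

```sql
-- Hypothetical federated join: transactional rows from Lakebase combined
-- with an analytics table in the lakehouse. main.default.daily_metrics
-- and its columns are assumed names for illustration.
SELECT l.id, l.note, m.event_count
FROM lakebase_catalog.public.playing_with_lakebase AS l
JOIN main.default.daily_metrics AS m
  ON m.entity_id = l.id;
```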
Sync data with Reverse ETL
You've just seen how to make Lakebase data queryable in Unity Catalog. Lakebase also works in the reverse direction: bringing curated analytical data from Unity Catalog into your Lakebase database. This is useful when you have enriched data, ML features, or aggregated metrics computed in your lakehouse that need to be served by applications with low-latency transactional queries.
First, create a table in Unity Catalog that represents analytical data. Open a SQL warehouse or notebook and run:
CREATE TABLE main.default.user_segments AS
SELECT * FROM VALUES
  (1001, 'premium', 2500.00, 'high'),
  (1002, 'standard', 450.00, 'medium'),
  (1003, 'premium', 3200.00, 'high'),
  (1004, 'basic', 120.00, 'low')
AS segments(user_id, tier, lifetime_value, engagement);
Now sync this table to your Lakebase database:
- In the Lakehouse Catalog Explorer, navigate to main > default > user_segments.
- Click Create > Synced table.
- Configure the sync:
  - Table name: Enter user_segments_synced.
  - Database type: Select Lakebase Serverless (Beta).
  - Sync mode: Choose Snapshot for a one-time data sync.
  - Select your project, the production branch, and the databricks_postgres database.
- Click Create.
After the sync completes, the table appears in your Lakebase database. The sync process creates a default schema in Postgres to match the Unity Catalog schema, so main.default.user_segments_synced becomes default.user_segments_synced. Navigate back to Lakebase using the apps switcher and query it in the Lakebase SQL Editor:
SELECT * FROM "default"."user_segments_synced" WHERE "engagement" = 'high';

Your lakehouse analytics are now available for real-time serving in your transactional database. For continuous syncing, advanced configurations, and data type mappings, see Reverse ETL.
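If you want to confirm the sync from the Postgres side, a standard catalog query in the Lakebase SQL Editor works. This relies only on the standard information_schema views, not on any Lakebase-specific API:

```sql
-- List tables in the "default" schema that the sync process created.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema = 'default';
```

The synced table should appear as default.user_segments_synced.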
Next steps
Now that you've created your first project, here are some recommended paths forward:
- Tutorials
  - Branch-based development tutorial - Learn schema change workflows with branches and schema diff.
- Connect
  - Connect to your database - Authentication options and connection strings.
  - Postgres clients - Use psql, pgAdmin, DataGrip, and other tools.
- Key features
  - Database branches - Development workflows with instant database branching.
  - Autoscaling - Automatic compute scaling based on demand.
  - Scale to zero - Cost optimization with automatic suspension.
- Explore more
  - Database projects overview - Complete guide to features, integrations, and operations.