Serve lakehouse data
Lakebase Autoscaling is the latest version of Lakebase, with autoscaling compute, scale-to-zero, branching, and instant restore. For supported regions, see Region availability. If you are a Lakebase Provisioned user, see Lakebase Provisioned.
Sync a Unity Catalog table into Postgres and query it alongside your operational data.
Steps: ① Create analytics data → ② Sync to Lakebase → ③ Find your data in Postgres → ④ Query across both worlds
Before you begin
- Make sure you completed Get a Postgres database. You need a Lakebase project with sample data.
- A SQL warehouse or notebook for Unity Catalog queries.
- USE SCHEMA and CREATE TABLE privileges on the schema where you'll create the synced table. Example grants are sketched below.
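If you need these privileges, a catalog admin can run something like the following. This is a minimal sketch: the catalog main, the schema main.default, and the principal user@example.com are placeholders for your own values.

-- Placeholder principal and schema; substitute your own
GRANT USE CATALOG ON CATALOG main TO `user@example.com`;
GRANT USE SCHEMA ON SCHEMA main.default TO `user@example.com`;
GRANT CREATE TABLE ON SCHEMA main.default TO `user@example.com`;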
Step 1: Create analytics data in Unity Catalog
Imagine your data team has built user segmentation scores in the lakehouse. In production, this would be a gold table, ML output, or enriched dataset. For this guide, you'll create a small sample.
In a SQL warehouse or notebook, run:
CREATE TABLE main.default.user_segments AS
SELECT * FROM VALUES
(1, 'power_user', 0.92),
(2, 'casual', 0.35),
(3, 'power_user', 0.88)
AS segments(user_id, segment, engagement_score);
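To confirm the rows landed, run a quick check in the same warehouse or notebook:

-- Should return three rows
SELECT * FROM main.default.user_segments ORDER BY user_id;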
Notice that the user_id values match the id column in your playing_with_lakebase table from Get a Postgres database. That's intentional: you'll join the two tables in Step 4.
Learn more: Supported source types
Step 2: Sync the table to Lakebase
In Catalog Explorer, navigate to your user_segments table and create a synced table from it. Choose your Lakebase project's databricks_postgres database as the target and Snapshot as the sync mode. Snapshot copies the data once, which is the simplest option for getting started.
The sync runs automatically. When it completes, a new read-only table appears in your Lakebase database. The schema name from Unity Catalog becomes the Postgres schema name, and the table name gets a _synced suffix: default.user_segments_synced.
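If you want to verify the sync before querying the data itself, standard Postgres catalog views work in the Lakebase SQL Editor:

-- List tables in the "default" schema to confirm the synced table exists
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema = 'default';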
Learn more: Create a synced table (full procedure) | Sync modes
Step 3: Find your data in Postgres
Switch to the Lakebase SQL Editor. The analytics data from Unity Catalog is now queryable with standard Postgres SQL. Look for user 1:
SELECT * FROM "default".user_segments_synced WHERE user_id = 1;
default must be quoted because DEFAULT is a reserved keyword in PostgreSQL. The synced table inherits its schema name from Unity Catalog, so a Unity Catalog schema named default must be quoted in every Postgres query.
You should see user 1 with segment power_user and an engagement score of 0.92. This is the same row you created in Unity Catalog, now available in Postgres with low-latency reads.
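Because Unity Catalog types are mapped to Postgres types during sync, you can also inspect what each column became:

-- Show the Postgres type each synced column mapped to
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'default' AND table_name = 'user_segments_synced';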
Learn more: Data type mapping
Step 4: Query across both worlds
Here's the payoff. Your playing_with_lakebase table has operational data. Your user_segments_synced table has lakehouse analytics. Join them:
SELECT
p.id,
p.name,
p.value,
s.segment,
s.engagement_score
FROM playing_with_lakebase p
JOIN "default".user_segments_synced s ON p.id = s.user_id;
Your application can now serve enriched data. A single Postgres query combines what the app knows (names, values) with what the lakehouse computed (segments, scores). No API calls to the lakehouse, no sync scripts, no latency penalty.
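For example, a profile endpoint that serves one enriched user needs only a single lookup; the literal 1 below stands in for whatever bind parameter your application passes:

-- Serve one enriched user profile in a single query
SELECT p.name, p.value, s.segment, s.engagement_score
FROM playing_with_lakebase p
JOIN "default".user_segments_synced s ON p.id = s.user_id
WHERE p.id = 1;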
Learn more: Capacity planning
Next steps
- Keep data fresh: Configure Triggered or Continuous sync modes for ongoing updates.
- Build an app: Use synced data in a Databricks App or external application.
- Explore Lakebase: Core concepts | What is Lakebase?