Use connection pooling
Lakebase Autoscaling is the latest version of Lakebase, with autoscaling compute, scale-to-zero, branching, and instant restore. For supported regions, see Region availability. If you are a Lakebase Provisioned user, see Lakebase Provisioned.
Lakebase includes a built-in PgBouncer connection pooler that maintains a pool of server connections and shares them across many client connections. The pooler supports up to 10,000 concurrent connections, making it a good fit for serverless functions, web APIs, and other applications that open many short-lived connections.
Connection pooling requires native Postgres password authentication. It is not available for OAuth or Databricks identity roles.
How connection pooling works
Each Postgres connection consumes server resources because Postgres creates a separate process for each client. Applications that open many short-lived connections, such as web APIs and serverless functions, can exhaust the server's connection limit quickly.
The connection pooler sits between your application and Postgres. Clients connect to the pooler, and the pooler forwards queries to a smaller pool of actual server connections. When a transaction completes, the pooler returns the server connection to the pool, making it available to the next client.
Lakebase runs PgBouncer in transaction mode. In transaction mode, a server connection is held for the duration of a single transaction and then returned to the pool. This lets many clients share a small pool of server connections.
Transaction mode improves connection efficiency but restricts certain Postgres features that require a persistent server connection. See Transaction mode limitations.
Connection pools
PgBouncer creates a separate pool for each database and user combination. Two users connecting to the same database get independent pools. The size of each pool is approximately 90% of the compute's direct connection limit.
When all connections in a pool are in use, new client requests wait in a queue. If a server connection doesn't become available within 2 minutes, the client receives a timeout error.
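Because a queued client receives a timeout error if no server connection frees up within 2 minutes, applications often wrap queries in a small retry loop. A minimal sketch, where `run_query`, the exception type, and the backoff values are illustrative placeholders (your driver surfaces pool timeouts with its own exception class):

```python
import time

def with_retry(run_query, attempts=3, backoff_s=1.0):
    """Retry a query when the pooler's 2-minute queue wait times out.

    `run_query` is a stand-in for your real database call; replace
    TimeoutError with the timeout exception your driver actually raises.
    """
    for attempt in range(attempts):
        try:
            return run_query()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the timeout to the caller
            time.sleep(backoff_s * (attempt + 1))  # simple linear backoff

# Demonstrate with a fake query that fails once, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError("pool wait exceeded")
    return "ok"

print(with_retry(flaky, backoff_s=0.01))  # -> ok
```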
Connection limits
Three limits govern connection pooling:
| Limit | Value | What it controls |
|---|---|---|
| Client connections | 10,000 | Maximum connections from your application to PgBouncer |
| Pool size | ~90% of the direct connection limit | Active server connections per (user, database) pair |
| Direct connections | Varies by compute size | Maximum direct Postgres connections |
The direct connection limit depends on your compute size. For example, an 8 CU compute supports 1,678 direct connections and a 16 CU compute supports 3,357. For the full list, see Compute specifications.
Connection pooling lets your application support far more concurrent users than the direct connection limit alone would allow.
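As an illustration, the documented figures above can be combined into a rough per-(user, database) pool-size estimate. This is a sketch for intuition only; the exact limits come from the Compute specifications page:

```python
# Documented direct-connection limits (CU -> direct connections),
# taken from the examples in this page: 8 CU -> 1,678; 16 CU -> 3,357.
DIRECT_CONNECTION_LIMITS = {8: 1678, 16: 3357}

def approx_pool_size(compute_cu: int) -> int:
    """Pool size is approximately 90% of the compute's direct connection limit."""
    return int(DIRECT_CONNECTION_LIMITS[compute_cu] * 0.9)

print(approx_pool_size(8))   # roughly 1,510 pooled server connections
print(approx_pool_size(16))  # roughly 3,021 pooled server connections
```

Meanwhile, up to 10,000 clients can connect to PgBouncer and share that smaller pool.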
Prerequisites
- Your Lakebase Autoscaling project must be active.
- You must have a native Postgres password role in the project. For instructions, see Create a native Postgres password role.
- To use the read-only pooler, you must have a high availability endpoint with Allow access to read-only compute instances enabled. See High availability.
Enable connection pooling
- In the Lakebase App, go to your project and click Connect.
- Select the branch and compute you want to connect to.
- From the Role drop-down, select a native Postgres password role. The Connection pooling toggle is only visible when a password role is selected. It is hidden for OAuth and Databricks identity roles.
- Turn on Connection pooling.
- Copy the connection string and use it in your application.
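Once copied, the pooled connection string can be consumed like any Postgres DSN. A minimal sketch, where the environment variable name and the fallback hostname are illustrative placeholders (the real string comes from the Connect dialog):

```python
import os
from urllib.parse import urlparse

# Placeholder DSN only; substitute the string copied from the Connect dialog.
dsn = os.environ.get(
    "LAKEBASE_POOLED_DSN",
    "postgresql://myrole:mypassword@pooler-host.example.com:5432/mydb",
)

parts = urlparse(dsn)
assert parts.scheme == "postgresql"
assert parts.port == 5432  # both pooler types use port 5432
print(parts.hostname, parts.path.lstrip("/"))
```

From here, a driver such as psycopg2 accepts the DSN directly, for example `psycopg2.connect(dsn)`.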

Connection string formats
Pooler connection strings use a different hostname than direct database connections:
| Type | Hostname format | When to use |
|---|---|---|
| Read-write pooler | Shown in the Connect dialog | All write and read traffic routed through the pooler |
| Read-only pooler | Shown in the Connect dialog | Read traffic only. Requires a high availability endpoint with read access enabled. |
Both pooler types use port 5432.
Copy your pooler connection string directly from the Connect dialog in the Lakebase App to get the correct hostname for your endpoint, region, and cloud.
Transaction mode limitations
The following Postgres features aren't available when using the connection pooler:

- SQL-level prepared statements: `PREPARE` and `DEALLOCATE` statements are not supported in transaction mode. Driver-level prepared statements (used internally by psycopg2, node-postgres, JDBC, and similar libraries) work correctly through PgBouncer's protocol-level support. For JDBC, if you see errors related to prepared statements, set `prepareThreshold=0` to disable named server-side prepared statement caching.
- Session-level settings: `SET` commands do not persist across transactions because each transaction may use a different server connection. For example:

  ```sql
  BEGIN;
  SET search_path TO myschema;
  SELECT * FROM mytable; -- works in this transaction
  COMMIT;
  -- connection returns to pool after COMMIT
  SELECT * FROM mytable; -- ERROR: relation "mytable" does not exist
  ```

  To apply a setting permanently, use `ALTER ROLE` instead:

  ```sql
  ALTER ROLE myrole SET search_path TO myschema, public;
  ```

- Session-held temporary tables: Temporary tables that persist across transactions are not available. A connection returned to the pool may be assigned to a different client in the next transaction.
- `WITH HOLD` cursors: Cursors declared with `WITH HOLD` require a persistent connection and are not supported.
- Advisory locks: PgBouncer doesn't support advisory locks, because they require a persistent server connection, which isn't available in transaction mode.
- `LISTEN`/`NOTIFY`: Not supported. Use a direct (non-pooled) connection for applications that require pub/sub messaging.
- `pg_dump` and schema migrations: Use a direct connection for `pg_dump`, schema migrations, and other tools that rely on session-level state.
For applications that require session-level Postgres features, use a direct connection string from the Connect dialog without enabling the Connection pooling toggle.
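When only a per-transaction setting is needed, Postgres's `SET LOCAL` is a common pooler-safe alternative to plain `SET`, because it is scoped to the current transaction and expires at `COMMIT`, before the server connection returns to the pool. A minimal sketch that builds such a transaction; the helper name, `myschema`, and the statements are illustrative placeholders:

```python
def transaction_with_search_path(statements, schema="myschema"):
    """Build the SQL for one transaction that sets search_path locally.

    SET LOCAL only lasts until COMMIT, so it never leaks to the next
    client that receives this server connection from the pool.
    """
    return (
        ["BEGIN;", f"SET LOCAL search_path TO {schema};"]
        + list(statements)
        + ["COMMIT;"]
    )

for line in transaction_with_search_path(["SELECT * FROM mytable;"]):
    print(line)
```

Execute the resulting statements inside a single transaction through your driver; splitting them across transactions defeats the purpose, since each transaction may land on a different server connection.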
Next steps
- Connection strings: Connection string format reference for direct connections. See Connection strings.
- Create Postgres roles: How to create native Postgres password roles required for connection pooling. See Create Postgres roles.
- About authentication: Comparison of OAuth and password authentication methods. See About authentication.