Query PostgreSQL with Databricks

This example queries PostgreSQL using its JDBC driver. For more details on reading, writing, configuring parallelism, and query pushdown, see Query databases using JDBC.

Experimental

The legacy query federation documentation has been retired and might not be updated. The configurations mentioned in this content are not officially endorsed or tested by Databricks. If Lakehouse Federation supports your source database, Databricks recommends using that instead.

Using JDBC

Python
driver = "org.postgresql.Driver"

database_host = "<database-host-url>"
database_port = "5432" # update if you use a non-default port
database_name = "<database-name>"
table = "<table-name>"
user = "<username>"
password = "<password>"

url = f"jdbc:postgresql://{database_host}:{database_port}/{database_name}"

remote_table = (spark.read
    .format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("dbtable", table)
    .option("user", user)
    .option("password", password)
    .load()
)
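
The same options support writing and query pushdown. The sketch below is illustrative rather than definitive: target_table is a hypothetical destination name, and the query option (available in Spark 2.4 and above) reads back the result of an arbitrary SQL statement instead of a whole table.

Python
# Write sketch reusing the JDBC options above. "target_table" is a hypothetical
# destination; mode can be "append", "overwrite", "ignore", or "error".
target_table = "<target-table-name>"

(remote_table.write
    .format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("dbtable", target_table)
    .option("user", user)
    .option("password", password)
    .mode("append")
    .save()
)

# Pushdown sketch: the "query" option sends the SQL statement to PostgreSQL and
# reads back only its result. Set either "dbtable" or "query", not both.
pushed_down = (spark.read
    .format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("query", f"SELECT * FROM {table} WHERE 1 = 1")  # replace the WHERE clause with your filter
    .option("user", user)
    .option("password", password)
    .load()
)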

Using the PostgreSQL connector in Databricks Runtime

In Databricks Runtime 11.3 LTS and above, you can use the named connector to query PostgreSQL. See the following examples:

Python
remote_table = (spark.read
    .format("postgresql")
    .option("dbtable", "schema_name.table_name")  # if schema_name is omitted, the connector defaults to "public"
    .option("host", "database_hostname")
    .option("port", "5432")  # optional; defaults to 5432 if not specified
    .option("database", "database_name")
    .option("user", "username")
    .option("password", "password")
    .load()
)
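
Once loaded, remote_table is an ordinary Spark DataFrame. As a quick sanity check, the sketch below filters and previews a few rows; the column name id is a hypothetical example, and simple comparison filters like this are typically pushed down to PostgreSQL.

Python
# "id" is a hypothetical column; substitute a column from your table.
# Simple comparison filters are generally pushed down to the source database.
remote_table.filter("id > 100").show(5)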