Connect to external systems

Databricks provides built-in integrations with many cloud-native data systems, as well as extensible JDBC support for connecting to other data systems.

The connectors documented in this section focus primarily on configuring a connection to a single table in an external data system. Some of these drivers also let you write data back to external systems.
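
For example, a single-table read and write-back over JDBC might look like the following sketch. The hostname, credentials, and table names are placeholders, and not every driver supports writes.

```python
# Read a single table from an external database over JDBC (placeholder values).
df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<database>")  # placeholder connection URL
    .option("dbtable", "public.orders")                          # a single table
    .option("user", "<user>")
    .option("password", "<password>")                            # prefer a secret scope in practice
    .load())

# Some drivers also support writing results back to the external system.
(df.filter("status = 'active'")
    .write
    .format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<database>")
    .option("dbtable", "public.active_orders")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("append")
    .save())
```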

For read-only data connections, Databricks recommends using Lakehouse Federation, which mirrors entire databases from external systems as foreign catalogs governed by Unity Catalog, without ingesting the data. See What is Lakehouse Federation?.
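
A minimal Lakehouse Federation setup typically involves creating a connection and a foreign catalog, as in the following sketch. It assumes a Unity Catalog-enabled workspace and a PostgreSQL source; the connection name, catalog name, host, and credentials are placeholders, and the exact options vary by source type.

```python
# Create a Unity Catalog connection object that stores credentials for the external system.
spark.sql("""
    CREATE CONNECTION IF NOT EXISTS postgres_connection
    TYPE postgresql
    OPTIONS (
      host 'db.example.com',      -- placeholder host
      port '5432',
      user 'federation_user',
      password '<password>'       -- placeholder; a secret is recommended in practice
    )
""")

# Mirror the external database as a foreign catalog so its tables are queryable and governed in Unity Catalog.
spark.sql("""
    CREATE FOREIGN CATALOG IF NOT EXISTS postgres_catalog
    USING CONNECTION postgres_connection
    OPTIONS (database 'sales_db')  -- placeholder database name
""")

# Query the federated table read-only, like any other Unity Catalog table.
spark.sql("SELECT * FROM postgres_catalog.public.orders LIMIT 10").show()
```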

Partner Connect also provides integrations with many popular enterprise data systems. Many Partner Connect solutions not only connect to data sources but also provide easy ETL to keep the data in your lakehouse fresh. See What is Databricks Partner Connect?.

What data sources connect to Databricks with JDBC?

You can use JDBC to connect to many data sources. Databricks Runtime includes drivers for a number of JDBC databases, but you might need to install a driver or a different driver version to connect to your preferred database. Supported databases include the following:
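
When you install your own driver, for example as a cluster library from a Maven coordinate, you can point Spark at that driver class explicitly. The following sketch assumes the PostgreSQL driver class org.postgresql.Driver; the URL, credentials, and table are placeholders.

```python
# Use an explicitly chosen JDBC driver class, e.g. after installing a newer driver JAR on the cluster.
df = (spark.read
    .format("jdbc")
    .option("driver", "org.postgresql.Driver")                  # assumed driver class name
    .option("url", "jdbc:postgresql://<host>:5432/<database>")
    .option("dbtable", "public.customers")
    .option("user", "<user>")
    .option("password", "<password>")
    .load())

df.printSchema()
```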

What data services does Databricks integrate with?

The following data services require you to configure connection settings, security credentials, and networking settings. You might need administrator or power user privileges in your AWS account or Databricks workspace. Some also require that you create a Databricks library and install it on a cluster:
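
As one illustration, reading from Amazon Redshift with the connector bundled in Databricks Runtime involves a JDBC URL, credentials, and an S3 staging location for intermediate data. This is only a sketch; the host, bucket, credential handling, and available options depend on your runtime version and network configuration.

```python
# Read from Amazon Redshift; intermediate results are staged in the S3 location given by tempdir.
df = (spark.read
    .format("redshift")
    .option("url", "jdbc:redshift://<redshift-host>:5439/<database>")  # placeholder cluster endpoint
    .option("dbtable", "public.sales")                                  # table to read
    .option("tempdir", "s3a://<bucket>/<temp-path>")                    # S3 staging location
    .option("user", "<user>")
    .option("password", "<password>")                                   # prefer secrets or IAM-based auth
    .option("forward_spark_s3_credentials", "true")                     # one of several S3 auth mechanisms
    .load())

df.limit(10).show()
```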