Databricks JDBC Driver (OSS)
Preview
This driver is in Public Preview and will be available as open source when it becomes generally available (GA).
Databricks JDBC (OSS), the latest version of the driver, enables you to connect tools such as DataGrip, DBeaver, and SQL Workbench/J to Databricks through Java Database Connectivity (JDBC), an industry-standard specification for accessing database management systems.
The driver implements the JDBC APIs and provides core functionality including OAuth, Cloud Fetch, and features such as Unity Catalog volume ingestion. It runs in native query mode, supports native parameterized queries, and can execute queries using either the Statement Execution API, which provides the query result retention feature, or Thrift.
This article provides information on installing and using the Databricks JDBC Driver (OSS). For information about the non-OSS Databricks JDBC Driver, see Databricks JDBC Driver.
Requirements
To use the Databricks JDBC Driver (OSS), the following requirements must be met:
Java Runtime Environment (JRE) 11.0 or above. CI testing is supported on JRE 11, 17, and 21.
Note
As a result of a change in JDK 16 that caused a compatibility issue with the Apache Arrow library used by the JDBC driver, runtime errors may occur when using the JDBC driver with JDK 16 or above. To prevent these errors, restart your application or driver using the following JVM command option:
--add-opens=java.base/java.nio=org.apache.arrow.memory.core,ALL-UNNAMED
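For example, assuming a runnable JAR named app.jar (a hypothetical name), the option is passed on the java command line:
java --add-opens=java.base/java.nio=org.apache.arrow.memory.core,ALL-UNNAMED -jar app.jar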
Install the driver
The Databricks JDBC Driver (OSS) is published in the Maven Repository. The latest version is 0.9.6-oss.
To install the driver, you can do any of the following:
For Maven projects, add the following dependency to the project's pom.xml file to instruct Maven to automatically download the JDBC driver with the specified version:

<dependency>
  <groupId>com.databricks</groupId>
  <artifactId>databricks-jdbc</artifactId>
  <version>0.9.6-oss</version>
  <scope>runtime</scope>
</dependency>
For Gradle projects, add the following dependency to the project’s build file to instruct Gradle to automatically download the JDBC driver with the specified version:
implementation 'com.databricks:databricks-jdbc:0.9.6-oss'
To view the dependency syntax for other project types, and to get the latest version number of the Databricks JDBC Driver (OSS), see the Maven Repository.
Configure the connection URL
To connect to your Databricks workspace using the JDBC driver, you need to specify a JDBC connection URL that includes various connection settings such as your Databricks workspace’s server hostname, the compute resource settings, and authentication credentials to connect to the workspace.
You can set these properties in the JDBC connection URL, pass them to the DriverManager.getConnection method, or use a combination of both. See the provider's documentation for how best to connect using your specific app, client, SDK, API, or SQL tool.
The JDBC connection URL must be in the following format. Properties are case insensitive.
jdbc:databricks://<server-hostname>:<port>/<schema>;[property1]=[value];[property2]=[value];...
Alternatively, specify the settings using the java.util.Properties class, or a combination of the URL and Properties:
String url = "jdbc:databricks://<server-hostname>:<port>/<schema>";
Properties properties = new java.util.Properties();
properties.put("<property1>", "<value1");
properties.put("<property2>", "<value2");
// ...
Connection conn = DriverManager.getConnection(url, properties);
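// Alternatively, embed the properties in the connection URL and pass the
// username and password directly to DriverManager.getConnection: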
String url = "jdbc:databricks://<server-hostname>:<port>/<schema>;[property1]=[value];[property2]=[value];";
Connection conn = DriverManager.getConnection(url, "token", "12345678901234667890abcdabcd");
Connection URL elements are described in the following table. For information about additional properties, including authentication properties, see the sections below. URL elements and properties are case insensitive.
| URL element or property | Description |
|---|---|
| <server-hostname> | The Databricks compute resource's server hostname value. |
| <port> | The Databricks compute resource's port value. The default value is 443. |
| <schema> | The name of the schema. Alternatively, you can set the ConnSchema property. |
| httpPath | The Databricks compute resource's HTTP path value. The connector forms the HTTP address to connect to by appending the httpPath value to the host and port specified in the connection URL. |
To get the JDBC connection URL for a Databricks cluster:
Log in to your Databricks workspace.
In the sidebar, click Compute, then click the target cluster’s name.
On the Configuration tab, expand Advanced options.
Click the JDBC/ODBC tab.
Copy the JDBC URL to use as the JDBC connection URL, or construct the URL from values in the Server hostname, Port, and HTTP Path fields.
To get the JDBC connection URL for a Databricks SQL warehouse:
Log in to your Databricks workspace.
In the sidebar, click SQL Warehouses, then click the target warehouse’s name.
Click the Connection details tab.
Copy the JDBC URL to use as the JDBC connection URL, or construct the URL from values in the Server hostname, Port, and HTTP Path fields.
Authenticate the driver
You can authenticate the JDBC driver connection using one of the following authentication mechanisms:
OAuth user-to-machine (U2M) authentication (Recommended)
OAuth machine-to-machine (M2M) authentication
Databricks personal access token
OAuth user-to-machine (U2M) authentication
The JDBC driver supports OAuth user-to-machine (U2M) authentication for real-time human sign in and consent to authenticate the target Databricks user account. This is also known as browser-based OAuth authentication.
Databricks has created the OAuth client ID databricks-sql-jdbc for customers. This is also the default OAuth client ID used in the JDBC driver. To configure OAuth U2M authentication, add the following properties to your existing JDBC connection URL or java.util.Properties object:
| Property | Value |
|---|---|
| AuthMech | 11 |
| Auth_Flow | 2 |
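For example, a minimal sketch of a U2M connection, assuming placeholder values for the server hostname and HTTP path:
String url = "jdbc:databricks://<server-hostname>:443/default;httpPath=<http-path>";
Properties properties = new java.util.Properties();
properties.put("AuthMech", "11");  // OAuth 2.0
properties.put("Auth_Flow", "2");  // user-to-machine (browser-based) flow
// Opens a browser window for sign-in when the connection is established
Connection conn = DriverManager.getConnection(url, properties);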
OAuth machine-to-machine (M2M) authentication
The JDBC driver supports OAuth machine-to-machine (M2M) authentication using a Databricks service principal. This is also known as OAuth 2.0 client credentials authentication. See Authenticate access to Databricks with a service principal using OAuth (OAuth M2M).
To configure OAuth M2M or OAuth 2.0 client credentials authentication:
Create a Databricks service principal in your Databricks workspace, and create an OAuth secret for that service principal. See Authenticate access to Databricks with a service principal using OAuth (OAuth M2M). Make a note of the service principal’s UUID or Application ID value, and the Secret value for the service principal’s OAuth secret.
Give the service principal access to your cluster or warehouse. See Compute permissions or Manage a SQL warehouse.
Add the following properties to your existing JDBC connection URL or java.util.Properties object:
| Property | Value |
|---|---|
| AuthMech | 11 |
| Auth_Flow | 1 |
| OAuth2ClientId | The service principal's UUID or Application ID value. |
| OAuth2Secret | The service principal's OAuth Secret value. |
Databricks personal access token
To authenticate your JDBC driver connection using a Databricks personal access token, add the following properties to your JDBC connection URL or java.util.Properties object:
| Property | Value |
|---|---|
| AuthMech | 3 |
| user | The value token, as a string. |
| password | Your Databricks personal access token value, as a string. |
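For example, a minimal sketch using java.util.Properties, with a placeholder token value:
String url = "jdbc:databricks://<server-hostname>:443/default;httpPath=<http-path>";
Properties properties = new java.util.Properties();
properties.put("AuthMech", "3");
properties.put("user", "token");
properties.put("password", "<personal-access-token>");
Connection conn = DriverManager.getConnection(url, properties);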
Connection properties
The following additional connection properties are supported by the JDBC driver. Properties are case insensitive.
| Property | Default value | Description |
|---|---|---|
| AuthMech | Required | The authentication mechanism, where 3 specifies a Databricks personal access token and 11 specifies OAuth 2.0 tokens. Additional properties are required for each mechanism. See Authenticate the driver. |
| Auth_Flow | 0 | The OAuth2 authentication flow for the driver connection. This property is required if AuthMech is 11. |
| SSL | 1 | Whether the connector communicates with the Spark server through an SSL-enabled socket. |
| ConnCatalog | SPARK | The name of the default catalog to use. |
| ConnSchema | default | The name of the default schema to use. This can be specified either by replacing <schema> in the URL with the name of the schema to use or by setting the ConnSchema property to the name of the schema to use. |
| ProxyAuth | 0 | If set to 1, the driver uses the proxy authentication user and password, represented by ProxyUID and ProxyPwd. |
| ProxyHost | null | A string that represents the name of the proxy host to use when UseProxy is also set to 1. |
| ProxyPort | null | An integer that represents the number of the proxy port to use when UseProxy is also set to 1. |
| ProxyUID | null | A string that represents the username to use for proxy authentication when ProxyAuth and UseProxy are also set to 1. |
| ProxyPwd | null | A string that represents the password to use for proxy authentication when ProxyAuth and UseProxy are also set to 1. |
| UseProxy | 0 | If set to 1, the driver uses the provided proxy settings (for example: ProxyAuth, ProxyHost, ProxyPort, ProxyPwd, and ProxyUID). |
| UseSystemProxy | 0 | If set to 1, the driver uses the proxy settings that have been set at the system level. Any additional proxy properties set in the connection URL override the system-level settings. |
| UseCFProxy | 0 | If set to 1, the driver uses the cloud fetch proxy settings if they are provided, otherwise the regular proxy. |
| CFProxyAuth | 0 | If set to 1, the driver uses the proxy authentication user and password, represented by CFProxyUID and CFProxyPwd. |
| CFProxyHost | null | A string that represents the name of the proxy host to use when UseCFProxy is also set to 1. |
| CFProxyPort | null | An integer that represents the number of the proxy port to use when UseCFProxy is also set to 1. |
| CFProxyUID | null | A string that represents the username to use for proxy authentication when CFProxyAuth and UseCFProxy are also set to 1. |
| CFProxyPwd | null | A string that represents the password to use for proxy authentication when CFProxyAuth and UseCFProxy are also set to 1. |
| AsyncExecPollInterval | 200 | The time in milliseconds between each poll for the asynchronous query execution status. Asynchronous refers to the fact that the RPC call used to execute a query against Spark is asynchronous. It does not mean that JDBC asynchronous operations are supported. |
| UserAgentEntry | browser | The User-Agent entry to be included in the HTTP request. This value is in the following format: [ProductName]/[ProductVersion] [Comment]. |
| UseThriftClient | 0 | Whether the JDBC driver should use the Thrift client to connect to an all-purpose cluster. The default value works for SQL warehouses. |
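For example, a sketch of a connection URL that routes driver traffic through an authenticated HTTP proxy, using placeholder proxy values:
jdbc:databricks://<server-hostname>:443/default;httpPath=<http-path>;AuthMech=3;UseProxy=1;ProxyHost=proxy.example.com;ProxyPort=8080;ProxyAuth=1;ProxyUID=<proxy-username>;ProxyPwd=<proxy-password>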
SQL configuration properties
The following SQL configuration properties are supported by the JDBC driver. These are also described in Configuration parameters. Properties are case insensitive.
| Property | Default value | Description |
|---|---|---|
| ansi_mode | TRUE | Whether to enable strict ANSI SQL behavior for certain functions and casting rules. |
| enable_photon | TRUE | Whether to enable the Photon vectorized query engine. |
| legacy_time_parser_policy | EXCEPTION | The methods used to parse and format dates and timestamps. Valid values are EXCEPTION, LEGACY, and CORRECTED. |
| max_file_partition_bytes | 128m | The maximum number of bytes to pack into a single partition when reading from file based sources. The setting can be any positive integer and optionally include a measure such as b (bytes) or k or kb (1024 bytes). |
| read_only_external_metastore | false | Controls whether an external metastore is treated as read-only. |
| statement_timeout | 172800 | Sets a SQL statement timeout between 0 and 172800 seconds. |
| timezone | UTC | Set the local timezone. Region IDs in the form area/city, such as America/Los_Angeles, or zone offsets in the format (+\|-)HH, (+\|-)HH:mm, or (+\|-)HH:mm:ss, for example -08, +01:00, or -13:33:33. Also, UTC is supported as an alias. Other short names are not recommended. |
| use_cached_result | true | Whether Databricks SQL caches and reuses results whenever possible. |
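For example, the following connection URL sketch sets a one-hour statement timeout and the UTC timezone, using placeholder host and path values:
jdbc:databricks://<server-hostname>:443/default;httpPath=<http-path>;AuthMech=3;statement_timeout=3600;timezone=UTC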
Logging properties
The following logging properties are supported by the JDBC driver. Properties are case insensitive.
| Property | Default value | Description |
|---|---|---|
| LogLevel | 4 | The logging level, which is a value 0 through 6: 0 disables all logging; 1 logs severe error events (FATAL); 2 logs error events (ERROR); 3 logs potentially harmful situations (WARNING); 4 logs general informational messages (INFO); 5 logs detailed debugging events (DEBUG); 6 logs all connector activity (TRACE). Use this property to enable or disable logging in the connector and to specify the amount of detail included in log files. |
| LogPath | To determine the default path for logs, the driver uses the value set for these system properties, in this priority order: 1. user.dir, 2. java.io.tmpdir, 3. the current directory. | The full path to the folder where the connector saves log files when logging is enabled, as a string. To ensure that the connection URL is compatible with all JDBC applications, escape the backslashes (\) in your file path by typing another backslash. If the LogPath value is invalid, the connector sends the logged information to the standard output stream (System.out). |
| LogFileSize | No maximum | The maximum allowed log file size, specified in MB. |
| LogFileCount | No maximum | The maximum number of allowed log files. |
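For example, the following connection URL sketch caps each log file at 10 MB and keeps at most 5 files, writing to a placeholder folder:
jdbc:databricks://<server-hostname>:443/default;httpPath=<http-path>;LogLevel=5;LogPath=/tmp/jdbc-logs;LogFileSize=10;LogFileCount=5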
Enable and configure logging
The JDBC driver supports the Simple Logging Facade for Java (SLF4J) and java.util.logging (JUL) frameworks. The driver uses the JUL logging framework by default.
To enable and configure logging for the JDBC driver:
Enable the logging framework that you want to use:
For SLF4J logging, set the system property -Dcom.databricks.jdbc.loggerImpl=SLF4JLOGGER and provide the SLF4J binding implementation (compatible with SLF4J version 2.0.13 and above) and the corresponding configuration file in the classpath.
For JUL logging, set the system property -Dcom.databricks.jdbc.loggerImpl=JDKLOGGER. This is the default.
Set the LogLevel property on the connection string to the desired level of information to include in log files.
Set the LogPath property on the connection string to the full path to the folder where you want to save log files.
For example, the following connection URL enables logging level 6 and saves the log files to the C:\temp folder:
jdbc:databricks://localhost:11000;LogLevel=6;LogPath=C:\\temp
Restart your JDBC application and reconnect to the server to apply the settings.
Example: Run a query using the JDBC driver
The following example shows how to use the JDBC driver to run a Databricks SQL query using a Databricks compute resource.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.Properties;
public class DatabricksJDBCExample {

  public static void main(String[] args) throws ClassNotFoundException {
    // Load the JDBC driver class
    Class.forName("com.databricks.client.jdbc.Driver");

    // Set JDBC URL properties
    String jdbcUrl = "jdbc:databricks://dbc-a1b2345c-d6e7.cloud.databricks.com:443";
    Properties connectionProperties = new Properties();
    connectionProperties.put("httpPath", "sql/protocolv1/o/123456780012345/0123-123450-z000pi22");
    connectionProperties.put("ssl", "1");

    // Set authentication properties (personal access token)
    connectionProperties.put("AuthMech", "3");
    connectionProperties.put("user", "token");
    connectionProperties.put("password", "12345678901234667890abcdabcd");

    // Set logging properties
    connectionProperties.put("logPath", "logs/myapplication.log");

    // Establish connection and execute query
    try (Connection connection = DriverManager.getConnection(jdbcUrl, connectionProperties);
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery("SELECT * FROM samples.nyctaxi.trips")) {

      // Get metadata and column names
      ResultSetMetaData metaData = resultSet.getMetaData();
      String[] columns = new String[metaData.getColumnCount()];
      for (int i = 0; i < columns.length; i++) {
        columns[i] = metaData.getColumnName(i + 1);
      }

      // Process and print the result set
      while (resultSet.next()) {
        System.out.print("Row " + resultSet.getRow() + "=[");
        for (int i = 0; i < columns.length; i++) {
          if (i != 0) {
            System.out.print(", ");
          }
          System.out.print(columns[i] + "='" + resultSet.getObject(i + 1) + "'");
        }
        System.out.println("]");
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}