Driver capability settings for the Databricks JDBC Driver (Simba)

note

This page applies to versions of the Databricks JDBC Driver below version 3. For version 3 and above, see Databricks JDBC Driver.

This page describes how to configure special and advanced driver capability settings for the Databricks JDBC Driver.

The driver provides the following special and advanced settings.

ANSI SQL-92 query support in JDBC

Legacy Spark JDBC drivers accept SQL queries in ANSI SQL-92 dialect and translate them to Databricks SQL before sending them to the server.

If your application generates Databricks SQL directly or uses non-ANSI SQL-92 syntax specific to Databricks, set UseNativeQuery=1 in your connection configuration. This setting passes SQL queries verbatim to Databricks without translation.
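
As an illustration, here is a minimal Java sketch that passes a statement through unmodified with UseNativeQuery=1. The hostname, HTTP path, and token are placeholders to fill in for your workspace; AuthMech=3 with UID=token is the personal access token authentication scheme:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NativeQueryExample {
    public static void main(String[] args) throws Exception {
        // UseNativeQuery=1 sends the SQL verbatim, skipping ANSI SQL-92 translation.
        String url = "jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;"
            + "AuthMech=3;UID=token;PWD=<personal-access-token>;UseNativeQuery=1";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // A Databricks SQL statement, sent to the server as-is.
             ResultSet rs = stmt.executeQuery("SELECT current_catalog()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}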

Default catalog and schema

To specify the default catalog and schema, add ConnCatalog=<catalog-name>;ConnSchema=<schema-name> to the JDBC connection URL.
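
For example, the following URL sets the defaults to an illustrative main catalog and default schema, so unqualified table names in your queries resolve against main.default:

jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;ConnCatalog=main;ConnSchema=default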

Query tags for tracking

Preview

This feature is in Private Preview. To request access, contact your account team.

Attach key-value tags to your SQL queries for tracking and analytics purposes. Query tags appear in the system.query.history table for query identification and analysis.

To add query tags to your connection, include the ssp_query_tags parameter in your JDBC connection URL:

jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;ssp_query_tags=key1:value1,key2:value2

Define query tags as comma-separated key-value pairs, separating each key from its value with a colon, for example key1:value1,key2:value2.
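
As a sketch, you can assemble the tag string in application code before connecting. The tag names here (team, job) and the URL placeholders are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryTagsExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical tags for identifying this workload in system.query.history.
        Map<String, String> tags = Map.of("team", "analytics", "job", "nightly-report");
        String tagParam = tags.entrySet().stream()
            .map(e -> e.getKey() + ":" + e.getValue())
            .collect(Collectors.joining(","));
        String url = "jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;"
            + "AuthMech=3;UID=token;PWD=<personal-access-token>;"
            + "ssp_query_tags=" + tagParam;
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Queries run on this connection carry the tags defined above.
            stmt.execute("SELECT 1");
        }
    }
}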

Extract large query results in JDBC

To achieve the best performance when you extract large query results, use the latest version of the JDBC driver, which includes the following optimizations.

Arrow serialization in JDBC

JDBC driver version 2.6.16 and above supports an optimized query results serialization format that uses Apache Arrow.

Cloud Fetch in JDBC

JDBC driver version 2.6.19 and above supports Cloud Fetch, a capability that fetches query results through the cloud storage configured in your Databricks deployment.

When you run a query, Databricks uploads the results to an internal DBFS storage location as Arrow-serialized files of up to 20 MB. After the query completes, the driver sends fetch requests, and Databricks returns presigned URLs to the uploaded files. The driver then uses these URLs to download results directly from DBFS.

Cloud Fetch only applies to query results larger than 1 MB. The driver retrieves smaller results directly from Databricks.
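
Cloud Fetch requires no changes to application code. The following hedged Java sketch reads a large result set through the standard JDBC API; it assumes the samples catalog available in many workspaces and the same URL placeholders as above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LargeResultExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;"
            + "AuthMech=3;UID=token;PWD=<personal-access-token>";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // For results over 1 MB, the driver downloads Arrow-serialized files
             // from presigned URLs behind the scenes; the ResultSet API is unchanged.
             ResultSet rs = stmt.executeQuery("SELECT * FROM samples.tpch.lineitem")) {
            long rows = 0;
            while (rs.next()) {
                rows++;
            }
            System.out.println("Fetched " + rows + " rows");
        }
    }
}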

Databricks automatically garbage collects accumulated files, marking them for deletion after 24 hours and permanently deleting them after an additional 24 hours.

Cloud Fetch requires an E2 workspace and an Amazon S3 bucket without versioning enabled. If you have versioning enabled, see Advanced configurations to enable Cloud Fetch.

To learn more about the Cloud Fetch architecture, see How We Achieved High-bandwidth Connectivity With BI Tools.

Advanced configurations

If you enable S3 bucket versioning on your DBFS root, Databricks can't garbage collect older versions of uploaded query results. To use Cloud Fetch with a versioned bucket, first set an S3 lifecycle policy that purges older versions of uploaded query results.

To set a lifecycle policy in the AWS console (a programmatic sketch follows these steps):

  1. In the AWS console, go to the S3 service.
  2. Click on the S3 bucket that you use for your workspace's root storage.
  3. Open the Management tab and click Create lifecycle rule.
  4. Enter a name in the Lifecycle rule name field.
  5. Keep the prefix field empty.
  6. Under Lifecycle rule actions, select Permanently delete noncurrent versions of objects.
  7. Set a value under Days after objects become noncurrent. Databricks recommends using 1 day.
  8. Click Create rule.
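
If you'd rather script the rule, here is a minimal sketch using the AWS SDK for Java v2 that mirrors the console steps above; the bucket name is a placeholder for your workspace's root storage bucket:

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketLifecycleConfiguration;
import software.amazon.awssdk.services.s3.model.ExpirationStatus;
import software.amazon.awssdk.services.s3.model.LifecycleRule;
import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter;
import software.amazon.awssdk.services.s3.model.NoncurrentVersionExpiration;
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationRequest;

public class LifecyclePolicyExample {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            LifecycleRule rule = LifecycleRule.builder()
                .id("purge-noncurrent-query-results")
                // An empty prefix applies the rule to the whole bucket.
                .filter(LifecycleRuleFilter.builder().prefix("").build())
                .status(ExpirationStatus.ENABLED)
                // Permanently delete versions 1 day after they become noncurrent.
                .noncurrentVersionExpiration(
                    NoncurrentVersionExpiration.builder().noncurrentDays(1).build())
                .build();
            s3.putBucketLifecycleConfiguration(
                PutBucketLifecycleConfigurationRequest.builder()
                    .bucket("<workspace-root-bucket>")
                    .lifecycleConfiguration(
                        BucketLifecycleConfiguration.builder().rules(rule).build())
                    .build());
        }
    }
}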

Enable logging

To enable logging in the JDBC driver, set the LogLevel property to a value between 1 (severe events only) and 6 (all driver activity). Set the LogPath property to the full path of the folder where you want to save log files.
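
For example, the following connection URL (placeholders as above) logs all driver activity to a hypothetical /tmp/databricks-jdbc folder:

jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;LogLevel=6;LogPath=/tmp/databricks-jdbc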

For more information, see the Configuring Logging section in the Databricks JDBC Driver Guide.