Driver capability settings for the Databricks ODBC Driver (Simba)

This page describes how to configure the special and advanced driver capability settings that the Databricks ODBC Driver provides.

Set the initial schema in ODBC

The ODBC driver lets you specify the initial schema by setting Schema=<schema-name> in your connection configuration. This is equivalent to running USE <schema-name>.
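
For example, here is a minimal pyodbc sketch of a connection that sets the initial schema. The driver name, host, HTTP path, and token shown are placeholder assumptions, not values from this page; substitute your workspace's details.

    import pyodbc

    # Placeholder connection values; replace each <...> with your workspace's details.
    conn = pyodbc.connect(
        "Driver={Simba Spark ODBC Driver};"
        "Host=<workspace-hostname>;Port=443;HTTPPath=<http-path>;"
        "SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>;"
        "Schema=<schema-name>",  # equivalent to running USE <schema-name>
        autocommit=True,
    )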

Query tags for tracking

Preview

This feature is in Private Preview. To request access, contact your account team.

Attach key-value tags to your SQL queries for tracking and analytics purposes. Query tags appear in the system.query.history table for query identification and analysis.

To add query tags to your connection, include the ssp_query_tags parameter in your ODBC connection configuration.

Define query tags as comma-separated key-value pairs, where each key and value is separated by a colon. For example, ssp_query_tags=team:engineering,env:prod.
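
As an illustrative sketch of how this looks in a pyodbc connection string (the base connection values are the same placeholders as in the initial-schema example above):

    import pyodbc

    # Placeholder base connection string; see the initial-schema example above.
    base = (
        "Driver={Simba Spark ODBC Driver};"
        "Host=<workspace-hostname>;Port=443;HTTPPath=<http-path>;"
        "SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>"
    )

    # key:value pairs separated by commas; the tags apply to queries on this connection.
    conn = pyodbc.connect(base + ";ssp_query_tags=team:engineering,env:prod",
                          autocommit=True)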

ANSI SQL-92 query support in ODBC

Legacy Spark ODBC drivers accept SQL queries in ANSI SQL-92 dialect and translate them to Databricks SQL before sending them to the server.

If your application generates Databricks SQL directly or uses non-ANSI SQL-92 syntax specific to Databricks, set UseNativeQuery=1 in your connection configuration. This setting passes SQL queries verbatim to Databricks without translation.
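
To illustrate, a hedged pyodbc sketch using the same placeholder connection values as the earlier examples:

    import pyodbc

    base = (
        "Driver={Simba Spark ODBC Driver};"
        "Host=<workspace-hostname>;Port=443;HTTPPath=<http-path>;"
        "SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>"
    )

    # UseNativeQuery=1 passes SQL to Databricks verbatim, skipping the
    # driver's ANSI SQL-92 to Databricks SQL translation step.
    conn = pyodbc.connect(base + ";UseNativeQuery=1", autocommit=True)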

Extract large query results in ODBC

To achieve the best performance when you extract large query results, use the latest version of the ODBC driver, which includes the following optimizations.

Arrow serialization in ODBC

ODBC driver version 2.6.15 and above supports an optimized query results serialization format that uses Apache Arrow.

Cloud Fetch in ODBC

ODBC driver version 2.6.17 and above supports Cloud Fetch, a capability that fetches query results through the cloud storage configured in your Databricks deployment.

When you run a query, Databricks uploads the results to an internal DBFS storage location as Arrow-serialized files of up to 20 MB. After the query completes, the driver sends fetch requests, and Databricks returns presigned URLs to the uploaded files. The driver then uses these URLs to download results directly from DBFS.

Cloud Fetch only applies to query results larger than 1 MB. The driver retrieves smaller results directly from Databricks.

Databricks automatically garbage collects accumulated files, marking them for deletion after 24 hours and permanently deleting them after an additional 24 hours.

Cloud Fetch requires an E2 workspace and an Amazon S3 bucket without versioning enabled. If you have versioning enabled, see Advanced configurations to enable Cloud Fetch.

To learn more about the Cloud Fetch architecture, see How We Achieved High-bandwidth Connectivity With BI Tools.

Advanced configurations

If you enable S3 bucket versioning on your DBFS root, Databricks can't garbage collect older versions of uploaded query results. To enable Cloud Fetch in this case, first set an S3 lifecycle policy that purges those older versions.

To set a lifecycle policy in the AWS console (a scripted alternative follows these steps):

  1. In the AWS console, go to the S3 service.
  2. Click on the S3 bucket that you use for your workspace's root storage.
  3. Open the Management tab and click Create lifecycle rule.
  4. Enter a Lifecycle rule name.
  5. Leave the prefix field empty.
  6. Under Lifecycle rule actions, select Permanently delete noncurrent versions of objects.
  7. Set a value for Days after objects become noncurrent. Databricks recommends using 1 day.
  8. Click Create rule.
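
As a scripted alternative to the console steps above, here is a hedged boto3 sketch of the same rule. The bucket name is a placeholder, and note that put_bucket_lifecycle_configuration replaces any lifecycle rules already set on the bucket.

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="<workspace-root-bucket>",  # placeholder: your workspace's root storage bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "purge-noncurrent-query-results",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # empty prefix: the rule covers the whole bucket
                    # Permanently delete noncurrent versions one day after they
                    # become noncurrent, per the recommendation above.
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                }
            ]
        },
    )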

Enable logging

To enable logging in the ODBC driver, set the LogLevel property to a value between 1 (severe events only) and 6 (all driver activity). Set the LogPath property to the full path of the folder where you want to save log files.
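
For example, assuming your driver version accepts these as connection properties, a hedged pyodbc sketch (the DSN name and log folder are placeholders; the folder must already exist and be writable):

    import pyodbc

    # LogLevel=6 logs all driver activity; LogLevel=1 logs severe events only.
    conn = pyodbc.connect(
        "DSN=<your-dsn>;LogLevel=6;LogPath=/tmp/odbc-logs",
        autocommit=True,
    )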

For more information, see the Configuring Logging section in the Databricks ODBC Driver Guide.