Authentication for Databricks automation - overview

In Databricks, authentication refers to verifying a Databricks identity (such as a user, service principal, or group). Databricks uses credentials (such as an access token or a username and password) to verify the identity.

After Databricks verifies the caller’s identity, Databricks then uses a process called authorization to determine whether the verified identity has sufficient access permissions to perform the specified action on the resource at the given location. This article includes details only about authentication. It does not include details about authorization or access permissions; see Authentication and access control.

When a tool makes an automation or API request, it includes credentials that authenticate an identity with Databricks. This article describes typical ways to create, store, and pass credentials and related information that Databricks needs to authenticate and authorize requests. To learn which credential types, related information, and storage mechanisms are supported by your tools, SDKs, scripts, and apps, see Supported authentication types by Databricks tool or SDK or your provider’s documentation.

Common tasks for Databricks authentication

Use the following instructions to complete common tasks for Databricks authentication.

  • Create a Databricks user that you can use for authenticating at the Databricks account level: see Manage users in your account.

  • Create a Databricks user that you can use for authenticating with a specific Databricks workspace: see Manage users in your workspace.

  • Create a Databricks personal access token for a Databricks user (this token can be used only for authenticating with its associated Databricks workspace): see Databricks personal access tokens for workspace users.

  • Create a Databricks service principal, and then create a Databricks personal access token for that service principal (this token can be used only for authenticating with its associated Databricks workspace): see Manage service principals and Databricks personal access tokens for service principals.

  • Create a Databricks configuration profile: see Databricks configuration profiles.

  • Create a Databricks group, and add Databricks users and Databricks service principals to that group, for more robust authorization: see Manage account groups using the account console and Manage account groups using the workspace admin settings page.

Supported Databricks authentication types

Databricks provides several ways to authenticate Databricks users and service principals, as follows:

OAuth machine-to-machine (M2M) authentication

  • OAuth M2M authentication uses Databricks service principals for authentication.

  • OAuth M2M authentication uses short-lived (one hour) OAuth access tokens for authentication credentials.

  • Expired OAuth access tokens can be automatically refreshed by participating Databricks tools and SDKs. See Supported authentication types by Databricks tool or SDK and Databricks client unified authentication.

  • Databricks recommends that you use OAuth M2M authentication for unattended authentication scenarios. These scenarios include fully automated and CI/CD workflows, where you cannot use your web browser to authenticate with Databricks in real time.

  • For additional technical details, see OAuth machine-to-machine (M2M) authentication.

OAuth user-to-machine (U2M) authentication

  • OAuth U2M authentication uses Databricks users for authentication.

  • OAuth U2M authentication uses short-lived (one hour) OAuth access tokens for authentication credentials.

  • Expired OAuth access tokens can be automatically refreshed by participating Databricks tools and SDKs. See Supported authentication types by Databricks tool or SDK and Databricks client unified authentication.

  • Databricks recommends that you use OAuth U2M authentication for attended authentication scenarios, where you use your web browser to authenticate with Databricks in real time.

  • For additional technical details, see OAuth user-to-machine (U2M) authentication.

Databricks personal access token authentication

  • Databricks personal access token authentication uses Databricks users or service principals for authentication.

  • Databricks personal access token authentication uses short-lived or long-lived strings for authentication credentials. These access tokens can be set to expire in one day or less, or they can be set to never expire.

  • Expired Databricks personal access tokens cannot be refreshed.

  • Databricks recommends that you use OAuth M2M authentication or OAuth U2M authentication, if your target Databricks tool or SDK supports it, instead of Databricks personal access token authentication. This is because OAuth access tokens are more secure than Databricks personal access tokens.

  • For additional technical details, see Databricks personal access token authentication.

Basic authentication (not recommended in production)

  • Basic authentication uses Databricks users for authentication.

  • Basic authentication uses a username and password for authentication credentials.

  • Databricks does not recommend basic authentication for authentication credentials in production, as usernames and passwords are less secure than access tokens.

  • For additional technical details, see Basic authentication (legacy).
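
To show how these authentication types look in code, here is a minimal sketch using the Databricks SDK for Python (described later in this article). The host, token, and secret values are placeholders; the constructor arguments mirror the Config fields listed in Environment variables and fields for client unified authentication:

from databricks.sdk import WorkspaceClient

# OAuth M2M: authenticate as a service principal with a client ID and secret.
w = WorkspaceClient(
    host="https://dbc-a1b2345c-d6e7.cloud.databricks.com",  # placeholder workspace URL
    client_id="<service-principal-client-id>",
    client_secret="<service-principal-secret>",
)

# Databricks personal access token authentication.
w = WorkspaceClient(
    host="https://dbc-a1b2345c-d6e7.cloud.databricks.com",
    token="<personal-access-token>",
)

# Basic authentication (legacy; not recommended in production).
w = WorkspaceClient(
    host="https://dbc-a1b2345c-d6e7.cloud.databricks.com",
    username="someone@example.com",
    password="<password>",
)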

Supported authentication types by Databricks tool or SDK

Databricks tools and SDKs that work with one or more supported Databricks authentication types include the following:

Databricks CLI

For specific Databricks CLI authentication documentation, including how to set up and use Databricks configuration profiles to switch among multiple related authentication settings, see:

For additional technical details about the Databricks CLI, see What is the Databricks CLI?.

Databricks Terraform provider

For specific Databricks Terraform provider authentication documentation, including how to store and use credentials through environment variables, Databricks configuration profiles, .tfvars files, or secret stores such as HashiCorp Vault, AWS Secrets Manager, or AWS Systems Manager Parameter Store, see Authentication.

For additional technical details about the Databricks Terraform provider, see Databricks Terraform provider.

Databricks Connect

For specific Databricks Connect authentication documentation, see:

For additional technical details about Databricks Connect, see What is Databricks Connect?.

Databricks extension for Visual Studio Code

For specific Databricks extension for Visual Studio Code authentication documentation, see Authentication setup for the Databricks extension for Visual Studio Code.

For additional technical details about the Databricks extension for Visual Studio Code, see What is the Databricks extension for Visual Studio Code?.

Databricks SDK for Python

For specific Databricks SDK for Python authentication documentation, see:

For additional technical details about the Databricks SDK for Python, see Databricks SDK for Python.

Databricks SDK for Java

For specific Databricks SDK for Java authentication documentation, see:

For additional technical details about the Databricks SDK for Java, see Databricks SDK for Java.

Databricks SDK for Go

For specific Databricks SDK for Go authentication documentation, see:

For additional technical details about the Databricks SDK for Go, see Databricks SDK for Go.

Databricks Asset Bundles

For additional technical details about Databricks Asset Bundles, see What are Databricks Asset Bundles?.

Databricks Driver for SQLTools for Visual Studio Code

Basic authentication (legacy; not recommended in production) is not yet supported.

For additional technical details about the Databricks Driver for SQLTools for Visual Studio Code, see Databricks Driver for SQLTools for Visual Studio Code.

Databricks SQL Connector for Python

For additional technical details about the Databricks SQL Connector for Python, see Databricks SQL Connector for Python.

Databricks SQL Driver for Node.js

For additional technical details about the Databricks SQL Driver for Node.js, see Databricks SQL Driver for Node.js.

Databricks SQL Driver for Go

Basic authentication (legacy; not recommended in production) is not yet supported.

For additional technical details about the Databricks SQL Driver for Go, see Databricks SQL Driver for Go.

Other Databricks tools and SDKs

See the tool’s or SDK’s documentation:

Databricks account and workspace REST APIs

Databricks organizes its REST API into two categories: account APIs and workspace APIs. Each category requires a different set of information to authenticate the target Databricks identity. In addition, each supported Databricks authentication type requires additional information that uniquely identifies the target Databricks identity.

For instance, to authenticate a Databricks identity for calling Databricks account-level API operations, you must provide:

  • The target Databricks account console URL, which is typically https://accounts.cloud.databricks.com.

  • The target Databricks account ID. See Locate your account ID.

  • Information that uniquely identifies the target Databricks identity for the target Databricks authentication type. For the specific information to provide, see the section later in this article for that authentication type.

To authenticate a Databricks identity for calling Databricks workspace-level API operations, you must provide:

  • The target Databricks workspace URL, for example https://dbc-a1b2345c-d6e7.cloud.databricks.com.

  • Information that uniquely identifies the target Databricks identity for the target Databricks authentication type. For the specific information to provide, see the section later in this article for that authentication type.
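
For illustration, here is a minimal sketch using the Databricks SDK for Python; the two categories differ only in the client class and the information you supply. The URLs, account ID, and credentials are placeholders:

from databricks.sdk import AccountClient, WorkspaceClient

# Account-level APIs: account console URL, account ID, and credentials.
a = AccountClient(
    host="https://accounts.cloud.databricks.com",
    account_id="<account-id>",
    username="someone@example.com",
    password="<password>",
)
users = list(a.users.list())

# Workspace-level APIs: workspace URL and credentials.
w = WorkspaceClient(
    host="https://dbc-a1b2345c-d6e7.cloud.databricks.com",
    token="<personal-access-token>",
)
clusters = list(w.clusters.list())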

Databricks client unified authentication

Databricks provides a consolidated and consistent architectural and programmatic approach to authentication, known as Databricks client unified authentication. This approach helps make setting up and automating authentication with Databricks more centralized and predictable. It enables you to configure Databricks authentication once and then use that configuration across multiple Databricks tools and SDKs without further authentication configuration changes.

Participating Databricks tools and SDKs include:

All participating tools and SDKs accept special environment variables and Databricks configuration profiles for authentication. The Databricks Terraform provider and the Databricks SDKs for Python, Java, and Go also accept direct configuration of authentication settings within code. For details, see Supported authentication types by Databricks tool or SDK or the tool’s or SDK’s documentation.
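
For example, a script that relies only on environment variables might look like the following sketch with the Databricks SDK for Python (values are placeholders). Because the SDK implements client unified authentication, the client constructor needs no arguments:

import os
from databricks.sdk import WorkspaceClient

# Typically these are set in the shell or CI/CD system rather than in code.
os.environ["DATABRICKS_HOST"] = "https://dbc-a1b2345c-d6e7.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = "<personal-access-token>"

# No arguments: the SDK discovers the host and token from the environment.
w = WorkspaceClient()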

Default order of evaluation for client unified authentication methods and credentials

Whenever a participating tool or SDK needs to authenticate with Databricks, the tool or SDK tries the following types of authentication in the following order by default. When the tool or SDK succeeds with the type of authentication that it tries, the tool or SDK stops trying to authenticate with the remaining authentication types. To force an SDK to authenticate with a specific authentication type, set the Config API’s Databricks authentication type field.

  1. Databricks personal access token authentication

  2. Basic authentication (legacy)

  3. OAuth machine-to-machine (M2M) authentication

  4. OAuth user-to-machine (U2M) authentication

For each authentication type that the participating tool or SDK tries, the tool or SDK tries to find authentication credentials in the following locations, in the following order. When the tool or SDK succeeds in finding authentication credentials that can be used, the tool or SDK stops trying to find authentication credentials in the remaining locations.

  1. Credential-related Config API fields (for SDKs). To set Config fields, see Supported authentication types by Databricks tool or SDK or the SDK’s reference documentation.

  2. Credential-related environment variables. To set environment variables, see Supported authentication types by Databricks tool or SDK and your operating system’s documentation.

  3. Credential-related fields in the DEFAULT configuration profile within the .databrickscfg file. To set configuration profile fields, see Supported authentication types by Databricks tool or SDK and Databricks configuration profiles.

To provide maximum portability for your code, Databricks recommends that you create a custom configuration profile within the .databrickscfg file, add the required fields for your target Databricks authentication type to the custom configuration profile, and then set the DATABRICKS_CONFIG_PROFILE environment variable to the name of the custom configuration profile. For more information, see Supported authentication types by Databricks tool or SDK.
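
For example, with a custom configuration profile named DEVELOPMENT in the .databrickscfg file, either of the following approaches works with the Databricks SDK for Python (a minimal sketch; the profile name is a placeholder):

import os
from databricks.sdk import WorkspaceClient

# Option 1: set the environment variable, then construct the client with no arguments.
os.environ["DATABRICKS_CONFIG_PROFILE"] = "DEVELOPMENT"
w = WorkspaceClient()

# Option 2: select the profile directly in code.
w = WorkspaceClient(profile="DEVELOPMENT")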

Environment variables and fields for client unified authentication

The following tables list the names and descriptions of the supported environment variables and fields for Databricks client unified authentication.

General host, token, and account ID environment variables and fields

Databricks host

  • Description: (String) The Databricks host URL for either the Databricks workspace endpoint or the Databricks accounts endpoint.

  • Environment variable: DATABRICKS_HOST

  • .databrickscfg field, Terraform field: host

  • Config field: host (Python), setHost (Java), Host (Go)

Databricks token

  • Description: (String) The Databricks personal access token.

  • Environment variable: DATABRICKS_TOKEN

  • .databrickscfg field, Terraform field: token

  • Config field: token (Python), setToken (Java), Token (Go)

Databricks account ID

  • Description: (String) The Databricks account ID for the Databricks account endpoint. Only has effect when the Databricks host is also set to https://accounts.cloud.databricks.com.

  • Environment variable: DATABRICKS_ACCOUNT_ID

  • .databrickscfg field, Terraform field: account_id

  • Config field: account_id (Python), setAccountID (Java), AccountID (Go)

AWS-specific environment variables and fields

Databricks username

  • Description: (String) The Databricks user’s username.

  • Environment variable: DATABRICKS_USERNAME

  • .databrickscfg field, Terraform field: username

  • Config field: username (Python), setUsername (Java), Username (Go)

Databricks password

  • Description: (String) The Databricks user’s password.

  • Environment variable: DATABRICKS_PASSWORD

  • .databrickscfg field, Terraform field: password

  • Config field: password (Python), setPassword (Java), Password (Go)

Service principal client ID

  • Description: (String) The Databricks service principal’s client ID.

  • Environment variable: DATABRICKS_CLIENT_ID

  • .databrickscfg field, Terraform field: client_id

  • Config field: client_id (Python), setClientId (Java), ClientId (Go)

Service principal secret

  • Description: (String) The Databricks service principal’s secret.

  • Environment variable: DATABRICKS_CLIENT_SECRET

  • .databrickscfg field, Terraform field: client_secret

  • Config field: client_secret (Python), setClientSecret (Java), ClientSecret (Go)

.databrickscfg-specific environment variables and fields

Use these environment variables or fields to specify non-default settings for .databrickscfg. See also Databricks configuration profiles.

.databrickscfg file path

  • Description: (String) A non-default path to the .databrickscfg file.

  • Environment variable: DATABRICKS_CONFIG_FILE

  • Terraform field: config_file

  • Config field: config_file (Python), setConfigFile (Java), ConfigFile (Go)

.databrickscfg default profile

  • Description: (String) The default named profile to use, other than DEFAULT.

  • Environment variable: DATABRICKS_CONFIG_PROFILE

  • Terraform field: profile

  • Config field: profile (Python), setProfile (Java), Profile (Go)

Authentication type field

Use this environment variable or field to force an SDK to use a specific type of Databricks authentication.

Databricks authentication type

  • Description: (String) When multiple authentication attributes are available in the environment, use the authentication type specified by this argument.

  • Terraform field: auth_type

  • Config field: auth_type (Python), setAuthType (Java), AuthType (Go)

Supported Databricks authentication type field values include:
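
For example, to force an SDK to use personal access token authentication even when other credentials (such as a client ID and secret) are present in the environment, set this field to the corresponding value. The following is a minimal sketch with the Databricks SDK for Python; the value pat is assumed from the SDK’s reference documentation:

from databricks.sdk import WorkspaceClient

# Forces personal access token authentication; the SDK skips the other
# authentication types in the default order of evaluation.
w = WorkspaceClient(auth_type="pat")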

Databricks configuration profiles

A Databricks configuration profile (sometimes referred to as a configuration profile, a config profile, or simply a profile) contains settings and other information that Databricks needs to authenticate. Databricks configuration profiles are stored in a Databricks configuration profiles file for your tools, SDKs, scripts, and apps to use. To learn whether Databricks configuration profiles are supported by your tools, SDKs, scripts, and apps, see your provider’s documentation. All participating tools and SDKs that implement Databricks client unified authentication support Databricks configuration profiles. For more information, see Supported authentication types by Databricks tool or SDK.

To create a Databricks configuration profiles file:

  1. Use your favorite text editor to create a file named .databrickscfg in your home folder (~ on Unix, Linux, or macOS; %USERPROFILE% on Windows), if you do not already have one. Do not forget the dot (.) at the beginning of the file name. Add the following contents to this file:

    [<some-unique-name-for-this-configuration-profile>]
    <field-name> = <field-value>
    
  2. In the preceding contents, replace the following values, and then save the file:

    • <some-unique-name-for-this-configuration-profile> with a unique name for the configuration profile, such as DEFAULT, DEVELOPMENT, PRODUCTION, or similar. You can have multiple configuration profiles in the same .databrickscfg file, but each configuration profile must have a unique name within this file.

    • <field-name> and <field-value> with the name and a value for one of the required fields for the target Databricks authentication type. For the specific information to provide, see the section earlier in this article for that authentication type.

    • Add a <field-name> and <field-value> pair for each of the additional required fields for the target Databricks authentication type.

For example, for Databricks personal access token authentication, the .databrickscfg file might look like this:

[DEFAULT]
host  = https://dbc-a1b2345c-d6e7.cloud.databricks.com
token = dapi123...

To create additional configuration profiles, specify different profile names within the same .databrickscfg file. For example, to specify separate Databricks workspaces, each with their own Databricks personal access token:

[DEFAULT]
host  = https://dbc-a1b2345c-d6e7.cloud.databricks.com
token = dapi123...

[DEVELOPMENT]
host  = https://dbc-b2c3456d-e7f8.cloud.databricks.com
token = dapi234...

You can also specify different profile names within the .databrickscfg file for Databricks accounts and different Databricks authentication types, for example:

[DEFAULT]
host  = https://dbc-a1b2345c-d6e7.cloud.databricks.com
token = dapi123...

[ACCOUNT]
host       = https://accounts.cloud.databricks.com
username   = someone@example.com
password   = MyP25...
account_id = ab0cd1...
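
Tools and SDKs that support configuration profiles can then select a profile by name. For example, with the Databricks SDK for Python (a minimal sketch using the profiles defined above):

from databricks.sdk import AccountClient, WorkspaceClient

# Workspace-level client using the DEFAULT profile's host and token.
w = WorkspaceClient(profile="DEFAULT")

# Account-level client using the ACCOUNT profile's host, credentials, and account ID.
a = AccountClient(profile="ACCOUNT")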

ODBC DSNs

In ODBC, a data source name (DSN) is a symbolic name that tools, SDKs, scripts, and apps use to request a connection to an ODBC data source. A DSN stores connection details such as the path to an ODBC driver, networking details, authentication credentials, and database details. To learn whether ODBC DSNs are supported by your tools, scripts, and apps, see your provider’s documentation.

To install and configure the Databricks ODBC Driver and create an ODBC DSN for Databricks, see Databricks ODBC Driver.
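
For orientation, a DSN entry in odbc.ini might look like the following sketch. This assumes personal access token authentication, where AuthMech=3 sets the user name to the literal string token and the password to the token value; the driver path, host, and HTTP path are placeholders, and the driver documentation is authoritative:

[Databricks]
Driver          = <path-to-databricks-odbc-driver>
Host            = dbc-a1b2345c-d6e7.cloud.databricks.com
Port            = 443
HTTPPath        = /sql/1.0/warehouses/<warehouse-id>
SSL             = 1
ThriftTransport = 2
AuthMech        = 3
UID             = token
PWD             = <personal-access-token>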

JDBC connection URLs

In JDBC, a connection URL is a symbolic URL that tools, SDKs, scripts, and apps use to request a connection to a JDBC data source. A connection URL stores connection details such as networking details, authentication credentials, database details, and JDBC driver capabilities. To learn whether JDBC connection URLs are supported by your tools, SDKs, scripts, and apps, see your provider’s documentation.

To install and configure the Databricks JDBC Driver and create a JDBC connection URL for Databricks, see Databricks JDBC Driver.
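
For orientation, a JDBC connection URL might look like the following sketch. As with ODBC, this assumes personal access token authentication (AuthMech=3 with UID set to the literal string token); the host and HTTP path are placeholders, and the driver documentation is authoritative:

jdbc:databricks://dbc-a1b2345c-d6e7.cloud.databricks.com:443;httpPath=/sql/1.0/warehouses/<warehouse-id>;AuthMech=3;UID=token;PWD=<personal-access-token>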