Compute configuration for Databricks Connect

Note

This article covers Databricks Connect for Databricks Runtime 13.3 LTS and above.

In this article, you configure properties to establish a connection between Databricks Connect and your Databricks cluster or serverless compute. This information applies to the Python and Scala versions of Databricks Connect unless stated otherwise.

Databricks Connect enables you to connect popular IDEs such as Visual Studio Code, PyCharm, RStudio Desktop, IntelliJ IDEA, notebook servers, and other custom applications to Databricks clusters. See What is Databricks Connect?.

Requirements

To configure a connection to Databricks compute, you must have:

Setup

Before you begin, you need the following:

Configure a connection to a cluster

There are multiple ways to configure the connection to your cluster. Databricks Connect searches for configuration properties in the following order, and uses the first configuration it finds. For advanced configuration information, see Advanced usage of Databricks Connect for Python.

  1. The DatabricksSession class’s remote() method

  2. A Databricks configuration profile

  3. The DATABRICKS_CONFIG_PROFILE environment variable

  4. An environment variable for each configuration property

  5. A Databricks configuration profile named DEFAULT

The DatabricksSession class’s remote() method

For this option, which applies to Databricks personal access token authentication only, specify the workspace instance name, the Databricks personal access token, and the ID of the cluster.

You can initialize the DatabricksSession class in several ways:

  • Set the host, token, and cluster_id fields in DatabricksSession.builder.remote().

  • Use the Databricks SDK’s Config class.

  • Specify a Databricks configuration profile along with the cluster_id field.

Instead of specifying these connection properties in your code, Databricks recommends configuring properties through environment variables or configuration files, as described throughout this section. The following code examples assume that you provide some implementation of the proposed retrieve_* functions to get the necessary properties from the user or from some other configuration store, such as AWS Systems Manager Parameter Store.
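For example, a minimal sketch of these helpers might read the values from environment variables. The variable names here are illustrative, and any configuration store works equally well:

import os

# Hypothetical helpers; replace the environment variable names with whatever
# configuration store your application actually uses.
def retrieve_workspace_instance_name() -> str:
    # For example: "my-workspace.cloud.databricks.com" (no https:// scheme).
    return os.environ["MY_DATABRICKS_WORKSPACE_INSTANCE"]

def retrieve_token() -> str:
    return os.environ["MY_DATABRICKS_TOKEN"]

def retrieve_cluster_id() -> str:
    return os.environ["MY_DATABRICKS_CLUSTER_ID"]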

The code for each of these approaches is as follows:

Python:

# Set the host, token, and cluster_id fields in DatabricksSession.builder.remote().
# If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
# cluster's ID, you do not also need to set the cluster_id field here.
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.remote(
    host       = f"https://{retrieve_workspace_instance_name()}",
    token      = retrieve_token(),
    cluster_id = retrieve_cluster_id()
).getOrCreate()

Scala:

// Set the host, token, and clusterId fields in DatabricksSession.builder().
// If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
// cluster's ID, you do not also need to set the clusterId field here.
import com.databricks.connect.DatabricksSession

val spark = DatabricksSession.builder()
    .host(retrieveWorkspaceInstanceName())
    .token(retrieveToken())
    .clusterId(retrieveClusterId())
    .getOrCreate()

Python:

# Use the Databricks SDK's Config class.
# If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
# cluster's ID, you do not also need to set the cluster_id field here.
from databricks.connect import DatabricksSession
from databricks.sdk.core import Config

config = Config(
    host       = f"https://{retrieve_workspace_instance_name()}",
    token      = retrieve_token(),
    cluster_id = retrieve_cluster_id()
)

spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()

Scala:

// Use the Databricks SDK's DatabricksConfig class.
// If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
// cluster's ID, you do not also need to set the clusterId field here.
import com.databricks.connect.DatabricksSession
import com.databricks.sdk.core.DatabricksConfig

val config = new DatabricksConfig()
    .setHost(retrieveWorkspaceInstanceName())
    .setToken(retrieveToken())
val spark = DatabricksSession.builder()
    .sdkConfig(config)
    .clusterId(retrieveClusterId())
    .getOrCreate()

Python:

# Specify a Databricks configuration profile along with the cluster_id field.
# If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
# cluster's ID, you do not also need to set the cluster_id field here.
from databricks.connect import DatabricksSession
from databricks.sdk.core import Config

config = Config(
    profile    = "<profile-name>",
    cluster_id = retrieve_cluster_id()
)

spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()

Scala:

// Specify a Databricks configuration profile along with the clusterId field.
// If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
// cluster's ID, you do not also need to set the clusterId field here.
import com.databricks.connect.DatabricksSession
import com.databricks.sdk.core.DatabricksConfig

val config = new DatabricksConfig()
    .setProfile("<profile-name>")
val spark = DatabricksSession.builder()
    .sdkConfig(config)
    .clusterId(retrieveClusterId())
    .getOrCreate()

A Databricks configuration profile

For this option, create or identify a Databricks configuration profile containing the field cluster_id and any other fields that are necessary for the Databricks authentication type that you want to use.

The required configuration profile fields for each authentication type are as follows:

Note

Basic authentication using a Databricks username and password reached end of life on July 10, 2024. See End of life for Databricks-managed passwords.

Then set the name of this configuration profile through the configuration class.

Note

You can use the auth login command’s --configure-cluster option to automatically add the cluster_id field to a new or existing configuration profile. For more information, run the command databricks auth login -h.

You can specify cluster_id in a couple of ways:

  • Include the cluster_id field in your configuration profile, and then just specify the configuration profile’s name.

  • Specify the configuration profile name along with the cluster_id field.

If you have already set the DATABRICKS_CLUSTER_ID environment variable with the cluster’s ID, you do not also need to specify cluster_id.
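
For example, a configuration profile that uses Databricks personal access token authentication and embeds the cluster ID might look like the following in your .databrickscfg file (all values shown are placeholders):

[my-profile]
host       = https://my-workspace.cloud.databricks.com/
token      = dapi123...
cluster_id = 1234-567890-abcde123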

The code for each of these approaches is as follows:

Python:

# Include the cluster_id field in your configuration profile, and then
# just specify the configuration profile's name:
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.profile("<profile-name>").getOrCreate()

Scala:

// Include the cluster_id field in your configuration profile, and then
// just specify the configuration profile's name:
import com.databricks.connect.DatabricksSession
import com.databricks.sdk.core.DatabricksConfig

val config = new DatabricksConfig()
    .setProfile("<profile-name>")
val spark = DatabricksSession.builder()
    .sdkConfig(config)
    .getOrCreate()

Python:

# Specify the configuration profile name along with the cluster_id field.
# In this example, retrieve_cluster_id() assumes some custom implementation that
# you provide to get the cluster ID from the user or from some other
# configuration store:
from databricks.connect import DatabricksSession
from databricks.sdk.core import Config

config = Config(
    profile    = "<profile-name>",
    cluster_id = retrieve_cluster_id()
)

spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()

Scala:

// Specify a Databricks configuration profile along with the clusterId field.
// If you have already set the DATABRICKS_CLUSTER_ID environment variable with the
// cluster's ID, you do not also need to set the clusterId field here.
import com.databricks.connect.DatabricksSession
import com.databricks.sdk.core.DatabricksConfig

val config = new DatabricksConfig()
    .setProfile("<profile-name>")
val spark = DatabricksSession.builder()
    .sdkConfig(config)
    .clusterId(retrieveClusterId())
    .getOrCreate()

The DATABRICKS_CONFIG_PROFILE environment variable

For this option, create or identify a Databricks configuration profile containing the field cluster_id and any other fields that are necessary for the Databricks authentication type that you want to use.

If you have already set the DATABRICKS_CLUSTER_ID environment variable with the cluster’s ID, you do not also need to specify cluster_id.

The required configuration profile fields for each authentication type are as follows:

Note

Basic authentication using a Databricks username and password reached end of life on July 10, 2024. See End of life for Databricks-managed passwords.

Note

You can use the auth login command’s --configure-cluster option to automatically add the cluster_id field to a new or existing configuration profile. For more information, run the command databricks auth login -h.

Set the DATABRICKS_CONFIG_PROFILE environment variable to the name of this configuration profile. Then initialize the DatabricksSession class:

Python:

from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()

Scala:

import com.databricks.connect.DatabricksSession

val spark = DatabricksSession.builder().getOrCreate()
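
If you prefer to select the configuration profile from within your application rather than from the shell, a minimal sketch (Python) is to set the environment variable before the session is created. Here, my-profile is a placeholder for your profile's name:

import os

# Select the configuration profile before any Databricks session is created.
os.environ["DATABRICKS_CONFIG_PROFILE"] = "my-profile"

from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()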

An environment variable for each configuration property

For this option, set the DATABRICKS_CLUSTER_ID environment variable and any other environment variables that are necessary for the Databricks authentication type that you want to use.

The required environment variables for each authentication type are as follows:

Note

Basic authentication using a Databricks username and password reached end of life on July 10, 2024. See End of life for Databricks-managed passwords.
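
For example, for Databricks personal access token authentication, you might set the following environment variables (all values shown are placeholders):

DATABRICKS_HOST=https://my-workspace.cloud.databricks.com
DATABRICKS_TOKEN=dapi123...
DATABRICKS_CLUSTER_ID=1234-567890-abcde123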

Then initialize the DatabricksSession class:

Python:

from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()

Scala:

import com.databricks.connect.DatabricksSession

val spark = DatabricksSession.builder().getOrCreate()

A Databricks configuration profile named DEFAULT

For this option, create or identify a Databricks configuration profile containing the field cluster_id and any other fields that are necessary for the Databricks authentication type that you want to use.

If you have already set the DATABRICKS_CLUSTER_ID environment variable with the cluster’s ID, you do not also need to specify cluster_id.

The required configuration profile fields for each authentication type are as follows:

Note

Basic authentication using a Databricks username and password reached end of life on July 10, 2024. See End of life for Databricks-managed passwords.

Name this configuration profile DEFAULT.
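
For example, a DEFAULT profile that uses Databricks personal access token authentication and embeds the cluster ID might look like this (all values shown are placeholders):

[DEFAULT]
host       = https://my-workspace.cloud.databricks.com/
token      = dapi123...
cluster_id = 1234-567890-abcde123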

Note

You can use the auth login command’s --configure-cluster option to automatically add the cluster_id field to the DEFAULT configuration profile. For more information, run the command databricks auth login -h.

Then initialize the DatabricksSession class:

Python:

from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()

Scala:

import com.databricks.connect.DatabricksSession

val spark = DatabricksSession.builder().getOrCreate()

Configure a connection to serverless compute

Preview

This feature is in Public Preview.

Databricks Connect for Python supports connecting to serverless compute. To use this feature, you must meet the requirements for connecting to serverless compute. See Requirements.

Important

This feature has the following limitations:

You can configure a connection to serverless compute in one of the following ways:

  • Set the local environment variable DATABRICKS_SERVERLESS_COMPUTE_ID to auto. If this environment variable is set, Databricks Connect ignores any cluster_id configuration.

  • In a local Databricks configuration profile, set serverless_compute_id = auto, then reference that profile from your code.

    [DEFAULT]
    host = https://my-workspace.cloud.databricks.com/
    serverless_compute_id = auto
    token = dapi123...
    
  • Or use either of the following options:

# Option 1: enable serverless compute with the builder's serverless() method.
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.serverless(True).getOrCreate()

# Option 2: enable serverless compute through the remote() method.
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.remote(serverless=True).getOrCreate()

Note

The serverless compute session times out after 10 minutes of inactivity. After the timeout, create a new Spark session using getOrCreate() to reconnect to serverless compute.

Validate the connection to Databricks

To validate that your environment, default credentials, and connection to compute are set up correctly for Databricks Connect, run the databricks-connect test command. The command fails with a non-zero exit code and a corresponding error message when it detects any incompatibility in the setup.

databricks-connect test

In Databricks Connect 14.3 and above, you can also validate your environment using validateSession():

from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.validateSession(True).getOrCreate()

Disable Databricks Connect

The Databricks Connect service (and the underlying Spark Connect service) can be disabled on any given cluster.

To disable the Databricks Connect service, set the following Spark configuration on the cluster:

spark.databricks.service.server.enabled false