Connect to Preset
Preset provides modern business intelligence for your entire organization: a powerful, easy-to-use data exploration and visualization platform, powered by open source Apache Superset.
You can integrate your Databricks SQL warehouses (formerly Databricks SQL endpoints) and Databricks clusters with Preset.
Connect to Preset using Partner Connect
To connect your Databricks workspace to Preset using Partner Connect, see Connect to BI partners using Partner Connect.
Note
Partner Connect only supports Databricks SQL warehouses for Preset. To connect a cluster in your Databricks workspace to Preset, connect to Preset manually.
Connect to Preset manually
In this section, you connect an existing SQL warehouse or cluster in your Databricks workspace to Preset.
Note
For SQL warehouses, you can use Partner Connect to simplify the connection process.
Requirements
Before you integrate with Preset manually, you must have the following:
A cluster or SQL warehouse in your Databricks workspace.
The connection details for your cluster or SQL warehouse, specifically the Server Hostname, Port, and HTTP Path values.
A Databricks personal access token. To create a personal access token, follow the steps in Databricks personal access tokens for workspace users.
Note
As a security best practice when you authenticate with automated tools, systems, scripts, and apps, Databricks recommends that you use OAuth tokens.
If you use personal access token authentication, Databricks recommends using personal access tokens belonging to service principals instead of workspace users. To create tokens for service principals, see Manage tokens for a service principal.
Steps to connect
To connect to Preset manually, do the following:
Create a new Preset account, or sign in to your existing Preset account.
Click + Workspace.
In the Add New Workspace dialog, enter a name for the workspace, select the workspace region that is nearest to you, and then click Save.
Open the workspace by clicking the workspace tile.
On the toolbar, click Catalog > Databases.
Click + Database.
In the Connect a database dialog, in the Supported Databases list, select one of the following:
For a SQL warehouse, select Databricks SQL Warehouse.
For a cluster, select Databricks Interactive Cluster.
For SQLAlchemy URI, enter the following value:
For a SQL warehouse:
databricks+pyodbc://token:{access token}@{server hostname}:{port}/{database name}
For a cluster:
databricks+pyhive://token:{access token}@{server hostname}:{port}/{database name}
Replace:
{access token} with the Databricks personal access token value from the requirements.
{server hostname} with the Server Hostname value from the requirements.
{port} with the Port value from the requirements.
{database name} with the name of the target database in your Databricks workspace.
For example, for a SQL warehouse:
databricks+pyodbc://token:dapi...@dbc-a1b2345c-d6e7.cloud.databricks.com:443/default
For example, for a cluster:
databricks+pyhive://token:dapi...@dbc-a1b2345c-d6e7.cloud.databricks.com:443/default
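If you prefer to assemble the URI programmatically instead of typing it by hand, the templates above can be built with an f-string. This is a minimal sketch; the token, hostname, and database values below are hypothetical placeholders, not real credentials:

```python
# Build Preset-compatible SQLAlchemy URIs for Databricks.
# All connection values below are hypothetical placeholders;
# substitute the values from the requirements section.
access_token = "dapi-example-token"                       # personal access token
server_hostname = "dbc-a1b2345c-d6e7.cloud.databricks.com"  # Server Hostname
port = 443                                                 # Port
database = "default"                                       # target database

# SQL warehouses use the pyodbc dialect; clusters use pyhive.
warehouse_uri = (
    f"databricks+pyodbc://token:{access_token}"
    f"@{server_hostname}:{port}/{database}"
)
cluster_uri = (
    f"databricks+pyhive://token:{access_token}"
    f"@{server_hostname}:{port}/{database}"
)

print(warehouse_uri)
```

Paste the resulting string into the SQLAlchemy URI field unchanged; Preset does not expand placeholders for you.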
Click the Advanced tab, and expand Other.
For Engine Parameters, enter the following value:
For a SQL warehouse:
{"connect_args": {"http_path": "sql/1.0/warehouses/****", "driver_path": "/opt/simba/spark/lib/64/libsparkodbc_sb64.so"}}
For a cluster:
{"connect_args": {"http_path": "sql/protocolv1/o/****"}}
Replace sql/1.0/warehouses/**** or sql/protocolv1/o/**** with the HTTP Path value from the requirements.
For example, for a SQL warehouse:
{"connect_args": {"http_path": "sql/1.0/warehouses/ab12345cd678e901", "driver_path": "/opt/simba/spark/lib/64/libsparkodbc_sb64.so"}}
For example, for a cluster:
{"connect_args": {"http_path": "sql/protocolv1/o/1234567890123456/1234-567890-buyer123"}}
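Because the Engine Parameters field expects valid JSON, a quoting mistake is the most common cause of errors here. One way to avoid that is to generate the value with json.dumps. A sketch, using the same hypothetical HTTP Path values as the examples above:

```python
import json

# Hypothetical HTTP Path values; copy the real ones from your
# SQL warehouse or cluster connection details.
warehouse_http_path = "sql/1.0/warehouses/ab12345cd678e901"
cluster_http_path = "sql/protocolv1/o/1234567890123456/1234-567890-buyer123"

# SQL warehouses also need the Simba ODBC driver path.
warehouse_params = json.dumps({
    "connect_args": {
        "http_path": warehouse_http_path,
        "driver_path": "/opt/simba/spark/lib/64/libsparkodbc_sb64.so",
    }
})

# Clusters need only the HTTP path.
cluster_params = json.dumps({
    "connect_args": {"http_path": cluster_http_path}
})

print(warehouse_params)
```

Paste the printed JSON string directly into the Engine Parameters field.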
Click the Basic tab, and then click Test Connection.
Note
For connection troubleshooting, see Database Connection Walkthrough for Databricks on the Preset website.
After the connection succeeds, click Connect.
Next steps
Explore one or more of the following resources on the Preset website: