Looker

This article describes how to use Looker with Databricks.

Step 1: Download and install software

Download and install the following:

Simba Spark JDBC driver.

Step 2: Get Databricks connection information

  1. Get a personal access token.
  2. Get your cluster’s server hostname, port, and HTTP path, using the instructions in Server hostname, port, HTTP path, and JDBC URL.
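The values you gather in this step also fit together as a Simba Spark JDBC URL, which can be useful for testing the connection outside Looker. The following sketch assembles such a URL; the hostname, HTTP path, and token shown are placeholders, not real values, and the exact parameter set your driver version expects may differ.

```python
# Illustrative sketch: assemble a Simba Spark JDBC URL from the connection
# values gathered in this step. All values below are placeholders.

def build_jdbc_url(hostname: str, port: int, http_path: str, token: str) -> str:
    """Build a Simba Spark JDBC URL for a Databricks cluster."""
    return (
        f"jdbc:spark://{hostname}:{port}/default"
        f";transportMode=http;ssl=1"
        f";httpPath={http_path}"
        f";AuthMech=3;UID=token;PWD={token}"  # AuthMech=3: username/password auth
    )

url = build_jdbc_url(
    "example.cloud.databricks.com",            # server hostname (placeholder)
    443,                                       # port
    "sql/protocolv1/o/0/0000-000000-example",  # HTTP path (placeholder)
    "dapiXXXXXXXX",                            # personal access token (placeholder)
)
print(url)
```

Note that the username is the literal string token; the personal access token itself goes in the PWD field, mirroring how Looker's Username and Password fields are filled in Step 3.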

Step 3: Configure a connection in Looker to a Databricks cluster

  1. In Looker, go to Admin > Connections > New Database Connection.

    Cluster connection parameters
  2. Enter a connection name and select the Apache Spark dialect.

  3. In the Host and Port fields, enter the information you retrieved in Step 2.

  4. Enter the literal string token in the Username field and the personal access token from Step 2 in the Password field.

  5. To convert query results to another time zone, adjust Query Time Zone.

  6. Set Additional Params to ;transportMode=http;ssl=true;httpPath=, appending the HTTP path that you retrieved in Step 2.

  7. For the remaining fields, keep the defaults:

    • Do not enable Persistent Derived Tables.
    • Keep the Max Connections and Connection Pool Timeout defaults.
    • Leave Database Time Zone blank (assuming that you are storing everything in UTC).

For more information, see the Looker documentation.

Step 4: Begin modeling your database in Looker by creating a project and running the generator

This step assumes that there are permanent tables stored in the default database of your cluster.

  1. If necessary, enable Developer Mode by toggling the Dev button from OFF to ON.
  2. Go to LookML > Manage Projects.
  3. Click New LookML Project.
  4. Configure the new project.
    • Give the project a name.
    • Select Generate Model & Views.
    • Select the Connection name that you provided when you created the database connection.
    • Select All Tables.
    • Set Schemas to default, unless you have other databases to model in the cluster.
  5. Click Create Project.

After you create the project and the generator runs, Looker displays a user interface with one model file and multiple view files. The model file shows the tables in the schema and any join relationships discovered between them; the view files list each dimension (column) available for each table in the schema.