Stitch integration

Preview

This feature is in Public Preview.

Stitch helps you consolidate all your business data from different databases and SaaS applications (Salesforce, HubSpot, Marketo, and so on) into Delta Lake.

Here are the steps for using Stitch with Databricks.

Step 1: Generate a Databricks personal access token

Stitch authenticates with Databricks using a Databricks personal access token. To generate a personal access token, follow the instructions in Generate a personal access token.
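
If you prefer to automate this step, a token can also be created through the Databricks Token API (POST /api/2.0/token/create). The following Python sketch assumes placeholder values for the workspace URL and for an existing token used to authenticate the API call:

    # Minimal sketch: create a personal access token via the Token API.
    # DATABRICKS_HOST and EXISTING_TOKEN are placeholders for your workspace.
    import requests

    DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    EXISTING_TOKEN = "<token-authorized-to-create-tokens>"             # placeholder

    response = requests.post(
        f"{DATABRICKS_HOST}/api/2.0/token/create",
        headers={"Authorization": f"Bearer {EXISTING_TOKEN}"},
        json={
            "lifetime_seconds": 7776000,  # 90 days; adjust to your rotation policy
            "comment": "Token for Stitch integration",
        },
    )
    response.raise_for_status()
    print(response.json()["token_value"])  # supply this value to Stitch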

Step 2: Set up a cluster to support integration needs

Stitch writes data to an S3 bucket, and the Databricks integration cluster reads data from that location. Therefore, the integration cluster requires secure access to the S3 bucket.

Secure access to an S3 bucket

To access AWS resources, you can launch the Databricks integration cluster with an instance profile. The instance profile should have access to the staging S3 bucket and the target S3 bucket where you want to write the Delta tables. To create an instance profile and configure the integration cluster to use the role, follow the instructions in Secure access to S3 buckets using instance profiles.

As an alternative, you can use IAM credential passthrough, which enables user-specific access to S3 data from a shared cluster.
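
As an illustration of the bucket permissions involved, here is a minimal Python sketch that creates an IAM policy for the role behind your instance profile using boto3. The bucket and policy names are placeholders, and your actual policy may need additional permissions:

    # Minimal sketch: grant the instance profile's role read/write access
    # to the staging and target buckets. Bucket names are placeholders.
    import json
    import boto3

    STAGING_BUCKET = "my-stitch-staging-bucket"  # placeholder
    TARGET_BUCKET = "my-delta-target-bucket"     # placeholder

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:ListBucket",
                ],
                "Resource": [
                    f"arn:aws:s3:::{STAGING_BUCKET}",
                    f"arn:aws:s3:::{STAGING_BUCKET}/*",
                    f"arn:aws:s3:::{TARGET_BUCKET}",
                    f"arn:aws:s3:::{TARGET_BUCKET}/*",
                ],
            }
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="stitch-delta-s3-access",  # placeholder name
        PolicyDocument=json.dumps(policy_document),
    )
    # Then attach the policy to the instance profile's role, for example:
    # iam.attach_role_policy(RoleName="<instance-profile-role>", PolicyArn="<policy-arn>")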

Specify the cluster configuration

  1. Set Cluster Mode to Standard.

  2. Set Databricks Runtime Version to Runtime 6.3 or above.

  3. Enable Auto Optimize by adding the following properties to your Spark configuration:

    spark.databricks.delta.optimizeWrite.enabled true
    spark.databricks.delta.autoCompact.enabled true
    
  4. Configure your cluster depending on your integration and scaling needs.

For cluster configuration details, see Configure clusters.
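
If you manage clusters programmatically, the same configuration can be expressed through the Databricks Clusters API (POST /api/2.0/clusters/create). This Python sketch uses placeholder values for the workspace URL, token, node type, and instance profile ARN; choose any runtime of 6.3 or above:

    # Minimal sketch: create the integration cluster with the Auto Optimize
    # Spark properties from the steps above. Host, token, node type, and
    # instance profile ARN are placeholders.
    import requests

    DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                                  # placeholder

    cluster_spec = {
        "cluster_name": "stitch-integration",
        "spark_version": "6.3.x-scala2.11",  # any 6.3+ runtime works
        "node_type_id": "i3.xlarge",         # size for your workload
        "num_workers": 2,
        "spark_conf": {
            "spark.databricks.delta.optimizeWrite.enabled": "true",
            "spark.databricks.delta.autoCompact.enabled": "true",
        },
        "aws_attributes": {
            "instance_profile_arn": "arn:aws:iam::<account-id>:instance-profile/<profile>"
        },
    }

    response = requests.post(
        f"{DATABRICKS_HOST}/api/2.0/clusters/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=cluster_spec,
    )
    response.raise_for_status()
    print(response.json()["cluster_id"])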

See Get server hostname, port, HTTP path, and JDBC URL for the steps to obtain the JDBC URL and HTTP path.
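
For reference, the JDBC URL generally takes a shape like the following (assuming the Simba Spark JDBC driver; the hostname, HTTP path, and token are placeholders from your own workspace):

    jdbc:spark://<server-hostname>:443/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>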

Step 3: Configure Stitch with Databricks

Go to the Stitch login page and follow the instructions.