This feature is in Public Preview.
Qlik Replicate helps you pull data from multiple data sources (Oracle, Microsoft SQL Server, SAP, mainframe, and more) into Delta Lake. Replicate's automated change data capture (CDC) helps you avoid the heavy lifting of manually extracting data, transferring it via an API script, and chopping, staging, and importing it. Qlik Compose automates the CDC process into Delta Lake.
Here are the steps for using Qlik with Databricks.
Qlik authenticates with Databricks using a Databricks personal access token. To generate a personal access token, follow the instructions in Generate a personal access token.
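If you want to confirm that the token works before configuring Qlik, a quick API call is enough. The following is a minimal sketch, not part of the Qlik setup itself; the workspace URL and token are placeholder values, and it uses the Databricks Clusters API as a convenient authenticated endpoint.

```python
import requests

# Placeholder values; substitute your workspace URL and generated token.
DATABRICKS_HOST = "https://dbc-example.cloud.databricks.com"
DATABRICKS_TOKEN = "dapiXXXXXXXXXXXXXXXX"

# Any authenticated endpoint works as a smoke test; listing clusters is a
# convenient choice because Qlik also needs cluster connectivity.
resp = requests.get(
    f"{DATABRICKS_HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
)
resp.raise_for_status()  # a 403 here usually means a bad or expired token
print("Token OK;", len(resp.json().get("clusters", [])), "clusters visible")
```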
Qlik writes data to an S3 bucket, and the Databricks integration cluster reads data from that location. The integration cluster therefore requires secure access to the S3 bucket.
To access AWS resources, you can launch the Databricks integration cluster with an instance profile. The instance profile should have access to the staging S3 bucket and the target S3 bucket where you want to write the Delta tables. To create an instance profile and configure the integration cluster to use the role, follow the instructions in Secure access to S3 buckets using instance profiles.
As an alternative, you can use IAM credential passthrough, which enables user-specific access to S3 data from a shared cluster.
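Once the cluster is running, you can sanity-check bucket access from a notebook attached to it. This is an optional check with a hypothetical bucket name; `dbutils.fs.ls` raises an access error if the instance profile or passthrough credentials do not grant read permission.

```python
# Run in a notebook attached to the integration cluster, where dbutils is
# predefined. "qlik-staging-bucket" is a hypothetical name; use your bucket.
staging_path = "s3a://qlik-staging-bucket/"

try:
    # Listing succeeds only if the cluster's credentials can read the bucket.
    for entry in dbutils.fs.ls(staging_path):
        print(entry.path)
except Exception as e:
    print(f"Cluster cannot access {staging_path}: {e}")
```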
When you configure the integration cluster:

- Set Cluster Mode to Standard.
- Set Databricks Runtime Version to the runtime version you want the integration cluster to run.
- In the Spark Config field, add:

  ```
  spark.databricks.delta.optimizeWrite.enabled true
  spark.databricks.delta.autoCompact.enabled true
  ```

  These settings enable Delta Lake optimized writes and auto compaction, which help keep file sizes manageable during frequent CDC writes.
Configure your cluster depending on your integration and scaling needs.
For cluster configuration details, see Configure clusters.
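If you prefer to script the cluster setup, the same settings can be supplied through the Databricks Clusters API. This is a minimal sketch under stated assumptions rather than part of the Qlik instructions; the cluster name, node type, runtime version, and instance profile ARN are all placeholder values.

```python
import requests

DATABRICKS_HOST = "https://dbc-example.cloud.databricks.com"  # placeholder
DATABRICKS_TOKEN = "dapiXXXXXXXXXXXXXXXX"                     # placeholder

cluster_spec = {
    "cluster_name": "qlik-integration",   # placeholder name
    "spark_version": "13.3.x-scala2.12",  # the runtime version you chose above
    "node_type_id": "i3.xlarge",          # placeholder node type
    "num_workers": 2,
    # The Spark config from the step above.
    "spark_conf": {
        "spark.databricks.delta.optimizeWrite.enabled": "true",
        "spark.databricks.delta.autoCompact.enabled": "true",
    },
    # Attach the instance profile that grants access to the S3 buckets.
    "aws_attributes": {
        "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/qlik-role"
    },
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print("Created cluster", resp.json()["cluster_id"])
```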
To connect a Databricks cluster to Qlik, you need the following JDBC/ODBC connection properties:

- JDBC URL
- HTTP Path

For the steps to obtain these values, see Get server hostname, port, HTTP path, and JDBC URL.
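Before entering these values in Qlik, you can verify them from any machine with Python installed. This sketch assumes the `databricks-sql-connector` package (`pip install databricks-sql-connector`) and uses placeholder connection values copied from the cluster's JDBC/ODBC tab.

```python
from databricks import sql

# Placeholder values; copy the real ones from the cluster's JDBC/ODBC tab.
connection = sql.connect(
    server_hostname="dbc-example.cloud.databricks.com",
    http_path="sql/protocolv1/o/0/0000-000000-example000",
    access_token="dapiXXXXXXXXXXXXXXXX",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT 1")  # trivial query to prove connectivity
    print(cursor.fetchall())

connection.close()
```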
To set up the Qlik side of the integration, go to the Qlik login page and follow the instructions there.