Create a Zendesk Support ingestion pipeline
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
This page describes how to create a Zendesk Support ingestion pipeline using Databricks Lakeflow Connect.
Prerequisites
To create an ingestion pipeline, you must meet the following requirements:
- Your workspace must be enabled for Unity Catalog.

- Serverless compute must be enabled for your workspace. See Serverless compute requirements.

- If you plan to create a new connection: You must have `CREATE CONNECTION` privileges on the metastore.

  If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.

- If you plan to use an existing connection: You must have `USE CONNECTION` privileges or `ALL PRIVILEGES` on the connection object.

- You must have `USE CATALOG` privileges on the target catalog.

- You must have `USE SCHEMA` and `CREATE TABLE` privileges on an existing schema, or `CREATE SCHEMA` privileges on the target catalog.
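If a principal is missing these privileges, a metastore admin or the object owner can grant them. The following is a minimal sketch using the Databricks CLI `grants` commands; the principal, connection, catalog, and schema names are placeholders, and you can grant the same privileges in Catalog Explorer or with SQL GRANT statements instead:

```bash
# Placeholder principal and securable names; adjust them for your workspace.
databricks grants update connection zendesk_connection --json '{
  "changes": [{"principal": "data-engineers", "add": ["USE_CONNECTION"]}]
}'

databricks grants update catalog main --json '{
  "changes": [{"principal": "data-engineers", "add": ["USE_CATALOG"]}]
}'

databricks grants update schema main.ingest_destination_schema --json '{
  "changes": [{"principal": "data-engineers", "add": ["USE_SCHEMA", "CREATE_TABLE"]}]
}'
```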
To ingest from Zendesk Support, you must complete the steps in Configure Zendesk Support for OAuth.
Create the ingestion pipeline
You can create the ingestion pipeline using Databricks Asset Bundles or a Databricks notebook.

Databricks Asset Bundles
- Create a new bundle using the Databricks CLI:

  ```bash
  databricks bundle init
  ```
- Add two new resource files to the bundle:

  - A pipeline definition file (`resources/zendesk_pipeline.yml`).
  - A workflow file that controls the frequency of data ingestion (`resources/zendesk_job.yml`).

  The following is an example `resources/zendesk_pipeline.yml` file:

  ```yaml
  variables:
    destination_catalog:
      default: main
    destination_schema:
      default: ingest_destination_schema

  # The main pipeline for zendesk_dab
  resources:
    pipelines:
      pipeline_zendesk:
        name: zendesk_pipeline
        catalog: ${var.destination_catalog}
        target: ${var.destination_schema}
        ingestion_definition:
          connection_name: zendesk_connection
          objects:
            - table:
                source_schema: <source-schema-name>
                source_table: <source-table-name>
                destination_catalog: ${var.destination_catalog}
                destination_schema: ${var.destination_schema}
  ```

  The following is an example `resources/zendesk_job.yml` file:
  ```yaml
  resources:
    jobs:
      zendesk_dab_job:
        name: zendesk_dab_job
        trigger:
          # Run this job every day, exactly one day from the last run
          # See https://docs.databricks.com/api/workspace/jobs/create#trigger
          periodic:
            interval: 1
            unit: DAYS
        email_notifications:
          on_failure:
            - <email-address>
        tasks:
          - task_key: refresh_pipeline
            pipeline_task:
              pipeline_id: ${resources.pipelines.pipeline_zendesk.id}
  ```
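  The `periodic` trigger runs the job on a fixed interval measured from the last run. If you prefer the job to run at a specific time of day, you can replace the `trigger` block with a cron-based `schedule`. The following is a minimal sketch; the cron expression and timezone are placeholder values:

  ```yaml
  resources:
    jobs:
      zendesk_dab_job:
        name: zendesk_dab_job
        # Run every day at 06:00 UTC (placeholder schedule)
        schedule:
          quartz_cron_expression: "0 0 6 * * ?"
          timezone_id: "UTC"
        tasks:
          - task_key: refresh_pipeline
            pipeline_task:
              pipeline_id: ${resources.pipelines.pipeline_zendesk.id}
  ```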
- Deploy the pipeline using the Databricks CLI:

  ```bash
  databricks bundle deploy
  ```
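To check the bundle configuration before you deploy, and to trigger the first ingestion run after you deploy, you can use the Databricks CLI. This is a minimal sketch that assumes the job resource key `zendesk_dab_job` from the example `resources/zendesk_job.yml` file:

```bash
# Validate the bundle configuration
databricks bundle validate

# Run the job that refreshes the ingestion pipeline
databricks bundle run zendesk_dab_job
```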
Databricks notebook

- Import the following notebook into your Databricks workspace:

  Create a Zendesk Support ingestion pipeline
- Modify the following values in cell 3:

  - `pipeline_name`: A unique name for your ingestion pipeline.
  - `connection_name`: The name of the Unity Catalog connection from the source setup.
  - `source_schema`: The name of the schema that contains your source data.
  - `source_table`: The name of the table you want to ingest. For a list of supported source tables, see Zendesk Support connector reference.
  - `destination_schema`: The schema you want to write to.
  - `destination_table`: (Optional) The name of the destination streaming table. If you don't provide one, the connector automatically gives the destination table the same name as the source table.
Common patterns
Optionally configure advanced options, like history tracking (SCD type 2). See Common patterns for managed ingestion pipelines.
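For example, to keep historical records of changed and deleted rows, you can enable SCD type 2 on a table. The following is a minimal sketch of the `ingestion_definition` block in `resources/zendesk_pipeline.yml`; the `table_configuration` and `scd_type` keys follow the pattern used for history tracking in managed ingestion pipelines, so confirm them against Common patterns for managed ingestion pipelines:

```yaml
ingestion_definition:
  connection_name: zendesk_connection
  objects:
    - table:
        source_schema: <source-schema-name>
        source_table: <source-table-name>
        destination_catalog: ${var.destination_catalog}
        destination_schema: ${var.destination_schema}
        # Assumed history-tracking settings; see the common patterns page
        table_configuration:
          scd_type: SCD_TYPE_2
```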