# Zendesk Support connector overview
The managed Zendesk Support connector in Lakeflow Connect lets you ingest data from Zendesk Support into Databricks, including tickets, knowledge base content, and community forum data.
## Feature availability

| Feature | Availability |
|---|---|
| UI-based pipeline authoring | |
| API-based pipeline authoring | |
| Declarative Automation Bundles | |
| Incremental ingestion | |
| Unity Catalog governance | |
| Orchestration using Databricks Workflows | |
| SCD type 2 | |
| API-based column selection and deselection | |
| API-based row filtering | |
| Automated schema evolution: New and deleted columns | |
| Automated schema evolution: Data type changes | |
| Automated schema evolution: Column renames | Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | N/A |
| Maximum number of tables per pipeline | 250 |
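To make the API-based features above concrete, the following is a minimal sketch of a managed ingestion pipeline spec showing SCD type 2 and column selection. Field names follow the general shape of the Lakeflow Connect ingestion pipeline spec, but every connection, catalog, schema, and table name here is a hypothetical placeholder; verify the exact payload against the current API reference.

```python
# Sketch of a managed ingestion pipeline spec (illustrative only).
# All names below are hypothetical placeholders, not real resources.
pipeline_spec = {
    "name": "zendesk_support_ingestion",
    "ingestion_definition": {
        "connection_name": "zendesk_connection",  # hypothetical UC connection
        "objects": [
            {
                "table": {
                    "source_table": "tickets",
                    "destination_catalog": "main",        # hypothetical
                    "destination_schema": "zendesk_raw",  # hypothetical
                    "table_configuration": {
                        # SCD type 2: keep full change history per row.
                        "scd_type": "SCD_TYPE_2",
                        # Column selection: ingest only the listed columns
                        # (a deselection list can be used instead).
                        "include_columns": ["id", "status", "created_at"],
                    },
                }
            }
        ],
    },
}
```

Because the spec is plain data, it can be version-controlled and reused across workspaces regardless of whether you submit it through the UI, the API, or bundles.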
## Authentication methods

| Authentication method | Availability |
|---|---|
| OAuth U2M | |
| OAuth M2M | |
| OAuth (manual refresh token) | |
| Basic authentication (username/password) | |
| Basic authentication (API key) | |
| Basic authentication (service account JSON key) | |
## What to know before you start

| Topic | Why it matters |
|---|---|
| The workflow depends on your Databricks user persona. | |
| The steps to create a connection depend on the authentication method you choose. | |
| The steps to create a pipeline depend on the interface. | |
| The pipeline schedule depends on your latency and cost requirements. | |
| Depending on your ingestion needs, the pipeline might use configurations like history tracking, column selection, and row filtering. Supported configurations vary by connector. See Feature availability. | |
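On the latency/cost trade-off: a triggered pipeline that runs on a fixed schedule is typically cheaper than running continuously, at the cost of data freshness. As an illustration, a six-hourly trigger could be expressed with a Quartz cron expression like the one below; the values are illustrative, not a recommendation.

```python
# Sketch of a triggered schedule: every 6 hours balances latency against
# compute cost (continuous runs cost more; daily runs add latency).
# Quartz cron syntax, as used by Databricks schedules; values illustrative.
schedule = {
    "cron": {
        "quartz_cron_expression": "0 0 */6 * * ?",  # minute 0, every 6 hours
        "timezone_id": "UTC",
    }
}
```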
## Start ingesting from Zendesk Support

The following table provides an overview of the end-to-end Zendesk Support ingestion flow, based on user type:

| User | Steps |
|---|---|
| Admin | Create a Zendesk Support connection, then create an ingestion pipeline that uses it. |
| Non-admin | Use any supported interface to create a pipeline from an existing connection. See Ingest data from Zendesk Support. |
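For the pipeline-creation step of the admin flow above, here is a hedged sketch of submitting a pipeline spec to the Databricks Pipelines REST API (`/api/2.0/pipelines`). The host, token, and spec are placeholders, and the helper names are our own; check the current REST reference for the exact payload shape.

```python
import json
from urllib import request

def build_create_pipeline_request(host: str, token: str, spec: dict) -> request.Request:
    """Assemble the POST request for the Pipelines API; does not send it."""
    return request.Request(
        url=f"{host}/api/2.0/pipelines",
        data=json.dumps(spec).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def create_pipeline(host: str, token: str, spec: dict) -> dict:
    """Send the request and return the parsed API response."""
    req = build_create_pipeline_request(host, token, spec)
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Splitting request construction from sending keeps the assembly logic testable without network access; in practice you would likely use the Databricks SDK or CLI instead of raw HTTP.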