Zendesk Support connector limitations
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
General limitations
- When you run a scheduled pipeline, alerts don't trigger immediately. Instead, they trigger when the next update runs.
- When a source table is deleted, the destination table is not automatically deleted. You must delete the destination table manually. This differs from standard Lakeflow Spark Declarative Pipelines behavior.
- During source maintenance periods, Databricks might not be able to access your data.
- If a source table name conflicts with an existing destination table name, the pipeline update fails.
- Multi-destination pipeline support is API-only.
- You can optionally rename a table that you ingest. If you rename a table in your pipeline, the pipeline becomes API-only, and you can no longer edit it in the UI.
- Column-level selection and deselection are API-only. For a pipeline specification that uses these API-only options, see the pipeline spec sketch after this list.
- If you select a column after a pipeline has already started, the connector does not automatically backfill data for the new column. To ingest historical data, manually run a full refresh on the table (also shown in the pipeline spec sketch after this list).
- Databricks can't ingest two or more tables with the same name in the same pipeline, even if they come from different source schemas.
- The connector assumes that the cursor columns in the source system are monotonically increasing.
- With SCD type 1 enabled, deletes don't produce an explicit `delete` event in the change data feed. For auditable deletions, use SCD type 2 if the connector supports it. For details, see Example: SCD type 1 and SCD type 2 processing with CDF source data and the deletion-audit sketch after this list.
- The connector ingests raw data without transformations. Use downstream Lakeflow Spark Declarative Pipelines for transformations.
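The following is a minimal sketch of driving these API-only options through the Pipelines REST API with Python. The workspace host, token, connection name, catalog, schema, and column names are placeholders, and the `ingestion_definition` field names follow the managed connector examples in the Databricks REST API reference, so verify them against the current Pipelines API docs before relying on them.

```python
import requests

HOST = "https://<workspace-host>"      # your workspace URL (placeholder)
TOKEN = "<personal-access-token>"      # or another supported credential (placeholder)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Pipeline spec that uses the API-only options described above.
pipeline_spec = {
    "name": "zendesk_ingestion_pipeline",
    "ingestion_definition": {
        "connection_name": "my_zendesk_connection",  # assumed Unity Catalog connection
        "objects": [
            {
                "table": {
                    "source_schema": "zendesk",
                    "source_table": "tickets",
                    "destination_catalog": "main",
                    "destination_schema": "ingest",
                    # Renaming the destination table makes the pipeline API-only.
                    "destination_table": "zendesk_tickets",
                    "table_configuration": {
                        # Column-level selection is also API-only.
                        "include_columns": ["id", "status", "custom_fields"],
                    },
                }
            }
        ],
    },
}

resp = requests.post(f"{HOST}/api/2.0/pipelines", headers=HEADERS, json=pipeline_spec)
resp.raise_for_status()
pipeline_id = resp.json()["pipeline_id"]

# Later, after adding a column to include_columns, trigger an update that
# fully refreshes only that table so its historical data is backfilled.
update = requests.post(
    f"{HOST}/api/2.0/pipelines/{pipeline_id}/updates",
    headers=HEADERS,
    json={"full_refresh_selection": ["zendesk_tickets"]},
)
update.raise_for_status()
```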
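With SCD type 2, a source delete closes the active row's validity window instead of overwriting it silently, which is what makes deletions auditable. The following is a minimal deletion-audit sketch, assuming the destination table carries the `__START_AT` and `__END_AT` columns that Databricks uses for SCD type 2 history and has a primary key column named `id` (both the table name and the key column are assumptions to adapt):

```python
from pyspark.sql import functions as F

# Hypothetical SCD type 2 destination table.
history = spark.read.table("main.ingest.zendesk_tickets")

# Keys that still have an open version (__END_AT is null) are live records.
open_keys = history.filter(F.col("__END_AT").isNull()).select("id")

# Keys whose versions are all closed have been deleted in the source.
deleted = (
    history.filter(F.col("__END_AT").isNotNull())
    .join(open_keys, on="id", how="left_anti")
    .select("id", "__END_AT")
    .distinct()
)
```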
Supported Zendesk products
Zendesk offers a variety of products, including Zendesk Support, Zendesk Chat, and Zendesk Talk. Databricks supports ingestion only from Zendesk Support, including ticket data, knowledge base content, and community forum data.
API rate limits
Zendesk enforces rate limits on its REST API, particularly for incremental data endpoints. To ensure consistent ingestion performance, Databricks recommends limiting the number of tables that you ingest at the same time. For example, you can split tables across multiple pipelines and schedule them at different times, as in the following sketch.
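One way to stagger pipelines is to wrap each one in a scheduled job. The following is a minimal sketch using the Jobs API, with placeholder pipeline IDs and cron expressions; `HOST` and `HEADERS` are the same placeholders as in the earlier sketch:

```python
# Stagger two ingestion pipelines two hours apart so their requests to the
# Zendesk API don't overlap.
staggered = [
    ("<pipeline-id-for-large-tables>", "0 0 1 * * ?"),  # 01:00 UTC daily
    ("<pipeline-id-for-small-tables>", "0 0 3 * * ?"),  # 03:00 UTC daily
]

for pipeline_id, cron in staggered:
    job_spec = {
        "name": f"zendesk-ingest-{pipeline_id}",
        "tasks": [
            {
                "task_key": "run_pipeline",
                "pipeline_task": {"pipeline_id": pipeline_id},
            }
        ],
        "schedule": {"quartz_cron_expression": cron, "timezone_id": "UTC"},
    }
    r = requests.post(f"{HOST}/api/2.1/jobs/create", headers=HEADERS, json=job_spec)
    r.raise_for_status()
```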
Nested and custom fields
Some fields in the schema might be nested within complex structures, and the inner fields can include custom attributes. To ensure compatibility and consistency, the connector represents such fields as a string data type. For example, the custom_fields column in the tickets table is an array of custom objects that can have any number of subfields.
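Downstream, you can parse these stringified fields back into structured columns with `from_json`. The following is a minimal sketch, assuming a destination table named `main.ingest.tickets` and that each custom field is an `{id, value}` object; both the table name and the schema are assumptions to adjust for your account's custom fields:

```python
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Assumed shape of Zendesk ticket custom fields: an array of {id, value}
# objects. Adjust to match the custom fields defined in your account.
custom_fields_schema = ArrayType(
    StructType(
        [
            StructField("id", StringType()),
            StructField("value", StringType()),
        ]
    )
)

tickets = spark.read.table("main.ingest.tickets")  # hypothetical destination table

parsed = tickets.withColumn(
    "custom_fields_parsed",
    F.from_json(F.col("custom_fields"), custom_fields_schema),
)
```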