ServiceNow connector limitations
The ServiceNow connector is in gated Public Preview. To participate in the preview, contact your Databricks account team.
This article lists limitations and considerations when ingesting from ServiceNow using Databricks Lakeflow Connect.
- When a source table is deleted, the destination table is not automatically deleted. You must drop the destination table manually (see the first example after this list). This behavior differs from DLT behavior.
- During source maintenance periods, Databricks might not be able to access your data.
- If a source table name conflicts with an existing destination table name, the pipeline update fails.
- The connector can only ingest tables with a `sys_id` column.
- Incrementally ingested tables must have one of the following columns. If none of these columns exist, the connector snapshots the source table and overwrites the destination table.
  - `sys_updated_on`
  - `sys_created_on`
  - `sys_archived`

  Some tables cannot be incrementally pulled, even if one of these three columns is present. In these cases, the connector also snapshots the source table and overwrites the destination table. To inspect a table's columns before you configure a pipeline, see the column check example after this list.
- There is a limit of 250 tables per pipeline. To ingest more than 250 tables, create multiple pipelines (see the batching example after this list).
- The connector does not support multiple destination schemas in a single pipeline.
- If the table name or any column name contains the character `$`, the connector cannot ingest the table.
- If any row lacks a cursor key, the connector cannot ingest the table. To check for rows with empty cursor values, see the last example after this list.
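
When a source table is deleted and removed from the pipeline, you can drop the leftover destination table yourself. The following is a minimal sketch for a Databricks notebook, where `spark` is predefined; the three-level table name is a hypothetical example.

```python
# Drop the leftover destination table after the source table is deleted.
# `my_catalog.servicenow.incident` is a hypothetical destination table name.
spark.sql("DROP TABLE IF EXISTS my_catalog.servicenow.incident")
```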
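To see whether a table can be ingested, and whether it can be ingested incrementally, you can inspect its columns before configuring a pipeline. The following is a minimal sketch that samples one record from the ServiceNow Table API and checks the field names; the instance name, credentials, and environment variable names are assumptions, and an empty or access-restricted table returns no fields to inspect.

```python
import os

import requests

# Assumptions: instance name and credentials come from environment variables
# (SN_INSTANCE such as "dev12345", plus SN_USER / SN_PASSWORD).
INSTANCE = os.environ["SN_INSTANCE"]
AUTH = (os.environ["SN_USER"], os.environ["SN_PASSWORD"])

CURSOR_COLUMNS = {"sys_updated_on", "sys_created_on", "sys_archived"}


def sample_columns(table: str) -> set[str]:
    """Fetch one record and return the field names it exposes (including inherited fields)."""
    resp = requests.get(
        f"https://{INSTANCE}.service-now.com/api/now/table/{table}",
        params={"sysparm_limit": "1"},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()["result"]
    return set(rows[0].keys()) if rows else set()


def describe(table: str) -> None:
    cols = sample_columns(table)
    if not cols:
        print(f"{table}: empty table or no read access; cannot inspect columns")
    elif "sys_id" not in cols:
        print(f"{table}: no sys_id column; the connector cannot ingest this table")
    elif cols & CURSOR_COLUMNS:
        print(f"{table}: has {sorted(cols & CURSOR_COLUMNS)}; incremental ingestion is possible")
    else:
        print(f"{table}: no cursor column; expect snapshot-and-overwrite behavior")


describe("incident")
```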
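To stay under the 250-table limit, you can split a long table list into batches and create one pipeline per batch. This is a minimal sketch; `create_pipeline` is a hypothetical placeholder for however you create ingestion pipelines (UI, API, or SDK), not a real Databricks call.

```python
MAX_TABLES_PER_PIPELINE = 250


def create_pipeline(name: str, tables: list[str]) -> None:
    """Hypothetical placeholder: replace with your actual pipeline-creation step."""
    print(f"{name}: {len(tables)} tables")


def create_pipelines_in_batches(tables: list[str], prefix: str = "servicenow_ingest") -> None:
    """Create one pipeline per batch of at most 250 tables."""
    for i in range(0, len(tables), MAX_TABLES_PER_PIPELINE):
        batch = tables[i : i + MAX_TABLES_PER_PIPELINE]
        create_pipeline(f"{prefix}_{i // MAX_TABLES_PER_PIPELINE + 1}", batch)


# Example with hypothetical table names: 600 tables -> 3 pipelines.
create_pipelines_in_batches([f"u_table_{n}" for n in range(600)])
```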
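To find out whether a table contains rows without a cursor value, you can query the ServiceNow Table API for rows where the cursor column is empty. This sketch reuses the same assumed instance and credential environment variables as the column check above.

```python
import os

import requests

INSTANCE = os.environ["SN_INSTANCE"]
AUTH = (os.environ["SN_USER"], os.environ["SN_PASSWORD"])


def has_rows_without_cursor(table: str, cursor: str = "sys_updated_on") -> bool:
    """Return True if at least one row has an empty value in the cursor column."""
    resp = requests.get(
        f"https://{INSTANCE}.service-now.com/api/now/table/{table}",
        params={
            "sysparm_query": f"{cursor}ISEMPTY",
            "sysparm_fields": "sys_id",
            "sysparm_limit": "1",
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return bool(resp.json()["result"])


if has_rows_without_cursor("incident"):
    print("incident has rows without a cursor value; the connector cannot ingest it")
```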