ServiceNow connector limitations
Preview
The ServiceNow connector is in Public Preview.
This article lists limitations and considerations for ingesting data from ServiceNow using Databricks Lakeflow Connect.
General SaaS connector limitations
The limitations in this section apply to all SaaS connectors in Lakeflow Connect.
- When you run a scheduled pipeline, alerts don't trigger immediately. Instead, they trigger when the next update runs.
- When a source table is deleted, the destination table is not automatically deleted. You must delete the destination table manually. This behavior differs from the behavior of Lakeflow Declarative Pipelines.
- During source maintenance periods, Databricks might not be able to access your data.
- If a source table name conflicts with an existing destination table name, the pipeline update fails.
- Multi-destination pipeline support is API-only.
- You can optionally rename a table that you ingest. If you rename a table in your pipeline, it becomes an API-only pipeline, and you can no longer edit the pipeline in the UI.
- Column-level selection and deselection are API-only. The first sketch after this list shows both renaming a table and selecting columns in a pipeline spec.
- If you select a column after a pipeline has already started, the connector does not automatically backfill data for the new column. To ingest historical data, manually run a full refresh on the table (see the full-refresh sketch after this list).
- Databricks can't ingest two or more tables with the same name in the same pipeline, even if they come from different source schemas.
- Managed ingestion pipelines aren't supported for workspaces in AWS GovCloud regions (FedRAMP High).
- Managed ingestion pipelines aren't supported for FedRAMP Moderate workspaces in the us-east-2 or us-west-1 regions.
- The connector assumes that cursor columns in the source system are monotonically increasing.
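
For the API-only features above, here's a minimal sketch of creating a managed ingestion pipeline through the Databricks Pipelines REST API with a renamed table and an explicit column list. The workspace URL, token, connection name (`servicenow-conn`), and catalog, schema, and table names are placeholders, and the `ingestion_definition` field names should be verified against the current Pipelines API reference.

```python
# Sketch: create a ServiceNow ingestion pipeline that renames a table and
# selects specific columns. All names and credentials are placeholders.
import requests

HOST = "https://<workspace-host>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"   # placeholder token

payload = {
    "name": "servicenow-ingest",
    "ingestion_definition": {
        "connection_name": "servicenow-conn",  # assumed connection name
        "objects": [
            {
                "table": {
                    "source_schema": "servicenow",
                    "source_table": "incident",
                    "destination_catalog": "main",
                    "destination_schema": "ingest",
                    # Renaming the destination table makes this an
                    # API-only pipeline.
                    "destination_table": "incident_renamed",
                    "table_configuration": {
                        # Column-level selection is API-only.
                        "include_columns": ["sys_id", "sys_updated_on", "state"]
                    },
                }
            }
        ],
    },
}

resp = requests.post(
    f"{HOST}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["pipeline_id"])
```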
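
And a sketch of triggering a full refresh for a single table after adding a column to its selection, using the pipeline updates endpoint. The pipeline ID and table name are placeholders; `full_refresh_selection` limits the refresh to the listed tables.

```python
# Sketch: trigger a full refresh for one table so the newly selected
# column is backfilled. Pipeline ID and table name are placeholders.
import requests

HOST = "https://<workspace-host>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"   # placeholder token
PIPELINE_ID = "<pipeline-id>"       # placeholder pipeline ID

resp = requests.post(
    f"{HOST}/api/2.0/pipelines/{PIPELINE_ID}/updates",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # full_refresh_selection restricts the full refresh to these tables.
    json={"full_refresh_selection": ["incident_renamed"]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["update_id"])
```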
Connector-specific limitations
The limitations in this section are specific to the ServiceNow connector.
Pipelines
- The connector can only ingest tables with a `sys_id` column.
- Incrementally ingested tables must have one of the following columns. If none of these columns exist, the connector snapshots the source table and overwrites the destination table:
  - `sys_updated_on`
  - `sys_created_on`
  - `sys_archived`
- Some tables can't be ingested incrementally, even if one of these three columns is present. In these cases, the connector also snapshots the source table and overwrites the destination table. To check which cursor columns a table exposes, see the sketch after this list.
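
One way to check which cursor columns a table exposes is to fetch a sample record through ServiceNow's Table API and inspect its fields. This is a sketch, assuming basic authentication and a placeholder instance URL; note that an empty table yields no information this way.

```python
# Sketch: list which of the three cursor columns exist on a ServiceNow
# table by inspecting the fields of a sample record.
import requests

INSTANCE = "https://<instance>.service-now.com"  # placeholder instance URL
AUTH = ("<user>", "<password>")                  # placeholder credentials
CURSOR_COLUMNS = {"sys_updated_on", "sys_created_on", "sys_archived"}

def existing_cursor_columns(table: str) -> set[str]:
    """Return the cursor columns that appear on a sample record of `table`."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/{table}",
        params={"sysparm_limit": 1},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()["result"]
    if not rows:
        return set()  # empty table: no record to inspect
    return CURSOR_COLUMNS & set(rows[0])

print(existing_cursor_columns("incident"))
```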
Tables
- There is a limit of 250 tables per pipeline. To ingest more than 250 tables, create multiple pipelines.
- If a table name or any of its column names contains the `$` character, the connector can't ingest the table. The sketch after this list filters out such tables and chunks the rest into batches of 250.
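
A small sketch that applies both limits: drop tables whose names contain `$` and split the remainder into batches of at most 250, one batch per pipeline. The table list is illustrative, and column names containing `$` would still need a per-table check (for example, by inspecting a sample record as in the earlier sketch).

```python
# Sketch: filter out tables the connector can't ingest ('$' in the name)
# and chunk the rest into batches of 250, one batch per pipeline.
CANDIDATES = ["incident", "task", "u_custom$table", "change_request"]  # illustrative

MAX_TABLES_PER_PIPELINE = 250
ingestible = [t for t in CANDIDATES if "$" not in t]
batches = [
    ingestible[i : i + MAX_TABLES_PER_PIPELINE]
    for i in range(0, len(ingestible), MAX_TABLES_PER_PIPELINE)
]

for n, tables in enumerate(batches, start=1):
    print(f"pipeline {n}: {len(tables)} tables")
```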