ServiceNow connector FAQs

Preview

The ServiceNow connector is in Public Preview.

This page answers frequently asked questions about the ServiceNow connector in Databricks Lakeflow Connect.

General managed connector FAQs

The answers in Managed connector FAQs apply to all managed connectors in Lakeflow Connect. Keep reading for ServiceNow-specific FAQs.

Connector-specific FAQs

The answers in this section are specific to the ServiceNow connector.

How does the connector pull data from ServiceNow?

The ServiceNow connector uses the ServiceNow Table API v2.
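For illustration, reads against the Table API v2 are plain REST calls against a per-table endpoint. The sketch below builds the URL and pagination parameters for one page of such a read; the instance name, table, and function are placeholders for illustration, not the connector's actual implementation.

```python
# Hypothetical sketch of a ServiceNow Table API v2 page read.
# The instance name and table name below are placeholders.

def build_table_api_request(instance: str, table: str,
                            limit: int = 1000, offset: int = 0):
    """Build the URL and query parameters for one page of a Table API read."""
    url = f"https://{instance}.service-now.com/api/now/v2/table/{table}"
    params = {
        "sysparm_limit": str(limit),    # page size
        "sysparm_offset": str(offset),  # pagination offset
    }
    return url, params

url, params = build_table_api_request("example", "incident", limit=500)
print(url)  # https://example.service-now.com/api/now/v2/table/incident
```

A client would then issue a GET request against this URL with ServiceNow credentials, advancing `sysparm_offset` until a page comes back with fewer than `sysparm_limit` rows.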

Could using the Table API impact the ServiceNow instance?

Yes. However, the impact depends on the amount of data ingested. For example, it is typically more noticeable in the initial snapshot than during an incremental read.

How does the connector pull data incrementally?

To ingest a table incrementally, the table must have one of the following columns. If none of these columns exists, the connector snapshots the source table and overwrites the destination table.

  • sys_updated_on
  • sys_created_on
  • sys_archived
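The fallback behavior above can be sketched as a simple preference check over the table's columns. The column names come from the list; the function itself is illustrative only, not the connector's actual code.

```python
# Sketch of the cursor-column fallback described above. The preference
# order follows the documented list; the function is illustrative only.

CURSOR_CANDIDATES = ["sys_updated_on", "sys_created_on", "sys_archived"]

def pick_cursor_column(table_columns):
    """Return the first available cursor column, or None to force a snapshot."""
    for candidate in CURSOR_CANDIDATES:
        if candidate in table_columns:
            return candidate
    return None  # no cursor column: snapshot the source and overwrite

print(pick_cursor_column(["sys_id", "sys_created_on"]))  # sys_created_on
print(pick_cursor_column(["sys_id", "name"]))            # None
```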

What permissions does the connector require?

The connector requires the admin and snc_read_only roles. If this is a blocker for your use case, reach out to your account team.

Why is my ServiceNow ingestion performance slow?

Databricks recommends working with your ServiceNow administrator to enable ServiceNow-side indexing on the cursor field. The connector selects the cursor column in the following order of preference, based on availability: sys_updated_on (first choice), sys_created_on (second choice), then sys_archived (third choice).

Indexing the cursor column is a standard approach for improving performance when ingesting through the ServiceNow APIs. The index allows Databricks to avoid fully scanning the cursor column, which can bottleneck large updates. For instructions, see Create a table index in the ServiceNow documentation. If the issue persists, create a support ticket.
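To illustrate why the cursor index matters: an incremental read typically filters and sorts on the cursor column using ServiceNow's encoded-query syntax, so every poll hits that column. The sketch below builds such a query; the watermark timestamp and function name are placeholders for illustration.

```python
# Illustrative sysparm_query for an incremental read, assuming
# sys_updated_on is the cursor. The watermark value is a placeholder.

def incremental_query(cursor_column: str, watermark: str) -> str:
    """Encode 'rows changed after the watermark, ordered by the cursor'."""
    # ServiceNow encoded-query syntax: '^' joins clauses, and
    # 'ORDERBY<field>' sorts ascending on that field.
    return f"{cursor_column}>{watermark}^ORDERBY{cursor_column}"

print(incremental_query("sys_updated_on", "2024-01-01 00:00:00"))
# sys_updated_on>2024-01-01 00:00:00^ORDERBYsys_updated_on
```

Without an index, each such query forces a full scan over the cursor column on the ServiceNow side; with one, the filter and sort can be served from the index.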