Smartsheet connector limitations

Beta

This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.

This page contains information about known limitations of the managed Smartsheet connector in Lakeflow Connect.

General limitations

  • When you run a scheduled pipeline, alerts don't trigger immediately. Instead, they trigger when the next update runs.
  • When a source table is deleted, the destination table is not automatically deleted; you must delete it manually. This differs from the behavior of Lakeflow Spark Declarative Pipelines.
  • During source maintenance periods, Databricks might not be able to access your data.
  • If a source table name conflicts with an existing destination table name, the pipeline update fails.
  • Multi-destination pipeline support is API-only.
  • You can optionally rename a table that you ingest. If you rename a table in your pipeline, it becomes an API-only pipeline, and you can no longer edit the pipeline in the UI.
  • Column-level selection and deselection are API-only. Table renames and column selection are both shown in the sketch after this list.
  • If you select a column after a pipeline has already started, the connector does not automatically backfill data for the new column. To ingest historical data, manually run a full refresh on the table (also shown in the sketch after this list).
  • Databricks can't ingest two or more tables with the same name in the same pipeline, even if they come from different source schemas.
  • The connector assumes that cursor columns in the source system are monotonically increasing.
  • The connector ingests raw data without transformations. Apply transformations in downstream Lakeflow Spark Declarative Pipelines.
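
Several of the options above are API-only. The following Python sketch shows one way to define such a pipeline against the Databricks Pipelines REST API: it renames a destination table, selects specific columns, and then triggers a manual full refresh of that table. The workspace URL, token, connection, catalog, schema, table, and column names are all placeholders, and the exact `ingestion_definition` payload fields should be verified against the Lakeflow Connect API reference.

```python
import requests

# Placeholder values; replace with your workspace URL and credentials.
HOST = "https://<workspace-url>"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Create an ingestion pipeline that renames a table and selects specific
# columns. Using either option makes this an API-only pipeline.
payload = {
    "name": "smartsheet_ingestion_pipeline",
    "ingestion_definition": {
        "connection_name": "my_smartsheet_connection",  # placeholder
        "objects": [
            {
                "table": {
                    "source_schema": "my_workspace",        # placeholder
                    "source_table": "Project Tracker",      # placeholder
                    "destination_catalog": "main",
                    "destination_schema": "smartsheet",
                    # Rename the destination table (API-only).
                    "destination_table": "project_tracker_renamed",
                    "table_configuration": {
                        # Column-level selection (API-only).
                        "include_columns": ["Task Name", "Status", "Due Date"]
                    },
                }
            }
        ],
    },
}
resp = requests.post(f"{HOST}/api/2.0/pipelines", headers=HEADERS, json=payload)
resp.raise_for_status()
pipeline_id = resp.json()["pipeline_id"]

# If you select a new column later, backfill its historical data by
# triggering a manual full refresh scoped to the affected table.
refresh = {"full_refresh_selection": ["project_tracker_renamed"]}
resp = requests.post(
    f"{HOST}/api/2.0/pipelines/{pipeline_id}/updates",
    headers=HEADERS,
    json=refresh,
)
resp.raise_for_status()
```

The `full_refresh_selection` field scopes the full refresh to the listed destination tables, so you can backfill a newly selected column without refreshing every table in the pipeline.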

Supported source object types

The Smartsheet connector supports the following source object types:

  • Sheet: Ingests an individual Smartsheet spreadsheet.
  • Report: Ingests a Smartsheet report, which aggregates rows from multiple source sheets, or multiple rows from one sheet, into a single read-only view.

Connector-specific limitations

No incremental sync

This connector does not support incremental syncing. Each pipeline run performs a full refresh of the destination table.

Delta Lake column mapping restrictions

The Smartsheet connector uses Delta Lake column mapping to support Smartsheet column names that contain special characters, spaces, and punctuation. This enables accurate schema representation without renaming columns during ingestion. However, enabling column mapping introduces the following limitations:

  • Streaming reads are not supported: Tables created by this connector cannot be read as streaming sources. This is a constraint of Delta Lake column mapping, which upgrades the table protocol to reader version 2 and writer version 5; those protocol versions are incompatible with Structured Streaming reads.
  • Change data feed (CDF) is not supported: You can't enable Delta CDF on tables that use column mapping. If your downstream pipelines use CDF, choose an alternative approach.
  • Requires Databricks Runtime 10.4 LTS or above: Tables written by this connector can only be read by clusters running Databricks Runtime 10.4 LTS or higher. Older runtimes do not support the required Delta protocol versions.

Column mapping is enabled because Smartsheet allows column names with characters such as spaces, commas, and parentheses that are not valid in standard Delta table schemas. For more information, see Delta Lake column mapping.
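
To see these restrictions on a concrete table, you can inspect the Delta metadata of a table created by the connector. This is a minimal sketch for a Databricks Python notebook (where `spark` is predefined); the table name is a placeholder, and the expected values follow the protocol versions described above.

```python
# Placeholder table name; replace with a table created by the connector.
table_name = "main.smartsheet.project_tracker_renamed"

# DESCRIBE DETAIL exposes the Delta protocol versions and table properties.
detail = spark.sql(f"DESCRIBE DETAIL {table_name}").collect()[0]
print(detail.minReaderVersion, detail.minWriterVersion)    # expect 2 and 5
print(detail.properties.get("delta.columnMapping.mode"))   # expect 'name'

# Streaming reads are not supported on these tables, so read in batch instead.
df = spark.read.table(table_name)        # batch read works
# spark.readStream.table(table_name)     # would fail: column mapping blocks streaming
```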

For field names and type mappings, see Smartsheet connector reference.