Smartsheet connector reference
This page contains reference documentation for the managed Smartsheet connector in Lakeflow Connect.
Type mapping
The Smartsheet connector maps Smartsheet column types to Spark types during ingestion. The following table describes the mapping for each supported column type.
Smartsheet column type | Spark target type | Logic / transformation behavior |
|---|---|---|
| | | Always mapped to |
| | | ISO-8601 formatted date string from the Smartsheet API. |
| | | UTC timestamp from the Smartsheet API. |
| | | Maps |
| | | Each contact in the list is extracted as a struct. |
| | | The display value of the selected option is ingested. |
| | | Each selected option is ingested as an element in the array. |
| | | Human-readable format (for example, |
| | | Task dependency representation is preserved as a string. |
| | | System-generated auto-incrementing row label. Read-only. |
| | | Used for system columns (for example, Created At). UTC-based. |
| | | Used for system columns such as Created At. UTC-based. |
| | | Row-level audit metadata. UTC-based. |
| Formulas | [Derived] | The evaluated display value is ingested and cast to the column's declared target type. Formula strings themselves are not ingested. |
Pipeline configuration parameters
The following tables describe all available parameters for configuring a Smartsheet ingestion pipeline.
Connection parameters
Parameter | Required | Description |
|---|---|---|
| | Yes | Name of the Unity Catalog connection for Smartsheet. |
Source parameters
Parameter | Required | Description |
|---|---|---|
| | Yes | Always |
| | Yes | The 16-digit Smartsheet sheet or report ID. |
Destination parameters
Parameter | Required | Description |
|---|---|---|
| | Yes | Target Unity Catalog catalog. |
| | Yes | Target schema within the catalog. |
| | No | Target table name. Defaults to the Smartsheet sheet or report ID. |
`table_configuration` options
Parameter | Required | Default | Description |
|---|---|---|---|
| `row_filter` | No | All rows | DBSQL filter expression for selective row ingestion. See Row filtering. |
| | No | All columns | List of column names to include in the ingested table. If specified, only the listed columns are ingested. |
| | No | None | List of column names to exclude from the ingested table. Cannot be used together with the include-columns option above. |
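As an illustration, a minimal `table_configuration` that ingests only the first 100 rows might look like the following sketch. It uses only the `row_filter` option; `row_number` is the 1-based row index described under Row filtering, and the placement of `table_configuration` within the surrounding pipeline spec is omitted here.

```json
"table_configuration": {
  "row_filter": "row_number <= 100"
}
```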
`connector_options`
Parameter | Required | Description |
|---|---|---|
| `enforce_schema` | No | When `false`, all columns are ingested as `STRING` instead of their mapped Spark types. Defaults to `true`. See `enforce_schema` behavior below. |
`enforce_schema` behavior

The `enforce_schema` option controls how the connector maps Smartsheet column types to Spark types during ingestion. It is set in `connector_options` and defaults to `true`.
- `enforce_schema: true` (default) — Each column is mapped to its Smartsheet-declared type according to the type mapping table above. Cells that do not conform to the declared type are set to `NULL` rather than causing the pipeline to fail. Use this setting for sheets with consistent, well-typed data.
- `enforce_schema: false` — All columns are ingested as `STRING`, regardless of their declared Smartsheet type. Use this setting for sheets with irregular data, frequently overridden column types, or when downstream systems handle type casting.

```json
"connector_options": {
  "enforce_schema": false
}
```
Row filtering
Use the `row_filter` option in `table_configuration` to ingest a subset of rows from a sheet or report. Rows are referenced by `row_number` (1-based) or by any column value using the column's Smartsheet title.

Supported operators: `=`, `!=`, `<`, `<=`, `>`, `>=`, `AND`, `OR`, `IN`, `BETWEEN`, `LIKE`
For more information, see Row filtering.
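As an illustration, the following sketch combines several of the supported operators in one `row_filter` expression. `Status` is a hypothetical column title used only for this example, and it assumes the filter references that column by its Smartsheet title as described above.

```json
"table_configuration": {
  "row_filter": "Status IN ('Open', 'Blocked') AND row_number BETWEEN 1 AND 500"
}
```

Column titles containing spaces or special characters may require quoting under DBSQL identifier rules; consult the DBSQL documentation for the exact syntax.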