
Smartsheet connector


This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.

Use the managed Smartsheet connector in Lakeflow Connect to ingest data from Smartsheet into Databricks.

What to know before you start

Databricks user persona

The workflow depends on your Databricks user persona:

  • Single-user: An admin user creates a Unity Catalog connection and an ingestion pipeline.
  • Multi-user: An admin user creates a connection that non-admin users then use to create their own pipelines.

Authentication method

The steps to create a connection depend on the authentication method you choose. For Smartsheet, only OAuth U2M is currently supported (see Authentication methods).
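
With OAuth U2M, the connection itself is created interactively in Catalog Explorer or the ingestion UI, and pipelines then reference it by name. The following is a minimal sketch of looking up an existing connection with the Databricks Python SDK; the connection name "my_smartsheet_connection" is a placeholder, not a value prescribed by this guide.

```python
# A minimal sketch, assuming a Unity Catalog connection named
# "my_smartsheet_connection" was already created interactively (OAuth U2M).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

conn = w.connections.get(name="my_smartsheet_connection")
print(conn.connection_type, conn.credential_type)
```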

Interface

The steps to create a pipeline depend on the interface you use.

Ingestion frequency

The pipeline schedule depends on your latency and cost requirements.
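
For example, an hourly schedule keeps latency low at a higher cost, while a daily schedule is cheaper. One way to run an existing ingestion pipeline on a schedule is a Lakeflow Job with a cron trigger. The sketch below uses the Databricks Python SDK; the job name, pipeline ID, and cron expression are placeholder values.

```python
# Sketch: trigger an existing ingestion pipeline at the top of every hour.
# The job name and pipeline ID below are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

w.jobs.create(
    name="smartsheet-ingest-hourly",
    tasks=[
        jobs.Task(
            task_key="run_ingestion",
            pipeline_task=jobs.PipelineTask(pipeline_id="<your-pipeline-id>"),
        )
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 * * * ?",  # Quartz syntax: every hour at :00
        timezone_id="UTC",
    ),
)
```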

Common patterns

Depending on your ingestion needs, the pipeline might use configurations like history tracking, column selection, and row filtering. Supported configurations vary by connector. See Feature availability.
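
For example, API-based column selection is expressed per table. The sketch below assumes the Smartsheet connector accepts the same table_configuration fields as other Lakeflow Connect managed connectors; the column names are hypothetical. The resulting object plugs into the table spec of the pipeline definition shown under Start ingesting from Smartsheet.

```python
# Sketch: per-table column selection, assuming the table_configuration
# shape used by other Lakeflow Connect managed connectors applies here.
from databricks.sdk.service import pipelines

table_config = pipelines.TableSpecificConfig(
    include_columns=["task_name", "status", "due_date"],  # hypothetical columns
)
# To deselect columns instead, use exclude_columns=[...].
```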

Start ingesting from Smartsheet

The following overview describes the end-to-end Smartsheet ingestion flow for each user type. A programmatic sketch follows the overview.

Admin

  1. Configure Smartsheet to enable authentication from Databricks. See Configure OAuth for Smartsheet ingestion.
  2. Either:
    • Use Catalog Explorer to create a connection to Smartsheet so that non-admins can create pipelines. See Smartsheet.
    • Use the data ingestion UI to create a connection and a pipeline at the same time. See Ingest data from Smartsheet.

Non-admin

Use any supported interface to create a pipeline from an existing connection. See Ingest data from Smartsheet.
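
Once the connection exists, a pipeline can also be created programmatically through the Pipelines API. The sketch below uses the Databricks Python SDK and assumes the Smartsheet connector follows the same ingestion_definition shape as other Lakeflow Connect managed connectors; the connection, source, and destination names are all placeholders.

```python
# Sketch: create an ingestion pipeline from an existing Smartsheet connection.
# All names below are hypothetical placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import pipelines

w = WorkspaceClient()

created = w.pipelines.create(
    name="smartsheet-ingest",
    ingestion_definition=pipelines.IngestionPipelineDefinition(
        connection_name="my_smartsheet_connection",
        objects=[
            pipelines.IngestionConfig(
                table=pipelines.TableSpec(
                    source_schema="my_workspace",    # assumed source container
                    source_table="project_tracker",  # assumed sheet to ingest
                    destination_catalog="main",
                    destination_schema="smartsheet_raw",
                )
            )
        ],
    ),
)
print(created.pipeline_id)
```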

Feature availability

  • UI-based pipeline authoring: Supported
  • API-based pipeline authoring: Supported
  • Declarative Automation Bundles: Supported
  • Incremental ingestion: Not supported
  • Unity Catalog governance: Supported
  • Orchestration using Lakeflow Jobs: Supported
  • SCD type 2: Not supported
  • API-based column selection and deselection: Supported
  • API-based row filtering: Supported
  • Automated schema evolution (new and deleted columns): Not applicable
  • Automated schema evolution (data type changes): Not applicable
  • Automated schema evolution (column renames): Not applicable
  • Automated schema evolution (new tables): Not applicable
  • Maximum number of tables per pipeline: 250

Authentication methods

  • OAuth U2M: Supported
  • OAuth M2M: Not supported
  • OAuth (manual refresh token): Not supported
  • Basic authentication (username/password): Not supported
  • Basic authentication (API key): Not supported
  • Basic authentication (service account JSON key): Not supported