Limitations for SQL Server connections

Preview

LakeFlow Connect is in gated Public Preview. To participate in the preview, contact your Databricks account team.

This article lists limitations and considerations for connecting to and ingesting data from SQL Server using LakeFlow Connect.

  • When you run a scheduled pipeline, alerts don’t trigger immediately. Instead, they trigger when the next update runs.

  • When a source table is deleted, the destination table is automatically deleted. This is consistent with Delta Live Tables behavior.

  • During maintenance periods, Databricks might not be able to access your data.

  • If the source table name conflicts with an existing destination table name, the flow fails.

  • The staging catalog cannot be a foreign catalog.

  • To use Microsoft change data capture (CDC), you must have SQL Server 2017 or above.

  • To use Microsoft change tracking, you must have SQL Server 2012 or above.
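
Before configuring ingestion, you can confirm that your source meets these version requirements. The following is a minimal sketch using pyodbc; the connection details are placeholders, and the version mapping (SQL Server 2012 reports major version 11, SQL Server 2017 reports major version 14) follows Microsoft's versioning scheme.

```python
# Check whether the source SQL Server version supports Microsoft CDC (2017+)
# or change tracking (2012+). Connection details below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-server>;DATABASE=<your-database>;"
    "UID=<username>;PWD=<password>"
)
major_version = int(conn.cursor().execute(
    "SELECT CAST(SERVERPROPERTY('ProductMajorVersion') AS INT)"
).fetchval())

# SQL Server 2012 reports major version 11; SQL Server 2017 reports 14.
print("Change tracking supported:", major_version >= 11)
print("Microsoft CDC supported:", major_version >= 14)
```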

Database variations

  • Only Azure SQL Database and Amazon RDS for SQL Server are supported.

  • Only username/password authentication is supported.

Tables

  • Databricks recommends ingesting 50 or fewer tables per pipeline. However, there is no known limit on the number of rows or columns per table.

  • Databricks doesn’t support ingesting tables whose names differ only in case (for example, MyTable and MYTABLE) in a single ingestion pipeline. To ingest such tables, create two gateway-ingestion pipeline pairs that publish to different target schemas, as in the sketch that follows.
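
As a rough illustration of that workaround, the sketch below creates two gateway-ingestion pipeline pairs with the Databricks SDK for Python, each publishing to a different target schema. The connection, catalog, and schema names are placeholders, and the field names follow the SDK's pipelines module, which can change between SDK versions; treat this as an outline, not a definitive recipe.

```python
# Hypothetical workaround sketch: one gateway-ingestion pipeline pair per
# case variant, each publishing to its own target schema.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import pipelines

w = WorkspaceClient()

def create_pair(pair_name: str, source_table: str, destination_schema: str):
    # Create the gateway first; the ingestion pipeline references its ID.
    gateway = w.pipelines.create(
        name=f"{pair_name}-gateway",
        gateway_definition=pipelines.IngestionGatewayPipelineDefinition(
            connection_name="my_sqlserver_connection",  # placeholder
            gateway_storage_catalog="main",             # placeholder
            gateway_storage_schema="staging",           # placeholder
            gateway_storage_name=f"{pair_name}-gateway",
        ),
    )
    w.pipelines.create(
        name=f"{pair_name}-ingestion",
        serverless=True,  # ingestion pipelines must run in Serverless mode
        ingestion_definition=pipelines.IngestionPipelineDefinition(
            ingestion_gateway_id=gateway.pipeline_id,
            objects=[
                pipelines.IngestionConfig(
                    table=pipelines.TableSpec(
                        source_schema="dbo",        # placeholder
                        source_table=source_table,
                        destination_catalog="main", # placeholder
                        destination_schema=destination_schema,
                    )
                )
            ],
        ),
    )

# One pair per case variant, writing to separate target schemas.
create_pair("mytable", "MyTable", "sqlserver_mytable")
create_pair("mytable-upper", "MYTABLE", "sqlserver_mytable_upper")
```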

Schema evolution

Schema evolution is not supported. Source schema changes require a full refresh of the destination tables.
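
When the source schema does change, you can re-sync by triggering a full refresh of the affected destination tables. A minimal sketch with the Databricks SDK for Python follows; the pipeline ID and table name are placeholders.

```python
# Trigger a full refresh of specific destination tables after a source
# schema change. The ID and table name below are placeholders.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.pipelines.start_update(
    pipeline_id="<ingestion-pipeline-id>",
    full_refresh_selection=["main.sqlserver_schema.customers"],
)
```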

Data types

  • The following SQL-92 data types are supported:

    • Numeric (fixed precision and floating point)

    • String and binary

    • Date, time, and timestamp

  • Because of these SQL-92 data type restrictions, Databricks does not recommend using the AdventureWorksLT sample database for testing. To check a source for unsupported column types, see the sketch that follows.
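
To find columns that might fall outside these categories before you ingest, you can scan the source's information schema. The sketch below uses pyodbc; the allow-list is an assumption that loosely maps the SQL-92 categories above to SQL Server type names, not an authoritative support matrix.

```python
# Flag source columns whose types fall outside the SQL-92 categories above.
# The SUPPORTED_TYPES set is an approximation for illustration only.
import pyodbc

SUPPORTED_TYPES = {
    # numeric (fixed precision and floating point)
    "bigint", "int", "smallint", "tinyint", "decimal", "numeric", "float", "real",
    # string and binary
    "char", "varchar", "nchar", "nvarchar", "binary", "varbinary",
    # date, time, and timestamp
    "date", "time", "datetime", "datetime2", "smalldatetime",
}

conn = pyodbc.connect("<your-odbc-connection-string>")  # placeholder
rows = conn.cursor().execute(
    "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE "
    "FROM INFORMATION_SCHEMA.COLUMNS"
).fetchall()

for schema, table, column, data_type in rows:
    if data_type.lower() not in SUPPORTED_TYPES:
        print(f"Review before ingesting: {schema}.{table}.{column} ({data_type})")
```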

Pipelines

  • Each gateway supports only one ingestion pipeline. Create a corresponding gateway for each ingestion pipeline.

  • The gateway must run in Classic mode.

  • The ingestion pipeline must run in Serverless mode.

  • Each ingestion pipeline supports only a single destination catalog and schema. To write to multiple destination catalogs or schemas, create multiple gateway-ingestion pipeline pairs.

  • Only triggered mode is supported for running ingestion pipelines.
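
Because ingestion pipelines run only in triggered mode, updates run on demand or on a schedule rather than continuously. One way to schedule them is to wrap the pipeline in a job, as in this minimal sketch with the Databricks SDK for Python; the job name, pipeline ID, and cron expression are placeholders.

```python
# Schedule hourly triggered updates of an ingestion pipeline via a job.
# Names, IDs, and the cron expression below are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()
w.jobs.create(
    name="sqlserver-ingestion-schedule",
    tasks=[
        jobs.Task(
            task_key="run_ingestion",
            pipeline_task=jobs.PipelineTask(pipeline_id="<ingestion-pipeline-id>"),
        )
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 * * * ?",  # top of every hour
        timezone_id="UTC",
    ),
)
```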