Ingest data from SQL Server

Learn how to ingest data from SQL Server into Databricks using Lakeflow Connect.

The SQL Server connector supports Azure SQL Database, Azure SQL Managed Instance, and Amazon RDS for SQL Server. It also supports SQL Server running on Azure virtual machines (VMs) and Amazon EC2, as well as on-premises SQL Server reachable over Azure ExpressRoute or AWS Direct Connect networking.

Requirements

  • To create an ingestion gateway and an ingestion pipeline, you must first meet the following requirements:

    • Your workspace is enabled for Unity Catalog.

    • Serverless compute is enabled for your workspace. See Serverless compute requirements.

    • If you plan to create a connection: You have CREATE CONNECTION privileges on the metastore. See Manage privileges in Unity Catalog.

      If your connector supports UI-based pipeline authoring, you can create the connection and the pipeline at the same time by completing the steps on this page. However, if you use API-based pipeline authoring, you must create the connection in Catalog Explorer before you complete the steps on this page. See Connect to managed ingestion sources.

    • If you plan to use an existing connection: You have USE CONNECTION privileges or ALL PRIVILEGES on the connection.

    • You have USE CATALOG privileges on the target catalog.

    • You have USE SCHEMA, CREATE TABLE, and CREATE VOLUME privileges on an existing schema or CREATE SCHEMA privileges on the target catalog.

    • You have access to a primary SQL Server instance. Change tracking and change data capture features are not supported on read replicas or secondary instances.

    • You have unrestricted permissions to create clusters, or you use a custom policy (API only). A custom policy for the gateway must meet the following requirements:

      • Family: Job Compute

      • Policy family overrides:

        {
          "cluster_type": {
            "type": "fixed",
            "value": "dlt"
          },
          "num_workers": {
            "type": "unlimited",
            "defaultValue": 1,
            "isOptional": true
          },
          "runtime_engine": {
            "type": "fixed",
            "value": "STANDARD",
            "hidden": true
          }
        }
      • Databricks recommends specifying the smallest possible worker nodes for ingestion gateways, because worker size does not impact gateway performance. The following compute policy lets Databricks size the ingestion gateway to meet the needs of your workload. The minimum requirement is 8 cores to enable efficient and performant data extraction from your source database.

        JSON
        {
          "driver_node_type_id": {
            "type": "fixed",
            "value": "r5n.16xlarge"
          },
          "node_type_id": {
            "type": "fixed",
            "value": "m5n.large"
          }
        }

      For more information about cluster policies, see Select a compute policy.
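Taken together, the two fragments above form a single gateway policy definition. The following is a minimal sketch of assembling that definition in Python before passing it to the cluster policies API as a JSON string; the node types are the example values from above, and you should adjust them for your cloud and workload:

```python
import json

# Policy family overrides required for an ingestion gateway,
# combined from the two fragments above.
gateway_policy = {
    "cluster_type": {"type": "fixed", "value": "dlt"},
    "num_workers": {"type": "unlimited", "defaultValue": 1, "isOptional": True},
    "runtime_engine": {"type": "fixed", "value": "STANDARD", "hidden": True},
    # Example node types; workers can be small because they do not
    # affect gateway performance.
    "driver_node_type_id": {"type": "fixed", "value": "r5n.16xlarge"},
    "node_type_id": {"type": "fixed", "value": "m5n.large"},
}

# Cluster policies accept the definition as a JSON document.
policy_json = json.dumps(gateway_policy, indent=2)
print(policy_json)
```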

  • To ingest from SQL Server, you must first complete the steps in Configure Microsoft SQL Server for ingestion into Databricks.

Create a gateway and an ingestion pipeline

  1. In the sidebar of the Databricks workspace, click Data Ingestion.

  2. On the Add data page, under Databricks connectors, click SQL Server.

  3. On the Connection page of the ingestion wizard, select the connection that stores the SQL Server access credentials that you set up in Configure Microsoft SQL Server for ingestion into Databricks. If you have the CREATE CONNECTION privilege on the metastore, you can click Create connection to create a new connection with your SQL Server authentication details.

  4. Click Next.

  5. On the Ingestion setup page, enter a unique name for the ingestion pipeline. This pipeline moves data from the staging location to the destination.

  6. Select a catalog and a schema to write event logs to. The event log contains audit logs, data quality checks, pipeline progress, and errors. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.

  7. (Optional) Set Auto full refresh for all tables to On. When auto refresh is on, the pipeline automatically tries to fix issues like log cleanup events and certain types of schema evolution by fully refreshing the impacted table. If history tracking is enabled, a full refresh erases that history.

  8. Enter a unique name for the ingestion gateway. The gateway is a pipeline that extracts changes from the source and stages them for the ingestion pipeline to load.

  9. Select a catalog and a schema for the Staging location. A volume is created in this location to stage extracted data. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.

  10. Click Create pipeline and continue.

  11. On the Source page, select the tables to ingest. If you select specific tables, you can configure table settings:

    a. (Optional) On the Settings tab, specify a Destination name for each ingested table. This is useful to differentiate between destination tables when you ingest an object into the same schema multiple times. See Name a destination table.

    b. (Optional) Change the default History tracking setting. See Enable history tracking (SCD type 2).

  12. Click Next, then click Save and continue.

  13. On the Destination page, select a catalog and a schema to load data into. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.

  14. Click Save and continue.

  15. On the Database setup page, click Validate to confirm that your source is properly configured for Databricks ingestion. Any missing configurations are returned; for steps to resolve them, click Complete configuration. Then click Next. Alternatively, click Skip validation.

  16. (Optional) On the Schedules and notifications page, click Create schedule, and set the frequency at which to refresh the destination tables.

  17. (Optional) Click Add notification to set email notifications for pipeline operation success or failure, then click Save and run pipeline.
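The wizard above creates two pipelines: a gateway that stages changes from the source, and an ingestion pipeline that loads them into the destination. As a rough sketch, the settings the wizard assembles look like the following Python dicts; field names mirror the pipeline definition example in the Examples section on this page, and all names and IDs are placeholders:

```python
import json

# Placeholder values throughout; field names mirror the
# gateway_definition / ingestion_definition blocks used in
# Lakeflow Connect pipeline definitions.
gateway_spec = {
    "name": "sqlserver-gateway",
    "gateway_definition": {
        "connection_name": "<sqlserver-connection>",
        "gateway_storage_catalog": "main",
        "gateway_storage_schema": "<staging-schema>",
        "gateway_storage_name": "sqlserver-gateway",
    },
}

ingestion_spec = {
    "name": "sqlserver-ingestion-pipeline",
    "ingestion_definition": {
        "ingestion_gateway_id": "<gateway-pipeline-id>",
        "objects": [
            {
                # One entry per table selected on the Source page.
                "table": {
                    "source_catalog": "test",
                    "source_schema": "ingestion_demo",
                    "source_table": "lineitem",
                    "destination_catalog": "main",
                    "destination_schema": "<destination-schema>",
                }
            }
        ],
    },
}

print(json.dumps({"gateway": gateway_spec, "ingestion": ingestion_spec}, indent=2))
```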

Verify successful data ingestion

The list view on the pipeline details page shows the number of records processed as data is ingested. These numbers refresh automatically.

Verify replication

The Upserted records and Deleted records columns are not shown by default. To display them, click the Columns configuration button and select them.

Examples

Use these examples to configure your pipeline.

Pipeline configuration

The following is an example pipeline definition file:

YAML
variables:
  # Common variables used in multiple places in the DAB definition.
  gateway_name:
    default: sqlserver-gateway
  dest_catalog:
    default: main
  dest_schema:
    default: ingest-destination-schema

resources:
  pipelines:
    gateway:
      name: ${var.gateway_name}
      gateway_definition:
        connection_name: <sqlserver-connection>
        gateway_storage_catalog: main
        gateway_storage_schema: ${var.dest_schema}
        gateway_storage_name: ${var.gateway_name}
      target: ${var.dest_schema}
      catalog: ${var.dest_catalog}

    pipeline_sqlserver:
      name: sqlserver-ingestion-pipeline
      ingestion_definition:
        ingestion_gateway_id: ${resources.pipelines.gateway.id}
        objects:
          # Modify this with your tables!
          - table:
              # Ingest the table test.ingestion_demo.lineitem into dest_catalog.dest_schema.lineitem.
              source_catalog: test
              source_schema: ingestion_demo
              source_table: lineitem
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
          - schema:
              # Ingest all tables in the test.ingestion_whole_schema schema into dest_catalog.dest_schema.
              # Destination table names match the source table names.
              source_catalog: test
              source_schema: ingestion_whole_schema
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
      target: ${var.dest_schema}
      catalog: ${var.dest_catalog}
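References like ${var.dest_schema} and ${resources.pipelines.gateway.id} are resolved by the bundle tooling at deploy time, not by you. The following toy sketch illustrates that substitution; it is not the actual bundle engine, and resource IDs are only known after deployment, which is why they are left unresolved here:

```python
import re

# Values that the bundle's variables block would supply.
variables = {
    "var.gateway_name": "sqlserver-gateway",
    "var.dest_catalog": "main",
    "var.dest_schema": "ingest-destination-schema",
}

def resolve(text: str, values: dict) -> str:
    """Replace ${...} references with known values, leaving unknown refs intact."""
    return re.sub(
        r"\$\{([^}]+)\}",
        lambda m: values.get(m.group(1), m.group(0)),
        text,
    )

print(resolve("gateway_storage_schema: ${var.dest_schema}", variables))
# gateway_storage_schema: ingest-destination-schema
```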

Bundle job definition file

The following is an example job definition file for use with Databricks Asset Bundles. The job runs daily, one day after the previous run.

YAML
resources:
  jobs:
    sqlserver_dab_job:
      name: sqlserver_dab_job

      trigger:
        periodic:
          interval: 1
          unit: DAYS

      email_notifications:
        on_failure:
          - <email-address>

      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.pipeline_sqlserver.id}
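With a periodic trigger, each run starts one interval after the previous run rather than at a fixed wall-clock time. A small illustrative sketch of that arithmetic (not the scheduler's actual implementation; the DAYS/HOURS mapping is an assumption for illustration):

```python
from datetime import datetime, timedelta

# trigger.periodic settings from the job definition above.
interval, unit = 1, "DAYS"

# Illustrative mapping of trigger units to time deltas.
unit_to_delta = {"HOURS": timedelta(hours=1), "DAYS": timedelta(days=1)}

last_run = datetime(2024, 1, 1, 6, 0)
next_run = last_run + interval * unit_to_delta[unit]
print(next_run)  # 2024-01-02 06:00:00
```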

Common patterns

For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.

Next steps

Start, schedule, and set alerts on your pipeline. See Common pipeline maintenance tasks.

Additional resources