Ingest data from Microsoft Dynamics 365

Preview

This feature is in Public Preview.

Learn how to create a managed Microsoft Dynamics 365 ingestion pipeline using Databricks Lakeflow Connect.

Requirements

  • To create an ingestion pipeline, you must first meet the following requirements:

    • Your workspace must be enabled for Unity Catalog.

    • Serverless compute must be enabled for your workspace. See Serverless compute requirements.

    • If you plan to create a new connection: You must have CREATE CONNECTION privileges on the metastore. See Manage privileges in Unity Catalog.

      If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.

    • If you plan to use an existing connection: You must have USE CONNECTION privileges or ALL PRIVILEGES on the connection object.

    • You must have USE CATALOG privileges on the target catalog.

    • You must have USE SCHEMA and CREATE TABLE privileges on an existing schema or CREATE SCHEMA privileges on the target catalog.

  • To ingest from Dynamics 365, you must first complete the steps in Configure data source for Microsoft Dynamics 365 ingestion.
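
If an admin needs to grant the privileges listed above to the users who create pipelines, the following SQL is a minimal sketch. The connection name d365_connection, the main catalog, the d365_data schema, and the data-engineers group are placeholders; substitute your own objects and principals.

SQL
-- Allow use of an existing connection (assumes a connection named d365_connection)
GRANT USE CONNECTION ON CONNECTION d365_connection TO `data-engineers`;

-- Allow writing ingested tables to an existing catalog and schema (assumes main.d365_data)
GRANT USE CATALOG ON CATALOG main TO `data-engineers`;
GRANT USE SCHEMA, CREATE TABLE ON SCHEMA main.d365_data TO `data-engineers`;

-- Or, to let the group create a new schema in the target catalog instead:
GRANT CREATE SCHEMA ON CATALOG main TO `data-engineers`;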

Create an ingestion pipeline

  1. In the sidebar of the Databricks workspace, click Data Ingestion.
  2. On the Add data page, under Databricks connectors, click Microsoft Dynamics 365.
  3. On the Connection page of the ingestion wizard, select the connection that stores your Microsoft Dynamics 365 access credentials. If you have the CREATE CONNECTION privilege on the metastore, you can click Create connection to create a new connection with the authentication details in Configure data source for Microsoft Dynamics 365 ingestion.
  4. Click Next.
  5. On the Ingestion setup page, enter a unique name for the pipeline.
  6. Select a catalog and a schema to write event logs to. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.
  7. Click Create pipeline and continue.
  8. On the Source page, enter the Dataverse environment URL and select the tables to ingest.
  9. Click Save and continue.
  10. On the Destination page, select a catalog and a schema to load data into. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.
  11. Click Save and continue.
  12. (Optional) On the Schedules and notifications page, click Create schedule. Set the frequency to refresh the destination tables.
  13. (Optional) Click Add notification to set email notifications for pipeline operation success or failure, then click Save and run pipeline.
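
If you prefer to create the destination schema ahead of time rather than from the wizard drop-down, you can do so in SQL. This is a minimal sketch that assumes the main catalog and d365_data schema used in the examples later on this page.

SQL
-- Create the destination schema for ingested tables and event logs, if it doesn't exist yet
CREATE SCHEMA IF NOT EXISTS main.d365_data;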

Verify pipeline creation

After you create the pipeline:

  1. Navigate to Jobs & Pipelines in your workspace.
  2. Locate your pipeline by name.
  3. Select the pipeline to view details.
  4. Select Start to run the initial ingestion.
  5. Monitor the pipeline run and verify that the pipeline creates tables in your target schema.

To verify the ingested data:

SQL
-- Check the account table
SELECT * FROM main.d365_data.account LIMIT 10;

-- Verify record counts
SELECT COUNT(*) FROM main.d365_data.account;
note

The initial pipeline run performs a full refresh of all selected tables. Subsequent runs use incremental ingestion based on the VersionNumber cursor from Azure Synapse Link changelogs.
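
To see how these refreshes are applied over time, you can inspect the Delta history of a destination table. This is standard Delta Lake SQL rather than anything specific to this connector, and it assumes the main.d365_data.account table from the example above.

SQL
-- List the operations (initial full refresh, later incremental updates) recorded for the table
DESCRIBE HISTORY main.d365_data.account;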

Examples

Use these examples to configure your pipeline.

Ingest a single source table

The following pipeline definition file ingests a single source table:

YAML
resources:
  pipelines:
    d365_ingestion:
      name: 'd365_ingestion'
      catalog: 'main'
      schema: 'd365_data'
      ingestion_definition:
        channel: 'PREVIEW'
        connection_name: 'd365_connection'
        objects:
          - table:
              source_schema: 'https://yourorg.crm.dynamics.com'
              source_table: account
              destination_catalog: 'main'
              destination_schema: 'd365_data'

Ingest multiple source tables

The following pipeline definition file ingests multiple source tables:

YAML
resources:
  pipelines:
    d365_ingestion:
      name: 'd365_ingestion'
      catalog: 'main'
      schema: 'd365_data'
      ingestion_definition:
        channel: 'PREVIEW'
        connection_name: 'd365_connection'
        objects:
          - table:
              source_schema: 'https://yourorg.crm.dynamics.com'
              source_table: account
              destination_catalog: 'main'
              destination_schema: 'd365_data'
          - table:
              source_schema: 'https://yourorg.crm.dynamics.com'
              source_table: contact
              destination_catalog: 'main'
              destination_schema: 'd365_data'
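
After this pipeline runs, one quick check is to list the tables in the destination schema and confirm that both account and contact were created. This assumes the main catalog and d365_data schema from the definition above.

SQL
-- Confirm that the pipeline created both destination tables
SHOW TABLES IN main.d365_data;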

Bundle job definition file

The following is an example job definition file to use with Databricks Asset Bundles. The job runs on a periodic trigger, exactly one day after the previous run.

YAML
resources:
  jobs:
    d365_dab_job:
      name: d365_dab_job

      trigger:
        periodic:
          interval: 1
          unit: DAYS

      email_notifications:
        on_failure:
          - <email-address>

      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.d365_ingestion.id}

Common patterns

For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.

Next steps

Start, schedule, and set alerts on your pipeline. See Common pipeline maintenance tasks.
