Microsoft Dynamics 365 connector reference
This feature is in Public Preview.
This page provides technical reference information for the Microsoft Dynamics 365 connector in Lakeflow Connect.
Authentication parameters
The Dynamics 365 connector uses Microsoft Entra ID (formerly Azure Active Directory) OAuth authentication to access Azure Data Lake Storage (ADLS) Gen2 where Azure Synapse Link exports data.
Required authentication fields
When creating a Unity Catalog connection for D365, provide these parameters:
| Parameter | Description |
|---|---|
| Tenant ID | Your Microsoft Entra ID tenant ID (Directory ID) |
| Client ID | The application (client) ID of your Entra ID app |
| Client secret | The client secret value created for your Entra ID app |
| Storage account name | The name of your ADLS Gen2 storage account |
| Container name | The container where Synapse Link exports data |
| OAuth scope | The OAuth scope for Azure Storage access |
Cursor field
The Dynamics 365 connector uses the VersionNumber field from Azure Synapse Link changelogs as the cursor for incremental ingestion.
Cursor behavior
- Source: Synapse Link automatically generates `VersionNumber` values when exporting changes.
- Format: Integer value representing the change sequence.
- Scope: Per-table cursor. Each table maintains its own cursor position.
- Storage: Cursors are stored in the pipeline metadata and don't appear in target Delta tables.
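The cursor logic is internal to the pipeline, but conceptually it behaves like a per-table high-watermark on `VersionNumber`. The following minimal Python sketch is an illustration of that idea, not connector code; table names and values are hypothetical.

```python
# Hypothetical stored cursor state: one watermark per table.
cursors = {"account": 987650, "contact": 123400}

def rows_to_ingest(table_name, changelog_rows):
    """Keep only changelog rows newer than the table's cursor, then advance it."""
    watermark = cursors.get(table_name, 0)
    new_rows = [r for r in changelog_rows if r["VersionNumber"] > watermark]
    if new_rows:
        cursors[table_name] = max(r["VersionNumber"] for r in new_rows)
    return new_rows
```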
Cursor requirements
For incremental ingestion to work:
- Synapse Link must export changelogs with the `VersionNumber` field.
- `VersionNumber` must be present in all changelog files.
- Changelog folders must follow Synapse Link's timestamp-based naming convention.
If VersionNumber is missing, incremental ingestion fails and you must perform a full refresh.
Schema discovery
The Dynamics 365 connector automatically discovers table schemas from Dataverse metadata.
Discovery process
When you create a pipeline:
- The connector reads Synapse Link metadata files from ADLS Gen2.
- The connector extracts table schemas from the metadata JSON files.
- Column names, data types, and nullability are inferred from the metadata.
- Target tables are created with the discovered schemas.
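Schema discovery is handled by the connector, but it can help to know what it reads: Azure Synapse Link writes a Common Data Model manifest (model.json) to the container that lists each exported table and its attributes. The hedged Python sketch below shows how you could inspect that file yourself; the local path is a placeholder and the field names follow the CDM model.json layout, which may differ slightly in your export.

```python
import json

# Placeholder path: a copy of model.json downloaded from the Synapse Link container root.
with open("/tmp/model.json") as f:
    model = json.load(f)

# Each entity corresponds to an exported Dataverse table; each attribute
# carries the column name and data type the connector uses for discovery.
for entity in model["entities"]:
    print(entity["name"])
    for attr in entity["attributes"]:
        print(f"  {attr['name']}: {attr['dataType']}")
```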
Supported Dataverse data types
The Dynamics 365 connector maps Dataverse data types to Delta Lake data types.
Data type mapping
| Dataverse type | Delta Lake type | Notes |
|---|---|---|
| String (single line) | STRING | Max length preserved as metadata |
| String (multi-line) | STRING | |
| Integer (Whole Number) | INTEGER | |
| BigInt | LONG | |
| Decimal | DECIMAL | Precision and scale preserved |
| Double (Floating Point) | DOUBLE | |
| Money | DECIMAL(19,4) | Stored as decimal with 4 decimal places |
| Boolean (Yes/No) | BOOLEAN | |
| DateTime | TIMESTAMP | Timezone information preserved |
| Date | DATE | |
| Time | STRING | Stored as ISO 8601 time string |
| Uniqueidentifier (GUID) | STRING | Stored as string representation |
| Lookup | STRING | Foreign key GUID stored as string |
| Picklist (Option Set) | INTEGER | Integer value, not label |
| Multi-select Picklist | STRING | Comma-separated integer values |
| Image | STRING | URL or metadata, not binary data |
| File | STRING | Metadata only, not file contents |
Complex data types
Some Dataverse types require special handling:
- Option Sets (Picklists): Ingested as integer codes. To map codes to labels, join with the `OptionSetMetadata` table or maintain a reference mapping table (a join sketch follows the example below).
- Lookups: Ingested as GUID strings. To get related data, join with the referenced table.
- Multi-select Option Sets: Ingested as comma-separated integer strings (for example, `"1,3,5"`). Parse the string to extract individual values.
Example: Parsing multi-select option sets
```sql
-- Split comma-separated values into an array
SELECT
  accountid,
  accountname,
  SPLIT(industrycodes, ',') AS industry_array
FROM main.d365_data.account;

-- Explode into separate rows
SELECT
  accountid,
  accountname,
  CAST(code AS INT) AS industry_code
FROM main.d365_data.account
LATERAL VIEW EXPLODE(SPLIT(industrycodes, ',')) codes AS code;
```
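To resolve option set codes to display labels, you can join against the option set metadata that Synapse Link can export alongside your tables. The table and column names below (`OptionsetMetadata`, `EntityName`, `OptionSetName`, `Option`, `LocalizedLabel`) and the `industrycode` column are assumptions that depend on your export configuration; treat this as a hedged PySpark sketch to adapt.

```python
# Hedged sketch: map Picklist integer codes to human-readable labels by
# joining the ingested table with exported option set metadata.
labels = spark.sql("""
    SELECT a.accountid,
           a.industrycode,
           m.LocalizedLabel AS industry_label
    FROM main.d365_data.account a
    LEFT JOIN main.d365_data.OptionsetMetadata m
      ON m.EntityName = 'account'
     AND m.OptionSetName = 'industrycode'
     AND m.Option = a.industrycode
""")
labels.show()
```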
API version compatibility
The Dynamics 365 connector is compatible with:
- Dataverse API: Version 9.2 and later
- Azure Synapse Link for Dataverse: Version 1.0 and later
- Azure Storage REST API: Version 2021-08-06 and later
- Microsoft Entra ID (Azure AD): OAuth 2.0 client credentials flow
Older API versions might work but aren't officially supported. Keep your D365 and Azure services updated for best compatibility.
Incremental ingestion behavior
The Dynamics 365 connector's incremental ingestion follows these rules:
Change detection
- Inserts: New records detected by their presence in changelogs.
- Updates: Modified records identified by changes in `VersionNumber`.
- Deletes: Deleted records identified by delete markers in changelogs (if exported by Synapse Link).
SCD Type 1 behavior
For SCD Type 1 pipelines, records are updated in place without preserving history. Updates overwrite existing rows based on primary key, and deletes remove rows (if delete tracking is enabled).
Example table structure:
```sql
SELECT * FROM main.d365_data.account ORDER BY accountid;

-- Result: Latest state only
-- accountid | accountname | modifiedon
-- 123       | Acme Corp   | 2025-12-03 10:00:00
-- 456       | TechCo      | 2025-12-03 09:30:00
```
SCD Type 2 behavior
For SCD Type 2 pipelines, all changes are preserved as new row versions. The connector adds `__START_AT`, `__END_AT`, and `__CURRENT` columns to track version history.
Example table structure:
```sql
SELECT * FROM main.d365_data.account ORDER BY accountid, __START_AT;

-- Result: All historical versions
-- accountid | accountname | __START_AT          | __END_AT            | __CURRENT
-- 123       | Acme Inc    | 2025-11-01 08:00:00 | 2025-12-03 10:00:00 | false
-- 123       | Acme Corp   | 2025-12-03 10:00:00 | NULL                | true
-- 456       | TechCo      | 2025-12-01 14:00:00 | NULL                | true
```
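Because `__CURRENT` flags the open version of each record, you can read the latest state of an SCD Type 2 table by filtering on it. A minimal PySpark sketch, assuming the table and columns shown above:

```python
# Current-state view of an SCD Type 2 table: keep only open versions.
current = spark.sql("""
    SELECT accountid, accountname, __START_AT
    FROM main.d365_data.account
    WHERE __CURRENT = true
""")
current.show()
```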
Delete handling
Delete handling depends on your Synapse Link configuration:
- Hard deletes: If Synapse Link exports delete records, the connector removes (SCD Type 1) or marks (SCD Type 2) deleted records.
- No delete tracking: If Synapse Link doesn't export deletes, deleted records remain in target tables until you perform a full refresh.
Verify your Synapse Link configuration exports deletes if you need accurate delete tracking.
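If your Synapse Link profile doesn't export deletes, one way to spot lingering rows is to compare target keys against a current list of source keys. This is a hedged PySpark sketch; the exported key file and its path are hypothetical.

```python
# Keys currently present in the ingested Delta table.
target_ids = spark.table("main.d365_data.account").select("accountid")

# Hypothetical file containing the accountids that still exist in Dataverse.
source_ids = spark.read.csv(
    "/Volumes/main/d365_data/exports/account_ids.csv",  # placeholder path
    header=True,
)

# Rows in the target that no longer exist in the source (candidates for cleanup
# or for a full refresh).
stale = target_ids.join(source_ids, "accountid", "left_anti")
stale.show()
```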
Pipeline parameters
When creating a D365 ingestion pipeline, specify these parameters:
Required parameters
| Parameter | Type | Description | Example |
|---|---|---|---|
| `channel` | String | Must be `PREVIEW` | `PREVIEW` |
| `connection_name` | String | Name of your Unity Catalog connection | `d365_connection` |
| `source_schema` | String | Dataverse environment URL or ID | `https://contoso.crm.dynamics.com` |
| `source_table` | Array[String] | List of D365 table logical names | `["account", "contact"]` |
| `destination_catalog` | String | Target Unity Catalog catalog | `main` |
| `destination_schema` | String | Target Unity Catalog schema | `d365_sales` |
| `scd_type` | String | SCD type for target tables (`SCD_TYPE_1` or `SCD_TYPE_2`) | `SCD_TYPE_2` |
Optional parameters
| Parameter | Type | Description | Example |
|---|---|---|---|
| `table_configuration` | Object | Per-table settings (column selection, etc.) | See column selection |
Example pipeline configuration
Complete pipeline configuration using Python SDK:
```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import IngestionPipelineDefinition

w = WorkspaceClient()

pipeline = w.pipelines.create(
    name="d365_comprehensive_ingestion",
    ingestion_definition=IngestionPipelineDefinition(
        channel="PREVIEW",
        connection_name="d365_connection",
        source_schema="https://contoso.crm.dynamics.com",
        source_table=[
            "account",
            "contact",
            "opportunity",
            "salesorder",
            "systemuser"
        ],
        destination_catalog="main",
        destination_schema="d365_sales",
        scd_type="SCD_TYPE_2",
        table_configuration={
            "account": {
                "columns": [
                    "accountid",
                    "accountnumber",
                    "name",
                    "emailaddress1",
                    "telephone1"
                ]
            }
        }
    )
)
```
Finding table logical names
To identify table logical names for the source_table parameter:
- Power Apps maker portal: Navigate to Tables and view the Logical name column.
- Dataverse API: Query metadata using `https://yourorg.api.crm.dynamics.com/api/data/v9.2/EntityDefinitions`.
- ADLS Gen2 storage: List folders in your Synapse Link container (folder names match logical names).
Use lowercase logical names in pipeline configurations (for example, "account" rather than "Account"). The connector is case-sensitive.
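If you prefer to pull logical names programmatically, the Dataverse Web API's EntityDefinitions endpoint returns them. A hedged Python sketch: it assumes you already have a Dataverse-scoped access token, and the organization URL is a placeholder.

```python
import requests

org_url = "https://yourorg.api.crm.dynamics.com"          # placeholder org URL
token = "<dataverse-access-token>"                         # acquired via Entra ID, not shown here

# Query entity metadata and keep only the logical names.
resp = requests.get(
    f"{org_url}/api/data/v9.2/EntityDefinitions?$select=LogicalName",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
)
resp.raise_for_status()

logical_names = [e["LogicalName"] for e in resp.json()["value"]]
print(logical_names)
```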
Performance tuning
The Dynamics 365 connector provides limited performance tuning options:
Column selection
Select only required columns to reduce:
- Data transfer from ADLS Gen2
- Storage costs in Delta Lake
- Query processing time
See column selection for configuration details.
Table grouping
For environments with many tables:
- Related tables: Group related tables in the same pipeline for easier management.
- Volume-based: Separate high-volume tables into dedicated pipelines.
- Update frequency: Group tables with similar update patterns.
Each pipeline is limited to 250 tables. For larger environments, create multiple pipelines.
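One way to split a large environment is to batch the table list and create one pipeline per batch. The hedged sketch below reuses the preview SDK shape from the example above; the connection, destination, and environment URL are placeholders to replace with your own values.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import IngestionPipelineDefinition

w = WorkspaceClient()

all_tables = ["account", "contact", "opportunity"]  # extend with your full table list
MAX_TABLES_PER_PIPELINE = 250                       # per-pipeline table limit

# Create one pipeline per batch of at most 250 tables.
for i in range(0, len(all_tables), MAX_TABLES_PER_PIPELINE):
    batch = all_tables[i:i + MAX_TABLES_PER_PIPELINE]
    w.pipelines.create(
        name=f"d365_ingestion_{i // MAX_TABLES_PER_PIPELINE + 1}",
        ingestion_definition=IngestionPipelineDefinition(
            channel="PREVIEW",
            connection_name="d365_connection",                   # placeholder
            source_schema="https://contoso.crm.dynamics.com",    # placeholder
            source_table=batch,
            destination_catalog="main",                          # placeholder
            destination_schema="d365_sales",                     # placeholder
            scd_type="SCD_TYPE_1",
        ),
    )
```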
Troubleshooting
For common issues and solutions when working with the Dynamics 365 connector, see Troubleshoot Microsoft Dynamics 365 ingestion.