Ingest data from Workday HCM
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
This page shows how to create a managed Workday Human Capital Management (HCM) ingestion pipeline using Lakeflow Connect.
Requirements
To create an ingestion pipeline, you must first meet the following requirements:

- Your workspace must be enabled for Unity Catalog.
- Serverless compute must be enabled for your workspace. See Serverless compute requirements.
- If you plan to create a new connection: You must have `CREATE CONNECTION` privileges on the metastore.

  If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.
- If you plan to use an existing connection: You must have `USE CONNECTION` privileges or `ALL PRIVILEGES` on the connection object.
- You must have `USE CATALOG` privileges on the target catalog.
- You must have `USE SCHEMA` and `CREATE TABLE` privileges on an existing schema, or `CREATE SCHEMA` privileges on the target catalog.

To ingest from Workday HCM, you must also complete the steps in Configure authentication to Workday HCM.
Create an ingestion pipeline
Each source table is ingested into a streaming table. For a list of supported source tables, see Supported data.
- Databricks Asset Bundles
- Databricks notebook
Use Databricks Asset Bundles to manage Workday HCM pipelines as code. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see What are Databricks Asset Bundles?.
- Create a new bundle using the Databricks CLI:

  ```bash
  databricks bundle init
  ```

- Add two new resource files to the bundle:

  - A pipeline definition file (`resources/workday_hcm_pipeline.yml`). See pipeline.ingestion_definition and Examples.
  - A job definition file that controls the frequency of data ingestion (`resources/workday_hcm_job.yml`).

- Deploy the pipeline using the Databricks CLI:

  ```bash
  databricks bundle deploy
  ```
- Import the following notebook into your Databricks workspace:
- Leave cells one and two as they are. Do not modify the code.
- Modify cell three with your pipeline configuration details. See pipeline.ingestion_definition and Examples.
- Click Run all.
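The notebook's first two cells define the helper that submits your JSON payload to the pipeline creation API. Before running all cells, you may want to sanity-check the spec you entered in cell three. The following is a minimal sketch of such a pre-flight check; the required keys mirror the examples below, but `validate_pipeline_spec` is an illustrative helper, not part of the notebook:

```python
import json

# Illustrative pre-flight check for the pipeline spec entered in cell three.
# The actual create_pipeline helper is defined in the notebook's first two cells.
REQUIRED_TOP_LEVEL = ("name", "catalog", "schema", "ingestion_definition")
REQUIRED_TABLE_KEYS = ("source_schema", "source_table",
                       "destination_catalog", "destination_schema")

def validate_pipeline_spec(spec: dict) -> str:
    """Check required fields and return the spec as a JSON payload."""
    missing = [k for k in REQUIRED_TOP_LEVEL if k not in spec]
    if missing:
        raise ValueError(f"pipeline spec is missing keys: {missing}")
    objects = spec["ingestion_definition"].get("objects", [])
    if not objects:
        raise ValueError("ingestion_definition.objects must list at least one table")
    for obj in objects:
        table = obj.get("table", {})
        for key in REQUIRED_TABLE_KEYS:
            if key not in table:
                raise ValueError(f"table entry is missing {key!r}")
    return json.dumps(spec, indent=2)
```

If validation passes, the returned JSON string is the same payload you would pass to the notebook's `create_pipeline` helper.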
Examples
Use these examples to configure your pipeline.
Ingest a single source table
- Databricks Asset Bundles
- Databricks notebook
The following pipeline definition file ingests a single source table:
```yaml
variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

# The main pipeline for workday_hcm_dab
resources:
  pipelines:
    pipeline_workday_hcm:
      name: workday_hcm_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <workday-hcm-connection>
        objects:
          # An array of objects to ingest from Workday HCM. This example ingests the workers table.
          - table:
              source_schema: default
              source_table: workers
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
```
The following pipeline specification ingests a single source table:
```python
# json and the create_pipeline helper are provided by the notebook's earlier cells.
pipeline_spec = {
    "name": "workday-hcm-pipeline",
    "catalog": "main",
    "schema": "ingest_destination_schema",
    "ingestion_definition": {
        "connection_name": "<workday-hcm-connection>",
        "objects": [
            {
                "table": {
                    "source_schema": "default",
                    "source_table": "workers",
                    "destination_catalog": "main",
                    "destination_schema": "ingest_destination_schema",
                }
            }
        ],
    },
}

json_payload = json.dumps(pipeline_spec, indent=2)
create_pipeline(json_payload)
```
Ingest multiple source tables
- Databricks Asset Bundles
- Databricks notebook
The following pipeline definition file ingests multiple source tables:
```yaml
variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

# The main pipeline for workday_hcm_dab
resources:
  pipelines:
    pipeline_workday_hcm:
      name: workday_hcm_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <workday-hcm-connection>
        objects:
          # An array of objects to ingest from Workday HCM.
          - table:
              source_schema: default
              source_table: workers
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
          - table:
              source_schema: default
              source_table: payroll
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
```
The following pipeline specification ingests multiple source tables:
```python
# json and the create_pipeline helper are provided by the notebook's earlier cells.
pipeline_spec = {
    "name": "workday-hcm-pipeline",
    "catalog": "main",
    "schema": "ingest_destination_schema",
    "ingestion_definition": {
        "connection_name": "<workday-hcm-connection>",
        "objects": [
            {
                "table": {
                    "source_schema": "default",
                    "source_table": "workers",
                    "destination_catalog": "main",
                    "destination_schema": "ingest_destination_schema",
                }
            },
            {
                "table": {
                    "source_schema": "default",
                    "source_table": "payroll",
                    "destination_catalog": "main",
                    "destination_schema": "ingest_destination_schema",
                }
            },
        ],
    },
}

json_payload = json.dumps(pipeline_spec, indent=2)
create_pipeline(json_payload)
```
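When you ingest many tables, repeating the `table` block for each one gets verbose. You can build the `objects` array programmatically instead. This is a sketch; `build_ingestion_objects` and the table list are illustrative, not part of the notebook:

```python
def build_ingestion_objects(source_tables, dest_catalog, dest_schema,
                            source_schema="default"):
    """Build the objects array for an ingestion_definition from a list of table names."""
    return [
        {
            "table": {
                "source_schema": source_schema,
                "source_table": table,
                "destination_catalog": dest_catalog,
                "destination_schema": dest_schema,
            }
        }
        for table in source_tables
    ]

# Equivalent to the two-table example above, generated from a list of names.
pipeline_spec = {
    "name": "workday-hcm-pipeline",
    "catalog": "main",
    "schema": "ingest_destination_schema",
    "ingestion_definition": {
        "connection_name": "<workday-hcm-connection>",
        "objects": build_ingestion_objects(
            ["workers", "payroll"], "main", "ingest_destination_schema"
        ),
    },
}
```

The resulting `pipeline_spec` can be serialized and submitted exactly as in the examples above.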
Bundle workflow file
The following is an example job definition file to use with Databricks Asset Bundles. The job runs daily, triggering one day after the previous run.
```yaml
resources:
  jobs:
    workday_hcm_dab_job:
      name: workday_hcm_dab_job

      trigger:
        periodic:
          interval: 1
          unit: DAYS

      email_notifications:
        on_failure:
          - <email-address>

      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.pipeline_workday_hcm.id}
```
Common patterns
For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.
Next steps
Start, schedule, and set alerts on your pipeline. See Common pipeline maintenance tasks.