Ingest Workday reports

This page describes how to ingest Workday reports and load them into Databricks using Lakeflow Connect.

Before you begin

To create an ingestion pipeline, you must meet the following requirements:

  • Your workspace must be enabled for Unity Catalog.

  • Serverless compute must be enabled for your workspace. See Enable serverless compute.

  • If you plan to create a new connection: You must have CREATE CONNECTION privileges on the metastore.

    If your connector supports UI-based pipeline authoring, you can create the connection and the pipeline at the same time by completing the steps on this page. However, if you use API-based pipeline authoring, you must create the connection in Catalog Explorer before you complete the steps on this page. See Connect to managed ingestion sources.

  • If you plan to use an existing connection: You must have USE CONNECTION privileges or ALL PRIVILEGES on the connection object.

  • You must have USE CATALOG privileges on the target catalog.

  • You must have USE SCHEMA and CREATE TABLE privileges on an existing schema or CREATE SCHEMA privileges on the target catalog.

To ingest from Workday, see Configure Workday reports for ingestion.

Configure networking

If you have serverless egress control enabled, allowlist the host names of your report URLs. For example, the report URL https://ww1.workday.com/service/ccx/<tenant>/<reportName>?format=json has the host name ww1.workday.com. See Manage network policies for serverless egress control.
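If you have many report URLs, you can collect the host names to allowlist programmatically. The following is a minimal sketch that uses Python's standard urllib.parse module; the report URLs shown are placeholders based on the examples on this page.

Python
from urllib.parse import urlparse

# Placeholder report URLs; substitute your own tenant and report names.
report_urls = [
    "https://ww1.workday.com/service/ccx/<tenant>/<reportName>?format=json",
    "https://wd2-impl-services1.workday.com/ccx/service/customreport2/All_Active_Employees_Data?format=json",
]

# Collect the unique host names to add to your serverless egress allowlist.
host_names = sorted({urlparse(url).netloc for url in report_urls})
print(host_names)
# ['wd2-impl-services1.workday.com', 'ww1.workday.com']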

Create a Workday connection

Permissions required: CREATE CONNECTION on the metastore.

To create a Workday connection, do the following:

  1. In your Databricks workspace, click Catalog > External locations > Connections > Create connection.
  2. For Connection name, enter a unique name for the Workday connection.
  3. For Connection type, select Workday Reports.
  4. For Auth type, select OAuth Refresh Token and then enter the Client ID, Client secret, and Refresh token that you generated during source setup.
  5. On the Create Connection page, click Create.
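If you author the pipeline with the API, you reference this connection by name. As a quick check, the following sketch assumes the Databricks SDK for Python (databricks-sdk) is installed and authenticated, and uses a hypothetical connection name my_workday_connection to confirm that the connection is visible in Unity Catalog.

Python
from databricks.sdk import WorkspaceClient

# Assumes default authentication (for example, a configured Databricks CLI profile).
w = WorkspaceClient()

# Hypothetical connection name; use the name that you entered when you created the connection.
conn = w.connections.get("my_workday_connection")
print(conn.name, conn.connection_type)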

Create an ingestion pipeline

This step describes how to set up the ingestion pipeline. Each ingested table gets a corresponding streaming table with the same name (but all lowercase) in the destination unless you've explicitly renamed it.

This section describes how to deploy an ingestion pipeline using Databricks Asset Bundles. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see Databricks Asset Bundles.

You can use the following table configuration properties in your pipeline definition to select or deselect specific columns to ingest:

  • include_columns: Optionally specify a list of columns to include for ingestion. If you use this option to explicitly include columns, the pipeline automatically excludes columns that are added to the source in the future. To ingest the future columns, you'll have to add them to the list.
  • exclude_columns: Optionally specify a list of columns to exclude from ingestion. If you use this option to explicitly exclude columns, the pipeline automatically includes columns that are added to the source in the future. To exclude the future columns, you'll have to add them to the list.

You can also specify prompts in the report URL (source_url), which allows you to ingest filtered reports.
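Prompts are passed as query parameters on the report URL. The following is a minimal sketch of building a prompted source_url in Python; the prompt name Employee_Type and its value are hypothetical, because the available prompts depend on how the report is defined in Workday.

Python
from urllib.parse import urlencode

# Base report URL (placeholder tenant and report name).
base_url = "https://ww1.workday.com/service/ccx/<tenant>/<reportName>"

# Hypothetical prompt values; actual prompt names come from the Workday report definition.
params = {
    "format": "json",
    "Employee_Type": "Regular",  # example prompt that filters the report
}

source_url = f"{base_url}?{urlencode(params)}"
print(source_url)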

  1. Create a new bundle using the Databricks CLI:

    Bash
    databricks bundle init
  2. Add two new resource files to the bundle:

    • A pipeline definition file (resources/workday_pipeline.yml).
    • A workflow file that controls the frequency of data ingestion (resources/workday_job.yml).

    The following is an example resources/workday_pipeline.yml file:

    YAML
    variables:
      dest_catalog:
        default: main
      dest_schema:
        default: ingest_destination_schema

    # The main pipeline for workday_dab
    resources:
      pipelines:
        pipeline_workday:
          name: workday_pipeline
          catalog: ${var.dest_catalog}
          schema: ${var.dest_schema}
          ingestion_definition:
            connection_name: <workday-connection>
            objects:
              # An array of objects to ingest from Workday. This example
              # ingests a sample report about all active employees. The Employee_ID
              # key is used as the primary key for the report.
              - report:
                  source_url: https://wd2-impl-services1.workday.com/ccx/service/customreport2/All_Active_Employees_Data?format=json
                  destination_catalog: ${var.dest_catalog}
                  destination_schema: ${var.dest_schema}
                  destination_table: All_Active_Employees_Data
                  table_configuration:
                    primary_keys:
                      - Employee_ID
                    include_columns: # This can be exclude_columns instead
                      - <column_a>
                      - <column_b>
                      - <column_c>

    The following is an example resources/workday_job.yml file:

    YAML
    resources:
      jobs:
        workday_dab_job:
          name: workday_dab_job

          trigger:
            # Run this job every day, exactly one day from the last run
            # See https://docs.databricks.com/api/workspace/jobs/create#trigger
            periodic:
              interval: 1
              unit: DAYS

          email_notifications:
            on_failure:
              - <email-address>

          tasks:
            - task_key: refresh_pipeline
              pipeline_task:
                pipeline_id: ${resources.pipelines.pipeline_workday.id}
  3. Deploy the pipeline using the Databricks CLI:

    Bash
    databricks bundle deploy

If you use API-based pipeline authoring instead of a bundle, the following is an example JSON ingestion_definition for the pipeline:

JSON
"ingestion_definition": {
  "connection_name": "<connection-name>",
  "objects": [
    {
      "report": {
        "source_url": "<report-url>",
        "destination_catalog": "<destination-catalog>",
        "destination_schema": "<destination-schema>",
        "table_configuration": {
          "primary_keys": ["<primary-key>"],
          "scd_type": "SCD_TYPE_2",
          "include_columns": ["<column-a>", "<column-b>", "<column-c>"]
        }
      }
    }
  ]
}
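For API-based authoring, the ingestion_definition above is included in the pipeline specification that you send to the Pipelines REST API. The following is a minimal sketch using Python and the requests library; the authentication handling and the name, catalog, and schema values are assumptions to adapt to your workspace, and the placeholders (such as <report-url>) are unchanged from the example above.

Python
import os
import requests

# Workspace URL and a personal access token; adapt to your authentication setup.
host = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace-host>
token = os.environ["DATABRICKS_TOKEN"]

# Pipeline specification; the ingestion_definition matches the JSON example above.
pipeline_spec = {
    "name": "workday_pipeline",
    "catalog": "<destination-catalog>",
    "schema": "<destination-schema>",
    "ingestion_definition": {
        "connection_name": "<connection-name>",
        "objects": [
            {
                "report": {
                    "source_url": "<report-url>",
                    "destination_catalog": "<destination-catalog>",
                    "destination_schema": "<destination-schema>",
                    "table_configuration": {
                        "primary_keys": ["<primary-key>"],
                        "scd_type": "SCD_TYPE_2",
                        "include_columns": ["<column-a>", "<column-b>", "<column-c>"],
                    },
                }
            }
        ],
    },
}

# Create the pipeline through the Pipelines REST API.
resp = requests.post(
    f"{host}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {token}"},
    json=pipeline_spec,
)
resp.raise_for_status()
print(resp.json())  # Contains the new pipeline_id

The response includes the pipeline ID, which you can use to start runs or to reference the pipeline from a job task.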

Start, schedule, and set alerts on your pipeline

You can create a schedule for the pipeline on the pipeline details page.

  1. After the pipeline has been created, return to the Databricks workspace, and then click Pipelines.

    The new pipeline appears in the pipeline list.

  2. To view the pipeline details, click the pipeline name.

  3. On the pipeline details page, you can schedule the pipeline by clicking Schedule.

  4. To set notifications on the pipeline, click Settings, and then add a notification.

For each schedule that you add to a pipeline, Lakeflow Connect automatically creates a job for it. The ingestion pipeline is a task within the job. You can optionally add more tasks to the job.
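You can also trigger a pipeline run programmatically instead of waiting for a schedule. The following is a minimal sketch that starts an update through the Pipelines REST API; the pipeline ID is a placeholder, and the authentication handling is an assumption to adapt to your workspace.

Python
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
pipeline_id = "<pipeline-id>"  # Placeholder: copy this from the pipeline details page

# Start a manual run (update) of the ingestion pipeline.
resp = requests.post(
    f"{host}/api/2.0/pipelines/{pipeline_id}/updates",
    headers={"Authorization": f"Bearer {token}"},
    json={},
)
resp.raise_for_status()
print(resp.json())  # Contains the update_id for the new run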

Example: Ingest two Workday reports into separate schemas

The example pipeline definition in this section ingests two Workday reports into separate schemas. Multi-destination pipeline support is API-only.

YAML
resources:
  pipelines:
    pipeline_workday:
      name: workday_pipeline
      catalog: my_catalog_1 # Location of the pipeline event log
      schema: my_schema_1 # Location of the pipeline event log
      ingestion_definition:
        connection_name: <workday-connection>
        objects:
          - report:
              source_url: <report-url-1>
              destination_catalog: my_catalog_1
              destination_schema: my_schema_1
              destination_table: my_table_1
              table_configuration:
                primary_keys:
                  - <primary_key_column>
          - report:
              source_url: <report-url-2>
              destination_catalog: my_catalog_2
              destination_schema: my_schema_2
              destination_table: my_table_2
              table_configuration:
                primary_keys:
                  - <primary_key_column>
