Create a Confluence ingestion pipeline

Preview: The Confluence connector is in Beta.

This page describes how to create a Confluence ingestion pipeline using Databricks Lakeflow Connect. The following interfaces are supported:

  • Databricks Asset Bundles
  • Databricks APIs
  • Databricks SDKs
  • Databricks CLI

Before you begin

To create the ingestion pipeline, you must meet the following requirements:

  • Your workspace must be enabled for Unity Catalog.

  • Serverless compute must be enabled for your workspace. See Serverless compute requirements.

  • If you plan to create a new connection: You must have CREATE CONNECTION privileges on the metastore.

    If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.

  • If you plan to use an existing connection: You must have USE CONNECTION privileges or ALL PRIVILEGES on the connection object.

  • You must have USE CATALOG privileges on the target catalog.

  • You must have USE SCHEMA and CREATE TABLE privileges on an existing schema or CREATE SCHEMA privileges on the target catalog.
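
    If the user who will create the pipeline is missing any of these privileges, a metastore admin or the owner of the relevant object can grant them in SQL. The following is a minimal sketch, assuming a hypothetical data-engineers group and the connection, catalog, and schema names used in the example bundle later on this page:

    SQL
    -- Hypothetical principal (data-engineers) and example object names; replace with your own
    GRANT USE CONNECTION ON CONNECTION confluence_connection TO `data-engineers`;
    GRANT USE CATALOG ON CATALOG main TO `data-engineers`;
    GRANT USE SCHEMA, CREATE TABLE ON SCHEMA main.ingest_destination_schema TO `data-engineers`;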

To configure authentication for Confluence ingestion, see Configure OAuth U2M for Confluence ingestion.

Create the ingestion pipeline

You must have USE CONNECTION or ALL PRIVILEGES on a connection to create an ingestion pipeline.

This step describes how to create the ingestion pipeline. Each ingested table is written to a streaming table with the same name.

  1. Create a new bundle using the Databricks CLI:

    Bash
    databricks bundle init
  2. Add two new resource files to the bundle:

    • A pipeline definition file (resources/confluence_pipeline.yml).
    • A workflow file that controls the frequency of data ingestion (resources/confluence_job.yml).

    The following is an example resources/confluence_pipeline.yml file:

    YAML
    variables:
      dest_catalog:
        default: main
      dest_schema:
        default: ingest_destination_schema

    # The main pipeline for confluence_dab
    resources:
      pipelines:
        pipeline_confluence:
          name: confluence_pipeline
          catalog: ${var.dest_catalog}
          target: ${var.dest_schema}
          ingestion_definition:
            connection_name: confluence_connection
            objects:
              - table:
                  source_schema: default
                  source_table: pages
                  destination_catalog: ${var.dest_catalog}
                  destination_schema: ${var.dest_schema}
                  destination_table: <table-name>

    The following is an example resources/confluence_job.yml file:

    YAML
    resources:
      jobs:
        confluence_dab_job:
          name: confluence_dab_job

          trigger:
            # Run this job every day, exactly one day from the last run
            # See https://docs.databricks.com/api/workspace/jobs/create#trigger
            periodic:
              interval: 1
              unit: DAYS

          email_notifications:
            on_failure:
              - <email-address>

          tasks:
            - task_key: refresh_pipeline
              pipeline_task:
                pipeline_id: ${resources.pipelines.pipeline_confluence.id}
  3. Deploy the pipeline using the Databricks CLI:

    Bash
    databricks bundle deploy
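
After the bundle deploys, you can trigger an initial run of the ingestion job without waiting for the daily schedule. The following is a minimal sketch, assuming the bundle's default target and the confluence_dab_job resource key defined in the example above:

    Bash
    # Trigger the job that refreshes the ingestion pipeline
    databricks bundle run confluence_dab_job

You can also run databricks bundle validate before deploying to catch configuration errors in the resource files.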
