
Create a Meta Ads ingestion pipeline

Beta

This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.

Learn how to create a managed ingestion pipeline to ingest data from Meta Ads into Databricks. For a list of supported objects, see Supported objects.

Requirements

To create an ingestion pipeline, you must meet the following requirements:

  • Your workspace must be enabled for Unity Catalog.

  • Serverless compute must be enabled for your workspace. See Serverless compute requirements.

  • If you plan to create a new connection: You must have CREATE CONNECTION privileges on the metastore.

    If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.

  • If you plan to use an existing connection: You must have USE CONNECTION privileges or ALL PRIVILEGES on the connection object.

  • You must have USE CATALOG privileges on the target catalog.

  • You must have USE SCHEMA and CREATE TABLE privileges on an existing schema or CREATE SCHEMA privileges on the target catalog.

To ingest from Meta Ads, you must complete the steps in Set up Meta Ads as a data source.

Create the pipeline

This section describes how to deploy an ingestion pipeline using Databricks Asset Bundles. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see What are Databricks Asset Bundles?.
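As a sketch, a minimal databricks.yml at the bundle root might define separate development and production targets; the bundle name and workspace URLs below are placeholders you replace with your own:

YAML
bundle:
  name: meta_ads_ingestion

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://<dev-workspace-url>
  prod:
    mode: production
    workspace:
      host: https://<prod-workspace-url>

With targets defined, the same resource files deploy to either workspace by passing -t dev or -t prod to the CLI.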

  1. Create a new bundle using the Databricks CLI:

    Bash
    databricks bundle init
  2. Add two new resource files to the bundle: a pipeline definition (for example, resources/meta_ads_pipeline.yml) and a job definition (for example, resources/meta_ads_job.yml). See Examples for templates.

  3. Deploy the pipeline using the Databricks CLI:

    Bash
    databricks bundle deploy

Values to modify

  • name: A unique name for the pipeline.

  • connection_name: The name of the Unity Catalog connection that stores authentication details for Meta Ads.

  • source_schema: Your Meta Ads account ID.

  • source_table: The name of the object you want to ingest (for example, ads, campaigns, or ad_insights).

  • destination_catalog: The name of the catalog you want to write to in Databricks.

  • destination_schema: The name of the schema you want to write to in Databricks.

  • destination_table: Optional. A unique name for the table you want to write to in Databricks. If you don't provide this, the connector automatically uses the source table name.

  • scd_type: The SCD method to use: SCD_TYPE_1 or SCD_TYPE_2. The default is SCD type 1. For more information, see Enable history tracking (SCD type 2).
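For example, to ingest campaigns into a differently named destination table with history tracking enabled, an object entry might look like this (values are illustrative):

YAML
- table:
    source_schema: <meta-ads-account-id>
    source_table: campaigns
    destination_catalog: <destination-catalog>
    destination_schema: <destination-schema>
    destination_table: campaigns_history  # optional; defaults to the source table name
    table_configuration:
      scd_type: SCD_TYPE_2  # keep full change history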

For ad_insights only:

When you ingest from ad_insights, you must configure the following additional settings in metamarketing_parameters:

  • level: Granularity level for insights: account, campaign, adset, or ad. Default is ad.

  • start_date: The start date for the insights data in YYYY-MM-DD format.

  • breakdowns: Optional. List of breakdown dimensions (for example, ["age", "gender", "country"]).

  • action_breakdowns: Optional. List of action breakdown dimensions (for example, ["action_type", "action_destination"]).
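Putting these settings together, a metamarketing_parameters block at campaign granularity might look like the following sketch (the date and breakdown values are illustrative):

YAML
metamarketing_parameters:
  level: campaign
  start_date: '2024-01-01'
  breakdowns:
    - country
  action_breakdowns:
    - action_type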

Examples

Ingest all current and future tables from an account

The following is an example resources/meta_ads_pipeline.yml file:

YAML
resources:
  pipelines:
    pipeline_meta_ads:
      name: <pipeline-name>
      catalog: <destination-catalog>
      target: <destination-schema>
      channel: PREVIEW
      ingestion_definition:
        connection_name: <connection-name>
        objects:
          - schema:
              source_schema: <meta-ads-account-id>
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              table_configuration:
                scd_type: SCD_TYPE_1

Select specific tables from an account to ingest

The following is an example resources/meta_ads_pipeline.yml file:

YAML
resources:
  pipelines:
    pipeline_meta_ads:
      name: <pipeline-name>
      catalog: <destination-catalog>
      target: <destination-schema>
      channel: PREVIEW
      ingestion_definition:
        connection_name: <connection-name>
        objects:
          - table:
              source_schema: <meta-ads-account-id>
              source_table: campaigns
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              table_configuration:
                scd_type: SCD_TYPE_1
          - table:
              source_schema: <meta-ads-account-id>
              source_table: ads
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              table_configuration:
                scd_type: SCD_TYPE_1

Ingest ad_insights with metamarketing_parameters

The following is an example resources/meta_ads_pipeline.yml file:

YAML
resources:
  pipelines:
    pipeline_meta_ads:
      name: <pipeline-name>
      catalog: <destination-catalog>
      target: <destination-schema>
      channel: PREVIEW
      ingestion_definition:
        connection_name: <connection-name>
        objects:
          - table:
              source_schema: <meta-ads-account-id>
              source_table: ad_insights
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              table_configuration:
                scd_type: SCD_TYPE_1
                metamarketing_parameters:
                  level: ad
                  start_date: '2024-01-01'
                  breakdowns:
                    - age
                    - gender
                  action_breakdowns:
                    - action_type

Databricks Asset Bundles workflow file

The following is an example resources/meta_ads_job.yml file:

YAML
resources:
  jobs:
    meta_ads_dab_job:
      name: meta_ads_dab_job
      trigger:
        # Run this job every day, exactly one day from the last run
        # See https://docs.databricks.com/api/workspace/jobs/create#trigger
        periodic:
          interval: 1
          unit: DAYS
      email_notifications:
        on_failure:
          - <email-address>
      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: <pipeline-id>
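If you prefer the job to run at a fixed time of day rather than a fixed interval since the last run, the trigger block can be replaced with a cron schedule; the following is a sketch (adjust the cron expression and timezone to your needs):

YAML
schedule:
  quartz_cron_expression: '0 0 6 * * ?'  # daily at 06:00
  timezone_id: UTC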

Common patterns

For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.

Additional resources