Create a Google Ads ingestion pipeline
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
Learn how to create a managed ingestion pipeline to ingest data from Google Ads into Databricks.
Requirements
To create an ingestion pipeline, you must meet the following requirements:

- Your workspace must be enabled for Unity Catalog.
- Serverless compute must be enabled for your workspace. See Serverless compute requirements.
- If you plan to create a new connection: You must have `CREATE CONNECTION` privileges on the metastore.

  If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.
- If you plan to use an existing connection: You must have `USE CONNECTION` privileges or `ALL PRIVILEGES` on the connection object.
- You must have `USE CATALOG` privileges on the target catalog.
- You must have `USE SCHEMA` and `CREATE TABLE` privileges on an existing schema, or `CREATE SCHEMA` privileges on the target catalog. For one way to grant these privileges programmatically, see the sketch after these requirements.
To ingest from Google Ads, you must complete the steps in Configure OAuth for Google Ads ingestion.
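If an admin needs to grant the privileges above programmatically rather than through Catalog Explorer, the Unity Catalog grants API is one option. The following is a minimal sketch using the Databricks Python SDK; the connection, schema, and group names are hypothetical placeholders:

```python
# Minimal sketch: granting the privileges listed above with the Databricks
# Python SDK (databricks-sdk). All object and principal names are hypothetical.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import PermissionsChange, Privilege, SecurableType

w = WorkspaceClient()  # Authenticates via environment variables or a config profile

# Let pipeline authors use an existing Google Ads connection.
w.grants.update(
    securable_type=SecurableType.CONNECTION,
    full_name="google_ads_connection",
    changes=[PermissionsChange(principal="data-engineers", add=[Privilege.USE_CONNECTION])],
)

# Let them write tables into the target schema.
w.grants.update(
    securable_type=SecurableType.SCHEMA,
    full_name="main.google_ads",
    changes=[
        PermissionsChange(
            principal="data-engineers",
            add=[Privilege.USE_SCHEMA, Privilege.CREATE_TABLE],
        )
    ],
)
```

The same pattern applies to granting `USE CATALOG` on the target catalog.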
Create an ingestion pipeline
- Databricks Asset Bundles
- Databricks notebook
This tab describes how to deploy an ingestion pipeline using Databricks Asset Bundles. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see What are Databricks Asset Bundles?.
- Create a new bundle using the Databricks CLI:

  ```bash
  databricks bundle init
  ```

- Add two new resource files to the bundle:

  - A pipeline definition file (`resources/google_ads_pipeline.yml`).
  - A workflow file that controls the frequency of data ingestion (`resources/google_ads_job.yml`).

- Deploy the pipeline using the Databricks CLI:

  ```bash
  databricks bundle deploy
  ```
This tab describes how to deploy an ingestion pipeline using a Databricks notebook.

- Import the following notebook into your Databricks workspace:
- Leave cell one as-is (a hypothetical sketch of what this cell typically defines follows these steps).
- Modify cell two or three with your pipeline configuration details, depending on your use case. See Values to modify.
- Click Run all.
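For orientation only, here is a minimal, hypothetical sketch of the kind of `create_pipeline` helper that cell one typically defines and the later cells call, assuming it simply POSTs the pipeline spec to the Pipelines REST API (`/api/2.0/pipelines`). The actual cell one may differ, which is why you should leave it as-is:

```python
# Hypothetical sketch of a create_pipeline helper; the notebook's real cell one
# may differ. Assumes a workspace URL and personal access token in env variables.
import os

import requests


def create_pipeline(json_payload: str) -> dict:
    """POST a pipeline spec to the Pipelines REST API and return the response."""
    host = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.cloud.databricks.com
    token = os.environ["DATABRICKS_TOKEN"]
    resp = requests.post(
        f"{host}/api/2.0/pipelines",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        data=json_payload,
    )
    resp.raise_for_status()
    return resp.json()  # Includes the new pipeline_id on success
```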
Values to modify
| Value | Description |
|---|---|
| `<pipeline-name>` | A unique name for the pipeline. |
| `<connection-name>` | The name of the Unity Catalog connection that stores authentication details for Google Ads. |
| `<account-id>`, `<customer-account-id>` | The ID of the account that contains the data you want to ingest. Don't include hyphens when you enter account IDs in your pipeline specification. |
| `<table1>`, `<table2>` | The names of the tables you want to ingest. |
| `<destination-catalog>` | The name of the catalog you want to write to in Databricks. |
| `<destination-schema>` | The name of the schema you want to write to in Databricks. |
| `<destination-table>` | Optional. A unique name for the table you want to write to in Databricks. If you don't provide this, the connector automatically uses the source table name. |
| `<manager-account-id>` | One pipeline maps to at most one Google Ads manager account ID. If your manager account ID maps to multiple customer account IDs, you can ingest from those different customer account IDs within the same pipeline. Don't include hyphens when you enter account IDs in your pipeline specification. |
| `<lookback-window-days>` | Optional (default: 30 days). The number of past days to re-check during each pipeline update to capture late conversions and attribution updates. Consider your organization's conversion attribution window when setting this value (see the example after this table). |
| `<sync-start-date>` | Optional (default: two years). The initial sync start date for report tables. |
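As a concrete illustration of these values, the following hypothetical snippet shows a filled-in `google_ads_options` block with a 45-day lookback window; all IDs and the date format are made-up examples:

```python
# Hypothetical example values for google_ads_options; account IDs are entered
# without hyphens, and the ISO-style date format is an assumption.
google_ads_options = {
    "manager_account_id": "1234567890",  # Manager account ID, no hyphens
    "lookback_window_days": 45,          # Re-check the past 45 days on each update
    "sync_start_date": "2023-01-01",     # Initial sync start for report tables
}
```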
Pipeline definition templates
- YAML
- JSON
This tab provides templates for use with Databricks Asset Bundles.
The following is an example `resources/google_ads_pipeline.yml` file that ingests all current and future tables from one account:
```yaml
resources:
  pipelines:
    pipeline_google_ads:
      name: <pipeline-name>
      catalog: <destination-catalog>
      target: <destination-schema>
      ingestion_definition:
        connection_name: <connection-name>
        objects:
          - schema:
              source_schema: <account-id>
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              google_ads_options:
                manager_account_id: <manager-account-id>
                lookback_window_days: <lookback-window-days>
                sync_start_date: <sync-start-date>
```
The following is an example `resources/google_ads_pipeline.yml` file that selects specific tables from an account to ingest:
```yaml
resources:
  pipelines:
    pipeline_google_ads:
      name: <pipeline-name>
      catalog: <destination-catalog>
      target: <destination-schema>
      ingestion_definition:
        connection_name: <connection-name>
        objects:
          - table:
              source_schema: <customer-account-id>
              source_table: <table1>
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              destination_table: <destination-table>
              google_ads_options:
                manager_account_id: <manager-account-id>
                lookback_window_days: <lookback-window-days>
                sync_start_date: <sync-start-date>
          - table:
              source_schema: <customer-account-id>
              source_table: <table2>
              destination_catalog: <destination-catalog>
              destination_schema: <destination-schema>
              destination_table: <destination-table>
              google_ads_options:
                manager_account_id: <manager-account-id>
                lookback_window_days: <lookback-window-days>
                sync_start_date: <sync-start-date>
```
The following is an example `resources/google_ads_job.yml` file:
```yaml
resources:
  jobs:
    google_ads_dab_job:
      name: google_ads_dab_job
      trigger:
        # Run this job every day, exactly one day from the last run
        # See https://docs.databricks.com/api/workspace/jobs/create#trigger
        periodic:
          interval: 1
          unit: DAYS
      email_notifications:
        on_failure:
          - <email-address>
      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            # Replace with the ID of the pipeline created above
            # (see the lookup sketch after this template for one way to find it).
            pipeline_id: <pipeline-id>
```
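The job needs the `<pipeline-id>` of the pipeline it refreshes. If both resources live in the same bundle, you may be able to reference the pipeline resource directly (for example, `pipeline_id: ${resources.pipelines.pipeline_google_ads.id}`). Otherwise, the following minimal sketch looks the ID up by name with the Databricks Python SDK; the pipeline name in the filter is a hypothetical placeholder:

```python
# Minimal sketch: look up a deployed pipeline's ID by name with the Databricks
# Python SDK. The pipeline name below is a hypothetical placeholder.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # Authenticates via environment variables or a config profile

# The filter syntax follows the Pipelines list API.
for p in w.pipelines.list_pipelines(filter="name LIKE 'my_google_ads_pipeline'"):
    print(p.pipeline_id, p.name)
```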
This tab provides pipeline definition templates for use with Databricks notebooks.
The following example ingests all current and future tables from one account:
```python
pipeline_spec = {
    "name": "<pipeline-name>",
    "ingestion_definition": {
        "connection_name": "<connection-name>",
        "objects": [
            {
                "schema": {
                    "source_schema": "<account-id>",
                    "destination_catalog": "<destination-catalog>",
                    "destination_schema": "<destination-schema>",
                    "google_ads_options": {
                        "manager_account_id": "<manager-account-id>",
                        "lookback_window_days": <lookback-window-days>,
                        "sync_start_date": "<sync-start-date>"
                    }
                }
            }
        ]
    }
}

# json and create_pipeline are assumed to be defined in cell one of the notebook.
json_payload = json.dumps(pipeline_spec, indent=2)
create_pipeline(json_payload)
```
The following example selects specific tables from an account to ingest:
```python
pipeline_spec = {
    "name": "<pipeline-name>",
    "ingestion_definition": {
        "connection_name": "<connection-name>",
        "objects": [
            {
                "table": {
                    "source_schema": "<customer-account-id>",
                    "source_table": "<table1>",
                    "destination_catalog": "<destination-catalog>",
                    "destination_schema": "<destination-schema>",
                    "destination_table": "<destination-table>",
                    "google_ads_options": {
                        "manager_account_id": "<manager-account-id>",
                        "lookback_window_days": <lookback-window-days>,
                        "sync_start_date": "<sync-start-date>"
                    }
                }
            },
            {
                "table": {
                    "source_schema": "<customer-account-id>",
                    "source_table": "<table2>",
                    "destination_catalog": "<destination-catalog>",
                    "destination_schema": "<destination-schema>",
                    "destination_table": "<destination-table>",
                    "google_ads_options": {
                        "manager_account_id": "<manager-account-id>",
                        "lookback_window_days": <lookback-window-days>,
                        "sync_start_date": "<sync-start-date>"
                    }
                }
            }
        ]
    }
}

# json and create_pipeline are assumed to be defined in cell one of the notebook.
json_payload = json.dumps(pipeline_spec, indent=2)
create_pipeline(json_payload)
```
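After the pipeline exists, you can also trigger a refresh outside of a scheduled job. The following is a minimal sketch, assuming the same environment variables as the `create_pipeline` sketch above and a `pipeline_id` returned from pipeline creation:

```python
# Minimal sketch: start a refresh of an existing pipeline via the REST API.
# Assumes DATABRICKS_HOST and DATABRICKS_TOKEN are set, and a real pipeline_id.
import os

import requests


def start_update(pipeline_id: str) -> dict:
    """POST to the updates endpoint to begin a pipeline refresh."""
    host = os.environ["DATABRICKS_HOST"]
    token = os.environ["DATABRICKS_TOKEN"]
    resp = requests.post(
        f"{host}/api/2.0/pipelines/{pipeline_id}/updates",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()  # Includes the update_id for the new run
```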
Common patterns
For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.