Ingest data from TikTok Ads into Databricks
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
Learn how to create a managed pipeline to ingest data from TikTok Ads into Databricks.
Requirements
To create an ingestion pipeline, you must meet the following requirements:
- Your workspace must be enabled for Unity Catalog.
- Serverless compute must be enabled for your workspace. See Serverless compute requirements.
- If you plan to create a new connection: You must have `CREATE CONNECTION` privileges on the metastore. If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.
- If you plan to use an existing connection: You must have `USE CONNECTION` privileges or `ALL PRIVILEGES` on the connection object.
- You must have `USE CATALOG` privileges on the target catalog.
- You must have `USE SCHEMA` and `CREATE TABLE` privileges on an existing schema, or `CREATE SCHEMA` privileges on the target catalog.
To ingest from TikTok Ads, you must configure authentication from Databricks. See Configure TikTok Ads for managed ingestion.
Create an ingestion pipeline
- Databricks Asset Bundles
- Databricks notebook
This tab describes how to deploy an ingestion pipeline using Databricks Asset Bundles. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see What are Databricks Asset Bundles?.
1. Create a new bundle using the Databricks CLI:

   ```bash
   databricks bundle init
   ```

2. Add two new resource files to the bundle:

   - A pipeline definition file (`resources/tiktok_ads_pipeline.yml`).
   - A workflow file that controls the frequency of data ingestion (`resources/tiktok_ads_job.yml`).

3. Deploy the pipeline using the Databricks CLI:

   ```bash
   databricks bundle deploy
   ```
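The resource files from step 2 live under the bundle's `resources/` directory. As a rough sketch of that layout (plain Python, file and key names taken from the templates on this page; the connection name is a placeholder):

```python
from pathlib import Path
import tempfile

# Skeleton content for the pipeline resource file described in step 2.
# The keys mirror the "Pipeline definition file" template later on this
# page; the connection name is a placeholder you would replace.
PIPELINE_YAML = """\
resources:
  pipelines:
    tiktok_ads_pipeline:
      name: tiktok_ads_pipeline
      ingestion_definition:
        connection_name: tiktok_ads_connection
"""

def write_pipeline_resource(bundle_root: str) -> Path:
    """Create resources/tiktok_ads_pipeline.yml under the bundle root."""
    resources = Path(bundle_root) / "resources"
    resources.mkdir(parents=True, exist_ok=True)
    target = resources / "tiktok_ads_pipeline.yml"
    target.write_text(PIPELINE_YAML)
    return target

bundle_root = tempfile.mkdtemp()
path = write_pipeline_resource(bundle_root)
print(path.name)  # tiktok_ads_pipeline.yml
```

After both files exist under `resources/`, `databricks bundle deploy` picks them up automatically.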
Databricks notebook

This tab describes how to create an ingestion pipeline using a Databricks notebook.

1. Import the following notebook into your Databricks workspace:

2. Leave cells one and two as they are; do not modify them.

3. Modify cell three with your pipeline configuration details. See Values to modify.

4. Optionally configure advanced pipeline settings. See Common patterns for managed ingestion pipelines.

5. Click Run all.
Values to modify
| Value | Description |
|---|---|
| `name` | A unique name for the pipeline. |
| `connection_name` | The name of the connection you created to TikTok Ads. |
| `source_schema` | The advertiser ID for which you want to ingest data. |
| `source_table` | The name of the table you want to ingest. For a list of supported tables, see TikTok Ads connector reference. |
| `destination_catalog` | The name of the catalog where you want to store the ingested data. |
| `destination_schema` | The name of the schema where you want to store the ingested data. |
| `destination_table` | (Optional) The name of the destination table. If not provided, the connector uses the source table name. |
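To show how these values fit together, here is an illustrative Python sketch of a configuration matching the table above, including the documented fallback where an omitted destination table defaults to the source table name. The dictionary keys and helper function are hypothetical, not an official API; the notebook's cell three defines the actual variable names.

```python
# Illustrative configuration matching the values table above.
# Key names are hypothetical placeholders, not an official API.
config = {
    "pipeline_name": "tiktok_ads_pipeline",
    "connection_name": "tiktok_ads_connection",
    "source_schema": "<your_advertiser_id>",
    "source_table": "campaign_report_daily",
    "destination_catalog": "main",
    "destination_schema": "tiktok_ads_data",
    "destination_table": None,  # optional
}

def resolve_destination_table(cfg: dict) -> str:
    """If no destination table is given, fall back to the source table name."""
    return cfg.get("destination_table") or cfg["source_table"]

print(resolve_destination_table(config))  # campaign_report_daily
```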
Bundle resource file templates
For Databricks Asset Bundles deployments, use the following templates for your pipeline definition file and workflow file. For advanced pipeline settings, see Common patterns for managed ingestion pipelines.
Pipeline definition file
```yaml
resources:
  pipelines:
    tiktok_ads_pipeline:
      name: tiktok_ads_pipeline
      ingestion_definition:
        connection_name: tiktok_ads_connection
        objects:
          - table:
              source_schema: '<your_advertiser_id>'
              source_table: 'campaign_report_daily'
              destination_catalog: 'main'
              destination_schema: 'tiktok_ads_data'
              destination_table: 'campaign_report_daily'
```
Workflow file
```yaml
resources:
  jobs:
    tiktok_ads_job:
      name: tiktok_ads_job
      schedule:
        quartz_cron_expression: '0 0 0 * * ?'
        timezone_id: 'UTC'
      tasks:
        - task_key: tiktok_ads_ingestion
          pipeline_task:
            pipeline_id: ${resources.pipelines.tiktok_ads_pipeline.id}
```
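The Quartz cron expression `0 0 0 * * ?` in the workflow file fires at midnight UTC every day: the six fields are seconds, minutes, hours, day-of-month, month, and day-of-week, and `?` means "no specific value". A small sketch that splits such an expression into its named fields (a simplified reading, not a full Quartz parser; the optional seventh year field is ignored here):

```python
# Split a six-field Quartz cron expression into named fields.
# Simplified illustration only; real Quartz also allows an optional
# seventh "year" field and range/step syntax in each field.
QUARTZ_FIELDS = ["seconds", "minutes", "hours", "day_of_month", "month", "day_of_week"]

def parse_quartz(expr: str) -> dict:
    parts = expr.split()
    if len(parts) != len(QUARTZ_FIELDS):
        raise ValueError(f"expected {len(QUARTZ_FIELDS)} fields, got {len(parts)}")
    return dict(zip(QUARTZ_FIELDS, parts))

schedule = parse_quartz("0 0 0 * * ?")
# '?' in day-of-week means "no specific value": the job runs daily at 00:00 UTC.
print(schedule["hours"], schedule["minutes"], schedule["seconds"])  # 0 0 0
```

To change the ingestion frequency, adjust `quartz_cron_expression` and `timezone_id` in the workflow file and redeploy the bundle.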