Ingest data from Jira
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
Learn how to create a managed Jira ingestion pipeline using Databricks Lakeflow Connect.
Requirements

To create an ingestion pipeline, you must first meet the following requirements:

- Your workspace must be enabled for Unity Catalog.
- Serverless compute must be enabled for your workspace. See Serverless compute requirements.
- If you plan to create a new connection: You must have CREATE CONNECTION privileges on the metastore. See Manage privileges in Unity Catalog. If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.
- If you plan to use an existing connection: You must have USE CONNECTION privileges or ALL PRIVILEGES on the connection object.
- You must have USE CATALOG privileges on the target catalog.
- You must have USE SCHEMA and CREATE TABLE privileges on an existing schema, or CREATE SCHEMA privileges on the target catalog.
- To ingest from Jira, you must first complete the steps in Configure Jira for ingestion.
Create an ingestion pipeline
Each source table is ingested into a streaming table or a snapshot table, depending on the source. For a list of supported source tables, see Jira connector reference.
- Databricks UI
- Databricks Asset Bundles
- Databricks notebook
- In the sidebar of the Databricks workspace, click Data Ingestion.
- On the Add data page, under Databricks connectors, click Jira.
- On the Connection page of the ingestion wizard, select the connection that stores your Jira access credentials. If you have the CREATE CONNECTION privilege on the metastore, you can click Create connection to create a new connection with the authentication details in Configure Jira for ingestion.
- Click Next.
- On the Ingestion setup page, enter a unique name for the pipeline.
- Select a catalog and a schema to write event logs to. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.
- Click Create pipeline and continue.
- On the Source page, select the tables to ingest. You can optionally filter the data by Jira project. Use exact project keys, not project names or IDs.
- Click Save and continue.
- On the Destination page, select a catalog and a schema to load data into. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click Create schema in the drop-down menu to create a new schema.
- Click Save and continue.
- (Optional) On the Schedules and notifications page, click Create schedule. Set the frequency to refresh the destination tables.
- (Optional) Click Add notification to set email notifications for pipeline operation success or failure, then click Save and run pipeline.
Use Databricks Asset Bundles to manage Jira pipelines as code. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see What are Databricks Asset Bundles?.
- Create a new bundle using the Databricks CLI:

  databricks bundle init

- Add two new resource files to the bundle:
  - A pipeline definition file (for example, resources/jira_pipeline.yml). See pipeline.ingestion_definition and Examples.
  - A job definition file that controls the frequency of data ingestion (for example, resources/jira_job.yml).
- Deploy the pipeline using the Databricks CLI:

  databricks bundle deploy
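If you automate the bundle workflow above, the two CLI calls can be driven from a script. The following is a minimal sketch, assuming the Databricks CLI is installed and authenticated; the `bundle_command` helper is hypothetical, and the -t flag selects a bundle target such as dev or prod:

```python
import subprocess

def bundle_command(action, target=None):
    """Build a 'databricks bundle <action>' argument list (e.g. init, deploy)."""
    cmd = ["databricks", "bundle", action]
    if target:
        cmd += ["-t", target]  # bundle target name, e.g. dev or prod
    return cmd

# To actually run a deployment (requires a configured Databricks CLI):
# subprocess.run(bundle_command("deploy", "dev"), check=True)
```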
- Import the following notebook into your Databricks workspace:
- Leave cell one as-is.
- Modify cell three with your pipeline configuration details. See pipeline.ingestion_definition and Examples.
- Click Run all.
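Cell one of the notebook defines the create_pipeline helper used in the examples below. As a rough, hypothetical sketch of what such a helper might do — the endpoint path, argument names, and dry_run flag here are assumptions, not the notebook's actual code:

```python
import json
import urllib.request

def create_pipeline(pipeline_spec, host=None, token=None, dry_run=False):
    """Validate a JSON pipeline spec and (optionally) submit it.

    Hypothetical sketch only; the helper defined in cell one may differ.
    With dry_run=True the spec is parsed and returned without being sent.
    """
    spec = json.loads(pipeline_spec)  # fail fast on malformed JSON
    if "ingestion_definition" not in spec:
        raise ValueError("spec is missing ingestion_definition")
    if dry_run:
        return spec
    # host is the workspace URL, token a personal access token (assumptions).
    req = urllib.request.Request(
        f"{host}/api/2.0/pipelines",
        data=json.dumps(spec).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```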
Examples
Use these examples to configure your pipeline.
Ingest a single source table
- Databricks Asset Bundles
- Databricks notebook
(Recommended) The following pipeline definition file ingests a single source table.
variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

# The main pipeline for jira_dab
resources:
  pipelines:
    pipeline_jira:
      name: jira_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <jira-connection>
        objects:
          # An array of objects to ingest from Jira. This example ingests the issues table.
          - table:
              source_schema: default
              source_table: issues
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
(Recommended) The following pipeline specification ingests a single source table:
pipeline_spec = """
{
  "name": "<pipeline-name>",
  "ingestion_definition": {
    "connection_name": "<jira-connection>",
    "objects": [
      {
        "table": {
          "source_schema": "default",
          "source_table": "issues",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema"
        }
      }
    ]
  },
  "channel": "PREVIEW"
}
"""

create_pipeline(pipeline_spec)
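Instead of hand-editing the raw JSON string, the same single-table spec can be assembled as a Python dict and serialized. In this sketch, build_pipeline_spec is a hypothetical convenience helper, not part of the connector API; the placeholder values mirror the example above:

```python
import json

def build_pipeline_spec(pipeline_name, connection_name, source_table,
                        destination_catalog, destination_schema):
    """Assemble a single-table ingestion spec matching the JSON example above."""
    return {
        "name": pipeline_name,
        "ingestion_definition": {
            "connection_name": connection_name,
            "objects": [
                {
                    "table": {
                        "source_schema": "default",
                        "source_table": source_table,
                        "destination_catalog": destination_catalog,
                        "destination_schema": destination_schema,
                    }
                }
            ],
        },
        "channel": "PREVIEW",
    }

spec = build_pipeline_spec("<pipeline-name>", "<jira-connection>",
                           "issues", "main", "ingest_destination_schema")
pipeline_spec = json.dumps(spec)  # equivalent to the string spec above
```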
Ingest multiple source tables
- Databricks Asset Bundles
- Databricks notebook
(Recommended) The following pipeline definition file ingests multiple source tables.
variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

# The main pipeline for jira_dab
resources:
  pipelines:
    pipeline_jira:
      name: jira_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <jira-connection>
        objects:
          # An array of objects to ingest from Jira. This example ingests the issues and projects tables.
          - table:
              source_schema: default
              source_table: issues
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
          - table:
              source_schema: default
              source_table: projects
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
(Recommended) The following pipeline specification ingests multiple source tables:
pipeline_spec = """
{
  "name": "<pipeline-name>",
  "ingestion_definition": {
    "connection_name": "<jira-connection>",
    "objects": [
      {
        "table": {
          "source_schema": "default",
          "source_table": "issues",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema"
        }
      },
      {
        "table": {
          "source_schema": "default",
          "source_table": "projects",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema"
        }
      }
    ]
  },
  "channel": "PREVIEW"
}
"""

create_pipeline(pipeline_spec)
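When ingesting many tables, the repetitive "objects" array can be generated from a plain list of table names, so adding a table is a one-line change. A sketch, where table_objects is a hypothetical helper and the table names mirror the example above:

```python
def table_objects(tables, destination_catalog, destination_schema):
    """Build one table entry per source table, all from the default schema."""
    return [
        {
            "table": {
                "source_schema": "default",
                "source_table": t,
                "destination_catalog": destination_catalog,
                "destination_schema": destination_schema,
            }
        }
        for t in tables
    ]

objects = table_objects(["issues", "projects"],
                        "main", "ingest_destination_schema")
```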
Ingest all source tables
- Databricks Asset Bundles
- Databricks notebook
The following pipeline definition file ingests all available Jira source tables in one pipeline. Make sure that your OAuth application includes all scopes required by the full table set and the authenticating user has the necessary Jira permissions. Pipelines fail if any required scope or permission is missing.
variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

# The main pipeline for jira_dab
resources:
  pipelines:
    pipeline_jira:
      name: jira_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <jira-connection>
        objects:
          # An array of objects to ingest from Jira. This example ingests all tables in the default schema.
          - schema:
              source_schema: default
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
The following pipeline specification ingests all source tables:
pipeline_spec = """
{
  "name": "<pipeline-name>",
  "ingestion_definition": {
    "connection_name": "<jira-connection>",
    "objects": [
      {
        "schema": {
          "source_schema": "default",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema"
        }
      }
    ]
  },
  "channel": "PREVIEW"
}
"""

create_pipeline(pipeline_spec)
Bundle job definition file
The following is an example job definition file to use with Databricks Asset Bundles. The job runs every day, exactly one day from the last run.
resources:
  jobs:
    jira_dab_job:
      name: jira_dab_job

      trigger:
        periodic:
          interval: 1
          unit: DAYS

      email_notifications:
        on_failure:
          - <email-address>

      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.pipeline_jira.id}
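The periodic trigger above (interval 1, unit DAYS) fires one interval after the previous run rather than at a fixed wall-clock time. A small sketch of that arithmetic, where next_run is a hypothetical illustration, not a Databricks API:

```python
from datetime import datetime, timedelta

def next_run(last_run: datetime, interval: int = 1, unit: str = "DAYS") -> datetime:
    """Compute the next trigger time for a periodic trigger (DAYS/HOURS shown)."""
    deltas = {"DAYS": timedelta(days=interval), "HOURS": timedelta(hours=interval)}
    return last_run + deltas[unit]

# A run at 2024-06-01 09:00 schedules the next run for 2024-06-02 09:00.
```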
Common patterns
For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.
Next steps
Start, schedule, and set alerts on your pipeline. See Common pipeline maintenance tasks.