Jira connector FAQs
The Jira connector is in Beta.
This page answers frequently asked questions about the Jira connector in Databricks Lakeflow Connect.
General managed connector FAQs
The answers in Managed connector FAQs apply to all managed connectors in Lakeflow Connect. Keep reading for Jira-specific FAQs.
Connector-specific FAQs
The answers in this section are specific to the Jira connector.
Why is the ADMIN scope needed for an on-premises OAuth app?
Some Jira on-premises entities require more than READ privileges to be fetched: boards, issue_fields, issue_field_values, project_board, project_roles, project_role_actors, and sprints. To fetch these tables without failures, you must configure the ADMIN scope and grant Databricks permission to fetch them.
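As a quick sanity check, the following sketch flags whether a planned table selection includes any of the entities that require the ADMIN scope. The list of entities comes from this answer; the `selected_tables` value is a hypothetical example.

```python
# Entities that require the ADMIN scope on a Jira on-premises OAuth app.
ADMIN_SCOPE_TABLES = {
    "boards",
    "issue_fields",
    "issue_field_values",
    "project_board",
    "project_roles",
    "project_role_actors",
    "sprints",
}

# Hypothetical selection of tables for an ingestion pipeline.
selected_tables = ["issues", "comments", "sprints", "users"]

needs_admin = ADMIN_SCOPE_TABLES.intersection(selected_tables)
if needs_admin:
    print(f"ADMIN scope required for: {sorted(needs_admin)}")
else:
    print("READ scope is sufficient for this selection.")
```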
How are deletes in Jira handled by the connector?
- Issues: Deleted rows are handled through the Jira audit logs API (if enabled).
- Comments and worklogs: Deletes aren't captured on incremental runs. To pick up deletes, run a full refresh of these tables (see the sketch after this list).
- All other entities: These are fully refreshed on each run, so deletes are handled automatically.
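For example, you can trigger a full refresh of only the comments and worklogs tables with the Databricks SDK for Python. This is a minimal sketch; the pipeline ID and the table names in `full_refresh_selection` are placeholders that depend on your pipeline configuration.

```python
from databricks.sdk import WorkspaceClient

# Assumes workspace authentication is already configured (env vars or a profile).
w = WorkspaceClient()

# Trigger an update that fully refreshes only the tables whose deletes
# aren't captured incrementally (placeholder pipeline ID and table names).
w.pipelines.start_update(
    pipeline_id="<your-pipeline-id>",
    full_refresh_selection=["comments", "worklogs"],
)
```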
How do I enable audit logs in Jira?
- Cloud: Your Jira site must be on at least one paid plan, and the connection user must have global admin access.
- On-premises: Audit logs are enabled by default in version 8.22 and later.
What roles are required for the connection user?
Ensure the connection user has the following roles and permissions:
| Table | Roles and permissions |
|---|---|
| Projects, versions, components, and boards | Administer Projects or Browse Projects permission |
| Issues, issue links, issue watchers, issue comments, worklogs | Administer Jira global permission |
| Project roles, project role actors | Administer Jira global permission |
| Users, user groups | Browse users and groups global permission |
| Issue security schemes, security levels, permissions, permission schemes | Administer Jira global permission |
How is this different from Atlassian's Delta Sharing product?
Databricks offers both Delta Sharing and ingestion for Jira.
Many organizations gravitate toward ingestion because it helps you track the history of your data and scale across regions and clouds.
However, there are also good reasons to use your data without moving it. For example, you might want to limit data duplication or query only the freshest possible data. In those cases, Delta Sharing is the better choice.
How does the connector pull data from Jira?
The Jira connector uses the Jira REST API to retrieve issue data, comments, attachments, and custom fields from your Jira projects.
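To illustrate the style of calls involved, here is a minimal sketch that pages through issues with the Jira Cloud REST API search endpoint and the `requests` library. The site URL, credentials, and JQL are placeholders, and the connector's actual batching, retries, and error handling aren't shown.

```python
import requests

JIRA_SITE = "https://your-domain.atlassian.net"  # placeholder
AUTH = ("user@example.com", "<api-token>")       # placeholder credentials


def fetch_issues(jql: str, batch_size: int = 100):
    """Page through the Jira Cloud search API and yield issues."""
    start_at = 0
    while True:
        resp = requests.get(
            f"{JIRA_SITE}/rest/api/3/search",
            params={"jql": jql, "startAt": start_at, "maxResults": batch_size},
            auth=AUTH,
        )
        resp.raise_for_status()
        page = resp.json()
        issues = page.get("issues", [])
        if not issues:
            break
        yield from issues
        start_at += len(issues)
        if start_at >= page.get("total", 0):
            break


# Placeholder JQL: pull issues from one project, oldest updates first.
for issue in fetch_issues('project = "ENG" ORDER BY updated ASC'):
    print(issue["key"], issue["fields"]["summary"])
```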
What data does the connector ingest?
The connector ingests the following data from Jira:
- Issue data (summary, description, status, priority, assignee)
- Issue metadata (creation date, update date, resolution date)
- Issue comments
- Custom field values
- Issue links and relationships
- Projects
- Users and groups
- Issue watchers
Can I ingest specific projects or all projects?
Yes. You can configure the pipeline to ingest:
- All projects in your Jira instance
- Specific projects by project key
How does the connector handle custom fields?
The connector automatically ingests all custom fields that are defined in your Jira instance. Custom fields are included in the ingested data with their field names and values preserved.
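In the Jira REST API, custom field values appear under generated keys such as `customfield_10010`. The following sketch (placeholder site and credentials) shows how those generated IDs map to display names via the field metadata endpoint, similar to the field names that appear in the ingested data.

```python
import requests

JIRA_SITE = "https://your-domain.atlassian.net"  # placeholder
AUTH = ("user@example.com", "<api-token>")       # placeholder credentials

# Fetch metadata for every field, including custom fields.
resp = requests.get(f"{JIRA_SITE}/rest/api/3/field", auth=AUTH)
resp.raise_for_status()

# Map generated IDs like "customfield_10010" to their display names.
custom_field_names = {
    f["id"]: f["name"] for f in resp.json() if f.get("custom")
}
print(custom_field_names)
```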
Does the connector support incremental ingestion?
Yes. The connector uses the updated timestamp to identify and ingest only issues that were created or modified since the last pipeline run. This reduces API usage and improves performance.
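Conceptually, the incremental filter is a JQL clause on the issue's `updated` field. The sketch below builds such a clause from a stored watermark; the timestamp format and variable names are illustrative only.

```python
from datetime import datetime, timezone

# Hypothetical watermark saved at the end of the previous pipeline run.
last_run = datetime(2024, 6, 1, 12, 30, tzinfo=timezone.utc)

# JQL accepts minute-level timestamps in "yyyy-MM-dd HH:mm" format.
jql = f'updated >= "{last_run.strftime("%Y-%m-%d %H:%M")}" ORDER BY updated ASC'
print(jql)  # updated >= "2024-06-01 12:30" ORDER BY updated ASC
```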
What happens if an issue is deleted in Jira?
When you use SCD type 2, deleted issues are tracked and marked with a deletion timestamp in the destination table. With SCD type 1, the issue is removed from the destination table. See Enable history tracking (SCD type 2).
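For example, with SCD type 2 you can find issues that were deleted in Jira by looking for keys that have no current (open-ended) version, assuming the `__START_AT`/`__END_AT` history columns that SCD type 2 tables in Lakeflow typically carry. The catalog, schema, table, and column names below are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder destination table; adjust catalog/schema/table to your pipeline.
issues = spark.table("main.jira.issues")

# With SCD type 2, the current version of a row has __END_AT = NULL.
# An issue with no current version has been deleted in Jira; its most
# recent __END_AT value is the deletion timestamp.
latest = issues.groupBy("key").agg(F.max("__END_AT").alias("deleted_at"))
still_active = issues.filter(F.col("__END_AT").isNull()).select("key").distinct()

deleted_issues = latest.join(still_active, on="key", how="left_anti")
deleted_issues.show()
```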
Can I ingest archived projects?
Yes. Archived projects themselves are ingested. However, issues that belong to archived projects are not fetched.
What permissions does the connector require?
Permissions vary by table. For detailed information about required OAuth scopes and user permissions for each table, see the Supported Jira source tables for ingestion reference.
How does the connector handle workflow states?
The connector ingests the current workflow state for each issue, including status, resolution, and workflow history. You can track state changes over time when you use SCD type 2.
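As an illustration, once SCD type 2 is enabled you could reconstruct the status history of a single issue by ordering its historical versions. The table, column names, and issue key below are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder destination table and column names.
issues = spark.table("main.jira.issues")

(
    issues
    .filter(issues.key == "ENG-123")  # hypothetical issue key
    .select("key", "status", "resolution", "__START_AT", "__END_AT")
    .orderBy("__START_AT")
    .show(truncate=False)
)
```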
Are issue comments ingested?
Yes. The connector ingests comments for all issues in the selected projects. Each comment includes the comment body, the author, and the timestamp.
How are issue links handled?
Issue links and relationships between issues are ingested as part of the issue data. You can use this information to reconstruct relationships in your downstream processing.
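For example, a downstream job could join a hypothetical issue links table back to the issues table to attach readable issue keys to both ends of each link. All table and column names here are placeholders, so check your pipeline's actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder destination tables and column names.
issues = spark.table("main.jira.issues").select(
    F.col("id").alias("issue_id"), F.col("key").alias("issue_key")
)
links = spark.table("main.jira.issue_links")  # assumed: source_id, target_id, link_type

# Build one lookup per side of the link.
src = issues.withColumnRenamed("issue_id", "source_id").withColumnRenamed("issue_key", "source_key")
tgt = issues.withColumnRenamed("issue_id", "target_id").withColumnRenamed("issue_key", "target_key")

related = (
    links
    .join(src, "source_id")
    .join(tgt, "target_id")
    .select("source_key", "link_type", "target_key")
)
related.show()
```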