Jira connector limitations
Beta
The Jira connector is in Beta.
This page lists limitations and considerations for ingesting data from Jira using Databricks Lakeflow Connect.
General SaaS connector limitations
The limitations in this section apply to all SaaS connectors in Lakeflow Connect.
- When you run a scheduled pipeline, alerts don't trigger immediately. Instead, they trigger when the next update runs.
- When a source table is deleted, the destination table is not automatically deleted; you must delete it manually. This differs from Lakeflow Spark Declarative Pipelines behavior.
- During source maintenance periods, Databricks might not be able to access your data.
- If a source table name conflicts with an existing destination table name, the pipeline update fails.
- Multi-destination pipeline support is API-only.
- You can optionally rename a table that you ingest. If you rename a table in your pipeline, it becomes an API-only pipeline, and you can no longer edit the pipeline in the UI.
- Column-level selection and deselection are API-only.
- If you select a column after a pipeline has already started, the connector does not automatically backfill data for the new column. To ingest historical data, manually run a full refresh on the table (see the sketch after this list).
- Databricks can't ingest two or more tables with the same name in the same pipeline, even if they come from different source schemas.
- The connector assumes that cursor columns in the source system are monotonically increasing.
- With SCD type 1 enabled, deletes don't produce an explicit `delete` event in the change data feed. For auditable deletions, use SCD type 2 if the connector supports it. For details, see Example: SCD type 1 and SCD type 2 processing with CDF source data.
- The connector ingests raw data without transformations. Use Lakeflow Spark Declarative Pipelines downstream for transformations.
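For the column backfill case above, a single table's full refresh can be triggered programmatically. The following is a minimal sketch assuming the Databricks SDK for Python; the pipeline ID and table name are placeholders, and parameter names may vary by SDK version.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Placeholder value: substitute your ingestion pipeline ID.
PIPELINE_ID = "<your-pipeline-id>"

# Start an update that fully refreshes only the selected table so the
# newly selected column is backfilled with historical data.
w.pipelines.start_update(
    pipeline_id=PIPELINE_ID,
    full_refresh_selection=["issues"],  # assumed table name; use your own
)
```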
Connector-specific limitations
The limitations in this section are specific to the Jira connector.
Pipelines
- UI-based pipeline authoring is not supported. You must use a notebook, the Databricks CLI, or Databricks Asset Bundles to create pipelines.
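As a rough illustration of API-based authoring, the sketch below creates an ingestion pipeline with the Databricks SDK for Python. The connection name, catalog, schema, and table names are placeholders, and the exact ingestion-definition fields for the Jira connector may differ; treat this as an assumption-laden outline, not a verified recipe.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import pipelines

w = WorkspaceClient()

# All names below are placeholders: swap in your Unity Catalog connection,
# destination catalog/schema, and the Jira source tables you want to ingest.
pipeline = w.pipelines.create(
    name="jira-ingestion-pipeline",
    catalog="main",
    target="jira_raw",
    ingestion_definition=pipelines.IngestionPipelineDefinition(
        connection_name="my_jira_connection",
        objects=[
            pipelines.IngestionConfig(
                table=pipelines.TableSpec(
                    source_schema="jira",
                    source_table="issues",
                    destination_catalog="main",
                    destination_schema="jira_raw",
                )
            ),
        ],
    ),
)
print(pipeline.pipeline_id)
```

The same definition can be expressed as JSON for the Databricks CLI or in a Databricks Asset Bundle.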
Incremental sync
- Only a subset of tables supports incremental sync; the rest are full refresh only. For details, see Supported Jira source tables for ingestion.
Delete tracking
- Issue deletes are only supported if Jira audit logs are available and enabled on your Jira instance.
- Deletes are only tracked if the authenticating user has global admin permissions on the Jira instance.
- Comment and worklog deletions are only supported through full refresh.
Filtering
- Filtering by Jira project or space is supported using the `include_jira_spaces` parameter in `jira_options`. Make sure that you use exact project keys instead of project names or IDs.
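As an illustration only, the fragment below shows project-key filtering as a `jira_options` mapping. The value format (a list of keys) and the exact placement of `jira_options` in the pipeline definition are assumptions; consult the connector reference for the authoritative shape.

```python
# Assumed shape: PROJ1 and PROJ2 are placeholder Jira project keys.
# Use exact project keys, not project names or numeric IDs.
jira_options = {
    "include_jira_spaces": ["PROJ1", "PROJ2"],
}
```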
Content ingestion
- The connector provides access to 27 tables in total. All data, including data from multiple projects, is organized into these tables.
- Some tables (for example, issue links) use internal Jira IDs and might require joining with other tables for meaningful output.
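For example, a link-style table can be joined back to the issues table to resolve internal IDs into readable keys. The sketch below assumes PySpark in a Databricks notebook; the table and column names (`issue_links`, `issues`, `source_issue_id`, `id`, `key`, `link_type`, `destination_issue_id`) and the `main.jira_raw` destination are illustrative placeholders, not the connector's documented schema.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Illustrative names only: the connector's actual destination tables and
# columns may differ. "main.jira_raw" is a placeholder catalog.schema.
issue_links = spark.table("main.jira_raw.issue_links").alias("l")
issues = spark.table("main.jira_raw.issues").alias("i")

# Resolve the internal Jira issue ID stored on each link record to the
# human-readable issue key by joining against the issues table.
readable_links = (
    issue_links.join(issues, F.col("l.source_issue_id") == F.col("i.id"))
    .select("i.key", "l.link_type", "l.destination_issue_id")
)
readable_links.show()
```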