Configure job parameters in Declarative Automation Bundles
Bundle variables and job parameters work together in Declarative Automation Bundles to allow you to override environment-specific defaults at runtime. Bundle variables are defined in configuration and are resolved during deployment. Job parameters are resolved when a job runs, so default values can be overridden without redeploying.
Bundle variables or job parameters
Select whether to use variables or job parameters based on when values must change:
| If the value changes... | Use | Example |
|---|---|---|
| Per environment (dev, staging, prod) | Bundle variables | Cluster size, warehouse ID |
| Per job run | Job parameters | Processing date, source table |
| Per task, when no job parameters exist | Task `base_parameters` | Task-specific file paths |
Set job parameter defaults using variables
Databricks recommends using bundle variables as the defaults for job parameters. This gives you environment-specific defaults that can be overridden at runtime.
In the following example, the catalog parameter defaults to biz_dev for a dev deployment and to biz_prod for a prod deployment.
```yaml
# databricks.yml
variables:
  default_catalog:
    description: Environment-specific catalog
    default: dev_catalog

targets:
  dev:
    variables:
      default_catalog: biz_dev
  prod:
    variables:
      default_catalog: biz_prod

resources:
  jobs:
    etl_pipeline:
      name: etl_pipeline
      parameters:
        - name: catalog
          default: ${var.default_catalog}
        - name: processing_date
          default: '{{job.start_time.iso_date}}'
        - name: mode
          default: incremental
      tasks:
        - task_key: process_data
          notebook_task:
            notebook_path: ./notebooks/process.py
```
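Inside the task notebook, each job parameter is read as a widget. The following is a minimal sketch of how ./notebooks/process.py might read the parameters defined above; the print statement is illustrative.

```python
# Job parameters are exposed to the notebook as widgets.
# dbutils is available in Databricks notebooks without an import.
catalog = dbutils.widgets.get("catalog")                   # biz_dev or biz_prod, or a runtime override
processing_date = dbutils.widgets.get("processing_date")   # resolved from {{job.start_time.iso_date}}
mode = dbutils.widgets.get("mode")                         # incremental unless overridden at run time

print(f"Processing catalog {catalog} for {processing_date} in {mode} mode")
```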
Limitations
Bundle validation does not allow job-level parameters and task-level base_parameters in the same job. This restriction comes from the API. If you add a job-level parameters block, move all task-level base_parameters to the job level as well. For example, the following job defines all of its parameters at the job level and none on its task:
```yaml
resources:
  jobs:
    my_job:
      parameters:
        - name: catalog
          default: dev
        - name: schema
          default: default
      tasks:
        - task_key: task1
          notebook_task:
            notebook_path: ./notebook.py
```
If you need task-specific parameters and have no job-level parameters block, use base_parameters on the task. After you add any job-level parameters, move all task parameters to the job level.
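For example, a job with no job-level parameters block can pass task-specific values through base_parameters on the notebook task. This is a minimal sketch; the job name, path, and parameter are illustrative.

```yaml
resources:
  jobs:
    my_task_scoped_job:
      name: my_task_scoped_job
      tasks:
        - task_key: task1
          notebook_task:
            notebook_path: ./notebook.py
            base_parameters:
              input_path: /Volumes/dev/raw/files   # illustrative task-specific path
```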
Override parameters at runtime
To override job parameter values when a job runs, pass new values using one of the following methods.

CLI

Use double hyphens (--) to indicate that the flags that follow are job parameters. See Pass job parameters.

```bash
databricks bundle run my_job -- --catalog=prod --mode=full_refresh
```

REST API

Pass the overrides in the job_parameters field of the run-now request body:
```json
{
  "job_id": 123,
  "job_parameters": {
    "catalog": "prod",
    "mode": "full_refresh"
  }
}
```
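As a sketch, assuming the Jobs API run-now endpoint and a personal access token in the DATABRICKS_TOKEN environment variable, the request can be sent with curl. Replace <workspace-url> with your workspace URL; the job_id shown is illustrative.

```bash
# Trigger the job with overridden parameters via the Jobs API run-now endpoint.
curl --request POST "https://<workspace-url>/api/2.1/jobs/run-now" \
  --header "Authorization: Bearer $DATABRICKS_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "job_id": 123,
    "job_parameters": {
      "catalog": "prod",
      "mode": "full_refresh"
    }
  }'
```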
UI

1. Open the job in the Databricks workspace.
2. Click Run now.
3. Select the option to run with different parameters.
Troubleshooting
The following sections can help you resolve common issues with job parameters.
Parameter values don't change at runtime
Bundle variables are resolved at deploy time. If you need a value to be overridable at runtime, define it as a job parameter with the bundle variable as its default, rather than referencing ${var.name} directly in task code or task configuration. After deployment, job parameter defaults can be overridden on each run without redeploying.
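For example, a task parameter written as follows is fixed at deploy time (a sketch; the task and variable names follow the earlier example):

```yaml
# Resolved at deploy time: after `databricks bundle deploy`, the catalog value
# is substituted into the task configuration and cannot be changed per run.
tasks:
  - task_key: process_data
    notebook_task:
      notebook_path: ./notebooks/process.py
      base_parameters:
        catalog: ${var.default_catalog}

# To keep the value overridable per run, declare it as a job parameter instead
# (see the etl_pipeline example above):
# parameters:
#   - name: catalog
#     default: ${var.default_catalog}
```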
Parameters not available in notebook
If a job parameter is not available in a notebook, verify all of the following:
- Parameters are defined in the job `parameters` section.
- The notebook uses `dbutils.widgets.get("name")` to access each parameter.
- The parameter name matches exactly, including case.
- The bundle was redeployed after adding or renaming parameters.