Trigger a single job run
Use the Run now button to trigger a single job run.
You can also choose to Run now with different parameters to override default job parameters.
Databricks recommends using the default settings that enforce a single concurrent run for each job. If your workload requires multiple concurrent runs of a job, see Configure maximum concurrent runs.
Run a job immediately
To run the job immediately, click Run now.
You can perform a test run of a job with a notebook task by clicking Run now. Click Run now again after editing the notebook to run the latest version of the notebook.
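You can also trigger the same run-now action outside the UI through the Jobs API. The following is a minimal sketch using the Databricks SDK for Python; the job ID is a placeholder for your own job.

```python
# Minimal sketch: trigger a single job run with the Databricks SDK for Python.
# The job ID below is a placeholder.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # reads credentials from the environment or a config profile

# Trigger one run of the job and block until it reaches a terminal state.
run = w.jobs.run_now(job_id=123456789).result()

print(run.state.result_state)  # e.g. SUCCESS
print(run.run_page_url)        # link to the run in the workspace UI
```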
Run a job with different settings
Run now with different settings lets you choose which tasks to run, provide different parameters or override default values for existing parameters, and run Performance optimized serverless workloads.
- Click the blue caret next to Run now and select Run now with different settings, or click Run now with different settings in the Active Runs table.
- (Optional) Select or deselect the tasks to run by clicking them in the graph. Hover over a task to get an option to select or deselect all upstream or downstream tasks.
- (Optional) Enter the new job parameters as key-value pairs. See Configure job parameters.
- (Optional) Change the Performance optimized setting. See Run your Databricks job with serverless compute for workflows.
- Click Run.
The Run now with different parameters dialog includes a Switch to legacy parameters option if your job defines only task parameters and no job parameters. This legacy behavior is no longer recommended.
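The same one-off parameter override is also available through the Jobs API. The following is a minimal sketch using the Databricks SDK for Python; the job ID and the parameter keys (env, process_date) are placeholders, and the override applies only to the triggered run.

```python
# Minimal sketch: run a job once with overridden job parameters using the
# Databricks SDK for Python. Job ID and parameter names are placeholders.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# job_parameters overrides the default job parameter values for this run only;
# the job definition itself is left unchanged.
run = w.jobs.run_now(
    job_id=123456789,
    job_parameters={"env": "staging", "process_date": "2024-01-01"},
).result()

print(run.state.result_state)
```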
Manual triggers and continuous jobs
Because continuous jobs should always keep the default setting of a single concurrent run, the Run now button is replaced by a Restart run button while a continuous trigger is active.
If you pause a continuous trigger, the Run now button becomes available.
If any run is active when a continuous trigger is resumed, the job scheduler waits until that run completes to trigger a new run.
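If you manage the continuous trigger programmatically, its pause state can be toggled through the Jobs API as well. The following is a minimal sketch using the Databricks SDK for Python, assuming the job already has a continuous trigger defined; the job ID is a placeholder.

```python
# Minimal sketch: pause and resume a job's continuous trigger with the
# Databricks SDK for Python. The job ID is a placeholder.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

# Pause the continuous trigger; the job can then be triggered manually with Run now.
w.jobs.update(
    job_id=123456789,
    new_settings=jobs.JobSettings(
        continuous=jobs.Continuous(pause_status=jobs.PauseStatus.PAUSED)
    ),
)

# Resume the continuous trigger; if a run is still active, the scheduler waits
# for it to complete before starting a new one.
w.jobs.update(
    job_id=123456789,
    new_settings=jobs.JobSettings(
        continuous=jobs.Continuous(pause_status=jobs.PauseStatus.UNPAUSED)
    ),
)
```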