Jobs CLI

You run job subcommands by appending them to databricks jobs, and job run subcommands by appending them to databricks runs.

databricks jobs -h
Usage: databricks jobs [OPTIONS] COMMAND [ARGS]...

  Utility to interact with jobs.

  Job runs are handled by ``databricks runs``.

Options:
  -v, --version  [VERSION]
  -h, --help     Show this message and exit.

Commands:
  create   Creates a job.
    Options:
      --json-file PATH            File containing JSON request to POST to /api/2.0/jobs/create.
      --json JSON                 JSON string to POST to /api/2.0/jobs/create.
  delete   Deletes a job.
    Options:
      --job-id JOB_ID             Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#job/$JOB_ID. [required]
  get      Describes the metadata for a job.
    Options:
      --job-id JOB_ID             Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#job/$JOB_ID. [required]
  list     Lists the jobs in the Databricks Job Service.
  reset    Resets (edits) the definition of a job.
    Options:
      --job-id JOB_ID             Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#job/$JOB_ID. [required]
      --json-file PATH            File containing JSON request to POST to /api/2.0/jobs/reset.
      --json JSON                 JSON string to POST to /api/2.0/jobs/reset.
  run-now  Runs a job with optional per-run parameters.
    Options:
      --job-id JOB_ID             Can be found in the URL at https://<databricks-instance>/#job/$JOB_ID. [required]
      --jar-params JSON           JSON string specifying an array of parameters. i.e. '["param1", "param2"]'
      --notebook-params JSON      JSON string specifying a map of key-value pairs. i.e. '{"name": "john doe", "age": 35}'
      --python-params JSON        JSON string specifying an array of parameters. i.e. '["param1", "param2"]'
      --spark-submit-params JSON  JSON string specifying an array of parameters. i.e. '["--class", "org.apache.spark.examples.SparkPi"]'

databricks runs -h
Usage: databricks runs [OPTIONS] COMMAND [ARGS]...

  Utility to interact with job runs.

Options:
  -v, --version  [VERSION]
  -h, --help     Show this message and exit.

Commands:
  cancel  Cancels a run.
    Options:
      --run-id RUN_ID  [required]
  get     Gets the metadata about a run in JSON form.
    Options:
      --run-id RUN_ID  [required]

  get-output  Gets the output of a run.
  list    Lists job runs.
  submit  Submits a one-time run.
    Options:
      --json-file PATH  File containing JSON request to POST to /api/2.0/jobs/runs/submit.
      --json JSON       JSON string to POST to /api/2.0/jobs/runs/submit.

List and find jobs

The databricks jobs list command has two output formats, JSON and TABLE. The TABLE format is output by default and returns a two-column table (job ID, job name).

To find a job by name, run:

databricks jobs list | grep "JOB_NAME"
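To see what that filter does without a workspace at hand, here is a minimal sketch of the same match applied to invented TABLE-format output (the job IDs and names below are placeholders, not real jobs):

```python
# Invented sample of `databricks jobs list` TABLE output:
# two columns, job ID then job name.
sample_table = """\
284907  Nightly model training
533512  Untitled
910218  Hourly ETL
"""

def find_jobs(table, name_fragment):
    """Return (job_id, job_name) rows whose name contains name_fragment,
    mimicking `databricks jobs list | grep NAME`."""
    matches = []
    for line in table.splitlines():
        job_id, _, job_name = line.partition("  ")
        if name_fragment in job_name:
            matches.append((job_id, job_name))
    return matches

print(find_jobs(sample_table, "ETL"))  # -> [('910218', 'Hourly ETL')]
```

Note that grep matches anywhere in the line, so a fragment that happens to appear in a job ID would also match; quoting an exact name keeps the filter precise.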

Copy a job

Note

This example requires the program jq.

SETTINGS_JSON=$(databricks jobs get --job-id 284907 | jq .settings)
# JQ Explanation:
#   - peek into top level `settings` field.
databricks jobs create --json "$SETTINGS_JSON"
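The jq step matters because the response from jobs get wraps the reusable job definition in instance-specific fields. A minimal sketch of the same extraction in Python, using an invented sample response (real responses carry more fields):

```python
import json

# Invented sample response from `databricks jobs get --job-id 284907`.
get_response = json.dumps({
    "job_id": 284907,
    "settings": {
        "name": "Nightly model training",
        "max_concurrent_runs": 1,
    },
    "created_time": 1571945487000,
})

# Equivalent of `jq .settings`: keep only the job definition, dropping
# instance-specific fields such as job_id and created_time.
settings = json.loads(get_response)["settings"]

# This string is what gets passed to `databricks jobs create --json`.
settings_json = json.dumps(settings)
print(settings_json)
```

Posting the full get response to /api/2.0/jobs/create would fail, because create accepts only the settings payload.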

Delete “Untitled” jobs

databricks jobs list --output json | jq '.jobs[] | select(.settings.name == "Untitled") | .job_id' | xargs -n 1 databricks jobs delete --job-id
# Explanation:
#   - List jobs in JSON.
#   - Peek into top level `jobs` field.
#   - Select only jobs with name equal to "Untitled"
#   - Print those job IDs out.
#   - Invoke `databricks jobs delete --job-id` once per row with the $job_id appended as an argument to the end of the command.
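The jq filter in the pipeline above can be sketched in Python against an invented sample of the list response (job IDs here are placeholders):

```python
import json

# Invented sample output of `databricks jobs list --output json`.
list_response = json.dumps({
    "jobs": [
        {"job_id": 1, "settings": {"name": "Untitled"}},
        {"job_id": 2, "settings": {"name": "Hourly ETL"}},
        {"job_id": 3, "settings": {"name": "Untitled"}},
    ]
})

# Equivalent of:
#   jq '.jobs[] | select(.settings.name == "Untitled") | .job_id'
untitled_ids = [
    job["job_id"]
    for job in json.loads(list_response)["jobs"]
    if job["settings"]["name"] == "Untitled"
]
print(untitled_ids)  # -> [1, 3]

# xargs -n 1 then invokes `databricks jobs delete --job-id <id>`
# once per ID in this list.
```

Since jobs delete takes a single --job-id, the xargs -n 1 flag is what turns the list of IDs into one delete call per job.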