Jobs

A job is a way of running a notebook or JAR either immediately or on a scheduled basis. The other way to run a notebook is interactively in the notebook UI.

You can create and run jobs using the UI, the CLI, or by invoking the Jobs API. You can monitor job run results in the UI, with the CLI, by querying the API, and through email alerts. This article focuses on performing job tasks using the UI. For the other methods, see Jobs CLI and Jobs API.

Important

  • The number of jobs is limited to 1000.
  • The number of jobs a workspace can create in an hour is limited to 5000 (includes “run now” and “runs submit”). This limit also affects jobs created by the REST API and notebook workflows.
  • The number of concurrently active runs a workspace can create is limited to 150.

View jobs

Click the Jobs icon in the sidebar. The Jobs page displays, listing all defined jobs, the cluster definition, the schedule (if any), and the result of the last run.

In the Jobs list, you can filter jobs:

  • Using keywords.
  • Showing only the jobs you own or the jobs you have access to. Access to this filter requires that Jobs Access Control is enabled.

You can also click any column header to sort the list of jobs (either descending or ascending) by that column. By default, the page is sorted on job names in ascending order.


Create a job

  1. Click + Create Job. The job detail page displays.

  2. Enter a name in the text field with the placeholder text Untitled.

  3. Specify the task type: click Select Notebook, Set JAR, or Configure spark-submit.

    • Notebook

      1. Select a notebook and click OK.
      2. Next to Parameters, click Edit. Specify key-value pairs or a JSON string representing key-value pairs. These parameters set the values of notebook widgets.
    • JAR: Upload a JAR, specify the main class and arguments, and click OK. To learn more about JAR jobs, see JAR job tips.

    • spark-submit: Specify the main class, the path to the library JAR, and any arguments, and click Confirm. To learn more about spark-submit, see the Apache Spark documentation. A sketch of a main class suitable for running this way appears after these steps.

      Note

      The following Databricks features are not available for spark-submit jobs:

  4. In the Dependent Libraries field, optionally click Add and specify dependent libraries. Dependent libraries are automatically attached to the cluster on launch. Follow the recommendations in Library dependencies for specifying dependencies.

    Important

    If you have configured a library to automatically install on all clusters or in the next step you select an existing terminated cluster that has libraries installed, the job execution does not wait for library installation to complete. If a job requires a certain library, you should attach the library to the job in the Dependent Libraries field.

  5. In the Cluster field, click Edit and specify the cluster on which to run the job. In the Cluster Type drop-down, choose New Automated Cluster or Existing Interactive Cluster.

    Note

    Keep the following in mind when you choose a cluster type:

    • For production-level jobs or jobs that are important to complete, we recommend that you select a new cluster.
    • You can run spark-submit jobs only on new clusters.
    • When you run a job on a new cluster, the job is treated as a data engineering (automated) workload subject to automated workload pricing. When you run a job on an existing cluster, the job is treated as a data analytics (interactive) workload subject to interactive workload pricing.
    • If you select a terminated existing cluster and the job owner has Can Restart permission, Databricks starts the cluster when the job is scheduled to run.
    • Existing clusters work best for tasks such as updating dashboards at regular intervals.
    • New Automated Cluster - complete the cluster configuration.
      1. In the cluster configuration, select a runtime version. For help with selecting a runtime version, see Databricks Runtime and Databricks Light.
      2. To decrease new cluster start time, select a pool in the cluster configuration.
    • Existing Interactive Cluster - in the drop-down, select the existing cluster.
  6. In the Schedule field, optionally click Edit and schedule the job. See Run a job.

  7. Optionally click Advanced and specify advanced job options. See Advanced job options.
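
For reference, here is a minimal sketch of a main class that a spark-submit (or JAR) job could run, loosely modeled on the classic SparkPi example referred to later in this article. The object name, the argument handling, and the default of 10 partitions are illustrative assumptions, not a prescribed implementation.

import org.apache.spark.sql.SparkSession

object SparkPi {
  def main(args: Array[String]): Unit = {
    // The first job argument sets the number of partitions; default to 10
    val partitions = if (args.nonEmpty) args(0).toInt else 10
    // Reuse the shared SparkSession/SparkContext (see JAR job tips)
    val spark = SparkSession.builder().getOrCreate()
    val n = 100000L * partitions
    val count = spark.sparkContext
      .parallelize(1L to n, partitions)
      .map { _ =>
        val x = math.random * 2 - 1
        val y = math.random * 2 - 1
        if (x * x + y * y <= 1) 1 else 0
      }
      .reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / n}")
  }
}

With such a JAR, you would set the main class to SparkPi, point the library JAR path at the compiled artifact, and pass the partition count as a single argument.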

View job details

On the Jobs page, click a job name in the Name column. The job details page shows configuration parameters, active runs, and completed runs.


Databricks maintains a history of your job runs for up to 60 days. If you need to preserve job runs, we recommend that you export job run results before they expire. For more information, see Export job run results.

On the job runs page, you can view the standard error, standard output, and log4j output for a job run by clicking the Logs link in the Spark column.

Run a job

You can run a job on a schedule or immediately.

To define a schedule for the job:

  1. Click Edit next to Schedule.


    The Schedule Job dialog displays.

  2. Specify the schedule granularity, starting time, and time zone.

  3. Click Confirm.

Note

  • You can choose a time zone that observes daylight saving time or a UTC time. If you select a zone that observes daylight saving time, an hourly job will be skipped or may appear to not fire for an hour or two when daylight saving time begins or ends. If you want jobs to run at every hour (absolute time), choose a UTC time.
  • The job scheduler, like the Spark batch interface, is not intended for low-latency jobs. Due to network or cloud issues, job runs may occasionally be delayed by up to several minutes. In these situations, scheduled jobs run as soon as the service becomes available.

To run the job immediately, in the Active runs table click Run Now.


Tip

Click Run Now to do a test run of your notebook or JAR when you’ve finished configuring your job. If your notebook fails, you can edit it and the job will automatically run the new version of the notebook.

Run a job with different parameters

You can use Run Now with Different Parameters to re-run a job specifying different parameters or different values for existing parameters.

  1. In the Active runs table, click Run Now with Different Parameters. The dialog varies depending on whether you are running a notebook job or a spark-submit job.

    • Notebook - A dialog displays that lets you specify key-value pairs or a JSON object. You can use this dialog to set the values of widgets; a sketch of a notebook cell that reads such a widget appears after these steps.

    • spark-submit - A dialog displays containing the list of parameters. For example, you could run the SparkPi estimator described in Create a job with 100 partitions instead of the default 10.

  2. Specify the parameters. The provided parameters are merged with the default parameters for the triggered run. If you delete keys, the default parameters are used.

  3. Click Run.
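
As a rough illustration of how these parameters reach a notebook, a cell along the following lines could read them; the widget name date and its default value are hypothetical.

// Hypothetical widget: a job parameter with key "date" overrides the default value
dbutils.widgets.text("date", "2021-01-01")
val runDate = dbutils.widgets.get("date")
println(s"Processing data for $runDate")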

Notebook job tips

Total notebook cell output (the combined output of all notebook cells) is subject to a 20MB size limit. Additionally, individual cell output is subject to an 8MB size limit. If total cell output exceeds 20MB in size, or if the output of an individual cell is larger than 8MB, the run will be canceled and marked as failed. If you need help finding cells that are near or beyond the limit, run the notebook against an interactive cluster and use this notebook autosave technique.

JAR job tips

There are some caveats you need to be aware of when you run a JAR job.

Output size limits

Job output, such as log output emitted to stdout, is subject to a 20MB size limit. If the total output has a larger size, the run will be canceled and marked as failed.

Use the shared SparkContext

Because Databricks is a managed service, some code changes may be necessary to ensure that your Apache Spark jobs run correctly. JAR job programs must use the shared SparkContext API to get the SparkContext. Because Databricks initializes the SparkContext, programs that invoke new SparkContext() will fail. To get the SparkContext, use only the shared SparkContext created by Databricks:

val goodSparkContext = SparkContext.getOrCreate()
val goodSparkSession = SparkSession.builder().getOrCreate()

In addition, there are several methods you should avoid when using the shared SparkContext.

  • Do not call SparkContext.stop().
  • Do not call System.exit(0) or sc.stop() at the end of your main program. This can cause undefined behavior.

Use try-finally blocks for job cleanup

Consider a JAR that consists of two parts:

  • jobBody(), which contains the main part of the job
  • jobCleanup(), which has to be executed after jobBody(), irrespective of whether that function succeeded or threw an exception

As an example, jobBody() may create temporary tables, and you can use jobCleanup() to drop these tables.

The safe way to ensure that the cleanup method is called is to put a try-finally block in the code:

try {
  jobBody()
} finally {
  jobCleanup()
}
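
For illustration, jobBody() and jobCleanup() might look roughly like the following; the temporary view name staging_events and the work done in jobBody() are hypothetical.

import org.apache.spark.sql.SparkSession

def jobBody(): Unit = {
  val spark = SparkSession.builder().getOrCreate()
  // Hypothetical work: register a temporary view and query it
  spark.range(1000).createOrReplaceTempView("staging_events")
  spark.sql("SELECT COUNT(*) AS n FROM staging_events").show()
}

def jobCleanup(): Unit = {
  val spark = SparkSession.builder().getOrCreate()
  // Drop the temporary view even if jobBody() threw an exception
  spark.catalog.dropTempView("staging_events")
}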

You should not try to clean up using sys.addShutdownHook(jobCleanup) or the following code:

val cleanupThread = new Thread { override def run = jobCleanup() }
Runtime.getRuntime.addShutdownHook(cleanupThread)

Due to the way the lifetime of Spark containers is managed in Databricks, the shutdown hooks are not run reliably.

Configure JAR job parameters

JAR jobs are parameterized with an array of strings.

  • In the UI, you input the parameters in the Arguments text box, which are split into an array by applying POSIX shell parsing rules. For more information, see the shlex documentation.
  • In the API, you input the parameters as a standard JSON array. For more information, reference SparkJarTask. To access these parameters, inspect the String array passed into your main function.
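
As a sketch of how arguments arrive, a hypothetical Arguments value of --input "dbfs:/tmp/2021 01" --retries 3 would be split into four elements (the quoted string stays together under POSIX parsing rules) and passed to main:

object JobMain {
  def main(args: Array[String]): Unit = {
    // With the hypothetical Arguments above, args is
    // Array("--input", "dbfs:/tmp/2021 01", "--retries", "3")
    args.foreach(println)
  }
}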

View job run details

A job run details page contains job output and links to logs.


You can view job run details from the Jobs page and the Clusters page.

  • Click the Jobs icon. In the Run column of the Completed in past 60 days table, click the run number link.

  • Click the Clusters icon. In a job row in the Automated Clusters table, click the Job Run link.


Export job run results

You can export notebook run results and job run logs for all job types.

Export notebook run results

You can persist job runs by exporting their results. For notebook job runs, you can export a rendered notebook, which can later be imported into your Databricks workspace.

  1. In the job detail page, click a job run name in the Run column.

  2. Click Export to HTML.


Export job run logs

You can also export the logs for your job run. To automate this process, you can set up your job so that it automatically delivers logs to DBFS or S3 through the Job API. For more information, see the NewCluster and ClusterLogConf fields in the Job Create API call.

Edit a job

To edit a job, click the job name link in the Jobs list.

Delete a job

To delete a job, click the x in the Action column in the Jobs list.

Library dependencies

The Spark driver has certain library dependencies that cannot be overridden. These libraries take priority over any of your own libraries that conflict with them.

To get the full list of the driver library dependencies, run the following command inside a notebook attached to a cluster of the same Spark version (or the cluster with the driver you want to examine).

%sh ls /databricks/jars

Manage library dependencies

A good rule of thumb when dealing with library dependencies while creating JARs for jobs is to list Spark and Hadoop as provided dependencies. In Maven, add Spark and/or Hadoop as provided dependencies as shown below.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
  <scope>provided</scope>
</dependency>

In sbt, add Spark and Hadoop as provided dependencies as shown below.

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0" % "provided"
libraryDependencies += "org.apache.hadoop" %% "hadoop-core" % "1.2.1" % "provided"

Tip

Specify the correct Scala version for your dependencies based on the version you are running.

Advanced job options

Maximum concurrent runs

The maximum number of runs that can be run in parallel. On starting a new run, Databricks skips the run if the job has already reached its maximum number of active runs. Set this value higher than the default of 1 if you want to be able to perform multiple runs of the same job concurrently. This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs that differ by their input parameters.

Alerts

Email alerts sent in case of job failure, success, or timeout. You can set up alerts for job start, job success, and job failure (including skipped jobs), and you can provide multiple comma-separated email addresses for each alert type. You can also opt out of alerts for skipped job runs.


You can integrate these email alerts with your favorite notification tools.

Timeout

The maximum completion time for a job. If the job does not complete in this time, Databricks sets its status to “Timed Out”.

Retries

Policy that determines when and how many times failed runs are retried.


Note

If you configure both Timeout and Retries, the timeout applies to each retry.

Control access to jobs

Job access control enables job owners and administrators to grant fine-grained permissions on their jobs. With job access control, job owners can choose which other users or groups can view the results of the job. Owners can also choose who can manage runs of their job (that is, invoke Run Now and Cancel).

See Jobs Access Control for details.