A job is a way of running a notebook or JAR either immediately or on a scheduled basis. The other way to run a notebook is interactively in the notebook UI.
You can create and run jobs using the UI, the CLI, or by invoking the Jobs API. You can monitor job run results in the UI, using the CLI, by querying the API, and through email alerts. This topic focuses on performing job tasks using the UI. For the other methods, see Jobs CLI and Jobs API.
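For example, the following is a minimal Python sketch of triggering an existing job and polling for its result through the Jobs API; the workspace URL, token, and job ID are placeholders you would replace with your own values.

import time
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"        # placeholder workspace URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

# Trigger a run of an existing job (POST /api/2.0/jobs/run-now).
resp = requests.post(f"{HOST}/api/2.0/jobs/run-now",
                     headers=HEADERS, json={"job_id": 123})
run_id = resp.json()["run_id"]

# Poll the run until it finishes (GET /api/2.0/jobs/runs/get).
while True:
    run = requests.get(f"{HOST}/api/2.0/jobs/runs/get",
                       headers=HEADERS, params={"run_id": run_id}).json()
    state = run["state"]
    if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        print(state.get("result_state"), state.get("state_message"))
        break
    time.sleep(30)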
- The number of jobs is limited to 1000.
- The number of jobs a workspace can create in an hour is limited to 1000 (includes “run now”). This limit also applies to jobs created by the REST API and by notebook workflows.
- The number of concurrently active runs a workspace can have is limited to 150.
Click the Jobs icon in the sidebar. The Jobs page displays, listing all defined jobs, the cluster definition, the schedule (if any), and the result of the last run.
In the Jobs list, you can filter jobs:
- Using keywords.
- Selecting only jobs you own or jobs you have access to. Access to this filter depends on Jobs Access Control being enabled.
You can also click any column header to sort the list of jobs (descending or ascending) by that column. By default, the page is sorted by job name in ascending order.
Click + Create Job. The job detail page displays.
Enter a name for the job in the text field.
Specify the job properties:
Task: Click Select Notebook, Set JAR, or Configure spark-submit.
The Parameters and Dependent Libraries fields display.
Parameters: Click Edit. The type of the parameters depends on the task type:
Notebook: Key-value pairs or a JSON string representing key-value pairs. Such parameters set the value of widgets.
JAR job: Main class and arguments.
spark-submit: Main class, path to the library JAR, and arguments.
Dependent Libraries: Optionally click Add. The dependent libraries are automatically attached to the cluster on launch. Follow the recommendations in Library dependencies for specifying dependencies.
Cluster: Click Edit.
- In the Cluster Type drop-down, choose New Automated Cluster or Existing Interactive Cluster.
- New Automated Cluster
- We recommend that you run on a new cluster for production-level jobs or jobs that are important to complete.
- You can run spark-submit jobs only on new clusters.
- When you run a job on a new cluster, the job is treated as a data engineering (aka automated) workload subject to the automated workload pricing.
- Existing Interactive Cluster
- When you run a job on an existing cluster, the job is treated as a data analytics (aka interactive) workload subject to interactive workload pricing.
- If you select a terminated existing cluster and the job owner has Can Restart permission, Databricks starts the cluster when the job is scheduled to run.
- Existing clusters work best for tasks such as updating dashboards at regular intervals.
- Complete the cluster specification.
New Automated Cluster - complete the cluster configuration.
To decrease new cluster start time, select a pool in the cluster configuration.
Existing Interactive Cluster - select the cluster in the Select Cluster drop-down.
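For reference, the same definition can be created programmatically. The following is a minimal Python sketch of the equivalent Jobs API jobs/create call; the workspace URL, token, notebook path, Spark version, and node type are placeholder values, not requirements of this topic.

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"        # placeholder workspace URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

job_spec = {
    "name": "Nightly ETL",                       # job name shown in the Jobs list
    "new_cluster": {                             # New Automated Cluster
        "spark_version": "5.5.x-scala2.11",      # example version; pick one your workspace supports
        "node_type_id": "i3.xlarge",             # example node type
        "num_workers": 2,
    },
    "libraries": [{"jar": "dbfs:/path/to/dependency.jar"}],   # dependent libraries
    "notebook_task": {
        "notebook_path": "/Users/someone@example.com/etl",
        "base_parameters": {"date": "2019-01-01"},             # sets notebook widget values
    },
}

resp = requests.post(f"{HOST}/api/2.0/jobs/create", headers=HEADERS, json=job_spec)
print(resp.json())   # e.g. {"job_id": 123}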
On the Jobs page, click a job name in the Name column. The job details page shows configuration parameters, active runs, and completed runs.
Databricks maintains a history of your job runs for up to 60 days. If you need to preserve job runs, we recommend that you export job run results before they expire. For more information, see Export job run results.
On the job run page, you can view the standard error, standard output, and log4j output for a job run by clicking the Logs link in the Spark column.
You can run a job on a schedule or immediately.
To define a schedule for the job:
Click Edit next to Schedule.
The Schedule Job dialog displays.
Specify the schedule granularity, starting time, and time zone.
- You can choose a time zone that observes daylight saving time or UTC. If you select a zone that observes daylight saving time, an hourly job may be skipped, or may appear not to fire, for an hour or two when daylight saving time begins or ends. If you want jobs to run at every hour (absolute time), choose UTC.
- The job scheduler, like the Spark batch interface, is not intended for low latency jobs. Due to network or cloud issues, job runs may occasionally be delayed up to several minutes. In these situations, scheduled jobs will run immediately upon service availability.
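In the Jobs API, the same schedule is expressed as a schedule block on the job settings. A minimal sketch; the cron expression and time zone are examples only.

# Quartz cron syntax: second minute hour day-of-month month day-of-week
schedule = {
    "quartz_cron_expression": "0 0 * * * ?",  # top of every hour
    "timezone_id": "UTC",                     # UTC runs at an absolute time, unaffected by DST
}
# Pass this block as the "schedule" field of jobs/create, or of jobs/reset
# when replacing the settings of an existing job.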
To run the job immediately, in the Active runs table click Run Now.
Click Run Now to do a test run of your notebook or JAR when you’ve finished configuring your job. If your notebook fails, you can edit it and the job will automatically run the new version of the notebook.
You can use Run Now with Different Parameters to re-run a job specifying different parameters or different values for existing parameters.
In the Active runs table, click Run Now with Different Parameters. The dialog varies depending on whether you are running a notebook job or a spark-submit job.
Notebook - A dialog displays that lets you set key-value pairs or a JSON object. You can use this dialog to set the values of widgets.
spark-submit - A dialog containing the list of parameters displays. For example, you could run the SparkPi estimator described in Create a job with 100 instead of the default 10 partitions.
Specify the parameters. The provided parameters are merged with the default parameters for the triggered run. If you delete keys, the default parameters are used.
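The same override is available programmatically. A minimal Python sketch using run-now with notebook_params, assuming a placeholder job ID and token; inside the notebook, each parameter is read back through its widget.

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"        # placeholder workspace URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

# Trigger a run, overriding the job's default notebook parameters.
# Keys you do not list here keep their default values.
requests.post(f"{HOST}/api/2.0/jobs/run-now", headers=HEADERS,
              json={"job_id": 123, "notebook_params": {"date": "2019-02-01"}})

# Inside the notebook, each parameter is read through its widget, for example:
#   date = dbutils.widgets.get("date")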
All output cells are subject to an 8MB size limit. If the output of a cell has a larger size, the rest of the run will be canceled and the run will be marked as failed. In that case, some of the content output from other cells may also be missing. If you need help finding the cell that is beyond the limit, run the notebook against an interactive cluster and use this notebook autosave technique.
There are some caveats you need to be aware of when you run a JAR job.
JAR jobs are parameterized with an array of strings. In the UI, you input the parameters in the Arguments text box, where they are split into an array by applying POSIX shell parsing rules. In the API, you input the parameters as a standard JSON array; for more information, see SparkJarTask. To access these parameters, inspect the String array passed into your main function.
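For example, an Arguments value of --input "path with spaces" in the UI is split into the two-element array shown below; when you trigger the job through the API, you pass that array directly as jar_params. A minimal Python sketch, assuming a placeholder job ID and token.

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"        # placeholder workspace URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

# The UI splits the Arguments text box with POSIX shell rules;
# in the API you provide the resulting string array yourself.
requests.post(f"{HOST}/api/2.0/jobs/run-now", headers=HEADERS,
              json={"job_id": 123,
                    "jar_params": ["--input", "path with spaces"]})
# Inside the JAR, these strings arrive as the args array of your main function.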
A job run details page contains job output and links to logs:
You can view job run details from the Jobs page and the Clusters page.
Click the Jobs icon. In the Run column of the Completed in past 60 days table, click the run number link.
Click the Clusters icon. In a job row in the Automated Clusters table, click the Job Run link.
You can export notebook run results and job run logs for all job types.
In the job detail page, click a job run name in the Run column.
Click Export to HTML.
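To archive many runs, or to preserve them before the 60-day retention period expires, you can script the export instead. A minimal Python sketch using the Jobs API runs/export endpoint, assuming a placeholder run ID and token.

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"        # placeholder workspace URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

# Export the notebook views of a run as HTML (GET /api/2.0/jobs/runs/export).
resp = requests.get(f"{HOST}/api/2.0/jobs/runs/export",
                    headers=HEADERS, params={"run_id": 456})
for view in resp.json().get("views", []):
    with open(f"{view['name']}.html", "w") as f:
        f.write(view["content"])   # the rendered HTML for this view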
To edit a job, click the job name link in the Jobs list.
To delete a job, click the x in the Action column in the Jobs list.
The Spark driver for Databricks has certain library dependencies that cannot be overridden. These libraries will take priority over any of your own libraries that conflict with them.
To get the full list of the driver library dependencies, run the following command inside a notebook attached to a cluster of the same Spark version (or the cluster with the driver you want to examine).
%sh ls /databricks/jars
A good rule of thumb when dealing with library dependencies while creating JARs for jobs is to list Spark and Hadoop as provided dependencies. In Maven, add Spark and Hadoop as provided dependencies, as shown below.
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
  <scope>provided</scope>
</dependency>
In sbt, add Spark and Hadoop as provided dependencies, as shown below.
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0" % "provided"
libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "1.2.1" % "provided"
Specify the correct Scala version for your dependencies based on the version you are running.
The other options that you can specify for a job include:
- Email alerts sent in case of job failure, success, or timeout. See Job alerts.
- Timeout. The maximum completion time for a job. If the job does not complete in this time, Databricks sets its status to “Timed Out”.
- Retries. A policy that determines when and how many times failed runs are retried. If you configure both Timeout and Retries, the timeout applies to each retry.
- Maximum concurrent runs. The maximum number of runs that can be run in parallel. On starting a new run, Databricks skips the run if the job has already reached its maximum number of active runs. Set this value higher than the default of 1 if you want to perform multiple runs of the same job concurrently. This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs that differ by their input parameters.
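These options correspond to fields on the job settings in the Jobs API. A minimal sketch of how they might be expressed when creating a job programmatically; the values are examples only.

# Advanced options expressed as Jobs API job settings (pass alongside the
# fields shown in the create-job sketch above).
advanced_options = {
    "timeout_seconds": 3600,          # Timeout: maximum completion time, in seconds
    "max_retries": 3,                 # Retries: how many times a failed run is retried
    "retry_on_timeout": False,        # whether a timed-out run is also retried
    "max_concurrent_runs": 2,         # Maximum concurrent runs
    "email_notifications": {          # Job alerts (see below)
        "on_start": ["ops@example.com"],
        "on_failure": ["ops@example.com", "oncall@example.com"],
        "no_alert_for_skipped_runs": True,
    },
}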
You can set up email alerts for job runs. On the job detail page, click Advanced and click Edit next to Alerts. You can send alerts upon job start, job success, and job failure (including skipped jobs), providing multiple comma-separated email addresses for each alert type. You can also opt out of alerts for skipped job runs.
Integrate these email alerts with your favorite notification tools.
Job access control enables job owners and administrators to grant fine-grained permissions on their jobs. With job access controls, job owners can choose which other users or groups can view the results of the job. Owners can also choose who can manage runs of their job (that is, invoke Run Now and Cancel).
See Jobs Access Control for details.