JAR task for jobs
Use the JAR task to deploy Scala or Java code compiled into a JAR (Java ARchive). You must store JAR files in a location supported by your compute configurations. See Java and Scala library support.
Scala has support limitations in Unity Catalog standard access mode. See Language limitations.
Standard access mode requires an admin to add Maven coordinates and paths for JAR libraries to an allowlist. See Allowlist libraries and init scripts on compute with standard access mode (formerly shared access mode).
For details on how to deploy JAR files on a Unity Catalog-enabled cluster in standard access mode, see Tutorial: Run Scala code on serverless compute.
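For orientation, a JAR task runs the main method of the class you name in the task configuration. Below is a minimal Scala sketch of such an entry point; the package and object names are illustrative, not anything Databricks requires:

```scala
package com.example.jobs

// Entry point for a JAR task. The fully qualified name of this object,
// com.example.jobs.Main, is what you would enter in the task's Main class field.
object Main {
  def main(args: Array[String]): Unit = {
    // Task parameters configured on the job arrive here as plain strings.
    println(s"Running with ${args.length} argument(s): ${args.mkString(", ")}")
  }
}
```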
Requirements
- You must choose a compute configuration that supports your workload. When selecting serverless compute, be aware of the serverless limitations. Note that using serverless compute for JAR tasks is in Beta.
- You must upload your JAR file to a location or Maven repository compatible with your compute configuration.
To learn more about creating a JAR that is compatible with Databricks and jobs, see Create a Databricks compatible JAR.
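As one illustration of that compatibility guidance, here is a minimal build sketch, assuming you build with sbt; the name, versions, and coordinates are placeholders. Spark dependencies are marked `provided` so they are compiled against but not bundled into the JAR, because the Databricks runtime already supplies them:

```scala
// build.sbt -- minimal sketch; adjust versions to match your target runtime.
ThisBuild / scalaVersion := "2.12.18"

name := "my-jar-task"
version := "0.1.0"

// Compile against Spark, but do not package it into the JAR:
// the Databricks runtime provides Spark at execution time.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.5.0" % "provided"
```

With a layout like this, running `sbt package` produces a JAR that contains your code but no Spark classes.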
Configure a JAR task
Add a JAR task from the Tasks tab in the Jobs UI by doing the following:
- Click Add task.
- Enter a name into the Task name field.
- In the Type drop-down menu, select JAR.
- Specify the Main class. This is the fully qualified name of the class containing the main method to be executed (for example, com.example.jobs.Main in the sketch above). This class must be included in a JAR configured as a dependent library.
- Click Compute to select or configure compute. Choose either classic or serverless compute.
- Configure your environment and add dependencies:
  - For classic compute, click Add under Dependent libraries. The Add dependent library dialog appears.
    - You can select an existing JAR file or upload a new one.
    - Not all locations support JAR files, and not all compute configurations support JAR files in every supported location.
    - Each Library Source has a different flow for selecting or uploading a JAR file. See Install libraries.
  - For serverless compute, choose an environment, then click edit to configure it.
    - You must select 4 or higher for the Environment version.
    - Add your JAR file.
    - Add any other dependencies that you need. Do not include Spark dependencies, because these are already provided in the environment by Databricks Connect (see the build sketch in Requirements above). For more information on dependencies in JARs, see Create a Databricks compatible JAR.
- (Optional) Configure Parameters as a list of strings passed as arguments to the main class. See Configure task parameters. A sketch of how these parameters arrive in your code follows these steps.
- Click Save task.
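To show how the Parameters field connects to your code: the strings you configure are passed, in order, to the main method's args array. A minimal sketch, assuming parameters such as ["--input", "/Volumes/main/default/data"]; the flag name and path are illustrative:

```scala
package com.example.jobs

object Main {
  def main(args: Array[String]): Unit = {
    // Pair up flag/value arguments, e.g. ["--input", "/Volumes/main/default/data"].
    val opts: Map[String, String] =
      args.sliding(2, 2).collect { case Array(k, v) => k -> v }.toMap

    val input = opts.getOrElse("--input", sys.error("missing --input"))
    println(s"Reading from $input")
  }
}
```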