Use a JAR in Lakeflow Jobs
The Java archive (JAR) file format is based on the popular ZIP format and is used to aggregate many Java or Scala class files into a single file. The JAR task provides fast and reliable installation of Java or Scala code in your Lakeflow Jobs. This page describes how to create a job that runs a Scala application packaged in a JAR.
Requirements
- A Scala JAR that is compatible with the Databricks Runtime of your compute cluster. To create a compatible Scala JAR that prints a list of the job parameters passed to the JAR, see Create a Databricks compatible JAR. The sketches below show what such an entry point and its build definition might look like.
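A minimal sketch of such an entry point, assuming the com.example.SparkJar package and class name used later on this page:

```scala
package com.example

// Minimal sketch of a JAR entry point for a Lakeflow Jobs JAR task.
// Each element of the task's Parameters array arrives as an element of args.
object SparkJar {
  def main(args: Array[String]): Unit = {
    println(s"Received ${args.length} parameter(s):")
    args.foreach(println)
  }
}
```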
 
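To package that class with sbt, a build definition might look like the following sketch. The Scala and Spark versions here are assumptions; match them to the Databricks Runtime release notes for your cluster, and keep Spark as a provided dependency so it is not bundled into the JAR.

```scala
// build.sbt -- illustrative sketch, not a definitive configuration.
name := "spark-jar-example"
scalaVersion := "2.12.18" // assumption: match your Databricks Runtime's Scala version
libraryDependencies +=
  "org.apache.spark" %% "spark-sql" % "3.5.0" % "provided" // assumption: match your runtime's Spark version
```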
Step 1. Upload your JAR
Upload the JAR you created to a volume in your workspace. See Upload files to a Unity Catalog volume.
If you are using compute with standard access mode, an administrator must add the Maven coordinates and paths for JAR libraries to an allowlist. See Allowlist libraries and init scripts on compute with standard access mode (formerly shared access mode).
Step 2. Create a job to run the JAR
- In your workspace, click Jobs & Pipelines in the sidebar.
- Click Create, then Job. The Tasks tab displays with the empty task pane.

  Note: If the Lakeflow Jobs UI is on, click the JAR tile to configure the first task. If the JAR tile is not available, click Add another task type and search for JAR.

- Optionally, replace the name of the job, which defaults to New Job <date-time>, with your job name.
- In Task name, enter a name for the task, for example JAR_example.
- If necessary, select JAR from the Type drop-down menu.
- For Main class, enter the package and class of your JAR. If you followed the example from Create a Databricks compatible JAR, enter com.example.SparkJar.
- For Compute, select a compatible cluster.
- For Dependent libraries, click Add.
- In the Add dependent library dialog, with Volumes selected, enter the Volumes File Path of the JAR you uploaded in Step 1, or filter or browse to find it, then select it.
- Click Add.
- For Parameters, enter ["Hello", "World!"] for this example. These are passed as arguments to the main class.
- Click Create task.
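If the code packaged in your JAR uses Spark, it should reuse the SparkSession the cluster has already created rather than constructing and stopping its own. The following is a minimal sketch of that pattern; the SparkJob object name is illustrative and is not part of the example above:

```scala
package com.example

import org.apache.spark.sql.SparkSession

// Illustrative sketch: reuse the cluster's existing SparkSession.
object SparkJob {
  def main(args: Array[String]): Unit = {
    // getOrCreate() returns the session the cluster already started;
    // do not call spark.stop() in a JAR task.
    val spark = SparkSession.builder().getOrCreate()
    spark.range(5).show() // trivial Spark action to confirm the session works
  }
}
```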
 
Step 3. Run the job and view the job run details
Click Run now to run the job. To view details for the run, click View run in the Triggered run pop-up, or click the link in the Start time column for the run in the job runs view.
When the run completes, the output displays in the Output panel, including the arguments passed to the task.
Next steps
- To learn more about JAR tasks, see JAR task for jobs.
- To learn more about creating a compatible JAR, see Create a Databricks compatible JAR.
- To learn more about creating and running jobs, see Lakeflow Jobs.