Jobs access control
Note
Access control is available only in the Premium plan and above.
Enabling access control for jobs allows job owners to control who can view job results or manage runs of a job. This article describes the individual permissions and how to configure jobs access control.
Before you can use jobs access control, a Databricks admin must enable it for the workspace. See Enable jobs access control for your workspace.
Job permissions
There are five permission levels for jobs: No Permissions, Can View, Can Manage Run, Is Owner, and Can Manage. Admins are granted the Can Manage permission by default, and they can assign that permission to non-admin users.
Note
The job owner can be changed only by an admin.
The table lists the abilities for each permission.
Ability | No Permissions | Can View | Can Manage Run | Is Owner | Can Manage |
---|---|---|---|---|---|
View job details and settings | x | x | x | x | x |
View results, Spark UI, logs of a job run | | x | x | x | x |
Run now | | | x | x | x |
Cancel run | | | x | x | x |
Edit job settings | | | | x | x |
Modify permissions | | | | x | x |
Delete job | | | | x | x |
Change owner | | | | | |
Note
The creator of a job has Is Owner permission.
A job cannot have more than one owner.
A job cannot have a group as an owner.
Jobs triggered through Run Now assume the permissions of the job owner, not those of the user who issued Run Now. For example, even if job A is configured to run on an existing cluster accessible only to the job owner (user A), a user (user B) with Can Manage Run permission can start a new run of the job.
You can view notebook run results only if you have the Can View or higher permission on the job. This keeps jobs access control intact even if the job notebook is renamed, moved, or deleted.
Jobs access control applies to jobs displayed in the Databricks Jobs UI and their runs. It doesn’t apply to:
- Runs spawned by modularized or linked code in notebooks, which use the permissions of the notebook. If a notebook workflow is created from a notebook stored in Git, a fresh checkout is created, and files in that checkout have only the permissions of the user that the original run was executed as.
- Runs submitted through the API, whose ACLs are bundled with the notebooks by default. However, the default ACLs can be overridden by setting the access_control_list parameter in the request body, as shown in the sketch below.
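For illustration, here is a minimal sketch of overriding the default ACLs when submitting a one-time run. It assumes a hypothetical workspace URL, personal access token, cluster ID, and user email, and follows the Jobs API runs submit request shape; verify the endpoint version, field names, and permission-level strings against the Jobs API reference for your workspace.

```python
import requests

# Hypothetical values; replace with your own workspace URL, token, and cluster ID.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

# Submit a one-time run and override the default ACLs with an explicit
# access_control_list. The schema below follows the Jobs API 2.1 runs submit
# request; check the API reference before relying on exact field names.
payload = {
    "run_name": "ad-hoc featurization run",
    "tasks": [
        {
            "task_key": "make_features",
            "existing_cluster_id": "<cluster-id>",
            "notebook_task": {"notebook_path": "/Production/MakeFeatures"},
        }
    ],
    "access_control_list": [
        {"group_name": "users", "permission_level": "CAN_VIEW"},
        {"user_name": "someone@example.com", "permission_level": "CAN_MANAGE_RUN"},
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # response contains the run_id of the submitted run
```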
Note
Jobs access control was introduced in the September 2017 release of Databricks. Customers with cluster access control enabled automatically have jobs access control enabled.
For jobs that existed before September 2017, jobs access control changes behavior for customers who had cluster access control enabled. Previously, access control settings on the job notebook were coupled with access control of the job run results; that is, a user could view a notebook job's run results if the user could view the job notebook. Databricks initializes job access control settings to be compatible with the previous access control settings as follows:
- Job creators are granted the Is Owner permission and administrators are granted the Can Manage permission.
- Databricks grants users who can view the job notebook the Can View permission on the job. This preserves the view access control on notebook jobs.
Can View permission applies to all historical runs with regard to notebook results. However, it doesn't apply to clusters that the job created before jobs access control was available. For example, suppose a job has a completed run (say run 1) that created a cluster C1 and ran notebook N1. Later the job was set to run notebook N2. Users with Can View permission can view run 1 but cannot view the Spark UI or driver logs of cluster C1. You can use cluster access control to control access to C1.
Configure job permissions
Note
This section describes how to manage permissions using the UI. You can also use the Permissions API 2.0, as sketched after the steps below.
You must have Can Manage or Is Owner permission.
1. Go to the details page for a job.
2. Click the Edit permissions button in the Job details panel.
3. In the pop-up dialog box, assign job permissions via the drop-down menu beside a user's name.
4. Click Save Changes.
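As a rough sketch of the Permissions API 2.0 route mentioned in the note above, the following assumes a hypothetical workspace URL, personal access token, job ID, and user email, and uses the jobs permissions endpoint; check the Permissions API reference for the exact request shape and supported permission levels.

```python
import requests

# Hypothetical values; replace with your own workspace URL, token, and job ID.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
JOB_ID = 123

# Grant or update permissions on the job through the Permissions API 2.0.
# PATCH adds or updates the listed entries; PUT replaces the entire ACL.
acl = {
    "access_control_list": [
        {"group_name": "users", "permission_level": "CAN_VIEW"},
        {"user_name": "someone@example.com", "permission_level": "CAN_MANAGE_RUN"},
    ]
}

resp = requests.patch(
    f"{HOST}/api/2.0/permissions/jobs/{JOB_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=acl,
)
resp.raise_for_status()
print(resp.json())  # the resulting access control list for the job
```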
Terraform integration
You can manage permissions in a fully automated setup using the Databricks Terraform provider and the databricks_permissions resource:
resource "databricks_group" "auto" {
display_name = "Automation"
}
resource "databricks_group" "eng" {
display_name = "Engineering"
}
data "databricks_spark_version" "latest" {}
data "databricks_node_type" "smallest" {
local_disk = true
}
resource "databricks_job" "this" {
name = "Featurization"
max_concurrent_runs = 1
new_cluster {
num_workers = 300
spark_version = data.databricks_spark_version.latest.id
node_type_id = data.databricks_node_type.smallest.id
}
notebook_task {
notebook_path = "/Production/MakeFeatures"
}
}
resource "databricks_permissions" "job_usage" {
job_id = databricks_job.this.id
access_control {
group_name = "users"
permission_level = "CAN_VIEW"
}
access_control {
group_name = databricks_group.auto.display_name
permission_level = "CAN_MANAGE_RUN"
}
access_control {
group_name = databricks_group.eng.display_name
permission_level = "CAN_MANAGE"
}
}