Jobs access control

Note

Access control is available only in the Premium plan or above.

Enabling access control for jobs allows job owners to control who can view job results or manage runs of a job. This article describes the individual permissions and how to configure jobs access control.

Before you can use jobs access control, a Databricks workspace admin must enable it for the workspace. See Enable access control.

Job permissions

There are five permission levels for jobs: No Permissions, Can View, Can Manage Run, Is Owner, and Can Manage. Workspace admins are granted the Can Manage permission by default, and they can assign that permission to non-admin users.

Note

The job owner can be changed only by a workspace admin.

The following table lists the abilities for each permission level.

| Ability | No Permissions | Can View | Can Manage Run | Is Owner | Can Manage |
|---|---|---|---|---|---|
| View job details and settings | x | x | x | x | x |
| View results, Spark UI, logs of a job run |  | x | x | x | x |
| Run now |  |  | x | x | x |
| Cancel run |  |  | x | x | x |
| Edit job settings |  |  |  | x | x |
| Modify permissions |  |  |  | x | x |
| Delete job |  |  |  | x | x |
| Change owner |  |  |  |  |  |

Note

  • The creator of a job has Is Owner permission.

  • A job cannot have more than one owner.

  • A job cannot have a group as an owner.

  • Jobs triggered through Run Now assume the permissions of the identity in the Run as setting, not those of the user who issued Run Now. The Run as setting defaults to the job’s owner.

  • You can view notebook run results only if you have the Can View or higher permission on the job. This ensures jobs access control remains intact even if the job notebook is renamed, moved, or deleted.

  • Jobs access control applies to jobs displayed in the Databricks Jobs UI and their runs. It doesn’t apply to the following:

    • Runs triggered by modularized or linked code in notebooks; these runs use the permissions of the notebook. If a notebook workflow is created from a notebook stored in a Git provider, a fresh checkout is created, and files in that checkout have only the permissions of the user or service principal that the original run was executed as.

    • Runs submitted through the API, whose ACLs are bundled with the notebook by default. You can override the default ACLs by setting the access_control_list parameter in the request body, as shown in the sketch after this list.
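
The following is a minimal sketch of a Runs Submit request that overrides the default ACLs. The workspace URL, token, cluster ID, run name, and principals are placeholders, and the request shape assumes Jobs API 2.1:

curl --request POST "https://<databricks-instance>/api/2.1/jobs/runs/submit" \
  --header "Authorization: Bearer <personal-access-token>" \
  --data '{
    "run_name": "ad-hoc-featurization",
    "tasks": [
      {
        "task_key": "main",
        "existing_cluster_id": "<cluster-id>",
        "notebook_task": { "notebook_path": "/Production/MakeFeatures" }
      }
    ],
    "access_control_list": [
      { "group_name": "Engineering", "permission_level": "CAN_VIEW" },
      { "user_name": "someone@example.com", "permission_level": "CAN_MANAGE" }
    ]
  }'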

Configure job permissions

Note

This section describes how to manage permissions using the UI. You can also use the Permissions API; see the sketch after the steps below.

To configure a job’s permissions, you must have the Can Manage or Is Owner permission on the job.

  1. Go to the details page for a job.

  2. Click the Edit permissions button in the Job details panel.

  3. In the pop-up dialog box, assign job permissions via the drop-down menu beside a user’s name.

  4. Click Save Changes.
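
To do the same programmatically, the following is a minimal sketch using the Permissions API; the workspace URL, token, job ID, and user name are placeholders:

curl --request PATCH "https://<databricks-instance>/api/2.0/permissions/jobs/<job-id>" \
  --header "Authorization: Bearer <personal-access-token>" \
  --data '{
    "access_control_list": [
      { "user_name": "someone@example.com", "permission_level": "CAN_MANAGE_RUN" }
    ]
  }'

PATCH adds or updates the listed grants without touching other entries; a PUT to the same endpoint replaces all direct permissions on the job.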

Terraform integration

You can manage permissions in a fully automated setup using the Databricks Terraform provider and the databricks_permissions resource:

resource "databricks_group" "auto" {
  display_name = "Automation"
}

resource "databricks_group" "eng" {
  display_name = "Engineering"
}

data "databricks_spark_version" "latest" {}

data "databricks_node_type" "smallest" {
  local_disk = true
}

resource "databricks_job" "this" {
  name                = "Featurization"
  max_concurrent_runs = 1

  new_cluster {
    num_workers   = 300
    spark_version = data.databricks_spark_version.latest.id
    node_type_id  = data.databricks_node_type.smallest.id
  }

  notebook_task {
    notebook_path = "/Production/MakeFeatures"
  }
}

resource "databricks_permissions" "job_usage" {
  job_id = databricks_job.this.id

  access_control {
    group_name       = "users"
    permission_level = "CAN_VIEW"
  }

  access_control {
    group_name       = databricks_group.auto.display_name
    permission_level = "CAN_MANAGE_RUN"
  }

  access_control {
    group_name       = databricks_group.eng.display_name
    permission_level = "CAN_MANAGE"
  }
}