Manage cluster policies

A cluster policy is a tool used to limit a user or group’s cluster creation permissions based on a set of policy rules.

Cluster policies let you:

  • Limit users to creating clusters with prescribed settings.

  • Limit users to creating a certain number of clusters.

  • Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).

  • Control cost by limiting the per-cluster maximum cost (by setting limits on attributes whose values contribute to the hourly price).

This article focuses on managing policies using the UI. You can also use the Cluster Policies API and the Permissions API to manage policies.

Personal Compute policy

Personal Compute is a Databricks-managed cluster policy available, by default, on all Databricks workspaces. Granting users access to this policy enables them to create single-machine compute resources in Databricks for their individual use.

Admins can manage access and customize the policy rules to fit their workspace’s needs.

Requirements

Cluster policies require the Premium plan or above.

Enforcement rules

You can express the following types of constraints in policy rules:

  • Fixed value with disabled control element

  • Fixed value with control hidden in the UI (value is visible in the JSON view)

  • Attribute value limited to a set of values (either allow list or block list)

  • Attribute value matching a given regex

  • Numeric attribute limited to a certain range

  • Default value used by the UI with control enabled

Managed cluster attributes

Cluster policies support all cluster attributes controlled with the Clusters API. The specific types of restrictions supported may vary per field, based on the field's type and its relation to the cluster creation form UI elements.

In addition, cluster policies support the following synthetic attributes:

  • A “max DBU-hour” metric, which is the maximum DBUs a cluster can use on an hourly basis. This metric is a direct way to control cost at the individual cluster level.

  • A limit on the source that creates the cluster: Jobs service (job clusters), Clusters UI, Clusters REST API (all-purpose clusters).
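For example, a policy might combine both synthetic attributes to cap hourly DBU cost and limit the policy to job clusters. The values below are illustrative; the dbus_per_hour and cluster_type paths are described under Cluster policy virtual attribute paths later in this article.

{
  "dbus_per_hour": { "type": "range", "maxValue": 10 },
  "cluster_type": { "type": "fixed", "value": "job" }
}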

Unmanaged cluster attributes

The following cluster attributes cannot be restricted in a cluster policy:

  • Cluster permissions (ACLs), which are handled by a separate API.

Define a cluster policy

You define a cluster policy in a JSON policy definition, which you add when you create the cluster policy.
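For example, a minimal illustrative definition might fix and hide the auto-termination timeout and cap the number of workers:

{
  "autotermination_minutes": { "type": "fixed", "value": 60, "hidden": true },
  "num_workers": { "type": "range", "maxValue": 10, "defaultValue": 2 }
}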

Create a cluster policy

You create a cluster policy using the cluster policies UI or the Cluster Policies API. To create a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Click Create Cluster Policy.

  4. Name the policy. Policy names are case insensitive.

  5. Optionally, select the policy family from the Family dropdown. This determines the template from which you build the policy. See Cluster policy families.

  6. Enter a Description of the policy. This helps others know the purpose of the policy.

  7. In the Definition tab, paste a policy definition.

  8. Click Create.

Clone an existing cluster policy

You can create a cluster policy by cloning an existing policy. To clone a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to clone.

  4. Click Clone.

  5. In the next page, all fields are pre-populated with values from the existing policy. Change the values of the fields that you want to modify, then click Create.

Manage cluster policy permissions using the UI

Workspace admins have access to all policies.

When creating a cluster, non-admins can only select policies for which they have been granted permission. If a user has cluster create permission, then they can also select the Unrestricted policy, allowing them to create fully-configurable clusters.

Note

If the user doesn’t have access to any policies, the policy dropdown does not display.

Add a cluster policy permission

To add a cluster policy permission using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to update.

  4. Click the Permissions tab.

  5. In the Name column, select a principal.

  6. In the Permission column, select a permission.

  7. Click Add.

Delete a cluster policy permission

To delete a cluster policy permission using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to update.

  4. Click the Permissions tab.

  5. Click the delete icon in the permission row.

Restrict the number of clusters per user using the UI

Policy permissions allow you to set a max number of clusters per user. This determines how many clusters a user can create using that policy. If the user exceeds the limit, the operation fails.

To restrict the number of clusters a user can create using a policy, use the Max clusters per user setting under the Permissions tab in the cluster policies UI.

Note

Databricks doesn’t proactively terminate clusters to maintain the limit. If a user has three clusters running with the policy and the workspace admin reduces the limit to one, the three clusters will continue to run. Extra clusters must be manually terminated to comply with the limit.

Edit a cluster policy using the UI

You edit a cluster policy using the cluster policies UI or the Cluster Policies API. To edit a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to edit.

  4. Click Edit.

  5. In the Definition tab, edit the policy definition.

  6. Click Update.

Delete a cluster policy using the UI

You delete a cluster policy using the cluster policies UI or the Cluster Policies API. To delete a cluster policy using the UI:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Select the policy you want to delete.

  4. Click Delete.

  5. Click Delete to confirm.

Cluster policy families

When you create a cluster policy, you can choose to use a policy family. Policy families provide you with pre-populated policy rules for common compute use cases.

When using a policy family, the rules for your policy are inherited from the policy family. After selecting a policy family, you can create the policy as-is, or choose to add rules or override the given rules.

Create a custom policy using a policy family

To customize a policy using a policy family:

  1. Click Compute in the sidebar.

  2. Click the Policies tab.

  3. Click Create Cluster Policy.

  4. Name the policy. Policy names are case insensitive.

  5. Select the policy family from the Family dropdown.

  6. Under the Definitions tab, click Edit.

  7. A modal appears where you can override policy definitions. In the Overrides section, add the updated definitions, then click OK.
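For example, an override entered in the Overrides section might tighten the worker range inherited from the family. The values here are illustrative:

{
  "autoscale.max_workers": { "type": "range", "maxValue": 8, "defaultValue": 4 }
}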

Cluster policy definitions

A cluster policy definition is a collection of individual policy definitions expressed in JSON.

Policy definitions

A policy definition is a map between a path string defining an attribute and a limit type. There can only be one limitation per attribute. A path is specific to the type of resource and reflects the resource creation API attribute name. If the resource creation uses nested attributes, the path concatenates the nested attribute names using dots. Attributes that aren’t defined in the policy definition are unlimited when you create a cluster using the policy.

interface Policy {
  [path: string]: PolicyElement
}
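For example, the nested availability attribute under aws_attributes is referenced with a dotted path (the value shown is illustrative):

{
  "aws_attributes.availability": { "type": "fixed", "value": "ON_DEMAND" }
}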

Policy elements

A policy element specifies one of the supported limit types on a given attribute and optionally a default value. You can specify a default value without defining a limit on the attribute in the policy.

type PolicyElement = FixedPolicy | ForbiddenPolicy | (LimitingPolicyBase & LimitingPolicy);
type LimitingPolicy = AllowlistPolicy | BlocklistPolicy | RegexPolicy | RangePolicy | UnlimitedPolicy;

This section describes the policy types:

Fixed policy

Limits the value to the specified value. For attribute values other than numeric and boolean, the value must be represented by, or convertible to, a string.

Optionally the attribute can be hidden in the UI when the hidden flag is present and set to true. A fixed policy cannot specify a defaultValue attribute since the value attribute already determines the default value.

interface FixedPolicy {
    type: "fixed";
    value: string | number | boolean;
    hidden?: boolean;
}
Example
{
  "spark_version": { "type": "fixed", "value": "auto:latest-ml", "hidden": true }
}

Forbidden policy

For an optional attribute, prevents use of the attribute.

interface ForbiddenPolicy {
    type: "forbidden";
}
Example

This policy forbids attaching pools to the cluster for worker nodes. Pools are also forbidden for the driver node, because driver_instance_pool_id inherits the policy.

{
  "instance_pool_id": { "type": "forbidden" }
}

Limiting policies: common fields

In a limiting policy you can specify two additional fields:

  • defaultValue - the value that populates the cluster creation form in the UI.

  • isOptional - a limiting policy on an attribute makes it required. To make the attribute optional, set the isOptional field to true.

interface LimitingPolicyBase {
    defaultValue?: string | number | boolean;
    isOptional?: boolean;
}

Note

Default values don’t automatically get applied to clusters created with the Clusters API. To apply default values when creating a cluster with the API, add the parameter apply_policy_default_values to the cluster definition and set it to true. This is not needed for fixed policies.
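For example, a partial cluster specification sent to the Clusters API might look like the following sketch. The policy ID is a placeholder, and policy_id is assumed to be the field that associates the cluster with the policy:

{
  "cluster_name": "team-cluster",
  "policy_id": "<policy-id>",
  "apply_policy_default_values": true
}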

Example
{
  "instance_pool_id": { "type": "unlimited", "isOptional": true, "defaultValue": "id1" }
}

This example policy specifies the default value id1 for the pool for worker nodes, but makes it optional. When creating the cluster, you can select a different pool or choose not to use one. If driver_instance_pool_id isn’t defined in the policy or when creating the cluster, the same pool is used for worker nodes and the driver node.

Allow list policy

A list of allowed values.

interface AllowlistPolicy {
  type: "allowlist";
  values: (string | number | boolean)[];
}
Example
{
  "spark_version":  { "type": "allowlist", "values": [ "11.3.x-scala2.12", "10.4.x-scala2.12" ] }
}

Block list policy

The list of disallowed values. Since the values must be exact matches, this policy may not work as expected when the attribute is lenient in how the value is represented (for example allowing leading and trailing spaces).

interface BlocklistPolicy {
  type: "blocklist";
  values: (string | number | boolean)[];
}
Example
{
  "spark_version":  { "type": "blocklist", "values": [ "7.3.x-scala2.12" ] }
}

Regex policy

Limits the value to values matching the regex. For safety, the regex is always anchored to the beginning and end of the string value when matching.

interface RegexPolicy {
  type: "regex";
  pattern: string;
}
Example
{
  "spark_version":  { "type": "regex", "pattern": "5\\.[3456].*" }
}

Range policy

Limits the value to the range specified by the minValue and maxValue attributes. The value must be a decimal number. The numeric limits must be representable as a double floating point value. To indicate lack of a specific limit, you can omit one of minValue, maxValue.

interface RangePolicy {
  type: "range";
  minValue?: number;
  maxValue?: number;
}
Example
{
  "num_workers":  { "type": "range", "maxValue": 10 }
}

Unlimited policy

Does not define value limits. You can use this policy type to make attributes required or to set the default value in the UI.

interface UnlimitedPolicy {
  type: "unlimited";
}
Example

To require adding the COST_BUCKET tag:

{
  "custom_tags.COST_BUCKET":  { "type": "unlimited" }
}

To set a default value for a Spark configuration variable, but also allow omitting (removing) it:

{
  "spark_conf.spark.my.conf":  { "type": "unlimited", "isOptional": true, "defaultValue": "my_value" }
}

Cluster policy attribute paths

The following list describes the supported cluster policy attribute paths, along with each attribute's type and behavior.

  • autoscale.max_workers (optional number) - When hidden, removes the maximum worker number field from the UI.

  • autoscale.min_workers (optional number) - When hidden, removes the minimum worker number field from the UI.

  • autotermination_minutes (number) - A value of 0 represents no auto termination. When hidden, removes the auto termination checkbox and value input from the UI.

  • aws_attributes.availability (string) - Controls AWS availability (SPOT, ON_DEMAND, or SPOT_WITH_FALLBACK).

  • aws_attributes.ebs_volume_count (number) - The number of AWS EBS volumes.

  • aws_attributes.ebs_volume_size (number) - The size (in GiB) of AWS EBS volumes.

  • aws_attributes.ebs_volume_type (string) - The type of AWS EBS volumes.

  • aws_attributes.first_on_demand (number) - Controls the number of nodes to put on on-demand instances.

  • aws_attributes.instance_profile_arn (string) - Controls the AWS instance profile.

  • aws_attributes.spot_bid_price_percent (number) - Controls the maximum price for AWS spot instances.

  • aws_attributes.zone_id (string) - Controls the AWS zone ID.

  • cluster_log_conf.path (string) - The destination URL of the log files.

  • cluster_log_conf.region (string) - The region for the S3 location.

  • cluster_log_conf.type (S3, DBFS, or NONE) - The type of log destination.

  • cluster_name (string) - The cluster name.

  • custom_tags.* (string) - Controls specific tag values by appending the tag name, for example: custom_tags.<mytag>.

  • data_security_mode (string) - Sets the security features of the cluster. Unity Catalog requires SINGLE_USER or USER_ISOLATION mode. Passthrough clusters require LEGACY_PASSTHROUGH, and Table ACL clusters require LEGACY_TABLE_ACL. The default is NONE, with no security features enabled.

  • docker_image.basic_auth.password (string) - The password for the Databricks Container Services image basic authentication.

  • docker_image.basic_auth.username (string) - The user name for the Databricks Container Services image basic authentication.

  • docker_image.url (string) - Controls the Databricks Container Services image URL. When hidden, removes the Databricks Container Services section from the UI.

  • driver_node_type_id (optional string) - When hidden, removes the driver node type selection from the UI.

  • enable_elastic_disk (boolean) - When hidden, removes the Enable autoscaling local storage checkbox from the UI.

  • enable_local_disk_encryption (boolean) - Set to true to enable, or false to disable, encrypting disks that are locally attached to the cluster (as specified through the API).

  • init_scripts.*.workspace.destination, init_scripts.*.volumes.destination, init_scripts.*.s3.destination, init_scripts.*.dbfs.destination, init_scripts.*.file.destination, init_scripts.*.s3.region (string) - * refers to the index of the init script in the attribute array. See Array attributes.

  • instance_pool_id (string) - Controls the pool used by worker nodes if driver_instance_pool_id is also defined, or for all cluster nodes otherwise. If you use pools for worker nodes, you must also use pools for the driver node. When hidden, removes pool selection from the UI.

  • driver_instance_pool_id (string) - If specified, configures a different pool for the driver node than for worker nodes. If not specified, inherits instance_pool_id. If you use pools for worker nodes, you must also use pools for the driver node. When hidden, removes driver pool selection from the UI.

  • node_type_id (string) - When hidden, removes the worker node type selection from the UI.

  • num_workers (optional number) - When hidden, removes the worker number specification from the UI.

  • runtime_engine (string) - Determines whether the cluster uses Photon. Possible values are PHOTON or STANDARD.

  • single_user_name (string) - The user name for credential passthrough single user access.

  • spark_conf.* (optional string) - Controls specific configuration values by appending the configuration key name, for example: spark_conf.spark.executor.memory.

  • spark_env_vars.* (optional string) - Controls specific Spark environment variable values by appending the environment variable, for example: spark_env_vars.<environment variable name>.

  • spark_version (string) - The Spark image version name (as specified through the API).

  • ssh_public_keys.* (string) - * refers to the index of the public key in the attribute array. See Array attributes.

Cluster policy virtual attribute paths

  • dbus_per_hour (number) - Calculated attribute representing the (maximum, in the case of autoscaling clusters) DBU cost of the cluster, including the driver node. For use with range limitation.

  • cluster_type (string) - Represents the type of cluster that can be created:

    • all-purpose for Databricks all-purpose clusters

    • job for job clusters created by the job scheduler

    • dlt for clusters created for Delta Live Tables pipelines

    Allow or block specified types of clusters to be created from the policy. If the all-purpose value is not allowed, the policy is not shown in the all-purpose cluster creation form. If the job value is not allowed, the policy is not shown in the job new cluster form.
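For example, to allow a policy to be used only for all-purpose and job clusters, and not for Delta Live Tables pipelines, an allowlist rule might look like the following sketch:

{
  "cluster_type": { "type": "allowlist", "values": ["all-purpose", "job"] }
}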

Array attributes

You can specify policies for array attributes in two ways:

  • Generic limitations for all array elements. These limitations use the * wildcard symbol in the policy path.

  • Specific limitations for an array element at a specific index. These limitations use a number in the path.

For example, for the array attribute ssh_public_keys, the generic path is ssh_public_keys.* and the specific paths have the form ssh_public_keys.<n>, where <n> is an integer index in the array (starting with 0). You can combine generic and specific limitations, in which case the generic limitation applies to each array element that does not have a specific limitation. In each case only one policy limitation will apply.

Typical use cases for the array policies are:

  • Require inclusion of specific entries. For example:

    {
      "ssh_public_keys.0": {
        "type": "fixed",
        "value": "<required-key-1>"
      },
      "ssh_public_keys.1": {
        "type": "fixed",
        "value": "<required-key-2>"
      }
    }
    

    You cannot require specific keys without specifying the order.

  • Require a fixed value of the entire list. For example:

    {
      "ssh_public_keys.0": {
        "type": "fixed",
        "value": "<required-key-1>"
      },
      "ssh_public_keys.*": {
        "type": "forbidden"
      }
    }

  • Disallow the use altogether.

    {
      "ssh_public_keys.*": {
        "type": "forbidden"
      }
    }

  • Allow any number of entries but only following a specific restriction. For example:

    {
      "ssh_public_keys.*": {
        "type": "regex",
        "pattern": ".*<required-content>.*"
      }
    }
    

In the case of init_scripts paths, the array elements are structures, so you may need to handle all of their fields depending on the use case. For example, to require a specific set of init scripts, you can use the following pattern:

{
  "init_scripts.0.s3.destination": {
    "type": "fixed",
    "value": "s3://<s3-path>"
  },
  "init_scripts.0.s3.region": {
    "type": "fixed",
    "value": "<s3-region>"
  },
  "init_scripts.1.dbfs.destination": {
    "type": "fixed",
    "value": "dbfs:/<dbfs-path>"
  },
  "init_scripts.*.workspace.destination": {
    "type": "forbidden"
  },
  "init_scripts.*.volumes.destination": {
    "type": "forbidden"
  },
  "init_scripts.*.s3.destination": {
    "type": "forbidden"
  },
  "init_scripts.*.dbfs.destination": {
    "type": "forbidden"
  },
  "init_scripts.*.file.destination": {
    "type": "forbidden"
  }
}

Cluster policy examples

General cluster policy

A general purpose cluster policy meant to guide users and restrict some functionality, while requiring tags, restricting the maximum number of instances, and enforcing a timeout.

{
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "fixed",
    "value": "singleNode",
    "hidden": true
  },
  "instance_pool_id": {
    "type": "forbidden",
    "hidden": true
  },
  "spark_version": {
    "type": "unlimited",
    "pattern": "auto:latest-ml"
  },
  "node_type_id": {
    "type": "allowlist",
    "values": [
      "i3.xlarge",
      "i3.2xlarge",
      "i3.4xlarge"
    ],
    "defaultValue": "i3.2xlarge"
  },
  "driver_node_type_id": {
    "type": "fixed",
    "value": "i3.2xlarge",
    "hidden": true
  },
  "autoscale.min_workers": {
    "type": "fixed",
    "value": 1,
    "hidden": true
  },
  "autoscale.max_workers": {
    "type": "range",
    "maxValue": 25,
    "defaultValue": 5
  },
  "enable_elastic_disk": {
    "type": "fixed",
    "value": true,
    "hidden": true
  },
  "autotermination_minutes": {
    "type": "fixed",
    "value": 30,
    "hidden": true
  },
  "custom_tags.team": {
    "type": "fixed",
    "value": "product"
  }
}

Define limits on Delta Live Tables pipeline clusters

Note

When using cluster policies to configure Delta Live Tables clusters, Databricks recommends applying a single policy to both the default and maintenance clusters.

To configure a cluster policy for a pipeline cluster, create a policy with the cluster_type field set to dlt. The following example creates a minimal policy for a Delta Live Tables cluster:

{
  "cluster_type": {
    "type": "fixed",
    "value": "dlt"
  },
  "num_workers": {
    "type": "unlimited",
    "defaultValue": 3,
    "isOptional": true
  },
  "node_type_id": {
    "type": "unlimited",
    "isOptional": true
  },
  "spark_version": {
    "type": "unlimited",
    "hidden": true
  }
}

Simple medium-sized policy

Allows users to create a medium-sized cluster with minimal configuration. The only required field at creation time is cluster name; the rest is fixed and hidden.

{
  "instance_pool_id": {
    "type": "forbidden",
    "hidden": true
  },
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "forbidden",
    "hidden": true
  },
  "autoscale.min_workers": {
    "type": "fixed",
    "value": 1,
    "hidden": true
  },
  "autoscale.max_workers": {
    "type": "fixed",
    "value": 10,
    "hidden": true
  },
  "autotermination_minutes": {
    "type": "fixed",
    "value": 60,
    "hidden": true
  },
  "node_type_id": {
    "type": "fixed",
    "value": "i3.xlarge",
    "hidden": true
  },
  "driver_node_type_id": {
    "type": "fixed",
    "value": "i3.xlarge",
    "hidden": true
  },
  "spark_version": {
    "type": "fixed",
    "value": "auto:latest-ml",
    "hidden": true
  },
  "enable_elastic_disk": {
    "type": "fixed",
    "value": false,
    "hidden": true
  },
  "custom_tags.team": {
    "type": "fixed",
    "value": "product"
  }
}

Job-only policy

Allows users to create job clusters and run jobs using the cluster. Users cannot create an all-purpose cluster using this policy.

{
  "cluster_type": {
    "type": "fixed",
    "value": "job"
  },
  "dbus_per_hour": {
    "type": "range",
    "maxValue": 100
  },
  "instance_pool_id": {
    "type": "forbidden",
    "hidden": true
  },
  "num_workers": {
    "type": "range",
    "minValue": 1
  },
  "node_type_id": {
    "type": "regex",
    "pattern": "[rmci][3-5][rnad]*.[0-8]{0,1}xlarge"
  },
  "driver_node_type_id": {
    "type": "regex",
    "pattern": "[rmci][3-5][rnad]*.[0-8]{0,1}xlarge"
  },
  "spark_version": {
    "type": "unlimited",
    "defaultValue": "auto:latest-lts"
  },
  "custom_tags.team": {
    "type": "fixed",
    "value": "product"
  }
}

Single Node policy

Allows users to create a Single Node cluster with no worker nodes, with Spark running in local mode. For example policies, see Single Node cluster policy.
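A minimal sketch of such a policy, assuming the standard single-node settings (the cluster profile fixed to singleNode, Spark master fixed to local mode, and num_workers fixed to 0), might look like the following. See the linked article for the complete recommended policy:

{
  "spark_conf.spark.databricks.cluster.profile": { "type": "fixed", "value": "singleNode", "hidden": true },
  "spark_conf.spark.master": { "type": "fixed", "value": "local[*]", "hidden": true },
  "num_workers": { "type": "fixed", "value": 0, "hidden": true }
}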

External metastore policy

Allows users to create a cluster with an admin-defined metastore already attached. This is useful to allow users to create their own clusters without requiring additional configuration.

{
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL": {
      "type": "fixed",
      "value": "jdbc:sqlserver://<jdbc-url>"
  },
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionDriverName": {
      "type": "fixed",
      "value": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
  },
  "spark_conf.spark.databricks.delta.preview.enabled": {
      "type": "fixed",
      "value": "true"
  },
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionUserName": {
      "type": "fixed",
      "value": "<metastore-user>"
  },
  "spark_conf.spark.hadoop.javax.jdo.option.ConnectionPassword": {
      "type": "fixed",
      "value": "<metastore-password>"
  }
}

Remove autoscaling policy

This policy disables autoscaling and allows the user to set the number of workers within a given range.

{
  "num_workers": {
    "type": "range",
    "maxValue": 25,
    "minValue": 1,
    "defaultValue": 5
  }
}