Monitor usage using tags

To monitor cost and accurately attribute Databricks usage to your organization’s business units and teams (for chargebacks, for example), you can add custom tags to workspaces and compute resources. Databricks recommends using system tables (Public Preview) to view usage data. See Billable usage system table reference.

These tags propagate both to usage logs and to the AWS EC2 instances and AWS EBS volumes that Databricks creates, so you can use them for cost analysis. Note: Tag data may be replicated globally. Do not use tag names or values that could compromise the security of your resources. For example, do not use tag names that contain personal or sensitive information.

Tagged objects and resources

You can add custom tags for the following objects managed by Databricks:

Object                        Tagging interface (UI)                          Tagging interface (API)
Workspace                     N/A                                             Account API
Pool                          Pools UI in the Databricks workspace            Instance Pool API
All-purpose and job compute   Compute UI in the Databricks workspace          Clusters API
SQL warehouse                 SQL warehouse UI in the Databricks workspace    Warehouses API
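
For example, to attach tags to a cluster through the Clusters API, include a custom_tags map in the create request. The following is a minimal sketch; the cluster configuration and the tag keys and values are placeholders you would replace with your own:

{
  "cluster_name": "analytics-etl",
  "spark_version": "14.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 2,
  "custom_tags": {
    "CostCenter": "1234",
    "Team": "data-platform"
  }
}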

Warning

Do not assign a custom tag with the key Name to a cluster. Every cluster has a tag Name whose value is set by Databricks. If you change the value associated with the key Name, the cluster can no longer be tracked by Databricks. As a consequence, the cluster might not be terminated after becoming idle and will continue to incur usage costs.

Default tags

Databricks adds the following default tags to all-purpose compute:

Tag key      Value
Vendor       Constant value: Databricks
ClusterId    Databricks internal ID of the cluster
ClusterName  Name of the cluster
Creator      Username (email address) of the user who created the cluster

On job clusters, Databricks also applies the following default tags:

Tag key  Value
RunName  Job name
JobId    Job ID

Databricks adds the following default tags to all pools:

Tag key                          Value
Vendor                           Constant value: Databricks
DatabricksInstancePoolCreatorId  Databricks internal ID of the user who created the pool
DatabricksInstancePoolId         Databricks internal ID of the pool

On compute used by Lakehouse Monitoring, Databricks also applies the following tags:

Tag key                         Value
LakehouseMonitoring             true
LakehouseMonitoringTableId      ID of the monitored table
LakehouseMonitoringWorkspaceId  ID of the workspace where the monitor was created
LakehouseMonitoringMetastoreId  ID of the metastore where the monitored table exists

Tagging serverless compute workloads

To attribute serverless compute usage to users, groups, or projects, you can use budget policies. When a user is assigned a budget policy, their serverless usage is automatically tagged with their policy’s tags. See Attribute serverless usage with budget policies.

Tag propagation

Tags are propagated to AWS EC2 instances differently depending on whether or not the cluster was created from a pool.

[Diagram: cluster and pool tag propagation]

If a cluster is created from a pool, its EC2 instances inherit only the custom and default workspace tags and the pool tags, not the cluster tags. Therefore, if you want to create clusters from a pool, assign all of the custom cluster tags you need at the workspace or pool level.
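
For example, if you create pools through the Instance Pool API, you can set the tags on the pool so that every instance drawn from it carries them. A minimal sketch with placeholder names and values:

{
  "instance_pool_name": "etl-pool",
  "node_type_id": "i3.xlarge",
  "min_idle_instances": 1,
  "custom_tags": {
    "CostCenter": "1234",
    "Team": "data-platform"
  }
}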

If a cluster is not created from a pool, its tags propagate as expected to EC2 instances.

Cluster and pool tags both propagate to DBU usage reports, whether or not the cluster was created from a pool.

If there is a tag name conflict, Databricks default tags take precedence over custom tags, and pool tags take precedence over cluster tags.

Limitations

  • Tag keys and values can only contain letters, spaces, numbers, or the characters +, -, =, ., _, :, /, @. Tags containing other characters are invalid.

  • If you change tag key names or values, these changes apply only after the cluster restarts or the pool expands.

  • If the cluster’s custom tags conflict with a pool’s custom tags, the cluster can’t be created.

  • It can take up to one hour for custom workspace tags to propagate after any change.

  • No more than 20 tags can be assigned to a workspace resource.

Tag enforcement with policies

You can enforce tags on clusters using compute policies. For more information, see Custom tag enforcement.
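
As an illustration, a policy definition can pin one tag to a fixed value and restrict another to an allowed list. This is a sketch with placeholder tag names and values, not a complete policy:

{
  "custom_tags.CostCenter": {
    "type": "fixed",
    "value": "1234"
  },
  "custom_tags.Project": {
    "type": "allowlist",
    "values": ["falcon", "sparrow"]
  }
}

With a fixed element the tag value is applied automatically; with an allowlist the user must select one of the listed values when creating the cluster.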

Tag enforcement with IAM role

To ensure that certain tags are always populated when compute resources are created, you can apply a specific IAM policy to your account's primary IAM role (the one created during account setup; contact your AWS administrator if you need access). The IAM policy should include explicit Deny statements that block instance launches when a mandatory tag key is missing or, optionally, when its value is not in an allowed list. Cluster creation fails if the required tags aren't provided with one of the allowed values.

For example, if you want to enforce Department and Project tags, with only specified values allowed for the former and a free-form non-empty value for the latter, you could apply an IAM policy like this one:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MandateLaunchWithTag1",
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateTags"
      ],
      "Resource": "arn:aws:ec2:region:accountId:instance/*",
      "Condition": {
        "StringNotEqualsIgnoreCase": {
          "aws:RequestTag/Department": [
              "Deptt1", "Deptt2", "Deptt3"
          ]
        }
      }
    },
    {
      "Sid": "MandateLaunchWithTag2",
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateTags"
      ],
      "Resource": "arn:aws:ec2:region:accountId:instance/*",
      "Condition": {
        "StringNotLike": {
          "aws:RequestTag/Project": "?*"
        }
      }
    }
  ]
}

Both the ec2:RunInstances and ec2:CreateTags actions are required for each tag to effectively cover scenarios in which clusters have only on-demand instances, only spot instances, or both.

Tip

Databricks recommends that you add a separate policy statement for each tag. The overall policy might become long, but it is easier to debug. See the IAM Policy Condition Operators Reference for a list of operators that can be used in a policy.

Cluster creation errors due to an IAM policy show an encoded error message, starting with:

Cloud Provider Launch Failure: A cloud provider error was encountered while setting up the cluster.

The message is encoded because the details of the authorization status can constitute privileged information that the user who requested the action should not see. See DecodeAuthorizationMessage API (or CLI) for information about how to decode such messages.
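
For example, with the AWS CLI you can run aws sts decode-authorization-message --encoded-message <encoded-message>, provided the caller has the sts:DecodeAuthorizationMessage permission. (This command and permission are standard AWS tooling rather than anything Databricks-specific.)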