Account Console

The Account Console is where you administer account-level configuration for your Databricks account. Only the account owner, the user who originally created the Databricks account, can log in to the Account Console.

Access the Account Console

Click the user profile icon at the top right and select Manage Account.


Billing Details

In the Account Console, click the Billing Details tab to view and configure billing details for your account.

AWS Account

In the Account Console, click the AWS Account tab to configure AWS access from Databricks.

AWS Storage

In the Account Console, click the AWS Storage tab to configure AWS storage settings for your account.

Audit Logs

In the Account Console, click the Audit Logs tab to configure audit logs for your account.

Databricks Account

In the Account Console, click the Deploy Databricks and Delete Account tabs to manage your Databricks account.

Usage

In the Account Console, click the View Usage tab to see historical account usage.


You can also download a CSV report that uses this schema:

| Column Name | Type | Description | Example |
| --- | --- | --- | --- |
| workspaceId | string | Workspace ID of the workspace | 1234567890 |
| timestamp | datetime | End of the hour for the provided usage | 2019-02-22T09:59:59Z |
| clusterId | string | The cluster ID of the cluster | 0405-020048-brawl507 |
| clusterName | string | The user-provided name for the cluster | Shared Autoscaling |
| clusterNodeType | string | Instance type for the cluster | m4.16xlarge |
| clusterOwnerUserId | string | The user ID of the user who created the cluster | 1234567890 |
| clusterOwnerUserEmail | string | The email address of the user who created the cluster | rebecca.li@databricks.com |
| clusterCustomTags | string ("-escaped JSON) | Cluster tags associated with the cluster during this hour | "{""key1"":""value1"",""key2"":""value2""}" |
| sku | string | The billing SKU | STANDARD_AUTOMATED_NON_OPSEC |
| quantity | double | The number of DBUs used by the customer during this hour | 1.2345 |
| machineHours | double | The total number of machine hours used by all containers in the cluster | 12.345 |
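Outside of Spark, the report can also be parsed with the Python standard library. The sketch below shows how the quote-escaped JSON in the clusterCustomTags column decodes with the `csv` module's default double-quote handling; the sample row and the subset of columns shown are illustrative, not real billing data.

```python
import csv
import io
import json

# A single sample row in the usage-report format described above
# (only a few of the schema's columns, for brevity).
SAMPLE_CSV = (
    'workspaceId,timestamp,clusterId,clusterCustomTags,quantity\n'
    '1234567890,2019-02-22T09:59:59Z,0405-020048-brawl507,'
    '"{""key1"":""value1"",""key2"":""value2""}",1.2345\n'
)

def parse_usage(csv_text):
    """Parse usage rows, decoding the quote-escaped JSON tags column."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # csv's default doublequote handling turns "" back into ",
        # leaving a plain JSON object string to decode.
        row["clusterCustomTags"] = json.loads(row["clusterCustomTags"])
        row["quantity"] = float(row["quantity"])
        rows.append(row)
    return rows

rows = parse_usage(SAMPLE_CSV)
print(rows[0]["clusterCustomTags"])  # {'key1': 'value1', 'key2': 'value2'}
```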

You can import the CSV file into Databricks with the Create New Table UI, then analyze the data with the following Spark CSV reader code:

df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .option("escape", "\"")
      .csv("/FileStore/tables/usage_data/new_usage_data.csv"))

df.createOrReplaceTempView("dbu_usage")

Total DBUs are the sum of the quantity column.
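On the dbu_usage view this is simply `SELECT SUM(quantity) FROM dbu_usage`. The same aggregation in plain Python, for illustration, assuming hypothetical rows already parsed from the report:

```python
# Hypothetical usage rows in the report schema; quantity is DBUs for the hour.
usage_rows = [
    {"clusterId": "0405-020048-brawl507", "quantity": 1.2345},
    {"clusterId": "0405-020048-brawl507", "quantity": 2.5},
]

# Total DBUs are the sum of the quantity column.
total_dbus = sum(row["quantity"] for row in usage_rows)
print(total_dbus)  # 3.7345
```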

To map clusterOwnerUserId values in the usage data to users, you can use the SCIM API to get a list of user email addresses mapped to user IDs.
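A minimal sketch of that mapping, using only the standard library. The endpoint path, auth header, and response field names follow the SCIM 2.0 conventions used by Databricks but should be treated as assumptions to verify against your deployment; the canned response at the end stands in for a real API call.

```python
import json
from urllib.request import Request, urlopen

def fetch_scim_users(host, token):
    """Fetch the SCIM user list (assumed endpoint; verify for your account)."""
    req = Request(
        f"{host}/api/2.0/preview/scim/v2/Users",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

def user_id_to_email(scim_response):
    """Build a clusterOwnerUserId -> email mapping from a SCIM user list.

    Assumes each SCIM resource carries the email in "userName", since
    Databricks user names are email addresses.
    """
    return {u["id"]: u["userName"] for u in scim_response.get("Resources", [])}

# Canned SCIM-style response standing in for fetch_scim_users(...):
sample = {"Resources": [{"id": "1234567890",
                         "userName": "rebecca.li@databricks.com"}]}
print(user_id_to_email(sample))  # {'1234567890': 'rebecca.li@databricks.com'}
```

The resulting dict can be joined against the clusterOwnerUserId column of the usage report to attribute DBU consumption to individual users.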