Configure audit logging

Databricks provides comprehensive end-to-end audit logs of activities performed by Databricks users, allowing your enterprise to monitor detailed Databricks usage patterns.


Audit logs must be explicitly enabled for your account. To enable audit logs, contact Databricks.

Configure audit log delivery

If your account is enabled for audit logging, the Databricks account owner configures where Databricks sends the logs. Admin users cannot configure audit log delivery.

  1. Log in to the Account Console.

  2. Click the Audit Logs tab.

  3. Configure the S3 bucket and directory:

    • S3 Bucket in <region name>: the S3 bucket where you want to store your audit logs. The bucket must exist.
    • Path: the path to the directory in the S3 bucket where you want to store the audit logs. For example, /databricks/auditlogs. If you want to store the logs at the bucket root, enter /.

    Databricks sends the audit logs to the specified S3 bucket and directory path, partitioned by date. For example, my-bucket/databricks/auditlogs/date=2018-01-15/part-0.json.gz.
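The date-partitioned layout described above can be sketched as a small helper. This is a hypothetical function (the name and signature are mine); only the `bucket/path/date=yyyy-MM-dd` convention comes from this page:

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Builds the S3 key prefix for one day's audit logs, following the
// date-partitioned layout described above (date=yyyy-MM-dd).
def auditLogPrefix(bucket: String, path: String, date: LocalDate): String = {
  val day = date.format(DateTimeFormatter.ISO_LOCAL_DATE) // yyyy-MM-dd
  val dir = path.stripPrefix("/").stripSuffix("/")        // normalize "/" and trailing slashes
  if (dir.isEmpty) s"$bucket/date=$day"
  else s"$bucket/$dir/date=$day"
}

println(auditLogPrefix("my-bucket", "/databricks/auditlogs", LocalDate.of(2018, 1, 15)))
// my-bucket/databricks/auditlogs/date=2018-01-15
```

The actual log files under this prefix are gzipped JSON parts such as `part-0.json.gz`.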

Configure access policy

To configure Databricks access to your AWS S3 bucket using an access policy, follow the steps in this section.

Step 1: Generate the access policy

In the Databricks Account Console, on the Audit Logs tab:

  1. Click the Generate Policy button. The generated policy should look like:

       {
         "Version": "2012-10-17",
         "Id": "DatabricksAuditLogs",
         "Statement": [
           {
             "Sid": "PutAuditLogs",
             "Effect": "Allow",
             "Principal": {
               "AWS": "arn:aws:iam::090101015318:role/DatabricksAuditLogs-WriterRole-VV4KJWX4FRIK"
             },
             "Action": [
               "s3:PutObject"
             ],
             "Resource": "arn:aws:s3:::AUDIT_LOG_BUCKET/audit_log_path/*"
           },
           {
             "Sid": "DenyNotContainingFullAccess",
             "Effect": "Deny",
             "Principal": {
               "AWS": "arn:aws:iam::090101015318:role/DatabricksAuditLogs-WriterRole-VV4KJWX4FRIK"
             },
             "Action": [
               "s3:PutObject"
             ],
             "Resource": "arn:aws:s3:::AUDIT_LOG_BUCKET/audit_log_path/*",
             "Condition": {
               "StringNotEquals": {
                 "s3:x-amz-acl": "bucket-owner-full-control"
               }
             }
           }
         ]
       }

    This policy ensures that the Databricks AWS account has write permission on the bucket and directory that you specified. The first statement grants Databricks write permission only; Databricks has no read, list, or delete permission. The second statement denies any write that does not grant the bucket owner full control, ensuring that you retain full control over everything that Databricks writes to your bucket.

  2. Copy the generated JSON policy to your clipboard.

Step 2: Apply the policy to the AWS S3 bucket

  1. In the AWS console, go to the S3 service.
  2. Click the name of the bucket where you want to store the audit logs.
  3. Click the Permissions tab.
  4. Click the Bucket Policy button.
  5. Paste the policy string from Step 1.
  6. Click Save.

Step 3: Verify that the policy is applied correctly

In the Databricks Account Console, on the Audit Logs tab, click the Verify Access button.

If you see a check mark, audit logs are configured correctly. If verification fails:

  1. Check that you entered the bucket name correctly, and that the AWS region is correct.
  2. Check that you copied the generated policy correctly to AWS.
  3. Contact your AWS account admin.

Audit log delivery

Once logging is enabled for your account, Databricks automatically starts sending audit logs in human-readable format to your delivery location on a periodic basis. Logs are available within 72 hours of activation.

  • Encryption: Databricks encrypts audit logs using Amazon S3 server-side encryption.
  • Format: Databricks delivers audit logs as gzipped JSON files with a .json.gz extension.
  • When: Databricks delivers audit logs daily and partitions the logs by date in yyyy-MM-dd format.
  • Guarantees
    • Databricks delivers logs within 72 hours after day close.
    • Each audit log record is unique.


  • In order to guarantee exactly-once delivery of your audit logs while accounting for late records, Databricks can overwrite the delivered log files in your bucket at any time during the three-day period after the log date. After three days, audit files become immutable. In other words, logs for 2018-01-06 are subject to overwrites through 2018-01-09, and you can safely archive them on 2018-01-10.
  • Overwriting ensures exactly-once semantics without requiring read or delete access to your account.
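The archival rule above can be captured in a short sketch. The three-day overwrite window is from this page; the function name and signature are illustrative:

```scala
import java.time.LocalDate

// A day's logs may be overwritten for three days after the log date, so a
// partition is only safe to archive from the fourth day onward.
def safeToArchive(logDate: LocalDate, today: LocalDate): Boolean =
  !today.isBefore(logDate.plusDays(4))

// Logs for 2018-01-06 are subject to overwrites through 2018-01-09:
println(safeToArchive(LocalDate.of(2018, 1, 6), LocalDate.of(2018, 1, 9)))  // false
println(safeToArchive(LocalDate.of(2018, 1, 6), LocalDate.of(2018, 1, 10))) // true
```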

Audit log schema

The schema of audit log records is as follows:

  • version: the schema version of the audit log format
  • timestamp: UTC timestamp of the action
  • sourceIPAddress: the IP address of the source request
  • userAgent: the browser or API client used to make the request
  • sessionId: session ID of the action
  • userIdentity: information about the user that made the request
    • email: user email address
  • serviceName: the service that logged the request
  • actionName: the action, such as login, logout, read, write, and so on
  • requestId: unique request ID
  • requestParams: parameter key-value pairs used in the audited event
  • response: response to the request
    • errorMessage: the error message if there was an error
    • result: the result of the request
    • statusCode: HTTP status code that indicates whether the request succeeded
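The schema above can be mirrored as case classes, for example to work with a typed Dataset in Spark. The field names come from this page; the concrete types are assumptions, since the raw records are JSON:

```scala
// Case classes mirroring the audit log schema above. Types are assumed:
// timestamp is taken as epoch milliseconds (UTC), requestParams as a string map.
case class UserIdentity(email: String)
case class Response(errorMessage: String, result: String, statusCode: Long)
case class AuditLogRecord(
    version: String,
    timestamp: Long,
    sourceIPAddress: String,
    userAgent: String,
    sessionId: String,
    userIdentity: UserIdentity,
    serviceName: String,
    actionName: String,
    requestId: String,
    requestParams: Map[String, String],
    response: Response
)
```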

Audit events

The serviceName and actionName properties identify an audit event in an audit log record. The naming convention follows the Databricks REST API 2.0.

Databricks provides audit logs for the following services:

  • accounts
  • clusters
  • dbfs
  • genie
  • globalInitScripts
  • groups
  • iamRole
  • instancePools
  • jobs
  • mlflowExperiment
  • notebook
  • secrets
  • sqlPermissions, which has all the audit logs for table access when table ACLs are enabled.
  • ssh
  • workspace


  • If an action takes a long time, the request and response are logged separately, but the request and response pair share the same requestId.
  • With the exception of mount-related operations, Databricks audit logs do not include DBFS-related operations. We recommend that you set up server access logging in S3, which can log object-level operations associated with an IAM role. If you map IAM roles to Databricks users, your Databricks users cannot share IAM roles.
  • Automated actions—such as resizing a cluster due to autoscaling or launching a job due to scheduling—are performed by the user System-User.
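The requestId pairing noted above means a long-running action's request and response records can be re-joined with a simple groupBy. A toy illustration with a simplified record shape (the `Event` class and sample data are mine):

```scala
// When an action runs long, its request and its response appear as separate
// audit records sharing one requestId; grouping by requestId re-pairs them.
case class Event(requestId: String, kind: String) // kind: "request" or "response"

val events = Seq(
  Event("req-1", "request"),
  Event("req-2", "request"),
  Event("req-1", "response") // delayed response for the long-running req-1
)

val byRequest: Map[String, Seq[Event]] = events.groupBy(_.requestId)
println(byRequest("req-1").map(_.kind)) // List(request, response)
```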

Request parameters

The request parameters (field requestParams) for each supported service and action are listed in the following table:

Service Action Request Parameters
accounts add ["targetUserName","endpoint","targetUserId"]
  addPrincipalToGroup ["targetGroupId","endpoint","targetUserId","targetGroupName","targetUserName"]
  changePassword ["newPasswordSource","targetUserId","serviceSource","wasPasswordChanged","userId"]
  createGroup ["endpoint","targetGroupId","targetGroupName"]
  delete ["targetUserId","targetUserName","endpoint"]
  garbageCollectDbToken ["tokenExpirationTime","userId"]
  generateDbToken ["userId","tokenExpirationTime"]
  jwtLogin ["user"]
  login ["user"]
  logout ["user"]
  removeAdmin ["targetUserName","endpoint","targetUserId"]
  removeGroup ["targetGroupId","targetGroupName","endpoint"]
  resetPassword ["serviceSource","userId","endpoint","targetUserId","targetUserName","wasPasswordChanged","newPasswordSource"]
  revokeDbToken ["userId"]
  samlLogin ["user"]
  setAdmin ["endpoint","targetUserName","targetUserId"]
  tokenLogin ["tokenId","user"]
  validateEmail ["endpoint","targetUserName","targetUserId"]
clusters changeClusterAcl ["shardName","aclPermissionSet","targetUserId","resourceId"]
  create ["cluster_log_conf","num_workers","enable_elastic_disk","driver_node_type_id","start_cluster","docker_image","ssh_public_keys","aws_attributes","acl_path_prefix","node_type_id","instance_pool_id","spark_env_vars","init_scripts","spark_version","cluster_source","autotermination_minutes","cluster_name","autoscale","custom_tags","cluster_creator","enable_local_disk_encryption","idempotency_token","spark_conf","organization_id","no_driver_daemon","user_id"]
  createResult ["clusterName","clusterState","clusterId","clusterWorkers","clusterOwnerUserId"]
  delete ["cluster_id"]
  deleteResult ["clusterWorkers","clusterState","clusterId","clusterOwnerUserId","clusterName"]
  edit ["spark_env_vars","no_driver_daemon","enable_elastic_disk","aws_attributes","driver_node_type_id","custom_tags","cluster_name","spark_conf","ssh_public_keys","autotermination_minutes","cluster_source","docker_image","enable_local_disk_encryption","cluster_id","spark_version","autoscale","cluster_log_conf","instance_pool_id","num_workers","init_scripts","node_type_id"]
  permanentDelete ["cluster_id"]
  resize ["cluster_id","num_workers","autoscale"]
  resizeResult ["clusterWorkers","clusterState","clusterId","clusterOwnerUserId","clusterName"]
  restart ["cluster_id"]
  restartResult ["clusterId","clusterState","clusterName","clusterOwnerUserId","clusterWorkers"]
  start ["init_scripts_safe_mode","cluster_id"]
  startResult ["clusterName","clusterState","clusterWorkers","clusterOwnerUserId","clusterId"]
dbfs addBlock ["handle","data_length"]
  create ["path","bufferSize","overwrite"]
  delete ["recursive","path"]
  getSessionCredentials ["mountPoint"]
  mkdirs ["path"]
  mount ["mountPoint","owner"]
  move ["dst","source_path","src","destination_path"]
  put ["path","overwrite"]
  unmount ["mountPoint"]
genie databricksAccess ["duration","approver","reason","authType","user"]
globalInitScripts create ["name","position","script-SHA256","enabled"]
  update ["script_id","name","position","script-SHA256","enabled"]
  delete ["script_id"]
groups addPrincipalToGroup ["user_name","parent_name"]
  createGroup ["group_name"]
  getGroupMembers ["group_name"]
  removeGroup ["group_name"]
iamRole changeIamRoleAcl ["targetUserId","shardName","resourceId","aclPermissionSet"]
instancePools changeInstancePoolAcl ["shardName","resourceId","targetUserId","aclPermissionSet"]
  create ["enable_elastic_disk","preloaded_spark_versions","idle_instance_autotermination_minutes","instance_pool_name","node_type_id","custom_tags","max_capacity","min_idle_instances","aws_attributes"]
  delete ["instance_pool_id"]
  edit ["instance_pool_name","idle_instance_autotermination_minutes","min_idle_instances","preloaded_spark_versions","max_capacity","enable_elastic_disk","node_type_id","instance_pool_id","aws_attributes"]
jobs cancel ["run_id"]
  changeJobAcl ["shardName","aclPermissionSet","resourceId","targetUserId"]
  create ["spark_jar_task","email_notifications","notebook_task","spark_submit_task","timeout_seconds","libraries","name","spark_python_task","job_type","new_cluster","existing_cluster_id","max_retries","schedule"]
  delete ["job_id"]
  deleteRun ["run_id"]
  reset ["job_id","new_settings"]
  resetJobAcl ["grants","job_id"]
  runFailed ["jobClusterType","jobTriggerType","jobId","jobTaskType","runId","jobTerminalState","idInJob","orgId"]
  runNow ["notebook_params","job_id","jar_params","workflow_context"]
  runSucceeded ["idInJob","jobId","jobTriggerType","orgId","runId","jobClusterType","jobTaskType","jobTerminalState"]
  submitRun ["shell_command_task","run_name","spark_python_task","existing_cluster_id","notebook_task","timeout_seconds","libraries","new_cluster","spark_jar_task"]
  update ["fields_to_remove","job_id","new_settings"]
mlflowExperiment deleteMlflowExperiment ["experimentId","path","experimentName"]
  moveMlflowExperiment ["newPath","experimentId","oldPath"]
  restoreMlflowExperiment ["experimentId","path","experimentName"]
notebook attachNotebook ["path","clusterId","notebookId"]
  createNotebook ["notebookId","path"]
  deleteFolder ["path"]
  deleteNotebook ["notebookId","notebookName","path"]
  detachNotebook ["notebookId","clusterId","path"]
  importNotebook ["path"]
  moveNotebook ["newPath","oldPath","notebookId"]
  renameNotebook ["newName","oldName","parentPath","notebookId"]
  restoreFolder ["path"]
  restoreNotebook ["path","notebookId","notebookName"]
  takeNotebookSnapshot ["path"]
secrets createScope ["scope"]
  deleteScope ["scope"]
  deleteSecret ["key","scope"]
  getSecret ["scope","key"]
  listAcls ["scope"]
  listSecrets ["scope"]
  putSecret ["string_value","scope","key"]
sqlPermissions createSecurable ["securable"]
  grantPermission ["permission"]
  removeAllPermissions ["securable"]
  requestPermissions ["requests"]
  revokePermission ["permission"]
  showPermissions ["securable","principal"]
ssh login ["containerId","userName","port","publicKey","instanceId"]
  logout ["userName","containerId","instanceId"]
workspace changeWorkspaceAcl ["shardName","targetUserId","aclPermissionSet","resourceId"]
  fileCreate ["path"]
  fileDelete ["path"]
  moveWorkspaceNode ["destinationPath","path"]
  purgeWorkspaceNodes ["treestoreId"]

Analyze audit logs

You can analyze audit logs using Databricks. The following examples use audit logs to report on Databricks access and on the Apache Spark versions in use.

Load audit logs as a DataFrame and register the DataFrame as a temp table. See Amazon S3 for a detailed guide.

val df = spark.read.format("json").load("s3a://bucketName/path/to/auditLogs")
df.createOrReplaceTempView("audit_logs")

List the users who accessed Databricks and from where.

SELECT DISTINCT userIdentity.email, sourceIPAddress
FROM audit_logs
WHERE serviceName = "accounts" AND actionName LIKE "%login%"

Check the Apache Spark versions used.

SELECT requestParams.spark_version, COUNT(*)
FROM audit_logs
WHERE serviceName = "clusters" AND actionName = "create"
GROUP BY requestParams.spark_version

Check table data access.

SELECT *
FROM audit_logs
WHERE serviceName = "sqlPermissions" AND actionName = "requestPermissions"