Data access configuration

Preview

This feature is in Public Preview. Contact your Databricks representative to request access.

This article describes how a Databricks SQL Analytics administrator uses the UI to configure data access for all SQL endpoints.

To configure all SQL endpoints using the Databricks REST API, see Global SQL Endpoints API.
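For reference, the global configuration managed through that API takes roughly the following shape; the exact field names should be verified against the Global SQL Endpoints API reference, and the ARN and property values below are placeholders:

```json
{
  "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/sql-endpoints",
  "data_access_config": [
    { "key": "spark.hadoop.aws.region", "value": "us-west-2" }
  ]
}
```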

Important

Changing these settings restarts all running SQL endpoints.

For a general overview of how to enable access to data, see Data access overview.

Configure an instance profile

A Databricks SQL Analytics administrator can configure all endpoints to use an AWS instance profile when accessing AWS storage.

  1. Click the User Settings icon at the bottom of the sidebar and select Settings.
  2. Click the SQL Endpoint Settings tab.
  3. In the Instance Profile drop-down, select an instance profile. If there are no profiles, click Configure to open the Databricks admin console in a new tab to configure an instance profile.
  4. Click Save.

Warning

  • If a user does not have permission to use the instance profile, all endpoints the user creates will fail to start.
  • If the instance profile is invalid, all SQL endpoints will become unhealthy.

Configure data access properties

A Databricks SQL Analytics administrator can configure all endpoints with data access properties.

  1. Click the User Settings icon at the bottom of the sidebar and select Settings.
  2. Click the SQL Endpoint Settings tab.
  3. In the Data Access Configuration textbox, specify key-value pairs containing metastore properties, one pair per line.
  4. Click Save.
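For example, to route metastore lookups to AWS Glue, the textbox might contain pairs like the following, each key separated from its value by a space; the region value is a placeholder:

```ini
spark.databricks.hive.metastore.glueCatalog.enabled true
spark.hadoop.aws.region us-west-2
```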

Supported properties

  • spark.databricks.hive.metastore.glueCatalog.enabled
  • spark.sql.hive.metastore.*
  • spark.sql.warehouse.dir
  • spark.hadoop.aws.region
  • spark.hadoop.datanucleus.*
  • spark.hadoop.fs.*
  • spark.hadoop.hive.*
  • spark.hadoop.javax.jdo.option.*
  • spark.hive.*

For details on how to set these properties, see External Hive metastore and AWS Glue data catalog.
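As an illustration only, a configuration pointing endpoints at an external Hive metastore might use pairs like the following; the metastore version, JDBC URL, driver, and credentials are placeholders that must match your own metastore deployment:

```ini
spark.sql.hive.metastore.version 2.3.7
spark.sql.hive.metastore.jars maven
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://<host>:3306/<metastore-db>
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <user>
spark.hadoop.javax.jdo.option.ConnectionPassword <password>
```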