Azure Data Lake Store

Note

Databricks Runtime 3.1 and above provide built-in support for Azure Blob Storage and Azure Data Lake Store.

To read from your Data Lake Store account, you can configure Spark to use service credentials with the following snippet in your notebook:

spark.conf.set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential")
spark.conf.set("dfs.adls.oauth2.client.id", "{YOUR SERVICE CLIENT ID}")
spark.conf.set("dfs.adls.oauth2.credential", "{YOUR SERVICE CREDENTIALS}")
spark.conf.set("dfs.adls.oauth2.refresh.url", "https://login.microsoftonline.com/{YOUR DIRECTORY ID}/oauth2/token")

If you do not already have service credentials, you can follow the instructions in Create service principal with portal. If you do not know your directory ID (also called the tenant ID), you can find it in the Azure portal under the Azure Active Directory properties.

After providing credentials, you can read from Data Lake Store using standard APIs:

val df = spark.read.parquet("adl://{YOUR DATA LAKE STORE ACCOUNT NAME}.azuredatalakestore.net/{YOUR DIRECTORY NAME}")
dbutils.fs.ls("adl://{YOUR DATA LAKE STORE ACCOUNT NAME}.azuredatalakestore.net/{YOUR DIRECTORY NAME}")

Note that Data Lake Store provides directory-level access control, so the service principal must have access both to the directories that you want to read from and to the Data Lake Store resource itself.

Note

Hadoop configuration options set using spark.conf.set(...) are not accessible via SparkContext. This means that, while they are visible to the DataFrame and Dataset API, they are not visible to the RDD API. If you are using the RDD API to read from Azure Data Lake Store, you must set the credentials using one of the following methods:

  • Specify the Hadoop credential configuration options as Spark options when you create the cluster.

    You must add the spark.hadoop. prefix to the corresponding Hadoop configuration keys to tell Spark to propagate them to the Hadoop configurations that are used for your RDD jobs:

    spark.hadoop.dfs.adls.oauth2.access.token.provider.type ClientCredential
    spark.hadoop.dfs.adls.oauth2.client.id {YOUR SERVICE CLIENT ID}
    spark.hadoop.dfs.adls.oauth2.credential {YOUR SERVICE CREDENTIALS}
    spark.hadoop.dfs.adls.oauth2.refresh.url https://login.microsoftonline.com/{YOUR DIRECTORY ID}/oauth2/token
    
  • For Scala users, you can also set the credentials into spark.sparkContext.hadoopConfiguration:

    spark.sparkContext.hadoopConfiguration.set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential")
    spark.sparkContext.hadoopConfiguration.set("dfs.adls.oauth2.client.id", "{YOUR SERVICE CLIENT ID}")
    spark.sparkContext.hadoopConfiguration.set("dfs.adls.oauth2.credential", "{YOUR SERVICE CREDENTIALS}")
    spark.sparkContext.hadoopConfiguration.set("dfs.adls.oauth2.refresh.url", "https://login.microsoftonline.com/{YOUR DIRECTORY ID}/oauth2/token")
    
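With either method, once the Hadoop credentials are in place, RDD reads against Data Lake Store work through the standard SparkContext APIs. A minimal sketch (the account, directory, and file names below are placeholders, not values from this guide):

```scala
// Assumes the dfs.adls.oauth2.* credentials have already been configured
// on the cluster using one of the two methods above.
// The adl:// path is a placeholder; substitute your own account and file.
val lines = spark.sparkContext.textFile(
  "adl://{YOUR DATA LAKE STORE ACCOUNT NAME}.azuredatalakestore.net/{YOUR DIRECTORY NAME}/sample.txt")

// count() triggers a job, so it confirms the RDD API can actually
// authenticate and read from the store.
println(lines.count())
```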

Warning

In either case, the credentials you set here are available to all users who access the cluster.