This article describes how to read from and write to Google BigQuery tables in Databricks.

You need the following:

- A Databricks cluster running Databricks Runtime 7.1 or above.
- The Databricks CLI installed and configured.
The BigQuery Storage API is enabled by default in any new project where BigQuery is enabled. However, if you are using an existing project and the API isn't enabled, follow the steps in this section to enable it.
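If you prefer the command line, you can also enable the API with the gcloud CLI. This assumes gcloud is installed and authenticated against your project; `bigquerystorage.googleapis.com` is the service name of the BigQuery Storage API:

```bash
gcloud services enable bigquerystorage.googleapis.com
```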
Create a service account for the Databricks cluster. We recommend granting this service account the fewest privileges needed to perform its tasks. See BigQuery Roles and Permissions.
The JSON key you generate for the service account is a private key that should be shared only with authorized users, as it controls access to datasets and resources in your Google Cloud account.
To create the service account and its JSON key with the gcloud CLI:

```bash
gcloud iam service-accounts create <service-account-name>

gcloud projects add-iam-policy-binding <project-name> \
  --role roles/bigquery.user \
  --member="serviceAccount:<service-account-name>@<project-name>.iam.gserviceaccount.com"

gcloud projects add-iam-policy-binding <project-name> \
  --role roles/bigquery.dataEditor \
  --member="serviceAccount:<service-account-name>@<project-name>.iam.gserviceaccount.com"

gcloud iam service-accounts keys create --iam-account \
  "<service-account-name>@<project-name>.iam.gserviceaccount.com" \
  <project-name>-deadbeefdead.json
```
Alternatively, to create the service account and download its JSON key in the Google Cloud console:

1. Click IAM and Admin in the left navigation pane.
2. Click Service Accounts.
3. Click + CREATE SERVICE ACCOUNT.
4. Enter the service account name and description.
5. Specify roles for the service account. In the Select a role drop-down, type BigQuery and add the following roles (the same roles granted by the gcloud commands above):
   - BigQuery User
   - BigQuery Data Editor
6. In the service accounts list, click the newly created account.
7. In the Keys section, select ADD KEY > Create new key.
8. Accept the JSON key type.
9. Click CREATE. The key file is downloaded to your computer.
To write data to BigQuery, the data source needs access to a GCS bucket.
To create the bucket in the Google Cloud console:

1. Click Storage in the left navigation pane.
2. Click CREATE BUCKET.
3. Configure the bucket details.
4. Click the Permissions tab, and then click Add members.
5. Grant the service account permission to read from and write to the bucket.
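The bucket and the grant can also be created from the command line. This is a sketch: the Storage Admin role below is an assumption, so scope it down if a narrower role suffices in your environment:

```bash
# Create the bucket (the name is a placeholder).
gsutil mb gs://<bucket-name>

# Grant the service account access to the bucket.
# roles/storage.admin is assumed here; use a narrower role if possible.
gsutil iam ch \
  "serviceAccount:<service-account-name>@<project-name>.iam.gserviceaccount.com:roles/storage.admin" \
  gs://<bucket-name>
```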
Upload the service credentials to DBFS and configure a cluster to read the credentials.
Upload the JSON key containing the Google Cloud service credentials that you generated earlier to DBFS.
```bash
dbfs cp <project-name>-deadbeefdead.json dbfs:/gcp/service-account-key.json
```
When you configure the cluster:

- Set the Databricks Runtime Version to Databricks Runtime 7.1 or above.
- Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to the JSON key path in DBFS.
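Files in DBFS are exposed on cluster nodes through the /dbfs FUSE mount, so for the key uploaded above the environment variable would look like this (set under the cluster's Environment Variables; the path mirrors the dbfs cp destination):

```bash
GOOGLE_APPLICATION_CREDENTIALS=/dbfs/gcp/service-account-key.json
```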
To read a BigQuery table, specify the table name:

```python
df = spark.read.format("bigquery").option("table", "<table-name>").load()
```
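For example, here is a minimal read against a public sample table. The table name, bigquery-public-data.samples.shakespeare, is only an illustration; any table the service account can read works:

```python
# Read a public BigQuery sample table and preview a few rows.
df = (spark.read.format("bigquery")
      .option("table", "bigquery-public-data.samples.shakespeare")
      .load())
df.show(5)
```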
To write to a BigQuery table, specify the write mode, the temporary GCS bucket, and the table name:

```python
df.write.format("bigquery") \
  .mode("<mode>") \
  .option("temporaryGcsBucket", "<bucket-name>") \
  .option("table", "<table-name>") \
  .save()
```
where <bucket-name> is the name of the GCS bucket you created earlier for temporary storage.
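As an end-to-end sketch, the following appends a DataFrame to a table, staging the data in that bucket. The mode and all names are illustrative placeholders:

```python
# Append df to a BigQuery table, using the GCS bucket for temporary staging.
(df.write.format("bigquery")
   .mode("append")                                  # or "overwrite"
   .option("temporaryGcsBucket", "<bucket-name>")   # bucket from the GCS step
   .option("table", "<dataset>.<table>")
   .save())
```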