Azure Cosmos DB

Preview

This feature is in Public Preview.

Azure Cosmos DB is Microsoft’s globally distributed, multi-model database. It lets you scale throughput and storage elastically and independently across any number of Azure geographic regions, and backs its throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs). Azure Cosmos DB provides APIs for the following data models, with SDKs available in multiple languages:

  • SQL API
  • MongoDB API
  • Cassandra API
  • Graph (Gremlin) API
  • Table API

This article explains how to read data from and write data to Azure Cosmos DB.

Note

You cannot access this data source from a cluster running Databricks Runtime 7.0 or above because an Azure Cosmos DB connector that supports Apache Spark 3.0 is not available.

Requirements

The Azure Cosmos DB Spark Connector, developed by Microsoft, requires Databricks Runtime 3.4 or above.

Create and attach required libraries

  1. Download the latest azure-cosmosdb-spark library for the version of Apache Spark you are running.
  2. Upload the downloaded JAR files to Databricks following the instructions in Upload a Jar, Python Egg, or Python Wheel.
  3. Install the uploaded libraries into your Databricks cluster. A quick way to verify the installation is shown in the sketch after this list.
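
Once the libraries are attached, you can confirm the connector is on the cluster classpath by importing it in a Scala notebook cell and building a configuration object. This is a minimal sketch that assumes the connector's published package layout (com.microsoft.azure.cosmosdb.spark) and its Config helper; all connection values are placeholders for your own account.

```scala
// If the library is attached correctly, this import resolves;
// otherwise the cell fails with a compile error.
import com.microsoft.azure.cosmosdb.spark.config.Config

// Config wraps the connection options the connector expects.
// Every value below is a placeholder; substitute your own account details.
val config = Config(Map(
  "Endpoint"   -> "https://<your-account>.documents.azure.com:443/",
  "Masterkey"  -> "<your-account-key>",
  "Database"   -> "<database>",
  "Collection" -> "<collection>"
))
```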

Use the Azure Cosmos DB Spark connector

The following Scala notebook provides a simple example of how to write data to and read data from Cosmos DB. See the Azure Cosmos DB Spark Connector project for detailed documentation. The Azure Cosmos DB Spark Connector User Guide, developed by Microsoft, also shows how to use this connector in Python.
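
As a complement to the notebook, here is a minimal Scala sketch of the same write-and-read pattern. It assumes the connector's documented source name (com.microsoft.azure.cosmosdb.spark) and option keys (Endpoint, Masterkey, Database, Collection, Upsert); the endpoint, key, and database and collection names are placeholders you must replace with your own values.

```scala
// Placeholder connection settings; substitute your own account values.
val baseConfig = Map(
  "Endpoint"   -> "https://<your-account>.documents.azure.com:443/",
  "Masterkey"  -> "<your-account-key>",
  "Database"   -> "<database>",
  "Collection" -> "<collection>"
)

// Build a small sample DataFrame. Cosmos DB documents need a string "id"
// field, so one is derived from the range column here.
val df = spark.range(5).selectExpr("cast(id as string) as id", "id * 10 as value")

// Write the DataFrame to the collection. "Upsert" makes repeated runs
// update existing documents instead of failing on duplicate ids.
df.write
  .format("com.microsoft.azure.cosmosdb.spark")
  .mode("append")
  .options(baseConfig + ("Upsert" -> "true"))
  .save()

// Read the collection back into a DataFrame and display it.
val readBack = spark.read
  .format("com.microsoft.azure.cosmosdb.spark")
  .options(baseConfig)
  .load()
readBack.show()
```

In a Databricks notebook, `spark` is a predefined SparkSession, so no session setup is needed before running the cells above.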

Azure Cosmos DB notebook
