Neo4j is a native graph database that leverages data relationships as first-class entities. You can connect a Databricks cluster to a Neo4j cluster using the neo4j-spark-connector, which offers Spark APIs for RDD, DataFrame, GraphX, and GraphFrames. The neo4j-spark-connector uses the binary Bolt protocol to transfer data to and from the Neo4j server.

This topic describes how to deploy and configure Neo4j, how to configure Databricks to access it, and includes a notebook demonstrating usage.

Neo4j deployment and configuration

You can deploy Neo4j on various cloud providers.

To deploy Neo4j on AWS EC2 using a custom AMI, follow the instructions in Hosting Neo4j on EC2 on AWS. For other options, see the official Neo4j cloud deployment guide. This guide assumes Neo4j 3.2.2.

Change the Neo4j password from the default (you should be prompted when you first access Neo4j) and modify conf/neo4j.conf to accept remote connections.

# conf/neo4j.conf
# Listen on all interfaces (not just localhost) so remote clients can connect

# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=0.0.0.0:7687

# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=0.0.0.0:7474

# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true

For more information, see Configuring Neo4j Connectors.

Databricks configuration

If your Neo4j cluster is running in AWS and you want to use private IPs, see the VPC Peering guide.

  1. Install two libraries, neo4j-spark-connector and graphframes, as Spark packages. See the libraries guide for instructions.
  2. Create a cluster with these Spark configurations:
spark.neo4j.bolt.url bolt://<ip-of-neo4j-instance>:7687
spark.neo4j.bolt.user <username>
spark.neo4j.bolt.password <password>
  3. Import the libraries and test the connection.
import org.neo4j.spark._
import org.graphframes._

val neo = Neo4j(sc)

// Simple Cypher query to verify the connection; counts all nodes in the graph
val testConnection = neo.cypher("MATCH (n) RETURN COUNT(*);").loadRdd[Long]
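Once the connection test succeeds, you can load query results directly into Spark structures. The sketch below assumes a graph containing Person nodes linked by KNOWS relationships with a name property; substitute the labels, relationship types, and properties from your own schema.

// Load the result of a Cypher query as a Spark DataFrame.
// The Person label and name property are assumptions for illustration.
val peopleDf = neo
  .cypher("MATCH (n:Person) RETURN n.name AS name")
  .loadDataFrame

peopleDf.show()

// Load a (Person)-[KNOWS]->(Person) pattern as a GraphFrame,
// then run PageRank from the GraphFrames library over it.
val graph = neo
  .pattern(("Person", "name"), ("KNOWS", "since"), ("Person", "name"))
  .partitions(3)
  .rows(1000)
  .loadGraphFrame

val ranked = graph.pageRank.resetProbability(0.15).maxIter(5).run()

Adjust partitions and rows to control how the result set is split across Spark tasks; larger graphs benefit from more partitions.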