Sample datasets

You can use a variety of sample datasets in your Databricks workspace, some provided by Databricks and others made available by third parties.

Unity Catalog datasets

Unity Catalog provides access to a number of sample datasets in the samples catalog. You can review these datasets in the Catalog Explorer UI and reference them directly in a notebook or in the SQL editor by using the <catalog-name>.<schema-name>.<table-name> pattern.

The nyctaxi schema (also known as a database) contains the table trips, which has details about taxi rides in New York City. The following statement returns the first 10 records in this table:

SELECT * FROM samples.nyctaxi.trips LIMIT 10
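You can run the same query from a Python notebook, where spark and display are predefined; a minimal sketch:

# Query the sample table using the <catalog-name>.<schema-name>.<table-name> pattern.
df = spark.table("samples.nyctaxi.trips").limit(10)
display(df)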

The tpch schema contains data from the TPC-H Benchmark. To list the tables in this schema, run:

SHOW TABLES IN samples.tpch
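From Python, you can run the same listing and then preview one of the tables; a minimal sketch, assuming the standard TPC-H customer table is present in the schema:

# List the tables in the TPC-H sample schema.
display(spark.sql("SHOW TABLES IN samples.tpch"))

# Preview one of the standard TPC-H tables (assumed present).
display(spark.table("samples.tpch.customer").limit(10))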

Databricks datasets (databricks-datasets)

Databricks includes a variety of sample datasets mounted to DBFS.

Note

The availability and location of Databricks datasets are subject to change without notice.

Browse Databricks datasets

To browse these files from a Python, Scala, or R notebook, you can use Databricks Utilities (dbutils). The following code lists all of the available Databricks datasets; the R example uses the %fs magic command.

Python

display(dbutils.fs.ls('/databricks-datasets'))

Scala

display(dbutils.fs.ls("/databricks-datasets"))

R

%fs ls "/databricks-datasets"
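In Python, dbutils.fs.ls returns a list of FileInfo objects, so you can also iterate over the results, for example to print only the dataset names; a minimal sketch:

# Each FileInfo has path, name, and size attributes; print only the names.
for file_info in dbutils.fs.ls("/databricks-datasets"):
    print(file_info.name)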

Get information about Databricks datasets

To get more information about a Databricks dataset, you can use a local file API to print out the dataset README (if one is available) from a Python, Scala, or R notebook, as shown in the following code examples.

Python

f = open('/discover/databricks-datasets/README.md', 'r')
print(f.read())

Scala

scala.io.Source.fromFile("/discover/databricks-datasets/README.md").foreach {
  print
}

R

library(readr)

f = read_lines("/discover/databricks-datasets/README.md", skip = 0, n_max = -1L)
print(f)
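Because not every dataset includes a README, you can guard the read with an existence check in Python; a minimal sketch using the same path as the examples above:

import os

# Read the README through the local file API only if it exists.
readme_path = "/discover/databricks-datasets/README.md"
if os.path.exists(readme_path):
    with open(readme_path, "r") as f:
        print(f.read())
else:
    print(f"No README found at {readme_path}")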

Create a table based on a Databricks dataset

The following code examples demonstrate how to create a table based on a Databricks dataset, using SQL in the SQL editor or a SQL, Python, Scala, or R notebook:

SQL

CREATE TABLE default.people10m OPTIONS (PATH 'dbfs:/databricks-datasets/learning-spark-v2/people/people-10m.delta')

Python

spark.sql("CREATE TABLE default.people10m OPTIONS (PATH 'dbfs:/databricks-datasets/learning-spark-v2/people/people-10m.delta')")

Scala

spark.sql("CREATE TABLE default.people10m OPTIONS (PATH 'dbfs:/databricks-datasets/learning-spark-v2/people/people-10m.delta')")

R

library(SparkR)
sparkR.session()

sql("CREATE TABLE default.people10m OPTIONS (PATH 'dbfs:/databricks-datasets/learning-spark-v2/people/people-10m.delta')")
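If you want a DataFrame rather than a registered table, you can load the same Delta dataset directly in Python; a minimal sketch:

# Load the Delta dataset directly into a DataFrame instead of
# registering it as a table first.
df = spark.read.format("delta").load(
    "dbfs:/databricks-datasets/learning-spark-v2/people/people-10m.delta"
)
display(df.limit(10))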

Third-party sample datasets in CSV format

Databricks has built-in tools to quickly upload third-party sample datasets as comma-separated values (CSV) files into Databricks workspaces. Some popular third-party sample datasets available in CSV format, along with instructions for downloading each one as a CSV file, follow:

The Squirrel Census: On the Data webpage, click Park Data, Squirrel Data, or Stories.

OWID Dataset Collection: In the GitHub repository, click the datasets folder. Click the subfolder that contains the target dataset, and then click the dataset’s CSV file.

Data.gov CSV datasets: On the search results webpage, click the target search result, and next to the CSV icon, click Download.

Diamonds (requires a Kaggle account): On the dataset’s webpage, on the Data tab, next to diamonds.csv, click the Download icon.

NYC Taxi Trip Duration (requires a Kaggle account): On the dataset’s webpage, on the Data tab, next to sample_submission.zip, click the Download icon. To find the dataset’s CSV files, extract the contents of the downloaded ZIP file.

UFO Sightings (requires a data.world account): On the dataset’s webpage, next to nuforc_reports.csv, click the Download icon.

To use third-party sample datasets in your Databricks workspace, do the following:

  1. Follow the third party’s instructions to download the dataset as a CSV file to your local machine.

  2. Upload the CSV file from your local machine into your Databricks workspace.

  3. To work with the imported data, use Databricks SQL to query it, or use a notebook to load it as a DataFrame, as shown in the sketch after this list.
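For step 3, the following minimal Python sketch loads an uploaded CSV file as a DataFrame; the path shown is hypothetical, so substitute the path that your upload produced:

# Load an uploaded CSV into a DataFrame.
# The path below is hypothetical; use the path from your own upload.
df = (spark.read
    .option("header", "true")       # first row contains column names
    .option("inferSchema", "true")  # infer column types from the data
    .csv("/FileStore/tables/diamonds.csv"))
display(df.limit(10))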

Third-party sample datasets within libraries

Some third parties include sample datasets within libraries, such as Python Package Index (PyPI) packages or Comprehensive R Archive Network (CRAN) packages. For more information, see the library provider’s documentation.
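For example, if the scikit-learn PyPI package is installed on your cluster, you can load one of its bundled sample datasets and convert it to a Spark DataFrame; a minimal sketch:

from sklearn.datasets import load_iris

# Load the Iris dataset bundled with scikit-learn as a pandas DataFrame,
# then convert it to a Spark DataFrame for use in Databricks.
iris = load_iris(as_frame=True)
df = spark.createDataFrame(iris.frame)
display(df.limit(10))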