Avro Files

Apache Avro (TM) is a data serialization system. Avro provides:

  • Rich data structures.
  • A compact, fast, binary data format.
  • A container file, to store persistent data.
  • Remote procedure call (RPC).
  • Simple integration with dynamic languages. Code generation is not required to read or write data files nor to use or implement RPC protocols. Code generation is an optional optimization, only worth implementing for statically typed languages.

Also see Read and Write Streaming Avro Data with DataFrames.

Installation

The installation steps depend on your cluster's Databricks Runtime version.

Features

The Avro data source supports reading and writing Avro data from Spark SQL:

Automatic schema conversion
Supports most conversions between Spark SQL and Avro records, making Avro a first-class citizen in Spark.
Partitioning
This library lets you read and write partitioned data without any extra configuration. Pass the columns you want to partition on, just as you would for Parquet.
Compression
You can specify the type of compression to use when writing Avro out to disk. The supported types are uncompressed, snappy, and deflate. You can also specify the deflate level.
Specify record names
You can specify the record name and namespace to use by passing a map of parameters with recordName and recordNamespace.

Configuration

You can change the behavior of an Avro data source using various configuration parameters.

To ignore files without the .avro extension when reading, you can set the parameter avro.mapred.ignore.inputs.without.extension in the Hadoop configuration. The default is false.

spark
  .sparkContext
  .hadoopConfiguration
  .set("avro.mapred.ignore.inputs.without.extension", "true")

To configure compression when writing, you can set the following Spark properties:

  • Compression codec: spark.sql.avro.compression.codec. Supported codecs are snappy and deflate. The default codec is snappy.
  • If the compression codec is deflate, you can set the compression level with: spark.sql.avro.deflate.level. The default level is -1.

You can set these properties in your cluster configuration or at runtime using spark.conf.set(). For example:

spark.conf.set("spark.sql.avro.compression.codec", "deflate")
spark.conf.set("spark.sql.avro.deflate.level", "5")

Supported types for Avro -> Spark SQL conversion

This library supports reading all Avro types. It uses the following mapping from Avro types to Spark SQL types:

Avro type   Spark SQL type
boolean     BooleanType
int         IntegerType
long        LongType
float       FloatType
double      DoubleType
bytes       BinaryType
string      StringType
record      StructType
enum        StringType
array       ArrayType
map         MapType
fixed       BinaryType
union       See below

In addition to the types listed above, it supports reading union types. The following three kinds of union types map directly to Spark SQL types:

  • union(int, long) maps to LongType.
  • union(float, double) maps to DoubleType.
  • union(something, null), where something is any supported Avro type. This maps to the same Spark SQL type as that of something, with nullable set to true.

All other union types are considered complex. They map to StructType, where the field names are member0, member1, and so on, matching the members of the union. This is consistent with the behavior when converting between Avro and Parquet.
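
For illustration, here is a minimal sketch (the field name is an assumption) of the Spark SQL schema produced for a hypothetical Avro field declared as a union of int and string:

import org.apache.spark.sql.types._

// Hypothetical Avro field: {"name": "value", "type": ["int", "string"]}
// A union that does not match one of the three cases above is read as a
// struct whose fields are named member0, member1, ... in union order.
val expected = StructType(Seq(
  StructField("value", StructType(Seq(
    StructField("member0", IntegerType),
    StructField("member1", StringType)
  )))
))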

It also supports reading the following Avro logical types:

Avro logical type   Avro type   Spark SQL type
date                int         DateType
timestamp-millis    long        TimestampType
timestamp-micros    long        TimestampType
decimal             fixed       DecimalType
decimal             bytes       DecimalType
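
For example, here is a minimal sketch (field names, precision, and scale are assumptions) of the Spark SQL types produced for two hypothetical fields that use the date and decimal logical types:

import org.apache.spark.sql.types._

// Hypothetical Avro fields:
//   {"name": "event_date", "type": {"type": "int", "logicalType": "date"}}
//   {"name": "amount", "type": {"type": "bytes", "logicalType": "decimal", "precision": 10, "scale": 2}}
// Per the table above, they are read as:
val expected = StructType(Seq(
  StructField("event_date", DateType),
  StructField("amount", DecimalType(10, 2))
))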

It ignores docs, aliases, and other properties present in the Avro file.

Supported types for Spark SQL -> Avro conversion

This library supports writing all Spark SQL types to Avro. For most types, the mapping from Spark types to Avro types is straightforward (for example, IntegerType is converted to int); the few special cases are listed below:

Spark SQL type   Avro type   Avro logical type
ByteType         int
ShortType        int
BinaryType       bytes
DecimalType      fixed       decimal
TimestampType    long        timestamp-micros
DateType         int         date

You can also specify the whole output Avro schema with the option avroSchema, so that Spark SQL types can be converted into other Avro types. The following conversions are not applied by default and require a user-specified Avro schema:

Spark SQL type   Avro type   Avro logical type
ByteType         fixed
StringType       enum
DecimalType      bytes       decimal
TimestampType    long        timestamp-millis
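
For example, here is a minimal sketch (the record name, column name, enum symbols, and output path are assumptions) that supplies the full output schema through avroSchema in order to write a StringType column as an Avro enum:

// The enum in the supplied schema forces the "status" column to be written
// as an Avro enum instead of an Avro string; the string values must be
// among the declared symbols.
val avroSchema = """{
  "type": "record",
  "name": "Episode",
  "namespace": "org.foo",
  "fields": [
    {"name": "status",
     "type": {"type": "enum", "name": "Status", "symbols": ["NEW", "DONE"]}}
  ]
}"""

val df = spark.createDataFrame(Seq(Tuple1("NEW"), Tuple1("DONE"))).toDF("status")

df.write
  .format("avro")
  .option("avroSchema", avroSchema)
  .save("/tmp/output")

On Databricks Runtime 4.3 and below, substitute .format("com.databricks.spark.avro"), as in the examples below.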

Examples

The recommended way to read or write Avro data from Spark SQL is to use the Spark DataFrame APIs, which are available in Scala, Python, and R. These examples use the episodes.avro sample file.

Scala API

Databricks Runtime 5.0 and above
// The Avro records get converted to Spark types, filtered, and
// then written back out as Avro records

val df = spark.read.format("avro").load("/tmp/episodes.avro")
df.filter("doctor > 5").write.format("avro").save("/tmp/output")
Databricks Runtime 4.3 and below
// The Avro records get converted to Spark types, filtered, and
// then written back out as Avro records

val df = spark.read.format("com.databricks.spark.avro").load("/tmp/episodes.avro")
df.filter("doctor > 5").write.format("com.databricks.spark.avro").save("/tmp/output")

Note

For all following examples, use .format("com.databricks.spark.avro") when running on Databricks Runtime 4.3 and below.

You can specify a custom Avro schema:

import java.io.File
import org.apache.avro.Schema

val schema = new Schema.Parser().parse(new File("user.avsc"))

spark
  .read
  .format("avro")
  .option("avroSchema", schema.toString)
  .load("/tmp/episodes.avro")
  .show()

You can also specify Avro compression options:

// configuration to use deflate compression
spark.conf.set("spark.sql.avro.compression.codec", "deflate")
spark.conf.set("spark.sql.avro.deflate.level", "5")

val df = spark.read.format("avro").load("/tmp/episodes.avro")

// writes out compressed Avro records
df.write.format("avro").save("/tmp/output")

You can write partitioned Avro records like this:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local").getOrCreate()

val df = spark.createDataFrame(
  Seq(
    (2012, 8, "Batman", 9.8),
    (2012, 8, "Hero", 8.7),
    (2012, 7, "Robot", 5.5),
    (2011, 7, "Git", 2.0))
  ).toDF("year", "month", "title", "rating")

df.write.format("avro").partitionBy("year", "month").save("/tmp/output")

You can specify the record name and namespace like this:

val df = spark.read.format("avro").load("/tmp/episodes.avro")

val name = "AvroTest"
val namespace = "org.foo"
val parameters = Map("recordName" -> name, "recordNamespace" -> namespace)

df.write.options(parameters).format("avro").save("/tmp/output")

Python API

Databricks Runtime 5.0 and above
# Creates a DataFrame from the specified Avro file
df = spark.read.format("avro").load("/tmp/episodes.avro")

#  Saves the subset of the Avro records read in
subset = df.where("doctor > 5")
subset.write.format("avro").save("/tmp/output")
Databricks Runtime 4.3 and below
# Creates a DataFrame from the specified Avro file
df = spark.read.format("com.databricks.spark.avro").load("/tmp/episodes.avro")

#  Saves the subset of the Avro records read in
subset = df.where("doctor > 5")
subset.write.format("com.databricks.spark.avro").save("/tmp/output")

SQL API

In SQL you can query Avro data by registering the data file as a temporary table.

CREATE TEMPORARY TABLE episodes
USING avro
OPTIONS (path "/tmp/episodes.avro")

SELECT * FROM episodes

Example Notebook

The following notebook demonstrates how to read and write Avro files.