Transforming Complex Data Types - Scala

Transforming Complex Data Types in Spark SQL

In this notebook we're going to go through some data transformation examples using Spark SQL. Spark SQL supports many built-in transformation functions in the module org.apache.spark.sql.functions._, so we will start off by importing it.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
 
// Convenience function for turning JSON strings into DataFrames.
def jsonToDataFrame(json: String, schema: StructType = null): DataFrame = {
  // SparkSessions are available with Spark 2.0+
  val reader = spark.read
  Option(schema).foreach(reader.schema)
  reader.json(sc.parallelize(Array(json)))
}

Selecting from nested columns - Dots (".") can be used to access nested columns for structs and maps.

// Using a struct
val schema = new StructType().add("a", new StructType().add("b", IntegerType))
                          
val events = jsonToDataFrame("""
{
  "a": {
     "b": 1
  }
}
""", schema)
 
display(events.select("a.b"))
 
b
1

Showing all 1 rows.
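
The same dotted path works in Spark SQL. As a quick sketch (the temporary view name "events" is an assumption, not part of the original notebook), the DataFrame can be registered and queried directly:

// Register under a hypothetical view name and reach into the struct with dot notation.
events.createOrReplaceTempView("events")
display(spark.sql("SELECT a.b FROM events"))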

// Using a map
val schema = new StructType().add("a", MapType(StringType, IntegerType))
                          
val events = jsonToDataFrame("""
{
  "a": {
     "b": 1
  }
}
""", schema)
 
display(events.select("a.b"))
 
b
1

Showing all 1 rows.

Flattening structs - A star ("*") can be used to select all of the subfields in a struct.

val events = jsonToDataFrame("""
{
  "a": {
     "b": 1,
     "c": 2
  }
}
""")
 
display(events.select("a.*"))
 
b  c
1  2

Showing all 1 rows.

Nesting columns - The struct() function or just parentheses in SQL can be used to create a new struct.

val events = jsonToDataFrame("""
{
  "a": 1,
  "b": 2,
  "c": 3
}
""")
 
display(events.select(struct('a as 'y) as 'x))
 
x
{"y": 1}

Showing all 1 rows.
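
For readers less familiar with the symbol shorthand ('a as 'y), the same nesting can be written with explicit column functions; this sketch is behaviorally identical:

// struct() and col() come from org.apache.spark.sql.functions._, imported above.
display(events.select(struct(col("a").as("y")).as("x")))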

Nesting all columns - The star ("*") can also be used to include all columns in a nested struct.

val events = jsonToDataFrame("""
{
  "a": 1,
  "b": 2
}
""")
 
display(events.select(struct("*") as 'x))
 
x
{"a": 1, "b": 2}

Showing all 1 rows.

Selecting a single array or map element - getItem() or square brackets (i.e. [ ]) can be used to select a single element out of an array or a map.

val events = jsonToDataFrame("""
{
  "a": [1, 2]
}
""")
 
display(events.select('a.getItem(0) as 'x))
 
x
1

Showing all 1 rows.
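
The square-bracket form mentioned above is available in SQL. A small sketch, again assuming the DataFrame is registered under the hypothetical view name "events":

// a[0] selects the first element of the array column.
events.createOrReplaceTempView("events")
display(spark.sql("SELECT a[0] AS x FROM events"))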

// Using a map
val schema = new StructType().add("a", MapType(StringType, IntegerType))
 
val events = jsonToDataFrame("""
{
  "a": {
    "b": 1
  }
}
""", schema)
 
display(events.select('a.getItem("b") as 'x))
 
x
1

Showing all 1 rows.
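
Column's apply method gives a bracket-like shorthand in Scala; a sketch that should behave the same as getItem("b"):

// 'a("b") resolves through Column.apply and extracts the value stored under key "b".
display(events.select('a("b") as 'x))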

Creating a row for each array or map element - explode() can be used to create a new row for each element in an array or each key-value pair in a map. This is similar to LATERAL VIEW EXPLODE in HiveQL.
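
As a minimal sketch reusing the jsonToDataFrame helper defined above, explode() over an array column produces one output row per element:

val events = jsonToDataFrame("""
{
  "a": [1, 2]
}
""")
 
// Each element of the array "a" becomes its own row in column x.
display(events.select(explode('a) as 'x))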