Transforming Complex Data Types - SQL


Transforming Complex Data Types in Spark SQL

In this notebook we walk through some data transformation examples using Spark SQL, which supports many built-in transformation functions natively.

%python
 
from pyspark.sql.functions import *
from pyspark.sql.types import *
 
# Convenience function for parsing a JSON string into a DataFrame and
# registering it as the temp view "events", which the SQL cells below query.
def jsonToDataFrame(json, schema=None):
  # SparkSessions are available with Spark 2.0+
  reader = spark.read
  if schema:
    reader.schema(schema)
  reader.json(sc.parallelize([json])).createOrReplaceTempView("events")

Selecting from nested columns - Dots (".") can be used to access nested columns for structs and maps.

%python
 
# Using a struct
schema = StructType().add("a", StructType().add("b", IntegerType()))
                          
jsonToDataFrame("""
{
  "a": {
     "b": 1
  }
}
""", schema)
select a.b from events
 
b
1

%python
 
# Using a map
schema = StructType().add("a", MapType(StringType(), IntegerType()))
                          
jsonToDataFrame("""
{
  "a": {
     "b": 1
  }
}
""", schema)
select a.b from events
 
b
1
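For maps, bracket syntax is an alternative to dot notation. A sketch against the same `events` view, assuming the map schema defined above:

```sql
-- Equivalent lookup using bracket syntax; a missing key yields NULL
-- rather than an error.
select a['b'] as b from events
```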

Flattening structs - A star ("*") can be used to select all of the subfields in a struct.

%python
 
jsonToDataFrame("""
{
  "a": {
     "b": 1,
     "c": 2
  }
}
""")
select a.* from events
 
b  c
1  2

Nesting columns - The struct() function (or named_struct(), which lets you choose the field names) can be used to create a new struct.

%python
 
jsonToDataFrame("""
{
  "a": 1,
  "b": 2,
  "c": 3
}
""")
select named_struct("y", a) as x from events
 
x
{"y": 1}
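named_struct() takes alternating field names and values, so several columns can be packed into one struct at once. A sketch against the same `events` view:

```sql
-- Pack columns a and b into a single struct column x,
-- naming the fields y and z.
select named_struct('y', a, 'z', b) as x from events
```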

Nesting all columns - The star ("*") can also be used to include all columns in a nested struct.

%python
 
jsonToDataFrame("""
{
  "a": 1,
  "b": 2
}
""")
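Following the pattern of the earlier examples, the query for this cell would pass the star to struct(). A sketch against the same `events` view:

```sql
-- Pack every top-level column of events into one struct column x;
-- field names are taken from the column names.
select struct(*) as x from events
```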