array_agg
Aggregate function: returns a list of objects with duplicates.
Syntax
Python
from pyspark.sql import functions as sf
sf.array_agg(col)
Parameters
| Parameter | Type | Description |
|---|---|---|
| col | Column or str | Target column to compute on. |
Returns
pyspark.sql.Column: list of objects with duplicates.
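array_agg is an aggregate function, so besides aggregating a whole DataFrame (as in the examples below) it can also be applied per group with groupBy(...).agg(...). The order of the elements in the resulting array is not guaranteed, which is why the examples wrap the call in sort_array. A minimal grouped-usage sketch, assuming a SparkSession named spark is available; the names df, k, v, and vs are illustrative:
Python
from pyspark.sql import functions as sf

# Illustrative data: two groups, "a" and "b".
df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 3)], ["k", "v"])

# Collect the values of "v" into an array per group; sort_array makes the
# displayed element order deterministic.
df.groupBy("k").agg(sf.sort_array(sf.array_agg("v")).alias("vs")).show()
# Expected rows: k="a" -> vs=[1, 2], k="b" -> vs=[3]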
Examples
Example 1: Using array_agg function on an int column
Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([[1],[1],[2]], ["c"])
df.agg(sf.sort_array(sf.array_agg('c')).alias('sorted_list')).show()
Output
+-----------+
|sorted_list|
+-----------+
|  [1, 1, 2]|
+-----------+
Example 2: Using array_agg function on a string column
Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([["apple"],["apple"],["banana"]], ["c"])
df.agg(sf.sort_array(sf.array_agg('c')).alias('sorted_list')).show(truncate=False)
Output
+----------------------+
|sorted_list           |
+----------------------+
|[apple, apple, banana]|
+----------------------+
Example 3: Using array_agg function on a column with null values
Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([[1],[None],[2]], ["c"])
df.agg(sf.sort_array(sf.array_agg('c')).alias('sorted_list')).show()
Output
+-----------+
|sorted_list|
+-----------+
|     [1, 2]|
+-----------+
The null value is ignored, so only the non-null values appear in the aggregated list.
Example 4: Using array_agg function on a column with different data types
Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([[1],["apple"],[2]], ["c"])
df.agg(sf.sort_array(sf.array_agg('c')).alias('sorted_list')).show()
Output
+-------------+
|  sorted_list|
+-------------+
|[1, 2, apple]|
+-------------+
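As a side note, array_agg behaves like collect_list: both aggregate the non-null values of a column into an array and keep duplicates. A minimal comparison sketch, under the same assumption of an available SparkSession named spark:
Python
from pyspark.sql import functions as sf

df = spark.createDataFrame([[1], [1], [2]], ["c"])

# Both aggregations collect the same values; sort_array is applied only so
# the displayed order is deterministic.
df.agg(
    sf.sort_array(sf.array_agg("c")).alias("via_array_agg"),
    sf.sort_array(sf.collect_list("c")).alias("via_collect_list"),
).show()
# Both columns should contain [1, 1, 2]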