count_distinct

Returns a new Column containing the distinct count of col or cols.

Syntax

Python
from pyspark.sql import functions as sf

sf.count_distinct(col, *cols)

Parameters

Parameter  Type                               Description
---------  ---------------------------------  -----------------------------
col        pyspark.sql.Column or column name  First column to compute on.
cols       pyspark.sql.Column or column name  Other columns to compute on.

Returns

pyspark.sql.Column: the number of distinct values across the given column(s).

Examples

Example 1: Counting distinct values of a single column

Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([(1,), (1,), (3,)], ["value"])
df.select(sf.count_distinct(df.value)).show()
Output
+---------------------+
|count(DISTINCT value)|
+---------------------+
|                    2|
+---------------------+

Example 2: Counting distinct values of multiple columns

Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([(1, 1), (1, 2)], ["value1", "value2"])
df.select(sf.count_distinct(df.value1, df.value2)).show()
Output
+------------------------------+
|count(DISTINCT value1, value2)|
+------------------------------+
|                             2|
+------------------------------+

Example 3: Counting distinct values with column names as strings

Python
from pyspark.sql import functions as sf
df = spark.createDataFrame([(1, 1), (1, 2)], ["value1", "value2"])
df.select(sf.count_distinct("value1", "value2")).show()
Output
+------------------------------+
|count(DISTINCT value1, value2)|
+------------------------------+
|                             2|
+------------------------------+