
regr_avgx

Aggregate function: returns the average of the independent variable (x) over the pairs in a group where both values are non-null. y is the dependent variable and x is the independent variable.

For the corresponding Databricks SQL function, see regr_avgx aggregate function.

Syntax

Python
import pyspark.sql.functions as sf

sf.regr_avgx(y=<y>, x=<x>)

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| y | pyspark.sql.Column or str | The dependent variable. |
| x | pyspark.sql.Column or str | The independent variable. |

Returns

pyspark.sql.Column: the average of the independent variable for non-null pairs in a group.
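The null-handling rule can be pictured in plain Python: a pair is kept only when both y and x are non-null, and the x values of the surviving pairs are averaged. This is a sketch of the semantics over Python lists, not Spark code; the function name below is hypothetical.

```python
# Plain-Python sketch of regr_avgx semantics (hypothetical helper, not Spark code):
# a (y, x) pair contributes only when BOTH values are non-null.
def regr_avgx_sketch(pairs):
    xs = [x for y, x in pairs if y is not None and x is not None]
    # No surviving pairs -> NULL in Spark, None here.
    return sum(xs) / len(xs) if xs else None

# Mirrors Example 1: all pairs non-null, average of x = 11 / 4.
print(regr_avgx_sketch([(1, 2), (2, 2), (2, 3), (2, 4)]))  # 2.75

# Mirrors Example 5: (2, None) and (None, 3) are both dropped.
print(regr_avgx_sketch([(1, 2), (2, None), (None, 3), (2, 4)]))  # 3.0
```

Note that a null on either side of the pair drops it, which is why regr_avgx can differ from a plain avg(x) whenever y contains nulls.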

Examples

Example 1: All pairs are non-null.

Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x)")
df.select(sf.regr_avgx("y", "x"), sf.avg("x")).show()
Output
+---------------+------+
|regr_avgx(y, x)|avg(x)|
+---------------+------+
| 2.75| 2.75|
+---------------+------+

Example 2: All pairs' x values are null.

Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, null) AS tab(y, x)")
df.select(sf.regr_avgx("y", "x"), sf.avg("x")).show()
Output
+---------------+------+
|regr_avgx(y, x)|avg(x)|
+---------------+------+
| NULL| NULL|
+---------------+------+

Example 3: All pairs' y values are null.

Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (null, 1) AS tab(y, x)")
df.select(sf.regr_avgx("y", "x"), sf.avg("x")).show()
Output
+---------------+------+
|regr_avgx(y, x)|avg(x)|
+---------------+------+
| NULL| 1.0|
+---------------+------+

Example 4: Some pairs' x values are null.

Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, 2), (2, null), (2, 3), (2, 4) AS tab(y, x)")
df.select(sf.regr_avgx("y", "x"), sf.avg("x")).show()
Output
+---------------+------+
|regr_avgx(y, x)|avg(x)|
+---------------+------+
| 3.0| 3.0|
+---------------+------+

Example 5: Some pairs' x or y values are null. Both results happen to be 3.0, but for different reasons: regr_avgx averages x over the pairs where both values are non-null ({2, 4}), while avg(x) averages every non-null x value ({2, 3, 4}).

Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, 2), (2, null), (null, 3), (2, 4) AS tab(y, x)")
df.select(sf.regr_avgx("y", "x"), sf.avg("x")).show()
Output
+---------------+------+
|regr_avgx(y, x)|avg(x)|
+---------------+------+
| 3.0| 3.0|
+---------------+------+
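As an aggregate function, regr_avgx is typically applied per group (for example via groupBy). A plain-Python sketch of that grouped behavior, with a hypothetical helper name and (key, y, x) rows in place of a DataFrame:

```python
from collections import defaultdict

# Hypothetical helper sketching per-group regr_avgx over (key, y, x) rows:
# within each group, drop a pair if either y or x is None, then average x.
def regr_avgx_by_key(rows):
    groups = defaultdict(list)
    for key, y, x in rows:
        if y is not None and x is not None:
            groups[key].append(x)
    # A group whose pairs are all dropped is simply absent here;
    # Spark would instead return NULL for that group.
    return {key: sum(xs) / len(xs) for key, xs in groups.items()}

rows = [("a", 1, 2), ("a", 2, 4), ("b", 2, None), ("b", 1, 3)]
print(regr_avgx_by_key(rows))  # {'a': 3.0, 'b': 3.0}
```

Group "a" averages x over {2, 4}; in group "b" the (2, None) pair is dropped, leaving only x = 3.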