regr_sxy
Aggregate function: returns REGR_COUNT(y, x) * COVAR_POP(y, x) for non-null pairs in a group, where y is the dependent variable and x is the independent variable.
For the corresponding Databricks SQL function, see regr_sxy aggregate function.
Syntax
Python
import pyspark.sql.functions as sf
sf.regr_sxy(y=<y>, x=<x>)
Parameters
| Parameter | Type | Description |
|---|---|---|
| `y` | `Column` or `str` | The dependent variable. |
| `x` | `Column` or `str` | The independent variable. |
Returns
pyspark.sql.Column: REGR_COUNT(y, x) * COVAR_POP(y, x) for non-null pairs in a group.
Examples
Example 1: All pairs are non-null.
Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, 1), (2, 2), (3, 3), (4, 4) AS tab(y, x)")
df.select(sf.regr_sxy("y", "x")).show()
Output
+--------------+
|regr_sxy(y, x)|
+--------------+
| 5.0|
+--------------+
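As a sanity check on the formula, here is a minimal pure-Python sketch (not the Spark implementation) of what `regr_sxy` computes: it drops pairs where either value is null, then returns the sum of products of deviations from the means, which equals `REGR_COUNT(y, x) * COVAR_POP(y, x)`.

```python
def regr_sxy_sketch(pairs):
    """Compute REGR_SXY over (y, x) pairs, ignoring pairs with a None value."""
    # Keep only pairs where both y and x are non-null, as regr_sxy does.
    kept = [(y, x) for y, x in pairs if y is not None and x is not None]
    n = len(kept)
    if n == 0:
        return None  # regr_sxy returns NULL when no non-null pairs remain
    mean_y = sum(y for y, _ in kept) / n
    mean_x = sum(x for _, x in kept) / n
    # Sum of products of deviations = n * COVAR_POP(y, x)
    return sum((y - mean_y) * (x - mean_x) for y, x in kept)

# Reproduces the result of Example 1 above:
print(regr_sxy_sketch([(1, 1), (2, 2), (3, 3), (4, 4)]))  # 5.0
```

The same helper reproduces the later examples as well, e.g. `regr_sxy_sketch([(1, 1), (2, None), (None, 3), (4, 4)])` returns `4.5`.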
Example 2: All pairs' x values are null.
Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, null) AS tab(y, x)")
df.select(sf.regr_sxy("y", "x")).show()
Output
+--------------+
|regr_sxy(y, x)|
+--------------+
| NULL|
+--------------+
Example 3: All pairs' y values are null.
Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (null, 1) AS tab(y, x)")
df.select(sf.regr_sxy("y", "x")).show()
Output
+--------------+
|regr_sxy(y, x)|
+--------------+
| NULL|
+--------------+
Example 4: Some pairs' x values are null.
Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, 1), (2, null), (3, 3), (4, 4) AS tab(y, x)")
df.select(sf.regr_sxy("y", "x")).show()
Output
+-----------------+
| regr_sxy(y, x)|
+-----------------+
|4.666666666666...|
+-----------------+
Example 5: Some pairs' x or y values are null.
Python
import pyspark.sql.functions as sf
df = spark.sql("SELECT * FROM VALUES (1, 1), (2, null), (null, 3), (4, 4) AS tab(y, x)")
df.select(sf.regr_sxy("y", "x")).show()
Output
+--------------+
|regr_sxy(y, x)|
+--------------+
| 4.5|
+--------------+