try_zstd_decompress

Returns the value of `input` decompressed using Zstandard. Supports data compressed in both single-pass mode and streaming mode. On decompression failure, it returns NULL.

Syntax

Python
from pyspark.databricks.sql import functions as dbf

dbf.try_zstd_decompress(input=<input>)

Parameters

Parameter | Type                        | Description
--------- | --------------------------- | -------------------------------
input     | pyspark.sql.Column or str   | The binary value to decompress.

Returns

pyspark.sql.Column: A new column that contains the decompressed value.

Examples

Example 1: Decompress data using Zstandard

Python
from pyspark.databricks.sql import functions as dbf

# Sample base64-encoded, Zstandard-compressed data.
df = spark.createDataFrame([("KLUv/SCCpQAAaEFwYWNoZSBTcGFyayABABLS+QU=",)], ["input"])

# Decode the base64 text to binary, decompress it, and cast the result to a string.
df.select(dbf.try_zstd_decompress(dbf.unbase64(df.input)).cast("string").alias("result")).show(truncate=False)
Output
+----------------------------------------------------------------------------------------------------------------------------------+
|result |
+----------------------------------------------------------------------------------------------------------------------------------+
|Apache Spark Apache Spark Apache Spark Apache Spark Apache Spark Apache Spark Apache Spark Apache Spark Apache Spark Apache Spark |
+----------------------------------------------------------------------------------------------------------------------------------+
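For reference, the sample value above is a complete Zstandard frame once the base64 layer is removed: every Zstandard frame begins with the 4-byte magic number `0x28 0xB5 0x2F 0xFD` (the little-endian encoding of `0xFD2FB528`). A minimal stdlib-only sketch that verifies this for the sample input, using nothing beyond Python's `base64` module:

```python
import base64

# The same base64 sample used in Example 1 above.
sample = "KLUv/SCCpQAAaEFwYWNoZSBTcGFyayABABLS+QU="

raw = base64.b64decode(sample)

# Every Zstandard frame starts with the 4-byte magic number
# 0x28 0xB5 0x2F 0xFD (the little-endian layout of 0xFD2FB528).
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"
print(raw[:4] == ZSTD_MAGIC)  # True
```

Input that fails this check (such as the string in Example 2 below) is not a valid frame, which is why decompression yields NULL.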

Example 2: Decompress invalid input

Python
from pyspark.databricks.sql import functions as dbf

# This value is not valid Zstandard-compressed data, so decompression fails.
df = spark.createDataFrame([("invalid input",)], ["input"])

# The failed decompression produces NULL instead of raising an error.
df.select(dbf.try_zstd_decompress(dbf.unbase64(df.input)).cast("string").alias("result")).show(truncate=False)
Output
+------+
|result|
+------+
|NULL |
+------+
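The NULL-on-failure behavior follows the common "try" pattern: attempt the operation and substitute a null rather than raising. A minimal pure-Python sketch of the same semantics, using zlib as a stand-in codec (older Python standard libraries do not ship Zstandard; `try_decompress` is a hypothetical helper, not part of any API):

```python
import zlib
from typing import Optional

def try_decompress(data: bytes) -> Optional[bytes]:
    """Return the decompressed bytes, or None if decompression fails.

    Mirrors try_zstd_decompress's NULL-on-failure semantics, but uses
    zlib as a stand-in codec. Hypothetical helper, for illustration only.
    """
    try:
        return zlib.decompress(data)
    except zlib.error:
        return None

print(try_decompress(zlib.compress(b"Apache Spark")))  # b'Apache Spark'
print(try_decompress(b"invalid input"))                # None
```

As in Example 2, malformed input maps to a null result rather than an exception, which keeps batch pipelines running when some rows carry corrupt data.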