
to_varchar

Converts col to a string based on the format. Throws an exception if the conversion fails. Supports Spark Connect.

The format can consist of the following case-insensitive characters (a combined sketch follows the list):

  • '0' or '9': Specifies an expected digit between 0 and 9. A sequence of 0 or 9 in the format string matches a sequence of digits in the input value, generating a result string of the same length as the corresponding sequence in the format string. If the 0/9 sequence comprises more digits than the matching part of the decimal value, starts with 0, and sits before the decimal point, the result string is left-padded with zeros; otherwise, it is padded with spaces.
  • '.' or 'D': Specifies the position of the decimal point (optional, only allowed once).
  • ',' or 'G': Specifies the position of the grouping (thousands) separator (,). There must be a 0 or 9 to the left and right of each grouping separator.
  • '$': Specifies the location of the $ currency sign. This character may only be specified once.
  • 'S' or 'MI': Specifies the position of a '-' or '+' sign (optional, only allowed once at the beginning or end of the format string). Note that 'S' prints '+' for positive values but 'MI' prints a space.
  • 'PR': Only allowed at the end of the format string; specifies that the result string will be wrapped by angle brackets if the input value is negative.
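
For instance, here is a minimal sketch combining several of these characters. It assumes an active spark session; the values and format strings are illustrative, and the expected results in the comments follow the character descriptions above:

Python
from pyspark.databricks.sql import functions as dbf
from pyspark.sql.functions import lit

df = spark.createDataFrame([(-12454.8,)], ['e'])

# '9' digit placeholders with a 'G' grouping separator, a 'D' decimal
# point, and a trailing 'S' sign; should produce [Row(r='12,454.8-')]
df.select(dbf.to_varchar(df.e, lit('99G999D9S')).alias('r')).collect()

# 'PR' wraps the negative result in angle brackets instead;
# should produce [Row(r='<12,454.8>')]
df.select(dbf.to_varchar(df.e, lit('99G999D9PR')).alias('r')).collect()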

If col is a datetime, format must be a valid datetime pattern; see Patterns.
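
For example, a minimal sketch of datetime formatting, assuming an active spark session:

Python
import datetime
from pyspark.databricks.sql import functions as dbf
from pyspark.sql.functions import lit

df = spark.createDataFrame([(datetime.datetime(2016, 4, 8),)], ['dt'])
# 'yyyy-MM-dd' keeps only the date part; should produce [Row(r='2016-04-08')]
df.select(dbf.to_varchar(df.dt, lit('yyyy-MM-dd')).alias('r')).collect()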

If col is a binary, it is converted to a string in one of the following formats (a sketch follows the list):

  • 'base64': a base 64 string.
  • 'hex': a string in the hexadecimal format.
  • 'utf-8': the input binary is decoded to a UTF-8 string.
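
A minimal sketch of the binary conversions, assuming an active spark session; the expected results in the comments are illustrative:

Python
from pyspark.databricks.sql import functions as dbf
from pyspark.sql.functions import lit

df = spark.createDataFrame([(b'abc',)], ['b'])

# 'utf-8' decodes the bytes directly; should produce [Row(r='abc')]
df.select(dbf.to_varchar(df.b, lit('utf-8')).alias('r')).collect()

# 'hex' should produce [Row(r='616263')]; 'base64' should produce [Row(r='YWJj')]
df.select(dbf.to_varchar(df.b, lit('hex')).alias('r')).collect()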

For the corresponding Databricks SQL function, see to_varchar function.

Syntax

Python
from pyspark.databricks.sql import functions as dbf

dbf.to_varchar(col=<col>, format=<format>)

Parameters

  • col (pyspark.sql.Column or str): Input column or column name.
  • format (pyspark.sql.Column or str): Format to use for the conversion.

Examples

Python
from pyspark.databricks.sql import functions as dbf
from pyspark.sql.functions import lit
df = spark.createDataFrame([(78.12,)], ['e'])
df.select(dbf.to_varchar(df.e, lit("$99.99")).alias('r')).collect()
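# Expected result: [Row(r='$78.12')]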