Spark API options reference
This page lists available input and output options for Spark APIs that read and write data.
DataFrameReader options
Use these options with DataFrameReader.option(), DataFrameReader.options(), read_files, COPY INTO, and Auto Loader to control how Databricks reads data files.
Example
The following example sets multiLine to True for reading JSON files:
- Python
- Scala
- SQL
df = spark.read.format("json").option("multiLine", True).load("/path/to/data")
val df = spark.read.format("json").option("multiLine", "true").load("/path/to/data")
SELECT * FROM read_files("/path/to/data", format => "json", multiLine => true)
Common
The following options apply to all file formats.
Key | Default | Description |
|---|---|---|
|
| Whether to ignore corrupt files. If true, the Spark jobs will continue to run when encountering corrupted files and the contents that have been read will still be returned. For |
|
| Whether to ignore missing files. If true, the Spark jobs continue to run when encountering missing files and the contents are still returned. Available in Databricks Runtime 11.3 LTS and above. |
| None | An optional timestamp as a filter to only ingest files that have a modification timestamp after the provided timestamp. |
| None | An optional timestamp as a filter to only ingest files that have a modification timestamp before the provided timestamp. |
| None | A potential glob pattern to provide for choosing files. Equivalent to |
|
| When |
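The common options above can be combined in a single read. The following is a minimal sketch using well-known file source options named in the descriptions above (hypothetical path and timestamp; assumes an active SparkSession bound to `spark`):

```python
# Sketch: combine common file source options on a batch read.
df = (
    spark.read.format("parquet")
    .option("ignoreCorruptFiles", "true")             # keep running when a file is corrupt
    .option("modifiedAfter", "2024-01-01T00:00:00")   # only files modified after this timestamp
    .option("pathGlobFilter", "*.parquet")            # glob pattern for choosing files
    .load("/path/to/data")
)
```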
Avro
Key | Default | Description |
|---|---|---|
| None | Optional schema provided by a user in Avro format. When reading Avro, this option can be set to an evolved schema that is compatible but different from the actual Avro schema. The deserialization schema is consistent with the evolved schema. For example, if you set an evolved schema containing one additional column with a default value, the read result contains the new column too. |
|
| How to handle schema evolution when using a schema registry. Valid values: |
|
| Controls the rebasing of the DATE and TIMESTAMP values between Julian and Proleptic Gregorian calendars. Valid values: |
|
| Whether to use stable field names for Avro Union types. When enabled, union type field names are derived from their type names in lowercase (for example, |
|
| Whether to infer the schema across multiple files and to merge the schema of each file. |
|
| Parser mode for handling corrupt records. Valid values: |
|
| Specifies the case sensitivity behavior when |
| None | The maximum recursion depth for recursive Avro fields. Set to |
| None | Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) in a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. |
|
| The prefix to use for stable union type field names when |
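As described above, the Avro reader can deserialize against an evolved schema that is compatible with, but different from, the writer schema. A minimal sketch (hypothetical path and schema; assumes an active SparkSession bound to `spark`):

```python
# An evolved reader schema that adds a column with a default value.
evolved_schema = """{
  "type": "record", "name": "user",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "plan", "type": "string", "default": "free"}
  ]
}"""

df = (
    spark.read.format("avro")
    .option("avroSchema", evolved_schema)  # compatible, evolved reader schema
    .load("/path/to/avro")
)
```

The read result includes the new `plan` column, filled with the default value for records written before the column existed.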
CSV
Key | Default | Description |
|---|---|---|
| None | The path to store files for recording the information about bad CSV records. |
|
| The character used to escape the character used for escaping quotes. |
|
| Supported for Auto Loader. Not supported for |
|
| Defines the character that represents a line comment when found in the beginning of a line of text. Use |
|
| The format for parsing date strings. |
| Empty string | String representation of an empty value. |
|
| Whether to fall back to the legacy date and timestamp parsing behavior when a value cannot be parsed with the specified format. When |
|
| The name of the encoding of the CSV files. See |
|
| Whether to forcibly apply the specified or inferred schema to the CSV files. If the option is enabled, headers of CSV files are ignored. This option is ignored by default when using Auto Loader to rescue data and allow schema evolution. |
|
| The escape character to use when parsing the data. |
|
| The expected filename extension. Files without this extension are filtered out during reads. |
|
| Whether to fail when the CSV record contains columns not present in the schema. When |
|
| Whether to fail when a field value cannot be parsed as the declared schema type without widening. When |
|
| Whether the CSV files contain a header. Auto Loader assumes that files have headers when inferring the schema. |
|
| Whether to ignore leading whitespaces for each parsed value. |
|
| Whether to ignore trailing whitespaces for each parsed value. |
|
| Whether to infer the data types of the parsed CSV records or to assume all columns are of |
|
| The buffer size in bytes for the CSV parser. Useful for tuning memory usage when parsing large CSV files. Valid values: positive integers. |
| None, which covers | A string between two consecutive CSV records. |
|
| A |
|
| Maximum number of characters expected from a value to parse. Can be used to avoid memory errors. Defaults to |
|
| The hard limit of how many columns a record can have. Valid values: positive integers. |
|
| Whether to infer the schema across multiple files and to merge the schema of each file. Enabled by default for Auto Loader when inferring the schema. |
|
| Parser mode around handling malformed records. Valid values: |
|
| Whether the CSV records span multiple lines. |
|
| The string representation of a not-a-number value when parsing |
|
| The string representation of negative infinity when parsing |
| Empty string | String representation of a null value. |
|
| While reading files, whether to align columns declared in the header with the schema case sensitively. This is |
|
| The string representation of positive infinity when parsing |
|
| Attempts to infer strings as dates instead of timestamp when possible. You must also use schema inference, either by enabling |
|
| The character used for escaping values where the field delimiter is part of the value. |
|
| Specifies the case sensitivity behavior when |
| None | Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) in a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. |
|
| The separator string between columns. |
| None | When set to a column name, reads the entire CSV record into a single |
|
| The number of rows from the beginning of the CSV file that should be ignored (including commented and empty rows). If |
|
| The format for parsing |
|
| The format for parsing timestamp strings. |
|
| The format for parsing timestamp without timezone ( |
| None | The |
|
| The strategy for handling unescaped quotes. |
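Several of the CSV options above are commonly combined in a single read. A minimal sketch (hypothetical path and delimiter; assumes an active SparkSession bound to `spark`):

```python
# Sketch: header row, type inference, custom delimiter, multiline records.
df = (
    spark.read.format("csv")
    .option("header", "true")        # first line contains column names
    .option("inferSchema", "true")   # infer column types instead of all-string
    .option("sep", ";")              # non-default field delimiter
    .option("multiLine", "true")     # quoted records may span multiple lines
    .load("/path/to/csv")
)
```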
Excel
Key | Default | Description |
|---|---|---|
| None | The cell range to read in Excel syntax. If omitted, reads all valid cells from the first sheet. Use |
|
| Number of initial rows to use as column name headers. When |
|
| The operation to perform on the Excel workbook. Valid values: |
|
| Custom format string for timestamp-without-timezone values stored as strings in Excel. Custom date formats follow the formats at Datetime patterns. |
|
| Custom format string for string values read as |
JSON
Key | Default | Description |
|---|---|---|
|
| Whether to allow backslashes to escape any character that succeeds it. If not enabled, only characters that are explicitly listed by the JSON specification can be escaped. |
|
| Whether to allow the use of Java, C, and C++ style comments ( |
|
| Whether to allow the set of not-a-number ( |
|
| Whether to allow integral numbers to start with additional (ignorable) zeroes (for example, |
|
| Whether to allow use of single quotes (apostrophe, character |
|
| Whether to allow JSON strings to contain unescaped control characters (ASCII characters with value less than 32, including tab and line feed characters) or not. |
|
| Whether to allow use of unquoted field names, which are allowed by JavaScript, but not by the JSON specification. |
| None | The encoding used for Variant values in the source JSON. Set to |
| None | The path to store files for recording the information about bad JSON records. |
|
| The column for storing records that are malformed and cannot be parsed. If the |
|
| The format for parsing date strings. |
|
| Whether to ignore columns of all null values or empty arrays and structs during schema inference. |
|
| The name of the encoding of the JSON files. See |
|
| Whether to try and infer timestamp strings as a |
| None, which covers | A string between two consecutive JSON records. |
|
| A |
|
| The maximum allowed nesting depth for JSON objects and arrays. Increase this value for deeply nested documents. Valid values: positive integers. |
|
| The maximum length of number tokens in the JSON input. Increase this value for JSON with large numeric literals. Valid values: positive integers. |
| unlimited | The maximum length of string values in the JSON input. Set to limit memory usage when parsing JSON with large strings. Valid values: positive integers. |
|
| Parser mode around handling malformed records. Valid values: |
|
| Whether the JSON records span multiple lines. |
|
| Attempts to infer strings as |
|
| Whether to infer primitive types like numbers and booleans as |
|
| Specifies the case sensitivity behavior when |
| None | Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?.
|
| None | Whether to ingest the entire JSON document, parsed into a single Variant column with the specified string as the column's name. If not set, the JSON fields are ingested into their own columns. Valid values: any string. |
|
| The format for parsing timestamp strings. |
|
| The format for parsing timestamp without timezone ( |
| None | The |
|
| Whether to treat type upgrade exceptions (for example, when a value can't be widened to the declared column type) as bad records rather than throwing an exception. |
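The JSON options above compose the same way as the multiLine example at the top of this page. A minimal sketch (hypothetical path; assumes an active SparkSession bound to `spark`):

```python
# Sketch: one JSON document per file, drop all-null columns, custom date format.
df = (
    spark.read.format("json")
    .option("multiLine", "true")           # records may span multiple lines
    .option("dropFieldIfAllNull", "true")  # ignore all-null columns during inference
    .option("dateFormat", "yyyy-MM-dd")    # format for parsing date strings
    .load("/path/to/json")
)
```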
ORC
Key | Default | Description |
|---|---|---|
|
| Whether to infer the schema across multiple files and to merge the schema of each file. |
Parquet
Key | Default | Description |
|---|---|---|
|
| Controls the rebasing of the DATE and TIMESTAMP values between Julian and Proleptic Gregorian calendars. Valid values: |
|
| Controls the rebasing of the INT96 timestamp values between Julian and Proleptic Gregorian calendars. Valid values: |
|
| Whether to infer the schema across multiple files and to merge the schema of each file. |
|
| Specifies the case sensitivity behavior when |
| None | Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) in a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. |
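A minimal sketch combining the Parquet options above (hypothetical path; assumes an active SparkSession bound to `spark`):

```python
# Sketch: merge per-file schemas and disable legacy calendar rebasing.
df = (
    spark.read.format("parquet")
    .option("mergeSchema", "true")              # merge the schema across all files
    .option("datetimeRebaseMode", "CORRECTED")  # read dates/timestamps as written
    .load("/path/to/parquet")
)
```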
Text
Key | Default | Description |
|---|---|---|
|
| The name of the encoding of the TEXT file line separator. For a list of options, see |
| None, which covers | A string between two consecutive TEXT records. |
|
| Whether to read a file as a single record. |
XML
Key | Default | Description |
|---|---|---|
| None | The row tag of the XML files to treat as a row. In the example XML |
|
| Defines a fraction of rows used for schema inference. XML built-in functions ignore this option. Valid values: |
|
| Whether to exclude attributes in elements. |
| None | Mode for dealing with corrupt records during parsing. |
|
| If |
|
| Allows renaming the new field that contains a malformed string created by |
| None | The prefix for attributes to differentiate attributes from elements. This will be the prefix for field names. Default is |
|
| The tag used for the character data within elements that also have attribute(s) or child element(s). Users can specify the |
|
| For reading, decodes the XML files by the given encoding type. For writing, specifies encoding (charset) of saved XML files. XML built-in functions ignore this option. Also applies to DataFrameWriter XML options. |
|
| Whether whitespace surrounding values should be skipped. Whitespace-only character data is ignored. |
| None | Path to an optional XSD file that is used to validate the XML for each row individually. Rows that fail to validate are treated like parse errors. The XSD does not otherwise affect the schema, whether provided or inferred. |
|
| If |
|
| Custom timestamp format string that follows the datetime pattern format. This applies to |
|
| Custom format string for timestamp without timezone that follows the datetime pattern format. This applies to TimestampNTZType type. Also applies to DataFrameWriter XML options. |
|
| Custom date format string that follows the datetime pattern format. This applies to date type. Also applies to DataFrameWriter XML options. |
|
| Sets a locale as a language tag in IETF BCP 47 format. For instance, |
| string | Sets the string representation of a null value. When this is |
|
| Specifies the case sensitivity behavior when rescuedDataColumn is enabled. If true, rescue the data columns whose names differ by case from the schema. When false, read the data in a case-insensitive manner. |
| None | Whether to collect all data that can't be parsed due to a data type mismatch and schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, see What is the rescued data column?. |
|
| Specifies the name of the single variant column. If this option is specified for reading, parse the entire XML record into a single Variant column with the given option string value as the column's name. If this option is provided for writing, write the value of the single Variant column to XML files. Also applies to DataFrameWriter XML options. |
|
| Whether to use the legacy XML parser. The legacy parser has less stringent validation for malformed content but is less memory-efficient. Set to |
|
| The column name used to capture XML elements that match the wildcard ( |
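The row tag option is the one XML option a read almost always needs, because it tells the parser which element maps to a row. A minimal sketch (hypothetical path and element name; assumes an active SparkSession bound to `spark`):

```python
# Sketch: each <book> element in the XML files becomes one row.
df = (
    spark.read.format("xml")
    .option("rowTag", "book")
    .load("/path/to/xml")
)
```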
DataStreamReader options
Use these options with DataStreamReader.option() to configure streaming reads from Delta Lake tables and other file-based sources.
For file format options (JSON, CSV, Parquet, and others), see DataFrameReader options.
For Auto Loader (cloudFiles.*) options, see Auto Loader.
Example
The following example sets maxFilesPerTrigger to 10 for a Delta Lake table stream:
- Python
- Scala
df = spark.readStream.format("delta").option("maxFilesPerTrigger", 10).load("/path/to/delta-table")
val df = spark.readStream.format("delta").option("maxFilesPerTrigger", "10").load("/path/to/delta-table")
Common
The following options apply to Delta Lake tables and other file-based streaming sources.
Key | Default | Description |
|---|---|---|
|
| How to handle source files after they are processed by the stream. Valid values: |
|
| Whether to identify already-processed files by filename only rather than by full path. When |
|
| Whether to process the most recently modified files first within each micro-batch. Useful when you want to process the latest data as quickly as possible. When |
| None | Soft maximum for the amount of data processed per micro-batch. A batch may process more than the limit if the smallest input unit exceeds it. When used together with For Auto Loader, use |
|
| Maximum number of unprocessed files to cache for subsequent micro-batches. Set to |
|
| Maximum age of files considered for processing, relative to the timestamp of the most recently modified file rather than the current system time. Files older than this threshold are ignored. Accepts duration strings such as |
|
| Upper bound for the number of new files processed in each micro-batch. When used together with For Auto Loader, use |
| None | Path to the archive directory when |
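The streaming file source options above can be combined with an explicit schema, which file-based streaming sources require. A minimal sketch (hypothetical path and schema; assumes an active SparkSession bound to `spark`):

```python
from pyspark.sql.types import StringType, StructField, StructType

# Hypothetical schema; streaming file sources need one up front.
input_schema = StructType([StructField("value", StringType())])

df = (
    spark.readStream.format("json")
    .schema(input_schema)
    .option("latestFirst", "true")  # process the most recently modified files first
    .option("maxFileAge", "7d")     # ignore files much older than the newest file
    .load("/path/to/input")
)
```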
Auto Loader
Use these options with the cloudFiles source to configure Auto Loader for streaming ingestion from cloud storage. Options specific to the cloudFiles source are prefixed with cloudFiles to keep them in a separate namespace from other Structured Streaming source options.
Common
Key | Default | Description |
|---|---|---|
|
| Whether to allow input directory file changes to overwrite existing data. For configuration caveats, see Does Auto Loader process the file again when the file gets appended or overwritten?. |
| None | Auto Loader can trigger asynchronous backfills at a given interval. Do not use when |
|
| Whether to automatically delete processed files from the input directory. Available in Databricks Runtime 16.4 and above. |
|
| Amount of time to wait before processed files become candidates for archival. The value is a CalendarInterval string. Available in Databricks Runtime 16.4 and above. |
| None | Path to archive processed files to. Auto Loader must have write permissions to this directory. Available in Databricks Runtime 16.4 and above. |
| None (required option) | The data file format in the source path. Valid values include:
|
|
| Whether to include existing files in the stream processing input path or to only process new files arriving after initial setup. This option is evaluated only when you start a stream for the first time. Changing this option after restarting the stream has no effect. |
|
| Whether to infer exact column types when leveraging schema inference. By default, columns are inferred as strings when inferring JSON and CSV datasets. See schema inference for more details. |
| None | The maximum number of new bytes to be processed in every trigger. You can specify a byte string such as In Databricks Runtime 18.0 and above, this option is dynamically configured and does not need to be set manually. |
| None | How long a file event is tracked for deduplication purposes. Databricks does not recommend tuning this parameter unless you are ingesting data at the order of millions of files an hour. See the section on File event tracking for more details. Tuning |
|
| The maximum number of new files to be processed in every trigger. When used together with In Databricks Runtime 18.0 and above, this option is dynamically configured and does not need to be set manually. |
| None | A comma-separated list of Hive-style partition columns that you would like inferred from the directory structure of the files. Hive-style partition columns are key-value pairs combined by an equality sign. |
|
| The mode for evolving the schema as new columns are discovered in the data. By default, columns are inferred as strings when inferring JSON datasets. See schema evolution for more details. |
| None | Schema information that you provide to Auto Loader during schema inference. See schema hints for more details. |
| None (required to infer the schema) | The location to store the inferred schema and subsequent changes. See schema inference for more details. |
|
| Whether to use a strict globber that matches the default globbing behavior of other file sources in Apache Spark. See Common data loading patterns for more details. Available in Databricks Runtime 12.2 LTS and above. |
|
| Whether to validate Auto Loader options and return an error for unknown or inconsistent options. |
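The common Auto Loader options above typically appear together: the format is required, and a schema location is required for schema inference. A minimal sketch (hypothetical paths; assumes an active SparkSession bound to `spark`):

```python
# Sketch: Auto Loader reading JSON with exact column type inference.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                     # required: source file format
    .option("cloudFiles.schemaLocation", "/path/to/schema")  # stores the inferred schema
    .option("cloudFiles.inferColumnTypes", "true")           # infer exact types, not strings
    .load("/path/to/input")
)
```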
Directory listing
Key | Default | Description |
|---|---|---|
|
| This feature has been deprecated. Databricks recommends using file notification mode with file events instead of Whether to use the incremental listing rather than the full listing in directory listing mode. By default, Auto Loader makes the best effort to automatically detect if a given directory is applicable for the incremental listing. You can explicitly use the incremental listing or use the full directory listing by setting it as Incorrectly enabling incremental listing on a non-lexically ordered directory prevents Auto Loader from discovering new files. Works with Azure Data Lake Storage ( Available in Databricks Runtime 9.1 LTS and above.
Available values: |
File notification
For information about configuring file notification mode, including required cloud permissions, setup instructions, and authentication methods, see Configure Auto Loader streams in file notification mode.
Key | Default | Description |
|---|---|---|
|
| Number of threads to use when fetching messages from the queueing service. Do not use when |
| None | Required only if you specify a Do not use when |
| None | A series of key-value tag pairs to help associate and identify related resources, for example:
For more information on AWS, see Amazon SQS cost allocation tags and Configuring tags for an Amazon SNS topic. (1) For more information on Azure, see Naming Queues and Metadata and the coverage of For more information on GCP, see Reporting usage with labels. (1) Do not use when |
|
| When set to File events provide notifications-level performance in file discovery, because Auto Loader can discover new files after the last run. Unlike directory listing, this process does not need to list all files in the directory. There are some situations when Auto Loader uses directory listing even though the file events option is enabled:
See When does Auto Loader with file events use directory listing? for a comprehensive list of situations when Auto Loader uses directory listing with this option. Available in Databricks Runtime 14.3 LTS and above. |
|
| When set to |
|
| Whether to use file notification mode to determine when there are new files. If Do not use when |
(1) Auto Loader adds the following key-value tag pairs by default on a best-effort basis:
- vendor: Databricks
- path: The location from where the data is loaded. Unavailable in GCP due to labeling limitations.
- checkpointLocation: The location of the stream's checkpoint. Unavailable in GCP due to labeling limitations.
- streamId: A globally unique identifier for the stream.
Databricks reserves these key names, and you cannot overwrite their values.
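Enabling file notification mode is a matter of one extra option on the cloudFiles source, plus any cloud-specific setup options described below. A minimal AWS-flavored sketch (hypothetical paths and region; assumes an active SparkSession bound to `spark`):

```python
# Sketch: Auto Loader in file notification mode on AWS.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/path/to/schema")
    .option("cloudFiles.useNotifications", "true")  # use notifications instead of listing
    .option("cloudFiles.region", "us-west-2")       # AWS region for the SNS/SQS setup
    .load("/path/to/input")
)
```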
Cloud-specific
Auto Loader provides options for configuring cloud infrastructure for file notification mode. For required cloud permissions and setup instructions, see Configure Auto Loader streams in file notification mode.
AWS
Provide the following options only if you choose cloudFiles.useNotifications = true and you want Auto Loader to set up the notification services for you:
Key | Default | Description |
|---|---|---|
| The region of the EC2 instance | The region where the source S3 bucket resides and where you want to create the AWS SNS and SQS services. |
Key | Default | Description |
|---|---|---|
|
| Only allow event notifications from AWS S3 buckets in the same account as the SNS topic. When true, Auto Loader only accepts event notifications from AWS S3 buckets in the same account as the SNS topic. When Available in Databricks Runtime 17.2 and above. |
Provide the following option only if you choose cloudFiles.useNotifications = true and you want Auto Loader to use a queue that you have already set up:
Key | Default | Description |
|---|---|---|
| None | The URL of the SQS queue. If provided, Auto Loader directly consumes events from this queue instead of setting up its own AWS SNS and SQS services. |
AWS authentication options
Provide the following authentication option to use a Databricks service credential:
Key | Default | Description |
|---|---|---|
| None | The name of your Databricks service credential. Available in Databricks Runtime 16.1 and above. |
When Databricks service credentials or IAM roles are not available, you can provide the following authentication options instead:
Key | Default | Description |
|---|---|---|
| None | The AWS access key ID for the user. Must be provided with |
| None | The AWS secret access key for the user. Must be provided with |
| None | The ARN of an IAM role to assume, if needed. The role can be assumed from your cluster's instance profile or by providing credentials with |
| None | An identifier to provide while assuming a role using |
| None | An optional session name to use while assuming a role using |
| None | An optional endpoint to provide for accessing AWS STS when assuming a role using |
Azure
You must provide values for all of the following options if you specify cloudFiles.useNotifications = true and you want Auto Loader to set up the notification services for you:
Key | Default | Description |
|---|---|---|
| None | The Azure Resource Group in which the storage account is created. |
| None | The Azure Subscription ID in which the resource group is created. |
| None | The name of your Databricks service credential. Available in Databricks Runtime 16.1 and above. |
If a Databricks service credential is not available, you can provide the following authentication options instead:
Key | Default | Description |
|---|---|---|
| None | The client ID or application ID of the Databricks service principal. |
| None | The client secret of the Databricks service principal. |
| None | The connection string for the storage account, based on either account access key or shared access signature (SAS). |
| None | The Azure Tenant ID in which the Databricks service principal is created. |
Provide the following option only if you set cloudFiles.useNotifications = true and you want Auto Loader to use an existing queue:
Key | Default | Description |
|---|---|---|
| None | The name of the Azure queue. If provided, the cloud files source directly consumes events from this queue instead of setting up its own Azure Event Grid and Queue Storage services. In that case, your |
GCP
Auto Loader can automatically set up notification services for you by leveraging Databricks service credentials. The service account associated with the Databricks service credential requires the permissions specified in Configure Auto Loader streams in file notification mode.
Key | Default | Description |
|---|---|---|
| None | The ID of the project that the GCS bucket is in. The Google Cloud Pub/Sub subscription is also created within this project. |
| None | The name of your Databricks service credential. Available in Databricks Runtime 16.1 and above. |
If a Databricks service credential is not available, you can use Google Service Accounts directly. You can either configure your cluster to assume a service account by following Google service setup or provide the following authentication options directly:
Key | Default | Description |
|---|---|---|
| None | The client ID of the Google Service Account. |
| None | The email of the Google Service Account. |
| None | The private key that's generated for the Google Service Account. |
| None | The ID of the private key that's generated for the Google Service Account. |
Provide the following option only if you choose cloudFiles.useNotifications = true and you want Auto Loader to use a queue that you have already set up:
Key | Default | Description |
|---|---|---|
| None | The name of the Google Cloud Pub/Sub subscription. If provided, the cloud files source consumes events from this queue instead of setting up its own GCS Notification and Google Cloud Pub/Sub services. |
Delta Lake
The following options apply when reading from a Delta Lake table using spark.readStream.
Key | Default | Description |
|---|---|---|
| None | Set to a Delta table version number or |
| None | Set to a Delta table version number or |
| None | Set to a Delta table version number or |
| None | A regular expression pattern. Files whose paths match the pattern are excluded from the streaming read. Useful for filtering out files that do not conform to the expected naming convention. |
|
| Whether to fail the streaming query if source data has been deleted due to log retention ( |
|
| Available in Databricks Runtime 11.3 LTS and lower. Re-emits rewritten data files after modification operations such as |
|
| Ignores transactions that delete data at partition boundaries (full partition drops only). Does not handle non-partition deletes, updates, or other modifications. Use |
|
| Whether to enable reading the change data feed for the streaming query. When enabled, the stream emits row-level changes (inserts, updates, and deletes) with additional metadata columns. See Use Delta Lake change data feed on Databricks. |
| None | Path to a directory where Delta Lake tracks schema changes for the streaming read. Required when streaming from tables with column mapping enabled and using |
|
| Ignores transactions that delete or modify existing records and processes only appends. Databricks recommends this option for most workloads that do not use change data feeds. Available in Databricks Runtime 12.2 LTS and above. See Skip upstream change commits with |
| Latest available | Timestamp to start reading from. The stream reads all table changes committed at or after the specified timestamp. If the timestamp precedes all available table commits, the stream starts from the earliest available commit. Cannot be used together with Accepts a timestamp string such as |
| Latest available | Delta table version to start reading from. The stream reads all changes committed at or after the specified version. Specify |
|
| Divides the initial table snapshot into event time buckets to prevent records from being incorrectly marked as late events and dropped in stateful queries with watermarks. Cannot be changed after initial snapshot processing has begun without deleting the checkpoint. Available in Databricks Runtime 11.3 LTS and above. See Process initial snapshot without dropping data. |
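The Delta streaming options above are typically used to pin a starting point and control how change commits are handled. A minimal sketch (hypothetical path and version; assumes an active SparkSession bound to `spark`):

```python
# Sketch: stream from a Delta table starting at a fixed version,
# ignoring transactions that update or delete existing records.
df = (
    spark.readStream.format("delta")
    .option("startingVersion", "10")      # hypothetical table version to start from
    .option("skipChangeCommits", "true")  # process appends only
    .load("/path/to/delta-table")
)
```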
DataFrameWriter options
Use these options with DataFrameWriter.option() and DataFrameWriterV2.option() to control how Databricks writes data.
Example
The following example sets mergeSchema to True for writing a Delta Lake table:
- Python
- Scala
df.write.format("delta").option("mergeSchema", True).saveAsTable("my_table")
df.write.format("delta").option("mergeSchema", "true").saveAsTable("my_table")
Avro
Key | Default | Description |
|---|---|---|
| None | The full Avro schema as a JSON string. Use this option to convert Spark SQL types to specific Avro types. Applies to Avro file. |
| None | A URL pointing to an Avro schema file. Use instead of |
|
| Compression codec to use when writing. Valid values: |
|
| The top-level record name in the output Avro schema. Applies to Avro file. |
|
| Whether to match columns between the Spark schema and the Avro schema by field position instead of by name. Applies to Avro file. |
| Empty string | The namespace for the top-level record in the output Avro schema. Applies to Avro file. |
Delta Lake and Apache Iceberg
Key | Default | Description |
|---|---|---|
| `clusterByAuto` | `false` | Whether to enable automatic liquid clustering, where Databricks selects clustering columns based on query patterns. |
| `mergeSchema` | None | Whether to enable schema evolution for the write operation. New columns in the source DataFrame are added to the target table schema. Applies to batch and streaming appends. Applies to Update table schema. |
| `overwriteSchema` | None | Whether to replace the table schema and partitioning when overwriting. Requires the overwrite save mode. |
| `partitionOverwriteMode` | None | The partition overwrite mode. Set this to `dynamic` to overwrite only the partitions for which the write contains data. |
|  | None | A boolean expression that matches rows in the target table to replace with rows from the source query. Can reference columns from both the target table and the source query. Rows in the target that match a source row are deleted and replaced. If the source is empty, no deletions occur. |
|  | None | A comma-separated list of column names used to match rows between the target table and the source query. Both the target and the source must contain all listed columns. Rows in the target that match a source row under equality comparison are deleted and replaced. |
| `replaceWhere` | None | A predicate expression. Atomically overwrites only the records that match the predicate. Applies to Selectively overwrite data with Delta Lake. |
|  | None | A string alias for the target table. |
| `txnAppId` | None | A unique string identifying the application for idempotent writes in `foreachBatch`. |
| `txnVersion` | None | A monotonically increasing number used as the transaction version for idempotent writes in `foreachBatch`. |
| `optimizeWrite` | None | Whether to enable Auto Optimize Write for this write operation. Overrides the `delta.autoOptimize.optimizeWrite` table property. |
| `userMetadata` | None | A user-defined string appended to the commit metadata for the write operation. Visible in the output of `DESCRIBE HISTORY`. |
CSV
Key | Default | Description |
|---|---|---|
| `charToEscapeQuoteEscaping` | `escape` or `\0` | The character used to escape the escape character when it differs from the quote character. Applies to csv (DataFrameWriter). |
| `compression` | `none` | Compression codec to use when writing. Valid values: `none`, `bzip2`, `gzip`, `lz4`, `snappy`, `deflate`. Applies to csv (DataFrameWriter). |
| `dateFormat` | `yyyy-MM-dd` | Format string for date column values. Applies to csv (DataFrameWriter). |
| `emptyValue` | Empty string | The string written for empty (non-null) values. Applies to csv (DataFrameWriter). |
| `encoding` | `UTF-8` | The character encoding for the output files. Applies to csv (DataFrameWriter). |
| `escape` | `\` | The character used to escape quoted values. Applies to csv (DataFrameWriter). |
| `escapeQuotes` | `true` | Whether to escape quote characters inside quoted field values. Applies to csv (DataFrameWriter). |
| `header` | `false` | Whether to write column names as the first line of the output. Applies to csv (DataFrameWriter). |
| `ignoreLeadingWhiteSpace` | `true` | Whether to trim leading whitespace from values when writing. Applies to csv (DataFrameWriter). |
| `ignoreTrailingWhiteSpace` | `true` | Whether to trim trailing whitespace from values when writing. Applies to csv (DataFrameWriter). |
| `lineSep` | `\n` | The line separator string used between records. Applies to csv (DataFrameWriter). |
| `nullValue` | Empty string | String written for null values. Applies to csv (DataFrameWriter). |
| `quote` | `"` | The character used to quote field values that contain the separator. Applies to csv (DataFrameWriter). |
| `quoteAll` | `false` | Whether to enclose all field values in quotes regardless of content. Applies to csv (DataFrameWriter). |
| `sep` | `,` | The field delimiter character. Applies to csv (DataFrameWriter). |
| `timestampFormat` | `yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX]` | The format string for timestamp column values. Applies to csv (DataFrameWriter). |
| `timestampNTZFormat` | `yyyy-MM-dd'T'HH:mm:ss[.SSS]` | Format string for timestamp without timezone (`TIMESTAMP_NTZ`) column values. Applies to csv (DataFrameWriter). |
Excel
Key | Default | Description |
|---|---|---|
| `dataAddress` | None | The sheet name or starting cell for the write. If omitted, writes to a default sheet. |
| `dateFormat` |  | Excel cell format string applied to date columns. |
| `header` | `true` | Whether to write column names as the first row. Valid values: `true`, `false`. |
| `timestampFormat` |  | Excel cell format string applied to timestamp columns. |
|  |  | The Excel file format version to write. |
JSON
Key | Default | Description |
|---|---|---|
| `compression` | `none` | Compression codec to use when writing. Valid values: `none`, `bzip2`, `gzip`, `lz4`, `snappy`, `deflate`. Applies to json (DataFrameWriter). |
| `dateFormat` | `yyyy-MM-dd` | Format string for date column values. Applies to json (DataFrameWriter). |
| `encoding` | `UTF-8` | The character encoding for the output files. Applies to json (DataFrameWriter). |
| `ignoreNullFields` | Value of `spark.sql.jsonGenerator.ignoreNullFields` | Whether to omit fields with null values from the JSON output. Applies to json (DataFrameWriter). |
| `lineSep` | `\n` | The line separator string used between records. Applies to json (DataFrameWriter). |
| `pretty` | `false` | Whether to enable pretty (indented, multiline) JSON output. |
|  |  | Whether to sort the keys of JSON objects alphabetically in the output. Useful for producing deterministic output. |
| `timestampFormat` | `yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX]` | The format string for timestamp column values. Applies to json (DataFrameWriter). |
| `timestampNTZFormat` | `yyyy-MM-dd'T'HH:mm:ss[.SSS]` | Format string for timestamp without timezone (`TIMESTAMP_NTZ`) column values. Applies to json (DataFrameWriter). |
| `writeNonAsciiCharacterAsCodePoint` | `false` | Whether to encode non-ASCII characters as code points (`\uXXXX` escapes) in the output. Applies to json (DataFrameWriter). |
ORC
Key | Default | Description |
|---|---|---|
| `compression` | `snappy` | Compression codec to use when writing. Valid values: `none`, `uncompressed`, `snappy`, `zlib`, `lzo`, `zstd`, `lz4`. |
Parquet
Key | Default | Description |
|---|---|---|
| `compression` | `snappy` | Compression codec to use when writing. Valid values: `none`, `uncompressed`, `snappy`, `gzip`, `lzo`, `brotli`, `lz4`, `zstd`. |
|  |  | The physical type used to encode timestamp columns. Valid values: `INT96`, `TIMESTAMP_MICROS`, `TIMESTAMP_MILLIS`. |
Text
Key | Default | Description |
|---|---|---|
| `compression` | `none` | Compression codec to use when writing. Valid values: `none`, `bzip2`, `gzip`, `lz4`, `snappy`, `deflate`. |
| `encoding` | `UTF-8` | The character encoding for the output files. |
| `lineSep` | `\n` | The line separator string used between records. Applies to text (DataFrameWriter). |
XML
Key | Default | Description |
|---|---|---|
| `arrayElementName` | `item` | The element name for array elements that have no explicit name. Applies to xml (DataFrameWriter). |
| `attributePrefix` | `_` | The prefix prepended to field names that correspond to XML attributes. Applies to xml (DataFrameWriter). |
| `compression` | `none` | Compression codec to use when writing. Valid values: `none`, `bzip2`, `gzip`, `lz4`, `snappy`, `deflate`. Applies to xml (DataFrameWriter). |
| `dateFormat` | `yyyy-MM-dd` | Format string for date column values. Applies to xml (DataFrameWriter). |
| `declaration` | `version="1.0" encoding="UTF-8" standalone="yes"` | The XML declaration string written at the top of each output file. Set to an empty string to suppress the declaration. Applies to xml (DataFrameWriter). |
| `encoding` | `UTF-8` | The character encoding for the output files. Applies to xml (DataFrameWriter). |
| `indent` | 4 spaces | The string used to indent child elements in the output. Set to an empty string to turn off indentation and write each row on a single line. |
| `nullValue` | `null` | The string written for null values. When set to `null`, fields with null values are omitted from the output. |
| `rootTag` | `ROWS` | The root element tag that wraps all row elements in the output. Applies to xml (DataFrameWriter). |
| `rowTag` | `ROW` | The element tag that represents a row in the output. Applies to xml (DataFrameWriter). |
| `singleVariantColumn` | None | The name of the single Variant column to write to XML files. Applies to xml (DataFrameWriter). |
| `timestampFormat` | `yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX]` | The format string for timestamp column values. Applies to xml (DataFrameWriter). |
| `timestampNTZFormat` | `yyyy-MM-dd'T'HH:mm:ss[.SSS]` | Format string for timestamp without timezone column values. Applies to xml (DataFrameWriter). |
| `validateName` | `true` | Whether to throw an exception if a column name is not a valid XML element identifier. Applies to xml (DataFrameWriter). |
| `valueTag` | `_VALUE` | The field name used for character data in XML elements that also have attributes or child elements. Applies to xml (DataFrameWriter). |
DataStreamWriter options
Use these options with DataStreamWriter.option() to configure streaming writes.
Example
The following example sets the checkpoint location for a stream:
- Python
- Scala
(df.writeStream
.format("delta")
.option("checkpointLocation", "/path/to/checkpoint")
.start("/path/to/table"))
df.writeStream
.format("delta")
.option("checkpointLocation", "/path/to/checkpoint")
.start("/path/to/table")
Common
Key | Default | Description |
|---|---|---|
| `checkpointLocation` | None (required) | Path to the checkpoint directory for the streaming query. Required for fault tolerance and exactly-once processing guarantees. Each streaming query must use a unique checkpoint location. Databricks recommends storing checkpoints in a Unity Catalog volume or cloud storage path. See Structured Streaming checkpoints. |
| `path` | None | Output path for file-based streaming sinks such as Parquet. Applies to file-based formats only. |
Console sink
Key | Default | Description |
|---|---|---|
| `numRows` | `20` | The number of rows to display per micro-batch when writing to the console sink. |
| `truncate` | `true` | Whether to truncate long strings when displaying rows. Set to `false` to display full values. |
Delta Lake
The following options apply when writing a stream to a Delta Lake table using `format("delta")`. Overwrite-only options such as `overwriteSchema`, `replaceWhere`, and `partitionOverwriteMode` are not supported for streaming writes.
Key | Default | Description |
|---|---|---|
| `mergeSchema` | `false` | Whether to evolve the Delta Lake table schema when the streaming DataFrame contains new columns. Applies to append output mode only. Applies to Update table schema. |
| `userMetadata` | None | A user-defined string appended to the commit metadata for the write operation. Visible in the output of `DESCRIBE HISTORY`. |
File sink
The following option applies when writing a stream to file-based formats (Parquet, JSON, CSV, ORC, text). For format-specific options, see DataFrameWriter options.
Key | Default | Description |
|---|---|---|
| `retention` | None | How long to retain sink metadata files used for fault tolerance and compaction. Accepts a time string such as `1h`. |
Kafka sink
For a complete list of options for writing streams to Kafka, see the Structured Streaming Kafka integration guide.
Key | Default | Description |
|---|---|---|
| `kafka.bootstrap.servers` | None | Required. A comma-separated list of Kafka broker `host:port` pairs used for the initial connection to the Kafka cluster. |
| `topic` | None | The target Kafka topic for all rows. Required if the DataFrame does not include a `topic` column. |
| `kafka.*` | None | Any Kafka producer configuration prefixed with `kafka.` is passed through to the Kafka producer. |
Memory sink
Key | Default | Description |
|---|---|---|
| `queryName` | None (required) | The name of the in-memory table that the query writes to. Required for the memory sink. Also configurable via `DataStreamWriter.queryName()`. |
|  |  | Delivery guarantee for the memory sink. |
Spark function options
Some Spark SQL built-in functions accept an options map that controls parsing or serialization behavior. Pass options as a Python dict or a Scala Map[String, String].
Example
The following example parses a JSON column, failing fast on malformed records (from_json supports only the PERMISSIVE and FAILFAST modes):
- Python
- Scala
from pyspark.sql.functions import from_json
from pyspark.sql.types import StructType, StructField, StringType
schema = StructType([StructField("name", StringType())])
df = df.withColumn("parsed", from_json("json_col", schema, {"mode": "FAILFAST"}))
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types._
val schema = StructType(Seq(StructField("name", StringType)))
val df = df.withColumn("parsed", from_json(col("json_col"), schema, Map("mode" -> "FAILFAST")))
Avro
Avro functions accept the same options as the corresponding DataFrame options:
- `from_avro` and `schema_of_avro` use DataFrameReader Avro options.
- `to_avro` uses DataFrameWriter Avro options.
Example
The following example decodes an Avro column with schema evolution enabled:
- Python
- Scala
from pyspark.sql.avro.functions import from_avro
df = df.withColumn("decoded", from_avro("avro_col", json_schema, {"avroSchemaEvolutionMode": "restart"}))
import org.apache.spark.sql.avro.functions.from_avro
import org.apache.spark.sql.functions.col
val df = df.withColumn("decoded", from_avro(col("avro_col"), jsonSchema, Map("avroSchemaEvolutionMode" -> "restart")))
In addition, the Schema Registry variants of from_avro and to_avro accept the following options:
Key | Default | Description |
|---|---|---|
|  | None | Schema ID from the Confluent Schema Registry to use when decoding Avro data that was encoded with an incompatible schema. |
| `confluent.schema.registry.*` | None | Confluent Schema Registry client configuration properties. Pass any Confluent Schema Registry client property using this prefix. |
CSV
CSV functions accept the same options as the corresponding DataFrame options:
- `from_csv` and `schema_of_csv` use DataFrameReader CSV options.
- `to_csv` uses DataFrameWriter CSV options.
Example
The following example reads CSV with a custom separator and NULL value:
- Python
- Scala
from pyspark.sql.functions import from_csv
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
schema = StructType([StructField("id", IntegerType()), StructField("name", StringType())])
df = df.withColumn("parsed", from_csv("csv_col", schema, {"sep": "|", "nullValue": "N/A"}))
import org.apache.spark.sql.functions.{col, from_csv}
import org.apache.spark.sql.types._
val schema = StructType(Seq(StructField("id", IntegerType), StructField("name", StringType)))
val df = df.withColumn("parsed", from_csv(col("csv_col"), schema, Map("sep" -> "|", "nullValue" -> "N/A")))
JSON
JSON functions accept the same options as the corresponding DataFrame options:
- `from_json` and `schema_of_json` use DataFrameReader JSON options.
- `to_json` uses DataFrameWriter JSON options.
Example
The following example writes JSON with NULL fields ignored and pretty formatting enabled:
- Python
- Scala
from pyspark.sql.functions import to_json
df = df.withColumn("json_str", to_json("struct_col", {"pretty": "true", "ignoreNullFields": "true"}))
import org.apache.spark.sql.functions.{col, to_json}
val df = df.withColumn("json_str", to_json(col("struct_col"), Map("pretty" -> "true", "ignoreNullFields" -> "true")))
Protobuf
from_protobuf and to_protobuf do not use a file-based DataSource. Protobuf data is always read and written as binary columns using these functions. Options are passed as a Map[String, String] and are case-sensitive.
Example
The following example decodes a Protobuf column using PERMISSIVE mode:
- Python
- Scala
from pyspark.sql.protobuf.functions import from_protobuf
df = df.withColumn("decoded", from_protobuf("proto_col", "MyMessage", "/path/to/descriptor.desc",
{"mode": "PERMISSIVE", "enums.as.ints": "true"}))
import org.apache.spark.sql.protobuf.functions.from_protobuf
import org.apache.spark.sql.functions.col
val df = df.withColumn("decoded", from_protobuf(col("proto_col"), "MyMessage", "/path/to/descriptor.desc",
Map("mode" -> "PERMISSIVE", "enums.as.ints" -> "true")))
Protobuf functions use the following options:
Key | Default | Description |
|---|---|---|
| `mode` | `FAILFAST` | How to handle corrupt records. Valid values: `PERMISSIVE`, `FAILFAST`. |
| `recursive.fields.max.depth` | `-1` | Maximum recursion depth for recursive Protobuf fields. Set to `-1` to disallow recursive fields. |
| `convert.any.fields.to.json` | `false` | Whether to convert Protobuf `Any` fields to JSON strings. |
| `emit.default.values` | `false` | Whether to emit fields with zero or default values (proto3 semantics). When `false`, such fields are rendered as null. |
| `enums.as.ints` | `false` | Whether to render enum fields as integer values instead of strings. Applies to `from_protobuf`. |
| `upcast.unsigned.ints` | `false` | Whether to upcast unsigned integer types so their full range is representable. |
| `unwrap.primitive.wrapper.types` | `false` | Whether to unwrap Protobuf well-known wrapper types (for example, `google.protobuf.Int32Value`) to their primitive values. |
| `retain.empty.message.types.as.struct` | `false` | Whether to retain empty Protobuf message types in the output schema by inserting a dummy column. Applies to `from_protobuf`. |
| `schema.registry.subject` | None | Schema Registry subject name. Required when using the Schema Registry variants of `from_protobuf` and `to_protobuf`. |
| `schema.registry.address` | None | Schema Registry address (host and port). Required when using the Schema Registry variants of `from_protobuf` and `to_protobuf`. |
|  | None | Specifies which Protobuf message to use when the schema registry subject contains multiple messages. Optional. |
XML
XML functions accept the same options as the corresponding DataFrame options:
- `from_xml` and `schema_of_xml` use DataFrameReader XML options.
- `to_xml` uses DataFrameWriter XML options.
Example
The following example writes XML with custom root and row tags:
- Python
- Scala
from pyspark.sql.functions import to_xml
df = df.withColumn("xml_str", to_xml("struct_col", {"rootTag": "records", "rowTag": "record"}))
import org.apache.spark.sql.functions.{col, to_xml}
val df = df.withColumn("xml_str", to_xml(col("struct_col"), Map("rootTag" -> "records", "rowTag" -> "record")))