STATE_REPARTITION_INVALID_CHECKPOINT error condition
The provided checkpoint location '<checkpointLocation>' is in an invalid state.
LAST_BATCH_ABANDONED_REPARTITION
The last batch (ID <lastBatchId>) is a repartition batch with <lastBatchShufflePartitions> shuffle partitions and didn't finish successfully.
You're now requesting to repartition to <numPartitions> shuffle partitions.
Please retry with the same number of shuffle partitions as the previous attempt.
Once that attempt completes successfully, you can repartition to a different number of shuffle partitions.
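The ordering matters: the abandoned attempt must be completed first. A minimal sketch of that retry order follows; the repartitionState helper, the checkpoint path, and the partition counts are hypothetical placeholders, since this page does not show the actual repartition API.

```scala
// Hypothetical helper, named only to illustrate the retry order described above;
// substitute the actual state repartition entry point you are using.
def repartitionState(checkpointLocation: String, numShufflePartitions: Int): Unit = ???

val checkpointLocation = "/tmp/checkpoints/my-query"  // illustrative path

// 1. Retry with the SAME shuffle partition count as the abandoned attempt
//    (<lastBatchShufflePartitions> in the error message), e.g. 64 here.
repartitionState(checkpointLocation, 64)

// 2. Only after that succeeds, repartition to the new target
//    (<numPartitions> in the error message), e.g. 128 here.
repartitionState(checkpointLocation, 128)
```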
LAST_BATCH_FAILED
The last batch (ID <lastBatchId>) didn't finish successfully. Please make sure the streaming query finishes successfully before repartitioning.
If you are using the ProcessingTime trigger, you can use the AvailableNow trigger instead, which makes sure the query terminates successfully by itself.
If you want to skip this check, set the enforceExactlyOnceSink parameter in repartition to false.
However, this can cause duplicate output records from the failed batch when exactly-once sinks are used.
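A minimal sketch of switching from a ProcessingTime trigger to Trigger.AvailableNow so the query drains the available data and then terminates on its own; the source format, schema, paths, and checkpoint location below are illustrative.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("finish-last-batch").getOrCreate()

// Re-run the same query against the same checkpoint, but with AvailableNow:
// it processes everything currently available and then stops cleanly.
val query = spark.readStream
  .format("json")
  .schema("value STRING")                               // illustrative schema
  .load("/tmp/input")                                   // illustrative source path
  .writeStream
  .format("parquet")
  .option("path", "/tmp/output")                        // illustrative sink path
  .option("checkpointLocation", "/tmp/checkpoints/my-query")
  .trigger(Trigger.AvailableNow())                      // instead of Trigger.ProcessingTime(...)
  .start()

query.awaitTermination()  // returns once the final microbatch has been committed
```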
MISSING_OFFSET_SEQ_METADATA
The OffsetSeq (v<version>) metadata is missing for batch ID <batchId>. Please make sure the checkpoint was written by a supported version (Spark 4.0+ or DBR 14.3+).
NO_BATCH_FOUND
No microbatch has been recorded in the checkpoint location. Make sure the streaming query has successfully completed at least one microbatch before repartitioning.
NO_COMMITTED_BATCH
There is no committed microbatch. Make sure the streaming query has successfully completed at least one microbatch before repartitioning.
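One way to verify that at least one microbatch has committed (this applies to both NO_BATCH_FOUND and NO_COMMITTED_BATCH) is to look for entries under the checkpoint's commits/ directory, where Structured Streaming writes one file per committed batch. A minimal sketch, assuming the illustrative checkpoint path below and an existing SparkSession named spark:

```scala
import org.apache.hadoop.fs.Path

val checkpointLocation = "/tmp/checkpoints/my-query"   // illustrative path
val commitsDir = new Path(checkpointLocation, "commits")
val fs = commitsDir.getFileSystem(spark.sparkContext.hadoopConfiguration)

// Committed microbatches show up as files named by batch ID (0, 1, 2, ...).
val committedBatchIds =
  if (fs.exists(commitsDir)) {
    fs.listStatus(commitsDir)
      .map(_.getPath.getName)
      .filter(_.forall(_.isDigit))
      .map(_.toLong)
      .sorted
  } else {
    Array.empty[Long]
  }

if (committedBatchIds.isEmpty) {
  println("No committed microbatch yet; run the streaming query to completion before repartitioning.")
} else {
  println(s"Committed batch IDs: ${committedBatchIds.mkString(", ")}")
}
```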
OFFSET_SEQ_NOT_FOUND
The offset sequence entry for batch ID <batchId> was not found. You might have set a very low value for the
'spark.sql.streaming.minBatchesToRetain' config during the streaming query execution, or you deleted files in the checkpoint location.
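A deleted entry cannot be recovered, but you can raise the retention for future runs so this doesn't recur; a minimal sketch, assuming an existing SparkSession named spark (the value shown is illustrative, and the config must be in effect while the streaming query runs):

```scala
// Retain metadata for more past batches in the checkpoint (the Spark default is 100),
// so later repartitioning can still find the offset sequence entry it needs.
spark.conf.set("spark.sql.streaming.minBatchesToRetain", "200")
```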
SHUFFLE_PARTITIONS_ALREADY_MATCH
The number of shuffle partitions in the last committed batch (id=<batchId>) is the same as the requested <numPartitions> partitions.
The checkpoint already has the requested number of partitions, so the repartition is a no-op.
UNSUPPORTED_COMMIT_METADATA_VERSION
Unsupported commit metadata version <version>. Please make sure the checkpoint was written by a supported version (Spark 4.0+ or DBR 14.3+).
UNSUPPORTED_OFFSET_SEQ_VERSION
Unsupported offset sequence version <version>. Please make sure the checkpoint was written by a supported version (Spark 4.0+ or DBR 14.3+).
UNSUPPORTED_PROVIDER
<provider> is not supported.