Standard compute requirements and limitations
This page lists the requirements and limitations of standard compute. If you are using classic compute, Databricks recommends standard access mode unless your workload depends on functionality affected by one of the limitations listed below.
Init scripts and libraries have different support across access modes and Databricks Runtime versions. See Where can init scripts be installed? and Compute-scoped libraries.
Current standard compute limitations
The following sections list limitations for standard compute based on the most recent Databricks Runtime version. For limitations that apply to older Databricks Runtime versions, see Runtime-dependent limitations.
If these features are required for your workload, use dedicated compute instead.
General standard compute limitations
- Databricks Runtime for ML is not supported. Instead, install any ML library not bundled with the Databricks Runtime as a compute-scoped library (see the sketch after this list).
- GPU-enabled compute is not supported.
- Spark-submit job tasks are not supported. Use a JAR task instead.
- DBUtils and other clients can only read from cloud storage using an external location.
- Custom containers are not supported.
- DBFS root and mounts do not support FUSE.
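As one illustration, a compute-scoped library can be attached through the Libraries API. The following is a minimal sketch using the Databricks SDK for Python; the cluster ID and package name are hypothetical placeholders:

```python
# A minimal sketch using the Databricks SDK for Python (databricks-sdk).
# The cluster ID and package name are hypothetical placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import Library, PythonPyPiLibrary

w = WorkspaceClient()  # reads credentials from the environment or ~/.databrickscfg

# Install an ML library as a compute-scoped library on a standard compute cluster
w.libraries.install(
    cluster_id="0123-456789-abcdef00",  # placeholder cluster ID
    libraries=[Library(pypi=PythonPyPiLibrary(package="xgboost"))],
)
```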
Language limitations
- R is not supported.
Spark API limitations
- Spark Context (`sc`), `spark.sparkContext`, and `sqlContext` are not supported for Scala:
  - Databricks recommends using the `spark` variable to interact with the `SparkSession` instance (see the sketch after this list).
  - The following `sc` functions are also not supported: `emptyRDD`, `range`, `init_batched_serializer`, `parallelize`, `pickleFile`, `textFile`, `wholeTextFiles`, `binaryFiles`, `binaryRecords`, `sequenceFile`, `newAPIHadoopFile`, `newAPIHadoopRDD`, `hadoopFile`, `hadoopRDD`, `union`, `runJob`, `setSystemProperty`, `uiWebUrl`, `stop`, `setJobGroup`, `setLocalProperty`, `getConf`.
- The Spark configuration property `spark.executor.extraJavaOptions` is not supported.
- When creating a DataFrame from local data using `spark.createDataFrame`, row sizes cannot exceed 128 MB.
- RDD APIs are not supported.
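The following is a sketch of `SparkSession` equivalents for unsupported `SparkContext` (`sc`) calls, shown in Python; the same pattern applies in Scala through the `spark` variable. Paths and data are illustrative placeholders.

```python
# Sketch: SparkSession equivalents for unsupported SparkContext (sc) calls.
# The `spark` variable is predefined in Databricks notebooks; paths and data
# below are illustrative placeholders.

# Instead of sc.parallelize([...]), build a DataFrame directly:
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Instead of sc.range(10):
ids = spark.range(10)

# Instead of sc.textFile(...):
lines = spark.read.text("dbfs:/path/to/file.txt")  # placeholder path

# Instead of sc.getConf().get(...):
shuffle_partitions = spark.conf.get("spark.sql.shuffle.partitions")
```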
UDF limitations
- Hive UDFs are not supported. Instead, use UDFs in Unity Catalog.
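As a minimal sketch, a simple Hive UDF could be replaced by a Unity Catalog SQL UDF like the one below; the `main.default` catalog and schema names and the function logic are placeholders:

```python
# Sketch: registering a Unity Catalog SQL UDF as a replacement for a Hive UDF.
# `main.default` is a placeholder catalog.schema; adjust to your workspace.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.default.redact_email(email STRING)
    RETURNS STRING
    RETURN regexp_replace(email, '^[^@]+', '***')
""")

# Use the UDF in a query
spark.sql("SELECT main.default.redact_email('user@example.com') AS redacted").show()
```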
Streaming limitations
Some Kafka options have limited support even in otherwise supported configurations on Databricks. All listed Kafka limitations apply to both batch and stream processing. See Stream processing with Apache Kafka and Databricks.
- You cannot use the formats `statestore` and `state-metadata` to query state information for stateful streaming queries.
- `transformWithState` and associated APIs are not supported.
- Working with socket sources is not supported.
- The `sourceArchiveDir` must be in the same external location as the source when you use `option("cleanSource", "archive")` with a data source managed by Unity Catalog (see the sketch after this list).
- For Kafka sources and sinks, the following options are not supported:
  - `kafka.sasl.client.callback.handler.class`
  - `kafka.sasl.login.callback.handler.class`
  - `kafka.sasl.login.class`
  - `kafka.partition.assignment.strategy`
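A minimal sketch of the `cleanSource` archive pattern follows, assuming both directories live in the same Unity Catalog external location; all paths are placeholders:

```python
# Sketch: archiving processed source files with cleanSource on a Unity
# Catalog-managed file source. Both paths are placeholders and must be in
# the same external location.
events = (
    spark.readStream.format("json")
    .schema("id INT, payload STRING")  # file sources require an explicit schema
    .option("cleanSource", "archive")
    .option("sourceArchiveDir", "s3://my-bucket/landing/archive")  # placeholder
    .load("s3://my-bucket/landing/incoming")                       # placeholder
)
```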
Network and file system limitations
- Standard compute runs commands as a low-privilege user forbidden from accessing sensitive parts of the filesystem.
- POSIX-style paths (`/`) for DBFS are not supported (see the sketch after this list).
- Only workspace admins and users with ANY FILE permissions can directly interact with files using DBFS.
- You cannot connect to the instance metadata service or any services running in the Databricks VPC.
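As a sketch of the distinction, reads through the `dbfs:/` URI scheme work where POSIX-style FUSE paths do not; the path is a placeholder, and access remains subject to the DBFS permission limits above:

```python
# Sketch: DBFS access patterns on standard compute. Path is a placeholder.

# Supported: URI-scheme access through Spark APIs
readme = spark.read.text("dbfs:/databricks-datasets/README.md")

# Not supported: POSIX-style FUSE paths into the DBFS root
# open("/dbfs/databricks-datasets/README.md")  # fails on standard compute
```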
Scala kernel limitations
The following limitations apply when using the Scala kernel on standard compute:
- Certain classes cannot be used in your code if they conflict with the internal almond kernel library, most notably `Input`. For a list of almond's defined imports, see almond imports.
- Logging directly to log4j is not supported.
- In the UI, the DataFrame schema dropdown is not supported.
- If your driver hits an OOM error, the Scala REPL does not terminate.
- `//connector/sql-aws-connectors:sql-aws-connectors` is not in the Scala REPL's Bazel target, so using it results in a `ClassNotFoundException`.
- The Scala kernel is incompatible with `SQLImplicits`.
Runtime-dependent limitations
The following limitations have been resolved through runtime updates, but might still apply to your workload if you use an older runtime.
Language support
| Feature | Required Databricks Runtime version |
| --- | --- |
| Scala | 13.3 or above |
| All runtime-bundled Java and Scala libraries available by default | 15.4 LTS or above (for 15.3 or below, a Spark configuration must be set) |
Spark API support
| Feature | Required Databricks Runtime version |
| --- | --- |
| Spark ML | 17.0 or above |
| Python | 14.0 or above |
| Scala | 15.4 LTS or above |
UDF support
| Feature | Required Databricks Runtime version |
| --- | --- |
|  | 14.3 LTS or above |
| Scala scalar UDFs and Scala UDAFs | 14.3 LTS or above |
| Import modules from Git folders, workspace files, or volumes in PySpark UDFs | 14.3 LTS or above |
| Use custom versions of | 14.3 LTS or above |
| Non-scalar Python and Pandas UDFs, including UDAFs, UDTFs, and Pandas on Spark | 14.3 LTS or above |
| Python scalar UDFs and Pandas UDFs | 14.1 or above |
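For context, the following is a minimal sketch of a Python scalar UDF and a Pandas (vectorized) UDF of the kind gated by the versions above; the function names and logic are illustrative placeholders:

```python
# Sketch: a Python scalar UDF and a Pandas (vectorized) UDF.
# Function names and logic are illustrative placeholders.
import pandas as pd
from pyspark.sql.functions import udf, pandas_udf
from pyspark.sql.types import StringType

# Python scalar UDF (14.1 or above on standard compute)
@udf(returnType=StringType())
def shout(s: str) -> str:
    return s.upper() if s is not None else None

# Pandas UDF (vectorized; also 14.1 or above)
@pandas_udf("double")
def celsius_to_fahrenheit(c: pd.Series) -> pd.Series:
    return c * 9.0 / 5.0 + 32.0

df = spark.createDataFrame([("hi", 20.0)], ["word", "temp_c"])
df.select(shout("word"), celsius_to_fahrenheit("temp_c")).show()
```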
Streaming support
| Feature | Required Databricks Runtime version |
| --- | --- |
|  | 16.3 or above |
|  | 14.3 LTS or above |
| Scala | 16.1 or above |
| Scala | 16.2 or above |
| Scala | 14.2 or above |
| Kafka options | 13.3 LTS or above |
| Scala | 16.1 or above |
| Python | 14.3 LTS or above |
Additionally, for Python, `foreachBatch` has the following behavior changes on Databricks Runtime 14.0 and above:

- `print()` commands write output to the driver logs.
- You cannot access the `dbutils.widgets` submodule inside the function.
- Any files, modules, or objects referenced in the function must be serializable and available on Spark.
Network and file system support
| Feature | Required Databricks Runtime version |
| --- | --- |
| Connections to ports other than 80 and 443 | 12.2 LTS or above |