horovod.spark: distributed deep learning with Horovod
Important
Horovod and HorovodRunner are now deprecated. Releases after 15.4 LTS ML will not have this package pre-installed. For distributed deep learning, Databricks recommends using TorchDistributor for distributed training with PyTorch, or the tf.distribute.Strategy API for distributed training with TensorFlow.
Learn how to use the horovod.spark package to perform distributed training of machine learning models.
horovod.spark on Databricks
Databricks supports the horovod.spark package, which provides an estimator API that you can use in ML pipelines with Keras and PyTorch. For details, see Horovod on Spark, which includes a section on Horovod on Databricks.
Note
- Databricks installs the horovod package with dependencies. If you upgrade or downgrade these dependencies, there might be compatibility issues.
- When using horovod.spark with custom callbacks in Keras, you must save models in the TensorFlow SavedModel format. With TensorFlow 2.x, use the .tf suffix in the file name. With TensorFlow 1.x, set the option save_weights_only=True.
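To illustrate the estimator API and the SavedModel requirement above, here is a minimal sketch of training a Keras model with horovod.spark's KerasEstimator. It assumes a running Spark cluster with Horovod installed, a Spark DataFrame train_df with "features" and "label" columns, and an illustrative DBFS path for the store; the model architecture and parameter values are placeholders, not a definitive configuration.

```python
# Sketch only: requires a Spark cluster with Horovod installed.
import tensorflow as tf
import horovod.spark.keras as hvd_keras
from horovod.spark.common.store import Store

# Shared storage for intermediate data and checkpoints (path is illustrative).
store = Store.create("/dbfs/horovod_store")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# With TensorFlow 2.x and custom callbacks, end the checkpoint file name
# with the .tf suffix so the model is saved in SavedModel format.
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath="checkpoint.tf")

estimator = hvd_keras.KerasEstimator(
    num_proc=2,                 # number of Horovod workers
    store=store,
    model=model,
    optimizer=tf.keras.optimizers.Adam(),
    loss="mse",
    feature_cols=["features"],
    label_cols=["label"],
    batch_size=32,
    epochs=5,
    callbacks=[checkpoint],
)

# fit() runs distributed training and returns a Spark Transformer
# that can be used in an ML pipeline for inference.
keras_model = estimator.fit(train_df)
predictions = keras_model.transform(train_df)
```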
Requirements
Databricks Runtime ML 7.4 or above.
Note
horovod.spark does not support pyarrow versions 11.0 and above (see the relevant GitHub issue). Databricks Runtime 15.0 ML includes pyarrow version 14.0.1. To use horovod.spark with Databricks Runtime 15.0 ML or above, you must manually install pyarrow, specifying a version below 11.0.
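One way to pin pyarrow below 11.0 is a notebook-scoped install at the top of your notebook; this is an environment-setup fragment, assuming the standard %pip notebook magic is available:

```shell
# Notebook-scoped install pinning pyarrow below 11.0 for horovod.spark.
%pip install "pyarrow<11.0"
```

Note that a notebook-scoped install applies only to the current notebook session; for a cluster-wide pin, install the library through the cluster's library configuration instead.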