Distributed Hyperopt and automated MLflow tracking

Hyperopt is a Python library for hyperparameter tuning. Databricks Runtime for Machine Learning includes an optimized and enhanced version of Hyperopt, including automated MLflow tracking and the SparkTrials class for distributed tuning. This notebook illustrates how to scale up hyperparameter tuning for a single-machine Python ML algorithm and track the results using MLflow. In part 1, you create a single-machine Hyperopt workflow. In part 2, you learn to use the SparkTrials class to distribute the workflow calculations across the Spark cluster.

Part 2. Distributed tuning using Apache Spark and MLflow

To distribute tuning, add one more argument to fmin(): an instance of the Trials subclass SparkTrials.
SparkTrials takes two optional arguments:
- parallelism: Number of models to fit and evaluate concurrently. The default is the number of available Spark task slots.
- timeout: Maximum time (in seconds) that fmin() can run. The default is no time limit.
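
For concreteness, here is a minimal sketch of this pattern. The quadratic objective, the search space, and the values parallelism=4, timeout=3600, and max_evals=32 are illustrative assumptions, not the setup used elsewhere in this notebook.

```python
from hyperopt import fmin, hp, tpe, SparkTrials

# Illustrative objective: minimize a simple quadratic in x.
def objective(x):
    return (x - 3) ** 2

# Both arguments are optional; these values are placeholders.
spark_trials = SparkTrials(parallelism=4, timeout=3600)

best = fmin(
    fn=objective,
    space=hp.uniform("x", -10, 10),  # illustrative search space
    algo=tpe.suggest,
    max_evals=32,
    trials=spark_trials,  # this extra argument distributes the trials
)
```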
This example uses the very simple objective function defined in Cmd 7. In this case, the function runs quickly and the overhead of starting Spark jobs dominates the calculation time, so the calculations for the distributed case take more time. For typical real-world problems, the objective function is more complex, and using SparkTrials to distribute the calculations is faster than single-machine tuning.
Automated MLflow tracking is enabled by default. To use it, call mlflow.start_run() before calling fmin(), as shown in the example.
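
A minimal sketch of that pattern, again with an illustrative objective and placeholder parameter values rather than the ones from this notebook:

```python
import mlflow
from hyperopt import fmin, hp, tpe, SparkTrials

# Illustrative objective: minimize a simple quadratic in x.
def objective(x):
    return (x - 3) ** 2

# Open a parent MLflow run before calling fmin() so the automated
# tracking in Databricks Runtime ML can log the tuning under it.
with mlflow.start_run():
    best = fmin(
        fn=objective,
        space=hp.uniform("x", -10, 10),
        algo=tpe.suggest,
        max_evals=32,
        trials=SparkTrials(parallelism=4),
    )
```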