Parallelize Hyperopt hyperparameter tuning

Note

The open-source version of Hyperopt is no longer being maintained.

Hyperopt will be removed in the next major Databricks Runtime ML version. Azure Databricks recommends using Optuna for single-node optimization, or Ray Tune for a distributed hyperparameter tuning experience similar to the deprecated Hyperopt functionality. Learn more about using Ray Tune on Azure Databricks.

This notebook shows how to use Hyperopt to parallelize hyperparameter tuning calculations. It uses the SparkTrials class to automatically distribute trials across the cluster workers. It also illustrates automated MLflow tracking of Hyperopt runs so that you can save and review the results later.
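For reference, the sketch below shows the basic SparkTrials pattern the notebook demonstrates. The dataset, model, search space, and parallelism value are illustrative assumptions rather than the notebook's exact contents; the code is intended to run on a Databricks Runtime ML cluster, where runs launched through SparkTrials are tracked in MLflow automatically.

```python
# Minimal sketch: distributed tuning with Hyperopt's SparkTrials.
# The objective, search space, and max_evals below are illustrative
# assumptions, not taken from the linked notebook.
from hyperopt import fmin, tpe, hp, SparkTrials, STATUS_OK
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def objective(C):
    # Train and evaluate one model per trial; Hyperopt minimizes the loss,
    # so return negative accuracy.
    clf = SVC(C=C)
    accuracy = cross_val_score(clf, X, y).mean()
    return {"loss": -accuracy, "status": STATUS_OK}

# Search over the SVM regularization parameter C.
search_space = hp.lognormal("C", 0, 1.0)

# SparkTrials sends individual trials to Spark workers; parallelism caps
# how many trials run concurrently.
spark_trials = SparkTrials(parallelism=4)

best = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,
    max_evals=16,
    trials=spark_trials,
)
print(best)
```

Choosing `parallelism` involves a trade-off: higher values finish more trials sooner, but adaptive algorithms such as TPE have fewer completed results to learn from when proposing new trials.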

Parallelize hyperparameter tuning with automated MLflow tracking notebook


After you run the last cell in the notebook, the MLflow UI displays the logged Hyperopt runs:

(Screenshot: Hyperopt MLflow demo)