Hyperparameter tuning

Python libraries like Optuna, Ray Tune, and Hyperopt simplify and automate hyperparameter tuning to efficiently find an optimal set of hyperparameters for machine learning models. These libraries scale across multiple compute resources to find good hyperparameters quickly, with minimal manual orchestration and configuration.

Optuna

Optuna is a lightweight framework that makes it easy to define a dynamic search space for hyperparameter tuning and model selection. Optuna includes state-of-the-art optimization algorithms, such as efficient samplers and trial-pruning strategies.

Optuna can be easily parallelized with Joblib to scale workloads, and integrated with MLflow to track hyperparameters and metrics across trials.
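As a minimal single-node sketch (the objective is a toy quadratic and all names are illustrative, not from the Optuna docs), a study defines its search space inside the objective function and runs trials in parallel with the `n_jobs` argument:

```python
import optuna


def objective(trial):
    # Define-by-run search space: parameters are suggested inside the function.
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2  # toy objective to minimize


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100, n_jobs=4)  # n_jobs runs trials in parallel
print(study.best_params)
```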

To get started with Optuna, see Hyperparameter tuning with Optuna.

Ray Tune

Databricks Runtime ML includes Ray, an open-source framework used for parallel compute processing. Ray Tune is a hyperparameter tuning library that comes with Ray and uses Ray as a backend for distributed computing.

For details on how to run Ray on Databricks, see What is Ray on Azure Databricks?. For examples of Ray Tune, see Ray Tune documentation.
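As a hedged sketch of the Tuner API (again with a toy objective and illustrative names; it assumes Ray is already initialized on your cluster), a search space is passed as `param_space` and results are collected from `tuner.fit()`:

```python
from ray import tune


def objective(config):
    # Toy objective: minimize (x - 3)^2; return final metrics as a dict.
    return {"score": (config["x"] - 3) ** 2}


tuner = tune.Tuner(
    objective,
    param_space={"x": tune.uniform(-10, 10)},
    tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=20),
)
results = tuner.fit()
print(results.get_best_result().config)
```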

Hyperopt

Note

The open-source version of Hyperopt is no longer being maintained.

Hyperopt will be removed in the next major Databricks Runtime ML version. Azure Databricks recommends using either Optuna for single-node optimization or Ray Tune for a similar experience to the deprecated Hyperopt distributed hyperparameter tuning functionality. Learn more about using Ray Tune on Azure Databricks.

Hyperopt is a Python library used for distributed hyperparameter tuning and model selection. Hyperopt works with distributed ML algorithms such as Apache Spark MLlib and Horovod, as well as with single-machine ML frameworks such as scikit-learn and TensorFlow.
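For illustration only (given the deprecation note above), the typical pattern passes an objective and a search space to `fmin`, with `SparkTrials` distributing trials across a cluster; the objective here is a toy stand-in for real model training:

```python
from hyperopt import SparkTrials, fmin, hp, tpe


def objective(params):
    # Toy loss; in practice, train a model and return its validation loss.
    return (params["x"] - 2) ** 2


search_space = {"x": hp.uniform("x", -10, 10)}

# SparkTrials runs trials in parallel on Spark workers.
best = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,
    max_evals=50,
    trials=SparkTrials(parallelism=4),
)
print(best)
```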

To get started using Hyperopt, see Use distributed training algorithms with Hyperopt.

MLlib automated MLflow tracking

Note

MLlib automated MLflow tracking is deprecated and disabled by default on clusters that run Databricks Runtime 10.4 LTS ML and above.

Instead, use MLflow PySpark ML autologging by calling mlflow.pyspark.ml.autolog(), which is enabled by default with Databricks Autologging.

With MLlib automated MLflow tracking, when you run tuning code that uses CrossValidator or TrainValidationSplit, hyperparameters and evaluation metrics are automatically logged in MLflow.
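As a minimal sketch of the recommended autologging approach with CrossValidator (it assumes `train_df` is an existing Spark DataFrame with `features` and `label` columns; that name is hypothetical):

```python
import mlflow
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Enable PySpark ML autologging (on by default with Databricks Autologging).
mlflow.pyspark.ml.autolog()

lr = LogisticRegression()
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(
    estimator=lr,
    estimatorParamMaps=grid,
    evaluator=BinaryClassificationEvaluator(),
    numFolds=3,
)

# Hyperparameters and evaluation metrics are logged to MLflow automatically.
cv_model = cv.fit(train_df)  # train_df: assumed DataFrame with "features"/"label"
```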