Launch HPO Experiment Locally

This launcher script, along with several of the examples below, uses the following train_height.py training script:
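
The script itself is not reproduced here. As a rough sketch (the exact metric formula and argument names are assumptions), it reports a synthetic metric over a number of steps through Syne Tune's Reporter:

    # train_height.py (sketch): report a synthetic "mean_loss" once per step
    import time
    from argparse import ArgumentParser

    from syne_tune import Reporter

    if __name__ == "__main__":
        parser = ArgumentParser()
        parser.add_argument("--steps", type=int, default=100)
        parser.add_argument("--width", type=float, default=1.0)
        parser.add_argument("--height", type=float, default=1.0)
        args, _ = parser.parse_known_args()

        report = Reporter()
        for step in range(args.steps):
            time.sleep(0.1)  # stand-in for real work
            loss = 1.0 / (0.1 + args.width * step / 100) + 0.1 * args.height
            # each call sends one row of results back to the Tuner
            report(step=step, mean_loss=loss, epoch=step + 1)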

Launch HPO Experiment with Python Backend

The Python backend does not need a separate training script.
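
Instead, the function to tune is passed to the backend directly. A minimal sketch, assuming the train_height example from above (metric name and config space are illustrative):

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend import PythonBackend
    from syne_tune.config_space import randint
    from syne_tune.optimizer.baselines import RandomSearch

    def train_height(steps: int, width: float, height: float):
        # imports must live inside the function: PythonBackend serializes
        # it and runs it in a separate process
        from syne_tune import Reporter

        report = Reporter()
        for step in range(steps):
            loss = 1.0 / (0.1 + width * step / 100) + 0.1 * height
            report(step=step, mean_loss=loss)

    config_space = {"steps": 100, "width": randint(1, 20), "height": randint(1, 20)}
    tuner = Tuner(
        trial_backend=PythonBackend(tune_function=train_height, config_space=config_space),
        scheduler=RandomSearch(config_space, metric="mean_loss", mode="min"),
        stop_criterion=StoppingCriterion(max_wallclock_time=30),
        n_workers=4,
    )
    tuner.run()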

Population-Based Training (PBT)

This launcher script is using the following pbt_example.py training script:

For this toy example, PBT is run with a population size of 2, so only two parallel workers are needed. To use PBT competitively, choose the SageMaker backend. Note that PBT requires your training script to support checkpointing, as sketched below.
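
Checkpointing means the script can save its state to, and restore it from, the checkpoint directory which Syne Tune passes on the command line. A rough sketch of the pattern (the state layout and update rule are placeholders):

    import json
    import os
    from argparse import ArgumentParser

    from syne_tune import Reporter
    from syne_tune.constants import ST_CHECKPOINT_DIR

    parser = ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.01)
    parser.add_argument(f"--{ST_CHECKPOINT_DIR}", type=str, default=None)
    args, _ = parser.parse_known_args()

    checkpoint_dir = getattr(args, ST_CHECKPOINT_DIR)
    state_path = None if checkpoint_dir is None else os.path.join(checkpoint_dir, "state.json")
    start_step, theta = 0, 0.0
    if state_path is not None and os.path.exists(state_path):
        # resume: PBT may restart this trial from a copied checkpoint
        with open(state_path) as f:
            state = json.load(f)
        start_step, theta = state["step"], state["theta"]

    report = Reporter()
    for step in range(start_step, 100):
        theta += args.lr  # placeholder for a real training update
        if state_path is not None:
            os.makedirs(checkpoint_dir, exist_ok=True)
            with open(state_path, "w") as f:
                json.dump({"step": step + 1, "theta": theta}, f)
        report(step=step, mean_loss=abs(1.0 - theta))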

Visualize Tuning Progress with Tensorboard

Requirements:

  • Needs tensorboardX to be installed: pip install tensorboardX.

Makes use of train_height.py.

Tensorboard visualization works by using a callback, for example TensorboardCallback, which is passed to the Tuner. In order to visualize other metrics, you may have to modify this callback.
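
A sketch of wiring the callback into the Tuner (import path and constructor arguments are assumptions; check the signature in your version):

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend import LocalBackend
    from syne_tune.callbacks.tensorboard_callback import TensorboardCallback
    from syne_tune.config_space import randint
    from syne_tune.optimizer.baselines import RandomSearch

    config_space = {"steps": 100, "width": randint(1, 20), "height": randint(1, 20)}
    tuner = Tuner(
        trial_backend=LocalBackend(entry_point="train_height.py"),
        scheduler=RandomSearch(config_space, metric="mean_loss", mode="min"),
        stop_criterion=StoppingCriterion(max_wallclock_time=30),
        n_workers=4,
        # writes results in Tensorboard format; point `tensorboard --logdir`
        # at the experiment path once tuning has started
        callbacks=[TensorboardCallback(target_metric="mean_loss", mode="min")],
    )
    tuner.run()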

Bayesian Optimization with Scikit-learn Based Surrogate Model

Requirements:

  • Needs scikit-learn to be installed. If you installed Syne Tune with sklearn or basic, this dependency is included.

In this example, a simple new surrogate model is implemented based on sklearn.linear_model.BayesianRidge, and Bayesian optimization is run with this surrogate model rather than a Gaussian process model.
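
What the searcher needs from a surrogate is a predictive mean and a predictive standard deviation for candidate configurations. BayesianRidge provides both (plain scikit-learn sketch, independent of Syne Tune's wrapper classes):

    import numpy as np
    from sklearn.linear_model import BayesianRidge

    # toy data: X are encoded configurations, y the observed metric values
    rng = np.random.RandomState(0)
    X = rng.uniform(size=(20, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(20)

    model = BayesianRidge().fit(X, y)
    # predictive mean and standard deviation, as consumed by the
    # acquisition function (e.g. expected improvement)
    mu, sigma = model.predict(rng.uniform(size=(5, 3)), return_std=True)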

Launch HPO Experiment with Simulator Backend

Requirements:

  • The Syne Tune blackbox-repository dependencies need to be installed.

  • Needs the nasbench201 blackbox to be downloaded and preprocessed. This can take quite a while when done for the first time.

  • If AWS SageMaker is used or an S3 bucket is accessible, the blackbox files are uploaded to your S3 bucket.

In this example, we use the simulator backend with the NASBench-201 blackbox. Since time is simulated, we can use max_wallclock_time=3600 (one hour), yet the experiment finishes in mere seconds. More details about the simulator backend are found in this tutorial.
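
A sketch of the setup (dataset, metric, and resource attribute names are assumptions; the simulator needs its own callback so that simulated time advances):

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend.simulator_backend.simulator_callback import SimulatorCallback
    from syne_tune.blackbox_repository import BlackboxRepositoryBackend
    from syne_tune.optimizer.baselines import ASHA

    # serves tabulated evaluations instead of running real training jobs
    trial_backend = BlackboxRepositoryBackend(
        blackbox_name="nasbench201",
        dataset="cifar100",                       # assumption
        elapsed_time_attr="metric_elapsed_time",  # assumption
    )
    blackbox = trial_backend.blackbox
    scheduler = ASHA(
        config_space=blackbox.configuration_space,
        metric="metric_valid_error",  # assumption
        mode="min",
        resource_attr="hp_epoch",     # assumption
        max_t=200,                    # assumption: largest fidelity
    )
    tuner = Tuner(
        trial_backend=trial_backend,
        scheduler=scheduler,
        stop_criterion=StoppingCriterion(max_wallclock_time=3600),
        n_workers=4,
        sleep_time=0,                     # no waiting: time is simulated
        callbacks=[SimulatorCallback()],  # advances the simulated clock
    )
    tuner.run()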

Multi-objective Asynchronous Successive Halving (MOASHA)

This launcher script uses the following mo_artificial.py training script:
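
The script reports several objectives per step. On the launcher side, MOASHA takes a list of metrics rather than a single one (a sketch; objective names and the config space are assumptions):

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend import LocalBackend
    from syne_tune.config_space import uniform
    from syne_tune.optimizer.baselines import MOASHA

    config_space = {"theta": uniform(0.0, 1.0)}  # assumption
    scheduler = MOASHA(
        config_space,
        metrics=["y1", "y2"],  # assumption: the two reported objectives
        mode="min",
        time_attr="step",      # assumption: fidelity attribute in reports
    )
    tuner = Tuner(
        trial_backend=LocalBackend(entry_point="mo_artificial.py"),
        scheduler=scheduler,
        stop_criterion=StoppingCriterion(max_wallclock_time=30),
        n_workers=4,
    )
    tuner.run()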

PASHA: Efficient HPO and NAS with Progressive Resource Allocation

Requirements:

  • The Syne Tune blackbox-repository dependencies need to be installed.

  • Needs the nasbench201 blackbox to be downloaded and preprocessed. This can take quite a while when done for the first time.

PASHA typically uses max_num_trials_completed as the stopping criterion, as sketched below. After finding a strong configuration with PASHA, the next step is to fully train a model with that configuration.
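
For instance (a minimal sketch; the trial count is arbitrary):

    from syne_tune import StoppingCriterion

    # stop once 10 trials have run to completion, instead of a time budget
    stop_criterion = StoppingCriterion(max_num_trials_completed=10)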

Constrained Bayesian Optimization

This launcher script uses the following train_constrained_example.py training script:
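
The script reports a constraint metric alongside the objective. On the launcher side, a sketch of constrained BO (assuming the bayesopt_constrained searcher, with the convention that the constraint is satisfied when its reported value is non-positive; all names are illustrative):

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend import LocalBackend
    from syne_tune.config_space import uniform
    from syne_tune.optimizer.schedulers import FIFOScheduler

    config_space = {"x1": uniform(-1.0, 1.0), "x2": uniform(-1.0, 1.0)}  # assumption
    scheduler = FIFOScheduler(
        config_space,
        searcher="bayesopt_constrained",
        # assumption: the script reports this attribute; <= 0 means feasible
        search_options={"constraint_attr": "constraint_metric"},
        metric="objective",
        mode="min",
    )
    tuner = Tuner(
        trial_backend=LocalBackend(entry_point="train_constrained_example.py"),
        scheduler=scheduler,
        stop_criterion=StoppingCriterion(max_wallclock_time=30),
        n_workers=4,
    )
    tuner.run()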

Restrict Scheduler to Tabulated Configurations with Simulator Backend

Requirements:

  • The Syne Tune blackbox-repository dependencies need to be installed.

  • Needs the lcbench blackbox to be downloaded and preprocessed. This can take quite a while when done for the first time.

  • If AWS SageMaker is used or an S3 bucket is accessible, the blackbox files are uploaded to your S3 bucket.

This example is similar to the one above, but here we use the tabulated LCBench benchmark, whose configuration space is infinite, and whose objective values have not been evaluated on a grid. With such a benchmark, we can either use a surrogate to interpolate objective values, or we can restrict the scheduler to only suggest configurations which have been observed in the benchmark. This example demonstrates the latter; see the sketch below.

Since time is simulated, we can use max_wallclock_time=3600 (one hour), yet the experiment finishes in mere seconds. More details about the simulator backend are found in this tutorial.
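
A sketch of the restriction itself, assuming the restrict_configurations entry of search_options and that the tabulated blackbox exposes its configurations as a DataFrame (both are assumptions to verify against your version):

    from syne_tune.blackbox_repository import BlackboxRepositoryBackend
    from syne_tune.optimizer.baselines import BayesianOptimization

    trial_backend = BlackboxRepositoryBackend(
        blackbox_name="lcbench",
        dataset="Fashion-MNIST",   # assumption
        elapsed_time_attr="time",  # assumption
    )
    blackbox = trial_backend.blackbox
    # assumption: tabulated configurations as a list of dicts
    configs = blackbox.hyperparameters.to_dict(orient="records")
    scheduler = BayesianOptimization(
        config_space=blackbox.configuration_space,
        metric="val_accuracy",  # assumption
        mode="max",
        # only propose configurations that exist in the table
        search_options={"restrict_configurations": configs},
    )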

Tuning Reinforcement Learning

This launcher script uses the following train_cartpole.py training script:

This training script requires the following dependencies to be installed:

examples/training_scripts/rl_cartpole/requirements.txt:

    tensorboardX==2.5.1
    opencv-python
    ray[rllib]==2.9.1
    dm-tree==0.1.8
    gymnasium==0.28.1
    tensorflow==2.12.1
    pygame==2.1.2

Retrieving the Best Checkpoint

This launcher script uses the following xgboost_checkpoint.py training script:
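
Once tuning has finished, the results can be reloaded to locate the best trial, whose checkpoint can then be fetched from that trial's directory (a sketch; the experiment name is a placeholder):

    from syne_tune.experiments import load_experiment

    exp = load_experiment("my-tuning-experiment")  # placeholder name
    # best metric value and the configuration that achieved it
    print(exp.best_config())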

Launch HPO Experiment with Home-Made Scheduler

Makes use of train_height.py.

For a more thorough introduction on how to develop new schedulers and searchers in Syne Tune, consider this tutorial.
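
To give a flavor, here is a hedged sketch of a minimal home-made scheduler that suggests random configurations and never stops trials (method names follow TrialScheduler; details may differ across versions):

    from syne_tune.backend.trial_status import Trial
    from syne_tune.optimizer.scheduler import (
        SchedulerDecision,
        TrialScheduler,
        TrialSuggestion,
    )

    class SimpleRandomScheduler(TrialScheduler):
        def __init__(self, config_space: dict, metric: str):
            super().__init__(config_space)
            self.metric = metric

        def _suggest(self, trial_id: int) -> TrialSuggestion:
            # draw each hyperparameter independently from its domain;
            # plain values in the config space are kept as constants
            config = {
                name: domain.sample() if hasattr(domain, "sample") else domain
                for name, domain in self.config_space.items()
            }
            return TrialSuggestion.start_suggestion(config)

        def on_trial_result(self, trial: Trial, result: dict) -> str:
            return SchedulerDecision.CONTINUE  # never pause or stop trials

        def metric_names(self):
            return [self.metric]

        def metric_mode(self) -> str:
            return "min"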

Launch HPO Experiment on mlp_fashionmnist Benchmark

Requirements:

  • Needs “mlp_fashionmnist” benchmark, which requires Syne Tune to have been installed from source.

In this example, we tune one of the built-in benchmark problems, which is useful in order to compare different HPO methods. More details on benchmarking are provided in this tutorial.

Transfer Tuning on NASBench-201

Requirements:

  • The Syne Tune blackbox-repository dependencies need to be installed.

  • Needs the nasbench201 blackbox to be downloaded and preprocessed. This can take quite a while when done for the first time.

  • If AWS SageMaker is used or an S3 bucket is accessible, the blackbox files are uploaded to your S3 bucket.

In this example, we use the simulator backend with the NASBench-201 blackbox. It serves as a simple demonstration of how evaluations from related tasks can be used to speed up HPO.

Transfer Learning Example

Requirements:

  • Needs matplotlib to be installed if the plotting flag is given: pip install matplotlib. If you installed Syne Tune with visual or extra, this dependency is included.

An example of how to use evaluations collected in Syne Tune to run a transfer learning scheduler. Makes use of train_height.py. Used in the transfer learning tutorial. To plot the figures, run as python launch_transfer_learning_example.py --generate_plots.

Plot Results of Tuning Experiment

Requirements:

  • Needs matplotlib to be installed: pip install matplotlib. If you installed Syne Tune with visual or extra, this dependency is included.

Makes use of train_height.py.

Resume a Tuning Job

Customize Results Written during an Experiment

Makes use of train_height.py.

An example of how to append extra results to those written by default to results.csv.zip. This is done by customizing the StoreResultsCallback, as sketched below.
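
A sketch of the pattern (the import path of StoreResultsCallback has moved across versions, and the reported metric name is an assumption):

    import numpy as np

    from syne_tune.results_callback import StoreResultsCallback

    class ExtraResultsCallback(StoreResultsCallback):
        def on_trial_result(self, trial, status, result, decision):
            # assumption: the training script reports "mean_loss"
            result["log10_mean_loss"] = float(np.log10(result["mean_loss"]))
            super().on_trial_result(trial, status, result, decision)

The customized callback is then passed to the Tuner via its callbacks argument.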

Pass Configuration as JSON File to Training Script

Makes use of the following train_height_config_json.py training script:
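
On the script side, the pattern looks roughly as follows (assuming the helpers add_config_json_to_argparse and load_config_json from syne_tune.utils):

    from argparse import ArgumentParser

    from syne_tune import Reporter
    from syne_tune.utils import add_config_json_to_argparse, load_config_json

    if __name__ == "__main__":
        parser = ArgumentParser()
        # adds the command line argument pointing to the JSON config file
        add_config_json_to_argparse(parser)
        args, _ = parser.parse_known_args()
        # loads the full configuration from that file
        config = load_config_json(vars(args))

        report = Reporter()
        for step in range(int(config["steps"])):
            loss = 1.0 / (0.1 + config["width"] * step / 100) + 0.1 * config["height"]
            report(step=step, mean_loss=loss)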

Speculative Early Checkpoint Removal

Requirements:

  • Needs “mlp_fashionmnist” benchmark, which requires Syne Tune to have been installed from source.

This example uses the mlp_fashionmnist benchmark. It runs for about 30 minutes. It demonstrates speculative early checkpoint removal for MOBSTER with promotion scheduling (pause and resume).

Launch HPO Experiment with Ray Tune Scheduler

Makes use of train_height.py.

Stand-Alone Bayesian Optimization

Syne Tune combines a scheduler (HPO algorithm) with a backend to provide a complete HPO solution. If you already have a system in place for job scheduling and managing the state of the tuning problem, you may want to call the scheduler on its own. This example demonstrates how to do this for Gaussian process based Bayesian optimization.
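
A sketch of driving the scheduler by hand (the config space, metric, and synthetic evaluation are illustrative):

    from datetime import datetime

    from syne_tune.backend.trial_status import Trial
    from syne_tune.config_space import uniform
    from syne_tune.optimizer.baselines import BayesianOptimization

    config_space = {"x": uniform(-1.0, 1.0)}
    scheduler = BayesianOptimization(config_space, metric="loss", mode="min")

    for trial_id in range(20):
        suggestion = scheduler.suggest(trial_id)
        trial = Trial(trial_id=trial_id, config=suggestion.config, creation_time=datetime.now())
        # your own system evaluates the configuration here
        loss = (suggestion.config["x"] - 0.3) ** 2
        scheduler.on_trial_complete(trial, result={"loss": loss})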

Ask Tell Interface

This is an example of how to use Syne Tune in ask-tell mode. In this setup, the tuning loop and the experiments are disentangled: the ask-tell scheduler suggests new configurations, and the user runs experiments to evaluate each of them. Once done, the user feeds the results back into the scheduler, which uses the data to suggest better configurations.

In some cases, the experiments needed for function evaluations can be very complex and require extra orchestration (examples range from setting up jobs on non-AWS clusters to running physical lab experiments), in which case this interface provides all the necessary flexibility.
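
The ask-tell wrapper lives in the example script itself rather than in the library; a minimal sketch of the same idea:

    from datetime import datetime

    from syne_tune.backend.trial_status import Trial
    from syne_tune.optimizer.scheduler import TrialScheduler

    class AskTell:
        """Thin ask/tell wrapper around a Syne Tune scheduler (sketch)."""

        def __init__(self, scheduler: TrialScheduler):
            self.scheduler = scheduler
            self.trials = {}

        def ask(self) -> Trial:
            trial_id = len(self.trials)
            suggestion = self.scheduler.suggest(trial_id)
            trial = Trial(
                trial_id=trial_id,
                config=suggestion.config,
                creation_time=datetime.now(),
            )
            self.trials[trial_id] = trial
            return trial

        def tell(self, trial: Trial, result: dict):
            # feed the observation back, so later asks improve
            self.scheduler.on_trial_complete(trial, result)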

Ask Tell interface for Hyperband

This is an extension of launch_ask_tell_scheduler.py to run multi-fidelity methods such as Hyperband.

Multi Objective Multi Surrogate (MSMOS) Searcher

This example shows how to use the multi-objective multi-surrogate (MSMOS) searcher to tune a multi-objective problem. Here, two Gaussian process regressors serve as the surrogate models, and a lower confidence bound random scalarization is used as the acquisition function. That said, any Syne Tune Estimator can be used as a surrogate.
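
The scalarization at the core of the searcher can be stated compactly: draw random convex weights, compute a lower confidence bound per objective from each surrogate's posterior, and combine them. A plain numpy sketch of the idea (not Syne Tune's implementation):

    import numpy as np

    def lcb_random_scalarization(means, stds, kappa=1.0, rng=None):
        """means, stds: arrays of shape (n_candidates, n_objectives)."""
        rng = rng or np.random.RandomState()
        weights = rng.dirichlet(np.ones(means.shape[1]))  # random convex weights
        lcb = means - kappa * stds  # optimistic value per objective (minimizing)
        return lcb @ weights        # lower is better: argmin gives the candidate

    rng = np.random.RandomState(0)
    scores = lcb_random_scalarization(rng.rand(5, 2), 0.1 * rng.rand(5, 2), rng=rng)
    best_candidate = int(np.argmin(scores))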