Installation

To install Syne Tune via pip, run:

pip install 'syne-tune[basic]'

For development, you need to install Syne Tune from source:

git clone https://github.com/awslabs/syne-tune.git
cd syne-tune
python3 -m venv st_venv
. st_venv/bin/activate
pip install --upgrade pip
pip install -e '.[basic,dev]'

This installs Syne Tune in a virtual environment st_venv. Remember to activate this environment before working with Syne Tune. We also recommend rebuilding the virtual environment from scratch from time to time, in particular when you pull a new release, as dependencies may have changed.

See our change log to check what has changed in the latest version.

In the examples above, Syne Tune is installed with the tag basic, which collects a reasonable number of dependencies. If you want to install all dependencies, replace basic with extra. You can further refine this selection by using partial dependencies.
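For example, to pull in all dependencies instead, install with the extra tag (and likewise for the source install):

pip install 'syne-tune[extra]'
# or, for a source install:
pip install -e '.[extra,dev]'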

What Is Hyperparameter Optimization?

Here is an introduction to hyperparameter optimization in the context of deep learning; it uses Syne Tune for some of its examples.

First Example

To enable tuning, you have to report metrics from a training script so that they can be communicated later to Syne Tune. This can be accomplished by calling report(epoch=epoch, loss=loss), as shown in this example:

train_height_simple.py
import logging
import time

from syne_tune import Reporter
from argparse import ArgumentParser

if __name__ == "__main__":
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    parser = ArgumentParser()
    parser.add_argument("--epochs", type=int)
    parser.add_argument("--width", type=float)
    parser.add_argument("--height", type=float)
    args, _ = parser.parse_known_args()
    report = Reporter()

    for step in range(args.epochs):
        time.sleep(0.1)
        dummy_score = 1.0 / (0.1 + args.width * step / 100) + args.height * 0.1
        # Feed the score back to Syne Tune
        report(epoch=step + 1, mean_loss=dummy_score)

Once you have annotated your training script in this way, you can launch a tuning experiment as follows:

launch_height_simple.py
from pathlib import Path

from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import ASHA

# Hyperparameter configuration space
config_space = {
    "width": randint(1, 20),
    "height": randint(1, 20),
    "epochs": 100,
}
# Scheduler (i.e., HPO algorithm)
scheduler = ASHA(
    config_space,
    metric="mean_loss",
    resource_attr="epoch",
    max_resource_attr="epochs",
    search_options={"debug_log": False},
)

entry_point = str(
    Path(__file__).parent
    / "training_scripts"
    / "height_example"
    / "train_height_simple.py"
)
tuner = Tuner(
    trial_backend=LocalBackend(entry_point=entry_point),
    scheduler=scheduler,
    stop_criterion=StoppingCriterion(max_wallclock_time=30),
    n_workers=4,  # how many trials are evaluated in parallel
)
tuner.run()

This example runs ASHA with n_workers=4 workers evaluating trials asynchronously in parallel, for max_wallclock_time=30 seconds, on the local machine it is called on (trial_backend=LocalBackend(entry_point=entry_point)).
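Once the experiment has finished, you can load and inspect its results. Here is a minimal sketch, assuming that load_experiment and the best_config and plot helpers of the returned result object are available in your version of Syne Tune:

from syne_tune.experiments import load_experiment

# tuner.name is the name under which results were stored by tuner.run()
tuning_experiment = load_experiment(tuner.name)
print(tuning_experiment.best_config())  # best configuration found
tuning_experiment.plot()  # best metric value over wall-clock time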

Experimentation with Syne Tune

If you plan to use advanced features of Syne Tune, such as different execution backends or running experiments remotely, writing launcher scripts like examples/launch_height_simple.py can become tedious. Syne Tune provides an advanced experimentation framework, which you can learn about in this tutorial or in this one. Examples for the experimentation framework are given in benchmarking.examples and benchmarking.nursery.

Supported HPO Methods

The following hyperparameter optimization (HPO) methods are available in Syne Tune:

| Method | Reference | Searcher | Asynchronous? | Multi-fidelity? | Transfer? |
|---|---|---|---|---|---|
| Grid Search | | deterministic | yes | no | no |
| Random Search | Bergstra, et al. (2011) | random | yes | no | no |
| Bayesian Optimization | Snoek, et al. (2012) | model-based | yes | no | no |
| BORE | Tiao, et al. (2021) | model-based | yes | no | no |
| MedianStoppingRule | Golovin, et al. (2017) | any | yes | yes | no |
| SyncHyperband | Li, et al. (2018) | random | no | yes | no |
| SyncBOHB | Falkner, et al. (2018) | model-based | no | yes | no |
| SyncMOBSTER | Klein, et al. (2020) | model-based | no | yes | no |
| ASHA | Li, et al. (2019) | random | yes | yes | no |
| BOHB | Falkner, et al. (2018) | model-based | yes | yes | no |
| MOBSTER | Klein, et al. (2020) | model-based | yes | yes | no |
| DEHB | Awad, et al. (2021) | evolutionary | no | yes | no |
| HyperTune | Li, et al. (2022) | model-based | yes | yes | no |
| DyHPO * | Wistuba, et al. (2022) | model-based | yes | yes | no |
| ASHABORE | Tiao, et al. (2021) | model-based | yes | yes | no |
| PASHA | Bohdal, et al. (2022) | random | yes | yes | no |
| REA | Real, et al. (2019) | evolutionary | yes | no | no |
| KDE | Falkner, et al. (2018) | model-based | yes | no | no |
| PopulationBasedTraining | Jaderberg, et al. (2017) | evolutionary | no | yes | no |
| ZeroShotTransfer | Wistuba, et al. (2015) | deterministic | yes | no | yes |
| ASHA-CTS (ASHACTS) | Salinas, et al. (2021) | random | yes | yes | yes |
| RUSH (RUSHScheduler) | Zappella, et al. (2021) | random | yes | yes | yes |
| BoundingBox | Perrone, et al. (2019) | any | yes | yes | yes |

*: We implement the model-based scheduling logic of DyHPO, but use the same Gaussian process surrogate models as MOBSTER and HyperTune. The original source code for the paper is here.

The searchers fall into four broad categories: deterministic, random, evolutionary, and model-based. Random searchers sample candidate hyperparameter configurations uniformly at random, while model-based searchers sample them non-uniformly at random, according to a model (e.g., Gaussian process, density ratio estimator) and an acquisition function. Evolutionary searchers make use of an evolutionary algorithm.
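For instance, the scheduler in launch_height_simple.py above can be swapped for a random or a model-based baseline. Here is a minimal sketch, assuming the RandomSearch and BayesianOptimization baselines accept config_space, metric, and mode in the same way as ASHA does in the example:

from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import RandomSearch, BayesianOptimization

config_space = {
    "width": randint(1, 20),
    "height": randint(1, 20),
    "epochs": 100,
}

# Random searcher: draws candidate configurations uniformly at random
scheduler = RandomSearch(config_space, metric="mean_loss", mode="min")

# Model-based searcher: fits a Gaussian process surrogate to past results
# and proposes new configurations via an acquisition function
scheduler = BayesianOptimization(config_space, metric="mean_loss", mode="min")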

Syne Tune also supports BoTorch searchers; see BoTorch.

Supported Multi-objective Optimization Methods

| Method | Reference | Searcher | Asynchronous? | Multi-fidelity? | Transfer? |
|---|---|---|---|---|---|
| ConstrainedBayesianOptimization | Gardner, et al. (2014) | model-based | yes | no | no |
| MOASHA | Schmucker, et al. (2021) | random | yes | yes | no |
| NSGA2 | Deb, et al. (2002) | evolutionary | no | no | no |
| MORandomScalarizationBayesOpt | Paria, et al. (2018) | model-based | yes | no | no |
| MOLinearScalarizationBayesOpt | | model-based | yes | no | no |

The HPO methods listed above can also be used in a multi-objective setting by scalarization (LinearScalarizationPriority) or non-dominated sorting (NonDominatedPriority).
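As an illustration, here is a minimal sketch of a multi-objective launcher using MOASHA from the table above. It assumes a constructor with config_space, metrics, mode, time_attr, and max_t arguments (as in recent Syne Tune versions), and a hypothetical training script train_multi_objective.py that reports both metrics once per epoch:

from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.schedulers.multiobjective import MOASHA

config_space = {
    "width": randint(1, 20),
    "height": randint(1, 20),
    "epochs": 100,
}

# MOASHA optimizes several metrics at once; the training script is assumed
# to call report(epoch=..., loss=..., cost=...) once per epoch
scheduler = MOASHA(
    config_space=config_space,
    metrics=["loss", "cost"],
    mode=["min", "min"],
    time_attr="epoch",
    max_t=100,
)

tuner = Tuner(
    trial_backend=LocalBackend(entry_point="train_multi_objective.py"),  # hypothetical script
    scheduler=scheduler,
    stop_criterion=StoppingCriterion(max_wallclock_time=30),
    n_workers=4,
)
tuner.run()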

Security

See CONTRIBUTING for more information.

Citing Syne Tune

If you use Syne Tune in a scientific publication, please cite the following paper:

Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research

@inproceedings{
    salinas2022syne,
    title = {{Syne Tune}: A Library for Large Scale Hyperparameter Tuning and Reproducible Research},
    author = {David Salinas and Matthias Seeger and Aaron Klein and Valerio Perrone and Martin Wistuba and Cedric Archambeau},
    booktitle = {International Conference on Automated Machine Learning, AutoML 2022},
    year = {2022},
    url = {https://proceedings.mlr.press/v188/salinas22a.html}
}

License

This project is licensed under the Apache-2.0 License.