## Installation
To install Syne Tune via pip, run:

```bash
pip install 'syne-tune[extra]'
```
For development, or for using the benchmarking framework to run many experiments in parallel, you need to install Syne Tune from source:
```bash
git clone https://github.com/awslabs/syne-tune.git
cd syne-tune
python3 -m venv st_venv
. st_venv/bin/activate
pip install --upgrade pip
pip install -e '.[extra]'
```
This installs everything in a virtual environment `st_venv`. Remember to activate this environment before working with Syne Tune. We also recommend rebuilding the virtual environment from scratch now and then, in particular when you pull a new release, as dependencies may have changed.
See our change log to check what has changed in the latest version.
In the examples above, Syne Tune is installed with the union of all its dependencies, which can be a lot. If you only need specific features, you may be able to use partial dependencies.
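For example, if all you need are the Gaussian-process-based searchers, an install along the following lines may be enough (the `gpsearchers` extras tag is an assumption here; check the documentation for the current list of tags):

```bash
# Partial install: pull only the dependencies needed for GP-based searchers
pip install 'syne-tune[gpsearchers]'
```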
## First Example
To enable tuning, you have to report metrics from the training script so that they can be communicated to Syne Tune. This can be accomplished by simply calling `report(epoch=epoch, loss=loss)`, as shown in this example:
```python
import logging
import time
from syne_tune import Reporter
from argparse import ArgumentParser

if __name__ == "__main__":
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    parser = ArgumentParser()
    parser.add_argument("--epochs", type=int)
    parser.add_argument("--width", type=float)
    parser.add_argument("--height", type=float)
    args, _ = parser.parse_known_args()
    report = Reporter()
    for step in range(args.epochs):
        time.sleep(0.1)
        dummy_score = 1.0 / (0.1 + args.width * step / 100) + args.height * 0.1
        # Feed the score back to Syne Tune
        report(epoch=step + 1, mean_loss=dummy_score)
```
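Since this is an ordinary Python script, you can sanity-check it on its own before handing it to the tuner, for instance with `python train_height_simple.py --epochs 5 --width 10 --height 3`.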
Once you have annotated your training script in this way, you can launch a tuning experiment as follows:
```python
from pathlib import Path

from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import ASHA
from syne_tune.backend import LocalBackend
from syne_tune import Tuner, StoppingCriterion

# Hyperparameter configuration space
config_space = {
    "width": randint(1, 20),
    "height": randint(1, 20),
    "epochs": 100,
}

# Scheduler (i.e., HPO algorithm)
scheduler = ASHA(
    config_space,
    metric="mean_loss",
    resource_attr="epoch",
    max_resource_attr="epochs",
    search_options={"debug_log": False},
)

entry_point = str(
    Path(__file__).parent
    / "training_scripts"
    / "height_example"
    / "train_height_simple.py"
)

tuner = Tuner(
    trial_backend=LocalBackend(entry_point=entry_point),
    scheduler=scheduler,
    stop_criterion=StoppingCriterion(max_wallclock_time=30),
    n_workers=4,  # how many trials are evaluated in parallel
)
tuner.run()
```
This example runs ASHA with `n_workers=4` asynchronous workers for `max_wallclock_time=30` seconds on the local machine it is called on (`trial_backend=LocalBackend(entry_point=entry_point)`).
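Once the tuner has stopped, you will typically want to inspect the results. A minimal sketch, assuming the `load_experiment` helper from `syne_tune.experiments` (results are written to a directory identified by `tuner.name`):

```python
from syne_tune.experiments import load_experiment

# Load the results written by the tuner above
tuning_experiment = load_experiment(tuner.name)

# Best configuration found and the metric value it achieved
print(tuning_experiment.best_config())

# Plot the best metric value as a function of wallclock time
tuning_experiment.plot()
```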
## Supported HPO Methods
The following hyperparameter optimization (HPO) methods are available in Syne Tune:
| Method | Searcher | Asynchronous? | Multi-fidelity? | Transfer? |
|---|---|---|---|---|
| Grid Search | deterministic | yes | no | no |
| Random Search | random | yes | no | no |
| Bayesian Optimization | model-based | yes | no | no |
| BORE | model-based | yes | no | no |
| MedianStoppingRule | any | yes | yes | no |
| SyncHyperband | random | no | yes | no |
| SyncBOHB | model-based | no | yes | no |
| SyncMOBSTER | model-based | no | yes | no |
| ASHA | random | yes | yes | no |
| BOHB | model-based | yes | yes | no |
| MOBSTER | model-based | yes | yes | no |
| DEHB | evolutionary | no | yes | no |
| HyperTune | model-based | yes | yes | no |
| DyHPO | model-based | yes | yes | no |
| ASHABORE | model-based | yes | yes | no |
| PASHA | random | yes | yes | no |
| REA | evolutionary | yes | no | no |
| KDE | model-based | yes | no | no |
| PopulationBasedTraining | evolutionary | no | yes | no |
| ZeroShotTransfer | deterministic | yes | no | yes |
| ASHA-CTS | random | yes | yes | yes |
| RUSH | random | yes | yes | yes |
| BoundingBox | any | yes | yes | yes |
The searchers fall into four broad categories: deterministic, random, evolutionary, and model-based. Random searchers sample candidate hyperparameter configurations uniformly at random, while model-based searchers sample them non-uniformly at random, according to a model (e.g., a Gaussian process or a density ratio estimator) and an acquisition function. Evolutionary searchers make use of an evolutionary algorithm.
Syne Tune also supports BoTorch searchers; see BoTorch.
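Switching the HPO method amounts to swapping the scheduler in the launcher script above. As a minimal sketch, a model-based searcher without multi-fidelity could be plugged in via the `BayesianOptimization` baseline (keeping `config_space` as defined above):

```python
from syne_tune.optimizer.baselines import BayesianOptimization

# Model-based alternative to ASHA; unlike ASHA, it only uses the final
# reported metric value of each trial, not per-epoch results.
scheduler = BayesianOptimization(
    config_space,
    metric="mean_loss",
)
```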
## Supported multi-objective optimization methods
| Method | Searcher | Asynchronous? | Multi-fidelity? | Transfer? |
|---|---|---|---|---|
| Constrained Bayesian Optimization | model-based | yes | no | no |
| MOASHA | random | yes | yes | no |
The HPO methods listed above can also be used in a multi-objective setting by scalarization (`LinearScalarizationPriority`) or non-dominated sorting (`NonDominatedPriority`).
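As a minimal sketch of a dedicated multi-objective scheduler, MOASHA can be set up along the following lines; the metric names here are assumptions and must match what the training script reports, and the exact signature may differ between versions:

```python
from syne_tune.optimizer.baselines import MOASHA

# Jointly optimize two reported metrics (names assumed for illustration)
scheduler = MOASHA(
    config_space,
    metrics=["mean_loss", "cost"],
    mode="min",
    time_attr="epoch",
)
```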
## Security
See CONTRIBUTING for more information.
## Citing Syne Tune
If you use Syne Tune in a scientific publication, please cite the following paper:
Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research
```bibtex
@inproceedings{
    salinas2022syne,
    title = {{Syne Tune}: A Library for Large Scale Hyperparameter Tuning and Reproducible Research},
    author = {David Salinas and Matthias Seeger and Aaron Klein and Valerio Perrone and Martin Wistuba and Cedric Archambeau},
    booktitle = {International Conference on Automated Machine Learning, AutoML 2022},
    year = {2022},
    url = {https://proceedings.mlr.press/v188/salinas22a.html}
}
```
## License
This project is licensed under the Apache-2.0 License.