Syne Tune: Large-Scale and Reproducible Hyperparameter Optimization


This package provides state-of-the-art algorithms for hyperparameter optimization (HPO) with the following key features:

  • Wide coverage (>20) of different HPO methods, including:

    • Asynchronous versions to maximize utilization and distributed versions (i.e., with multiple workers);

    • Multi-fidelity methods supporting model-based decisions (BOHB, MOBSTER, Hyper-Tune, DyHPO, BORE);

    • Hyperparameter transfer learning to speed up (repeated) tuning jobs;

    • Multi-objective optimizers that can tune multiple objectives simultaneously (such as accuracy and latency).

  • HPO can be run in different environments (locally, on AWS, or in simulation) by changing just one line of code; see the sketch after this list.

  • Out-of-the-box tabulated benchmarks that allow you to simulate experiments in seconds while preserving the real dynamics of asynchronous or synchronous HPO with any number of workers.
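Below is a minimal sketch of a tuning loop. It assumes a training script train_script.py that accepts lr, batch_size, and epochs as command-line arguments and reports val_loss once per epoch via syne_tune.Reporter; the script name, hyperparameter names, and metric name are placeholders, not part of Syne Tune itself.

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend import LocalBackend
    from syne_tune.config_space import loguniform, randint
    from syne_tune.optimizer.baselines import ASHA

    # Search space; hyperparameter names must match the arguments of the
    # training script (placeholder names here).
    config_space = {
        "lr": loguniform(1e-5, 1e-1),
        "batch_size": randint(16, 256),
        "epochs": 10,
    }

    tuner = Tuner(
        # Swap LocalBackend for another backend (e.g., cloud or simulation)
        # to change the execution environment.
        trial_backend=LocalBackend(entry_point="train_script.py"),
        # ASHA: asynchronous multi-fidelity scheduler; the training script is
        # expected to report val_loss at the end of every epoch.
        scheduler=ASHA(
            config_space,
            metric="val_loss",
            resource_attr="epoch",
            max_resource_attr="epochs",
            mode="min",
        ),
        stop_criterion=StoppingCriterion(max_wallclock_time=600),
        n_workers=4,
    )
    tuner.run()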

What’s New?

  • Andreas Mueller, co-creator and core contributor to scikit-learn, used Syne Tune extensively to optimize parameters of a hypernetwork which solves tabular classification tasks faster than state-of-the-art boosted decision tree algorithms. Check out the video.

  • The experimentation framework of Syne Tune, which provides easy access to all the different methods, execution backends, and ways to run many experiments in parallel, is now available in syne_tune.experiments; there is no longer any need to install from source. This framework is the best place to start serious experimentation work with Syne Tune.

  • New tutorial: Distributed Hyperparameter Tuning: Finding the Right Model can be Fast and Fun. Provides an overview of Syne Tune and its experimentation framework.

  • You can now create comparative plots, combining the results of many experiments, as shown here.

  • Local Backend supports training with more than one GPU per trial.

  • Speculative early checkpoint removal for asynchronous multi-fidelity optimization. Retaining all checkpoints often exhausts all available disk space when training large models. With this feature, Syne Tune automatically removes checkpoints that are unlikely to be needed. Details.

  • New Multi-Objective Scheduler: LinearScalarizedScheduler. It turns a multi-objective problem into a single-objective task by optimizing a linear combination of all objectives, and it works as a wrapper around any single-objective scheduler; see the sketch after this list.

  • Support for the automatic termination criterion proposed by Makarova et al. Instead of defining a fixed number of iterations or a wall-clock time limit, you set a threshold on how much worse the final solution may be compared to the global optimum, and the optimization stops automatically once a solution meets this criterion.
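To illustrate the scalarization idea behind LinearScalarizedScheduler, here is a conceptual sketch (not the Syne Tune API; the metric names and weights are made up):

    # Conceptual sketch of linear scalarization, not the Syne Tune API.
    # Several objectives are combined into one scalar so that any
    # single-objective scheduler can optimize it.
    weights = {"accuracy": -1.0, "latency": 0.1}  # negate metrics to be maximized

    def scalarize(metrics: dict) -> float:
        """Weighted sum of the reported objectives; lower is better."""
        return sum(weights[name] * value for name, value in metrics.items())

    # A trial reporting accuracy=0.92 and latency=0.035s scores:
    print(scalarize({"accuracy": 0.92, "latency": 0.035}))  # -0.9165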

Videos featuring Syne Tune
