syne_tune.optimizer.schedulers.transfer_learning.rush module

class syne_tune.optimizer.schedulers.transfer_learning.rush.RUSHScheduler(config_space, transfer_learning_evaluations, metric, type='stopping', points_to_evaluate=None, custom_rush_points=None, num_hyperparameters_per_task=1, **kwargs)[source]

Bases: TransferLearningMixin, HyperbandScheduler

A transfer learning variation of Hyperband which uses previously well-performing hyperparameter configurations as an initialization. The best hyperparameter configuration of each individual task provided is evaluated. The one among them which performs best on the current task serves as a hurdle and is used to prune other candidates. This changes the standard successive halving promotion as follows: as usual, only the top-performing fraction is promoted to the next rung level, but these candidates must additionally be at least as good as the hurdle configuration to be promoted. In practice, this means that far fewer candidates are promoted; a sketch of this rule is given after the reference below. Reference:

A resource-efficient method for repeated HPO and NAS.
Giovanni Zappella, David Salinas, Cédric Archambeau.
AutoML workshop @ ICML 2021.
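
To make the promotion rule concrete, here is a minimal sketch in plain Python. It is an illustrative assumption of how the two tests compose, not Syne Tune internals; the function name and signature are invented for this example.

    # Minimal sketch of the RUSH promotion rule described above
    # (illustrative only, not Syne Tune code).
    def promote(candidates, hurdle_value, fraction=1 / 3, mode="min"):
        """candidates: dict from trial id to metric value at the current rung."""
        sign = 1 if mode == "min" else -1
        ranked = sorted(candidates, key=lambda trial: sign * candidates[trial])
        # Standard successive halving keeps only the top fraction ...
        top = ranked[: max(1, int(len(ranked) * fraction))]
        # ... RUSH additionally requires each survivor to be at least as
        # good as the hurdle configuration at the same rung level.
        return [t for t in top if sign * candidates[t] <= sign * hurdle_value]

With the hurdle set by the best transferred configuration, a trial must clear both tests, so promotions are strictly rarer than under plain successive halving.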

Additional arguments on top of the parent class HyperbandScheduler.

Parameters:
  • transfer_learning_evaluations (Dict[str, TransferLearningTaskEvaluations]) – Dictionary from task name to offline evaluations.

  • points_to_evaluate (Optional[List[dict]]) – If given, these configurations are evaluated after custom_rush_points and configurations inferred from transfer_learning_evaluations. These points are not used to prune any configurations.

  • custom_rush_points (Optional[List[dict]]) – If given, these configurations are evaluated first, in addition to the top-performing configurations from other tasks, and also serve to preemptively prune underperforming configurations.

  • num_hyperparameters_per_task (int) – The number of top hyperparameter configurations to consider per task. Defaults to 1.
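
The parameters above can be exercised as follows. This is a minimal usage sketch: the task name "task_a", the search space, and the synthetic objective values are placeholders, not part of the API.

    import numpy as np
    import pandas as pd

    from syne_tune.config_space import loguniform, randint
    from syne_tune.optimizer.schedulers.transfer_learning import (
        TransferLearningTaskEvaluations,
    )
    from syne_tune.optimizer.schedulers.transfer_learning.rush import RUSHScheduler

    config_space = {
        "learning_rate": loguniform(1e-4, 1e-1),
        "num_layers": randint(1, 8),
    }

    # Offline evaluations from one previous task ("task_a" is a placeholder).
    # objectives_evaluations has shape
    # (num_evaluations, num_seeds, num_fidelities, num_objectives).
    transfer_learning_evaluations = {
        "task_a": TransferLearningTaskEvaluations(
            configuration_space=config_space,
            hyperparameters=pd.DataFrame(
                {"learning_rate": [1e-3, 1e-2], "num_layers": [2, 4]}
            ),
            objectives_names=["validation_error"],
            objectives_evaluations=np.random.rand(2, 1, 27, 1),  # synthetic values
        )
    }

    scheduler = RUSHScheduler(
        config_space=config_space,
        transfer_learning_evaluations=transfer_learning_evaluations,
        metric="validation_error",
        mode="min",
        resource_attr="epoch",
        max_t=27,
        num_hyperparameters_per_task=1,
    )

The top configuration from each task (together with any custom_rush_points) is evaluated first and sets the hurdle; any points_to_evaluate follow afterwards and are never used to prune other configurations.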