syne_tune.optimizer.schedulers.searchers.dyhpo.hyperband_dyhpo module

class syne_tune.optimizer.schedulers.searchers.dyhpo.hyperband_dyhpo.ScheduleDecision[source]

Bases: object

PROMOTE_SH = 0
PROMOTE_DYHPO = 1
START_DYHPO = 2
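
These codes tag how each entry in schedule_records came about. A minimal sketch of mapping them to readable labels (the label strings are illustrative, not part of the API):

    from syne_tune.optimizer.schedulers.searchers.dyhpo.hyperband_dyhpo import (
        ScheduleDecision,
    )

    # Illustrative labels for the three decision codes:
    DECISION_LABELS = {
        ScheduleDecision.PROMOTE_SH: "promoted by the successive halving rule",
        ScheduleDecision.PROMOTE_DYHPO: "promoted by DyHPO scoring",
        ScheduleDecision.START_DYHPO: "started as a new trial by DyHPO",
    }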
class syne_tune.optimizer.schedulers.searchers.dyhpo.hyperband_dyhpo.DyHPORungSystem(rung_levels, promote_quantiles, metric, mode, resource_attr, max_t, searcher, probability_sh, random_state)[source]

Bases: PromotionRungSystem

Implements the logic for deciding which paused trial to promote to the next resource level, or which configuration to start as a new trial, as proposed in:

Wistuba, M. and Kadra, A. and Grabocka, J.
Dynamic and Efficient Gray-Box Hyperparameter Optimization for Deep Learning

We do promotion-based scheduling, as in PromotionRungSystem. In on_task_schedule(), we run the successive halving rule with probability probability_sh, and the DyHPO logic otherwise (or if the SH rule does not promote a trial). This mechanism (not contained in the paper) ensures that trials are promoted eventually, even if DyHPO only ever starts new trials.
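
A minimal sketch of this mixing step, assuming hypothetical callables promote_trial_sh and schedule_dyhpo stand in for the SH rule and the DyHPO logic (neither name is part of the actual class):

    import numpy as np

    def mixed_schedule_decision(
        probability_sh: float,
        random_state: np.random.RandomState,
        promote_trial_sh,   # hypothetical callable: () -> dict or None
        schedule_dyhpo,     # hypothetical callable: (str) -> dict
        new_trial_id: str,
    ) -> dict:
        # With probability `probability_sh`, try the successive halving rule first
        if random_state.uniform() < probability_sh:
            decision = promote_trial_sh()
            if decision is not None:
                return decision  # SH promoted a paused trial
        # Otherwise, or if SH promotes nothing, fall back to the DyHPO logic,
        # which either resumes a paused trial or proposes a new configuration
        return schedule_dyhpo(new_trial_id)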

Since HyperbandScheduler was designed for promotion decisions to be separate from decisions about new configs, the overall workflow is a bit tricky:

  • In FIFOScheduler._suggest(), we first call promote_trial_id, extra_kwargs = self._promote_trial(). If promote_trial_id is not None, this trial is promoted. Otherwise, we call config = self.searcher.get_config(**extra_kwargs, trial_id=trial_id) and start a new trial with this config. In most cases, _promote_trial() makes a promotion decision without involving the searcher.

  • Here, we use the fact that information can be passed from _promote_trial() to self.searcher.get_config() via extra_kwargs. Namely, HyperbandScheduler._promote_trial() calls on_task_schedule() here, which in turn calls score_paused_trials_and_new_configs(), where everything happens.

  • First, all paused trials are scored w.r.t. the value of running them for one more unit of resource. Also, a number of random configs are scored w.r.t. the value of running them to the minimum resource.

  • If the winning config belongs to a paused trial, that trial is resumed. If the winning config is a new one, on_task_schedule() returns it under the special key KEY_NEW_CONFIGURATION. This dict becomes part of extra_kwargs and is passed to self.searcher.get_config().

  • get_config() is trivial: it obtains the argument named KEY_NEW_CONFIGURATION and returns its value, which is the winning config to be started as a new trial.

We can ignore rung_levels and promote_quantiles, since they are not used here. For each trial, we only need to maintain the resource level at which it is paused. A condensed sketch of the workflow above follows.
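
This sketch paraphrases the suggest path described in the list above; it is not the actual FIFOScheduler code:

    def suggest_sketch(scheduler, trial_id: str):
        # Step 1: the rung system decides between promotion and a new config;
        # for DyHPO this runs score_paused_trials_and_new_configs() internally
        promote_trial_id, extra_kwargs = scheduler._promote_trial()
        if promote_trial_id is not None:
            # A paused trial won the scoring: resume it at the next level
            return "resume", promote_trial_id
        # Step 2: a new config won. It travels inside extra_kwargs under
        # KEY_NEW_CONFIGURATION, and get_config() simply hands it back
        config = scheduler.searcher.get_config(**extra_kwargs, trial_id=trial_id)
        return "start", config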

on_task_schedule(new_trial_id)[source]

The main decision making happens here. We collect (trial_id, resource) for all paused trials and pass them to the searcher, which scores these trials along with a certain number of randomly drawn new configurations.

If one of the paused trials has the best score, we return its trial_id along with extra information, so it gets promoted. If one of the new configurations has the best score, we return this configuration. In this case, a new trial is started with this configuration.

Note: For this scheduler type, the trial ID of the new trial to be started must be passed (as new_trial_id), in case no paused trial can be promoted.

Return type:

Dict[str, Any]
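
The returned dict takes one of two shapes, sketched here with hedged, illustrative keys (only KEY_NEW_CONFIGURATION is confirmed by this module; the other keys and the constant's string value are assumptions):

    # Assumed string value of the special key, for illustration only:
    KEY_NEW_CONFIGURATION = "new_configuration"

    # Case 1: a paused trial wins and is promoted (keys beyond trial_id are
    # illustrative):
    result_promote = {"trial_id": "17", "milestone": 2}

    # Case 2: a new configuration wins; it is wrapped under the special key
    # and later unwrapped by the searcher's get_config():
    result_start = {KEY_NEW_CONFIGURATION: {"learning_rate": 0.01, "batch_size": 64}}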

property schedule_records: List[Tuple[str, int, int]]
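
A hedged sketch of inspecting these records, assuming each tuple is (trial_id, resource, decision_code) with the code taken from ScheduleDecision (the tuple layout is an assumption based on the type signature):

    from syne_tune.optimizer.schedulers.searchers.dyhpo.hyperband_dyhpo import (
        DyHPORungSystem,
        ScheduleDecision,
    )

    def print_schedule(rung_system: DyHPORungSystem) -> None:
        # Assumed tuple layout: (trial_id, resource, decision_code)
        for trial_id, resource, decision in rung_system.schedule_records:
            if decision == ScheduleDecision.START_DYHPO:
                print(f"trial {trial_id}: started by DyHPO at resource {resource}")
            else:
                print(f"trial {trial_id}: promoted to resource {resource}")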
static summary_schedule_keys()[source]
Return type:

List[str]

summary_schedule_records()[source]
Return type:

Dict[str, Any]
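
A small usage sketch combining the two summary methods, under the assumption that the keys returned by summary_schedule_keys() index the dict returned by summary_schedule_records():

    from syne_tune.optimizer.schedulers.searchers.dyhpo.hyperband_dyhpo import (
        DyHPORungSystem,
    )

    def log_schedule_summary(rung_system: DyHPORungSystem) -> None:
        summary = rung_system.summary_schedule_records()
        for key in DyHPORungSystem.summary_schedule_keys():
            # Assumption: each summary key indexes the records dict
            print(f"{key}: {summary.get(key)}")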

support_early_checkpoint_removal()[source]

Early checkpoint removal is currently not supported for DyHPO.

Return type:

bool