syne_tune.optimizer.schedulers.transfer_learning.bounding_box module
- class syne_tune.optimizer.schedulers.transfer_learning.bounding_box.BoundingBox(scheduler_fun, config_space, metric, transfer_learning_evaluations, mode=None, num_hyperparameters_per_task=1)[source]
Bases: TransferLearningMixin, TrialScheduler
Simple baseline that computes a bounding box around the best candidates found on previous tasks, in order to restrict the search space to good candidates. The bounding box is obtained by restricting each numerical hyperparameter to the min-max range over the best values, and each categorical hyperparameter to the set of values among the best candidates. Reference:

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning. Valerio Perrone, Huibin Shen, Matthias Seeger, Cédric Archambeau, Rodolphe Jenatton. NeurIPS 2019.

scheduler_fun is used to create the scheduler to be used here, feeding it with the modified config space. Any additional scheduler arguments (such as points_to_evaluate) should be encoded inside this function. Example:

    from syne_tune.optimizer.baselines import RandomSearch

    def scheduler_fun(new_config_space: Dict[str, Any], mode: str, metric: str):
        return RandomSearch(new_config_space, metric, mode)

    bb_scheduler = BoundingBox(scheduler_fun, ...)

Here, bb_scheduler represents random search, where the hyperparameter ranges are restricted to contain the best evaluations of previous tasks, as provided by transfer_learning_evaluations.
- Parameters:
  - scheduler_fun (Callable[[dict, str, str], TrialScheduler]) – Maps a tuple of configuration space (dict), mode (str), metric (str) to a scheduler. This is required since the final configuration space is known only after computing the bounding box.
  - config_space (Dict[str, Any]) – Initial configuration space to consider; it is narrowed to the bounding box of the best evaluations of previous tasks
  - metric (str) – Objective name to optimize, must be present in transfer learning evaluations.
  - mode (Optional[str]) – Mode to be considered, defaults to "min".
  - transfer_learning_evaluations (Dict[str, TransferLearningTaskEvaluations]) – Dictionary from task name to offline evaluations.
  - num_hyperparameters_per_task (int) – Number of the best configurations to use per task when computing the bounding box, defaults to 1.
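To make the bounding-box idea concrete, here is a minimal sketch (not Syne Tune's implementation; the function name and dict-based representation are illustrative). Given the best configurations collected across previous tasks, each numerical hyperparameter is clipped to the min-max range over those configurations, and each categorical hyperparameter is restricted to the set of values that actually occur among them:

```python
# Illustrative sketch of the bounding-box computation, not Syne Tune code.
# best_configs: the top num_hyperparameters_per_task configs from each task.

def bounding_box(best_configs):
    """Shrink each hyperparameter domain to the span of the best configs."""
    new_space = {}
    for name in best_configs[0]:
        values = [config[name] for config in best_configs]
        if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in values):
            # numerical: restrict to the min-max range of the best values
            new_space[name] = (min(values), max(values))
        else:
            # categorical: restrict to the set of best values
            new_space[name] = sorted(set(values))
    return new_space

best = [
    {"lr": 0.01, "batch_size": 32, "optimizer": "adam"},
    {"lr": 0.1, "batch_size": 64, "optimizer": "adam"},
    {"lr": 0.05, "batch_size": 32, "optimizer": "sgd"},
]
print(bounding_box(best))
# {'lr': (0.01, 0.1), 'batch_size': (32, 64), 'optimizer': ['adam', 'sgd']}
```

The narrowed space is then handed to scheduler_fun, which builds the scheduler that actually searches inside the box.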
- suggest(trial_id)[source]
Returns a suggestion for a new trial, or one to be resumed
This method returns suggestion of type TrialSuggestion (unless there is no config left to explore, in which case None is returned).

If suggestion.spawn_new_trial_id is True, a new trial is to be started with config suggestion.config. Typically, this new trial is started from scratch. But if suggestion.checkpoint_trial_id is given, the trial is to be (warm-)started from the checkpoint written for the trial with this ID. The new trial has ID trial_id.

If suggestion.spawn_new_trial_id is False, an existing and currently paused trial is to be resumed, whose ID is suggestion.checkpoint_trial_id. If this trial has a checkpoint, we start from there. In this case, suggestion.config is optional. If not given (default), the config of the resumed trial does not change. Otherwise, its config is overwritten by suggestion.config (see HyperbandScheduler with type="promotion" for an example why this can be useful).

Apart from the HP config, additional fields can be appended to the dict; these are passed to the trial function as well.
- Parameters:
  - trial_id (int) – ID for new trial to be started (ignored if an existing trial is to be resumed)
- Return type:
  Optional[TrialSuggestion]
- Returns:
  Suggestion for a trial to be started or to be resumed, see above. If no suggestion can be made, None is returned.
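The two branches above can be summarized in a short sketch. The field names follow the docstring (spawn_new_trial_id, config, checkpoint_trial_id), but the Suggestion dataclass and act_on function here are simplified stand-ins for illustration, not Syne Tune's own classes:

```python
# Simplified stand-in for TrialSuggestion handling, not Syne Tune code.
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class Suggestion:
    spawn_new_trial_id: bool
    config: Optional[Dict[str, Any]] = None
    checkpoint_trial_id: Optional[int] = None


def act_on(suggestion: Optional[Suggestion], trial_id: int) -> str:
    if suggestion is None:
        return "stop: no configs left to explore"
    if suggestion.spawn_new_trial_id:
        if suggestion.checkpoint_trial_id is not None:
            # new trial, warm-started from another trial's checkpoint
            return (f"start trial {trial_id} with config {suggestion.config}, "
                    f"warm-started from checkpoint of trial {suggestion.checkpoint_trial_id}")
        return f"start trial {trial_id} from scratch with config {suggestion.config}"
    # resume a paused trial; config is optional and defaults to the old one
    if suggestion.config is None:
        return f"resume trial {suggestion.checkpoint_trial_id} with its original config"
    return f"resume trial {suggestion.checkpoint_trial_id} with new config {suggestion.config}"
```

For example, act_on(Suggestion(spawn_new_trial_id=False, checkpoint_trial_id=3), 8) resumes trial 3 with its original config.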
- on_trial_add(trial)[source]
Called when a new trial is added to the trial runner.
Additions are normally triggered by suggest.
- Parameters:
  - trial (Trial) – Trial to be added
- on_trial_complete(trial, result)[source]
Notification for the completion of a trial.
Note that on_trial_result() is called with the same result beforehand. However, if the scheduler only uses one final report from each trial, it may ignore on_trial_result() and just use result here.
- Parameters:
  - trial (Trial) – Trial which is completing
  - result (Dict[str, Any]) – Result dictionary
- on_trial_remove(trial)[source]
Called to remove a trial.
This is called when the trial is in PAUSED or PENDING state. Otherwise, on_trial_complete() is called instead.
- Parameters:
  - trial (Trial) – Trial to be removed
- on_trial_error(trial)[source]
Called when a trial has failed.
- Parameters:
  - trial (Trial) – Trial for which the error is reported
- on_trial_result(trial, result)[source]
Called on each intermediate result reported by a trial.
At this point, the trial scheduler can make a decision by returning one of SchedulerDecision.CONTINUE, SchedulerDecision.PAUSE, or SchedulerDecision.STOP. This will only be called when the trial is currently running.
- Parameters:
  - trial (Trial) – Trial for which results are reported
  - result (Dict[str, Any]) – Result dictionary
- Return type:
  str
- Returns:
  Decision what to do with the trial
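A minimal sketch of such a decision rule (illustrative only; the threshold policy and function below are hypothetical, and the string constants merely mirror the SchedulerDecision names above — BoundingBox delegates this decision to the scheduler built by scheduler_fun):

```python
# Illustrative on_trial_result policy, not Syne Tune code.
CONTINUE, PAUSE, STOP = "CONTINUE", "PAUSE", "STOP"


def on_trial_result(result, loss_threshold=1.0):
    """result is the metrics dict reported by the running trial."""
    if result["loss"] > loss_threshold:
        return STOP  # trial is clearly off track, free up the worker
    return CONTINUE  # keep the trial running
```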