syne_tune.optimizer.schedulers.multiobjective.legacy_expected_hyper_volume_improvement module
- class syne_tune.optimizer.schedulers.multiobjective.legacy_expected_hyper_volume_improvement.LegacyExpectedHyperVolumeImprovement(config_space, metric, mode, points_to_evaluate=None, allow_duplicates=False, restrict_configurations=None, num_init_random=3, no_fantasizing=False, max_num_observations=200, input_warping=True, **kwargs)[source]
Bases: StochasticAndFilterDuplicatesSearcher
Implementation of expected hypervolume improvement [1] based on the BOTorch implementation.
[1] S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.
Additional arguments on top of parent class StochasticAndFilterDuplicatesSearcher:
- Parameters:
  - mode (Union[List[str], str]) – "min" (default) or "max"
  - num_init_random (int) – get_config() returns randomly drawn configurations until at least num_init_random observations have been recorded in update(). After that, the BOTorch algorithm is used. Defaults to 3
  - no_fantasizing (bool) – If True, fantasizing is not done and pending evaluations are ignored. This may lead to loss of diversity in decisions. Defaults to False
  - max_num_observations (Optional[int]) – Maximum number of observations to use when fitting the GP. If the number of observations gets larger than this number, then data is subsampled. If None, then all data is used to fit the GP. Defaults to 200
  - input_warping (bool) – Whether to apply input warping when fitting the GP. Defaults to True
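The following is a minimal, hedged construction sketch: it instantiates the searcher directly on a toy two-objective problem. The search space, metric names, and trial ID are illustrative assumptions, not part of this API; in practice the searcher is usually created and driven by a scheduler rather than called by hand.

```python
from syne_tune.config_space import randint, uniform
from syne_tune.optimizer.schedulers.multiobjective.legacy_expected_hyper_volume_improvement import (
    LegacyExpectedHyperVolumeImprovement,
)

# Hypothetical two-dimensional search space
config_space = {
    "learning_rate": uniform(1e-4, 1e-1),
    "num_layers": randint(1, 8),
}

searcher = LegacyExpectedHyperVolumeImprovement(
    config_space=config_space,
    metric=["validation_error", "latency"],  # hypothetical objective names
    mode=["min", "min"],                     # minimize both objectives
    num_init_random=5,                       # random configs before BOTorch takes over
    max_num_observations=200,                # subsample data beyond this many observations
    input_warping=True,                      # warp inputs when fitting the GP
)

# Schedulers call get_config() to obtain the next configuration to evaluate
config = searcher.get_config(trial_id="0")
```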
- clone_from_state(state)[source]
Together with get_state(), this is needed in order to store and re-create the mutable state of the searcher.
Given state as returned by get_state(), this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.
- Parameters:
  - state (Dict[str, Any]) – See above
- Returns:
  New searcher object
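A hedged sketch of the intended save/restore pattern, assuming the state returned by get_state() is serialized with pickle (the serialization mechanism is an assumption, not prescribed by this API); it reuses the searcher instance from the sketch above.

```python
import pickle

# Persist the mutable searcher state (e.g., as part of a checkpoint)
state_blob = pickle.dumps(searcher.get_state())

# Later: re-create a searcher clone from the stored state
restored_searcher = searcher.clone_from_state(pickle.loads(state_blob))
# After this call, the original `searcher` object should no longer be used.
```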
- register_pending(trial_id, config=None, milestone=None)[source]
Signals to searcher that evaluation for trial has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.
- Parameters:
  - trial_id (str) – ID of trial to be registered as pending evaluation
  - config (Optional[dict]) – If trial_id has not been registered with the searcher, its configuration must be passed here. Ignored otherwise.
  - milestone (Optional[int]) – For multi-fidelity schedulers, this is the next rung level the evaluation will attend, so that the model registers (config, milestone) as pending.
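A hedged sketch of the scheduler-side call pattern, reusing the searcher instance from the first sketch; the trial ID and rung level are made-up values.

```python
# Suggest a configuration, then mark it as a pending evaluation.
config = searcher.get_config(trial_id="7")
searcher.register_pending(trial_id="7", config=config, milestone=9)
# With a multi-fidelity scheduler, (config, 9) is now registered as pending;
# it is resolved once the result at rung level 9 is reported.
```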
- evaluation_failed(trial_id)[source]
Called by scheduler if an evaluation job for a trial failed.
The searcher should react appropriately (e.g., remove pending evaluations for this trial, not suggest the configuration again).
- Parameters:
  - trial_id (str) – ID of trial whose evaluation failed
- cleanup_pending(trial_id)[source]
Removes all pending evaluations for trial trial_id.
This should be called after an evaluation terminates. For various reasons (e.g., termination due to convergence), pending candidates for this evaluation may still be present.
- Parameters:
  - trial_id (str) – ID of trial whose pending evaluations should be cleared
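A hedged sketch of failure and termination handling, again reusing the searcher instance from the first sketch; trial IDs are made-up values.

```python
# A trial's evaluation job crashed: drop its pending entries so the
# configuration is not suggested again.
searcher.evaluation_failed(trial_id="7")

# A trial terminated normally (e.g., stopped early on convergence); clear
# any pending candidates that may still be registered for it.
searcher.cleanup_pending(trial_id="8")
```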
- dataset_size()[source]
- Returns:
Size of dataset a model is fitted to, or 0 if no model is fitted to data