syne_tune.optimizer.schedulers.searchers.botorch package
- class syne_tune.optimizer.schedulers.searchers.botorch.BoTorchSearcher(config_space, metric, points_to_evaluate=None, allow_duplicates=False, restrict_configurations=None, mode='min', num_init_random=3, no_fantasizing=False, max_num_observations=200, input_warping=True, **kwargs)[source]
Bases: StochasticAndFilterDuplicatesSearcher
A searcher that suggests configurations by using BoTorch to build a GP surrogate model and optimize an acquisition function. qExpectedImprovement is used as the acquisition function, since it supports pending evaluations.
Additional arguments on top of parent class StochasticAndFilterDuplicatesSearcher:
- Parameters:
  - mode (str) – "min" (default) or "max"
  - num_init_random (int) – get_config() returns randomly drawn configurations until at least num_init_random observations have been recorded in update(). After that, the BoTorch algorithm is used. Defaults to 3
  - no_fantasizing (bool) – If True, fantasizing is not done and pending evaluations are ignored. This may lead to a loss of diversity in decisions. Defaults to False
  - max_num_observations (Optional[int]) – Maximum number of observations to use when fitting the GP. If the number of observations gets larger than this number, the data is subsampled. If None, all data is used to fit the GP. Defaults to 200
  - input_warping (bool) – Whether to apply input warping when fitting the GP. Defaults to True
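A minimal, hedged sketch of constructing the searcher directly (in practice it is usually created by a scheduler); the search space and metric name below are made up for illustration:

```python
from syne_tune.config_space import randint, uniform
from syne_tune.optimizer.schedulers.searchers.botorch import BoTorchSearcher

# Hypothetical search space and metric name, for illustration only
config_space = {
    "learning_rate": uniform(1e-4, 1e-1),
    "num_layers": randint(1, 8),
}
searcher = BoTorchSearcher(
    config_space=config_space,
    metric="validation_error",
    mode="min",                # minimize the metric (the default)
    num_init_random=3,         # random suggestions before the GP takes over
    max_num_observations=200,  # subsample once data grows beyond this size
    input_warping=True,        # warp inputs when fitting the GP
)
# The first num_init_random suggestions are drawn at random
config = searcher.get_config(trial_id="0")
```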
- clone_from_state(state)[source]
Together with get_state(), this is needed in order to store and re-create the mutable state of the searcher.
Given state as returned by get_state(), this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.
- Parameters:
  - state (Dict[str, Any]) – See above
- Returns:
  New searcher object
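A hedged sketch of checkpointing a searcher with get_state() and clone_from_state(); the pickle round-trip stands in for whatever persistence mechanism a tuner actually uses:

```python
import pickle

# get_state() returns the pickle-able, mutable part of the searcher state
state_blob = pickle.dumps(searcher.get_state())

# Later: clone_from_state() combines the non-pickle-able immutable parts
# of an existing searcher with the restored mutable state
restored = searcher.clone_from_state(pickle.loads(state_blob))
# After this call, the original `searcher` is not used anymore
```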
- register_pending(trial_id, config=None, milestone=None)[source]
Signals to the searcher that the evaluation for a trial has started, but is not yet finished, which allows model-based searchers to register this evaluation as pending.
- Parameters:
  - trial_id (str) – ID of trial to be registered as pending evaluation
  - config (Optional[dict]) – If trial_id has not been registered with the searcher, its configuration must be passed here. Ignored otherwise.
  - milestone (Optional[int]) – For multi-fidelity schedulers, this is the next rung level the evaluation will attend, so that the model registers (config, milestone) as pending.
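A hedged sketch of the pending-evaluation protocol as a scheduler would drive it; the trial id, the result dict, and the on_trial_result() call from the searcher base class are illustrative:

```python
trial_id = "1"
config = searcher.get_config(trial_id=trial_id)

# Mark the evaluation as pending; with fantasizing enabled (the default),
# qExpectedImprovement accounts for it when suggesting further configs
searcher.register_pending(trial_id=trial_id, config=config)

# Once the evaluation reports back, feed the result to the searcher
# (illustrative metric value; update=True triggers the model update)
result = {"validation_error": 0.123}
searcher.on_trial_result(trial_id, config, result=result, update=True)
```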
- evaluation_failed(trial_id)[source]
Called by the scheduler if an evaluation job for a trial failed.
The searcher should react appropriately (e.g., remove pending evaluations for this trial, not suggest the configuration again); see the combined sketch after cleanup_pending() below.
- Parameters:
  - trial_id (str) – ID of trial whose evaluation failed
- cleanup_pending(trial_id)[source]
Removes all pending evaluations for trial trial_id.
This should be called after an evaluation terminates. For various reasons (e.g., termination due to convergence), pending candidates for this evaluation may still be present.
- Parameters:
  - trial_id (str) – ID of trial whose pending evaluations should be cleared
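A short hedged sketch of failure and termination handling, with illustrative trial ids:

```python
# Trial "2" crashed: drop its pending evaluation and avoid suggesting the
# same configuration again
searcher.evaluation_failed(trial_id="2")

# Trial "3" terminated normally (e.g., stopped early): clear any pending
# candidates still registered for it
searcher.cleanup_pending(trial_id="3")
```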
- dataset_size()[source]
- Returns:
Size of dataset a model is fitted to, or 0 if no model is fitted to data
- class syne_tune.optimizer.schedulers.searchers.botorch.BotorchSearcher(config_space, metric, points_to_evaluate=None, **kwargs)[source]
Bases: BoTorchSearcher
Retained for backwards compatibility; please use BoTorchSearcher instead.
Submodules
- syne_tune.optimizer.schedulers.searchers.botorch.botorch_searcher module
  - BoTorchSearcher
  - BoTorchSearcher.clone_from_state()
  - BoTorchSearcher.num_suggestions()
  - BoTorchSearcher.register_pending()
  - BoTorchSearcher.evaluation_failed()
  - BoTorchSearcher.cleanup_pending()
  - BoTorchSearcher.dataset_size()
  - BoTorchSearcher.configure_scheduler()
  - BoTorchSearcher.objectives()
  - BoTorchSearcher.metric_names()
  - BoTorchSearcher.metric_mode()
  - BotorchSearcher
- syne_tune.optimizer.schedulers.searchers.botorch.botorch_transfer_searcher module