syne_tune.optimizer.schedulers.searchers.botorch package

class syne_tune.optimizer.schedulers.searchers.botorch.BoTorchSearcher(config_space, metric, points_to_evaluate=None, allow_duplicates=False, restrict_configurations=None, mode='min', num_init_random=3, no_fantasizing=False, max_num_observations=200, input_warping=True, **kwargs)[source]

Bases: StochasticAndFilterDuplicatesSearcher

A searcher that suggests configurations by using BoTorch to build a GP surrogate model and optimize an acquisition function.

qExpectedImprovement is used as the acquisition function, since it supports pending evaluations.

Additional arguments on top of parent class StochasticAndFilterDuplicatesSearcher:

  • mode (str) – “min” (default) or “max”

  • num_init_random (int) – get_config() returns randomly drawn configurations until at least num_init_random observations have been recorded in update(). After that, the BoTorch algorithm is used. Defaults to 3

  • no_fantasizing (bool) – If True, fantasizing is not done and pending evaluations are ignored. This may lead to loss of diversity in decisions. Defaults to False

  • max_num_observations (Optional[int]) – Maximum number of observations to use when fitting the GP. If the number of observations grows larger than this number, the data is subsampled. If None, all data is used to fit the GP. Defaults to 200

  • input_warping (bool) – Whether to apply input warping when fitting the GP. Defaults to True
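
The interplay of num_init_random with get_config() and update() can be pictured with a minimal sketch in plain Python. The class and method names below are illustrative stand-ins, not the Syne Tune API, and the model-based step is replaced by a trivial rule:

```python
import random


class InitRandomThenModel:
    """Toy sketch (not the Syne Tune implementation): suggest random
    configurations until num_init_random observations have been
    recorded, then switch to a model-based rule."""

    def __init__(self, config_space, num_init_random=3, seed=0):
        # config_space: hypothetical dict of name -> list of candidate values
        self.config_space = config_space
        self.num_init_random = num_init_random
        self.observations = []  # recorded (config, metric) pairs
        self._rng = random.Random(seed)

    def update(self, config, metric):
        # Record one finished evaluation
        self.observations.append((config, metric))

    def suggest(self):
        if len(self.observations) < self.num_init_random:
            # Too little data: draw a random configuration
            return {
                name: self._rng.choice(values)
                for name, values in self.config_space.items()
            }
        # Enough data: BoTorchSearcher would fit a GP and optimize
        # qExpectedImprovement here; this stand-in just returns the
        # best configuration observed so far (mode="min")
        return min(self.observations, key=lambda obs: obs[1])[0]
```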


clone_from_state(state)[source]

Together with get_state(), this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state(), this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is no longer used.


Parameters:

state (Dict[str, Any]) – See above


Returns:

New searcher object

register_pending(trial_id, config=None, milestone=None)[source]

Signals to the searcher that the evaluation for a trial has started but is not yet finished, which allows model-based searchers to register this evaluation as pending.

  • trial_id (str) – ID of trial to be registered as pending evaluation

  • config (Optional[dict]) – If trial_id has not been registered with the searcher, its configuration must be passed here. Ignored otherwise.

  • milestone (Optional[int]) – For multi-fidelity schedulers, this is the next rung level the evaluation will reach, so that the model registers (config, milestone) as pending.
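
The bookkeeping this method implies can be sketched as follows. The PendingTracker class and its attributes are hypothetical illustrations, not Syne Tune internals:

```python
class PendingTracker:
    """Toy sketch of pending-evaluation bookkeeping, keyed by trial_id.
    For multi-fidelity, (trial_id, milestone) pairs are stored so a model
    can treat evaluations that have started but not finished as pending."""

    def __init__(self):
        self._configs = {}    # trial_id -> config
        self._pending = set()  # (trial_id, milestone) pairs

    def register_pending(self, trial_id, config=None, milestone=None):
        if config is not None:
            # First registration may carry the configuration
            self._configs[trial_id] = config
        elif trial_id not in self._configs:
            raise ValueError(f"config required for unknown trial {trial_id}")
        self._pending.add((trial_id, milestone))

    def cleanup_pending(self, trial_id):
        # Remove all pending entries for this trial, at any milestone
        self._pending = {p for p in self._pending if p[0] != trial_id}
```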


evaluation_failed(trial_id)[source]

Called by the scheduler if an evaluation job for a trial failed.

The searcher should react appropriately (e.g., remove pending evaluations for this trial, not suggest the configuration again).


trial_id (str) – ID of trial whose evaluation failed


cleanup_pending(trial_id)[source]

Removes all pending evaluations for trial trial_id.

This should be called after an evaluation terminates. For various reasons (e.g., termination due to convergence), pending candidates for this evaluation may still be present.


trial_id (str) – ID of trial whose pending evaluations should be cleared


dataset_size()[source]

Size of the dataset a model is fitted to, or 0 if no model is fitted to data
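
Together with max_num_observations above, this determines how large the GP training set can get. A minimal sketch of capping it, assuming uniform random subsampling (the actual subsampling rule used by Syne Tune may differ):

```python
import random


def subsample(observations, max_num_observations, rng=None):
    """Cap the GP training set: if more observations exist than
    max_num_observations, keep a random subset (illustrative rule only).
    If max_num_observations is None, all data is used."""
    if max_num_observations is None or len(observations) <= max_num_observations:
        return list(observations)
    rng = rng or random.Random(0)
    return rng.sample(observations, max_num_observations)
```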


configure_scheduler(scheduler)[source]

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.


scheduler (TrialScheduler) – Scheduler the searcher is used with.


class syne_tune.optimizer.schedulers.searchers.botorch.BotorchSearcher(config_space, metric, points_to_evaluate=None, **kwargs)[source]

Bases: BoTorchSearcher

Kept for backwards compatibility. Please use BoTorchSearcher instead