syne_tune.optimizer.schedulers.searchers.model_based_searcher module
- syne_tune.optimizer.schedulers.searchers.model_based_searcher.check_initial_candidates_scorer(initial_scoring)[source]
- Return type:
str
- class syne_tune.optimizer.schedulers.searchers.model_based_searcher.ModelBasedSearcher(config_space, metric, points_to_evaluate=None, **kwargs)[source]
Bases:
StochasticSearcher
Common code for surrogate model based searchers
If num_initial_random_choices > 0, initial configurations are drawn using an internal RandomSearcher object, which is created in _assign_random_searcher(). This internal random searcher shares random_state with the searcher here. This ensures that if ModelBasedSearcher and RandomSearcher objects are created with the same random_seed and points_to_evaluate argument, initial configurations are identical until _get_config_modelbased() kicks in.
Note that this works because random_state is only used in the internal random searcher until _get_config_modelbased() is first called.
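As an illustration of this shared random_state, the following minimal sketch compares the initial suggestions of a random and a Bayesian optimization scheduler created with the same random_seed and points_to_evaluate. The config space, metric name, and initial configuration are hypothetical placeholders, not part of this module.

```python
from syne_tune.config_space import uniform, randint
from syne_tune.optimizer.schedulers import FIFOScheduler

# Hypothetical config space and initial configuration
config_space = {"lr": uniform(1e-4, 1e-1), "batch_size": randint(16, 128)}
points_to_evaluate = [{"lr": 1e-2, "batch_size": 64}]


def make_scheduler(searcher: str) -> FIFOScheduler:
    # Same random_seed and points_to_evaluate for both searchers
    return FIFOScheduler(
        config_space,
        searcher=searcher,
        metric="validation_loss",
        mode="min",
        random_seed=31,
        points_to_evaluate=points_to_evaluate,
    )


random_sched = make_scheduler("random")
bo_sched = make_scheduler("bayesopt")

# No results have been reported yet, so _get_config_modelbased() has not
# kicked in: both schedulers should suggest identical initial configurations
for trial_id in range(3):
    assert random_sched.suggest(trial_id).config == bo_sched.suggest(trial_id).config
```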
- on_trial_result(trial_id, config, result, update)[source]
Inform searcher about result
The scheduler passes every result. If update == True, the searcher should update its surrogate model (if any); otherwise result is an intermediate result not modelled.
The default implementation calls _update() if update == True. It can be overwritten by searchers which also react to intermediate results.
- Parameters:
trial_id (str) – See on_trial_result()
config (Dict[str, Any]) – See on_trial_result()
result (Dict[str, Any]) – See on_trial_result()
update (bool) – Should surrogate model be updated?
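A subclass can extend this default behaviour to also react to intermediate results. The sketch below is hypothetical (the logging is not part of Syne Tune); it only delegates the model update to _update() when update == True, as the base implementation does.

```python
from typing import Any, Dict


class LoggingModelBasedSearcher(ModelBasedSearcher):
    """Hypothetical subclass which also reacts to intermediate results."""

    def on_trial_result(
        self,
        trial_id: str,
        config: Dict[str, Any],
        result: Dict[str, Any],
        update: bool,
    ):
        if update:
            # Result to be modelled: update the surrogate model
            self._update(trial_id, config, result)
        else:
            # Intermediate result: not used to fit the surrogate model,
            # but a subclass may still track it (here: simple bookkeeping)
            print(f"intermediate result for trial {trial_id}: {result}")
```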
- get_config(**kwargs)[source]
Runs Bayesian optimization in order to suggest the next config to evaluate.
- Return type:
Optional[Dict[str, Any]]
- Returns:
Next config to evaluate at
- dataset_size()[source]
- Returns:
Size of dataset a model is fitted to, or 0 if no model is fitted to data
- model_parameters()[source]
- Returns:
Dictionary with current model (hyper)parameter values if this is supported; otherwise empty
- get_state()[source]
The mutable state consists of the GP model parameters, the TuningJobState, and the skip_optimization predicate (which can have a mutable state). We assume that skip_optimization can be pickled.
Note that we do not have to store the state of _random_searcher, since this internal searcher shares its random_state with the searcher here.
- Return type:
Dict[str, Any]
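Since the state is assumed to be picklable, checkpointing a searcher can look roughly as follows. This is a sketch only: it assumes the concrete searcher class implements clone_from_state() as the counterpart of get_state(), and the file name is arbitrary.

```python
import pickle

# searcher: an instance of a ModelBasedSearcher subclass used by a scheduler
state = searcher.get_state()

# The state (model parameters, TuningJobState, skip_optimization) is picklable
with open("searcher_state.pkl", "wb") as f:
    pickle.dump(state, f)

# Later: restore a searcher from the saved state; clone_from_state() is
# assumed to be provided by the concrete searcher class
with open("searcher_state.pkl", "rb") as f:
    restored_searcher = searcher.clone_from_state(pickle.load(f))
```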
- property debug_log
Some subclasses support writing a debug log, using DebugLogPrinter. See RandomSearcher for an example.
- Returns:
debug_log object, or None (not supported)
- syne_tune.optimizer.schedulers.searchers.model_based_searcher.create_initial_candidates_scorer(initial_scoring, predictor, acquisition_class, random_state, active_metric='target')[source]
- Return type:
- class syne_tune.optimizer.schedulers.searchers.model_based_searcher.BayesianOptimizationSearcher(config_space, metric, points_to_evaluate=None, **kwargs)[source]
Bases:
ModelBasedSearcher
Common code for searchers using Bayesian optimization
We implement Bayesian optimization, based on a model factory which parameterizes the state transformer. This implementation works with any type of surrogate model and acquisition function, which are compatible with each other.
The following happens in get_config():
- For the first num_init_random calls, a config is drawn at random (after points_to_evaluate, which are included in the num_init_random initial ones). Afterwards, Bayesian optimization is used, unless there are no finished evaluations yet (a surrogate model cannot be used without any data).
- For BO, model hyperparameters are refit first. This step can be skipped (see opt_skip_* parameters).
- Next, the BO decision is made based on BayesianOptimizationAlgorithm. This involves sampling num_init_candidates configs at random, ranking them with a scoring function (initial_scoring), and finally running local optimization starting from the top-scoring config.
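When the searcher is created through a scheduler, the parameters mentioned above are typically passed via search_options. The following sketch shows this pattern; the config space and metric are hypothetical, and the option names are assumptions based on this section (they may differ between concrete searchers).

```python
from syne_tune.config_space import uniform
from syne_tune.optimizer.schedulers import FIFOScheduler

scheduler = FIFOScheduler(
    {"lr": uniform(1e-4, 1e-1)},       # hypothetical config space
    searcher="bayesopt",               # a BayesianOptimizationSearcher subclass
    metric="validation_loss",
    mode="min",
    random_seed=31,
    search_options={
        "num_init_random": 5,          # random configs before BO kicks in
        "num_init_candidates": 250,    # candidates sampled and scored per decision
        "initial_scoring": "acq_func", # scoring function for initial candidates
        # opt_skip_* parameters control skipping of hyperparameter refitting
    },
)
```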
- configure_scheduler(scheduler)[source]
Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.
- Parameters:
scheduler (TrialScheduler) – Scheduler the searcher is used with.
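A brief usage note: this call matters mainly when a searcher object is constructed manually rather than via a searcher string passed to the scheduler, as in this assumed snippet.

```python
# searcher, scheduler: instances created manually (hypothetical setup).
# The searcher must be linked to its scheduler before the first get_config() call.
searcher.configure_scheduler(scheduler)
```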
- register_pending(trial_id, config=None, milestone=None)[source]
Registers trial as pending. This means the corresponding evaluation task is running. Once it finishes, update is called for this trial.
- get_batch_configs(batch_size, num_init_candidates_for_batch=None, **kwargs)[source]
Asks for a batch of batch_size configurations to be suggested. This is roughly equivalent to calling get_config batch_size times, marking the suggested configs as pending in the state (but the state is not modified here). This means the batch is chosen sequentially, at about the cost of calling get_config batch_size times.
If num_init_candidates_for_batch is given, it is used instead of num_init_candidates for the selection of all but the first config in the batch. In order to speed up batch selection, choose num_init_candidates_for_batch smaller than num_init_candidates.
If fewer than batch_size configs are returned, the search space has been exhausted.
Note: Batch selection does not support debug_log right now; make sure to switch this off when creating scheduler and searcher.
- Return type:
List[Dict[str, Union[int, float, str]]]
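A usage sketch, assuming searcher is a BayesianOptimizationSearcher instance (for example, created by a scheduler with searcher="bayesopt") and debug_log is switched off:

```python
# Ask for a batch of 4 suggestions; a smaller num_init_candidates_for_batch
# speeds up the selection of all but the first config in the batch
configs = searcher.get_batch_configs(
    batch_size=4,
    num_init_candidates_for_batch=50,
)
if len(configs) < 4:
    print("Search space exhausted")
for config in configs:
    print(config)
```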