syne_tune.optimizer.schedulers.searchers.searcher module

syne_tune.optimizer.schedulers.searchers.searcher.impute_points_to_evaluate(points_to_evaluate, config_space)[source]

Transforms the points_to_evaluate argument to BaseSearcher. Each config in the list can be partially specified, or even be an empty dict. For each hyperparameter not specified, the default value is determined using a midpoint heuristic. Duplicate entries are filtered out. If None (default), the argument is mapped to [dict()], a single default config determined by the midpoint heuristic. If [] (empty list), no initial configurations are specified.

Parameters:
  • points_to_evaluate (Optional[List[Dict[str, Any]]]) – Argument to BaseSearcher

  • config_space (Dict[str, Any]) – Configuration space

Return type:

List[Dict[str, Any]]

Returns:

List of fully specified initial configs
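For illustration, a minimal sketch of how this helper is used. The config space and the partially specified configs below are made up, and the exact imputed values depend on the midpoint heuristic applied to each domain:

    from syne_tune.config_space import choice, randint, uniform
    from syne_tune.optimizer.schedulers.searchers.searcher import (
        impute_points_to_evaluate,
    )

    # Hypothetical config space for illustration
    config_space = {
        "n_units": randint(4, 1024),
        "dropout": uniform(0.0, 0.9),
        "activation": choice(["relu", "tanh"]),
    }

    # Entries may be partial or empty; missing hyperparameters are
    # filled in by the midpoint heuristic, duplicates are dropped
    points = impute_points_to_evaluate(
        [{"n_units": 64}, dict()],
        config_space,
    )
    # points is a list of fully specified configs: both entries now
    # contain values for "n_units", "dropout", and "activation"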

class syne_tune.optimizer.schedulers.searchers.searcher.BaseSearcher(config_space, metric, points_to_evaluate=None, mode='min')[source]

Bases: object

Base class of searchers, which are components of schedulers responsible for implementing get_config().

Note

This is an abstract base class. In order to implement a new searcher, try to start from StochasticAndFilterDuplicatesSearcher or StochasticSearcher, which implement generally useful properties.

Parameters:
  • config_space (Dict[str, Any]) – Configuration space

  • metric (Union[List[str], str]) – Name of metric passed to update(). Can be obtained from scheduler in configure_scheduler(). In the case of multi-objective optimization, metric is a list of strings specifying all objectives to be optimized.

  • points_to_evaluate (Optional[List[Dict[str, Any]]]) – List of configurations to be evaluated initially (in that order). Each config in the list can be partially specified, or even be an empty dict. For each hyperparameter not specified, the default value is determined using a midpoint heuristic. If None (default), this is mapped to [dict()], a single default config determined by the midpoint heuristic. If [] (empty list), no initial configurations are specified.

  • mode (Union[List[str], str]) – Whether metric is minimized (“min”, default) or maximized (“max”). In the case of multi-objective optimization, mode can be a list specifying for each metric whether it is minimized or maximized.
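To make the contract concrete, here is a minimal sketch of a random-sampling subclass. It assumes the base class stores the space as self.config_space and that _update() has the signature shown; Domain.sample() draws a value from a hyperparameter domain, and _next_initial_config() (see get_config() below) serves the points_to_evaluate entries first. As the note above says, real implementations are better started from StochasticSearcher:

    from typing import Any, Dict, Optional

    from syne_tune.config_space import Domain
    from syne_tune.optimizer.schedulers.searchers.searcher import BaseSearcher


    class SimpleRandomSearcher(BaseSearcher):
        """Toy searcher drawing configurations uniformly at random."""

        def get_config(self, **kwargs) -> Optional[Dict[str, Any]]:
            # Serve configs from points_to_evaluate first
            config = self._next_initial_config()
            if config is None:
                # Sample each hyperparameter from its domain; constant
                # values in the config space are passed through as-is
                config = {
                    name: hp.sample() if isinstance(hp, Domain) else hp
                    for name, hp in self.config_space.items()
                }
            return config

        def _update(self, trial_id: str, config: Dict[str, Any], result: Dict[str, Any]):
            pass  # random search does not model observed results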

configure_scheduler(scheduler)[source]

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.

Parameters:

scheduler (TrialScheduler) – Scheduler the searcher is used with.

get_config(**kwargs)[source]

Suggest a new configuration.

Note: Implementations should query _next_initial_config() first, so that initial configs from points_to_evaluate are returned before new configurations are suggested.

Parameters:

kwargs – Extra information may be passed from scheduler to searcher

Return type:

Optional[Dict[str, Any]]

Returns:

New configuration. The searcher may return None if a new configuration cannot be suggested; in this case, the tuning stops. This happens if the searcher never suggests the same config more than once and all configs in a (finite) search space have been exhausted.
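A hedged sketch of how a scheduler might consume this return value. suggest_or_stop is a hypothetical helper, and passing trial_id through kwargs is an assumption about the calling scheduler:

    from typing import Any, Dict, Optional

    def suggest_or_stop(searcher, trial_id: str) -> Optional[Dict[str, Any]]:
        # None means the (finite) search space is exhausted: the
        # scheduler stops launching new trials
        config = searcher.get_config(trial_id=trial_id)
        if config is not None:
            # Let model-based searchers account for the in-flight trial
            searcher.register_pending(trial_id, config=config)
        return config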

on_trial_result(trial_id, config, result, update)[source]

Inform searcher about result

The scheduler passes every result. If update == True, the searcher should update its surrogate model (if any); otherwise, result is an intermediate result which is not modelled.

The default implementation calls _update() if update == True. It can be overridden by searchers which also react to intermediate results.

Parameters:
  • trial_id (str) – See on_trial_result()

  • config (Dict[str, Any]) – See on_trial_result()

  • result (Dict[str, Any]) – See on_trial_result()

  • update (bool) – Should surrogate model be updated?
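A sketch of a searcher reacting to intermediate results by overriding this hook. The mixin class and its logging are illustrative; the update branch mirrors the default dispatch to _update() described above:

    import logging

    logger = logging.getLogger(__name__)


    class LoggingResultsMixin:
        """Illustrative mixin: log every report, model final ones only."""

        def on_trial_result(self, trial_id, config, result, update):
            # React to every report, including intermediate ones
            logger.debug("Trial %s reported: %s", trial_id, result)
            if update:
                # Default behavior: update the surrogate model only on
                # results flagged by the scheduler
                self._update(trial_id, config, result)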

register_pending(trial_id, config=None, milestone=None)[source]

Signals to searcher that evaluation for trial has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.

Parameters:
  • trial_id (str) – ID of trial to be registered as pending evaluation

  • config (Optional[Dict[str, Any]]) – If trial_id has not been registered with the searcher, its configuration must be passed here. Ignored otherwise.

  • milestone (Optional[int]) – For multi-fidelity schedulers, this is the next rung level the evaluation will attain, so that the model registers (config, milestone) as pending.
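For example, a multi-fidelity scheduler might register a pending evaluation at the next rung level like this (the trial ID, config, and rung level are illustrative, with config coming from a previous get_config() call):

    # Trial "0" runs towards rung level 3; the searcher's model treats
    # (config, milestone=3) as a pending observation
    searcher.register_pending(trial_id="0", config=config, milestone=3)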

remove_case(trial_id, **kwargs)[source]

Remove data case previously appended by _update()

For searchers which maintain the dataset of all cases (reports) passed to update, this method allows one case to be removed from the dataset.

Parameters:
  • trial_id (str) – ID of trial whose data is to be removed

  • kwargs – Extra arguments, optional

evaluation_failed(trial_id)[source]

Called by scheduler if an evaluation job for a trial failed.

The searcher should react appropriately (e.g., remove pending evaluations for this trial, not suggest the configuration again).

Parameters:

trial_id (str) – ID of trial whose evaluation failed

cleanup_pending(trial_id)[source]

Removes all pending evaluations for trial trial_id.

This should be called after an evaluation terminates. For various reasons (e.g., termination due to convergence), pending candidates for this evaluation may still be present.

Parameters:

trial_id (str) – ID of trial whose pending evaluations should be cleared

dataset_size()[source]
Returns:

Size of dataset a model is fitted to, or 0 if no model is fitted to data

model_parameters()[source]
Returns:

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty

get_state()[source]

Together with clone_from_state(), this is needed in order to store and re-create the mutable state of the searcher. The state returned here must be pickle-able.

Return type:

Dict[str, Any]

Returns:

Pickle-able mutable state of searcher

clone_from_state(state)[source]

Together with get_state(), this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state(), this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self should no longer be used.

Parameters:

state (Dict[str, Any]) – See above

Returns:

New searcher object
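A sketch of the checkpoint round trip implied by these two methods. Here, searcher stands for any existing BaseSearcher instance, and the pickle usage simply exercises the pickle-ability requirement:

    import pickle

    # The mutable state must survive a pickle round trip
    blob = pickle.dumps(searcher.get_state())

    # clone_from_state() merges the restored mutable state with the
    # non-pickle-able immutable parts of searcher; per the contract
    # above, only the clone is used from here on
    searcher = searcher.clone_from_state(pickle.loads(blob))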

property debug_log: DebugLogPrinter | None

Some subclasses support writing a debug log, using DebugLogPrinter. See RandomSearcher for an example.

Returns:

debug_log object, or None (not supported)