syne_tune.optimizer.schedulers.searchers.bayesopt.models.model_transformer module

class syne_tune.optimizer.schedulers.searchers.bayesopt.models.model_transformer.StateForModelConverter[source]

Bases: object

Interface for state converters (optionally) used in ModelStateTransformer. These are applied to a state before it is passed to the model for fitting and predictions. The main use case is to filter down the data if model fitting scales super-linearly.

set_random_state(random_state)[source]

Some state converters use random sampling. For these, the random state has to be set before first usage.

Parameters:

random_state (RandomState) – Random state to be used internally
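
As a rough illustration, a state converter which randomly subsamples the observed data could be written along the following lines. This is a minimal sketch: SubsampleStateConverter is a hypothetical name, and the assumptions that converters are applied as callables mapping one TuningJobState to another, and that a shallow copy of the state can be modified this way, are not confirmed by this page:

    import copy
    from numpy.random import RandomState

    from syne_tune.optimizer.schedulers.searchers.bayesopt.models.model_transformer import (
        StateForModelConverter,
    )


    class SubsampleStateConverter(StateForModelConverter):
        """Hypothetical converter keeping at most ``max_size`` observed trials."""

        def __init__(self, max_size: int):
            self._max_size = max_size
            self._random_state = None

        def set_random_state(self, random_state: RandomState):
            # Converters which sample randomly need the random state set before first use
            self._random_state = random_state

        def __call__(self, state):
            # Assumption: converters are applied as callables mapping a
            # TuningJobState to another TuningJobState
            assert self._random_state is not None, "Call set_random_state first"
            evaluations = state.trials_evaluations
            if len(evaluations) <= self._max_size:
                return state
            keep = self._random_state.choice(
                len(evaluations), size=self._max_size, replace=False
            )
            # Shallow copy; only the list of observed trials is replaced
            converted = copy.copy(state)
            converted.trials_evaluations = [evaluations[i] for i in keep]
            return converted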

class syne_tune.optimizer.schedulers.searchers.bayesopt.models.model_transformer.ModelStateTransformer(estimator, init_state, skip_optimization=None, state_converter=None)[source]

Bases: object

This class maintains the TuningJobState object alongside an HPO experiment, and manages the reaction to changes of this state. In particular, it provides a fitted surrogate model on demand, which encapsulates the GP posterior.

The state transformer is generic; it uses Estimator for anything specific to the model type.

skip_optimization is a predicate depending on the state, which determines what is done at the next call of fit(). If it returns False, the model parameters are refit; otherwise, the current ones are kept unchanged (which is usually faster, but risks staleness).

We also track the observed data state.trials_evaluations. If it has not changed since the most recent fit() call, the model parameters are not refit. This relies on the assumption that model parameter fitting depends only on state.trials_evaluations (the observed data), not on other fields (e.g., pending evaluations).

If given, state_converter maps the state to another one, which is then passed to the model for fitting and predictions. One important use case is filtering down the data when model fitting scales super-linearly. Another is converting multi-fidelity setups so that single-fidelity models can be used internally.

Note that estimator and skip_optimization can also be dictionaries mapping output names to estimators and predicates, respectively. In that case, the state is shared, but the model for each output metric is updated independently.

Parameters:
  • estimator (Estimator | Dict[str, Estimator]) – Surrogate model estimator, or dictionary mapping output names to estimators in the multi-model case

  • init_state (TuningJobState) – Initial tuning job state

  • skip_optimization (Optional[SkipOptimizationPredicate | Dict[str, SkipOptimizationPredicate]]) – Predicate (or dictionary of predicates) deciding whether refitting of the model parameters is skipped at the next fit() call

  • state_converter (Optional[StateForModelConverter]) – If given, maps the state to another one, which is passed to the model for fitting and predictions

property state: TuningJobState
property use_single_model: bool
property estimator: Estimator | Dict[str, Estimator]
property skip_optimization: SkipOptimizationPredicate | Dict[str, SkipOptimizationPredicate]
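
A rough construction sketch is given below; the estimator and initial state are placeholders for objects normally created by the searcher, and refit_every_fifth is a hypothetical skip_optimization predicate:

    from syne_tune.optimizer.schedulers.searchers.bayesopt.models.model_transformer import (
        ModelStateTransformer,
    )

    # Placeholders: in practice these come from the searcher, e.g. a Gaussian
    # process estimator and an initial (often empty) TuningJobState
    estimator = ...
    init_state = ...


    def refit_every_fifth(state) -> bool:
        # Hypothetical predicate: returning True skips the refit, so model
        # parameters are only refit once every 5 observed trials
        num_observed = len(state.trials_evaluations)
        return num_observed == 0 or num_observed % 5 != 0


    state_transformer = ModelStateTransformer(
        estimator=estimator,
        init_state=init_state,
        skip_optimization=refit_every_fifth,
    )
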
fit(**kwargs)[source]

If skip_optimization is given, it overrides the self._skip_optimization predicate.

Return type:

Union[Predictor, Dict[str, Predictor]]

Returns:

Fitted surrogate model for the current state in the standard single-model case; in the multi-model case, a dictionary mapping output names to fitted surrogate models for the current state (which is shared across models).
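
Continuing the sketch above, obtaining a fitted predictor could look as follows; note that passing skip_optimization as a keyword argument to fit() is an assumption based on the description above:

    # Fit (or reuse) model parameters for the current state and return a
    # predictor encapsulating the posterior
    predictor = state_transformer.fit()

    # Assumption: passing skip_optimization=False overrides the predicate
    # configured at construction and forces a refit
    predictor = state_transformer.fit(skip_optimization=False)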

get_params()[source]
set_params(param_dict)[source]
append_trial(trial_id, config=None, resource=None)[source]

Appends new pending evaluation to the state.

Parameters:
  • trial_id (str) – ID of trial

  • config (Optional[Dict[str, Union[int, float, str]]]) – Must be given if this trial does not yet feature in the state

  • resource (Optional[int]) – Must be given in the multi-fidelity case, to specify at which resource level the evaluation is pending

drop_pending_evaluation(trial_id, resource=None)[source]

Drops a pending evaluation from the state. If it is not listed as pending, nothing is done.

Parameters:
  • trial_id (str) – ID of trial

  • resource (Optional[int]) – Must be given in the multi-fidelity case, to specify at which resource level the evaluation is pending

Return type:

bool
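
For illustration, registering and later dropping a pending evaluation in a multi-fidelity setup might look like this (trial ID, configuration, and resource level are hypothetical):

    # Register a pending evaluation for trial "0" at resource level 3;
    # config must be passed since the trial is not yet in the state
    state_transformer.append_trial(trial_id="0", config={"lr": 0.01}, resource=3)

    # If the evaluation is abandoned, remove it again; the returned bool
    # indicates whether a pending evaluation was actually removed
    was_removed = state_transformer.drop_pending_evaluation(trial_id="0", resource=3)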

remove_observed_case(trial_id, metric_name='target', key=None)[source]

Removes specific observation from the state.

Parameters:
  • trial_id (str) – ID of trial

  • metric_name (str) – Name of internal metric

  • key (Optional[str]) – Must be given in the multi-fidelity case

label_trial(data, config=None)[source]

Adds observed data for a trial. If the trial already has observations in the state, data.metrics are appended to them; otherwise, a new entry is appended. If new observations replace pending evaluations, the latter are removed.

config must be passed if the trial has not yet been registered in the state (this normally happens with the append_trial call). If the trial is already registered, config is ignored.
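
Assuming observed data is passed as a TrialEvaluations object (trial ID plus a metrics dictionary), labeling the trial from the example above could look like this; the import path and the resource-keyed metric format are assumptions:

    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common import (
        TrialEvaluations,
    )

    # Hypothetical observation: in the multi-fidelity case, metric values are
    # assumed to be keyed by resource level
    data = TrialEvaluations(trial_id="0", metrics={"target": {"3": 0.12}})
    # The trial was registered via append_trial above, so config can be omitted;
    # the matching pending evaluation is removed by this call
    state_transformer.label_trial(data)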

filter_pending_evaluations(filter_pred)[source]

Filters state.pending_evaluations with filter_pred.

Parameters:

filter_pred (Callable[[PendingEvaluation], bool]) – Filtering predicate
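
For example, to drop all pending evaluations at or above a given resource level (assuming PendingEvaluation exposes a resource attribute):

    # Keep only pending evaluations at resource levels strictly below 9
    state_transformer.filter_pending_evaluations(
        lambda pending: pending.resource is not None and pending.resource < 9
    )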

mark_trial_failed(trial_id)[source]