syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common module

syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common.dictionarize_objective(x)[source]

Maps a single objective value x to a metrics dictionary keyed by the internal metric name.
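
A minimal usage sketch (assuming the internal metric name is "target", as the num_cases default below suggests):

    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common import (
        dictionarize_objective,
    )

    # Wrap a raw objective value into the metrics dictionary format
    metrics = dictionarize_objective(0.42)
    # Assumption: INTERNAL_METRIC_NAME == "target"
    assert metrics == {"target": 0.42}
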
class syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common.TrialEvaluations(trial_id, metrics)[source]

Bases: object

For each metric name k, metrics[k] is either a single value or a dict. The latter is used, for example, by multi-fidelity schedulers, where metrics[k][str(r)] is the metric value at resource level r.

trial_id: str
metrics: Dict[str, Union[float, Dict[str, float]]]
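
As an illustrative sketch, both layouts side by side (trial ids, metric values, and resource levels are made up; the metric name "target" is an assumption):

    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common import (
        TrialEvaluations,
    )

    # Single-fidelity: metrics[k] is a scalar
    ev_single = TrialEvaluations(trial_id="0", metrics={"target": 0.81})

    # Multi-fidelity: metrics[k][str(r)] holds the value at resource level r
    ev_multi = TrialEvaluations(
        trial_id="1",
        metrics={"target": {"1": 0.95, "3": 0.88, "9": 0.83}},
    )
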
num_cases(metric_name='target', resource=None)[source]

Counts the number of observations for metric metric_name.

Parameters:
  • metric_name (str) – Defaults to INTERNAL_METRIC_NAME

  • resource (Optional[int]) – In the multi-fidelity case, we only count observations at this resource level

Return type:

int

Returns:

Number of observations
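
Continuing the sketch above (the exact counting semantics for the resource argument are an assumption):

    # Scalar metric: a single observation
    assert ev_single.num_cases(metric_name="target") == 1

    # Multi-fidelity: count all resource levels, or restrict to one
    assert ev_multi.num_cases(metric_name="target") == 3
    assert ev_multi.num_cases(metric_name="target", resource=3) == 1
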

class syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common.PendingEvaluation(trial_id, resource=None)[source]

Bases: object

Maintains information for pending candidates (i.e., candidates which have been queried for labeling, but for which target feedback has not yet been obtained).

The minimum information is the candidate which has been queried.

property trial_id: str
property resource: int | None
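
A brief sketch of creating pending-evaluation records (trial ids and the resource level are illustrative):

    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common import (
        PendingEvaluation,
    )

    # Single-fidelity case: no resource level attached
    pending = PendingEvaluation(trial_id="2")
    assert pending.resource is None

    # Multi-fidelity case: evaluation pending at resource level 3
    pending_mf = PendingEvaluation(trial_id="3", resource=3)
    assert pending_mf.trial_id == "3" and pending_mf.resource == 3
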
class syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common.FantasizedPendingEvaluation(trial_id, fantasies, resource=None)[source]

Bases: PendingEvaluation

Here, latent target values are integrated out via Monte Carlo sampling; the samples are also called “fantasies”.

property fantasies
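
A sketch, assuming fantasies maps each metric name to an array of Monte Carlo samples for the not-yet-observed target (the dict-of-arrays layout and shape are assumptions):

    import numpy as np

    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common import (
        FantasizedPendingEvaluation,
    )

    # Assumption: one array of fantasy samples per metric name
    fantasy_samples = {"target": np.array([0.79, 0.84, 0.81])}
    pending = FantasizedPendingEvaluation(
        trial_id="4", fantasies=fantasy_samples, resource=3
    )
    print(pending.fantasies["target"])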