syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.tuning_job_state module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.tuning_job_state.TuningJobState(hp_ranges, config_for_trial, trials_evaluations, failed_trials=None, pending_evaluations=None)[source]
Bases: object
Collects all data determining the state of a tuning experiment. Trials are indexed by trial_id. The configurations associated with trials are listed in config_for_trial. trials_evaluations contains observations, failed_trials lists trials for which evaluations have failed, and pending_evaluations lists trials for which observations are pending. trials_evaluations may store values for different metrics in each record, and each such value may be a dict (see TrialEvaluations). For example, for multi-fidelity schedulers, trials_evaluations[i].metrics[k][str(r)] is the value for metric k and trial trials_evaluations[i].trial_id observed at resource level r.
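As an illustration only, here is a minimal sketch of how such a state could be assembled and indexed. The import paths for make_hyperparameter_ranges and TrialEvaluations are assumptions (they may differ across Syne Tune versions), and the config space, trial ids, and metric values are made up:

    from syne_tune.config_space import randint, uniform
    # Assumed import paths; verify against the installed Syne Tune version
    from syne_tune.optimizer.schedulers.searchers.utils import make_hyperparameter_ranges
    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.common import (
        TrialEvaluations,
    )
    from syne_tune.optimizer.schedulers.searchers.bayesopt.datatypes.tuning_job_state import (
        TuningJobState,
    )

    config_space = {"lr": uniform(1e-4, 1e-1), "batch_size": randint(16, 128)}
    config0 = {"lr": 0.01, "batch_size": 32}

    # Multi-fidelity style record: metric "target" observed at resource levels 1 and 3
    evals = [TrialEvaluations(trial_id="0", metrics={"target": {"1": 0.9, "3": 0.7}})]

    state = TuningJobState(
        hp_ranges=make_hyperparameter_ranges(config_space),
        config_for_trial={"0": config0},
        trials_evaluations=evals,
    )
    # trials_evaluations[i].metrics[k][str(r)]: value of metric k for that trial at resource r
    print(state.trials_evaluations[0].metrics["target"]["3"])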
- metrics_for_trial(trial_id, config=None)[source]
Helper for inserting a new entry into trials_evaluations. If trial_id is already contained there, the corresponding eval.metrics is returned. Otherwise, a new entry new_eval is appended to trials_evaluations and its new_eval.metrics is returned (an empty dict). In the latter case, config needs to be passed, because it may not yet feature in config_for_trial.
- Return type:
Dict[str, Union[float, Dict[str, float]]]
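Continuing the illustrative sketch from the class description, this is roughly how the helper could be used to record observations; trial ids, configs, and values are made up:

    # Trial "1" is not registered yet, so config must be passed
    metrics = state.metrics_for_trial("1", config={"lr": 0.05, "batch_size": 64})
    metrics["target"] = {"1": 0.8}  # dict-valued metric keyed by resource level

    # Trial "0" is already registered, so config can be omitted
    state.metrics_for_trial("0")["target"]["5"] = 0.6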
- num_observed_cases(metric_name='target', resource=None)[source]
Counts the number of observations for metric metric_name.
- Parameters:
metric_name (str) – Defaults to INTERNAL_METRIC_NAME
resource (Optional[int]) – In the multi-fidelity case, we only count observations at this resource level
- Return type:
int
- Returns:
Number of observations
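For example, continuing the sketch above (the default metric name 'target' corresponds to INTERNAL_METRIC_NAME per the signature above):

    # All observations of the default metric, across trials and resource levels
    n_total = state.num_observed_cases()
    # Only observations made at resource level 3
    n_at_r3 = state.num_observed_cases(metric_name="target", resource=3)
    print(n_total, n_at_r3)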
- observed_data_for_metric(metric_name='target', resource_attr_name=None)[source]
Extracts datapoints from trials_evaluations for a particular metric metric_name, in the form of a list of configs and a list of metric values. If metric_name is a dict-valued metric, the dict keys must be resource values, and the returned configs are extended. Here, the name of the resource attribute can be passed in resource_attr_name (if not given, it can be obtained from hp_ranges if this is extended).
Note: Implements the default behaviour, namely to return extended configs for dict-valued metrics, which also requires hp_ranges to be extended. This is not correct for some specific multi-fidelity surrogate models, which should access the data directly.
- Parameters:
metric_name (str)
resource_attr_name (Optional[str])
- Return type:
(List[Dict[str, Union[int, float, str]]], List[float])
- Returns:
configs, metric_values
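A sketch of the call for the dict-valued records above; the resource attribute name "epoch" is an arbitrary choice for illustration, and note the caveat above that returning extended configs also requires hp_ranges to be extended, which the minimal sketch from the class description does not set up:

    # One datapoint per (config, resource) pair; configs are extended by "epoch"
    configs, metric_values = state.observed_data_for_metric(resource_attr_name="epoch")
    for config, value in zip(configs, metric_values):
        print(config, value)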
- is_labeled(trial_id, metric_name='target', resource=None)[source]
Checks whether trial_id has observed data under metric_name. If resource is given, the observation must be at that resource level.
- Return type:
bool
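For instance, continuing the sketch above:

    print(state.is_labeled("0"))              # observed data for the default metric?
    print(state.is_labeled("0", resource=3))  # only an observation at resource level 3 counts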
- append_pending(trial_id, config=None, resource=None)[source]
Appends a new pending evaluation. If the trial has not been registered here, config must be given; otherwise, it is ignored.
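Continuing the sketch, for example (trial ids, configs, and resource levels are made up):

    # Already registered trial: config is not needed
    state.append_pending("0", resource=7)
    # New trial: config must be supplied so the trial can be registered
    state.append_pending("2", config={"lr": 0.002, "batch_size": 16}, resource=1)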
- pending_configurations(resource_attr_name=None)[source]
Returns list of configurations corresponding to pending evaluations. If the latter have resource values, the configs are extended.
- Return type:
List[Dict[str, Union[int, float, str]]]
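Continuing the sketch, the pending configurations could then be listed as extended configs (the resource attribute name "epoch" is again an arbitrary illustration):

    for config in state.pending_configurations(resource_attr_name="epoch"):
        print(config)  # e.g. {"lr": ..., "batch_size": ..., "epoch": ...}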
- all_configurations(filter_observed_data=None)[source]
Returns list of configurations for all trials represented here, whether observed, pending, or failed. If filter_observed_data is given, the configurations for observed trials are filtered with this predicate.
- Parameters:
filter_observed_data (Optional[Callable[[Dict[str, Union[int, float, str]]], bool]]) – See above, optional
- Return type:
List[Dict[str, Union[int, float, str]]]
- Returns:
List of all configurations
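A final sketch with a filtering predicate, continuing the example above; the predicate is arbitrary and, per the description, only applies to configurations of observed trials:

    # Pending and failed trials are returned unfiltered; observed trials are
    # kept only if the predicate returns True
    configs = state.all_configurations(
        filter_observed_data=lambda config: config["lr"] < 0.02
    )
    print(len(configs))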