syne_tune.optimizer.schedulers.searchers.dyhpo.dyhpo_searcher module
- class syne_tune.optimizer.schedulers.searchers.dyhpo.dyhpo_searcher.MyGPMultiFidelitySearcher(config_space, **kwargs)[source]
Bases: GPMultiFidelitySearcher
This wrapper is for convenience, to avoid having to depend on internal concepts of GPMultiFidelitySearcher.
- score_paused_trials_and_new_configs(paused_trials, min_resource, new_trial_id, skip_optimization)[source]
See DynamicHPOSearcher.score_paused_trials_and_new_configs(). If skip_optimization == True, this is passed to the posterior state computation, and refitting of the surrogate model is skipped. Otherwise, nothing is passed, so the built-in skip_optimization logic is used.
- Return type:
Dict[str, Any]
- class syne_tune.optimizer.schedulers.searchers.dyhpo.dyhpo_searcher.DynamicHPOSearcher(config_space, metric, points_to_evaluate=None, **kwargs)[source]
Bases: BaseSearcher
Supports model-based decisions in the DyHPO algorithm proposed by Wistuba et al. (see DyHPORungSystem).
It is not recommended to create DynamicHPOSearcher objects directly. Rather, create HyperbandScheduler objects with searcher="dyhpo" and type="dyhpo", and pass arguments here in search_options. This will use the appropriate functions from syne_tune.optimizer.schedulers.searchers.gp_searcher_factory to create components in a consistent way.
This searcher is special, in that it contains a searcher of type GPMultiFidelitySearcher. Also, its model-based scoring is not triggered by get_config(), but rather when the scheduler tries to find a trial which can be promoted. At this point, score_paused_trials_and_new_configs() is called, which scores all paused trials along with new configurations. Depending on which scores best, a paused trial is resumed, or a trial with a new configuration is started. Since all the work is already done in score_paused_trials_and_new_configs(), the implementation of get_config() becomes trivial. See also DyHPORungSystem. Extra points:
- The number of new configurations scored in score_paused_trials_and_new_configs() is the maximum of num_init_candidates and the number of paused trials scored as well
- The parameters of the surrogate model are not refit in every call of score_paused_trials_and_new_configs(), but only when in the most recent call, a new configuration was chosen as top scorer. The aim is to refit at a similar frequency to MOBSTER, where decisions on whether to resume a trial are not done in a model-based way.
This searcher must be used with HyperbandScheduler and type="dyhpo". It has the same constructor parameters as GPMultiFidelitySearcher. Of these, the following are not used, but need to be given valid values: resource_acq, initial_scoring, skip_local_optimization.
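A minimal sketch of the recommended setup (the config space, metric names, and search_options values below are illustrative assumptions, not part of this API):

```python
from syne_tune.config_space import loguniform, randint
from syne_tune.optimizer.schedulers import HyperbandScheduler

# Illustrative config space; "epochs" serves as the maximum resource level
config_space = {
    "learning_rate": loguniform(1e-6, 1e-2),
    "num_layers": randint(1, 8),
    "epochs": 27,
}

scheduler = HyperbandScheduler(
    config_space,
    searcher="dyhpo",    # creates a DynamicHPOSearcher internally
    type="dyhpo",        # selects the DyHPO rung system
    metric="validation_error",
    mode="min",
    resource_attr="epoch",
    max_resource_attr="epochs",
    search_options={"num_init_candidates": 10},  # forwarded to the searcher
)
```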
- configure_scheduler(scheduler)[source]
Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.
- Parameters:
scheduler (TrialScheduler) – Scheduler the searcher is used with.
- get_config(**kwargs)[source]
Suggest a new configuration.
Note: Query _next_initial_config() for initial configs to return first.
- Parameters:
kwargs – Extra information may be passed from scheduler to searcher
- Return type:
Optional[dict]
- Returns:
New configuration. The searcher may return None if a new configuration cannot be suggested. In this case, the tuning will stop. This happens if searchers never suggest the same config more than once, and all configs in the (finite) search space are exhausted.
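An illustrative call pattern (the trial_id value is made up; with DyHPO, the heavy lifting has already happened in score_paused_trials_and_new_configs(), so this call is cheap):

```python
# The scheduler calls this when it decides to start a new trial
config = searcher.get_config(trial_id="12")
if config is None:
    # No further configuration can be suggested: tuning stops
    print("Search space exhausted")
```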
- on_trial_result(trial_id, config, result, update)[source]
Inform searcher about result
The scheduler passes every result. If update == True, the searcher should update its surrogate model (if any), otherwise result is an intermediate result not modelled.
The default implementation calls _update() if update == True. It can be overwritten by searchers which also react to intermediate results.
- Parameters:
trial_id (str) – See on_trial_result()
config (Dict[str, Any]) – See on_trial_result()
result (Dict[str, Any]) – See on_trial_result()
update (bool) – Should surrogate model be updated?
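An illustrative report from the scheduler (metric names and values are made-up assumptions):

```python
searcher.on_trial_result(
    trial_id="4",
    config={"learning_rate": 3e-4, "num_layers": 2},
    result={"epoch": 3, "validation_error": 0.21},
    update=True,  # e.g. the result lies on a rung level, so the model is updated
)
```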
- register_pending(trial_id, config=None, milestone=None)[source]
Signals to searcher that evaluation for trial has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.
- Parameters:
trial_id (str) – ID of trial to be registered as pending evaluation
config (Optional[dict]) – If trial_id has not been registered with the searcher, its configuration must be passed here. Ignored otherwise.
milestone (Optional[int]) – For multi-fidelity schedulers, this is the next rung level the evaluation will attain, so that the model registers (config, milestone) as pending.
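For example (hypothetical values), registering a trial that is running towards rung level 3:

```python
searcher.register_pending(
    trial_id="4",
    config={"learning_rate": 3e-4, "num_layers": 2},
    milestone=3,  # (config, 3) is registered as a pending evaluation
)
```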
- remove_case(trial_id, **kwargs)[source]
Remove data case previously appended by _update().
For searchers which maintain the dataset of all cases (reports) passed to update, this method allows removing one case from the dataset.
- Parameters:
trial_id (str) – ID of trial whose data is to be removed
kwargs – Extra arguments, optional
- evaluation_failed(trial_id)[source]
Called by scheduler if an evaluation job for a trial failed.
The searcher should react appropriately (e.g., remove pending evaluations for this trial, not suggest the configuration again).
- Parameters:
trial_id (str) – ID of trial whose evaluation failed
- cleanup_pending(trial_id)[source]
Removes all pending evaluations for trial trial_id.
This should be called after an evaluation terminates. For various reasons (e.g., termination due to convergence), pending candidates for this evaluation may still be present.
- Parameters:
trial_id (str) – ID of trial whose pending evaluations should be cleared
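A hedged sketch of how a scheduler might combine evaluation_failed() and cleanup_pending() (run_evaluation and the surrounding control flow are hypothetical):

```python
try:
    run_evaluation(trial)  # hypothetical helper executing the training job
except Exception:
    # Drop pending evaluations and avoid suggesting this config again
    searcher.evaluation_failed(trial.trial_id)
else:
    # Clear any pending candidates left over after normal termination
    searcher.cleanup_pending(trial.trial_id)
```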
- dataset_size()[source]
- Returns:
Size of dataset a model is fitted to, or 0 if no model is fitted to data
- model_parameters()[source]
- Returns:
Dictionary with current model (hyper)parameter values if this is supported; otherwise empty
- score_paused_trials_and_new_configs(paused_trials, min_resource, new_trial_id)[source]
This method computes acquisition scores for a number of extended configs \((x, r)\). The acquisition score \(EI(x | r)\) is expected improvement (EI) at resource level \(r\). Here, the incumbent used in EI is the best value attained at level \(r\), or the best value overall if there is no data yet at that level (a standard closed form of EI is sketched after the list below). There are two types of configs being scored:
- Paused trials: Passed by paused_trials as tuples (trial_id, resource), where resource is the level to be attained by the trial if it was resumed
- New configurations drawn at random. For these, the score is EI at \(r\) equal to min_resource
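As a point of reference (an assumption about the usual Gaussian process setting, not a statement of this method's internals): if the surrogate model yields a Gaussian predictive posterior with mean \(\mu(x, r)\) and standard deviation \(\sigma(x, r)\), and \(f^*_r\) denotes the incumbent at level \(r\), the standard closed form of EI for minimization is
\[
EI(x \mid r) = \sigma(x, r)\,\bigl[ z\,\Phi(z) + \varphi(z) \bigr],
\qquad z = \frac{f^*_r - \mu(x, r)}{\sigma(x, r)},
\]
where \(\Phi\) and \(\varphi\) are the standard normal CDF and PDF.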
We return a dictionary. If a paused trial wins, its trial_id is returned with key “trial_id”. If a new configuration wins, this configuration is returned with key “config”.
Note: As long as the internal searcher still returns configs from points_to_evaluate or drawn at random, this method always returns this config with key “config”. Scoring and considering paused trials is only done afterwards.
- Parameters:
paused_trials (List[Tuple[str, int, int]]) – See above. Can be empty
min_resource (int) – Smallest resource level
new_trial_id (str) – ID of new trial to be started in case a new configuration wins
- Return type:
Dict[str, Any]
- Returns:
Dictionary, see above
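An illustrative sketch of how a scheduler could consume the returned dictionary (the tuple values and the resume_trial and start_trial helpers are hypothetical):

```python
decision = searcher.score_paused_trials_and_new_configs(
    paused_trials=[("3", 1, 9), ("7", 2, 3)],  # made-up List[Tuple[str, int, int]]
    min_resource=1,
    new_trial_id="12",
)
if "trial_id" in decision:
    resume_trial(decision["trial_id"])     # a paused trial scored best: resume it
else:
    start_trial("12", decision["config"])  # a new configuration won: start a trial
```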
- get_state()[source]
Together with clone_from_state(), this is needed in order to store and re-create the mutable state of the searcher. The state returned here must be pickle-able.
- Return type:
Dict[str, Any]
- Returns:
Pickle-able mutable state of searcher
- clone_from_state(state)[source]
Together with get_state(), this is needed in order to store and re-create the mutable state of the searcher.
Given state as returned by get_state(), this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.
- Parameters:
state (Dict[str, Any]) – See above
- Returns:
New searcher object
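A sketch of the checkpoint/restore pattern these two methods enable (the pickle round trip is an assumed usage; the API itself only requires the state to be pickle-able):

```python
import pickle

# Persist the mutable searcher state
blob = pickle.dumps(searcher.get_state())

# Later (possibly in a new process with a freshly constructed searcher):
restored = searcher.clone_from_state(pickle.loads(blob))
# Use `restored` from here on; the original searcher is not used anymore
```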
- property debug_log: DebugLogPrinter | None
Some subclasses support writing a debug log, using DebugLogPrinter. See RandomSearcher for an example.
- Returns:
debug_log object or None (not supported)