syne_tune.optimizer.scheduler module

class syne_tune.optimizer.scheduler.SchedulerDecision[source]

Bases: object

Possible return values of TrialScheduler.on_trial_result(), signaling the tuner how to proceed with the reporting trial.

The difference between PAUSE and STOP is important. If a trial is stopped, it cannot be resumed afterwards. Its checkpoints may be deleted. If a trial is paused, it may be resumed in the future, and its most recent checkpoint should be retained.

CONTINUE = 'CONTINUE'

Status for continuing trial execution

PAUSE = 'PAUSE'

Status for pausing trial execution

STOP = 'STOP'

Status for stopping trial execution
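
As a minimal, hypothetical illustration of how these decisions might be used inside a custom scheduler's on_trial_result() (the metric name "validation_error" and the thresholds below are assumptions, not part of the API):

    from syne_tune.optimizer.scheduler import SchedulerDecision

    def decide(result: dict) -> str:
        # Hypothetical rule: stop clearly bad trials (they cannot be resumed,
        # and their checkpoints may be deleted), pause borderline ones (they
        # may be resumed from their latest checkpoint), continue the rest
        error = result["validation_error"]  # assumed metric reported by the trial
        if error > 0.9:
            return SchedulerDecision.STOP
        elif error > 0.5:
            return SchedulerDecision.PAUSE
        return SchedulerDecision.CONTINUE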

class syne_tune.optimizer.scheduler.TrialSuggestion(spawn_new_trial_id=True, checkpoint_trial_id=None, config=None)[source]

Bases: object

Suggestion returned by TrialScheduler.suggest()

Parameters:
  • spawn_new_trial_id (bool) – Whether a new trial_id should be used.

  • checkpoint_trial_id (Optional[int]) – ID of the trial whose checkpoint should be used to resume from. If spawn_new_trial_id is False, the trial checkpoint_trial_id is resumed from its previous checkpoint.

  • config (Optional[dict]) – The configuration which should be evaluated.

spawn_new_trial_id: bool = True
checkpoint_trial_id: Optional[int] = None
config: Optional[dict] = None
static start_suggestion(config, checkpoint_trial_id=None)[source]

Suggestion to start new trial

Parameters:
  • config (Dict[str, Any]) – Configuration to use for the new trial.

  • checkpoint_trial_id (Optional[int]) – Use checkpoint of this trial when starting the new trial (otherwise, it is started from scratch).

Return type:

TrialSuggestion

Returns:

A trial decision that consists of starting a new trial (which receives a new trial ID).

static resume_suggestion(trial_id, config=None)[source]

Suggestion to resume a paused trial

Parameters:
  • trial_id (int) – ID of trial to be resumed (from its checkpoint)

  • config (Optional[dict]) – Configuration to use for resumed trial

Return type:

TrialSuggestion

Returns:

A trial decision that consists of resuming trial trial_id with config if provided, or with its previously used configuration otherwise.
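
A brief usage sketch of these two helpers (the trial IDs and configuration values are made up for illustration):

    from syne_tune.optimizer.scheduler import TrialSuggestion

    # Start a new trial from scratch with a given configuration
    new_trial = TrialSuggestion.start_suggestion(
        config={"learning_rate": 0.01, "batch_size": 64}
    )

    # Start a new trial, warm-starting it from the checkpoint of trial 3
    warm_started = TrialSuggestion.start_suggestion(
        config={"learning_rate": 0.001, "batch_size": 64},
        checkpoint_trial_id=3,
    )

    # Resume paused trial 5 with its previously used configuration
    resumed = TrialSuggestion.resume_suggestion(trial_id=5)

    assert new_trial.spawn_new_trial_id and not resumed.spawn_new_trial_id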

class syne_tune.optimizer.scheduler.TrialScheduler(config_space)[source]

Bases: object

Schedulers maintain and drive the logic of an experiment, deciding which configurations to evaluate in new trials and which trials to stop early.

Some schedulers support pausing and resuming trials. In this case, they also drive the decision when to restart a paused trial.

Parameters:

config_space (Dict[str, Any]) – Configuration space

suggest(trial_id)[source]

Returns a suggestion for a new trial, or one to be resumed

This method returns a suggestion of type TrialSuggestion, unless there is no configuration left to explore, in which case None is returned.

If suggestion.spawn_new_trial_id is True, a new trial is to be started with config suggestion.config. Typically, this new trial is started from scratch. But if suggestion.checkpoint_trial_id is given, the trial is to be (warm)started from the checkpoint written for the trial with this ID. The new trial has ID trial_id.

If suggestion.spawn_new_trial_id is False, an existing and currently paused trial is to be resumed, whose ID is suggestion.checkpoint_trial_id. If this trial has a checkpoint, we start from there. In this case, suggestion.config is optional. If not given (default), the config of the resumed trial does not change. Otherwise, its config is overwritten by suggestion.config (see HyperbandScheduler with type="promotion" for an example why this can be useful).

Apart from the hyperparameter configuration, additional fields can be appended to the dict; these are passed to the trial function as well.

Parameters:

trial_id (int) – ID for new trial to be started (ignored if existing trial to be resumed)

Return type:

Optional[TrialSuggestion]

Returns:

Suggestion for a trial to be started or resumed, see above. If no suggestion can be made, None is returned.
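
As a rough, hypothetical sketch of how suggest() could be implemented in a custom scheduler (random sampling; it is assumed that the config-space domains from syne_tune.config_space expose a sample() method, as in Ray Tune-style search spaces):

    from typing import Any, Dict, Optional

    from syne_tune.config_space import Domain
    from syne_tune.optimizer.scheduler import TrialScheduler, TrialSuggestion

    class RandomSearchSketch(TrialScheduler):
        # Hypothetical scheduler: every call to suggest() starts a new trial
        # with an independently sampled configuration, never resuming old ones
        def suggest(self, trial_id: int) -> Optional[TrialSuggestion]:
            config: Dict[str, Any] = {
                name: domain.sample() if isinstance(domain, Domain) else domain
                for name, domain in self.config_space.items()
            }
            return TrialSuggestion.start_suggestion(config)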

on_trial_add(trial)[source]

Called when a new trial is added to the trial runner.

Additions are normally triggered by suggest().

Parameters:

trial (Trial) – Trial to be added

on_trial_error(trial)[source]

Called when a trial has failed.

Parameters:

trial (Trial) – Trial for which error is reported.

on_trial_result(trial, result)[source]

Called on each intermediate result reported by a trial.

At this point, the trial scheduler can make a decision by returning one of SchedulerDecision.CONTINUE, SchedulerDecision.PAUSE, or SchedulerDecision.STOP. This will only be called when the trial is currently running.

Parameters:
  • trial (Trial) – Trial for which results are reported

  • result (Dict[str, Any]) – Result dictionary

Return type:

str

Returns:

Decision what to do with the trial
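
As another hypothetical sketch, a pause-and-resume style scheduler could pause every trial after its first report, leaving it to suggest() to decide later whether the trial is resumed from its checkpoint (the "epoch" field in result and the import path of Trial are assumptions):

    from typing import Any, Dict

    from syne_tune.backend.trial_status import Trial
    from syne_tune.optimizer.scheduler import SchedulerDecision, TrialScheduler

    class PauseAfterFirstEpoch(TrialScheduler):
        # Hypothetical: pause each trial once it has reported at least one
        # epoch, so its checkpoint is retained and it may be resumed later
        def on_trial_result(self, trial: Trial, result: Dict[str, Any]) -> str:
            if result.get("epoch", 0) >= 1:
                return SchedulerDecision.PAUSE
            return SchedulerDecision.CONTINUE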

on_trial_complete(trial, result)[source]

Notification for the completion of a trial.

Note that on_trial_result() is called with the same result beforehand. However, if the scheduler only uses one final report per trial, it may ignore on_trial_result() and just use result here.

Parameters:
  • trial (Trial) – Trial which is completing

  • result (Dict[str, Any]) – Result dictionary

on_trial_remove(trial)[source]

Called to remove trial.

This is called when the trial is in PAUSED or PENDING state. Otherwise, call on_trial_complete().

Parameters:

trial (Trial) – Trial to be removed

metric_names()[source]
Return type:

List[str]

Returns:

List of metric names. The first one is the target metric optimized over, unless the scheduler is a genuine multi-objective scheduler (for example, one sampling the Pareto front)

metric_mode()[source]
Return type:

Union[str, List[str]]

Returns:

“min” if the target metric is minimized, otherwise “max”. Here, “min” should be the default. For a genuine multi-objective scheduler, a list of modes is returned

metadata()[source]
Return type:

Dict[str, Any]

Returns:

Metadata for the scheduler

is_multiobjective_scheduler()[source]

Return True if the scheduler is multi-objective.

Return type:

bool
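
For a genuine multi-objective scheduler, these three methods work together: metric_names() lists all objectives, metric_mode() returns one mode per objective, and is_multiobjective_scheduler() returns True. A hypothetical sketch (the metric names and modes are made up):

    from typing import List, Union

    from syne_tune.optimizer.scheduler import TrialScheduler

    class ParetoSchedulerSketch(TrialScheduler):
        # Hypothetical multi-objective scheduler trading off accuracy vs. latency
        def metric_names(self) -> List[str]:
            return ["accuracy", "latency"]

        def metric_mode(self) -> Union[str, List[str]]:
            return ["max", "min"]  # maximize accuracy, minimize latency

        def is_multiobjective_scheduler(self) -> bool:
            return True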