syne_tune.optimizer.schedulers.multiobjective package

class syne_tune.optimizer.schedulers.multiobjective.MOASHA(config_space, metrics, mode=None, time_attr='training_iteration', multiobjective_priority=None, max_t=100, grace_period=1, reduction_factor=3, brackets=1)[source]

Bases: TrialScheduler

Implements multi-objective asynchronous successive halving (MOASHA) with different multi-objective sort options. References:

A multi-objective perspective on jointly tuning hardware and hyperparameters
David Salinas, Valerio Perrone, Cédric Archambeau and Olivier Cruchant
NAS workshop, ICLR 2021.

and

Multi-objective multi-fidelity hyperparameter optimization with application to fairness
Robin Schmucker, Michele Donini, Valerio Perrone, Cédric Archambeau

Parameters:
  • config_space (Dict[str, Any]) – Configuration space

  • metrics (List[str]) – List of metric names MOASHA optimizes over

  • mode (Union[str, List[str], None]) – One of {"min", "max"} or a list of these values (same size as metrics). Determines whether objectives are minimized or maximized. Defaults to “min”

  • time_attr (str) – A training result attr to use for comparing time. Note that you can pass in something non-temporal such as training_iteration as a measure of progress, the only requirement is that the attribute should increase monotonically. Defaults to “training_iteration”

  • multiobjective_priority (Optional[MOPriority]) – The multi-objective priority used to sort multi-objective candidates. Several choices are supported, such as non-dominated sort or linear scalarization; the default is non-dominated sort.

  • max_t (int) – Maximum time units per trial. Trials are stopped after max_t time units (determined by time_attr) have passed. Defaults to 100

  • grace_period (int) – Only stop trials that are at least this old. The units are those of the attribute named by time_attr. Defaults to 1

  • reduction_factor (float) – Used to set halving rate and amount. This is simply a unit-less scalar. Defaults to 3

  • brackets (int) – Number of brackets. Each bracket has a different grace_period and number of rung levels. Defaults to 1
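
A minimal construction sketch; the configuration space, metric names, and time_attr value are hypothetical and must match what the training script reports:

    from syne_tune.config_space import loguniform, randint
    from syne_tune.optimizer.schedulers.multiobjective import MOASHA

    # Hypothetical search space and metric names
    config_space = {
        "learning_rate": loguniform(1e-6, 1e-1),
        "num_layers": randint(1, 8),
    }
    scheduler = MOASHA(
        config_space,
        metrics=["validation_error", "cost_per_epoch"],
        mode=["min", "min"],  # minimize both objectives
        time_attr="epoch",    # reported by the trial; must increase monotonically
        max_t=27,             # stop trials after 27 "epoch" units
        grace_period=1,
        reduction_factor=3,
    )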

metric_names()[source]
Return type:

List[str]

Returns:

List of metric names. The first one is the target metric optimized over, unless the scheduler is a genuine multi-objective scheduler (for example, one sampling the Pareto front)

metric_mode()[source]
Return type:

str

Returns:

“min” if the target metric is minimized, otherwise “max”. “min” is the default. For a genuine multi-objective scheduler, a list of modes is returned

on_trial_add(trial)[source]

Called when a new trial is added to the trial runner.

Additions are normally triggered by suggest.

Parameters:

trial (Trial) – Trial to be added

on_trial_result(trial, result)[source]

Called on each intermediate result reported by a trial.

At this point, the trial scheduler can make a decision by returning one of SchedulerDecision.CONTINUE, SchedulerDecision.PAUSE, or SchedulerDecision.STOP. This will only be called when the trial is currently running.

Parameters:
  • trial (Trial) – Trial for which results are reported

  • result (Dict[str, Any]) – Result dictionary

Return type:

str

Returns:

Decision what to do with the trial

on_trial_complete(trial, result)[source]

Notification for the completion of trial.

Note that on_trial_result() is called with the same result before. However, if the scheduler only uses one final report from each trial, it may ignore on_trial_result() and just use result here.

Parameters:
  • trial (Trial) – Trial which is completing

  • result (Dict[str, Any]) – Result dictionary

on_trial_remove(trial)[source]

Called to remove trial.

This is called when the trial is in PAUSED or PENDING state. Otherwise, call on_trial_complete().

Parameters:

trial (Trial) – Trial to be removed

is_multiobjective_scheduler()[source]

Return True if a scheduler is multi-objective.

Return type:

bool

class syne_tune.optimizer.schedulers.multiobjective.MultiObjectiveRegularizedEvolution(config_space, metric, mode, points_to_evaluate=None, population_size=100, sample_size=10, multiobjective_priority=None, **kwargs)[source]

Bases: RegularizedEvolution

Adapts the regularized evolution algorithm of Real et al. to the multi-objective setting. Elements in the population are scored via a multi-objective priority, which is set to non-dominated sort by default. Parents are sampled from the population based on this score.

Additional arguments on top of parent class syne_tune.optimizer.schedulers.searchers.StochasticSearcher:

Parameters:
  • mode (Union[List[str], str]) – Mode to use for the given metric; can be “min” or “max”. Defaults to “min”

  • population_size (int) – Size of the population, defaults to 100

  • sample_size (int) – Size of the candidate set to obtain a parent for the mutation, defaults to 10
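
A construction sketch (metric names are hypothetical); as with other searchers, this object is then plugged into a scheduler rather than used on its own:

    from syne_tune.optimizer.schedulers.multiobjective import (
        MultiObjectiveRegularizedEvolution,
    )

    searcher = MultiObjectiveRegularizedEvolution(
        config_space,  # as defined for the tuning problem
        metric=["validation_error", "cost_per_epoch"],
        mode="min",          # applied to every metric
        population_size=100,
        sample_size=10,
    )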

class syne_tune.optimizer.schedulers.multiobjective.NSGA2Searcher(config_space, metric, mode='min', points_to_evaluate=None, population_size=20, **kwargs)[source]

Bases: StochasticSearcher

This is a wrapper around the NSGA-II [1] implementation of pymoo [2].

[1] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan.
A fast and elitist multiobjective genetic algorithm: NSGA-II.
IEEE Transactions on Evolutionary Computation, 6(2):182–197, April 2002.
[2] J. Blank and K. Deb.
pymoo: Multi-Objective Optimization in Python.
IEEE Access, 2020.
Parameters:
  • config_space (Dict[str, Any]) – Configuration space

  • metric (List[str]) – Name of metric passed to update(). Can be obtained from the scheduler in configure_scheduler(). In the case of multi-objective optimization, metric is a list of strings specifying all objectives to be optimized.

  • points_to_evaluate (Optional[List[dict]]) – List of configurations to be evaluated initially (in that order). Each config in the list can be partially specified, or even be an empty dict. For each hyperparameter not specified, the default value is determined using a midpoint heuristic. If None (default), this is mapped to [dict()], a single default config determined by the midpoint heuristic. If [] (empty list), no initial configurations are specified.

  • mode (Union[List[str], str]) – Should metric be minimized (“min”, default) or maximized (“max”). In the case of multi-objective optimization, mode can be a list defining for each metric if it is minimized or maximized

  • population_size (int) – Size of the population
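
A construction sketch (metric names are hypothetical):

    from syne_tune.optimizer.schedulers.multiobjective import NSGA2Searcher

    searcher = NSGA2Searcher(
        config_space,  # as defined for the tuning problem
        metric=["validation_error", "cost_per_epoch"],
        mode=["min", "min"],  # one mode per metric
        population_size=20,
    )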

get_config(**kwargs)[source]

Suggest a new configuration.

Note: Query _next_initial_config() for initial configs to return first.

Parameters:

kwargs – Extra information may be passed from scheduler to searcher

Return type:

Optional[Dict[str, Any]]

Returns:

New configuration. The searcher may return None if a new configuration cannot be suggested. In this case, the tuning will stop. This happens if searchers never suggest the same config more than once, and all configs in the (finite) search space are exhausted.

class syne_tune.optimizer.schedulers.multiobjective.LinearScalarizedScheduler(config_space, metric, mode='min', scalarization_weights=None, base_scheduler_factory=None, **base_scheduler_kwargs)[source]

Bases: TrialScheduler

Scheduler with linear scalarization of multiple objectives

This method optimizes a single objective equal to a linear scalarization of the given objectives. The scalarized single objective is named: "scalarized_<metric1>_<metric2>_..._<metricN>".

Parameters:
  • base_scheduler_factory (Optional[Callable[[Any], TrialScheduler]]) – Factory method for the single-objective scheduler used on the scalarized objective. It will be initialized inside this scheduler. Defaults to FIFOScheduler.

  • config_space (Dict[str, Any]) – Configuration space for evaluation function

  • metric (List[str]) – Names of metrics to optimize

  • mode (Union[List[str], str]) – Modes of metrics to optimize (“min” or “max”). All modes must be identical.

  • scalarization_weights (Union[ndarray, List[float], None]) – Weights used to scalarize objectives. Defaults to an array of 1s

  • base_scheduler_kwargs – Additional arguments to base_scheduler_factory beyond config_space, metric, mode
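
A construction sketch (metric names and weights are hypothetical). With the default base_scheduler_factory, a FIFOScheduler is created internally on the scalarized objective, which here would be named "scalarized_validation_error_cost_per_epoch":

    from syne_tune.optimizer.schedulers.multiobjective import (
        LinearScalarizedScheduler,
    )

    scheduler = LinearScalarizedScheduler(
        config_space,  # as defined for the tuning problem
        metric=["validation_error", "cost_per_epoch"],
        mode="min",                        # must be the same for all metrics
        scalarization_weights=[1.0, 0.5],  # one weight per metric
    )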

scalarization_weights: ndarray
single_objective_metric: str
base_scheduler: TrialScheduler
on_trial_add(trial)[source]

Called when a new trial is added to the trial runner. See the docstring of the chosen base_scheduler for details

on_trial_error(trial)[source]

Called when a trial has failed. See the docstring of the chosen base_scheduler for details

on_trial_result(trial, result)[source]

Called on each intermediate result reported by a trial. See the docstring of the chosen base_scheduler for details

Return type:

str

on_trial_complete(trial, result)[source]

Notification for the completion of trial. See the docstring of the chosen base_scheduler for details

on_trial_remove(trial)[source]

Called to remove trial. See the docstring of the chosen base_scheduler for details

trials_checkpoints_can_be_removed()[source]

See the docstring of the chosen base_scheduler for details

Return type:

List[int]

Returns:

IDs of paused trials for which checkpoints can be removed

metric_names()[source]
Return type:

List[str]

Returns:

List of metric names.

metric_mode()[source]
Return type:

Union[str, List[str]]

Returns:

“min” if target metric is minimized, otherwise “max”.

metadata()[source]
Return type:

Dict[str, Any]

Returns:

Metadata of the scheduler

is_multiobjective_scheduler()[source]

Return True if a scheduler is multi-objective.

Return type:

bool

class syne_tune.optimizer.schedulers.multiobjective.MultiObjectiveMultiSurrogateSearcher(config_space, metric, estimators, mode='min', points_to_evaluate=None, scoring_class=None, num_initial_candidates=250, num_initial_random_choices=3, allow_duplicates=False, restrict_configurations=None, clone_from_state=False, **kwargs)[source]

Bases: BayesianOptimizationSearcher

Multi-objective multi-surrogate searcher for the FIFO scheduler

This searcher must be used with FIFOScheduler. It provides Bayesian optimization based on surrogate models, one per objective, e.g. built from scikit-learn estimators.

Additional arguments on top of parent class StochasticSearcher:

Parameters:
  • estimators – Estimators to be used as surrogate models, one per metric (a dictionary keyed by metric name)

  • scoring_class (Optional[Callable[[Any], ScoringFunction]]) – The scoring function (or acquisition function) class and any extra parameters used to instantiate it. If None, expected improvement (EI) is used. Note that the acquisition function is not locally optimized with this searcher.

  • num_initial_candidates (int) – Number of candidates sampled for scoring with acquisition function.

  • num_initial_random_choices (int) – Number of randomly chosen candidates before surrogate model is used.

  • allow_duplicates (bool) – If True, allow for the same candidate to be selected more than once.

  • restrict_configurations (Optional[List[Dict[str, Any]]]) – If given, the searcher only suggests configurations from this list. If allow_duplicates == False, entries are popped off this list once suggested.
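
A construction sketch. make_estimator is a hypothetical stand-in for however the surrogate estimators are built in your setup (it is not a Syne Tune function), and the metric names are likewise assumptions:

    from syne_tune.optimizer.schedulers.multiobjective import (
        MultiObjectiveMultiSurrogateSearcher,
    )

    # Hypothetical helper: returns a fresh surrogate estimator instance
    def make_estimator():
        ...

    searcher = MultiObjectiveMultiSurrogateSearcher(
        config_space,  # as defined for the tuning problem
        metric=["validation_error", "cost_per_epoch"],
        estimators={
            "validation_error": make_estimator(),
            "cost_per_epoch": make_estimator(),
        },
        mode="min",
    )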

clone_from_state(state)[source]

Together with get_state(), this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state(), this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.

Parameters:

state – See above

Returns:

New searcher object

class syne_tune.optimizer.schedulers.multiobjective.MultiObjectiveLCBRandomLinearScalarization(predictor, active_metric=None, weights_sampler=None, kappa=0.5, normalize_acquisition=True, random_seed=None)[source]

Bases: ScoringFunction

Note: This is the multi-objective random scalarization scoring function based on the work of Paria et al. [1]. This scoring function uses the lower confidence bound (LCB) as the acquisition for the scalarized objective, \(h(\mu, \sigma) = \mu - \kappa \cdot \sigma\).

[1] Paria, Biswajit, Kirthevasan Kandasamy and Barnabás Póczos.
A Flexible Framework for Multi-Objective Bayesian Optimization using Random Scalarizations.
Conference on Uncertainty in Artificial Intelligence (2018).
Parameters:
  • predictor (Dict[str, Predictor]) – Surrogate predictor for statistics of predictive distribution

  • weights_sampler (Optional[Callable[[], Dict[str, float]]]) – Callable that generates weights for each objective. Each call returns a dictionary mapping metric name to scalarization weight, of the form {<name of metric 1>: <weight for metric 1>, <name of metric 2>: <weight for metric 2>, ...}. See the sketch after this list.

  • kappa (float) – Hyperparameter used for the LCB portion of the scoring

  • normalize_acquisition (bool) – If True, use rank-normalization on the acquisition function results before weighting.

  • random_seed (Optional[int]) – Random seed for the default weights_sampler, used if weights_sampler is not provided.
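
A sketch of a custom weights_sampler for two hypothetical metric names; any callable that returns a metric-to-weight dictionary on each call will do:

    import numpy as np

    rng = np.random.RandomState(0)

    def weights_sampler():
        # Draw fresh scalarization weights on every call; keys must match
        # the metric names used by the predictor dictionary
        w = rng.uniform(low=0.0, high=1.0, size=2)
        return {"validation_error": w[0], "cost_per_epoch": w[1]}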

score(candidates, predictor=None)[source]
Parameters:
  • candidates (Iterable[Dict[str, Union[int, float, str]]]) – Configurations for which scores are to be computed

  • predictor (Optional[Dict[str, Predictor]]) – Overrides default predictor

Return type:

List[float]

Returns:

List of score values, of the same length as candidates
