syne_tune package
- class syne_tune.StoppingCriterion(max_wallclock_time=None, max_num_evaluations=None, max_num_trials_started=None, max_num_trials_completed=None, max_cost=None, max_num_trials_finished=None, min_metric_value=None, max_metric_value=None)[source]
Bases: object
Stopping criterion that can be used in a Tuner, for instance:
Tuner(stop_criterion=StoppingCriterion(max_wallclock_time=3600), ...)
If several arguments are used, the combined criterion is true whenever one of the atomic criteria is true.
In principle, stop_criterion for Tuner can be any lambda function, but this class should be used with remote launching in order to ensure proper serialization (a usage sketch follows the attribute list below).
- Parameters:
  - max_wallclock_time (Optional[float]) – Stop once this wallclock time is reached
  - max_num_evaluations (Optional[int]) – Stop once more than this number of metric records have been reported
  - max_num_trials_started (Optional[int]) – Stop once more than this number of trials have been started
  - max_num_trials_completed (Optional[int]) – Stop once more than this number of trials have been completed. This does not include trials which were stopped or failed
  - max_cost (Optional[float]) – Stop once the total cost of evaluations exceeds this value
  - max_num_trials_finished (Optional[int]) – Stop once more than this number of trials have finished (i.e., completed, stopped, failed, or stopping)
  - min_metric_value (Optional[Dict[str, float]]) – Dictionary with thresholds for selected metrics. Stop once an evaluation reports a metric value below a threshold
  - max_metric_value (Optional[Dict[str, float]]) – Dictionary with thresholds for selected metrics. Stop once an evaluation reports a metric value above a threshold
- max_wallclock_time: float = None
- max_num_evaluations: int = None
- max_num_trials_started: int = None
- max_num_trials_completed: int = None
- max_cost: float = None
- max_num_trials_finished: int = None
- min_metric_value: Optional[Dict[str, float]] = None
- max_metric_value: Optional[Dict[str, float]] = None
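A minimal sketch of combining several atomic criteria (the metric name "accuracy" is an illustrative assumption, not part of this API); tuning stops as soon as any single criterion is met:

from syne_tune import StoppingCriterion

# Stop after one hour of wallclock time, or once 20 trials have been
# started, or as soon as any evaluation reports "accuracy" above 0.95
# ("accuracy" is a hypothetical metric name for illustration)
stop_criterion = StoppingCriterion(
    max_wallclock_time=3600,
    max_num_trials_started=20,
    max_metric_value={"accuracy": 0.95},
)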
- class syne_tune.Tuner(trial_backend, scheduler, stop_criterion, n_workers, sleep_time=5.0, results_update_interval=10.0, print_update_interval=30.0, max_failures=1, tuner_name=None, asynchronous_scheduling=True, wait_trial_completion_when_stopping=False, callbacks=None, metadata=None, suffix_tuner_name=True, save_tuner=True, start_jobs_without_delay=True, trial_backend_path=None)[source]
Bases: object
Controller of the tuning loop, which manages the interplay between scheduler and trial backend. The stopping criterion and the number of workers are also maintained here. A usage sketch follows the parameter list below.
- Parameters:
  - trial_backend (TrialBackend) – Backend for trial evaluations
  - scheduler (TrialScheduler) – Tuning algorithm for making decisions about which trials to start, stop, pause, or resume
  - stop_criterion (Callable[[TuningStatus], bool]) – Tuning stops when this predicate returns True. Called in each iteration with the current tuning status. It is recommended to use StoppingCriterion.
  - n_workers (int) – Number of workers used here. Note that the backend needs to support (at least) this number of workers running in parallel
  - sleep_time (float) – Time to sleep when all workers are busy. Defaults to DEFAULT_SLEEP_TIME
  - results_update_interval (float) – Frequency at which results are updated and stored (in seconds). Defaults to 10.
  - print_update_interval (float) – Frequency at which the result table is printed. Defaults to 30.
  - max_failures (int) – This many trial execution failures are allowed before the tuning loop is aborted. Defaults to 1
  - tuner_name (Optional[str]) – Name associated with the tuning experiment, defaults to the name of the entrypoint. Must consist of alphanumeric characters, possibly separated by '-'. A postfix with a date time-stamp is added to ensure uniqueness.
  - asynchronous_scheduling (bool) – Whether to use asynchronous scheduling when scheduling new trials. If True, trials are scheduled as soon as a worker is available. If False, the tuner waits until all trials are finished before scheduling a new batch of size n_workers. Defaults to True.
  - wait_trial_completion_when_stopping (bool) – How to deal with running trials when the stopping criterion is met. If True, the tuner waits until all trials are finished. If False, all trials are terminated. Defaults to False.
  - callbacks (Optional[List[TunerCallback]]) – Called at certain times in the tuning loop, for example when a result is seen. The default callback stores results every results_update_interval.
  - metadata (Optional[dict]) – Dictionary of user-metadata that will be persisted in {tuner_path}/{ST_METADATA_FILENAME}, in addition to metadata provided by the user. SMT_TUNER_CREATION_TIMESTAMP is always included, which records the time-stamp when the tuner started to run.
  - suffix_tuner_name (bool) – If True, a timestamp is appended to the provided tuner_name to ensure uniqueness; otherwise the name is left unchanged and is expected to be unique. Defaults to True.
  - save_tuner (bool) – If True, the Tuner object is serialized at the end of tuning, including its dependencies (e.g., scheduler). This allows all details of the experiment to be recovered. Defaults to True.
  - start_jobs_without_delay (bool) – Defaults to True. If this is True, the tuner starts new jobs depending on scheduler decisions communicated to the backend. For example, if a trial has just been stopped (by calling backend.stop_trial), the tuner may start a new one immediately, even if the SageMaker training job is still busy due to stopping delays. This can lead to faster experiment runtime, because the backend temporarily goes over its budget. If set to False, the tuner always asks the backend for the number of busy workers, which guarantees that we never go over the n_workers budget. This makes a difference for backends where stopping or pausing trials is not immediate (e.g., SageMakerBackend). Not going over budget means that n_workers can be set up to the available quota, without running the risk of an exception due to the quota being exceeded. If you get such exceptions, we recommend to use start_jobs_without_delay=False. Also, if the SageMaker warm pool feature is used, it is recommended to set start_jobs_without_delay=False, since otherwise more than n_workers warm pools will be started, because existing ones are busy stopping when they should be reassigned.
  - trial_backend_path (Optional[str]) – If this is given, the path of trial_backend (where logs and checkpoints of trials are stored) is set to this. Otherwise, it is set to self.tuner_path, so that per-trial information is written to the same path as tuning results. If the backend is LocalBackend and the experiment is run remotely, we recommend setting this, since otherwise checkpoints and logs are synced to S3 along with tuning results, which is costly and error-prone.
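As a rough sketch of how these pieces fit together (the training script train_height.py, the search space, and the metric names mean_loss and epoch are illustrative assumptions, not part of this API):

from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import ASHA

# Hypothetical search space and training script, for illustration only
config_space = {
    "steps": 100,
    "width": randint(1, 20),
    "height": randint(1, 20),
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point="train_height.py"),
    scheduler=ASHA(
        config_space,
        metric="mean_loss",
        mode="min",
        resource_attr="epoch",
        max_resource_attr="steps",
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=600),
    n_workers=4,
)
tuner.run()

With asynchronous_scheduling=True (the default), new trials are started as soon as one of the four workers becomes free.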
- best_config(metric=0)[source]
- Parameters:
  - metric (Union[str, int, None]) – Indicates which metric to use; can be the index or the name of the metric. Defaults to 0, the first metric defined in the scheduler
- Return type: Tuple[int, Dict[str, Any]]
- Returns: the best configuration found while tuning for the given metric and the associated trial-id
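For example (hypothetical usage after tuner.run() has completed; the metric name "mean_loss" is an illustrative assumption):

# Trial-id comes first, matching the Tuple[int, Dict[str, Any]] return type
trial_id, best_conf = tuner.best_config(metric="mean_loss")
print(f"Best trial: {trial_id}, configuration: {best_conf}")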
- class syne_tune.Reporter(add_time=True)[source]
Bases: object
Callback for reporting metric values from a training script back to Syne Tune. Example:

from syne_tune import Reporter

report = Reporter()
for epoch in range(1, epochs + 1):
    # ...
    report(epoch=epoch, accuracy=accuracy)
- Parameters:
  - add_time (bool) – If True (default), the time (in seconds) since creation of the Reporter object is reported automatically as ST_WORKER_TIME
- add_time: bool = True
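A fuller sketch of a training script that reports to Syne Tune (the script name, arguments, and the dummy objective are illustrative assumptions; a real script would train and evaluate a model):

# train_height.py -- hypothetical training script, for illustration only
import argparse
import time

from syne_tune import Reporter

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--steps", type=int, required=True)
    parser.add_argument("--width", type=float, required=True)
    parser.add_argument("--height", type=float, required=True)
    args, _ = parser.parse_known_args()

    report = Reporter()
    for step in range(args.steps):
        # Dummy objective standing in for a real training/evaluation step
        mean_loss = 1.0 / (0.1 + args.width * step / 100) + 0.1 * args.height
        # ST_WORKER_TIME is added automatically since add_time=True
        report(epoch=step + 1, mean_loss=mean_loss)
        time.sleep(0.1)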
Subpackages
- syne_tune.backend package
- syne_tune.blackbox_repository package
BlackboxOffline
deserialize()
load_blackbox()
blackbox_list()
add_surrogate()
BlackboxRepositoryBackend
UserBlackboxBackend
- Subpackages
- Submodules
- syne_tune.blackbox_repository.blackbox module
- syne_tune.blackbox_repository.blackbox_offline module
- syne_tune.blackbox_repository.blackbox_surrogate module
- syne_tune.blackbox_repository.blackbox_tabular module
- syne_tune.blackbox_repository.repository module
- syne_tune.blackbox_repository.serialize module
- syne_tune.blackbox_repository.simulated_tabular_backend module
- syne_tune.blackbox_repository.utils module
- syne_tune.callbacks package
- syne_tune.experiments package
ExperimentResult
ExperimentResult.name
ExperimentResult.results
ExperimentResult.metadata
ExperimentResult.tuner
ExperimentResult.path
ExperimentResult.creation_date()
ExperimentResult.plot_hypervolume()
ExperimentResult.plot()
ExperimentResult.plot_trials_over_time()
ExperimentResult.metric_mode()
ExperimentResult.metric_names()
ExperimentResult.entrypoint_name()
ExperimentResult.best_config()
load_experiment()
get_metadata()
list_experiments()
load_experiments_df()
hypervolume_indicator_column_generator()
- Submodules
- syne_tune.optimizer package
- Subpackages
- Submodules
- syne_tune.optimizer.baselines module
RandomSearch
GridSearch
BayesianOptimization
ASHA
MOBSTER
HyperTune
DyHPO
PASHA
BOHB
SyncHyperband
SyncBOHB
DEHB
SyncMOBSTER
BORE
ASHABORE
BoTorch
REA
create_gaussian_process_estimator()
MORandomScalarizationBayesOpt
NSGA2
MOREA
MOLinearScalarizationBayesOpt
ConstrainedBayesianOptimization
ZeroShotTransfer
ASHACTS
KDE
CQR
ASHACQR
EHVI
- syne_tune.optimizer.legacy_scheduler module
- syne_tune.optimizer.scheduler module
- syne_tune.utils package
add_checkpointing_to_argparse()
resume_from_checkpointed_model()
checkpoint_model_at_rung_level()
pytorch_load_save_functions()
parse_bool()
add_config_json_to_argparse()
load_config_json()
streamline_config_space()
- Submodules
Submodules
- syne_tune.config_space module
Domain
Sampler
BaseSampler
Uniform
LogUniform
Normal
Grid
Float
Integer
Categorical
Ordinal
OrdinalNearestNeighbor
FiniteRange
uniform()
loguniform()
randint()
lograndint()
choice()
ordinal()
logordinal()
finrange()
logfinrange()
is_log_space()
is_reverse_log_space()
is_uniform_space()
add_to_argparse()
cast_config_values()
postprocess_config()
remove_constant_and_cast()
non_constant_hyperparameter_keys()
config_space_size()
config_to_match_string()
to_dict()
from_dict()
config_space_to_json_dict()
config_space_from_json_dict()
restrict_domain()
Quantized
quniform()
reverseloguniform()
qloguniform()
qrandint()
qlograndint()
- syne_tune.constants module
SYNE_TUNE_ENV_FOLDER
SYNE_TUNE_DEFAULT_FOLDER
ST_WORKER_ITER
ST_WORKER_TIMESTAMP
ST_WORKER_TIME
ST_WORKER_COST
ST_CHECKPOINT_DIR
ST_CONFIG_JSON_FNAME_ARG
ST_REMOTE_UPLOAD_DIR_NAME
ST_RESULTS_DATAFRAME_FILENAME
ST_METADATA_FILENAME
ST_TUNER_DILL_FILENAME
ST_DATETIME_FORMAT
TUNER_DEFAULT_SLEEP_TIME
ST_METRIC_TAG
- syne_tune.num_gpu module
- syne_tune.report module
- syne_tune.results_callback module
- syne_tune.stopping_criterion module
- syne_tune.tuner module
- syne_tune.tuner_callback module
TunerCallback
TunerCallback.on_tuning_start()
TunerCallback.on_tuning_end()
TunerCallback.on_loop_start()
TunerCallback.on_loop_end()
TunerCallback.on_fetch_status_results()
TunerCallback.on_trial_complete()
TunerCallback.on_trial_result()
TunerCallback.on_tuning_sleep()
TunerCallback.on_start_trial()
TunerCallback.on_resume_trial()
- syne_tune.tuning_status module
MetricsStatistics
TuningStatus
TuningStatus.update()
TuningStatus.mark_running_job_as_stopped()
TuningStatus.num_trials_started
TuningStatus.num_trials_completed
TuningStatus.num_trials_failed
TuningStatus.num_trials_finished
TuningStatus.num_trials_running
TuningStatus.wallclock_time
TuningStatus.user_time
TuningStatus.cost
TuningStatus.get_dataframe()
print_best_metric_found()
- syne_tune.util module
RegularCallback
experiment_path()
name_from_base()
random_string()
repository_root_path()
script_checkpoint_example_path()
script_height_example_path()
catchtime()
is_increasing()
is_positive_integer()
is_integer()
dump_json_with_numpy()
dict_get()
recursive_merge()
find_first_of_type()
metric_name_mode()