syne_tune.utils package
- syne_tune.utils.add_checkpointing_to_argparse(parser)
To be called for the argument parser in the endpoint script. Arguments added here are optional. If checkpointing is not supported, they are simply not parsed.
- Parameters:
  - parser (ArgumentParser) – Parser to add extra arguments to
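A minimal sketch of an endpoint script using this helper; the --epochs hyperparameter is an illustrative assumption, not part of the API:

```python
from argparse import ArgumentParser

from syne_tune.utils import add_checkpointing_to_argparse

parser = ArgumentParser()
parser.add_argument("--epochs", type=int, required=True)  # hypothetical hyperparameter
add_checkpointing_to_argparse(parser)  # appends the optional checkpointing arguments
args = parser.parse_args()
```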
- syne_tune.utils.resume_from_checkpointed_model(config, load_model_fn)
Checks whether there is a checkpoint to resume from. If so, the checkpoint is loaded by calling load_model_fn. This function takes a local pathname (to which it appends a filename) and returns resume_from, the resource value (e.g., epoch) at which the checkpoint was written. If it fails to load the checkpoint, it may return 0, which skips resuming. If checkpointing is not supported in config, or no checkpoint is found, resume_from = 0 is returned.
- Parameters:
  - config (Dict[str, Any]) – Configuration the training script is called with
  - load_model_fn (Callable[[str], int]) – See above, must return resume_from. See pytorch_load_save_functions() for an example
- Return type:
  int
- Returns:
  resume_from (0 if no checkpoint has been loaded)
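A sketch of a hand-written load_model_fn, as an alternative to pytorch_load_save_functions() below. The checkpoint file name, state layout, and model are illustrative assumptions; args comes from the argparse sketch above:

```python
import os

import torch
from torch import nn

from syne_tune.utils import resume_from_checkpointed_model

model = nn.Linear(10, 1)  # hypothetical model

def load_model_fn(local_path: str) -> int:
    # Load model state from the checkpoint directory and return the
    # epoch it was written at; return 0 if there is nothing to resume.
    try:
        state = torch.load(os.path.join(local_path, "checkpoint.pt"))
        model.load_state_dict(state["model"])
        return state["epoch"]
    except Exception:
        return 0

config = vars(args)  # parsed arguments as a dict
resume_from = resume_from_checkpointed_model(config, load_model_fn)
```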
- syne_tune.utils.checkpoint_model_at_rung_level(config, save_model_fn, resource)
If checkpointing is supported, checks whether a checkpoint is to be written. This is the case if the checkpoint directory is set in config. A checkpoint is written by calling save_model_fn, passing the local pathname and resource.
Note: Why is resource passed here? In the future, we want to support writing checkpoints only for certain resource levels. This is useful if writing the checkpoint is expensive compared to the time needed to run one resource unit.
- Parameters:
  - config (Dict[str, Any]) – Configuration the training script is called with
  - save_model_fn (Callable[[str, int], Any]) – See above. See pytorch_load_save_functions() for an example
  - resource (int) – Current resource level (e.g., number of epochs done)
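Continuing the sketch above, a matching hand-written save_model_fn and the per-epoch checkpoint/report step; train_one_epoch and the loss metric are hypothetical stand-ins:

```python
from syne_tune import Reporter
from syne_tune.utils import checkpoint_model_at_rung_level

report = Reporter()

def save_model_fn(local_path: str, epoch: int) -> None:
    # Write model state together with the epoch it was written at,
    # mirroring what load_model_fn above expects to find.
    torch.save(
        {"model": model.state_dict(), "epoch": epoch},
        os.path.join(local_path, "checkpoint.pt"),
    )

def train_one_epoch() -> float:
    return 0.0  # hypothetical placeholder for one epoch of training

for epoch in range(resume_from + 1, config["epochs"] + 1):
    loss = train_one_epoch()
    checkpoint_model_at_rung_level(config, save_model_fn, epoch)
    report(epoch=epoch, loss=loss)  # report the metric to Syne Tune
```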
- syne_tune.utils.pytorch_load_save_functions(state_dict_objects, mutable_state=None, fname='checkpoint.json')
Provides default load_model_fn, save_model_fn functions for standard PyTorch models (arguments to resume_from_checkpointed_model(), checkpoint_model_at_rung_level()).
- Parameters:
  - state_dict_objects (Dict[str, Any]) – Dict of PyTorch objects implementing state_dict and load_state_dict
  - mutable_state (Optional[dict]) – Optional. Additional dict with elementary value types
  - fname (str) – Name of local file (path is taken from config)
- Returns:
  load_model_fn, save_model_fn
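A sketch of obtaining both callbacks from this helper instead of writing them by hand; the model, optimizer, and mutable_state contents are illustrative:

```python
import torch
from torch import nn

from syne_tune.utils import pytorch_load_save_functions

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Both objects implement state_dict / load_state_dict; mutable_state
# carries additional plain values to checkpoint alongside them.
load_model_fn, save_model_fn = pytorch_load_save_functions(
    {"model": model, "optimizer": optimizer},
    mutable_state={"lr": 0.1},
)
```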
- syne_tune.utils.add_config_json_to_argparse(parser)
To be called for the argument parser in the endpoint script.
- Parameters:
  - parser (ArgumentParser) – Parser to add extra arguments to
- syne_tune.utils.load_config_json(args)
Loads configuration from JSON file and returns the union with args.
- Parameters:
  - args (Dict[str, Any]) – Arguments returned by ArgumentParser, as dictionary
- Return type:
  Dict[str, Any]
- Returns:
  Combined configuration dictionary
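A minimal sketch of using the two JSON-configuration helpers together in an endpoint script:

```python
from argparse import ArgumentParser

from syne_tune.utils import add_config_json_to_argparse, load_config_json

parser = ArgumentParser()
add_config_json_to_argparse(parser)  # adds the argument pointing to the JSON file
args, _ = parser.parse_known_args()
config = load_config_json(vars(args))  # union of JSON contents and parsed arguments
```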
- syne_tune.utils.streamline_config_space(config_space, exclude_names=None, verbose=False)
Given a configuration space config_space, this function returns a new configuration space in which some domains may have been replaced by approximately equivalent ones that are better suited for Bayesian optimization. Entries whose keys are in exclude_names are not replaced.
See convert_domain() for the replacement rules that may be applied.
- Parameters:
  - config_space (Dict[str, Any]) – Original configuration space
  - exclude_names (Optional[List[str]]) – Do not convert entries with these keys
  - verbose (bool) – Log output for replaced domains? Defaults to False
- Return type:
  Dict[str, Any]
- Returns:
  Streamlined configuration space
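A sketch of streamlining a configuration space before handing it to a scheduler. The hyperparameter names and ranges are illustrative; which domains are converted depends on the rules in convert_domain() (for instance, a uniform domain spanning several orders of magnitude may become a loguniform one):

```python
from syne_tune.config_space import randint, uniform
from syne_tune.utils import streamline_config_space

config_space = {
    "learning_rate": uniform(1e-6, 1.0),  # wide range: candidate for a log-scale domain
    "batch_size": randint(8, 256),
    "epochs": 100,  # constant, additionally excluded below
}
config_space = streamline_config_space(
    config_space, exclude_names=["epochs"], verbose=True
)
```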