syne_tune.utils package

syne_tune.utils.add_checkpointing_to_argparse(parser)[source]

To be called for the argument parser in the endpoint script. Arguments added here are optional. If checkpointing is not supported, they are simply not parsed.

Parameters:

parser (ArgumentParser) – Parser to add extra arguments to
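
A minimal sketch of how this might be used in an endpoint script; the --epochs argument is purely illustrative, not part of Syne Tune:

    from argparse import ArgumentParser

    from syne_tune.utils import add_checkpointing_to_argparse

    parser = ArgumentParser()
    parser.add_argument("--epochs", type=int, required=True)
    # Adds the optional checkpointing arguments used by Syne Tune
    add_checkpointing_to_argparse(parser)
    args, _ = parser.parse_known_args()
    config = vars(args)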

syne_tune.utils.resume_from_checkpointed_model(config, load_model_fn)[source]

Checks whether there is a checkpoint to resume from. If so, the checkpoint is loaded by calling load_model_fn, which is passed a local pathname (to which it appends a filename). load_model_fn returns resume_from, the resource value (e.g., epoch) at which the checkpoint was written; it may return 0 if loading the checkpoint fails, in which case resuming is skipped. This resume_from value is returned.

If checkpointing is not supported in config, or no checkpoint is found, resume_from = 0 is returned.

Parameters:
  • config (Dict[str, Any]) – Configuration the training script is called with

  • load_model_fn (Callable[[str], int]) – See above; must return resume_from. See pytorch_load_save_functions() for an example

Return type:

int

Returns:

resume_from (0 if no checkpoint has been loaded)
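
A sketch of how a training script might resume; load_model and its state.json file are hypothetical (see pytorch_load_save_functions() for ready-made functions), and config is assumed to be the dictionary of parsed arguments:

    import json
    import os

    from syne_tune.utils import resume_from_checkpointed_model

    def load_model(checkpoint_path: str) -> int:
        # Hypothetical loader: restore training state from a file in
        # checkpoint_path and return the epoch it was written at
        try:
            with open(os.path.join(checkpoint_path, "state.json")) as f:
                state = json.load(f)
            return state["epoch"]
        except FileNotFoundError:
            return 0  # no checkpoint found: start from scratch

    resume_from = resume_from_checkpointed_model(config, load_model)
    for epoch in range(resume_from + 1, config["epochs"] + 1):
        ...  # train for one epoch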

syne_tune.utils.checkpoint_model_at_rung_level(config, save_model_fn, resource)[source]

Checks whether a checkpoint is to be written, which is the case if checkpointing is supported and the checkpoint directory is set in config. If so, the checkpoint is written by calling save_model_fn with the local pathname and resource.

Note: Why is resource passed here? In the future, we want to support writing checkpoints only for certain resource levels. This is useful if writing the checkpoint is expensive compared to the time needed to run one resource unit.

Parameters:
  • config (Dict[str, Any]) – Configuration the training script is called with

  • save_model_fn (Callable[[str, int], Any]) – See above. See pytorch_load_save_functions() for an example

  • resource (int) – Current resource level (e.g., number of epochs done)
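
A sketch of the matching write side; save_model and its state.json file are hypothetical (again, see pytorch_load_save_functions() for ready-made functions), and config is assumed to be the dictionary of parsed arguments:

    import json
    import os

    from syne_tune.utils import checkpoint_model_at_rung_level

    def save_model(checkpoint_path: str, epoch: int) -> None:
        # Hypothetical saver: write training state to a file in
        # checkpoint_path, recording the epoch it was written at
        os.makedirs(checkpoint_path, exist_ok=True)
        with open(os.path.join(checkpoint_path, "state.json"), "w") as f:
            json.dump({"epoch": epoch}, f)

    for epoch in range(1, config["epochs"] + 1):
        ...  # train for one epoch
        checkpoint_model_at_rung_level(config, save_model, epoch)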

syne_tune.utils.pytorch_load_save_functions(state_dict_objects, mutable_state=None, fname='checkpoint.json')[source]

Provides default load_model_fn, save_model_fn functions for standard PyTorch models (arguments to resume_from_checkpointed_model(), checkpoint_model_at_rung_level()).

Parameters:
  • state_dict_objects (Dict[str, Any]) – Dict of PyTorch objects implementing state_dict and load_state_dict

  • mutable_state (Optional[dict]) – Optional. Additional dict with elementary value types

  • fname (str) – Name of local file (path is taken from config)

Returns:

load_model_fn, save_model_fn
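
A sketch putting the pieces together for PyTorch; model and optimizer are assumed to be a torch.nn.Module and a torch.optim.Optimizer (both implement state_dict / load_state_dict), and config is the dictionary of parsed arguments:

    from syne_tune.utils import (
        checkpoint_model_at_rung_level,
        pytorch_load_save_functions,
        resume_from_checkpointed_model,
    )

    # model and optimizer are assumed to be defined above
    load_model_fn, save_model_fn = pytorch_load_save_functions(
        {"model": model, "optimizer": optimizer}
    )
    resume_from = resume_from_checkpointed_model(config, load_model_fn)
    for epoch in range(resume_from + 1, config["epochs"] + 1):
        ...  # train for one epoch
        checkpoint_model_at_rung_level(config, save_model_fn, epoch)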

syne_tune.utils.parse_bool(val)[source]

Return type:

bool

syne_tune.utils.add_config_json_to_argparse(parser)[source]

To be called for the argument parser in the endpoint script.

Parameters:

parser (ArgumentParser) – Parser to add extra arguments to

syne_tune.utils.load_config_json(args)[source]

Loads the configuration from a JSON file and returns its union with args.

Parameters:

args (Dict[str, Any]) – Arguments returned by ArgumentParser, as dictionary

Return type:

Dict[str, Any]

Returns:

Combined configuration dictionary
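
A sketch combining add_config_json_to_argparse() and load_config_json() in an endpoint script:

    from argparse import ArgumentParser

    from syne_tune.utils import add_config_json_to_argparse, load_config_json

    parser = ArgumentParser()
    add_config_json_to_argparse(parser)
    args, _ = parser.parse_known_args()
    # Union of command line arguments and entries from the JSON file
    config = load_config_json(vars(args))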

syne_tune.utils.streamline_config_space(config_space, exclude_names=None, verbose=False)[source]

Given a configuration space config_space, this function returns a new configuration space in which some domains may have been replaced by approximately equivalent ones that are better suited for Bayesian optimization. Entries whose keys are in exclude_names are not replaced.

See convert_domain() for what replacement rules may be applied.

Parameters:
  • config_space (Dict[str, Any]) – Original configuration space

  • exclude_names (Optional[List[str]]) – Do not convert entries with these keys

  • verbose (bool) – Log output for replaced domains? Defaults to False

Return type:

Dict[str, Any]

Returns:

Streamlined configuration space
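
As a sketch, the rules in convert_domain() may, for example, replace a wide uniform range by a log-scaled domain, or an ordered numerical choice by a finite-range domain; the comments below are indicative, not guaranteed:

    from syne_tune.config_space import choice, uniform
    from syne_tune.utils import streamline_config_space

    config_space = {
        "lr": uniform(1e-6, 1.0),               # may be replaced by a log-scaled domain
        "batch_size": choice([8, 16, 32, 64]),  # may be replaced by a finite-range domain
        "epochs": 27,                           # constants are passed through unchanged
    }
    new_space = streamline_config_space(config_space, verbose=True)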

Submodules