syne_tune.experiments.launchers.hpo_main_local module

syne_tune.experiments.launchers.hpo_main_local.get_benchmark(configuration, benchmark_definitions, **benchmark_kwargs)[source]

If configuration.benchmark is None and benchmark_definitions maps to a single benchmark, configuration.benchmark is set to its key.

Return type:

RealBenchmarkDefinition
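For orientation, the benchmark_definitions argument used throughout this module is a callable returning a dictionary of RealBenchmarkDefinition objects, keyed by benchmark name. The sketch below shows a single-benchmark case, where get_benchmark() can infer the key if configuration.benchmark is not set. The import path of RealBenchmarkDefinition, its field names, and the training script resnet_cifar10.py are assumptions made for illustration; check them against the installed Syne Tune version:

    from pathlib import Path
    from typing import Any, Dict

    from syne_tune.config_space import lograndint, loguniform, uniform
    # Import path is an assumption; it may differ between Syne Tune versions
    from syne_tune.experiments.benchmark_definitions import RealBenchmarkDefinition


    def benchmark_definitions(**kwargs) -> Dict[str, RealBenchmarkDefinition]:
        # Search space of the (hypothetical) training script resnet_cifar10.py
        config_space: Dict[str, Any] = {
            "epochs": 27,
            "lr": loguniform(1e-4, 1e-1),
            "batch_size": lograndint(8, 256),
            "momentum": uniform(0.1, 0.99),
        }
        # Single entry: get_benchmark() infers the benchmark name "resnet_cifar10"
        # if it is not given on the command line. Field names below are assumptions.
        return {
            "resnet_cifar10": RealBenchmarkDefinition(
                script=Path(__file__).parent / "resnet_cifar10.py",
                config_space=config_space,
                metric="accuracy",
                mode="max",
                max_resource_attr="epochs",
                resource_attr="epoch",
                max_wallclock_time=3 * 3600,
                n_workers=4,
                instance_type="ml.g4dn.xlarge",
                framework="PyTorch",
                **kwargs,  # extra benchmark_kwargs forwarded by the caller
            )
        }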

syne_tune.experiments.launchers.hpo_main_local.create_objects_for_tuner(configuration, methods, method, benchmark, master_random_seed, seed, verbose, extra_tuning_job_metadata=None, map_method_args=None, extra_results=None, num_gpus_per_trial=1)[source]
Return type:

Dict[str, Any]

syne_tune.experiments.launchers.hpo_main_local.start_experiment_local_backend(configuration, methods, benchmark_definitions, extra_results=None, map_method_args=None, extra_tuning_job_metadata=None)[source]

Runs a sequence of experiments with the local backend sequentially. The loop runs over methods selected from methods and over repetitions, both controlled by command line arguments.

map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on configuration and the method. This provides extra flexibility to pass method-specific arguments. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline (see the sketch after the parameter list below).

Note

When this is launched remotely as the entry point of a SageMaker training job (command line --launched_remotely 1), the backend is configured to write logs and checkpoints to a directory which is not synced to S3. This differs from the tuner path, which is “/opt/ml/checkpoints”, so that tuning results are synced to S3. Syncing checkpoints to S3 is not recommended, since it is slow and can lead to failures when several worker processes write to the same synced directory.

Parameters:
  • configuration (ConfigDict) – ConfigDict with parameters of the experiment. Must contain all parameters from LOCAL_BACKEND_EXTRA_PARAMETERS

  • methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors.

  • benchmark_definitions (Callable[..., Dict[str, RealBenchmarkDefinition]]) – Definitions of benchmarks; one is selected from command line arguments

  • extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe

  • map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above, optional

  • extra_tuning_job_metadata (Optional[Dict[str, Any]]) – Metadata added to the tuner, can be used to manage results
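A minimal sketch of a map_method_args hook for this function (and for main() below). The baseline name "ASHA", the command line parameter num_brackets, and the scheduler_kwargs entry of method_kwargs are assumptions made purely for illustration; adapt them to your methods dictionary and extra command line arguments:

    from typing import Any, Dict


    def map_method_args(
        configuration, method: str, method_kwargs: Dict[str, Any]
    ) -> Dict[str, Any]:
        # Called just before the method is constructed; must return the
        # (possibly modified) keyword arguments used to build MethodArguments.
        if method == "ASHA":  # hypothetical baseline name
            # "num_brackets" is assumed to be an extra command line argument,
            # "scheduler_kwargs" is assumed to be a valid MethodArguments field
            scheduler_kwargs = dict(
                method_kwargs.get("scheduler_kwargs") or dict(),
                brackets=configuration.num_brackets,
            )
            method_kwargs = dict(method_kwargs, scheduler_kwargs=scheduler_kwargs)
        return method_kwargs

This hook is then passed as map_method_args=map_method_args to start_experiment_local_backend() or main().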

syne_tune.experiments.launchers.hpo_main_local.main(methods, benchmark_definitions, extra_args=None, map_method_args=None, extra_results=None)[source]

Runs a sequence of experiments with the local backend sequentially. The loop runs over methods selected from methods and over repetitions, both controlled by command line arguments.

map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on the configuration returned by parse_args() and on the method. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline. It is called just before the method is created.

Parameters:
  • methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors

  • benchmark_definitions (Callable[..., Dict[str, RealBenchmarkDefinition]]) – Definitions of benchmarks; one is selected from command line arguments

  • extra_args (Optional[List[Dict[str, Any]]]) – Extra arguments for the command line parser, optional

  • map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above, optional

  • extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe
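Putting this together, a typical launcher script built on main() is short. The modules baselines (providing the methods dictionary) and benchmark_definitions (as sketched above) are hypothetical user code; only the main import below belongs to this module, and the command line flags in the comment may differ between Syne Tune versions:

    # hpo_main.py -- hypothetical launcher script
    from baselines import methods  # user module: Dict[str, Callable[[MethodArguments], TrialScheduler]]
    from benchmark_definitions import benchmark_definitions  # user module, see sketch above

    from syne_tune.experiments.launchers.hpo_main_local import main

    if __name__ == "__main__":
        # Benchmark, method, seeds, etc. are selected via command line arguments,
        # for example:
        #   python hpo_main.py --experiment_tag my-exp --benchmark resnet_cifar10 \
        #       --method ASHA --num_seeds 5
        main(methods, benchmark_definitions)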