syne_tune.experiments.launchers.hpo_main_sagemaker module

syne_tune.experiments.launchers.hpo_main_sagemaker.start_experiment_sagemaker_backend(configuration, methods, benchmark_definitions, extra_results=None, map_method_args=None, extra_tuning_job_metadata=None)[source]

Runs an experiment with the SageMaker backend.

map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on configuration and the method. This provides extra flexibility for passing method-specific arguments to selected methods. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline; a sketch is given after the parameter list.

Parameters:
  • configuration (ConfigDict) – ConfigDict with parameters of the experiment. Must contain all parameters from SAGEMAKER_BACKEND_EXTRA_PARAMETERS

  • methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors.

  • benchmark_definitions (Callable[..., Dict[str, RealBenchmarkDefinition]]) – Definitions of benchmarks; one is selected from command line arguments

  • extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe

  • map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above, optional

  • extra_tuning_job_metadata (Optional[Dict[str, Any]]) – Metadata added to the tuner; can be used to manage results
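
A minimal sketch of such a map_method_args callable (the method key "ASHA" and the keyword my_extra_argument are illustrative assumptions, not part of this module); the same kind of callable can also be passed to main() below:

    from typing import Any, Dict

    def my_map_method_args(
        configuration, method: str, method_kwargs: Dict[str, Any]
    ) -> Dict[str, Any]:
        # configuration: ConfigDict with the experiment parameters
        # method: key into the methods dictionary
        # method_kwargs: keyword arguments used to construct MethodArguments
        if method == "ASHA":  # assumed to be a key in methods
            # my_extra_argument is hypothetical, for illustration only
            method_kwargs = dict(method_kwargs, my_extra_argument=True)
        return method_kwargs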

syne_tune.experiments.launchers.hpo_main_sagemaker.main(methods, benchmark_definitions, extra_args=None, map_method_args=None, extra_results=None)[source]

Runs an experiment with the SageMaker backend.

Command line arguments must specify a single benchmark, method, and seed. For example, --method ASHA --num_seeds 5 --start_seed 4 starts the experiment with seed=4, and --method ASHA --num_seeds 1 starts it with seed=0. Here, ASHA must be a key in methods.

map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on the configuration returned by parse_args() and on the method. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline. It is called just before the method is created.

Parameters:
  • methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors

  • benchmark_definitions (Callable[..., Dict[str, RealBenchmarkDefinition]]) – Definitions of benchmarks; one is selected from command line arguments

  • extra_args (Optional[List[Dict[str, Any]]]) – Extra arguments for command line parser. Optional

  • map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above. Needed if extra_args is given

  • extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe
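
A typical launcher script defines the methods dictionary and calls main(). In the sketch below, the baselines import, the attribute names on MethodArguments (config_space, metric, mode, resource_attr, max_resource_attr), and the local benchmark_definitions module are assumptions for illustration, not guaranteed by this module:

    from syne_tune.experiments.launchers.hpo_main_sagemaker import main
    from syne_tune.optimizer.baselines import ASHA, RandomSearch

    # Hypothetical local module providing
    # benchmark_definitions(...) -> Dict[str, RealBenchmarkDefinition]
    from benchmark_definitions import benchmark_definitions

    methods = {
        # Each value maps MethodArguments to a TrialScheduler
        "RS": lambda args: RandomSearch(
            config_space=args.config_space,
            metric=args.metric,
            mode=args.mode,
        ),
        "ASHA": lambda args: ASHA(
            config_space=args.config_space,
            metric=args.metric,
            mode=args.mode,
            resource_attr=args.resource_attr,
            max_resource_attr=args.max_resource_attr,
        ),
    }

    if __name__ == "__main__":
        main(methods, benchmark_definitions)

Such a script would then be invoked as, for example, python launch_sagemaker.py --method ASHA --num_seeds 1 (the script name is hypothetical).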