syne_tune.experiments.launchers.hpo_main_simulator module

syne_tune.experiments.launchers.hpo_main_simulator.is_dict_of_dict(benchmark_definitions)[source]

Returns True if benchmark_definitions is a nested dictionary (dict of dict), and False if it maps benchmark names directly to benchmark definitions.

Return type:

bool

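A minimal sketch of the two layouts this check distinguishes. The nas201_benchmark helper and the grouping of the nested layout by experiment variant are assumptions taken from the Syne Tune benchmarking examples, not part of this module:

    from syne_tune.experiments.benchmark_definitions import nas201_benchmark
    from syne_tune.experiments.launchers.hpo_main_simulator import is_dict_of_dict

    # Flat layout: benchmark name -> SurrogateBenchmarkDefinition
    flat = {"nas201-cifar100": nas201_benchmark("cifar100")}

    # Nested layout: outer key (e.g., experiment variant) -> benchmark name -> definition
    nested = {"variant-a": dict(flat), "variant-b": dict(flat)}

    assert not is_dict_of_dict(flat)
    assert is_dict_of_dict(nested)
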
syne_tune.experiments.launchers.hpo_main_simulator.get_transfer_learning_evaluations(blackbox_name, test_task, datasets, n_evals=None)[source]
Parameters:
  • blackbox_name (str) – name of blackbox

  • test_task (str) – task on which performance is tested; it is excluded from the transfer-learning evaluations

  • datasets (Optional[List[str]]) – subset of datasets to consider; only evaluations from these datasets are provided to transfer-learning methods. If None, all datasets are used

  • n_evals (Optional[int]) – maximum number of evaluations to be returned

Return type:

Dict[str, Any]
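
A usage sketch; the blackbox and dataset names are illustrative, and the call assumes the corresponding blackbox from the blackbox repository is accessible:

    from syne_tune.experiments.launchers.hpo_main_simulator import (
        get_transfer_learning_evaluations,
    )

    # Evaluations from the test task ("cifar100") are excluded; the remaining
    # datasets provide the transfer-learning data, capped at n_evals entries
    transfer_learning_evaluations = get_transfer_learning_evaluations(
        blackbox_name="nasbench201",
        test_task="cifar100",
        datasets=["cifar10", "ImageNet16-120"],
        n_evals=1000,
    )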

syne_tune.experiments.launchers.hpo_main_simulator.start_experiment_simulated_backend(configuration, methods, benchmark_definitions, extra_results=None, map_method_args=None, extra_tuning_job_metadata=None, use_transfer_learning=False)[source]

Runs a sequence of experiments with the simulator backend, one after the other. The loop runs over methods selected from methods, over repetitions, and over benchmarks selected from benchmark_definitions.

map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on configuration and the method. This provides extra flexibility to pass specific arguments to chosen methods (see the sketch below the parameter list). Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline.

Parameters:
  • configuration (ConfigDict) – ConfigDict with parameters of the experiment. Must contain all parameters from LOCAL_LOCAL_SIMULATED_BENCHMARK_REQUIRED_PARAMETERS

  • methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors.

  • benchmark_definitions (Union[Dict[str, SurrogateBenchmarkDefinition], Dict[str, Dict[str, SurrogateBenchmarkDefinition]]]) – Definitions of benchmarks; one is selected from command line arguments

  • extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe

  • map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above, optional

  • extra_tuning_job_metadata (Optional[Dict[str, Any]]) – Metadata added to the tuner, can be used to manage results

  • use_transfer_learning (bool) – If True, we use transfer tuning. Defaults to False
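
A minimal sketch of a map_method_args hook, as referenced above. The baseline name "ASHA", the num_brackets configuration entry, and the use of a scheduler_kwargs entry inside method_kwargs are illustrative assumptions in the style of the Syne Tune benchmarking examples:

    import copy


    def map_method_args(configuration, method, method_kwargs):
        # Called just before the method constructor is invoked: ``configuration``
        # is the ConfigDict parsed from the command line, ``method`` the baseline
        # name, ``method_kwargs`` the arguments used to build MethodArguments
        new_kwargs = copy.deepcopy(method_kwargs)
        if method == "ASHA":  # illustrative baseline name
            scheduler_kwargs = new_kwargs.setdefault("scheduler_kwargs", dict())
            # ``num_brackets`` is a hypothetical entry, e.g. registered via
            # ``extra_args`` of ``main``; default to 1 if it is absent
            scheduler_kwargs["brackets"] = getattr(configuration, "num_brackets", 1)
        return new_kwargs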

syne_tune.experiments.launchers.hpo_main_simulator.main(methods, benchmark_definitions, extra_args=None, map_method_args=None, extra_results=None, use_transfer_learning=False)[source]

Runs a sequence of experiments with the simulator backend, one after the other. The loop runs over methods selected from methods, over repetitions, and over benchmarks selected from benchmark_definitions, with the ranges being controlled by command line arguments.

map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on the configuration returned by parse_args() and on the method. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline. It is called just before the method is created.

Parameters:
  • methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors

  • benchmark_definitions (Union[Dict[str, SurrogateBenchmarkDefinition], Dict[str, Dict[str, SurrogateBenchmarkDefinition]]]) – Definitions of benchmarks

  • extra_args (Optional[List[Dict[str, Any]]]) – Extra arguments for command line parser. Optional

  • map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above. Needed if extra_args is given

  • extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe

  • use_transfer_learning (bool) – If True, we use transfer tuning. Defaults to False
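
A launcher script sketch in the style of the Syne Tune benchmarking examples. The baseline constructors (from syne_tune.experiments.default_baselines) and the nas201_benchmark helper are assumptions, not part of this module:

    from syne_tune.experiments.benchmark_definitions import nas201_benchmark
    from syne_tune.experiments.default_baselines import ASHA, RandomSearch
    from syne_tune.experiments.launchers.hpo_main_simulator import main

    # Method constructors: MethodArguments -> TrialScheduler
    methods = {
        "RS": lambda method_arguments: RandomSearch(method_arguments),
        "ASHA": lambda method_arguments: ASHA(method_arguments, type="promotion"),
    }

    # One benchmark is selected per experiment via command line arguments
    benchmark_definitions = {
        "nas201-cifar100": nas201_benchmark("cifar100"),
        "nas201-ImageNet16-120": nas201_benchmark("ImageNet16-120"),
    }

    if __name__ == "__main__":
        main(methods, benchmark_definitions)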