syne_tune.experiments.launchers.hpo_main_simulator module
- syne_tune.experiments.launchers.hpo_main_simulator.is_dict_of_dict(benchmark_definitions)[source]
- Return type: bool
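This predicate distinguishes the two accepted shapes of benchmark_definitions (see the Union type in start_experiment_simulated_backend below): a flat mapping from benchmark name to definition, or a nested mapping from setup name to such a flat mapping. A minimal sketch, where the keys and the placeholder value are hypothetical; real values would be SurrogateBenchmarkDefinition objects:

```python
from syne_tune.experiments.launchers.hpo_main_simulator import is_dict_of_dict

# Stand-in for a real SurrogateBenchmarkDefinition (hypothetical placeholder)
definition = object()

# Flat form: benchmark name -> definition
flat = {"fcnet-protein": definition, "fcnet-naval": definition}
# Nested form: setup name -> (benchmark name -> definition)
nested = {"setup-a": flat, "setup-b": flat}

print(is_dict_of_dict(flat))    # expected: False
print(is_dict_of_dict(nested))  # expected: True
```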
- syne_tune.experiments.launchers.hpo_main_simulator.get_transfer_learning_evaluations(blackbox_name, test_task, datasets, n_evals=None)[source]
- Parameters:
  - blackbox_name (str) – name of blackbox
  - test_task (str) – task where the performance would be tested; it is excluded from the transfer-learning evaluations
  - datasets (Optional[List[str]]) – subset of datasets to consider; only evaluations from those datasets are provided to transfer-learning methods. If None, all datasets are used
  - n_evals (Optional[int]) – maximum number of evaluations to be returned
- Return type: Dict[str, Any]
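A hedged usage sketch, assuming the fcnet blackbox and its dataset name "protein_structure"; any multi-task blackbox available through the syne-tune blackbox repository would do:

```python
from syne_tune.experiments.launchers.hpo_main_simulator import (
    get_transfer_learning_evaluations,
)

# Collect evaluations from all datasets except the test task,
# capped at 500 evaluations per task
evaluations = get_transfer_learning_evaluations(
    blackbox_name="fcnet",
    test_task="protein_structure",
    datasets=None,  # None: use all datasets
    n_evals=500,
)
# The test task itself is excluded from the returned mapping
for task_name in evaluations:
    assert task_name != "protein_structure"
```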
- syne_tune.experiments.launchers.hpo_main_simulator.start_experiment_simulated_backend(configuration, methods, benchmark_definitions, extra_results=None, map_method_args=None, extra_tuning_job_metadata=None, use_transfer_learning=False)[source]
Runs a sequence of experiments with the simulator backend sequentially. The loop runs over methods selected from methods, repetitions, and benchmarks selected from benchmark_definitions.
map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on configuration and the method. This allows extra flexibility to specify particular arguments for chosen methods. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline. A sketch of such a hook follows the parameter list below.
- Parameters:
  - configuration (ConfigDict) – ConfigDict with parameters of the experiment. Must contain all parameters from SIMULATED_BENCHMARK_REQUIRED_PARAMETERS
  - methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors
  - benchmark_definitions (Union[Dict[str, SurrogateBenchmarkDefinition], Dict[str, Dict[str, SurrogateBenchmarkDefinition]]]) – Definitions of benchmarks; one is selected from command line arguments
  - extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe
  - map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above. Optional
  - extra_tuning_job_metadata (Optional[Dict[str, Any]]) – Metadata added to the tuner; can be used to manage results
  - use_transfer_learning (bool) – If True, we use transfer tuning. Defaults to False
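For illustration, a minimal map_method_args sketch. The baseline name "ASHA" and the scheduler_kwargs entry are assumptions (check the MethodArguments dataclass in your version); the only contract stated above is that the hook receives (configuration, method, method_kwargs) and returns the possibly modified method_kwargs:

```python
def map_method_args(configuration, method, method_kwargs):
    # Called once per method, just before MethodArguments is constructed.
    if method == "ASHA":  # hypothetical baseline name
        # Assumption: method_kwargs may carry a scheduler_kwargs dict that is
        # forwarded to the scheduler constructor
        scheduler_kwargs = dict(method_kwargs.get("scheduler_kwargs") or {})
        scheduler_kwargs["type"] = "promotion"
        method_kwargs = dict(method_kwargs, scheduler_kwargs=scheduler_kwargs)
    return method_kwargs
```

The hook would then be passed as start_experiment_simulated_backend(configuration, methods, benchmark_definitions, map_method_args=map_method_args).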
- syne_tune.experiments.launchers.hpo_main_simulator.main(methods, benchmark_definitions, extra_args=None, map_method_args=None, extra_results=None, use_transfer_learning=False)[source]
Runs a sequence of experiments with the simulator backend sequentially. The loop runs over methods selected from methods, repetitions, and benchmarks selected from benchmark_definitions, with the range being controlled by command line arguments.
map_method_args can be used to modify method_kwargs for constructing MethodArguments, depending on the configuration returned by parse_args() and the method. Its signature is method_kwargs = map_method_args(configuration, method, method_kwargs), where method is the name of the baseline. It is called just before the method is created.
- Parameters:
  - methods (Dict[str, Callable[[MethodArguments], TrialScheduler]]) – Dictionary with method constructors
  - benchmark_definitions (Union[Dict[str, SurrogateBenchmarkDefinition], Dict[str, Dict[str, SurrogateBenchmarkDefinition]]]) – Definitions of benchmarks
  - extra_args (Optional[List[Dict[str, Any]]]) – Extra arguments for the command line parser. Optional
  - map_method_args (Optional[Callable[[ConfigDict, str, Dict[str, Any]], Dict[str, Any]]]) – See above. Needed if extra_args is given
  - extra_results (Optional[ExtraResultsComposer]) – If given, this is used to append extra information to the results dataframe
  - use_transfer_learning (bool) – If True, we use transfer tuning. Defaults to False
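A minimal launcher sketch in the shape this entry point expects. The two imported project-local modules are hypothetical; in practice methods maps baseline names to constructors taking MethodArguments and returning a TrialScheduler, and benchmark_definitions comes from your benchmark package:

```python
from syne_tune.experiments.launchers.hpo_main_simulator import main

# Hypothetical project-local modules providing the dictionaries described above
from baselines import methods                  # name -> MethodArguments -> TrialScheduler
from benchmarks import benchmark_definitions   # name -> SurrogateBenchmarkDefinition

if __name__ == "__main__":
    # Methods, benchmarks, seeds, and repetitions are selected via the
    # command line arguments parsed inside main()
    main(methods, benchmark_definitions)
```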