syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils module

syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils.apply_lbfgs(exec_func, param_dict, bounds, **kwargs)[source]

Run SciPy L-BFGS-B on a criterion given by autograd code

Run SciPy L-BFGS-B in order to minimize a criterion given by autograd code. Criterion value and gradient are computed by:

crit_val, gradient = exec_func(param_vec)

Given an autograd expression, use make_scipy_objective to obtain exec_func. param_vec must correspond to the parameter dictionary param_dict via ParamVecDictConverter. The initial param_vec is taken from param_dict, and final values are written back to param_dict (conversions are done by ParamVecDictConverter).

L-BFGS-B supports box constraints [a, b] for each coordinate, where None stands for -infinity (in place of a) or +infinity (in place of b); the default (None, None) imposes no constraint. In bounds, box constraints can be specified per argument (the constraint then applies to all coordinates of that argument). Pass {} for no constraints at all.

Parameters:
  • exec_func – Function to compute criterion and gradient

  • param_dict – See above

  • bounds – See above

Returns:

None, or dict with info about exception caught
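
A minimal sketch of the exec_func contract and the bounds format, using plain autograd. The quadratic criterion and the parameter name in bounds are hypothetical stand-ins; in Syne Tune, exec_func is produced by make_scipy_objective (or create_lbfgs_arguments below) from a real criterion such as the marginal likelihood:

import autograd.numpy as anp
from autograd import value_and_grad

def criterion(param_vec):
    # Toy quadratic stand-in for a real learning criterion
    return anp.sum((param_vec - 1.0) ** 2)

# exec_func returns (crit_val, gradient), as apply_lbfgs expects
exec_func = value_and_grad(criterion)
crit_val, gradient = exec_func(anp.zeros(3))  # 3.0, array([-2., -2., -2.])

# Hypothetical bounds: lower bound for "noise_variance", no upper bound
bounds = {"noise_variance": (1e-9, None)}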

syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils.apply_lbfgs_with_multiple_starts(exec_func, param_dict, bounds, random_state, n_starts=5, **kwargs)[source]

When dealing with non-convex problems (e.g., optimizing the marginal likelihood), we typically need to run the optimization from several starting points. This function implements this logic around apply_lbfgs, randomizing the starting points around the initial values provided in param_dict (see copy_of_initial_param_dict in the source below).

The first optimization starts exactly at param_dict, so that the case n_starts=1 coincides with a single call of apply_lbfgs. Importantly, communication with the L-BFGS solver happens via param_dict, hence all operations on param_dict are in place.

Exceptions are caught, and information about them is returned in ret_infos. If none of the restarts succeeds, param_dict is not modified.

Parameters:
  • exec_func – see above

  • param_dict – see above

  • bounds – see above

  • random_state – RandomState for sampling

  • n_starts – Number of times we start an optimization with L-BFGS (must be >= 1)

Returns:

List ret_infos of length n_starts. Each entry is None if the corresponding optimization succeeded, otherwise a dict with information about the exception caught
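
A hedged usage sketch, assuming exec_func, param_dict, and bounds have been set up as for apply_lbfgs above (e.g., via create_lbfgs_arguments below):

import numpy as np
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils import (
    apply_lbfgs_with_multiple_starts,
)

random_state = np.random.RandomState(31415)
ret_infos = apply_lbfgs_with_multiple_starts(
    exec_func, param_dict, bounds, random_state, n_starts=5
)
# None entries signal successful starts; dicts describe caught exceptions
if all(info is not None for info in ret_infos):
    print("All restarts failed; param_dict was left unchanged")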

syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils.add_regularizer_to_criterion(criterion, crit_args)[source]
syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils.create_lbfgs_arguments(criterion, crit_args, verbose=False)[source]

Creates SciPy optimizer objective and param_dict for criterion function.

Parameters:
  • criterion (MarginalLikelihood) – Learning criterion (nullary)

  • crit_args (list) – Arguments for criterion.forward

Returns:

scipy_objective, param_dict
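
A hedged end-to-end sketch combining create_lbfgs_arguments with the multi-start optimizer; likelihood (a MarginalLikelihood) and data (its forward arguments) are hypothetical placeholders:

import numpy as np
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.optimization_utils import (
    apply_lbfgs_with_multiple_starts,
    create_lbfgs_arguments,
)

# likelihood: MarginalLikelihood criterion; data: arguments for likelihood.forward
scipy_objective, param_dict = create_lbfgs_arguments(
    criterion=likelihood, crit_args=[data]
)
ret_infos = apply_lbfgs_with_multiple_starts(
    scipy_objective,
    param_dict,
    bounds={},  # no box constraints
    random_state=np.random.RandomState(0),
    n_starts=5,
)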