syne_tune.optimizer.schedulers.searchers.bayesopt.models.sklearn_model module

class syne_tune.optimizer.schedulers.searchers.bayesopt.models.sklearn_model.SKLearnPredictorWrapper(sklearn_predictor, state, active_metric=None)[source]

Bases: BasePredictor

Wrapper class for sklearn predictors, as returned by fit_from_state() of SKLearnEstimatorWrapper.

predict(inputs)[source]

Returns signals which are statistics of the predictive distribution at input points inputs. By default:

  • “mean”: Predictive means. If the model supports fantasizing with a number nf of fantasies, this has shape (n, nf), otherwise (n,)

  • “std”: Predictive stddevs, shape (n,)

If the hyperparameters of the surrogate model are being optimized (e.g., by empirical Bayes), the returned list has length 1. If its hyperparameters are averaged over by MCMC, the returned list has one entry per MCMC sample.

Parameters:

inputs (ndarray) – Input points, shape (n, d)

Return type:

List[Dict[str, ndarray]]

Returns:

List of dicts with the keys of keys_predict(); the list has one entry per MCMC sample, or a single entry for empirical Bayes
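
For illustration, here is a minimal sketch (not shipped library code) of an sklearn-style predictor that this class could wrap, built on sklearn.linear_model.BayesianRidge. It assumes the SKLearnPredictor base class from the bayesopt.sklearn subpackage; the name BayesianRidgePredictor is ours.

   import numpy as np
   from sklearn.linear_model import BayesianRidge

   from syne_tune.optimizer.schedulers.searchers.bayesopt.sklearn.predictor import (
       SKLearnPredictor,
   )


   class BayesianRidgePredictor(SKLearnPredictor):
       """Sketch: predictive means and stddevs from a fitted BayesianRidge."""

       def __init__(self, ridge: BayesianRidge):
           self.ridge = ridge

       def predict(self, X: np.ndarray):
           # BayesianRidge returns predictive stddevs along with the means
           return self.ridge.predict(X, return_std=True)

Wrapped in SKLearnPredictorWrapper, predict(inputs) on n input points then returns a one-element list [{"mean": shape (n,), "std": shape (n,)}], since no MCMC averaging is involved here.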

backward_gradient(input, head_gradients)[source]

Computes the gradient \(\nabla f(x)\) of an acquisition function \(f(x)\), where \(x\) is a single input point. This uses reverse-mode differentiation: the head gradients are passed in by the acquisition function. The head gradients are \(\partial_k f\), where \(k\) runs over the statistics returned by predict() for the single input point \(x\), and their shapes match the shapes of those statistics.

Parameters:
  • input (ndarray) – Single input point \(x\), shape (d,)

  • head_gradients (List[Dict[str, ndarray]]) – See above

Return type:

List[ndarray]

Returns:

Gradient \(\nabla f(x)\) (list of length 1)
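
To make the reverse-mode contract concrete, here is a hypothetical numerical sketch (not library code). For an acquisition function \(f(x) = \mu(x) - \kappa \sigma(x)\), the acquisition passes head gradients \(\partial f / \partial \mu = 1\) and \(\partial f / \partial \sigma = -\kappa\), which are combined with the predictor's own input gradients by the chain rule:

   import numpy as np

   kappa = 2.0
   # Head gradients passed by the acquisition f = mean - kappa * std:
   head_gradients = [{"mean": np.array(1.0), "std": np.array(-kappa)}]

   # Suppose the predictor knows its own input gradients at x (d = 2 here;
   # the numbers are made up for illustration):
   dmean_dx = np.array([0.3, -0.1])
   dstd_dx = np.array([0.05, 0.2])

   # Chain rule: grad f(x) = sum over statistics k of (df/ds_k) * (ds_k/dx)
   grad = (
       head_gradients[0]["mean"] * dmean_dx
       + head_gradients[0]["std"] * dstd_dx
   )
   print(grad)  # [ 0.2 -0.5]; backward_gradient() returns this as [grad]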

class syne_tune.optimizer.schedulers.searchers.bayesopt.models.sklearn_model.SKLearnEstimatorWrapper(sklearn_estimator, active_metric=None, *args, **kwargs)[source]

Bases: Estimator

Wrapper class for sklearn estimators.

get_params()[source]
Return type:

Dict[str, Any]

Returns:

Current tunable model parameters

set_params(param_dict)[source]
Parameters:

param_dict (Dict[str, Any]) – New model parameters
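
The contract here is simple: tunable parameters travel as a plain Dict[str, Any]. The following toy class (purely illustrative, not part of Syne Tune) shows the snapshot/restore round trip:

   from typing import Any, Dict


   class ToyEstimator:
       """Illustrates only the get_params()/set_params() round trip."""

       def __init__(self):
           self._params: Dict[str, Any] = {"alpha": 1.0, "beta": 0.5}

       def get_params(self) -> Dict[str, Any]:
           return dict(self._params)

       def set_params(self, param_dict: Dict[str, Any]) -> None:
           self._params = dict(param_dict)


   est = ToyEstimator()
   snapshot = est.get_params()   # snapshot the current tunables
   snapshot["alpha"] = 2.0       # tune elsewhere ...
   est.set_params(snapshot)      # ... then write them back
   assert est.get_params()["alpha"] == 2.0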

fit_from_state(state, update_params)[source]

Creates a Predictor object based on data in state.

If the model also has hyperparameters, these are learned iff update_params == True. Otherwise, these parameters are kept unchanged, and only the posterior state is computed. If your surrogate model is not Bayesian, or has no hyperparameters, you can ignore the update_params argument.

If state.pending_evaluations is not empty, the posterior is computed for state with its pending evaluations removed. This method can be overridden for any other behaviour, for example to deal with pending evaluations differently.

Parameters:
  • state (TuningJobState) – Current data the model parameters are fit on, and from which the posterior state is computed

  • update_params (bool) – See above

Return type:

Predictor

Returns:

Predictor, wrapping the posterior state
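
Putting the pieces together, here is a sketch of a complete custom estimator following the pattern this module supports. The name BayesianRidgeEstimator is ours, and we assume the SKLearnEstimator base class, whose fit() receives the features and targets that fit_from_state() extracts from the TuningJobState; BayesianRidgePredictor is the sketch shown further above.

   import copy

   import numpy as np
   from sklearn.linear_model import BayesianRidge

   from syne_tune.optimizer.schedulers.searchers.bayesopt.sklearn.estimator import (
       SKLearnEstimator,
   )
   from syne_tune.optimizer.schedulers.searchers.bayesopt.models.sklearn_model import (
       SKLearnEstimatorWrapper,
   )


   class BayesianRidgeEstimator(SKLearnEstimator):
       """Sketch: fits a BayesianRidge model on (features, targets)."""

       def __init__(self, *args, **kwargs):
           super().__init__()
           self.ridge = BayesianRidge(*args, **kwargs)

       def fit(self, X: np.ndarray, y: np.ndarray, update_params: bool):
           # BayesianRidge re-estimates its prior parameters on every fit,
           # so update_params can be ignored here (see the note above)
           self.ridge.fit(X, y.ravel())
           return BayesianRidgePredictor(ridge=copy.deepcopy(self.ridge))


   # Wrapping plugs the estimator into the Bayesian optimization machinery;
   # fit_from_state(state, update_params) then returns an
   # SKLearnPredictorWrapper around the predictor produced by fit()
   estimator = SKLearnEstimatorWrapper(sklearn_estimator=BayesianRidgeEstimator())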