syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model module

class syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model.GaussProcPredictor(state, gpmodel, fantasy_samples, active_metric=None, normalize_mean=0.0, normalize_std=1.0, filter_observed_data=None, hp_ranges_for_prediction=None)[source]

Bases: BasePredictor

Gaussian process surrogate model, where model parameters are either fit by marginal likelihood maximization (e.g., GaussianProcessRegression), or integrated out by MCMC sampling (e.g., GPRegressionMCMC).

Both state and gpmodel are immutable. If parameters of the latter are to be fit, this has to be done beforehand.

fantasy_samples contains the sampled (normalized) target values for pending configs. Only active_metric target values are considered. The target values for a pending config are a flat vector. If MCMC is used, its length is a multiple of the number of MCMC samples, containing the fantasy values for MCMC sample 0, sample 1, …
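
For illustration, here is a minimal sketch of this layout (all names and sizes below are hypothetical, not part of the API):

    import numpy as np

    num_mcmc_samples = 4  # hypothetical number of MCMC samples
    nf = 3                # hypothetical number of fantasy values per MCMC sample
    # Flat vector of fantasy values for one pending config, ordered by
    # MCMC sample: values for sample 0 first, then sample 1, ...
    flat = np.random.randn(num_mcmc_samples * nf)
    per_sample = flat.reshape(num_mcmc_samples, nf)  # row i: fantasies for MCMC sample i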

hp_ranges_for_prediction()[source]
Return type:

HyperparameterRanges

Returns:

Feature generator to be used for inputs in predict()

predict(inputs)[source]

Returns signals which are statistics of the predictive distribution at input points inputs. By default:

  • “mean”: Predictive means. If the model supports fantasizing with a number nf of fantasies, this has shape (n, nf), otherwise (n,)

  • “std”: Predictive stddevs, shape (n,)

If the hyperparameters of the surrogate model are being optimized (e.g., by empirical Bayes), the returned list has length 1. If its hyperparameters are averaged over by MCMC, the returned list has one entry per MCMC sample.

Parameters:

inputs (ndarray) – Input points, shape (n, d)

Return type:

List[Dict[str, ndarray]]

Returns:

List of dicts with keys keys_predict(); the list has one entry per MCMC sample, or a single entry for empirical Bayes
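
A hedged usage sketch (assuming predictor is an already constructed GaussProcPredictor; input sizes are illustrative):

    import numpy as np

    # `predictor`: a fitted GaussProcPredictor (assumed to exist)
    inputs = np.random.randn(10, 4)      # n=10 encoded input points, d=4 (illustrative)
    outputs = predictor.predict(inputs)  # one dict per MCMC sample, or a single dict
    for stats in outputs:
        mean = stats["mean"]             # shape (n,), or (n, nf) with fantasizing
        std = stats["std"]               # shape (n,)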

backward_gradient(input, head_gradients)[source]

Computes the gradient \(\nabla_x f(x)\) for an acquisition function \(f(x)\), where \(x\) is a single input point. This uses reverse-mode differentiation; the head gradients are passed in by the acquisition function. The head gradients are \(\partial_k f\), where \(k\) runs over the statistics returned by predict() for the single input point \(x\). Each head gradient has the same shape as the corresponding statistic.

The lists have more than one entry if MCMC is used; otherwise they have a single entry.

Parameters:
  • input (ndarray) – Single input point \(x\), shape (d,)

  • head_gradients (List[Dict[str, ndarray]]) – See above

Return type:

List[ndarray]

Returns:

Gradient \(\nabla_x f(x)\) (several if MCMC is used)
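
As an illustration (not prescribed by this module), for the lower confidence bound acquisition \(f(x) = \mu(x) - \kappa\,\sigma(x)\), the head gradients with respect to the “mean” and “std” statistics are constant:

    import numpy as np

    kappa = 1.0  # hypothetical exploration constant
    # One dict per MCMC sample (a single dict under empirical Bayes); each
    # entry has the same shape as the corresponding statistic returned by
    # predict() at the single point x, here assumed to be (1,):
    head_gradients = [{"mean": np.ones(1), "std": np.full(1, -kappa)}]
    # gradients = predictor.backward_gradient(x, head_gradients)  # list of (d,) arrays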

does_mcmc()[source]
property posterior_states: List[PosteriorState] | None
class syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model.GaussProcEstimator(gpmodel, active_metric, normalize_targets=True, debug_log=None, filter_observed_data=None, no_fantasizing=False, hp_ranges_for_prediction=None)[source]

Bases: Estimator

We support pending evaluations via fantasizing. Note that state does not contain the fantasy values, but just the pending configs. Fantasy values are sampled here.

property debug_log: DebugLogPrinter | None
property gpmodel: GaussianProcessRegression | GPRegressionMCMC | IndependentGPPerResourceModel | HyperTuneIndependentGPModel | HyperTuneJointGPModel
fit_from_state(state, update_params)[source]

Parameters of self._gpmodel are optimized iff update_params. This requires state to contain labeled examples.

If state.pending_evaluations is not empty, we proceed as follows:

  • Compute posterior for state without pending evals

  • Draw fantasy values for pending evals

  • Recompute posterior (without fitting)

Return type:

Predictor
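
A minimal usage sketch (assuming estimator is a GaussProcEstimator and state is a TuningJobState with labeled examples; both are assumed to exist):

    # Optimize GP hyperparameters and return a predictor for the posterior:
    predictor = estimator.fit_from_state(state, update_params=True)
    # Reuse previously fitted parameters (posterior is recomputed, no refit):
    predictor = estimator.fit_from_state(state, update_params=False)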

configure_scheduler(scheduler)[source]

Called by configure_scheduler() of searchers which make use of an Estimator. Allows the estimator to depend on parameters of the scheduler.

Parameters:

scheduler – Scheduler object

class syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model.GaussProcEmpiricalBayesEstimator(gpmodel, num_fantasy_samples, active_metric='target', normalize_targets=True, debug_log=None, filter_observed_data=None, no_fantasizing=False, hp_ranges_for_prediction=None)[source]

Bases: GaussProcEstimator

We support pending evaluations via fantasizing. Note that state does not contain the fantasy values, but just the pending configs. Fantasy values are sampled here.

get_params()[source]
Returns:

Current tunable model parameters

set_params(param_dict)[source]
Parameters:

param_dict – New model parameters
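
A hedged sketch of passing the tunable parameters between estimators, e.g. to warm-start a new one (both estimator objects are assumed to exist):

    param_dict = estimator.get_params()      # current tunable model parameters
    other_estimator.set_params(param_dict)   # install them on another estimator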