syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.independent.posterior_state module

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.independent.posterior_state.IndependentGPPerResourcePosteriorState(features, targets, kernel, mean, covariance_scale, noise_variance, resource_attr_range, debug_log=False)[source]

Bases: PosteriorStateWithSampleJoint

Posterior state for a model of f(x, r), where for a fixed set of resource levels r, each f(·, r) is represented by an independent Gaussian process. These processes share a common covariance function k(x, x'), but can have their own mean functions mu_r and covariance scales c_r. They can also have their own noise variances, or share a single noise variance.

Attention: Predictions can only be done at (x, r) where r has at least one training datapoint. This is because a posterior state cannot represent the prior.
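
The following is a minimal construction sketch, not taken from the docs: the extended feature matrix is assumed to carry the (encoded) resource level in its last column, encode_resource is a hypothetical stand-in for the actual encoding over resource_attr_range, and the shapes of the covariance_scale and noise_variance entries are likewise assumptions. Matern52 and ScalarMeanFunction come from the gpautograd kernel and mean modules:

    import numpy as np

    from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel import Matern52
    from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.mean import ScalarMeanFunction
    from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.independent.posterior_state import (
        IndependentGPPerResourcePosteriorState,
    )

    d = 3  # dimension of a configuration x
    resource_attr_range = (1, 9)  # (r_min, r_max)
    rung_levels = [1, 3, 9]  # resource levels which have observations


    def encode_resource(r, r_min, r_max):
        # Hypothetical helper: map resource level to [0, 1]. The real
        # encoding is determined by how extended features are built.
        return (r - r_min) / (r_max - r_min)


    rng = np.random.RandomState(0)
    num_data = 20
    resources = rng.choice(rung_levels, size=(num_data, 1))
    features = np.hstack(
        [
            rng.uniform(size=(num_data, d)),
            encode_resource(resources, *resource_attr_range),
        ]
    )
    targets = rng.normal(size=(num_data, 1))

    # Shared kernel over x; per-resource mean functions and covariance
    # scales (entry shapes are assumptions)
    kernel = Matern52(dimension=d)
    mean = {r: ScalarMeanFunction() for r in rung_levels}
    covariance_scale = {r: np.array([1.0]) for r in rung_levels}
    noise_variance = np.array([1e-3])  # shared across resource levels

    state = IndependentGPPerResourcePosteriorState(
        features=features,
        targets=targets,
        kernel=kernel,
        mean=mean,
        covariance_scale=covariance_scale,
        noise_variance=noise_variance,
        resource_attr_range=resource_attr_range,
    )

    state_3 = state.state(resource=3)  # GaussProcPosteriorState for r = 3
    print(state.neg_log_likelihood())

state(resource) gives the GaussProcPosteriorState of a single process; since the processes are independent, the negative log marginal likelihood decomposes into a sum of per-resource terms.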

state(resource)[source]

Posterior state of the independent Gaussian process at resource level resource.

Return type:

GaussProcPosteriorState

property num_data
property num_features
property num_fantasies
neg_log_likelihood()[source]
Return type:

ndarray

Returns:

Negative log marginal likelihood

predict(test_features)[source]

Computes marginal statistics (means, variances) for a number of test features.

Parameters:

test_features (ndarray) – Features for test configs (extended features, including the resource attribute)

Return type:

Tuple[ndarray, ndarray]

Returns:

posterior_means, posterior_variances
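
Continuing the sketch from the class description, a hedged usage example; test_features uses the same extended (x, r) encoding assumed there:

    # state, d, rng, encode_resource, resource_attr_range are defined
    # in the construction sketch above
    test_features = np.hstack(
        [
            rng.uniform(size=(5, d)),
            np.full((5, 1), encode_resource(3, *resource_attr_range)),
        ]
    )
    posterior_means, posterior_variances = state.predict(test_features)
    print(posterior_means.shape, posterior_variances.shape)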

sample_marginals(test_features, num_samples=1, random_state=None)[source]

Unlike predict, entries in test_features may have resources not covered by data in the posterior state. For such entries, we return the prior mean, but do not sample from the prior. If sample_marginals is used to draw fantasy values, this corresponds to the Kriging believer heuristic. A combined sketch for both sampling methods is given after sample_joint below.

Return type:

ndarray

sample_joint(test_features, num_samples=1, random_state=None)[source]

Unlike predict, entries in test_features may have resources not covered by data in the posterior state. For such entries, we return the prior mean, but do not sample from the prior. If sample_joint is used to draw fantasy values, this corresponds to the Kriging believer heuristic.

Return type:

ndarray
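
A combined sketch for sample_marginals and sample_joint, continuing the example above; resource level 5 has no observations in the sketch, so its entries come back as the prior mean:

    mixed_features = np.vstack(
        [
            np.hstack([rng.uniform(size=(3, d)),
                       np.full((3, 1), encode_resource(3, *resource_attr_range))]),
            np.hstack([rng.uniform(size=(3, d)),
                       np.full((3, 1), encode_resource(5, *resource_attr_range))]),
        ]
    )
    random_state = np.random.RandomState(31415927)
    marg = state.sample_marginals(
        mixed_features, num_samples=10, random_state=random_state
    )
    joint = state.sample_joint(
        mixed_features, num_samples=10, random_state=random_state
    )
    # sample_joint draws correlated values across the rows of
    # mixed_features; sample_marginals treats each row independently
    print(marg.shape, joint.shape)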

backward_gradient(input, head_gradients, mean_data, std_data)[source]

Implements Predictor.backward_gradient, see comments there. This is for a single posterior state. If the Predictor uses MCMC, this method has to be called for every sample.

Parameters:
  • input (ndarray) – Single input point x, shape (d,)

  • head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient

  • mean_data (float) – Mean used to normalize targets

  • std_data (float) – Stddev used to normalize targets

Return type:

ndarray

Returns:

Gradient with respect to input
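
A hedged calling sketch, continuing the example above. The "mean" and "std" keys and the head-gradient shapes are assumptions patterned on Predictor.backward_gradient; with a head gradient of one on the mean and zero on the stddev, the result is the gradient of the posterior mean at x:

    # Assumption: head_gradients uses the "mean" / "std" keys of
    # Predictor.backward_gradient, with shapes matching the statistics
    # predicted at a single input
    x = test_features[0]  # single extended input point
    head_gradients = {
        "mean": np.ones((1, 1)),   # d head / d mean
        "std": np.zeros((1, 1)),   # d head / d std
    }
    grad = state.backward_gradient(
        input=x,
        head_gradients=head_gradients,
        mean_data=0.0,  # targets in the sketch are not normalized
        std_data=1.0,
    )
    print(grad.shape)  # gradient of the head with respect to x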