syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.posterior_state module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.posterior_state.PosteriorState[source]
Bases: object
Interface for posterior state of Gaussian-linear model.
- property num_data
- property num_features
- property num_fantasies
- predict(test_features)[source]
Computes marginal statistics (means, variances) for a number of test features.
- Parameters:
  test_features (ndarray) – Features for test configs
- Return type: Tuple[ndarray, ndarray]
- Returns: posterior_means, posterior_variances
- sample_marginals(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  test_features (ndarray) – Input points for test configs
  num_samples (int) – Number of samples
  random_state (Optional[RandomState]) – PRNG
- Return type: ndarray
- Returns: Marginal samples, (num_test, num_samples)
- backward_gradient(input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state. If the Predictor uses MCMC, this has to be called for every sample.
- Parameters:
  input (ndarray) – Single input point x, shape (d,)
  head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  mean_data (float) – Mean used to normalize targets
  std_data (float) – Stddev used to normalize targets
- Return type: ndarray
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.posterior_state.PosteriorStateWithSampleJoint[source]
Bases: PosteriorState
- sample_joint(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  test_features (ndarray) – Input points for test configs
  num_samples (int) – Number of samples
  random_state (Optional[RandomState]) – PRNG
- Return type: ndarray
- Returns: Joint samples, (num_test, num_samples)
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.posterior_state.GaussProcPosteriorState(features, targets, mean, kernel, noise_variance, debug_log=False, **kwargs)[source]
Bases: PosteriorStateWithSampleJoint
Represents the posterior state for a Gaussian process regression model. Note that members are immutable. If the posterior state is to be updated, a new object is created and returned.
- property num_data
- property num_features
- property num_fantasies
- neg_log_likelihood()[source]
Works only if fantasy samples are not used (single targets vector).
- Return type: ndarray
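The quantity this method computes is the standard GP negative log marginal likelihood. The sketch below shows the usual Cholesky-based computation in plain NumPy; the RBF kernel, data, and noise variance are illustrative assumptions, not Syne Tune's internals.

```python
import numpy as np

# Hypothetical toy data; kernel and noise level are assumptions for this sketch
rng = np.random.RandomState(0)
X = rng.rand(5, 2)          # training features, shape (n, d)
y = rng.randn(5, 1)         # single targets vector, shape (n, 1)
noise_variance = 0.1

def rbf_kernel(A, B):
    # Squared-exponential kernel with unit lengthscale (an assumption)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

n = X.shape[0]
K = rbf_kernel(X, X) + noise_variance * np.eye(n)
L = np.linalg.cholesky(K)                             # K = L L^T
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y

# Negative log marginal likelihood:
#   0.5 * y^T K^{-1} y + 0.5 * log det K + 0.5 * n * log(2 pi)
# where log det K = 2 * sum(log diag(L))
nll = (0.5 * float(y.T @ alpha)
       + np.log(np.diag(L)).sum()
       + 0.5 * n * np.log(2 * np.pi))
```

The Cholesky factor computed here is also what the predictive equations below reuse.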
- predict(test_features)[source]
Computes marginal statistics (means, variances) for a number of test features.
- Parameters:
  test_features (ndarray) – Features for test configs
- Return type: Tuple[ndarray, ndarray]
- Returns: posterior_means, posterior_variances
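The marginal statistics returned here are the standard GP predictive mean and variance. A minimal NumPy sketch of those equations (kernel, data, and noise variance are illustrative assumptions):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(5, 2)            # training features (n, d)
y = rng.randn(5, 1)           # training targets (n, 1)
X_test = rng.rand(3, 2)       # test features (num_test, d)
noise_variance = 0.1

def rbf_kernel(A, B):
    # Squared-exponential kernel with unit lengthscale (an assumption)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

K = rbf_kernel(X, X) + noise_variance * np.eye(X.shape[0])
L = np.linalg.cholesky(K)
k_star = rbf_kernel(X, X_test)        # cross-kernel, shape (n, num_test)
v = np.linalg.solve(L, k_star)        # L^{-1} k_star

# Predictive means and per-point (marginal) variances at the test points
posterior_means = v.T @ np.linalg.solve(L, y)                    # (num_test, 1)
posterior_variances = (rbf_kernel(X_test, X_test).diagonal()
                       - (v ** 2).sum(axis=0))                   # (num_test,)
```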
- sample_marginals(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  test_features (ndarray) – Input points for test configs
  num_samples (int) – Number of samples
  random_state (Optional[RandomState]) – PRNG
- Return type: ndarray
- Returns: Marginal samples, (num_test, num_samples)
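Marginal sampling only needs the per-point means and variances from predict: each test point is drawn independently from its own Gaussian. A sketch (the posterior statistics below are hypothetical values, not outputs of Syne Tune):

```python
import numpy as np

# Hypothetical marginal statistics, as predict would return them
posterior_means = np.array([[0.2], [-0.5], [1.0]])    # (num_test, 1)
posterior_variances = np.array([0.04, 0.09, 0.01])    # (num_test,)
num_samples = 4
random_state = np.random.RandomState(42)

# One independent Gaussian draw per (test point, sample) pair
noise = random_state.randn(3, num_samples)
samples = posterior_means + np.sqrt(posterior_variances)[:, None] * noise
# samples has shape (num_test, num_samples)
```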
- backward_gradient(input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state. If the Predictor uses MCMC, this has to be called for every sample.
The posterior represented here is based on normalized data, while the acquisition function is based on the de-normalized predictive distribution, which is why we need ‘mean_data’, ‘std_data’ here.
- Parameters:
  input (ndarray) – Single input point x, shape (d,)
  head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  mean_data (float) – Mean used to normalize targets
  std_data (float) – Stddev used to normalize targets
- Return type: ndarray
- sample_joint(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  test_features (ndarray) – Input points for test configs
  num_samples (int) – Number of samples
  random_state (Optional[RandomState]) – PRNG
- Return type: ndarray
- Returns: Joint samples, (num_test, num_samples)
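Unlike sample_marginals, joint sampling uses the full posterior covariance across test points, typically via its Cholesky factor. A NumPy sketch of this (kernel, data, and jitter constant are illustrative assumptions):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(5, 2)
y = rng.randn(5, 1)
X_test = rng.rand(3, 2)
noise_variance = 0.1

def rbf_kernel(A, B):
    # Squared-exponential kernel with unit lengthscale (an assumption)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

K = rbf_kernel(X, X) + noise_variance * np.eye(5)
L = np.linalg.cholesky(K)
k_star = rbf_kernel(X, X_test)
v = np.linalg.solve(L, k_star)

post_mean = v.T @ np.linalg.solve(L, y)            # (num_test, 1)
post_cov = rbf_kernel(X_test, X_test) - v.T @ v    # full posterior covariance

# Correlated draws: transform i.i.d. noise by a Cholesky factor of post_cov
L_post = np.linalg.cholesky(post_cov + 1e-9 * np.eye(3))   # jitter for stability
num_samples = 4
samples = post_mean + L_post @ np.random.RandomState(1).randn(3, num_samples)
```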
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.posterior_state.backward_gradient_given_predict(predict_func, input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state. If the Predictor uses MCMC, this has to be called for every sample.
The posterior represented here is based on normalized data, while the acquisition function is based on the de-normalized predictive distribution, which is why we need ‘mean_data’, ‘std_data’ here.
- Parameters:
  predict_func (Callable[[ndarray], Tuple[ndarray, ndarray]]) – Function mapping input x to mean, variance
  input (ndarray) – Single input point x, shape (d,)
  head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  mean_data (float) – Mean used to normalize targets
  std_data (float) – Stddev used to normalize targets
- Return type: ndarray
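The de-normalization step explained above amounts to a chain rule: with mean_denorm = mean_data + std_data * mean and std_denorm = std_data * sqrt(variance), head gradients taken with respect to the de-normalized statistics pick up a factor std_data. The sketch below illustrates this with a toy predict_func whose gradients are known in closed form; all names and values here are hypothetical, not Syne Tune's implementation.

```python
import numpy as np

mean_data, std_data = 2.0, 3.0   # normalization constants (assumed values)

def predict_func(x):
    # Toy normalized predictive distribution over a single metric
    mean = np.array([np.sin(x).sum()])
    variance = np.array([1.0 + (x ** 2).sum()])
    return mean, variance

def backward_gradient_sketch(predict_func, x, head_gradients, mean_data, std_data):
    # Chain rule through the de-normalization:
    #   mean_denorm = mean_data + std_data * mean  =>  d mean_denorm/dx = std_data * d mean/dx
    #   std_denorm  = std_data * sqrt(variance)    =>  d std_denorm/dx  = std_data * d std/dx
    mean, variance = predict_func(x)
    dmean_dx = np.cos(x)                          # closed-form gradient of the toy mean
    dvar_dx = 2.0 * x                             # closed-form gradient of the toy variance
    dstd_dx = dvar_dx / (2.0 * np.sqrt(variance))
    return (head_gradients["mean"] * std_data * dmean_dx
            + head_gradients["std"] * std_data * dstd_dx)

x = np.array([0.3, -0.7])
head_gradients = {"mean": np.array([1.5]), "std": np.array([-0.5])}
grad = backward_gradient_sketch(predict_func, x, head_gradients, mean_data, std_data)
```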
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.posterior_state.IncrementalUpdateGPPosteriorState(features, targets, mean, kernel, noise_variance, **kwargs)[source]
Bases: GaussProcPosteriorState
Extension of GaussProcPosteriorState which allows for incremental updating, given that a single data case is appended to the training set.
In order not to mutate members, the update method returns a new object.
- update(feature, target)[source]
- Parameters:
  feature (ndarray) – Additional input xstar, shape (1, d)
  target (ndarray) – Additional target ystar, shape (1, m)
- Return type: IncrementalUpdateGPPosteriorState
- Returns: Posterior state for increased data set
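The point of the incremental update is that appending a single data case only extends the Cholesky factor by one row, instead of refactorizing from scratch. A NumPy sketch of this rank-one extension (RBF kernel, data, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(5, 2)
noise_variance = 0.1

def rbf_kernel(A, B):
    # Squared-exponential kernel with unit lengthscale (an assumption)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

K = rbf_kernel(X, X) + noise_variance * np.eye(5)
L = np.linalg.cholesky(K)

# Append a single new input xstar: extend L by one row
xstar = rng.rand(1, 2)
kvec = rbf_kernel(X, xstar)[:, 0]     # k(X, xstar), shape (n,)
kss = 1.0 + noise_variance            # k(xstar, xstar) + noise (RBF diagonal is 1)
lvec = np.linalg.solve(L, kvec)       # L^{-1} kvec: the shared "lvec" computation
lss = np.sqrt(kss - lvec @ lvec)      # new diagonal entry

L_new = np.zeros((6, 6))
L_new[:5, :5] = L
L_new[5, :5] = lvec
L_new[5, 5] = lss
```

This costs O(n^2) per appended point, versus O(n^3) for a fresh factorization.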
- sample_and_update(feature, mean_impute_mask=None, random_state=None)[source]
Draw target(s), shape (1, m), from the current posterior state, then update the state based on these. The main computation of lvec is shared between the two. If mean_impute_mask is given, it is a boolean vector of size m (number of columns of pred_mat). Columns j of target, where mean_impute_mask[j] is true, are set to the predictive mean (instead of being sampled).
- Parameters:
  feature (ndarray) – Additional input xstar, shape (1, d)
  mean_impute_mask – See above
  random_state (Optional[RandomState]) – PRN generator
- Return type: (ndarray, IncrementalUpdateGPPosteriorState)
- Returns: target, poster_state_new
- expand_fantasies(num_fantasies)[source]
If this posterior has been created with a single targets vector, shape (n, 1), use this to duplicate this vector m = num_fantasies times. Call this method before fantasy targets are appended by update.
- Parameters:
  num_fantasies (int) – Number m of fantasy samples
- Return type: IncrementalUpdateGPPosteriorState
- Returns: New state with targets duplicated m times
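The duplication itself is a simple column tiling of the targets matrix, as the sketch below shows (the array values are hypothetical):

```python
import numpy as np

targets = np.arange(4.0).reshape(4, 1)   # single targets vector, shape (n, 1)
num_fantasies = 3

# Duplicate the targets column m = num_fantasies times, giving shape (n, m);
# fantasy targets appended later by update then differ per column
expanded = np.tile(targets, (1, num_fantasies))
```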