syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.posterior_state module
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.posterior_state.assert_ensemble_distribution(distribution, all_resources)[source]
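Judging by the signature, this helper validates the ensemble distribution \([\theta_r]\) against the available resource levels. A minimal sketch of what such a check might look like, assuming distribution maps rung levels to probability weights (the dict representation, function name, and exact checks are assumptions, not taken from the library):

.. code-block:: python

   import numpy as np

   def check_ensemble_distribution(distribution, all_resources):
       """Hypothetical re-implementation of such a sanity check."""
       # The distribution must be supported on known rung levels only
       assert set(distribution).issubset(set(all_resources)), (
           f"supported on {sorted(distribution)}, but available "
           f"levels are {sorted(all_resources)}"
       )
       theta = np.array(list(distribution.values()))
       assert np.all(theta >= 0.0), "weights theta_r must be nonnegative"
       assert np.isclose(theta.sum(), 1.0), "weights theta_r must sum to 1"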
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.posterior_state.HyperTuneIndependentGPPosteriorState(features, targets, kernel, mean, covariance_scale, noise_variance, resource_attr_range, ensemble_distribution, debug_log=False)[source]
Bases: IndependentGPPerResourcePosteriorState
Special case of IndependentGPPerResourcePosteriorState, where the methods predict, backward_gradient, sample_marginals, sample_joint are over a random function \(f_{MF}(x)\), obtained by first sampling the resource level \(r \sim [\theta_r]\), then using \(f_{MF}(x) = f(x, r)\). Predictive means and variances are:

.. math::

   \mu_{MF}(x) = \sum_r \theta_r \mu(x, r)

   \sigma_{MF}^2(x) = \sum_r \theta_r^2 \sigma^2(x, r)

Here, \([\theta_r]\) is a distribution over a subset of rung levels.
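As an illustration, the two sums above are cheap to evaluate once the per-resource statistics are known. A minimal numpy sketch, with all inputs hypothetical stand-ins for the per-resource predictive means and variances:

.. code-block:: python

   import numpy as np

   # Hypothetical inputs: theta[k] is the ensemble weight of rung level k;
   # mu[k, i], sigma2[k, i] are predictive mean and variance of f(x_i, r_k).
   theta = np.array([0.2, 0.3, 0.5])      # weights over 3 rung levels
   mu = np.random.randn(3, 10)            # shape (levels, test points)
   sigma2 = np.random.rand(3, 10) + 0.1   # same shape, positive

   # mu_MF(x) = sum_r theta_r mu(x, r)
   mu_mf = (theta[:, None] * mu).sum(axis=0)
   # sigma_MF^2(x) = sum_r theta_r^2 sigma^2(x, r)
   sigma2_mf = (theta[:, None] ** 2 * sigma2).sum(axis=0)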
Note: This posterior state is unusual, in that sample_marginals and sample_joint have to work both with (a) extended inputs (x, r) and (b) non-extended inputs x. For case (a), they behave like the superclass methods; this is needed to support fitting model parameters, for example by drawing fantasy samples. For case (b), they use the ensemble distribution detailed above, which supports optimizing the acquisition function.
- predict(test_features)[source]
Computes marginal statistics (means, variances) for a number of test features.
- Parameters:
  test_features (ndarray) – Features for test configs
- Return type:
  Tuple[ndarray, ndarray]
- Returns:
  posterior_means, posterior_variances
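A hypothetical usage sketch: scoring candidates by a lower confidence bound on the ensemble prediction, assuming state is an already-fitted posterior state of this class (constructing one requires features, targets, kernel, and the other constructor arguments):

.. code-block:: python

   import numpy as np

   def ensemble_lcb(state, test_features, kappa=1.0):
       """Hypothetical helper: `test_features` is an (n, d) array of
       encoded configs without the resource attribute."""
       means, variances = state.predict(test_features)
       # Lower confidence bound on f_MF(x), one score per candidate
       return means - kappa * np.sqrt(variances)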
- sample_marginals(test_features, num_samples=1, random_state=None)[source]
If test_features are non-extended features (no resource attribute), we sample from the ensemble predictive distribution; otherwise, we call the superclass method. A schematic sketch of this dispatch follows this entry.
- Return type:
  ndarray
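The dual behavior described above amounts to a dispatch on whether the inputs carry the extra resource column. A schematic sketch only, not the library's actual logic (which decodes the resource column via resource_attr_range); the two sampler callables are hypothetical stand-ins:

.. code-block:: python

   def dispatch_sample_marginals(features, num_extended_dims,
                                 superclass_sampler, ensemble_sampler):
       """Schematic dispatch: `features` is extended iff its width equals
       `num_extended_dims` (i.e. it ends with a resource column)."""
       if features.shape[1] == num_extended_dims:
           # Case (a): extended inputs (x, r) -- superclass behavior,
           # needed when fitting parameters or drawing fantasy samples
           return superclass_sampler(features)
       # Case (b): non-extended inputs x -- sample from the ensemble
       # predictive distribution, used when optimizing acquisitions
       return ensemble_sampler(features)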
- sample_joint(test_features, num_samples=1, random_state=None)[source]
If test_features are non-extended features (no resource attribute), we sample from the ensemble predictive distribution; otherwise, we call the superclass method.
- Return type:
  ndarray
- backward_gradient(input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state. If the Predictor uses MCMC, this method has to be called for every sample.
- Parameters:
  input (ndarray) – Single input point x, shape (d,)
  head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  mean_data (float) – Mean used to normalize targets
  std_data (float) – Stddev used to normalize targets
- Return type:
  ndarray
- Returns:
  Gradient with respect to input
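The head-gradient mechanism is the usual reverse-mode chain rule: the acquisition function depends on the input only through the posterior statistics, so its input gradient is assembled from the head gradients and the statistics' own input gradients. A toy sketch with analytic stand-ins for the posterior (the keys and functions here are illustrative, not the library's):

.. code-block:: python

   import numpy as np

   def toy_backward_gradient(x, head_gradients):
       """Toy posterior with mean(x) = sin(x), std(x) = x**2 + 1, so
       grad_x f = (df/dmean) * dmean/dx + (df/dstd) * dstd/dx."""
       dmean_dx = np.cos(x)   # derivative of sin(x)
       dstd_dx = 2.0 * x      # derivative of x**2 + 1
       return (head_gradients["mean"] * dmean_dx
               + head_gradients["std"] * dstd_dx)

   x = np.array([0.5, -1.0])
   head_gradients = {"mean": np.array([1.0, 1.0]),
                     "std": np.array([-0.5, -0.5])}
   print(toy_backward_gradient(x, head_gradients))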
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.posterior_state.HyperTuneJointGPPosteriorState(features, targets, mean, kernel, noise_variance, resource_attr_range, ensemble_distribution, debug_log=False)[source]
Bases: GaussProcPosteriorState
Special case of GaussProcPosteriorState, where the methods predict, backward_gradient, sample_marginals, sample_joint are over a random function \(f_{MF}(x)\), obtained by first sampling the resource level \(r \sim [\theta_r]\), then using \(f_{MF}(x) = f(x, r)\). Predictive means and variances are:

.. math::

   \mu_{MF}(x) = \sum_r \theta_r \mu(x, r)

   \sigma_{MF}^2(x) = \sum_r \theta_r^2 \sigma^2(x, r)

Here, \([\theta_r]\) is a distribution over a subset of rung levels.
Note: This posterior state is unusual, in that sample_marginals and sample_joint have to work both with (a) extended inputs (x, r) and (b) non-extended inputs x. For case (a), they behave like the superclass methods; this is needed to support fitting model parameters, for example by drawing fantasy samples. For case (b), they use the ensemble distribution detailed above, which supports optimizing the acquisition function.
- predict(test_features)[source]
Computes marginal statistics (means, variances) for a number of test features.
- Parameters:
  test_features (ndarray) – Features for test configs
- Return type:
  Tuple[ndarray, ndarray]
- Returns:
  posterior_means, posterior_variances
- sample_marginals(test_features, num_samples=1, random_state=None)[source]
If test_features are non-extended features (no resource attribute), we sample from the ensemble predictive distribution; otherwise, we call the superclass method.
- Return type:
  ndarray
- sample_joint(test_features, num_samples=1, random_state=None)[source]
If test_features are non-extended features (no resource attribute), we sample from the ensemble predictive distribution; otherwise, we call the superclass method.
- Return type:
  ndarray
- backward_gradient(input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state. If the Predictor uses MCMC, this method has to be called for every sample.
The posterior represented here is based on normalized data, while the acquisition function is based on the de-normalized predictive distribution, which is why mean_data and std_data are needed here; see the sketch at the end of this entry.
- Parameters:
  input (ndarray) – Single input point x, shape (d,)
  head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  mean_data (float) – Mean used to normalize targets
  std_data (float) – Stddev used to normalize targets
- Return type:
  ndarray
- Returns:
  Gradient with respect to input
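To make the normalization note above concrete: under the standard normalization y_norm = (y - mean_data) / std_data, de-normalizing the predictive statistics and rescaling the head gradients looks as follows (toy numbers, not library code):

.. code-block:: python

   import numpy as np

   # If targets were normalized as y_norm = (y - mean_data) / std_data,
   # the de-normalized predictive statistics are:
   #   mean = mean_data + std_data * mean_norm
   #   std  = std_data * std_norm
   mean_data, std_data = 0.37, 2.5
   mean_norm = np.array([0.1, -0.4])
   std_norm = np.array([0.9, 1.2])

   mean = mean_data + std_data * mean_norm
   std = std_data * std_norm

   # By the chain rule, head gradients taken w.r.t. the de-normalized
   # statistics are rescaled by std_data before being chained through
   # the normalized posterior:
   dacq_dmean = np.array([1.0, 1.0])
   dacq_dstd = np.array([-0.5, 0.3])
   dacq_dmean_norm = dacq_dmean * std_data
   dacq_dstd_norm = dacq_dstd * std_data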