syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.ExtendFeaturesByResourceMixin(resource, resource_attr_range)[source]
Bases: object
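Judging by its name and its use in the clamped classes below, this mixin appends the clamped resource value as an extra feature column, mapping non-extended inputs of shape (n, d) to extended inputs of shape (n, d + 1). A minimal sketch of that idea, assuming a linear normalization of the resource to [0, 1]; the helper name `extend_features` and the exact normalization are illustrative, not the actual Syne Tune API:

```python
import numpy as np


def extend_features(features, resource, resource_attr_range):
    """Append a fixed, normalized resource column to ``features``.

    Illustrative sketch only: maps inputs of shape (n, d) to extended
    inputs of shape (n, d + 1), with the resource mapped to [0, 1]
    via (r - r_min) / (r_max - r_min). This is an assumed convention,
    not necessarily the one Syne Tune uses internally.
    """
    r_min, r_max = resource_attr_range
    r_norm = (resource - r_min) / (r_max - r_min)
    col = np.full((features.shape[0], 1), r_norm)
    return np.concatenate([features, col], axis=1)


X = np.random.rand(5, 3)
X_ext = extend_features(X, resource=9, resource_attr_range=(1, 27))
assert X_ext.shape == (5, 4)
```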
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.PosteriorStateClampedResource(poster_state_extended, resource, resource_attr_range)[source]
Bases: PosteriorStateWithSampleJoint, ExtendFeaturesByResourceMixin
Converts the posterior state of a PosteriorStateWithSampleJoint over extended inputs into a posterior state over non-extended inputs, where the resource attribute is clamped to a fixed value.
- Parameters:
  - poster_state_extended (PosteriorStateWithSampleJoint) – Posterior state over extended inputs
  - resource (int) – Value to which the resource attribute is clamped
  - resource_attr_range (Tuple[int, int]) – \((r_{min}, r_{max})\)
- property num_data
- property num_features
- property num_fantasies
- predict(test_features)[source]
Computes marginal statistics (means, variances) for a number of test features.
- Parameters:
  - test_features (ndarray) – Features for test configs
- Return type: Tuple[ndarray, ndarray]
- Returns: posterior_means, posterior_variances
- sample_marginals(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  - test_features (ndarray) – Input points for test configs
  - num_samples (int) – Number of samples
  - random_state (Optional[RandomState]) – PRNG state
- Return type: ndarray
- Returns: Marginal samples, shape (num_test, num_samples)
- sample_joint(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  - test_features (ndarray) – Input points for test configs
  - num_samples (int) – Number of samples
  - random_state (Optional[RandomState]) – PRNG state
- Return type: ndarray
- Returns: Joint samples, shape (num_test, num_samples)
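To illustrate the predict / sample_marginals contract (shapes and semantics only), here is a self-contained toy posterior obeying the same interface; this is not Syne Tune code, and the constant mean and variance are placeholders:

```python
import numpy as np


class ToyPosterior:
    """Stand-in obeying the predict / sample_marginals contract above."""

    def predict(self, test_features):
        n = test_features.shape[0]
        # Placeholder statistics: zero mean, unit variance per test point
        return np.zeros(n), np.ones(n)

    def sample_marginals(self, test_features, num_samples=1, random_state=None):
        rng = random_state if random_state is not None else np.random.RandomState()
        means, variances = self.predict(test_features)
        # Independent Gaussian draws per test point: shape (num_test, num_samples)
        return means[:, None] + np.sqrt(variances)[:, None] * rng.randn(
            means.shape[0], num_samples
        )


post = ToyPosterior()
samples = post.sample_marginals(np.random.rand(5, 3), num_samples=7)
assert samples.shape == (5, 7)
```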
- backward_gradient(input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state; if the predictor uses MCMC, this must be called for every sample.
- Parameters:
  - input (ndarray) – Single input point x, shape (d,)
  - head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  - mean_data (float) – Mean used to normalize targets
  - std_data (float) – Stddev used to normalize targets
- Return type: ndarray
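backward_gradient composes head gradients (derivatives of an acquisition criterion with respect to the posterior statistics at x) with the derivatives of those statistics with respect to x. A hedged sketch of that chain rule for a toy linear-mean posterior; the key names in head_gradients and the normalization convention are assumptions, not Syne Tune's actual contract:

```python
import numpy as np

# Toy posterior over R^3: mean(x) = w @ x (normalized scale), std(x) = 1
w = np.array([0.5, -1.0, 2.0])


def backward_gradient_sketch(x, head_gradients, mean_data, std_data):
    """Chain rule: d(crit)/dx = d(crit)/d(mean) * d(mean)/dx + d(crit)/d(std) * d(std)/dx.

    Assumed convention: the un-normalized mean is mean * std_data + mean_data,
    so its gradient picks up a factor std_data. The toy std is constant,
    so its term vanishes here.
    """
    grad_mean_x = w * std_data
    grad_std_x = np.zeros_like(x)
    return head_gradients["mean"] * grad_mean_x + head_gradients["std"] * grad_std_x


g = backward_gradient_sketch(np.zeros(3), {"mean": 1.0, "std": 0.5}, 0.0, 2.0)
assert g.shape == (3,)
```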
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.MeanFunctionClampedResource(mean_extended, resource, resource_attr_range, **kwargs)[source]
Bases: MeanFunction, ExtendFeaturesByResourceMixin
- param_encoding_pairs()[source]
Returns list of tuples (param_internal, encoding) over all Gluon parameters maintained here.
- Returns: List of (param_internal, encoding) tuples
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.KernelFunctionClampedResource(kernel_extended, resource, resource_attr_range, **kwargs)[source]
Bases: KernelFunction, ExtendFeaturesByResourceMixin
- param_encoding_pairs()[source]
Returns list of tuples (param_internal, encoding) over all Gluon parameters maintained here.
- Returns: List of (param_internal, encoding) tuples
- set_params(param_dict)[source]
- Parameters:
  - param_dict (Dict[str, Any]) – Dictionary with new hyperparameter values
- diagonal(X)[source]
- Parameters:
  - X – Input data, shape (n, d)
- Returns: Diagonal of \(k(X, X)\), shape (n,)
- diagonal_depends_on_X()[source]
For stationary kernels, the diagonal does not depend on X.
- Returns: Does diagonal() depend on X?
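Both clamped classes follow the same delegation pattern: extend the inputs by the fixed resource, then call the extended function. A sketch of that pattern, reusing the hypothetical `extend_features` helper from above (illustrative only; the real classes are Gluon blocks, and parameter handling is omitted here):

```python
class ToyKernelClampedResource:
    """Evaluates an extended kernel at inputs with a clamped resource."""

    def __init__(self, kernel_extended, resource, resource_attr_range):
        self._kernel = kernel_extended
        self._resource = resource
        self._range = resource_attr_range

    def diagonal(self, X):
        # Delegate to the extended kernel on (n, d + 1) inputs
        X_ext = extend_features(X, self._resource, self._range)
        return self._kernel.diagonal(X_ext)

    def diagonal_depends_on_X(self):
        # Clamping the resource does not change whether the diagonal
        # depends on X, so delegate the decision as well
        return self._kernel.diagonal_depends_on_X()
```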
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.GaussProcPosteriorStateAndRungLevels(poster_state, rung_levels)[source]
Bases: PosteriorStateWithSampleJoint
- property poster_state: GaussProcPosteriorState
- property num_data
- property num_features
- property num_fantasies
- predict(test_features)[source]
Computes marginal statistics (means, variances) for a number of test features.
- Parameters:
  - test_features (ndarray) – Features for test configs
- Return type: Tuple[ndarray, ndarray]
- Returns: posterior_means, posterior_variances
- sample_marginals(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  - test_features (ndarray) – Input points for test configs
  - num_samples (int) – Number of samples
  - random_state (Optional[RandomState]) – PRNG state
- Return type: ndarray
- Returns: Marginal samples, shape (num_test, num_samples)
- sample_joint(test_features, num_samples=1, random_state=None)[source]
See comments of predict.
- Parameters:
  - test_features (ndarray) – Input points for test configs
  - num_samples (int) – Number of samples
  - random_state (Optional[RandomState]) – PRNG state
- Return type: ndarray
- Returns: Joint samples, shape (num_test, num_samples)
- backward_gradient(input, head_gradients, mean_data, std_data)[source]
Implements Predictor.backward_gradient, see comments there. This is for a single posterior state; if the predictor uses MCMC, this must be called for every sample.
- Parameters:
  - input (ndarray) – Single input point x, shape (d,)
  - head_gradients (Dict[str, ndarray]) – See Predictor.backward_gradient
  - mean_data (float) – Mean used to normalize targets
  - std_data (float) – Stddev used to normalize targets
- Return type: ndarray
- property rung_levels: List[int]
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.hypertune_ranking_losses(poster_state, data, num_samples, resource_attr_range, random_state=None)[source]
Samples ranking loss values as defined in the Hyper-Tune paper. We return a matrix of size (num_supp_levels, num_samples), where num_supp_levels <= len(poster_state.rung_levels) is the number of rung levels supported by at least 6 labeled datapoints. The loss values depend on the cases in data at the level poster_state.rung_levels[num_supp_levels - 1]. We must have num_supp_levels >= 2.
Loss values at this highest supported level are estimated by cross-validation: the data at this level is split into training and test parts, and the training part is used to obtain the posterior state. The number of CV folds is <= 5, chosen such that each fold has at least two points.
- Parameters:
  - poster_state (Union[IndependentGPPerResourcePosteriorState, GaussProcPosteriorStateAndRungLevels]) – Posterior state over rung levels
  - data (Dict[str, Any]) – Training data
  - num_samples (int) – Number of independent loss samples
  - resource_attr_range (Tuple[int, int]) – (r_min, r_max)
  - random_state (Optional[RandomState]) – PRNG state
- Return type: ndarray
- Returns: See above
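For intuition, the ranking loss from the Hyper-Tune paper counts, for one posterior sample, the fraction of configuration pairs whose predicted ordering disagrees with the observed ordering. A self-contained sketch of one such loss evaluation (the per-level and cross-validation machinery above are omitted):

```python
import numpy as np


def ranking_loss(f_sample, y):
    """Fraction of pairs (i, j) mis-ranked by the sample w.r.t. targets."""
    n = y.shape[0]
    miss = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if (f_sample[i] < f_sample[j]) != (y[i] < y[j]):
                miss += 1
    return miss / total


rng = np.random.RandomState(0)
y = rng.rand(8)                       # observed targets at the highest level
f_sample = y + 0.1 * rng.randn(8)     # one hypothetical posterior sample
print(ranking_loss(f_sample, y))      # small, since the sample is close to y
```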
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.hypertune.utils.number_supported_levels_and_data_highest_level(rung_levels, data, resource_attr_range)[source]
Finds num_supp_levels as the maximum value such that all rung levels up to there have >= 6 labeled datapoints. The set of labeled datapoints at the highest supported level is returned as well.
If num_supp_levels == 1, no level except the lowest has >= 6 datapoints; in this case, the returned data_max_resource is invalid.
- Parameters:
  - rung_levels (List[int]) – Rung levels
  - data (Dict[str, Any]) – Training data (only data at the highest level is used)
  - resource_attr_range (Tuple[int, int]) – (r_min, r_max)
- Return type: Tuple[int, dict]
- Returns: (num_supp_levels, data_max_resource)
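A hedged sketch of the level-counting logic, assuming labeled-datapoint counts per rung level are already available (the real function derives them from data via resource_attr_range, and also extracts the data at the highest supported level):

```python
from typing import Dict, List


def number_supported_levels(rung_levels: List[int], counts: Dict[int, int]) -> int:
    """Largest num_supp_levels such that every rung level up to there
    has at least 6 labeled datapoints. Illustrative sketch only;
    ``counts`` maps rung level -> number of labeled datapoints."""
    num_supp_levels = 0
    for level in rung_levels:
        if counts.get(level, 0) < 6:
            break
        num_supp_levels += 1
    return num_supp_levels


# Levels 1 and 3 have >= 6 labeled points, level 9 does not
assert number_supported_levels([1, 3, 9, 27], {1: 20, 3: 10, 9: 4}) == 2
```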