syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.learncurve.gpiss_model module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.learncurve.gpiss_model.GaussianProcessLearningCurveModel(kernel, res_model, mean=None, initial_noise_variance=None, optimization_config=None, random_seed=None, fit_reset_params=True)[source]
Bases: GaussianProcessOptimizeModel
Represents a joint Gaussian model of learning curves over a number of configurations. The model has an additive form:
f(x, r) = g(r | x) + h(x),
where h(x) is a Gaussian process model for function values at r_max, and the g(r | x) are independent Gaussian models. Right now, g(r | x) can be:
- Innovation state space model (ISSM) with a particular power-law decay form. For this one, g(r_max | x) = 0 for all x. Used if res_model is of type ISSModelParameters
- Gaussian process model with an exponential decay covariance function. This is essentially the model from the Freeze-Thaw paper, see also ExponentialDecayResourcesKernelFunction. Used if res_model is of type ExponentialDecayBaseKernelFunction
Importantly, inference scales cubically only in the number of configurations, not in the number of observations.
Details about ISSMs in general are found in:
Hyndman, R., Koehler, A., Ord, J., and Snyder, R.: Forecasting with Exponential Smoothing: The State Space Approach. Springer, 2008
- Parameters:
- kernel (KernelFunction) – Kernel function k(X, X')
- res_model (Union[ISSModelParameters, ExponentialDecayBaseKernelFunction]) – Model for g(r | x)
- mean (Optional[MeanFunction]) – Mean function mu(X)
- initial_noise_variance (Optional[float]) – A scalar to initialize the value of the residual noise variance
- optimization_config (Optional[OptimizationConfig]) – Configuration that specifies the behavior of the optimization of the marginal likelihood
- random_seed – Random seed to be used (optional)
- fit_reset_params (bool) – Reset parameters to initial values before running 'fit'? If False, 'fit' starts from the current values
- property likelihood: MarginalLikelihood