syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.cross_validation module
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.cross_validation.decode_resource_values(res_encoded, num_folds)[source]
  We assume the resource attribute r is encoded as randint(1, num_folds). Internally, r is taken as a value in the real interval [0.5, num_folds + 0.5], which is linearly transformed to [0, 1] for encoding.
  - Parameters:
    - res_encoded – Encoded values r
    - num_folds – Maximum number of folds
  - Returns:
    Original values r (not rounded to int)
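The decoding inverts an affine map. A minimal NumPy sketch (the function name and vectorized form are illustrative, not the library's actual implementation):

```python
import numpy as np

def decode_resource_values_sketch(res_encoded, num_folds):
    # Encoding maps r in [0.5, num_folds + 0.5] linearly onto [0, 1]:
    #   res_encoded = (r - 0.5) / num_folds
    # Decoding inverts this affine map:
    return res_encoded * num_folds + 0.5

# With num_folds = 10: encoded 0.0 -> 0.5, 0.45 -> 5.0, 1.0 -> 10.5
print(decode_resource_values_sketch(np.array([0.0, 0.45, 1.0]), num_folds=10))
```

Note that the decoded values are real numbers covering the full interval, which is why the returned r is not rounded to int.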
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.cross_validation.CrossValidationKernelFunction(kernel_main, kernel_residual, mean_main, num_folds, **kwargs)[source]
  Bases: KernelFunction

  Kernel function suitable for \(f(x, r)\) being the average of r validation metrics evaluated on different (train, validation) splits. More specifically, there are num_folds such splits, and \(f(x, r)\) is the average over the first r of them.

  We model the score on fold k as \(e_k(x) = f(x) + g_k(x)\), where \(f(x)\) and the \(g_k(x)\) are a priori independent Gaussian processes with kernels kernel_main and kernel_residual (all \(g_k\) share the same kernel). Moreover, the \(g_k\) are zero-mean, while \(f(x)\) may have a mean function. Then:

  \[\begin{align}\begin{aligned}
  f(x, r) = r^{-1} \sum_{k \le r} e_k(x),\\
  k((x, r), (x', r')) = k_{main}(x, x') + \frac{k_{residual}(x, x')}{\mathrm{max}(r, r')}
  \end{aligned}\end{align}\]

  Note that kernel_main, kernel_residual are over inputs \(x\) (dimension d), while the kernel represented here is over inputs \((x, r)\) of dimension d + 1, where the resource attribute \(r\) (number of folds) is last.

  Inputs are encoded. We assume a linear encoding for r with bounds 1 and num_folds. TODO: Right now, all HPs are encoded, and the resource attribute counts as an HP, even if it is not optimized over. This creates a dependence on how inputs are encoded.

  - forward(X1, X2, **kwargs)[source]
    Overrides to implement forward computation using NDArray. Only accepts positional arguments.
    - Parameters:
      *args – List of NDArray input tensors.
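The combined kernel formula can be sketched directly in NumPy. This is an illustrative stand-in, not the class's implementation: the rbf helper substitutes for the configured kernel_main / kernel_residual objects, and the resource values are assumed already decoded:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    # Stand-in squared-exponential base kernel; the real class delegates to
    # its kernel_main / kernel_residual objects instead
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-0.5 * sq / lengthscale**2)

def cv_kernel(X1, r1, X2, r2):
    # k((x, r), (x', r')) = k_main(x, x') + k_residual(x, x') / max(r, r')
    max_r = np.maximum(r1[:, None], r2[None, :])
    return rbf(X1, X2) + rbf(X1, X2) / max_r

# Same x at resources r = 1 and r = 4: the residual term shrinks as more
# folds are averaged, so the covariance concentrates on k_main
X = np.zeros((2, 1))
K = cv_kernel(X, np.array([1.0, 4.0]), X, np.array([1.0, 4.0]))
# K[0, 0] = 1 + 1/1 = 2.0; K[1, 1] = K[0, 1] = 1 + 1/4 = 1.25
```

The 1/max(r, r') factor captures that averaging over more folds reduces the variance contributed by the fold-specific residuals \(g_k\).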
  - diagonal(X)[source]
    - Parameters:
      X – Input data, shape (n, d)
    - Returns:
      Diagonal of \(k(X, X)\), shape (n,)
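For this kernel, the diagonal follows from the formula above with r = r': \(k((x, r), (x, r)) = k_{main}(x, x) + k_{residual}(x, x) / r\). A hedged sketch, assuming the (decoded) resource sits in the last input column and unit-variance stationary base kernels:

```python
import numpy as np

def cv_diagonal_sketch(X_with_r, k_main_diag=1.0, k_res_diag=1.0):
    # For stationary base kernels, k_main(x, x) and k_residual(x, x) are
    # constants, but the diagonal still varies with the resource r in the
    # last column: k((x, r), (x, r)) = k_main(x, x) + k_residual(x, x) / r
    r = X_with_r[:, -1]
    return k_main_diag + k_res_diag / r

X = np.array([[0.3, 1.0], [0.7, 2.0], [0.1, 4.0]])  # last column holds r
print(cv_diagonal_sketch(X))  # 2.0, 1.5, 1.25 for r = 1, 2, 4
```

Because the diagonal varies with the resource column, this kernel cannot treat it as a constant the way a purely stationary kernel would.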
  - diagonal_depends_on_X()[source]
    For stationary kernels, the diagonal does not depend on X.
    - Returns:
      Does diagonal() depend on X?
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.cross_validation.CrossValidationMeanFunction(kernel, **kwargs)[source]
  Bases: MeanFunction
  - forward(X)[source]
    Overrides to implement forward computation using NDArray. Only accepts positional arguments.
    - Parameters:
      *args – List of NDArray input tensors.
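Since the residuals \(g_k\) are zero-mean, the prior mean of \(f(x, r) = r^{-1} \sum_{k \le r} (f(x) + g_k(x))\) reduces to the mean of \(f(x)\), independent of r. A hypothetical sketch of this reduction (the column layout and the mean_main callable are assumptions for illustration, not the class's API):

```python
import numpy as np

def cv_mean_sketch(X_with_r, mean_main):
    # The residuals g_k are zero-mean, so averaging e_k(x) = f(x) + g_k(x)
    # over any number of folds leaves the prior mean of f(x) unchanged.
    # We assume the resource occupies the last input column and drop it.
    return mean_main(X_with_r[:, :-1])

# Hypothetical constant mean function for illustration
constant_mean = lambda X: np.full(X.shape[0], 0.5)
X = np.array([[0.2, 1.0], [0.2, 5.0]])  # same x, different r
print(cv_mean_sketch(X, constant_mean))  # same mean for both rows
```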