syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.fabolas module

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.fabolas.FabolasKernelFunction(dimension=1, encoding_type='logarithm', u1_init=1.0, u3_init=0.0, **kwargs)[source]

Bases: KernelFunction

The kernel function proposed in:

Klein, A., Falkner, S., Bartels, S., Hennig, P., & Hutter, F. (2017). Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets. In AISTATS 2017. arXiv:1605.07079 [cs, stat]. Retrieved from http://arxiv.org/abs/1605.07079

Note that this is only one component of the factorized kernel proposed in the paper: the finite-rank ("degenerate") kernel for modelling dataset subset fractions. It is defined as:

\(k(x, y) = (U \phi(x))^\top (U \phi(y))\), where \(x, y \in [0, 1]\), \(\phi(x) = [1, (1 - x)^2]^\top\), and \(U = \begin{bmatrix} u_1 & u_3 \\ 0 & u_2 \end{bmatrix}\) is upper triangular with \(u_1, u_2 > 0\).
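As a quick illustration, here is a standalone NumPy sketch of this formula. It is not the library implementation, and the hyperparameter values are placeholders (only u1_init=1.0 and u3_init=0.0 appear as constructor defaults above):

    import numpy as np

    def fabolas_kernel(x, y, u1=1.0, u2=1.0, u3=0.0):
        # Finite-rank kernel k(x, y) = (U phi(x))^T (U phi(y)) with
        # phi(z) = [1, (1 - z)^2]^T and U upper triangular, u1, u2 > 0.
        phi = lambda z: np.stack([np.ones_like(z), (1.0 - z) ** 2])  # shape (2, n)
        U = np.array([[u1, u3], [0.0, u2]])
        return (U @ phi(x)).T @ (U @ phi(y))  # shape (len(x), len(y))

    # Gram matrix on a grid of subset fractions: symmetric PSD of rank <= 2
    x = np.linspace(0.1, 1.0, 4)
    K = fabolas_kernel(x, x)
    assert np.allclose(K, K.T)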

forward(X1, X2)[source]

Overrides the base class method to implement the forward computation on NDArray inputs; only positional arguments are accepted.

Parameters:

X1 – Input tensor (NDArray)

X2 – Input tensor (NDArray)

diagonal(X)[source]
Parameters:

X – Input data, shape (n, d)

Returns:

Diagonal of \(k(X, X)\), shape (n,)
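For this kernel the diagonal reduces to \(k(x, x) = \lVert U \phi(x) \rVert^2\). A matching standalone sketch (again, not the library implementation):

    import numpy as np

    def fabolas_kernel_diagonal(x, u1=1.0, u2=1.0, u3=0.0):
        # k(x_i, x_i) = ||U phi(x_i)||^2, computed without forming the
        # full Gram matrix; hyperparameter values are placeholders.
        phi = np.stack([np.ones_like(x), (1.0 - x) ** 2])  # shape (2, n)
        U = np.array([[u1, u3], [0.0, u2]])
        return np.sum((U @ phi) ** 2, axis=0)  # shape (n,)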

diagonal_depends_on_X()[source]

For stationary kernels, the diagonal does not depend on X. Here it does: \(k(x, x) = \lVert U \phi(x) \rVert^2\) varies with x.

Returns:

Does diagonal() depend on X?

param_encoding_pairs()[source]
Returns the list of tuples (param_internal, encoding) over all Gluon parameters maintained here.

Returns:

List [(param_internal, encoding)]
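A hypothetical inspection sketch; only param_encoding_pairs() and the constructor are documented above, so what gets printed for each pair is an assumption:

    from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.fabolas import (
        FabolasKernelFunction,
    )

    kernel = FabolasKernelFunction(dimension=1, encoding_type="logarithm")
    # Each pair maps an internal Gluon parameter to the encoding
    # (e.g. the logarithmic encoding selected in the constructor)
    # used to transform it for optimization.
    for param_internal, encoding in kernel.param_encoding_pairs():
        print(param_internal, type(encoding).__name__)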

get_params()[source]
Return type:

Dict[str, Any]

Returns:

Dictionary with hyperparameter values

set_params(param_dict)[source]
Parameters:

param_dict (Dict[str, Any]) – Dictionary with new hyperparameter values

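A hedged round-trip sketch of these two accessors; whether the parameters require explicit initialization before being read is an assumption this page does not cover:

    from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.fabolas import (
        FabolasKernelFunction,
    )

    kernel = FabolasKernelFunction()
    params = kernel.get_params()  # Dict[str, Any] of current hyperparameter values
    kernel.set_params(params)     # writing the same values back is a no-op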