syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.exponential_decay module

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.exponential_decay.ExponentialDecayResourcesKernelFunction(kernel_x, mean_x, encoding_type='logarithm', alpha_init=1.0, mean_lam_init=0.5, gamma_init=0.5, delta_fixed_value=None, delta_init=0.5, max_metric_value=1.0, **kwargs)[source]

Bases: KernelFunction

Variant of the kernel function for modeling exponentially decaying learning curves, proposed in:

Swersky, K., Snoek, J., & Adams, R. P. (2014).
Freeze-Thaw Bayesian Optimization.

The argument in that paper in fact justifies using a non-zero mean function (see ExponentialDecayResourcesMeanFunction) and centralizing the kernel proposed there, which is done here. Details are given in:

Tiao, Klein, Archambeau, Seeger (2020)
Model-based Asynchronous Hyperparameter Optimization

We implement a new family of kernel functions, of which the additive Freeze-Thaw kernel is the special case delta == 0 (use delta_fixed_value = 0 to obtain it). The kernel has parameters alpha > 0, mean_lam > 0, gamma > 0, and 0 <= delta <= 1. Note that the Freeze-Thaw paper uses beta = alpha / mean_lam instead (its Gamma distribution over lambda is parameterized differently).

This class is configured with a kernel and a mean function over inputs x (dimension d) and represents a kernel (and mean function) over inputs (x, r) (dimension d + 1), where the resource attribute r >= 0 comes last.
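To make the resource part of this kernel family concrete, here is a minimal NumPy sketch of the additive Freeze-Thaw temporal covariance k(r, r') = beta^alpha / (r + r' + beta)^alpha with beta = alpha / mean_lam, i.e. the delta == 0 case mentioned above. The function name and vectorization are illustrative assumptions, not this module's API:

```python
import numpy as np


def freeze_thaw_resource_kernel(r1, r2, alpha=1.0, mean_lam=0.5):
    """Temporal covariance of the additive Freeze-Thaw kernel (delta == 0):

        k(r, r') = beta^alpha / (r + r' + beta)^alpha,  beta = alpha / mean_lam

    Illustrative sketch only; not the Syne Tune implementation.
    """
    beta = alpha / mean_lam
    r1 = np.asarray(r1, dtype=float).reshape(-1, 1)
    r2 = np.asarray(r2, dtype=float).reshape(1, -1)
    return beta**alpha / (r1 + r2 + beta) ** alpha


# Covariance decays as resources grow; k(0, 0) = 1 with the defaults
K = freeze_thaw_resource_kernel([0, 1, 2], [0, 1, 2])
```

The full kernel combines this resource part with kernel_x over the configuration x; the sketch isolates the r-dependence only.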

forward(X1, X2, **kwargs)[source]

Overrides to implement forward computation using NDArray. Only positional arguments are accepted.

Parameters:

*args – List of NDArray input tensors

diagonal(X)[source]
Parameters:

X – Input data, shape (n, d)

Returns:

Diagonal of k(X, X), shape (n,)

diagonal_depends_on_X()[source]

For stationary kernels, diagonal does not depend on X

Returns:

Does diagonal() depend on X?

param_encoding_pairs()[source]
Returns list of tuples

(param_internal, encoding)

over all Gluon parameters maintained here.

Returns:

List [(param_internal, encoding)]

mean_function(X)[source]
get_params()[source]

Parameter keys are “alpha”, “mean_lam”, “gamma”, “delta” (only if not fixed to delta_fixed_value), as well as those of self.kernel_x (prefix “kernelx_”) and of self.mean_x (prefix “meanx_”).
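To illustrate the key naming convention described above, here is a sketch of what a returned dictionary could look like. The values and the `kernelx_` / `meanx_` suffixes are hypothetical (they depend on the configured kernel_x and mean_x); only the prefixing scheme and the top-level keys are taken from the docstring:

```python
# Hypothetical get_params() result for illustration only.
param_dict = {
    "alpha": 1.0,
    "mean_lam": 0.5,
    "gamma": 0.5,
    "delta": 0.5,  # omitted when delta_fixed_value is given
    # Parameters of self.kernel_x carry the prefix "kernelx_"
    # (the suffix here is made up):
    "kernelx_inv_bandwidth": 1.0,
    # Parameters of self.mean_x carry the prefix "meanx_":
    "meanx_mean_value": 0.0,
}
```

A dictionary of this shape can be passed back to set_params(param_dict) to restore the hyperparameters.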

Return type:

Dict[str, Any]

set_params(param_dict)[source]
Parameters:

param_dict (Dict[str, Any]) – Dictionary with new hyperparameter values

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel.exponential_decay.ExponentialDecayResourcesMeanFunction(kernel, **kwargs)[source]

Bases: MeanFunction

forward(X)[source]

Overrides to implement forward computation using NDArray. Only positional arguments are accepted.

Parameters:

*args – List of NDArray input tensors

param_encoding_pairs()[source]
Returns list of tuples

(param_internal, encoding)

over all Gluon parameters maintained here.

Returns:

List [(param_internal, encoding)]

get_params()[source]
Return type:

Dict[str, Any]

Returns:

Dictionary with hyperparameter values

set_params(param_dict)[source]
Parameters:

param_dict (Dict[str, Any]) – Dictionary with new hyperparameter values
