syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.target_transform module

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.target_transform.ScalarTargetTransform(**kwargs)[source]

Bases: MeanFunction

Interface for invertible transforms of scalar target values.

forward() maps original target values \(y\) to latent target values \(z\); the latter are typically modelled as Gaussian. negative_log_jacobian() returns the term to be added to \(-\log P(z)\), where \(z\) is mapped from \(y\), in order to obtain \(-\log P(y)\).
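
For intuition, here is a minimal sketch of this contract for a simple log transform, written in plain NumPy; it is illustrative only (the class name is made up, and the real implementations are autograd-based MeanFunction subclasses):

    import numpy as np

    class LogTransformSketch:
        """Illustrative only: z = log(y) for y > 0, not a class from the library."""

        def forward(self, targets):
            # z = log(y)
            return np.log(targets)

        def inverse(self, latents):
            # y = exp(z)
            return np.exp(latents)

        def negative_log_jacobian(self, targets):
            # With z = log(y), dz/dy = 1/y, so -log|dz/dy| = log(y).
            # Summed over the data, this is the term added to -log P(z)
            # in order to obtain -log P(y).
            return np.sum(np.log(targets))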

forward(targets)[source]
Parameters:

targets – Target vector \(y\) in original form

Returns:

Transformed latent target vector \(z\)

negative_log_jacobian(targets)[source]
Parameters:

targets – Target vector \(y\) in original form

Returns:

Term to add to \(-\log P(z)\) to obtain \(-\log P(y)\)

inverse(latents)[source]
Parameters:

latents – Latent target vector \(z\)

Returns:

Corresponding target vector \(y\)

on_fit_start(targets)[source]

This is called just before the surrogate model optimization starts.

Parameters:

targets – Target vector \(y\) in original form

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.target_transform.IdentityTargetTransform(**kwargs)[source]

Bases: ScalarTargetTransform

forward(targets)[source]
Parameters:

targets – Target vector \(y\) in original form

Returns:

Transformed latent target vector \(z\)

negative_log_jacobian(targets)[source]
Parameters:

targets – Target vector \(y\) in original form

Returns:

Term to add to \(-\log P(z)\) to obtain \(-\log P(y)\)

inverse(latents)[source]
Parameters:

latents – Latent target vector \(z\)

Returns:

Corresponding target vector \(y\)

param_encoding_pairs()[source]
Returns list of tuples

(param_internal, encoding)

over all Gluon parameters maintained here.

Returns:

List [(param_internal, encoding)]

get_params()[source]
Return type:

Dict[str, Any]

Returns:

Dictionary with hyperparameter values

set_params(param_dict)[source]
Parameters:

param_dict (Dict[str, Any]) – Dictionary with new hyperparameter values

class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.target_transform.BoxCoxTargetTransform(initial_boxcox_lambda=None, **kwargs)[source]

Bases: ScalarTargetTransform

The Box-Cox transform for \(y > 0\) is parameterized in terms of \(\lambda\):

\[ z = T(y, \lambda) = \frac{y^{\lambda} - 1}{\lambda}, \quad \lambda \ne 0; \qquad T(y, \lambda = 0) = \log y \]

One difficulty is that expressions involve division by \(\lambda\). Our implementation distinguishes three cases: (1) \(\lambda \ge \varepsilon\), (2) \(\lambda \le -\varepsilon\), and (3) \(-\varepsilon < \lambda < \varepsilon\), where \(\varepsilon\) is BOXCOX_LAMBDA_EPS. In case (3), we use the approximation \(z \approx u + \lambda u^2/2\), where \(u = \log y\).

Note that we require \(1 + z\lambda > 0\), which restricts \(z\) if \(\lambda\ne 0\).

Note

Targets must be positive. They are thresholded at BOXCOX_TARGET_THRES, so negative targets do not raise an error.

The Box-Cox transform has been proposed in the context of Bayesian optimization by

Cowen-Rivers, A. et al.
HEBO: Pushing the Limits of Sample-efficient Hyper-parameter Optimisation
Journal of Artificial Intelligence Research 74 (2022), 1269-1349

However, they decouple the transformation of targets from fitting the remaining surrogate model parameters, which is possible only under a simplifying assumption (namely, that targets after transform are modelled i.i.d. by a single univariate Gaussian). Instead, we treat \(\lambda\) as just one more parameter to fit along with all the others.
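
As a concrete illustration of the case distinction above, the following plain-NumPy sketch evaluates the forward transform; the numeric values of the two constants are placeholders, not the library's, and the real implementation additionally separates positive and negative \(\lambda\) in its autograd computation:

    import numpy as np

    BOXCOX_LAMBDA_EPS = 1e-6      # placeholder value, not the library constant
    BOXCOX_TARGET_THRES = 1e-10   # placeholder value, not the library constant

    def boxcox_forward_sketch(targets, lam):
        """Illustrative sketch of z = T(y, lambda)."""
        # Targets are thresholded at a small positive value, so non-positive
        # targets do not raise an error
        y = np.maximum(targets, BOXCOX_TARGET_THRES)
        u = np.log(y)
        if abs(lam) >= BOXCOX_LAMBDA_EPS:
            # Cases (1) and (2): exact expression
            return (np.power(y, lam) - 1.0) / lam
        # Case (3), lambda close to 0: approximation z = u + lam * u**2 / 2
        return u + lam * np.square(u) / 2.0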

param_encoding_pairs()[source]
Returns list of tuples

(param_internal, encoding)

over all Gluon parameters maintained here.

Returns:

List [(param_internal, encoding)]

get_boxcox_lambda()[source]
set_boxcox_lambda(boxcox_lambda)[source]
get_params()[source]
Return type:

Dict[str, Any]

Returns:

Dictionary with hyperparameter values

set_params(param_dict)[source]
Parameters:

param_dict (Dict[str, Any]) – Dictionary with new hyperparameter values

negative_log_jacobian(targets)[source]
Parameters:

targets – Target vector \(y\) in original form

Returns:

Term to add to \(-\log P(z)\) to obtain \(-\log P(y)\)

forward(targets)[source]
Parameters:

targets – Target vector \(y\) in original form

Returns:

Transformed latent target vector \(z\)

inverse(latents)[source]

The inverse is \(\exp( \log(1 + z\lambda) / \lambda )\). For \(\lambda\approx 0\), we use \(\exp( z (1 - z\lambda/2) )\).

We also need \(1 + z\lambda > 0\), so we use the maximum of \(z\lambda\) and BOXCOX_ZLAMBDA_THRES.
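
A plain-NumPy sketch of this inverse, again with placeholder constant values (the library's BOXCOX_ZLAMBDA_THRES is some value greater than \(-1\)):

    import numpy as np

    BOXCOX_LAMBDA_EPS = 1e-6       # placeholder value, not the library constant
    BOXCOX_ZLAMBDA_THRES = -0.99   # placeholder; must be > -1 so that 1 + z * lambda > 0

    def boxcox_inverse_sketch(latents, lam):
        """Illustrative sketch of y = T^{-1}(z, lambda)."""
        if abs(lam) < BOXCOX_LAMBDA_EPS:
            # lambda close to 0: y = exp(z * (1 - z * lambda / 2))
            return np.exp(latents * (1.0 - latents * lam / 2.0))
        # Clamp z * lambda from below so that 1 + z * lambda stays positive
        zlam = np.maximum(latents * lam, BOXCOX_ZLAMBDA_THRES)
        return np.exp(np.log1p(zlam) / lam)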

on_fit_start(targets)[source]

We only optimize boxcox_lambda once there are at least BOXCOX_LAMBDA_OPT_MIN_NUMDATA data points. Otherwise, it remains fixed to its initial value.