syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.ConstantPositiveVector(param_name, encoding, size_cols, **kwargs)[source]
Bases:
Block
Represents a constant vector, with positive entry value represented as a Gluon parameter, to be used in the context of wrapper classes in gluon_blocks.py. Shape, dtype, and context are determined from the features argument. If features.shape = (n, d):
shape = (d, 1) if size_cols = True (number of columns of features)
shape = (n, 1) if size_cols = False (number of rows of features)
dtype = features.dtype, ctx = features.ctx
Encoding and internal Gluon parameter: The positive scalar parameter is encoded via encoding (see ScalarEncodingBase). The internal Gluon parameter (before encoding) has the name param_name + "_internal".
- forward(features, param_internal)[source]
Returns the constant positive vector. If features.shape = (n, d), the shape of the vector returned is (d, 1) if size_cols = True, (n, 1) otherwise.
- Parameters:
features – Matrix for shape, dtype, ctx
param_internal – Unwrapped parameter
- Returns:
Constant positive vector
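The shape rule above can be illustrated with a minimal NumPy sketch. This is not the library implementation; `constant_vector` is a hypothetical stand-in for what `forward` computes, kept here only to make the `size_cols` branching concrete.

```python
import numpy as np

# Hypothetical sketch (not the library code): the shape rule of
# ConstantPositiveVector.forward, in plain NumPy.
def constant_vector(features, value, size_cols):
    n, d = features.shape
    rows = d if size_cols else n
    # dtype is taken from features, as the docs state
    return np.full((rows, 1), value, dtype=features.dtype)

features = np.zeros((5, 3))
constant_vector(features, 2.0, size_cols=True).shape   # (3, 1)
constant_vector(features, 2.0, size_cols=False).shape  # (5, 1)
```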
- switch_updating(flag)[source]
Is the underlying parameter updated during learning?
By default, the parameter takes part in learning (its grad_req attribute is ‘write’). For flag == False, the attribute is flipped to ‘null’, and the parameter remains constant during learning.
- Parameters:
flag – Update parameter during learning?
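The flag semantics can be sketched as follows. `ToyParam` is a hypothetical stand-in, not a Gluon class; the point is only the flipping of the grad_req attribute between ‘write’ and ‘null’.

```python
# Minimal sketch of the switch_updating logic, using a hypothetical
# stand-in class in place of a real Gluon parameter.
class ToyParam:
    def __init__(self):
        self.grad_req = "write"  # default: parameter is updated during learning

def switch_updating(param, flag):
    # flag == False freezes the parameter: gradients are no longer written
    param.grad_req = "write" if flag else "null"

p = ToyParam()
switch_updating(p, False)
p.grad_req  # 'null': constant during learning
```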
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.PositiveScalarEncoding(lower, constr_upper=None, init_val=None, regularizer=None, dimension=1)[source]
Bases:
ScalarEncodingBase
Provides encoding for a positive scalar or vector: param > lower. Here, param is represented as a gluon.Parameter of shape (dimension,), where dimension is 1 by default. The encoding is given as:
param = softrelu(param_internal) + lower, where softrelu(x) = log(1 + exp(x))
If constr_upper is used, the constraint param_internal < dec(constr_upper) can be enforced by an optimizer. Since dec is increasing, this translates to param < constr_upper. Note: While lower is enforced by the encoding, the upper bound is not; it has to be enforced by an optimizer.
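The encode/decode pair can be written down directly from the formula above. This is a hedged sketch under the stated definitions, not the library's implementation; `encode` and `decode` are illustrative names.

```python
import numpy as np

# Sketch of the PositiveScalarEncoding map:
# param = softrelu(param_internal) + lower, softrelu(x) = log(1 + exp(x))
def softrelu(x):
    return np.logaddexp(0.0, x)  # numerically stable log(1 + exp(x))

def encode(param_internal, lower):
    return softrelu(param_internal) + lower  # always > lower, by construction

def decode(param, lower):
    # inverse of softrelu: log(exp(y) - 1) for y = param - lower
    return np.log(np.expm1(param - lower))

lower = 1e-4
x = np.array([-3.0, 0.0, 2.5])
p = encode(x, lower)  # every entry strictly greater than lower
```

Note that the softrelu keeps param > lower for any real param_internal, which is why the lower bound never needs to be enforced by the optimizer.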
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.IdentityScalarEncoding(constr_lower=None, constr_upper=None, init_val=None, regularizer=None, dimension=1)[source]
Bases:
ScalarEncodingBase
Identity encoding for scalar and vector: param = param_internal. This does not ensure that param is positive! Use this only if positivity is otherwise guaranteed.
- class syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.LogarithmScalarEncoding(constr_lower=None, constr_upper=None, init_val=None, regularizer=None, dimension=1)[source]
Bases:
ScalarEncodingBase
Logarithmic encoding for scalar and vector: param = exp(param_internal), param_internal = log(param).
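As a small sketch (illustrative names, not the library API), the logarithmic encoding and its inverse are:

```python
import numpy as np

# Logarithmic encoding: param = exp(param_internal), param_internal = log(param).
# exp maps any real param_internal to a strictly positive param.
def log_encode(param_internal):
    return np.exp(param_internal)

def log_decode(param):
    return np.log(param)

x = np.array([-1.0, 0.0, 3.0])
p = log_encode(x)  # strictly positive for any real param_internal
```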
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.unwrap_parameter(param_internal, some_arg=None)[source]
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.encode_unwrap_parameter(param_internal, encoding, some_arg=None)[source]
- syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers.param_to_pretty_string(gluon_param, encoding)[source]
Take a gluon parameter and transform it to a string amenable to plotting. If need be, the gluon parameter is appropriately encoded (e.g., log-exp transform).
- Parameters:
gluon_param (Parameter) – gluon parameter
encoding (ScalarEncodingBase) – object in charge of encoding/decoding the gluon_param
- Return type:
str