syne_tune.optimizer.schedulers.searchers.bayesopt.models.cost_fifo_model module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.models.cost_fifo_model.CostFixedResourcePredictor(state, model, fixed_resource, num_samples=1)[source]
Bases: BasePredictor
Wraps cost model \(c(x, r)\) of CostModel to be used as surrogate model, where predictions are done at r = fixed_resource.

Note: For random cost models, we approximate expectations in predict by resampling num_samples times (should be 1 for deterministic cost models).

Note: Since this is a generic wrapper, we assume for backward_gradient that the gradient contribution through the cost model vanishes. For special cost models, the mapping from encoded input to predictive means may be differentiable, and prediction code in autograd may be available. For such cost models, this wrapper should not be used, and backward_gradient should be implemented properly.

- Parameters:
  - state (TuningJobState) – TuningJobSubState
  - model (CostModel) – Model parameters must have been fit
  - fixed_resource (int) – \(c(x, r)\) is predicted for this resource level r
  - num_samples (int) – Number of samples drawn in predict(). Use this for random cost models only
- static keys_predict()[source]
Keys of signals returned by predict().

Note: In order to work with AcquisitionFunction implementations, the following signals are required:

- “mean”: Predictive mean
- “std”: Predictive standard deviation

- Return type:
  Set[str]
- Returns:
  Set of keys for dict returned by predict()
- predict(inputs)[source]
Returns signals which are statistics of the predictive distribution at input points inputs. By default:

- “mean”: Predictive means. If the model supports fantasizing with a number nf of fantasies, this has shape (n, nf), otherwise (n,)
- “std”: Predictive stddevs, shape (n,)

If the hyperparameters of the surrogate model are being optimized (e.g., by empirical Bayes), the returned list has length 1. If its hyperparameters are averaged over by MCMC, the returned list has one entry per MCMC sample.
- Parameters:
  inputs (ndarray) – Input points, shape (n, d)
- Return type:
  List[Dict[str, ndarray]]
- Returns:
  List of dict with keys keys_predict(), of length the number of MCMC samples, or length 1 for empirical Bayes
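The return contract of predict() can be illustrated with a minimal sketch (placeholder values, not the Syne Tune implementation): a list with one dict per MCMC sample (length 1 for empirical Bayes), where "mean" has shape (n,) or (n, nf) with fantasizing, and "std" has shape (n,).

```python
import numpy as np

def predict_sketch(inputs, num_mcmc_samples=1, nf=None):
    """Illustrates the return structure of predict(): one dict per MCMC
    sample, each with "mean" of shape (n,) or (n, nf) and "std" of shape (n,)."""
    n = inputs.shape[0]
    results = []
    for _ in range(num_mcmc_samples):
        mean_shape = (n,) if nf is None else (n, nf)
        results.append({
            "mean": np.zeros(mean_shape),  # placeholder predictive means
            "std": np.ones(n),             # placeholder predictive stddevs
        })
    return results

# Three MCMC samples, five input points, four fantasies
out = predict_sketch(np.zeros((5, 2)), num_mcmc_samples=3, nf=4)
```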
- backward_gradient(input, head_gradients)[source]
The gradient contribution through the cost model is blocked.
- Return type:
  List[ndarray]
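Since the wrapper assumes the gradient contribution through the cost model vanishes, blocking the gradient amounts to returning zeros. A minimal sketch of this behavior (not the Syne Tune implementation; the shape of head_gradients entries is assumed for illustration):

```python
import numpy as np

def backward_gradient_blocked(input, head_gradients):
    """Gradient through the cost model is blocked: return one zero gradient
    of the encoded input's shape per entry in head_gradients (one entry per
    MCMC sample, matching the list returned by predict())."""
    return [np.zeros_like(input, dtype=float) for _ in head_gradients]

grads = backward_gradient_blocked(
    np.ones(3), [{"mean": np.array([1.0])}, {"mean": np.array([0.5])}]
)
```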
- predict_mean_current_candidates()[source]
Returns the predictive mean (signal with key ‘mean’) at all current candidates in the state (observed, pending).
If the hyperparameters of the surrogate model are being optimized (e.g., by empirical Bayes), the returned list has length 1. If its hyperparameters are averaged over by MCMC, the returned list has one entry per MCMC sample.
- Return type:
  List[ndarray]
- Returns:
  List of predictive means
- current_best()[source]
Returns the so-called incumbent, to be used in acquisition functions such as expected improvement. This is the minimum of predictive means (signal with key “mean”) at all current candidate locations (both state.trials_evaluations and state.pending_evaluations). Normally, a scalar is returned, but if the model supports fantasizing and the state contains pending evaluations, there is one incumbent per fantasy sample, so a vector is returned.
If the hyperparameters of the surrogate model are being optimized (e.g., by empirical Bayes), the returned list has length 1. If its hyperparameters are averaged over by MCMC, the returned list has one entry per MCMC sample.
- Return type:
  List[ndarray]
- Returns:
  Incumbent, see above
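The incumbent computation described above can be sketched in numpy (an illustration under the predict() return contract, not the Syne Tune implementation): the minimum of predictive means over all candidate locations, taken per fantasy column when fantasizing is used.

```python
import numpy as np

def current_best_sketch(predictions):
    """Incumbent per MCMC sample: minimum of predictive means over candidate
    locations. Without fantasies, "mean" has shape (n,) and a scalar results;
    with fantasies, shape (n, nf) yields one incumbent per fantasy column."""
    return [np.min(pred["mean"], axis=0) for pred in predictions]

# Without fantasies: scalar incumbent per MCMC sample
inc = current_best_sketch([{"mean": np.array([3.0, 1.5, 2.0])}])
# With fantasies (nf=2): vector of incumbents, one per fantasy
inc_f = current_best_sketch([{"mean": np.array([[3.0, 2.0], [1.5, 2.5]])}])
```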
- class syne_tune.optimizer.schedulers.searchers.bayesopt.models.cost_fifo_model.CostEstimator(model, fixed_resource, num_samples=1)[source]
Bases: Estimator
The name of the cost metric is model.cost_metric_name.

- Parameters:
  - model (CostModel) – CostModel to be wrapped
  - fixed_resource (int) – \(c(x, r)\) is predicted for this resource level r
  - num_samples (int) – Number of samples drawn in predict(). Use this for random cost models only
- property fixed_resource: int