syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model module
- class syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model.GaussProcPredictor(state, gpmodel, fantasy_samples, active_metric=None, normalize_mean=0.0, normalize_std=1.0, filter_observed_data=None, hp_ranges_for_prediction=None)[source]
Bases: BasePredictor
Gaussian process surrogate model, where model parameters are either fit by marginal likelihood maximization (e.g., GaussianProcessRegression), or integrated out by MCMC sampling (e.g., GPRegressionMCMC).

Both state and gpmodel are immutable. If parameters of the latter are to be fit, this has to be done beforehand. fantasy_samples contains the sampled (normalized) target values for pending configs. Only active_metric target values are considered. The target values for a pending config are a flat vector. If MCMC is used, its length is a multiple of the number of MCMC samples, containing the fantasy values for MCMC sample 0, sample 1, …

- Parameters:
  - state (TuningJobState) – TuningJobSubState
  - gpmodel (Union[GaussianProcessRegression, GPRegressionMCMC, IndependentGPPerResourceModel, HyperTuneIndependentGPModel, HyperTuneJointGPModel]) – Model parameters must have been fit and/or posterior states been computed
  - fantasy_samples (List[FantasizedPendingEvaluation]) – See above
  - active_metric (Optional[str]) – Name of the metric to optimize
  - normalize_mean (float) – Mean used to normalize targets
  - normalize_std (float) – Stddev used to normalize targets (normalization sketched below)
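As a minimal illustration of what normalize_mean and normalize_std imply (plain NumPy, not the Syne Tune internals; the concrete values here are made up): the model works on normalized targets internally, and predictions are mapped back to the original scale.

```python
import numpy as np

# Hypothetical values; in practice these are derived from observed targets
normalize_mean, normalize_std = 0.5, 2.0

targets = np.array([1.0, 3.0, 5.0])
# Targets are normalized for the internal model ...
normalized = (targets - normalize_mean) / normalize_std
# ... and predictive means are mapped back to the original scale;
# predictive stddevs scale by normalize_std
recovered = normalized * normalize_std + normalize_mean
assert np.allclose(recovered, targets)
```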
- hp_ranges_for_prediction()[source]
  - Return type: HyperparameterRanges
  - Returns: Feature generator to be used for inputs in predict()
- predict(inputs)[source]
Returns signals which are statistics of the predictive distribution at input points inputs. By default:

  - "mean": Predictive means. If the model supports fantasizing with a number nf of fantasies, this has shape (n, nf), otherwise (n,)
  - "std": Predictive stddevs, shape (n,)

If the hyperparameters of the surrogate model are being optimized (e.g., by empirical Bayes), the returned list has length 1. If its hyperparameters are averaged over by MCMC, the returned list has one entry per MCMC sample.
- Parameters:
  inputs (ndarray) – Input points, shape (n, d)
- Return type:
  List[Dict[str, ndarray]]
- Returns:
  List of dicts with keys keys_predict(), of length the number of MCMC samples, or length 1 for empirical Bayes
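As a hedged sketch, here is one way a caller might collapse the returned list into a single predictive mean and stddev. The key names "mean" and "std" follow the description above; the aggregation scheme (mean over fantasies, law of total variance over MCMC samples) is an assumption for illustration, not Syne Tune's own aggregation.

```python
import numpy as np

def combine_predictions(predictions):
    # predictions: List[Dict[str, np.ndarray]] as returned by predict();
    # one entry per MCMC sample, or a single entry for empirical Bayes
    means = np.stack([p["mean"] for p in predictions])  # (s, n) or (s, n, nf)
    stds = np.stack([p["std"] for p in predictions])    # (s, n)
    if means.ndim == 3:
        means = means.mean(axis=2)  # average over fantasy samples
    mean = means.mean(axis=0)       # average over MCMC samples
    # Law of total variance across the MCMC samples
    var = (stds ** 2).mean(axis=0) + means.var(axis=0)
    return mean, np.sqrt(var)

# predictions = predictor.predict(inputs)  # inputs: shape (n, d)
# mean, std = combine_predictions(predictions)
```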
- backward_gradient(input, head_gradients)[source]
Computes the gradient \(\nabla_x f(x)\) for an acquisition function \(f(x)\), where \(x\) is a single input point. This uses reverse mode differentiation; the head gradients are passed by the acquisition function. The head gradients are \(\partial_k f\), where \(k\) runs over the statistics returned by predict() for the single input point \(x\). The shape of head gradients is the same as the shape of the statistics. Lists have > 1 entry if MCMC is used, otherwise they all have size 1.

- Parameters:
  input (ndarray) – Single input point \(x\), shape (d,)
  head_gradients (List[Dict[str, ndarray]]) – See above
- Return type:
  List[ndarray]
- Returns:
  Gradient \(\nabla_x f(x)\) (several if MCMC is used)
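To make the head-gradient convention concrete, here is a hypothetical sketch for a lower-confidence-bound acquisition \(f(x) = \mathrm{mean}(x) - \kappa \cdot \mathrm{std}(x)\). The head gradients \(\partial_k f\) are constants here, with the same keys and shapes as the statistics returned by predict(); the acquisition function itself is an assumption for illustration.

```python
import numpy as np

kappa = 2.0  # assumed exploration constant

def lcb_head_gradients(predictions):
    # One dict per MCMC sample (or a single dict for empirical Bayes),
    # mirroring the list structure returned by predict()
    return [
        {
            "mean": np.ones_like(p["mean"]),         # df/dmean = 1
            "std": -kappa * np.ones_like(p["std"]),  # df/dstd = -kappa
        }
        for p in predictions
    ]

# x: single input point, shape (d,)
# predictions = predictor.predict(x.reshape(1, -1))
# gradients = predictor.backward_gradient(x, lcb_head_gradients(predictions))
```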
- property posterior_states: List[PosteriorState] | None
- class syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model.GaussProcEstimator(gpmodel, active_metric, normalize_targets=True, debug_log=None, filter_observed_data=None, no_fantasizing=False, hp_ranges_for_prediction=None)[source]
Bases: Estimator
We support pending evaluations via fantasizing. Note that state does not contain the fantasy values, but just the pending configs. Fantasy values are sampled here.
- Parameters:
  - gpmodel (Union[GaussianProcessRegression, GPRegressionMCMC, IndependentGPPerResourceModel, HyperTuneIndependentGPModel, HyperTuneJointGPModel]) – Internal model
  - active_metric (str) – Name of the metric to optimize
  - normalize_targets (bool) – Normalize observed target values?
  - debug_log (Optional[DebugLogPrinter]) – DebugLogPrinter (optional)
  - filter_observed_data (Optional[Callable[[Dict[str, Union[int, float, str]]], bool]]) – Filter for observed data before computing incumbent
  - no_fantasizing (bool) – If True, pending evaluations in the state are simply ignored and fantasizing is not done (not recommended)
  - hp_ranges_for_prediction (Optional[HyperparameterRanges]) – If given, GaussProcPredictor should use this instead of state.hp_ranges
- property debug_log: DebugLogPrinter | None
- property gpmodel: GaussianProcessRegression | GPRegressionMCMC | IndependentGPPerResourceModel | HyperTuneIndependentGPModel | HyperTuneJointGPModel
- fit_from_state(state, update_params)[source]
Parameters of self._gpmodel are optimized iff update_params. This requires state to contain labeled examples.

If self.state.pending_evaluations is not empty, we proceed as follows (sketched below):

  - Compute posterior for state without pending evals
  - Draw fantasy values for pending evals
  - Recompute posterior (without fitting)

- Return type: Predictor
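A minimal sketch of these three steps, under stated assumptions: sample_joint and condition_on are illustrative names for generic posterior operations, not the Syne Tune internals.

```python
def fit_with_fantasizing(posterior, pending_features, num_fantasy_samples, rng):
    # Step 1: `posterior` is assumed to be computed from labeled data only
    # Step 2: draw fantasy target values for the pending configs
    fantasies = posterior.sample_joint(
        pending_features, num_samples=num_fantasy_samples, random_state=rng
    )  # shape (num_pending, num_fantasy_samples)
    # Step 3: recompute the posterior, treating the fantasies as observed
    # targets, without re-fitting the model hyperparameters
    return posterior.condition_on(pending_features, fantasies)
```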
- configure_scheduler(scheduler)[source]
Called by configure_scheduler() of searchers which make use of an Estimator. Allows the estimator to depend on parameters of the scheduler.

- Parameters:
  scheduler – Scheduler object
- class syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model.GaussProcEmpiricalBayesEstimator(gpmodel, num_fantasy_samples, active_metric='target', normalize_targets=True, debug_log=None, filter_observed_data=None, no_fantasizing=False, hp_ranges_for_prediction=None)[source]
Bases: GaussProcEstimator
We support pending evaluations via fantasizing. Note that state does not contain the fantasy values, but just the pending configs. Fantasy values are sampled here.
- Parameters:
  - gpmodel (Union[GaussianProcessRegression, GPRegressionMCMC, IndependentGPPerResourceModel, HyperTuneIndependentGPModel, HyperTuneJointGPModel]) – GaussianProcessRegression model
  - num_fantasy_samples (int) – See above
  - active_metric (str) – Name of the metric to optimize
  - normalize_targets (bool) – Normalize target values in state.candidate_evaluations?
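A hypothetical usage sketch: the constructor arguments follow the signature above, while the gpmodel itself (a GaussianProcessRegression instance) and the tuning job state are assumed to be built elsewhere.

```python
from syne_tune.optimizer.schedulers.searchers.bayesopt.models.gp_model import (
    GaussProcEmpiricalBayesEstimator,
)

def build_estimator(gpmodel):
    # gpmodel: a GaussianProcessRegression instance, assumed to be
    # constructed elsewhere; its kernel/mean setup is out of scope here
    return GaussProcEmpiricalBayesEstimator(
        gpmodel=gpmodel,
        num_fantasy_samples=10,
        active_metric="target",
        normalize_targets=True,
    )

# estimator = build_estimator(gpmodel)
# Fitting yields a predictor for the current tuning job state:
# predictor = estimator.fit_from_state(state, update_params=True)
```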