deephyper.hpo.CBO#
- class deephyper.hpo.CBO(problem, evaluator, random_state: int = None, log_dir: str = '.', verbose: int = 0, stopper=None, surrogate_model='ET', surrogate_model_kwargs: dict = None, acq_func: str = 'UCBd', acq_optimizer: str = 'auto', acq_optimizer_freq: int = 10, kappa: float = 1.96, xi: float = 0.001, n_points: int = 10000, filter_duplicated: bool = True, update_prior: bool = False, update_prior_quantile: float = 0.1, multi_point_strategy: str = 'cl_max', n_jobs: int = 1, n_initial_points: int = 10, initial_point_generator: str = 'random', initial_points=None, filter_failures: str = 'min', max_failures: int = 100, moo_lower_bounds=None, moo_scalarization_strategy: str = 'Chebyshev', moo_scalarization_weight=None, scheduler=None, objective_scaler='auto', **kwargs)[source]#
Bases: Search
Centralized Bayesian Optimization Search.
It follows a manager-workers architecture where the manager runs the Bayesian optimization loop and workers execute parallel evaluations of the black-box function.
Single-Objective: ✅
Multi-Objectives: ✅
Failures: ✅
Example Usage:
>>> search = CBO(problem, evaluator)
>>> results = search.search(max_evals=100, timeout=120)
- Parameters:
problem (HpProblem) – Hyperparameter problem describing the search space to explore.
evaluator (Evaluator) – An Evaluator instance responsible for distributing the tasks.
random_state (int, optional) – Random seed. Defaults to None.
log_dir (str, optional) – Log directory where the search's results are saved. Defaults to ".".
verbose (int, optional) – Verbosity level of the search. Defaults to 0.
stopper (Stopper, optional) – A stopper to leverage multi-fidelity when evaluating the function. Defaults to None, which does not use any stopper.
surrogate_model (Union[str, sklearn.base.RegressorMixin], optional) – Surrogate model used by the Bayesian optimization. Can be a value in ["RF", "GP", "ET", "MF", "GBRT", "DUMMY"] or a sklearn regressor. "ET" is for Extremely Randomized Trees, the best compromise between speed and quality when performing many parallel evaluations (i.e., reaching more than hundreds of evaluations). "GP" is for Gaussian Process, the best choice when maximizing the quality of each iteration, but it quickly slows down when reaching hundreds of evaluations and does not support conditional search spaces. "RF" is for Random Forest, slower than Extremely Randomized Trees but with a better mean estimate and worse epistemic uncertainty quantification. "GBRT" is for Gradient-Boosted Regression Trees, which has a better mean estimate than the other tree-based methods but worse uncertainty quantification, and is slower than "RF". Defaults to "ET".
surrogate_model_kwargs (dict, optional) – Additional parameters to pass to the surrogate model. Defaults to None.
acq_func (str, optional) – Acquisition function used by the Bayesian optimization. Can be a value in ["UCB", "EI", "PI", "gp_hedge"]. Defaults to "UCB".
acq_optimizer (str, optional) – Method used to minimize the acquisition function. Can be a value in ["sampling", "lbfgs", "ga", "mixedga"]. Defaults to "auto".
acq_optimizer_freq (int, optional) – Frequency of optimization calls for the acquisition function. Defaults to 10, using the optimizer every 10 surrogate model updates.
kappa (float, optional) – Manages the exploration/exploitation tradeoff for the "UCB" acquisition function. Defaults to 1.96, which corresponds to 95% of the confidence interval.
xi (float, optional) – Manages the exploration/exploitation tradeoff of the "EI" and "PI" acquisition functions. Defaults to 0.001.
n_points (int, optional) – The number of configurations sampled from the search space to infer each batch of new evaluated configurations. Defaults to 10000.
filter_duplicated (bool, optional) – Force the optimizer to sample unique points until the search space is "exhausted", in the sense that no new unique point can be found given the sampling size n_points. Defaults to True.
update_prior (bool, optional) – Update the prior of the surrogate model with the newly evaluated points. Defaults to False. Should be set to True when all objectives and parameters are continuous.
update_prior_quantile (float, optional) – The quantile used to update the prior. Defaults to 0.1.
multi_point_strategy (str, optional) – Definition of the constant value used for the Liar strategy. Can be a value in ["cl_min", "cl_mean", "cl_max", "qUCB", "qUCBd"]. All "cl_..." strategies follow the constant-liar scheme: if N new points are requested, the surrogate model is re-fitted N-1 times with lies (respectively the minimum, mean, and maximum objective found so far; for multiple objectives, the minimum, mean, and maximum of the individual objectives) to infer the acquisition function. Constant-liar strategies scale poorly because of this repeated re-fitting. The "qUCB" strategy is much more efficient: it samples a new kappa value for each requested point without re-fitting the model. Defaults to "cl_max".
n_jobs (int, optional) – Number of parallel processes used to fit the surrogate model of the Bayesian optimization. A value of -1 uses all available cores. Not used if surrogate_model is passed as your own sklearn regressor. Defaults to 1.
n_initial_points (int, optional) – Number of collected objectives required before fitting the surrogate model. Defaults to 10.
initial_point_generator (str, optional) – Sets the initial points generator. Can be one of ["random", "sobol", "halton", "hammersly", "lhs", "grid"]. Defaults to "random".
initial_points (List[Dict], optional) – A list of initial points to evaluate, where each point is a dictionary mapping hyperparameter names to their chosen values. Defaults to None, in which case initial points are generated randomly from the search space.
filter_failures (str, optional) – Replace the objective of failed configurations by the "min" or "mean" of past objectives. If "ignore" is passed, failed configurations are filtered out and not passed to the surrogate model. For multiple objectives, the failure of any single objective leads to treating the whole configuration as failed, and each of the multiple objectives is replaced by its individual "min" or "mean" over past configurations. Defaults to "min", which replaces the objectives of failed configurations by the running min of all objectives.
max_failures (int, optional) – Maximum number of failed configurations allowed before observing a valid objective value when filter_failures is not equal to "ignore". Defaults to 100.
moo_lower_bounds (list, optional) – List of lower bounds on the interesting range of objective values. Must be the same length as the number of objectives. Defaults to None, i.e., no bounds. A single objective can be bounded by providing None for all other values. For example, moo_lower_bounds=[None, 0.5, None] will explore all tradeoffs for the objectives at indices 0 and 2, but only consider scores for objective 1 that exceed 0.5.
moo_scalarization_strategy (str, optional) – Scalarization strategy used in multiobjective optimization. Can be a value in ["Linear", "Chebyshev", "AugChebyshev", "PBI", "Quadratic"]. Defaults to "Chebyshev". Typically, randomized methods should be used to capture the entire Pareto front, unless a target solution is known a priori. Additional details on each scalarization can be found in deephyper.skopt.moo.
moo_scalarization_weight (list, optional) – Scalarization weights used in multiobjective optimization, with length equal to the number of objective functions. Defaults to None for randomized weights. Only set this if you want to fix the scalarization weights for a multiobjective HPS.
scheduler (dict, callable, optional) – A function to manage the values of kappa and xi over iterations. Defaults to None, which does not use any scheduler. The periodic exponential decay scheduler can be used with scheduler={"type": "periodic-exp-decay", "period": 30, "rate": 0.1}. The scheduler can also be a callable with signature scheduler(i, eta_0, **kwargs), where i is the current iteration, eta_0 is the initial value of [kappa, xi], and kwargs are other fixed parameters of the function. Instead of fixing the decay "rate", the final kappa or xi can be given, as in {"type": "periodic-exp-decay", "period": 25, "kappa_final": 1.96}.
objective_scaler (str, optional) – A way to map the objective space to another support, for example to normalize it. Defaults to "auto", which automatically sets it to "identity" for any surrogate model except "RF", which uses "quantile-uniform".
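For instance, several of the options above can be combined as follows (a minimal sketch; problem and evaluator are assumed to be defined as in the example at the top, and the values shown are illustrative rather than recommendations):
>>> search = CBO(
...     problem,
...     evaluator,
...     surrogate_model="ET",
...     multi_point_strategy="qUCB",
...     scheduler={"type": "periodic-exp-decay", "period": 30, "rate": 0.1},
...     n_initial_points=20,
... )
>>> results = search.search(max_evals=200)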
Methods

ask – Ask the search for new configurations to evaluate.
check_evaluator
dump_context – Dumps the context in the log folder.
dump_jobs_done_to_csv – Dump jobs completed to CSV in log_dir.
extend_results_with_pareto_efficient_indicator – Extend the results DataFrame with Pareto-Front.
fit_generative_model – Fits a generative model for sampling during BO.
fit_search_space – Apply prior-guided transfer learning based on a DataFrame of results.
fit_surrogate – Fit the surrogate model of the search from a checkpointed DataFrame.
search – Execute the search algorithm.
tell – Tell the search the results of the evaluations.
to_json – Returns a json version of the search object.

Attributes

search_id – The identifier of the search used by the evaluator.
- ask(n: int = 1) → List[Dict]#
Ask the search for new configurations to evaluate.
- Parameters:
n (int, optional) – The number of configurations to ask. Defaults to 1.
- Returns:
a list of hyperparameter configurations to evaluate.
- Return type:
List[Dict]
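For instance (a minimal sketch, assuming search has been created as in the example at the top):
>>> configs = search.ask(n=4)
>>> len(configs)
4
Each returned configuration is a dictionary mapping hyperparameter names to values.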
- dump_context()#
Dumps the context in the log folder.
- dump_jobs_done_to_csv(flush: bool = False)#
Dump jobs completed to CSV in log_dir.
- Parameters:
flush (bool, optional) – Force the dumping if set to True. Defaults to False.
- extend_results_with_pareto_efficient_indicator()#
Extend the results DataFrame with Pareto-Front.
A column pareto_efficient is added to the dataframe. It is True if the point is Pareto efficient.
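Once the column is present, the Pareto front can be selected with a standard pandas filter (a sketch, assuming results is a results DataFrame carrying the pareto_efficient column):
>>> pareto_front = results[results["pareto_efficient"]]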
- fit_generative_model(df, q=0.9, n_samples=100, verbose=False, **generative_model_kwargs)[source]#
Fits a generative model for sampling during BO.
Learn the distribution of hyperparameters for the top-(1-q)x100% configurations and sample from this distribution. It can be used for transfer learning. For multiobjective problems, this function computes the top-(1-q)x100% configurations in terms of their ranking with respect to Pareto efficiency: all points on the first non-dominated Pareto front have rank 1 and, in general, points on the k-th non-dominated front have rank k.
Example Usage:
>>> search = CBO(problem, evaluator)
>>> score, model = search.fit_generative_model("results.csv")
- Parameters:
df (str|DataFrame) – a dataframe or path to a CSV file from a previous search.
q (float, optional) – the quantile defining the set of top configurations used to bias the search. Defaults to 0.90, which selects the top-10% of configurations from df.
n_samples (int, optional) – the number of samples used to score the generative model. Defaults to 100.
verbose (bool, optional) – If set to True, prints the score of the generative model. Defaults to False.
generative_model_kwargs (dict, optional) – additional parameters to pass to the generative model.
- Returns:
score, model – a metric measuring the quality of the learned generative model, and the generative model itself.
- Return type:
Tuple
- fit_search_space(df, fac_numerical=0.125, fac_categorical=10)[source]#
Apply prior-guided transfer learning based on a DataFrame of results.
Example Usage:
>>> search = CBO(problem, evaluator)
>>> search.fit_search_space("results.csv")
- Parameters:
df (str|DataFrame) – a checkpoint from a previous search.
fac_numerical (float) – the factor used to compute the sigma of a truncated normal distribution, based on sigma = max(1.0, (upper - lower) * fac_numerical). A large factor increases exploration while a small factor increases exploitation around the best configuration from the df parameter.
fac_categorical (float) – the weight given to a categorical feature that is part of the best configuration. A large weight > 1 increases exploitation while a weight close to 1 increases exploration.
- fit_surrogate(df)[source]#
Fit the surrogate model of the search from a checkpointed DataFrame.
- Parameters:
df (str|DataFrame) – a checkpoint from a previous search.
Example Usage:
>>> search = CBO(problem, evaluator)
>>> search.fit_surrogate("results.csv")
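A common pattern is to warm-start a new search from the results of a previous one (a sketch; "results.csv" is assumed to be the CSV written by an earlier search in its log_dir):
>>> search = CBO(problem, evaluator)
>>> search.fit_surrogate("results.csv")
>>> results = search.search(max_evals=50)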
- search(max_evals: int = -1, timeout: int = None, max_evals_strict: bool = False)#
Execute the search algorithm.
- Parameters:
max_evals (int, optional) – The maximum number of evaluations of the run function to perform before stopping the search. Defaults to -1, which runs indefinitely.
timeout (int, optional) – The time budget (in seconds) of the search before stopping. Defaults to None, which does not impose a time budget.
max_evals_strict (bool, optional) – If True, the search will not spawn more than max_evals jobs. Defaults to False.
- Returns:
A pandas DataFrame containing the evaluations performed, or None if the search could not evaluate any configuration. This DataFrame contains the following columns:
- p:HYPERPARAMETER_NAME: for each hyperparameter of the problem.
- objective: for single-objective optimization.
- objective_0, objective_1, …: for multi-objective optimization.
- job_id: the identifier of the job.
- job_status: the status of the job at the end of the search.
- m:METADATA_NAME: for each metadata of the problem. Some metadata are always present, like m:timestamp_submit and m:timestamp_gather, which are the timestamps of the submission and gathering of the job.
- Return type:
DataFrame
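For example, the best configuration of a single-objective search can be read back from the returned DataFrame with standard pandas operations (a sketch, assuming the objective is being maximized):
>>> best = results.loc[results["objective"].idxmax()]
>>> best_config = {k[2:]: v for k, v in best.items() if k.startswith("p:")}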
- property search_id#
The identifier of the search used by the evaluator.
- tell(results: List[HPOJob])#
Tell the search the results of the evaluations.
- Parameters:
results (List[HPOJob]) – a list of HPOJobs from which hyperparameters and objectives can be retrieved.
- to_json()#
Returns a json version of the search object.