deephyper.search.hps.ExperimentalDesignSearch#

class deephyper.search.hps.ExperimentalDesignSearch(problem, evaluator, random_state: int = None, log_dir: str = '.', verbose: int = 0, n_points: int = None, design: str = 'random', initial_points=None)[source]#

Bases: CBO

Centralized Experimental Design Search. It follows a manager-workers architecture where the manager runs the sampling process and workers execute parallel evaluations of the black-box function.

Example Usage:

>>> max_evals = 100
>>> search = ExperimentalDesignSearch(problem, evaluator, n_points=max_evals, design="grid")
>>> results = search.search(max_evals=max_evals)
Parameters:
  • problem (HpProblem) – Hyperparameter problem describing the search space to explore.

  • evaluator (Evaluator) – An Evaluator instance responsible for distributing the tasks.

  • random_state (int, optional) – Random seed. Defaults to None.

  • log_dir (str, optional) – Log directory where search’s results are saved. Defaults to ".".

  • verbose (int, optional) – Indicate the verbosity level of the search. Defaults to 0.

  • n_points (int, optional) – Number of points to sample. Defaults to None.

  • design (str, optional) – Experimental design to use. Defaults to "random".
    - "random" for uniform random numbers.
    - "sobol" for a Sobol' sequence.
    - "halton" for a Halton sequence.
    - "hammersly" for a Hammersley sequence.
    - "lhs" for a Latin hypercube sequence.
    - "grid" for a uniform grid sequence.

  • initial_points (list, optional) – List of initial points to evaluate. Defaults to None.
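A more complete end-to-end sketch is given below. It is illustrative only: the import paths, the run function, and its job.parameters access follow recent DeepHyper versions and may need adjustment for yours.

>>> from deephyper.problem import HpProblem
>>> from deephyper.evaluator import Evaluator
>>> from deephyper.search.hps import ExperimentalDesignSearch
>>> problem = HpProblem()
>>> problem.add_hyperparameter((0.0, 10.0), "x")  # a single float hyperparameter
>>> def run(job):
...     return -job.parameters["x"] ** 2  # DeepHyper maximizes the returned objective
...
>>> evaluator = Evaluator.create(run, method="serial")
>>> search = ExperimentalDesignSearch(problem, evaluator, n_points=50, design="lhs")
>>> results = search.search(max_evals=50)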

Methods

check_evaluator

dump_context

Dumps the context in the log folder.

extend_results_with_pareto_efficient

Extend the results DataFrame with a column pareto_efficient which is True if the point is Pareto efficient.

fit_generative_model

Learn the distribution of hyperparameters for the top-(1-q)x100% configurations and sample from this distribution.

fit_search_space

Apply prior-guided transfer learning based on a DataFrame of results.

fit_surrogate

Fit the surrogate model of the search from a checkpointed DataFrame.

search

Execute the search algorithm.

to_json

Returns a JSON version of the search object.

Attributes

search_id

The identifier of the search used by the evaluator.

dump_context()#

Dumps the context in the log folder.

extend_results_with_pareto_efficient(df_path: str)#

Extend the results DataFrame with a column pareto_efficient which is True if the point is Pareto efficient.

Parameters:

df_path (str) – path to the results DataFrame (CSV file) to extend.
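Example Usage (a sketch, assuming results.csv was saved by a previous multi-objective search):

>>> search.extend_results_with_pareto_efficient("results.csv")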

fit_generative_model(df, q=0.9, n_iter_optimize=0, n_samples=100)#

Learn the distribution of hyperparameters for the top-(1-q)x100% configurations and sample from this distribution. It can be used for transfer learning. For multiobjective problems, this function computes the top-(1-q)x100% configurations in terms of their ranking with respect to Pareto efficiency: all points on the first non-dominated Pareto front have rank 1 and, in general, points on the k-th non-dominated front have rank k.

Example Usage:

>>> search = CBO(problem, evaluator)
>>> score, model = search.fit_generative_model("results.csv")
Parameters:
  • df (str|DataFrame) – a dataframe or path to CSV from a previous search.

  • q (float, optional) – the quantile that defines the set of top configurations used to bias the search. Defaults to 0.90, which selects the top-10% of configurations from df.

  • n_iter_optimize (int, optional) – the number of iterations used to optimize the generative model that samples the data for the search. Defaults to 0, i.e., no optimization of the generative model.

  • n_samples (int, optional) – the number of samples used to score the generative model.

Returns:

(score, model), where score is a metric measuring the quality of the learned generative model and model is the generative model itself.

Return type:

tuple

fit_search_space(df, fac_numerical=0.125, fac_categorical=10)#

Apply prior-guided transfer learning based on a DataFrame of results.

Example Usage:

>>> search = CBO(problem, evaluator)
>>> search.fit_search_space("results.csv")
Parameters:
  • df (str|DataFrame) – a checkpoint from a previous search.

  • fac_numerical (float) – the factor used to compute the sigma of a truncated normal distribution based on sigma = max(1.0, (upper - lower) * fac_numerical). A large factor increases exploration while a small factor increases exploitation around the best configuration from the df parameter.

  • fac_categorical (float) – the weight given to a categorical feature that is part of the best configuration. A large weight > 1 increases exploitation while a weight close to 1 increases exploration.

fit_surrogate(df)#

Fit the surrogate model of the search from a checkpointed DataFrame.

Parameters:

df (str|DataFrame) – a checkpoint from a previous search.

Example Usage:

>>> search = CBO(problem, evaluator)
>>> search.fit_surrogate("results.csv")
search(max_evals: int = -1, timeout: int = None, max_evals_strict: bool = False)#

Execute the search algorithm.

Parameters:
  • max_evals (int, optional) – The maximum number of evaluations of the run function to perform before stopping the search. Defaults to -1, in which case the search runs indefinitely.

  • timeout (int, optional) – The time budget (in seconds) of the search before stopping. Defaults to None, imposing no time budget.

  • max_evals_strict (bool, optional) – If True, the search will not spawn more than max_evals jobs. Defaults to False.

Returns:

a pandas DataFrame containing the evaluations performed or None if the search could not evaluate any configuration.

Return type:

DataFrame
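Example Usage (a minimal sketch, assuming search was constructed as above):

>>> # stop after 100 evaluations or 10 minutes, whichever comes first
>>> results = search.search(max_evals=100, timeout=600, max_evals_strict=True)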

property search_id#

The identifier of the search used by the evaluator.

to_json()#

Returns a JSON version of the search object.