1. DeepHyper 101#


In this tutorial, we present the basics of DeepHyper.

Let us start with installing DeepHyper!

[1]:
try:
    import deephyper
    print(deephyper.__version__)
except (ImportError, ModuleNotFoundError):
    !pip install deephyper
0.4.2

1.1. Optimization Problem#

In the definition of our optimization problem we have two components:

  1. the black-box function that we want to optimize

  2. the search space of input variables

1.1.1. Black-Box Function#

DeepHyper is developed to optimize black-box functions. Here, we define the function \(f(x) = -x^2\) that we want to maximise (the maximum being \(f(x=0) = 0\) on \(I_x = [-10;10]\)). The black-box function f takes as input a config dictionary from which we retrieve the variables of interest.

[2]:
def f(config):
    return - config["x"]**2
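As a quick sanity check, we can evaluate f by hand on a config dictionary (a small usage sketch):

# evaluate f at x = 2: the expected result is -(2**2) = -4
assert f({"x": 2.0}) == -4.0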

1.1.2. Search Space of Input Variables#

In this example, we have only one variable \(x\) for the black-box function \(f\). We choose to optimize this variable \(x\) on the interval \(I_x = [-10;10]\). To do so, we use the HpProblem from DeepHyper and add a real hyperparameter by using a tuple of two floats.

[3]:
from deephyper.problem import HpProblem

problem = HpProblem()

# define the variable you want to optimize
problem.add_hyperparameter((-10.0, 10.0), "x")

problem
[3]:
Configuration space object:
  Hyperparameters:
    x, Type: UniformFloat, Range: [-10.0, 10.0], Default: 0.0
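For reference, the same add_hyperparameter method also accepts other kinds of variables; the sketch below assumes that a tuple of two integers declares an integer range and that a list declares a categorical choice (shown on a separate problem so that the search space used in the rest of the tutorial is unchanged):

# illustration only: other kinds of hyperparameters on a separate problem
other_problem = HpProblem()
other_problem.add_hyperparameter((1, 100), "n_units")             # assumed integer range
other_problem.add_hyperparameter(["relu", "tanh"], "activation")  # assumed categorical choice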

1.2. Evaluator Interface#

DeepHyper uses an API called Evaluator to distribute the computation of black-box functions and adapt to different backends (e.g., threads, processes, MPI, Ray). An Evaluator object wraps the black-box function f that we want to optimize. The method parameter selects the backend, and method_kwargs defines the options available for this backend.

Tip

The method="thread" provides parallel computation only if the black-box function releases the global interpreter lock (GIL). Therefore, if you want parallelism in Jupyter notebooks you should use the Ray evaluator (method="ray") after installing Ray with pip install ray.
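For reference, switching to the Ray backend only changes the method argument; a minimal sketch, assuming Ray is installed:

# minimal sketch of a Ray-based evaluator (assumes Ray is installed;
# backend-specific options such as per-worker resources are omitted)
from deephyper.evaluator import Evaluator

ray_evaluator = Evaluator.create(f, method="ray")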

It is possible to define callbacks to extend the behaviour of the Evaluator each time a function evaluation is launched or completed. In this example, we use the TqdmCallback to follow the completed evaluations and the evolution of the objective with a progress bar.

[4]:
from deephyper.evaluator import Evaluator
from deephyper.evaluator.callback import TqdmCallback


# define the evaluator to distribute the computation
evaluator = Evaluator.create(
    f,
    method="thread",
    method_kwargs={
        "num_workers": 4,
        "callbacks": [TqdmCallback()]
    },
)

print(f"Evaluator has {evaluator.num_workers} available worker{'' if evaluator.num_workers == 1 else 's'}")
Evaluator has 4 available workers
/Users/romainegele/Documents/Argonne/deephyper/deephyper/evaluator/_evaluator.py:126: UserWarning: Applying nest-asyncio patch for IPython Shell!
  warnings.warn(

1.3. Search Algorithm#

The next step is to define the search algorithm that we want to use. Here, we choose CBO (Centralized Bayesian Optimization), a sampling-based Bayesian optimization strategy. This algorithm has the advantage of being asynchronous thanks to a constant-liar strategy, which is crucial to maintain good resource utilization as the number of available workers increases.

[5]:
from deephyper.search.hps import CBO

# define your search
search = CBO(problem, evaluator)

Then, we can execute the search for a given number of iterations by calling search.search(max_evals=...). It is also possible to use the timeout parameter if one needs a specific time budget (e.g., restricted computational time in machine learning competitions, allocation time in HPC).
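For instance, with a one-minute wall-time budget the call would look like the sketch below (an alternative to the max_evals call used in the next cell; timeout is assumed to be expressed in seconds):

# alternative: stop the search after roughly 60 seconds of wall time
results = search.search(timeout=60)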

[6]:
results = search.search(max_evals=100)

Finally, let us visualize the results. The search(...) call returns a DataFrame, which is also saved locally under results.csv (in case of a crash, we do not want to lose the possibly expensive evaluations already performed).

The DataFrame contains the following columns:

  1. the optimized hyperparameters, such as p:x in our case

  2. the job_id of each evaluated function (incremented in the order the evaluations were created)

  3. the objective that is maximised, which directly matches the result of the \(f\)-function in our example

  4. the time of submission/collection of each task, timestamp_submit and timestamp_gather respectively (in seconds, since the creation of the Evaluator)

[7]:
results
[7]:
p:x objective job_id m:timestamp_submit m:timestamp_gather
0 9.138987 -8.352109e+01 1 0.088184 0.088959
1 -7.692589 -5.917593e+01 2 0.088191 0.101996
2 5.110193 -2.611408e+01 3 0.088196 0.102237
3 -9.787812 -9.580125e+01 0 0.088172 0.102414
4 0.239850 -5.752782e-02 5 0.114977 0.115826
... ... ... ... ... ...
95 -0.001097 -1.204058e-06 95 13.172690 13.708221
96 -0.003347 -1.120130e-05 98 13.707965 13.825222
97 0.000346 -1.197333e-07 96 13.707946 13.825585
98 -0.003690 -1.361392e-05 97 13.707957 13.825754
99 -0.000764 -5.831021e-07 99 13.825041 14.359546

100 rows × 5 columns
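Since the objective is maximised, the best configuration found so far can be read directly from this DataFrame with standard pandas operations, for example:

# row of the DataFrame with the largest objective found by the search
i_best = results.objective.argmax()
print(results.iloc[i_best])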

We can also plot the evolution of the objective to verify that we converge correctly toward \(0\).

[8]:
import matplotlib.pyplot as plt

plt.figure()
plt.plot(results.objective.cummax())
plt.scatter(list(range(len(results))), results.objective)
plt.xlabel("Number of Search Evaluations")
plt.ylabel("$y = f(x)$")
plt.show()
Figure: objective of each evaluation (scatter) and running maximum of the objective (line) versus the number of search evaluations.