deephyper.evaluator

This module asynchronously manages a series of Job objects to execute given hyperparameter search (HPS) or neural architecture search (NAS) tasks on various environments with differing system settings and properties.

class deephyper.evaluator.Evaluator(run_function, num_workers: int = 1, callbacks=None)[source]

Bases: object

This Evaluator class asynchronously manages a series of Job objects to execute given HPS or NAS tasks on various environments with differing system settings and properties. A minimal usage sketch follows the parameter list below.

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of parallel workers available for the Evaluator. Defaults to 1.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.
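
A minimal sketch built only from the methods documented on this page; the run function body and the "x" key are illustrative, not part of the API:

    from deephyper.evaluator import Evaluator

    # The run function receives one configuration (dict) and returns an
    # objective value (a float).
    def run(config: dict) -> float:
        return -config["x"] ** 2

    # method_kwargs keys map to the constructor arguments of the chosen
    # backend (here, SubprocessEvaluator).
    evaluator = Evaluator.create(
        run, method="subprocess", method_kwargs={"num_workers": 2}
    )

    evaluator.submit([{"x": 1.0}, {"x": 2.0}])
    jobs = evaluator.gather("ALL")

Note that the process- and subprocess-based backends execute the run function outside the submitting interpreter, so in practice it must be importable (defined at the top level of a module).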

FAIL_RETURN_VALUE = -3.4028235e+38
PYTHON_EXE = <path of the Python interpreter used to launch evaluations> (the literal value rendered in hosted documentation reflects the build environment)
convert_for_csv(val)[source]

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any

static create(run_function, method='subprocess', method_kwargs={})[source]

Create an evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in [“thread”, “process”, “subprocess”, “ray”]. Defaults to “subprocess”.

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

DeephyperRuntimeError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator
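
For example, the same run function can be dispatched to different backends purely through the method argument; a sketch, assuming a run function importable from a (hypothetical) user module:

    from deephyper.evaluator import Evaluator
    from my_module import run  # hypothetical module providing the run function

    # "thread" suits I/O-bound run functions, "process"/"subprocess" suit
    # CPU-bound ones, and "ray" targets a Ray cluster.
    evaluator = Evaluator.create(
        run,
        method="process",
        method_kwargs={"num_workers": 4},  # forwarded to ProcessPoolEvaluator
    )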

create_job(config)[source]
decode(key)[source]

Decode a JSON string key back to the configuration x (list).

dump_evals(saved_keys=None, log_dir: str = '.')[source]

Dump evaluations to a CSV file named "results.csv".

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is saved as a row of the CSV file. If a list, only the listed keys are saved. If a callable, the dictionary it returns is saved as a row of the CSV file. Defaults to None.

  • log_dir (str) – directory where the CSV file is written. Defaults to ".".
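
A sketch of the three saved_keys modes (the config keys are illustrative, and the callable is assumed to receive the Job):

    # Save every key of job.config to ./results.csv.
    evaluator.dump_evals()

    # Save only the listed keys, writing to outputs/results.csv.
    evaluator.dump_evals(saved_keys=["x"], log_dir="outputs")

    # Derive the saved row from each job with a callable
    # (hypothetical callable signature).
    evaluator.dump_evals(saved_keys=lambda job: {"x": job.config["x"]})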

async execute(job)[source]

Execute the received job. To be implemented by each specific backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)[source]

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Collect all jobs submitted to the evaluator, e.g., evaluator.gather("ALL").

    "BATCH"

    Collect a minimum batch size of completed jobs from the evaluator.

  • size (int, optional) – The minimum batch size to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]
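
For instance, continuing the sketch above:

    # Block until every submitted job has completed.
    all_jobs = evaluator.gather("ALL")

    # Return once at least 4 jobs have completed; jobs still running can
    # be collected by a later gather call.
    batch = evaluator.gather("BATCH", size=4)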

submit(configs: List[Dict])[source]

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each passed to the run_function for execution.
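
submit returns without waiting for results, so it is typically paired with gather; a sketch with illustrative configurations:

    configs = [{"x": float(x)} for x in range(8)]
    evaluator.submit(configs)                 # dispatches without blocking
    done = evaluator.gather("BATCH", size=2)  # collect completed jobs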

class deephyper.evaluator.Job(id, config: dict, run_function)[source]

Bases: object

Represents an evaluation executed by the Evaluator class.

Parameters
  • id (Any) – unique identifier of the job. Usually an integer.

  • config (dict) – argument dictionary of the run_function.

  • run_function (callable) – function executed by the Evaluator.

DONE = 2
READY = 0
RUNNING = 1
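
A sketch of inspecting gathered jobs; the status and result attribute names are assumptions inferred from the state constants above and from gather's behavior, not guaranteed by this page:

    from deephyper.evaluator import Job

    for job in evaluator.gather("ALL"):
        # Gathered jobs have completed (assumed status attribute).
        assert job.status == Job.DONE
        # result is assumed to hold the run_function's return value.
        print(job.id, job.config, job.result)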
class deephyper.evaluator.ProcessPoolEvaluator(run_function, num_workers: int = 1, callbacks=None)[source]

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses concurrent.futures.ProcessPoolExecutor as its backend. A usage sketch follows the parameter list.

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of parallel processes used to compute the run_function. Defaults to 1.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.
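
A sketch of direct instantiation; because a process pool pickles its tasks, the run function should be defined at module top level (the objective below is illustrative):

    from deephyper.evaluator import ProcessPoolEvaluator

    def run(config: dict) -> float:
        return -config["x"] ** 2  # illustrative objective

    evaluator = ProcessPoolEvaluator(run, num_workers=4)
    evaluator.submit([{"x": float(x)} for x in range(4)])
    jobs = evaluator.gather("ALL")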

FAIL_RETURN_VALUE = -3.4028235e+38
PYTHON_EXE = <path of the Python interpreter used to launch evaluations> (the literal value rendered in hosted documentation reflects the build environment)
convert_for_csv(val)

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any

static create(run_function, method='subprocess', method_kwargs={})

Create an evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in [“thread”, “process”, “subprocess”, “ray”]. Defaults to “subprocess”.

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

DeephyperRuntimeError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator

create_job(config)
decode(key)

Decode a JSON string key back to the configuration x (list).

dump_evals(saved_keys=None, log_dir: str = '.')

Dump evaluations to a CSV file named "results.csv".

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is saved as a row of the CSV file. If a list, only the listed keys are saved. If a callable, the dictionary it returns is saved as a row of the CSV file. Defaults to None.

  • log_dir (str) – directory where the CSV file is written. Defaults to ".".

async execute(job)[source]

Execute the received job using this backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Collect all jobs submitted to the evaluator, e.g., evaluator.gather("ALL").

    "BATCH"

    Collect a minimum batch size of completed jobs from the evaluator.

  • size (int, optional) – The minimum batch size to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]

submit(configs: List[Dict])

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each passed to the run_function for execution.

class deephyper.evaluator.RayEvaluator(run_function, callbacks=None, address: Optional[str] = None, password: Optional[str] = None, num_cpus: Optional[int] = None, num_gpus: Optional[int] = None, num_cpus_per_task: float = 1, num_gpus_per_task: Optional[float] = None, ray_kwargs: dict = {}, num_workers: Optional[int] = None)[source]

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses the Ray library as its backend. A usage sketch follows the parameter list.

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.

  • address (str, optional) – address of the Ray head. Defaults to None if no Ray head was started.

  • password (str, optional) – password used to connect to the Ray head. Defaults to None, in which case the default Ray password is used.

  • num_cpus (int, optional) – number of CPUs available in the Ray cluster. Defaults to None; if the Ray cluster is already started, the value is computed automatically.

  • num_gpus (int, optional) – number of GPUs available in the Ray cluster. Defaults to None; if the Ray cluster is already started, the value is computed automatically.

  • num_cpus_per_task (float, optional) – number of CPUs used per remote task. Defaults to 1.

  • num_gpus_per_task (float, optional) – number of GPUs used per remote task. Defaults to None.

  • ray_kwargs (dict, optional) – other Ray keyword arguments passed to ray.init(...). Defaults to {}.

  • num_workers (int, optional) – number of workers available to compute remote tasks in parallel. Defaults to None, in which case it is computed as num_workers = int(num_cpus // num_cpus_per_task).
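
A sketch for a Ray cluster that is already running; address="auto" lets Ray locate the head node, and the run function is assumed importable from a (hypothetical) user module:

    from deephyper.evaluator import RayEvaluator
    from my_module import run  # hypothetical importable run function

    evaluator = RayEvaluator(
        run,
        address="auto",        # connect to the running Ray head
        num_cpus_per_task=1,   # num_workers becomes num_cpus // 1
    )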

FAIL_RETURN_VALUE = -3.4028235e+38
PYTHON_EXE = <path of the Python interpreter used to launch evaluations> (the literal value rendered in hosted documentation reflects the build environment)
convert_for_csv(val)

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any

static create(run_function, method='subprocess', method_kwargs={})

Create an evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in [“thread”, “process”, “subprocess”, “ray”]. Defaults to “subprocess”.

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

DeephyperRuntimeError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator

create_job(config)
decode(key)

Decode a JSON string key back to the configuration x (list).

dump_evals(saved_keys=None, log_dir: str = '.')

Dump evaluations to a CSV file named "results.csv".

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is saved as a row of the CSV file. If a list, only the listed keys are saved. If a callable, the dictionary it returns is saved as a row of the CSV file. Defaults to None.

  • log_dir (str) – directory where the CSV file is written. Defaults to ".".

async execute(job)[source]

Execute the received job using this backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Collect all jobs submitted to the evaluator, e.g., evaluator.gather("ALL").

    "BATCH"

    Collect a minimum batch size of completed jobs from the evaluator.

  • size (int, optional) – The minimum batch size to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]

submit(configs: List[Dict])

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each passed to the run_function for execution.

class deephyper.evaluator.SubprocessEvaluator(run_function, num_workers: int = 1, callbacks=None)[source]

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses asyncio.create_subprocess_exec as its backend. A usage sketch follows the parameter list.

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of parallel processes used to compute the run_function. Defaults to 1.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.
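
A sketch; each evaluation runs in a fresh Python subprocess, which isolates memory and library state between evaluations but requires the run function to be importable (the module below is hypothetical):

    from deephyper.evaluator import SubprocessEvaluator
    from my_module import run  # hypothetical importable run function

    evaluator = SubprocessEvaluator(run, num_workers=2)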

FAIL_RETURN_VALUE = -3.4028235e+38
PYTHON_EXE = <path of the Python interpreter used to launch evaluations> (the literal value rendered in hosted documentation reflects the build environment)
convert_for_csv(val)

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any

static create(run_function, method='subprocess', method_kwargs={})

Create an evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in [“thread”, “process”, “subprocess”, “ray”]. Defaults to “subprocess”.

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

DeephyperRuntimeError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator

create_job(config)
decode(key)

Decode a JSON string key back to the configuration x (list).

dump_evals(saved_keys=None, log_dir: str = '.')

Dump evaluations to a CSV file named "results.csv".

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is saved as a row of the CSV file. If a list, only the listed keys are saved. If a callable, the dictionary it returns is saved as a row of the CSV file. Defaults to None.

  • log_dir (str) – directory where the CSV file is written. Defaults to ".".

async execute(job)[source]

Execute the received job using this backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Collect all jobs submitted to the evaluator, e.g., evaluator.gather("ALL").

    "BATCH"

    Collect a minimum batch size of completed jobs from the evaluator.

  • size (int, optional) – The minimum batch size to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]

submit(configs: List[Dict])

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each passed to the run_function for execution.

class deephyper.evaluator.ThreadPoolEvaluator(run_function, num_workers: int = 1, callbacks=None)[source]

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses concurrent.futures.ThreadPoolExecutor as its backend.

Warning

This evaluator is suited to I/O-intensive tasks; because worker threads share the Python interpreter (and its GIL), do not expect a speed-up with compute-intensive tasks. See the sketch after the parameter list.

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of concurrent threads used to compute the run_function. Defaults to 1.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.
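
A sketch with an I/O-bound run function (the "url" key is illustrative); the threads overlap usefully only while the run function waits on I/O or otherwise releases the GIL:

    from deephyper.evaluator import ThreadPoolEvaluator
    import urllib.request

    def run(config: dict) -> float:
        # Mostly waits on the network, so concurrent threads pay off.
        with urllib.request.urlopen(config["url"]) as response:
            return float(len(response.read()))

    evaluator = ThreadPoolEvaluator(run, num_workers=8)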

FAIL_RETURN_VALUE = -3.4028235e+38
PYTHON_EXE = <path of the Python interpreter used to launch evaluations> (the literal value rendered in hosted documentation reflects the build environment)
convert_for_csv(val)

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any

static create(run_function, method='subprocess', method_kwargs={})

Create an evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in [“thread”, “process”, “subprocess”, “ray”]. Defaults to “subprocess”.

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

DeephyperRuntimeError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator

create_job(config)
decode(key)

Decode a JSON string key back to the configuration x (list).

dump_evals(saved_keys=None, log_dir: str = '.')

Dump evaluations to a CSV file named "results.csv".

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is saved as a row of the CSV file. If a list, only the listed keys are saved. If a callable, the dictionary it returns is saved as a row of the CSV file. Defaults to None.

  • log_dir (str) – directory where the CSV file is written. Defaults to ".".

async execute(job)[source]

Execute the received job using this backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Collect all jobs submitted to the evaluator, e.g., evaluator.gather("ALL").

    "BATCH"

    Collect a minimum batch size of completed jobs from the evaluator.

  • size (int, optional) – The minimum batch size to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]

submit(configs: List[Dict])

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each passed to the run_function for execution.