deephyper.evaluator.ProcessPoolEvaluator

class deephyper.evaluator.ProcessPoolEvaluator(run_function, num_workers: int = 1, callbacks: Optional[list] = None, run_function_kwargs: Optional[dict] = None)[source]

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses the ProcessPoolExecutor as backend.

Parameters
  • run_function (callable) – the function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of parallel processes used to compute the run_function. Defaults to 1.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.

  • run_function_kwargs (dict, optional) – keyword arguments passed to the run_function when it is executed. Defaults to None.
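
A minimal usage sketch is given below. It assumes the usual DeepHyper convention of a run function that takes a configuration dict and returns a scalar objective; the "x" parameter and the returned quantity are purely illustrative. Because the ProcessPoolExecutor backend pickles the run function and executes it in separate processes, the run function must be defined at module level and the driver code should be guarded by if __name__ == "__main__":.

    from deephyper.evaluator import ProcessPoolEvaluator

    def run(config):
        # Hypothetical objective: the evaluator does not interpret the
        # returned value, it only runs the function in a worker process.
        return -config["x"] ** 2

    if __name__ == "__main__":
        evaluator = ProcessPoolEvaluator(run, num_workers=4)
        evaluator.submit([{"x": float(i)} for i in range(8)])
        jobs = evaluator.gather("ALL")
        print([job.config for job in jobs])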

Methods

convert_for_csv

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

create

Create evaluator with a specific backend and configuration.

create_job

Create a Job object from the input configuration.

decode

Decode the key following a JSON format to return a dict.

dump_evals

Dump evaluations to a CSV file named "results.csv".

execute

Execute the received job.

gather

Collect the completed tasks from the evaluator in batches of one or more.

submit

Send configurations to be evaluated by available workers.

Attributes

FAIL_RETURN_VALUE

PYTHON_EXE

convert_for_csv(val)

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any
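
For instance, a non-scalar value such as a list would be stored via its string representation (illustrative, following the docstring above):

    >>> evaluator.convert_for_csv([1, 2, 3])
    '[1, 2, 3]'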

static create(run_function, method='subprocess', method_kwargs={})

Create evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use, among [“thread”, “process”, “subprocess”, “ray”]. Defaults to “subprocess”.

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

DeephyperRuntimeError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator
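
As a sketch, the factory call below is expected to return a ProcessPoolEvaluator when the “process” backend is requested, reusing the run function from the earlier example; forwarding num_workers through method_kwargs mirrors the constructor signature above, although the exact set of accepted keys is backend-specific. The public import path deephyper.evaluator.Evaluator is assumed here.

    from deephyper.evaluator import Evaluator

    # "process" is assumed to select the ProcessPoolExecutor backend;
    # method_kwargs is forwarded to the corresponding constructor.
    evaluator = Evaluator.create(
        run,
        method="process",
        method_kwargs={"num_workers": 4},
    )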

create_job(config)

Create a Job object from the input configuration.

decode(key)

Decode the key following a JSON format to return a dict.
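
For example, a JSON-encoded key decodes to a plain dict (illustrative):

    >>> evaluator.decode('{"x": 0.5}')
    {'x': 0.5}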

dump_evals(saved_keys=None, log_dir: str = '.')

Dump evaluations to a CSV file named "results.csv".

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is added as a row of the CSV file. If a list, only the listed keys are added as a row of the CSV file. If a callable, its output dictionary is added as a row of the CSV file.

  • log_dir (str) – directory in which to dump the CSV file.
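
A short sketch, reusing the hypothetical "x" parameter from the earlier example to restrict the columns of the CSV file:

    # Writes logs/results.csv with one row per completed job,
    # keeping only the "x" entry of each job.config.
    evaluator.dump_evals(saved_keys=["x"], log_dir="logs")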

async execute(job)[source]

Execute the received job using the ProcessPoolExecutor backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Block until all jobs submitted to the evaluator are completed.

    "BATCH"

    Specify a minimum batch size of jobs to collect from the evaluator. The method will block until at least size evaluations are completed.

  • size (int, optional) – The minimum batch size that we want to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]
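
A sketch of the two modes, assuming jobs have already been submitted:

    # Block until every submitted job has completed.
    finished = evaluator.gather("ALL")

    # Return as soon as at least two jobs have completed; jobs that are
    # still running remain pending and can be gathered later.
    first_batch = evaluator.gather("BATCH", size=2)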

submit(configs: List[Dict])

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each of which will be passed to the run function to be executed.
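
A sketch, reusing the hypothetical "x" parameter: each dict is passed as the single argument of the run function, and the completed evaluations are then collected with gather.

    evaluator.submit([
        {"x": 0.5},
        {"x": 1.5},
    ])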