deephyper.evaluator.ThreadPoolEvaluator#

class deephyper.evaluator.ThreadPoolEvaluator(run_function, num_workers: int = 1, callbacks: Optional[list] = None, run_function_kwargs: Optional[dict] = None)[source]#

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses Python's concurrent.futures.ThreadPoolExecutor as its backend.

Warning

This evaluator is best suited to I/O-intensive tasks; do not expect a speed-up with compute-intensive tasks, because Python threads share the interpreter's global interpreter lock (GIL).

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of concurrent threads used to compute the run_function. Defaults to 1.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.

  • run_function_kwargs (dict, optional) – Keyword arguments to pass to the run_function when it is called. Defaults to None.
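
A minimal usage sketch, assuming an I/O-bound run function (the run function and its "x" key are illustrative, not part of the API):

import time
from deephyper.evaluator import ThreadPoolEvaluator

def run(config):
    # Hypothetical I/O-bound task: the sleep stands in for a network
    # or disk call, which is where threads actually overlap.
    time.sleep(0.1)
    return -config["x"] ** 2

evaluator = ThreadPoolEvaluator(run, num_workers=4)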

Methods

convert_for_csv

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

create

Create evaluator with a specific backend and configuration.

decode

Decode the key following a JSON format to return a dict.

dump_evals

Dump evaluations to a CSV file.

execute

Execute the received job.

gather

Collect the completed tasks from the evaluator in batches of one or more.

submit

Send configurations to be evaluated by available workers.

to_json

Returns a JSON version of the evaluator.

Attributes

FAIL_RETURN_VALUE

PYTHON_EXE

convert_for_csv(val)#

Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any
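
A small sketch following the behavior described above (the output shown as a comment is assumed from the docstring's example):

value = evaluator.convert_for_csv([1, 2, 3])
# A list becomes its str representation, e.g. "[1, 2, 3]",
# so it fits in a single CSV cell.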

static create(run_function, method='serial', method_kwargs={})#

Create evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in ["serial", "thread", "process", "subprocess", "ray", "mpicomm", "mpipool"]. Defaults to "serial".

  • method_kwargs (dict, optional) – configuration dictionary of the corresponding backend; its keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

ValueError – if the method is not acceptable.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator
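
A sketch of the equivalent construction through the factory, reusing the run function from the example above; "thread" selects ThreadPoolEvaluator and method_kwargs is forwarded to its constructor:

from deephyper.evaluator import Evaluator

evaluator = Evaluator.create(
    run,
    method="thread",
    method_kwargs={"num_workers": 4},
)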

decode(key)#

Decode the key following a JSON format to return a dict.

dump_evals(saved_keys=None, log_dir: str = '.', filename='results.csv')#

Dump evaluations to a CSV file.

Parameters
  • saved_keys (list|callable) – If None, the whole job.config will be added as a row of the CSV file. If a list, only the filtered keys will be added as a row. If a callable, its output dictionary will be added as a row.

  • log_dir (str) – directory where to dump the CSV file.

  • filename (str) – name of the file where to write the data.
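
A sketch of dumping results once some evaluations have completed; the saved_keys callable receiving a Job instance is an assumption based on the description above:

# Write one row per completed evaluation to ./results.csv.
evaluator.dump_evals(log_dir=".", filename="results.csv")

# Only keep selected fields per row (assumed callable signature).
evaluator.dump_evals(saved_keys=lambda job: {"x": job.config["x"]})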

async execute(job)[source]#

Execute the received job using the thread-pool backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)#

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Block until all jobs submitted to the evaluator are completed.

    "BATCH"

    Specify a minimum batch size of jobs to collect from the evaluator. The method will block until at least size evaluations are completed.

  • size (int, optional) – The minimum batch size that we want to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather type other than "ALL" or "BATCH" is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]
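
A sketch of the two gathering modes, assuming jobs were submitted beforehand with submit() (see below):

# Block until every submitted job has completed.
done = evaluator.gather("ALL")

# Return as soon as at least 2 jobs have completed; more may be
# included in the batch if they finished in the meantime.
done = evaluator.gather("BATCH", size=2)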

submit(configs: List[Dict])#

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each of which will be passed to the run function to be executed.
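
A sketch of a full submit/gather cycle with the evaluator built above (the config keys are illustrative; the job.config and job.result attribute names are assumptions about the Job API):

evaluator.submit([{"x": 1.0}, {"x": 2.0}, {"x": 3.0}])
jobs = evaluator.gather("ALL")
for job in jobs:
    print(job.config, job.result)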

to_json()#

Returns a JSON version of the evaluator.