deephyper.evaluator.MPICommEvaluator#

class deephyper.evaluator.MPICommEvaluator(run_function: Callable, num_workers: int = None, callbacks=None, run_function_kwargs=None, storage: Storage = None, search_id: Hashable = None, comm=None, root=0)[source]#

Bases: Evaluator

This evaluator uses the mpi4py library as its backend.

This evaluator assumes an already-existing MPI context (with running processes), and therefore has less overhead than MPIPoolEvaluator, which spawns processes dynamically. A usage sketch follows the parameter list below.

Parameters:
  • run_function (callable) – The function to be executed by the Evaluator.

  • num_workers (int, optional) – Number of parallel MPI workers used to compute the run_function. Defaults to None, which uses one worker per rank (minus the master rank).

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.

  • run_function_kwargs (dict, optional) – Keyword-arguments to pass to the run_function. Defaults to None.

  • storage (Storage, optional) – Storage used by the evaluator. Defaults to SharedMemoryStorage.

  • search_id (Hashable, optional) – The id of the search to use in the corresponding storage. If None, a new search identifier is created when initializing the search.

  • comm (optional) – An MPI communicator; if None, MPI.COMM_WORLD is used. Defaults to None.

  • root (int, optional) – The rank of the master process. Defaults to 0.
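
A minimal usage sketch, assuming the script runs under an already-started MPI context (e.g., mpirun -np 4 python script.py). The run-function and the parameter name "x" are illustrative, and the is-None check follows the pattern commonly shown in DeepHyper's mpicomm examples, where only the master rank receives the evaluator from the context manager:

    from mpi4py import MPI
    from deephyper.evaluator import MPICommEvaluator

    def run(job):
        # Hypothetical run-function; in recent DeepHyper versions it
        # receives a job whose .parameters holds the configuration.
        return -job.parameters["x"] ** 2

    # All ranks enter the context: the master rank (root=0) drives the
    # evaluation loop while the other ranks execute `run` as workers.
    with MPICommEvaluator(run, comm=MPI.COMM_WORLD, root=0) as evaluator:
        if evaluator is not None:  # None on worker ranks
            evaluator.submit([{"x": 1.0}, {"x": 2.0}])
            jobs = evaluator.gather("ALL")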

Methods

close

convert_for_csv

Convert an input value to an accepted format.

create

Create evaluator with a specific backend and configuration.

decode

Decode a JSON-formatted key and return a dict.

dump_evals

dump_jobs_done_to_csv

Dump completed jobs to a CSV file.

execute

Execute the received job.

gather

Collect the completed tasks from the evaluator in batches of one or more.

gather_other_jobs_done

Access storage to return results from other processes.

process_local_tasks_done

set_event_loop

set_maximum_num_jobs_submitted

submit

Send configurations to be evaluated by available workers.

to_json

Returns a JSON version of the evaluator.

Attributes

FAIL_RETURN_VALUE

NEST_ASYNCIO_PATCHED

PYTHON_EXE

is_master

Boolean that indicates if the current Evaluator object is a "Master".

num_jobs_gathered

num_jobs_submitted

time_left

timeout

convert_for_csv(val)#

Convert an input value to an accepted format.

This is to be saved as a value of a CSV file (e.g., a list becomes its str representation).

Parameters:

val (Any) – The input value to convert.

Returns:

The converted value.

Return type:

Any
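
For instance, based on the docstring's description (illustrative values; assumed behavior):

    # A list is serialized to its str representation so it fits in a
    # single CSV cell; scalar values are expected to pass through.
    evaluator.convert_for_csv([1, 2, 3])  # -> "[1, 2, 3]"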

static create(run_function, method='serial', method_kwargs={})#

Create evaluator with a specific backend and configuration.

Parameters:
  • run_function (function) – The function to execute in parallel.

  • method (str, optional) – The backend to use, among ["serial", "thread", "process", "ray", "mpicomm"]. Defaults to "serial".

  • method_kwargs (dict, optional) – Configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises:

ValueError – if the method is not acceptable.

Returns:

the Evaluator with the corresponding backend and configuration.

Return type:

Evaluator
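
For instance, a sketch of building the MPI evaluator through this factory (run is a placeholder run-function; the method_kwargs keys map to the MPICommEvaluator constructor arguments documented above):

    from deephyper.evaluator import Evaluator

    # Equivalent to constructing MPICommEvaluator directly.
    evaluator = Evaluator.create(
        run,
        method="mpicomm",
        method_kwargs={"root": 0},
    )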

decode(key)#

Decode a JSON-formatted key and return a dict.

dump_jobs_done_to_csv(log_dir: str = '.', filename='results.csv', flush: bool = False)#

Dump completed jobs to a CSV file.

This will reset the Evaluator.jobs_done attribute to an empty list.

Parameters:
  • log_dir (str) – Directory where to dump the CSV file.

  • filename (str) – Name of the file where to write the data.

  • flush (bool) – A boolean indicating if the results should be flushed (i.e., forcing the dumping).
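
For example (hypothetical output directory):

    # Writes the jobs completed so far to ./outputs/results.csv and
    # resets Evaluator.jobs_done to an empty list.
    evaluator.dump_jobs_done_to_csv(log_dir="outputs", filename="results.csv")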

async execute(job: Job) Job[source]#

Execute the received job. To be implemented with a specific backend.

Parameters:

job (Job) – the Job to be executed.

Returns:

the updated Job.

Return type:

Job

gather(type, size=1)#

Collect the completed tasks from the evaluator in batches of one or more.

Parameters:
  • type (str) –

    Options:
    "ALL"

    Block until all jobs submitted to the evaluator are completed.

    "BATCH"

    Specify a minimum batch size of jobs to collect from the evaluator. The method will block until at least size evaluations are completed.

  • size (int, optional) – The minimum batch size that we want to collect from the evaluator. Defaults to 1.

Raises:

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns:

A batch of completed jobs that is at minimum the given size.

Return type:

List[Job]
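
A short sketch of both modes, assuming jobs were previously submitted with submit:

    # Block until every submitted job has completed.
    jobs = evaluator.gather("ALL")

    # Block until at least 4 completed jobs are available; the returned
    # batch may be larger than `size`.
    jobs = evaluator.gather("BATCH", size=4)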

gather_other_jobs_done()#

Access storage to return results from other processes.

property is_master#

Boolean that indicates if the current Evaluator object is a “Master”.

submit(args_list: List[Dict])#

Send configurations to be evaluated by available workers.

Parameters:

args_list (List[Dict]) – A list of dicts, each of which will be passed to the run_function to be executed.
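
For instance, with a hypothetical hyperparameter named "x":

    # One job is created per configuration dict.
    evaluator.submit([{"x": 0.5}, {"x": 1.5}, {"x": 2.5}])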

to_json()#

Returns a JSON version of the evaluator.