deephyper.evaluator.RayEvaluator#

class deephyper.evaluator.RayEvaluator(run_function, callbacks=None, run_function_kwargs=None, address: Optional[str] = None, password: Optional[str] = None, num_cpus: Optional[int] = None, num_gpus: Optional[int] = None, num_cpus_per_task: float = 1, num_gpus_per_task: Optional[float] = None, ray_kwargs: Optional[dict] = None, num_workers: Optional[int] = None)[source]#

Bases: deephyper.evaluator._evaluator.Evaluator

This evaluator uses the ray library as its backend.

Parameters
  • run_function (callable) – function to be executed by the Evaluator.

  • callbacks (list, optional) – A list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.

  • run_function_kwargs (dict, optional) – Keyword arguments to pass to the run_function. Defaults to None.

  • address (str, optional) – address of the Ray head. Defaults to None if no Ray head was started.

  • password (str, optional) – password to connect to the Ray head. Defaults to None if the default Ray password is used.

  • num_cpus (int, optional) – number of CPUs available in the Ray cluster. Defaults to None; if the Ray cluster was already started it is computed automatically.

  • num_gpus (int, optional) – number of GPUs available in the Ray cluster. Defaults to None; if the Ray cluster was already started it is computed automatically.

  • num_cpus_per_task (float, optional) – number of CPUs used per remote task. Defaults to 1.

  • num_gpus_per_task (float, optional) – number of GPUs used per remote task. Defaults to None.

  • ray_kwargs (dict, optional) – other keyword arguments passed to ray.init(...). Defaults to None.

  • num_workers (int, optional) – number of workers available to compute remote tasks in parallel. Defaults to None; if set to -1 it is computed automatically as num_workers = int(num_cpus // num_cpus_per_task).

Methods

convert_for_csv

Convert an input value to an accepted format so it can be saved as a value in a CSV file (e.g., a list becomes its str representation).

create

Create evaluator with a specific backend and configuration.

decode

Decode the key following a JSON format to return a dict.

dump_evals

Dump evaluations to a CSV file.

execute

Execute the received job.

gather

Collect the completed tasks from the evaluator in batches of one or more.

submit

Send configurations to be evaluated by available workers.

to_json

Returns a JSON version of the evaluator.

Attributes

FAIL_RETURN_VALUE

PYTHON_EXE

convert_for_csv(val)#

Convert an input value to an accepted format so it can be saved as a value in a CSV file (e.g., a list becomes its str representation).

Parameters

val (Any) – The input value to convert.

Returns

The converted value.

Return type

Any

static create(run_function, method='serial', method_kwargs={})#

Create evaluator with a specific backend and configuration.

Parameters
  • run_function (function) – the function to execute in parallel.

  • method (str, optional) – the backend to use in ["serial", "thread", "process", "subprocess", "ray", "mpicomm", "mpipool"]. Defaults to "serial".

  • method_kwargs (dict, optional) – configuration dictionary for the chosen backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.

Raises

ValueError – if the requested method is not available.

Returns

the Evaluator with the corresponding backend and configuration.

Return type

Evaluator

decode(key)#

Decode the key following a JSON format to return a dict.

dump_evals(saved_keys=None, log_dir: str = '.', filename='results.csv')#

Dump evaluations to a CSV file.

Parameters
  • saved_keys (list|callable) – If None, the whole job.config is added as a row of the CSV file. If a list, only the filtered keys are added as a row of the CSV file. If a callable, its output dictionary is added as a row of the CSV file.

  • log_dir (str) – directory where to dump the CSV file.

  • filename (str) – name of the file where to write the data.

async execute(job)[source]#

Execute the received job with the Ray backend.

Parameters

job (Job) – the Job to be executed.

gather(type, size=1)#

Collect the completed tasks from the evaluator in batches of one or more.

Parameters
  • type (str) –

    Options:
    "ALL"

    Block until all jobs submitted to the evaluator are completed.

    "BATCH"

    Specify a minimum batch size of jobs to collect from the evaluator. The method will block until at least size evaluations are completed.

  • size (int, optional) – The minimum batch size that we want to collect from the evaluator. Defaults to 1.

Raises

Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.

Returns

A batch of completed jobs that is at minimum the given size.

Return type

List[Job]

submit(configs: List[Dict])#

Send configurations to be evaluated by available workers.

Parameters

configs (List[Dict]) – A list of dicts, each of which will be passed to the run_function to be executed.

to_json()#

Returns a JSON version of the evaluator.