deephyper.evaluator.SerialEvaluator#
- class deephyper.evaluator.SerialEvaluator(run_function: Callable, num_workers: int = 1, callbacks: list = None, run_function_kwargs: dict = None, storage: Storage = None, search_id: Hashable = None)[source]#
Bases: Evaluator
This evaluator runs evaluations one after the other (not in parallel).
- Parameters:
run_function (callable) – function to be executed by the Evaluator.
num_workers (int, optional) – number of workers used to compute the run_function. Defaults to 1.
callbacks (list, optional) – a list of callbacks to trigger custom actions at the creation or completion of jobs. Defaults to None.
run_function_kwargs (dict, optional) – static keyword arguments to pass to the run_function when executed. Defaults to None.
storage (Storage, optional) – storage used by the evaluator. Defaults to MemoryStorage.
search_id (Hashable, optional) – the id of the search to use in the corresponding storage. If None, a new search identifier is created when the search is initialized.
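The core behavior described above, running each evaluation one after the other, can be sketched in plain Python. This is a hypothetical stand-in to illustrate the serial pattern, not the deephyper implementation; `run_function`, `evaluate_serially`, and the toy configurations are all invented for the example.

```python
def run_function(config):
    # A user-defined objective function: higher is better.
    # This quadratic peaks at x = 3 (purely illustrative).
    return -(config["x"] - 3) ** 2

def evaluate_serially(run_function, configs):
    """Evaluate each configuration sequentially, one at a time,
    and collect (config, objective) pairs."""
    results = []
    for config in configs:
        objective = run_function(config)  # one evaluation at a time
        results.append((config, objective))
    return results

results = evaluate_serially(run_function, [{"x": x} for x in range(5)])
best_config, best_objective = max(results, key=lambda r: r[1])
```

A parallel evaluator would instead dispatch the configurations to multiple workers; the serial variant is useful for debugging a `run_function` before scaling up.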
Methods
convert_for_csv – Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).
create – Create evaluator with a specific backend and configuration.
decode – Decode the key following a JSON format to return a dict.
dump_evals
dump_jobs_done_to_csv – Dump completed jobs to a CSV file.
execute – Execute the received job.
gather – Collect the completed tasks from the evaluator in batches of one or more.
gather_other_jobs_done – Access storage to return results from other processes.
set_maximum_num_jobs_submitted
set_timeout – Set a timeout for the Evaluator.
submit – Send configurations to be evaluated by available workers.
to_json – Returns a JSON version of the evaluator.
Attributes
FAIL_RETURN_VALUE
NEST_ASYNCIO_PATCHED
PYTHON_EXE
num_jobs_gathered
num_jobs_submitted
- convert_for_csv(val)#
Convert an input value to an accepted format to be saved as a value of a CSV file (e.g., a list becomes its str representation).
- Parameters:
val (Any) – The input value to convert.
- Returns:
The converted value.
- Return type:
Any
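The documented rule (a list becomes its string representation) can be mirrored with a small hypothetical re-implementation. This is not the deephyper source, just a sketch of the described behavior:

```python
def convert_for_csv(val):
    """Convert a value to a form that can be written to a CSV cell.

    Values that are not natively CSV-friendly (here, lists) are
    stored as their string representation; everything else passes
    through unchanged.
    """
    if isinstance(val, list):
        return str(val)  # e.g. [1, 2] -> "[1, 2]"
    return val

converted_list = convert_for_csv([1, 2, 3])
converted_scalar = convert_for_csv(0.5)
```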
- static create(run_function, method='serial', method_kwargs={})#
Create evaluator with a specific backend and configuration.
- Parameters:
run_function (function) – the function to execute in parallel.
method (str, optional) – the backend to use, among ["serial", "thread", "process", "ray", "mpicomm"]. Defaults to "serial".
method_kwargs (dict, optional) – configuration dictionary of the corresponding backend. Keys correspond to the keyword arguments of the corresponding implementation. Defaults to {}.
- Raises:
ValueError – if the method is not acceptable.
- Returns:
the Evaluator with the corresponding backend and configuration.
- Return type:
Evaluator
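The factory pattern behind a `create`-style method, mapping a backend name to a concrete evaluator class and rejecting unknown names with `ValueError`, can be sketched as follows. The `SerialBackend`/`ThreadBackend` classes and `_BACKENDS` registry are hypothetical stand-ins, not the deephyper implementations:

```python
class SerialBackend:
    """Hypothetical stand-in for a serial evaluator backend."""
    def __init__(self, run_function, **kwargs):
        self.run_function = run_function
        self.kwargs = kwargs

class ThreadBackend(SerialBackend):
    """Hypothetical stand-in for a thread-based backend."""

# Registry mapping a method name to its backend class.
_BACKENDS = {"serial": SerialBackend, "thread": ThreadBackend}

def create(run_function, method="serial", method_kwargs=None):
    """Create an evaluator with a specific backend and configuration."""
    method_kwargs = method_kwargs or {}
    if method not in _BACKENDS:
        raise ValueError(f"Unknown evaluator method: {method!r}")
    # method_kwargs are forwarded as keyword arguments to the backend.
    return _BACKENDS[method](run_function, **method_kwargs)

evaluator = create(lambda cfg: 0.0, method="serial")
```

A registry dict keeps the dispatch table in one place, so adding a backend does not require touching the factory function itself.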
- decode(key)#
Decode the key following a JSON format to return a dict.
- dump_jobs_done_to_csv(log_dir: str = '.', filename='results.csv', flush: bool = False)#
Dump completed jobs to a CSV file. This will reset the Evaluator.jobs_done attribute to an empty list.
- async execute(job: Job) Job [source]#
Execute the received job. To be implemented with a specific backend.
- Parameters:
job (Job) – the Job to be executed.
- Returns:
the updated Job.
- Return type:
Job
- gather(type, size=1)#
Collect the completed tasks from the evaluator in batches of one or more.
- Parameters:
type (str) –
- Options:
"ALL"
Block until all jobs submitted to the evaluator are completed.
"BATCH"
Specify a minimum batch size of jobs to collect from the evaluator. The method will block until at least size evaluations are completed.
size (int, optional) – The minimum batch size that we want to collect from the evaluator. Defaults to 1.
- Raises:
Exception – Raised when a gather operation other than “ALL” or “BATCH” is provided.
- Returns:
A batch of completed jobs that is at minimum the given size.
- Return type:
List[Job]
- gather_other_jobs_done()#
Access storage to return results from other processes.
- set_timeout(timeout)#
Set a timeout for the Evaluator. It will create tasks with a "time budget" and kill the task if this budget is exhausted.
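The "time budget" mechanism can be illustrated with `asyncio.wait_for`, which cancels a task once its budget is exhausted. This is a sketch of the idea only; deephyper's actual implementation differs, and the `"F_timeout"` failure marker here is a hypothetical placeholder:

```python
import asyncio

async def run_with_budget(coro, timeout):
    """Run a coroutine with a time budget; cancel it when exhausted."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        # The task was killed because it exceeded its budget.
        return "F_timeout"  # hypothetical failure marker

async def slow_job():
    # Simulates an evaluation that takes far too long.
    await asyncio.sleep(10.0)
    return 1.0

result = asyncio.run(run_with_budget(slow_job(), timeout=0.01))
```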
- submit(args_list: List[Dict])#
Send configurations to be evaluated by available workers.
- Parameters:
args_list (List[Dict]) – a list of dicts, each of which will be passed to the run function to be executed.
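The interplay of submit() and gather() can be modeled with a toy queue-based evaluator: submit() enqueues configurations, gather("BATCH", size) returns at least size completed jobs, and gather("ALL") drains everything. The `ToyEvaluator` class is purely illustrative (a real evaluator runs jobs asynchronously and blocks in gather until enough have completed):

```python
from collections import deque

class ToyEvaluator:
    """Illustrative model of the submit()/gather() protocol."""
    def __init__(self, run_function):
        self.run_function = run_function
        self.pending = deque()

    def submit(self, args_list):
        # Enqueue configurations to be evaluated.
        for args in args_list:
            self.pending.append(args)

    def gather(self, type, size=1):
        if type == "ALL":
            n = len(self.pending)          # collect every pending job
        elif type == "BATCH":
            n = min(max(size, 1), len(self.pending))  # at least `size`
        else:
            raise Exception(f"Unknown gather type: {type!r}")
        done = []
        for _ in range(n):
            args = self.pending.popleft()
            done.append((args, self.run_function(args)))
        return done

ev = ToyEvaluator(lambda cfg: cfg["x"] * 2)
ev.submit([{"x": i} for i in range(4)])
batch = ev.gather("BATCH", size=2)  # at least 2 completed jobs
rest = ev.gather("ALL")             # everything still pending
```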
- to_json()#
Returns a JSON version of the evaluator.