
Transfer Learning for Hyperparameter Search

Author(s): Romain Egele.

In this example we present how to apply transfer learning to hyperparameter search. Assume you have a set of similar tasks, for example searching for neural network hyperparameters on different datasets. You can easily imagine that nearby choices of hyperparameters will perform well across these different datasets, even if some light additional tuning can help improve the performance. Therefore, you can perform an expensive search once and then reuse the set of hyperparameters explored by this search to bias the following searches. Here, we use a cheap-to-compute and easy-to-understand example where we maximize the function \(f(x) = -\sum_{i=0}^{n-1} x_i^2\), so the size of the problem is defined by the variable \(n\). We start by optimizing the small problem where \(n=1\), then apply transfer learning from it to optimize the larger problem where \(n=2\), and visualize the difference if we were not to apply transfer learning on this larger problem instance.

Let us start by defining the run functions of the small- and large-scale problems:

import functools


def run(config: dict, N: int) -> float:
    y = -sum([config[f"x{i}"] ** 2 for i in range(N)])
    return y


run_small = functools.partial(run, N=1)
run_large = functools.partial(run, N=2)
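
As a quick sanity check (not part of the original example), we can evaluate these functions by hand; both are maximized at \(x_i = 0\) where \(f(0) = 0\):

# Hypothetical sanity check: the optimum of both problems is 0 at x_i = 0.
assert run_small({"x0": 0.0}) == 0.0
assert run_large({"x0": 0.0, "x1": 0.0}) == 0.0
assert run_small({"x0": 2.0}) == -4.0  # f(x) = -(2.0 ** 2)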

Then, we can define the hyperparameter problem space based on \(n\):

from deephyper.problem import HpProblem


N = 1
problem_small = HpProblem()
for i in range(N):
    problem_small.add_hyperparameter((-10.0, 10.0), f"x{i}")
problem_small

Out:

Configuration space object:
  Hyperparameters:
    x0, Type: UniformFloat, Range: [-10.0, 10.0], Default: 0.0

N = 2
problem_large = HpProblem()
for i in range(N):
    problem_large.add_hyperparameter((-10.0, 10.0), f"x{i}")
problem_large

Out:

Configuration space object:
  Hyperparameters:
    x0, Type: UniformFloat, Range: [-10.0, 10.0], Default: 0.0
    x1, Type: UniformFloat, Range: [-10.0, 10.0], Default: 0.0
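
If you want to double-check the starting point of a problem, recent DeepHyper versions expose a default_configuration property; this is an assumption to verify against your installed version:

# Assumption: HpProblem exposes `default_configuration` as a dict,
# here mapping every x_i to its default value 0.0.
print(problem_large.default_configuration)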

Then, we set up the search and execute it:

from deephyper.evaluator import Evaluator
from deephyper.evaluator.callback import TqdmCallback
from deephyper.search.hps import CBO

results = {}
max_evals = 20
evaluator_small = Evaluator.create(
    run_small, method="serial", method_kwargs={"callbacks": [TqdmCallback()]}
)
search_small = CBO(problem_small, evaluator_small, random_state=42)
results["Small"] = search_small.search(max_evals)

Out:

  0%|          | 0/20 [00:00<?, ?it/s]
  5%|5         | 1/20 [00:00<00:00, 10894.30it/s, failures=0, objective=-35.2]
 10%|#         | 2/20 [00:00<00:00, 196.12it/s, failures=0, objective=-23.6]
 15%|#5        | 3/20 [00:00<00:00, 171.98it/s, failures=0, objective=-23.6]
 20%|##        | 4/20 [00:00<00:00, 162.69it/s, failures=0, objective=-23.6]
 25%|##5       | 5/20 [00:00<00:00, 158.97it/s, failures=0, objective=-23.6]
 30%|###       | 6/20 [00:00<00:00, 156.15it/s, failures=0, objective=-.545]
 35%|###5      | 7/20 [00:00<00:00, 53.52it/s, failures=0, objective=-.545]
 40%|####      | 8/20 [00:00<00:00, 53.52it/s, failures=0, objective=-.545]
 45%|####5     | 9/20 [00:00<00:00, 53.52it/s, failures=0, objective=-.545]
 50%|#####     | 10/20 [00:00<00:00, 53.52it/s, failures=0, objective=-.545]
 55%|#####5    | 11/20 [00:00<00:00, 53.52it/s, failures=0, objective=-.469]
 60%|######    | 12/20 [00:00<00:00, 53.52it/s, failures=0, objective=-.469]
 65%|######5   | 13/20 [00:00<00:00, 13.14it/s, failures=0, objective=-.469]
 70%|#######   | 14/20 [00:01<00:00, 13.14it/s, failures=0, objective=-.469]
 75%|#######5  | 15/20 [00:01<00:00, 13.14it/s, failures=0, objective=-.469]
 80%|########  | 16/20 [00:01<00:00,  7.85it/s, failures=0, objective=-.469]
 85%|########5 | 17/20 [00:01<00:00,  7.85it/s, failures=0, objective=-.469]
 90%|######### | 18/20 [00:02<00:00,  6.56it/s, failures=0, objective=-.469]
 95%|#########5| 19/20 [00:02<00:00,  6.56it/s, failures=0, objective=-.469]
100%|##########| 20/20 [00:02<00:00,  5.42it/s, failures=0, objective=-.469]
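
The search returns a pandas DataFrame of the evaluated configurations. As a quick inspection step (not part of the original example), you can print the best rows; the "objective" column is standard, but hyperparameter column names may vary across DeepHyper versions:

# Hedged inspection sketch: "objective" is the column plotted below;
# hyperparameter column names (e.g. x0) may be prefixed in newer versions.
print(results["Small"].nlargest(3, "objective"))

Next, we run the same search on the large problem without transfer learning: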
evaluator_large = Evaluator.create(
    run_large, method="serial", method_kwargs={"callbacks": [TqdmCallback()]}
)
search_large = CBO(problem_large, evaluator_large, random_state=42)
results["Large"] = search_large.search(max_evals)

Out:

  0%|          | 0/20 [00:00<?, ?it/s]
  5%|5         | 1/20 [00:00<00:00, 35848.75it/s, failures=0, objective=-58.7]
 10%|#         | 2/20 [00:00<00:00, 184.64it/s, failures=0, objective=-58.7]
 15%|#5        | 3/20 [00:00<00:00, 140.12it/s, failures=0, objective=-58.7]
 20%|##        | 4/20 [00:00<00:00, 124.92it/s, failures=0, objective=-30.2]
 25%|##5       | 5/20 [00:00<00:00, 116.68it/s, failures=0, objective=-30.2]
 30%|###       | 6/20 [00:00<00:00, 48.75it/s, failures=0, objective=-30.2]
 35%|###5      | 7/20 [00:00<00:00, 48.75it/s, failures=0, objective=-30.2]
 40%|####      | 8/20 [00:00<00:00, 48.75it/s, failures=0, objective=-30.2]
 45%|####5     | 9/20 [00:00<00:00, 48.75it/s, failures=0, objective=-30.2]
 50%|#####     | 10/20 [00:00<00:00, 48.75it/s, failures=0, objective=-1.84]
 55%|#####5    | 11/20 [00:00<00:00, 26.15it/s, failures=0, objective=-1.84]
 60%|######    | 12/20 [00:00<00:00, 26.15it/s, failures=0, objective=-1.84]
 65%|######5   | 13/20 [00:00<00:00, 26.15it/s, failures=0, objective=-1.34]
 70%|#######   | 14/20 [00:01<00:00, 26.15it/s, failures=0, objective=-1.34]
 75%|#######5  | 15/20 [00:01<00:00,  8.32it/s, failures=0, objective=-1.34]
 80%|########  | 16/20 [00:01<00:00,  8.32it/s, failures=0, objective=-1.34]
 85%|########5 | 17/20 [00:01<00:00,  6.45it/s, failures=0, objective=-1.34]
 90%|######### | 18/20 [00:02<00:00,  6.45it/s, failures=0, objective=-1.34]
 95%|#########5| 19/20 [00:02<00:00,  5.60it/s, failures=0, objective=-1.34]
100%|##########| 20/20 [00:02<00:00,  5.00it/s, failures=0, objective=-1.34]
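
Now we create a third search on the large problem and transfer knowledge from the small one: fit_generative_model fits a generative model to the results of the small search, which is then used to bias the sampling of new configurations toward previously good regions: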
evaluator_large_tl = Evaluator.create(
    run_large, method="serial", method_kwargs={"callbacks": [TqdmCallback()]}
)
search_large_tl = CBO(problem_large, evaluator_large_tl, random_state=42)
search_large_tl.fit_generative_model(results["Small"])
results["Large+TL"] = search_large_tl.search(max_evals)

Out:

/Users/romainegele/Documents/Argonne/deephyper/deephyper/search/hps/_cbo.py:552: UserWarning: The value of q=0.9 is replaced by q_max=0.5 because a minimum of 10 results are required to perform transfer-learning!
  warnings.warn(
[<sdv.constraints.tabular.ScalarRange object at 0x2a7452130>]

Sampling rows: 100%|##########| 100/100 [00:00<00:00, 14671.04it/s]
Sampling rows: 100%|##########| 10000/10000 [00:00<00:00, 208719.61it/s]

(Before each suggested configuration the generative model samples 10000 candidate rows; the repeated "Sampling rows" progress bars and a pandas FutureWarning emitted from ctgan/data_transformer.py are omitted below for brevity.)

  0%|          | 0/20 [00:00<?, ?it/s]
  5%|5         | 1/20 [00:00<00:00, 26214.40it/s, failures=0, objective=-35.2]
 10%|#         | 2/20 [00:00<00:00, 31.59it/s, failures=0, objective=-25.8]
 15%|#5        | 3/20 [00:00<00:00, 23.81it/s, failures=0, objective=-24.9]
 20%|##        | 4/20 [00:00<00:00, 23.81it/s, failures=0, objective=-24.9]
 25%|##5       | 5/20 [00:00<00:00, 23.81it/s, failures=0, objective=-24.9]
 30%|###       | 6/20 [00:00<00:00, 18.73it/s, failures=0, objective=-1.18]
 35%|###5      | 7/20 [00:00<00:00, 18.73it/s, failures=0, objective=-1.18]
 40%|####      | 8/20 [00:00<00:00, 17.93it/s, failures=0, objective=-1.18]
 45%|####5     | 9/20 [00:00<00:00, 17.93it/s, failures=0, objective=-1.18]
 50%|#####     | 10/20 [00:00<00:00, 14.09it/s, failures=0, objective=-1.18]
 55%|#####5    | 11/20 [00:00<00:00, 14.09it/s, failures=0, objective=-1.18]
 60%|######    | 12/20 [00:01<00:01,  6.45it/s, failures=0, objective=-1.18]
 65%|######5   | 13/20 [00:01<00:01,  6.45it/s, failures=0, objective=-1.18]
 70%|#######   | 14/20 [00:02<00:01,  4.50it/s, failures=0, objective=-1.18]
 75%|#######5  | 15/20 [00:02<00:01,  4.13it/s, failures=0, objective=-1.18]
 80%|########  | 16/20 [00:02<00:01,  3.82it/s, failures=0, objective=-1.18]
 85%|########5 | 17/20 [00:03<00:00,  3.56it/s, failures=0, objective=-1.18]
 90%|######### | 18/20 [00:03<00:00,  3.31it/s, failures=0, objective=-1.18]
 95%|#########5| 19/20 [00:03<00:00,  3.00it/s, failures=0, objective=-1.18]
100%|##########| 20/20 [00:04<00:00,  2.97it/s, failures=0, objective=-1.18]

Finally, we compare the results and quickly see that transfer learning provided a significant speed-up of the search:

import matplotlib.pyplot as plt

plt.figure()

for strategy, df in results.items():
    x = list(range(len(df)))
    plt.scatter(x, df.objective, label=strategy, alpha=0.5)
    plt.plot(x, df.objective.cummax(), alpha=0.5)

plt.xlabel("Evaluations")
plt.ylabel("Objective")
plt.grid()
plt.legend()
plt.show()
[Figure: plot transfer learning for hps (objective versus evaluations for the Small, Large, and Large+TL strategies)]
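
In practice you would persist the results of the expensive search so they can be reloaded later; here is a minimal sketch using the standard pandas API (file names are hypothetical):

# Minimal sketch (hypothetical file names): save each results DataFrame,
# e.g. so the "Small" results can be reloaded for transfer learning later.
for strategy, df in results.items():
    df.to_csv(f"results_{strategy}.csv", index=False)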

Total running time of the script: (0 minutes 12.852 seconds)

