Manual Random Parameter Search

Trains is now ClearML

This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.

This example script demonstrates a random parameter search by automating the execution of an experiment multiple times, each time with a different set of random hyperparameters. That base experiment must run first. It is named Keras HP optimization base, and is created by running another Trains example script.

This example accomplishes the automated random parameter search by doing the following:

  1. Creating a parameter dictionary, which we connect to the Task by calling Task.connect so that the parameters are logged by Trains.
  2. Adding the random search hyperparameters and the parameters defining the search (e.g., the base experiment name and the number of times to run the experiment).
  3. Creating a Task object referencing the base experiment.
  4. For each set of parameters:
    1. Cloning the Task object, see Task.clone.
    2. Getting the newly cloned Task's parameters, see Task.get_parameters.
    3. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 1), see Task.set_parameters.
    4. Enqueuing the newly cloned Task to execute, see Task.enqueue.

When the example script runs, it creates an experiment named Random Hyper-Parameter Search Example which is associated with the examples project. This starts the parameter search, and creates the experiments:

  • Keras HP optimization base 0
  • Keras HP optimization base 1
  • Keras HP optimization base 2

When the experiments complete, you can compare their results.