Tuning Experiments Tutorial¶
In this tutorial, you learn how to tune an experiment. The experiment you tune is created by the pytorch_mnist.py example script.
Prerequisites¶
Clone the clearml repository.
Install the requirements for the PyTorch examples.
Install and configure ClearML Agent.
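A quick way to satisfy these prerequisites from a terminal (the repository URL is the public clearml GitHub repository; the requirements file location is an assumption based on the current repository layout and may differ):
git clone https://github.com/allegroai/clearml.git
pip install -r clearml/examples/frameworks/pytorch/requirements.txt
pip install clearml-agent
clearml-agent init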
Step 1. Run the experiment¶
In the examples/frameworks/pytorch directory, run the experiment script:
python pytorch_mnist.py
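The script registers itself with ClearML through a single Task.init call; the following sketch shows just that integration point (the project and task names match what the tutorial uses, but see pytorch_mnist.py itself for the full training code):
from clearml import Task

# Create the experiment "pytorch mnist train" in the "examples" project.
# ClearML then automatically captures the argparse arguments (the
# hyperparameters edited later in this tutorial), console output, and metrics.
task = Task.init(project_name='examples', task_name='pytorch mnist train')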
Step 2. Clone the experiment¶
Clone the experiment to create an editable copy for tuning.
In the ClearML Web-App (UI), on the Projects page, click the examples project card.
In the experiments table, right-click the experiment pytorch mnist train.
In the context menu, click Clone > CLONE. The newly cloned experiment appears and its info panel slides open.
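Cloning can also be scripted with the clearml SDK; a minimal sketch, assuming the experiment from Step 1 already exists in the examples project:
from clearml import Task

# Look up the original experiment by project and name.
original = Task.get_task(project_name='examples', task_name='pytorch mnist train')

# Create an editable draft copy (the UI names clones "Clone Of <name>" by default).
cloned = Task.clone(source_task=original, name='Clone Of pytorch mnist train')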
Step 3. Tune the cloned experiment¶
To demonstrate tuning, change two hyperparameter values.
In the info panel, click CONFIGURATIONS > HYPER PARAMETERS > Args, hover over the parameter section, and click EDIT.
Change the value of batch_size from 64 to 32.
Change the value of lr from 0.01 to 0.025.
Click SAVE.
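These edits have a programmatic equivalent; a sketch, assuming the clone from Step 2 exists and is still in draft mode (hyperparameters connected through argparse live under the Args section):
from clearml import Task

# Fetch the cloned draft experiment.
cloned = Task.get_task(project_name='examples', task_name='Clone Of pytorch mnist train')

# Overwrite the two hyperparameters; keys take the form "<section>/<parameter>".
cloned.set_parameters({'Args/batch_size': 32, 'Args/lr': 0.025})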
Step 4. Run a worker daemon listening to a queue¶
To execute the cloned experiment, use a worker, for example a worker daemon listening to a queue.
Note
For more information about workers, worker daemons, and queues, see the "Concepts and Architecture" page, Workers and Queues, and ClearML Agent.
Run the worker daemon on your local development machine.
Open a terminal session.
Run the following clearml-agent command, which runs a worker daemon listening to the default queue:
clearml-agent daemon --queue default
The response to this command is information about your configuration, the worker, and the queue. For example:
Current configuration (clearml_agent v0.16.0, location: /home/<username>/clearml.conf):
----------------------
agent.worker_id =
agent.worker_name = LAPTOP-PPTKKPGK
agent.python_binary =
agent.package_manager.type = pip
.
.
.
sdk.development.worker.report_period_sec = 2
sdk.development.worker.ping_period_sec = 30
sdk.development.worker.log_stdout = true

Worker "LAPTOP-PPTKKPGK:0" - Listening to queues:
+----------------------------------+---------+-------+
| id                               | name    | tags  |
+----------------------------------+---------+-------+
| 2a03daf5ff9a4255b9915fbd5306f924 | default |       |
+----------------------------------+---------+-------+

Running CLEARML-AGENT daemon in background mode, writing stdout/stderr to /home/<username>/.clearml_agent_daemon_outym6lqxrz.txt
Step 5. Enqueue the tuned experiment¶
Enqueue the tuned experiment.
In the ClearML Web-App (UI), in the experiments table, right-click the experiment Clone Of pytorch mnist train.
In the context menu, click Enqueue.
If the queue is not Default, in the queue list, select Default.
Click ENQUEUE. The experiment's status becomes Pending. When the worker fetches the experiment from the queue, the status becomes Running, and you can view its progress in the info panel. When the status becomes Completed, go to the next step.
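Enqueuing also has an SDK equivalent; a sketch, assuming the tuned clone is still a draft:
from clearml import Task

# Fetch the tuned draft experiment.
cloned = Task.get_task(project_name='examples', task_name='Clone Of pytorch mnist train')

# Push it onto the "default" queue; the worker daemon from Step 4 picks it up.
Task.enqueue(cloned, queue_name='default')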
Step 6. Compare the experiments¶
To compare the original and tuned experiments:
In the ClearML Web-App (UI), on the Projects page, click the examples project.
In the experiments table, select the checkboxes for the two experiments: pytorch mnist train and Clone Of pytorch mnist train.
On the menu bar at the bottom of the experiments table, click COMPARE. The experiment comparison window appears. All differences appear with a different background color to highlight them.
The experiment comparison window is organized in the following tabs:
DETAILS - The ARTIFACTS section, including input and output models with their network designs, and other artifacts; the EXECUTION section, including source code control information, installed Python packages and versions, uncommitted changes, and the Docker image name, which in this case is empty.
HYPER PARAMETERS - The hyperparameters and their values.
SCALARS - Scalar metrics with the option to view them as charts or values.
PLOTS - Plots of any data with the option to view them as charts or values.
DEBUG SAMPLES - Media including images, audio, and video uploaded by your experiment shown as thumbnails.
Examine the differences.
Compare the hyperparameters. In the HYPER PARAMETERS tab, expand ARGS. The hyperparameters batch_size and lr are shown with a different background color, and their values differ.
Compare the metrics. In the SCALARS tab, to the right of Add Experiment, select the plot or value comparison:
Graph - The scalar metric plots show both pytorch mnist train and Clone Of pytorch mnist train.
Last Values - Expand a metric and variant to see the last reported values side by side.
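The same comparison can be done in code; a sketch, assuming both experiments have completed (note that Task.get_task matches task_name as a pattern and returns the most recently updated match):
from clearml import Task

# Print the last reported scalar values for both experiments.
for name in ('pytorch mnist train', 'Clone Of pytorch mnist train'):
    task = Task.get_task(project_name='examples', task_name=name)
    # Returns a nested dict: {metric title: {series: {'last': ..., 'min': ..., 'max': ...}}}
    print(name, task.get_last_scalar_metrics())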
Next Steps¶
For more information about editing experiments, see Modifying Experiments in the User Interface section.