PyTorch MNIST¶
The pytorch_mnist.py example demonstrates the integration of ClearML into code which uses PyTorch. It trains a simple deep neural network on the PyTorch built-in MNIST dataset. This example script uses ClearML automatic logging and explicit reporting, which allows you to add customized reporting to your code. In the example script, we call the Logger.report_scalar method to demonstrate explicit reporting. When the script runs, it creates an experiment named pytorch mnist train, which is associated with the examples project.
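The experiment is created when the script initializes a ClearML Task. The snippet below is a minimal sketch of that setup, using the project and task names mentioned above; the rest of the script's setup (argparse, data loading, training) is omitted here.

from clearml import Task

# Initializing a Task registers this run with the ClearML server; automatic
# logging of argparse arguments, PyTorch models, and console output starts here.
task = Task.init(project_name="examples", task_name="pytorch mnist train")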
Scalars¶
In the example script’s train function, the following code explicitly reports scalars to ClearML:
Logger.current_logger().report_scalar(
    "train", "loss", iteration=(epoch * len(train_loader) + batch_idx), value=loss.item())
In the test method, the code explicitly reports loss and accuracy scalars.
Logger.current_logger().report_scalar(
    "test", "loss", iteration=epoch, value=test_loss)
Logger.current_logger().report_scalar(
    "test", "accuracy", iteration=epoch, value=(correct / len(test_loader.dataset)))
Hyperparameters¶
ClearML automatically logs command line options when you use argparse. They appear in CONFIGURATIONS > HYPER PARAMETERS > Args.
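The sketch below illustrates the idea; the specific flags are assumptions in the spirit of the example, not necessarily the exact ones defined in pytorch_mnist.py. Once Task.init has been called, ClearML captures whatever argparse parses without any additional code.

import argparse
from clearml import Task

task = Task.init(project_name="examples", task_name="pytorch mnist train")

parser = argparse.ArgumentParser(description="PyTorch MNIST Example")
parser.add_argument("--batch-size", type=int, default=64)
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--lr", type=float, default=0.01)
# The parsed values are logged automatically and shown under
# CONFIGURATIONS > HYPER PARAMETERS > Args
args = parser.parse_args()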
Log¶
Text printed to the console for training progress, as well as all other console output, appears in RESULTS > LOG.
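No extra reporting calls are needed for this; for example, an ordinary progress message in the style of the standard PyTorch MNIST example is captured automatically (a trivial sketch, assuming Task.init has already been called as shown earlier):

# Anything written to stdout/stderr while the Task is active is captured
# and shown in RESULTS > LOG, alongside ClearML's own messages.
print("Train Epoch: 1 [0/60000 (0%)]\tLoss: 2.302585")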
Artifacts¶
Model artifacts associated with the experiment appear in the experiment info panel (in the EXPERIMENTS tab), and in the model info panel (in the MODELS tab).
The experiment info panel shows model tracking, including the model name and design (in this case, no design was stored).
The model info panel contains the model details, including the model URL, framework, and snapshot locations.
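Model snapshots are picked up through ClearML's automatic PyTorch integration. The following is a minimal sketch of the kind of save call that triggers it; the stand-in model and the mnist_cnn.pt filename are assumptions for illustration, not necessarily what the script uses.

import torch
import torch.nn as nn
from clearml import Task

task = Task.init(project_name="examples", task_name="pytorch mnist train")

model = nn.Linear(784, 10)  # stand-in for the example's network
# With a Task active, ClearML hooks torch.save and registers the saved
# file as an output model of the experiment, so it appears in the MODELS
# tab with its framework and snapshot location.
torch.save(model.state_dict(), "mnist_cnn.pt")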