PyTorch MNIST

The pytorch_mnist.py example demonstrates the integration of Trains into code that uses PyTorch. It trains a simple deep neural network on the PyTorch built-in MNIST dataset. The example script uses Trains automatic logging together with explicit reporting, which allows you to add customized reporting to your code. In the example script, we call the Logger.report_scalar method to demonstrate explicit reporting. When the script runs, it creates an experiment named pytorch mnist train, which is associated with the examples project.
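
Connecting the script to Trains takes a single call to Task.init near the top of the script. A minimal sketch, using the project and experiment names the example creates:

from trains import Task

# Creating a Task connects the script to Trains and starts automatic logging.
task = Task.init(project_name='examples', task_name='pytorch mnist train')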

Scalars

In the example script's train function, the following code explicitly reports scalars to Trains:

from trains import Logger
# Report the training loss once per batch, at a globally increasing iteration.
Logger.current_logger().report_scalar(
    "train", "loss", iteration=(epoch * len(train_loader) + batch_idx), value=loss.item())

In the test function, the code explicitly reports loss and accuracy scalars:

# Report the average test loss and accuracy once per epoch.
Logger.current_logger().report_scalar(
    "test", "loss", iteration=epoch, value=test_loss)
Logger.current_logger().report_scalar(
    "test", "accuracy", iteration=epoch, value=(correct / len(test_loader.dataset)))

Hyperparameters

Trains automatically logs command line arguments parsed with argparse. They appear in the experiment details, HYPER PARAMETERS tab.
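
A minimal sketch of such a parser; the argument names and defaults here are illustrative, and the actual set is defined in pytorch_mnist.py:

import argparse

# Trains hooks argparse, so these arguments are logged with no extra code.
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, help='input batch size for training')
parser.add_argument('--epochs', type=int, default=10, help='number of epochs to train')
parser.add_argument('--lr', type=float, default=0.01, help='learning rate')
args = parser.parse_args()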

Log

Text printed to the console during training, as well as all other console output, appears in the RESULTS tab, LOG sub-tab.
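
For example, an ordinary print call in the training loop, such as the illustrative sketch below (epoch and loss are the loop variables in the train function), is captured without any Trains-specific code:

# Standard output is captured automatically and shown in the LOG sub-tab.
print('Train Epoch: {}\tLoss: {:.6f}'.format(epoch, loss.item()))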

Artifacts

Trains tracks the input and output models with the experiment, but the Trains Web (UI) shows the model details separately.

Input model

In the experiment details, ARTIFACTS tab, Input Model area, you can see the input model that Trains logged. Since the example script loads the input model, Trains stores the input model data in Trains Server.
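
Trains hooks the PyTorch load calls, so loading a stored checkpoint is all it takes to register an input model. A minimal sketch, with a hypothetical checkpoint path:

import torch

# Trains hooks torch.load; loading a checkpoint registers it as the
# experiment's input model in Trains Server.
state_dict = torch.load('mnist_cnn.pt')  # hypothetical checkpoint path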

In the model details (which appear when you click the model name), GENERAL tab, you can see the input model location (URL) and other general information about the model.

Output model

Trains logs the output model, providing the model name and output model configuration in the ARTIFACTS tab, Output Model area.
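
Similarly, Trains hooks torch.save, so saving a checkpoint registers it as the experiment's output model. A minimal sketch, with a stand-in network and a hypothetical file name:

import torch
import torch.nn as nn

# A stand-in model; in the example script this is the trained MNIST network.
model = nn.Linear(784, 10)

# Trains hooks torch.save; the saved checkpoint is registered as the
# experiment's output model, and its storage location (URL) is recorded.
torch.save(model.state_dict(), 'mnist_cnn.pt')  # hypothetical file name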

In the model details (which appear when you click the model name), GENERAL tab, you can see the following:

  • The output model location (URL).
  • Model snapshots / checkpoint model locations (URLs).
  • The experiment that created the model.
  • Other general information about the model.