The pytorch_tensorboardX.py example demonstrates the integration of Trains into code that uses PyTorch and TensorBoardX. It trains a simple deep neural network on the PyTorch built-in MNIST dataset. It creates a TensorBoardX
SummaryWriter object to log scalars during training, and scalars and debug samples during testing, and it prints a test text message to the console (to demonstrate Trains console capture). When the script runs, it creates an experiment named
pytorch with tensorboardX, which is associated with the
examples project in the Trains Web (UI).
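Connecting a script to Trains takes a single call at the top of the code. A minimal sketch, using the project and experiment names above (the import is guarded so the sketch also runs where trains is not installed):

```python
# Names as they appear in the Trains Web (UI) for this example.
project_name = 'examples'
task_name = 'pytorch with tensorboardX'

try:
    from trains import Task
    # Creates the experiment and starts automatic logging.
    task = Task.init(project_name=project_name, task_name=task_name)
except ImportError:
    task = None  # trains not installed; continue without experiment tracking
```

After this call, Trains automatically captures TensorBoardX output, argparse parameters, console output, and model checkpoints with no further changes to the script.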
The loss and accuracy metric scalar plots appear in RESULTS > SCALARS, along with the resource utilization plots, which are titled :monitor: machine.
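The scalar plots come from ordinary SummaryWriter calls, which Trains intercepts once Task.init has been called. A sketch of the logging pattern (the tag name and loss values are illustrative, and the import is guarded so the sketch runs without tensorboardX installed):

```python
try:
    from tensorboardX import SummaryWriter
    writer = SummaryWriter()
except ImportError:
    writer = None  # tensorboardX not installed; skip actual logging

losses = []
for step in range(3):
    loss = 1.0 / (step + 1)  # stand-in for the real training loss
    losses.append(loss)
    if writer is not None:
        # Trains captures add_scalar calls and plots them in RESULTS > SCALARS
        writer.add_scalar('Train/Loss', loss, step)
```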
Trains automatically logs command line options when you use
argparse. They appear in CONFIGURATIONS > HYPER PARAMETERS > Args.
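No extra code is needed for this beyond the usual parser setup; once Task.init has been called, Trains hooks argparse and records the parsed values. A sketch (the argument names are illustrative, not necessarily the exact ones in the example script):

```python
import argparse

parser = argparse.ArgumentParser(description='PyTorch MNIST example')
parser.add_argument('--batch-size', type=int, default=64)
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--lr', type=float, default=0.01)

# Parse an explicit list here so the sketch is reproducible;
# a real script would call parser.parse_args() on sys.argv.
args = parser.parse_args(['--epochs', '3'])
```

All three arguments, including the defaults that were not overridden, would appear in CONFIGURATIONS > HYPER PARAMETERS > Args.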
Text printed to the console for training progress, as well as all other console output, appears in RESULTS > LOG.
Model artifacts associated with the experiment appear in the experiment info panel (in the EXPERIMENTS tab), and in the model info panel (in the MODELS tab).
The experiment info panel shows model tracking, including the model name and design (in this case, no design was stored).
The model info panel contains the model details, including the model URL, framework, and snapshot locations.
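Model tracking works the same way: Trains hooks torch.save, so each checkpoint the script writes is registered as an output model with its snapshot location. A sketch (the file name and the stand-in model are illustrative, and the import is guarded so the sketch runs without torch installed):

```python
import os
import tempfile

try:
    import torch
    import torch.nn as nn
except ImportError:
    torch = None  # torch not installed; skip the actual save

snapshot_path = os.path.join(tempfile.gettempdir(), 'mnist_cnn.pt')
if torch is not None:
    model = nn.Linear(784, 10)  # stand-in for the example's network
    # Trains intercepts this call and records the snapshot location,
    # which then appears in the model info panel (MODELS tab).
    torch.save(model.state_dict(), snapshot_path)
```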