Tracking and Visualizing Results
While an experiment is running, and at any time after it terminates, you can track it and visualize the results in the Trains Web-App (UI). The experiment details pane shows the following:
- Source code - See details below.
- Artifacts - The initial weights input model, configuration, output model, and other artifacts.
- Log - stdout, stderr, and explicitly reported text.
- Scalars and plots of any data, using the Trains Web-App (UI) visualization tools.
- Debug samples, using the image viewer, audio player, and video player.
To improve experimentation tracking, Trains supports experiment (and model) tags.
In the experiment details pane, source code information is in the EXECUTION tab. It includes the following:
- The experiment's repository, commit ID, script path, and working directory.
- Uncommitted changes.
- Python packages and their versions.
- The output destination for model snapshots.
- The log level.
It also includes the base Docker image used for Trains Agent, if you specify one.
If you rerun the experiment, or reproduce it by cloning it, you can select different source code for execution; see Selecting source code on the "Executing Experiments Remotely" page. For hyperparameter information, see Tuning hyperparameters on the same page.
Experiment artifacts appear in the ARTIFACTS tab and include the following:
- Models - Appear in the Input Model and Output Model sections; the model name links to the model details.
- Data audition - Dynamic artifacts which, once uploaded, are monitored by Trains; changes to them are uploaded to Trains Server. Currently, this includes Pandas DataFrames only.
- Other artifacts - Static artifacts which, once uploaded, do not change.
For more information, see Artifacts.
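These artifact types correspond to two Trains SDK calls: `Task.register_artifact` for dynamic (data audition) artifacts and `Task.upload_artifact` for static ones. A minimal sketch, assuming `task` is the object returned by `Task.init(...)`; the helper name and artifact names are illustrative, not part of the SDK:

```python
def record_artifacts(task, dataframe, config):
    """Attach one dynamic and one static artifact to a Trains task.

    `task` is expected to expose the Trains Task API.
    """
    # Dynamic (data audition): Trains monitors the DataFrame and
    # re-uploads it to Trains Server whenever it changes.
    task.register_artifact(name="train_samples", artifact=dataframe)

    # Static: uploaded once, not monitored afterwards.
    task.upload_artifact(name="config", artifact_object=config)
```

Both artifacts then appear in the ARTIFACTS tab under their respective sections.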
General experiment information is in the INFO tab. It describes the stored experiment, including the creation, start, and last update date and time, the user who created the experiment, the project name, the experiment state (status), and whether the experiment is archived.
The experiment log, scalars, any other data as plots or tables, and debug samples are in the RESULTS tab.
The Trains Web-App (UI) provides two viewing modes for results.
- Details pane - All experiment details.
- Full screen - Results only (full screen width).
To switch between viewing modes:
- From the details pane to full screen - Click the full screen control, or in the menu () > Results.
- From full screen to the details pane - Click (View in experiment table).
In the experiment details pane, the complete experiment log, containing everything printed to stdout and stderr, is in the LOG tab. The full log is downloadable.
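Explicitly reported text in the log is typically produced with `Logger.report_text`; a sketch, assuming `logger` is the object returned by `task.get_logger()` (the helper name is illustrative):

```python
def log_note(logger, message):
    # report_text lines appear in the LOG tab alongside
    # the captured stdout and stderr.
    logger.report_text(message)
```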
Scalars and plots
Scalars and plots of any data are in the SCALARS and PLOTS tabs, respectively.
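Scalar series shown in the SCALARS tab are typically reported with `Logger.report_scalar`; a sketch under that assumption, where `logger` comes from `task.get_logger()` and the helper name is illustrative:

```python
def report_curves(logger, train_losses, val_losses):
    """Report two scalar series; sharing a title groups them
    into a single chart in the SCALARS tab."""
    for i, (tr, va) in enumerate(zip(train_losses, val_losses)):
        logger.report_scalar(title="loss", series="train", value=tr, iteration=i)
        logger.report_scalar(title="loss", series="validation", value=va, iteration=i)
```

The iteration argument becomes the default horizontal axis (ITERATIONS mode) in the UI.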
Full screen mode provides plot features, including:
- Hide scalar series (scalars and plots) - Click HIDE ALL, and then show the series you want to see.
- Horizontal axis modes (scalars only) - Click the axis control and select ITERATIONS, RELATIVE (time since the experiment began), or WALL (local clock time).
- Curve smoothing (scalars only) - Slide the slider, or type a smoothing factor between 0 and 0.999.
- Plot controls (scalars and plots)
The following table lists the plot controls, which may be available when you hover over a plot.
|Switch the horizontal axis to iterations, relative time (since the experiment began), or wall time (the actual date and time the experiment ran). Click the control and then select ITERATIONS, RELATIVE, or WALL.|
|Download plots as PNG files.|
|Zoom into a section of a plot (reset to the original plot using autoscale). Click the zoom control, click the plot, and then drag to highlight the area to zoom into. The plot refreshes, showing the zoomed-in area.|
|Pan a plot (move around it in any direction). Click the pan control, click the plot, and then drag.|
|Examine an area by drawing a dotted box around it. Click the box-select control, and then drag a dotted rectangle around a section of the plot.|
|Examine an area by drawing a dotted lasso around it. Click the lasso-select control, and then drag a dotted curve around a section of the plot.|
|Reset to autoscale after zooming.|
|Reset the axes after a zoom.|
|Show the closest data point on hover, to see one metric at an iteration. Click the control, and then hover over a series on the plot.|
|Compare data on hover, to see all metrics at an iteration. Click the control, and then hover over the plot to see all data points at that iteration.|
|Download plot data to a JSON file for further analysis.|
|Switch to a logarithmic view.|
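The smoothing factor above ranges from 0 to 0.999, which is consistent with an exponential moving average over the series; the exact algorithm the UI uses is an assumption here, but a typical implementation looks like this:

```python
def ema_smooth(values, factor):
    """Exponential moving average: factor=0 leaves the series
    unchanged; values near 0.999 flatten it heavily."""
    smoothed = []
    last = values[0]
    for v in values:
        # Each point blends the running average with the new value.
        last = factor * last + (1 - factor) * v
        smoothed.append(last)
    return smoothed
```

For example, `ema_smooth([1.0, 2.0, 3.0], 0.5)` yields `[1.0, 1.5, 2.25]`, while a factor of 0 returns the original series.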
Debug samples appear by iteration, starting with the most recent iteration, and are downloadable. Clicking a thumbnail opens the media in the viewer.
- Locate debug samples by doing the following:
- Filter by metric - In the Metric list, choose a metric.
- Show other iterations - Click (Older images), (Newer images), or (Newest images).
- To view a debug image in the viewer, click it.
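Debug images of this kind are usually produced with `Logger.report_image`; a sketch assuming that signature, with `logger` from `task.get_logger()` (the helper name, title, and series are illustrative):

```python
def report_debug_image(logger, iteration, image_path):
    # Debug samples are reported per iteration; the UI groups them
    # by metric (title) and shows the most recent iteration first.
    logger.report_image(
        title="samples",
        series="generated",
        iteration=iteration,
        local_path=image_path,
    )
```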
Tags are user-defined, color-coded labels that you can add to experiments (and models), allowing you to easily identify and group experiments. A tag shows any text you choose, for any purpose. For example, add tags for the type of remote machine experiments execute on, label versions of experiments, or apply team names to organize experimentation.
To add a new tag:
- In the experiments table, click the experiment.
- In the experiment's details pane, click +ADD TAG, or (menu) > Hover over Add Tag.
- Type the new tag name.
- Click (Create New). The tag is added.
To add a tag to an experiment:
- In the experiment's details pane, click +ADD TAG > click a tag, or (menu) > Hover over Add Tag > > click a tag.
To change a tag's foreground and/or background colors:
- In the experiment's details pane, click +ADD TAG > Tag Colors, or (menu) > Hover over Add Tag > Tag Colors.
- Click the tag icon > Background or Foreground > Pick a color > OK > CLOSE.
To remove a tag from an experiment:
- In the details pane, hover over the tag > x.
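Tags can also be added programmatically with `Task.add_tags`; a sketch, assuming `task` is the object returned by `Task.init(...)` (the helper name and tag values are illustrative):

```python
def tag_experiment(task, tags):
    # Tags added via the SDK appear in the Web-App exactly like
    # tags added through the experiment's details pane.
    task.add_tags(tags)
```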
Locating the experiment (Task) ID
- In the details pane, top area, click ID. The Task ID appears.