Tracking Experiments and Visualizing Results

While an experiment is running, and any time after it finishes, you can track it and visualize the results in the Trains Web-App (UI).

The experiment info panel

Tracking information and results appear in the experiment info panel. The info panel slides open when you click an experiment. While the info panel is open, you can no longer see all of the experiment table's columns. The only column that appears (to the right of the info panel) is the one currently used to sort the table; its header becomes a drop-down list of all sortable columns. The filter control becomes a list of all filterable columns.

Experiment details are organized in the following info panel tabs: EXECUTION, CONFIGURATION, ARTIFACTS, INFO, and RESULTS.

The info panel also shows tags, which are user-defined, color-coded descriptive labels you can apply to experiments (and to models). These tags also appear in the experiment table when the info panel is closed.

Tracking

Execution

Source code

Source code information includes:

  • The experiment's repository, commit ID, script path, and working directory.
  • Uncommitted changes.
  • Installed Python packages and their versions.

Base Docker image

The base Docker image is a pre-configured Docker image that Trains Agent uses to execute this experiment remotely (see Building Docker containers).
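
For example, a minimal sketch of setting the base image from code, using the Trains SDK's Task.set_base_docker method (the project, task, and image names are illustrative):

    from trains import Task

    task = Task.init(project_name='examples', task_name='remote execution demo')

    # set the Docker image Trains Agent uses when it executes this
    # experiment remotely (the image name is illustrative)
    task.set_base_docker('nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04')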

Artifact output destination

An output destination you specify for storing model checkpoints (snapshots) and artifacts (see also default_output_uri in the configuration file, and the output_uri parameter of Task.init).
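
For example, a minimal sketch of setting the output destination in code (the project, task, and bucket names are illustrative):

    from trains import Task

    # output_uri sets where model checkpoints (snapshots) and artifacts
    # are uploaded; the bucket path is illustrative
    task = Task.init(
        project_name='examples',
        task_name='artifact destination demo',
        output_uri='s3://my-bucket/experiments',
    )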

Log level

The logging level for the experiment (see the standard Python logging levels).

Configuration

All parameters and configuration objects appear in the CONFIGURATION tab.

Hyperparameters

In older versions of Trains Server, the CONFIGURATION tab was named HYPER PARAMETERS, and it contained all parameters. The renamed tab contains a HYPER PARAMETERS section, with subsections for the hyperparameter groups.

Hyperparameters appear in the HYPER PARAMETERS section of the CONFIGURATION tab, grouped by type as follows (several of these groups are demonstrated in the sketch after this list):

  • Command line arguments. In the Args section. Automatically logged argparse arguments, and all parameters from older experiments, except TensorFlow DEFINEs. Hover over a parameter, and its type, description, and default value appear, if they were provided.

  • TensorFlow definitions. In the TF_DEFINE section. Automatic TensorFlow logging.

  • Parameter dictionaries. In the General section. Dictionaries connected to the Task by calling the Task.connect method.

  • Environment variables. In the Environment section. Logged if you set the TRAINS_LOG_ENVIRONMENT environment variable (see this FAQ).

  • Custom-named parameter groups. See the name parameter of the Task.connect method.

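A minimal sketch showing how several of these groups are populated (the project, task, and parameter names are illustrative):

    from argparse import ArgumentParser

    from trains import Task

    task = Task.init(project_name='examples', task_name='hyperparameter demo')

    # argparse arguments are logged automatically and appear in the Args section
    parser = ArgumentParser()
    parser.add_argument('--batch-size', type=int, default=32, help='training batch size')
    args = parser.parse_args()

    # a connected dict appears in the General section
    params = task.connect({'learning_rate': 1e-3, 'epochs': 10})

    # a dict connected with a name appears in its own custom-named section
    preprocessing = task.connect({'resize': 256, 'normalize': True}, name='Preprocessing')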

Note that once an experiment has run and is stored in Trains Server, any of these hyperparameters may also reflect values set by modifying the experiment.

User properties

User properties allow you to store any descriptive information in key-value pair format. They are editable in any experiment, except experiments whose status is Published (read-only).
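
As a sketch, assuming a Trains SDK version that exposes the Task.set_user_properties method (the property names and values are illustrative):

    from trains import Task

    task = Task.init(project_name='examples', task_name='user properties demo')

    # user properties are editable key-value pairs that appear in the
    # USER PROPERTIES section of the CONFIGURATION tab
    task.set_user_properties(dataset_version='v2', reviewed_by='alice')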

Configuration objects

Trains tracks experiment (Task) model configuration objects, which appear in Configuration Objects > General. This includes automatically tracked configurations, and those explicitly connected to a Task in code (see Task.connect_configuration). Trains supports providing a name for a Task model configuration (see the name parameter of Task.connect_configuration).

In older versions of Trains Server, the Task model configuration appeared in the ARTIFACTS tab, MODEL CONFIGURATION section. Task model configurations now appear in the Configuration Objects section, in the CONFIGURATION tab.
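
A minimal sketch of connecting a named configuration object (the project, task, file, and configuration names are illustrative):

    from trains import Task

    task = Task.init(project_name='examples', task_name='configuration demo')

    # the connected configuration appears in the CONFIGURATION tab,
    # under Configuration Objects, with the given name
    config_path = task.connect_configuration('model_config.yaml', name='Model Config')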

Artifacts

Artifacts tracked with an experiment appear in the ARTIFACTS tab and include models and other artifacts.

Models

The input and output models appear in the ARTIFACTS tab, which shows each model's name, its model design, and the name of the experiment that created the model (stored it in Trains Server). In Trains, models are associated with the experiment, but the model details, including the model design, class enumeration, and general information about the model, are in the MODELS tab. The model name is a hyperlink to those details.

Data audit

Artifacts which are uploaded and dynamically tracked by Trains appear in the DATA AUDIT section. They include the file path, file size, hash, and metadata stored with the artifact.
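
For example, a sketch of dynamic tracking with the Trains SDK's Task.register_artifact method, which tracks a pandas DataFrame (the names and metadata are illustrative):

    import pandas as pd

    from trains import Task

    task = Task.init(project_name='examples', task_name='data audit demo')

    df = pd.DataFrame({'id': [1, 2, 3], 'label': ['cat', 'dog', 'bird']})

    # register_artifact tracks the DataFrame dynamically; Trains uploads a
    # new snapshot whenever its content changes, and it appears in DATA AUDIT
    task.register_artifact('training samples', df, metadata={'source': 'demo'})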

Other

Other artifacts, those which are uploaded but not dynamically tracked after the upload, appear in the OTHER section. They include the file path, file size, and hash.
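
A sketch of a one-time upload with the Trains SDK's Task.upload_artifact method (the artifact name and file path are illustrative):

    from trains import Task

    task = Task.init(project_name='examples', task_name='artifact upload demo')

    # upload_artifact stores a one-time snapshot that is not tracked after
    # the upload; it appears in the OTHER section
    task.upload_artifact('predictions', artifact_object='predictions.csv')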

General information

General experiment information is in the INFO tab. This includes information describing the stored experiment: the creation, start, and last update dates and times, the user who created the experiment, the project name, the experiment state (status), and whether the experiment is archived.

Experiment results

All experiment results appear in the RESULTS tab, including the log, scalars, other plots, and debug samples.

Viewing modes

The Trains Web (UI) supports two viewing modes for results: full screen (full screen width showing results only) and info panel (adjustable width sliding panel showing all experiment details).

To switch between viewing modes:

  • From the info panel to full screen - Click the full screen view button, or in the menu, click Results.

  • From full screen to the info panel - Click View in experiment table.

Log

The complete experiment log, containing everything printed to stdout and stderr, is in the LOG tab. The full log is downloadable. When viewing the log, you can click Jump to end.

Scalars and other plots

All scalars that Trains automatically logs, as well as those explicitly reported in code, appear in the RESULTS > SCALARS sub-tab. All other plots appear in the PLOTS sub-tab.
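
A minimal sketch of explicit scalar reporting (the title, series, and values are illustrative):

    from trains import Task

    task = Task.init(project_name='examples', task_name='scalar demo')
    logger = task.get_logger()

    # each title gets its own plot in RESULTS > SCALARS, with one curve per series
    for iteration in range(100):
        logger.report_scalar(title='loss', series='train',
                             value=1.0 / (iteration + 1), iteration=iteration)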

In results full screen mode, the Trains Web (UI) provides the following plot features:

  • Show / hide plots - Click HIDE ALL, and then show those you want to see.
  • Horizontal axis modes (scalars only) - Select ITERATIONS, RELATIVE (time since the experiment began), or WALL (local clock time).
  • Curve smoothing (scalars only) - Drag the slider, or type a smoothing factor between 0 and 0.999.

Each plot supports controls allowing you to better analyze the results. The list below describes the plot controls which may be available for any plot. Hover over a plot, and the controls appear.

  • Download plots as PNG files.
  • Pan around the plot - select pan, click the plot, and then drag.
  • Examine an area - select box select, and then drag to draw a dotted box around the area.
  • Examine an area - select lasso select, and then drag to draw a dotted lasso around the area.
  • Zoom into a section of the plot - select zoom, and then drag over the section; reset to return to the original scale.
  • Zoom in.
  • Zoom out.
  • Reset to autoscale after zooming.
  • Reset axes after a zoom.
  • Show / hide spike lines.
  • Show the closest data point on hover, including horizontal and vertical axis values.
  • Compare data on hover.
  • Switch to logarithmic view.
  • Hide / show the legend.
  • Download plot data to a JSON file for further analysis.

Debug samples

Debug samples automatically logged by Trains, and those explicitly reported in code, appear in RESULTS > DEBUG SAMPLES. Debug samples are grouped by metric and, within each metric, by iteration. You can view debug samples for other iterations using the Older images, New images, and Newest images controls.

Debug samples include:

  • Images - See the height and width in pixels, zoom, and download the image in the image viewer.
  • Audio - Play the audio and download it in the audio player.
  • Video - Play the video and download it in the video player.
  • Text files - Copy the text file URL and open the text file in a new browser tab.

For images, audio, and video, click the thumbnail to open the viewer / player.
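
A sketch of explicitly reporting debug samples with the Trains Logger (the file paths, title, and series names are illustrative; report_media availability depends on your Trains SDK version):

    from trains import Task

    task = Task.init(project_name='examples', task_name='debug sample demo')
    logger = task.get_logger()

    # a locally stored image reported as a debug sample; it appears in
    # RESULTS > DEBUG SAMPLES under its title, series, and iteration
    logger.report_image(title='samples', series='input', iteration=0,
                        local_path='sample.png')

    # audio, video, and text files can be reported as media debug samples
    logger.report_media(title='samples', series='audio', iteration=0,
                        local_path='sample.wav')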

Tagging experiments

Tags are user-defined, color-coded labels that you can add to experiments (and models), allowing you to easily identify and group experiments. A tag shows any text you choose, for any purpose. For example, add tags for the type of remote machine experiments execute on, label versions of experiments, or apply team names to organize experimentation.

  • Adding tags and changing tag colors:
    1. Click the experiment > Hover over the tag area > +ADD TAG or the menu
    2. Do one of the following:
      • Add a new tag - Type the new tag name > Create New.
      • Add an existing tag - Click a tag.
      • Change a tag's colors - Click Tag Colors > Click the tag icon > Background or Foreground > Pick a color > OK > CLOSE.
  • Remove a tag - Hover over the tag > X.
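
Tags can also be applied programmatically; a sketch using the Trains SDK's Task.add_tags method (the tag names are illustrative):

    from trains import Task

    task = Task.init(project_name='examples', task_name='tagging demo')

    # add user-defined tags to the experiment
    task.add_tags(['team-a', 'baseline'])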

Locating the experiment (Task) ID

  • In the info panel, top area, click ID. The Task ID appears.
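
The ID can then be used to access the stored experiment programmatically; a sketch (the ID string is a placeholder):

    from trains import Task

    # fetch a stored experiment using the Task ID copied from the UI
    task = Task.get_task(task_id='<your-task-id>')
    print(task.name)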