Comparing Experiments

The Trains Web-App (UI) provides deep experiment comparison, supported by visualization tools that allow you to easily locate, identify, and analyze differences. The experiment comparison includes the following:

  • Source code - Repository, branch, commit ID, script, and working directory.
  • Artifacts - Input model, configuration, and output model.
  • Hyperparameters - Side by side values comparison, and compare by metric.
  • Scalars - Compare specific values, and compare scalar series.
  • Plots - Compare any series data.
  • Debug samples - Compare by each iteration, and an image viewer to closely examine each image, audio sample, or video sample.

Selecting experiments to compare

To select experiments to compare:

  1. Select the checkbox of each experiment to compare, or select the checkbox at the top to select all experiments. After you select a second checkbox, the comparison footer appears.
  2. In the comparison footer, click COMPARE. The comparison page appears.

    The experiment on the left is the base experiment. Other experiments compare to the base experiment. The differences appear with their background highlighted.

  3. You can now compare the experiments.

Comparison page features

To assist your analysis, the comparison page supports the following features:

Adding experiments to the comparison

Add an experiment to the comparison - Click Add Experiment, and then start typing an experiment name. A dialog appears showing matching experiments to choose from. To add an experiment, click +. To remove an experiment, click its remove button.

Find the next or previous difference

  • Click Previous diff or Next diff.

Hiding identical fields

  • Move the Hide Identical Fields slider to on.

Search all text

Search all text in the comparison.

Choosing a different base experiment

Set a new base experiment; differences in the other experiments are shown relative to the new base.

Dynamic ordering of the compared experiments

  • Drag an experiment to a new position.

Remove an experiment from the comparison

Remove an experiment from the comparison.

Source code and artifacts

Compare source code and artifacts in the DETAILS tab.

Source code and artifact differences are highlighted and include the following:

  • The repository, branch, commit ID, script file name, and working directory.
  • Installed packages, sorted by package name (differences highlighted line-by-line).
  • Uncommitted changes, organized and sorted by file name (differences highlighted line-by-line).
  • Input model.
  • Configuration (differences highlighted line-by-line).
  • Output model.
  • Other artifacts, if any.
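As an illustration of the line-by-line highlighting applied to text sections such as Installed Packages, Python's standard difflib computes the same kind of diff. This is a sketch with hypothetical package lists, not the UI's actual implementation:

```python
import difflib

# Hypothetical installed-package lists from a base and a compared experiment.
base_packages = ["numpy==1.18.1", "torch==1.4.0", "trains==0.13.0"]
other_packages = ["numpy==1.18.4", "torch==1.4.0", "trains==0.14.1"]

# unified_diff marks the lines that would be highlighted in the comparison;
# unchanged lines (torch here) carry no +/- marker.
diff = list(difflib.unified_diff(base_packages, other_packages,
                                 fromfile="base", tofile="other", lineterm=""))
for line in diff:
    print(line)
```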

To locate the source differences:

  1. Click the DETAILS tab.
  2. Locate differences by either:
    • Expanding highlighted sections.
    • Jump directly to differences - In the header, click Previous diff or Next diff.

For example, in the image below, expanding EXECUTION, Installed Packages, and then Source shows that the binary (Python version) and branch in one experiment, and the branch, commit ID, and tag_name in another experiment, differ from the base experiment.


In this comparison, the rightmost experiment's branch is empty, and the tag_name is 0.13.0, because the experiment was modified and its source code was selected by choosing a tag instead of a commit ID. For more information about modifying experiments, see Selecting source code on the "Executing Experiments Remotely" page.


Hyperparameters

Compare hyperparameters as values, or visualize a metric compared to hyperparameters in a parallel coordinates plot.

Side by side values comparison

Hyperparameter value differences are highlighted line-by-line.

To view a side by side values comparison:

  1. Click the HYPER PARAMETERS tab.
  2. In the list (upper right), choose Values.
  3. Optionally, move the Hide Identical Fields slider to on to show only differences.
  4. Locate differences by either:
    • Clicking (Previous diff) or (Next diff).
    • Scrolling to see highlighted hyperparameters.

For example, the image below shows hyperparameter differences for batch_size, dropout, learning_rate, and max_steps.
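The underlying comparison can be pictured as a dictionary diff against the base experiment. The sketch below uses hypothetical hyperparameter values:

```python
# Hypothetical hyperparameters for a base experiment and one compared to it.
base = {"batch_size": 100, "dropout": 0.4, "learning_rate": 1e-4, "max_steps": 1000}
other = {"batch_size": 64, "dropout": 0.2, "learning_rate": 1e-5, "max_steps": 1000}

# Keys whose values differ are the ones the UI highlights; hiding identical
# fields corresponds to showing only these keys.
differing = sorted(name for name in base if other.get(name) != base[name])
print(differing)  # → ['batch_size', 'dropout', 'learning_rate']
```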

Compare by metric

Compare a metric to any combination of hyperparameters using a parallel coordinates plot.

To compare by metric:

  1. Click the HYPER PARAMETERS tab.
  2. In the list (upper right), choose Parallel Coordinates.
  3. In Performance metrics, expand a metric or monitored resource, and then click a variant.
  4. Select the metric values to use. Choose one of the following:
    • LAST - The final value, or the most recent value for experiments still in progress.
    • MIN - Minimal value.
    • MAX - Maximal value.
  5. In Parameters, select the checkboxes of the hyperparameters to compare.

For example, the image below shows the metric/variant accuracy_1 / accuracy_1 plotted against the hyperparameters batch_size, dropout, learning_rate, and max_steps.
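Conceptually, the plot draws one line per experiment across axes for the chosen metric value and the selected hyperparameters. A minimal sketch of that data layout, with hypothetical experiments and values:

```python
# One row per experiment; axes are the metric (its LAST value here) followed
# by the selected hyperparameters. All names and values are hypothetical.
experiments = [
    {"name": "exp-a", "accuracy_1": 0.81, "batch_size": 100, "dropout": 0.4},
    {"name": "exp-b", "accuracy_1": 0.86, "batch_size": 64,  "dropout": 0.2},
]
axes = ["accuracy_1", "batch_size", "dropout"]
rows = {exp["name"]: [exp[a] for a in axes] for exp in experiments}
print(rows["exp-b"])  # the line drawn for exp-b across the three axes
```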


Scalars

Visualize the comparison of scalars, which include metrics and monitored resources, in the SCALARS tab.

Compare specific values

To compare specific values:

  1. Click the SCALARS tab.
  2. In the list (upper right), choose either:
    • Last values (the final or most recent value)
    • Min Values (the minimal values)
    • Max Values (the maximal values)
  3. You can sort by variant.
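Each of the three choices reduces a scalar series to a single number per experiment. A sketch with a hypothetical series:

```python
# A hypothetical scalar series as (iteration, value) pairs, oldest first.
series = [(0, 0.10), (100, 0.55), (200, 0.48), (300, 0.61)]
values = [v for _, v in series]

last_value = values[-1]   # Last values: the final (or most recent) value
min_value = min(values)   # Min Values
max_value = max(values)   # Max Values
```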

Compare scalar series

Compare scalar series in plots and analyze differences using the visualization tools, which include hiding plots to more easily identify selected scalar series, adjusting the horizontal axis, smoothing curves, and using any of the plot controls available on each plot.

To compare scalar series:

  1. Click the SCALARS tab.
  2. In the list (upper right), choose Graph.
  3. To improve your comparison, use any of the following:
    • To locate scalars, click HIDE ALL, and then show those you want to see. You can also filter scalars by full or partial scalar name.
    • To change the horizontal axis, choose ITERATIONS, RELATIVE (time since the experiment began), or WALL (clock time).
    • To smooth a curve, move the Smoothing slider or type a smoothing factor from 0 to 0.999.
    • Use any of the plot controls which appear when you hover over the top of a plot.
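A common implementation of such a 0-to-0.999 smoothing factor is an exponentially weighted moving average; the sketch below assumes that behavior and is not necessarily the UI's exact formula:

```python
def smooth(values, weight):
    """Exponentially weighted moving average; weight in [0, 0.999] controls
    how much of the running average is retained at each point."""
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(smooth(noisy, 0.0))  # weight 0 leaves the curve unchanged
print(smooth(noisy, 0.9))  # a higher weight flattens the oscillation
```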


Plots

Visualize the comparison of any plot data that Trains automatically captures, or that you explicitly report in your experiments, in the PLOTS tab.

To compare plots:

  1. Click the PLOTS tab.
  2. To improve your comparison, use either of the following:
    • To locate plots, click HIDE ALL, and then show those you want to see. You can also filter plots by full or partial name.
    • Use any of the plot controls that appear when you hover over the top of a plot, including downloading the image, downloading the data as JSON, zooming, panning, and logarithmic scale. See Plot control features.

Debug samples

Compare debug samples at any iteration to verify your experiment is running as expected. Use the image viewer, audio player, or video player to examine debug samples.

To compare debug samples:

  1. Click the DEBUG SAMPLES tab. The debug samples appear. The most recent iteration is at the top.
  2. Locate debug samples by doing the following:
    • Filter by metric. In the Metric list, choose a metric.
    • Show other iterations. Click Older images, New images, or Newest images.
  3. To open a debug sample (image, audio, or video) in the viewer or player, click it.
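The filtering and iteration browsing above amount to selecting sample records by metric and ordering them newest first. A sketch with hypothetical sample records:

```python
# Hypothetical debug-sample records as the UI might list them.
samples = [
    {"metric": "validation", "iteration": 100, "uri": "s3://bucket/val_100.png"},
    {"metric": "training",   "iteration": 200, "uri": "s3://bucket/train_200.png"},
    {"metric": "validation", "iteration": 200, "uri": "s3://bucket/val_200.png"},
]

# Filter by the chosen metric, then show the most recent iteration first.
chosen = [s for s in samples if s["metric"] == "validation"]
chosen.sort(key=lambda s: s["iteration"], reverse=True)
print([s["iteration"] for s in chosen])  # → [200, 100]
```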