This page describes the experiment details panel of the Trains Web-App (Profile page, Experiments tab). The panel contains all information for an experiment, with easy access to model details, and is organized into the following tabs, explained in the Overview section:
- EXECUTION - Source control and execution environment.
- HYPER PARAMETERS - Hyperparameter names and values.
- ARTIFACTS - Input models, output models, other artifacts.
- INFO - General experiment information.
- RESULTS - The LOG, SCALARS (metrics), PLOTS (any data), and DEBUG IMAGES sub-tabs.
The Features section of this page highlights features and actions you can perform for an experiment.
The EXECUTION tab shows source control and execution environment information from the most recent run of the experiment.
SOURCE CODE section
In Draft status experiments, you can select a different repository, branch, commit, script, and / or working directory for the experiment. If the experiment is enqueued to run again, the new source code settings are used.
UNCOMMITTED CHANGES and INSTALLED PACKAGES sections
In Draft status experiments, you can discard the git diff and select different Python packages and / or versions for the experiment. If the experiment is enqueued to run again, the new Python packages and / or versions are used.
AGENT CONFIGURATION and OUTPUT sections
In Draft status experiments, you can select a different Docker image (if the experiment runs on a worker in Docker mode) and a different output destination for model snapshots, artifacts, and uploaded images. If the experiment is enqueued to run again, the new Docker image and output destination are used.
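The output destination shown in the OUTPUT section can also be set from code when the Task is created. A minimal sketch, assuming the trains SDK and a reachable trains-server (the project, task, and bucket names below are hypothetical):

```python
# Sketch only: the trains calls are commented out because they need a
# running trains-server; the Web-App's OUTPUT section mirrors `output_uri`.
output_uri = 's3://my-bucket/trains-output'  # hypothetical destination

# from trains import Task
# task = Task.init(
#     project_name='examples',   # hypothetical project name
#     task_name='docker demo',   # hypothetical task name
#     output_uri=output_uri,     # where snapshots / artifacts are uploaded
# )
```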
HYPER PARAMETERS tab
The HYPER PARAMETERS tab shows you the names and values of the hyperparameters. In Draft status experiments, you can add, change, or delete hyperparameters. If the experiment is enqueued to run again, the new hyperparameters are used.
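In code, hyperparameters reach this tab when they are connected to the Task. A minimal sketch, assuming the trains SDK (project and task names are hypothetical; the trains calls are commented out because they need a reachable trains-server):

```python
# Hyperparameters as a plain dict; once connected to a Task they appear in
# the HYPER PARAMETERS tab, and values edited in the Web-App override these
# when the experiment is enqueued to run again.
params = {'learning_rate': 0.001, 'batch_size': 32, 'epochs': 10}

# from trains import Task
# task = Task.init(project_name='examples', task_name='hp demo')  # hypothetical names
# params = task.connect(params)  # returns the (possibly overridden) values
```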
MODELS section - Input Model
This section shows the input model MODEL NAME and CREATING EXPERIMENT which are hyperlinks you can click to view model details and experiment details, respectively. The model is downloadable and the model's path can be copied to the clipboard. In Draft status experiments, you can switch to a different input model, and edit the MODEL CONFIGURATION. If the experiment is enqueued to run again, the new input model and / or configuration is used.
MODELS section - Output Model
This section shows the output model MODEL NAME, which is a hyperlink you can click to view the model details, as well as the output MODEL CONFIGURATION. The model is downloadable and the model's path can be copied to the clipboard.
DATA AUDIT section
This section shows registered (dynamically synchronized with Trains) artifacts. Each artifact is shown with its file path, file size, hash, and metadata. You can download registered artifacts.
This section shows uploaded (one-time, static) artifacts.
These artifacts can include:
- Numpy arrays, which are stored as NPZ files.
- Static Pandas DataFrames, which are one-time, stored versions of a Pandas DataFrame (not dynamically synchronized).
- Dictionaries, which are stored as JSON files.
- Local files.
- Local folders, which are stored as ZIP files.
- Images, which are stored as PNG files.
- Any other objects you store.
Each artifact is shown with its file path, file size, hash, and metadata. You can download the static artifacts.
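To make the storage formats above concrete, here is an illustrative sketch of how a dictionary becomes a JSON file and a local folder becomes a ZIP file. These helpers are not the trains SDK; with the SDK you would pass the object to task.upload_artifact and Trains handles the serialization:

```python
# Illustrative only: mirrors the on-disk formats listed above for uploaded
# artifacts (dict -> JSON file, local folder -> ZIP file). With the trains
# SDK you would instead call task.upload_artifact(name, artifact_object=obj).
import json
import os
import zipfile


def store_dict_as_json(d, path):
    """Serialize a dictionary the way a dict artifact is stored (JSON)."""
    with open(path, 'w') as f:
        json.dump(d, f)
    return path


def store_folder_as_zip(folder, zip_path):
    """Pack a local folder the way a folder artifact is stored (ZIP)."""
    with zipfile.ZipFile(zip_path, 'w') as zf:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full, folder))
    return zip_path
```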
The INFO tab shows general information about the experiment, including the dates and times of the experiment activities (creation, start, last update, completion), the last (most recent) iteration, and other information.
The RESULTS tab contains experiment results automagically captured by Trains, as well as explicit reporting you can add to your Python experiment scripts. To learn about explicit reporting, see our Explicit Reporting tutorial.
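As a concrete sketch of explicit reporting, assuming the trains SDK (the trains calls are commented out because they need a reachable trains-server; project and task names are hypothetical):

```python
# The computed values below are real; the trains calls that would send them
# to the RESULTS tab are commented out (they need a running trains-server).
losses = [1.0 / (i + 1) for i in range(5)]

# from trains import Task
# task = Task.init(project_name='examples', task_name='reporting demo')  # hypothetical names
# logger = task.get_logger()
# for iteration, loss in enumerate(losses):
#     logger.report_scalar(title='loss', series='train', value=loss,
#                          iteration=iteration)
```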
The LOG sub-tab shows the experiment log, including stdout, stderr, and any explicit reporting you add to your experiment script. The full log is downloadable, and the log is searchable by filtering for a search term.
The SCALARS sub-tab shows the scalar metrics plots that Trains automagically captures from metrics, resource monitoring, and visualization tools you may use, such as TensorBoard/TensorBoardX, Matplotlib, and Seaborn, as well as explicit scalar reporting you may add to your experiment script.
Each scalar metrics plot provides controls allowing you to better analyze your results. For example:
- Switch the horizontal axis to iterations, time relative to the start of the experiment, or wall time (the local time of the experiment).
- Smooth the curve.
- Download the plot as a PNG or JSON file.
- Zoom, pan, and view the closest data point as you hover over the plot.
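The smoothing control applies a running average to the curve. A TensorBoard-style exponential moving average is a reasonable model of this behavior (the exact formula the Web-App applies is an assumption):

```python
def smooth(values, weight=0.6):
    """TensorBoard-style exponential moving average.

    weight=0 leaves the curve unchanged; a weight close to 1 smooths heavily.
    (Illustrative model of the smoothing control, not the Web-App's code.)
    """
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed
```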
The PLOTS sub-tab shows plots of any data that Trains automagically captures from visualization tools, as well as explicit reporting you may add to your experiment script.
These plots provide the same controls as scalar metrics plots in the SCALARS sub-tab.
DEBUG IMAGES sub-tab
This sub-tab shows thumbnail previews of the debugging images stored by your experiment. You can zoom in on an image by clicking it.
View an experiment (Task) Id
- In the details panel, top area, click ID. The Task Id appears.
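The copied Id can also be used to fetch the same experiment programmatically. A sketch assuming the trains SDK (the calls are commented out because they need a reachable trains-server, and the Id is a placeholder):

```python
# Placeholder Id: paste the value copied from the details panel here.
task_id = 'replace-with-your-task-id'

# from trains import Task
# task = Task.get_task(task_id=task_id)
# print(task.name)  # the experiment's name
```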
The Trains Web-App supports features allowing you to manage your experiments. These features are available in the experiment details panel menu and in the experiments context menu on the Experiments Table page.
To manage an experiment (perform an experiment action):
- In the experiment details panel, click the menu, and perform any of the actions described in Experiments models on the Experiments Table page.
Modify experiments for tuning
Modify Draft status experiments for tuning by editing experiment details such as hyperparameters, input model, configuration, class enumeration, Git repository, Python packages and versions, Docker image, output destination, and log level.
- In the details panel, EXECUTION, HYPER PARAMETERS, or ARTIFACTS tabs - Hover over the area whose details you want to edit and then click EDIT. Add, change, or delete the details and then click SAVE.
Full screen mode
The details panel and its tabs are also viewable in full screen mode. In full screen, use auto refresh and additional features for filtering the SCALARS and PLOTS, such as filtering by metrics, to improve your tracking and experiment analysis.
To view results in full screen mode, do the following:
- In the details panel menu, click Results. Full screen mode appears.
- In the SCALARS and PLOTS sub-tabs, show / hide a scalar or plot by its title, show / hide all plots, or filter plots by title.
Plot controls and filtering
To use plot controls, do the following:
- In the RESULTS tab, SCALARS or PLOTS sub-tab, do any of the following:
- Manipulate the plot - Hover over a plot and the controls appear; click your required control and apply it if required. For example, zoom into a section of a plot by dragging from one point on the plot to another. The plot refreshes, showing the section you selected.
- Download the plot as a PNG or JSON file.
- Select an alternate horizontal axis or smooth the curve - Click settings and then your required axis (ITERATIONS, RELATIVE time since the experiment began, or WALL, which is the local date / time of the experiment) and / or smoothing.