Trains Examples Overview
Trains is now ClearML
This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.
To help you learn and use Trains, we provide pre-loaded examples; their scripts are in the trains repository. These include ready-to-run examples and configurable Trains services.
Ready-to-run examples demonstrate features of the Trains Python Package, the Trains Web (UI), and the Trains Server. These features include Trains automatic logging, integrating Trains into code that uses frameworks and visualization tools, automation, and optimization. These examples are associated with the examples project, and their status is Published. You can clone, edit, and enqueue them.
Configurable service examples perform various continuing functions because they execute in Trains Agent services mode. Their status is Draft (editable); you configure them and then enqueue them to the services queue. For example, configure the monitoring service and enqueue it; once it begins execution, it sends alerts to your Slack channel on Task completion or failure.
All examples on this page, except the services examples, are ready to run.
Each examples folder in the trains repository contains a requirements.txt file for the example scripts in that folder.
- Manual Random Parameter Search - Executing an experiment multiple times, each time with different sets of random hyperparameters.
- Task Piping - Creating an instance of a Task from a template Task, customizing that instance, and enqueuing the customized instance to execute.
- PyTorch Distributed - Integrating Trains into code that uses torch.distributed, spawning Tasks in subprocesses which train a network and report artifacts, scalars, and hyperparameters to the main Task.
- Subprocess - Multiple subprocesses interacting and reporting to a main Task.
- Explicit Reporting - Jupyter Notebook - Several explicit reporting examples running in a Jupyter Notebook, including scalars, plots, media (audio, HTML, images, and video), and text.
- 2D Plots Reporting - Reporting series as 2D plots in histogram, confusion matrix, and 2D scatter plot formats.
- 3D Plots Reporting - Reporting series as a surface plot and as a 3D scatter plot.
- Artifacts Reporting - Uploading objects (other than models) to storage as experiment artifacts.
- Configuring Models - Configuring a model and defining class label enumeration.
- HTML Reporting - Reporting local HTML files and HTML by URL.
- Hyperparameters Reporting - The example hyper_parameters.py demonstrates automatic logging of command line options from argparse, TensorFlow DEFINEs, and parameter dictionaries which are explicitly connected to Tasks.
- Images Reporting - Reporting (uploading) images in several formats, including NumPy arrays, uint8, uint8 RGB, PIL Image objects, and local files.
- Manual Matplotlib Reporting - Reporting using Matplotlib and Seaborn in Trains.
- Media Reporting - Reporting images, audio, and video. Upload from a local path, provide a BytesIO stream, or provide the URL of media already uploaded to some storage.
- Plotly Reporting - Reporting Plotly plots in Trains by calling the Logger.report_plotly method and passing it a complex Plotly figure using the figure parameter.
- Scalars Reporting - Reporting scalars.
- Tables Reporting (Pandas and CSV Files) - Reporting tabular data from Pandas DataFrames and CSV files as tables.
- Text Reporting - Explicitly reporting (as compared to automatic logging) text.
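The explicit-reporting examples above share one pattern: obtain the Logger from the current Task and call its report_* methods. The sketch below assumes the trains Logger signatures `report_scalar(title, series, value, iteration)` and `report_text(msg)`; the stub logger replaces a live Trains Server so the sketch runs anywhere.

```python
def report_training(logger, losses):
    # Explicitly report one scalar series, then a text line, as the
    # scalars and text reporting examples do.
    for iteration, loss in enumerate(losses):
        logger.report_scalar(title="train", series="loss",
                             value=loss, iteration=iteration)
    logger.report_text("training finished")


class _StubLogger:
    """Stand-in for trains.Logger, so the sketch runs without a server."""
    def __init__(self):
        self.scalars, self.texts = [], []

    def report_scalar(self, title, series, value, iteration):
        self.scalars.append((title, series, value, iteration))

    def report_text(self, msg):
        self.texts.append(msg)


logger = _StubLogger()
report_training(logger, [0.9, 0.4, 0.2])
print(len(logger.scalars))  # 3
```

In real code, `logger = Task.current_task().get_logger()` replaces the stub.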
Keras and TensorFlow examples include legacy examples for versions of TensorFlow older than v2.0.
- Keras with Matplotlib - Jupyter Notebook - Trains running in Jupyter Notebook with Keras, Matplotlib, and automatic logging.
- Keras with TensorBoard - Integrating Trains into code which uses Keras and TensorBoard.
- Keras with TensorBoard - Jupyter Notebook - The "Keras with TensorBoard" example (as the preceding example) running in a Jupyter Notebook.
- Manual Model Upload - Trains tracking of a manually configured model created with Keras, including model checkpoints (snapshots), hyperparameters, and output to the console.
- Keras Tuner - Integrating Trains into code which uses the Keras Tuner Hyperband tuner to optimize hyperparameters for training a network on the CIFAR10 dataset. This example is described in the "Integration" section, on the "Keras Tuner" page.
- Matplotlib - Integrating Trains into code which uses Matplotlib to plot scatter diagrams, and show images.
- Matplotlib - Jupyter Notebook - The same "Matplotlib" example (as the preceding example) running in a Jupyter Notebook.
These examples demonstrate integrating Trains into code that uses PyTorch.
- Manual Model Upload - Trains tracking of a manually configured model created with PyTorch, including model checkpoints (snapshots), and output to the console.
- PyTorch MNIST - Integrating Trains into code that trains a simple deep neural network on the PyTorch built-in MNIST dataset.
- PyTorch TensorBoard Toy - Trains with PyTorch and TensorBoard to log debug sample images.
- PyTorch with Matplotlib - Trains with PyTorch and Matplotlib.
- PyTorch with TensorBoard - Trains with PyTorch and TensorBoard.
- PyTorch with TensorBoardX - Trains with PyTorch and TensorBoardX.
- Audio Preprocessing - Jupyter Notebook - Integrating Trains into a Jupyter Notebook which uses PyTorch and preprocesses audio samples.
- Audio Classification - Jupyter Notebooks - Integrating Trains into a Jupyter Notebook which uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification.
- Hyperparameter Optimization - Jupyter Notebook - Integrating Trains into a Jupyter Notebook which performs automated hyperparameter optimization.
- Image Classification - Jupyter Notebook - Integrating Trains into a Jupyter Notebook which uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for image classification.
- Tabular Data Preprocessing - Trains stores the downloaded training data as artifacts.
- See the pipeline example using tabular data, Pipeline with Concurrent Steps - Tabular Data.
- Text Classification - Jupyter Notebook - Integrating Trains into a Jupyter Notebook which trains a network to classify text in the torchtext AG_NEWS dataset, and then applies the model to predict the classification of sample text.
- scikit-learn with Joblib - Integrating Trains into code which uses joblib to store a model and model snapshot, and Matplotlib to create a scatter diagram.
- scikit-learn with Matplotlib - Integrating Trains into code which uses scikit-learn to determine cross-validated training and test scores, and matplotlib to plot the learning curves. Trains automatically logs the scatter diagrams for the learning curves.
- TensorBoardX - Integrating Trains into code which uses PyTorch and TensorBoardX.
- Manual Model Upload - Trains tracking of a manually configured model created with TensorFlow, including model checkpoints (snapshots), hyperparameters, and output to the console.
- TensorBoard PR Curve - Integrating Trains into code which uses TensorFlow and TensorBoard.
- TensorBoard Toy - Trains automatic logging of TensorBoard scalars, histograms, images, and text, as well as all other console output and TensorFlow DEFINES.
- TensorFlow MNIST - Integrating Trains into code which uses TensorFlow and Keras to train a neural network on the Keras built-in MNIST handwritten digits dataset.
- XGBoost - Integrating Trains into code that uses XGBoost to train a network on the scikit-learn iris classification dataset.
- Hyperparameter Optimization - Trains hyperparameter optimization automation.
- Basic Pipeline - Serialized Data - A basic pipeline to download data, process it, and train a network. The data is serialized.
- Pipeline with Concurrent Steps - Tabular Data - A pipeline with nodes that run concurrently to preprocess two sets of data, train on each, and select the better model. The data is tabular.
- Storage - Using the StorageManager class.
- Trains AWS Autoscaler - Optimizes AWS EC2 instance scaling according to the budget you configure.
- Cleanup Service - Deletes Archived Tasks, and their associated artifacts and debug samples, based on configurable parameter criteria.
- Jupyter Notebook Server Service - A Jupyter Notebook server.
- Monitoring Service Posting Slack Alerts - Monitors Task completion/failure based on configurable parameter criteria and posts alerts to your Slack channel.
- Trains Agent Use Case Examples - Trains Agent case examples, including running workers, explicit Task execution, building Docker containers, and launching Trains Agent in services mode.
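The Storage entry above centers on the StorageManager class; a sketch of how example code typically fetches a remote object is shown below. `get_local_copy` is the trains StorageManager method that downloads a remote object to a local cache and returns its local path; the stub class and the bucket URL here are illustrative stand-ins so the sketch runs without cloud credentials.

```python
def fetch(storage_manager, remote_url):
    # Download (and cache) a remote object, returning its local path.
    return storage_manager.get_local_copy(remote_url)


class _StubStorageManager:
    """Stand-in for trains StorageManager, so the sketch runs anywhere."""
    def get_local_copy(self, remote_url):
        # Pretend the object was downloaded into a local cache directory.
        return "/tmp/trains_cache/" + remote_url.rsplit("/", 1)[-1]


local_path = fetch(_StubStorageManager(), "s3://my-bucket/datasets/train.csv")
print(local_path)  # /tmp/trains_cache/train.csv
```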