AutoKeras Integration

For your AutoKeras tasks, Trains can automatically log comprehensive information, including code source control, execution environment, hyperparameters, and resource monitoring, and can automatically record any scalars, histograms, and images reported to TensorBoard, Matplotlib, or Seaborn. See also the AutoKeras documentation.

Integrating Trains with your AutoKeras project allows you to:

  • Visualize experiment results in the Trains Web-App (UI).
  • Track and upload models.
  • Track model performance and create tracking leaderboards.
  • Compare experiments.

To install Trains, see Quick Start.

Tasks and experiments

In Trains, “Task” refers to the class in the Trains Python Client package, the object in your Python experiment script, and the object with which Trains Server and Trains Agent work. “Experiment” refers to your deep learning solution, including its connected components, inputs, and outputs, and is what you can view, analyze, compare, modify, duplicate, and manage using the Trains Web-App (UI). Therefore, a “Task” is effectively an “experiment”, and “Task (experiment)” encompasses both usages throughout the Trains documentation.

Visualizing experiment results

Trains supports detailed experiment results, which you can view in the Trains Web-App (UI). The example script in the trains repository demonstrates this. By adding TensorBoard callbacks, all recorded information is available to Trains.

In the example, we import trains and create a Task which connects the experiment to the Trains platform.

from trains import Task
task = Task.init(project_name="autokeras", task_name="autokeras imdb example with scalars")

Add TensorBoard callbacks.

tensorboard_callback_train = keras.callbacks.TensorBoard(log_dir='log')
tensorboard_callback_test = keras.callbacks.TensorBoard(log_dir='log')

Then use the callbacks when calling the AutoKeras fit method (here clf is the AutoKeras classifier created earlier in the script):

clf.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback_train])
clf.fit(x_test, y_test, epochs=2, callbacks=[tensorboard_callback_test])

When your experiment script runs, the Trains Task (experiment) connects it to the Trains platform, and tracking in Trains begins while the experiment is in progress.

For example, view scalar metrics.

More experiment visualizations

The Trains Python Client Logger class contains methods for explicit reporting which you can use to plot any additional scalars, plot any data in a variety of formats, upload images, set a default destination for images, and log text messages, as well as other features.

Task (experiment) Models

Trains can automatically track models produced by your AutoKeras project. To upload models, specify the output_uri parameter with the upload destination when calling the Task.init method.

task = Task.init(project_name="autokeras", task_name="autokeras imdb example with scalars",
                 output_uri="http://localhost:8081/")  # example destination, e.g. the Trains file server

View the model information in the experiment details panel, ARTIFACTS tab:

Tracking Model Performance

Use the Trains Web-App (UI) to easily create experiment leaderboards and quickly identify the best performing models. Customize your leaderboard by adding any metric or hyperparameter.

See our Tracking Leaderboards tutorial and Customize the experiments table in the "User Interface" section.

Model Development Insights

Use the Trains Web-App (UI) to view side-by-side comparisons of experiments. Easily locate the differences and their impact across experiment configuration parameters, metrics, scalars, other plots, and experiment details, including artifacts and debug samples (images, audio, and video).

Compare multiple experiments by selecting two or more experiments in the EXPERIMENTS table and clicking COMPARE.

For example, the following image shows how two experiments compare in their epoch_accuracy and epoch_loss behaviour: