Keras with Matplotlib - Jupyter Notebook

The jupyter.ipynb example demonstrates Trains' automatic logging of code running in a Jupyter Notebook that uses Keras and Matplotlib. It trains a simple deep neural network on the Keras built-in MNIST dataset, building a sequential model with a categorical crossentropy loss objective function, specifying accuracy as the metric, and using two callbacks: a TensorBoard callback and a model checkpoint callback. When the notebook runs, it creates an experiment named notebook example, which is associated with the examples project.
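
Below is a minimal sketch of how the Task initialization and model setup might look. The layer sizes, optimizer, and file paths are illustrative assumptions; only the project name, experiment name, loss, metric, and callbacks follow the example's description.

from trains import Task
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint

# Initializing the Task creates the "notebook example" experiment in the "examples" project
task = Task.init(project_name='examples', task_name='notebook example')

# Illustrative sequential model for MNIST images flattened to 784 features
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

# Both callbacks are captured automatically by Trains during training
callbacks = [
    TensorBoard(log_dir='/tmp/tensorboard_logs', histogram_freq=1),
    ModelCheckpoint('/tmp/weights.hdf5', save_best_only=True),
]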

Scalars

The loss and accuracy metric scalar plots appear in the RESULTS tab, SCALARS sub-tab, along with the resource utilization plots, which are titled :monitor: machine.
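
These scalars come from the Keras training loop. Continuing the sketch above, the training call might look like the following (the data variable names are illustrative; the batch size and epoch count match the hyperparameter dictionary shown in the Hyperparameters section):

# X_train/Y_train and X_test/Y_test are the prepared MNIST arrays (illustrative names)
model.fit(X_train, Y_train,
          batch_size=128,
          epochs=6,
          validation_data=(X_test, Y_test),
          callbacks=callbacks)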

Plots

The example calls Matplotlib methods to create several sample plots, and TensorBoard methods to plot histograms for layer density. They appear in the RESULTS tab, PLOTS sub-tab.
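
As a hedged illustration, a Matplotlib plot such as the following scatter would be captured automatically when plt.show() is called (the data and title are made up; num_scatter_samples is the hyperparameter described in the Hyperparameters section below):

import numpy as np
import matplotlib.pyplot as plt

# Trains captures the figure on plt.show() and adds it to RESULTS > PLOTS
n = 60  # e.g. task_params['num_scatter_samples']
plt.scatter(np.random.rand(n), np.random.rand(n), c=np.random.rand(n), alpha=0.5)
plt.title('scatter example')  # illustrative title
plt.show()

The layer density histograms come from the TensorBoard callback (for example, with histogram_freq=1 as in the setup sketch above).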

Debug samples

Calls to Matplotlib methods log debug sample images. They appear in the RESULTS tab, DEBUG SAMPLES sub-tab.
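
A minimal sketch, assuming X_train holds the prepared MNIST images: an image figure like the following is the kind of call that ends up under DEBUG SAMPLES.

import matplotlib.pyplot as plt

# The rendered image figure is uploaded as a debug sample on plt.show()
plt.imshow(X_train[0].reshape(28, 28), cmap='gray')
plt.show()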

Hyperparameters

We create a hyperparameter dictionary and connect it to the Task by calling Task.connect.

task_params = {'num_scatter_samples': 60, 'sin_max_value': 20, 'sin_steps': 30}
task_params = task.connect(task_params)

Later in the Jupyter Notebook, we add additional parameters to the same dictionary.

task_params['batch_size'] = 128
task_params['nb_classes'] = 10
task_params['nb_epoch'] = 6
task_params['hidden_dim'] = 512

Trains automatically logs the hyperparameters, as well as the TensorFlow DEFINEs. They appear in the HYPER PARAMETERS tab.
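
As a hedged illustration of a DEFINE (the DEFINEs logged for this notebook come from TensorFlow itself, but a user-defined flag created with the absl flags API would be captured in the same way):

from absl import flags

# Hypothetical flag, shown only to illustrate what a DEFINE looks like
flags.DEFINE_integer('hidden_units', 512, 'illustrative flag definition')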

Log

Text printed to the console during training appears in the RESULTS tab, LOG sub-tab.
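
For example (an illustrative line, not taken from the notebook):

print('Training completed')  # console output like this appears in the LOG sub-tab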

Artifacts

Trains tracks the experiment's input and output models, but the Trains Web (UI) shows the model details separately.

Output model

Trains logs the output model, providing the model name and output model configuration in the ARTIFACTS tab, Output Model area.

In the model details, which appear when you click the model name, you can see the following:

  • In the model details GENERAL tab, you can see:
    • The output model location (URL).
    • Model snapshot / checkpoint locations (URLs).
    • The experiment that created the model.
    • Other general information about the model.
  • The output model configuration, which appears in the model details NETWORK tab.