Audio Classification - Jupyter Notebooks

The audio_classification_UrbanSound8K.ipynb example demonstrates integrating Trains into a Jupyter Notebook that uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification. The example calls TensorBoard methods during training and testing to report scalars, audio debug samples, and spectrogram visualizations; the spectrograms are plotted by calling Matplotlib methods. The example also demonstrates connecting a parameter dictionary to a Task and logging it. When the notebook runs, it creates an experiment named audio classifier, which is associated with the Audio Example project.
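
For reference, the Task initialization at the top of the notebook looks roughly like the following minimal sketch, using the project and experiment names described above:

from trains import Task

task = Task.init(project_name='Audio Example', task_name='audio classifier')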

Scalars

The accuracy, learning rate, and training loss scalars are automatically logged, along with the resource utilization plots (titled :monitor: machine). They appear in the Trains Web UI, in the RESULTS tab, SCALARS sub-tab.
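
Trains captures these scalars from the standard TensorBoard calls in the training and testing loops. A minimal sketch of such reporting, with placeholder tags and values standing in for the notebook's real metrics:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('./tensorboard_logs')
for epoch in range(10):
    # Placeholder values standing in for the real loss, learning rate, and accuracy
    writer.add_scalar('training loss', 1.0 / (epoch + 1), epoch)
    writer.add_scalar('learning rate', 0.001, epoch)
    writer.add_scalar('accuracy', epoch / 10.0, epoch)
writer.close()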

Debug samples

The audio samples and spectrogram plots are automatically logged and appear in the RESULTS tab, DEBUG SAMPLES sub-tab.
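
The audio debug samples come from TensorBoard's add_audio call, which Trains intercepts. A minimal sketch, assuming a SummaryWriter and using a synthetic waveform in place of a real UrbanSound8K clip:

import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('./tensorboard_logs')
sample_rate = 22050
# One second of synthetic audio standing in for an UrbanSound8K clip; values in [-1, 1]
waveform = torch.rand(1, sample_rate) * 2 - 1
writer.add_audio('audio sample', waveform, global_step=0, sample_rate=sample_rate)
writer.close()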

Audio samples

By double-clicking a thumbnail, you can play an audio sample.

Spectrogram visualizations

By double-clicking a thumbnail, you can view a spectrogram plot in the image viewer.
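
The spectrograms themselves are produced with Matplotlib, whose figures Trains logs automatically. A minimal sketch using a synthetic signal in place of a real audio clip:

import numpy as np
import matplotlib.pyplot as plt

sample_rate = 22050
# Synthetic one-second signal standing in for a real UrbanSound8K clip
signal = np.random.randn(sample_rate)
plt.specgram(signal, Fs=sample_rate)
plt.title('spectrogram')
plt.show()  # Trains picks up the displayed figure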

Hyperparameters

The example connects a parameter dictionary to the Task. These parameters are automatically logged and appear in the HYPER PARAMETERS tab.

configuration_dict = {'number_of_epochs': 10, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001}
configuration_dict = task.connect(configuration_dict)  # enabling configuration override by trains
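
Note that Task.connect returns the connected dictionary; when the experiment is later cloned and executed by a trains-agent, values edited in the UI override these defaults, so the code should use the returned dictionary rather than the original.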

Log

Text printed to the console during training, as well as all other console output, appears in the RESULTS tab, LOG sub-tab.