Audio Classification - Jupyter Notebooks
Trains is now ClearML
This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.
The example audio_classification_UrbanSound8K.ipynb demonstrates integrating Trains into a Jupyter Notebook that uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification. The example calls TensorBoard methods during training and testing to report scalars, audio debug samples, and spectrogram visualizations; the spectrograms are plotted with Matplotlib. The example also demonstrates connecting parameters to a Task and logging them. When the script runs, it creates an experiment named audio classifier, which is associated with the Audio Example project.
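The integration starts by initializing a Task near the top of the notebook. A minimal sketch, using the project and experiment names described above (the notebook's exact call may differ):
from trains import Task

task = Task.init(project_name='Audio Example', task_name='audio classifier')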
Scalars
The accuracy, learning rate, and training loss scalars are automatically logged, along with the resource utilization plots (titled :monitor: machine), and appear in RESULTS > SCALARS.
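Scalars reported through TensorBoard's SummaryWriter are captured automatically by Trains. A minimal sketch of such reporting, with a placeholder value standing in for the real training loss:
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('./runs')
for iteration in range(100):
    loss = 1.0 / (iteration + 1)  # placeholder value for illustration
    writer.add_scalar('training loss', loss, iteration)
writer.close()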
Debug samples
The audio samples and spectrogram plots are automatically logged and appear in RESULTS > DEBUG SAMPLES.
Audio samples
Double-click a thumbnail to play an audio sample.
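Audio reported through TensorBoard's add_audio is captured as a debug sample. A minimal sketch, with a silent placeholder waveform standing in for an actual UrbanSound8K clip:
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('./runs')
waveform = torch.zeros(1, 8000)  # placeholder mono waveform, 1 second at 8 kHz
writer.add_audio('audio sample', waveform, global_step=0, sample_rate=8000)
writer.close()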
Spectrogram visualizations
Double-click a thumbnail to view a spectrogram plot in the image viewer.
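The spectrogram plots are Matplotlib figures; a figure reported through TensorBoard's add_figure is captured as a debug sample. A minimal sketch, with random data standing in for an actual audio signal:
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('./runs')
signal = np.random.randn(8000)  # placeholder signal for illustration
fig, ax = plt.subplots()
ax.specgram(signal, Fs=8000)  # Matplotlib spectrogram of the signal
writer.add_figure('spectrogram', fig, global_step=0)
writer.close()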
Hyperparameters
Trains automatically logs TensorFlow DEFINEs. A parameter dictionary is logged by connecting it to the Task using a call to the Task.connect method.
configuration_dict = {'number_of_epochs': 10, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001}
configuration_dict = task.connect(configuration_dict) # enabling configuration override by trains
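The dictionary returned by Task.connect is the one the script should use from that point on: when the experiment is cloned and executed by trains-agent, the returned dictionary contains any values overridden in the Trains Web UI.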
Parameter dictionaries appear in CONFIGURATIONS > HYPER PARAMETERS > General.
TensorFlow DEFINEs appear in the TF_DEFINE subsection.
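For reference, a minimal sketch of a DEFINE that Trains would capture in TF_DEFINE, assuming the absl-style flags that TensorFlow uses (this notebook itself uses PyTorch, so the flag here is illustrative only):
from absl import flags

flags.DEFINE_integer('batch_size', 4, 'Training batch size')  # hypothetical flag for illustration
FLAGS = flags.FLAGS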
Log
Text printed to the console during training, as well as all other console output, appears in RESULTS > LOG.