Sunday, June 28, 2020

CIFAR10: 10 classes of images of 32x32 pixels

This blog post:


  • shows the results of running a deep learning network on the CIFAR10 dataset (10 classes of 6000 images of 32x32 pixels);
  • reports how long training takes on my own computer;
  • demonstrates the impact on accuracy of adding dropout, data augmentation and batch normalisation to the model.
The example comes from Jason Brownlee's book “Deep Learning for Computer Vision”.

Scope

I ran two variants of a CNN (Convolutional Neural Network) on the CIFAR10 dataset:
  • the baseline model;
  • the baseline model augmented with dropout, data augmentation and batch normalisation. Data augmentation involves making copies of the examples in the training dataset with small random modifications.
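To make the idea of data augmentation concrete, here is a minimal NumPy sketch that produces randomly modified copies of a batch of 32x32 colour images. The flip and shift operations are illustrative choices of "small random modifications"; this is a stand-in, not the Keras code from the book.

```python
import numpy as np

def augment(images, rng, max_shift=4):
    """Return randomly modified copies of a batch of 32x32x3 images.

    Each copy may be flipped horizontally and shifted by a few pixels,
    mirroring the "small random modifications" described above.
    """
    out = np.empty_like(images)
    for i, img in enumerate(images):
        if rng.random() < 0.5:
            img = img[:, ::-1, :]                      # horizontal flip
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        img = np.roll(img, (dy, dx), axis=(0, 1))      # pixel shift (wrap-around for simplicity)
        out[i] = img
    return out

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32, 3))   # a dummy batch standing in for CIFAR10 images
augmented = augment(batch, rng)
```

In a real training loop these modified copies would be fed to the network instead of (or in addition to) the originals, so the model never sees exactly the same image twice.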

Results

The screenshots below give an overview of the results, together with the Python code:

Cross entropy loss and classification accuracy:

The above diagram shows the cross entropy and classification accuracy for the baseline model.

The above diagram shows the cross entropy and classification accuracy for the augmented model.
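As a minimal illustration of two of the techniques used in the augmented model, the NumPy sketch below shows what batch normalisation and (inverted) dropout compute on a batch of activations. The function names and parameters are my own, for illustration only; the actual model uses the corresponding Keras layers.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each feature to zero mean / unit variance over the batch."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def dropout(x, rate, rng):
    """Zero a random fraction `rate` of activations, scaling the rest up
    so the expected activation is unchanged (inverted dropout)."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
acts = rng.normal(5.0, 3.0, size=(64, 10))   # dummy activations: 64 samples, 10 features
normed = batch_norm(acts)                    # per-feature mean ~0, std ~1
dropped = dropout(normed, 0.5, rng)          # roughly half the activations zeroed
```

Batch normalisation keeps the distribution of activations stable during training, while dropout forces the network not to rely on any single unit; together with data augmentation this is what improves the accuracy over the baseline.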

CPU load:
You can observe that the GPU load is 0%: all the training ran on the CPU, which is far from ideal for neural network computation.

A CIFAR10 example written in JavaScript is available here; it visualises the ConvNetJS deep learning network as it trains.

Epilogue

This test clearly indicates the need to train deep neural networks on something other than my own computer: either Amazon Web Services, Google Colab, or a personal computer with substantial GPU capacity.
