Sunday, June 07, 2020

Running VGG16 on my own images

The aim of this post is to give you an idea of how well a pre-trained CNN can transfer its knowledge to your own photos. You will get a simple overview and can form your own opinion about a model assigning a description to each photo.

I decided to use the pre-trained VGG16 model on my own images. The suggestion comes from Jason Brownlee's excellent book, "Deep Learning for Computer Vision".

After downloading the model, I tried it with my own photos: some taken this afternoon in a nature reserve, some at home for the ladybird and the jay, and one in the Charles de Gaulle museum for the Solex.
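The exact script is not reproduced in this post, but a minimal Keras sketch along these lines loads the pre-trained VGG16 weights and prints the top prediction for one photo (the file name "photo.jpg" is a placeholder, and the TensorFlow/Keras setup is my assumption, not taken from the original post):

from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing.image import load_img, img_to_array
import numpy as np

# Load VGG16 with its ImageNet weights (downloaded on first use)
model = VGG16(weights="imagenet")

# Load a photo and resize it to the 224x224 input expected by VGG16
image = load_img("photo.jpg", target_size=(224, 224))
image = img_to_array(image)
image = np.expand_dims(image, axis=0)  # add the batch dimension
image = preprocess_input(image)        # apply VGG16-specific preprocessing

# Predict and decode the most likely ImageNet class
preds = model.predict(image)
_, name, prob = decode_predictions(preds, top=1)[0][0]
print(f"{name} ({prob * 100:.2f}%)")

Running such a script on each photo produces labels and confidence scores like the captions shown below.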

The results are below; each caption is the model's prediction.
daisy (98.72%)

monarch (39.78%)

picket_fence (17.62%)

ladybug (64.89%)

jay (34.68%)

tricycle (18.96%)
Apart from the orchid, which the model associates with a "picket fence", and the Solex, which it associates with a "tricycle", all the other images are correctly identified.
