Wednesday, November 11, 2020

Better Deep Learning - Jason Brownlee - Horizontal Voting Ensembles

 Preamble

  • This post is an extract from the book "Better Deep Learning" by Jason Brownlee. 
  • This post belongs to the "Better Predictions" part of the book: how to make better predictions using Horizontal Voting Ensembles (chapter 23).

Chapter 23: Models from Contiguous Epochs with Horizontal Voting Ensembles

  • The horizontal voting ensemble is a simple method in which the models saved over contiguous training epochs towards the end of a single training run are used together as an ensemble, giving more stable and, on average, better performance than randomly choosing a single final model.
  • It is challenging to choose a final neural network model when the model has high variance on the training dataset.
  • Horizontal voting ensembles provide a way to reduce variance and improve average model performance for models with high variance using a single training run.
  • Ensemble learning combines the predictions from multiple models.
  • An alternative source of models that may contribute to an ensemble is the state of a single model at different points during training.
  • The method involves using multiple models from a contiguous block of epochs at the end of training in an ensemble to make predictions; a minimal sketch of the saving step follows this list.  The approach was developed specifically for predictive modeling problems where the training dataset is relatively small compared to the number of predictions required of the model.
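A minimal sketch of the saving step in Keras, for illustration only: the dataset (a small multi-class blobs problem), network size and epoch counts below are assumptions consistent with the case study, not the book's exact code.

import os
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# small multi-class problem with a deliberately small training set (illustrative)
X, y = make_blobs(n_samples=1100, centers=3, n_features=2, random_state=2)
trainX, trainy = X[:100], to_categorical(y[:100], num_classes=3)

model = Sequential([
    Dense(25, input_dim=2, activation='relu'),
    Dense(3, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

os.makedirs('models', exist_ok=True)
n_epochs, n_save = 1000, 50
# train one epoch at a time; save the model state over the last n_save contiguous epochs
for epoch in range(n_epochs):
    model.fit(trainX, trainy, epochs=1, verbose=0)
    if epoch >= n_epochs - n_save:
        model.save('models/model_%03d.h5' % epoch)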

Case study

  • The case study is the same as the one used in the post "Resampling Ensembles": a multi-class classification problem.
  • In a second step, we train a single model for 1000 epochs and save it after each of the last 50 epochs into a directory (with the help of the h5py library), giving 50 saved models.
  • The last step consists of loading the 50 saved models and using them in a horizontal voting ensemble; a sketch of this step appears after this list.
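A minimal sketch of the loading-and-voting step, under the same assumptions as the sketch above (same blobs data and split, models saved as models/model_950.h5 through models/model_999.h5): the members' class probability vectors are summed and the argmax taken, and ensembles of growing size are scored on the held-out data.

from numpy import argmax, array
from sklearn.datasets import make_blobs
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import load_model

# same data split as in the previous sketch: 100 training rows, 1000 held out
X, y = make_blobs(n_samples=1100, centers=3, n_features=2, random_state=2)
testX, testy = X[100:], y[100:]

# load the 50 models saved over the last 50 contiguous epochs (950..999)
members = [load_model('models/model_%03d.h5' % e) for e in range(950, 1000)]

def ensemble_predict(members, testX):
    # horizontal voting: sum the members' class probability vectors, then argmax
    yhats = array([m.predict(testX, verbose=0) for m in members])
    return argmax(yhats.sum(axis=0), axis=1)

# score ensembles of growing size, as in the chapter's plot
for size in (1, 5, 10, 25, 50):
    yhat = ensemble_predict(members[:size], testX)
    print('ensemble size %d: accuracy %.3f' % (size, accuracy_score(testy, yhat)))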

Figure: single-model accuracy (blue dots) vs. accuracy of horizontal voting ensembles of varying size

Conclusion

  • The horizontal voting ensemble experiment did not clearly demonstrate that an ensemble of a given size sharply outperforms a randomly selected single model.
  • In the next chapter, "Cyclic Learning Rate and Snapshot Ensembles", we will experiment with another ensemble technique.
