LM101-030: How to Improve Deep Learning Performance with Artificial Brain Damage (Dropout and Model Averaging)


Jun 08, 2015 · 32 min · Season 1 · Ep. 30

Episode description

Deep learning technology has developed rapidly over the past five years due in part to a variety of factors, such as better computing hardware, convolutional network algorithms, rectified linear units, and a relatively new learning strategy called "dropout," in which hidden unit feature detectors are temporarily deleted during the learning process. This episode introduces and discusses the concept of "dropout" as a way to improve deep learning performance, and connects "dropout" to the concepts of regularization and model averaging. For more details and background references, check out: www.learningmachines101.com !
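The dropout procedure described above can be sketched in a few lines. The following is a minimal NumPy illustration (not code from the episode): during training, each hidden unit is kept with some probability and zeroed otherwise, and surviving activations are rescaled so their expected value matches the full network used at test time. The function name and the 50% drop rate are illustrative choices, not prescriptions from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(activations, drop_prob=0.5, training=True):
    """Inverted dropout: temporarily delete hidden-unit feature detectors.

    During training, each unit is kept with probability (1 - drop_prob);
    kept activations are scaled by 1 / (1 - drop_prob) so the expected
    activation matches the full network. At test time the full network is
    used unchanged, which approximates averaging over the exponentially
    many "thinned" sub-networks sampled during training.
    """
    if not training or drop_prob == 0.0:
        return activations  # test time: use the full network
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob  # which units survive
    return activations * mask / keep_prob

hidden = np.ones((4, 8))                      # toy hidden-layer activations
dropped = dropout_forward(hidden, drop_prob=0.5)
```

With a 50% drop rate, each entry of `dropped` is either 0 (unit deleted this pass) or 2 (kept and rescaled), so the average activation across many passes stays near the original value of 1 — which is the sense in which dropout implicitly averages many smaller models.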
