It's striking how some of the basic topics of machine learning remain poorly understood, even by researchers: despite being fundamental and widely used, they still seem mysterious. The fun thing about machine learning is that we build things that work, and only afterwards do we find out why they work!
Here, my goal is to investigate the uncharted territory behind some familiar machine learning concepts and show that, while these ideas may seem basic, they are in fact built from layer upon layer of abstraction. This is good practice in questioning the depth of our own knowledge.
In this article, we explore several key phenomena in deep learning that challenge our traditional understanding of neural networks.
- We start with batch normalization and the underlying mechanisms that are not yet fully understood.
- We examine the counterintuitive observation that overparameterized models often generalize better, contradicting classical machine learning theory.
- We explore the implicit regularization effects of degraded…