In my previous post, we discussed how neural networks predict and learn from data. Two processes are responsible for this: the forward pass and the backward pass, also known as backpropagation. You can learn more about this here:
This post will delve into how we can optimize this “learning” and “training” process to increase the performance of our model. We will cover computational improvements, hyperparameter tuning, and how to implement them in PyTorch!
But, before all that, let’s quickly refresh our memory about neural networks!
If you’re enjoying this article, be sure to subscribe to my YouTube channel!
Click the link to watch video tutorials that teach you basic data science concepts in a digestible way!
Neural networks are large mathematical expressions that attempt to find the “correct” function that can map a set of inputs to their corresponding outputs. Below is an example of a neural network:
Each hidden layer neuron performs the following calculation:
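As a quick illustrative sketch, that per-neuron calculation (a weighted sum of the inputs plus a bias, passed through an activation function) can be written in plain Python. The sigmoid activation here is just one common choice, used for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes z into the range (0, 1).
    # (Other activations, e.g. ReLU or tanh, are equally common.)
    return 1.0 / (1.0 + math.exp(-z))

# Example: two inputs feeding one hidden neuron.
out = neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
```

Every neuron in a hidden layer repeats this same pattern, each with its own weights and bias learned during training.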