Course Content
Part – 4
Various Hyperparameters in Neural Networks
- Understand the challenges in gradient-based training
- Introduction to various error, cost, and loss functions
- ME, MAD, MSE, RMSE, MPE, MAPE, Entropy, Cross-Entropy (see the metric sketch after this outline)
- Vanishing / Exploding Gradients
- Learning Rate (Eta), Decay Parameter, Iteration, Epoch
- Variants of Gradient Descent (compared in the sketch at the end of this outline)
- Batch Gradient Descent (BGD)
- Stochastic Gradient Descent (SGD)
- Mini-batch Stochastic Gradient Descent (Mini-batch SGD)
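The error metrics listed above can be summarized in a minimal NumPy sketch, assuming `y_true` and `y_pred` are 1-D arrays of actual and predicted values; the function and variable names here are illustrative, not prescribed by the course.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Regression error metrics from the outline above."""
    err = y_true - y_pred
    me   = np.mean(err)                          # ME: Mean Error
    mad  = np.mean(np.abs(err))                  # MAD: Mean Absolute Deviation
    mse  = np.mean(err ** 2)                     # MSE: Mean Squared Error
    rmse = np.sqrt(mse)                          # RMSE: Root Mean Squared Error
    mpe  = np.mean(err / y_true) * 100           # MPE: Mean Percentage Error (assumes no zeros in y_true)
    mape = np.mean(np.abs(err / y_true)) * 100   # MAPE: Mean Absolute Percentage Error
    return {"ME": me, "MAD": mad, "MSE": mse, "RMSE": rmse, "MPE": mpe, "MAPE": mape}

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy between 0/1 labels and predicted probabilities."""
    p = np.clip(p_pred, eps, 1 - eps)            # clip to avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
```

The percentage-based metrics (MPE, MAPE) are only defined when the true values are nonzero, which is why the sketch notes that assumption inline.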
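A minimal sketch contrasting the three gradient-descent variants on a linear-regression MSE loss, assuming a feature matrix `X`, targets `y`, learning rate `eta`, and a simple time-based decay; all names are illustrative assumptions, and `batch_size` selects between BGD, SGD, and mini-batch SGD.

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of the MSE loss for the linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train(X, y, batch_size=None, eta=0.01, decay=0.0, epochs=50):
    """batch_size=None -> BGD, batch_size=1 -> SGD, 1 < batch_size < n -> mini-batch SGD."""
    n, d = X.shape
    w = np.zeros(d)
    b = n if batch_size is None else batch_size
    for epoch in range(epochs):
        lr = eta / (1.0 + decay * epoch)         # time-based learning-rate decay
        idx = np.random.permutation(n)           # shuffle samples once per epoch
        for start in range(0, n, b):
            batch = idx[start:start + b]         # one iteration = one parameter update
            w -= lr * gradient(w, X[batch], y[batch])
    return w
```

With `batch_size=None` each epoch makes a single full-batch update (BGD); with `batch_size=1` it makes n noisy per-sample updates (SGD); intermediate values give mini-batch SGD, the usual practical compromise between stable gradients and cheap iterations.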