Deep Learning

Course Content

Total learning: 3 lessons

Part 1

  1. Mini-batch gradient descent
  2. Understanding mini-batch gradient descent
  3. Exponentially weighted averages
  4. Understanding exponentially weighted averages
  5. Bias correction in exponentially weighted averages
  6. Gradient descent with momentum
  7. RMSprop
  8. Adam optimization algorithm (see the sketch after this list)
  9. Learning rate decay
  10. The problem of local optima
  11. Notebook: Optimization
  12. Yuanqing Lin interview
  13. Graded: Optimization algorithms
  14. Graded: Optimization
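The optimizer lessons (3 through 8) build toward Adam, which combines exponentially weighted averages with bias correction (lessons 3 to 5), momentum (lesson 6), and RMSprop (lesson 7). The snippet below is a minimal NumPy sketch of that update for orientation only; the function name, hyperparameter defaults, and toy objective are illustrative assumptions, not material from the course notebook.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters w given gradient grad at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad           # momentum: EWA of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2      # RMSprop: EWA of squared gradients
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # parameter update
    return w, m, v

# Toy usage (hypothetical objective): minimize f(w) = ||w||^2, gradient 2w.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 501):
    grad = 2 * w
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
print(w)  # close to [0, 0]
```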