An Introduction to Optimization Techniques in Machine Learning
Modules
  1. From linear programming to non-convex smooth optimization
  2. Gradient descent
  3. Polyak's heavy ball: accelerated gradient descent
  4. Stochastic gradient descent (SGD)
  5. SGD and Markov chains
  6. Variance-reduced SGD
  7. Computational efficiency of SGD for statistical inference
  8. Computational-statistical tradeoff through convex relaxation
Hadi Daneshmand