Recent developments in optimization methods and machine learning applications

Humboldt University, Berlin, winter semester 2019/2020

The course will focus mostly on the theoretical aspects of first-order optimization methods, e.g. gradient descent, accelerated gradient descent, mirror descent, and AdaGrad. These methods are enjoying a renaissance driven by the current big-data world, which leads to large-scale problems that do not require very precise solutions. One of the main assumptions in the course will be convexity of the objective function, and the necessary background from convex analysis will be given. Another cornerstone is randomness, which can arise in different ways. In one case the randomness is present in the problem itself, e.g. in stochastic optimization, for which stochastic gradient descent is usually applied. In the other case randomness is added to a deterministic problem by introducing randomization; here one can also apply stochastic gradient descent, as well as the more sophisticated random coordinate descent or stochastic variance reduction methods. Non-convex optimization will also be discussed, since many optimization problems in machine learning are non-convex.
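
As a small taste of the methods listed above, the following Python sketch compares plain gradient descent with stochastic gradient descent on a least-squares problem. The problem instance, step sizes, iteration counts, and function names are illustrative choices made for this sketch, not part of the course material.

    import numpy as np

    # Illustrative convex objective: f(x) = (1/2n) * ||A x - b||^2.
    # The data A, b, the step sizes, and the iteration counts are arbitrary
    # choices made only for this example.
    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    def full_gradient(x):
        # Exact gradient of f: (1/n) * A^T (A x - b).
        return A.T @ (A @ x - b) / n

    def gradient_descent(steps=500, lr=0.1):
        # Deterministic first-order method: one full gradient per step.
        x = np.zeros(d)
        for _ in range(steps):
            x -= lr * full_gradient(x)
        return x

    def stochastic_gradient_descent(steps=5000, lr=0.5):
        # One-sample unbiased gradient estimate with a decaying step size.
        x = np.zeros(d)
        for t in range(steps):
            i = rng.integers(n)            # pick one data point at random
            g = A[i] * (A[i] @ x - b[i])   # stochastic gradient estimate
            x -= lr / np.sqrt(t + 1) * g
        return x

    # The two iterates should land close to the same least-squares solution.
    print(np.linalg.norm(gradient_descent() - stochastic_gradient_descent()))

The decaying step size in the stochastic variant is one standard way to damp the noise of the one-sample gradient estimates; the lectures discuss such choices and their convergence guarantees in detail.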



Prerequisites: Multivariate calculus, Linear algebra, Basic probability



Exam: 20.02.2020, WIAS Mohrenstraße 39, room 212.

Lecture contents


Homework


Literature:

  1. A. Ben-Tal, A. Nemirovski
    Lectures on Modern Convex Optimization
  2. G. Lan
    Lectures on Optimization Methods for Machine Learning
  3. Yu. Nesterov
    Introductory Lectures on Convex Optimization. A Basic Course
    Springer-Verlag US, 2004
  4. Yu. Nesterov
    Lectures on Convex Optimization
    Springer, 2018
  5. John C. Duchi
    Introductory Lectures on Stochastic Optimization
    2016