Stochastic Optimization and Online Learning
Eindhoven, March 26-28, 2014
Stochastic optimization embodies a collection of methodologies and theory aimed at devising optimal solutions to countless real-world inference problems, particularly those involving uncertain or missing data. At the heart of stochastic optimization is the idea that many deterministic optimization problems can be addressed in a more powerful and convenient way by introducing intrinsic randomness into the optimization algorithms. This generalization, in turn, gives rise to a set of techniques that are well suited for settings involving uncertain, incomplete, or missing data.

Online (machine) learning is concerned with learning and prediction in a sequential, online fashion. In many settings the goal of online learning is the optimal prediction of a sequence of instances, possibly given a sequence of side information. For example, the instances might correspond to the daily value of a financial asset or the daily meteorological conditions, and one wants to predict tomorrow’s value of the asset or weather conditions. Interestingly, it is possible to devise very powerful online learning algorithms that cope with adversarial settings, in which a powerful adversary generates a sequence of instances attempting to “break” the algorithm’s strategy. One can show, however, that such algorithms must necessarily randomize their predictions, and they can often be cast as stochastic optimization algorithms. One of the goals of this workshop is to make these connections between online learning and stochastic optimization more transparent.
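The randomized prediction strategies alluded to above can be illustrated with a classical example: the exponentially weighted average forecaster (often called Hedge). The sketch below is illustrative only and is not part of the workshop material; the function name `hedge` and the learning-rate parameter `eta` are our own choices, and we track the expected loss of the randomized strategy rather than sampling an expert explicitly.

```python
import math

def hedge(losses, eta=0.5):
    """Exponentially weighted average forecaster (Hedge) over K experts.

    losses: list of rounds, each round a list of K expert losses in [0, 1].
    Returns (expected cumulative loss of the randomized strategy,
             cumulative loss of the best single expert in hindsight);
    the gap between the two is the regret.
    """
    K = len(losses[0])
    weights = [1.0] * K
    total_loss = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]  # randomized prediction: a distribution over experts
        # expected loss of drawing an expert according to probs
        total_loss += sum(p * l for p, l in zip(probs, round_losses))
        # multiplicative update: exponentially downweight experts that did poorly
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    best = min(sum(round_losses[i] for round_losses in losses) for i in range(K))
    return total_loss, best
```

Even against an adversarially chosen loss sequence, the regret of this strategy grows only sublinearly in the number of rounds, so its average per-round loss approaches that of the best expert in hindsight.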
Particularly important in the present workshop is the quantification of the performance of a given stochastic optimization or online learning procedure. Answering this challenging question adequately requires several ingredients; in particular, one needs to develop a proper optimality framework. Here parallels with modern statistical theory emerge, as notions such as consistency, convergence rates, and minimax bounds, common in statistical theory, all have counterparts in stochastic optimization and online learning. There is therefore plenty of room for cross-fertilization between these fields, which is the main motivation for this workshop.
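A standard way to formalize this performance quantification in online learning (implicit in the minimax parallels mentioned above, though not spelled out there) is the notion of cumulative regret:

```latex
% Regret of a forecaster playing actions p_1, ..., p_T against losses \ell_1, ..., \ell_T:
\[
  R_T \;=\; \sum_{t=1}^{T} \ell_t(p_t) \;-\; \min_{p} \sum_{t=1}^{T} \ell_t(p).
\]
% The forecaster "learns" when R_T = o(T), i.e. its average regret vanishes;
% minimax analysis then asks for the best achievable growth rate of R_T
% over all admissible loss sequences, in analogy with minimax risk bounds
% in statistical theory.
```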
The aim of the workshop “Stochastic Optimization and Online Learning” is to introduce these broad fields to young researchers, in particular Ph.D. students, postdoctoral fellows and junior researchers, who are interested and eager to tackle new challenges in the fields of stochastic optimization and online learning.
Rui Castro (TU Eindhoven)
Eduard Belitser (VU Amsterdam)
The workshop will take place at Eurandom and will consist of tutorial courses given by three world experts in the field, each comprising 3 hours of lectures. There will also be contributed talks, as well as plenty of time for discussion.
Nicolò Cesa-Bianchi, Università degli Studi di Milano
- Experts, bandits, and online learning
Francis Bach, INRIA - Laboratoire d'Informatique de l'École Normale Supérieure, Paris
- Beyond stochastic gradient descent for large-scale machine learning
Anatoli Juditsky, Laboratoire Jean Kuntzmann, Université Joseph Fourier, Grenoble
- Deterministic and stochastic first order algorithms of large-scale convex optimization
Wednesday (March 26th)
Thursday (March 27th)
Friday (March 28th)
Please register by filling in the REGISTRATION FORM (the link will redirect you to the website of the TU/e).
Registration includes lunch, coffee/tea and drinks. A workshop dinner will be organised for which non-invitees will be asked to contribute 35 euro.
Some of the junior participants will be given the opportunity to present their current work during the workshop by giving a short talk (or by presenting a poster, if we receive too many quality submissions). If interested, please email a title and a short abstract (max. one page) to the organizers Rui Castro and Eduard Belitser.
About the YES Workshops:
This workshop is the seventh in the series of YES (Young European Statisticians) workshops. The first was held in October 2007 on Shape Restricted Inference, with seminars given by Lutz Dümbgen (Bern) and Jon Wellner (Seattle), together with shorter talks by Laurie Davies (Duisburg-Essen) and Geurt Jongbloed (Delft). The second workshop was held in October 2008 on High Dimensional Statistics, with seminars given by Sara van de Geer (Zürich), Nicolai Meinshausen (Oxford) and Gilles Blanchard (Berlin). The third was held in October 2009 on Paradigms of Model Choice, with seminars given by Laurie Davies (Duisburg-Essen), Peter Grünwald (Amsterdam), Nils Hjort (Oslo) and Christian Robert (Paris). The fourth, on the topic of Bayesian Non-Parametric Statistics, took place in November 2010, with seminars given by Judith Rousseau (Paris), Zoubin Ghahramani (Cambridge), Yongdai Kim (Seoul) and Harry van Zanten (Eindhoven). The fifth workshop, on Adaptation in Nonparametric Statistics, took place in October 2011, with seminars by Alexander Goldenshluger (Haifa), Richard Nickl (Cambridge), Laurent Cavalier (University Aix-Marseille 1) and Eduard Belitser (TU Eindhoven). Finally, the sixth workshop, on Statistics for Complex and High Dimensional Systems, took place in January 2013, with tutorials by Martin Wainwright (Berkeley), Eric Kolaczyk (Boston University) and Johan Koskinen (University of Manchester).
This workshop is partially sponsored by: