Stochastic Submodular Maximization
Halls department, Hall 5
Thursday, 28 December 2017
17:00 - 18:00
Stochastic optimization of continuous objectives is at the heart of modern machine learning. However, many important problems are discrete in nature and often involve submodular objectives. We seek to unleash the power of stochastic continuous optimization, namely stochastic gradient descent and its variants, on such discrete problems. We first introduce the problem of stochastic submodular optimization, where one needs to optimize a submodular objective that is given as an expectation. Our model captures situations where the discrete objective arises as an empirical risk (e.g., in the case of exemplar-based clustering), or is given as an explicit stochastic model (e.g., in the case of influence maximization in social networks). By exploiting the fact that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions. We focus on the rich and widely used family of weighted coverage functions. We show that our approach yields solutions that are guaranteed to match the optimal approximation guarantees, while reducing the computational cost by several orders of magnitude, as we demonstrate empirically in our NIPS paper. The talk will also introduce a number of related research directions.
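To make the pipeline concrete, here is a minimal sketch of the approach on a toy weighted coverage instance: we run projected stochastic gradient ascent on the multilinear extension under a cardinality constraint, then round the fractional solution. The instance data, step-size schedule, and the top-k rounding heuristic (a simple stand-in for pipage/swap rounding) are illustrative assumptions, not the setup from the paper.

```python
import random

# Toy weighted coverage instance (hypothetical data for illustration):
# sets[i] is the subset of universe elements covered by item i.
sets = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {0, 4}]
weights = {0: 1.0, 1: 2.0, 2: 1.0, 3: 2.0, 4: 1.0}
n, k = len(sets), 2          # choose at most k items
W = sum(weights.values())

def coverage(S):
    """Discrete objective f(S) = total weight of covered elements."""
    covered = set().union(*(sets[i] for i in S)) if S else set()
    return sum(weights[u] for u in covered)

def stoch_grad(x, rng):
    """Unbiased gradient estimate of the multilinear extension
    F(x) = sum_u w_u (1 - prod_{j: u in sets[j]} (1 - x_j)):
    sample element u with probability w_u / W, then the i-th
    coordinate is W * prod_{j != i covering u} (1 - x_j)."""
    r, acc, u = rng.random() * W, 0.0, None
    for u, w in weights.items():
        acc += w
        if r <= acc:
            break
    covers = [i for i in range(n) if u in sets[i]]
    g = [0.0] * n
    for i in covers:
        p = 1.0
        for j in covers:
            if j != i:
                p *= 1.0 - x[j]
        g[i] = W * p
    return g

def project(x, k):
    """Euclidean projection onto {x in [0,1]^n : sum x <= k},
    via bisection on the shift tau in clip(x - tau, 0, 1)."""
    clipped = [min(1.0, max(0.0, v)) for v in x]
    if sum(clipped) <= k:
        return clipped
    lo, hi = 0.0, max(x)
    for _ in range(50):
        tau = (lo + hi) / 2
        s = sum(min(1.0, max(0.0, v - tau)) for v in x)
        lo, hi = (tau, hi) if s > k else (lo, tau)
    return [min(1.0, max(0.0, v - tau)) for v in x]

# Projected stochastic gradient ascent in the continuous domain.
rng = random.Random(0)
x = [k / n] * n
for t in range(1, 2001):
    g = stoch_grad(x, rng)
    x = project([xi + gi / t ** 0.5 for xi, gi in zip(x, g)], k)

# Round the fractional solution: keeping the k largest coordinates is a
# simple heuristic here; the theory uses pipage/swap rounding.
S = set(sorted(range(n), key=lambda i: -x[i])[:k])
print(S, coverage(S))
```

Each gradient query touches only one sampled element of the universe, which is the source of the computational savings the abstract mentions: the per-iteration cost is independent of the size of the empirical risk sum.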
Mohammad Reza Karimi is a master's student in the Computer Science department at ETH Zurich, and a research assistant in the Learning & Adaptive Systems Lab. He received his B.Sc. degree from Sharif University in both Computer Engineering and Mathematics. He is interested in Combinatorial Optimization and Machine Learning. More specifically, his work examines submodularity and its applications in machine learning.