We develop a unified random operator theory framework to analyze and determine the convergence properties of many constant step-size recursive stochastic algorithms (RSAs) used in machine learning and reinforcement learning. RSAs involve randomization to ease the computational burden, and as a result, the output sequence forms a stochastic process. Instead of studying the convergence of the stochastic process itself (which may not converge due to the constant step size), we study the convergence of the distribution of the stochastic process. The key idea is to lift the problem into an appropriate space so that the stochastic process becomes a Markov process in a higher-dimensional space, and then to determine the convergence properties of the RSA. To derive the convergence bounds, we depart from the traditional Wasserstein metric space and define a Wasserstein divergence as an analytical vehicle for studying the convergence of the sequence of distributions of the Markov chain. We hope that this study will accelerate the development of new algorithms. We show that the random operator theory developed here can be applied to mini-batch SGD, forward-backward splitting, SAGA, SVRG, empirical value iteration, synchronous Q-learning, SARSA, and MDPs with a generative model. We also establish that the asymptotic performance of an RSA depends largely on how far it moves the optimal solution of interest (in expectation).
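The phenomenon the abstract describes can be seen in a toy example: constant step-size SGD on a quadratic objective with Gaussian gradient noise. The iterates form a Markov chain that never settles on a point, yet chains started from very different initial conditions exhibit the same stationary statistics. This is a minimal illustrative sketch (the problem, parameters, and function names below are ours, not from the talk), not an implementation of the framework itself.

```python
import random

def constant_step_sgd(x0, mu=3.0, alpha=0.1, sigma=1.0,
                      n_steps=200_000, seed=0):
    """Constant step-size SGD on f(x) = (x - mu)^2 / 2 with Gaussian
    gradient noise. The iterates satisfy the Markov recursion
        x_{k+1} = x_k - alpha * ((x_k - mu) + sigma * z_k),
    so the chain keeps fluctuating forever, but its distribution
    converges to a stationary law."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        noisy_grad = (x - mu) + sigma * rng.gauss(0.0, 1.0)
        x = x - alpha * noisy_grad
        samples.append(x)
    return samples[n_steps // 2:]  # discard burn-in

# Two chains started far apart share the same stationary mean and variance.
chain_a = constant_step_sgd(x0=-50.0, seed=1)
chain_b = constant_step_sgd(x0=+50.0, seed=2)
mean_a = sum(chain_a) / len(chain_a)
mean_b = sum(chain_b) / len(chain_b)
var_a = sum((s - mean_a) ** 2 for s in chain_a) / len(chain_a)
# For this AR(1) chain the stationary variance is alpha*sigma^2/(2 - alpha).
theory_var = 0.1 * 1.0 / (2.0 - 0.1)
```

Here the "distributional" convergence is exact: the chain is AR(1), so its stationary distribution is Gaussian with mean `mu` and variance `alpha * sigma**2 / (2 - alpha)`, which shrinks as the step size does.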
Bio: William B. Haskell received his B.S. in Mathematics and M.S. in Econometrics from the University of Massachusetts Amherst in 2005 and 2006, respectively. He then obtained his M.S. in Operations Research, M.A. in Mathematics, and Ph.D. in Operations Research from the University of California, Berkeley in 2007, 2010, and 2011, respectively. He is currently an Assistant Professor in the Supply Chain and Operations Management Group in the Krannert School of Management at Purdue University. Dr. Haskell’s current research focuses on algorithms for convex optimization and dynamic programming, with an emphasis on risk-aware decision-making.
Welcome, everyone!