Mohammad Babaeizadeh

PhD Student, University of Illinois at Urbana–Champaign

Title

Deep Learning at Scale

Abstract

The extraordinary success of Deep Learning hinges on two important factors: first, the increased availability of data, and second, access to greater computational power. As data grows and computations become more complex, training deep learning models on a single machine becomes increasingly infeasible. The solution is to scale out to multiple machines, using the “power of many.” Currently, single machines equipped with multiple GPUs are the primary source of computational power. However, there exists an operating point where a distributed implementation becomes a necessity due to the limited memory and computational power of a single GPU. Training large networks on large image classification tasks currently takes multiple days or even weeks. In other applications, such as Deep Reinforcement Learning, training time currently exceeds a week even for small networks. In this talk, I will go through the scalability issues of deep learning, the current state of the art, and future research directions.

Bio

Mohammad Babaeizadeh [Sharif alumnus (MS 2009)] is currently a Ph.D. student in the Computer Science Department at the University of Illinois at Urbana-Champaign. His primary research interest is large-scale machine learning and its applications. His current focus is the scalability of training and inference of deep models. Before joining UIUC, he worked in machine-learning-oriented industry for several years, including at Microsoft Cortana and NVIDIA Research.