Elmira Nezamfar

Researcher at DSN lab

Reconfigurable Architecture for Neural Networks

  Halls department, Hall 4
  Thursday, 27 December 2018
  10:15 - 11:15

Abstract

Machine learning algorithms are achieving state-of-the-art performance in many applications, such as image processing, machine vision, speech recognition, and disease diagnosis. For decades, the use of machine learning algorithms, especially Neural Network Algorithms (NNAs), was restricted by their complexity and the high computation time on the inefficient hardware then available. Although advances in technology and the emergence of powerful processors have increased the usage of NNAs, especially Deep Neural Networks (DNNs), a research gap remains in machine learning hardware platforms that offer both high performance and high energy efficiency. Designing custom hardware based on the computational demands of DNNs can help to address this gap. In this talk, I will give an overview of state-of-the-art hardware for machine learning and explain its shortcomings. I will then present an efficient FPGA architecture as a hardware platform for machine learning, especially DNNs. Finally, I will discuss the advantages and disadvantages of the proposed architecture.

Bio

Elmira Nezamfar is currently a member of the Data Storage, Networks, and Processing (DSN) lab, supervised by Prof. Hossein Asadi. She obtained her M.Sc. in computer engineering (computer architecture) in 2018 from the Sharif University of Technology, with a thesis on an efficient reconfigurable architecture to speed up machine learning algorithms. She received her B.Sc. in computer engineering (hardware) in 2011 from Shahid Beheshti University. From 2010 to 2011, she worked on an FPGA design for an intrusion detection system (her B.Sc. final project), supervised by Prof. Maghsoud Abbaspour. From 2013 to 2015, she was a researcher and developer in the R&D group of Behsam Afzar Company, working on embedded devices. Her research interests lie in the areas of reconfigurable architecture, hardware design for machine learning (Deep Neural Networks), and neuroscience.