Huang Kun Forum
The 312th: Trends of Reconfigurable and in-Memory Processing Architectures for Deep Neural Networks
Update time: 2017-10-26

Title:  Trends of Reconfigurable and in-Memory Processing Architectures for Deep Neural Networks

Speaker: Prof. Masato Motomura (Graduate School of Information Science and Technology, Hokkaido University, Japan)

Time: Oct. 30, 2017, 14:00

Venue: No. 303 meeting room of building 2, IOS, CAS


Abstract: Thanks to the enormous progress and success of deep neural networks (DNNs), computer architecture research has recently been regaining its past "excitement": many architectural proposals, based on vastly or slightly different approaches, have been put forward for accelerating the training and inference of DNNs. Most of them, including those from our own research group, share common architectural features, namely reconfigurable and in-memory processing architectures. This talk will try to give insights into 1) why these trends are emerging now, 2) what the recent findings in this movement are, and 3) where this architectural innovation is heading.

Copyright © Institute of Semiconductors, Chinese Academy of Sciences. All Rights Reserved.