Fr. 83.00
Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar
Data Orchestration in Deep Learning Accelerators
English · Paperback / Softback
Shipping usually within 1 to 2 weeks (title will be printed to order)
Description
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
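The dataflow and data-reuse ideas the lecture covers can be illustrated with a toy example. The sketch below is not taken from the book; the layer sizes, tile size, and single-level on-chip buffer model are illustrative assumptions. It computes a small fully connected layer under two loop orderings and counts DRAM accesses, showing how staging inputs in an on-chip buffer reduces off-chip traffic.

import numpy as np

M, N = 64, 256          # output neurons, input activations (illustrative sizes)
W = np.random.rand(M, N)
x = np.random.rand(N)

def naive_dataflow():
    """Re-fetch every weight and input from DRAM for each multiply."""
    y = np.zeros(M)
    dram_reads = 0
    for m in range(M):
        for n in range(N):
            dram_reads += 2            # W[m, n] and x[n] both come from DRAM
            y[m] += W[m, n] * x[n]
    return y, dram_reads

def input_stationary_dataflow(tile=32):
    """Stage a tile of x in an on-chip buffer and reuse it across all M outputs."""
    y = np.zeros(M)
    dram_reads = 0
    for n0 in range(0, N, tile):
        x_buf = x[n0:n0 + tile]        # one DRAM fetch per input element
        dram_reads += len(x_buf)
        for m in range(M):
            for dn in range(len(x_buf)):
                dram_reads += 1        # weights are streamed once, no reuse modeled
                y[m] += W[m, n0 + dn] * x_buf[dn]
    return y, dram_reads

y1, r1 = naive_dataflow()
y2, r2 = input_stationary_dataflow()
assert np.allclose(y1, y2)
print(f"naive DRAM reads: {r1}, input-stationary DRAM reads: {r2}")

With these sizes the naive ordering issues 32,768 DRAM reads versus 16,640 for the input-stationary ordering; real accelerators exploit reuse of weights, inputs, and partial sums across multi-level buffer hierarchies, which is the subject of the book's dataflow and buffer-hierarchy chapters.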
List of contents
Preface.- Acknowledgments.- Introduction to Data Orchestration.- Dataflow and Data Reuse.- Buffer Hierarchies.- Networks-on-Chip.- Putting it Together: Architecting a DNN Accelerator.- Modeling Accelerator Design Space.- Orchestrating Compressed-Sparse Data.- Conclusions.- Bibliography.- Authors' Biographies.
About the author
Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2014. Prior to that, he received an M.S.E. in Electrical Engineering from Princeton University in 2009 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT), Delhi in 2007. Before joining Georgia Tech in 2015, he worked as a researcher in the VSSAD Group at Intel in Massachusetts. Dr. Krishna's research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected for IEEE Micro's Top Picks from Computer Architecture, one more received an honorable mention, and three have won best paper awards. He received the National Science Foundation (NSF) CRII award in 2018 and both a Google Faculty Award and a Facebook Faculty Award in 2019.
Product details
Authors | Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar |
Publisher | Springer, Berlin |
Original title | Data Orchestration in Deep Learning Accelerators |
Languages | English |
Product format | Paperback / Softback |
Released | 01.01.2020 |
EAN | 9783031006395 |
ISBN | 978-3-031-00639-5 |
No. of pages | 146 |
Dimensions | 191 mm x 9 mm x 235 mm |
Illustrations | XVII, 146 p. |
Series | Synthesis Lectures on Computer Architecture |
Subject | Natural sciences, medicine, IT, technology > Technology > Electronics, electrical engineering, communications engineering |