Fr. 166.00
Nan Zheng, Pinaki Mazumder
Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design
English · Hardback
Shipping usually within 1 to 3 weeks (not available at short notice)
Description
Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications
This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, and provides co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithm to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers many fundamentals and essentials of neural networks (e.g., deep learning), as well as hardware implementations of neural networks.
The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks. Next comes an introduction to various options for executing neural networks, ranging from general-purpose processors to specialized hardware, and from digital accelerators to analog accelerators. A design example on building an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS, and one on emerging nanotechnology) that implement the learning algorithm from the previous chapter. The book concludes with an outlook on the future of neural network hardware.
* Includes cross-layer survey of hardware accelerators for neuromorphic algorithms
* Covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency
* Focuses on the co-design of algorithms and hardware, which is especially critical for using emerging devices, such as traditional memristors or diffusive memristors, for neuromorphic computing
Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-increasing demands on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students on the latest generation of neural networks with powerful learning capabilities.
List of contents
Preface xi
Acknowledgment xix
1 Overview 1
1.1 History of Neural Networks 1
1.2 Neural Networks in Software 2
1.2.1 Artificial Neural Network 2
1.2.2 Spiking Neural Network 3
1.3 Need for Neuromorphic Hardware 3
1.4 Objectives and Outlines of the Book 5
References 8
2 Fundamentals and Learning of Artificial Neural Networks 11
2.1 Operational Principles of Artificial Neural Networks 11
2.1.1 Inference 11
2.1.2 Learning 13
2.2 Neural Network Based Machine Learning 16
2.2.1 Supervised Learning 17
2.2.2 Reinforcement Learning 20
2.2.3 Unsupervised Learning 22
2.2.4 Case Study: Action-Dependent Heuristic Dynamic Programming 23
2.2.4.1 Actor-Critic Networks 24
2.2.4.2 On-Line Learning Algorithm 25
2.2.4.3 Virtual Update Technique 27
2.3 Network Topologies 31
2.3.1 Fully Connected Neural Networks 31
2.3.2 Convolutional Neural Networks 32
2.3.3 Recurrent Neural Networks 35
2.4 Dataset and Benchmarks 38
2.5 Deep Learning 41
2.5.1 Pre-Deep-Learning Era 41
2.5.2 The Rise of Deep Learning 41
2.5.3 Deep Learning Techniques 42
2.5.3.1 Performance-Improving Techniques 42
2.5.3.2 Energy-Efficiency-Improving Techniques 46
2.5.4 Deep Neural Network Examples 50
References 53
3 Artificial Neural Networks in Hardware 61
3.1 Overview 61
3.2 General-Purpose Processors 62
3.3 Digital Accelerators 63
3.3.1 A Digital ASIC Approach 63
3.3.1.1 Optimization on Data Movement and Memory Access 63
3.3.1.2 Scaling Precision 71
3.3.1.3 Leveraging Sparsity 76
3.3.2 FPGA-Based Accelerators 80
3.4 Analog/Mixed-Signal Accelerators 82
3.4.1 Neural Networks in Conventional Integrated Technology 82
3.4.1.1 In/Near-Memory Computing 82
3.4.1.2 Near-Sensor Computing 85
3.4.2 Neural Network Based on Emerging Non-volatile Memory 88
3.4.2.1 Crossbar as a Massively Parallel Engine 89
3.4.2.2 Learning in a Crossbar 91
3.4.3 Optical Accelerator 93
3.5 Case Study: An Energy-Efficient Accelerator for Adaptive Dynamic Programming 94
3.5.1 Hardware Architecture 95
3.5.1.1 On-Chip Memory 95
3.5.1.2 Datapath 97
3.5.1.3 Controller 99
3.5.2 Design Examples 101
References 108
4 Operational Principles and Learning in Spiking Neural Networks 119
4.1 Spiking Neural Networks 119
4.1.1 Popular Spiking Neuron Models 120
4.1.1.1 Hodgkin-Huxley Model 120
4.1.1.2 Leaky Integrate-and-Fire Model 121
4.1.1.3 Izhikevich Model 121
4.1.2 Information Encoding 122
4.1.3 Spiking Neuron versus Non-Spiking Neuron 123
4.2 Learning in Shallow SNNs 124
4.2.1 ReSuMe 124
4.2.2 Tempotron 125
4.2.3 Spike-Timing-Dependent Plasticity 127
4.2.4 Learning Through Modulating Weight-Dependent STDP in Two-Layer Neural Networks 131
4.2.4.1 Motivations 131
4.2.4.2 Estimating Gradients with Spike Timings 131
4.2.4.3 Reinforcement Learning Example 135
4.3 Learning in Deep SNNs 146
4.3.1 SpikeProp 146
4.3.2 Stack of Shallow Networks 147
4.3.3 Conversion from ANNs 148
4.3.4 Recent Advances in Backpropagation for Deep SNNs 150
4.3.5 Learning Through Modulating Weight-Dependent STDP in Multilayer Neural Networks
About the author
NAN ZHENG, PhD, received a B.S. degree in Information Engineering from Shanghai Jiao Tong University, China, in 2011, and an M.S. and a PhD in Electrical Engineering from the University of Michigan, Ann Arbor, USA, in 2014 and 2018, respectively. His research interests include low-power hardware architectures, algorithms, and circuit techniques, with an emphasis on machine-learning applications. PINAKI MAZUMDER, PhD, is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, USA. His research interests include CMOS VLSI design, semiconductor memory systems, CAD tools, and circuit designs for emerging technologies, including quantum MOS, spintronics, spoof plasmonics, and resonant tunneling devices.
Product details
Authors | Nan Zheng, Pinaki Mazumder |
Publisher | John Wiley & Sons Ltd |
Languages | English |
Product format | Hardback |
Released | 31.12.2019 |
EAN | 9781119507383 |
ISBN | 978-1-119-50738-3 |
No. of pages | 296 |
Series |
Wiley - IEEE Press |
Subjects |
Natural sciences, medicine, IT, technology
> Technology
> Electronics, electrical engineering, communications engineering
Computer science, Computer hardware, Neural networks, Electrical & electronics engineering, Circuit theory & design / VLSI / ULSI |