![A 161.6 TOPS/W Mixed-mode Computing-in-Memory Processor for Energy-Efficient Mixed-Precision Deep Neural Networks (Prof. Hoi-Jun Yoo's Lab) - KAIST School of Electrical Engineering](https://ee.kaist.ac.kr/wp-content/uploads/2022/08/%EC%9C%A0%ED%9A%8C%EC%A4%8010.png)
A 161.6 TOPS/W Mixed-mode Computing-in-Memory Processor for Energy-Efficient Mixed-Precision Deep Neural Networks (Prof. Hoi-Jun Yoo's Lab) - KAIST School of Electrical Engineering
![(PDF) BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W](https://i1.rgstatic.net/publication/321930684_BRein_Memory_A_Single-Chip_BinaryTernary_Reconfigurable_in-Memory_Deep_Neural_Network_Accelerator_Achieving_14_TOPS_at_06_W/links/5b4f2d930f7e9b240fe98667/largepreview.png)
(PDF) BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W
![Not all TOPs are created equal. Deep Learning processor companies often… | by Forrest Iandola | Analytics Vidhya | Medium](https://miro.medium.com/max/1400/1*L-Mg3ubn0e9OmuKtWeh2aQ.png)
Not all TOPs are created equal. Deep Learning processor companies often… | by Forrest Iandola | Analytics Vidhya | Medium
Mipsology Zebra on Xilinx FPGA Beats GPUs, ASICs for ML Inference Efficiency - Embedded Computing Design
![Figure 5 from Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers | Semantic Scholar](https://d3i71xaburhd42.cloudfront.net/e66b097ed6305f4b9f93141759c2aa7a20e4dcdf/2-Figure5-1.png)
Figure 5 from Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers | Semantic Scholar
![A 1.32 TOPS/W Energy Efficient Deep Neural Network Learning Processor with Direct Feedback Alignment based Heterogeneous Core Architecture | Semantic Scholar](https://d3i71xaburhd42.cloudfront.net/b89be7bc39e51e8874b550b94ca0124cfe992aaa/2-Table1-1.png)
A 1.32 TOPS/W Energy Efficient Deep Neural Network Learning Processor with Direct Feedback Alignment based Heterogeneous Core Architecture | Semantic Scholar
![A 0.32–128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm | Research](https://research.nvidia.com/sites/default/files/styles/wide/public/publications/RC18_photo_0_0.jpg?itok=ubFiJEaV)
A 0.32–128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm | Research
![[PDF] A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets | Semantic Scholar](https://d3i71xaburhd42.cloudfront.net/f2dd73ae127c5ee3713a92e1057eddea92fbf207/2-Figure1-1.png)
[PDF] A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets | Semantic Scholar
![Figure 4 from A 44.1TOPS/W Precision-Scalable Accelerator for Quantized Neural Networks in 28nm CMOS | Semantic Scholar](https://d3i71xaburhd42.cloudfront.net/81918cbfc5d3e4c54848bd4fc31a354c4c8c9dd6/2-Figure4-1.png)