Hardware-friendly quantization for efficient DNN accelerators

December 8th, 2021

Related Publications

Quarry: Quantization-based ADC Reduction for ReRAM-based Deep Neural Network Accelerators, Azat Azamat, Faaiz Asim and Jongeun Lee**, Proc. of International Conference on Computer-Aided Design (ICCAD), November, 2021.

Automated Log-Scale Quantization for Low-Cost Deep Neural Networks, Sangyun Oh, Hyeonuk Sim, Sugil Lee and Jongeun Lee**, Proc. of Conference on Computer Vision and Pattern Recognition (CVPR), June, 2021.

RRNet: Repetition-Reduction Network for Energy Efficient Depth Estimation, Sangyun Oh, Hye-Jin S. Kim, Jongeun Lee and Junmo Kim, IEEE Access 8, pp. 106097-106108, IEEE, June, 2020.

Successive Log Quantization for Cost-Efficient Neural Networks Using Stochastic Computing, Sugil Lee, Hyeonuk Sim, Jooyeon Choi and Jongeun Lee**, Proc. of the 56th Annual ACM/IEEE Design Automation Conference (DAC), pp. 7:1-7:6, June, 2019.
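A common thread in these papers is replacing uniform fixed-point quantization with schemes that map better to hardware, such as log-scale (power-of-two) quantization, where every multiplication by a weight reduces to a bit shift. Below is a minimal, illustrative sketch of the basic power-of-two idea in NumPy; it is not the algorithm from any of the papers above, and the function name and bit-width parameter are ours for illustration only.

```python
import numpy as np

def log2_quantize(w, num_bits=4):
    """Round each weight to a nearby signed power of two (illustrative sketch).

    With power-of-two weights, a hardware MAC unit can replace its
    multipliers with simple barrel shifts, which is the main cost
    advantage of log-scale quantization.
    """
    sign = np.sign(w)
    mag = np.abs(w)
    # Round the exponent in the log domain; keep exact zeros as zero.
    e = np.round(np.log2(np.where(mag > 0, mag, 1.0)))
    # Clip exponents to the signed range representable in num_bits.
    e_min, e_max = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    e = np.clip(e, e_min, e_max)
    return np.where(mag > 0, sign * np.exp2(e), 0.0)

w = np.array([0.30, -0.07, 0.55, 0.0])
print(log2_quantize(w))  # [ 0.25  -0.0625  0.5   0. ]
```

Successive log quantization (the DAC ’19 paper above) refines this basic scheme by encoding the residual error with additional power-of-two terms, recovering much of the accuracy lost to the coarse log grid.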

Categories: Research

In-Memory Computing DNN Hardware Using Emerging Memory

December 8th, 2021

Related Publications

Fast and Low-Cost Mitigation of ReRAM Variability for Deep Learning Applications, Sugil Lee, Mohammed Fouda, Jongeun Lee**, Ahmed Eltawil and Fadi Kurdahi, Proc. of International Conference on Computer Design (ICCD), October, 2021.

Quarry: Quantization-based ADC Reduction for ReRAM-based Deep Neural Network Accelerators, Azat Azamat, Faaiz Asim and Jongeun Lee**, Proc. of International Conference on Computer-Aided Design (ICCAD), November, 2021.

Cost- and Dataset-free Stuck-at Fault Mitigation for ReRAM-based Deep Learning Accelerators, Giju Jung, Mohammed Fouda, Sugil Lee, Jongeun Lee**, Ahmed Eltawil and Fadi Kurdahi, Proc. of Design, Automation and Test in Europe (DATE), pp. 1733-1738, February, 2021.

IR-QNN Framework: An IR Drop-Aware Offline Training of Quantized Crossbar Arrays, Mohammed E. Fouda, Sugil Lee, Jongeun Lee, Gun Hwan Kim, Fadi Kurdahi and Ahmed Eltawil, IEEE Access 8, pp. 228392-228408, IEEE, December, 2020.

Architecture-Accuracy Co-optimization of ReRAM-based Low-cost Neural Network Processor, Segi Lee, Sugil Lee, Jongeun Lee**, Jong-Moon Choi, Do-Wan Kwon, Seung-Kwang Hong and Kee-Won Kwon, Proc. of the 30th ACM Great Lakes Symposium on VLSI (GLSVLSI), pp. 427-432, September, 2020.

Learning to Predict IR Drop with Effective Training for ReRAM-based Neural Network Hardware, Sugil Lee, Mohammed Fouda, Jongeun Lee**, Ahmed Eltawil and Fadi Kurdahi, Proc. of the 57th Annual ACM/IEEE Design Automation Conference (DAC), pp. 1-6, July, 2020.
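The common substrate in these papers is the ReRAM crossbar: weights are programmed as cell conductances, inputs are applied as wordline voltages, and each bitline current is an analog dot product that an ADC must digitize, with ADC cost and non-idealities such as IR drop and device variability being the central problems. The behavioral model below is a minimal sketch of only the ideal flow plus ADC quantization, using an illustrative `crossbar_mvm` helper of our own; it deliberately omits the IR-drop and variability effects that several of the papers above model and mitigate.

```python
import numpy as np

def crossbar_mvm(weights, x, adc_bits=4):
    """Behavioral sketch of one ReRAM crossbar matrix-vector multiply.

    weights: (rows, cols) non-negative conductances stored in the cells.
    x:       (rows,) inputs applied as wordline voltages.
    Bitline currents accumulate the dot products in the analog domain;
    an ADC then quantizes each column sum to adc_bits of resolution.
    """
    currents = x @ weights                      # ideal analog accumulation
    full_scale = max(float(np.max(currents)), 1e-12)
    levels = 2 ** adc_bits - 1
    codes = np.round(currents / full_scale * levels)
    return codes / levels * full_scale          # digitized column sums

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(64, 8))        # conductances are non-negative
x = rng.uniform(0.0, 1.0, size=64)
print(crossbar_mvm(W, x, adc_bits=4))
```

Lowering `adc_bits`, or sharing fewer ADCs across bitlines, is exactly where quantization-based approaches such as Quarry aim: the fewer bits each column sum needs, the smaller the ADC area and energy.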

Categories: Research

Deep Neural Network based on Stochastic Computing

December 8th, 2021

Related Publications

Bitstream-based Neural Network for Scalable, Efficient and Accurate Deep Learning Hardware, Hyeonuk Sim and Jongeun Lee**, Frontiers in Neuroscience 14, pp. 1198, Frontiers, December, 2020.

Cost-effective Stochastic MAC Circuits for Deep Neural Networks, Hyeonuk Sim and Jongeun Lee**, Neural Networks 117, pp. 152-162, Elsevier, September, 2019.

Successive Log Quantization for Cost-Efficient Neural Networks Using Stochastic Computing, Sugil Lee, Hyeonuk Sim, Jooyeon Choi and Jongeun Lee**, Proc. of the 56th Annual ACM/IEEE Design Automation Conference (DAC), pp. 7:1-7:6, June, 2019.

Log-Quantized Stochastic Computing for Memory and Computation Efficient DNNs, Hyeonuk Sim and Jongeun Lee**, Proc. of the 24th Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 280-285, January, 2019.

An Efficient and Accurate Stochastic Number Generator Using Even-distribution Coding, Aidyn Zhakatayev, Kyounghoon Kim, Jongeun Lee** and Kiyoung Choi, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) 37(12), pp. 3056-3066, December, 2018.

DPS: Dynamic Precision Scaling for Stochastic Computing-Based Deep Neural Networks, Hyeonuk Sim, Saken Kenzhegulov and Jongeun Lee**, Proc. of the 55th Annual ACM/IEEE Design Automation Conference (DAC), pp. 13:1-13:6, June, 2018.

Sign-Magnitude SC: Getting 10X Accuracy for Free in Stochastic Computing for Deep Neural Networks, Aidyn Zhakatayev, Sugil Lee, Hyeonuk Sim and Jongeun Lee**, Proc. of the 55th Annual ACM/IEEE Design Automation Conference (DAC), pp. 158:1-158:6, June, 2018.

FPGA Implementation of Convolutional Neural Network Based on Stochastic Computing, Daewoo Kim, Mansureh S. Moghaddam, Hossein Moradian, Hyeonuk Sim, Jongeun Lee** and Kiyoung Choi, Proc. of IEEE International Conference on Field-Programmable Technology (FPT), pp. 287-290, December, 2017.

Accurate and Efficient Stochastic Computing Hardware for Convolutional Neural Networks, Joonsang Yu, Kyounghoon Kim, Jongeun Lee* and Kiyoung Choi, Proc. of IEEE International Conference on Computer Design (ICCD), pp. 105-112, November, 2017.

A New Stochastic Computing Multiplier with Application to Deep Convolutional Neural Networks, Hyeonuk Sim and Jongeun Lee**, Proc. of the 54th Annual ACM/IEEE Design Automation Conference (DAC), pp. 29:1-29:6, June, 2017.

Scalable Stochastic-Computing Accelerator for Convolutional Neural Networks, Hyeonuk Sim, Dong Nguyen, Jongeun Lee** and Kiyoung Choi, Proc. of the 22nd Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 696-701, January, 2017.
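Stochastic computing (SC) represents a value in [0, 1] as the probability of a 1 in a random bitstream, so a multiplier collapses to a single AND gate at the price of long streams and sampling noise. The snippet below is a minimal unipolar-SC sketch with an illustrative `to_bitstream` helper of our own; the papers above develop substantially more accurate and efficient variants (e.g., sign-magnitude SC, log-quantized SC, and dynamic precision scaling).

```python
import numpy as np

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a random unipolar bitstream."""
    return (rng.random(length) < p).astype(np.uint8)

rng = np.random.default_rng(42)
length = 4096
a, b = 0.75, 0.40

# Unipolar SC multiply: AND two independent bitstreams,
# then decode the result by counting ones.
sa = to_bitstream(a, length, rng)
sb = to_bitstream(b, length, rng)
estimate = np.mean(sa & sb)

print(f"SC estimate: {estimate:.3f}  exact: {a * b:.3f}")  # ~0.30 vs 0.30
```

Accuracy improves only with stream length (roughly as 1/sqrt(N)), which is why much of the work above focuses on getting the same accuracy from shorter streams or from cheaper stochastic MAC structures.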

Categories: Research

Architecture Optimization for AI Accelerators

December 7th, 2021

Related Publications

Specializing CGRAs for Light-Weight Convolutional Neural Networks, Jungi Lee and Jongeun Lee**, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), November, 2021. (online publication)

NP-CGRA: Extending CGRAs for Efficient Processing of Light-weight Deep Neural Networks, Jungi Lee and Jongeun Lee**, Proc. of Design, Automation and Test in Europe (DATE), pp. 1408-1413, February, 2021.

Architecture-Accuracy Co-optimization of ReRAM-based Low-cost Neural Network Processor, Segi Lee, Sugil Lee, Jongeun Lee**, Jong-Moon Choi, Do-Wan Kwon, Seung-Kwang Hong and Kee-Won Kwon, Proc. of the 30th ACM Great Lakes Symposium on VLSI (GLSVLSI), pp. 427-432, September, 2020.

SparTANN: Sparse Training Accelerator for Neural Networks with Threshold-based Sparsification, Hyeonuk Sim, Jooyeon Choi and Jongeun Lee**, Proc. of ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), pp. 211-216, August, 2020.

Double MAC on a DSP: Boosting the Performance of Convolutional Neural Networks on FPGAs, Sugil Lee, Daewoo Kim, Dong Nguyen and Jongeun Lee**, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) 38(5), pp. 888-897, May, 2019.

Efficient FPGA Implementation of Local Binary Convolutional Neural Network, Aidyn Zhakatayev and Jongeun Lee**, Proc. of the 24th Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 699-704, January, 2019.

XOMA: Exclusive On-Chip Memory Architecture for Energy-Efficient Deep Learning Acceleration, Hyeonuk Sim, Jason H. Anderson and Jongeun Lee**, Proc. of the 24th Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 651-656, January, 2019.

FPGA Prototyping of Local Binary Convolutional Neural Network, Segi Lee, Aidyn Zhakatayev and Jongeun Lee**, Proc. of the 26th Korean Conference on Semiconductors, February, 2019.

Design Space Exploration of FPGA Accelerators for Convolutional Neural Networks, Atul Rahman, Sangyun Oh, Jongeun Lee** and Kiyoung Choi, Proc. of Design, Automation and Test in Europe (DATE), pp. 1147-1152, March, 2017.

Double MAC: Doubling the Performance of Convolutional Neural Networks on Modern FPGAs, Dong Nguyen, Daewoo Kim and Jongeun Lee**, Proc. of Design, Automation and Test in Europe (DATE), pp. 890-893, March, 2017.

Efficient FPGA Acceleration of Convolutional Neural Networks Using Logical-3D Compute Array, Atul Rahman, Jongeun Lee** and Kiyoung Choi, Proc. of Design, Automation and Test in Europe (DATE), pp. 1393-1398, March, 2016.
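A recurring trick in this line of work is operand packing: the two “Double MAC” papers above squeeze two low-precision multiplications into one wide FPGA DSP multiplier, exploiting the identity (a*2^k + b)*w = a*w*2^k + b*w as long as enough guard bits keep the two partial products from overlapping. The integer sketch below illustrates only that packing arithmetic, not the papers’ exact DSP bit layout or signed-operand handling.

```python
# Pack two unsigned 8-bit activations into one wide operand and get
# both products from a single multiplication by the 8-bit weight w.
a, b, w = 200, 57, 93
SHIFT = 18                           # guard bits: each 8x8 product fits in 16 bits

packed = (a << SHIFT) | b            # one wide operand holding both activations
wide = packed * w                    # single multiply yields both products

prod_b = wide & ((1 << SHIFT) - 1)   # low field:  b * w
prod_a = wide >> SHIFT               # high field: a * w

assert prod_a == a * w and prod_b == b * w
print(prod_a, prod_b)                # 18600 5301
```

With signed operands or accumulation across many MACs, extra correction and guard bits are needed; working that out for real DSP blocks is the substance of the DATE ’17 and TCAD papers above.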

Categories: Research

Difference between a positive and a negative person

November 16th, 2021

A quotation from a keynote at “ICT Colloquium 2021” today: “A positive person knows no limit, but a negative person has no result.” (The two clauses rhyme very well in Korean.)

Categories: Uncategorized

BK21 FOUR

August 11th, 2021

ICCL is participating in the BK21 FOUR program in the Department of Electrical Engineering at UNIST.

The main theme of our BK21 FOUR Program is neuromorphic computing.

Categories: News

Paper accepted to CVPR ’21

March 15th, 2021

CVPR is the premier conference on computer vision and deep learning algorithms and optimizations. Congratulations to Sangyun and the other team members, including Hyeonuk and Sugil! The title of the paper is “Automated Log-Scale Quantization for Low-Cost Deep Neural Networks.”

Categories: News

Jungi Lee

March 13th, 2021

Jun Gi Lee joined the lab in March, 2019, as a master’s program student. He majored in Mechatronics Engineering at Chungnam National University. He received his master’s degree in electrical engineering from UNIST in February 2021, with a thesis titled “NP-CGRA: Extending CGRAs for Efficient Processing of Light-weight Deep Neural Networks.” After graduation, he joined ElroiLab, a Seoul-based startup specializing in AI and robotics.

Categories: People Tags:

Minjoon Song

February 21st, 2021

Minjoon Song joined the lab in February, 2021, as a master’s program student. He majored in Automotive-IT Convergence Engineering at Kookmin University. His research interest lies at the intersection of architecture and compilers for NPUs (Neural Processing Units). Mr. Song graduated with a master’s degree and joined BOS Semiconductors, a global fabless company focused on automotive semiconductors.

Categories: People

Hyeonuk Sim

February 7th, 2021

Hyeonuk Sim joined the lab in March, 2015, as a student in the combined master’s and Ph.D. degree program. He received the B.S. and Ph.D. degrees in computer science and engineering from Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea, in 2015 and 2021, respectively.

He is currently a Staff Researcher at the Samsung Advanced Institute of Technology in Suwon, South Korea. His research interests include energy-efficient accelerator design for domains such as artificial neural networks and stochastic computing.

He won the Naver Ph.D. Fellowship in 2017.

Categories: People