Note: See News Archive for all news postings.

  • RapidGPT won the Best UIRP Award December 19, 2023

    Congratulations to our UIRP* team, RapidGPT, one of the three teams that won the Best UIRP Award this year. Our RapidGPT team (Hyeonjin Jo and Jaewoo Park) won the second-place award (“우수상” in Korean), which comes with some prize money! Congratulations again, and we look forward to your greater achievements in the future!

    *UIRP stands for Undergraduate Research Project (or Undergraduate Interdisciplinary Research Project), a program supported by UNIST and open to UNIST undergraduate students only.

  • NPU compiler paper accepted September 05, 2023

    Our recent paper on an NPU compiler specialized for modern binarized neural networks has been accepted to an upcoming conference, ASP-DAC 2024. Kudos to Minjoon and Faaiz, as well as the entire ICCL team.

  • GitLab updated August 23, 2023

    We are now using the latest version. You may have noticed a slightly different menu.

    Anyway, enjoy GitLab!

  • DAC/ICCAD papers accepted July 31, 2023

    The ICCL lab presented a paper at the 60th DAC in July 2023.

    The title of the DAC paper was, “NTT-PIM: Row-Centric Architecture and Mapping for Efficient Number-Theoretic Transform on PIM,” authored by Jaewoo Park, Sugil Lee and Jongeun Lee.

    We also had a paper accepted to ICCAD 2023.

    The title of the ICCAD paper is, “Hyperdimensional Computing as a Rescue for Efficient Privacy-Preserving Machine Learning-as-a-Service,” authored by Jaewoo Park, Chenghao Quan, Hyungon Moon and Jongeun Lee.

    Congratulations to those who contributed to DAC/ICCAD papers!

  • ICCAD Paper Accepted July 30, 2022

    Congratulations!  Our paper titled “Squeezing Accumulators in Binary Neural Networks for Extremely Resource-Constrained Applications,” authored by Azat and Jaewoo, has been accepted to the 41st International Conference on Computer-Aided Design (ICCAD 2022), which will be held in San Diego, California, October 30 – November 3.

    Unlike previous papers that try to reduce the multiplication overhead of neural network hardware, this paper asks a different question: in binarized neural networks and extremely low-precision quantized neural networks, what is the real bottleneck in hardware implementation?  It turns out that accumulators now take the lion’s share not only of area but, even more so, of power dissipation, and we propose a novel method to minimize this accumulator overhead.
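    As a rough illustration of why accumulators dominate (this is a generic sketch, not the method proposed in the paper): in a binarized layer each “multiply” is a single-bit XNOR, but summing N such 1-bit products requires an accumulator of roughly ceil(log2(N+1)) bits, so the accumulator grows with layer fan-in while the multipliers stay trivial. The function names below are hypothetical, for illustration only.

    ```python
    import math

    def xnor_popcount_dot(w_bits, x_bits):
        """Binarized dot product: XNOR then popcount.
        Bits encode {-1,+1} as {0,1}; each 'multiply' is a 1-bit XNOR,
        and only the accumulator holding the popcount is wide."""
        assert len(w_bits) == len(x_bits)
        matches = sum(1 - (w ^ x) for w, x in zip(w_bits, x_bits))  # XNOR = equality
        n = len(w_bits)
        return 2 * matches - n  # map match count back to a {-1,+1} dot product

    def accumulator_bits(fan_in):
        """Minimum unsigned accumulator width to hold a popcount of
        `fan_in` one-bit products."""
        return math.ceil(math.log2(fan_in + 1))

    # The 1-bit multipliers are nearly free, but for a typical conv-layer
    # fan-in (e.g., 3*3*256 = 2304 inputs) the accumulator still needs 12 bits:
    print(accumulator_bits(2304))  # -> 12
    ```

    Under this simple model, the accumulator width scales with log2 of the fan-in while each multiplier stays a single gate, which is consistent with the paper’s observation that accumulators become the bottleneck.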
