We’re looking for undergraduate interns who are interested in the following topics.
– Tensor processor design
– Hardware design with Python
– Prototyping custom hardware on a real FPGA board
– Quantization of neural networks
– AI processors
This internship is different from typical internships in that it is very well structured: students will learn key concepts and skills in the broad area of hardware/software co-design through hands-on practice.
Opportunities to participate in industry-collaborative projects (such as with Samsung Advanced Institute of Technology (SAIT)) are also available.
Drop by the lab or the professor’s office, or send an email, if you’re interested.
Due: June 26, 2022.
Tensor Processor Design with Python and FPGA Board
Congratulations to our UIRP* team, RapidGPT, one of the three teams that won the Best UIRP Award this year. Our RapidGPT team (Hyeonjin Jo and Jaewoo Park) won the second-place award (“우수상,” or Excellence Award, in Korean), which comes with some prize money! Congrats again, and we look forward to your even greater achievements in the future!
*UIRP stands for Undergraduate Research Project (also called Undergraduate Interdisciplinary Research Project), a program supported by UNIST and open to UNIST undergraduate students only.
Our recent paper on an NPU compiler specialized for modern binarized neural networks has been accepted to an upcoming conference, ASP-DAC 2024. Kudos to Minjoon and Faaiz, as well as the entire ICCL team.
The ICCL lab gave a presentation at the 60th DAC in July 2023.
The DAC paper, “NTT-PIM: Row-Centric Architecture and Mapping for Efficient Number-Theoretic Transform on PIM,” was authored by Jaewoo Park, Sugil Lee, and Jongeun Lee.
We also had a paper accepted to ICCAD 2023.
The ICCAD paper, “Hyperdimensional Computing as a Rescue for Efficient Privacy-Preserving Machine Learning-as-a-Service,” was authored by Jaewoo Park, Chenghao Quan, Hyungon Moon, and Jongeun Lee.
Congratulations to those who contributed to DAC/ICCAD papers!
Congratulations! Our paper titled “Squeezing Accumulators in Binary Neural Networks for Extremely Resource-Constrained Applications,” authored by Azat and Jaewoo, has been accepted to the 41st International Conference on Computer-Aided Design (ICCAD 2022), which will be held in San Diego, California, on October 30 – November 3.
Unlike previous papers that try to reduce the multiplication overhead of neural network hardware, this paper asks a different question: in binarized neural networks and extremely low-precision quantized neural networks, what is the real bottleneck in the hardware implementation? It turns out that accumulators now take the lion’s share of not only area but also power dissipation, and we propose a novel method to minimize accumulator overhead.
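To get a feel for why accumulators can dominate, here is a minimal Python sketch (an illustration only, not the method of the paper): in a binarized dot product, each “multiply” reduces to a one-bit XNOR, while the accumulator/adder tree must still be wide enough to hold the full sum (roughly log2(N)+1 bits for N inputs).

```python
import numpy as np

def bnn_dot(x_bits, w_bits):
    """Binarized dot product via XNOR + popcount (illustrative only).

    x_bits, w_bits: arrays of {0, 1} encoding {-1, +1} activations/weights.
    Each 'multiply' is a 1-bit XNOR, essentially free in hardware, but the
    accumulated popcount needs about log2(N)+1 bits, so the accumulator and
    adder tree, not the multipliers, dominate area and power.
    """
    n = len(x_bits)
    xnor = np.logical_not(np.logical_xor(x_bits, w_bits))  # 1-bit products
    popcount = int(np.sum(xnor))                            # wide accumulation
    return 2 * popcount - n                                 # map back to the +/-1 domain

# For a 4096-element dot product, every product is 1 bit, yet the
# accumulator must represent values up to +/-4096 (about 13 bits).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=4096)
w = rng.integers(0, 2, size=4096)
print(bnn_dot(x, w))
```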
Congratulations! Our paper titled “Non-Uniform Step Size Quantization for Post-Training Quantization,” authored by Sangyun and Jounghyun as well as our graduate, Hyeonuk, has been accepted to the European Conference on Computer Vision (ECCV) 2022, which will be held in Tel Aviv, Israel, on October 23–27.
Unlike previous papers that focus on better training for quantized neural networks, this paper proposes a radically new concept called the subset quantizer. The idea is that by selecting the best subset of quantization levels from a given set of predefined levels, we can increase the representational capability of a quantizer while keeping the arithmetic operations hardware-friendly. The concept of the subset quantizer itself was developed by Dr. Hyeonuk Sim together with his advisor, Dr. Jongeun Lee, during the last year of his Ph.D. program.
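The following toy Python sketch illustrates the general idea of choosing a subset of predefined levels; the brute-force search, the candidate level set, and the function name are assumptions for illustration and are not the paper’s actual method.

```python
import itertools
import numpy as np

def best_subset_quantize(w, candidate_levels, k):
    """Toy 'subset quantizer': from a predefined set of candidate levels,
    pick the k-level subset that minimizes the mean squared quantization
    error for tensor w, then quantize w to that subset.
    (Brute-force search; purely illustrative, not the paper's algorithm.)
    """
    best_err, best_levels = np.inf, None
    for subset in itertools.combinations(candidate_levels, k):
        levels = np.array(subset)
        # Quantize each weight to the nearest level in this subset.
        q = levels[np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)]
        err = np.mean((w - q) ** 2)
        if err < best_err:
            best_err, best_levels = err, levels
    return best_levels, best_err

# Hardware-friendly candidate levels (e.g., signed powers of two) -- illustrative only.
candidates = [-1.0, -0.5, -0.25, -0.125, 0.0, 0.125, 0.25, 0.5, 1.0]
w = np.random.default_rng(0).normal(0, 0.25, size=1024)
levels, err = best_subset_quantize(w, candidates, k=4)
print(levels, err)
```

The point of the sketch is only that a small, well-chosen subset of hardware-friendly levels can fit a given weight distribution better than a fixed uniform grid of the same size.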
Minsang Yu joined the lab in February 2022 as a master’s program student. He majored in electronic engineering. Before joining the lab, he worked as an assistant researcher at the Korea Electronics Technology Institute (KETI), where he developed an IoT edge device for sensor data synchronization for digital twins. His research interests include AI hardware accelerator design and electronic design automation with machine learning.
Minuk Hong joined the lab in February 2022 as a master’s program student. He majored in electronic engineering. His research interests include hardware accelerator design with HDL and HLS for AI applications.
Mr. Hong graduated in February 2024 with a master’s degree and joined SOOSAN ENS ((주)수산이앤에스), an AI+X company in South Korea.