Internship – Tensor Processor Design with Python and FPGA Board

June 15th, 2022

Hello,

We’re looking for undergraduate interns who are interested in the following topics:

– Tensor processor design
– Hardware design with Python
– Prototyping custom hardware on a real FPGA board
– Quantization of neural networks
– AI processors

This internship differs from typical internships in that it is very well structured: students will learn key concepts and skills in the broad area of hardware/software co-design through hands-on practice.

Opportunities to participate in industry collaboration projects (such as with the Samsung Advanced Institute of Technology (SAIT)) are also available.

Drop by the lab or the professor’s office, or send an email, if you’re interested.

Due: June 26, 2022.


Categories: News

Graduate students wanted

February 21st, 2020

We’re accepting new graduate students.
Background doesn’t really matter, but it helps a lot if you have one or more of the following:

  • software programming skills (e.g., C++ or advanced-level Python)
  • hardware design experience (e.g., Verilog/VHDL/SystemC development of a sizable system)
  • strong math or analytical thinking
  • a very good GPA from a reputable school (e.g., UNIST).

So whether you’re an EE or CSE major (or something else),
please apply or contact me if you’re interested and qualified.

We’re a leading research group in edge AI,
and our graduates go on to Samsung, LG, international start-ups, and more.

We’re also recruiting undergraduate interns. 
Please apply!

Categories: News

RapidGPT won the Best UIRP Award

December 19th, 2023

Congratulations to our UIRP* team, RapidGPT, which is one of the three teams that won the Best UIRP Award this year.  Our RapidGPT team (Hyeonjin Jo and Jaewoo Park) won the second place award (“우수상” in Korean), which comes with some prize money!  Congrats again and we look forward to your greater achievements in the future!

*UIRP is Undergraduate Research Project or Undergraduate Interdisciplinary Research Project supported by UNIST.  For UNIST undergraduate students only.

Categories: News

NPU compiler paper accepted

September 5th, 2023

Our recent paper on an NPU compiler specialized for modern binarized neural networks has been accepted to an upcoming conference, ASP-DAC 2024.  Kudos to Minjoon and Faaiz, as well as the entire ICCL team.

Categories: News

GitLab updated

August 23rd, 2023

We’re now using the latest version.  You may have noticed a slightly different menu.

Anyway, enjoy GitLab!

Categories: News

DAC/ICCAD papers accepted

July 31st, 2023

The ICCL lab gave a presentation at the 60th DAC, July 2023.

The DAC paper, “NTT-PIM: Row-Centric Architecture and Mapping for Efficient Number-Theoretic Transform on PIM,” was authored by Jaewoo Park, Sugil Lee, and Jongeun Lee.

We also had a paper accepted to ICCAD 2023.

The ICCAD paper, “Hyperdimensional Computing as a Rescue for Efficient Privacy-Preserving Machine Learning-as-a-Service,” was authored by Jaewoo Park, Chenghao Quan, Hyungon Moon, and Jongeun Lee.

Congratulations to those who contributed to DAC/ICCAD papers!

Categories: News

ICCAD Paper Accepted

July 30th, 2022

Congratulations!  Our paper titled “Squeezing Accumulators in Binary Neural Networks for Extremely Resource-Constrained Applications,” authored by Azat and Jaewoo, has been accepted to the 41st International Conference on Computer-Aided Design (ICCAD 2022), which will be held in San Diego, California, on October 30 – November 3.

Unlike previous papers that try to reduce the multiplication overhead of neural network hardware, this paper asks a different question: in binarized and extremely low-precision quantized neural networks, what is the real bottleneck in the hardware implementation?  It turns out that accumulators now take the lion’s share of not only area but, even more so, power dissipation, and we propose a novel method to minimize the accumulator overhead.
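
To make the accumulator bottleneck concrete, here is a minimal, hypothetical Python sketch (our own illustration, not code from the paper; the function name and bit-packing convention are assumptions): in a binarized dot product the multiplications collapse to XNOR and popcount on single-bit values, so the multi-bit addition that accumulates the result is the arithmetic that remains.

    # Illustrative sketch only (not from the paper): a binarized dot product.
    # Weights and activations are {-1, +1}, packed one per bit (1 -> +1, 0 -> -1).
    def binarized_dot(w_bits, x_bits, n):
        xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)   # 1 where the signs agree
        matches = bin(xnor).count("1")               # popcount: single-bit logic
        return 2 * matches - n                       # the only multi-bit arithmetic left

    # Example with 8-element vectors: result is the sum of eight +1/-1 products.
    print(binarized_dot(0b10110010, 0b10010110, 8))  # -> 4

Per bit, the XNOR/popcount logic is tiny; the accumulator that sums such multi-bit partial results across an entire layer must be wide enough to avoid overflow, which is why it ends up dominating area and power at very low precision.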


Categories: News

ECCV Paper Accepted

July 30th, 2022

Congratulations!  Our paper titled “Non-Uniform Step Size Quantization for Post-Training Quantization,” authored by Sangyun and Jounghyun, as well as our graduate, Hyeonuk, has been accepted to the European Conference on Computer Vision (ECCV) 2022, which will be held in Tel Aviv, Israel, on October 23–27.

Unlike previous papers that focus on better training for quantized neural networks, this paper proposes a radically new concept called the subset quantizer, based on the idea that by selecting the best subset of quantization levels from a given set of predefined levels, we can increase the representation capability of a quantizer while keeping the arithmetic operations hardware-friendly. The concept of the subset quantizer was developed by Dr. Hyeonuk Sim together with his advisor, Dr. Jongeun Lee, during the last year of his Ph.D. program.
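
As a rough illustration of the idea (our own sketch, not the paper’s algorithm; the function name, level pool, and brute-force search are hypothetical), one can think of subset quantization as choosing, from a fixed pool of hardware-friendly levels, the k levels that minimize the quantization error of a given weight tensor:

    # Illustrative sketch only: brute-force subset selection of quantization levels.
    from itertools import combinations
    import numpy as np

    def subset_quantize(w, pool, k):
        """Pick the k levels from `pool` that minimize MSE when quantizing w."""
        best_err, best_levels, best_q = float("inf"), None, None
        for subset in combinations(pool, k):
            levels = np.array(subset)
            q = levels[np.abs(w[..., None] - levels).argmin(axis=-1)]  # round to nearest level
            err = np.mean((w - q) ** 2)
            if err < best_err:
                best_err, best_levels, best_q = err, levels, q
        return best_levels, best_q

    # Example: a pool of signed powers of two (cheap in hardware); choose 4 levels.
    pool = [-1.0, -0.5, -0.25, -0.125, 0.0, 0.125, 0.25, 0.5, 1.0]
    w = np.random.randn(64) * 0.3
    levels, w_q = subset_quantize(w, pool, k=4)
    print(levels, np.mean((w - w_q) ** 2))

A real post-training quantization method would use a smarter search and per-layer statistics; the point here is only that restricting the quantizer to a subset of predefined, hardware-friendly levels keeps the arithmetic cheap while adapting the levels to the data.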


Categories: News

Minsang Yu

March 14th, 2022

Minsang Yu joined the lab in February 2022 as a master’s program student. He majored in electronic engineering. Before joining the lab, he worked as an assistant researcher at the Korea Electronics Technology Institute (KETI), where he developed an IoT edge device for sensor data synchronization for digital twins. His research interests include AI hardware accelerator design and electronic design automation with machine learning.

Categories: People

Minuk Hong

February 20th, 2022

Minuk Hong joined the lab in February 2022 as a master’s program student. He majored in electronic engineering. His research interests include hardware accelerator design with HDL and HLS for AI applications.

Categories: People

Research page updated

December 8th, 2021

The research page has been updated. Come and see!

 

 

Categories: News