A paper titled “Scalable Stochastic-Computing Accelerator for Convolutional Neural Networks”, authored by Hyeonuk Sim and others, has been accepted to the 22nd Asia and South Pacific Design Automation Conference (ASP-DAC 2017), to be held in Tokyo, Japan, in January 2017. Congratulations!
Aidyn Zhakatayev
Aidyn Zhakatayev started his M.S. program in September 2016. He majored in EE with a double major in CSE at UNIST, which should give him a perfect background for his research into various aspects of stochastic computing and deep neural networks. He enjoys football and is from Kazakhstan.
He is the first author of a paper presented at the Design Automation Conference (DAC) in June 2018. After graduation he joined Deeplite, an AI-driven deep neural network optimization company in Montréal, Canada.
Daewoo Kim
Daewoo Kim joined the lab in August 2016 for his M.S. program starting in Fall 2016. He has a CS background but is also very competent in various areas of hardware design, including processor design, accelerator design for emerging applications such as deep learning, and high-level synthesis.
After graduation he joined the Home Appliance Division of LG Electronics in Guro-gu, Seoul, Korea.
We had a joint lab workshop with SNU last Friday, May 20. The photo was taken during the Gwanak Mountain hike right after the workshop.
DAC is the premier conference in the area of design automation and embedded systems optimization.
At DAC 2016, we are presenting a paper titled “Dynamic Energy-Accuracy Trade-off Using Stochastic Computing in Deep Neural Networks”, in collaboration with Prof. Choi’s group at SNU.
Congratulations!
Atul Rahman and other members of the Renew Lab (Hyeonuk Sim and Dong Nguyen) have won the highly competitive Samsung HumanTech Paper Award. The paper, titled “Efficient FPGA Acceleration of Convolutional DNNs Using 3D Compute Array”, was the only entry from UNIST in the CSE (Computer Science and Engineering) category. The award is extremely selective, with an acceptance rate of only about 6% this year.
This is the first time the award has been given to the School of ECE, UNIST in this category; three other teams from the school also won the award, each in a different category.
The ICCL lab at UNIST has one paper accepted to the 2016 Design, Automation and Test in Europe (DATE) conference, to be held in Dresden, Germany, March 14–18, 2016. The following is the information on the accepted paper:
- Title: Efficient FPGA Acceleration of Convolutional DNNs Using Logical 3D Compute Array
- Authors: Atul Rahman, Jongeun Lee and Kiyoung Choi
Congratulations!
The ICCL lab has one paper accepted to the 2016 International Symposium on Code Generation and Optimization (CGO), to be held in Barcelona, Spain, March 12–18, 2016. The following is the information on the accepted paper:
- Title: Communication-aware Multi-GPU Mapping for Stream Graphs
- Authors: Dong Nguyen and Jongeun Lee
Congratulations!
ICCL Members in front of Engineering Building 2, on a beautiful day in May.
Tip: You can see more pictures like this under Categories | Album.
Internship Positions at ICCL, UNIST
The Embedded Computing Laboratory at UNIST is recruiting multiple undergraduate researchers in the broad area of brain-inspired computing, as described below.
FPGA-based Deep Learning
Recent advances in deep learning are in large part due to the increased computing capability of off-the-shelf processors. To enable further advances in this direction, this project explores the use of “programmable hardware”, or FPGA (Field-Programmable Gate Array) technology, to accelerate deep neural networks such as convolutional neural networks. In a broader context, this research topic is about applying hardware-software co-design principles to machine learning algorithms, which has many implications and is an active area of research.
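For a sense of the kind of computation such an accelerator targets, here is an illustrative Python sketch (not code from this project; the function name and tensor shapes are assumptions for illustration). The core of a convolutional layer is a deeply nested multiply-accumulate loop, which an FPGA compute array parallelizes and pipelines in hardware.

```python
import numpy as np

def conv_layer(ifm, weights):
    """Naive convolutional-layer kernel (no padding, stride 1).
    ifm:     input feature maps, shape (C_in, H, W)
    weights: filters, shape (C_out, C_in, K, K)
    Returns output feature maps of shape (C_out, H-K+1, W-K+1)."""
    c_out, c_in, k, _ = weights.shape
    _, h, w = ifm.shape
    ofm = np.zeros((c_out, h - k + 1, w - k + 1))
    # The multiply-accumulate loop nest that an FPGA compute array
    # unrolls and pipelines.
    for oc in range(c_out):
        for y in range(h - k + 1):
            for x in range(w - k + 1):
                for ic in range(c_in):
                    for ky in range(k):
                        for kx in range(k):
                            ofm[oc, y, x] += (weights[oc, ic, ky, kx]
                                              * ifm[ic, y + ky, x + kx])
    return ofm

# Tiny usage example with random data.
out = conv_layer(np.random.rand(3, 8, 8), np.random.rand(4, 3, 3, 3))
print(out.shape)  # (4, 6, 6)
```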
This topic is best suited for students majoring in both CSE and EE (it does not matter which one is the first major). Prerequisites include Computer Organization, and exposure to hardware description languages is a strong plus. Knowledge of Machine Learning or Artificial Intelligence is a plus but not required.
Stochastic Deep Neural Network
Creating a Deep Neural Network (DNN) processor has many appeals. A DNN processor can be much more efficient than CPU-, GPU-, or FPGA-based implementations, enabling a host of interesting applications (e.g., real-time image recognition), and, being a processor, it can be applied to many different neural network applications. One challenge, however, is how to make it scale to both large and small networks. One idea is to apply Stochastic Computing (SC). SC is a way of representing numbers and performing arithmetic that is radically different from conventional digital computing, and it enables much more compact implementations of complex functions.
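To give a flavor of the idea, here is a minimal Python sketch (not code from this project; the helper names are made up for illustration). In the simplest “unipolar” SC encoding, a value in [0, 1] is represented by the probability of 1s in a bitstream, so multiplying two independent streams reduces to a single AND gate per bit.

```python
import random

def to_stream(p, length=4096):
    """Encode a value p in [0, 1] as a random bitstream whose
    fraction of 1s approximates p (unipolar SC encoding)."""
    return [1 if random.random() < p else 0 for _ in range(length)]

def from_stream(bits):
    """Decode a bitstream back into a value in [0, 1]."""
    return sum(bits) / len(bits)

# In unipolar SC, multiplication of two independent streams is just a
# bitwise AND, since P(a AND b) = P(a) * P(b).
a, b = 0.5, 0.8
sa, sb = to_stream(a), to_stream(b)
product = from_stream([x & y for x, y in zip(sa, sb)])
print(f"{a} * {b} ~= {product:.3f}")  # close to 0.40, up to stochastic error
```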
The best candidates for this topic have strong math skills (especially in probability). Background in machine learning or hardware design is not required.
Interested students should contact Prof. Lee.
Note: These research positions are related to the Samsung Future Technology Project.