(This was posted using org2blog plugin.)
Goal
- Learn LLVM, the state-of-the-art compiler infrastructure
- Explore application of compiler techniques to solving interesting problems
Exercise
- creating a module
- implementing DFG generation (or similar; see the sketch after this list)
- hacking the front-end
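For the DFG exercise, here is a minimal sketch of one possible starting point: a legacy-pass-manager LLVM function pass that prints def-use edges, which are exactly the edges of a data-flow graph. The class name DFGPrinter and the pass name print-dfg are made up for illustration; the actual exercise may look quite different.
#+BEGIN_SRC C++
// Minimal sketch: enumerate def-use edges (the edges of a DFG) in each function.
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct DFGPrinter : public FunctionPass {
  static char ID;
  DFGPrinter() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    errs() << "DFG of " << F.getName() << ":\n";
    for (BasicBlock &BB : F)
      for (Instruction &I : BB)
        for (User *U : I.users())        // each use of I is a DFG edge I -> U
          errs() << "  " << I << "  ->  " << *U << "\n";
    return false;                        // analysis only; the IR is unchanged
  }
};
} // namespace

char DFGPrinter::ID = 0;
static RegisterPass<DFGPrinter> X("print-dfg", "Print def-use (DFG) edges");
#+END_SRC
Built as a shared library and loaded into opt with -load (with the legacy pass manager enabled on newer LLVM releases), this prints one edge per SSA use.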
Seminar Topics
- LLVM (of course!)
- JIT compilation (?)
- … any suggestions?
For More Ambitious Students
Study/evaluation of an LLVM-based JIT translator
- An LLVM-based JIT translator has already been designed/implemented
- The main goal of this activity is to learn the JIT version by running it with benchmarks (a minimal sketch of driving such a JIT appears after this list)
- One possible outcome is to compare its performance with a static version (which is also LLVM-based)
- If you can improve the performance of the JIT version, that would be fantastic! (This would probably require an extended period of time…)
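The following is only a minimal, hypothetical sketch of driving an LLVM-based JIT through ORC's LLJIT, not the lab's translator: it parses a tiny IR module from a string and executes a function natively. Exact APIs differ slightly across LLVM versions.
#+BEGIN_SRC C++
// Hypothetical sketch: JIT-compile and run a tiny IR function with ORC LLJIT.
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;
using namespace llvm::orc;

int main() {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  const char *IR = "define i32 @add(i32 %a, i32 %b) {\n"
                   "  %s = add i32 %a, %b\n"
                   "  ret i32 %s\n"
                   "}\n";

  auto Ctx = std::make_unique<LLVMContext>();
  SMDiagnostic Err;
  std::unique_ptr<Module> M = parseIR(MemoryBufferRef(IR, "jit-demo"), Err, *Ctx);
  if (!M) { Err.print("jit-demo", errs()); return 1; }

  auto JIT = cantFail(LLJITBuilder().create());
  cantFail(JIT->addIRModule(ThreadSafeModule(std::move(M), std::move(Ctx))));

  // Look up the JIT-compiled function and call it natively. Timing such calls
  // against a statically compiled binary is the kind of comparison the
  // benchmark study above asks for.
  auto Sym = cantFail(JIT->lookup("add"));
  auto *Add = Sym.toPtr<int (*)(int, int)>(); // older LLVM: cast Sym.getAddress()
  return Add(2, 3) == 5 ? 0 : 1;
}
#+END_SRC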
Disclaimer
There are other interesting projects as well for those who are inclined toward hardware design (such as FPGA) or GPU computing. We will discuss this in the first meeting.
- Other topics: SRP trial (Paek), VM security (Youn), GPU profiling (Choi)
Multi-core and even many-core processors have been successfully used in other domains. Reconfigurable array processors, for instance, have been actively researched and used as on-chip accelerators for stream-processing applications and embedded processors, due to their extremely low-power, high-performance execution compared to general-purpose processors or even DSPs (digital signal processors).
However, the main challenge in such accelerator-type reconfigurable processors is compilation: the problem of how to map applications onto the architecture. At the heart of this problem is the 2D placement-and-routing problem, traditionally recognized as a CAD problem, which is why it is often discussed in the design automation community. Still, the problem needs more research and development effort (such as mature tool chains) before the architecture can see more widespread adoption.
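To make the placement-and-routing flavor of the problem concrete, here is a toy sketch (not our mapper) that greedily places a small data-flow chain onto a hypothetical 4x4 PE array, using Manhattan distance between producer and consumer as a stand-in for routing cost; all names and numbers are made up.
#+BEGIN_SRC C++
// Toy illustration of CGRA mapping as 2D placement and routing.
#include <array>
#include <cstdio>
#include <cstdlib>
#include <utility>
#include <vector>

struct Op { const char *name; int producer; };   // producer = index of the op feeding this one, -1 if none

int main() {
  // A tiny data-flow chain: load -> mul -> add -> store
  std::vector<Op> dfg = {{"load", -1}, {"mul", 0}, {"add", 1}, {"store", 2}};

  const int N = 4;                                // 4x4 processing-element (PE) array
  std::array<std::array<int, N>, N> pe;           // which op each PE holds (-1 = free)
  for (auto &row : pe) row.fill(-1);

  std::vector<std::pair<int, int>> place(dfg.size());
  int routingCost = 0;

  for (size_t i = 0; i < dfg.size(); ++i) {
    // Greedy placement: pick the free PE closest to the producer's PE.
    int bestX = -1, bestY = -1, bestDist = 1 << 30;
    for (int x = 0; x < N; ++x)
      for (int y = 0; y < N; ++y) {
        if (pe[x][y] != -1) continue;
        int dist = 0;
        if (dfg[i].producer >= 0) {
          auto [px, py] = place[dfg[i].producer];
          dist = std::abs(x - px) + std::abs(y - py);   // Manhattan routing distance
        }
        if (dist < bestDist) { bestDist = dist; bestX = x; bestY = y; }
      }
    pe[bestX][bestY] = (int)i;
    place[i] = {bestX, bestY};
    routingCost += bestDist;
    std::printf("%-5s -> PE(%d,%d)\n", dfg[i].name, bestX, bestY);
  }
  std::printf("total routing cost = %d\n", routingCost);
}
#+END_SRC
A real mapper must also handle routing resources, timing (e.g., modulo scheduling of loops), and register constraints, which is what makes the problem hard.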
The ICCL Lab is actively pursuing research on this topic, with a few specific goals in mind. We have two funded projects on this topic, partially in collaboration with other labs.
Research Questions
- how to compile the usual C programs (“legacy”) onto coarse-grained reconfigurable architectures?
- can there be good architectural solutions (such as architecture extensions) to make it much easier to map programs to these architectures (“compiler-friendly architectures”)?
- what are the real bottlenecks to enhancing performance through these processors, and how can they be addressed?
- the application-level mapping problem
Publications
- Compiling Control-Intensive Loops for CGRAs with State-Based Full Predication, Kyuseung Han, Kiyoung Choi, and Jongeun Lee, Proc. of Design, Automation and Test in Europe (DATE ’13), March, 2013.
- Architecture Customization of On-Chip Reconfigurable Accelerators, Jonghee W. Yoon, Jongeun Lee*, Sanghyun Park, Yongjoo Kim, Jinyong Lee, Yunheung Paek, and Doosan Cho, ACM Transactions on Design Automation of Electronic Systems (TODAES), 18(4), pp. 52:1-52:22, ACM, October, 2013.
- Improving Performance of Nested Loops on Reconfigurable Array Processors, Yongjoo Kim, Jongeun Lee*, Toan X. Mai, and Yunheung Paek, ACM Transactions on Architecture and Code Optimization (TACO), 8(4), pp. 32:1-32:23, ACM, January, 2012.
- Exploiting Both Pipelining and Data Parallelism with SIMD Reconfigurable Architecture, Yongjoo Kim, Jongeun Lee*, Jinyong Lee, Toan X. Mai, Ingoo Heo, and Yunheung Paek, Proc. of International Symposium on Applied Reconfigurable Computing (ARC ’12), Lecture Notes in Computer Science, vol. 7199, pp. 40-52, March, 2012.
- High Throughput Data Mapping for Coarse-Grained Reconfigurable Architectures, Yongjoo Kim, Jongeun Lee*, Aviral Shrivastava, Jonghee W. Yoon, Doosan Cho, and Yunheung Paek, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 30(11), pp. 1599-1609, IEEE, November, 2011.
- Memory Access Optimization in Compilation for Coarse Grained Reconfigurable Architectures, Yongjoo Kim, Jongeun Lee*, Aviral Shrivastava, and Yunheung Paek, ACM Transactions on Design Automation of Electronic Systems (TODAES), 16(4), pp. 42:1-42:27, ACM, October, 2011.
In the Embedded Computing Lab, we are currently recruiting motivated graduate students with a CS or EE background.
Applications to UNIST for academic year 2014 are still open until at least mid-December.
Senior-level undergraduate students who are interested in the design and optimization of computer architectures/compilers/systems, especially for mobile embedded systems, are also welcome!
In the ICCL lab, we are particularly interested in various (especially energy-efficient) ways of implementing computation and programmability. If you are a good programmer, it will help a lot, but even if you aren’t, you can learn how to program (along with many other things) while in graduate school. Please contact us if interested.
Congratulations to Yeonghun and Seongseok!
Our paper titled “Evaluator-Executor Transformation for Efficient Pipelining of Loops with Conditionals” has been accepted to ACM Transactions on Architecture and Code Optimization. The authors will also be invited to HiPEAC 2014 in Vienna, Austria. Congratulations again!
On Tuesday, July 9, we’ll have a lab meeting, where Seongseok will give a short presentation on stream applications.
Date: Tue, July 9 (moved from Wed, July 10)
Time: 5 pm
Place: Room 511
Andrew Huh from the University of Michigan is doing a lab internship at the ICCL lab as part of the UNIST Global program.
“At Bell Labs, the first thing they told us was that we were there to use our brains, not just for our engineering skills,” said Chen. “Now that’s what I tell my students.”
via http://www.eetimes.com/electronics-blogs/other/4414629/Taiwan-reversing-brain-drain
Nowadays, I find EETimes articles much more interesting than those in CACM. The above quote is something that reminds me of our mission as researchers in academia.
Learning Processor-Generator
Automatic processor generation using a commercial state-of-the-art tool (Synopsys LISA tool)
Programming GPU
Machine Learning algorithm (HMM) on GPU (do something similar to: https://code.google.com/p/hmm-cuda/)
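For orientation, below is a plain C++ reference sketch of the HMM forward algorithm, i.e., the recurrence that a CUDA port (as in hmm-cuda) would parallelize over states; the model sizes and probabilities are made up.
#+BEGIN_SRC C++
// CPU reference sketch of the HMM forward algorithm (likelihood of an observation sequence).
#include <cstdio>
#include <vector>

int main() {
  const int S = 2;                       // number of hidden states
  const int T = 4;                       // observation length
  double pi[S] = {0.6, 0.4};             // initial state probabilities
  double A[S][S] = {{0.7, 0.3},          // transition probabilities A[i][j] = P(j | i)
                    {0.4, 0.6}};
  double B[S][2] = {{0.9, 0.1},          // emission probabilities B[i][o] = P(o | i)
                    {0.2, 0.8}};
  int obs[T] = {0, 1, 0, 1};             // observed symbol sequence

  // alpha[t][j] = P(obs[0..t], state at time t = j)
  std::vector<std::vector<double>> alpha(T, std::vector<double>(S, 0.0));
  for (int j = 0; j < S; ++j)
    alpha[0][j] = pi[j] * B[j][obs[0]];

  for (int t = 1; t < T; ++t)
    for (int j = 0; j < S; ++j) {        // on a GPU, each thread would handle one state j
      double sum = 0.0;
      for (int i = 0; i < S; ++i)
        sum += alpha[t - 1][i] * A[i][j];
      alpha[t][j] = sum * B[j][obs[t]];
    }

  double likelihood = 0.0;
  for (int j = 0; j < S; ++j)
    likelihood += alpha[T - 1][j];
  std::printf("P(observations) = %g\n", likelihood);
}
#+END_SRC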
For further details, please ask Prof. Lee (EB2 #501-5) or drop him a line.
In this year’s DATE (Design, Automation & Test in Europe), which is the largest European conference in the EDA (Electronic Design Automation) field, we have contributed two interactive presentation (IP) papers, each published in 4 pages, which will also appear in IEEE Xplore. Congrats to those participants!
Here is the paper information (follow the links for full text).
Seongseok Seo
Seongseok Seo entered UNIST in March 2012, though he started coming to the lab about six months earlier and worked as an intern during the winter break. He graduated from Ulsan University.
He is a recipient of Dongbu HiTek Best Paper Award at the 21st Korean Conference on Semiconductors (KCS), together with Yeonghun Jeong and Prof. Lee.
In 2014 he joined LG Electronics, in the CTO Division, in Seoul, Korea, as a hardware designer.