KAIST Develops World’s First Low-Power Deep Learning Semiconductor Chip for Mobile Platforms

A deep learning semiconductor chip that can bring AI (artificial intelligence) systems to mobile platforms has been developed by a South Korean research team. Thanks to its low-power operation, small mobile devices such as smartphones are expected to be able to run a variety of high-performance AI functions in real time.

The Institute for Information and Communications Technology Promotion (IITP, Director Lee Sang-hong) announced on the 28th that a research team led by Professor Yoo Hoi-joon of KAIST's Electrical and Electronic Engineering Department has developed a low-power deep learning semiconductor chip for mobile platforms called the 'Deep Neural Network Processing Unit (DNPU)'.

Deep learning is a machine learning technology that resembles the structure of the human brain. Through artificial neural networks, it discovers and learns patterns in large amounts of data and classifies objects. Google's AlphaGo also utilized deep learning technology. The CNN (Convolutional Neural Network) and the RNN (Recurrent Neural Network) are the major types of artificial neural network.

A CNN is usually used for processing visual information; it recognizes faces and objects in images. An RNN, on the other hand, is used for processing and generating sequential data through neurons that have feedback paths.
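The structural difference can be sketched in a few lines of plain Python. The sketch below is purely illustrative and is not the DNPU's implementation: the CNN part slides a small weight filter across a 2D image, while the RNN part carries a hidden state from one time step to the next through a feedback path.

# Illustrative sketch of the two network types (not DNPU code).

def conv2d(image, kernel):
    """CNN building block: slide a small filter over a 2D image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

def rnn_step(x, h, w_x=0.5, w_h=0.9):
    """RNN building block: the hidden state h is the feedback path."""
    return w_x * x + w_h * h

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(conv2d(image, [[1, -1], [1, -1]]))   # spatial pattern detection (CNN)

h = 0.0
for x in [1.0, 0.0, 1.0]:                  # sequential processing (RNN)
    h = rnn_step(x, h)
print(h)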

Until now, it has been difficult to apply deep learning technology to mobile platforms, as it requires a high-performance computing environment to process large amounts of data and consumes a huge amount of power.

The research team used its own optimization technology to build a low-power semiconductor chip capable of running both CNNs and RNNs. It designed accelerators (processors) optimized for CNN and RNN calculations and integrated them into the low-power chip.

The CNN accelerator minimizes accesses to external memory and lowers power consumption by relying mainly on on-chip memory. It optimizes the calculation process by dividing the data into fine-grained pieces so that they can be stored entirely in internal on-chip memory.
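The energy saving comes from replacing many external DRAM accesses with cheap on-chip accesses. A rough software analogy, with a hypothetical buffer size chosen purely for illustration, looks like this:

# Illustrative tiling analogy (hypothetical buffer size, not the DNPU design).
# External DRAM accesses cost far more energy than on-chip SRAM accesses,
# so data is processed in tiles small enough to fit in the on-chip buffer.

ON_CHIP_BUFFER = 4  # hypothetical on-chip capacity, in elements

def process_tiled(data, tile_size=ON_CHIP_BUFFER):
    external_reads = 0
    results = []
    for start in range(0, len(data), tile_size):
        tile = data[start:start + tile_size]   # one burst read from DRAM
        external_reads += 1
        # All further work on this tile stays "on chip": no DRAM traffic.
        results.extend(x * x for x in tile)
    print(f"{external_reads} external reads instead of {len(data)}")
    return results

process_tiled(list(range(16)))   # 4 external reads instead of 16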

It also enables meticulous optimization by diversifying the outputs of the many arithmetic layers; when a CNN's layer outputs are kept uniform for every layer, by contrast, optimization is limited.
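The article does not detail the exact scheme, but one common way to diversify layer outputs is to quantize each layer with a different bit width. The following is a hedged sketch of that general idea, not the DNPU's confirmed method:

# Hedged sketch: per-layer output diversification via different bit widths.
# (One common technique; the article does not specify the DNPU's scheme.)

def quantize(values, bits):
    """Round values onto a grid with 2**bits levels spanning [-1, 1]."""
    step = 2.0 / (2 ** bits - 1)
    return [round(v / step) * step for v in values]

layer_outputs = [0.13, -0.72, 0.55, 0.98]
for bits in (4, 6, 8):                     # a different width per layer
    print(bits, [round(q, 3) for q in quantize(layer_outputs, bits)])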

The RNN accelerator saves repeatedly used arithmetic results in internal memory and brings them back whenever they are needed. This simplifies the calculations and increases energy efficiency by minimizing accesses to external memory. The DNPU's energy efficiency is four times higher than that of the TPU (Tensor Processing Unit), which is known as the brain of AlphaGo.
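A software analogy of this reuse, again purely illustrative rather than the actual hardware mechanism, is to cache the results of repeated multiplications in a small internal table and look them up instead of recomputing them:

# Illustrative caching analogy for the RNN accelerator (not the hardware).
# Recurrent layers apply the same weights at every time step, so results
# for repeated operand pairs can be stored and reused from internal memory.

cache = {}                 # stands in for a small on-chip result table
computed, reused = 0, 0

def cached_mul(w, x):
    """Return w * x, reusing a stored result when the operand pair repeats."""
    global computed, reused
    if (w, x) not in cache:
        cache[(w, x)] = w * x    # compute once, save internally
        computed += 1
    else:
        reused += 1              # served from internal memory
    return cache[(w, x)]

h = 0.0
for x in [1, 2, 1, 2, 1, 2]:     # quantized inputs repeat across time steps
    h = cached_mul(0.25, x) + cached_mul(0.5, h)

print(f"computed: {computed}, reused: {reused}")   # computed: 8, reused: 4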

The DNPU was presented at the Hot Chips conference, held in San Jose from the 20th to the 22nd of August, where it drew attention from participants. Hot Chips is an annual conference at which the world's leading companies and universities present their semiconductor chips.

“The DNPU can implement AI technologies such as object recognition, movement recognition, and image captioning in real time at low power,” said Professor Yoo Hoi-joon. “We will continue follow-up research so that the DNPU can be applied to smartphones, small robots, wearable devices, and IoT (Internet of Things) devices in the future.”

Staff Reporter Kim, Youngjoon | [email protected]
