What is the importance of FPGAs in AI chips?

Last Update Time: 2019-12-20 11:11:07

      Artificial intelligence rests on three pillars: hardware, algorithms, and data, where hardware means the chips that run AI algorithms and the computing platforms built around them. On the hardware side, GPUs are currently used for the parallel computation of neural networks, while FPGAs and ASICs also have the potential to rise in the future.

      A GPU (Graphics Processing Unit) is the "heart" of a graphics card. Like a CPU it is a microprocessor, but one dedicated to graphics computation.


      FPGAs have lower power consumption than GPUs, and shorter development times and lower costs than ASICs. Since Xilinx created the FPGA in 1984, it has held a place in communications, medical, industrial control, and security, and has seen phenomenal growth in the past few years. In the last two years, with the boom in cloud computing, high-performance computing, and artificial intelligence, attention to FPGAs and their inherent advantages has reached an unprecedented height.

      In today's market, giants such as Intel, IBM, Texas Instruments, Motorola, Philips, Toshiba, and Samsung have all been involved in FPGAs, but the most successful players are Xilinx and Altera. Together the two companies hold nearly 90% of the market and more than 6,000 patents. Intel acquired Altera for $16.7 billion in 2015, signaling its interest in developing FPGA-based computing power for artificial intelligence. These moves by industry giants suggest that, because FPGAs can make up for the shortcomings of CPUs in computing power and flexibility, the combination of CPU and FPGA will become an important direction for deep learning.

      The GPU is designed to perform the complex mathematical and geometric calculations required for graphics rendering. For workloads such as floating-point operations and parallel computing, a GPU can deliver tens or even hundreds of times the performance of a CPU. Since the second half of 2006, Nvidia has been launching related hardware products and software development tools, and it currently dominates the market for artificial intelligence hardware.
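      To make the scale of that gap concrete, the toy micro-benchmark below times a large matrix multiplication on the CPU and, if one is available, on a CUDA GPU. It is a minimal sketch assuming PyTorch is installed; the matrix size and any measured speedup are illustrative, not figures from this article.

```python
import time
import torch

# Illustrative micro-benchmark: one large matrix multiplication,
# first on the CPU, then (if available) on a CUDA GPU.
n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

t0 = time.perf_counter()
c_cpu = a @ b
cpu_s = time.perf_counter() - t0
print(f"CPU: {cpu_s:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()            # wait until the copies finish
    t0 = time.perf_counter()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()            # wait until the kernel finishes
    gpu_s = time.perf_counter() - t0
    print(f"GPU: {gpu_s:.3f} s  (speedup: {cpu_s / gpu_s:.0f}x)")
```

The two synchronize calls matter: CUDA kernels launch asynchronously, so without them the timer would stop before the GPU had actually finished the work.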

      The GPU's ability to process massive amounts of data in parallel coincides with the needs of deep learning, so it was the first accelerator adopted for the field. In 2011, Professor Andrew Ng (Wu Enda) took the lead in applying GPUs to the Google Brain project, with striking results: 12 NVIDIA GPUs delivered deep-learning performance equivalent to roughly 2,000 CPUs.

      The GPU, however, is an image processor designed to meet the demands of massively parallel computing in graphics. When applied to deep learning algorithms it therefore has three limitations: 1. its parallel-computing advantage cannot always be fully exploited in practice; 2. its hardware structure is fixed and cannot be reconfigured; 3. it runs deep learning algorithms far less energy-efficiently than ASICs and FPGAs.

      An FPGA (Field-Programmable Gate Array) is a chip that users can repeatedly reprogram according to their own needs. Compared with GPUs and CPUs, it offers high performance, low power consumption, and hardware-level programmability.

      FPGAs also have three limitations: 1. the computing power of the basic logic cells is limited; 2. speed and power consumption still leave room for improvement; 3. FPGAs are relatively expensive. An ASIC (Application-Specific Integrated Circuit) is an integrated circuit designed for a specific purpose. It cannot be reprogrammed, but it delivers high performance at low power; the trade-off is expensive, lengthy development.

      In recent years, the various dazzling chips such as TPUs, NPUs, VPUs, and BPUs are all essentially ASICs. Unlike GPUs and FPGAs, ASICs offer no flexibility: a customized ASIC cannot be changed once manufactured, so high initial costs and long development cycles set a high entry threshold. At present, the field is occupied mostly by companies that both master AI algorithms and are good at chip development, Google with its TPU being the prime example.

      Because they can be tailored precisely to neural-network algorithms, ASICs outperform GPUs and FPGAs in both performance and power consumption. The first-generation TPU is reported to be 14 to 16 times faster than contemporary GPUs, and an NPU is claimed to be as much as 118 times more powerful than a GPU. Cambricon has released its instruction set for external applications, and ASICs are expected to become the core of future AI chips.
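      Part of the reason a fixed-function chip can win so decisively is that the heart of the first-generation TPU is a systolic matrix-multiply unit wired directly for the dominant operation in neural networks. The Python sketch below simulates that idea in software; it is a toy model of an output-stationary systolic array, not the TPU's actual microarchitecture.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy simulation of an output-stationary systolic array.

    Each (i, j) position models a processing element (PE) holding a
    running partial sum. On every clock tick k, PE(i, j) multiplies
    the A value flowing in from the left by the B value flowing in
    from above and adds the product to its accumulator.
    """
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    acc = np.zeros((n, p))              # one accumulator per PE
    for k in range(m):                  # k plays the role of the clock
        for i in range(n):
            for j in range(p):
                acc[i, j] += A[i, k] * B[k, j]
    return acc

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Because the multiply-accumulate pattern is fixed, an ASIC can lay it out as a grid of hard-wired PEs with little of the instruction-fetch and cache machinery a general-purpose processor needs, which is where much of the performance-per-watt advantage comes from.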

      Another future direction for ASICs is brain-like chips. Based on neuromorphic engineering, a brain-like chip borrows the human brain's way of processing information: it is an ultra-low-power chip with learning ability, suited to real-time processing of unstructured information and closer to the goals of artificial intelligence. It tries to imitate the basic structure and principles of the human brain, replacing the traditional von Neumann architecture with neurons and synapses so that the chip can process information asynchronously, in parallel, at low speed, and in a distributed fashion, while being able to sense, recognize, and learn. IBM's TrueNorth is such a brain-like chip. At present, brain-like chips are still in their infancy; commercialization remains a long way off, which is also why many countries are actively investing in the area.
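      As a minimal illustration of what replacing the von Neumann architecture with neurons and synapses means, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, the basic unit many neuromorphic chips implement in silicon. All the constants are illustrative assumptions, not parameters of TrueNorth.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates incoming current, and emits a spike on crossing a
# threshold. Constants are arbitrary illustrative choices.
DT, TAU = 1.0, 20.0                       # time step, membrane time constant
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0

def simulate_lif(input_current):
    v, spikes = V_REST, []
    for t, i_in in enumerate(input_current):
        v += DT / TAU * (V_REST - v) + i_in   # leak + integrate
        if v >= V_THRESH:                     # threshold crossing
            spikes.append(t)                  # record the spike time
            v = V_RESET                       # reset after firing
    return spikes

rng = np.random.default_rng(0)
print(simulate_lif(rng.uniform(0.0, 0.2, size=100)))
```

Unlike a von Neumann processor, a neuromorphic chip keeps state (the membrane potential) inside each neuron circuit and communicates only through sparse spike events, which is what enables the asynchronous, distributed, ultra-low-power operation described above.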

      Different kinds of chips suit different scenarios. GPUs and CPUs fit both consumer and enterprise scales. FPGAs are better suited to enterprise users, especially in military and industrial electronics where reconfigurability is highly valued, and they are also a good fit for deployment in cloud data centers. ASICs can be mass-produced, and compared with FPGA solutions their cost and energy consumption make them more suitable for the consumer market.

 

If you want to know more, our website has product specifications for the FPGAs used in AI chips; you can visit ALLICDATA ELECTRONICS LIMITED for more information.