Can FPGAs stand out among the many AI chips?
The artificial intelligence (AI) market continues to heat up, but the industry still disagrees sharply about how these systems should be built. Large technology companies are investing billions of dollars in new startups or funding research and development, and governments are providing major research grants to universities and research institutions, all hoping to stand out in this wave of AI competition.
According to Semiconductor Engineering, data from research firm Tractica shows that the global AI market will grow to $36.8 billion by 2025, but there is currently no consensus on the definition of AI or on the types of data that need to be analyzed. According to Raik Brinkmann, president and CEO of OneSpin Solutions, there are three issues that need to be addressed.
The first is the sheer volume of data to be processed; the second is parallel processing and interconnect technology; and the third is the energy consumed in moving large amounts of data.
At present, the first wave of AI chips on the market is built almost entirely from off-the-shelf CPUs, GPUs, FPGAs and DSPs. Although companies such as Intel, Google, NVIDIA, Qualcomm and IBM are developing new designs, it is still unclear who will win. In any case, these systems still need at least one CPU for control, but they may require different types of coprocessors.
AI processing centers on matrix multiplication and addition. GPUs operating in parallel offer a lower cost for this workload, but at the price of higher energy consumption. FPGAs with built-in DSP blocks and local memory can achieve better energy efficiency, but at a higher component price. Wally Rhines, chairman and CEO of Mentor Graphics, said that some teams use standard GPUs for deep learning while many use CPUs, and that the goal of making neural network behavior more human-like has stimulated a new wave of design.
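The matrix multiply-and-add at the heart of this workload is easy to see in a single neural-network layer. The sketch below is a minimal NumPy illustration (the function name and shapes are chosen for the example, not taken from any of the chips discussed):

```python
import numpy as np

def dense_layer(x, W, b):
    """One neural-network layer: matrix multiply, add, then a ReLU.

    W @ x + b is exactly the multiply-accumulate pattern that GPUs,
    FPGA DSP blocks and dedicated AI chips all try to accelerate.
    """
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector (4 features)
W = rng.standard_normal((3, 4))   # weight matrix (3 outputs x 4 inputs)
b = rng.standard_normal(3)        # bias vector

y = dense_layer(x, W, b)
print(y.shape)  # (3,)
```

A real network stacks many such layers, which is why raw multiply-accumulate throughput and the energy cost of moving W and x dominate AI chip design.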
Visual processing is currently the AI workload attracting the most attention. Most current AI research relates to the visual processing used in autonomous driving, and the technology is also spreading to drones and robots. Robert Blake, president and CEO of Achronix, pointed out that the computational complexity of image processing is very high and the market will need 5 to 10 years to settle, but because of the need for variable-precision arithmetic, programmable logic components will play an increasingly important role.
FPGAs are well suited to matrix multiplication, and their programmability adds design flexibility. Some of the data used for decision-making will be processed locally, and some will be processed in the data center, but the split between the two will vary by application, which in turn affects AI chip and software design.
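The variable-precision arithmetic mentioned above can be sketched in software: quantize the weights and inputs to narrow integers, multiply in integer precision, then rescale. On an FPGA the bit width can be tailored per layer; the fixed 8-bit scheme below is purely illustrative, and the helper name is an assumption for this example:

```python
import numpy as np

def quantize(v, bits=8):
    """Symmetric linear quantization of a float array to signed integers.

    Returns the integer values and the scale factor needed to map them
    back to the original range. Lower bit widths cost accuracy but let
    hardware use smaller, cheaper multipliers.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(v).max() / qmax
    return np.round(v / scale).astype(np.int32), scale

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4)).astype(np.float32)
x = rng.standard_normal(4).astype(np.float32)

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# Integer matrix multiply, then rescale the result back to float.
y_approx = (Wq @ xq) * (w_scale * x_scale)
y_exact = W @ x
print(np.max(np.abs(y_approx - y_exact)))  # small quantization error
```

The integer product stays exact; all the error comes from the rounding step, which is why per-layer bit-width tuning on programmable logic can trade accuracy against power so directly.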
At present, the AI technology used in automobiles mainly detects and avoids objects, which still falls well short of true artificial intelligence. A true AI should have some degree of reasoning, such as judging how to avoid a person who is crossing the road. The former's inference is based on large-scale processing of sensor input and pre-programmed behavior, while the latter can make value judgments and weigh the possible consequences to find the best choice.
Such systems require extremely high bandwidth and built-in safety mechanisms, and data security must also be protected. Many designs built from off-the-shelf parts struggle to balance computational and programming efficiency. Google is trying to change that equation with its TPU, an application-specific chip developed for machine learning, and by opening up the TensorFlow platform to accelerate AI development.
The first generation of AI chips focused on computing power and heterogeneity, much like early IoT devices. With no certainty about how the market would evolve, the industry had to include everything, then find the bottlenecks and only afterward design for a balance of power and performance for specific functions.
As self-driving use cases multiply, the scope of AI applications will gradually expand, which is why Intel acquired Nervana in August 2016. The 2.5D deep-learning chip developed by Nervana uses a high-performance processor core and moves data over the carrier board to high-bandwidth memory, aiming to reduce deep-learning model training time by a factor of 100 compared with GPU solutions.
Quantum computing is another option for AI systems. Dario Gil, vice president of IBM Research, explained it with a card example: if 3 of 4 cards are blue and 1 is red, a classical computation guesses the red card correctly with probability 1 in 4 on the first try; using the superposition and entanglement of qubits, a quantum computer can give the correct answer every time.
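The 4-card example corresponds to Grover's search over N = 4 states, where a single Grover iteration drives the probability of measuring the marked state to exactly 1. The sketch below is a small classical simulation of the amplitudes (an illustration of the principle, not IBM's actual implementation):

```python
import numpy as np

N = 4
marked = 2  # index of the "red card" (arbitrary choice for the demo)

# Start in the uniform superposition: every state has amplitude 1/sqrt(N).
state = np.full(N, 1 / np.sqrt(N))

# Oracle step: flip the sign of the marked state's amplitude.
state[marked] *= -1

# Diffusion step: reflect every amplitude about the mean amplitude.
state = 2 * state.mean() - state

probs = state ** 2
print(probs)  # [0. 0. 1. 0.] -- the marked state is found with certainty
```

For N = 4 the amplitudes work out exactly: after the oracle the mean is 1/4, and reflecting about it sends the marked amplitude to 1 and the rest to 0, which is why one query suffices where a classical guess succeeds only 1 time in 4.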
There is no single system best suited to AI, and no single architecture can serve every market. These market segments still need to be refined, matched with available tools, and supported by ecosystems, but low power, high throughput and low latency are common requirements across AI systems. After years of relying on process scaling to improve power, performance and cost, the semiconductor industry now needs to rethink how it enters new markets.
To learn more, our website carries product specifications for FPGAs; visit ALLICDATA ELECTRONICS LIMITED for more information.