At Baidu’s Create conference for AI developers in Beijing today, the company and Intel announced a new partnership to collaborate on Intel’s new Nervana Neural Network Processor for training. As its name states, this forthcoming chip (NNP-T for short) is a processor built specifically for the task of training neural networks, with an eye toward deep learning at scale.
Baidu and Intel’s collaboration on the NNP-T involves working together on both the hardware and software sides of this custom accelerator to ensure that it’s optimized for use with Baidu’s PaddlePaddle deep learning framework. This complements work Intel has already done to ensure that PaddlePaddle performs well on its existing Intel Xeon Scalable processors. The NNP-T optimization will focus specifically on PaddlePaddle applications involving the distributed training of neural networks, as opposed to other types of AI workloads.
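For context on what that framework-level work targets, a distributed training job in PaddlePaddle is typically written as ordinary single-process training code wrapped with the framework’s data-parallel utilities. The sketch below is a minimal, illustrative example using PaddlePaddle’s 2.x dynamic-graph API; the toy model, data, and hyperparameters are assumptions for illustration and are not part of the announcement, and any NNP-T-specific support would sit below this framework-level API.

```python
# Minimal data-parallel training sketch in PaddlePaddle (2.x dygraph API).
# The model, data, and hyperparameters are illustrative placeholders.
import paddle
import paddle.nn as nn
import paddle.distributed as dist

def train():
    dist.init_parallel_env()                       # set up the distributed context
    model = paddle.DataParallel(nn.Linear(10, 1))  # wrap the model for data parallelism
    opt = paddle.optimizer.SGD(learning_rate=0.01,
                               parameters=model.parameters())

    for step in range(100):
        x = paddle.randn([32, 10])                 # toy batch; real jobs read sharded data
        y = paddle.randn([32, 1])
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()                            # gradients are synchronized across workers
        opt.step()
        opt.clear_grad()

if __name__ == "__main__":
    train()
```

A script like this would typically be launched across multiple devices with PaddlePaddle’s launcher, e.g. `python -m paddle.distributed.launch train.py`.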
Intel’s Nervana Neural Network Processor lineup is named after Nervana, the company Intel acquired in 2016, and is developed by the Intel AI group led by former Nervana CEO Naveen Rao. The NNP-T is tailor-made for training AI (ingesting data sets and learning how to do the job it’s supposed to do), while the NNP-I (announced at CES this year) is designed specifically for inference (taking the results of the learning process and putting them into action, or actually doing the job it’s supposed to do).
The NNP made its debut in 2017, and the first-generation chip is currently being used as a software development prototype and demo hardware for partners, while the new so-called ‘Spring Crest’ generation is targeting production availability this year.
from TechCrunch https://tcrn.ch/2J6WUYT