Intel Hardware Speeds AI Development, Deployment and Performance

Intel announced the next wave of AI with updates on new products designed to accelerate AI system development and deployment from cloud to edge. The company demonstrated its Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000), Intel’s first purpose-built ASICs for complex deep learning, offering scale and efficiency for cloud and data center customers. It also revealed its next-generation Movidius Vision Processing Unit (VPU) for edge media, computer vision and inference applications.

“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius VPUs is necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” said Naveen Rao, Intel corporate vice president and general manager of the AI Products Group.

These products further strengthen Intel’s portfolio of AI solutions, which the company said is expected to generate more than $3.5 billion in revenue in 2019. The broadest in the industry in both breadth and depth, Intel’s AI portfolio helps customers develop and deploy AI models at any scale, from massive clouds to tiny edge devices and everything in between.

Now in production and being delivered to customers, the new Nervana NNPs are part of a systems-level AI approach offering a full software stack, developed with open components and deep learning framework integration for maximum use. The NNP-T strikes a balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers, while the NNP-I is power- and budget-efficient. Both products were developed for the AI processing needs of leading-edge AI customers such as Baidu and Facebook.

Additionally, Intel’s next-generation Movidius VPU, scheduled to be available in the first half of 2020, incorporates architectural advances that are expected to deliver more than 10 times the inference performance of the previous generation, with up to six times the power efficiency of competing processors. Intel also announced its new DevCloud for the Edge, which, along with the Intel Distribution of OpenVINO toolkit, addresses a key pain point for developers by letting them try, prototype and test AI solutions on a broad range of Intel processors before buying hardware.