
Artificial Intelligence is moving to the edge as organizations apply trained models to new datasets. That’s good. But your customers’ data centers probably aren’t ready. That’s not so good.

While most AI still happens in data centers or the cloud, that's changing as Internet of Things devices proliferate at the edge. Also shifting to the edge is AI inference, the stage where trained deep-learning models are applied to new, real-world data.

This shift is likely to require your customers to change both their data-center infrastructures and the way they handle IoT data. For starters, most data centers will need a lot more flexibility. Only then will they be able to meet the new demand for compute, storage and connectivity that smart edge devices create.

Right now, most data centers simply aren’t ready. Try moving all that memory, power and data out to the edge, and you’ll likely create bottlenecks. These can lower utilization rates while also raising costs – the opposite of what most data centers want and need.

AI accelerated

So what’s the solution? Well, one way your data-center customers can prepare for AI inference at scale is with Intel’s latest server CPUs.

More specifically, with servers powered by 2nd Generation Intel Xeon Scalable processors featuring Intel's new Deep Learning Boost technology. These processors offer a common platform for AI, with high throughput for both training and inference and no need for add-on graphics processing units (GPUs).

How high, you ask? According to Intel, up to 2x faster inference and deep-learning performance improved by as much as 30x, both compared with the previous generation.

Further, Intel Deep Learning Boost gets even more oomph from the first of what Intel says will be several embedded accelerators: Vector Neural Network Instructions, or VNNI for short (pronounced like "Vinnie"). This instruction-set extension speeds the dense, low-precision math at the heart of deep learning by doing in a single instruction what previously took three.
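To make that concrete, here is a minimal sketch (not Intel's own code) of the int8 multiply-accumulate kernel that DL Boost targets, written with AVX-512 intrinsics. It assumes a CPU and compiler with AVX-512 VNNI support, such as a 2nd Gen Xeon Scalable processor, and shows the legacy three-instruction sequence next to the single fused VNNI instruction:

    // Build with e.g. -mavx512f -mavx512bw -mavx512vnni (or -march=cascadelake).
    #include <immintrin.h>

    // Legacy path: three instructions per 64-byte block of int8 data.
    __m512i dot_accumulate_legacy(__m512i acc, __m512i a_u8, __m512i b_s8) {
        const __m512i ones16 = _mm512_set1_epi16(1);
        // VPMADDUBSW: multiply unsigned bytes of a by signed bytes of b,
        // adding adjacent pairs into saturated signed 16-bit results.
        __m512i prod16 = _mm512_maddubs_epi16(a_u8, b_s8);
        // VPMADDWD: widen the 16-bit pairs into signed 32-bit sums.
        __m512i prod32 = _mm512_madd_epi16(prod16, ones16);
        // VPADDD: add into the 32-bit accumulators.
        return _mm512_add_epi32(acc, prod32);
    }

    // VNNI path: the same u8 x s8 multiply-accumulate in one instruction.
    __m512i dot_accumulate_vnni(__m512i acc, __m512i a_u8, __m512i b_s8) {
        // VPDPBUSD accumulates directly into 32 bits, which also avoids
        // the intermediate 16-bit saturation of the legacy sequence.
        return _mm512_dpbusd_epi32(acc, a_u8, b_s8);
    }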

Memory & storage matter

More help is available from Intel Optane DC persistent memory, which is supported on 2nd Gen Intel Xeon Scalable processors. It lets customers keep large working datasets in memory, close to the processor, which can speed both deep-learning training and inference.
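In Memory Mode the persistent memory simply behaves as a larger pool of system memory, with no code changes. If customers instead want to manage it explicitly in App Direct mode, one option is allocating from it with the open-source memkind library. Below is a minimal sketch under that assumption; the /mnt/pmem mount point is a hypothetical example:

    #include <memkind.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        memkind_t pmem_kind = nullptr;

        // Create an allocation pool backed by a DAX-mounted PMem filesystem.
        // A max_size of 0 lets the pool grow up to the filesystem's capacity.
        if (memkind_create_pmem("/mnt/pmem", 0, &pmem_kind) != MEMKIND_SUCCESS) {
            std::fprintf(stderr, "PMem pool creation failed\n");
            return 1;
        }

        // Allocate a large working buffer from persistent memory, not DRAM.
        const size_t buf_size = 1ull << 30; // 1 GiB
        void *buf = memkind_malloc(pmem_kind, buf_size);
        if (buf != nullptr) {
            std::memset(buf, 0, buf_size);
            memkind_free(pmem_kind, buf);
        }

        memkind_destroy_kind(pmem_kind);
        return 0;
    }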

To clear storage bottlenecks, your customers should also consider Intel Optane Solid State Drives (SSDs). These drives can ease AI training and inference by letting customers deploy larger datasets more affordably.

To further extend AI to the edge, Intel has a couple of other tools up its sleeve. Among them is the Intel Movidius Vision Processing Unit (VPU), which enables deep neural-network inferencing on cameras, drones and similar low-power devices.

Another is the Intel Distribution of OpenVINO toolkit, software for deploying computer-vision inference applications across platforms ranging from IoT devices to the cloud.
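As an illustration of how deployment works, here is a minimal sketch of running inference with the OpenVINO C++ runtime, assuming a current OpenVINO release and a model already converted to the toolkit's IR format. The model file name and the "CPU" device string are placeholders; the same code can target other Intel hardware by changing the device string.

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;

        // Load the IR model and compile it for the target device.
        std::shared_ptr<ov::Model> model = core.read_model("model.xml");
        ov::CompiledModel compiled = core.compile_model(model, "CPU");

        // Create a request, bind input data, and run inference.
        ov::InferRequest request = compiled.create_infer_request();
        ov::Tensor input = request.get_input_tensor();
        // ... fill input.data<float>() with preprocessed image data ...
        request.infer();

        ov::Tensor output = request.get_output_tensor();
        // ... post-process output.data<float>() ...
        return 0;
    }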

Training gains

Most likely, your AI-using customers are today doing at least as much training as they are inference. But that will change, and probably soon. With much of their AI training now completed, organizations are starting to launch applications and services that apply their models to new datasets.

By moving deep-learning inference to the edge, your customers can enjoy another benefit: the ability to monetize their AI efforts and start delivering a return on investment. Given today’s budget pressure, that’s a big deal.

It’s also an outcome you should be part of. So get started today. Tell your customers how the latest Intel tech can help them get ready for AI at scale.

Now do more:

> Get Intel channel sales resources for selling AI

> Take the Intel training: Empower AI Transformations with Intel Select Solutions on AI Inferencing

> Check out a short (10-min.) Channel Chat podcast with Milind Pandit of Intel: Help Customers Make AI a Reality with DL Boost Technology
