
In the Zone

Get smarter, faster with AI accelerators in Intel Xeon Scalable Processors


by Peter Krass on 08/30/2022
Blog Category: cloud-and-data-centers

Chances are good your data-center customers are getting involved with AI. It’s a big, fast-growing market.

Last year AI sales worldwide topped $93 billion, according to Grand View Research. Looking ahead, Grand View expects AI sales to grow by nearly 40% a year through 2030.

Chances are equally good that you can help those customers with servers based on the latest 3rd generation Intel Xeon Scalable Processors.

These are the only CPUs for the data center that feature built-in, hardware-based AI accelerators for deep learning and other compute-intensive workloads.

To date, Intel has shipped more than 50 million Intel Xeon Scalable processors, making this the world’s most broadly deployed data-center CPU.

Intel Deep Learning Boost

Intel Deep Learning Boost (Intel DL Boost) acceleration is built in, giving your customers the flexibility to run complex AI workloads on the same hardware they use for existing workloads.

> With int8 instructions: Vector Neural Network Instructions (VNNI) accelerate inference workloads by maximizing compute resources, improving cache utilization and reducing potential bandwidth bottlenecks. This feature is available on all 3rd Gen Intel Xeon Scalable processors. (A code sketch of the int8 path follows this list.)

> With bfloat16: The industry’s first x86 support for Brain Floating Point 16-bit (bfloat16) brings enhanced AI inference and training performance. Your customers can also use optimized libraries and frameworks including oneAPI, OpenVINO and TensorFlow. This feature is available on select 3rd Gen Intel Xeon Scalable processors only; a bfloat16 sketch appears at the end of this section.
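
To make the int8 path concrete, here is a minimal sketch of a dot product built on the AVX-512 VNNI instruction VPDPBUSD, exposed through the `_mm512_dpbusd_epi32` intrinsic. The function name `dot_u8s8` and the test values are illustrative, not from Intel’s materials, and the code assumes a compiler and CPU with AVX512F and AVX512_VNNI support.

```c
/* Minimal sketch: int8 dot product with AVX-512 VNNI (VPDPBUSD).
 * Assumes AVX512F + AVX512_VNNI hardware (e.g., 3rd Gen Xeon Scalable).
 * Build (illustrative): gcc -O2 -mavx512f -mavx512vnni vnni_dot.c */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product of n unsigned-int8 activations and signed-int8 weights.
 * For simplicity, n must be a multiple of 64 (one 512-bit vector). */
static int32_t dot_u8s8(const uint8_t *a, const int8_t *w, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512(a + i);  /* 64 unsigned 8-bit values */
        __m512i vw = _mm512_loadu_si512(w + i);  /* 64 signed 8-bit values   */
        /* One VPDPBUSD: 64 u8*s8 multiplies, accumulated into 16 int32 lanes */
        acc = _mm512_dpbusd_epi32(acc, va, vw);
    }
    return _mm512_reduce_add_epi32(acc);         /* horizontal sum of 16 lanes */
}

int main(void)
{
    uint8_t a[64]; int8_t w[64];
    for (int i = 0; i < 64; i++) { a[i] = 2; w[i] = 3; }
    printf("%d\n", dot_u8s8(a, w, 64));          /* expect 64 * 2 * 3 = 384 */
    return 0;
}
```

Each VPDPBUSD collapses the three-instruction int8 multiply-accumulate sequence used on earlier Xeon generations into a single instruction, which is where the claim about maximizing compute resources comes from.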

What kind of results can your customers expect? Intel’s own tests show up to a 1.74x improvement in language-processing inference.

One Intel customer, CERN openlab, reports that with Intel Deep Learning Boost and oneAPI-optimized software, its Monte Carlo simulations gained a 1.8x performance improvement, yet with no loss of accuracy.
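
The bfloat16 feature called out above can be exercised in much the same way on parts that support AVX-512 BF16. Below is a minimal sketch, assuming AVX512_BF16 hardware and compiler support; the helper name `dot_bf16` is illustrative. FP32 inputs are rounded to bfloat16 with `_mm512_cvtne2ps_pbh`, then multiplied and accumulated back into FP32 with `_mm512_dpbf16_ps`.

```c
/* Minimal sketch: bfloat16 dot product with AVX-512 BF16 intrinsics.
 * Assumes AVX512_BF16 support (the "select" 3rd Gen parts noted above).
 * Build (illustrative): gcc -O2 -mavx512f -mavx512bf16 bf16_dot.c */
#include <immintrin.h>
#include <stddef.h>

/* n must be a multiple of 32 (32 bf16 elements per 512-bit vector). */
static float dot_bf16(const float *x, const float *y, size_t n)
{
    __m512 acc = _mm512_setzero_ps();
    for (size_t i = 0; i < n; i += 32) {
        /* Round 2 x 16 FP32 inputs to 32 bfloat16 values per operand */
        __m512bh vx = _mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
                                          _mm512_loadu_ps(x + i));
        __m512bh vy = _mm512_cvtne2ps_pbh(_mm512_loadu_ps(y + i + 16),
                                          _mm512_loadu_ps(y + i));
        /* Multiply bf16 pairs; accumulate in full FP32 precision */
        acc = _mm512_dpbf16_ps(acc, vx, vy);
    }
    return _mm512_reduce_add_ps(acc);            /* horizontal sum */
}
```

Accumulating in FP32 is how bfloat16 halves memory footprint relative to FP32 while limiting precision loss.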

Intel AVX-512

Intel Advanced Vector Extensions 512 (Intel AVX-512) is a set of vector-processing instructions, operating on 512-bit registers, that speed compute-intensive workloads. These can include scientific simulations, financial analytics and 3D modeling.

Intel says its 3rd Gen Intel Xeon Scalable processors with Intel AVX-512, when compared with competing processors, deliver a 50% performance gain in financial-services Monte Carlo simulations.

It does this with its ultrawide registers: applications can pack 32 double-precision and 64 single-precision floating-point operations per clock cycle within Intel AVX-512’s 512-bit vectors.
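
To see where those per-clock figures come from: one 512-bit fused multiply-add handles 8 doubles, counting as 16 floating-point operations, and cores with two 512-bit FMA units can issue two such instructions per cycle, for 32 double-precision operations (64 in single precision). Here is a minimal sketch, assuming AVX512F support and using the illustrative function name `axpy512`:

```c
/* Minimal sketch: y = a*x + y over 512-bit vectors of doubles.
 * Each _mm512_fmadd_pd performs 8 multiplies + 8 adds = 16 FLOPs;
 * two 512-bit FMA units give the 32 double-precision ops/cycle figure.
 * Assumes AVX512F; build (illustrative): gcc -O2 -mavx512f axpy.c */
#include <immintrin.h>
#include <stddef.h>

/* n must be a multiple of 8 (8 doubles per 512-bit vector). */
static void axpy512(double a, const double *x, double *y, size_t n)
{
    __m512d va = _mm512_set1_pd(a);
    for (size_t i = 0; i < n; i += 8) {
        __m512d vx = _mm512_loadu_pd(x + i);
        __m512d vy = _mm512_loadu_pd(y + i);
        /* One instruction: 8 fused multiply-adds in double precision */
        _mm512_storeu_pd(y + i, _mm512_fmadd_pd(va, vx, vy));
    }
}
```

AXPY-style kernels like this sit in the inner loops of many simulation and linear-algebra workloads, including the Monte Carlo runs cited above.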

So if your customers are getting involved with AI, tell them about the built-in, hardware-based AI accelerators of the 3rd gen Intel Xeon Scalable processors.

Get smarter, faster:

> Explore the 3rd generation Intel Xeon Scalable processors

> Skill up on AI with the Intel AI Fundamentals competency

> Find AI partners and solutions in the Intel Solutions Marketplace

> Read the press release: Intel accelerates process and packaging innovations

 
