
In the Zone

Behold the IPU, a new processor for offloading server infrastructure


by Peter Krass on 07/12/2022
Blog Category: cloud-and-data-centers

There’s a new processing unit in town. You already know the CPU (central processing unit) and the GPU (graphics processing unit). Now get to know the IPU – that’s I for infrastructure.

An IPU is a programmable networking device designed to help improve security, reduce overhead and free up CPU performance by better balancing compute and storage. The target audience for IPUs includes cloud and communications providers, and large enterprises.      

In theory, at least, the IPU solves a common server issue: CPUs get so bogged down with system-level overhead and infrastructure processing that they can’t run core applications as quickly as they should.

Those overhead tasks include networking, storage and security. In other words, they’re important.

Yet by Intel’s estimate, a server processor can spend up to 40% of its cycles on infrastructure. And that’s 40% of the CPU’s capacity that it cannot dedicate to core applications.

The solution? The IPU’s big idea is that those overhead and infrastructure tasks don’t have to be done by the server’s CPU. Instead, they can be done by another processor, one specially designed for them. That, in turn, frees up the CPU to do the work it was designed for.
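The arithmetic behind that pitch is straightforward. Here’s a back-of-the-envelope sketch of the upside, taking Intel’s up-to-40% estimate at face value (real workloads will vary):

```python
# Back-of-the-envelope: how much CPU capacity offloading could reclaim.
# Assumes Intel's "up to 40% of cycles on infrastructure" estimate.
overhead = 0.40
app_share_before = 1.0 - overhead   # 60% of cycles left for applications
app_share_after = 1.0               # infrastructure work moved to the IPU
speedup = app_share_after / app_share_before
print(f"Up to {speedup:.2f}x more cycles for core applications")
# Up to 1.67x more cycles for core applications
```

In other words, at the high end of Intel’s estimate, offloading infrastructure work could give applications roughly two-thirds more CPU capacity without touching the server hardware itself.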

4 tasks for the IPU

That’s the theory. Intel introduced its vision for the IPU a little over a year ago. At the time, Intel said a well-designed IPU should help with 4 distinct tasks:

> Accelerate infrastructure functions: These include storage and network virtualization, as well as security.

> Free up CPU cores: By shifting storage and network virtualization functions from CPU to IPU.

> Improve data-center utilization: By allowing for flexible workload placement.

> Speed customizing: By enabling organizations to quickly customize infrastructure function deployments.

Architecturally speaking, the IPU provides four key functions with three main subcomponents, as shown in this Intel illustration:

[Intel illustration: IPU architecture]

First out of the gate

More recently, Intel has released its first ASIC IPU. Called Mount Evans, it’s the result of a collaboration between Intel and Google.

Mount Evans was designed for a specific role: It handles hardware offloads and uses Arm Neoverse N1 cores for workloads that don’t require an Intel Xeon processor, yet are still important.

This IPU handles transfers of up to 200 million packets per second. It also uses crypto tech to keep all that data secure.
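To put 200 million packets per second in perspective, here’s the per-packet time budget that rate implies (a rough calculation based on the quoted figure, not an Intel spec):

```python
# Per-packet time budget at Mount Evans's quoted packet rate.
pps = 200_000_000                 # 200 million packets per second
ns_per_packet = 1e9 / pps         # nanoseconds available per packet
print(f"{ns_per_packet:.0f} ns per packet")  # prints "5 ns per packet"
```

At roughly 5 nanoseconds per packet, there’s little headroom for general-purpose software in the data path – one reason this kind of work lands in dedicated ASIC hardware rather than on CPU cores.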

Though Mount Evans was designed for Google, Intel says other companies are eager to give it a try. And to help them, there’s now an open-source Infrastructure Programmer Development Kit (IPDK).

What’s coming next

What’s next for Intel IPUs? Earlier this year Intel published an IPU road map. Here are some of its highlights:

> Mount Morgan: An ASIC IPU scheduled for the 2023–2024 timeframe, it will be rated at 400Gb (gigabits per second).

> Hot Springs Canyon: Intel’s next-gen FPGA-based IPU platform, it’s expected to start shipping in 2023. Also rated at 400Gb.

> Next-Gen: No code name yet, but this is an 800Gb IPU expected to ship in 2025 or 2026.

“This IPU is flexible enough to support many more use cases,” says Brad Burres, an Intel Fellow who works with the company’s Ethernet products group. “We’re at the tip of the iceberg.”

Learn more:

> Intel Unveils IPU Road Map (fact sheet)

> Nailing the IPU (Intel blog post)

> Intel Unveils IPU (Intel press release)
