
In the Zone

Ethernet for the masses: Are you ready for the fourth era?


by Kevin Jacoby on 05/10/2022
Blog Category: cloud-and-data-centers

It’s been 50 years since a hand-drawn, back-of-the-napkin sketch led to Ethernet. And it’s been 40 years since the introduction of the first Ethernet standard, brought to you by Intel, Xerox and Digital Equipment Corp. (DEC) and supported by a cast of thousands of engineers.

Since then, the evolution of Ethernet technology has enabled the kind of communications infrastructure early sci-fi writers could only dream about.

Yet today, many of us take the lightning-fast responsiveness of internet searches and apps for granted. If we have to wait 5 seconds for a web page to load, we move on to other things, shaking our fists at the technology gods and cursing our virtual solitary confinement.

But high-speed communications are not a birthright. In fact, they’re a hard-won achievement. Along the way there have been many new inventions to enable Ethernet’s multiple speed gains.

You can view Ethernet’s history in 3 eras, says Gary Gumanow, Intel sales enablement manager for Ethernet, who’s witnessed all three. And now we could be entering a fourth. Let’s take a look.

Era 1 (1980 - 2005): Faster yet also smaller

As the 1980s dawned, engineers from Intel, Xerox and DEC came together to author and standardize an IEEE specification for a communications protocol. What they couldn’t know then was that Ethernet would empower the tech world for decades to come.

Their early work was based on ideals of interoperability, openness and the need for both higher speeds and less delay (aka low latency).

While fast data transfer was just one of Ethernet’s many stated goals, it’s mainly how the technology’s first era has been defined. Early engineers worked tirelessly to achieve transfer rates up to and beyond 2.9 megabits per second (Mbps).

They achieved that goal in 1982. That’s the year Intel shipped the world’s first high-volume 10 Mbps Ethernet controller.

Early Ethernet products from Intel

This herculean effort would kick off many years of R&D. Intel provided the world’s first dual-speed 10/100 Mbps network adapter in 1994. Then the first single-chip 10/100 Mbps Ethernet controller in 1997. And then the first single-chip 10/100/1000 Mbps controller in 2001.

Intel’s 82557 10/100 Ethernet Adapter

Why was a single-chip network controller so revolutionary? Mainly because it ushered in LAN on Motherboard (LOM). This made Ethernet the ubiquitous connection for desktops, servers and embedded devices. Ethernet for everyone!

Era 2 (2005 - 2015): Server virtualization

The path to Ethernet’s second era, virtualization, was forged in 1999 with the standardization of Gigabit Ethernet (aka IEEE 802.3ab). The first compatible products came to market just 2 years later, in 2001.

In a way, Gigabit Ethernet was a solution looking for a problem. That’s because servers running a single application could rarely push a full gigabit per second down the wire. Gigabit Ethernet simply provided more bandwidth than data centers needed.

Ah, but then came virtualization. This meant multiple applications could be consolidated on a single physical server, each application running with its own virtual operating environment. As Intel’s Gumanow said back then, “Every virtual machine needs a Gigabit Ethernet connection.”

Each VM was given its own dedicated, physical 1 Gigabit Ethernet (1GbE) connection. This finally enabled data-center architects to begin using more of their available server resources. Now they could consolidate physical servers, increasing efficiency and creating greater IT scale.

Several years later, we got another 10X improvement with 10 Gigabit Ethernet (10GbE). Once again, the industry had delivered more bandwidth than we knew what to do with.

But it was also perfect timing for network and server architects. 10GbE solved the problem of sprawling network-adapter ports, cables and switch ports in servers and data-center racks.

But how to virtualize all those network adapters in the server? The solution came from Intel. Working through the PCI-SIG, the company introduced Single Root I/O Virtualization (SR-IOV).

Most virtualization hypervisors could emulate a network adapter in software, but that emulation introduced latency. By contrast, SR-IOV splits a physical adapter into multiple “virtual functions” that run on the adapter itself rather than in software on the host’s processor cores. The result: lower latency, plus additional gains in throughput and server efficiency.
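
If you’re curious what that looks like in practice, here’s a minimal sketch, assuming a Linux host, root access and an SR-IOV-capable adapter. The kernel exposes the virtual-function count through sysfs, so carving one physical port into several VFs comes down to writing a number to a file. The interface name and VF count below are placeholders.

```python
# Minimal sketch: enable SR-IOV virtual functions on a Linux host.
# Assumptions: root privileges and an SR-IOV-capable NIC; "eth0" is a
# hypothetical interface name.
from pathlib import Path

def enable_sriov_vfs(interface: str, num_vfs: int) -> None:
    """Carve an SR-IOV-capable NIC into num_vfs virtual functions.

    Each virtual function shows up as its own PCIe device, so a hypervisor
    can hand it directly to a VM instead of emulating a NIC in software.
    """
    device = Path(f"/sys/class/net/{interface}/device")
    max_vfs = int((device / "sriov_totalvfs").read_text())
    if num_vfs > max_vfs:
        raise ValueError(f"{interface} supports at most {max_vfs} VFs")

    numvfs = device / "sriov_numvfs"
    numvfs.write_text("0")            # reset before changing the VF count
    numvfs.write_text(str(num_vfs))   # create the virtual functions

if __name__ == "__main__":
    enable_sriov_vfs("eth0", 4)  # hypothetical interface name and VF count
```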

Data-center managers were also helped by improvements in network virtualization. One example is Intel’s Virtual Machine Device Queue (VMDQ). This technology aligns dedicated transmit and receive queues in the adapter hardware with a specific VM’s network interface, so packet sorting happens on the NIC rather than in hypervisor software.
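
To make the idea of hardware queues a little more concrete, here’s a small, illustrative sketch, assuming a Linux host with ethtool installed; the interface name is a placeholder. It shows how many transmit/receive queue pairs a NIC exposes and how to resize that pool. These are the same hardware queues that technologies such as VMDQ parcel out among VMs; the VMDQ-specific knobs vary by driver, so treat this as a general illustration rather than a how-to.

```python
# Minimal sketch: inspect and resize a NIC's hardware queue ("channel") count.
# Assumptions: Linux host, root privileges, ethtool installed; "eth0" is a
# hypothetical interface name.
import subprocess

def show_channels(interface: str) -> str:
    """Return the NIC's maximum and current channel (queue) counts."""
    result = subprocess.run(
        ["ethtool", "--show-channels", interface],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def set_combined_channels(interface: str, count: int) -> None:
    """Ask the driver for `count` combined TX/RX queue pairs."""
    subprocess.run(
        ["ethtool", "--set-channels", interface, "combined", str(count)],
        check=True,
    )

if __name__ == "__main__":
    print(show_channels("eth0"))      # hypothetical interface name
    set_combined_channels("eth0", 8)  # e.g. one queue pair per busy VM or core
```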

These advances in both hardware and software have made virtualization the scale engine of the cloud. Today, VMs host many customer workloads scaled across a single server platform.

Networking also needed a way to keep data and traffic flows secure and separate from one another. From Ethernet’s point of view, the cloud is simply a vast array of remote data centers that collect, process and disseminate data for a wide variety of applications and users. What’s remarkable is the quantity of that data: nearly 100 zettabytes so far.

Data-center virtualization, enabled by Ethernet technology, also gave us access to fast, reliable email and vast databases. It still powers many modern conveniences used today.

Today’s data centers get more done with virtualization

Another benefit of virtualization: It can reduce a data center’s carbon footprint. By enabling multiple VMs on a single physical server, virtualization has helped IT managers use electric power more efficiently.

The resulting decrease in power consumption for processing, cooling and management may not be the silver bullet we need to reverse climate change. But at least it’s a step in the right direction.

Era 3 (2015 - Present): Optimization

Improving how a technology works may not be as sexy as inventing it in the first place. But it’s no less valuable.

Today, Intel and others are considering how best to upgrade Ethernet by improving application responsiveness and reducing latency.

Bandwidth continues to improve as well: first 40GbE, then 25GbE and 100GbE, and soon 200GbE and beyond.

“A chain is only as strong as its weakest link,” Gumanow reminds us. “As apps scale, their responsiveness becomes less predictable. Application Device Queues provides the mechanisms to scale the pipeline between the network, compute and storage.”
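
On Linux, the application-facing half of that idea often looks something like the sketch below: the app tags its socket with a priority value, and separately configured traffic-classification rules (for example, via tc) can then steer that tagged traffic onto a dedicated set of hardware queues. This is a hedged illustration of the general pattern, not Intel’s official ADQ recipe; the port number and priority value are placeholders.

```python
# Minimal sketch: tag a server socket so queue-steering rules can pin its
# traffic to dedicated hardware queues. Assumptions: Linux host, plus
# separately configured tc/driver rules that map this priority to a queue
# set (not shown). The port and priority values are placeholders.
import socket

SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)  # 12 is the Linux constant

def make_prioritized_listener(port: int, priority: int) -> socket.socket:
    """Create a TCP listener whose traffic carries a steering priority."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, priority)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    return srv

if __name__ == "__main__":
    listener = make_prioritized_listener(8080, 3)  # hypothetical values
    print("Listening; tagged traffic can now be steered to its own queues.")
```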

Optimization is the process of reclaiming that predictability. It’s a big challenge given the titanic scale on which Ethernet-based systems now operate.

Yet the resulting systems could catalyze advances in mission-critical applications, such as those used in healthcare and aerospace, where failure simply isn’t an option.

Era 4: Will there be one?

While the future is uncertain, it’s safe to say Ethernet tech will be with us for some time to come.

Sure, each day brings the possibility of technological advances that offer faster speeds and higher reliability. In fact, many have already come and gone. But it would still take many years to replace the infrastructure that enables Ethernet to push data to even the farthest corners of the planet.

Ethernet is the unseen thread in the fabric of our digital lifestyles. It’s as vital to us now as it’s ever been. Ethernet will still be vital when we wake up tomorrow morning and log on to our devices to check the weather, start a Zoom meeting and navigate to a new destination.

So will the Ethernet have a fourth era? That may be the wrong question. Maybe the right one is: How do we know we’re not already in it?

 
