Connecting a server to a networking switch may sound simple enough. But in reality, it can be pretty complicated.
For one, a server has only so many PCIe slots to go around. Balancing bandwidth with other functionalities quickly becomes a zero-sum game. One function’s gain is another’s loss.
Cabling can be problematic, too. When your customer has multiple cables running from a server to its switch, things can easily become a tangled mess. Accidentally disconnect the wrong cable, and oops, you just took down a vital workload.
Yet another issue is infrastructure validation. That is, does the speed of the switch you’re plugging into match what’s on the other end of the cable? If not, then Houston, we have a problem.
The networking wizards at Intel have come up with a solution. It’s called the Ethernet Port Configuration Tool (EPCT).
Okay, no awards for the most creative title. But EPCT, available on 100Gb Intel Ethernet 800 Series Network Adapters, is one clever piece of engineering. It essentially turns a network adapter into a Swiss Army knife of networking, able to take on whatever infrastructure your customer runs.
EPCT does this by helping IT managers configure both the port speed and the number of ports on demand. More specifically, a network adapter with EPCT can support the functionality of up to 8 network adapters, and a combined maximum throughput of 100 Gbps, in a single PCIe 4.0 slot. That’s a powerful solution to your server real-estate shortage; it can also help clear up that tangle of cables.
Once EPCT is rolling, it lets you change the adapter’s configuration to match whatever Ethernet infrastructure you’re plugging into, whether that’s 10, 25, 50 or 100GbE. This on-demand function reduces the need for network-adapter validation and simplifies deployments.
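The selection logic behind that is simple enough to sketch in a few lines. The helper below is a hypothetical illustration only (the function and mode names are ours, not Intel’s actual EPCT interface); it maps the link speed of the infrastructure you’re plugging into to the matching port configuration:

```python
# Hypothetical sketch of EPCT-style mode selection -- not Intel's actual tool.
# Maps the per-port speed of the attached switch (in GbE) to the adapter
# configurations described in this article.

PORT_MODES = {
    10: "8x10GbE",    # eight 10GbE ports
    25: "4x25GbE",    # four 25GbE ports
    50: "2x50GbE",    # two 50GbE ports
    100: "2x100GbE",  # two 100GbE ports (bandwidth shared)
}

def select_mode(switch_speed_gbe: int) -> str:
    """Return the adapter configuration matching the switch's port speed."""
    try:
        return PORT_MODES[switch_speed_gbe]
    except KeyError:
        raise ValueError(f"No mode for {switch_speed_gbe}GbE infrastructure")

print(select_mode(25))  # → 4x25GbE
```

In other words: one adapter, four personalities, picked to match whatever is on the other end of the cable.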
Also, EPCT lets you set the connection between the server and a network for both fault tolerance and link aggregation.
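The payoff of that choice is easy to quantify. The snippet below is illustrative arithmetic only, not a bonding driver or API: with two 25GbE ports in active/active link aggregation, the server sees 50 Gbps in normal operation and still keeps 25 Gbps if one link fails.

```python
# Illustrative arithmetic for link aggregation vs. failover -- not a driver API.

def effective_bandwidth(port_speeds_gb, mode, failed=0):
    """Aggregate bandwidth of a bonded group of ports.

    port_speeds_gb: per-port speeds in Gbps
    mode: "active/active" (traffic spread across all live links) or
          "active/backup" (one live link carries traffic at a time)
    failed: number of ports currently down
    """
    live = sorted(port_speeds_gb, reverse=True)[: len(port_speeds_gb) - failed]
    if not live:
        return 0  # every link is down
    return sum(live) if mode == "active/active" else live[0]

# Two 25GbE ports, aggregated:
print(effective_bandwidth([25, 25], "active/active"))            # → 50
print(effective_bandwidth([25, 25], "active/active", failed=1))  # → 25
```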
Under the hood
EPCT does most of its magic in firmware. Behind the scenes, the tool configures the number and speed of the MACs (media access controllers) operating on each network-adapter chip. Each chip has up to 8 MACs, and they can operate at different speeds.
So how can you actually configure this thing? Let us count the ways:
> 8 ports of 10GbE: This mode uses all 8 MACs and SerDes lanes to consolidate what would otherwise be 8 physical 10GbE adapters onto 1 adapter. It maintains server and switch fault tolerance, makes cabling issues easier to troubleshoot, and provides better airflow in the rack. With this setup, the switch and your server OS will both see 8 physical ports of 10GbE.
> 4 ports of 25GbE: Basically, a quad-port adapter out of 1 slot. If your customer wants more fault tolerance at the server, a 2 x dual-port 25GbE configuration is an excellent option. The server maintains active/active link aggregation and fault tolerance with the switches.
> 2 ports of 50GbE: The adapter is configured to deliver 50Gb on each port, providing a total of 100Gb from the server while maintaining server and switch fault tolerance.
> 2 ports of 100GbE: This one is all about fault tolerance. Both ports are active, and the total bandwidth is 100Gb, split across the two ports as 75/25, 50/50 or 25/75Gb, depending on the traffic flows from the network. It’s supported by using either a pair of QSFP+ to SFP+ splitter cables … or a pair of QSFP+ straight-through cables.
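All four configurations fit the same budget: at most 8 MACs per chip and a combined throughput of 100 Gbps out of the slot. A quick sanity check makes that concrete (the mode names here are just labels for this article, not EPCT’s own identifiers):

```python
# Sanity-check the four configurations against the adapter's limits stated
# in the article: up to 8 MACs per chip, 100 Gbps combined throughput.

CONFIGS = {
    "8x10GbE":  (8, 10),   # (ports, per-port speed in Gbps)
    "4x25GbE":  (4, 25),
    "2x50GbE":  (2, 50),
    "2x100GbE": (2, 100),  # both ports active, bandwidth shared
}

for name, (ports, speed) in CONFIGS.items():
    assert ports <= 8, f"{name}: only 8 MACs per chip"
    aggregate = min(ports * speed, 100)  # combined throughput caps at 100 Gbps
    print(f"{name}: {ports} ports, {aggregate} Gbps aggregate")
```

Note the asymmetry in the first mode: 8 ports of 10GbE tops out at 80 Gbps aggregate, while the other three modes all reach the full 100 Gbps.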