Your customers’ business interests intersect with data-center design every day. Whether they’re browsing Facebook, deploying cloud-based applications, or contracting data analytics services, all the information they want and need comes from an enterprise data center.
Our reliance on these data centers is one good reason why data centers need to become cheaper, faster, more efficient and more reliable. Let’s look at 3 ways they’re doing just that.
1) High-density design and cooling
High-density design is about fitting more servers into a given space. Though that may sound elementary, it’s easier said than done. More servers create more heat; they also consume more energy. When heat production and power consumption hit critical mass, the entire system begins to break down.
For a company like Intel — currently running some 145,000 servers in 60 locations — breaking down is not an option. So when Intel deployed its new high-density 60U racks, the company also upgraded its cooling system.
Fan-generated air pressure pushes waste heat from the new, larger racks up through a cooling unit at the top of the 60U enclosure; from there, the cooled air moves through a series of diffusers and back into the room. Combined with greener, more efficient water-cooling loops, this design has helped Intel lower its power usage effectiveness (PUE) to 1.06 — on par with Google and Facebook.
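PUE is simply total facility power divided by the power that reaches the IT equipment, so 1.0 is the theoretical ideal and lower is better. A quick sketch of the calculation — the wattage figures below are hypothetical, chosen only to illustrate what a 1.06 ratio means, and are not from the Intel report:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt drawn goes to computing);
    anything above 1.0 is overhead such as cooling and power conversion.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,060 kW drawn in total,
# 1,000 kW of which reaches the servers themselves.
print(round(pue(1060, 1000), 2))  # → 1.06
```

At a PUE of 1.06, only about 6 cents of overhead (cooling, power distribution, lighting) is spent for every dollar of electricity that actually powers computing.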
2) The distributed cloud model
Some data-center operators are taking the opposite approach. Instead of packing more servers into a single location, they push space-consuming apps and non-critical workloads to underutilized servers in secondary markets.
In a recent study, Jonathan Koomey of Stanford University and Jon Taylor of Anthesis Group found that the utilization of servers in business and enterprise data centers rarely exceeds 6 percent. They also found that 30 percent of all U.S. servers are “comatose” — that is, they use electricity, but deliver no information at all. Renting all that underutilized processing power to others would be a good deal for everyone involved.
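The study's two figures compound in a striking way, which a back-of-the-envelope calculation makes concrete. The fleet size below is hypothetical; only the 6 percent utilization and 30 percent comatose rates come from the study:

```python
# Hypothetical fleet of 10,000 servers, using the study's figures:
# 30% comatose (powered on, doing no work), ~6% utilization on the rest.
total_servers = 10_000
comatose = int(total_servers * 0.30)   # drawing power, delivering nothing
active = total_servers - comatose
avg_utilization = 0.06

# Capacity actually delivering work, expressed in server-equivalents.
useful = active * avg_utilization
print(f"{useful:.0f} server-equivalents of useful work "
      f"out of {total_servers} powered machines")
```

Under those assumptions, ten thousand powered servers deliver the work of only a few hundred fully loaded machines — which is why renting out the idle capacity looks so attractive.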
3) SSD fast-swap drives
Sometimes small design improvements can make a big difference. A recently published Intel report, Data Center Strategy Leading Intel’s Business Transformation, describes a simple solution to a complex issue. Intel’s silicon-chip design engineers were hitting a bottleneck: their increasingly complex chip-design automation workloads required a more cost-effective server-configuration strategy.
In the end, Intel IT found that substituting lower-cost solid-state drives (SSDs) for part of its servers’ physical memory increased design throughput by 27 percent across an array of 13,000 servers.
Even if none of your customers is designing microprocessors, they can still employ cost-cutting measures like fast-swap drives to increase efficiency and shorten data-center configuration timelines. Changes like that lead to lower costs and higher profit margins. What business owner wouldn’t say yes to that?