News

Mason & Hanger’s Coles Jennings was featured in The Military Engineer with the article “As Data Center Growth Surges, So Does the Energy Requirement.”

As Data Center Growth Surges, So Does the Energy Requirement

Better managing data center design will ensure more efficient and resilient operations of what is anticipated to be a substantial energy user for decades to come.

By Coles Jennings, P.E., LEED AP BD+C, BEMP

Data center operation, at least in hindsight, used to seem simple. If operators could follow two golden rules, they would likely be in good shape: keep the power on, and keep the room cold. For any advanced data center now, the prescription is far more complicated. On one hand, cost pressures and technological evolution have driven data centers to become among the most complex facilities in the world. On the other, data centers are growing rapidly as private and public organizations become increasingly reliant on data processing. As these centers multiply, they account for a larger slice of global energy demand.

According to the Department of Energy, data centers account for 1.8 percent of total U.S. electricity use, consuming 70 billion kWh in 2016, a staggering 280 percent rise since 2000. While this rate of growth is not projected to continue due to improvements in efficiency, data center energy usage will remain significant for decades to come.

MEASURING VALUE

The need to compare data center energy usage was first addressed in a big way with the development of power usage effectiveness (PUE). The metric provides a means of comparing the efficiency of anything from a server closet to a large colocation center by scaling total facility energy to the energy of the IT equipment.
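In practice, PUE is simply the ratio of total facility energy to IT equipment energy measured over the same period. As a minimal sketch, assuming annual meter readings in kWh (the figures below are placeholders, not drawn from the DOE data cited above):

```python
# Minimal sketch of the PUE calculation: total facility energy divided by
# IT equipment energy over the same period. All figures are placeholders.

def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal floor)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: a facility that drew 1,700 MWh in a year while its IT gear drew 1,000 MWh
print(power_usage_effectiveness(1_700_000, 1_000_000))   # -> 1.7
```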


PUE asks a simple question: for every dollar spent on computing power, how much is spent to keep the power on and the servers conditioned? Operators constantly strive to reduce PUE, because every dollar spent on energy is one less being spent on operations. While the best data centers are pushing PUE very close to the ideal value of 1.0, the reality is that most existing data centers are nowhere close. Annual surveys by the Uptime Institute have shown PUE plateauing near 1.7 in recent years. This means that for every dollar spent on information technology power in 2013, another 70 cents was spent to keep the systems housed and cooled.

Still, asking harder questions concerning IT loads, reliability, and efficiency reveals that PUE alone is insufficient for managing IT assets. What will efficiency be when the IT load ramps down? What if servers are overheating, sacrificing reliability for the sake of saving energy? How effectively is the data center utilizing its installed cooling capacity? These are relevant questions not fully answered by PUE, and they have spawned a new generation of performance metrics. These include thermal metrics, which evaluate the effectiveness of cooling air distribution, as well as new performance indicators for assessing efficiency, capacity, and reliability. Designers and operators must build on PUE by identifying and targeting more advanced, and more appropriate, metrics.
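As one hedged example of such a thermal metric, the sketch below scores hypothetical rack inlet temperature readings against the roughly 18 to 27 degrees Celsius envelope that ASHRAE recommends for most IT equipment; the sensor names and values are illustrative only:

```python
# Hedged sketch of one simple thermal metric: the fraction of server inlet
# temperature readings that fall inside the ASHRAE-recommended envelope of
# roughly 18-27 degC. Sensor names and readings are hypothetical placeholders.

RECOMMENDED_LOW_C, RECOMMENDED_HIGH_C = 18.0, 27.0

inlet_temps_c = {             # hypothetical rack inlet sensors
    "rack-01": 21.5,
    "rack-02": 24.0,
    "rack-03": 28.2,          # running hot: likely recirculation or bypass airflow
    "rack-04": 19.8,
}

within = {name: RECOMMENDED_LOW_C <= t <= RECOMMENDED_HIGH_C
          for name, t in inlet_temps_c.items()}
compliance = sum(within.values()) / len(within)

print(f"Inlet compliance: {compliance:.0%}")      # e.g. 75%
for name, ok in within.items():
    if not ok:
        print(f"  {name} outside recommended range ({inlet_temps_c[name]:.1f} degC)")
```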

OPTIMIZING AIRFLOW

Though climate is a factor, much of a data center’s inefficiency results from poor design, operation, or maintenance of the cooling systems. It is very common to walk through a data center and see too many fans spinning and wasted airflow. Further, many existing data centers still operate under the notion that server rooms are best kept frigid. Recent guidance from ASHRAE says that servers can handle some heat, but many designers and operators are still catching on to this.

Improving cooling efficiency starts with improving airflow. Though oil- and water-cooled servers are becoming more prevalent, air cooling remains the standard. The best air-cooled data centers circulate just enough air to accomplish the primary goal of keeping the servers stable. With ideal air circulation, fans run efficiently and it becomes much easier to tackle the efficiency of heat rejection equipment, such as chillers and condensers. With a smaller server room, designing for efficient airflow may be as simple as following key best practices:

    • Locate fans close to IT equipment to reduce air pressure losses.
    • Provide variable speed fans for turndown with IT load.
    • Isolate the cold (inlet) and hot (outlet) sides of servers to prevent mixing.
    • Provide containment panels or curtains around the hot or cold aisle.
    • Elevate cooling temperatures in accordance with ASHRAE guidelines.
    • Provide blanking panels and brush seals to reduce loss of cooling capacity.

In a large data center with hundreds of server racks, designers must go a step beyond best practices. Airflow networks in large, underfloor air systems are very complex and can be tricky to balance. Advanced analysis becomes critical.
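Before any advanced analysis, a first-order estimate of “just enough air” can come from the standard sensible heat relation, which ties a rack’s heat load and the air temperature rise across it to the required airflow. The sketch below uses IP units and an assumed 10-kW rack with a 20-degree Fahrenheit rise; actual design airflow should come from equipment data and, in large rooms, the modeling described next:

```python
# Hedged sketch: first-order airflow needed to remove a rack's heat load,
# using the sensible heat relation Q(BTU/hr) = 1.08 * CFM * deltaT(degF).
# The rack load and temperature rise below are illustrative assumptions.

BTU_PER_HR_PER_WATT = 3.412

def required_airflow_cfm(rack_load_watts: float, delta_t_f: float) -> float:
    """Airflow (CFM) to carry away a rack's sensible heat at a given air temperature rise."""
    heat_btu_hr = rack_load_watts * BTU_PER_HR_PER_WATT
    return heat_btu_hr / (1.08 * delta_t_f)

# Example: a 10 kW rack with a 20 degF rise across the servers
print(f"{required_airflow_cfm(10_000, 20):.0f} CFM")   # roughly 1,580 CFM
```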

MAXIMIZING EFFECTIVENESS

Computational fluid dynamics (CFD) takes a complex problem and breaks it up into tiny chunks that computers can solve. Similar to finite element analysis, the approach divides the problem domain into many small cells, allowing computers to quickly and accurately simulate fluid behavior, including the airflow in a data center. Data center CFD helps answer several detailed questions:

    • How much airflow do the servers really need?
    • Should there be full aisle containment, or are end panels enough?
    • How high can the cooling temperature be?
    • Do there need to be this many computer room air conditioners?
    • Do there really need to be this many perforated tiles?
    • Why is this server happy while the one right next to it is not?

When navigating data center design, best practices are the general direction; CFD is the map. Best practices will set the path to a good design. CFD builds on best practices by providing much more specific feedback regarding airflow patterns, which designers can use to maximize cooling effectiveness.
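The toy sketch below is not data center CFD, which solves full three-dimensional airflow with dedicated software, but it illustrates the underlying discretization idea: split a room cross-section into small cells and repeatedly apply a simple local rule until the field settles. The geometry and boundary temperatures are invented for illustration:

```python
# Toy illustration of the "tiny chunks" idea behind CFD: a 2-D steady-state
# temperature field solved on a grid by Jacobi iteration. Real data center CFD
# solves the full airflow (Navier-Stokes) equations in 3-D; this sketch only
# shows how discretizing a space turns continuous physics into simple
# arithmetic a computer can repeat. All values here are invented.
import numpy as np

nx, ny = 50, 30                 # grid cells across a hypothetical room cross-section
T = np.full((ny, nx), 20.0)     # start everything at 20 degC

# Hypothetical boundary conditions: a hot exhaust wall and a cold supply wall.
T[:, 0] = 35.0                  # hot aisle side
T[:, -1] = 15.0                 # cold supply side

for _ in range(5_000):                       # iterate until the field settles
    T_new = T.copy()
    # Each interior cell moves toward the average of its four neighbors.
    T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    if np.max(np.abs(T_new - T)) < 1e-4:     # converged
        break
    T = T_new

print(f"Temperature midway across the room: {T[ny // 2, nx // 2]:.1f} degC")
```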

FEDERAL IMPLEMENTATION PUE has recently expanded its presence in the federal government landscape. The 2016 Data Center Optimization Initiative, which supersedes the Federal Data Center Consolidation Initiative and fulfills the data center requirements of the Federal Information Technology Acquisition Reform Act, requires that all federal agencies implement active PUE monitoring, and that all existing tiered federal data centers achieve a PUE of 1.5 or less by September 2018. Real-time tracking of federal progress towards these targets, and others, is now displayed on the federal information technology dashboard: www.itdashboard.gov.

PUTTING IT ALL TOGETHER

Located in Newport News, Va., the Thomas Jefferson National Accelerator Facility (Jefferson Lab) conducts cutting-edge research on sub-atomic particles. The campus houses a large particle accelerator nearly a mile long. Because the Department of Energy-funded laboratory conducts research year-round, energy efficiency is critical to its operations, and it faced some tough decisions when it came time to expand its existing data center.

Shifting space constraints and expanding IT needs necessitated consolidating several data rooms into a single location. This had to be done with minimal disruption to operations and without a loss of computing capacity. Additionally, the PUE goal for the data center was established at 1.4, an aggressive number for the often hot and humid climate of coastal Virginia.

A detailed phasing plan and conceptual design were developed in direct collaboration with data center personnel to safely consolidate Jefferson Lab’s IT operations. Next, a predictive CFD model of the expanded data center was created, allowing optimization of the proposed layout before relocating a single server. The benefits were tangible. The predictive CFD study yielded several key outcomes:

    • Confirmation that the proposed cooling solution would maintain inlet conditions for all servers.
    • Assurance of continuous cooling in the event of failure of a single computer room air conditioner unit.
    • Consolidation of cold aisles to free space for future expansion.
    • Validation of the aisle containment solution for energy efficiency.
    • Establishment of server airflow limits to be used in IT equipment procurement.
    • Elevation of the recommended cooling supply temperature to expand the availability of free cooling throughout the year.

To tie it all together, the results of the CFD study were fed back into energy analysis software. The data center airflow optimization, coupled with improvements to the chiller plant, demonstrated a predicted annual PUE below the 1.4 threshold. Best of all, metering data gathered in the months since the data center returned to operation indicates an average PUE of 1.25.
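As a small illustration of how metered data might be rolled up against a design target (the monthly readings below are hypothetical, not Jefferson Lab’s), note that an annual PUE is the ratio of annual totals rather than the average of twelve monthly ratios:

```python
# Hedged sketch: rolling metered monthly energy up into an annual PUE, as an
# operator might when checking performance against a design target such as 1.4.
# The readings below are hypothetical placeholders. Annual PUE is computed as
# the ratio of annual totals, not the average of the monthly PUE values.

# (facility_kwh, it_kwh) per month -- placeholder values for illustration only
monthly_meter_readings = [
    (131_000, 100_000), (128_000, 99_000), (126_000, 98_000), (124_000, 97_000),
    (129_000, 99_500), (133_000, 101_000), (136_000, 102_000), (135_000, 101_500),
    (130_000, 100_500), (125_000, 98_500), (124_000, 97_500), (127_000, 99_000),
]

total_facility_kwh = sum(f for f, _ in monthly_meter_readings)
total_it_kwh = sum(it for _, it in monthly_meter_readings)

annual_pue = total_facility_kwh / total_it_kwh
print(f"Annual PUE: {annual_pue:.2f}")              # ratio of annual totals
print(f"Meets 1.4 design target: {annual_pue <= 1.4}")
```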

Coles Jennings, P.E., LEED AP BD+C, BEMP, is Senior Energy Engineer, Building Sciences Manager, Mason & Hanger; coles.jennings@masonandhanger.com.
