Cooling data centers has become a growing challenge for facility managers, many of whom face real confusion about the best way to cool increasingly hot facilities. Today's data centers run hotter because the equipment is denser: far more computing power is packed into every square foot than just a few years ago. And because computing power and heat generation are directly related, increased density translates directly into increased heat load.
The traditional way to manage heat loads is to install dedicated cooling units inside the data center. Companies like Liebert make a wide variety of specialized units that do this quite well. However, with the super dense data centers of today, traditional approaches may not do the job as well as they once could. In fact, many experts believe that data center design is entering a new era where new cooling techniques will be required.
There are two main problems with cooling data centers today. The first is the increase in total heat generated. The most common components in a data center are servers. These units are the backbone of modern computing, storing information and making that information available to users at their desktops.
Back in the 1980s, when the personal computing revolution took off, servers were basically powerful desktop computers mounted in racks. These were usually 7″ to 12″ high, and each typically produced between 250 and 400 watts of heat. They were large by current standards; only four to six servers fit in a standard rack, generating between 1,200 and 3,000 watts per rack. This load was handled by adding extra cooling units.
The next generation of servers, arriving in the late 1990s and this decade, was the "pizza box" server, much smaller than its predecessors. Up to 40 of these units could fit in a rack, generating up to 8,000 watts per rack. These servers brought their own challenges but could still be handled by adding larger cooling units.
The latest advance is the blade server, a computer stripped down to just a motherboard with processor and memory: the basic components of a desktop computer without the case. Because blade servers are so small (about 3″ high, 12″ long, and 1″ thick), as many as 96 can be mounted in a single rack, creating a heat load as much as 10 times that of the early server racks. Blade servers deliver amazing feats of data processing for less money and in less space than any other kind of computer.
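The rack-level arithmetic above is simple multiplication: units per rack times heat per unit. A quick sketch (the per-server figures are mid-range values from the ranges cited above, and the cooling-ton conversion uses the standard 1 ton ≈ 3,517 watts):

```python
def rack_heat_watts(units_per_rack, watts_per_unit):
    """Rack heat load is simply unit count times per-unit heat output."""
    return units_per_rack * watts_per_unit

# A mid-range 1980s rack: 5 servers at ~300 W each
early_rack = rack_heat_watts(5, 300)       # 1,500 W

# Blade racks run roughly 10x the early figure, per the text
blade_rack = early_rack * 10               # 15,000 W

# Translate to HVAC terms: 1 ton of cooling = ~3,517 W
cooling_tons = blade_rack / 3517           # a bit over 4 tons for one rack

print(early_rack, blade_rack, round(cooling_tons, 1))
```

One rack of blades can demand several tons of dedicated cooling, which is why rack counts that were trivial a decade ago now dominate the HVAC design.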
The second problem with cooling is the shrinking footprint of data center components. Because they are physically smaller, hotter, and closer together, it is more difficult to achieve good airflow around them. The data center may feel cool, and air may be moving, yet the air may not be getting to the right places. A data center can be a cool 65°F and still have overheating components. If airflow is not optimized to travel through the rack and its components, much of the cool air will simply pass the components by.
Airflow in a data center is not intuitive and is far more complicated than it might seem. Air will take the easiest route to get where it is going, and that might not be where the facility manager wants it to go.
The key to meeting airflow challenges is hiring a good HVAC design engineer, one with extensive experience in data center cooling rather than just general HVAC design. These engineers use sophisticated methods such as airflow simulations built with specialized software. Flovent, for example, can predict how air will flow and how hot components will get before any actual HVAC equipment is installed. The investment in good design pays big dividends by preventing equipment failures and expensive retrofits later.
There are also some simple, inexpensive ways facility managers can help cool their data centers. One thing to consider is the presence of too many cables under the raised floor. Most data centers circulate cool air under raised floors, and too many cables can restrict airflow. Often, old cables are not removed when upgraded cabling is installed. In most places, this is not an issue, but under a data center floor, it can choke airflow and dramatically reduce cooling capacity. Facility managers can coordinate with IT people to determine what can be removed from under the floor.
Another simple technique is to take temperature readings in the racks. A digital thermometer with a remote probe is very useful for checking temperatures inside racks, especially enclosed racks. Checking the temperature helps pinpoint hot spots, and checking at various times of day reveals cooling trends.
Many experts say that the hotter, denser computing technologies have maxed out existing cooling methods, and that new solutions must be found. A more efficient method is to use chilled liquids to provide cooling. One basic tenet of cooling system design is that denser cooling mediums carry heat away more effectively. A liquid at 50°F therefore provides far more cooling capacity than air at the same temperature, because the liquid is far denser and can absorb much more heat per unit volume.
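To put rough numbers on that claim (using approximate textbook property values at room conditions, not figures from the article): heat absorbed per unit volume per degree is density times specific heat, and water's figure dwarfs air's.

```python
# Volumetric heat capacity, J/(m^3 * K) = density x specific heat.
# Approximate textbook values at room conditions.
WATER_DENSITY = 1000.0   # kg/m^3
WATER_CP      = 4186.0   # J/(kg*K)
AIR_DENSITY   = 1.2      # kg/m^3
AIR_CP        = 1005.0   # J/(kg*K)

water_vhc = WATER_DENSITY * WATER_CP   # ~4.2 MJ per m^3 per K
air_vhc   = AIR_DENSITY * AIR_CP       # ~1.2 kJ per m^3 per K

# For each degree it warms, a volume of water soaks up thousands of
# times more heat than the same volume of air.
print(round(water_vhc / air_vhc))
```

The ratio works out to several thousand, which is why a small chilled-water loop can remove heat that would take an enormous volume of moving air to carry away.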
IBM has announced a liquid cooling system named Cool Blue. The IBM system circulates chilled liquid inside air ducts to bring a huge amount of cooling capacity directly to the racks and components.
Many people, however, have a deeply rooted fear of liquids that close to the equipment. Few things can so quickly and completely devastate electrical components as liquids. This fear may be part of the reason liquid cooling systems have not been more widely developed. But the attitude will certainly change as cooling requirements force facility managers to find new ways to cool overheated data centers.
Liquid cooling is not new. Back in the 1970s, supercomputers had liquid cooling systems. The technology is coming full circle in some ways. Centralized computing and liquid cooling were both key concepts in the 1970s but were abandoned with the development of cooler processors and the personal computer in the 1980s. Today, there is more and more centralized computing in larger data centers, and liquid cooling has re-emerged. Someone once said, "Everything old is new again." Ain't it the truth!
- Flovent (www.flovent.com/e-news/articles/data-center_airflow.jsp)
- IBM (www.ibm.com/us)
- Liebert (www.liebert.com)
Condon, a Facility Technologist and former facility manager, is a contributing author for BOMI Institute’s revised Technologies in Facility Management textbook. He works for System Development Integration, a Chicago, IL-based firm committed to improving the performance, quality, and reliability of client business through technology.