
FM Issue: Today’s Data Center


By Melissa Chambal, RCDD, NTS
Published in the February 2011 issue of Today’s Facility Manager

Today’s facility manager (fm) is a key contributor to the successful relocation or expansion of many organizations’ data centers. Fms provide the design team with insight into how the critical infrastructure systems operate and what every interested party in the facility expects of them. Power, HVAC, and access control are all vital to any data center, large or small, and the fm is responsible for them all. Whether working from an internal facilities perspective or in a commercial building with multiple occupants, the fm is required to provide consistent and reliable service within the facility across a multitude of disciplines.

How Critical Is Critical?

Not all data centers are alike. Each facility will have its own distinct operating procedures, systems, layout, construction, and occupancy requirements. By understanding these and the organization’s expectations for the availability of the network, today’s fm is better positioned to provide a safe, continuous, consistent, and reliable infrastructure.

Regardless of industry and equipment, all data centers have one thing in common: the cost of downtime. Conservative estimates have put this cost for financial institutions in the $7 million per hour range, with credit card and banking operations around $3 million per hour (and climbing). Cross industry averages are approximately $47,000 per hour, which includes lost revenue, wages, and productivity.
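To put those figures in perspective, downtime exposure can be estimated from a target availability level. The sketch below is a rough, hypothetical illustration using the cross industry average cited above; the availability percentages are assumptions, not measured figures.

```python
# Rough downtime-cost estimate using the cross-industry average cited above.
# The availability levels are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 8760
COST_PER_HOUR = 47_000  # cross-industry average cost of downtime ($/hour)

def annual_downtime_cost(availability: float, cost_per_hour: float = COST_PER_HOUR) -> float:
    """Expected yearly downtime cost for a given availability (e.g., 0.999 = 'three nines')."""
    downtime_hours = (1.0 - availability) * HOURS_PER_YEAR
    return downtime_hours * cost_per_hour

# A "three nines" facility is down roughly 8.8 hours per year.
print(f"99.9% uptime:  ${annual_downtime_cost(0.999):,.0f} per year")   # about $411,720
print(f"99.99% uptime: ${annual_downtime_cost(0.9999):,.0f} per year")  # about $41,172
```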

IT departments know the critical network and storage demands for the business applications and software they must support. But the answers to questions like “Is it a 24/7 operation?” or “Will ‘remote’ or ‘after hour’ network access be required for continuous support?” should be clearly communicated by IT to facilities management (FM) early in the project phase so fms can plan, integrate, and operate a building infrastructure adequately—at the very least.

Fms have become more aware of the demand and availability requirements in their mission critical operations. After all, if the data center within the facility is the only one the enterprise has (which is very common for many owner occupied and operated buildings), any outage can be crippling and have a long-term effect. Grasping how an organization will be impacted by an outage helps guide preventive maintenance and even expansion planning.

Technology is changing rapidly, and today’s corporate IT departments across industries are constantly shifting between proactive and reactive modes as they manage the applications and processes their networks support. Consequently, the fm with one or more mission critical sites will face the same proactive and reactive situations; for the fm, however, the emphasis will be on other specialized disciplines (such as power, HVAC, fire suppression, and access control).

Location, Location, Location

Not all locations are ideal for a data center. In areas subject to seismic instability or prone to high winds (such as hurricanes and tornados), facilities that will house data centers need specialized hardening of structural elements. Depending on the geographic location, these characteristics will be inherent in the initial design of the building, based on local and life safety codes.

Special consideration of data center placement in the building is also important. Computer rooms housing servers and mission critical equipment should not be next to elevator shafts, load bearing walls, or exterior walls or windows; such locations limit expansion and pose security concerns. This industry best practice can cause space planning nightmares, but it is better to have the capability of expansion built in early, rather than being faced with the “out of room” possibility later in the lifespan of the facility.

Design Factors For Optimizing Data Center Performance
By Brian L. Mordick, RCDD

A reliably operating data center is critical for small and large businesses alike. To maintain network functionality and prevent failure, all equipment must be kept within a specified temperature range. [For more specifics on this range, see the ASHRAE guidelines referenced in the main article.] This can prove to be a challenging task, as higher processing power of network equipment is required to meet increasing information demands.

In raised floor data centers, hot exhaust air is directed through hot aisles to computer room air conditioners (CRACs), and the reconditioned cool air is pumped through the raised floor plenum and into the cold aisle where it can be reused.

High temperatures can cause equipment failure and network downtime, resulting in costly labor and equipment replacement. Power consumed by the processors is converted to heat, which must be removed via air (convection)—requiring an efficient cooling system.

The data center must provide cold air to network equipment intakes and recycle the exhaust air to remove the heat and maintain proper temperatures. Operating equipment temperatures must be kept below 95˚F to 100˚F, with intake air below 80˚F to 85˚F.

Numerous design factors can affect data center performance, including air distribution, sufficient airflow, and support structures. Data center floor designs and cabinet characteristics are two factors that impact operation and, ultimately, affect the overall cooling system.

Data Center Floor Designs
Data center floors should be carefully designed to provide a cooling strategy that effectively delivers cold air to the equipment. Fms can choose to use a raised or non-raised floor design to meet their facility requirements.

Raised Floor Data Centers are intended to provide an efficient cold air distribution system for equipment. Using perforated floor tiles, raised floor data centers allow cold air to enter a cold aisle and be used for equipment intake.

An average floor tile allows 200 to 500 cubic feet per minute (CFM) of cold intake air, depending on the percentage of open space and underfloor air pressure. Open space is defined as the amount of cutout area that allows airflow through the tile.

The percentage of open space can range from 25% up to 60% of the total tile area, based on the model type and version. Typical perforated floor tiles can provide 1,500 to 4,000 watts of cooling.
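Those airflow and wattage figures can be tied together with the standard sensible heat rule of thumb (BTU/hr is roughly 1.08 x CFM x the temperature rise in ˚F). The short sketch below is only an illustration; the 25˚F supply-to-exhaust temperature rise is an assumed value, not a figure from this article.

```python
# Sensible-heat check: how much equipment heat a perforated tile's airflow can absorb.
# Uses the common rule of thumb Q[BTU/hr] = 1.08 * CFM * deltaT[F]. The 25 F
# supply-to-exhaust temperature rise is an assumed value for illustration.

BTU_PER_HR_PER_WATT = 3.412

def tile_cooling_watts(cfm: float, delta_t_f: float = 25.0) -> float:
    """Approximate heat (in watts) removed by a tile delivering `cfm` of cold air."""
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / BTU_PER_HR_PER_WATT

for cfm in (200, 350, 500):
    print(f"{cfm} CFM: about {tile_cooling_watts(cfm):,.0f} W of cooling")
# 200 CFM works out to roughly 1,600 W and 500 CFM to roughly 4,000 W, which is
# consistent with the 1,500 to 4,000 watt range quoted above.
```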

The placement of perforated floor tiles is critical, since they are the point of entry for all cold air in the data center. Their location determines the cooling capability and overall efficiency of the facility.

Floor tiles can be placed in two ways: through a process of trial, error, and measurement or Computational Fluid Dynamics (CFD). Simple trial and error methods can be used to ensure proper airflow in the data center. CFD, a more sophisticated process, uses modeling simulation to create the most effective floor plan for optimal airflow and cooling.

The proximity of perforated floor tiles to the computer room air conditioners (CRAC) or computer room air handlers (CRAH) is also important. CRAC/CRAH units recycle air by removing heat from the exhaust by using either refrigeration or chilled water. The reconditioned cold air is pumped into the raised floor and up through the perforated tiles into the cold aisle while the heat is directed outside of the building.

If the floor tiles are placed too far from these units, the air conditioners cannot produce enough airflow. On the other hand, if the tiles are positioned too close to the units, the raised floor may experience the Venturi Effect. This occurs when a rush of air flowing below the perforated tile creates suction into the flow, potentially drawing air above the floor downward—ultimately affecting cold air distribution in the data center.

Non-Raised Floor Data Centers may achieve the same objectives as their raised floor counterparts while eliminating issues associated with structure, cabling, and costs. The non-raised floor option allows for higher weight capacity, which better accommodates cabinets full of equipment—cabinets that can weigh as much as 3,000 pounds.

This design also eliminates blockages and obstructions to cold airflow; in raised floor environments, perforated tiles may deliver less airflow than expected due to blockages and obstructions under the floor. The non-raised floor data center structure also allows for simple cleaning and quick moves, adds, changes, and upgrades to IT equipment.

Additionally, non-raised floor data centers provide easy access to power and data cabling, since the cabling is located overhead. This allows for quick cabling changes and also eliminates clusters of cables underneath perforated tiles, which can cause unwanted obstructions.

Costs can be lower with a non-raised floor design. Outside studies suggest a raised floor costs approximately $20 per square foot, plus the associated costs of power and data cabling.

Cabinet Characteristics
Many cabinet characteristics can also affect server performance. Sufficient cool air for a server can be provided by incorporating cabinet design elements like those described below.

Perforated Doors. To provide servers with enough cold airflow to dissipate the heat generated and ensure the cold air is not restricted, cabinet doors must have at least 50% open space. (Open space refers to the space open for airflow through the door.)

Mesh or perforated doors should be used on both the front and rear of all passive server cabinets to provide proper airflow to the equipment. It is important that the rear doors are also perforated or mesh, so the exhaust air from the servers can flow away from the cabinet, into the hot aisle, and back to the cooling unit—allowing the air to be recycled back to cool intake air. Servers should be positioned four to six inches from the front and rear cabinet doors to provide sufficient space for accessories, such as handles and cables, as well as airflow.

Rear Mount Fans can be used in conjunction with the server fans to increase the amount of heat removed from a cabinet. These cabinets use a solid door with cut outs for the fans. Providing more airflow in specific areas, rear mount fans help to eliminate excess hot exhaust air and prevent hot spots. Rear mount fans can be monitored and controlled for maximum efficiency, allowing users to switch off or adjust their RPM to match cabinet requirements.

Cabinet Tops. Removing heat from the top of a cabinet can be achieved by installing a perforated top or a top mount fan. However, users should be aware of possible complications with top mount fans.

These fans can actually block heat dissipation and overall server thermal performance by drawing cold air away from the server and mixing it with hot exhaust air—wasting energy and disrupting airflow. In place of top mount fans, solid cabinet tops can be used to force hot exhaust air to exit the back of the cabinets.

Cabinet Bottoms. Delivering cold air in front of the cabinet through the perforated tile (instead of directly under the cabinet) can prevent blocked airflow and keep hot and cold air from mixing together. Floor ducted cabinets can create a cold air plenum, ensuring all equipment receives the same cold temperature intake air and reducing temperature variance.

Blanking Panels are used to fill empty rack unit spaces. They prevent cold air from bypassing equipment and decrease air temperature around the equipment by 15˚F to 20˚F.

Through the proper data center floor design and use of cabinet design characteristics, data center performance and continuous operation can be achieved. Sufficient cooling and airflow will help to prevent equipment failure—reducing costs and increasing overall productivity.

Mordick is senior product manager—cabinets, for Minneapolis, MN-based Pentair Technical Products.

Careful examination is required when a data center is planned for a particular space in a multi-story building. For high density data centers, industry experts call for floor loads of 250 pounds per square foot (with a hanging load of 50 pounds per square foot). These requirements can be difficult to meet in older commercial buildings. The FM team will be pivotal in providing engineers with the information they need to ensure weight loads are within tolerance. Most importantly, it will be the fm who will diplomatically run interference to ensure any subfloor reinforcement work does not inconvenience others in the building.
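A quick back-of-the-envelope check shows why those numbers matter. The sketch below uses the 250 pounds per square foot figure above and the 3,000 pound loaded cabinet weight from the sidebar; the cabinet footprint and aisle allocation are assumptions for illustration only, and a structural engineer makes the real determination.

```python
# Back-of-the-envelope floor-load check against the 250 lb/sq ft design figure.
# The cabinet footprint and aisle share are assumed values for illustration.

FLOOR_RATING_PSF = 250          # lb per square foot, from the article
CABINET_WEIGHT_LB = 3000        # fully loaded cabinet weight cited in the sidebar
CABINET_FOOTPRINT_SQFT = 2 * 4  # assumed 24" x 48" footprint
AISLE_SHARE_SQFT = 16           # assumed share of surrounding aisle per cabinet

point_load = CABINET_WEIGHT_LB / CABINET_FOOTPRINT_SQFT
distributed_load = CABINET_WEIGHT_LB / (CABINET_FOOTPRINT_SQFT + AISLE_SHARE_SQFT)

print(f"Load over cabinet footprint only:   {point_load:.0f} lb/sq ft")        # 375 lb/sq ft
print(f"Load spread over cabinet and aisle: {distributed_load:.0f} lb/sq ft")  # 125 lb/sq ft
print("Within rating" if distributed_load <= FLOOR_RATING_PSF else "Exceeds rating")
```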

Industry standards recommend a minimum 15′ slab to slab distance between floors to accommodate at least an 18″ raised floor; high density data centers require a 24″ raised floor. A minimum 10′ ceiling height (from finished floor to ceiling) will allow the hot air return path from the servers to rise unobstructed to the plenum space above and return to the computer room air conditioners (CRACs). Unfortunately, many existing commercial data centers occupy cramped quarters that do not have enough clearance either above or below to allow for cool air circulation.
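Stacking up those dimensions shows how little vertical margin an existing building may have. In this hypothetical sketch, the structural slab depth is an assumed value, not a figure from the article.

```python
# Vertical-clearance sketch using the dimensional guidance above.
# The structural slab depth is an assumed value for illustration.

SLAB_TO_SLAB_IN = 15 * 12   # 15' slab-to-slab minimum from the article
RAISED_FLOOR_IN = 18        # 18" raised floor (24" for high density)
CLEAR_CEILING_IN = 10 * 12  # 10' minimum finished-floor-to-ceiling height
SLAB_THICKNESS_IN = 8       # assumed structural slab depth

margin = SLAB_TO_SLAB_IN - (RAISED_FLOOR_IN + CLEAR_CEILING_IN + SLAB_THICKNESS_IN)
print(f"Left for ceiling plenum, lighting, and overhead pathways: {margin} in")  # 34 in
```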

Anchoring of freestanding cabinets and racks to the slab is a best practice being implemented, regardless of location, with additional bracing if required by local codes. (For more on these issues and others, see the accompanying sidebar.)

Got Power?

Availability of redundant power is most attractive to the organization with large computer processing needs. Eliminating single points of failure in the power chain will increase redundancy.

The farther away from the servers a failure occurs, the more of an impact it has on the entire data center. Thus, for mission critical facilities, redundant power should feed from separate and distinct substations entering the building into their own separate electrical entrance rooms. An opposite side of the street scenario is ideal, but it’s also the most costly. Backup generators dedicated to maintaining critical business functions, in addition to those required for life safety/emergency power, are an attractive benefit for an enterprise whose entire viability depends on the availability of power.

Most data centers will require their own uninterruptible power supply (UPS) or system—whether standalone or within the cabinets themselves—in addition to a percentage of the building UPS that may be available. This could present a concern with respect to floor loading (as mentioned earlier), as the standalone systems can be large and heavy. The modular approach to UPS systems can help organizations that may wish to expand critical network equipment and allow for proper shutdown or continuous operation, but it will all depend on how the electrical system is designed.
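As a simple illustration of how the modular approach scales, the sketch below sizes an N+1 module count for a hypothetical critical load; the load and module ratings are made-up values, and actual sizing depends on how the electrical system is designed.

```python
# N+1 modular UPS sizing sketch. The critical load and module rating are
# hypothetical values used only for illustration.
import math

def modules_required(critical_load_kw: float, module_kw: float, redundant: int = 1) -> int:
    """Modules needed to carry the load, plus `redundant` spare modules (N+1 by default)."""
    n = math.ceil(critical_load_kw / module_kw)
    return n + redundant

load_kw, module_kw = 120.0, 40.0  # hypothetical 120 kW critical load, 40 kW modules
total = modules_required(load_kw, module_kw)
print(f"{total} modules installed ({total - 1} carry the load, 1 spare)")  # 4 modules
```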

Grounding and bonding are essential from life safety and performance perspectives. Today’s sophisticated equipment requires a ground resistance of less than five ohms. Nothing creates sporadic and intermittent problems, failures, and equipment shutdowns more than a poor grounding and bonding system. Proper techniques throughout the facility may eliminate many problems before they even happen. Regular facility inspections are strongly recommended to ensure the grounding infrastructure is intact and that any new cabinets or equipment are properly bonded to the building ground system.

Brrr…Is It Cold In Here?

Cooling is the largest power requirement for a data center. But it actually might be a bit too cold in some data centers, especially if fms are unaware of the recent changes in data center cooling.

Delivering the proper amount of cold air can be a challenge, but fortunately, it can be delivered in a variety of ways. Most common in data centers is underfloor cooling architecture. CRAC units deliver cold air under the raised floor by way of properly placed perforated floor tiles.

This cooling architecture works best when the underfloor space is free from obstructions. Cabinets are aligned in a hot and cold aisle configuration, allowing the cool air to rise up from the cold aisle through perforated floor tiles. However, overhead and in row cooling may need to be deployed if no raised floor is present (or a lack of space or clearance causes concerns). A combination of in row, CRAC, or individual water cooled cabinets may need to be deployed as the data center migrates to higher density blade servers.

One thing that’s changing strategies is the update to ASHRAE’s Thermal Guidelines for Data Processing Environments (2009), which recommends a temperature range of 64.4˚F to 80.6˚F (18˚C to 27˚C); this 3˚F increase in the upper limit will translate into power savings. Measurements must be monitored at the air intake of the equipment. (The temperature indicated on the CRAC unit itself is not indicative of the temperature down the cold aisle with a new blade server in production.)
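A simple spot check of intake readings against that recommended window might look like the sketch below; the rack names and sensor values are hypothetical, and real readings must come from probes at the equipment air intakes rather than from the CRAC display.

```python
# Spot-check server intake temperatures against the ASHRAE (2009) recommended
# range cited above. The rack names and readings are made-up illustration values.

ASHRAE_LOW_F, ASHRAE_HIGH_F = 64.4, 80.6  # recommended intake range from the article

intake_readings_f = {"rack A1": 68.2, "rack A2": 74.9, "rack B7": 82.3}  # hypothetical

for rack, temp_f in intake_readings_f.items():
    status = "OK" if ASHRAE_LOW_F <= temp_f <= ASHRAE_HIGH_F else "OUT OF RANGE"
    print(f"{rack}: {temp_f:.1f} F intake [{status}]")
```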

What Is PM Anyway?

In the world of FM, there may be some confusion over the initials PM. Is it preventive maintenance or project management? Actually, it’s both.

Preventive maintenance is vital to the health of facility infrastructure. Preventive maintenance procedures must be strictly followed to ensure the safety of those performing the work, but they must also ensure that people, property, and systems continue to operate as intended. Unfortunately, many documented data center outages are due to neglected routine maintenance on key infrastructure systems.

Facility project management becomes vital in coordinating planned, scheduled maintenance. Contractors should provide a step by step scope of work detailing what is to occur, and many fms require a detailed back out plan in case the procedure runs into a problem and cannot be completed. This ensures the system will be restored to its pre-maintenance condition and gives the team time to explore solutions without compromising expected services. As tedious as this may seem, it allows for contingencies that would not have existed had the procedures not been enforced.

Since routine maintenance for infrastructure systems typically falls under the responsibility of FM, it is vital to inform users of procedures, just in case alternate methods of network access must be arranged. Many small- to midsize organizations will require a complete shutdown of network facilities without any alternate means of access. This is common, and when planned and managed accordingly, these routine maintenance projects can—and should—be invisible to users.

What Now?

Technology is getting faster. Enterprises depend on their networks to carry out their business objectives. These networks are housed in facilities with complex and diverse systems that keep the servers humming without a second thought. So what now? How do fms prepare their facilities to meet demands?

Technology links everyone, everywhere. Fms and data center managers must keep this link open in order to communicate what changes have—and will—occur within the enterprise environment.

Is more power in the future? Yes. Good, clean, reliable power will be a commodity many will seek out in their existing or new places of business. Newer buildings are being designed with power and technology entrance rooms. Energy efficiency programs with local power providers, along with efficient materials, will help minimize a building’s usage and carbon footprint.

Virtualization will free up floor space by consolidating multiple applications onto fewer servers. However, those applications will reside on more power hungry blade servers, churning out high quantities of heat that need to be cooled or removed.

Regardless of the application, communication must take place between the FM team and the IT department—person to person—around the design table, conference table, or drafting table. Bridges have been established to help IT and fm understand the industry requirements for a productive and functional data center. Resources available through professional networking groups and published industry standards and best practices can offer assistance.

Power and cooling are the focal point of many technical white papers and blog postings. Fms looking to conduct research on this topic simply need to search under the terms “data center,” “power,” and “cooling,” and numerous resources will be supplied. These resources will offer suggestions, recommendations, and solutions for the data center manager and illustrate how fms can achieve success.

In the era of technological innovations, many fms will be riding a high speed roller coaster with their building occupants as they migrate to new platforms, speeds, and processes which will be embedded into the normal course of doing business. What’s the best advice for fms entering the data center world? Just hang on and enjoy the ride!

Chambal, RCDD, NTS, BICSI ITS Technician, has spent over 25 years in telecommunications and data centers. She is currently a Master Instructor with BICSI and a leading expert for the organization’s network and data center design courses.

