ASHRAE NY-08-001 (2008): Some Worst Case Practices in Data Centers
Robert F. Sullivan, PhD (Senior Consultant, the Uptime Institute, Morgan Hill, CA)
Published in ASHRAE Transactions, Volume 114, Part 1.
ABSTRACT

Many data centers today have inappropriate temperature and relative humidity environments because they are operated inefficiently and ineffectively. This paper will highlight the worst case practices found in many data centers, their impact on the operation and environment of the facility, and how many of these situations can be quickly and inexpensively corrected.

A few of the areas of greatest concern include the mismatch of IT expectations and facilities (primarily cooling) capability, which leads to greater exposure to catastrophic failures of infrastructure equipment. Mismatched electrical and cooling infrastructure leads to wasted investment as well as exposure to catastrophic failures of infrastructure equipment. Inappropriate equipment layout and the absence of a master plan lead to inefficient utilization of floor space and cooling capacity. Failure to measure and monitor key parameters leads to uncontrolled application of cooling resources. Bypass airflow is a large contributor to inefficient use of cooling capacity.

INTRODUCTION

There is a great deal of talk about the "Best Practices" to employ in data centers. There have been numerous articles, white papers [1], and presentations [2] on the subject. This paper takes a slightly different approach, that of identifying some of the worst practices that go on in data centers. From this approach, numerous poor to bad practices that are not usually addressed will be highlighted. These bad practices expose the computer and infrastructure equipment to unscheduled outages and therefore to loss of system availability. In addition, many of these practices create a wasteful environment in which up to twice the energy actually needed is consumed operating the data center [3].

The two driving forces in many data centers today are availability, which has always been the key, and hardware and infrastructure efficiency, which is the new paradigm in well-run installations. The latter is driven by the recent effort to conserve electrical power both in the computer equipment and within the facility. This paper will highlight many situations, including management decisions, poor design and implementation, and poor maintenance practices, that make the goals of availability and efficiency difficult, if not impossible, to achieve.
System Availability and Infrastructure Capability

In today's demanding environment, system availability is assumed to be "24 X Forever" in many data centers. In some cases this is based on sound business requirements; in many it is based on unsupported Information Technology (IT) demands. Another driving force for such availability comes from multiple users or tenants in the computer room. Each has the capability of accepting a scheduled maintenance outage sometime during the year. Unfortunately, they cannot coordinate the times when they can be down, so the demand is essentially "24 X Forever".

To support such high availability, the power and cooling systems must be both fault tolerant (able to sustain a failure in any component within the generation and delivery system) and concurrently maintainable (able to have any component tested, repaired, or replaced) without having to shut down any component of the computer system(s), as shown in Figure 2. A tier structure system categorizes the ability of data centers to meet these availability requirements [4].

The problem occurs when the power and cooling infrastructure cannot support the "24 X Forever" availability. In many data centers there might be redundant capability in certain components of both the power and cooling systems. However, in many the distribution path is single threaded, as shown in Figure 1. In the power system this might be non-redundant critical switchgear and/or a single distribution of power between the UPS and the PDUs. In the cooling system the non-redundant component is quite often the distribution piping. If a valve needs to be replaced, a leak repaired, or additional cooling equipment added, the cooling system has to be shut down.
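To see why a single-threaded distribution path undermines otherwise redundant equipment, consider the sketch below. It compares the steady-state availability of one power or cooling delivery chain against two independent chains; the component availability values are illustrative assumptions for this example, not figures from the paper, and the simple series/parallel formulas ignore common-cause failures and maintenance.

```python
# Minimal availability sketch with illustrative, assumed component values.
UPS_OR_CHILLER = 0.9999   # redundant generation equipment (assumed value)
SWITCHGEAR = 0.9995       # critical switchgear or plant controls (assumed value)
DISTRIBUTION = 0.9990     # one set of pumps, piping, and valves, or one PDU feed (assumed)

def series(*components):
    """Single-threaded chain: every component must work, so availabilities multiply."""
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(path_availability, paths=2):
    """Independent redundant paths: the system fails only if every path fails."""
    return 1.0 - (1.0 - path_availability) ** paths

single_path = series(UPS_OR_CHILLER, SWITCHGEAR, DISTRIBUTION)
dual_path = parallel(single_path, paths=2)

HOURS_PER_YEAR = 8760
for label, avail in (("single-threaded", single_path), ("dual-path", dual_path)):
    print(f"{label}: availability {avail:.6f}, "
          f"expected downtime {(1.0 - avail) * HOURS_PER_YEAR:.2f} h/yr")
```

Even this crude model understates the paper's point: a single-threaded path must also be shut down for planned maintenance, and in a "24 X Forever" environment a planned shutdown is itself an outage.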
In an environment where scheduled outages are not acceptable, due to real or perceived system availability demands, and the infrastructure cannot support that availability, the only "tolerated" outage is an unscheduled one. You fix it when it breaks and hope that does not happen too often.

There is yet another aspect to the infrastructure not meeting the IT demands for availability. This is where the power and cooling systems are not matched in their ability to support such an availability demand. Usually the power system is more robust than the cooling system. We have been concerned with uninterruptible power for 30 years or more, so there are many solutions available to supply the computer hardware with uninterruptible power that is also concurrently maintainable. The concern about cooling, however, is a more recent phenomenon, and many data centers have cooling systems with "aged" technology and designs. There is usually redundant cooling capacity on the raised floor, and quite often in the refrigeration and heat dissipation components. The weak link is the pumps, piping, and control valves: they are single threaded and cannot be changed or serviced with the cooling system operating.

The inefficiency in this latest scenario is the waste of capital expenditure. A great deal of money is spent on uninterruptible power, yet the cooling system cannot support such availability. It used to be that if power to the data center was lost and the UPS system kept the computer equipment operating, the heat loads were low enough that one could open the windows and doors, turn on the fans, and continue to run. With the high density of equipment and power dissipation that exists in many data centers today, that is not a viable option.

Another exposure is the inability to provide continuous cooling, which is defined as the capability to continue to cool the computer equipment in the case of a loss of power. This is the mechanical equivalent of the battery system on the UPS [5]. With today's high heat load environments, the time it takes a computer room, or a section thereof, to reach critical temperatures is limited. Data collected has shown that at a heat density of 40 Watts/ft² it takes 10 minutes for the room temperature to rise more than 25°F and exceed the manufacturers' maximum operating temperature [3]. At a heat density of 100 Watts/ft² that time is three to five minutes. Thermal modeling by a number of sources indicates that at 300 Watts/ft² the time has decreased to less than one minute.
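A rough energy balance shows why the ride-through time shrinks roughly in inverse proportion to the heat density. The sketch below is a back-of-envelope estimate, not the source of the figures above: the ceiling height, the air properties, and especially the lumped factor standing in for equipment and structural thermal mass are assumptions chosen only for illustration.

```python
# Back-of-envelope estimate of the time for room temperature to rise a given amount
# after cooling is lost. All constants below are illustrative assumptions.
AIR_DENSITY_LB_FT3 = 0.075        # approximate density of air near sea level
AIR_SPECIFIC_HEAT = 0.24          # Btu per lb per degree F
CEILING_HEIGHT_FT = 12.0          # assumed room height
EFFECTIVE_MASS_FACTOR = 4.0       # crude allowance for equipment/structural thermal mass
WATTS_TO_BTU_PER_MIN = 3.412 / 60.0

def minutes_to_rise(heat_density_w_ft2, delta_t_f=25.0):
    """Minutes for the space to warm by delta_t_f degrees F at a given W/ft^2 load."""
    heat_in = heat_density_w_ft2 * WATTS_TO_BTU_PER_MIN          # Btu/min per ft^2 of floor
    capacity = (AIR_DENSITY_LB_FT3 * AIR_SPECIFIC_HEAT *
                CEILING_HEIGHT_FT * EFFECTIVE_MASS_FACTOR)       # Btu/degF per ft^2 of floor
    return capacity * delta_t_f / heat_in

for density in (40, 100, 300):
    print(f"{density:>3} W/ft^2 -> roughly {minutes_to_rise(density):.1f} min for a 25 F rise")
```

Whatever the exact constants, the scaling is the point: tripling the heat density cuts the available time to roughly a third, which is why the lack of continuous cooling becomes such a serious exposure at high densities.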
The general response to this lack of a continuous cooling capability is the use of emergency backup power in the form of engine generators. "My gen