AN-04-9-1

Evolution of Data Center Environmental Guidelines

Roger R. Schmidt, Ph.D.    Christian Belady    Alan Claassen    Tom Davidson
Magnus Herrlin    Shlomo Novotny    Rebecca Perry

Roger Schmidt and Alan Claassen are with IBM Corp., San Jose, Calif. Christian Belady is with Hewlett-Packard, Richardson, Tex. Tom Davidson is with DLB Associates, Ocean, N.J. Magnus Herrlin is a telecom consultant at ANCIS Professional Services, San Francisco, Calif. Shlomo Novotny and Rebecca Perry are with Sun Microsystems, San Diego, Calif.

ABSTRACT

Recent trends toward increased equipment power density in data centers can result in significant thermal stress, with the undesirable side effects of decreased equipment availability, wasted floor space, and inefficient cooling system operation. In response to these concerns, manufacturers identified the need to provide standardization across the industry, and in 1998 a Thermal Management Consortium was formed. This was followed in 2002 by the creation of a new ASHRAE Technical Group to help bridge the gap between equipment manufacturers and facility designers and operators. "Thermal Guidelines for Data Processing Environments," the first publication of TC 9.9, is discussed in this paper, along with a historical perspective leading up to the publication and a discussion of issues that will define the roadmap for future ASHRAE activities in this field.

CURRENT INDUSTRY TRENDS/PROBLEMS/ISSUES

Over the years, computer performance has significantly increased, but unfortunately with the undesirable side effect of higher power. Figure 1 shows the National/International Technology Roadmap for Semiconductors projection for processor chip power. Note that between the years 2000 and 2005 the total power of the chip is expected to increase 60%, and the heat flux will more than double during this same period (the short sketch below works out the implied annual rates). This is only part of the total power dissipation, which increases geometrically. The new system designs, which include very efficient interconnects and high-performance data-bus designs, create a significant increase in memory and other device utilization, thus dramatically exceeding power dissipation expectations. As a result, significantly more emphasis has been placed on the cooling designs and power delivery methods within electronic systems over the past year. In addition, the new trend of low-end and high-end system miniaturization, dense packing within racks, and the increase in power needed for power conversion on system boards have caused an order of magnitude increase in rack power. Similarly, this miniaturization and increase in the power of electronics scales up into the data center environment. In fact, it was not until recently that the industry publicly recognized that the increasing density within the data center may have a profound impact on the reliability and performance of the equipment it houses in the future. For this reason, there has been a recent flurry of papers addressing the need for new room cooling technologies as well as modeling and testing techniques within the data center. All of these recognize that the status quo will no longer be adequate in the future.

Figure 1  Projection of processor power by the National/International Technology Roadmap for Semiconductors.

Figure 2  Equipment power projection (Uptime Institute).
So what are the resulting problems in the data center? Although there are many, the following list discusses some of the more relevant ones:

1. Power density is projected to go up. Figure 2 shows how rapidly machine power density is expected to increase in the next decade. Based on this figure, it can easily be projected that by the year 2010 server power densities will be on the order of 20,000 W/m². This exceeds what today's room cooling infrastructure can handle (the airflow sketch following this list gives a sense of the scale).

2. Rapidly changing business demands. Rapidly changing business demands are forcing IT managers to deploy equipment quickly. Their goal is to roll equipment in and power it on immediately. This means that there will be zero time for site preparation, which implies predictable system requirements (i.e., "plug and play" servers).

3. Infrastructure costs are rising. The cost of the data center infrastructure is rising rapidly, with current costs in excess of about $1,000/ft². For this reason, IT and facility managers want to obtain the most from their data center and maximize the utilization of their infrastructure. Unfortunately, there are many barriers to achieving this. First, airflow in the data center is often completely ad hoc. In the past, manufacturers of servers have not paid much attention to where the exhausts and inlets are in their equipment. This has resulted in situations where one server may exhaust hot air into the inlet of another server (sometimes in the same rack). In these cases, the data center needs to be overcooled to compensate for this inefficiency. In addition, a review of the industry shows that the environmental requirements of most servers from various manufacturers are all different, yet they all coexist in the same environment. As a result, the capacity of the data center needs to be designed for the worst-case server with the tightest requirements. Once again, the data center needs to be overcooled to maintain a problematic server within its operating range. Finally, data center managers want to install as many servers as possible in their facility to get as much production as possible per square foot. In order to do this they need to optimize their layout in a way that provides the maximum density for their infrastructure. The above cases illustrate situations that require overcapacity to compensate for inefficiencies.

4. There is no NEBS equivalent specification for data centers. (NEBS, Network Equipment-Building System, is the telecommunication industry's most adhered-to set of physical, environmental, and electrical standards and requirements for a central office of a local exchange carrier.) IT/facility managers have no common specification that drives them to speak the same language and design to a common interface document.
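To give a sense of why densities of this order strain conventional room cooling, the sketch below estimates the chilled-air flow a single high-density rack would require, using the standard sensible-heat relation (airflow = power / (density × specific heat × temperature rise)). The 20 kW rack load and 15 K air temperature rise are illustrative assumptions, not figures from this paper.

```python
# Rough chilled-air requirement for one high-density rack, from the
# sensible-heat relation: volumetric flow = power / (rho * cp * delta_T).
# Assumptions (illustrative, not from the paper): a ~20 kW rack and a
# 15 K inlet-to-exhaust air temperature rise.

RHO_AIR = 1.2         # air density, kg/m^3 (near sea level, ~20 C)
CP_AIR = 1005.0       # specific heat of air, J/(kg*K)
M3S_TO_CFM = 2118.88  # cubic meters per second -> cubic feet per minute

def required_airflow_m3s(power_w: float, delta_t_k: float) -> float:
    """Airflow (m^3/s) needed to carry away power_w at a delta_t_k rise."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

rack_power_w = 20_000.0   # assumed rack load
delta_t_k = 15.0          # assumed air temperature rise across the rack

flow = required_airflow_m3s(rack_power_w, delta_t_k)
print(f"~{flow:.2f} m^3/s (about {flow * M3S_TO_CFM:,.0f} CFM) per rack")
# -> ~1.11 m^3/s (about 2,343 CFM) per rack
```

Even under these mild assumptions, a single rack calls for on the order of 2,300 CFM of chilled air, far more than a single conventional perforated raised-floor tile typically delivers, which is why overcooling and layout optimization recur throughout the list above.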
The purpose of this paper is to review what started as a "grassroots" industry-wide effort to address the above problems and later evolved into an ASHRAE technical committee. This committee then developed "Thermal Guidelines for Data Processing Environments" (ASHRAE 2003a), which will be reviewed in this paper.

HISTORY OF INDUSTRY SPECIFICATIONS

Manufacturers' Environmental Specifications

In the late 1970s and early 1980s, data center site planning consisted mainly of determining whether the power was clean (not connected to the elevator or the coffee pot), had an isolated ground, and would be uninterrupted should the facility experience a main power failure. The technology of the power delivered to the equipment was considered the problem to be solved, not the power density. Other issues concerned the types of plugs, which varied widely for some of the larger computers. In some cases, cooling was considered a pro