ASHRAE LO-09-013-2009: High Performance Computing with High Efficiency

Steve Greenberg, PE; Amit Khanna; William Tschudi, PE

Steve Greenberg is an energy management engineer and William Tschudi is a program manager in the Building Technologies program at the Lawrence Berkeley National Laboratory, Berkeley, CA. Amit Khanna is a senior consultant at Arup North America Ltd., San Francisco, CA.

LO-09-013. © 2009, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Published in ASHRAE Transactions 2009, vol. 115, part 2. For personal use only. Additional reproduction, distribution, or transmission in either print or digital form is not permitted without ASHRAE's prior written permission.
ABSTRACT

High Performance Scientific Computing typically involves many "clusters" of processors that are closely connected. This results in high energy dissipation in tightly compacted areas, creating high heat intensity. As these "machines" continue to evolve, cooling requirements become more challenging, and the total electrical power requirements more resemble those of large industrial facilities than of typical buildings. Compounding the complexity of the HVAC design is the fact that these computers may be designed for air or liquid cooling. A new computational facility under design for the University of California in Berkeley, CA is such a center, being designed to accommodate either air or liquid cooling.

This paper describes the unique design features of this center, whose goals included being both a model of high performance computing and a showcase for energy efficiency. The mild climate in Berkeley provides an ideal opportunity to minimize energy use through free cooling, but traditional data center approaches could not fully take advantage of the mild climate to save energy. A design that utilizes outside air for cooling for all but a few hundred hours per year is described. In addition, there was a desire to provide for the eventual transition to liquid cooling in various possible configurations. This capability is also described.

INTRODUCTION

A new supercomputer facility at the University of California, the Computational Research and Theory Facility (CRTF), was designed to incorporate energy efficiency strategies while providing flexibility for a wide range of supercomputer cooling strategies. One of the primary goals of this facility was to provide a design that not only demonstrated world-class computational ability but also demonstrated best practices and novel solutions to the energy use of high performance computers. This one building, with office space, computer room, and infrastructure, is expected to more than double the energy use of the campus with which it is associated. An arbitrary power budget of 7.5 MW for the initial buildout and 17 MW for the ultimate buildout was established by management decision. Since the computing sciences group that will occupy the building is judged on computational output, there was strong incentive to maximize the amount of energy available for computational work and to minimize the infrastructure loading.

As a result, a design target for data center infrastructure efficiency (DCiE) was established. This set the target ratio of IT energy to total facility energy use at 0.83 (the corresponding Power Usage Effectiveness (PUE), the inverse of DCiE, is 1.2). With this goal, the design team was challenged to seek out and implement best practices and push the limits of existing technologies. As can be seen in Figure 1, this would clearly place this center above the centers previously benchmarked by LBNL. While the design is not finalized, DCiE for peak power is predicted to be in the range of 0.83–0.90, and the DCiE for energy is predicted to be in the range of 0.90–0.95.
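To make these metrics concrete, the short sketch below illustrates the arithmetic relating DCiE, PUE, and the power budgets quoted above. It is an illustration only, not a calculation from the paper, and it assumes that the 7.5 MW and 17 MW budgets refer to total facility power.

```python
# Illustrative only: relationship between DCiE, PUE, and IT power budget.
# DCiE = IT power / total facility power;  PUE = 1 / DCiE.

def pue_from_dcie(dcie: float) -> float:
    """Power Usage Effectiveness is the inverse of DCiE."""
    return 1.0 / dcie

def it_power_mw(total_facility_mw: float, dcie: float) -> float:
    """IT power that fits under a total facility budget at a given DCiE."""
    return total_facility_mw * dcie

TARGET_DCIE = 0.83                 # design target stated in the text
for budget_mw in (7.5, 17.0):      # initial and ultimate buildout budgets
    print(f"Budget {budget_mw:>4.1f} MW: PUE = {pue_from_dcie(TARGET_DCIE):.2f}, "
          f"IT power ~ {it_power_mw(budget_mw, TARGET_DCIE):.1f} MW")
```

Under that assumption, the target DCiE of 0.83 would leave roughly 6.2 MW and 14.1 MW, respectively, available for IT load under the two budgets.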
FLEXIBILITY FOR AIR OR LIQUID COOLING

A key design concept for the CRTF is to accommodate many generations of supercomputer over the course of several decades. While air is the dominant cooling scheme for such machines at present (and thus the first iteration of computers is most likely to be air-cooled), there is a general trend in the industry toward liquid cooling as power densities increase (ASHRAE 2008), and it is anticipated that future equipment will be largely or completely liquid-cooled. The lower density areas housing memory and network equipment will likely be air-cooled for a longer period than the scientific computing machines. Thus it is imperative to maintain flexibility throughout the facility to cool the IT equipment with air, liquid, or a combination. Adding to the challenge is the need to maintain this flexibility with maximum energy efficiency and minimum first cost.

USE OF LARGE EFFICIENT AIR HANDLERS

For air cooling, the required airflow is determined primarily by the IT equipment, as well as by how effectively the flow is managed (see "Air Management" section below). With a given flow, the power and energy requirements of the air-handling equipment are determined by the fan and motor efficiencies and by the total pressure drop in the system. All of these strategies are facilitated in the CRTF design by using central AHUs.
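The two relationships in the preceding paragraph can be sketched as follows. The load, temperature rise, and efficiency values are assumed placeholders rather than CRTF design data; the 1.5 in. w.c. static pressure simply matches the design value reported later in this section.

```python
# Illustrative sizing sketch; all inputs are assumed values, not CRTF design data.

RHO_CP_AIR = 1.08          # Btu/(h*cfm*F), sea-level approximation for air

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to carry away the IT heat load at a given air temperature rise."""
    load_btuh = it_load_kw * 3412.0          # kW -> Btu/h
    return load_btuh / (RHO_CP_AIR * delta_t_f)

def fan_power_kw(cfm: float, static_in_wc: float,
                 fan_eff: float = 0.65, motor_eff: float = 0.93) -> float:
    """Fan electrical power: air power (cfm x in. w.c. / 6356 -> hp) over efficiencies."""
    air_hp = cfm * static_in_wc / 6356.0
    return air_hp * 0.746 / (fan_eff * motor_eff)

if __name__ == "__main__":
    cfm = required_cfm(it_load_kw=1000.0, delta_t_f=20.0)   # assumed 1 MW load, 20 F rise
    print(f"Required airflow ~ {cfm:,.0f} cfm")
    print(f"Fan power ~ {fan_power_kw(cfm, static_in_wc=1.5):.0f} kW")
```

The sketch shows why both low system pressure drop and high fan and motor efficiency matter: fan power scales directly with the pressure drop and inversely with the product of the efficiencies.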
The AHUs are located on a level below the computer floor, which frees up expensive raised-floor space and allows for maximum IT placement flexibility in the high-performance computing (HPC) space. See Figure 2.

The AHU configuration is modular, with unit sizes of 100,000 cfm (2800 m³/min) each, and each bay (20 ft (6.1 m) wide) can be equipped with one or two AHUs. (This is a maximum flow, adjusted to meet load using variable-speed fans.) If a bay requires two AHUs, they will be vertically stacked in the basement area. The ductwork from the AHU(s) in each bay feeds supply air into the 4 ft (1.2 m) high raised-floor plenum in multiple locations (Figure 3). The air is then delivered to the IT equipment either through a typical cold-aisle arrangement or directly into the bottoms of the racks, depending on the computer design. The hot air discharged from the equipment is either exhausted via exhaust fans located high on the east wall of the HPC, returned to the AHUs through ductwork down the west wall of the HPC, or, most commonly, a combination of the two (see Air-Side Economizer and Evaporative Cooling section below). The modular AHU scheme allows maximum flexibility in initial and staged construction in terms of supplying the right amount of air to the right place with a minimum of excess capacity.

The large cross-sectional area of the AHUs results in low face velocities at the filters, cooling coils, and evaporative cooling media of approximately 500 fpm (150 m/min). Supply air from the air handlers is delivered to the plenum via short ductwork designed for a maximum of 1500 fpm (450 m/min) at full design flow. These velocities, careful attention to other internal AHU pressure drops, and low pressure drops in the ductwork and air distribution result in a total initial static pressure of 1.5 in. w.c. (380 Pa) at design flow (Table 1). The user group understands the value of timely replacement of filters and direct media pads and is committed to follow an appropriate maintenance
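As a quick arithmetic check on these velocities (an illustration added here, not a figure from the paper), the snippet below back-calculates the face area and supply duct cross-section implied by one 100,000 cfm AHU at the stated velocity limits.

```python
# Back-of-envelope check of the stated AHU velocities; illustrative only.

AHU_FLOW_CFM = 100_000      # per-AHU design flow stated in the text
FACE_VEL_FPM = 500          # approximate filter/coil/media face velocity
DUCT_VEL_FPM = 1500         # maximum supply ductwork velocity

face_area_ft2 = AHU_FLOW_CFM / FACE_VEL_FPM     # cfm / fpm -> ft^2
duct_area_ft2 = AHU_FLOW_CFM / DUCT_VEL_FPM

print(f"Implied coil/filter face area per AHU: {face_area_ft2:.0f} ft^2 "
      f"({face_area_ft2 * 0.0929:.0f} m^2)")
print(f"Implied supply duct cross-section per AHU: {duct_area_ft2:.0f} ft^2 "
      f"({duct_area_ft2 * 0.0929:.1f} m^2)")
```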