ASHRAE LO-09-013 (2009): High Performance Computing with High Efficiency

Steve Greenberg, PE, Associate Member ASHRAE; Amit Khanna; William Tschudi, PE, Member ASHRAE

Steve Greenberg is an energy management engineer and William Tschudi is a program manager in the Building Technologies program at the Lawrence Berkeley National Laboratory, Berkeley, CA. Amit Khanna is a senior consultant at Arup North America Ltd., San Francisco, CA.

LO-09-013 © 2009, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Published in ASHRAE Transactions 2009, vol. 115, part 2. For personal use only. Additional reproduction, distribution, or transmission in either print or digital form is not permitted without ASHRAE's prior written permission.

ABSTRACT

High-performance scientific computing typically involves many "clusters" of processors that are closely connected. This results in high energy dissipation in tightly compacted areas, creating high heat intensity. As these "machines" continue to evolve, cooling requirements become more challenging, and the total electrical power requirements come to resemble those of large industrial facilities more than typical buildings. Compounding the complexity of the HVAC design is the fact that these computers may be designed for air or liquid cooling. A new computational facility under design for the University of California in Berkeley, CA is such a center, designed to accommodate either air or liquid cooling. This paper describes the unique design features of this center, whose goals included being both a model of high-performance computing and a showcase for energy efficiency. The mild climate in Berkeley provides an ideal opportunity to minimize energy use through free cooling, but traditional data center approaches could not fully take advantage of the mild climate to save energy. A design that utilizes outside air for cooling for all but a few hundred hours per year is described. In addition, there was a desire to provide for an eventual transition to liquid cooling in various possible configurations. This capability is also described.

INTRODUCTION

A new supercomputer facility at the University of California, the Computational Research and Theory Facility (CRTF), was designed to incorporate energy-efficiency strategies while providing flexibility for a wide range of supercomputer cooling strategies. One of the primary goals of this facility was to provide a design that demonstrated not only world-class computational ability but also best practices and novel solutions to the energy use of high-performance computers. This one building, with office space, computer room, and infrastructure, is expected to more than double the energy use of the campus with which it is associated. An arbitrary power budget of 7.5 MW for the initial buildout and 17 MW for the ultimate buildout was established by management decision. Since the computing sciences group that will occupy the building is judged on computational output, there was strong incentive to maximize the amount of energy available for computational work and to minimize the infrastructure loading.

As a result, a design target for data center infrastructure efficiency (DCiE) was established, setting the ratio of IT energy to total facility energy use at 0.83 (equivalently, a Power Usage Effectiveness (PUE) target of 1.2, PUE being the inverse of DCiE). With this goal, the design team was challenged to seek out and implement best practices and push the limits of existing technologies. As can be seen in Figure 1, this would clearly place this center above the centers previously benchmarked by LBNL. While the design is not finalized, DCiE for peak power is predicted to be in the range of 0.83 to 0.90, and DCiE for energy is predicted to be in the range of 0.90 to 0.95.
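The DCiE/PUE relationship above is simple arithmetic; the short sketch below (Python, with illustrative power figures rather than measured CRTF values) shows how the 0.83 target and the 1.2 PUE are two views of the same ratio.

```python
# Minimal sketch of the DCiE/PUE relationship; the power figures are
# illustrative, not measured values from the CRTF design.

def dcie(it_power_kw: float, total_facility_power_kw: float) -> float:
    """Data center infrastructure efficiency: IT power over total facility power."""
    return it_power_kw / total_facility_power_kw

def pue(it_power_kw: float, total_facility_power_kw: float) -> float:
    """Power usage effectiveness: the inverse of DCiE."""
    return total_facility_power_kw / it_power_kw

# Example: if 6,225 kW of a 7,500 kW initial buildout reaches IT equipment,
# DCiE = 0.83 and PUE = 1.2, matching the design target described above.
if __name__ == "__main__":
    it, total = 6225.0, 7500.0
    print(f"DCiE = {dcie(it, total):.2f}, PUE = {pue(it, total):.2f}")
```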

FLEXIBILITY FOR AIR OR LIQUID COOLING

A key design concept for the CRTF is to accommodate many generations of supercomputers over the course of several decades. While air is the dominant cooling scheme for such machines at present (and thus the first iteration of computers is most likely to be air-cooled), there is a general trend in the industry toward liquid cooling as power densities increase (ASHRAE 2008), and it is anticipated that future equipment will be largely or completely liquid-cooled. The lower-density areas housing memory and network equipment will likely be air-cooled for a longer period than the scientific computing machines. Thus it is imperative to maintain flexibility throughout the facility to cool the IT equipment with air, liquid, or a combination. Adding to the challenge is maintaining this flexibility with maximum energy efficiency and minimum first cost.

USE OF LARGE EFFICIENT AIR HANDLERS

For air cooling, the required airflow is determined primarily by the IT equipment, as well as by how effectively the flow is managed (see the "Air Management" section below). For a given flow, the power and energy requirements of the air-handling equipment are determined by the fan and motor efficiencies and by the total pressure drop in the system. All of these strategies are facilitated in the CRTF design by using central AHUs.

The AHUs are located on a level below the computer floor, which frees up expensive raised-floor space and allows for maximum IT placement flexibility in the high-performance computing (HPC) space (see Figure 2). The AHU configuration is modular, with unit sizes of 100,000 cfm (2800 m3/min) each, and each 20 ft (6.1 m) wide bay can be equipped with one or two AHUs. (This is a maximum flow, adjusted to meet load using variable-speed fans.) If a bay requires two AHUs, they will be vertically stacked in the basement area. The ductwork from the AHU(s) in each bay feeds supply air into the 4 ft (1.2 m) high raised-floor plenum in multiple locations (Figure 3). The air is then delivered to the IT equipment either through a typical cold-aisle arrangement or directly into the bottoms of the racks, depending on the computer design. The hot air discharged from the equipment is either exhausted via exhaust fans located high on the east wall of the HPC, returned to the AHUs through ductwork down the west wall of the HPC, or, most commonly, a combination of the two (see the Air-Side Economizer and Evaporative Cooling section below). The modular AHU scheme allows maximum flexibility in initial and staged construction in terms of supplying the right amount of air to the right place with a minimum of excess capacity.

The large cross-sectional area of the AHUs results in low face velocities across the filters, cooling coils, and evaporative cooling media of approximately 500 fpm (150 m/min). Supply air from the air handlers is delivered to the plenum via short ductwork designed for a maximum of 1500 fpm (450 m/min) at full design flow. These velocities, careful attention to other internal AHU pressure drops, and low pressure drops in the ductwork and air distribution result in a total initial static pressure of 1.5 in. w.g. (380 Pa) at design flow (Table 1). The user group understands the value of timely replacement of filters and direct media pads and is committed to following an appropriate maintenance schedule.

Figure 1: LBNL benchmark results.
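As a rough illustration of how flow, static pressure, and efficiencies combine into fan power, the sketch below applies the basic fan-power relation to one 100,000 cfm AHU at the 1.5 in. w.g. (380 Pa) design static pressure. The fan and motor efficiencies are assumptions for illustration only; the paper does not state them.

```python
# Rough sketch of the fan-power arithmetic implied above: electrical power
# scales with airflow times total static pressure, divided by fan and motor
# efficiencies. Efficiencies below are assumed, not taken from the paper.

CFM_TO_M3S = 0.000471947  # 1 cfm in m^3/s

def fan_electrical_power_kw(flow_cfm: float, static_pa: float,
                            fan_eff: float = 0.65, motor_eff: float = 0.93) -> float:
    """Electrical input power for a fan moving flow_cfm against static_pa."""
    flow_m3s = flow_cfm * CFM_TO_M3S
    air_power_w = flow_m3s * static_pa          # ideal (air) power, W
    return air_power_w / (fan_eff * motor_eff) / 1000.0

# One 100,000 cfm AHU at the 1.5 in. w.g. (380 Pa) design static pressure:
print(f"{fan_electrical_power_kw(100_000, 380):.1f} kW per AHU (illustrative)")
```

Because power is proportional to the static pressure, the low face and duct velocities described above translate directly into lower fan energy for the life of the facility.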

MODULAR DESIGN

To help achieve an energy-efficient design and control capital cost, a modular approach to the design was incorporated. This approach provides the desired flexibility for future uncertainty while allowing systems to operate more efficiently. Space and other design provisions were made so that, as the facility load increases, additional capacity can easily be added. This approach reduced the first cost of the facility and also allowed components to be sized to better match the load requirements.

LIQUID COOLING CAPABILITY

Because the industry is moving toward liquid cooling for IT equipment, the CRTF is designed to accommodate the distribution of cooling water for direct or indirect use at or in the computer racks. A four-pipe distribution scheme is planned, including chilled water from the chiller plant (using water-cooled, electrically driven centrifugal chillers) and closed-loop cooling water ("treated water") from the cooling towers (via plate-and-frame heat exchangers). Mixing valves will allow individual computing systems to use 100% chilled water, 100% treated water, or anything in between, as needed to satisfy the entering water temperature requirement (see Figure 4). Chilled water and treated water temperature setpoints and reset schedules will be established to meet requirements in the most energy-efficient manner.

Since no water-cooled IT equipment is anticipated in the initial configuration of the CRTF, the treated water system will be accommodated by appropriate headers, valves, blank-off plates, and space for pipe runs. The chilled water system will initially run only to the AHUs, but taps with valves and blank-off plates will also be installed for future water-cooling requirements.
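The blending performed by the mixing valves can be sketched as a simple mixing balance: the fraction of chilled water needed so that the blended supply reaches a target entering-water temperature. The temperatures in the example below are assumptions for illustration; the paper does not give specific setpoints.

```python
# Sketch of the mixing-valve blend implied by the four-pipe scheme: the
# fraction of chilled water needed so the blended supply hits a target
# entering-water temperature, assuming equal flows have equal specific heat.

def chilled_fraction(t_target_c: float, t_chw_c: float, t_treated_c: float) -> float:
    """Fraction of chilled water in the blend (0 = all treated, 1 = all chilled)."""
    if t_treated_c <= t_target_c:
        return 0.0                      # treated water alone is cold enough
    frac = (t_treated_c - t_target_c) / (t_treated_c - t_chw_c)
    return min(max(frac, 0.0), 1.0)     # clamp to a physically valid blend

# Example (assumed temperatures): 18 C target supply, 7 C chilled water,
# 21 C tower-side treated water.
print(f"chilled-water fraction = {chilled_fraction(18.0, 7.0, 21.0):.2f}")
```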

Table 1. Initial Pressure Drops Across Major HVAC Components

Component                        Initial Pressure Drop, in. w.g. (Pa)
OA louvers                       0.15 (38)
OA dampers                       0.20 (50)
Filters                          0.35 (90)
Direct evaporative media pad     0.25 (62)
CHW coil                         0.25 (62)
Ductwork + plenum + outlets      0.30 (75)
Total                            1.50 (380)

Figure 2: Building section showing the AHUs located below the computer floor, the exhaust air path (return is to the left and down to the AHUs), and the office floors above.

Figure 3: 3-D image illustrating air movement from the outside-air louvers through to the underfloor plenum via multiple 3 ft x 8 ft (0.9 m x 2.4 m) penetrations in the structural slab. Design documents were produced using 3-D Revit models for better coordination between disciplines. Source: Arup.

Figure 4: Four-pipe cooling water system. The chilled water supply and return are the solid lines; the dashed lines are the future closed-loop cooling water supply and return (via cooling towers and heat exchangers; see Figure 7).
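As a quick check of the arithmetic in Table 1, the sketch below sums the component pressure drops in both unit systems; the stated 380 Pa total reflects rounding of the individual metric entries.

```python
# Component pressure drops from Table 1 as (in. w.g., Pa) pairs.
DROPS = {
    "OA louvers": (0.15, 38),
    "OA dampers": (0.20, 50),
    "Filters": (0.35, 90),
    "Direct evaporative media pad": (0.25, 62),
    "CHW coil": (0.25, 62),
    "Ductwork + plenum + outlets": (0.30, 75),
}

total_inwg = sum(inwg for inwg, _ in DROPS.values())
total_pa = sum(pa for _, pa in DROPS.values())
print(f"Total: {total_inwg:.2f} in. w.g. ({total_pa} Pa)")  # 1.50 in. w.g., ~380 Pa
```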

PART-LOAD MODULATION

The CRTF load is expected to grow from an initial load of 7.5 MW to at least 17 MW over the course of several years. The load will also vary over shorter time periods as computing systems are added, changed, and turned on and off for maintenance. In addition, weather variation will result in diurnal and seasonal load changes. It is key to the operation of the facility that all of these load variations be met in a way that provides uninterrupted service but modulates efficiently. To that end, the cooling plant will be modular, and all of the significant loads in the plant and system (tower fans; chiller compressors; chilled, tower, and treated water pumps; and AHU and exhaust fans) are designed with variable-frequency drives. Part-load curves will be integrated into the building automation system so that overall energy and power use are minimized at any combination of cooling load and outdoor conditions.
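The part-load benefit of variable-frequency drives can be illustrated with the standard fan/pump affinity laws, under which shaft power falls roughly with the cube of speed. The cube-law curve below is a textbook approximation, not the actual part-load curves to be loaded into the CRTF building automation system.

```python
# Illustrative sketch of why variable-frequency drives pay off at part load:
# by the standard affinity laws, shaft power scales roughly with the cube of
# speed (and hence flow). Real equipment curves deviate from this ideal.

def affinity_power_fraction(flow_fraction: float) -> float:
    """Approximate shaft-power fraction at a given flow fraction (cube law)."""
    return flow_fraction ** 3

for flow in (1.0, 0.8, 0.6, 0.5):
    print(f"{flow:.0%} flow -> ~{affinity_power_fraction(flow):.0%} of full-load power")
```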

ENVIRONMENTAL CONDITIONS

The project team debated whether the ASHRAE TC 9.9 recommended environmental conditions (ASHRAE 2004) could be used as a design basis for the facility, since some of the supercomputers on the market required more stringent conditions. To resolve whether the ASHRAE recommended ranges could be specified, a workshop was held with all of the major supercomputer vendors, at which all of the vendors agreed to the use of the recommended ranges. Subsequent to this meeting, the TC 9.9 committee voted to broaden the recommended ranges even further. With these assurances, the team agreed to use a maximum of 77°F (25°C) as the design temperature at the inlet of the IT equipment. Of course, for much of the year in Berkeley, with outside air used for cooling, the temperatures can be lower than 77°F (25°C). A broad design humidity range was also established: 30% to 60% RH at the inlet to the IT equipment.

AIR-SIDE ECONOMIZER AND EVAPORATIVE COOLING

The location of the CRTF in Berkeley, California (across the bay from San Francisco), together with the design indoor conditions, allows nearly all of the air cooling to be provided by outside air. The CRTF design indoor conditions are 60°F to 77°F (16°C to 25°C) dry-bulb and 30% to 60% RH, as noted above. Because the facility needs to be able to meet the indoor conditions at all times, outdoor temperature extremes (beyond the normal summer design temperature) were assumed, with a 100°F (38°C) dry-bulb and 65°F (18°C) coincident wet-bulb chosen as the design condition.

Given the above design conditions, analysis of the psychrometric data (see Figure 5) shows that the system can meet the requirements by operating in one of four modes, as noted in Table 2. For over 90% of the hours in a year, the indoor conditions can be met by mixing outside and return air (the psychrometric process for this mode is shown by the arrows in Figure 5; to first order it follows lines of constant absolute humidity, since there is negligible latent load in the HPC). Direct evaporative cooling (with a mix of return air as needed) brings the humidity into the proper range when outdoor conditions are too dry, which occurs less than 1% of the year, as does the condition where combined use of direct evaporative cooling and the chilled-water cooling coil is indicated. Approximately 500 hours per year require the chilled-water coil alone.
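The mode selection can be pictured as a simple decision function over outdoor conditions. The thresholds in the sketch below are placeholders standing in for the psychrometric boundaries of Figure 5 and Table 2, which are not reproduced in this text; an actual control sequence would work from the full dry-bulb, wet-bulb, and humidity limits.

```python
# Simplified sketch of the four operating modes described above. Thresholds
# are placeholders for the Figure 5 / Table 2 psychrometric boundaries.

MAX_SUPPLY_DB_F = 77.0   # design IT-inlet dry-bulb limit
MIN_RH_PCT = 30.0        # design lower humidity limit at the IT inlet

def cooling_mode(outdoor_db_f: float, outdoor_wb_f: float, outdoor_rh_pct: float) -> str:
    if outdoor_db_f <= MAX_SUPPLY_DB_F and outdoor_rh_pct >= MIN_RH_PCT:
        return "mix outside and return air"                   # >90% of hours
    if outdoor_db_f <= MAX_SUPPLY_DB_F:
        return "direct evaporative cooling + return air mix"  # too dry, <1% of hours
    if outdoor_wb_f <= MAX_SUPPLY_DB_F - 10.0:                # evap still does useful work
        return "direct evaporative + chilled-water coil"
    return "chilled-water coil alone"                         # roughly 500 h/yr

# Example: a typical mild Berkeley hour.
print(cooling_mode(outdoor_db_f=68, outdoor_wb_f=58, outdoor_rh_pct=45))
```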

By using a wetted-media type of humidification (using the sensible heat in either the outside air or the return air to evaporate the water), the CRTF avoids the energy use of steam or infrared humidifiers. A direct-spray system was considered, but avoiding the extra pressure drop caused by the wetted media did not justify the first and operating costs of the reverse-osmosis or deionization system required to provide make-up water for the direct-spray system. Figure 6 illustrates the life-cycle cost performance of the spray-nozzle strategy compared to pad media for the CRTF.

Other alternative strategies were explored, such as floor-mounted and plenum-located humidifiers, but these were discarded due to concerns about non-uniform distribution and the owner's preference to keep the plenum and floor clear for maintenance, accessibility, and flexibility. It must be noted that the facility will have multiple supercomputers installed (with a variety of rack configurations) at any given time, and these will be replaced by new-generation supercomputers roughly every 5 years, so flexible use of the space is very important.
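The comparison in Figure 6 is a life-cycle cost calculation; a generic sketch of that kind of comparison is below. All costs, the discount rate, and the study period are hypothetical placeholders, not the values behind Figure 6.

```python
# Generic life-cycle cost comparison of the kind shown in Figure 6.
# First costs and annual operating costs are hypothetical placeholders.

def life_cycle_cost(first_cost: float, annual_cost: float,
                    years: int = 15, discount_rate: float = 0.05) -> float:
    """First cost plus the present value of annual operating costs."""
    pv_factor = sum(1.0 / (1.0 + discount_rate) ** y for y in range(1, years + 1))
    return first_cost + annual_cost * pv_factor

# Hypothetical: pad media carries extra fan energy; spray carries an RO/DI plant.
media_pad = life_cycle_cost(first_cost=150_000, annual_cost=40_000)
spray_ro = life_cycle_cost(first_cost=400_000, annual_cost=25_000)
print(f"pad media LCC ~ ${media_pad:,.0f}, spray + RO/DI LCC ~ ${spray_ro:,.0f}")
```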

WATER-SIDE ECONOMIZER

When water-based IT cooling is implemented at the CRTF, close-approach cooling towers and plate-and-frame heat exchangers will be used to supply as much of the cooling as possible without operating the chillers. We anticipate that most of the cooling will be provided without the chillers, though until the IT equipment cooling requirements are known, no prediction can be made (Figure 7).

AIR MANAGEMENT

In order to reduce fan energy and ensure adequate cooling for the high-intensity computing equipment, it is necessary to separate hot and cold air streams.

