ASHRAE OR-10-013-2010 Data Centers' Energy Auditing and Benchmarking - Progress Update

© 2010, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Published in ASHRAE Transactions 2010, Vol. 116, Part 1. For personal use only. Additional reproduction, distribution, or transmission in either print or digital form is not permitted without ASHRAE's prior written permission.

ABSTRACT

This paper presents a summary of the energy audit and optimization studies conducted on more than 40 data centers. A comparison of data center energy efficiency metrics is presented. Those metrics include energy utilization metrics, such as the Power Usage Effectiveness (PUE), Data Center infrastructure Efficiency (DCiE), mechanical PUE, and electrical PUE, and thermal or air management metrics, such as the bypass and recirculation air flow ratios. Additionally, the percentages of cooling system, fan, UPS loss, and lighting power relative to total data center power were analyzed and presented. The impact of climate zone, as well as of the operational load density compared to the design load density, was considered as well. These metrics incorporate and integrate the major factors that decrease the effectiveness of computer room air cooling and of the entire data center infrastructure. The energy utilization metrics determine the extent of the efficiencies of the data center's supporting mechanical and electrical infrastructures. Interestingly, the database indicated that small data centers …

[Figure 2: Annual average PUE; Raised Floor Area (RFA) 30,000 ft²+: 2.1]

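The utilization metrics named in the abstract are simple power ratios. The following is a minimal sketch, assuming sub-metered facility, IT, cooling, and electrical-overhead powers, and assuming the common decomposition in which mechanical and electrical PUE are the respective overheads divided by IT power; the excerpt does not give the paper's exact formulas, so the decomposition and all numbers are illustrative.

```python
# Minimal sketch of the energy-utilization metrics named in the abstract.
# Assumptions: sub-metered powers in kW; mechanical/electrical PUE are
# taken here as overhead power over IT power, a common decomposition that
# the excerpt does not spell out.

def pue(total_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_kw / it_kw

def dcie(total_kw: float, it_kw: float) -> float:
    """Data Center infrastructure Efficiency: IT power / total power (= 1/PUE)."""
    return it_kw / total_kw

def pue_mechanical(cooling_kw: float, it_kw: float) -> float:
    """Cooling (chillers, CRAC/CRAH fans, pumps, towers) power per kW of IT."""
    return cooling_kw / it_kw

def pue_electrical(elec_loss_kw: float, it_kw: float) -> float:
    """Electrical overhead (UPS and distribution losses, lighting) per kW of IT."""
    return elec_loss_kw / it_kw

if __name__ == "__main__":
    # Hypothetical audit readings for one facility.
    total, it, cooling, elec = 2100.0, 1000.0, 800.0, 300.0
    print(f"PUE  = {pue(total, it):.2f}")                         # 2.10
    print(f"DCiE = {dcie(total, it):.2%}")                        # 47.62%
    print(f"mechanical PUE = {pue_mechanical(cooling, it):.2f}")  # 0.80
    print(f"electrical PUE = {pue_electrical(elec, it):.2f}")     # 0.30
```

With these readings the components add up as expected: PUE = 1 + 0.80 + 0.30 = 2.10.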
…growth rate, and the expected PUE for the second year is projected to be 2.7.

The impact of climate zone on data center energy efficiency is another important dimension. The dependence on climate zone is presented in Figure 7. Climate zones were expressed in terms of cooling degree days (CDD). Cold climate zones (low CDD) cause the mechanical cooling systems to operate more efficiently and offer potential for "free cooling," directly via an air economizer or indirectly via a water-side economizer, and hence a substantial reduction in mechanical power consumption and an improved PUE. Hot climate zones (high CDD) were observed to have cooling systems operating at full capacity, and therefore energy conservation strategies such as variable frequency drives may not necessarily be beneficial. Additionally, economizers are typically not utilized in these climates. As an example, for a data center in Phoenix (climate 2B), limited options can be considered to lower the PUE compared to a data center in San Francisco (3C).

[Figure 3: Annual average PUE, mechanical]
[Figure 4: Annual average PUE, electrical]

The high ambient temperature year round in Phoenix makes it more difficult for the mechanical cooling system to make use of energy reduction strategies such as economizers and variable frequency drives. In contrast, data centers in colder climates will have less power consumed by the mechanical cooling system, due to the higher efficiency of the components at cooler outdoor temperatures. Cooler climates also offer more energy efficiency measures to implement than hot climates. For example, in San Francisco, air economizers have been implemented successfully to cool IT equipment with outdoor air for more than 7000 h per year. In the Chicago area (5A), water-side economizers, for example, can be used to eliminate chiller power consumption for at least 25% of the year, resulting in an overall reduction of the data center total power, or PUE, of 5 to 12%, depending on a variety of factors.

Figure 7 shows that the data collected from the 40 data centers are scattered and, due to poor correlation, no clear determination of the impact of the climate zone is identified. Mathematically, the correlation of the data shows dependence on the climate, as indicated by the trend line.

[Figure 5: Variation in data center energy use]
[Figure 6: PUE as a function of data center growth]
[Figure 7: Cooling system power consumption as a function of cooling degree days (CDD)]

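The Chicago example lends itself to a back-of-the-envelope check. Below is a minimal sketch assuming the chiller's share of total facility power and the economizer hours are the only inputs; all values are hypothetical, and the paper's 5 to 12% range reflects factors this toy model ignores, such as part-load behavior and pump and tower energy while on the economizer.

```python
# Back-of-the-envelope sketch of the water-side economizer arithmetic in
# the Chicago example above. Inputs are hypothetical.

def pue_with_economizer(pue_baseline: float,
                        chiller_share_of_total: float,
                        hours_off_fraction: float) -> float:
    """New annual PUE if the chiller (a given share of total facility power)
    is eliminated for a fraction of the year, all else held constant."""
    reduction = chiller_share_of_total * hours_off_fraction
    return pue_baseline * (1.0 - reduction)

baseline = 2.2        # annual average PUE before the retrofit (assumed)
chiller_share = 0.25  # chiller power as a share of total power (assumed)
free_cooling = 0.25   # chiller off for at least 25% of the year (from the text)

new_pue = pue_with_economizer(baseline, chiller_share, free_cooling)
print(f"PUE: {baseline} -> {new_pue:.2f} "
      f"({(baseline - new_pue) / baseline:.1%} total-power reduction)")
# With these inputs the total-power reduction is about 6%, inside the
# 5 to 12% range reported above.
```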
Since most of the obtained data are from legacy data centers that did not implement free cooling, one may expect the dependence on climate not to be drastic. To quantify the data further, it is worth noting that the most efficient data centers were found in climate 6A.

The UPS losses as a percentage of the total data center power are shown in Figure 8. The percentage is expected to be higher for more redundant systems. For example, for a reserve redundant UPS with an automatic static transfer switch (RR w/ ASTS), one may load the UPS up to 100% of its nameplate rating, compared to 75% for an N+1 (4 modules) UPS system, 50% for a 2N system, and only 37.5% for a 2(N+1) with ASTS system. As is already known, the higher the load factor (% loading) of the UPS system, the lower the UPS losses. The top data point on the graph refers to a fault-tolerant, highly reliable data center, combined with a very low (5 to 10%) UPS load factor. Generally speaking, UPS losses below 5% of the total building power are considered good.

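The load-factor limits above follow directly from the redundancy topology; the loss behavior additionally needs an efficiency curve. Below is a sketch using the paper's maximum load factors but a made-up efficiency-vs-load curve, so the loss figures are purely illustrative; real curves come from manufacturer data.

```python
# Sketch of the UPS load-factor arithmetic above. The maximum load factors
# come from the paper; the efficiency curve is an illustrative assumption.

MAX_LOAD_FACTOR = {
    "RR w/ ASTS":      1.000,  # reserve redundant w/ automatic static transfer switch
    "N+1 (4 modules)": 0.750,
    "2N":              0.500,
    "2(N+1) w/ ASTS":  0.375,
}

def ups_efficiency(load_factor: float) -> float:
    """Hypothetical efficiency curve: poor at low load, flattening near full load."""
    return 0.97 * load_factor / (load_factor + 0.02)

def loss_fraction_of_load(load_factor: float) -> float:
    """UPS losses as a fraction of the power delivered to the IT load."""
    eta = ups_efficiency(load_factor)
    return (1.0 - eta) / eta

for topology, lf in MAX_LOAD_FACTOR.items():
    print(f"{topology:18s} max load factor {lf:6.1%}  "
          f"loss/load {loss_fraction_of_load(lf):5.1%}")
# e.g. 100.0% load -> 5.2% loss, 37.5% load -> 8.6% loss with this curve.
# At the 5-10% load factor of the fault-tolerant outlier in Figure 8,
# the same curve gives losses of roughly 31% of the delivered load:
print(f"outlier at 7.5% load: loss/load {loss_fraction_of_load(0.075):5.1%}")
```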
The dependence on tier level is not presented graphically; the acquired data showed no clear dependence on the perceived tier level (I, II, III, IV) of the facility (Uptime Institute 2009). In other words, when PUE was plotted as a function of the tier level, it was not possible to identify any dependence. Some of the efficient data centers were perceived as tier IV facilities.

Although lighting generally accounts for a small percentage of a data center's total power, it is usually considered an easy way to save energy. Many of the data centers audited have the lights ON all the time in the raised floor area and lack efficient lighting systems or control strategies. A few data centers were found to have high-bay 250 W metal halide (MH) lighting fixtures that are ON all the time. In addition to their nominal power consumption, an equivalent load is also placed on the cooling system. Several data centers were found to have efficient zoned T8 lighting fixtures with occupancy sensors. In many data centers, retrofitting the existing lighting systems with occupancy sensors resulted in an acceptable simple payback of less than 4 years, especially when local utility incentives are taken into account. Generally, if the lighting power consumption is less than 1% of the total, then one might consider that to be good.

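The occupancy-sensor payback quoted above is plain arithmetic once the avoided cooling load is included. A sketch with hypothetical inputs follows; the 1.3 cooling multiplier (each lighting watt removed also avoids roughly 0.3 W of cooling power) and the utility incentive are assumptions, not values from the paper.

```python
# Sketch of the simple-payback arithmetic for a lighting retrofit.
# All inputs are hypothetical. Every watt of lighting removed also avoids
# heat the cooling plant must reject, approximated by a cooling multiplier.

def simple_payback_years(install_cost: float, incentive: float,
                         kw_saved: float, hours_saved_per_year: float,
                         dollars_per_kwh: float,
                         cooling_multiplier: float = 1.3) -> float:
    """Years to recover the net retrofit cost from energy savings."""
    annual_kwh = kw_saved * cooling_multiplier * hours_saved_per_year
    return (install_cost - incentive) / (annual_kwh * dollars_per_kwh)

# 40 kW of high-bay MH fixtures that occupancy sensors switch off
# for 6,000 of the 8,760 hours in a year, at $0.10/kWh.
years = simple_payback_years(install_cost=50_000, incentive=10_000,
                             kw_saved=40.0, hours_saved_per_year=6_000,
                             dollars_per_kwh=0.10)
print(f"simple payback: {years:.1f} years")  # ~1.3 years with these inputs
```

With these (favorable, assumed) inputs the payback lands well inside the under-4-years range reported above; smaller lighting loads or no incentive push it toward the 4-year end.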
The cooling system power percentage of the total data center power is plotted against the IT load in Figure 9. One can easily observe that small data centers with a critical load of less than 500 kW have higher mechanical cooling power consumption, and that this percentage decreases as the IT load increases. This can be explained by the fact that small data centers often employ air-cooled direct expansion units (air-cooled DX), while larger facilities tend to employ chilled water plants. The specific power consumption of an air-cooled DX system is usually about 1.45 kW/ton of cooling, while the same number for the water-cooled chiller plants of those legacy data centers is around 0.8 kW/ton. Of course, this number has improved drastically over the years, and it is not unusual to find efficient chillers with kW/ton ratings of 0.4 to 0.48.

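Those kW/ton figures convert directly into cooling overhead per kW of IT load, since one ton of refrigeration removes 3.517 kW of heat and, in steady state, the plant must reject roughly the IT power as heat. The conversion below covers the chiller or DX unit only, not auxiliary fans and pumps.

```python
# Converting the specific power figures above into cooling overhead.
# One ton of refrigeration = 3.517 kW of heat removal (12,000 Btu/h).

TON_KW_THERMAL = 3.517  # kW of heat removed per ton of refrigeration

def cooling_overhead_per_it_kw(kw_per_ton: float) -> float:
    """Cooling plant electrical kW per kW of IT heat removed."""
    return kw_per_ton / TON_KW_THERMAL

for label, kw_per_ton in [("air-cooled DX", 1.45),
                          ("legacy chilled water", 0.80),
                          ("efficient chiller", 0.45)]:
    overhead = cooling_overhead_per_it_kw(kw_per_ton)
    print(f"{label:22s} {kw_per_ton:.2f} kW/ton -> "
          f"{overhead:.2f} kW cooling per kW IT")
# air-cooled DX          1.45 kW/ton -> 0.41 kW cooling per kW IT
# legacy chilled water   0.80 kW/ton -> 0.23 kW cooling per kW IT
# efficient chiller      0.45 kW/ton -> 0.13 kW cooling per kW IT
```

This is why the DX-heavy small facilities sit so much higher on the cooling-power axis: the compressor alone adds roughly 41% of IT power, versus 23% for the legacy chilled water plants.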
Fan power is another important component. It was found that many data centers have more Computer Room Air Conditioner (CRAC) or Computer Room Air Handler (CRAH, chilled water) units than they actually need. Many of the chilled water units were observed to have either closed or only slightly open cooling valves, indicating that the cooling was not really required. Of course, the vast majority of those CRAC or CRAH units have constant-speed fan motors, which render them inefficient, as the air flow will continue unless excess units are shut down. Many of those legacy units also have low-efficiency motors, below today's standard efficiency. On the other hand, contemporary data centers were observed to have variable frequency drive (VFD) CRAH units operating at 60 to 80% of the nominal speed, thus consuming a fraction of the nominal fan power. Those units were proven to have low fan power consumption, accounting for less than 7% of the total data center power.

[Figure 8: UPS losses as a percentage of total data center power]
[Figure 9: Cooling system power as a percentage of total data center power]

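The 60 to 80% speed figures above translate into fan power through the fan affinity laws: flow scales with speed, and shaft power with speed cubed. An idealized sketch follows; real motor and drive losses erode the cube-law savings somewhat.

```python
# Why VFD CRAH units at 60-80% speed draw only a fraction of nominal fan
# power: by the fan affinity laws, shaft power scales with speed cubed.

def fan_power_fraction(speed_fraction: float) -> float:
    """Shaft power as a fraction of nominal, per the cube-law approximation."""
    return speed_fraction ** 3

for s in (0.6, 0.7, 0.8, 1.0):
    print(f"{s:.0%} speed -> {fan_power_fraction(s):6.1%} of nominal power")
# 60% speed -> 21.6%; 70% -> 34.3%; 80% -> 51.2% of nominal power
```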
The air management topic within the raised floor area of the data center is of great importance and was discussed and presented in detail by Tozer et al. (2009). Metrics to calculate the recirculation (R) and bypass (BP) flow ratios were presented, based on a sensible heat transfer and energy model that requires temperature measurements at the server intake and exhaust as well as at the CRAC/H return and discharge. The averages of all of the data halls' bypass and recirculation flow ratios are 0.5 and 0.5, respectively. This indicates that there is a large opportunity to improve and bring those points closer to the ideal (0,0).

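Tozer et al. (2009) derive R and BP from a sensible-heat mixing model on those four temperatures. The excerpt does not reproduce the equations, so the temperature-ratio forms below are an assumption (the commonly cited ones for this model), and the readings are hypothetical.

```python
# Sketch of the recirculation (R) and bypass (BP) flow ratios from the
# sensible-heat mixing model of Tozer et al. (2009). The temperature-ratio
# forms below are assumed, not quoted from the excerpt. Any consistent
# temperature unit works (deg F here).

def recirculation_ratio(t_server_in: float, t_server_out: float,
                        t_crac_supply: float) -> float:
    """Fraction of server intake air that is recirculated hot exhaust:
    R = (Tin - Tsupply) / (Tout - Tsupply)."""
    return (t_server_in - t_crac_supply) / (t_server_out - t_crac_supply)

def bypass_ratio(t_server_out: float, t_crac_return: float,
                 t_crac_supply: float) -> float:
    """Fraction of cold supply air that bypasses the servers to the return:
    BP = (Tout - Treturn) / (Tout - Tsupply)."""
    return (t_server_out - t_crac_return) / (t_server_out - t_crac_supply)

# Hypothetical data-hall readings (deg F).
t_supply, t_return = 60.0, 70.0   # CRAC/H discharge and return
t_in, t_out = 70.0, 90.0          # server intake and exhaust

r = recirculation_ratio(t_in, t_out, t_supply)
bp = bypass_ratio(t_out, t_return, t_supply)
print(f"R = {r:.2f}, BP = {bp:.2f}")  # R = 0.33, BP = 0.67; ideal is (0, 0)
```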
A few data centers with lower-than-average bypass and recirculation were found; those were observed to have CRAHs with VFDs located in galleries outside the data center, comprehensive implementation of blanking panels and cable brushes, high ceilings, server intake temperatures above 70°F, and generally all of the best practices implemented. Data obtained recently from configurations with contained cold aisles, where segregation mechanisms are implemented to isolate the cool air from the hot air, indicate very low air mixing levels, as shown. The low levels of mixing that remain were attributed to several air leakage paths, and measures are in place to seal those in order to minimize the losses.

[Figure 10: Lighting power as a percentage of total data center power]
[Figure 11: Fan power as a percentage of total data center power]
[Figure 12: Air management benchmarking]

Table 2. Thermoeconomic Analyses of Data Centers in the Database

                                          Small corporate   Midsized             Large
                                          (0-10,000 ft²)    (10,001-30,000 ft²)  (30,000 ft²+)
Average PUE                               2.8               2.20                 2.1
Target PUE                                2.3               1.90                 1.8
Average annual energy reduction (kWh/yr)  573,476           1,031,672            3,253,674
Average annual energy reduction (%)       24%               14%                  12%
Average annual energy savings ($)         $57,348           $103,167             $325,367
Average CO2 savings (metric tons)         294               615                  1,831
Average cost of improvements (estimated)  $163,800          $343,016             $730,000
Simple payback (years)                    3                 3                    3
Average ROI                               35%               34%                  43%

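Table 2's rows are largely self-consistent, and the fixed cost per kWh mentioned in the summary can be recovered from them: savings divided by kWh gives $0.10/kWh in every size class. The sketch below re-derives the implied tariff, emission factor, payback, and ROI from the tabulated averages; because these are ratios of averages, they track but do not exactly match the tabulated payback and ROI, which appear to be averaged per site.

```python
# Reproducing the arithmetic behind Table 2 from its own numbers.

ROWS = {
    #                     kWh/yr saved  $ saved   t CO2   $ cost
    "small (0-10k ft2)":  (573_476,     57_348,   294,    163_800),
    "mid (10k-30k ft2)":  (1_031_672,   103_167,  615,    343_016),
    "large (30k+ ft2)":   (3_253_674,   325_367,  1_831,  730_000),
}

for label, (kwh, dollars, co2_t, cost) in ROWS.items():
    tariff = dollars / kwh            # implied $/kWh
    ef = co2_t * 1_000 / kwh          # implied kg CO2 per kWh
    payback = cost / dollars          # simple payback, years
    roi = dollars / cost              # annual savings / cost
    print(f"{label:20s} tariff ${tariff:.3f}/kWh  "
          f"EF {ef:.2f} kg/kWh  payback {payback:.1f} yr  ROI {roi:.0%}")
# small: $0.100/kWh, 0.51 kg/kWh, 2.9 yr, 35%
# mid:   $0.100/kWh, 0.60 kg/kWh, 3.3 yr, 30%
# large: $0.100/kWh, 0.56 kg/kWh, 2.2 yr, 45%
```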
SUMMARY AND DISCUSSION

Aside from the IT equipment power, the preceding graphs reflect the fact that most of the energy efficiency measures can be implemented in the mechanical cooling system, whereas limited options are available in the electrical infrastructure. Mechanically, the cooling system and the data center fans represent the major power-consuming components, while the UPS and the lighting systems are usually the only practical areas where gains can be made in the electrical system. The cooling load can be reduced by implementing economizers and VFDs on the chillers, raising the chilled water set points, and implementing condenser water reset control as weather permits. Similarly, implementing variable air flow can reduce the fan power consumption by up to 40%.

The previous data centers were analyzed to understand what it would take to reduce their energy use as well as their carbon emissions, based on a fixed cost per kWh. As is known, the cost of power varies between different geographical areas. In Boston, MA, for example, which is located in climate zone 6A, the cost of electricity is $0.16/kWh, whereas the cost in Phoenix, AZ (climate 2B) is $0.08/kWh. Additionally, some parts of the U.S. have more green power than others. Green power plants (wind, solar, etc.) result in a smaller associated carbon footprint. The data centers in the database were grouped into three different categories based on their size. For small data centers, reduction of the PUE from 3 to 2.3 would require a small investment of about $165k, which would be paid back in about 3 years and would result in 300 metric tons of carbon dioxide avoidance. Further analyses are shown in Table 2 (Salim 2009) for the other categories.

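Since the dollar savings scale linearly with the tariff, the Boston and Phoenix rates quoted above double or halve the Table 2 figures relative to the fixed $0.10/kWh basis. Below is a quick re-pricing of the small-category energy reduction, a simple scaling that ignores demand charges and rate tiers.

```python
# The dollar side of Table 2 scales linearly with the local tariff.

KWH_SAVED_SMALL = 573_476  # average annual energy reduction, small category
TARIFFS = {"Boston, MA (6A)": 0.16,
           "Phoenix, AZ (2B)": 0.08,
           "fixed basis": 0.10}

for region, rate in TARIFFS.items():
    print(f"{region:18s} ${KWH_SAVED_SMALL * rate:10,.0f}/yr")
# Boston, MA (6A)    $    91,756/yr
# Phoenix, AZ (2B)   $    45,878/yr
# fixed basis        $    57,348/yr
```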
REFERENCES

DOE. 2008. US Government Computer News. http:/
EPA. 2007. Report to Congress on server and data center energy efficiency, Public Law 109-431. ENERGY STAR Program.
U.S. DOE. 2007. Save Energy Now initiative. http://www1.eere.energy.gov/industry/saveenergynow/partnering_data_centers.html.
The Green Grid. 2007. The green grid data center power efficiency metrics: PUE and DCiE. December. http://www.thegreengrid.org/gg_content/TGG_Data_Center_Power_Efficiency_Metrics_PUE_and_DCiE.pdf.
Tozer, R., M. Salim, and C. Kurkjian. 2009. Air management metrics in data centers. ASHRAE Transactions, Chicago Winter Conference, January.
Salim, M. 2009. Energy in data centers. Engineered Systems, April. http:/
Uptime Institute. 2009. Data center site infrastructure tier standard: Topology.

DISCUSSION

Vello Ehvert, President, Ehvert Engineering, Toronto, Ontario, Canada: Would you agree that PUE and DCiE do not account for the efficiency of the power source and transmission losses? Your presentation focused on reducing the energy consumed by the mechanical and electrical processes that supply electricity and remove heat from the data …

