Hurdles in Deploying Liquid Cooling in NEBS Environment (ASHRAE LO-09-018, 2009)
ABSTRACT

With computer servers' exponential growth in power for a 7-ft rack, from sub-10 kW (34,121 Btu/hr) in years past, to 30 kW (102,363 Btu/hr) in the last half decade, to current product launches of over 60 kW (204,726 Btu/hr), there is significant desire and product research by datacenter cooling equipment vendors, as well as computer server equipment vendors, to introduce liquid-cooling solutions in various forms, such as direct cooling at the equipment level or air-to-liquid heat exchange at the rack. In this paper, we differentiate the equipment for the Telecom
Central Office (CO) environment from the more industry-dominant Datacenter (DC) environment. A holistic examination, from network equipment design to the Telecom CO requirements, then explains the different hurdles along the way to implementing liquid cooling in the Telecom environment.

INTRODUCTION

Unless otherwise specified or discussed, the references here to air cooling and liquid cooling are directed at implementations in the equipment design and not at the rack or room level. It is well established that liquid cooling, which includes most of the non-air cooling technologies, such as water, 2-phase/multi-phase flow, and refrigerant systems, is a much more effective method of extracting heat because, by comparison, air is a poor conductor of heat and has low heat capacity. These alternative cooling techniques are not new in electronic cooling, but are reinvestigated every decade or so, whenever existing electronics technology reaches a power density plateau that cannot be adequately addressed by air cooling. The most recognized of these past liquid-cooling designs is the IBM Thermal Conduction Module (TCM) (Kraus et al. 1983) of the 1980s to early 1990s for cooling bipolar devices. It was abandoned when CMOS technology came along and provided continued scaling of higher performance with lower power consumption. Since then, however, server vendors have brought liquid cooling back in the latest generation of DC servers because rack power density is surpassing 30 kW (102,363 Btu/hr).

Air cooling has always been the cooling method of choice because of beneficial attributes that are much more attractive than a pure comparison of cooling performance would suggest. These attributes include low cost, ease of implementation (in both design and equipment deployment), dielectric nature, and no adverse environmental impact. Unless there is a drastic paradigm shift in the ultimate heat-sinking fluid, such as dumping the waste heat directly into the ocean (or a lake), from a holistic view air is still the ultimate sink that the heat is dissipated to, and waste heat will continue to be dumped into the environment.

The industry is once again nearing the power plateau. Strong debates exist between the two poles of air cooling and liquid cooling of IT equipment, and over the continuum of solutions between them, because the power density is at the transition boundary. If the power
density continues its exponential growth, the cooling sweet spot may shift; it will no longer be a debate, but will push the industry into liquid cooling, because other issues, such as acoustic noise, airflow distribution, and maintaining proper local component temperatures, are becoming less manageable.

The market for computers and computer servers is larger than the network equipment market. Most commercial cooling equipment vendors are much more familiar and involved with the DC environment than the CO

Herman Chu
is principal engineer at Cisco Systems, Incorporated, San Jose, CA.

LO-09-018. © 2009, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Published in ASHRAE Transactions 2009, vol. 115, part 2. For personal use only. Additional reproduction, distribution,
or transmission in either print or digital form is not permitted without ASHRAE's prior written permission.

environment. In this paper, the main purpose is to point out the differences, in the hope that this will help vendors derive solutions tailored for CO operators instead of a one-size-fits-all approach. Before the industry can embrace liquid cooling, there are issues that need to be understood and addressed.

MARKET SEGMENTS OVERVIEW FOR NETWORK EQUIPMENT

Generally, network equipment can be categorized into the following market segments:

Consumer. Sometimes these are also referred to as customer premises equipment (CPE) for use with service provider services. They include equipment such as telephones, DSL modems, cable modems, set-top boxes, and private branch exchanges.

Small office, home office (SOHO), branch, and medium office. For SOHO, it is usually from
1 to 10 employees. For branch and medium office, the equipment is typically in a designated area or room in the office, but not in a datacenter.

Enterprise. These are the large corporations that typically have DCs with well-controlled environments housing the IT and network equipment.

Service providers (SP). The Telecom companies, such as AT&T, both having a profound impact on how equipment is designed.

Availability, Serviceability and Redundancy Considerations

Similar to past mainframe computer requirements, high availability is a crucial component of routers and switches deployed in COs. For
instance, in a catastrophic or emergency scenario, users expect the dial tone to be instant when the phone is picked up. With the high-availability requirement, serviceability and redundancy are key elements of the equipment design, minimizing any intended or unintended downtime. Network equipment has to be easily serviceable, and redundancy is generally designed into the cooling and power systems and at the board level for each type of board. For larger equipment, such as a core router, it is not possible to provide redundancy by duplicating the machine. To utilize liquid cooling, these design constraints need to be carefully addressed. As listed in Table 1, per the NEBS requirement, it can take up to 96 hours before any operator intervention. This includes any repair or replacement of the cooling system, power supplies, or boards. This means that any fluid leakage, pump failure, or failure of any other component of the liquid-cooling loop needs to be self-healed or self-controlled within the 96-hour window. Most likely this will require redundant components (such as pumps), bypass circuitry, pneumatic on/off valves, and leakage detection.

Reliability

To illustrate the importance of reliability, let's review Figure 3 again. The slope of the Telecom equipment curve is shallower than that of the computer equipment curve. Both industries ride the same semiconductor technology and advances, so why are they different? One of the reasons could be that the Telecom equipment product cycle
is generally longer than the computer and server product cycle. Once the equipment is deployed in the CO, it can be there for more than 10 years, while for the computer and server industry it is much shorter. Due to the difference in product life cycle, the network equipment vendors have not been able to proje
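Returning to the serviceability discussion: the 96-hour unattended-operation window implies supervisory logic around the redundant pumps, bypass circuitry, and leak detection described earlier. The following sketch is not from the paper; the class, pump indices, and event strings are hypothetical, a minimal illustration of failover and leak-triggered isolation under that requirement:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of self-healing control logic implied by the NEBS
# 96-hour unattended window: redundant pumps, an air-side bypass path,
# and leak-triggered isolation. All names and logic are illustrative.

UNATTENDED_WINDOW_HRS = 96  # per NEBS, max time before operator intervention


@dataclass
class CoolingLoop:
    pumps_ok: list = field(default_factory=lambda: [True, True])  # primary, standby
    active_pump: int = 0
    leak_detected: bool = False
    isolation_valves_closed: bool = False
    events: list = field(default_factory=list)

    def on_pump_failure(self, pump_index: int) -> None:
        """Fail over to the redundant pump without operator intervention."""
        self.pumps_ok[pump_index] = False
        if pump_index == self.active_pump:
            standby = [i for i, ok in enumerate(self.pumps_ok) if ok]
            if standby:
                self.active_pump = standby[0]
                self.events.append(f"failover to pump {self.active_pump}")
            else:
                self.events.append("no pumps left: dispatch technician")

    def on_leak(self) -> None:
        """Isolate the liquid loop; heat is shed via the bypass path."""
        self.leak_detected = True
        self.isolation_valves_closed = True  # pneumatic on/off valves
        self.events.append("leak: loop isolated, bypass engaged")

    def needs_operator(self, hours_since_fault: float) -> bool:
        """Operator is required once the 96-hour window is exhausted
        or no redundant component remains."""
        return hours_since_fault >= UNATTENDED_WINDOW_HRS or not any(self.pumps_ok)


loop = CoolingLoop()
loop.on_pump_failure(0)          # primary pump dies -> standby takes over
print(loop.active_pump)          # 1
print(loop.needs_operator(24))   # False: still inside the 96-hour window
loop.on_leak()
print(loop.events[-1])           # leak: loop isolated, bypass engaged
```

The design choice the paper points at is visible here: every fault path must resolve autonomously (failover, isolation, bypass) until either the 96-hour window expires or redundancy is exhausted.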