Environmentally Opportunistic Computing: Computation as Catalyst for Sustainable Design

Aimee P. C. Buccellato, LEED AP
Paul Brenner, PhD, P.E.
David B. Go, PhD, ASHRAE Member
Ryan Jansen
Eric M. Ward, Jr.

Aimee P. C. Buccellato, MDesS, LEED AP is an assistant professor in the School of Architecture at the University of Notre Dame, Notre Dame, Indiana. Paul R. Brenner, PhD, P.E. is the Associate Director of the Center for Research Computing and a research assistant professor in the Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana. David B. Go, PhD is an assistant professor in the Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame, Indiana. Ryan Jansen is an undergraduate in the Department of Computer Science and Engineering at the University of Notre Dame, Notre Dame, Indiana. Eric M. Ward, Jr. is an undergraduate in the Department of Aerospace and Mechanical Engineering at the University of Notre Dame, Notre Dame, Indiana.

ABSTRACT

Environmentally Opportunistic Computing (EOC) is a sustainable computing concept that capitalizes on the physical and temporal mobility of modern computer processes and enables distributed computing hardware to be integrated into a facility or network of facilities to optimize the consumption of computational waste heat in the built environment. The first implementation of EOC is the prototype Green Cloud Project at Notre Dame, where waste heat from computing hardware is used to offset the heating demands of the parent facility. EOC performs as a "system-source" thermal system, with the capability to create heat where it is locally required, to utilize energy when and where it is least expensive, and to minimize a building's overall energy consumption. Instead of expanding active measures (i.e., mechanical systems) to contend with thermal demands, the EOC concept utilizes existing high performance computing and information communications technology coupled with system controls to enable energy-hungry, heat-producing data systems to become service providers to a building while concurrently utilizing aspects of a building's HVAC infrastructure to cool the machines; essentially, the building receives free heat, and the machines receive free cooling. In this work, we present the vision of EOC and the current performance capabilities of the Green Cloud prototype from in situ measurements. Recognizing EOC's potential to achieve a new paradigm for sustainable building, the research also begins to explore the integration of EOC at the building scale, acknowledging the concept-critical collaboration required between architects, computational hardware and software owners, and building systems engineers.

INTRODUCTION

Waste heat created by high performance computing and information communications technology (HPC/ICT) is a critical resource management issue. In the U.S., billions of dollars are spent annually to power and cool data systems. The 2007 United States Environmental Protection Agency "Report to Congress on Server and Data Center Efficiency" estimates that the U.S. spent $4.5 billion on electrical power to operate and cool HPC and ICT servers in 2006, with the same report forecasting that our national ICT electrical energy expenditure will nearly double, ballooning to $7.4 billion by the year 2011. Current energy demand for HPC/ICT is already three percent of U.S. electricity consumption and places considerable pressure on the domestic power grid: the peak load from HPC/ICT is estimated at 7 GW, or the equivalent output of 15 baseload power plants (US EPA 2007).

As a result, in the "computational world", as in the built world, optimized performance and increased systems efficiency and capability have become central priorities amidst mounting pressure from both the public and environmental advocacy groups. However, despite evolving low-power architectures in the computational sense, demands for increased systems capability continue to drive up utility power consumption for computation towards economic limits on par with capital equipment costs. Not surprisingly, the faster and more efficiently we are able to compute, the more we grow a culture and economy requiring greater computation, simultaneously increasing power utilization for system operation and cooling needs; or, as Douglas Alger of Cisco points out, top-end performance often translates to top-end power demand and heat production (Alger 2010). And so, regardless of streaming advances in systems capability and efficiency, or perhaps even as a direct result of them, architects and engineers must contend with the growing heat loads generated by computational systems and the associated costly, involuntary energy waste involved in cooling them.

Recognizing that power resources for data centers are not infinite, several professional entities within the technology industry have begun to explore this problem, such as the High-Performance Buildings for High Tech Industries Team at Lawrence Berkeley National Laboratory (Blazek, Mills, et al. 2007), the ASHRAE Technical Committee 9.9 for Mission Critical Facilities, Technology Spaces, and Electronic Equipment (TC (b) 2008), the Uptime Institute (Brill 2008), and the Green Grid (http://www.thegreengrid.org). At the same time, efforts by corporations, universities, and government labs to reduce their environmental footprint and more effectively manage their energy consumption have resulted in the development of novel waste heat exhaust and free cooling applications, such as the installation of the Barcelona Supercomputing Center, MareNostrum, in an 18th-century Gothic masonry church (BSC 2010), and novel waste heat recirculation applications, such as a centralized data center in Winnipeg that uses re-circulated waste heat to heat the editorial offices of a newspaper directly above (Fontecchio 2008). Similar centralized data centers in Israel (Alger 2010) and Paris (Miller 2010) use recaptured waste heat to condition adjacent office spaces and an on-site arboretum, respectively.

Despite systems-side optimization of traditional centralized data centers and advances in waste heat monitoring and management, current efforts in computer waste heat regulation, distribution, and recapture are focused largely on immediate, localized solutions, and have not yet been met with comprehensive, integrated whole-building design solutions. Further, while recommendations developed recently by industry leaders to improve data center efficiency and reduce energy consumption through the adoption of conventional metrics for measuring Power Usage Effectiveness (PUE) recognize the importance of whole data center efficiency, the guidelines do not yet quantify the energy efficiency potential of a building-integrated distributed data center model (7x24, ASHRAE, et al. 2010).
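For reference, PUE is conventionally defined as the ratio of total facility energy to the energy delivered to the IT equipment; the figures below are illustrative only, not measurements from this work:

    PUE = E_total_facility / E_IT_equipment

For example, a facility that draws 1.8 kW overall for every 1.0 kW consumed by its IT equipment operates at a PUE of 1.8. The metric treats all non-IT energy as overhead, and it does not credit waste heat that displaces building heating, which is precisely the benefit a building-integrated model seeks to capture.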

ENVIRONMENTALLY OPPORTUNISTIC COMPUTING

Environmentally Opportunistic Computing (EOC) recognizes that increased efficiency in computational systems must reach beyond systems-side advancement, and that the aggressive growth of users and the demand capability of those users must necessarily be met with new, integrated design paradigms that look beyond optimization of the traditional, single-facility data center. EOC integrates distributed computing hardware with existing facilities to create heat where it is already needed, to exploit cooling where it is already available, to utilize energy when and where it is least expensive, and to minimize the overall energy consumption of an organization. The focus of EOC research is to develop models, methods of delivery, and building/system design integrations that reach beyond current waste heat utilization applications and minimum energy standards to optimize the consumption of computational waste heat in the built environment. What must happen in order to push existing computation waste heat reclamation forward to be transformative is the development of a systematic method for assessing, balancing, and effectively integrating various interrelated "market" forces (Table 1) related to the generation and efficient consumption of computer waste heat.

At the building scale, the efficient consumption of computer waste heat must be closely coordinated with building HVAC systems, whether these are existing technologies or new recovery and distribution systems designed specifically for waste heat recovery and free cooling. A sensor-control relationship must be established between these systems, the hardware they monitor, and the local input and output temperatures necessitated by the hardware and demanded by the building occupants, respectively. The controls network must mediate not only the dynamic relationship between source and target but also the variation in source and target interaction due to governing outside factors such as seasonal variations: in the colder winter months the computational heat source can provide necessary thermal energy, whereas the relationship inverts during the hot summer months, when the facility can provide reasonably cool exhaust/make-up air to the computational components.
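To make this mediation concrete, the sketch below chooses among three duct/damper configurations from outdoor temperature and building heating demand. It is a minimal illustration with assumed threshold values and placeholder mode names, not the Green Cloud control implementation:

    # Hypothetical seasonal mediation between source (servers) and target
    # (building). The threshold and mode names are illustrative only.

    OUTDOOR_TOO_COLD_C = 10.0  # below ~50 F, outside air is too cold to preheat usefully

    def select_mode(outdoor_c, building_wants_heat):
        """Choose a duct/damper configuration for current conditions."""
        if building_wants_heat and outdoor_c < OUTDOOR_TOO_COLD_C:
            # Winter: recirculate building air through the container and back.
            return "recirculate_building_air"
        if building_wants_heat:
            # Moderate season: draw outdoor air, heat it, deliver it to the building.
            return "outdoor_air_to_building"
        # Summer: the building needs no heat; free-cool the hardware instead.
        return "free_cooling"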

Table 1: Relevant Market Forces for Integrating HPC/ICT into the Built Environment

1. User demand for computational capability
   - Iterative examination of utilization patterns for various applications (science, business, entertainment, education, etc.)
   - Iterative correlation of utilization characteristics with developing software, hardware, and network capabilities
2. Computational capability mobility and associated security concerns
   - Evolution and adoption of grid/cloud computing and virtualization technology
   - Security algorithms and implementations to allow sensitive/classified information transfer
3. Hardware thermal and environmental limits (temperature, humidity, particulate, etc.)
4. Facility concerns
   - Integration with existing or novel active HVAC and/or passive systems
   - General thermal performance variables (building materials, orientation, size, location of openings, etc.)
5. Facility occupant demands/concerns
   - Thermal control (minimum user expectations and current standards and guidelines)
   - Indoor air/environmental quality and perception of heat source (radiant computer heat)
6. Temperature variability (indoor/outdoor; day/night; seasonal)
7. Return on investment, total cost of ownership, and carbon reduction cost benefits/avoidance

The development of efficiency standards and increased expectations with respect to building occupant comfort require that the optimized integration of computational waste heat in a facility or group of facilities take into account the prevailing thermal comfort standards, like ASHRAE Standard 55-2004, Thermal Comfort Conditions for Human Occupancy, which specifies "the combinations of indoor space environment and personal factors that will produce thermal environmental conditions acceptable to 80% or more of the occupants within a space" (ASHRAE 2004), and more recent provisions for enhanced controllability of systems by building occupants, like the USGBC's LEED rating system Environmental Quality Credit 6.2, Controllability of Systems, which calls for the provision of "individual comfort controls for a minimum of 50% of the building occupants to enable adjustments to suit individual task needs and preferences". Comfort system control may be achieved as long as the building occupants have control over at least one of the primary indoor space environment criteria designated in ASHRAE Standard 55-2004: air temperature, radiant temperature, humidity, and air speed (USGBC 2007), all of which are critical considerations for the utilization and optimization of waste heat in a facility.

EOC PROTOTYPE PRELIMINARY MEASUREMENTS

As the first field application of EOC, the University of Notre Dame Center for Research Computing (CRC), the City of South Bend (IN), and the South Bend Botanical Society have collaborated on a prototype building-integrated distributed data center at the South Bend Botanical Garden and Greenhouse (BGG) called the Green Cloud Project (http://greencloud.crc.nd.edu). The Green Cloud (GC) prototype is a container that houses HPC servers and is situated immediately adjacent to the BGG facility, where it is ducted into one of the conservatories. The hardware components are directly connected to the CRC network and are currently able to run typical university-level research computing loads. The heat generated from the hardware is exhausted into the BGG public conservatory, with the goal of offsetting wintertime heating requirements and reducing annual expenditures on heating. (In 2006, the BGG spent nearly $45,000 on heating during the months of January, February, and March alone.)

As shown in Figure 1, the prototype is a 20 ft × 8 ft × 8 ft (6.1 m × 2.4 m × 2.4 m) container that houses 100 servers. The container was custom manufactured by Pac-Van in Elkhart, IN, and each entryway is heavily secured to protect the HPC equipment. During moderate-temperature months, external air (≥50°F/10°C) is introduced into the container through a single 54 in. × 48 in. (1.4 m × 1.2 m) louver, heated by the hardware, and expelled into the conservatory. Conversely, during cold-temperature months, when external air is too cold (<50°F/10°C) to appreciably heat for benefit to the conservatory, a return vent ducted to the conservatory draws air directly from the conservatory into the container, where it is heated by the hardware and then returned directly to the conservatory. Air is driven by a set of three axial fans through two ducts into the BGG. The fans deliver a total volume flow rate of approximately 1260 cfm (0.59 m³/s) at a speed of approximately 26.9 ft/s (8.2 m/s) through one duct and 18.4 ft/s (5.6 m/s) through the other. For operation during summer months, when the conservatory does not require additional heating, the ductwork is disconnected and the container uses free air cooling for the hardware.
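The measured airflow fixes the sensible heat the container can deliver, via Q = ρ·V̇·c_p·ΔT. The short calculation below uses standard air properties; the 15°C temperature rise is an assumed, illustrative value rather than a reported measurement:

    # First-order sensible heat delivered to the conservatory: Q = rho * V * cp * dT.
    # The 15 C air temperature rise is an assumption for illustration only.

    CFM_TO_M3_PER_S = 0.000471947    # 1 cfm in m^3/s

    rho = 1.2                        # air density, kg/m^3
    cp = 1005.0                      # specific heat of air, J/(kg*K)
    v_dot = 1260 * CFM_TO_M3_PER_S   # measured total flow, ~0.59 m^3/s
    dt = 15.0                        # assumed temperature rise, K

    q_kw = rho * v_dot * cp * dt / 1000.0
    print(f"Estimated heat delivery: {q_kw:.1f} kW")  # ~10.8 kW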

Figure 1: (a) Layout of the prototype EOC container integrated into the BGG facility. (b) Photograph of the Green Cloud prototype at the BGG conservatory. (c) Schematic of the prototype EOC container.

Computational Control

One of the largest challenges in the development of the GC prototype has been heat management. To address this problem, the authors developed a suite of temperature management scripts, designed both to run jobs efficiently on the servers and to regulate the overall temperature of the hardware and the EOC container itself.
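The scripts themselves are not reproduced here, so the sketch below only illustrates the general pattern such a temperature management script can follow: poll a container temperature sensor and pause or resume the job stream around thresholds. The threshold values and the sensor/scheduler hooks are hypothetical placeholders, not the authors' code:

    # Hypothetical temperature management loop: pause compute jobs when the
    # container runs hot, resume once it cools. A hysteresis gap between the
    # two thresholds (assumed values) keeps the loop from flapping.

    import time

    PAUSE_ABOVE_C = 35.0    # assumed upper limit for safe operation
    RESUME_BELOW_C = 30.0   # assumed recovery point

    def read_container_temp_c():
        """Placeholder: read a container air-temperature sensor."""
        raise NotImplementedError

    def pause_jobs():
        """Placeholder: tell the batch scheduler to stop dispatching work."""
        raise NotImplementedError

    def resume_jobs():
        """Placeholder: let the batch scheduler dispatch work again."""
        raise NotImplementedError

    def run(poll_seconds=60):
        paused = False
        while True:
            temp = read_container_temp_c()
            if not paused and temp > PAUSE_ABOVE_C:
                pause_jobs()
                paused = True
            elif paused and temp < RESUME_BELOW_C:
                resume_jobs()
                paused = False
            time.sleep(poll_seconds)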

