Recent Posts by Patrick Keller

About hot and cold air flows (in data centers)

hot-aisle_cold-aisle_01

hot-aisle_cold-aisle_conditions

Both images are taken from the website Green Data Center Design and Management, “Data Center Design Consideration: Cooling” (03.2015). Source: http://macc.umich.edu.

ASHRAE is a “global society advancing human well-being through sustainable technology for the built environment”.

 

A typical question that arises with data centers is the need to cool down the overheating servers they contain. The more they compute, the more heat they produce and the more energy they consume, and therefore the more they need to be cooled down to stay in operation (a wide operating range would be between 10°C and 30°C). The optimal server room temperature seems to be around 20-21°C, or ~27°C for recent professional machines (Google recommends 26.7°C).

The exact operating temperature is subject to discussion and depends on the hardware.
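As a toy illustration of these thresholds, here is a minimal sketch in Python. The 18-27°C band is ASHRAE’s recommended envelope for typical server classes; the 10-30°C band reuses the wide operating range mentioned above. Both bands are illustrative assumptions here, not vendor guidance:

```python
# Toy inlet-temperature check against the thresholds discussed above.
# The 18-27 degC "recommended" band follows ASHRAE's guidance for typical
# server classes; the 10-30 degC "allowable" band reuses the wide operating
# range cited in the post. Both are illustrative assumptions.

RECOMMENDED = (18.0, 27.0)  # degC, ASHRAE recommended envelope
ALLOWABLE = (10.0, 30.0)    # degC, wide operating range cited above

def classify_inlet_temp(t_celsius: float) -> str:
    """Classify a server inlet temperature reading."""
    if RECOMMENDED[0] <= t_celsius <= RECOMMENDED[1]:
        return "ok: within recommended envelope"
    if ALLOWABLE[0] <= t_celsius <= ALLOWABLE[1]:
        return "warning: allowable, but outside recommended envelope"
    return "alert: outside allowable operating range"

for reading in (20.5, 26.7, 29.0, 33.0):
    print(f"{reading:5.1f} degC -> {classify_inlet_temp(reading)}")
```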

Yet in every data center comes the question of air conditioning and airflow. It always revolves around the organization shown in the upper drawing (with variations): 1) cold-air aisles, floors or areas must be created or maintained, from which the servers draw their cooling fluid, and 2) hot-air aisles, ceilings or areas must be managed, into which the heated air is released and from which it is extracted.

The second drawing shows that humidity is important as well, in relation to heat.

 

As hot air, expanded and therefore lighter, naturally rises while cold air sinks, many interesting and possibly natural air streams could be imagined around this configuration…
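To get an order of magnitude for such buoyancy-driven streams, here is a rough sketch using the classic stack-effect formula. The geometry and temperatures below are made-up, illustrative values, not measurements from an actual facility:

```python
import math

# Rough stack-effect estimate for buoyancy-driven ("natural") airflow:
#   Q = Cd * A * sqrt(2 * g * h * (Ti - To) / Ti)
# with temperatures in Kelvin. All values below are illustrative assumptions.

def stack_effect_flow(area_m2, height_m, t_hot_c, t_cold_c, cd=0.65):
    """Volumetric airflow (m^3/s) driven by the hot/cold temperature difference."""
    g = 9.81                    # gravity, m/s^2
    t_hot = t_hot_c + 273.15    # hot (exhaust) air, K
    t_cold = t_cold_c + 273.15  # cold (intake) air, K
    return cd * area_m2 * math.sqrt(2 * g * height_m * (t_hot - t_cold) / t_hot)

# e.g. a 0.5 m^2 vent, 3 m between cold intake and hot exhaust,
# 35 degC exhaust air vs 20 degC intake air:
q = stack_effect_flow(area_m2=0.5, height_m=3.0, t_hot_c=35.0, t_cold_c=20.0)
print(f"approx. {q:.2f} m^3/s (~{q * 3600:.0f} m^3/h) of naturally driven airflow")
```

Even this small, passive configuration moves on the order of a few thousand cubic meters of air per hour, which suggests why such natural streams are worth designing with.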

 

OpenCloud (Academic Research) Mesh

Note: When we had to pick an open source cloud computing platform at the start of our research, we dug around for some time to find the one that would best match our planned activities. We chose ownCloud and explained our choice in a previous post, as well as some identified limitations linked to it. Early this year came the announcement by ownCloud that it will initiate “Global Interconnected Private Clouds for Universities and Researchers” (with early participants such as CERN, ETHZ, SWITCH, TU-Berlin, the University of Florida, the University of Vienna, etc.) So it looks like we picked the right open platform! Especially because they are announcing a mesh layer on top of different clouds to provide common access across globally interconnected organizations.

This confirms our initial choice and the need to bridge it with the design community, especially as this new “mesh layer” is added to ownCloud, something that was missing when we started this project (though this scalability became available from ownCloud version 7.0 on). It now certainly allows what we were looking for: a network of small and personal data centers. Now the question comes back to design: if personal data centers are no longer big, undisclosed or distant facilities, what could they look like? For what types of uses? If the personal applications are not oriented toward “file sharing only”, what could they become? For what kinds of scenarios?
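By way of illustration, here is a minimal sketch of what sharing across such interconnected clouds looks like programmatically, through ownCloud’s OCS Share API. The server URL, credentials and recipient are placeholders, and the shareType=6 value for federated (server-to-server) shares is an assumption based on ownCloud’s API documentation; check your own instance:

```python
import requests

# Minimal sketch: create a federated ("server-to-server") share from one
# ownCloud instance to a user on another, via the OCS Share API.
# BASE, AUTH and the recipient are placeholders; shareType=6 (federated
# cloud share) is assumed from ownCloud's OCS API documentation.

BASE = "https://cloud.example.org"  # hypothetical small personal data center
AUTH = ("alice", "app-password")    # placeholder credentials

resp = requests.post(
    f"{BASE}/ocs/v1.php/apps/files_sharing/api/v1/shares",
    auth=AUTH,
    headers={"OCS-APIRequest": "true"},
    data={
        "path": "/research/notes",           # local folder to share
        "shareType": 6,                      # 6 = federated cloud share
        "shareWith": "bob@cloud.other.org",  # user on a remote instance
    },
)
resp.raise_for_status()
print(resp.text)  # OCS XML response with the new share's metadata
```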

 

OpenCloudMesh_4.0_150dpi

I&IC Workshop #4 with ALICE at EPFL-ECAL Lab, brief: “Inhabiting the Cloud(s)”

Note: we will start a new I&IC workshop in two weeks (02-06.02) that will be led by the architects of the ALICE laboratory (EPFL), under the direction of Prof. Dieter Dietz, doctoral assistant Thomas Favre-Bulle, architect scientist-lecturer Caroline Dionne and architect studio director Rudi Nieveen. During this workshop, we will mainly investigate the territorial dimension(s) of the cloud, as well as distributed “domestic” scenarios that develop a symbiosis between small decentralized personal data centers and the act of inhabiting. We will also look toward a possible urban dimension for these data centers. The workshop is open to master and bachelor students of architecture (EPFL), on a voluntary basis (it is not part of the curriculum).

A second workshop will also be organized by ALICE during the same week on a related topic (see the downloadable pdf below). Both workshops will take place at the EPFL-ECAL Lab.

I reproduce below the brief that was distributed to the students by ALICE.

 

Inhabiting the Cloud(s)

IMG_9021_m

Wondering about interaction design, architecture and the virtual? Wish to improve your reactivity and design skills?

Cloud interfaces are now part of our daily experience: we use them as storage space for our music, our work, our contacts, and so on. Clouds are intangible, virtual “spaces” and yet, their efficacy relies on humongous data-centres located in remote areas and subjected to strict spatial configurations, climate conditions and access control.
Inhabiting the cloud(s) is a five-day exploratory workshop on the theme of cloud interfacing, data-centres and their architectural, urban and territorial manifestations.
Working from the scale of the “shelter” and the (digital) “cabinet”, projects will address issues of inhabited social space, virtualization and urban practices. Cloud(s) and their potential materialization(s) will be explored through “on the spot” models, drawings and 3D printing. The aim is to produce a series of prototypes and user-centered scenarios.

Participation is free and open to all SAR students.

ATTENTION: Places are limited to 10, register now!
Info and registration: caroline.dionne@epfl.ch & thomas.favre-bulle@epfl.ch
www.iiclouds.org

-

Download the two briefs (Inhabiting the Cloud(s) & Montreux Jazz Pavilion)

 

Laboratory profile

The key hypothesis of ALICE’s research and teaching activities places space at the focus of human and technological processes. Can the complex ties between human societies, technology and the environment become tangible once translated into spatial parameters? How can these be reflected in a synthetic design process? ALICE strives for collective, open processes and non-deterministic design methodologies, driven by the will to integrate analytical, data-based approaches and design thinking into actual project proposals and holistic scenarios.

 

http://alice.epfl.ch/

 

Clog (2012). Data Space

IMG_9022

 

Note: we mentioned this “bookazine”, Clog, in our bibliography (Clog (2012). Data Space, Clog online) at the very early stages of our design-research project. It is undoubtedly one of the key references for this project, mostly related to thinking, territory and space, and therefore rather oriented toward the architecture field. It will certainly serve in the context of our workshop with the architects (in collaboration with ALICE) next week, but not only there, as it states some important stakes related to data in general. This very good and inspiring magazine is driven by a pool of editors: Kyle May (editor in chief, whom we invited as a jury member when we –fabric | ch with Tsinghua University– organized a call during the 2013 Lisbon Architecture Triennale, curated by Beatrice Galilee), Julia van den Hout, Jacob Reidel, Archie Lee Coates and Jeff Franklin.

The edition is unfortunately sold out, which is why I assembled several images from the bookazine (for the sake of the research) in a pdf that can be downloaded here (60mb).

Donaghy, R. (2011). Co-opting the Cloud: An Architectural Hack of Data Infrastructure. Graduate thesis work.

Part of our bibliography (among other works by architects –K. Varnelis– or about the Internet infrastructure –T. Arnall, A. Blum–) and published in Clog (2012), this thesis work by R. Donaghy presents an interesting hack of the data center infrastructure (centered on the hardware, and mostly on the object “data center” in this case).

The work is digitally published online on Issuu and can be accessed here (p. 134-150).

Reblog > Power, Pollution and the Internet

Via The New York Times (via Computed·By)

—–

power_cloud

SANTA CLARA, Calif. — Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.

 

The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.

Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.

That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.
