Reblog > Deterritorialized House – Inhabiting the data center, sketches…

By fabric | ch

—–

Alongside the different projects we are undertaking at fabric | ch, we continue to work on self-initiated research and experiments (slowly, way too slowly… time is of course lacking). Deterritorialized House is one of them, introduced below.

 

Reblog > Decentralizing the Cloud: How Can Small Data Centers Cooperate?

Note: while reading last autumn's newsletter from our scientific committee partner Ecocloud (EPFL), among the many interesting papers the center publishes, I stumbled upon this one, written by researchers Hao Zhuang, Rameez Rahman, and Prof. Karl Aberer. It surprised me how their technological goals linked to decentralization seem to raise issues similar to our design ones (decentralization, small and networked data centers, privacy, peer-to-peer models, etc.)! Yet not at as small a scale as ours, which looks rather toward the “personal/small” and “maker community” size. They are instead investigating “regional” data centers, which still count as small once you start talking about data centers.

Inhabiting and Interfacing the Cloud(s) – Talk & workshop at LIFT 15

Note: Nicolas Nova and I will be present at the next Lift Conference in Geneva (Feb. 4-6, 2015) for a talk combined with a workshop and a Skype session with EPFL (a workshop related to the I&IC research project will be finishing at EPFL –Prof. Dieter Dietz’s ALICE Laboratory at the EPFL-ECAL Lab– on the day we present in Geneva). Anyone who follows the research on this blog and will be attending Lift 15, please come see us and exchange ideas!

 

Via the Lift Conference

—–

Inhabiting and Interfacing the Cloud(s)

Workshop
Curated by Lift
Fri, Feb. 06 2015 – 10:30 to 12:30
Room 7+8 (Level 2)
Patrick Keller – Architect (EPFL), founding member of fabric | ch and Professor at ECAL
Nicolas Nova – Principal at Near Future Laboratory and Professor at HEAD Geneva

Workshop description: Since the end of the 20th century, we have been witnessing the rapid emergence of “Cloud Computing”, a new constructed entity that extensively combines information technologies, massive storage of individual or collective data, distributed computational power, distributed access interfaces, security and functionalism.

In a joint design research project that connects the work of interaction designers from ECAL & HEAD with the spatial and territorial approaches of architects from EPFL, we are interested in exploring the creation of alternatives to the current expression of “Cloud Computing”, particularly in its forms intended for private individuals and end users (“Personal Cloud”). The aim is to offer a critical appraisal of this “iconic” infrastructure of our modern age and of its user interfaces, because to date their implementation has followed a logic chiefly of technical development, governed by the commercial interests of large corporations, and continues to be seen largely as a purely functional, centralized setup. However, the Personal Cloud holds a potential that is largely untapped in terms of design, novel uses and territorial strategies.

The workshop will be an opportunity to discuss these alternatives and work on potential scenarios for the near future. More specifically, we will address the following topics:

  • How to combine the material part with the immaterial, mediatized part? Can we imagine the geographical fragmentation of these setups?
  • Might new interfaces with access to ubiquitous data be envisioned that take nomadic lifestyles into account and let us offer alternatives to approaches based on a “universal” design? Might these interfaces also partake of some kind of repossession of the data by the end users?
  • What setups and new combinations of functions need devising for a partly nomadic lifestyle? Can the Cloud/Data Center itself be mobile?
  • Might symbioses also be developed at the energy and climate levels (e.g. using the need to cool the machines, which themselves produce heat, in order to develop living strategies there)? If so, with what users (humans, animals, plants)?

The joint design research Inhabiting & Interfacing the Cloud(s) is supported by HES-SO, ECAL & HEAD.

Interactivity: The workshop will start with a general introduction to the project, then move on to a discussion of its implications, opportunities and limits. A series of activities will then enable break-out groups to sketch potential solutions.

Moving clouds: International transportation standards

As a technical starting point for this research, Patrick Keller already wrote two posts on hardware standards and measures: The Rack Unit and the EIA/ECIA Standards (other articles with a technical overview are the 19 Inch Rack & Rack Mount Cases). With the same intent of understanding the technical standards and limitations that shape the topologies of data centers, we decided to investigate how the racks can be packed, shipped and thus gain mobility. The standards for server transportation safety are set by the Rack Transport Stability Team (RTST) guidelines. Of course, custom-built server packaging also exists, based on the international standards. We’ll start by listing them from the smallest to the largest dimensions. First off, the pallet is the smallest measure. Once installed on pallets, the racks can be placed in standard 20′ or 40′ shipping containers. The image below depicts different ways of arranging the pallets within the container:
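To make the packing constraint concrete, here is a minimal sketch (not part of the original posts) that estimates how many pallet positions fit on the floor of a 20′ or 40′ container. It assumes a EUR-pallet footprint (1.2 × 0.8 m) and nominal internal container dimensions, and uses a simplified single-orientation grid; real loading plans mix pallet orientations and usually fit a few more positions.

```python
# Rough sketch (assumed dimensions, not from the original post): estimating how
# many pallets of racked servers fit on the floor of a standard shipping
# container, using a simple grid layout in one orientation at a time.

PALLET = (1.2, 0.8)  # EUR-pallet footprint in metres (length, width) - assumption
CONTAINERS = {
    "20ft": (5.90, 2.35),   # approximate internal length x width in metres
    "40ft": (12.03, 2.35),
}

def floor_count(container, pallet):
    """Count pallet positions in a grid layout, trying both orientations."""
    cl, cw = container
    pl, pw = pallet
    straight = int(cl // pl) * int(cw // pw)   # pallets aligned with the container
    rotated = int(cl // pw) * int(cw // pl)    # pallets rotated 90 degrees
    return max(straight, rotated)

for name, dims in CONTAINERS.items():
    print(f"{name} container: ~{floor_count(dims, PALLET)} pallet positions (single layer)")
```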

Towards a new paradigm: Fog Computing

[Image: Data Gravity]

 

The Internet of Things is emerging as a model, and the network routing all the IoT data to the cloud is at risk of getting clogged up. “Fog is about distributing enough intelligence out at the edge to calm the torrent of data, and change it from raw data over to real information that has value and gets forwarded up to the cloud,” says Todd Baker, head of Cisco‘s IOx framework. Fog Computing, which is somehow different from Edge Computing (we didn’t quite get how), is definitely a new business opportunity for the company, whose challenge is to package converged infrastructure services as products.
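As a rough illustration of that idea (ours, not Cisco’s IOx API; the function names and figures below are hypothetical), an edge node might collapse a raw sensor stream into a handful of summary values before anything travels over the network to the cloud:

```python
# Minimal sketch of the fog/edge idea: reduce a raw stream of sensor readings
# to compact summaries locally, so the network carries information, not data.
# All names and numbers here are illustrative assumptions.

import random
import statistics

def read_sensor(n=600):
    """Simulate one raw temperature reading per second over ten minutes."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]

def summarize_at_edge(readings):
    """Collapse the raw stream into the few values worth sending upstream."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
        "min": round(min(readings), 2),
    }

raw = read_sensor()
summary = summarize_at_edge(raw)
print(f"raw samples: {len(raw)}, forwarded fields: {len(summary)} -> {summary}")
```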

However, one interesting aspect of this new buzzword is that it adds something new to the existing model: after all, cloud computing is based on the old client-server model, except that the cloud is distributed by nature (ahem, even though the data is centralized). That’s the big difference. There’s a basic rule that sums up the IT industry’s race towards new solutions: Moore’s law. The industry’s three building blocks are storage, computing and networking. While computing power doubles roughly every 18 months, storage follows closely (its exponential curve is almost similar). However, if we graph network growth, it appears to follow a straight line.
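To make that divergence tangible, here is a small illustrative sketch (the growth figures are assumptions, not from the article) contrasting an 18-month doubling curve for compute/storage with a linear curve for network capacity; the widening ratio between the two is what the data-gravity argument in the next paragraph rests on.

```python
# Illustrative sketch (assumed figures): exponential growth of computing/storage
# (Moore's-law-style doubling every ~18 months) versus roughly linear growth of
# network capacity, and the gap that opens up between them over time.

def compute_capacity(years, doubling_months=18):
    """Relative capacity after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

def network_capacity(years, growth_per_year=0.4):
    """Relative capacity growing by a fixed increment each year (linear)."""
    return 1 + growth_per_year * years

for years in (0, 3, 6, 9, 12):
    c = compute_capacity(years)
    n = network_capacity(years)
    print(f"after {years:2d} years: compute/storage x{c:8.1f}  network x{n:5.1f}  gap x{c / n:7.1f}")
```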

Network capacity is a scarce resource, and that is not going to change any time soon: it is the backbone of the infrastructure, built piece by piece with colossal amounts of cables, routers and optical fiber. This constraint forces the industry to find disruptive solutions, and the paradigm arising from the clash between these growth rates now has a name: data gravity.