Reblog > Deterritorialized House – Inhabiting the data center, sketches…

By fabric | ch

—–

Alongside the different projects we are undertaking at fabric | ch, we continue to work on self-initiated research and experiments (slowly, way too slowly… time is of course lacking). Deterritorialized House is one of them, introduced below.

 

 

Some of these experimental works concern the mutating “home” program (considered as “inhabited housing”), which is obviously a historical one for architecture but which is also rapidly changing “(…) under pressure of multiple forces –financial, environmental, technological, geopolitical. What we used to call home may not even exist anymore, having transmuted into a financial commodity measured in sqm (square meters)”, following Joseph Grima’s statement in SQM: The Quantified Home: “Home is the answer, but what is the question?”

In a different line of work, we are looking to build physical materializations in the form of small pavilions for projects such as Satellite Daylight, 46°28′N, while other research focuses on functions: based on live data feeds, how would you inhabit a transformed –almost geo-engineered– atmospheric/environmental condition? Like the one of Deterritorialized Living (night doesn’t exist in this fictional climate, which consists of only one day, with no years, no months, no seasons), the physiological environment of I-Weather, or the one of Perpetual Tropical Sunshine, etc.?

We are therefore very interested in exploring further the ways you would inhabit such singular and “creolized” environments composed of combined dimensions, like some of the ones we’ve designed for installations –yet considering these environments as proto-architecture (architectured/mediated atmospheres) and as conditions to inhabit, looking for their own logic.

 

We are looking forward to publishing the results of these different projects over the course of the year, some as early sketches, some as results, or both. I publish below early sketches of such an experiment, Deterritorialized House, linked to the “home/house” line of research. It is about symbiotically inhabiting the data center… Whether we like it or not, we already inhabit it de facto, as it is a globally spread program and infrastructure that surrounds us; but here we are thinking of physically inhabiting it, possibly making it a “home”, sharing it with the machines…

What happens when you combine a fully deterritorialized program (super- or hyper-modern, “non-lieu”, …) with the one of the home? What might it say or comment about contemporary living? Could the symbiotic relation take advantage of the heat the machines are generating –directly connected to the amount of processing power used–, the quality of the air, the fact that the center must be up and running, possibly lit 24/7, etc.?

 

As we’ll run a workshop next week in the context of another research project (Inhabiting and Interfacing the Cloud(s), an academic program between ECAL, HEAD, EPFL-ECAL Lab and EPFL in this case) linked to this idea of questioning the data center –its paradoxically centralized program, its location, its size, its functionalism, etc.–, it might be useful to publish these drawings, even if still in an early phase (they date back to early 2014; the project has gone back and forth since then and we are still working on it).

 

 

1) The data center level (level -1 or level +1) serves as a speculative territory and environment to inhabit (each circle in this drawing is a fresh-air pipe surrounded by a certain number of computer cabinets –between 3 and 9).

A potential and idealistic new “infinite monument” (global)? It still needs to be decided whether it should be underground, cut off from natural lighting, or fragmented into many pieces and located at altitude (likely, according to our other scenarios, which look for decentralization and collaboration), etc. Both?

Fresh air comes from the outside through the pipes surrounded by the servers and their cabinets (the incoming air could be cooled underground, or be the air found at altitude, in the Swiss Alps –triggering scenarios like cities in the mountains? mountain data farming? Likely too, as we are looking to bring data centers back into small or big urban environments). The computing and data storage units are organized like a “landscape”, trying to trigger different atmospheric qualities (some areas are hotter than others because of the amount of hot air coming out of the data servers’ cabinets, some areas are charged with positive ions, air connectivity is obviously everywhere, etc.)

Artificial lighting follows a similar organization, as the servers’ cabinets need to be well lit. Therefore a light pattern emerges as well on the data center level. Running 24/7, with the need to be always lit, the data center uses a very specific programmed lighting system: Deterritorialized Daylight, linked to global online data flows.

 

 

2) Linked to the special atmospheric conditions found in this “geo-data engineered atmosphere” (the one of the data center itself, level -1 or +1), freely organized functions can be placed at their best-matching locations. There are no thick walls, as the “cabinet islands” act as semi-open partitions.

A program starts to appear that combines the needs of a data center with those of a small housing program immersed in this “climate” (dense connectivity, always artificially lit, a permanent 24°C heat). “Houses” start to appear as “plugs” into a larger data center.

 

 

3) A detailed view (data center, level -1 or +1) of the “housing plug” that combines programs. At this level, an office-administration unit for a small-size data center starts to emerge, combined with a kind of “small office – home office” immersed in this perpetually lit data space. This specific small housing space (a studio, or a “small office – home office”) becomes a “deterritorialized” room within a larger housing program that we’ll find on the upper level(s), likely the ground floor or level +2 of the overall compound.

 

4) Using the patterns that emerge from the different spatial components (heat, light, air quality –dried, charged with positive ions–, wifi connectivity), a map is traced and “moiré” patterns of spatial configurations (“moiré spaces”) start to appear. These define spatial qualities. Functions are “structurelessly” placed accordingly, on a “best matching location” basis (needs in heat, humidity, light, connectivity), which connects this approach to the one of Philippe Rahm, initiated in a former research project, Form & Function Follow Climate (2006), or also to the one of Walter Henn, Bürolandschaft (1963), if not to Junya Ishigami’s Kanagawa Institute.

Note also that this is a line of work we are following in another experimental project at fabric | ch, which we also hope to publish over the course of the year: Algorithmic Atomized Functioning –a glimpse of which can be seen in Desierto Issue #3, 28° Celsius.

 

5) On the ground level or on level +2, the rest of the larger house program and the few parts of the data center that emerge there. There are no heating or artificial lighting devices other than the ones provided by the data center program itself. The energy spent by the data center must serve the house and somehow be spared by it. Fresh and hot zones, artificial light and connectivity, etc. are provided by the data center emergences in the house, as well as by the open “small office – home office” located one floor below. Again, a map is traced and moiré patterns of specific locations and spatial configurations emerge. Functions are also placed accordingly (hot, cold, lit, connected zones).

 

What starts –or tries– to appear is a “creolized” housing object, somewhere in between a symbiotic fragmented data center and a house, possibly sustaining or triggering new inhabiting patterns…

Reblog > Decentralizing the Cloud: How Can Small Data Centers Cooperate?

Note: while reading last autumn’s newsletter from our scientific committee partner Ecocloud (EPFL), among the many interesting papers the center is publishing, I stumbled upon this one written by researchers Hao Zhuang, Rameez Rahman, and Prof. Karl Aberer. It surprised me how their technological goals linked to decentralization seem to question issues similar to our design ones (decentralization, small and networked data centers, privacy, peer-to-peer models, etc.)! Yet not at as small a size as ours, which rather looks toward the “personal/small” and “maker community” scale. They are instead investigating “regional” data centers, which is considered small when you start talking about data centers.

This, combined with the recent developments mentioned by Lucien Langton in his post about Fog Computing, leads us to think that our goals match well with some envisioned technological evolutions of the global “cloud infrastructure”. They seem to be rooted in similar questions.

 

Via p2p-conference.org via Ecocloud newsletter

—–

Abstract

Cloud computing has become pervasive due to attractive features such as on-demand resource provisioning and elasticity. Most cloud providers are centralized entities that employ massive data centers. However, in recent times, due to increasing concerns about privacy and data control, many small data centers (SDCs) established by different providers are emerging in an attempt to meet demand locally.

However, SDCs can suffer from resource in-elasticity due to their relatively scarce resources, resulting in a loss of performance and revenue. In this paper we propose a decentralized cloud model in which a group of SDCs can cooperate with each other to improve performance. Moreover, we design a general strategy function for the SDCs to evaluate the performance of cooperation based on different dimensions of resource sharing. Through extensive simulations using a realistic data center model, we show that the strategies based on reciprocity are more effective than other involved strategies, e.g., those using prediction on historical data.

Our results show that the reciprocity-based strategy can thrive in a heterogeneous environment with competing strategies.

 

More about the paper HERE.

Inhabiting and Interfacing the Cloud(s) – Talk & workshop at LIFT 15

Note: Nicolas Nova and I will be present at the next Lift Conference in Geneva (Feb. 4-6 2015) for a talk combined with a workshop and a Skype session with EPFL (a workshop related to the I&IC research project will be finishing at EPFL –Prof. Dieter Dietz’s ALICE Laboratory at EPFL-ECAL Lab– the day we present in Geneva). If you follow the research on this blog and will be present at Lift 15, please come see us and exchange ideas!

 

Via the Lift Conference

—–

Inhabiting and Interfacing the Cloud(s)

Workshop
Curated by Lift
Fri, Feb. 06 2015 – 10:30 to 12:30
Room 7+8 (Level 2)
Patrick Keller – Architect (EPFL), founding member of fabric | ch and Professor at ECAL
Nicolas Nova – Principal at Near Future Laboratory and Professor at HEAD Geneva

Workshop description: Since the end of the 20th century, we have been seeing the rapid emergence of “Cloud Computing”, a new constructed entity that extensively combines information technologies, massive storage of individual or collective data, distributed computational power, distributed access interfaces, security and functionalism.

In a joint design research project that connects the work of interaction designers from ECAL & HEAD with the spatial and territorial approaches of architects from EPFL, we’re interested in exploring the creation of alternatives to the current expression of “Cloud Computing”, particularly in its forms intended for private individuals and end users (“Personal Cloud”). The aim is to offer a critical appraisal of this “iconic” infrastructure of our modern age and its user interfaces, because to date their implementation has followed a logic chiefly of technical development, governed by the commercial interests of large corporations, and continues to be seen partly as a purely functional, centralized setup. However, the Personal Cloud holds a potential that is largely untapped in terms of design, novel uses and territorial strategies.

The workshop will be an opportunity to discuss these alternatives and work on potential scenarios for the near future. More specifically, we will address the following topics:

  • How to combine the material part with the immaterial, mediatized part? Can we imagine the geographical fragmentation of these setups?
  • Might new interfaces with access to ubiquitous data be envisioned that take nomadic lifestyles into account and let us offer alternatives to approaches based on a “universal” design? Might these interfaces also partake of some kind of repossession of the data by the end users?
  • What setups and new combinations of functions need devising for a partly nomadic lifestyle? Can the Cloud/Data Center itself be mobile?
  • Might symbioses also be developed at the energy and climate levels (e.g. using the need to cool the machines, which themselves produce heat, in order to develop living strategies there)? If so, with what users (humans, animals, plants)?

The joint design research Inhabiting & Interfacing the Cloud(s) is supported by HES-SO, ECAL & HEAD.

Interactivity: The workshop will start with a general introduction to the project and move on to a discussion of its implications, opportunities and limits. Then a series of activities will enable break-out groups to sketch potential solutions.

Moving clouds: International transportation standards

As a technical starting point of this research, Patrick Keller already wrote two posts on hardware standards and measures: The Rack Unit and the EIA/ECIA Standards (other articles with a technical overview are the 19 Inch Rack & Rack Mount Cases). With the same intent of understanding the technical standards and limitations that shape the topologies of data centers, we decided to investigate how the racks can be packed, shipped, and gain mobility. The standards for server transportation safety are set by the Rack Transport Stability Team (RTST) guidelines. Of course, custom-built server packaging exists based on the international standards. We’ll start by listing them from the smallest to the biggest dimensions. First off, the pallet is the smallest measure. Once installed on pallets, the racks can be placed in standard 20′ or 40′ shipping containers. The image below depicts different ways of arranging the pallets within the container:

Sizes for shipping euro and standard pallets

The pallets fit one server rack each. The diagrams below show the specs for both types of shipping containers.

20′ and 40′ standard containers: internal and external dimensions

However, it is also possible to ship server racks via air freight, which in this case uses ULD (Unit Load Device) containers. ULDs, however, come in many different sizes. For the complete list of ULD standards, click here.

Air cargo ULD containers: LD-29 Reefer dimensions

Once on land, shipping containers are either trucked or moved by rail to their destination. Again, several standards of wagons exist depending on weight and/or capacity. A general overview can be consulted here.

It also appears that Dell has been working on what it calls the Tactical Mobile Data Center, a low-energy-consumption autonomous data center designed for air-freight transport to military areas and quick deployment.

Towards a new paradigm: Fog Computing

Data gravity (diagram)

 

The Internet of Things is emerging as a model, and the network routing all the IoT data to the cloud is at risk of getting clogged up. “Fog is about distributing enough intelligence out at the edge to calm the torrent of data, and change it from raw data over to real information that has value and gets forwarded up to the cloud”, says Todd Baker, head of Cisco‘s IOx framework. Fog Computing, which is somehow different from Edge Computing (we didn’t quite get how), is definitely a new business opportunity for the company, whose challenge is to package converged infrastructure services as products.

However, one interesting aspect of this new buzzword is that it adds something new to the existing model: after all, cloud computing is based on the old client-server model, except that the cloud is distributed by nature (ahem, even though data is centralized). That’s the big difference. There’s a basic rule that sums up the IT industry’s race towards new solutions: Moore’s law. The industry’s three building blocks are storage, computing and network. As computing power doubles every 18 months, storage follows closely (its exponential curve is almost the same). However, if we graph network growth, it appears to follow a straight line.

Network capacity is a scarce resource, and that’s not going to change any time soon: it’s the backbone of the infrastructure, built piece by piece with colossal amounts of cables, routers and fiber optics. This constraint forces the industry to find disruptive solutions, and the paradigm arising from the clash between these growth rates now has a name: data gravity.

Data gravity is obviously what comes next after 20 years of anarchic internet content moderation. Have you ever heard of big data? Besides being the IT industry’s favorite buzzword alongside “cloud computing”, there’s a reason we don’t call it “infinite data”: the data is sorted byte after byte. The problem is that, right now, everything is sorted in the cloud, which means you have to push all this data up just to get the distilled big-data feedback down. If you think about your cell phone and the massive amounts of data it generates, sends out around the planet and receives in return, there’s something not quite energetically efficient about it.

For every search query, shopping cart filled, image liked and post reblogged, our data travels thousands of kilometers to end up as a simple piece of feedback sent back to our device. In the long run, the network simply won’t be able to cope with these massive amounts of data transit. Data gravity is the following concept: the closer the data is to the emission source, the heavier it is. If we take the analogy of the cloud, we could see data as a liquid which needs to be “evaporated” in order to be pushed to the cloud, where it can be compared and assimilated to big data, the correct answer then being pushed back to the device in return.

Data gravity (diagram)

If fog computing is necessary, it is precisely because a solution is needed to distill the huge amounts of data generated “closer to the ground”. But in what physical form will it come to exist? Interestingly, it was difficult to find anybody in the field interested in this question. While the idea of creating data treatment facilities closer to users is popular, nobody seems to care about the fact that this “public infrastructure” is invisible. Indeed, the final aim, it seems, is to add a layer to the back-end of user technology, not to bring it closer to the user in terms of visibility. Rather the opposite: it seems we all still believe in security through opacity, even when the industry’s giants are going open source. The amounts of money engaged in this new paradigm are colossal, Cisco’s Technology Radar assures (rather opaquely, by the way).

 

Fog computing (diagram)

 

We will keep a close eye on the trends related to Fog and Cloud. However, it is essential to stress that fog computing will not replace cloud computing. It is indeed a new model, but it is aimed at extending the cloud and decentralizing its extremities rather than changing the architecture of the whole infrastructure. While our main object of study remains the cloud, which is the final abstraction of computing in terms of distance to users (both physical and in terms of cognitive familiarity), it is also important for us to map out what comes in between the two. As designers and architects, our work is to build intuitive ways of interacting with reality’s abstractions through objects. But @mccrory, who came up with the concept of data gravity, also set up a definition of it as a “formula”; perhaps it helps.

 


 

––

Images credits:

Lucien Langton, I&IC Research Project

https://techradar.cisco.com/trends/Fog-Computing#prettyPhoto

http://datagravity.org/

 

Cookbook > Setting up your personal Linux & OwnCloud server

Note: would you like to install your personal open source cloud infrastructure, maintain it, manage your data by yourself and possibly develop artifacts upon it, like we needed to do in the frame of this project? If the answer is yes, then the step-by-step recipe below explains how to do it. The proposed software for cloud-like operations, ownCloud, has been chosen among several alternatives. We explained our (interdisciplinary) choice in this post, commented here. It is an open source system with a wide community of developers (but no designers yet).

We plan to publish some additional Processing libraries later on –in connection with this open source software– following one of our research project’s objectives: to help gain access to (cloud-based) tools.

Would you then also like to “hide” your server in a traditional 19″ cabinet (in your everyday physical or networked vicinity)? Here is a post that details this operation and what can possibly be “learned” from it –“lessons” that will become useful when it comes to possible cabinet alternatives.

 

A) Linux Server

1 server:

  • 64-bit CPU
  • 8 to 16 GB of memory
  • 1 to 4 TB of disk (can be duplicated in order to set up a RAID array and obtain built-in physical redundancy/backup)
  • 1 x screen
  • 1 x USB keyboard
  • 1 x USB mouse
  • (optional) – dual electric inputs for backup purpose
  • (optional) – dual network interfaces for backup purpose or network speed optimization

1 operating system (prefer Linux if you want to stick to open source projects). CentOS is a good Linux distribution to consider, and it is very well documented. You can usually download the installer from the Internet and burn it to a CD or DVD, or even write it to a USB key, as installation media.
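If you go the USB route, a minimal sketch of writing the installer image from another Linux machine is shown below; the ISO filename and the /dev/sdX device path are placeholders to adapt to your own download and USB key (double-check the device name, as dd overwrites it entirely):

    # Identify your USB key first (check device names and sizes carefully)
    lsblk

    # Write the CentOS installer ISO to the USB key (placeholders to adapt)
    sudo dd if=CentOS-installer.iso of=/dev/sdX bs=4M
    sync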

-

How To:

Plug all the wires in your brand new (or old) server, insert the Linux installer’s CD or DVD and switch the computer on.

After showing some hardware check information, it should boot up on the CD/DVD. The CentOS installer will appear and a few basic questions will be asked (language, time zone, network, etc.). The CentOS installer is well documented and proposes a set of default choices for each decision.

CentOS will then propose a set of predefined configurations, ranging from a very basic installation to a fully loaded server (which includes the setup of a web server, domain name server, etc.). Choose the configuration that fits your needs. Keep in mind that you can always add and remove features afterwards, at any time, so the basic desktop is always a good choice. Prefer to add features step by step, when you are sure that you really need them. Installing useless services can easily lead to security issues by making your server expose unsecured features. You should always be able to know the exact list of services your server exposes, as sketched below.
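As a minimal sketch of this “step by step” approach, using the same CentOS 6-era tools referred to throughout this post (yum, service, chkconfig; newer systems replace the last two with systemctl):

    # List installed services and whether they start at boot
    chkconfig --list

    # Check the status of a given service (httpd is just an example)
    sudo service httpd status

    # Add a feature only when you actually need it, remove it when you don't
    sudo yum install httpd
    sudo yum remove httpd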

Once the configuration you have chosen is completely installed, the server will reboot, and a few moments later you will be facing the CentOS GUI, able to log in with the credentials you defined during the installation process. Welcome to your first server.

 

B) ownCloud

Before being able to install ownCloud on your server, you will need to set up several services usually summarized by the acronym LAMP. Each letter is associated with one piece of software. We will go through some explanations, as it is always better to understand as precisely as possible what we are doing while setting up a LAMP bundle.

1 x Linux:

L stands for Linux, and we already have one, having installed the CentOS operating system. This prerequisite was already explained and addressed in the previous part of this post.

1 x Apache:

A stands for Apache. Apache is one of the most used web servers. Like Linux, it is open and free to use. A web server is basically what distributes web content to you. As soon as you want to access a web link/URL beginning with http or https, your request is intercepted and treated by a web server. You ask for something, and the web server makes it available on your preferred Internet device a few seconds later. While performing a seemingly basic task, a web server can be tricky to configure. There are many parameters, and some of them are linked to security issues. As a web server, in a way, distributes the content hosted on your server to the whole world, it is very important to make sure you give access only to specific parts of your server and not to the entire content of your hard drive (unless you would like to open it up fully).

1 x MySQL:

M stands for MySQL. MySQL is a database server. It is like a web server, but dedicated to databases. Databases are usually organized as a set of tables, each table composed of a set of data fields. A data field can be a text, a number, a date, a unique reference number, etc., a database being a collection of records. For example, this web site is a database filled with a collection of posts, one post being basically defined by its author, date and time, categories and content. MySQL is quite straightforward to install, with a few steps to secure it.

1 x PHP:

P stands for PHP. PHP is a programming language, also called a server-side scripting language. A basic web page can be composed of static HTML tags that need to be delivered to a web browser and interpreted by it in order to visually display these tags as a “page”. Such a static web page is stored on the web server with the exact same content it is delivered with. It means that delivering this content can be summarized as the simple act of sending the HTML page from the web server to the user’s web browser.
Yet nowadays, the content of a web page can be made out of dynamic or live data extracted from a database, or it can be the result of some computation processes performed server-side (i.e. on the web server). That’s where a server-side scripting language like PHP is needed. Within a dynamic web page that still needs to be delivered, one can then have PHP instructions that will probe a database to extract some data, add these data to the distributed web page and send the final formatted page. The execution of the PHP instructions is initiated by the Apache web server, and PHP instructions may probe the MySQL database when needed.

Thus you now have an overall picture of the role of each LAMP module. ownCloud (as a web site) will use Apache to show users’ content and distribute/share files, with users’ data, files and file metadata being stored in ownCloud’s MySQL database, and PHP to filter the distributed information, check login credentials, etc.

-

How To:

Install Apache. You will have to open a terminal. If you chose the Minimal Desktop, you can go to Applications/System Tools/Terminal. That’s basically a command shell where you can invoke Linux commands. For example, typing 'ls'+ENTER will show the current directory’s content, 'pwd'+ENTER will show the current directory, etc. (all Linux commands here). Let’s install the Apache web server components by typing 'sudo yum install httpd'+ENTER. It will display the packages to be downloaded and installed; just answer 'yes' when prompted and Apache will be downloaded and installed. Once finished, type 'sudo service httpd start'+ENTER; this will start the Apache web server. Then open Firefox on your server and try to access http://127.0.0.1: you should see the default Apache welcome page. In order to make the Apache web server start automatically after (re)booting your server, type 'sudo chkconfig httpd on'+ENTER.
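The same Apache steps, gathered into one short shell sequence (CentOS 6-style commands, exactly as described above; the final check assumes curl is installed, otherwise use Firefox as mentioned):

    # Install the Apache web server
    sudo yum install httpd

    # Start it now…
    sudo service httpd start

    # …and make it start automatically at (re)boot
    sudo chkconfig httpd on

    # Quick local check: should return the default Apache welcome page
    curl http://127.0.0.1/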

Install MySQL. Still within a terminal window, type 'sudo yum install mysql-server'+ENTER. As previously with Apache, the needed MySQL components will be listed; just answer 'yes' when prompted to start downloading and installing MySQL. Once finished, type 'sudo service mysqld start'+ENTER in order to start the MySQL server. You then need to secure MySQL by typing 'sudo mysql_secure_installation'+ENTER. When prompted for the current root password, leave it blank and press ENTER. You will be asked to define the new root password, so take careful note of the one you choose: it will be your key to your MySQL server. Then systematically answer 'y' to the 4 or 5 following questions, and you will be done with your MySQL server. In order to make the MySQL server start automatically after (re)booting your server, type 'sudo chkconfig mysqld on'+ENTER.
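Again, the same MySQL steps collected as one sequence (CentOS 6-style commands, as in the text):

    # Install and start the MySQL server
    sudo yum install mysql-server
    sudo service mysqld start

    # Secure the installation: set the root password, then answer 'y' to the prompts
    sudo mysql_secure_installation

    # Make the MySQL server start automatically at (re)boot
    sudo chkconfig mysqld on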

Install PHP. Still within a terminal window, type 'sudo yum install php php-mysql'+ENTER. As previously with MySQL, the needed PHP components will be listed; just answer 'yes' when prompted to start downloading and installing PHP. You are done with PHP. As PHP is just a program and not a server, there is no need to make it start at reboot, etc. Keep in mind that PHP is made of several modules that can be installed or removed. By typing 'yum search php-'+ENTER you will see the available modules. When specific modules are needed, you can always install them via the command 'sudo yum install PHPModuleName'+ENTER.

You are done with LAMP. The path to your web sites should be /var/www/html. In order to test PHP, create a PHP file from a terminal window again by typing 'sudo nano /var/www/html/info.php'+ENTER. It will open a text editor; type '<?php phpinfo(); ?>' and save the file. Then, from a local web browser, you should be able to access http://127.0.0.1/info.php. It will display an overall set of PHP information. Apache and PHP are operational.
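The PHP installation and test, collected as one sequence; the Apache restart is an extra step not detailed above but commonly needed so that the web server picks up the newly installed PHP module, and the echo/tee line is simply a one-line equivalent of the nano edit described:

    # Install PHP and its MySQL module
    sudo yum install php php-mysql

    # Restart Apache so that it loads the PHP module
    sudo service httpd restart

    # Create the test page (equivalent to the nano step above)…
    echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php

    # …and check it (or open http://127.0.0.1/info.php in Firefox)
    curl http://127.0.0.1/info.php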

Install ownCloud. Download the ownCloud archive and extract it to your /var/www/html directory. Once you have a directory like /var/www/html/owncloud, you will be able to follow ownCloud’s installation wizard via the web address http://127.0.0.1/owncloud. You can refer to the online documentation for the setup steps. When the database choice is prompted, choose of course the MySQL option. Then, at some point, if you want your ownCloud publicly available, you will have to subscribe for a domain name (like mydomainname.org) and configure your Apache web server accordingly so that it responds to http://www.mydomainname.org. You can then choose to make this web address point directly to your ownCloud server, or keep this web address for another web site and make ownCloud available via an address like http://www.mydomainname.org/owncloud/. But these choices depend on what you want to achieve.
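A sketch of the download/extract step; the archive name is a placeholder for whatever ownCloud release you downloaded, and the chown to the apache user is a commonly required permission step (an assumption here, not detailed in the wizard instructions above):

    # Extract the downloaded ownCloud archive into the web root
    # (owncloud-x.y.z.tar.bz2 is a placeholder for the actual release file)
    sudo tar -xjf owncloud-x.y.z.tar.bz2 -C /var/www/html/

    # Let the Apache user own the directory so the installation wizard can write its configuration
    sudo chown -R apache:apache /var/www/html/owncloud

    # Then point a browser to http://127.0.0.1/owncloud and follow the wizard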

 

Congratulations, you are now ready to play with your own personal cloud service!

 

As already stated, more Cookbooks should come in the near future under http://www.iiclouds.org/category/cookbooks/. Their purpose will be to help you work with this infrastructure and handle your data, so as to set up your own design projects that will tap into this infrastructure and transform it. Don’t forget also that an API already exists to help you develop your applications for ownCloud.
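As a small, hedged example of tapping into this infrastructure from your own scripts or design sketches: an ownCloud installation typically also exposes its files over WebDAV, so something along these lines should work (server address, user and file names are placeholders; check your own installation and ownCloud’s documentation for the exact endpoint):

    # Upload a file to your personal cloud over WebDAV (placeholders to adapt)
    curl -u myuser:mypassword -T ./data.csv \
      "https://www.mydomainname.org/owncloud/remote.php/webdav/data.csv"

    # List the contents of a remote folder
    curl -u myuser:mypassword -X PROPFIND \
      "https://www.mydomainname.org/owncloud/remote.php/webdav/"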

 

Setting up our own (small size) personal cloud infrastructure. Part #3, reverse engineer the “black box”

 

At a very small scale and all things considered, a computer “cabinet” that hosts cloud servers and services is a very small data center, and is in fact quite similar to large ones in its key components… (to anticipate the comments: we understand that these large ones are of course much more complex, more edgy and harder to “control”, more technical, etc., but again, not so fundamentally different from a conceptual point of view).

 


Documenting the black box… (or un-blackboxing it?)

 

You can definitely find similar concepts that are “scalable” between the very small –personal– and the extra large. Therefore the aim of this post, following two previous ones about software (part #1) –with a technical comment here– and hardware (part #2), is to continue documenting and “reverse engineering” the setup of our own (small size) cloud computing infrastructure, and what we consider the basic key “conceptual” elements of this infrastructure –the ones that we’ll possibly want to reassess, reassemble in a different way or question later during the I&IC research.

However, note that a meaningful difference between the big and the small data center would be that a small one could sit in your own house or small office, or physically find its place within an everyday situation (becoming some piece of mobile furniture? something else?) and be administrated by yourself (becoming personal). Besides the fact that our infrastructure offers server-side computing capacities (and is therefore different from a Network Attached Storage), this is also a reason why we’ve picked this type of infrastructure and configuration to work with, instead of a third-party API (e.g. Dropbox, Google Drive, etc.) with which we wouldn’t have access to the hardware parts. This system architecture could then possibly be “indefinitely” scaled up by getting connected to similar distant personal clouds in a highly decentralized architecture –as ownCloud now seems to allow, with its “server to server” sharing capabilities.

See also the two mentioned related posts:

Setting up our own (small size) personal cloud infrastructure. Part #1, components

Setting up our own (small size) personal cloud infrastructure. Part #2, components

 

For our own knowledge and desire to better understand, document and share the tools with the community, but also to be able to run this research in a highly decentralized way and keep our research data under our control, we’ve set up our own small “personal cloud” infrastructure. It uses a Linux server and the ownCloud (“data, under your control”) open source software installed on RAID computing and data storage units, within a 19″ computer cabinet. This setup will help us exemplify in this post the basic physical architecture of the system and learn from it. Note that the “Cook Book” for the software setup of a personal cloud similar to ours is accessible here.

Before opening the black box though, which as you can witness has nothing to do with a clear blue sky (see here too), let’s mention one more time the resource that fully documents the creation of open-sourced data centers: the Open Compute Project (surprisingly initiated by Facebook to celebrate their “hacking background” –as they stated it– and continuously evolving).

 

So, let’s first access the “secured room” where our server is located, then remove the side doors of the box and open it…

 


Standardized

This is the server and setup that currently hosts this website/blog and the different cloud services that will be used during this research. Its main elements are quite standardized (according to the former EIA standards –now ECIA–, especially norm EIA/ECA 310E).

It is a standardized 19-inch computer cabinet, 600 x 800 mm (finished horizontal outside dimensions). Another typical width for a computer cabinet is 23 inches. These dimensions reflect the width of the inner equipment, including the holding frame. Our cabinet is of middle height, composed of 16 rack units (16U); a very typical height for a computer cabinet is 42U. A rack unit (U) is 1.75 inches (or 4.445 cm), so our 16U cabinet offers roughly 16 × 4.445 ≈ 71 cm of usable mounting height. The rack unit module defines in turn the sizes of the physical material and hardware that can be assembled onto the railings. Servers, routers, fans, plugs, etc. and computing parts therefore need to fit into this predefined module and are sized vertically in “U”s (1U, 2U, 4U, 12U, etc.).

-


Networked (energy and communication)

Even if it does not follow the security standards at all in our case… our hardware is nonetheless connected to the energy and communication networks through four plugs (redundant electric and RJ45 plugs).

-


Mobile

Generally heavy (made out of steel… for no precise reason), the cabinet is nonetheless usually mobile. It has now become a common product that enters the global chains of goods, so that before you can eventually open it like we just did, it already came into place through container ships, trains, trucks, forklifts, pallets, hands, etc. But not only that… once in place, it will still need to be moved, opened, refurbished, displaced, closed, replaced, etc. If you’ve tried to move it by hand once, you won’t want to do it a second time because of its weight… Once placed in a secured data center (in fact its bigger-size casing), metal sides and doors might be removed, as the building itself can serve as its new and stronger “sides” (and protect the hardware from heat, dust, electrostatics, physical depredation, etc.)

-


“False”

The computer cabinet (or the data center) usually needs some sort of “false floor” or a “trap” that can be opened and closed regularly for the many cables that need to enter the cabinet. It needs a “false ceiling” too, to handle the warmed and dried air –as well as additional cabling and pipes– before ejecting it outside. The “false floor” and “false ceiling” are usually also used for air flow and cooling needs.

-


“Porous” and climatically monitored, for facilitated air flow (or any other cooling technology, water through pipes would be better)

Temperature and atmosphere monitoring, air flows and the avoidance of electrostatics are of high importance. Hardware doesn’t tolerate high temperatures and either goes down or wears out more quickly. Air flow must be controlled and facilitated: the air gets hot and dry while cooling the machines, and it might get charged with positive ions, which in turn attract dust and increase electrostatics. The progressive heating of air triggers its vertical movement: “fresh” and cool(ed) air enters from the bottom (“false floor”) and is directed towards the computer units to cool them down (if the incoming air is not fresh enough, it will need artificial cooling –27°C seems to be the very upper limit before cooling, but most operators use a 24-25°C limit or less), then gets heated by the process of cooling the servers. Being lighter for the same volume, the warm air moves upward and needs to be extracted, usually by mechanical means (fans, in the “false ceiling”). In bigger data centers, cool and hot “corridors” might be used instead of floors and ceilings, which in turn define whole hot and cold areas (in sqm).

To help air flow through in a better way, cabling, furniture and architecture must facilitate its movement and not become obstacles by any means.

-


Wired (redundant)

Following the need for “redundancy” (which should guarantee that downtime is avoided as much as possible), the hardware parts need two different energy sources (the plugs in the image), as well as two different network accesses (RJ45 plugs). In the case of big data centers with TIER certification, these two different energy sources (literally two different providers) need to be backed up by two additional autonomous (oil-fueled) generators.

-


Wired (handling)

The more hardware there is, the greater the need to handle the cables. 19″ cabinets usually have enough free space on their sides and back, between the computing units and the metallic sides of the cabinet, as well as enough handling parts, to allow for massive cabling.

-


Redundant (RAID hardware)

One of the key concepts of any data center / cloud computing architecture is an almost paranoid concern about redundancy (to at least double any piece of hardware, software system and stored data), so as to avoid any loss or downtime of the service. One question that could be asked here: do we really need to keep all these (useless) data? This concern about redundancy is especially expressed, for hardware, in the contemporary form of RAID architectures, which ensure that any content is copied onto two hard disks working in parallel. If one goes down, the service is maintained and still accessible thanks to the second one.

In the case of the I&IC cloud server, we have 1 x 2 TB disk for the system only (Linux and ownCloud) –which isn’t mounted in a RAID architecture, but should be to guarantee maximum “Uptime”–, 2 x 4 TB RAID disks for data, and 1 x 4 TB for local backup –which should be duplicated in a distant second location for security reasons. Note that our server has a size of 1U. A way to check the state of such an array is sketched below.
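If the array is handled by Linux software RAID (an assumption on our part; a hardware RAID controller would ship with its own monitoring tool), its health can be checked like this:

    # Overview of all software RAID arrays and their sync state
    cat /proc/mdstat

    # Detailed status of a given array (/dev/md0 is a placeholder)
    sudo mdadm --detail /dev/md0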

Note that the more the computing units are running, the hotter the hardware gets, and therefore the greater the need for cooling.

Therefore…

Redundant bis (“2” is a magic number)

All parts and elements generally come in twos (redundancy): two different electricity providers, two electric plugs, two Internet providers, two RJ45 plugs, two backup oil generators, RAID (parallel) hard drives, two data backups, etc. Even two parallel data centers?

-

Virtualized (data) architecture

Servers don’t need to be physical anymore: virtualize them, manage them and install many of these virtual machines on physical ones.

-


ownCloud desktop interface

Interfaced

Any data center needs to be interfaced in order to manage its operation (both from an “end user” perspective, which should remain as “user friendly” as possible, and from its “control room”). Our system remains basic and just needs a login/password, a screen, a keyboard and a mouse to operate, but it can become quite complicated… (googled here).

 

We’ve almost finished our tour and can now close back the box …

 


 

Yet let’s consider a few additional elements…

 


Controlled physical access

Physical access to the hardware, software, inner space and “control room” (interface) of the “data center” is not public and is therefore restricted. For this function too, a backup solution (a basic key in our case) must be managed.

Note that while we’re closing the cabinet back up, we can underline one last time what seems obvious: the architecture, or the casing, is indeed used to secure the physical parts of the infrastructure, but its purpose is also to help filter the people who can access it. The “casing” is, so to speak, both a physical protection and a filter.

-


Hidden or “Furtive”?

The technological arrangement undoubtedly wishes to remain furtive, or at least to stay unnoticed, especially when it comes to security issues.

-


Unexpressive, minimal

The “inexpressiveness” of the casing seems to serve two purposes.

The first one is to remain discreet, functional, energy efficient, inexpensive, normed and mostly unnoticed. This is especially true for the architecture of large data centers, which could almost be considered a direct scaling-up of the “black box” (or the shoe box) exemplified in this post. It could explain their usually very poor architecture.

The second one is a bit more speculative at this stage, but the “inexpressiveness” of the “black box” and the mystery that surrounds it remain the perfect support for technological phantasms and projections.