OpenCloud (Academic Research) Mesh

Note: When we had to pick an open source cloud computing platform at the start of our research, we dug around for some time to find the one that would best match our planned activities. We chose ownCloud and explained our choice, as well as some of its identified limitations, in a previous post. Early this year came the announcement by ownCloud that it will initiate “Global Interconnected Private Clouds for Universities and Researchers” (with early participants such as CERN, ETHZ, SWITCH, TU-Berlin, the University of Florida, the University of Vienna, etc.) So it looks like we’ve picked the right open platform! Especially because they are announcing a mesh layer on top of different clouds to provide common access across globally interconnected organizations.

This confirms our initial choice and the need to bridge it with the design community, especially as this new “mesh layer” is added to ownCloud, which was something missing when we started this project (though this kind of scalability became available from ownCloud version 7.0). It now certainly allows what we were looking for: a network of small and personal data centers. The question then comes back to design: if personal data centers are no longer big, undisclosed or distant facilities, what could they look like? For what types of uses? If the personal applications are not “file sharing only” oriented, what could they become? For what kinds of scenarios?

 

[Image: OpenCloudMesh]

 

By ownCloud

ownCloud Initiates Global Interconnected Private Clouds for Universities and Researchers

Leading research organizations in the Americas, Europe and Asia/Pacific join to create world’s largest public private cloud mesh.

 

Lexington, MA – January 29, 2015 – ownCloud, Inc., the company behind the world’s most popular open source file sync and share software, today announced an ambitious project that for the first time ties together researchers and universities in the Americas, Europe and Asia via a series of interconnected, secure private clouds.

OpenCloudMesh, a joint international initiative under the umbrella of the GÉANT Association, is built on ownCloud’s open Federated Cloud sharing application programming interface (API) taking Universal File Access beyond the borders of individual Clouds and into a globally interconnected mesh of research clouds — without sacrificing any of the advantages in privacy, control and security an on-premises cloud provides. OpenCloudMesh provides a common file access layer across an organization and across globally interconnected organizations, whether the data resides on internal servers, on object storage, in applications like SharePoint or Jive, other ownClouds, or even external cloud systems such as Dropbox and Google (syncing them to desktops or mobile apps, making them available offline).

“Research labs and universities are by nature social institutions – collaborating, communicating and testing – but at the same time these same institutions must be very protective of their students, researchers and research. This often puts them at the cutting edge of technology,” said Frank Karlitschek, CTO and co-founder of ownCloud. “OpenCloudMesh gives each organization private cloud file sync and share, while Federated Cloud sharing, also known as server-to-server sharing, enables safe sharing between those clouds. The possibilities are unlimited not just for researchers and universities, but for enterprises large and small as well.”

“We are at a critical juncture in cloud computing,” said Peter Szegedi, Project Development Officer, Management Team, GÉANT Association. “There is no longer a need to choose between privacy and security and collaboration and ease of use. We believe OpenCloudMesh will redefine the way people use the cloud to share their important files.”

This open API ensures secure yet transparent connections between remote on-premises cloud installations. A first draft of this OpenCloudMesh API specification will be published early this year and participation in developing and refining the API is open to all.

To date, 14 organizations have signed up to participate, including:

 

Get Involved
For more information, or for researchers and universities interested in getting involved, please visit https://owncloud.com/opencloudmesh/.

ownCloud protects sensitive corporate files, while providing end users with flexible and easy access to files, from any device, anywhere. Federated Cloud sharing enables users on one ownCloud installation to seamlessly share files with users on a different ownCloud installation without using shared links. Both users retain the privacy and control of a private ownCloud, and gain the flexibility and ease-of-use of a public cloud.

About GÉANT Association
GÉANT is the pan-European research and education network that interconnects Europe’s National Research and Education Networks (NRENs). Together we connect over 50 million users at 10,000 institutions across Europe, supporting research in areas such as energy, the environment, space and medicine.

About ownCloud, Inc.
Based on the popular ownCloud open source file sync and share community project, ownCloud Inc. was founded in 2011 to give corporate IT greater control of their data and files — providing a common file access layer across an organization, enabling file access from any device, anytime, from anywhere, all completely managed and controlled by IT. Company headquarters are in Lexington, MA, with European headquarters in Nuremberg, Germany. For more information, visit: http://www.owncloud.com.

World Brain: a journey through data centers


“World Brain” by Stéphane Degoutin and Gwenola Wagon (2015)

 


World Brain proposes a stroll through motley folkloric tales: data centers, animal magnetism, the Internet as a myth, the inner lives of rats, how to gather a network of researchers in the forest, how to survive in the wild using Wikipedia, how to connect cats and stones…
The world we live in often resembles a Borgesian story. Indeed, if one wanted to write a sequel to Borges’ Fictions, one could do so simply by putting together press articles.
World Brain is made mostly of found materials: videos downloaded from YouTube, images, scientific or pseudo-scientific reports, news feeds… […] World Brain takes the viewer on a journey through the physical places through which the Internet transits: submarine cables, data centers, satellites. The film adopts the point of view of the data. The audience views the world as if they were information, crossing the planet in an instant, copied in an infinite number of instances or, on the contrary, stored in secret places.

 

More projects by S. Degoutin and G. Wagon on their Nogovoyage website.

Cloud Computing design exhibit in Saint Etienne

The Cité du Design in Saint Etienne (France) had an exhibit about cloud computing a few months ago. It was part of an initiative by Orange, the French telco, which asked design students to speculate about “the personal digital space of tomorrow.” The questions they addressed are the following:

What new uses? How to organize this space for storing personal data? How to avoid being overwhelmed by all the content that we unwittingly store in it on a daily basis? How to make the memories that we capture on video and in photos more accessible? How can we easily send all or part of this special private space to the people we love? Can we find a new material or emotional value for this data?


[Images: the Orange “cloud” exhibit and its booklet]

Interestingly, the booklet – and the work shown in the exhibit – focuses less on the infrastructural and hardware components than on the service/interface layer. Given that it is a project conducted by interaction designers, this is not absurd; but it may show the difficulty of addressing devices and infrastructures (even though they are an important component at stake in the design of cloud computing services). It is as if the “cloud” infrastructure were a given that should not be reconsidered.

(The reasons why an I&IC’s) OwnCloud Core Processing Library

Besides the reflection produced by the overall Inhabiting & Interfacing the Cloud(s) project and the related necessity to provide “access to tools” to a larger community (largely described in the founding document of the project and in a former post about the setting up of this library), new paradigms may arise in the global organization of server farms. These new paradigms may in return generate new ways to organize files on cloud servers (through different control of the redundancy principle, for example, or a different use of file duplication, etc.), allowing for new projects.

In order to address the stakes of the I&IC design research and to prepare such outputs/proposals, we have developed the OwnCloud Core Processing Library, which allows us to set up a software layer on top of the hardware layer.

 

To download and learn how to use the OwnCloud Core Processing Library, we’ve prepared a post in the Cook Books section of this site.

 


 

The current, common use of file sharing systems and their associated web interfaces or applications basically lets people store and synchronize files blindly, without any control or optimization of what is transferred, when and why. This is where the OwnCloud Core Processing Library proposes tools to tune this “what, when and why”, to help manage files stored in cloud infrastructures in different ways, or even to handle the dispatching of files within the new cloud organizations this project may propose.

With the overall set of bricks and elements we have already set up in the context of the I&IC research (the OwnCloud server setup, the OwnCloud Core Processing Library), we are now ready to assemble these bricks in many different ways, proposing alternatives to the now classic server farm architecture. Automated processes based on the OwnCloud Core Processing Library can tag OwnCloud contents, making it possible to base the decision to synchronize, duplicate or share a file on any kind of data other than just the modification time stamp. File transmission to and from the cloud may be decided by autonomous processes based on the user’s points of interest, current device, location, etc., all in conjunction with the solutions proposed by I&ICloud(s).
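To give a rough idea of what such an autonomous process could look like, here is a minimal, purely illustrative Processing sketch. It uses only the OCServer calls documented in the Cookbook below (SetServer(), SetAccess(), fileExists(), getContentList()); the "night-time only" rule and the /fieldnotes/ folder are hypothetical placeholders, not features of the library.

—–

import ch.fabric.processing.owncloud.OCServer;

// My main access to the OwnCloud server
OCServer _myOCServer;

void setup() {
  // —– Connect to the OwnCloud server (placeholder address and credentials)
  _myOCServer = new OCServer();
  boolean resB = _myOCServer.SetServer("data.iiclouds.org");
  if (!resB) {
    println("Could not reach the OwnCloud server.");
    return;
  }
  _myOCServer.SetAccess("MyLoginHere", "MyPasswordHere");

  // —– Illustrative "when": only act on the cloud outside working hours
  if (hour() >= 8 && hour() <= 18) {
    println("Working hours: leaving the cloud alone.");
    return;
  }

  // —– Illustrative "what": check a hypothetical folder before considering any transfer
  int resI = _myOCServer.fileExists("/fieldnotes/");
  if (resI == OCServer.FILE_EXISTS) {
    // —– List the root directory's content as candidates for synchronization
    String[] myContent = _myOCServer.getContentList();
    println("Root directory holds " + myContent.length + " entries to consider for syncing.");
  } else if (resI == OCServer.NETWORK_ERROR) {
    println("Network problem while accessing OwnCloud.");
  }
}

—–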

Cookbook > How to set up Processing to use the OwnCloud Core Processing Library

We will describe how to use the OwnCloud Core Processing Library within the Processing framework, starting from a blank sketch. The library’s functions will be refined and new ones may be developed; some additional libraries will be added as well in order to propose higher-level functions more deeply linked to the IICloud(s) project.

 


 

1 OwnCloud server: your OwnCloud server should be reachable via either http or https, at an address like http://www.MyOwnCloudServer.org or https://www.AnotherOwnCloudServer.com, etc. Another Cook Book has been written about how to install a personal OwnCloud server.

1 Processing: please refer to the Processing installation guidelines in order to properly install the Processing framework if you do not already have one ready to use.

1 OwnCloud Core Processing Library: download the current version of the library. Extract the zip file, which includes the needed library file (a .jar file) and some documentation files.

-

How To:

Launch Processing

Start a blank sketch via the menu File > New

Add the OwnCloud Core Processing Library to your Processing sketch via the menu Sketch > Add File… and point to the .jar file you just downloaded.

Insert the following line at the top of your sketch file:

—–

import ch.fabric.processing.owncloud.OCServer;

—–

This will make your Processing sketch aware of the OwnCloud Core Processing Library and its content.

Define a global variable to point to your OwnCloud Core Processing Library object by adding:

—–

// My main access to the Owncloud server
OCServer _myOCServer;

—–

In the Processing sketch’s setup function, add the following:

—–

 // —– Create a new access to an OwnCloud server
 _myOCServer = new OCServer();

 // —– Define the targeted OwnCloud server
 boolean resB = _myOCServer.SetServer("data.iiclouds.org");

—–

The first line defines a new OwnCloud object. This object will be used to access OwnCloud functions (copy, search and share files, etc.).

The second line establishes a connection to your OwnCloud server, given your OwnCloud server’s domain name or IP address. The returned boolean value (true/false) indicates whether the connection to your OwnCloud was established correctly (true) or not (false).

Then, still in the Processing sketch’s setup function, add the following:

—–

if (resB) {
  // —– Define my OwnCloud server login/password
  _myOCServer.SetAccess("MyLoginHere", "MyPasswordHere");

  // —– Any additional actions here…

}

—–

You are done! You are now able to copy, transfer, search and share files from your OwnCloud server within the Processing framework. You can download files, copy them, move them, etc. Add any actions you would like to perform; here are a few examples:

—–

   // —– Get the content of my OwnCloud account's root directory…
   println("-----------------------");
   println("[Processing - draw()] - Listing root directory content…");
   String[] myContent = _myOCServer.getContentList();

   // —– …and loop on the result to display the root directory's content
   for (int i = 0; i < myContent.length; i++) {
     println("[Processing - draw()] - " + (i + 1) + " - " + myContent[i]);
   }
   println("-----------------------");

   // —– Test if a directory exists in my OwnCloud account, checking the returned error value
   println("-----------------------");
   println("[Processing - draw()] - Directory manipulation…");
   int resI = _myOCServer.fileExists("/music/");
   if (resI == OCServer.FILE_EXISTS)
     println("[Processing - draw()] - Directory /music/ exists.");
   else if (resI == OCServer.FILE_DOES_NOT_EXIST)
     println("[Processing - draw()] - Directory /music/ does NOT exist.");
   else if (resI == OCServer.NETWORK_ERROR)
     println("[Processing - draw()] - Network problem while accessing OwnCloud.");

—–

Check the OwnCloud Core Processing Library documentation (included in the zip file you just downloaded) for an exhaustive list of possible actions/functions, their parameters and the returned values.
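For convenience, here is the whole walkthrough above assembled into a single minimal sketch; the server address, login and password are placeholders to replace with your own values.

—–

import ch.fabric.processing.owncloud.OCServer;

// My main access to the OwnCloud server
OCServer _myOCServer;

void setup() {
  // —– Create a new access to an OwnCloud server
  _myOCServer = new OCServer();

  // —– Define the targeted OwnCloud server (placeholder address)
  boolean resB = _myOCServer.SetServer("data.iiclouds.org");

  if (resB) {
    // —– Define my OwnCloud server login/password (placeholders)
    _myOCServer.SetAccess("MyLoginHere", "MyPasswordHere");

    // —– List the root directory's content
    String[] myContent = _myOCServer.getContentList();
    for (int i = 0; i < myContent.length; i++) {
      println((i + 1) + " - " + myContent[i]);
    }

    // —– Check whether a directory exists
    int resI = _myOCServer.fileExists("/music/");
    if (resI == OCServer.FILE_EXISTS)
      println("Directory /music/ exists.");
    else if (resI == OCServer.FILE_DOES_NOT_EXIST)
      println("Directory /music/ does NOT exist.");
    else if (resI == OCServer.NETWORK_ERROR)
      println("Network problem while accessing OwnCloud.");
  } else {
    println("Could not reach the OwnCloud server.");
  }
}

—–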

Have fun!

I&IC Workshop #4 with ALICE at EPFL-ECAL Lab: output > Distributed Data Territories

Note: the post I&IC Workshop #4 with ALICE at EPFL-ECAL Lab, brief: “Inhabiting the Cloud(s)” presents the objectives and brief for this workshop.

 

The workshop week with ALICE finished with very interesting results, and we took the opportunity to “beam” the students’ presentation to LIFT15, where Patrick Keller and Nicolas Nova were presenting the research project at the same time. The EPFL architecture laboratory has already published a post about the workshop on their blog. The final proposals of this intense week of work centered around the question of territoriality, and how to spread and distribute cloud/fog infrastructures. You can check out the original brief here and a previous post documenting the work in progress there.

 

Data territories – a workshop at EPFL-ECAL Lab with ALICE from iiclouds.org design research on Vimeo.

 

 


The students Anne-Charlotte Astrup, Francesco Battaini, Tanguy Dyer and Delphine Passaquay presenting their final proposal on Friday (06.02) in the workshop room of the EPFL-ECAL Lab.

 

Visibility?

Proposing to make these infrastructures visible raised a flood of questions concerning their social and architectural status. It also raises questions, across several fields, about the presence of private data in public space. How do we represent the data center as a public utility? What types of narratives/usage scenarios emerge from such a proposition? By focusing on different but correlated territorial scales, the participants were able to produce scenarios for each case.

 


The overall Inhabiting the Cloud(s) research sketches on the wall.

 

Swiss territoriality and scale(s)?

The three distinct territorial scales chosen were the following: the national/regional scale, the village/town or city, and the personal/common habitat scale. The proposals were established on the basis of an analysis of the locality where the workshop was held: the small city of Renens and its surroundings. The research process focused on preexisting infrastructures that responded to several criteria necessary to implement server rack structures: access to regular and alternative power sources, access to cooling sources (water and air), preexisting cabled networks and/or main and stable access routes (much as telegraph/telephone lines were set up along train lines), and finally seismic stability as well as a certain security from other natural disasters.

In doing so, the proposal also speculates about the fact that data centers could (should?) partly become public utilities.

 

Water, water mills?

The first proposition was to rehabilitate old water mills along the existing rivers that run through the countryside towards cities and villages, giving them the role of “data sorting center” or “data stream buffer” facilities. As these mills have no network cabling, the proposition may seem odd; however, especially given Switzerland’s topography, the idea is interesting as it investigates several culturally rich aspects, not to mention the abundance of water. The analogy between water streams and network flows seems obvious, but water is also a necessary cooling source for data infrastructures. It could also be considered a potential energy source. One could even go further and speculate on the potential interactions between the building and wildlife, as in the image used to cover this article published by Icon magazine just a few days ago.

 


Water mills, water-cooling scenarios and their locations on the map (around the city of Renens).

 

Disused post offices?

On the scale of the city, the preexisting infrastructure chosen was the Post Office. Postal services are still functioning, but the buildings have lost much of their social interaction with the public since the coming of age of Internet access. The buildings are also identically structured on a national scale, which could facilitate implementation. They are strategically positioned and already well equipped with network standards. Moreover, the proposal could revive the social role of the village square, or redefine the city as a radial organization around data (versus spirituality). Among the implementations discussed were using the excess heat to create a micro-climate over the square, and redefining the public space inside the post office as a Hackerspace and Makers Lab, a bit in the same way libraries function.

 


The “front” and “back ends” of most villages’ disused post offices offer quite interesting and appropriate spatial organization, if not metaphors.

 

Neighborhoods’ nuclear shelters (from the cold war period)?

On the scale of the office or housing building, the nuclear shelter was immediately proposed. In Switzerland, every home is to have a nuclear bomb shelter. This situation is unique in the world and, most obviously, better serves local metal groups and wine cellar enthusiasts than security. Nevertheless, however awkward it may seem, these shelters are almost a blueprint for a personal data center. Every one of them is equipped with high-end air filtering systems, generators for use in case of power outages, and solidity and stability standards set to resist a nuclear attack. This couldn’t become a model for other countries, though…

 


 

The building would therefore embed the capacity to develop its own thermal ecosystem alongside the usage of private, communal and public dataspaces.

 


 

This last proposition is interesting because it would redefine the organization of the habitat as a radial one, a bit like the student researchers suggested above for the city. The building could therefore become a transition space in itself between public space, community space and private space. Different directions were also explored, with a particular interest in the vernacular “chalet” as a possible candidate for an alpine “meshed data harvesting facilities” scenario.

For now, we’ll stick to the dream that one day, every family in Switzerland will be able to send their kids to play in the data center downstairs. But remember: no Ovomaltine on the Ethernet hub!

 


 

Acknowledgments:

Many thanks to the ALICE team in general and to Prof. Dieter Dietz in particular, to Thomas Favre-Bulle for leading the workshop, and to Caroline Dionne and Rudi Nieveen for organizing it. Thanks to Nicolas Henchoz for hosting us in the EPFL-ECAL Lab, to Patrick Keller and Nicolas Nova for their introduction to the stakes of the overall project, to Lucien Langton for his hard work, good advice and documentation along the week, and last but not least to the students, Anne-Charlotte Astrup, Francesco Battaini, Tanguy Dyer and Delphine Passaquay, for their great work and deeply thought-out proposals.

I&IC Workshop #4 with ALICE at EPFL-ECAL Lab, Work in progress

[Image: an illustration of the third scaled model, presented further below.]

 

As the week unfolds, the workshop is starting to produce scenarios. Wednesday (yesterday) we had a quick presentation of the work in progress, which is briefly documented in the current post. The students Delphine Passaquay, Tanguy Dyer, Francesco Battaini and Anne-Charlotte Astrup, working on Inhabiting the Cloud(s) as a team, developed a global perspective on the subject. Their approach focuses on four distinct territorial scales in order to question the centralized data center model. While the proposal doesn’t have a name yet, it clearly speculates about a distributed, isotropic network. The student architects focused on the preexisting urban infrastructure in order to establish their proposal.

 

Below are a few images of the research. The students started directly by building scaled models to illustrate different typological ideas regarding the potential size and location of big/small distributed data centers.

 


 

Starting at a (trans-)national/regional scale and looking for the decentralization of the infrastructure (or a mesh of small data centers) called for by the I&IC research project, the proposition heads towards the rehabilitation of water mills along water streams as small-size, almost “village size” data centers. Water mills are mostly abandoned spaces with an obvious potential to produce a small amount of hydroelectric energy, and they have access to a natural resource (the water stream) to cool down the servers.

 


 

Mapping of water mills (top left) and old (disused) post offices (top right) in countryside village locations. Transformation of water mills and empty post offices into central dispatch/sync points for data streams to individual housing.

 

Third come the housing units themselves, with the desire to use the heat generated by the servers to balance the heating of the homes. Heat seems to be a focal point when it comes to hybridizing the data center program with other programs (heating for agriculture, heating for living, heating to mitigate climate, etc.)

The model thus leans towards a kind of data center in two layers. One part of it is handled in the house; the other, slightly larger part takes over at a larger scale (village, region). Then a third, domestic scale gets plugged in: the object, or perhaps the furniture, with a modular system enabling users to “patch” objects with computing power and therefore heat.

 


 

We’re looking forward to the final presentation tomorrow and we’ll be back on the blog after the workshop with a post documenting the final projects!