Or should we start thinking about tiny clusters of Raspberry Pis?
It seems that they’ve already done some debugging and Lego construction at the University of Southampton! (for a “supercomputer”, though).
Using excess heat generated by data centers to warm homes isn’t a new idea. Earlier in our research we stumbled upon Qarnot, a French company proposing to decentralize the data center into meshed radiators that distribute computing resources across people’s homes (we’re guessing they took their name from Carnot’s limit ;). In 2013, they announced a partnership with the city of Paris to heat 350 low-income housing units.
However, they are not the only rats in the race…
Note: When we had to pick an open-source cloud computing platform at the start of our research, we dug around for some time to find the one that would best match our planned activities. We chose ownCloud and explained our choice in a previous post, as well as some identified limitations linked to it. Early this year, ownCloud announced that it will initiate “Global Interconnected Private Clouds for Universities and Researchers” (with early participants such as CERN, ETHZ, SWITCH, TU-Berlin, the University of Florida, the University of Vienna, etc.). So it looks like we picked the right open platform! Especially since they are announcing a mesh layer on top of different clouds to provide common access across globally interconnected organizations.
This comforts us in our initial choice and in the need to bridge it with the design community, especially as this new “mesh layer” is added to ownCloud; it was something missing when we started this project (this kind of scalability did become available from ownCloud version 7.0, though). It now certainly allows what we were looking for: a network of small and personal data centers. Now the question comes back to design: if personal data centers are no longer big, undisclosed or distant facilities, what could they look like? For what types of uses? If the personal applications are not oriented toward “file sharing only”, what could they become? For what kinds of scenarios?
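As a concrete illustration of this federation, here is a minimal sketch (assuming an ownCloud instance with federated sharing enabled) of how a “server-to-server” share could be created from code through the OCS Share API. The server URL, the credentials and the remote address are placeholders; verify the exact parameters against your own instance’s documentation.

```python
# Minimal sketch: create a federated ("server-to-server") share via
# ownCloud's OCS Share API. URL, credentials and the remote user are
# placeholders -- adapt them to your own instances.
import requests

OWNCLOUD_URL = "https://cloud.example.org"  # hypothetical personal server
AUTH = ("alice", "secret")                  # hypothetical credentials

def share_with_remote_user(path, remote_user):
    """Share a local folder with a user on another ownCloud instance."""
    resp = requests.post(
        f"{OWNCLOUD_URL}/ocs/v1.php/apps/files_sharing/api/v1/shares",
        auth=AUTH,
        headers={"OCS-APIRequest": "true"},  # required by some versions
        data={
            "path": path,              # e.g. "/Projects/IIC"
            "shareType": 6,            # 6 = federated cloud share
            "shareWith": remote_user,  # "user@other-cloud.example.net"
        },
        params={"format": "json"},
    )
    resp.raise_for_status()
    return resp.json()

print(share_with_remote_user("/Projects/IIC", "bob@cloud.example.net"))
```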
Note: a set of images I’ll need for a short introduction to the project.
From mainframe computers to cloud computing, through the personal computer and all the other initiatives that tried to bend “the way computers go” and provide access to tools for people who were not necessarily computer savvy. Under this perspective (“access to tools”), cloud computing is a new paradigm that takes us away from that of the personal computer. To the point that it brings us back to the era of the mainframe computer (no access, or difficult access, to tools)?
If we consider how deeply the personal computer, combined with the more recent Internet, has changed the ways we interact, work and live, we should certainly pay attention to this emerging paradigm and probably work hard to make it “accessible”.
A very short history PDF in 14 images.
By fabric | ch
-----
Alongside the different projects we are undertaking at fabric | ch, we continue to work on self-initiated research and experiments (slowly, way too slowly… time, of course, is lacking). Deterritorialized House is one of them, introduced below.
The Internet of Things is emerging as a model, and the network routing all the IoT data to the cloud is at risk of getting clogged up. “Fog is about distributing enough intelligence out at the edge to calm the torrent of data, and change it from raw data over to real information that has value and gets forwarded up to the cloud,” says Todd Baker, head of Cisco‘s IOx framework. Fog Computing, which is somehow different from Edge Computing (we didn’t quite get how), is definitely a new business opportunity for the company, whose challenge is to package converged infrastructure services as products.
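To make Baker’s quote a bit more tangible, here is a toy sketch of what “distributing intelligence out at the edge” can mean in practice: the edge node keeps the raw readings to itself and forwards only a condensed summary upstream. The sensor and the cloud endpoint are placeholders, not an actual Cisco IOx API.

```python
# Toy fog/edge node: buffer raw sensor readings locally and push only
# a small summary to the cloud. Sensor and endpoint are placeholders.
import random
import statistics
import time

import requests

CLOUD_ENDPOINT = "https://cloud.example.org/api/telemetry"  # hypothetical

def read_sensor():
    """Stand-in for a real sensor attached to the edge device."""
    return 20.0 + random.uniform(-0.5, 0.5)

def run_edge_node(window=60, interval=1.0):
    buffer = []
    while True:
        buffer.append(read_sensor())
        if len(buffer) >= window:
            summary = {
                "mean": statistics.mean(buffer),
                "min": min(buffer),
                "max": max(buffer),
                "samples": len(buffer),
            }
            # One compact summary goes upstream instead of `window` raw values.
            requests.post(CLOUD_ENDPOINT, json=summary, timeout=5)
            buffer.clear()
        time.sleep(interval)

if __name__ == "__main__":
    run_edge_node()
```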
However, one interesting aspect of this new buzzword is that it adds something new to the existing model: after all, cloud computing is based on the old client-server model, except that the cloud is distributed by nature (ahem, even though data is centralized). That’s the big difference. There’s a basic rule that sums up the IT industry’s race toward new solutions: Moore’s law. The industry’s three building blocks are storage, computing and network. Computing power doubles every 18 months, and storage follows closely (its exponential curve is almost identical). However, if we graph network growth, it appears to follow a straight line.
Network capacity is a scarce resource, and that’s not going to change any time soon: it’s the backbone of the infrastructure, built piece by piece with colossal amounts of cables, routers and fiber optics. This constraint forces the industry to find disruptive solutions, and the paradigm arising from the clash between these growth rates now has a name: data gravity.
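A quick back-of-the-envelope calculation shows why these curves clash. If compute and storage double every 18 months while network capacity grows linearly (the linear slope below is an arbitrary assumption, purely for illustration), the gap explodes within a few years:

```python
# Back-of-the-envelope growth gap: exponential compute/storage vs. an
# assumed linear network capacity (both normalized to 1.0 at year 0).
def capacity_after(years):
    compute = 2 ** (years / 1.5)   # doubles every 18 months
    network = 1 + 0.5 * years      # assumed linear growth
    return compute, network

for years in (0, 3, 6, 9, 12):
    compute, network = capacity_after(years)
    print(f"{years:2d} yrs: compute x{compute:6.0f}   network x{network:4.1f}")
```

After twelve years compute has multiplied by 256 while the (assumed) network has merely multiplied by 7: data becomes far cheaper to produce and store than to move, hence its “gravity”.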
Note: I publish here the brief that Matthew Plummer-Fernandez (a.k.a. Algopop) sent me before the workshop he’ll lead next week (17-21.11) with second- and third-year BA Media & Interaction Design students at ECAL.
This workshop will take place in the frame of the I&IC research project, about which we had the occasion to exchange prior to the workshop. It will investigate the idea of very low-power computing, situated processing, data sensing/storage and automated data treatment (“bots”) that could be highly distributed into everyday objects or situations. While doing so, the project will undoubtedly address the idea of “networked objects”, which, due to the low capacities of their computing parts, will become major consumers of cloud-based services (computing power, storage). Yet, following the hypothesis of the research: what kind of non-standard networked objects/situations, based on what kind of decentralized, personal cloud architecture?
The subject of this workshop explains some of our recent posts, which could serve as resources or tools for it, as the students will work on personal “bots” that gather, process, host and expose data.
Stay tuned for more!
Botcaves
Algorithmic and autonomous software agents known as bots are increasingly participating in everyday life. Bots can potentially gather data from both physical and digital activity, store and share data in the ‘cloud’, and develop ways to communicate and learn from their databases. In essence, bots can animate data, making it useful, interactive, visual or legible. Although software-based, bots require hardware to run from, and it is this underexplored crossover between the physical and digital presence of bots that this workshop investigates.
You will be asked to design a physical ‘housing’ or ‘interface’, either bespoke or hacked from existing objects, for your personal bots to run from. These botcaves would be present in the home, the workspace or elsewhere, permitting novel interactions between the digital and physical environments that these bots inhabit.
Raspberry Pis, template bot code, APIs, cloud storage, existing services (Twitter, IFTTT, etc.) and physical elements (sensors, lights, cameras, etc.) may be used in the workshop.
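As a hint of where groups could start, here is a minimal “botcave” sketch using only Python’s standard library: a bot living on a Raspberry Pi that periodically gathers a reading, stores it in a small local database, and exposes its recent data over HTTP. The read_sensor() stub is a placeholder for whatever physical input (light, temperature, camera) a group picks.

```python
# Minimal "botcave": gather a reading on a schedule, store it locally,
# and expose the recent data as JSON over HTTP (stdlib only).
import json
import random
import sqlite3
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = "botcave.db"

def read_sensor():
    """Placeholder for a real sensor wired to the Raspberry Pi."""
    return round(random.uniform(0, 100), 2)

def init_db():
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")
    con.commit()
    con.close()

def gather(interval=10):
    con = sqlite3.connect(DB)  # each thread gets its own connection
    while True:
        con.execute("INSERT INTO readings VALUES (?, ?)",
                    (time.time(), read_sensor()))
        con.commit()
        time.sleep(interval)

class BotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        con = sqlite3.connect(DB)
        rows = con.execute(
            "SELECT ts, value FROM readings ORDER BY ts DESC LIMIT 10"
        ).fetchall()
        body = json.dumps([{"ts": t, "value": v} for t, v in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    init_db()
    threading.Thread(target=gather, daemon=True).start()  # the bot's "life"
    HTTPServer(("", 8080), BotHandler).serve_forever()    # its public face
```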
Bio
British/Colombian artist and designer Matthew Plummer-Fernandez makes work that critically and playfully examines sociocultural entanglements with technologies. His current interests span algorithms, bots, automation, copyright, 3D files and file-sharing. He was awarded a Prix Ars Electronica Award of Distinction for the project Disarming Corruptor, an app for disguising 3D print files as glitched artefacts. He is also known for his computational approach to aesthetics translated into physical sculpture.
For research purposes he runs Algopop, a popular Tumblr that documents the emergence of algorithms in everyday life, as well as the artists who respond to this context in their work. This has become the starting point of a practice-based PhD funded by the AHRC at Goldsmiths, University of London, where he is also a research associate at the Interaction Research Studio and a visiting tutor. He holds a BEng in Computer-Aided Mechanical Engineering from King’s College London and an MA in Design Products from the Royal College of Art.