Beyond Grid vs Cloud – EGI Community Forum 2012
‘The grid? Shouldn’t they all be doing cloud computing now?’ As a JISC programme manager working with the National Grid Service (NGS) project, I am asked this question more and more often. ‘Absolutely’ and ‘not at all’ is the seemingly contradictory answer I usually give – for instance the other week, when I mentioned I would attend the European Grid Infrastructure Community Forum 2012 in Munich.
I give this answer because the question originates from a double misunderstanding. The first concerns the nature of cloud computing, which, despite some marketing claims, is not the answer to everything and is in some ways more a new business model than a new technology as such. The cloud is neither the solution for all computing needs, nor is it always the cheapest option – as a recent study commissioned by EPSRC and JISC (PDF) shows. The second misunderstanding relates to branding and the history of research computing. When projects like the National Grid Service were envisaged, grid computing was the dominant paradigm and their names reflect that. These days, however, they are looking at a broad range of technologies and business models for providing compute resources, and mostly refer to themselves by their acronyms: NGS and EGI in this case. So at least for the initiated it was no surprise that the EGI conference was as much about the cloud as it was about the grid.
The conference, hosted by the LRZ supercomputing centre and the Technical University of Munich, was a five-day event that brought together members of the research computing community from across and beyond Europe. With several parallel strands and many sessions to attend, I won’t summarise the whole conference but instead pick out a few themes and projects I personally found interesting.
First of all, I noticed there was a lot of interest in better understanding the use of e-infrastructures by researchers and, related to that, the impact this generates. In some ways this is a straightforward task, insofar as numbers that are easy to capture and understand can be collected. The EGI, for instance, now has over 20,000 registered users. You can count the number of cores that can be accessed, monitor the number of compute jobs users run and measure the utilisation of the infrastructure. However, this becomes more difficult when you think of a truly distributed, international infrastructure such as the EGI. Will national funders accept that – while the resources they fund may be used by many researchers – much of that usage originates from abroad? If we want to support excellent research with the best tools available, we have to make it as easy as possible for researchers to get access to resources no matter which country they are physically based in. Thinking in terms of large, distributed groups of researchers using resources from all over the world, often concurrently, the task of understanding what impact the research infrastructure has and where that impact occurs (leading to who may lay ‘claim’ to it in terms of justifying the funding) can make your mind boggle. We need business models for funding these infrastructures that don’t erect new national barriers and that address these problems from the angle of how best to support researchers.
Business models, not surprisingly, were another theme I was very interested in. Complex enough already, the question is made even more difficult now that commercial vendors offer cloud nodes that, for certain smaller-scale scenarios, can compete with high performance computing – how do you fairly compare different infrastructures with different strengths and very different payment models? Will we see a broadly accepted funding model where researchers become customers who buy compute from their own institution or whoever offers the best value? Will we see truly regional or national research clouds compete against the likes of Amazon? What the conference has shown is that there are emerging partnerships between large academic institutions and vendors that explore new ways for joint infrastructure development. One example is a new project called ‘Helix Nebula – the Science Cloud’, a partnership that involves CERN, the European Space Agency and companies like T-Systems and Orange. Such partnerships may have a lot of potential, but finding legal structures that allow projects based in academia to work in a more commercial environment is not always easy. A presentation from the German National Grid Initiative explored some of these problems, as well as the question of developing sustainable funding models.
In order to develop good models for funding e-infrastructure we also need to understand the costs better. Institutional costs are mostly hidden from researchers, whereas the costs of commercial providers are very visible – but not always easy to understand in terms of what exactly it is you get for a certain price per hour. As our cloud cost study shows, this is an area where more work needs to be done, and so I was happy to find a European project that aims to address it. e-FISCAL works on an analysis of the costs and cost structures of HTC and HPC infrastructures and a comparison with similar commercial offerings. It already lists a useful range of relevant studies, and I hope we will see more solid data emerge over time.
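To make the comparison concrete, here is a minimal sketch of the kind of calculation involved: amortising the capital and running costs of an owned cluster into a cost per core-hour that can be set against a commercial provider’s hourly price. All the figures below are entirely made-up placeholders for illustration, not numbers from e-FISCAL or the JISC study.

```python
# Illustrative comparison of in-house vs commercial cloud compute costs.
# All figures are hypothetical placeholders, chosen only for illustration.

def in_house_cost_per_core_hour(capital_cost, lifetime_years, cores,
                                annual_running_cost, utilisation):
    """Amortised cost per core-hour of an owned cluster.

    Spreads hardware cost plus running costs (power, staff, housing)
    over the usable core-hours actually delivered at the given
    average utilisation.
    """
    hours_per_year = 24 * 365
    total_cost = capital_cost + annual_running_cost * lifetime_years
    usable_core_hours = cores * hours_per_year * lifetime_years * utilisation
    return total_cost / usable_core_hours

# Hypothetical cluster: 400k hardware, 4-year lifetime, 1,000 cores,
# 100k/year for power and staff, 70% average utilisation.
owned = in_house_cost_per_core_hour(400_000, 4, 1000, 100_000, 0.7)

# Hypothetical commercial price per core-hour.
cloud_price = 0.05

print(f"in-house: {owned:.3f}/core-hour, cloud: {cloud_price:.3f}/core-hour")
```

Even this toy version shows why the comparison is hard: the in-house figure swings widely with utilisation and with which hidden costs (staff time, machine rooms, electricity) you decide to count, which is exactly the kind of cost structure e-FISCAL sets out to analyse properly.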
In the commercial/public context I found it interesting to see that some academic cloud projects aim to take on commercial players. BeeHub, for instance, was presented as an academic Dropbox. To be fair to the project, it aims to be an academic service for file sharing in groups and to address some of the concerns one might have about Dropbox, but I wonder how successful it will be against such a popular offering.
I was also very interested to learn more about initiatives that address the critical question of training. Usually these are researcher training or more technically focussed programmes, but the EU-funded RAMIRI project offers training and knowledge exchange for people involved (or hoping to be involved) in planning and managing research infrastructures. Given the increasing complexity of this task in terms of legal, cultural, technical and other issues, better support for those running what are often multi-million projects is highly desirable.
As I cannot end this post without referencing the more technical aspects of research infrastructure, let me point you to a project that shows that grid and cloud can indeed live together in harmony. StratusLab is developing cloud technologies with the aim of simplifying and optimising the use and operation of distributed computing infrastructures, and it offers a production-level cloud distribution that promises to marry the ease of use of the cloud with grid technologies for distributed computing.
To sum up, it is not a question of grid versus cloud. It is about selecting the technologies that are best suited to facilitating great research – and then dealing with the non-technical issues, from training to sustainability and cultural change, that will decide how well we are able to make use of the potential the technology offers.