Is cloud computing revolving back to Grid Computing?

A couple of weeks ago I was at Google's Sydney offices attending Sydney CloudCamp, where I joined the Cloud & Application Scalability session. It was a very good session.

CloudCamp is by nature an unconference: there are no predefined subjects, no keynotes and no presenters. The attendees drive the discussion and are responsible for deciding the themes to be covered.

In this specific session I brought up a topic related to the scalability of applications in cloud-enabled environments, specifically when applications require more resources from Utility Computing than a single host is able to provide. All attendees agreed that applications need to be re-designed if they require more resources than a single host can supply. I understand that in most cases it is possible to ramp up a host's resources by adding memory, I/O or processors, so that even the most demanding applications would be satisfied.

Some recent developments suggest that soon applications will be able to share CPU and memory resources across different hosts in the cloud. But what, then, is the real scalability limit of the cloud? This brings up the question: is cloud computing revolving back to Grid Computing? Haven't we been there already, a decade ago?

Have Fun and Break a Leg

** This is my first authored post. I hope you enjoyed it, and please feel free to share your views.
