Cloud computing has become the IT buzzword of the last five years, as virtualization technology has made significant advances. What began with VMware and Microsoft developing hypervisors and virtual private servers has expanded into companies like Amazon entering the market with their EC2 cloud, and both VMware and Microsoft now offering a host of cloud computing and office solutions.
The big benefit, say all the big providers, is that cloud computing is scalable and flexible, delivering the level of capacity you need when you need it most. In theory, a company hosting a high-traffic Web server should be able to arrange for its provider to allot greater bandwidth and capacity during peak times and reduce them during off hours, saving the company money in the long term while ensuring it does not miss out on traffic when the rush hits. Surprisingly, this level of “tooling” is still not readily available on most virtual platforms. Why?
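The elastic scaling described above can be reduced to a simple policy: compare current load to per-instance capacity and adjust the fleet size, within the floor and ceiling the customer pays for. The sketch below is a hypothetical illustration of that idea; the function name, capacity figure, and thresholds are assumptions for the example, not any provider's actual API.

```python
# Hypothetical sketch of threshold-based autoscaling: scale capacity up
# during peak traffic and down during off hours. The capacity figure and
# limits are illustrative assumptions, not a real provider's interface.

def desired_instances(requests_per_min: int,
                      capacity_per_instance: int = 1000,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Return how many server instances the current load calls for."""
    # Round up: a partial instance's worth of load still needs a whole instance.
    needed = -(-requests_per_min // capacity_per_instance)
    # Clamp to the minimum/maximum fleet size the customer has agreed to.
    return max(min_instances, min(max_instances, needed))

if __name__ == "__main__":
    for load in (200, 4500, 25000):  # off hours, normal, peak traffic
        print(load, "req/min ->", desired_instances(load), "instances")
```

In practice a real policy would also smooth the load signal and add cooldown periods to avoid thrashing, but the core decision is this kind of clamped threshold calculation.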
In many respects, it is because the technology itself is too new for companies – both users and providers – to have fully explored. Many users are wary of moving too much of their server load to an “unknown” location, and providers are still working through storage, I/O, and security issues. Still, some providers are looking at ways to deliver on one of the fundamental promises of the cloud, namely scalability, and a market is beginning to emerge that is far friendlier to companies that wish to customize their cloud experience. While a number of the next-generation technology's promises have yet to be kept, the industry is off to a strong mainstream start.