Gartner defines cloud computing as “a style of computing in which massively scalable IT-related capabilities are provided ‘as a service’ using Internet technologies to multiple external customers.” Beyond the Gartner definition, clouds are marked by self-service interfaces that let customers acquire resources at any time and get rid of them the instant they are no longer needed.
The cloud is not really a technology by itself. Rather, it is an approach to building IT services that harnesses the rapidly increasing horsepower of servers as well as virtualization technologies that combine many servers into large computing pools and divide single servers into multiple virtual machines that can be spun up and powered down at will.
Naturally, a public cloud is a service that anyone can tap into with a network connection and a credit card. “Public clouds are shared infrastructures with pay-as-you-go economics,” explains Forrester analyst James Staten in an April report. “Public clouds are easily accessible, multitenant virtualized infrastructures that are managed via a self-service portal.”
A private cloud attempts to mimic the delivery models of public cloud vendors but does so entirely within the firewall for the benefit of an enterprise’s users. A private cloud would be highly virtualized, stringing together mass quantities of IT infrastructure into one or a few easily managed logical resource pools.
As with public clouds, private cloud services would typically be delivered through a Web interface with self-service and chargeback attributes. “Private clouds give you many of the benefits of cloud computing, but it’s privately owned and managed, the access may be limited to your own enterprise or a section of your value chain,” Kloeckner says. “It does drive efficiency, it does force standardization and best practices.”
The largest enterprises are interested in private clouds because public clouds are not yet scalable and reliable enough to justify transferring all of their IT resources to cloud vendors, Carr says.
“A lot of this is a scale game,” Carr says. “If you’re General Electric, you’ve got an enormous amount of IT scale within your own company. And at this stage the smart thing for you to do is probably to rebuild your own internal IT around a cloud architecture because the public cloud isn’t of a scale at this point and of a reliability and everything where GE could say ‘we’re closing down all our data centers and moving to the cloud.’”
You might say software-as-a-service kicked off the whole push toward cloud computing by demonstrating that IT services could be easily made available over the Web. While SaaS vendors originally did not use the word cloud to describe their offerings, analysts now consider SaaS to be one of several subsets of the cloud computing market.
Public cloud services fall into three broad categories: software-as-a-service, infrastructure-as-a-service, and platform-as-a-service. SaaS is the best known and consists of software applications delivered over the Web. Infrastructure-as-a-service refers to remotely accessible server and storage capacity, while platform-as-a-service is a compute-and-software platform that lets developers build and deploy Web applications on a hosted infrastructure.
Technically, you can put any application in the cloud. But that doesn’t mean it’s a good idea. For example, there’s little reason to run a desktop disk defragmentation or systems analysis tool in the cloud, because you want the application sitting on the desktop, dedicated to the system with little to no latency, says Pund-IT analyst Charles King.
More importantly, regulatory and compliance concerns prevent enterprises from putting certain applications in the cloud, particularly those involving sensitive customer data.
IDC surveys show the top uses of the cloud as being IT management, collaboration, personal and business applications, application development and deployment, and server and storage capacity.
Yes, but that doesn’t mean it will be easy. Services have popped up to move applications from one cloud platform to another (such as from Amazon to GoGrid) and from internal data centers to the cloud. But going forward, cloud vendors will have to adopt standards-based technologies in order to ensure true interoperability, according to several industry groups. The recently released “Open Cloud Manifesto” supports interoperability of data and applications, while the Open Cloud Consortium is promoting open frameworks that will let clouds operated by different entities work seamlessly together. The goal is to move applications from one cloud to another without having to rewrite them.
Vendors and customers alike are struggling with the question of how software licensing policies should be adapted to the cloud. Packaged software vendors require up-front payments, and make customers pay for 100% of the software’s capabilities even if they use only 25% or 50%, Gens says. This model does not take advantage of the flexibility of cloud services.
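The mismatch Gens describes is easy to put in numbers. A quick back-of-the-envelope comparison, using entirely hypothetical figures chosen only to illustrate the gap between up-front licensing and metered, pay-as-you-go pricing at partial utilization:

```python
# All numbers below are hypothetical illustrations, not real vendor pricing.
upfront_license = 100_000       # perpetual license: paid in full up front,
                                # regardless of how much capacity is used
metered_rate = 2.50             # assumed pay-per-use rate per instance-hour
utilization = 0.25              # customer actually uses 25% of capacity
hours_in_year = 365 * 24        # 8,760 hours

metered_cost = metered_rate * utilization * hours_in_year
print(f"Up-front license: ${upfront_license:,}")
print(f"Metered at 25% utilization: ${metered_cost:,.2f}")  # $5,475.00
```

Under the packaged-software model, the customer in this sketch pays for the unused 75% of capacity; under metered pricing, cost tracks actual use. The specific rates are assumptions, but the structural difference is the point Gens is making.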
Oracle and IBM have devised equivalency tables that explain how their software is licensed for the Amazon cloud, but most observers seem to agree that software vendors haven’t done enough to adapt their licensing to the cloud.
The financial services company ING, which is examining many cloud services, has cited licensing as its biggest concern. “I haven’t seen any vendor with flexibility in software licensing to match the flexibility of cloud providers,” says ING’s Alan Boehme, the company’s senior vice president and head of IT strategy and enterprise architecture. “This is a tough one because it’s a business model change. … It could take quite some time.”
Cloud vendors typically guarantee at least 99% uptime, but the ways in which that is calculated and enforced differ significantly. Amazon EC2 promises to make “commercially reasonable efforts” to ensure 99.95% uptime. But uptime is calculated on a yearly basis, so if Amazon falls below that percentage for just a week or a month, there’s no penalty or service credit.
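The measurement window matters more than the headline percentage, and the arithmetic is simple to sketch. Below, the 99.95% figure comes from Amazon's stated SLA; the 30-day monthly window is a hypothetical alternative included only for comparison:

```python
def allowed_downtime_seconds(sla_pct, period_seconds):
    """Downtime budget implied by an SLA percentage over a measurement period."""
    return period_seconds * (1 - sla_pct / 100)

YEAR = 365 * 24 * 3600   # yearly measurement window, as in Amazon's SLA
MONTH = 30 * 24 * 3600   # hypothetical monthly window, for comparison

yearly_budget = allowed_downtime_seconds(99.95, YEAR)    # ~15,768 s, about 4.4 hours
monthly_budget = allowed_downtime_seconds(99.95, MONTH)  # ~1,296 s, about 21.6 minutes

# A single two-hour outage exhausts a monthly budget many times over,
# yet stays comfortably inside the yearly one.
outage = 2 * 3600
print(outage > monthly_budget)  # True: a credit would be due under monthly measurement
print(outage > yearly_budget)   # False: no credit under yearly measurement
```

In other words, the same 99.95% promise allows roughly 4.4 hours of cumulative downtime when measured over a year, so a bad week or month on its own may never trigger a service credit.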
GoGrid promises 100% uptime in its SLA. But as any lawyer will tell you, you have to pay attention to the legalese. GoGrid’s SLA includes this difficult-to-interpret phrase: “Individual servers will deliver 100% uptime as monitored within the GoGrid network by GoGrid monitoring systems. Only failures due to known GoGrid problems in the hardware and hypervisor layers delivering individual servers constitute failures and so are not covered by this SLA.”
Attorney David Snead, who recently spoke about legal issues in cloud computing at Sys-Con’s Cloud Computing Conference & Expo in New York City, says Amazon has significant downtime but makes it difficult for customers to obtain service credits.
“Amazon won’t stand behind its product,” Snead said. “The reality is, they’re not making any guarantees.”
Data safety in the cloud is not a trivial concern. Online storage vendors such as The Linkup and Carbonite have lost customer data and been unable to recover it. There is also the danger that sensitive data could fall into the wrong hands. Before signing up with any cloud vendor, customers should demand information about data security practices, scrutinize SLAs, and make sure they can encrypt data both in transit and at rest.
Before choosing a cloud vendor, do your due diligence by examining the SLA to understand what it guarantees and what it doesn’t, and scour through any publicly accessible availability data. Amazon, for example, maintains a “Service Health Dashboard” that shows current and historical uptime status of its various services.
There will always be some network latency with a cloud service, which can make it slower than an application running in your local data center. But a new crop of third-party vendors, such as RightScale, is building services on top of the cloud to make sure applications can scale and perform well.
By and large, the performance hit related to latency “is pretty negligible these days,” says RightScale CTO Thorsten von Eicken. The largest enterprises are distributed throughout the country or world, he notes, so many users will experience a latency-caused performance hit whether an application is running in the cloud or in the corporate data center.
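Latency is also something you can measure for yourself before committing. A minimal sketch, using TCP connection-setup time as a rough proxy for round-trip latency; the host and port are whatever endpoint you choose to test, not anything prescribed by a particular vendor:

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=3.0):
    """Rough latency estimate: time to complete a TCP handshake, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection closes immediately; only the setup time matters here
    return (time.perf_counter() - start) * 1000
```

Comparing the result for a cloud provider’s endpoint against a host in your own data center gives a first-order sense of the latency gap von Eicken describes, before any application-level benchmarking.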