While SaaS provides the software and hardware to replace an internal information system, sometimes a firm develops its own custom software but wants to pay someone else to run it. That’s where hardware clouds, utility computing, and related technologies come in. In this model, a firm replaces computing hardware that it might otherwise run on-site with a service provided by a third party online. While the term utility computing was fashionable a few years back (and old-timers claim it shares a lineage with terms like hosted computing or even time sharing), most in the industry now refer to this as an aspect of cloud computing, often called hardware clouds: a model in which a service provider makes computing resources such as hardware and storage, along with infrastructure management, available to a customer on an as-needed basis, typically charging for specific resource usage rather than a flat rate. Computing hardware used in this scenario exists “in the cloud,” meaning somewhere on the Internet. The costs of systems operated in this manner look more like a utility bill: you pay only for the amount of processing, storage, and telecommunications used. Tech research firm Gartner has estimated that 80 percent of corporate tech spending goes toward data center maintenance (J. Rayport, “Cloud Computing Is No Pipe Dream,” BusinessWeek, December 9, 2008). Hardware-focused cloud computing provides a way for firms to chip away at these costs.
Major players are spending billions building out huge data centers to take all kinds of computing out of the corporate data center and place it in the cloud. While cloud vendors typically host your software on their systems, many of these vendors also offer additional tools to help in creating and hosting apps in the cloud. Salesforce.com offers Force.com, which includes not only a hardware cloud but also several cloud-supporting tools, such as a programming environment (IDE) to write applications specifically tailored for Web-based delivery. Google’s App Engine offers developers several tools, including a database product called Bigtable. And Microsoft offers a competing product, Windows Azure, which runs the SQL Azure database. These efforts are often described by the phrase platform as a service (PaaS), since the cloud vendor provides a more complete platform (e.g., hosting hardware, operating system, database, and other software) that its customers use to build their own applications on the provider’s infrastructure. In this scenario the cloud firm usually manages the platform (hosting, hardware, and supporting software), while the client has control over the creation and deployment of its applications.
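To make the PaaS idea concrete, here is a minimal sketch written in the style of Google App Engine’s early Python runtime, assuming its webapp framework and datastore API; the Article model and URL route are hypothetical examples, not details of any product described above. The point is that the developer writes only the data model and request handler, while the provider supplies the operating system, database, and hosting.

```python
# Hypothetical App Engine-style handler: the platform supplies hosting,
# the OS, and the datastore; the developer supplies only application code.
from google.appengine.ext import db, webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class Article(db.Model):
    # A simple datastore model; the platform decides where and how it is stored.
    title = db.StringProperty()
    body = db.TextProperty()

class MainPage(webapp.RequestHandler):
    def get(self):
        # Fetch up to ten stored articles and list their titles.
        for article in Article.all().fetch(10):
            self.response.out.write(article.title + "\n")

application = webapp.WSGIApplication([("/", MainPage)])

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```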
Another alternative is infrastructure as a service (IaaS), a good choice for firms that want even more control. In IaaS, the cloud provider runs the remote hardware and networking (i.e., the infrastructure), while clients select their own operating systems, development environments, underlying applications like databases, and other software packages (i.e., clients, and not cloud vendors, get to pick the platform), and may even retain control over storage and security features such as firewalls. IaaS services are offered by a wide variety of firms, including Amazon, Rackspace, Oracle, Dell, HP, and IBM.
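As an illustration of what renting IaaS capacity looks like in practice, the sketch below provisions and later releases a batch of virtual servers on Amazon EC2 using the boto3 Python library; the region, machine image ID, instance type, and counts are hypothetical placeholders rather than details from the examples in this section.

```python
import boto3

# Connect to the EC2 service in an assumed region.
ec2 = boto3.resource("ec2", region_name="us-east-1")

# Rent a batch of virtual servers. The client, not the cloud vendor, picks the
# operating system image and machine size; the values below are placeholders.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Linux machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=10,
)

# Usage is metered, so servers are shut down as soon as the work is finished.
for instance in instances:
    instance.terminate()
```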
Still other cloud computing efforts focus on providing a virtual replacement for operational hardware like storage and backup solutions. These include cloud-based backup efforts like EMC’s Mozy and corporate storage services like Amazon’s Simple Storage Service (S3). Even efforts like Apple’s iCloud that sync user data across devices (phone, multiple desktops) are considered part of the cloud craze. The common theme in all of this is leveraging computing delivered over the Internet to satisfy the computing needs of both users and organizations.
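As a small illustration of cloud storage in practice, the snippet below pushes a file into Amazon S3 and pulls it back using the boto3 Python library; the bucket and file names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-archive-bucket"  # hypothetical bucket name

# Push a local file into cloud storage; billing is per gigabyte stored,
# plus charges for requests and data transfer.
s3.upload_file("scan-0001.pdf", bucket, "archives/scan-0001.pdf")

# Later, retrieve the same object back to local disk.
s3.download_file(bucket, "archives/scan-0001.pdf", "scan-0001-copy.pdf")
```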
Large established organizations, small firms, and start-ups are all embracing the cloud. The examples below illustrate the wide range of these efforts.
Journalists refer to the New York Times as “The Old Gray Lady,” but it turns out that the venerable paper is a cloud-pioneering whippersnapper. When the Times decided to make roughly one hundred fifty years of newspaper archives (over fifteen million articles) available over the Internet, it realized that the process of converting scans into searchable PDFs would require more computing power than the firm had available (J. Rayport, “Cloud Computing Is No Pipe Dream,” BusinessWeek, December 9, 2008). To solve the challenge, a Times IT staffer simply broke out a credit card and signed up for Amazon’s EC2 cloud computing and S3 cloud storage services. The Times then started uploading terabytes of information to Amazon, along with a chunk of code to execute the conversion. While anyone can sign up for services online without speaking to a rep, someone from Amazon eventually contacted the Times to check in after noticing the massive volume of data coming into its systems. Using one hundred of Amazon’s Linux servers, the Times job took just twenty-four hours to complete. A coding error in the initial batch actually forced the paper to rerun the job, but even the blunder was cheap: just two hundred forty dollars in extra processing costs. Says a member of the Times IT group: “It would have taken a month at our facilities, since we only had a few spare PCs.…It was cheap experimentation, and the learning curve isn’t steep” (G. Gruman, “Early Experiments in Cloud Computing,” InfoWorld, April 7, 2008).
NASDAQ also uses Amazon’s cloud as part of its Market Replay system. The exchange uses Amazon to make terabytes of data available on demand, and uploads an additional thirty to eighty gigabytes every day. Market Replay allows access through an Adobe AIR interface to pull together historical market conditions in the ten-minute period surrounding a trade’s execution. This allows NASDAQ to produce a snapshot of information for regulators or customers who question a trade. Says the exchange’s VP of Product Development, “The fact that we’re able to keep so much data online indefinitely means the brokers can quickly answer a question without having to pull data out of old tapes and CD backups” (P. Grossman, “Cloud Computing Begins to Gain Traction on Wall Street,” Wall Street and Technology, January 6, 2009). NASDAQ isn’t the only major financial organization leveraging someone else’s cloud. Others include Merrill Lynch, which uses IBM’s Blue Cloud servers to build and evaluate risk analysis programs, and Morgan Stanley, which relies on Force.com for recruiting applications.
IBM’s cloud efforts, which count Elizabeth Arden and the U.S. Golf Association among their customers, offer several services, including so-called cloudbursting, the use of cloud computing to provide excess capacity during periods of spiking demand. Cloudbursting is a scalability solution usually offered as an overflow service that kicks in as needed: a firm’s data center running at maximum capacity can seamlessly shift part of the workload to IBM’s cloud, with any spikes in system use metered, utility style. Cloudbursting is appealing because forecasting demand is difficult and can’t account for ultrarare, high-impact events, sometimes called black swans (unpredicted but highly impactful events, a phrase that entered the managerial lexicon from Nassim Taleb’s 2007 book of the same name); scalable computing resources help a firm absorb the spikes such events create. Planning to account for usage spikes explains why the servers at many conventional corporate IS shops run at only 10 to 20 percent capacity (J. Parkinson, “Green Data Centers Tackle LEED Certification,” SearchDataCenter.com, January 18, 2007). While IBM’s Cloud Labs cloudbursting service is particularly appealing for firms that already rely heavily on IBM hardware in-house, it is possible to build these systems using the hardware clouds of other vendors, too.
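As a rough illustration of the cloudbursting idea, the sketch below keeps work on in-house servers until utilization crosses a threshold and then overflows to rented cloud capacity; the function names, capacities, and the 80 percent trigger are hypothetical and are not details of IBM’s service.

```python
# Hypothetical cloudbursting dispatcher: use owned hardware until it nears
# capacity, then overflow new work to metered, pay-per-use cloud capacity.

LOCAL_CAPACITY = 100      # jobs the in-house data center can run at once (assumed)
BURST_THRESHOLD = 0.80    # assumed utilization level that triggers overflow

def run_locally(job):
    return f"local:{job}"

def run_in_cloud(job):
    # Stand-in for a call to a cloud provider's API.
    return f"cloud:{job}"

def dispatch(job, local_jobs_running):
    """Route a job to the in-house data center or to the cloud overflow pool."""
    utilization = local_jobs_running / LOCAL_CAPACITY
    if utilization < BURST_THRESHOLD:
        return run_locally(job)   # normal case: hardware the firm already owns
    return run_in_cloud(job)      # demand spike: burst to the cloud

# Example: with 90 of 100 local slots busy, the next job bursts to the cloud.
print(dispatch("convert-archive-batch-42", 90))
```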
Salesforce.com’s Force.com cloud is especially tuned to help firms create and deploy custom Web applications. The firm makes it possible to piece together projects using premade Web services that provide software building blocks for features like calendaring and scheduling. Integration with the firm’s SaaS CRM effort and with third-party products like Google Maps allows enterprise mash-ups that combine services from different vendors into a single application running on Force.com hardware. The platform even includes tools to help deploy Facebook applications. Intuitive Surgical used Force.com to create and host a custom application to gather clinical trial data for the firm’s surgical robots. An IS manager at Intuitive noted, “We could build it using just their tools, so in essence, there was no programming” (G. Gruman, “Early Experiments in Cloud Computing,” InfoWorld, April 7, 2008). Other users include Jobscience, which used Force.com to launch its online recruiting site, and Harrah’s Entertainment, which uses Force.com applications to manage room reservations, air travel programs, and player relations.
Hardware clouds and SaaS share similar benefits and risks, and as our discussion of SaaS showed, cloud efforts aren’t for everyone. Some additional examples illustrate the challenges in shifting computing hardware to the cloud.
For all the hype about cloud computing, it doesn’t work in all situations. From an architectural standpoint, most large organizations run a hodgepodge of systems that include both packaged applications and custom code written in-house. Installing a complex set of systems on someone else’s hardware can be a brutal challenge and in many cases is just about impossible. For that reason we can expect most cloud computing efforts to focus on new software development projects rather than on migrating older, existing systems. Even for efforts that can be custom-built and cloud-deployed, other roadblocks remain. For example, some firms face stringent regulatory compliance issues. To quote one tech industry executive, “How do you demonstrate what you are doing is in compliance when it is done outside?” (G. Gruman, “Early Experiments in Cloud Computing,” InfoWorld, April 7, 2008).
Firms considering cloud computing need to do a thorough financial analysis, comparing the capital and other costs of owning and operating their own systems over time against the variable costs over the same period for moving portions to the cloud. For high-volume, low-maintenance systems, the numbers may show that it makes sense to buy rather than rent. Cloud costs can seem super cheap at first. Sun’s early cloud effort offered a flat fee of one dollar per CPU per hour. Amazon’s cloud storage rates were twenty-five cents per gigabyte per month. But users often also pay for the number of accesses and the number of data transfers (C. Preimesberger, “Sun’s ‘Open’-Door Policy,” eWeek, April 21, 2008). A quarter per gigabyte per month may seem like a small amount, but system maintenance costs often include the work of cleaning up old files or moving them to tape; if data is instead left to accumulate in the cloud indefinitely, those per-gigabyte charges can add up.
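To make the buy-versus-rent comparison concrete, here is a small back-of-the-envelope calculation in Python. All of the figures for the “own” side (server cost, operating cost) and for usage growth are hypothetical assumptions; only the dollar-per-CPU-hour and quarter-per-gigabyte rates echo the early Sun and Amazon prices mentioned above.

```python
# Hypothetical buy-vs-rent comparison over three years. Every "own" figure and
# every usage figure below is an assumption for illustration only.

YEARS = 3
MONTHS = YEARS * 12

# Owning: up-front hardware plus ongoing power, space, and admin time.
server_purchase = 12_000        # assumed cost of an owned server
monthly_ops_cost = 250          # assumed power, cooling, and admin per month
own_total = server_purchase + monthly_ops_cost * MONTHS

# Renting: metered compute plus per-gigabyte storage that grows each month.
compute_hours_per_month = 400   # assumed usage
compute_rate = 1.00             # $1 per CPU per hour (Sun's early flat rate)
storage_rate = 0.25             # $0.25 per gigabyte per month
monthly_storage_growth_gb = 50  # assumed data accumulation, never cleaned up

cloud_total = 0.0
stored_gb = 0
for month in range(MONTHS):
    stored_gb += monthly_storage_growth_gb
    cloud_total += compute_hours_per_month * compute_rate   # metered compute
    cloud_total += stored_gb * storage_rate                 # growing storage bill

print(f"Own for {YEARS} years:  ${own_total:,.0f}")
print(f"Rent for {YEARS} years: ${cloud_total:,.0f}")
```

Under these made-up numbers, renting ends up slightly more expensive, illustrating the point that steady, high-volume workloads can favor buying; different assumptions can easily flip the result, which is exactly why the analysis is worth doing.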
Firms should enter the cloud cautiously, particularly where mission-critical systems are concerned. Amazon’s spring 2011 cloud collapse impacted a number of firms, especially start-ups looking to ramp up leanly by avoiding the purchase and hosting of their own hardware. HootSuite and Quora were down completely, Reddit was in “emergency read-only mode,” and Foursquare, GroupMe, and SCVNGR experienced glitches. Along with downtime, a small percentage (roughly 0.07 percent) of the data involved in the crash was lost (A. Hesseldahl, “Amazon Details Last Week’s Cloud Failure, and Apologizes,” AllThingsD, April 29, 2011). If a cloud vendor fails you and all your eggs are in one basket, then you’re down, too. Vendors with multiple data centers that can operate with fault-tolerant provisioning (keeping a firm’s systems running at more than one location to cover any operating interruptions) will appeal to firms with stricter uptime requirements, but even this isn’t a guarantee. A human configuration error hosed Amazon’s clients despite the fact that the firm had redundant facilities in multiple locations (M. Rosoff, “Inside Amazon’s Cloud Disaster,” BusinessInsider, April 22, 2011). Cloud firms often argue that their expertise translates into less downtime and failure than conventional corporate data centers, but no method is without risks.