What Really Is Cloud Computing? (Triple-A Cloud)
What is cloud computing? Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses.
The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud or uploading pictures to Photobucket, along with scores of similar services (just keep in mind that several of these services existed before it became fashionable to slap the “cloud” label on them).
Some business articles describe cloud computing as turning a capital expense into an operating expense, while others talk about moving from a product to a service. But for a CIO or IT manager, what exactly does all this mean and how does one get there?
Some CIOs tend to view the cloud as outsourcing large pools of infrastructure as a utility (after all, CIO stands for “Can I Outsource?”, right?). But does one have to rent a third party’s capacity to apply cloud computing concepts, or can they be pursued within existing infrastructure? On the other hand, if an organization has already adopted virtualization (to some extent), some wonder why it should be looking at cloud computing at all.
Many definitions of cloud computing I’ve heard have an element of truth to them, but they are often incomplete, leave people wanting more, and fail to truly capture the essence of cloud computing. What if we could simplify our definition of cloud computing as being built upon three key pillars?
CLOUD COMPUTING IS….
So what are the three key ingredients in cloud computing? Abstraction, Automation and Agility.
Let’s take a closer look at each of these three ingredients of cloud computing. After discussing them we’ll circle back and address some of the original questions about the different models in which cloud computing is being used.
ABSTRACTION

Abstraction is essentially liberating workloads and applications from the physical boundaries of server hardware. In the past we had servers which would host only one application (hence our focus at times on servers and not applications). Virtualization provides this abstraction by separating workloads from server hardware, eliminating hardware boundaries and dependencies and providing workload mobility. That mobility is even being extended to moving workloads from internal data centers to service providers and vice versa. Today the virtual machine defines the boundary, but in the future as the OS becomes less relevant, we might see “virtual containers” defining our workloads on PaaS (Platform as a Service) infrastructures.
The original motivation for virtualization was a CAPEX (capital expense) play — fewer servers, ports, space, electricity, etc. As virtualization matured many quickly found that the management of virtual machines was significantly easier, and that there was a new way of doing many tasks, which could in turn reduce OPEX (operating expenses). To get here, we need to work towards 100% virtualization and the technical barriers have been all but smashed with today’s technology (more on this later).
Put simply, abstraction enables greater resource utilization and can be used with concepts like multi-tenancy to provide greater economies of scale than were previously possible.
There’s also another kind of abstraction taking place that’s causing a wave of disruption — the abstraction of the application away from the traditional PC. The combination of SaaS, application virtualization, VDI and the proliferation of mobile devices (tablets and smartphones) are all driving this trend. Applications no longer need to be anchored to physical PCs as users want to access their applications and their data from any device and any place. For more on this topic, see this earlier post on The New Application Paradigm.
Both of these types of abstraction are removing traditional boundaries and therefore changing the ways in which we manage infrastructure and present applications to endpoints.
And in looking at server virtualization, we see that the virtualization stack also provides a unifying management layer, which can serve as the foundation for so much more…
AUTOMATION

Where abstraction provides the foundation for the new paradigm, automation builds on top of that foundation to provide exciting opportunities for organizations to reduce OPEX costs and promote agility.
Let’s start with the basics. Thanks to encapsulation (provided by virtualization), new possibilities have emerged with replication, disaster recovery, and even the backup and recovery process itself. There’s agent-less monitoring of many core performance metrics, scripting across VMs and hosts, virtual network switches and firewalls, and of course, near-instant provisioning from templates. Such levels of automation were not easily accessible before the abstraction layer of virtualization was introduced.
Now we have products such as VMware’s vCloud Director which can take all of the elements of an n-tier application, and quickly provision them — including firewall rules and even with multi-tenancy. Imagine deploying an entire N-tier application of multiple virtual machines, complete with network config and firewalls with just a few clicks. Now add to that the concept of a self-service catalog, where business units can request resources for an application over a web form, and upon approval the application is automatically provisioned consistent with the provided specifications, while conforming to existing IT standards and compliance audits.
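To make the self-service flow concrete, here is a minimal sketch in plain Python. The catalog entries, VM names, and data structures are hypothetical stand-ins for what a product like vCloud Director actually automates; this is an illustration of the idea, not a real API.

```python
# Hypothetical sketch of template-driven, multi-tier provisioning.
# All names and structures here are illustrative assumptions.

CATALOG = {
    "web": {"cpu": 2, "ram_gb": 4, "image": "web-template-v3"},
    "app": {"cpu": 4, "ram_gb": 8, "image": "app-template-v3"},
    "db":  {"cpu": 8, "ram_gb": 32, "image": "db-template-v3"},
}

def provision_app(name, tiers, firewall_rules):
    """Expand a catalog request into concrete VM records plus network config."""
    vms = []
    for tier, count in tiers.items():
        template = CATALOG[tier]
        for i in range(count):
            vms.append({
                "name": f"{name}-{tier}-{i:02d}",
                "tier": tier,
                **template,
                "vlan": f"vlan-{name}-{tier}",  # one isolated VLAN per tier
            })
    return {"vms": vms, "firewall": firewall_rules}

# A business unit requests a small n-tier app through the catalog:
deployment = provision_app(
    "crm",
    tiers={"web": 2, "app": 2, "db": 1},
    firewall_rules=[("web", "app", 8080), ("app", "db", 5432)],
)
```

The point is that the entire request, VMs, VLANs, and firewall rules, is expressed as one specification and expanded mechanically, which is what makes “a few clicks” deployment possible.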
Those are just some of many angles to automation. Another is orchestration of converged infrastructure (of which the Vblock is one example). Rather than trying to manage the core infrastructure elements of compute, storage and networking as independent silos as many do today, we can instead deploy converged infrastructure with orchestration tools that can unify and transcend the silos, allowing infrastructure to be managed and provisioned more like a singularity. And many of these orchestration tools can plug directly into the virtualization stack (e.g. vCloud Director) for even more integration.
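The “transcend the silos” idea can be sketched as a workflow engine: steps that would normally be separate storage, network, and compute tickets run as one ordered sequence, with automatic rollback if any step fails. The step names below are hypothetical, and this is a toy model of what real orchestration tools do, under those assumptions.

```python
# Illustrative orchestration sketch: cross-silo provisioning as one workflow.
# Step names ("carve-lun", etc.) are invented for the example.

def orchestrate(steps):
    """Run (name, do, undo) steps in order; undo completed steps on failure."""
    done = []
    try:
        for name, do, undo in steps:
            do()
            done.append((name, undo))
    except Exception:
        for name, undo in reversed(done):
            undo()  # unwind in reverse order, like a transaction rollback
        raise
    return [name for name, _ in done]

log = []
steps = [
    ("carve-lun",   lambda: log.append("lun"),  lambda: log.remove("lun")),   # storage silo
    ("create-vlan", lambda: log.append("vlan"), lambda: log.remove("vlan")),  # network silo
    ("deploy-vms",  lambda: log.append("vms"),  lambda: log.remove("vms")),   # compute silo
]
completed = orchestrate(steps)
```

Treating the three silos as one transaction is what lets converged infrastructure be managed “like a singularity” rather than as three ticket queues.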
Now of course there are obstacles to this automation, which can include “PSP syndrome” (an adherence to physical server processes), heavily siloed organizational structures, integration challenges and even multiple hypervisors.
There are many more angles to automation we haven’t touched on yet, but the key is that abstraction enables new opportunities for automation, and that automation can then be used to pursue….
AGILITY

Why does VMware say that they want infrastructure to be transparent? Let’s answer that question with another question: does the business care about storage, network or server technologies? At the end of the day the business cares mostly about two primary deliverables from IT — the health of their applications (as measured by uptime and other performance metrics) and the time it takes to deploy/provision them.
Agility means the rapid, successful execution of business strategy, and time is a HUGE component of this. There’s competition, market opportunities, patents and legal issues, first-mover advantage and so many other reasons why time is…well…money.
CAPEX and OPEX savings can have a positive impact on budgets, but when you get to a place where you can complete major projects in weeks rather than quarters, that’s a profound paradigm shift, one that can often be of more value to an organization than CAPEX and OPEX reductions combined.
Imagine that the business wants to build a 200-server n-tier application to support a new initiative, and that it has to be PCI compliant. First you have to have the infrastructure (compute, storage, networking) to rapidly provision, and then you need to work with the application, networking and security teams to provision the VLANs and firewall rules. If you’ve ever worked in an IT shop that is heavily siloed and uses physical server processes, you know the technology might be obsolete by the time the solution is finished. The back and forth between departments and ticket processes just to get the VLANs or firewall rules set correctly for the application, or to make any later adjustment, can slow such a project to a crawl.
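One reason the ticket-driven approach is so slow is that the security policy lives in people’s heads and email threads. When the allowed flows are declared as data, the default-deny rule set can be compiled mechanically. A rough sketch, with invented tier names and ports standing in for a real PCI-scoped design:

```python
# Hypothetical sketch: compiling a default-deny firewall policy from a
# declared list of allowed tier-to-tier flows. Tiers and ports are
# illustrative, not a prescription for an actual PCI environment.

def compile_policy(flows):
    """Turn (src, dst, port) tuples into ordered allow rules plus a final deny."""
    rules = [
        {"action": "allow", "src": src, "dst": dst, "port": port}
        for src, dst, port in flows
    ]
    # Anything not explicitly allowed is denied, the posture auditors expect.
    rules.append({"action": "deny", "src": "any", "dst": "any", "port": "any"})
    return rules

policy = compile_policy([("web", "app", 8080), ("app", "db", 1433)])
```

Because the policy is generated from the declared flows, every later adjustment is a one-line change followed by a re-run, rather than another trip through the ticket queue.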
However if you can successfully leverage abstraction and automation in your IT department, you can get to the point where you can reduce the time to provide solutions to the business by months in many cases. It’s being done today, and that’s one of the biggest reasons why there’s so much excitement in not just IT circles, but business and leadership circles, about cloud computing.
In an earlier post I introduced the concept of a value triangle, illustrated below. The organization begins its journey by using virtualization (which provides abstraction) to achieve CAPEX benefits. This provides the foundation for automation, which enables opportunities for additional OPEX benefits, and all of this in turn gives the organization the chance to capture agility benefits that could potentially be of far greater economic value than the CAPEX and OPEX (cost-center) benefits combined.
The value of cloud computing is so profound that we all should be doing it this way and should be just calling it “computing”. But we aren’t quite there yet, hence the term “cloud computing”.
The bottom line is that if you can successfully execute on abstraction and then automation, you can begin to align your IT services to the needs of the business and work with the business with a partner-minded relationship, providing the agility to rapidly execute their business plan.
WHAT SHAPE IS YOUR CLOUD?
Clouds can come in many shapes and sizes. Some are internal and some are outsourced. Then there’s the whole private/public/hybrid cloud debate (complete with academic hairsplitting), and let’s not forget PaaS, IaaS and SaaS. Which of these “shapes” should your cloud have, and what should it look like? Perhaps someday there will be ITIL standards on exactly what form these elements ought to take within an ISO 9000 compliant cloud design (no, not really; I just said that to get Stevie Chambers all worked up). So where do I start working on this cloud thing, and which strategy should I use?
While these are often relevant discussions, it’s often helpful to forget about these “debates” and buzzwords and focus instead on the core elements of cloud computing — abstraction, automation and agility, so that we can better understand the value proposition and consider the best methods to employ towards that end.
Do you need to outsource to a third party to leverage cloud computing? Can you leverage it in your existing datacenter? The best path will vary, but one must first embark on the cloud journey in order to reap the benefits.
I’ve seen at least a half dozen business or IT articles over the past month alone that either said directly or strongly implied that cloud computing means workloads must be outsourced to some third party as a utility service. Cloud computing resources can certainly be outsourced to a third party, but they don’t have to be. And even if you did decide to outsource, it’s no magic bullet: your processes and organization still need to evolve to the point where you can achieve the levels of automation, and therefore agility, that you seek.
In the long run, I tend to think that more and more workloads will indeed be moved to third-party service providers (an upcoming post will deal with this), but for the time being the best path for many organizations may very well be to pursue cloud computing within their own existing datacenters. You must embark on the journey to first achieve abstraction, and then re-engineer your processes and organizational model as you work to achieve greater levels of automation and agility.
THE CLOUD JOURNEY
The cloud journey, which is almost always a marathon, begins with virtualization. In the past we’ve had to battle “server huggers” and many other barriers to the adoption of virtualization, but especially with the release of vSphere 5, those technical barriers are gone for the overwhelming majority of workloads. If a given environment can’t virtualize an application effectively today, it’s more than likely a limitation of the architecture or skill sets; the proof is in the results, as organizations are having success and publishing case studies on virtualizing their ERP systems and database tiers (see the performance section of this blog for just a few examples).
Sometimes we encounter “server huggers” who still want their application to have the familiar and comfortable boundaries of a physical server they can identify. And sometimes in our datacenters we build “Frankenstructures”: excessively complex infrastructures that we invest great time and engineering expense in and become too attached to, until the engineering burden begins to weigh us down (another area where converged infrastructure can help). Server huggers, Frankenstructures, physical server processes and siloed org structures can all be obstacles on the cloud journey.
Clouds evolve through different stages, and it takes a journey, perhaps a marathon, to reach the “agility zone.” Let’s take a step back from the private/public and PaaS/IaaS debates and start by focusing on just the core elements of abstraction, automation, and agility. A few key points to summarize:
- Clouds have many shapes and forms, but they all rely on abstraction and automation to enable the potential for agility.
- It is not a requirement to outsource or to move anything to “the cloud”. You can begin the journey in your own datacenter(s) by first pursuing abstraction and then automation.
- Cloud isn’t just about technology. It’s also about organizational structure and processes. Re-engineer your physical server minded processes, refresh your skill sets, and knock down your organizational silos.
- Virtualization alone isn’t enough. Cloud computing requires the effective use of automation (at many different levels) to reduce provisioning and service delivery times.
Building On #AAACloud
This is sort of a “foundation” post, and I’m hoping the conversation can be continued and expanded using the Twitter hashtag #AAACloud, as well as through future content that builds on this discussion. That might include a video post or two, plus SlideRocket presentations on “What comes after virtualization?” and on the value of converged infrastructure. There may also be follow-up blog posts on topics such as:
1) Using Multiple Hypervisors
2) Private Datacenters versus Outsourced Service Providers
3) Legacy physical processes
4) Organizational Models
…and I’m sure more will come to mind. Join the conversation on Twitter with #AAACloud.