vSphere 5 Licensing — Not Quite So Bad?
With the vSphere 5 details out, most of us (especially VMware) would rather be talking about the new and exciting capabilities, but the new licensing model has generated a great deal of attention and concern. The potential impact is broad and needs to be understood, and some are no doubt asking "why?" as well.
I wanted to walk through a few scenarios and also discuss the impact and potential reasons for this change.
UPDATE 4: VMware has modified the license model and is granting additional entitlements. Details here.
UPDATE: Please look at Gabrie van Zanten's post, which goes into great detail on vSphere 5 licensing.
UPDATE 2: Added “Monster VM” section inline below
vSphere 4 was licensed primarily on a per-CPU socket basis, but vSphere 5 will introduce a new pooled vRAM component where a certain amount of vRAM allocation is included with each CPU license depending on the edition (for example, Enterprise Plus provides 48GB of vRAM for each CPU license – details in the vSphere 5 licensing whitepaper). For many the first impression was one of concern that this would lead to higher costs. Are these concerns justified?
For a starting scenario, let's look at a sample environment consisting of 2-socket (CPU) hosts with 96GB of RAM running Enterprise Plus. Because each host has 2 CPU sockets, the customer is entitled to 96GB of pooled vRAM (48 x 2), which means there will be no net change in licensing cost. The bottom line: costs do not increase as long as the memory-to-CPU ratio does not exceed what the license provides (48GB per CPU in Enterprise Plus).
But what if we increase the RAM in each 2-CPU host to 128GB?
vRAM versus RAM
Physical RAM installed in the host is not the same as allocated vRAM. vRAM is not counted until a VM is powered on and requests access to the memory pool. In other words, your host may have 128GB installed, but the real question is how much of that 128GB is actually allocated to powered-on VMs.
Within the boundaries of a single host, you would need to purchase additional licenses if your allocated vRAM exceeded 96GB in this scenario, but most of us aren't working within the boundaries of a single host, either.
The vRAM pool is aggregated across all the hosts managed by your vCenter Server, including vCenter instances that are linked to one another. Because the pool spans all of your hosts, chances are many of them will have vRAM allocations below the "waterline," and all of that unused entitlement rolls up into the larger pool. A well-designed environment would not use 100% of physical RAM anyway, but rather something closer to 85%.
To illustrate aggregation in action, say Host A has allocated 140GB of vRAM, while hosts B, C and D have allocated 60GB each. You're still 64GB below the vRAM limit and have not incurred any new cost. Aggregation lets you leverage unused capacity from multiple hosts as one big vRAM pool.
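The pooled arithmetic above can be sketched in a few lines of Python. This is a minimal sketch, assuming Enterprise Plus (48GB of vRAM per CPU license) and the 2-socket hosts and allocations from the example; the host names and figures are just the ones used in the text.

```python
# Pooled vRAM check for the four-host example above.
# Assumes Enterprise Plus: 48 GB of vRAM per CPU license.
VRAM_PER_LICENSE_GB = 48

# 2-socket hosts with the vRAM allocations from the example (in GB).
hosts = {
    "A": {"sockets": 2, "allocated_gb": 140},
    "B": {"sockets": 2, "allocated_gb": 60},
    "C": {"sockets": 2, "allocated_gb": 60},
    "D": {"sockets": 2, "allocated_gb": 60},
}

# Entitlement aggregates across every licensed CPU, not per host.
pool_gb = sum(h["sockets"] for h in hosts.values()) * VRAM_PER_LICENSE_GB
allocated_gb = sum(h["allocated_gb"] for h in hosts.values())

print(f"Pool entitlement: {pool_gb} GB")                # 8 licenses x 48 = 384 GB
print(f"Allocated vRAM:   {allocated_gb} GB")           # 320 GB
print(f"Headroom:         {pool_gb - allocated_gb} GB") # 64 GB below the limit
```

Note that Host A alone exceeds its own 96GB entitlement, yet no new licenses are needed because the other three hosts contribute their unused headroom to the pool.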
Core Restrictions Lifted
In vSphere 4 you were limited to either 6 (Standard, Enterprise) or 12 (Enterprise Plus) cores per CPU. With Intel and AMD shipping more cores with each processor generation, many customers would have had to purchase additional licenses just to cover the core counts of today's and tomorrow's processors. vSphere 5 lifts the core restriction entirely, which by itself would mean less revenue for VMware to drive its R&D efforts. Well, they didn't give up the revenue so much as reposition it, from a physical core restriction to a consumption model based on memory.
The results will vary somewhat from one environment to the next, but should software be licensed on the physical hardware that we provision, or on the actual resources that we consume? As we'll get into below, there may be several advantages to the consumption approach.
Cost Bottom Line
The bottom line is how much RAM is being ALLOCATED (not provisioned) for each CPU in your environment, averaged across ALL the hosts in your vCenter environment (including linked vCenter servers). Certainly there will be some environments where the average memory allocated per CPU exceeds the licensed allotment, requiring additional licenses to be purchased, but this must be balanced against licenses that may have been saved by the removal of all core restrictions.
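To make the "do the math" step concrete, here is a small sketch of that calculation. The per-host figures are hypothetical (an environment already over the 48GB-per-CPU Enterprise Plus entitlement), chosen only to show how an overage translates into additional licenses.

```python
import math

# Assumption: Enterprise Plus entitlement of 48 GB vRAM per CPU license.
VRAM_PER_LICENSE_GB = 48

# Hypothetical environment: four 2-socket hosts (8 licensed CPUs)
# with 420 GB of vRAM allocated to powered-on VMs in total.
total_sockets = 8
allocated_vram_gb = 420

avg_per_cpu = allocated_vram_gb / total_sockets
overage_gb = max(0, allocated_vram_gb - total_sockets * VRAM_PER_LICENSE_GB)
extra_licenses = math.ceil(overage_gb / VRAM_PER_LICENSE_GB)

print(f"Average allocated per CPU: {avg_per_cpu} GB")  # 52.5 GB, above the 48 GB entitlement
print(f"Extra CPU licenses needed: {extra_licenses}")  # 1
```

In this sketch the environment averages 52.5GB per CPU, so one additional CPU license covers the 36GB overage. Whether that extra license is a real net cost depends on how many licenses the removal of core restrictions saved you elsewhere.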
The Density Problem
Some of the biggest concerns may be from environments that have standardized on larger blades (128GB or 196GB) and are getting a very high density ratio. These environments seem to be far more likely to be impacted by the licensing changes. It will be very interesting to see this play out and how this licensing change might impact decisions on blade sizing for vSphere environments.
The Monster VM
vSphere 5 technically enables the "Monster VM" by supporting up to 32 vCPUs and 1TB of RAM per VM, but at first glance the new licensing scheme would seem to reduce the incentive to leverage this capability. VMware's Scott Sauer pointed out to me that many "Monster VM" candidates would be coming from proprietary RISC/UNIX systems (HP-UX, AIX, etc.). When you factor in the savings from moving off these high-cost, non-cloud platforms, the Monster VM will be far cheaper in most cases, and that's before you consider the other intangibles (cloud, ops, DR, etc.).
Chargeback And The Cloud
From an economic perspective, measuring consumption (via RAM allocation) will be a more accurate measure of resource valuation than measuring physically provisioned resources (CPU/Memory/etc.). This has significant implications for chargeback both within the IT organization and for cloud computing.
Within the IT organization it forces application owners to consider the resources they are actually demanding rather than simply demanding a pool of infrastructure. For many reasons I'm a big believer in accurately tracking the value of resource consumption and charging it back to those who request it. Not only are there budget considerations, but oftentimes resources are used more effectively under a chargeback model.
Now let's look toward the cloud. In the future I believe it will be increasingly commonplace to see workloads migrated from private datacenters to service providers, or even between service providers. As the public cloud evolves, wouldn't it be nice for software licenses to be aligned with consumption, rather than with a physical paradigm that may differ from one environment to the next? This is what VMware is doing, and I suspect that over time more and more vendors will be forced to move to a similar consumption model that is not tied to physical hardware.
I can't put it better than Wikibon's Stuart Miniman, who stated, "if VMware's licensing change becomes a forcing function for chargeback, that's a #cloud silver lining." Chargeback is ultimately a good thing and enables better alignment with the economics of the cloud.
In closing, I think VMware has some very good reasons for moving in this direction. Change is disruptive, and we (and our processes) don't usually like change, but aligning software costs with actual consumption rather than a less precise physical-server measure is a good thing in the long run, and more vendors will likely consider a similar move at some point.
Once they take a moment to "do the math," many may conclude that the new licensing model is not nearly as bad as they had once imagined. Those running larger hosts (128/196GB) will likely be the most impacted and will want to review the new licensing model before they size their servers.