Top 6 Features of vSphere 6

This changes things. It sounds cliché to say "this is our best release ever" because, in a sense, the newest release is usually the most evolved. However, as a four-year VMware vExpert, I do think there is something special about this one. This is a much more significant jump than going from 4.x…


vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2. I can't tell you everything that's in it due to the NDA, but you can still register for the beta yourself, read about what's new and download the code for your home lab. There's some pretty exciting stuff being added to vSphere 6.0…


Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.


VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira's SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to…


What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing? Ask a consumer, a CIO, and a salesman and you'll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple's iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it…


Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements, such as: "It will reduce our TCO." "This is a strategic initiative." "The ROI is compelling." "There's funds in the budget." "Our competitors are doing it." Some of these are better reasons than others, but here's a question. Imagine a…


Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….


Should You Virtualize vCenter Server (and everything else?)

When concerns are raised around virtualizing vCenter Server, in my experience they usually revolve around performance and/or out-of-band management. The VROOM! blog at VMware just published a whitepaper that looks closely at vCenter Server performance as a VM versus native (physical), which speaks to these concerns as well as to other workloads. vCenter Performance…


When CPU metrics with Hyperthreading, Monster VMs and VMware Make No Sense

CPU seems like such a simple thing, but in the age of virtualization, hyper-threading and vNUMA, it can get quite complicated. In fact, looking at some metrics can make you lose your mind until you realize what's really going on. Let's jump right into the original problem.

I encountered a VM with 16 vCPUs on a server with 16 physical cores (2 x 8). The VM was frequently triggering alarms in vROPS (vRealize Operations Manager) because at times it would sit at 90% or more for sustained periods, while the host server showed just under 50% utilization. This is illustrated in the graph below.

[Graph: the VM near 90% CPU while the host shows under 50% utilization]

Why would a VM with 16 vCPUs be at 80% when the 16-core host is only at 41%? What's going on here?

The first things we will need to explore are hyper-threading, NUMA and how different CPU metrics in VMware are calculated.

Hyperthreading

Hyper-threading was intended to solve the problem of wasted potential resources in the CPU. It does not change the physical resources (the CPU cores), but more of their potential can be tapped by allowing two threads to be processed by the same execution resource simultaneously — with each physical core being an execution resource.

When hyper-threading is enabled, it doubles the number of logical processors presented by the BIOS. In this example, our 2-socket, 8-core-per-socket system with 16 physical cores now presents 32 logical processors to VMware. Chris Wahl has an excellent post on this topic which I strongly encourage you to read, but for now I'm just going to "borrow" one of his graphics.

[Diagram: hyper-threading execution resources, taken from Chris Wahl's post (link in article)]

VMware’s CPU scheduler can make efficient use of hyper-threading and generally it should be enabled. The number of logical processors now doubles, providing a performance benefit in the range of 10-15% in most vSphere environments (depending on workload/applications, etc.).

But what about our scenario of a host which has only 1 “monster VM”?

Sizing a Monster VM with Hyper-Threading

The general rule here is that you should not provision more vCPUs than the number of PHYSICAL cores the server has. In our scenario there are 32 logical processors presented due to hyper-threading, but only 16 physical cores. If we provision more than 16 vCPUs to the VM, execution resources will have to be shared within the VM. There are some exceptions here (test your workloads!), but it is generally recommended not to exceed the number of physical cores for this reason.

VMware has a blog post on this topic.  What is their guidance?

VMware’s conservative guidance about over-committing your pCPU:vCPU ratio for Monster virtual machines is simple – don’t do it.

For a deeper dive on the issues here please see Chris Wahl’s post (again) or this post on the VMware blogs about Monster VMs.
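The rule is simple enough to express as a sanity check. Here's a trivial sketch (my own illustration, not VMware code — and remember the exceptions above apply):

```python
def monster_vm_sizing(vcpus: int, physical_cores: int) -> str:
    """General guidance: a single VM's vCPU count should not exceed the host's
    physical cores, even though hyper-threading presents twice as many logical CPUs."""
    if vcpus <= physical_cores:
        return "OK: each vCPU can be backed by its own physical execution resource"
    return "Not recommended: vCPUs will contend for shared execution resources"

print(monster_vm_sizing(16, 16))  # our scenario
print(monster_vm_sizing(24, 16))  # exceeds physical cores despite 32 logical CPUs
```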

NUMA and vNUMA

In the interest of time I'm not going to go too deep here, but let's just say NUMA is a technology designed to assign affinity between CPUs and memory banks in order to optimize memory access times.

vNUMA, introduced with vSphere 5.0, allows this technology to be extended down to guest virtual machines.

The bottom line here is that the mix of virtual sockets and virtual cores assigned matters. As this article shows, processing latency can be increased if these settings are not optimal.

First you’ll want to make sure that hot-CPU add is disabled as this disables vNUMA in any virtual machine and then you’ll want to make sure that your allocation of virtual sockets and virtual cores matches the underlying physical architecture or you could be adding some processing latency to your VM as noted in this VMware blog post.

One more point here. There's a setting in VMware called PreferHT. You can read about it here, but it basically changes the preferences in vNUMA. There's no universal answer, as it will vary from application to application, but this setting is a trade-off between additional compute cycles and more efficient access to processor cache and memory via vNUMA. If your application needs faster memory access more than it needs compute cycles, you may want to experiment with this setting.

BACK TO OUR PROBLEM…

As it turned out, all of our settings here were optimal. We had one virtual socket with 16 cores – matching the 16 physical cores on the server – and vNUMA enabled. If you are using a Windows guest, you can download Coreinfo.exe from Sysinternals to get more detail on how vNUMA is configured within your VM.

But we still don’t have an answer to our question – why is VM CPU at 80% when the host is at 41% given 16 physical cores (host) and 16 virtual cores (VM)?

Is it possible that not all the cores are being used? Let's check — here is a graph from vCenter showing that all 32 logical cores are being used (tracking within 10% of each other), with average CPU at 19%, peaking at 27%, over the past hour:

[Graph: all 32 logical cores in use on the host, averaging 19% and peaking at 27%]

But then we look at the VM for the same time period and we see the same pattern except that CPU peaks at over 90% and averages 45%:

 

[Graph: the same period for the VM, CPU peaking over 90% and averaging 45%]

 

How can this be? The VM is triggering high utilization alarms when the host is at less than 50% utilization.

Let’s go to ESXTOP to get some additional metrics, but first we need to understand the difference between PCPU and “CORE” in ESXTOP:

[Screenshot: ESXTOP counter definitions for PCPU and CORE]

So PCPU refers to all 32 logical processors while “CORE” refers to only the 16 physical cores.

Now let's look at ESXTOP for this host.

[Screenshot: ESXTOP showing CORE UTIL% at 78% and PCPU UTIL% at 40%]

Notice how CORE UTIL% is reported at 78% while PCPU UTIL% is only 40%. That's a big difference! Which one is right?

If we look at the Windows OS, we see that CPU at that same instant was aligned with the CORE UTIL% metric:

[Screenshot: Windows guest CPU at the same instant, aligned with the CORE UTIL% metric]

It seems that there’s a couple things going on here. First, the CORE UTIL metric more accurately reflects utilization for THIS Monster VM scenario as it averages across 16 physical cores and not 32 logical cores. Second it seems that the CPU utilization metrics which we tend to rely on in vCenter and other tools tend to follow the PCPU (hyper-threaded) statistics and not the “core” utilization.

A few graphs to quickly illustrate this. First, once more, here's CPU for both the host and the one VM on it, as reported by vROPS 6:

[Graph: vROPS 6 CPU utilization for the host and its single VM]

Same pattern in both, but the host is averaged across 32 logical processors while the VM is averaged across 16 vCPUs, which results in the VM's numbers being almost double.
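The arithmetic behind that doubling is easy to sketch (the per-core clock speed below is an assumption purely for illustration):

```python
# Same CPU demand, two different denominators.
core_mhz = 2600                        # assumed per-core clock speed
demand_mhz = 16 * core_mhz * 0.45      # the VM averaging 45% across its 16 vCPUs

vm_capacity   = 16 * core_mhz          # VM is measured against 16 vCPUs
host_capacity = 32 * core_mhz          # host is measured against 32 logical processors

print(f"VM   utilization: {demand_mhz / vm_capacity:.0%}")    # 45%
print(f"Host utilization: {demand_mhz / host_capacity:.0%}")  # roughly half the VM's number
```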

We can also see this by looking at MHz rather than percent utilization:

[Graph: CPU consumption in MHz for the VM and the host]

Without breaking down the math, the number of MHz consumed by the VM, divided by the capacity of the host, does align with the CORE UTIL% metric.

One thing I could not figure out about this chart is why the host shows LESS MHz utilized. There should be no averaging here – just raw MHz consumed – so it escapes me why the host would show less consumed than the VM (not possible in raw MHz). If anyone has an answer for this, I'll gladly update this post with attribution.

So if I’m using vROPS 6, what metric do I use to see actual core utilization without factoring for hyper threading? The documentation I must confess lost me a bit. Allegedly this metric exists but I couldn’t find it anywhere:

[Screenshot: vROPS documentation describing the metric]

After some trial and error I did find a CPU Workload % metric which does appear to focus on the cores (no hyper-threading):

[Graph: CPU Usage % (top) versus CPU Workload % (bottom)]

Again the pattern is identical, except "Usage" (top) is averaged over 32 logical processors – not accurate for our scenario – and "Workload" (bottom) is averaged over the 16 physical cores. Here the Workload metric (bottom) gives a far more accurate picture, one which aligns with the VM-level metrics. If we look at just the default Usage % metric, we are left with the impression that the host has far more resources to give and that our vCPU allocation (or something else) may not be efficient, but that does not seem to be the case here.

So what would these metrics look like on a host with many workloads and no Monster VMs (more common)?

[Graph: Usage % versus Workload % on a host running many VMs]

The scales are different here, which makes the bottom chart appear more volatile, but the gap between the two is not the doubling we saw before. The numbers are much closer.

Now here’s a question that troubles me. The default CPU metrics in vSphere count all the logical cores but look at the peak above. If I looked at the default CPU graph, I’d think I was at 74% when the physical cores were actually at 88%. I can see how averaging across all logical cores can provide a better view of utilization, but it seems to me that the Workload metric (physical cores only) provides a better watermark for detecting bottlenecks.

CONCLUSION:

We’ve jumped about a bit here but if you’re still with me let’s try to nail down some conclusions from all this:

  • Hyper-threading does not increase execution resources, but in many cases it allows them to be used more efficiently depending on the workload (this benefit is often 10-15% in VMware environments).
  • The default CPU metrics in vSphere are averaged across all logical cores, which includes those added by hyper-threading. This can produce confusing results when a single "Monster VM" is running on a host.
  • VMware’s guidance is to NOT exceed the number of physical cores on a host, when provisioning vCPUs to a Monster VM.
  • In vROPS 6 the Workload % metric appears to only look at physical cores and thus may be a better indicator for CPU bottlenecks in some cases.
  • vNUMA considerations including virtual to physical core allocation can impact performance.

As for our VM which was triggering CPU alarms, it appears that it is using an appropriate amount of resources on the host after all. There's a possibility that we could experiment with more cores and possibly get better results, but the key is we can throw out the 50% disparity between host utilization and VM utilization as bad data in this scenario.

And last but not least –

  • Measuring CPU utilization is not nearly as simple as we had thought.

One final note — this is my interpretation of what I am seeing. If anyone can offer better guidance (especially corrections) to anything I’ve posted here, please do so and I will be glad to update the post with attribution.

VM Snapshots — They can be a problem, but VVols in vSphere 6 can help

Snapshots in VMware have been an invaluable tool for years.  The ability to create an application consistent point-in time snapshot of a virtual machine has significant OPEX (or DevOps — take your pick) benefits. It can be used as an “undo” button for upgrades, it can facilitate clones and some replication solutions, but perhaps most commonly it is used to facilitate backups of virtual machines.

For me snapshots have always been a love-hate thing. A wonderful feature, but in some cases they've caused a lot of pain and disruption. In my vSphere 6 What's New post I talk a bit about VVols — what VVols mean for snapshots just might be one of the least known and least discussed features.

Here’s the issue.  A snap is created, the backup runs, and then the snap is closed.  It is this closing of the snap where the VM can become “stunned” for significant lengths of time.  I’ve seen this become an issue for highly transactional servers ranging from web servers, databases and email mail systems as well.

There’s even a VMware KB article that discusses this problem:

[Screenshot: VMware KB article discussing snapshot consolidation stun times]

So what’s happening here? Think of this this way.

First the snap is opened. From this point forward, writes are committed not to the base virtual disk (VMDK) but to a child VMDK. The more writes that occur while the snap is open (often a function of how long the backup takes), the greater the size of this child VMDK, which will have to be consolidated.

 

[Diagram: a VMDK with a chain of three open snapshots]

Above you can see a VMDK with three snapshots open. Writes go to the most recent snapshot, and the live state of the VMDK is actually a real-time calculation across this entire chain. I once discovered an Exchange server for which the backups were not properly configured — there was a chain of 58 snaps supporting a production Microsoft Exchange server! Yikes!

There’s actually an additional snapshot file that is created for application quiescing (Microsoft VSS) but there’s no need to go into that here.  Hopefully you already have an appreciation for how closing a snapshot can be problematic for transactional workloads.  These snapshots — and the child VMDKs created for them — need to be written back into the base VMDK.

For many VMs you might be able to back up and run snapshots just fine. But in my experience, even an IIS server creating a steady output of IIS log files can experience disruption during a snap close event — especially if you are doing a full backup. I've watched the snap close process freeze IIS servers to the point where web transactions are dropped and lost. And for large transactional databases, you can just forget about it.

vSphere 6 and VVols

With the new VVol feature in vSphere 6, several things change. First of all, the base VMDK is ALWAYS the base VMDK. It is always the write target. The snapshots are now read-only reference files that do NOT exist as a chain. When the snap is closed, there's nothing to ingest back into the base VMDK — it already has everything!

 

[Diagram: snapshot layout with VVols]

This image is from Cormac Hogan’s post (link in article) which shows how snapshots are read-only references in vSphere 6 VVols.

This is a huge change from the previous method where writes went into the most recent snap in the chain and would have to be consolidated back into the base VMDK.  Now there’s nothing to consolidate when the snap is closed — the base VMDK is always the live state. VMware’s Cormac Hogan has an excellent post on this which goes into far greater detail on how this process works.
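Extending the toy model from earlier (again, a simplification — real arrays implement the references in metadata), the VVol behavior looks like this:

```python
# VVol-style: the base VMDK is always the write target; snapshots are
# read-only references to a point in time. Closing a snap merges nothing.
base = {0: "A", 1: "B", 2: "C"}
snapshots = []

def take_snapshot():
    snapshots.append(dict(base))   # read-only point-in-time reference

def write_block(block, data):
    base[block] = data             # writes always land on the base VMDK

def close_snapshot():
    snapshots.pop()                # discard the reference - no consolidation, no stun

take_snapshot()
write_block(1, "B2")
close_snapshot()
print(base)                        # {0: 'A', 1: 'B2', 2: 'C'} - was the live state all along
```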

But that’s not all. VVols also enable the ability to offload snapshot functions to the array controller.  The implementation details may  vary among storage vendors, but the whole snapshot process can be offloaded to the storage array itself in some cases, providing instant and non-disruptive snapshots.

This is a huge change from vSphere 5, one which should allow for backups and snap close operations on highly transactional servers where this might not have been possible in the past. Impact-free snapshot (and backup) operations.

Now, in full disclosure, I've not had the opportunity to work with VVols in production yet, but perhaps you can see why I'm rather excited. Non-disruptive snaps and backups for ALL workloads would be a welcome feature.

Do you have any experience with snapshots with vSphere 6 and VVols? Post in the comments below.  Go VVols!

vSphere 6 (and VSAN 6) Now Available

The wait is over. You can now download vSphere 6 and VSAN 6 from VMware.com. I've written about the great features in vSphere 6 here, and a lot of people are anxious to upgrade and start realizing the benefits of the new capabilities. But hold on — before you do, there are some things you should probably review first.

First of all you’ll want to review this KB article which details compatibility considerations, upgrade considerations and much more.

If you’re using vCNS (vCloud Networking and Security) you will most definitely want to be reading this KB article before you start even making plans to upgrade.

You’ll also want to check out the vSphere HCL (Hardware Compatibility List) to see if things like your processors or your storage array will support vSphere 6.  The HCL even goes into specifics like VAAI primitives to show you which features are supported with which firmware releases on your storage array.  Also note that many storage arrays will support ESXi 6.0 with their current firmware release but support for vVols in some cases will not be available until a future firmware release.  As always check with your storage vendor.

[Screenshot: HCL entry for a storage array]

Example of level of support detailed for a specific storage array varying with firmware version

You’ll also want to check with 3rd party vendors that integrate with vSphere.  This is basically any VIBs installed on host servers and anything that registers with vCenter Server, especially any plugins.  Backup/recovery and monitoring solutions are some of the most common here, but there’s also replication, storage acceleration and several others.  I’ve reached out to 5 vendors on the issue of vSphere 6 support it and none of them formally support it yet, but plan on delivering support in a matter of weeks or months depending on the vendor.  Whether you need to wait for formal support or want to be a pioneer depends on the use case and your tolerance for risk.  In some cases you may be able to “cheat” formal support while in other cases the risk may be just too great without formal support.

And last but not least, read the release notes!

Happy upgrading!

Politics and the Engineer

Those of you who follow me on Twitter know that I often can't help but blurt out some political content on occasion. I'll be making some changes to my online presence, and I wanted to take the opportunity to explain and also make some points along the way.

First my point about the engineer…

As engineers in the information technology space we often have to deal with multiple viewpoints. Almost certainly at some point you have found yourself standing before a whiteboard debating with another engineer on the best approach. You may not have found yourself in agreement with that engineer, but hopefully as a result of your discussion you came away understanding the alternate view and with at least a measure of respect for the logic that brought the engineer to this approach, even if you didn't adopt it yourself.

IT is complex and there are near-infinite combinations of scenarios and situations. Methods or approaches that work for one engineer or environment may not be as effective in another – or perhaps there are new methods. Another engineer might have discovered (out of necessity) a process that works better for them and/or their environment.

And of course let’s not forget that cloud computing isn’t inherently a unicorn – the approaches taken by the organization – org charts, processes, culture, etc. – will have a profound impact on the ability of such technology initiatives to create value. How we approach implementing a given technology can make all the difference in the world.

It is this diversity of experience and diversity of ideas that can make our teams even more effective, provided that we approach our vocations with professionalism and respect. This means being willing to entertain and understand the reasoning (or even experiences) behind differing designs and approaches.

For those of us who are married, chances are you've found some issue on which you and your spouse didn't fully agree. But hopefully you were able to reason through it, understand each other, and come to a point where you can respect each other's position (with some fun, playful jabs along the way).

The same should be true of politics. Many choose to shy away from politics, religion and the like in their public presence, and this is quite understandable in today's environment. For myself, I was never representing an employer that was active in this industry, so there was little risk for me from this perspective. At the same time, I think a case can be made that as professionals in civil society we have a responsibility to speak up on issues that impact us.

Politics is “who gets what and when” and while we may not be interested in politics, boy is it ever interested in us!

I touched on this a bit in an earlier blog post called "Human Action and The Cloud", in which I attempted to apply Ludwig von Mises' treatise on the responsibility of each citizen to understand economics to concepts of cloud computing. Adding another level to this, the ancient Greeks had a word for those not engaged in public policy: "idiotes". Words have evolved in meaning such that the implication here is not so much that non-politicos are "idiots", but rather that the "enlightened" in society (many of us professionals) have a responsibility to be engaged on such issues.

The Cesspool of Internet Comments

Yeah, so if the ancient Greeks could see much of what goes on in public policy debates in social media today, they’d likely want to rethink their whole embrace of the democracy thing. So much of what goes on is surface friction, fed by parties and interest groups that never gets to the core issues. At times I’m inclined to think Twitter is geared for snark and games with its 140 character limit, but it is possible to be responsible with discipline. And by the way I certainly don’t hold myself above the fray here – I’ve certainly thrown a few jabs on occasion.

It’s a challenge to rise above this (and sometimes a bit less fun), but at some point we need to engage in a reasoned choice process – much like we do as engineers – and break down logically why we have chosen the positions that we have. This is the type of elevated discussion we should have more of in our society.

What I’m Doing

Heaven knows why, but from a very early age I was interested in answering the "why?" questions that may have been lurking several layers below the level at which we were interacting. In college I double-majored in Political Science and Business Administration while also double-minoring (if that's a word) in Economics and Pre-Law. That should reveal much about my interests without getting into some even deeper areas of study. I consider myself a classical liberal, and I am thankful to exist in a rare time in human history where we have post-Enlightenment reason, freedom and modern technology.

This will be an experiment and a work in progress, but what I'm going to try to do is this – I'm going to move most, but not all, of my political, sociological and philosophical content to a new Twitter account, @blueshiftpol. I'll still post some news and a few things here, but I'll try to keep the bulk of the more political content on the new account. Hopefully over time I'll find the right balance.

Along with this, at some point I will be starting a new blog where I hope to post on occasion on a variety of topics, many of which go far deeper than just mere "politics". I haven't created this new blog yet or settled on a name, but I'll let you know when I do.

So basically this post was a long-winded way of explaining that I'm moving most of my non-tech content to a different account — allowing this one to be a bit more "tech pure", if you will.

P.S. One more quick point. The phrase “Blue Shift” I adopted has absolutely no political implications. It actually refers to a phenomenon in the light spectrum in which something is moving towards you. “Blue Shift” is simply about what’s coming at us and how we choose to adjust and align to best deal with it.

Top 6 Features of vSphere 6

This changes things.

It sounds cliché to say "this is our best release ever" because, in a sense, the newest release is usually the most evolved. However, as a four-year VMware vExpert, I do think there is something special about this one. This is a much more significant jump than going from 4.x to 5.x, for example. It's not just feature-packed or increasing the maximums, although it accomplishes both. vSphere 6 introduces a few new paradigms which have the potential to create a lot of value, efficiency, and good old-fashioned performance.

In our clickbait, social media driven world, "listicles" seem to be a favorite article style. When I began to look at vSphere 6 with all of its new features, I thought to myself, "where does one start?" Perhaps just this once I'll go full "buzzfeed" and list what I feel are the top 6 new features of vSphere 6. All without any diet tips, Tumblr feeds or embarrassing celebrity photos — I promise.

I think there’s some really game changing stuff here. Let’s dive in.

UPDATE (2/9/15):  This post has been updated to reflect details which have changed from the beta to GA. Changed information will be highlighted.

1) vVols (virtual volumes)

This is arguably the biggest new feature and has the potential to fundamentally transform how storage is approached in vSphere, so it demands that we spend a bit more time exploring this one.

VASA 1.0 (vSphere Storage APIs for Storage Awareness) was introduced with vSphere 5 and, together with the array integration APIs (VAAI) that offload copying and zeroing operations and improve multipathing, enabled storage awareness, which gave vSphere insight into the relative performance of your storage tiers.

While these features were great, there were several limits, including that datastores could not offer granularity to individual virtual machines, but rather all virtual machines would inherit the capabilities of a datastore.  And while we could offload some functions to the array, snapshots were still based on delta files with copy-on-write mechanics.

While this is "OK", what if every VM could have its own storage container and storage policy?

Today we spend a fair amount of time managing LUNs and volumes in vSphere, which in turn determine the storage characteristics. My VM is on "SAN02-VOL03", but what does that mean to me as an application owner?

What if the storage array, through APIs, could become "aware" of vSphere elements? What if each VM was its own container and vSphere administrators no longer had to deal with the management overhead and complexity of LUNs and file systems? Just provision a server and choose "Gold", "Silver" or "Bronze" storage — or have this predetermined by a policy.

This is what vVols along with VASA 2.0 aim to provide. Chuck Hollis has a great post going into more detail on this but for now I’m going to “borrow” one of the slides from his post to illustrate how this facilitates providing the right capabilities to the right consumers.

[Slide from Chuck Hollis's post: delivering the right storage capabilities to the right consumers]

vVols and VASA 2.0 could be a blog post in and of itself, but to keep things simple let’s just focus on a few key characteristics of vVols:

  • VMDKs are native storage objects

That sounds good, but what does this mean exactly? It means that the storage array is "aware" of each VMDK and that the complexity of LUNs and mount points is no more. This layer of complexity is removed from vSphere administration — going forward, administrators only need to focus on VMs and storage policies.

[Diagram: Traditional Storage versus vVols]

 

  • Virtual Volumes

Each virtual volume maps to a specific VMDK.  Because of this exclusivity, SCSI locking is no longer necessary.

  • Storage Containers

vSphere 6 introduces a new logical construct, the Storage Container, which can hold multiple virtual volumes. Storage containers are managed by the storage array and can be used to group together storage that will share common characteristics and/or a common storage policy.

  • Single Protocol Endpoint

All storage is unified behind a single logical construct for I/O called a Protocol Endpoint, with all storage traffic passing through this logical element.

[Diagram: vVol Protocol Endpoint]

 

  • Policy Based Management

Now we can have policies that we apply to VMs to govern capacity, performance and availability. Rather than managing this on the back end with LUNs and volumes we can now simply apply policies that provide the desired capacity/performance/availability configurations on a per-VM basis.  We used to do this with scripts (or CLI) against hosts for specific LUNs — now we can simply define a policy and assign it to storage objects as desired.

  • Storage Array Integration

Storage vendors can integrate with the VASA APIs to offload I/O functions (array acceleration) and granular capabilities. This existed for some functions with VASA 1.0, but now with VASA 2.0 the opportunities to unlock the full capabilities of the storage array are available throughout the vSphere ecosystem.

For just one example of this, think of the way snapshots work today – a separate delta file is created (basically copy-on-write) which must then be reconstituted back into the VMDK when the snap is closed. Many of you are already familiar with the performance impacts of these operations – which are especially common with backup and replication operations. Imagine if all this could be offloaded to the storage array for fast and space-efficient snapshots!

And let’s not stop there as these benefits can be extend to provisioning, replication, deduplication, caching and more.  In my opinion this is HUGE — you may be familiar with the benefits of space efficient snaps and clones, but these were always outside the domain of native vSphere snapshots.  Now all storage vendors have the ability to provide hardware accelerated snapshots (big impact on backups) as well as instantly deploy space efficient clones for test/dev and more.   There’s a lot of implications here for replication and disaster recovery as well. Pictured below are just some of the vendors that have made commitment to supporting vVols.

[Image: storage vendors committed to supporting vVols]

In a nutshell, the complexity of LUNs and volumes is removed from the vSphere administrator, while enabling policy-based management and hardware acceleration from storage arrays for many common functions. Fast and space-efficient snapshots. Space-efficient instant clones for test/dev.

We’ve had storage APIs for a few releases now, but this level of integration between the storage array and the hypervisor is new.  In many ways it’s a game changer.

(I’ll update this post in the future with links to more detailed vVol articles as they become available).

2) Fault Tolerance

VMware Fault Tolerance was always a fantastic solution, but its use was limited by the restriction to a single vCPU, the lack of snapshot support and more. Now these restrictions are being removed, opening up new possibilities.

If you’re not familiar with Fault Tolerance, a second clone of a VM is maintained in CPU-lockstep such that either VM in the pair could become unavailable and a single CPU cycle would not be missed, nor would any TCP connection be dropped. This is critical for transactional applications, e-commerce, VoIP and many more mission critical applications.

[Diagram: Fault Tolerance]

With vSphere 6, Fault Tolerance is now available for VMs with up to four (4) vCPUs and 64GB of RAM, enabling it to be used for larger web servers, VoIP systems, databases and more. You'll want to test your applications first for latency and failover response, but this opens up Fault Tolerance to a whole new set of VMs that couldn't leverage it previously. Some may also want to consider this as an alternative to Microsoft Cluster Server (MSCS) in some scenarios.

vCenter Server is an obvious potential use case here, especially with the retirement of the vCenter Heartbeat product.  Some deployments of vCenter Server will be supported for use with FT based on the size and scale.  Exact details of the requirements for vCenter Server support are still being worked out. (Separate from FT, Microsoft Clustering for vCenter Server will also be supported).

vSphere 6 also adds support for VADP-based snapshots (not user snapshots), enabling backups and replication. Also added are support for paravirtualized devices and storage redundancy for Fault Tolerant VMs, which is critical for many use cases.

3) vMotion Improvements

vMotion has always been an incredible feature in vSphere which helps to provide both flexibility and availability, but now several new features will allow its use to be significantly expanded:

  1. vMotion across virtual switches
  2. vMotion across vCenter Servers
  3. Long Distance vMotion

The last one refers to a dramatic increase in latency tolerance as noted in this tweet from this past VMworld:

Put those three together and you now have the ability to vMotion to different regions. I worked on a large datacenter migration project (petabytes) where we had to populate data mules and ship them to remote datacenters to “seed” the replication process. I can only imagine how much time and money could have been saved if this technology were available then.

Future enhancements will add support for active-passive replication as well as vSphere Replication.

4) Policy Based Management

Update: While Policy Based Management was featured in the beta, it seems that it has been withheld from the 6.0 release and will be introduced in a later update. The Content Libraries feature mentioned below will still be in the initial 6.0 release, as I understand it.

There’s a few components to this including a new Virtual Datacenter Object which is essentially a resource pool which can span multiple vSphere clusters and facilities the assignment of policies to VMs. For example you might want to create virtual datacenters for production and another for test/dev and have these span multiple sites (and clusters). In the initial release this will be limited to a single vCenter server, with plans to support multiple vCenter servers in a future release.

Another new logical construct is tags, which can be applied to any VM. These tags can be used to automate the initial deployment of VMs and ensure that the proper policies are maintained throughout the VM's lifecycle.
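As a thought experiment, here's a hedged sketch of the idea (hypothetical names and data model of my own, not a VMware API) — tags on a VM resolve to the policy that should follow it through its lifecycle:

```python
# Hypothetical model of tag-driven policy assignment.
policies = {
    "prod":     {"storage": "Gold",   "protection": "replicated", "backup": "hourly"},
    "test-dev": {"storage": "Bronze", "protection": "none",       "backup": "daily"},
}

def policy_for(vm_tags):
    """Resolve a VM's policy from its tags; fall back to test-dev."""
    for tag in vm_tags:
        if tag in policies:
            return policies[tag]
    return policies["test-dev"]

print(policy_for({"prod", "sql"}))  # Gold storage, replicated, hourly backups
print(policy_for({"scratch"}))      # falls back to the test-dev policy
```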

Also worth a mention here is the new Content Libraries feature. Very often in VMware environments administrators will carve out datastores and/or folders for VM templates, ISOs, vApps, scripts and more. Now you can have a full content library for your virtual datacenters that can even be published across them.

With the ability to aggregate this content into a library, which can be shared and published to multiple vCenter servers, content can be standardized and made more accessible. You might even want to have different content libraries for different teams, business units and/or applications.

5) Installation and Usability Improvements

This is sort of a collection of multiple features, but I’d like to briefly touch on each as they are significant:

a) vCenter Server Appliance with guided install from ISO image

In vSphere 6 the vCenter Server Appliance has moved closer to feature parity and is now provisioned using a guided process from a self-contained ISO. I went through the guided process and it deploys much more quickly than in prior versions.

b) Infrastructure Controller

With vSphere 6 a new Infrastructure Controller (IC) service is introduced which provides the following functions:

  • Single Sign-On (SSO)
  • Licensing
  • Certificate Authority
  • Certificate Store
  • Service Registration

Depending on your topology and requirements the Infrastructure Controller can be deployed within a vCenter Server or as its own independent server. This not only facilitates scale and more complex topologies but it simplifies both deployment and management.

c) vSphere Web Client Improvements

vSphere 6 still ships with a traditional thick client (C++), but newer functionality specific to 5.5 and later will require the Web Client, which has been substantially improved in this release. The login time has been reduced to about 3 seconds, while other common functions have been sped up by several full seconds (such as from 4 seconds to 1 second for invoking the Data Center Action menu).

[Screenshot: the improved vSphere Web Client]

The task pane returns to the bottom in the improved vSphere Web Client (click to expand)

 

Not only is the Web Client significantly more responsive in this release but navigation has been significantly improved by providing more right-click menus and adding the tasks pane back to the bottom of the screen.

The combination of the performance and usability improvements makes it easier to be more productive in vSphere as well as making the experience more enjoyable.

6) vCloud Air Integration

You already have a vSphere infrastructure, but what if you could quickly add capacity for unplanned demand using vCloud Air and make a hybrid cloud?

What if you could quickly set up full disaster recovery capability for your most important virtual machines using vCloud Air with 15 minute recovery points?

vSphere 6 has built in integration with the vCloud Air service allowing you to quickly tap into these hybrid cloud capabilities.  On the backup and DR front, vSphere 6 features RPOs as low as 15 minutes, allowing you to effectively use the vCloud Air service as a hot site for your production workloads, with support for both failover and failback operations.

Recently I wrote a review of VMware's vCloud Air OnDemand service, and I was honored that VMware elected to share it. Rather than talk about it here, I'll just link to my post on vCloud Air for more information on that service.

Note: an earlier version had stated that RPOs would be 5 minutes. This was based on information communicated during the beta. In the initial (GA) release the RPO will be 15 minutes.

Honorable Mention

Of course there’s many more features than just these six, so here’s a few I want to just briefly mention:

  • Storage I/O granularity improved to a per-VM basis (was per LUN).
  • Network I/O control allows bandwidth reservations for the VMs that need it.
  • 64 node clusters hosting up to 8,000 VMs
  • VMs up to 128 vCPUs and 4TB of RAM
  • Hosts with up to 12TB RAM, 64TB datastores and up to 1,000 VMs
  • NFS 4.1 client enables multipathing, improved security, improved locking and less overhead for NFS storage.
  • vCenter Server resiliency — vCenter Server will now attempt to “self-heal” at several different levels in order to improve availability.
  • vSphere Replication now supports RPOs of as little as 15 minutes.

There’s a lot here, and the combination of vVols with VM level policy management and tagging will be huge.  Performance benefits aside, administrators can now organize and combat the configuration drift of VM sprawl by designing policies that will automatically place VMs on the desired class of storage, with the appropriate performance and availability policies.

Many of these features are worthy of their own blog post, but I hope this quick list introduced some of the reasons why I think vSphere 6 is one of the more significant releases in VMware’s history.

The Importance of Perspective

Something struck me this month. In the contrast between darkness and light, revelations were made. But to understand this we must first begin with the darkness.

In 2014 I was in a very dark place. We were fighting a losing battle with the bank and the court system, which forced us into "survival mode" — which meant, among other things, working as many hours as possible. Normally when the doors are closed in one area of your life you can focus on another.

Career can be such an outlet but I found myself working on 10 year old technologies with no other opportunities or even certifications to pursue. While some certifications were attainable, the time for studying was sparse when working 80-90 hours most weeks.

Well there’s always family and your home which is incredibly important. Well about that…

Our home technically isn’t even a house and technically has no bedrooms. It’s a 600 sq. foot cottage without a working furnace, no kitchen (but a tiny sink), and a bathroom smaller than most handicapped stalls. There is no table for eating, games or homework. Our 12×16 shed collapsed and we are now storing these contents in our house (no basement). Now add 3 children ranging from 13 to 3.

I’m sure this all sounds superficial but I can assure you that the impact on us has been profound. You deal with it at first by telling yourself that it’s just temporary. But it turned out to be a prison. You can’t eat the way you want to. You can’t do activities with your kids the way you want to. You can’t sleep when (or how) you want to. Our health has been adversely impacted by toxic mold exposure. We’ve given up on many healthy habits we used to have because it’s just not possible. Not only has the house itself become a dysfunctional time sink, but it has affected all of us psychologically.

We made enormous sacrifices for years, paying over $3,400 EACH month for the mortgage to live in such an environment. Rather than spending time with our kids this summer, we spent it working on our legal case. But it was all for nothing, as the court did not want to hear our story. It was done. After all the hard work and sacrifices, including paying two-thirds of what the home is worth, we would lose ALL of it.

Career, home life, family life. All were dominated by stress, and blocked paths. Time was passing, kids were growing older and goals were fading.

Without getting into the details we could never walk away from the house, because with our predatory loan we could still be liable for huge sums of money – several times more than the house is even worth.

We were trapped. But then the clouds began to part.

Now we are looking at the possibility of losing everything and being forced to move (likely out of state) in a few months. Wait, that’s a good thing? YES IT IS. Yes, we will still lose everything. We’ve lost all that time and everything that we paid to the bank with nothing to show for it. The difference is that we have hope. We now have a reason to believe that hard work and effort can begin to positively impact us.

This is the power of hope. Without hope there is nothing but despondence — just trying to make it through the day while another week, month and year passes by. But with the hope that you can work for positive change, your outlook completely changes. Even though we have been wronged and lost badly, I feel as if an anchor has been removed from around my neck. We might have nothing, but now I can DO SOMETHING. In this case I can try to start a new life somewhere else. I can now have hope that my family can experience what living in a middle-class American house feels like. I can now have hope that I can reclaim time and healthy habits and pursue both personal and career goals.

I have a special needs daughter who endured 10 surgeries her first year and more extreme surgery in the years to come (her story here). I found myself asking “why were we able to endure all of this with a positive attitude but be so negative and despondent this time?” We made incredible sacrifices in all areas of our life, career, financial and time to do everything we could for her. It was never even a question but we did it enthusiastically and with passion. Why would one set of challenges which absolutely immersed us be so different than another?

We ultimately didn’t have control in either situation. In one case we were doing everything for someone, where in the other one we couldn’t help ourselves. As time went on, what we dismissed as temporary sacrifices began to control and impact us and change our lives for the worse.

I’m not happy that my family and I have lost several years of what “could have” or “should have” been. But now I have hope that with some hard work, hopefully we can salvage what time we do have left. It really does feel like stepping outside of a prison for 5 years. There might be a hard road in front of us but right now I’m enjoying the feeling of the sun on my face.

Perspective has a profound impact on how we approach life. Are we letting our perceptions or fears impact and control us? What’s holding us back? Can we change our attitudes, our motivations, and even our environments to improve ourselves and attack both our problems and challenges? Are we really doing our best or are we letting negative thoughts in the workplace or in the home influence us?  Are we making moves to be leaders or are we the cynic shaking their head in the corner? Are we finding fulfillment by reaching out to others to help and nurture them?

I do not yet know why all this happened or what purpose or lesson it may serve. I’m a perfectionist and I’ve always felt I would be a failure if I didn’t start a successful business or do something significant with my time in this world. But just having hope and purpose is the best feeling and motivation I could have right now.

Here’s to a great 2015 for everyone and while I perhaps I should be scared and nervous , I’m too excited about the freedom and hope to affect positive change. I found my motivation – find yours. Because time doesn’t stop.

Best of 2014

These lists typically annoy me but I was curious what some of my most popular content was and since I have it right here….

The most popular blog post by far was “Exploring VMware’s New OnDemand Private Cloud” which was shared over 650 times on social media.

Below are some of the most popular tweets of the year, several which came at the Vice President’s expense:

American Mortgage Horror Story

I took some time to put together some charts and details of what we just experienced with our mortgage. Why?

Thinking back I think I did it both as a creative-stress outlet as well as wanting to promote understanding. When you endure hardship caused by the intentional actions of an entity, you want people to understand what you experienced as a way to raise awareness and hopefully change public policy to prevent such hardships in the future. An “act of God” is one thing, but intentional predatory abuse is another.

I think there’s also some public policy issues here but I’ll expand on that at the end. Also before I begin I want to make it clear that this is neither sympathy nor charity seeking. Should you feel compelled to give something, there are plenty more in this world that are far more in need of assistance. Please consider them.

LOAN ORIGINATION

Our loan was originated at an interest rate of 10.8%; on top of that, we paid an additional 3 points at closing, totaling over $12,600 of interest charges pre-paid at closing.

Why enter into a loan with such terms? The short answer is we had no choice, but the longer answer of events that led up to this is here.

Per the Truth in Lending statement our effective interest rate would be 11.293% and we would pay another $730,000 in finance charges over 30 years:

[Image: Truth in Lending disclosure statement]

That $730,000 — interest only — is almost 5 times more than what the property was just appraised for.

For comparison, over the past 2 years the average rate on a 30-year ARM has been under 4.5%:

[Chart: average 30-year ARM rates over the past two years]

I wondered what these numbers would look like at normal interest rates, so I filled in the blanks using Zillow’s mortgage calculator:

[Table: monthly payments at different interest rates (Zillow mortgage calculator)]

Our initial mortgage payment was $3,291. A rate of 6% would mean a difference of $1,000 per month, and over $1,300 per month using current rates (3.74%).
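If you'd like to check this kind of math yourself, the standard amortization formula is easy to run in Python. The principal below is my own assumption (a ballpark roughly consistent with the original payment), so expect the output to land near — not exactly on — the figures in these charts:

```python
# Standard fixed-rate amortization: M = P * r(1+r)^n / ((1+r)^n - 1)
def monthly_payment(principal, annual_rate_pct, years=30):
    r = annual_rate_pct / 12 / 100      # monthly interest rate
    n = years * 12                      # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal = 351_000  # assumed; roughly consistent with a $3,291 payment at 10.8%
for rate in (10.8, 6.0, 3.74):
    p = monthly_payment(principal, rate)
    interest = p * 360 - principal
    print(f"{rate:5.2f}%: ${p:,.0f}/month, ${interest:,.0f} interest over 30 years")
```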

Below is the same graph, but showing the total interest to be paid over full term (not including pre-paid interest).

[Chart: total interest paid over the full loan term at each rate]

Subtracting from the Truth in Lending statement we were provided ($730,000), the current rate of 3.74% would result in savings of over half a million dollars ($537,296) over the term of the loan.

When the loan was originated, the payment was over 75% of net income. That's right. For years we didn't eat out or spend a single dollar on ourselves or our children, on the premise that by sacrificing to show two years of timely payments we could later get a refinance (which, as we will see later, the bank prevented).

PROPERTY COMPARISON

Just for contrast I thought it would be interesting to see what our monthly payment equated to in terms of other properties.

After modifications the mortgage payment would later increase to OVER $3,500, but for comparison purposes I used $3,400:

[Photo: the 600 sq. ft. cottage]

Technically the above property has zero bedrooms and 600 square feet. It was built as a summer cottage 90 years ago, and all of the cross beams are off by several inches.

This home below is in the same zip code. Assuming no down payment, the Zillow listing estimates the mortgage to be $3,460:

[Zillow listing: nearby home with an estimated $3,460 monthly mortgage]

And for fun here’s a RENTAL listing near Research Technology Park:

[Rental listing near Research Technology Park]

To add a bit more color to this, there isn't enough space in our house for a table. My family eats meals kneeling before a small coffee table. There's a sink but no actual kitchen, the furnace is broken, the home is contaminated with mold, and it sits in a flood zone.

We don’t want a mansion, but we would like to know what it’s like to sit down at a table for dinner. We would like to know what it’s like to have a kitchen and to be able to cook. We would like to know what it’s like to step into a room and be able to take more than a half step. We would like to have a functional space that isn’t a non-stop drain on both time and money.

Sorry, a little tangent there, but clearly something is off-kilter with this mortgage. Now let's look at appraisals and payments made.

APPRAISALS

We had a formal appraisal done this summer which put the home value at $150K. The bank did an appraisal in the same time frame for $240K, but the bank's appraisal used comparable homes from a different county, none of which were in a flood zone. Needless to say, we feel the $150K appraisal is more accurate, but even if you split the difference at $200K, things are not quite right:

[Chart: appraisals versus total payments made]

In the first three years of the mortgage our total cash outflows (which includes escrow and taxes) was $98,000. In other words, if you accept our appraisal as an accurate valuation, we paid in 3 years almost exactly two-thirds of the value of the property.

Based on such an aggressive repayment schedule we should have some equity in the property then, right?

Not only do we have no equity, but what we currently owe the bank is $540,000. That's right — over half a million dollars. When the foreclosure process is complete, the bank will have the option to come after us for the difference between what the bank sells the home for and $540,000 – and so far they have indicated that they may do just that.

In other words, even though we paid the bank two-thirds of the value of the property, we lose all that AND the bank may likely pursue us for an additional quarter million or more.

Yes, you read that right, but do feel free to take a deep breath and go back and read that again.

So far we have incurred tens of thousands in legal fees in an attempt to GIVE the house back to the bank and escape this home and this zip code which has been suffocating my family.

MODIFICATIONS

Surely there must be a remedy for this right? I detailed in a prior post all the modifications that were available to us and how they turned out, but I’ll mention the HAMP modification (Home Affordable Modification Program) here which is a part of the TARP program. Over $45 Billion of taxpayer money went to finance HAMP modifications for homeowners that were “underwater” after property prices collapsed. Here is a Treasury Department article that claims HAMP helps underwater homeowners. I have no doubt it has helped many but it didn’t help us.

The HAMP program has guidelines that banks must follow, and the primary guideline is what is called a front-end ratio. The front-end ratio is the ratio between your monthly payment and your gross monthly income. The goal of the program is to reduce this front-end ratio to between 31 and 38%.

During our application we detailed the fact that our home was underwater, the predatory lending, and our unusual cost structure due to having a special needs child. They asked for doctor statements and more about our hardship. We provided over 30 pages of detail and there is no evidence this information was looked at, nor were any of the doctors contacted.

The HAMP modification offered to us was consistent with the front-end rule and put us at exactly 38%. But it left our home underwater. How much would our payments be reduced by?
Our monthly payments were reduced by exactly $36. But that’s not all.

The HAMP program also ADDED an additional $37,720 in additional debt to be financed. The HAMP program took an already underwater mortgage and pushed it DEEPER underwater with more debt!

At a savings of $36 a month, it would take 87 years JUST to recover the $37K in new debt, let alone the entire mortgage. 87 years to pay back a HAMP modification on a 30 year mortgage that is already underwater.
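The arithmetic is worth laying out explicitly (the income figure is a hypothetical of mine — the post only establishes that the modification landed at exactly 38%):

```python
# Sanity-checking the HAMP numbers above.
modified_payment     = 3534   # assumed; the post says payments later exceeded $3,500
gross_monthly_income = 9300   # hypothetical income implying a 38% front-end ratio

print(f"front-end ratio: {modified_payment / gross_monthly_income:.0%}")  # 38%, inside 31-38%

monthly_savings = 36          # payment reduction the modification actually delivered
added_debt      = 37_720      # new debt HAMP rolled into the loan
print(f"{added_debt / monthly_savings / 12:.0f} years just to recoup the added debt")  # ~87
```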

But the bank followed the HAMP guidelines – they satisfied the front-end ratio. And herein lies the issue – there is plenty of room for banks to manipulate the system to do whatever they damn well please to homeowners and their families.

There is no consideration for the property being underwater in a $45 billion taxpayer program that was purportedly intended to help homeowners who were underwater.

Also, there is no consideration for a family with a special needs child (which is the only reason — medical — that we are in this house) that may have a higher cost structure than other families. Nor is there consideration that I had to work 90-hour weeks across multiple jobs to meet these payments. Then they look at your income and declare, "Oh, you can afford even more abuse from us then!"

PREDATORY

“This loan is clearly predatory. Why didn’t you sue for predatory lending?”

Well two reasons. One is that the legal costs of going up against a major too-big-to-fail bank would have been between $30K and $50K. Since all our money already went to the bank, there was nothing left for this.

The other reason is that, on average, only about half of predatory lending suits are successful.

We did hire a law firm and incurred thousands of dollars in legal fees, but in the end it didn’t matter – banks do this to thousands of home owners and they know how to manipulate every step in the process.

At one point we did get a hearing with a court appointed mediator. We came with our legal representation and a stack of paper to document our situation. We might as well not have been there. We were given no opportunity to present our case or condition. We were simply told that you must accept the bank’s offer or be foreclosed on.

We did accept the bank’s offer. We had to either make payments online or by phone. Once we entered our account info the website told us we had to call. So we called. And we called. And we called some more. When you call you have to enter an account number before you are added to a queue. Let’s just say after dozen of attempts we never did get through and we were late. We went to our attorney and naturally the bank’s attorney had no problem getting an operator at all.

[The bank manipulated many circumstances along the way, ranging from retracting offer letters and playing dumb to sending time-sensitive documents in a manner that made professional review impossible. Some of those details are in this post.]

In addition to this we experienced a perfect storm of untimely liabilities. Because we were two weeks late on one of our payments, the bank indicated that they were no longer required to comply with the HAMP modification and moved to foreclose. The bank still has the option to sue us for the difference ($540,000 minus the bank sale price).

What was initially predatory lending became a self-fulfilling prophecy.  How can anyone satisfy a 30 year loan when no escape hatch is provided from loan shark rates? How can you owe a bank 3 times more than a property is valued after you’ve already paid two-thirds of that value?

PERSONAL INJURY

It's hard to overstate the injury here. The sacrifices we have made, both financially and in time, have been profound. I've lost years of time with my children, spent either working additional jobs or working on our case itself, that I'll never get back. The same goes for personal and career development, as there's little opportunity in this zip code. Raising three kids and trying to keep up with a modern lifestyle in these housing conditions is nearly impossible. The stress and exposure to mold have taken a physical toll on all of us. Over a period of weeks, months and years it changes you, and not in a good way. My kids were all developmentally delayed because there was no space to move, crawl or play.

All we have wanted to do for years is escape this property — which is literally making us sick — and we paid large amounts of money in legal fees to do just that, but the bank's actions continue to trap us in this zip code and in this property year after year.

But none of this matters to the bank. I do not know how many others go through a systematic predatory-to-foreclosure scenario with no escape option, but I know there are many who are currently going through foreclosure forced by the bank.

This is wrong. We say our banks are too big to fail, bail them out with taxpayer money, and then we STILL allow them to do this to families, with no cost-effective recourse in the court system. The banks and the government post-TARP are intertwined, different heads of the same beast. In other countries there is a “loser pays” rule for court costs, but in the US no such rule exists for predatory lending cases.

There are always gaps in any system, and perhaps our case is unique, but that's no excuse for banks to have the liberty to effectively destroy lives and families through their manipulation of the post-TARP system in America.

Exploring VMware’s New OnDemand Private Cloud (Part 1)

UPDATE:  vCloud Air OnDemand is out of beta and has now entered an Early Access Program, for which you can sign up here.

Recently I've had the opportunity to explore a beta of VMware's upcoming cloud offering, vCloud Air OnDemand, through their Ambassador program. I want to share my observations and experiences, but there's so much to talk about that I found it better to start with an introductory post and walk through the details in a future post.

The quick version is that vCloud Air's Virtual Private Cloud OnDemand is pretty much what it sounds like: hosted IaaS (Infrastructure as a Service) running on VMware, enhanced with SDN, with on-demand availability and pricing, meaning that you are billed only for the resources (CPU, memory, disk, etc.) actually consumed. It's like the electricity meter on your home, but measuring the resource utilization of your virtual datacenter in the “cloud”.
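To make the metering model concrete, here's a minimal sketch of how utility-style billing accrues charges hour by hour. This is my own illustration; the rates and resource names are invented for the example, not vCloud Air's actual price list.

```python
# Minimal sketch of utility-style metering. The rates and resource
# names below are invented for illustration, not actual vCloud Air pricing.

HOURLY_RATES = {
    "cpu_ghz": 0.013,          # $ per GHz-hour (hypothetical)
    "memory_gb": 0.005,        # $ per GB-hour (hypothetical)
    "ssd_storage_gb": 0.0003,  # $ per GB-hour (hypothetical)
    "public_ip": 0.01,         # $ per IP-hour (hypothetical)
}

def hourly_charge(usage):
    """Sum the charges for one hour of measured consumption."""
    return sum(HOURLY_RATES[resource] * amount
               for resource, amount in usage.items())

# One hour in which a small VM is powered on:
usage = {"cpu_ghz": 2.6, "memory_gb": 4, "ssd_storage_gb": 40, "public_ip": 1}
print(f"Charge for this hour: ${hourly_charge(usage):.4f}")
```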

Amazon (AWS), Azure and Google are on most everyone’s short list for IaaS service providers but there may be some good reasons to put VMware on your short list as well.

The vCloud Air service is compelling for several reasons. To start, it runs VMware vSphere, which provides easy and familiar methods for integrating with existing on-prem infrastructure. Perhaps you have a new project and can't wait to add more hardware and capacity, but still need to maintain your operational methods and security. For many such use cases vCloud Air Virtual Private Cloud will be seen as compelling, especially where vSphere is already used. And with over 99% of the Fortune 1000 using VMware, that's…well…most of us.

Before we explore Virtual Private Cloud OnDemand in more detail, I'd first like to step back and review the different cloud types and their use cases.

Private, Public, Hybrid

The original key distinctions between private and public cloud were mostly control and multi-tenancy. With a private cloud the hosted infrastructure was exclusively yours and therefore afforded more control, whereas in a public cloud your workloads might share hardware with those of others (multi-tenancy), which could lead to the “noisy neighbor” problem.

Advances in hypervisors, I/O virtualization, SDN and orchestration have made this less of a distinction nowadays, as more control is available to the consumer and the “noisy neighbor” is not the threat it once was.

A hybrid cloud, then, is essentially a combination of an “in-house” private cloud and infrastructure from an external service provider. A perfect example is a business that runs VMware vSphere internally in its datacenter. Let's say a new project comes along; rather than buy new infrastructure (and incur the associated delays), the business could logically extend and scale its existing vSphere infrastructure to a hosted offering, and be billed only for what is consumed.

Is vCloud Air Hybrid or Private?

In 2013, VMware launched the vCloud Hybrid Service (vCHS), which was positioned as the hosted cloud infrastructure needed to evolve an on-premises environment into a hybrid cloud.  The vCloud Connector facilitated building a unified view of the hybrid cloud, allowing workloads to be viewed, managed and migrated from either the on-premises side or the hosted side.

Just this past September the service was re-branded as vCloud Air, with the service offering now called Virtual Private Cloud (a dedicated option is available). What changed that it's now called a private and not a hybrid cloud? Yes, there's a bit of marketing here, but also a pretty important point.  Private cloud is all about control.  Do you control the security, the operations, the processes?

When you start with the vCloud Air service you create a virtual datacenter.  There is no external access until you establish firewall rules, public IPs, SNAT/DNAT rules, routing and more.  There are also VPN and load-balancing services built in.
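Conceptually, the edge starts closed. Here's a default-deny sketch, purely my own illustration in Python (not VMware's edge gateway implementation or API), of what establishing those firewall rules accomplishes:

```python
# Conceptual default-deny edge: nothing reaches the new virtual
# datacenter until a matching rule exists. Illustrative only; this is
# not VMware's edge gateway implementation or API.

from dataclasses import dataclass

@dataclass
class FirewallRule:
    source: str      # "any" or a specific source address
    dest_port: int
    protocol: str    # "tcp" or "udp"

def is_allowed(rules, source, dest_port, protocol):
    """Default deny: permit traffic only when an explicit rule matches."""
    return any(r.source in ("any", source)
               and r.dest_port == dest_port
               and r.protocol == protocol
               for r in rules)

rules = []  # a freshly created virtual datacenter: no rules, no access
print(is_allowed(rules, "203.0.113.7", 443, "tcp"))  # False

rules.append(FirewallRule("any", 443, "tcp"))         # publish HTTPS
print(is_allowed(rules, "203.0.113.7", 443, "tcp"))  # True
```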

If that sounds like a lot, it's not; it's quite straightforward, as you'll see in the next post. But the point is that you have such a strong level of control that it really can be considered a private cloud.  It's like the difference between ordering a sandwich someone else designed and building your own.   As an engineer who has encountered the friction and delays that silos bring, I found it liberating to be able to quickly design the virtual datacenter — network, storage, compute — to my requirements.  And of course, if you integrate the Virtual Private Cloud with an on-premises environment, you still have a hybrid cloud spanning those two environments.

Introducing vCloud Air Virtual Private Cloud OnDemand

The “original” vCloud Air service that went live last year is Virtual Private Cloud.  It is powered by vCloud Director, giving VMware users a construct and interface familiar from their on-premises environment.  With this service, capacity is purchased in “blocks”.   For example, a starting block might consist of 20GB of memory, 10GHz of CPU and 2TB of storage (pricing as of November 16, 2014 shown below).

The new OnDemand service has many similarities with the original service.  Both run vSphere and vCloud Director.  Both employ SDN using VMware's own offerings. Both integrate into vCenter Server using the vCloud Air plug-in.  Both allow stretched Layer 2 and Layer 3 so that you can “bring your own IPs”, and both feature Direct Connect options (private circuit).

My understanding is that the OnDemand service is a new “pod” within the vCloud Air service, meaning that it is a new and separate rack design and configuration.  The new OnDemand service, as its name would suggest, uses an on-demand pricing model.  Rather than purchasing “blocks” of capacity, you are billed for what you consume as you consume it.  I haven't done much in the past 24 hours, but below you can see a screenshot of my billing for that period, broken down by CPU, memory, storage (SSD and standard tiers), and public IPs.

[Screenshot: billing for the past 24 hours, broken down by CPU, memory, storage tiers and public IPs (click to expand)]

Each account has a single billing point, but as we'll see in a future post, it is possible to create multiple virtual datacenters (VDCs) within your account, both to track internal costs and to control access.

Use Cases for Virtual Private Cloud OnDemand

There are many use cases that are a very good fit for the OnDemand service.  If you're a new company without much capital, you might want to use a virtual datacenter as your primary datacenter.

If you’re a medium or large business with an established on-prem vSphere infrastructure, you might elect to keep your most critical applications and data on premises, but still leverage the OnDemand service for seasonal capacity, test/dev and new projects.

I was working at a Fortune 500 once when a new project came up that required a large number of web servers, databases and middleware.  How nice it would have been — and how much more quickly we would have been able to execute — if we could have simply defined our vApps in Virtual Private Cloud OnDemand and then cloned and distributed them as needed in the vCloud Air service.  You might even choose to keep the databases on-premises but put the web tier out in the cloud.  You have the flexibility to align your workloads between on-premises and vCloud Air with whatever balance and topology works best for your organization and your security and operational requirements.

Disaster Recovery

As you might imagine, disaster recovery for on-premises vSphere deployments is a very popular use case, and it is quite straightforward to set up.  Today, the Disaster Recovery option is offered as a discounted tier on the original Virtual Private Cloud service, but it is my understanding that it will move to the OnDemand service in the future.  That would be a very effective pricing model: when using the capacity for hot-site replication, most of what you consume in the passive state is storage.  CPU and memory would stay at relatively low levels until a failover occurred, at which point they would increase as all the instances come online.  OnDemand capacity when you need it.
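To see why that pricing model fits DR, compare a passive month with an active one using the same hypothetical rates as the metering sketch above (again my own figures, not vCloud Air's):

```python
# Why on-demand pricing suits DR: in the passive state you pay mostly
# for storage; CPU/memory charges appear only after a failover.
# Rates are the same hypothetical figures used earlier, not real pricing.

HOURS_PER_MONTH = 730

def monthly_cost(cpu_ghz, memory_gb, storage_gb,
                 rate_cpu=0.013, rate_mem=0.005, rate_storage=0.0003):
    return HOURS_PER_MONTH * (cpu_ghz * rate_cpu
                              + memory_gb * rate_mem
                              + storage_gb * rate_storage)

passive = monthly_cost(cpu_ghz=0, memory_gb=0, storage_gb=2000)    # replicas only
active  = monthly_cost(cpu_ghz=20, memory_gb=64, storage_gb=2000)  # after failover
print(f"Passive: ${passive:.2f}/month, active: ${active:.2f}/month")
```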

Sign Up and Getting Started

I'll go through a detailed walkthrough later, but the effort required to start creating VMs and consuming resources is relatively low.  I simply registered for the service, supplied a credit card, and once I was confirmed I was off creating my virtual datacenter and spinning up virtual machines and vApps.  This was my first time using vCloud Air, but it was not my first time using VMware, so it didn't take long to find my way around and be productive within the vCloud Director interface.   Within a few hours of signup, most should be able to define their networks and start provisioning VMs.

VMs, vApps and Catalogs

Within vCloud Air there is a public catalog from which you can instantly provision new VMs.  At this time, the public catalog includes multiple editions of CentOS, Ubuntu and Windows Server.  The Windows Server VMs incur a licensing surcharge that is prorated to an hourly rate; in other words, you are effectively renting the Windows Server license by the hour.

There are two other important ways to populate your own private catalog within vCloud Air.  First, you can import any OVA into your private catalog as either a URL link or a local file, which includes the over 1,700 virtual appliances available on the VMware Marketplace.  The second way is to simply upload your own ISO to your catalog.  Just to prove the point that it could be easily done, I uploaded a Windows ISO to my private catalog in vCloud Air and was able to build a VM from scratch right from the ISO.  Using the vCloud Connector you can even keep your catalogs in sync between your on-premises vSphere environment and vCloud Air.

vApps are a vCloud Director construct that solves several problems.  You can add multiple VMs to a vApp and define rules for how they should work together.  A vApp can be an n-tier application or just a set of servers that need to be managed by a common team.  You can define leases on vApps as a cost control measure (e.g., power off after x hours, delete storage when off for x days) and even fencing, which ensures that VM clones existing in multiple vApps have unique MAC and IP addresses.  More on this later, but there's a lot of rich capability here for designing and managing your virtual datacenter.
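As a rough mental model of leases (mine, not vCloud Director's actual object model or field names), a lease policy boils down to two timers:

```python
# Rough mental model of vApp leases as a cost control: a runtime timer
# and a storage timer. Illustrative only; these are not vCloud
# Director's actual objects or field names.

from datetime import datetime, timedelta

class VAppLease:
    def __init__(self, runtime_hours, storage_days):
        self.runtime_hours = runtime_hours  # power off after this long
        self.storage_days = storage_days    # delete storage this long after power-off

    def action(self, powered_on_at, powered_off_at=None, now=None):
        now = now or datetime.now()
        if powered_off_at is None:
            if now - powered_on_at > timedelta(hours=self.runtime_hours):
                return "power off vApp"
        elif now - powered_off_at > timedelta(days=self.storage_days):
            return "delete vApp storage"
        return "no action"

lease = VAppLease(runtime_hours=8, storage_days=30)
print(lease.action(powered_on_at=datetime(2014, 11, 16, 9, 0),
                   now=datetime(2014, 11, 16, 19, 0)))  # -> power off vApp
```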

Unified View

The vCloud Air plugin built into current versions of the vSphere Web Client lets you administer vCloud Air right from within the client. The video below provides a walkthrough of the functionality available in the plugin.

Monitoring

Having administered many vSphere environments, I've been somewhat spoiled by the ability to quickly extract rich metrics on VMs and hosts using vCenter, and even more with vCOPS.  In the vCloud Air environment you can see CPU, memory and storage utilization for your virtual datacenters and vApps, but that's about it.  The hosts really don't need to be in the picture (that's sort of the point of a cloud service), but it would be nice to have some key VM metrics (what's my storage latency or memory allocation over time?).

Two things here.  One is that there's nothing stopping you from using the monitoring solution of your choice. Want to use Microsoft System Center, CA UIM, Nagios, etc.?  Use whatever processes you use in-house today.  The second is that VMware has a robust monitoring solution in vCOPS, and I would not be at all surprised if VMware released a version that works within vCloud Air in the future.
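As a sketch of that first option, the loop below polls a metrics endpoint and alerts on sustained CPU. The URL and JSON fields are hypothetical placeholders of my own, not a documented vCloud Air API; the point is simply that whatever agent-based tooling you run today keeps working.

```python
# Sketch of rolling your own metric collection for hosted VMs.
# The endpoint URL and response fields are HYPOTHETICAL placeholders,
# not a documented vCloud Air API.

import json
import time
import urllib.request

METRICS_URL = "https://monitoring.example.com/vdc/metrics"  # placeholder
CPU_ALERT_THRESHOLD = 90.0  # percent

def poll_once():
    with urllib.request.urlopen(METRICS_URL) as resp:
        metrics = json.load(resp)  # e.g. {"cpu_pct": 42.0, ...}
    if metrics.get("cpu_pct", 0) > CPU_ALERT_THRESHOLD:
        print("ALERT: sustained high CPU in the virtual datacenter")

while True:
    poll_once()
    time.sleep(300)  # five-minute collection interval
```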

UPDATE:  The vCloud Air adapter for vCOPS was released in July 2014.  Below are some screenshots of vCOPS monitoring vCloud Air, with more at the link.

[Screenshots: vCOPS monitoring vCloud Air (click to expand)]

Summary

There's much more in terms of features and even connection options that I haven't drilled into, and which I'll try to explore in future posts. But to step back a bit: many IT consultancies have suggested that hybrid cloud is the new normal — the business having the ability to consume on-prem and hosted capacity as needs arise, with use-case flexibility and functional integration (i.e. the vCloud Air plug-in in vSphere). Some cloud providers will require you to make adjustments to operational procedures and security, but vCloud Air does a good job of making this feel seamless for VMware shops. Also keep in mind the appeal of multi-cloud (using more than one cloud service provider), which can mitigate risk, provide flexibility and expand DR options.  And if you don't already have a DR solution, you may want to take a look at vCloud Air's Disaster Recovery Service.

Most companies will want to explore options for both hybrid cloud and multi-cloud scenarios for many compelling reasons.  As a long-time VMware vSphere engineer, I found the vCloud Air service very accessible and easy to get started with.   If you have a significant VMware vSphere deployment in your organization, or even if you are just starting out, you owe it to yourself to include vCloud Air in your short list of options.  With the new OnDemand service and its utility pricing model being prepared for launch, and more datacenters being added globally, vCloud Air is worth a close look.

Mortgages in post-TARP America (and some venting)

This is a non-tech post, but I wanted to share some quick facts about our mortgage horror story (and also to vent a bit so that I can move on). If you've ever wondered how it could be possible to pay two-thirds of the value of a tiny summer cottage built 90 years ago and still owe the bank over half a million dollars, read on.

There's A LOT of information here, but I want to share just a small sliver to show how ridiculous this is. I've heard people say things along the lines of “people who are in bad situations did dumb things to get there”. Let me know if any of you still feel that way in a few paragraphs. Before I go into the mortgage history, there's some necessary background to explain how this situation developed in the first place.

Our daughter had an extremely unusual medical condition and we were advised to relocate to be near a short list of hospitals that had the staff to appropriately care for her. We found a small 1BR cottage (600 sq. ft.) available for rent which was close to a support system (which turned out to be essential and may have saved her life).

At one point my wife needed an emergency appendectomy but was errantly sent home by the hospital. Different doctors called the next day and said, “I read your scans, please get here ASAP”. Long story short, the doctors believed she was within minutes of having large amounts of toxins released. The doctor said “I'll understand if you want to sue” and we explained that we were grateful for their emergency work and that we weren't like that. We tried to pay the doctors first and then the hospital (a several-night stay) with what disposable income we had. The week that the statute of limitations expired, the hospital sued us for a VERY large amount and moved to have my wages garnished.

We got legal help and filed for bankruptcy. When we were before the judge, we presented less than $4,000 of credit card debt. The judge was incredulous that we had no more consumer debt to claim and asked several times for more. We didn't want to claim our vehicles, so we claimed only the $4K in credit cards and the hospital bill.

Not even one month after being discharged from bankruptcy, our landlord came to us and said, “I need to sell this property. Either you buy it or I will sell it to someone else who will evict you.” Not an ideal situation, but we had to put our daughter first – moving at that time was not an option. We reached out to our bankruptcy attorney, who advised that if we could afford the insane mortgage payments for the first year and make every payment, we could later refinance. So we entered into a mortgage at 10.8%, plus an additional 3% in pre-paid interest, on the premise that we could refinance. Per the Truth in Lending statement, we would have paid over $700,000 in INTEREST ALONE over the 30-year term of the mortgage.

With that background, the table below summarizes the initial mortgage and every modification that was available along the way:

| Modification | Accepted | Increase in Debt Owed | Interest Rate | Percentage Decrease in Monthly Payment | Break-Even Point |
|---|---|---|---|---|---|
| Origination | NA | NA | 10.8% | NA | NA |
| Mod 1 | Paid $200; denied by bank | $6,000 | 6.75% | 35.4% | 6 months |
| HOPE | Yes | $12,985 | 9.125% | 5.3% | 6 years |
| Mod 2 | No | $20,436 | 8%, increasing annually | 28.9% | 2 years |
| HAMP | No | $37,720 | 7.25% | 1.2% | 87 years |

In the first 18 months of the mortgage we paid the bank exactly 66% of what the house was appraised for just this summer. Go back and read that sentence again and let it sink in. After countless calls to the bank we finally got pre-approved for a modification (Mod 1) that would have been sustainable: it would have reduced our monthly payments by over one-third. We paid $200 to lock in this deal, but the bank denied it. Why? Because there wasn't enough equity in the mortgage, and therefore they couldn't add more debt to it. Remember that when we look at the next mods.

The HOPE mod was a pre-TARP federal program. It added $13,000 in debt to be financed. The savings from the reduced mortgage payments would have taken 6 years just to break even on this “deal”, but we accepted because we were desperate enough that a 5% reduction in monthly payments would help us keep food on the table.

We kept going back to the bank asking for help and we were offered a mod (Mod 2) that would have added an ADDITIONAL $20,000 to our mortgage. We said no thanks.

Then we applied for HAMP, a TARP program for which banks were given BILLIONS in taxpayer dollars. While the banks accepted the taxpayer dollars, they had great latitude to implement the program in their own interest. The HAMP solution would have added over $37,000 in ADDITIONAL debt while reducing monthly mortgage payments by 1.2%. It would have taken 87 years for the savings from the reduced payments to cover JUST the additional debt that was added. To be clear, that's an 87-year payback on just the mod, for a 30-year mortgage, from a bank that accepted tens of billions in taxpayer TARP dollars.
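The 87-year figure isn't rhetorical; it falls straight out of the numbers. A quick sanity check:

```python
# Years of payment savings needed just to recover the debt the
# HAMP modification added (figures from the table above).

added_debt = 37_720     # new debt financed by the HAMP mod ($)
monthly_savings = 36    # reduction in the monthly payment ($)

months = added_debt / monthly_savings
print(f"{months:.0f} months = {months / 12:.1f} years")  # 1048 months = 87.3 years
```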

So far I've covered only the modifications and none of the countless examples of bad faith by the bank. Just two quick stories. For one of these deals, we received a FedEx letter on December 28 announcing a deal, and we had to return it signed by January 3 – making any legal or professional review impossible.

In another incident, the bank said a package would be our only offer before legal action, and we never received it. Long story short, we had to work with our Congressman, who was eventually able to find the tracking number, and a manager at the bank was absolutely livid that this information got out. The tracking revealed that the letter made it to a distribution facility near us before the bank RECALLED it and had it sent back to their facility. The bank knew how much we had already paid into the mortgage in a very short time, so they figured they'd try to have their cake and eat it too, and finish what they'd been doing to us for years.

And that's just a taste of the background that led to our current situation. We went to court and had no opportunity to present our evidence – we were forced to either accept a new modification or be evicted AND owe the bank a half-million dollars. So after paying 66% of the appraised value in 3 years, what was the new mod? To now finance an amount 35% larger than what the property was appraised at. When we add the legal fees we now owe, our monthly payments are basically the same as when the loan was originated, and we still have no equity AND an underwater mortgage.

For me it's not so much about the money as the impact on my family. Some might say “living in a small house is cute”, but this is a bit beyond that. Meals are eaten kneeling on the floor, and when the refrigerator door is open no one can move between rooms. Inconveniences, yes, and things could be far worse, but we now have 3 children and living in this space is beyond “dysfunctional” or “inconvenient”. It affects our moods. It affects our time. It affects our studies and our careers. It affects how we raise our children. Many of the things I always imagined doing with my children are simply not possible in this environment. And now the bank – with the help of the court system – has essentially forced us to stay here and to spend every waking hour working to keep this one roof over our heads.

So yes, it's a very demoralizing situation, and occasionally I vent about it, but I try to remind myself to be thankful for our health and all that we do have. I'm going to try not to vent as much now that I've gotten this out, but if I do, I hope you'll understand and forgive me. The other thing that drives me nuts is the implication I hear on occasion that “they made bad choices”. There was only one point where we ever really had any choice – and we made the only and right one.

VMware vSphere 6 — What’s New?

vSphere 6 has been in public beta for several months now, and this week at VMworld some of its new capabilities were made public. vSphere 6 remains in beta ahead of a future release (sign up here!), but let's take a quick look at some of the new features that have been announced so far.

SMP for Fault Tolerance

Just a quick overview here. Fault Tolerance is a pretty neat feature that keeps a second copy of a VM in complete lockstep for HA purposes. The second VM has its own VMDKs, which can sit on a different datastore or SAN, while each CPU transaction is maintained on both servers. This is a great way to provide redundancy for applications that can't afford to lose cycles during a failover event, but the Achilles' heel was always that it was limited to a single vCPU.


VMware announced earlier this year that it would be discontinuing vSphere Heartbeat, and now we know why. With Fault Tolerance able to support VMs with up to 4 vCPUs in vSphere 6, high availability no longer needs to be provided by in-OS clustering for many workloads.  VMs of up to 4 vCPUs and 64GB of RAM can now enjoy the benefits of VMware Fault Tolerance.

vMotion Improvements

Some of the vMotion improvements announced include the ability to vMotion across different vCenter instances, across routed networks (this “may” work now but was never formally supported), and, perhaps most importantly, long-distance vMotion.

The latency tolerance for vMotion will be increased from 10ms to 100ms in vSphere 6! With that generous a latency tolerance, many more vMotion scenarios become possible without the usual geographic penalties. Personally, I think VMware should demonstrate this capability by vMotioning a VM to an EVO:RAIL cluster in a hot air balloon over a 4G LTE wireless network.
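Back of the envelope, a 100ms round trip covers an enormous distance. My own arithmetic below, assuming roughly 200,000 km/s for light in fiber and ignoring all device and queuing latency, so real-world reach is considerably less:

```python
# How far apart could two sites theoretically be within a 100ms
# round-trip budget? Assumes ~200,000 km/s in fiber and ignores
# switch/router/queuing latency, so real-world reach is less.

speed_in_fiber_km_s = 200_000
rtt_budget_s = 0.100

one_way_km = speed_in_fiber_km_s * (rtt_budget_s / 2)
print(f"Theoretical one-way reach: {one_way_km:,.0f} km")  # ~10,000 km
```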

vVols

This is a huge feature in my opinion – a whole evolution beyond what VAAI introduced – and rather than try to drill deep here I'll stick to a simple overview. A vVol is a new logical construct that appears as a datastore in your admin tools and allows the virtual disk to be a “first-class citizen” in storage (versus the LUN or volume). A vVol does not use VMFS; it is a new abstraction layer that enables object-based storage access (with your VMDKs being the objects).

I found a good illustration of a “before and after” view of all these pieces on Greg Schulz's StorageIO blog, shown below:

[Diagram: BEFORE vVols]

[Diagram: WITH vVols]

There are several things going on here, which I'll just quickly touch on. First, there is now one protocol endpoint versus many, as illustrated below. This enables more API capabilities to be exposed, and if I understand correctly, VMware has plans to allow third parties to develop filter APIs here.

[Diagram: protocols are consolidated into a single endpoint]

vVols are hardware-integrated much like VAAI, which means storage vendors implement the API definitions that activate the capabilities of their storage arrays. For example, one capability is the ability to offload the snapshot function to the storage array instead of using a copy-on-write delta file. While snapshots are an awesome feature of vSphere (and they are not backups, by the way), I'm not a big fan of the copy-on-write delta file method. I've seen snap chains 40+ levels deep (without anyone knowing) and snaps left open for months until the datastore filled up. By offloading snapshots and other operations to the storage array, these things can be handled far more effectively.

I haven't even gotten to storage profiles yet, which let you define the characteristics a given VMDK should have. There are many scenarios here, but at a high level, just removing the complexity of LUNs and RAID characteristics from admins is a big deal. When a VM is provisioned, the admin needs only to select a storage policy (or one is enforced for them) and the desired settings are applied without the underlying complexity being visible.
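In spirit, provisioning against a storage policy looks something like the sketch below. This is an illustration of the idea only; the policy names and capability fields are invented, not VMware's actual policy framework or API.

```python
# Sketch of policy-based provisioning: the admin picks a policy name
# and the capabilities are resolved behind the scenes. Policy and
# capability names are invented for illustration.

STORAGE_POLICIES = {
    "gold":   {"media": "ssd",    "replication": True,  "snapshots": "array-offloaded"},
    "silver": {"media": "hybrid", "replication": True,  "snapshots": "array-offloaded"},
    "bronze": {"media": "hdd",    "replication": False, "snapshots": "none"},
}

def provision_vmdk(name, size_gb, policy="silver"):
    """Provision a VMDK by policy; no LUNs or RAID levels are exposed."""
    caps = STORAGE_POLICIES[policy]
    print(f"Creating {name} ({size_gb} GB) with policy '{policy}': {caps}")

provision_vmdk("app01-data", 200, policy="gold")
```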

With that very basic intro, I highly encourage you to read one or more of the following blog posts, which go FAR deeper into vVols, how they work, and their benefits.

Also check out what EMC, NetApp and Nimble Storage are doing with vVols, just to name a few.

Virtual Datacenter

This is a new logical construct within vSphere that lets you combine multiple vSphere clusters into one construct to enforce consistent policy settings, provide a top-level management point and facilitate cross-cluster vMotion.

Improved Web Client

The web client has improved significantly with each release, but many (like me) still find it a bit slow at times. It's clear that VMware has spent some time on this: from using the beta I can assure you that there is a significant improvement in response time between the 6.0 and 5.5 web clients.

SUMMARY

That's just a quick summary of some of the features mentioned in the general session. More details should become available as vSphere 6 gets closer to a GA (General Availability) release. If you're anything like me, you probably can't wait for vSphere 6 – perhaps the feature I'm most looking forward to is vVols. Until then, happy virtualizing!

VMware Announces EVO:RAIL for the Software Defined Data Center

The much-rumored “MARVIN” has manifested today as EVO:RAIL, which represents VMware's entry into the “Infrastructure In-A-Box”, or hyper-converged, market.

Each “RAIL” consists of a block of four (4) x86 rack-mount servers available from a list of partners, with VMware vSphere and VSAN. Because EVO can scale, this solution will likely find acceptance in branch offices as well as in some larger scale-out designs – all with an HTML5 front end. Customers can now simply procure virtualization infrastructure — including storage — by purchasing additional “RAILs” as needed for scale.

[Screenshot: “Shall we dance?”]

This is truly a software-defined infrastructure solution, one that enables IT shops to procure infrastructure through a single vendor and scale out as needed. Nutanix was the first to find success with this business model, and others are sure to follow (also see Cisco and SimpliVity). I expect this will be an increasingly popular (and disruptive) trend in the marketplace.

EVO RACK was also announced as a tech preview, intended to scale to multiple server racks of SDDC infrastructure.


More details will be announced later in the day, but for now be sure to check out Duncan Epping's announcement post as well as VMware's EVO:RAIL site.

UPDATE:  Also see VMware CTO Chris Wolf's announcement post on EVO:RAIL here.