Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it not JUST from a virtualization perspective, not JUST from a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but from a more holistic perspective that considers all of this, as well…

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because, in a sense, the newest release is usually the most evolved. However, as a four-year VMware vExpert, I do think there is something special about this one. This is a much more significant jump than going from 4.x…

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2. I can’t tell you everything that’s in it due to the NDA, but you can still register for the beta yourself, read about what’s new, and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in…

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX, an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS, formerly known as vShield App and Edge). Since I already had intentions to…

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing? Ask a consumer, a CIO, and a salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it…

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned, it can be backed by a number of different statements, such as: “It will reduce our TCO.” “This is a strategic initiative.” “The ROI is compelling.” “There are funds in the budget.” “Our competitors are doing it.” Some of these are better reasons than others, but here’s a question. Imagine a…

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

Symantec ApplicationHA: High Availability for Virtualized Applications


How can you provide high availability to your virtualized mission-critical applications? There are several ways, and now Symantec is introducing a new option:  


VMware’s HA provides restart-ability for VMs that were running on a failed host, or for VMs that are completely unavailable (e.g., a blue screen). While this will promptly restart the VM on a different host, you have still incurred some downtime. Furthermore, this is done without any consideration of application integrity: HA’s job is simply to make sure that the VM is powered on.
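This distinction between VM-level availability and application availability can be made concrete with a small decision-rule sketch (purely illustrative; the rule and the names are mine, not any vendor’s logic):

```python
def ha_decision(vm_powered_on: bool, app_healthy: bool) -> str:
    """VM-level HA vs. application-aware HA, as a simple decision rule.

    VM-level HA (like VMware HA here) acts only on power state; an
    application-aware layer (like ApplicationHA) also watches the app.
    """
    if not vm_powered_on:
        return "restart VM on another host"     # what VM-level HA does
    if not app_healthy:
        return "restart application service"    # what an app-aware layer adds
    return "no action"

print(ha_decision(vm_powered_on=True, app_healthy=False))
# restart application service
```

The point is simply that a hung or crashed application inside a healthy, powered-on VM is invisible to VM-level HA alone.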

VMware Fault Tolerance (FT) was introduced in vSphere 4.0 and allows a passive clone of a VM to run in CPU lockstep with the active VM. In this scenario, the passive VM can take over in the event of a host server failure without missing a single CPU instruction. However, FT is currently limited to VMs with a single vCPU, and snapshots are not supported, which can pose challenges for backups.

Microsoft Cluster Server (MSCS) is another option for providing HA to VMs, but it has drawbacks as well, including additional environmental configuration and, of course, the licensing costs for the Enterprise edition of Windows.

Symantec ApplicationHA

Symantec ApplicationHA could perhaps be described as the pairing of Veritas Cluster Server with VMware HA. Veritas Cluster Server has been a leading clustering solution for many years, with excellent cross-platform support. It uses a shared storage model to provide high availability, as illustrated below:


Some may cringe when I say this, but at a very high level, MSCS and Veritas Cluster Server are similar: they both use a shared storage model, they both use a heartbeat channel, and they are both application-aware. By integrating directly with the application and its logs, these solutions can provide strong HA support. But what makes Symantec ApplicationHA stand out is its integration with VMware HA and DRS, as well as its integration with vCenter:


As illustrated above, ApplicationHA is managed within the vCenter client itself using a plugin. There is no need for a separate tool to manage application clustering; everything is available right from the vCenter client. And by integrating with VMware’s HA and DRS capabilities, ApplicationHA can navigate the virtual infrastructure to configure optimal high-availability scenarios for applications.

ApplicationHA will support several applications, including Exchange, SQL, IIS, Oracle and SAP, and will include push capabilities for rapid deployment.      


ApplicationHA is a very compelling solution for high availability, and many will likely find it attractive at a list price of $350 (USD) per VM. I think this is good market positioning by Symantec, leveraging their proven Veritas Cluster technology with VMware integration at an attractive price point. This also benefits VMware, as organizations that previously resisted virtualizing mission-critical applications out of HA concerns can now take a closer look.

Here is the press release and a blog post by Symantec with more details.     

Symantec will be at VMworld next week and will also be offering a breakout session on ApplicationHA. Be sure to check them out!

What’s Changed in vSphere 4.1?

Courtesy of Eric Sloof’s blog post, I discovered this PowerPoint deck of over 200 slides which goes into incredible detail on all the changes that were introduced in 4.1 (from 4.0).

These slides are a gold mine of information on everything from HA/DRS, vCenter, storage and much more.  If something changed between 4.0 and 4.1 it’s detailed in these slides.  Check them out!

Virtual Geek 2010 Survey Results

Chad Sakac (a.k.a. Virtual Geek) has posted the results of his informal survey, and there are some interesting findings. You can review the full details in Chad’s post, but I wanted to “borrow” a few graphics that I thought were interesting.

As illustrated below, the overwhelming majority are running vSphere 4, which was released just over a year ago. In fact, more are running the just-released 4.1 than are running 3.5. Clearly, VMware customers have been quick to realize the value in vSphere and upgrade.

This table on backups was also interesting.  60% (50/33) of those who are running backup agents in their guests have indicated that they “hate” this arrangement while 90% (45/5) of those who are using VADP (vStorage) “love it” (my vote makes up a portion of that 90%):

Rackmount servers were more popular than blades by more than 2:1 (I tend to favor rackmount in most scenarios):

And over 70% indicated that they are already at Stage 2 or better on their journey to cloud computing:

There’s much more in the survey that is quite interesting, so check it out. If I have the time, I may revisit this later and add some additional observations.

VM Backup Reference Architecture – Part 2: Beyond VADP

UPDATE:  Some good information in the comments.  I will be posting follow-up updates on both Veeam and Vizioncore solutions either during or after VMworld.

In Part One, I discussed the features provided by VMware’s VADP (vStorage API for Data Protection). Most major backup vendors offer support for VADP, and some of them also support CBT, which can provide very significant reductions in both backup windows and disk I/O.

Several backup vendors offer additional capabilities in a virtualized environment, but first, a quick look at VADP from a different perspective. Below is a graphic from Vizioncore describing what they call Backup 2.0:

This graphic illustrates the very significant costs and complexities of traditional agent-based backup. By leveraging VADP, organizations can capture significant ROI and efficiencies using image-level backups, source de-duplication, and single-step restores. For the purposes of this article I don’t want to drill too deep here, but you can read more about Backup 2.0 here.


Vizioncore (Quest) and Veeam are two backup vendors that offer significant additional benefits in virtual environments. Rather than retrofitting a Backup 1.0 solution to work like Backup 2.0, these two vendors have developed Backup 2.0 solutions from the ground up and are primarily focused on virtualized environments.

According to IDC, Vizioncore (Quest) is the 3rd largest management software vendor for virtualized environments behind VMware and Microsoft, while according to Gartner, Quest/Vizioncore is one of the top 10 enterprise backup vendors and is the fastest growing among all of them.

Before I continue, a quick disclaimer: I use and have significant experience with Vizioncore’s products. I have not yet had the opportunity to work with Veeam’s products, but from what I understand they also provide very strong solutions that are worthy of consideration.

Replication, Backup and CDP

Both Vizioncore and Veeam will be demonstrating version 5 of their backup and replication products at VMworld in less than two weeks. In Vizioncore’s vRanger 5.0, the backup and replication products will be integrated and will share a common engine; this is already the case in Veeam’s Backup and Replication 4.1.

Integrating replication and backup enables an agentless CDP (Continuous Data Protection) capability, which can be much more cost-effective than traditional agent-based CDP solutions. Veeam supports “near-CDP” today in their Backup and Replication 4.1 product, while Vizioncore (Quest) is expected to add the same in a 5.0 release due early next year.

VSS and Disaster Recovery

Both Veeam and Vizioncore offer their own VSS agents to overcome the application-consistency issues I discussed in an earlier post. Both also offer integration with de-duplication solutions such as EMC’s Data Domain, or inline compression such as Nimble Storage, for effective disaster recovery.

Solutions like Data Domain and Nimble Storage are an excellent way to provide disaster recovery: your backups are copied to disk at the primary site, and then the de-duped differences are replicated in a WAN-efficient manner to a like device at your DR site.
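The mechanics of that WAN-efficient replication can be sketched in a few lines of Python (an illustrative simplification of hash-based de-duplication; the function and the data model are hypothetical, not any vendor’s implementation):

```python
import hashlib

def replicate(backup_blocks, dr_block_store):
    """Ship only blocks whose fingerprint is unknown at the DR site.

    backup_blocks: list of raw byte blocks from the primary-site backup.
    dr_block_store: dict mapping fingerprint -> block held at the DR site.
    Returns the number of blocks actually sent over the WAN.
    """
    shipped = 0
    for block in backup_blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in dr_block_store:      # DR site has never seen this block
            dr_block_store[fp] = block    # this one crosses the WAN
            shipped += 1
    return shipped

# First backup: everything is new, so everything crosses the WAN.
store = {}
day1 = [b"os-image", b"app-data", b"logs-v1"]
print(replicate(day1, store))  # 3

# Next backup: only the changed block is replicated.
day2 = [b"os-image", b"app-data", b"logs-v2"]
print(replicate(day2, store))  # 1
```

The savings grow with the overlap between successive backups, which is exactly why static OS and application blocks replicate essentially for free after the first pass.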


Many backup solutions require some form of proxy server to handle the “data moving”. The problem with this is scalability: a data mover can only move a certain volume of GB/hr before it becomes saturated. This can be solved by adding more hardware, but that increases your capital and operational costs.

Vizioncore’s vRanger 4.5 uses a Direct-To-Target architecture that solves this problem by eliminating the proxy server. In this model, the image is compressed at the source (the ESX host) and then written directly to the backup target. The backup model here is parallel rather than sequential: multiple jobs run at the same time on the same host, and across multiple hosts.

According to tests observed by Vizioncore, this allows for backups that are 3-4 times faster for LAN backups, and 1.75 times faster for LAN-free backups. These results were obtained in comparison with two other backup solutions, each of them supporting CBT, in an environment with only 3 ESX hosts.

Direct-to-Target is not supported for ESXi hosts at this time; however, this support is planned for vRanger 5.0 early next year.
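The parallel, proxy-free model described above can be sketched as follows (a toy illustration; the compression step stands in for source-side processing on the host, a thread pool stands in for concurrent jobs, and all names here are hypothetical):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def backup_vm(name, image, target):
    """Compress at the source, then write directly to the backup target
    (a dict here), with no intermediate proxy server in the path."""
    target[name] = zlib.compress(image)
    return name

# Hypothetical VM images; in reality these would be VMDK reads on the host.
vms = {f"vm{i:02d}": bytes(1024) for i in range(8)}
target = {}

# Parallel rather than sequential: several backup jobs run at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    done = list(pool.map(lambda n: backup_vm(n, vms[n], target), vms))

print(sorted(done) == sorted(vms))  # True: every VM landed on the target
```

Removing the proxy removes the single choke point: aggregate throughput scales with the hosts and the target, not with one data mover’s GB/hr ceiling.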

Active Block Mapping (ABM)

This is a patented technology used in Vizioncore’s vRanger product to reduce the amount of data that needs to be backed up. 

When a file is deleted in the NTFS file system, its entry is removed from the directory table, but the file’s data remains on the underlying disk blocks until those blocks are overwritten. ABM queries the NTFS file system to determine which blocks are used by active files, allowing it to skip disk blocks that are not actively in use.

Vizioncore has found that the ABM feature has resulted in significant reductions in both backup windows and the size of the backup archives which must be stored.
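The principle behind ABM can be shown with a toy model (my own simplification; the real feature queries NTFS allocation metadata, which is not shown here):

```python
def backup_active_blocks(disk, allocation_bitmap):
    """Return only the blocks the file system reports as in use.

    disk: list of blocks (bytes).
    allocation_bitmap: list of booleans, True where a block belongs to an
    active file. Deleted files leave their data behind on disk, but their
    blocks are marked free, so an ABM-style backup skips them.
    """
    return {i: disk[i] for i, used in enumerate(allocation_bitmap) if used}

# Four blocks; block 2 held a deleted file: data still present, bit cleared.
disk = [b"mbr", b"doc", b"stale-deleted-data", b"db"]
bitmap = [True, True, False, True]

archive = backup_active_blocks(disk, bitmap)
print(len(archive))  # 3 blocks stored instead of 4
```

Because deleted-file residue never reaches the backup stream, both the backup window and the size of the stored archive shrink.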

Instant/Flash Restore

Both Vizioncore and Veeam will be introducing instant restore functionality in their upcoming 5.0 products. The idea is that the VM can be run directly from the backup repository, while a Storage vMotion moves the VMDKs to production-class storage in the background.

Needless to say, this is potentially game-changing. The idea of an instant restore for most servers was a fantasy until recently. This can significantly reduce risk (and lost revenue) by nearly eliminating RTO latency when a server is impaired and requires a restore.


VADP offers many benefits, including CBT, that many backup vendors can take advantage of. However, vendors like Vizioncore and Veeam offer technologies that go beyond VADP and can provide significant value to the enterprise, including:

  • Near-CDP (Veeam 4.x, vRanger 5.0)
  • Enhanced VSS and DR integration (Veeam, vRanger)
  • Direct-To-Target (vRanger 4.5)
  • Active Block Mapping (vRanger 4.5)
  • Instant Restores (Veeam 5.0, vRanger 5.0)
  • Object-Level Restore for Exchange (vRanger 4.5)

Riverbed Virtual Steelhead Appliance Available

I’ve found the virtual appliance concept to be struggling for acceptance in certain circles. Some managers have chosen to put their faith in the physical appliance itself, and leaving it behind would be a step out of their comfort zone. But what is an appliance?

Is it the hardware that makes the appliance?  Many appliances use off-the-shelf components like Intel/AMD CPUs and commodity disk and networking components.  In most cases, I would suggest that it is the code and not the hardware that makes the appliance.  So if you have a virtual infrastructure, why not run that code, as a virtual appliance?

That’s the premise behind the virtual appliance market, where you can find all manner of appliances from companies like F5, Symantec, McAfee, Trend Micro, Sun, Novell, Microsoft, Zimbra (a recent VMware acquisition), and many more.

In July, Riverbed released a virtual appliance edition of their Steelhead (WAN optimization) product. It has all the features and capabilities of the physical appliance, but it is pre-built and can be quickly deployed onto an existing vSphere infrastructure. The product page is here, and you can see an overview in this PDF. The Virtual Steelhead appliance requires VMware vSphere 4.0 or later.

Why would you want a virtual appliance rather than a physical one? One reason is cost: why pay the high margins on the appliance hardware when you can leverage your existing virtual infrastructure? Second, there is DR: how easy is it to provide DR for a physical appliance versus a VM? Third is time to deploy: virtual appliances can be deployed much more rapidly than physical ones.

Riverbed is also doing quite a bit more around cloud computing. Network World is reporting that later this year, Riverbed will launch a product called Cloud Steelhead, targeted at public cloud service providers, who can use it to accelerate WAN traffic for customers in their public clouds.

VM Backup Reference Architecture – Part 1: Fundamentals

I wanted to do a series of posts discussing the pros and cons of various VM backup strategies and some best practices, but I found it necessary to cover some fundamentals before discussing specific solutions.

As I discussed in a previous post, I don’t like backup agents within a virtual infrastructure. They add management overhead and several complexities, and they are generally less flexible and less cost-effective. A well-designed VM backup infrastructure can enable second-wave private cloud efficiencies and support DR initiatives.

VMware offers two APIs that backup vendors can leverage to perform image-level backups. The original API, VCB, required the installation of additional components on a dedicated proxy server. VCB has already reached end-of-life and has been replaced by VADP (vStorage API for Data Protection), which was introduced in vSphere 4.0. As of vSphere 4.1, VADP is the only backup API available. I also want to point out that some VADP capabilities are supported in ESX 3.02 and later, but key features such as CBT require vSphere 4.

When performing VCB or VADP backups of Windows applications like SQL, SharePoint, and Exchange, there are additional application-consistency issues that need to be considered, which I previously wrote about here.

Differences between VCB and VADP

There are some significant architectural differences, as well as some key new features in VADP, as shown in the table below:

Feature | VADP | VCB
Requires additional download & install | No, built into the data protection software | Yes
Full virtual machine image backup | Yes, single-step copy (source to target) | Yes, with a two-step copy (source to VCB proxy, then VCB proxy to target)
Incremental virtual machine image backup | Yes, using Changed Block Tracking | No
File-level backup | Yes, both Windows and Linux | Yes, Windows only
Full virtual machine image restore | Yes | Yes, by using VMware Converter
Incremental virtual machine image restore | Yes | No
File-level restore | Yes, using restore agents | Yes, using restore agents
CLI for image backup | No | Yes
CLI for file backup | Yes |



In general, you will see that VADP is more efficient and capable than VCB, but the biggest benefit may come from Changed Block Tracking (CBT).

New Feature: Changed Block Tracking (CBT)

CBT basically allows the VMkernel to track which blocks have changed since the last VADP snapshot was created (i.e., the last backup job). As an example of how this works, say you have a web server with a lot of static content, where less than 5% of the disk blocks change between backups. Without CBT, the backup engine would have to read and process EVERY single block on the VMDKs during a backup. With CBT enabled, the VMkernel tells the backup application, “these are the only blocks you need to read.” Not only does this dramatically improve backup windows (based on the rate of change), but it also dramatically reduces the I/O load on your storage infrastructure.

The benefits of CBT will vary based on the rate of change on your VMDKs, but many have reported dramatic reductions (2x or more) in their backup windows using CBT. 
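To make the bookkeeping concrete, here is a toy model of changed-block tracking in Python (purely illustrative; the real tracking lives in the VMkernel and is exposed to backup software through VADP, not through anything like this):

```python
class ToyCBT:
    """Track which disk blocks have changed since the last backup."""

    def __init__(self, n_blocks):
        self.disk = [b""] * n_blocks
        self.changed = set()            # indices dirtied since the last backup

    def write(self, index, data):
        self.disk[index] = data
        self.changed.add(index)         # the kernel notes the dirty block

    def incremental_backup(self):
        """Read only the changed blocks, then reset the tracking."""
        delta = {i: self.disk[i] for i in sorted(self.changed)}
        self.changed.clear()
        return delta

vm = ToyCBT(n_blocks=1000)
for i in range(1000):
    vm.write(i, b"base")
full = vm.incremental_backup()   # the first backup touches all 1000 blocks

vm.write(7, b"new page")         # mostly static content: few blocks change
vm.write(42, b"new log")
delta = vm.incremental_backup()
print(len(full), len(delta))     # 1000 2 -- only 2 blocks read, not 1000
```

The backup engine’s read volume now tracks the rate of change rather than the size of the disk, which is where both the shorter windows and the lighter storage I/O come from.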

CBT is NOT enabled by default. First, the VM must be running virtual hardware version 7 (introduced in vSphere 4.0). Second, you will need to explicitly enable CBT on each VM. Some backup products will enable CBT for you, so check with your backup vendor.


VAAI is not really part of VADP, but I wanted to briefly mention it because I feel it should be a consideration for VM backups. When you back up a VM using VADP, a snapshot is taken of the VM. The time it takes to open or close a snapshot can vary based on your disk system, and in some cases this can cause severe problems, as I wrote about here. If you are running vSphere 4.1 with a VAAI-enabled SAN, your snapshots are handled at the hardware level (the SAN controller) rather than in software at the VMDK level, which significantly improves snapshot performance, and therefore backup time as well.


VADP is an API in vSphere that backup vendors can leverage to create image-level backups of VMs, and new features like CBT can reduce both backup windows and disk I/O. Most major backup vendors support VADP in some capacity, with varying levels of CBT support. A few vendors are taking things quite a bit further with some innovative features.

In the next article, we’ll take a look at some specific solutions and how they add additional value above what VADP provides, as well as some challenges and best practices. 

Virtualizing Microsoft Tier 1 Applications With VMware vSphere 4

That’s the title of a new book that was just released and is available at Amazon here.  (hat tip to Eric Sloof).

In the post I made just prior to this one, I discussed “What Should Be Virtualized?”  and made a case that Tier 1 applications can be virtualized.  This book attempts to address the “how” question and give best practices guidance for Active Directory, IIS, SQL, Exchange, SharePoint and more.

What Should Be Virtualized?

There are several different reasons to virtualize a given server. What criteria should be considered when deciding whether or not a given server or application should be virtualized?

Consider your goals

Many IT organizations that I am familiar with have established management business objectives (MBOs) around virtualization. One popular MBO is to set a goal of 80% or more of servers being virtualized. Such MBOs are usually fairly effective, as they force an organization to be accountable to easily attainable metrics, allowing TCO benefits (power/space/cooling) to be more easily calculated and reported. Organizations that are receiving the biggest benefits from virtualization will typically have such an MBO and/or track their virtualization progress.

The motives behind such an MBO may matter in the decision of what to virtualize. The organization may be focused on what I call first-wave benefits of virtualization (consolidation, power/cooling, etc.); a good portion of the vBlock solution’s value proposition is actually first-wave TCO reduction. Other motives may be that all the other cool CIOs are doing it, or the organization may be pursuing the second wave of virtualization benefits from the private cloud model, which requires 100% virtualization. More on motives in a bit.

“Virtualize First” Methodology

A colleague of mine recently attended a webcast where the following slide was shown. I presume that this slide represents Dell’s virtualization methodology and that it is being proposed as a model for other organizations:

There are a few things I found notable on this slide. First of all, note the section on proprietary hardware: there are more options here now that USB pass-through is supported in vSphere 4.1. If the required peripheral is USB-based, you now have the option to consider virtualizing these servers on 4.1.

But the big thing I wanted to talk about was…..


In the workflow illustrated above, some decisions are referred to the virtualization team for further consideration. However, there appears to be an absolute rule that ALL “production database” servers are not to be virtualized, with no recommendation to consult the virtualization team. What would be the reasons for this? Let’s take a closer look at the technical capabilities in this realm, as well as motives and other considerations.

DATABASES: Technical Capability

There’s a stigma out there that OLTP and other demanding databases cannot be effectively virtualized. This was true in many cases a few years ago, but it may no longer be true with vSphere 4.1 and Nehalem (and later) processors.

The VROOM! blog at VMware publishes a number of detailed studies of the performance of various databases. I’ll dive more deeply into some of these in future posts, but here’s a quick summary of some VMware performance results that have been published:

These tests demonstrate that a well-tuned virtual infrastructure is usually capable of running at more than 92% of native performance. This does not mean that a VM will only be 92% as fast as a physical server; rather, it means that during peak loads in a stress test, there was a performance tax of up to 8% at its peak. Unless the server runs hot for significant periods of time, you may not notice any significant degradation at all.

But that’s enough theory and whitepapers.  What about the real world?

Virtualized Databases in the real world

EMC/VMware have virtualized their SAP and Oracle 11i apps and are actually a reference customer for Oracle. Check out the published success stories for virtualizing SQL, SAP, and Oracle. There’s the vBlock, which large companies are considering for SAP and other mission-critical applications. And also consider this result from a recent Oracle User Group survey, reported by The Register:

Sixty two per cent with Oracle on x86 use virtualization compared to 43 per cent for non x86-based Oracle sites. Cost savings associated with server hardware virtualization and an overall reduction in hardware costs were given as the leading reasons for companies adopting virtualization – 76 per cent and 59 per cent, respectively.

In the future I’ll post a more detailed article about virtualizing databases, but at this point it should be clear that mission-critical databases can run successfully on vSphere. However, since every datacenter is different, one should consult with the organization’s virtualization team (and the storage team, to verify IOPS) before making a decision.


Motives play a big role in whether or not virtualizing a database server is a good fit. If the motive is primarily the first-wave benefits of consolidation, the perceived benefits may not be sufficient for a risk-averse manager to pursue it. But if the goal is to reap the benefits of the private cloud, or even just some second-wave benefits, then a strong case can often be made for virtualizing mission-critical ERP and OLTP systems.


The “Virtualize First” methodology shown above is a good model; however, I question the need to be so restrictive with database servers. Some managers have a perception of virtualization that is a few years behind current capabilities, so this rule may have been included to appease such concerns and take a more cautious sales approach. If I could change one thing on the slide, it would be to refer database servers to the virtualization team. Databases certainly can be virtualized, and there are significant benefits to doing so. Since every datacenter and database is unique, an internal analysis is advised to determine whether virtualization is feasible.

Gestalt IT VMworld Winners Announced

Gestalt IT announced the two winners of the VMworld contest. This is a great opportunity for them to share the knowledge they gain at VMworld with a wider audience. Take a moment to follow them so that you can track their updates from VMworld and more.

Luigi Danakos

Luigi is becoming more and more well-known in technical circles, tweeting as @NerdBlurt and blogging under that name as well. He promises to blog more content focused on VMware beginners, as well as kickstarting a program for techies to give back to children in need. We’ll let him explain that part on his blog!

Allan Ruiz

Allan noted the lack of Spanish-language VMware content and Central-American community, and has promised to fill this gap. He tweets as @AllRuiz. We look forward to hearing all Allan is able to bring home from VMworld!

Facebook Integration Now Functional

The Facebook icon at the top of this site now points to the correct Facebook account.  You can also see all of our tweets from Facebook as well.

Sorry it took so long to fix, but access to Facebook was blocked at the hospital, so this was my first opportunity to correct it.

Slowly getting back to normal

My schedule has been a bit irregular as of late as I’ve been living in a hospital for the past 5 weeks.  My daughter has just been released following major surgery and is on a stable trajectory back at home.  Over the next week I hope to return to a more “normal” schedule.

I have more ideas for posts right now than I have time to write, but I’ll be focusing on family over the next few days, so I’ll probably only have a handful of updates this week. I also have to break out my P90X DVDs to work off the effects of being in a hospital for 5 straight weeks 🙂

In the future you can look for a series on backups as well as a series of posts on candidate servers for virtualization, which I may just entitle “You can’t virtualize that!” (you can guess what my response will be 🙂 )

Go to VMworld for free!

Gestalt IT is offering a contest to pay for VMworld transportation and lodging for the contestant who can best demonstrate how they will “pay forward” the experience. I just filled out the form; it’s worth checking out.

Gestalt IT offers valuable independent insight on IT trends and technologies and is how I became aware (indirectly) of Nimble Storage.