Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it from not JUST a virtualization perspective, and not JUST a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but a more holistic perspective which considers all of this, as well

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert, I do think that there is something special about this one.  This is a much more significant jump than going from 4.x

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you everything that’s in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO,” “This is a strategic initiative,” “The ROI is compelling,” “There are funds in the budget,” “Our competitors are doing it.” Some of these are better reasons than others, but here’s a question.  Imagine a

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

vSphere 4.1: USB Pass-Through

I just learned courtesy of a great post at vNinja.net that the USB Pass-Through feature works across vMotion.  You can select any USB device from any host in the cluster and map it to the VM.

This removes a significant barrier: many servers previously could not be virtualized because they needed access to a USB-based license key or some other required USB peripheral.

vSphere 4.1 Performance Improvements

VMware published an excellent whitepaper detailing the performance improvements introduced in vSphere 4.1.  The paper is 14 pages long so here’s a quick breakdown for those who want just a quick summary.


vSphere 4.1 is about 3 times more scalable than vSphere 4.0 as noted in the following table


New CPU efficiencies available when using NUMA-enabled hardware can significantly improve performance under heavy workloads


It is actually faster to compress a memory page than to swap it to disk.  The size of this compression cache is configurable.  The performance gain depends on the extent to which memory is overcommitted: VMware claims a 15% improvement for moderate overcommitment and 25% for heavy overcommitment.
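The intuition behind this feature is easy to sketch. The snippet below is not VMware’s implementation – just a rough illustration using zlib on a mock 4 KB page; the ~5 ms swap-in latency figure is an assumed ballpark for a spinning disk.

```python
import time
import zlib

PAGE_SIZE = 4096          # a 4 KB guest memory page
SWAP_LATENCY_S = 0.005    # assumed ~5 ms to swap a page in from spinning disk

# A semi-compressible mock page (real guest pages vary widely).
page = (b"vSphere 4.1 memory page " * 200)[:PAGE_SIZE]

start = time.perf_counter()
compressed = zlib.compress(page)
elapsed = time.perf_counter() - start

print(f"original: {len(page)} bytes, compressed: {len(compressed)} bytes")
print(f"compression took {elapsed * 1e6:.0f} us vs ~{SWAP_LATENCY_S * 1e3:.0f} ms for a disk swap")
```

Even on modest hardware, compressing a single page takes microseconds, orders of magnitude less than a swap-in from disk, which is why keeping compressed pages in a RAM cache beats swapping.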


I discussed Storage I/O Control in an earlier post and again here.  This feature allows the administrator to set a preference for I/O on a per-VM basis so that greater performance can be made available to specific VMs.  In short, mission-critical VMs can be granted a higher preference for I/O activity.


  • 8GB FC HBA Support
  • NFS Performance Improvements
  • iSCSI Performance Improvements


  • 10% improvement in VM to wire transmit speeds
  • 100% improvement in VM-to-VM transmit speeds
  • Large Receive Offload (LRO) for Linux Guests
  • 3.6x improvement in VM Fault Tolerance traffic throughput


Many of the vSphere 4.1 performance improvements will be especially relevant to VDI environments as noted in the paper.


  • A 3x improvement on 10GbE links
  • Storage vMotion up to 25% faster


  • Registering and reconfiguring VMs is as much as 3x faster than vSphere 4.0


This is one of my favorites, as the new metrics can be very valuable in troubleshooting SAN performance issues.  Latency metrics are now collected from many different perspectives, including hosts, datastores, VMDKs, SAN paths, and more.

VMware Beats The Street

There is technical leadership and then there is market leadership, and VMware seems to be demonstrating both.  VMware beat Q2 earnings estimates by 2 cents, demonstrating a 42% increase in revenue over Q2 2009.

Credit Suisse and JP Morgan both raised their target prices on VMware and expect continued market leadership.  As for the global economy, it shouldn’t be a complete surprise that Asia is where VMware is seeing the greatest growth.

VMware has been on a buying spree the last few years, buying companies that enhance their core position and extend into the cloud computing space (the fruits of some recent acquisitions are in vSphere 4.1 as well as the new Configuration Manager product).  The Wall Street Journal reports that EMC (VMware’s majority owner) will keep shopping for more acquisitions.

The Street has the full call transcript.

Introducing Nimble Storage

Nimble Storage just came out of stealth mode and will be trying to build a presence in the storage market with some very interesting technology.  I just learned about them from Jason Boche’s informative post on their innovative new storage platform.

The challenge now is trying to describe it.  The short answer is that it’s an iSCSI SAN with built-in DR and backup capabilities.  It reminds me of Data Domain in the sense that they reduce the storage footprint and offer DR, but this goes a bit further.

Nimble Storage uses inline compression which they claim can reduce the storage footprint by 2x–4x without increasing latency.  And by adding large amounts of flash cache in front of low-cost, high-capacity SATA drives, it can be used for production data.  As I noted in an earlier post, cache cannot accelerate large sequential I/O patterns, so keep this in mind.  But this SAN will support the vStorage API (VAAI), and the de-duplication will also offer some additional benefit.

De-dupe systems are popular because they can simplify backups and DR, but Nimble Storage has some unique advantages because it can also be your Tier 1 storage platform.  Because of this, along with API interfaces for VMware, Exchange, SQL and more, you can get nearly instant backups AND restores, and can efficiently sync deltas over the WAN for DR purposes.

In other words it’s WAN-friendly DR and instant backup/restore within an iSCSI SAN with application support for VMware, Exchange, SQL and Sharepoint.

Nimble Storage has been around for less than a week, but I think it will be worth keeping a close eye on their unique storage solutions.

Microsoft not supporting VMware vCenter on MSCS Clusters

Duncan Epping at Yellow Bricks calls attention to a Microsoft KB article which clearly states that Microsoft does NOT support clustering vCenter with Microsoft Cluster Server (MSCS).

High Availability for vCenter is critical for your virtual infrastructure.  If your vCenter server is unavailable, not only have you lost a central management point, but you have lost all DRS and HA capability as well.  In other words, you’re working without a net for your VMs and without your primary management tool.  Many other functions, like backups, will also depend on the availability of vCenter.

vCenter Server Heartbeat is an excellent solution for this problem.  It is based on Neverfail’s High Availability solution, which is replication-based as opposed to a shared-storage model like MSCS.  I had the opportunity to work with one of Neverfail’s engineers shortly before he went to VMware to focus on the vCenter Heartbeat solution, and when properly configured it is a very strong HA solution for vCenter.

64-bit support was added last year (5.5 Update 2) so you can now use it to protect a vCenter that is on Windows 2008 x64 (2008 R2 is not yet supported).

vCenter Heartbeat also has some nice features like automatically creating the standby-server using the clone function if vCenter is already running as a VM.

If you are running mission-critical servers in your virtual infrastructure, you should take a look at vCenter Server Heartbeat to provide high availability for vCenter and its HA and DRS functions.

UPDATE:  vCenter Server Heartbeat 6.3 has been released.  It supports vSphere 4.1, Windows 2008 R2 and more.  More details are in the Release Notes for 6.3.

vStorage API for Array Integration (VAAI) and vSphere 4.1

A key new capability of vSphere 4.1 is the new vStorage APIs for Array Integration or VAAI.

The vStorage API has been around, but it’s the “Array Integration” part here that’s new.  Storage vendors like EMC and Dell’s EqualLogic have already released VAAI support into some of their storage arrays.  So what exactly does VAAI mean for the datacenter?

Chad Sakac has an excellent and detailed post on VAAI which I highly recommend but I want to briefly call attention to a few things here.

As Chad explains in his post, this is very similar to Intel VT technology and vSphere 4.  If you provide vSphere 4 with the newer Intel CPUs, vSphere will take advantage of the new capabilities.  Likewise, if you add a VAAI-enabled SAN to vSphere 4.1, it will natively take advantage of the API integration.

In an earlier post I described a scenario where a VM was degraded because snapshots were taking too long to close.   What if the snapshot was performed by the SAN rather than being performed within the ESX software?  That’s what VAAI can provide.

As Chad notes, EMC found the following improvements when using VAAI:

  • VMware storage operation time reduced by 25% or more (up to 10x).
  • ESX Host CPU reduced by 50% or more during these storage operations
  • Network Traffic reduced by 99%

Below is a video put together by EMC that demonstrates VAAI in action.  There’s more at Chad’s site on VAAI, including PowerPoint slides and webcasts.

UPDATE:  Here is VMware’s FAQ on VAAI:

http://kb.vmware.com/kb/1021976 vStorage APIs for Array Integration FAQ

Performance Benefits of Storage I/O Control (SIOC)

VROOM! is one of my favorite VMware blogs because it focuses on performance.

It’s great to hear about new performance benefits, but often it’s difficult to answer the question, “what will it do for [me/my application]?”

The VROOM blog does a great job at breaking this down on everything from Exchange, to Oracle, to web servers, OLTP and more.

They just posted a whitepaper on the performance benefits of SIOC, a new feature available in vSphere 4.1.  Below is a summary graph of how SIOC can boost performance  — read the whitepaper for the full story.

vSphere 4.1 Released:  New Features

vSphere 4.1 was released for download today and has many new features.  Future blog posts will go into greater detail, but for now I’d like to provide a high-level overview of the major new features and changes in vSphere 4.1:

  • Storage I/O Control (I had an earlier post on this here)
  • Network I/O Control
  • Memory Compression
  • Active Directory Integration
  • HA/DRS Improvements
  • Scripted Installs and Boot-from-SAN support
  • Enhanced I/O Statistics  (storage latency metrics are tracked on a per-HBA, per-VM and per-volume basis – a much needed capability)
  • vStorage API for Array Integration (VAAI – this will enable storage vendors to provide hardware-assist for functions like snapshots, cloning and more)
  • Load Based Network Teaming
  • Improved VSS Quiescing for Windows 2008 and Windows 7 (more on this later!)
  • 8GB HBA Support
  • vCenter performance improvements
  • Technical Support Mode
  • Passthrough USB support (memory sticks to VMs)

There’s more, but that’s the quick tease and I will drill deeper into several of these new features in future posts.

One important note is that this is the last release of vSphere that will support the Service Console (a.k.a. the classic ESX hypervisor).  Only the ESXi hypervisor will be made available in future releases, so customers are strongly encouraged to begin the transition to ESXi as soon as possible.

For now here are some key KB articles which should be considered in preparation for vSphere 4.1 deployments:

KB Article: 1022104 – Upgrading to ESX 4.1 and vCenter Server 4.1 best practices

KB Article: 1022842 – Changes to DRS in vSphere 4.1

KB Article: 1022263 – Deploying ESXi 4.1 using the Scripted Install feature

KB Article: 1021953 – I/O Statistics in vSphere 4.1

KB Article: 1022851 – Changes to vMotion in vSphere 4.1

KB Article: 1021970 – Overview of Active Directory integration in ESX 4.1 and ESXi 4.1

vConverter 5.0 Released: New V2P Capability

With all the benefits of virtualization, why would anyone want to go backwards and do a V2P conversion?  As it turns out there are a few cases where this may be desired.

Vizioncore just released vConverter 5.0 which adds support for Microsoft Hyper-V, but perhaps the most interesting feature is to perform a V2P conversion.

While moving “backwards” might seem counterintuitive, let’s look at a few scenarios where a V2P may be helpful.

Let’s say you have a physical server that must remain as a physical server for some non-technical reason (process, executive order, etc.) and you want to test a major upgrade – such as a service pack or even an OS upgrade.  With virtual machines there are very effective ways to do this, but not for physical servers – until now…


The illustration above lays it all out.  You can do a P2V and test the upgrade on a VM.  When the upgrade has been certified on a VM, you can then use the V2P functionality to go back to a physical server. 

A second scenario would be disaster recovery.  While Vizioncore’s vReplicator can be used to provide DR capability for VMs, this doesn’t extend to physical servers.  But you can use vConverter to refresh a VM copy on a regular schedule and then use vReplicator to maintain a copy in the DR site.  Then you have the option of restoring the server in the DR site as either a VM or a physical server.

Now for the disclaimer: I haven’t attempted to use this V2P feature yet, and I have to imagine that there may be some additional steps involved (drivers, monitoring agents, etc.).  But to the extent that a V2P process can be created and executed, there are some interesting possibilities as noted above.

I’ll post more results when I have a chance to try it, but if anyone has any experiences to share using this feature, I’d love to hear about them!

EVO 4G versus the iPhone

I got an HTC EVO 4G (Sprint) recently and I’ve got to say this is one amazing phone (and I haven’t even tested the 4G yet, which is not yet available in New York City).  Not only are there great RDP and Citrix clients available, but it also works well with vCenter Mobile Access for managing your virtual infrastructure on the go.

This week Google announced the new App Inventor for Android, which is designed to help non-programmers write Android apps and should increase the quantity (but perhaps not the quality) of apps available.  There simply aren’t enough good virtual vuvuzelas out there!

Now it just wouldn’t be any fun if I didn’t poke any fun at all the iPhone users out there.  But they may already be feeling a bit down now that Consumer Reports is unable to recommend the iPhone 4 (which is not 4G by the way) due to what they deemed a design defect.

At least iPhone users do have a place to go where they can discuss the features that do work…

Why Storage Is Essential to Virtualized Environments — Part 1

It’s been said many times and many ways that storage is an essential element in virtualized environments and there may be no better way to make this point than to use real world examples.

At one VMware customer, the storage and virtualization engineers were on different teams, and communication between the two groups was perhaps not as good as it could have been.

After relocating many VMs to a different datacenter (with a different SAN), the customer began to experience problems on the BES (BlackBerry Enterprise Server) virtual machines running under ESX 3.5.  When backups ran, it would take hours for the snapshot to be closed, causing an interruption in BlackBerry services.

The new environment featured IBM storage front-ended by an IBM SVC with terabytes of cache (the IBM SVC is a storage virtualization solution somewhat similar to the EMC VPLEX, but quite a bit different).  This SAN supports many mission-critical environments, so the knee-jerk reaction was to blame VMware and to suggest that VMware couldn’t handle such critical workloads, despite the fact that these same VMs had run just fine on the original SAN.

What’s going on here?  First let’s consider the BES VMs.  These VMs had several large drives (500GB and more) which contained log files, which of course are highly transactional.  So you have a snapshot being held open for a long period of time (for the backups to run) and then a high volume of transactions that need to be merged when the snapshot is closed.

Now let’s take a closer look at the SAN.  The customer learned that the storage pool was composed of SATA disk rather than FC disk.  The storage team may have surmised that SATA would be “fast enough” since there were massive amounts of cache to drive better speed, and indeed disk I/O performance was very good – but not for all data patterns…

Caching is very effective for random I/O workloads, but not for sequential workloads.  DBAs are very familiar with this concept, which is discussed here on the MySQL Performance Blog.

To prove that there was a performance penalty for sequential I/O patterns, I ran a series of tests using SQLIO on the SAN.  There were many interesting results, but perhaps the most telling was that throughput (MB/sec) dropped to just 25% of normal speeds when using a large (100GB) sequential file (and to 50% of the original SAN).  The difference is simply that caching cannot effectively accelerate large sequential disk operations, which exposed the performance limitations of the SATA drives.  And closing a snapshot is a sequential disk operation: the delta log file is merged with the VMDK to create a new VMDK, and the larger the disk and the more transactions to the data (more delta blocks), the longer that sequential close operation is going to take.  Cloning a VM would be another example of a sequential I/O operation.
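SQLIO is a Windows-only tool, but the methodology is easy to reproduce anywhere.  Here is a minimal Python sketch of the same idea – timing sequential versus random 4 KB reads against a test file.  The file and chunk sizes are arbitrary stand-ins for the real test, and on a file this small the OS page cache will dominate, so treat this as an illustration of the method rather than a way to reproduce the SAN result.

```python
import os
import random
import tempfile
import time

CHUNK = 4096
FILE_SIZE = 8 * 1024 * 1024   # small stand-in for the 100GB SQLIO test file

# Create a scratch test file filled with random data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

def timed_reads(offsets):
    """Read one CHUNK at each offset and return throughput in MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

n_chunks = FILE_SIZE // CHUNK
seq_mbps = timed_reads(i * CHUNK for i in range(n_chunks))
rnd_mbps = timed_reads(random.randrange(n_chunks) * CHUNK
                       for _ in range(n_chunks))

print(f"sequential: {seq_mbps:.0f} MB/s, random: {rnd_mbps:.0f} MB/s")
os.remove(path)
```

Run against a large file on the actual storage (well beyond cache size), the sequential number is the one that exposes the raw spindle speed, because the cache can no longer help.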

In this case the customer had several remediation options including:

  • Don’t use snapshots for backups and use a traditional backup agent inside the VM
  • Don’t backup the log disks and mark them as independent (no redo/delta log)
  • Move the VMDK’s to higher performing disks (i.e. 15K FC).

It is in situations just like this that we commonly see people say “virtualization is the problem,” but more often than not that’s really not the case.  In this case the problem was that the storage system did not effectively meet the demands of the types of activities being performed.

There are many cases like this one where virtualization and/or the virtualization vendor was blamed, when the real problem was in the storage subsystem.  Especially with vSphere 4 and current CPU capabilities there are very few x86 servers that cannot be effectively virtualized (more on that in future posts).  Make sure you understand how virtualization uses storage and also have a full understanding of your own storage infrastructure.

vSphere 4.1 to be announced July 13

Virtualization.info is reporting that the long discussed vSphere 4.1 will be announced on July 13.

While vSphere 4.1 will feature several performance improvements, several features have already leaked including memory compression and storage I/O control.

Storage I/O Control will monitor the disk I/O performance of VMs at the VMFS level and make adjustments as needed in an attempt to minimize per-VM storage latency.  This could prevent a scenario where VMs on a volume experience poor I/O performance because another VM on the same volume is having a burst of I/O activity.

In the left illustration above we see a test VM (yellow) using up more disk I/O bandwidth than a mission-critical server (red).  In the right illustration, this is corrected using SIOC and share values input by the user.
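The share mechanics are roughly proportional: under contention, each VM’s slice of the datastore’s device queue scales with its share value.  The share numbers and queue depth below are made up for illustration – this is a sketch of the proportional-share idea, not VMware’s actual scheduler.

```python
# Hypothetical per-VM disk share values (the numbers are made up).
shares = {"mission_critical": 2000, "web_server": 1000, "test_vm": 500}
QUEUE_DEPTH = 32   # assumed aggregate device queue depth for the datastore

total_shares = sum(shares.values())

# Under contention, each VM's slice of the queue is proportional to its shares.
slots = {vm: QUEUE_DEPTH * s / total_shares for vm, s in shares.items()}

for vm, n in sorted(slots.items(), key=lambda kv: -kv[1]):
    print(f"{vm}: ~{n:.1f} of {QUEUE_DEPTH} queue slots")
```

So a VM with 2000 shares gets twice the queue slots of one with 1000 shares, which is exactly the “test VM crowding out the mission-critical VM” correction shown in the illustration.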

Scott Drummonds has a good article about how this all works, and below is a movie which explains Storage I/O Control (SIOC) in action!