Virtual Machine Considerations For NetApp Storage

In our environment we’ve done something that might at first glance seem a bit unconventional: we’ve consolidated all OS disks (VMDKs) onto a dedicated set of datastores, and we’ve done the same with application drives, page files, and even VM swap files.

Why go through the extra effort of positioning each of a VM’s VMDKs on a different datastore?  The driving force behind this idea is deduplication.  NetApp storage can deduplicate identical blocks, and the boundary for this deduplication is the volume (a FlexVol in NetApp terms).  How many common blocks might there be across the C: drives of all those Windows VMs?  Probably quite a few.  By segregating the OS, application, page file (OS), and vSwap drives and then placing each type onto common volumes, we maximize our ability to find common blocks within a single volume, and thus maximize our disk savings.
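If you want to automate that placement, here’s a minimal sketch of one way to move a single VMDK onto a dedicated datastore with Storage vMotion, using pyVmomi (VMware’s Python SDK for the vSphere API). This is an illustration rather than anything from our environment: the vCenter host, credentials, VM name, datastore name, and the “Hard disk 2” label are all placeholders you’d replace with your own.

```python
# Minimal pyVmomi sketch: Storage vMotion one VMDK to a dedicated datastore
# while leaving the VM's other disks where they are. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "vm01")
    target_ds = find_by_name(content, vim.Datastore, "pagefile-ds-01")

    # Pick out the disk to relocate (here, matched by its device label).
    disk = next(dev for dev in vm.config.hardware.device
                if isinstance(dev, vim.vm.device.VirtualDisk)
                and dev.deviceInfo.label == "Hard disk 2")

    # A RelocateSpec containing only a DiskLocator moves just that one disk.
    spec = vim.vm.RelocateSpec()
    spec.disk = [vim.vm.RelocateSpec.DiskLocator(diskId=disk.key,
                                                 datastore=target_ds)]
    task = vm.RelocateVM_Task(spec)
finally:
    Disconnect(si)
```

Run that once per VM (or loop over your inventory) and each page file or vSwap disk lands on its dedicated datastore without touching the OS disk.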

Of course there are other reasons as well.  If you are doing any type of DR, from SRM to NetApp replication, you might want to exclude the page files from all that replication activity.  Since they’re all consolidated onto one or more datastores, just exclude those datastores from your DR plan and/or replication.  Easy!
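If your datastores follow a naming convention, that exclusion can even be scripted. Here’s a small, purely illustrative Python sketch; the “pagefile-*” and “vswap-*” patterns are assumptions, not anything NetApp or VMware mandates:

```python
# Illustrative sketch: derive a DR replication list from a datastore naming
# convention, excluding the page file and vSwap datastores.
import fnmatch

EXCLUDE_PATTERNS = ["pagefile-*", "vswap-*"]  # assumed naming convention

def replication_candidates(datastores):
    """Return only the datastores that should be replicated for DR."""
    return [ds for ds in datastores
            if not any(fnmatch.fnmatch(ds, pat) for pat in EXCLUDE_PATTERNS)]

inventory = ["os-ds-01", "app-ds-01", "pagefile-ds-01", "vswap-ds-01"]
print(replication_candidates(inventory))  # ['os-ds-01', 'app-ds-01']
```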

The NetApp and VMware Storage Best Practices whitepaper does mention consolidating vSwap files and OS page files onto common volumes, but it does not directly mention consolidating OS volumes and app volumes.  It does hint at it, though, stating: “NetApp recommends grouping similar operating systems and similar applications into datastores, which ultimately reside on a deduplication-enabled volume.”

50% GUARANTEE

This is also a big part of the reason for NetApp’s 50% guarantee, which promises that you will use 50% less disk space if you enable the following features:

• Deduplication
• RAID-DP
• Thin Provisioning
• NetApp Snapshot

Thin Provisioning, of course, is common to many storage systems, but the other three are unique to NetApp.  RAID-DP allows for a second parity disk without the traditional performance and capacity penalties, while NetApp’s snapshots use far less space than most other snapshot implementations, which have to move more blocks for the same operations.  And of course NetApp offers inline, block-level deduplication that can meet mission-critical storage requirements, as opposed to post-processing, which is far slower and usually reserved for backup tiers only.
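To make the math concrete, here’s a back-of-the-envelope sketch with purely illustrative numbers; the 60% written and 50% dedupe figures below are assumptions, not NetApp’s, and actual savings depend entirely on your data:

```python
# Back-of-the-envelope sketch of combined thin provisioning + dedupe savings.
# All numbers are illustrative assumptions, not measured or guaranteed values.
provisioned_gb = 10_000      # logical capacity presented to the VMs
written_fraction = 0.60      # thin provisioning: only 60% is ever written
dedupe_savings = 0.50        # half the written blocks are duplicates

written_gb = provisioned_gb * written_fraction
physical_gb = written_gb * (1 - dedupe_savings)

print(f"Provisioned: {provisioned_gb} GB")
print(f"Written:     {written_gb:.0f} GB")
print(f"On disk:     {physical_gb:.0f} GB ({physical_gb / provisioned_gb:.0%} of provisioned)")
```

With those assumed inputs, 10 TB of provisioned capacity ends up consuming about 3 TB of physical disk, comfortably inside the 50% guarantee.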

In summary, NetApp storage offers some unique capabilities that in many cases can be maximized by grouping like drives together onto the same volumes.

3 Responses to Virtual Machine Considerations For NetApp Storage

  1. Ed Grigson says:

    We’ve done the same thing (also on NetApp) and we see around 70% dedupe on the OS volumes (with around 80 VMs per volume). The app data is less clear-cut: neither our SQL nor our Oracle data volumes really benefit from dedupe (circa 2-3%), and as there’s a slight performance impact we’ve decided against using it for this type of data.

  2. guest4321 says:

    Since when is Deduplication on NetApp “In-line?” Deduplication must be scheduled to run as a post-process on a NetApp filer.

  3. ThatGuy says:

    I’m in the process of making this decision myself. The one question that comes to mind is whether all of this administrative overhead is worth the disk savings you see. I’m not as concerned about dedupe rates and all that marketing hype; disk used is disk used IMHO (I’d rather thick provision a VM than ever see it complain about a datastore being out of space due to snapshots; ya ya, administer your environment…). But I wonder if the rate of change I see would be reduced by segregating this. E.g., an NFS datastore over the course of an hour builds a 50 GB snapshot, which needs to be replicated. I’d hate to go through all this trouble moving things around (specifically the Windows page file) to reduce that by 10 GB.
