Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it from not JUST a virtualization perspective, and not JUST a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but a more holistic perspective which considers all of this, as well

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved. However, as a four-year VMware vExpert I do think that there is something special about this one. This is a much more significant jump than going from 4.x

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you what’s all in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores more of like services (just keep in mind that several such services existed before it

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO” “This is a strategic initiative” “The ROI is compelling” “There’s funds in the budget” “Our competitors are doing it” Some of these are better reasons than others, but here’s a question.  Imagine a

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

PowerGUI, vEcoshell (VESI), PowerCLI and ESXi

What do these four products have in common?  One is being discontinued, while two of the others help to ease the transition to the fourth (ESXi) and improve automation.  Let’s take a look:

PowerGUI and VESI

PowerGUI is a free tool from Quest Software that does what its name suggests — it provides a graphical front end to Microsoft’s PowerShell automation framework/scripting interface.  It’s a visual shell of sorts for PowerShell.  You can create a combination of queries and actions and then either run them from the GUI, or switch over to the “script” tab and see your actions in PowerShell script format.  This is great for people like me who know just enough about scripting to be dangerous — you can construct your query/task in the GUI and then modify the details in the exposed script.

PowerGUI being used to explore the local registry, then...

...switch to the script tab to see the script you "built"

I’ll share just one example of how I’ve used PowerGUI in the past.  An Active Directory team was attempting to migrate users off of a legacy domain and needed to figure out which users were still active so that they could contact them.  With PowerGUI this was no problem at all.  Within minutes, and using only the GUI, I had constructed a query for all users in the domain whose last login was more than 30 days old.  Then I made some adjustments to include key fields that they wanted in the CSV output and voilà!  Anytime they needed a report on most any AD attribute I’d be able to construct the query quickly in PowerGUI.  I’ve used it to automate Active Directory and SQL Server tasks and much more.  If you haven’t yet, you owe it to yourself to take a few minutes to check it out.
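
For reference, the equivalent hand-written script is only a few lines.  Here’s a sketch using Microsoft’s ActiveDirectory module rather than PowerGUI’s actual generated code; the 30-day threshold and output path are illustrative:

```powershell
# Sketch only: assumes the ActiveDirectory module (RSAT) is installed
# and that you have rights to query the domain.
Import-Module ActiveDirectory

# Find users inactive for 30+ days, keep the key fields, export to CSV
Search-ADAccount -AccountInactive -TimeSpan 30.00:00:00 -UsersOnly |
    Get-ADUser -Properties DisplayName, EmailAddress, LastLogonDate |
    Select-Object SamAccountName, DisplayName, EmailAddress, LastLogonDate |
    Export-Csv -Path .\stale-users.csv -NoTypeInformation
```

The nice thing about building it in the GUI first is that you can then tweak exactly this kind of script in the exposed script tab.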

Now while there were VMware PowerPacks available for PowerGUI, they only went so far.  Vizioncore (now Quest) developed a more robust GUI specifically for VMware environments (Hyper-V support was later added) called vEcoshell.   I won’t bother to detail all of the differences here, but it had some great features, including built-in best practice queries for things like “snapshots older than 7 days” and finding how much white space could be saved by using vOptimizer (which can also re-align on a 64K boundary).  Chances are that you’ll be surprised at how much white space is available to be reclaimed.  I’ve been using vEcoshell over the past few years and I’ve found it to be an indispensable tool for managing vSphere environments.
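
For the curious, that “snapshots older than 7 days” check is easy to reproduce by hand in PowerCLI (a sketch; the vCenter name is invented):

```powershell
# Sketch only: assumes PowerCLI is loaded and your own vCenter name
Connect-VIServer vcenter01.lab.local

# List snapshots older than 7 days along with their size on disk
Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddDays(-7) } |
    Select-Object VM, Name, Created, SizeMB |
    Sort-Object Created
```

The value of the best-practice PowerPacks is that checks like this are pre-built and a click away.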

Bad News, Good News

The bad news is that vEcoshell is no longer being developed, but the good news is that this development has moved to PowerGUI in the form of a PowerPack.  If you download the current version of PowerGUI and then the VMware PowerPack, you will have the same functionality that vEcoshell was intended to provide.  Quest is also leveraging this in its products, such as vRanger Pro 5.0, which will ship with a PowerGUI PowerPack to open up new possibilities for managing backup and replication tasks.

PowerGUI with VMware PowerPack and Best Practice Queries shown

[There are some great reports also available but I don’t have any that I can quickly and safely share]

Now there is one catch.  When I downloaded the current PowerGUI version it forced me to “downgrade” my PowerCLI to 4.0 Update 1 before it would install.  This would be an issue for some as there have been some significant improvements in PowerCLI 4.1.1 including ESXCLI and ESXTOP.  The good news is that there’s a fix — you can follow the steps here to install a beta version of the PowerPack that will support PowerCLI 4.1.1.  I’ve followed these steps and so far have not encountered any issues.

Migrate to ESXi With Confidence

As Duncan Epping points out here, the time is now to start migrating to ESXi.  There’s been little doubt that the bare-metal architecture of ESXi is superior to the old Service Console architecture.  Less code, less patching, better security, boot from flash…you get the picture.  The big concern has always been “how will I manage my servers and/or run my scripts without a service console?”  Well, PowerCLI has really advanced over the last few releases, and when combined with PowerGUI (and the vMA, which Duncan writes about here), there really is no need for a Service Console.  Even if you don’t know PowerCLI, PowerGUI is a great place to start as it can shield you from the actual code with a nice GUI.
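
To give a feel for Service-Console-free management, here’s a hedged PowerCLI sketch of a couple of everyday host tasks (the vCenter and host names are invented):

```powershell
# Sketch only: assumes PowerCLI and a reachable vCenter; names are invented
Connect-VIServer vcenter01.lab.local

# Inventory check: report version and build of every host
Get-VMHost | Select-Object Name, Version, Build, ConnectionState

# Prep a host for patching without ever touching a Service Console
Set-VMHost -VMHost (Get-VMHost esx01.lab.local) -State Maintenance
```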

Do yourself a favor and start exploring the new PowerGUI, as well as begin to take steps to migrate away from the Service Console architecture.  For additional resources on migrating to ESXi, be sure to check out VMware’s ESXi Migration Info Center.

Poll: Agility Series

I have a dilemma.  I find myself getting more and more interested in exploring specifics of value in the private cloud, its linkages with organizational management and more, and I’m teaching myself most of this in my free time (no, my job is not in the clouds).

So far I’ve noticed that my Agility series is among the least viewed pages on my site and this could be for any number of reasons ranging from interest levels to content quality and more.  Those posts were intentionally written to include a target audience of organizations which may still be in the early stages of exploring either virtualization itself or the private cloud.  In the world I live in, even x86 virtualization still has to be sold, let alone understanding value in the private cloud.

As I’ve been learning more about cloud computing (which I barely understood 7 months ago) I find myself wanting to explore some of these elements in more detail, but I have reason to question the interest level.   I know that there is strong interest in integrated stacks not just from Twitter conversations, but also from my interview with EMC’s Chad Sakac, and I’d like to explore the stack value proposition in what I think are some interesting ways.

So what do you say?  Fill out the poll below and you may just influence my approach 🙂

Should Blue Shift focus more on exploring the value proposition of the private cloud, or jump ahead to vCloud and integrated stacks?


Monday Morning QB

Seeing as how this blog has already taken tangents into rock-and-roll and the political realm, why not sports as well?

I thought I’d take a moment to briefly post my thoughts on the NFL Championship Weekend:

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as:

  • “It will reduce our TCO”
  • “This is a strategic initiative”
  • “The ROI is compelling”
  • “There’s funds in the budget”
  • “Our competitors are doing it”

Some of these are better reasons than others, but here’s a question: imagine a project that would improve your infrastructure and allow your organization to become more agile, but without a specific business performance improvement or TCO reduction to link to it.  Which of the above statements would allow you to move ahead with the project?  We may need a new model for value creation in IT if we are going to get transformative projects approved, and not merely tactical solutions.

TCO attempts to measure IT costs within the enterprise, while ROI can attempt to look at IT as an investment (at least those elements which are tangible).  To borrow an investment term, what is the NPV (Net Present Value) of your IT assets? The value of an IT asset ultimately depends on how it is used.  Just as human resources can become inefficient in a poorly modeled organization, IT resources can similarly be utilized inefficiently if not modeled on a vision of agility and value.

What’s the value of being able to deploy a new application in 1 month versus 3 months? What’s the value of your IT staff getting twice as many projects completed in a year’s time?  What’s your RoIT (Return on IT)?  As we explored in Part One, Agility creates value by allowing the business to execute its business strategy in less time.

While I want to explore valuation methods further in a future post, let’s take a look at how value can evolve as IT adopts the private or hybrid cloud model.  Before we get into details about how various private cloud stacks (Vblock, FlexPod, etc.) can add value, I thought it would be helpful to “set the table” and construct a value model that can be referenced in future posts in this series:

The centerpiece of the graphic above is the inverted triangle – as organizations further evolve upwards towards the private cloud/IaaS model, there is more and more business value to be captured; first in the forms of CAPEX and OPEX, and then ultimately in the form of Agility. Let’s take a step back and look at each of the three elements within this triangle.


CAPEX is CFO shorthand for capital expense. In IT, capital expense would normally consist of things like datacenter space and server hardware (with electricity and cooling being related variable costs). If you can run 30 web servers on a single physical server, that’s a significant capital expense savings. Virtualization is certainly a “green technology”; in previous posts I noted the savings in electricity costs that have been realized by many (and promoted by utilities). CAPEX reduction is the original value proposition of virtualization (do more with less hardware) and is probably the reason many adopted virtualization in the first place. The evolved IT organization will look well beyond just CAPEX benefits, however….


OPEX is also CFO shorthand and refers to operational expenses. Perhaps the first example of OPEX benefits that many organizations discovered after deploying virtualization is provisioning. In the past, IT may have installed an OS from CD (hours of labor) or invested time and/or money into a more automated method for building servers.  With virtualization you can now create a new VM with a few clicks and have it ready for consumption within just a few minutes – a big reduction in operational expense from what many IT organizations had previously.
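
Those “few clicks” can themselves be scripted.  A hedged PowerCLI sketch of provisioning from a template (the template, host and VM names are invented):

```powershell
# Sketch only: assumes PowerCLI, a vCenter connection and an existing template
Connect-VIServer vcenter01.lab.local

# Deploy a new VM from a golden template and power it on
New-VM -Name web01 -Template (Get-Template w2k8-base) -VMHost esx01.lab.local
Start-VM -VM web01
```

Once provisioning looks like this, it becomes a building block for the automation and orchestration discussed below.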

Basically, reducing OPEX is all about getting things done more efficiently – more automation, fewer steps, less labor, less effort and less time – and it’s incredibly important.   The full spectrum of opportunities for OPEX reduction is huge.  OPEX spans many disciplines and technologies (even organizational management) and includes key elements such as automation, orchestration and more.  We will get into more specific examples of OPEX reductions in future posts.


Now the line between OPEX and Agility here is a bit…well, cloudy…and it isn’t going to be clearly defined.  The point is that the IT organization has put itself into a position where it can be more nimble and respond more quickly.  Joe Weinman of HP recently published a whitepaper called “Time is Money:  The Value of ‘On-Demand'” in which he attempts to show how lost opportunity can be quantified. The graph above shows a red demand curve, with a green resources curve falling a bit short.  The difference between these two curves represents unserved demand and can actually be quantified (to the extent D and R can be).  Unserved demand may represent lost customers for the service provider, or it could represent the gap between what the business wants from IT and what IT can provide.  Taken a step further, it may even be possible to determine how much market demand (and revenue) was lost to the business because IT was unable to provide the volume of resources that were desired, when they were desired.  Joe concludes in his whitepaper:

We have seen that not only is there a time value of money, there is a money value of time, specifically, increased agility and responsiveness lead to reduced loss, including a reduction in missed opportunities. Time is money.

This is exactly why I believe that Agility ultimately has the potential to provide more value than OPEX reduction alone.  Reducing OPEX is generally a TCO play on the cost center model, but it can also be a building block towards Agility.  Agility is the ability to quickly deploy infrastructure as needed to satisfy business demand, and when you can do this you are evolving away from the IT-as-a-cost-center model and towards IT being a revenue generator and business partner which can successfully execute the organization’s business strategy.  That’s profound.
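
To make the unserved-demand idea concrete, here’s a toy calculation (every number is invented) of the gap between a demand curve D and a resources curve R over six periods:

```powershell
# Toy numbers only: hourly demand vs. provisioned capacity
$demand    = @(10, 14, 20, 26, 22, 16)
$resources = @(12, 12, 18, 18, 18, 18)
$valuePerUnitHour = 50   # assumed revenue per served unit-hour

# Unserved demand is the sum of the positive gaps between D and R
$unserved = 0
for ($i = 0; $i -lt $demand.Count; $i++) {
    $gap = $demand[$i] - $resources[$i]
    if ($gap -gt 0) { $unserved += $gap }
}
Write-Output "Unserved demand: $unserved unit-hours"
Write-Output "Estimated lost revenue: $($unserved * $valuePerUnitHour)"
```

In this invented example 16 unit-hours of demand go unserved; the more elastic the resources curve, the smaller that gap, and the smaller the loss.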

Let’s Add Some Clarity

Now that the value model has been introduced we should probably go into a bit more detail on several things including:

  • Is virtualization a requirement for cloud computing?
  • Abstraction, APIs and OPEX
  • Has your datacenter evolved beyond a cost center?

Virtualization and Cloud Computing

First of all we should take a moment to define our terms.  The word “cloud” is used in too many different ways and has lately become something of a general marketing term.  What we are really talking about here is Infrastructure as a Service (IaaS), which usually refers to a private cloud scenario (it can also refer to the infrastructure used to serve up Software as a Service (SaaS) and public cloud applications).

Virtualization may not be a de facto requirement for “cloud computing” but in my view it is an essential part of IaaS.  The current Wikipedia entry for cloud computing says that IaaS “delivers computer infrastructure – typically a platform virtualization environment – as a service”.  Thus when you look at CAPEX in the value triangle, the implicit assumption is that virtualization has been adopted — you need virtualization to get to CAPEX and above.

Many companies however have not yet virtualized their mission-critical applications for various reasons (this is sometimes called VM Stall). But the technical barriers have been continually getting lower, as I most recently noted here in an article about Oracle RAC on vSphere. And even if there aren’t any CAPEX benefits for an application that requires a 1-to-1 consolidation rate, many companies will want to virtualize many of these workloads because of the OPEX and Agility benefits that become available when virtualized.

APIs in the Stack:

Virtualization As a Management and Abstraction Layer

Nicholas Weaver recently demonstrated with his excellent XBOX Kinect “Fistful of Cloud” demo (my post here) how the abstraction provided by virtualization also becomes a management layer that enables a whole new level of possibilities.  Today we can change the power state of hundreds of servers by pushing a single button, move VMs across different servers, and even fail over entire datacenters with the push of a button.  Virtualization provides not only flexibility-enabling abstraction but a management layer which can be utilized in countless ways.

Already today we have APIs in the stack for storage (VAAI), data protection (VADP) and security (vShield), with more APIs and enhancements to come.  VAAI already provides a bridge between the storage array and the hypervisor, and perhaps we will soon see security APIs extend to the networking hardware as well.  A well-integrated virtualization stack can provide many more opportunities to reduce our OPEX costs (both time and money) by abstracting our workloads from the boundaries and limitations of yesterday’s physical, non-virtualized environments.  I have no doubt that in an undisclosed bunker somewhere these APIs and new ones are being enhanced and created right this minute to enable more and more possibilities to manage and automate our abstracted workloads (VMs).  As EMC COO Howard Elias noted in a Forbes interview:

You have a virtualized infrastructure, you’ve virtualized your business applications, and now you add a management and security model and automation and orchestration where you truly have automated, policy-based, flexible management of your IT infrastructure. That’s where you really get cloud-like operations in a private cloud and even in a public cloud.

Evolving Beyond IT as a Cost Center towards Agility

I’ve actually seen some environments where IT management wanted to adopt virtualization so that they could tell others that they were using it.  Others saw the original value proposition of virtualization but haven’t taken steps to capture the additional OPEX and Agility benefits that are available.

CIOs and IT managers should take an introspective look at their environments and goals and think about whether they want to continue using the IT as a cost center model, or evolve towards the IaaS/Private Cloud model in order to empower the business with Agility.  Chances are your competitors are in varying stages of implementing private cloud concepts.  All things being equal, the business that can execute their business strategy the quickest will win in the marketplace.  Organizations which are not actively moving towards Agility risk being left behind and being unable to quickly and efficiently execute on what the business asks of them.

What’s Next?

Now with this value model established, in future posts we will look at more specific and detailed ways in which stacks, converged infrastructure and automation tools can deliver on it, including:

  • Obstacles To Agility
  • vCloud Director
  • The “x86 Mainframe”
  • Orchestration
  • and more….

Agility Part 3 will focus on traditional obstacles to Agility, while future posts will go into detail on specific solutions that can overcome and transcend these obstacles.

Nickapedia: Managing VMware vSphere with XBOX Kinect Gestures (Wow!)

In my house we’ve been using the XBOX Kinect features more and more.  Often mommy and daddy will want to watch a movie and perhaps the kids didn’t put the controllers back or they’re just hard to find in the dark, so why bother looking for the controllers when you can just gesture with your hands?

One day I was explaining to a manager how with vMotion your server could be over “here” on server A (gesturing to the right) and vMotion could move the workload uninterrupted over “here” on server B (gesturing to the left).  I had read by now several articles on how the XBOX Kinect had been “hacked” for a variety of other applications.  The light bulb went off and one day I made the following tweet:

Is someone writing a Kinect interface for vSphere so that we can vMotion VM’s by swiping our hand from right to left?

I was mostly joking but partially serious, in that I knew it would be possible to do (with A LOT of smarts).  Jeramiah Dooley (@jdooley_clt) retweeted my tweet and copied @lynxbat (Nicholas Weaver).  I’ve been observing some of Nicholas’ efforts over the past year and I remember thinking “If anyone could do this, he probably could”.

During the course of the next week Nicholas made some tweets indicating that he was working on a new project that he was really excited about and working hard on, but it never occurred to me that I might know what he was working on.

After watching the Green Bay Packers dominate in the Georgia Dome last night, I woke up in a good mood this morning and decided to check out what was going on in the Twitter-verse.  I saw that Nicholas had made a new post which many others were re-tweeting with enthusiasm.  So I checked it out and began watching Nick’s video.

In the first few seconds of Nick’s video I thought I heard him mention Blue Shift Blog.  “Huh?  Me?  What in the world did *I* do??”.  As Nick went into more detail the proverbial light bulb switched on — “He did it!  He actually did it!”.

Before I share Nick’s amazing video and demonstration, just a few quick (kinda) points.  Sometimes we make technologies that have that “gee-whiz” factor but over time, in the absence of value, the novelty wears off and they get forgotten.  I don’t think that this is the case here, for several reasons.

The Evolution of Human Interfaces

The idea of working with more natural interfaces really entered our culture with the movie “Minority Report”, where we saw Tom Cruise searching for and manipulating information using hand and finger gestures.  The advent of Microsoft Surface, iPhones, iPads and Android phones was a small step in this direction as we became more familiar with gestures such as swiping and multi-touch gestures like pinch-to-zoom.

The XBOX Kinect was a huge leap forward in that our entire bodies can be tracked (in a spring update, even facial gestures like smiling or raising one’s eyebrows will be tracked).  The Kinect has been a monster hit in the gaming market with over 8 million already sold, and Microsoft is working on extending the Kinect platform to the Windows PC market.  As amazing as Kinect is, it is still largely a 1.0 technology.  Gesture-based interfaces will become both more advanced and more commonplace in the future, and I suspect they will some day extend into the datacenter as well.

Gesture Interfaces as a Metaphor for Abstraction

Nick talked about this at some length in his video and it is an excellent point.  Virtualization is abstraction of server workloads from the physical hardware.  Now virtualization is not technically a requirement for cloud computing (IaaS) but in many cases I think it will be a critical piece (more on this in a future Agility post).  But a big part of the cloud computing concept is the abstraction of the workloads from traditional boundaries which enable more flexibility, agility, and savings.

While virtualization provides abstraction from the hardware, it also provides a management layer in the datacenter from which many things can be done.  As Nick mentions in his video, you can now power on hundreds of VMs from a script rather than running around the datacenter hitting power buttons.  In the future I expect more and more APIs to be developed within the hypervisor in areas such as security, storage and networking to provide greater functionality as well as interaction with the physical hardware elements.  In this sense, the gesture interface really does provide an interesting metaphor for this abstraction layer which is providing more and more functionality and capability — vMotion being just one example.
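
That “hundreds of VMs from a script” point really is a short pipeline in PowerCLI today (a sketch; the folder and vCenter names are invented):

```powershell
# Sketch only: assumes PowerCLI and a reachable vCenter
Connect-VIServer vcenter01.lab.local

# Power on every powered-off VM in a folder, asynchronously
Get-Folder 'Web Tier' | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOff' } |
    Start-VM -RunAsync
```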

While few enterprises will be using motion gestures to manage their datacenters in 2011, I think that as both human interfaces and virtualization/IaaS evolve there is the potential to see much more along the lines of what Nick has demonstrated.

Nick’s Video

Enough of me talking.  Nick did all the work and what he did is nothing short of amazing.  I felt a bit guilty knowing that I may have somehow been a contributing factor to long nights and sleep deprivation, but I suspect he’d say that it was well worth it.  Here’s Nick’s blog entry and below is the video.  Amazing!

Interactive Cloud – Nicholas Weaver(@lynxbat) from Nicholas Weaver on Vimeo.

The Amazing iPad

A few months ago I did two things that few would have expected from me:

1) I walked into an Apple store

2) I purchased an Apple product

After a few months I thought I would share my perspective on what makes tablets such as the iPad such a compelling and revolutionary technology.   But first I have to share an interesting story from my college days:

When Windows is not a WIN

Back in the early 90’s our university library had Apple Macs available for students to use during library hours.  A problem developed when I had a paper due and found the library schedule to be incompatible with my social schedule.  My dorm building was in a suite layout, and I asked one of my neighbors if I could use his PC over the weekend while he was out of town.  He agreed.

When it came time to write the paper I sat down at his chair and powered up the PC.   I heard some noises, and then later some text appeared on the screen which I didn’t quite understand.   This was indeed a bit different from the Macs in the school library.

Then things quieted down and there was this “C:\” that appeared on the screen with a flashing cursor after it.   I figured I’d wait a bit more for something to happen, as this clearly wasn’t right.   I tried using the mouse which was right next to the keyboard to find MS Word, but there were no graphics – just a bunch of white-on-black text.  After a minute of waiting I pressed the power button to restart the process, as something clearly hadn’t worked right that time. Again I found the computer returning to this “C:\” with a cursor after it. What did “C” mean?   Was it some kind of code or error message?

Shortly I learned that I could type after this mysterious letter C, but pressing Enter would get me a message informing me that I had entered a “bad command or filename”.  Hmm.   I tried typing “MSWORD” and a few variations of it, but that didn’t work either.  How about “Help”, I thought – surely that would disclose what my options were! “Bad command or filename”.  “Now that’s odd,” I thought to myself.

Well if I just tried long enough I’d be certain to find some word that was not a “bad command or filename” from this C: thing.   I tried many combinations of words, and after repeated failures I began to employ some colorful metaphors, but was a bit less surprised that those words did not elicit a helpful response from the machine.  I ended up entering the library the moment it opened on Monday AM and typed as good of a paper as I could in 90 minutes.

When my neighbor returned I enthusiastically informed him that there was something wrong with his computer and that it kept displaying this “C:” error.   I was then informed that this was normal behavior and that I should have entered the phrase “win” and pressed enter to start MS Windows and then MS Word.  Now why didn’t I think of that?   One of the lessons here perhaps is that technology does not work unless it works for people.

Apple Goes Mobile

Needless to say, after this incident I taught myself more about MS-DOS, Windows and OS/2 than I ever imagined I would want to know.   By 2001 I was working for Dell Consulting Services, designing Active Directory for a government client and riding the DC Metro system.  I was using my Windows CE-powered iPAQ PDA to read a whitepaper and then write down some notes in my action plan, while listening to U2’s Achtung Baby!  I noticed a girl on the train using one of those new iPod thingys.   I remember looking at it and thinking “that’s all it does? – play music with a dial?  People are excited about this?”

I was in the Windows world now and I decided that I wanted complex products to master, not something dumbed down for the masses with slick marketing campaigns.  I developed the impression that Apple products were both overpriced and over-hyped for what they offered.  To borrow a concept from Apple’s marketing, I wanted to be different.   As the music market evolved I ended up getting a Zune, which to this day I still find to have a much better interface and a great “all you can eat” pass which in my opinion far surpasses what iTunes has to offer.  And with Android phones like the HTC EVO taking off, there was no need to look at an iPhone either.

What changed things was noticing my daughter struggling with her homework a few months ago.  She had just had major surgery and then later experienced a slipped disc in her back as a result.   She was in significant discomfort and had to be carried from her bed to the couch.  Seeing her trying to get comfortable on the couch while working with a pen and paper made it clear to me that this was not going to work.

After a short conversation with my wife, I drove down to the local Apple store. While there were other options, the iPad had the most apps and would therefore have the best selection of math and spelling apps. Once in the store I was able to try an iPad for myself and browse the educational apps that were available. I quickly decided that this would help her and picked one up.

My iPad Experience

When I first started playing with the iPad, the realization came over me that this was very much like an Android phone but with a bigger screen. “What have I done?” I began to wonder, and grew concerned that I had wasted my money on a “phone” with an extra-large screen.  As I used it more, I slowly began to realize that the bigger screen is exactly what makes the tablet concept work so well.  It became an overnight hit in our household as we could use it for shopping, reading, email and more right from the couch. Unlike laptops, which are big and socially obtrusive, the tablet is not a big device that you hide behind, but a more social-friendly device that you can share with others.  The iPad is an instant-on experience, and one can quickly, easily and comfortably read email and order from Amazon while sitting on the couch.  We even found ourselves huddled around the iPad looking at cookie recipes to try as a family during the holidays.

How did it work for my daughter’s 4th grade schoolwork? Her teacher fully embraced it and actually incorporated it into her curriculum.  She had math and spelling tests and drills all set up on the iPad for her and every week I’d input her spelling words and tune the math apps for the specific math problems that were being covered.  It has been a hit with our family (and the teacher) well beyond what I had imagined (in a future post I’ll share some of my favorite iPad apps).

Now imagine the work experience.  Imagine being able to walk from conference room to cubicle with a small tablet rather than a laptop, a device capable even of being a VDI client, provisioning new servers or managing your virtual infrastructure.  The tablet fills a badly needed gap between the laptop and the smartphone, and really does change the way we can interact with information as well as empower us without having to carry a laptop around.

Now of course the iPad isn’t the only game in town — Android tablets are getting ready to make a splash (and Microsoft is trying to re-ARM Windows for tablets before the opportunity is lost).  A well-designed tablet will change and improve the way we work and play.  The tablet age is upon us; expect many new tablets to enter the market this year.

That’s enough of me talking.  I’ll let these few videos speak for themselves.  Here are videos of the vSphere client on the iPad, as well as VMware View, vCloud Request Manager and vFoglight on the iPad.

Oracle RAC on vSphere 4.1 (if you can virtualize this, you can virtualize anything!)

For those who have been following Blue Shift, it’s no secret that one of my favorite blogs is VMware’s VROOM!, and there are several reasons for this.

Many of us have been in a situation where someone has told us “you can’t virtualize that!” and often the only options are to either go ahead and demonstrate by attempting to virtualize it (often not an option), or to provide a real world case study of someone who has. Proving that applications can be virtualized is often critical for putting organizations into position to realize additional benefits of the private cloud including reduced OPEX and increased agility.

Oracle RAC is one of those virtualization holdouts for, I think, two key reasons.  The first is vendor support: while this has been an issue in the past, Oracle RAC is now supported under vSphere (that’s not to say that licensing and other improvements are not still needed). The second is that Oracle RAC is a large and often very intense workload, and the perception lingers in some circles that its performance would suffer too much if virtualized.

Frank Sinatra sings in “New York, New York” that “if you can make it here, you can make it anywhere”. Well to a large extent, if you can virtualize Oracle RAC, you can virtualize most any x86 workload. So how does Oracle RAC perform under vSphere 4.1? Let’s take a look at what the VROOM! Blog found.

For the test lab, two Oracle RAC configurations were used – one physical and one virtual. The exact same hardware was used in each configuration (the servers were dual booted between RHEL and ESX).

Physical Environment

Virtualized Environment

After some configuration, as detailed in the blog post, the testers found that the virtualized Oracle RAC instance performed 11 to 13 percent slower than the native physical environment.

If you think 11 to 13 percent is a lot, let’s pause for a second. These metrics were based on Orders Per Minute (OPM) as well as response time. For what percentage of the day are your Oracle RAC servers at 85% of OPM capacity? Chances are your servers (hopefully) aren’t running quite that hot. The CPU load of the physical hosts and the VMs was more than 95% during these tests. So when the statement is made that the virtualized environment was 11-13% slower, this performance difference only applies to a stress test (a CPU >95% scenario); normal operating performance will likely be significantly better.

But even if one assumes the worst case of 13% performance degradation, this would still be an acceptable trade-off in many datacenters to get to the point where the operational benefits of virtualization (DR, agility, private cloud) can be realized.
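To put the overhead discussion in perspective, here’s a quick back-of-the-envelope sketch. The 13% worst-case degradation is from the VROOM! stress test cited above; the peak-throughput and typical-load numbers are invented purely for illustration:

```python
# Back-of-the-envelope math for the virtualization overhead discussed above.
# The 13% worst-case degradation comes from the VROOM! stress test; the
# peak-throughput and typical-load numbers are invented for illustration.

physical_peak_opm = 10_000     # hypothetical native Orders Per Minute at saturation
worst_case_overhead = 0.13     # 13% degradation observed under >95% CPU load

virtual_peak_opm = physical_peak_opm * (1 - worst_case_overhead)
print(f"Virtualized peak: {virtual_peak_opm:.0f} OPM")   # 8700

# If the real workload only drives the system to, say, 40% of native peak,
# the virtualized environment still has ample headroom.
typical_demand_opm = 0.40 * physical_peak_opm            # 4000 OPM of actual demand
headroom = virtual_peak_opm - typical_demand_opm
print(f"Headroom at typical load: {headroom:.0f} OPM")   # 4700
```

In other words, the measured penalty only bites when you are already running the hosts flat out; at typical utilization levels both environments satisfy the demand with room to spare.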

Still not convinced? How about listening to a real live customer share their Oracle RAC on vSphere experiences? On January 25 VMware will be hosting a webcast where IPC (The Hospitalist Company) will share their experiences with successfully virtualizing Oracle RAC on vSphere and addressing issues such as support, performance, and cost savings.

There are very few x86 workloads that can’t be effectively virtualized, and Oracle RAC can certainly be virtualized effectively in many environments.

Top 10 Posts: Looking Back on 2010 and Forward to 2011

I thought it would be a fun exercise to take a look back at the past year, the most popular posts, and take a look forward as well, so here we go!


A few years ago I began the habit of reading the Planet v12n blogs every day (there were fewer back then!). I was leading a major effort in 2009 to build a new virtual infrastructure and migrate hundreds of servers to a new data center as a result of an acquisition, so I came across many interesting observations that I felt that I would like to blog about as well, but I just didn’t have the time back then (80+ hour weeks) to consider blogging. Many doubted that we could successfully virtualize and re-IP the complex applications, but we did!

Last summer I finally decided to start a blog and see what I could do as an experiment. I wanted to write technical articles about vSphere and virtualization based on what I had learned and was continuing to learn. In July my daughter was admitted into the hospital for major surgery and while there was much going on, being in a hospital 24/7 for several months still offers a great deal of idle time and one can only read so many books. I began to write articles and slowly started to get a few hits. VMworld was ramping up and I also wrote a post on a new wave of virtualization ROI and also on VMware’s evolution and changing value proposition. Around that time John Troyer added Blue Shift to Planet v12n.

As I started writing more, a big challenge was that I was no longer working with virtualization. An organizational alignment put me into a position where I was no longer involved in all the technical things I used to do like SANs, virtualization and more (the “Project Blue Sphere” project I mentioned earlier has been stalled).  The longer I had this “technical detachment” the more difficult it became to have technical observations to write about. Adding to this I was unable to attend VMworld, access the presentations, and I didn’t have the means to build a home lab either. What would I write about?

However, I was also pursuing my MBA (just 2 electives to go!) and I began learning more about cloud computing concepts and the Vblock, which really began to inspire me.   The value proposition of cloud computing and the Vblock became both clearer and more exciting.   I began with “Let Your Fast Zebras Run Free (with the Vblock)”, posted an introduction to a new Agility series, and then interviewed EMC’s Chad Sakac on the Vblock and value.

TOP 10 POSTS of 2010 (by # of hits)

It took me somewhat by surprise that some of my older posts were the most popular in terms of hits. Here are the 10 most popular posts for the year on Blue Shift:

  1. Why Disk Alignment is important (and how to fix a misaligned VM)
  2. Application Consistent Quiescing on vSphere 4.1
  3. Can your VM be restored? VSS and VMware — Part 2 (updated)
  4. Can your VM be restored? VMware and VSS — Part 1
  5. Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac
  6. Load Based Teaming in vSphere 4.1
  7. HBA Best Practices with vSphere 4.1
  8. Symantec ApplicationHA: High Availability for Virtualized Applications
  9. VM Backup Reference Architecture – Part 2: Beyond VADP
  10. Speed Up vSphere Client on Windows 7

It’s always interesting to see which posts get the most hits — some of my favorite posts (including “Fast Zebras” and other posts on vSphere performance) didn’t even make the top 30!  I suspect it has much to do with search engines, reader interests and other factors as well.


My professional and personal goals for the next year are similar in that I’m looking to push myself and be in a position to pursue excellence.

My professional goal is to put myself into a position where I am once again technically engaged with vSphere, storage, cloud, ROIs, business agility and more.  I will seek to position myself such that I am challenged and immersed with cloud, storage and virtualization which should lead to a very interesting year and perhaps even better blog content as well.  I am not sure exactly what form such a change may take but that in essence is my goal.  I’ll be resuming the Agility series and also looking to build an inexpensive home lab (hopefully).

My personal goals are to push myself in every area and explore my capabilities.  This includes starting P90X in January (BRING IT!) so that I can lose the 15 or so pounds I gained in 2010 (most of them from the extended hospital stay).  🙂  Just don’t ask for me to post “before” pictures 🙂

It’s been a very interesting 2010, and virtualization, VDI, cloud computing and Vblocks are poised to make a big impact in 2011 and it should be an exciting and action packed year!

Merry Christmas and Happy Holidays!

I’ve had no shortage of posts that I’ve wanted to write, including continuing the Agility series, but I’ve decided to put blogging on the back shelf during the holiday season.

My 9-year-old daughter had major, high-risk surgery over the summer; the surgery was a success and her long-term prognosis is very good.  So our family is just focusing on making this Christmas season a little bit extra special.

In March of this past year, the Make-A-Wish Foundation granted our daughter’s wish and sent us to Hawaii for an amazing vacation.  I still want to write a post about what a great organization the Make-A-Wish Foundation is and share some highlights of our trip, but I’ll save that for a future post.

Have a very merry Christmas and a wonderful holiday season with your friends and families!

Advanced vSphere 4 Troubleshooting by Eric Sloof

Eric Sloof has put together an excellent presentation on Advanced vSphere 4 Troubleshooting which he presented at the Dutch VMUG last week.  Just browsing through the slide-deck you’ll find an excellent summary of how to isolate and troubleshoot performance issues in many areas including:

  • CPU Troubleshooting (CPU Ready Time and more)
  • Memory (Page Sharing, Ballooning, limits, reservations, compression and swap)
  • Storage (SCSI Reservations, Block Alignment, vSCSIStats)
  • Network (dropped packets, NIC settings, VLANs, Load Based Teaming)

Trust me — you’ll want to check this out.  You can also access a PDF version here.

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac


It’s not always easy being a customer.  There’s a lot of noise in the marketplace and it’s not always clear to the customer which statements are relevant.  As we often see in public life, sometimes the most discussed issues are not always the most relevant or important.

There’s been a fair amount of noise lately coming from many directions on competing datacenter stacks, but looking past the shop talk and mud-slinging, what are the key issues for the customer?  For example, a customer may not be terribly concerned with a product’s value chain and how well it works in the sales channel, but will be very concerned with how it will perform and create value in their data center.

I had some questions of my own on various capabilities of different stacks, so I reached out to EMC’s Chad Sakac who offered to answer some questions from his position as the VP of EMC’s Technology Alliance.  Before we get to the discussion I wanted to take a quick moment to define “stack”.

What’s a Stack?

A stack is essentially four components designed to work together as a datacenter platform:

  • Storage
  • Networking
  • Servers (compute)
  • Management and Orchestration across the stack – integrated APIs and tools.

Some stacks are built around x86 virtualization (stack ends at the VM) while other stacks will consider x86 virtualization as being optional (stack ends at the physical server).

Customers have choice in that they can attempt to build their own stacks (where the customer owns the engineering burden) or they can elect to purchase a stack from a vendor.  Further complicating matters is that some vendor stacks might more closely resemble a reference architecture (existing components certified to work together in a specific configuration), while other stacks might possess characteristics of a more cohesive product.

There are several considerations inherent in these stack options which can impact both OPEX (Operational Expenses) and enterprise agility that a customer needs to consider such as:

  • Simplicity of procurement
  • Scope and terms of warranty
  • Single versus multiple points of support
  • Simplification of the stack’s lifecycle (deploying, maintaining, refreshing)
  • Elasticity of the stack (are you forced to buy more than you need?)

Vendor Stack Offerings

The Vblock is a stack consisting of EMC Storage, Cisco networking, and Cisco UCS servers virtualized with VMware vSphere, coupled with integrated management and orchestration.   Vblocks are manufactured & supported by VCE, a joint venture between EMC and Cisco (with minority investment from Intel and VMware), and sold via channel partners and Cisco and EMC.

Several hardware vendors are also putting together their own x86 virtualization stacks, including HP’s Matrix (CloudStart), IBM’s CloudBurst and Oracle’s Exalogic.  Like VCE and Vblocks, they represent a new model of technology acquisition, deployment and use. Also like VCE and Vblocks, the vendors continue to support the long-standing “mix and match” model of piece parts, based on the standards and interoperability that have existed for years in open systems.  Oracle’s Exalogic, however, has received criticism in some quarters for resembling a mainframe and lacking elasticity relative to other stacks.

The FlexPod offering by NetApp at first glance appears similar to the Vblock, as it uses the same components for servers and networking, but it utilizes NetApp storage and several third-party management and orchestration tools.  On closer look, however, it appears to resemble a reference architecture in the sense that assembly, procurement, support, orchestration, etc. seem to follow more of the “mix & match & assemble” model.  I reached out to EMC’s Chad Sakac to gain more insight from his perspective on how the various stacks…well…stacked up.

A Discussion With Chad Sakac

Q:  At the core, all stacks provide a platform for virtualization and reduce the engineering (integration) burden.  Is this the “least common denominator” which all stacks have in common?

A: That’s one of the “least common denominators”.  Here’s the list of what all “the stacks” offer in common: 1) an integrated product for hosting x86 workloads, reducing the integration and interoperability burden; 2) a new procurement model – the unit of acquisition/warranty is “integrated infrastructure”, not piece parts; 3) a single-vendor support model as opposed to multi-vendor joint escalation; 4) management tools and APIs that orchestrate across the stack.

Q:  Is this the original market need you were trying to fill with the Vblock and how have you seen that market evolve since then?

A: Yes, but it’s been a fascinating 2 years on our own VCE journey.

Those requirements and market needs have been surprisingly consistent – period.   But our early customer interactions were… well… different.   Those differences have changed HOW we’ve gotten to serving those consistent requirements over the last two years – VCE having been public for one year, but something we were discussing privately before that.

The first few customers who started to push us in this direction happened to be HUGE cases.  For perspective, they were asking for several hundred Vblocks in one go.   In those cases, the customer was very explicit – they wanted “private/internal clouds”, and wanted them NOW.   The other major difference was that they wanted it to be built by the provider, maintained and operated by the provider of the solution, and wanted to pay for it “by the VM used” – ergo, it looked exactly like an “internal” version of the external public cloud consumption models of things like Amazon EC2.     Most often, VCE was competing with IBM in those engagements.

Those initial requirements were the thing that necessitated the VCE joint venture – as there was no way for Cisco or EMC or VMware to offer what the customer was demanding independently on the support front.

Those initial customers also drove the need for one of the things the joint venture had to provide: Build/Operate/Transfer services.  Just think about it – early on, the model of “infrastructure as a service” was based on the type of technologies that Cisco, EMC and VMware support, but the way the customer wanted to consume/pay for it was something that few (HP, IBM and the largest Systems Integrators as examples) could provide.   When you have that model, “manufacturing” the Vblock wasn’t a big deal – as the VCE joint venture or an SI partner would do it for the customer on site – so the customer was cool with boxes of component parts arriving from multiple vendors and being integrated on site (as that was all invisible to them).

That last part – “operate it for me in a Build/Operate/Transfer model, and I will pay by the VM used” – has turned out to be something that IS NOT generally a requirement.

Since then, what we’ve seen at hundreds of customers is that they come in every size and shape, but the most common thing is they want the PRODUCT that accelerates them in offering their own Infrastructure as a Service (IaaS), not A SERVICE that is IaaS.   That meant we needed to change a few things:

  • VCE needed to continue down the “integrated support” model – that was something that was nailed early.
  • VCE needed to build real manufacturing lines – so that the product (a Vblock) arrives as a unit.
  • In turn that means that EMC and Cisco act, in every sense that matters as OEMs that provide parts to a vendor (in this case VCE) who manufactures a product.
  • VCE needed to build a model and human resources that supports the various ways that customers acquire Vblocks as a product – enabling channel partners and Cisco and EMC to be able to sell Vblock as a product and deliver integration services and value on top of Vblocks.

Here’s a quick analogy that makes what drives the customer requirements crystal clear:

“I can go to Fry’s Electronics, and build my own homebrew server out of any set of parts (motherboard, case, CPU, power supply, NICs, HBAs, whatever) – it’s the ultimate in flexibility as I can have any parts I want.  They are all interoperable and work with each other.  Of course, I’m going to spend time putting it all together (which I as a propeller-head like, but frankly is a waste of time), and boy, sometimes I’m going to get bit by “this doesn’t work right with that”.   OR, I can buy a server from HP, Dell, or IBM, where they OEM the parts, I go to a site where I pick the type of server (1U, 2U, 4U), pick my components from a pre-selected set that are pre-integrated/tested, and the whole thing arrives, and I rack and power it on.  Oh, and it comes with things like Insight Manager that lets me see/manage the server as a unit”.


“I can go to any server, network, storage and management tool vendor and build my own homebrew stack out of any set of parts (e.g. VMware, HP servers, Cisco switches, EMC or NetApp storage, whatever) – it’s the ultimate in flexibility as I can have any parts I want.  They are all interoperable and work with each other.  Of course, I’m going to spend time putting it all together (which I as a propeller-head like, but frankly is a waste of time), and boy, sometimes I’m going to get bit by “this doesn’t work right with that”.   OR, I can buy integrated infrastructure from VCE, where they OEM the parts, I go to a site where I pick the type of server (Vblock 0, Vblock 1, Vblock 2), pick my components (number of blade packs and the associated disk packs) from a pre-selected set that are pre-integrated/tested, and the whole thing arrives, and I rack and power it on.  Oh, and it comes with things like Unified Infrastructure Manager that lets me see/manage the server as a unit”.

If you look at that, it’s literally a search and replace of the two paragraphs 🙂

While TODAY the first kind (mix and match and build yourself) is the more common way of building x86 IT infrastructure – everything we see from the market, from customers, and from the technology roadmap suggests that a move to integrated infrastructure models over time (nothing happens overnight) is inevitable.  It’s perhaps one of the main ways we can make material progress against the “keeping the lights on” infrastructure spend and focus more time/money/resources on things that provide core business differentiation and value (which are applications, not infrastructure).

Q:  Several stacks see orchestration as another level in which to provide value.  For example, sometimes in IT we see delays created by the IT silos where the storage team asks “what was the zoning presentation again?” and the network team clarifies “which VLANs did you need on that interface?”.  It seems that stacks can help here, by creating a technical platform which — when properly utilized — can transcend the IT bureaucracy and thus improve agility.

Some stacks add vCenter plug-ins for orchestration, but the Vblock utilizes Unified Infrastructure Manager (UIM).  Exactly how do you see UIM providing value differently from the vCenter plugins used in some other stacks?

A: vCenter plugins don’t do end-to-end orchestration.   Let’s look at what we’re talking about.  What needs “orchestrating”, and what does “provisioning” look like on an integrated infrastructure product?

Well – you don’t provision storage, or networks, or servers.   You provision services.  An example of a service would be: “How much total capacity do I have in my infrastructure across pools of compute, networks, and storage?  OK, I have enough – please give me a bronze vSphere cluster and a gold vSphere cluster”.  Note – you DON’T say “I want that blade, and that network, and that storage”.  Then, you need to have the following happen: blades get assigned (including all firmware updates), networks get configured (for all network/storage needs), storage for boot/datastores gets configured and provisioned, and vSphere gets installed.

Then, you just consume the resources via vCenter or via a self-service portal like vCloud Director.

You need to have the ability to manage those services (change them, decommission them, check the SLA of the service) in an integrated way.  One also does the “macro” provisioning like creating clusters rarely (think weekly or monthly) but does the VM-level provisioning frequently (constantly).

Those capabilities need to also be accessible as an “integrated infrastructure API” (otherwise integration with other tools/frameworks still gets done at the individual elements of the integrated infrastructure stack – which in turn means you save nothing when integrating into a given customer environment).

That’s a BIG difference vs. vCenter plugins.   vCenter plugins rock, and every vendor (including Cisco and EMC – which we do) really need to have them in the “mix and match” model.  The core idea of vCenter plugins is “let the VMware admin see/provision my server/network/storage”.

BTW – while vCenter plugins are not analogs to Vblock UIM, there are analogous products.    Examples are HP Matrix’s suite of management and orchestration tools, NewScale, BMC and others.   These all use “modules” (in essence pre-built scripting to APIs) to orchestrate heterogeneous servers, networks, and storage.   You can also use some of those 3rd-party tools to “orchestrate” mix and match stacks (like a FlexPod, for example), though invariably, as people familiar with that model know, there’s always “some construction required” in scripting and integration modules – and that isn’t just to get it running up front, it’s required over the life of the stack as it gets updated/maintained.

One thing that crystallizes the critical idea that true integrated infrastructure is an atomic PRODUCT (not an assembly of elements) is that customers with existing orchestration tools like BMC Atrium use them WITH Vblocks – interacting with the UIM APIs directly to orchestrate Vblock services, not the “parts” of a Vblock.
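To make the “provision services, not elements” idea concrete, here’s a minimal sketch. The class and method names below are invented for illustration only; this is NOT the actual UIM API, just the shape of a service-level request versus element-by-element provisioning:

```python
# Hypothetical sketch of service-level provisioning as described above.
# The class and method names are invented for illustration; this is NOT
# the real UIM API, only the shape of the idea.

class IntegratedInfrastructureAPI:
    """One call provisions a *service*; the orchestrator handles the elements."""

    def provision_cluster(self, tier: str, hosts: int) -> dict:
        # The caller asks for a tiered vSphere cluster, not specific blades,
        # VLANs or LUNs; the orchestration layer performs each element step.
        steps = [
            f"assign {hosts} blades (firmware updated)",
            "configure networks for all network/storage needs",
            "provision boot LUNs and datastores",
            "install vSphere and form the cluster",
        ]
        return {"service": f"{tier} vSphere cluster", "steps": steps}


api = IntegratedInfrastructureAPI()
request = api.provision_cluster(tier="gold", hosts=4)
print(request["service"])          # gold vSphere cluster
for step in request["steps"]:
    print(" -", step)
```

The point of the sketch is the interface boundary: a vCenter plugin exposes one element (storage, network, or server) to the VMware admin, whereas an integrated-infrastructure API accepts a whole-service request and decomposes it internally.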

Q:  Some stacks have adopted a “confederated” support model where the customer decides which vendor to call (network, storage, etc), whereas with VCE there is a single entity supporting the product. Is this a big difference, and what have you heard from customers on the various support models?

A:  On this one, the customers I’ve talked to have been clear – for the product category of “integrated infrastructure”, they demand single end-to-end support.  That means explicitly “no multiple accountable parties”.  It helps to remember that we are talking about an atomic PRODUCT, not an ASSEMBLY of products.

Note, I’m not saying they don’t understand and appreciate that leading vendors can develop great joint support models.   These take the form where one of the entities can front a case, be the point accountable party, and do a good job with joint escalation.   Customers who choose VMware on Cisco UCS and EMC storage that they buy separately get this today – and we all do our best to service the customer.   This model has been around for eons.

The expectation for a product is that the vendor of the product provides the support.  PERIOD.   If the product claims to be an “integrated infrastructure stack”, then the vendor of that product supports the product.   It’s not a product if that statement is not true.

Look at the server example.  If you’re an HP customer – and you have an issue with their NIC, which unbeknownst to you is using a Broadcom chipset and has an issue – who do you call?   Easy.  Your expectation is simple – it’s an HP server, an HP part.   If you homebrewed your server, you would call Asus who made your motherboard, and Broadcom, who made the NIC.

It’s the same with the (new) category of “integrated infrastructure”.

Q:  When it comes to stacks are cloud service providers and in-house private clouds looking for the same things, or are they looking for different qualities in their stacks?

A: An uncharacteristically short answer for me:  generally the same things, but at different scales.   They also tend to have different economic model preferences (based on base load vs. unpredictable load).   Enterprises tend to want capital models (“I will pay you X for your Vblock”).  Service providers tend to want utility models (“I will pay you 1/10*X for your Vblock, but pay 1/10*110% every time I use 10% more”).

Though, it’s interesting to see that more and more enterprises are waking up to the upside of utility models as a blend with capital models (aka, “use capital models for your baseload you KNOW you will have, and use utility models for your unpredictable load as opposed to swagging your 3-5 year load and capitalizing the whole thing, because if you’re wrong, it sucks”).

Q:  In a previous post on Blue Shift there was a discussion about the pros and cons of different stack configurations. VCE has Vblocks for SAP and VDI, and NetApp appears to be looking at additional FlexPod designs. There seem to be opportunities to optimize a stack for specific workloads — how far do you see VCE and other vendors going with customized stacks?

A: We’ll produce reference architectures for use cases as far as the customers ask 🙂   There’s interesting demand for “SpringSource App Dev Vblock” and other funky ones.

One thing that’s important to note: the Vblock is a product – it’s not a case of a “VDI Vblock” (in other words, a DIFFERENT Vblock built for VDI), but rather “reference architecture where use case = VDI, products supporting it = VMware View or Citrix Xen + Vblock”.     The analogous statement for NetApp is “reference architecture where use case = VDI, products supporting it = VMware View or Citrix Xen + VMware vSphere + Cisco UCS + Cisco Nexus + NetApp FAS”.

Q:  Some have criticized the Vblock as being less open and less flexible than other stacks. What do you think this statement means and from the customer perspective is this a meaningful criticism?

A: Here’s the test of whether I’ve been sufficiently clear 🙂   That criticism is PARTIALLY CORRECT.

FIRST – let’s talk about “Less Flexible”:

You can choose VMware, Cisco UCS and EMC.   Or V, C, ___, or ___, C, E, or V, C , ____, or ANY VARIANT thereof.   In those vendor variants – you can have any configuration you want.   All three companies are open, innovate and create standards and comply with those innovated by others.   THAT’S NOT CHANGING – PERIOD.   Joe Tucci, John Chambers, and Paul Maritz have been clear on this – they call it the “mix and match” model (which is how the majority of how IT is done today).  I like to call it “homebrew”.

A Vblock is inherently more rigid than “homebrew”.   Every product is inherently rigid within the parameters of the product.

Think of it this way… Again, analogies are useful.   What is more flexible:

“Go to Fry’s Electronics and buy ANY motherboard, ANY CPU, ANY memory, ANY hard drive, ANY case, ANY networking kit – then come home, and assemble it”


“go to www.dell.com and order a PowerEdge C2100 server”

Clearly – the first one is “more flexible”.   So – why is it that almost no enterprise – even the smallest ones – buys servers that way? It’s because it’s a huge pain, and support/lifecycle would be a nightmare – just like the state of commodity open systems stacks today, which makes “keeping the lights on” 70% of the IT expense.   The emergence of “integrated infrastructure” products is a response to the customer/market realization that ALL the stuff (server, network, storage) underneath a virtualization stack is, at its core, a commodity hardware system – just like a server is.

In Integrated Infrastructure products, FLEXIBILITY (change anything you want, however you want) is traded off for AGILITY (gives me enough flexibility to meet a broad set of needs, but the vendor has made some hard choices to narrow variability to make it a PRODUCT).

SECOND – let’s talk about “less open”:

I’m going to repeat myself, because it bears repeating…   Remember, you can choose VMware, Cisco UCS and EMC.   Or V, C, ___, or ___, C, E, or V, C , ____, or ANY VARIANT thereof.   In those vendor variants – you can have any configuration you want.   All three companies are open, innovate and create standards and comply with those innovated by others.   THAT’S NOT CHANGING – PERIOD.   Joe Tucci, John Chambers, and Paul Maritz have been clear on this – they call it the “mix and match” model (which is how the majority of how IT is done today).

Vblock is ANOTHER CHOICE.  It’s another product category, though.   So the “openness” comparison is “does the choice of Vblock limit your future choices / lock you out of replacing Vblock with another integrated infrastructure product, like HP Matrix?”, just like in the “mix and match” model you could ask “does the choice of EMC storage limit you / lock you out of switching to HP/3PAR storage in the future?”.

The answer in both cases is black and white – NO.   The difference is “what would change” if you changed your strategy – let’s look at what you would need to change – comparing the “integrated infrastructure product” category and the “mix and match” category.

  • you change how you manage the integrated infrastructure (i.e. UIM vs. HP Matrix) vs. how you manage your storage (i.e. EMC Unisphere vs. HP/3PAR storage management tools)
  • you change any integration you built (i.e. UIM API vs. HP Matrix API) vs. the integration you did with your storage (i.e. Unisphere API and HP/3PAR APIs)
  • you migrate from one integrated infrastructure stack to another (i.e. vmotion/svmotion or migrate from your old integrated infrastructure product to your new one) vs. migrate from one storage platform to another (migrate your data from EMC to HP/3PAR).

You can of course move to, or away from integrated infrastructure at any point you want.


The topics of flexibility and lock-in are generally vendor FUD originating from those who don’t have a product offering in the “integrated infrastructure stack” category and only play in the “mix and match” product market.   If you’re them, you MUST say that integrated infrastructure stacks are bad.

Q:  Some non-VCE stacks are sometimes criticized as being a “reference architecture” as opposed to a “full product” like the Vblock.  This is easily said in 140 characters or less,  but it seems like there are some real issues here that could be quite significant to customers.  What’s your take on this?

A: I think we covered this.  In case my previous analogies failed, here’s another one.    A reference architecture is, in essence “take these ingredients, follow this recipe, and bake your own cake”.   A product is, in essence “go to the store, and buy a cake”.    One has infinite flexibility, one is about agility.   Both categories are valid, and are different.

VMware, Cisco and EMC work together on reference architectures all the time – and customers can mix and match how they see fit.  Heck, we all are open and support everything.  You want Hyper-V on UCS on EMC?  GREAT!  You want Citrix on HP on IBM storage?  GREAT!   You want Citrix on HP with NetApp?  GREAT!

Do you want a product?  Try a Vblock!

Q:  In most cases it would likely be quicker to develop a reference architecture than develop an entire product.  Do you see this as a potential disadvantage for the Vblock – having potentially longer development times?

A: That’s very astute.

A Vblock will ALWAYS take longer to develop than a reference architecture.   Even though the joint roadmap means the bulk of the integration/testing work occurs long before the components are revised – at a minimum, VCE needs to ramp manufacturing for a change.

It would be a negative thing only if VMware, Cisco and EMC did nothing but Vblock, and the only thing you could buy was a Vblock 🙂  If customers choose that they want mix and match – all the power to them, and we all fully support that.  You’ll get every technology the DAY it’s released if that’s the model you want, and you’ll learn the integration challenges at the exact same time that everyone else does.

The reality is that the time to develop and update the Vblock product is ALWAYS shorter than the “test/integrate” time that the customers are doing anyway with “mix and match models”.

Hence – that’s not a disadvantage for Vblock (or ANY integrated infrastructure product) – it’s an inherent part of their value proposition.

Q:  In addition to orchestration, in what other ways does the Vblock attempt to provide value, beyond other stack offerings?

A: If by “other stack offerings” you mean “integrated infrastructure general-purpose x86 offerings”, the competitors you’re talking about are HP Matrix and IBM Cloudburst.   In those cases:

  1. their orchestration tools are, in my opinion, less advanced than UIM, and are instead a whole whackload of unrelated (and not integrated) tools.
  2. Their integrated support is, in my opinion, surprisingly still disjointed relative to VCE support (has anyone called HP support, started with Proliant, then needed to be transferred to the storage team?).  I **think** this might be an artifact of the VCE support organization being built from the start to support the integrated infrastructure model, while the others, though always in one company, grew organically over time.
  3. VCE views the virtualization layer on top of the physical infrastructure as a CORE PART of the integrated infrastructure offering, not something you “put on top of it”.   That translates into the component parts of a Vblock being best of breed for VMware, and into the design of the Vblock itself (one example amongst many – there’s a core reason why “blade packs” come in units of 4, which is a function of fundamental vSphere behavior).

If by “other stack offerings” you mean “integrated infrastructure tightly coupled with the application software”, the competitor you’re talking about is Oracle ExaLogic:

  1. It seems (to me at least) that to Larry, “all software runs on ExaLogic” means “all Oracle software” – and he said as much at Oracle OpenWorld.   If you are mostly an Oracle shop from a software standpoint, then the logic of tightly coupling the infrastructure stack to the application makes some sense.   I find most customers have a wide variety of applications, and the difficulty of changing applications is insurmountable – so they look for infrastructure that is designed for general purpose use.
  2. Points 1, 2 from above still hold in this case too, but it’s notable that 3 does not.  Oracle ExaLogic has Oracle VM built in, just like Vblock has VMware built in.

I’m not aware of any other integrated infrastructure offerings – but would love to hear about any others.

Of course – I’m biased.   But, I’d encourage customers interested in integrated infrastructure products to challenge Vblock and its competitors to prove the statements I’ve made as they engage with you.

If you’re not interested in integrated infrastructure, but rather simply “recipes” (reference architectures) with “ingredients” (products) – then of course there’s an infinite number of players, and it’s a question of deciding which are your favorite ingredients, and then how you will integrate them best.

Q:  The VPLEX has some interesting storage technology, especially in a multi-site configuration. Can we expect to see future Vblocks using VPLEX technology to facilitate DR and other multi-site flexibilities?

A: Yes.

The Vblock, like any product, has a product roadmap.   That roadmap has the usual “faster/better/stronger/more efficient” stuff.  It has more and more around tight coupling, orchestration, end to end fault/root cause/correlation and more.   But, it also focuses on enabling new USE CASES.   The demands of Vblock and VCE are significant influences on the roadmap of the companies that provide constituent technologies into a Vblock.   So, expect to see things like Cisco’s DCI, EMC’s VPLEX and VMware vMotion enabling inter-Vblock and inter-site workload mobility as part and parcel of a Vblock sometime soon.

One interesting thing: a core design principle of a Vblock is to never let it become an unwieldy “Christmas Tree” of technologies from EMC and Cisco.  When I say “Christmas Tree” I mean it in the sense that Vblock gets decorated with EVERY ornament for everything under the sun from Cisco and EMC.   Vblock is fundamentally a general-purpose x86 integrated infrastructure product.  Putting TOO much on it means that updating and managing the lifecycle of the Vblock becomes increasingly impossible for VCE.   VCE is very, very focused on the target of being the best general-purpose x86 integrated infrastructure product.    This means that until technologies are easily integrate-able, VCE is very hesitant to add them (for example, OTV being N7K-only, or VPLEX requiring additional hardware elements, makes them harder to add without fundamentally changing the Vblock itself).

Q:  EMC has a deep portfolio of storage offerings, some of which could be relevant for certain virtual infrastructures, such as Data Domain as a backup target for just one example.  Can we expect to see other such solutions integrated into Vblock offerings, or will these be kept separate, allowing the customer to tailor their infrastructures?

A: Yes.

Today, backup solutions are part of the Vblock in one of two flavors.  Most customers consider backup to either be something that is done ON infrastructure, or as PART of infrastructure.

In the first case (“backups run ON infrastructure!”), customers typically want Vblock to integrate with their existing backup approach.   This is almost invariably either backups in guests (in which case, you just keep doing that) or SAN-based backups (which Vblocks can integrate with).    If they want to improve their backup targets, they can of course add Data Domain, but Data Domain isn’t part of the Vblock (remember – “Vblock is a product, an x86 general-purpose compute appliance”).

In the second case (“backups run as PART of infrastructure!”), customers add VMware-integrated array snapshots and Avamar.   Avamar is one of the world’s most popular backup applications for VMware environments for loads of reasons (source-based dedupe, vCenter integration, VADP support and more), and since on a Vblock it’s always VMware – it’s a good optional Vblock component.

Q:  Some organizations may find it necessary to restructure their silos and leverage virtual/cross-functional teams to effectively enable the benefits that are possible with a well-integrated stack.  Can you share any observations — internal or customer — on how organizations have restructured their teams to take full advantage of the Vblock for example?

A: Another astute observation.

One of the core premises of Vblock, or any integrated infrastructure product, is that in a virtual environment, the commodity servers, network, and storage platforms must operate as a tightly integrated and orchestrated system to support the virtualization layer.

This runs headlong into organizations that have “cylinders of excellence” or “silos that don’t talk to one another” 🙂   What we’ve been observing is that many orgs have started to realize that their old way (storage people who don’t talk to server people and both don’t talk to network people) is not a recipe for “cloud-like agility” 🙂   Almost everywhere, we find “virtualization teams” or “cloud infrastructure” teams emerging.  There is a common formula in enterprises:

  • Someone senior realizes that it is easily possible to create an agile, flexible and hyper-efficient “internal cloud”/IaaS model using their existing tech, but their org is part of the challenge.
  • They create this “ranger squad” to go off and build the new thing, unconstrained by the past model.
  • They put someone in charge who is smart, collaborative, and likes to try new risky things.
  • They pull some of the best people from the “cylinders of excellence” and put them on that team.
  • The team is told “build it, build it fast, and stick to IaaS principles”.

That is usually when the idea of a Vblock becomes very interesting.  I’ve also seen it at customers where the Vblock is used as a hammer to FORCE organizational change (less effective, but it works), where they say “we are collapsing old datacenters to one or more new ones.   The new ones will be built on Vblock/HP Matrix/IBM Cloudburst/HP POD.   Deal with it, or I outsource the whole thing to IBM”.

Often, the people (like me) who are very close to the technology don’t realize that Amazon EC2 has made people in senior parts of their org ask “I don’t get it – why are they able to be so agile, so elastic compared to me?”  Quickly the answer becomes obvious “they focus on just x86 workloads, they build it in very commodity building blocks, and they don’t have a ‘every app gets its own infrastructure’ model”.   They then realize “we can do that too for all our x86 workloads – which is a HUGE amount of our IT spend”.

Q:  At the end of the day it seems that there are several different stacks that can be used to build a strong private cloud platform.  NetApp’s FlexPod gets mentioned frequently partly because it uses many of the same hardware components as the Vblock (storage being the differentiating hardware component).  Customers have a great deal of choice here.  In choosing a stack, does a lot of it come down to what vendors, solutions and support models are the best fit for the organization?

A: ANY combination of servers, storage, and network can build a private cloud platform.   I can understand how people might think FlexPod and Vblock are similar, but they’re as different as the following… I’ll try another analogy, just for fun:

1.     V+C+E = me giving you all of the parts that make up a Honda Odyssey and the detailed instructions on how to put it together (VMware + Cisco + EMC).

2.     FlexPod = me giving you all of the parts that make up a Honda Odyssey, replacing the engine with the engine from a Toyota Sienna, and the detailed instructions on how to put it together (VMware + Cisco + NetApp).

3.     ____ + ____ + ____ = me giving you the body of one Minivan, the interior of another, and the drivetrain of another, with detailed instructions on how to put it together.

4.     Vblock = I give you a Honda Odyssey, and ask you what features you want, and which options.   You drive it off the lot.

The core decision is “do you want to mix and match”, or “do you want to buy a product that does that”.

If the answer is the former (“I want to mix and match”) – you’ll do SOMETHING like the following:

  • Examine VMware, Cisco and EMC.
    • You’ll see presentations and messages about why EMC works well with Cisco and VMware, why Cisco works well with EMC and VMware, presented by 3 companies.  There will be lots of claims of “uniqueness” which you should examine with a skeptical eye and challenge.
    • It will be coupled with the Reference Architecture (which is based on Vblock design principles), which is a very good reference architecture (“recipe”) of how to put Cisco UCS, VMware and EMC products (“ingredients”) together.   If you ask questions about what you can change, it will be “almost anything”
    • Determine what orchestration software you want from BMC, Newscale, CA, or anybody else, and how you will integrate it.
    • Be ready for the same support experience you are used to from 3 companies, regardless of “joint escalation agreements” (which all major technology players have had for years)
    • Determine the best channel to buy from – you will need to either issue 3 POs, or work with a system integrator, who will work to put it on a single quote, but it WILL arrive as a bunch of different parts that need to be put together by you or the system integrator, following the recipe.
  • Examine FlexPod.
    • You’ll see presentations and messages about why NetApp works well with Cisco and VMware, why Cisco works well with NetApp and VMware, presented by 3 companies.   There will be lots of claims of “uniqueness” which you should examine with a skeptical eye and challenge.
    • It will be coupled with the Cisco Validated Design, which is a very good reference architecture (“recipe”) of how to put Cisco UCS, VMware and NetApp FAS products (“ingredients”) together.   If you ask questions about what you can change, it will be “almost anything”
    • Determine what orchestration software you want from BMC, Newscale, CA, or anybody else, and how you will integrate it.
    • Be ready for the same support experience you are used to from 3 companies, regardless of “joint escalation agreements” (which all major technology players have had for years)
    • Determine the best channel to buy from – you will need to either issue 3 POs, or work with a system integrator, who will work to put it on a single quote, but it WILL arrive as a bunch of different parts that need to be put together by you or the system integrator, following the recipe.
  • Examine ____, _____, and _____.
    • You’ll see presentations and messages about why ____ works well with _____ and _____, why ____ works well with ____ and _____, presented by 3 companies.
    • It will be coupled some sort of documentation which will be some sort of reference architecture (“recipe”) of how to put _____, _____ and _____ products (“ingredients”) together.  If you ask questions about what you can change, it will be “almost anything”
    • Determine what orchestration software you want from BMC, Newscale, CA, or anybody else, and how you will integrate it.
    • Be ready for the same support experience you are used to from 3 companies, regardless of “joint escalation agreements” (which all major technology players have had for years)
    • Determine the best channel to buy from – you will need to either issue 3 POs, or work with a system integrator, who will work to put it on a single quote, but it WILL arrive as a bunch of different parts that need to be put together by you or the system integrator, following the recipe.

If the answer is the latter (“I want to buy a product that does that”):

  • Examine Vblock.
    • You’ll see presentations and messages that tend to focus less on the technology that composes the product (which is of COURSE pretty good – after all, it does very well in the “mix and match” market) and more on the PRODUCT.  It will be presented by one or more companies together, based on how they engaged with you and you engaged with them.
    • It is backed by VCE, which manufactures the product – Vblock.   If you ask questions about what you can change, it will be:
      • “there are three types of Vblock which represent different functions, scales, and capabilities” (just like with cars, there are hatchbacks, minivans, sports cars, and SUVs).
      • You’ll also hear “within each Vblock, you can add blade packs and disk packs within a min/max range”.   (just like with cars, there are “core choices” like getting a leather interior or a canvas interior – or a V6 or a V8).
      • And, you’ll hear “you can have optional things on a Vblock, like vCloud Director, or backup” (just like with a car, there are options beyond the “core choices” like “do you want GPS?”)
    • It comes with end-to-end orchestration for the integrated infrastructure product – UIM. There will be lots of claims of “end-to-endness” which you should examine with a skeptical eye and challenge.  Ask to see it.   Evaluate it.   Ask how it can be used with vCloud Director for self-service portal for end-user consumption (this idea is layered on EITHER the integrated infrastructure model, or the mix-and-match model).
    • Demand detail about integrated product support.  There will be lots of claims of “true integrated support” which you should examine with a skeptical eye and challenge.  How does it work?   How are cases handled?  Who do you call?  The answer should start with “always” and include “no transfers, single party accountable end to end”.   Unlike the “mix and match”, the answer is crystal clear, always.
    • Determine the best channel to buy from – EMC, Cisco, or your VCE-certified System Integrator.   No matter what, you will issue a single PO, and it will arrive as a single part.
  • Examine HP Matrix and IBM Cloudburst.
    • You’ll see presentations and messages that tend to focus less on the technology that composes the product (which is of COURSE pretty good – after all, it does relatively well in the “mix and match” market) and more on the PRODUCT.  It will be presented by one or more companies together, based on how they engaged with you and you engaged with them.
    • It is backed by HP or IBM, which manufactures the product.
    • It comes with end-to-end orchestration of some kind for the integrated infrastructure product. There will be lots of claims of “end-to-endness” which you should examine with a skeptical eye and challenge.  Ask to see it.   Evaluate it.   Ask how it is coupled with vCloud Director for self-service portal for end-user consumption (this idea is layered on EITHER the integrated infrastructure model, or the mix-and-match model).
    • Demand detail about integrated product support.  There will be lots of claims of “true integrated support” which you should examine with a skeptical eye and challenge.  How does it work?   How are cases handled?  Who do you call?  The answer should start with “always” and include “no transfers, single party accountable end to end”.   Unlike the “mix and match”, the answer is crystal clear, always.
    • Determine the best channel to buy from – HP/IBM, or your System Integrator.   No matter what, you SHOULD issue a single PO, and it SHOULD arrive as a single part.


Thanks again Chad for taking the time to answer these questions!  This was an informative discussion on stacks and value, although I must admit I’m a bit disappointed that you didn’t use the word orthogonal at all 🙂

Blue Shift just started a series on Agility (Part One here) and in future posts we will try to look at several of these issues — including stacks — from multiple vantage points.

Agility Part One — What Is Agility?


The New York Times had a dilemma.   They had identified a business opportunity, but in order to execute they had to convert 11 million articles to PDF files within 60 days.   How could they obtain the hardware and execute in time?   To the cloud!

The New York Times rented 100 computers on Amazon’s cloud and converted the 11 million articles from TIFF to PDF within 24 hours at a cost of $240.
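The pattern behind the Times’ feat – a large batch of independent file conversions fanned out across many workers – is easy to sketch. The snippet below is a hypothetical illustration, not the Times’ actual pipeline (they reportedly used Hadoop on EC2): the file names, worker count, and `convert_article` stub are stand-ins, but the fan-out structure is the same whether the workers are threads on one box or 100 rented machines.

```python
from concurrent.futures import ThreadPoolExecutor

def convert_article(path):
    """Convert one TIFF article to PDF (illustrative stub).

    In a real pipeline this step would render the TIFF pages into a
    PDF with an imaging library; here we only compute the output name.
    """
    assert path.endswith(".tiff")
    pdf_path = path[: -len(".tiff")] + ".pdf"
    # ... actual rendering of TIFF pages into pdf_path would go here ...
    return pdf_path

def convert_all(paths, workers=8):
    """Fan the independent conversions out across a worker pool.

    Because each article converts independently, the job parallelizes
    trivially -- the same reason it mapped so well onto rented cloud
    machines for the Times.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert_article, paths))

if __name__ == "__main__":
    articles = [f"article-{i}.tiff" for i in range(10)]
    print(convert_all(articles, workers=4))
```

The key property making the cloud attractive here is that the work is embarrassingly parallel: doubling the workers roughly halves the wall-clock time, so renting 100 machines for a day beats owning 10 machines for 10 days at the same total compute cost.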

Disney had 60 days to set up a farm of web servers to support a web broadcast of the popular “Camp Rock” movie and did so using VMware in a private-cloud like scenario:

“By taking a pool of equipment and dedicate it to the event, and move it around instead of having to go through a deployment and purchasing cycle makes us more agile…”

Other sites, including ABC news and ESPN are hosted out of the same facility, “so we were able to spread our load and use 25 different machines that weren’t at a peak time. Basically by doing that, we were able to hold the peak load and there were no incremental capital costs,”

ABC’s Dancing with the Stars also claims that they were able to reduce the time required to provision voting server infrastructure from 30 days to just one day using virtualization.

And how could we overlook this week’s news with Wikileaks – regardless of how we may feel about their actions – which used Amazon’s cloud in order to elude efforts to block access to their site.

Here we see examples of companies using both the public cloud (i.e. Amazon EC2) as well as the private cloud to execute their strategy as quickly as possible.

Success = Strategy + Execution

Success in business often comes down to crafting a solid business strategy and then executing that strategy.  Sometimes the strategy isn’t nearly as hard as executing it quickly and effectively.  Time is money, and opportunity often comes in a short window – once that time or opportunity is lost, it’s gone.  You simply can’t easily recover from lost time and opportunity in the marketplace.

Companies want to be first to market with a new offering (First Mover Advantage).  There are geo-political events and government regulations in a global economy which can create new responsibilities or opportunities.   There are competitors changing their offerings and new competitors looking to encroach on what you thought was your market.   Technology continues to change the rules of the game and what’s possible in many markets.  There’s change everywhere around us, and those who can adapt and execute quickly will be the most successful.

What is Agility?

One general definition of agility is the ability to change direction efficiently.  Cloud agility, I would suggest, is an IT infrastructure that can be used to facilitate a business response to changing conditions or opportunities.  This concept is also referred to as Infrastructure as a Service (IaaS).

Now I happen to think that the value of agility is profound, but not everyone has developed such an appreciation.  Somewhere outside Silicon Valley, several CIOs were recently observed attending a retreat, reciting a litany in repetition.  My Latin is a bit rusty, but if I’m not mistaken the words they are repeating in the video below are “IT is nothing more than a cost center…(thunk)…IT is nothing more than a cost center….”

To paraphrase Obi-Wan Kenobi, “These aren’t the monks you’re looking for”.  EMC’s vSpecialist team often refer to themselves as “Warrior-monks” and these would be the type of monk you should be seeing 🙂

All kidding aside, agility is kind of a big deal.  The traditional view of IT is as a cost center – a required expense of business that didn’t directly contribute to profit.  But what if your IT infrastructure can be empowered with agility such that IT becomes a business partner, finding ways to quickly execute the business strategy?

Bernard Golden, CEO of HyperStratus, discusses agility with the cloud:

The ability to surround a physical product or service with supporting applications offers more value to customers and provides competitive advantage to the vendor. And knowing how to take advantage of cloud computing to speed delivery of complementary applications into the marketplace is crucial to win in the future. Failing to optimize the application delivery supply chain will hamper companies as they battle it out in the marketplace.

We have prepared an agility checklist to help companies assess if they are suffering from an agility barrier. If you’d like to assess your own organization, you may download the checklist here (registration required).

Gartner has already identified their top 10 technologies for 2011 and at the top of the list was cloud computing.  And while there has been some debate in the past of how important First Mover Advantage is, I agree with Dave Chase in his Seattle PI article which makes the case that what’s really important is the “cloud mover advantage”.

Also for more background on the concept of using your IT infrastructure to execute your business strategy, take a look at the book “Enterprise Architecture as a Strategy:  Creating a Foundation for Business Execution“.  As the title explains, this book explores how your IT infrastructure — when properly designed — can be used as a platform to execute business strategy for success.  The concept is illustrated below and I will try to return to this in more detail in future posts:


I want to cover several different areas in this series, and while the line-up may change, right now it looks roughly like this:

Part 2: The Business Value of Agility

Part 3: Obstacles to Agility

Part 4: vCloud Director

Part 5: Stacks: Vblock and more

Part 6: Unified Infrastructure Manager (UIM)

In part 2 we will not only talk about value, but I’ll also try to define cloud computing and how it relates to both virtualization and agility.