Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it from not JUST a virtualization perspective, and not JUST a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but a more holistic perspective which considers all of this, as well

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert, I do think there is something special about this one.  This is a much more significant jump than going from 4.x

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you what’s all in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores more of like services (just keep in mind that several such services existed before it

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO” “This is a strategic initiative” “The ROI is compelling” “There’s funds in the budget” “Our competitors are doing it” Some of these are better reasons than others, but here’s a question.  Imagine a

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

Storage Trends Part 2 — 3D Chess

I’m going to do something a bit risky and perhaps crazy.  I’m going to perform a comparative analysis of various solutions in the storage market, and in the process risk starting a thousand vendor flame wars.

I hope it won’t come to this, and I don’t think it will, but why would one want to do this in the first place?  In this post I want to answer that question as well as “set the table” for the actual analysis by discussing a few more issues around this topic (which will hopefully make the large Part 3 that much smaller).

DISCLOSURE:  My current employer is both a NetApp Partner and a VMware Partner

Three-Dimensional Chess

The idea occurred to me last year after working with various storage technologies such as PernixData FVP and also researching other storage solutions ranging from all-flash arrays to hardware-independent solutions like VMware VSAN and more.  At first I wanted to write a blog post just on PernixData FVP (which I may still do), but when looking at the market and the new disruptive storage solutions it occurred to me that a new paradigm was forming.  In trying to find the optimal solution, a few things became clear to me.

One is that the storage feature set is increasingly provided within the software and not the hardware – and the industry is converging somewhat to a common feature set that provides value (see Part 1 on Optimization Defined Storage).

Building on the above, we also have a new wave of hardware-independent solutions, with perhaps the most notable being VMware VSAN.  Other trends include the increased use of flash storage, as well as moving storage closer to the CPU (server-side).

The more I looked at various solution offerings, the more the market looked to me like a three-dimensional chess board.  Vendor X might have a clear advantage on one board, but on another board (or dimension) Vendor Y stood out.  The more I studied the playing field, the more each chessboard represented a different value and/or performance proposition.  For example, one solution category offers better CAPEX advantages while another offers better OPEX benefits.

This is key – the point of this exercise for me is NOT to compare and contrast vendors, but rather to identify what categories they fall into and what the nature of their value and performance benefits are for any given environment.  The best solution will vary across budgets, workloads and existing storage investments – but by segmenting the market into different categories perhaps we can gain a better sense of where the different benefits exist, understand the optimal value proposition, and (hopefully) avoid comparing apples to oranges.

Or in other words, buying storage is much like buying a house or a vehicle.  The optimal solution for one family is going to be different for another.  Different budgets, different starting points and different needs.

There are a lot of exciting and disrupting changes taking place in the storage market.  2014 will be a very interesting year and we have more innovations on the way including leveraging host server memory, storage class memory and more.

Optimization Defined Storage (Storage Trends Part 1)

Recently I found myself engaged in a discussion on Twitter with @DuncanYB , @vcdxnz001 and @bendiq regarding Software Defined Storage (SDS) when I realized that our definitions might be approaching the issue from slightly different perspectives.  Therefore I decided it might be a prudent exercise to first explore these definitions so that we can have a common foundation to build from in this series. 

When I looked at the storage market I found 4-5 different categories of storage solutions that were all converging one way or another towards a common set of qualities and features.  I’ll be exploring these solutions in detail in part 2, but is this common set SDS or is it something else (and do we care)?

Whenever someone says they are going to define something it has an air of pretentiousness about it. Hopefully this post will not come across as such, as my intent is to provide a workable definition that can be used for a future post (Part 2).   As an engineer who continues to evolve, I reserve the right to modify my definition in the future 🙂  But for now let’s proceed….

Note:  In the spirit of full disclosure my current employer is a NetApp partner.

Do Definitions Matter?  Why Define SDS?

This really is a great question and you might be surprised at my conclusion – I don’t think the definition of SDS is terribly relevant for most of us.  There is an academic definition which we could debate endlessly along with “what is cloud” and “what is the meaning of life?” before it would change again in about two years — yet this definition might only offer limited value to those making technology decisions and investments.  In short, SDS does speak to a set of characteristics but not necessarily to value and efficiency in any given environment.

A few years ago I wrote a post which defined cloud computing as Abstraction, Automation and Agility.  By Abstracting from the hardware level, we were able to provide a new API and control plane which could serve as a platform for Automation. Then with the proper use of Automation in the organization, one could begin to achieve measures of Agility.

With Software Defined Storage or SDS I think the same paradigm can apply.  First we must offer a level of abstraction which enables us to do more with the storage.  On top of this abstraction we need to add services that provide value and efficiency such as caching algorithms, dedupe algorithms, data protection (RAID or erasure coding), storage tiering, instant clones (no copy on write), snapshots, replication services and more.  Then we must have an addressable API to leverage as a control plane from which we can manage, control and ultimately automate.
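
To make one of these value-add services concrete: the simplest form of parity-based data protection (the idea behind RAID-4/5 style striping) stores the XOR of the data blocks in a stripe, which is enough to rebuild any single lost block.  A minimal sketch in Python, purely illustrative and not any vendor’s implementation:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            result[i] ^= byte
    return bytes(result)

def parity(data_blocks):
    """Parity block for a stripe: the XOR of all its data blocks."""
    return xor_blocks(data_blocks)

def rebuild(surviving_blocks, parity_block):
    """Recover the single missing block: XOR the survivors with parity."""
    return xor_blocks(surviving_blocks + [parity_block])

stripe = [b"AAAA", b"BBBB", b"CCCC"]            # three data blocks
p = parity(stripe)                              # what RAID-4/5 would store
recovered = rebuild([stripe[0], stripe[2]], p)  # pretend block 1 was lost
assert recovered == stripe[1]
```

Erasure coding generalizes this idea, using codes such as Reed–Solomon to tolerate the loss of more than one block per stripe.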

Now many might just focus on a definition of SDS that simply revolves around abstraction and a control plane (RESTful API, etc.). For example, does SDS require deduplication?  I don’t think so; however, I do think it is one of several key value-providing features that the industry is trending towards.  Perhaps we need an expanded definition of SDS that focuses on value and efficiency – leveraging the SDS foundation to provide efficiency, agility and value.

Optimization Defined Storage

Optimization Defined Storage (ODS), then, could be a definition of SDS that focuses on efficiency and value.  ODS would be built upon an SDS foundation, and then enabled with additional capabilities to add value, efficiency and optimization such as:

  • Deduplication (increases storage efficiency and can also improve performance)
  • Caching and Storage Tiering (including flash)
  • Instant Clones and Snapshots (no copy-on-write)
  • Efficient Replication
  • Thin Provisioning
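
Of these, deduplication is perhaps the easiest to illustrate as a concept: blocks are content-addressed by a hash, so a block written many times is stored only once.  A toy sketch in Python (again illustrative only, not how any particular product implements it):

```python
import hashlib

class DedupeStore:
    """Toy content-addressed block store: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}   # digest -> block bytes (stored exactly once)
        self.volume = []   # logical volume: ordered list of digests

    def write(self, block: bytes):
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block)   # store only if new
        self.volume.append(digest)

    def read(self, index: int) -> bytes:
        return self.blocks[self.volume[index]]

    def physical_size(self) -> int:
        return sum(len(b) for b in self.blocks.values())

store = DedupeStore()
for block in [b"os-image"] * 3 + [b"user-data"]:
    store.write(block)
# Four logical blocks, but only two unique blocks consume space.
assert len(store.volume) == 4 and len(store.blocks) == 2
```

This is also why dedupe can help performance and not just capacity: if four logical blocks map to one physical block, a single cached copy serves all four.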

In an ODS solution, the software can work across many levels to optimize how data is compressed, deduped, cached and tiered.  One example is Nutanix’s Shadow Clone feature which combines cloning and caching functions by distributing “shadow clones” of volumes to be used as cache by additional nodes.

Another example is VMware’s VVOL initiative, which intends to somewhat shift the control plane such that the VM and VMDK characteristics will define the LUN rather than vice versa – as well as allowing the SAN to perform snap and clone operations against VMDKs rather than LUNs.

There’s much more that could be done in this SDS/ODS space (whatever you want to call it) and we haven’t gotten to object based storage yet.  The bottom line is that the ODS definition is focused on leveraging the SDS foundation to provide value, efficiency and optimization.

Relative Levels of Hardware Abstraction

Some questions/arguments I’ve heard tossed around include “is Nutanix really SDS if they sell hardware?”, and this argument also extends to NetApp.

Nutanix provides many features I would qualify as ODS – dedupe, instant clones and more.  These features are provided by the software but Nutanix has chosen to make their platform dependent on their hardware (which uses commodity components).  Because the product is sold as proprietary hardware does that mean it’s not SDS?

I will argue that Nutanix is still SDS despite this.  They made a business and product decision to offer their product as a hardware appliance to improve support and procurement, as well as to facilitate scale-out.  The SDS/ODS features provided are ultimately in the software and not in the hardware (commodity components).

Also, what about NetApp’s ONTAP platform – does this qualify as SDS?  I think it does.  NetApp has provided features like compression, deduplication, thin provisioning, efficient snaps, clones and replication for some time now.  Yes, this is baked into NetApp FAS arrays, but it’s really the ONTAP software platform that’s doing everything.  To take this a step further, let’s add the VSA (Virtual Storage Appliance), which allows you to front-end non-NetApp storage with the ONTAP platform.  Now let’s also add ONTAP Edge, which allows you to add ONTAP capabilities to storage using a VMware virtual appliance that is obviously hardware agnostic.  When you consider this full context, I think we can reasonably conclude that the “magic” happens within ONTAP (software) and that this is indeed Software Defined Storage.

Let’s Get Out of The Weeds

We could debate various definitions of SDS/ODS forever as an academic exercise, but this isn’t what I want to focus on.  My primary goal was to share my definition of SDS (and ODS) that I could leverage in my next post.  Part 2 will take a broader look at the storage industry and how various solutions are trying to approach SDS/ODS from various vantage points so that we can effectively compare them and understand how each provides value.  Some solutions will be more effective in one environment versus another.  Some solutions focus more on CAPEX while others focus more on OPEX.  We’ve only talked about Nutanix and NetApp so far, but I’d also like to talk about VMware VSAN, PernixData FVP, EMC ViPR and more.  Hopefully this post will make more sense when I get around to writing Storage Trends Part 2.

A Look Back at 2013 – Blog and Personal Notes

Looking back on 2013, my level of blogging as well as my participation (especially as a three-year vExpert) was far below where I would have liked it to be.  It seems to me that there are two simple reasons for this.


I don’t get much exposure to the latest technology.  In fact, virtualization and “cloud” (much to my dismay) have very little to do with my primary job, such that I spend less than 10% of my time in this space.  Most of my blog posts and tweets are not the result of hands-on experience but rather theory and observations based on what I can read in my spare time. I have almost no access to training, as there’s always too much work to be done and too little budget (one reason I don’t even have a VCP certification).

That’s not to say I have no exposure at all.  I do work with several Fortune 500 companies and I have insights into the inner workings of several organizations as well as my own.  I’ve also recently had the opportunity to work with PernixData’s FVP product for accelerating storage I/O operations, which I will review here at some point.


This seems to be the biggest problem.  Because of events put in motion as a result of my daughter’s medical history, I need to work two jobs to make ends meet (and they still don’t always meet).  For the past quarter I’ve been working 70-80 hours most weeks, including every weekend.

On top of this I’m trying to raise three children in what is a rather unusual and perhaps extreme housing situation.  Things like sleep, exercise and hobbies are luxuries.  I played soccer in college and to be so out of shape feels very alien and uncomfortable.

As anyone knows, you reach your peak performance with sleep, exercise and time for creative outlets, but there’s just very little of that available.  I could pen a litany of grievances, but the simple truth is that the alternative is my daughter wouldn’t be with us today.  I keep reminding myself of what I DO have and focus on the essentials I need to do for both them and myself. (In the 20 minutes I took to pen this, I’ve already been told I shouldn’t be doing this on New Year’s Day 🙂)


There are several blog posts I’ve written in my head over the past 6 months but haven’t had time to put into words.  I wanted to do a post-VMworld whiteboard, a look at the storage market and trends, and a look at SDN as well.  There’s a wave of disruption ready to be unleashed in the IT world in 2014.

I expect that 2013 was my last year as a vExpert (given my lack of activity), but I am still a big fan of VMware and am excited to see where they continue to evolve and add value with VSAN, NSX, management tools and more.  As it requires less time, I tend to tweet more than I blog, but it’s just not the same as laying out your thoughts in a more detailed and reasoned way in a blog post.  I do hope to write and tweet about this space at a more active level as time allows.

My personal goals are many, but I must temper them with the time and resources I have.  There’s a book (non-technical) that I’ve been wanting to write for over a decade, but that would take a year or more when I’m still struggling to find time for exercise and sleep.  At the same time I’m 44 and running out of time to do anything with my life.  In many ways I still feel like I’m trying to start a career as well as looking to actually do something with the thoughts in my head.

If nothing else 2014 is a blank slate and I’m excited about some new changes I anticipate in the industry and I hope that I am able to contribute and share in 2014 at a higher level than I’ve been able to this year.

Upgrading to vCenter Server 5.5 Using a New Server

You may find that you want to start looking at upgrading your vCenter Server to version 5.5 to take advantage of new capabilities, a faster and improved web interface and the ability to upgrade your hosts to ESXi 5.5.


But what if your current server is, for example, vCenter 5.1 on Windows 2008 R2 with SQL 2005?  You might want to take the opportunity to start clean on more current versions of Windows and/or SQL.  This article is a summary of a process that worked for me, as well as a few hurdles encountered along the way.

Before we begin, make sure you also consider the vCenter Server Appliance, which is a hardened, pre-built vCenter Server running on Linux. Some will still want to run vCenter Server on Windows, and if so, this post is for you.

This article also assumes SQL is running locally on the vCenter server.  If the database is remote, this article will still work except that either you will not need to move the database, or you’ll be moving it to a different server.

UPDATE:  As this process does not transfer the ADAM database, the existing security roles will NOT be migrated to the new server.  These roles will have to be manually rebuilt unless you want to try some scripts as discussed in this post.  Special thanks to Justin King ( @vCenterGuy ) for pointing out the issue and Frank Büchsel ( @fbuechsel ) for providing the link to scripting permissions import/export! 

Build The New Server

The new server should be Windows Server 2012, but NOT the R2 version, which is not yet supported. If you want to use a local database, go ahead and install the database at this point (we used SQL 2012).

SSL Certificates

We didn’t have custom SSL certificates but you will still need to transfer your SSL certs to work with your existing database.  When I got to installing vCenter Server in a later step I encountered this error and had to go back and grab the certs.

On the current vCenter server, you should be able to find the certificates in one of the following hidden directories:

  • For Windows 2008:
    C:\ProgramData\VMware\VMware VirtualCenter\SSL
  • For Windows 2003:
    C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\SSL

Copy everything in the SSL folder, then create the following directory on the new server and place the files there:

 C:\ProgramData\VMware\VMware VirtualCenter\SSL

For more information see the following KB article on certificate errors related to vCenter Server installation
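
If you’d like to script the copy, a small helper along these lines can do it.  This is only a sketch: it assumes you run it on the new server, and the \\oldvc hostname and administrative C$ share below are placeholders for however you reach the old server’s files:

```python
import os
import shutil

# Placeholder paths -- adjust for your environment.
SRC = r"\\oldvc\c$\ProgramData\VMware\VMware VirtualCenter\SSL"  # old server (hypothetical share)
DST = r"C:\ProgramData\VMware\VMware VirtualCenter\SSL"          # new server

def copy_ssl_folder(src: str, dst: str) -> int:
    """Copy every file in the SSL folder, creating the destination first.
    Returns the number of files copied."""
    os.makedirs(dst, exist_ok=True)
    copied = 0
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if os.path.isfile(path):
            shutil.copy2(path, os.path.join(dst, name))  # copy2 preserves timestamps
            copied += 1
    return copied

if os.path.isdir(SRC):  # only attempt the copy if the share is reachable
    print(f"Copied {copy_ssl_folder(SRC, DST)} certificate files")
```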

Transfer the Database (downtime begins)

Shut down the vCenter services so that we can transfer the database.  There are a few options here.  Our vCenter DB was about 30GB, so I simply did a detach and copied the DB files across the wire.  If you have SQL 2008 or later, you might want to take a compressed backup, or look at a tool like Red Gate or LiteSpeed which can compress your SQL backups into much smaller files to transport. You might also be able to detach the relevant VMDKs and attach them to the new server, allowing you to copy the files at disk speed.

Once you have the database running on the new server we can begin with the vCenter Server install.

vCenter Server Install

The first rule here is to use vCenter 5.5a (build 1378901), which fixes some authentication issues on Windows 2012 in some environments.  The second rule is to install the components one at a time. I prefer to be able to control each install individually, and I’ll address each component below.

vCenter SSO Install

When you install this you have the option to sync with an existing SSO server.  Since the only other server with SSO was the one we were going to retire, I chose the “first server in new site” option.  We will need to edit SSO later on to enable AD authentication, but not yet.

vCenter Web Server Install

When I first installed this component I got a 404 from the web server on each attempt.  As it turns out there is an issue described in this KB article such that the web server will return 404 errors when installed to a drive other than C. Normally I try to install everything I can to a non-C drive, but it seems that this component needs to be on the C drive to function properly.

vCenter Inventory Service and vCenter Server

These services are mostly straightforward installs.  If you copied the SSL certificates above, you should have no issues in this step.  You will have the option to have vCenter automatically attempt to connect to the hosts or to do it manually.  At this point vCenter Server should be working, but only local accounts might be able to log in.

To fix this, log in to the web UI for vCenter using either a local account or the SSO admin account and perform the following steps.

1) Browse to Administration > Sign-On and Discovery > Configuration in the vSphere Web Client.
2) On the Identity Sources tab, click the Add Identity Source icon.

Add the appropriate source type such as Active Directory and add it as one of the default domains. For more information see the following help chapter on setting default domains.

vCenter Update Manager (optional)

You should mostly be in business at this point but you may also want to install vCenter Update Manager.  With this step there are a few additional considerations.

First of all, you need to create a 32-bit DSN for the Update Manager database.  There’s a KB article here, but I think my method was quicker.  On the 2012 server, open up the Search charm, type “odbc” and press Enter.  You’ll see both the 32-bit and 64-bit versions of the ODBC configuration utility (the 32-bit version is C:\Windows\SysWOW64\odbcad32.exe).  Select the 32-bit utility and create your DSN, but…

Make sure you use the SQL Server 2008 R2 Native Client even if you are using a 2012 database.  As explained in this article, the vCenter Update Manager service will fail to start when using the 2012 Native Client.  Use the 2008 R2 Native Client against SQL 2012 and it will work fine.

That’s basically it.  To summarize take the following steps:

1) Build a new 2012 Server (not R2) and install SQL or another database

2) Copy the SSL certificates from the current vCenter to the new server

3) Shut down vCenter Server services

4) Take backups and/or snapshots as desired

5) Using the method of your choice, forklift the current vCenter database to the new server (if SQL is local)

6) Install SSO

7) Install vCenter Web Server to the C: drive

8) Install the Inventory Service and then vCenter Server

9) Log on to vSphere with the web UI and configure SSO to authenticate to your Active Directory domains and/or other sources as desired.

10) Manually reconnect to your ESXi hosts if you selected this option

11) Install Update Manager using a 32-bit DSN and the 2008 R2 Native SQL Client.

Now you’ve got vCenter 5.5 using your same database, but on a clean Windows 2012 server.  You’re ready to take advantage of the new features, ranging from the improved web interface and expanded OS support to the ability to update your hosts to ESXi 5.5.  Happy virtualizing!

VMworld Live Stream

Below is the feed for VMware’s Community TV.  I know some VMworld content will be streamed here, but I’m not sure at this point if the keynote (noon EST today) will be streamed here or not.

Should the keynote not be available at the feed below, you can register to view at vmware.com/now

Watch live streaming video from vmwarecommunitytv at livestream.com


Keeping Up with VMworld Remotely

Tech conferences are one of those things I’ve had mixed feelings about.  In the internet age, it’s an awful lot of time and expense to spend in order to be bombarded with marketing efforts.  However, if I were to make an exception, it might be for VMworld, for several reasons.

The technical breakout sessions look fantastic (and it’s hard to find the time to review the recorded sessions later) and it would be great to see what techniques and solutions are being successfully leveraged and how.  What are others doing for DR, cloud scenarios, automation and more?  What works and what doesn’t?  What should I be bringing to market and how?  I’m sure there are many side conversations beyond the keynotes that would be fascinating and instructive to hear.

On top of this I find it interesting that the theme for VMworld is “Defy Convention”.  At first this strikes one as curious because virtualization has already transitioned from being an outside disruptive technology to being a mainstreamed commodity.  I think it may have something to do with the following three areas which I expect to be a big theme this year:

  • Software Defined Networking (NSX and more)
  • Software Defined Storage (many vendors and new VSAN offering)
  • VMware Hybrid Cloud Service

All three are related, and all are highly disruptive to what is “safe” and/or “normal” in many IT shops, even where compute virtualization is used.

I’m extremely interested in many of the technical and even philosophical questions among these three areas, and I’d love to follow the conversation in the detail it will get at the conference.  But if, like me, you won’t be attending VMworld, how do you stay current with all of the discussions?

I’ve never been to VMworld, and I suspect there’s no substitute for being there, engaged in the conversations (I “know” so many people on social media I wish I could get the chance to meet), but there are still other ways to stay current on some of the discussions.


Some of the keynotes will be live streamed along with other events (a full broadcast schedule is available here).  I’m going to try to watch some of the keynote live streams, but I’m sure I won’t be able to give them my full attention.  Fortunately, the keynote presentations are often posted as recordings; if not that same night, they should be online by the next morning.

UPDATE:  Make sure you sign up at VMware NOW for Monday’s live events.

Also be sure to check out the vBrownbag live stream (thanks Cody!): http://professionalvmware.com/vbrownbags-live/


Twitter is a great tool to observe and participate in the conversations taking place online.  Search for the #vmworld hashtag and also set up columns in TweetDeck (or whatever you use) for areas you might be interested in, such as “SDN” and more.

I’ll be trying to share conversations and tweets that I find interesting on my Twitter feed, but make sure you follow the small army of vExperts and professionals who will be contributing to social media.  For more information on who and what to follow on Twitter, visit http://www.vmworld.com/community/twitter/#


I expect that there will be a great deal of blogging to capture in more detail some of the activity, including deep dives into new product offerings.  There is a small army of VMworld blogger contributors (including this one) which will be aggregated right there.  Just follow the feed on occasion and find the blog posts that interest you.


The technical breakout sessions are recorded and are usually posted a few weeks after VMworld.  You do, however, need a subscription (included with admission) to view the recordings.

I’m sure attending VMworld is a great experience and while it would be ideal to be there in person, you can still follow a good portion of the conversation from wherever you are.

Looking forward to some great announcements and knowledge sharing this year!


I remember as a teenager we were visiting some friends just outside Dallas, and at one point we went to the local mall and then stopped at a bookstore.  The bookstore was absolutely huge – bigger than anything else I had ever seen at the time.  I left with mixed emotions – awe and excitement at all of the reading choices, along with sadness.  Sadness, because I would never have the time to read half the books I had found interest in.

I’ve struggled lately with how to best optimize my time and I thought I’d take some time to rationalize through the choices and possibly even pass along some insights, thoughts or motivations along the way.

So the first priority is my job, which, including commuting, takes up 60-80 hours a week on average.  That makes boosting my skills and community participation a secondary priority.

Being a vExpert I have access to vSphere Cloud Suite for my home lab and also to Train Signal for Certifications (If only I had access to these when I was unemployed 2 years ago!).  The home lab is great for building my VMware/vCloud skills which is critical because I don’t get any opportunity at work to work with these technologies.

The only active certification I have now is NetApp (VMware is not an option due to the 5-day class requirement) so I could be using these resources to gain some Cisco, Microsoft or other certifications, or just learning new skills.  On top of this I’m a 3-year vExpert and I would like to continue to support the community with new blog posts and more.

Home lab, certification and blogging.  How much time have I spent in the last 3 months working on any of these?  Zero – which causes me a degree of discomfort, if not anxiety. Not only do I need to spend more time on each of these things, but I need to find the right balance of priorities among them.

As much as I want to correct this imbalance on my professional side, I also don’t want to lose sight of broader focus and goals.  Our lives in this world are so short, and chances are our work contributions won’t matter for much of anything 30, 50 or 100 years from now.  What can we do to make our lives more meaningful?

To me the answer comes down to two things – our children and the sharing and promotion of ideas.  The influences and values we instill upon our children will impact society – for better or for worse – after our time has passed.  Technology changes rapidly, but ideas – ideas can have a more profound effect.  Today we are still reading Plato’s Republic, Cicero, Machiavelli, Alexis de Tocqueville and many more.  Yes, technology can improve people’s lives, but there’s far more to building strong and stable societies than just this.

When I chose my classes in college I intentionally did not select any computer classes.  Admittedly there were some social reasons for this, but it was also because I knew that I didn’t want to spend my life working with JUST ones and zeros, but with ideas and concepts (of course, in the cloud era, I do wish I had more experience in programming).

On a more personal note, I’ve had ideas for writing a book for over a decade now (which would touch on philosophy, economics, politics, society and much more) but never had the time to start it.  Do I focus on my technical deficit and my career, or do I take a chance and work towards some more ambitious (and riskier) endeavors?

As useful as Twitter can be, it’s rather difficult to be profound in 140-character bursts – in a way the format almost seems designed for “snark” at times.  We might not all agree on any number of things, but if you’re like me, you know that you can write something well-reasoned such that people will say “I might not agree, but I can respect his opinion.”  I would love to take time to post on topics in a more reasoned format and provoke some serious discussions about a multitude of issues I don’t think get enough attention in our society.  Quick tangent here, but I made a blog post earlier this year on Ludwig von Mises‘ classic treatise, “Human Action”, where a key point I borrowed is that concepts which affect us all – economics, engineering or how to organize society – should be studied by all.  In fact, the ancient Greeks had a word for those who had no interest in public affairs: “idiōtēs”.

Long tangent….back to topic.  So I’ve got some difficult decisions between careers focused goals and some more ambitious goals in the pursuit of value and purpose.

Now I also need to find time for my 3 kids, who are growing up so fast, and to help them with their studies.  When we look back on our lives, chances are we will not wish we had worked on that project a little bit harder, put in more overtime or earned that extra certification (which will be obsolete in 5 years anyway).

So there’s no magic formula for this but somehow we have to strike a balance between our careers and related goals and developing relationships and memories with those close to us.

So…I have to choose how to spend what free time I have between improving my career, pursuing riskier and more ambitious propositions, and spending time with family and providing guidance to my children.  On top of this I'd really like to spend an hour a day working out and maybe some time doing something fun once in a while 🙂 .

If you’re like me and you’re on the younger side, you’re focused on your career which you should.  Be ambitious, curious and learn at every opportunity.  But at the same time life is short – make sure that you somehow find time for what is important to you.  What do you want to be your legacy?  What will you leave behind?

There’s no magic formula here beyond a few simple principles:

  • Work hard and be diligent
  • Decide what is important to you – and what kind of impact you want to leave behind on this world.
  • Find balance between your relationships and your career.  Both are important and neither can afford to be neglected.

This was just a quick, spontaneous blog post, structured largely around my own concerns about time, but hopefully others will find some value in even just thinking about time, your life, your priorities and your goals.  Many of us set objectives for our careers, but how many of us set objectives beyond this?

What are your career and life goals?  Discover them, prioritize them and pursue them while you still can, finding balance along the way.  Work hard, but stop every now and then to ask “what's important in the long run?” and focus on the impact we make on those around us along the journey.

Uncovering Value and Opportunity with Utility Computing

These are exciting times. Amid today's rapidly changing business climate and technology shifts, there are many new areas in which we can find value and opportunity. There are ways to change how we procure information technology, how we manage it, and how we consume it. If we were unable to change the features of our mobile phone service for the length of our contract, it probably wouldn't work out particularly well for us. We want – and sometimes need – to change the features of our phone service – minutes, data, voice mail and more – to accommodate needs which are constantly in flux. The same goes for cable or satellite TV, where we might want to change the channels we subscribe to from month to month.

What if we could buy iPads in the same manner? Turning fee-based features on and off as we need them, along with a built-in tech refresh (a new iPad) every year or two? Better yet, what if we could buy IT services and components like this? Oftentimes IT components are purchased on multi-year leases, but we can't easily change the properties of those components to accommodate what the business wants from IT and when they want it. As a result, the business has to wait, which seems a bit backwards. This is just one of the promises that utility computing holds for us – more flexibility in procurement; offering not just new financial flexibility, but allowing us to “dial up” what we need out of IT on demand in order to support the mission of the business.


Chances are that electricity and phone service are not what your organization excels at – so you contract out these services to experts who can offer them for a much lower price than you could achieve yourself. With the utility model of computing, similar economic factors are at work.

Running a datacenter and getting all those networking, computing, and storage layers to work well together – and then adding on disaster recovery and more – is neither inexpensive nor easy. But when these elements of IT are contracted out to experts who can operate at scale, per-unit costs will often decrease. Perhaps more importantly, it allows you to increase your focus on what truly matters – your applications, your data, your processes, and of course your business.


VMware introduced a paradigm shift in IT when it made x86 virtualization an effective solution. Now, instead of buying dedicated server hardware for each and every function, we can run multiple servers on the same hardware. The immediate impact of this server consolidation was a reduction in capital expense, or CAPEX – there were fewer physical servers, less power and heat, and more free ports and space in the datacenter.

But then something else started to happen. Because the servers were encapsulated in this software wrapper, IT departments found even more opportunities to save money in operational expenses, or OPEX. Servers could be provisioned in minutes, and new opportunities appeared in everything from monitoring to backups to disaster recovery, significantly improving operations, administrative overhead and the associated costs.

We believe that there are many new and emerging opportunities for OPEX improvement with utility and cloud computing models, where more IT elements can now be configured programmatically. This is what VMware refers to as the Software Defined Data Center (SDDC), in which more data center resources will be abstracted so that they can become programmable – even doing things such as provisioning a new multi-tier application with PCI compliance with just a few clicks.
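To make the idea of programmable infrastructure a bit more concrete, here is a minimal illustrative sketch in Python. The blueprint format and the `provision` function are hypothetical stand-ins of my own invention – not a real VMware or SDDC API – the point is simply that once infrastructure is abstracted, a multi-tier deployment becomes data that a program can act on:

```python
# Illustrative sketch only: a hypothetical SDDC-style "blueprint", not a real API.
# Infrastructure is described as data, and a program turns it into actions.
blueprint = {
    "name": "pci-web-app",
    "tiers": [
        {"role": "web", "count": 2, "network": "dmz"},
        {"role": "app", "count": 2, "network": "internal"},
        {"role": "db",  "count": 1, "network": "restricted"},
    ],
    "policies": ["pci-firewall", "encrypted-storage"],
}

def provision(bp):
    """Pretend-provision each tier; a real SDDC would call platform APIs here."""
    actions = []
    for tier in bp["tiers"]:
        for i in range(tier["count"]):
            actions.append(f"create {tier['role']}-{i + 1} on {tier['network']}")
    for policy in bp["policies"]:
        actions.append(f"apply policy {policy}")
    return actions

for action in provision(blueprint):
    print(action)
```

The same blueprint could be handed to an orchestration engine, versioned, or cloned for a new environment – which is exactly the kind of flexibility a hardware-bound datacenter can't offer.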


However, things REALLY start to get interesting when you've built up operational efficiency to the point where you can begin to position the organization for agility. At this point, IT and the business are working together as strategic partners to get projects launched and completed in less time and with strategic focus and improved efficiency. Oftentimes businesses know exactly what they want to do, but they fail in the execution as projects take too long and opportunities – and revenues – are lost. By evolving the IT organization beyond a cost center and positioning the business for agility, an entirely new level of value becomes available to be captured.


Some analysts like Forrester are predicting that in the coming decade IT departments as we know them will effectively disappear – the businesses themselves will directly design, build and consume IT systems – which is the way it should be.  Information Technology should exist to enable the business – not be a sub-bureaucracy within the organization which provides resources according to its internal budget rather than business value.

VMware ushered in new opportunities for CAPEX and OPEX improvement, and now new layers on top of that – including SDN – are ushering in the era of the Software Defined Data Center.  It's time to stop getting bogged down in specific systems and adopt a vision in which resources can be consumed as a utility and complement each other to return the best value.  It's time to rethink operations, high availability, backups, disaster recovery, storage – and of course how we consume infrastructure.


Using Google Cloud Storage with Veeam Cloud Edition

Veeam 6.5 Cloud Edition has many nice features.  It includes the award-winning Veeam Backup & Replication and then adds a second application to replicate those backups to offsite storage — such as Google Cloud Storage.

Veeam does a great job of performing disk-to-disk backups of virtual machines, but many will want and/or need an offsite copy of those VMs.  Sometimes this will even be required for various certifications and/or audits.  Veeam Cloud Edition can do exactly this, adding encryption to the backups as well.  No more Iron Mountain, tape exchanges, car trunks or whatever physical method you use to get your backups offsite — now your backups are simply replicated to the cloud storage provider of your choice.

Google is one of many such cloud storage providers, and their standard offering includes geo-redundant storage, which means far more durability than any mechanism of shipping tapes or disks offsite.

Being new to Google Cloud Storage, I went to set up the integration with Veeam and quickly got stuck trying to find the proper client key and secret key to use to access my storage bucket.  The provided help file seemed to reference a previous layout of Google Cloud's web pages, and I quickly found it somewhat useless (Veeam 7 is due this summer, which I assume will have updated documentation).  After some reading and experimentation I finally found where in the Google API Console I had to go to retrieve the proper keys for Veeam.

Once in the Google API Console, I eventually found the answer not in the API Access section, but in the Google Cloud Storage section.

At the bottom of this tab is “Interoperable Access”, which you must manually enable.  Once this is enabled you will see a new subtab under Google Cloud Storage for Interoperable Access, and this is where you will find the keys you need to provide to Veeam Cloud Edition.  Now you should be able to connect to your Google Cloud Storage buckets and start replicating your backups offsite.


The replication of the backups worked flawlessly, but the speed you get will be highly dependent on the quality of your connection to your cloud storage provider.  To walk through a quick restore scenario, I started with a small Windows 2012 virtual machine which Veeam Backup had compressed down to just 6GB.  I initiated a “Restore From Cloud” job, which pulled down the backup files in about 15 minutes.  Then I simply had to import the backup files into the catalog and set up the restore job (about 2 minutes combined).  In the restore job I chose Veeam's “Instant Recovery” option, which allows me to run the VM right from where it resides on the backup repository (I can always Storage vMotion it later).  In summary, it took about 20 minutes to pull a 6GB VM backup from cloud storage and have it powered up and running.
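For a rough sense of the bandwidth that implies, here's a quick back-of-the-envelope calculation based on those numbers – my own arithmetic (using binary megabytes), not anything Veeam reports:

```python
# Back-of-envelope throughput for the restore described above:
# a 6GB backup pulled down from cloud storage in roughly 15 minutes.
size_gb = 6
seconds = 15 * 60

size_mb = size_gb * 1024        # 6144 MB (binary megabytes)
mb_per_s = size_mb / seconds    # ~6.8 MB/s sustained
mbit_per_s = mb_per_s * 8       # ~55 Mbit/s of effective download bandwidth

print(f"{mb_per_s:.1f} MB/s (~{mbit_per_s:.0f} Mbit/s)")
```

So even this small restore assumed a healthy ~55 Mbit/s link; on a slower connection, or for a multi-hundred-gigabyte VM, the "Restore From Cloud" step would dominate the recovery time.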

If you’re interested in Veeam be sure to check out version 7 in a few weeks which will have even more features.

VMware vExpert 2013

I’m honored to join the ranks of the vExpert community for the 3rd consecutive year, which has now grown to a thriving 581 members.  I wanted to take a brief moment to share a few thoughts about the program.


What does it take to build such a community?  In the earlier part of my career I was an MCSE jumping on planes every week to work on Active Directory and Windows Server technologies.  The Microsoft MVP program has been around for a long time and is still highly regarded, but I honestly don't recall, either then or now, seeing the level of community and participation (at least from my vantage point) that the dynamic vExpert program displays.  I find this to be an interesting comparison.  To build such a strong, vibrant community you need several ingredients, including:

  • Technology that excites people – technology that provides real value and solutions, is never standing still (unlike, say, Novell NetWare), and is constantly improving and evolving.
  • A strong group of dynamic professionals to teach, lead, share and evangelize the solutions, methods and benefits.
  • And certainly not least – a strong leadership team at the core to bring the community together, and to develop and advance it.

The vExpert community is a special group for which I’ve seen no equal in the industry.  Great people and great solutions all around.


vExperts get some benefits, including NFR licenses for their home labs and access to beta programs.  This year a number of vendors are contributing benefits ranging from clothing, to training from TrainSignal, to product licenses.  A full list of offerings has been compiled at the vInfrastructure blog.


It is with a level of discomfort that I accept the vExpert 2013 award.  This award is based on 2012 activity, but for most of this year (2013) I would rate my community contribution as poor.  The biggest reason for this is time — I have plenty of ideas for blog posts on all manner of topics, from general cloud to VMware and much more, but I've been burning the candle at both ends lately (which makes the year go by fast).  Additionally, I've been working on VMware and cloud-specific technologies only a small fraction of the time now, so I'm not getting the exposure that I used to in these areas.

I know I have many more contributions in me for the community, but with the current demands of both work and family I'm not sure how many of them I'll get to.  I am confident, though, that to the extent I can find the time, I have plenty more to offer.

Congrats to all the vExperts out there, and big thanks to the community and its leaders (you know who you are!) for all that you do!

The Nokia Lumia 920 Experience

Recently I was given the opportunity to test drive Nokia's flagship phone – the Nokia Lumia 920 (Engadget Readers' Choice 2012) – and I thought I would share some highlights from the user experience.  My current phone is an HTC EVO 4G LTE, which is quite similar in specs, but for many functions I found myself favoring the Windows Phone 8 experience on the Nokia Lumia 920.


The first thing I noticed is that the Nokia Lumia 920 felt solid – it was a bit thicker and heavier than the EVO, but as I got used to it, I found that I didn't mind the added size (Nokia is now selling new models such as the 925 and 928 which are a bit lighter).

This page at TechCrunch shows detailed specs for both phones side by side.  They are quite similar in CPU; the Nokia has a slightly smaller screen (by .2 inches) but greater pixel density as a result.  The Nokia has a “super brightness” capability which can make it readable in bright sunlight even with sunglasses on – something I couldn't do on my EVO.  Each has its pros and cons, but I found myself willing to accept a .2” loss in screen size for the extra brightness.

As for the camera, the Nokia PureView lens is actually suspended in liquid to give it more protection from vibrations and movement.  Both phones have an 8MP camera and can take rapid-fire shots where you select the best picture (for the Nokia this requires the “Blink” app).  Both took excellent pictures, and I did not do any serious testing to the point where I could tell the difference.  Below is a picture I took with the Nokia at a circus, but don't rely on my photo skills – Nokia has an impressive collection of pictures taken by Nokia phone users here.


At the Circus (click to expand)
Taken with Nokia Lumia 920

Another hardware innovation available on the Lumia 920 is wireless charging.  My trial did not include a wireless charger, but the phone is ready for it out of the box.  At present the wireless charger sells for just under $50 on Amazon.


Truth be told, this was one of the fastest and most impressive user setups I've seen.  I added the SIM card, went through the setup, and provided my Windows Live ID (my hotmail.com address — hey, I was an early adopter).  Once I did so, it immediately pulled in information from my email account as well as contacts, music and more.  When setup was done, the Microsoft Office app caught my eye and I launched it – it automatically connected to SkyDrive and I was able to load the PowerPoint I had been working on that day – properly rendered – in a snap.  Music I had purchased/streamed from Xbox Music was also right there, ready to be streamed or downloaded.  I think the process took a mere 2 minutes to get set up.

Here some of the value of the Microsoft ecosystem becomes clear: SkyDrive, Office, email, music, Xbox and more working seamlessly across the PC and mobile worlds.


For me this is where the Nokia shined – when I had both phones available, I found myself preferring the Nokia for this reason.  I created Windows 8 Live Tiles for the 4 email accounts I use the most.  I could set different alerts (sound, vibrate, etc.) for each, and with just a glance at my home screen I could see which accounts had how many emails; more importantly, I could move between inboxes at the “speed of tap”.  The email UI is fast and responsive, such that I could quickly check multiple inboxes in seconds.  Doing the same on the HTC EVO (Android) just wasn't as fast or as seamless an experience for me.


The Live Tile experience is nice with many apps as well.  On my Android phone I have to launch various apps to see status, but with Live Tiles I can see updates from Skype, Box, The Weather Channel, Twitter and much more right on my home screen.  With Box, I can instantly see on my home screen updates about new files, or files being accessed and modified.  The SkyDrive and Office applications also give me about as good an experience viewing and editing Office documents on a phone as I could imagine.  For me, the rich Office experience combined with Box and SkyDrive and the rich email experience offers the most value in a work/productivity context.

The Lumia also came preloaded with many Nokia apps, including Nokia Music and the HERE series of apps (Maps, Transit, City Lens, and Drive).  The HERE apps (which I liked) were really useful for a quick answer to “what's around here for [dining/shopping/etc.]”, and the City Lens app gives you the option of using the Lumia's camera to overlay labels on the buildings around you.

Sometimes Windows phones get knocked for not having enough apps (as Android used to), but I wonder if those critics have looked at the app store lately.  Sure, there aren't 16 different versions of “Cupcake Maker”, but the apps I wanted were there, ranging from Netflix to Twitter, Pandora, Skype and much more.


In summary, the Nokia Lumia 920 worked so well for me that I quickly found myself favoring it over my Android-powered HTC EVO 4G LTE.  One of the biggest areas of utility for me was the rich, “speed of tap” email experience combined with SkyDrive, Office and Box.  The next time I find myself purchasing or upgrading a phone, I will definitely be seriously considering the Nokia Lumia offerings.

A Tale of Two Clouds (The Hybrid Cloud Is The New Normal)

It was the best of times, it was the worst of times, it was the peak of inflated expectations, it was the trough of disillusionment, it was an epoch of unicorns and rainbows, it was an epoch of engineers and managers looking at each other in bewilderment.  Is this a cloud I see before me?  Come, let me clutch thee – I have thee not, and yet I see thee still.  Et tu, Brute?

OK, enough with the Dickens-Shakespeare mashups, but I would like to talk about two islands – on-premises systems and public IaaS clouds – and how and why we might connect them.  Before we talk about how to connect these islands, let's first review why…


Today most organizations have an on-premises datacenter, upon which they might have a private cloud, or just a virtualization infrastructure, or…perhaps something else entirely.  What are some reasons they might want to move some things from this “private island” to a public island?  Is the public island cheaper?  Well, not always…

As Chad Sakac explains in this excellent post, the technology costs for public cloud often aren't any cheaper and can even be more expensive.  But there are also some variable costs here — the cost of your datacenter space, the cost of your infrastructure, power, cooling and the staff needed to maintain it.  When these costs are considered, it's possible that purchasing infrastructure as a utility might be less expensive for the organization.

But perhaps more important than cost at times, the public cloud is quick and easy to consume.  No lengthy procurement process, nothing to order, ship, deploy and configure — it's all there, ready to be consumed when you need it (in other words…agility).  Unless you already have spare on-premises capacity, it will usually be quicker to consume public IaaS resources.

Now, some business-critical workloads may not be candidates, due to security concerns (real or perceived), governance requirements, operations and many other reasons.  To back up this point, a recent survey posted at Gigaom revealed that 98% of surveyed IT executives plan on expanding their datacenters to run internal private clouds, with 61% citing security as the reason public clouds were not selected.

The private cloud simply isn't going anywhere anytime soon — a strategy for leveraging the utility computing model that focuses exclusively on workloads hosted on public IaaS misses the internal/private elements of the datacenter, which are most likely the larger piece of the pie.  To fully unlock the potential value of utility computing, both sides must be addressed and a strong bridge must be built to connect our multiple “islands”.

That leaves us with some good starting use cases for the public cloud — test/dev workloads, web tiers, seasonal capacity and new initiatives.  And for many environments this will be something less than half, or even less than one-third, of the datacenter.  So now we have two islands…how do we connect them?


So let's say you've got a VMware infrastructure in your on-premises environment and you want to consume IaaS from one of the many vCloud providers.  Well, you can start by using vCloud Connector, which can package up workloads and migrate them over to the hosted vCloud environment.  But this really isn't so much a bridge between your “islands” as it is an occasional ferry that can package up VMs as OVAs and transport them back and forth.  At some point we're going to need something more than this.

The Advanced version of vCloud Connector adds some valuable features, but it will not be available to VSPP (VMware Service Provider Program) partners until later this year.  The first feature is a Layer 2 VPN, which allows subnets to be spanned across your “islands”, making it no longer necessary to change the IP scheme when you move between them.  The second feature is content synchronization, which allows your VM templates (part of a provisioning service catalog) to be kept consistent across all of your islands.  Now we've just replaced our ferry with a small bridge between our islands.

As we look to the future, there's also more coming, both from the vCloud Suite and from VMware's upcoming NSX offering — Layer 2 VPN, VXLAN, and virtual firewalls.  Imagine if workloads could be moved between your islands, with IP addresses, routing and firewall rules all maintained during the migration.  Now our one-lane bridge just became a highway, as we truly begin to unlock the value of utility computing.

There’s some important lessons here for both service providers and IT organizations looking to unlock the value of utility computing.  Many of us have been focusing on public cloud (IaaS) but now we realize that this may be only viable for something less than 100% of all workloads — we may need to make investments in our on-premises environments to fully unlock the value.  There’s use cases for both private cloud and public cloud and not every workload is likely to fit in the same bucket.  This is where it becomes clear that hybrid cloud will be the new normal — and for many organizations this means making investments in their on-premises environment such as moving beyond mere virtualization to the vCloud Suite for example.


Let’s say your goal is to help your customers unlock the potential value inherent in the utility computing model.  You start selling hosted/public IaaS to your customers — which does have value and use cases — but now you’re limiting your scope to faction of the total pie — the workloads that meet the test/dev, web tier, seasonal capacity, new project criteria.  If you want to help your customers unlock the potential of utility computing you need to make investments in the on-premises environment as well.

Help your customers move beyond virtualization by offering converged infrastructure solutions like the FlexPod and the Vblock, powered by VMware's vCloud Suite, and perhaps adding orchestration, automation, service catalogs and more.  As you help them down this path, not only will you be unlocking the value of utility computing in their on-premises environment, but you'll be helping them build a better on-ramp and bridge to those vCloud-powered public clouds.  As new capabilities like vCloud Connector Advanced and VMware NSX become available, the ability to build a strong and sustainable bridge between these islands grows dramatically.  Perhaps more importantly, you are unlocking the value of the workloads that may be captive to the on-premises environment — perhaps more than half of the entire datacenter.

If I can stand on my soapbox for a minute here: I've always believed that the best sales approach is a strategic and consultative approach at the CxO level.  Talk about the full environment, the strategy and the opportunities for synergy.  The service provider needs to understand the customer's environment and needs, while the IT organization may need guidance on technology, trends and which strategies could be the most effective.  Contrast this with the traditional volume approach, where the focus is “what can I sell you this quarter so I can meet my numbers?”  Sales can enter the cloud era by focusing on long-term strategy rather than short-term volume — ultimately, in my opinion, this will lead to unlocking more value for both parties.  [Stepping down from my soapbox…]


Many IT organizations are at different steps along their evolutionary cloud journey.  Some have adopted virtualization, some have adopted private clouds, and others are at varying points along this spectrum.  If you’re running VMware — you may want to be looking at building up your private cloud with the vCloud Suite while at the same time you look into leveraging vCloud IaaS providers where it makes sense.

Look into converged infrastructure solutions; look into the vCloud Suite; look into automation and orchestration for your workloads.  Improve your posture for those workloads you don't expect to move to public IaaS providers in the coming years.  Unlock value and improve your foundation in order to build better bridges.

The hybrid cloud is likely to be the new normal — build up both your private and public clouds to unlock value and build strong bridges in between.  Seek out a service provider that can help you unlock the value of utility computing on BOTH of your “islands”.