Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it from not JUST a virtualization perspective, and not JUST a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but a more holistic perspective which considers all of this, as well


Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert I do think that there is something special about this one.  This is a much more significant jump than going from 4.x


vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you all that’s in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in


Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.


VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to


What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores more of like services (just keep in mind that several such services existed before it


Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO” “This is a strategic initiative” “The ROI is compelling” “There’s funds in the budget” “Our competitors are doing it” Some of these are better reasons than others, but here’s a question.  Imagine a


Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….


Upgrading to vCenter Server 5.5 Using a New Server

You may find that you want to start looking at upgrading your vCenter Server to version 5.5 to take advantage of new capabilities, a faster and improved web interface and the ability to upgrade your hosts to ESXi 5.5.


But what if your current server is, for example, vCenter 5.1 on Windows 2008 R2 with SQL 2005?  You might want to take the opportunity to start clean on more current versions of Windows and/or SQL.  This article is a summary of a process that worked for me, as well as a few hurdles encountered along the way.

Before we begin, make sure you also consider the vCenter Server Appliance, which is a hardened, pre-built vCenter Server running on Linux. Some will still prefer to run vCenter Server on Windows, and if so, this post is for you.

This article also assumes SQL is running locally on the vCenter server.  If the database is remote, the process still applies, except that you either won’t need to move the database or will be moving it to a different server.

UPDATE:  As this process does not transfer the ADAM database, the existing security roles will NOT be migrated to the new server.  These roles will have to be manually rebuilt unless you want to try some scripts as discussed in this post.  Special thanks to Justin King (@vCenterGuy) for pointing out the issue and Frank Büchsel (@fbuechsel) for providing the link to scripting permissions import/export!

Build The New Server

The new server should be Windows Server 2012, but NOT the R2 version, which is not yet supported. If you want to use a local database, go ahead and install the database at this point (we used SQL 2012).

SSL Certificates

We didn’t have custom SSL certificates, but you will still need to transfer your existing SSL certs so the new install can work with your existing database.  When I got to installing vCenter Server in a later step I encountered this error and had to go back and grab the certs.

On the current vCenter server you should be able to find the certificates in the following hidden directory:

  • For Windows 2008:
    C:\ProgramData\VMware\VMware VirtualCenter\SSL
  • For Windows 2003:
    C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\SSL

Copy everything in the SSL folder, create the following directory on the new server, and place the files there:

 C:\ProgramData\VMware\VMware VirtualCenter\SSL

For more information, see the following KB article on certificate errors related to vCenter Server installation.
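If you’d rather script the copy than click through hidden folders, something like the following works. This is a sketch only: the old server’s name and the use of the administrative C$ share are assumptions, and the source path shown is the Windows 2008 location.

```powershell
# Run from the NEW server in an elevated PowerShell session.
# $oldServer is a placeholder - substitute your current vCenter server's name.
$oldServer = "old-vcenter"
$src = "\\$oldServer\c$\ProgramData\VMware\VMware VirtualCenter\SSL"
$dst = "C:\ProgramData\VMware\VMware VirtualCenter\SSL"

# Create the destination directory (ProgramData is hidden, but that's fine)
New-Item -ItemType Directory -Path $dst -Force | Out-Null

# Copy everything in the SSL folder across
Copy-Item -Path (Join-Path $src '*') -Destination $dst
```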

Transfer the Database (downtime begins)

Shut down the vCenter services so that we can transfer the database.  There are a few options here.  Our vCenter DB was about 30GB, so I simply did a detach and copied the DB files across the wire.  If you have SQL 2008 or later you might want to take a compressed backup, or look at a tool like Red Gate or LiteSpeed, which can compress your SQL backups into much smaller files to transport. You also might be able to detach the relevant VMDKs and attach them to the new server, allowing you to copy the files at disk speed.
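For the detach/copy/attach route, the SQL side can be driven from the command line. A minimal sketch, assuming the default database name VIM_VCDB and example file paths (substitute your own):

```powershell
# On the OLD server: kick off connections and detach the vCenter database
sqlcmd -S localhost -E -Q "ALTER DATABASE VIM_VCDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; EXEC sp_detach_db 'VIM_VCDB';"

# ...copy the .mdf and .ldf files to the new server, then attach them there
# (the D:\SQLData paths are examples - use your actual data/log locations)
sqlcmd -S localhost -E -Q "CREATE DATABASE VIM_VCDB ON (FILENAME='D:\SQLData\VIM_VCDB.mdf'), (FILENAME='D:\SQLData\VIM_VCDB_log.ldf') FOR ATTACH;"
```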

Once you have the database running on the new server we can begin with the vCenter Server install.

vCenter Server Install

The first rule here is to use vCenter 5.5a (build 1378901), which fixes some authentication issues on Windows 2012 in some environments.  The second rule is to install the elements one at a time. I prefer to be able to control each install individually, and I’ll address each component below.

vCenter SSO Install

When you install this component you have the option to sync with an existing SSO server.  Since the only other server with SSO was the one we were going to retire, I chose the “first server in new site” option.  We will need to edit SSO later on to enable AD authentication, but not yet.

vCenter Web Server Install

When I first installed this component I got a 404 from the web server on each attempt.  As it turns out, there is an issue described in this KB article such that the web server will return 404 errors when installed to a drive other than C:. Normally I try to install everything I can to a non-C drive, but it seems that this component needs to be on the C: drive to function properly.

vCenter Inventory Server and vCenter Server

These services are mostly straightforward installs.  If you copied the SSL certificates above, you should have no issues in this step.  You will have the option to have vCenter automatically attempt to connect to the hosts or to do it manually.  At this point vCenter Server should be working, but only local accounts might be able to log in.

To fix this, log in to the vCenter web UI using either a local account or the SSO admin account and perform the following steps:

1. Browse to Administration > Sign-On and Discovery > Configuration in the vSphere Web Client.
2. On the Identity Sources tab, click the Add Identity Source icon.

Add the appropriate source type, such as Active Directory, and add it as one of the default domains. For more information, see the following help chapter on setting default domains.

vCenter Update Manager (optional)

You should mostly be in business at this point but you may also want to install vCenter Update Manager.  With this step there are a few additional considerations.

First of all, you need to create a 32-bit DSN for the Update Manager database.  There’s a KB article here, but I think my method was quicker.  On the 2012 server, open the Search charm, type “odbc” and press Enter.  You’ll see both the 32- and 64-bit versions of the ODBC configuration utility.  Select the 32-bit utility and create your DSN, but…

Make sure you use the SQL 2008 R2 Native Client even if you are using a 2012 database.  As explained in this article, the vCenter Update Manager service will fail to start when using the 2012 Native Client.  Use the 2008 R2 Native Client against the 2012 SQL database and it will work fine.
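On Windows Server 2012 you can also create the DSN from PowerShell with the built-in Add-OdbcDsn cmdlet instead of hunting down the 32-bit ODBC utility. A sketch: the DSN name, server and database values below are examples, but note the driver name, since the SQL 2008 R2 native client registers as “SQL Server Native Client 10.0”.

```powershell
# Create a 32-bit System DSN for Update Manager using the 2008 R2 native client
Add-OdbcDsn -Name "VUM" -Platform "32-bit" -DsnType "System" `
    -DriverName "SQL Server Native Client 10.0" `
    -SetPropertyValue @("Server=localhost", "Trusted_Connection=Yes", "Database=VUM_DB")
```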

That’s basically it.  To summarize, take the following steps:

1) Build a new Windows Server 2012 server (not R2) and install SQL or another database

2) Copy the SSL certificates from the current vCenter to the new server

3) Shut down the vCenter Server services

4) Take backups and/or snapshots as desired

5) Using the method of your choice, forklift the current vCenter database to the new server (if SQL is local)

6) Install SSO

7) Install the vCenter Web Server to the C: drive

8) Install the vCenter Inventory Service and then vCenter Server

9) Log on to vSphere with the web UI and configure SSO to authenticate to your Active Directory domains and/or other sources as desired

10) Manually reconnect your ESXi hosts if you selected this option

11) Install Update Manager using a 32-bit DSN and the 2008 R2 Native SQL Client

You’ve now got vCenter 5.5 using your same database, but on a clean Windows 2012 server, and you’re ready to take advantage of the new features, ranging from the improved web interface to expanded OS support and the ability to update your hosts to ESXi 5.5.  Happy virtualizing!

VMworld Live Stream

Below is the feed for VMware’s Community TV.  I know some VMworld content will be streamed here, but I’m not sure at this point if the keynote (noon EST today) will be streamed here or not.

Should the keynote not be available at the feed below, you can register to view it at vmware.com/now

Watch live streaming video from vmwarecommunitytv at livestream.com


Keeping Up with VMworld Remotely

Tech conferences are one of those things I’ve had mixed feelings about.  In the internet age, they demand an awful lot of time and expense just to be bombarded with marketing efforts.  However, if I were to make an exception it might be for VMworld, for several reasons.

The technical breakout sessions look fantastic (and it’s hard to find the time to review the recorded sessions later), and it would be great to see what techniques and solutions are being successfully leveraged and how.  What are others doing for DR, cloud scenarios, automation and more?  What works and what doesn’t?  What should I be bringing to market and how?  I’m sure there are many side conversations beyond the keynotes that would be fascinating and instructive to hear.

On top of this, I find it interesting that the theme for VMworld is “Defy Convention”.  At first this strikes one as curious, because virtualization has already transitioned from being an outside disruptive technology to being a mainstream commodity.  I think it may have something to do with the following three areas, which I expect to be big themes this year:

  • Software Defined Networking (NSX and more)
  • Software Defined Storage (many vendors and new VSAN offering)
  • VMware Hybrid Cloud Service

All three are related, and all are highly disruptive to what is “safe” and/or “normal” in many IT shops, even where compute virtualization is used.

I’m extremely interested in many of the technical and even philosophical questions among these three areas, and I’d love to follow the conversation in the detail it will get at the conference.  But if, like me, you won’t be attending VMworld, how do you stay current with all of the discussions?

I’ve never been to VMworld, and I suspect there’s no substitute for being there and engaged in the conversations (I “know” so many people on social media whom I wish I could get the chance to meet).  Still, there are other ways to stay current on some of the conversations.


Some of the keynotes will be live streamed along with other events (a full broadcast schedule is available here).  I’m going to try to watch some of the keynote live streams, but I’m sure I won’t be able to give them my full attention.  Fortunately, the keynote presentations are often posted as recordings; if not that same night, they should be online by the next morning.

UPDATE:  Make sure you sign up at VMware NOW for Monday’s live events.

Also be sure to check out the vBrownBag live stream here (thanks, Cody!): http://professionalvmware.com/vbrownbags-live/


Twitter is a great tool for observing and participating in the conversations taking place online.  Search for the #vmworld hashtag, and also set up columns in TweetDeck (or whatever you use) for areas you might be interested in, such as “SDN” and more.

I’ll be trying to share conversations and tweets that I find interesting on my Twitter feed, but make sure you follow the small army of vExperts and professionals who will be contributing to social media.  For more information on who and what to follow on Twitter, visit http://www.vmworld.com/community/twitter/#


I expect there will be a great deal of blogging capturing some of the activity in more detail, including deep dives into new product offerings.  There is a large roster of VMworld blogger contributors (including this one) whose posts will be aggregated there.  Just follow the feed on occasion and find the blog posts that interest you.


The technical breakout sessions are recorded and are usually posted a few weeks after VMworld.  You do need a subscription (included with admission) to view the recordings, however.

I’m sure attending VMworld is a great experience, and while it would be ideal to be there in person, you can still follow a good portion of the conversation from wherever you are.

Looking forward to some great announcements and knowledge sharing this year!


I remember as a teenager we were visiting some friends just outside Dallas, and at one point we went to the local mall and then stopped at a bookstore.  The bookstore was absolutely huge – bigger than anything else I had ever seen at the time.  I left with mixed emotions: awe and excitement at all of the reading choices, along with sadness.  Sadness, because I would never have the time to read half the books I had found interest in.

I’ve struggled lately with how to best optimize my time and I thought I’d take some time to rationalize through the choices and possibly even pass along some insights, thoughts or motivations along the way.

So the first priority is my job, which, including commuting, takes up 60–80 hours a week on average.  That makes boosting my skills and community participation a secondary priority.

Being a vExpert, I have access to the vSphere Cloud Suite for my home lab and also to TrainSignal for certifications (if only I had access to these when I was unemployed 2 years ago!).  The home lab is great for building my VMware/vCloud skills, which is critical because I don’t get any opportunity to work with these technologies at work.

The only active certification I have now is NetApp (VMware is not an option due to the 5-day class requirement), so I could be using these resources to gain some Cisco, Microsoft or other certifications, or just to learn new skills.  On top of this, I’m a 3-year vExpert and would like to continue to support the community with new blog posts and more.

Home lab, certification and blogging.  How much time have I spent in the last 3 months working on any of these?  Zero – which causes me a degree of discomfort, if not anxiety. Not only do I need to spend more time on each of these things, but I need to find the right balance of priorities among them.

As much as I want to correct this imbalance on the professional side, I also don’t want to lose sight of broader goals.  Our lives in this world are so short, and chances are our work contributions won’t matter much 30, 50 or 100 years from now.  What can we do to make our lives more meaningful?

To me the answer comes down to two things – our children and the sharing and promotion of ideas.  The influences and values we instill in our children will impact society – for better or for worse – after our time has passed.  Technology changes rapidly, but ideas – ideas can have a more profound effect.  Today we are still reading Plato’s Republic, Cicero, Machiavelli, Alexis de Tocqueville and many more.  Yes, technology can improve people’s lives, but there’s far more to building strong and stable societies than just this.

When I chose my classes in college I intentionally did not select any computer classes.  Admittedly there were some social reasons for this, but it was also because I knew that I didn’t want to spend my life working with JUST ones and zeros, but with ideas and concepts (of course, in the cloud era, I do wish I had more experience in programming).

On a more personal note, I’ve had ideas for writing a book for over a decade now (which would touch on philosophy, economics, politics, society and much more) but have never had the time to start it.  Do I focus on my technical deficit and my career, or do I take a chance and work towards some more ambitious (and riskier) endeavors?

As useful as Twitter can be, it’s rather difficult to be profound in 140-character bursts – in a way the format almost seems designed for “snark” at times.  We might not all agree on any number of things, but if you’re like me you know that you can write something well-reasoned such that people will say, “I might not agree, but I can respect his opinion.”  I would love to take time to post on topics in a more reasoned format and provoke some serious discussions about a multitude of issues I don’t think get enough attention in our society.  Quick tangent here, but I made a blog post earlier this year on Ludwig von Mises’ classic treatise, “Human Action”, where a key point I borrowed is that concepts which affect us all – economics, engineering or how to organize society – should be studied by all.  In fact, the ancient Greeks had a word for those who had no interest in public affairs: “idiotes”.

Long tangent…back to topic.  So I’ve got some difficult decisions between career-focused goals and some more ambitious goals in the pursuit of value and purpose.

Now I also need to find time with my 3 kids, who are growing up so fast, and help them with their studies.  When we look back on our lives, chances are we will not wish we had worked on that project a little bit harder, put in more overtime, or earned that extra certification (that will be obsolete in 5 years).

So there’s no magic formula for this but somehow we have to strike a balance between our careers and related goals and developing relationships and memories with those close to us.

So…I choose to divide what free time I have among improving my career, riskier and more ambitious pursuits, and spending time with my family and providing guidance to my children.  On top of this I’d really like to spend an hour a day working out, and maybe some time doing something fun once in a while :) .

If you’re like me and you’re on the younger side, you’re focused on your career, as you should be.  Be ambitious and curious, and learn at every opportunity.  But at the same time, life is short – make sure that you somehow find time for what is important to you.  What do you want your legacy to be?  What will you leave behind?

There’s no magic formula here beyond a few simple principles:

  • Work hard and be diligent
  • Decide what is important to you – and what kind of impact you want to leave behind on this world.
  • Find balance between your relationships and your career.  Both are important, and neither can afford to be neglected.

This was just a quick, spontaneous blog post, much of it structured around my own concerns about time, but hopefully others will find some value in even just thinking about time, your life, your priorities and your goals.  Many of us set objectives for our careers, but how many of us set objectives beyond this?

What are your career and life goals?  Discover them, prioritize them and pursue them while you still can, finding balance along the way.  Work hard, but stop every now and then to ask “what’s important in the long run?” and focus on the impact you make on those around you along the journey.

Uncovering Value and Opportunity with Utility Computing

These are exciting times. Amid today’s rapidly changing business climate and technology shifts, there are many new areas in which we can find value and opportunity. There are ways to change how we procure information technology, how we manage it, and how we consume it. If we were unable to change the features of our mobile phone service for the length of our contract, it probably wouldn’t work out particularly well for us. We want – and sometimes need – to change the features of our phone service – minutes, data, voice mail and more – to accommodate needs which are constantly in flux. It’s just like cable or satellite TV, where we might want to change the channels we subscribe to from month to month.

What if we could buy iPads in the same manner? Turning on and off fee-based features as we need them, along with a built-in tech refresh (new iPad) every year or two? Better yet, what if we could buy IT services and components like this? Oftentimes IT components are purchased in multi-year leases, but we can’t easily change the properties of those components to accommodate what the business wants from IT and when they want it. As a result, the business has to wait, which seems a bit backwards. This is just one of the promises that utility computing holds for us – more flexibility in procurement; offering not just new financial flexibility, but allowing us to “dial up” what we need out of IT on demand in order to support the mission of the business.


Chances are that electricity and phone service are not what your organization excels at – so you contract out these services to experts who can offer them for usually a much lower price than you could yourself. With the utility mode of computing, similar economic factors are at work.

Running a datacenter and getting all those networking, computing, and storage layers to work well together and then adding on disaster recovery and more is not inexpensive or easy. But when these elements of IT are contracted out to experts who can operate at scale, per-unit costs will often decrease. Perhaps more importantly, it allows you to increase your focus on what truly matters – your applications, your data, your processes, and of course your business.


VMware introduced a paradigm shift in IT when it made x86 virtualization an effective solution. Now instead of buying dedicated server hardware for each and every function, we can run multiple servers on the same hardware. The immediate impact of this server consolidation was a reduction in capital expense or CAPEX – there were fewer physical servers, less power and heat, and more ports and space in the datacenter.

But then something else started to happen. Because the servers were encapsulated in this software wrapper, IT departments found even more opportunities to save money in operational expenses, or OPEX. Servers could be provisioned in minutes, and new opportunities appeared in everything from monitoring to backups and disaster recovery, significantly improving operations and reducing administrative overhead and associated costs.

We believe that there are many new and emerging opportunities for OPEX improvement with utility and cloud computing models, where more IT elements can now be configured programmatically. This is what VMware refers to as the Software Defined Data Center (SDDC), in which more data center resources will be abstracted so that they can become programmable – even doing things such as provisioning a new multi-tier application with PCI compliance with just a few clicks.


However, things REALLY start to get interesting when you’ve built up operational efficiency to where you can begin to position the organization for agility. At this point, IT and the business are working together as strategic partners to get projects launched and completed in less time and with strategic focus and improved efficiency. Oftentimes businesses know exactly what they want to do, but they fail in the execution as projects take too long and opportunities – and revenues – are lost. By evolving the IT organization beyond a cost center and positioning the business for agility, an entirely new level of value becomes available to be captured.


Some analysts, like Forrester, are predicting that in the coming decade IT departments as we know them will effectively disappear — the businesses will directly design, build and consume IT systems to support the business — which is the way it should be.  Information Technology should exist to enable the business — not as a sub-bureaucracy within organizations which provides resources according to its internal budget rather than business value.

VMware ushered in new opportunities for CAPEX and OPEX improvement, and now new layers on top of that — including SDN — are ushering in the era of the Software Defined Data Center.  It’s time to stop getting bogged down in specific systems and to adopt a vision in which resources can be consumed as a utility and complement each other to return the best value.  It’s time to rethink operations, high availability, backups, disaster recovery, storage — and of course how we consume infrastructure.


Using Google Cloud Storage with Veeam Cloud Edition

Veeam 6.5 Cloud Edition has many nice features.  It includes the award-winning Veeam Backup & Replication and adds a second application to replicate those backups to offsite storage — such as Google Cloud Storage.

Veeam does a great job of performing disk-to-disk backups of virtual machines, but many will want and/or need to have an offsite copy of those VMs.  Sometimes this will even be required for various certifications and/or audits.  Veeam Cloud Edition can do exactly this, adding encryption to the backups as well.  No more Iron Mountain, tape exchanges, car trunks or whatever physical method you use to get your backups offsite — now they are simply replicated to the cloud storage provider of your choice.

Google is one of many such cloud storage providers, and their standard offering includes geo-redundant storage, which means a lot more durability than any mechanism of shipping tapes or disks offsite.

Being new to Google Cloud Storage, I went to set up the integration with Veeam and quickly got stuck on finding the proper client key and secret key for accessing my storage bucket.  The provided help file seemed to reference a previous layout of Google Cloud’s web pages, and I found it somewhat useless (Veeam 7 is due this summer, which I assume will have updated documentation).  After some reading and experimentation I finally found where in the Google API Console I had to go to retrieve the proper keys for Veeam.

Once in the Google API Console I eventually found the answer not in the API Access section, but in the Cloud Storage section.

At the bottom of this tab is “Interoperable Access,” which you must manually enable.  Once this is enabled you will see a new subtab under Google Cloud Storage for Interoperable Access, and this is where you will find the keys you need to provide to Veeam Cloud Edition to get started.  Now you should be able to connect to your Google Cloud Storage buckets and start replicating your backups offsite.


The replication of the backups worked flawlessly, but the speed you get will be highly dependent on the quality of your connection to your cloud storage provider.  To walk through a quick restore scenario, I started with a small Windows 2012 virtual machine which Veeam Backup had compressed down to just 6GB.  I initiated a “Restore From Cloud” job, which pulled down the backup files in about 15 minutes.  Then I simply had to import the backup files into the catalog and set up the restore job (about 2 minutes combined).  In the restore job I chose Veeam’s “Instant Recovery” option, which allows me to run the VM right from where it resides on the backup repository (I can always Storage vMotion it later).  In summary, it took about 20 minutes to pull a 6GB VM backup from cloud storage and have it powered up and running.

If you’re interested in Veeam be sure to check out version 7 in a few weeks which will have even more features.

VMware vExpert 2013

I’m honored to join the ranks of the vExpert community for the 3rd consecutive year; the community has now grown to a thriving 581 members.  I wanted to take a brief moment to share a few thoughts about the program.


What does it take to build such a community?  In the earlier part of my career I was an MCSE jumping on planes every week to work on Active Directory and Windows Server technologies.  The Microsoft MVP program has been around for a long time and is still highly regarded, but I honestly don’t recall, either then or now, seeing the level of community and participation (at least from my vantage point) that the dynamic vExpert program displays.  I find this to be an interesting comparison.  To build such a strong, vibrant community you have to have several ingredients, including:

  • Technology that excites people — technology that provides real value and solutions, is never standing still (unlike, say, Novell NetWare), and is constantly improving and evolving.
  • A strong group of dynamic professionals to teach, lead, share and evangelize the solutions, methods and benefits.
  • And, certainly not least, a strong leadership team at the core to bring the community together and to develop and advance it.

The vExpert community is a special group for which I’ve seen no equal in the industry.  Great people and great solutions all around.


vExperts get some benefits, including NFR licenses for their home labs and access to beta programs.  This year a number of vendors are contributing benefits ranging from clothing to training from TrainSignal to product licenses.  A full list of offerings has been compiled at the vInfrastructure blog.


It is with a level of discomfort that I accept the vExpert 2013 award.  The award is based on 2012 activity, but for most of this year (2013) I would rate my community contribution as poor.  The biggest reason for this is time — I have plenty of ideas for blog posts on all manner of topics, from general cloud to VMware and much more, but I’ve been burning the candle at both ends lately (which makes the year go by fast).  Additionally, I’ve been working on VMware and cloud-specific technologies only a small fraction of the time now, so I’m not getting the exposure that I used to in these areas.

I know I have many more contributions in me for the community; with the current demands of both work and family I’m not sure how many of them I’ll get to, but to the extent I can find the time, I will make them.

Congrats to all the vExperts out there, and big thanks to the community and its leaders (you know who you are!) for all that you do!

The Nokia Lumia 920 Experience

Recently I was given the opportunity to test drive Nokia’s flagship phone – the Nokia Lumia 920 (an Engadget Readers’ Choice 2012) – and I thought I would share some highlights from the user experience.  My current phone is an HTC EVO 4G LTE, which is quite similar in specs, but for many functions I found myself favoring the Windows Phone 8 experience on the Nokia Lumia 920.


The first thing I noticed is that the Nokia Lumia 920 is solid – it is a bit thicker and heavier than the EVO, but as I got used to it, I found that I didn’t mind the added size (Nokia now sells newer models such as the 925 and 928 which are a bit lighter).

This page at TechCrunch shows detailed specs for both phones side by side.  They are quite similar in CPU, and the Nokia has a slightly smaller screen (by .2 inches) but greater pixel density as a result.  The Nokia has a “super brightness” capability which can make it readable even through sunglasses in bright sunlight – something I couldn’t do with my EVO.  Each has its pros and cons, but I found myself willing to accept a .2” loss in screen size for the extra brightness.

As for the camera, the Nokia PureView lens is actually suspended in liquid to give it more protection from vibrations and movement.  Both phones have an 8MP camera and can take rapid-fire shots from which you select the best picture (for the Nokia this requires the "Blink" app).  Both took excellent pictures, and I did not do any testing serious enough to tell the difference.  Below is a picture I took with the Nokia at a circus, but don't rely on my photo skills — Nokia has an impressive collection of pictures taken by Nokia phone users here.


At the Circus (click to expand)
Taken with Nokia Lumia 920

Another hardware innovation available on the Lumia 920 is wireless charging.  My trial did not include a wireless charger, but the phone is ready for it out of the box.  At present the wireless charger sells for just under $50 on Amazon.


Truth be told, this was one of the fastest and most impressive user setups I've seen.  I added the SIM card, went through the setup, and provided my Windows Live ID (my hotmail.com address — hey, I was an early adopter).  Once I did so, it immediately pulled in information from my email account as well as contacts, music and more.  When setup was done, the Microsoft Office app caught my eye and I launched it — it automatically connected to Skydrive and I was able to load the PowerPoint I had been working on that day, properly rendered, in a snap.  Music I had purchased or streamed from Xbox Music was also right there, ready to be streamed or downloaded.  I think the whole process took a mere two minutes.

Here some of the value of the Microsoft ecosystem becomes clear: Skydrive, Office, email, music, Xbox and more working seamlessly across the PC and mobile worlds.


For me, this is where the Nokia shined — when I had both phones available I found myself preferring the Nokia for this reason.  I created Windows 8 Live Tiles for the four email accounts I use the most.  I could set different alerts (sound, vibrate, etc.) for each, and with just a glance at my home screen I could see which accounts had how many emails.  More importantly, I could move between inboxes at the "speed of tap".  The email UI is fast and responsive, such that I could check multiple inboxes in seconds.  Doing the same on the HTC EVO (Android) just wasn't as fast or as seamless an experience for me.


The Live Tile experience is nice with many apps as well.  On my Android phone I have to launch various apps to see status, but with Live Tiles I can see updates from Skype, Box, The Weather Channel, Twitter and much more right on my home screen.  With Box, I can instantly see updates about new files, or files being accessed and modified.  Also, the Skydrive and Office applications give me about as good an experience viewing and editing Office documents on a phone as I could imagine.  For me, the rich Office experience combined with Box and Skydrive, plus the rich email experience, offers the most value in a work/productivity context.

The Lumia also came preloaded with many Nokia apps, including Nokia Music and the HERE series of apps (Maps, Transit, City Lens, and Drive).  The HERE apps (which I liked) were really useful for a quick answer to "what's around here for [dining/shopping/etc.]", and the City Lens app gives you the option of using the Lumia's camera to overlay labels on the buildings around you.

Sometimes Windows phones get knocked for not having enough apps (as Android used to), but I wonder if the critics have looked at the app store lately.  Sure, there aren't 16 different versions of "Cupcake Maker", but the apps I wanted were there, ranging from Netflix, Twitter, Pandora and Skype to much more.


In summary, the Nokia Lumia 920 worked so well for me that I quickly found myself favoring it over my Android-powered HTC EVO 4G.  One of the biggest areas of utility for me was the rich, "speed of tap" email experience combined with Skydrive, Office and Box.  The next time I find myself purchasing or upgrading a phone, I will definitely be seriously considering the Nokia Lumia offerings.

A Tale of Two Clouds (The Hybrid Cloud Is The New Normal)

It was the best of times, it was the worst of times, it was the peak of inflated expectations, it was the trough of disillusionment, it was an epoch of unicorns and rainbows, it was an epoch of engineers and managers looking at each other in bewilderment.  Is this a cloud I see before me?  Come, let me clutch thee — I have thee not, and yet I still see thee.  Et tu, Brute?

OK, enough with the Dickens-Shakespeare mashups.  I would like to talk about two islands — on-premises systems and public IaaS clouds — and how and why we might connect them.  Before we talk about how to connect these islands, let's first review why…


Today most organizations have an on-premises datacenter, upon which they might have a private cloud, or just a virtualization infrastructure, or perhaps something else entirely.  What are some reasons they might want to move some things from this "private island" to a public island?  Is the public island cheaper?  Well, not always…

As Chad Sakac explains in this excellent post, the technology costs for public cloud often aren't any cheaper and can even be more expensive.  But there are also variable costs here — the cost of your datacenter space, your infrastructure, power, cooling, and the staff needed to maintain it all.  When these costs are considered, it's possible that purchasing infrastructure as a utility might be less expensive for the organization.
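The comparison above can be made concrete with a back-of-the-envelope sketch. Every number below is invented for illustration — the point is only that "sticker price" of compute is not the whole comparison once space, power, cooling and staff are counted:

```python
# Hypothetical monthly TCO comparison -- all figures are made up.

def on_prem_monthly(hw_amortized, space, power_cooling, staff):
    # On-premises cost includes more than the hardware itself.
    return hw_amortized + space + power_cooling + staff

def public_iaas_monthly(instance_cost, instances):
    # Utility pricing: pay per instance, overhead is the provider's problem.
    return instance_cost * instances

on_prem = on_prem_monthly(hw_amortized=4000, space=1500,
                          power_cooling=1200, staff=6000)
cloud = public_iaas_monthly(instance_cost=180, instances=50)

print(on_prem, cloud)  # 12700 vs 9000 -- utility pricing can win here
```

Flip the assumptions (already-paid-for datacenter space, heavily utilized hardware) and the on-premises number can just as easily come out lower, which is exactly the "not always cheaper" point.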

But perhaps more important at times than cost, public cloud is quick and easy to consume.  No lengthy procurement process, nothing to order, ship, deploy and configure — it's all there, ready to be consumed when you need it (in other words… agility).  Unless you already have on-premises capacity, it will usually be quicker to consume public IaaS resources.

Now some business-critical workloads may not be candidates due to security concerns (real or perceived), governance requirements, operations and many other reasons.  To back up this point, a recent survey posted at Gigaom revealed that 98% of surveyed IT executives plan on expanding their datacenters to run internal private clouds, with 61% citing security as the reason public clouds were not selected.

The private cloud simply isn't going anywhere anytime soon — a strategy for leveraging the utility computing model that focuses exclusively on workloads hosted on public IaaS misses the internal/private elements of the datacenter, which are most likely the larger piece of the pie.  To fully unlock the potential value of utility computing, both sides must be addressed and a strong bridge must be built to connect our multiple "islands".

That leaves us with some good starting use cases for public cloud — test/dev workloads, web tiers, seasonal capacity and new initiatives.  For many environments this will be something less than half, or even less than one-third, of the datacenter.  So now we have two islands… how do we connect them?


So let's say you've got a VMware infrastructure in your on-premises environment and you want to consume IaaS from one of the many vCloud providers.  You can start by using vCloud Connector, which can package up workloads and move them over to the hosted vCloud environment.  But this really isn't so much a bridge between your "islands" as it is an occasional ferry that packages up VMs as OVAs and transports them back and forth.  At some point we're going to need something more.

The Advanced version of vCloud Connector adds some valuable features, but will not be available to VSPP (VMware Service Provider Program) partners until later this year.  The first feature is a Layer 2 VPN, which allows subnets to be spanned across your "islands", making it no longer necessary to change the IP scheme when you move between them.  The second feature is content synchronization, which keeps your VM templates (part of a provisioning service catalog) consistent across all of your islands.  Now we've replaced our ferry with a small bridge.

As we look to the future, there's more coming from both the vCloud Suite and VMware's upcoming NSX offering — Layer 2 VPN, VXLAN, and virtual firewalls.  Imagine if workloads could be moved between your islands with IP addresses, routing and firewall rules all maintained during the migration.  Now our one-lane bridge becomes a highway as we truly begin to unlock the value of utility computing.

There’s some important lessons here for both service providers and IT organizations looking to unlock the value of utility computing.  Many of us have been focusing on public cloud (IaaS) but now we realize that this may be only viable for something less than 100% of all workloads — we may need to make investments in our on-premises environments to fully unlock the value.  There’s use cases for both private cloud and public cloud and not every workload is likely to fit in the same bucket.  This is where it becomes clear that hybrid cloud will be the new normal — and for many organizations this means making investments in their on-premises environment such as moving beyond mere virtualization to the vCloud Suite for example.


Let’s say your goal is to help your customers unlock the potential value inherent in the utility computing model.  You start selling hosted/public IaaS to your customers — which does have value and use cases — but now you’re limiting your scope to faction of the total pie — the workloads that meet the test/dev, web tier, seasonal capacity, new project criteria.  If you want to help your customers unlock the potential of utility computing you need to make investments in the on-premises environment as well.

Help your customers move beyond virtualization by offering converged infrastructure solutions like the FlexPod and the Vblock, powered by VMware's vCloud Suite, and perhaps adding orchestration, automation, service catalogs and more.  As you help them down this path, not only will you be unlocking the value of utility computing in their on-premises environment, but you'll be helping them build a better on-ramp and bridge to those vCloud-powered public clouds.  As new capabilities like vCloud Connector Advanced and VMware NSX become available, the ability to build a strong and sustainable bridge between these islands grows dramatically.  Perhaps more importantly, you are unlocking value in the workloads that may be captive to the on-premises environment — perhaps more than half of the entire datacenter.

If I can stand on my soapbox for a minute here, I've always believed that the best sales approach is a strategic and consultative one at the CxO level.  Talk about the full environment, the strategy and the opportunities for synergy.  The service provider needs to understand the customer's environment and needs, while the IT organization may need guidance on technology, trends and which strategies could be the most effective.  Contrast this with the traditional volume approach, where the focus is "what can I sell you this quarter so I can meet my numbers?"  Sales can enter the cloud era by focusing on long-term strategy rather than short-term volume — ultimately, in my opinion, this will unlock more value for both parties.  [Stepping down from my soapbox…]


IT organizations are at different steps along their evolutionary cloud journey.  Some have adopted virtualization, some have adopted private clouds, and others are at varying points along this spectrum.  If you're running VMware, you may want to look at building up your private cloud with the vCloud Suite while also leveraging vCloud IaaS providers where it makes sense.

Look into converged infrastructure solutions; look into the vCloud Suite; look into automation and orchestration for your workloads.  Improve your posture for those workloads you don't expect to move to public IaaS providers in the coming years.  Unlock value and improve your foundation in order to build better bridges.

The hybrid cloud is likely to be the new normal — build up both your private and public clouds to unlock value and build strong bridges in between.  Seek out a service provider that can help you unlock the value of utility computing on BOTH of your “islands”.

Discussion: Is the Public Cloud Market Destined to Become An Oligopoly (like the airlines)?

At some point, public IaaS offerings become a matter of cost and scale as consolidation and economic realities take over and opportunities to differentiate are reduced.  Gartner's Magic Quadrant for IaaS currently ranks the top 15 IaaS players by market share, but is more consolidation inevitable, not unlike what we have seen in the airline industry?

The winning APIs and models will likely become magnets for third parties and MSPs as the market determines winners and losers.  AWS is currently the market leader, but could SDN offer a disruptive paradigm shift?  Could VMware's upcoming offering attract organizations running VMware-based private clouds?  Do OpenStack and vCloud pose a threat to AWS?  What impact will Google Compute Engine make?  Virtustream and Rackspace should be followed closely as well.

How do you see the public cloud market shaping up over the next 3-5 years?  An oligopoly with APIs providing most of the differentiation?  Share your thoughts below:

Human Action and the Cloud

I came across a passage today from Ludwig von Mises' classic treatise, "Human Action", which I thought might have some relevance to cloud computing as well.  I'll start by quoting the passage, with the understanding that the context here is economics:

“There is no means by which anyone can evade his personal responsibility. Whoever neglects to examine to the best of his abilities all the problems involved voluntarily surrenders his birthright to a self-appointed elite of supermen. In such vital matters blind reliance upon ‘experts’ and uncritical acceptance of popular catchwords and prejudices is tantamount to the abandonment of self-determination and to yielding to other people’s domination. As conditions are today, nothing can be more important to every intelligent man than economics. His own fate and that of his progeny are at stake.”

“Whether we like it or not, it is a fact that economics cannot remain an esoteric branch of knowledge accessible only to small groups of scholars and specialists. Economics deals with society’s fundamental problems; it concerns everyone and belongs to all. It is the main and proper study of every citizen.”

Mises was a rationalist who accepted the limitations of human reason and economic calculation, but still saw human action — as opposed to the inaction of the content — as the most effective way to organize a society faced with limited resources.

The key here is that knowledge, and the awareness to use that reason — qualities not normally found among the content or uncurious — are required for a society to effectively deal with the challenge of finite resources.

I see many parallels here with cloud computing.  The first paragraph notes that when the informed do not engage and speak up, self-appointed supermen will consolidate power and make the decisions.  Sound like any IT departments you know of?  "Blind reliance upon 'experts' and uncritical acceptance of popular catchwords…" — do we see this in IT departments today?  Of course we do — and what results should we expect from such models?  Are these IT captains steering their ships in the right direction?

There are the traditional siloed fiefdoms that simply don't want much to change beyond an occasional tech refresh (also known as "server huggers").  IT directors and CIOs who have not fully embraced a cloud-inspired vision of how the economics of IT can change.  The engineer who pursues technical certifications without a vision for how they might be applied to the business.  The salesman who wants to sell infrastructure to meet quarterly numbers, rather than engaging in a mutually beneficial relationship to pursue the most effective implementation of technology pursuant to the organization's goals (including the ones they might not know about).

Now read the quote a second time, with the word "economics" replaced with "cloud computing":

“Whether we like it or not, it is a fact that cloud computing cannot remain an esoteric branch of knowledge accessible only to small groups of engineers and technology evangelists.  Cloud computing deals with both IT and the business’s fundamental problems; it concerns everyone and belongs to all. It is the main and proper study of every stakeholder.”

This is where it becomes clear that cloud computing is more than just technology.  It is also a vision and a culture.  It is not a tactical solution; it is an end game.  It requires the individuals of the IT society not to be complacent, but to be curious, engaged and inspired.  It requires human reason to analyze problems, parameters and even economics in order to improve the condition of all.

In Part One of "The NoCloud Organization" I wrote about just how complex the problems we are trying to solve truly are.  To paraphrase Mises, in order to successfully advance cloud computing, human action — reason and calculation — is required.  The same thought patterns, business models, leadership styles, roles and responsibilities, sales methods and career paths that brought us the legacy systems and applications we are trying to move beyond will not be an effective means of promoting the new paradigm of cloud computing.

VMware Pursues SDN With Upcoming NSX Offering


Earlier this week VMware announced VMware NSX — an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira's SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS — formerly known as vShield App and Edge). Since I already had intentions to write a post about vCNS, this seemed like the perfect opportunity to do so while taking a look at the new NSX solution at the same time. Before we begin exploring vCNS, let's take a quick step back.


About a year and a half ago, I wrote this post in an attempt to define cloud as abstraction leading to automation leading to agility. In a sense this is what SDN – Software Defined Networking – aims to do as well. First the server hardware (compute) layer was abstracted with hypervisors like ESXi, and now this concept is being extended into the area of networking (and also storage) to provide new opportunities for automation, orchestration and integration.  Let’s start by taking a look at what vCNS is already offering today.

vCloud Networking and Security (vCNS)

Originally VMware offered three different vShield products, but with the release of the vCloud Suite, the vShield App and vShield Edge solutions now constitute much of the vCNS product, along with support for VXLAN and integration into the vCloud ecosystem including vCloud Director and vCenter Server. vCNS Standard Edition is included with vCloud Suite Standard, while an upgrade to vCNS Advanced adds high availability (HA) for the virtual appliances, load balancing and data security capabilities.

Central to vCNS is the vShield appliance, where all of the vCNS configuration is done via a web UI, which is also exposed as a tab in the vSphere client. The vShield appliance can back up the entire vCNS configuration to an FTP/SFTP server on a regular basis, allowing you to deploy a new vShield appliance and then restore your configuration backup.

EDGE (not that guy from U2)

You can deploy Edge servers in your environment — hardened virtual appliances which provide, well… edge services. You deploy Edge appliances from vShield Manager and then, once properly configured, you can point your servers to them as gateways. Besides basic gateway services, the Edge appliances can also provide the following:

  • Layer 3 firewall
  • NAT Services
  • DHCP
  • Site-to-Site VPN (IPSEC)
  • Web Load Balancing (Advanced Edition)

There is a lot of potential here. Some of you might be thinking "why should I use virtual appliances for this?" There are many answers and use cases, but some of the best examples involve multi-tenancy. Sure, you can use dedicated hardware and/or VLANs for each tenant, but this can be quite complex and may no longer be the most efficient approach. By virtualizing these edge services we now have logical (rather than physical) boundaries, which can lead to more scale, more automation and less complexity as tenants are added and removed.


vShield Edge and multi-tenancy
[click image to zoom]

I should also quickly mention a lesser-known feature: the SSL VPN. Since it uses common TCP ports, you should be able to use it from networks where IPSEC might be restricted. You can quickly enable this feature on an Edge appliance and then access a VPN home page, from which you can either expose internal websites or offer a full VPN client download — which I tried, and it worked flawlessly on my Windows 8 system.

APP (Virtual Layer 2 Firewall)

Like vShield Edge, vShield App is a hardened virtual appliance deployed from vShield Manager, but this appliance is deployed on a per-host basis. You can optionally configure vShield App to "fail closed", meaning that traffic to VMs will be denied if the vShield App firewall is unavailable. To mitigate this risk, you can deploy a second appliance in an active/passive pair (HA requires the Advanced edition).


A quick sidebar on the virtual appliances: both Edge and App appliances can be deployed in active/passive pairs, and there are different appliance sizes based on the size of your network (the large appliances allocate 2 vCPUs each). Keep this in mind from a resource perspective, as you may need to reserve resources for a pair of large appliances.

Now that you have a layer 2 application firewall running on each host, you have many new possibilities for how to design your network. What's great about vShield App is that your firewall rules are now abstracted from IP addresses. Today most firewall rules have an affinity to each and every IP address — what if we could define them on a virtual machine basis instead, or even at the level of a VM folder or resource pool? Drop a VM into the "Web Tier" resource pool and suddenly it inherits the appropriate firewall rules you've already defined — regardless of its IP. And if someone changes the IP inside the VM, the firewall policy is sustained at the VM level. Depending on the environment, this can provide many benefits, ranging from a reduction in network hardware to lower operational complexity and quicker execution. In some environments a traditional DMZ may no longer be necessary, depending on what new design is settled on.
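The idea of rules that follow the VM rather than the IP can be sketched in a few lines of illustrative Python. To be clear, the container and rule names below are hypothetical stand-ins, not the actual vShield App data model:

```python
# Illustrative sketch: firewall policy keyed to a VM's container
# (resource pool / folder) instead of its IP address.
# Container and rule names are hypothetical.

POLICIES = {
    "Web Tier": ["allow tcp/80", "allow tcp/443", "deny *"],
    "App Tier": ["allow tcp/8080 from Web Tier", "deny *"],
}

class VM:
    def __init__(self, name, container, ip):
        self.name = name
        self.container = container  # e.g. a resource pool
        self.ip = ip                # can change freely

def effective_rules(vm):
    # Policy lookup ignores the IP entirely -- drop the VM into the
    # "Web Tier" pool and it inherits that pool's rules.
    return POLICIES.get(vm.container, ["deny *"])

web01 = VM("web01", "Web Tier", "10.0.1.15")
print(effective_rules(web01))

web01.ip = "192.168.5.9"    # re-IP the guest...
print(effective_rules(web01))  # ...and the policy is unchanged
```

The same lookup-by-container idea is why moving a VM between pools, or re-addressing it, never requires touching the rule set itself.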

Another nice extra available in vShield App is the ability to monitor your network flows. This can be done either at the host level (App) or at the gateway level (Edge); even without any firewall rules, you can monitor your network flows at various points and see which protocols are generating the most traffic.

NSX – The Future

At a high level, VMware's NSX appears to be a fusion of Nicira's Network Virtualization Platform (NVP) and the vCNS product we just discussed, along with a few twists. VMware acquired Nicira for $1.26 billion last year; its Open vSwitch (OVS) instances do the packet pushing, while the NVP Controller Cluster exposes a RESTful web API for managing and defining these virtual switches.
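To give a feel for what "defining virtual switches over a RESTful API" means in practice, here is a minimal sketch that builds (but does not send) such a request. The endpoint path and body fields are illustrative guesses for this post, not the documented NVP/NSX API — consult the actual API reference before using anything like this:

```python
# Illustrative only: construct a REST request to create a logical
# switch on a controller cluster. Path and fields are hypothetical.
import json

def build_lswitch_request(controller, name, transport_zone):
    url = f"https://{controller}/ws.v1/lswitch"  # hypothetical path
    body = {
        "display_name": name,
        "transport_zones": [{"zone_uuid": transport_zone}],
    }
    # A real client would POST this with authentication; here we just
    # return the pieces so the shape of the call is visible.
    return "POST", url, json.dumps(body)

method, url, payload = build_lswitch_request(
    "nvp-controller.example.com", "tenant-a-web", "tz-0001")
print(method, url)
```

The point is the model, not the syntax: network topology becomes data you can create, diff and destroy programmatically, which is what makes the automation and orchestration story possible.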

Below is a full slide from VMware introducing the NSX offering, which has a few interesting features. The support for OpenStack at the top is not really new, but the "multi-hypervisor" support at the bottom is. Since Nicira's OVS is hypervisor-agnostic (Xen, KVM, ESX), it would seem that NSX is now abstracted even from ESX.


VMware NSX
[click to zoom]

There’s a few more hints in this VMware blog post on NSX including mention of Layer 2 Gateway services, active/active HA (active/passive in vCNS today) and even mention of MPLS support. And of course you know VXLAN is going to be baked in as well. And remember the backup support in vShield Manager? Well it seems that the new NSX manager takes this a step further – allowing you perform VM-style snapshots of the entire network state and ecosystem for backup or even to revert a recent change (undo).

So many scenarios for this new paradigm, but here's just one: the cloud on-ramp. You want to move some virtual machines from your datacenter to a vCloud provider, between providers, or to VMware's new hybrid vCloud offering. Just deploy a gateway (Edge) appliance in your datacenter and establish a site-to-site VPN to the hosting datacenter running NSX. Now start migrating those virtual machines!

VMware expects to launch NSX in the second half of this year, and having seen the benefits of vCNS, I'm really excited about the possibilities that Software Defined Networking (SDN) with VMware NSX can provide. I suspect that VMware is also working on advancing "Software Defined Storage", and I hope to share some thoughts on those possibilities in future posts.