Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to approach it not JUST from a virtualization perspective, and not JUST from a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but from a more holistic perspective which considers all of this, as well…

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert I do think that there is something special about this one.  This is a much more significant jump than going from 4.x…

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you all that’s in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in…

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to…

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, a CIO, and a salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it…

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements, such as: “It will reduce our TCO” “This is a strategic initiative” “The ROI is compelling” “There’s funds in the budget” “Our competitors are doing it” Some of these are better reasons than others, but here’s a question.  Imagine a…

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

Blog Notes

I suspect that most bloggers would feel the same if I said that I’ve had the chance to blog on less than 20% of my ideas.  One of my favorite new products that I’ve never even blogged on is vCloud Director – which probably has something to do with me never having had the opportunity to use it.

I’ve got what I think will be some interesting posts to share, on topics ranging from organizational silos, automation and vCloud Director, to a fun post on cloud security (fun?), backups and OPEX, and our experience with a very special charity, but I just haven’t had time to fully hatch them yet.  I’m right in the middle of crunch time for a project and the domestic front has also been busy, so time for writing has been scarce lately.

Looking back I’ve noticed that some of my older posts on things like disk alignment and application quiescing are getting the most hits.  I may also re-post a handful of these older posts that haven’t been seen by this community yet.  As I mentioned earlier, I haven’t touched anything to do with VMs for almost a year now, which has been affecting my content, but I expect this to change soon…

Chasing Clouds in Australia with Rob Livingstone

Rob Livingstone spent the past 10 years as the CIO of Ricoh in Australia, and has just left that position to launch a new consultancy focused primarily on cloud computing.  Rob also has a new blog on CIO Magazine – The Accidental CIO.

I had the opportunity to briefly exchange a few insights with Rob regarding cloud computing and his new consultancy.

While at Ricoh, Rob adopted virtualization well in advance of Gartner’s hype curve and told me that “it works like a treat, and is very, very cost effective.”  Rob went on to explain that there are very strong benefits to cloud computing, but he finds that there are many challenges that businesses encounter along the way:

Many businesses are really struggling with the Cloud, and there are some pretty large companies that have launched into a SaaS implementation only to find after a year that the costs are horrendous, and have pulled the pin or seriously cut back the scope. Some have gone ahead and derived benefits.

Rob adds that “there’s a lot of hype about this and much misinformation” and he hopes that his consultancy can guide businesses on how to best navigate their ventures into the cloud.
Where are most of the challenges in cloud computing?  Rob says:

The ‘Cloud’ discussions are louder OUTSIDE of the IT space, and trying to discuss Cloud from WITHIN the IT Community is like banging on your own jailhouse walls, where the real action is outside, between LOB and the SaaS vendors…this is where businesses are going to run into problems, whether they be viral runaway SaaS projects, data integrity or CMS Governance issues

I’ll be following Rob’s posts in the future and you may want to do so as well.  You can follow Rob at the following sites:

The Accidental CIO (CIO Magazine Blog) (cio.com.au/blog/accidental-cio)
RLA (Rob Livingstone Advisory)  (www.rob-livingstone.com)

The Best Virtualization Bloggers Announced — A Vibrant Community of Passionate Bloggers

In an open vote, the best bloggers were just voted on by the community. Blue Shift received the fewest votes and I’ll explain in a bit why I’m pleasantly surprised with those results, but first I’d like to talk about this exciting and growing community.

Eric Siebert runs vsphere-land.com and held the vote for the top virtualization bloggers. According to Eric, 115 blogs were considered, which is almost double the number from this point last year. The fact that there are so many new bloggers demonstrates that there are a lot of knowledgeable and passionate people out there who want to share their knowledge and ideas on virtualization and cloud computing.

Why is the community growing so much? I think a big part is the technology itself and the great products that put it to use. Virtualization is a game-changing, paradigm-shifting, disruptive technology. It changes the way we do things, and opens up new possibilities. It reduces costs and increases business agility. In fact all these things speak to how Blue Shift got its name.

Of course the other side is a really great group of people who have stepped out to share their knowledge and passion with the community. This really is a strong community of professionals with some really outstanding people. I’ve been following some of the top blogs for years now and it’s been really exciting to see the community grow and flourish around them. There is a huge demand for information on virtualization and cloud computing and this community is stepping up to fill that need on a daily basis.

The top blogs in the voting came out pretty close to how I would have ranked them and the result is a really good “who’s who” in virtualization bloggers.  You should be following the Top 25 if you’re not already.

Today less than 25% of all servers are virtualized and cloud computing is just getting started in many areas, so I only expect this community to continue to grow and thrive.

Blue Shift and advice for new bloggers

When I started Blue Shift my motivation was simply to share my knowledge and ideas around virtualization and it already has been very rewarding for me personally.

Being last in the voting (as far as number of votes) I’m not in a great position to be offering advice, but hopefully by the end of this post my offer will make more sense. 🙂

First of all I want to explain why I am pleasantly surprised that Blue Shift did as well as it did. Consider the following:

  • Blue Shift was started as a personal experiment 3 months ago (July).
  • Blue Shift has been on Planet v12n for about 6 weeks.
  • Most of my first six weeks of posts were never seen by this community.
  • I haven’t touched a VMware product in almost a year (I expect this to change soon).
  • I don’t work for VMware, EMC or a consultancy. I’m just another in-house IT guy.
  • I did not vote for Blue Shift nor did I ask anyone else to.

Given all these obstacles, now you can see why I was surprised that even 10 of the 700 voters considered Blue Shift to be in their top 10! And to those 10 of you who voted for Blue Shift, thank you very much! 🙂

By the way Eric, at one point you did mention prizes for the results and I’m holding you to your promise. You can send my whoopee cushion to my office address.

I didn’t start Blue Shift to be in a popularity contest, I don’t collect any ad revenue (but I have expenses), and I doubt that under any circumstances I would ever be in the Top 25, but I enjoy doing it and my reward is knowing that some in the community find my contributions valuable. The fact that I am even on this list with some of the great community bloggers is an honor as far as I am concerned.

If you’re a new blogger or thinking about starting a blog, there’s still a great demand. A few years ago VMware had just a handful of products, but today there are more than two dozen VMware products and growing. The community needs good storage, networking, security, cloud and business-minded bloggers who can go deep and specialize in the wide array of products and technology that now comprise virtualization and cloud computing.

If you have started a new blog or are thinking of starting a new blog here are some tips I would offer:

1)  Don’t be an echo chamber

You often see the “echo” effect when there’s a major new product release, or event (such as the top blogger results) as many bloggers are writing about the same event around the same time.

Try to be unique and create your own content as much as possible. Refer to other blogs when possible, but don’t rely on them for the bulk of your content. Write something that is either unique itself or offers a unique perspective on the topic.

2) Drill deep but don’t lose the big picture

Good technical articles will drill deep but don’t become myopic (nearsighted). Take a step back and consider the broader picture and any underlying concepts or relationships that will make your post more relevant and more interesting.

3) Be passionate and enjoy what you do

If you believe in what you are writing about and are passionate about it, it will show. The best bloggers here are very passionate about their subjects. If you enjoy your topic it will show, so choose your content accordingly.

4)  Be patient

I remember writing posts I was proud of in my first months while having fewer than 5 unique visitors a day on my site :). It takes time – months in most cases – for search engines and other bloggers to notice you. So don’t be discouraged – be persistent and over time you should see steady improvements.

Gartner: Virtualizing IE6 violates Microsoft’s EULA

Neil MacDonald at Gartner recently posted that Microsoft is telling customers that if they try to use application virtualization (App-V, ThinApp, etc) with IE6 that they are in violation of Microsoft’s EULA.  Here is an excerpt of a letter from Microsoft to some of their customers:

Microsoft does not support the use of Microsoft Application Virtualization (App-V) or similar third-party application virtualization products to virtualize IE6 as an “application” enabling multiple versions of Internet Explorer on a single operating system.  These unsupported approaches may potentially stop working when customers patch or update the underlying operating system, introducing technical incompatibilities and business continuity issues. In addition, the terms under which Windows and IE6 are licensed do not permit IE6 “application” virtualization.  Microsoft supports and licenses IE6 only for use as part of the Windows operating system, not as a standalone application.

As you can imagine this is maddening for companies who are planning to roll out Windows 7.  I know of a large Global 1000 company where major ERP/CRM/HRMS applications are still running older versions for which the only supported browser is IE6.  This customer was considering VMware’s ThinApp as an elegant way to work around this problem, but Microsoft is taking away this option, forcing less attractive options like Windows XP Mode, which requires the more expensive Windows 7 Enterprise Edition and more hardware resources on the desktop/laptop.

I can appreciate that Microsoft has concerns about supporting an EOL browser with security issues, but I think that Microsoft could be more flexible here as the result is that they are impairing Windows 7 migrations.  Companies do want to move away from IE6, but in some cases this requires major business application upgrades which just won’t happen overnight.  By adopting this policy, Microsoft is creating more obstacles for Windows 7 migrations.

Please share your concerns with your Microsoft representative and also comment on Neil’s post as well.  Hopefully if a critical mass of customers express their concerns to Microsoft, they will reconsider and allow application virtualization of IE6 in order to allow Windows 7 deployments to proceed.

Speed up vSphere client on Windows 7

KB article 1027836 explains that the vSphere client may seem slow when running on Windows 7 systems, and that this is especially noticeable when maximizing the window, which forces a redraw of all the panes inside the client.

To improve vSphere client performance under Windows 7, you can disable a feature called desktop composition as follows:

  1. Right-click the shortcut for the vSphere Client and click Properties.
  2. Click the Compatibility tab.
  3. Select Disable desktop composition.
  4. Click OK.
  5. Run the vSphere Client.

Linux kernel vulnerability may apply to ESX Service Console

There is a recent Linux kernel vulnerability (CVE-2010-3081) which is currently being exploited by hackers.  ZDNet reports:

“In the last day, we’ve received many reports of people attacking production systems using an exploit for this vulnerability, so if you run Linux systems, we recommend that you strongly consider patching this,” said Ksplice chief executive Jeff Arnold in a blog post on Saturday.

The flaw reportedly affects every 64-bit Linux distribution since 2008.

I am hearing reports (which I can’t confirm) that this does apply to ESX.  Since the vulnerability is 64-bit specific, I am thinking that the exploit applies to ESX 4.0 and 4.1 (ESX 3.x has a 32-bit console, and ESXi has no console).  Needless to say this is an advantage of the ESXi architecture.

If what I am hearing is correct, there should be a patch made available soon.  I’ll update this post if anything official is announced.

The Risks of Thin Provisioning — and Solutions

Recently I was interviewing a candidate who mentioned his vSphere experience.  I asked him what his favorite feature of vSphere 4.1 was and he said thin provisioning (technically this was possible in ESX 3.5 but was really introduced in 4.0).  I then asked what some of the concerns and risks with thin provisioning might be.

I’ve found that not everyone is aware of these risks and concerns, as well as potential solutions.  Virtualization Review also recently made a post on this topic so I wanted to briefly cover two concerns and approaches for dealing with them.


The first concern is metadata updates.  Metadata refers to a portion of a VMFS volume that contains – well, metadata.  The problem is that when metadata changes are made, a lock is placed on the entire LUN using a SCSI-2 reservation, which can significantly delay disk access for other ESX hosts trying to support other VMs on that volume.  This is one reason it has long been a best practice to limit the number of active VMs on a VMFS volume.

Metadata updates occur during activities such as a VM power on, vMotion, snapshots, and – growing a thin provisioned disk (see KB1005009 for more details).  When a server needs more space and needs to use some previously unallocated blocks, it will require a metadata update, which will place a lock on the LUN.  This can be especially bad if you have a number of servers that need to expand their thin disks at around the same time.

The good news is that this problem has been largely solved with vSphere 4.1 and a VAAI-enabled SAN.  VAAI (vStorage API for Array Integration) enables hardware acceleration for several functions – including hardware-accelerated locking.  When VAAI is used, hardware-accelerated locking allows the SAN to atomically update the metadata using a single SCSI command at the block level.  Since this lock is at the block level, other hosts are no longer locked out of the LUN during metadata updates.  The bottom line is that the negative aspects of metadata updates – specifically LUN locking – are eliminated when VAAI is used.

How do you enable VAAI?  Three simple requirements:

1)  Run vSphere 4.1

2)  Use a SAN whose firmware supports VAAI

3)  Make sure that the VMFS3.HardwareAcceleratedLocking parameter is enabled (1).  This can be enabled without a reboot.

It’s really that simple.   Use VAAI and all those concerns surrounding metadata updates and LUN locking are pretty much gone.  Of course you’ll also want VAAI for hardware accelerated zeroing and copying (i.e. template deployment/storage vMotion).


Generally it has been a best practice to only fill VMFS volumes to around 70 to 80% of capacity.  You need to maintain some free space for delta logs (snapshots), vMotion, and several other reasons.  Does using thin provisioning change the considerations here?  Yes!

Thin disks will expand whenever the server needs to access previously unallocated blocks.  You could even encounter a “perfect storm” where a number of servers need to grow their thin disks at the same time (e.g. a service pack or an application being pushed out).  You need to prepare for the possibility that these disks may grow quickly with little notice, and you may need to react to avoid capacity issues.

In other words a 20% buffer may no longer be adequate.  How much more should you buffer?  That really depends on the expected growth rate.  Most web servers are fairly static from a storage perspective while databases and other servers can be quite dynamic.

You can create alarms in vCenter for various capacity levels on VMFS volumes but what is really helpful is to trend your storage growth over time.  Products like vCenter Capacity IQ or Quest vFoglight can do just this and give you some trending for VM and VMFS consumption over time (and even alarms based on such trending). 

The bottom line is that you need to take several things into consideration before deploying thin provisioning, and I would recommend the following be considered at a minimum:

  • Increase the VMFS buffer (free space cushion) to at least 30%, and perhaps more based on your understanding of the data growth rates in your environment (of course this chips away at the storage savings slightly).
  • Make sure you have capacity alarms on your volumes (whether vCenter or something else) that give you ample time to react to changes.
  • Use capacity trending to gain more insight into your growth rates, and perhaps build alarms around them.
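As a rough illustration of the trending math, here is a minimal sketch in Python – the capacity, usage and growth figures are hypothetical placeholders for numbers that a tool like Capacity IQ or vFoglight would actually report:

```python
# Hypothetical numbers for illustration only; in practice the capacity,
# usage and growth-rate figures would come from your trending tool.

def days_until_buffer_breached(capacity_gb, used_gb, growth_gb_per_day,
                               buffer_pct=30):
    """Estimate days until a VMFS volume eats into its free-space buffer.

    buffer_pct is the cushion to preserve -- e.g. 30% with thin disks
    in play rather than the traditional 20%.
    """
    usable_gb = capacity_gb * (1 - buffer_pct / 100.0)  # fill ceiling
    headroom_gb = usable_gb - used_gb
    if growth_gb_per_day <= 0:
        return float("inf")  # static volume: no projected breach
    return headroom_gb / growth_gb_per_day

# A 2 TB volume with 1.2 TB used, growing 10 GB/day:
days = days_until_buffer_breached(2048, 1228.8, 10)
```

Even a back-of-the-envelope projection like this makes the point: a volume that looks comfortably under its ceiling today can breach a 30% buffer in a matter of weeks once thin disks start growing.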

Thin provisioning can still be done effectively and with significant benefits, but it does require an organization to change their monitoring processes and to put operational teams in a better position to react to the more dynamic expansion of thin disks.


If you haven’t already, you need to download and experiment with vEcoshell, which is somewhat of a PowerShell GUI for vSphere (similar to Quest PowerGUI, but focused on virtualization).  It’s a great tool, and one of the built-in queries is called “Wastefinder,” which will quickly give you an estimate of how much white space could be reclaimed with a 20% buffer built in.  The math here is designed to work with Quest’s vOptimizerPro product, but it can give you a quick idea of how much space you can potentially gain with thin provisioning as well.
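For the curious, the white-space math can be approximated in a few lines of Python.  This is my reconstruction of the idea (shrink each virtual disk to its used size plus a 20% buffer), not Quest’s actual query, and the VM sizes below are made up:

```python
def reclaimable_gb(provisioned_gb, used_gb, buffer_pct=20):
    """Estimate white space: what a disk could give back if shrunk to
    its used size plus a safety buffer (20% by default)."""
    target_gb = used_gb * (1 + buffer_pct / 100.0)
    return max(provisioned_gb - target_gb, 0.0)  # never negative

# Hypothetical VMs as (provisioned GB, used GB) pairs:
vms = [(100, 30), (250, 200), (60, 58)]
total = sum(reclaimable_gb(p, u) for p, u in vms)  # ~74 GB reclaimable
```

Note that the third VM contributes nothing – it is already within its buffer – which is exactly the kind of detail a per-VM query surfaces that a volume-level free-space number hides.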

“Will VMware become the next Novell?” and the role of the guest OS

Two questions keep coming up: “will VMware become the next Novell?” and what the role of the guest OS will be following VMware’s expected purchase of SUSE Linux from Novell.  I’ve heard a variety of opinions expressed on both, and I felt compelled to offer my own perspective.

Will VMware become the next Novell?

The implication here is that Microsoft will do to VMware (using Hyper-V) what Microsoft did to Novell NetWare with Windows NT/2000 Server.

The premise behind this question is that the hypervisor is what primarily matters and that Microsoft will eventually use Windows Server to “carpet bomb” the market with a high-volume/low-cost strategy and make VMware substantially less relevant.

We need to look at a few things here including the hypervisor today, the hypervisor tomorrow and the role of the hypervisor itself.

Today there are major differences between Hyper-V and ESX 4.1.  To pick just one, memory page sharing and memory overcommit have been around since ESX 3.5, and Hyper-V will first get page sharing with Windows 2008 R2 SP1 next year.  There’s no equivalent of DRS in SCVMM, and vSphere 4.1 just added new features like memory compression, storage I/O control and more.  Enough said.  While Hyper-V is a capable hypervisor, today there is a significant feature gap.

A picture is worth a thousand words, so here is Gartner’s Magic Quadrant for x86 Server Virtualization Infrastructure for May 2010:

Will Microsoft close this gap in the future?  The gap will almost certainly shrink in the long run as all hypervisors eventually drift toward parity, but that will take some time.

A better question, I think, is how strong are the APIs and the cloud computing ecosystem surrounding the hypervisor?

Consider that VMware has Loadable Kernel Modules (LKMs) for Cisco switches, virtualized application firewalls and agentless antivirus (vShield).  Now add the APIs available through the hypervisor for everything from SAN array integration to the vCloud API and much more.  Now look at the suite of products surrounding vSphere – especially vCloud Director.  Enterprises will want to choose solutions that enable them to best leverage the benefits of the private cloud, and a hypervisor can’t do that all on its own.

Microsoft will be a major player in virtualization, but will they be able to dominate the market the way they did against Novell’s NetWare?  This is a much different market than the late ’90s, and Microsoft would need much more than just a hypervisor with feature parity to do so.  In short, I don’t feel that VMware is in much danger of becoming “the next Novell” — and to the contrary, I think VMware is positioning itself to be a leader in cloud computing as well.

The Role of the Guest OS

At VMworld 2010, VMware CEO Paul Maritz expressed a vision of cloud computing where the guest operating system would be less relevant in the long run (i.e. 5 years) in cloud computing environments.

Some felt that VMware’s acquisition of the SUSE Linux OS was a departure from this vision and strategy.  In a sense it is, but that future vision of a well-functioning cloud datacenter where the guest OS is less relevant just isn’t a reality yet.  In fact, most cloud products are 1.0 versions today. 

An acquisition of SUSE Linux may seem to fly contrary to this long-term vision, but it could be very valuable over the next 5 years, as it gives VMware not only a measure of independence from Microsoft, Oracle, Red Hat, etc., but also the full vertical stack it needs to build cloud-ready applications.

To give an example, look at VMware’s recent Zimbra acquisition.  VMware would have a mail platform (Zimbra), an OS to run it on (SUSE), a virtualization platform to run the OS on (vSphere), and vCloud Director to quickly deploy and provision it.

In short, the idea of the demise of the traditional OS is a long-term vision that will take years of evolution to develop into reality in our datacenters.  VMware owning a major Linux distribution is a strategic play from which I expect them to significantly benefit over the next 5-7 years as the cloud evolves and matures.

Just please don’t hold me to any 5 or 7-year predictions as my crystal ball is a bit cloudy these days….  🙂

Cloud Computing and Newton’s third law


Isaac Newton’s third law of motion famously states that “to every action there is always an equal and opposite reaction.”  If one takes a few liberties and applies this to a paradigm-shifting technology, one could say that every new technology innovation brings about new challenges.

The benefits of cloud computing are quite exciting.  It enables companies to achieve greater levels of business agility while reducing TCO. It enables companies to transcend the delays associated with internal silos and gain rapid execution of business objectives.  So what’s the downside? 

Proformative recently issued a news release entitled “Economics of Cloud Computing Too Compelling to Ignore” which I thought had some interesting findings including the following:  

  • Cloud Computing and SaaS will be critical to companies over the next few years, and CFOs feel like they are already “behind the curve” and need to be educated,
  • Cloud Computing and SaaS reduce IT CapEx and OpEx and create a direct link between IT consumption and cost, and
  • Cloud Computing and SaaS have delivered to many firms higher ROI, increased collaboration, and greater confidence in systems and their business value.

I get the sense that the survey was focusing more on the public cloud (SaaS) than on the benefits of applying cloud computing principles to datacenters (private cloud).  If CFOs feel behind the curve on cloud computing, I wonder how many CIOs would respond the same way!  There’s still a great deal of uncertainty regarding cloud computing (public and private) and a desire for greater education.

But the point here is in the title — the economics of cloud computing are too compelling to ignore.  Without going into detail here (I need to save some material for future posts!), I think it’s clear that most have recognized that there are profound benefits to cloud computing — even when they aren’t fully understood.  So what’s the downside?

The Dark Side of the cloud

So is cloud computing really a silver bullet for both the business and IT or is there more to it? Mark Shoemaker writes on HP’s Grounded in the Cloud blog:  

If you think you had problems managing the old physical data centers just wait until you try to keep track of the complexity Cloud computing can generate. 

 Mark continues:  

If you were thinking of cloud computing as an easy way of taking all those hairy, non-standard servers you’ve been packing in the data center over the last few years via the wonders of Physical-to-Virtual transformation and shoving them into the cloud, think again. You are still going to need a plan to transform the old stuff but we’ll save that discussion for another time. 

Cloud computing is not a silver bullet if you don’t have strong standardization, lifecycle management, service level management, and change and configuration management. Cloud computing doesn’t solve these problems — it may even make them more important to tackle.   

The original downside to virtualization was VM sprawl, and similar problems that come from an unstructured environment can be a barrier to successful cloud computing as well.

VMware and EMC have recognized this problem and continue to develop products designed to tackle the challenges of cloud computing: 

VMware vCloud Director — VMware Lifecycle Manager is being discontinued as vCloud Director takes on lifecycle management and application provisioning from a cloud perspective.

vCenter Configuration Manager — automation of guest OS configurations and patch levels, including PCI compliance (from EMC’s Ionix family).

vCenter Application Discovery Manager — automated dependency and configuration mapping of applications (from EMC’s Ionix family).

VMware Service Manager — ITIL standards for service management in a virtualized environment.

In the interest of time only VMware products are listed above, but there are many more third-party vendors with solutions in these areas.

The benefits of the private cloud can’t make up for the fact that your standards, SLAs and processes might need some attention — the agility of the cloud will likely shed even more light on any such shortcomings.

WSJ: VMware to acquire Novell’s SUSE Linux division

Prior to VMworld, there had been rumors of a possible VMware-Novell deal in the works.  Earlier this week,  The Wall Street Journal reported that VMware is indeed in talks to acquire Novell’s SUSE Linux division, while remaining assets might be acquired by Attachmate and perhaps others. 

There won’t be any confirmation until a deal is signed, but it does appear very likely that VMware will be acquiring the SUSE Linux Enterprise operating system from Novell.  Why would VMware make such a move, and what does it mean?

SUSE is a major Enterprise Linux Distribution.  SearchDataCenter.com reported in April that SUSE Linux had captured 30% of the Enterprise Linux market share, supporting large Oracle and SAP deployments, while also supporting more applications than Red Hat. 

Chris Wolf at Gartner shares several good observations about this move, including that VMware’s efforts with JeOS (Just Enough OS) were falling short as companies wanted a more established Linux distro like Red Hat or SUSE.  Having a major Linux distribution also enables VMware to gain a measure of independence from Microsoft and others.  In other words this is really a strategic move for VMware — in the long run the OS may become less relevant, but over the next few years this gives VMware its own OS (which both Microsoft and Oracle have), which can support many initiatives ranging from large Oracle and SAP deployments on SUSE to other cloud-oriented plays.

On a side note, the question has been asked in the past whether VMware would be the next Novell — meaning that eventually Microsoft would surpass VMware and make them irrelevant.  After VMworld 2010 I suspect that a lot fewer people are still asking this question.  Two quick points here.  First, in the long run all hypervisors will move towards parity — but there are still major differences today.  Second, in the long run what becomes more important than the hypervisor itself are the APIs, the automation and the cloud computing ecosystem that is built around the hypervisor.  I’m going to go out on a limb here and suggest that VMware isn’t close to becoming the “next Novell” anytime soon — and to add a bit of irony to that, VMware appears to be about to purchase the largest piece of Novell (a piece that has little to do with Novell’s original platform, NetWare).

SUSE Linux is already available free to qualifying VMware customers due to a previously announced partnership.  For more details of VMware’s SUSE Linux offering, visit VMware’s SUSE Linux page.


I haven’t been blogging much lately, but it’s not for lack of ideas and topics.  I’m in the middle of a big project for which all the hardware just came in, plus I’ve been working on a few side projects (depending on how they turn out, I may be able to share some details in the future).

There’s a lot of news to catch up on, and this is a good time to take a closer look at what happened at VMworld a bit removed from all the flashing lights.

I hope to slowly get back into the blogging groove over the next few days!

VMworld 2010 Keynote Replay

Below is the 4-minute intro that was played at VMworld 2010 to attempt to define the cloud with a bit of humor.  During the live feed I missed the fact that the voice inquiring “What the hell is cloud computing?” actually came from Oracle HQ.  (Oracle of course has some unique support policies regarding virtualization, while they attempt to promote their own hypervisor.)

If you haven’t already, I highly recommend that you take some time to watch the replay of the entire keynote.  The value proposition of cloud computing is well explained during the keynote.  As VMware CEO Paul Maritz explained, these changes in IT will happen with or without VMware.  Applications need to divest from the traditional PC model and have flexibility in the cloud.  Automation and standards can be used to reduce operational costs.  As I wrote here, enterprises can move away from the legacy IT-as-a-cost-center model and discover a new agility that can be used to quickly support strategic business initiatives and goals.