Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it not JUST from a virtualization perspective, not JUST from a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but from a more holistic perspective which considers all of this, as well

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert I do think that there is something special about this one.  This is a much more significant jump than going from 4.x

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you everything that’s in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, a CIO, and a salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO,” “This is a strategic initiative,” “The ROI is compelling,” “There’s funds in the budget,” “Our competitors are doing it.” Some of these are better reasons than others, but here’s a question.  Imagine a

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

vSphere 5 has gone GA — Download it Now!

VMware vSphere 5 has been released and is available immediately for download!  This is a very exciting release in vSphere’s evolution which will increase the scope of what can be virtualized as well as how efficiently these workloads perform and are managed.

For more info on getting started with vSphere 5, check out this post from the VMware Support Insider.

You can download vSphere 5 here, and also access the vSphere 5 online documentation here.

As for new vSphere 5 features, many Planet v12n bloggers have made excellent and detailed posts, especially Duncan Epping at Yellow Bricks.  I haven’t gotten around to blogging on vSphere 5 yet as I have a backlog of other posts, but I downloaded it last night and hope to work with it over the next few weeks and share some observations.

VMware vSphere 5 Licensing Updated (or “Me Too!”)

Most everyone by now has posted on and/or is aware of the recent vSphere 5 licensing changes, but I felt obligated to post on this to provide links as well as to follow up on my previous “vRAM” posts.

I’m not going to rehash all the details here, but I will provide links and comment on a few points…

Summary of New Changes

vRAM entitlements have been roughly doubled from the previous numbers, such that Enterprise Plus now provides 96GB of vRAM per CPU socket.  In addition to the vRAM increase, the most vRAM a single VM can consume has been capped at 96GB (I think this is very significant, as I will discuss in a bit).

The details of the latest changes are available at the official VMware links below — which include a handy tool for evaluating the licensing impact against your current environment:

Why Change the Model At All?

This is a question I’ve heard a few times and it’s a valid one.  Some are wondering why we even need a vRAM component.  I discussed this in a previous post and I’ll share my thoughts again here:

From an economic perspective, consumption (measured via vRAM allocation) is a more accurate basis for resource valuation than physically provisioned resources (CPU/memory/etc.).  This has significant implications for chargeback, both within the IT organization and for cloud computing.

Within the IT organization it forces application owners to consider the resources they are demanding, as opposed to demanding a pool of infrastructure.  For many reasons I’m a big believer in accurately tracking the value of resource consumption and charging it back to those who request it.  Not only are there budget considerations, but oftentimes resources are used more effectively under a chargeback model.

Now let’s look toward the cloud.  In the future I believe it will be increasingly commonplace to see workloads migrated from private datacenters to service providers, or even between service providers.  As the public cloud evolves, wouldn’t it be nice for software licenses to be aligned with consumption, rather than with a physical paradigm that may differ from one environment to the next?   This is what VMware is doing, and I suspect that – over time – more and more vendors will be forced to move to a similar consumption model that is not tied to physical hardware.

I can’t put it better than Wikibon’s Stuart Miniman, who stated, “if VMware’s licensing change becomes a forcing function for chargeback, that’s a #cloud silver lining.”  Chargeback is ultimately a good thing and enables a better alignment with the economics of the cloud.

In closing, I think VMware has some very good reasons for moving in this direction.  Change is disruptive and we (and our processes) don’t usually like change, but aligning software costs with actual consumption rather than a less precise physical-server measure is a good thing in the long run, and more vendors will likely consider a move in this direction at some point.

The Monster In the Room

One of the changes getting the least attention may be one of the most important.  With vSphere 5 it is now possible to support a “Monster VM” of up to 1TB of RAM and 32 vCPUs.  Now add a few more considerations:

Proprietary platforms like AIX/P-Series and HP-UX, etc. are great platforms for the database tier, but their cost structure can be…well…profound.  It has been suggested that the “Monster VM” model for the DB tier could achieve budgetary savings over this expensive proprietary hardware even under the original vRAM model.  Well, on the cost side of the balance sheet, capping the vRAM charge at 96GB per VM just substantially reduced the licensing cost for that monster VM with 1TB of RAM.  And that’s before we consider some of the intangibles of providing cloud-like OPEX and Agility benefits to the database tier (replication, DR, recovery, hardware agnosticism, portability, etc.).
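To put rough numbers on that, here’s a quick back-of-the-napkin sketch (my own illustration, not VMware’s official licensing calculator) assuming the updated Enterprise Plus entitlement of 96GB of vRAM per CPU license and the 96GB per-VM cap:

```python
# Rough sketch of the 96GB per-VM vRAM cap (updated Enterprise Plus model).
# Figures are illustrative; consult VMware's licensing whitepaper for specifics.
import math

VRAM_PER_LICENSE_GB = 96   # updated Enterprise Plus entitlement per CPU license
PER_VM_CAP_GB = 96         # maximum vRAM any single VM counts against the pool

def counted_vram(vm_ram_gb):
    """vRAM charged to the pool for one powered-on VM under the updated model."""
    return min(vm_ram_gb, PER_VM_CAP_GB)

monster_vm_gb = 1024  # a 1TB "Monster VM"

uncapped = math.ceil(monster_vm_gb / VRAM_PER_LICENSE_GB)               # ~11 licenses' worth
capped = math.ceil(counted_vram(monster_vm_gb) / VRAM_PER_LICENSE_GB)   # 1 license's worth

print(f"Uncapped: {monster_vm_gb}GB counted, ~{uncapped} licenses of vRAM")
print(f"Capped:   {counted_vram(monster_vm_gb)}GB counted, ~{capped} license of vRAM")
```

In other words, without the cap a 1TB VM would have drawn the equivalent of roughly eleven Enterprise Plus licenses’ worth of vRAM from the pool; with the cap it draws just one.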

Is vSphere 5 the release that takes on the DB tier, where we see workloads moved off “big iron” RISC hardware?  It could be.  When you can run SAP at 94% of native and gain all the CAPEX, OPEX and Agility benefits, I have to think that IT shops will want to pilot this and explore the potential benefits and savings in more detail.

Now Can We Talk About What vSphere 5 Can Do?

Yes!  It’s a shame that so much of the vSphere 5 launch was focused on licensing rather than the amazing new capabilities.  Many have been blogging extensively about the new vSphere 5 features and their value, and I hope to be joining their ranks soon as there is much to talk about!  vSphere 5 — especially when combined with the complementary cloud products (vCloud Director, vShield and more) — has so much to offer not just the IT professional but the organization looking to capture the CAPEX, OPEX and Agility benefits of the cloud computing model.

RUMOR: vSphere 5 Licensing May Be Updated — 96GB per CPU with Ent. Plus

UPDATE:  CRN has quoted sources adding more weight to these rumors.  Will link to official announcement when available.

Rumors are circulating that VMware may be changing their vSphere 5 licensing scheme.  (My original post on the topic is here.)  The vSphere 5 terms were just modified a few days ago for the Service Provider program, so this would not be completely unexpected.

Gabe at Gabe’s Virtual World has a post that details the rumor and I hope he won’t mind if I quickly cut and paste the changes as he has documented:

  • VMware vSphere 5 Essentials will give a 24GB vRAM entitlement
  • VMware vSphere 5 Essentials Plus will give a 32GB vRAM entitlement
  • Max vRAM in Essentials / Essentials Plus will be capped at 192GB vRAM
  • VMware vSphere 5 Standard vRAM entitlement has changed to 32GB ( <- my assumption)
  • VMware vSphere 5 Enterprise vRAM entitlement will be doubled to 64GB
  • VMware vSphere 5 Enterprise Plus vRAM entitlement will be doubled to 96GB

If the rumor turns out to be true, this means that with 96GB per CPU of Enterprise Plus, you can now allocate (not just provision) 192GB of vRAM per 2-CPU (socket) host.

Another part of the rumor is that the maximum vRAM allocation that can be consumed by a single VM is capped at 96GB!  This would mean that if you deploy a “Monster VM” with 1TB of RAM, for example, only 96GB will count against your vRAM licensing!

This would have big implications for large DB tiers, for example.  Consider Chad Sakac’s recent (and excellent) post about virtualizing large Oracle DB tiers on vSphere 5.  This makes it even more compelling to move away from those high-cost, proprietary platforms, and to extend the benefits of virtualization — and hopefully cloud computing concepts — to those large DB tiers.

This would be a welcome change which should largely remove licensing concerns as an obstacle to vSphere 5 adoption.

Happy Birthday John!

Today is John Troyer’s (@jtroyer) birthday and he deserves a huge round of thanks from the VMware community that he has spent so much effort developing and promoting. Being a freshman vExpert, I’ve already developed a greater appreciation of his tireless efforts for the community. In fact, I probably wouldn’t have had the vExpert opportunity if John hadn’t taken the time to respond to me and review my blog when I was just starting out.

So many in the community (vExpert or not) tremendously appreciate John’s efforts – from podcasts, to events, to just providing guidance and direction for the VMware community. This community is incredibly impassioned and engaged and John’s efforts have a great deal to do with that.

I’m really excited about working with the vExpert community this coming year and everything John organizes around it. Thanks for all you do John and have a great birthday!

vSphere 5 Licensing — Not Quite So Bad?

With the vSphere 5 details out, most of us (especially VMware) would rather be talking about the new and exciting capabilities, but the new licensing model has generated a lot of attention and concern.  The potential impact is broad and needs to be understood, and perhaps some are asking “why?” as well.

I wanted to walk through a few scenarios and also discuss the impact and potential reasons for this change.

UPDATE 4:  VMware has modified the license model and is granting additional entitlements.  Details here.

UPDATE:  Please look at Gabrie van Zanten’s post, which goes into great detail on vSphere 5 licensing.

UPDATE 2:  Added “Monster VM” section inline below.

UPDATE 3:  VMware provides an excellent breakdown here

vSphere 4 was licensed primarily on a per-CPU-socket basis, but vSphere 5 will introduce a new pooled vRAM component in which a certain amount of vRAM allocation is included with each CPU license depending on the edition (for example, Enterprise Plus provides 48GB of vRAM for each CPU license – details in the vSphere 5 licensing whitepaper).   For many, the first impression was one of concern that this would lead to higher costs.  Are these concerns justified?

Baseline

For a starting scenario, let’s take a look at a sample environment consisting of 2-socket (CPU) hosts with 96GB RAM running Enterprise Plus. Because there are 2 CPU sockets, the customer is entitled to 96GB of pooled vRAM (48GB x 2), which means there will be no net change in licensing cost.  The bottom line: there is no increase in cost, provided that the memory-to-CPU ratio does not exceed what the license provides (48GB per CPU in Enterprise Plus).

But what if we increase the RAM in each 2-CPU host to 128GB?

vRAM versus RAM

Physical RAM installed in the host is not the same as allocated vRAM.  vRAM is not allocated until a VM is powered on and requests access to the memory pool.  In other words, your host may have 128GB installed, but the real question is how much of that 128GB is allocated to powered-on VMs?

Within the boundaries of a single host, it would be necessary to purchase additional licenses if your allocated vRAM exceeded 96GB in this scenario, but most of us aren’t working within the boundaries of a single host anyway….

Aggregation

The vRAM pool is aggregated across all the hosts managed by your vCenter server, including vCenter instances that are linked to one another.  Because the pool spans all your hosts, chances are you will have multiple hosts whose allocated vRAM sits below the “waterline,” and all of that unused entitlement rolls up into the larger pool.  A well-designed environment would not use 100% of physical RAM, but rather something closer to 85%.

To illustrate aggregation in action, let’s say that Host A has allocated 140GB of vRAM, while hosts B, C and D are allocating 60GB each.  With four 2-socket Enterprise Plus hosts, the pool is 384GB (4 x 2 x 48GB), so at 320GB allocated you’re still 64GB below the vRAM limit and have not incurred any new cost.  Aggregation allows you to leverage unused capacity from multiple hosts into one big vRAM pool.
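If it helps to see the arithmetic spelled out, here’s a minimal sketch of that pool math (an illustration of the model as I understand it, not a licensing tool), assuming four 2-socket hosts at the Enterprise Plus entitlement of 48GB per CPU license:

```python
# Minimal sketch of vRAM pooling across a vCenter inventory.
# Entitlement figure is illustrative (Enterprise Plus: 48GB vRAM per CPU license).

VRAM_PER_CPU_LICENSE_GB = 48

# (host, CPU sockets, allocated vRAM in GB) -- the example from the text
hosts = [
    ("Host A", 2, 140),
    ("Host B", 2, 60),
    ("Host C", 2, 60),
    ("Host D", 2, 60),
]

pool_gb = sum(sockets * VRAM_PER_CPU_LICENSE_GB for _, sockets, _ in hosts)
allocated_gb = sum(alloc for _, _, alloc in hosts)

print(f"Pooled entitlement: {pool_gb}GB")                 # 384GB
print(f"Allocated vRAM:     {allocated_gb}GB")            # 320GB
print(f"Headroom remaining: {pool_gb - allocated_gb}GB")  # 64GB
```

Host A is well over its “own” 96GB entitlement, but because the pool is evaluated in aggregate, the unused headroom on hosts B through D covers the difference.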

Core Restrictions Lifted

In vSphere 4 you were limited to either 6 (Standard, Enterprise) or 12 (Enterprise Plus) cores per CPU.  With Intel and AMD shipping CPUs with more and more cores in each generation, many customers would have had to purchase more licenses to satisfy the number of cores in today’s and tomorrow’s processors. In vSphere 5 the limitation on the number of cores was lifted entirely, which by itself would mean less revenue for VMware to drive their R&D efforts.   Well, they didn’t give up the revenue so much as they repositioned it, from a physical core restriction to a consumption model based on memory.

The results will vary somewhat depending on the environment, but should software be licensed on the physical hardware that we provision, or on the actual resources that we consume?  As we’ll get into below, there may be several advantages to the consumption approach.

Cost Bottom Line

The bottom line is how much vRAM is being ALLOCATED (not provisioned) for each CPU in your environment, averaged across ALL the hosts in your vCenter environment (including linked vCenter servers). Certainly there may be some environments where the average memory allocated per CPU will exceed the licensed allotment, requiring additional licenses to be purchased, but this must also be balanced against licenses that may have been saved by the removal of all core restrictions on the processors.

The Density Problem

Some of the biggest concerns may come from environments that have standardized on larger blades (128GB or 192GB) and are achieving very high density ratios.  These environments seem far more likely to be impacted by the licensing changes.  It will be very interesting to see this play out, and to see how this licensing change might influence decisions on blade sizing for vSphere environments.

The Monster VM

vSphere 5 technically enables the “Monster VM” by supporting 32 vCPUs and up to 1TB of RAM, but at first glance the new licensing scheme would seem to reduce the incentive to leverage this capability.  VMware’s Scott Sauer pointed out to me that many “Monster VM” candidates would be coming from proprietary RISC/UNIX systems (HP-UX, AIX, etc.).  When you factor in the savings from moving away from these high-cost, non-cloud platforms, the Monster VM will be far cheaper in most cases, and that’s before you consider the benefits of other intangibles (cloud, ops, DR, etc.).

Chargeback And The Cloud

From an economic perspective, consumption (measured via vRAM allocation) is a more accurate basis for resource valuation than physically provisioned resources (CPU/memory/etc.).  This has significant implications for chargeback, both within the IT organization and for cloud computing.

Within the IT organization it forces application owners to consider the resources they are demanding, as opposed to demanding a pool of infrastructure.  For many reasons I’m a big believer in accurately tracking the value of resource consumption and charging it back to those who request it.  Not only are there budget considerations, but oftentimes resources are used more effectively under a chargeback model.

Now let’s look toward the cloud.  In the future I believe it will be increasingly commonplace to see workloads migrated from private datacenters to service providers, or even between service providers.  As the public cloud evolves, wouldn’t it be nice for software licenses to be aligned with consumption, rather than with a physical paradigm that may differ from one environment to the next?   This is what VMware is doing, and I suspect that – over time – more and more vendors will be forced to move to a similar consumption model that is not tied to physical hardware.

I can’t put it better than Wikibon’s Stuart Miniman, who stated, “if VMware’s licensing change becomes a forcing function for chargeback, that’s a #cloud silver lining.”  Chargeback is ultimately a good thing and enables a better alignment with the economics of the cloud.

In closing, I think VMware has some very good reasons for moving in this direction.  Change is disruptive and we (and our processes) don’t usually like change, but aligning software costs with actual consumption rather than a less precise physical-server measure is a good thing in the long run, and more vendors will likely consider a move in this direction at some point.

Once people take a moment to “do the math,” many may conclude that the new licensing model is not nearly as bad as they had first imagined.  Those running larger hosts (128/192GB) will likely be the most impacted and will want to review the new licensing model before they size their servers.

Got VDI? Check Out The Free Online VMware View Bootcamp On July 19!

For many IT shops, VDI (Virtual Desktop Infrastructure) is the first step in their private cloud journey.   Automation, on-demand provisioning, access from mobile devices —  VDI has it all, while providing profound benefits to the enterprise over the traditional physical desktop provisioning and management model.

For a while many felt that Citrix was the only game in town, but that changed with the release of VMware View 4, which included the PCoIP protocol.  Today Citrix and VMware View are roughly equal in VDI market share, but they may not be as equal in their architectures.

In fact, VMware View is so well integrated with the vSphere platform that VMware has produced a series of videos to illustrate the big differences in management overhead and complexity.  The video below demonstrates how creating a master image can be done in 16 seconds using VMware View, while it takes nearly 16 minutes using Citrix XenDesktop.


There are 3 more videos in this View vs. Xen series which can be seen here on VMware View’s YouTube channel.

VMWARE VIEW BOOTCAMP

If you haven’t heard, VMware is making some kind of big announcement on July 12.

VMware clearly expects some of their new announcements to increase interest in their VMware View VDI solution and will be offering a free online bootcamp starting on July 19, with a new video seminar to be released each day.

You can preview the videos and register for the bootcamp here.  Enjoy the View!

vExpert 2011!

It feels amazing, humbling and even a bit crazy that I’ve been selected to be a vExpert for 2011.  I thought I’d take a minute to recap the experience of the past year and draw some observations from it.

AN EXPERIMENT LAUNCHED

After being a loyal follower of the Planet v12n blog feed for some time, I eventually decided to launch my own blog experiment in order to share some ideas from a recent project where we leveraged VMware ESX to move a datacenter.  I taught myself WordPress, came up with a name, and began posting on Blue Shift Blog about a year ago today, in July of 2010.

At this point I was about as green in both social media and cloud as one could be.  If I had been asked to define “cloud” at the time, my definition most likely wouldn’t have extended far beyond a publicly hosted service.  I had never used Twitter or Facebook before and created my first accounts on each service for Blue Shift.

After about 2 months and 20 posts, it was suggested to me that I might want to inquire with John Troyer about a few things.  In the process of doing so, John took the time to review my mostly unknown blog and then, to my surprise, took a chance on me and added Blue Shift to the Planet v12n blog feed.

Over the next few months I found myself intrigued by the cloud and the value proposition of converged infrastructure.  One day I was observing a contentious back-and-forth on Twitter which left me with some questions about competing converged stacks that I didn’t feel were being answered.  I reached out to EMC’s Chad Sakac, who graciously agreed to answer a series of questions for a blog interview on converged stacks.  I learned a tremendous amount from that experience and I hope others in the community found it valuable as well.  It also prompted me to start my as-yet-unfinished “Agility” series of posts.

At the same time I was watching and participating in more and more conversations about cloud computing.  I’ve learned so much over the past year that I can now engage in discussions with people like “the president of the private cloud” and feel like I’m not completely embarrassing myself.  So many people I’ve looked up to for so long were willing to engage with me in cloud discussions, validate my thoughts, and advance the community in general.  I was now having intelligent conversations on topics like cloud computing with individuals for whom I have the highest respect and admiration.

This alone was immensely satisfying and rewarding, so imagine my surprise to learn last week that I would be joining many of these same individuals as a VMware vExpert.  After looking up to the “rock stars” of the vExpert community for all this time, it feels crazy and amazing to be selected into the same group.   I don’t want to name names here for fear of unintentionally leaving someone out, but to be recognized alongside all the other vExperts is simply a tremendous honor, and I’m humbled to be included in this group and community.

In summary, the vExpert award almost makes me feel a bit like this

COMMUNITY OPPORTUNITY

Two things from the above stand out for me.  One is the amazing community that John Troyer and his team have developed over the years.  It’s not just that there is so much amazing talent in the community, but that there are so many great people willing to develop and support the community at large using blogs, Twitter, VMTN forums, VMUGs and so much more.  This really is a special and thriving community and I am very proud to be a part of it.

The other thing that stands out for me is opportunity.  Maybe you have some ideas or a fresh perspective you would like to share.  If so, you’re no further behind than I was a year ago.  Start a blog, a presentation, or whatever you feel is your strength and begin sharing with the community.  There’s always a need for fresh perspectives and there are so many areas in virtualization/cloud to specialize in.  Just jump on in!

At times I was disappointed that my ability to work with VMware products was limited to ESX 3.5 and that I didn’t have more access to some of the great technologies others were writing about, but as my experience shows, this isn’t always necessary — sometimes there’s great value in just discussing concepts and trends.

BLOG STALL

The 2011 vExpert award is based on contributions made in the 2010 calendar year, and for the past 5 months this blog has been rather quiet.  There are a few reasons for that slowdown, including:

  • A bad habit of tweeting things that may be blog-worthy
  • The birth of our 3rd child
  • Major post-birth medical complications for my wife
  • Lack of opportunity to work with VMware products

Needless to say, my work is cut out for me for vExpert 2012, but many of the above conditions are finally changing, plus there will be a lot of exciting things to talk about in mid-July!

In summary I am honored to be a vExpert and I’m very excited about the potential of that program to create even more opportunities for me to contribute to the community.

One more special thanks to John Troyer and the rest of the team at VMware for all they do to develop and promote this great community!

Watch the vSphere 5 Launch on Tuesday, July 12th!

VMware just announced a live webcast to be held on Tuesday, July 12 titled “Raising the Bar, Part V” in which Paul Maritz and Steve Herrod will “unveil the next major step forward in cloud infrastructure”.  As the “Part V” suggests, speculation is that vSphere 5 will be introduced during this live webcast.

What do we know about vSphere 5?  One thing would seem to be a significantly improved capacity to handle high-end workloads.  Back in February, EMC’s Chad Sakac made a post detailing EMC’s success at virtualizing Oracle internally, but mentioned that the database tier had not yet been virtualized, that EMC had been working with VMware on “future vSphere releases,” and that “one of the main goals of that next release is to handily virtualize the DB tier.”  We will have to wait for the details of course, but at the least we know that VMware has been working toward supporting more demanding workloads in future releases, and some of that may appear in vSphere 5.

There have also been some “leaks” regarding vSphere 5, which is rumored to include the following new capabilities:

  • Support for up to 512 VMs per host (!!!)
  • Support for up to 160 logical CPUs and 2 TB of RAM per host
  • vSphere Auto Deploy combining host profiles, Image Builder and PXE
  • Unified CLI framework, allowing consistency of authentication, roles and auditing.
  • Support for up to 1 TB of memory per VM
  • Support for 32 vCPUs per VM
  • Client-connected USB devices
  • Apple Mac OS X Server 10.6 (Snow Leopard) guest OS support
  • Improved version of the Cluster File System, VMFS5
  • Accelerator for specific use with View (VDI) workloads, providing a read cache optimized for recognizing, handling and deduplicating VDI client images.
  • Storage APIs – Array Integration: Thin Provisioning, enabling reclamation of blocks on a thin-provisioned LUN on the array when a virtual disk is deleted
  • 2TB+ LUN support
  • Storage vMotion snapshot support
  • vNetwork Distributed Switch improvements providing better visibility into VM traffic
  • vCenter Server Appliance
  • Revamped VMware High Availability (HA) with Fault Domain Manager

Some of these rumored capabilities have been discussed elsewhere.  For example, back in November, Duncan Epping at Yellow Bricks blogged about VAAI and mentioned that dead space reclamation for thin disks was being developed, which appears among the rumors above.

REGISTER FOR THE EVENT

VMware vSphere has established itself as the premier virtualization platform in an industry where cloud computing is gaining in importance.  Just this week Gartner began advising that businesses should not only be shifting from traditional computing to virtualization, but should also be leveraging virtualization to build private clouds.

VMware will certainly be raising the bar with vSphere 5, so be sure to watch the announcement and learn about the details of vSphere 5.  You can register for the event here.

Coming Soon…

I haven’t posted in some time (not for a lack of topics/ideas) so I thought I’d offer a quick update…

Sometimes I find myself sharing information via Twitter rather than taking the time to blog about it, but sometimes more than 140 characters is needed to make a point or two 🙂

One reason for the pause is time: with a newborn still needing near-constant attention, among other events, it’s been difficult to find the time as of late, and I hope that as we get into July this will improve.

A second reason is the nature of my work. I haven’t had the opportunity to work with VMware products for some time, but that’s about to change as well.  We will be upgrading from ESX 3.5 to vSphere 4.1 and in the process we will be addressing many things, including:

  • New vSphere architecture
  • Optimized backups and integration with 3rd-party systems
  • vCenter Heartbeat
  • IBM SVC integration (VAAI and vCenter Plug-In)
  • Opportunities for automation and much more…

As time allows, I hope to have the opportunity to share observations and lessons learned from these activities which I expect to begin in earnest in July.

I also have not finished posting on Agility and the value of converged stacks, and I hope to continue that series if time allows.  Coming soon as well is my daughter’s Make-A-Wish story, which I really hope to finish and post this month.

So hopefully things will slowly pick up here as the summer progresses…

Today is World Wish Day!

Today celebrates the very first wish granted, which marked the beginning of the Make-A-Wish Foundation.  Since that day, Make-A-Wish has grown into an international organization of almost 25,000 volunteers dedicated to granting the wishes of children with life-threatening medical conditions.

The best way I can share what the Make-A-Wish Foundation means to these children and their families is to share the experience of my own daughter, who was granted her wish last year.  Unfortunately, due to having a newborn and some medical issues in the family, I haven’t had the time to write her story for this special day.

I will attempt to post my daughter’s story soon, but in the meantime I will share this special video of a boy who got his wish to spend the day with Michael Jordan:

You can learn more about the Make-A-Wish Foundation, including ways in which you can volunteer, at http://wish.org

Blue Shift is an E-Ambassador for the Make-A-Wish Foundation

The Make-A-Wish Foundation is an outstanding organization and it is a privilege to be able to share with my readers stories and details about their incredible mission, which has been enriching lives since 1980.  Below I’ll break down what this means, as well as ways in which you can get involved.

HOW WILL BLUE SHIFT BE INVOLVED?

From time to time, I will post and/or link to stories about Make-A-Wish recipients as well as news and event updates (some updates will be sent as tweets).  The first post will be the story of my own daughter, who was granted a wish by the Make-A-Wish Foundation last year; I hope to post her story here in the near future.  Sharing experiences like mine and others’, I think, is the best way to demonstrate what the Make-A-Wish Foundation is all about and the profound ways it changes lives.

You may also notice that Blue Shift now displays banners for the Make-A-Wish Foundation at the top right of every page.

WORLD WISH DAY AND WAYS TO GET INVOLVED

On April 29, 1980 the first wish was granted, and in celebration of this beginning, April 29 is World Wish Day — a global celebration of wish making.

There are many ways to help promote the Make-A-Wish Foundation’s mission, including:

  • Sharing stories with social media (Twitter/Facebook)
  • Donating time, money or frequent flier miles
  • Corporate sponsorships

For more details on all the ways you can help, please visit http://wish.org/help

WILL YOU STILL BE BLOGGING ABOUT VIRTUALIZATION, CLOUD, TECH, ETC.?

I can assure you that it is still my intention to continue making technical posts.  My posting has been slow for several reasons over the past few months (one of them being preparation for the birth of our 3rd child, perhaps just a matter of hours from now).  At this point I don’t know how frequently I will be posting, but it certainly is my intention to continue blogging from time to time on IT/tech/management issues.

We hear much about “work-life balance,” and this is a way to bring such balance to Blue Shift.  Life is not all about work, and I hope that adding an occasional Make-A-Wish story to the “normal” content here will make for a good mix and balance.

Blue Shift is Downshifting

The next few months will likely be very quiet for me blogging-wise.  There are several obstacles, but perhaps the biggest is that I will continue to have very little opportunity (if any) to work with virtualization (let alone cloud concepts), or even storage, over the next quarter or two.  Another obstacle is that our third child is due within the next 8 weeks and our housing situation is a challenge (to put it mildly), so some effort is required toward making a semi-functional environment.  And this doesn’t even include the 2 electives I have remaining to complete my MBA.  In order to either read or write about virtualization or cloud I basically have to “steal” time from somewhere.

I really want to do a lot more posts in the Agility series, talk about the Vblock, talk about clouds and value, and much more.  As excited as I am to share some examples of the value of integrated stacks like the Vblock, I’m even more frustrated at my inability to find time to write about them.   Perhaps I’ll get lucky and find some time here and there to crank out a few posts.  While my blogging activity may decrease, I’ll still be on Twitter bringing attention to articles I find of interest and other random thoughts 🙂