vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you what’s all in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in


Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.


VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to


What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores more of like services (just keep in mind that several such services existed before it


Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO” “This is a strategic initiative” “The ROI is compelling” “There’s funds in the budget” “Our competitors are doing it” Some of these are better reasons than others, but here’s a question.  Imagine a


Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….


Should You Virtualize vCenter Server (and everything else?)

When concerns are raised around virtualizing vCenter Server, in my experience they usually revolve around performance and/or out-of-band management. The VROOM! blog at VMware just published a whitepaper that looks closely at vCenter Server performance as a VM versus native (physical), which speaks to these concerns as well as to other workloads. vCenter Performance


Can your VM be restored? VSS and VMware — Part 2 (updated)

The backup job for your VM completed successfully so the backup is good, right? Unfortunately it’s not that simple and a failure to effectively deal with VM backups can result in data loss and perhaps even legal consequences.


The Nokia Lumia 920 Experience

Recently I was given the opportunity to test drive Nokia’s flagship phone – the Nokia Lumia 920 (Engadget Readers’ Choice 2012) – and I thought I would share some highlights from the user experience.  My current phone is an HTC EVO 4G LTE, which is quite similar in specs, but for many functions I found myself favoring the Windows 8 experience on the Nokia Lumia 920.

Hardware

The first thing I noticed is that the Nokia Lumia 920 was solid – it was a bit thicker and heavier than the EVO, but as I got used to it, I found that I didn’t mind the added size (Nokia is now selling new models such as the 925 and 928 which are a bit lighter).

This page at TechCrunch shows detailed specs for both phones side by side.  They are quite similar in CPU; the Nokia has a slightly smaller screen (by .2 inches) but greater pixel density as a result.  The Nokia also has a “super brightness” capability which can make it readable even with sunglasses in bright sunlight – something I couldn’t do on my EVO.  Each has its pros and cons, but I found myself willing to accept a .2” loss in screen size for the extra brightness.

As for the camera, the Nokia PureView lens is actually suspended in liquid to give it more protection from vibrations and movement.  Both phones have an 8MP camera and can take rapid-fire shots where you select the best picture (for the Nokia this requires the “Blink” app).  Both took excellent pictures, and I did not do any testing serious enough to tell the difference.  Below is a picture I took with the Nokia at a circus, but don’t rely on my photo skills — Nokia has an impressive collection of pictures taken by Nokia phone users here.

At the Circus (taken with the Nokia Lumia 920)

Another hardware innovation available on the Lumia 920 is wireless charging.  My trial did not include a wireless charger, but the phone is ready for it out of the box.  At present the wireless charger sells for just under $50 on Amazon.

SETUP EXPERIENCE

Truth be told, this was one of the fastest and most impressive user setups I’ve seen.  I added the SIM card, went through the setup, and provided my Windows Live ID (my hotmail.com address — hey, I was an early adopter).  Once I did so it immediately pulled in information from my email account as well as contacts, music and more.  When setup was done the Microsoft Office app caught my eye and I launched it – it automatically connected to Skydrive and I was able to load the PowerPoint I had been working on that day – properly rendered – in a snap.  Music I had purchased/streamed from Xbox Music was also right there ready to be streamed or downloaded.  I think the process took a mere 2 minutes to get set up.

Here some of the value of the Microsoft ecosystem becomes clear.  Skydrive, Office, Email, Music, XBOX and more working seamlessly across the PC and mobile worlds.

EMAIL

For me this is where the Nokia shined – when I had both phones available I found myself preferring the Nokia for this reason.  I created Windows 8 Live Tiles for the 4 email accounts I use the most.  I could set different alerts (sound, vibrate, etc.) for each, and with just a glance at my home screen I could see which accounts had how many emails; more importantly, I could move between inboxes at the “speed of tap”.  The email UI is fast and responsive such that I could quickly check multiple inboxes in seconds.  Doing the same on the HTC EVO (Android) just wasn’t as fast or as seamless an experience to me.

APPS

The Live Tile experience is nice with many apps as well.  On my Android phone I have to launch various apps to see status, but with Live Tiles I can see updates from Skype, Box, The Weather Channel, Twitter and much more right on my home screen.  With Box, I can instantly see on my home screen updates about new files, or files being accessed and modified.  Also, the Skydrive and Office applications give me about as good an experience viewing and editing Office documents on a phone as I could imagine.  For me, the rich Office experience combined with Box and Skydrive and the rich email experience offers the most value in a work/productivity context.

The Lumia also came preloaded with many Nokia apps including Nokia Music and the HERE series of apps (Maps, Transit, City Lens, and Drive).  The HERE apps (which I liked) were really useful for a quick answer to “what’s around here for [dining/shopping/etc.]”, and the City Lens app gives you the option of using the Lumia’s camera to overlay labels on the buildings around you.

Sometimes Windows phones get knocked for not having enough apps (as Android used to be), but I wonder if those critics have looked at the app store lately.  Sure, there aren’t 16 different versions of “Cupcake Maker”, but the apps I wanted were there, ranging from Netflix and Twitter to Pandora, Skype and much more.

SUMMARY

In summary, the Nokia Lumia 920 worked so well for me that I quickly found myself favoring it over my Android-powered HTC EVO 4G.  One of the biggest areas of utility for me was the rich, “speed of tap” email experience combined with Skydrive, Office and Box.  The next time I find myself purchasing or upgrading a phone, I will definitely be seriously considering the Nokia Lumia offerings.

A Tale of Two Clouds (The Hybrid Cloud Is The New Normal)

It was the best of times, it was the worst of times, it was the peak of inflated expectations, it was the trough of disillusionment, it was an epoch of unicorns and rainbows, it was an epoch of engineers and managers looking at each other in bewilderment.  Is this a cloud I see before me?   Come let me clutch thee — I have thee not and yet I still see thee.  Et tu, Brute?

OK, enough with the Dickens-Shakespeare mashups, but I would like to talk about two islands — on-premises systems and public IaaS clouds — and how and why we might connect them.  Before we talk about how to connect these islands, let’s first review why….

THE PUBLIC OPPORTUNITY

Today most organizations have an on-premises datacenter upon which they might have a private cloud, or just a virtualization infrastructure, or…perhaps something else entirely.  What are some reasons they might want to move some things from this “private island” to a public island?  Is the public island cheaper?  Well, not always…

As Chad Sakac explains in this excellent post, the technology costs for public cloud often aren’t any cheaper and can even be more expensive.  But there are also some variable costs here — the cost of your datacenter space, the cost of your infrastructure, power, cooling and the staff needed to maintain it.  When these costs are considered, it’s possible that purchasing infrastructure as a utility might be less expensive for the organization.
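To make that trade-off concrete, here is a rough back-of-the-envelope comparison.  Every figure below is an illustrative placeholder I made up for the example (not real pricing); plug in your own facility, staffing and infrastructure numbers to see which side of the line your workloads land on.

```python
# Back-of-the-envelope TCO comparison: on-premises vs. public IaaS.
# All figures are illustrative placeholders, not real pricing.

vm_count = 200
years = 3

# On-premises: infrastructure plus the "hidden" variable costs.
capex_servers_storage_network = 600_000          # hardware refresh for the period
datacenter_space_power_cooling = 90_000 * years  # facilities cost per year
admin_staff = 2 * 120_000 * years                # fully loaded staff cost per year

on_prem_total = (capex_servers_storage_network
                 + datacenter_space_power_cooling
                 + admin_staff)

# Public IaaS: a flat utility rate per VM per month (again, made up).
iaas_rate_per_vm_month = 250
public_total = vm_count * iaas_rate_per_vm_month * 12 * years

print(f"On-premises 3-year cost : ${on_prem_total:,}")
print(f"Public IaaS 3-year cost : ${public_total:,}")
print("Cheaper option          : "
      + ("public IaaS" if public_total < on_prem_total else "on-premises"))
```

With these particular made-up numbers the on-premises side wins, which is exactly the point: the answer depends entirely on your own variable costs, not on a blanket assumption that public is cheaper.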

But perhaps more important at times than cost, public cloud is quick and easy to consume.  No lengthy procurement process, nothing to order, ship and then deploy and configure — it’s all there, ready to be consumed when you need it (or in other words…Agility).  Unless you already have on-premises capacity, it will usually be quicker to consume public IaaS resources.

Now some business-critical workloads may not be candidates due to security concerns (real or perceived), governance requirements, operational constraints and many other reasons.  To back up this point, a recent survey posted at Gigaom revealed that 98% of surveyed IT executives plan on expanding their datacenters to run internal private clouds, with 61% citing security as the reason public clouds were not selected.

The private cloud simply isn’t going anywhere anytime soon — a strategy for leveraging the utility computing model that focuses exclusively on workloads hosted on public IaaS misses the internal/private elements of the datacenter, which are most likely a larger piece of the pie.  To fully unlock the potential value of utility computing, both sides must be addressed and a strong bridge must be built to connect our multiple “islands”.

That leaves us with some good starting use cases for public cloud — test/dev workloads, web tiers, seasonal capacity and new initiatives.  And for many environments this will be something less than half or even less than one-third of the datacenter.  So now we have two islands…..how do we connect them?

BUILDING A BRIDGE

So let’s say you’ve got a VMware infrastructure in your on-premises environment and you want to consume IaaS from one of the many vCloud providers.  Well, you can start by using vCloud Connector, which can package up and migrate workloads and move them over to the hosted vCloud environment.  But this really isn’t so much a bridge between your “islands” as it is an occasional ferry that can package up VMs as OVAs and transport them back and forth.  At some point we’re going to need something more than this.

The Advanced version of vCloud Connector adds some valuable features but will not be available to VSPP (VMware Service Provider Program) partners until later this year.  The first feature is a Layer 2 VPN, which allows subnets to be spanned across your “islands”, making it no longer necessary to change the IP scheme when you move between islands.  The second feature is content synchronization, which allows your VM templates (a part of a provisioning service catalog) to be kept consistent across all of your islands.  Now we’ve just replaced our ferry with a small bridge between our islands.

As we look to the future there’s also more coming, both from the vCloud Suite as well as VMware’s upcoming NSX offering — Layer 2 VPN, VXLAN, and virtual firewalls.  Imagine if workloads could be moved between your islands, with IP addresses, routing and firewall rules all maintained during the migration.  Now our one-lane bridge just became a highway as we truly begin to unlock the value of utility computing.

There’s some important lessons here for both service providers and IT organizations looking to unlock the value of utility computing.  Many of us have been focusing on public cloud (IaaS) but now we realize that this may be only viable for something less than 100% of all workloads — we may need to make investments in our on-premises environments to fully unlock the value.  There’s use cases for both private cloud and public cloud and not every workload is likely to fit in the same bucket.  This is where it becomes clear that hybrid cloud will be the new normal — and for many organizations this means making investments in their on-premises environment such as moving beyond mere virtualization to the vCloud Suite for example.

SERVICE PROVIDERS

Let’s say your goal is to help your customers unlock the potential value inherent in the utility computing model.  You start selling hosted/public IaaS to your customers — which does have value and use cases — but now you’re limiting your scope to faction of the total pie — the workloads that meet the test/dev, web tier, seasonal capacity, new project criteria.  If you want to help your customers unlock the potential of utility computing you need to make investments in the on-premises environment as well.

Help your customers move beyond virtualization by offering converged infrastructure solutions like the FlexPod and the Vblock, powered by VMware’s vCloud Suite, and perhaps adding orchestration, automation, service catalogs and more.  As you help them down this path, not only will you be unlocking the value of utility computing in their on-premises environment, but you’ll be helping them to build a better on-ramp and bridge to those vCloud-powered public clouds.  As new capabilities like vCloud Connector Advanced and VMware NSX become available, the ability to build a strong and sustainable bridge between these islands grows dramatically.  Perhaps more importantly, you are unlocking the value of the workloads that may be captive to the on-premises environment — perhaps more than half of the entire datacenter.

If I can stand on my soapbox for a minute here, I’ve always believed that the best sales approach is a strategic and consultative approach at the CxO levels.  Talk about the full environment and strategy and opportunities for synergy.  The service provider needs to understand the customer’s environment and needs while the IT organization may need guidance on technology, trends and what strategies could be the most effective.  Contrast this to the traditional volume approach where the focus is “what can I sell you this quarter so I can meet my numbers?”.  Sales can enter the cloud era by focusing on long term strategy and not short term volume — ultimately this will lead to unlocking more value for both parties in my opinion.  [Stepping down from my soapbox…]

THE IT ORGANIZATION

Many IT organizations are at different steps along their evolutionary cloud journey.  Some have adopted virtualization, some have adopted private clouds, and others are at varying points along this spectrum.  If you’re running VMware — you may want to be looking at building up your private cloud with the vCloud Suite while at the same time you look into leveraging vCloud IaaS providers where it makes sense.

Look into converged infrastructure solutions; look into the vCloud Suite — look into automation, orchestration for your workloads.  Improve your posture for those workloads you don’t expect to move to public IaaS providers in the coming years.  Unlock value and improve your foundation in order to build better bridges.

The hybrid cloud is likely to be the new normal — build up both your private and public clouds to unlock value and build strong bridges in between.  Seek out a service provider that can help you unlock the value of utility computing on BOTH of your “islands”.

Discussion: Is the Public Cloud Market Destined to Become An Oligopoly (like the airlines)?

At some point public IaaS offerings become a matter of cost and scale as consolidation and economic realities take over and opportunities to differentiate are reduced.  Gartner’s Magic Quadrant for IaaS currently ranks the top 15 IaaS players by market share, but is more consolidation inevitable, not unlike what we have seen in the airline industry?

The winning APIs and models will likely become magnets for the 3rd parties and MSPs as the market determines winners and losers.  AWS is currently the market leader, but could SDN offer a disruptive paradigm shift?  Could VMware’s upcoming offering attract organizations that are still using VMware-based private clouds?  Do OpenStack and vCloud pose a threat to AWS?  What impact will Google’s Compute Engine make?  Virtustream and Rackspace should be followed closely as well.

How do you see the public cloud market shaping up over the next 3-5 years?  An oligopoly with APIs providing most of the differentiation?  Share your thoughts below:

Human Action and the Cloud

I came across a passage today from Ludwig von Mises’ classic treatise, “Human Action”, which I thought might have some relevance to cloud computing as well.  I’ll just start by quoting the passage, with the understanding that the context here is economics:

“There is no means by which anyone can evade his personal responsibility. Whoever neglects to examine to the best of his abilities all the problems involved voluntarily surrenders his birthright to a self-appointed elite of supermen. In such vital matters blind reliance upon ‘experts’ and uncritical acceptance of popular catchwords and prejudices is tantamount to the abandonment of self-determination and to yielding to other people’s domination. As conditions are today, nothing can be more important to every intelligent man than economics. His own fate and that of his progeny are at stake.”

“Whether we like it or not, it is a fact that economics cannot remain an esoteric branch of knowledge accessible only to small groups of scholars and specialists. Economics deals with society’s fundamental problems; it concerns everyone and belongs to all. It is the main and proper study of every citizen.”

Mises was a rationalist who accepted the limitations of human reason and economic calculations, but still saw human action — as opposed to the inaction of the content — as the most effective way to organize society faced with limited resources.

The key here is that the knowledge and awareness to use that reason — a quality not normally found among the content or the uncurious — is required for a society to effectively deal with the challenge of finite resources.

I see many parallels here with cloud computing. The first paragraph describes how, when the informed do not engage and speak up, self-appointed supermen will consolidate power and make the decisions.  Sounds like any IT departments you know of?  “Blind reliance upon ‘experts’ and uncritical acceptance of popular catchwords…” — do we see this in IT departments today?  Of course we do — and what results should we expect from such models?  Are these IT captains steering their ships in the right directions?

There are the traditional siloed fiefdoms who simply don’t want much to change beyond an occasional tech refresh (a.k.a. “server huggers”).  IT Directors and CIOs who have not fully embraced a cloud-inspired vision of how the economics of IT can change.  The engineer who pursues technical certifications without a vision for how they might be applied to the business.  The salesman who wants to sell infrastructure to meet quarterly numbers, rather than engaging in a mutually beneficial relationship to pursue the most effective implementation of technology pursuant to the organization’s goals (including the ones they might not know about).

Read the following quote a second time, with the word “economics” replaced with “cloud computing”:

“Whether we like it or not, it is a fact that cloud computing cannot remain an esoteric branch of knowledge accessible only to small groups of engineers and technology evangelists.  Cloud computing deals with both IT and the business’s fundamental problems; it concerns everyone and belongs to all. It is the main and proper study of every stakeholder.”

This is where it becomes clear that cloud computing is more than just technology.  It is also a vision and a culture.  It is not a tactical solution; it is an end game.  It requires the individuals of the IT society to not be complacent, but to be curious, engaged and inspired.  It requires human reason to analyze problems, parameters and even economics in order to improve the condition of all.

In Part One of “The NoCloud Organization” I wrote about just how complex the problems we are trying to solve truly are.  To paraphrase Mises, in order to successfully advance cloud computing, Human Action – reason and calculation – is required.  The same thought patterns, business models, leadership styles, roles and responsibilities, sales methods and career paths that brought us the legacy systems and applications we are trying to move beyond will not be an effective means to promote the new paradigm of cloud computing.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to write a post about vCNS, this seemed like the perfect opportunity to do so while taking a look at the new NSX solution at the same time. Before we begin exploring vCNS, let’s just take a quick step back.

SOFTWARE DEFINED EVERYTHING

About a year and a half ago, I wrote this post in an attempt to define cloud as abstraction leading to automation leading to agility. In a sense this is what SDN – Software Defined Networking – aims to do as well. First the server hardware (compute) layer was abstracted with hypervisors like ESXi, and now this concept is being extended into the area of networking (and also storage) to provide new opportunities for automation, orchestration and integration.  Let’s start by taking a look at what vCNS is already offering today.

vCloud Network and Security (vCNS)

Originally VMware offered three different vShield products, but with the release of the vCloud Suite, the vShield App and vShield Edge solutions now constitute much of the vCNS product, along with support for VXLAN and integration into the vCloud ecosystem including vCloud Director and vCenter Server. vCNS Standard Edition is included with vCloud Suite Standard, while an upgrade to vCNS Advanced adds high availability (HA) for the virtual appliances, load balancing and data security capabilities.

Central to vCNS is the vShield Manager appliance, where all of the vCNS configuration is done via a web UI which is also exposed as a tab in the vSphere Client. The vShield Manager has the ability to back up the entire vCNS configuration to an FTP/SFTP server on a regular basis, allowing you to deploy a new appliance and then restore your configuration backup.
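vShield Manager handles the scheduled backup itself, but it is worth having an independent check that the backup files are actually landing on the FTP server. Here is a minimal sketch using only the Python standard library; the host, credentials and directory are placeholders for whatever you configured as the backup target, and it assumes the FTP server supports the MLSD listing command.

```python
# Sanity check: did the scheduled vCNS configuration backup land on the FTP server?
# Host, credentials and directory are placeholders for your own backup target.
from ftplib import FTP
from datetime import datetime, timedelta

FTP_HOST = "backup.example.local"
FTP_USER = "vshield-backup"
FTP_PASS = "changeme"
BACKUP_DIR = "/vcns-backups"
MAX_AGE_HOURS = 26  # a daily job plus some slack

ftp = FTP(FTP_HOST)
ftp.login(FTP_USER, FTP_PASS)
ftp.cwd(BACKUP_DIR)

newest = None
for name, facts in ftp.mlsd():
    modify = facts.get("modify")
    if facts.get("type") != "file" or not modify:
        continue
    modified = datetime.strptime(modify, "%Y%m%d%H%M%S")
    if newest is None or modified > newest[1]:
        newest = (name, modified)
ftp.quit()

if newest is None:
    print("WARNING: no backup files found at all")
elif datetime.utcnow() - newest[1] > timedelta(hours=MAX_AGE_HOURS):
    print(f"WARNING: newest backup {newest[0]} is stale ({newest[1]})")
else:
    print(f"OK: newest backup is {newest[0]} from {newest[1]}")
```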

EDGE (not that guy from U2)

You can deploy Edge servers in your environment – hardened virtual appliances which provide, well…edge services. You deploy Edge appliances from vShield Manager and then – once properly configured – you can point your servers to them as gateways. Besides basic gateway services, the Edge appliances can also provide the following (a rough API sketch follows the list):

  • Layer 3 firewall
  • NAT Services
  • DHCP
  • Site-to-Site VPN (IPSEC)
  • Web Load Balancing (Advanced Edition)
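Most of these Edge services can also be driven programmatically through the vShield Manager REST API rather than the web UI. Treat the sketch below as a rough illustration of the general shape only: the endpoint paths and XML payload are assumptions rather than verified calls, so check the vShield API guide for the actual URIs in your version.

```python
# Rough illustration of driving vShield Manager over its REST API instead of the UI.
# NOTE: the endpoint paths and payload below are placeholders for the general idea,
# not verified calls -- consult the vShield API guide for the real URIs.
import requests

VSM = "https://vshield-manager.example.local"
AUTH = ("admin", "default")           # vShield Manager uses HTTP basic auth
EDGE_ID = "edge-1"                    # placeholder Edge identifier

# Hypothetical: fetch the current configuration of one Edge appliance.
resp = requests.get(f"{VSM}/api/edges/{EDGE_ID}", auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)                      # the API speaks XML

# Hypothetical: push a new DHCP pool to the same Edge.
dhcp_xml = """<dhcp>
  <enabled>true</enabled>
  <ipPool>
    <ipRange>10.10.10.50-10.10.10.100</ipRange>
    <defaultGateway>10.10.10.1</defaultGateway>
  </ipPool>
</dhcp>"""
resp = requests.put(f"{VSM}/api/edges/{EDGE_ID}/dhcp/config",
                    data=dhcp_xml,
                    headers={"Content-Type": "application/xml"},
                    auth=AUTH, verify=False)
print(resp.status_code)
```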

There is a lot of potential here. Some of you might be thinking “why should I use virtual for this?” There are many answers and use cases here, but some of the best examples involve multi-tenancy. Sure, you can use dedicated hardware and/or VLANs for each tenant, but this can be quite complex and may no longer be the most efficient approach. By virtualizing these edge services we now have logical (versus physical) boundaries, which can lead to more scale, more automation and less complexity as tenants are added and removed.

vShield Edge and multi-tenancy

Also, I should quickly mention a lesser-known feature, the SSL VPN; since it uses common TCP ports, you should be able to use it from networks where IPSEC might be restricted. You can quickly turn this feature on for an Edge appliance and then access a VPN home page from which you can either expose internal websites or offer a full VPN client download – which I tried, and it worked flawlessly on my Windows 8 system.

APP (Virtual Layer 2 Firewall)

Like vShield Edge, vShield App is a hardened virtual appliance deployed from vShield Manager, but this appliance is deployed on a per-host basis. You can optionally configure vShield App to “fail closed”, meaning that traffic to VMs will be denied unless the vShield App firewall is available. To mitigate this you can deploy a second appliance to work in an active/passive pair (HA requires the Advanced edition).


A quick sidebar on the virtual appliances – both Edge and App appliances can be deployed in active/passive pairs, and there are different sized appliances based on the size of your network (the large appliances allocate 2 vCPUs each). Keep this in mind from a resource perspective, as you may need to reserve resources for a pair of large-sized appliances.

Now that you have a layer-2 application firewall running on each host, you have many new possibilities for how to design your network. What’s great about vShield App is that your firewall rules are now abstracted from the IP addresses. Today most firewall rules have an affinity to each and every IP address – what if we could do this on a virtual machine basis, or even a VM folder or Resource Pool? Drop a VM into the “Web Tier” resource pool and suddenly it inherits the appropriate firewall rules that you’ve already defined – regardless of what the IP is. And if someone changes the IP inside the VM, the firewall policy is sustained at the VM level. Depending on the environment, this can provide many benefits, ranging from a reduction in network hardware to lower operational complexity and quicker execution. In some environments a traditional DMZ may no longer be necessary, depending on what new design is settled on.
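To make the idea concrete, here is a toy model (deliberately not vShield code, just an illustration of the concept): rules hang off logical containers such as a resource pool, and a VM picks up whatever rules its current container carries, regardless of its IP address.

```python
# Toy illustration of container-based firewall policy (not vShield code):
# rules follow the logical container a VM lives in, not its IP address.

rules_by_container = {
    "Web Tier": [{"allow": "tcp/80"}, {"allow": "tcp/443"}],
    "App Tier": [{"allow": "tcp/8443", "from": "Web Tier"}],
    "DB Tier":  [{"allow": "tcp/1433", "from": "App Tier"}],
}

vm_container = {}   # vm name -> container (resource pool / folder)


def move_vm(vm, container):
    """Dropping a VM into a container is all it takes to change its policy."""
    vm_container[vm] = container


def effective_rules(vm):
    """Rules are looked up by container membership, so an IP change inside
    the guest has no effect on the policy that applies to it."""
    return rules_by_container.get(vm_container.get(vm), [])


move_vm("web01", "Web Tier")
print(effective_rules("web01"))   # [{'allow': 'tcp/80'}, {'allow': 'tcp/443'}]

move_vm("web01", "DB Tier")       # re-homing the VM instantly changes its rules
print(effective_rules("web01"))   # [{'allow': 'tcp/1433', 'from': 'App Tier'}]
```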

Another little sidebar: another nice extra available in vShield App is the ability to monitor your network flows. This can be done either at the host level (App) or at the gateway level (Edge), but even without any firewall rules you now have the ability to monitor your network flows at various points and see which protocols are using the most network traffic.

NSX – The Future

At a high level, VMware’s NSX appears to be a fusion of Nicira’s Network Virtualization Platform (NVP) and the vCNS product we just discussed, along with a few twists. VMware acquired Nicira for $1.26 billion last year, and their Open vSwitches (OVS) do the packet pushing while the NVP Controller Cluster features a RESTful web API for managing and defining these virtual switches.

Below is a full slide from VMware introducing the concept of the NSX offering, which has a few interesting features. The support for OpenStack at the top is not really new, but the “multi-hypervisor” support at the bottom is. Since Nicira’s OVS is hypervisor agnostic (Xen, KVM, ESX), it would seem that NSX is now abstracted from even ESX.

VMware NSX

There’s a few more hints in this VMware blog post on NSX including mention of Layer 2 Gateway services, active/active HA (active/passive in vCNS today) and even mention of MPLS support. And of course you know VXLAN is going to be baked in as well. And remember the backup support in vShield Manager? Well it seems that the new NSX manager takes this a step further – allowing you perform VM-style snapshots of the entire network state and ecosystem for backup or even to revert a recent change (undo).

So many scenarios for this new paradigm, but here’s just one – the cloud on-ramp. You want to move some virtual machines from your datacenter to a vCloud provider, between providers, or even to VMware’s new hybrid vCloud offering. Just deploy a gateway (Edge) appliance in your datacenter and establish a site-to-site VPN to the hosting datacenter running NSX. Now start migrating those virtual machines!

VMware expects to launch NSX in the second half of this year, and having seen the benefits of vCNS I’m really excited about the possibilities and benefits that Software Defined Networking (SDN) with VMware NSX can provide. I suspect that VMware is also working on advancing “Software Defined Storage” as well, and I hope to share some thoughts on those possibilities in future posts.

Installing Windows Server 2012 from ISO on ESXi 5.1

UPDATE:  It seems that one host in this vSphere 5.1 cluster was running ESXi 5.0, which may have been causing the issues I experienced.  However, this approach may be helpful in similar environments where not all hosts are running 5.1.

Just a quick note on something I ran into.  I was installing Windows Server 2012 from ISO in order to make a VM template.  I configured the VM (on ESXi 5.1) for Windows Server 2012, but after the first reboot I would get nothing but the spinning white dot-circle and high CPU on the VM.

I deleted the VM, and made a new one — this time changing the firmware from BIOS to EFI (options tab and select “Boot Options” on VM properties) and this fixed the problem and allowed me to complete the install.  Not sure if there is a KB for this, but this seems to be an effective workaround.
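If you run into this on more than one template, the firmware type can also be flipped programmatically rather than through the client UI.  Below is a minimal pyVmomi sketch, assuming the VM is powered off; the vCenter address, credentials and VM name are placeholders.

```python
# Minimal pyVmomi sketch: switch an existing VM's firmware from BIOS to EFI.
# The VM should be powered off; host, credentials and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # home lab with self-signed certs
si = SmartConnect(host="vcenter.example.local",
                  user="administrator",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "win2012-template")

    spec = vim.vm.ConfigSpec(firmware="efi")    # "bios" is the default
    task = vm.ReconfigVM_Task(spec=spec)
    print("Reconfigure task submitted:", task.info.key)
finally:
    Disconnect(si)
```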

 


Uncovering Value And Opportunity With Utility Computing

I recently wrote a blog post for my employer in which I discuss:

  • how utility computing can change procurement and create value
  • why companies should focus on their apps, data and business — not on running a datacenter
  • how VMware has introduced a new paradigm for value, which parallels additional solutions
  • looking beyond individual components to find value in the synergy of multiple solutions

You can view the full blog post here.

 


Moving On Up (in the stack)

This time of year a lot of folks are making their 2013 goals and/or predictions.  I do have a professional goal that happens to be somewhat of a prediction as well – moving up the stack.  What does that mean?


Taking a step back for a moment, I recall being in grade school and having AT&T come in and talk to us about the technology and innovation behind – yes telephones.  At the time it was exciting how we could actually have a conversation with someone around the world – a form of virtual reality if you will.  It was exciting to imagine new possibilities that might be available in the future (no flying cars though).

What was innovative and cutting edge back then, we tend to take for granted today.  We’ve built so many layers of technology above this foundation that we expect to be able to have a video call on our mobile phones, or stream HD movies and music to our living rooms and mobile devices.  We continue to add layers of technology onto yesterday’s innovations as they become the foundation for even more.

I see a similar trend with virtualization and cloud computing.  I love VMware and I’ve enjoyed doing amazing things with it.  I’ve watched entire pallets of physical servers that my team eliminated with VMware get moved out of the datacenter, and I’ve leveraged virtualization to relocate datacenters, improve backup RTO/RPO, streamline operations and much more.

vSphere and the ESXi hypervisor aren’t going anywhere and I am sure there are still more exciting innovations to come in this space, but increasingly we are seeing a greater focus on automation and agility – layers that build upon vSphere and the ESXi hypervisor.  If you’ve seen my previous blog posts you’re already familiar with my “value triangle” – the hypervisor can only address CAPEX and some OPEX benefits by itself – the lower (and smaller) half of the value triangle.  But start layering automation and operational improvements on top of this, and you can capture far more OPEX benefits in your organization and perhaps even enter the “agility” zone.

Earlier this year VMware introduced the vCloud Suite which now includes vCloud Director (vCD) and vCloud Networking and Security (vCloud N&S – formerly the vShield products).  These additional “layers” are becoming less of a high-end product and increasingly a part of the core foundation.  Additional products like vCloud Automation Center (from the Dynamic Ops acquisition) will build upon foundational layers like vSphere, ESXi and vCloud Director to enable more automation, efficiency and agility in their operations.


One example I like to share: how long would it take to procure and deploy an n-tier application (e.g. web front end, database backend, plus middleware) in your current environment, working across the technical silos, teams and internal processes?  I’ve seen some firewall changes take weeks of back and forth until the correct changes could be implemented.  But what if you could quickly provision such complex applications consistent with PCI and other security requirements for networking?  Weeks and months become days as project delivery dates are shortened and business goals are met in significantly less time.  And you’d better believe your competitors will be looking to do this too :)

I think 2013 will be a year where a critical mass begins to focus their attention above the hypervisor and onto these empowering new layers.  In 2013 I’m planning on installing the vCloud Suite and vCloud Automation Center in my home lab and becoming more familiar with the capabilities of these products and I’m very much looking forward to the learning experience.  What are your technical and professional goals for 2013?

vCenter Operations Foundation Available to vSphere Owners

As a part of the release of vCenter Operations Manager 5.6 (a.k.a. vCOPS) a new Foundation edition has been made available to all vSphere owners.  This is a great opportunity to become familiar with the capabilities of vCOPS to monitor trends and proactively alert you to risks you may not have been aware of.  How many of us have gotten burned by an unanticipated capacity limit – either CPU, memory or storage?

The new Foundation edition of vCOPS now appears as one of the vSphere downloads right next to ESXi and vCenter Server as illustrated below.

The Foundation edition does give you some useful information on vSphere health and trends, but if you want to start looking into capacity management and trending, for example, or even chargeback, root cause analysis or OS management, you’ll want to look at upgrading to a higher edition.  A detailed breakdown of the available features by edition is available here.

For those that are already using vCOPS, the 5.6 release adds new features including vSphere Web Client integration, improved capacity analysis and more (see what’s new in release notes here).

If you own vSphere, what are you waiting for?  Start with vCenter Operations Manager Foundation and begin gaining insights into the health and trends of your vSphere virtual infrastructure.  For a breakdown of the new versions and entitlements, see the chart below:

POLL: Should Microsoft have a separate OS for mobile devices?

Windows 8 has been released to mixed reviews.  The general consensus seems to be that the Windows 8 UI works well for touch devices, but is a deterrent to businesses who rely on the traditional keyboard and mouse interfaces.

Microsoft’s lead Windows Engineer, Steven Sinofsky resigned from Microsoft just weeks after the Windows 8 launch.  While some may assume this to be an admission of failure, it should also be noted that some say Sinofsky did not work well with other teams and created a “toxic environment”.

Did Microsoft make the right design choices?  Clearly there is some benefit in a consistent UI across all platforms which enables mobile computing and touch interfaces, but at what cost?  Apple for example has different operating systems for the traditional computer (OSX) and for mobile devices (iOS).

What do you think?  Vote in the poll below and then share your comments below.  If time allows I’ll be adding my own thoughts about Windows 8 and the UI experience in a future post.

Should Microsoft have made a separate OS and UI for mobile devices?


Performance Issues with Networking on ESXi and UCS B200 / 6100

I ran into an environment which had a series of issues with the virtual infrastructure, and I thought I’d share the results in the event someone else finds them helpful.

This was an environment running UCS B200 servers and UCS 6100 Fabric Interconnects.  NFS storage was being used for some volumes, and at times vCenter (5.0) was recording very high levels of storage latency.  This was confirmed by the vmkernel log, which showed intermittent loss of NFS mount points and path failures.

One thing we came across is this Cisco document which explains that some UCS servers have issues with Interrupt Remapping.  This can be disabled in the BIOS and vSphere, but in this case the UCS BIOS was upgraded to a current release which did noticeably improve the environment.

The other item we found is the following VMware KB, which explains that the network load balancing policy “Route based on IP hash” is NOT supported with UCS B200 servers and UCS 6100 fabric interconnects.  As NFS-based storage uses IP as a transport, this could explain some of the latency which was observed.  From the KB article:

When enabled, the NIC teaming policy Route based on IP hash involves a team of at least two NICs that selects an uplink based on a hash of the source and destination IP addresses of each packet. Host network performance might degrade if Route based on IP hash is enabled on ESX or ESXi because cross-stack link aggregation, or grouping of multiple physical ports, on UCS 6100 Series Fabric Interconnects deployed as a redundant pair is not supported. As a result of the network performance degradation, you may see intermittent packet loss and the vSphere Client or vCenter Server might lose connection to the ESX or ESXi host.

The sum of both changes is that the storage performance issues were eliminated and the environment began functioning as intended.
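If you want to verify whether any hosts are quietly using the IP-hash policy, this is easy to audit programmatically.  Here is a minimal pyVmomi sketch (the vCenter address and credentials are placeholders) that walks each host’s standard vSwitches and port groups and flags the “loadbalance_ip” teaming policy that the KB calls out.

```python
# Minimal pyVmomi sketch: flag vSwitches/port groups using "Route based on IP hash"
# (teaming policy "loadbalance_ip"), which the KB says is unsupported here.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view

    for host in hosts:
        net = host.config.network
        for vswitch in net.vswitch:
            policy = vswitch.spec.policy
            teaming = policy.nicTeaming if policy else None
            if teaming and teaming.policy == "loadbalance_ip":
                print(f"{host.name}: vSwitch {vswitch.name} uses IP-hash teaming")
        for pg in net.portgroup:
            policy = pg.spec.policy
            teaming = policy.nicTeaming if policy else None
            if teaming and teaming.policy == "loadbalance_ip":
                print(f"{host.name}: port group {pg.spec.name} overrides to IP-hash")
finally:
    Disconnect(si)
```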

On a quick side note, this also makes a case for the value of converged infrastructure and reference architectures.  Systems are increasingly complex, and when you build something on your own it is very easy to encounter issues like this and more.  Converged infrastructure and reference architectures can help here by providing a blueprint as to what components can work together and under what conditions.  When you use one of these solutions you can have the confidence that you’re not the only one operating from your blueprint and that a significant level of engineering and testing was invested in your architecture.  Additionally, if an issue is encountered, you can be proactively notified of the risks and what changes are recommended to mitigate them.

“What Is Cloud Computing” Revisited One Year Later

Almost exactly a year ago, I made a post on “What Really Is Cloud Computing?”.  We hear so much about private vs. public vs. hybrid and what is or is not “cloud”.  Yes, sometimes trying to define cloud computing seems like an exercise as valuable and productive as splitting hairs, but I thought it still might be interesting to take a look a year later.

In the original post I described cloud computing as being based on three pillars:  abstraction, automation and agility.   Generally speaking, I think this still works today.  Virtualization is not cloud computing per se, but it does enable new opportunities for abstraction and automation, which empower us to more effectively pursue agility.

ABSTRACTION

When we talked about abstraction back then it was mostly at the compute level, but now we are seeing abstraction in the storage and networking areas – most notably with VMware’s purchase of SDN vendor Nicira for $1.26 billion.  We are moving towards abstracting the entire stack (compute, storage, networking) and wrapping more automation and orchestration layers around it.

Not only are we abstracting these infrastructure elements, but we are abstracting something else – applications from the traditional PC architecture.  We now have an increased proliferation of mobile applications.  Even the Windows operating system itself is no longer constrained to PCs, as there are variants that run on tablets and phones, and data is synced across devices using online services (now commonly referred to as “the cloud”).

The point is that we are empowering our workers with flexibility, mobility and options, while behind the scenes in our datacenters, abstraction of the core infrastructure continues to provide new opportunities for automation and agility.

AUTOMATION

Most of us understand the concept of automation – less administrative overhead means more getting done in less time and with fewer resources.  Or in other words, less OPEX, less time, and more agility.  There is so much going on in this area – Nick Weaver’s work with Razor comes to mind as one example, but there is so much more.  Systems are becoming increasingly complex and it is going to require a new generation of orchestration and automation tools (as well as APIs) to help us reach our goals. And there’s not just automation within clouds, but across them as well.

AGILITY

This is the ultimate goal – getting more done and in less time.  This is where things get fun.  And once you can bring agility to IT, the possibility exists to bring it to your business as well.  Business agility speaks to being able to quickly and cost-effectively execute a business strategy, and this can make all the difference in the world.  This is where the full potential of cloud computing is realized.
 

 WHAT SHAPE OF CLOUD?

There’s been much oscillating about why private cloud is better, public cloud is better and so on.  Does one model possess more cloud-like attributes than the other?  The advantage of public clouds is that they meet the low-cost utility model.  Relatively quick to access and consume and you don’t have to get involved in the messy details (a.k.a. costs) of running a datacenter.  But there will also be times where a private cloud is compelling for security, auditing compliance and several other reasons (A previous post on this here, but I would especially recommend reading what Rodney Rodgers had to say on this topic).

In short, there is no universal answer to what the best cloud model may be.  For many environments the best answer will be to leverage hybrid cloud management tools to reach across and leverage BOTH private and public cloud, so that each application/workload is placed on the most appropriate and effective platform.

A bit of a tangent here, but to summarize: there is no “one-size-fits-all” answer to what a cloud should look like.  A debate on public/private clouds can descend into something about as useful as a debate over the best diet – the best answer may be different for everyone.

The more important thing is that you recognize cloud computing as a strategy – identify products and technologies that can help, and change your infrastructure, processes, teams and cultures to work effectively with this new paradigm.

DECLARATION OF CLOUD INDEPENDENCE

Who Signed this RFP?!?

If there were a Declaration of Cloud Independence – freedom from the IT-as-a-Cost-Center model where projects are slow, operational expense is high and IT is a big money pit – I think it might start with this line:

We hold these truths to be self-evident – that not all datacenters are created equal, but they are endowed by their creators with the ability to empower using Abstraction and Automation in the pursuit of Agility.

It’s the abstraction and automation which allow us to pursue agility – and not just within IT but for the business as well.  This is the vision for cloud computing and the potential that it holds.  Cloud computing is not a product, not a reference architecture, but a strategy.  A strategy and vision that requires inspired and enlightened insights into technology, products, workflow, culture, organizational management, and process.

Cloud computing is not a product or even a technology.  It’s abstraction and automation in the pursuit of agility.  It’s a new approach to doing IT which takes IT out of the cost center and into the boardroom, with IT as a trusted partner helping facilitate the execution of business strategy.  We’ve come a long way, but we have an even longer way to go.