vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you what’s all in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in…


Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.


VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to…


What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, a CIO, and a salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores more of like services (just keep in mind that several such services existed before it…


Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO,” “This is a strategic initiative,” “The ROI is compelling,” “There’s funds in the budget,” “Our competitors are doing it.” Some of these are better reasons than others, but here’s a question.  Imagine a…


Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….


Should You Virtualize vCenter Server (and everything else?)

When concerns are raised around virtualizing vCenter Server, in my experience they usually revolve around either performance and/or out-of-band management. The VROOM! blog at VMware just published a whitepaper that looks closely at vCenter Server performance as a VM versus native (physical), which speaks to these concerns as well as to other workloads…


Can your VM be restored? VSS and VMware — Part 2 (updated)

The backup job for your VM completed successfully so the backup is good, right? Unfortunately it’s not that simple and a failure to effectively deal with VM backups can result in data loss and perhaps even legal consequences.


Monitoring Storage Elements with LSI Controllers in ESXi

Cisco UCS servers have made quite an impact in the market and are currently #1 in blades.  Most UCS servers don’t use any local storage beyond maybe booting ESXi from an SD card.  But what if you had a use case where you needed to use direct attached storage? Not a common use case today, but VMware VSAN is likely to change that.

The problem I encountered is that UCS servers would not report the health of their storage elements to ESXi.  Cisco UCS servers use LSI controllers, and we were completely blind to events like a hard drive failure, a RAID rebuild, a predictive failure and so forth. The use case here was a single UCS-C server with direct-attached storage.

Using different combinations of drivers blessed by VMware and Cisco, I was unable to get physical drive and controller health to report in ESXi. I did my due diligence with a few Google searches but was unable to find any solution.

Then I went on the LSI website to look at the available downloads and something caught my eye — an SMI-S provider for VMware. I remembered that SMI-S is basically CIM, which is what ESXi uses to collect health information. This is a separate VIB that is independent of the megaraid_sas driver in ESXi.  With the SMI-S provider installed in ESXi suddenly I could see all the things that were missing in the health section such as:

  • Controller health
  • Battery health
  • Physical drive health
  • Logical drive health

Basically, the moral of the story is this — if you have an LSI array controller (common in UCS-C) then you’ll need to follow these steps to get health monitoring on your storage elements:

1) Go to LSI’s website and download the current SMI-S provider for VMware for your card.

2) Upload the VIB file to a VMFS datastore

3) From an SSH shell type “esxcli software vib install -v [full path to vib file]”

4) Reboot

I’m not clear on why this capability is not exposed by the driver, but it seems for the time being that installing this additional VIB is required to get ESXi to monitor the health of storage elements on LSI controllers.
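
For reference, the four steps above condense into a short SSH session like the one below. The datastore path and VIB filename are illustrative only; use the actual file name of the SMI-S provider you downloaded from LSI.

```shell
# Run on the ESXi host over SSH, after uploading the VIB to a datastore.
VIB="/vmfs/volumes/datastore1/vmware-esx-provider-lsiprovider.vib"

# Optional: preview the change without modifying the host.
esxcli software vib install -v "$VIB" --dry-run

# Install the provider, then reboot so the CIM provider loads.
esxcli software vib install -v "$VIB"
reboot

# After the reboot, confirm the provider is installed.
esxcli software vib list | grep -i lsi
```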

I hope some of you will find this valuable.

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you what’s all in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab.

There’s some pretty exciting stuff being added to vSphere 6.0 in quite a few areas.  One of these new areas is vVols — a new abstraction for volumes that enables tighter integration with storage arrays through the VASA API. You can read more about vVols in vSphere 6.0 on Rawlinson’s post.

One more thing — after you sign up for the beta you will be able to attend the following two webinars on the vSphere 6.0 beta:

  • Introduction / Overview – Tuesday, July 8, 2014
  • Installation & Upgrade – Thursday, July 10, 2014

Needless to say there’s some pretty awesome stuff in the 6.0 Beta.  Start your download engines!

https://communities.vmware.com/community/vmtn/vsphere-beta

 

Nimble Storage Revisited: The CS700 and Adaptive Flash

Back in 2010, I noted the entry of Nimble Storage into the storage market with this blog post. With the release of their new CS700 line and what they call Adaptive Flash, I figured it was a good time for a second look.

CASL Architecture

Before we look at the new offerings, a quick refresher on Nimble Storage’s CASL architecture is in order. CASL stands for Cache Accelerated Sequential Layout, and Nimble describes the key functions here:

CASL collects or coalesces random writes, compresses them, and writes them sequentially to disks.

Nimble states that this approach to writes can be “as much as 100x faster” than traditional disks.  The image below is a bit fuzzy, but if you click to expand it should be readable.


CASL Features (click to enlarge)

It is important to note that both the compression and the automated storage tiering to flash are inline (no post-process or bolt-ons), which adds additional efficiency. Features such as snaps, data protection, replication and zero-copy clones are also included.

For more details on CASL (including a 75 minute video deep dive) visit Nimble Storage’s CASL page here: http://www.nimblestorage.com/products/architecture.php

New Offering: CS700

The CS700 is the new model which features Ivy Bridge processors, 12 HDDs and 4 SSDs for a hybrid storage pool Nimble claims is up to 2.5x faster than previous models, with up to 125K IOPS from just one shelf.

Now you can buy expansion shelves for the CS700, including an All-Flash shelf, and this is where something called “Adaptive Flash” kicks in. The All-Flash shelves host up to 12.8TB of flash each in a 3U shelf and are used exclusively for reads. I found the product materials on Adaptive Flash to be a bit light on technical details, but from what I can discern some of the secret sauce is provided by a back-end cloud engine.

Nimble Storage has a robust “phone home” feature called InfoSight which sends health, configuration and utilization information to cloud services for analysis. Several vendors do this, but the twist here seems to be that they are using the resources of the cloud-based engine to “crunch” your utilization data and send guidance back to your controllers on how they should be leveraging the flash tier. In summary, the big idea here seems to be that leveraging greater computing resources “big data” style in the cloud can produce better decisions on cache allocation and tuning than the controllers themselves can.

The Big Picture

Nimble uses a scale-out architecture to combine storage nodes into clusters. Nimble Storage claims that a four-node cluster with Adaptive Flash can support a half-million IOPS.

Below is a table (created by Nimble Storage) which positions the CS700 in a 4-node cluster against EMC’s VNX7600 with XtremIO. I’d like to see an independent comparison, but it appears Nimble Storage may be on to something with this architecture.

nimble_vs_vnx_and_xtremio

All-Flash arrays are nice but they aren’t the only game in town. Nimble Storage seems to have a compelling story around a hybrid solution which is driven by both controller software, as well as back-end software hosted on cloud services.

Patch Available for NFS APD issue on ESXi 5.5 U1

There is an issue with using NFS on ESXi 5.5 U1 where intermittent APD (All Paths Down) conditions occur, which can disrupt active workloads. The KB for the issue is here.

Patch 5.5 E4 was released on June 10 which fixes this issue.  The patch can be obtained here and the KB for the patch is here.
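
If you want to confirm whether a given host already carries the fix, a quick check of the build number works; the sketch below runs on the ESXi host, and the exact build reported will depend on the patch level installed.

```shell
# Show the ESXi version string, including the build number.
vmware -vl

# Or inspect the esx-base VIB, whose version reflects the installed patch level.
esxcli software vib list | grep esx-base
```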

Will VMware Start Selling Hardware? Meet MARVIN

UPDATE:  After a Twitter discussion this morning with Christian Mohn (@h0bbel — see his MARVIN post here) I think we are in agreement on what MARVIN may be.  This post has been updated accordingly.

The Register is running a story that VMware is preparing to launch a line of hardware servers leveraging vSphere and VSAN:

Evidence for MARVIN’s existence comes from two sources.

One is this trademark filing describing MARVIN as “Computer hardware for virtualization; computer hardware enabling users to manage virtual computing resources that include networking and data storage”.

The second source is the tweet below, which depicts a poster for MARVIN on a VMware campus.

BpgDQmDCYAESs-A

If this pans out to be true it would be a very interesting development indeed.  It is important to note that the trademark specifically says that MARVIN is “hardware”. But will it be VMware’s hardware?  As Christian pointed out in his post, it would go against VMware’s DNA to sell its own hardware.  But EMC — VMware’s majority owner — already has VSPEX — a confederated hardware offering from multiple OEMs but purchased through EMC.  It seems more plausible that VMware would leverage a VSPEX-like model and utilize Dell, Cisco, SuperMicro, etc. hardware for MARVIN.  What VMware really needs is a way to sell converged infrastructure nodes as one SKU (mitigating design risk) with one point of support — a VSPEX-like model for MARVIN would accomplish exactly this without VMware actually selling its own hardware.

MARVIN at first glance would also seem to be a validation of the Nutanix model — build a scale-out storage solution and sell it as boxes that include the full stack.  That’s not an apples-to-apples comparison and it’s not my intent to split hairs here, but one of the attractive things about the Nutanix model is that “you just buy a box”.  By combining VMware VSAN with vSphere and hardware, VMware can offer a scale-out modular solution where customers just need to “buy a box” as well.

Of course it’s possible to build your own VSAN-enabled vSphere cluster using hardware of your choice from the HCL, but as noted with some recent issues there’s some risk in not selecting the optimal components. Offering a complete IaaS stack as a modular hardware unit eliminates the “design risk” for the customer and enables more support options.

One more thing to keep in mind.  EMC recently acquired DSSD with the goal of developing persistent storage that sits on the memory bus, and therefore closer to the CPU. It wouldn’t surprise me to see this introduced in future editions as well.

This could be an interesting development.  What are your thoughts about the potential entry of VMware into the hardware market?

Also what could MARVIN stand for?  How about…

Modular ARray of Virtualization Infrastructure Nodes?

Be Mindful of vSphere Client Versions When Working with OVAs

A colleague of mine was working with an OVA he had used several times before.  After upgrading ESXi with the Heartbleed patches (5.5 Update 1a) he found that he received a generic “connection failed” error when uploading the OVA.

(Note: in this drill there was no vCenter; we were connecting directly to the ESXi host.)

I noticed that his vSphere client was a slightly older build — pre-Heartbleed, but new enough that it would appear to work fine with the 5.5 host. Knowing that Heartbleed is about SSL, I recommended that he update the vSphere client to the same build that was released with the Heartbleed patch.  This changed the error but did not fix the problem. I’m not sure what the exact underlying issue is, but the existing OVAs (that were created with 5.5) could no longer be deployed.

Using the latest vSphere client he tried exporting the source VMs into a new OVA and was able to import with no issues.

I’m not sure of the exact interaction but I’m assuming that the OVAs are signed with the private key and that somehow the Heartbleed patch “breaks” some interaction here such that the OVA is not accepted. Perhaps there will be a KB on this in the future but for the time being make sure that you have the latest build of the vSphere Client when creating and importing OVAs.
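
For what it’s worth, the export and re-import can also be scripted with OVF Tool instead of the vSphere Client. A rough sketch, where the host name, VM name and datastore are all hypothetical:

```shell
# Export the VM from the ESXi host into a fresh OVA.
ovftool "vi://root@esxi01.example.com/MyVM" /tmp/MyVM.ova

# Deploy the newly created OVA back to the host, targeting a datastore.
ovftool --datastore=datastore1 /tmp/MyVM.ova "vi://root@esxi01.example.com/"
```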

Why vCenter Log Insight is a “Must Have” for vSphere Environments

I recently took VMware’s vCenter Log Insight (2.0 beta) for a test drive and I was impressed at the time-to-value as well as the benefits relative to cost. Before I get started, I’d like to step back a bit and look at vSphere monitoring and explore the benefits of log monitoring.

UPDATE 6-11-2014:  vCenter Log Insight 2.0 is now GA and has been released!

Monitoring vSphere with vCenter

vCenter out of the box does a great job of monitoring the vast majority of the things you’d want to know about: hardware failures, datastore space, CPU/memory utilization, failed tasks and so on. But chances are that on more than one occasion you’ve had to dig through ESXi host logs and/or vCenter log files to either find more detail or perhaps discover errors for conditions that vCenter doesn’t report on.

For example, are you seeing SCSI errors or warnings? Path failures or All Paths Down (APD) errors? Any unauthorized intrusion attempts? Are API calls timing out? Is one host logging more errors than others? The bottom line is that for full, holistic monitoring of a vSphere environment, log monitoring is a required element. The traditional problem here is time – SSH into one host at a time as needed and manually comb through the log files? There needs to be a better way.

Splunk is a popular option for log monitoring as it has the capability to ingest logs from multiple sources so that you can correlate events and/or time frames across multiple devices. There is a vSphere app for Splunk which I understand works fairly well; however, one of the issues seems to be cost. As ESXi and vCenter can generate large volumes of log data, this increases costs, since Splunk is usually priced around the volume of log data that is ingested.

Enter vCenter Log Insight

vCenter Log Insight is designed for vSphere environments, and list pricing starts at $250 per device (a device being an ESXi host, a vCenter Server, a SAN, a switch/router, a firewall, etc.).

I decided to download the beta of Log Insight 2.0 and give it a spin. It’s simply a pre-built virtual appliance that you import as an OVA. Once I had the appliance running I logged into the website and added details and credentials to access the vCenter server. Within 30 minutes of downloading I was exploring the interface which was now collecting logs from vCenter and all the ESXi hosts defined within it.
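
Hosts can also be pointed at the appliance manually if you prefer to manage the syslog settings yourself. A sketch of the relevant esxcli commands, run on each ESXi host (the appliance hostname is hypothetical):

```shell
# Send the host's syslog stream to the Log Insight appliance.
esxcli system syslog config set --loghost="udp://loginsight.example.com:514"
esxcli system syslog reload

# Allow outbound syslog traffic through the ESXi firewall.
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```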

One of the first things I noticed was the clean, fast and snappy HTML5-based interface. Compared to the Flash-based vCenter Web Client it’s hard not to notice the difference (which increases my anticipation of the next vSphere release, which I hope will have an HTML5-based interface).

Out of the box, Log Insight comes with dashboards and content packs for both vSphere and vCenter Operations Manager (vCOPS). In the image below you will see in the left pane several dashboard views that can be selected within the vSphere pack. In the main window, one can click on any point in time on the top graph, an element of the pie chart, or even the “has results” link of one of the queries and be instantly taken to an “Interactive Analytics” view where the log events can be viewed in detail (click image to expand).

LI_4

vCenter Overview Dashboard in Log Insight 2.0 (beta)

If you were on the “Storage – SCSI Latency Errors” screen, for example, you’d see bar graphs for SCSI errors by device, path and host to quickly identify anomalies, as well as some pre-built queries as shown below.  Clicking on any “Has Results” text will take you to a drill-down view of the events that match the query.

LI_3

The next day we ran into an issue where a certain VM failed to vMotion to another host. I logged into vCenter Log Insight, selected the “vCenter Server – Overview” tab, set the time range to “past 5 minutes”, and instantly identified the time interval of the failure. I clicked on it and in a blink I was looking at all the relevant log entries. It literally took me seconds to log in and get to this point – a huge time saver!

But wait, there’s more!

vCenter Log Insight is at its core a syslog engine. While it is designed to immediately exploit vSphere log elements, it can also be used for SANs, switches, firewalls and more. If you browse the Solution Exchange you will see that content packs already exist for NetApp, HyTrust, VCE, Cisco UCS, vCAC, Brocade, EMC VNX, Puppet and more. In summary, you can point Log Insight at anything that outputs logs, with a growing library of content packs to provide even more value.

The Bottom Line

The bottom line is that if you want to see everything going on in your vSphere environment you need to be looking at logs. Log Insight can be used to create alarms as well as vastly expedite the process of combing through log files from multiple sources to see what is going on.

I was impressed by how easy it was to deploy and by how quickly we received value from it. At a list price of $250 per device (per year) it seems like a no-brainer for many mission-critical vSphere environments.

vCenter Log Insight 1.0 is available today, but if you’re evaluating, give the beta of Log Insight 2.0 a try.

Also take a look at the following whitepapers:

End Your Data Center Logging Chaos with VMware vCenter Log Insight

VMware vCenter Log Insight Delivers Immediate Value to IT Operations

VMware Virtual SAN (VSAN) Launched

VMware introduced the revolutionary and disruptive concept of server hardware virtualization which has helped usher in a new era of computing.  This abstraction has provided more management, more automation, more scale, and of course more value.

So who better to introduce a storage array that is hardware agnostic (within the vSphere HCL) than VMware with Virtual SAN (VSAN)?  I’m genuinely intrigued and excited by VSAN and I wanted to briefly share a few reasons why.

Other vendors have been making impressive advances with SDS and offering more capability and value in their offerings, but most of these solutions – even though they may be powered by software – are packaged as hardware (and this isn’t necessarily bad).  Now, to be fair, VMware VSAN is not technically the first hardware-agnostic solution, but I suspect that it is the first one to offer mission-critical performance at this scale:

BiEId1nCIAAhib0

We’ve already seen the trend of more and more storage capabilities being provided by a “storage hypervisor” in several solutions, but VSAN is unique in that it truly is hardware independent.  If your servers are on the VMware HCL (hardware compatibility list) then just add disks and VSAN licenses and you’ve got a SAN.  This dramatically reduces the capital investment required — you’ll still want 10GbE switches of course, but the rest is software and disks (HDD/SSD).

In this sense VMware VSAN is somewhat of a watershed moment. Yes, there are other SDS solutions (more on this in a future post), but this is the first one that is sold as software only and can perform to the kind of scale noted above. The VMware VSAN team deserves much credit here and it will be exciting to see how this solution is further improved in future releases.

SCALE

One way VSAN provides strong performance is by using the scale-out model – as you add host nodes with DAS your performance scales out along with it due to the highly parallel nature of the processing  (several vendors such as Nutanix have had success with this model).

VMware surprised us by launching VSAN with support for 32 nodes in a cluster when most of us were expecting only 16 based on the beta program.  With support for 32 node scale-out this immediately positions VSAN as a viable candidate for much more than just the SMB market.

FLEXIBILITY

With VSAN you design your vSphere clusters and choose your storage elements (drives and/or SSDs) consistent with your needs and budget.  There is more opportunity to tailor your storage solution to your specific environment.

 

VSAN Launch Partners

OPERATIONAL BENEFITS

VMware VSAN is closely integrated into vSphere and includes some of the VVOL concepts which are not yet available for other arrays.  With VSAN you can create storage policies that shield operators from the complexities of constructs like LUNs and RAID levels.  If you want to provision a template or a pre-defined application the end user needs only to select a storage policy (or perhaps only one option has been assigned).  No LUNs have to be created or zoned – VSAN automatically provisions the VMDK from storage pools consistent with the specified storage policies.

CAPEX

Like vSphere, VMware VSAN is a strong CAPEX play meaning that you can significantly reduce the capital investment required to provide storage for your vSphere environment.

Some organizations will choose to continue the business-as-usual model of the “no one got fired for choosing [insert major storage vendor here]” variety. But this may be a false sense of safety and may not result in the optimal solution and value.  The storage market is facing disruption and those that successfully navigate this new wave will reap the benefits.

I could talk about these concepts in more detail but I need to save some of it for an upcoming review of the storage market in general (Storage Trends Part 3).  In summary, I’m very excited about what VMware VSAN offers and its potential as it continues to be enhanced with more SDS capability; for many environments it will likely be worth taking a close look.

General Availability and Pricing for VSAN are expected to be released next week (week of Monday 3-10-2014).  For more information here are a few links:

VMware VSAN Product Page

VSAN Design and Sizing Guide

Duncan Epping’s VSAN articles

Cormac Hogan’s VSAN articles

Storage Trends Part 2 — 3D Chess

I’m going to do something a bit risky and perhaps crazy.  I’m going to perform a comparative analysis of various solutions in the storage market, and in the process risk starting a thousand vendor flame wars.

I hope (and don’t think) it will come to this, but still, why would one want to do this?   In this post I wanted to answer that as well as “set the table” for the actual analysis by discussing a few more issues around this topic (which will hopefully make a large Part 3 that much smaller).

DISCLOSURE:  My current employer is both a NetApp Partner and a VMware Partner

Three-Dimensional Chess

The idea occurred to me last year after working with various storage technologies such as PernixData FVP and also researching other storage solutions ranging from All-Flash to hardware-independent solutions like VMware VSAN and more.  At first I wanted to write a blog post just on PernixData (which I may still do), but when looking at the market and the new disruptive storage solutions it occurred to me that a new paradigm was forming.  In trying to find the optimal solution a few things became clear to me.

One is that the storage feature set is increasingly provided within the software and not the hardware – and the industry is converging somewhat on a common feature set that provides value (see Part 1 on Optimization Defined Storage).

Building on the above, we also have a new wave of hardware-independent solutions, with perhaps the most notable being VMware VSAN.  Other trends include the increased use of flash storage, as well as moving storage closer to the CPU (server-side).

The more I looked at various solution offerings, the more the market looked to me like a three-dimensional chess board.  Vendor X might have a clear advantage on one board, but on another board (or dimension) Vendor Y stood out.  The more I studied the playing field, the more each chessboard represented a different value and/or performance proposition.  For example, one solution category offers better CAPEX advantages while another offers better OPEX benefits.

This is key – the point of this exercise for me is NOT to compare and contrast vendors, but rather to identify what categories they fall into and what the nature of their value and performance benefits are for any given environment.  The best solution will vary across budgets, workloads and existing storage investments – but by segmenting the market into different categories perhaps we can gain a better sense of where the different benefits exist so that we can understand the optimal value proposition and (hopefully) don’t find ourselves comparing apples to oranges.

Or in other words, storage is much like buying a house or a vehicle.  What is the optimal solution for one family is going to be different for another.  Different budgets, different starting points and different needs.

There are a lot of exciting and disrupting changes taking place in the storage market.  2014 will be a very interesting year and we have more innovations on the way including leveraging host server memory, storage class memory and more.

Optimization Defined Storage (Storage Trends Part 1)

Recently I found myself engaged in a discussion on Twitter with @DuncanYB, @vcdxnz001 and @bendiq regarding Software Defined Storage (SDS) when I realized that our definitions might be approaching the issue from slightly different perspectives.  Therefore I decided it might be a prudent exercise to first explore these definitions so that we can have a common foundation to build from in this series.

When I looked at the storage market I found 4-5 different categories of storage solutions that were all converging one way or another towards a common set of qualities and features.  I’ll be exploring these solutions in detail in part 2, but is this common set SDS or is it something else (and do we care)?

Whenever someone says they are going to define something it has an air of pretentiousness about it. Hopefully this post will not come across as such, as my intent is to provide a workable definition that can be used for a future post (Part 2).   As an engineer who continues to evolve, I reserve the right to modify my definition in the future :)  But for now let’s proceed….

Note:  In the spirit of full disclosure my current employer is a NetApp partner.

Do Definitions Matter?  Why Define SDS?

This really is a great question and you might be surprised at my conclusion – I don’t think the definition of SDS is terribly relevant for most of us.  There is an academic definition which we could debate endlessly along with “what is cloud” and “what is the meaning of life?” before it would change again in about two years — yet this definition might only offer limited value to those making technology decisions and investments.  In short, SDS does speak to a set of characteristics but not necessarily to value and efficiency in any given environment.

A few years ago I wrote a post which defined cloud computing as Abstraction, Automation and Agility.  By Abstracting from the hardware level, we were able to provide a new API and control plane which could serve as a platform for Automation. Then with the proper use of Automation in the organization one could begin to achieve measures of Agility.

With Software Defined Storage or SDS I think the same paradigm can apply.  First we must offer a level of abstraction which enables us to do more with the storage.  On top of this abstraction we need to add services that provide value and efficiency such as caching algorithms, dedupe algorithms, data protection (RAID or erasure coding), storage tiering, instant clones (no copy on write), snapshots, replication services and more.  Then we must have an addressable API to leverage as a control plane from which we can manage, control and ultimately automate.

Now many might just focus on a definition of SDS that simply revolves around abstraction and a control plane (RESTful API, etc.). For example, does SDS require deduplication?  I don’t think so; however, I do think that this is one of several key features that provide value and that the industry is trending towards.  Perhaps we need an expanded definition of SDS that focuses on value and efficiency – leveraging the SDS foundation to provide efficiency, agility and value.

Optimization Defined Storage

Optimization Defined Storage (ODS), then, could be a definition of SDS that focuses on efficiency and value.  ODS would be built upon an SDS foundation, and then enabled with additional capabilities to add value, efficiency and optimization such as:

  • Deduplication (increases storage efficiency as well as performance benefits).
  • Caching and Storage Tiering (including flash)
  • Instant Clones and Snapshots (no copy-on-write)
  • Efficient Replication
  • Thin Provisioning

In an ODS solution, the software can work across many levels to optimize how data is compressed, deduped, cached and tiered.  One example is Nutanix’s Shadow Clone feature which combines cloning and caching functions by distributing “shadow clones” of volumes to be used as cache by additional nodes.

Another example is VMware’s VVOL initiative, which intends to somewhat shift the control plane such that the VM and VMDK characteristics will define the LUN rather than vice versa – as well as allowing the SAN to perform snap and clone operations against VMDKs rather than LUNs.

There’s much more that could be done in this SDS/ODS space (whatever you want to call it) and we haven’t gotten to object based storage yet.  The bottom line is that the ODS definition is focused on leveraging the SDS foundation to provide value, efficiency and optimization.

Relative Levels of Hardware Abstraction

Some questions/arguments I’ve heard tossed around include “is Nutanix really SDS because they sell hardware?” and this argument also extends to NetApp.

Nutanix provides many features I would qualify as ODS – dedupe, instant clones and more.  These features are provided by the software but Nutanix has chosen to make their platform dependent on their hardware (which uses commodity components).  Because the product is sold as proprietary hardware does that mean it’s not SDS?

I will argue that Nutanix is still SDS despite this.  They made a business and product decision to offer their product as a hardware device to improve support, procurement as well as to facilitate scale-out.  The SDS/ODS features provided are ultimately in the software and not in the hardware (commodity components).

And what about NetApp’s ONTAP platform – does it qualify as SDS?  I think it does.  NetApp has provided features like compression, deduplication, thin provisioning, efficient snaps, clones and replication for some time now.  Yes, this is baked into NetApp FAS arrays, but it’s really the ONTAP software platform that’s doing everything.  To take this a step further, let’s add the VSA (Virtual Storage Appliance), which allows you to front-end non-NetApp storage with the ONTAP platform.  Now let’s also add ONTAP Edge, which brings ONTAP capabilities to storage via a VMware virtual appliance and is obviously hardware agnostic.  In this full context, I think we can reasonably conclude that the “magic” happens within ONTAP (software) and that this is indeed Software Defined Storage.

Let’s Get Out of The Weeds

We could debate definitions of SDS/ODS forever as an academic exercise, but that isn’t what I want to focus on.  My primary goal was to share a definition of SDS (and ODS) that I can leverage in my next post.  Part 2 will take a broader look at the storage industry and how various solutions approach SDS/ODS from different vantage points, so that we can effectively compare them and understand how each provides value.  Some solutions will be more effective in one environment than another.  Some focus more on CAPEX while others focus more on OPEX.  We’ve only talked about Nutanix and NetApp so far, but I’d also like to discuss VMware VSAN, PernixData FVP, EMC ViPR and more.  Hopefully this post will make more sense when I get around to writing Storage Trends Part 2.

A Look Back at 2013 – Blog and Personal Notes

Looking back on 2013, my level of blogging and participation (especially as a three-year vExpert) fell far below where I would have liked it to be.  It seems to me there are two simple reasons for this.

EXPOSURE

I don’t get much exposure to the latest technology.  In fact, virtualization and “cloud” (much to my dismay) have very little to do with my primary job; I spend less than 10% of my time in this space.  Most of my blog posts and tweets are not the result of hands-on experience but rather theory and observations based on what I can read in my spare time.  I have almost no access to training, as there’s always too much work to be done and too little budget (one reason I don’t even have a VCP certification).

That’s not to say I have no exposure at all.  I do work with several Fortune 500 companies and have insights into the inner workings of several organizations as well as my own.  I’ve also recently had the opportunity to work with PernixData’s FVP product for accelerating storage I/O, which I will review here at some point.

TIME

This seems to be the biggest problem.  Because of events put in motion by my daughter’s medical history, I need to work two jobs to make ends meet (and they still don’t always meet).  For the past quarter I’ve been working 70–80 hours most weeks, including every weekend.

On top of this I’m trying to raise three children in what is a rather unusual and perhaps extreme housing situation.  Things like sleep, exercise and hobbies are luxuries.  I played soccer in college and to be so out of shape feels very alien and uncomfortable.

As anyone knows, you reach your peak performance with sleep, exercise and time for creative outlets, but there’s very little of that available.  I could pen a litany of grievances, but the simple truth is that the alternative is that my daughter wouldn’t be with us today.  I keep reminding myself of what I DO have and focus on the essentials I need to do for both them and myself.  (In the 20 minutes I took to pen this I’ve already been told I shouldn’t be doing this on New Year’s Day :)

THE FUTURE AND GOALS

There are several blog posts I’ve written in my head over the past six months but haven’t had time to put down.  I wanted to do a post-VMworld whiteboard, a look at the storage market and trends, and a piece on SDN as well.  There’s a wave of disruption ready to be unleashed in the IT world in 2014.

I expect that 2013 was my last year as a vExpert (given my lack of activity), but I’m still a big fan of VMware and am excited to see where they continue to evolve and add value with VSAN, NSX, management tools and more.  Because it requires less time, I tend to tweet more than I blog, but it’s just not the same as laying out your thoughts in the more detailed and reasoned format of a blog post.  I do hope to write and tweet about this space more actively as time allows.

My personal goals are many, but I must temper them with the time and resources I have.  There’s a (non-technical) book I’ve been wanting to write for over a decade, but that would take a year or more when I’m still struggling to find time for exercise and sleep.  At the same time, I’m 44 and running out of time to do anything with my life.  In many ways I still feel like I’m trying to start a career, as well as looking to actually do something with the thoughts in my head.

If nothing else, 2014 is a blank slate.  I’m excited about some of the changes I anticipate in the industry, and I hope to contribute and share in 2014 at a higher level than I was able to this year.

Upgrading to vCenter Server 5.5 Using a New Server

You may be looking at upgrading your vCenter Server to version 5.5 to take advantage of new capabilities, a faster and improved web interface, and the ability to upgrade your hosts to ESXi 5.5.


But what if your current deployment is, for example, vCenter 5.1 on Windows 2008 R2 with SQL 2005?  You might want to take the opportunity to start clean on more current versions of Windows and/or SQL.  This article summarizes a process that worked for me, along with a few hurdles encountered along the way.

Before we begin, also consider the vCenter Server Appliance, a hardened, pre-built vCenter Server running on Linux.  If you’d rather run vCenter Server on Windows, this post is for you.

This article also assumes SQL is running locally on the vCenter server.  If the database is remote, the process still works, except that you either won’t need to move the database at all or will be moving it to a different server.

UPDATE:  As this process does not transfer the ADAM database, existing security roles will NOT be migrated to the new server.  These roles will have to be manually rebuilt unless you want to try the scripts discussed in this post.  Special thanks to Justin King ( @vCenterGuy ) for pointing out the issue and Frank Büchsel ( @fbuechsel ) for providing the link to scripting permissions import/export!

Build The New Server

The new server should be Windows Server 2012, but NOT R2, which is not yet supported.  If you want to use a local database, install it at this point (we used SQL 2012).

SSL Certificates

We didn’t have custom SSL certificates, but you will still need to transfer your existing certs so vCenter can continue working with your existing database.  When I got to installing vCenter Server in a later step I encountered this error and had to go back and grab the certs.

On the current vCenter server you should be able to find the certificates in the following hidden directory:

  • For Windows 2008:
    C:\ProgramData\VMware\VMware VirtualCenter\SSL
  • For Windows 2003:
    C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\SSL

Copy everything in the SSL folder, create the following directory on the new server, and place the files there:

 C:\ProgramData\VMware\VMware VirtualCenter\SSL

For more information, see the following KB article on certificate errors related to vCenter Server installation.
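The copy itself is trivial, but it's worth verifying that the files landed before running the installer.  Here's a hedged Python sketch of that copy-and-verify step (the paths are the defaults quoted above; rui.crt/rui.key/rui.pfx are the typical file names, but folders vary, so it simply copies everything present):

```python
import os
import shutil

# Default locations from above; adjust for your environment.
# SRC would typically be reached via an admin share on the old server.
SRC = r"C:\ProgramData\VMware\VMware VirtualCenter\SSL"
DST = r"C:\ProgramData\VMware\VMware VirtualCenter\SSL"

def copy_ssl_folder(src, dst):
    """Copy every file in the SSL folder, creating the destination if needed."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for name in os.listdir(src):
        shutil.copy2(os.path.join(src, name), os.path.join(dst, name))
        copied.append(name)
    return sorted(copied)  # the file names that were transferred
```

Eyeball the returned list to confirm the certs (and any other files) made the trip.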

Transfer the Database (downtime begins)

Shut down the vCenter services so that we can transfer the database.  There are a few options here.  Our vCenter DB was about 30GB, so I simply detached it and copied the DB files across the wire.  If you have SQL 2008 or later you might want to take a compressed backup, or look at a tool like Red Gate or LiteSpeed, which can compress SQL backups into much smaller files to transport.  Alternatively, you might be able to detach the relevant VMDKs and attach them to the new server, allowing you to copy the files at disk speed.
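For the detach-and-copy route, the forklift itself is just a file copy of the detached data and log files, but copying 30GB across the wire is exactly where silent corruption can creep in, so a checksum comparison afterward is cheap insurance.  A sketch (the VCDB.mdf/VCDB_log.ldf names are assumptions for illustration; use whatever file names your detach reported):

```python
import hashlib
import os
import shutil

def sha256_of(path):
    """Hash a file in 1MB chunks so large .mdf files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def forklift_db(src_dir, dst_dir, files=("VCDB.mdf", "VCDB_log.ldf")):
    """Copy detached SQL data/log files and verify each copy by checksum."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in files:
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        shutil.copy2(src, dst)
        if sha256_of(src) != sha256_of(dst):
            raise IOError(f"copy of {name} does not match the source")
```

The detach and re-attach themselves happen on the SQL side (Management Studio or sp_detach_db/sp_attach_db); this only covers the file move in between.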

Once you have the database running on the new server we can begin with the vCenter Server install.

vCenter Server Install

The first rule here is to use vCenter 5.5a (build 1378901), which fixes some authentication issues on Windows 2012 in some environments.  The second rule is to install the components one at a time.  I prefer to control each install individually, and I’ll address each component below.

vCenter SSO Install

When you install SSO you have the option to sync with an existing SSO server.  Since the only other server running SSO was the one we were going to retire, I chose the “first server in new site” option.  We will need to edit SSO later to enable AD authentication, but not yet.

vCenter Web Server Install

When I first installed this component I got a 404 from the web server on each attempt.  As it turns out, there is an issue described in this KB article where the web server returns 404 errors when installed to a drive other than C:.  Normally I try to install everything I can to a non-C drive, but it seems this component needs to be on the C: drive to function properly.

vCenter Inventory Server and vCenter Server

These services are mostly straightforward installs.  If you copied the SSL certificates above, you should have no issues in this step.  You will have the option to have vCenter automatically attempt to reconnect to the hosts or to do it manually.  At this point vCenter Server should be working, but only local accounts may be able to log in.

To fix this, log in to the vCenter web UI using either a local account or the SSO admin account and perform the following steps:

1) Browse to Administration > Sign-On and Discovery > Configuration in the vSphere Web Client.
2) On the Identity Sources tab, click the Add Identity Source icon.

Add the appropriate source type, such as Active Directory, and add it as one of the default domains.  For more information, see the following help chapter on setting default domains.

vCenter Update Manager (optional)

You should mostly be in business at this point but you may also want to install vCenter Update Manager.  With this step there are a few additional considerations.

First of all, you need to create a 32-bit DSN for the Update Manager database.  There’s a KB article here, but I think my method was quicker: on the 2012 server, open the Search charm, type “odbc” and press Enter.  You’ll see both the 32-bit and 64-bit versions of the ODBC configuration utility.  Select the 32-bit utility and create your DSN, but…

Make sure you use the SQL Server 2008 R2 Native Client even if you are using a 2012 database.  As explained in this article, the vCenter Update Manager service will fail to start when using the 2012 Native Client.  Use the 2008 R2 Native Client against SQL 2012 and it will work fine.

That’s basically it.  To summarize, take the following steps:

1) Build a new Windows 2012 server (not R2) and install SQL Server or another supported database

2) Copy the SSL certificates from the current vCenter to the new server

3) Shut down vCenter Server services

4) Take backups and/or snapshots as desired

5) Using the method of your choice, forklift the current vCenter database to the new server (if SQL is local)

6) Install SSO

7) Install vCenter Web Server to the C: drive

8) Install the Inventory Server and then vCenter Server

9) Log on to vSphere with the web UI and configure SSO to authenticate to your Active Directory domains and/or other sources as desired

10) Manually reconnect to your ESXi hosts if you selected this option

11) Install Update Manager using a 32-bit DSN and the SQL Server 2008 R2 Native Client

That’s it: vCenter 5.5 running against your same database, but on a clean Windows 2012 server.  You’re now ready to take advantage of the new features, from the improved web interface and expanded OS support to the ability to upgrade your hosts to ESXi 5.5.  Happy virtualizing!