Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it not JUST from a virtualization perspective, and not JUST from a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but from a more holistic perspective which considers all of this, as well…

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert I do think that there is something special about this one.  This is a much more significant jump than going from 4.x…

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you everything that’s in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in…

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Networking and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to…

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it…

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO,” “This is a strategic initiative,” “The ROI is compelling,” “There’s funds in the budget,” “Our competitors are doing it.” Some of these are better reasons than others, but here’s a question.  Imagine a…

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

Virtualization Myths and Realities

One of my several motivations for this blog was to debunk myths that served as barriers to the adoption of virtualization.  EVERY time I was given the opportunity to move beyond fears and demonstrate what could be done with virtualization, it was a complete success, and several people would say, “I really didn’t think it was going to work, but it did!”

I’ve written several posts on this blog on that very theme, and so I also wanted to share this infographic created by VMware and EMC using data from IDC.  I’m often skeptical that the numbers in infographics might be inflated, but based on my experiences I have far fewer reasons to doubt these numbers.

While virtualization by itself is not cloud computing, I believe that it can be a key ingredient in achieving efficiencies and agility which can transform how both business and IT get done.

The NoCloud Organization Part 1: Would You Like Fries With Your Cloud?

When Ray Kroc looked to expand his restaurant business beyond just a few stores and into a larger franchise, he faced several challenges in delivering a consistent, standardized product on a larger scale.  One challenge was labor — if the restaurant was to expand under the franchise model, it would be necessary to rapidly find labor replacements at low cost and with minimal training. How would his restaurant be able to acquire new labor at low cost and yet still train them to provide a highly consistent and standardized product?

The solution was to turn the kitchen into something that mimicked a Model-T assembly line.  New tools, such as larger grills and condiment dispensers, were introduced to create a consistent and easily repeatable process.  In addition, the division of labor was split across key inputs such as the hamburger patties, condiments and bun toasting, such that each employee had only one simple, repetitive task to do.  Not only did this provide consistency, but it reduced the level of training required so that new employees could quickly be plugged into the hamburger assembly line in a business where high turnover was common.

The short 30 second video below highlights some of this:

In taking this approach Ray Kroc was able to reduce the time it took to provision a hamburger from 20 minutes to 30 seconds.  New labor could be quickly plugged in with little training and at low cost — one person would focus exclusively on condiments, another on buns — while the product became consistent and standardized.

CAN THIS APPROACH WORK IN IT?

I was working at a large global company which had chosen to standardize on IT products from a single vendor wherever possible.  Part of the reasoning behind this was a belief that the standardization of contracts and support would offer benefits, as well as a belief in greater interoperability coming from a homogenous environment.

For one of my projects I was looking into a solution that I believed had the potential to offer more value than what was available from this single vendor, and eventually that vendor was able to get an audience with our Vice President (who was based in another region).  The vendor called me back the same day to tell me that during the meeting the VP had indicated that he wanted to make IT tasks “so simple that a monkey could do it” in order to explain why he wanted to stick with a single vendor.

Indeed, at this organization people were corralled into a strict division of labor wherever possible, and tasks were made as simple as possible (or as simple as management believed they were). This is a significant contrast to picking best-of-breed solutions and demonstrating faith in your engineers to design and implement the best solutions. Can such an approach be effective given just how complex cloud computing can be?  How do you think this approach worked out for this organization?  (Check back later for Part 2 of this article for the answer.)

ARE WE BUILDING A HAMBURGER?

Perhaps in the 1990s it was possible (to some extent) to silo your infrastructure into repeatable and standardized tasks, but as we move towards IaaS and PaaS models things are a bit more complex.  There’s only one hamburger on the menu, but your organization’s provisioning menu could have many choices, each with different operating systems, hardware allocations, applications (including n-tier), and even performance and security profiles.  And with new virtualization and automation layers, the core elements are far more interdependent and intertwined than they used to be, making a strict division of labor a challenge in many areas.

Now think about all the IT acquisitions, mergers, alliances, startups, and new products revolving around the cloud.  Can you really buy from just one company?  Are you really making things easier for your “monkeys”, or are the “monkeys” now pulling on levers that are just grinding the gears of your IT machinery?

Having said that, some are trying to bring best-of-breed solutions to market under a single support umbrella, such as VCE’s Vblock.  The Vblock (discussed here) is VMware software, Cisco hardware, EMC storage plus orchestration software, all designed to work together to provide an efficient infrastructure.  While there are significant advantages to converged infrastructure (including consolidated purchasing, contracts and support), you can no longer continue to effectively manage IT as separate silos.  There will still be deep SMEs (subject matter experts) in many core areas, but you will need skilled generalists (as Nick Weaver explains here) who can transcend these areas (and teams) to support, develop and automate the solution.  If your network people aren’t talking to your virtualization people, who aren’t talking to your storage people, your operations probably aren’t going as smoothly as they could.

A DIFFERENT KIND OF COMPLEX SYSTEM

Recently I had the opportunity to observe medical teams in close detail.  I would participate in rounds and had numerous impromptu discussions with medical professionals on everything from medical technology to specific patient cases.  Each medical professional had a specialty or area of expertise, including pulmonary, cardiology, general surgery, plastic surgery, pain management, physical therapy, nutrition, infection control and more.  While each had a formal role in the organization, they all needed to have enough knowledge of medicine to see “the big picture” and to discuss with each other how best to approach a given case.  There were countless trade-offs between the different medical “silos” that needed to be considered and understood by all.  Some of the goals of pulmonary might conflict with goals from the pain management and surgical teams, and there were numerous combinations of such trade-offs and competing metrics and goals.  There was no room for egos — no need to prove anything and no agendas to be driven home.  Just professionals with differing areas of expertise transcending their own areas and comfort zones in order to discuss a case and seek the best approach for a patient.  If any organizational and/or cultural walls were to interfere with this ability to collaborate across different medical areas, the patient might be adversely affected.

Cloud computing — which is really what today’s “computing” wants to be when it grows up — is far more complex than 1990s-style IT management.  There were always the silos of compute, network, storage and applications, but the interactions are far more complex today, not unlike the systems within the human body.  To give one example of just how complex cloud computing is, Christian Reilly writes, “today, there isn’t a CMDB tool on earth (yet) that can realistically and efficiently keep pace with the inherent fluidity, agility and flexibility of even the most well intended cloud deployments.”  Or in other words, we’re not provisioning hamburgers here.

It sounds a bit cliché — perhaps because it is said so much — but the biggest obstacles today to the benefits of cloud computing are not technology, but people.  Mindsets, cultures, organizational models and operational processes can each be major barriers to cloud computing.  While the goal of cloud computing may be to operate with increased automation and efficiency, a model based on single vendors, monkeys pulling levers, and assembly-line-style silos may not be the best approach.  In Part 2 of this series, we will take a look at an organization which might just have done all the wrong things when it comes to effective IT in the era of cloud computing.

Is Your IT Department a Cost Center or a Business Partner?

About 12 years ago I was working as a consultant (Novell, WINNT/2000, Citrix, etc. were big technologies at the time) and I became fascinated with ROI analysis and using it as a way to demonstrate the value behind various initiatives.  It wouldn’t take long before the issue of intangibles came up.  Not everything can be defined neatly in an ROI analysis.

For example, what’s the value of your sales force having access to CRM data on their smartphones versus dialing back in to corporate?  You can’t easily define various measures of productivity in an ROI analysis, but understanding such intangibles is key – I think – to moving beyond the concept of IT as a cost center.

I found myself often challenging colleagues, managers and even CIOs to think about whether IT was just a cost center or something more.  What does “IT as a cost center” mean?  In my view it basically means that IT is a “necessary expense” of the business.  You need email, websites, CRM, ERP, departmental apps, and so on, and while some of them may provide value, for the most part IT is considered “a cost of doing business.”

But what if IT could be something more?  One step in this direction was to identify intangibles, understand value creation and choose to make technology decisions from an enlightened perspective, taking into consideration the business and its needs and gaps.  Now with cloud computing, the potential to move beyond a cost center has reached a whole new level.

I’ve tried to tackle this concept in a few previous posts, but with cloud computing IT has the potential to evolve beyond being a mere cost center towards being a business partner.  Yesterday you bought servers, storage and networks independently for the most part, but in a cloud environment it becomes all about providing on-demand capacity for the business as and when they need it, in order to execute their business strategy.  You might be using converged infrastructure, public cloud providers or any of a myriad of models to satisfy demand.  But you’ll need to be able to predict this demand as well as understand your supply.

The biggest problem in many organizations is not so much obtaining “cloud technology” as it is getting IT at all levels to understand “cloud” and then to change mindsets, org charts and processes to work within this new paradigm (an upcoming blog post will focus on real-world examples of what types of IT behaviors are NOT compatible with cloud computing).  But once you get there, you can begin to look at a model where IT becomes an engaged partner with the business, learning how to provide the capacity to run the applications the business needs, when they need them.

Christian Reilly shared this image recently, which shows an IT organization actively working at the highest levels with the business in order to support – not IT or a “necessary expense” – but the business itself:


And that’s ultimately where IT has the potential to be and should be – not a cost center but an engaged partner in the business, leveraging a cloud operating model.  Companies that fail to adopt this approach will only put themselves at a competitive disadvantage and perhaps hasten their own demise.  One of my favorite business leaders is Andy Grove, who, despite Intel’s dominance, had a healthy paranoia of the competition and a perpetual curiosity that provoked new improvements and opportunities.  The time to begin moving your organization towards the cloud model is yesterday.

In my next post I’ll expand on this from a few different angles based on my own experiences with one IT organization.  Stay tuned…

Cloud Explained: Whiteboard Videos by Brian Gracely

I had shared this on Twitter yesterday but I decided that this content was good enough to warrant a blog post to get more attention.

Brian Gracely has posted several excellent videos on Cloudcast.net that go into detail on topics like cloud fundamentals, IaaS, PaaS, cloud economics and much more.  I believe that it is critical for us technologists (and ideally management) to have a strong understanding of these concepts and what they mean from several different perspectives.

I love these types of discussions and will hopefully explore some of these topics from different angles in the coming year (until I have access to a whiteboard I may have to use SlideRocket!).

Below is the first video in the series — Cloud Computing Basics — and the rest can be accessed via either Cloudcast.net or their YouTube channel.

The SSD and the Home Lab

I thought I’d do a quick post about the impact an SSD (and perhaps a memory upgrade) can have on a home lab.  I tend to lag behind when it comes to access to both technology and experience (which I’m trying to change), so this isn’t a super-duper state-of-the-art lab, but it does show what can be done in many cases with a modern PC and a modest investment.

A few months ago, I wrote this post on building a VMware View lab using just a PC with 8GB of RAM.  Yes, it was slow and painful, but it was possible and it could be done.  I had wanted to do so much more in the home lab – ranging from vCloud Director to vCOPS to vShield, but I found my home lab (with one SATA spindle) either inadequate or intolerable.  The break in the clouds came when I realized that I had a line of credit with Dell (note:  I’m not advocating personal debt here, but I viewed this as an investment in my development).

The CPU (Intel Core i7 (Sandy Bridge)) was fine, but I would need both more RAM and faster disk if I wanted to do more with my lab.  Dell only had one model of SSD available (Intel 320 SSD) so there wasn’t much research to be done.  I ordered the 120GB SSD drive along with an additional 8GB of RAM (for a total of 16GB) using my Dell credit for about $300.

The memory alone had a profound impact on what I could do – with more RAM available there would be less swapping and I could be more generous in memory allocation to VMs, but the biggest impact was the SSD.

SSD Performance

The SSD drive (and RAM) appeared the next morning and I quickly ran a few tests.  One of the biggest problems with a traditional hard drive is the “seek time” consumed when the drive head moves across the platter to locate the desired blocks – a cost most commonly seen in random I/O patterns (as opposed to sequential).

Sequential operations improved anywhere from 50 to 150 percent, but the real “WOW” factor came with random access patterns.  Consider the following random access tests, first from the hard drive and then from the SSD:

[Benchmark screenshot: hard drive random access results]

A max of 52 IOPS and a max of 35 MB/s.  Now for the SSD:

[Benchmark screenshot: SSD random access results]

Huge differences, by a factor of thousands in some cases.  For random transfer rates, the SSD was about 8 times faster!

How would this translate into real-world scenarios?  One of the first tests I did was an automated “smart install” of Windows 2008 R2 on VMware Workstation 8.  There are significant amounts of time during a Windows OS install when the disk is not used, so I wouldn’t experience anything like an 800% improvement, but the time required was reduced from 19 minutes to just 9 minutes – a very significant and welcome improvement.  I was able to rebuild most of my lab VMs in a single day, as opposed to several days for the same task in the old lab.  I also used the linked clone feature of VMware Workstation 8, which – when combined with the SSD – makes provisioning new VMs really fast and efficient as well as space efficient.
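If you want a rough feel for the random-read difference on your own lab hardware, a simple script is enough.  To be clear, the sketch below is not the benchmarking tool used for the screenshots above; it is just a minimal Python approximation that issues 4 KB reads at random offsets within a test file and reports IOPS.  The file path and sizes are placeholders.

```python
import os
import random
import time

TEST_FILE = r"D:\lab\testfile.bin"   # placeholder: point this at the drive you want to test
FILE_SIZE = 1 * 1024**3              # 1 GB test file
BLOCK = 4096                         # 4 KB reads, typical for random I/O tests
DURATION = 10                        # seconds to run the test

# Create the test file once if needed (a sequential write, not part of the measurement)
if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
    with open(TEST_FILE, "wb") as f:
        chunk = os.urandom(1024 * 1024)          # 1 MB of real (non-sparse) data
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)

ops = 0
start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:    # unbuffered at the Python level
    while time.time() - start < DURATION:
        # Seek to a random 4 KB-aligned offset and read one block
        offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
        f.seek(offset)
        f.read(BLOCK)
        ops += 1

elapsed = time.time() - start
print(f"{ops} reads in {elapsed:.1f}s -> {ops / elapsed:.0f} IOPS, "
      f"{ops * BLOCK / elapsed / 1024**2:.1f} MB/s")
```

Run it once with the test file on the spinning disk and once on the SSD and compare the IOPS.  Keep in mind that the OS file cache can flatter the numbers, so a test file at least as large as your RAM (or a reboot between runs) gives more honest results.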

There are a few steps and details in the home lab setup, ranging from networking, DNS, AD and more, that I worked through – not sure if there’s any interest in a step-by-step home lab configuration guide, but perhaps I’ll write that up on a rainy day if there is.

Intel SSD Toolbox

My SSD was an OEM drive so it literally came with nothing, but I looked around and found that Intel has a nice SSD Toolbox for their SSDs.  It gives detailed info on your SSD and includes some optimization checks, including:

  • Disabling Superfetch/Prefetch on the SSD drive
  • Disabling ReadyBoost on the SSD drive
  • Checking for defragmentation tasks (Intel recommends that defragmentation not be run on their SSD drives)

It also has an “optimization” feature, which Intel recommends running weekly; it basically de-allocates blocks that are no longer in use.

The Home Lab

The home lab, I think, is critical for many who wish to gain more experience and/or test scenarios which they may be unable to run in the office for one reason or another.  Much like certification, you need to make a financial investment for the opportunity that a home lab provides, and not everyone is able to make that investment, but if you can afford it, it can often be a very efficient one.

The combination of increasing RAM to 16GB and adding an SSD can make a world of difference for a home lab.  Working on VMs now feels like I’m running on a really fast SAN, and I have more RAM to support more complex combinations of VMs (including ESXi hosts) to enable many more lab scenarios.  I’m looking forward to being able to do many things with VMware View, vShield, vCloud Director, ESXi 5 features and much more.

If you’re looking for a new system or upgrading an existing one for the educational opportunity that a home lab can provide, keep these points in mind.  In my case it took a $300 investment in RAM and an SSD to open a whole new set of possibilities for what I could accomplish in the home lab.  As for me, I’m really excited about what this upgrade now allows me to do, and I’m sure I’ll be posting more entries here about the various lab activities and experiences I’m looking forward to both having and sharing.

What is Private versus Public Cloud? (Triple-A Cloud Part 2)

In “What Really Is Cloud Computing?” I tried to break down the essence of cloud computing as abstraction, automation and agility.  Part of the idea there was to break down where the value of cloud computing comes from, and step back a bit from the private versus public versus hybrid discussions.  Some of the points I tried to make include:

  • Cloud is not just outsourcing or a product that you build.  It’s a complete transformation of mindsets, culture, processes and org structure within IT.
  • Don’t wait to sort out the private/public/hybrid debates to get started.  If you’re not ready for public cloud, start by chasing 100% virtualization and automation (OPEX benefits) today as this will better position you for public cloud in the future.

Having said all that, I wanted to further confuse everyone by explaining why I think that private cloud is a good choice in the short term, but public cloud (in whatever form it evolves into) will be the longer-term trend.  Before I get into that post, I thought it might be beneficial to have a brief discussion defining just what private versus public cloud means.

Private Cloud

The formal definition of private cloud is fairly straightforward, yet there are several different variations of it in the marketplace (more on that in a bit).

The current Wikipedia entry describes private cloud as “… infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally.”

In other words you’re adopting or pursuing the three A’s of cloud within a traditional infrastructure intended for use by a single organization.  You control the routers, firewalls, switches, SAN, workloads, etc. for your organization and you don’t share any of them with anyone else.

Now to make this a bit more confusing, there are hosting providers offering “private cloud hosting” solutions which might not really conform to the definition of private cloud.  Sometimes one or more components of so-called “private cloud hosting” can be shared with another tenant — and technically this is not private cloud.  True private cloud means that none of your resources — hosts, SAN, firewalls, etc. — are shared with another tenant.  There may be other tenants in the same datacenter, but none of them should share or have access to any elements of your private cloud infrastructure.

Public Cloud

Public cloud as we know it today is a multi-tenant design.  The benefit of this is fairly straightforward — by knocking down walls and barriers it is possible to leverage hardware more efficiently in a larger pool.  Efficiency goes up and costs go down.  You gain more elastic capacity, utility pricing and managed operations.

Thus in a true public cloud solution, you may have several tenants sharing the same physical server, the same SAN, the same firewall, etc., but a variety of technologies (including VMware vShield and vCloud Director in several cases) are used to provide a logical boundary/barrier between the tenants.

While security is naturally a big concern in the public cloud model, this does not necessarily mean that private clouds are inherently more secure either.  Standards, audits and transparency are needed in the public cloud, and over time both process and technology should continue to evolve in this area.

Hybrid Cloud

Hybrid cloud is when an organization has adopted both private and public cloud elements.  Oftentimes a private cloud will be used for “planned” or budgeted capacity, while public cloud will sometimes be leveraged for initiatives for which the capacity was not anticipated or budgeted.

What’s important here is not just the portability of workloads between private and public clouds, but maintaining the security and the operational model as you move from one to another (more on this in a future post).

Conclusion

That’s my quick take on the private/public/hybrid models.  In a future post I’ll be going into much more detail on why I think the longer term trend is the public cloud model, but that private cloud is very valid and will likely grow considerably over the next several years.  Also I’ll be looking into the vCloud ecosystem a bit.

But another quick point that I want to restate is that organizations should not let uncertainty deter them from pursuing the three A’s of cloud computing.  While there are some great products and solutions being offered, there is no magic bullet to escape the fact that cloud computing is a fundamental transformation of how IT is approached and executed.  Start with 100% virtualization and automation and start building a cloud (and transforming your IT org).

Do you agree or disagree?  Have a comment?  Join the discussion below….

Clouds Without Borders

This is one of those topics I was just intrigued with and felt compelled to blog about, because it’s not only very interesting but very important.  What entities have the right to search or seize data based on its physical location?  This is already a big issue in several contexts and will only become more complicated as the use of cloud computing continues to grow.

Let’s start by taking a step back and looking at the economics of free trade and protectionism:

Proponents of free trade argue that it results in the most equitable and efficient use of resources.  If Japan has the lowest opportunity cost for rice, and the U.S. for corn, then each of those countries should specialize where they have a comparative advantage and purchase from others where they don’t.  Harvard economist Gregory Mankiw stated, “Few propositions command as much consensus among professional economists as that open world trade increases economic growth and raises living standards.”  John Maynard Keynes was also convinced of the economic benefit of free trade, writing that we should defer to the “wisdom of Adam Smith” on this matter.

However, there can be agendas that conflict with free trade, ranging from national security (e.g., a domestic supply of steel for the military) to other national or cultural interests in maintaining a specific industry at a specific level.  Protectionist tariffs and other measures can be erected to protect national industries which are deemed “strategic” by whatever measure.

In the past, the jurisdiction of commerce was fairly simple to define with traditional physical products, but with cloud computing the “product” might be hosted (and stored) in Singapore, transmitted through Germany, and consumed in Switzerland.  If there is an issue of national or international law, who has access to the data, and at which points and on what terms?   And do we have to start thinking about which countries that data routes through?

We’ve already seen this to some extent with not just China censoring the Internet, but also nations like Iran, North Korea and Cuba which have been trying to build their own “private Internets” to keep out external influences.  This can work as a protectionist barrier against not just the economics of cloud computing but also free speech.

Law of Unintended Consequences

The issue is becoming a rather big one in the United States given the growth of cloud computing and the Patriot Act.  Because of the Patriot Act, many potential cloud computing customers are either genuinely concerned by the issue or are using it as a bargaining chip (or both).  POLITICO explains the issue in a recent article:

The PATRIOT Act, which had key provisions extended by President Barack Obama in May, has become a flash point in sales of cloud computing services to governments in parts of Europe, Asia and elsewhere around the globe because of fears that under the law, providers can be compelled to hand over data to U.S. authorities.

To make matters worse, it’s not just limited to data that is hosted within the United States.  Microsoft recently admitted what was long suspected – that, being a US-based company, they have to comply with US law and – if asked to – turn over data hosted in Europe to US authorities.

Regardless of the extent to which these concerns are exaggerated for negotiating leverage versus being legitimate, one thing is certain – clarity is needed.  At the very least it should be explained under exactly what circumstances (i.e. a FISA-court-approved warrant) a seizure could take place, along with more details on the process and on recovery.

Strategic Clouds

But going beyond the Patriot Act, there are other issues that need to be worked out.  We know that the value and potential of cloud computing is diminished when barriers against the free flow of data are raised, but might there be cases where some restrictions are warranted?   As noted by POLITICO, the GSA recently tried to impose restrictions on where datacenters could be located.  There are probably some countries that we would not want to have a claim on government data under any circumstances, and some data sets we might even want to keep within our borders.  On the other hand, if we adopt too protectionist a stance, other nations may follow suit, harming our global competitiveness.

Many are inclined to approach issues like this based on ideology first and ask questions later.  But these are some very complex issues and balancing acts that require a deep and sincere analysis.  Should governments require that government data stay within physical borders?  Should strategic industries avoid using cloud services in nations with which we have less-than-solid alliances?   Both governments and businesses need to think about where they host their data and applications and understand the potential legal situations.

We’re not trading physical commodities any more.  We are offering computing services which can be stored in one country, transmitted through a second (or more), and accessed from a third.  As cloud computing grows, so does the potential for impact not just on economics but on both free speech and security.  We need to start thinking about these complex issues and lobbying our government(s) to get it right.

Fixing the Problem

Government is typically unable to react to fast-changing, dynamic industries like cloud computing (and oftentimes this might be a good thing), but cloud computing is a growing industry and the US shouldn’t be in a position where it is losing ground because the government failed to provide clarity and address the real market issues at play.

Cloud computing didn’t really exist when the Patriot Act was written, and one can’t help but wonder how many of the 535 members of the US Congress have an understanding of cloud computing, let alone this situation.  The good news is that the executive branch appears to be taking action on some of the issues, but will this be enough?  In the long run, there probably needs to be action on the legislative front as well to find a clear and responsible approach to issues of national security that doesn’t, umm…cloud or confuse the impact and adversely affect the cloud computing industry.

Just as there may be a need for court-approved wiretaps for national security, it is not inconceivable that there would be a legitimate need for this scope to be extended to email, social networking and more – much of which could be using cloud service providers.

This is no place for knee-jerk ideology, as these are complex issues that we need to get right.  We need to engage with our lawmakers and lobby for laws in this area that are clear and cloud-friendly and that avoid protectionist stances in the absence of strong strategic agendas.

What do you think?  Comment below.

Our Make-A-Wish Experience

How should a life be lived? People will offer many different answers to this question and some may even be too busy to be bothered by it.  But on one thing we should be able to agree — the need to laugh, smile, and enjoy our time with friends and family during our time in this world.

When I was a child, I first learned of the Make-A-Wish Foundation while listening to and attending sporting events.  I knew that it was a highly respected organization that helped to bring smiles to very sick children but I really did not appreciate the full value of what they did — not until we experienced it for ourselves.

We’ve got so many details, subplots and side-stories that we could easily write a book – in fact our daughter wants to do exactly this someday.  But for now, here is a shorter version of our Make-A-Wish experience.

GRACE

To help put everything in perspective we felt that some background could provide a better context for what happened.  Courtney was born in early 2001 with a birth defect called an omphalocele.  When she was born, her liver, bowels and other organs were outside of her body enclosed within a thin membrane.

While omphaloceles are not all that rare (1 in 5,000), this one was especially unique because of its size.  She was a case study at the time for a “very large omphalocele” — there simply wasn’t enough room in her body to accommodate her organs.

In the first year of her life, she went into the operating room 10 times for major surgery – some planned and some unplanned.  Each time she went in for surgery we would watch the balloons on her cart fade as she passed through the operating room doors, and then the tears and long nervous waits would begin.

We were told on several different occasions that she did not have long to live – at one point her life expectancy was being measured in hours.  We were also told that she would never be able to walk, and certainly would be severely disabled her entire life.  One by one, Courtney would prove all these predictions to be wrong.

GOING HOME

Fast forwarding after about a year in the neonatal intensive care unit (NICU), the medical team was able to get a thin layer of skin to grow over the top of her abdominal organs.  The abdominal muscles had been cut and/or pushed aside in previous operations, so there was no protection for these organs beyond this thin layer of skin.  The best way to describe it is that she looked as if she were pregnant, with a protrusion about the size of a very large cantaloupe.

This protrusion alone made things very difficult.  The surgeon asked us to imagine eating a Thanksgiving dinner, lying on your back and then having a 50lb bag of sand sitting on your stomach.  This is not unlike what it felt like for her, with this constant weight pushing up on her diaphragm making it more difficult to breathe.

When she was still 1 year old, she was released from the hospital with a trache, an oxygen tank, home nursing and much more.  After time and much physical therapy we were able to get her to crawl and then walk.  She was eventually mainstreamed in school and we had to be very careful to avoid any blunt force to her abdominal area.

Fast Forward To 2009

In 2009, Courtney had turned 8 and the medical team had decided that she had grown to the point where they could finally attempt to repair the omphalocele.  The vast majority of omphaloceles are repaired at much younger ages, so to repair one at this age was not only rare but considerably complicated.

Shortly after her 9th birthday in 2010, Courtney indicated that she was ready to begin her medical journey and wanted to have her belly “fixed”.  They would first insert saline balloons which would be slowly inflated over time in order to promote the development of more skin, which would be needed to cover the wound site.  During the operation, all her abdominal organs would be moved and repositioned within the abdominal cavity and then covered with porcine tissue (pig muscle).

To say that this would be extreme surgery was an understatement.  It would be extremely complicated, and the risks were extremely high as well.  Her organs were not in their normal locations, and each organ would have to be manipulated and repositioned as best the surgeons could.  Her chances of survival were documented by one doctor at 15%.  Courtney understood the risks, and she asked us to work with the doctors to start this new chapter in her medical journey.  She was ready.

Life threatening surgery is never easy, but this time Courtney was no longer a baby, but a brave nine year old girl with whom we now had many priceless memories.  After the lead surgeon discussed the procedure with Courtney (and the use of the pig muscle) she said “just slap a couple strips of bacon on me and let’s call it done!”.

MAKING A WISH

When Courtney’s surgery was scheduled, an application was submitted to the Make a Wish Foundation and it wasn’t long before we heard from them.  The Make-A-Wish team was absolutely wonderful and it was such a joy to work with them.

They started by asking Courtney if she had any wishes in mind, and Courtney didn’t miss a beat in saying that she wanted to meet Miley Cyrus.  The staff explained to her that while such wishes could be made, there was a long waiting list and such requests could often take more than a year, if granted.

92% of parents saw their kids experience re-empowerment to take back the ability to make decisions in their lives, post-wish — The Make-A-Wish Foundation

Then they began to ask questions like “if you could go anywhere in the world, where would you go?”  Her answer surprised us, as she had never discussed it with us.  She said that she wanted to go to Hawaii to meet a surfer dude, see a hula girl, see the whales and see a volcano.  Not only was her wish granted, but an itinerary was made which included a whale cruise, seeing hula girls at a luau, and traveling to the big island to see the volcano.

OUR TRIP

We arrived in Honolulu in the evening, and the next morning we went downstairs to enjoy “breakfast on the beach” in Waikiki and ended up playing in the water in our street clothes.  It was a surreal experience, having been driving in New Jersey snow just a day before.  The girls wasted no time playing on Waikiki Beach.

The morning view of Waikiki Beach filled with surfers, as seen from our room’s balcony

Courtney and Brittney experiencing Hawaii for the first time

Welcome To Sunny Hawaii!

The next day we went out on a whale watching cruise which was absolutely amazing.  Not only did we see several whales but we saw a baby whale breaching with its mother, which the guides said was highly unusual.

The next day was an incredibly full day of fun and adventure.  We had plans for the evening in the SW part of Oahu, so we tried to work a few things in along the way.  We got up early and drove to the Dole Pineapple plantation, and went on a fantastic tour and then ended up getting lost in this giant maze!

From there we drove up towards the North Shore to have some famous Matsumoto Shaved Ice.  Below are the girls and me enjoying shaved ice (the girls found it difficult to smile without sunglasses).

Hawaiian Shaved Ice


From there we ventured onto the North Shore and stopped in the first surf shop we encountered to explore, which just happened to be Surf N Sea — Hawaii’s oldest surf & dive shop.  Now, there was no surfing as a part of our itinerary, so we thought we would inquire and see if we could do something on our own.  We spoke to the manager and explained everything and he said “I know just the guy – hang on”.

89% of parents observed increases in wish kids’ emotional strength, which can help them improve their health status — The Make-A-Wish Foundation

“T.G.” was his name and he was the real deal – he had spent seasons living out of his van in Australia chasing the best waves.  He drove in and offered to give the girls a free surfing lesson on his day off!  Below is a 5-minute video of our amazing time that afternoon.

Pineapples, shaved ice and surfing.  Now it was time to drive down to the southwest corner of Oahu for an authentic luau to finish off the day:

Ready for the party!

Courtney and Brittney pose at the luau party!

There’s a bird on my head!

That was about as much family adventure and priceless memories as could possibly be packed into a single day.

On the day we were supposed to fly over to Hawaii (“the big island”) we couldn’t get out of our hotel.  There had been a 6.9 quake in Peru and there was a tsunami alert.  Everyone in the hotel had to stay above the 4th floor as a precaution and no one was allowed to leave.

The beach was evacuated in anticipation of the tsunami, but one truck lost all the surfboards! Everyone nearby pitched in to help load the boards back on to the truck.

What was amazing was to watch from our balcony almost every single boat in Southern Oahu heading out to sea.  The ocean was just filled with hundreds of boats, streaming out one after another – a very unique sight.

It may be hard to see in this picture, but if you look close there are A LOT of boats heading out of Honolulu harbor to avoid the tsunami. This was early in the day so the number of boats was much greater later that morning.

Later that day we finally made it to Hawaii and settled in near the Kona coast.  The next morning we began our volcano adventure and drove up to Hawaii’s active Kilauea volcano.  Imagine waking up and watching sea turtles sun themselves along the Kona Coast, and then a few hours later wearing a jacket as a strong, cool, misty wind blew on top of an active volcano as you overlook the lava fields (later in the day we would be in rainforest-like tropics on the other side of the volcano).

75% of parents observed that the wish experience increased wish kids’ physical health and strength — The Make-A-Wish Foundation

We explored the steam vents, asked the park rangers questions and had an absolutely awesome experience exploring this wonder of nature.

Courtney in the lava field!

The girls by the active steam vents on top of the active Kilauea volcano

Courtney and the volcano — with a double rainbow

Girls, volcano and rainbow

Courtney enters a lava tunnel

We grabbed dinner not far from the volcano, and then we decided to go back up to see it glowing in the night.  On the drive back in we saw the most amazing thing….

A green light was forming on the horizon, and as we drove closer it became clear that this was a “rainbow”, but since it was illuminated by moonlight it had a greenish glow to it (almost like the Northern Lights!).  We tried to get some pictures but we just didn’t have the right camera for these unique conditions.

When we got to the top we didn’t know how to describe what we saw.  Imagine standing on top of a volcano at night looking over a lava filled landscape with a huge green rainbow arcing over the horizon.  It literally looked (and felt) like a science fiction movie as if the “green rainbow” were the rings of a planet or something.  The moon was full and bright on the other side of the volcano.  It was an absolutely incredible experience which we will never forget and left us with a sense of awe and amazement.

97% of volunteers reported feeling more grateful and thankful as a result of helping to grant a wish — The Make-A-Wish Foundation

Thanks to the Make-A-Wish Foundation we got the opportunity to have an amazing vacation filled with memories that we would never forget.  Courtney vowed to return to Hawaii to take up surfing and her little sister vowed to open a beachfront restaurant on the same beach.

While the future was uncertain, each of us had a new sense of peace about us as we prepared to face this critical phase of Courtney’s medical journey.

SURGERY

In early July 2010, Courtney went in for her “big” surgery.  When Courtney was going under anesthesia, her mother had her pretend she was on a Hawaiian beach eating snow cones.  She had a smile on her face as she drifted off to sleep.

The day of the surgery was very intense as were the following several days.  For most of the first week she was in a near-coma state — unconscious with no movement.  There were many medical challenges and many difficult balancing acts.

After about 8 weeks in the hospital Courtney was released and went home.  I can’t write this post without mentioning how professional, helpful and generally outstanding the entire team was at Morristown Hospital.  We had MANY in-depth conversations with many medical professionals and were present for many of the AM rounds.

This was extreme surgery.  Very extreme.  And we have no doubt that our experience in Hawaii — thanks to the Make-A-Wish Foundation — helped to put Courtney in a state where she would be the most receptive to healing and recovery.

Courtney still has some challenges and minor surgeries ahead, but she is on a great trajectory and she will never forget the amazing experiences she had in Hawaii.

HELP MAKE A WISH

This is what the Make-A-Wish Foundation does every day — putting smiles on the faces and joy in the hearts of children in very serious situations with very uncertain futures.  Hope, strength and joy are what they offer these children in very uncertain times.

The Make-A-Wish Foundation is filled with outstanding people who absolutely love what they do, and to give something back and share in what they do is a blessing.

There are so many different ways to donate to the Make-A-Wish Foundation, including monetary donations (cash or stocks), frequent flier miles, computer equipment, loyalty points and more.  You can also volunteer time and/or services, or even participate in the Adopt-A-Wish program, where generous donors can choose a specific wish to completely sponsor.

For more information on donation/volunteer options at Make-A-Wish, click here.


What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses.

The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it became fashionable to slap the “cloud” label on them).

Some business articles describe cloud computing as turning a capital expense into an operating expense, while others talk about moving from a product to a service.  But for a CIO or IT manager, what exactly does all this mean and how does one get there?

Where is this proverbial "cloud" and how do I get there?

Some CIOs tend to view the cloud as outsourcing large pools of infrastructure as a utility (after all, CIO stands for “Can I Outsource?”, right?).  But does one have to outsource to a third party’s capacity to apply cloud computing concepts, or can it be pursued within existing infrastructures?  On the other hand, if an organization has already adopted virtualization (to some extent), some wonder why they should be looking at cloud computing at all.

Many definitions of cloud computing I’ve heard have an element of truth to them, but they are often incomplete, leave people wanting more, and fail to truly capture the essence of cloud computing.  What if we could simplify our definition of cloud computing as being built upon three key pillars?

CLOUD COMPUTING IS….

So what are the three key ingredients in cloud computing?  Abstraction, Automation and Agility.

Let’s take a closer look at each of these three ingredients of cloud computing.  After discussing these three elements we’ll circle back and address some of the original questions about the different models in which we find cloud computing being used.

ABSTRACTION

Abstraction is essentially liberating workloads and applications from the physical boundaries of server hardware.  In the past we had servers which would host only one application (hence our focus at times on servers and not applications).  Virtualization provides this abstraction by separating workloads from server hardware, eliminating hardware boundaries, dependencies and providing workload mobility.  That mobility is even being extended to moving workloads from internal data centers to service providers and vice versa.  Today the virtual machine defines the boundary, but in the future as the OS becomes less relevant, we might see “virtual containers” defining our workloads on PaaS (Platform as a Service) infrastructures.

The original motivation for virtualization was a CAPEX (capital expense) play — fewer servers, ports, space, electricity, etc.  As virtualization matured many quickly found that the management of virtual machines was significantly easier, and that there was a new way of doing many tasks, which could in turn reduce OPEX (operating expenses).  To get here, we need to work towards 100% virtualization and the technical barriers have been all but smashed with today’s technology (more on this later).

Put simply, abstraction enables greater resource utilization and can be used with concepts like multi-tenancy to provide greater economies of scale than were previously possible.

There’s also another kind of abstraction taking place that’s causing a wave of disruption — the abstraction of the application away from the traditional PC.  The combination of SaaS, application virtualization, VDI and the proliferation of mobile devices (tablets and smartphones) is driving this trend.  Applications no longer need to be anchored to physical PCs as users want to access their applications and their data from any device and any place.  For more on this topic, see this earlier post on The New Application Paradigm.

Both of these types of abstraction are removing traditional boundaries and therefore changing the ways in which we manage infrastructure and present applications to endpoints.

And in looking at server virtualization, we see that the virtualization stack also provides a unifying management layer, which can serve as the foundation for so much more…

AUTOMATION

Where abstraction provides the foundation for the new paradigm, automation builds on top of that foundation to provide exciting opportunities for organizations to reduce OPEX and promote agility.

Let’s start with the basics.  Thanks to encapsulation (provided by virtualization), new possibilities have emerged with replication, disaster recovery, and even the backup and recovery process itself.  There’s agent-less monitoring of many core performance metrics, scripting across VMs and hosts, virtual network switches and firewalls, and of course, near-instant provisioning from templates.  Such levels of automation were not easily accessible before the abstraction layer of virtualization was introduced.
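To make the “near-instant provisioning from templates” point concrete, here is a rough sketch of what template-based provisioning can look like when scripted against vCenter using the pyVmomi SDK (VMware’s Python bindings for the vSphere API).  The vCenter hostname, credentials, and template/VM names are placeholders, and a production version would add error handling and proper certificate validation; treat this as an illustration of the idea rather than a finished tool.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab vCenter
ctx = ssl._create_unverified_context()          # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Locate the source VM/template by name using a container view
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
template = next(vm for vm in view.view if vm.name == "win2008r2-template")

# Clone it into a new VM in the same folder, powered on.
# If the source is a true vCenter template, also set clone_spec.location.pool
# to a resource pool before cloning.
clone_spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=True)
task = template.Clone(folder=template.parent, name="app-vm-01", spec=clone_spec)
print("Clone task started:", task.info.key)

Disconnect(si)
```

Layer a workflow engine or a self-service portal on top of a handful of calls like these and you are most of the way to the catalog-driven provisioning described next.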

How fluffy did they want their cloud?

Now we have products such as VMware’s vCloud Director which can take all of the elements of an n-tier application, and quickly provision them — including firewall rules and even with multi-tenancy.  Imagine deploying an entire N-tier application of multiple virtual machines, complete with network config and firewalls with just a few clicks.  Now add to that the concept of a self-service catalog, where business units can request resources for an application over a web form, and upon approval the application is automatically provisioned consistent with the provided specifications, while conforming to existing IT standards and compliance audits.
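As a purely illustrative sketch (and not vCloud Director’s actual API), the idea of a self-service catalog request that is checked against IT standards and an approval step before anything gets deployed could be modeled as simply as this; the catalog entries, quota, and the provision_vapp stub are all hypothetical names.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical catalog of approved, pre-defined application blueprints
CATALOG = {
    "web-3tier-small": {"vms": 4, "needs_approval": False},
    "web-3tier-pci":   {"vms": 8, "needs_approval": True},
}

TENANT_QUOTA_VMS = 20   # hypothetical per-tenant policy limit

@dataclass
class Request:
    tenant: str
    blueprint: str
    approved_by: Optional[str] = None

def provision_vapp(tenant: str, blueprint: str, entry: dict) -> None:
    # Stub: a real version would call the orchestration layer
    # (e.g. a vCloud Director or vCenter API) to deploy the n-tier vApp.
    print(f"Deploying {entry['vms']} VMs for {tenant} from blueprint '{blueprint}'")

def handle_request(req: Request, tenant_vm_count: int) -> str:
    """Validate a self-service request against the catalog and policy, then provision."""
    entry = CATALOG.get(req.blueprint)
    if entry is None:
        return "rejected: not a standard catalog item"
    if tenant_vm_count + entry["vms"] > TENANT_QUOTA_VMS:
        return "rejected: tenant quota exceeded"
    if entry["needs_approval"] and not req.approved_by:
        return "pending: waiting for approval"
    provision_vapp(req.tenant, req.blueprint, entry)
    return "provisioned"

# Example: a PCI blueprint request that has already been approved
print(handle_request(Request("marketing", "web-3tier-pci", approved_by="it-ops"),
                     tenant_vm_count=6))
```

The point is not the code itself but the shift it represents: the standards, quotas and approvals live in the workflow, so provisioning stays fast without bypassing IT governance.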

Those are just some of many angles to automation.  Another is orchestration of converged infrastructure (of which the Vblock is one example).  Rather than trying to manage the core infrastructure elements of compute, storage and networking as independent silos as many do today, we can instead deploy converged infrastructure with orchestration tools that can unify and transcend across the silos, allowing infrastructure to be managed and provisioned more like a singularity.  And many of these orchestration tools can plug directly into the virtualization stack (i.e. vCloud Director) for even more integration.

Now of course there are obstacles to this automation, which can include “PSP syndrome” (an adherence to Physical Server Processes), heavily siloed organizational structures, integration challenges and even multiple hypervisors.

There are many more angles to automation we haven’t touched on yet, but the key is that abstraction enables new opportunities for automation — and that automation can then be used to pursue….

AGILITY

Why does VMware say that they want infrastructure to be transparent?  Let’s answer that question with another question:  does the business care about storage, network or server technologies?  At the end of the day the business cares mostly about two primary deliverables from IT — the health of their applications (as measured by uptime and other performance metrics) and the time it takes to deploy/provision them.

Success means the rapid execution of business strategy, and time is a HUGE component of this.  There’s competition, market opportunities, patents and legal issues, first-mover advantage and so many reasons why time is…well…money.

Agility in the clouds

CAPEX and OPEX savings can have a positive impact on budgets, but when you get to a place where you can get major projects done in weeks rather than quarters, that’s a profound paradigm shift which can often be of more value to an organization than just CAPEX and OPEX reductions.

Imagine that the business wants to build a 200-server n-tier application to promote a new initiative and that it has to be PCI compliant.  First you have to have the infrastructure (compute, storage, networking) to rapidly provision, and then you need to work with the application, networking and security teams to provision the VLANs and firewall rules.  If you’ve ever worked in an IT shop which is heavily siloed and uses physical server processes, the technology might be obsolete by the time you finished the solution.  The back and forth between departments and ticket processes just to get the VLANs or firewall rules correctly set for the application, or to make any similar adjustments, can slow such a project to a crawl.

However if you can successfully leverage abstraction and automation in your IT department, you can get to the point where you can reduce the time to provide solutions to the business by months in many cases.  It’s being done today, and that’s one of the biggest reasons why there’s so much excitement in not just IT circles, but business and leadership circles, about cloud computing.

In an earlier post I introduced the concept of a value triangle which is illustrated below.  The organization begins their journey by using virtualization (which provides abstraction) to achieve CAPEX benefits.  This provides the foundation for automation which enables opportunities for additional OPEX benefits, and of course all of this enables the opportunity for the organization to capture agility benefits which could potentially be of far greater economic value than CAPEX and OPEX (cost-center) combined.

The value of cloud computing is so profound that we all should be doing it this way and should be just calling it “computing”.  But we aren’t quite there yet, hence the term “cloud computing”.

The bottom line is that if you can successfully execute on abstraction and then automation, you can begin to align your IT services to the needs of the business and work with the business with a partner-minded relationship, providing the agility to rapidly execute their business plan.

WHAT SHAPE IS YOUR CLOUD?

Clouds can come in many shapes and sizes.  Some are internal and some are outsourced.  Then there’s the whole private/public/hybrid cloud discussion (complete with academic debate), and let’s not forget PaaS, IaaS and SaaS.  Which of these “shapes” should you have in your cloud and what should it look like?  Perhaps someday there will be ITIL standards on exactly what form these elements ought to manifest within an ISO 9000 compliant cloud design (no, not really — I just said that to get Stevie Chambers all worked up).  So where do I start working on this cloud thing and which strategy should I use?

This unicorn needs a rainbow

While these are often relevant discussions, it’s helpful to set the “debates” and buzzwords aside and focus instead on the core elements of cloud computing — abstraction, automation and agility, so that we can better understand the value proposition and consider the best methods to employ toward that end.

Do you need to outsource to a third party to leverage cloud computing?  Can you leverage cloud computing in your existing datacenter?  The best path will vary from one organization to the next, but one must first embark on the cloud journey in order to reap the benefits.

I’ve seen at least a half dozen business or IT articles over the past month alone that either said directly or strongly implied that cloud computing means the workloads have to be outsourced to some third party as a utility service.  Cloud computing resources can certainly be outsourced to a third party, but they don’t have to be.  And even if you did make the decision to outsource, it’s no magic bullet — your processes and organization still need to evolve to the point where you can achieve the levels of automation, and therefore agility, that you seek.

In the long run, I tend to think that more and more workloads will indeed be moved to third-party service providers (an upcoming post will deal with this), but for the time being the best path for many organizations may very well be to pursue cloud computing within their own existing datacenters.  You must embark on the journey by first achieving abstraction, and then re-engineer your processes and organizational model as you work to achieve greater levels of automation and agility.

THE CLOUD JOURNEY

The cloud journey — which is almost always a marathon — begins with virtualization.  In the past we’ve had to battle “server huggers” and many other barriers to the adoption of virtualization, but especially with the release of vSphere 5, those technical barriers are gone for the overwhelming majority of workloads.  If a given environment can’t virtualize an application effectively today, it’s more than likely a limitation of the architecture or skill sets involved.  The proof is in the results: organizations are having success and creating case studies on virtualizing their ERP systems and database tiers (see the performance section of this blog for just a few examples).

Sometimes we encounter “server huggers” who still want their application to have the familiar and comfortable boundaries of a physical server they can identify.  And sometimes in our datacenters we build “Frankenstructures”: excessively complex infrastructures that we have invested great time and engineering expense into and have become too attached to, even as the engineering burden begins to weigh us down (another area where converged infrastructure can help).  Server huggers, Frankenstructures, physical-server processes and siloed org structures can all be obstacles on the cloud journey.

Clouds can have different evolutionary stages, and it takes a journey — perhaps a marathon — to reach the “agility zone”.  Let’s take a step back from the private/public and PaaS/IaaS debates and start by focusing on just the core elements of abstraction, automation, and agility.  A few key points to summarize:

  • Clouds have many shapes and forms, but they all rely on abstraction and automation to enable the potential for agility.
  • It is not a requirement to outsource or to move anything to “the cloud”.  You can begin the journey in your own datacenter(s) by first pursuing abstraction and then automation.
  • Cloud isn’t just about technology.  It’s also about organizational structure and processes.  Re-engineer your physical-server-minded processes, refresh your skill sets, and knock down your organizational silos.
  • Virtualization alone isn’t enough.  Cloud computing requires the effective use of automation (at many different levels) to reduce provisioning and service delivery times (a small example of what that can look like follows below).
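
To make that last point a little more concrete, here is a minimal sketch of scripted provisioning using VMware’s PowerCLI.  It is purely illustrative: the vCenter address, template, host and datastore names are placeholders of my own, not anything specific to this post, and a real environment would wrap this sort of logic in an orchestration or self-service workflow.

```powershell
# Minimal sketch: deploy a VM from a template with VMware PowerCLI.
# All names below (vCenter server, template, host, datastore) are placeholders.
Connect-VIServer -Server vcenter.lab.local

# One scripted, repeatable build step instead of a manual build ticket
New-VM -Name "web01" `
       -Template (Get-Template -Name "w2k8r2-template") `
       -VMHost (Get-VMHost -Name "esx01.lab.local") `
       -Datastore (Get-Datastore -Name "ds01")
```

A single New-VM call obviously isn’t a private cloud, but once builds are expressed as repeatable scripts rather than tickets, layering approval workflows and self-service portals on top becomes possible.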

Building On #AAACloud

This is sort of a “foundation” post, and I’m hoping that the conversation can be continued and expanded using the Twitter hashtag #AAACloud, as well as through some future content I hope to produce to further build on this discussion.  This might include a video post or two, as well as SlideRocket presentations on “What comes after virtualization?” and on the value of converged infrastructure.  There may also be some follow-up blog posts on topics such as:

1) Using Multiple Hypervisors
2) Private Datacenters versus Outsourced Service Providers
3) Legacy Physical Processes
4) Organizational Models

…and I’m sure more will come to mind.  Join the conversation on Twitter with #AAACloud.

Quick Thoughts on the Kindle Fire & App Compatibility

First things first: anyone comparing the Kindle Fire to the iPad will not only be disappointed, but will also be missing the point.  One likely scenario is that the iPad becomes the high-end tablet of choice and the Kindle Fire becomes the low-end tablet of choice, squeezing out all the other tablets in the middle.

The intent of the Kindle Fire was never to challenge the iPad but to create a low-cost consumption vehicle for Amazon’s huge storefront: books (Kindle), music, digital movies, and of course general-purpose shopping.  In order to pursue this goal, Amazon modified the Android OS to have the user’s identity revolve around an Amazon account rather than a Google account.  This is one major reason why Amazon built its own Appstore for Android (Google’s Market app requires a Google login).

This shift to Amazon credentials in turn made the Kindle Fire incompatible with any apps which require a Google login, such as Gmail, Market, Reader, Voice and more.  Apparently Google Maps will work (with some manual effort) as it does not require a Google login.  So yes, Amazon’s strategic direction here did result in “breaking” some Google app functionality, but let’s not forget the other side of the coin, which is that it was Google who “baked in” some of the Google account functionality into the Android OS, thus creating this situation.  While this is all a disappointment, I don’t see the lack of these Google apps being a big obstacle to the Kindle Fire’s success in the lower end of the tablet market.

My 10-year-old daughter loves both reading and Angry Birds.  With the Kindle Fire, as a parent I can provide her with Kindle versions of her favorite books at lower cost, while she can do other things on the tablet ranging from reading other periodicals to streaming TV shows, researching topics on the Internet, and playing games.  (Side note: tablets offer little in the way of parental controls — there’s no replacement for monitoring your child’s activities and providing guidance when using online services.)  At the other end of the demographic spectrum, I could also see the Kindle Fire being a good fit for my parents.

The Kindle Fire is no iPad, but it was never intended to be one.  It can be a solid low-end media consumption device for the millions of consumers for whom an iPad might be too much but the Kindle Fire is within reach.

How to Disable Complex Passwords In Your Lab with 2008 R2

Many of you who have a home virtual lab will have an Active Directory domain as part of it.  While complex passwords can improve security in a production environment, you might not want this complexity in your home lab.  Where and how to change this isn’t always intuitive, so I thought I’d share a quick tip on disabling it.

In Windows 2003 there used to be an Administrative Tool called Domain Security Policy where you could quickly modify these settings.  In 2008 this tool was removed, leaving the Local Security Policy tool, which allows you to view all the password policies but not to modify them.  The Local Security Policy tool was never intended to be used for domain policy, but since you can view domain policies (read-only) with it, it can seem a bit confusing, leaving some to believe that they don’t have access.  Fortunately, there’s a very simple workaround — use the domain-based Group Policy Management Console (GPMC).

In Administrative Tools, select “Group Policy Management”, edit your Default Domain Policy to your preferences and you’re set.  A quick step-by-step:

1)  Under Administrative Tools on your domain controller, run Group Policy Management

2)  Drill down to expand your domain and select it.  Then in the right pane, right-click the Default Domain Policy and select “Edit”


3)  A new window opens with the Group Policy editor.  Expand “Computer Configuration” > “Policies” > “Windows Settings” > “Security Settings” > “Account Policies” > “Password Policy”.

4)  Set “Password must meet complexity requirements” to Disabled (and adjust any other settings, such as minimum password length, to your preferences), then close the editor.


The settings are saved as soon as you make the change in the editor, but in order for them to take effect you must either reboot, wait for the next automatic Group Policy refresh, or force one manually by running “gpupdate /force” at a command prompt.

At this point the domain password policy should match your preferences for your lab, and you can come back to the same Default Domain Policy at any time to view or adjust the rest of the password policies.
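
If you have the Active Directory module for Windows PowerShell available (it ships with 2008 R2 and is present on a domain controller), here’s a minimal sketch for forcing the refresh and confirming that the effective domain policy actually changed.  Treat it as an optional, illustrative check rather than a required step.

```powershell
# Minimal sketch: force a Group Policy refresh and check the effective
# domain password policy. Assumes the Active Directory PowerShell module
# is installed (as it is on a 2008 R2 domain controller) and that you are
# running this as a domain administrator.
Import-Module ActiveDirectory

gpupdate /force   # apply the edited Default Domain Policy right away

# After the refresh, ComplexityEnabled should report False
Get-ADDefaultDomainPasswordPolicy |
    Select-Object ComplexityEnabled, MinPasswordLength, MaxPasswordAge
```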

Free UBERAlign Tool for Identifying and Correcting VM Disk Alignment Issues

One of the most popular posts on this blog has been this post on disk alignment, with almost 6,500 views.  In that post the disk alignment problem is described and some remedies are discussed.  I wanted everyone to be aware of a new tool by Nick Weaver (@lynxbat) that is designed to both identify and remediate these issues.

Nick’s post on his new UBERAlign tool goes through all the details, and it looks awesome.  It will scan and report on your VM alignment and can remediate up to 6 VMs concurrently.  Do yourself a favor and go check it out.
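
As a related aside (and not part of Nick’s tool), if you just want a quick look at whether the partitions inside a Windows guest appear aligned, a rough PowerShell sketch like the one below can help.  The 4 KB boundary used here is only one common check; your storage vendor may recommend validating against a different chunk size.

```powershell
# Rough sketch: list partition starting offsets inside a Windows guest.
# A partition is commonly considered aligned when StartingOffset divides
# evenly by the array's chunk size (4 KB is used here only as an example).
Get-WmiObject Win32_DiskPartition |
    Select-Object Name, Index,
        @{Name = 'StartingOffsetKB'; Expression = { $_.StartingOffset / 1KB }},
        @{Name = 'AlignedTo4K';      Expression = { ($_.StartingOffset % 4096) -eq 0 }}
```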