Why Microsoft?

This is a question that can be explored from many different angles, but I’d like to focus on it from not JUST a virtualization perspective, and not JUST a cloud perspective, and not JUST from my own perspective as a vExpert joining Microsoft, but a more holistic perspective which considers all of this, as well

Top 6 Features of vSphere 6

This changes things. It sounds cliché to say “this is our best release ever” because in a sense the newest release is usually the most evolved.  However, as a four-year VMware vExpert, I do think that there is something special about this one.  This is a much more significant jump than going from 4.x

vSphere 6.0 Public Beta — Sign Up to Learn What’s New

Yesterday, VMware announced the public availability of vSphere 6.0 Beta 2.  I can’t tell you what’s all in it due to the NDA, but you can still register for the beta yourself, read about what’s new and download the code for your home lab. There’s some pretty exciting stuff being added to vSphere 6.0 in

Will VMware Start Selling Hardware? Meet MARVIN

The Register is running a story that VMware is preparing to launch a line of hardware servers.

VMware Pursues SDN With Upcoming NSX Offering

Earlier this week VMware announced VMware NSX – an upcoming offering that takes network virtualization to new levels. NSX appears to be somewhat of a fusion between Nicira’s SDN technology (acquired last year by VMware) and vCloud Network and Security (vCNS – formerly known as vShield App and Edge). Since I already had intentions to

What Really Is Cloud Computing? (Triple-A Cloud)

What is cloud computing?  Ask a consumer, a CIO, and a salesman and you’ll likely get widely varying responses. The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores of similar services (just keep in mind that several such services existed before it

Agility Part 2 — The Evolution of Value in the Private Cloud

When an IT project is commissioned it can be backed by a number of different statements such as: “It will reduce our TCO” “This is a strategic initiative” “The ROI is compelling” “There’s funds in the budget” “Our competitors are doing it” Some of these are better reasons than others, but here’s a question.  Imagine a

Stacks, the Vblock and Value — A Chat with EMC’s Chad Sakac

…I reached out to EMC’s Chad Sakac to gain more insights from his perspective on how the various stacks…well…stacked up….

My vResume – Where I’ve Been and Where I’d Like to Be

In my current career search, I was advised that it might not be a bad idea to publish my resume in a blog post, but simply slapping my resume online and saying “hey, look at me!  I’m available!” just didn’t feel right.  Instead I opted for a different approach to share my experiences in a way which could potentially generate more interest and discussion beyond just me personally.

I’ll start by talking through some specific work history and get to the “good stuff” in the later sections, where I’ll expand on some specific experiences – sometimes with a cloud focus – and delve into some broader topics, including education, certification, skill sets, how I see the IT market and what I’d like to be doing.

It may take some time to get through the experience, but I think relating some of these specific experiences will prove useful later on.

EXPERIENCE – The ’90s

In 1992 I graduated from college with a B.A. double-major in Business Administration and Political Science, with minors in economics and pre-law.  You’re probably wondering “where’s the tech?”  There is none, except a FORTRAN class I took in my freshman year.  I did have experience with computers (BASIC on a Commodore 64, etc.) and technology, but up to that point it was never my primary focus.

Companies were simply not hiring college grads in ’92 so I took some odd jobs until I found an opportunity.  A software company just miles from where I lived at the time had developed the first 32-bit TCP/IP stack for Windows and also the first 32-bit VXD (386 Enhanced Mode driver) implementation of a TCP/IP stack on Windows 3.1.  I provided technical support on the TCP/IP Suite and also participated in business development which included running trade show booths with our sales manager in both NYC and LA (in fact the LA trip was right after the ’94 Northridge quake – we got to see crumbled freeways and feel some aftershocks).

It’s also interesting to think back on what TCP/IP was at this time.  It was mostly text-based utilities, and NCSA Mosaic – one of the first graphical web browsers and a precursor to Netscape – was released during my tenure there.

Over the next several years I took a variety of in-house and consulting positions which included designing and deploying directories (NetWare NDS / Microsoft Active Directory), designing and deploying new mail environments and performing mail migrations, deploying Citrix and other Windows server technologies (clusters, SQL, web) and storage solutions in many different capacities.

I never had any formal training (nor experience initially) in NetWare, Windows or any of these things, but I was hungry and I taught myself by watching, asking, reading and doing wherever possible.  Some of the companies that I either worked or consulted for included well-known brands such as Kohler, Miller Brewing, Harley Davidson, Lands’ End, Milwaukee Public Schools, a large international insurance company, and Wisconsin’s largest hospital.  I also participated in the Windows 2000 product launch by presenting a break-out session on Active Directory and security in Milwaukee.

Perhaps the biggest impact from this phase of my career (besides teaching myself NetWare) was the opportunity to work with Active Directory.  I had the opportunity to begin working on a large AD project while Windows 2000 was still in beta.  Armed with nothing but a book and my NDS (NetWare Directory Services) experience, I set out to teach myself Active Directory and design a migration for a large customer while working with a beta version of Windows 2000 and Active Directory.  I could have been intimidated by accepting responsibility for something I’d never done before, but no one else had done it either!

After this project I would never again be intimidated by working on something in which I didn’t have strong experience.  I knew that given a modest amount of time and resources I could teach myself anything without formal training – and I did exactly that.  New technology would never intimidate me, and I welcomed each opportunity to jump outside of my comfort zone to learn more and build on my strengths.

EXPERIENCE – The ’00s

In early 2001 my employer had suddenly gone bankrupt after merging with a large web development firm, and I quickly found an opportunity with Dell.  At Dell I would consult on Active Directory and Exchange designs for organizations including Harvard University (and other schools), The District of Columbia, The Navy-Marine Corps Intranet (NMCI) and a large multi-national.

The District of Columbia was especially interesting as there were 69 different “NT kingdoms” for each of the agencies (Transportation, Health and Human Services, etc.).  Now consider that each of these agencies had been accustomed to having complete autonomy, and some had data privacy requirements including HIPAA and more.  We had to come up with an Active Directory design that would address everyone’s concerns regarding a shared environment and “sell” it to these agencies and the CIO.  This was a fascinating and very rewarding project – learning to understand the personal and cultural objections to a paradigm shift and addressing both the political and technical issues.

For one large multinational, we ended up literally traveling around the world to understand the different business units and perform a discovery on how they were organized as well as the process work flows.  This would be critical to providing a design foundation that would enable technology and policy to effectively map to each business unit (more on this later).

The Navy-Marine Corps Intranet was, I was told, the largest Active Directory deployment at that time, with over 100,000 security principals in a very strict and secure environment.  I spent almost a year on this fascinating project, ranging from supporting Dell-EMC storage to AD/Exchange services and more.

In 2003 I began a new adventure at the US headquarters of a Global 500 company.  When I started, the organization was running mostly NetWare, and there was a desire to move to Active Directory and to migrate file shares to clustered Windows servers.  Over time, I would get heavily involved in just about everything: enterprise backup/DR, security, SQL, and building an enterprise monitoring solution with HP OpenView.  I also played a lead role in administering our SAN and switches, along with researching a variety of storage technologies (dedupe, block auto-tiering, caching and more).

By 2004 I was well aware of VMware Workstation, but had not had the opportunity to work with it yet.  I began working with VMware GSX shortly thereafter, which proved remarkably useful for test/dev and sandbox environments.  I spent much of the next year making my case – including unsolicited ROI presentations – to begin looking at ESX.  Finally an opportunity arose, and the hardware was available to begin building an ESX 3 cluster.  Over the course of the next year we would grow to about 150 virtual machines, including several mission-critical servers.  The rate of adoption was slow as there was much resistance from various groups to this “new virtualization thing”, but slowly we made steady gains and proved that the environment could suit mission-critical workloads.

Following an acquisition and management change, I would be assigned to lead a team to figure out how to move 95% of our x86 environment to a datacenter in a different region of the country.  We would end up P2V-ing around 250 physical servers, and then replicating and re-IPing (changing the IP address of) each of the ~400 servers to a new ESX 3.5 cluster we designed and built in the new data center.  We used Quest vReplicator for the replication (SRM was not an option for us), and the technology was the easy part of this project.  The hard part?  Changing each IP address (I had inquired about stretched VLANs — if only VXLAN had been available then 🙂 ) and working with the application owners on proper discovery and change logistics.

Oftentimes we’d meet with an application owner and ask them if they could predict what would happen to their application if the server IP changed.  Crickets were usually the first response.  We’d end up digging into code and looking for server-side includes, firewall rules and much more.

Another application consisted of a mix of NetWare 5 servers, NT4 domain controllers and Windows NT4 Terminal Server with Citrix.  And this all connected back to both mainframe and HP-UX components.  No one in the org believed that we’d be able to virtualize the NetWare and NT servers, change the IPs and still have everything work, but somehow they allowed me to try.  Hearing that lack of faith was all the motivation I needed, and I successfully virtualized all the x86 components of this application (about a dozen servers) and would later re-IP each of them.

In fact, no manager believed that these 400+ servers could effectively be virtualized and relocated, but we did exactly that (including designing and building a new vSphere infrastructure) all within about 9 months.

Later our environment would grow to well over 500 VMs, and I would lead projects to upgrade to vSphere 4 and ESXi.  I became fascinated with the IaaS (Infrastructure as a Service) concept and looked for more ways to leverage our virtualization foundation with things like VDI, SRM, vCloud Director (vCD), a service catalog and more to provide even more value and agility benefits to the organization.  As I’ve posted in detail in my blog over the past year, I feel that there is a huge transformation taking place in IT, as well as new opportunities to provide agility and value.

EDUCATION

In 2005 a corporate tuition plan became available to me and I decided to take advantage of it.  At that time, I saw little value in renewing my MCSE, and VMware certifications hadn’t been introduced yet.  I was always interested in the business aspect of technology, so why not pursue a Master of Business Administration (MBA)?

At this point in my career I probably would have been better off pursuing VMware VCDX certification (had it been available back then) as far as career marketability goes, but I still think an MBA is incredibly important and useful – especially as the cloud computing model becomes more commonplace.

What’s so great about an MBA in a technical field?  Quite a bit in my opinion.  For starters, having that foundation of business principles and case studies will serve as a great reference to help you gain better insights into what the business may be trying to accomplish, and what its challenges, concerns and motivations may be (marketing, competition, process improvement, regulation, etc.).  How many of us in IT have had to do budgets, ROI justifications, and/or solve (and then present) a complex array of variables and probabilities?  Well, an MBA can help develop those skills too.

And what about cloud computing?  As I’ve said before (as have others), the biggest obstacles are often not the technology, but the people, processes and culture.  How do you change these things?  How do you get your server, storage, networking and application SMEs (subject matter experts) out of the comfort zone of their silos to take responsibility within a new technology paradigm where complex problems now span multiple silos, layers and managerial boundaries?  How do you construct your organizational chart and change your formal and informal culture to facilitate this new paradigm?  And if the goal is to empower the business with agility, how do you align your IT efforts and processes to the business in order to accomplish this effectively?  Those who haven’t pursued an MBA or haven’t had similar experiences or like-minded reading may be at a disadvantage in dealing with such questions.  In my MBA I chose a Leadership elective path — which focuses on changing an organization’s culture, as well as general leadership.

As long as I’m on the topic of skills and marketability, a few more points.  In the past, one could become an expert within a given silo (i.e. networking, storage, etc.) and that might be good enough.  But now we have infrastructure and applications which often transcend all of these layers with complex interactions.  In the near future we will likely see more advanced APIs in the OS and application stack that will enable more direct interactions with everything from the hypervisor to networking hardware.  The most valuable IT employees, in my opinion, will be the ones who may not have the most knowledge within any one silo, but have a good enough background in each of them (network, storage, compute, application) that they can design, interact and troubleshoot across all of these layers.  This is exactly where I’ve tried to position myself – with a core focus on virtualization but also gaining a deep understanding of and experience in the peripheral technologies like storage, networking, and the application disciplines.  And of course those that can do this with a strong sense of business/leadership principles and awareness will be some of the most valuable employees in my opinion (but hey, I’m biased 🙂 )

When I was doing AD designs I would advise clients that the OU (organizational unit) design had to take into consideration not just the logical org chart, but also work flows, processes and worker roles, in order to provide the most usable foundation for policy (GPO) on computers, groups and users.  In much the same way, I think it is critical to take this information into consideration for “cloud” elements as well, ranging from SaaS to your virtualization stack and the building blocks on top of that stack – VDI, vCD, service catalogs and even DR.  Performing a discovery to build an understanding of how work gets done in an organization (and especially what it wants to accomplish) is, in my opinion, essential for such designs.

By the way, I should probably mention that I haven’t quite completed my MBA yet.  While I completed all of the core courses with a 4.0 GPA, I’ve only completed one of the three Leadership electives and have two more to go.  I hope to have the opportunity to complete this program should I be fortunate enough to obtain tuition assistance in the future.

CERTIFICATION

Over a decade ago I achieved MCSE certification (as well as a few others).  After I had some major AD and Exchange accomplishments under my belt, I was faced with a situation where I would have to spend time and money to maintain my certification.  By 2004, when I was up for renewal, I felt that my experience spoke far more loudly than any multiple-choice test could, so I chose to spend my time focusing on work projects instead and allowed this certification to lapse.

Today I would be very interested in VMware certification, but the cost in terms of both money and time is a bit out of reach at the moment.  You are required to purchase a week-long class as well as the exam, which adds up to a considerable monetary investment.  Given my circumstances and means, once again I felt that my experience with VMware spoke more loudly than any multiple-choice test, and as a result I’ve not pursued any VMware certifications at this time.  However, partly as a result of this blog, I was awarded the vExpert designation from VMware earlier this year — and I am very proud that VMware chose to recognize my contributions with this award and designation.

My long-term certification goal, however, is VCDX (VMware Certified Design Expert) certification, which I hope to have the opportunity to pursue at some point.

WHAT DO I WANT TO DO?

My perfect job – if there were such a thing – would likely include the ability to design, implement and evangelize solutions.  Design, because it is essential to any solution and I love the challenges it presents.  Implementation, because I like to get my hands on the technology when I can and build things.  Evangelism, because often a little salesmanship is needed before the optimal solution can be provided.

But what kind of solutions would we provide?  Not only am I fascinated with storage, infrastructure and the vSphere virtualization platform, but I’m also very interested in complementary technologies ranging from VDI, SRM, vCloud Director and vShield to storage/networking technologies and much more.  One thing that has been lacking for me recently is access to technology – the ability to work on some of the latest storage tech, and/or work with several of these solutions in a lab (a bit more capable than my single-spindle PC).

That’s my “perfect” job of course.  At the end of the day what I really want to do is tackle complex problems, and work with passionate teams, to design/build/engineer solutions that provide value – and possibly a transformative value in the form of agility.

AND BY THE WAY…I’M AVAILABLE!

I’ll be making more posts here in the near future, including at least one video post.  Recently, I did a presentation on converged infrastructure and I’m really excited about building a more detailed presentation based on that, and also a second one that goes beyond mere virtualization and presents some cloud concepts in ways I haven’t seen addressed before.

I hope to get some of those posts/videos/presentations made over the next few weeks.  In the event that you’re a recruiter or hiring manager and you’d like to contact me about a potential opportunity that you may be aware of, you just need to combine “blueshiftblog” with gmail.com (I try to avoid typing the actual address to stop various spambots).

Thanks for taking the time to read my “vResume” and I hope it provided some valuable discussion points beyond just talking about “me”.  Thanks!

vCenter Server Heartbeat 6.3 — Experiences and Recommendations

I had the opportunity to work with vCenter Server Heartbeat earlier this year and I wanted to share my experiences with the product – ranging from why it might be needed to what it can provide.

vCenter Server Availability

Is vCenter Server a mission-critical application for which you cannot afford downtime?  It depends on the environment, but for many organizations the availability of vCenter Server is becoming more and more critical.  For starters, you lose the single point of management for all your ESX hosts, DRS and potentially quite a bit more.  Let’s take a look at some specific impacts of vCenter Server being unavailable.

  • VM and Host Management – Virtual machines (as well as ESX hosts) would need to be managed directly from each individual ESX host – which can be time-consuming if you don’t know which VM is on which host.  In addition, you would be unable to provision new VMs from a template.
  • Performance and Monitoring – vCenter is constantly collecting performance metrics from VMs and hosts, as well as evaluating alarm criteria.  Without vCenter, no metrics are captured for analysis.  In addition, several third-party applications such as Quest vFoglight also rely on vCenter Server for data collection.
  • vMotion – vMotion – including Storage vMotion – is not possible without an active vCenter server.
  • VMware HA – The host agents still provide HA failover without vCenter; however, admission control is no longer enforced, so a cluster could become over-populated while vCenter Server is unavailable.
  • VMware DRS – Unavailable – workload imbalances will not be corrected, which could impact performance.
  • Backups – Several backup products rely on vCenter Server for their functionality.
  • VMware View – Unable to provision new desktops
  • vCloud Director – Unable to allocate resources or provision new VMs

When you sum up the above list, it’s pretty clear that a vCenter Server outage will affect operations, and could also potentially affect application performance and availability as well.

Enter vCenter Heartbeat

VMware chose to partner with Neverfail to provide a high-availability solution for vCenter Server.  I had experience with Neverfail in the past supporting BES (BlackBerry) servers and I was familiar with the benefits and challenges, so I was eager to see how this would be addressed in vCenter Server Heartbeat 6.3.

The Neverfail engine is replication-based.  Basically you build a second server and install the application you are protecting (vCenter Server in this case) on it.  Then you configure a dedicated NIC on each server to be used as the “Heartbeat Channel”, which will carry both monitoring and replication traffic as illustrated below.

vCenter Server Heartbeat will monitor both vCenter Server and a local SQL database, checking application health while constantly replicating relevant files and registry keys between the hosts.  A packet filter driver is installed on each server; it blocks all traffic to the application’s IP address while that server is passive, and leaves the active server unblocked.  This allows both servers to be configured with the same identity – including the same IP – at the same time, and vCenter Server Heartbeat manages this during failover.

(NOTE:  In a WAN configuration, different IP addresses can be used on the vCenter Servers)
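
To make the active/passive mechanics a bit more concrete, here is a minimal Python sketch of the general idea: the passive node watches a dedicated heartbeat channel and only assumes the active role (which, in the real product, is where the packet filter would unblock the shared IP and the protected services would start) once the channel goes quiet.  This is emphatically not the Neverfail engine itself; the port number, threshold and function names are illustrative assumptions only.

```python
# Toy sketch of the active/passive heartbeat concept, NOT the actual
# Neverfail / vCenter Server Heartbeat implementation. Port, threshold and
# message format are illustrative assumptions only.
import socket
import time

HEARTBEAT_PORT = 57348        # hypothetical port on the dedicated heartbeat NIC
FAILOVER_THRESHOLD = 10.0     # seconds of silence before the passive node takes over

def promote_to_active() -> None:
    # In the real product, this is where the packet filter would unblock the
    # shared vCenter IP on this node and the protected services would start.
    print("No heartbeat received, assuming the active role")

def run_passive_monitor(listen_ip: str) -> None:
    """Listen for UDP heartbeats on the heartbeat channel; promote on silence."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((listen_ip, HEARTBEAT_PORT))
    sock.settimeout(1.0)
    last_heartbeat = time.monotonic()

    while True:
        try:
            data, _ = sock.recvfrom(1024)
            if data == b"HEARTBEAT":
                last_heartbeat = time.monotonic()
        except socket.timeout:
            pass                              # no packet this second; keep checking
        if time.monotonic() - last_heartbeat > FAILOVER_THRESHOLD:
            promote_to_active()
            break

# Example usage on the passive node's heartbeat NIC: run_passive_monitor("10.99.99.2")
```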

Installing vCenter Server Heartbeat

When choosing your servers, you can choose between two physical servers, two virtual servers, or one of each.  If you choose to have both servers virtual, you have the option of using vCenter’s clone VM function – simply clone your working vCenter Server VM and then use the new clone as the second server in the pair.  While some are more comfortable with having a physical server in the mix, we chose to make both servers virtual and employed some strategies to improve manageability (more on this later).

Of course, when you create the second server and configure it with the same IP address, you should disable this IP interface until vCenter Server Heartbeat is installed and functioning.  This process is well explained in the documentation.  Once vCenter Heartbeat is installed on both nodes, it will begin monitoring vCenter Server (and optionally SQL) and begin replicating data from the active (primary) server to the passive (secondary) server.

One thing you’ll want to do to make management easier is to create a second IP address on the primary/public NIC.  Ideally the heartbeat NICs are on a private, non-routable VLAN, and if the server is currently passive, the packet filter is blocking traffic to its application IP address – so how will you remotely manage it?  By adding a secondary IP address to the public NIC on each server in the pair, you provide a permanent IP which can be used for anything from management agents and antivirus updates to remote desktop.  Just be careful about DNS registration for this additional IP address so that you aren’t registering a new IP in DNS for an existing name.

Testing vCenter Server Heartbeat

Once both VMs were running with vCenter Server Heartbeat, I proceeded to run a battery of tests to evaluate the response to an array of failure conditions, ranging from service failure to host failure to network errors.  In all of my tests the failover process worked flawlessly – except for one.

First I need to explain what “split-brain” is.  In earlier versions of Neverfail it was possible for both servers to believe that they were active (and therefore have their IPs unblocked) at the same time.  In later versions – including vCenter Server Heartbeat 6.3 – a split-brain avoidance feature is enabled (which leverages the secondary IP address I mentioned above).  This works well in most scenarios, but I ran into a specific scenario that posed some challenges.

One of the scenarios I tested was to disable all vNICs on the active server.  Within seconds (consistent with a configured threshold), vCenter Server Heartbeat successfully failed over to the secondary and service was restored within 90 seconds.  But what happens when the failed network link is restored?  After all, the disconnected server still “believes” that it is the active server.  What I observed is that the reconnected server was put into the secondary role (blocked), which is correct, but then the active server was shut down – presumably as a split-brain avoidance precaution after detecting another active server – resulting in BOTH servers being offline.  Without manual intervention, vCenter Server would remain offline.

I repeated this test with several different settings and continued to receive the same results.  I called VMware, explained the scenario I was testing, and asked if there was any setting I was missing that might prevent this specific behavior.  Was there a way that I could prevent the active server from being shut down when a disconnected primary node reappears on the network?  I was told that this was a known limitation of the current release, and that future releases would have more intelligence and awareness for dealing with such situations.

Granted, a network disconnect followed by a successful failover and then a network restoration is fairly rare in most environments, but it can and does happen, and it’s good to be aware that vCenter Server Heartbeat 6.3 may not automatically intervene correctly, and that manual intervention (which is a quick and simple fix) may be needed.

vCenter Server Heartbeat Strategies

There are many different approaches that can be tailored to your unique environment.  A strategy that our management was comfortable with was to disable automatic failover.  This gave us the benefits of application monitoring (emails on warning conditions for either vCenter Server or SQL) as well as redundancy for vCenter Server, including a console that could be used to quickly initiate a failover manually, as the operations center was staffed 24/7.

But if both servers were virtual, how would we know which hosts they were on so that the console could be accessed if necessary?  We addressed this by positioning the two servers on two specific hosts and excluding these two VMs from participating in DRS (and we also added an anti-affinity rule should things get moved around for any reason).
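
For those who prefer to script this, here is a rough sketch of how such a DRS anti-affinity rule could be created programmatically using the pyVmomi Python bindings for the vSphere API.  This is an illustration only, not what we ran at the time; the hostname, credentials, cluster and VM names are all placeholder assumptions.

```python
# Rough sketch: create a DRS anti-affinity rule that keeps the two Heartbeat
# VMs on different hosts, using the pyVmomi bindings for the vSphere API.
# All hostnames, credentials, cluster and VM names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()   # lab shortcut; use proper certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

cluster = find_by_name(content, vim.ClusterComputeResource, "Cluster01")
vm_primary = find_by_name(content, vim.VirtualMachine, "vcenter-hb-primary")
vm_secondary = find_by_name(content, vim.VirtualMachine, "vcenter-hb-secondary")

# Keep the two Heartbeat nodes apart so a single host failure cannot take out
# both the active and passive vCenter servers.
rule = vim.cluster.AntiAffinityRuleSpec(
    name="separate-vcenter-heartbeat-nodes",
    enabled=True,
    vm=[vm_primary, vm_secondary],
)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")]
)
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```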

By the way, the application monitoring is a really nice feature, as it can alert you to conditions and configurations within either vCenter or SQL (if local) that may deserve attention.  In other words, not only do you gain redundancy and failover protection for your vCenter Server, but you also gain proactive monitoring insights into the health of the application.

The bottom line is that there are very good reasons to consider vCenter Server a “mission-critical” application, and that vCenter Server Heartbeat can offer real improvements to vCenter Server availability.  Just make sure that you explore the solution sufficiently to understand the options, in order to configure it to your environment’s needs and requirements.

Using the “Forklift” Strategy To Upgrade vCenter Server

When upgrading vCenter Server, there are several different approaches available, and it may be beneficial to take advantage of the opportunity to do an OS refresh or more.  I thought I’d share my own experience with upgrading from 4.0 to 4.1 as well as look at the 4.x to 5.0 recommendations.

We had a vCenter 4.0 server with a local SQL database that we wanted to upgrade to vCenter 4.1, but we also wanted to upgrade to SQL 2008 R2 and Windows 2008 R2, and there were some other “legacy” elements from this build that we preferred to clean up with a fresh start.  We also wanted to deploy vCenter Heartbeat, so trying to do an in-place upgrade just didn’t seem right as we wanted a “cleaner” vCenter environment.

Fortunately, it’s rather easy to “forklift” the vCenter database to a new server.  The first step was to build a new Windows 2008 R2 server the way we wanted it, including the network customizations to accommodate vCenter Heartbeat.  Then we installed SQL 2008 and vCenter Server, and we were ready for the “forklift”.

We used RedGate SQL Backup which was able to compress the vCenter database to less than 5% of its native size.  This allowed us to quickly copy the much smaller file over the LAN and restore the database onto the target server.  In a matter of minutes the database was up and running on the target server.

Now, not all vCenter information is stored within the database.  Things like SSL certs and some vCenter configuration parameters are stored in additional files.  Here you can either use VMware’s data migration tool or move these files manually.
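
If you go the manual route, the file copy itself is trivial.  Here is a minimal Python sketch of that step, assuming the default vCenter Server 4.x SSL location on Windows Server 2008 R2; the paths (and the administrative share used to reach the old server) are assumptions, so verify them in your own environment or use the data migration tool instead.

```python
# Minimal sketch of the "move these files manually" option. The source and
# destination paths below are assumptions (the default vCenter Server 4.x SSL
# location on Windows 2008 R2); verify them before relying on this.
import shutil
from pathlib import Path

SRC = Path(r"\\old-vcenter\C$\ProgramData\VMware\VMware VirtualCenter\SSL")
DST = Path(r"C:\ProgramData\VMware\VMware VirtualCenter\SSL")

DST.mkdir(parents=True, exist_ok=True)
for f in SRC.iterdir():
    if f.is_file():
        shutil.copy2(f, DST / f.name)   # copy2 preserves timestamps
        print(f"copied {f.name}")
```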

After a few DNS changes and re-installing plugins (like VUM and Orchestrator), we were done, with very little downtime.  No data was lost – all the performance history, permissions and cluster HA/DRS settings were intact.

Oftentimes the first take is that in-place upgrades are easier, but they can also be a little less “clean”.  If you have reason to desire a fresh start, don’t overlook the opportunity to build a new vCenter server and “forklift” your active configuration to it.

In the VMware vSphere 5.0 Upgrade Best Practices document, some forklift approaches are discussed in greater detail.  Don’t overlook the opportunity to get a clean start, as it’s really not that difficult.

VMware View 5 Lab on a basic home PC with 8GB RAM

This was an interesting project which offered a few lessons I thought I’d share.  It demonstrates how much you can do with limited resources as well as how simple VMware View 5 is to configure.

Basically I built a functional VMware View 5 environment all running from a standard PC with 8GB of RAM and a single SATA spindle.  Why attempt this?

My original idea was to access a Windows 8 (Developer Preview) desktop using an iPad over View 5.  This part of the concept fell apart when I discovered that Windows 8 would not install in a nested ESXi 5 host.  Windows 8 runs fine under VMware Workstation 8, but the problem here is an ESXi 5 host running UNDER Workstation 8.  The Windows 8 bootstrap would initialize, but once the Windows 8 kernel was fully loaded, the VM powered off with a CPU error.

My next idea was to do the same with a Windows 7 desktop, but I eventually learned that there are some challenges with SSL and the current iPad View client, which has not been updated for View 5 yet.  But I did get the View 5 environment fully functional for Windows clients.

CONFIGURATION

The hardware is a basic Dell PC (Intel Sandy Bridge) with 8GB of RAM and a single SATA spindle.  All the software I used consisted of trial versions obtained from either Microsoft or VMware.

For VMs I would need the following:

  • AD Domain Controller
  • vCenter
  • View Connection Server
  • ESXi5 host
  • Windows 7 desktop

It became a challenge to find the right mix of settings within VMware Workstation and the VMs to get the memory and disk I/O stabilized. I had originally used the vCenter appliance until I noticed that View 5 currently requires a Windows-based vCenter, which required a bit more RAM.  I also disabled Aero on the Windows 7 host PC to gain some extra memory.

If I allocated too little memory to the VMs, the guest OS would swap at various points – sometimes constantly – making it nearly impossible to do anything.  The final mix of VMs running under Workstation 8 looked like this:

  • AD Domain Controller (2008 R2) – 512MB
  • vCenter 5 (2008 R2) – 1.5GB
  • View 5 Connection Server – 1GB
  • ESXi 5 host – 2GB

That’s 5GB of VMs running under VMware Workstation 8 on top of Windows 7.  The ESXi 5 host would of course host an additional Windows 7 VM, which would be presented by the View 5 Connection Server.

One thing I perhaps could have done to improve memory utilization would have been to move more of the VMs under the nested ESXi host – this would have allowed the use of transparent page sharing (TPS), which reduces the memory footprint by de-duplicating common memory pages across VMs.  As I had only one SATA spindle, I skipped this extra VM importing activity and tried to make progress with the status quo.
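
As an aside, here is a toy Python illustration of the page-sharing idea: when many pages are identical across VMs (common guest OS and application code, for example), only the distinct pages actually need to be kept in memory.  This is a back-of-the-envelope model, not how ESXi implements TPS, and the page counts are made up purely for illustration.

```python
# Toy model of transparent page sharing (TPS): identical 4KB pages across VMs
# only need to be stored once. Not how ESXi actually implements TPS.
import hashlib

def shared_memory_savings(vm_pages):
    """vm_pages: list of lists, each inner list holding one VM's 4KB pages (bytes)."""
    total = sum(len(pages) for pages in vm_pages)
    unique = {hashlib.sha1(page).digest() for pages in vm_pages for page in pages}
    return total, len(unique)

# Pages identical across all VMs (think common guest OS and application code).
common = [bytes([i % 256]) * 4096 for i in range(200)]

# Pages unique to each VM.
def private_pages(vm_id, count=50):
    return [f"vm{vm_id}-page{i}".encode().ljust(4096, b"\0") for i in range(count)]

vm_pages = [common + private_pages(vm_id) for vm_id in range(3)]
total, unique = shared_memory_savings(vm_pages)
print(f"{total} guest pages, only {unique} distinct pages needed with page sharing")
```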

After various DNS tricks and more, I got to the point where everything was functional.  The actual configuration of View 5 was incredibly fast and straightforward: add a vCenter server, create a pool, add VMs and configure the settings appropriately.  The View configuration took only a few minutes, and after experimenting with the settings I was able to connect to the Windows 7 VM – nested two hypervisors deep – using a View 5 client.

The biggest takeaways for me were what could be accomplished with so few resources (no cost beyond the PC!) and how powerful yet simple to configure View 5 was.  Granted, this was not a complex View 5 configuration, but with other products there would be many more configuration steps (and time) involved.  I was also impressed with the PCoIP experience.  Even though everything was on my local home network, the user experience (with Aero turned on in the hosted VM and 3D graphics enabled in View) was remarkably fast and responsive, and felt much “snappier” than I ever recalled RDP feeling.

When you look at everything View 5 can do and start adding things like vShield Endpoint, Persona Management, mobile endpoints (iPad, Android, etc.) and how the entire desktop provisioning process is vastly simplified from the “traditional” full-PC model, there’s a great deal of value and productivity available in the View 5 suite.  We’ve been hearing that 20xx will be the year of VDI for quite some time.  I think VDI is on the verge of a big push and View 5 will be a big part of that.

Virtualization Is Not The Problem (Part 2)

I was reading Jase McCarty’s post about his experiences with virtualization being wrongfully attributed as the problem, and I wanted to expand on this with my own experiences.

The bottom line is that when a new layer has been added (i.e. virtualization), it’s very easy and convenient to say “virtualization is the problem”, and this perception – fair or not – often takes hold.  Now, it’s very easy for problems to exist within the virtualization layer, but that’s quite different from saying that the problem IS virtualization.  Virtualization has come a long way and can deliver exceptional performance in many conditions (check out the performance category for examples).

Infrastructure can be complex at times and an improperly tuned infrastructure can easily create the perception that virtualization is the problem.

One example I experienced was when the storage infrastructure did not meet the specifications that were requested. Due to caching and I/O patterns the problem was not easily identified, but it led many to develop the perception that virtualization was not capable of supporting “demanding” workloads.  This was one of the first few posts I wrote, back when my blog had only 3 loyal followers 🙂

Here’s the post:  Blue Shift:  Why Storage is Essential to Virtualized Environments (Part 1)

The New Application Paradigm — Is the PC Still The Center of the Universe?

For roughly 1500 years it was believed that the Earth was the center of the universe.  Copernicus largely completed his manuscript by 1532, though it would not be published until 1543, the year of his death; in it he argued that the planets in fact revolved around the sun.  Copernicus’ ideas were still being attacked as heresy by the fixtures of society and the scientific community when Galileo introduced new evidence from his telescope in the early 1600s.

Galileo would be mocked and worse for his defense of Copernican theory and would spend the final years of his life under arrest.  It would take decades and some work by Isaac Newton before the idea that the planets revolved around the sun would slowly begin to gain acceptance.  The Catholic Church would continue to prohibit publications which embraced Copernican theory until 1758.

I went into a bit more historical detail than was necessary, but one key point from the above is that humans (and human systems) can be remarkably slow and averse to change.  In the 1980s the PC became the center of the application universe, and in many ways it has been for some time.  But then came a wave of disruptive technologies – first the internet, then wireless networking, smartphones and tablet devices.

Virtualization of course is abstracting workloads from the physical server hardware (and I can’t type this without mentioning Nicholas Weaver’s excellent vMotion over XBOX Kinect demo).  Today what we are seeing is the abstraction of applications from the PC.  As long as the PC hosted our applications, it was the center of the universe.  But now those applications are moving to smartphones, tablets, thin clients, and even web browsers (HTML5).  The application is what really matters and many applications are now available on mobile platforms, empowering the user even when they are not near a PC.

IT shops traditionally sought tight control over the platforms hosting their client applications (PCs) and would often assign policy to those devices.  Now the enterprise must face a new paradigm of smartphones and tablets – many of them purchased by the employee.  The enterprise must now manage USERS (think identity management) who will seek to access managed applications from unmanaged devices.

Much was announced at VMworld just a few weeks ago, and I wanted to review some of the announcements that fit this new paradigm, some of which I may go into in greater detail in future posts.

VMware View 5

VMware View is VMware’s Virtual Desktop Infrastructure (VDI) solution which runs on top of vSphere.  It enables the presentation of a full and rich desktop hosted on vSphere, to a variety of thin clients, tablets and more.

According to some reports, VMware View had about 40% of the VDI market share before View 5 was announced.  One of the biggest differentiators in the past was that VMware View’s PCoIP protocol was not as efficient as Citrix ICA, for example.  In the View 5 release, the PCoIP protocol has been significantly improved, allowing for much improved WAN and high-speed (i.e. video) performance.

You can view the rest of this slide set which was presented at VMworld here.

Now, in a tweet I had said at one point that “ICA has no protocol advantage”, which was me trying to make a point in just a few characters.  Now that I have more characters to work with, what I really meant to say is that the protocol was a huge differentiator in the past, and in View 5 this gap is a lot less significant.  I’ve worked with ICA since WinFrame 1.6 and indeed it is a very powerful and rich protocol, but with the performance improvements now introduced in View 5, many of these protocol differences just aren’t as significant as they used to be, in my opinion.

VMware View 5 also has excellent integration with vSphere and has gone a long way to simplify the number of steps and the time required for administration and provisioning, while offering new persona management capabilities.  For many reasons, I see big potential for growth in the VDI space, and I think that VMware View will continue to be a rising force within that space.  Here’s a video from Chad Sakac discussing his 5 favorite new features of VMware View 5:

Also on Brian Madden’s blog, you can find a video exploring the new persona management capabilities in View 5.

VMware Horizon App Manager

If you have a tablet or smartphone, chances are that you’ve got access to some sort of app store from which you can peruse and purchase apps for your device.  But what if your enterprise wants to make sure you have access to both your SaaS and in-house applications – and have them quickly available for new hires?

VMware Horizon integrates application management and identity management to enable users to access a portal and quickly launch the applications to which they are entitled.  Combined with VMware ThinApp technology, applications are quickly made available to the end user’s workstation in a seamless manner.  See the following video for an example:

VMware Horizon Mobile – Extending The Enterprise App Store

Now what if we could extend the Horizon concept above to mobile devices even further?  Employees are bringing in their own mobile devices – how do we manage these devices and extend the concept of the enterprise app store to them?

VMware is doing several things here.  First they use virtualization and encryption on your phone (yes – on your phone) to create a workspace that the enterprise can have some control over.  Then they leverage the Horizon capabilities to make enterprise applications (SaaS or internally hosted) available on your mobile device.

Here’s a brief video demo of VMware Horizon Mobile:

Project Octopus

Most people who have mobile devices have encountered a common problem – “How do I get the files I want from any of my devices at any time?”  There are several services that offer such functionality today, including Dropbox.  VMware’s Project Octopus intends to take this concept a step further by adding rich social and security elements. Here’s a preview of Project Octopus:

While Project Octopus is not available yet, you can sign up here to be notified when it is ready.

VMware AppBlast

Above we’ve covered different ways of delivering applications to mobile devices, ranging from VDI to ThinApp and others.  Imagine if it were possible to deliver any Windows, Mac or Linux application over a web browser.  Yes, you read that right.  Using the power of HTML5, VMware has created AppBlast — offering users the ability to run applications from any HTML5-compliant web browser.  Imagine being able to run a Windows application on your Mac via a web browser, or vice versa!  And since most tablets and smartphones support HTML5, these platforms can be instant clients without having to install any software!

Perhaps the best way to explain AppBlast is to show a couple video demos:

Applications Are the Center of the Universe

Untethering applications from the PC platform and liberating them onto mobile platforms, while facilitating policy and expedited deployment, can go a long way toward improving productivity in any organization.  But as we noted in the opening paragraphs, it took decades after Copernicus and Galileo showed us the way before their ideas were actually embraced.  Our human nature is to resist change — and human systems can take even longer.

We know how to purchase PCs by the pallet and roll out a huge desktop refresh — we’ve become very comfortable with this process and so we keep doing what we know how to do.  But mobile platforms are a hugely disruptive technology that has the potential to introduce paradigm shifts in productivity as well as solve other IT problems.

Change is rarely easy, but those that see the potential and pursue it will be the first to unlock these benefits for their advantage.  The tools to make this happen are becoming available.

Make-A-Wish: Electron Boy

This is a very touching but bittersweet story I wanted to share.  I say that it’s bittersweet because just a few days ago, Erik Martin (a.k.a. “Electron Boy”) lost his battle with cancer.

Last year, Erik was the happiest boy in Seattle as his wish was granted to be “Electron Boy” for a day.  Interstates were shut down and local sports teams were involved as Erik rescued Seattle from the forces of darkness.

The video speaks for itself, but this is a fantastic example of what the Make-A-Wish foundation does every day.  Bringing smiles to children and their families and providing unforgettable memories.

The Make-A-Wish Foundation is an outstanding organization which we had the opportunity to experience firsthand.  The story of our own daughter, who was granted a wish by the Make-A-Wish Foundation, will be posted here shortly.

For information on how you can help contribute and support the Make-A-Wish foundation’s mission, please visit this page.

Exchange Server 2010 On a vSphere 5 Host Supporting 16K Heavy Users? No Problem!

I’ve been a fan of the VROOM! blog for some time, as I often hear “you can’t do that with a virtual machine” and get to turn around and say “someone already did — look!”.  So now that both Exchange 2010 and vSphere 5 have been released, what’s possible?  The VROOM! team’s whitepaper is here, but I wanted to take a quick walk through some key areas and provide a summary.

The physical host used was a Dell 910 (4 CPUs) with 256GB of RAM running vSphere 5.   The first test was to see how much acceptable load could be sustained using a single VM.  By “acceptable” it is meant that latency for each end user is kept below 500ms for 95% of all transactions.

As illustrated above, vSphere 5 was able to provide a 13% improvement in latency over vSphere 4.1 at a capacity of 8,000 users.  vSphere 4.1 can only support 8 vCPUs in a guest, but vSphere 5 doesn’t have this limitation.  vSphere 5 supported 12,000 users on a single 12-vCPU VM, while using less than 15% of the CPU capacity on the host.

Now, it has been demonstrated with other applications that you can often achieve more transactions on a physical host by using several smaller VMs as opposed to a single VM, and the same is true here.  By using 8 separate virtualized Exchange servers (4 mailbox, 4 client/hub) on the same physical host, the team was able to support 16,000 heavy Exchange users with 95% of transactions having a latency of less than 200 ms (anything under 500 ms is generally considered acceptable).  During this 16,000-user test, only 32% of the CPU capacity of the physical host server was consumed:

But if you had to vMotion an Exchange server you’d create problems for the end users, right?  Not in these drills and probably not in most environments either.  The vMotion of an Exchange server took 47 seconds (71 seconds with vSphere 4.1) and the number of end user task exceptions was ZERO.

Compared to vSphere 4.1, vSphere 5 shows a 34% improvement in vMotion migration time and an 11% improvement in Storage vMotion time.

In all, it’s pretty clear that vSphere 5 is a strong platform for Exchange 2010 and many other applications as well.  Virtualization can provide consolidation (density) and scalability benefits, as well as many new options for high availability, disaster recovery, and much more.  Virtualization itself is not cloud computing, but the more we virtualize, the more workloads we have that we can consider for taking advantage of the benefits that are possible with cloud computing.

VMware Workstation 8.0 Released

Lately I’ve gotten in the habit of tweeting things and neglecting to blog on them.  VMware Workstation 8.0 was just released so I thought it would be worthy of a quick blog post.

Now you can do a lot with the free VMware Player — including running ESXi 4 or ESXi 5 — but there are several limitations (only one VM running at a time, for starters).

I’ve used early versions of VMware Workstation, and looking at how far it’s come is impressive.  For example, now you can use VMware Workstation as a server and host VMs that others can remotely connect to using Workstation.  And if you have some ESX hosts, you can connect to them as well and work with VMs from different Workstation instances and ESX hosts in the same GUI.

The interface has also been redesigned — here’s a screenshot of Ubuntu (Linux), ESXi 5 and Windows 8 (Developer Preview) all running at the same time with live thumbnails:

I should also note that previous versions of VMware Workstation (and Player) will NOT run the Windows 8 preview — VMware Workstation 8 is required for this.

Here’s a short list of some of the new capabilities of VMware Workstation 8.0:

  • VMs with up to 64GB RAM
  • HD Audio, Bluetooth and USB 3.0 support in guests
  • Expose CPU hardware virtualization (e.g. Intel VT) to guests
  • ESXi hosts running under Workstation can now support 64-bit guests
  • Have VMs autostart with the host
  • Allow VMs to be shared and accessed remotely by other users of Workstation 8.0
  • Remotely connect to other Workstation instances and/or ESX hosts to work with even more VMs
  • Upload a VM to an ESXi host or vCenter Server

There’s a free 30-day trial available so go give it a spin!

September 11, 2001

It was a beautiful summer morning.  I was working for Dell and had just finished delivering an Active Directory and Exchange design proposal for a large company based in Florida, and I had earned a break: a week at home with my family after flying in and out each week.

We were just getting settled, as a few weeks earlier my daughter had been flown via medevac from Wisconsin to New Jersey and had just been released from intensive care, but with 24/7 nursing.  We had nurses with us around the clock to monitor my daughter, who was on a respirator vent (I’ll share more details on my daughter in a future post).

I was up and drinking a cup of coffee when suddenly the alarms started triggering on the pulse oximeter and respirator.   She suddenly wasn’t getting enough oxygen and she was “de-SATing”.  The nurse and my wife worked anxiously to get her oxygen and respiratory levels back to stable levels.  There was no warning and no obstruction – it all just began suddenly, and this seemed different and more serious than any previous incidents.

I was in the same room watching all of this go on, trying to get a feel for what was happening, when the phone rang.  I stepped out to answer the phone and was advised to turn on the news because there was a big fire at the World Trade Center.  The TV was in the next room, so I found myself anxiously going between the two rooms to check on each emergency and communicate what was going on to my wife and the nurse.

I went back to the TV for a few moments and then watched live as the second plane struck the towers.  I didn’t need to wait for an announcer to tell me that this was no accident.  I went back into my daughter’s room and couldn’t help adding anxiety to the situation by expressing to them what I had just witnessed on TV.

For some time I went back and forth between the two rooms until my daughter slowly began to stabilize.  We called the surgeon and explained what happened, and he asked us if we could drive down to the hospital to have her reviewed that afternoon.  I was getting the truck ready and loading up her oxygen tank and other medical supplies, and I couldn’t believe what I was hearing on the radio.  I heard the announcer describing in panicked horror that the first tower had completely collapsed and that there was nothing left but a cloud of dust that quickly enveloped the immediate area.

We got in the truck and began driving towards Morristown, NJ, which is a Level One Regional Trauma Center.  The highway had turned into a parking lot as ambulances tried to gain passage to bring victims to Morristown hospital.  It felt like the longest ride ever as we watched the medical vehicles scrambling one after another to get through.

When we got to Morristown the emergency area was filled with ambulances – perhaps more than anyone had ever imagined.  We could tell, based on the procedures being used, that for many of the transported victims it was already too late.

Eventually we were able to see our doctor; our daughter was reviewed, they could find nothing out of the ordinary, and they sent us back home.  On the way back home we decided to take a detour on Skyline Drive just a mile from our house.  Up there you had a clear view of the New York City skyline, but many others had the same idea and the police had the area blocked off.  As we turned around we had less than a minute to view a sight that we would never forget — the New York City skyline missing the twin towers and a huge cloud of smoke across the entire lower Manhattan area.

Shock.  Disbelief.  Anger.  Fear.  Sorrow. Loss.  We felt a complex array of emotions in these short seconds as we realized that things had forever changed.

When we got back home to recall the events, we reviewed the logs and made a shocking discovery.  As best we could tell, the medical alarms began sounding in the exact minute that the first plane struck the towers.  I’m not trying to imply any sort of belief here, but it’s almost as if it could have been an “I sense a great disturbance in the force” type of moment.  Our daughter never had an episode quite like this, where she went into distress so suddenly and for no apparent reason.  Some who are inclined to view the world from a more spiritual perspective suggested that she must somehow have had a spiritual connection to what had happened.  What I can say with certainty is that some profound events took place on that day, and that the timing of these events would give most anyone considerable pause.

The first Monday after the attack I was in Newark Airport boarding a flight to Ronald Reagan Airport in Washington D.C.  The airport felt like a ghost town.  It was the shortest boarding and quietest flight I had ever been on.  Later that day I checked into my hotel just a half-mile from the White House and turned on the TV to learn more about the recovery efforts in both cities.

A week or two later I was back at home with my wife and daughter in the waiting area of a doctor’s office.  We watched as a young boy – perhaps 4 – began building a tower as tall as he could from the LEGO-like blocks.  When his mother saw that he was monopolizing the bricks for his project, she asked him what he was doing.  He innocently said “I’m building a new tower to fix the one the bad people knocked down”.  We exchanged looks but no words needed to be said.  I think we each smiled and shed a tear at that moment.

Hurricane Irene Floods New Jersey (w pics)

In 2007, Oakland, New Jersey was rated by Business Week Magazine as one of the top 50 cities in which to raise kids, helped by an outstanding school system.

While Oakland is indeed a great town, parts of it are prone to flooding, especially in areas along the Ramapo River.  When we were looking for a home, the first priority was to be close to our in-laws in order to provide a strong support system for our daughter, who had special medical needs at the time.   So while we bought a home in a flood zone, it was on a raised foundation and historically had never flooded.  Just last year the Army Corps of Engineers finished dredging the river and a new dam went operational, so there was even less expectation of flooding.

When Hurricane Irene hit, no one was quite sure what to expect.  The dam was used to lower the river in advance of Irene, but the day after the downpour it became too much.  We watched as the waters filled the lower areas immediately off the river first; within a few hours the river rose above the road and then came rushing in at a high rate.  About three hours later it was already well over 3 ft. deep in the front yard, and with the rate the river was continuing to rush in and a projected crest not until 2 AM, we moved to get everything off the floor and then evacuate.  We had to walk about 0.2 miles one way to get to safe ground, and I'll never forget carrying my girls and our Rhodesian Ridgeback through the waist-deep waters.  I made a few extra trips to help out neighbors, and on the last trip the current was so strong that at one point I was moving only a tiny bit with each step.  It reminded me of when our soccer coach had made us do suicide sprints in waist-deep water, but worse because of the current; my legs felt like rubber when it was done.

We spent the night in a hotel fully expecting that the flood waters would have gotten into our house, but to our surprise the waters receded quickly the next day and had missed getting inside by just inches.  We were relieved to hear this, but then reality slowly set in as we found the contents of our sheds lost and significant damage to our home's foundation.  One thing to understand about our neighborhood is that most of the homes here were never actually "homes" but rather summer cottages which later ended up being used as homes.  So like many of our neighbors, we relied on several sheds in our yard to make up for the unusually small living space, and we lost quite a bit.  One of our neighbors had the shelves collapse inside his shed, so he basically lost everything in it.

In the aftermath, the neighborhood looked like a war zone.  Piles of garbage appeared on front lawns, 4-5 feet high and up to 20 feet wide.  Large, heavy swingsets, basketball posts, and more ended up scattered all across the neighborhood.  Other areas of Oakland were hit even harder.  Four homes were knocked off their foundations, and we spoke with some of the victims, who were devastated.  And just a few miles away in Pompton Lakes, one house exploded when the gas was not shut off to the area.

It took us the better part of a week of non-stop cleanup to get to where we felt like we could take a breath.  We lost our garden crops and much of the contents of three sheds in our backyard, but compared to others we encountered we fared relatively well.  Some had their homes destroyed and will need a lot of support to get through this difficult time.  Our U.S. Congressman and Mayor appeared outside our front door to survey the damage, and President Obama came to Paterson, NJ (10 miles away) last week.  You can find more information at theoaklandjournal.com and oaklandcares.com.

I didn't get pictures of the garbage piles and debris before they were picked up, but I did manage to get a few shots and found some others on the Internet.

Flooding began in the AM on the other side of the river

I took this picture of our street about an hour before we evacuated.  The water was a good foot higher than this when I last saw it, just inches from the mailboxes.

This heavy swingset was carried for a block by the flood waters until it became snagged in this telephone pole support cable:

Here’s a video a neighbor took of a canoe trip around our neighborhood.  This was taken well before the waters had peaked.  The “tour” goes down our street at one point but never goes past our house:

Some of the flooding in town about two miles north of us:

Here’s video of the Pompton Lakes Dam at Hamburg Turnpike with the flood filling up a KIA auto dealer lot and a CVS store:

Here’s a picture of the home in Pompton Lakes which exploded when the gas in the area was not turned off:

August Update

It's been a crazy summer and I'm a bit annoyed that my blogging has been rather sparse lately.  However, I do have a few excuses, including being between jobs, having a new baby, and going through major flooding from Hurricane Irene.  Fortunately our home was spared, but we did lose quite a bit that was either outside or in our sheds, so we still have a lot of cleaning up to do.

For me, watching VMworld remotely is a bit of a buzzkill.  Great people, great technology, great vendors, great discussions and great Hands-on Labs: experiences that you can't exactly capture remotely.  I've been in a bit of a "technical purgatory" for the past few years and I'm really hoping that soon I'll be on the front lines of the cloud revolution, rather than just reading about it.

In any case, I wanted to quickly jot down some notes about potential future blogging topics, not just for my readers but also for myself, to jog my memory.

FUTURE POSTS

I'm finishing up a rather long post on our Make-A-Wish journey which I hope to publish soon.  After that, I'm hoping to write a review of vCenter Heartbeat, which I got to spend some time with this summer.

There are a lot of new product releases right now, from vSphere 5 to View 5, which I want to blog about in various capacities, as well as many other exciting solutions and partnerships (including the new VXLAN spec).  Rather than trying to blog about many of these news items individually, I thought I'd try to address several of them at once as part of a larger vision, such as all the new DBaaS/PaaS and cloud portability initiatives, as well as the abstraction of applications away from the traditional PC.  There's just so much going on, and when you can put it all into the context of a larger vision, it's even more exciting.

I also wanted to do a series of posts on IT management in general and how it is approached: for example, a move away from the IT-as-a-Cost-Center model and siloed skill sets and organizations, toward a new vision for IT and Business that promotes agility.  There's a revolution going on, and those that don't embrace it (especially IT management) run the risk of becoming increasingly irrelevant.

Recently I had the opportunity to put together a 15-minute presentation on converged infrastructure.  I actually wrote it on a napkin and had to present it on a whiteboard with no practice :).  The experience showed me many opportunities to both expand and improve the presentation.  I'm not sure yet whether I'll do it on video or in SlideRocket, but I'm pretty excited about its potential to be a very good, as well as entertaining, presentation.

I have a feeling I'm leaving something out, but that's it for now.  I'm looking forward to writing several of these posts in the coming weeks!