Record Breaking Hyper-Converged Performance with Windows Server 2016
(Disclaimer – this blog post was written by a Microsoft employee)
As Gartner has noted, hyper-converged infrastructure (HCI, often grouped under the Software-Defined Storage, or SDS, umbrella) has been taking off in popularity in the mid-market, and even for smaller projects within larger environments. One big reason is cost: deploying a hyper-converged cluster costs far less than purchasing and maintaining a traditional storage array.
OK, but how well do they perform?
Using Intel NVMe SSDs, Intel engineers achieved 3 million IOPS on an all-flash Windows Storage Spaces Direct configuration, with Hyper-V VMs generating the workload on the same hardware.
We’ve seen fast hyper-converged clusters before at 32 and 64 nodes, but have you seen 3 million IOPS from just 4 hosts?
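To put that figure in per-node terms (simple arithmetic on the numbers above, not an additional measurement):

```python
# Per-node share of the 3 million aggregate IOPS on a 4-node cluster.
total_iops = 3_000_000
nodes = 4
iops_per_node = total_iops // nodes
print(f"{iops_per_node:,} IOPS per node")  # 750,000 IOPS per node
```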
I know what you’re thinking: these massive IOPS tests are interesting for reads, but what about writes? Intel also tested 144 SQL Server 2016 virtual machines deployed to a similar 4-node cluster and achieved over 28,000 transactions per second – suitable for many OLTP scenarios.
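As a rough sanity check on that aggregate number, here is the back-of-the-envelope split per VM and per host (the split is my arithmetic, not a figure from Intel's test):

```python
# Spread the aggregate OLTP rate across the VMs and hosts.
# Figures from the post; the per-VM/per-host split is illustrative only.
total_tps = 28_000   # aggregate SQL Server transactions per second
vms = 144            # virtual machines on the cluster
hosts = 4            # nodes in the cluster

tps_per_vm = total_tps / vms
tps_per_host = total_tps / hosts

print(f"{tps_per_vm:.0f} tps per VM")      # 194 tps per VM
print(f"{tps_per_host:.0f} tps per host")  # 7000 tps per host
```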
So what made these record performance numbers possible?
Well, first of all, the Intel NVMe SSDs offer incredible performance, but we need a way to sustain this activity across multiple hosts. 40 GbE network interfaces help, but how can you further reduce latency and approach line speed?
Windows Storage Spaces Direct leverages SMB Direct (RDMA), which lets network adapters transfer data directly between hosts’ memory, bypassing much of the networking stack and significantly reducing latency. SMB Multichannel is built in as well, dynamically load-balancing traffic across all available network adapters. Of course, there’s a lot more to Windows Storage Spaces Direct, from erasure coding to caching tiers, as noted in the video below:
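Erasure coding in Storage Spaces Direct uses far more sophisticated codes than this, but the core idea, rebuilding lost data from surviving chunks plus parity, can be sketched with simple XOR parity (a toy model, not the actual S2D scheme):

```python
# Minimal XOR-parity sketch of the erasure-coding idea: any single lost
# data chunk can be rebuilt from the remaining chunks plus the parity.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data chunks
parity = xor_blocks(data)            # one parity chunk

# Simulate losing chunk 1 and rebuilding it from the survivors:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)           # rebuilt: b'BBBB'
```

XOR parity tolerates one lost chunk; real deployments use Reed-Solomon-style codes to survive multiple simultaneous failures.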
And let’s not forget about ReFS 2.0, a file system designed for the cloud era. Cloning, snapshots, and provisioning can normally generate a lot of write IOPS, but ReFS turns these into simple metadata operations, so the data blocks themselves are not copied. A short two-minute demo of this in action (no sound) is available here.
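ReFS implements this at the file-system level, but the copy-on-write idea behind metadata-only cloning can be sketched in a few lines (this toy store and the file names in it are illustrative, not the actual ReFS implementation):

```python
# Toy copy-on-write store: "cloning" copies block references (metadata),
# not the data itself, so a clone is instant regardless of file size.
class CowStore:
    def __init__(self):
        self.blocks = {}     # block_id -> bytes
        self.files = {}      # name -> list of block_ids
        self.next_id = 0

    def write_file(self, name, chunks):
        ids = []
        for chunk in chunks:         # real data write: allocates blocks
            self.blocks[self.next_id] = chunk
            ids.append(self.next_id)
            self.next_id += 1
        self.files[name] = ids

    def clone(self, src, dst):
        # Metadata-only: the clone shares the same block ids.
        self.files[dst] = list(self.files[src])

    def read_file(self, name):
        return b"".join(self.blocks[i] for i in self.files[name])

store = CowStore()
store.write_file("vm-base.vhdx", [b"boot", b"data"])
store.clone("vm-base.vhdx", "vm-clone.vhdx")  # no data I/O at all
assert store.read_file("vm-clone.vhdx") == b"bootdata"
assert len(store.blocks) == 2                 # still only two blocks stored
```

In a full copy-on-write design, a write to the clone would allocate a fresh block for the modified range while the untouched blocks remain shared.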
The bottom line is that Storage Spaces Direct (S2D), SMB Direct (RDMA), and ReFS 2.0 go a long way toward enabling strong hyper-converged performance – especially with state-of-the-art NVMe storage.
Now throw the power of Azure on top of all this and you’ve got Azure Stack: everything you need to run a private cloud that is consistent with the Azure public cloud, in just one mini rack.
Stay tuned for more Windows Server and Azure Stack news as Microsoft Ignite unfolds this week.