2-Node Virtual SAN Software Defined Self Healing

I continue to think one of the hidden gem features of VMware Virtual SAN (VSAN) is its software defined self healing ability.  I recently received a request for a description of 2-Node self healing. I wrote about our self healing capabilities for 3-Node, 4-Node, and larger clusters here, and I wrote about Virtual SAN 6 Rack Awareness Software Defined Self Healing with Failure Domains here. I suggest you check out both before reading the rest of this post. I also suggest you check out these two posts on 2-Node VSAN for a description of how it works here and how it is licensed here.

For VSAN, protection levels are defined through VMware’s Storage Policy Based Management (SPBM), which is built into vSphere and managed through vCenter.  VM objects can be assigned to different policies, which dictate the protection level they receive on VSAN. With a 2-Node Virtual SAN there is only one option for protection: the default # Failures To Tolerate (#FTT) equal to 1 using RAID 1 mirroring. In other words, each VM writes to both hosts; if one fails, the data exists on the other host and remains accessible as long as the VSAN Witness VM is available.

Now that we support 2-Node VSAN, the smallest VSAN configuration possible is 2 physical nodes, each with 1 caching device (SSD, PCIe, or NVMe) and 1 capacity device (HDD, SSD, PCIe, or NVMe), plus one virtual node (the VSAN Witness VM) to hold all the witness components. Let’s focus on a single VM with the default # Failures To Tolerate (#FTT) equal to 1.  A VM has at least 3 objects (namespace, swap, vmdk).  Each object has at least 3 components (data mirror 1, data mirror 2, witness) to satisfy #FTT=1.  Let’s just focus on the vmdk object and say that the VM sits on host 1, with mirror components of its vmdk data on hosts 1 and 2 and the witness component on the virtual Witness VM (host 3).
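The object and component layout described above can be sketched as a toy model. This is purely illustrative (the names and structure are mine, not VSAN's internal data structures), but it shows why #FTT=1 with RAID 1 mirroring implies two data mirrors on different hosts plus one witness:

```python
# Toy model of the vmdk object layout described above.
# Illustrative only; not VSAN's actual data structures.
FTT = 1  # number of failures to tolerate

vmdk_object = {
    "name": "vmdk",
    "components": [
        {"type": "mirror", "host": "esxi-01"},     # data mirror 1
        {"type": "mirror", "host": "esxi-02"},     # data mirror 2
        {"type": "witness", "host": "witness-vm"}, # witness on the virtual node
    ],
}

# RAID 1 with #FTT=1 needs FTT+1 data copies and 2*FTT+1 total components.
assert sum(c["type"] == "mirror" for c in vmdk_object["components"]) == FTT + 1
assert len(vmdk_object["components"]) == 2 * FTT + 1
```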

01 - 2-Node VSAN min

OK, let’s start causing some trouble.  With the default # Failures To Tolerate equal to 1, VM data on VSAN should be available if a single caching device, a single capacity device, or an entire host fails.  If a single capacity device fails, let’s say the one on esxi-02, no problem: another copy of the vmdk is available on esxi-01 and the witness is available on the Witness VM, so all is good.  There is no outage and no downtime; VSAN has tolerated 1 failure causing the loss of one mirror, and it is doing its job per the defined policy by providing access to the remaining mirror copy of the data.  Each object has more than 50% of its components available (one mirror and the witness are 2 out of 3, i.e. 66% of the components), so data will continue to be available unless there is a 2nd failure of the caching device, capacity device, or esxi-01 host.

The situation is the same if the caching device on esxi-02 fails or the whole esxi-02 host fails: VM data on VSAN would still be available and accessible. If the VM happened to be running on esxi-02, HA would fail it over to esxi-01 and data would be available. In this configuration there is no automatic self healing because there’s nowhere to self heal to. Host esxi-02 would need to be repaired or replaced in order for self healing to kick in and get back to compliance with both mirrors and the witness component available.
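The "more than 50% of components available" rule can be sketched as a simple majority check. This is a hedged simplification (real VSAN assigns per-component votes), but for the 2-mirrors-plus-witness layout a plain majority count behaves the same way:

```python
def object_available(components, failed_hosts):
    """Return True if more than 50% of an object's components are accessible.

    Illustrative quorum check only; VSAN actually uses per-component votes,
    but for this 2-of-3 layout a simple majority behaves identically.
    """
    alive = [c for c in components if c["host"] not in failed_hosts]
    return len(alive) > len(components) / 2

vmdk = [
    {"type": "mirror", "host": "esxi-01"},
    {"type": "mirror", "host": "esxi-02"},
    {"type": "witness", "host": "witness-vm"},
]

# One failure (esxi-02): 2 of 3 components remain, so data stays available.
print(object_available(vmdk, {"esxi-02"}))             # True
# A second failure (esxi-01 too): only 1 of 3 remains, no quorum.
print(object_available(vmdk, {"esxi-01", "esxi-02"}))  # False
```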

02 - 2-Node VSAN min

Self Healing Upon Repair

How can we get back to the point where we are able to tolerate another failure?  We must repair or replace the failed caching device, capacity device, or failed host.  Once repaired or replaced, data will resync and the VSAN datastore will be back in compliance, where it can once again tolerate one failure.  With this minimum VSAN configuration, self healing happens only when the failed component is repaired or replaced.
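It's worth separating two states here: an object can be *available* (quorum intact) while still being *out of compliance* (a mirror missing). A minimal sketch of that distinction, again using my own illustrative component model rather than VSAN's real logic:

```python
def policy_compliant(components, failed_hosts):
    """An object is back in policy compliance only when every component is
    healthy: both mirrors and the witness. Illustrative check only."""
    return all(c["host"] not in failed_hosts for c in components)

vmdk = [
    {"type": "mirror", "host": "esxi-01"},
    {"type": "mirror", "host": "esxi-02"},
    {"type": "witness", "host": "witness-vm"},
]

# esxi-02 down: the object is still available (2 of 3) but NOT compliant.
print(policy_compliant(vmdk, {"esxi-02"}))  # False
# After repair/replacement and resync, all components are healthy again.
print(policy_compliant(vmdk, set()))        # True
```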

03 - 2-Node VSAN min

2-Node VSAN Self Healing Within Hosts

To get self healing within a host you must configure your hosts with more disks. Let’s investigate what happens when there are 2 SSDs and 4 HDDs per host, 2 hosts in the cluster, and the policy is set to # Failures To Tolerate equal to 1 using the RAID 1 (mirroring) protection method.

01~ - 2-Node VSAN.png

If one of the capacity devices on esxi-02 fails, VSAN could choose to self heal to:

  1. Other disks in the same disk group
  2. Other disks on other disk groups on the same host
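The target-selection choices above can be sketched as a filter over the host's disks. The layout below is hypothetical (disk and disk-group names are mine), and real VSAN placement also weighs free capacity, but it illustrates the 2-node constraint: because the surviving mirror lives on the other node, the rebuilt copy must land somewhere on the same host:

```python
# Hypothetical layout of esxi-02: two disk groups, four capacity disks each.
host = {
    "esxi-02": {
        "dg1": ["hdd1", "hdd2", "hdd3", "hdd4"],  # disk group 1 (behind ssd1)
        "dg2": ["hdd5", "hdd6", "hdd7", "hdd8"],  # disk group 2 (behind ssd2)
    }
}

def eligible_targets(disk_groups, failed_disk):
    """Any capacity disk on the same host except the failed one: either the
    same disk group or another disk group. Illustrative sketch only."""
    return [d for disks in disk_groups.values()
              for d in disks if d != failed_disk]

print(eligible_targets(host["esxi-02"], "hdd1"))
# ['hdd2', 'hdd3', 'hdd4', 'hdd5', 'hdd6', 'hdd7', 'hdd8']
```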

The green disks in the diagram below are eligible targets for the new mirror copy of the vmdk:

02~ - 2-Node VSAN

This is not an all-encompassing, thorough explanation of every possible scenario.  There are dependencies on how large the vmdk is, how much spare capacity is available on the disks, and other factors.  But this should give you a good idea of how failures are tolerated and how self healing can kick in to get back to policy compliance.

Self Healing When SSD Fails

If the caching device on esxi-02 that fronts the capacity devices holding the mirror copy of the vmdk fails, the entire disk group goes offline. VSAN could then choose to self heal to other disks in other disk groups on the same host; in a 2-node cluster the rebuilt mirror cannot move to esxi-01, since that host already holds the other copy.
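Because a caching-device failure takes its whole disk group offline, the filter changes slightly from the capacity-disk case: now the entire failed disk group is excluded. A hedged sketch, using the same hypothetical two-disk-group layout as before:

```python
# Hypothetical layout: each disk group is an SSD fronting capacity disks.
# Illustrative only; not VSAN's actual placement code.
disk_groups = {
    "dg1": {"ssd": "ssd1", "capacity": ["hdd1", "hdd2", "hdd3", "hdd4"]},
    "dg2": {"ssd": "ssd2", "capacity": ["hdd5", "hdd6", "hdd7", "hdd8"]},
}

def targets_after_ssd_failure(disk_groups, failed_ssd):
    """Eligible rebuild targets on this host: every capacity disk in a disk
    group whose caching SSD is still healthy."""
    return [d
            for dg in disk_groups.values()
            if dg["ssd"] != failed_ssd  # the failed SSD's whole group is out
            for d in dg["capacity"]]

print(targets_after_ssd_failure(disk_groups, "ssd1"))
# ['hdd5', 'hdd6', 'hdd7', 'hdd8']
```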

The green disks in the diagram below are eligible targets for the new mirror copy of the vmdk:

03~ - 2-Node VSAN.png

Self Healing When a Host Fails

If a host (e.g. esxi-02) that holds a mirror of the vmdk fails, VSAN cannot self heal until the host is repaired or replaced.

04~ - 2-Node VSAN


VMware Virtual SAN leverages all the disks on all the hosts in the VSAN datastore to self heal.  Note that I’ve only discussed the self healing behavior of a single VM, but other VMs on other hosts may have data on the same failed disk(s), with their mirrors on different disks in the cluster, and VSAN might choose to self heal them to yet other disks.  The self healing workload is thus a many-to-many operation, spread across all the disks in the VSAN datastore.

Self healing is enabled by default, its behavior depends on the software defined protection policy (the #FTT setting), and it can occur to disks in the same disk group, to other disk groups on the same host, or to other disks on other hosts. These availability and self healing properties make VSAN a robust storage solution for data center applications.

VMware Virtual SAN at Storage Field Day 9 (SFD9) – Making Storage Great Again!

On Friday, March 18 I took the opportunity to watch the live webcast of Storage Field Day 9. If you can carve out some time, I highly recommend this.

Tech Field Day @TechFieldDay: VMware Storage Presents at Storage Field Day 9

The panel of industry experts asks all the tough questions and the great VMware Storage team answers them all.

Storage Industry Experts
  • Alex Galbraith @AlexGalbraith
  • Chris M Evans @ChrisMEvans
  • Dave Henry @DaveMHenry
  • Enrico Signoretti @ESignoretti
  • Howard Marks @DeepStorageNet
  • Justin Warren @JPWarren
  • Mark May @CincyStorage
  • Matthew Leib @MBLeib
  • Richard Arnold @3ParDude
  • Scott D. Lowe @OtherScottLowe
  • Vipin V.K. @VipinVK111
  • W. Curtis Preston @WCPreston

VMware Virtual SAN Experts
  • Yanbing Le @ybhighheels
  • Christos Karamanolis @XtosK
  • Rawlinson Rivera @PunchingClouds
  • Vahid Fereydouny @vahidfk
  • Gaetan Castelein @gcastelein1
  • Anita Kibunguchy @kibuanita


The ~2 hour presentation was broken up into easily consumable chunks. Here’s a breakdown of the recorded sessions:

VMware Virtual SAN Overview

In this introduction, Yanbing Le, Senior Vice President and General Manager, Storage and Availability, discusses VMware’s company success, the state of the storage market, and the success of the HCI market leader Virtual SAN, now with over 3,000 customers.

What Is VMware Virtual SAN?

Christos Karamanolis, CTO, Storage and Availability BU, jumps into how Virtual SAN works, answers questions on the use of high-endurance and commodity SSDs, and explains how Virtual SAN service levels can be managed through VMware’s common control plane, Storage Policy Based Management.

VMware Virtual SAN 6.2 Features and Enhancements

Christos continues the discussion around VSAN features as they’ve progressed from the 1st generation Virtual SAN released on March 12, 2014 to the 2nd, 3rd, and now 4th generation Virtual SAN released on March 16, 2016. The discussion in this section focuses heavily on data protection features like stretched clustering and vSphere Replication. They dove deep into how vSphere Replication can deliver application-consistent protection as well as a true 5 minute RPO, based on the built-in intelligent scheduler sending the data deltas within the 5 minute window, monitoring the SLAs, and alerting if they cannot be met due to network issues.

VMware Virtual SAN Space Efficiency

Deduplication, compression, and distributed RAID 5 & 6 erasure coding are all now available in all-flash Virtual SAN configurations. Christos provides the skinny on all these data reduction and space efficiency features and how enabling them adds very little overhead on the vSphere hosts. Rawlinson chimes in on the automated way Virtual SAN can build the cluster of disks and disk groups that deliver the capacity for the shared VSAN datastore. These can certainly be built manually, but VMware’s design goal is to make the storage system as automated as possible. The conversation then moves to checksums and how Virtual SAN protects the integrity of data on disk.

VMware Virtual SAN Performance

OK, this part was incredible! Christos laid down the gauntlet, so to speak. He presented the data behind the testing that shows minimal impact on the hosts when enabling the space efficiency features. He also presents performance data for OLTP workloads, VDI, Oracle RAC, etc. All cards on the table here. I can’t begin to summarize; you’ll just need to watch.

VMware Virtual SAN Operational Model

Rawlinson Rivera takes over and does what he does best, throwing all caution to the wind and delivering live demonstrations. He showed the Virtual SAN Health Check and the new Virtual SAN Performance Monitoring and Capacity Management views built into the vSphere Web Client. Towards the end, Howard Marks asked about supporting future Intel NVMe capabilities, and Christos’s response was that it’s safe to say VMware is working closely with Intel to ensure the VMware storage stack can utilize next-generation devices. Virtual SAN already supports the Intel P3700 and P3600 NVMe devices.

This was such a great session I thought I’d promote it and make it easy to check it out. By the way, here’s Rawlinson wearing a special hat!

Make Storage Great Again




What Makes EVO:RAIL Different

EVO:RAIL is the only Hyper-Converged solution that ships Pre-Built with VMware software and is ready to deploy VMs when it arrives. There, that’s it.

OK, maybe you want more detail than that.

This analogy has been used before, but it’s worth repeating for those who haven’t heard it. It comes from my days as a vSpecialist at EMC. If you want a cake, you have 3 primary options.


The first way to get a cake is to Build your own. You purchase the ingredients (flour, eggs, milk, etc.), measure the quantities you think you need, mix them together, and make a cake. The second time you make one it might be a bit better based on some lessons learned. Eventually, if you do it enough, you’ll probably get pretty good at it.

The second way to get a cake is to buy a Reference Architecture: a specific set of pre-measured ingredients that you buy, but you still have to make the cake yourself. You open the box, add eggs and water to the mix, and the end result is a cake. If you make another, it’ll probably be pretty similar to the last one.

The third option is you go to a bakery and buy a cake. It’s professionally made and ready to eat. And if you want another one just like it, your favorite bakery can reproduce it and get it to you pretty quickly.

Let’s now shift this analogy to data center infrastructure. The first way to get data center infrastructure is to build your own (i.e. bake a cake). Purchase your favorite servers, network switches, and storage system, connect them together, configure them, install VMware software, and eventually you’ll have a place to provision virtual machines. The next time you need to build out infrastructure you’ll likely be able to do it a bit faster, with fewer configuration errors, and have it run more optimally based on some lessons learned. Eventually, if you do it enough you’ll get pretty good at it.

The second way to get data center infrastructure is to purchase a prepackaged reference architecture solution, but you still have to make it (i.e. cake mix). You get the hardware, connect it to the network, install VMware software, and you have infrastructure. The performance is fairly predictable since the hardware was chosen to meet a certain workload profile.

The third option to get data center infrastructure is to purchase a pre-built solution (i.e. the bakery). And this is where EVO:RAIL is different. There are only 3 ways I know of to purchase infrastructure pre-built with VMware software that is ready to provision VMs on arrival. The first, which emerged several years ago, is VCE Vblock or VxBlock. The second, now available, is the Hyper-Converged EVO:RAIL from a Qualified EVO:RAIL Partner (Dell, EMC, Fujitsu, HP, Hitachi, inspur, NetApp, netone, and SuperMicro). Receive the system, power it on, and start provisioning VMs, since it’s already running the VMware software you need to do so. The third is EVO:RACK, currently available as a tech preview from a few Qualified EVO:RACK Partners. More information is available here: EVO: RACK Tech Preview at VMworld 2014

That’s it: no one else, without a specific agreement to do so, can ship hardware pre-built with VMware software, just VCE and the Qualified EVO:RAIL and EVO:RACK Partners. All other “converged infrastructure” solutions require you to obtain the hardware (either by picking and choosing components yourself, or by going with a reference architecture). None of them arrive with VMware software already installed; once the hardware arrives, the VMware software must be installed first. And in the case of all “converged” infrastructure solutions other than VMware Virtual SAN, you must install the storage software on top of vSphere. I wrote about this here: What Makes VSAN Different?

OK, let’s review with a diagram I put together based on EMC’s recent definition of Blocks, Racks, and Appliances. See the Virtual Geek blog here for more info: EMC World Day 1: BLOCKS, RACKS, APPLIANCES.

Block, Rack, Appliance

Notice that the concept of Build-your-own converged infrastructure, combining compute and storage on the same host, is not unique. There are approximately 15 companies with this kind of solution, including VMware. It’s a crowded space. VMware Virtual SAN is unique here in that it’s the only one built into the hypervisor.

Next, notice that the concept of Reference Architecture converged infrastructure is not unique either. There are approximately 5 companies with this kind of solution, including VMware. Again, VMware Virtual SAN is unique here in that it’s the only one built into the hypervisor.

Finally, notice that there is only 1 way to obtain Pre-Built converged infrastructure, and that’s EVO:RAIL, which uses the VMware Virtual SAN storage built into the hypervisor. All you need to do is rack it, cable it, power it on, and start consuming VMs. Kind of like buying a cake from the bakery, grabbing a fork, and eating it.

OK, one last analogy… today, if you need a virtual machine and even EVO:RAIL isn’t a quick enough way to get it, you can simply provision one on demand from a service provider like vCloud Air. Now, wouldn’t it be great if you could get a piece of cake on demand? How long until that becomes a reality?

Data cake