vSAN ReadyNode Sizer

If you plan on implementing HCI to support your workloads and are looking to size an environment, the vSAN ReadyNode Sizer tool is the place to go.

vsansizer.vmware.com

There are three ways to use it:

  • Don’t log in – use the tool in “Evaluation” mode
  • Log in with your My VMware account
  • Log in with your Partner Central account

In “Evaluation” mode you’ll be able to create some basic configurations for general purpose workloads but will have no ability to customize or download the sizing results.

If you log in using your My VMware account or Partner Central account, you’ll have a lot more functionality. First, you’ll be asked if you want to configure an All Flash cluster or Hybrid cluster.

vSAN Sizer 1

Previously, the only place to size a Hybrid cluster was the old vSAN Sizing tool. The ability to configure Hybrid clusters was recently added to the new tool, so there is now one place to size either option.

Next you’ll be asked if you want to size for a “Single Workload Cluster” or a “Multi-Workload Cluster”.

vSAN Sizer 2

The Single Workload Cluster provides options to configure for VDI, Relational Databases, or General Purpose workloads.

vSAN Sizer 3

The Multi-Workload Cluster choice is helpful if you plan to have different types of VMs and want to input the specifics of each workload. There are a ton of customization options, including block size, IO pattern, vCPU/core ratio, etc. And of course, either option allows you to choose the vSAN protection level and method for each workload. You can even size for stretched clusters.
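To get a rough feel for the capacity math behind a sizing exercise, here is a minimal back-of-the-envelope sketch in Python. It only models raw-capacity overhead for RAID-1 mirroring at a given FTT plus slack space; the 30% slack figure and the function name are assumptions for illustration, and the real tool accounts for far more (dedupe/compression, CPU, memory, cache ratios, and so on).

```python
def raw_capacity_needed_gb(vm_count, vmdk_gb_per_vm, ftt=1, slack=0.30):
    """Rough raw-capacity estimate for a RAID-1 mirrored vSAN cluster.

    Each VM's data is stored (ftt + 1) times, and some slack capacity is
    reserved for rebuilds and maintenance.  Ignores dedupe/compression,
    swap objects, and on-disk format overhead -- a sketch, not the sizer.
    """
    usable_gb = vm_count * vmdk_gb_per_vm        # what the VMs actually need
    mirrored_gb = usable_gb * (ftt + 1)          # RAID-1 copies
    return mirrored_gb / (1.0 - slack)           # leave room for slack space


# Example: 200 VMs with 100 GB each at FTT=1 needs roughly 57 TB of raw capacity
print(round(raw_capacity_needed_gb(200, 100)))
```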

Our great product team at VMware has put a ton of work into this tool, including some complex math, to come up with a simple and easy way to configure clusters for vSAN. Check out the tool and see for yourself, but also feel free to contact your favorite partner or VMware Systems Engineer for help. The vSAN SE team has done hundreds, if not thousands, of these configurations and can help make sure you’ll be good to go.

VMworld Hands-on-Labs – 9,640 Labs Were Delivered by vSAN

The Hands-on-Labs (HoL) at VMworld are always a big hit. A ton of work goes into putting them on and supporting them and everyone seems to love them. This was a big year for vSAN in the HoL. At VMworld Las Vegas, 11,444 labs were completed and the vSAN lab, HOL-1808-01-HCI – vSAN 6.6, was the #2 overall lab completed. Our NSX friends held the #1 spot.

The HoLs were delivered from 5 different data centers, each handling approximately 20% of the workloads. vSAN was the storage in 4 of the data centers. Two of the 4 were VMware data centers running vSphere, NSX, and vSAN for software-defined compute, network, and storage. Another was IBM Bluemix (SoftLayer) built with VMware Cloud Foundation (vSphere, NSX, vSAN, and SDDC Manager). And the other was VMware Cloud on AWS, also built with VMware Cloud Foundation (vSphere, NSX, vSAN, and SDDC Manager). The 5th data center was another VMware data center running traditional storage. This is a great Hybrid Cloud / Multi Cloud example, leveraging 3 of our own data centers and 2 of the largest public cloud data centers offering Infrastructure as a Service (IaaS).

 

VMware Cross Cloud Architecture

 

9,640 of the HoLs were deployed across the 4 vSAN data centers, which means 84% of the labs delivered at VMworld US were delivered by vSAN. To support the HoLs, over 90,000 VMs were provisioned in just 5 days. Actually, more than that, since extra HoLs are pre-provisioned and not all of them get used. This is a huge win for HCI and vSAN, as it performed like a champ under this heavy workload.

These stats are too impressive not to share, and they are a great testament to all the people who make it happen.

vSAN Maintenance Mode Considerations

There are 3 options when putting a host in maintenance mode when that host is a member of a vSphere Cluster with vSAN enabled.  You follow the normal process to put a host in maintenance mode, but if vSAN is enabled, these options will pop up:

  1. Ensure accessibility
  2. Full data migration
  3. No data migration

There’s a 4th consideration that I’ll describe at the end.

I would expect most virtualization administrators to pick “Ensure accessibility” almost every time.


Before we investigate, I want to reinforce that vSAN, by default, is designed to continue providing VMs access to their data even if a host disappears.  The default vSAN policy is “Number of Failures To Tolerate” equal to 1 (#FTT=1), which means an HDD, an SSD, or a whole host (and thus all the SSDs and HDDs on that host) can be unavailable and the data is still available somewhere else on another host in the cluster.  If a host is in maintenance mode, then it is down, but vSAN by default has another copy of the data on another host.

VMware documents the options here:

Place a Member of Virtual SAN Cluster in Maintenance Mode

Ensure accessibility

This option will check to make sure that putting the particular host in maintenance mode will not take away the only copy of any VM’s data.  There are two scenarios I can think of in which this would happen:

  • In Storage Policy Based Management, you created a Storage Policy based on vSAN with #FTT=0 and attached at least 1 VM to that policy and that VM has data on the host going into maintenance mode.
  • Somewhere in the cluster you have failed drives or hosts and vSAN self-healing rebuilds haven’t completed. You then put a host into maintenance mode and that host has the only good copy of data remaining.

As rare as these scenarios are, they are possible.  By choosing the “Ensure accessibility” option, vSAN will find the single copies of data on that host and regenerate them on other hosts. Then, when the host goes into maintenance mode, all VM data remains available.  This is not a full migration of all the data off that host; it’s just a migration of the data necessary to “ensure accessibility” for all the VMs in the cluster.  When the host goes into maintenance mode, it may take a little time to complete the migration, but you’ll know that VMs won’t be impacted.  During the maintenance of this host, some VMs will likely be running in a degraded state with one less copy than the policy specifies.  Personally, I think this choice makes the most sense most of the time, it is the default selection, and I expect vSphere administrators to choose this option almost every time.

No data migration

This option puts the host in maintenance mode no matter what’s going on in the cluster.  I would expect virtualization administrators to almost never pick this option unless:

  • You know the cluster is completely healthy (no disk or host failures anywhere else)
  • The VMs that would be impacted aren’t critical.
  • All the VMs in the cluster are powered off.

For the reasons explained in the “Ensure accessibility” section above, it’s possible that the host going into maintenance mode has the only good copy of the data.  If this is not a problem, then choose this option for the fastest way to put a host into maintenance mode.  Otherwise, choose “Ensure accessibility”.

Full data migration

I would expect virtualization administrators to choose this option less frequently than “Ensure accessibility”, but they will choose it for a few reasons:

  • The host is being replaced by a new one.
  • The host will be down for a long time, longer than the normal maintenance window of applying a patch and rebooting.
  • You want to maintain the #FTT availability for all VMs during the maintenance window.

Keep in mind that if you choose this option you must have 4 or more hosts in your cluster, and you must be willing to wait for the data migration to complete.  The time to complete the data migration depends on the amount of capacity consumed on the host going into maintenance mode.  Yes, this could take some time; the laws of physics apply.  10GbE helps move more data in the same amount of time, and it helps if the overall environment is not too busy.
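Here is a rough back-of-the-envelope sketch of that time estimate. The 5 Gbit/s of usable resync throughput is purely an assumption for illustration; real evacuation speed depends on how busy the hosts, disks, and network are.

```python
def evacuation_time_hours(consumed_tb, effective_gbit_per_s=5.0):
    """Estimate how long a full data evacuation might take.

    consumed_tb          -- capacity consumed on the host entering maintenance mode
    effective_gbit_per_s -- assumed usable resync throughput (a fraction of a 10GbE link)
    """
    bits_to_move = consumed_tb * 8 * 1000**4            # TB -> bits (decimal units)
    seconds = bits_to_move / (effective_gbit_per_s * 10**9)
    return seconds / 3600


# Example: evacuating 10 TB at an assumed 5 Gbit/s takes roughly 4.4 hours
print(f"{evacuation_time_hours(10):.1f} hours")
```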

When the migration is complete, the host is essentially evacuated out of the cluster and all of its data is spread across the remaining hosts.  VMs will not be running in a degraded state during the maintenance window and will be able to tolerate failures per their #FTT policy.
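For anyone scripting host maintenance, here is a minimal pyVmomi sketch of how these three choices map to the vSAN decommission mode in the vSphere API. It assumes you already have a connected service instance and a HostSystem object in hand, and it is only a sketch; check the vSphere API reference for your version before relying on the exact types and parameters.

```python
from pyVmomi import vim

# The three maintenance mode options map to these vSAN objectAction values.
VSAN_MODES = {
    "ensure_accessibility": "ensureObjectAccessibility",
    "full_data_migration": "evacuateAllData",
    "no_data_migration": "noAction",
}


def enter_maintenance(host, mode="ensure_accessibility", timeout_s=0):
    """Put an ESXi host (a vim.HostSystem) into maintenance mode.

    The vSAN data handling is selected via the maintenance spec's vsanMode.
    Returns a Task object; wait on it with your usual task-watching helper.
    """
    spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(objectAction=VSAN_MODES[mode])
    )
    return host.EnterMaintenanceMode_Task(
        timeout=timeout_s, evacuatePoweredOffVms=False, maintenanceSpec=spec
    )
```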

4th consideration

I mentioned there is a 4th consideration.  For the VMs that you want protected with at least two copies of data (i.e., #FTT=1 protection) even during maintenance windows, you have two options.  One is to choose “Full data migration” as described above.  The other is to set #FTT=2 for those VMs so they have 3 copies on 3 different hosts.  If one of those hosts is in maintenance mode and you didn’t choose “Full data migration”, then you still have 2 copies on other hosts, so the VMs could tolerate another failure of a disk or host.  You could create a storage policy based on vSAN with #FTT=2 and attach your most critical VMs to it.  For more information on running business critical applications on vSAN see:

Running Microsoft Business Critical Application on Virtual SAN 6.0

I hope this helps in your decision making while administering vSAN.  I recommend testing the scenarios prior to implementing a cluster in production so you get a feel for the various options.

Podcast Fun!

In my role I have to drive a lot around New England. To pass the time I listen to a number of podcasts. Some of my favorites include:

Job Related:

Fun stuff:

But by far my favorite and the most entertaining is:

Virtually Speaking

I guess it’s partly because it focuses on storage for VMware environments, but it’s also because Pete Flecha and John Nicholson are the right amount of funny, geek, and attitude all rolled into one.

A few weeks ago I had the chance to sit with John Nicholson and Duncan Epping to record some sound bites regarding customer experiences with vSAN in the field. I get to meet and work with a lot of remarkable customers up and down the eastern USA, and over the last 3 years I’ve seen them accomplish great things with vSAN. You name an application or use case and it’s pretty likely it’s being done with vSAN. I was able to share a few stories, as was Josh Fidel (@jcefidel), who’s doing great things with vSAN at customers in the Michigan, Ohio, Indiana, and Kentucky areas. He’s no SLOB and don’t let him fool you, he’s as smart as he is interesting. Check out what I mean by listening to this episode:

Virtually Speaking Podcast Episode 36: vSAN Use Cases

https://blogs.vmware.com/virtualblocks/2017/02/21/vspeaking-podcast-episode-36-vsan-use-cases/

vSAN and Data-At-Rest Encryption – Rebooted (i.e. Part 2)

 

Encryption is here, now shipping with vSphere 6.5.

I first wrote about vSAN and Encryption here:

Virtual SAN and Data-At-Rest Encryption – https://livevirtually.net/2015/10/21/virtual-san-and-data-at-rest-encryption/

At the time, I knew what was coming but couldn’t say. Also, the vSAN team had plans that changed. So, let’s set the record straight.

vSAN

  • Does not support Self Encrypting Drives (SEDs) with encryption enabled.
  • Does not support controller based encryption.
  • Supports 3rd party software based encryption solutions like HyTrust DataControl and Dell EMC Cloud Link.
  • Supports the VMware VM Encryption released with vSphere 6.5
  • Will support its own VMware vSAN Encryption in a future release.

At VMworld 2016 in Barcelona VMware announced vSphere 6.5 and with it, VM Encryption. In the past, VMware relied on 3rd party encryption solutions, but now, VMware has its own. For more details, check out:

What’s new in vSphere 6.5: Security – https://blogs.vmware.com/vsphere/2016/10/whats-new-in-vsphere-6-5-security.html

In this, Mike Foley briefly highlights a few advantages of VM Encryption. Stay tuned for more from him on this topic.

In addition to what Mike highlighted, VM Encryption is implemented using VAIO filters, can be enabled per VM object (vmdk), will encrypt VM data no matter what storage solution is implemented (e.g. object, file, or block, using vendors like VMware vSAN, Dell Technologies, NetApp, IBM, HDS, etc.), and satisfies both data-in-flight and data-at-rest encryption. The solution does not require SEDs, so it works with all the commodity HDD, SSD, PCIe, and NVMe devices, and it integrates with several third-party Key Management solutions. Since VM Encryption is set via policy, that policy could be extended to public clouds like Cloud Foundation on IBM SoftLayer, VMware Cloud on AWS, VMware vCloud Air, or any vCloud Air Network partner. This is great because your VMs could live in the cloud while you still own and control the encryption keys. And you can use different keys for different VMs.

At VMworld 2016 in Las Vegas VMware announced the upcoming vSAN Beta. For more details see:

Virtual SAN Beta – Register Today! – https://blogs.vmware.com/virtualblocks/2016/09/07/virtual-san-beta-register-today/

This vSAN Beta includes vSAN encryption targeted for a future release of vSphere. vSAN Encryption will satisfy data-at-rest encryption. You might ask why vSAN Encryption would be necessary if vSphere has VM Encryption. I will say that you should always look to use VM Encryption first. The one downside to VM Encryption is that, since a VM’s data is encrypted as soon as it leaves the VM and hits the ESXi kernel, each block is unique, so no matter what storage system that data goes to (e.g. VMware vSAN, Dell Technologies, NetApp, IBM, HDS, etc.) that block can’t be deduped or compressed. The benefit of vSAN Encryption will be that the encryption is done at the vSAN level. Data will be sent to the vSAN cache and encrypted at that tier. When it is later destaged, it will be decrypted, deduped, compressed, and then encrypted again as it is written to the capacity tier. This satisfies the data-at-rest encryption requirement but not data-in-flight. It does allow you to take advantage of the vSAN dedupe and compression data services, and it uses one key for the entire vSAN datastore.

It should be noted that both solutions will require a 3rd party Key Management Server (KMS) and the same one can be used for both VM Encryption and vSAN Encryption. The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. There are many that do and VMware has tested a lot of them. We’ll soon be publishing a list, but for now, check with your KMS vendor or your VMware SE for details.

VMware is all about customer choice. So, we offer a number of software based encryption options depending on your requirements.

It’s worth restating that VM Encryption should be the standard for software-based encryption of VMs. After reviewing vSAN Encryption, some may choose to go with it instead if they want to take advantage of deduplication and compression. Duncan Epping provides a little more detail here:

The difference between VM Encryption in vSphere 6.5 and vSAN encryption – http://www.yellow-bricks.com/2016/11/07/the-difference-between-vm-encryption-in-vsphere-6-5-and-vsan-encryption/

 

In summary:

  1. Use VM Encryption for Hybrid vSAN clusters
  2. Use VM Encryption on All-Flash if storage efficiency (dedupe/compression) is not critical
  3. Wait for vSAN native software data at rest encryption if you must have dedupe/compression on All-Flash
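Here is a tiny Python sketch that simply encodes those three rules as a decision helper; the parameter names and values are made up for illustration.

```python
def encryption_recommendation(cluster_type, needs_dedupe_compression):
    """Encode the three summary rules above (a sketch, not official guidance).

    cluster_type             -- "hybrid" or "all-flash"
    needs_dedupe_compression -- True if storage efficiency is a hard requirement
    """
    if cluster_type == "hybrid":
        return "Use VM Encryption"                          # rule 1
    if not needs_dedupe_compression:
        return "Use VM Encryption"                          # rule 2
    return "Wait for native vSAN data-at-rest encryption"   # rule 3


print(encryption_recommendation("all-flash", needs_dedupe_compression=True))
```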

 

Correlating vSAN versions with vSphere (vCenter & ESXi) Versions

I often get asked if a certain version of vSAN can be deployed on a different version of vSphere. The answer is no. vSAN is built into the vSphere release. That means vCenter needs to be upgraded to the matching version of vCenter Server, and all the hosts in the cluster need to be upgraded to the matching version of ESXi, in order to get the features of that version of vSAN. Lastly, vSAN formats each disk drive with an on-disk format, so to get the full features of a specific release, you may also need to update the on-disk format.

Here’s basically how everything breaks down:

  • If you have vSphere 5.5 (vCenter Server 5.5 & ESXi 5.5) then you have vSAN 5.5.
  • If you have vSphere 6.0 (vCenter Server 6.0 & ESXi 6.0) then you have vSAN 6.0.
  • If you have vSphere 6.0 U1 (vCenter Server 6.0 Update 1 & ESXi 6.0 Update 1) then you have vSAN 6.1.
  • If you have vSphere 6.0 U2 (vCenter Server 6.0 Update 2 & ESXi 6.0 Update 2) then you have vSAN 6.2.
  • If you have vSphere 6.5 (vCenter Server 6.5 & ESXi 6.5) then you have vSAN 6.5.
  • If you have vSphere 6.5.0d (vCenter Server 6.5.0d & ESXi 6.5.0d) then you have vSAN 6.6.
  • If you have vSphere 6.5 Update 1 (vCenter Server 6.5 Update 1 & ESXi 6.5 Update 1) then you have vSAN 6.6.1.
  • If you have vSphere 6.7 (vCenter Server 6.7 & ESXi 6.7) then you have vSAN 6.7.
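Since this mapping is fixed, it is easy to capture in a small lookup table. The sketch below just encodes the list above; it is not exhaustive, and the detailed matrix that follows covers individual patch builds.

```python
# vSphere release -> vSAN version shipped with it (from the list above)
VSAN_BY_VSPHERE = {
    "5.5":    "5.5",
    "6.0":    "6.0",
    "6.0 U1": "6.1",
    "6.0 U2": "6.2",
    "6.5":    "6.5",
    "6.5.0d": "6.6",
    "6.5 U1": "6.6.1",
    "6.7":    "6.7",
}


def vsan_version(vsphere_release):
    """Return the vSAN version bundled with a given vSphere release."""
    return VSAN_BY_VSPHERE.get(vsphere_release, "unknown release")


print(vsan_version("6.5 U1"))   # 6.6.1
```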

Here’s a more detailed matrix:

Version | Release Date | Build Number | Installer Build Number | vSAN Version | vSAN On-Disk Format (Web Client)

ESXi 6.5 U2 2018-05-03 8294253 N/A 6.6.1 U2 5
ESXi 6.7 GA 2018-04-17 8169922 N/A 6.7 GA 6
ESXi 6.5 Patch 02 2017-12-19 7388607 N/A 6.6.1 Patch 02 5
ESXi 6.5 Express Patch 4 2017-10-05 6765664 N/A 6.6.1 Express Patch 4 5
ESXi 6.5 Update 1 2017-07-27 5969303 N/A 6.6.1 5
ESXi 6.5.0d 2017-04-18 5310538 N/A 6.6 5
ESXi 6.5 Express Patch 1a 2017-03-28 5224529 N/A 6.5 Express Patch 1a 3
ESXi 6.5 Patch 01 2017-03-09 5146846 5146843 6.5 Patch 01 3
ESXi 6.5.0a 2017-02-02 4887370 N/A 6.5.0a 3
ESXi 6.5 GA 2016-11-15 4564106 N/A 6.5 3
ESXi 6.0 Patch 7 2018-07-26 9239799 N/A 6.2 Patch 7 3
ESXi 6.0 Patch 6 2017-11-09 6921384 N/A 6.2 Patch 6 3
ESXi 6.0 Express Patch 11 2017-10-05 6765062 N/A 6.2 Express Patch 11 3
ESXi 6.0 Patch 5 2017-06-06 5572656 N/A 6.2 Patch 5 3
ESXi 6.0 Express Patch 7c 2017-03-28 5251623 N/A 6.2 Express Patch 7c 3
ESXi 6.0 Express Patch 7a 2017-03-28 5224934 N/A 6.2 Express Patch 7a 3
ESXi 6.0 Update 3 2017-02-24 5050593 N/A 6.2 Update 3 3
ESXi 6.0 Patch 4 2016-11-22 4600944 N/A 6.2 Patch 4 3
ESXi 6.0 Express Patch 7 2016-10-17 4510822 N/A 6.2 Express Patch 7 3
ESXi 6.0 Patch 3 2016-08-04 4192238 N/A 6.2 Patch 3 3
ESXi 6.0 Express Patch 6 2016-05-12 3825889 N/A 6.2 Express Patch 6 3
ESXi 6.0 Update 2 2016-03-16 3620759 N/A 6.2 3
ESXi 6.0 Express Patch 5 2016-02-23 3568940 N/A 6.1 Express Patch 5 2
ESXi 6.0 Update 1b 2016-01-07 3380124 N/A 6.1 Update 1b 2
ESXi 6.0 Express Patch 4 2015-11-25 3247720 N/A 6.1 Express Patch 4 2
ESXi 6.0 U1a (Express Patch 3) 2015-10-06 3073146 N/A 6.1 U1a (Express Patch 3) 2
ESXi 6.0 U1 2015-09-10 3029758 N/A 6.1 2
ESXi 6.0.0b 2015-07-07 2809209 N/A 6.0.0b 2
ESXi 6.0 Express Patch 2 2015-05-14 2715440 N/A 6.0 Express Patch 2 2
ESXi 6.0 Express Patch 1 2015-04-09 2615704 2615979 6.0 Express Patch 1 2
ESXi 6.0 GA 2015-03-12 2494585 N/A 6.0 2
ESXi 5.5 Patch 10 2016-12-20 4722766 4761836 5.5 Patch 10 1
ESXi 5.5 Patch 9 2016-09-15 4345813 4362114 5.5 Patch 9 1
ESXi 5.5 Patch 8 2016-08-04 4179633 N/A 5.5 Patch 8 1
ESXi 5.5 Express Patch 10 2016-02-22 3568722 N/A 5.5 Express Patch 10 1
ESXi 5.5 Express Patch 9 2016-01-04 3343343 N/A 5.5 Express Patch 9 1
ESXi 5.5 Update 3b 2015-12-08 3248547 N/A 5.5 Update 3b 1
ESXi 5.5 Update 3a 2015-10-06 3116895 N/A 5.5 Update 3a 1
ESXi 5.5 Update 3 2015-09-16 3029944 N/A 5.5 Update 3 1
ESXi 5.5 Patch 5 re-release 2015-05-08 2718055 N/A 5.5 Patch 5 re-release 1
ESXi 5.5 Express Patch 7 2015-04-07 2638301 N/A 5.5 Express Patch 7 1
ESXi 5.5 Express Patch 6 2015-02-05 2456374 N/A 5.5 Express Patch 6 1
ESXi 5.5 Patch 4 2015-01-27 2403361 N/A 5.5 Patch 4 1
ESXi 5.5 Express Patch 5 2014-12-02 2302651 N/A 5.5 Express Patch 5 1
ESXi 5.5 Patch 3 2014-10-15 2143827 N/A 5.5 Patch 3 1
ESXi 5.5 Update 2 2014-09-09 2068190 N/A 5.5 Update 2 1
ESXi 5.5 Patch 2 2014-07-01 1892794 N/A 5.5 Patch 2 1
ESXi 5.5 Express Patch 4 2014-06-11 1881737 N/A 5.5 Express Patch 4 1
ESXi 5.5 Update 1a 2014-04-19 1746018 N/A 5.5 Update 1a 1
ESXi 5.5 Express Patch 3 2014-04-19 1746974 N/A 5.5 Express Patch 3 1
ESXi 5.5 Update 1 2014-03-11 1623387 N/A 5.5 Update 1 1
ESXi 5.5 Patch 1 2013-12-22 1474528 N/A 5.5 Patch 1 1
ESXi 5.5 GA 2013-09-22 1331820 N/A 5.5 1

As a reference, see:

Build numbers and versions of VMware vSAN (2150753) – This is a new KB post that went up on July 31, 2017 which provides the same information as above.

Build numbers and versions of VMware vCenter Server (2143838)

Build numbers and versions of VMware ESXi/ESX (2143832)

Understanding vSAN on-disk format versions (2145267)

VMware Storage Technology Names & Acronyms

  • vSAN = VMware’s Software Defined Storage solution, formerly known as Virtual SAN or VSAN. Now the only acceptable name is “vSAN” with the little “v”.
  • SPBM = Storage Policy Based Management
  • VASA = vSphere APIs for Storage Awareness
  • VVol = Virtual Volume
  • PE = Protocol Endpoint
  • VAAI = vSphere APIs for Array Integration
  • VAIO = vSphere APIs for IO Filtering
  • VR = vSphere Replication
  • SRM = Site Recovery Manager
  • VDP = vSphere Data Protection
  • vFRC = vSphere Flash Read Cache
  • VSA = vSphere Storage Appliance (end of life)
  • VMFS = Virtual Machine File System
  • SvMotion = Storage vMotion
  • XvMotion = Across Host, Cluster, vCenter vMotion (without shared storage)
  • SDRS = Storage Distributed Resource Scheduler
  • SIOC = Storage Input Output Control
  • MPIO = Multi Path Input Output

Replays of Virtual SAN Sessions at VMworld 2016 That You Didn’t Want to Miss

What a great week last week at VMworld 2016. I had many good meetings with customers, participated in 3 breakout sessions, met up with some old friends and met some new ones. If you missed VMworld, well, then you missed a bunch of great sessions. There’s no way you could have seen them all, so, VMware has made them available here: http://www.vmworld.com/en/sessions/2016.html.

I participated in two sessions:

The first one was a customer panel discussion on Tuesday afternoon. I need to thank Glenn Brown from Stanley Black & Decker, Mike Caruso from Synergent, Tom Cronin from M&T Bank, and Andrew Schilling from Baystate Health who all did a fantastic job representing themselves, their companies, and their use of Virtual SAN. We had great interaction from the audience with lots of good questions. For a replay of the session check this out:

Four Unique Enterprise Customers Deployment of VMware Virtual SAN [STO7560]
Glen Brown
, System Engineer, Stanley Black and Decker
Michael Caruso, AVP Corporate Information Systems, Synergent
Tom Cronin, Sr. Staff Specialist – Platforms Engineering Group, M&T Bank
Frank Gesino, Senior Technical Account Manager, VMware
Andrew Schilling, Team Leader – IT Infrastructure, Baystate Health Inc.
Tuesday, Aug 30, 5:00 p.m. – 6:00 p.m.

The other session I was involved in was on Wednesday and repeated on Thursday. I had the good fortune to present with two VSAN Product Managers who are responsible for making VSAN great. Vahid Fereydounkolahi is responsible for driving new features into the VSAN product and Rakesh Radhakrishnan is responsible for making sure all the vendor hardware components are properly qualified for VSAN and for looking out into the future of new technologies like NVMe and RDMA to adopt into VSAN. For a replay of the two sessions check these out:

Virtual SAN Technical Deep Dive and What’s New [STO8246R]
Peter Keilty, Office of the CTO, Americas Field – Storage and Availability, VMware, Inc.
Rakesh Radhakrishnan, Product Management & Strategy Leader, VMware
Wednesday, Aug 31, 2:00 p.m. – 3:00 p.m.
Vahid Fereydounkolahi kicked this one off discussing VSAN features, capabilities, and how it works, I took over in the middle to discuss Day 2 operations, and Rakesh Radhakrishnan finished it off discussing the Ready Node program as well as current and future flash and IO technology that VSAN incorporates or will incorporate.

Virtual SAN Technical Deep Dive and What’s New [STO8246R]
Thursday, Sep 01, 10:30 a.m. – 11:30 a.m.
Vahid wasn’t able to make this time, so I kicked things off talking about VSAN features, capabilities, how it works, and Day 2 operations, and Rakesh Radhakrishnan finished it off discussing the Ready Node program as well as current and future flash and IO technology that VSAN incorporates or will incorporate.

In my previous blog post I highlighted the sessions you wouldn’t want to miss. So here, I’ll provide the links to those sessions. A few either haven’t been uploaded yet or won’t be because of legal or forward-looking reasons:

Christos Karamanolis has literally been the brains behind VSAN since its inception and is our chief visionary for Storage. If you want the whole picture wrapped up in a 1-hour session, this is it.
An Industry Roadmap: From storage to data management [STO7903]
Christos Karamanolis, VMware Fellow – CTO of Storage and Availability, VMware
Wednesday, Aug 31, 4:00 p.m. – 5:00 p.m.


Virtual SAN Sessions You Won’t Want to Miss at VMworld 2016

Shameless self-promotion here. I’m very excited to be presenting 2 sessions at the upcoming VMworld 2016 in Las Vegas. So, of course, I think you shouldn’t miss them. The first is a customer panel session that I’ll be hosting. I’ve worked with each of these customers, who have had VSAN running production workloads for well over a year. Not everything was always perfect, but they continue to expand their usage of VSAN in their data centers. Two of the customers have now standardized on VSAN for any new workloads. These customers will provide an overview of their deployments, answer some of my questions, then take questions from the audience.

Four Unique Enterprise Customers Deployment of VMware Virtual SAN [STO7560]
Glen Brown, System Engineer, Stanley Black and Decker
Michael Caruso, AVP Corporate Information Systems, Synergent
Tom Cronin, Sr. Staff Specialist – Platforms Engineering Group, M&T Bank
Frank Gesino, Senior Technical Account Manager, VMware
Andrew Schilling, Team Leader – IT Infrastructure, Baystate Health Inc.
Tuesday, Aug 30, 5:00 p.m. – 6:00 p.m.

This VSAN Deep Dive session will cover features of the latest VSAN release, how they work, and some best practices for deploying VSAN. I’ll be presenting along with our lead VSAN Product Managers. This session will be held on two different days.

Virtual SAN Technical Deep Dive and What’s New [STO8246R]
Peter Keilty, Office of the CTO, Americas Field – Storage and Availability, VMware, Inc.
Rakesh Radhakrishnan, Product Management & Strategy Leader, VMware
Wednesday, Aug 31, 2:00 p.m. – 3:00 p.m.
Thursday, Sep 01, 10:30 a.m. – 11:30 a.m.

Other VSAN Sessions You Won’t Want to Miss

There are so many great VSAN sessions it’s hard to pick just a few. So, here are the ones I am most familiar with that I’m confident will be great. But that doesn’t mean that some of the others won’t be.

Christos Karamanolis has literally been the brains behind VSAN since its inception and is our chief visionary for Storage. If you want the whole picture wrapped up in a 1-hour session, this is it.

An Industry Roadmap: From storage to data management [STO7903]
Christos Karamanolis, VMware Fellow – CTO of Storage and Availability, VMware
Wednesday, Aug 31, 4:00 p.m. – 5:00 p.m.


2-Node Virtual SAN Software Defined Self Healing

I continue to think one of the hidden gem features of VMware Virtual SAN (VSAN) is its software defined self healing ability.  I recently received a request for a description of 2-Node self healing. I wrote about our self healing capabilities for 3-Node, 4-Node and larger clusters here, and I wrote about Virtual SAN 6 Rack Awareness Software Defined Self Healing with Failure Domains here. I suggest you check out both before reading the rest of this. I also suggest you check out these two posts on 2-Node VSAN for a description of how it works here and how it is licensed here.

For VSAN, protection levels can be defined through VMware’s Storage Policy Based Management (SPBM), which is built into vSphere and managed through vCenter.  VM objects can be assigned to different policies, which dictate the protection level they receive on VSAN. With a 2-Node Virtual SAN there is only one option for protection, which is the default # Failures To Tolerate (#FTT) equal to 1 using RAID-1 mirroring. In other words, each VM will write to both hosts; if one fails, the data exists on the other host and is accessible as long as the VSAN Witness VM is available.

Now that we support 2-Node VSAN, the smallest VSAN configuration possible is 2 physical nodes, each with 1 caching device (SSD, PCIe, or NVMe) and 1 capacity device (HDD, SSD, PCIe, or NVMe), plus one virtual node (the VSAN Witness VM) to hold all the witness components. Let’s focus on a single VM with the default # Failures To Tolerate (#FTT) equal to 1.  A VM has at least 3 objects (namespace, swap, vmdk).  Each object has at least 3 components (data mirror 1, data mirror 2, witness) to satisfy #FTT=1.  Let’s just focus on the vmdk object and say that the VM sits on host 1, with mirror components of its vmdk data on hosts 1 and 2 and the witness component on the virtual Witness VM (host 3).

01 - 2-Node VSAN min

OK, let’s start causing some trouble.  With the default # Failures To Tolerate equal to 1, VM data on VSAN should be available if a single caching device, a single capacity device, or an entire host fails.  If a single capacity device fails, let’s say the one on esxi-02, no problem: another copy of the vmdk is available on esxi-01 and the witness is available on the Witness VM, so all is good.  There is no outage and no downtime; VSAN has tolerated 1 failure causing the loss of one mirror, and VSAN is doing its job per the defined policy and providing access to the remaining mirror copy of data.  Each object has more than 50% of its components available (one mirror and the witness are 2 out of 3, i.e. 66% of the components), so data will continue to be available unless there is a 2nd failure of the caching device, capacity device, or host esxi-01.  The situation is the same if the caching device on esxi-02 fails or the whole host esxi-02 fails: VM data on VSAN would still be available and accessible. If the VM happened to be running on esxi-02, HA would fail it over to esxi-01 and data would be available. In this configuration, there is no automatic self healing because there’s nowhere to self heal to. Host esxi-02 would need to be repaired or replaced in order for self healing to kick in and get back to compliance with both mirrors and the witness component available.
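Here is a minimal Python sketch of that quorum rule: an object stays accessible only while more than 50% of its components are available. Counting components 1:1 (and the component labels themselves) is a simplification for illustration; real vSAN uses per-component votes.

```python
def object_accessible(components):
    """An object needs a strict majority (>50%) of its components available.

    components -- dict mapping component name to True (available) / False (failed).
    """
    available = sum(components.values())
    return available > len(components) / 2


# The 2-node FTT=1 layout described above: two mirrors plus a witness.
vmdk = {"mirror_on_esxi01": True, "mirror_on_esxi02": True, "witness": True}

vmdk["mirror_on_esxi02"] = False       # capacity device on esxi-02 fails
print(object_accessible(vmdk))         # True  -> 2 of 3 components remain

vmdk["mirror_on_esxi01"] = False       # a 2nd failure on esxi-01
print(object_accessible(vmdk))         # False -> only the witness remains
```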

02 - 2-Node VSAN min

Self healing upon repair

How can we get back to the point where we are able to tolerate another failure?  We must repair or replace the failed caching device, capacity device, or failed host.  Once repaired or replaced, data will resync, and the VSAN Datastore will be back to compliance where it could then tolerate one failure.  With this minimum VSAN configuration, self healing happens only when the failed component is repaired or replaced.

03 - 2-Node VSAN min

2-Node VSAN Self Healing Within Hosts and Across Cluster

To get self healing within hosts and across the hosts in the cluster, you must configure your hosts with more disks. Let’s investigate what happens when there are 2 SSDs and 4 HDDs per host in the 2-node cluster and the policy is set to # Failures To Tolerate equal to 1 using the RAID-1 (mirroring) protection method.

01~ - 2-Node VSAN.png

If one of the capacity devices on esxi-02 fails, then VSAN could choose to self heal to:

  1. Other disks in the same disk group
  2. Other disks on other disk groups on the same host

The green disks in the diagram below are eligible targets for the new instant mirror copy of the vmdk:

02~ - 2-Node VSAN

This is not an all-encompassing and thorough explanation of all the possible scenarios.  There are dependencies on how large the vmdk is, how much spare capacity is available on the disks, and other factors.  But this should give you a good idea of how failures are tolerated and how self healing can kick in to get back to policy compliance.

Self Healing When SSD Fails

If there is a failure of the caching device on esxi-02 that supports the capacity devices containing the mirror copy of the vmdk, then VSAN could choose to self heal to:

  1. Other disks on other disk groups on the same host
  2. Other disks on other disk groups on other hosts.

The green disks in the diagram below are eligible targets for the new instant mirror of the vmdk:

03~ - 2-Node VSAN.png

Self Healing When a Host Fails

If there is a failure of a host (e.g. esxi-02) that holds a mirror of the vmdk, then VSAN cannot self heal until the host is repaired or replaced.
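To tie the three failure cases together, here is a small Python sketch that just encodes the rebuild-target options listed above. The disk-group model is made up for illustration and ignores the capacity and sizing dependencies mentioned earlier.

```python
def rebuild_targets(failure, failed_host, failed_disk_group, hosts):
    """Return (host, disk group) pairs where a new mirror copy could be placed.

    failure -- "capacity_device", "cache_device", or "host"
    hosts   -- dict mapping host name to the list of disk group names on it
    """
    if failure == "capacity_device":
        # Other disks in the same disk group, or other disk groups on the same host.
        return [(failed_host, dg) for dg in hosts[failed_host]]
    if failure == "cache_device":
        # The whole disk group is lost: other disk groups on the same host,
        # or disk groups on other hosts.
        return [(h, dg) for h, dgs in hosts.items() for dg in dgs
                if not (h == failed_host and dg == failed_disk_group)]
    # Host failure: nothing to rebuild to until the host is repaired or replaced.
    return []


hosts = {"esxi-01": ["dg1", "dg2"], "esxi-02": ["dg1", "dg2"]}
print(rebuild_targets("cache_device", "esxi-02", "dg1", hosts))
```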

04~ - 2-Node VSAN

Summary

VMware Virtual SAN leverages all the disks on all the hosts in the VSAN datastore to self heal.  Note that I’ve only discussed the self healing behavior of one VM above; other VMs on other hosts may have data on the same failed disk(s), but their mirrors may be on different disks in the cluster, and VSAN might choose to self heal to yet other disks in the cluster.  The self healing workload is thus a many-to-many operation, spread across all the disks in the VSAN datastore.

Self healing is enabled by default, its behavior depends on the software defined protection policy (the #FTT setting), and it can occur to disks in the same disk group, to other disk groups on the same host, or to other disks on other hosts. The availability and self healing properties make VSAN a robust storage solution for all data center applications.