About a year ago, an astute colleague at VMware, Kevin Lees, reached out about writing a book on Operationalizing VMware vSAN. He had written a book on Operationalizing VMware NSX and thought one on vSAN would be a good idea. His extensive background in consulting and expertise in operationalizing infrastructure make him a perfect fit for this series of books. I of course said it was a great idea and we talked about the topics to cover. I kept in touch with the project for a few months and reviewed an early draft. Many others jumped in after that and helped create the book that was just recently released. It's a great read, so check it out here:
I'm looking forward to VMworld 2018 in a few weeks. It's always a long week but a great time. I look forward to catching up with coworkers, partners, customers, and friends. And I'll also have to do a little work: this year I have two breakout speaking sessions.
vSAN Technical Customer Panel on vSAN Experiences [HCI1615PU]
Monday, Aug 27, 12:30 p.m. – 1:30 p.m.
The panel will consist of 4 vSAN customers: General Motors, United States Senate Federal Credit Union, Rent-A-Center, and Oakland University. Brinks is a great vSAN customer but is doing an NSX session at the same time as the vSAN session, so we are lucky to add Oakland University to the panel. I will moderate the session and ask the customers to describe their company, role, environment, and how they are using vSAN. General Motors will talk about their large VDI deployment. United States Senate Federal Credit Union will discuss their use of vSAN in remote offices, VVols, and Storage Policy Based Management (SPBM). Rent-A-Center will discuss vSAN for management clusters, VDI, and the benefits of VxRail. Oakland University will discuss their vSAN stretched cluster, Data at Rest Encryption, and deduplication/compression. After each panelist does this, we'll take questions from the audience.
Here’s a recording of last year’s session to give you an idea: https://youtu.be/x4ioatHqQOI
On the panel we had Sanofi, Travelers, Sekisui Pharmaceutical, and Herbalife. The year before we had Stanley Black and Decker, Synergent Bank, M&T Bank, and Baystate Health. Both were great sessions and this year looks like it will be too.
Achieving a GDPR-Ready Architecture Leveraging VMware vSAN [HCI3452BU]
Wednesday, Aug 29, 12:30 p.m. – 1:30 p.m.
When it comes to security in vSAN, most people think of Data at Rest Encryption, and to make that work you need a key management server. It's tough to beat HyTrust for this: they offer the software for free and support for a small fee. But that's not all they do. Check out this session to find out more. Dave Siles and I will discuss a GDPR-ready architecture and how vSAN encryption can help.
Recently, one of my colleagues was working with a customer that was intermittently getting an error on the vSAN health check in vSAN 6.6.x: "A few hosts were failing ping test – large packet ping test: vsan: mtu check (ping with large packet size)". The customer reported that the same cluster would sometimes pass all tests in vSAN Health and other times report the error above.
The customer enabled the vSphere distributed switch (VDS) health check and ran it on the vSphere distributed switch that was supporting the cluster. The VDS health check immediately reported …
- Mismatched VLAN trunks between a vSphere distributed switch and physical switch.
- Mismatched MTU settings between physical network adapters, distributed switches, and physical switch ports.
The VDS health check also reported which uplinks across the hosts had these specific misconfiguration issues, so the customer had something concrete to take to his networking team to resolve the problem.
I thought this was a good example of using these two tools together to identify a networking problem and provide evidence to help facilitate the resolution.
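As a reference, you can reproduce the health check's large-packet test by hand from an ESXi shell. The vmknic name (vmk2) and the target IP are placeholders for your environment; 8972 bytes is the largest ICMP payload that fits in a 9000-byte jumbo frame once the 28 bytes of IP/ICMP headers are added.

```shell
# List the VMkernel interfaces tagged for vSAN traffic
esxcli vsan network list

# Show the configured MTU on each vmknic
esxcli network ip interface list

# Manually reproduce the health check's large-packet ping:
#   -I vmk2       vSAN vmknic (adjust for your host)
#   -d            set the don't-fragment bit
#   -s 8972       9000-byte MTU minus 28 bytes of headers
#   192.168.50.12 vSAN IP of another host in the cluster (placeholder)
vmkping -I vmk2 -d -s 8972 192.168.50.12
```

If a standard-size ping succeeds but the 8972-byte ping fails, some device in the path (vmknic, distributed switch, or physical switch port) is not configured for jumbo frames, which is exactly the mismatched-MTU condition the VDS health check flagged.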
You've built your vSphere cluster with vSAN enabled, now what? Of course, you can start provisioning VMs in the cluster with their VMDKs on the vSAN datastore. But what if you want to move existing VMs onto your new cluster? There are several methods to consider, each with its own benefits and drawbacks. This topic has been explored a few times, and here are some useful links:
Migrating VMs to vSAN
Migrating to vSAN
I had the opportunity to record an overview of this topic using our Lightboard technology at VMware headquarters in Palo Alto. You can check it out here:
The video lightboard explores the following methods:
Backup and Restore
Simply put, you can back up the VMs sitting in one cluster, shut them down, then restore them onto the new cluster.
Cross Cluster vMotion (AKA XvMotion), Cross vCenter vMotion, Long Distance vMotion (LDM)
You can migrate live VMs from one cluster to another (Cross Cluster vMotion), and those clusters can even be managed by different vCenter Servers (Cross vCenter vMotion). This can be great for a few VMs, but migrating a lot of VMs and a lot of data can take a while. There's no downtime for the VMs, but you could be waiting a long time for the migration to complete. For more details, see one of my previous posts:
Storage vMotion
This is only possible if your source and destination hosts are connected to the same storage system LUN/volume. If so, both clusters can mount the same LUN/volume, and you can move the VM from the source cluster to the destination cluster while also moving its data from the source datastore (a LUN/volume on a SAN/NAS) to the destination datastore (vSAN). If you are moving off a traditional Fibre Channel SAN, you'll need to put Fibre Channel HBAs in the hosts supporting the new vSAN datastore.
VMware vSphere Replication
VMware's vSphere Replication replicates any VM on one cluster to any other cluster. This host-based replication feature is storage agnostic, so it doesn't matter what the underlying storage is on either cluster. A vSphere snapshot of the VM is taken, and that snapshot is used as the source of the replication. Once the data is in sync between the source and destination clusters, you can shut down the VMs in the source cluster and power them up in the destination cluster, so there is some downtime. If something doesn't go right, you can revert back to the source cluster. Here's a good whitepaper on vSphere Replication.
VMware vSphere Replication + Site Recovery Manager
VMware's vSphere Replication replicates any VM on one cluster to any other cluster, and VMware Site Recovery Manager allows you to test and validate the failover from source to destination. It lets you script the order in which VMs are powered on, re-IP them if necessary, and automate running pre- and post-scripts. Once you validate that the failover will happen as you want, you can do it for real knowing it's been pre-tested. If something goes wrong, its "revert" feature reverses the cutover and goes back to the source cluster until you can fix the problem. Here are a few good whitepapers on Site Recovery Manager.
3rd Party Replication
Dell EMC RecoverPoint for Virtual Machines (RP4VMs) replicates data prior to cutover. Once the data is in sync between the source and destination clusters, you can shut down the VMs in the source cluster and power them up in the destination cluster, so there is some downtime. If something doesn't go right, you can revert back to the source cluster. There are other 3rd-party options on the market, including solutions from Zerto and Veeam.
What About VMware Cloud on AWS?
Since vSAN is the underlying storage in VMware Cloud on AWS, all the options above will work for migrating workloads from on-premises to VMware Cloud on AWS.
Personally, I like the ability to test the failover migration "cutover" using Site Recovery Manager, so I'd opt for the vSphere Replication + Site Recovery Manager option if possible. If it's only a few VMs and a small amount of data, then XvMotion would be the way to go.
The Hands-on-Labs (HoL) at VMworld are always a big hit. A ton of work goes into putting them on and supporting them and everyone seems to love them. This was a big year for vSAN in the HoL. At VMworld Las Vegas, 11,444 labs were completed and the vSAN lab, HOL-1808-01-HCI – vSAN 6.6, was the #2 overall lab completed. Our NSX friends held the #1 spot.
The HoLs were delivered from 5 different data centers, each handling approximately 20% of the workload. vSAN was the storage in 4 of the data centers: 2 of the 4 were VMware data centers running vSphere, NSX, and vSAN for software-defined compute, network, and storage; another was IBM Bluemix (SoftLayer) built with VMware Cloud Foundation (vSphere, NSX, vSAN, and SDDC Manager); and the other was VMware Cloud on AWS, also built with VMware Cloud Foundation. The 5th data center was another VMware data center running traditional storage. This is a great hybrid cloud / multi-cloud example, leveraging 3 of our own data centers and 2 of the largest public cloud data centers offering Infrastructure as a Service (IaaS).
9,640 of the HoLs were deployed across the 4 vSAN data centers; that means 84% of the labs delivered at VMworld US were delivered on vSAN. To support the HoLs, over 90,000 VMs were provisioned in just 5 days. Actually, more than that, since extra HoLs are pre-provisioned and don't all get used. This is a huge win for HCI and vSAN, which performed like a champ under this heavy workload.
These stats are too impressive not to share and they are a great testament to all the people that make it happen.
I started at VMware on the vSAN team 4 years ago, when we had 0 customers. It's been a pretty wild and fun ride to get to 10,000, but we've only just begun. Customers are seeing the benefits of HCI and vSAN for all sorts of use cases, including mission-critical applications, management clusters, VDI, ROBO, DMZ, test/dev, DR sites, and IaaS at IBM Bluemix (formerly SoftLayer) and soon at Amazon with VMware Cloud on AWS.
Unfortunately, we cannot fit all 10,000 customers in one breakout session at VMworld, but we can fit 4. I’m hosting a breakout session titled:
vSAN Technical Customer Panel [STO2615PU]
(Now that the session has happened, here is the video recording:)
I hosted a similar session last year with Stanley Black and Decker, Synergent Bank, M&T Bank, and Baystate Health and it was a lot of fun with some great audience participation. For more information check here.
This year we are fortunate to have Sanofi, Sekisui, Travelers, and Herbalife join the panel. The format is this:
- Introduce the Panel
- Panelists introduce their company, their VMware environment, and their use of vSAN
- Q&A – I will have some questions for the panel but we expect the audience questions to generate some great discussion.
Let’s meet the Panelists:
Director, Virtualization Engineering Services
In 2016 this large pharma needed to refresh their Remote Office/Branch Office (ROBO) sites. After a successful proof of concept, 2-Node vSAN on HPE ProLiant servers was chosen. Since then, vSAN has been deployed for management clusters and VDI in the USA and EMEA, as well as in 2 of their 13 regional data centers. Next, Cloud Foundation is being considered to replace their legacy blade servers and storage arrays.
Director, Global IT
In early 2014 this mid-size pharma needed to build a DR site and chose a 4 Node vSphere cluster with vSAN enabled. They used vSphere Replication and SRM to test and automate DR. They also moved their test and development environment to this cluster. This year they are replacing their production data center with HCI and vSAN.
Senior Systems Engineer
vSAN was chosen to support production and test/dev Hadoop workloads. Two other vSAN clusters are used for POCs of new application workloads. In addition, 2 Cloud Foundation configurations, each with a management cluster and a VM workload cluster, are being implemented to prove how the built-in automation simplifies operations.
Worldwide Manager of Linux & VMware
Herbalife International of America
In early 2014 this nutrition company wanted to modernize their data center by automating IT to simplify application access and management and to transform Windows delivery. Today they run vSphere and vSAN, and are evaluating NSX, in multiple call centers to support 4,000 Horizon VDI desktops across 5 ROBO sites and their primary data centers for mission-critical applications. They've achieved great cost savings, significantly reducing TCO while delivering exceptional performance to their users.
I'm looking forward to seeing many great friends and meeting new ones at VMworld. I hope you can come, participate, and enjoy this session with these great guests.
In my role I have to drive a lot around New England. To pass the time I listen to a number of podcasts. Some of my favorites include:
- VMware Communities Roundtable
But by far my favorite and the most entertaining is:
I guess it's partly because it focuses on storage for VMware environments, but it's also because Pete Flecha and John Nicholson are the right amount of funny, geek, and attitude all rolled into one.
A few weeks ago I had the chance to sit down with John Nicholson and Duncan Epping to record some sound bites regarding customer experiences with vSAN in the field. I get to meet and work with a lot of remarkable customers up and down the eastern USA, and over the last 3 years I've seen them accomplish great things with vSAN. Name an application or use case and it's pretty likely it's being done with vSAN. I was able to share a few stories, as was Josh Fidel (@), who's doing great things with vSAN at customers in the Michigan, Ohio, Indiana, and Kentucky areas. He's no SLOB, and don't let him fool you: he's as smart as he is interesting. Check out what I mean by listening to this episode:
Virtually Speaking Podcast Episode 36: vSAN Use Cases
Encryption is here, now shipping with vSphere 6.5.
I first wrote about vSAN and Encryption here:
Virtual SAN and Data-At-Rest Encryption – https://livevirtually.net/2015/10/21/virtual-san-and-data-at-rest-encryption/
At the time, I knew what was coming but couldn't say. Also, the vSAN team had plans that changed. So, let's set the record straight. vSAN:
- Does not support Self Encrypting Drives (SEDs) with encryption enabled.
- Does not support controller based encryption.
- Supports 3rd party software based encryption solutions like HyTrust DataControl and Dell EMC Cloud Link.
- Supports the VMware VM Encryption released with vSphere 6.5.
- Will support its own VMware vSAN Encryption in a future release.
At VMworld 2016 in Barcelona VMware announced vSphere 6.5 and with it, VM Encryption. In the past, VMware relied on 3rd party encryption solutions, but now, VMware has its own. For more details, check out:
What’s new in vSphere 6.5: Security – https://blogs.vmware.com/vsphere/2016/10/whats-new-in-vsphere-6-5-security.html
In it, Mike Foley briefly highlights a few advantages of VM Encryption. Stay tuned for more from him on this topic.
In addition to what Mike highlighted, VM Encryption is implemented using VAIO filters, can be enabled per VM object (VMDK), encrypts VM data no matter what storage solution is underneath (object, file, or block, from vendors like VMware vSAN, Dell Technologies, NetApp, IBM, HDS, etc.), and satisfies both data-in-flight and data-at-rest encryption requirements. The solution does not require SEDs, so it works with all the commodity HDD, SSD, PCIe, and NVMe devices, and it integrates with several third-party key management solutions. Since VM Encryption is set via policy, that policy can be extended to public clouds like Cloud Foundation on IBM SoftLayer, VMware Cloud on AWS, VMware vCloud Air, or any vCloud Air Network partner. This is great because your VMs can live in the cloud while you own and control the encryption keys. And you can use different keys for different VMs.
At VMworld 2016 in Las Vegas VMware announced the upcoming vSAN Beta. For more details see:
Virtual SAN Beta – Register Today! – https://blogs.vmware.com/virtualblocks/2016/09/07/virtual-san-beta-register-today/
This vSAN Beta includes vSAN encryption targeted for a future release of vSphere. vSAN Encryption will satisfy data-at-rest encryption requirements. You might ask why vSAN Encryption is necessary if vSphere has VM Encryption. I will say that you should always look to use VM Encryption first. The one downside to VM Encryption is that since a VM's data is encrypted as soon as it leaves the VM and hits the ESXi kernel, each block is unique, so no matter what storage system the data goes to (e.g., VMware vSAN, Dell Technologies, NetApp, IBM, HDS, etc.), the blocks can't be deduped or compressed. The benefit of vSAN Encryption is that the encryption is done at the vSAN level. Data is sent to the vSAN cache tier and encrypted there. When it is later destaged, it is decrypted, deduped, compressed, and re-encrypted as it's written to the capacity tier. This satisfies data-at-rest encryption requirements but not data-in-flight. It allows you to take advantage of the vSAN dedupe and compression data services, and it uses one key for the entire vSAN datastore.
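To illustrate why the two approaches interact so differently with deduplication, here's a toy Python model (not VMware code; a hash stands in for a real cipher). Ten VMs write the same block: with per-VM encryption every ciphertext block on the datastore is unique, while encrypting after dedupe with a single datastore key leaves just one block.

```python
import hashlib

def fake_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real cipher: deterministic per (key, block),
    # but different keys yield different ciphertext for the same block.
    return hashlib.sha256(key + block).digest()

# Ten VMs each store the same 4 KB pattern (e.g. zeroed guest blocks).
blocks = [b"\x00" * 4096] * 10
vm_keys = [bytes([i]) * 16 for i in range(10)]

# VM Encryption: each VM encrypts with its own key before the data
# reaches the datastore, so every ciphertext block is unique.
vm_encrypted = {fake_encrypt(k, b) for k, b in zip(vm_keys, blocks)}

# vSAN Encryption: dedupe runs on plaintext during destage, then the
# surviving unique blocks are encrypted with the single datastore key.
datastore_key = b"\xaa" * 16
deduped = set(blocks)  # dedupe sees identical plaintext
vsan_encrypted = {fake_encrypt(datastore_key, b) for b in deduped}

print(len(vm_encrypted))    # 10 -> nothing left to dedupe
print(len(vsan_encrypted))  # 1  -> full dedupe preserved
```

The model glosses over IVs, compression, and the cache/capacity tiers, but it captures the core trade-off: encrypting before the storage layer destroys the duplicate patterns that dedupe depends on.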
It should be noted that both solutions will require a 3rd party Key Management Server (KMS) and the same one can be used for both VM Encryption and vSAN Encryption. The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. There are many that do and VMware has tested a lot of them. We’ll soon be publishing a list, but for now, check with your KMS vendor or your VMware SE for details.
VMware is all about customer choice. So, we offer a number of software based encryption options depending on your requirements.
It's worth restating that VM Encryption should be the standard for software-based encryption of VMs. After reviewing vSAN Encryption, some may choose it instead if they want to take advantage of deduplication and compression. Duncan Epping provides a little more detail here:
The difference between VM Encryption in vSphere 6.5 and vSAN encryption – http://www.yellow-bricks.com/2016/11/07/the-difference-between-vm-encryption-in-vsphere-6-5-and-vsan-encryption/
- Use VM Encryption for Hybrid vSAN clusters
- Use VM Encryption on All-Flash if storage efficiency (dedupe/compression) is not critical
- Wait for vSAN native software data at rest encryption if you must have dedupe/compression on All-Flash
I often get asked whether a certain version of vSAN can be deployed on a different version of vSphere. The answer is no: vSAN is built into vSphere. That means vCenter Server needs to be upgraded to the correct version, and all the hosts in the cluster need to be upgraded to the correct version of ESXi, in order to get the features of that version of vSAN. Lastly, vSAN formats each disk drive with an on-disk format, so to get the full features of a specific release you may also need to update the on-disk format.
Here’s basically how everything breaks down:
- If you have vSphere 5.5 (vCenter Server 5.5 & ESXi 5.5) then you have vSAN 5.5.
- If you have vSphere 6.0 (vCenter Server 6.0 & ESXi 6.0) then you have vSAN 6.0.
- If you have vSphere 6.0 U1 (vCenter Server 6.0 Update 1 & ESXi 6.0 Update 1) then you have vSAN 6.1.
- If you have vSphere 6.0 U2 (vCenter Server 6.0 Update 2 & ESXi 6.0 Update 2) then you have vSAN 6.2.
- If you have vSphere 6.5 (vCenter Server 6.5 & ESXi 6.5) then you have vSAN 6.5.
- If you have vSphere 6.5.0d (vCenter Server 6.5.0d & ESXi 6.5.0d) then you have vSAN 6.6.
- If you have vSphere 6.5 Update 1 (vCenter Server 6.5 Update 1 & ESXi 6.5 Update 1) then you have vSAN 6.6.1.
- If you have vSphere 6.7 (vCenter Server 6.7 & ESXi 6.7) then you have vSAN 6.7.
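To make the coupling concrete, here's a minimal Python sketch that maps a handful of ESXi build numbers to their vSAN version and on-disk format version, using the pairings listed above. The function name and the abridged table are my own; treat VMware KB 2150753 as the authoritative list.

```python
# Abridged lookup table built from the vSphere/vSAN pairings in this post.
# Keys are ESXi installer build numbers; values are
# (release name, vSAN version, vSAN on-disk format version).
ESXI_BUILD_TO_VSAN = {
    1331820: ("ESXi 5.5 GA", "vSAN 5.5", 1),
    2494585: ("ESXi 6.0 GA", "vSAN 6.0", 2),
    3029758: ("ESXi 6.0 U1", "vSAN 6.1", 2),
    3620759: ("ESXi 6.0 U2", "vSAN 6.2", 3),
    4564106: ("ESXi 6.5 GA", "vSAN 6.5", 3),
    5969303: ("ESXi 6.5 U1", "vSAN 6.6.1", 5),
    8169922: ("ESXi 6.7 GA", "vSAN 6.7", 6),
}

def vsan_version_for_build(build: int):
    """Return (release, vSAN version, on-disk format) for a known ESXi build."""
    try:
        return ESXI_BUILD_TO_VSAN[build]
    except KeyError:
        raise ValueError(f"Unknown ESXi build {build}; see VMware KB 2150753")

print(vsan_version_for_build(5969303))  # ('ESXi 6.5 U1', 'vSAN 6.6.1', 5)
```

The point of the table shape is exactly what the list above says: the vSAN version is a function of the ESXi build, so there is no combination to choose; you upgrade vSphere and the vSAN version (and possibly the on-disk format) comes with it.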
Here’s a more detailed matrix:
|Release||Release Date||Installer Build Number||Patch Build Number||vSAN Version||vSAN On-Disk Format Version|
|ESXi 6.5 U2||2018-05-03||8294253||N/A||6.6.1 U2||5|
|ESXi 6.7 GA||2018-04-17||8169922||N/A||6.7 GA||6|
|ESXi 6.5 Patch 02||2017-12-19||7388607||N/A||6.6.1 Patch 02||5|
|ESXi 6.5 Express Patch 4||2017-10-05||6765664||N/A||6.6.1 Express Patch 4||5|
|ESXi 6.5 Update 1||2017-07-27||5969303||N/A||6.6.1||5|
|ESXi 6.5 Express Patch 1a||2017-03-28||5224529||N/A||6.5 Express Patch 1a||3|
|ESXi 6.5 Patch 01||2017-03-09||5146846||5146843||6.5 Patch 01||3|
|ESXi 6.5 GA||2016-11-15||4564106||N/A||6.5||3|
|ESXi 6.0 Patch 7||2018-07-26||9239799||N/A||6.2 Patch 7||3|
|ESXi 6.0 Patch 6||2017-11-09||6921384||N/A||6.2 Patch 6||3|
|ESXi 6.0 Express Patch 11||2017-10-05||6765062||N/A||6.2 Express Patch 11||3|
|ESXi 6.0 Patch 5||2017-06-06||5572656||N/A||6.2 Patch 5||3|
|ESXi 6.0 Express Patch 7c||2017-03-28||5251623||N/A||6.2 Express Patch 7c||3|
|ESXi 6.0 Express Patch 7a||2017-03-28||5224934||N/A||6.2 Express Patch 7a||3|
|ESXi 6.0 Update 3||2017-02-24||5050593||N/A||6.2 Update 3||3|
|ESXi 6.0 Patch 4||2016-11-22||4600944||N/A||6.2 Patch 4||3|
|ESXi 6.0 Express Patch 7||2016-10-17||4510822||N/A||6.2 Express Patch 7||3|
|ESXi 6.0 Patch 3||2016-08-04||4192238||N/A||6.2 Patch 3||3|
|ESXi 6.0 Express Patch 6||2016-05-12||3825889||N/A||6.2 Express Patch 6||3|
|ESXi 6.0 Update 2||2016-03-16||3620759||N/A||6.2||3|
|ESXi 6.0 Express Patch 5||2016-02-23||3568940||N/A||6.1 Express Patch 5||2|
|ESXi 6.0 Update 1b||2016-01-07||3380124||N/A||6.1 Update 1b||2|
|ESXi 6.0 Express Patch 4||2015-11-25||3247720||N/A||6.1 Express Patch 4||2|
|ESXi 6.0 U1a (Express Patch 3)||2015-10-06||3073146||N/A||6.1 U1a (Express Patch 3)||2|
|ESXi 6.0 U1||2015-09-10||3029758||N/A||6.1||2|
|ESXi 6.0 Express Patch 2||2015-05-14||2715440||N/A||6.0 Express Patch 2||2|
|ESXi 6.0 Express Patch 1||2015-04-09||2615704||2615979||6.0 Express Patch 1||2|
|ESXi 6.0 GA||2015-03-12||2494585||N/A||6.0||2|
|ESXi 5.5 Patch 10||2016-12-20||4722766||4761836||5.5 Patch 10||1|
|ESXi 5.5 Patch 9||2016-09-15||4345813||4362114||5.5 Patch 9||1|
|ESXi 5.5 Patch 8||2016-08-04||4179633||N/A||5.5 Patch 8||1|
|ESXi 5.5 Express Patch 10||2016-02-22||3568722||N/A||5.5 Express Patch 10||1|
|ESXi 5.5 Express Patch 9||2016-01-04||3343343||N/A||5.5 Express Patch 9||1|
|ESXi 5.5 Update 3b||2015-12-08||3248547||N/A||5.5 Update 3b||1|
|ESXi 5.5 Update 3a||2015-10-06||3116895||N/A||5.5 Update 3a||1|
|ESXi 5.5 Update 3||2015-09-16||3029944||N/A||5.5 Update 3||1|
|ESXi 5.5 Patch 5 re-release||2015-05-08||2718055||N/A||5.5 Patch 5 re-release||1|
|ESXi 5.5 Express Patch 7||2015-04-07||2638301||N/A||5.5 Express Patch 7||1|
|ESXi 5.5 Express Patch 6||2015-02-05||2456374||N/A||5.5 Express Patch 6||1|
|ESXi 5.5 Patch 4||2015-01-27||2403361||N/A||5.5 Patch 4||1|
|ESXi 5.5 Express Patch 5||2014-12-02||2302651||N/A||5.5 Express Patch 5||1|
|ESXi 5.5 Patch 3||2014-10-15||2143827||N/A||5.5 Patch 3||1|
|ESXi 5.5 Update 2||2014-09-09||2068190||N/A||5.5 Update 2||1|
|ESXi 5.5 Patch 2||2014-07-01||1892794||N/A||5.5 Patch 2||1|
|ESXi 5.5 Express Patch 4||2014-06-11||1881737||N/A||5.5 Express Patch 4||1|
|ESXi 5.5 Update 1a||2014-04-19||1746018||N/A||5.5 Update 1a||1|
|ESXi 5.5 Express Patch 3||2014-04-19||1746974||N/A||5.5 Express Patch 3||1|
|ESXi 5.5 Update 1||2014-03-11||1623387||N/A||5.5 Update 1||1|
|ESXi 5.5 Patch 1||2013-12-22||1474528||N/A||5.5 Patch 1||1|
|ESXi 5.5 GA||2013-09-22||1331820||N/A||5.5||1|
As a reference, see:
Build numbers and versions of VMware vSAN (2150753) – This KB article, published July 31, 2017, provides the same information as above.
There are many VMware and Citrix customers happily running Citrix XenApp and XenDesktop on VMware vSphere clusters with Virtual SAN enabled.
Citrix XenApp is fully supported on vSAN.
Citrix XenDesktop PVS is fully supported on vSAN.
Citrix XenDesktop MCS is still not supported on vSAN by Citrix at the time of this writing (October 7, 2016). Citrix has a fix in 7.8 and 7.9 already, and customers have reported that the fix works; however, Citrix says the fix has not been qualified by them and thus is not supported. The ETA for official support is unclear at this point and is the responsibility of Citrix. If you need this feature, please reach out to Citrix to let them know.
Our friends at Dell Technologies (EMC/VCE) have tested XenApp, XenDesktop PVS, and MCS on VxRail and have produced a report here:
Citrix XenDesktop 7.9 and VMware vSphere 6.0 with VCE VxRail Appliance
In it they state “Citrix official support of MCS on VMware Virtual SAN is expected in a future release of XenDesktop. EMC tested this configuration and found no observable issues.”
For the record, I’ve been a fan of Citrix since I first deployed Citrix WinView in my data center and remote sites back in 1994. Yes, I’m that old. I’m sure this will all get worked out.