vSAN and Data-At-Rest Encryption – Why SEDs are not Supported (i.e. Part 3)

I first wrote about vSAN and Encryption here: Virtual SAN and Data-At-Rest Encryption

And then again here: vSAN and Data-At-Rest Encryption – Rebooted (i.e. Part 2)

And then vSAN Encryption went live in vSAN 6.6 announced here: vSAN 6.6 – Native Data-at-Rest Encryption

Today I was asked if vSAN supports Self-Encrypting Drives (SEDs). The answer is no. The vSAN product team evaluated SEDs, but there are too few choices, they are too expensive, and they increase the operational burden.

vSAN supports only vSAN Encryption, VM Encryption, or 3rd party per-VM encryption solutions like HyTrust DataControl.

vSAN is Software-Defined Storage, so the product team decided to focus on software-based encryption, allowing vSAN to support data-at-rest encryption (D@RE) on any storage device that exists today or arrives in the future. When vSAN went live supporting Intel Optane, this new flash device was immediately capable of D@RE. The vSAN Encryption operational model is simple: just click a check box to enable it on the vSAN datastore and point to a Key Management Server. That's one encryption key to manage for the entire vSAN datastore. Additional benefits of vSAN Encryption are that it supports vSAN Dedupe and Compression, and vSAN 6.7 encryption has achieved FIPS 140-2 validation.

Another choice is to leverage VMware’s VM Encryption described here: What’s new in vSphere 6.5: Security
This is per-VM encryption: you point vCenter to a Key Management Server and then enable encryption per VM via policy. This flexibility allows some VMs to be encrypted and others not. And if a VM is migrated to another vSphere cluster or to VMware Cloud on AWS, the encryption and key management follow the VM. The trade-offs are that the administrator manages a key per VM, and because the encryption happens as soon as the write leaves the VM and passes through the VAIO filter, no storage system will be able to dedupe the VM's data, since every block is unique.
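To see why encrypting above the storage layer defeats deduplication, here is a toy sketch in plain Python. This is purely illustrative (real VM Encryption uses AES-XTS, not this SHA-256 stream cipher): two VMs writing the same 4 KB block, each under its own key, produce completely different ciphertexts, so a dedupe engine that fingerprints blocks can never collapse them.

```python
import hashlib

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    # Toy stream cipher for illustration only: XOR the block with a
    # SHA-256-derived keystream. NOT how VM Encryption actually works.
    keystream = b""
    counter = 0
    while len(keystream) < len(block):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(block, keystream))

block = b"\x00" * 4096               # two VMs writing identical 4 KB blocks
ct_vm1 = toy_encrypt(block, b"key-for-vm-1")
ct_vm2 = toy_encrypt(block, b"key-for-vm-2")

# Identical plaintexts, different ciphertexts: the dedupe fingerprints differ,
# so the storage layer sees two unique blocks instead of one shared block.
print(ct_vm1 == ct_vm2)   # False
```

Because vSAN Encryption instead encrypts below the dedupe/compression layer, identical blocks are still identical when dedupe sees them, which is why it keeps its space-efficiency features.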

Finally, there are various 3rd party per-VM encryption solutions on the market that vSAN would also support. For instance, HyTrust DataControl.

I hope this helps clear up what options there are for vSAN encryption and the various tradeoffs.

VMworld 2018 – My 2 Breakout Sessions

I’m looking forward to VMworld 2018 in a few weeks. It’s always a long week but a great time. I look forward to catching up with coworkers, partners, customers, and friends. And, I’ll also have to do a little work. This year I have 2 breakout speaking sessions.

vSAN Technical Customer Panel on vSAN Experiences [HCI1615PU]
Monday, Aug 27, 12:30 p.m. – 1:30 p.m.

The panel will consist of 4 vSAN customers: General Motors, United States Senate Federal Credit Union, Rent-A-Center, and Brinks. I will moderate the session and ask the customers to describe their company, role, environment, and how they are using vSAN. After each panelist does this, we'll take questions from the audience. Here's a recording of last year's session to give you an idea: https://youtu.be/x4ioatHqQOI
On the panel we had Sanofi, Travelers, Sekisui Pharmaceutical, and Herbalife. The year before we had Stanley Black and Decker, Synergent Bank, M&T Bank, and Baystate Health. Both were great sessions and this year looks like it will be too.

Achieving a GDPR-Ready Architecture Leveraging VMware vSAN [HCI3452BU]
Wednesday, Aug 29, 12:30 p.m. – 1:30 p.m.

When it comes to security in vSAN, most people think of Data at Rest Encryption, and to make that work you need a key management server. It's tough to beat HyTrust for this: they offer the software for free and support for a small fee. But that's not all they do. Dave Siles and I will discuss a GDPR-ready architecture and how vSAN encryption can help. Check out this session to find out more.

Troubleshooting vSAN Networking Issues with Health Checks – vSAN Health Check and vSphere Distributed Switch (VDS) Health Check

Recently, one of my colleagues was working with a customer who was intermittently getting an error on the vSAN health check in vSAN 6.6.x indicating that "A few hosts were failing ping test – large packet ping test: vsan: mtu check (ping with large packet size)". As reported by the customer, the same cluster would sometimes pass all tests in vSAN Health and other times report the error above.

The customer enabled the vSphere distributed switch (VDS) health check and ran it on the vSphere distributed switch that was supporting the cluster. The VDS health check immediately reported …

  • Mismatched VLAN trunks between a vSphere distributed switch and physical switch.
  • Mismatched MTU settings between physical network adapters, distributed switches, and physical switch ports.

The VDS health check also reported which uplinks across the hosts had these specific misconfiguration issues, so the customer had something concrete to take to his networking team to resolve the problem.

I thought this was a good example of using these two tools together to identify a networking problem and providing evidence to help facilitate the resolution.
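For anyone wanting to reproduce the large-packet check by hand: the vSAN health test essentially pings the other hosts' vSAN vmkernel interfaces with a jumbo-sized, don't-fragment packet. A quick sketch of the arithmetic behind the usual `vmkping -d -s 8972` test (the interface name and MTU 9000 here are assumptions for a typical jumbo-frame setup):

```python
IP_HEADER = 20    # bytes, IPv4 header without options
ICMP_HEADER = 8   # bytes, ICMP echo header

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP payload that fits in one frame without fragmentation."""
    return mtu - IP_HEADER - ICMP_HEADER

# On a jumbo-frame vSAN network (MTU 9000) the classic manual test is:
#   vmkping -I vmk2 -d -s 8972 <remote-vsan-vmk-ip>
# where -d means "don't fragment" and 8972 = 9000 - 20 - 8.
print(max_ping_payload(9000))   # 8972
print(max_ping_payload(1500))   # 1472, the standard-MTU equivalent
```

If the 8972-byte ping fails while a small ping succeeds, some device in the path is not carrying the full MTU, which is exactly the mismatch the VDS health check surfaced here.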

What Capacity Utilization Will I have after I Evacuate a vSAN Host?

To fully evacuate a vSAN host and still satisfy FTT=1, FTM=RAID1, you must have at least 4 hosts in the cluster. When a host is put into maintenance mode and fully evacuated, that host's data is spread across the surviving hosts. In other words, if you follow the vSAN best practice guidance to stay at or below 70% utilization, then the capacity that represents that 70% must now fit on 3 hosts, which means those 3 hosts become 93.3% utilized (70% utilized * 4 nodes / 3 nodes = 93.3%). The more hosts you have in the cluster, the less utilized your cluster will be when putting a host in maintenance mode. For example: 70% utilized * 10 nodes / 9 nodes ≈ 77.8% utilized after evacuation of a host.

The formula for this is:

% Utilization after evacuation = (% Utilization before evacuation * # nodes) / (# nodes – 1)
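Expressed as a small sketch (a hypothetical helper, coded straight from the formula above, assuming capacity is spread evenly across identical hosts):

```python
def utilization_after_evacuation(util_before: float, nodes: int) -> float:
    """Cluster-wide capacity utilization after fully evacuating one host.

    Assumes identical hosts and that the evacuated host's data is
    redistributed evenly across the remaining nodes.
    """
    if nodes < 2:
        raise ValueError("need at least 2 nodes to evacuate one")
    return util_before * nodes / (nodes - 1)

# The two examples from the post:
print(round(utilization_after_evacuation(0.70, 4) * 100, 1))    # 93.3
print(round(utilization_after_evacuation(0.70, 10) * 100, 1))   # 77.8
```

Running it backwards is also handy: for a 4-node cluster, staying at or below 75% after evacuation means staying at or below about 56% day to day.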

vSphere 6.7 and vSAN 6.7 in the News

Yesterday was a big day for vSphere and vSAN with the launch of the 6.7 release. There are many great blogs written so rather than repeat the content, here’s a list with links.

VMware Written Content

VMware Web Site: What’s New: vSAN 6.7
VMware Virtual Blocks: Extending Hybrid Cloud Leadership with vSAN 6.7
VMware Virtual Blocks: What’s New with VMware vSAN 6.7
VMware Virtual Blocks: vSpeaking Podcast Episode 75: What’s New in vSAN 6.7
Yellow-Bricks.com: vSphere 6.7 announced!
CormacHogan.com: What’s in the vSphere and vSAN 6.7 release?
Tohuw.Net: The Art in the Architecture – vSAN & Shared Nothing

Migrating Workloads onto vSAN

You've built your vSphere cluster with vSAN enabled, now what? Of course, you can start provisioning VMs in the cluster and their VMDKs onto the vSAN datastore. But what if you want to move existing VMs onto your new cluster? Well, there are several methods to consider, each with their own benefits and drawbacks. This topic has been explored a few times and here are some useful links:
Migrating VMs to vSAN
Migrating to vSAN

I had the opportunity to record an overview of this topic using our Lightboard technology at VMware headquarters in Palo Alto. You can check it out here:

Migrating Workloads onto vSAN

The video lightboard explores the following methods:

Backup

Simply put, you can back up your VMs sitting in one cluster, shut them down, then restore them onto the new cluster.

Cross Cluster vMotion (AKA XvMotion), Cross vCenter vMotion, Long Distance vMotion (LDM)

You can migrate live VMs from one cluster to another cluster (Cross Cluster vMotion), and those clusters could be managed by different vCenters (Cross vCenter vMotion). This can be great for a few VMs, but if it's a lot of VMs and a lot of data, it can take a while. There's no downtime for the VMs, but you could be waiting a long time for the migration to complete. For more details, see one of my previous posts:

XvMotion, Cross-vCenter vMotion, VVols, LVM active/active, LVM active/passive, SRM & Stretched Storage, VAIO Filters
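To put "it can take a while" in rough numbers, here is a back-of-the-envelope estimator (the 70% link-efficiency figure is my assumption; real throughput depends heavily on the environment, change rates, and vMotion settings):

```python
def migration_hours(total_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough time to copy total_tb of VM data over a link_gbps network.

    efficiency is an assumed fraction of line rate actually sustained by
    the migration traffic; tune it to your own environment.
    """
    bits = total_tb * 1e12 * 8                     # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# e.g. 50 TB of VMs over a 10 Gbps network at ~70% efficiency:
print(round(migration_hours(50, 10), 1))   # 15.9 (hours)
```

Even this optimistic estimate shows why, for large datasets, a replication-based approach with a brief cut-over window is often more practical than live-migrating everything.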

Storage vMotion

This is only possible if your source and destination hosts are connected to the same storage system LUN/volume. If so, both clusters can mount the same LUN/volume, and you can move the VM from the source cluster to the destination cluster while also moving its data from the source datastore (the LUN/volume on the SAN/NAS) to the destination datastore (vSAN). If you are moving off a traditional Fibre Channel SAN, you'll need to put Fibre Channel HBAs in the hosts supporting the new vSAN datastore.

VMware vSphere Replication

VMware's vSphere Replication replicates any VM on one cluster to any other cluster. This host-based replication feature is storage agnostic, so it doesn't matter what the underlying storage is on either cluster. A vSphere snapshot of the VM is taken, and that snapshot is used as the source of the replication. Once you know the data is in sync between the source and destination clusters, you can shut down the VMs in the source cluster and power them up in the destination cluster, so there is downtime. If something doesn't go right, you can revert to the source cluster. Here's a good whitepaper on vSphere Replication.

VMware vSphere Replication + Site Recovery Manager

VMware's vSphere Replication replicates any VM on one cluster to any other cluster, and VMware Site Recovery Manager allows you to test and validate the failover from the source to the destination. It lets you script the order in which VMs are powered on, re-IP them if necessary, and automate running pre- and post-scripts. Once you validate that the failover will happen as you want, you can do it for real knowing it's been pretested. If something goes wrong, it has a "revert" feature to reverse the cut-over and go back to the source cluster until you can fix the problem. Here are a few good whitepapers on Site Recovery Manager.

3rd Party Replication

DellEMC RP4VMs replicates data prior to cut-over. Once you know the data is in sync between the source and destination clusters, you can shut down the VMs in the source cluster and power them up in the destination cluster, so there is downtime. If something doesn't go right, you can revert to the source cluster. There are other 3rd party options on the market, including solutions from Zerto and Veeam.

What About VMware Cloud on AWS?

Since vSAN is the underlying storage on VMware Cloud on AWS, all the options above will work for migrating workloads from on-premises to VMware Cloud on AWS.

Summary

Personally, I like the ability to test the failover migration "cut over" using Site Recovery Manager, so I'd opt for the vSphere Replication + Site Recovery Manager option if possible. If it's only a few VMs and a small amount of data, then XvMotion would be the way to go.


vSAN ReadyNode Sizer

If you plan on implementing HCI to support your workloads and are looking to size an environment, the vSAN ReadyNode Sizer tool is the place to go.

vsansizer.vmware.com

There are 3 ways to use this.

  • Don’t log in – use the tool in “Evaluation” mode
  • Login using your My VMware account
  • Login using your Partner Central account

In “Evaluation” mode you’ll be able to create some basic configurations for general purpose workloads but will have no ability to customize or download the sizing results.

If you log in using your My VMware account or Partner Central account, you’ll have a lot more functionality. First, you’ll be asked if you want to configure an All Flash cluster or Hybrid cluster.

vSAN Sizer 1

Previously, the only place to size a Hybrid cluster was using the old vSAN Sizing tool. The ability to configure Hybrid clusters was just added to the new tool so now there is one place to size either option.

Next you'll be asked whether you want to size for a "Single Workload Cluster" or a "Multi-Workload Cluster".

vSAN Sizer 2

The Single Workload Cluster provides options to configure for VDI, Relational Databases, or General Purpose workloads.

vSAN Sizer 3

The Multi-Workload Cluster choice is helpful if you plan to have different types of VMs and want to input the various workload specifics. There are a ton of customization options, including block size, IO pattern, vCPU/core ratio, etc. And of course, either option allows you to choose the vSAN protection level and method for each workload. You can even size for stretched clusters.

Our great product team at VMware has put a ton of work into this tool, including some complex math, to come up with a simple and easy way to configure clusters for vSAN. Check out the tool and see for yourself. But also feel free to contact your favorite partner or VMware Systems Engineer for help. The vSAN SE team has done hundreds, if not thousands, of these configurations and can help make sure you'll be good to go.