VMworld 2018 – vSAN Technical Customer Panel on vSAN Experiences (HCI1615PU)

We had a great time at VMworld 2018 during the vSAN Technical Customer Panel with 4 great vSAN customers: General Motors, United States Senate Federal Credit Union, Rent-A-Center, and Oakland University.

They introduced themselves, discussed how they are using vSAN in their environment, and the benefits achieved. After that, we had a stream of questions from the audience that provoked some interesting discussions. If you want to check it out you can view the recorded session here:

vSAN Technical Customer Panel on vSAN Experiences (HCI1615PU)

Also, there is a great TechTarget Converged Infrastructure summary of the session by Dave Raffo here:

vSAN hyper-converged users offer buying, implementing advice

This is the third year in a row I’ve been fortunate enough to host this session. This year was the best attended and had the best audience questions. FYI, my colleague, Lee Dilworth, will be hosting this session in Barcelona, so we look forward to a good crowd with more good questions and discussion.

vSAN and Data-At-Rest Encryption – Why SED’s are not Supported (i.e. Part 3)

I first wrote about vSAN and Encryption here: Virtual SAN and Data-At-Rest Encryption

And then again here: vSAN and Data-At-Rest Encryption – Rebooted (i.e. Part 2)

And then vSAN Encryption went live in vSAN 6.6 announced here: vSAN 6.6 – Native Data-at-Rest Encryption

Today I was asked if vSAN supports Self Encrypting Drives (SED). The answer is No. The vSAN product team looked at SEDs but there are too few choices, they are too expensive, and they increase the operational burden.

vSAN only supports vSAN Encryption, VM Encryption, or other 3rd party VM encryption solutions like HyTrust DataControl.

vSAN is Software Defined Storage, so the product team decided to focus on software-based encryption to allow vSAN to support data at rest encryption (D@RE) on any storage device that exists today or will come in the future. When vSAN went live supporting Intel Optane, this new flash device was immediately capable of D@RE. The vSAN Encryption operational model is simple: just click a check box to enable it on the vSAN datastore and point to a Key Management Server. There is one encryption key to manage for the entire vSAN datastore. Additional benefits of vSAN Encryption are that it supports vSAN Dedupe and Compression, and that vSAN 6.7 encryption has achieved FIPS 140-2 validation.

Another choice is to leverage VMware’s VM Encryption described here: What’s new in vSphere 6.5: Security
This is per-VM encryption, so you point vCenter to a Key Management Server and then enable encryption per VM via policy. This flexibility allows some VM’s to be encrypted and some not to be. And, if the VM is migrated to another vSphere cluster or to VMware Cloud on AWS, the encryption and key management follow the VM. This requires the administrator to manage a key per VM, and because the encryption happens immediately as the write leaves the VM and goes through the VAIO filter, no storage system will be able to dedupe the VM’s data since each block is unique.

Finally, there are various 3rd party per-VM encryption solutions on the market that vSAN also supports, such as HyTrust DataControl.

I hope this helps clear up what options there are for vSAN encryption and the various tradeoffs.

VMworld 2018 – My 2 Breakout Sessions

I’m looking forward to VMworld 2018 in a few weeks. It’s always a long week but a great time. I look forward to catching up with coworkers, partners, customers, and friends. And, I’ll also have to do a little work. This year I have 2 breakout speaking sessions.

vSAN Technical Customer Panel on vSAN Experiences [HCI1615PU]
Monday, Aug 27, 12:30 p.m. – 1:30 p.m.

The panel will consist of 4 vSAN customers: General Motors, United States Senate Federal Credit Union, Rent-A-Center, and Oakland University. Brinks is a great vSAN customer but is doing an NSX session at the same time as the vSAN session, so we are lucky to add Oakland University to the panel. I will moderate the session and ask the customers to describe their company, role, environment, and how they are using vSAN. General Motors will talk about their large VDI deployment. United States Senate Federal Credit Union will discuss their use of vSAN in remote offices, VVols, and Storage Policy Based Management (SPBM). Rent-A-Center will discuss vSAN for management clusters, VDI, and the benefit of VxRail. Oakland University will discuss their vSAN stretched cluster, Data at Rest Encryption, and Dedupe/Compression. After each panelist does this, we’ll take questions from the audience.

Here’s a recording of last year’s session to give you an idea: https://youtu.be/x4ioatHqQOI 
On the panel we had Sanofi, Travelers, Sekisui Pharmaceutical, and Herbalife. The year before we had Stanley Black and Decker, Synergent Bank, M&T Bank, and Baystate Health. Both were great sessions and this year looks like it will be too.

Achieving a GDPR-Ready Architecture Leveraging VMware vSAN [HCI3452BU]
Wednesday, Aug 29, 12:30 p.m. – 1:30 p.m.

When it comes to security in vSAN, most think of Data at Rest Encryption, and to make that work you need a key management server. It’s tough to beat HyTrust for this: they offer the software for free and support for a small fee. But that’s not all they do. Check out this session to find out more. Dave Siles and I will discuss a GDPR-Ready Architecture and how vSAN encryption can help.

What Capacity Utilization Will I Have After I Evacuate a vSAN Host?

To fully evacuate a vSAN host and satisfy FTT=1, FTM=RAID1 you must have at least 4 hosts in the cluster. When a host is put in maintenance mode and fully evacuated, that host data is spread across the surviving hosts. In other words, if you follow the vSAN best practice guidance to stay less than or equal to 70% utilized, then the capacity that represents the 70% utilization must now fit on 3 hosts, which means those 3 hosts become 93% utilized (70% utilized * 4 nodes / 3 nodes = 93.3% utilized). The more hosts you have in the cluster, the less utilized your cluster will be when putting a host in maintenance mode. For example: 70% utilized * 10 nodes / 9 nodes = 77.7% utilized after evacuation of a host.

The formula for this is:

% Utilization after evacuation = (% Utilization before evacuation * # nodes) / (# nodes – 1)
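The formula above is easy to capture in a tiny helper. This is just a sketch (the function name is mine, not from any vSAN tool), assuming capacity is spread evenly across hosts:

```python
def utilization_after_evacuation(before_pct: float, nodes: int) -> float:
    """Expected % capacity utilization on the surviving hosts after one
    host in an n-node vSAN cluster is fully evacuated.

    Assumes capacity is spread evenly across all hosts.
    """
    if nodes < 2:
        raise ValueError("need at least 2 nodes to evacuate one")
    return before_pct * nodes / (nodes - 1)

# The examples from above:
print(round(utilization_after_evacuation(70, 4), 1))   # 93.3
print(round(utilization_after_evacuation(70, 10), 1))  # 77.8
```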

vSphere 6.7 and vSAN 6.7 in the News

Yesterday was a big day for vSphere and vSAN with the launch of the 6.7 release. There are many great blogs written so rather than repeat the content, here’s a list with links.

VMware Written Content

VMware Web Site: What’s New: vSAN 6.7
VMware Virtual Blocks: Extending Hybrid Cloud Leadership with vSAN 6.7
VMware Virtual Blocks: What’s New with VMware vSAN 6.7
VMware Virtual Blocks: vSpeaking Podcast Episode 75: What’s New in vSAN 6.7
Yellow-Bricks.com: vSphere 6.7 announced!
CormacHogan.com: What’s in the vSphere and vSAN 6.7 release?
Tohuw.Net: The Art in the Architecture – vSAN & Shared Nothing

vSAN ReadyNode Sizer

If you plan on implementing HCI to support your workloads and are looking to size an environment, the vSAN ReadyNode Sizer tool is the place to go.

vsansizer.vmware.com

There are 3 ways to use this.

  • Don’t log in – use the tool in “Evaluation” mode
  • Log in using your My VMware account
  • Log in using your Partner Central account

In “Evaluation” mode you’ll be able to create some basic configurations for general purpose workloads but will have no ability to customize or download the sizing results.

If you log in using your My VMware account or Partner Central account, you’ll have a lot more functionality. First, you’ll be asked if you want to configure an All Flash cluster or Hybrid cluster.

vSAN Sizer 1

Previously, the only place to size a Hybrid cluster was using the old vSAN Sizing tool. The ability to configure Hybrid clusters was just added to the new tool so now there is one place to size either option.

Next you’ll be asked if you want to size for a “Single Workload Cluster” or a “Multi-Workload Cluster”.

vSAN Sizer 2

The Single Workload Cluster provides options to configure for VDI, Relational Databases, or General Purpose workloads.

vSAN Sizer 3

The Multi-Workload Cluster choice is helpful if you plan to have different types of VM’s and want to input the various workload specifics. There are a ton of customization options including block size, IO pattern, vCPU/Core, etc. And of course, either option allows you to choose the vSAN protection level and method for each workload. You can even size for stretched clusters.

Our great product team at VMware has put a ton of work into this tool, including some complex math, to come up with a simple and easy way to configure clusters for vSAN. Check out the tool and see for yourself. But also feel free to contact your favorite partner or VMware Systems Engineer to help. The vSAN SE team has done thousands of these configurations and can help make sure you’ll be good to go.

Public Speaking Advice

Over my career, I’ve had the opportunity to publicly speak at VMUGs, vForums, Partner events, and other technology focused events. Recently I was asked to provide some Public Speaking Advice and I quickly jotted down some notes in an email and sent them. This is by no means complete, but perhaps someone else will benefit from this:

Be prepared for the worst-case scenarios

  • No network – Assume you will have no network connectivity. Perhaps you will have network connectivity and will be able to link to your live demo system, however, you should be prepared to deliver your message assuming the connectivity is too slow or broken.
  • Broken Laptop – Assume your laptop won’t boot up or can’t connect to the overhead projector. Have your presentation on a USB stick or cloud storage so you can access your presentation from someone else’s device or from your secondary device.
  • Test and verify – Do a dry run of your presentation ahead of time if possible; if not, arrive early and make sure your setup will work.

If it’s a Web based presentation

  • Have a plan if people cannot connect, maybe use an alternate method (e.g. WebEx, Skype, GoToMeeting, etc.) or be prepared to just talk through your material without visuals.
  • Don’t move your mouse all over the place; it’s annoying to the viewers.
  • Make sure you engage the audience. Ask them questions. Don’t just talk and hope they are hearing you.
  • Leverage the web presentation tools to enhance your presentation – whiteboard, highlighter, marker, etc.

If it’s an in person presentation

  • Dress for success – In other words, dress how you want to be perceived. If it’s your first meeting with a customer, you should probably dress up. Likewise, if you are on stage at a big event, then you’ll probably want to wear a dress shirt, sport coat, and polished shoes. But if it’s a technical deep dive at a customer or a technical breakout session at a conference, then you might want to dress more casually, perhaps in your company golf shirt.
  • Empty your pockets – this will prevent you from fidgeting with your cell phone, wallet, coins, etc.
  • Eliminate other Distractions – Take off your badge, lanyard, or anything that’s distracting so that the focus is on you and what you are saying. Also, don’t pick up pens or markers and click them or open and close the cap repeatedly.
  • Setup the Room – Sometimes it is not possible to rearrange the room, but, if it is, then make it so you can move around the room and engage the audience. A U-shape works well for this.
  • Posture – Stand tall, arms at your side in a relaxed confident manner to start the presentation and as much as possible throughout.
  • Hand Gestures & Movement – Use as many hand gestures as possible. It shows your passion and emphasizes the content. Also move around the room as much as possible. It forces people to pay more attention. If someone is on their phone or falling asleep, move closer to them.
  • Maintain Eye Contact – This is a hard skill to master but extremely effective when you do. It is the #1 way to help eliminate saying “um” and “ah”, which is the #1 complaint against a public speaker. To practice, cut out faces and paste them on the wall. Say a sentence to one face, then randomly make eye contact with another face and say the next sentence or complete thought. Continue to move around and randomly change who you are looking at throughout the presentation.

Slides

  • Don’t introduce yourself. Have your opening slide with your name & title on it but don’t repeat that in an opening statement. Start your conversation with an interesting opening statement that makes the audience want to hear more.
  • Keep them as simple as possible. The focus should be on you, the presenter, not the slides. Technical presentations tend to have a lot of details to convey so it may be hard to avoid showing some complex slides, but use them as a trigger for your talk track; never read them.
  • Present what you know (i.e. deleting slides is OK). We in the tech industry often get handed slide decks from corporate marketing which are great, but often there are certain slides that just don’t make sense or that you cannot figure out how to talk to. It’s better to hide or delete the slide than fumble around trying to talk to it, or turn your back to the audience and read it. Build a story that you can tell by only glancing at the slides once in a while.
  • Incorporate a product demo into your presentation if possible. People like to see things in action. Best to do this earlier than leaving it until the end.

VMworld Hands-on-Labs – 9,640 Labs Were Delivered by vSAN

The Hands-on-Labs (HoL) at VMworld are always a big hit. A ton of work goes into putting them on and supporting them and everyone seems to love them. This was a big year for vSAN in the HoL. At VMworld Las Vegas, 11,444 labs were completed and the vSAN lab, HOL-1808-01-HCI – vSAN 6.6, was the #2 overall lab completed. Our NSX friends held the #1 spot.

The HoL’s were delivered from 5 different data centers. Each handled approximately 20% of the workloads. vSAN was the storage in 4 of the data centers. 2 of the 4 were VMware data centers running vSphere, NSX, and vSAN for software defined compute, network, and storage. Another was IBM Bluemix (SoftLayer) built with VMware Cloud Foundation (vSphere, NSX, vSAN, and SDDC Manager). And the other was VMware Cloud on AWS, also built with VMware Cloud Foundation (vSphere, NSX, vSAN, and SDDC Manager). The 5th data center was another VMware data center running traditional storage. This is a great Hybrid Cloud / Multi Cloud example leveraging 3 of our own data centers and 2 of the largest public cloud data centers offering Infrastructure as a Service (IaaS).

VMware Cross Cloud Architecture

9,640 of the HoL’s were deployed across the 4 vSAN data centers. That means 84% of the labs delivered at VMworld US were delivered by vSAN. To support the HoL’s, over 90,000 VM’s were provisioned in just 5 days. Actually, more than that, since extra HoL’s are pre-provisioned and don’t all get used. This is a huge win for HCI and vSAN as it performed like a champ for this heavy workload.
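As a quick sanity check, the percentages and rates above follow directly from the reported numbers:

```python
# Hands-on-Labs numbers as reported above -- this just replays the arithmetic.
total_labs = 11_444   # labs completed at VMworld US
vsan_labs = 9_640     # labs delivered from the 4 vSAN-backed data centers
vms = 90_000          # VMs provisioned to support the labs
days = 5

print(f"{vsan_labs / total_labs:.0%} of labs ran on vSAN")   # 84% of labs ran on vSAN
print(f"{vms / days:,.0f} VMs provisioned per day")          # 18,000 VMs provisioned per day
```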

These stats are too impressive not to share and they are a great testament to all the people that make it happen.

vSAN Maintenance Mode Considerations

There are 3 options when putting a host in maintenance mode when that host is a member of a vSphere Cluster with vSAN enabled.  You follow the normal process to put a host in maintenance mode, but if vSAN is enabled, these options will pop up:

  1. Ensure accessibility
  2. Full data migration
  3. No data migration

There’s a 4th consideration that I’ll describe at the end.

I would expect most virtualization administrators to pick “Ensure accessibility” almost every time.
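If you automate maintenance mode instead of clicking through the vSphere Client, the three choices correspond to the vSAN decommission-mode values in the vSphere API (consumed, for example, by pyVmomi’s EnterMaintenanceMode_Task via a MaintenanceSpec). Here’s a minimal sketch of that mapping; the dict and function names are mine, and you should verify the enum values against your API version:

```python
# Maps the vSphere Client wording to the vSAN API's DecommissionMode
# objectAction values (vim.vsan.host.DecommissionMode in pyVmomi).
# The dict/function names below are illustrative, not part of any API.
VSAN_DECOMMISSION_MODES = {
    "Ensure accessibility": "ensureObjectAccessibility",
    "Full data migration": "evacuateAllData",
    "No data migration": "noAction",
}

def decommission_mode(ui_option: str) -> str:
    """Translate a vSphere Client maintenance-mode option to its API string."""
    try:
        return VSAN_DECOMMISSION_MODES[ui_option]
    except KeyError:
        raise ValueError(f"unknown maintenance mode option: {ui_option!r}")

print(decommission_mode("Ensure accessibility"))  # ensureObjectAccessibility
```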

Before we investigate, I want to reinforce that vSAN, by default, is designed to work and continue to provide VM’s access to data even if a host disappears.  The default vSAN policy is “Number of Failures To Tolerate” equal to 1 (#FTT=1), which means a HDD, SSD, or whole host (thus all the SSD and HDD on that host) can be unavailable, and data is available somewhere else on another host in the cluster.  If a host is in maintenance mode, then it is down, but vSAN by default has another copy of the data on another host.

VMware documents the options here:

Place a Member of Virtual SAN Cluster in Maintenance Mode

Ensure accessibility

This option will check to make sure that putting the particular host in maintenance mode will not take away the only copy of any VM’s data.  There are two scenarios I can think of in which this could happen:

  • In Storage Policy Based Management, you created a Storage Policy based on vSAN with #FTT=0 and attached at least 1 VM to that policy and that VM has data on the host going into maintenance mode.
  • Somewhere in the cluster you have failed drives or hosts and vSAN self-healing rebuilds haven’t completed. You then put a host into maintenance mode and that host has the only good copy of data remaining.

As rare as these scenarios are, they are possible.  By choosing the “Ensure accessibility” option, vSAN will find the single copies of data on that host and regenerate them on other hosts. Now when the host goes into maintenance mode, all VM data is available.  This is not a full migration of all the data off that host; it’s just a migration of the necessary data to “ensure accessibility” by all the VM’s in the cluster.  When the host goes into maintenance mode, it may take a little bit of time to complete the migration, but you’ll know that VM’s won’t be impacted.  During the maintenance of this host, some VM’s will likely be running in a degraded state with one less copy than the policy specifies.  Personally, I think this choice makes the most sense most of the time; it is the default selection, and I expect vSphere administrators to choose this option almost every time.

No data migration

This option puts the host in maintenance mode no matter what’s going on in the cluster.  I would expect virtualization administrators to almost never pick this option unless:

  • You know the cluster is completely healthy (no disk or host failures anywhere else)
  • The VM’s that would be impacted aren’t critical.
  • All the VM’s in the cluster are powered off.

For the reasons explained in “Ensure accessibility” above, it’s possible that the host going into maintenance mode has the only good copy of the data.  If this is not a problem, then choose this option for the fastest way to put a host into maintenance mode.  Otherwise, choose “Ensure accessibility”.

Full data migration

I would expect virtualization administrators to choose this option less frequently than “Ensure accessibility”, but they will choose it for a few reasons:

  • The host is being replaced by a new one.
  • The host will be down for a long time, longer than the normal maintenance window of applying a patch and rebooting.
  • You want to maintain the #FTT availability for all VM’s during the maintenance window

Keep in mind that if you choose this option you must have 4 or more hosts in your cluster, and you must be willing to wait for the data migration to complete.  The time to complete the data migration depends on the amount of capacity consumed on the host going into maintenance mode.  Yes, this could take some time.  The laws of physics apply.  10GbE helps to move more data in the same amount of time. And it helps if the overall environment is not too busy.

When the migration is complete, the host is essentially evacuated out of the cluster and all its data is spread across the remaining hosts.  VM’s will not be running in a degraded state during the maintenance window and will be able to tolerate failures per their #FTT policy.

4th consideration

I mentioned there is a 4th consideration.  For the VM’s that you want protected with at least two copies of data (#FTT=1) even during maintenance windows, you have two options.  One is to choose “Full data migration” as described above.  The other is to set #FTT=2 for those VM’s so they have 3 copies on 3 different hosts.  If one of those hosts is in maintenance mode and you didn’t choose “Full data migration”, then you still have 2 copies on other hosts, so the VM’s could tolerate another failure of a disk or host.  To do this, create a storage policy based on vSAN with #FTT=2 and attach your most critical VM’s to it.  For more information on running business critical applications on vSAN see:

Running Microsoft Business Critical Application on Virtual SAN 6.0
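The host-count arithmetic behind these choices can be sketched as a small helper, assuming the standard RAID-1 rule that #FTT=n mirroring needs 2n+1 hosts (n+1 data copies plus n witnesses), plus one spare host if you want a full evacuation to keep the policy compliant (the function name is mine):

```python
def min_hosts_raid1(ftt: int, full_evacuation: bool = False) -> int:
    """Minimum vSAN hosts for a RAID-1 (mirroring) policy with a given #FTT.

    Mirroring needs ftt + 1 data copies plus ftt witness components, each on
    its own host: 2*ftt + 1 hosts. Add one more host if you want to fully
    evacuate a host while keeping the policy compliant.
    """
    if ftt < 1:
        raise ValueError("ftt must be >= 1")
    return 2 * ftt + 1 + (1 if full_evacuation else 0)

print(min_hosts_raid1(1))                        # 3
print(min_hosts_raid1(1, full_evacuation=True))  # 4, as noted above
print(min_hosts_raid1(2))                        # 5
```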

I hope this helps in your decision making while administering vSAN.  I recommend testing the scenarios prior to implementing a cluster in production so you get a feel for the various options.