Best Practice for Preparing Hardware for a Virtual SAN Deployment

This may be stating the obvious, but I think it’s worth repeating. Before building a Virtual SAN-enabled cluster, make sure:

  • The server hardware is updated to the latest and greatest system ROM / BIOS / firmware
  • The IO Controller is running the latest firmware
  • The SSDs are running the latest firmware
  • The HDDs are running the latest firmware

These firmware updates often resolve some important hardware issues.
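
If you want a quick inventory of what each host is reporting before you start patching, something like the short pyVmomi sketch below works. This is only a sketch under a few assumptions: the pyvmomi package is installed, and "vcenter.example.com" plus the credentials are placeholders for your environment. It simply reads the BIOS version ESXi sees on each host so you can compare it to your vendor’s latest release; the actual firmware updates still happen through the vendor’s own tools.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # biosInfo may not be populated on every platform, so guard it.
        bios = host.hardware.biosInfo
        if bios:
            print(f"{host.name}: BIOS {bios.biosVersion} (released {bios.releaseDate})")
finally:
    Disconnect(si)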

Next, make sure you follow the Performance Best Practices for VMware vSphere® 5.5:

  • Specifically, make sure the Power Management settings are disabled in the server BIOS (see page 17 of that guide)
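
After changing the BIOS settings it’s worth confirming what ESXi actually sees. Here is a hedged little sketch along the same lines (same placeholder vCenter name and credentials, pyvmomi assumed installed) that prints the CPU power management info for each host: hardwareSupport is what the BIOS is exposing to ESXi, currentPolicy is the active ESXi power policy.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        pm = host.hardware.cpuPowerManagementInfo
        if pm:
            print(f"{host.name}: policy={pm.currentPolicy}; hardware={pm.hardwareSupport}")
finally:
    Disconnect(si)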

Once ESXi is installed on the host:

  • Make sure the IO Controller is loading the correct version of the device driver. You can look this up on the Virtual SAN HCL.
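
A quick way to see which driver each controller is bound to is sketched below (again pyVmomi, with placeholder vCenter name and credentials). The API exposes the adapter model and the driver module name; the installed driver version itself is easiest to read with "esxcli software vib list" on the host, matched against the driver name this prints and against the Virtual SAN HCL entry.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            # Compare hba.driver against the driver and version listed on the VSAN HCL.
            print(f"{host.name}: {hba.device} model={hba.model} driver={hba.driver}")
finally:
    Disconnect(si)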

I work with a lot of customers who are evaluating or implementing Virtual SAN, and following these simple, obvious, but important best practices has led to better performance and a better overall experience with Virtual SAN.

XvMotion, Cross-vCenter vMotion, VVols, LVM active/active, LVM active/passive, SRM & Stretched Storage, VAIO Filters

Recently, with the announcement of the availability of VVols in vSphere.NEXT, I was asked to give a deep-dive presentation to a customer with a focus on what VVols means for protecting VMs. While at EMC as a vSpecialist I led a group focused on protecting VMs, so this is something I’ve been interested in for a while. I’m a big fan of RecoverPoint and am excited about virtual RecoverPoint’s ability to offer continuous data protection for VSAN, as I indicated here. I’m also a huge fan of VPLEX and spent a lot of time during my days at EMC discussing what it could do. The more I dug into what VVols could do to help with various VM movement and data protection schemes, the more I realized there was much to be excited about but also much need for clarification. So, after some research, phone calls, and email exchanges with people in the know, I gathered the details and felt they would be worth sharing.

What follows is kind of an “everything but the kitchen sink” post on various ways to move and protect VMs. There were several pieces of the puzzle to put together, so here are the past, present, and future options.

XvMotion (Enhanced vMotion) – vMotion without shared storage – Released in vSphere 5.1

In vSphere 5.1 VMware eliminated the shared storage requirement of vMotion.

  • vMotion – vMotion can be used to non-disruptively move a VM from one host to another host, provided both hosts have access to the same shared storage (i.e., a datastore backed by a LUN or volume on a storage array or shared storage device). Prior to vSphere 5.1 this was the only option for non-disruptively moving a VM between hosts.
  • Storage vMotion – this allows a VM’s VMDKs to be non-disruptively moved from one datastore to another, provided the host has access to both.
  • XvMotion – As of vSphere 5.1, XvMotion allows a VM on one host, regardless of the storage it is using, to be non-disruptively moved to another host, regardless of the storage it is using. Shared storage is no longer a requirement; the data is moved over the vMotion network (see the sketch after this list). This was a major step toward VM mobility freedom, especially when you think of moving workloads in and out of the cloud.
  • For more information see: Requirements and Limitations for vMotion Without Shared Storage
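
To make the XvMotion piece concrete, here is a minimal pyVmomi sketch of the API call behind it, under a few assumptions: the VM name "demo-vm", the host "esx02.example.com", the datastore "local-ds-02", and the credentials are all placeholders. The key point is that a single RelocateVM_Task call can change both the host and the datastore, so no shared storage is needed between source and destination; the data flows over the vMotion network.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    # Return the first managed object of the given type whose name matches.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    for obj in view.view:
        if obj.name == name:
            return obj
    return None

context = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "demo-vm")
    target_host = find_by_name(content, vim.HostSystem, "esx02.example.com")
    target_ds = find_by_name(content, vim.Datastore, "local-ds-02")

    spec = vim.vm.RelocateSpec()
    spec.host = target_host                       # new host
    spec.datastore = target_ds                    # new (non-shared) datastore
    spec.pool = target_host.parent.resourcePool   # root resource pool of the target cluster

    # One task moves both compute and storage; the copy goes over the vMotion network.
    WaitForTask(vm.RelocateVM_Task(spec))
finally:
    Disconnect(si)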

Cross-vCenter vMotion – Announced at VMworld 2014, available in vSphere.NEXT (future release)

This new feature was announced during the VMworld 2014 US – General Session – Tuesday.
