What Makes VSAN Different?

I had a question today asking how VMware Virtual SAN (VSAN) compares to XYZ company. There are over a dozen virtual-machine-based software solutions that leverage the local disks in ESXi hosts to present storage back to the hosts in the vSphere cluster. Those solutions require a vSphere cluster to be created, and then their virtual machine must be installed on every host to handle the storage services. Some are more efficient at this than others, but there is always a level of effort to "build your own" storage on top of the vSphere cluster, and those virtual machines can consume significant host resources to deliver the storage services they offer. So converged infrastructure itself is nothing new or unique. It's how it's done that is important.

Here’s what makes VMware Virtual SAN (VSAN) different:

  • VSAN is the ONLY software-defined storage solution that is embedded into the ESXi hypervisor, making it the most efficient data path for performance. VMs send their data through the hypervisor straight to disk; there is no middle man. In addition, VSAN is the most efficient in its use of host resources to deliver the storage service. VSAN is designed to take up no more than 10% of the host CPU and memory resources, and testing with vSphere 6 shows significantly less impact than that. Since VSAN is not a VM running on top of the hypervisor, it has this distinct advantage. The tradeoff is that VSAN is a VMware vSphere-only solution.
  • Being built in also makes it simple and easy to manage. There is no VSAN install; it is simply enabled as a feature of the hypervisor by clicking a check box. When enabled, VSAN collects all the local disks on all the hosts and creates the VSAN datastore. Bear in mind, the server IO controller and disks must be in place and the networking configuration must be completed so that VSAN will work when you click that check box. A rough sketch of this enable-and-claim flow follows the screenshot below.

VSAN Checkbox
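Here is a rough conceptual sketch of what that check box triggers, using hypothetical host and disk structures rather than the real vSphere API: prerequisites are verified, then every local disk on every host is claimed into one shared VSAN datastore.

```python
# Conceptual sketch only -- hypothetical structures, not the vSphere API.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vsan_vmknic: bool                                 # VSAN networking configured?
    local_disks: list = field(default_factory=list)   # e.g. ["ssd0", "hdd0", "hdd1"]

def enable_vsan(cluster_hosts):
    """Mimic the 'check box': verify prerequisites, then pool all local disks."""
    for host in cluster_hosts:
        if not host.vsan_vmknic:
            raise RuntimeError(f"{host.name}: VSAN network not configured")
        if not host.local_disks:
            raise RuntimeError(f"{host.name}: no local disks presented by the IO controller")
    # Aggregate every host's local disks into one cluster-wide datastore.
    return {h.name: list(h.local_disks) for h in cluster_hosts}

hosts = [Host("esxi-01", True, ["ssd0", "hdd0"]),
         Host("esxi-02", True, ["ssd0", "hdd0"])]
print(enable_vsan(hosts))   # {'esxi-01': ['ssd0', 'hdd0'], 'esxi-02': ['ssd0', 'hdd0']}
```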

  • VSAN is fully integrated with VMware Storage Policy Based Management (SPBM), VASA, and VVOLs. When that check box is clicked, the VSAN datastore is created and its VASA provider is registered with vCenter to expose its capabilities to SPBM. This allows different policies to be created so the same pool of capacity can deliver different service levels to different VMs based on performance, availability, and protection. When VMs are attached to a policy service level, their VM objects are created on the VSAN datastore in the form of Virtual Volume (VVOL) objects. VSAN further breaks these VVOL objects up into components to deliver on the defined protection and performance service levels. A small sketch of this policy-to-components mapping follows the diagram below.

VSAN and SPBM
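To make the policy-to-components relationship concrete, here is a minimal sketch assuming simple FTT-based mirroring; the policy name and component labels are illustrative and not taken from the SPBM API.

```python
# Illustrative only: how one policy translates into object components under FTT=n mirroring.
def components_for_object(obj_name, failures_to_tolerate):
    """FTT=n mirroring needs n+1 data replicas plus witness component(s) for quorum."""
    replicas = [f"{obj_name}-data-{i}" for i in range(failures_to_tolerate + 1)]
    witnesses = [f"{obj_name}-witness-{i}" for i in range(failures_to_tolerate)]
    return replicas + witnesses

gold_policy = {"failures_to_tolerate": 1}    # hypothetical policy name
vm_objects = ["namespace", "swap", "vmdk"]   # every VM has at least these objects
for obj in vm_objects:
    print(obj, "->", components_for_object(obj, gold_policy["failures_to_tolerate"]))
# vmdk -> ['vmdk-data-0', 'vmdk-data-1', 'vmdk-witness-0']
```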

  • VSAN handles data protection at the software layer, so it doesn't suffer the performance and capacity penalty of hardware RAID. Different "tiers" of protection can be defined by policy and set for different VMs using the same pool of disks in the VSAN datastore. The Number of Failures to Tolerate setting determines how many data replicas are written to different hosts to deliver the desired protection level for each VM.
  • VSAN now supports a feature called "Rack Diversity". I wrote about the benefits here. This brings software-defined self healing with Failure Domains. Hosts in the same rack can be placed into the same fault domain so that if an entire rack is lost, data remains available because another replica of the data resides on a host in another rack. The sketch after the diagram below shows the basic replica math.

VSAN Rack Diversity
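The math behind these protection tiers is straightforward. A small sketch, using the commonly cited mirroring rule that #FTT=n requires n+1 data replicas and at least 2n+1 hosts (or fault domains, once Rack Diversity is in play); the capacity figure is illustrative.

```python
def protection_requirements(failures_to_tolerate, replica_size_gb):
    """Capacity and placement cost of an FTT=n mirroring policy (illustrative)."""
    replicas = failures_to_tolerate + 1                  # full data copies
    min_fault_domains = 2 * failures_to_tolerate + 1     # hosts, or racks with Rack Diversity
    raw_capacity_gb = replicas * replica_size_gb
    return replicas, min_fault_domains, raw_capacity_gb

for ftt in (0, 1, 2):
    print(f"FTT={ftt}:", protection_requirements(ftt, replica_size_gb=100))
# FTT=1: (2, 3, 200) -> two 100GB copies on different hosts/racks plus a witness
```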

  • VSAN is a hybrid storage solution, leveraging SSDs as a cache to accelerate both reads and writes and low-cost, high-capacity hard disks to persist the data. This results in near all-flash-array performance at a fraction of the cost. With vSphere 6 and Virtual SAN 6, an all-flash VSAN is also supported, delivering extreme performance.

VMware Virtual SAN™ 6.0 Performance

  • VSAN is one of the few software-based storage solutions that can leverage in-host SSD/flash for read AND write caching. There are many solutions that can use in-host SSD/flash for read caching. Write-back caching is more difficult to implement, but VSAN does it while maintaining high availability of those writes across the cluster. A simplified sketch of that write path follows.
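Conceptually, the hard part of write-back caching is that an acknowledged write has to survive the loss of the host holding the cache. A simplified sketch of the idea (not VSAN's actual internals): the write is committed to flash on every host that holds a replica before the VM gets its acknowledgement, and is destaged to the capacity disks later.

```python
# Simplified model of write-back caching with cross-host redundancy (not VSAN internals).
class HostCache:
    def __init__(self, name):
        self.name = name
        self.ssd_buffer = []    # writes land on flash first
        self.hdd = []           # destaged to capacity disks later

    def commit_to_flash(self, block):
        self.ssd_buffer.append(block)

    def destage(self):
        self.hdd.extend(self.ssd_buffer)
        self.ssd_buffer.clear()

def write(block, replica_hosts):
    """Acknowledge only after the write is on flash on every replica host."""
    for host in replica_hosts:
        host.commit_to_flash(block)
    return "ACK"                # safe: losing one host still leaves a cached copy

h1, h2 = HostCache("esxi-01"), HostCache("esxi-02")
print(write("block-42", [h1, h2]))   # ACK
h1.destage()                         # later, asynchronously, per host
```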

All other converged software-based storage solutions require running a virtual machine on top of ESXi. So every VM's IO has to go through the hypervisor, then through that single storage VM's IO path, then back through the hypervisor, and finally to the disks. In some cases the disks themselves must be set up in a hardware RAID configuration, and the storage VM then implements software RAID on top of the underlying hardware RAID, paying a double performance and capacity penalty. Each of these VMs also consumes additional host CPU and memory; some require 2-4 vCPUs and 16GB or more of RAM. And some are limited in the number of nodes they can scale to and in how much total capacity they can support. Again, some solutions are more efficient and scalable than others, so do the homework and ask the right questions when comparing. Finally, most don't support VMware's Storage Policy Based Management, which is the VMware framework for managing all vSphere storage going forward. The short calculation below illustrates the double capacity penalty.
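A back-of-the-envelope calculation with made-up numbers shows that double penalty. Assume 6 hosts, each with 10 x 1TB local disks, hardware RAID5 inside each host, and 2-way software mirroring on top, versus VSAN keeping two copies in software only (FTT=1).

```python
hosts, disks_per_host, disk_tb = 6, 10, 1.0
raw_tb = hosts * disks_per_host * disk_tb                      # 60 TB raw

# VSA-style: hardware RAID5 per host (lose ~1 disk per host), then 2-way software mirroring.
after_raid5 = raw_tb * (disks_per_host - 1) / disks_per_host   # 54 TB
vsa_usable = after_raid5 / 2                                   # 27 TB usable (45% of raw)

# VSAN-style: no hardware RAID layer; FTT=1 keeps two copies in software only.
vsan_usable = raw_tb / 2                                       # 30 TB usable (50% of raw)

print(vsa_usable, vsan_usable)
```

The exact numbers vary by RAID level and by solution, but stacking two protection layers always pays for protection twice.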

VMware's vision for Virtual SAN is for it to be the best storage solution for virtual machines. With the release of vSphere 6 and Virtual SAN 6, VMware is closer to that vision. There are many software-defined storage choices out there. Hopefully this helps in the decision-making process.

Virtual SAN 6 Rack Awareness – Software Defined Self Healing with Failure Domains

I continue to think one of the hidden gem features of VMware Virtual SAN (VSAN) is its software-defined self-healing ability. I wrote about it a few months back here in: Virtual SAN Software Defined Self Healing

Since Virtual SAN is such a different way to do storage, it allows for some interesting configuration combinations. With vSphere 6, VMware will be introducing a new Virtual SAN feature called "Rack Awareness", built into the release and accomplished by creating multiple "Failure Domains" and placing the hosts in the same rack into the same Failure Domain. This "Rack Awareness" feature builds on Virtual SAN's # Failures To Tolerate policy.

The rest of this post will look a lot like the previous post I did on self healing but will translate it for the Rack Awareness feature.

Minimum Rack Awareness Configuration

Let's start with the smallest VSAN "Rack Awareness" configuration possible that provides redundancy: a 3-rack, 6-host (2 per rack) vSphere cluster with VSAN enabled and 1 SSD and 1 HDD per host. In VSAN, every disk group is anchored by an SSD, so the single HDD is placed into a disk group with the single SSD. The SSD performs the write and read caching for the HDDs in its disk group; the HDD permanently stores the data.

Let's start with a single VM with the default # Failures To Tolerate (#FTT) equal to 1. A VM has at least 3 objects (namespace, swap, vmdk). Each object has 3 components (data 1, data 2, witness) to satisfy #FTT=1. Let's focus on just the vmdk object and say that the VM sits on host 1, with replicas/mirrors/copies (the terms can be used interchangeably) of its vmdk data on host 1 in rack 1 and host 2 in rack 2, and the witness on host 3 in rack 3. The rule in Virtual SAN is that each of these three components of an object (data 1, data 2, witness) must sit on a different host. With Rack Awareness, they must also sit on hosts in different racks.

RackAware01

OK, let's start causing some trouble. With the default # Failures To Tolerate equal to 1, VM data on VSAN should remain available if a single SSD, a single HDD, a single host, or an entire rack fails. The sketch below walks through the placement rule and a rack failure.
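Here is a minimal sketch of that placement rule and the failure test, assuming the 3-rack, 6-host layout above. The rack membership and host names are illustrative (the host numbering differs slightly from the prose), and the quorum check is a simplification of how VSAN decides availability.

```python
# Illustrative placement of one vmdk object's components under FTT=1 with Rack Awareness.
racks = {"rack1": ["host1", "host2"],
         "rack2": ["host3", "host4"],
         "rack3": ["host5", "host6"]}

# Each component must land on a different host AND in a different rack (fault domain).
placement = {"data1":   ("rack1", "host1"),
             "data2":   ("rack2", "host3"),
             "witness": ("rack3", "host5")}

def available_after_rack_failure(placement, failed_rack):
    """With FTT=1 mirroring, the object stays available while a majority of components survive."""
    surviving = [c for c, (rack, _) in placement.items() if rack != failed_rack]
    return len(surviving) > len(placement) / 2

for rack in racks:
    print(f"lose {rack}: data still available ->", available_after_rack_failure(placement, rack))
# Losing any single rack leaves 2 of 3 components (a quorum), so the vmdk stays available.
```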

Continue reading “Virtual SAN 6 Rack Awareness – Software Defined Self Healing with Failure Domains”

VMware Jobs!!! – Software Defined Storage (Virtual SAN, EVO:RAIL, etc.)

I've been at VMware for 1.5 years and have had a blast talking to customers, partners, and VMware employees about all things software-defined storage. This primarily involves Virtual SAN & EVO:RAIL, which take advantage of VASA, Storage Policy Based Management, and VVols. Because we are talking about storage, it also includes discussing the benefits of vSphere Replication, Site Recovery Manager, and vSphere Data Protection. Basically, anything to do with storing, protecting, and managing virtual machine data. It's exciting to be part of the whole software-defined data center strategy.

We are growing our Software Defined Storage team and are looking for qualified rockstars. If you are one, the topics above are familiar to you, and you are interested in joining the VMware Software Defined Storage team, then check out the openings below. Feel free to apply directly or reach out to me with any questions at: pkeilty at vmware dot com

You can find the openings on the VMware Public Job Page: http://vmware.jobs/

Plug in the Requisition Number below to find more details on the openings and full job descriptions:

Systems Engineers

  • Requisition Number 55635BR – Sr. Systems Engineer-Software Defined Storage-East in New York, New York, United States

We are also looking for SEs in the Ohio Valley and the Southeast USA. In addition, we are looking for a Technical Field SE in the East. These job requisitions will be posted soon.

Sales

  • Requisition Number 58265BR – Storage Account Executive in Austin, Texas, United States
  • Requisition Number 58420BR – Storage Account Executive – Federal in Reston, Virginia, United States
  • Requisition Number 58501BR – Sales Leader, Software Defined Storage – Palo Alto or Austin in Austin, Texas, United States
  • Requisition Number 58504BR – Inside Sales Representative, Software Defined Storage in Austin, Texas, United States

Good luck!

VMware Software Defined Storage and Virtual SAN at PEX

Unfortunately I won't be attending VMware PEX this year. It's a great event to meet up with our great VMware partners and learn the latest VMware tech. There will be tons of software-defined goodness; specifically, here is a great link to all the storage content:

Discover Software-Defined Storage & VMware Virtual SAN at PEX 2015!


Best Practice for Preparing Hardware for a Virtual SAN Deployment

This may be stating the obvious, but I think it's worth repeating. Before building a Virtual SAN enabled cluster, make sure:

  • The server hardware is updated to the latest and greatest system ROM / BIOS / firmware
  • The IO Controller is running the latest firmware
  • The SSDs are running the latest firmware
  • The HDDs are running the latest firmware

These firmware updates often resolve some important hardware issues.

Next, make sure you follow the Performance Best Practices for VMware vSphere® 5.5

  • Specifically, make sure Power Management BIOS Settings are disabled in the server BIOS (see page 17)

Once ESXi is installed on the host:

  • Make sure the IO Controller is loading the correct version of the device driver. You can look this up on the Virtual SAN HCL. A rough pre-flight checklist sketch follows.
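To tie the items above together, here is a trivial sketch of the kind of pre-flight check I mean. The version strings and the required-versions table are placeholders; the real values must come from your hardware vendor and the Virtual SAN HCL.

```python
# Placeholder pre-flight checklist; required versions must come from the Virtual SAN HCL
# and your hardware vendor -- these values are made up for illustration.
required = {
    "bios":              "2.30",
    "io_controller_fw":  "6.22",
    "io_controller_drv": "6.605.xx",   # driver version listed on the HCL
    "ssd_fw":            "DL10",
    "hdd_fw":            "MS04",
}

def preflight(host_inventory):
    """Return the items that do not match the required versions."""
    return [f"{item}: found {host_inventory.get(item)}, expected {want}"
            for item, want in required.items()
            if host_inventory.get(item) != want]

host = {"bios": "2.30", "io_controller_fw": "6.10",
        "io_controller_drv": "6.605.xx", "ssd_fw": "DL10", "hdd_fw": "MS04"}
print(preflight(host))   # ['io_controller_fw: found 6.10, expected 6.22']
```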

I work with a lot of customers who are evaluating or implementing Virtual SAN, and following these simple, obvious, but important best practices has led to better performance and a better overall experience with Virtual SAN.

XvMotion, Cross-vCenter vMotion, VVols, LVM active/active, LVM active/passive, SRM & Stretched Storage, VAIO Filters

Recently, with the announcement of the availability of VVols in vSphere.NEXT, I was asked to give a deep-dive presentation to a customer with a focus on what VVols mean for protecting VMs. While at EMC as a vSpecialist I led a group focused on protecting VMs, so this is something I've been interested in for a while. I'm a big fan of RecoverPoint and am excited about virtual RecoverPoint's ability to offer continuous data protection for VSAN, as I indicated here. I'm also a huge fan of VPLEX and spent a lot of time during my days at EMC discussing what it could do. The more I dug into what VVols could do to help with various VM movement and data protection schemes, the more I realized there was much to be excited about but also much need for clarification. So, after some research, phone calls, and email exchanges with people in the know, I gathered the information and felt it would be good to share.

What follows is kind of an "everything but the kitchen sink" post on various ways to move and protect VMs. There were several pieces of the puzzle to put together, so here are the past, present, and future options.

XvMotion (Enhanced vMotion) – vMotion without shared storage – Released in vSphere 5.1

In vSphere 5.1 VMware eliminated the shared storage requirement of vMotion.

  • vMotion – vMotion can be used to non-disruptively move a VM from one host to another host, provided both hosts have access to the same shared storage (i.e., a datastore backed by a LUN or volume on a storage array or shared storage device). Prior to vSphere 5.1, this was the only option to non-disruptively move a VM between hosts.
  • Storage vMotion – this allows a VM's vmdks to be non-disruptively moved from one datastore to another, provided the host has access to both.
  • XvMotion – As of vSphere 5.1, XvMotion allows a VM on one host, regardless of the storage it is using, to be non-disruptively moved to another host, regardless of the storage that host is using. Shared storage is no longer a requirement; the data is moved over the vMotion network. This was a major step toward VM mobility freedom, especially when you think of moving workloads in and out of the cloud. A hedged API sketch follows this list.
  • For more information see: Requirements and Limitations for vMotion Without Shared Storage
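For anyone scripting these moves, here is a hedged sketch using the open-source pyVmomi bindings. It assumes the vm, target_host, target_resource_pool, and target_datastore managed objects have already been looked up from the vCenter inventory (lookup code omitted), and the RelocateSpec fields are shown as I recall the vSphere API, so verify them against the SDK documentation for your release.

```python
# Hedged sketch: shared-nothing (XvMotion-style) relocation with pyVmomi.
# Assumes vm, target_host, target_resource_pool, and target_datastore were already
# retrieved from the vCenter inventory; verify field and method names against the SDK docs.
from pyVmomi import vim

def xvmotion(vm, target_host, target_resource_pool, target_datastore):
    spec = vim.vm.RelocateSpec()
    spec.host = target_host            # change compute: destination ESXi host
    spec.pool = target_resource_pool   # destination resource pool
    spec.datastore = target_datastore  # change storage: destination datastore
    return vm.RelocateVM_Task(spec=spec)   # data flows over the vMotion network
```

Whether a combined host-plus-datastore relocation is accepted without shared storage depends on the vSphere version; see the requirements link above.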

Cross-vCenter vMotion – Announced at VMworld 2014, available in vSphere.NEXT (future release)

This new feature was announced during the VMworld 2014 US – General Session – Tuesday.

Continue reading “XvMotion, Cross-vCenter vMotion, VVols, LVM active/active, LVM active/passive, SRM & Stretched Storage, VAIO Filters”

“Virtualization and Cloud Are Here to Stay” PC Connection podcast series – VMware Software Defined Storage and Virtual SAN

This is another fun short project I was fortunate enough to be involved in with a great VMware partner, PC Connection.

VMware Software Defined Storage and Virtual SAN

This is part of their “Virtualization and Cloud Are Here to Stay” podcast series.  Thanks to PC Connection for letting me be a part of it.