- vSAN = VMware’s software-defined storage solution, formerly known as Virtual SAN or VSAN. The only acceptable name is now “vSAN”, with the lowercase “v”.
- SPBM = Storage Policy Based Management
- VASA = vSphere APIs for Storage Awareness
- VVol = Virtual Volume
- PE = Protocol Endpoint
- VAAI = vSphere APIs for Array Integration
- VAIO = vSphere APIs for IO Filtering
- VR = vSphere Replication
- SRM = Site Recovery Manager
- VDP = vSphere Data Protection
- vFRC = vSphere Flash Read Cache
- VSA = vSphere Storage Appliance (end of life)
- VMFS = Virtual Machine File System
- SvMotion = Storage vMotion
- XvMotion = vMotion across hosts, clusters, and vCenters (without shared storage)
- SDRS = Storage Distributed Resource Scheduler
- SIOC = Storage Input/Output Control
- MPIO = Multipath Input/Output
What Makes VSAN Different?
I had a question today asking how VMware Virtual SAN (VSAN) compares to XYZ company. There are over a dozen software-based solutions that run as virtual machines and leverage the local disks in ESXi hosts to present storage back to the hosts in the vSphere cluster. Those solutions require a vSphere cluster to be created, and then their virtual machine must be installed on every host to handle the storage services. Some are more efficient at this than others, but there is always a level of effort to “build your own” storage on top of the vSphere cluster, and those virtual machines can take up significant host resources to deliver the storage services they offer. So converged infrastructure itself is nothing new or unique. It’s how it’s done that is important.
Here’s what makes VMware Virtual SAN (VSAN) different:
- VSAN is the ONLY software-defined storage solution that is embedded in the ESXi hypervisor, giving it the most efficient data path for performance. VMs send their data through the hypervisor straight to disk; there is no middleman. In addition, VSAN is the most efficient in its use of host resources to deliver the storage service. VSAN is designed to take up no more than 10% of host CPU and memory resources, and testing with vSphere 6 shows significantly less impact than that. Since VSAN is not a VM running on top of the hypervisor, it has this distinct advantage. The tradeoff is that VSAN is a VMware vSphere-only solution.
- Being built in also makes VSAN simple and easy to manage. There is no VSAN install; it is simply enabled as a feature of the hypervisor by clicking a check box (a minimal API sketch of this follows the list below). When enabled, VSAN collects all the local disks on all the hosts and creates the VSAN datastore. Bear in mind, the server IO controllers and disks must be in place and the networking configuration must be completed to make sure VSAN will work when you click that check box.
- VSAN is fully integrated with VMware Storage Policy Based Management (SPBM), VASA, and VVols. When that check box is clicked, the VSAN datastore is created and its VASA provider is registered with vCenter to expose its capabilities to SPBM. This allows different policies to be created so the same pool of capacity can deliver different service levels to different VMs based on performance, availability, and protection. When VMs are attached to a policy service level, their VM objects are created on the VSAN datastore in the form of Virtual Volume (VVol) objects. VSAN further breaks these VVol objects up into components to deliver the defined protection and performance service levels.
- VSAN handles data protection at the software layer, so it doesn’t suffer the performance and capacity penalty of hardware RAID. Different “tiers” of protection can be defined by policy and set for different VMs using the same pool of disks in the VSAN datastore. The Number of Failures to Tolerate setting determines how many data replicas are written to different hosts to deliver the desired protection level for each VM.
- VSAN now supports a feature called “Rack Diversity”. I wrote about the benefits here. This brings software-defined self-healing with failure domains: hosts in the same rack can be placed into the same fault domain so that if an entire rack is lost, data remains available because another replica of the data resides on a host in another rack.
- VSAN is a hybrid storage solution, leveraging SSDs as cache to accelerate both reads and writes and low-cost, high-capacity hard disks to persist the data. This results in near all-flash array performance at a fraction of the cost. With vSphere 6 and Virtual SAN 6, an all-flash VSAN is also supported, delivering extreme performance.
For benchmark details, see the white paper “VMware Virtual SAN™ 6.0 Performance”.
- VSAN is one of the few software-based storage solutions that can leverage in-host SSD/flash for read AND write caching. There are many solutions that can leverage in-host SSD/flash for read caching. Write-back caching is more difficult to implement, but VSAN does it while maintaining high availability of those writes across the cluster.
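To make the “click a check box” point from the list above concrete, here is a minimal sketch of enabling VSAN on an existing cluster with pyVmomi (VMware’s Python SDK). The vCenter address, credentials, and cluster name are placeholder assumptions, and error handling is omitted; treat this as an illustration, not a production script.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate validation for the example.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the target cluster by name (assumed: "Cluster01").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")

# The "check box": VSAN is a flag on the cluster configuration.
# autoClaimStorage lets VSAN claim the hosts' eligible local disks
# and build the VSAN datastore automatically.
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True)))
cluster.ReconfigureComputeResource_Task(spec, True)

Disconnect(si)
```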
All other converged software-based storage solutions require running a virtual machine on top of ESXi. All VM IO must travel through the hypervisor, into that single storage VM, back through the hypervisor, and then to the disks. In some cases the disks themselves must be set up with a hardware RAID configuration, and the storage VM then implements software RAID on top of the underlying hardware RAID, paying a double performance and capacity penalty. Each of these VMs also consumes additional host CPU and memory; some require 2-4 vCPUs and 16GB or more of RAM. And some are limited in the number of nodes they can scale to and the total capacity they can support. Again, some solutions are more efficient and scalable than others, so do the homework and ask the right questions when comparing. Finally, most don’t support VMware’s Storage Policy Based Management, which is the VMware framework for managing all vSphere storage going forward.
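To put rough numbers on that double capacity penalty (the values below are purely illustrative assumptions): a VSA-style solution sitting on hardware RAID 5 and then mirroring in software gives up capacity twice, while a software-only replica scheme pays the cost once.

```python
raw_tb = 100.0                 # assumed raw disk capacity across the cluster
raid5_efficiency = 7.0 / 8.0   # e.g., 7+1 RAID 5 sets lose one disk in eight to parity
software_copies = 2            # the storage VM then mirrors data across hosts

vsa_usable = raw_tb * raid5_efficiency / software_copies   # ~43.8 TB usable
vsan_usable = raw_tb / software_copies                     # 50.0 TB usable (no HW RAID layer)
print(f"VSA-style: {vsa_usable:.1f} TB, software-replica-only: {vsan_usable:.1f} TB")
```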
VMware’s vision for Virtual SAN is for it to be the best storage solution for virtual machines. With the release of vSphere 6 and Virtual SAN 6, VMware is closer to that vision. There are many software-defined storage choices out there; hopefully this helps in the decision-making process.
XvMotion, Cross-vCenter vMotion, VVols, LVM active/active, LVM active/passive, SRM & Stretched Storage, VAIO Filters
Recently, with the announcement of the availability of VVols in vSphere.NEXT, I was asked to give a deep-dive presentation to a customer with a focus on what VVols mean for protecting VMs. While at EMC as a vSpecialist I led a group focused on protecting VMs, so this is something I’ve been interested in for a while. I’m a big fan of RecoverPoint and am excited about virtual RecoverPoint’s ability to offer continuous data protection for VSAN, as I indicated here. I’m also a huge fan of VPLEX and spent a lot of time during my days at EMC discussing what it could do. The more I dug into what VVols could do to help with various VM movement and data protection schemes, the more I realized there was much to be excited about but also much need for clarification. So, after some research, phone calls, and email exchanges with people in the know, I gathered the information and felt it would be good to share.
What follows is an “everything but the kitchen sink” post on various ways to move and protect VMs. There were several pieces of the puzzle to put together, so here are the past, present, and future options.
XvMotion (Enhanced vMotion) – vMotion without shared storage – Released in vSphere 5.1
In vSphere 5.1, VMware eliminated the shared storage requirement for vMotion.
- vMotion – vMotion can be used to non-disruptively move a VM from one host to another host, provided both hosts have access to the same shared storage (i.e., a datastore backed by a LUN or volume on a storage array or shared storage device). Prior to vSphere 5.1 this was the only option to non-disruptively move a VM between hosts.
- Storage vMotion – this allows a VM’s VMDKs to be non-disruptively moved from one datastore to another, provided the host has access to both.
- XvMotion – As of vSphere 5.1, XvMotion allows a VM on one host, regardless of the storage it is using, to be non-disruptively moved to another host, regardless of the storage that host is using. Shared storage is no longer a requirement; the data is moved over the vMotion network. This was a major step towards VM mobility freedom, especially when you think of moving workloads in and out of the cloud (see the API sketch after this list).
- For more information see: Requirements and Limitations for vMotion Without Shared Storage
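For the API-minded, here is a pyVmomi sketch of what an XvMotion looks like: a single relocate call that changes host and datastore at once. The vm, dest_host, dest_pool, and dest_ds names are hypothetical and assumed to have been looked up beforehand (for example, with a container view as in the VSAN example earlier).

```python
from pyVmomi import vim

# Assumed to be resolved beforehand (hypothetical names):
#   vm        -> the vim.VirtualMachine to migrate
#   dest_host -> a vim.HostSystem that shares no storage with the source
#   dest_pool -> a vim.ResourcePool on the destination
#   dest_ds   -> a vim.Datastore visible to the destination host
spec = vim.vm.RelocateSpec()
spec.host = dest_host       # change compute...
spec.pool = dest_pool       # (required when the host changes)
spec.datastore = dest_ds    # ...and storage in the same operation

# With no shared storage, the disks are streamed over the vMotion
# network while the VM keeps running.
task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.defaultPriority)
```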
Cross-vCenter vMotion – Announced at VMworld 2014, available in vSphere.NEXT (future release)
This new feature was announced during the VMworld 2014 US – General Session – Tuesday.
Quick discussion on VVols
One of the big topics at VMworld 2014 was VVols. VMware announced it will be part of the next release of vSphere, and almost every storage vendor on the planet is excited about the benefits that VVols bring. I was working the VVol booth at VMworld and had the pleasure of being interviewed by VMworld TV to discuss the comparison between VSAN and VVols. It was fun but unscripted and off the cuff, so here it is:
VMworld TV Interview: Peter Keilty of VMware Discussed Virtual Volumes
What I’m trying to say is:
- VSAN is the first supported storage solution that takes advantage of VVols.
- VVols, in vSphere.NEXT, will work in conjunction with VASA to allow all block and file based storage arrays to fully realize the benefits of Storage Policy Based Management (SPBM).
- Each storage vendor can write a VASA/VVol provider that registers with vCenter to integrate with the vSphere APIs and promote their storage capabilities to vCenter. I expect just about every storage array vendor to do this; I have seen VVol demonstrations by EMC, NetApp, Dell, HP, and IBM.
- VVols eliminate the requirement to create LUNs or volumes on the arrays. Instead, arrays present one or more pools of capacity in the form of storage containers that the hosts in the cluster see as datastores.
- Through SPBM, administrators can create different service levels in the form of policies that can be satisfied by the underlying storage provider container.
- When VMs get provisioned, they are assigned to a policy, and their objects (namespace, swap, VMDKs, snapshots/clones) are placed as native objects into the container in the form of VVols.
- You can even assign objects from the same VM to different policies to give them different service levels, all potentially satisfied by the same storage provider or by different provider containers. For example, a VMDK holding an OS image might want dedupe enabled, while a VMDK for a database might not want dedupe but might want cache acceleration. Different policies can be defined, and each object can be assigned to the policy that delivers the desired service level. The objects could be placed into the same storage array pool while taking advantage of different storage array features, and these assignments can be changed on the fly as needed (see the API sketch below).
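As a sketch of what that per-object policy assignment can look like through the vSphere API, here is a pyVmomi fragment. The vm and db_disk objects and the two SPBM profile IDs are hypothetical and assumed to have been retrieved already (the profile IDs via the separate SPBM/pbm endpoint, which pyVmomi also exposes).

```python
from pyVmomi import vim

# Assumed inputs (hypothetical names):
#   vm           -> the vim.VirtualMachine being reconfigured
#   db_disk      -> an existing vim.vm.device.VirtualDisk on that VM
#   OS_POLICY_ID -> SPBM profile ID for the VM home/OS objects (from the pbm API)
#   DB_POLICY_ID -> SPBM profile ID for the database disk (from the pbm API)

# Re-attach the database VMDK to its own policy while the VM home
# namespace and remaining objects follow the VM-level policy.
disk_change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=db_disk,
    profile=[vim.vm.DefinedProfileSpec(profileId=DB_POLICY_ID)])

config = vim.vm.ConfigSpec(
    vmProfile=[vim.vm.DefinedProfileSpec(profileId=OS_POLICY_ID)],
    deviceChange=[disk_change])
task = vm.ReconfigVM_Task(config)  # compliance is then visible in the Web Client
```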
Like all the storage vendors out there, I’m very excited about the benefits of VVols. For a full description and deep dive check out this awesome VMworld session by Rawlinson Rivera (http://www.punchingclouds.com/) and Suzy Visvanathan:
Virtual Volumes Technical Deep Dive
VMware VSAN Beta Highlights & Best Practices
I had the pleasure of speaking at one of the breakout sessions at the DFW VMUG in Dallas, TX this past week. To prepare, I was able to talk to Cormac Hogan, VMware’s Senior Technical Marketing Architect for VSAN. Cormac is a wealth of knowledge, so I also spent a lot of time absorbing the great articles on his blog http://cormachogan.com/ and his VSAN demos here. Additionally, I found good stuff on Duncan Epping’s http://www.yellow-bricks.com. In 45 minutes I couldn’t do a deep dive, so I stuck to the highlights, which I’ve listed below. Bear in mind this all relates to the VSAN beta that just recently went live. If you haven’t already done so, sign up at http://vsanbeta.com/.
- vSphere 5.5 & vCenter 5.5 required – VSAN is built into vSphere & management is through the Web Client for vSphere 5.5.
- Min 1 SSD & 1 HDD per host, Max 1 SSD & 6 HDD per disk group, Max 5 disk groups per host
- Min 3 Hosts, Max 8 Hosts, Max 1 VSAN datastore per cluster (support for more hosts may increase in the future)
- Max vsanDatastore capacity = (8 hosts * 5 disk groups * 6 disks * size of disks) = 240 * size of disks (see the worked example after this list)
- Capacity is based on HDDs only; SSDs do not contribute towards capacity and are used as read cache and write buffer
- Can provision individual VMs with different profiles on the same VSAN datastore
- Data stripes and copies can be anywhere in the cluster (no locality of reference)
- SAS/SATA RAID controller must work in “pass-thru” or “HBA” mode (no RAID)
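Plugging an assumed disk size into the datastore maximum above makes the scale concrete (the 4 TB HDD size is my assumption, not a beta requirement):

```python
hosts = 8           # max hosts per cluster in the beta
disk_groups = 5     # max disk groups per host
hdds = 6            # max HDDs per disk group (HDDs only; SSDs are cache)
hdd_size_tb = 4.0   # assumed size of each HDD

max_vsan_datastore_tb = hosts * disk_groups * hdds * hdd_size_tb
print(max_vsan_datastore_tb)  # 240 disks * 4 TB = 960 TB raw
```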
VSAN Best Practices
- Host boot image: stateless booting is not supported; the preferred method is to boot from SD card/USB
- SSD capacity should be a minimum of 10% of HDD capacity (e.g., 1 GB of SSD for every 10 GB of SAS/SATA); see the sizing sketch at the end of this list
- Disparate hardware configurations are supported, but the best practice is identical host hardware configurations (same number, capacity, and performance of disks)
- Dedicated 10Gb network for VSAN (1Gb is supported); use a NIC team of 2 x 10Gb NICs for availability
- Little sense in enabling vSphere Flash Read Cache, since VSAN already uses SSD for caching
- VSAN VM policy management – leave the settings below at their defaults unless there is a specific need to change them:
- Number of Disk Stripes Per Object: Default = 1; Max = 12
- Number of Failures To Tolerate: Default = 1; Max = 3
- Object Space Reservation: Default = 0%, Maximum = 100%
- Flash Read Cache Reservation: Default = 0%, Maximum = 100%
- Force Provisioning: Default = Disabled
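A minimal sizing sketch, with assumed numbers, tying together the 10% flash rule of thumb and the Number of Failures To Tolerate setting from the list above:

```python
ftt = 1                       # Number of Failures To Tolerate (the default)
replicas = ftt + 1            # data copies VSAN writes across hosts
min_hosts = 2 * ftt + 1       # hosts required to tolerate ftt failures

vm_data_tb = 10.0             # assumed VM data to protect
hdd_needed_tb = vm_data_tb * replicas    # raw HDD consumed by the replicas
ssd_needed_tb = 0.10 * hdd_needed_tb     # 10% of HDD capacity as flash

print(min_hosts, hdd_needed_tb, ssd_needed_tb)  # 3 hosts, 20.0 TB HDD, 2.0 TB SSD
```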
I hope this helps summarize what VSAN is all about. It was great to get so many good questions from the audience and to see how excited they all were about VSAN. I’m looking forward to seeing how the beta goes and how people like it!