Storage Best Practices in Virtualization and Cloud

One of the most complex and most poorly designed portions of a virtualization infrastructure is storage. Historically, most of the issues faced with a virtual infrastructure have been in the network and storage layers, not in the hypervisor itself. Saying that your virtual infrastructure is poorly designed is simple, but designing storage becomes difficult once you start looking at all the layers you have to take into account. As with the rest of the best practices, I have now added the Storage Best Practices page, where you can get all the information you need for a successful storage design on the vSphere platform.

Now let's look at the layers in the storage design process to understand the basics. In this blog post I am concentrating only on FC-based storage, not iSCSI, NAS, or FCoE. I am taking a bottom-up approach when discussing these layers.

1st Layer:

The storage itself. In this layer you are mainly concerned with disk types (FC, SATA, SSD), how many IOPS the storage can serve, read/write cache, the number of storage processors, and front-end/back-end ports.

But what is more important here is to know which applications are going to run from the storage. Have you done what is called application/workload profiling? Do you have an application that needs more IOPS? Is the application heavier on reads or on writes? Based on this you will have to drive the decision on the disk type and also the RAID type to be used. You can see the RAID characteristics below, and the diagram above shows that RAID 10 is used for high-IO workloads while RAID 5 is used for normal IO.
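To make the read/write mix concrete, here is a minimal back-of-the-envelope sketch. The workload numbers are illustrative assumptions, not measurements from any specific array; the write penalties are the commonly quoted figures for RAID 10 and RAID 5.

```python
# Rough back-end IOPS estimate per RAID type (illustrative figures only).
# Write penalty: RAID 10 = 2 back-end I/Os per front-end write,
# RAID 5 = 4 (read data, read parity, write data, write parity).

def backend_iops(frontend_iops: float, read_pct: float, write_penalty: int) -> float:
    """Back-end IOPS the disks must deliver for a given front-end workload."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * write_penalty

# Example: a hypothetical 5,000 IOPS workload, 70% reads.
workload_iops, read_ratio = 5000, 0.70
print("RAID 10:", backend_iops(workload_iops, read_ratio, 2))  # 6,500 back-end IOPS
print("RAID 5 :", backend_iops(workload_iops, read_ratio, 4))  # 9,500 back-end IOPS
```

Divide the back-end figure by the IOPS a single spindle of your chosen disk type can sustain and you get a rough spindle count; the heavier the workload is on writes, the more RAID 10 pays off.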

Disk type selection is again going to be driven by the workload profiling and by cost. Once you finalize this piece, you then have to decide what size LUN you want to carve out from the storage. This decision has to be driven by capacity and IOPS, plus how many virtual machines you are going to run per VMFS volume (datastore), which in turn depends on your own requirements. So no, there is no one-size-fits-all answer here in terms of the sweet spot for running VMs from a single VMFS volume (datastore).
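As a rough illustration of how capacity, IOPS and VM count interact, here is a small sizing sketch; every per-VM number below is a made-up assumption that you would replace with your own profiling data.

```python
# Hypothetical per-VM profile; replace with your own workload profiling data.
vms_per_datastore = 15        # how many VMs you plan to run per VMFS volume
avg_vm_size_gb    = 60        # average provisioned size per VM
avg_vm_iops       = 150       # average steady-state IOPS per VM
growth_headroom   = 1.25      # 25% headroom for growth and snapshots

lun_capacity_gb = vms_per_datastore * avg_vm_size_gb * growth_headroom
lun_iops        = vms_per_datastore * avg_vm_iops

print(f"LUN size : {lun_capacity_gb:.0f} GB")    # ~1125 GB for this profile
print(f"LUN IOPS : {lun_iops} sustained IOPS")   # 2250 IOPS for this profile
```

Whichever of the two numbers hits the array's limits first (capacity or IOPS) is what should cap the VM count per datastore.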

2nd Layer:

The fabric: this is where the magic happens. So what is the fabric (the FC switches) responsible for? It works as an interconnect between the vSphere hypervisor (host) and the storage. This is where you decide which LUN should be visible to which servers and through how many paths or routes; in short, multipathing. This is achieved through zoning (similar in concept to VLANs) of the host HBAs with the storage HBAs, i.e. you first physically connect your host HBA cards to the FC switch and then do the same for the storage. Once the connectivity is done, you decide which type of zoning you want to create: hard (port based) or soft (WWPN based). Unless you have very strict security and compliance regulations, soft zoning should be considered.

There are a few things that work well when you do zoning. For example, you get better results with single-initiator (host HBA) to single-target (storage HBA) zoning; this way you reduce RSCN, GPN_ID (Get Port Name), GID_FT (Get Port Identifiers) and PLOGI (Port Login) events.
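To see what single-initiator/single-target zoning means in practice, the sketch below simply enumerates the zone pairs you would end up creating. The WWPNs, host/array names and zone naming convention are purely hypothetical; the actual zones are of course created on your switch vendor's CLI or GUI.

```python
from itertools import product

# Hypothetical WWPNs for two host HBAs and two storage front-end ports.
host_hbas = {
    "esx01_hba0": "21:00:00:24:ff:45:aa:01",
    "esx01_hba1": "21:00:00:24:ff:45:aa:02",
}
storage_ports = {
    "arrayA_spa_0": "50:06:01:60:3e:a0:12:34",
    "arrayA_spb_0": "50:06:01:68:3e:a0:12:34",
}

# Single initiator / single target: one zone per (HBA, storage port) pair,
# instead of one big zone containing every WWPN in the fabric.
for (hba, hba_wwpn), (port, port_wwpn) in product(host_hbas.items(), storage_ports.items()):
    zone_name = f"z_{hba}_{port}"
    print(zone_name, "->", hba_wwpn, port_wwpn)
```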

For redundancy you need more than one fabric switch. Also, VMware by default recommends a minimum of 4 paths per LUN, so keep this in mind when you are doing the zoning configuration on the switches.
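A quick sanity check on the path count, again just illustrative arithmetic assuming a symmetric dual-fabric design:

```python
# Dual-fabric design: each HBA sees only the storage ports in its own fabric.
fabrics                  = 2
hbas_per_host_per_fabric = 1   # one host HBA cabled into each fabric
array_ports_per_fabric   = 2   # two storage front-end ports per fabric

paths_per_lun = fabrics * hbas_per_host_per_fabric * array_ports_per_fabric
print(f"Paths per LUN: {paths_per_lun}")  # 4, which meets the recommended minimum
```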

3rd Layer:

First and foremost, have multiple HBAs in the vSphere host. Once connectivity has been established with the storage, you need to run a rescan on the vSphere host to detect any new SCSI device; the SCSI device here is the LUN you created as part of storage provisioning.
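If you want to script that rescan instead of clicking through the vSphere Client, a minimal pyVmomi sketch could look like the one below. The vCenter hostname and credentials are placeholders, and this assumes pyVmomi is installed; treat it as a starting point rather than a finished tool.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # scan all HBAs for new SCSI devices (LUNs)
        storage.RescanVmfs()     # pick up any new VMFS volumes on those LUNs
        print("Rescanned", host.name)
    view.Destroy()
finally:
    Disconnect(si)
```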

The next step is to create a VMFS volume (datastore); this is where the VMs are going to run/reside. You need to verify the multipathing policy selected for a given LUN from the vCenter interface. vSphere, based on the storage array it detects, selects the correct multipathing policy by default. A quick snapshot of the default policy for each array is shown in the diagram above.

Note: The VMW_PSP_FIXED_AP policy has been removed from ESXi 5.0. For ALUA arrays in ESXi 5.0 the PSP MRU is normally selected, but some storage arrays need to use Fixed instead.
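Building on the same pyVmomi session as in the previous sketch, you can list the PSP that was picked for each device and spot anything that needs to be overridden. The property names follow the standard HostMultipathInfo structures, but treat this as a sketch to verify against your own environment.

```python
# Continues from the pyVmomi session above (si and host already obtained).
storage = host.configManager.storageSystem
multipath = storage.storageDeviceInfo.multipathInfo

for lun in multipath.lun:
    psp = lun.policy.policy if lun.policy else "unknown"
    print(f"{lun.id}  paths={len(lun.path)}  psp={psp}")
    # e.g. "naa.600601...  paths=4  psp=VMW_PSP_MRU" for a typical ALUA array
```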

You can then decide to make use of features such as SDRS, SIOC, or thin provisioning to further optimize the storage piece from the vSphere layer.

The storage landscape is changing fast and there are a lot of new announcements in this area. I will make sure I keep the Storage Best Practices page updated with all the latest on this topic. Until then, happy reading, and I hope this is helpful.

