With vSphere 4.0, VMware introduced the ability to specify a percentage of cluster resources as failover capacity, alongside the existing options of a number of host failures and a designated failover host. This was a single value covering both CPU and memory. With vSphere 5.0 this has changed: it is now possible to select different percentages for CPU and memory, as shown below.
The main advantage of the percentage-based Admission Control Policy is that it avoids the commonly experienced slot size issue, where slot values are skewed by a single large reservation.

Another advantage of this Admission Control Policy is that it is much more accurate, as it considers the actual reservation per virtual machine when calculating available failover resources. This also means that the cluster adjusts dynamically when resources are added.
But wait, this has been written about several times, so what am I trying to add here? As we know, every design decision comes with certain constraints. The constraint with this policy is that you need to manually recalculate the percentage whenever you add hosts to the cluster, if you want the number of tolerated host failures to remain unchanged.

To address this, I have simplified the calculation process for those who plan to use this as their preferred HA Admission Control Policy.
Take a look at my HA Admission Control Calculator, which solves this for you. All you need to do is enter the number of hosts in the cluster and how many host failures you want the cluster to tolerate; the rest is calculated for you, and you are given a single percentage value for both CPU and memory.
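For those curious what the calculator does under the hood, the core arithmetic is simple: reserve a fraction of cluster resources equal to the number of host failures to tolerate divided by the total number of hosts. Here is a minimal sketch in Python (the function name and interface are my own illustration, not the calculator's actual code), assuming a homogeneous cluster where every host contributes an equal share of CPU and memory:

```python
def failover_capacity_percent(total_hosts: int, host_failures_to_tolerate: int) -> float:
    """Percentage of cluster resources to reserve so the cluster can
    tolerate the given number of host failures.

    Assumes a homogeneous cluster: each host contributes an equal share
    of CPU and memory, so one percentage covers both resources.
    """
    if host_failures_to_tolerate >= total_hosts:
        raise ValueError("Cannot tolerate the failure of every host in the cluster")
    return host_failures_to_tolerate / total_hosts * 100

# Example: an 8-host cluster tolerating 2 host failures
print(failover_capacity_percent(8, 2))  # -> 25.0
```

Because the percentage depends on the total host count, adding a ninth host to the same cluster changes the required value (2/9 ≈ 22.2%), which is exactly why the manual recalculation mentioned above is needed.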
Hope this helps make things easier for people using the percentage-based Admission Control Policy.
Great article… I feel that the above calculations apply to homogeneous clusters (hosts having the same amount of CPU and memory). What if we have a heterogeneous cluster (hosts having different amounts of CPU and memory)? Should we calculate against the total available resources, i.e. the total compute and memory capacity of the cluster? Let's take the example of a heterogeneous cluster with 8 hosts, of which we want failover capacity for 2: should we add up the memory and CPU of the 2 largest hosts, calculate that as a percentage of the cluster's total memory and CPU, and reserve it?

Also, I would like to know what you recommend when configuring Admission Control: should we go with the percentage option or the number-of-host-failures option?
That’s correct, the calculations apply to homogeneous clusters. One of the recommended best practices for an HA cluster is to use homogeneous hosts so that the cluster is not unbalanced; otherwise, the failure of a highly configured host (CPU/memory) can result in not all virtual machines being powered on, due to a lack of resources on the remaining hosts in the cluster.
Even for the percentage-based Admission Control Policy, it is recommended to reserve a percentage of resources equivalent to one or more hosts for failover. One more thing to consider: whatever percentage you define is applied to the entire cluster, and every host participating in the cluster reserves the specified CPU and memory percentage for HA. So if host1 has 40 GB of memory, a 10% reservation reserves 4 GB; on host2 with 10 GB of memory, a 10% reservation results in 1 GB. In this scenario, if the 40 GB host fails, HA will not be able to power on all of its VMs on host2.
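The imbalance described above can be illustrated with a short sketch (the host names and sizes are the hypothetical values from the example, not from a real cluster):

```python
# Why a single cluster-wide percentage misbehaves on a heterogeneous
# cluster: the same 10% reserves very different absolute amounts of
# memory on differently sized hosts (hypothetical example values).
RESERVATION_PERCENT = 10

hosts_gb = {"host1": 40, "host2": 10}  # memory per host in GB

for name, memory_gb in hosts_gb.items():
    reserved_gb = memory_gb * RESERVATION_PERCENT / 100
    print(f"{name}: reserves {reserved_gb} GB of {memory_gb} GB for HA")
# host1: reserves 4.0 GB of 40 GB for HA
# host2: reserves 1.0 GB of 10 GB for HA
```

The cluster reserves 5 GB in total, but if host1 fails, up to ~36 GB of its running VMs would need to restart on host2, which only has 10 GB altogether, so not all VMs could be powered on.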
HA’s main objective is to provide automatic recovery of virtual machines after a physical server failure. For this reason, it is recommended to reserve resources equivalent to one or more hosts. Keeping the percentage close to the resources of a single host also avoids waste.

The percentage-based policy is the most flexible, as it uses the actual reservation per virtual machine instead of taking a “worst case” scenario approach like the number-of-host-failures policy does. With the added level of integration between HA and DRS, a percentage-based Admission Control Policy will fit most environments.
Thanks a lot for your explanation 🙂