Stress Testing VMware Fault Tolerance

November 6, 2014

I have been working with VMware on a possible FT bug and reached the point where I needed to try to reproduce it while they had enhanced logging enabled.

My test bed consisted of the following:

  • 3x BL460c Gen8 hosts with 256 GB RAM and 8 FlexNICs (3 vmkernel ports: Management, vMotion, NFS/FT)
  • 30 RHEL 6.4 VMs with varying amounts of memory and CPU (I put CPU limits on all of the VMs) and a 100 GB ext4 filesystem mounted at /local
  • 4 RHEL 6.4 VMs with 1 vCPU and 4 GB RAM that will be used for FT
  • The stress package, built and placed at /usr/local/bin/stress
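Building stress on RHEL 6.4 is a standard autotools build; a minimal sketch, assuming you have a release tarball downloaded and gcc/make installed (the version number here is an assumption — adjust to whatever release you grabbed):

```shell
# Unpack, build, and install stress. With the default autotools prefix of
# /usr/local, "make install" places the binary at /usr/local/bin/stress.
tar xzf stress-1.0.4.tar.gz
cd stress-1.0.4
./configure
make
make install
```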

In each VM I ran stress with various combinations of options; this is the general format:

  • stress -c 10 -i 10 --vm 4 --vm-bytes 1G
    • This spins up 10 processes hammering the CPU, 10 I/O processes repeatedly calling sync (which flushes the buffer cache), and 4 processes eating 1 GB of memory each.
    • You can also add "--hdd 10 --hdd-bytes 10G" to have 10 processes writing 10 GB each. Be sure to run from the /local directory, or another directory with sufficient free space, or stress will abort.
  • EDIT: 06/11/2015 -> After using the tool more, I have standardized on stress -c 1 --vm 1 --vm-bytes 3G. For a single-vCPU FT VM, one CPU worker is enough, and I didn't really need to test the I/O. Lastly, my FT VMs are usually 4 GB, occasionally 8 GB, so I use a single memory worker maxing out between 3 GB and 7 GB. This simplifies the settings greatly.
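The sizing rule above (one memory worker, leaving about 1 GB of headroom for the guest OS) can be captured in a small hypothetical helper — the function name is mine, not part of the stress package:

```shell
#!/bin/sh
# Print the standardized stress command for a VM of the given memory size:
# one CPU worker, one memory worker sized to VM memory minus 1 GB of headroom.
stress_cmd() {
  mem_gb=$1
  echo "stress -c 1 --vm 1 --vm-bytes $((mem_gb - 1))G"
}

stress_cmd 4   # -> stress -c 1 --vm 1 --vm-bytes 3G
stress_cmd 8   # -> stress -c 1 --vm 1 --vm-bytes 7G
```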

Now for my specific tests I needed to migrate the primary and secondary FT VMs around and also turn FT on and off.

For turning FT on and off I used a PowerCLI function from vNiklas; it is part of the code below.
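I won't reproduce vNiklas's function here, but the general shape of FT toggling from PowerCLI goes through the vSphere API on the VM's ExtensionData — a minimal sketch, assuming a connected vCenter session:

```powershell
# Turn FT on: create a secondary VM for the given primary.
# Passing $null lets vCenter choose the host for the secondary.
function Enable-FT {
    param([string]$VMName)
    $vm = Get-VM -Name $VMName
    $vm.ExtensionData.CreateSecondaryVM($null)
}

# Turn FT off: remove FT protection from the VM entirely.
function Disable-FT {
    param([string]$VMName)
    $vm = Get-VM -Name $VMName
    $vm.ExtensionData.TurnOffFaultToleranceForVM()
}
```

Both calls block until vCenter finishes the operation; the `_Task` variants of the same API methods exist if you want to fire them asynchronously.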

For migrating the VMs around I made my own function.
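My original script isn't embedded here, but a minimal sketch of that kind of migration loop, using the same VCENTER and FTVM placeholders, might look like this — it vMotions the FT primary to a random other host in the cluster, forever:

```powershell
# Connect once, then loop: pick any host other than the one the FT VM is
# currently on and vMotion the primary to it.
Connect-VIServer -Server VCENTER

while ($true) {
    $vm = Get-VM -Name FTVM
    $target = Get-VMHost |
        Where-Object { $_.Name -ne $vm.VMHost.Name } |
        Get-Random
    Move-VM -VM $vm -Destination $target
    Start-Sleep -Seconds 120
}
```

This sketch only moves the primary; the full test also bounced the secondary around and toggled FT on and off between passes.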


Just change the VCENTER and the FTVM placeholders and you are good to go. This thing will loop forever.

I also added alarms on the FT test cluster so that I would know if an FT VM had to be restarted by HA (indicating something bad).
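As a complementary check to the vCenter alarms, you can poll the FT VMs from PowerCLI and flag any that are no longer powered on — a rough sketch, with hypothetical VM names:

```powershell
# Poll the FT test VMs every minute; a VM that is not PoweredOn suggests
# an HA restart (or worse) happened. VM names below are placeholders.
$ftVMs = 'FTVM1', 'FTVM2', 'FTVM3', 'FTVM4'

while ($true) {
    foreach ($name in $ftVMs) {
        $vm = Get-VM -Name $name
        if ($vm.PowerState -ne 'PoweredOn') {
            Write-Warning "$name is $($vm.PowerState) - possible HA restart"
        }
    }
    Start-Sleep -Seconds 60
}
```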




