Comparing Performance R5 vs R1 with and without Compression/Dedupe on All-Flash VSAN

By Chris | March 16, 2017

I built a small four-node all-flash VSAN for a specific use at my company. We didn't need high performance, but we wanted the space efficiency of dedupe/compression plus erasure coding (RAID 5 in our case).

Note that all tests used FTT=1. I could have tested FTT=2 with R1 but not with R5, so I didn't bother.

Equipment:

4x DL380 Gen9 servers with 512 GB RAM, connected via 10 GbE, running ESXi 6.0u2 Patch 4

2x disk groups per host, each with one 800 GB cache SSD and 2x 3.2 TB capacity SSDs

Overall, 40+ TB available prior to FTT and space efficiencies (4 hosts × 2 disk groups × 2 × 3.2 TB ≈ 51 TB raw)

Test Setup:

HCIBench 1.6.1

8 VMs, 70% read, 100% random, threads, 10 disks of 28 GB each, no warmup, run for one hour. In most tests only 7 of 8 VMs finished.

*These tests had all 8 VMs finish; their results were multiplied by 7/8 to make them comparable with the 7-VM runs.
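That scaling is simple proportional normalization. A minimal sketch, just to make the arithmetic explicit (the raw eight-VM figure here is back-calculated from the table below, not a measured value):

```python
def normalize(value: float, vms_finished: int, baseline_vms: int = 7) -> float:
    """Scale an aggregate result down to the baseline VM count."""
    return value * baseline_vms / vms_finished

# Back-calculated example: the R1 Hybrid run finished with all 8 VMs,
# so its raw aggregate would have been about 54223.93 / (7/8) ~= 61970 IOPS.
raw_hybrid_iops = 54223.93 / (7 / 8)
print(round(normalize(raw_hybrid_iops, vms_finished=8), 2))  # 54223.93
```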

                     Compression/Dedupe      No Compression/No Dedupe
RAID                 R5*        R1           R5         R1          R1 Hybrid*
IOPS (IO/s)          34135.36   57564.71     34764.49   97721.17    54223.93
Throughput (MB/s)    135.79     224.86       135.8      381.73      242.06
Latency (ms)         16.1037    10.1067      18.561     5.7989      9.4491
Read latency (ms)    6.5406     1.8841       5.9705     1.8294      4.9589
Write latency (ms)   38.4339    29.3076      47.953     15.0631     19.9434
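One detail missing from the setup above is the block size, but it can be backed out of the table: throughput divided by IOPS gives the average I/O size, which lands at roughly 4 KB for every run (assuming HCIBench reports decimal MB). A quick check:

```python
# Average I/O size = throughput / IOPS; every run works out to ~4 KB.
runs = {
    "R5 w/compress": (34135.36, 135.79),
    "R1 w/compress": (57564.71, 224.86),
    "R5":            (34764.49, 135.8),
    "R1":            (97721.17, 381.73),
    "R1 Hybrid":     (54223.93, 242.06),
}
for name, (iops, mbps) in runs.items():
    print(f"{name:>14}: {mbps * 1e6 / iops / 1024:.1f} KB per I/O")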


Relative IOPS (each cell is the row configuration as a percentage of the column):

                R5 w/compress   R1 w/compress   R5         R1         R1 Hybrid
R5 w/compress   --              59.30%          98.19%     34.93%     55.08%
R1 w/compress   168.64%         --              165.58%    58.91%     92.89%
R5              101.84%         60.39%          --         35.58%     56.10%
R1              286.28%         169.76%         281.09%    --         157.69%
R1 Hybrid       181.54%         107.65%         178.26%    63.42%     --
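For reference, this matrix derives directly from the IOPS table: each cell is the row's IOPS divided by the column's. A short sketch reproducing the all-flash rows (the R1 Hybrid column appears to be based on the unscaled eight-VM figure, so it's left out here):

```python
# Each cell: row IOPS as a percentage of column IOPS.
iops = {
    "R5 w/compress": 34135.36,
    "R1 w/compress": 57564.71,
    "R5":            34764.49,
    "R1":            97721.17,
}
print(" " * 14 + "".join(f"{name:>16}" for name in iops))
for row_name, row_iops in iops.items():
    cells = "".join(
        f"{row_iops / col_iops * 100:>15.2f}%" if row_name != col_name
        else f"{'--':>16}"
        for col_name, col_iops in iops.items()
    )
    print(f"{row_name:>14}{cells}")
```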

My testing was a bit backwards: I started with R5 and compression and worked back from there, when I should have started with R1 with no compression and then layered on the other settings. Also, I had to turn compression/deduplication off between runs, which triggers a rolling reformat of every disk group and takes a long time.

From my testing, R1 with no compression/no dedupe was the fastest at 97,721 IOPS, but I wasn't looking for pure speed; I wanted to know the performance penalty of trying to save space. Looking at R1 compressed vs. R1 uncompressed, the compressed run delivered 58.91% of the uncompressed IOPS, but compression can yield space savings of 1.5x and up.

Comparing R5 and R1 uncompressed: 34,764 vs. 97,721 IOPS, or 35.58% of the R1 uncompressed result. Savings would be around 1.33x (the space-used ratio is around 75%).

R5 compressed vs. R5 was odd: the IOPS were almost exactly the same. Savings would be maybe 1.5x conservatively (the space-used ratio is around 67%).

R5 compressed vs. R1 compressed was 60.39% of the IOPS. Savings would be 1.33x conservatively (the space-used ratio is around 75%).

R5 compressed vs. R1 was 34.93%, which is the biggest difference, but the space-used ratio is around 50%.

R1 compressed vs. R5 uncompressed is 165.58%; the space-used ratio is about 88%.
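To put those ratios in capacity terms, here's a rough sketch of usable space per policy, assuming ~40 TB raw (from the equipment list above), the standard vSAN FTT=1 overheads (2x for R1 mirroring, 1.33x for R5 erasure coding), and the conservative 1.5x compression ratio used in the comparisons:

```python
RAW_TB = 40                        # raw capacity from the equipment list
overhead = {"R1": 2.0, "R5": 4/3}  # FTT=1: two mirror copies vs. 3+1 erasure coding
ratios = {"uncompressed": 1.0, "w/compress (1.5x assumed)": 1.5}
for raid, ovh in overhead.items():
    for label, ratio in ratios.items():
        print(f"{raid} {label}: ~{RAW_TB / ovh * ratio:.0f} TB usable")
# R1 uncompressed: ~20 TB    R1 w/compress: ~30 TB
# R5 uncompressed: ~30 TB    R5 w/compress: ~45 TB
```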


So what should you go with? Of course it depends.

Max storage: R5 with compression

Max performance: R1 no compression

Best mix of storage savings and performance: R1 with compression


7 thoughts on “Comparing Performance R5 vs R1 with and without Compression/Dedupe on All-Flash VSAN”

  1. Roman

    Hi Chris,

    Thank you for sharing those results.

I was just doing a similar test on VSAN 6.5 with R5 and dedupe/compression enabled on Dell R630s, and the results were similar in terms of throughput (130-140 MB/s).

    The question is why those numbers are so low. Any ideas?

    1. Chris Post author

No idea, but Chei (one of the authors of HCIBench) pointed out that I didn't run disk prep or clear the cache, so I'm going to re-run everything. I'm not hopeful about the R5 results though; I'm guessing the erasure-coding calculation plus the dedupe lookup carry a much larger penalty. If I remember, I'll look at the back-end performance charts to see how busy it is during the runs.

  2. Pingback: Updated: Comparing Performance R5 vs R1 with and without Compression/Dedupe on All-Flash VSAN – Virtual Chris

  3. Roman

    Thank you Chris! Great addition to the test results.

I am going to implement those changes after confirming them with VMware and let you know how performance improves as a result of those adjustments.

  4. Pingback: Performance Boost: All-Flash VSAN on vSphere 6.0u3 – Virtual Chris
