How to really compare All Flash Arrays

In my previous post I mentioned that the All Flash Array comparison table from Vaughn at Pure contained some errors. In this post I have corrected them and added some additional sections. The original table is in Vaughn’s post.

Updated Table

[Image: afa_table, the updated All Flash Array comparison table]

Performance: the primary purpose of AFAs

I added performance to the table because the primary use case for a Tier 1 All Flash Array is to provide better performance than is available from existing Arrays, even ones equipped with SSD shelves.

Performance is the primary purpose of an All Flash Array; leaving out any attempt to compare competing devices on how effectively they fulfil that purpose reduces the value of the table to close to zero. It’s a bit like excluding 0-60 times from a table comparing supercars in favour of the number of passenger seats.

In the performance section I added:

1. Quoted IOPS with a mixed read/write load (see the sketch after this list). Some vendors, particularly those with always-on de-dupe, prefer read-only numbers for obvious reasons, but if most IO were read-only then customers would have less need for Flash Arrays.

2. A flash storage type category: Flash Modules or SSDs. It should be clear by now that the lack of Array-wide garbage collection coordination, among other factors, leaves SSD-based devices with lower performance and more hot spots, so this is a useful piece of information.

3. Whether vendors have any published benchmarks such as TPC or VMmark. This demonstrates that the Array is able to back up its quoted numbers with real, published benchmark results. TPC results are relevant because they measure price/performance.

4. Selective de-dupe and compression. De-dupe has limited benefit for the majority of Tier 1 AFA use cases; compression is more useful, but it does not have to be done at the Array level and there are good reasons for some use cases to do it elsewhere. Being able to turn these off will generally improve performance.
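To show why the read/write mix in point 1 matters, here is a minimal back-of-envelope sketch. The figures and the mixed_iops helper are hypothetical, not taken from any vendor’s data sheet, and it assumes reads and writes have fixed, independent per-operation costs:

    # Back-of-envelope blend of read-only and write-only IOPS figures.
    # Assumes reads and writes have fixed, independent costs, which is
    # optimistic: real Arrays degrade further under sustained writes once
    # garbage collection kicks in.

    def mixed_iops(read_only_iops, write_only_iops, read_fraction):
        """Harmonic blend: average the per-operation times implied by the
        read-only and write-only rates, weighted by the IO mix."""
        time_per_read = 1.0 / read_only_iops
        time_per_write = 1.0 / write_only_iops
        avg_time = read_fraction * time_per_read + (1 - read_fraction) * time_per_write
        return 1.0 / avg_time

    # Hypothetical array quoting 1,000,000 read-only IOPS but 250,000 write-only.
    print(f"70/30 mix: {mixed_iops(1_000_000, 250_000, 0.70):,.0f} IOPS")
    # -> about 526,000 IOPS, roughly half the headline read-only figure.

Even this simplified arithmetic shows why a read-only headline number tells you very little about what the Array will do under a realistic workload.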

Environmental Requirements

I added Power and Rack Space; unlike Storage Efficiency, this measure has value to customers because it is not a marketing-driven guess.
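To show how a customer might actually use these two columns, here is a small sketch that normalises power and rack space per usable TB; the arrays and figures below are invented for illustration, not drawn from the table:

    # Normalise environmental figures so configurations of different sizes
    # can be compared directly. All numbers here are invented.

    def per_usable_tb(watts, rack_units, usable_tb):
        """Return (watts per usable TB, rack units per usable TB)."""
        return watts / usable_tb, rack_units / usable_tb

    for name, watts, ru, tb in [
        ("Hypothetical Array A", 1500, 3, 35),
        ("Hypothetical Array B", 2600, 8, 70),
    ]:
        w_per_tb, ru_per_tb = per_usable_tb(watts, ru, tb)
        print(f"{name}: {w_per_tb:.0f} W/TB, {ru_per_tb:.2f} U/TB")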

Finally, I have corrected some of the mistakes: there was some confusion between TB and TiB, and I have changed some numbers to reflect this. I have also put a * beside EMC’s 40TB raw number: XtremIO has 25x 400GB drives after formatting, plus 8 SSDs (2 in each of the controllers), and prior to formatting the drives are larger. 40TB is definitely not the raw capacity of a fully configured XtremIO.
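For readers who have not hit this before, a binary tebibyte (2^40 bytes) is roughly 10% larger than a decimal terabyte (10^12 bytes), which is easily enough to account for the discrepancies. A quick sketch of the conversion (the 40 TB example is just the headline figure above, not a corrected capacity):

    # TB is decimal (10**12 bytes); TiB is binary (2**40 bytes).
    TB = 10**12
    TiB = 2**40

    def tb_to_tib(tb):
        """Convert decimal terabytes to binary tebibytes."""
        return tb * TB / TiB

    # A nominal 40 TB of raw flash is only about 36.4 TiB.
    print(f"40 TB = {tb_to_tib(40):.1f} TiB")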

More controversially, I changed 3PAR, Pure and EMC’s Non-Disruptive Firmware Update entries from Yes to Partial. This is because none of them seem to have given any thought to how you update the firmware in their SSDs.

I asked them and got no response, and trawling through their docs only produced references to controller updates, not SSD updates. It is possible they have never considered doing this, but how, for example, can they make changes to garbage collection, error correction, or a host of other functions performed by the SSD rather than by the Array controller firmware?

Is all SSD Firmware finished, with no bugs and no possible improvements? EMC were eloquent in their claims that SSD Firmware is improving all the time. Are customers with current Arrays frozen out of any future improvements?

The revised table offers a much more complete way of comparing AFAs.

The grey boxes are where I was unable to obtain numbers or where there is disagreement about the answer. I could not work out exactly how much rack space a fully loaded 3PAR 7450 requires or what its power budget is, and I was only able to find a read-only IOPS number for the 7450. I am also happy to change the NDU entry from Partial to Yes for any vendor who can explain how they upgrade SSD firmware in situ.
