While synthetic 100% read or 100% write workloads do a great job of testing the underlying technology and reporting easy-to-understand results, they aren’t always indicative of how the drive will be used by the end user. Workloads that simulate enterprise environments try to bridge that gap without being overly complex. The process for measuring server workload performance is the same as for our random workloads. The drive is first secure erased to get it into a clean state. Next, the drive is filled by sequentially writing to the raw NAND capacity twice. We then precondition the drive with the respective server workload at QD256 until it reaches a steady state. Finally, we cycle through QD1-256, measuring performance for 5 minutes at each queue depth. All of this is scripted to run with no breaks in between. The last hour of preconditioning, along with the average IOPS and average latency at each queue depth, is graphed below.
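The scripted sequence above can be sketched as follows. This is a minimal illustration that only builds the fio command lines rather than running them; the device path, fill block size, preconditioning runtime, and the 67% read mix are illustrative assumptions, and the secure-erase step (done with a separate tool such as hdparm or nvme-cli) is omitted.

```python
# Hypothetical sketch of the benchmark sequence; builds fio command
# lines only. Device path and runtimes are illustrative assumptions.
DEVICE = "/dev/sdb"  # hypothetical target drive

def fio_cmd(name, **opts):
    """Format a single fio invocation from keyword options."""
    parts = [f"--{k}={v}" if v is not True else f"--{k}"
             for k, v in opts.items()]
    return " ".join(["fio", f"--name={name}", f"--filename={DEVICE}",
                     "--direct=1", "--ioengine=libaio"] + parts)

# 1. Fill the raw NAND capacity twice with sequential writes.
fill = fio_cmd("fill", rw="write", bs="128k", loops=2)

# 2. Precondition with the server workload (8K transfers, 67% reads)
#    at QD256; the runtime here is an assumption -- in practice the
#    drive is run until IOPS flattens into a steady state.
precond = fio_cmd("precondition", rw="randrw", rwmixread=67, bs="8k",
                  iodepth=256, time_based=True, runtime="8h")

# 3. Sweep queue depths 1..256, five minutes at each step.
sweep = [fio_cmd(f"qd{qd}", rw="randrw", rwmixread=67, bs="8k",
                 iodepth=qd, time_based=True, runtime="5m")
         for qd in (1, 2, 4, 8, 16, 32, 64, 128, 256)]

for cmd in [fill, precond] + sweep:
    print(cmd)
```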
The Database profile consists of 8K transfers, with 67% of operations being reads.
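Expressed as a fio job file, the Database profile might look like the following hypothetical sketch; the device path, queue depth, and runtime shown are assumptions for illustration only.

```ini
; Hypothetical fio job approximating the Database profile:
; 8K random transfers, 67% reads / 33% writes.
[database]
filename=/dev/sdb     ; illustrative device
rw=randrw
rwmixread=67
bs=8k
direct=1
ioengine=libaio
iodepth=256           ; queue depth varies per test step
time_based
runtime=5m
```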
During our database run, the non-overprovisioned DC400 lagged behind the competition, but over-provisioning it to 800GB gives it a fighting chance. At 800GB the DC400 outperforms the Micron 5100 ECO and comes very close to the Toshiba HK4R. The Samsung and Toshiba SSDs, along with the Micron 5100 MAX, are still the clear winners here. Consistency is also much better once the drive is over-provisioned, without the latency spikes it exhibits at 960GB.
The Email Server profile is similar to the Database profile, only its 8K transfers are split 50% reads and 50% writes.
Just as in our database profile, the DC400’s results improve markedly once it is over-provisioned, and consistency is much better. It even edges out the Toshiba HK4R and Micron 5100 ECO.