Techman SSD XC100 Series NVMe SSD Review (3.2TB)

SERVER PROFILES

While synthetic 100% read or 100% write workloads do a great job of exercising the underlying technology and reporting easy-to-understand results, they aren't always indicative of how the drive will be used by the end user. Workloads that simulate enterprise environments try to bridge that gap without being overly complex. The process for measuring server workload performance is the same as for our random workloads. The drive is first secure erased to get it into a clean state. Next, the drive is filled by sequentially writing to the raw NAND capacity twice. We then precondition the drive with the respective server workload at QD256 until it reaches steady state. Finally, we cycle through QD1-256 for 5 minutes each, measuring performance at every step. All of this is scripted to run with no breaks in between. The last hour of preconditioning, along with the average IOPS and average latency at each queue depth, is graphed below.
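
For readers who want to see how such a sweep can be automated, below is a minimal sketch assuming fio with the libaio engine as the load generator; the review does not name its tooling, so fio itself, the device path, the power-of-two queue-depth steps, and the block size/read mix placeholders (set per server profile, as described in the sections that follow) are all assumptions for illustration.

```python
# Sketch of the QD1-256 measurement sweep, assuming fio (not confirmed by the review).
# DEVICE, BLOCK_SIZE, and READ_MIX are placeholders; the drive is assumed to have
# already been secure erased, filled, and preconditioned at QD256 beforehand.
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # hypothetical target device
BLOCK_SIZE = "8k"         # transfer size for the profile under test
READ_MIX = 67             # percentage of reads for the profile under test
QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32, 64, 128, 256]

def run_step(qd: int) -> dict:
    """Run one 5-minute measurement step at the given queue depth and return fio's job stats."""
    cmd = [
        "fio",
        "--name=server-profile",
        f"--filename={DEVICE}",
        "--ioengine=libaio",
        "--direct=1",
        "--rw=randrw",
        f"--rwmixread={READ_MIX}",
        f"--bs={BLOCK_SIZE}",
        f"--iodepth={qd}",
        "--time_based",
        "--runtime=300",              # 5 minutes per queue depth
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)["jobs"][0]

if __name__ == "__main__":
    for qd in QUEUE_DEPTHS:
        job = run_step(qd)
        iops = job["read"]["iops"] + job["write"]["iops"]
        # fio reports completion latency per direction; average the two for a rough combined figure
        lat_ms = (job["read"]["clat_ns"]["mean"] + job["write"]["clat_ns"]["mean"]) / 2 / 1e6
        print(f"QD{qd}: {iops:,.0f} IOPS, {lat_ms:.2f} ms average latency")
```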

[Charts: Techman SSD XC100 3.2TB Database preconditioning, IOPS, and average latency]

The Database profile uses 8K transfers, with 67% of operations being reads.

The results speak for themselves. The XC100's lower write performance places it below the rest of the test pool on these charts. Latency just broke 2ms at QD256, and the drive peaked at about 120K IOPS.

[Charts: Techman SSD XC100 3.2TB Email Server preconditioning, IOPS, and average latency]

The Email Server profile is similar to the Database profile; it also uses 8K transfers, but with a 50% read / 50% write mix.
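
Since the two profiles differ only in their read mix, they map cleanly onto the sweep sketch shown earlier; the snippet below is again only an assumption-laden illustration using fio-style option names, which the review does not confirm.

```python
# The two server profiles expressed as fio-style parameters (a sketch; option
# names assume fio, which the review does not specify).
SERVER_PROFILES = {
    "database": {"bs": "8k", "rwmixread": 67},      # 8K transfers, 67% reads / 33% writes
    "email_server": {"bs": "8k", "rwmixread": 50},  # 8K transfers, 50% reads / 50% writes
}

def fio_args(profile: str, queue_depth: int) -> list[str]:
    """Translate a profile into the workload arguments for one sweep step."""
    p = SERVER_PROFILES[profile]
    return [
        "--rw=randrw",
        f"--bs={p['bs']}",
        f"--rwmixread={p['rwmixread']}",
        f"--iodepth={queue_depth}",
    ]

print(fio_args("email_server", 128))
```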

The XC100 reached 100K IOPS from QD128 through QD256. Again, the results show it lagging behind the pack, with nearly 20% lower performance than the HGST SN100 despite sharing the same controller. Consistency is also loose, with a wide spread of results.
