Intel 750 U.2/AIC SSD RAID0 (X3) Report – 5GB/s & 750K IOPS

CRYSTALDISKMARK VER. 3.0.3 x64

CrystalDiskMark is visually straightforward and measures the speeds at which your storage device reads and writes both compressible (0Fill/1Fill) and random, mostly incompressible, data. Random data is more consistent with everyday use of a computer, such as transferring videos, pictures and music. We run the benchmark twice, first with 0Fill data and then with random data. Since the two passes typically return nearly identical scores, we only include the results for the random data samples.
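For readers curious what those fill patterns actually look like, here is a minimal Python sketch (illustrative only, not CrystalDiskMark's own code): 0Fill/1Fill buffers compress almost completely, while random data barely compresses at all, which is why controllers that don't compress data internally return nearly identical scores for both passes.

    import os
    import zlib

    SIZE = 1 << 20  # 1 MiB sample buffer for each pattern

    patterns = {
        "0Fill":  bytes(SIZE),        # all 0x00 bytes, highly compressible
        "1Fill":  b"\xff" * SIZE,     # all 0xFF bytes, equally compressible
        "random": os.urandom(SIZE),   # incompressible, closer to photos/video/music
    }

    for name, buf in patterns.items():
        ratio = len(zlib.compress(buf)) / len(buf)
        print(f"{name:>6}: compresses to {ratio:.1%} of original size")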

Intel 750 NVMe RAID X3 CDM

CrystalDiskMark provided great results, with a peak transfer speed of 5.3GB/s, a definite accomplishment for any three-SSD setup.

AS SSD BENCHMARK VER. 1.7

AS SSD Benchmark uses incompressible data in its testing of SSDs, essentially providing results consistent with the heaviest workload, so lower speeds are expected. Transfer speeds (MB/s) are shown in the left picture below and IOPS (Input/Output Operations Per Second) on the right.
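As a quick aid for reading the two panels side by side, MB/s and IOPS are tied together by the block size each test uses. The figures below are placeholders for illustration, not the review's measurements.

    def throughput_mb_s(iops: float, block_kb: float) -> float:
        """Throughput implied by an IOPS figure at a given block size (1 MB = 1024 KB here)."""
        return iops * block_kb / 1024

    # Hypothetical figures: small 4 KB blocks need huge IOPS for high MB/s...
    print(throughput_mb_s(500_000, 4))    # ~1953 MB/s
    # ...while large sequential blocks reach high MB/s at modest IOPS.
    print(throughput_mb_s(5_000, 1024))   # 5000 MB/s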

Intel 750 NVMe RAID X3 AS SSD
Intel 750 NVMe RAID X3 AS SSD IOPS

Considering that we are now testing with 100% incompressible data, these results are just as impressive. Any storage setup that brings in over 500K IOPS is definitely capable of handling some serious workloads.

Intel 750 NVMe RAID X3 AS SSD Copy Bench

Better yet, it isn’t often that we see realistic-load transfer speeds of 2.6GB/s, with this ISO file transferred in less than a second.
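A quick back-of-the-envelope check on that claim: transfer time is simply file size divided by throughput. The ISO size below is an assumption for illustration, since the copy test file size isn't quoted in the screenshots.

    iso_size_gb = 1.0       # assumed ISO test file size, for illustration only
    speed_gb_s = 2.6        # copy-bench transfer speed reported above
    print(f"{iso_size_gb / speed_gb_s:.2f} s")  # ~0.38 s, comfortably under a second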

ANVIL STORAGE UTILITIES PROFESSIONAL

Anvil Storage Utilities is essentially an all-in-one tool for all of your SSD benchmarking needs. Anvil can be used for basic consumer testing, as well as endurance testing and threaded I/O read, write and mixed tests. It also displays data about the SSD, and even about your system.
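To give a feel for what a threaded 4K random-read pass actually does, here is a minimal, Unix-only Python sketch. It is not Anvil's implementation; the file path, thread count and duration are placeholder assumptions, and real tools also bypass the OS page cache, which this sketch does not.

    import os
    import random
    import threading
    import time

    TEST_FILE = "testfile.bin"   # placeholder: a large pre-created file on the target volume
    BLOCK = 4096                 # 4 KB blocks, as in the 4K random-read tests
    THREADS = 4                  # placeholder worker count
    DURATION = 5.0               # seconds each worker runs

    counts = [0] * THREADS

    def worker(idx: int) -> None:
        size = os.path.getsize(TEST_FILE)
        fd = os.open(TEST_FILE, os.O_RDONLY)
        deadline = time.perf_counter() + DURATION
        try:
            while time.perf_counter() < deadline:
                # Read one block from a random 4 KB-aligned offset
                offset = random.randrange(size // BLOCK) * BLOCK
                os.pread(fd, BLOCK, offset)
                counts[idx] += 1
        finally:
            os.close(fd)

    workers = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
    for t in workers: t.start()
    for t in workers: t.join()
    print(f"~{sum(counts) / DURATION:,.0f} cached read IOPS across {THREADS} threads")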

Intel 750 NVMe RAID X3 Anvil

Once again, the results come in a bit low, so we thought we would see what we could pull off for read and write IOPS.

Intel 750 NVMe RAID X3 Anvil 700K Read IOPS
Intel 750 NVMe RAID X3 Anvil 185K Write IOPS

Just under 700K read IOPS with 189K write IOPS is really starting to impress.

7 comments

  1.

    Just a great article! Really good job on the test. I thought the DMI 3 speed of 3.93 GB/s would affect performance, and yet it didn’t. How come?

  2.

    Guys, I believe:

    1) ATTO READ results are a victim of 32-bit overflow. The 1024kB block shows a result of 4053109kB/s (≈4GB/s), while the 2048kB block size only shows 700729kB/s because of the 32-bit overflow. This is AFTER it crossed the 4GB/s threshold (2^32 = 4194304kB/s), so the TOTAL read speed was in fact 4GB/s + 0.7GB/s = 4.7GB/s. As I say, ATTO cannot count with numbers bigger than 32 bits internally, so that's the reason why it only displays 0.7GB/s instead of what it should be, 4.7GB/s. The same thing applies to the 4096kB and 8192kB block size results; they SHOULD be 4.96GB/s and 4.99GB/s.

    There is no other explanation for such a brutal drop in performance when going up from the 1024kB block size to 2048, 4096 and 8192kB blocks; the results are simply displayed incorrectly.

    My table:
    block size (kB)   write (kB/s)   read (kB/s)
    1024              3614841        4053109   [no overflow here; 2^32 = 4194304 kB/s is the break point]
    2048              3681783         700729   [should be 4194304 + 700729 = 4895033]
    4096              3768675         967166   [should be 4194304 + 967166 = 5161470]
    8192              3863339         993412   [should be 4194304 + 993412 = 5187716]

    These results line up PERFECTLY with the CrystalDiskMark results on the second page. Crystal gets to 5353MB/s, which is in the same league as the numbers displayed here.

    Write performance is obviously not affected by this, because it only reaches 3863339kB/s at maximum, which is below 2^32 = 4194304kB/s. The SSDs simply can't write faster. But they can read.

    2) It would be possible to squeeze out a little bit more than the 755K IOPS (4kB) in the IOMeter test on the last page. I see 0.6245ms AVERAGE latency for this 4kB test, which in my experience shows the disk subsystem is not being utilized to its maximum capabilities.

    In SSD RAID tests, I personally have ALWAYS achieved higher IOPS when my latency got above 1ms, quite often above 2ms (for example: I achieve 60,000 IOPS in 4kB random read with 0.8ms latency on my setup; the same config shows 80,000 IOPS in 4kB random read with 2ms latency when I push it to higher queue depths or more workers).

    My thinking is supported by all three remaining IOMeter tests: 12.5332ms average latency when testing sequential throughput at 5353.58MB/s (first IOMeter screenshot on the last page), 16.8915ms latency at 3971.78MB/s throughput (second screenshot), and 2.3510ms latency in the last screenshot.

    Compared to those latencies (2.35ms, 12.53ms and 16.89ms), the average 4kB read test latency of 0.6245ms suggests the RAID possibly was not being taxed to the maximum possible extent.
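For anyone who wants to verify the arithmetic in the comment above, the 32-bit wraparound and the latency/IOPS relationship can both be checked in a few lines of Python. This is illustrative only: the read figures come from the commenter's table, and the Little's-law estimate is a rough rule of thumb, not a measurement from the review.

    WRAP_KB_S = 2 ** 32 // 1024  # ATTO reports kB/s, so a 32-bit byte counter wraps at 4,194,304 kB/s

    def unwrap(reported_kb_s: int, wrapped: bool) -> int:
        """Reconstruct the true rate assuming the reported value wrapped past 2^32 exactly once."""
        return reported_kb_s + WRAP_KB_S if wrapped else reported_kb_s

    # Read results from the comment's table: block size (kB) -> reported read speed (kB/s)
    for block, reported in [(1024, 4053109), (2048, 700729), (4096, 967166), (8192, 993412)]:
        print(f"{block:>4} kB blocks: {unwrap(reported, wrapped=(block >= 2048)):,} kB/s")

    # Rough check of point 2 via Little's law: in-flight I/Os ~= IOPS * average latency
    iops = 755_000              # 4 kB IOMeter result quoted in the comment
    latency_s = 0.6245e-3       # average latency quoted in the comment
    print(f"~{iops * latency_s:.0f} I/Os in flight on average; raising the queue depth usually raises IOPS until latency climbs")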

  3.

    We’re starting to hit really intoxicating performance levels here. When you consider that you can get two 800GB units for about $800 and RAID0 them, that’s a damn bargain.

  4.

    It’s interesting how Intel can provide 3 new 750s for a RAID 0 review, but can’t provide 1 mainstream 535 2.5″ SSD for review to any site, and it’s been out for a while now.

  5.

    Is there a recommendation for enterprise use with regard to RAID 1 and the 2.5-inch form factor NVMe drives? I know the PCIe-attached ones are built to be single devices, but with the 2.5-inch form factor, do I need to use RAID 1 in production?
