LSI SAS 9207-8i PCIe 3.0 HBA Overview – Eight Crucial M4 SSDs Pushed to 4.1GB/s Performance

MULTI-DRIVE PERFORMANCE

Starting with 4K random reads, the impact of adding seven additional drives is immediately apparent.

The octet of Crucials is able to hit 369K IOPS at QD32/QD64. That's just about perfect scaling, merely a few thousand IOPS shy of 375K (8x the 46.9K IOPS a single M4 can do). So far, so good.
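The scaling arithmetic above can be sketched in a few lines; the per-drive and aggregate figures are taken from the article, and the efficiency calculation is just the ratio of measured to ideal throughput.

```python
# Quick check of the scaling math: eight M4s at 46.9K IOPS each
# versus the 369K IOPS measured for the full eight-drive array.
single_drive_iops = 46_900   # 4K random read, one Crucial M4 (from the article)
drives = 8
measured_iops = 369_000      # aggregate at QD32/QD64 (from the article)

ideal_iops = single_drive_iops * drives
efficiency = measured_iops / ideal_iops

print(f"Ideal aggregate: {ideal_iops:,} IOPS")    # 375,200 IOPS
print(f"Scaling efficiency: {efficiency:.1%}")    # 98.3%
```

At over 98% of ideal, the 9207-8i is leaving almost nothing on the table at these queue depths.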

The M4s don’t quite scale perfectly with random writes at higher queue depths. Two, three, and four drives compound well, but beyond that, scaling begins to taper off.

Here are some 4K latency charts for 1, 4, and 8 drives. No surprises here.

13 comments

  1. Very impressive results. Nice to see the controller scales so beautifully. Was the M4 your first choice of SSD to test with? Do you think it’s a better choice for this use-case than the Vertex 4 or 830?

  2. Great article. Typo though: “The 9702-8i is going into our Enterprise Test Bench for more testing,”–>”The 9207-8i is going into our Enterprise Test Bench for more testing,”

  3. Excellent performance!
I will need to build an SSD RAID 0 array around this HBA to work with my RAM disk ISOs (55GB). What are the best drives to work with that kind of data?

    Thanks in advance and great review!

  4. Thinking about using this in a project. How did you set up your RAID? The 9207 doesn’t support RAID out of the box. Did you flash the firmware or just set up software RAID?

  5. Fernando Martinez

    Test it with i540 240GB!

  6. Christopher, what method did you use to create a RAID-0 array? Was this done using Windows Striping? (I am guessing that based on the fact that you’ve used IT firmware, which causes 8 drives to show up as JBOD in the disk manager)

    • Originally, the article was supposed to include IR mode results alongside the JBOD numbers, but LSI’s IR firmware wasn’t released until some time after the original date of publication. Software RAID through Windows and Linux was unpredictable too, so we chose to focus on JBOD to show what the new RoC could do on 8 PCIe Gen 3 lanes. In the intervening months, we did experiment with IR mode, but found performance to be quite similar to software-based RAID. It’s something we may return to in a future article, but needless to say, you won’t get the most out of several fast SSDs with a 9207-8i flashed to IR mode.

      • I had a chance to try SoftRAID0 on IT firmware, RAID0 with IR firmware, and 9286-8i RAID0, all with Intel 520 480GB SSDs. All configurations are speed-limited at 5 drives since I am running PCIe 2.0 x8. Time for a new motherboard, I guess…

  7. Great review! Very nice to see such great info on this subject.
    BTW, does this HBA support TRIM?

  8. There’s a clarification that needs to be made in this article.

    The theoretical bandwidth of a PCIe 2.0 x8 link is around 4GB/s. PCIe is a packet-based store-and-forward protocol, so there’s packet overhead that limits the achievable data transfer rate.

    At stock settings, this overhead is around 20%. However, one can increase the size of PCIe packets in the BIOS to decrease this overhead significantly (to roughly 3-5%).

    I know this because I’ve RAIDed eight 128GB Samsung 840 Pros with the LSI MegaRAID 9271-8iCC on a PCIe 2.0 motherboard, and I’ve hit this limit on sequential reads. In order to get around it, I raised the PCIe packet size, but doing so increases latency and may cause stuttering issues with some GPUs if raised too high.
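    The overhead argument in the comment above can be sketched numerically. The ~24 bytes of per-packet (TLP) overhead assumed here is an approximation; the exact figure depends on the platform and header format. The 4GB/s raw figure is a PCIe 2.0 x8 link after 8b/10b encoding.

    ```python
    # Rough sketch of PCIe packet overhead vs. Max Payload Size (MPS).
    # TLP_OVERHEAD is an assumption (~header + framing + CRC per packet).
    TLP_OVERHEAD = 24            # bytes per packet (assumed)
    RAW_BW = 4.0                 # GB/s, PCIe 2.0 x8 after 8b/10b encoding

    for payload in (128, 256, 512):   # common MPS settings, in bytes
        efficiency = payload / (payload + TLP_OVERHEAD)
        print(f"MPS {payload:>3} B: {efficiency:.1%} efficient, "
              f"~{RAW_BW * efficiency:.2f} GB/s usable")
    ```

    Under these assumptions, a 128-byte payload yields roughly 84% efficiency while a 512-byte payload pushes past 95%, which lines up with the stock-versus-tuned overhead figures the commenter describes.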

  9. Could you provide the settings you used for the RAID stripe? i.e. Stripe Size, Read Policy, Write Policy, IO, etc. I just purchased this card and have been playing with configurations to get an optimum result.
    Thanks!

  10. Seems like most people want details of how the drives were configured so they can either try to do the tests themselves or just gain from the added transparency.
