LSI SAS 9207-8i PCIe 3.0 HBA Overview – Eight Crucial M4 SSDs Pushed to 4.1GB/s Performance

For today’s quick look at the LSI SAS 9207-8i Gen 3 HBA’s performance, we’ll be using eight Crucial M4 SATA 3 SSDs, each 256GB in capacity and running the latest 000F firmware. The 9207-8i is flashed with IT firmware. All Gen 3 LSI HBA products will ship with IT firmware factory flashed, with IR mode firmware made available for the end user to flash as needed.
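For anyone who does want to move to IR mode later, the cross-flash is handled by LSI’s sas2flash utility. Here’s a minimal sketch; the firmware image name is hypothetical, so check LSI’s release notes for the exact procedure on your card:

```python
# Minimal sketch: drive LSI's sas2flash utility from a script.
# "9207-8i_IR.bin" is a hypothetical image name -- use the file from LSI's
# download page, and confirm the steps against LSI's release notes first.
import subprocess

# Enumerate installed LSI HBAs and their current firmware versions.
subprocess.run(["sas2flash", "-listall"], check=True)

# Flash the IR-mode firmware image in advanced mode (-o). Requires root.
subprocess.run(["sas2flash", "-o", "-f", "9207-8i_IR.bin"], check=True)
```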

SINGLE DRIVE PERFORMANCE

First, we thought we’d take a look at how a single M4 performs on the 9207-8i. On the left, we have one Crucial M4 hanging off of our X79 PCH ports, with the same drive on the 9207-8i on the right. The Intel RSTe drivers used on X79 platforms are somewhat more conservative than the regular RST drivers found on Intel’s mainstream platforms.

[Screenshots: CrystalDiskMark results, Crucial M4 on the X79 PCH (left) vs. the 9207-8i (right)]

The M4 does slightly better across the board on the 9207-8i. We’re just running CDM as a reality check, but sure enough, the M4 is a little faster. Not every SATA drive is quicker on the 9207 than on Intel’s 6Gbps ports, but the M4 seems to like it. With that established, we can move on to looking at single drive performance with IOmeter.

The M4 is able to put down some good numbers when fresh. Read and write performance is balanced, working well at high and low queue depths alike. Reads continue to increase from QD1 to QD16, where they level off. Random writes are a similar story, holding steady in the mid-50K IOPS range from QD4 to QD16. If all goes well, adding more drives should get us multiples of this performance.

After our run with IOmeter, we get 519MB/s sequential reads and 266MB/s sequential writes.
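As a quick back-of-the-envelope check on where eight of these drives could land, the ideal-scaling arithmetic is below. Perfect scaling is assumed, which real arrays only approach, but it lines up with the 4.1GB/s headline number:

```python
# Ideal-scaling estimate: multiply the single-drive IOmeter figures above
# by eight. Perfect scaling is assumed; real aggregates land a bit lower.
single_read_mb_s = 519   # sequential read, one M4 on the 9207-8i
single_write_mb_s = 266  # sequential write, one M4 on the 9207-8i
drives = 8

print(f"Ideal aggregate read:  {single_read_mb_s * drives / 1000:.2f} GB/s")   # 4.15 GB/s
print(f"Ideal aggregate write: {single_write_mb_s * drives / 1000:.2f} GB/s")  # 2.13 GB/s
```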

And lastly, we look at latency with a single drive.

With all that out of the way, we can add some drives and see what happens.

13 comments

  1.

    Very impressive results. Nice to see the controller scales so beautifully. Was the M4 your first choice of SSD to test with? Do you think it’s a better choice for this use-case than the Vertex 4 or 830?

  2.

    Great article. Typo though: “The 9702-8i is going into our Enterprise Test Bench for more testing,”–>”The 9207-8i is going into our Enterprise Test Bench for more testing,”

  3.

    Excellent performance!
    I will need to build an SSD RAID 0 array around this HBA to work with my RAM disk ISOs (55GB). Which drives would work best with that kind of data?

    Thanks in advance and great review!

  4.

    Thinking about using this in a project. How did you set up your RAID? The 9207 doesn’t support RAID out of the box. Did you flash the firmware, or just set up software RAID?

  5. Fernando Martinez

    Test it with i540 240GB!

  6.

    Christopher, what method did you use to create a RAID-0 array? Was this done using Windows striping? (I’m guessing, based on the fact that you’ve used IT firmware, which causes the 8 drives to show up as JBOD in Disk Manager.)

    •

      Originally, the article was supposed to include IR mode results alongside the JBOD numbers, but LSI’s IR firmware wasn’t released until some time after the original date of publication. Software RAID through Windows and Linux was unpredictable too, so we chose to focus on JBOD to show what the new RoC could do on 8 lanes of PCIe Gen 3. In the intervening months, we did experiment with IR mode, but found performance to be quite similar to software-based RAID. It’s something we may return to in a future article, but needless to say, you won’t get the most out of several fast SSDs with a 9207-8i flashed to IR mode. For the curious, a sketch of a Linux software stripe follows this thread.

      •

        I had a chance to try SoftRAID0 on IT firmware, RAID0 with IR firmware, and RAID0 on a 9286-8i, all with Intel 520 480GB SSDs. All configurations are speed-limited at 5 drives since I am running PCIe 2.0 x8. Time for a new motherboard, I guess…
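For readers asking how a software stripe is built on the IT-firmware card: on Linux, the eight JBOD drives the HBA exposes can be striped with mdadm. A minimal sketch follows; the device names are placeholders for your own drives, and mdadm’s --create will destroy any data on them:

```python
# Minimal sketch: stripe the eight JBOD drives exposed by the IT-firmware
# 9207-8i into one mdadm RAID-0 array. Device names are placeholders --
# substitute your own. Requires root; --create destroys existing data.
import subprocess

devices = [f"/dev/sd{letter}" for letter in "bcdefghi"]  # hypothetical names

cmd = [
    "mdadm", "--create", "/dev/md0",
    "--level=0",                        # RAID-0 (striping)
    f"--raid-devices={len(devices)}",
    "--chunk=64",                       # 64KiB chunk size; tune to workload
    *devices,
]

print("Would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually build the array
```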

  7.

    Great review! Very nice to see such great info on this subject.
    BTW, does this HBA support TRIM?

  8.

    There’s a clarification that needs to be made in this article.

    The raw bandwidth of a PCIe 2.0 x8 link works out to around 4GB/s after 8b/10b encoding. PCIe is a packet-based, store-and-forward protocol, so per-packet overhead limits the achievable data transfer rate below that.

    At stock settings, this overhead is about 20%. However, one can increase the size of PCIe packets (the Max Payload Size) in the BIOS to decrease this overhead significantly, to around 3-5%.

    I know this because I’ve RAIDed eight 128GB Samsung 840 Pros with the LSI MegaRAID 9271-8iCC on a PCIe 2.0 motherboard, and I’ve hit this limit on sequential reads. To get around it, I raised the PCIe packet size, but doing so increases latency and may cause stuttering issues with some GPUs if raised too high.
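To put rough numbers on the overhead described above, here’s a simple efficiency model. The 24-byte per-packet cost (TLP header plus framing and LCRC) is an assumption, and real links also spend bandwidth on flow-control traffic, so the stock-settings overhead comes out a little lower here than the ~20% quoted:

```python
# Rough PCIe efficiency model: each transaction-layer packet carries up to
# 'mps' bytes of payload plus a fixed per-packet cost. The 24-byte figure
# is approximate, and flow-control (DLLP) traffic adds further overhead.
TLP_OVERHEAD_BYTES = 24
LINK_GB_S = 4.0  # PCIe 2.0 x8 after 8b/10b encoding: 8 lanes x 500MB/s

def efficiency(mps: int) -> float:
    """Fraction of link bandwidth left for payload at a given Max Payload Size."""
    return mps / (mps + TLP_OVERHEAD_BYTES)

for mps in (128, 256, 512):
    e = efficiency(mps)
    print(f"MPS {mps:>3}B: {e:6.1%} efficient -> ~{LINK_GB_S * e:.2f} GB/s usable")
```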

  9.

    Could you provide the settings you used for the RAID stripe? i.e. stripe size, read policy, write policy, IO policy, etc. I just purchased this card and have been playing with configurations to get an optimal result.
    Thanks!

  10.

    Seems like most people want details of how the drives were configured, so they can either try the tests themselves or just gain from the added transparency.
