HighPoint RocketRAID 6Gb/s SATA/SAS 2720SGL Review Utilizing 8 Micron C400 6Gbps SSDs

TEST BENCH

This is the Test Bench.  As you may have guessed, it truly is no ordinary Test Bench.  I have dedicated countless hours to creating this build, which has been used in many benchmarking excursions and extreme overclocking sessions. She has even set a few HWBot world records along the way!

SYSTEM CONFIGURATION

CPU: Intel Core i7-920 D0 @ 4.46 GHz

MOTHERBOARD: EVGA E760 Classified Motherboard

RAM: 6 GB EK-watercooled Corsair Dominator 2000 MHz CL8 kit, running 7-8-7-24 @ 1700 MHz

GPU: Dual EVGA GTX480

POWER: ST1500 fully modular 1500 W power supply (1600 W peak); +12 V combined: 1320 W / 110 A (120 A peak); +3.3 V/+5 V: 280 W

CHASSIS: Danger Den Torture Rack

CPU COOLER:  HeatKiller 3.0

WATER SYSTEM: Loop 1: two KMP-400 pumps with reservoirs in parallel, a Bitspower reservoir, two MCR320-QP radiators, and one BIPS 240 radiator, cooling the water-cooled Areca 1880IX-12, EK RAM block, and EK full-board motherboard block. Loop 2: CPU only. Loop 3: MCP-655 pump and Honda radiator on the dual GPUs.

ENCLOSURE: ARC-4036 6Gb/s JBOD, provided by Areca for our test bench. Our review of this product can be found HERE.

CRUCIAL C400/M4  256 GB SSDs

This review marks the first unveiling of our new primary Test Array. Crucial/Micron has graciously provided us with 8 of their ultra-high-performance C400 SSDs to serve as the primary testing array for the site. Our previous array of 8 C300s still consists of very high-performance drives that serve us well in our testing endeavors.

The C400 is the enterprise variant of the Crucial M4 SSD; the two are physically identical, and each has become incredibly popular in its own market. One of the key strengths of these SSDs is that they deliver consistent performance over long periods of time. Hallmarks of these devices are their impressive performance with the 0009 firmware update and their rock-solid reliability.

The Micron C400 6Gb/s SSDs boast a sequential read speed of 415 MB/s and 260 MB/s write speed. With random 4K reads at 40,000 IOPS and random 4K writes weighing in at 50,000 IOPS, these C400 SSDs are fast enough to easily saturate any RAID controller.
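
To put that claim in perspective, here is a minimal Python sketch (the per-drive numbers are the vendor specs quoted above; the ~4,000 MB/s figure is the raw bandwidth of an x8 PCIe 2.0 slot before protocol overhead) comparing the ideal aggregate throughput of eight of these drives against that ceiling:

    # Rough ceiling check: eight C400s versus an x8 PCIe 2.0 slot.
    # Per-drive numbers are the vendor sequential specs quoted above;
    # the PCIe figure is the raw x8 Gen2 bandwidth before protocol overhead.
    DRIVES = 8
    SEQ_READ_MBPS = 415           # per-drive sequential read
    SEQ_WRITE_MBPS = 260          # per-drive sequential write
    PCIE2_X8_MBPS = 8 * 500       # 4,000 MB/s raw upstream bandwidth

    ideal_read = DRIVES * SEQ_READ_MBPS     # 3,320 MB/s with perfect scaling
    ideal_write = DRIVES * SEQ_WRITE_MBPS   # 2,080 MB/s with perfect scaling

    print(f"Ideal 8-drive read : {ideal_read} MB/s")
    print(f"Ideal 8-drive write: {ideal_write} MB/s")
    print(f"x8 PCIe 2.0 raw ceiling: {PCIE2_X8_MBPS} MB/s")

Even with perfect scaling, eight of these drives push sequential reads close to what an x8 PCIe 2.0 controller can realistically move, which is why this array makes a good stress test for the 2720SGL.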

TEST METHODOLOGY

Anvil's Storage Utilities, ATTO, CrystalDiskMark, AS SSD, and QuickBench are staples of our Test Bench for synthetic testing.

For real world trace testing we will utilize PCMark Vantage.

27 comments

  1.

    Hi, what driver did you use on the card? I only got 300-600 MB/s in the ATTO test with the old drivers (first version). There is a test on HW.no that shows this card doing 4170 in ATTO.

    •

      That figure of “4170” — presumably Megabytes per second —
      overstates the upstream bandwidth of x8 PCIe 2.0 lanes:

      x8 @ 500 = 4,000 MBps. Thus, the extra “170” must be
      a residual result of some other factor, like OS caching.

      Highpoint’s readme.txt recommends downloading
      the latest driver and the latest bios for that card,
      particularly if one wants to DISABLE INT13 —
      Interrupt 13 — which must be ENABLED
      in order to boot from that card.

      Flashing a new bios can be done with a Windows program,
      so obviously one must be able to boot into Windows
      in order to execute this program.

      That program allows INT13 to be ENABLED or DISABLED.

      Another hurdle that users can encounter is the effect
      INT13 has when the 2720SGL is installed in an
      existing system, with an existing storage subsystem:

      Highpoint recommends that no storage devices
      be connected to the 2720SGL, to give the OS a chance
      to install its device driver correctly.

      This hurdle is explained in the readme.txt file,
      but many users have failed to read it first.

      The symptom that occurs when Highpoint’s
      advice is not followed, is the disappearance
      of existing storage devices in the motherboard’s BIOS.

      Happily, if one is doing a fresh install of Windows,
      the default of INT13 ENABLED simply requires
      that the device driver be ready to load at the
      appropriate point during Windows Setup.

      In this latter respect, the 2720SGL is no different
      from RAID controllers integrated into motherboard chipsets.

      I hope this helps.

      MRFS

    •

      The maximum result achievable with any hardware RAID solution over a single PCIe 2.0 slot is roughly 2.7-2.8 GB/s. This is true of several different manufacturers. I would like to see a link to these results of anything near 4000 MB/s; according to the posted PCIe specs, that is impossible.
      There is more communication going on between the device and the bus than just the data. There is overhead with any specification. This is the effective limitation of these devices.

      •

        I didn’t mean to imply that “throughput” could reach 4,000 with PCIe 2.0.

        I did refer to that number as “max bandwidth” i.e. x8 @ 500 MBps = 4,000 MBps.

        Nevertheless, each x1 PCIe 2.0 lane oscillates at 5 GHz; and
        the reason why that translates into 500 MBps is the
        8b/10b “legacy frame” which adds one start bit
        and one stop bit to each byte transmitted —
        hence 10 bits per transmitted byte:

        5 GHz / 10 = 500 MBps

        The same legacy frame is also used in the current SATA-III standard,
        only the clock rate is 6 GHz instead of 5 GHz:

        6 GHz / 10 = 600 MBps (i.e. the “max bandwidth” of one SATA-III channel)

        Now, use the PCIe 3.0 specifications of 8 GHz and
        a “jumbo frame” of 128b/130b, and we get:

        8 GHz / 8 = 1,000 MBps

        Thus, 20% of the current transmission overhead is directly attributable
        to 2 extra binary digits for every 8-bit byte transmitted.

        https://www.pcisig.com/news_room/faqs/pcie3.0_faq/

        “Q: How does the PCIe 3.0 8GT/s “double” the PCIe 2.0 5GT/s bit rate?

        “A: The PCIe 2.0 bit rate is specified at 5GT/s, but with the 20 percent performance overhead of the 8b/10b encoding scheme, the delivered bandwidth is actually 4Gbps. PCIe 3.0 removes the requirement for 8b/10b encoding and uses a more efficient 128b/130b encoding scheme instead. By removing this overhead, the interconnect bandwidth can be doubled to 8Gbps with the implementation of the PCIe 3.0 specification.”

        e.g. 2,700 / 0.80 = 3,375 MBps max throughput

        Conclusion: a “SATA-IV” standard should extend this PCIe 3.0 spec
        outwards over future SATA channels, using the same 8 GHz clock rate
        and the same 128b/130b “jumbo frame” to raise the max bandwidth
        to 1.0 GBps for each SATA cable.

        MRFS
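
        The per-lane arithmetic in the comment above can be reproduced with one small, generic helper: raw transfer rate multiplied by the encoding efficiency, divided by 8 bits per byte. This is only a sketch of the published line rates and encodings, not a measurement; note that 128b/130b actually works out to roughly 985 MB/s per lane, which the comment rounds up to 1,000 MB/s.

            # Usable one-direction bandwidth per lane:
            # rate (GT/s, i.e. thousands of Mb/s raw) * encoding efficiency / 8 bits per byte.
            def lane_mbps(gt_per_s, payload_bits, frame_bits):
                return gt_per_s * 1000 * payload_bits / frame_bits / 8

            LINKS = [
                ("PCIe 2.0 x1 (8b/10b)",    5.0,   8,  10,  1),
                ("PCIe 3.0 x1 (128b/130b)", 8.0, 128, 130,  1),
                ("SATA-III   (8b/10b)",     6.0,   8,  10,  1),
                ("PCIe 2.0 x8",             5.0,   8,  10,  8),
                ("PCIe 2.0 x16",            5.0,   8,  10, 16),
                ("PCIe 3.0 x8",             8.0, 128, 130,  8),
                ("PCIe 3.0 x16",            8.0, 128, 130, 16),
            ]
            for name, rate, payload, frame, lanes in LINKS:
                mbps = lanes * lane_mbps(rate, payload, frame)
                print(f"{name:26s} {mbps:8.0f} MB/s per direction")

        The same table also gives the x16 Gen2 and x8/x16 Gen3 figures that come up later in this thread.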

      •

        You must read those results with care; something is amiss. Those controllers, and I mean NONE of them, are rated for those speeds. Not one of them claims to be able to reach those speeds, simply because they cannot. The 9265-8i and 9260-8i are rated at 2.7-2.8 GB/s MAX.
        Also, when they run Anvil's benchmark, which I am extremely familiar with, they are not receiving anything near what they are claiming with the graphs that they made.
        Also, even though they are running Anvil, they should be getting much higher results. They have it configured with a 1 GB test file, which allows the data to run in the CACHE only!!!
        I would bet some serious money that if you look at their ATTO tests (were they to provide a screenshot) you would see that they are running a 256 MB test file. This is an incorrectly configured test for RAID controllers that have cache.
        In short, the speeds that they have listed are not backed by conclusive data:
        1. They are on a graph that they made themselves, with no verification screenshots.
        2. The secondary tests listed are configured incorrectly, displaying a lack of correct preparation and/or knowledge.
        3. The secondary tests do not back up their previous tests; the results are far different.
        4. The speeds listed are so far above the rated specification of the equipment they are testing that it is obviously wrong. If those devices could do double their rated maximum specs, you can bet money that they would be advertised as such. Nobody would market, advertise, and sell their equipment at lower specs than it is capable of, let alone by 2x!
        5. The speeds listed are above the maximum throughput of the PCIe specification.
        6. Their PCMV scores also do not reflect the speeds that they are claiming. Bear in mind that we did set a PCMV World Record with the Areca controller, so we have an idea of what the results would look like.

        There may be something lost in translation, as Google Translate isn't the best, but from what I see, something is wrong with these results.

      •

        Paul, I’m sure you are correct: there is NO WAY that x8 PCIe 2.0 lanes can deliver more than 4 GB/second.

        Here’s why:
        x8 PCIe 2.0 lanes @ 5 GHz / 10 = 4,000 MB/second MAX!!

        (I’ve done that calculation literally dozens of times — on paper, in my head, and at numerous Forums.)

        However, if their measuring tool is watching ONLY the traffic between those controllers and the 8 x Samsung 830 SSDs (as shown in one of their photos), then:

        8 x SSDs @ ~520 = 4,160 MB/second

        https://prisguide.hardware.no/produkt/samsung-ssd-830-series-64gb-151501

        The latter rate is very close to what I see reported (without any translations, however): “4250 MB/s”

        Clearly, the upstream max bandwidth is 4,000 MB/s,
        whereas the max bandwidth downstream is 4,800 MB/s:

        8 x SATA-III channels @ 6 GHz / 10 = 4,800 MB/s

        Thus, some of the traffic over the 8 x SATA channels may occur ONLY between the host controllers and the internal DRAM caches in those Samsung 830 SSDs, which would explain the slightly higher rate of 4,250 MB/second.

        This same type of bias occurs if the measuring tool does
        NOT control for a large integrated buffer in an expensive
        RAID controller like the Areca models.

        MRFS

      •

        I have a friend, actually Anvil himself, who is the maker of the Anvil Utilities bench. He is Norwegian, and is very active on the very site that contains said review. I am sending him a quick email to get his thoughts; he natively speaks the language needed to read over the article. He is very knowledgeable and, like myself, owns every controller used in that very review.
        I will report back when he replies 🙂

      •

        On the other hand, if their test file does fit entirely within the
        8 SSD caches which are additive in RAID 0 mode, this is the
        kind of test that demonstrates how the upstream bandwidth
        may ultimately emerge as the real limiting factor.

        I remember commenting, several years ago, how RAID cards
        were very slow to exploit all x16 PCIe lanes, whereas
        video cards did so very early after PCI Express first became
        available.

        In order to supply an upstream bandwidth that exceeds
        4,000 MB/second, either:

        (1) a RAID controller with a full x16 edge connector
        must be installed;

        -or-

        (2) an x8 RAID controller with PCIe 3.0 support
        must be installed with either:

        (a) an 8 GHz clock rate on each x1 lane;

        -or-

        (b) (2a) + the 128b/130b “jumbo frame” during transmission.

        In the case of (1) above, the max bandwidth doubles
        to x16 @ 5 GHz / 10 = 8,000 MB/second.

        In the case of (2a) above, the max bandwidth increases
        to x8 @ 8 GHz / 10 = 6,400 MB/second.

        In the case of (2b) above, the max bandwidth increases
        to x8 @ 8 GHz / 8 = 8,000 MB/second (same as (1) above).

        Something like the above must happen if/when
        the current crop of 6G SSDs saturates the
        upstream max bandwidth of 4,000 MB/second.

        MRFS

  2.

    Do the C400s not get the 0009 FW, or are they still on 0002?

  3.

    > There is definitely a notable step down in performance when the controller is handling the load.

    I’m really glad to see this permutation measured in an apples-to-apples comparison.

    With the proliferation of multi-core CPUs, it seemed rather obvious to exploit one or more idle cores to do the I/O processing that would otherwise be done by a dedicated IOP on a more expensive RAID controller.

    Here, we see that general-purpose CPU cores do a much better job of exploiting PCIe bandwidth than the 2720SGL's own on-board hardware does.

    Very interesting!

    MRFS

    •

      Now, just as the Windows 7 scheduler was recently modified to better exploit AMD's Bulldozer architecture, it may be worthwhile to distribute software RAID queuing across multiple cores of a multi-core CPU.

      In the Forums here, we’ve already discussed how Windows may assign all RAID queuing to a single CPU core e.g. like “set affinity” in Windows Task Manager, regardless of the number of RAID members.

      As such, this single CPU core ends up being a big I/O bottleneck.

      With 8 x SSDs like those assembled in this review, it would make sense to distribute queuing across 2 or more CPU cores, in order to enhance parallelism.
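
      As a rough illustration of the "set affinity" idea above, here is a minimal Python sketch using the third-party psutil package (generic process pinning on Windows or Linux, not anything exposed by HighPoint's driver):

          # Pin the current (hypothetically I/O-heavy) process to the last two
          # logical cores, so its queuing work does not pile up on core 0.
          import os
          import psutil

          proc = psutil.Process(os.getpid())
          print("Current affinity:", proc.cpu_affinity())

          cores = list(range(psutil.cpu_count(logical=True)))
          proc.cpu_affinity(cores[-2:])   # a second worker could take cores[-4:-2]
          print("New affinity:    ", proc.cpu_affinity())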

  4.

    > The Micron C400 6Gb/s SSDs boast a sequential read speed of 415 MB/s and 260 MB/s write speed.

    > we were able to reach our highest throughput at 2.7 GB/s with the Software RAID. This is effectively the maximum practical limit of the PCIe 2.0 x8 bus.

    I don’t think the card is the only bottleneck here:

    Let’s do a simple parametric analysis:

    x8 PCIe lanes @ 500 MBps = 4,000 MBps max bandwidth

    8 x C400 @ 415 = 3,320 MBps max throughput (perfect scaling)

    2,742 / 3,320 = 0.826 -> 17.4% overhead (realistic scaling)

    My limited measurements of the 2720SGL suggest
    a much lower overhead in RAID 0 mode, however.

    Let’s start from a realistic overhead of 5%, and
    then work backwards, again using a parametric approach:

    3,320 x 0.95 = ~3,150 MBps = maximum theoretical throughput

    PCIe 3.0 should help a lot here too, because it eliminates 20% of the overhead
    that results from the 8b/10b “legacy” frame and replaces that legacy frame
    with a 128b/130b “jumbo frame”.

    Also, other SSDs can sustain READs at more than 500 MBps.

    So, I predict that the 2720SGL can demonstrate higher throughput,
    e.g. with 8 faster SSDs in “software RAID” mode.
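
    For reference, the parametric comparison above boils down to a few lines of Python (2,742 MB/s is the figure used in the analysis above; the ideal value assumes perfect scaling of the 415 MB/s per-drive spec):

        measured_mbps = 2742    # figure used in the parametric analysis above
        ideal_mbps = 8 * 415    # 3,320 MB/s with perfect 8-drive scaling
        efficiency = measured_mbps / ideal_mbps
        print(f"Scaling efficiency: {efficiency:.1%}  (overhead: {1 - efficiency:.1%})")
        # -> roughly 82.6% efficiency, i.e. about 17.4% overhead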

    •

      I believe that in order for the device to take advantage of the jumbo frame, the device itself would have to be PCIe 3.0 compliant.
      Even the highest performing solutions currently available, the 9260 and 9265, can only peak at around 2.7-2.8 GB/s; the 9211 HBA peaks around this limit as well.

      •

        > in order for the device to take advantage of the jumbo frame the device itself would have to be pcie 3.0 compliant.

        Correct: such a new protocol would require compatibility with
        the PCIe 3.0 “jumbo frame” at both ends of the data cable.

        That’s one of the main reasons why I am suggesting that
        this feature should be standardized in a “SATA-IV” specification,
        for adoption industry-wide.

        At the moment, that “jumbo frame” appears to be a feature
        limited to the PCIe 3.0 chipsets i.e. fixed wire traces that are
        embedded in internal motherboard circuitry.

        BTW: I’ve noticed that, when motherboard manufacturers e.g.
        Tier 1 vendors like ASUS, Gigabyte and MSI, advertise support for PCIe 3.0,
        they fail to clarify whether the chipset lanes are clocked at 8 GHz, or
        whether the 128b/130b “jumbo frame” is also supported, or both.

        My “best guess” is that those current chipsets can deliver 8 GHz clock rates
        to the PCIe slots, and auto-configure downwards if an installed card
        runs at the slower 5.0 GHz for Gen2, or 2.5 GHz for Gen1 PCIe slots.

        But, I don’t see very many PCIe add-on cards that also support
        the 128b/130b “jumbo frame” at their edge connectors.

        Please let us know if you come across any of the latter!
        They may already exist, but the latter feature is just not
        being reported in the published technical specs.

        (From an engineering point of view, it has probably been easier simply
        to increase the clock rate, rather than to modify the frame layout too,
        and still advertise it as PCIe 3.0.)

        MRFS

      •

        Case in point:

        https://hexus.net/tech/news/mainboard/32968-msi-outs-x79a-gd45-8d-x79-motherboard/

        “the latest PCI Express Gen 3 to provide up to 32GB/s transfer bandwidth for the expansion cards”

        This necessarily implies BOTH an 8GHz clock rate AND
        the 128b/130b “jumbo frame” in order to deliver 16 GB/s
        in one direction across a standard x16 PCIe 3.0 edge connector.

        The Gen3 spec has simplified bandwidth planning:
        8 GHz / 8 = 1.0 GB/s per x1 PCIe lane.

        Thus, 16 such PCIe lanes @ 1.0 GB/s = 16 GB/s in one direction,
        and double that in both directions: 16 x 2 = 32 GB/s.

        But, we must be careful when vendors of PCIe add-on cards
        advertise “Gen3 compatibility”: a card can actually be
        Gen2 compatible and it should still work because the
        Gen3 standard requires backwards compatibility with
        these slower add-on cards (read “Plug-and-Play”).

        MRFS

  5.

    Paul,

    Where did you read that the 2720SGL supports RAID 6?

    According to the packaging, this statement is not accurate: “The HighPoint RocketRAID controller has a surprising amount of functionality for such a small controller, supporting RAID 0, 1, 5, 6, 10, 50 and JBOD.” I cannot find any reference to RAID 6 support in any of the documentation or on the product packaging, so I am extremely curious about the discrepancy, especially since you made an extra point of how unusual this option is on a RAID card in this price range.

    Thanx,
    Peter

  6.

    Paul,

    It seems you are not the only person stating that this card supports RAID 6. I just checked on NewEgg.com and their spec sheet for this product also states that it supports RAID 6. Wow, now I am totally confused.

    The product packaging clearly states the following:
    RocketRAID 2720SGL
    – RAID 0, 1, 5, 10, 50, single-disk and JBOD

    Please excuse my lack of knowledge of RAID specifications. Your article was instrumental in convincing me to purchase 4 of these cards specifically because of the quoted support for RAID 6, so I would like to verify which is the correct specification.

    Peter Klimon

    •

      Hi Peter,
      Yes, this device does support RAID 6; it is also mentioned on the manufacturer's product page:
      https://highpoint-tech.com/USA_new/CS-series_rr272x.htm
      If you are experiencing issues with the controller not allowing you to select the “RAID 6” setting, that usually means that there are not enough devices connected to do so. Depending upon the number of devices connected, it will allow you to configure the compatible RAID levels; if there aren't enough devices present, the option is inaccessible.
      I hope that this helps. There might also simply be an inaccuracy on the packaging itself, one that I am sure HighPoint would be interested in hearing about! If you need further assistance, feel free to post in our forums, or email me directly at my site email.
      Thanks for reading!
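
      For readers who hit the greyed-out option: the usual cause is simply the minimum member count per RAID level. A small sketch of the commonly cited minimums and the resulting usable capacity follows; these are generic RAID rules of thumb, not figures taken from HighPoint's documentation:

          # Commonly cited minimum disk counts and usable capacity for n identical
          # disks of disk_gb each (simplified; ignores metadata overhead).
          MIN_DISKS = {"RAID 0": 2, "RAID 1": 2, "RAID 5": 3, "RAID 6": 4,
                       "RAID 10": 4, "RAID 50": 6, "JBOD": 1}

          def usable_gb(level, n, disk_gb):
              if n < MIN_DISKS[level]:
                  raise ValueError(f"{level} needs at least {MIN_DISKS[level]} disks")
              if level in ("RAID 0", "JBOD"):
                  return n * disk_gb
              if level == "RAID 1":
                  return disk_gb
              if level == "RAID 5":
                  return (n - 1) * disk_gb
              if level in ("RAID 6", "RAID 50"):   # RAID 50: two RAID 5 legs of n/2 disks
                  return (n - 2) * disk_gb
              if level == "RAID 10":
                  return (n // 2) * disk_gb

          print(usable_gb("RAID 6", 4, 256))       # 512 GB from four 256 GB drives
          try:
              usable_gb("RAID 6", 3, 256)
          except ValueError as err:
              print(err)                           # RAID 6 needs at least 4 disks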

  7.

    Nice review. This could be just the thing I have been looking for, for a long time now. I've been thinking of buying a used IBM M1015, but it would also require some sort of special key to enable RAID 5 and above.

    This card seems to be a much better solution. I have an overclocked quad-core CPU and I always wondered why no one came up with a RAID card solution that could make use of my massive idling CPU power. Too bad you only do SSD reviews; it would be nice to see some 4-drive RAID 5 tests with normal hard drives.

  8.

    I just purchased a RocketRAID 2710 card and tried to put 3 SSDs in RAID 0 to use as a boot drive for Windows 7 64-bit. Unfortunately, I found out that HighPoint does not have signed drivers, and unsigned drivers apparently are not allowed in a Windows 7 64-bit installation; there is no way to get around this restriction. Thus I cannot use the card! Has anyone found a way around this?

    •

      Hi, I also have the same problem, and HighPoint support isn't much help. I then tried to install with the 32-bit driver; the installer says it isn't working, but it does. I could then see the RAID 0 volume, install 64-bit Windows, and after the installation I updated the driver manually to the 64-bit one.

      •

        Thanks for the tip; however, when I tried your suggestion it did not work for me. Windows 7 64-bit would not boot for me with the 32-bit driver. Oh well.

  9.

    We had some serious compatibility issues when we tried to use this card on a Dell PowerEdge T710 server. This is a 2011 Xeon 5600 motherboard, so we are quite surprised that it does not work.

  10.

    How come the software RAID is faster than the hardware RAID?

  11.

    The review is fine and the numbers great, but what is missing is some detail about RAID 6 usage. It does work, and in my test I yanked 2 of 4 disks and let it rebuild, which it did without a hiccup. BUT RAID 6 is NOT supported in the BIOS! The card can create and use RAID 6 arrays only from a working operating system. You cannot create a bootable RAID 6 array. Most disappointing…

  12.

    I know this is old, but it would be much more informative with RAID 5 or RAID 6 instead, since then the controller or the CPU (in software RAID) would actually have to do some kind of calculation.
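
    For anyone curious what that calculation actually is: RAID 5 parity is a byte-wise XOR of the data blocks in each stripe (RAID 6 adds a second, Galois-field-based syndrome on top). A minimal Python sketch of the XOR part, with tiny 4-byte blocks standing in for real stripe units:

        from functools import reduce

        def xor_blocks(blocks):
            """Byte-wise XOR of equal-length blocks (the RAID 5 parity rule)."""
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        data = [b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"]
        parity = xor_blocks(data)                       # written to the parity disk

        # "Lose" block 1 and rebuild it from the survivors plus parity:
        rebuilt = xor_blocks([data[0], data[2], parity])
        print(rebuilt == data[1])                       # True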
