Adaptec (by PMC) ASR-8885 12Gb/s PCIe RAID Adapter Review – RAID Testing with 8 HGST 12Gbps SSDs

SERVER PROFILES

While synthetic workloads do a great job of testing the underlying technology and reporting easy-to-understand results, they aren’t always indicative of how the drive will be used by the end user. Workloads that simulate enterprise environments try to bridge that gap without being overly complex.

ASR-8885-DB

The database profile uses 8K transfers, and 67% of operations are reads.

Our Series 8 setup showed no mercy on the database profile. With nearly 500K IOPS, it easily blew by the Series 7. Only a pair of Series 7 cards with 3x the number of drives was able to post a higher score, and even then by less than 15%.
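As a rough sanity check (our own arithmetic, not the review's raw data), converting an IOPS figure at a fixed transfer size into bandwidth is straightforward; a minimal Python sketch using the approximate 500K IOPS result above:

```python
# Rough conversion of the database-profile result into bandwidth.
# The 500K IOPS figure is approximate, read off the chart above.

def iops_to_mb_s(iops: float, block_bytes: int) -> float:
    """Throughput in MB/s for a given IOPS rate and transfer size."""
    return iops * block_bytes / 1_000_000

iops = 500_000        # approximate Series 8 database-profile result
block = 8 * 1024      # the profile uses 8K transfers
read_frac = 0.67      # 67% of operations are reads

print(f"Total:  {iops_to_mb_s(iops, block):,.0f} MB/s")               # ~4,096
print(f"Reads:  {iops_to_mb_s(iops * read_frac, block):,.0f} MB/s")   # ~2,744
print(f"Writes: {iops_to_mb_s(iops * (1 - read_frac), block):,.0f} MB/s")
```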

ASR-8885-FS

The fileserver profile is based on an 80% read/20% write mix. It’s made up of block sizes from 512 bytes to 64K, each making up a different percentage of the access pattern.

The pattern is: 512 bytes = 10%, 1K = 5%, 2K = 5%, 4K = 60%, 8K = 2%, 16K = 4%, 32K = 4%, 64K = 10%.
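To put that mix in concrete terms, here is a small sketch (ours, not the benchmark's actual tooling) of the pattern and its weighted-average transfer size:

```python
# The fileserver access pattern as {block size in bytes: fraction of accesses}.
pattern = {
    512: 0.10, 1024: 0.05, 2048: 0.05, 4096: 0.60,
    8192: 0.02, 16384: 0.04, 32768: 0.04, 65536: 0.10,
}

assert abs(sum(pattern.values()) - 1.0) < 1e-9  # the percentages sum to 100%

# Despite the 10% tail of 64K transfers, the 60% weight on 4K keeps the
# weighted-average transfer size at roughly 11 KiB.
avg_bytes = sum(size * frac for size, frac in pattern.items())
print(f"Average transfer: {avg_bytes / 1024:.1f} KiB")  # ~11.1 KiB
```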

In this test, the Series 7 and 8 were pretty evenly matched. The larger block sizes probably account for the closeness of the results. The 24 SSDs in the Series 7 setup give it a clear advantage in large-block sequential performance.

ASR-8885-WS

The webserver profile is similar to the fileserver profile, but has some additional 128K and 512K accesses thrown in for good measure. Additionally, the profile is 100% read.

Considering the webserver profile is so close to the fileserver profile, but with more large-block transfers, it was no surprise that the Series 8 fell behind the Series 7.

Now is a good time to reiterate that the Series 8 would do just as well as, or better than, the Series 7, given the same drive setup. We were more interested in seeing whether a 12Gbps solution was superior at lower drive counts. Except for large-block sequential transfers, where the number of drives plays such an important role, the Series 8 held its own in many of the tests.

14 comments

  1. Lisa Jackson = Rich Windsor

    Wow.

    I didn’t see the ASR-8885 listed via the Amazon link provided. Is it available yet? If so, who besides Amazon would carry it? And likewise for the HGST SSD800MM?

    • We commonly review products before retail availability, so I would keep an eye on the Amazon links for the card. As for the HGST 12Gbps SAS SSDs, I somehow don’t think we are going to be seeing those available through retail channels anytime soon, keeping in mind that they are still a new and very hard to come by item.

      Thanks in advance for using our links, though!

      • Lisa Jackson = Rich Windsor

        The “wow” was for the performance numbers. Again, wow 8) and thanks for the informative review. I would like to buy one, but am going to have to save pennies for a while.

  2. You wrote:

    “… Finally, you can see the SFF-8643 internal connectors and SFF-8644
    external connectors. With both external and internal ports, the
    ASR-8885 has a lot of flexibility….”

    OK, so for example, with (8) internal / (8) external, does this “flexibility” mean I can simply connect 16 drives internally to the RAID card and, for example, utilize all 16 in a RAID-10?

    • If you want to connect only internal drives using the 8885, then you would want to use a backplane with expanders, as physically only 8 directly cabled drives can be connected to the internal ports on this model. However, if you had an external drive enclosure attached to this model, then you could use the external ports in combination with the internal ones.

      With the use of an expander-based enclosure, you would be able to see all the drives in the enclosure through one connection (a single point-to-point cable from controller to enclosure).

      You could also use a 16-port internal controller such as the 81605ZQ, which does offer 16 internal ports. With either controller, you would be able to create arrays using all available drives, whether in a single array, divided among different arrays, or using the same drives for multiple arrays (up to 4 arrays can be created with the same set of drives).

      Adaptec by PMC

  3. Hi
    Do you know if the new Series 8 follows the same nature as the previous Series 6 and 7? I mean, they seem to shine at high queue depths, but can you tell me if they made progress on random 4K QD1? Thank you

    • Hi Ricardo,

      Adaptec has been working on improving lower queue depth performance, and the upcoming firmware release expected at the end of 2013/beginning of 2014 should reflect these changes in Series 7 and 8 performance numbers. However, even when these changes are implemented, queue depth 1 performance is expected to be low compared to performance at more common IO depths.

      Especially since at IO depth 1, the full latency of every stage in the IO path is exposed – from application, driver, command thread, and drive, all the way back. At all times, only one stage is active; all the others have to wait, because there is only one IO.
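To put rough numbers on that point (a hypothetical sketch; the 100 µs latency is made up, not a measured figure): with a single IO in flight, the whole path's round-trip latency caps the IOPS, and no amount of drives behind the controller raises that ceiling.

```python
# Ideal IOPS ceiling as a function of queue depth, assuming queued IOs
# overlap perfectly. The 100 microsecond round-trip latency is hypothetical.

def max_iops(latency_us: float, queue_depth: int) -> float:
    return queue_depth * 1_000_000 / latency_us

ROUND_TRIP_US = 100.0  # end-to-end latency of one IO through the whole path

for qd in (1, 4, 32, 256):
    print(f"QD{qd:>3}: up to {max_iops(ROUND_TRIP_US, qd):>9,.0f} IOPS")
# QD1 tops out at ~10,000 IOPS regardless of drive count; deeper queues
# hide per-stage latency by keeping every stage of the path busy at once.
```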

      • Thank you for the answer, Adaptec.
        I’ve been using an Adaptec 6405 with the ZMM module, connecting a pair of Samsung 830 SSDs, specifically in a Windows environment. I learned that Dynamic mode is the best suited for a Windows environment (every type of sequential and random workload). OK, I know the benchmarks (wherever they are) don’t tell the true story, and my “slow” 6405 is really fast, scanning a single Seagate Barracuda 7200.10 at 920MB/s (all cache enabled when protected by the ZMM, plus the drive’s own cache). I asked before about random 4K QD1 because each manufacturer tunes their controllers specifically for one environment, for example: Areca for random (read-ahead and write-back everything), LSI for random too, 3ware for sequential, ATTO and Adaptec for sequential. Well, booting Windows from the controller, you would expect to see some nice random results and, in my opinion, it would be great to see the Adaptec cache work effectively on random patterns.
        Another question: why does the ZMM module have 4GB of SLC when it is supposed to support only 512MB? Does the module perform other actions, like actively helping with reads and writes (excluding the cache contents written to it and flushed back in case of power failure)?

      • It does not help with reads/writes. It is just a replacement for a BBU. A BBU has problems maintaining voltage; in some cases you will have to drain it to gauge how much power it has, or to extend its life expectancy. That is the downside: it cannot hold its charge for too long a period of time, and you will have to drain the BBU and recharge it again. That means that in the meantime, for a few hours, the controller will use write-through instead of write-back caching. In most cases it is not tolerable to have it like that, since some servers have high loads and things might become a bottleneck when the cache suddenly shuts off.

  4. Are the throughput tests done in RAID 5 mode?

  5. Unfortunately, all I gained from this article was that you are happy Adaptec has RAID controllers. If those charts included a performance comparison to a 9271 or even a 9260, I’d have some idea of how LSI’s benchmarks stand in relation to the new Adaptec. Just because it is 12G or PCIe 3.0 doesn’t mean it is faster, just that it has access to more bandwidth; the processor may be so slow it gains nothing.

  6. Fantastic article! I am running a small post-production shop. Our network runs on 10GbE, and as I am adding editors, our current RAID (a Promise R8 connected via Thunderbolt to an iMac server) is starting to become a bottleneck.

    I’m currently considering building a Windows 2012 R2 server with one of these Adaptec cards to run either an 8-drive or 16-drive RAID-5. Obviously we cannot afford the SSDs tested here, so I am wondering if it would be possible to run an SSD RAID based on consumer SSDs. The top-of-the-line SSDs from companies such as SanDisk currently achieve 400MB/s with 1TB capacity.

    So an 8-bay RAID-5 configuration would give us 7TB of space and run at 2.8GB/s. A 16-bay RAID-5 would give us 15TB of space and a throughput of 6.0GB/s. Are my assumptions here correct?
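For reference, that arithmetic matches the ideal RAID-5 ceiling, where one drive's worth of capacity goes to parity and reads stripe across the remaining drives; a quick sketch using the same assumed 1TB/400MB/s drives (real controllers, especially on writes, land below these numbers):

```python
# Ideal RAID-5 capacity and read throughput for n identical drives.
# Uses the commenter's assumptions: 1TB drives sustaining 400MB/s each.

def raid5_ideal(drives: int, capacity_tb: float, drive_mb_s: float):
    usable_tb = (drives - 1) * capacity_tb         # one drive's worth to parity
    read_gb_s = (drives - 1) * drive_mb_s / 1000   # ideal striped-read rate
    return usable_tb, read_gb_s

for n in (8, 16):
    tb, gbs = raid5_ideal(n, 1.0, 400.0)
    print(f"{n}-bay: {tb:.0f} TB usable, up to {gbs:.1f} GB/s sequential reads")
# 8-bay: 7 TB, 2.8 GB/s; 16-bay: 15 TB, 6.0 GB/s -- matching the estimates
```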

    Would it even be possible to use consumer-grade SSDs in a server configuration, or will they just die after 3 months of use?

    Would love to get your thoughts!
