LSI 9265-8i 6Gbps MegaRAID Card RAID 5 Tested! – Settings and Test Considerations

In testing the 9265, we ran the usual suspects for consumer benchmarking, including AS SSD, ATTO, CrystalDiskMark, and HDTune Pro.

We also ran quite a few tests with IOMeter, and we enjoy AIDA64 for latency testing, as it gives a very good indication of average device latency.

The Windows Experience Index disk subtest for Win7 is also included.

In order to show a bit of the enterprise usage patterns for this review, we have included PassMark's File Server, Web Server, Database, and the all-important Workstation tests. We also ran the standard PCMark Vantage HDD suite to test overall disk performance across a range of scenarios.


All tests for RAID 5 were run with a 64KB strip size and the LSI-recommended settings: No Read Ahead, Direct I/O, Always Write Back, and Disk Cache Enabled.
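For readers building a similar array from the command line, the same settings can be applied with LSI's MegaCLI utility. This is only a sketch: the enclosure:slot IDs and adapter number below are placeholders for our eight-drive setup and will differ on your system.

```shell
# Create an eight-drive RAID 5 virtual drive on adapter 0 with a 64KB strip,
# Write Back, No Read Ahead, and Direct I/O. Enclosure 252, slots 0-7 are
# placeholders -- list your own drives first with: MegaCli -PDList -aAll
MegaCli -CfgLdAdd -r5 [252:0,252:1,252:2,252:3,252:4,252:5,252:6,252:7] \
        WB NORA Direct -strpsz64 -a0

# "Always Write Back": keep Write Back even without a healthy BBU
MegaCli -LDSetProp CachedBadBBU -LAll -a0

# "Disk Cache Enabled": turn on the drives' own on-board caches
MegaCli -LDSetProp EnDskCache -LAll -a0
```

Note that forcing Write Back without battery backup trades safety for speed; LSI recommends it here only because a benchmark bench can tolerate losing in-flight data on power failure.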


A problem we did encounter during testing of this card is a limitation of the drives in use. The C300 is an absolutely superb SSD that we believe suits the card well.

There were, however, some noticeably low write results with this array in random small-file access at both low and high queue depths. The sequential performance, on the other hand, is absolutely amazing. Part of the reason is that RAID 5 scales lower on random writes in almost all situations, but the rest points back to the inherent differences between MLC and SLC, so we thought we might take a moment to explain this further.


MLC (multi-level cell) is a NAND flash architecture used for the internal NAND components of an SSD. The C300 drives we used for this testing are MLC drives. MLC, in the current generation of SSDs, is slower than SLC by a large margin, especially in regard to small-file random writes. MLC is the 'consumer variant' of NAND, used by the average non-enterprise user, and enjoys a much lower price structure than its counterpart.

SLC (single-level cell) is a NAND flash architecture used for the internal NAND components of an SSD. SLC devices are much faster in some areas and much more resilient; a particular strength is small random write performance. SLC is the preferred type of NAND in the enterprise sector, as it is more durable and faster in many respects than MLC. SLC drives are, however, cost prohibitive for a normal user; their price has relegated them to the enterprise sector almost exclusively.

With RAID 5 usage, the difference between MLC and SLC SSDs is highlighted much more than in a normal situation. RAID 5 stripes data across the drives along with distributed parity information, which, in layman's terms, means the array holds enough redundant information to rebuild the data if a drive is lost or corrupted. In an eight-drive array, I could lose one drive, install a fresh drive, and rebuild the entire array, all without losing any data and while still using the machine!
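The rebuild works because RAID 5 parity is a simple XOR across the stripe: XOR the surviving drives' blocks together and the missing block falls out. A minimal Python sketch of the idea (our own illustration, not LSI firmware):

```python
from functools import reduce

def parity(blocks):
    """XOR a list of equal-length blocks into a single parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# A tiny 4-drive "stripe": three data blocks plus their parity block
data = [b"AAAA", b"BBBB", b"CCCC"]
stripe = data + [parity(data)]

# Drive 1 fails; XOR-ing the three survivors reconstructs its block
lost = 1
survivors = [blk for i, blk in enumerate(stripe) if i != lost]
assert parity(survivors) == data[lost]  # b"BBBB" recovered
```

The same XOR trick covers any single missing member, data or parity, which is why a RAID 5 array survives exactly one drive failure.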

The drawback is that small random writes become expensive: updating a block also means updating its parity, which typically costs two reads and two writes on the array (the classic RAID 5 read-modify-write penalty). The write performance gap of these MLC drives is therefore highlighted in a more drastic way with this type of RAID. During the course of testing I did contact LSI about the low random write performance, and they were very happy to help diagnose the problem. Here is a tidbit of information from one of their documents that is particularly helpful to illustrate the difference:
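As a back-of-the-envelope rule, that read-modify-write penalty caps small random write IOPS at roughly the number of drives times per-drive IOPS, divided by four. The per-drive figure below is hypothetical, purely to show the arithmetic:

```python
def raid5_random_write_iops(drives, drive_iops, penalty=4):
    """Approximate small random write IOPS for a RAID 5 array.

    Each logical write costs ~4 disk I/Os (read old data, read old
    parity, write new data, write new parity), spread across the
    array's member drives.
    """
    return drives * drive_iops // penalty

# Hypothetical example: eight drives at 10,000 random-write IOPS each
print(raid5_random_write_iops(8, 10_000))  # -> 20000 logical write IOPS
```

The formula is an upper bound; controller caching and full-stripe writes can do better, while slow parity reads can do worse.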

Note that CacheCade and FastPath performance depends on SSD performance. SSD performance depends on make, model, firmware, technology, specifications, and number of SSDs used. It is important to select the right SSD to achieve customers' performance requirements. The chart below displays IOMeter benchmark results on an MLC and an SLC SSD. The single drives were configured as RAID 0 arrays. 4KB random writes were performed on the SSDs over two hours.

As you can see, there is a huge difference in random write performance between the drives we are using in our testing and the types of drives used in enterprise applications.

Due to cost constraints, we are unfortunately not able to provide write testing with SLC drives. It will be nice to test with coming generations of MLC drives; as technology progresses, more and more performance is being gained with MLC, mainly due to improved interleaving, wear-leveling algorithms, garbage collection features, and improved ECC.

As a side note, Intel's next line of 'E' series (enterprise) SSDs will be MLC for the first time; previous generations were SLC exclusively. It will be very interesting to see if this next generation of MLC devices can perform at a level even close to today's SLC drives.

One of the major hurdles that LSI has conquered with this RAID card is RAID 5 write performance. RAID 5 is historically much slower on writes, and LSI has really redefined performance in this area with this new series of controllers. Their internal testing with SLC drives shows some amazing results!

The purpose of this explanation is to illustrate that the relatively low random write speeds may be, in part, a limitation of the attached devices, and that we are not testing the full capabilities of this RAID controller.



