LSI MegaRAID CacheCade Pro 2.0 Review – Test Bench and Protocol

This is the Test Bench. As you may have guessed, it is no ordinary Test Bench; I have dedicated many hours to creating this build, which has been used in many benchmarking excursions and extreme overclocking sessions. She has even set a few HWBot world records along the way!

Our main focal point in this report will be the performance differences between a base HDD volume with and without caching.

We want to test for the maximum IOPS attainable with the solution. For that testing, we will be using three of the five SLC Lightning LS 300S EFDs (Enterprise Flash Devices) graciously provided by SanDisk Enterprise Storage Solutions. Many in the consumer realm will not be familiar with SanDisk ESS. These are heavy-duty drives used in server applications, and SanDisk ESS is leading the way with some of the most powerful devices on the market. These aren’t your typical devices: weighing in at $6,800 USD EACH, they aren’t for the fainthearted, but packing 300 GB of SLC per drive is certainly going to carry a premium. Rated at a blistering 160,000 IOPS per device, these are definitely the heavyweights of Enterprise Flash Storage. These drives are made for taking a beating and are warrantied for unlimited writes for five years.
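As a back-of-the-envelope check on what "maximum IOPS attainable" could even mean here, the vendor rating above puts a simple theoretical ceiling on the three-drive cache pool (real-world numbers will depend on workload, queue depth, and controller overhead):

```python
# Theoretical aggregate of the three cache devices, using the
# vendor-rated figure quoted above. This is an upper bound only;
# the controller and workload determine what is actually achieved.
IOPS_PER_DRIVE = 160_000   # rated IOPS per Lightning LS 300S
NUM_DRIVES = 3

aggregate_iops = IOPS_PER_DRIVE * NUM_DRIVES
print(f"Theoretical ceiling: {aggregate_iops:,} IOPS")  # 480,000 IOPS
```

Whether the 9260 can actually push that many IOPS through a single controller is exactly what the testing that follows is meant to find out.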

It should be noted that many who will be using a CacheCade Pro solution will not be using drives as expensive as these. The model of a CacheCade configuration is to use several less expensive SSDs for acceleration. While these drives are the absolute best for this product assessment, one would not have to use such high-powered devices to enjoy the massive benefits of CacheCade Pro. If one were to have some Lightning drives lying about, though, you certainly would not find a better-suited drive for any enterprise purpose.

For the base array we are using 4 x Western Digital 500 GB hard disk drives. These are 7200 RPM drives, the fastest that could be sourced for the project. They aren’t 15,000 RPM SAS drives, mind you, but they suited our purposes. Much of LSI’s internal testing and comparison comes from large 12-member arrays of 15K SAS drives, which can generate significantly more IOPS than our lowly SATA drives can.


The only real drawback to this is the resulting longer ramp time: the time from when the caching tests begin until we see acceleration. The rate of ramp is directly tied to the performance of the base array, as the data has to be accessed multiple times to be considered “Hot”. For what they are, however, the drives performed admirably, and they really do highlight the performance benefits of the caching.
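The idea that data must be read several times before being promoted to flash can be sketched in a few lines. CacheCade’s actual promotion heuristics are proprietary, so treat the hit threshold and the class below as purely illustrative:

```python
from collections import Counter

HOT_THRESHOLD = 3  # hypothetical value; the real heuristic is proprietary

class HotDataCache:
    """Toy model of read-cache promotion: a block is served from the
    slow base array until it has been accessed HOT_THRESHOLD times,
    after which it is 'promoted' and served from flash."""
    def __init__(self):
        self.hits = Counter()
        self.cached = set()

    def read(self, lba):
        if lba in self.cached:
            return "flash"          # accelerated read
        self.hits[lba] += 1
        if self.hits[lba] >= HOT_THRESHOLD:
            self.cached.add(lba)    # block is now "Hot"
        return "hdd"                # this read still came from the base array

cache = HotDataCache()
print([cache.read(42) for _ in range(5)])
# ['hdd', 'hdd', 'hdd', 'flash', 'flash']
```

This also shows why ramp time tracks base-array speed: every one of those pre-promotion reads is serviced at HDD speed, so a slower base array takes longer to accumulate the hits that make data hot.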


CPU: Intel i7-920 D0 @ 4.46 GHz

MOTHERBOARD: EVGA E760 Classified Motherboard

RAM: 6 GB EK-watercooled Corsair Dominator 2000 MHz CL8 kit, run at 7-8-7-24 @ 1700 MHz


POWER: ST1500 fully modular 1500 W power supply (1600 W peak); +12 V: 1320 W / 110 A (120 A peak); combined +3.3 V/+5 V: 280 W

CHASSIS: Danger Den Torture Rack

CPU COOLER:  HeatKiller 3.0

WATER SYSTEM: Loop 1: two KMP-400 pumps with reservoirs in parallel, Bitspower reservoir, two MCR320-QP radiators, and one BIPS 240 radiator, cooling the Areca 1880IX-12, EK RAM block, and EK full-board motherboard block. Loop 2: CPU only. Loop 3: MCP-655 pump and Honda radiator on dual GPUs.

STORAGE: 3 x 300 GB SLC SanDisk Lightning EFDs (Enterprise Flash Drives), 4 x Western Digital Caviar Black 500 GB hard disk drives

CONTROLLER: LSI 9260 W/ CacheCade Pro 2.0 enabled

RAID CONTROLLER CONFIGURATION: Base HDD Volume configuration is RAID 0/64KB Stripe Size/Write Through/Direct IO/No Read Ahead/Disk Cache Disabled

ENCLOSURE: ARC-4036 6 Gb/s JBOD, provided by Areca for our test bench. Our review of this product can be found HERE.
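For readers unfamiliar with how the base volume above lays data out, here is a small sketch of RAID 0 addressing with the configured 64 KB stripe across four drives. This illustrates the general striping scheme, not LSI’s exact internal addressing:

```python
STRIPE_SIZE = 64 * 1024   # 64 KB stripe size, as configured on the controller
NUM_DISKS = 4             # 4-drive RAID 0 base volume
SECTOR = 512              # bytes per logical block

def locate(lba):
    """Map a logical block address to (disk index, byte offset on that
    disk) for a plain RAID 0 layout: stripes rotate round-robin across
    the member drives."""
    byte_offset = lba * SECTOR
    stripe_num = byte_offset // STRIPE_SIZE
    disk = stripe_num % NUM_DISKS
    stripe_on_disk = stripe_num // NUM_DISKS
    offset = stripe_on_disk * STRIPE_SIZE + byte_offset % STRIPE_SIZE
    return disk, offset

print(locate(0))    # first block lands on disk 0: (0, 0)
print(locate(128))  # first block of the second stripe: (1, 0)
```

Because consecutive 64 KB stripes land on different drives, large sequential transfers engage all four spindles at once, which is exactly why even these modest SATA drives make a respectable base array.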


We will be using Iometer exclusively for testing. Some of the metrics are hard to benchmark outside of real applications, so there will be some charts with an explanation of real-world results. Our results are comparative and should be used as a baseline comparison of acceleration only. As always, any large configuration changes should be evaluated and administered by a storage professional so that the solution is optimized for the workloads involved.

We have amassed a number of big name sponsors for a number of high-powered reviews that will be released fairly quickly, and we would like to thank Areca, LSI, and SanDisk for making this installment possible.


The Lightning LS 300S drives aren’t your standard equipment. In order to receive maximum performance with one controller, the devices have to be configured in WidePort mode, which essentially uses two connections to one drive; it is, in essence, like having two devices in one. The SAS expander we are using does support two connections to each device through failover mode, but the typical expander does not support the WidePort function in the way the SanDisk drives use it, so we sourced some special cables to get the job done. Using these cables we were able to connect to the rear of the expander and use it as a pass-through device to carry the data through the expander and into the RAID controller. This is where the ability to daisy-chain the ARC-4036 really came in handy!

Here is a ‘mock-up’ of how we ran the drives. The top two SAS cables went to the RAID controller, and the bottom two cables were used to connect the SanDisk drives. A typical set of cables allows four devices per cable, but the special DualPort cables we used actually use two ports per device, for a total of two drives per cable. In this configuration we can leverage both the awesome power of the SanDisk Lightning LS 300S drives and the four WD Caviar Black drives at will.

Now on to the results of our testing!


