LSI MegaRAID CacheCade Pro 2.0 Review – Real World Results and Conclusion

All of our extensive Iometer testing is, at the end of the day, merely synthetic. A product such as this is no easy task to benchmark, especially with synthetic measurements alone. The main thrust of our testing has been to reproduce situations similar to real world applications and to show how the card would behave under them.

I feel that, if anything, our tests have under-emphasized the performance of this product in server applications. That may be hard to believe considering the absolutely massive gains we have seen here, but one cannot overestimate the power of write caching. Simply put, it is very hard to synthesize results that show just how badly small random writes can paralyze HDD arrays under certain scenarios. Add RAID 5 or RAID 6 into the mix and the slowdowns become drastic; any RAID set with parity on an HDD array can be painful to deal with.
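To illustrate why parity makes this worse (a rule-of-thumb sketch, not data from our testing): each small random host write generates extra back-end I/Os, commonly counted as 2 on RAID 10, 4 on RAID 5 (read data, read parity, write data, write parity), and 6 on RAID 6. The per-disk IOPS figure below is purely illustrative.

```python
# Rule-of-thumb write penalties: back-end I/Os generated per small random host write.
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_write_iops(disks: int, per_disk_iops: float, level: str) -> float:
    """Approximate sustainable small-random-write IOPS for an array."""
    return disks * per_disk_iops / WRITE_PENALTY[level]

# Eight HDDs at ~180 random IOPS each (illustrative figure):
for level in ("RAID10", "RAID5", "RAID6"):
    print(level, round(effective_write_iops(8, 180, level)))
```

The same spindles that sustain 720 random writes per second mirrored fall to 360 under RAID 5 and 240 under RAID 6, which is exactly the gap a write-back flash cache can absorb.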

Here I will defer to an LSI document that shows results from standard real world application benchmarks. The graphic compares acceleration results between the previous version of CacheCade, which offered read-only caching, and the current Pro version.

There isn’t a lot that can be said about these results that isn’t already said in the results themselves.

A FINAL CONSIDERATION

One thing that RAID sets with parity bring is data security. Another is degradation and rebuild periods. Unfortunately, these scenarios can hobble any storage subsystem regardless of how busy it is; a degraded or rebuilding array simply does not care how inconvenient it is for users or administrators. Such is life in a data center, but wouldn't it be nice to have a parachute of sorts? Among the many features of CacheCade Pro and the multitude of great things it can accomplish, this is yet another value-added proposition to consider. Up-time and solid, sustainable performance under load are among the most important metrics, and nothing is more frustrating than a degraded or rebuilding array.

Here we can see the tremendous stability that a fast, nimble flash cache layer can provide. When the storage solution is under duress, the flash cache can rise to the occasion and smooth out the transition period. A click on the chart will provide an up-close view of the graph.

CONCLUSION

We closed our last CacheCade review by noting that the product had only one drawback: read cache data is not retained across reboots. While this is still an issue, there is hope in sight! The write data in this version, being critical to data integrity, has to be maintained, and LSI has devised a way of preserving the write tables. We are happy to report that LSI is also working on retaining the read tables, and read data retention is slated for the next incremental upgrade to CacheCade Pro. Along the same timeline, this capability will be extended to the flagship 9265 controller, along with RAID 5 capability for the caching volumes. Did we forget to mention that caching volumes are being extended up to 2TB?

We are also happy to report that the upgrade to CacheCade Pro 2.0 is going to be free for the current users of CacheCade V1.0!

This is great news for users who have already implemented CacheCade in their systems. With the exciting upgrade that Pro represents and the future upgrades in the pipeline, this product is simply the cream of the crop among the caching alternatives available today.

Simplicity and ease of use are really the hallmarks of this application. Once configured, the software handles all of the parameters with absolutely minimal human intervention. There is now a CLI method for sampling caching statistics, and this is soon to be included in the GUI as well. The management software is easy and unobtrusive, and truly is plug-and-play: pop in some SSDs, upgrade the card, and off you go. It really is just that simple.

I remember the performance we were dealing with just a few short years ago on HDD arrays, in particular RAID 5 and RAID 6. Sluggish and painful, R5 and R6 have driven many to frustration, especially when running in sub-optimal states. Usher in the SSD revolution, and things are notably different. The capability of these devices is opening new and exciting opportunities for data centers and administrators everywhere.

The findings in this review, and soon in a data center near you, all point to the same thing. CacheCade Pro is an effective fusion of HDD and SSD technology, and it allows great price points for those in need of more performance. At just $270, the price of the upgrade itself is very small considering that, in most cases, the infrastructure is already in place. There is no need for massive restructuring, yet the performance gains can be very large. One server can essentially begin to do the work of several, just from a drop-in upgrade.

Once the data is considered and the benefits weighed, it is quite simply the best upgrade one could make to an existing server in need of acceleration. Best of all, at the fast pace LSI is moving, there will be many more exciting possibilities for CacheCade Pro in the future!


Feel Free to JOIN THE DISCUSSION in our forums!

 

8 comments

  1.

    Can you clarify – do the SSDs need to be plugged into the LSI controller or can they simply be present in the system?

    And if they can simply be present in the system, will the SSDs cover multiple LSI controllers in a system? The review isn't clear on this, and neither is any of the documentation I have been able to find so far. Since I only have a four-port LSI controller and it's full with four hard drives, if I have to plug the SSD into the card this is less interesting – but with the max of 32 SSDs cited, I'm thinking they don't have to be plugged into the LSI controller. Either way it would be nice for someone to confirm one way or the other.

    Otherwise, excellent article!

    •

      EricE-
      To my knowledge, the SSDs need to be connected to the RAID controller itself. However, any VD connected to the controller can utilize the CacheCade caching volume; when you configure the VDs, you can assign them to the caching volume. I will find out whether the SSDs must be present on the card itself, but I am pretty sure that is the case.
      Also, even though you have only a four port, you can utilize an expander or JBOD that will allow you to utilize many more devices!
      Thanks for reading,
      Paul Alcorn

    •

      JackG-
      Right now the only supported controllers are the 9260 and 9280 series. I know that they are supporting the 9265 with write caching as well in the next release. I am compiling a number of questions to submit to LSI, and will find out whether write caching will be extended to all controllers that support read caching.
      Thanks for reading!
      Paul Alcorn

    •

      I have tested it; the SSD needs to be connected to the LSI controller to be seen as a candidate for CacheCade.

  2.

    Awesome article! I have a Dell T7500 Precision Workstation with a Dell H700 RAID controller. This is an LSI controller with 1GB NVRAM cache and it supposedly has CacheCade technology. I am running an OCZ Vertex3 240GB for the system and I have a RAID 10 setup with 4 Seagate Barracuda 2TB drives. I just added an additional OCZ Vertex3 60GB drive to try to set up CacheCade and play around with the technology. I am having a hard time figuring out how to implement the SVCD. My question is: will CacheCade Pro work with the Dell/LSI RAID cards? Thanks for the great article and keep up the good work!

  3.

    Interesting article. According to https://kb.lsi.com/KnowledgebaseArticle16562.aspx MegaRAID controllers do not support the TRIM command. Could you explain what the effects could be for CacheCade Pro 2.0? And how about SSDs used not as cache, but as traditional (RAID) disks? Slower writes? More SSD wear (lower lifetime)?
    TIA, Bart

  4.

    I am in the process of purchasing a server with the IBM M5015 RAID Card which is essentially a rebadged 9260-8i (but with an IBM BIOS I believe).

    It has the IBM-ised “Performance Accelerator Key” (https://www.redbooks.ibm.com/abstracts/tips0799.html) which I assume is also a rebadged LSI hardware key.

    Do you have any idea if LSI will honor the upgrade to CacheCade 2.0 when it is released? (I assume IBM will take its sweet time releasing the update).

    Or at worst, if I buy a similar hardware key direct from LSI with CacheCade 2.0, will it be compatible with the M5015?

  5.

    Paul – hope you are still reading comments on this article! You mention a future upgrade bringing 2TB caching volumes – do you know if the 32 drives for CacheCade is also being changed? For the current 512GB/32 drive limit, my feeling is that one of the best price/performance options is the lowly Intel 311. Although only 20GB, it is SLC so well suited to CacheCade, and 32 of them in RAID0 should give over 1 million IOPS (random 4k read). However, with 2TB and 32 drives, something like 64GB Crucial M4s will perform similarly, cost slightly less, provide more cache, albeit using MLC…
