Micron P320h HHHL 700GB PCIe Enterprise SSD Review – Unbelievable IOPS and Absurd Endurance

TEST BENCH AND PROTOCOL

In testing the Micron P320h HHHL PCIe SSD, we'll be using our Enterprise Test System.

We'll be using Red Hat Enterprise Linux and/or CentOS for almost all of our enterprise testing. Linux has less overhead and is generally more flexible when it comes to evaluating performance.

That said, our enterprise test bench is OS agnostic. We'll apply a few new standardized testing techniques in addition to some of our older test protocols. We want to isolate and explore the individual performance of the review drives as accurately as we can.

As the test bench evolves, we hope the result is a more tangible, relevant performance evaluation.

A special thanks to Asus, Crucial, OCZ, and Fractal Design for sponsoring our Enterprise Test Bench.

CAPACITY AND R.A.I.N.

As we stated earlier, the P320h’s 1024GB of flash gets used in different ways. 128GB, or 12.5%, goes to R.A.I.N. — Micron’s NAND-level redundancy scheme. Should one NAND device fail prematurely, the R.A.I.N. system should keep the drive operating through Micron’s parity scheme (at the expense of 1/8th of the drive’s flash). After that, 197GB is used for over-provisioning and 50GB goes to spare area (~7%).

All told, that leaves just about 650GB available capacity.
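The arithmetic above can be checked in a few lines. This is just a sketch of the review's own figures; the variable names are ours:

```python
# Capacity breakdown of the P320h as described in the review.
raw_flash_gb = 1024
rain_parity_gb = raw_flash_gb / 8      # 1/8th reserved for R.A.I.N. parity -> 128GB (12.5%)
over_provision_gb = 197                # over-provisioning
spare_gb = 50                          # spare area (~7% of the 700GB user capacity)

usable_gb = raw_flash_gb - rain_parity_gb - over_provision_gb - spare_gb
print(usable_gb)  # -> 649.0, i.e. "just about 650GB"
```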

Micron’s R.A.I.N. (redundant array of independent NAND) system is similar to other NAND-redundancy schemes. For every 7 elements of NAND, one additional unit goes to parity. In the event of a failure, the drive can seamlessly recover the data of the failed NAND from the parity data. There are other schemes that could be used, but the 7+1 method offers the best combination of speed and capacity.

R.A.I.N. takes blocks from different bits of flash, and groups them into “super blocks” composed of eight individual blocks. One of these blocks is used to store parity data for the other 7 blocks, which is enough to recover one of the remaining seven individual blocks. At a global level, it’s enough to recover should a whole die fail in service. Whole die failures aren’t the most common cause of drive failure (that dubious distinction probably belongs to firmware issues), but they happen frequently enough that using 1/8th of the flash on a drive for protection makes sense.
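The 7+1 recovery described above works like RAID-5-style XOR parity. Here is a toy model of the idea — our own illustration, not Micron's firmware — where each "block" is a byte string and the parity block is the byte-wise XOR of the seven data blocks:

```python
# Toy 7+1 XOR parity: one parity block can rebuild any single lost data block.

def make_parity(blocks):
    """Byte-wise XOR of equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the one missing data block: XOR the six survivors with parity."""
    return make_parity(list(surviving_blocks) + [parity])

data = [bytes([i] * 4) for i in range(1, 8)]   # seven 4-byte "blocks"
parity = make_parity(data)

# Simulate losing the block at index 3 and rebuilding it from the rest.
lost = data[3]
rebuilt = recover(data[:3] + data[4:], parity)
print(rebuilt == lost)  # -> True
```

Because XOR is its own inverse, XOR-ing the six surviving blocks with the parity block yields exactly the missing block — which is why one parity element per super block is enough to survive one die failure, but not two.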

12 comments

  1. Micron doesn't own the controller. It is made by IDT.

    • We are aware of that, thanks. We worded it that way because this is by no means a simple stock implementation of a controller; similar results could not have been accomplished without Micron’s engineering expertise and software. Great point, and perhaps we could reword things just a bit…

    • Micron has a Minneapolis-based controller team which did much of the work on the controller. Basically, IDT has a stock PCIe controller, but it’s easily modified for custom jobs. Micron refined the design for the P320h. IDT now has a reference NVMe design, but the NVMe standard is far from universal yet. One day, a PCIe SSD won’t need a special driver, but today they do.

  2. Micron developed and owns the chip, IDT just fabs it.

    • Incorrect. This is the very same controller that is used with the new NVMe controllers that IDT has developed.

      • Just to help you out, this is what was posted at AnandTech after they inadvertently stated it was NVMe:

        Update: Micron tells us that the P320h doesn’t support NVMe; we are digging to understand how Micron’s controller differs from the NVMe IDT controller with a similar part number.

        Our interpretation of the chip appears to be correct as written, and this same ‘structure’ has been used in the SSD industry before. This is not a simple plug-and-play adaptation of a chip, but rather a custom package.

        Thanks again.

      • Yes, it isn't NVMe, but it is an IDT chip, so it was not developed in-house by Micron.

  3. Just needs a few heat sinks and a fan or maybe a water block to keep it cooler.

  4. Todd – What makes you think you know so much about this chip?

  5. Is the RAIN implementation safe enough to use without RAID 1 running outside of it (say, across two 350GB cards)? It sounds good, but if you have a firmware- or controller-related failure you're still at risk, right?

  6. Is this bootable? And just for kicks, what would the as-ssd results be?
