As quickly as SSDs have found mainstream consumer use, they are unfortunately lumped into the same picture as hard drives, if only because they are seen as storage and little more. Both the size and demographics of our readership paint this picture clearly; however, there exists an amazing opportunity to take our readers a step further, an opportunity to show why we might have such passion for what many would consider a very dry area. In this segment of ‘Learning To Run With Flash’, let’s gain an understanding of the ‘Big 3’ of SSD performance: throughput, latency and IOPS.
UNDERSTANDING THE BIG 3
The data storage system (SSD or HDD) is the slowest part of any computer, compared to major components such as the CPU, DRAM memory, or video card. DRAM can transfer data at over 20 gigabytes per second (GB/s), and CPUs and video card processors execute their internal instructions billions of times a second. Meanwhile, most storage drives can, at best, process a few hundred megabytes of data per second (MB/s). Most if not all of the hardware and software running in a PC waits longer for data from storage devices than from any other source. When we see the measure of MB/s, or even GB/s, we are looking at data transfer speed, or throughput.
Data transfer speed is not the only important performance aspect of a storage device; in fact, it is secondary. How long it takes for a data transfer to BEGIN, called latency, is even more important. Moving data slowly from one system to another reduces performance, but the time spent waiting for data to begin moving, during which no useful work is being done, is a huge factor in the performance of data storage devices.
There is another aspect of data storage device performance that is at least as important as latency: how OFTEN the storage device can perform a data transfer. How many Input/Output (IO) operations the storage device can perform every second, or IOPS, is a very important measure of its performance, and one that is overlooked too often. Throughput seems to be the ‘measuring stick’ for the consumer, while latency and IOPS become integral measuring tools as we move into the enterprise and data center space.
Our three main performance areas, how often IOs can occur, how long it takes for an IO task to begin, and the speed of the data transfer into or out of the storage device, are defined in the computer industry as follows:
- How often a storage device can perform IO tasks is measured in Input/Output Operations per Second (IOPS), and varies depending on the type of IO being done. The greater the number of IOPS, the better the performance.
- How long it takes for a storage device to start an IO task, or latency, is measured in fractions of a second. The smaller the latency, the better.
- The speed at which data is transferred out of or into the storage device is measured in bytes per second, normally kilobytes and megabytes per second. We all want more megabytes per second.
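The three measures are tied together by simple arithmetic: sustained throughput is just IOPS multiplied by the transfer size. Here is a minimal Python sketch of that relationship; the IOPS figures and transfer sizes are illustrative assumptions, not measurements from any particular drive:

```python
def throughput_mb_s(iops: float, transfer_size_kb: float) -> float:
    """Sustained throughput in MB/s for a given IOPS rate and IO size."""
    return iops * transfer_size_kb / 1024

# Many small operations can still add up to modest throughput,
# while a handful of large sequential operations moves far more data:
random_4k = throughput_mb_s(9_400, 4)           # ~36.7 MB/s
sequential_16mb = throughput_mb_s(30, 16 * 1024)  # 480.0 MB/s

print(f"4K random:       {random_4k:.1f} MB/s")
print(f"16MB sequential: {sequential_16mb:.1f} MB/s")
```

The same drive can therefore post a tiny 4K random MB/s figure and a huge sequential MB/s figure at once, which is why no single throughput number tells the whole story.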
The Big 3 (throughput, latency and IOPS) are what truly indicate the performance capability of a storage device. Let’s expand our understanding of HDD and SSD performance beyond MB/s, or throughput, as that is only one part of the performance story. Using AS SSD, a free and very simple benchmark, we can compare an HDD and an SSD while taking a closer look at ‘The Big 3’.
HDD/SSD PERFORMANCE COMPARISON
AS SSD is the bread and butter of synthetic SSD benchmarks, reading and writing several gigabytes of data to and from the drive. As a bit of a heads up, the AS SSD benchmark took a little over an hour to complete on the HDD, while the SSD was done in under five minutes.
Results for the hard drive are shown on the left, while that of the SSD are on the right.
IOPS tells us how quickly each drive can process IO requests. The first row shows the read and write IOPS for a 16MB file, a large-file sequential IO. The difference between the HDD and SSD here is not huge: the SSD performs about 3.4 times as many read IOPS as the HDD. The large-file sequential write IOPS and speeds are similar to the reads, with the SSD about 3.5 times faster than the HDD.
The next row shows the small 4K file “random” read and write IOPS. Random means the files are scattered all over the drive rather than sitting in neat rows or groups, so they take more work to find. Random IO is the most difficult and time-consuming type a storage device must deal with. Here the HDD manages 176 IOPS, while the SSD delivers 9,417 IOPS, or over 53 times as many read requests. Since small 4K reads are the most common IO in typical PC usage, this difference reveals how much quicker an SSD can be for a user. The 4K write IOPS show a stunning difference in performance: 311 IOPS for the HDD versus 32,933 IOPS for the SSD, over 105 times faster. Can performance like this not be noticed by a user?
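To see what a 53x IOPS gap means in wall-clock terms, we can divide a workload of small reads by each drive’s random 4K read IOPS quoted above (176 for the HDD, 9,417 for the SSD). The 10,000-operation workload below is a hypothetical figure for illustration, not something taken from the benchmark:

```python
def seconds_for_ios(io_count: int, iops: float) -> float:
    """Time to complete io_count operations at a sustained IOPS rate."""
    return io_count / iops

ios = 10_000  # hypothetical burst of small random reads
hdd_time = seconds_for_ios(ios, 176)    # ~56.8 s on the HDD
ssd_time = seconds_for_ios(ios, 9_417)  # ~1.1 s on the SSD
print(f"HDD: {hdd_time:.1f} s  SSD: {ssd_time:.1f} s  ({hdd_time / ssd_time:.1f}x)")
```

Nearly a minute of waiting versus about a second: the kind of difference a user feels every time an application loads.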
AS SSD’s “4K-64Thrd” test in the next row measures a drive’s ability to use the Native Command Queuing (NCQ) feature of AHCI. NCQ simply gives a drive direct access to up to 32 IO requests in the system’s memory with only one IO command sent to the drive, instead of 32 separate commands, one for each request. That eliminates the overhead involved in processing 32 individual commands. The “4K” is the file size, the IO is random as described above, and “64Thrd” (64 threads) means that two batches of 32 NCQ-queued IO requests are in flight during the test.
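Why a deeper queue helps can be sketched with Little’s Law: if the drive always has queue_depth requests in flight and can service them concurrently, sustained IOPS is roughly queue_depth divided by per-request latency. This is an idealized model that ignores controller and bus limits, and the 100-microsecond latency below is an assumed figure, not a measurement from the drives above:

```python
def iops_from_queue(queue_depth: int, latency_s: float) -> float:
    """Idealized steady-state IOPS with queue_depth requests always in flight."""
    return queue_depth / latency_s

LATENCY = 100e-6  # assumed 100 microseconds per 4K read
for qd in (1, 32, 64):
    print(f"QD{qd:>2}: {iops_from_queue(qd, LATENCY):>9,.0f} IOPS")
```

This is why an SSD’s 4K-64Thrd numbers dwarf its single-threaded 4K numbers, while an HDD, whose lone head can only seek to one place at a time, gains far less from a deep queue.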