Confused by measured "queue depth" on a RAM drive.

Discussion in 'SSD Benchmark Locker' started by naimc, Apr 10, 2013.

  1. naimc

    naimc Guest

    I'm a bit confused about something: when SSD reviews measure performance at, say, a QD of 32 in Iometer, where are they setting QD in the interface? Are they doing it by creating 32 Workers, or by using the # of Outstanding I/Os setting?

    I think of QD in the sense of its classic definition: "Queue depth, in storage, is the number of pending input/output (I/O) requests for a volume." So it has to be measured.

    I did a simple test in Iometer using a 1 GB test file with 8 KB 100% sequential reads and 50 outstanding I/Os:
    - on a 7K SATA drive, the Perfmon Queue Length/sec measured 49. OK, that's a one-to-one correlation.
    - on a Preval Elite SSD, the Perfmon Queue Length/sec measured 48.3.
    - on a RAM drive, the Perfmon Queue Length/sec measured 0.3.

    It does not matter what value I increase Outstanding I/Os to (100, 500, 5000, 50000, 500000), the Queue Length/sec always stays at around 0.3 with a RAM drive. I tried two different products, same result.

    I would have expected to eventually see latency increase as QD does.

    Can anyone explain what's going on here? (Is it that RAM latency is so low that the Perfmon Queue Length/sec counter isn't showing me latency differences that would only look big at the nanosecond scale? See the rough calculation below.)

    (CPU on the test system is a Xeon X5560 with DDR3 PC3-12800 RAM.)
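    A rough sanity check of those numbers, assuming the queue-length counter is really just average concurrency, i.e. Little's Law (average requests in flight = throughput x latency). The IOPS and latency figures below are invented round numbers for illustration, not measurements:

    # Little's Law: average number of requests in flight (what a disk
    # queue-length counter reports) = arrival rate * time in system.
    def avg_queue_length(iops: float, latency_s: float) -> float:
        return iops * latency_s

    # Hypothetical: a 7200 RPM disk saturated by 50 outstanding reads,
    # ~100 IOPS with ~0.49 s spent queued + serviced per I/O.
    print(avg_queue_length(100, 0.49))          # ~49, like the SATA drive

    # Hypothetical RAM drive: even at 150,000 IOPS, if each I/O completes
    # in ~2 microseconds, almost nothing is ever in flight at once.
    print(avg_queue_length(150_000, 0.000002))  # 0.3, like the RAM drive

    If that model holds, the 0.3 never grows with the Outstanding I/O setting because the I/Os never pile up inside the driver; they complete almost as fast as they are issued.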

    Naim.
     
  2. OS-Wiz

    OS-Wiz Guest

    Just a guess, but I think those are two different measures. The Perfmon Queue Length/sec value of 0.3 might be the average amount of time the I/O remains in the queue until it is serviced?
     
  3. naimc

    naimc Guest

    In other words, both the HD and the SSD show Queue Length/sec values that correspond to the # of Outstanding I/Os because it takes longer than 1 second for the I/O to be serviced, whereas the RAM disk I/O is always serviced in under 1 second?
     
  4. OS-Wiz

    OS-Wiz Guest

    Not really. You need to get some deeper knowledge of what the various reported numbers represent. Here are a couple of excerpts you'll find helpful, along with their URLs.

    "Avg. Disk sec/Transfer (Avg. Disk sec/Read, Avg. Disk sec/Write)
    Displays the average time the disk transfers took to complete, in seconds. Although the scale is seconds, the counter has millisecond precision, meaning a value of 0.004 indicates the average time for disk transfers to complete was 4 milliseconds.
    This is the counter in Perfmon used to measure IO latency.
    I wrote a blog specifically about measuring latency with Perfmon. For details go to "Measuring Disk Latency with Windows Performance Monitor".
    There are some things Perfmon will not be able to tell us. For advanced analysis, Windows provides us with xPerf, enabling state-of-the-art performance data capture through Event Tracing for Windows (ETW). There is an excellent blog on the subject by Robert Smith (Sr. PFE/SDE): "Analyzing Storage Performance using the Windows Performance Analysis ToolKit (WPT)". "


    See: http://blogs.technet.com/b/askcore/...formance-monitor-disk-counters-explained.aspx
    and: http://www.oreillynet.com/pub/a/network/2002/01/18/diskperf.html
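    Tying that back to the original question: if, as the first link above explains, Perfmon derives Avg. Disk Queue Length as (Disk Transfers/sec) x (Avg. Disk sec/Transfer), then the counter is a computed average, not a count of what the benchmark keeps outstanding. A minimal sketch of that derivation, with invented sample values:

    # Assumed Perfmon derivation (per the askcore link above):
    #   Avg. Disk Queue Length = Disk Transfers/sec * Avg. Disk sec/Transfer
    def perfmon_avg_disk_queue_length(transfers_per_sec: float,
                                      sec_per_transfer: float) -> float:
        return transfers_per_sec * sec_per_transfer

    # Invented sample: 5,000 IOPS at 4 ms per transfer reports an average
    # queue length of 20, regardless of how many I/Os the benchmark holds
    # open at the application layer.
    print(perfmon_avg_disk_queue_length(5_000, 0.004))  # 20.0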
     
