Many predicted that 2015 would be the year of the NVMe SSD, but few expected NVMe to mature this quickly. Such is the case with the newest Samsung PM1725 (HHHL) enterprise SSD, which boasts unheard-of data transfer speeds of 5.5GB/s read and 1.8GB/s write. Built on PCIe 3.0 and using eight lanes, this card has a theoretical maximum bandwidth of roughly 8GB/s.
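As a rough sanity check on that figure, here is a back-of-the-envelope sketch using the standard PCIe 3.0 numbers (8 GT/s per lane with 128b/130b encoding):

```python
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
GT_PER_S = 8e9            # transfers per second per lane
ENCODING = 128 / 130      # fraction of transferred bits that carry payload
LANES = 8

bytes_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
print(f"{bytes_per_s / 1e9:.2f} GB/s")         # ~7.88 GB/s for x8
```

That works out to about 7.88 GB/s before protocol overhead, which is where the "theoretical 8GB/s" figure comes from.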
With availability expected in the near future, the PM1725 will come in capacities of 1.6, 3.2 and 6.4TB, will reach 1 million read IOPS and 120,000 write IOPS, and will include power-loss protection.
Both the PM1725 HHHL (half-height half-length) and 2.5″ form factors reach their maximum capacities of 6.4TB and 3.2TB respectively thanks to Samsung’s newest 48-layer TLC V-NAND memory, although the 2.5″ version’s speed is a bit lower, as shown in this display sample:
With data throughput of 3.1GB/s read and 1.8GB/s write, along with 750K IOPS, it remains a great example of just what NVMe is capable of. Release dates and pricing for the PM1725 NVMe SSD are not yet available.
What’s holding companies back from going with a full 16 lanes for the cards?
Controller throughput maybe?
And it’s a different slot form factor.
For example, is there no application that would use 5.5GB/s and 1,000,000 IOPS constantly?
Imagine you have these drives in your servers. What are you going to do with 5.5GB of data each second? Are you going to copy it to /dev/null? How are you going to process it? What level of parallelism do you need to achieve 1 million IOPS? How long are you able to sustain it? For three seconds? Does it matter?
And now you are asking for twice as much performance. Let’s say you can use all 16 lanes instead of the 8 we have here. That gives you 11GB/s and 2,000,000 IOPS, theoretically. What are your use cases? What price are you willing to pay to read through all your storage (6.4TB) in under ten minutes?
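The read-through time is easy to check (a quick sketch, assuming decimal TB/GB as vendors use):

```python
capacity_tb = 6.4
read_gb_s = 11.0  # hypothetical 16-lane figure from the comment above
seconds = capacity_tb * 1000 / read_gb_s
print(f"{seconds / 60:.1f} minutes")  # ~9.7 minutes
```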
The very last thing to dream about: how many servers do you know that expose all 16 lanes on a single PCIe 3.0 connector?
You are thinking much too small… this is not a consumer, OEM, or even small-server business toy…
I have a business case that requires we read 60GB of data in as little time as possible. That addresses the bandwidth. I have a business case that requires we process a food recall report. The faster this happens, the faster we can report on which food products might harm people. That addresses the IOPS.
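At the quoted sequential read speed, that working set is read in seconds. A back-of-the-envelope figure, ignoring filesystem and application overhead:

```python
data_gb = 60
read_gb_s = 5.5  # PM1725 quoted sequential read speed
seconds = data_gb / read_gb_s
print(f"{seconds:.1f} s")  # ~10.9 s to read the whole 60GB
```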
That’s all right, but as I say: are you able to utilize those IOPS and that bandwidth? When I was young and inexperienced, I thought an SSD with 1,000,000 IOPS must be 10x faster than an SSD with 100,000 IOPS.
Now reality has caught up with me and I don’t think so anymore. I have seen things.
What level of parallelism (queue depth) will this Sammy require to reach 1 million IOPS? What is your application capable of?
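Little’s law gives a rough answer: the queue depth you must sustain is IOPS times per-I/O latency. The 100 µs latency below is an assumption for illustration, not a spec from the article:

```python
target_iops = 1_000_000
latency_s = 100e-6  # assumed per-I/O completion latency (100 microseconds)

# Little's law: outstanding requests L = arrival rate (IOPS) * time in system
queue_depth = target_iops * latency_s
print(f"required queue depth ~ {queue_depth:.0f}")  # ~100 outstanding I/Os
```

Keeping ~100 I/Os in flight at all times is far beyond what a single-threaded, synchronous application generates, which is exactly the commenter’s point.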
60GB? Peanuts. What were you doing up to now, and what storage do you have? What performance do you achieve today without the super-Sammy NVMe, and what bandwidth and IOPS can you achieve today? This is what I’d be most interested in knowing, to be honest. Are you RAIDing 16 standard SSDs to achieve a real 0.5 million IOPS? Are you already using NVMe, like four of them in a server?
What if the Sammy breaks? What is your redundancy and business continuity model, and how fast can you read your 60GB of data then? What is your DR strategy, and what would happen if you couldn’t read those 60GB at a 5.5GB/s tempo?
How could you survive until today without super-Sammy ?
Les, believe me I’m not thinking too small.
To anyone not aware of why all 16 lanes or more could be used easily, think visual effects. Right now I’m compositing approx 100 layers of 4K at 32-bit float and would love to play back in real time. This is a huge amount of data (approx 8GB per frame, or 2TB per second). The best decked-out VFX system on the planet (HP Z840 with Xeons, 256GB RAM, dual Q6000s, and Dot Hill 16G Fibre with 100TB) cannot even come close in 2015.
So that’s why we all need it: we detest waiting for renders, or at least I do.
this is not going to save you then 🙂
you need 2TB per second
this has 5.5GB per second.
that’s ~364 times less. Uff. Not some percent here or there, a full ~364 times.
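The arithmetic behind that gap:

```python
needed_gb_s = 2000.0  # 2TB/s target from the VFX comment
drive_gb_s = 5.5      # PM1725 quoted sequential read speed
ratio = needed_gb_s / drive_gb_s
print(f"shortfall: ~{ratio:.0f}x")  # ~364x
```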
120TB per minute is too much… even NASA or Google can’t provide that bandwidth. Drop down to 1980x and you should be OK.
You went too hardcore 🙂 I recently ordered, for one of our business partners, an IBM/Lenovo x3950 X6 with 8 x 18-core Xeons and 4TB of DDR4 RAM, plus 4 x NVMe accelerators and 2 x Tesla K80s, for their customer solution. The customer does some huge renders; five architects work on this server, hammering it, but nothing came close to even 100GB per second, because no single piece of hardware can push that speed. Only virtualized IBM high-end flash with some insane clustering. And you push 2TB per sec? Do you work on NASA servers? 😀
Brilliant thinking and brilliant discussion!!!
Imagine a database of around 100TB at a medium-sized bank that does billions of operations per second, where you need some cache to swap in the currently active part of the database, and time matters. For a bank/ISP/GSM operator, the difference between 15 seconds and 12 seconds matters a lot; it’s a question of millions. So $5K for an SSD drive that saves you millions is nothing 🙂