Samsung PM1725 NVMe TLC SSD Gets 5.5GB/s Speeds and 6.4 TB Size – 2015 Samsung SSD Global Summit Update

Though many expected 2015 to be the year NVMe SSDs arrived, it may still surprise some to see NVMe mature so quickly.  Such is the case with the newest Samsung PM1725 (HHHL) enterprise SSD, which delivers unheard-of data transfer speeds of 5.5GB/s read and 1.8GB/s write.  Built on PCIe 3.0 and using eight lanes, this card has a theoretical maximum bandwidth of 8GB/s.
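The "theoretical 8GB/s" figure follows directly from the PCIe 3.0 link math. Here is a minimal sketch, assuming the standard 8 GT/s per-lane signaling rate and 128b/130b line encoding, and ignoring protocol overhead (real-world throughput is lower):

```python
# PCIe 3.0 theoretical bandwidth for an x8 link.
# Assumed figures: 8 GT/s raw rate per lane, 128b/130b line encoding.
RAW_RATE = 8e9          # transfers (bits) per second, per lane
ENCODING = 128 / 130    # 128b/130b encoding efficiency
LANES = 8

bytes_per_lane = RAW_RATE * ENCODING / 8    # bits -> bytes per second
total_gb_s = bytes_per_lane * LANES / 1e9
print(round(total_gb_s, 2))                 # 7.88
```

At roughly 7.9GB/s of usable link bandwidth, the PM1725's 5.5GB/s read speed still leaves some headroom on the x8 connection.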

Samsung Display

With availability expected in the near future, the PM1725 will ship in capacities of 1.6, 3.2 and 6.4 TB, will reach 1 million read IOPS and 120,000 write IOPS, and will include power-loss protection.

Samsung PM1725 NVMe SSD

Both the PM1725 HHHL (half-height half-length) and 2.5″ form factors are able to reach their maximum capacities of 6.4TB and 3.2TB respectively as a result of Samsung’s newest 48-layer TLC V-NAND memory, although the 2.5″ speed is a bit less as shown in this display sample:

Samsung PM1725 2.5 NVMe SSD 2

With data throughput of 3.1GB/s read and 1.8GB/s write, along with 750K IOPS, this remains a great example of just what NVMe is capable of. Release dates and pricing are not yet available for the PM1725 NVMe SSD.

dravo1:

What’s holding companies back from going with a full 16 lanes for the cards?

Alexxxxx:

Controller throughput maybe?

And it's a different slot form factor.

Lubomir Zvolensky:

For example, no application will use 5.5GB/s and 1,000,000 IOPS constantly? Imagine you have these drives in your servers. What are you going to do with 5.5GB of data, each second? Are you going to copy them to nul? How are you going to process them? What level of parallelism do you need to achieve 1 million IOPS? How long are you able to sustain it? For three seconds? Does it matter? And now you are asking for twice as much performance. Let's say you can use all 16 lanes instead… Read more »

Les@TheSSDReview:

You are thinking much too small… this is not a consumer, OEM or even small-server business toy…

Marcel Broesky:

I have a business case that requires we read 60GB of data in as little time as possible. That addresses the bandwidth. I have a business case that requires we process a food recall report. The faster this happens, the faster we can report on which food products might harm people. That addresses the IOPS.

Lubomir Zvolensky:

This is all right, but as I say: are you able to utilize those IOPS and that bandwidth? When I was young and inexperienced, I thought that an SSD with 1,000,000 IOPS must be 10x faster than an SSD with 100,000 IOPS. Now reality caught up with me and I don't think so anymore. I have seen things. What level of parallelism (queue depth) will this Sammy require to get to 1 mil IOPS? What is your application capable of? 60GB? Peanuts. What were you doing up to now, what storage do you have? What performance do you achieve today without super-Sammy… Read more »

stewart:

To anyone not aware of why all 16 lanes or more could be used easily, think visual effects. Right now I'm compositing approx 100 layers of 4K at 32-bit float and would love to play back in real time. This is a huge amount of data (approx 8GB per frame, or 2TB per second). The best decked-out VFX system on the planet (HP Z840 with Xeons, 256GB RAM, dual Q6000s and a Dot Hill 16G Fibre array with 100TB) cannot even come close in 2015. So that's why we all need it, detest waiting for renders, or at least… Read more »

Lubomir Zvolensky:

this is not going to save you then 🙂

you need 2TB per second
this has 5.5GB per second.

that's ~400 times less. Uff. Not some percent here or there, an entire 400 times.
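The commenter's arithmetic holds up. A quick sanity check, using only the figures stated in the thread (a 2TB/s target versus the PM1725's 5.5GB/s sequential read):

```python
# Ratio of the claimed VFX workload to the PM1725's read bandwidth.
needed_bytes_per_s = 2e12    # 2TB per second (commenter's figure)
pm1725_read = 5.5e9          # 5.5GB per second sequential read

ratio = needed_bytes_per_s / pm1725_read
print(round(ratio))          # 364 -- roughly the "~400 times" cited
```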

Dromo:

120TB per minute is too much… even NASA or Google can't provide that bandwidth. Drop down to 1980x and you should be OK.

Alexander Siniov:

You went too hardcore 🙂 I recently ordered for one of our business partners an IBM/Lenovo x3950 X6 with 8 x 18-core Xeons and 4TB of DDR4 RAM, with 4 x NVMe accelerators and 2 x Tesla K80s, for their solution to a customer. The customer is doing some huge renders; 5 architects work on this server, hammering it, but there was nothing close to even 100GB per second, because no single piece of hardware can push that speed. Only virtualized IBM high-end flash with some insane clustering. And you push 2TB per sec? Do you work on NASA servers? 😀

Nuno G.:

Brilliant thinking and brilliant discussion!

Alexander Siniov:

Imagine a database of around 100TB at a medium-sized bank, doing billions of operations per second, where you need some cache to swap in the currently active part of the database, and time matters. For a bank/ISP/GSM operator, the difference between 15 seconds and 12 seconds matters a lot; it's a question of millions. So $5k for an SSD drive that saves you millions is nothing 🙂