Intel DC S3700 Data Center SSD Review (200/800GB)

A few months back, we randomly heard three words during a passing conversation; Intel, Taylorsville, and latency. That was it; no context, no explanation and no hints. Given that we happened to be at the Flash Memory Summit, it was a safe bet that something solid state was afoot, but at the time these were just random words to file away for later. During a phone call with Intel a few weeks later, it all made sense.

As it turns out, Taylorsville is Intel-speak for its third-generation DC S3700 enterprise SATA SSD. Packing a new Intel SATA III controller, the drive was designed with consistently low latency in mind rather than maximum performance. That said, the S3700 isn't exactly a slouch in the performance department, either. Intel has made some pretty bold claims, including a staggering 15x improvement in steady-state write performance over Taylorsville's 3Gbps predecessor, the similarly-named 710 series.


The DC S3700, previously code-named Taylorsville, has a newly styled model designation as well. The DC prefix stands for data center, while the 3 refers to the third generation of Intel SSDs. The familiar 300, 500, 700, and 900 hierarchy is retained, just with a few more modifiers to bring the SSD naming scheme in line with that of 2nd and 3rd generation Core CPUs.

To recap, the 300 and 500 series are consumer-oriented products, while the 700 and 900 series are enterprise-focused offerings primarily intended for server use. The S3700 falls into the latter category, so we'll strap it to the enterprise bench to see what shakes loose a bit later.

The S3700 ships in four capacities and two form factors. In 2.5" variants, 100GB, 200GB, 400GB, and 800GB drives are available, while the diminutive 1.8" form factor only gets 200GB and 400GB versions. Intel noted that the 1.8" size is making a comeback, primarily for high-density server deployments, but only so much flash can fit on a PCB with 25nm/64Gbit NAND.


The 800GB model pictured here possesses a whopping sixteen 64GB packages. Each side holds eight 29F64B08PCMEI 25nm HET-MLC packages, with each octal-die package consisting of eight 64Gbit dice. Do the math, and you get 8GB per die, 8 dice per package, and 16 packages, for 1024GB of NAND. It isn't clear exactly how the total amount of flash is being utilized, but Intel informed us that over-provisioning was around 20%. 745GB of user-addressable space is available on the 800GB, which would point to 28% OP, but surely some of that flash is being used for other purposes.
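The package arithmetic is easy to sanity-check. A quick sketch, assuming the 28% figure reckons spare area against the drive's advertised 800GB capacity:

```python
# Flash-capacity math from the review (values from the text above).
die_gb = 64 // 8            # one 64Gbit die = 8GB
dies_per_package = 8        # octal-die packages
packages = 16               # sixteen packages on the 800GB board

raw_gb = die_gb * dies_per_package * packages   # total NAND on the PCB

advertised_gb = 800
spare_gb = raw_gb - advertised_gb
op_pct = 100 * spare_gb / advertised_gb         # spare relative to advertised capacity
print(f"{raw_gb}GB raw, {spare_gb}GB spare, ~{op_pct:.0f}% OP")
# -> 1024GB raw, 224GB spare, ~28% OP
```

Note the decimal/binary wrinkle: 745GB of user-addressable space is simply the advertised 800 decimal gigabytes expressed in binary units, so both views describe the same drive.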

The 200GB samples we received had a mix of packages, amounting to 256GB with another 8GB thrown in for good measure. The 800GB seems to have exactly 1024GB of flash on board, so Intel probably needed just a bit more extra flash for redundancy in order to hit performance targets (by not sacrificing over provisioning space for other needs) on smaller capacities. The 320 series also used an odd amount of flash, with the extra going into a parity-based die level redundancy scheme. Ironically, the 100GB S3700 will sell for just slightly more than the outgoing 120GB 320 does currently.

The 800GB has two 512MB Micron DDR3-1333 DRAM ICs, while smaller capacities utilize just one 512MB package. Intel claims not to cache any user data in DRAM, instead reserving the volatile cache for internal page tracking and management. All the same, corruption could result if power is interrupted during a write. The S3700 has power loss protection to keep a sudden outage from corrupting data, and if the drive detects a fault in the two capacitors backing that system, it will voluntarily disable the volatile cache. Unlike previous Intel PLP implementations, the S3700 uses two 105°C-rated 3.5V/47µF electrolytic capacitors instead of a solid capacitor array.

Intel says the DC S3700 is good for 10 full-span, random drive writes per day (DWPD). Over five years, that amounts to over 14 petabytes for the 800GB, and up to 20PB if the workload becomes more sequential in nature. The industry is transitioning to the DWPD metric for endurance, as it factors drive capacity out of the equation while standardizing the test workload to an extent. Previously, a hodgepodge of total-bytes-written (TBW) figures was used instead.
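The 14-petabyte figure checks out with simple arithmetic; here is the calculation, using the values quoted above:

```python
# Back-of-the-envelope check of the S3700's rated endurance.
dwpd = 10                 # full-span random drive writes per day
capacity_gb = 800         # largest S3700 capacity
service_days = 5 * 365    # five-year rating period

total_gb = dwpd * capacity_gb * service_days
total_pb = total_gb / 1_000_000   # GB -> PB (decimal units)
print(f"{total_pb:.1f}PB")        # -> 14.6PB, i.e. "over 14 petabytes"
```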

Under a high queue depth, full-span random workload, write amplification (WA) usually goes through the roof. What is only 200GB of host writes might incur 2,000GB of writes internally to the flash, a 10x difference. Lowering the WA factor buys more random drive writes per day over five years; raising the flash's tolerance for stress events, like programming and erasing, buys more DWPD as well. The average consumer drive with 3K P/E-rated flash can handle between 0.8 and 1.2 DWPD over five years, so to get to 10 DWPD, you need both low WA and high-endurance flash. Over-provisioning plays a big role here too, but keeping price/GB competitive is increasingly leading manufacturers to forgo generous OP as a path to better drive life.
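The relationship between P/E rating, WA, and DWPD can be sketched with a simple model: DWPD ≈ P/E cycles ÷ (WA × days in service). This is a back-of-the-envelope illustration, not Intel's rating methodology, and the WA values below are assumed for a mixed consumer workload:

```python
# Toy model: rated P/E cycles spread over the service life, discounted by WA.
pe_cycles = 3000          # typical rating for consumer-grade 25nm MLC
service_days = 5 * 365    # five-year span

for wa in (1.4, 2.0):     # assumed write amplification range (illustrative)
    dwpd = pe_cycles / (wa * service_days)
    print(f"WA={wa}: {dwpd:.2f} DWPD")
# -> WA=1.4: 1.17 DWPD
# -> WA=2.0: 0.82 DWPD
```

With those assumptions the model lands right in the 0.8 to 1.2 DWPD band cited for consumer drives, and it makes clear why hitting 10 DWPD requires both HET-class flash and a WA kept close to 1.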