SandForce Drives and TRIM

Discussion in 'SSD Discussion' started by Bill Gates, Feb 7, 2012.

  1. Bill Gates

    Bill Gates Guest

    How well do you think TRIM works on a SandForce drive?

    [image: AnandTech benchmark screenshot showing no performance recovery after TRIM]

    That's right: not at all. Without a format of the drive, there will be no recovery by TRIM.

    Source: AnandTech
     
  2. Computurd

    Computurd Guest

    I noticed this. The drive has to already be written to 100% of LBAs with totally incompressible data, then written to with more incompressible data. But yes, for it to 'panic lock' in that manner is interesting. An extreme corner case, but still possible... I'm wondering if the SF controller requires more 'room' to trim incompressible data than is provided by the OP? Very strange, but this only manifests itself when the drive is 100 percent full.

    What I wonder is whether it does this when it is utilizing RAISE and its extra OP.
     
  3. groberts101

    groberts101 Guest

    That's actually 100% right on target there, Bill. I've backed my drive into that same corner more than a few times in the past. The funny thing is that he needed to go to the trouble of filling it up first. I did not.
    [image: CrystalDiskMark results from the throttled drive]
     
  4. Bill Gates

    Bill Gates Guest

    Question: how did you go from 15GB to 19GB the very next run?
     
  5. groberts101

    groberts101 Guest

    There was a bit of an issue with my overclock, it appears. LOL

    It went like this. I nearly finished the 9 x 4000MB run and saw nearly the same 112MB/s (actually 115MB/s, IIRC) write speed as posted in the above 1 x 1000MB test run. I crashed.. I cussed.. I rebooted.. I immediately reran a quick 1 x 1000MB test to verify that I was indeed still at that 11xMB/s throttled speed. It obviously was.

    Then, after the next series of 9 x 4000MB test runs, I realized that the 4000MB CDM test file (the CDM3 temp file actually scratched to the drive being tested) was still there from the prior crash. I later deleted it when I saw the size discrepancy. No trickery involved.. just partial stupidity and plain old ignorance are my only excuses. LOL
     
  6. Bill Gates

    Bill Gates Guest

    No accusation whatsoever; that makes perfect sense. Just curious.
     
  7. Computurd

    Computurd Guest

    I'm wondering if this isn't actually a sign of LTT (lifetime throttling) kicking in, and Anand just isn't familiar with the throttling. Maybe filling the device with data triggers instant lifetime throttling, instead of going off of the write rate.
     
  8. Bill Gates

    Bill Gates Guest

    That makes a whole lot of sense. I bet you're right.
     
  9. m.oreilly

    m.oreilly Guest

    The SF doesn't trim until it runs out of 'fresh' NAND.
     
  10. groberts101

    groberts101 Guest

    More precisely.. they accept TRIM and mark deleted blocks immediately, BUT choose to recover blocks on-the-fly ONLY when the drive has no fresh blocks remaining.

    This is why the silly dodos who test force-trims on these drives are incorrectly led to believe that recovery has occurred, as if the trimmed space had been immediately recovered by the controller the way most other drives on the market would do it, in near-immediate fashion. In essence, they have only succeeded in filling the remainder of the free space available on the already struggling drive with the data needed to force the trim. They have literally forced their drive into initiating its much more aggressive on-the-fly recovery algorithm by pushing it over the edge with even more writes.

    I've said it before.. and I'll say it again.. you CANNOT force-trim a performance-degraded SandForce drive back to fresh speeds (or even near fresh, for that matter). You can, however, beat the snot out of it with even more data and force it into its more aggressive immediate recovery mode. Or.. do as some of the "extreme users" do and let GC do that and so much more in idle time. Such as... deleted data/block recovery (trim or no-trim).. partial block consolidation.. and even static data rotation to keep wear leveling in tip-top shape.
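
    To make that concrete, here's a toy model of that lazy reclaim (Python; every name and number in it is made up for illustration.. this is NOT actual SandForce firmware, just the behavior as I understand it):

    Code:
    class LazyReclaimSSD:
        def __init__(self, total_blocks=100):
            self.fresh = total_blocks  # erased blocks, ready for writes
            self.valid = 0             # blocks holding live data
            self.trimmed = 0           # marked deleted by TRIM, NOT yet erased

        def write(self, blocks):
            while blocks > 0:
                if self.fresh == 0:
                    if self.trimmed == 0:
                        raise RuntimeError("drive completely full")
                    self.reclaim_on_the_fly()  # forced, slow path
                n = min(blocks, self.fresh)
                self.fresh -= n
                self.valid += n
                blocks -= n

        def trim(self, blocks):
            # TRIM marks blocks deleted immediately.. but nothing is erased yet
            self.valid -= blocks
            self.trimmed += blocks

        def reclaim_on_the_fly(self):
            # only with zero fresh blocks left are trimmed blocks finally erased
            print("out of fresh blocks -> aggressive on-the-fly recovery (latency spike)")
            self.fresh += self.trimmed
            self.trimmed = 0

    ssd = LazyReclaimSSD()
    ssd.write(100)  # fill the drive to the brim
    ssd.trim(60)    # delete 60 blocks: marked deleted, but still dirty
    ssd.write(40)   # the "force-trim" fill itself is what triggers recovery

    So the force-trim tester sees the pool refilled and calls it "recovered".. when really they just shoved the drive into its slow path.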

    If you're really extreme (apparently my usage is considered enterprise by some standards)?.. add some additional manual OP to the mix for even better results. Here's a newer review comparing OP on the 520 and the Vertex 3, which just goes to show that the testers are finally starting to figure out stuff that has been known by others for quite some time now. Better late than never, I guess. lol
    Intel SSD 520 Enterprise Review | StorageReview.com - Storage Reviews
     
  11. Bill Gates

    Bill Gates Guest

    this is why mine always looks like this:
    [image: partition layout screenshot showing space left unallocated as manual OP]
     
  12. MRFS

    MRFS Guest

    So, Bill, stupid question: is that controller sensitive to the existence, and amount, of such "unallocated" space?

    I would have guessed that the firmware really does not see a higher-level concept like NTFS formatting and related data structures.

    I would love it if you would prove me wrong about that guess.


    p.s. Do you have a "rule of thumb" to decide how much "unallocated" space there should be -- ideally --
    for any given SSD capacity?


    MRFS

    ---------- Post added at 08:54 AM ---------- Previous post was at 08:49 AM ----------

    Addendum: BTW, switching to NTFS was a TRULY FABULOUS decision for Windows, in general.

    What a huge difference NTFS made for overall reliability and data integrity. THANKS, Bill!!

    And, I just discovered "exFAT" which allows large drive images to be saved on USB flash drives --
    VAN NUYS!!

    For those who don't already know, exFAT allows a single file to exceed 4GB in size:
    there is a Windows Update/patch that you download from the Microsoft website,
    and you're good to go! It also installs an updated FORMAT program, if you prefer
    to do formatting inside Command Prompt.
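
    For example, once that update is installed, formatting a flash drive as exFAT from Command Prompt is a one-liner (X: here is just a placeholder for your drive letter):

    Code:
    format X: /FS:exFAT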


    MRFS
     
  13. Bill Gates

    Bill Gates Guest

    20% OP is kind of the rule of thumb.
     
  14. groberts101

    groberts101 Guest

    You should NEVER exceed these controllers' spare area with any one work session's write load. They will throttle and/or be forced into on-the-fly recovery, with slightly increased latency as a result.

    This is why extra manual OP is beneficial: you increase the controller's efficiency by allowing it a larger space in which to do background recovery, and it helps to keep a larger fresh reserve so you avoid running out of fresh blocks and forcing the issues mentioned above. OP is good for ANY SSD made right now.. but SandForce maintains far better consistency when it's leveraged in these particular controllers.

    And the controller doesn't need to be aware of file type at all.. only to read the metadata/LBAs and compare its internal maps against what still appears to be viable data from the OS's perspective. This is what makes garbage collection even possible in the first place.. as without these internal map comparisons?.. we would definitely need trim to keep things organized and on track, since the controller would never be able to figure out what was deleted. Trim commands just make figuring it all out more efficient, but they are far from needed if logoff idling is implemented. Plus, idle-time recovery for this controller (and I suspect many others) is much more efficient at utilizing/optimizing the controller's internal physical data structure. Data rotation and partial block consolidation being added to GC is always a good combo in my book... trim or no-trim.
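
    A quick toy sketch of that map comparison (Python; the names and sizes are mine, purely illustrative): the standard way a controller spots garbage without trim is overwrite detection.. it only learns a physical page is stale when the OS writes the same LBA again, and idle-time GC then sweeps the superseded pages up.

    Code:
    class NoTrimGC:
        def __init__(self):
            self.map = {}        # LBA -> physical page currently holding it
            self.stale = set()   # pages holding superseded (garbage) data
            self.next_page = 0

        def host_write(self, lba):
            old = self.map.get(lba)
            if old is not None:
                self.stale.add(old)  # old copy is now garbage.. no trim needed
            self.map[lba] = self.next_page
            self.next_page += 1

        def idle_gc(self):
            # idle-time recovery: erase superseded pages, return them fresh
            freed = len(self.stale)
            self.stale.clear()
            return freed

    ssd = NoTrimGC()
    ssd.host_write(7)
    ssd.host_write(7)     # second write supersedes the first copy
    print(ssd.idle_gc())  # -> 1 page reclaimed without any trim at all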

    So to finally answer that question.. there is no set rule for the percentage to be used, as 20% of a 60GB drive is FAR different from 20% of a 240GB drive.

    The best way I can put it is this.. always leave at least 25% more fresh blocks available AFTER the writes in any one logged-on session. If you write heavier?.. you need more. Rarely scratch streaming media to your OS drive, use smaller files, or mostly just read?.. you can nearly fill these things up to the brim and never see a slowdown from the user's perspective.
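
    If you want that as arithmetic, here it is as a tiny Python helper (the 25% floor is just my rule of thumb spelled out literally, not any vendor formula):

    Code:
    def recommended_free_gb(capacity_gb, session_writes_gb, floor_pct=0.25):
        # free space to keep BEFORE a session: the session's write load,
        # plus a floor_pct-of-capacity reserve left over AFTER the writes
        return session_writes_gb + floor_pct * capacity_gb

    print(recommended_free_gb(60, 5))   # 60GB drive, 5GB session  -> 20.0
    print(recommended_free_gb(240, 5))  # 240GB drive, same load   -> 65.0

    Same percentage.. very different absolute headroom, which is exactly why a flat percentage rule falls apart across capacities.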

    The biggest problem here is that we never have access to the physical layout/fresh blocks remaining of an SSD, so we can only guess that the space we have free is clean and ready to rock and roll. But we would be wrong to make any assumptions, since they would probably be wrong 9 times out of 10. BUT.. there is a way to make an educated attempt to ensure that nearly all the space currently free on your drive is fresh and available for the next write load. It's called idle-time recovery, and it allows the drive the necessary time to leverage all the code (optimization algorithms) that the engineers put into these controllers.

    Which then brings us full circle to the question of the effect of extra OP on disk performance. It can and does help to maintain efficiency/consistency on any SandForce-controlled drive (or any other, for that matter). And the more you write to your OS drive?.. the more you need.
     
  15. MRFS

    MRFS Guest

    I keep coming back to SDRAM as THE BEST solution for the MOST FREQUENTLY written files.

    Here's one idea I wrote up and submitted in a provisional patent application several years ago:
    back then, there were only BIOS subsystems, and no UEFI, but the idea should work even
    better with the latest UEFI implementations:

    Enhance BIOS/UEFI subsystems to support a FORMAT RAM option at initial startup,
    before Windows Setup is run the first time.

    Properly implemented, the resulting ramdisk would appear to the BIOS as
    just another NTFS partition (JANP).

    Then, Windows Setup would load directly into RAM in a transparent fashion,
    meaning that Windows Setup would not really know, or care, that C: is a ramdisk.

    Assuming that adequate hardware security can be implemented --
    to prevent application programs from "hosing" that ramdisk --
    this solves a host of problems that still persist with SSDs:

    frequently written files like browser caches, swap files, and such
    would be stored by default in C: -- resulting in enormous speed.

    The remainder of RAM that is not assigned to the OS
    can be either locked or paged on-demand, as usual, to a storage device
    of the User's choice, or simply allowed to default.

    My preference would be to assign this ramdisk to the
    uppermost RAM addresses, in the same manner as
    logic implemented in RamDisk Plus from SuperSpeed, LLC.

    In this manner, the remainder of RAM addresses
    below the starting RAM address of C: is available for
    OS to manage in the normal way.

    For example, if we begin with 32GB of RAM,
    we should be able to fit an OS into 16GB of RAM;
    third-party software that does not fit into C:
    can be installed on another drive letter, e.g. D: and beyond.
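
    A back-of-envelope sketch of that address split (Python; the sizes and the GiB granularity are illustrative only):

    Code:
    GIB = 1024**3

    def ramdisk_layout(total_ram_gib, ramdisk_gib):
        # place the ramdisk at the TOP of physical RAM, as RamDisk Plus does;
        # everything below its start address is left for the OS to manage
        top = total_ram_gib * GIB
        start = top - ramdisk_gib * GIB
        return {"OS-managed RAM": (0, start), "ramdisk (C:)": (start, top)}

    for name, (lo, hi) in ramdisk_layout(32, 16).items():
        print(f"{name}: 0x{lo:010X} - 0x{hi:010X} ({(hi - lo) // GIB} GiB)")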

    VOILA! No need for garbage collection;
    defragmentation is worthless, because
    each byte address in RAM is truly "random";
    and there is absolutely no worry about
    Write Endurance because all modern SDRAM
    comes with a LIFETIME warranty (meaning
    that Write Endurance is effectively INFINITE).

    Yes, RAM is volatile; but, whenever I've had this
    conversation with IT professionals, THE FIRST
    question I ask them is this: Are your workstations
    powered by a UPS/battery backup, or not?

    I can't imagine anyone building a system that
    needs a 1000W or 1500W PSU and NOT powering
    such a high-end system with a UPS!

    Every one of our workstations is powered by a
    dedicated APC UPS, and they report an automatic
    switch to batteries about 3-5 times EVERY WEEK.

    Finally, drive imaging has become such a standard
    maintenance procedure that it should be fairly easy to use
    existing software, e.g. Acronis, for restoring a drive image,
    chiefly because the C: partition works the same
    as it would on an SSD or HDD.


    MRFS

    ---------- Post added at 06:31 AM ---------- Previous post was at 06:26 AM ----------

    p.s. Here are those provisional patent applications:

    Computer BIOS Enhancements to Create and Load Memory Resident File Systems (1 of 2)

    Computer BIOS Enhancements to Create and Load Memory Resident File Systems (2 of 2)


    Hope this helps.


    MRFS
     
  16. MRFS

    MRFS Guest

    This photo is actually quite old, but you get the idea
    for very large memory subsystems, now even more feasible
    and less expensive due to higher density DIMMs:

    16xDIMM.slots.jpg


    MRFS
     
  17. renosablast

    renosablast Guest

    Some would kill to have sixteen DIMMs on one board! Especially with today's higher-density DIMMs!
     
  18. MRFS

    MRFS Guest

    Larger server motherboards already support 32 x DIMM slots e.g.:

    tyan.staggers.4.dimm.banks.JPG


    MRFS
     
  19. MRFS

    MRFS Guest

    With 1TB of RAM now available, it should not be too difficult to pre-load an entire OS
    into the uppermost 16GB to 32GB of physical RAM ...

    Here's a review of a larger server with 64 x DIMM slots: 64 @ 16GB = 1,024GB = 1TB RAM!!

    AnandTech - Quad Xeon 7500, the Best Virtualized Datacenter Building Block?

    QuantaQCT > STRATOS Server > 4P Server > S410 series

    High Memory Capacity – up to 1TB!

    QSSC-S4R overcomes the space limitation of the 4U form factor and can accommodate up to 8 hot-swappable 8-DIMM memory riser boards.

    With 1TB of system memory, this is double the capacity of the previous generation in this category.

    From a business planning perspective, this system offers future proof capability to host virtual machines and memory-intensive HPC applications, such as engineering design automation, geophysical modeling, seismic applications, etc.

    From a memory-budget perspective, the system supports lower cost 4 GB DDR3 registered (with ECC) DIMMs for a total of up to 256 GB.

    Or alternately, QSSC-S4R supports memory DIMMs of newer DRAM technology and of larger DIMM capacity (at higher cost), such as 8GB and 16GB DDR3 registered DIMMs.


    MRFS

    ---------- Post added at 08:24 AM ---------- Previous post was at 08:12 AM ----------

    IBM System x3690 X5 is expandable now to 2TB:

    IBM System x3690 X5: Overview

    Nice video promotion, too!


    MRFS

    ---------- Post added at 08:35 AM ---------- Previous post was at 08:24 AM ----------

    See also the video below this text:

    Leadership memory scalability


    Samsung 32GB RDIMM combined with IBM eX5 servers with MAX5 enables leadership memory scalability for VMware vSphere 5.

    IBM eX5 Leadership Memory Scalability - IBM and Samsung - YouTube




    ---------- Post added at 08:37 AM ---------- Previous post was at 08:35 AM ----------

    Listen to the IBM rep say:

    "... up to 6 Terabytes of memory in an 8-socket system"

    ---------- Post added at 08:41 AM ---------- Previous post was at 08:37 AM ----------

    "... many clients are now able to place their entire databases inside of memory
    rather than just caching a portion of it"


    MRFS
     
