
An idiot's understanding of SSD limitations/TRIM/GC – gurus, please opine

idata
Esteemed Contributor III
In the absence of TRIM/garbage collection, an SSD has the following problem (not present in spinners, which can simply overwrite old data in place): in a heavily used drive, the SSD has to find "invalid" pages (as marked by the OS) in order to write new information. Here we have two problems. First, the SSD cannot overwrite an "invalid" page without first erasing it. Second, and more seriously, SSDs cannot erase individual pages; they are limited to working with an entire block (each block consisting of 128 pages, or 512 KB).

As a result (barring TRIM/GC), the SSD has to read the entire block into its cache, erase the block, and perform the write on the respective pages. It then has to copy the entire "corrected" block from the on-board cache back to the NAND, even though it might only be changing one or two of the 128 pages within the block. This read-modify-write process is what causes the delays in a heavily used, untrimmed SSD.

TRIM, when executed correctly, immediately marks the aforementioned OS-identified invalid pages for deletion. This allows the SSD's controller to carry out the time-consuming process described above before any writes land on those pages (whether this happens immediately or during idle periods is questionable but irrelevant, as long as it happens reasonably quickly). Garbage collection is likewise intended to let the SSD controller perform a similar erase function on its own, according to the design of the controller.

Obviously, with very heavily used SSDs and/or inefficient controllers and/or an improper OS setup, SSDs will lose performance and often stutter. In such situations a secure erase followed by an image restore might be the only solution. Wear leveling does not directly affect these processes unless TRIM/GC cannot keep up with very heavy usage and the drive is saturated.

Gurus, please opine, but be gentle. I am trying my best to understand these processes.
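A rough illustration of the read-modify-write penalty described above, using the figures from the post (128 pages per block, 4 KB pages, so 512 KB blocks). This is only a sketch of the concept, not how any particular controller actually schedules its work:

# Rough sketch (not vendor-accurate) of the read-modify-write penalty.
# Assumptions: 128 pages per block, 4 KB pages (512 KB blocks).

PAGES_PER_BLOCK = 128
PAGE_SIZE_KB = 4
BLOCK_SIZE_KB = PAGES_PER_BLOCK * PAGE_SIZE_KB  # 512 KB

def kb_moved_without_trim(pages_to_update):
    """Worst case with no pre-erased pages: read the whole block into the
    controller's cache, erase the block, then program all 128 pages back
    (the cost is the same no matter how few pages actually change)."""
    read_back = BLOCK_SIZE_KB
    program_back = BLOCK_SIZE_KB
    return read_back + program_back

def kb_moved_after_trim(pages_to_update):
    """Best case once TRIM/GC has already erased the stale pages in the
    background: only the new pages need to be programmed."""
    return pages_to_update * PAGE_SIZE_KB

for n in (1, 2, 8):
    print(f"updating {n} page(s): {kb_moved_without_trim(n)} KB moved untrimmed "
          f"vs {kb_moved_after_trim(n)} KB after TRIM/GC")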
26 REPLIES

idata
Esteemed Contributor III

The compression processing is done by the SandForce controller. It does not rely on the host system CPU at all.

Yes, I know that. I mean a CPU limit for the SandForce controller itself, which is why I said "like a slow CPU".

idata
Esteemed Contributor III

If I understand it right, TRIM on the SandForce drives occurs in a very controlled manner and is designed to minimize write amplification. Unlike more traditional (e.g., Indilinx) SSDs, where TRIM/garbage collection can be implemented rapidly, producing a rapid recovery of speed at the cost of increased write amplification, SandForce drives will sacrifice "as new" speeds in favor of a more measured, selective implementation of TRIM, with selected blocks being pushed through for erase and rewrite.

There is also reasonable speculation that SandForce drives monitor the amount of writes during a given period and use that data to throttle write speed so that NAND longevity stays consistent with warranty periods. This combination of maximizing NAND longevity and minimizing write amplification does result in an overall decline in performance to a "settled state" that can be 15 to 20% below "as new". Furthermore, extensive use of incompressible data (video, music, zip and rar archives, etc.) will hurt these drives in both performance and their ability to minimize write amplification.

Curiouser and curiouser.
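For what it's worth, here is a back-of-the-envelope sketch of the write-amplification arithmetic from the post above. Every number is an illustrative assumption, not measured SandForce behaviour:

def write_amplification(nand_writes_gb, host_writes_gb):
    """WA = data actually written to NAND / data the host asked to write."""
    return nand_writes_gb / host_writes_gb

# Compressible workload: the controller shrinks the data before it hits NAND,
# so write amplification can drop below 1.0 (illustrative numbers only).
print(write_amplification(nand_writes_gb=44.0, host_writes_gb=100.0))   # 0.44

# Incompressible workload (video, music, zip/rar): no compression gain,
# and garbage-collection overhead pushes WA above 1.0.
print(write_amplification(nand_writes_gb=120.0, host_writes_gb=100.0))  # 1.2

# "Settled state": assuming a 15-20% drop from a hypothetical as-new 250 MB/s.
as_new_mb_s = 250.0
print(as_new_mb_s * 0.80, "to", as_new_mb_s * 0.85, "MB/s settled")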

idata
Esteemed Contributor III

PeterUK wrote:

The compression processing is done by the SandForce controller. It does not rely on the host system CPU at all.

Yes, I know that. I mean a CPU limit for the SandForce controller itself, which is why I said "like a slow CPU".

Even if there was a processor bottleneck, the gains are still noticeable. BTW, the SF-2xxx will be released next year and are supposed to hit 500 MB/s sequential...

idata
Esteemed Contributor III

Even if there was a processor bottleneck, the gains are still noticeable. BTW, the SF-2xxx will be released next year and are supposed to hit 500 MB/s sequential...

Very noticeable if you like writing ones and zeros.

^ and by that I mean a file that, when written, consists of just ones or zeros, so it can be compressed well.

Of course there is a disadvantage (advantages too, just posting the disadvantage) to doing compression on an SSD: you are limited to the port speed, since the data is decompressed at the SSD and so can't be pushed out faster than the port. With NTFS compression, by contrast, you pull the data off the SSD still compressed and decompress it afterwards.
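To put some assumed numbers on that port-speed point (roughly 280 MB/s of usable SATA 3Gb/s bandwidth and a file that compresses 2:1), here is a sketch of the argument, not a benchmark:

PORT_MB_S = 280.0          # usable SATA 3Gb/s bandwidth (assumption)
COMPRESSION_RATIO = 2.0    # original size / compressed size (assumption)

# Drive-side compression (SandForce style): the controller decompresses before
# the data crosses the port, so user-data reads are capped at port speed.
drive_side_read_mb_s = PORT_MB_S

# Host-side compression (NTFS style): compressed bytes cross the port and the
# CPU decompresses them, so effective throughput can exceed the port speed.
host_side_read_mb_s = PORT_MB_S * COMPRESSION_RATIO

print(f"drive-side compression: {drive_side_read_mb_s:.0f} MB/s of user data")
print(f"host-side (NTFS) compression: {host_side_read_mb_s:.0f} MB/s of user data, CPU permitting")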

Message was edited by: PeterUK

idata
Esteemed Contributor III

PeterUK wrote:

Even if there was a processor bottleneck, the gains are still noticeable. BTW, the SF-2xxx will be released next year and are supposed to hit 500 MB/s sequential...

Very noticeable if you like writing ones and zeros.

^ and by that I mean a file that, when written, consists of just ones or zeros, so it can be compressed well.

Of course there is a disadvantage (advantages too, just posting the disadvantage) to doing compression on an SSD: you are limited to the port speed, since the data is decompressed at the SSD and so can't be pushed out faster than the port. With NTFS compression, by contrast, you pull the data off the SSD still compressed and decompress it afterwards.

Message was edited by: PeterUK

0.44x write amplification with a Vista + Office 2007 install... that means compression is probably in the 60-70% range: http://images.anandtech.com/reviews/storage/SandForce/SF-2000/durawrite.jpg

Don't forget that port bandwidth is really only an issue on sequential reads/writes with relatively deep queue depths. The SF-2xxx is going to be SATA 6Gb/s, so port bandwidth won't be a problem.
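Rough arithmetic on why SATA 6Gb/s lifts that ceiling: SATA uses 8b/10b encoding, so 10 line bits carry one data byte.

def usable_mb_s(line_rate_gbps):
    # 8b/10b encoding: 10 bits on the wire per byte of payload
    return line_rate_gbps * 1e9 / 10 / 1e6

print(usable_mb_s(3.0))  # ~300 MB/s ceiling for SATA 3Gb/s
print(usable_mb_s(6.0))  # ~600 MB/s ceiling, comfortably above 500 MB/s sequential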