
An idiot's understanding of SSD limitations/trim/GC – gurus, please opine

idata
Esteemed Contributor III
In the absence of TRIM/garbage collection, an SSD has the following problem (not present in spinners, which can simply overwrite old data in place): in a heavily used drive, the SSD has to find "invalid" pages (as marked by the OS) in order to write new information. Here we have two sets of problems. Firstly, the SSD cannot overwrite an "invalid" page without first erasing it. Secondly, and more seriously, SSDs cannot erase individual pages but are limited to working with an entire block (each block consisting of 128 pages, 512 KB in total, i.e. 4 KB per page).

As a result (barring TRIM/GC), the SSD has to read the entire block into its cache, erase the block, and update the respective pages. It then has to copy the entire "corrected" block from on-board cache back to the drive, even though it might be working with only one or two of the 128 pages within the block. This process is what causes the delays in a heavily used, untrimmed SSD.

TRIM, when executed correctly, immediately marks the aforementioned OS-identified invalid pages for deletion. This allows the SSD's controller to carry out the time-consuming process described above before any writes to these pages (whether this happens instantly or during idle periods is questionable but irrelevant, as long as it occurs relatively quickly). Garbage collection is likewise designed to let the SSD controller execute a similar erase function, based on the design of the controller.

Obviously, with very heavily used SSDs and/or inefficient controllers and/or improper OS setup, SSDs will lose their performance and often stutter. In such situations a secure erase followed by an image restore might be the only solution. Wear leveling does not directly affect these processes unless TRIM/GC cannot keep up with very heavy usage and the drive is saturated.

Gurus, please opine, but be gentle. I am trying my best to understand these processes.
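To make the arithmetic concrete, here is a minimal Python sketch of the read-modify-write penalty described above. It is not any controller's actual firmware: the `Block` class, `write_page` function, and page states are hypothetical, and the geometry (128 pages of 4 KB, i.e. a 512 KB block) simply mirrors the numbers in the post.

```python
# A toy model (not real firmware) of the read-modify-write penalty.
# Geometry matches the post: 128 pages per block, 4 KB pages = 512 KB block.

PAGES_PER_BLOCK = 128
PAGE_SIZE_KB = 4  # 128 pages * 4 KB = 512 KB per block

class Block:
    def __init__(self):
        # Each page is 'empty', 'valid', or 'invalid' (stale data the
        # OS has deleted but the SSD cannot overwrite in place).
        self.pages = ["empty"] * PAGES_PER_BLOCK

def write_page(block, page_index, trimmed):
    """Write one 4 KB page; return KB physically moved (write amplification)."""
    if block.pages[page_index] == "empty":
        block.pages[page_index] = "valid"
        return PAGE_SIZE_KB  # clean page: just write it
    if trimmed:
        # TRIM let the controller erase the stale page ahead of time,
        # so by write time the page behaves like an empty one.
        block.pages[page_index] = "valid"
        return PAGE_SIZE_KB
    # No TRIM: read the whole block into cache, erase the block,
    # merge in the new page, and write the whole block back.
    block.pages[page_index] = "valid"
    return PAGES_PER_BLOCK * PAGE_SIZE_KB

blk = Block()
blk.pages = ["invalid"] * PAGES_PER_BLOCK  # a heavily used, untrimmed block
print(write_page(blk, 0, trimmed=False))   # 512 KB moved to write 4 KB
blk2 = Block()
blk2.pages = ["invalid"] * PAGES_PER_BLOCK
print(write_page(blk2, 0, trimmed=True))   # 4 KB moved: TRIM pre-erased it
```

The untrimmed case moves 128x the data of the trimmed case for the same 4 KB write, which is the stuttering the post attributes to heavily used drives.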

idata
Esteemed Contributor III

Because of the way it's done (if it could even be done) from the OS side, I don't think it would work. It might, but only if you ran it over and over, so you would have to run it when you think the SSDs are filled with garbage.

idata
Esteemed Contributor III

Peter, you may need to lay off the coffee, given the 100-word sentence/paragraph in your post. It's rather difficult to understand as well.

idata
Esteemed Contributor III

We're both Peters.

idata
Esteemed Contributor III

SF (SandForce) drives are great for enterprise, where the majority of data is highly compressible; that was, I believe, the intended target market for SF. Somehow the distinction between enterprise, enthusiast, and mainstream has gotten mixed up, with the end result that mainstream users are paying an enterprise premium for performance they have no chance of taking advantage of.

Stupidly high performance, and prices to match, are doing nothing to make SSDs mainstream.

idata
Esteemed Contributor III

I might have missed out the part about the read-modify-write, where it erases a block before it writes to it. The trick is to not write anything back, as you're faking a write to the array.

There are two types of blocks the read-modify-write has to handle when it's doing this. The first is a block of NAND with some valid data and some garbage, in which case it writes the valid data back and leaves the rest of the block empty. The other is a block of NAND with nothing but garbage: when you read the block out, it sees it's all garbage and erases the block, but since you're faking a write you don't write anything back, so the whole block of NAND ends up empty.

The reason it's not going to cause write amplification is that the array is filled with valid data and garbage anyway, so if you were going to write a 70GB file you would have to do the whole read-modify-write regardless; but if you fake a 70GB file write, you just do what I described above. The reason it could cause write amplification if you run it before the array has filled with garbage again is that a block of NAND with some valid data and the rest empty may trigger an unnecessary read-modify-write, and a block of NAND that is already empty may be erased again unnecessarily.
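For what it's worth, here is a toy Python sketch of that two-case logic, under the same assumptions as the earlier snippet (all names hypothetical, one block modeled as a list of 128 page states). It only illustrates the bookkeeping, not any real controller's behavior:

```python
# Hypothetical "fake write" pass over an array of NAND blocks.
# Case 1: valid + garbage -> erase, write only the valid pages back.
# Case 2: all garbage    -> erase, write nothing back.
# Fully valid or empty blocks are skipped, avoiding the unnecessary
# read-modify-write the post warns about.

from typing import List

PAGES_PER_BLOCK = 128

def fake_write_pass(blocks: List[List[str]]) -> int:
    """Run the fake write over every block; return pages physically rewritten."""
    rewritten = 0
    for block in blocks:
        if "invalid" not in block:
            continue  # no stale pages: leave the block alone
        valid = [p for p in block if p == "valid"]
        # Erase the whole block (all 128 pages) ...
        block[:] = ["empty"] * PAGES_PER_BLOCK
        # ... then write back only the valid pages; garbage is never rewritten.
        block[:len(valid)] = valid
        rewritten += len(valid)
    return rewritten

blocks = [
    ["valid"] * 30 + ["invalid"] * 98,   # mixed: only 30 pages get rewritten
    ["invalid"] * PAGES_PER_BLOCK,       # pure garbage: erased, nothing written
    ["valid"] * PAGES_PER_BLOCK,         # fully valid: skipped entirely
]
print(fake_write_pass(blocks))  # 30 pages rewritten, not 3 * 128
```

Run on a drive already full of garbage, the pass rewrites only the valid pages; run too early, blocks that are partly or wholly empty would be erased and rewritten for no benefit, which is exactly the write-amplification caveat above.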