X25M G2 80GB SSD freezes after ~200ms - can get ~95% of data out

idata
Esteemed Contributor III

Hi,

unfortunately, gremlins struck my Intel X25M G2 80GB SSD. It is no longer detected by the BIOS (tried AHCI and legacy mode) and having it connected at startup makes the system hang for a long time on POST. Using it inside an external eSATA/USB enclosure does not work either.

However, I discovered that the OS is able to detect the partitions when hotplugging the device (Windows and Linux). When accessing the drive or trying any other commands (extracting drive info, reading SMART parameters etc.), the SSD freezes again (the HDD LED is on for maybe 30 seconds, and the drive does not react to any ATA commands - not even resets). So using Intel's SSD Toolbox or updating the firmware (currently 02HA) is not possible.

Regrettably, there's some important data on that device that I'd REALLY like to get out. A reasonably priced company specializing in restoring flash drives told me that they (using the PC 3000) currently cannot cope with the X25M G2, because the controller encrypts (?) (some?) data.

So I tried myself...

To get the partitions, the OS needs to read sector 0. It took me quite a while to figure out where and how the Linux kernel does this. With various changes to the kernel, it was possible to read other sectors at this point as well (having an open source OS is really cool in situations like these...).

Experimenting further, I made some progress with this issue and thought I might share my findings.

Currently, I have a Linux kernel with greatly reduced SATA timeouts (for convenience) that does nothing when detecting the SSD except provide it as /dev/sdb as quickly as possible. Using a tuned "dd" (it waits for the input device to become available and immediately starts dumping data) I can get 30-50 MB out of the SSD before it freezes. To get another 30-50 MB out, I need to detach the SSD from the system and reattach it (I use a drive bay for this).
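
For illustration, the wrapper around "dd" is essentially the following (a simplified sketch with stock tools, not the actual patched code; it assumes GNU coreutils, and the offset file simply remembers how far the previous run got):

#!/bin/bash
# Sketch: wait for the SSD to show up, then immediately resume dumping into
# the image at the offset the previous run reached.
DEV=/dev/sdb        # the SSD (adjust)
IMG=dump.img        # output image
STATE=dump.offset   # progress between reattaches
BS=4096

OFF=$(cat "$STATE" 2>/dev/null || echo 0)

# busy-wait so dd starts the moment the kernel exposes the device
until [ -b "$DEV" ]; do sleep 0.1; done

dd if="$DEV" of="$IMG" bs="$BS" skip=$((OFF / BS)) seek=$((OFF / BS)) conv=notrunc

# the image only grows as far as dd actually wrote, so its size is the new offset
stat -c %s "$IMG" > "$STATE"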

Unfortunately, this seems to work with only 95%-98% of the sectors. Accessing some areas on the drive causes immediate freezing. The closer I get to these areas, the smaller the regions "dd" can extract become (sometimes only a handful of sectors).

Currently, I have extracted 2GB from the drive and the data seems correct. However, it takes quite a long time... especially because of these "broken" (?) regions.

I haven't come up with a way to automate this yet. I read that integrating a switch into the SATA power supply cable would not be a good idea... Maybe someone else has an idea?

I also modified "hdparm" this way, making it possible to do various things with the drive before it freezes:

- With a low-level sector read, it is possible to extract a few more sectors around the "broken" regions.
- Turning off look-ahead did not bring any improvements.
- It's possible to trim individual sectors. However, trimming already fails with sectors 3 and 4 (resulting in an immediate freeze again).
- SMART values and drive info look reasonable.

Apart from that, I discovered one "broken" region where the drive does not freeze, but issues an I/O error instead.

At this stage, I'm starting to run out of ideas about what else I could try.

I'm thinking about the following questions:

- Why does the SSD freeze? Could the firmware be stuck in an infinite loop? Is there a way to prevent this (making the firmware do something else... maybe a proprietary SATA command, trimming certain sectors, patching the firmware)?
- I got advice that trimming/wiping an Intel SSD can make it operational again. Do I need to do this with the whole drive, or could trimming half the drive already help? What about the sectors that, as it seems, cannot be trimmed with hdparm?
- Are these "broken" regions really broken, or could there be another way to access them? Considering my luck, I guess this is the file system info... Suppose I had a raw dump of the SSD's flash chip contents, as well as the 95%-98% of the data I extracted before. Would there be a chance to somehow restore the missing 2-5% of that data?
- How could I automate detaching and reattaching the drive to my SATA controller? Or could this be done in software? (My mainboard is an Intel DH55TC (H55).)

Any ideas would be very much appreciated - especially clues from people who know what's going on inside these SSDs.

(I don't care about RMA that much btw.)

I'll gladly provide more info if required - as well as the source code modifications if someone has similar issues and wants to try this.

Thanks & Best regards,

Max Reichardt
4 REPLIES

idata
Esteemed Contributor III

It sounds like the drive is simply going bad. The drive firmware may be caught up trying to deal with whatever brokenness is going on within the drive internally; the FTL may be busted, maybe one of the NAND flash modules is horked, maybe there's an internal power circuitry problem that's only affecting some area of the drive. Who knows. The FTL going bad or freaking out would cause certain LBA-to-NAND mappings to be broken, while others would work. So, that would be my guess for the fault.

There's nothing you can do about this. Honestly. Nothing in software is going to be able to deal with the underlying storage mechanism and hardware inside of the drive faulting. You can use a modified dd that handles error conditions like so: default to bs=64k; when it encounters an error or timeout condition (a timeout would cover the "freezing" you see), drop to bs=32k and re-read; another read error drops it to bs=16k, then 8k, then 4k, then 2k, then 1k, then 512, before considering the block (not LBA) bad. Keep reading for details about block size vs. LBA; simply put, 512 bytes = one LBA. If you let this tool run, it would take days depending on the condition of the drive (how many read errors it encounters). Ultimately it's not worth it.
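
The rough shape of that back-off, if you wanted to fake it in plain shell around stock GNU dd, would be something like this (a sketch only -- it assumes timeout(1) is available and that a frozen read eventually errors out or times out instead of wedging the bus forever):

#!/bin/bash
# Back-off reader sketch: try 64k blocks; on an error/timeout, bisect the failed
# region with progressively smaller block sizes, down to 512 bytes (one LBA).
DEV=/dev/sdb
IMG=dump.img
TOTAL=$(blockdev --getsize64 "$DEV")

read_region() {   # read_region <byte offset> <block size>
    local off=$1 bs=$2
    if timeout 15 dd if="$DEV" of="$IMG" bs="$bs" count=1 \
            skip=$((off / bs)) seek=$((off / bs)) conv=notrunc 2>/dev/null; then
        return 0
    fi
    if [ "$bs" -le 512 ]; then
        echo "unreadable 512-byte block at byte offset $off" >&2
        return 1
    fi
    # halve the block size and retry both halves of the failed region
    read_region "$off" $((bs / 2))
    read_region $((off + bs / 2)) $((bs / 2))
}

off=0
while [ "$off" -lt "$TOTAL" ]; do
    read_region "$off" 65536
    off=$((off + 65536))
done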

Furthermore, I don't know what you're talking about, re: "modified hdparm to do a low-level sector read". There are no sectors on SSDs (nor have there been on hard disks for quite some time) -- everyone has used LBA addressing for the past, oh, 10 years. There is no such thing as a "low-level sector read"; everything uses READ_DMA48 or (god forbid) READ_DMA. You can accomplish a "low-level read" by using dd on /dev/sdb directly with a block size of 512. For example, "dd if=/dev/sdb of=/dev/stdout bs=512 skip=12345 | hd" would show you the contents of LBA (what you keep calling "sector") 12345. For a block size, see above. Intel drives, like most other drives, advertise a logical and physical "sector" size of 512 bytes, not 4KB or 8KB (they do this to remain compatible with existing legacy software). hdparm wouldn't do anything "magical" that dd wouldn't do. The underlying libata driver on Linux will use LBA addressing and that's that. You don't want to use old CHS addressing, believe me. Nightmare.

DO NOT TRIM LBAS MANUALLY! By doing so you are telling the drive "it's okay to erase-then-write this NAND cell". You will lose data doing this, given that the drive's GC will interfere with what you're trying to do. So DO NOT DO THIS -- especially since you're highly concerned over trying to get your data back.

There is no "internal encryption" of data in the NAND flash. The company that told you that is mistaken.

My summary -- and this is what you should take away from what I've written, if anything -- is to RMA the drive. Do not try to get data off of it. If you can still write to parts of it, and you're worried about Intel RMA folks "stealing" your data, try to dd if=/dev/zero of=/dev/sdb bs=64k and then use the seek= parameter (not skip! skip does lseek() on the input device, seek() does lseek on the output device) to skip past areas which might result in errors. You will seriously be wasting months (no exaggeration) of time trying to get data off the drive, and even if you do, you're going to have to piece it back together and try to get Linux to honour a fsck of it (and you also have to assume the fsck will work; there is no guarantee it won't crash with that amount of data loss). If you use reiserfs or ext4fs there's no guarantee the filesystem driver won't crash with that amount of data loss, even in the case of a journalled filesystem. Step back and ponder all the "what ifs", then ask yourself just how important your time is.

The fact that you have highly important data (re: "there's some important data on that device") yet you don't do backups is very disappointing. I hope you've learned your lesson and will do regular (potentially automated; on *IX systems this is very easy, try rsync!) backups to avoid this situation. Remember: SSDs go bad just like MHDDs do. Don't think even for a minute that using an SSD justifies not doing backups. Do backups.

idata
Esteemed Contributor III

Thank you very much for your detailed and helpful response.

Furthermore, I don't know what you're talking about, re: "modified hdparm to do a low-level sector read". There are no sectors on SSDs (nor have there been on hard disks for quite some time) -- everyone has used LBA addressing for the past, oh, 10 years.

On the hdparm manpage, a "low-level read" is mentioned (taking an LBA address though):

--read-sector

Reads from the specified sector number, and dumps the contents in hex to standard output. The sector number must be given (base10) after this option. hdparm will issue a low-level read (completely bypassing the usual block layer read/write mechanisms) for the specified sector. This can be used to definitively check whether a given sector is bad (media error) or not (doing so through the usual mechanisms can sometimes give false positives).
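
For example, reading a single LBA (12345, just as an example) with stock hdparm looks like this:

hdparm --read-sector 12345 /dev/sdb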

DO NOT TRIM LBAS MANUALLY! By doing so you are telling the drive "it's okay to erase-then-write this NAND cell". You will lose data doing this, given that the drive's GC will interfere with what you're trying to do. So DO NOT DO THIS -- especially since you're highly concerned over trying to get your data back.

I don't need all the partitions on the drive (the OS is not important). That's why I thought I might trim the partitions I don't need (with the small hope that the drive might no longer get stuck).

There is no "internal encryption" of data in the NAND flash. The company that told you that is mistaken.

That's interesting, thanks.

My summary -- and this is what you should take away from what I've written, if anything -- is to RMA the drive. Do not try to get data off of it. If you can still write to parts of it, and you're worried about Intel RMA folks "stealing" your data, try to dd if=/dev/zero of=/dev/sdb bs=64k and then use the seek= parameter (not skip! skip does lseek() on the input device, seek() does lseek on the output device) to skip past areas which might result in errors. You will seriously be wasting months (no exaggeration) of time trying to get data off the drive, and even if you do, you're going to have to piece it back together and try to get Linux to honour a fsck of it (and you also have to assume the fsck will work; there is no guarantee it won't crash with that amount of data loss). If you use reiserfs or ext4fs there's no guarantee the filesystem driver won't crash with that amount of data loss, even in the case of a journalled filesystem. Step back and ponder all the "what ifs", then ask yourself just how important your time is.

Hehe... this indeed comes down to the question whether it is more work extracting the relevant data or recreating it (or if I need some of it at all...)

Up to now, I don't consider my time wasted, because I learnt a lot about the Linux kernel, HDDs and SATA in general 🙂

The most important partition is only 6 GB. So I think I will extract the 95% of data I can get out, mount the image and see what fsck says about it. It's ext4.

Some of the important data is source code and plain text. So it's also possible to grep in the disk image in case the filesystem is indeed broken.
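
Roughly what I plan to do with the extracted image (fsck in no-change mode first, then a read-only loop mount; paths and the search string are just placeholders):

# check the image without modifying it
fsck.ext4 -n -f partition.img
# mount it read-only via a loop device and see what survived
mkdir -p /mnt/rescue
mount -o loop,ro partition.img /mnt/rescue
# if the filesystem is too broken to mount, search the raw image instead
grep -a -b "some string I know is in my source code" partition.img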

If I could somehow automate this, I'd need little time to get the 95% data out of the whole drive - which would be better than nothing.

The fact that you have highly important data (re: "there's some important data on that device") yet you don't do backups is very disappointing. I hope you've learned your lesson and will do regular (potentially automated; on *IX systems this is very easy, try rsync!) backups to avoid this situation. Remember: SSDs go bad just like MHDDs do. Don't think even for a minute that using an SSD justifies not doing backups. Do backups.

I learnt that lesson - you can be very sure about that... Somehow, my USB sticks seemed almost indestructible (even surviving sessions in the washing machine etc.), so I had the impression that flash technology is very robust. SSDs in particular seem to be more vulnerable, though.

I was always planning to do these regular backups... everyone knows that you should do this. I certainly have backups of my most important data. But those 50GB of semi-important data would be nice to have.

Message was edited by: mreichardt - Better wording, minor formatting

idata
Esteemed Contributor III

mreichardt wrote:

On the hdparm manpage, a "low-level read" is mentioned (taking an LBA address though):

--read-sector

Reads from the specified sector number, and dumps the contents in hex to standard output. The sector number must be given (base10) after this option. hdparm will issue a low-level read (completely bypassing the usual block layer read/write mechanisms) for the specified sector. This can be used to definitively check whether a given sector is bad (media error) or not (doing so through the usual mechanisms can sometimes give false positives).

The man page makes it sound like something "magical" is going on. All this option does is issue a READ_DMA48 request to the LBA you specify, and return the contents of it (512 bytes worth). Functionally there is absolutely no difference between this and a dd to the proper LBA (using skip). If the LBA is unreadable, dd will see that. If the LBA is unreadable, hdparm --read-sector will see that. The underlying disk will still return an ATA-level I/O error if the LBA is unreadable in both situations. "So what's the difference?" The difference is if you're using a non-512-byte block size with dd (e.g. dd bs=64k).

Bypassing the block layer (e.g. /dev/sdb is a block device) is possible, but that would indicate there is low-level (kernel-level) code in hdparm. I have a hard time believing that. I'm willing to bet the source simply calls open() on the /dev/sdb device, then issues ioctl() commands to the fd (of the open device). This is how we do it on FreeBSD. If that's how hdparm operates, then yes, the block layer is still used.

What's important here is that the block layer IS NOT what's causing you problems. The underlying physical medium (the SSD) is what's causing issues. So, hdparm --read-sector gains you absolutely nothing here.

DO NOT TRIM LBAS MANUALLY! By doing so you are telling the drive "it's okay to erase-then-write this NAND cell". You will lose data doing this, given that the drive's GC will interfere with what you're trying to do. So DO NOT DO THIS -- especially since you're highly concerned over trying to get your data back.

I don't need all the partitions on the drive (the OS is not important). That's why I thought I might trim the partitions I don't need (with the small hope that the drive might no longer get stuck).

Now you're confusing me. "Trim the partitions?" Do you mean resize (or delete) the partitions you don't need, or are you actually talking about the TRIM data set management command? These are two unrelated/separate things that have nothing to do with one another. So please don't issue TRIM data set management commands with custom LBAs. You can absolutely lose data on your drive doing this. Doing this means, in effect, that you (the human being) know exactly what LBA regions aren't being used by the filesystem for **any** structure, including data.

Hehe... this indeed comes down to the question whether it is more work extracting the relevant data or recreating it (or if I need some of it at all...)

Up to now, I don't consider my time wasted, because I learnt a lot about the Linux kernel, HDDs and SATA in general 🙂

The most important partition is only 6 GB. So I think I will extract the 95% of data I can get out, mount the image and see what fsck says about it. It's ext4.

Some of the important data is source code and plain text. So it's also possible to grep in the disk image in case the filesystem is indeed broken.

It's going to take you a lot of time to do this, given the failure situations you're describing. Any time you encounter an issue with the FTL (assuming that's what's broken), you're going to tack on tons of time (possibly hours) with the procedure I described (re: decrease block size, retry the read, decrease block size, retry the read, etc.). You could use dd with a 512-byte block size to read literally every LBA, one at a time -- this will probably take days to complete. And you'll need to be using a modified version of dd that ignores read errors (should write 512 bytes of zero to the of= in exchange).
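
For what it's worth, stock GNU dd gets you most of that behaviour with its conv flags -- noerror keeps going after a failed read, sync pads the failed block with zeros so the offsets in the image stay aligned:

dd if=/dev/sdb of=image.img bs=512 conv=noerror,sync

It will still take days at one LBA per transfer, and it obviously can't do anything about the drive freezing outright.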

Furthermore, the filesystem layer will make getting all of your data back very, very difficult. Let's say you have a single source code file that's about 200KBytes in size. The source is not going to be linearly stored across the medium; it may be scattered all over the image. You will have to piece it together, piece by piece, until you get what you want. ext4 makes this even more difficult. You'll have to do this procedure *per every single file*. Using grep will not suffice -- you will almost certainly have to use strings -a. My recommendation? Don't use any of those tools. Use a hex editor.
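
If you end up searching the raw image anyway, at least ask for byte offsets so you can jump straight to the right spot in a hex editor (the search string is a placeholder):

strings -a -t d image.img | grep "SomeIdentifierYouRememberWriting"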

You are in for a very serious, balls-deep undertaking if you want to try and recover the filesystem entirely. You should ask your Linux distribution mailing list if there are any ext4 data recovery tools for your situation (a drive that has between 5-6% of its capacity completely lost). There may be things that can work out the pains of the previous paragraph for you, automatically. That is outside of the topic of the Intel forum -- again, you will need to talk to the folks who maintain your Linux distribution.

The fact that you have highly important data (re: "there's some important data on that device") yet you don't do backups is very disappointing. I hope you've learned your lesson and will do regular (potentially automated; on *IX systems this is very easy, try rsync!) backups to avoid this situation. Remember: SSDs go bad just like MHDDs do. Don't think even for a minute that using an SSD justifies not doing backups. Do backups.

I learnt that lesson - you can be very sure about that... Somehow, my USB sticks seemed almost indestructible (even surviving sessions in the washing machine etc.), so I had the impression that flash technology is very robust. SSDs in particular seem to be more vulnerable, though.

I was always planning to do these regular backups... everyone knows that you should do this. I certainly have backups of my most important data. But those 50GB of semi-important data would be nice to have.

Flash technology is far from robust. The only thing it offers is lack of mechanical failure. That's it. It's prone to failure just as any other piece of hardware is. In fact, in some regards it is *more* prone to failure given how flaky NAND flash is as a whole, and all the electrical requirements/complexities. I wish the industry had gone with memristors instead (see Wikipedia).

The important thing to take away from this is to do backups no matter what. Buy yourself a 1TB hard disk ($50-60 at most) and do backups to it regularly. I do my backups automatically every 24 hours. I figure losing a day's worth of work is a lot better than losing months, possibly years.
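
If you want a concrete starting point (the destination path is a placeholder for wherever the backup disk is mounted), a nightly cron entry wrapped around rsync is all it takes:

# root's crontab: mirror /home onto the backup disk every night at 03:00
0 3 * * * rsync -a --delete /home /mnt/backup/ >> /var/log/home-backup.log 2>&1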

idata
Esteemed Contributor III

koitsu wrote:

The man page makes it sound like something "magical" is going on. All this option does is issue a READ_DMA48 request to the LBA you specify, and return the contents of it (512 bytes worth). Functionally there is absolutely no difference between this and a dd to the proper LBA (using skip). If the LBA is unreadable, dd will see that. If the LBA is unreadable, hdparm --read-sector will see that. The underlying disk will still return an ATA-level I/O error if the LBA is unreadable in both situations. "So what's the difference?" The difference is if you're using a non-512-byte block size with dd (e.g. dd bs=64k).

Bypassing the block layer (e.g. /dev/sdb is a block device) is possible, but that would indicate there is low-level (kernel-level) code in hdparm. I have a hard time believing that. I'm willing to bet the source simply calls open() on the /dev/sdb device, then issues ioctl() commands to the fd (of the open device). This is how we do it on FreeBSD. If that's how hdparm operates, then yes, the block layer is still used.

What's important here is that the block layer IS NOT what's causing you problems. The underlying physical medium (the SSD) is what's causing issues. So, hdparm --read-sector gains you absolutely nothing here.

Well, interestingly, I have been using a block size of 512 bytes with dd and it made a difference. I'm sure about that. I was trying to read the first broken sector I discovered with all kinds of methods. When it suddenly succeeded with hdparm, for a moment I thought it would be possible to read the whole drive now (sadly, in the end it was only 4 LBAs more...).

I did not dig through the kernel code entirely, but I had the impression that the block layer tries to map a complete page (4096 bytes) to memory at once - even if you only want to read a single sector - but I'm absolutely not sure about that.

hdparm indeed uses open() and ioctl(). However, it seems to build the low-level ATA commands all by itself (in sgio.h and sgio.c there are all these ATA opcodes and low-level structs). For instance, the do_read_sectors function selects an ATA command depending on the LBA:

...

ata_op = (lba >= lba28_limit) ? ATA_OP_READ_PIO_EXT : ATA_OP_READ_PIO;

init_hdio_taskfile(r, ata_op, RW_READ, LBA28_OK, lba, 1, 512);

...

Now you're confusing me. "Trim the partitions?" Do you mean resize (or delete) the partitions you don't need, or are you actually talking about the TRIM data set management command? These are two unrelated/separate things that have nothing to do with one another. So please don't issue TRIM data set management commands with custom LBAs. You can absolutely lose data on your drive doing this. Doing this means, in effect, that you (the human being) know exactly what LBA regions aren't being used by the filesystem for **any** structure, including data.

Sorry about that. I wasn't aware of these different notions of trimming.

My relevant partitions start at LBA 44 million something. So I thought I could tell the SSD to trim (discard/garbage-collect) LBAs 0 to 44 million (?)

It's going to take you a lot of time to do this, given the failure situations you're describing. Any time you encounter an issue with the FTL (assuming that's what's broken), you're going to tack on tons of time (possibly hours) with the procedure I described (re: decrease block size, retry the read, decrease block size, retry the read, etc.). You could use dd with a 512-byte block size to read literally every LBA, one at a time -- this will probably take days to complete. And you'll need to be using a modified version of dd that ignores read errors (should write 512 bytes of zero to the of= in exchange).

I noticed that the block size in the dd command is (almost) irrelevant. In one run, about the same amount of data comes out of the drive - regardless of whether the block size is 4K or 512K. I already have a script that dumps the next block of data with an appropriate tool and block size every time I attach the drive.

So now it's down to attaching and detaching the drive automatically. I only need a linear actuator for this (~1 cm), controlled by the PC. I guess getting hold of some Lego Mindstorms parts is probably by far the cheapest way to do this - controlling it depending on what messages come in from dmesg - and letting the whole thing run for two days... (and hoping the SATA connectors don't wear out...)
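
Before I build anything, one software-only thing I can still try is the kernel's SCSI hotplug interface -- though it only re-enumerates the device and does not cut power, so it may well not get the frozen firmware out of its state (sdb and host4 are placeholders for my setup):

# drop the frozen device from the SCSI layer...
echo 1 > /sys/block/sdb/device/delete
# ...then ask the controller port to rescan and re-attach whatever it finds
echo "- - -" > /sys/class/scsi_host/host4/scan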

Furthermore, the filesystem layer will make getting all of your data back very, very difficult. Let's say you have a single source code file that's about 200KBytes in size. The source is not going to be linearly stored across the medium; it may be scattered all over the image. You will have to piece it together, piece by piece, until you get what you want. ext4 makes this even more difficult. You'll have to do this procedure *per every single file*. Using grep will not suffice -- you will almost certainly have to use strings -a. My recommendation? Don't use any of those tools. Use a hex editor.

Well I have done this before (after accidentally deleting a file... (ext3)) and got the data that I wanted. Admittedly, it's not that convenient.

The important thing to take away from this is to do backups no matter what. Buy yourself a 1TB hard disk ($50-60 at most) and do backups to it regularly. I do my backups automatically every 24 hours. I figure losing a day's worth of work is a lot better than losing months, possibly years.

I have some unused 2TB hard drives at home... that's not the problem.

Thinking about it, I originally did not intend to put any important data on that (external) SSD - just an OS that can be booted from any PC you attach it to. This was so convenient (apart from power consumption being critical with some USB/eSATAp ports) that, from time to time, emails, semi-important passwords (both encrypted), working copies of source code etc. materialized on that drive...