A computer running 64-bit Windows 7 off a partition on a two-drive RAID-0 array built with Intel Rapid Storage Technology BSODed during video playback. Everything had worked fine for approximately six months. On the first two restarts, the RAID driver did not detect any hard drives connected to the system. On the next restart it detected both hard drives and recognized the array. Windows booted but BSODed again when Firefox was opened and attempted to restore its lost tabs. On the next and several subsequent restarts, the RAID driver detects both hard drives but recognizes only one as a member of the array, labeling the other 'ERROR OCCURRED (0)'. Windows boots, but Intel RST reports the array as failed and the second drive as inaccessible. Nevertheless, all files are fully accessible and the computer appears to have full functionality.
Before examining the issue further, I connected an identical pair of drives, used Intel RST to construct a new RAID-0 array on them, then cloned the old volume to it using a bootable Acronis Disk Director CD. The clone completed without errors. With the old pair of drives disconnected, the RAID driver detects the new pair fine, Windows boots, and Intel RST reports no errors. The computer seems fully functional.
So what kind of error can cause a drive to be simultaneously accessible and inaccessible, cause BSODs yet have no problem reading out its entire 1 TB of contents, be alternately detectable and undetectable, and even affect whether another drive in the system is detected?
I have the same problem. I can ddrescue the faulty drive onto another drive, but when I add the new drive to the RAID, it says the drive doesn't belong to the RAID. How can I "fool" the RAID into accepting the new drive? And if these are just bad sectors, is there any way to force the RAID to mount the volume, even with some errors, so the data can be copied off it?
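For the sector-level copy step, a common approach (a general sketch, not something confirmed by anyone in this thread) is GNU ddrescue with a map file, so interrupted runs can resume and bad areas get retried instead of aborting the whole copy. The device names below are placeholders for your actual source and destination drives, and exact flag names vary slightly between ddrescue versions:

```
# Assumed layout: /dev/sdb = failing RAID member, /dev/sdc = replacement.
# rescue.map lets ddrescue resume and records which sectors failed.

# Pass 1: grab everything readable, skipping quickly over bad areas.
ddrescue --no-scrape /dev/sdb /dev/sdc rescue.map

# Pass 2: go back and retry only the bad areas a few times.
ddrescue --retry-passes=3 /dev/sdb /dev/sdc rescue.map
```

On the "doesn't belong to the RAID" question: a whole-device clone like this also copies the Intel RAID metadata stored near the end of the disk, but as far as I know that metadata records the serial numbers of the member drives, so the controller may still refuse a cloned replacement — which would explain the behavior you're seeing.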
I also have this problem. I have 4 OCZ Agility 3 60GB SSDs in a RAID 0, and the 4th one reports as "Error Occurred (0)" and the array is marked as FAILED. This array is the boot volume and the only storage volume on the system, yet the system still boots and functions. I guess I should be glad, but what the heck has happened? I'm guessing that while an error occurred, the SSD handled it in some non-fatal way and allowed the system to continue functioning, yet the RST driver thinks that the malfunction was fatal and expects the array to be dead. How can I inform RST that everything is okay and reset the status to NORMAL? Also, it would be nice to get some additional info on what actually happened on the 4th drive.
I have just had the same problem, which brought me to this forum for the first time. So hi, everybody!
A balloon tip popped up from my notification area saying that my volume had failed and that I should run CHKDSK. No BSODs whatsoever.
Then I restarted my computer and CHKDSK ran automatically without finding any errors.
I'm using a Corsair 60GB SSD as a cache for a 500GB ordinary HDD. The whole system is about four months old and right now seems to be fully functional; acceleration also appears to be working fine, but the Intel RST software keeps reporting a failed disk.
Upon booting the computer, the screen information right after the mobo splash screen doesn't give me any warnings; it says what it has always said, listing the SSD as the cache disk and the others as non-RAID volumes.
Is this some kind of bug, or should I be really concerned? Should I deactivate acceleration and mark the volume as normal?
Here is a screen capture of the Intel RST software:
Thank you very much.
I had a similar problem. A database program crashed, and later a BSOD occurred while running the database again. At restart, a failure on RAID drive 1 was reported in the BIOS (Intel RAID option BIOS 8 on an ASUS Gene III) and then in the RST manager after boot. Strangely, it says the drive is 'not accessible', although the array boots and all the files seem to be there. I clicked 'set to normal' in the RST manager to clear the error and ran a check-disk repair on another reboot. The failure has not reappeared in the BIOS or the RST manager. I tried upgrading the driver from 10 to 12 in Device Manager, but the RST manager service would not start, so I reverted to driver 10. Setup: Windows 7 64-bit, RAID 0 across two 1 TB drives for the C: boot volume, with the single partition encrypted with TrueCrypt. It has been running about five days a week for three years. Hopefully any bad sector is already locked out on the controller.
I have the same issue with 'ERROR OCCURRED (0)' in the BIOS Intel RST screen on my RAID-0 stripe, but my system is still functioning [mostly] normally. The only difference is that I am running Windows 8.1, and there doesn't seem to be an Intel RST app for Windows 8+. This is the ONLY thread I've found anywhere on the Internet that even comes close to my issue.
Earlier this week, my system started to lag and freeze for a bit before working again. This definitely seemed like a hardware issue, and sure enough, I noticed the drive error during boot, though the RAID volume status is normal. I purchased a replacement drive and was able to capture a system image to restore once the RAID array is rebuilt.
Are there any Intel tools to determine exactly what the error is? Perhaps the drive is recoverable and usable? (Though I wouldn't re-use it for this purpose.) Or would I have to break apart the RAID array and use a tool from the drive OEM?
Bear in mind that RAID 0 has no redundancy, so any modification to the current status will break the structure and the data will no longer be accessible.
I regret to inform you that the Rapid Storage software does not offer any option to see what "Error Occurred" means. It may be just a SMART error, in which case you may need to clear the SMART status through the BIOS, or the drive may be going defective.
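One way to get more detail than that generic status (an option outside RST, not something its UI provides) is smartctl from the smartmontools package; under Intel RST the member drives are usually still visible as plain SATA devices. The device name /dev/sda below is a placeholder, and smartmontools must be installed:

```
# Overall health self-assessment (PASSED/FAILED):
smartctl -H /dev/sda

# Full attribute dump; the attributes most relevant to bad sectors are
# 5 Reallocated_Sector_Ct, 197 Current_Pending_Sector and
# 198 Offline_Uncorrectable:
smartctl -A /dev/sda

# Optionally run a short self-test, then read back the result:
smartctl -t short /dev/sda
smartctl -l selftest /dev/sda
```

Non-zero raw values on attributes 5, 197, or 198, or a failed self-test, would point to the drive itself going bad rather than a one-off glitch in the RST driver.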
Thank you Allan,
I won't push my luck with this old drive. It will get replaced tomorrow and the RAID-0 stripe rebuilt and re-imaged.
Fortunately, I was able to get enough early warning to take evasive action and avoid data loss, though that may not always be the case.