Community
idata
Community Manager
1,378 Views

Consistent failure of RAID 1 volume despite replacing HDs?

I'm at the end of my rope with my Intel RAID, and it looks as though years of data are probably already lost. I'm hoping to find a solution to this problem so I can start replacing my lost data. Any help would be much appreciated.

About 3 months back I installed Windows 7 x64 on my home server (system details at the end of this post). I had been using Intel's RAID for a year or so on Vista, so I simply migrated the volumes to my new OS install. I have a RAID 1 mirror of 964 GB Samsung drives and a RAID 1 mirror of 1.5 TB Seagate drives. The driver I used was Intel MSM 8.9. After a few weeks my computer started rebooting itself randomly. After checking my event logs I tracked it down to a Kernel-Power failure, likely caused by iaStor.sys. In a single day it crashed my server 5 times and caused my 1.5 TB volume to fail. This is where the failure started, and it has never been resolved. I attempted rebuilding, but every time, about 10% in, the same drive would fail or go missing and cause the status to degrade once more. I searched all over these boards for some kind of RAID problem (of the myriad results) that looked like my own. In the process I upgraded my BIOS and upgraded to Intel RST 9.6 after reading about some success with that.

After upgrading to Intel RST 9.6 my Kernel-Power events disappeared and my system became stable. But my 1.5 TB volume was STILL degraded, and the drive would continue to fail each rebuild about 10% in. The next logical step was an HD RMA. I did an advance RMA with Seagate and got another 1.5 TB drive in the mail. After replacing my failing drive with the new HD and attempting to rebuild the volume onto the new drive, it failed again at 10%, but this time it faulted the drive that had always shown healthy before. It was at this point I started getting nervous. This time I gave the newly faulting drive a new SATA cable and moved it to a different port. Again I marked it normal and attempted a rebuild. This time the brand new HD was faulted as missing, degrading the array.
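One pattern in the above is worth spelling out: a RAID 1 rebuild is essentially a sequential block copy from the surviving member onto the replacement, so an unreadable sector on the *source* drive will abort the rebuild at the same offset on every attempt, no matter how many replacement drives are swapped in. A toy sketch of that behavior (hypothetical illustration only, not Intel's actual rebuild code):

```python
# Toy model of a RAID 1 rebuild: sequentially copy blocks from the
# surviving member to the replacement; abort on the first read error.
# Hypothetical sketch -- not Intel's actual implementation.

def rebuild(source_blocks, bad_blocks):
    """Copy source to a new mirror member; return (copied, completed)."""
    replacement = []
    for i, block in enumerate(source_blocks):
        if i in bad_blocks:            # unreadable sector on the source
            return replacement, False  # rebuild aborts here every attempt
        replacement.append(block)
    return replacement, True

# A bad block 10% of the way through the source aborts the rebuild at
# the same point on every attempt, regardless of the replacement drive.
disk = [f"block-{i}" for i in range(100)]
copied, ok = rebuild(disk, bad_blocks={10})
print(ok, len(copied))  # False 10
```

If something like this is what's happening, the member that always showed "healthy" could be the one carrying the bad sector, which would also be consistent with it eventually being faulted itself.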

At this point it would seem the HDs were not the problem. But what is? The very same setup had been completely stable on Vista. The only changes were Windows 7 x64 and replacing older HDs with the new 1.5 TB Seagates. It would appear Intel RAID is simply unable to keep these two volumes working regardless of the HDs involved. Any help with this issue would be appreciated - even just some more steps to try to pinpoint the issue. I'll list pertinent system information below:

OS: Windows 7 x64 Enterprise

Mobo: DFI Infinity P965-S (SB ICH8/R)

BIOS: Phoenix 09/01/2008

 

1st Volume: RAID1 w/ 965GB Samsung HD103UJ (Ports 0 & 1)

2nd Failing Volume: RAID1 w/ 1.5TB Seagate ST31500341AS (Ports 3 & 5)

3rd HD for OS: WD WD500DAAKS-D0YGA0 (Port 2)

RAID Driver: Intel RST 9.6.0.1014 (Port 4)

I can provide any further info if requested. Again, any kind of help or direction would be appreciated. Thanks.

- SeanG

5 Replies
idata
Community Manager

Any reason why there hasn't been a response to this question? I'm having a very similar problem. I have two 2 TB drives in a RAID 1 configuration. The difference with my issue is that I can get the array to rebuild, but once rebuilt it's only a matter of time until the array breaks again. It doesn't show that a single drive is the problem; rather, the array is degraded and both drives report "an error has occurred".

Sean - I'm not sure what you're using to power your drives but I have heard that Intel suggests at least a 550W PSU if you're using RAID. Just something to try, but I would imagine you probably have a larger PSU than that.

I'm using multiple identical Intel machines at various locations and it's hit or miss which ones show RAID issues using identical hardware.

Mobo - Intel DQ57TM (Latest BIOS version)

I'm running Windows 7 X64 Pro

idata
Community Manager

I did manage to get this phenomenon to stop occurring, but only after I had given up on half the information on the drives. I had to delete the RAID array and then recreate it. Not technically difficult, but it's exactly the kind of data loss RAID is designed to avoid.

idata
Community Manager

SeanG, I have a similar problem. I've replaced three new WD1002FBYS 1 TB RAID Enterprise drives on SATA port 0, one every 3 to 4 months, and I still get:

"RAID volume degraded. A hard drive in the RAID volume has failed and the drive should be replaced to restore data redundancy. Click here to identify the failed drive."

Every time, I am able to rebuild the data successfully.

My system config is:

Gigabyte GA-P55A-UD3 (rev. 2.0), BIOS ver. F5, purchased December 2009

4 GB Kingston memory

Intel i5-750, 2660 MHz

2x WD 1 TB SATA HDs, ports 0 and 1

500 W power supply

Windows XP Pro SP3

Do you have any suggestions? You mentioned you removed one RAID drive and lost some data - can you tell me how to remove one RAID HD, so the system just has one HD like normal without RAID, but without losing data?

Thanks,

idata
Community Manager

Rich,

I don't know if you ever resolved your issue, but if you haven't, try updating your BIOS. Intel has released many BIOS updates since this was originally posted. I have built a lot of machines using the same Intel DQ57TM mobo and same RAID mirrors and they seem to be much more stable with the latest BIOS and system drivers.

Also, after your RAID is configured, install the Intel Rapid Storage software driver in Windows. Launch the software, highlight the RAID array, and turn off the caching. If you leave caching on and your computer loses power, you are much more likely to break the array and get errors. You will not get as much performance with caching disabled, but IMO it's worth it.
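The reasoning behind that caching advice can be shown with a toy model: with write-back caching, a write is acknowledged while it still sits in volatile cache, so a power cut mid-flush can leave the two mirror members with different contents. This is a hypothetical illustration only, not how Intel RST is actually implemented:

```python
# Toy illustration of why write-back caching risks mirror divergence on
# power loss: acknowledged writes sit in volatile cache and may reach
# only one member before power is lost. Hypothetical sketch only.

class MirroredVolume:
    def __init__(self, write_back):
        self.write_back = write_back
        self.cache = []                 # volatile -- lost on power failure
        self.disk_a, self.disk_b = [], []

    def write(self, data):
        if self.write_back:
            self.cache.append(data)     # acknowledged before hitting disks
        else:
            self.disk_a.append(data)    # write-through: commit to both
            self.disk_b.append(data)    # members before acknowledging

    def power_loss_mid_flush(self):
        # Simulate losing power after flushing the cache to only one member.
        self.disk_a.extend(self.cache)
        self.cache = []

vol = MirroredVolume(write_back=True)
vol.write("record-1")
vol.power_loss_mid_flush()
print(vol.disk_a == vol.disk_b)  # False: members diverged -> array error
```

With write-through (caching disabled), both members receive the data before the write is acknowledged, so a power cut cannot leave them out of sync in this model - which is the trade-off the advice above is making against performance.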

RParl
Beginner

I'm experiencing the same thing--every 2 months my drives fall out of RAID. Sometimes a rebuild works and other times it reports an actual disk fault and my only choice is to RMA the drives. I actually have 2 systems that have this exact same behavior. In the last 6 months I've fed these two systems 9 drives!
