I have a problem with an Intel RAID SRCSASBB8I controller that the local reseller is unable to help me with.
The problem is that the controller sometimes throws hard drives out of the array, marking them as Unconfigured Bad.
The event log has entries "Drive is degraded" or "Predictive failed".
I already replaced the controller and all 8 hard drives. I had Hitachi HDE721010SLA330 drives initially, which were rejected from the array almost every week. I recently changed them to Seagate ST31000340NS drives. They worked fine for two months, and yesterday two of them left the array.
Has anyone had a similar situation or heard anything about this?
Intel support is useless.
Thank you for quick reply. We have the following configuration:
Raid controller: SRCSASBB8I (Firmware version is 1.40.92-0746, latest) with BBU AXXRSBBU6
HDD: Seagate ST31000340NS (Firmware SN06, latest)
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
There is no backplane in our server.
The other interesting problem with this server is that both ethernet ports do not work at 1Gb with a Linksys SRW2048 switch, but work fine with a SOHO D-Link switch.
Okay, so you're connecting your HDDs directly, correct? Controller SFF8087 port with a 4-connector fan-out cable plugged into the HDDs? How many HDDs? Is it random drives that get the degraded or failure errors, or the same drives? Could there be vibration issues affecting the drives? What drive types were you using previously? Is your power supply adequate to support all the system components?
Can you use the CMDTool2 utility and provide a RAID log using the following command line?:
CmdTool2 -AdpAliLog -aALL > Raidlog.txt
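For anyone following along: CmdTool2 generally mirrors LSI MegaCLI syntax, so (as an assumption to verify against your CmdTool2 version) a few related spot checks alongside the full log dump might look like this:

```shell
# Dump the full adapter log, as requested above
CmdTool2 -AdpAliLog -aALL > Raidlog.txt

# Assumed MegaCLI-style equivalents for quick checks:
CmdTool2 -PDList -aALL        # per-drive state and media/error counters
CmdTool2 -LDInfo -Lall -aALL  # logical drive (array) status
CmdTool2 -AdpEventLog -GetEvents -f Events.txt -aALL  # controller event history
```

The -PDList output is the first place to look for rising Media Error or Predictive Failure counters on the drives that get kicked out.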
Our chassis is SC5600BRP and it's equipped with two power supply units 750W each for redundancy.
Initially we had 8 Hitachi HDE721010SLA330 drives. They were randomly marked as failed almost every week.
We were getting hard drives rejected from the array with RAID5 and RAID10 under Windows Server, Linux, and VMware.
We were told to send the controller for replacement. That didn't help. Then we changed hard drives to Seagate ST31000340NS.
RAID10 worked fine for about two months under VMware ESX 4.1.0. Then one of the hard drives failed. We took that drive out on 9/15. On the next day another hard drive failed (9/16), but we didn't notice that, as it was a spare drive. Two days later (9/18) one more hard drive left the array. The log file is attached. CmdTool2 doesn't work under VMware ESX, so I used lsi_log to dump the log file.
I hope you are well today.
Could you please let me know whether I can expect an answer from you?
We got another two hard drives thrown out of the array when we unplugged the server from one switch and plugged it into another.
We had to disconnect the BBU unit to get those drives back. The controller didn't see them with the BBU attached.
Setting the cache to WRITE THROUGH didn't help.
I have the same problem with the same controller. I have two RAID1 arrays using Seagate ST3146356SS disks attached directly to the controller. The motherboard model is S5500BC.
I changed caching to WRITE THROUGH in the meantime. The server has been working OK for two weeks already. However, with the Seagate disks the array is much more stable.
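For reference, if your controller accepts MegaCLI-style syntax through CmdTool2 (an assumption worth verifying with CmdTool2 -h), switching the cache policy to write-through and confirming it could look like:

```shell
# Force write-through on all logical drives (assumed MegaCLI-style syntax)
CmdTool2 -LDSetProp WT -LAll -aALL

# Verify the current cache policy afterwards
CmdTool2 -LDInfo -Lall -aALL
```

Note that write-through sidesteps the BBU-backed write cache entirely, which is consistent with the earlier observation that drives came back only after the BBU was disconnected.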
It is a pity that Intel can't provide normal support for this case. We have been struggling with the problem for more than a year now.