I have an SS4000-E running a RAID 10 on which one disk was about to fail.
Its LED turned yellow; I rebooted the unit, and after some hours it was fine again and the LED was green.
Since the unit had been running for some years, and last year I replaced another disk that failed, today I decided to
replace all the disks, starting with the one that recently failed.
So I replaced that disk ONLY and rebooted the unit.
Now, the worst part is that it sees ALL disks as NEW!
Putting the old disk back in place of the new one also results in the "Disk changed" notification screen,
on which all disks are seen as NEW, even though all serial numbers correspond to the old disks and all are in their
correct slots (I replaced just one disk).
How can I bring back my original configuration and (VERY hopefully) my data?
Many thanks for your support.
If your firmware version is 1.3 or lower, you may want to try the http://www.intel.com/support/motherboards/server/ss4000-e/sb/CS-033844.htm loss-of-system-access recovery script and the recovery script use instructions.
I have 1.4 build 10.
While waiting for a solution from Intel, I tried to read the disks using a Linux box.
I have a PC running Lubuntu, and when I plug in an old disk from the NAS (the one I replaced last year),
it sees the disk as part of a RAID 10 but says it cannot be mounted because the other
disks are missing (correct).
But that PC doesn't have enough SATA ports to plug in two or more disks of the array,
so today I'll look for a multiport PCI SATA card.
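For anyone in the same situation: inspecting a single disk the way described above can be done without writing anything to it. This is only a sketch; the device names below (/dev/sdb, /dev/sdb1) are assumptions, so check lsblk or dmesg for the name your system actually assigned to the disk.

```shell
# List block devices to find which name the NAS disk got.
lsblk

# Read-only inspection of the md (software RAID) metadata.
# --examine only reads the superblock; it never modifies the disk.
sudo mdadm --examine /dev/sdb     # whole-disk view (may report no superblock)
sudo mdadm --examine /dev/sdb1    # per-partition view: Array UUID, raid level,
                                  # device role, and event count
```

The "Array UUID" and "Events" fields from --examine are the useful parts: members of the same array share one Array UUID, and matching event counts mean the members are in sync.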
While waiting for a solution from the Intel forum/staff, my questions are:
-has anybody succeeded in reading the NAS array using any Linux distro?
-is there any NAS/RAID recovery tool for Linux (or Windows) I can use?
-I tried using mdadm; it sees the partitions and reports useful info about
the RAID status, but I really FEAR using any command other than --examine,
since I have never used this tool before, I don't know it, and I'm afraid of
messing up the situation further.
-so, does anybody know how to use mdadm in this situation?
Many thanks for any info.
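Not an official answer, but a cautious sketch of how mdadm is typically used for this kind of read-only recovery once enough disks of the array are attached. Everything here is hedged: the device names (/dev/sdb1 through /dev/sde1), the partition layout, and the array name /dev/md0 are assumptions; substitute whatever --examine actually reports on your disks. The key safety points are --readonly on assemble and -o ro on mount, so nothing is written to the members.

```shell
# 1. Confirm which partitions belong to the same array: the Array UUID
#    reported by --examine must match on all members (read-only check).
for p in /dev/sd[b-e]1; do
    sudo mdadm --examine "$p" | grep -E 'Array UUID|Events'
done

# 2. Assemble the array read-only so mdadm does not update any metadata.
#    RAID 10 can start degraded as long as at least one disk of each
#    mirror pair is present.
sudo mdadm --assemble --readonly /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# 3. Mount the filesystem read-only and copy the data somewhere safe.
sudo mkdir -p /mnt/nas
sudo mount -o ro /dev/md0 /mnt/nas
```

If --assemble refuses because event counts differ between members, stop and ask before trying anything stronger (e.g. --force), since those options can overwrite metadata. Working on dd images of the disks instead of the disks themselves is the safest route of all.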