hi---
i have an SS4200-E with 4x WD Caviar Green 1TB drives in it, running the latest firmware.
before i pull the drives and try the LVM2/mdadm route in a linux box (roughly what's sketched below), i'd like to see if there's an easier solution.
the drives appear intact and the system reports normal operation.
however, when i try to read data from the drive, after a few minutes the system crashes with no indication other than the disk-activity LED going off. the system is unresponsive except to a hard power-off. there is no indication of a drive failure on the front panel (all LEDs other than the disk-activity LED are on/blue). if i had to guess, i'd say it happens faster over a fast network connection; not sure if that means it's losing network or drive (NCQ?) requests and failing to recover properly.
i'm accessing it from a mac (os-x: 10.5 on a powerpc, 10.6 on a mobile nehalem). i could try a windows box, but at that point using LVM2/mdadm might actually be easier....
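for reference, here's roughly the recovery path i have in mind for the linux box --- just a sketch; the device names, partition numbers, and VG/LV names below are guesses until mdadm/lvm report the real ones:

# confirm the partitions are MD RAID members
mdadm --examine /dev/sd[abcd]1
# assemble the array read-only so nothing is written to the superblocks
mdadm --assemble --readonly /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# the LVM2 PV should then appear on /dev/md0 rather than the raw partitions
pvscan
vgchange -ay
# mount the logical volume read-only and copy the data off
mount -o ro /dev/VolGroup/LogVol /mnt    # placeholder VG/LV names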
thanks in advance; best,
---K
If you haven't already tried resetting to factory default, give that a go. It won't erase the RAID data.
http://www.intel.com/support/motherboards/server/ss4200-e/sb/CS-028587.htm
You can also find directions on obtaining and reading the dump file at
http://www.intel.com/support/motherboards/server/ss4200-e/sb/CS-031424.htm
i followed the instructions and got a dump file before the raid died. the dump shows LVM2 dying from inconsistent data structures. i've got the drives installed in a linux server and am trying to figure out how to mount them and see if the data's still intact, but mdadm/lvm don't seem to recognize them --- still working on figuring that part out (what i'm poking at is sketched below).
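in case it helps, here's roughly what i'm running to inspect the drives --- a sketch only; the device names are assumptions:

# look for MD superblocks on the whole disk and the partition
mdadm --examine /dev/sda
mdadm --examine /dev/sda1
# see what LVM recognizes as physical volumes
pvscan
pvs -v
# check the partition table and peek at the raw header
fdisk -l /dev/sda
dd if=/dev/sda1 bs=512 count=4 2>/dev/null | hexdump -C | head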
any other suggestions? thanks!
best,
---K
=================================================================================
Apr 14 15:41:15 DungeonStorage _0_ Engine: plugin_user_message: Message is: LVM2: Object sdd1 has an LVM2 PV label and header, but the recorded size of the object (5860559616 sectors) does not match the actual size (1953520002 sectors). Please indicate whether or not sdd1 is an LVM2 PV.
If your container includes an MD RAID region, it's possible that LVM2 has found the PV label on one of that region's child objects instead of on the MD region itself. If this is the case, then object sdd1 is most likely NOT one of the LVM2 PVs.
Choosing "no" here is the default, and is always safe, since no changes will be made to your configuration. Choosing "yes" will modify your configuration, and will cause problems if it's not the correct choice. The only time you would really need to choose "yes" here is if you are converting an existing container from using the LVM2 tools to using EVMS, and the container is NOT created from an MD RAID region. If you created and manage your containers only with EVMS, you should always be able to answer "no".
If you answer "no" and your volumes are correctly discovered and activated, you may disable this message in the future by editing the EVMS config file and setting the device_size_prompt option to "no" in the lvm2 section.
Apr 14 15:41:15 DungeonStorage _0_ Engine: plugin_user_message: Answer is: "No, it is not a PV."
Apr 14 15:41:15 DungeonStorage _0_ Engine: plugin_user_message: Message is: LVM2: Object sda1 has an LVM2 PV label and header, but the recorded size of the object (5860559616 sectors) does not match the actual size (1953520002 sectors). Please indicate whether or not sda1 is an LVM2 PV.
If your container includes an MD RAID region, it's possible that LVM2 has found the PV label on one of that region's child objects instead of on the MD region itself. If this is the case, then object sda1 is most likely NOT one of the LVM2 PVs.
Choosing "no" here is the default, and is always safe, since no changes will be made to your configuration. Choosing "yes" will modify your configuration, and will cause problems if it's not the correct choice. The only time you would really need to choose "yes" here is if you are converting an existing container from using the LVM2 tools to using EVMS, and the container is NOT created from an MD RAID region. If you created and manage your containers only with EVMS, you should always be able to answer "no".
If you answer "no" and your volumes are correctly discovered and activated, you may disable this message in the future by editing the EVMS config file and setting the device_size_prompt option to "no" in the lvm2 section.
Apr 14 15:41:15 DungeonStorage _0_ Engine: plugin_user_message: Answer is: "No, it is not a PV."
Apr 14 15:41:15 DungeonStorage _5_ Engine: initial_discovery: Discovery time: 00:00:00.046646
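p.s. if i'm reading the numbers right, the size mismatch in the log actually makes sense: 3 x 1953520002 = 5860560006 sectors, which is within a few hundred sectors of the recorded 5860559616 --- i.e. the recorded PV size matches three data disks' worth of a 4-drive RAID5 (minus metadata overhead), so the PV presumably belongs on the assembled MD device, not on the member partitions. a quick way to check (device names assumed; /dev/md0 only exists after assembly):

# size of one member partition, in 512-byte sectors
blockdev --getsz /dev/sda1     # expect ~1953520002, the "actual" size in the log
# size of the assembled array, in 512-byte sectors
blockdev --getsz /dev/md0      # expect ~5860559616, the "recorded" PV size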