We're trying to get a RAID array going on a Linux CentOS 5.5 box using five 510 SSDs. We've tried the following combinations, all with the same five drives.
- LSI 9280-8e + SansDigital AS108X configured for RAID10 + spare (latest (April 15) LSI firmware; tried both 2m and 0.5m miniSAS cables)
Results: Under Linux, we could read the entire array as a raw device in an endless loop with no errors. Write tests to the raw device using 'dd' with large block sizes caused the LSI controller to fail a disk (a different disk on different runs) and eventually kill the array. Moving the disks around in the enclosure didn't change the random failure pattern.
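For anyone who wants to reproduce our test method, the read loop above amounts to something like this (a sketch; the 'read_loop' helper and the scratch-file demo at the end are our own additions, and /dev/sdX stands in for your array's raw device):

```shell
#!/bin/sh
# Sketch of the endless raw-read test described above. read_loop is a
# hypothetical helper; /dev/sdX stands in for your array's raw device.
read_loop() {  # usage: read_loop <device-or-file> <passes>
    dev=$1 passes=$2 p=1
    while [ "$p" -le "$passes" ]; do
        # Read the whole device; any I/O error aborts the loop.
        dd if="$dev" of=/dev/null bs=1M 2>/dev/null || {
            echo "read error on pass $p"; return 1; }
        p=$((p + 1))
    done
    echo "completed $passes clean read passes"
}

# Demo on a scratch file; on the real array you would run e.g.
#   read_loop /dev/sdX 100
dd if=/dev/zero of=/tmp/readtest.$$ bs=1M count=2 2>/dev/null
read_loop /tmp/readtest.$$ 2
rm -f /tmp/readtest.$$
```

In our runs this loop completed indefinitely with no errors; it's only the write path that trips the controllers.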
- HighPoint RocketRAID 2722 (upgraded to latest firmware) + SansDigital AS108X configured for RAID10 + spare
Results: Reads worked well, as with the LSI controller. The first serious writes to the array caused disk errors and finally a kernel panic on CentOS 5.5.
- Generic Silicon Image SiI3124 SATA controller (upgraded to latest firmware) with a five-disk eSATA JBOD enclosure from Fry's
Results: Reads worked well, and we were able to create a Linux software RAID partition, make a file system, and mount it. Running 'dd if=/dev/zero of=zeros bs=100M count=5 oflag=direct' caused the disks to go offline (again with no pattern as to which disk fails).
In summary, all of the 510s we received seem to work fine under heavy read traffic and modest write traffic. Under serious large-block write traffic, something goes wrong across three different controllers, two different enclosures, and three different cable types.
Does anybody out there have an array of 510s under Linux and is willing to try a loop of
dd if=/dev/zero of=/path/to/raid/array/zeros bs=100M count=5 oflag=direct
and see if your array survives?
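If it helps, here is one way to script that loop (a sketch; the 'stress_write' helper, the pass count, and the scratch-file demo are our own additions -- point the target at a file on your array):

```shell
#!/bin/sh
# Sketch of the requested write-stress loop. stress_write is a
# hypothetical helper; against a real array you would run e.g.
#   stress_write /path/to/raid/array/zeros 10 100M 5 oflag=direct
stress_write() {  # usage: stress_write <target> <passes> <bs> <count> [dd-flags]
    target=$1 passes=$2 bs=$3 count=$4 flags=$5 p=1
    while [ "$p" -le "$passes" ]; do
        dd if=/dev/zero of="$target" bs="$bs" count="$count" $flags 2>/dev/null || {
            echo "write failed on pass $p -- check dmesg for dropped disks"
            return 1; }
        p=$((p + 1))
    done
    echo "survived $passes write passes"
}

# Demo on a scratch file with small sizes (oflag=direct omitted here so
# the demo also works on filesystems without O_DIRECT support):
stress_write /tmp/zeros.$$ 2 1M 1
rm -f /tmp/zeros.$$
```

If a pass fails, the dmesg output around that moment (which disk dropped, any SATA link resets) would be the most useful thing to compare against what we're seeing.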
Before we declare these drives defective by design and return them, we're open to suggestions or ideas.