I have a boot SSD, a WD 500GB HD, 3 Samsung 2TB drives, and a DVD attached to ports 0 through 5 (in the listed order) on an ASUS P6T motherboard.
Windows 7 64 Enterprise is installed.
RST console 10.1 does not see anything.
In fact, when I click "Manage" it crashes!
When I use the ROM utility to create a RAID5 with the Samsung drives, RST sees this RAID, but none of the other devices; however the individual disks comprising the RAID are listed as 0GB and unknown port/location.
Additionally, the speed of the RAID is very slow: HDDScan reported read speeds of something like 50MB/s, when any individual disk alone is capable of over 100MB/s.
Something is wrong here.
Well, for anyone googling this with a similar problem, here is half of an answer...
When I installed the OS, the "Storage Configuration" BIOS setting was "AHCI", not "RAID".
This was because at first I was using 2 of the Samsung disks in a Windows Dynamic Disk software RAID0, but now that I've got a 3rd one I wanted a RAID5.
I would have just converted over to a software RAID5, but Microsoft, in their eternal wisdom, has decided that no one other than Windows Server users gets that.
My only choice was to backup what I had and switch to using the ICH9 motherboard RAID5.
So I switched to the "RAID" type of "Storage Configuration" in the BIOS and rebooted.
Switching from standard IDE mode to AHCI or RAID mode will usually result in a BLUE SCREEN if the boot disk is on the controller. I have had no problems switching from AHCI to RAID in the past, but this time, although Windows booted fine and could actually see the disks, the Intel Storage Console was not happy (as can be seen in my pics).
The solution was to uninstall RST, update the ICH9 storage controller driver with the latest F6 boot-disk driver, and then reinstall the latest RST.
This is only HALF the answer because the performance could still be an issue; I don't know whether it will be or not, since I can't benchmark the RAID for another day while it runs an "Initialize". Just for kicks, I ran the benchmarks anyway.
Hopefully, when the RAID volume is done initializing, I'll be seeing the 200+MB/sec throughput I'm expecting.
The other "half" of the answer.
The array finished initializing and I am seeing 250MB/sec sequential read test scores in HDDScan - this is what I hoped and expected to see.
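For anyone wondering where the "200+MB/sec" expectation comes from, here's a rough back-of-envelope sketch (my assumption, not a precise model: with rotating parity, a large sequential read streams from all member disks, but one stripe unit per stripe is parity, so usable throughput is roughly (n-1)/n of the raw aggregate):

```python
# Back-of-envelope estimate of RAID5 sequential read throughput.
# Assumption: all spindles stream concurrently, and the parity
# stripe unit in each stripe carries no user data.

def raid5_seq_read_estimate(num_disks, per_disk_mb_s):
    """Rough sequential-read ceiling for an n-disk RAID5 array, in MB/s."""
    raw = num_disks * per_disk_mb_s           # all disks streaming at once
    return raw * (num_disks - 1) / num_disks  # discount the parity share

# 3 Samsung 2TB drives, each good for ~100 MB/s on its own:
print(raid5_seq_read_estimate(3, 100))  # → 200.0
```

The measured 250MB/sec beats this estimate, which is plausible since these drives exceed 100MB/s on their outer tracks.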
For some reason, I still get 50-60MB/sec throughput when doing a verify test.
The HDDScan verify test is supposed to be like the read test, except that verify is supposedly done entirely by the disk without any data being transferred across the data cable (I guess the program says "verify this LBA" and the disk replies with OK or FAIL without transferring any data). I would expect this test to be at least as fast as the read test, since it doesn't involve the overhead of transferring data from the disk to physical memory.

There's most likely a good explanation for this discrepancy, but that'll be an investigation for another time, as everything else looks good, including the write/erase score of a whopping 550MB/sec - gotta be happy with that.