TBrud
New Contributor I

RAID 0 Performance Settings for RST

Hello,

I have a NUC6i7KYK... I recently installed two 512GB Samsung 960 Pros in a RAID 0 volume and reinstalled Windows 10 Pro from scratch. All seemed to go well.

After Windows 10 was installed and updated, I also installed the latest Intel RST software.

I ran Crystal Disk Mark on the system expecting to see significantly improved performance over the results I saw with my single Samsung 960 Pro setup, but I didn't. As a matter of fact, every time I run CDM, it hangs my system. It seems to hang when the testing finishes the READ tests and moves on to the first sequential WRITE test.

Although I see a 1TB volume, I'm concerned something isn't set up properly. Any time I run Crystal Disk Mark now, I need to power-cycle my computer to get it out of the hung state. Before the RAID 0 setup, CDM ran like a champ.

Any suggestions for me?

//Brew

n_scott_pearson
Super User Retired Employee

Yes, I have a suggestion: Don't waste your time using RAID; it is NOT going to improve your SSD throughput.

Consider:

  • The transfer rate of a PCIe 3.0 lane is 8 GT/s. The theoretical throughput of a x4 PCIe 3.0 connection is 3.94 GB/s.
  • The 4 PCIe lanes routed to each of the two M.2 connectors come from the Platform Controller Hub (PCH) component.
  • The CPU communicates with the PCH via a DMI 3.0 link. This link is equivalent to four PCIe 3.0 lanes; that is, its overall theoretical throughput is also 3.94 GB/s.
  • The throughput of the DMI 3.0 link is used by the PCH to support *many* interfaces. These include: the two x4 PCIe M.2 connectors, the x1 PCIe M.2 connector for WiFi, the x1 PCIe lane for the SD card controller, the x1 PCIe lane for the GbE controller, the USB-C (USB 3.0) port, six USB 3.0 ports, two USB 2.0 ports, two SATA 6.0 Gb/s ports, the LPC bus (connection to the Super I/O component), the SPI bus, and the I2S connection to the audio CODEC.

Needless to say, with all of these devices and interfaces vying for the throughput of the DMI 3.0 link, there is little chance of getting the theoretical 3.94 GB/s to even one M.2 NVMe SSD, let alone the bandwidth required for overlapped operations to two M.2 NVMe SSDs. Bottom line: enabling RAID is a complete waste of time. It is NOT going to improve your throughput.
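For anyone who wants to sanity-check the 3.94 GB/s figure, here is a quick back-of-envelope sketch in Python. The only assumptions are PCIe 3.0 signaling at 8 GT/s per lane with 128b/130b line encoding, and treating DMI 3.0 as equivalent to a x4 PCIe 3.0 link:

```python
# Theoretical throughput of a x4 PCIe 3.0 (or DMI 3.0) link.
# Assumptions: 8 GT/s per lane, 128b/130b encoding, 4 lanes.
TRANSFERS_PER_S = 8e9   # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130    # 128b/130b line-encoding efficiency
LANES = 4               # x4 link

bytes_per_s = TRANSFERS_PER_S * ENCODING * LANES / 8  # 8 bits per byte
print(f"Theoretical x4 PCIe 3.0 throughput: {bytes_per_s / 1e9:.2f} GB/s")
# -> 3.94 GB/s, and on this NUC that budget is shared by everything
#    hanging off the PCH, not just the two M.2 SSDs.
```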

Sorry, but this is reality...

...S

TBrud
New Contributor I

Thanks for the technical explanation. That definitely helps me understand the limitations.

My primary reason for the RAID 0 array was having a single 1TB volume to run Windows on... I hate having two drives. Any performance improvement was a secondary benefit.

What concerns me most is why the disk tool is hanging my system, not so much the reported values.

If I can expect to have system stability issues with this kind of setup, I'll convert back to two individual volumes in a heartbeat.

//Brew

TBrud
New Contributor I

I MAY have found the stability issue... I was trying to tweak the RST settings and had changed the setting below to DISABLED. Once I switched it back to ENABLED, my system seems 100% stable. But, as mentioned previously, if I can believe the Crystal Disk Mark scores, my system is SLOWER in a RAID 0 setup than as individual drives. I'm leaning towards just going back to an individual setup.

n_scott_pearson
Super User Retired Employee

That's my recommendation.

...S

TBrud
New Contributor I

Took your advice and rebuilt the system. Thanks for all the feedback.

idata
Community Manager

I only get these scores with a RAID 0 volume of two (2) 512GB 960 Pros.

idata
Community Manager

That's on a 2015 gaming laptop; a new desktop, "floored", shows the same thing.

n_scott_pearson
Super User Retired Employee

Yes, this is because of the limitations of the DMI Bus connecting the processor to the PCH.

...S
