
Slow write performance with RAID5 on Z170 and Z270 chipsets

TGołę
Novice

Hello community, here's the short story:

I had a fully functional RAID 1 array built from 2 x ST4000DM004 drives, with a Vertex 3 60 GB SSD as cache. Z170 chipset, 3,815,448 MB capacity.

I migrated to RAID 5 to expand the array. Now it's 3 x ST4000DM004 without the SSD cache, 7,630,891 MB capacity. And now a problem has occurred.

 

Write performance is awful: 5-10 MB/s during sequential transfers. Read performance is sometimes good, sometimes bad. The volume mostly holds large files.

With the SSD cache ON, I had several data-loss incidents, so I disabled it.

With a newer version of the BIOS and Intel RST, I had several more data-loss incidents, including a FULL restore from backup of my system NVMe drive (which is not part of the RAID array, by the way). So I rolled back the BIOS and fully restored the OS.

Currently, only the Intel RST read-only cache is enabled, with no buffer flushing enabled. I'm running the following versions:

Intel RST software 15.8.1.1007 (latest available: 16.8.0.1000, or 15.9.x for Windows 8)

MSI Z170 Gaming M5, BIOS ver. 7977v1G (latest available: 7977v1I)

3 x ST4000DM004, all in perfectly fine condition (tested separately under Linux: no errors, no performance drops; a quick scripted check is sketched below), FW ver: 0001

Windows 8.1 64-bit
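For anyone who wants to repeat the per-drive check mentioned above, it can be scripted in a few lines. A minimal sketch, assuming a Linux test bench with smartmontools installed; the /dev/sdX paths are placeholders, not my actual setup:

```python
import subprocess

# Minimal sketch: run a SMART health check on each member drive with
# smartctl (smartmontools). Device paths are placeholders for wherever
# the drives land on the test bench.
for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc"):
    result = subprocess.run(
        ["smartctl", "-H", dev],  # -H: overall health self-assessment
        capture_output=True, text=True,
    )
    # The verdict is the last non-empty line of smartctl's output
    out = result.stdout.strip()
    verdict = out.splitlines()[-1] if out else "no output"
    print(f"{dev}: {verdict}")
```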

 

The problem arises when a newer version of RST and/or the BIOS is installed. Data loss occurs even on the single system disk (this is really worrying), even though it's a perfectly fine NVMe device. The system hangs randomly.

I believe this has something to do with device timeouts, because entries start to appear in the system log: device has been reset, NTFS flush errors, or similar. High response times occur while accessing the volume.
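If anyone wants to check their own logs for the same entries, the built-in wevtutil tool can pull them. A rough sketch; the provider names ("disk", "Ntfs", "iaStorA") are my assumptions for the usual disk, NTFS, and Intel RST driver sources:

```python
import subprocess

# Rough sketch: pull recent System-log entries from the providers that
# typically report "device has been reset" and NTFS flush failures.
# Provider names are assumptions; iaStorA is the usual Intel RST
# driver service name.
def recent_events(provider: str, count: int = 20) -> str:
    query = f"*[System[Provider[@Name='{provider}']]]"
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{query}",
         "/f:text", f"/c:{count}", "/rd:true"],  # /rd:true = newest first
        capture_output=True, text=True,
    )
    return result.stdout

for source in ("disk", "Ntfs", "iaStorA"):
    print(f"--- {source} ---")
    print(recent_events(source))
```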

 

To figure out whether this is an OS or motherboard problem, I tested the same configuration by installing a new, clean Windows 10 64-bit on a Z270 motherboard and moving the RAID 5 members over. The degraded write performance persists. No errors are logged (NTFS flush, device reset), but the high response times remain.
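For reference, a sequential-write measurement like the one above can be scripted in a few lines. A minimal sketch; the D:\ path is a placeholder for the RAID volume, and this is not necessarily the exact tool anyone in this thread used:

```python
import os
import time

# Minimal sequential-write benchmark sketch. TARGET is a placeholder
# path on the RAID 5 volume; adjust before running.
TARGET = r"D:\bench.tmp"
CHUNK = 1024 * 1024            # write in 1 MiB chunks
TOTAL = 2 * 1024 ** 3          # 2 GiB total

buf = os.urandom(CHUNK)
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    os.fsync(f.fileno())       # make sure data hit the array, not just RAM
elapsed = time.perf_counter() - start
print(f"sequential write: {TOTAL / elapsed / 1e6:.1f} MB/s")
os.remove(TARGET)
```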

 

There are no such problems when the disks work as standalone or RAID 1 devices, on either Z170 or Z270.

 

I remember from the past that earlier generations of Intel chipsets had no problems with RAID 5, but now there are. Why?

 

Wanner_G_Intel
Moderator
Hello TKavv,

Thank you for your response. We will look into this issue and get back to you soon.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
Wanner_G_Intel
Moderator
Hello TKavv,

In order to troubleshoot this issue, please follow these steps:

1. We recommend testing the array with 3 drives. You mentioned that you have a new build, but if you have used the drives in other systems, it is important to run tools to check their health.
2. Also, make sure the drives are using the latest firmware update.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
TKavv
Beginner

Hi!

 

So is the number of drives the issue? Does the speed drop per drive attached?

 

I already tested the array with 3 drives: similar performance. Two of the drives are new, but all share the same firmware, and individually they perform as expected, around 140-180 MB/s read/write.

I also tried building a RAID 10 array, which performed very well. I reverted to RAID 5, as I need the extra capacity rather than the redundancy.

 

Anything else I should try? Or do I have to accept the speed as it is? It's just so underwhelming.

TKavv
Beginner

And to clarify: all five drives were error-checked and benchmarked before I built the current array, with no problems on any of them.

Wanner_G_Intel
Moderator
Hello TKavv,

Thank you for your response. We will get back to you soon.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
Wanner_G_Intel
Moderator
Hello TKavv,

Based on the information provided, and the fact that you have tested the array with three drives, which is required to test the performance of RAID 5, this behavior is most likely caused by hardware issues with the RST controller.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
MClar19
Beginner

I apologize if this is inappropriate, but I would like to second the OP's issues with RAID 5 and the Z270 chipset. While I have not investigated as thoroughly as the OP has, I can say that RAID 5 performance went down the tubes sometime during 2018. I'm not sure whether it was an Intel driver change, a Windows 10 change, or both, but something really killed the performance of my RAID 5. This is after years and years of using RAID 5 on my personal computer without these issues.

 

For me, the slow write speeds were noticeable, but not SUBSTANTIALLY worse than they were in the past (RAID 5 is slow to write at the best of times). What really killed my system's performance was that the system seemed to be constantly "pinging" the RAID 5 drive (despite there being zero system files or active files on that drive at the time), which made video playback, games, software rendering, you name it, all come out herky-jerky.
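In case it helps anyone chase the same symptom, that kind of background I/O on an otherwise idle array is easy to spot by sampling per-disk counters. A rough sketch using the third-party psutil package; disk names like PhysicalDrive1 are Windows-specific and will vary per machine:

```python
import time
import psutil  # third-party: pip install psutil

# Rough sketch: print per-disk I/O deltas once per second to catch
# background activity on an otherwise idle array. On Windows the disks
# show up as PhysicalDrive0, PhysicalDrive1, etc.
prev = psutil.disk_io_counters(perdisk=True)
while True:
    time.sleep(1)
    cur = psutil.disk_io_counters(perdisk=True)
    for disk, c in cur.items():
        p = prev.get(disk)
        if p is None:
            continue
        dr = c.read_bytes - p.read_bytes
        dw = c.write_bytes - p.write_bytes
        if dr or dw:
            print(f"{disk}: {dr/1e6:.2f} MB/s read, {dw/1e6:.2f} MB/s write")
    prev = cur
```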

 

I'm trying to convert my system over to RAID 1 and abandon RAID 5. I agree with the OP: RAID 5 performance took a nosedive recently, that is not how RAID 5 has historically performed, and the biggest culprits to investigate for this slowdown are Intel's drivers and Windows' configuration.

MClar19
Beginner

And just to close the loop on this: I found that my computer's performance issues (video stutters) were actually due to an unrelated power supply issue and had nothing to do with RAID 5 or the storage driver. My overall computer performance is back to great again (though I still only get about 10 MB/s maximum write speed on my RAID 5 array).

PHala
Beginner

In Device Manager, right-click the RAID volume (under Disk drives) and click Properties. On the Policies tab, enable the "Turn off Windows write-cache buffer flushing on the device" option.

After the reboot, open the RST app, go to the Manage tab, select the RAID volume, and change Cache mode to Write back. Reboot again.

 

Wanner_G_Intel
Moderator
Hello all,

Thank you for your feedback. In order to provide a better resolution to this issue, we will investigate it further to determine whether this is expected behavior or if it was caused by a recent driver release.

It is worth mentioning that we recommend using the Intel® RST driver version provided by your Original Equipment Manufacturer (OEM). If you notice any improvement when using the OEM's drivers, please let us know.

As soon as we have any update about this behavior, we will update this thread.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
Wanner_G_Intel
Moderator
Hello all,

We have not been able to replicate the RAID 5 behavior reported on this thread. For example, we tested an array with 3 drives, and we got 84.2 MB/s average read performance. This issue seems to be related to your environment.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
TGołę
Novice
Solution

After a long struggle with the issue, it turned out to be the SMR drives.

The ST4000DM004 are SMR drives, and the manufacturer (Seagate) never mentioned that they are SMR.

AlHill
Super User

It is a shame that you had to go through this. Toshiba and WD have come clean about their SMR drives, but not Seagate. Here is an article from April 2020 that discusses SMR drives:

https://www.extremetech.com/computing/309389-western-digital-seagate-reportedly-shipping-slow-smr-drives-without-informing-customers

 

TGołę
Novice

A shame I lost over $500 figuring this out myself.

Suspecting low I/O performance, I ordered more disks and did several migrations: RAID1 -> RAID1+SSD -> RAID5 -> RAID5+SSD -> RAID10 -> RAID10+SSD,

while the hidden SMR issue was quietly killing my performance.

Don't get me wrong: these drives work perfectly fine as standalone AHCI devices, but in RAID... just DON'T.
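If you want to see the SMR behavior for yourself, a sustained small-random-write test (roughly what RAID 5 parity updates degenerate into) tends to expose it: throughput collapses once the drive's internal CMR media cache fills. A minimal sketch, with the file path and sizes as placeholders:

```python
import os
import random
import time

# Minimal sketch: hammer a file with small random synchronous writes.
# On a drive-managed SMR disk, throughput typically collapses once the
# internal CMR media cache fills up. PATH and sizes are placeholders.
PATH = "smr_test.bin"
SIZE = 4 * 1024 ** 3           # 4 GiB test file
BLOCK = 64 * 1024              # 64 KiB blocks, parity-update sized

buf = os.urandom(BLOCK)
with open(PATH, "w+b") as f:
    f.truncate(SIZE)           # allocate the file up front
    start = time.perf_counter()
    for i in range(1, SIZE // BLOCK + 1):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.write(buf)
        os.fsync(f.fileno())   # defeat the OS write cache on purpose
        if i % 1024 == 0:      # report every 64 MiB written
            mbps = i * BLOCK / (time.perf_counter() - start) / 1e6
            print(f"{i * BLOCK >> 20} MiB written, avg {mbps:.1f} MB/s")
os.remove(PATH)
```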
