Gigabyte Z370 Aorus Gaming 7 mainboard
i7 8700K CPU
Samsung 960 EVO NVMe hosting the OS (Windows 10 x64 v1709)
3 x Hitachi HDS721010CLA630 (firmware JP4OA41B) drives in RAID 5 as data drive
2 x WDC WD10EAVS-00M480 drives in RAID 1 as backup drive
Intel RST version 126.96.36.1997; I have also tried the later 15.9.*.* drivers provided by Gigabyte
I am currently on a beta BIOS provided by Gigabyte (F5g) but have also used F5e
The problem is that the computer fails to respond when writing to the RAID 5 volume. iTunes backups go to the RAID 5 volume, and randomly during a backup all writing stops and the system partially hangs. Any further attempt to access the drives causes Explorer to hang. The system will not shut down gracefully, and the only option left is to hold the power button. This does not happen every time, but often enough to be a real problem.
These drives were working fine in an old system using RST 7.5. All the data was copied off the drives, the array was recreated on the new system, and the data was copied back (the problem became evident during the copy back). The issue only occurred after the RAID was initialised and write-back cache mode was enabled. I do not have Intel Optane memory installed.
There do not appear to be any errors in the Windows event log. When it happens during a file copy, the write speed drops to 0 while disk activity remains at 100%.
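A minimal way to put numbers on the "write speed drops to 0" symptom is to log per-chunk throughput while writing a large file to the affected volume. This Python sketch is an illustration only: the destination path, chunk size, and total size are assumptions, and a stall would show up as a chunk whose rate collapses toward 0 while the disk stays 100% busy.

```python
import os
import tempfile
import time

def timed_chunked_write(dest_path, total_mb=32, chunk_mb=8):
    """Write `total_mb` MB to dest_path in chunks, returning the
    throughput (MB/s) measured for each chunk."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    rates = []
    with open(dest_path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            start = time.perf_counter()
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the data through the OS cache
            elapsed = time.perf_counter() - start
            rates.append(chunk_mb / elapsed)
    return rates

if __name__ == "__main__":
    # Point this at a file on the RAID volume to test it; a temp file
    # is used here only so the sketch runs anywhere.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        for i, rate in enumerate(timed_chunked_write(path)):
            print(f"chunk {i}: {rate:.1f} MB/s")
    finally:
        os.remove(path)
```

On a hanging volume, one of the per-chunk rates would drop sharply and the `fsync` call would block, matching the symptom described above.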
Thank you for contacting Intel® Communities Support.
I am glad to assist you. Please check the following article on RAID Volume Data Verify and Repair: https://www.intel.com/content/www/us/en/support/articles/000006401/technologies.html
The problem seems to be related to the motherboard RAID Controller.
I ran a verify and repair on the RAID 5 volume; 5 verify errors were found and corrected.
0 blocks with media errors were found. It managed to get through the entire process without freezing, as the PC was not used during the process.
System behaviour remains the same.
I tried to copy 100 GB of data from a USB drive to the RAID 5 volume; it hung at 2%. Same symptoms.
I am also getting "Reset to device, \Device\RaidPort0, was issued" in the event log (iaStorA, Event ID 129).
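For anyone wanting to count how often that reset fires, a rough sketch: export the System log from Event Viewer as CSV and tally the Event ID 129 entries. The column names and the inline sample rows below are assumptions for illustration, not real log data; adjust them to match your export.

```python
import csv
import io

def count_storport_resets(csv_text, event_id="129", source="iaStorA"):
    """Count Event ID 129 ("Reset to device ... was issued") rows in a
    CSV exported from Windows Event Viewer.

    The column headers ("Event ID", "Source") are an assumption about
    the export format; change them if your export differs.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(
        1
        for row in reader
        if row.get("Event ID") == event_id and source in row.get("Source", "")
    )

# Hypothetical two-row export, for illustration only:
sample = """Level,Date and Time,Source,Event ID,Task Category
Warning,2018-04-01 12:00:00,iaStorA,129,None
Information,2018-04-01 12:01:00,Service Control Manager,7036,None
"""
print(count_storport_resets(sample))  # → 1
```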
Thank you for your response.
At this point the problem seems to be the motherboard RAID controller.
Please check with Gigabyte (http://www.gigabyte.us/support) to claim warranty with them.
Removed the drives from the RAID and tested each with the manufacturer's utility. All drives tested OK.
Recreated the RAID 5 array and initialised it.
Changed the RST driver to 188.8.131.522 and copied 1 TB onto the RAID 5 array without incident.
I believe the issue is the newer RST drivers, not the motherboard RAID controller.
Will see how it goes, but I am confident the older drivers will continue to work.
I have the same issue:
ASRock Z370 Pro4 mainboard on the latest 1.50 BIOS
i7 8700K CPU
Samsung 960 EVO NVMe 250GB hosting the OS (Windows Server 2016)
3 x Seagate 8TB NAS drives in RAID 5 as data drive
Intel RST version 184.108.40.2065 (latest)
This system is STABLE if I have Write Cache Buffer Flushing enabled and write-through cache on the RAID volume.
The disk array hangs if I have Write Cache Buffer Flushing disabled and write-back cache on the RAID volume.
I have the same issue on an MSI Raider GE73VR-7RF, 2 x 960 EVO in RAID 0, IRST 220.127.116.115 (provided by MSI).
When testing the disks with CrystalDiskMark (for example) with write-back ENABLED, the system freezes as soon as the write tests initialize. If I leave it all disabled there is no issue. It might be an IRST bug, because with IRST 15.2 (manually downloaded from the Intel site) I don't have any issue, even in the write tests, but read/write performance is worse (about -20%). Intel, please fix it as soon as possible; MSI closed my case and so did Intel assistance.
I also have this issue on a Clevo P17SM-A laptop with two Samsung 840 EVO SSDs in RAID 1, and only since updating drivers.
I also see the same behaviour when I have a RAID 0 volume across a Kingston SMS200S3/120G and a Kingston SMS200S3/480G on mSATA, so this is not limited to Samsung and not limited to SATA devices.
I see the physical disk Current Queue Length go up to about 38, and then the volume stops responding. The queue then drains by 1 every 30 seconds, with a single flicker of the HDD LED; the timing suggests to me that this is a timeout.
Presently the draining has also stopped and the queue sits at 22. At this point I have no option but to forcibly power down.
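That timeout reading can be sanity-checked with trivial arithmetic: if each queued request only completes when a (presumed) per-request timeout fires, a queue of 38 takes roughly 19 minutes to drain at one entry per 30 seconds. A throwaway helper, with the 30 s figure taken from the observation above rather than from any documented setting:

```python
def drain_time_minutes(queue_length, seconds_per_entry=30):
    """Estimate how long a stalled queue takes to drain if each entry
    completes only after a presumed per-request timeout fires."""
    return queue_length * seconds_per_entry / 60

print(drain_time_minutes(38))  # → 19.0 (minutes)
print(drain_time_minutes(22))  # → 11.0 (minutes)
```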
I have the same issue on an ASUS Z97-DELUXE, 4 x TOSHIBA THNSNJ512GCSU in RAID 0, IRST 18.104.22.1685 (provided by ASUS), Windows 10 1709, and also on a brand-new PC: ASUS MAXIMUS X HERO, 2 x Samsung 960 PRO M.2 in RAID 0, IRST 22.214.171.1245 (provided by ASUS), Windows 10 1709.
I thought that the former broke down, so I purchased the latter. However, I was quite surprised that the same symptoms occurred even in the latter, which should be brand new.
Same issue here. Issue:
-Copy speed starts at 125 MB/s, and after a few gigabytes it drops quickly to 20 MB/s (with write-back cache disabled it is 20 MB/s right from the start; I will test whether Explorer hangs then as well)
-Volume utilization is then 100%
-After a while, the speed drops to 0 MB/s and Windows Explorer starts to hang
-Volume utilization remains at 100%
-When rebooting, the system hangs; it will only restart by pressing the hard reset button
-I also noticed the issue when benchmarking the RAID 5 volume with CrystalDiskMark (same symptoms: 100% utilization, Explorer hang)
-When benchmarking the RAID 0 volume, no issues appear
-Performed extended SMART tests on all drives, no issues.
-Seen messages that the RAID 5 volume is protected from a disk failure, while there isn't any disk failure (when I open RST it shows a healthy state)
System specs:
ASRock Z370 Extreme4 motherboard (BIOS version 1.30)
Intel Core i7 8700
2 x Samsung 850 500 GB in RAID 0
4 x Samsung 4 TB hard drives in RAID 5
2 x 8 GB RAM
Software and settings:
Intel RST 126.96.36.1990 (downloaded from the ASRock website)
RAID 5 initialized (took me 2 days)
Data stripe 128 KB
Empty write cache disabled, write-back cache enabled
Windows version: Windows 10 Pro, version 1709
I'll try to update BIOS, and test if the RAID array hangs without write-back cache enabled as well...
Just updated my BIOS to 1.60. Copies of large files are MUCH more stable now; I didn't change anything else.
My write speeds are now between 40 MB/s and 140 MB/s. CrystalDiskMark is still crashing, however.
Tried setting it to write caching and to no caching. The volume seemed stable, and there was no crash when using CrystalDiskMark. Write speeds are slow: 20 MB/s.
Set it back to write-back caching (and of course rebooted); the speed issue is back. It drops to 20 MB/s and stays that way again. Hope there will be a fix soon...
-Tested mirroring; that was just about as fast as a single disk, so no issues with that.
-I do want to combine all 4 disks and be protected from 1 disk failure.
-I started using an old PC as a dedicated NAS with the 4 x 4TB drives (AMD 6000+, 4 GB DDR2). I installed FreeNAS on it and created a RAIDZ1 volume (= RAID 5). Created a Samba share and mounted it on my primary PC. Now I have continuous write speeds of 70-80 MB/s OVER THE NETWORK WITH AN OLD PC!!! The bottleneck is the CPU of this old PC; I guess I could get even higher speeds with a better CPU, as the disks are not fully utilized.
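For context, the single-disk-failure protection of RAID 5 (and RAIDZ1) comes from XOR parity: any one missing block in a stripe can be rebuilt by XOR-ing the surviving data blocks with the parity block. A toy Python sketch of the idea, with made-up block contents:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe: three data blocks across three of the four disks...
data = [b"disk0blk", b"disk1blk", b"disk2blk"]
# ...and the parity block stored on the fourth disk.
parity = xor_blocks(data)

# Simulate losing disk 1 and rebuilding it from the survivors + parity:
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)  # → b'disk1blk'
```

This is why a 4-disk RAID 5 gives the capacity of 3 disks: one disk's worth of space in every stripe holds parity instead of data.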
Feel free to pm me with questions.
Hi, same issue as described in previous posts:
MB: ASUS Maximus IX (Z270), BIOS version 1203 (latest)
GPU: MSI Nvidia GTX 980 Ti
CPU: Intel Core i7 6700K
RAM: 32GB DDR4 G.Skill TridentZ RGB 3200 MHz
M.2 NVMe RAID 0 array: 2 x Intel 600p 512GB, firmware version PSF121C; boots EFI, GPT, NTFS
SATA III RAID 0 array: 2 x PNY CS2211 SSD 240GB, firmware version CS221016, GPT
Operating system: Windows 10 64-bit, version 1709 (build 16299.334)
Intel Chipset SATA/PCIe RST Premium Controller driver version 188.8.131.525
The Intel 600p controller heatsinks get burning hot under heavy IO. I have a 70mm CPU fan directed at them with an open case and lots of airflow.
STILL, AND I QUOTE PREVIOUS POSTER,
"This system is STABLE if I have Write Cache Buffer Flushing enabled and write-through cache on the RAID volume.
The disk array hangs if I have Write Cache Buffer Flushing disabled and write-back cache on the RAID volume."
This is true for the M.2 NVMe RAID array only, which is where Windows is installed. The other array is for backups, which I regularly use to recover from when the problem reported above renders my system unbootable. In fact, I just finished a recovery, but that was to change the stripe size on the NVMe array (still freezing; it would not complete ATTO Disk Benchmark v3.5). Luckily, no corruption THIS TIME from a hard reset.
Truly Yours Intel,
PS: THIS QUESTION IS ASSUMED ANSWERED??? More appropriately: this question was answered incorrectly and requires further analysis.
I am seeing this exact same issue with a new ASUS Z370-A Prime board... 8700K... latest BIOS... latest ASUS-provided RST. It seems to work fine as long as I don't touch the cache options. Enabling the cache causes the volume to lock up and crash. I have two 2TB Seagate Barracuda Pro drives in RAID 0. Really disappointing: all this new hardware and I can't get optimal settings to work.
This really seems to be something Intel and the board manufacturers need to respond to ASAP. It's widespread.