Similar to the forum post at https://forums.intel.com/s/question/0D50P0000490VxbSAE/vroc-raid5-parity-errors, we get dozens to hundreds of parity errors following every Intel Rapid Storage Technology Verification and Repair run on multiple computers running RAID5. The drives all test out fine, and I don't see any parity errors if I put those same drives in RAID1 configurations. The drives are mostly Samsung EVO 860 SSD drives (SATA, not NVMe). There are some EVO 850 drives, but some systems are running exclusively new 860 drives and experience the same problems.
We only see these errors in RAID5 volumes with more than 3 drives; with only 3 drives, there are no errors. The number of errors also increases significantly (from a couple dozen to a few hundred) when moving from 4 drives to 5.
As far as we know, we have not had any problems with the RAID volumes, so we're not sure whether this is a reporting problem and everything is fine, or an indication that Intel's RAID5 implementation is fatally flawed and can't support more than 3 drives. The latter would be terrible, because performance on SATA increases with more drives, and the chief benefit of RAID5 over other forms of RAID is lowering the cost of redundancy: it spends only a single drive's worth of capacity on parity, whereas RAID1 uses half the drives for redundancy.
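To put rough numbers on that trade-off (my own back-of-the-envelope arithmetic, not anything from Intel's documentation):

```python
# Back-of-the-envelope usable capacity, illustrative only.
def mirrored_usable_gb(n_drives, drive_size_gb):
    # A mirrored layout (RAID1 / RAID10) keeps a full copy of everything,
    # so half the raw capacity goes to redundancy.
    return n_drives * drive_size_gb / 2

def raid5_usable_gb(n_drives, drive_size_gb):
    # RAID5 spends one drive's worth of capacity on parity, no matter
    # how many drives are in the array.
    return (n_drives - 1) * drive_size_gb

# Example with four 500 GB drives (like our EVO 860 arrays):
print(mirrored_usable_gb(4, 500))  # 1000 GB usable
print(raid5_usable_gb(4, 500))     # 1500 GB usable
```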
Also note that when IRST Verification and Repair runs, the initial report at the start always shows 0 errors; it's only the final report AFTER the run that shows the parity errors. This is part of what makes us think it could be a reporting problem rather than actual errors.
Regardless of whether this is a reporting problem or an actual RAID issue, because these drives appear fine in all other checks (including in IRST arrays with 3 or fewer drives), this appears to be a problem in Intel's software or firmware support for RAID5 in Rapid Storage Technology, which fails with more than 3 drives.
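For reference, here is what a parity check means in generic RAID5 terms (a minimal sketch of the general technique, not a claim about how IRST's Verify and Repair is actually implemented): within each stripe, the XOR of the data blocks should equal the stored parity block, and a stripe where they don't match would be counted as one parity error.

```python
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def stripe_parity_ok(data_blocks, parity_block):
    # A stripe passes verification when the stored parity equals the
    # XOR of its data blocks.
    return xor_blocks(data_blocks) == parity_block

# Hypothetical 4-drive stripe: 3 data blocks + 1 parity block.
data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = xor_blocks(data)
print(stripe_parity_ok(data, parity))       # True: consistent stripe
print(stripe_parity_ok(data, b"\x00\x00"))  # False: one parity error
```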
Here's the relevant segment of the report after running Verification and Repair, this one with 5 drives:
Volume name: SSD RAID5
Status: Normal
Type: RAID 5
Size: 1,907,750 MB
System volume: Yes
Data stripe size: 32 KB
Write-back cache: Write through
Initialized: Yes
Parity errors: 527
Blocks with media errors: 0
Physical sector size: 512 Bytes
Logical sector size: 512 Bytes
Here's another from a different system with 4 drives:
Volume name: RAID5 System
Status: Normal
Type: RAID 5
Size: 1,430,812 MB
System volume: Yes
Data stripe size: 32 KB
Write-back cache: Write through
Initialized: Yes
Parity errors: 20
Blocks with media errors: 0
Physical sector size: 512 Bytes
Logical sector size: 512 Bytes
Please advise:
- Is this a serious error that means we should shut down the arrays, or is it just a reporting bug and we should ignore the parity errors (if it is only a reporting error, please fix it)?
- Is there a fix or work-around that still includes the use of RAID5 with 4 or 5 drives?
- If we need to wait for an updated IRST or driver update, is there a scheduled release date?
I have the same problem with parity errors on RAID-5 with 4 drives (Toshiba and WD NAS-edition HDDs).
The scan runs monthly and fixes hundreds of parity errors.
The attached zip contains the results of the SSU scan.
Please advise: what happens to the data if one drive in the array actually fails? Will these parity errors prevent the array from being restored in that case?
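My understanding of standard RAID5 reconstruction (a generic sketch, not Intel's actual rebuild code) is that a failed drive's blocks are rebuilt as the XOR of the surviving data blocks and the parity block, so any stripe whose parity really was wrong would come back corrupted after a rebuild. For example:

```python
def rebuild_missing_block(surviving_blocks, parity_block):
    # Standard RAID5 reconstruction: the missing block is the XOR of
    # all surviving data blocks and the parity block.
    out = parity_block
    for blk in surviving_blocks:
        out = bytes(x ^ y for x, y in zip(out, blk))
    return out

# Hypothetical 4-drive stripe: d0, d1, d2 plus parity = d0 ^ d1 ^ d2.
d0, d1, d2 = b"\x11", b"\x22", b"\x33"
good_parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
bad_parity = b"\xff"  # a stripe that verification would flag as a parity error

# Suppose the drive holding d1 fails:
print(rebuild_missing_block([d0, d2], good_parity) == d1)  # True: data recovered
print(rebuild_missing_block([d0, d2], bad_parity) == d1)   # False: corrupted rebuild
```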
Wow, I see that all my e-mails to you are appearing as posts to this forum (unlike this post, which I'm making directly at forums.intel.com). I'm apparently not able to edit them either. Please grab the data you need to troubleshoot and delete or remove that information from my post. I do not want the serial numbers or other unique information being made public like this.
Also, please see the other post in this thread from BRudo who reports the same bug in the IRST software.
Hello GraniteStateColin,
DavidV_Intel, in the next comment, recommends downgrading the installed Intel Rapid Storage Technology version to 15.7.0.1014.
Please let me know if this fixes the problem. My motherboard came with driver version 16.5.0.1027, and that version generates parity errors. Updating to 16.8.0.1000 did not help.
One system does; the other that I've posted about here has a 300 series chipset, specifically the Intel(R) 300 Series Chipset Family LPC Controller (Z390) - A305. Both have parity errors, and ONLY when running RAID5 with 4+ drives, so the chipset is clearly not the problem. RAID1 and RAID5 with 3 drives both work without yielding any parity errors.
Could you confirm that 4+ drive RAID5 works internally for Intel in your testing?
1 - What is the size of the hard drives? Are they all the same size or mismatched?
I have provided that data already. All drives in the arrays are the same size. In the case of the newer system, the drives are all identical: Samsung EVO 860, 500 GB. The exact size is 476,940 MB. For the older system, there are some EVO 860s and some EVO 850s, but all the same size. On other computers (all have the same problem with RAID5 and more than 3 drives; the problems go away with 3 or fewer drives), the sizes are similarly matched. Most of those other systems exhibiting identical symptoms are running all EVO 850s.
The only configuration element these systems have in common is that all are running Samsung EVO drives (850 or 860). The motherboards, other hardware, and even operating systems vary (Windows 10 or Windows Server 2016).
2 - How much data is in the array when they run the scan? Or is it empty?
Between 25% and 33% full.
3 - Have you installed the latest BIOS version from ASUS*?
Neither of these systems uses an ASUS motherboard, but the latest BIOS is installed on the older Gigabyte system. I upgraded the BIOS in the newer ASRock 300 series/8th-gen i7-8600 CPU motherboard in December 2018. I see there is a newer BIOS that just came out in January, but it only appears to address memory timings for overclocking on systems running the 8086K CPU (which I'm not).
Again: Does Intel not get these errors when running 4 or more SATA drives in RAID5? And is there reason to believe our data is at risk? This is clearly a bug in the RAID5 configuration with more than 3 drives. What is not clear is whether the bug is in the reporting or whether there really are data-threatening errors on the volumes.
I have a feeling nobody reads the thread before answering. Quote:
---cut---
The drives are mostly Samsung EVO 860 SSD drives (SATA, not NVMe).
---cut---
And I have exactly the same problem (a reminder, in case you forgot: parity errors) with 4 x 4 TB WD and Toshiba HDDs.
Indeed. I am not aware that this configuration works for ANYONE. I have not heard of a single case with 4+ SATA drives working in RAID5. Note that I don't know that it fails for everyone; I've just not found it to work on any of my systems, and I have not heard a single report from anyone that this configuration works. David_V is clearly not actually trying this; he's just posting the occasional response. Very frustrating, and it's certainly starting to make me want to avoid Intel solutions in the future.
David V, all the drives are SATA drives, not PCIe. I'm not sure what leads you to think otherwise, but these are all Samsung EVO 850 or 860 drives, which are SATA.
If this should work, what is Intel's proposed solution?
Thanks,
Colin