I have BIOS 0040 installed.
I am attempting to RAID two 800P SSDs and I am hitting some kind of bug in the BIOS. This is what I get in each of the three ways I can configure M.2.
As you can see, when attempting to remap these drives to RST, M.2 drive #2 does not remap correctly.
I have already swapped drive positions, and M.2 #2 still will not remap.
I did some more checking and it seems to be related to port #2 and the 800P specifically.
Even with no drive installed in port #1, the 800P in port #2 will not remap to RST.
With a single drive in port #1, or with both ports #1 and #2 populated, I can always get port #1 to remap.
I am thinking about trying to get Windows installed on external media and then using RST to force these two drives into RAID 0, to see whether they then become correctly recognized in the BIOS.
Does anyone know if Windows has the same installation restrictions on Thunderbolt that it does on USB? I am pretty sure I can't install directly, but it would be cool if I am wrong about that.
If that won't work, there are more than a few ways to get Windows installed on a USB 3.0 flash drive.
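For anyone following along, one of those ways can be sketched roughly as below. This is the manual image-apply approach, run from an elevated prompt on an existing Windows machine; the drive letters `W:` (Windows partition) and `S:` (system partition) and the `D:\sources\install.wim` path are assumptions for illustration and depend on how you partitioned the USB drive with diskpart:

```
rem Apply a Windows image from installation media to the prepared USB partition.
dism /Apply-Image /ImageFile:D:\sources\install.wim /Index:1 /ApplyDir:W:\

rem Write boot files so the USB drive boots on both UEFI and legacy BIOS systems.
W:\Windows\System32\bcdboot W:\Windows /s S: /f ALL
```

This is just a sketch of the generic procedure, not a supported Windows To Go deployment; licensing and driver caveats still apply.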
I was able to get Windows To Go installed on a SATA drive via a USB-to-SATA adapter, and RST does indeed see the two 800P drives, but there is no option to put them in RAID.
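As a side note for anyone else checking this, you can confirm what the OS itself sees before blaming RST. A quick PowerShell sketch using the standard Storage-module cmdlet (no extra software assumed):

```powershell
# List every physical disk Windows can see, with its bus type.
# A drive that failed to remap to RST should still appear here
# with BusType NVMe; remapped drives sit behind the RAID controller.
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, MediaType, HealthStatus, Size |
    Format-Table -AutoSize
```

If both 800P drives show up here but RST offers no RAID option, the limitation is in the remap/RST layer rather than drive detection.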
I think this is the end of the line unless Intel actually wants to chime in here.
I did notice the same bug in RST that I saw in the Visual BIOS. The 800P in M.2 #2 is not listed as having a port: M.2 #1 in RST is listed as being in the first port on the controller, but M.2 #2 lists no port at all.
Best I can tell, the NUC8i7HVK is not capable of remapping two 800P SSDs to RST due to a bug on M.2 #2.
Thank you for joining the community and for all the information provided. I am going to investigate the case and will post back as soon as news becomes available.
I have done some additional testing, and this is not related to the 800P specifically: with BIOS 0040 I am unable to RAID any M.2 NVMe SSDs. I tried two Gen3 x4 NVMe SSDs and the same bug occurs.
I can't provide any help, but I also have this NUC and am trying to figure out how to RAID 0 two Samsung 970s.
I haven't found much guidance in Google searches; I didn't even find the BIOS settings. I'll look again after seeing your images.
I did get a clue that I was supposed to load the RST software. I downloaded version 184.108.40.206, but when I try to install it I get the error message "This platform is not supported." Maybe it needs to be enabled in the BIOS first...
I do have Win10EDU installed, everything else seems to work fine.
This link has the correct instructions:
PCIe (NVMe) RAID <- expand this option to see the NVMe RAID setup procedure.
You will have to reinstall Windows to set up RAID. You cannot convert a single live OS disk into RAID 0.
Allow me to share with you that the Intel® Optane™ SSD 800P is positioned as an Optane accelerator drive, and it has not been validated to work with Hades Canyon. Please refer to the following link for reference:
For what it's worth, the 800P is also invisible to the Optane software. I tried accelerating a SATA M.2 SSD with Optane and it does not work.
Does Intel have plans to add compatibility with their own hardware? I am a little surprised I even have to ask this.
The Hades Canyon platform does not currently support the Optane Memory capabilities. So while RAID is possible, using an Optane memory module or Optane SSD to accelerate a larger, slower storage medium will not function.
Your feedback is appreciated.
Here, have a look. The top picture is RAID on, remapping off. The bottom is the error you get when attempting to remap two drives. This is the identical bug that you see with the 800P, leading me to believe that this has nothing to do with Optane at all. RAID is simply broken.
I will attempt to reproduce this when I get back into the office tomorrow. I will say that I have a unit running RAID (remapped 800P, 2x 118 GB) with BIOS 0037 that was installed fresh with Windows 10 RS3 a few months back. I will attempt with 2x Samsung, 2x Intel SATA, and 2x Intel NVMe and report back.
Thank you for your response,
I appreciate your valuable comment concerning new processor technologies. Please bear in mind that Intel customer support does not comment on unreleased products, future technologies, or technology trends. We always strive to exceed our customers' expectations and meet their requirements. I have forwarded your comments to the appropriate team. Thank you for taking the time to provide your feedback.
So, as a check-in on this: BIOS 0040.
I was able to test and enable RAID 0 with multiple Intel, Samsung, and Intel Optane SSDs.
My process was as follows:
…until both drives are reporting as "Remapped to RST".
Please attempt this process and report back.
If this process doesn't work for you, then your BIOS might be stuck in a strange configuration, at which point I would set defaults and remove both storage drives. Boot up, turn off the system, remove the power adapter, wait about 30 seconds, and then reinstall your hardware and attempt the above step-by-step process.
Obviously, from the screenshots I have posted, this is exactly the procedure I have been following.
There is no screenshot function within RST, but I can confirm that only the remapped drive appears; the other drive is not listed.
I am tending to believe at this point that I have a broken system with actual physical damage, perhaps a missing pin connection on the CPU.
The only other potential issue could be a corrupt BIOS flash, but I highly doubt that is the case, as there must be a hash check to confirm a successful flash.
I have also used the default-configuration option within the BIOS, as well as clearing the CMOS by removing the battery; this was all done before I began this thread.
I have even gone as far as clearing the security keys and resetting them within the BIOS.
I figured I would try one last CMOS reset before exchanging this for a new one, and I believe I have just confirmed that this unit is faulty.
When trying to boot after the CMOS clear I get three solid lights and no attempt to boot at all. Holding down the power button also fails to shut down the system, a sure sign that it is in fact dead.
Sorry I wasted everyone's time; it looks like hardware failure right from the start.
Three solid lights followed by no POST usually points to memory rather than system-level hardware. But I don't blame you for wanting to start fresh.
Out of curiosity which memory were you running in your platform?
I was able to run this with both XMP profiles without issue. While attempting to diagnose this bug I had reverted to JEDEC timings just to eliminate any potential issue.
I have been attempting to get any sign of life out of this unit, but it is completely dead. It does not even attempt to POST.