I have a NUC9VXQNX and am trying to configure RAID 1 on the two PCIe slots using (2) Icy Dock PCIe/NVMe trays (https://www.icydock.com/goods.php?id=309). The drives are SAMSUNG 970 EVO Plus SSD 2TB. I want to configure RAID 1 before installing Linux, but I am unable to use "CTRL-I" to access the Option ROM during POST, both with and without secure boot enabled.
I've updated to the most recent BIOS 0059. The BIOS recognizes both drives in the Security section, but after changing the SATA Mode to RAID and checking both M.2 Slot Remapping boxes the drives do not show up in the Intel Rapid Storage Technology area. The Intel RST RAID driver is listed as 18.104.22.16855 in the BIOS, and I see there is a newer RST driver version 22.214.171.1241 that notes detection of PCIe NVMe drives here (https://downloadcenter.intel.com/download/29236/Intel-Rapid-Storage-Technology-RAID-with-Intel-Optan...).
How can I update the BIOS version of the RST Driver to 126.96.36.1991?
I've even gone so far as to install Windows on one of the drives and run the .exe RST driver updates from the above link, but this doesn't change the BIOS driver version for Intel RST.
Any help greatly appreciated!
Before anything else, you do understand that you cannot use Intel RST with Linux, right?
You cannot update the RST OpROM contents yourself. It is provided as part of the BIOS package. The fact that there is a newer Windows driver does not mean that there are corresponding change(s) in the OpROM contents - nor does it imply that any changes in the OpROM contents are even necessary.
The remapping capability is specific to M.2 slots 1 and 2 on the motherboard. I do not know if it is even possible to support remapping of PCIe-based media. I thought that this remapping capability was specific to the M.2 slots that utilize PCH-based PCIe lanes, not the baseboard PCIe slots (which utilize processor PCIe lanes).
Hope this helps,
@n_scott_pearson Thank you, that was very helpful.
Now I'm trying to configure software RAID 1 during the Debian install and running into another problem. Apparently UEFI doesn't recognize Linux software RAID, so I need to disable UEFI boot. I've disabled Secure Boot and read that I also need to disable Modern Standby, but I can't find that setting. I also cannot uncheck "Native ACPI OS PCIe Support" as mentioned here: https://community.intel.com/t5/Intel-NUCs/NUC10-can-t-enable-Legacy-Boot/td-p/602288
I'm still on 0059 firmware. Do I need to downgrade firmware to use legacy boot?
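For anyone hitting the same wall: the usual workaround does not require legacy boot at all. UEFI firmware only needs to read the EFI System Partition (ESP), so you can keep a small unmirrored ESP on each disk and put only the root filesystem on a Linux md RAID 1 array. A rough sketch of that layout, with device names (/dev/nvme0n1, /dev/nvme1n1) and partition sizes assumed for illustration:

```shell
# Partition each disk identically: a small EFI System Partition (ef00)
# plus a large partition typed for Linux RAID (fd00).
sgdisk -n 1:0:+512M -t 1:ef00 -n 2:0:0 -t 2:fd00 /dev/nvme0n1
sgdisk -n 1:0:+512M -t 1:ef00 -n 2:0:0 -t 2:fd00 /dev/nvme1n1

# Mirror only the second partitions. The ESPs stay plain FAT32 so the
# UEFI firmware can read them (firmware cannot read md metadata).
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.2 /dev/nvme0n1p2 /dev/nvme1n1p2

mkfs.fat -F32 /dev/nvme0n1p1
mkfs.fat -F32 /dev/nvme1n1p1
mkfs.ext4 /dev/md0

# Verify the array is assembled and syncing.
cat /proc/mdstat
```

The Debian installer's manual partitioner can build the same layout interactively. The one caveat is that the ESP itself is not mirrored, so you need to install the bootloader to (or periodically copy the ESP contents onto) the second disk if you want either drive to remain bootable after a failure.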
Looks like my RAID 1 dreams will need to be put on hold.
I need to get this thing up and running ASAP, so I'm going to use just one of the SSDs. I am NOT going to be using a GPU, and since there are three M.2 slots, does it make a difference which one I use if I'm looking for maximum I/O performance? Would there be any benefit to using the CPU M.2 slot over the PCH slots?
At issue is the M.2 slot on the baseboard, which utilizes PCIe lanes from the processor:
- [Advantage] The R/W performance of a SSD in a M.2 connector that utilizes processor PCIe lanes may be better than the R/W performance of a SSD in a M.2 connector that utilizes PCH (chipset) PCIe lanes. It depends upon whether the R/W operations become bottlenecked on the DMI bus, which connects the processor to the PCH. Bottlenecking can occur if bursts of I/O operations are occurring involving multiple high-speed devices (the one or two M.2 NVMe SSDs, Thunderbolt, USB 3.x, SATA, LAN, etc.).
- [Disadvantage] If the baseboard's M.2 connector is used, the number of PCIe lanes available for use by an add-in graphics card will be cut from 16 to 8. Whether this has an appreciable effect on graphics performance is dependent upon the operations being performed.
When I asked one of the QN/QX platform experts about this, he said that bottlenecking on the DMI bus is exceptionally rare. He also said that knocking down the number of PCIe lanes available to a graphics card rarely has an appreciable effect on graphics performance. This is, of course, moot if you are not using an add-in graphics card.
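If you want to verify for yourself which root port a given slot hangs off, the PCIe topology shows it. A quick sketch (the bus address 00:01.0 is just an example; your numbering will differ):

```shell
# Show the PCIe topology as a tree; NVMe drives appear as
# "Non-Volatile memory controller" entries under their root port.
lspci -tv

# Find the NVMe controller's bus address.
lspci | grep -i 'non-volatile'

# Inspect the parent root port on bus 00. Root ports described as the
# processor's "PCIe Controller" use CPU lanes, while ports described as
# "PCH PCIe Root Port" (or similar chipset naming) use PCH lanes.
lspci -s 00:01.0
```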
P.S. Remember that you can do RAID0 or RAID1 with the two SSDs plugged into the motherboard's M.2 connectors. Alas, not with Linux, however.
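If slot choice is ever in doubt, you can also just measure it. A rough fio sequential-read test (device name assumed; `--readonly` keeps it from writing to the drive):

```shell
# 30-second sequential read of the raw NVMe device at 1 MiB blocks,
# queue depth 32, bypassing the page cache. Compare the reported
# bandwidth with the SSD moved between slots.
fio --name=seqread --filename=/dev/nvme0n1 --rw=read \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --readonly
```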