I'm debating whether I should opt for an NVMe SSD or a SATA SSD for a second storage drive. I know there are a lot of PCIe "plugs" of sorts in this PC. I have a SATA SSD now, so no lanes are in use. I plan on using an eGPU in the future so I can keep this PC for as long as possible. If anyone can explain this to me, that would be great!!
Thank you for posting.
Please let me double-check this. I will update the thread as soon as possible.
Intel Customer Support Technician
Under Contract to Intel Corporation
Dominator211, thank you for your patience.
These do not share PCIe lanes; the two M.2 storage slots are PCIe x4 and they connect directly to the Intel® HM175 Chipset.
In case you are interested in checking validated parts for this Intel® NUC model, visit the Intel® Product Compatibility Tool: http://compatibleproducts.intel.com/ProductDetails?prodSearch=True&searchTerm=Intel%C2%AE%20NUC%20Ki...
Intel Customer Support Technician
Under Contract to Intel Corporation
While Amy's response is technically correct, the answer is actually a lot more complicated...
- The processor is connected to the PCH - the chipset - via the DMI bus.
- All of the peripheral interfaces offered by the PCH -- all of the downstream PCIe lanes, USB ports, SATA ports, LPC bus interface, SPI bus interface, HDA bus interface, Ethernet interfaces, etc. and etc. -- share the throughput of this DMI bus.
- Each of the four lanes of the DMI bus is capable of supporting 8 GT/s. That's roughly 1 GB/s per lane, or 3.93 GB/s total across the four lanes (see the quick calculation after this list).
- Since individual PCIe 3.0 lanes are also capable of supporting 8 GT/s, this means that each of the lanes of the DMI bus is effectively equivalent to a PCIe lane.
- Ultimately, this means that all of the peripheral interfaces offered by the PCH are being supported by (sharing the throughput of) the equivalent of 4 PCIe lanes.
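If it helps, here's a quick back-of-envelope check of those numbers (a minimal sketch; the 128b/130b factor is the standard PCIe 3.0 line encoding):

```python
# Sanity-check the DMI 3.0 figures quoted above.
# DMI 3.0 is electrically equivalent to four PCIe 3.0 lanes:
# 8 GT/s per lane with 128b/130b line encoding.
GT_PER_SEC = 8e9         # transfers per second, per lane
ENCODING = 128 / 130     # usable bits per transferred bit
LANES = 4

per_lane = GT_PER_SEC * ENCODING / 8 / 1e9   # bits -> bytes -> GB/s
total = per_lane * LANES

print(f"Per lane: {per_lane:.3f} GB/s")   # ~0.985 GB/s
print(f"Total:    {total:.2f} GB/s")      # ~3.94 GB/s
```

(The tiny difference from the 3.93 GB/s figure above is just rounding.)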
The equivalent of 4 PCIe lanes are supporting:
- The 4 PCIe lanes consumed by the TBT3 controller,
- The 4 PCIe lanes consumed by each of the two M.2 NVMe interfaces (and/or the 1 PCIe lane essentially consumed by the SATA controller),
- The PCIe lane consumed by the M.2 WiFi interface,
- The (roughly) 4 PCIe lanes consumed to support the USB 3.0 interfaces,
- The 2 PCIe lanes consumed by the GbE interfaces.
This roughly works out to those 4 PCIe lanes supporting (being shared by) 20 downstream PCIe interfaces!
We know, of course, that not all of these interfaces are going to be fully active at any one point in time. Still, anyone can see that the DMI bus is going to be saturated on a regular basis, especially when utilizing both M.2 NVMe interfaces.
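To make the oversubscription concrete, here's a tally of the lane demand listed above (a rough model, not a measurement; real workloads rarely drive everything at once):

```python
# Downstream PCIe lane demand on the PCH, per the list above.
demand = {
    "TBT3 controller": 4,
    "M.2 NVMe slot #1": 4,
    "M.2 NVMe slot #2": 4,
    "SATA controller": 1,
    "M.2 WiFi": 1,
    "USB 3.0 (approx.)": 4,
    "GbE x2": 2,
}
DMI_LANES = 4   # the upstream link is the equivalent of 4 PCIe 3.0 lanes

total = sum(demand.values())
print(f"Downstream lane demand: {total}")              # 20
print(f"Oversubscription: {total / DMI_LANES:.0f}:1")  # 5:1
```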
I hope this more clearly represents the situation.
So if I am understanding you correctly... the NVMe slots do not directly interfere with the back 2 Thunderbolt ports? What if, let's say, down the line I replace my current SATA SSD with an NVMe one? Would that give each SSD x2 lanes? Is this the same scenario with the two Thunderbolt 3 ports on the back? Let's say I were to use a Thunderbolt docking station to add more ports to the NUC as well as using an eGPU enclosure. Would I be better off just using the eGPU alone so I would get more performance? Or can every single PCIe x4 slot maintain full speed if in use at once? That would be cool if it were possible, but I just don't see how with today's technology we can cram that much data down a chipset without it literally bursting into flames.
OMG, so many questions!
RE: "... the NVMe slots do not directly interfere with the back 2 Thunderbolt ports?"
--> No, they *do* interfere, though I would prefer to think of this as contending for the same resource, namely the DMI bus. When the bus has the bandwidth, they will be capable of running at full speed; when it doesn't, they will run at a degraded speed (actually, I don't like using 'degraded' either).
RE: "What if let's say down the line I replace my current SATA SSD with an NVMe one would that give each SSD a 2x lane?"
--> No, the two NVMe drives would each continue to use/consume 4 of the PCH's PCIe lanes and would contend for the DMI bus as a shared resource.
RE: "Is this the same scenario with the two thunderbolt 3 ports on the back?"
--> No, the two TBT3 ports share a set of 4 PCIe lanes, so they together contend with the NVMe interfaces (etc.) for the resources of the DMI bus.
RE: "Let's say I were to use a Thunderbolt docking station to add more ports to NUC as well as using an E-GPU enclosure. would I be better off just using the E-GPU alone so I would get more performance?"
--> Each TBT3 port has a maximum bandwidth and everything you connect to that port shares this bandwidth. For this reason, if you were going to use a eGPU, I would recommend you put it on a separate port.
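As a rough illustration of why (the ~22 Gb/s usable PCIe payload per 40 Gb/s TBT3 port is a commonly cited approximation, and the device demands below are hypothetical):

```python
# Why an eGPU should get its own TBT3 port: everything on one port
# shares that port's PCIe payload bandwidth.
PORT_PCIE_GBPS = 22.0   # approx. PCIe data carried by one 40 Gb/s TBT3 port

# Hypothetical demands from devices hung off the same port:
egpu_gbps = 20.0        # eGPU traffic
dock_gbps = 6.0         # dock traffic (USB storage, LAN, etc.)

demand = egpu_gbps + dock_gbps
print(f"Demand: {demand:.0f} Gb/s vs. port capacity: {PORT_PCIE_GBPS:.0f} Gb/s")
if demand > PORT_PCIE_GBPS:
    print("Oversubscribed: the eGPU and dock would throttle each other.")
```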
RE: "Or can every single PCI-E x4 slot maintain full speed if in use at once?"
--> No, they cannot maintain full speed; they are sharing the DMI bus and it will be saturated and thus their performance degraded. As an example, back when the NUC6i7KYK (Skull Canyon) NUC first came out, some folks started experimenting with using RAID0 across the two (PCIex4) NVMe interfaces. Their findings were that the overall performance of the RAID0 array was improved over that of the NVMe SSD drive that was used (Samsung 960 PRO at that time) but couldn't achieve double because of DMI bus saturation.
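Here's a minimal sketch of that ceiling, assuming an illustrative 3.5 GB/s sequential-read drive (the drive figure is hypothetical, not a benchmark):

```python
# Why RAID0 across two PCH-attached NVMe drives can't double throughput:
# the aggregate is capped by the DMI link, not by the drives.
DMI_LIMIT = 3.94    # GB/s, DMI 3.0 x4 ceiling (before protocol overhead)
drive_read = 3.5    # GB/s, hypothetical single-drive sequential read

raid0_ideal = 2 * drive_read                # what striping promises
raid0_capped = min(raid0_ideal, DMI_LIMIT)  # what the DMI link allows

print(f"Ideal RAID0:  {raid0_ideal:.1f} GB/s")   # 7.0 GB/s
print(f"DMI-capped:   {raid0_capped:.2f} GB/s")  # 3.94 GB/s
```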
RE: "That would be cool if it were possible but I just don't see how with today's technology we can cram that much data down a chipset without it literally bursting into flames."
--> Right, you can't cram down more than what the DMI bus can support - but the PCH can definitely consume this much without, um, 'bursting into flames'
So the glaring issue here is this contention. You might ask: "Why not make the DMI bus wider?"
--> Well, the percentage of time when the requirements are actually high enough to saturate the DMI bus is fairly small. Intel is definitely making a trade-off here, balancing this throughput against effects on, amongst other things, processor performance and memory bus contention. Time will tell whether growth in the width of this bus becomes necessary. Certainly, things like NVMe and eGPUs are adding significantly to the requirements.
Scott's answers above are excellent, and I would only add the following points:
1. When the Thunderbolt ports are used only for display connections, they will not be competing with the NVMe slots for bandwidth on the DMI bus. In that case the Thunderbolt ports function in DisplayPort alternate mode, streaming data directly from the AMD GPU's internal DisplayPort outputs. If you use the Thunderbolt ports for PCIe data devices, on the other hand, they will contend for the DMI bus as Scott described.
2. It's actually fairly rare in the consumer space to drive full-speed I/O from two storage endpoints at once. In fact, the RAID scenario that Scott mentioned is about the only one I can think of that would do so on a consistent basis. Even when you are copying data from one NVMe drive to another, the transfer is one-way and can use the full ~4 GB/s of both ports, since one port is outputting data while the other is inputting, and the links are full duplex (each supports ~4 GB/s simultaneously in both directions).
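A tiny sketch of that full-duplex point (the 3.0 GB/s copy rate is a made-up figure):

```python
# A one-way NVMe-to-NVMe copy rides both directions of the DMI link:
# reads flow upstream (drive -> memory), writes flow downstream
# (memory -> drive), and the two directions don't compete.
DMI_EACH_WAY = 3.94   # GB/s available in each direction
copy_rate = 3.0       # GB/s, hypothetical sustained copy speed

upstream = copy_rate     # reads from the source drive
downstream = copy_rate   # writes to the destination drive

fits = upstream <= DMI_EACH_WAY and downstream <= DMI_EACH_WAY
print(fits)   # True: neither direction is oversubscribed
```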
The takeaway from all of this is that the bus configuration for this NUC is not much different from a current, non-HEDT desktop that uses its PCIe x16 slot for a GPU. Those systems, too, run all of their peripherals off of the southbridge, just like the NUC, and have the same ~4 GB/s DMI link connecting the southbridge to the uncore (northbridge) of the CPU.