So I've been extensively tweaking my Hades Canyon NUC, and I've noticed one very severe issue.
I can overclock the CPU and GPU - separately - but if I overclock both I get microstutters and lag, and then I notice the GPU begins throttling down.
My GPU OC is 1350/900 with power limit set to 75%. I notice increased draw from the wall, a total of 175W, up from ~135W at stock, but it never goes above that, despite the adapter supporting 230W of delivery. In fact, I've only ever seen it draw above 175W once, randomly, while running FurMark with power limit increased.
If I clock the CPU BELOW stock, down to something like 3.5GHz, the power draw remains at 175W, but the stuttering disappears during gameplay. The CPU is not under heavy load, but going from 4.5GHz to 3.5GHz reduces the CPU's power draw by about 15W, so I assume that 15W goes to the GPU instead and prevents the stuttering.
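To make my reasoning explicit: if the CPU and GPU share a single wall-power budget (my hypothesis, not anything Intel has documented), the arithmetic looks like this. The CPU package wattage and platform overhead below are illustrative guesses, not measurements:

```python
# Sanity check of the shared-power-budget hypothesis.
# WALL_CAP is what I actually observe; the CPU and overhead
# figures are rough assumptions for illustration only.

WALL_CAP = 175               # observed max draw from the wall (W)
CPU_AT_4_5GHZ = 45           # assumed CPU package power at 4.5 GHz (W)
CPU_AT_3_5GHZ = CPU_AT_4_5GHZ - 15  # downclocking frees ~15 W

def gpu_budget(cpu_watts, platform_overhead=40):
    """Power left over for the GPU after CPU and platform overhead."""
    return WALL_CAP - cpu_watts - platform_overhead

print(gpu_budget(CPU_AT_4_5GHZ))  # 90 W left for the GPU at 4.5 GHz
print(gpu_budget(CPU_AT_3_5GHZ))  # 105 W left for the GPU at 3.5 GHz
```

Every watt the CPU gives up shows up as GPU headroom under this model, which would explain why downclocking the CPU cures the stutter even though the wall draw never moves.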
I have XTU setting the TDP limit to Unlimited, but from what I've experienced, the only explanation is that the entire chip refuses to draw above its advertised 100W, despite every other unlocked chip being able to go as far above its ARK TDP as you can push it.
Even more bizarre, if I leave XTU open while playing a game with both systems overclocked, XTU tells me I'm being thermal throttled, despite both systems being under 75C! It's not VR throttling, or power throttling, or current throttling. For some reason it's triggering a thermal throttle at under 75C, and downclocking the GPU to below stock settings despite the CPU barely drawing above 30W.
EDIT: Power Sense is off and it doesn't seem to make a difference, though it no longer shows thermal throttling in XTU.
What gives? I paid a significant amount of money for this kit and it's disappointing that I cannot seem to actually overclock it. My system is stable, and I've isolated the issue down to 175W seemingly being the maximum draw from the wall; the phantom thermal throttling leads me to believe there's a false limitation capping the chip at 100W despite no actual overheating.
Thank you for contacting us; it will be more than a pleasure to guide you.
In this case, the system instability that you are experiencing is entirely expected when the CPU and GPU are both pushed beyond their default values at the same time.
Intel® ensures that the platform tools exist and function for those who decide to operate beyond Intel's validated and warranted configuration.
Altering clock frequency and/or voltage may reduce system stability and the useful life of the system and processor; cause the processor and other system components to fail; cause reductions in system performance; cause additional heat or other damage; and affect system data integrity. Intel® has not tested, and does not warrant, the operation of the processor beyond its specifications.
Even though we do not provide active support for this scenario, we will keep your thread open in case a community peer jumps in with a piece of advice.
I hope this helps.
We just wanted to double check if you still need further assistance.
Please don't hesitate to contact us again.
I hope to hear from you soon.
Your product specifications, listed here: https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC8i7HVK_TechProdSpec.p..., explicitly state that the NUC supports drawing up to 230W from the wall.
At stock settings, running Prime95 and FurMark, I am unable to draw above 175W.
Raising the power draw on the GPU using the tools included with Intel's own Radeon Graphics drivers, found here: https://downloadcenter.intel.com/download/27881/Graphics-Radeon-RX-Vega-M-Graphics, I am unable to draw above 175W.
Using XTU, Intel's official tool, to raise IccMax and TDP, found here: https://downloadcenter.intel.com/download/24075/Intel-Extreme-Tuning-Utility-Intel-XTU-, I am unable to draw above 175W.
No combination of settings or tests, stock or otherwise, allows me to actually pull the 230W advertised. Instead, the system throttles down extensively and underperforms, despite having both the thermal headroom and the power headroom from the PSU to do so.
This is a bug. I will be RMAing my product until I get one that performs to the advertised spec, or requesting a refund if none reach the 230W I was told.
These comments are so unbelievably idiotic that I considered ignoring the post completely.
The only way that the unit is going to draw this much current is if you max out the current draw on the various connectors (USB, TBT3, M.2, etc.). The overall current draw has to take into account the maximum supported draw across all connectors in addition to that for the processor, chipset, graphics, etc. components. Bottom line, if you want to see the highest current draw, you need to test with components plugged into all connectors that also draw the maximum supported current.
How do you propose drawing an extra 55W from peripherals alone?
The problem isn't inherently that it's drawing too much at stock (it shouldn't, outside of AVX loads with FurMark running at the same time); it's that instead of drawing more, it opts to throttle down even when thermal and power headroom exists.
There is a hidden cap of 175W for the core components of the system (CPU + GPU), and that isn't mentioned anywhere. Reviewers have also noticed this in their systems.
I'm not capping out the components. There's no reason I shouldn't be able to pull more than 175W when utilizing the GPU and CPU at full load simultaneously.
Is this a joke? Can you not do simple math? Each of the 6 USB 3.0 ports can provide up to 4.5W. Each of the 2 USB 2.0 ports can provide 2.5W. The USB 3.0 charging port can provide 7.5W. Each of the 4 USB 3.1 Gen 2 (USB-A and USB-C) ports can provide up to 7.5W. That adds up to 69.5W all by itself. There are many other connectors, as well as the mount points for SO-DIMMs, M.2 devices, etc., to be dealt with.
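For what it's worth, the per-port figures quoted here do sum to 69.5W (port counts and wattages as stated in the post; I haven't independently verified them against the spec sheet):

```python
# Sum of the maximum USB power delivery figures quoted above.
usb_budget = {
    "USB 3.0 (6 ports x 4.5 W)":       6 * 4.5,
    "USB 2.0 (2 ports x 2.5 W)":       2 * 2.5,
    "USB 3.0 charging port (7.5 W)":   1 * 7.5,
    "USB 3.1 Gen 2 (4 ports x 7.5 W)": 4 * 7.5,
}
total = sum(usb_budget.values())
print(total)  # 69.5
```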
Your wannabe smug reply does not address the artificial limit set by the board at 175W. Power isn't "reserved" like that; the unit is rated for a combined draw of 230W, not X watts for the GPU, Y watts for the CPU, and Z for peripherals. This is quite clear given that the CPU's draw causes the GPU to be downclocked, the CPU essentially "stealing" power from the GPU.
George, if you think you know the answer, then why are you here? You have been given the best possible advice, yet you want to argue the point and think you know the subject matter better than someone with vast experience in this area.
So, take the advice/answer, or not. But, continuing to argue will cause the thread to be closed.
The only advice I've been given is that my device can't draw more power. That's exactly why I'm here - to find out why there's a restriction on the power draw on an overclockable device.
Well Al, he is definitely incorrect to boot. There is no "Robbing Peter to pay Paul" going on. The CPU and GPU each have their own independent supply circuit. There is no power sharing between the two. Freeing up power at the CPU does not make more power available to the GPU (or vice versa).
Okay, there's a start.
Is there another potential reason that the GPU downclocks when both parts are overclocked, but not when they are overclocked independently? I think power sharing is what's going on because it fits with the measured power draw never exceeding 175W. I cannot think of another logical conclusion.
Thank you for your response.
In this case, please consider that Intel does not support overclocking on Intel® NUCs, and using them out of spec may void the warranty.
My recommendation is to follow the guidelines in the following document and let us know the results.
Overclock Assistant Guides for Intel® NUC:
Also, please provide us with the system configuration of the components you are using (RAM memory).
Here is the link to the system specifications so you can make sure that it meets all the requirements:
I hope this helps,
This doesn't address my original query.
I've looked through the guides and menus. There's nothing related to limiting the power draw ("Power Sense" doesn't affect this - and it shouldn't until 230 watts anyway, but I've tested it both on and off).
I'm well aware that it's unsupported. But you should be supporting issues with power delivery on your device. My question is not about overclocking itself.
Baloney, your query is specifically regarding overclocking. Overclocking is not supported. Don't do it. Period.
Building UCFF systems requires a lot of compromises and some tough design decisions, especially with respect to thermals and cooling. The size of the system and the acceptable acoustic response window make this very difficult. Since there's no possibility of using flow-through cooling, the use of (much less efficient) blowers is an unfortunate must. It's tough enough handling TDP conditions (noise level is the #1 complaint with NUCs); building in a margin to support overclocking is simply not possible. When you overclock, IMHO, your NUC is not going to be adequately cooled. Don't do it. Period.
This is all I have to say. Don't do it. Period.
This NUC was literally advertised and sold as overclockable. My query has nothing to do with overclocking; it's about power draw. While one is causing the other, that does not explain why there is an artificial power limit in the system. My temperatures are fine and noise isn't an issue; the NUC was made and advertised for people like me. Yet Intel decided to lock down the power draw for some inexplicable reason. I'm here asking for an explanation. You don't seem to understand the core problem and keep telling me about overclocking when my issue is about power draw. The overclocking works fine.
You say "the overclocking works fine".
But, in your very first post you said "if I overclock both I get microstutters and lag"
So, which is it?
Look, do you want to use your NUC as intended, or do you want to heat your apartment as well? You decide.
Since you are not getting the answer you want, why persist?
Continued hand waving is not an acceptable answer to my question.
The overclocking functions as intended - but the power delivery does not. This is the logical explanation for why individual overclocking of either part functions properly with identical settings.
If you don't know the answer, stop deflecting and just move on. I'm getting tired of repeating myself because the two of you either don't understand or refuse to understand my basic question and would rather be rude and attempt to show off. You very clearly don't know the answer, and that's okay. I'll wait for someone who actually does.
"just move on". I agree that you should. It serves no purpose for you to continue this effort here.
The hand waving is called facepalm. Even the kitty does it.