Since some applications cannot effectively use more than one NUMA node, I am testing with NUMA disabled on a 2S Cascade Lake-SP system (48 cores, 12 memory channels total).
The impact on memory latency was expected, but bandwidth was also severely reduced, to about 73 GB/s. That is roughly 1/3 of what the same system delivers with NUMA enabled, and close to half of what a 1S 6-channel system achieves.
Why does disabling NUMA have such a large effect on memory bandwidth, even more than on latency? Is there anything I can do to improve it?
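To be concrete about what I mean by "bandwidth" here, the sketch below is a simplified OpenMP triad kernel of the kind I have in mind (just an illustrative sketch, not the exact benchmark I ran; array sizes and repetitions are arbitrary):

```c
/* Simplified triad-style bandwidth sketch (illustration only).
 * Build with, e.g.: gcc -O3 -fopenmp triad.c -o triad
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N    (1UL << 27)   /* 128M doubles per array, ~1 GiB each */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    /* First-touch initialization in parallel, so pages land where the
       threads that later stream through them actually run. */
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    double t0 = omp_get_wtime();
    for (int r = 0; r < REPS; r++) {
        #pragma omp parallel for
        for (size_t i = 0; i < N; i++)
            c[i] = a[i] + 3.0 * b[i];   /* triad: two reads, one write */
    }
    double t1 = omp_get_wtime();

    /* Count 3 arrays x 8 bytes per element per repetition. */
    double gb = (double)REPS * 3.0 * N * sizeof(double) / 1e9;
    printf("sustained bandwidth: %.1f GB/s\n", gb / (t1 - t0));

    free(a); free(b); free(c);
    return 0;
}
```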
Thanks.
21 Replies
Thanks. One more small question: what is the CPUID for the Cooper Lake-SP Xeon?
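To frame the question, this is roughly how I read the family/model/stepping on the box I have access to (a small sketch using CPUID leaf 1 via gcc/clang's `<cpuid.h>`; I just don't know which values correspond to Cooper Lake-SP):

```c
/* Sketch: decode DisplayFamily/DisplayModel/Stepping from CPUID leaf 1
 * on x86-64 with gcc or clang.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned int stepping   = eax & 0xF;
    unsigned int model      = (eax >> 4) & 0xF;
    unsigned int family     = (eax >> 8) & 0xF;
    unsigned int ext_model  = (eax >> 16) & 0xF;
    unsigned int ext_family = (eax >> 20) & 0xFF;

    /* Per the Intel SDM, the extended fields are folded in for
       family 0x6 and 0xF. */
    unsigned int disp_family = (family == 0xF) ? family + ext_family : family;
    unsigned int disp_model  = (family == 0x6 || family == 0xF)
                                   ? (ext_model << 4) + model
                                   : model;

    printf("CPUID.1:EAX = 0x%08X (family 0x%X, model 0x%X, stepping %u)\n",
           eax, disp_family, disp_model, stepping);
    return 0;
}
```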
