Intel® Optane™ Persistent Memory
Examine Issues Related to Intel® Optane™ Persistent Memory

Restricting the amount of DRAM in Memory Mode


I'd like to run some evaluations in Memory Mode. My problem is that my machine has plenty of DRAM, which would force my evaluation workloads to be too large. Is there any way to restrict the amount of DRAM available in that mode? I don't mind reducing the overall amount of DRAM available to the system, but I can't go to the server and physically remove memory sticks. The easiest solution would have been something along the lines of cgroups, but I assume cgroups can't discriminate between DRAM and PMem when applying resource limits.


FWIW, my machine runs RHEL 7 with 192GB of DRAM and 252GB of PMem, divided between two sockets.

4 Replies

Hello, erangilad.

Thank you for posting on the Intel Community Support forums.

We have a dedicated community section for this specific product, so I will move your thread there so it can be answered as soon as possible.

Best regards,

Bruce C.

Intel Customer Support Technician


Memory Mode is implemented in the BIOS and memory controller, so the only way to change the ratio of DDR to PMem is to physically remove DIMMs; there's no way to disable DDR slots in the BIOS. Memory Mode also has requirements for ratios and DIMM population, which are documented in your system's manual. You need to stay within the supported population and ratio rules, otherwise the host may fail POST.


Depending on your goals, there are volatile memory solutions available when the host is in AppDirect, where DRAM:PMem ratios are less strict and more configurable. Solutions that require no application code changes include:

  • Linux Kernel Memory Tiering
    • Available in Linux Kernel 5.15 and later. This feature demotes colder pages from DRAM to PMem (DRAM -> PMem). There are patches that promote pages back (PMem -> DRAM) when they become hot, and a custom kernel is available with both features.
  • Linux System-RAM
    • Available in Linux Kernel 5.4 and later. This feature allows you to provision PMem as memory-only NUMA nodes. You can then use cgroups, numactl, ndctl, etc. to decide how much DRAM and PMem an app has access to. This is foundational to the automatic memory tiering solution above.
  • MemVerge Memory Machine
    • Proprietary code that implements transparent memory tiering and snapshots, fast restore, and replication.
  • libmemtier (part of memkind)
    • A simple interposer library to manage page placement between DRAM and PMem
  • VMware Capitola (Tech Preview at the time of writing)
  • Metall (Research Paper)
    • A persistent memory allocator for data-centric analysis
  • Poseidon (Research Paper)
    • Safe, Fast, and Scalable persistent memory allocator
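
As a concrete sketch of the System-RAM option above: the device names (`region0`, `dax0.0`) are examples and will differ on your system, and the commands need root privileges on a kernel with the `dax_kmem` driver.

```shell
# Carve a devdax namespace out of a PMem region
ndctl create-namespace --mode=devdax --region=region0

# Hand the resulting DAX device back to the kernel as ordinary
# (volatile) system RAM on its own CPU-less NUMA node
daxctl reconfigure-device --mode=system-ram dax0.0

# The PMem capacity should now appear as an extra NUMA node
numactl --hardware
```

Once the PMem shows up as a NUMA node, standard tools (numactl, cgroup cpusets) can cap how much of it, and how much DRAM, a process may use.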


Options that require code changes:

  • Memkind
    • If you have access to the source code, you can use this library to have the application manage which volatile data resides in DRAM and which in PMem.
  • Persistent Memory Development Kit (PMDK)
    • A suite of libraries that deliver features for developers to utilize multiple tiers of memory.
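
To illustrate the memkind option, here is a minimal sketch of explicit DRAM vs PMem placement. It assumes libmemkind is installed, PMem is already online as a KMEM DAX NUMA node (see the System-RAM option above), and you link with `-lmemkind`.

```c
#include <memkind.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hot, latency-sensitive data stays in DRAM. */
    char *hot = memkind_malloc(MEMKIND_DEFAULT, 64);

    /* Cold, capacity-hungry data goes to the PMem NUMA node. */
    char *cold = memkind_malloc(MEMKIND_DAX_KMEM, 1 << 20);

    if (!hot || !cold) {
        fprintf(stderr, "allocation failed (is PMem online as KMEM DAX?)\n");
        return 1;
    }

    strcpy(hot, "index");
    memset(cold, 0, 1 << 20);

    /* Each allocation is freed with the kind it came from. */
    memkind_free(MEMKIND_DEFAULT, hot);
    memkind_free(MEMKIND_DAX_KMEM, cold);
    return 0;
}
```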


To summarise, Memory Mode is an easy button for sure, but it's not the only option for tiered memory solutions. One advantage of AppDirect is that you get to fully utilize and manage both DRAM and PMem.



Thanks Steve!


My project involves adding persistence to some data structure in AppDirect mode. The purpose of the Memory Mode evaluation was to compare the extended structure in AppDirect against the vanilla structure in Memory Mode, so the interesting tools you've listed won't help much in my case.


I have one more option in mind: if I limit the process to a single socket (out of the two I have), it'll be able to access only half of the DRAM and half of the PMem, right?


Yes, that should work. You can monitor the DRAM and PMem bandwidth and the hit rate (Data in DRAM) using the open-source PCM utility, specifically the `pcm-memory` command. 
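
For example, a single-socket run could look like this (the binary name is a placeholder; node 0 is assumed to be the first socket):

```shell
# Pin the workload's CPUs and allocations to socket 0 only,
# halving the DRAM (and PMem behind it) the process can reach
numactl --cpunodebind=0 --membind=0 ./my_benchmark

# In another terminal, watch per-socket DDR/PMem bandwidth
# and the Memory Mode hit rate with the PCM utility
pcm-memory
```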


