All, I am doing some research, just getting started, on my Computer Science graduate project, and one of the questions we hope to answer is whether it is possible to "engage" each of the cores using MPI on a 64-bit quad-core machine.
Has anyone done this successfully or is OpenMP the way to go?
Here is the situation:
We have an in-house distributed simulation kernel (C++/Linux) that uses MPI for message passing. It runs on a number of quad-core Intel machines connected by a high-speed network. Unfortunately, since migrating to the 64-bit architecture we have seen memory issues and have only been able to engage a single core per machine. We are not 100% certain of the root cause at this point, which is part of what the project needs to tackle. It could be any of a number of issues, ranging from the machines' initial setup/software all the way through to the simulation code itself.
We are trying to troubleshoot the issue more thoroughly and, if needed, identify any code that may have to be changed to make the simulation kernel run on the new 64-bit architecture across all cores.
Currently the memory management in the kernel is implemented using techniques from the book "Modern C++ Design: Generic Programming and Design Patterns Applied" by Andrei Alexandrescu. I inferred this from my review of the code and the various references to the book in comment sections. I have purchased the book and am awaiting its arrival (5-10 days). Are there any large (blatantly obvious) differences between 32-bit and 64-bit memory management? What should I look for in memory management code that should set off alarm bells about problems with the current design of the simulation kernel? I am also looking for a good book on 64-bit memory management, architecture, and multi-core development. Any suggestions?
Certainly, MPI (possibly with MPI_THREAD_FUNNELED) is a proven method for managing all the cores on a cluster. You seem to be vacillating as to whether you want a cluster or a single shared-memory machine, where OpenMP by itself would be another option: simpler, but with more limitations.

In the last year, OpenMPI and Intel MPI have been the most successful MPIs in my experience. Both have excellent built-in schemes for processor affinity. As Jim suggested, you can find the experts on Intel MPI on the Intel HPC forum, while OpenMPI has an excellent public forum which you would find useful to follow.

With so many issues on your plate, I wonder if you have time to deal with low-level memory management issues where you might see a difference between 32- and 64-bit (beyond the vastly expanded upper limit), unless you have gone out of your way to write non-portable code. You have raised too many issues to expect adequate coverage in a single book.
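As a concrete starting point for the "only one core per machine" symptom, check how the job is launched. With OpenMPI, a hostfile that declares the slots per node plus an explicit binding policy is usually all it takes to get one rank per core; the hostnames below are placeholders for your own machines.

```shell
# hostfile: tell OpenMPI each node has 4 slots (one per core)
cat > hosts <<'EOF'
node01 slots=4
node02 slots=4
EOF

# Launch 8 ranks, bind each rank to its own core, and print the
# bindings so you can verify all four cores per node are engaged.
mpirun -np 8 --hostfile hosts --bind-to core --report-bindings ./sim_kernel
```

If a hostfile without slots (or an old launch script hardcoding -np equal to the node count) is in use, MPI will happily start one rank per machine, which would look exactly like your single-core problem without any code being at fault.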