Background: I am having trouble installing a Xeon Phi *Coprocessor* 7120P because of an incompatibility with the Xeon Phi *Processor* (7210) software. It shows up as error messages like the following when attempting to install MPSS on a system that has an older version of XPPSL:
"file /usr/bin/micnativeloadex from install of mpss-coi-3.8-1.glibc2.12.x86_64 conflicts with file from package xppsl-coi-1.3.3-151.x86_64".
Common sense dictates I should try to update to the latest version of XPPSL for better compatibility with the latest version of MPSS. The xppsl packages currently on the system are version 1.3.3, installed by the OS (CentOS 7.3) during routine updates. I want to try version 1.5.0 because I think it may fix the incompatibility.
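For reference, this is roughly how I checked what is currently installed (standard rpm queries; the exact versions will of course differ on other systems):

```
# which installed package owns the conflicting file
rpm -qf /usr/bin/micnativeloadex

# list every XPPSL / MPSS package currently on the box
rpm -qa | grep -Ei 'xppsl|mpss'
```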
Information about the software is here, but the download page is frankly terrible: there are no instructions on what to install or how, what is needed to update all the components, or why the CentOS 7.3 package is so tiny compared to the 7.2 one (do I need to install the 7.2 package first?).
https://software.intel.com/en-us/articles/xeon-phi-software#lx1-5rel
Please advise. Thanks!
Hi Gabriel,
The page https://software.intel.com/en-us/articles/xeon-phi-software#lx1-5rel is for the Intel Xeon Phi Processor. Instructions on how to install it are included in the User Guide. The Intel Xeon Phi Processor is intended to run highly parallel code.
On the other hand, the Intel Manycore Platform Software Stack (MPSS) page https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-mpss is for the host machine that has the Intel Xeon Phi Coprocessor. This model is used when you want to combine scalar code running on the host machine with parallel code running on the coprocessor.
As of today, we don't see any benefit in running part of your parallel code on the Intel Xeon Phi Processor and part of it on the Intel Xeon Phi Coprocessor. Therefore, the two stacks, MPSS and XPPSL, are not supposed to work together.
I am not interested in the offload model. We have poured a lot of effort into parallelizing even traditionally serial parts of our codebase, and it works quite well.
As for how: have you heard of MPI? It lets you distribute highly parallel tasks across nodes, and the Xeon Phi coprocessor can appear as a node within the host machine. Users can even ssh into the coprocessor itself, or access the host from the coprocessor using virtual shared memory over the PCIe bridge. These usage patterns are arguably more powerful for highly threaded code than the "offload" model.
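Concretely, on a plain Xeon host where MPSS is installed and working, this is the kind of workflow I mean (mic0 is the default coprocessor hostname under MPSS; the rank counts and binary names are just placeholders, and as far as I recall Intel MPI wants I_MPI_MIC set for MIC targets):

```
# log straight into the coprocessor, as if it were any other node
ssh mic0

# hybrid MPI launch: some ranks on the host, some on the coprocessor
# (Intel MPI MPMD syntax; separate binaries built for host and MIC)
export I_MPI_MIC=enable
mpirun -n 8  -host localhost ./app.host : \
       -n 60 -host mic0      ./app.mic
```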
Intel knows about these modes and has published on them. It is therefore disingenuous to claim you "do not see a benefit" or use case for pairing a Xeon Phi Processor-based system with a coprocessor. This is an artificial limitation imposed by the poor decision to give files in the two stacks the same names. We paid good money for both of these tools. Without the ability to use the coprocessor on our servers, what good is it?
How can I ssh into the co-processor or use it in hybrid MPI without reliance on the broken MPSS components?
Thanks,
Gabe
Loc Nguyen: please contact me for a demonstration of "the benefit of running... parallel code on the Intel Xeon Phi processor and ... the Intel Xeon Phi coprocessor."
I don't want to detract from the seriousness of this bug (and it is a bug: the incompatibility is not documented anywhere, and a lot of investment goes to waste if it is not fixed). The fact that you can't see the benefit doesn't mean it shouldn't be fixed. That said, I really do see a "benefit" in convincing you that a few hundred extra cores can in fact be helpful in MPI environments.
Of course I agree with you that the "offload" model is not important for this case. This is obviously not the model we want to use.
My Intel Premier Business Support ID for this case is 6000164713 and I'd be delighted to chat with you more about this. If you need help repackaging or renaming a few files, please let me know.
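For example (using the package names from the error above; the exact rpm filename on your side may differ), the actual overlap could be listed with something like:

```
# files shipped by the installed XPPSL COI package
rpm -ql xppsl-coi | sort > xppsl-files.txt

# files shipped by the MPSS COI rpm we are trying to install
rpm -qlp mpss-coi-3.8-1.glibc2.12.x86_64.rpm | sort > mpss-files.txt

# the overlap that triggers the conflict
comm -12 xppsl-files.txt mpss-files.txt
```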
Hi Gabriel,
Thank you for your feedback. I see your point. The Intel Xeon Phi Coprocessor is designed to connect to an Intel Xeon host, not to an Intel Xeon Phi Processor. This is just a business decision.
If you want to use the Intel Xeon Phi Coprocessor as an MPI node, you can connect it to an Intel Xeon host. I will try to figure out how to chat with you regarding your suggestion.
Thank you
