John_M_4
Beginner

"Cannot start collection because the Intel VTune Amplifier XE 2013 failed to create a result directory" when running with MPI

Hi all,

I get this error:

Cannot start collection because the Intel VTune Amplifier XE 2013 failed to create a result directory. Unknown error.

when I run the following amplxe-cl command:

mpiexec -np 3 ~/local/vtune_amplifier_xe_2013_update7/vtune_amplifier_xe_2013/bin64/amplxe-cl -r mpi003 --collect hotspots -- ~/local/MyBuilds/HDGProject/release/install/bin/MyFESolverDP


The errors only come from the non-master processes. If I run the above command with -np 1 (only one process), everything works fine. So, it seems that the non-master processes cannot create a results directory.
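For what it's worth, the symptom is consistent with several ranks on the same node racing to create one and the same result directory: `mkdir` is atomic, so only one process can win and the rest fail. A minimal shell sketch of that race (the directory name `mpi003_demo` is made up for illustration, not what VTune uses internally):

```shell
# Sketch of the suspected failure mode: three processes race to create
# the SAME result directory. mkdir is atomic, so exactly one succeeds
# and the others fail, mirroring errors from the non-master ranks only.
for i in 1 2 3; do
  ( mkdir mpi003_demo 2>/dev/null \
      && echo "process $i: created result directory" \
      || echo "process $i: failed to create result directory" ) &
done
wait
rmdir mpi003_demo   # clean up the demo directory
```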

Does anyone know what might be going on here, or have any suggestions?

Thank you for your time!

John

Peter_W_Intel
Employee

2013 Update 7 is a very old version (I don't know whether MPI was supported back then); you may want to try the latest release, 2015 Update 1.

Another tip: I recommend collecting with administrator privileges. In general, I work this way (data from all processes will be collected into one result):

> amplxe-cl -r mpi003 --collect hotspots -- mpiexec -np 3 ~/local/MyBuilds/HDGProject/release/install/bin/MyFESolverDP 

Dmitry_P_Intel1
Employee

Hello,

I would also recommend using a newer VTune Amplifier XE update.

Please also note:

VTune Amplifier extracts the MPI process rank from the environment variables PMI_RANK or PMI_ID (whichever is set) to make sure the process belongs to an MPI job and to capture the rank in the result directory name. If an alternative MPI implementation does not set these environment variables, VTune Amplifier does not capture the rank in the result directory name, and the usual automatic naming scheme for result directories is used. The default value for the -result-dir option is r@@@{at}, which produces a sequence of result directories like r000hs, r001hs, and so on.

MPICH-based MPI implementations usually set PMI_RANK.
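To verify that your launcher actually exports PMI_RANK to each rank, a quick check is to launch a trivial shell command instead of the solver, e.g. `mpiexec -np 3 sh -c 'echo rank=${PMI_RANK:-unset}'`. The snippet below merely simulates what an MPICH-style launcher does for a single rank, since PMI_RANK is normally set by mpiexec itself:

```shell
# Simulate an MPICH-style launcher exporting PMI_RANK to a child process.
# A real check under your launcher would be:
#   mpiexec -np 3 sh -c 'echo rank=${PMI_RANK:-unset}'
PMI_RANK=2 sh -c 'echo "this rank sees PMI_RANK=${PMI_RANK:-unset}"'
# prints: this rank sees PMI_RANK=2
```

If this prints `unset` under your real launcher, VTune cannot capture the rank and falls back to the default r@@@{at} naming scheme.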

Please also note that Peter's suggestion of running amplxe-cl on top of mpiexec will only capture the ranks launched on the same machine where mpiexec was started, so it will not cover the other nodes if your application is distributed across multiple nodes.

Thanks & Regards, Dmitry
