Software Archive
Read-only legacy content

hpl-2.0 on 2 MIC cards

Sangamesh_B_
Beginner

Hi,

    I'm trying to run hpl-2.0 (which I downloaded and compiled manually) on the host system + 2 MIC cards. It fails with the following error:

rank = 25, revents = 29, state = 1
Assertion failed in file ../../socksm.c at line 2963: (it_plfd->revents & POLLERR) == 0
internal ABORT - process 0
APPLICATION TERMINATED WITH THE EXIT STRING: Interrupt (signal 2)

The mpirun command used is:

export I_MPI_MIC=enable

/opt/ics/impi/4.1.1.036/intel64/bin/mpirun  -genv  I_MPI_FABRICS=shm:tcp -n 24 -host da-1 ./xhpl.cpu.T : -n 1 -host mic0 -env OMP_NUM_THREADS=244 /tmp/xhpl.mic.T : -n 1 -host mic1 -env OMP_NUM_THREADS=244 /tmp/xhpl.mic.T

What could be the reason for this?

3 Replies
Loc_N_Intel
Employee

Hi San,

Let me investigate this issue. Could you give me the link where you downloaded hpl-2.0 and tell me how you compiled it?

Thank you very much.

Loc_N_Intel
Employee

Hi San,

I ran hpl-2.1 successfully on my system with two coprocessors. I haven’t tried hpl-2.0 yet. Below are the steps I followed:

  1. Get the hpl-2.1 source code from http://www.netlib.org/benchmark/hpl/hpl-2.1.tar.gz

  2. Get the makefile for MIC that Francesca posted in this thread

  3. Update this makefile with your current Intel® compiler and Intel® MPI Library. In my case, I used Intel Composer XE 2013 SP1 Update 2 and Intel MPI 4.1.3.

  4. Set env variables

    # source /opt/intel/composer_xe_2013_sp1.2.144/bin/compilervars.sh intel64

    # source /opt/intel/impi/4.1.3.048/bin64/mpivars.sh

  5. Build the application for MIC. The results are saved in ../hpl-2.1/bin/IntelMIC

  6. Modify the previous makefile to build the application for the host: change the MKL lib and include paths to intel64 instead of mic, and remove "-mmic" from the makefile.

  7. Build the application for host. The results are saved in ../hpl-2.1/bin/IntelHost
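Steps 5-7 can be sketched as a small dry-run script. HPL builds with "make arch=<name>" given a top-level makefile named Make.<name>; the names IntelMIC and IntelHost below are assumptions based on the output directories mentioned above. Remove the echo (and run from the hpl-2.1 top directory) to build for real:

```shell
#!/bin/sh
# Dry-run sketch of steps 5-7: build the MIC and host binaries.
# Assumes makefiles Make.IntelMIC and Make.IntelHost exist in the
# hpl-2.1 top directory (hypothetical names following HPL's convention).
for arch in IntelMIC IntelHost; do
  echo "make arch=${arch}"   # results land in bin/${arch}/
done
```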

  8. Transfer the application for MIC (the /bin/xhpl binary and the /bin/HPL.dat input file) to the /tmp directory on mic0 and mic1.

    # cd /opt/hpl-2.1-TEST/bin/IntelMIC

    # scp xhpl mic0:/tmp/

    # scp HPL.dat mic0:/tmp/

    # scp xhpl mic1:/tmp/

    # scp HPL.dat mic1:/tmp/
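The four scp commands in step 8 can be collapsed into a loop over the cards; this sketch only echoes the commands (a dry run), so nothing is copied until you drop the echo:

```shell
#!/bin/sh
# Dry-run sketch of step 8: push the MIC binary and input file to each card.
# Run from /opt/hpl-2.1-TEST/bin/IntelMIC; remove "echo" to copy for real
# (assumes mic0 and mic1 are reachable over ssh).
for card in mic0 mic1; do
  echo scp xhpl HPL.dat "${card}:/tmp/"
done
```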

  9. Transfer all the necessary MPI libraries to mic0 and mic1.

    # scp /opt/intel/impi/4.1.3.048/mic/bin/* mic0:/bin/

    # scp /opt/intel/impi/4.1.3.048/mic/lib/* mic0:/lib64/

    # scp /opt/intel/composer_xe_2013_sp1.2.144/compiler/lib/mic/* mic0:/lib64/

    # scp /opt/intel/impi/4.1.3.048/mic/bin/* mic1:/bin/

    # scp /opt/intel/impi/4.1.3.048/mic/lib/* mic1:/lib64/

    # scp /opt/intel/composer_xe_2013_sp1.2.144/compiler/lib/mic/* mic1:/lib64/
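Step 9 follows the same pattern, so it can also be written as one loop. The paths are the ones from this post; this is again a dry-run sketch (remove the echo to perform the copies, at which point the globs expand on the host):

```shell
#!/bin/sh
# Dry-run sketch of step 9: stage the MIC-side MPI and compiler runtime
# libraries on both cards. Remove "echo" to run for real.
IMPI=/opt/intel/impi/4.1.3.048/mic
CXE=/opt/intel/composer_xe_2013_sp1.2.144/compiler/lib/mic
for card in mic0 mic1; do
  echo scp ${IMPI}/bin/* ${card}:/bin/
  echo scp ${IMPI}/lib/* ${card}:/lib64/
  echo scp ${CXE}/* ${card}:/lib64/
done
```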

  10. Enable MIC communication and configure MPSS peer-to-peer.

    # export I_MPI_MIC=enable

    # /sbin/sysctl -w net.ipv4.ip_forward=1
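Before launching, it can help to confirm that the forwarding setting from step 10 actually took. This sketch just reads the proc entry (only the sysctl -w itself needs root):

```shell
#!/bin/sh
# Sketch: report whether IPv4 forwarding (needed for host<->MIC routing
# in this setup) is currently on; degrades gracefully if /proc is absent.
f=/proc/sys/net/ipv4/ip_forward
if [ -r "$f" ] && [ "$(cat "$f")" = "1" ]; then
  echo "ip_forward: enabled"
else
  echo "ip_forward: disabled (run: /sbin/sysctl -w net.ipv4.ip_forward=1)"
fi
```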

  11. Finally, I ran the command you wanted successfully:

    # cd /opt/hpl-2.1-TEST/bin/IntelHost

    # mpirun -genv I_MPI_FABRICS=shm:tcp -n 24 -host localhost ./xhpl HPL.dat : -n 1 -host mic0 -env OMP_NUM_THREADS=244 -wdir /tmp ./xhpl : -n 1 -host mic1 -env OMP_NUM_THREADS=244 -wdir /tmp ./xhpl


Loc_N_Intel
Employee

I just tested with hpl-2.0, and I can confirm that the above procedure also works for hpl-2.0 downloaded from http://www.netlib.org/benchmark/hpl

Thank you.
