Intel® oneAPI Math Kernel Library

[HPCG] Compilation with GCC/GOMP/OpenMPI - error at run-time

STOFFEL__Mathieu
Beginner

Hello everyone,

 

Brief summary of the issue:

  • GCC version: 7.3.0;
  • A hybrid OpenMP/MPI version of HPCG was compiled from the source code delivered with Intel MKL;
  • It was linked against OpenMPI-3.0.1 and MKL 2018.3.222;
  • The compilation and linking completed without any errors or warnings;
  • At run time, when executed on a compute node with 2 sockets (E5-2620 v4) - 2 MPI ranks, 8 OpenMP threads per rank - an assertion fails in src/CheckProblem.cpp of the HPCG source code. The hpcg.dat file being used is the default one (192 192 192).

 

Some more details:

  • The setup file associated with the compilation/linkage of HPCG is attached to this message (a sketch of the build sequence is given after this list);
  • In order to run HPCG, I used the following commands:
export OMP_NUM_THREADS=8 ; mpirun -np 2 ./xhpcg_avx2
  • The aforementioned run-time error is the following:
xhpcg_avx2: src/CheckProblem.cpp:123: void CheckProblem(const SparseMatrix&, Vector*, Vector*, Vector*): Assertion `*currentIndexPointerG++ == curcol' failed
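
For completeness, the build was driven roughly as follows. This is only a sketch assuming the stock HPCG build system (configure plus setup/Make.* arch files); MPI_GCC_OMP_AVX2 is a placeholder arch name, and the exact flags are in the attached setup file.

cd ${MKLROOT}/benchmarks/hpcg      # HPCG sources shipped with Intel MKL
./configure MPI_GCC_OMP_AVX2       # placeholder arch / setup-file suffix
make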

 

Please feel free to ask for any information you think might be helpful; I'll do my best to provide it.

 

Thank you in advance for your help,

--Mathieu.

Gennady_F_Intel
Moderator

Mathieu, have you tried version 2019?

 

STOFFEL__Mathieu
Beginner

Thank you for your answer.

No, the only version of Intel MKL I have access to is the one I specified (namely 2018.3.222).

I am not sure whether I can get access to the newest version of Intel MKL.

Gennady_F_Intel
Moderator

OK, then we will check with the 2018 version. Here you can get the latest version if you want.

STOFFEL__Mathieu
Beginner

Once again, thank you for your answer. And, in advance, thanks for the time you will dedicate to helping me.

When looking at the requirements on the page you linked, I saw that Intel MKL had been tested alongside OpenMPI 1.8.x.

If I am not mistaken, several versions of OpenMPI are installed and available on the clusters I use (I am not sure which ones, though). If I tried compiling/linking against other versions of OpenMPI, would that make it easier for you to investigate the issue?

Gennady_F_Intel
Moderator

Yes, I see the same problem with version 2019 as well. We will check what is going wrong in this specific case. In the meantime, running the prebuilt AVX2 binaries didn't show any problems:

$ mpirun -np 2 ./xhpcg_avx2
HPCG result is VALID with a GFLOP/s rating of 29.870816
 

Alexander_K_Intel2

Hi Mathieu,

Can you add -DNDEBUG to your CXXFLAGS variable and build your binary again? And why do you use GCC instead of the Intel compiler to build HPCG, if it is not a secret?
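
For example, something like the following sketch (Make.MPI_GCC_OMP is a placeholder for whichever setup file you actually use):

# Append -DNDEBUG to CXXFLAGS so that the assert() in src/CheckProblem.cpp
# is compiled out, then rebuild.
sed -i 's/^CXXFLAGS *=.*/& -DNDEBUG/' setup/Make.MPI_GCC_OMP
make arch=MPI_GCC_OMP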

Thanks,

Alex

STOFFEL__Mathieu
Beginner

Hi Alexander,

 

First, thank you for your answer.

I tried building with -DNDEBUG before opening this thread. As far as I remember, I got the same error and no additional output.

I will have access to the test cluster in approximately 16 hours; I will test it again and update this post accordingly.

 

This is not a secret at all. I am working on a French research-oriented cluster (Grid5000, which Chameleon in the US was "inspired by"). Intel Cluster Studio is not available on this platform, so I cannot compile with the Intel compilation suite.

I tried to download the free version for academic/research purposes, but my institutional e-mail address does not seem to be recognised.

 

And, by the way, any progress on the issue?

Alexander_K_Intel2

Hi Mathieu,

We know about this issue and are currently investigating it. Actually, the issue is not related to the benchmark itself at all; it concerns the check of the matrix used by the optimized kernels. I checked the mentioned workaround on my side and it works correctly. Can you double-check it on your side? By the way, we publish pre-built executable files of HPCG with the MKL release, so why do you need to rebuild them?

Thanks,

Alex

STOFFEL__Mathieu
Beginner

Hi Alexander,

 

Thank you for your answer; you were right. I double-checked the "-DNDEBUG" workaround, and it works like a charm.

The first time I tried it, I must have made a typo (-NDEBUG or -DDEBUG).

 

Concerning the pre-built binaries of HPCG, I tried to use the version specific to machines with AVX2 support, but I could not make it work. Even though I sourced "compilervars.sh" and "mklvars.sh", 3 dynamic libraries related to MPI were still not found by the executable, and the OpenMPI distribution installed on the nodes did not seem to supply an equivalent of those libraries that I could have symlinked to. I assumed the binary was compiled/linked with Intel MPI, and since I do not have access to it, I tried to compile HPCG from the sources supplied with MKL. It is quite possible that I missed something.
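
Concretely, here is roughly how I checked which libraries were unresolved (a sketch; the mklvars.sh path assumes a default install):

source /opt/intel/mkl/bin/mklvars.sh intel64   # set up the MKL environment
ldd ./xhpcg_avx2 | grep 'not found'            # list the unresolved shared libraries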

 

If you have any clue on how to make the pre-built version work, I would appreciate it (I would rather use it than a custom-compiled version of HPCG).

If not, one more huge "thank you" for the workaround!

 

Regards,

--Mathieu.

Alexander_K_Intel2

Hi Mathieu,

Can you send me the log with the linking error via the forum or the Intel support ticket system?

Thanks,

Alex

STOFFEL__Mathieu
Beginner

Hi Alexander,

 

First of all, I am very sorry for the long delay in this answer; I have been quite busy lately.

 

The issue concerning the missing libraries stemmed from the fact that the Intel MPI libraries were not installed on the cluster I was using (only the Intel MKL libraries were).

After installing the Intel MPI libraries, I was able to use the pre-built version of HPCG.
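
For future readers, the environment setup then looked roughly like this (a sketch assuming default install paths for Intel MPI 2018 Update 3 and MKL; adjust to your installation):

source /opt/intel/impi/2018.3.222/intel64/bin/mpivars.sh   # Intel MPI runtime
source /opt/intel/mkl/bin/mklvars.sh intel64               # MKL environment
export OMP_NUM_THREADS=8
mpirun -np 2 ./xhpcg_avx2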

 

Thanks again to all the people who helped me through this thread,

--Mathieu.
