Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Interpreting Intel® Cluster Checker results

Amit1
Beginner
5,025 Views

Hi,

We are trying to diagnose issues with the machine “host-e8” when launching MPI jobs on it.

 

A simple MPI ring application is failing with the following error when host-e8 is included in the hostfile.

 

Abort(1615503) on node 4 (rank 4 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:

MPIR_Init_thread(136)........:

MPID_Init(904)...............:

MPIDI_OFI_mpi_init_hook(1421):

MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card

 

This application works fine when host-e8 is excluded from the hostfile (machine list).

 

To analyze this issue, we tried using Intel Cluster Checker, which was recommended in another post on this forum.

I have attached the corresponding cluster checker log with this post.

 

Can you please help us interpret this log? It seems to mostly contain differences between the various hosts specified with “-f (machine list)”, without really highlighting any issue with host-e8 that could explain this error.

It would also be helpful if you could recommend potential remedies.

 

Thanks,

_Amit

 

22 Replies
SantoshY_Intel
Moderator
4,470 Views

Hi,

 

Thanks for reaching out to us.

 

Could you provide details for the questions below so that we can investigate your issue properly?

 

1. Are you able to run a sample MPI program on the failed node (host-e8)?

2. Does your application include hybrid MPI/OpenMP code? If yes, could you please share the code with us?

 

>>"It will be useful if you can also recommend potential remedies."

Set I_MPI_PLATFORM to "auto" using the command below:

export I_MPI_PLATFORM=auto

Now run your application on multiple nodes by setting I_MPI_DEBUG=10 and share the complete debug log with us.
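
For example, a minimal sketch of the suggested run (the hostfile name and rank count below are placeholders; adjust them to your setup):

export I_MPI_PLATFORM=auto
export I_MPI_DEBUG=10
mpirun -f <hostfile> -n <ranks> ./your_mpi_app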

 

Thanks & Regards,

Santosh

 

 

Amit1
Beginner
4,458 Views

Hi Santosh,

 

Thanks for your reply.

 

To answer your questions,

-> We are not able to run the sample MPI program on the failed host (host-e8).
-> I don't think the sample MPI program uses any hybrid code of MPI/OpenMP.
     It would be good if you could provide further details/examples of this hybrid usage, and I can try to look for it.

Based on your recommendation, I tried setting the following two environment variables :-

I_MPI_PLATFORM=auto
I_MPI_DEBUG=10

and the resulting failing log for the sample MPI ring application was as follows:-

[0] MPI startup(): libfabric version: 1.10.0a1-impi
[0] MPI startup(): libfabric provider: tcp;ofi_rxm
Abort(1615503) on node 4 (rank 4 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136)........:
MPID_Init(904)...............:
MPIDI_OFI_mpi_init_hook(1421):
MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card
Abort(1615503) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136)........:
MPID_Init(904)...............:
MPIDI_OFI_mpi_init_hook(1421):
MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card
Abort(1615503) on node 8 (rank 8 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136)........:
MPID_Init(904)...............:
MPIDI_OFI_mpi_init_hook(1421):
MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card

 

Please advise on the next steps.

 

Thanks & Regards,

_Amit

SantoshY_Intel
Moderator
4,411 Views

Hi,


Could you please provide the command you used to run the MPI sample on a single node, i.e., host-e8?

Also please provide us the complete debug log after running the MPI sample on host-e8 with I_MPI_DEBUG=10.


Awaiting your reply.


Thanks & Regards

Santosh


Amit1
Beginner
4,398 Views

Hi Santosh,

Thanks for your reply.

 

When I run the MPI sample program with a machine list that comprises only the single node "host-e8", it works fine, but once I add any other host to the list, it does not work.

For the case with more than one host (not working), I have already shared the logs.

 

For the working case with the single host "host-e8", the logs are as follows :-

Command : ./run.sh -np 2 -hostfile machlist1 -s 2 -gcc /med/build/gcc/gcc-6.2.0/rhel6/bin/gcc

 

Log:

/usr/bin/gcc -I/scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/./intelmpi/intel64/include /scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/./ring_c.c -o /scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/ring_c -L/scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/./intelmpi/intel64/lib/release -lmpi -L/scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/./intelmpi/intel64/libfabric/lib -lfabric
/scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/./intelmpi/intel64/bin/mpirun -machinefile machlist1 -n 2 /scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/ring_c 2
[0] MPI startup(): libfabric version: 1.10.0a1-impi
[0] MPI startup(): libfabric provider: tcp;ofi_rxm
[0] MPI startup(): selected platform: unknown
[0] MPI startup(): Rank Pid Node name Pin cpu
[0] MPI startup(): 0 163464 host-e8 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,
30,31,32,33,34,35,36,37,38,39}
[0] MPI startup(): 1 163465 host-e8 {40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66
,67,68,69,70,71,72,73,74,75,76,77,78,79}
[0] MPI startup(): I_MPI_ROOT=/scratch/userA/MachineIssue/RingApplication/mpi_test_package_2/intelmpi
[0] MPI startup(): I_MPI_MPIRUN=mpirun
[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc
[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default
[0] MPI startup(): I_MPI_PLATFORM=auto
[0] MPI startup(): I_MPI_DEBUG=10
process 1 on host host-e8 163465
process 0 on host host-e8 163464
Process 0 (host-e8) sending 2 to 1, tag 201 (2 processes in ring)
Process 0 (host-e8) sent to 1
Process 0 (host-e8) received message 2 from process 1
Process 0 (host-e8) decremented value: 1
Process 1 (host-e8) received message 2 from process 0
Process 1 (host-e8) sent message 2 to process 0
Process 0 (host-e8) sent message 1 to process 1
Process 0 (host-e8) received message 1 from process 1
Process 0 (host-e8) decremented value: 0
Process 0 (host-e8) sent message 0 to process 1
Process 1 (host-e8) received message 1 from process 0
Process 1 (host-e8) sent message 1 to process 0
Process 0 exiting
Process 1 (host-e8) received message 0 from process 0
Process 1 (host-e8) sent message 0 to process 0
Process 1 exiting

 

Please advise on the next steps.

 

Thanks & Regards,

_Amit

SantoshY_Intel
Moderator
4,375 Views

Hi,

>>"Command : ./run.sh -np 2 -hostfile machlist1 -s 2 -gcc /med/build/gcc/gcc-6.2.0/rhel6/bin/gcc"

To investigate further, could you please provide us with the contents of run.sh?

 

Thanks & Regards,

Santosh

 

Amit1
Beginner
4,366 Views

Hi Santosh,

 

I have attached run.sh.tar.gz with this message.

Command used to run

./run.sh -np 2 -hostfile machlist1 -s 2 -gcc /usr/bin/gcc

 

Please let me know if you need any further information from me on this.

 

Thanks & Regards,

_Amit

 

 

SantoshY_Intel
Moderator
4,324 Views

Hi,

 

We tested run.sh and the command you used to run the sample on host-e8; they work fine and we didn't find any issues with them.

In the cluster checker log that you provided, we observed the issues ethernet-driver-version-is-not-consistent and ethernet-interrupt-coalescing-state-not-uniform.

So, try using a consistent Ethernet driver version on host-e8, follow the remedy provided in the log for ethernet-interrupt-coalescing-state-not-uniform, and then run the sample on the heterogeneous nodes including host-e8. If the issue still persists, please get back to us.
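
For reference, the driver version and the current interrupt-coalescing state can be compared across nodes with ethtool, for example (a sketch; replace eno1 with the actual interface name on your nodes):

/sbin/ethtool -i eno1   # reports the Ethernet driver name and version
/sbin/ethtool -c eno1   # reports the current interrupt-coalescing settings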

 

Thanks & Regards,

Santosh

 

SantoshY_Intel
Moderator
4,306 Views

Hi,


We haven't heard back from you. Please confirm whether your issue is resolved so that we can close this thread.


Thanks & Regards,

Santosh


Amit1
Beginner
4,299 Views

Hi Santosh,

 

Thanks for your email.

 

This is to confirm that this issue is not yet resolved for us.
We are in the process of engaging IT to try your recommendations.

 

Meanwhile, can you please respond to the following questions?

-> There are 30+ issues in the Intel Cluster Checker log file.
Is there something in the log that confirms that the two issues (ethernet-driver-version-is-not-consistent & ethernet-interrupt-coalescing-state-not-uniform) you listed in your response are real errors while the others are just warnings?
Or
Is it just that the issues you listed specifically correspond to host-e8?

-> The issue ethernet-driver-version-is-not-consistent is listed for two hosts, host-e8 and host-a2.
Does this mean that for both of these hosts the driver version is not consistent with the other hosts?
Also, if this is an issue, then why is host-a2 not exhibiting any issues when used for MPI launches along with a list of other hosts that does not include host-e8?


Thanks & Regards,
__AMIT

SantoshY_Intel
Moderator
4,270 Views

Hi,


>>"Is there something in the log that confirms that the two issues (ethernet-driver-version-is-not-consistent & ethernet-interrupt-coalescing-state-not-uniform ) that you have listed out in your response are real errors while others are just warnings. (Or) Is it just that the issues you have listed out specifically correspond to host-e8."

--> Yes, since the issues we listed correspond specifically to host-e8, we think that "ethernet-driver-version-is-not-consistent & ethernet-interrupt-coalescing-state-not-uniform" might be the issues with host-e8.


>>"Also, if this is an issue then why host-a2 is not exhibiting any issues when used for MPI launches along with a list of other hosts not including host-e8."

--> We think the Ethernet driver version of host-a2 might be aligned with the other nodes, with host-e8 being the exception.

As you said, "host-e8 is exhibiting issues when used for MPI launches along with a list of other hosts", so we think it might be an issue only with host-e8.


Thanks & Regards,

Santosh



SantoshY_Intel
Moderator
4,234 Views

Hi,


We haven't heard back from you. Did the solution provided help? Please get back to us if the issue still persists. If not, could you please confirm that we can close this thread?


Thanks & Regards,

Santosh


Amit1
Beginner
4,225 Views

Hi Santosh,

Thanks for checking.

 

Unfortunately, this issue is not yet solved for us.
Based on the recommendations, we changed the settings on host-e8, which now look as follows:-

 

Coalesce parameters for eno1:
Adaptive RX: off TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0

rx-usecs: 20

 

This is the same as on the other good machines in the machine list.
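
For reference, a sketch of how such values can be applied with ethtool, assuming the eno1 interface (the exact command used may have differed):

/sbin/ethtool -C eno1 adaptive-rx off adaptive-tx off rx-usecs 20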

What is surprising is that Intel Cluster Checker continues to list ethernet-interrupt-coalescing-state-not-uniform as a concern, just like before.
Please advise.


Thanks & Regards,
_Amit

SantoshY_Intel
Moderator
4,216 Views

Hi,

 

Could you provide the complete log after adding the -check_mpi flag, in addition to I_MPI_DEBUG & I_MPI_PLATFORM, while running the sample?

 

To use -check_mpi, see the example below:

I_MPI_DEBUG=20 mpirun -check_mpi -np 2  ./sample
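
With both environment variables set, the full invocation would look something like the sketch below (the hostfile name and rank count are placeholders):

export I_MPI_PLATFORM=auto
export I_MPI_DEBUG=20
mpirun -check_mpi -f <hostfile> -n <ranks> ./sample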

>>"What is surprising is that intel cluster-checker continues to list ethernet-interrupt-coalescing-state-not-uniform as a concern like before."

Could you please share with us the recent cluster checker log?

 

Thanks & regards,

Santosh

 

 

 

SantoshY_Intel
Moderator
4,193 Views

Hi,

We haven't heard back from you. Is your issue resolved? If not, could you please provide the details that have been asked in my previous post?


Thanks,

Santosh


Amit1
Beginner
4,175 Views

Thanks Santosh for your message.


I am seeing the following output for the sample ring application.

With the following in the environment :-

I_MPI_PLATFORM=auto
I_MPI_DEBUG=20

 

Output Log:

ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
[0] MPI startup(): libfabric version: 1.10.0a1-impi
[0] MPI startup(): libfabric provider: tcp;ofi_rxm
[0] MPI startup(): max_ch4_vcis: 1, max_reg_eps 1, enable_sep 0, enable_shared_ctxs 0, do_av_insert 1
[0] MPI startup(): addrname_len: 16, addrname_firstlen: 16
Abort(1615503) on node 6 (rank 6 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136)........:
MPID_Init(904)...............:
MPIDI_OFI_mpi_init_hook(1421):
MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card
Abort(1615503) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136)........:
MPID_Init(904)...............:
MPIDI_OFI_mpi_init_hook(1421):
MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card
Abort(1615503) on node 4 (rank 4 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136)........:
MPID_Init(904)...............:
MPIDI_OFI_mpi_init_hook(1421):
MPIDU_bc_table_create(338)...: Missing hostname or invalid host/port description in business card

 

Please note that the following messages also show up for successful runs that do not involve host-e8, so I am not sure why these are being called errors.

ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.

 

The most recent cluster checker log continues to report the following :-

ethernet-interrupt-coalescing-state-not-uniform
Message: Ethernet interrupt coalescing is not enabled/disabled uniformly
across nodes in the same grouping.
Remedy: Append "/sbin/ethtool -C eno1 rx-usecs <value>" to the site
specific system startup script. Use '0' to permanently disable
Ethernet interrupt coalescing or other value as needed. The
site specific system startup script is typically
/etc/rc.d/rc.local or /etc/rc.d/boot.local.
1 node: host-e8
Test: ethernet
Details:
#Nodes State Interface Nodes
1 enabled eno1 host-e8
1 enabled eno3 host-e8
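
For reference, the current coalescing state of the two flagged interfaces on host-e8 can be checked with ethtool, e.g. (interface names taken from the Details section above):

/sbin/ethtool -c eno1
/sbin/ethtool -c eno3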

 

Thanks,
_Amit

 

SantoshY_Intel
Moderator
4,163 Views

Hi Amit,

 

From the above response, we see that no -check_mpi output is present.

To use the -check_mpi flag, follow the steps below:

 

source /opt/intel/oneapi/setvars.sh
clck --version

 

If the version details are available, then we can use the -check_mpi flag.

Now use the -check_mpi flag as below in the "run.sh" file at line 94 (you attached the run.sh file earlier).

 

$OMPI_ROOT/bin/mpirun -check_mpi $mpiargs $curDir/ring_c $ringargs
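
Putting it together, a typical invocation might look like the sketch below (the paths and arguments are taken from your earlier posts; adjust as needed):

source /opt/intel/oneapi/setvars.sh
./run.sh -np 2 -hostfile machlist1 -s 2 -gcc /usr/bin/gcc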

 

Now, could you please attach your complete debug information? (Please do not truncate the log/error.)

Also, please attach a recent file that includes the complete cluster checker log with details of all the nodes.

 

Thanks,

Santosh

 

 

Amit1
Beginner
4,150 Views

Hi Santosh,

 

Thanks for your reply.

 

It seems that there is some confusion here.

In your previous post, you mentioned using -check_mpi as an argument to mpirun when running the sample.

(Could you provide the complete log by keeping -check_mpi flag in addition to I_MPI_DEBUG & I_MPI_PLATFORM while running the sample?)

 

The sample ring application is being run in an independent terminal that has no settings for clck (cluster checker).
So the dependency between the clck version and the usage of -check_mpi with the sample ring application is unclear.

Also, the -check_mpi flag was specified with the sample ring application, and run.sh:94 was modified from

$OMPI_ROOT/bin/mpirun $mpiargs $curDir/ring_c $ringargs

to

$OMPI_ROOT/bin/mpirun -check_mpi $mpiargs $curDir/ring_c $ringargs

 

For what it is worth, the cluster checker in use has the following version:-

Intel(R) Cluster Checker 2021 Update 1 (build 20201104)
Copyright (C) 2006-2020 Intel Corporation. All rights reserved.

 

Also, the logs corresponding to the sample application were not truncated, and the logs corresponding to the cluster checker were complete with respect to ethernet-interrupt-coalescing-state-not-uniform.

 

We can share additional information once we have clarity on what needs to be generated and how the dependency between the cluster checker and the sample ring application is established.

 

Thanks,

_Amit

 

 

 

SantoshY_Intel
Moderator
4,129 Views

Hi,


Please find the below details:


>>"So the dependency between clck version and usage of -check_mpi with sample ring application is unclear."

There is no dependency between clck and check_mpi.


>>"If the version details are available, then we can use -check_mpi flag. "

We asked you to check the version details to make sure the Intel Trace Analyzer and Collector (ITAC) environment is set up, since initializing the ITAC environment is essential before using the check_mpi option. I should have asked you to check the ITAC version instead, so that there would not be any confusion. Sorry for the miscommunication.


check_mpi:

We use this option to perform correctness checking of an MPI application. You can run the application with the -check_mpi option of mpirun.

For example:

$ mpirun -check_mpi -n 4 ./myApp

So we asked you to use this option in order to perform correctness checking of your sample application on host-e8, and hence provided the following command:

$OMPI_ROOT/bin/mpirun -check_mpi $mpiargs $curDir/ring_c $ringargs


>>"$OMPI_ROOT/bin/mpirun -check_mpi $mpiargs $curDir/ring_c $ringargs For what it is worth, cluster checker in use has the following version:-"

The above command is given to make use of the check_mpi option only. It has nothing to do with the cluster checker.


>>"Also, logs corresponding to sample application were not truncated and logs corresponding to cluster checker were also complete w.r.t ethernet-interrupt-coalescing-state-not-uniform."

  1. We are expecting a complete debug log of the sample application run on host-e8 as an attachment.
  2. Also, a complete cluster checker log file generated by clck (not only ethernet-interrupt-coalescing-state-not-uniform, but the complete cluster checker log) as an attachment.


>>" how do we establish the dependency between cluster checker and the sample ring application."

  1. There is no dependency between the cluster checker and the sample ring application. We asked you to add the check_mpi option to check the correctness of the sample MPI application on host-e8 while it runs.
  2. In addition, we expect you to provide us with a complete cluster checker log, as you did earlier when posting the question in the Intel community.


Hope everything is clear now.


Thanks,

Santosh


SantoshY_Intel
Moderator
4,107 Views

Hi,


We haven't heard back from you. Is your issue resolved? Please get back to us if the issue persists.


Thanks,

Santosh


Amit1
Beginner
4,091 Views

Hi Santosh,

Thanks for checking with us.
No, this issue is not yet resolved for us.
I have attached the requested logs with this post.

 

The MPI ring application has been run with the following set in the environment.
I_MPI_PLATFORM=auto
I_MPI_DEBUG=10

 

Please note that, as stated earlier, the following lines are also printed for successful runs that do not involve host-e8.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libVTmc.so' from LD_PRELOAD cannot be preloaded: ignored.

 

Please update us with your findings.

 

Thanks,
_Amit
