I am trying to run the Linpack benchmark found here: https://software.intel.com/content/www/us/en/develop/articles/intel-mkl-benchmarks-suite.html
The Linpack benchmark uses mpirun, and I get the following error:
[root@7b514c24d78f mp_linpack]# ./runme_intel64_dynamic -p 2 -q 1 -b 384 -n 80000
This is a SAMPLE run script. Change it to reflect the correct number
of CPUs/threads, number of nodes, MPI processes per node, etc..
This run was done on: Mon Oct 4 18:19:23 UTC 2021
RANK=1, NODE=1
RANK=0, NODE=0
./runme_intel64_prv: line 30: 890 Bus error (core dumped) ./${HPL_EXE} "$@"
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 888 RUNNING AT 7b514c24d78f
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================
We believe the error lies with mpirun, since the following command produces a similar failure:
[root@f9103950d892]# mpirun -np 2 IMB-MPI1
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 2669 RUNNING AT f9103950d892
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 2670 RUNNING AT f9103950d892
= KILLED BY SIGNAL: 7 (Bus error)
===================================================================================
This is all being run inside a Docker container. I suspect there is an issue with running mpirun inside a Docker container. Is there a way to work around this?
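In case it is relevant, here is a quick way to inspect the container's shared-memory mount, which I understand MPI implementations use for intra-node communication (this is just a hunch on my part):
df -h /dev/shm  # Docker limits this mount to 64M unless --shm-size is passed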
Hi,
Thanks for reaching out to us.
Could you please provide the following details so we can investigate your issue further?
1. The version of the operating system being used.
2. The version of Intel oneAPI installed.
3. The usage model that you followed to run the application.
4. The steps to reproduce your issue.
Thanks & regards,
Santosh
1. CentOS 8
This is the Dockerfile I built, which answers the other questions:
FROM centos:8.3.2011 AS build
RUN dnf -y install epel-release && dnf group -y install "Development Tools" && dnf -y install wget cmake libarchive
RUN no_proxy=$(echo $no_proxy | tr ',' '\n' | grep -v -E '^\.?intel.com$' | tr '\n' ',') yum install intel-hpckit -y
RUN wget https://software.intel.com/content/dam/develop/external/us/en/documents/l_onemklbench_p_2021.2.0_109.tgz && tar -xvzf l_onemklbench_p_2021.2.0_109.tgz
RUN dnf install numactl -y
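For completeness, I build and start the container with the usual commands (the image tag below is just a placeholder I use locally):
docker build -t centos8-mkl-bench .
docker run -it centos8-mkl-bench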
These are the steps I take to run the benchmark that uses mpirun:
cd benchmarks_2021.2.0/linux/mkl/benchmarks/mp_linpack/
source /opt/intel/oneapi/setvars.sh
./runme_intel64_dynamic -p 2 -q 1 -b 384 -n 80000
I'm not sure what you mean about the usage model. Isn't the latest version of MPI included in the latest oneAPI HPC toolkit?
Hi,
Thanks for providing the details.
>>" Not sure what you mean about the usage model."
You can choose from three usage models for running your application using a Singularity container. For more information, refer to the link below.
https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-guide-linux/top/running-applications/running-intel-mpi-library-in-containers/run-the-application-with-a-container.html
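For illustration, one of those models launches the MPI job from the host and executes each rank inside the container, along the lines of this sketch (the image name and application path are placeholders, and this assumes Singularity is installed on the host):
mpirun -n 2 singularity exec ./impi_app.sif /path/to/your_app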
>>"Isn't the latest version of mpi included in the latest oneAPI hpc toolkit?"
Yes, the latest oneAPI HPC toolkit includes the latest version of Intel MPI Library.
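As a quick sanity check, you can confirm which version the environment picks up after sourcing setvars.sh:
source /opt/intel/oneapi/setvars.sh
mpirun -V  # should report the bundled Intel MPI Library version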
We are working on your issue and we will get back to you soon.
Thanks & Regards,
Santosh
Hi,
We are able to reproduce your issue on our end. However, we were able to run the Linpack benchmark successfully on an Ubuntu 18.04 machine using the workaround below:
- The Dockerfile that we built on the Ubuntu machine is given below:
FROM tacc/tacc-ubuntu18-impi19.0.7-common:latest
RUN apt-get update && apt-get upgrade -y && apt-get install wget
RUN wget --user-agent="Mozilla" https://www.intel.com/content/dam/develop/external/us/en/documents/l_onemklbench_p_2021.2.0_109.tgz
RUN tar -xvzf l_onemklbench_p_2021.2.0_109.tgz
- To build a new MPI-capable image, use the command below:
docker build -t USERNAME/pi-estimator:0.1-mpi -f Dockerfile.mpi .
Note: Don't forget to change USERNAME to your Docker Hub username!
- To push the built image to Docker Hub, use the command below:
docker push USERNAME/pi-estimator:0.1-mpi
- To run the Linpack benchmark, use the below command:
docker run --rm -it USERNAME/pi-estimator:0.1-mpi \
    benchmarks_2021.2.0/linux/mkl/benchmarks/mp_linpack/runme_intel64_dynamic -p 2 -q 1 -b 384 -n 80000
Could you please try the above workaround using "Intel MPI Library 2019 update 7" and let us know the outcomes?
Meanwhile, we will get back to you regarding the error while using Intel oneAPI HPC Toolkit 2021.4.
Thanks & regards,
Santosh
Hi,
We haven't heard back from you. Have you tried the workaround above? Please get back to us if you face any issues while following it.
Thanks & Regards,
Santosh
Hi,
>>"Meanwhile, we will get back to you regarding the error while using Intel oneAPI HPC Toolkit 2021.4."
We are able to run IMB-MPI1 and the Linpack benchmark successfully using the latest Intel oneAPI 2021.4 as well.
We followed the steps below:
- docker pull intel/oneapi-hpckit
- docker run --shm-size=4gb -it intel/oneapi-hpckit
- cd /opt/intel/oneapi/mkl/latest/benchmarks/mp_linpack
- ./runme_intel64_dynamic -p 2 -q 1 -b 384 -n 80000
In brief, adding the "--shm-size=4gb" option to the "docker run" statement resolves this issue.
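Our understanding (an assumption based on Docker's documented default, rather than something confirmed by the benchmark itself) is that Intel MPI uses the /dev/shm mount for intra-node communication, and Docker caps that mount at 64 MB unless told otherwise, which is far too small for two Linpack ranks. You can confirm the new limit from inside the container:
df -h /dev/shm  # should report 4.0G after --shm-size=4gb, versus the 64M Docker default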
If this resolves your issue, please accept it as the solution, as this would help others with similar issues.
Thank you!
Best Regards,
Santosh
Hi,
Thanks for accepting our solution. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.
Thanks & Regards,
Santosh