node_counts array: 2 4 8 packed array: 48 96 192 [0] MPI startup(): Multi-threaded optimized library [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): dapl data transfer mode [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [0] MPI startup(): dapl data transfer mode [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63133 n0471 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 1 39621 n0483 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=1:0 0 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:41:51 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -input IMB_SELECT_MPI1 -msglen ./msglens # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # PingPong #--------------------------------------------------- # Benchmarking PingPong # #processes = 2 #--------------------------------------------------- #bytes #repetitions t[usec] Mbytes/sec 524288 80 94.01 5318.42 # All processes entering MPI_Finalize Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ... Global options (passed to all executables): Global environment options: -genv {name} {value} environment variable name and value -genvlist {env1,env2,...} environment variable list to pass -genvnone do not pass any environment variables -genvall pass all environment variables not managed by the launcher (default) Other global options: -f {name} | -hostfile {name} file containing the host names -hosts {host list} comma separated host list -configfile {name} config file containing MPMD launch options -machine {name} | -machinefile {name} file mapping procs to machines -pmi-connect {nocache|lazy-cache|cache} set the PMI connections mode to use -pmi-aggregate aggregate PMI messages -pmi-noaggregate do not aggregate PMI messages -trace {} trace the application using profiling library; default is libVT.so -trace-imbalance {} trace the application using imbalance profiling library; default is libVTim.so -check-mpi {} check the application using checking library; default is libVTmc.so -ilp64 Preload ilp64 wrapper library for support default size of integer 8 bytes -mps start statistics gathering for MPI Performance Snapshot (MPS) -trace-pt2pt collect information about Point to Point operations -trace-collectives collect information about Collective operations -tune [] apply the tuned data produced by the MPI Tuner utility -use-app-topology perform optimized rank placement based statistics and cluster topology -noconf do not use any mpiexec's configuration files -branch-count {leaves_num} set the number of children in tree -gwdir {dirname} working directory to use -gpath {dirname} path to executable to use -gumask {umask} mask to perform umask -tmpdir {tmpdir} temporary directory for cleanup input file -cleanup create input file for clean up -gtool {options} apply a tool over the mpi application -gtoolfile {file} apply a tool over the mpi application. Parameters specified in the file Local options (passed to individual executables): Local environment options: -env {name} {value} environment variable name and value -envlist {env1,env2,...} environment variable list to pass -envnone do not pass any environment variables -envall pass all environment variables (default) Other local options: -host {hostname} host on which processes are to be run -hostos {OS name} operating system on particular host -wdir {dirname} working directory to use -path {dirname} path to executable to use -umask {umask} mask to perform umask -n/-np {value} number of processes {exec_name} {args} executable name and arguments Hydra specific options (treated as global): Bootstrap options: -bootstrap bootstrap server to use (ssh rsh pdsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist service pbsdsh) -bootstrap-exec executable to use to bootstrap processes -bootstrap-exec-args additional options to pass to bootstrap server -prefork use pre-fork processes startup method -enable-x/-disable-x enable or disable X forwarding Resource management kernel options: -rmk resource management kernel to use (user slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs cobalt) Processor topology options: -binding process-to-core binding mode Extended fabric control options: -rdma select RDMA-capable network fabric (dapl). Fallback list is ofa,tcp,tmi,ofi -RDMA select RDMA-capable network fabric (dapl). Fallback is ofa -dapl select DAPL-capable network fabric. Fallback list is tcp,tmi,ofa,ofi -DAPL select DAPL-capable network fabric. No fallback fabric is used -ib select OFA-capable network fabric. Fallback list is dapl,tcp,tmi,ofi -IB select OFA-capable network fabric. No fallback fabric is used -tmi select TMI-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -TMI select TMI-capable network fabric. No fallback fabric is used -mx select Myrinet MX* network fabric. Fallback list is dapl,tcp,ofa,ofi -MX select Myrinet MX* network fabric. No fallback fabric is used -psm select PSM-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -PSM select PSM-capable network fabric. No fallback fabric is used -psm2 select Intel* Omni-Path Fabric. Fallback list is dapl,tcp,ofa,ofi -PSM2 select Intel* Omni-Path Fabric. No fallback fabric is used -ofi select OFI-capable network fabric. Fallback list is tmi,dapl,tcp,ofa -OFI select OFI-capable network fabric. No fallback fabric is used Checkpoint/Restart options: -ckpoint {on|off} enable/disable checkpoints for this run -ckpoint-interval checkpoint interval -ckpoint-prefix destination for checkpoint files (stable storage, typically a cluster-wide file system) -ckpoint-tmp-prefix temporary/fast/local storage to speed up checkpoints -ckpoint-preserve number of checkpoints to keep (default: 1, i.e. keep only last checkpoint) -ckpointlib checkpointing library (blcr) -ckpoint-logfile checkpoint activity/status log file (appended) -restart restart previously checkpointed application -ckpoint-num checkpoint number to restart Demux engine options: -demux demux engine (poll select) Debugger support options: -tv run processes under TotalView -tva {pid} attach existing mpiexec process to TotalView -gdb run processes under GDB -gdba {pid} attach existing mpiexec process to GDB -gdb-ia run processes under Intel IA specific GDB Other Hydra options: -v | -verbose verbose mode -V | -version show the version -info build information -print-rank-map print rank mapping -print-all-exitcodes print exit codes of all processes -iface network interface to use -help show this message -perhost place consecutive processes on each host -ppn stand for "process per node"; an alias to -perhost -grr stand for "group round robin"; an alias to -perhost -rr involve "round robin" startup scheme -s redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default) -ordered-output avoid data output intermingling -profile turn on internal profiling -l | -prepend-rank prepend rank to output -prepend-pattern prepend pattern to output -outfile-pattern direct stdout to file -errfile-pattern direct stderr to file -localhost local hostname for the launching node -nolocal avoid running the application processes on the node where mpiexec.hydra started Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752) Copyright (C) 2003-2017, Intel Corporation. All rights reserved. Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ... Global options (passed to all executables): Global environment options: -genv {name} {value} environment variable name and value -genvlist {env1,env2,...} environment variable list to pass -genvnone do not pass any environment variables -genvall pass all environment variables not managed by the launcher (default) Other global options: -f {name} | -hostfile {name} file containing the host names -hosts {host list} comma separated host list -configfile {name} config file containing MPMD launch options -machine {name} | -machinefile {name} file mapping procs to machines -pmi-connect {nocache|lazy-cache|cache} set the PMI connections mode to use -pmi-aggregate aggregate PMI messages -pmi-noaggregate do not aggregate PMI messages -trace {} trace the application using profiling library; default is libVT.so -trace-imbalance {} trace the application using imbalance profiling library; default is libVTim.so -check-mpi {} check the application using checking library; default is libVTmc.so -ilp64 Preload ilp64 wrapper library for support default size of integer 8 bytes -mps start statistics gathering for MPI Performance Snapshot (MPS) -trace-pt2pt collect information about Point to Point operations -trace-collectives collect information about Collective operations -tune [] apply the tuned data produced by the MPI Tuner utility -use-app-topology perform optimized rank placement based statistics and cluster topology -noconf do not use any mpiexec's configuration files -branch-count {leaves_num} set the number of children in tree -gwdir {dirname} working directory to use -gpath {dirname} path to executable to use -gumask {umask} mask to perform umask -tmpdir {tmpdir} temporary directory for cleanup input file -cleanup create input file for clean up -gtool {options} apply a tool over the mpi application -gtoolfile {file} apply a tool over the mpi application. Parameters specified in the file Local options (passed to individual executables): Local environment options: -env {name} {value} environment variable name and value -envlist {env1,env2,...} environment variable list to pass -envnone do not pass any environment variables -envall pass all environment variables (default) Other local options: -host {hostname} host on which processes are to be run -hostos {OS name} operating system on particular host -wdir {dirname} working directory to use -path {dirname} path to executable to use -umask {umask} mask to perform umask -n/-np {value} number of processes {exec_name} {args} executable name and arguments Hydra specific options (treated as global): Bootstrap options: -bootstrap bootstrap server to use (ssh rsh pdsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist service pbsdsh) -bootstrap-exec executable to use to bootstrap processes -bootstrap-exec-args additional options to pass to bootstrap server -prefork use pre-fork processes startup method -enable-x/-disable-x enable or disable X forwarding Resource management kernel options: -rmk resource management kernel to use (user slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs cobalt) Processor topology options: -binding process-to-core binding mode Extended fabric control options: -rdma select RDMA-capable network fabric (dapl). Fallback list is ofa,tcp,tmi,ofi -RDMA select RDMA-capable network fabric (dapl). Fallback is ofa -dapl select DAPL-capable network fabric. Fallback list is tcp,tmi,ofa,ofi -DAPL select DAPL-capable network fabric. No fallback fabric is used -ib select OFA-capable network fabric. Fallback list is dapl,tcp,tmi,ofi -IB select OFA-capable network fabric. No fallback fabric is used -tmi select TMI-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -TMI select TMI-capable network fabric. No fallback fabric is used -mx select Myrinet MX* network fabric. Fallback list is dapl,tcp,ofa,ofi -MX select Myrinet MX* network fabric. No fallback fabric is used -psm select PSM-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -PSM select PSM-capable network fabric. No fallback fabric is used -psm2 select Intel* Omni-Path Fabric. Fallback list is dapl,tcp,ofa,ofi -PSM2 select Intel* Omni-Path Fabric. No fallback fabric is used -ofi select OFI-capable network fabric. Fallback list is tmi,dapl,tcp,ofa -OFI select OFI-capable network fabric. No fallback fabric is used Checkpoint/Restart options: -ckpoint {on|off} enable/disable checkpoints for this run -ckpoint-interval checkpoint interval -ckpoint-prefix destination for checkpoint files (stable storage, typically a cluster-wide file system) -ckpoint-tmp-prefix temporary/fast/local storage to speed up checkpoints -ckpoint-preserve number of checkpoints to keep (default: 1, i.e. keep only last checkpoint) -ckpointlib checkpointing library (blcr) -ckpoint-logfile checkpoint activity/status log file (appended) -restart restart previously checkpointed application -ckpoint-num checkpoint number to restart Demux engine options: -demux demux engine (poll select) Debugger support options: -tv run processes under TotalView -tva {pid} attach existing mpiexec process to TotalView -gdb run processes under GDB -gdba {pid} attach existing mpiexec process to GDB -gdb-ia run processes under Intel IA specific GDB Other Hydra options: -v | -verbose verbose mode -V | -version show the version -info build information -print-rank-map print rank mapping -print-all-exitcodes print exit codes of all processes -iface network interface to use -help show this message -perhost place consecutive processes on each host -ppn stand for "process per node"; an alias to -perhost -grr stand for "group round robin"; an alias to -perhost -rr involve "round robin" startup scheme -s redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default) -ordered-output avoid data output intermingling -profile turn on internal profiling -l | -prepend-rank prepend rank to output -prepend-pattern prepend pattern to output -outfile-pattern direct stdout to file -errfile-pattern direct stderr to file -localhost local hostname for the launching node -nolocal avoid running the application processes on the node where mpiexec.hydra started Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752) Copyright (C) 2003-2017, Intel Corporation. All rights reserved. Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ... Global options (passed to all executables): Global environment options: -genv {name} {value} environment variable name and value -genvlist {env1,env2,...} environment variable list to pass -genvnone do not pass any environment variables -genvall pass all environment variables not managed by the launcher (default) Other global options: -f {name} | -hostfile {name} file containing the host names -hosts {host list} comma separated host list -configfile {name} config file containing MPMD launch options -machine {name} | -machinefile {name} file mapping procs to machines -pmi-connect {nocache|lazy-cache|cache} set the PMI connections mode to use -pmi-aggregate aggregate PMI messages -pmi-noaggregate do not aggregate PMI messages -trace {} trace the application using profiling library; default is libVT.so -trace-imbalance {} trace the application using imbalance profiling library; default is libVTim.so -check-mpi {} check the application using checking library; default is libVTmc.so -ilp64 Preload ilp64 wrapper library for support default size of integer 8 bytes -mps start statistics gathering for MPI Performance Snapshot (MPS) -trace-pt2pt collect information about Point to Point operations -trace-collectives collect information about Collective operations -tune [] apply the tuned data produced by the MPI Tuner utility -use-app-topology perform optimized rank placement based statistics and cluster topology -noconf do not use any mpiexec's configuration files -branch-count {leaves_num} set the number of children in tree -gwdir {dirname} working directory to use -gpath {dirname} path to executable to use -gumask {umask} mask to perform umask -tmpdir {tmpdir} temporary directory for cleanup input file -cleanup create input file for clean up -gtool {options} apply a tool over the mpi application -gtoolfile {file} apply a tool over the mpi application. Parameters specified in the file Local options (passed to individual executables): Local environment options: -env {name} {value} environment variable name and value -envlist {env1,env2,...} environment variable list to pass -envnone do not pass any environment variables -envall pass all environment variables (default) Other local options: -host {hostname} host on which processes are to be run -hostos {OS name} operating system on particular host -wdir {dirname} working directory to use -path {dirname} path to executable to use -umask {umask} mask to perform umask -n/-np {value} number of processes {exec_name} {args} executable name and arguments Hydra specific options (treated as global): Bootstrap options: -bootstrap bootstrap server to use (ssh rsh pdsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist service pbsdsh) -bootstrap-exec executable to use to bootstrap processes -bootstrap-exec-args additional options to pass to bootstrap server -prefork use pre-fork processes startup method -enable-x/-disable-x enable or disable X forwarding Resource management kernel options: -rmk resource management kernel to use (user slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs cobalt) Processor topology options: -binding process-to-core binding mode Extended fabric control options: -rdma select RDMA-capable network fabric (dapl). Fallback list is ofa,tcp,tmi,ofi -RDMA select RDMA-capable network fabric (dapl). Fallback is ofa -dapl select DAPL-capable network fabric. Fallback list is tcp,tmi,ofa,ofi -DAPL select DAPL-capable network fabric. No fallback fabric is used -ib select OFA-capable network fabric. Fallback list is dapl,tcp,tmi,ofi -IB select OFA-capable network fabric. No fallback fabric is used -tmi select TMI-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -TMI select TMI-capable network fabric. No fallback fabric is used -mx select Myrinet MX* network fabric. Fallback list is dapl,tcp,ofa,ofi -MX select Myrinet MX* network fabric. No fallback fabric is used -psm select PSM-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -PSM select PSM-capable network fabric. No fallback fabric is used -psm2 select Intel* Omni-Path Fabric. Fallback list is dapl,tcp,ofa,ofi -PSM2 select Intel* Omni-Path Fabric. No fallback fabric is used -ofi select OFI-capable network fabric. Fallback list is tmi,dapl,tcp,ofa -OFI select OFI-capable network fabric. No fallback fabric is used Checkpoint/Restart options: -ckpoint {on|off} enable/disable checkpoints for this run -ckpoint-interval checkpoint interval -ckpoint-prefix destination for checkpoint files (stable storage, typically a cluster-wide file system) -ckpoint-tmp-prefix temporary/fast/local storage to speed up checkpoints -ckpoint-preserve number of checkpoints to keep (default: 1, i.e. keep only last checkpoint) -ckpointlib checkpointing library (blcr) -ckpoint-logfile checkpoint activity/status log file (appended) -restart restart previously checkpointed application -ckpoint-num checkpoint number to restart Demux engine options: -demux demux engine (poll select) Debugger support options: -tv run processes under TotalView -tva {pid} attach existing mpiexec process to TotalView -gdb run processes under GDB -gdba {pid} attach existing mpiexec process to GDB -gdb-ia run processes under Intel IA specific GDB Other Hydra options: -v | -verbose verbose mode -V | -version show the version -info build information -print-rank-map print rank mapping -print-all-exitcodes print exit codes of all processes -iface network interface to use -help show this message -perhost place consecutive processes on each host -ppn stand for "process per node"; an alias to -perhost -grr stand for "group round robin"; an alias to -perhost -rr involve "round robin" startup scheme -s redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default) -ordered-output avoid data output intermingling -profile turn on internal profiling -l | -prepend-rank prepend rank to output -prepend-pattern prepend pattern to output -outfile-pattern direct stdout to file -errfile-pattern direct stderr to file -localhost local hostname for the launching node -nolocal avoid running the application processes on the node where mpiexec.hydra started Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752) Copyright (C) 2003-2017, Intel Corporation. All rights reserved. Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ... Global options (passed to all executables): Global environment options: -genv {name} {value} environment variable name and value -genvlist {env1,env2,...} environment variable list to pass -genvnone do not pass any environment variables -genvall pass all environment variables not managed by the launcher (default) Other global options: -f {name} | -hostfile {name} file containing the host names -hosts {host list} comma separated host list -configfile {name} config file containing MPMD launch options -machine {name} | -machinefile {name} file mapping procs to machines -pmi-connect {nocache|lazy-cache|cache} set the PMI connections mode to use -pmi-aggregate aggregate PMI messages -pmi-noaggregate do not aggregate PMI messages -trace {} trace the application using profiling library; default is libVT.so -trace-imbalance {} trace the application using imbalance profiling library; default is libVTim.so -check-mpi {} check the application using checking library; default is libVTmc.so -ilp64 Preload ilp64 wrapper library for support default size of integer 8 bytes -mps start statistics gathering for MPI Performance Snapshot (MPS) -trace-pt2pt collect information about Point to Point operations -trace-collectives collect information about Collective operations -tune [] apply the tuned data produced by the MPI Tuner utility -use-app-topology perform optimized rank placement based statistics and cluster topology -noconf do not use any mpiexec's configuration files -branch-count {leaves_num} set the number of children in tree -gwdir {dirname} working directory to use -gpath {dirname} path to executable to use -gumask {umask} mask to perform umask -tmpdir {tmpdir} temporary directory for cleanup input file -cleanup create input file for clean up -gtool {options} apply a tool over the mpi application -gtoolfile {file} apply a tool over the mpi application. Parameters specified in the file Local options (passed to individual executables): Local environment options: -env {name} {value} environment variable name and value -envlist {env1,env2,...} environment variable list to pass -envnone do not pass any environment variables -envall pass all environment variables (default) Other local options: -host {hostname} host on which processes are to be run -hostos {OS name} operating system on particular host -wdir {dirname} working directory to use -path {dirname} path to executable to use -umask {umask} mask to perform umask -n/-np {value} number of processes {exec_name} {args} executable name and arguments Hydra specific options (treated as global): Bootstrap options: -bootstrap bootstrap server to use (ssh rsh pdsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist service pbsdsh) -bootstrap-exec executable to use to bootstrap processes -bootstrap-exec-args additional options to pass to bootstrap server -prefork use pre-fork processes startup method -enable-x/-disable-x enable or disable X forwarding Resource management kernel options: -rmk resource management kernel to use (user slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs cobalt) Processor topology options: -binding process-to-core binding mode Extended fabric control options: -rdma select RDMA-capable network fabric (dapl). Fallback list is ofa,tcp,tmi,ofi -RDMA select RDMA-capable network fabric (dapl). Fallback is ofa -dapl select DAPL-capable network fabric. Fallback list is tcp,tmi,ofa,ofi -DAPL select DAPL-capable network fabric. No fallback fabric is used -ib select OFA-capable network fabric. Fallback list is dapl,tcp,tmi,ofi -IB select OFA-capable network fabric. No fallback fabric is used -tmi select TMI-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -TMI select TMI-capable network fabric. No fallback fabric is used -mx select Myrinet MX* network fabric. Fallback list is dapl,tcp,ofa,ofi -MX select Myrinet MX* network fabric. No fallback fabric is used -psm select PSM-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -PSM select PSM-capable network fabric. No fallback fabric is used -psm2 select Intel* Omni-Path Fabric. Fallback list is dapl,tcp,ofa,ofi -PSM2 select Intel* Omni-Path Fabric. No fallback fabric is used -ofi select OFI-capable network fabric. Fallback list is tmi,dapl,tcp,ofa -OFI select OFI-capable network fabric. No fallback fabric is used Checkpoint/Restart options: -ckpoint {on|off} enable/disable checkpoints for this run -ckpoint-interval checkpoint interval -ckpoint-prefix destination for checkpoint files (stable storage, typically a cluster-wide file system) -ckpoint-tmp-prefix temporary/fast/local storage to speed up checkpoints -ckpoint-preserve number of checkpoints to keep (default: 1, i.e. keep only last checkpoint) -ckpointlib checkpointing library (blcr) -ckpoint-logfile checkpoint activity/status log file (appended) -restart restart previously checkpointed application -ckpoint-num checkpoint number to restart Demux engine options: -demux demux engine (poll select) Debugger support options: -tv run processes under TotalView -tva {pid} attach existing mpiexec process to TotalView -gdb run processes under GDB -gdba {pid} attach existing mpiexec process to GDB -gdb-ia run processes under Intel IA specific GDB Other Hydra options: -v | -verbose verbose mode -V | -version show the version -info build information -print-rank-map print rank mapping -print-all-exitcodes print exit codes of all processes -iface network interface to use -help show this message -perhost place consecutive processes on each host -ppn stand for "process per node"; an alias to -perhost -grr stand for "group round robin"; an alias to -perhost -rr involve "round robin" startup scheme -s redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default) -ordered-output avoid data output intermingling -profile turn on internal profiling -l | -prepend-rank prepend rank to output -prepend-pattern prepend pattern to output -outfile-pattern direct stdout to file -errfile-pattern direct stderr to file -localhost local hostname for the launching node -nolocal avoid running the application processes on the node where mpiexec.hydra started Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752) Copyright (C) 2003-2017, Intel Corporation. All rights reserved. Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ... Global options (passed to all executables): Global environment options: -genv {name} {value} environment variable name and value -genvlist {env1,env2,...} environment variable list to pass -genvnone do not pass any environment variables -genvall pass all environment variables not managed by the launcher (default) Other global options: -f {name} | -hostfile {name} file containing the host names -hosts {host list} comma separated host list -configfile {name} config file containing MPMD launch options -machine {name} | -machinefile {name} file mapping procs to machines -pmi-connect {nocache|lazy-cache|cache} set the PMI connections mode to use -pmi-aggregate aggregate PMI messages -pmi-noaggregate do not aggregate PMI messages -trace {} trace the application using profiling library; default is libVT.so -trace-imbalance {} trace the application using imbalance profiling library; default is libVTim.so -check-mpi {} check the application using checking library; default is libVTmc.so -ilp64 Preload ilp64 wrapper library for support default size of integer 8 bytes -mps start statistics gathering for MPI Performance Snapshot (MPS) -trace-pt2pt collect information about Point to Point operations -trace-collectives collect information about Collective operations -tune [] apply the tuned data produced by the MPI Tuner utility -use-app-topology perform optimized rank placement based statistics and cluster topology -noconf do not use any mpiexec's configuration files -branch-count {leaves_num} set the number of children in tree -gwdir {dirname} working directory to use -gpath {dirname} path to executable to use -gumask {umask} mask to perform umask -tmpdir {tmpdir} temporary directory for cleanup input file -cleanup create input file for clean up -gtool {options} apply a tool over the mpi application -gtoolfile {file} apply a tool over the mpi application. Parameters specified in the file Local options (passed to individual executables): Local environment options: -env {name} {value} environment variable name and value -envlist {env1,env2,...} environment variable list to pass -envnone do not pass any environment variables -envall pass all environment variables (default) Other local options: -host {hostname} host on which processes are to be run -hostos {OS name} operating system on particular host -wdir {dirname} working directory to use -path {dirname} path to executable to use -umask {umask} mask to perform umask -n/-np {value} number of processes {exec_name} {args} executable name and arguments Hydra specific options (treated as global): Bootstrap options: -bootstrap bootstrap server to use (ssh rsh pdsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist service pbsdsh) -bootstrap-exec executable to use to bootstrap processes -bootstrap-exec-args additional options to pass to bootstrap server -prefork use pre-fork processes startup method -enable-x/-disable-x enable or disable X forwarding Resource management kernel options: -rmk resource management kernel to use (user slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs cobalt) Processor topology options: -binding process-to-core binding mode Extended fabric control options: -rdma select RDMA-capable network fabric (dapl). Fallback list is ofa,tcp,tmi,ofi -RDMA select RDMA-capable network fabric (dapl). Fallback is ofa -dapl select DAPL-capable network fabric. Fallback list is tcp,tmi,ofa,ofi -DAPL select DAPL-capable network fabric. No fallback fabric is used -ib select OFA-capable network fabric. Fallback list is dapl,tcp,tmi,ofi -IB select OFA-capable network fabric. No fallback fabric is used -tmi select TMI-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -TMI select TMI-capable network fabric. No fallback fabric is used -mx select Myrinet MX* network fabric. Fallback list is dapl,tcp,ofa,ofi -MX select Myrinet MX* network fabric. No fallback fabric is used -psm select PSM-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -PSM select PSM-capable network fabric. No fallback fabric is used -psm2 select Intel* Omni-Path Fabric. Fallback list is dapl,tcp,ofa,ofi -PSM2 select Intel* Omni-Path Fabric. No fallback fabric is used -ofi select OFI-capable network fabric. Fallback list is tmi,dapl,tcp,ofa -OFI select OFI-capable network fabric. No fallback fabric is used Checkpoint/Restart options: -ckpoint {on|off} enable/disable checkpoints for this run -ckpoint-interval checkpoint interval -ckpoint-prefix destination for checkpoint files (stable storage, typically a cluster-wide file system) -ckpoint-tmp-prefix temporary/fast/local storage to speed up checkpoints -ckpoint-preserve number of checkpoints to keep (default: 1, i.e. keep only last checkpoint) -ckpointlib checkpointing library (blcr) -ckpoint-logfile checkpoint activity/status log file (appended) -restart restart previously checkpointed application -ckpoint-num checkpoint number to restart Demux engine options: -demux demux engine (poll select) Debugger support options: -tv run processes under TotalView -tva {pid} attach existing mpiexec process to TotalView -gdb run processes under GDB -gdba {pid} attach existing mpiexec process to GDB -gdb-ia run processes under Intel IA specific GDB Other Hydra options: -v | -verbose verbose mode -V | -version show the version -info build information -print-rank-map print rank mapping -print-all-exitcodes print exit codes of all processes -iface network interface to use -help show this message -perhost place consecutive processes on each host -ppn stand for "process per node"; an alias to -perhost -grr stand for "group round robin"; an alias to -perhost -rr involve "round robin" startup scheme -s redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default) -ordered-output avoid data output intermingling -profile turn on internal profiling -l | -prepend-rank prepend rank to output -prepend-pattern prepend pattern to output -outfile-pattern direct stdout to file -errfile-pattern direct stderr to file -localhost local hostname for the launching node -nolocal avoid running the application processes on the node where mpiexec.hydra started Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752) Copyright (C) 2003-2017, Intel Corporation. All rights reserved. Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ... Global options (passed to all executables): Global environment options: -genv {name} {value} environment variable name and value -genvlist {env1,env2,...} environment variable list to pass -genvnone do not pass any environment variables -genvall pass all environment variables not managed by the launcher (default) Other global options: -f {name} | -hostfile {name} file containing the host names -hosts {host list} comma separated host list -configfile {name} config file containing MPMD launch options -machine {name} | -machinefile {name} file mapping procs to machines -pmi-connect {nocache|lazy-cache|cache} set the PMI connections mode to use -pmi-aggregate aggregate PMI messages -pmi-noaggregate do not aggregate PMI messages -trace {} trace the application using profiling library; default is libVT.so -trace-imbalance {} trace the application using imbalance profiling library; default is libVTim.so -check-mpi {} check the application using checking library; default is libVTmc.so -ilp64 Preload ilp64 wrapper library for support default size of integer 8 bytes -mps start statistics gathering for MPI Performance Snapshot (MPS) -trace-pt2pt collect information about Point to Point operations -trace-collectives collect information about Collective operations -tune [] apply the tuned data produced by the MPI Tuner utility -use-app-topology perform optimized rank placement based statistics and cluster topology -noconf do not use any mpiexec's configuration files -branch-count {leaves_num} set the number of children in tree -gwdir {dirname} working directory to use -gpath {dirname} path to executable to use -gumask {umask} mask to perform umask -tmpdir {tmpdir} temporary directory for cleanup input file -cleanup create input file for clean up -gtool {options} apply a tool over the mpi application -gtoolfile {file} apply a tool over the mpi application. Parameters specified in the file Local options (passed to individual executables): Local environment options: -env {name} {value} environment variable name and value -envlist {env1,env2,...} environment variable list to pass -envnone do not pass any environment variables -envall pass all environment variables (default) Other local options: -host {hostname} host on which processes are to be run -hostos {OS name} operating system on particular host -wdir {dirname} working directory to use -path {dirname} path to executable to use -umask {umask} mask to perform umask -n/-np {value} number of processes {exec_name} {args} executable name and arguments Hydra specific options (treated as global): Bootstrap options: -bootstrap bootstrap server to use (ssh rsh pdsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist service pbsdsh) -bootstrap-exec executable to use to bootstrap processes -bootstrap-exec-args additional options to pass to bootstrap server -prefork use pre-fork processes startup method -enable-x/-disable-x enable or disable X forwarding Resource management kernel options: -rmk resource management kernel to use (user slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs cobalt) Processor topology options: -binding process-to-core binding mode Extended fabric control options: -rdma select RDMA-capable network fabric (dapl). Fallback list is ofa,tcp,tmi,ofi -RDMA select RDMA-capable network fabric (dapl). Fallback is ofa -dapl select DAPL-capable network fabric. Fallback list is tcp,tmi,ofa,ofi -DAPL select DAPL-capable network fabric. No fallback fabric is used -ib select OFA-capable network fabric. Fallback list is dapl,tcp,tmi,ofi -IB select OFA-capable network fabric. No fallback fabric is used -tmi select TMI-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -TMI select TMI-capable network fabric. No fallback fabric is used -mx select Myrinet MX* network fabric. Fallback list is dapl,tcp,ofa,ofi -MX select Myrinet MX* network fabric. No fallback fabric is used -psm select PSM-capable network fabric. Fallback list is dapl,tcp,ofa,ofi -PSM select PSM-capable network fabric. No fallback fabric is used -psm2 select Intel* Omni-Path Fabric. Fallback list is dapl,tcp,ofa,ofi -PSM2 select Intel* Omni-Path Fabric. No fallback fabric is used -ofi select OFI-capable network fabric. Fallback list is tmi,dapl,tcp,ofa -OFI select OFI-capable network fabric. No fallback fabric is used Checkpoint/Restart options: -ckpoint {on|off} enable/disable checkpoints for this run -ckpoint-interval checkpoint interval -ckpoint-prefix destination for checkpoint files (stable storage, typically a cluster-wide file system) -ckpoint-tmp-prefix temporary/fast/local storage to speed up checkpoints -ckpoint-preserve number of checkpoints to keep (default: 1, i.e. keep only last checkpoint) -ckpointlib checkpointing library (blcr) -ckpoint-logfile checkpoint activity/status log file (appended) -restart restart previously checkpointed application -ckpoint-num checkpoint number to restart Demux engine options: -demux demux engine (poll select) Debugger support options: -tv run processes under TotalView -tva {pid} attach existing mpiexec process to TotalView -gdb run processes under GDB -gdba {pid} attach existing mpiexec process to GDB -gdb-ia run processes under Intel IA specific GDB Other Hydra options: -v | -verbose verbose mode -V | -version show the version -info build information -print-rank-map print rank mapping -print-all-exitcodes print exit codes of all processes -iface network interface to use -help show this message -perhost place consecutive processes on each host -ppn stand for "process per node"; an alias to -perhost -grr stand for "group round robin"; an alias to -perhost -rr involve "round robin" startup scheme -s redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default) -ordered-output avoid data output intermingling -profile turn on internal profiling -l | -prepend-rank prepend rank to output -prepend-pattern prepend pattern to output -outfile-pattern direct stdout to file -errfile-pattern direct stderr to file -localhost local hostname for the launching node -nolocal avoid running the application processes on the node where mpiexec.hydra started Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752) Copyright (C) 2003-2017, Intel Corporation. All rights reserved. [0] MPI startup(): Multi-threaded optimized library [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [0] MPI startup(): dapl data transfer mode [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): dapl data transfer mode [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63173 n0471 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 1 39661 n0483 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=1:0 0 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:41:52 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -npmin 2 -input IMB_SELECT_MPI1 -msglen ./msglens # # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Biband # Biband # Biband # Biband # Biband # Biband # Biband #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 4036957 65536 640 6827.09 109234 524288 80 10448.28 20897 4194304 10 11422.10 2856 #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 3973407 65536 640 6694.14 107106 524288 80 10441.78 20884 4194304 10 11389.90 2847 #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 3964810 65536 640 6688.08 107009 524288 80 10448.36 20897 4194304 10 11420.72 2855 #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 3892118 65536 640 6681.17 106899 524288 80 10460.87 20922 4194304 10 11422.58 2856 #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 3962995 65536 640 6704.09 107265 524288 80 10452.67 20905 4194304 10 11327.99 2832 #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 3824875 65536 640 6701.40 107222 524288 80 10458.33 20917 4194304 10 11428.75 2857 #--------------------------------------------------- # Benchmarking Biband # #processes = 2 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 3752767 65536 640 6686.87 106990 524288 80 10461.02 20922 4194304 10 11439.98 2860 # All processes entering MPI_Finalize [0] MPI startup(): Multi-threaded optimized library [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): dapl data transfer mode [0] MPI startup(): dapl data transfer mode [2] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [3] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [2] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [2] MPI startup(): dapl data transfer mode [3] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [3] MPI startup(): dapl data transfer mode [2] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63216 n0471 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 1 39705 n0483 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 2 24277 n0497 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 3 13452 n0501 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=1:0 0 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:42:17 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -npmin 4 -input IMB_SELECT_MPI1 -msglen ./msglens # # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Biband # Biband # Biband # Biband # Biband # Biband # Biband #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7199701 65536 640 13233.43 211735 524288 80 19708.07 39416 4194304 10 22623.73 5656 #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7003136 65536 640 13432.77 214924 524288 80 19955.49 39911 4194304 10 22507.57 5627 #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7146226 65536 640 13383.41 214135 524288 80 19985.91 39972 4194304 10 22556.36 5639 #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7096680 65536 640 13392.27 214276 524288 80 20194.73 40389 4194304 10 21990.05 5498 #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7166257 65536 640 13406.77 214508 524288 80 20117.24 40234 4194304 10 22651.36 5663 #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7123752 65536 640 13404.23 214468 524288 80 20141.71 40283 4194304 10 22632.44 5658 #--------------------------------------------------- # Benchmarking Biband # #processes = 4 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 7182940 65536 640 13422.70 214763 524288 80 20157.00 40314 4194304 10 22595.73 5649 # All processes entering MPI_Finalize [0] MPI startup(): Multi-threaded optimized library [2] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [3] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [2] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [2] MPI startup(): dapl data transfer mode [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): dapl data transfer mode [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [0] MPI startup(): dapl data transfer mode [3] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [3] MPI startup(): dapl data transfer mode [4] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [7] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [5] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [6] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [4] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [4] MPI startup(): dapl data transfer mode [5] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [7] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [7] MPI startup(): dapl data transfer mode [5] MPI startup(): dapl data transfer mode [6] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [6] MPI startup(): dapl data transfer mode [7] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63258 n0471 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 1 39746 n0483 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 2 24319 n0497 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 3 13493 n0501 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 4 17385 n0506 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 5 50082 n0510 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 6 45419 n1070 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): 7 58573 n1089 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23} [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=1:0 0 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:42:43 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -npmin 8 -input IMB_SELECT_MPI1 -msglen ./msglens # # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Biband # Biband # Biband # Biband # Biband # Biband # Biband #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 13157956 65536 640 26034.58 416553 524288 80 38119.43 76239 4194304 10 44181.45 11045 #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 12550897 65536 640 26185.85 418974 524288 80 37619.20 75238 4194304 10 43518.92 10880 #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 12656378 65536 640 26570.49 425128 524288 80 37737.87 75476 4194304 10 43349.97 10837 #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 12461606 65536 640 26553.95 424863 524288 80 37932.81 75866 4194304 10 43916.07 10979 #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 12481451 65536 640 26592.22 425475 524288 80 38145.04 76290 4194304 10 42889.35 10722 #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 12530611 65536 640 26421.11 422738 524288 80 38302.06 76604 4194304 10 44516.13 11129 #--------------------------------------------------- # Benchmarking Biband # #processes = 8 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 12299730 65536 640 26464.87 423438 524288 80 37292.43 74585 4194304 10 44503.54 11126 # All processes entering MPI_Finalize [0] MPI startup(): Multi-threaded optimized library [41] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [34] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [41] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [41] MPI startup(): shm and dapl data transfer modes [34] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [34] MPI startup(): shm and dapl data transfer modes [12] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [46] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [42] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [11] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [44] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [26] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [12] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [47] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [46] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [46] MPI startup(): shm and dapl data transfer modes [42] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): shm and dapl data transfer modes [9] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [11] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [11] MPI startup(): shm and dapl data transfer modes [12] MPI startup(): shm and dapl data transfer modes [26] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [26] MPI startup(): shm and dapl data transfer modes [42] MPI startup(): shm and dapl data transfer modes [44] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [44] MPI startup(): shm and dapl data transfer modes [47] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [47] MPI startup(): shm and dapl data transfer modes [32] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [39] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [36] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [27] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [5] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [9] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [9] MPI startup(): shm and dapl data transfer modes [19] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [13] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [8] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [24] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [25] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [27] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [27] MPI startup(): shm and dapl data transfer modes [28] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [29] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [30] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [31] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [32] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [32] MPI startup(): shm and dapl data transfer modes [33] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [35] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [36] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [36] MPI startup(): shm and dapl data transfer modes [37] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [38] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [39] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [39] MPI startup(): shm and dapl data transfer modes [40] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [43] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [45] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [29] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [29] MPI startup(): shm and dapl data transfer modes [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [2] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [3] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [4] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [5] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [5] MPI startup(): shm and dapl data transfer modes [6] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [7] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [10] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [14] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [15] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [16] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [18] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [19] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [19] MPI startup(): shm and dapl data transfer modes [21] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [22] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [23] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [17] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [20] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [13] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [13] MPI startup(): shm and dapl data transfer modes [8] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [8] MPI startup(): shm and dapl data transfer modes [25] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [25] MPI startup(): shm and dapl data transfer modes [24] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [30] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [24] MPI startup(): shm and dapl data transfer modes [30] MPI startup(): shm and dapl data transfer modes [31] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [31] MPI startup(): shm and dapl data transfer modes [28] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [28] MPI startup(): shm and dapl data transfer modes [38] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [38] MPI startup(): shm and dapl data transfer modes [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [0] MPI startup(): shm and dapl data transfer modes [15] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [15] MPI startup(): shm and dapl data transfer modes [43] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [43] MPI startup(): shm and dapl data transfer modes [35] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [35] MPI startup(): shm and dapl data transfer modes [18] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [37] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [18] MPI startup(): shm and dapl data transfer modes [37] MPI startup(): shm and dapl data transfer modes [33] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [33] MPI startup(): shm and dapl data transfer modes [40] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [40] MPI startup(): shm and dapl data transfer modes [45] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [45] MPI startup(): shm and dapl data transfer modes [10] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [10] MPI startup(): shm and dapl data transfer modes [23] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [4] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [23] MPI startup(): shm and dapl data transfer modes [4] MPI startup(): shm and dapl data transfer modes [14] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [14] MPI startup(): shm and dapl data transfer modes [16] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [16] MPI startup(): shm and dapl data transfer modes [22] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [22] MPI startup(): shm and dapl data transfer modes [2] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [7] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [2] MPI startup(): shm and dapl data transfer modes [7] MPI startup(): shm and dapl data transfer modes [3] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [3] MPI startup(): shm and dapl data transfer modes [6] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [6] MPI startup(): shm and dapl data transfer modes [17] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [17] MPI startup(): shm and dapl data transfer modes [20] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [20] MPI startup(): shm and dapl data transfer modes [21] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [21] MPI startup(): shm and dapl data transfer modes [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [24] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [25] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [25] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [11] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [11] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [34] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [34] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [12] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [12] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [36] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [36] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [46] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [46] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [26] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [26] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [8] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [8] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [27] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [27] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [9] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [9] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [28] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [28] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [10] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [10] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [29] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [29] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [13] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [13] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [30] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [30] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [14] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [14] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [31] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [31] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [17] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [17] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [32] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [32] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [21] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [21] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [33] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [33] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [23] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [23] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [35] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [35] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [37] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [37] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [15] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [15] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [38] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [38] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [16] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [16] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [39] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [39] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [18] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [18] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [40] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [40] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [19] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [19] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [41] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [41] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [20] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [20] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [42] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [42] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [22] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [22] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [43] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [43] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [44] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [44] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [45] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [45] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [47] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [47] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [24] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63302 n0471 0 [0] MPI startup(): 1 63303 n0471 1 [0] MPI startup(): 2 63304 n0471 2 [0] MPI startup(): 3 63305 n0471 3 [0] MPI startup(): 4 63306 n0471 4 [0] MPI startup(): 5 63307 n0471 5 [0] MPI startup(): 6 63308 n0471 6 [0] MPI startup(): 7 63309 n0471 7 [0] MPI startup(): 8 63310 n0471 8 [0] MPI startup(): 9 63311 n0471 9 [0] MPI startup(): 10 63312 n0471 10 [0] MPI startup(): 11 63313 n0471 11 [0] MPI startup(): 12 63314 n0471 12 [0] MPI startup(): 13 63315 n0471 13 [0] MPI startup(): 14 63316 n0471 14 [0] MPI startup(): 15 63317 n0471 15 [0] MPI startup(): 16 63318 n0471 16 [0] MPI startup(): 17 63319 n0471 17 [0] MPI startup(): 18 63320 n0471 18 [0] MPI startup(): 19 63321 n0471 19 [0] MPI startup(): 20 63322 n0471 20 [0] MPI startup(): 21 63323 n0471 21 [0] MPI startup(): 22 63324 n0471 22 [0] MPI startup(): 23 63325 n0471 23 [0] MPI startup(): 24 39790 n0483 0 [0] MPI startup(): 25 39791 n0483 1 [0] MPI startup(): 26 39792 n0483 2 [0] MPI startup(): 27 39793 n0483 3 [0] MPI startup(): 28 39794 n0483 4 [0] MPI startup(): 29 39795 n0483 5 [0] MPI startup(): 30 39796 n0483 6 [0] MPI startup(): 31 39797 n0483 7 [0] MPI startup(): 32 39798 n0483 8 [0] MPI startup(): 33 39799 n0483 9 [0] MPI startup(): 34 39800 n0483 10 [0] MPI startup(): 35 39801 n0483 11 [0] MPI startup(): 36 39802 n0483 12 [0] MPI startup(): 37 39803 n0483 13 [0] MPI startup(): 38 39804 n0483 14 [0] MPI startup(): 39 39805 n0483 15 [0] MPI startup(): 40 39806 n0483 16 [0] MPI startup(): 41 39807 n0483 17 [0] MPI startup(): 42 39808 n0483 18 [0] MPI startup(): 43 39809 n0483 19 [0] MPI startup(): 44 39810 n0483 20 [0] MPI startup(): 45 39811 n0483 21 [0] MPI startup(): 46 39812 n0483 22 [0] MPI startup(): 47 39813 n0483 23 [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=24:0 0,1 1,2 2,3 3,4 4,5 5,6 6,7 7,8 8,9 9,10 10,11 11,12 12,13 13,14 14,15 15,16 16,17 17,18 18,19 19,20 20,21 21,22 22,23 23 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:43:10 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -npmin 48 -input IMB_SELECT_MPI1 -msglen ./msglens # # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Biband # Biband # Biband # Biband # Biband # Biband # Biband #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 78305298 65536 570 11018.36 176294 524288 65 9867.81 19736 4194304 8 9860.26 2465 #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 77118158 65536 571 11024.63 176394 524288 65 9849.15 19698 4194304 8 9855.79 2464 #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 77094626 65536 570 11016.80 176269 524288 65 9848.02 19696 4194304 8 9857.39 2464 #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 77965570 65536 572 10977.01 175632 524288 63 9826.81 19654 4194304 8 9855.97 2464 #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 77909941 65536 569 11007.09 176113 524288 65 9836.17 19672 4194304 8 9862.77 2466 #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 77609138 65536 571 11019.38 176310 524288 65 9842.79 19686 4194304 8 9862.31 2466 #--------------------------------------------------- # Benchmarking Biband # #processes = 48 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 78120616 65536 569 11019.45 176311 524288 65 9853.66 19707 4194304 8 9859.27 2465 # All processes entering MPI_Finalize [0] MPI startup(): Multi-threaded optimized library [8] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [14] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [8] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [8] MPI startup(): shm and dapl data transfer modes [14] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [39] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [14] MPI startup(): shm and dapl data transfer modes [25] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [15] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [25] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [25] MPI startup(): shm and dapl data transfer modes [39] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [39] MPI startup(): shm and dapl data transfer modes [24] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [54] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [50] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [15] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [15] MPI startup(): shm and dapl data transfer modes [24] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [5] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [16] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [83] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [62] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [11] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [49] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [6] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [24] MPI startup(): shm and dapl data transfer modes [34] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [37] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [26] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [36] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [45] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [81] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [47] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [43] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [5] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [5] MPI startup(): shm and dapl data transfer modes [54] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [44] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [38] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [40] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [18] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [88] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [31] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [95] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [46] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [29] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [49] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [49] MPI startup(): shm and dapl data transfer modes [50] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [50] MPI startup(): shm and dapl data transfer modes [54] MPI startup(): shm and dapl data transfer modes [62] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [62] MPI startup(): shm and dapl data transfer modes [64] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [34] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [34] MPI startup(): shm and dapl data transfer modes [41] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [67] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [28] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [35] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [42] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [27] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [33] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [70] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [30] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [32] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [2] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [4] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [6] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [6] MPI startup(): shm and dapl data transfer modes [9] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [10] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [11] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [11] MPI startup(): shm and dapl data transfer modes [12] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [13] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [16] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [16] MPI startup(): shm and dapl data transfer modes [17] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [19] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [20] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [21] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [22] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [7] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [23] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [3] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [18] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [18] MPI startup(): shm and dapl data transfer modes [37] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [37] MPI startup(): shm and dapl data transfer modes [55] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [26] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [26] MPI startup(): shm and dapl data transfer modes [81] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [81] MPI startup(): shm and dapl data transfer modes [83] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [83] MPI startup(): shm and dapl data transfer modes [87] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [88] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [88] MPI startup(): shm and dapl data transfer modes [56] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [91] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [94] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [95] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [95] MPI startup(): shm and dapl data transfer modes [66] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [36] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [36] MPI startup(): shm and dapl data transfer modes [64] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [64] MPI startup(): shm and dapl data transfer modes [48] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [71] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [45] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [45] MPI startup(): shm and dapl data transfer modes [67] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [67] MPI startup(): shm and dapl data transfer modes [10] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [10] MPI startup(): shm and dapl data transfer modes [65] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [86] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [31] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [87] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [31] MPI startup(): shm and dapl data transfer modes [87] MPI startup(): shm and dapl data transfer modes [40] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [40] MPI startup(): shm and dapl data transfer modes [43] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [20] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [43] MPI startup(): shm and dapl data transfer modes [20] MPI startup(): shm and dapl data transfer modes [17] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [17] MPI startup(): shm and dapl data transfer modes [47] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [47] MPI startup(): shm and dapl data transfer modes [85] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [91] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [91] MPI startup(): shm and dapl data transfer modes [12] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [12] MPI startup(): shm and dapl data transfer modes [29] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [38] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [2] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [29] MPI startup(): shm and dapl data transfer modes [2] MPI startup(): shm and dapl data transfer modes [38] MPI startup(): shm and dapl data transfer modes [44] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [44] MPI startup(): shm and dapl data transfer modes [19] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [19] MPI startup(): shm and dapl data transfer modes [1] MPI startup(): shm and dapl data transfer modes [33] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [33] MPI startup(): shm and dapl data transfer modes [22] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [22] MPI startup(): shm and dapl data transfer modes [46] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [46] MPI startup(): shm and dapl data transfer modes [27] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [27] MPI startup(): shm and dapl data transfer modes [35] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [35] MPI startup(): shm and dapl data transfer modes [21] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [21] MPI startup(): shm and dapl data transfer modes [7] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [7] MPI startup(): shm and dapl data transfer modes [9] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [42] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [42] MPI startup(): shm and dapl data transfer modes [9] MPI startup(): shm and dapl data transfer modes [4] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [4] MPI startup(): shm and dapl data transfer modes [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [28] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [28] MPI startup(): shm and dapl data transfer modes [0] MPI startup(): shm and dapl data transfer modes [30] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [30] MPI startup(): shm and dapl data transfer modes [3] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [3] MPI startup(): shm and dapl data transfer modes [32] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [32] MPI startup(): shm and dapl data transfer modes [13] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [13] MPI startup(): shm and dapl data transfer modes [41] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [41] MPI startup(): shm and dapl data transfer modes [23] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [23] MPI startup(): shm and dapl data transfer modes [51] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [52] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [53] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [55] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [55] MPI startup(): shm and dapl data transfer modes [56] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [56] MPI startup(): shm and dapl data transfer modes [57] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [58] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [59] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [60] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [61] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [63] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [66] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [66] MPI startup(): shm and dapl data transfer modes [68] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [69] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [70] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [70] MPI startup(): shm and dapl data transfer modes [71] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [71] MPI startup(): shm and dapl data transfer modes [72] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [73] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [74] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [75] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [76] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [77] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [78] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [79] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [80] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [82] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [84] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [86] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [86] MPI startup(): shm and dapl data transfer modes [89] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [90] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [92] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [93] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [94] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [94] MPI startup(): shm and dapl data transfer modes [48] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [48] MPI startup(): shm and dapl data transfer modes [65] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [85] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [65] MPI startup(): shm and dapl data transfer modes [85] MPI startup(): shm and dapl data transfer modes [78] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [78] MPI startup(): shm and dapl data transfer modes [89] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [89] MPI startup(): shm and dapl data transfer modes [59] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [59] MPI startup(): shm and dapl data transfer modes [51] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [51] MPI startup(): shm and dapl data transfer modes [75] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [75] MPI startup(): shm and dapl data transfer modes [53] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [53] MPI startup(): shm and dapl data transfer modes [68] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [68] MPI startup(): shm and dapl data transfer modes [52] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [52] MPI startup(): shm and dapl data transfer modes [58] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [58] MPI startup(): shm and dapl data transfer modes [69] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [57] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [69] MPI startup(): shm and dapl data transfer modes [57] MPI startup(): shm and dapl data transfer modes [61] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [61] MPI startup(): shm and dapl data transfer modes [60] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [80] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [80] MPI startup(): shm and dapl data transfer modes [60] MPI startup(): shm and dapl data transfer modes [63] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [63] MPI startup(): shm and dapl data transfer modes [72] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [72] MPI startup(): shm and dapl data transfer modes [74] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [74] MPI startup(): shm and dapl data transfer modes [92] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [73] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [92] MPI startup(): shm and dapl data transfer modes [73] MPI startup(): shm and dapl data transfer modes [90] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [90] MPI startup(): shm and dapl data transfer modes [79] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [79] MPI startup(): shm and dapl data transfer modes [77] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [77] MPI startup(): shm and dapl data transfer modes [82] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [82] MPI startup(): shm and dapl data transfer modes [76] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [84] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [84] MPI startup(): shm and dapl data transfer modes [76] MPI startup(): shm and dapl data transfer modes [93] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [93] MPI startup(): shm and dapl data transfer modes [72] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [72] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [74] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [74] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [75] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [75] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [84] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [84] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [76] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [76] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [77] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [77] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [48] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [48] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [78] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [78] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [24] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [24] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [53] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [53] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [79] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [79] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [26] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [26] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [54] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [54] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [80] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [80] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [30] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [30] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [59] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [59] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [81] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [81] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [31] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [31] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [60] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [60] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [82] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [82] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [8] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [8] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [35] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [35] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [49] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [49] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [83] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [83] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [27] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [27] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [50] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [50] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [85] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [85] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [9] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [9] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [28] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [28] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [51] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [51] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [86] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [86] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [10] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [10] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [29] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [29] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [52] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [52] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [87] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [87] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [11] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [11] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [32] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [32] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [55] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [55] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [88] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [88] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [12] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [12] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [33] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [33] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [56] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [56] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [89] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [89] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [13] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [13] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [34] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [34] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [57] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [57] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [90] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [90] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [15] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [15] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [36] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [36] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [58] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [58] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [91] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [91] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [17] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [17] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [37] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [37] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [62] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [62] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [92] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [92] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [18] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [18] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [38] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [38] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [63] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [63] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [93] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [93] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [21] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [21] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [39] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [39] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [64] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [64] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [94] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [94] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [22] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [22] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [40] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [40] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [65] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [65] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [95] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [95] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [41] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [41] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [66] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [66] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [73] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [42] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [42] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [67] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [67] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [73] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [14] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [14] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [43] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [43] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [68] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [68] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [16] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [16] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [44] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [44] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [69] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [69] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [19] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [19] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [45] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [45] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [70] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [70] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [20] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [20] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [46] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [46] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [71] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [71] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [23] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [23] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [47] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [47] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [61] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [61] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [25] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [25] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63445 n0471 0 [0] MPI startup(): 1 63446 n0471 1 [0] MPI startup(): 2 63447 n0471 2 [0] MPI startup(): 3 63448 n0471 3 [0] MPI startup(): 4 63449 n0471 4 [0] MPI startup(): 5 63450 n0471 5 [0] MPI startup(): 6 63451 n0471 6 [0] MPI startup(): 7 63452 n0471 7 [0] MPI startup(): 8 63453 n0471 8 [0] MPI startup(): 9 63454 n0471 9 [0] MPI startup(): 10 63455 n0471 10 [0] MPI startup(): 11 63456 n0471 11 [0] MPI startup(): 12 63457 n0471 12 [0] MPI startup(): 13 63458 n0471 13 [0] MPI startup(): 14 63459 n0471 14 [0] MPI startup(): 15 63460 n0471 15 [0] MPI startup(): 16 63461 n0471 16 [0] MPI startup(): 17 63462 n0471 17 [0] MPI startup(): 18 63463 n0471 18 [0] MPI startup(): 19 63464 n0471 19 [0] MPI startup(): 20 63465 n0471 20 [0] MPI startup(): 21 63466 n0471 21 [0] MPI startup(): 22 63467 n0471 22 [0] MPI startup(): 23 63468 n0471 23 [0] MPI startup(): 24 39933 n0483 0 [0] MPI startup(): 25 39934 n0483 1 [0] MPI startup(): 26 39935 n0483 2 [0] MPI startup(): 27 39936 n0483 3 [0] MPI startup(): 28 39937 n0483 4 [0] MPI startup(): 29 39938 n0483 5 [0] MPI startup(): 30 39939 n0483 6 [0] MPI startup(): 31 39940 n0483 7 [0] MPI startup(): 32 39941 n0483 8 [0] MPI startup(): 33 39942 n0483 9 [0] MPI startup(): 34 39943 n0483 10 [0] MPI startup(): 35 39944 n0483 11 [0] MPI startup(): 36 39945 n0483 12 [0] MPI startup(): 37 39946 n0483 13 [0] MPI startup(): 38 39947 n0483 14 [0] MPI startup(): 39 39948 n0483 15 [0] MPI startup(): 40 39949 n0483 16 [0] MPI startup(): 41 39950 n0483 17 [0] MPI startup(): 42 39951 n0483 18 [0] MPI startup(): 43 39952 n0483 19 [0] MPI startup(): 44 39953 n0483 20 [0] MPI startup(): 45 39954 n0483 21 [0] MPI startup(): 46 39955 n0483 22 [0] MPI startup(): 47 39956 n0483 23 [0] MPI startup(): 48 24408 n0497 0 [0] MPI startup(): 49 24409 n0497 1 [0] MPI startup(): 50 24410 n0497 2 [0] MPI startup(): 51 24411 n0497 3 [0] MPI startup(): 52 24412 n0497 4 [0] MPI startup(): 53 24413 n0497 5 [0] MPI startup(): 54 24414 n0497 6 [0] MPI startup(): 55 24415 n0497 7 [0] MPI startup(): 56 24416 n0497 8 [0] MPI startup(): 57 24417 n0497 9 [0] MPI startup(): 58 24418 n0497 10 [0] MPI startup(): 59 24419 n0497 11 [0] MPI startup(): 60 24420 n0497 12 [0] MPI startup(): 61 24421 n0497 13 [0] MPI startup(): 62 24422 n0497 14 [0] MPI startup(): 63 24423 n0497 15 [0] MPI startup(): 64 24424 n0497 16 [0] MPI startup(): 65 24425 n0497 17 [0] MPI startup(): 66 24426 n0497 18 [0] MPI startup(): 67 24427 n0497 19 [0] MPI startup(): 68 24428 n0497 20 [0] MPI startup(): 69 24429 n0497 21 [0] MPI startup(): 70 24430 n0497 22 [0] MPI startup(): 71 24431 n0497 23 [0] MPI startup(): 72 13583 n0501 0 [0] MPI startup(): 73 13584 n0501 1 [0] MPI startup(): 74 13585 n0501 2 [0] MPI startup(): 75 13586 n0501 3 [0] MPI startup(): 76 13587 n0501 4 [0] MPI startup(): 77 13588 n0501 5 [0] MPI startup(): 78 13589 n0501 6 [0] MPI startup(): 79 13590 n0501 7 [0] MPI startup(): 80 13591 n0501 8 [0] MPI startup(): 81 13592 n0501 9 [0] MPI startup(): 82 13593 n0501 10 [0] MPI startup(): 83 13594 n0501 11 [0] MPI startup(): 84 13595 n0501 12 [0] MPI startup(): 85 13596 n0501 13 [0] MPI startup(): 86 13597 n0501 14 [0] MPI startup(): 87 13598 n0501 15 [0] MPI startup(): 88 13599 n0501 16 [0] MPI startup(): 89 13600 n0501 17 [0] MPI startup(): 90 13601 n0501 18 [0] MPI startup(): 91 13602 n0501 19 [0] MPI startup(): 92 13603 n0501 20 [0] MPI startup(): 93 13604 n0501 21 [0] MPI startup(): 94 13605 n0501 22 [0] MPI startup(): 95 13606 n0501 23 [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=24:0 0,1 1,2 2,3 3,4 4,5 5,6 6,7 7,8 8,9 9,10 10,11 11,12 12,13 13,14 14,15 15,16 16,17 17,18 18,19 19,20 20,21 21,22 22,23 23 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:47:05 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -npmin 96 -input IMB_SELECT_MPI1 -msglen ./msglens # # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Biband # Biband # Biband # Biband # Biband # Biband # Biband #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 99908906 65536 514 19829.27 317268 524288 65 19577.67 39155 4194304 8 19522.99 4881 #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 96664555 65536 514 19821.30 317141 524288 64 19531.64 39063 4194304 8 19451.72 4863 #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 97306231 65536 515 19822.94 317167 524288 64 19592.85 39186 4194304 8 19519.99 4880 #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 69006544 65536 513 19845.87 317534 524288 64 19527.81 39056 4194304 8 19538.67 4885 #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 97251882 65536 514 19811.48 316984 524288 64 19495.02 38990 4194304 8 19423.66 4856 #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 96690669 65536 515 19824.98 317200 524288 64 19445.16 38890 4194304 8 19440.11 4860 #--------------------------------------------------- # Benchmarking Biband # #processes = 96 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 96643167 65536 515 19819.61 317114 524288 64 19504.21 39008 4194304 8 19514.79 4879 # All processes entering MPI_Finalize [0] MPI startup(): Multi-threaded optimized library [179] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [26] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [179] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [179] MPI startup(): shm and dapl data transfer modes [176] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [100] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [187] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [26] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [26] MPI startup(): shm and dapl data transfer modes [23] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [176] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [14] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [45] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [50] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [153] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [176] MPI startup(): shm and dapl data transfer modes [59] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [187] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [71] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [187] MPI startup(): shm and dapl data transfer modes [18] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [42] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [100] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [100] MPI startup(): shm and dapl data transfer modes [118] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [157] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [50] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [50] MPI startup(): shm and dapl data transfer modes [14] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [14] MPI startup(): shm and dapl data transfer modes [23] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [23] MPI startup(): shm and dapl data transfer modes [152] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [118] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [118] MPI startup(): shm and dapl data transfer modes [24] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [42] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [42] MPI startup(): shm and dapl data transfer modes [45] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [45] MPI startup(): shm and dapl data transfer modes [142] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [36] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [59] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [59] MPI startup(): shm and dapl data transfer modes [71] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [71] MPI startup(): shm and dapl data transfer modes [18] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [18] MPI startup(): shm and dapl data transfer modes [13] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [153] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [153] MPI startup(): shm and dapl data transfer modes [157] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [157] MPI startup(): shm and dapl data transfer modes [178] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [175] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [171] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [184] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [177] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [170] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [24] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [24] MPI startup(): shm and dapl data transfer modes [182] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [27] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [34] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [36] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [36] MPI startup(): shm and dapl data transfer modes [44] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [93] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [180] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [39] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [15] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [11] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [19] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [142] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [145] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [152] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [152] MPI startup(): shm and dapl data transfer modes [160] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [166] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [158] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [181] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [27] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [27] MPI startup(): shm and dapl data transfer modes [164] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [168] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [49] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [75] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [64] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [185] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [189] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [48] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [161] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [174] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [178] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [68] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [178] MPI startup(): shm and dapl data transfer modes [57] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [183] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [155] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [81] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [166] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [169] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [31] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [34] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [34] MPI startup(): shm and dapl data transfer modes [175] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [39] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [39] MPI startup(): shm and dapl data transfer modes [43] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [44] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [44] MPI startup(): shm and dapl data transfer modes [175] MPI startup(): shm and dapl data transfer modes [171] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [171] MPI startup(): shm and dapl data transfer modes [37] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [114] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [124] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [128] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [142] MPI startup(): shm and dapl data transfer modes [46] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [190] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [51] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [173] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [172] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [25] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [62] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [98] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [2] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [188] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [3] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [8] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [85] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [184] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [13] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [13] MPI startup(): shm and dapl data transfer modes [17] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [21] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [184] MPI startup(): shm and dapl data transfer modes [15] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [96] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [191] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [15] MPI startup(): shm and dapl data transfer modes [94] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [170] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [186] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [29] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [41] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [170] MPI startup(): shm and dapl data transfer modes [76] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [101] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [28] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [177] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [11] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [105] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [35] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [145] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [145] MPI startup(): shm and dapl data transfer modes [177] MPI startup(): shm and dapl data transfer modes [11] MPI startup(): shm and dapl data transfer modes [32] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [146] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [99] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [137] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [148] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [160] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [160] MPI startup(): shm and dapl data transfer modes [166] MPI startup(): shm and dapl data transfer modes [77] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [180] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [47] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [151] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [180] MPI startup(): shm and dapl data transfer modes [132] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [158] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [158] MPI startup(): shm and dapl data transfer modes [40] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [104] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [30] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [38] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [159] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [163] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [182] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [164] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [164] MPI startup(): shm and dapl data transfer modes [182] MPI startup(): shm and dapl data transfer modes [110] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [154] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [33] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [147] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [31] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [53] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [31] MPI startup(): shm and dapl data transfer modes [70] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [125] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [107] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [149] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [143] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [49] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [49] MPI startup(): shm and dapl data transfer modes [60] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [61] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [112] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [64] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [64] MPI startup(): shm and dapl data transfer modes [123] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [43] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [43] MPI startup(): shm and dapl data transfer modes [48] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [48] MPI startup(): shm and dapl data transfer modes [128] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [128] MPI startup(): shm and dapl data transfer modes [119] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [136] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [57] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [68] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [57] MPI startup(): shm and dapl data transfer modes [68] MPI startup(): shm and dapl data transfer modes [66] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [2] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [2] MPI startup(): shm and dapl data transfer modes [3] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [3] MPI startup(): shm and dapl data transfer modes [4] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [5] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [6] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [8] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [8] MPI startup(): shm and dapl data transfer modes [9] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [10] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [12] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [17] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [17] MPI startup(): shm and dapl data transfer modes [19] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [19] MPI startup(): shm and dapl data transfer modes [20] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [21] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [21] MPI startup(): shm and dapl data transfer modes [0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [16] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [22] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [37] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [7] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [37] MPI startup(): shm and dapl data transfer modes [97] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [74] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [75] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [75] MPI startup(): shm and dapl data transfer modes [51] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [81] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [81] MPI startup(): shm and dapl data transfer modes [83] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [51] MPI startup(): shm and dapl data transfer modes [86] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [124] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [89] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [168] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [91] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [124] MPI startup(): shm and dapl data transfer modes [93] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [93] MPI startup(): shm and dapl data transfer modes [168] MPI startup(): shm and dapl data transfer modes [95] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [56] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [54] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [46] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [189] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [46] MPI startup(): shm and dapl data transfer modes [185] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [120] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [62] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [185] MPI startup(): shm and dapl data transfer modes [189] MPI startup(): shm and dapl data transfer modes [55] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [62] MPI startup(): shm and dapl data transfer modes [78] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [111] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [67] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [174] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [69] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [174] MPI startup(): shm and dapl data transfer modes [121] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [181] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [10] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [85] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [63] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [181] MPI startup(): shm and dapl data transfer modes [10] MPI startup(): shm and dapl data transfer modes [85] MPI startup(): shm and dapl data transfer modes [114] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [114] MPI startup(): shm and dapl data transfer modes [65] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [58] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [88] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [94] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [94] MPI startup(): shm and dapl data transfer modes [183] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [183] MPI startup(): shm and dapl data transfer modes [52] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [25] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [134] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [25] MPI startup(): shm and dapl data transfer modes [144] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [146] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [146] MPI startup(): shm and dapl data transfer modes [76] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [137] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [148] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [148] MPI startup(): shm and dapl data transfer modes [169] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [5] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [76] MPI startup(): shm and dapl data transfer modes [137] MPI startup(): shm and dapl data transfer modes [150] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [169] MPI startup(): shm and dapl data transfer modes [5] MPI startup(): shm and dapl data transfer modes [77] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [115] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [133] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [155] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [155] MPI startup(): shm and dapl data transfer modes [190] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [77] MPI startup(): shm and dapl data transfer modes [131] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [156] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [190] MPI startup(): shm and dapl data transfer modes [87] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [161] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [161] MPI startup(): shm and dapl data transfer modes [186] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [162] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [186] MPI startup(): shm and dapl data transfer modes [165] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [188] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [188] MPI startup(): shm and dapl data transfer modes [92] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [132] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [167] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [80] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [98] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [132] MPI startup(): shm and dapl data transfer modes [151] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [86] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [98] MPI startup(): shm and dapl data transfer modes [151] MPI startup(): shm and dapl data transfer modes [86] MPI startup(): shm and dapl data transfer modes [116] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [125] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [82] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [96] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [125] MPI startup(): shm and dapl data transfer modes [96] MPI startup(): shm and dapl data transfer modes [129] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [141] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [41] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [41] MPI startup(): shm and dapl data transfer modes [90] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [172] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [172] MPI startup(): shm and dapl data transfer modes [173] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [173] MPI startup(): shm and dapl data transfer modes [29] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [191] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [191] MPI startup(): shm and dapl data transfer modes [101] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [143] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [113] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [143] MPI startup(): shm and dapl data transfer modes [29] MPI startup(): shm and dapl data transfer modes [101] MPI startup(): shm and dapl data transfer modes [102] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [72] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [106] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [84] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [73] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [117] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [79] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [159] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [6] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [159] MPI startup(): shm and dapl data transfer modes [28] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [6] MPI startup(): shm and dapl data transfer modes [28] MPI startup(): shm and dapl data transfer modes [109] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [163] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [163] MPI startup(): shm and dapl data transfer modes [103] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [108] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [122] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [40] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [40] MPI startup(): shm and dapl data transfer modes [135] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [105] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [105] MPI startup(): shm and dapl data transfer modes [99] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [99] MPI startup(): shm and dapl data transfer modes [123] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [35] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [123] MPI startup(): shm and dapl data transfer modes [127] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [138] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [32] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [32] MPI startup(): shm and dapl data transfer modes [139] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [35] MPI startup(): shm and dapl data transfer modes [126] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [38] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [38] MPI startup(): shm and dapl data transfer modes [147] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [30] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [147] MPI startup(): shm and dapl data transfer modes [136] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [9] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [130] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [9] MPI startup(): shm and dapl data transfer modes [136] MPI startup(): shm and dapl data transfer modes [1] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [104] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [1] MPI startup(): shm and dapl data transfer modes [140] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx4_0-1u [104] MPI startup(): shm and dapl data transfer modes [107] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [107] MPI startup(): shm and dapl data transfer modes [30] MPI startup(): shm and dapl data transfer modes [33] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [33] MPI startup(): shm and dapl data transfer modes [47] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [47] MPI startup(): shm and dapl data transfer modes [12] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [154] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [12] MPI startup(): shm and dapl data transfer modes [154] MPI startup(): shm and dapl data transfer modes [165] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [110] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [53] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [53] MPI startup(): shm and dapl data transfer modes [150] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [110] MPI startup(): shm and dapl data transfer modes [60] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [60] MPI startup(): shm and dapl data transfer modes [165] MPI startup(): shm and dapl data transfer modes [61] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [61] MPI startup(): shm and dapl data transfer modes [150] MPI startup(): shm and dapl data transfer modes [70] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [70] MPI startup(): shm and dapl data transfer modes [66] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [66] MPI startup(): shm and dapl data transfer modes [144] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [83] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [144] MPI startup(): shm and dapl data transfer modes [156] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [20] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [83] MPI startup(): shm and dapl data transfer modes [149] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [20] MPI startup(): shm and dapl data transfer modes [156] MPI startup(): shm and dapl data transfer modes [7] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [7] MPI startup(): shm and dapl data transfer modes [4] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [4] MPI startup(): shm and dapl data transfer modes [74] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [74] MPI startup(): shm and dapl data transfer modes [120] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [119] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [120] MPI startup(): shm and dapl data transfer modes [119] MPI startup(): shm and dapl data transfer modes [149] MPI startup(): shm and dapl data transfer modes [162] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [162] MPI startup(): shm and dapl data transfer modes [167] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [167] MPI startup(): shm and dapl data transfer modes [0] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [0] MPI startup(): shm and dapl data transfer modes [16] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [16] MPI startup(): shm and dapl data transfer modes [91] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [22] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [22] MPI startup(): shm and dapl data transfer modes [91] MPI startup(): shm and dapl data transfer modes [56] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [54] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [56] MPI startup(): shm and dapl data transfer modes [54] MPI startup(): shm and dapl data transfer modes [55] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [112] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [95] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [112] MPI startup(): shm and dapl data transfer modes [55] MPI startup(): shm and dapl data transfer modes [95] MPI startup(): shm and dapl data transfer modes [63] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [63] MPI startup(): shm and dapl data transfer modes [67] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [67] MPI startup(): shm and dapl data transfer modes [121] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [121] MPI startup(): shm and dapl data transfer modes [58] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [58] MPI startup(): shm and dapl data transfer modes [89] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [89] MPI startup(): shm and dapl data transfer modes [111] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [111] MPI startup(): shm and dapl data transfer modes [52] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [52] MPI startup(): shm and dapl data transfer modes [65] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [65] MPI startup(): shm and dapl data transfer modes [69] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [69] MPI startup(): shm and dapl data transfer modes [97] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [97] MPI startup(): shm and dapl data transfer modes [134] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [134] MPI startup(): shm and dapl data transfer modes [88] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [88] MPI startup(): shm and dapl data transfer modes [116] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [116] MPI startup(): shm and dapl data transfer modes [78] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [78] MPI startup(): shm and dapl data transfer modes [87] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [87] MPI startup(): shm and dapl data transfer modes [115] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [115] MPI startup(): shm and dapl data transfer modes [131] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [131] MPI startup(): shm and dapl data transfer modes [92] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [92] MPI startup(): shm and dapl data transfer modes [133] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [133] MPI startup(): shm and dapl data transfer modes [106] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [106] MPI startup(): shm and dapl data transfer modes [80] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [80] MPI startup(): shm and dapl data transfer modes [82] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [117] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [82] MPI startup(): shm and dapl data transfer modes [117] MPI startup(): shm and dapl data transfer modes [129] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [72] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [113] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [129] MPI startup(): shm and dapl data transfer modes [72] MPI startup(): shm and dapl data transfer modes [113] MPI startup(): shm and dapl data transfer modes [90] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [141] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [141] MPI startup(): shm and dapl data transfer modes [122] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [122] MPI startup(): shm and dapl data transfer modes [102] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [102] MPI startup(): shm and dapl data transfer modes [103] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [103] MPI startup(): shm and dapl data transfer modes [108] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [108] MPI startup(): shm and dapl data transfer modes [109] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [109] MPI startup(): shm and dapl data transfer modes [139] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [139] MPI startup(): shm and dapl data transfer modes [73] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [73] MPI startup(): shm and dapl data transfer modes [79] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [79] MPI startup(): shm and dapl data transfer modes [84] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [84] MPI startup(): shm and dapl data transfer modes [90] MPI startup(): shm and dapl data transfer modes [127] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [127] MPI startup(): shm and dapl data transfer modes [130] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [130] MPI startup(): shm and dapl data transfer modes [140] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [140] MPI startup(): shm and dapl data transfer modes [135] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [126] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [126] MPI startup(): shm and dapl data transfer modes [135] MPI startup(): shm and dapl data transfer modes [138] MPI startup(): DAPL provider ofa-v2-mlx4_0-1u [138] MPI startup(): shm and dapl data transfer modes [168] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [168] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [170] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [170] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [0] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [72] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [72] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [96] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [96] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [120] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [120] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [24] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [24] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [48] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [48] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [144] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [144] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [171] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [171] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [2] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [73] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [73] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [99] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [99] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [122] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [122] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [30] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [30] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [53] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [53] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [148] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [148] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [180] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [180] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [10] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [10] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [84] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [84] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [108] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [108] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [132] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [132] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [33] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [33] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [54] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [54] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [156] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [156] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [169] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [169] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [12] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [12] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [75] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [75] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [98] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [98] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [123] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [123] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [36] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [36] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [60] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [60] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [145] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [145] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [172] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [172] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [1] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [76] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [76] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [100] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [100] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [124] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [124] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [35] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [35] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [65] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [65] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [146] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [146] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [173] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [173] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [4] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [77] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [77] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [104] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [104] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [126] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [126] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [37] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [37] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [49] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [49] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [149] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [149] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [175] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [175] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [5] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [78] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [78] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [109] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [109] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [127] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [127] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [38] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [38] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [51] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [51] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [150] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [150] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [177] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [177] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [6] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [79] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [79] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [110] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [110] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [128] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [128] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [40] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [40] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [52] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [52] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [151] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [151] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [179] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [179] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [7] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [80] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [80] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [111] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [111] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [130] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [130] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [41] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [41] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [55] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [55] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [152] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [152] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [181] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [181] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [8] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [8] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [81] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [81] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [113] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [113] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [131] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [131] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [42] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [42] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [56] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [56] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [153] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [153] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [184] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [184] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [11] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [11] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [82] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [82] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [116] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [116] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [134] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [134] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [44] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [44] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [57] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [57] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [154] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [154] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [185] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [185] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [13] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [13] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [83] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [83] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [118] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [118] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [135] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [135] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [45] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [45] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [58] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [58] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [155] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [155] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [187] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [187] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [14] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [14] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [85] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [85] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [119] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [119] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [136] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [136] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [46] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [46] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [59] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [59] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [157] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [157] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [191] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [191] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [15] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [15] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [86] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [86] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [97] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [97] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [140] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [140] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [25] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [25] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [61] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [61] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [158] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [158] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [174] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [174] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [16] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [16] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [87] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [87] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [101] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [101] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [142] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [142] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [26] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [26] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [62] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [62] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [159] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [159] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [178] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [178] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [17] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [17] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [88] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [88] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [102] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [102] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [125] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [125] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [28] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [28] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [63] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [63] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [160] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [160] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [182] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [182] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [18] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [18] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [89] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [89] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [105] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [105] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [129] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [129] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [29] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [29] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [64] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [64] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [161] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [161] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [183] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [183] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [19] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [19] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [90] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [90] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [106] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [106] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [133] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [133] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [31] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [31] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [66] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [66] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [162] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [162] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [189] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [189] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [20] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [20] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [91] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [91] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [107] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [107] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [137] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [137] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [32] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [32] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [67] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [67] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [163] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [163] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [186] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [186] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [21] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [21] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [92] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [92] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [112] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [112] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [138] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [138] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [34] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [34] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [68] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [68] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [164] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [164] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [188] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [188] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [22] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [22] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [93] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [93] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [114] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [114] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [139] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [139] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [39] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [39] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [69] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [69] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [165] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [165] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [190] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [190] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [23] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [23] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [94] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [94] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [115] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [115] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [141] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [141] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [43] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [43] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [70] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [70] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [166] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [166] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [176] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [176] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [3] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [95] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [95] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [117] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [117] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [143] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [143] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [47] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [47] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [71] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [71] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [167] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [167] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [9] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [9] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [74] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [103] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [103] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [121] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [121] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [27] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [27] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [50] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [147] MPID_nem_init_dapl_coll_fns(): User set DAPL collective mask = 0000 [147] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [74] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [50] MPID_nem_init_dapl_coll_fns(): Effective DAPL collective mask = 0000 [0] MPI startup(): Rank Pid Node name Pin cpu [0] MPI startup(): 0 63595 n0471 0 [0] MPI startup(): 1 63596 n0471 1 [0] MPI startup(): 2 63597 n0471 2 [0] MPI startup(): 3 63598 n0471 3 [0] MPI startup(): 4 63599 n0471 4 [0] MPI startup(): 5 63600 n0471 5 [0] MPI startup(): 6 63601 n0471 6 [0] MPI startup(): 7 63602 n0471 7 [0] MPI startup(): 8 63603 n0471 8 [0] MPI startup(): 9 63604 n0471 9 [0] MPI startup(): 10 63605 n0471 10 [0] MPI startup(): 11 63606 n0471 11 [0] MPI startup(): 12 63607 n0471 12 [0] MPI startup(): 13 63608 n0471 13 [0] MPI startup(): 14 63609 n0471 14 [0] MPI startup(): 15 63610 n0471 15 [0] MPI startup(): 16 63611 n0471 16 [0] MPI startup(): 17 63612 n0471 17 [0] MPI startup(): 18 63613 n0471 18 [0] MPI startup(): 19 63614 n0471 19 [0] MPI startup(): 20 63615 n0471 20 [0] MPI startup(): 21 63616 n0471 21 [0] MPI startup(): 22 63617 n0471 22 [0] MPI startup(): 23 63618 n0471 23 [0] MPI startup(): 24 40082 n0483 0 [0] MPI startup(): 25 40083 n0483 1 [0] MPI startup(): 26 40084 n0483 2 [0] MPI startup(): 27 40085 n0483 3 [0] MPI startup(): 28 40086 n0483 4 [0] MPI startup(): 29 40087 n0483 5 [0] MPI startup(): 30 40088 n0483 6 [0] MPI startup(): 31 40089 n0483 7 [0] MPI startup(): 32 40090 n0483 8 [0] MPI startup(): 33 40091 n0483 9 [0] MPI startup(): 34 40092 n0483 10 [0] MPI startup(): 35 40093 n0483 11 [0] MPI startup(): 36 40094 n0483 12 [0] MPI startup(): 37 40095 n0483 13 [0] MPI startup(): 38 40096 n0483 14 [0] MPI startup(): 39 40097 n0483 15 [0] MPI startup(): 40 40098 n0483 16 [0] MPI startup(): 41 40099 n0483 17 [0] MPI startup(): 42 40100 n0483 18 [0] MPI startup(): 43 40101 n0483 19 [0] MPI startup(): 44 40102 n0483 20 [0] MPI startup(): 45 40103 n0483 21 [0] MPI startup(): 46 40104 n0483 22 [0] MPI startup(): 47 40105 n0483 23 [0] MPI startup(): 48 24558 n0497 0 [0] MPI startup(): 49 24559 n0497 1 [0] MPI startup(): 50 24560 n0497 2 [0] MPI startup(): 51 24561 n0497 3 [0] MPI startup(): 52 24562 n0497 4 [0] MPI startup(): 53 24563 n0497 5 [0] MPI startup(): 54 24564 n0497 6 [0] MPI startup(): 55 24565 n0497 7 [0] MPI startup(): 56 24566 n0497 8 [0] MPI startup(): 57 24567 n0497 9 [0] MPI startup(): 58 24568 n0497 10 [0] MPI startup(): 59 24569 n0497 11 [0] MPI startup(): 60 24570 n0497 12 [0] MPI startup(): 61 24571 n0497 13 [0] MPI startup(): 62 24572 n0497 14 [0] MPI startup(): 63 24573 n0497 15 [0] MPI startup(): 64 24574 n0497 16 [0] MPI startup(): 65 24575 n0497 17 [0] MPI startup(): 66 24576 n0497 18 [0] MPI startup(): 67 24577 n0497 19 [0] MPI startup(): 68 24578 n0497 20 [0] MPI startup(): 69 24579 n0497 21 [0] MPI startup(): 70 24580 n0497 22 [0] MPI startup(): 71 24581 n0497 23 [0] MPI startup(): 72 13732 n0501 0 [0] MPI startup(): 73 13733 n0501 1 [0] MPI startup(): 74 13734 n0501 2 [0] MPI startup(): 75 13735 n0501 3 [0] MPI startup(): 76 13736 n0501 4 [0] MPI startup(): 77 13737 n0501 5 [0] MPI startup(): 78 13738 n0501 6 [0] MPI startup(): 79 13739 n0501 7 [0] MPI startup(): 80 13740 n0501 8 [0] MPI startup(): 81 13741 n0501 9 [0] MPI startup(): 82 13742 n0501 10 [0] MPI startup(): 83 13743 n0501 11 [0] MPI startup(): 84 13744 n0501 12 [0] MPI startup(): 85 13745 n0501 13 [0] MPI startup(): 86 13746 n0501 14 [0] MPI startup(): 87 13747 n0501 15 [0] MPI startup(): 88 13748 n0501 16 [0] MPI startup(): 89 13749 n0501 17 [0] MPI startup(): 90 13750 n0501 18 [0] MPI startup(): 91 13751 n0501 19 [0] MPI startup(): 92 13752 n0501 20 [0] MPI startup(): 93 13753 n0501 21 [0] MPI startup(): 94 13754 n0501 22 [0] MPI startup(): 95 13755 n0501 23 [0] MPI startup(): 96 17525 n0506 0 [0] MPI startup(): 97 17526 n0506 1 [0] MPI startup(): 98 17527 n0506 2 [0] MPI startup(): 99 17528 n0506 3 [0] MPI startup(): 100 17529 n0506 4 [0] MPI startup(): 101 17530 n0506 5 [0] MPI startup(): 102 17531 n0506 6 [0] MPI startup(): 103 17532 n0506 7 [0] MPI startup(): 104 17533 n0506 8 [0] MPI startup(): 105 17534 n0506 9 [0] MPI startup(): 106 17535 n0506 10 [0] MPI startup(): 107 17536 n0506 11 [0] MPI startup(): 108 17537 n0506 12 [0] MPI startup(): 109 17538 n0506 13 [0] MPI startup(): 110 17539 n0506 14 [0] MPI startup(): 111 17540 n0506 15 [0] MPI startup(): 112 17541 n0506 16 [0] MPI startup(): 113 17542 n0506 17 [0] MPI startup(): 114 17543 n0506 18 [0] MPI startup(): 115 17544 n0506 19 [0] MPI startup(): 116 17545 n0506 20 [0] MPI startup(): 117 17546 n0506 21 [0] MPI startup(): 118 17547 n0506 22 [0] MPI startup(): 119 17548 n0506 23 [0] MPI startup(): 120 50221 n0510 0 [0] MPI startup(): 121 50222 n0510 1 [0] MPI startup(): 122 50223 n0510 2 [0] MPI startup(): 123 50224 n0510 3 [0] MPI startup(): 124 50225 n0510 4 [0] MPI startup(): 125 50226 n0510 5 [0] MPI startup(): 126 50227 n0510 6 [0] MPI startup(): 127 50228 n0510 7 [0] MPI startup(): 128 50229 n0510 8 [0] MPI startup(): 129 50230 n0510 9 [0] MPI startup(): 130 50231 n0510 10 [0] MPI startup(): 131 50232 n0510 11 [0] MPI startup(): 132 50233 n0510 12 [0] MPI startup(): 133 50234 n0510 13 [0] MPI startup(): 134 50235 n0510 14 [0] MPI startup(): 135 50236 n0510 15 [0] MPI startup(): 136 50237 n0510 16 [0] MPI startup(): 137 50238 n0510 17 [0] MPI startup(): 138 50239 n0510 18 [0] MPI startup(): 139 50240 n0510 19 [0] MPI startup(): 140 50241 n0510 20 [0] MPI startup(): 141 50242 n0510 21 [0] MPI startup(): 142 50243 n0510 22 [0] MPI startup(): 143 50244 n0510 23 [0] MPI startup(): 144 45559 n1070 0 [0] MPI startup(): 145 45560 n1070 1 [0] MPI startup(): 146 45561 n1070 2 [0] MPI startup(): 147 45562 n1070 3 [0] MPI startup(): 148 45563 n1070 4 [0] MPI startup(): 149 45564 n1070 5 [0] MPI startup(): 150 45565 n1070 6 [0] MPI startup(): 151 45566 n1070 7 [0] MPI startup(): 152 45567 n1070 8 [0] MPI startup(): 153 45568 n1070 9 [0] MPI startup(): 154 45569 n1070 10 [0] MPI startup(): 155 45570 n1070 11 [0] MPI startup(): 156 45571 n1070 12 [0] MPI startup(): 157 45572 n1070 13 [0] MPI startup(): 158 45573 n1070 14 [0] MPI startup(): 159 45574 n1070 15 [0] MPI startup(): 160 45575 n1070 16 [0] MPI startup(): 161 45576 n1070 17 [0] MPI startup(): 162 45577 n1070 18 [0] MPI startup(): 163 45578 n1070 19 [0] MPI startup(): 164 45579 n1070 20 [0] MPI startup(): 165 45580 n1070 21 [0] MPI startup(): 166 45581 n1070 22 [0] MPI startup(): 167 45582 n1070 23 [0] MPI startup(): 168 58718 n1089 0 [0] MPI startup(): 169 58719 n1089 1 [0] MPI startup(): 170 58720 n1089 2 [0] MPI startup(): 171 58721 n1089 3 [0] MPI startup(): 172 58722 n1089 4 [0] MPI startup(): 173 58723 n1089 5 [0] MPI startup(): 174 58724 n1089 6 [0] MPI startup(): 175 58725 n1089 7 [0] MPI startup(): 176 58726 n1089 8 [0] MPI startup(): 177 58727 n1089 9 [0] MPI startup(): 178 58728 n1089 10 [0] MPI startup(): 179 58729 n1089 11 [0] MPI startup(): 180 58730 n1089 12 [0] MPI startup(): 181 58731 n1089 13 [0] MPI startup(): 182 58732 n1089 14 [0] MPI startup(): 183 58733 n1089 15 [0] MPI startup(): 184 58734 n1089 16 [0] MPI startup(): 185 58735 n1089 17 [0] MPI startup(): 186 58736 n1089 18 [0] MPI startup(): 187 58737 n1089 19 [0] MPI startup(): 188 58738 n1089 20 [0] MPI startup(): 189 58739 n1089 21 [0] MPI startup(): 190 58740 n1089 22 [0] MPI startup(): 191 58741 n1089 23 [0] MPI startup(): I_MPI_DEBUG=5 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=mlx4_0:0 [0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2 [0] MPI startup(): I_MPI_PIN_MAPPING=24:0 0,1 1,2 2,3 3,4 4,5 5,6 6,7 7,8 8,9 9,10 10,11 11,12 12,13 13,14 14,15 15,16 16,17 17,18 18,19 19,20 20,21 21,22 22,23 23 #------------------------------------------------------------ # Intel (R) MPI Benchmarks 2017, MPI-1 part #------------------------------------------------------------ # Date : Tue Apr 18 18:51:02 2017 # Machine : x86_64 # System : Linux # Release : 2.6.32-504.8.1.el6.x86_64 # Version : #1 SMP Wed Jan 28 21:11:36 UTC 2015 # MPI Version : 3.1 # MPI Thread Environment: # Calling sequence was: # src/IMB-MPI1 -npmin 192 -input IMB_SELECT_MPI1 -msglen ./msglens # # Message lengths were user defined # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Biband # Biband # Biband # Biband # Biband # Biband # Biband #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 154330995 65536 510 39184.51 626952 524288 64 39331.77 78664 4194304 8 38956.75 9739 #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 146142831 65536 509 39281.80 628509 524288 64 39015.86 78032 4194304 8 39033.88 9758 #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 147142092 65536 509 39292.94 628687 524288 64 38914.18 77828 4194304 8 38914.36 9729 #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 135553138 65536 510 38947.73 623164 524288 62 38752.63 77505 4194304 8 38934.71 9734 #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 146860756 65536 510 39018.26 624292 524288 63 38688.24 77376 4194304 8 38828.75 9707 #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 146640133 65536 510 39290.95 628655 524288 64 39050.67 78101 4194304 8 39023.20 9756 #--------------------------------------------------- # Benchmarking Biband # #processes = 192 #--------------------------------------------------- #bytes #repetitions Mbytes/sec Msg/sec 0 1000 0.00 146222853 65536 511 39305.73 628892 524288 63 39035.15 78070 4194304 8 39035.32 9759 # All processes entering MPI_Finalize