I am using Visual Studio as the environment for Intel Fortran. I am studying MPI by myself, so I do not understand what you said about ifort, mpif90, etc. Are they commands in a Unix OS?
Can I study and practice MPI on my desktop computer? If I can, what do I need?
What is a pre-build?
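For reference: yes, ifort (Intel's Fortran compiler) and mpif90 (a common MPI compiler wrapper; Intel MPI's is mpiifort) are command-line tools on Linux/Unix. A typical session might look like the following sketch, assuming Intel MPI is installed and its environment sourced; the file name hello.f90 and the executable name hello are just placeholders:

```shell
# Compile the Fortran source with the Intel MPI wrapper
# (the wrapper adds the MPI include paths and libraries for you)
mpiifort hello.f90 -o hello

# Launch 4 MPI processes on the local machine
mpiexec -n 4 ./hello
```

You can practice MPI on a single desktop this way: all the processes simply run on the same machine.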
My first example is:
program hello
    use mpi
    integer rank, size, ierror
    call MPI_INIT(ierror)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
    print *, 'I am MPI process', rank, ' of ', size
    call MPI_FINALIZE(ierror)
end program hello
I have tried to install MPI for Java using mpiJava-1.2.5x.tar, but it won't compile. It gives an error about the mpich1.2.1 installation path, but I am using Intel Cluster Studio, so I guess I don't need the MPICH library.
If anybody has any idea regarding this, please suggest something.
If anyone knows of efforts with Java and Intel MPI, you should get an answer on the HPC/Cluster forum. Easily searchable information indicates that most Windows Java MPI work has been carried out with specific non-Intel MPI implementations, e.g.
Bit of a long story with a cry for help at the end.
The usual scenario for running an MPI application from within an existing application is to build a script with the relevant commands (mpdboot, mpiexec, mpdallexit) and to run that script (e.g. using execve). The data/messages produced by the MPI application can then be retrieved and used in the calling application. The limitation of this model is that the calling application isn't part of the MPI ring, and as such can't use MPI message passing to exchange information with the MPI application. One could obviously set up some other sort of communication channel between, say, the rank 0 process and the calling process, but that is quite elaborate.
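In Fortran terms, that script-launch scenario can be sketched roughly as follows; execute_command_line is the Fortran 2008 counterpart of calling execve, and the script name run_mpi.sh (containing the mpdboot / mpiexec / mpdallexit sequence) is a hypothetical name for illustration:

```fortran
program launch_mpi
    implicit none
    integer :: stat
    ! Run a script that contains the usual mpdboot / mpiexec / mpdallexit
    ! sequence. The script name here is only a placeholder.
    call execute_command_line('sh run_mpi.sh', exitstat=stat)
    ! The calling application is not part of the MPI ring; it only sees
    ! the job's exit status and whatever files/output the job produced.
    print *, 'MPI job finished with exit status', stat
end program launch_mpi
```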
We successfully implemented a model where the existing application first runs mpdboot, but instead of launching a script with the mpiexec commands it uses MPI_Comm_spawn_multiple() to launch the (other) MPI tasks, and then uses MPI_Intercomm_merge() to combine all the tasks into a single group.
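A rough Fortran sketch of that spawn-and-merge approach, for anyone trying the same thing; the worker executable names and process counts are assumptions for illustration:

```fortran
program spawner
    use mpi
    implicit none
    character(len=32) :: cmds(2)
    integer :: nprocs(2), infos(2)
    integer :: intercomm, intracomm, ierr
    call MPI_INIT(ierr)
    ! Worker executable names are placeholders.
    cmds(1) = './worker_a'
    cmds(2) = './worker_b'
    nprocs = (/ 2, 2 /)          ! two processes of each worker
    infos  = MPI_INFO_NULL
    call MPI_COMM_SPAWN_MULTIPLE(2, cmds, MPI_ARGVS_NULL, nprocs, infos, 0, &
                                 MPI_COMM_SELF, intercomm, MPI_ERRCODES_IGNORE, ierr)
    ! Merge parent and children into one intracommunicator; .false. places
    ! the parent in the "low" group, so its ranks come first.
    call MPI_INTERCOMM_MERGE(intercomm, .false., intracomm, ierr)
    ! ... normal MPI point-to-point/collective traffic on intracomm ...
    call MPI_FINALIZE(ierr)
end program spawner
```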
All worked well until we changed the communication fabric from TCP to OFA by setting I_MPI_FABRICS to shm:ofa. While this works perfectly well when we run the MPI application on its own (launched using a script), it caused MPI_Init() to fail when called from the existing application. To cut a long story short, it appears that this issue is caused by the existing application not being launched by mpiexec.
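For reference, the fabric change amounts to setting one environment variable before launching; the commented-out line shows the kind of TCP setting we presumably had before (a config sketch, not a fix):

```shell
# Shared memory within a node, OFA (OpenFabrics) verbs between nodes
export I_MPI_FABRICS=shm:ofa

# Previous, working configuration over TCP:
# export I_MPI_FABRICS=shm:tcp
```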
We've tried to simulate the environment in which mpiexec launches a process but have not been able to resolve this issue. Has anybody been successful in using the "ofa" fabric from an MPI process that wasn't launched using mpiexec?
Any hints would be welcome.