Hi,
I just downloaded the Intel MPI Library for Windows.
Is there a way to start the MPI application without using mpiexec? I checked the reference manual and didn't find any information about this.
The reason is that we already have our own Windows service application that acts as a process manager, so it knows which servers should start the program, and so on. Also, in certain cases we have to use the visible desktop and the user impersonation Windows APIs when we launch our application.
I found an example of this feature in another Windows MPI library, DeinoMPI:
http://mpi.deino.net/manual.htm
"Singleton init supports MPI-2 spawn functions. DeinoMPI allows single processes started without the process manager to call the spawn functions just as if it had been started by mpiexec. In other words, if you start your application like this, mpiexec n 1 myapp.exe, or like this, myapp.exe, both applications can call MPI_Comm_spawn."
So I want to know if it's possible to do the same thing with the Intel MPI Library as well.
Thanks a lot!
Eric
Hi Eric,
You are actually looking at two different capabilities. The first is a job scheduler. In Windows*, the Intel MPI Library integrates with the Microsoft* Job Scheduler and the PBS Pro* Job Scheduler. If you are using one of those, you would use the instructions in the Reference Manual. These schedulers still call mpiexec to launch the processes.
It sounds like you are using a different scheduler. In this case, what I would recommend is to set up a wrapper that will generate a configuration file and/or set appropriate environment variables for each MPI run, and then call mpiexec with that configuration file or environment. Are you using a custom scheduler, or is it one that is available to the public?
The second capability is process management within an MPI job. The MPI_Comm_spawn and other MPI-2 process management capabilities are supported in the Intel MPI Library. You will still need to launch the original executable with mpiexec in order to use this capability.
In general, you can launch an MPI application without using mpiexec, but it will then run only as a single instance. Normally, each instance of an MPI application is connected to the others under mpiexec, and process spawning is still handled under mpiexec.
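A minimal sketch of that singleton behavior (the file name singleton.c is just a placeholder): the same binary reports rank 0 of 1 when launched directly, and the usual ranks when launched under mpiexec.

```c
/* singleton.c -- run as "mpiexec -n 4 singleton.exe" to get ranks 0..3,
 * or as plain "singleton.exe" for singleton init: rank 0 of size 1. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

Either way the process is a full MPI process; the difference is only how many peers share its MPI_COMM_WORLD.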
Please let me know if you have additional questions or need further detail or clarification.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi James,
If I launch two instances of the MPI application without mpiexec, can they still talk to each other? The problem with using mpiexec is that it sends commands to SMPD, which in turn launches our application. SMPD does not have the capability to switch to the visible desktop or perform user impersonation. We already have our own job scheduler / process manager, written as a Windows service, with the ability to do both. That is why I am trying to find a way to work around the mpiexec utility. I think mpiexec mainly just launches the process and retrieves the rank/size from SMPD, right? Is there an API I can call on SMPD directly to get the same information?
Hi Eric,
What is your MPI application trying to do that would require a visible desktop on nodes other than the launching node? Typically, I/O (and any user interaction) should only occur from a single rank in the job, and if using standard input and output, will be redirected to the computer that launched the job initially, even if none of the processes are running on that computer.
While the standard allows two different communicators to connect to each other, this is something I've never tried. I believe they will still need to be launched from mpiexec to obtain the proper communications, as this is the case with other MPI commands that involve process management (I have personally tried this with MPI_Comm_spawn).
The process rank and communicator size are set and provided by mpiexec, not SMPD. SMPD only exists as a means to launch the processes on a remote host.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi James,
We have an existing Windows GUI application in which we are trying to replace WinSock communication code with MPI code. We start one copy of our application per CPU core on a given server. In most cases, the default invisible desktop does not have a large enough heap to support more than 4 copies of our application. I understand that most applications will have user interaction with only a single rank in the job; that does not apply to our case.
So if I don't start with mpiexec, how do I provide the rank/size information to the MPI application? Is that possible with the Intel MPI Library?
From the MPI Deino documentation, it says:
"...if you start your application like this, mpiexec n 1 myapp.exe, or like this, myapp.exe, both applications can call MPI_Comm_spawn."
I want to confirm if this is also possible with Intel MPI library or not.
Hi Eric,
As I stated before, in order to call MPI_Comm_spawn, you will need to run under mpiexec (with the Intel MPI Library). Trying to run without mpiexec and calling MPI_Comm_spawn leads to an application crash.
Starting outside of mpiexec, the process will have a rank of 0 and size of 1. Keep in mind that rank and size are communicator specific. If you use MPI_Comm_spawn to create new processes, the new processes will have a different MPI_COMM_WORLD from the parent. The ranks are not unique either. If a parent with rank 0 spawns 3 child processes, the first child process will also have rank 0 within the new communicator. If you build a communicator that has the parent and the children, the rank within that communicator will depend on the order the processes are added.
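A short sketch of the rank numbering described above, assuming a hypothetical binary name spawn_ranks.exe: the parent, rank 0 of 1 in its own MPI_COMM_WORLD, spawns three children of the same binary, and each child starts again at rank 0 in a new MPI_COMM_WORLD.

```c
/* spawn_ranks.c -- run the parent as "mpiexec -n 1 spawn_ranks.exe".
 * MPI_Comm_get_parent distinguishes parent from spawned children. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, inter;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (parent == MPI_COMM_NULL) {
        /* Parent: rank 0 of 1 in its MPI_COMM_WORLD. */
        MPI_Comm_spawn("spawn_ranks.exe", MPI_ARGV_NULL, 3, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &inter, MPI_ERRCODES_IGNORE);
        printf("parent: rank %d of %d\n", rank, size);
    } else {
        /* Children: ranks 0..2 of 3 in a *new* MPI_COMM_WORLD,
         * not unique with respect to the parent's rank 0. */
        printf("child: rank %d of %d\n", rank, size);
    }
    MPI_Finalize();
    return 0;
}
```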
Does your application interact with the user in any way? Is it a single GUI for all instances, or a separate GUI for each instance?
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi James,
Our application interacts with the user, and there is a separate GUI for each instance.
I am not using MPI_Comm_spawn. I start all the MPI processes separately and then use MPI_Comm_accept and MPI_Comm_connect to connect them.
So is it possible in my case to launch without mpiexec and get the ranks / size?
This is an issue that I must resolve before I can begin testing.
Thanks a lot!
Eric
Hi Eric,
I have not yet tested that scenario, so I cannot say for certain whether or not it would work. However, launching without mpiexec will give each individual instance a rank of 0 and size of 1 (for MPI_COMM_WORLD). When an application joins using MPI_Comm_accept/MPI_Comm_connect, new communicators are created.
For example, let's say you start four instances, one acting as a server and the other three joining as clients. When the first client connects to the server, the server would now have two communicators: MPI_COMM_WORLD (unchanged) and client_comm1 (an intercommunicator with the server as the local group and the client as the remote group). The client would also have two communicators: MPI_COMM_WORLD (unchanged, but not the same as the MPI_COMM_WORLD of the server) and server_newcomm (an intercommunicator with the client as the local group and the server as the remote group). The same would be true for each of the other clients, with a new communicator in the server (client_comm2, client_comm3) for each new client.
At this point, the clients can communicate with the server, and the server can communicate with the clients. However, the clients cannot directly communicate with each other. In order to do this, you would need to build a new communicator that contains all of the instances (or at least all of the clients, if you don't want the server involved). In this communicator, each client will have a rank, and the size would be the size of the new communicator.
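A hedged sketch of that pattern for one server and one client, both started without mpiexec. The port name is handed off through a file here (port.txt is an assumption for illustration; a real scheduler would pass it some other way, and the client must not read it before the server writes it). MPI_Intercomm_merge then turns the intercommunicator into an intracommunicator in which each process has a unique rank.

```c
/* join.c -- run "join.exe server" first, then "join.exe" as the client. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter, all;
    int rank, size;

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);
        FILE *f = fopen("port.txt", "w");     /* out-of-band handoff */
        fprintf(f, "%s", port);
        fclose(f);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Intercomm_merge(inter, 0, &all);  /* server group ranks first */
    } else {
        FILE *f = fopen("port.txt", "r");     /* assumes server wrote it */
        fgets(port, MPI_MAX_PORT_NAME, f);
        fclose(f);
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Intercomm_merge(inter, 1, &all);  /* client ranks after server */
    }

    /* Both singletons now share one intracommunicator of size 2,
     * with distinct ranks 0 and 1. */
    MPI_Comm_rank(all, &rank);
    MPI_Comm_size(all, &size);
    printf("merged: rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

With more clients, each accept/connect yields a new intercommunicator, and the merged communicators would have to be combined again to get one group containing everyone.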
I believe this is what you want. If so, it should work, but as I said, I have not tested this and cannot confirm that it works. If it does not work within the Intel MPI Library, it is something that could be added to a future release.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi Eric,
As an update, I have just tested using MPI_Comm_accept and MPI_Comm_connect in Windows*, and it is possible to connect to a server application without using mpiexec.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi Eric,
I made an independent scheduler to manage MPI_Comm_spawn processes in order to optimize cluster resources. I have a problem when I run a fibonacci program that uses MPI_Comm_spawn recursively, with each new process spawned on another machine according to my scheduler. When I try to spawn more than 8 processes, I get this error:
Fatal error in MPI_Comm_spawn: Other MPI error, error stack:
MPI_Comm_spawn(145).............................: MPI_Comm_spawn(cmd="./fibonacci", argv=0xbfad8fc4, maxprocs=1, info=0x9c000002, root=0, MPI_COMM_SELF, intercomm=0xbfad8e60, errors=0xbfad8e64) failed
MPIDI_Comm_spawn_multiple(230)..................:
MPID_Comm_accept(153)...........................:
MPIDI_Comm_accept(937)..........................:
MPIDI_Create_inter_root_communicator_accept(205):
MPIDI_CH3I_Progress(150)........................:
MPID_nem_mpich2_blocking_recv(948)..............:
MPID_nem_tcp_connpoll(1720).....................:
state_commrdy_handler(1556).....................:
MPID_nem_tcp_recv_handler(1446).................: socket closed
I also tried running this fibonacci program with an MPI-2 library and got the same error. The question is: why can't it spawn more than 8 processes, whether on the same machine or with each spawned process on a separate machine?
Hi sherif,
Can you show me the code you are using? What distribution are you using? How much memory and how many cores are available on the node(s) you are using? What version of the Intel MPI Library are you using?
I am not able to replicate the behavior you are experiencing, but I am noticing a slowdown at 9 processes per node (24 cores and 24 GB RAM per node on the system I tested). Everything still runs, there is just a pause when starting the 7th process if there are more than 8 on a single node. I will be investigating this further.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools