The issue here is that, when you try to start the MPD daemons from the 'cluster' node, the startup is unable to connect to the 'cl1n001' node.
As Tim mentioned, can you verify that passwordless SSH is set up on the cluster? That is, can you ssh from cluster to cl1n001 without being prompted for a password? Passwordless SSH is a requirement for the Intel MPI Library.
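A quick way to test this from the head node is a sketch like the following (the node name cl1n001 is taken from your setup; BatchMode=yes makes ssh fail rather than stop at a password prompt, so a failure here means passwordless SSH is not configured):

```shell
# Hypothetical check script; replace the node name with yours.
node=cl1n001
# BatchMode=yes: ssh exits non-zero instead of prompting for a password.
if ssh -o BatchMode=yes "$node" true 2>/dev/null; then
    echo "passwordless SSH to $node: OK"
else
    echo "passwordless SSH to $node: FAILED"
fi
```

If this prints FAILED, set up key-based authentication (e.g. with ssh-keygen and ssh-copy-id) before retrying mpdboot.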
Also, make sure that no old MPD daemons are running on the cluster. To do so, execute:
$ ps aux | grep mpd
If the listing shows any 'mpd' Python processes running under your account, kill -9 them to free up the port Intel MPI is trying to use. Do this on both cluster and cl1n001.
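For example, one way to sketch this cleanup (the `[m]pd` trick keeps the grep command itself out of the listing; pkill is an assumption about what is available on your nodes):

```shell
# Look for stale MPD daemons; the [m]pd pattern excludes this grep itself.
ps aux | grep '[m]pd' || echo "no mpd processes found"
# If any show up under your account, kill them on every node, e.g.:
#   pkill -9 -u "$USER" mpd
```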
Finally, this could be a case where Intel MPI tries to create the initial MPD logfile but cannot. By default, the logfile is written to /tmp on the node. Can you verify that you can indeed write into /tmp, and check whether a file called /tmp/mpd2.logfile_ is already present there?
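A minimal way to check both points on a node might look like this (the test filename is arbitrary; the logfile glob assumes the default /tmp location mentioned above):

```shell
# Confirm /tmp is writable by creating and removing a scratch file.
tmpfile=$(mktemp /tmp/mpd_write_test.XXXXXX) \
  && echo "/tmp is writable ($tmpfile)" \
  && rm -f "$tmpfile"
# Look for a leftover MPD logfile from a previous run.
ls -l /tmp/mpd2.logfile_* 2>/dev/null || true
```

If a stale logfile exists and is owned by another user, MPD may fail to overwrite it; removing it (or having its owner remove it) usually clears the error.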
Generally, I would also recommend upgrading to the latest Intel MPI Library, 3.2 Update 1.