Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Errno: 10055 - insufficient buffer space - queue full when using MPI_Send

mcapogreco
Beginner
Hi,

I have done some testing and found that I get error 10055 (WSAENOBUFS, http://www.sockets.com/err_lst1.htm#WSAENOBUFS), which occurs when I am doing a synchronized send. I am using boost.mpi and have found that this happens both with send and with isend followed by a wait on the request.

My understanding is that with a synchronized send the buffer can be reused once the send has completed its handshake, but that does not appear to be the case here.
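
For reference, the pattern is roughly the following (a simplified sketch, not my actual code; the message size and loop count are only illustrative):

    #include <boost/mpi.hpp>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
        mpi::environment env(argc, argv);
        mpi::communicator world;

        std::vector<double> payload(100000, 1.0);    // illustrative message

        if (world.rank() == 0) {
            for (int i = 0; i < 10000; ++i) {        // many sends in a row
                mpi::request req = world.isend(1, 0, payload);
                req.wait();                          // this is where WSAENOBUFS shows up
            }
        } else if (world.rank() == 1) {
            std::vector<double> incoming;
            for (int i = 0; i < 10000; ++i)
                world.recv(0, 0, incoming);
        }
        return 0;
    }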

I am using standard TCP over Ethernet as my backbone.

Does anyone know whether there is a setting in the Intel MPI configuration that would let me increase the buffer size, or some other way to get around this limit? It is kicking in at quite a low threshold for my project's purposes.

Also, is there a way to zip data that is sent with the Intel MPI Library?

Thanks

Mark

Andrey_D_Intel
Employee

Hi Mark,

Could you please clarify your system configuration (hardware, OS version)?

Strictly speaking, the TCP receive and transmit buffer sizes are controlled by your system configuration. The Intel MPI Library itself provides control over the TCP buffer size if the operating system allows this value to be adjusted: set the I_MPI_TCP_BUFFER_SIZE environment variable to the desired value to override the defaults. Keep in mind, though, that the actual TCP socket buffer size is still restricted by the existing TCP settings on your system.
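
For example, on Windows you could set the variable in the command shell before launching, or pass it to all processes through mpiexec (the value and executable name below are only illustrative):

    set I_MPI_TCP_BUFFER_SIZE=1048576
    mpiexec -n 4 myapp.exe

    rem or equivalently, pass it at launch time:
    mpiexec -genv I_MPI_TCP_BUFFER_SIZE 1048576 -n 4 myapp.exe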

Best regards,

Andrey

mcapogreco
Beginner
Hi,

Thanks for your feedback.

I am running 32-bit Windows XP on a Dell T5500 with two 6-core Xeon CPUs and a standard Ethernet/WinSock backbone.


If the backbone does prove to be a bottleneck, do you know whether the Intel MPI Library provides a facility to zip the data before it is sent and unzip it at the remote end, or does this need to be done manually?

Thanks

Mark

TimP
Honored Contributor III
As far as I can see from Dell's literature, T5500 is a standard dual socket Xeon 5520 platform. If, instead, it is actually a Xeon 56xx platform, that would be interesting to know. If you are running on a single node, Intel MPI should choose shared memory communication automatically, giving you the best efficiency. If there is a NUMA option in the BIOS setup, you would want to set that, unless for some reason you must choose a Windows version where it doesn't work.
I haven't heard of any MPI providing automatic zipping of messages. It's hard to see why you would want to do that, unless your messages consist of large character strings.
32-bit Windows may itself limit the buffer sizes available to you, particularly on a discontinued Windows version that doesn't fully support your platform. I doubt you can overcome such problems with unusual experiments; it's difficult enough to get full MPI performance on Windows without setting handicaps for yourself.
Dmitry_K_Intel2
Employee
Hi Mark,

You can get an advantage from zipping/unzipping only for large messages. Our testing showed a benefit only for messages larger than about 5 MB, and only when using the IPP library. Such big messages are not sent very often, and the Intel MPI Library cannot do this automatically.
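
If you decide to do it manually, a rough sketch of the idea (using zlib, which I am assuming you can link against; plain MPI calls, error checking omitted) could look like this:

    #include <mpi.h>
    #include <zlib.h>
    #include <vector>

    // Compress a byte buffer before sending; the receiver needs both the
    // compressed size and the original size to decompress it again.
    void send_compressed(const std::vector<char>& data, int dest, int tag, MPI_Comm comm)
    {
        uLongf comp_len = compressBound(static_cast<uLong>(data.size()));
        std::vector<Bytef> comp(comp_len);
        compress(&comp[0], &comp_len,
                 reinterpret_cast<const Bytef*>(&data[0]),
                 static_cast<uLong>(data.size()));

        unsigned long sizes[2] = { comp_len, static_cast<unsigned long>(data.size()) };
        MPI_Send(sizes, 2, MPI_UNSIGNED_LONG, dest, tag, comm);
        MPI_Send(&comp[0], static_cast<int>(comp_len), MPI_BYTE, dest, tag, comm);
    }

    void recv_compressed(std::vector<char>& data, int source, int tag, MPI_Comm comm)
    {
        unsigned long sizes[2];
        MPI_Recv(sizes, 2, MPI_UNSIGNED_LONG, source, tag, comm, MPI_STATUS_IGNORE);

        std::vector<Bytef> comp(sizes[0]);
        MPI_Recv(&comp[0], static_cast<int>(sizes[0]), MPI_BYTE, source, tag, comm,
                 MPI_STATUS_IGNORE);

        data.resize(sizes[1]);
        uLongf out_len = sizes[1];
        uncompress(reinterpret_cast<Bytef*>(&data[0]), &out_len,
                   &comp[0], static_cast<uLong>(sizes[0]));
    }

Whether this pays off depends entirely on how compressible your data is; for messages smaller than a few megabytes the compression overhead usually outweighs the saving.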

Regards!
Dmitry