Hi Everyone,
I'm sorry if this is the incorrect forum. My question: I have an application running on Linux that calls send() to, of course, send a packet over TCP/IP (Nagle's algorithm is disabled and the payload is 4 bytes). What I can't get past is that both tcpdump and my own timing of the send() call show about 20 microseconds.
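Roughly, the measurement looks like the sketch below. This is a minimal, simplified version, not my actual code; the address, port, and payload are placeholders.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Disable Nagle's algorithm so the small payload goes out immediately. */
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    /* Placeholder peer address -- substitute the real receiver. */
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);
    inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr);
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));

    /* Time a single 4-byte send(). */
    char payload[4] = { 0 };
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    send(fd, payload, sizeof(payload), 0);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long usec = (t1.tv_sec - t0.tv_sec) * 1000000L +
                (t1.tv_nsec - t0.tv_nsec) / 1000L;
    printf("send() took %ld microseconds\n", usec);

    close(fd);
    return 0;
}
```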
I can't think of any reason why there would be a 20-microsecond delay in any of the code that runs once send() is called. Is there any chance the 20 microseconds is occurring somewhere in the hardware? I'm using a Pentium Dual-Core at around 2.0 GHz, and my Ethernet card is onboard. Would a PCI card be faster?
Anyway, if the latency is in the hardware, could someone please help me understand why this is occurring?
Thanks!
Brandon