Altera_Forum
Honored Contributor I
1,142 Views

TCP/IP Offload Engine Implementations -- What's the big deal?

Hi all, 

 

I've recently begun a project where I have to build a TCP/IP offload engine (maintaining at least a few dozen TCP sessions: calculating checksums, keeping per-session state, managing retransmission buffers, etc.). Basically, the IP core will accept standard, "uninterrupted" (read: "buffered") Avalon-ST-like interfaces for transmission and receipt of data, but manage a full TCP link within itself, transparently to the user(s) of the interface. 

 

Of course, my first inclination was to look for IP out there for sale which will easily port to and work on the Altera device I am targeting (a Stratix V chip). To my dismay, most of these IP cores are actually quite expensive! What's the big idea? It really seems to me that an experienced FPGA developer would be able to build a TCP offload engine within a few months' time, and the market competition plus the number of purchasers of such an IP core would warrant making the price lower than several tens of thousands of USD. 

 

From my perspective, one would only have to maintain a few retransmission buffers and some session-state information (directly mapping to the states documented in the TCP specs), and tack the appropriate fields onto the headers (à la UDP, except for things like window sizes and sequence numbers). I understand it's not totally trivial, but am I missing something crucial here? Before I set out to build this: has anyone here worked on a TCP or similar engine in Verilog/VHDL? What were the biggest hurdles? Why would this be any more complicated than something like a PCI Express IP core? Are there any IP cores which Altera offers which support a protocol that is similar to TCP and solves some of the same fundamental problems (retransmission buffers, windowing, etc.)?
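For what it's worth, the checksum piece at least is tightly specified: RFC 793/RFC 1071 define it as the one's-complement sum over an IPv4 pseudo-header plus the TCP segment. A minimal behavioral sketch in Python (a software reference model, not RTL; function and variable names are my own):

```python
def ones_complement_sum(data: bytes) -> int:
    """RFC 1071 one's-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header (src, dst, zero, proto=6,
    TCP length) followed by the TCP header+payload, whose own checksum
    field must be zeroed before calling this."""
    pseudo = src_ip + dst_ip + bytes([0, 6]) + len(segment).to_bytes(2, "big")
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF
```

A receiver can use the same routine for verification: summing a segment that already contains a correct checksum folds to 0xFFFF, so the function returns 0. In hardware this maps naturally onto a pipelined adder tree, which is one of the easier parts of the engine.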
4 Replies
Altera_Forum
Honored Contributor I
35 Views

Are you aware of this offering? 

http://comblock.com/product_list_ip.html 

(COM-5402SOFT) 

 

But to answer your question with a bunch of clichés:  

 

"you get what you pay for"  

"time is money" (readily available vs. slide-ware is worth something) 

"what the market will bear" (financial sector is among the early adopters) 

 

 

Overall, I agree with your assessment of the complexity though. 

 

Good luck!
Altera_Forum
Honored Contributor I
35 Views

I've wondered how well a recent Linux (etc.) would get on talking to a minimal TCP implementation -- i.e. one that has no options, doesn't do slow start, ACKs every packet, uses a fixed window size, etc. 

If the other end of the connection is local, then you probably won't have any real issues - since a few extra packets won't matter and none get discarded so retries don't really ever happen. 

Actually, if you are pushing enough data through the connections to need to consider offload, can you actually afford the time to recover from packet loss? On a working local network you don't lose packets at all, in which case sending UDP (much simpler to offload) might be an option.
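The "minimal TCP" idea above can be made concrete. A toy behavioral model of the receive side of an established connection (Python standing in as pseudocode for the hardware state machine; all names here are hypothetical):

```python
FIXED_WINDOW = 8192  # advertised window never changes (assumption)

class MinimalReceiver:
    """Receive side of a stripped-down TCP: no options, no slow start,
    fixed advertised window, and an ACK generated for every segment."""

    def __init__(self, peer_isn: int):
        # Next in-order sequence number expected (peer's ISN + 1 after SYN).
        self.rcv_nxt = (peer_isn + 1) & 0xFFFFFFFF

    def on_segment(self, seq: int, payload: bytes) -> dict:
        """Consume a data segment and return the ACK to emit."""
        if seq == self.rcv_nxt:
            # In-order: accept the payload, advance the expected sequence.
            self.rcv_nxt = (self.rcv_nxt + len(payload)) & 0xFFFFFFFF
        # Out-of-order segments are simply re-ACKed with the old rcv_nxt;
        # the peer's retransmission machinery fills the gap -- no SACK,
        # no out-of-order reassembly buffer needed.
        return {"ack": self.rcv_nxt, "win": FIXED_WINDOW}
```

On a loss-free local link the out-of-order branch essentially never fires, which is exactly why such a cut-down engine can get away with so little state per session.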
Altera_Forum
Honored Contributor I
35 Views

 

--- Quote Start ---  

 

Actually, if you are pushing enough data through the connections to need to consider offload, can you actually afford the time to recover from packet loss? 

--- Quote End ---  

 

 

I just wanted to mention that high bandwidth is only one motivation for offloading. 

 

Another reason is for lowest latency, and the applications may happen to have extremely low bandwidth requirements. (e.g. maybe you pick 10G over 1G because you want your messages to move 10x as fast, not because you want to transfer 10x the data). 

 

The reason for choosing TCP over UDP is probably external constraints (interfacing with legacy systems).  

 

I agree that UDP is much simpler and possibly entirely adequate in closed [new] systems.
Altera_Forum
Honored Contributor I
35 Views

Something else to take into account is the testing time. A lot of things can go wrong in a TCP connection, and a lot of tests need to be done to be sure your TCP stack will behave correctly in all cases. I think that is one of the reasons why the commercial TCP stacks are so expensive (and I would be very careful if I bought a miraculously cheap TCP stack!). You need a lot of test cases with lost packets (including those carrying acknowledgements or window-size changes), different states in the sender and the receiver (socket still open on one side but not the other, or one of the sides crashing), timeouts, too many open sockets... If your system will be connected to an open network, you may also have to test its robustness against malformed packets, or even DoS attacks.
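To make the testing point concrete: even a simple software testbench helps, by replaying traffic through a channel model that deterministically drops and corrupts packets so the stack under test has to exercise its retransmission and malformed-packet paths. A sketch (hypothetical names, Python standing in for an HDL testbench):

```python
import random

def lossy_channel(packets, drop_rate=0.1, corrupt_rate=0.0, seed=42):
    """Yield packets, dropping some and flipping a byte in others, to
    exercise retransmission and robustness handling in the stack under
    test. The fixed seed keeps test runs reproducible."""
    rng = random.Random(seed)
    for pkt in packets:
        if rng.random() < drop_rate:
            continue  # dropped: the sender's timeout/retransmit must cover it
        if pkt and rng.random() < corrupt_rate:
            pkt = bytearray(pkt)
            pkt[rng.randrange(len(pkt))] ^= 0xFF  # one corrupted byte
            pkt = bytes(pkt)
        yield pkt
```

Running the same scripted session at several drop/corrupt rates (including the packets that carry ACKs or window updates) covers a surprising number of the failure cases listed above, and the fixed seed makes any failure repeatable in simulation.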
