Ethernet Products

Only 1Gbps on a 2Gbps Team Dynamic Link (802.3ad) w/ PRO1000 PT & PM Port Adapters

idata (Employee)

Hi,

Pounding head against wall moment here.

Attempting to increase bandwidth to 2 Gbps using a Cisco 3560 switch and 3 servers (two running Windows Server 2008 and one running Windows Server 2003, all with Intel PRO/1000 dual-port or motherboard-integrated NICs). All servers have 2 NIC ports teamed using the PROset drivers (version 9.13.41... dated 3/26/2010), team type IEEE 802.3ad Dynamic Link Aggregation, chosen to increase bandwidth.

The 'network' is operational; however, using the iperf tool (wiki / iperf) I see it will not budge past 50% network utilization as shown in Task Manager's networking tab (50% of 2 Gbps ~ 1 Gbps = no real aggregation?). Results generally show about 920-950 Mbit/sec over multiple tries. Test config: iperf as 'server' on one of the 2008 servers, and iperf client instances on the other 2 servers. [2 client addresses were used because I had read at Intel.com that increased aggregate bandwidth is only available across multiple addresses. Have I implemented this correctly?]
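For reference, the runs looked roughly like this (the team's address below is just an example, not my real one; iperf 2.x syntax):

    iperf -s                          (on the receiving 2008 server)
    iperf -c 192.168.10.11 -t 30      (on each of the two client servers)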

Querying the Cisco switch and showing the config, I note that each server's team comes into its own EtherChannel port channel. All 3 channels show dynamic 802.3ad (Cisco's LACP mode) activity. Speed and duplex on all team and switch ports are set to 1 Gb/s full. All teams entering the Cisco switch share a VLAN to themselves, i.e. no other traffic.
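The switch-side config is roughly this (port and channel numbers here are examples, not my exact ones):

    interface range gigabitethernet 0/1 - 2
     switchport access vlan 100
     channel-group 1 mode active     (active = LACP)

    show etherchannel summary        (to verify the bundle is up)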

The VLAN only resides on the Cisco switch to keep other traffic away during the test. No VLAN is set up in PROset on the Intel NICs.

Looking at the counters on the switch, the individual ports in each team appear to be acting as primary send or receive ports. I say this because the bulk of the traffic sent or received (depending on which server you are looking at) falls on a single port rather than being distributed equally across the teamed ports.
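From the Cisco docs, the EtherChannel hash method may matter here; I believe the 3560 defaults to src-mac, which could pin one sender's traffic to a single member port. Something like this should show and change it (src-dst-ip shown as one example):

    show etherchannel load-balance
    configure terminal
    port-channel load-balance src-dst-ip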

Is there something I'm missing in the 'theory of aggregation', or a step in the Intel PROset NIC config or switch config, that I need to patch up to get my 'proto-network' to attain 2 Gbps throughput?

thanx

Mark_H_Intel (Employee)

Teaming the ports will not increase the speed of any single connection, but the potential aggregate throughput is higher. "IEEE 802.3ad Link Aggregation (LAG): what it is, and what it is not" (http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf) provides the explanation that I use for how link aggregation works.

As you can see from the explanation in the link, you do not really get a 2 Gbps pipe when you team the ports. You do have the opportunity to use each of the 1 Gbps pipes simultaneously when you have multiple TCP connections going.
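As a toy illustration only (not the exact hash any particular switch or driver uses), the frame distributor works something like the sketch below: every frame of a given conversation hashes to the same member link, so one TCP stream never spans ports.

    # Toy model of 802.3ad frame distribution (illustration only; real
    # switches/drivers hash MACs, IPs, and/or ports, vendor-dependent).
    # Key property: every frame of one conversation maps to the same
    # member link, so a single stream never exceeds one link's speed.
    def pick_link(src_ip, dst_ip, num_links):
        # Stand-in hash; stable per conversation within one run.
        return hash((src_ip, dst_ip)) % num_links

    # One client talking to the server: always the same member link.
    print(pick_link("10.0.0.1", "10.0.0.2", 2))
    print(pick_link("10.0.0.1", "10.0.0.2", 2))  # same link as above
    # A second client may (or may not) hash to the other link.
    print(pick_link("10.0.0.3", "10.0.0.2", 2))

Two different conversations may or may not land on different links, which is why multiple clients can fill both pipes while a single client cannot.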

To get a higher speed connection, you would need to move to 10-gigabit Ethernet.

I hope this helps.

Mark H

idata (Employee)

Mark,

Thanks for the response. I read through the Frazier PPT and started investigating a bit more. I successfully got this team to work, tested satisfactorily at 1.8 Gbps, by configuring for PAgP-style static EtherChannel instead of LACP, which the PRO NICs are compatible with as SLA (Static Link Aggregation). I had tried PAgP previously but had missed the Intel page stating that I needed to force my Cisco switch ports to manual 'on' mode to get the physical ports to come up (Network connectivity). Once this was done, the link came up operational.
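For anyone else chasing this, the switch side that finally worked for me boils down to (ports and channel number are examples):

    interface range gigabitethernet 0/3 - 4
     channel-group 2 mode on     (static / manual ON; matches a PROset SLA team)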

idata (Employee)

Sorry for the thread jacking, but I have exactly the same issue as the OP.

I use NTTTCP for testing and have a PRO/1000 PT quad-port adapter. I've teamed two of the ports together to create a 2 Gbps pipe as well.

I am using an HP ProCurve 1800-8G switch in between them, and no matter what I do, NTTTCP will not exceed 980 Mbit/s. Network utilization sits at only 49%, which leads me to think the team is only using one adapter.
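For reference, my runs look roughly like this (thread count and IP are examples; this is the newer single-binary NTttcp syntax, older releases split it into ntttcps.exe / ntttcpr.exe):

    ntttcp.exe -r -m 4,*,192.168.20.10    (on the receiving box)
    ntttcp.exe -s -m 4,*,192.168.20.10    (on the sending box)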

I would like to achieve around 1.5 Gbps of bandwidth, but it doesn't seem like I am able to.

Is this a problem with my switch? Thanks!

idata (Employee)

oxyl, I'm making some assumptions here without a clear understanding of your server, switch, and (multiple-)client config. It sounds like you are trying to archive data at (hopefully) 1.5 Gbps from a server to a storage server. Referring to Mark H's post and the Frazier PPT he attached, aggregation won't increase bandwidth to a single IP address. Therefore your config needs to have data from the server being transferred to at least 2 clients (separate IPs).

Making a creative guess at a possible config: server [w/ data to be archived] >> 2 Gbps pipe (SLA worked for me here) >> HP 1800-8G >> client #1 (IP #1, only needs a 1 Gbps pipe) and >> client #2 (IP #2, only needs a 1 Gbps pipe). In this config, when the NICs and the switch are configured properly and compatibly, you could see the hoped-for 1.5 Gbps on the server's 2 Gbps link.

I don't have any experience with the HP ProCurve, but glancing at the HP manual online showed the switch can do 'static' aggregation (static LACP, as the manual states), which is the eventual teaming solution we have running here with our NICs and Cisco 3560 switches. HTH, mjs

idata (Employee)

Hi Mike, thanks for the reply.

Yes, it is a storage server, but I am trying to use LACP for internal sync between the two storage servers.

My HP 1800-8G doesn't support static aggregation; only the 1810 version does.

".. aggregation won't increase bandwidth to a single IP address.." this make sense a bit more sense to me. So with 1 single IP on the teamed adapter, it can only utilize one pipeline which is 1Gbps connection then, hence explain the below 50% network utilization ?

I grabbed one of our 2960 switches and set it up for LACP, and on my adapter I did SLA, and it was a lot slower than in LACP mode. Not sure why.

I am beginning to think maybe I should just run two separate 1 Gbps crossover links to the other storage server; that way I would really utilize both network ports.
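i.e., something like this on each box, with each port on its own subnet (connection names and addresses are just examples):

    netsh interface ipv4 set address name="Sync1" static 10.10.1.1 255.255.255.0
    netsh interface ipv4 set address name="Sync2" static 10.10.2.1 255.255.255.0

(and the same on the peer, with .2 addresses on each subnet)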
