
Hyper-V Teaming with 802.3ad

idata
Employee

Hello,

I am trying to set up a new 4-node Hyper-V cluster. I have 4 Intel Gigabit ET2 Quad Port Adapters, and I have installed PROSet 16.2. I can create the team without issues, and I have enabled VMDQ. The problem starts when I create my virtual switch in Hyper-V: I point the virtual switch to the newly created team adapter, select "allow management operating system to share this adapter," and set the VLAN. It creates without issues, and I then manually set my IP address. But now I can't get any communication across this port; I cannot ping my gateway or anything.
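
For reference, I assigned the management address from an elevated command prompt, roughly like this (the interface name and the addresses here are placeholders, not my real values):

    netsh interface ipv4 set address name="Local Area Connection - Virtual Network" static 192.168.10.21 255.255.255.0 192.168.10.1
    ping 192.168.10.1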

I have also set up the virtual switch without "allow management operating system to share this adapter" and then set the correct VLAN ID in my VM. Still no communication.

Microsoft, of course, says it does not support NIC teaming, even though Intel is a partner.

Has anyone had any luck with this, or can anyone recommend a solution?

Dell PowerEdge 2950

Intel Gigabit ET2 Quad Port Adapter

Windows Server 2008 R2 with SP1

Thanks,

Neal

idata
Employee

Some more information:

We are using Dynamic LACP mode. I know the switch side is correct because I can create multiple VLANs within the new team (which disables VMDQ), and it works without issues. Intel doesn't recommend this because once you bind Hyper-V to that adapter, PROSet doesn't allow any more VLANs to be created, and we need to be able to grow and configure VLANs via the VLAN ID within the VM.
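
For what it's worth, the switch side is just a standard LACP port channel. As a sketch only (Cisco IOS-style syntax for illustration, since exact commands vary by switch model; the port range and VLAN IDs are placeholders):

    interface range GigabitEthernet0/1 - 4
     channel-group 1 mode active
    interface Port-channel1
     switchport mode trunk
     switchport trunk allowed vlan 10,20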

Mark_H_Intel
Employee

Try this: disable VMQ on the adapters, create the VLAN only on the Hyper-V switch or on the individual VM, and do not configure any VLAN on the NIC team. As I recall, someone else had communications working with this type of setup, but the communications stopped when they enabled VMQ.
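
If you need to turn VMQ off on several hosts, one scripted route is the adapter's advanced registry keyword. This is a rough sketch only, assuming the driver honors the standardized *VMQ keyword; the 0007 instance number and connection name are examples, so locate your adapter's instance under the network class key first:

    rem Set *VMQ to 0 (disabled) on the adapter's driver instance (0007 is an example).
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v *VMQ /t REG_SZ /d 0 /f
    rem Restart the adapter so the driver re-reads the setting.
    netsh interface set interface name="Local Area Connection 3" admin=disabled
    netsh interface set interface name="Local Area Connection 3" admin=enabled

The same setting is usually also exposed on the adapter's Advanced property tab in Device Manager.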

Mark H

idata
Employee

Mark,

Thank you for your response. That did fix it. I set up 3 hosts without using VMDQ; for giggles, I set one of them up with VMDQ, and the VMs couldn't communicate. It is a shame that I cannot take advantage of VMDQ. I have already submitted a case, and I hope this bug will be taken care of in the future. Nevertheless, this fixes my immediate needs. Thanks again for your help.

-Neal

Mark_H_Intel
Employee

Neal,

I am glad that you were able to use the adapters with your virtual machines. The issue with losing communications when enabling virtual machine queues is under investigation by Intel. A future fix should allow you to enable the feature on your adapters. I plan to post a further response whenever a fix is released.

Mark H

idata
Employee

Hello,

To pick this old thread back up: is there any new information about the issue?

We have done some further investigation on the software side:

We have a six-node Hyper-V cluster on Windows Server 2008 R2 SP1 with Intel X520 and Intel Quad Port ET mezzanine adapters.

It seems to be partially fixed. We use the latest boot ROM and driver with the Intel Quad Port ET adapter. Everything works fine with VMQ, except that we can't use VLANs in combination with Hyper-V legacy network adapters; there is no problem with synthetic adapters. Cross-checking the scenario with the previously used Broadcom adapters, VMQ works fine.

We tested without teaming and with the static aggregation, LACP, and VMLB modes, all with the same result: the combination of VLAN, VMDQ, and a legacy network adapter doesn't work, and disabling VMQ fixes the issue.

The only option we see at the moment to get VMQ to work is to use no VLAN and set the switch port to access mode.
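
On the switch, that workaround looks roughly like this (PowerConnect-style CLI from memory; the port name and VLAN ID are placeholders):

    interface ethernet 1/g1
    switchport mode access
    switchport access vlan 20
    exit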

Will this be fixed in the near future? With a growing number of VMs per host, disabling VMQ is a real performance hit.

Another side question: is there any recommendation from Intel as to which teaming mode is preferable in combination with Hyper-V, assuming all modes are technically available? We use Dell M610 blades with PowerConnect 6348 blade switches.

Greetings,

Stefan

idata
Employee

Hi, I have just spent weeks trying to fix an issue with my two three-node clusters where servers could not communicate between nodes. I have had cases open with Microsoft and with Dell (I use Dell M610s), to no avail.

I found this thread tonight, and it seems to have resolved my issue. I was using a single NIC (no team) with the Hyper-V profile, and it did not work. I have turned off VMQ on all nodes, as per this thread, and that fixed it. I am using driver version 15.5.2.

Is a fix for this coming soon? I could really use VMQ.

Thanks

Patrick_K_Intel1
Employee

Your problem description didn't provide much information.

There are much newer drivers available that have various fixes, some for VMQ. I would try the new drivers available here:

http://downloadcenter.intel.com/
