idata
Community Manager
2,410 Views

VMQ on a team which is trunked - 10 GbE

Hi,

I'm running into a strange situation when I enable VMQ on the two-NIC team I use for trunking in Hyper-V.

I use VMLB as the teaming type. When I do a Live Migration with VMQ enabled on the team, I lose more than the usual single ping to a VM.

The NICs I'm using are Intel(R) Ethernet Server Adapter X520-2.

The driver version is 16.6 from Intel (http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=18725&ProdId=3153&lang=eng).

The servers are Dell PowerEdge R810 and they are connected to Dell PowerConnect 8024F switches.

When I disable VMQ everything is as it should be: I lose at most 1 ping during a Live Migration.
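To quantify the loss rather than eyeballing the console, I tally it from the ping summary line. A rough sketch (the function name and the sample text are my own; it assumes the standard Windows `ping` summary format):

```python
import re

def parse_ping_summary(output: str) -> dict:
    """Extract sent/received/lost counts from the summary line of
    Windows `ping` output, e.g.:
        Packets: Sent = 20, Received = 14, Lost = 6 (30% loss)
    """
    m = re.search(r"Sent = (\d+), Received = (\d+), Lost = (\d+)", output)
    if not m:
        raise ValueError("no ping summary found")
    sent, received, lost = map(int, m.groups())
    return {
        "sent": sent,
        "received": received,
        "lost": lost,
        # percentage of probes that went unanswered
        "loss_pct": 100.0 * lost / sent if sent else 0.0,
    }
```

Running a long `ping -n 20 <vm>` across a migration and feeding the output through this makes it easy to compare VMQ on vs. off.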

In the release notes (http://downloadmirror.intel.com/18725/eng/readme.txt) there is some text regarding teaming and VMQ:

From the Teaming Known Issues section:

Teaming VMQ-enabled devices may disable VMQ on the NICs
-------------------------------------------------------
If you create a team out of VMQ-enabled devices, VMQ may become disabled on all devices in the team. To work around this issue, create the team first, then enable VMQ on an adapter. If all adapters in the team are capable of VMQ, VMQ will become enabled on the team.

I tried creating the team with VMQ disabled and enabled. No difference.

Any tips?

Regards

27 Replies
Patrick_K_Intel1
Employee

Thanks for using Intel Ethernet and visiting our forum.

I've sent your question off to our virtualization team and will post a response when they get back to me.

idata
Community Manager

Any answer yet?

I'm now getting the same with 1 GbE Intel Quad ET cards.

Same driver, same symptoms.

Patrick_K_Intel1
Employee

The eval team is going to try to reproduce this issue in our lab soon.

idata
Community Manager

Hi, I have just spent weeks trying to fix an issue with my two three-node clusters where servers could not communicate between nodes. I have had cases open with Microsoft and Dell, as I use Dell M610s, to no avail.

I found this post tonight and it seems to have resolved my issue. I was using a single NIC (no team) with the Hyper-V profile and it did not work; I have turned off VMQ on all nodes as per this post and that seems to have resolved my issue. I am using driver version 15.5.2.

Is there a fix for this coming soon? I could really use VMQ.

thanks

Patrick_K_Intel1
Employee

Our validation team is still trying to reproduce the issue you reported. Thus far they have been unable to, and are requesting some additional information:

"Is everything required for Live Migration (VM traffic, LM traffic, iSCSI traffic, management traffic) running on the trunk, or do you have separate networks/connections for management and iSCSI?"

If you could provide details on your configuration it may help us to make progress.

thanx,

Patrick

idata
Community Manager

I have separate network cards for the other networks. I had two dedicated Intel NICs for VM traffic and could reproduce the issue both with a team and without.

I am more than happy to work with your team remotely if they want to run any tests or see it happening on my network.

idata
Community Manager

Hi,

I have 2 Hyper-V clusters with the same issue.

I test everything with a VM with at least 16 GB of RAM, because then the Live Migration period is long enough to reproduce the issue.

Basically, the setup of cluster 1 is as follows:

Windows Hyper-V Server 2008 R2 SP1

3 x Dell R810 with:

2 x Intel Xeon E7 2860, 10 core

256 GB of RAM

4 x OnBoard Broadcom 5709C

3 x Dual Intel Ethernet Server Adapter X520-2

Intel driver version: 16.6

Broadcom driver: 14.4.8.4

Broadcom Management Software: 14.4.11.3

Switches:

2 x Dell PowerConnect 8024F

2 x Dell PowerConnect 6248

The Intel Ethernet Server Adapter X520-2 cards are connected to the PowerConnect 8024F. These switches are "stacked" by use of VRRP. I have tested the Live Migration with VMQ enabled and disabled, with the switches stacked and non-stacked. Same result.

First test was as follows:

1. Created a team of two Intel Ethernet Server Adapter X520-2 in VMLB

2. Created External Virtual Switches in Hyper-V Manager, no parent partition connection

3. Configured all the settings as advised here: http://blogs.technet.com/b/cedward/archive/2011/04/13/hyper-v-networking-optimizations-part-2-of-6-v...

4. Did some tests with a VM between the 3 nodes, all with the same result: after 5-10% of the migration, pings start dropping and the RDP connection to the VM is lost.

Second test was as follows:

1. Single Intel Ethernet Server Adapter X520-2

2. Created External Virtual Switches in Hyper-V Manager, no parent partition connection

3. Configured all the settings as advised here: http://blogs.technet.com/b/cedward/archive/2011/04/13/hyper-v-networking-optimizations-part-2-of-6-v...

4. Did some tests with a VM between the 3 nodes, all with the same result: after 5-10% of the migration, pings start dropping and the RDP connection to the VM is lost.

Third test:

1. Created a team of two Intel Ethernet Server Adapter X520-2 in VMLB

2. Configured all the settings as advised here: http://blogs.technet.com/b/cedward/archive/2011/04/13/hyper-v-networking-optimizations-part-2-of-6-v...

3. Created External Virtual Switches in Hyper-V Manager, no parent partition connection

4. Did some tests with a VM between the 3 nodes, all with the same result: after 5-10% of the migration, pings start dropping and the RDP connection to the VM is lost.

Fourth test:

1. Single Intel Ethernet Server Adapter X520-2

2. Configured all the settings as advised here: http://blogs.technet.com/b/cedward/archive/2011/04/13/hyper-v-networking-optimizations-part-2-of-6-v...

3. Created External Virtual Switches in Hyper-V Manager, no parent partition connection

4. Did some tests with a VM between the 3 nodes, all with the same result: after 5-10% of the migration, pings start dropping and the RDP connection to the VM is lost.

In all tests the Live Migration was successful, BUT the RDP connection to the VM and pings were lost during the Live Migration process. Also, sometimes the RDP connection wasn't re-established after the Live Migration had finished successfully.

When disabling VMQ in the single-NIC or the teamed-NIC configuration, Live Migration was still successful, BUT the biggest difference was that the RDP connection stayed up and there was no ping loss during the Live Migration.

As mentioned, I have a second cluster where I'm experiencing this issue. The main difference is that this environment only uses the Intel Ethernet Server Adapter Quad ET. Same drivers, same OS. The hosts in this cluster are 2 Dell R710s with one X5670 (6 cores) and 64 GB of RAM. Same test setups, same result: disable VMQ and Live Migration is successful in all areas.

Hopefully this helps. If there are any more questions, please reply!

idata
Community Manager

"Is everything required for Live Migration (VM traffic, LM traffic, iSCSI traffic, management traffic) running on the trunk, or do you have separate networks/connections for management and iSCSI?"

 

To answer the questions you asked.

1. I have separate networks for Live Migration, Virtual Machine, iSCSI and Management traffic.

  • Live Migration Traffic: Single Intel Ethernet Server Adapter X520-2

     

  • Virtual Machine Traffic: Two Intel Ethernet Server Adapter X520-2 ports on different cards in different risers in a VMLB team. This team is then used to create a virtual network without a connection to the parent partition.

     

  • iSCSI Traffic: Two Intel Ethernet Server Adapter X520-2 ports on different cards in different risers

     

  • Parent Partition Traffic (management traffic): Two onboard Broadcom 5709C cards in a SFT team (1/4 and 3/4)

     

  • Cluster / Heartbeat Traffic: Two onboard Broadcom 5709C cards in a SFT team (2/4 and 4/4)

     

2. The Virtual Machine Traffic team is configured to be used as a trunk.

3. Yes, I have separate networks/connections for Management and iSCSI.

idata
Community Manager

Hi, I have the same setup as Martius.

idata
Community Manager

Tried the latest driver, 16.7. Unfortunately, no success.

The issue starts when the Live Migration enters the "brownout" part of the migration. With a VM with 16 GB of RAM I'm still losing 5-6 pings when executing a Live Migration.
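To put a number on the outage inside that brownout window, I log the timestamp of each successful ping reply and look at the largest gap between consecutive replies. A rough sketch (the function name and the fixed-interval assumption are mine):

```python
def longest_outage(reply_times, interval=1.0):
    """Given timestamps (in seconds) of successful ping replies from a
    probe sent every `interval` seconds, return the longest gap between
    consecutive replies and the approximate number of probes lost in it."""
    if len(reply_times) < 2:
        return 0.0, 0
    gaps = [b - a for a, b in zip(reply_times, reply_times[1:])]
    worst = max(gaps)
    # a gap of roughly N * interval means about N - 1 probes went unanswered
    lost = max(0, round(worst / interval) - 1)
    return worst, lost
```

With a 16 GB VM this makes it easy to see whether the blackout is the usual ~1 probe or the 5-6 probes reported here.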

idata
Community Manager

Any update on this? The issue is quite a show stopper for VMQ and live migration.

Patrick_K_Intel1
Employee

Sorry for the long delay - I am afraid this issue got lost in the confusion of the holidays and the fact that I was away in Antarctica for a month.

Back now, and I am happy to report that our eval team has reproduced this issue. A defect has been filed and the engineering team has added it to their list of tasks to work on. At this time I do not have an ETA on a release; when I have one I will pass it along.

Make sure to keep poking me to keep me honest though.

idata
Community Manager

That's good news that you have been able to recreate the issue. Hopefully it is not too complex to resolve.

Will keep bumping this occasionally.

Patrick_K_Intel1
Employee

My friends in the driver team tell me that they have a fix in the works for this! Should be released in the near future. Will keep you posted.

idata
Community Manager

Hi, Patrick

Any news about this fix?

I'm experiencing the same issue as Martius.

My situation is:

We are using VMLB and servers with Hyper-V R2 SP1 installed. When we do a Live Migration of a VM with VMQ enabled on the teamed adapters, we completely lose network connection to the VM for some seconds (from 5 to 30, depending on the memory size of the VM).

Also we are using VLANs inside our VMs network adapter properties.

The NICs we're using are dual-port Intel(R) Ethernet Server Adapter I340-T2 (http://ark.intel.com/products/49185/Intel-Ethernet-Server-Adapter-I340-T2), with Intel PROSet v16.8.46.0 installed.

Servers are IBM x3650M3 and they are connected to Cisco 2960G switch.

When I disable VMQ in the network adapter properties, everything (I mean Live Migration) runs as it should, without any disruption in network connectivity, except just one ping lost (sometimes).

VMQ was enabled according to these instructions: http://technet.microsoft.com/ru-ru/library/gg162704(v=ws.10).aspx and http://blogs.technet.com/b/cedward/archive/2011/04/13/hyper-v-networking-optimizations-part-2-of-6-v...

I've even tried to adjust the *VMQVlanFiltering value in the registry on the Hyper-V hosts, as described here: http://msdn.microsoft.com/en-us/library/windows/hardware/hh205410(v=vs.85).aspx, but with no luck; it seems these Intel network adapters don't support this parameter.
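For anyone else poking at that keyword, here is a quick sanity check of what a driver actually exposes. This is only a sketch: it assumes you have already dumped the adapter's registry keywords into a dict, the helper name and return strings are my own, and the "0"/"1" semantics follow the standardized NDIS keyword documentation linked above.

```python
def vmq_vlan_filtering_state(keywords: dict) -> str:
    """Interpret the standardized NDIS *VMQVlanFiltering keyword.

    The value is stored as a string enum: "0" means VLAN filtering for
    VMQ is disabled, "1" means it is enabled. A driver that does not
    expose the keyword at all (like the adapters discussed here) simply
    has no such value under its registry key.
    """
    value = keywords.get("*VMQVlanFiltering")
    if value is None:
        return "not supported"   # keyword absent: driver ignores it
    if str(value) == "1":
        return "enabled"
    if str(value) == "0":
        return "disabled"
    return "unexpected value: {!r}".format(value)
```

If this reports "not supported", hand-editing the registry won't help, which matches what I'm seeing.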

Patrick_K_Intel1
Employee

We are currently testing the fix. Will provide more information when available.

idata
Community Manager

Hi,

Any updates?

Patrick_K_Intel1
Employee

The fix for this is in final testing and will be in the next driver release. I'll post back here when it is available. Thanks for your patience.

idata
Community Manager

Hi Patrick,

Any update on the driver? If it's not ready, can you release a patch?

Thanks,

Edd

Patrick_K_Intel1
Employee

Great timing for the question: the update was just published this morning!

You can get it at:

http://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=21228

Please give it a try and let us know if it fixes your issues!

Best of luck,

Patrick
