Ethernet Products
Determine ramifications of Intel® Ethernet products and technologies

VF does not function if guest domain is halted and started; reboot is fine!

idata
Employee

Dear Community ,

We are using the latest versions of ixgbe, ixgbevf, libvirt and KVM.

We have assigned a VF to a VM via direct PCI assignment. When the guest starts for the first time we are able to ping to and from the guest, and this persists across VM reboots. However, when we 'halt' the guest VM and start it again, we are no longer able to ping to or from the VM. This problem is fixed if we restart the VM after reloading the ixgbe kernel module in the host and bringing the interfaces down and up again (ifdown/ifup). But that 'fix' is not repeatable; it does not work every time. If I reboot the host OS, however, the problem goes away, and that fix is repeatable.
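For reference, the module-reload workaround on the host looks roughly like this (a sketch only; the PF interface name and the max_vfs value are illustrative, not our exact setup):

rmmod ixgbe                      # unload the PF driver (the VFs disappear)
modprobe ixgbe max_vfs=8         # reload with VFs enabled; VF count illustrative
ifdown eth2 ; ifup eth2          # bring the PF interface back up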

I have also tried lower versions of the various components, but to no avail.

This is my first post and I am not sure what information should be included.

Any help would be greatly appreciated.

regds

Rajesh Kumar Mallah.

10 Replies
Patrick_K_Intel1
Employee

Hi Rajesh and welcome to the community!

When you use SR-IOV and assign a VF to a VM, that VM is accessing physical hardware resources. The VF driver in the VM initializes those resources and places them into a known state for use.

When you pause a VM, which is of course a way of taking a 'snapshot' of the VM, it includes a snapshot of all of its hardware configuration, including all the registers and such in the VF.

When that VM is then brought back up and resumed, it does not know it has been paused - it assumes everything is as it was before and tries to continue running. This includes the VF: the VM assumes it has the exact same VF that it had before, and that all the registers in the VF are in the exact same state.

This is almost never the case - once you pause a VM, the VF is in an unknown state and will have to be re-initialized. You are also not guaranteed to get back the same VF that you had before, and even if you do, its state will almost certainly have changed.

This is a hypervisor level issue, one that I know many are working on, though I don't have any details I'm allowed to share at this time.

A couple of possible solutions: you could somehow simulate a PCI hot-plug event for the VF when you resume your VM, so that the VM re-enumerates the VF and reloads the driver. Another possibility is to have an emulated Ethernet device that you fail over to before you pause the VM; then, when you resume the VM, the emulated Ethernet device keeps working for you while you perform a PCI hot-plug event for the VF as mentioned previously.
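For example, inside the guest an active-backup bond over the VF and an emulated NIC gives you that kind of failover. A rough sketch only (interface names and the enslaving method are illustrative, not tied to any particular distribution):

modprobe bonding mode=active-backup miimon=100 primary=eth0
ip link set eth0 down ; ip link set eth1 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves    # eth0 = the VF (primary)
echo +eth1 > /sys/class/net/bond0/bonding/slaves    # eth1 = the emulated device
ip link set bond0 up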

SR-IOV is a cool technology that gives you improved performance, but it also comes with additional complications and challenges that are still being worked on by the hypervisor vendors.

Hope that helps,

Patrick

 

idata
Employee

Dear Patrick ,

Thanks for the response. Based on my understanding of your suggestions I am making some changes, in particular checking whether hot-plugging the PCI device into the VM via the hypervisor (KVM) triggers proper initialisation of the VF.

regds

mallah.

Patrick_K_Intel1
Employee

Great! If you have a chance, please come back and let us know how it worked out for you.

- Patrick

idata
Employee

Dear Patrick ,

Following is what did *not* work for me:

1) Compiled a recent kernel inside the VM with PCI hotplug support and the bundled ixgbevf driver built in.

(BTW, ixgbevf 2.6.2 does not compile against vanilla 3.5.1.)

2) Used the hypervisor command

/usr/local/libvirt/bin/virsh attach-device new-installed ~/nic.xml

to hot-plug the PCI device into the running VM.

nic.xml contained the <hostdev> definition for the VF (the XML itself did not survive posting).
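A minimal definition of that sort looks roughly like this (the PCI domain/bus/slot/function values below are illustrative, not our actual VF address):

cat > ~/nic.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x10' function='0x0'/>
  </source>
</hostdev>
EOF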

3) Tailed syslog in the VM, which nicely indicated:

3.1 the hot-plug event

3.2 the handling of the new PCI device by ixgbevf

3.3 eth0 coming up and the link-ready message

3.4 lspci also showing the virtual PCI device

However, connectivity was still not successful.

In your original post I did not quite understand why you focused on pausing and resuming the VM; the problem is with shutting down the VM instance. I have read it many times and still cannot connect all the points. Thanks for your support so far, and we look forward to further hints from you. In spite of all this, I feel SR-IOV is really cool!

Hmm: I just realized I do not have permission to upload the two screenshots that I made from virt-manager! (Can I have permission to attach them, please?)


idata
Employee

What is counter-intuitive is that in spite of all the good messages in the host and guest kernel logs, and in spite of all the OS tools indicating that everything is fine with the VFs, they are still not functional. Aren't the messages misleading, then?

idata
Employee

OK!

Here is what works: if we detach the VF from the guest before it is shut down, VF connectivity works.

I detached the NIC using the

/usr/local/libvirt/bin/virsh detach-device new-installed ~/nic.xml

command and then halted the guest. After the guest was started again I attached the NIC using the command

/usr/local/libvirt/bin/virsh attach-device new-installed ~/nic.xml

(The network in the guest was configured to allow hot-plug of Ethernet interfaces.) It came up nicely and I was able to ping to and from the guest. Hence my 'problem' is partly solved, as long as I remember to detach the device from the guest before shutting it down.
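In script form, the manual sequence is roughly the following (an untested sketch; domain name and XML path as above):

# stopping the guest: detach the VF first, then shut down
/usr/local/libvirt/bin/virsh detach-device new-installed ~/nic.xml
/usr/local/libvirt/bin/virsh shutdown new-installed

# starting the guest: boot it, then re-attach the VF
/usr/local/libvirt/bin/virsh start new-installed
/usr/local/libvirt/bin/virsh attach-device new-installed ~/nic.xml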

If anyone has a more elegant way of doing this, or any way to automate it, I would be keen to hear.

regds

mallah.

idata
Employee

Now exploring libvirt callbacks to automate detaching the VF prior to domain shutdown or destroy.

Patrick_K_Intel1
Employee

Thanx much for keeping us up to date on your progress!

idata
Employee

Yep! That is the beauty of the community!

BTW, we also discovered the hooks folder under libvirt's etc directory, so we are trying to put custom scripts there. These scripts (hooks) are called by libvirt with the appropriate arguments. My colleague is working on it and we shall post again once something interesting is discovered.
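As a starting point, the skeleton we are experimenting with looks roughly like this (a sketch only; libvirt invokes the qemu hook with the domain name and operation as arguments, and its documentation warns that a hook must not call back into libvirtd, so the actual detach may still have to happen outside the hook):

#!/bin/sh
# <libvirt etc dir>/hooks/qemu -- called as: qemu <domain> <operation> <sub-op> <extra>
DOMAIN="$1"
OP="$2"
case "$OP" in
    release)
        # the domain has already been stopped at this point; for now we only
        # log it, since the detach really needs to happen before shutdown
        echo "$(date) domain $DOMAIN released" >> /var/log/libvirt-vf-hook.log
        ;;
esac
exit 0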

regds

mallah.

idata
Employee

Dear All ,

I want to report back that the Linux VFIO driver cleanly solves the original problem. For this, a recent qemu (1.2+) that recognizes the vfio-pci device type is required. The VFIO driver has been in the mainline kernel since version 3.6 (https://lkml.org/lkml/2012/7/25/288 LKML: Alex Williamson: [GIT PULL (PATCH 0/4)] VFIO driver for v3.6).

A tutorial is available here: https://docs.google.com/file/d/0B4Em50Bac2U7dmFKN3JZVjZjOG8/edit KVM-Forum-2012-VFIO.pdf - Google Drive

However, I have yet to test pass-through of SR-IOV VFs; so far, PF pass-through has been tested successfully.
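For anyone wanting to try it, the basic flow we used is roughly the following (a sketch; the PCI address and the vendor/device IDs are illustrative and should be taken from lspci -nn on your own host):

modprobe vfio-pci
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind      # address illustrative
echo 8086 10fb > /sys/bus/pci/drivers/vfio-pci/new_id                    # vendor/device IDs from lspci -nn
qemu-system-x86_64 ... -device vfio-pci,host=01:00.0 ...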

thanks everyone.

regds

Rajesh Kumar Mallah.
