Running a Dell PowerEdge R710 with an i350-T4 controller; v17.4 drivers are loaded. Running Windows Server 2012 with Hyper-V on the host, with several Windows Server 2012 guests. BIOS and drivers are updated to the latest revs. SR-IOV is configured on Hyper-V's virtual switch. I'm using the i350 to connect to an EqualLogic array and want to offload as much of the iSCSI and TCP/IP processing as possible to the NIC. Jumbo frames (9014 bytes) are set on the adapters in the host system.
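For context, the host-side setup described above can be sketched in PowerShell (the switch and adapter names here are placeholders, not the actual names on this box):

```powershell
# Create an SR-IOV-capable external switch on one of the i350 ports.
# "NIC-iSCSI1" and "iSCSI-Switch" are placeholder names.
New-VMSwitch -Name "iSCSI-Switch" -NetAdapterName "NIC-iSCSI1" -EnableIov $true

# Enable 9014-byte jumbo frames on the physical adapter
Set-NetAdapterAdvancedProperty -Name "NIC-iSCSI1" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```

Note that `-EnableIov` can only be set when the switch is created; an existing switch can't be toggled to SR-IOV afterward.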
Here's the problem: If jumbo frames are disabled in the guest, then SR-IOV is listed as active from Hyper-V Manager. As soon as I enable jumbo frames on the adapters within the guest, Hyper-V Manager updates and lists the adapter status as Degraded (SR-IOV disabled).
Thoughts, comments, suggestions...
I need to support jumbo frames within the guest Win 2012 for network efficiency with iSCSI, but I would also like the features of SR-IOV. Are they mutually exclusive, or should they work together?
Hi Robert, thanks for posting to the forum.
So you want to run iSCSI directly from your VM; interesting. It's been a while since I looked at that, but the last time I read about it, it was a discouraged activity for a number of reasons. Technically feasible, however.
As for Jumbo Frames on the I350, should work fine on a VF. I've reached out to my local experts and will get back to you with any results I find.
Just "discovered" something. If I just configure Jumbo Frames from within Windows "Network Connections" for the "Microsoft Hyper-V Network Adapter"(s), SR-IOV deactivates. If I then go into Device Manager and also set Jumbo Frames on the "Intel(R) i350 Virtual Function" adapters, SR-IOV reactivates.
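For anyone hitting the same thing, the equivalent guest-side commands would look roughly like this (run inside the VM; the adapter names below are placeholders, so check Get-NetAdapter for the real ones on your system):

```powershell
# Jumbo frames on the synthetic adapter
# (shows up as "Microsoft Hyper-V Network Adapter"):
Set-NetAdapterAdvancedProperty -Name "Ethernet" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# ...and also on the SR-IOV virtual function
# (shows up as "Intel(R) i350 Virtual Function"):
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```

Setting the keyword on both adapters matches what you did in Network Connections plus Device Manager: the synthetic path and the VF path each carry their own jumbo-frame setting.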
I'm hoping that SR-IOV and the other offload functions of the adapters and Win 2012 will eliminate the huge performance penalty of configuring iSCSI initiators within a VM.
I've used VHD/VHDX files for data disks within VMs, but I can't expand them on a live VM that's part of a Cluster Shared Volume. As for pass-through disks, all those offline disks (with no names) in Server Manager get confusing.
At any rate, thanks for looking into this. It seems to be working now. If you have any other suggestions for increasing performance / decreasing overhead of iSCSI initiators within a VM, please let me know.
Great, glad you found a solution. Configuring Ethernet 'goodies' under Hyper-V can be interesting: the many ways to enable and configure features (such as jumbo frames) do not always perform the same underlying task.
I suspect (it's been a few years since I've played with Hyper-V) that configuring via "Network Connections" works on the internal virtual switching infrastructure of Hyper-V, while Device Manager goes in and twiddles the settings within the Intel I350 directly.
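One way to see which layer a setting actually landed in is to compare what the host reports for SR-IOV before and after a change. A rough sketch, assuming the host has the standard Server 2012 cmdlets and using a placeholder VM name:

```powershell
# On the host: confirm the physical i350 ports still report SR-IOV enabled
Get-NetAdapterSriov | Format-List Name, SriovSupport

# On the host: check the SR-IOV state of each VM network adapter
# ("iSCSI-VM" is a placeholder VM name)
Get-VMNetworkAdapter -VMName "iSCSI-VM" | Format-List VMName, IovWeight, Status
```

If the guest-side change knocked out the VF, the adapter status should reflect the same "Degraded (SR-IOV disabled)" condition that Hyper-V Manager shows.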
Feel free to report back your findings on performance of iSCSI!