
E610-XT4 Hyper-V 2025: Queue Pairs limited to 4 per vPort — Event 280/285

ReubenTishkoff
Hi,

I’m using Intel E610-XT4 OCP 3.0 NICs (8 units total) in a Windows Server 2025 Hyper-V host with a SET team virtual switch. The driver and firmware are from the official Intel Ethernet Adapter Complete Driver Pack 30.1.1; the NVM version is 1.33 (1.21).



Problem:

Hyper-V constantly logs Event ID 280, saying:

QueuePairs adjusted from requested 16 to actual 4.

This happens for every VM, regardless of the IOVQueuePairsRequested setting, even if I force it to 4.

Additionally, I get Event ID 285 (OID_GEN_STATISTICS timeout) on the host’s vNIC.
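
For reference, I’m pulling both warnings with roughly the following query (the channel name below is the standard Hyper-V VmSwitch operational log; adjust it if your build logs these events elsewhere):

# Collect the recent queue-pair (280) and OID timeout (285) warnings from the vSwitch log
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Hyper-V-VmSwitch-Operational'
    Id      = 280, 285
} -MaxEvents 20 | Format-List TimeCreated, Id, Message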



What I’ve tried (exact commands below):
• Applied Set-VMNetworkAdapter -IOVQueuePairsRequested 4 to all VMs
• Disabled RSC on vSwitch
• Firmware and drivers fully up to date
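
For reference, these were roughly the commands used (the vSwitch name is a placeholder for my SET team switch):

# Request 4 queue pairs on every VM network adapter (matches the "actual" value from Event 280)
Get-VM | Get-VMNetworkAdapter | Set-VMNetworkAdapter -IOVQueuePairsRequested 4

# Disable software RSC on the SET vSwitch ("SETswitch" is a placeholder name)
Set-VMSwitch -Name "SETswitch" -EnableSoftwareRsc $false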

Still, the hardware appears to limit Queue Pairs per vPort to 4, while similar Intel NICs like X710 do not.



Questions:
1. Is this limitation expected on the E610-XT4 under Hyper-V 2022/2025?
2. Will there be a firmware or driver update to allow more QPs per vPort?
3. Is this card certified for Hyper-V SET scenarios?

I’ve attached screenshots of NVM version and both warnings. Thanks in advance!

Best regards,
vish1
Employee

Hello ReubenTishkoff,


Greetings!!


Thank you for bringing this to our attention.


This is not the expected behavior for a modern 10GbE NIC in a Hyper-V Switch Embedded Teaming (SET) environment. The Intel® E610-XT4 adapter supports advanced features like SR-IOV and VMQ; similar Intel NICs such as the X710 typically allow for more queue pairs per vPort.


Based on our findings, the limitation appears to be enforced by the Intel driver at the NDIS layer rather than by Hyper-V itself. While Hyper-V requests 16 queue pairs, the Intel driver restricts this to only 4 per vPort when SET is enabled.
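
As a quick way to confirm what the driver actually grants, you can list the vPorts created on the SET team members; the queue pair count in that output should reflect the clamp described above (the adapter name below is a placeholder, so please substitute your own):

# Identify the physical adapters backing the SET team
Get-VMSwitchTeam | Select-Object Name, NetAdapterInterfaceDescription

# List the vPorts (and their queue pair assignments) on one team member
# ("SLOT 4 Port 1" is a placeholder; use Get-NetAdapter to find the exact name)
Get-NetAdapterVPort -Name "SLOT 4 Port 1"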

As of now, there is no official fix or update from Intel addressing this limitation. Even with the latest driver pack (30.1.1) and NVM firmware (1.33), users continue to experience the same behavior.


We recommend continuing to monitor Intel’s official support channels and driver/firmware release notes for any updates or specific fixes related to Hyper-V SET or VMQ/queue pair handling.


If SET functionality is critical to your environment, we suggest validating SET certification for the E610-XT4 directly with Intel. Alternatively, you may consider using a different Intel NIC model such as the X710, which is known to offer better compatibility with Hyper-V SET configurations.


Please let us know if you need further assistance or guidance.


Best Regards,

Vishal Shet P

Intel Customer Support Technician


ReubenTishkoff
Thank you, Vishal, for the detailed reply and clarification.

We do have Intel X710 NICs deployed successfully in previous-generation Dell servers, such as the PowerEdge R660 and R7525, where they have worked reliably with Hyper-V and SET. However, in the newer Dell 17th generation platforms like the R770 and R7725, the X710 is no longer available as a factory option — only the E610 series is offered.

Intel’s own product brief for the E610 family describes it as having “comprehensive operating system and hypervisor support” (see source https://www.intel.com/content/dam/www/central-libraries/us/en/documents/intel-ethernet-800-700-600-series-brief.pdf , page 3), which is why this limitation (4 queue pairs per vPort in SET) was unexpected. We understand this may be due to driver-level enforcement at the NDIS layer, as you mentioned.

Given that this issue is likely to affect many more users across different server vendors — not just Dell — as the E610 series becomes the default 10GbE OCP NIC in new platforms, would it be possible for you to escalate this internally or advise us on how best to proceed?

We would appreciate any further guidance you can provide.

Best regards,
Akshaya1
Employee

Hello ReubenTishkoff,


Thank you for your response.


Kindly allow some time to review the details. I will get back to you with an update shortly.


Regards,

Akshaya

Intel Customer Support Technician

 


ReubenTishkoff

Hi again,

 

I wanted to provide some follow-up findings that may be useful for diagnosing the issue further.

 

In addition to the queue pair limitation (Event ID 280), I’m now also seeing Event ID 285 repeatedly in Hyper-V-VmSwitch logs, involving both:

  • OID_RECEIVE_FILTER_MOVE_FILTER (66096)

  • UNSPECIFIED_OPERATION_CODE (4) on RNDIS for specific VMs

V-Switch operation OID_RECEIVE_FILTER_MOVE_FILTER (66096) took too long to complete. 
Queued time: 1844674402169740 ms. Expected execution time less than 0 ms.

 

Operation Type: RNDIS. Operation took too long to complete.
VmName: TEST-VM

 

These seem to indicate that the driver is either returning malformed values or timing out when interacting with the virtual switch, especially during receive filter transitions or queue assignments.

This further supports the idea that the driver is not yet fully compatible with Hyper-V SET and VMMQ handling in Windows Server 2025.

I’m happy to provide additional logs or VM configurations if needed.

Please let me know if this can be escalated internally or if Intel needs help reproducing the issue.

Best regards,

vish1
Employee

Hello ReubenTishkoff,


Greetings!!


The adjustment of the queue pair count to 4 is expected behavior, based on how queuing in a virtualized environment works with the Intel® E610 network adapter.


For more technical details, we recommend referring to Section 7.1.3.2 of the E610 Datasheet, which provides an explanation of this behavior and its relevance in virtualized scenarios.

https://www.intel.com/content/www/us/en/content-details/743371/intel-ethernet-controller-e610-datasheet.html


Best Regards,

Vishal Shet P

Intel Customer Support Technician


ReubenTishkoff

Hi Vishal,

Thank you again for the detailed responses and for referencing section 7.1.3.2 of the E610 datasheet.

After reviewing that section in depth, I must respectfully say that the documentation does not appear to support a hard-coded limitation of 4 queue pairs per vPort in virtualized environments.

In fact:

  • The hardware supports 128 Rx queues.
  • Several queue pool configurations are described in the datasheet, including 32 pools × 4 RSS queues — which should allow more than 4 QPs per VM or vNIC.
  • The driver initially advertises 16 queue pairs, and only later downgrades to 4 at runtime — which causes Event ID 280 in Hyper-V.

If the actual design of the E610 controller were limited to 4 QPs per vPort, it would be expected for the driver to report that limitation from the start — and Hyper-V would not attempt to request 16. The current behavior suggests a driver-level policy or implementation gap, rather than a hardware constraint.
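
To make the mismatch concrete, this is roughly how I’m comparing the adapter-level VMQ advertisement with the queues actually allocated at runtime (the adapter name is a placeholder, and the property names follow the standard NetAdapter VMQ cmdlets, so they may differ slightly by driver build):

# What the adapter advertises at the VMQ level ("SLOT 6 Port 1" is a placeholder name)
Get-NetAdapterVmq -Name "SLOT 6 Port 1" | Format-List Name, Enabled, NumberOfReceiveQueues

# Which VMQ queues are actually allocated and which VM each one is mapped to
Get-NetAdapterVmqQueue -Name "SLOT 6 Port 1" | Format-Table QueueID, MacAddress, VmFriendlyName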

I believe this is worth a closer look, especially as this adapter is now shipping in default configurations in Dell PowerEdge R770/R7725 and likely many other platforms.

We really appreciate your involvement. Your continued follow-up is invaluable to understanding how this NIC performs in real-world virtualization deployments, and I hope this can be escalated internally.

Thank you again

Amina_Sadiya
Employee

Hello ReubenTishkoff,

 

Thank you for your response. Kindly allow me some time to check this internally and get back to you with an update shortly.

 

Regards,

Amina

Intel Customer Support Technician


PnoT
Beginner

I am also observing this behavior in Server 2025, using Hyper-V, with SET switches and the Hyper-V Port algorithm, and Intel(R) Ethernet 25G 2P E810-XXV adapters. As far as I've seen, these are platinum-level certified NICs for all things Hyper-V and Server 2025.

The 280 events were resolved by setting the Queue Pairs from 4 to 16, as this was an option on my adapters; however, I am still seeing the 285 in the Hyper-V-VmSwitch event log.
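
For anyone wanting to make the same change, I did it roughly like this; the display name below is illustrative, so check your adapter's advanced property list for the exact wording exposed by the E810 driver:

# Find the queue-pairs-per-vPort setting exposed by the driver
# ("SLOT 2 Port 1" is a placeholder adapter name)
Get-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" | Format-Table DisplayName, DisplayValue, RegistryKeyword

# Raise it from 4 to 16 once the exact DisplayName is known (the name below is illustrative)
Set-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" -DisplayName "Max Number of Queue Pairs per vPort" -DisplayValue "16"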

 

 

ReubenTishkoff

Any update regarding the queue pair limitation (only 4 per vPort) on the E610-XT4 with Hyper-V 2025 + SET? Still seeing Event ID 10403.

Is a fix or driver update expected?

 

Thanks,

IntelSupport
Community Manager

Hello ReubenTishkoff,


Please allow us some additional time as we are currently checking on this internally.


We will get back to you as soon as possible.


Regards,

Shankith K P

Intel Customer Support Technician




MartinBlanchette
Beginner

Any update regarding the queue pair limitation?

The adapter's default device setting is 16 queue pairs, yet Hyper-V still limits each vPort to 4 (see attached image, intel.png).

IntelSupport
Community Manager

Hi ReubenTishkoff,


Greetings for the day!


As checked with our internal team, we have confirmation that the E610 supports up to 4 queues in VMMQ mode and 2 queues in SR-IOV mode.


Regards

Jerome

Intel Customer Support Technician


ReubenTishkoff

Thank you, Jerome,

I reviewed §7.1.3.2; it explains queue pools (e.g., 32×4, 64×2) but doesn’t state a fixed limit of 4 QPs per vPort. In practice, Hyper-V requests 16 and the driver then downgrades to 4 at runtime, producing Event ID 280/285. If 4 is by design, why does the driver expose 16 to NDIS instead of the actual limit? That mismatch feels like a driver policy/implementation issue rather than a hardware constraint.

Also, this is a step down from X710, where Hyper-V typically operated with ~8 QPs/vPort.
Could you confirm the intended per-vPort limit for the E610 under SET, and whether a driver/NVM update will report the true capability and/or raise the limit?

Amina_Sadiya
Employee

Hi ReubenTishkoff,


Thank you for your response. Kindly allow me some time to check this internally and get back to you with an update.


Best regards,

Amina

Intel Customer Support Technician



ReubenTishkoff

Hi Amina,

 

Thanks for the follow-up. We’ve been stuck on this for quite a while and it’s starting to impact our deployment plans on Dell 17th-gen servers (R770/R7725) with Windows Server 2025, Hyper-V + SET.

 

Could you please help with the following, or escalate with an ETA?

1. Confirm the intended per-vPort QP limits for the E610 under Hyper-V SET (VMMQ and SR-IOV).

2. Explain why the driver advertises 16 QPs to NDIS and then clamps to 4 (Event 280/285). Is there a way to make the driver expose the real limit to avoid the mismatch?

3. Provide a driver/NVM roadmap or hotfix that (a) reports the true capability, and (b) raises the limit if possible.

We’ve also tested Intel X710 (certified for Hyper-V) and still see issues in practice, so we’re evaluating Broadcom 57412 Quad-Port 10GBase-T OCP 3.0 as a fallback. We’d much prefer to stay on Intel if there’s a path forward.

 

Happy to share VMSwitch/NDIS traces or run any engineering builds. If you can open a formal case and share the SR number, that would be great.

 

Thank you!

IntelSupport
Community Manager

Hi ReubenTishkoff,


Greetings for the day!


Apologies for the delay. We are still checking with our engineering team on this issue; kindly allow us some more time to provide you with an update.


Thanks for your understanding.


Regards

Jerome

Intel Customer Support Technician


ReubenTishkoff

Hi Intel team,

Quick update on this issue: I’ve updated the adapter to the 30.4 driver package (E610 driver) and the 1.48 NVM/firmware. After clean reinstall and reboots, I’m still getting the same warnings in Hyper‑V:

Event 280 — “VMS Utilization Plan Vport QueuePairs adjusted from requested number (16) to actual (4). Reason: The number requested exceeds the maximum QPs per VPort supported by physical NIC.”

Event 285 — “V‑Switch operation OID_GEN_STATISTICS took too long to complete … NicFriendlyName: <my vSwitch>.”

VMMQ is enabled; if I set VmmqQueuePairs to 16 on the vNICs, it’s always reduced back to 4 and Event 280 is logged. Event 285 appears periodically on the management vSwitch.
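
For completeness, this is roughly what I’m setting and how I read the result back afterwards (the VM and host vNIC names are placeholders; the parameters are the standard Set-VMNetworkAdapter VMMQ options):

# Request 16 VMMQ queue pairs on a guest vNIC ("TEST-VM" is a placeholder)
Set-VMNetworkAdapter -VMName "TEST-VM" -VmmqEnabled $true -VmmqQueuePairs 16

# Same request on the host management vNIC attached to the SET switch ("Management" is a placeholder)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -VmmqEnabled $true -VmmqQueuePairs 16

# Read back what was actually applied; on the E610 this always comes back as 4
Get-VMNetworkAdapter -VMName "TEST-VM" | Format-List VmmqEnabled, VmmqQueuePairs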

Could you confirm whether 4 QPs per vPort is currently an intentional limit for E610 on Windows Server 2025 / Hyper‑V, or if there’s a fix/workaround in a newer driver/NVM? Happy to share logs and exact build numbers if needed.

 

Thanks!
