
Associating vNICs of Xen virtual machines with the queues of a multi-queue 10 GbE physical NIC (ixgbe driver)

Hello,

I have installed two multi-queue Intel 10 GbE Ethernet cards (82598 controller) in two Linux (SuSE) machines: the first runs Xen for virtualization (kernel 2.6.18.8-xen0) and the second runs native SuSE Linux, and I installed the ixgbe-3.3.9 driver on each. The issue is that when I ping the second card from the first, the traffic is transmitted and received by only one or two queues on each side.

First, I want to set the number of queues (Rx_queue and Tx_queue) on each card. How can I do this?

Second, how can I organize the traffic so that when I transmit from virtual machine number 1 (on the Xen machine), it is received by queue number 1 of the first card (on the Xen machine), and on the other side (the native Linux machine) it is received by queue number 1 of the second card?

Third, I want to connect the queues of the first card (the one in the Xen machine) to the different virtual machines running on that physical machine. Each virtual machine has a vNIC (virtual NIC), and these vNICs must be bound to the card's queues so that each virtual machine has its own queue. How can I do this? (See the command sketches after the statistics below.)

Below are some statistics from both the Xen machine and the native Linux machine.

The Xen machine, NIC statistics:
rx_packets: 291  tx_packets: 300  rx_bytes: 30709  tx_bytes: 33687
rx_errors: 0  tx_errors: 0  rx_dropped: 0  tx_dropped: 0  multicast: 10  collisions: 0
rx_over_errors: 0  rx_crc_errors: 0  rx_frame_errors: 0  rx_fifo_errors: 0  rx_missed_errors: 0
tx_aborted_errors: 0  tx_carrier_errors: 0  tx_fifo_errors: 0  tx_heartbeat_errors: 0
rx_pkts_nic: 291  tx_pkts_nic: 300  rx_bytes_nic: 31873  tx_bytes_nic: 35157
lsc_int: 1  tx_busy: 0  non_eop_descs: 0  broadcast: 2  rx_no_buffer_count: 0
tx_timeout_count: 0  tx_restart_queue: 0  rx_long_length_errors: 0  rx_short_length_errors: 0
tx_flow_control_xon: 0  rx_flow_control_xon: 0  tx_flow_control_xoff: 0  rx_flow_control_xoff: 0
rx_csum_offload_errors: 0  low_latency_interrupt: 0  alloc_rx_page_failed: 0  alloc_rx_buff_failed: 0
rx_no_dma_resources: 0  hw_rsc_aggregated: 0  hw_rsc_flushed: 0  rx_flm: 0
tx_queue_0_packets: 300  tx_queue_0_bytes: 33687
rx_queue_0_packets: 15   rx_queue_0_bytes: 900
rx_queue_1_packets: 0    rx_queue_1_bytes: 0
rx_queue_2_packets: 0    rx_queue_2_bytes: 0
rx_queue_3_packets: 232  rx_queue_3_bytes: 22879
rx_queue_4_packets: 22   rx_queue_4_bytes: 3465
rx_queue_5_packets: 0    rx_queue_5_bytes: 0
rx_queue_6_packets: 22   rx_queue_6_bytes: 3465
rx_queue_7_packets: 0    rx_queue_7_bytes: 0

The native Linux machine, NIC statistics:
rx_packets: 3926  tx_packets: 2377  rx_bytes: 3521826  tx_bytes: 809560
lsc_int: 24  tx_busy: 0  non_eop_descs: 0
rx_errors: 0  tx_errors: 0  rx_dropped: 0  tx_dropped: 0  multicast: 40  broadcast: 4
rx_no_buffer_count: 0  collisions: 0
rx_over_errors: 0  rx_crc_errors: 0  rx_frame_errors: 0  rx_fifo_errors: 0  rx_missed_errors: 0
tx_aborted_errors: 0  tx_carrier_errors: 0  tx_fifo_errors: 0  tx_heartbeat_errors: 0
tx_timeout_count: 0  tx_restart_queue: 0  rx_long_length_errors: 0  rx_short_length_errors: 0
tx_tcp4_seg_ctxt: 54  tx_tcp6_seg_ctxt: 0
tx_flow_control_xon: 0  rx_flow_control_xon: 0  tx_flow_control_xoff: 0  rx_flow_control_xoff: 0
rx_csum_offload_good: 2688  rx_csum_offload_errors: 0  tx_csum_offload_ctxt: 376
rx_header_split: 0  low_latency_interrupt: 0  alloc_rx_page_failed: 0  alloc_rx_buff_failed: 0
lro_aggregated: 2498  lro_flushed: 600
tx_queue_0_packets: 2377  tx_queue_0_bytes: 809560
tx_queue_1_packets: 0     tx_queue_1_bytes: 0
rx_queue_0_packets: 1438  rx_queue_0_bytes: 144078
rx_queue_1_packets: 2488  rx_queue_1_bytes: 3377748

Thanks,
Houssem.
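
For reference, the per-queue counters above are the kind of output ethtool reports, and /proc/interrupts shows how many queue vectors the driver actually brought up. A minimal sketch for checking this on either machine (the interface name eth1 is an assumption; substitute your own):

# Per-queue packet/byte counters (source of the rx_queue_N / tx_queue_N lines above)
ethtool -S eth1

# One MSI-X vector per RX/TX queue pair should show up here
grep eth1 /proc/interrupts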
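On fixing the number of RX/TX queues: with this driver generation the queue count is normally chosen at module load time rather than at runtime, and the out-of-tree ixgbe source documents its load-time parameters in the README bundled with the tarball. The sketch below is only illustrative; the parameter name and value (RSS=8) are assumptions to be verified against the ixgbe-3.3.9 README before use:

# Unload and reload the driver with an explicit queue count
rmmod ixgbe
modprobe ixgbe RSS=8

# Verify the resulting queue layout
grep eth1 /proc/interrupts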