
Intel Converged X540-T2 - Not working in Proxmox - Debian

crazyhorse1
Beginner

Hi, I have an Intel X540-T2 PCIe adapter (X540-AT2 controller) in a Supermicro SYS-4028.

 

The card is recognized by the BIOS, but there are no link lights in Proxmox. It is not the cable or the switch (a Cisco Catalyst 3850); I have tested with other cables and other switches.

 

I have also updated the driver to the one provided by Intel (https://ark.intel.com/content/www/us/en/ark/products/58954/intel-ethernet-converged-network-adapter-x540-t2.html), unfortunately with no result:

 

ethtool -i enp10s0f0

 

root@S3ai:~# ethtool -i enp10s0f0

driver: ixgbe

version: 5.21.5

firmware-version: 0x80000389

expansion-rom-version:

bus-info: 0000:0a:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

root@S3ai:~# ethtool -i enp10s0f1

driver: ixgbe

version: 5.21.5

firmware-version: 0x80000389

expansion-rom-version:

bus-info: 0000:0a:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

root@S3ai:~# 
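For reference, these are the usual build and reload steps for the out-of-tree ixgbe driver from Intel (the driver version 5.21.5 above and the "out-of-tree module" line in the kernel log indicate that driver is in use). This is only a sketch; the tarball name and paths depend on the release actually downloaded:

# Build and install the out-of-tree ixgbe module (run from the unpacked source)
tar xf ixgbe-5.21.5.tar.gz
cd ixgbe-5.21.5/src
make install
# Reload the module so the new version is active (all ixgbe links drop briefly)
rmmod ixgbe
modprobe ixgbe
# Regenerate the initramfs so the same module version is also used at boot
update-initramfs -u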

 

I have four interfaces:

root@S3ai:~# lshw -class network -businfo

Bus info Device Class Description

========================================================

pci@0000:0a:00.0 enp10s0f0 network Ethernet Controller 10-Gigabit X540-AT2

pci@0000:0a:00.1 enp10s0f1 network Ethernet Controller 10-Gigabit X540-AT2

pci@0000:81:00.0 enp129s0f0 network Ethernet Controller 10-Gigabit X540-AT2

pci@0000:81:00.1 enp129s0f1 network Ethernet Controller 10-Gigabit X540-AT2

 

enp129s0f0 and enp129s0f1 (the Ethernet ports provided natively with the server) are working in a bond. The bond seems slower than expected (10G instead of 20G), but it works.
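Side note on the bond speed: with bond-mode 802.3ad and a layer3+4 hash, a single flow only ever uses one slave, so any single-stream test tops out at 10G; the aggregate over several parallel flows can approach 20G. A quick way to confirm both slaves actually joined the LACP aggregator, using only the standard Linux bonding interface:

# Both slaves should be listed with "MII Status: up" and "Speed: 10000 Mbps"
cat /proc/net/bonding/bond0
# Seeing more than 10G needs several parallel streams, e.g. with iperf3
# (iperf3 and the <server> address are assumptions, not part of this setup)
iperf3 -c <server> -P 4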

 

enp10s0f0 and enp10s0f1 do not work; they show NO-CARRIER.

 

ip a

 

root@S3ai:~# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host noprefixroute

valid_lft forever preferred_lft forever

2: enp10s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000

link/ether a0:36:9f:1a:45:84 brd ff:ff:ff:ff:ff:ff

3: enp10s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000

link/ether a0:36:9f:1a:45:86 brd ff:ff:ff:ff:ff:ff

4: enp129s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff

5: enp129s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff permaddr 0c:c4:7a:eb:35:a3

6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff

7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff

inet 10.30.3.3/24 scope global bond0.1001

valid_lft forever preferred_lft forever

inet6 fe80::ec4:7aff:feeb:35a2/64 scope link

valid_lft forever preferred_lft forever

8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff

inet 10.30.1.3/24 scope global vmbr0

valid_lft forever preferred_lft forever

inet6 fe80::ec4:7aff:feeb:35a2/64 scope link

valid_lft forever preferred_lft forever

9: bond0.1000@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff

10: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000

link/ether 0c:c4:7a:eb:35:a2 brd ff:ff:ff:ff:ff:ff

inet6 fe80::ec4:7aff:feeb:35a2/64 scope link

valid_lft forever preferred_lft forever
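A minimal set of checks on the dead ports, using only standard iproute2/ethtool commands (nothing Proxmox-specific is assumed):

# Make sure the port is administratively up
ip link set enp10s0f0 up
# Show supported/advertised link modes, auto-negotiation state and "Link detected"
ethtool enp10s0f0
# Blink the port LED for 10 seconds to confirm the cable really is in this port
ethtool -p enp10s0f0 10
# Run the driver's self-test (the offline test takes the link down while it runs)
ethtool -t enp10s0f0 offline
# Follow the kernel log for link-up/down messages while re-seating the cable
dmesg -w | grep enp10s0f0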

 

/etc/network/interfaces

 

root@S3ai:~# cat /etc/network/interfaces

# network interface settings; autogenerated

# Please do NOT modify this file directly, unless you know what

# you're doing.

#

# If you want to manage parts of the network configuration manually,

# please utilize the 'source' or 'source-directory' directives to do

# so.

# PVE will preserve these directives, but will NOT read its network

# configuration from sourced files, so do not attempt to move any of

# the PVE managed interfaces into external files!



auto lo

iface lo inet loopback



auto enp129s0f0

iface enp129s0f0 inet manual

#10G



auto enp129s0f1

iface enp129s0f1 inet manual

#10G



auto enp10s0f0

iface enp10s0f0 inet manual

#10G network card not connected



auto enp10s0f1

iface enp10s0f1 inet manual

#10G network card not connected



auto bond0

iface bond0 inet manual

bond-slaves enp129s0f0 enp129s0f1

bond-miimon 100

bond-mode 802.3ad

bond-xmit-hash-policy layer3+4

bond-downdelay 200

bond-updelay 200

#10G aggregation



auto bond0.1000

iface bond0.1000 inet manual

#cluster public network - 10.30.2.3/24



auto bond0.1001

iface bond0.1001 inet static

address 10.30.3.3/24

#cluster private network



auto vmbr0

iface vmbr0 inet static

address 10.30.1.3/24

gateway 10.30.1.1

bridge-ports bond0

bridge-stp off

bridge-fd 0

#PX mgmt



auto vmbr1

iface vmbr1 inet manual

bridge-ports bond0.1000

bridge-stp off

bridge-fd 0

bridge-vlan-aware yes

bridge-vids 2-4094



source /etc/network/interfaces.d/*
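To rule out the interfaces file and the bridge/bond configuration, the unused ports can also be tested in isolation with a throwaway address (the addresses below are placeholders, not part of this setup):

# Temporary address for a back-to-back test against a directly attached host
ip addr add 192.168.250.2/24 dev enp10s0f0
ip link set enp10s0f0 up
ping -c 3 192.168.250.1
# Remove the test address again afterwards
ip addr del 192.168.250.2/24 dev enp10s0f0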

 

Kernel log:

 

 

[ 2.248907] ixgbe: loading out-of-tree module taints kernel.

[ 2.249219] ixgbe: module verification failed: signature and/or required key missing - tainting kernel

[ 2.282472] ixgbe 0000:0a:00.0 0000:0a:00.0 (uninitialized): ixgbe_check_options: FCoE Offload feature enabled

[ 2.449199] ixgbe 0000:0a:00.0: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0

[ 2.497404] ixgbe 0000:0a:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)

[ 2.530923] ixgbe 0000:0a:00.0 eth0: MAC: 3, PHY: 3, PBA No: G45270-003

[ 2.531217] ixgbe 0000:0a:00.0: a0:36:9f:1a:45:84

[ 2.531453] ixgbe 0000:0a:00.0 eth0: Enabled Features: RxQ: 63 TxQ: 63 FdirHash

[ 2.537877] ixgbe 0000:0a:00.0 eth0: Intel(R) 10 Gigabit Network Connection

[ 2.543470] ixgbe 0000:0a:00.1 0000:0a:00.1 (uninitialized): ixgbe_check_options: FCoE Offload feature enabled

[ 2.715932] ixgbe 0000:0a:00.1: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0

[ 2.765913] ixgbe 0000:0a:00.1: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)

[ 2.798985] ixgbe 0000:0a:00.1 eth1: MAC: 3, PHY: 3, PBA No: G45270-003

[ 2.799231] ixgbe 0000:0a:00.1: a0:36:9f:1a:45:86

[ 2.799472] ixgbe 0000:0a:00.1 eth1: Enabled Features: RxQ: 63 TxQ: 63 FdirHash

[ 2.805854] ixgbe 0000:0a:00.1 eth1: Intel(R) 10 Gigabit Network Connection

[ 2.810660] ixgbe 0000:81:00.0 0000:81:00.0 (uninitialized): ixgbe_check_options: FCoE Offload feature enabled

[ 2.975917] ixgbe 0000:81:00.0: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0

[ 3.025016] ixgbe 0000:81:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:80:00.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)

[ 3.033752] ixgbe 0000:81:00.0 eth2: MAC: 3, PHY: 3, PBA No: 030C00-000

[ 3.034134] ixgbe 0000:81:00.0: 0c:c4:7a:eb:35:a2

[ 3.034405] ixgbe 0000:81:00.0 eth2: Enabled Features: RxQ: 63 TxQ: 63 FdirHash

[ 3.040850] ixgbe 0000:81:00.0 eth2: Intel(R) 10 Gigabit Network Connection

[ 3.044725] ixgbe 0000:81:00.1 0000:81:00.1 (uninitialized): ixgbe_check_options: FCoE Offload feature enabled

[ 3.210429] ixgbe 0000:81:00.1: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0

[ 3.260919] ixgbe 0000:81:00.1: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:80:00.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)

[ 3.269678] ixgbe 0000:81:00.1 eth3: MAC: 3, PHY: 3, PBA No: 030C00-000

[ 3.270207] ixgbe 0000:81:00.1: 0c:c4:7a:eb:35:a3

[ 3.270500] ixgbe 0000:81:00.1 eth3: Enabled Features: RxQ: 63 TxQ: 63 FdirHash

[ 3.276929] ixgbe 0000:81:00.1 eth3: Intel(R) 10 Gigabit Network Connection

[ 3.292756] ixgbe 0000:0a:00.0 enp10s0f0: renamed from eth0

[ 3.299150] ixgbe 0000:0a:00.1 enp10s0f1: renamed from eth1

[ 3.333744] ixgbe 0000:81:00.0 enp129s0f0: renamed from eth2

[ 3.347071] ixgbe 0000:81:00.1 enp129s0f1: renamed from eth3

[ 9.388859] ixgbe 0000:0a:00.0: registered PHC device on enp10s0f0

[ 9.779075] ixgbe 0000:0a:00.1: registered PHC device on enp10s0f1

[ 10.185021] ixgbe 0000:81:00.0: registered PHC device on enp129s0f0

[ 10.555598] ixgbe 0000:81:00.1: registered PHC device on enp129s0f1

[ 10.675605] ixgbe 0000:81:00.0 enp129s0f0: entered allmulticast mode

[ 10.677409] ixgbe 0000:81:00.1 enp129s0f1: entered allmulticast mode

[ 10.680855] ixgbe 0000:81:00.0 enp129s0f0: entered promiscuous mode

[ 10.682723] ixgbe 0000:81:00.1 enp129s0f1: entered promiscuous mode

[ 16.452049] ixgbe 0000:81:00.0 enp129s0f0: NIC Link is Up 10 Gbps, Flow Control: None

[ 16.836114] ixgbe 0000:81:00.1 enp129s0f1: NIC Link is Up 10 Gbps, Flow Control: None
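One detail from the log: both ports of the add-in card report "16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:1c.0", i.e. the PCIe path to the card only negotiated x4. That caps throughput but does not explain the missing carrier; still, it is easy to confirm what the card and the upstream bridge negotiated (standard lspci, run as root):

# Maximum (LnkCap) vs. negotiated (LnkSta) PCIe link width/speed for the card and its bridge
lspci -vv -s 0a:00.0 | grep -E 'LnkCap:|LnkSta:'
lspci -vv -s 00:1c.0 | grep -E 'LnkCap:|LnkSta:'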
Meghak
Employee

Hi Team,

 

Thank you for posting in the Intel Community.

 

Please accept our sincere apologies for the delay in response.

 

We have received your concern and would like to assure you that assisting you is our top priority.

 

Could you please let us know if the issue has been resolved, or if you require further assistance from us?

 

Your prompt response will greatly help us in diagnosing and resolving the issue as quickly as possible.

 

We appreciate your patience and understanding, and we look forward to hearing from you soon.

 

Thank you for using Intel products and services.

 

Best Regards,

Megha K

Intel Customer Support Technician


Meghak
Employee

Hi Team,

 

Thank you for posting in the Intel Community.

 

This is the first follow-up regarding the issue you reported to us.

 

Please let us know whether the issue has been resolved, or whether you require further assistance from us.

 

We await your response to assist you further.

 

Thank you for using Intel products and services.

 

Best Regards,

Megha K

Intel Customer Support Technician


Meghak
Employee

Hi Team,

 

Thank you for posting in the Intel Community.

 

This is the second follow-up regarding the reported issue.

 

Please let us know whether the issue has been resolved, or whether you require further assistance from us.

 

We await your response to assist you further.

 

Thank you for using Intel products and services.

 

Best regards,

Megha K

Intel Customer Support Technician

 


Akshaya1
Employee

Hi crazyhorse1,


Thank you for contacting Intel. 

 

This is the third follow-up regarding the reported issue. We're committed to ensuring a swift resolution and would greatly appreciate any updates or additional information you can provide. 

 

As we have not heard back from you, we'll assume the issue has been resolved and will proceed to close the case. 

 

Please feel free to respond to this email at your earliest convenience. 

 

Regards,

Akshaya 

Intel Customer Support Technician


