
Intel SSD DC P4510 Series, reset controller

Pete_H_
New Contributor

I have 6 P4510 in a RAID 6 array. Under seemingly random circumstances, I am getting kernel messages such as:

Dec 18 01:34:26 dimebox kernel: nvme nvme0: I/O 55 QID 52 timeout, reset controller

The issue seems to be triggered more frequently during periods of high I/O with many simultaneous reads and writes. The machine has yet to fail outright, but while the controller is being reset, all I/O operations stall.
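One knob I am aware of (a sketch only; I have not confirmed it helps, and the default may differ by kernel) is the NVMe driver's I/O timeout, exposed as the `nvme_core.io_timeout` module parameter. Raising it does not fix whatever is stalling the drive, but it may keep the driver from escalating to a controller reset while investigating:

```shell
# Current timeout in seconds (30 by default, as I understand it)
cat /sys/module/nvme_core/parameters/io_timeout

# To persist a larger value, append to GRUB_CMDLINE_LINUX in
# /etc/default/grub:
#   nvme_core.io_timeout=255
# then regenerate the grub config (EFI layout assumed) and reboot:
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
```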

The operating system pertinent information is:

[root@dimebox ~]# cat /etc/redhat-release ; uname -a

CentOS Linux release 7.6.1810 (Core)

Linux dimebox.stata.com 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

[root@dimebox ~]# df -hl | grep dev

/dev/md127   7.0T 1.4T 5.3T 21% /

devtmpfs     63G   0  63G  0% /dev

tmpfs      63G 4.0K  63G  1% /dev/shm

/dev/md125   249M  12M 238M  5% /boot/efi

[root@dimebox ~]# cat /proc/mdstat 

Personalities : [raid6] [raid5] [raid4] [raid1] 

md125 : active raid1 nvme5n1p3[5] nvme2n1p3[2] nvme4n1p3[4] nvme3n1p3[3] nvme0n1p3[0] nvme1n1p3[1]

   254912 blocks super 1.0 [6/6] [UUUUUU]

   bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid6 nvme3n1p2[3] nvme1n1p2[1] nvme5n1p2[5] nvme4n1p2[4] nvme0n1p2[0] nvme2n1p2[2]

   16822272 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]

    

md127 : active raid6 nvme1n1p1[1] nvme3n1p1[3] nvme5n1p1[5] nvme4n1p1[4] nvme0n1p1[0] nvme2n1p1[2]

   7516188672 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]

   bitmap: 8/14 pages [32KB], 65536KB chunk

unused devices: <none>

As you can see, I also over-provisioned the drives, leaving approximately 70GB unpartitioned on each:

[root@dimebox ~]# parted /dev/nvme0n1 unit MB print

Model: NVMe Device (nvme)

Disk /dev/nvme0n1: 2000399MB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Disk Flags:

Number Start   End    Size    File system Name Flags

 1   1.05MB   1924281MB 1924280MB           raid

 2   1924281MB 1928592MB 4312MB            raid

 3   1928592MB 1928853MB 261MB   fat16       raid
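For reference, the unpartitioned tail can be computed from the machine-readable `parted -m` output rather than eyeballing the table. The helper below is hypothetical (not from this thread): it subtracts the last partition's end from the disk size.

```shell
# Hypothetical helper: report the unpartitioned (over-provisioned) tail
# of a disk, given `parted -m ... unit MB print` output on stdin.
free_tail() {
  awk -F: '
    NR == 2 { sub(/MB$/, "", $2); disk = $2 }   # line 2: total disk size
    NR > 2  { sub(/MB$/, "", $3); end = $3 }    # partition lines: keep last end
    END     { printf "%.0f MB unpartitioned\n", disk - end }
  '
}
```

Piping `parted -m /dev/nvme0n1 unit MB print | free_tail` should then report the per-drive free tail.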

In the attached nvme.txt, you can see the output of isdct show -a -intelssd. nvme2.txt contains the kernel's ring buffer since last boot, filtered on nvme entries. What I find most interesting is that not every "timeout, aborting" entry triggers a controller reset. I am also not certain whether those abort-only timeouts are noticeable to the user, but the "timeout, reset controller" events certainly are.
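To quantify the difference between the two message variants, a quick filter (hypothetical, not part of the attached logs) can count them in the kernel ring buffer:

```shell
# Count the two NVMe timeout variants in kernel log text on stdin.
# Usage: dmesg | count_nvme_timeouts
count_nvme_timeouts() {
  awk '
    /nvme.*timeout, aborting/         { aborts++ }
    /nvme.*timeout, reset controller/ { resets++ }
    END { printf "aborts=%d resets=%d\n", aborts, resets }
  '
}
```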

Does Intel have any idea what could be triggering these events and, more importantly, how to avoid them?

13 Replies

Pete_H_
New Contributor

SSU logs attached.

JosafathB_Intel
Valued Contributor
Hello Pete H.,

Thank you for your reply. We will review the information you shared with us and try to reproduce the issue you are reporting in our lab. We will contact you as soon as we have an update or in case further information is required.

Have a nice day.

Best regards,
Josh B.
Intel® Customer Support Technician
Under Contract to Intel Corporation

Pete_H_
New Contributor

Josh,

Just to update, this appears to be a function of random read I/O. To verify:

cat /dev/zero > myfile

cat /dev/nvme0n1 > myfile

Neither of the above triggered any timeouts or controller resets; both are sequential workloads, not random reads. In my tests, I created 300GB files with both commands without issue.

tar cf file.tar ./<some directories with 2.2 million files>

With the tar command, I can get the timeout/controller reset to occur multiple times within a short interval (1 to 10 minutes).
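A more controlled way to generate a purely random-read load is fio (a sketch only; assumes fio is installed, and the device path is an example). The `--readonly` flag keeps it from writing to the array member:

```shell
# Hypothetical fio invocation: 4K random reads against one member
# device, read-only so it cannot disturb the array's data.
fio --name=randread-repro --filename=/dev/nvme0n1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 --readonly \
    --time_based --runtime=300
```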

-Pete

JosafathB_Intel
Valued Contributor
Hello Pete H.,

Thank you for your reply. Based on the logs you shared, we noticed that your system is a Super Micro* model H11DSU-iN. With that in mind, from your original equipment manufacturer's (OEM) website:

• Based on the "System HDD / SSD [H11DSU-iN]" list of compatible hardware, your "INTEL SSDPE2KX020T8" is not listed as tested, validated, or compatible with your server system. https://www.supermicro.com/support/resources/HDD/systemHDD.cfm?ProductID=85751&forMB=true

• Based on the logs you shared with us, you are running "CentOS Linux release 7.6.1810 (Core)"; your server system is tested and validated to work with CentOS 7.3, as stated in the OS compatibility list available on your OEM website. https://www.supermicro.com/Aplus/support/resources/OS/OS_Comp_EPYC7000.cfm

• We tried to reproduce your issue in our lab using CentOS 7 but did not experience the behavior you are reporting.

• We advise you to open a ticket in parallel with your original equipment manufacturer, Super Micro*, to check whether there is any known hardware issue or limitation that could be related to the situation you are experiencing with your current configuration.

We look forward to your reply. Have a nice day.

Best regards,
Josh B.
Intel® Customer Support Technician
Under Contract to Intel Corporation

Pete_H_
New Contributor

Josh,

Thank you for the detective work. The system was 100% assembled by SuperMicro, other than the OS installation, so I will kick this back to them.

-Pete