Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot

NSrnd
New Contributor II

I just got a workstation with an https://www.asus.com/us/Commercial_Servers_Workstations/X99E_WS/ Asus X99-E WS motherboard and a 2.5-inch form factor Intel DC SSD P3600 1.2 TB. The SSD runs on PCIe 1.0 x4 and achieves a maximum read speed of around 730 to 800 MB/s, depending on the benchmark. The PCIe 1.0 x4 connection was confirmed both using lspci and the Intel® Solid-State Drive Data Center Tool. However, the supplier claims that the SSD performs as advertised on Windows 7, which I do not use in production.
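For anyone who wants to reproduce the check, something along these lines should work (the PCI address and the isdct drive index below are specific to my system, so treat them as placeholders):

sudo lspci -vvv -s 06:00.0 | grep -E 'LnkCap|LnkSta'    # compare advertised vs. negotiated link
sudo isdct show -intelssd                               # list detected Intel SSDs and their index
sudo isdct show -a -intelssd 0 | grep -i link           # full property dump for drive 0, filtered to link info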

The drive is connected using a http://ark.intel.com/products/82790/Hot-swap-Backplane-PCIe-Combination-Drive-Cage-Kit-for-P4000-Ser... Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD), which is plugged into one of the seven PCIe 3.0 x16 slots on the motherboard (I tried every slot so far). It is the only drive in the drive cage. This drive's LED on the drive cage is green and blinking when the OS boots up. The only other PCIe card is an http://www.evga.com/Products/Product.aspx?pn=512-P3-1311-KR EVGA Nvidia GT 210 GPU with 512 MB RAM, a PCIe 2.0 x16 device.

I have installed both Ubuntu 15.04 (kernel 3.19) and CentOS 7 (kernel 3.10), and both display the same behaviour. Following the http://downloadmirror.intel.com/23929/eng/Intel_Linux_NVMe_Driver_Reference_Guide_330602-002.pdf Intel Linux NVMe Driver Reference Guide for Developers, I got these results:

dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct

gives me a write rate of about 620 MB/s, and

hdparm -tT --direct /dev/nvme0n1

gives 657 MB/s O_DIRECT cached reads and 664 MB/s O_DIRECT disk reads.
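For a longer, steadier sequential-read figure, an fio job along these lines could also be used (the parameters are just one reasonable choice, not something taken from the guide above):

fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=30 --time_based --group_reporting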

The lspci output:

[user@localhost ~]$ sudo lspci -vvv -s 6:0.0
06:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation DC P3600 SSD [2.5" SFF]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
        Latency: 0
        Interrupt: pin A routed to IRQ 40
        Region 0: Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at fb400000 [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=0 offset=00003000
        Capabilities: [60] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 <4us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port # 0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <4us
                        ClockPM- Surprise- LLActRep- BwNot-
        ...
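On newer kernels the negotiated link can also be read straight from sysfs (these attributes may not be present on the 3.10/3.19 kernels mentioned above; the address 0000:06:00.0 is taken from the lspci output):

cat /sys/bus/pci/devices/0000:06:00.0/max_link_speed
cat /sys/bus/pci/devices/0000:06:00.0/current_link_speed
cat /sys/bus/pci/devices/0000:06:00.0/current_link_width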
5 Replies

jbenavides
Valued Contributor II

Hello NSrnd,

These are the most relevant aspects we found after reviewing the information you have provided:

- The lspci output shows that the Intel® SSD DC P3600 Series is capable of a PCIe Gen3 x4 link; however, the link status shows it is running at PCIe Gen1 x4 (2.5 GT/s). This suggests that another component on the bus is limiting the connection to Gen1, and the SSD is negotiating down to the link speed of the slowest device on the bus. You might want to check the link capabilities of the related devices (see the example commands after this list).

LnkCap: Port # 0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <4us

...

LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-

- The Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD) is supported by the Intel server specialists; however, the information we found indicates that it is designed to be used with Intel® P4000 Server chassis and Intel® Server Boards, and even in that configuration the http://www.intel.com/support/motherboards/server/s2600cw/sb/CS-035538.htm?wapkw=hot-swap+backplane+p... Intel motherboards require specific updates to operate properly with this device. Third-party systems may not be fully compatible with this backplane kit.

- The https://www.asus.com/us/Commercial_Servers_Workstations/X99E_WS/specifications/ specifications of the Asus X99-E WS* do not mention support for the drive cage kit either. We advise you to check with Asus whether they support this configuration; Asus may have another solution for attaching 2.5-inch NVMe storage devices with the SFF-8639 connector.
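As a generic way to narrow down where the Gen1 limitation comes from (this is a general PCIe check, not specific to this SSD), you can compare LnkCap and LnkSta for the SSD and for the upstream root port or bridge it is attached to, for example:

lspci -t                                                  # locate the bridge upstream of 06:00.0
sudo lspci -vvv -s <upstream bridge address> | grep -E 'LnkCap|LnkSta'
sudo lspci -vvv -s 06:00.0 | grep -E 'LnkCap|LnkSta'

If the upstream port only advertises or trains at 2.5GT/s, the limitation is on the slot, backplane, or cabling side rather than in the SSD itself.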

We advise you to check with http://support.asus.com/ Asus Support whether your motherboard is fully compatible with the Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK. If it is, make sure the system has all applicable updates and configuration changes needed to make full use of it.

Additionally, you could confirm whether this behavior is expected with the Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK. You can obtain assistance for the backplane kit by contacting the /community/tech/servers Intel Servers Support Community, or you can http://www.intel.com/p/en_US/support/contactsupport Contact Intel Support to get assistance from a server support specialist in your region.

NSrnd
New Contributor II

Hello jbenavides,

Thank you for the detailed and informative reply. I am following your advice and am in the process of contacting both ASUS Support and Intel Server Support.

The supplier of this workstation claims to have tested the SSD under Windows 7 64-bit and that it achieved the advertised performance. I cannot verify this claim (I only run Linux) but have no reason to doubt it. Would that rule out any hardware issues and signify a problem with software instead?

jbenavides
Valued Contributor II

I would advise you to contact the support teams for the motherboard and the backplane kit to confirm or rule out any hardware issues. The SSD is detected and advertises the proper PCIe capabilities, so the bottleneck appears to be in a different component.

As we understand it, Asus X99 motherboards support NVM Express devices; however, using the 2.5-inch drive versions requires a BIOS update and the ASUS Hyper Kit expansion card. I was not able to find any reference to Intel backplane kits being used with this type of system. You might want to check with Asus whether this applies to the Asus X99-E WS.

http://rog.asus.com/418662015/labels/product-news/asus-announces-all-x99-and-z97-motherboards-suppor... ASUS Announces All X99 and Z97 Motherboards Support NVM Express Devices

NSrnd
New Contributor II

Hi jbenavides,

As you initially wrote, this was indeed a hardware compatibility issue: the backplane is not compatible with this motherboard. The Hyper Kit solution was also difficult to put together, as a special cable is required to connect the SSD to the Hyper Kit. This cable is not bundled with either the SSD or the Hyper Kit, nor is it sold separately; it only comes bundled with the 2.5-inch form factor Intel SSD 750 Series. After a lot of emails and phone calls, our supplier was able to get one specially delivered for us.

Thank you for the prompt and expert replies.