Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot

NSrnd
New Contributor II

I just got a workstation with an Asus X99-E WS motherboard (https://www.asus.com/us/Commercial_Servers_Workstations/X99E_WS/) and a 2.5-inch form factor Intel DC SSD P3600 1.2 TB. The SSD negotiates only a PCIe 1.0 x4 link and achieves a maximum read speed of around 730 to 800 MB/s, depending on the benchmark; that ceiling is consistent with a Gen1 x4 link, which tops out at roughly 1 GB/s raw (4 lanes x 250 MB/s per lane). The PCIe 1.0 x4 connection was confirmed both with lspci and with the Intel® Solid-State Drive Data Center Tool. However, the supplier claims that the SSD performs as advertised on Windows 7, which I do not use in production.
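For anyone wanting to reproduce the check: the link state can be read either from lspci (full dump further down) or from the Data Center Tool. A sketch of the latter, assuming the P3600 enumerates as drive index 0 (the index and the exact property names may differ between isdct versions):

sudo isdct show -intelssd
sudo isdct show -a -intelssd 0

The first command lists the detected Intel SSDs; the second dumps all properties of drive 0, including the PCIe link speed and width fields.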

The drive is connected through a Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD) (http://ark.intel.com/products/82790/Hot-swap-Backplane-PCIe-Combination-Drive-Cage-Kit-for-P4000-Ser...), which is plugged into one of the seven PCIe 3.0 x16 slots on the motherboard (I have tried every slot so far). It is the only drive in the drive cage, and its LED on the cage blinks green once the OS boots. The only other PCIe card is an EVGA Nvidia GT 210 GPU with 512 MB RAM (http://www.evga.com/Products/Product.aspx?pn=512-P3-1311-KR), a PCIe 2.0 x16 device.

I have installed both Ubuntu 15.04 (kernel 3.19) and CentOS 7 (kernel 3.10), and both display the same behaviour. Following the Intel Linux NVMe Driver Reference Guide for Developers (http://downloadmirror.intel.com/23929/eng/Intel_Linux_NVMe_Driver_Reference_Guide_330602-002.pdf), I got these results:

dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct

gives me a write rate of about 620 MB/s, and

hdparm -tT --direct /dev/nvme0n1

gives 657 MB/s O_DIRECT cached reads and 664 MB/s O_DIRECT disk reads.
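The guide's write test has no direct-read counterpart, so a bounded direct read with dd should work as a sanity check (a sketch; count=4096 reads 4 GiB, adjust to taste):

dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct

For reference, Intel specifies the 1.2 TB P3600 at about 2,600 MB/s sequential read on a PCIe 3.0 x4 link, so throughput capped in the 700 to 800 MB/s range points at the link, not the drive.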

The lspci output:

[user@localhost ~]$ sudo lspci -vvv -s 6:0.0
06:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
	Subsystem: Intel Corporation DC P3600 SSD [2.5" SFF]
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
	Latency: 0
	Interrupt: pin A routed to IRQ 40
	Region 0: Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
	Expansion ROM at fb400000 [disabled] [size=64K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [60] Express (v2) Endpoint, MSI 00
		DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 <4us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
		DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap: Port # 0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <4us
			ClockPM- Surprise- LLActRep- BwNot-
	...
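Note that LnkCap above only advertises what the endpoint is capable of (Speed 8GT/s, Width x4); the negotiated rate is in the LnkSta line, which falls in the truncated part of the dump. A quick way to compare the two on this device (bus address taken from the dump above; the exact wording varies slightly between lspci versions):

sudo lspci -vv -s 6:0.0 | grep -E 'LnkCap:|LnkSta:'

If the link really trained at Gen1, LnkSta should read Speed 2.5GT/s, Width x4.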
5 REPLIES

jbenavides
Valued Contributor II

Hello NSrnd,

I am glad to know that the recommendations and analysis helped you resolve this issue, and that you are now able to get the full performance out of your new Intel® DC SSD P3600.