Intel® Optane™ Solid State Drives
Support for Issues Related to Solid State Drives based on Intel® Optane™ technology, Intel® MAS and Firmware Update Tool

Optane 900P slowdown

Alibek
Beginner

I have 4 identical hosts, each with 4x NVMe Optane 900P 280GB drives in U.2 form factor, this model:

Model Number: INTEL SSDPE21D280GA

Serial Number: PHM2746000??280AGN

Firmware Version: E2010325

When I test them, I see the following: some of the Optane drives are very slow.

Before running the tests, I drop caches:

# echo 3 > /proc/sys/vm/drop_caches

All Optane drives have:

/sys/block/nvme?n1/queue/io_poll = 1
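
For reference, one way to confirm the setting across all four drives (a sketch, assuming the nvme0n1-nvme3n1 device naming used below):

# for d in /sys/block/nvme?n1; do echo "$d: $(cat $d/queue/io_poll)"; done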

There is no other I/O on the NVMe devices in parallel; only this test:

host-1 ~# for d in {0..3}; do dd if=/dev/nvme${d}n1 of=/dev/null bs=4k count=256000; done

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.385942 s, 2.7 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.390112 s, 2.7 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.38746 s, 2.7 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.387112 s, 2.7 GB/s

host-1 ~# for d in {0..3}; do hdparm -Tt --direct /dev/nvme${d}n1; done

/dev/nvme0n1:

Timing O_DIRECT cached reads: 4776 MB in 2.00 seconds = 2388.57 MB/sec

Timing O_DIRECT disk reads: 7122 MB in 3.00 seconds = 2373.22 MB/sec

/dev/nvme1n1:

Timing O_DIRECT cached reads: 4880 MB in 2.00 seconds = 2440.49 MB/sec

Timing O_DIRECT disk reads: 7300 MB in 3.00 seconds = 2433.20 MB/sec

/dev/nvme2n1:

Timing O_DIRECT cached reads: 4826 MB in 2.00 seconds = 2413.76 MB/sec

Timing O_DIRECT disk reads: 7010 MB in 3.00 seconds = 2336.50 MB/sec

/dev/nvme3n1:

Timing O_DIRECT cached reads: 4834 MB in 2.00 seconds = 2417.19 MB/sec

Timing O_DIRECT disk reads: 7286 MB in 3.00 seconds = 2428.46 MB/sec

host-2 ~# for d in {0..3}; do dd if=/dev/nvme${d}n1 of=/dev/null bs=4k count=256000; done

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.386011 s, 2.7 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.700671 s, 1.5 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 135.126 s, 7.8 MB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.3885 s, 2.7 GB/s

host-2 ~# for d in {0..3}; do hdparm -Tt --direct /dev/nvme${d}n1; done

/dev/nvme0n1:

Timing O_DIRECT cached reads: 4870 MB in 2.00 seconds = 2435.09 MB/sec

Timing O_DIRECT disk reads: 7276 MB in 3.00 seconds = 2425.19 MB/sec

/dev/nvme1n1:

Timing O_DIRECT cached reads: 2758 MB in 2.00 seconds = 1379.17 MB/sec

Timing O_DIRECT disk reads: 2726 MB in 3.00 seconds = 908.07 MB/sec

/dev/nvme2n1:

Timing O_DIRECT cached reads: 614 MB in 2.12 seconds = 290.25 MB/sec

Timing O_DIRECT disk reads: 64 MB in 3.13 seconds = 20.42 MB/sec

/dev/nvme3n1:

Timing O_DIRECT cached reads: 4716 MB in 2.00 seconds = 2358.23 MB/sec

Timing O_DIRECT disk reads: 6068 MB in 3.00 seconds = 2022.55 MB/sec

host-3 ~# for d in {0..3}; do dd if=/dev/nvme${d}n1 of=/dev/null bs=4k count=256000; done

[output truncated]

idata
Employee

Hello Alibek,

Thank you for contacting Intel® Technical Support.

As we understand it, you need assistance with your Intel® Optane™ SSD 900P Series (280GB, 2.5in PCIe x4, 20nm, 3D XPoint™). To begin diagnosis and the troubleshooting that could lead us to a resolution, we would appreciate it if you could reply to this post with the following basic information:
  • System integration (please describe how your system is integrated; include the manufacturer and model of all components)
  • Operating system information (OS distribution, kernel version, etc.)
  • The SMART logs extracted from one of your Intel® Optane™ SSD 900P Series drives (e.g., with the commands sketched below)
  • Usage that you are giving to these drives (primary drive, storage/secondary drive, part of a RAID array, etc.)
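
On Linux, for instance, the SMART log can typically be pulled with nvme-cli or smartmontools, assuming either tool is installed (a sketch, not a required exact command):

# nvme smart-log /dev/nvme0n1
# smartctl -a /dev/nvme0
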
We will be looking forward to your reply.

Best regards,

Josh B.

Intel Customer Support.
Alibek
Beginner

Hi Josh, thank you for your attention!

  • System integration (please describe how your system is integrated; include the manufacturer and model of all components)
  • Operating system information (OS distribution, kernel version, etc.)

The platform: Supermicro A+ Server 2123BT-HNC0R, 4 nodes (https://www.supermicro.com/Aplus/system/2U/2123/AS-2123BT-HNC0R.cfm)

Per node:

NVMe: 4x 2.5" U.2 Intel Optane 900P 280GB (https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/gaming-enthusiast-ssds/optane-900p-series/900p-280gb-2-5-inch-20nm.html) and 1x M.2 Samsung SM961 256GB NVMe (https://www.samsung.com/semiconductor/ssd/client-ssd/MZVPV256HEGL/)

SAS: 2x 2.5" SSD Samsung PM1633a 7.68TB (https://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZILS7T6HMLS/)

FC: QLogic QLE8362 (attached to an FC switch, used for pools exported from external storage)

CPU: 2x AMD EPYC 7601 with SMT (64 cores/128 threads)

Memory: DDR4 ECC, 2 TiB

Network: 2x 10Gbps Intel X550T Ethernet, bonding balance-alb

OS: Debian GNU/Linux 9.5 (stretch) with Linux kernel 4.15.18-1-pve #1 SMP PVE 4.15.18-17 (Mon, 30 Jul 2018 12:53:35 +0200); repo: http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64

Virtualization: Proxmox 5.2-2, PVE Manager version pve-manager/5.2-7/8d88e66a; repo: http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64

Ceph: luminous 12.2.7-pve1; repo: http://download.proxmox.com/debian/ceph-luminous stretch/main amd64

  • The SMART logs extracted from one of your Intel® Optane™ SSD 900P Series drives

See the attached file nvme-slowdown-systeminfo.tar.gz.

  • Usage that you are giving to these drives (primary drive, storage/secondary drive, part of a RAID array, etc.)

Currently all 4 servers are not in use and the Optane drives are not in use; I am only testing. I plan to use them in a Ceph cluster for metadata or as a cache tier in a hyperconverged configuration (to scale CPU, memory, and storage together by adding new hosts).

idata
Employee

Hello Alibek,

Thank you for your reply.

Based on the information you provided, we note the following:

- System Requirements for an Intel® Optane™ SSD 900P Series Drive

- To use it as a secondary data drive, you need:
  • Microsoft Windows 7 or Windows 10 OS
Note: Other OSes supporting these requirements may function properly, but we have not validated them yet.
  • Intel® NVMe driver installed after OS installation
Note: An OS that contains a native NVMe driver should also work, but the Intel NVMe driver is preferred for Windows.
  • Based on the SMART information you shared with us, this device has roughly 2,000 hours of usage. Did you experience the same issues from the beginning, or did the issue you are reporting occur during a specific process?

That being said, please take into consideration that issues or performance drops are to be expected, since your configuration is not validated or supported.

To further assist you, please visit the Evaluation Guide for Client Intel® Optane™ SSDs (https://www.intel.com/content/www/us/en/support/articles/000025989/memory-and-storage.html), follow the step-by-step PDF guide, and provide us with the results.

Once you have followed all the steps in that guide, we advise you to download the Intel® Solid State Drive Toolbox (https://downloadcenter.intel.com/download/28036/Intel-Solid-State-Drive-Toolbox?product=80096) and check the following information:

  • Drive health
  • Estimated drive life remaining
  • SMART attributes

We hope you find this information useful.

Best regards,

Josh B.

Intel Customer Support Technician

Under Contract to Intel Corporation
Alibek
Beginner

Josh, thank you for the notice!

* Please take into consideration that your Intel® Optane™ SSD 900P Series (280GB, 2.5in PCIe x4, 20nm, 3D XPoint™) is qualified as an INTEL® SOLID STATE DRIVE FOR GAMING AND ENTHUSIASTS and was not designed to be used on a server environment and using your drives in an out of specs environment can void your warranty.

I am an enthusiast of open-source technologies, and I use different technologies for my own projects: workstations, servers, single-board computers, laptops, and various storage systems, for mining, deep machine learning, automotive, and other projects.

And when I buy hardware, I expect it to work as stated by the manufacturer in the specs (including internal specs such as the declared performance, endurance, and interfaces: PCIe 3.0 x4 with NVMe support via a PCIe-capable SFF-8643 connector).

* Based on the Smart information you shared with us this device have more or less 2000 hours of usage, did you experienced the same issues since the beginning or the issue you are reporting occurred during a specific process?

The servers and devices have not been used for those 2000 hours; they have only been powered on. I began running the performance tests only this week. Before that I was configuring the network (the Intel X550T Ethernet controller would not autonegotiate 10Gbps, and 2 months were lost on this with Supermicro support).

My guess is that if the servers do not use the Optane 900P devices for a long while, power management creates this situation and some devices slow down.

I tried resetting the NVMe controllers with the nvme-cli tools (https://github.com/linux-nvme/nvme-cli), but this had no effect.
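
For example, whether these drives expose APST (Autonomous Power State Transitions) at all can be checked with nvme-cli; a diagnostic sketch, assuming the standard Linux nvme driver (apsta=0 in the id-ctrl output means the drive does not support APST; feature 0x0c is the APST table):

# nvme id-ctrl /dev/nvme0 | grep -i apsta
# nvme get-feature /dev/nvme0 -f 0x0c -H

Booting with the kernel parameter nvme_core.default_ps_max_latency_us=0 would disable APST entirely and rule it out.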

I found that if I restart a server with a slowed-down Optane, then after the reboot some devices demonstrate high performance again, but some other devices demonstrate only half or a quarter of the performance:

Before reboot:

host-3 # for d in {0..3}; do hdparm -Tt --direct /dev/nvme${d}n1; done

/dev/nvme0n1:

Timing O_DIRECT cached reads: 92 MB in 2.00 seconds = 45.98 MB/sec

Timing O_DIRECT disk reads: 140 MB in 3.03 seconds = 46.15 MB/sec

/dev/nvme1n1:

Timing O_DIRECT cached reads: 4738 MB in 2.00 seconds = 2369.24 MB/sec

Timing O_DIRECT disk reads: 3008 MB in 3.00 seconds = 1002.60 MB/sec

/dev/nvme2n1:

Timing O_DIRECT cached reads: 20 MB in 2.17 seconds = 9.20 MB/sec

Timing O_DIRECT disk reads: 20 MB in 3.12 seconds = 6.42 MB/sec

/dev/nvme3n1:

Timing O_DIRECT cached reads: 678 MB in 2.00 seconds = 338.76 MB/sec

Timing O_DIRECT disk reads: 362 MB in 3.02 seconds = 119.86 MB/sec

After reboot:

host-3 # for d in {0..3}; do hdparm -Tt --direct /dev/nvme${d}n1; done

/dev/nvme0n1:

Timing O_DIRECT cached reads: 4782 MB in 2.00 seconds = 2390.90 MB/sec

Timing O_DIRECT disk reads: 2806 MB in 3.00 seconds = 935.22 MB/sec

/dev/nvme1n1:

Timing O_DIRECT cached reads: 4790 MB in 2.00 seconds = 2395.45 MB/sec

Timing O_DIRECT disk reads: 3576 MB in 3.00 seconds = 1191.40 MB/sec

/dev/nvme2n1:

Timing O_DIRECT cached reads: 4834 MB in 2.00 seconds = 2417.37 MB/sec

Timing O_DIRECT disk reads: 3732 MB in 3.00 seconds = 1242.66 MB/sec

/dev/nvme3n1:

Timing O_DIRECT cached reads: 4736 MB in 2.00 seconds = 2368.74 MB/sec

Timing O_DIRECT disk reads: 2412 MB in 3.00 seconds = 803.44 MB/sec

* This being said please take into consideration that it is expected to experience issues/performance drops or others since your configuration is not validated or supported.

* In order to further assist you please visit the Evaluation Guide for Client Intel® Optane™ SSDs (https://www.intel.com/content/www/us/en/support/articles/000025989/memory-and-storage.html) and follow the step by step PDF guide and provide us with the results.

* Once you followed all the steps of the previous guide we advise you to download the Intel® Solid State Drive Toolbox (https://downloadcenter.intel.com/download/28036/Intel-Solid-State-Drive-Toolbox?product=80096) and check the following information: drive health, estimated drive life remaining, SMART attributes.

* We hope you find this information useful.

Thank you! But that information is not useful. The Evaluation Guide for Client Intel® Optane™ SSDs and the Intel® Solid State Drive Toolbox are Windows-based and require a Windows OS, but no Windows OS is present anywhere in my environment. Are you in fact forcing me to buy Windows?

idata
Employee

Hello Alibek,

Thank you for your reply.

To clarify and reply to your inquiry, let me share the following information:
  • "when I buy hardware for my property then I expect it is will work as stated by the manufacturer in specs"
We do agree on this, our hardware should be able to work as stated in the specs available at ark.intel.com; if the hardware and software that the device is running in fulfills the System Requirements for an Intel® Optane™ SSD 900P Series Drive and the configuration have been tested, validated and is supported by Intel®.

 

 

As stated in our previous interaction your Intel® Optane™ SSD 900P Series (280GB, 2.5in PCIe x4, 20nm, 3D XPoint™) is qualified as an Intel® SOLID STATE DRIVE FOR GAMING AND ENTHUSIASTS and was not designed to be used on a server/data-center environment, and using your drives in an out of specs environment void your warranty.

 

 

To explain the reason why the misuse would void the warranty; when a customer uses a client/enthusiasts drive in a data center usage, is exposing the drive to different workloads that involve different endurance and therefore surpassing the specs. Our client/enthusiasts drives are expected for client/gaming usage. That is why we offer data center drives since those have specs which have validated for enterprise usage.

 

 

Based on the NVMe Options available on the Supermicro website we do not see the 900P as a validated or tested to work with your system either. This website specifically recommends you for your platform Supermicro A+ Server 2123BT-HNC0R to base purchase NVMe 2.5" SSDs from Supermicro to ensure compatibility and revision level of these devices.
  • "Evaluation Guide for Client Intel® Optane™ SSD's and Intel® Solid State Drive Toolbox based and need Windows OS. But in my environment around is not present any Windows OS"
We do understand this but as part of the basic system requirements for an Intel® Optane™ SSD 900P Series Drive to work properly you need the following:

 

 

Microsoft Windows 7 or Windows 10 OS

 

 

Note: Other OS supporting these requirements may function properly, but we have not validated them yet.

 

 

Since your OS has not been validated to work with the hardware it is expected to experience unknown issues (performance drops or others since your configuration is not validated or supported) and we advise you to get in contact with your OS Open source community in order to get support and further information on how to try to solve your issue.
  • In order to summarize the ideas Intel® does not support your current configuration since your product is being used in a data center environment and in a system that the hardware and software does not fulfill the basic requirements and is out of the manufacturer specs and this voided the warranty on your Intel® Optane™ SSD 900P Series (280GB, 2.5in PCIe x4, 20nm, 3D XPoint™) devices.
Thank you for your patience and understanding.

 

 

Best regards,

 

 

Josh B.

 

Intel Customer Support Technician

 

Under Contract to Intel Corporation
Alibek
Beginner

Hello Josh!

I know about warranties, compatibility, and the rest, so please do not explain them again.

And I am not trying to surpass the specs! The devices are used in an environment with compatible specs!

Yes, this combination has not been tested by your company nor by the server's manufacturer. But the stated specs of the server are fully compatible with the stated specs of the NVMe device!

I think this problem is present with other NVMe drives as well, and I think that solving it is also in your company's interest.

> we advise you to get in contact with your OS's open-source community for support and further information on how to try to solve your issue.

I already have. But please do not shunt me off to the open-source community; this could become an unpleasant incident for Intel, like Meltdown, Spectre, or MCU-Path-License.

I need the Intel community's and Intel Corporation's help to solve this problem.

But I do not need clarifications of the warranty conditions or instructions to appeal to an open-source community. Please refrain from this.

It would be better if you involved competent specialists in the consideration of this problem.

On to the tests:

I ran other tests, and the results make the situation even more puzzling:

The NVMe devices demonstrate full speed under FIO (https://manpages.debian.org/stretch/fio/fio.1.en.html) with ioengine=libaio, and half speed with ioengine=sync or psync.

But dd and hdparm (both use synchronous read(2)/write(2): https://manpages.debian.org/stretch/manpages-dev/read.2.en.html, https://manpages.debian.org/stretch/manpages-dev/write.2.en.html) still show bad results.
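
For reference, a minimal pair of fio invocations of this kind (a sketch with an assumed device name and runtime, not the exact jobs from the attached archive):

# fio --name=aio-read --filename=/dev/nvme0n1 --readonly --rw=read --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based
# fio --name=sync-read --filename=/dev/nvme0n1 --readonly --rw=read --bs=4k --direct=1 --ioengine=psync --runtime=30 --time_based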

Look at the end of the file fio-host4.txt in the attached archive fio-test.tar.gz.

I have deliberately not rebooted host-4; it stays in its current bad NVMe state so that the situation can be studied.

Josh, thank you again for your attention!

I know about the warranties, so please do not explain them again. I think this problem is present with other NVMe drives as well, and I think that solving it is also in your company's interest.

Jose_G_Intel4
Employee

Hi Alibek, the main issue I see is that you are using hdparm to run the benchmarks, and this is not a tool we recommend for benchmarking on Linux; you should use FIO, as you seem to have already done, and as you noticed, the performance numbers are higher. The way hdparm runs benchmarks is not ideal, as it does not simulate "real world workloads". A side note for your reference: https://www.linux.com/learn/inspecting-disk-io-performance-fio

That said, the closest thing you could do is use our DC P3700/P3600/P3500 series Evaluation Guide as a reference for running benchmark tests on Linux; but please bear in mind that this is a document for data-center products, so it does not strictly apply to the 900P:

http://manuals.ts.fujitsu.com/file/12176/fujitsu_intel-ssd-dc-pcie-eg-en.pdf

Hope this helps.

Thanks,

JE

Alibek
Beginner

Hi JE!

If you look at the first post, and into the files attached to my other posts, you will see:

I use not only hdparm; I also use dd (https://manpages.debian.org/stretch/coreutils/dd.1.en.html). Both tools perform sequential read (and write) operations on the device. If you look at strace, you can see only read(2)/write(2) calls (https://manpages.debian.org/stretch/manpages-dev/read.2.en.html, https://manpages.debian.org/stretch/manpages-dev/write.2.en.html) on the specified device.
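
For illustration, an strace invocation of this kind shows it, with a short count so the trace stays readable (a sketch, assuming strace is installed):

# strace -e trace=read dd if=/dev/nvme0n1 of=/dev/null bs=4k count=8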

In the document http://manuals.ts.fujitsu.com/file/12176/fujitsu_intel-ssd-dc-pcie-eg-en.pdf, paragraph "4.2.4 Run the Benchmark Test", clause 3, dd is recommended as the tool for sequential writes of random data to the device:

Pre-condition the drive by filling up the drive with sequential writes. This makes sure that the benchmark tool does not record an artificially high out-of-box performance.
– In Windows, use Iometer to sequentially write to 100% span of the drive.
– In Linux, use the "dd" command on the test drive.
e.g. dd of=/dev/nvme0n1 if=/dev/urandom oflag=direct

If you look at the first post, you can see the dd results; for example, for host-4 (2018-08-29):

host-4 ~# for d in {0..3}; do dd if=/dev/nvme${d}n1 of=/dev/null bs=4k count=256000; done

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 134.697 s, 7.8 MB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.555736 s, 1.9 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.385807 s, 2.7 GB/s

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 14.1933 s, 73.9 MB/s

host-4 (2018-09-06):

root@host-4:~# echo 3 > /proc/sys/vm/drop_caches && for d in {0..3}; do echo nvme${d}n1; dd if=/dev/nvme${d}n1 of=/dev/null bs=4k count=256000; done

nvme0n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 118.43 s, 8.9 MB/s

nvme1n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 133.926 s, 7.8 MB/s

nvme2n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 39.2288 s, 26.7 MB/s

nvme3n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 130.507 s, 8.0 MB/s

and with a 1M block size:

root@host-4:~# echo 3 > /proc/sys/vm/drop_caches && for d in {0..3}; do echo nvme${d}n1; dd if=/dev/nvme${d}n1 of=/dev/null bs=1M count=1024; done

nvme0n1

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 120.246 s, 8.9 MB/s

nvme1n1

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 136.472 s, 7.9 MB/s

nvme2n1

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 100.636 s, 10.7 MB/s

nvme3n1

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 133.799 s, 8.0 MB/s

root@host-4:~# uptime

18:25:40 up 16 days, 21:14, 1 user, load average: 0.00, 0.16, 0.33

As you can see, after a few days without a reboot, read operations from nvme1n1 and nvme2n1 have degraded too.

But if I use iflag=direct (strace shows: open("/dev/nvme0n1", O_RDONLY|O_DIRECT)):

root@host-4:~# echo 3 > /proc/sys/vm/drop_caches && for d in {0..3}; do echo nvme${d}n1; dd if=/dev/nvme${d}n1 of=/dev/null iflag=direct bs=4k count=256000; done

nvme0n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 3.35532 s, 313 MB/s

nvme1n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 3.02359 s, 347 MB/s

nvme2n1

256000+0 records in

256000+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 3.27172 s, 320 MB/s

nvme3n1

256000+0 records in

256000+0 records out

And I found next - I consistently performed the followi...

Jose_G_Intel4
Employee

Hi Alibek, we can't guarantee server performance results, but we are running tests on a client-based platform and will share our results on Linux in general, for your reference.

Thanks,

Jose

idata
Employee

Hi Alibek,

As pointed out in an earlier post, we ran our tests on a client board (H270-Gaming 3, Intel® H270 Express Chipset, Intel® Core™ i3-7100 CPU @ 3.90GHz) with Ubuntu 16.04.4 LTS, kernel 4.13.0, and FIO 2.2.10, set up as follows (sequential reads and sequential writes):

# Test 1: 128K Sequential Reads

fio --output=128K_Seq_Read.txt --name=seqread --write_bw_log=128K_Seq_Read_sec_by_sec.csv --filename=/dev/nvme0n1p1 --rw=read --direct=1 --ioengine=libaio --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=5G --runtime=600 --group_reporting --iodepth=128

# Test 2: 128k Sequential Writes

fio --output=128K_Seq_Write.txt --name=seqwrite --write_bw_log=128K_Seq_Write_sec_by_sec.csv --filename=/dev/nvme0n1p1 --rw=write --direct=1 --ioengine=libaio --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=5G --runtime=600 --group_reporting --iodepth=128

The test results are basically in line with what our specs in ARK state:

Test                FIO          ARK
Sequential Reads    2573.9 MB/s  2500 MB/s
Sequential Writes   2153.4 MB/s  2000 MB/s

Best regards,

Josh B.

Intel Customer Support Technician

Under Contract to Intel Corporation
idata
Employee

Hello Alibek,

Thank you for having contacted Intel Technical Support.

We have not heard from you since our last communication, and we would like to know whether you need further assistance or whether we can close this case.

Important note: should further assistance or clarification be required, we would greatly appreciate it if you replied to this post instead of writing a new one, unless your inquiry is completely unrelated. This way we will avoid generating a duplicate post and will not lose the train of thought.

We will be looking forward to your reply.

Best regards,

Josh B.

Intel Customer Support.
Alibek
Beginner

Hi Jose and Josh!

I was busy with other tasks, and have now returned to the NVMe tests.

I found the following: when I first run an FIO test with small blocks (512B-4k) and afterwards run FIO with bs=32k, the speed is degraded; but if I run the test with bs=64k-1M several times, the speed is restored. I also checked this with dd and got the same results: running dd with bs=64k-1M several times restores the NVMe speed (see the example below).
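
For illustration, the dd form of that recovery sequence would look like the following (a sketch with an assumed device name and count, not the exact commands from my logs):

root@host-4:~# for i in 1 2 3; do dd if=/dev/nvme0n1 of=/dev/null iflag=direct bs=1M count=4096; done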

Currently the kernel on all hosts is up to date and the hosts have been rebooted, but I get the same results (except on host-1, where the speed never degrades).

Example of test:

root@host-4:~# uptime

16:42:35 up 27 days, 19:31, 1 user, load average: 10.75, 7.04, 2.85

root@host-4:~# uname -a

Linux host-4 4.15.18-1-pve #1 SMP PVE 4.15.18-17 (Mon, 30 Jul 2018 12:53:35 +0200) x86_64 GNU/Linux

root@host-4:~# cat fio-optane.cfg

[global]

ioengine=libaio

direct=1

sync=1

# readwrite=randread

rw=read

rw_sequencer=sequential

iodepth=256

bs=32k

buffered=0

size=100%

runtime=60

time_based

randrepeat=0

norandommap

refill_buffers

ramp_time=30

group_reporting=1

[job nvme0n1]

filename=/dev/nvme0n1

# cpus_allowed=32-39

numjobs=8

[job nvme1n1]

filename=/dev/nvme1n1

# cpus_allowed=96-103

numjobs=8

[job nvme2n1]

filename=/dev/nvme2n1

# cpus_allowed=56-63

numjobs=8

[job nvme3n1]

filename=/dev/nvme3n1

# cpus_allowed=120-127

numjobs=8

root@host-4:~# echo 3 > /proc/sys/vm/drop_caches; fio fio-optane.cfg

job nvme0n1: (g=0): rw=read, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=256

...

job nvme1n1: (g=0): rw=read, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=256

...

job nvme2n1: (g=0): rw=read, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=256

...

job nvme3n1: (g=0): rw=read, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=256

...

fio-2.16

Starting 32 processes

Jobs: 32 (f=32): [R(32)] [5.9% done] [8472MB/0KB/0KB /s] [271K/0/0 iops] [eta 24m:11s]

job nvme0n1: (groupid=0, jobs=32): err= 0: pid=2921636: Mon Sep 17 16:46:22 2018

read : io=521390MB, bw=8684.7MB/s, iops=277771, runt= 60036msec

slat (usec): min=2, max=46966, avg=21.43, stdev=116.79

clat (usec): min=378, max=136067, avg=29462.12, stdev=11223.74

lat (usec): min=383, max=136081, avg=29479.20, stdev=11223.65

clat percentiles (usec):

...

idata
Employee

Hi Alibek,

We ran our tests to try to reproduce the behavior you are reporting, on a client board (H270-Gaming 3, Intel® H270 Express Chipset, Intel® Core™ i3-7100 CPU @ 3.90GHz) with Ubuntu 16.04.4 LTS, kernel 4.13.0, and FIO 2.2.10.

The test results match what our specs in ARK state, in all scenarios:

Test                FIO 128k     FIO 32k      FIO 4k       ARK
Sequential Reads    2573.9 MB/s  2477.8 MB/s  2363.2 MB/s  2500 MB/s
Sequential Writes   2153.4 MB/s  2158.7 MB/s  2185.1 MB/s  2000 MB/s

As mentioned before, we can't guarantee server performance results. It's great that you got good results when you tested with bs=64k-1M; however, we cannot guarantee that the behavior will stay that way in your current server environment.

Best regards,

Josh B.

Intel Customer Support Technician

Under Contract to Intel Corporation

idata
Employee

Hi Alibek,

Thank you for having contacted Intel Technical Support.

We have not heard from you since our last communication, and we would like to know whether you need further assistance or whether we can close this case.

Important note: should further assistance or clarification be required, we would greatly appreciate it if you replied to this post instead of writing a new one, unless your inquiry is completely unrelated. This way we will avoid generating a duplicate post and will not lose the train of thought.

We will be looking forward to your reply.

Best regards,

Josh B.

Intel Customer Support.
Alibek
Beginner

Hi Josh!

The problem wasn't solved, but the discussion can be closed.

Thank you for your opinion!
