
Intel X520-SR2 iscsi performance

idata
Employee

Hello, can I get a suggestion from the Intel support team to resolve an iSCSI performance issue with this type of NIC?

Short description of the current test lab:

Two PCs: one is a Windows 7 x64 client, the other a Windows Server 2012 server with iSCSI target software installed (StarWind iSCSI SAN). Each has one X520 NIC, connected directly to the other with an Intel DA cable.

The negotiated speed is 10 Gbps. Only one port is connected, directly to the NIC on the server.

The first test I ran used the ntttcp utility to measure total throughput. The result was nowhere close to 10 GbE.

Test config:

ntttcps -m 1,0,192.168.10.1 -l 1048576 -n 100000 -w -a 16

ntttcpr -m 1,0,192.168.10.1 -l 1048576 -rb 2097152 -n 100000 -w -a 16 -fr

This should load the network up to 10 Gb, but no luck: only 850-890 Mb in one direction, and about 6 Gb in the other.

I updated to the latest drivers for Windows 7 and Windows 2012, and tried jumbo frames and C-state settings in the BIOS; no luck.

For iSCSI I'm getting unacceptable results, even when using a RAM disk as the iSCSI target.

Any suggestions?

Mark_H_Intel
Employee

I checked with someone who works with performance testing. This is the feedback I received:

You should check CPU load. This looks like an RSS issue, since your ntttcp command targets only a single core; you probably need to pin your threads to different CPUs for the test.
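As a sketch of that advice: with the classic NTttcp syntax, the -m option takes session,processor,receiver-IP triples and accepts several of them, so (assuming four or more cores and the same address as in the commands above) the load can be spread across cores 0-3 like this:

```shell
:: Receiver side (run first): one session each on cores 0, 1, 2, and 3
ntttcpr -m 1,0,192.168.10.1 1,1,192.168.10.1 1,2,192.168.10.1 1,3,192.168.10.1 -a 16 -fr

:: Sender side: matching session-to-core mapping
ntttcps -m 1,0,192.168.10.1 1,1,192.168.10.1 1,2,192.168.10.1 1,3,192.168.10.1 -a 16
```

While the test runs, watch per-core load in Task Manager or Performance Monitor; if a single core sits at 100%, the link will never reach line rate no matter how the test is tuned.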

I hope this helps with your testing.

Mark H

idata
Employee

Hi Mark,

I will change the test settings and see what happens. But I have another question (this setup should go into production later). Two iSCSI servers, all Intel hardware: the latest Intel server platform (R2308GZ4GS9, http://www.intel.com/content/www/us/en/server-systems/server-system-r2300gz.html ), an Intel hardware RAID controller, SSDs, and Intel 10 GbE NICs, connected directly (bypassing the switch to rule it out as the source of the problem).

Using the same test config:

ntttcps -m 1,0,192.168.10.1 -l 1048576 -n 100000 -w -a 16

ntttcpr -m 1,0,192.168.10.1 -l 1048576 -rb 2097152 -n 100000 -w -a 16 -fr

I can easily load the network up to 10 Gbps in both directions. That's fine. But when I start using these connections for iSCSI traffic, I cannot load the network above 5 Gbps, and the average latency is unacceptable.

I just read an article about Intel and Microsoft achieving 1,000,000 IOPS at wire speed over iSCSI. Sounds good, but I can't get there. Local storage (RAID 10 SSDs) works excellently: more than 2 Gb/s read and 2 Gb/s write, with latency under 5 ms.

The interesting thing is that whether I use a RAM drive or an image file as the iSCSI target, network performance is the same. A RAM drive should be able to load the network up to 10 Gbps, but I get nothing beyond 5 Gbps, with high latency.


Do you have any suggestions about this? Since I'm using only Intel hardware, everything should be compatible.

I tried Windows 2012 as well as Windows 2008 R2, jumbo frames, the latest drivers, and iSCSI registry tweaks; I was not able to get acceptable performance.

Another interesting thing: when I use the built-in 1 GbE connections as iSCSI paths, I can load them up to 1 Gbps. It seems 1 GbE works better than 10 GbE ;-) Are there any special BIOS settings? I'm currently using the max performance profile in the BIOS.

Thank you.

Craig_P_Intel
Employee

The short answer is that I've been able to get line-rate bidirectional traffic (10Gb Tx / 10Gb Rx simultaneously) with an iSCSI server backed by SSDs on Windows Server 2008 R2. Since you're not achieving line rate, my guess is that you are I/O bound.

RSS should be on by default on a Windows server, but Mark is right that you should check, especially since you are using Windows 7 on the client. Go to the advanced properties of the NIC in Device Manager to check.

The longer answer is the setup of the environment. Since you have the Microsoft iSCSI Software Target set up, go to Server Manager, right-click the Microsoft iSCSI Software Target, and select Properties. Then set the 10Gb adapter to be the only one used for iSCSI.

Typically I wouldn't use NTttcp to test storage; I would use IOmeter, which can be downloaded from http://www.iometer.org/. Here is a link to a paper that can help you set up IOmeter: http://www.intelcloudbuilders.com/docs/icb_ra_cloud_computing_unified_storage_NetApp.pdf

On your attached target, format the volume on the client side rather than using the raw volume. Then set up IOmeter to use a larger block size, maybe 16K or larger, to achieve line rate.

Good luck with your testing.

Craig
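On the Windows Server 2012 side, RSS can also be checked from PowerShell instead of Device Manager. A minimal sketch, assuming the NetAdapter module available on Server 2012 and later; the adapter name "Ethernet 2" is a placeholder for whatever the X520 port is called on your system:

```shell
# List the RSS state and profile for all adapters
Get-NetAdapterRss

# Enable RSS on the 10GbE port if it is reported as disabled
# ("Ethernet 2" is a placeholder adapter name)
Enable-NetAdapterRss -Name "Ethernet 2"
```

The Get-NetAdapterRss output also shows which processors RSS is allowed to use, which is worth comparing against where the interrupt load actually lands during a test.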

Craig_P_Intel
Employee

I would look at updating the Windows 7 machine to a server OS; I've not tested client OSes with 10Gb adapters. I would also try running NTttcp without all the switches (-l, -rb, -n, -fr), and make sure that RSS is enabled. By the way, how many cores do your machines have?
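For the RSS check on the Windows 7 client, where the NetAdapter PowerShell cmdlets are not available, the global TCP/RSS state can be inspected and enabled with netsh (a sketch of the stock commands):

```shell
:: Show global TCP settings, including the "Receive-Side Scaling State" line
netsh int tcp show global

:: Enable RSS globally if it is reported as disabled
netsh int tcp set global rss=enabled
```

Note that this only controls the OS-wide setting; RSS must also be enabled in the adapter's advanced properties in Device Manager for it to take effect.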
