Server Products
Data Center Products including boards, integrated systems, Intel® Xeon® Processors, and RAID Storage

2.8 Gbyte/sec limitation on RMS25CB080

idata
Employee

Three series of tests on an Intel R2216GZ4GCLX server show a maximum read speed of about 2.8 GByte/s, while the theoretical maximum is 6 GByte/s (6 Gbit/s per port, 8 ports).

Drives: 10× Intel SATA SSD 730, 240 GB (520 MByte/s sequential read)

The mainboard and SAS controller have the latest firmware, and PCIe Gen3 is enabled in the BIOS.

Testing sequential read with 256 KB blocks and 64 outstanding I/Os (Iometer, SQLIO) on Windows Server 2012 R2 with LSI drivers 6.708.7.0, both with 10 drives behind a SAS expander and with 8 drives connected directly from the controller to the backplane.

1 drive - 520 MByte/s, 2 drives - 1025, 3 - 1535, 4 - 2040, 5 - 2550, 6 - 2800; adding more drives does not increase transfer speed.
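For reference, a quick Python tabulation of those measurements against ideal linear scaling at the 520 MB/s per-drive spec figure (just a sketch of the numbers above):

```python
# Measured aggregate sequential read speed (MByte/s) vs. ideal linear
# scaling at 520 MB/s per drive (the SSD 730 spec figure quoted above).
measured = {1: 520, 2: 1025, 3: 1535, 4: 2040, 5: 2550, 6: 2800}

for drives, speed in measured.items():
    ideal = drives * 520
    print(f"{drives} drives: measured {speed:5d} MB/s, "
          f"ideal {ideal:5d} MB/s ({speed / ideal:.0%})")
```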

Any ideas about the bottleneck?

6 Replies
AP16
Valued Contributor III

First, while the LSI 2208 chip is advertised as PCIe 3.0 capable, Intel's ARK page for the RMS25CB080 (http://ark.intel.com/products/60283/Intel-Integrated-RAID-Module-RMS25CB080) lists only Gen2 PCI Express lanes, which gives a 4 GB/s peak. Second, the LSI 2208 block diagram shows a 4.8 GB/s internal bandwidth limit across all of its SAS ports. Finally, take a look at the official LSI benchmarking guide for better testbed setups: http://docs.avagotech.com/docs/12353177 (LSI MegaRAID Controller Benchmark Tips, 1891 KB).
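As a rough sketch of where the 4 GB/s figure comes from (my arithmetic, assuming PCIe Gen2's 5 GT/s per lane with 8b/10b encoding, not an official number):

```python
# Rough peak for a PCIe Gen2 x8 link: 5 GT/s per lane, 8b/10b encoding,
# so ~500 MB/s of payload bandwidth per lane before protocol overhead.
lanes = 8
per_lane_mb = 5_000 * 8 / 10 / 8      # 5 Gbit/s raw -> 500 MB/s per lane
print(f"PCIe Gen2 x{lanes}: ~{lanes * per_lane_mb / 1000:.1f} GB/s peak")  # ~4.0 GB/s
```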

idata
Employee

Thanks, that is a very useful document, but it doesn't explain the 2.8 GByte/s value. Besides, I don't understand the 4.8 GB/s restriction on 8 SAS ports (page 15). If each port transfers 6 Gbit/s, shouldn't 8 × 6 Gbit give 6 GByte/s?

As far as I know, my controllers already support PCIe Gen3.

From the Intel RMS25CB080 datasheet:

Features:

...

x8 PCI Express Generation 3(1) interface for fast communication with the server board

(1) At launch, compatibility will be limited to PCIe Gen2. PCIe Gen3 support is anticipated to be added by August 2012 and will require a module with a manufacture date that follows the addition of PCIe Gen3 support.

AP16
Valued Contributor III

"If each port transfers 6 Gbit/s, shouldn't 8 × 6 Gbit give 6 GByte/s?"

You forgot about signal encoding. 6 Gbit/s is the raw PHY-level rate; at the ATA protocol level you get only 4/5 of it, i.e. 4.8 Gbit/s, or 600 MB/s per port. So eight ports = 4.8 GB/s.

 

In practice, data transfers are even slower, at about 4/5 of the ATA-level rate. The magic number of 520 MB/s in all of Intel's SATA SSD specs comes from that factor: it is the maximum really achievable speed.
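A quick sketch of that arithmetic in Python (the 8b/10b factor and the spec numbers are as quoted above):

```python
# SATA/SAS 6 Gb/s is a raw line rate; 8b/10b encoding leaves 4/5 of it
# at the ATA level, i.e. 4.8 Gbit/s or 600 MB/s per port.
port_mb = 6_000 * 8 / 10 / 8                  # 600 MB/s per 6 Gb/s port
print(f"per port: {port_mb:.0f} MB/s, 8 ports: {8 * port_mb / 1000:.1f} GB/s")

# Real drives land lower still: the SSD 730 spec's 520 MB/s vs the
# 600 MB/s ATA-level ceiling.
print(f"practical per-drive fraction: {520 / port_mb:.2f}")   # ~0.87
```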

Again, follow the LSI benchmark guide to get more speed out of the controller.

idata
Employee

OK, I agree with the 520 MByte/s restriction per port (to a disk). I studied the LSI Benchmark Tips and found that my tests correspond to them completely. Controller settings: No Read Ahead, Write Through, Direct IO. Test setup: 256 KB I/O size, 0% random, 100% read, queue depth 64. The values of 2.5 GByte/s on 5 disks and 2.8 GByte/s on 6 disks show that I am using both links from the controller to the expander; otherwise I would get about 2.4. However, I still don't see the reason for the 2.8 GByte/s limit.
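A quick sketch of that link arithmetic (assuming two x4 wide links at 6 Gb/s between the controller and the expander; that topology is my assumption):

```python
# Assuming two x4 wide SAS links at 6 Gb/s between the RMS25CB080 and
# the expander (check the actual backplane cabling).
lane_mb = 6_000 * 8 / 10 / 8              # 600 MB/s per 6 Gb/s lane
one_x4 = 4 * lane_mb / 1000               # ~2.4 GB/s over a single x4 link
two_x4 = 2 * one_x4                       # ~4.8 GB/s over both links
print(f"one x4 link: {one_x4:.1f} GB/s, two x4 links: {two_x4:.1f} GB/s")
# 2.5-2.8 GB/s is above the single-link ceiling, so both links are in use,
# yet it is still well below 4.0 GB/s (PCIe Gen2 x8) or 4.8 GB/s (two links).
```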

AP16
Valued Contributor III

Well, the only idea left is to open a support ticket with Intel and let them deal with LSI.

idata
Employee

Hello,

We strongly recommend contacting our support group directly for further assistance on this inquiry:

http://www.intel.com/p/en_US/support/contactsupport
