
MFSYS25 / Modular Server - Super slow RAID-Subsystem

idata
Employee

Hi!

We have two different MFSYS25 systems with the same problem!

I don't know if it's by design, but the transfer speeds of the RAID controller totally SUCK!

I've tested it with 3 X25-M drives in RAID 0 (with the new SATA-to-SAS converter):

max Read: 80 MB/s

max Write: 80 MB/s

The same drives on my workstation with a P35 chipset reach an astonishing speed:

max Read: 720 MB/s

max Write: 220 MB/s

The same happens with three 2.5" 10k HDDs: they only reach 80 MB/s, and that's too slow for our applications!

So where exactly is the problem? Is there a limitation somewhere? Is there a workaround, or do I have to live with a server that's slower than a normal 2.5" notebook drive!?

best regards

10 Replies
Daniel_O_Intel
Employee

Do the X25 drives have any cache settings you can play around with?

idata
Employee

Yes, I've played around with these already!

And they changed NOTHING.

For example, on Windows 7 I have to disable caching and re-enable it to get the full speed, but there I get at least 220 MB/s.

Richard_A_Intel1
Employee

Septic

What firmware version are you running?

What are you using to test the disk performance?

Are your OS's installed on the same 3 disks where you are testing the performance? Was this the same setup as on the workstation?

Can you click on the SCM on the back of the chassis and then look at the battery tab to ensure that the battery state is normal?

Can you also click on storage on the left hand side under settings and change Cache to enabled then click Save.

Certainly your read/write performance does not sound good - but this is not a known problem with the system, so it must be related to either your system or your configuration.

idata
Employee

What firmware version are you running?

Current Build Version: 5.0.100.20090928.19055

What are you using to test the disk performance?

HD Tune

Are your OS's installed on the same 3 disks where you are testing the performance?

Yes, Windows Server 2008 was installed

Was this the same setup as on the workstation?

Same setup, 3 SSDs in RAID0, Windows Server 2008, Drivers installed

Can you click on the SCM on the back of the chassis and then look at the battery tab to ensure that the battery state is normal.

Battery capacity is normal

Can you also click on storage on the left hand side under settings and change Cache to enabled then click Save.

HDD Write Back Cache: Enabled - was already activated

The System is currently configured with 3 Seagate ST9146802SS with a Single RAID5 drive on them.

HD Tune Test: http://notebooks.ccs.at/HDTune_MFS_seagate.png

According to Seagate's product specification, ONE of these drives should have a sustained transfer rate between 55 and 89 MB/s.
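For context, here is a back-of-the-envelope model (my own sketch, not an Intel tool) of what an ideal array built from these drives should manage. Real controllers add overhead, so treat these as upper bounds; the 55-89 MB/s per-drive figures come from the Seagate spec quoted above.

```python
# Ideal large-block sequential throughput of a small array,
# assuming perfect striping with no controller overhead.

def raid_seq_throughput(per_drive_mb_s, drives, level):
    """Ideal sequential read throughput for simple RAID levels."""
    if level == 0:        # striping: every drive contributes
        return per_drive_mb_s * drives
    if level == 5:        # one drive's worth of each stripe is parity
        return per_drive_mb_s * (drives - 1)
    raise ValueError("unsupported RAID level")

# Three ST9146802SS drives at the spec's low and high ends:
print(raid_seq_throughput(55, 3, 5))   # 110 (RAID 5, spec low end)
print(raid_seq_throughput(89, 3, 5))   # 178 (RAID 5, spec high end)
```

Even at the spec's low end, an ideal 3-drive RAID 5 should stream well above the ~80 MB/s observed - which is the point of the comparison.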

I also made a test with a Seagate 1.5 TB 7200 rpm drive, which is connected over external SAS (miniSAS, or whatever it is called, from a SAS expander).

HD Tune Test: http://notebooks.ccs.at/HDTune_Expander_seagate.png

Richard_A_Intel1
Employee

We will try to replicate this and report our findings. Hopefully we can come back to you by Wednesday.

Richard_A_Intel1
Employee

Hi Septic

What we have discovered is that HD Tune uses a command queue depth of 1 and fixed 64k sequential blocks. We don't feel this is a good tool for testing a SAN environment (multiple servers connected to a shared storage subsystem), which is what we have in the Modular Server.

IOMeter is a better benchmarking tool for shared storage environments and more accurately represents the performance capabilities of the Modular Server. Please could you download this tool and increase the queue depth. You can also look at random workloads with this tool to get a better feel for real-world performance.
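To illustrate why queue depth matters so much here, a quick sketch based on Little's law (throughput = outstanding I/Os divided by per-I/O latency). The 0.8 ms latency figure is my own illustrative assumption, not a measured value from this system:

```python
# Little's law sketch: a shared RAID controller adds per-I/O latency,
# and a deeper queue hides that latency by keeping I/Os in flight.
# 0.8 ms per 64 KB I/O below is an assumed, illustrative number.

def throughput_mb_s(queue_depth, latency_ms, block_kb=64):
    """Sequential throughput if queue_depth I/Os are always in flight."""
    ios_per_sec = queue_depth / (latency_ms / 1000.0)
    return ios_per_sec * block_kb / 1024.0

for qd in (1, 4, 16):
    print(qd, round(throughput_mb_s(qd, 0.8), 1))
# At queue depth 1 this caps out near 78 MB/s - close to the ~80 MB/s
# HD Tune reported - while deeper queues would instead hit the link limit.
```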

Each SAS controller on the compute module runs at 3 Gb/s, which translates to approx. 300 MB/s of potential bandwidth. We can certainly achieve these kinds of speeds with enough disks and real-world simulated workloads.
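The "3 Gb/s translates to approx. 300 MB/s" figure checks out once you account for 8b/10b line encoding (an encoding detail not stated in the thread, but standard for 3 Gb/s SAS, where each payload byte costs 10 line bits):

```python
# SAS 1.0 runs at 3.0 Gb/s on the wire with 8b/10b encoding,
# so every payload byte costs 10 line bits rather than 8.
line_rate_bps = 3.0e9
payload_mb_s = line_rate_bps / 10 / 1e6  # 10 line bits per byte
print(payload_mb_s)  # 300.0
```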

Thanks

Rich

idata
Employee

Hi Richard.

Regarding HD Tune you were right; I now used ATTO Disk Benchmark.

Still, there is a problem with the RAID subsystem.

I used a single first-generation X25-M 80 GB SSD and reformatted it with NTFS.

Workstation:

Intel E8400
Gigabyte P35-DQ6
Intel ICH9 RAID controller activated
8 GB DDR2 RAM
Windows 7 x64 Ultimate
System drive: 2 × Intel SSD X25-M second generation, RAID 0

Server:

2 × Intel Xeon E5420
Intel Compute Module 5000
MFS RAID controller
16 GB FB-DIMM
Windows Server 2008 Enterprise
System drive: 3 × Seagate 10k 146 GB drives, RAID 5

Benchmarks:

with ATTO Disk Benchmark

I ran them with a queue depth of 1, 2, 4 and 8.

Workstation:

http://notebooks.ccs.at/Workstation_Queue_1.png

http://notebooks.ccs.at/Workstation_Queue_2.png

http://notebooks.ccs.at/Workstation_Queue_4.png

http://notebooks.ccs.at/Workstation_Queue_8.png

Server over SAS-Expander:

http://notebooks.ccs.at/ServerExp_Queue_1.png

http://notebooks.ccs.at/ServerExp_Queue_2.png

http://notebooks.ccs.at/ServerExp_Queue_3.png

http://notebooks.ccs.at/ServerExp_Queue_4.png

Server: (Configured as Single RAID0)

http://notebooks.ccs.at/Server_Queue_1.png

http://notebooks.ccs.at/Server_Queue_2.png

http://notebooks.ccs.at/Server_Queue_3.png

http://notebooks.ccs.at/Server_Queue_4.png

 

As you can see, there is a significant difference when comparing these.

Is it intended that a RAID controller costing more than $1000 is slower than a standard consumer onboard RAID controller?

By the way: is there a way to reduce the fan speeds instead of increasing them when a power connection fails?

Richard_A_Intel1
Employee

Hi Septic

The difference that I can see is not too significant. It is also important to remember that a single server with direct-attach disks has lower latencies than a SAN environment and will in many cases perform better. However, this environment is not scalable, which is why people move to SAN. Our system is optimized for multiple-server deployments, not single-server deployments. There is also a significant cost advantage, since you are virtualizing the hard drives and sharing both RAID controller resources and disk resources. To fully consume the disks in a SAN environment, you must use multiple servers, and you will then see the performance of the disk subsystem scale. Our storage subsystem can handle more than 1000 MB/s of data transfer using SSD drives for both reads and writes, which is more than adequate for SMB deployments.

Richard_A_Intel1
Employee

No, there is no way to reduce fan speeds if a power supply fails. In the event of a power supply failure, the system will spin up the fans to compensate for the extra cooling required (since there are fans in the power supplies themselves), and the system also goes into an error condition, which itself ramps up the fans to indicate a hardware failure.

idata
Employee

Hi

Regarding fan speed: why do the fans spin up if JUST the cable to the power supply fails? The power supply can still run its own fans (even if they could run slower, since there is less heat), and the other power supply, which now carries a higher load, could spin up its fans if necessary. But if all fans spin up, the power consumption increases by 200 watts or more. That's just counterproductive if you think of a UPS - still thinking of a small server configuration!!

Regarding RAID: I believe you that the system performs that fast with many disks, but why does the performance have to be poor with just 1 disk and 1 server?

[Table (garbled and truncated in the original post): percentage by which the server is slower than the workstation, with rows for block sizes 0.5 to 16 and columns for queue depths 1-4; most values fall between roughly -32% and -287%.]
