Beginner

IMS upgrade? or add rack servers?


We currently have an MFSYS25 with 3 compute modules and 14 disks. We are planning to update this site with 3 new servers. We are trying to decide whether to add a second SCM and a directly attached VTrak E310sD plus 3 additional compute modules, or 3 separate servers attached to the VTrak. Our main concern is disk speed (performance). Is the direct-attach approach a good solution? Any experience?


Accepted Solutions
Employee

I don't see any performance numbers comparing the two. But if all your hard drives are already in use, it makes sense to not try dumping three more servers on top of them.

Is hard drive access your current bottleneck?

3 Replies
Beginner

Thanks for your reply. I am having the same problem finding comparison info (disk I/O performance). We are adding servers to the IMS plus a VTrak E-series array, and disk contention has been a problem in the past, but I think the external direct-attached storage will solve the problem. I don't know if anyone has tried the Promise J310sD in place of the E310sD; that would be interesting info.

Community Manager

We have a few fully populated modular servers, and based on the testing we have done, disk performance depends a lot on how you have pooled your disks and, of course, on the disk I/O intensity of the applications you run on top of it.

There is great benefit in putting all your drives in one big pool and building RAID 10 VDs for the systems that need the disk I/O, but this can have a negative impact if you have multiple disk-I/O-heavy systems. In that case it might be better to have smaller pools, which give each system predictable disk access, but at a capped performance. (We went for one big pool.)
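The trade-off described above can be sketched with some back-of-envelope arithmetic. The numbers here are purely illustrative assumptions (150 random IOPS per spindle, 12 data disks, 3 hosts), not measurements from an MFSYS25 or VTrak:

```python
# Hypothetical comparison: one big shared RAID 10 pool versus one
# small dedicated RAID 10 pool per host. All figures are assumed,
# not measured.

DISK_IOPS = 150        # assumed random IOPS for a single spindle
HOSTS = 3
DISKS = 12             # disks available for data

def raid10_read_iops(n_disks, per_disk=DISK_IOPS):
    """In RAID 10, random reads can be served by every spindle."""
    return n_disks * per_disk

# One big pool: high peak, but shared, so a busy neighbour
# can eat into your share.
big_pool_peak = raid10_read_iops(DISKS)            # peak when others idle
big_pool_fair_share = big_pool_peak // HOSTS       # under full contention

# Three small pools: each host gets a predictable but capped slice.
small_pool_cap = raid10_read_iops(DISKS // HOSTS)

print(big_pool_peak, big_pool_fair_share, small_pool_cap)  # → 1800 600 600
```

The point the numbers make: the big pool's peak is only available when the other hosts are idle; under contention each host falls back to roughly its fair share, which equals the small pool's cap. The big pool is effectively a bet that the hosts don't all get busy at once.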

We've also seen cases where the OS's access to the disk is the actual bottleneck: splitting the applications onto another compute module improved the disk I/O performance of the OS. So it looks like the controller in the modular server can handle quite a bit of load.

As for the Promise VTrak: I've just got my first one in the lab and am busy testing it for a client. The best advice there is to load up your systems and do lots of Iometer testing. Test local and external storage to compare, and run simultaneous tests to simulate more than one system doing heavy I/O.
