What's the best RAID controller that's validated with X25-E & X25-M?

idata
Esteemed Contributor III

I hope those who are currently using either the X25-M or X25-E with any of the validated RAID controllers could share their experience here.

So far I have only seen some products from Adaptec with official validation of both the X25-M & X25-E. Please update this thread and I will try my best to maintain the list for the benefit of everybody. I personally need some help on this for my virtualization project, because I can't segregate the SSDs for their optimum use or set up a hybrid combination for different purposes. All the virtual machines are just a bunch of files, so they are very easy to maintain/migrate around should the host fail. So everything must run on the SSDs! *except the backups

List of validated RAID cards with X25-M & X25-E:

1. Adaptec RAID 51245
2. Adaptec RAID 51645
3. Adaptec RAID 52445
4. Adaptec RAID 5405
5. Adaptec RAID 5445
6. Adaptec RAID 5805
7. Adaptec RAID 5085

Just curious, is the Intel IOP348 @ 1200MHz the best RAID processor on the market? Is Adaptec also using it? If not, are Adaptec's processors better than Intel's?

19 REPLIES

idata
Esteemed Contributor III

Hi,

I was wondering if anyone had any answers to the original question?

I currently have an Areca 1680ix (which is based on the IOP348 @ 1200MHz) with 4GB of cache. I don't have any SSDs yet. This card performs very well, with a few caveats (e.g. RAID 1 rebuilds are inexplicably slow). But so long as the write cache isn't overrun, random and sequential writes go as fast as the PCIe x8 connection allows. With this much cache, I'm pretty sure that most of the time the slower write performance of the X25-M wouldn't be an issue.
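
As a quick sanity check on "as fast as the PCIe x8 connection allows", here is the rough ceiling for that slot (a sketch assuming the card uses first-generation PCIe, as IOP348-based boards of that era did):

# Payload ceiling of a PCIe 1.x x8 slot: 2.5 GT/s per lane, 8b/10b encoding.
lanes, gt_per_lane, encoding = 8, 2.5, 8 / 10
ceiling_gb = lanes * gt_per_lane * encoding / 8   # GB/s per direction
print(f"PCIe x8 gen1 ceiling: ~{ceiling_gb:.1f} GB/s per direction")
# -> ~2.0 GB/s before packet overhead; ~1.6-1.8 GB/s is a realistic maximum.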

Areca do not list the X25-E or the X25-M on their SATA compatibility sheet (http://www.areca.us//support/download/RaidCards/Documents/Hardware/HDDCompatibilityList.zip) - but they also miss a lot of drives, and don't list any SAS drives (not sure if that means they are all supposed to work!).

Yet they do have a performance report on using Intel SSDs with their 1231ML (SATA-only) controller:

http://www.areca.us/support/download/RaidCards/Documents/Performance/ARC1231ML_5_Intel_SDD_HDD.zip

Interestingly, Areca recommend their SATA-only controllers if you only have SATA drives, as the IOP348 does SATA by emulation.

LSI's latest firmware has some SSD-specific features, but I can't find a compatibility chart for them.

I am looking for reliability as well as performance. I will be using the SSDs in RAID 1, and possibly RAID 0 if reliability is good enough (I will be mirroring data across arrays, so it would effectively be a mirrored RAID 0).

I'm worried about SSDs dropping out of volume sets (which is common with some SATA drives like the WD VelociRaptor); I get this on both the Areca and the Dell (LSI) Perc 6/i with SATA drives. Is the X25-M/E firmware really RAID optimised?

So has anyone got a RAID controller that they would recommend for the X25 (E or M)? I'd be happy with 1GB/sec throughput (sequential read), as that's the limit of 10GbE iSCSI (the servers are iSCSI targets), but I need a RAID controller and SSD that work reliably together - ideally someone with a few months' trouble-free experience in a high-I/O environment. I wish there were SAS SSDs (apart from STEC!).
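
For reference, a rough back-of-the-envelope on that 1GB/sec figure, as a small Python sketch (the overhead factor is an assumption, not a measurement):

# Usable iSCSI throughput on a 10GbE link (rough estimate).
link_gbps = 10.0
raw_gbytes = link_gbps / 8                # 1.25 GB/s on the wire
overhead = 0.08                           # assumed Ethernet/IP/TCP/iSCSI framing cost
usable = raw_gbytes * (1 - overhead)
print(f"raw {raw_gbytes:.2f} GB/s, usable ~{usable:.2f} GB/s")
# -> raw 1.25 GB/s, usable ~1.15 GB/s; real deployments typically land
#    around 1.0-1.1 GB/s, hence the ~1GB/sec target above.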

cheers,

Aitor

idata
Esteemed Contributor III

aitor_ibarra, may I know what kind of SAN software you are going to use?

4-8 X25-Ms are more than enough to go beyond iSCSI's bandwidth limits. The X25-E is just overkill... unless you have a SAS X25-E and a newer generation of RAID controller with twice the speed of the IOP348; otherwise, you're just wasting your money. The X25-E is good as a single drive or acting as a cache.

I am very happy with my Adaptec 5805Z with 8pcs of X25-M 160GB. I just need to find the best stripe size to optimize the performance of these SSDs. Due to the G2 recall, G1 prices shot up like mad as a result of the huge shortage.
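
One way to narrow down the stripe size is to measure sequential reads at a range of transfer sizes against the array. A minimal Linux/Python sketch, assuming the volume appears as /dev/sdb (a hypothetical path - point it at your own test volume; it needs root, and O_DIRECT keeps the page cache from masking the controller):

import mmap
import os
import time

DEVICE = "/dev/sdb"                  # hypothetical: your RAID volume or a big test file
TOTAL = 1 << 30                      # read 1 GiB at each transfer size
SIZES = [64 << 10, 128 << 10, 256 << 10, 512 << 10, 1 << 20]

def bench(bs):
    """Sequential-read MB/s at transfer size bs, bypassing the page cache."""
    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, bs)      # page-aligned buffer, required by O_DIRECT
        done = 0
        start = time.perf_counter()
        while done < TOTAL:
            if os.preadv(fd, [buf], done) <= 0:
                break
            done += bs
        return done / (time.perf_counter() - start) / 1e6
    finally:
        os.close(fd)

for bs in SIZES:
    print(f"{bs >> 10:>5} KiB: {bench(bs):7.1f} MB/s")

Whichever transfer size the card sustains best is a reasonable starting point for the stripe size, although the controller's cache policy and your real workload matter just as much.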

The highest I saw was around 3GB/s recorded in perfmon under W2K8 R2 x64. It's pretty stable in the 1800MB/s range. Write is close to 1000MB/s with the cache enabled.

I'm considering bonding 8x 1Gbps RJ45 ports to the iSCSI SAN switch, because a 10GbE iSCSI switch is simply too expensive...
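
For what it's worth, a quick calculation of what that bond can carry against the array figures above (the usable fraction is an assumption):

# Bonded 8x 1Gbps vs the ~1800 MB/s the array sustains (rough estimate).
links, gbps_per_link = 8, 1.0
raw_mb = links * gbps_per_link * 1000 / 8       # ~1000 MB/s on the wire
usable_mb = raw_mb * 0.9                        # assumed protocol/bonding overhead
print(f"bond raw ~{raw_mb:.0f} MB/s, usable ~{usable_mb:.0f} MB/s vs array ~1800 MB/s")
# The bond tops out around 1 GB/s in aggregate, and a single iSCSI session
# usually rides one 1Gbps link unless MPIO/round-robin is configured, so the
# network, not the 5805Z + X25-Ms, is the bottleneck either way.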

idata
Esteemed Contributor III

Hi Tingshen,

I'm going to be using Starwind. This is one of the few affordable options that supports persistent reservations, which is necessary to run Windows 2008 clustering. I'm currently using it in production as the storage for a Hyper-V cluster, and it works very well. In tests I was able to max out 10GbE when using a RAM disk and presenting it as an iSCSI target. One tip: isolate the iSCSI traffic in a VLAN or on distinct physical switches, make sure you can trust everything connected to it, and turn off Windows Firewall on both initiator and target machines (just for your iSCSI network!). Windows Firewall doesn't dent performance very much at 1Gbit, but at 10GbE you won't achieve max throughput or IOPS without turning it off.

Can you say how long you've been running them off the Adaptec 5805Z, whether that's been 24/7, and whether there have been any problems in that time? I'm covering all my bases - the new server will have 7 PCIe x8 slots so I can have multiple RAID cards and NICs. I will have an Areca 1680ix in there, so no problems pulling over existing RAID volumes, but I could add another RAID controller if necessary. It also has LSI SAS 2 on the motherboard, but no cache for that.

If you are considering 8x 1Gbps and bonding them, you may find it cheaper to go 10GbE, depending on how many endpoints are on your network. I have Dell PC6224 switches. They are Layer 3 capable, so quite expensive for gigabit switches, but the cost of adding 10GbE (CX4) modules to them is very low, and each switch can take 4 CX4 ports. Also, Intel quad port 1GbE cards are a similar price to dual CX4 cards. You can also get CX4 cables quite cheaply now.

cheers,

Aitor

idata
Esteemed Contributor III

Aitor, just curious, why Starwind, the Windows-based SAN software? Why not Open-E or other non-Windows-based SAN software? Having another Windows on top at least doubles the cost... although we are on SELECT.

We are using 2U Nehalem servers only (e.g. the R710 from Dell or x3650 M2 from IBM) for both the SAN & Hyper-V R2 hosts. 2U servers only have 4 PCIe slots, and if you're lucky, you may be able to fit the RAID controller into the hidden PCIe slot intended for the slow LSI cards. The quad port Intel 1Gbps card is very affordable, and I would probably have just 3 hosts max. Switch-wise I wanted the 6224, but Dell claimed it was not designed for iSCSI in the first place, so they recommended we take 2x 5224 instead, probably for simplicity?! Unless you see the need to expand to more ports by stacking up... then Layer 3 is a must. I'm still a newbie in iSCSI.

Do you include your management tools like SCOM & SCVMM on the same iSCSI SAN, or on another dedicated heartbeat network?

Well, I cannot guarantee it is super stable, as I am still waiting for my sample VM, with SQL 2008 x64 installed, to simulate a 5000-employee payroll calculation that I hope to complete within 5 minutes. The current process takes more than 1 hour, but that's on a single-core Xeon, 32-bit with only 4GB RAM, running on some dBase kind of program. I would love to test for you if you have any sample VM that could torture the machine.

However, when you're talking about an iSCSI SAN setup, that's a different story compared to a standalone server. Everything counts! Actually I am not so worried about bandwidth; my VMs could be accessed by up to 400 users over 150 locations via VPN, and I'm not sure what the impact will be when the DW gets drilled like mad...

The only downside... SAS cable terminated with SFF-8087 on both ends is **** hard to source. The Adaptec engineer loaned me some cables, but they are very, very short (0.5m), and I am forced to run everything with the case open until I get my 1m SAS cables. The Dell & IBM SAS backplanes seem fully compatible with the 5805Z; otherwise, you have to use direct SATA connections, which lose hot-plug capability. Adaptec has to make it work because they have already validated them, so why bother? If it doesn't work well, you can complain to Adaptec and get them to fix every **** thing! I am not sure about Areca, but it seems meant for unknown DIY/OEM servers with limited support and service, whereas we get 2hr express service from both Dell & IBM.

Most importantly, I like the Zero Maintenance concept of the new range of Adaptec cards. You really can forget about the battery and turn on all your write cache without worries (but make sure you get the latest model - the July one got recalled).

idata
Esteemed Contributor III

Starwind:

OK, I have to have another Windows license, but as I'm a service provider (I rent virtual machines), I'm on an SPLA license with Microsoft, which means I pay monthly for each CPU (or user, depending on the product). With the new networking features in R2 I could virtualise Starwind, if I have enough RAM and CPU left in the box after running the SAN.

Starwind worked first time with no hassle, and their support is very, very good. At the time of purchase, Open-E, Openfiler and all the Linux-based iSCSI targets lacked persistent reservations, so they could not be used for Hyper-V clusters. This may have changed by now. Also, Open-E charges by how much storage you have; with Starwind it's just how many servers you run Starwind on - they don't care how many TB/drives/clients etc.

The main disadvantage of Starwind is that it doesn't have full high availability yet (they are working on that right now). So if I have to take a Starwind server down (e.g. Windows patches, Starwind upgrade, hardware upgrade), then all the VMs running off it need to be shut down too. With HA I will have two servers with automatic, seamless failover between them. It doubles the number of disks I have to buy, but it's still way cheaper and better performing than going for a hardware box with HA (such as EMC or a Dell MD3000i etc.).

You don't need a Layer 3 switch, only Layer 2 with VLAN support, but Layer 3 features seem to be in every 10GbE switch I looked at! If you want to mix iSCSI and other traffic on the same switch, then you really need something that can do VLANs - the 5224s qualify, but I didn't see anything in their feature set that makes them better for iSCSI than the 6224. I mean, it's nice that they can prioritize iSCSI traffic, but you can do that on the 6224 with a little config. Once you've got iSCSI traffic in its own VLAN, it will be isolated from the rest of the LAN - use another VLAN for management traffic. With 10GbE, you've got enough bandwidth to put both VLANs on the same NIC, but if you are using 1Gbit/sec I would definitely have them on different ports.

In the UK (where I am), the quad-port Intel PRO/1000 PT costs about £300 and the Intel dual 10GbE CX4 NIC costs about £465... so yes, more expensive, but way cheaper per Mbit/sec. So for me, it was definitely a worthwhile investment.
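
Putting rough numbers on "cheaper per Mbit/sec", using the prices above (and assuming it is the quad-port PT being compared to the dual-port CX4 card):

# Cost per Gbit/s of port capacity, using the UK prices quoted above.
quad_1gbe = {"price_gbp": 300, "gbits": 4 * 1}       # Intel PRO/1000 PT quad port
dual_10gbe = {"price_gbp": 465, "gbits": 2 * 10}     # Intel dual 10GbE CX4
for name, nic in [("quad 1GbE", quad_1gbe), ("dual 10GbE CX4", dual_10gbe)]:
    print(f"{name}: £{nic['price_gbp'] / nic['gbits']:.2f} per Gbit/s")
# -> quad 1GbE: £75.00 per Gbit/s, dual 10GbE CX4: £23.25 per Gbit/s,
#    i.e. roughly 3x cheaper per unit of bandwidth on the 10GbE card.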

I really like Adaptec's use of a capacitor + flash instead of a battery. But I would like to see a bigger cache! It seems that only Areca hits 4GB. I would love to see much larger caches - I guess the CPUs used on the controllers haven't gone 64-bit yet?

Are you actually getting Dell to support you even though you are using RAID cards and SSDs sourced elsewhere? That's quite impressive...

SAS cables: in the UK, try SPAN - http://www.span.com/index.php?cPath=28_1209

Thanks for the offer of testing! That won't be necessary, as I'll have new hardware arriving soon; I just wanted to see that the Intel SSDs don't have the RAID problems that VelociRaptors and other SATA drives do... I guess I will know soon enough.