I hope those who are currently using either the X25-M or X25-E with any of the validated RAID controllers could share their experience here.
So far I have only seen some products from Adaptec with official validation of both the X25-M & X25-E. Please update this thread and I will try my best to maintain the list for the benefit of everybody. I personally need some help on this for my virtualization project, since I can't segregate the SSDs for specific uses or set up a hybrid combination for different purposes. All the virtual machines are just a bunch of files, so they are very easy to maintain/migrate around should the host fail. So, everything must run on the SSDs! *except the backups
List of validated RAID cards with X25-M & X25-E:
1. Adaptec RAID 51245
2. Adaptec RAID 51645
3. Adaptec RAID 52445
4. Adaptec RAID 5405
5. Adaptec RAID 5445
6. Adaptec RAID 5805
7. Adaptec RAID 5085
Just curious, is the Intel IOP348 @ 1200MHz the best RAID processor on the market? Is Adaptec also using it? If not, are Adaptec's processors better than Intel's?
Sorry, I cannot contribute much here in the way of 'validation' of hardware. I did notice you mention virtualization, so I thought I'd just mention the FusionIO products. They offer extremely high-speed PCI-e based I/O with redundancy built-in, specifically engineered to make virtualization super-fast. The cost associated with multiple SSDs and the appropriate RAID controller might be more than getting one, or even two, FusionIO cards. Take a look at their product, too, if you haven't already.
Also, if you haven't already, do some research on the fastest RAID *chipsets* instead of the whole device. That might point you in new directions. For example, what did they use in the "Battleship MTRON" tests? (http://www.nextlevelhardware.com/storage/battleship/)
Hope this helps you, and thanks for listing the products you did find on Adaptec!
I can't find any validation on the Areca website. Areca is cheaper than Adaptec. I am not keen to look at non-Intel SSDs as they don't have a full 3-year warranty at an affordable price. It is also not nice to talk about rival products in the Intel support community. Do let me know if you can find the validation document from Areca for both the X25-E & X25-M.
I did notice the FusionIO product. The problem is, I am in Singapore and I have no clue who is selling them here. We are constantly talking to Texas Memory Systems; they also have very reliable PCIe-based SSD products, but they cost as much as USD18K.
It costs me about USD300 odd for an X25-M 80GB. Having 10 of them gives me 700GB of effective storage through RAID 0 at USD3000, with potentially more than 2GB/s read speed and 700MB/s write speed. May I know how much an ioDrive Duo 640GB costs? Can it be made a primary boot drive? If the cost is way cheaper than USD3000 + the Adaptec RAID card, I think I have to seriously consider using FusionIO.
PCIe SSDs may be faster, but bear in mind the following concerns:
1. PCIe cards cannot easily be handed down to end users' desktops after 3 years of service in the server environment.
2. PCIe may not be able to scale up in small increments. Adding 1-2 X25-Ms simply increases capacity and performance; this kind of flexibility can't be found in a PCIe SSD like the ioDrive Duo.
Pull your handbrake, OJ. What is your definition of "Ruling All!!!"???
The IOP348 1200MHz chipset from Intel has an internal bandwidth of 12GB/s. This would only be saturated by about 175pcs of X25-M from the write-speed perspective (in theory). I can't seem to find much info about the LSI 1078 chip used in the Intel SRCSASJV RAID card. But for sure, it is validated with the X25-E ONLY.....
This is not looking good; for sure it doesn't rule at all, but is chasing behind all the vendors adopting the IOP348.
The X25-E is very fast (170MB/s write speed compared to 70MB/s on the X25-M), but that means you only need a very small number of X25-Es to saturate a particular RAID controller. Take the IOP348 1200MHz for example: it takes about 72pcs of X25-E to saturate the RAID controller in terms of write speed. What does that mean?
You'll be paying a premium, for up to 72pcs of X25-E 32GB, to have only about 2TB of effective storage, and you'll see no more performance improvement when you scale beyond 72pcs of X25-E.....
With the same amount of money, you can probably get 144pcs of X25-M 80GB, for up to 10TB of effective storage (5x what you'd get from the X25-E 32GBs), and still see linear performance scaling all the way to 175pcs before the IOP348 1200MHz shows no more performance increase.
The above assumption is just based on my personal understanding of SSD & RAID chip performance so far. Do correct me if I'm wrong.
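For what it's worth, the back-of-envelope math above can be sketched in a few lines. The 12GB/s internal bandwidth and per-drive write speeds are the figures quoted in this thread, not official specs, and integer division lands a couple of drives below the rounded counts used above:

```python
# Rough saturation math for an IOP348 @ 1200MHz RAID chip.
# All figures are the ones quoted in this thread, not official specs.
IOP348_BW_MBPS = 12_000  # claimed internal bandwidth: 12 GB/s

drives = {
    "X25-E 32GB": {"write_mbps": 170, "capacity_gb": 32},
    "X25-M 80GB": {"write_mbps": 70,  "capacity_gb": 80},
}

for name, d in drives.items():
    n = IOP348_BW_MBPS // d["write_mbps"]  # drives needed to saturate on writes
    total_tb = n * d["capacity_gb"] / 1000
    print(f"{name}: ~{n} drives to saturate writes, ~{total_tb:.1f} TB total")
```

Same conclusion either way: the X25-E saturates the controller at roughly a third of the drive count, with far less capacity to show for it.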
So are you still going to stick with the X25-E, or forget about the limited 100K erase/write cycles of the X25-M, which you'd probably never be able to hit within the 36-month warranty period?
I really do not understand the following facts from Intel:
1. The IOP348 1200MHz was never part of the Intel RAID card product line, which instead uses all the slower LSI-based chips.
2. Intel didn't validate the X25-M alongside the X25-E for many of the Intel RAID controllers or Intel servers.
Was it done on purpose? So that third-party OEMs like Adaptec can start selling IOP348-based products and make calculating people like me start chasing the X25-M?
But one thing is for sure.....it doesn't matter whether you buy IOP348-based RAID products from a third party or Intel RAID controllers; it doesn't matter whether you opt for the X25-E instead of the X25-M because you're so over-worried that the drive may just refuse to write anything to its cells all of a sudden......
Intel still wins......lol!
Thanks for starting this topic; I guess many of us will find it really useful.
We have a similar interest in our company, as we're planning to build a server with a FusionIO card. It would be a database server, so we don't need too much space (like you), but the greater the IOPS the better. FusionIO cards are maybe not-so-good at bandwidth, but if you need IOPS, they're ideal (at their price level - Texas Memory, which you mentioned, is way more expensive). You're right, it cannot boot (yet) and you can't share the old pieces among user computers.
So I would advise you to go with SSDs and not with a PCIe card, as you have virtual machines and you probably need bandwidth more than IOPS.
As for RAID cards... we would use two/four Intel X25-Es for the system, and we're not sure whether to use a RAID card or not. But if we do, it will definitely be an Adaptec card.
Actually I am still quite confused between the 2 important terms: bandwidth vs IOPS. May I know why the FusionIO card doesn't have high bandwidth but does have high IOPS? I saw IBM also got FusionIO to OEM their PCIe SSDs recently: http://www-03.ibm.com/systems/storage/disk/ssd/ssd_adapters.html but the price will surely be sky-high compared to buying direct from FusionIO.
When using the X25-M or X25-E Intel SSDs, are the bandwidth and IOPS closely dependent on the RAID controller itself?
Actually the FusionIO drive has very good bandwidth compared to HDDs (700/600 MB/s r/w) - that's the old version; the new ioDrive Duo is even better (1500/1000).
I guess their card contains some memory chips (like SSDs have) integrated so they can achieve this performance.
Anyway, as I understand it, you need higher IOPS if your system makes many calls to the storage. (While HDDs can achieve a few hundred IOPS (multiplied in SANs), the X25-E may reach 35000/3300 r/w IOPS, and the IO card can do 100000 IOPS.) You need high bandwidth if you have many file operations (mostly big files).
Since we have one 2-3 GB database file, and our business system reads from/writes into that file every time users do something, we'd better go for the higher-IOPS solution.
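To make the distinction concrete, here's a rough sketch of how an IOPS number translates into throughput at a small block size. The IOPS figures are the ones quoted above (illustrative, not official specs):

```python
# IOPS vs bandwidth: throughput = IOPS x block size, so a workload made of
# small random accesses is IOPS-bound rather than bandwidth-bound.
def throughput_mbps(iops: float, block_kb: float) -> float:
    """MB/s delivered if every I/O is `block_kb` kilobytes."""
    return iops * block_kb / 1024

# Figures quoted in this thread:
print(throughput_mbps(35_000, 4))   # X25-E random read at 4KB: ~137 MB/s
print(throughput_mbps(300, 4))      # a fast HDD at 4KB: ~1.2 MB/s
print(throughput_mbps(100_000, 4))  # the IO card at 4KB: ~391 MB/s
```

Notice the drive never touches its 250MB/s sequential spec at 4KB random blocks, which is why a database workload cares about the IOPS column.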
Thanks for the IBM link. A month ago I got a reply from them and they said they can't give me a price for the product for Europe, and it hasn't changed by today :-/ By the way, HP has an IO solution too: http://www.tomshardware.com/news/HP-fusion-io-SSD,7198.html
As for the controller... I guess every controller has a limit (both bandwidth and IOPS) that it can handle. The FusionIO card is placed into a PCIe slot, so it doesn't depend on a RAID card. I don't know how many SSDs it takes to saturate a RAID card; maybe a little googling could help us out.
akhhu, any idea how much the Fusion-io drive costs?
What you mentioned are just the paper bandwidth and IOPS figures. Is there a way (application or device) for us to benchmark our application for the bandwidth and IOPS it's actually consuming?
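On the measurement question: on Windows, perfmon's PhysicalDisk counters (Disk Reads/sec, Disk Read Bytes/sec) show what your own application actually pulls from the disk. As a rough illustration of what such a probe does, the sketch below times 4KB reads at random offsets in a scratch file - a hypothetical toy, and note the OS cache will inflate the numbers badly, so purpose-built tools like Iometer that drive the raw device are the real answer:

```python
import os, random, tempfile, time

# Crude random-read probe: time 4KB reads at random offsets in a scratch file.
# This mostly hits the OS cache, so treat it as a sketch, not a benchmark.
BLOCK = 4096
FILE_BYTES = 64 * 1024 * 1024  # 64MB scratch file

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_BYTES))
    path = f.name

fd = os.open(path, os.O_RDONLY)
reads = 2000
start = time.perf_counter()
for _ in range(reads):
    offset = random.randrange(FILE_BYTES // BLOCK) * BLOCK
    os.lseek(fd, offset, os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(path)

print(f"~{reads / elapsed:.0f} random 4KB reads/sec")
```

The honest answer, though, is the one already in this thread: replay your own application against the storage and watch the counters, rather than trusting synthetic numbers.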
IBM is going to give me the price soon. However, I dunno if they are using the identical Fusion-io drive or different firmware on it, and I dunno how much IBM would mark up such an "OEM" product......
Fusion-io, Texas Memory, or even consumer-grade products like the PhotoFast Monster - they all rely on an onboard RAID controller to hold the SSDs. The only difference from an Adaptec RAID card + many X25-Ms is......there is no SATA or SAS interface acting as another layer of bottleneck. So the onboard RAID controller will still be the bottleneck; it will never saturate the PCIe bandwidth.....
So which part does IOPS depend on? The RAID controller? Or the SSD controller?
The price starts from around $3,000 for the cheapest 80GB version, unless the price has changed.
Anyway I don't think the Fusion-io drive is what you are looking for:
not bootable, and soon no longer the fastest.
In my opinion you can equal its performance with 8 X25-Es.
jeff_rys, that's quite old info...80GB for USD3K. Yup, can't boot......very sickening....
Why aren't you guys looking at the X25-M? The read speed is the same; only the write speed is cut by half compared to the X25-E. For 200GB, I'd have to buy 7 X25-Es but only 3 X25-Ms....and for the price of 7 X25-Es, I can probably buy 14 X25-Ms.....
Maybe, tingshen, but it is my personal opinion.
I know some time ago the prices were on the DVn....website. Right now it's probably between $3000 and $4000....
Well, I consider 70MB/s write to be less than half of 170MB/s.
Also, with SLC you can let PerfectDisk run all the time.
You can write as many times as you like.
Your speed will go down with SLC, but maybe by 10%.
MLC speed will probably drop by 30%.
Your reads and writes of small files will be faster, faster and faster.
Tingshen, since you asked about the Fusion drives.....you must be interested in speed.
True, you can buy 14 M's for the price of 7 E's and even have more GB, but for the price of the Fusion you can buy 10 X25-Es.
The problem one has with PCIe disks:
most come with 1 year of warranty. If it breaks, well, your troubles begin.
Some companies give a 5-year warranty; Intel gives 3 years.
So if you have 7-8 Intels and one dies after the warranty, you replace that one or carry on with what is left.
But with PCIe you do not need to buy a controller.
Hi jeff_rys, actually something that always holds me back from using PCIe cards is the recycling and breakdown issues.....we can't recycle a PCIe card to end users' PCs after warranty (and usually it comes with a 1-yr warranty only), and if the card breaks down, the turnaround time will be a big headache, unless you buy 1 more for redundancy.
I am looking for performance, or what you call "speed". However, it's the READ performance (especially RANDOM, though I'm not sure about virtual machines as they're big blocks of files) that matters to me. Write performance, on the other hand, is no big deal; even if MLC dropped 30%, it's still very fast....bear in mind that this ratio only applies to sequential writes; when it comes to random writes, the gap gets closer.....
Now look back at your data & applications. Whatever benchmark programmes are out there are just general simulations which don't reflect actual usage at all. You should use your very own application to benchmark whatever setup instead of taking everything the general benchmark programmes give you. Let's take business intelligence as an example: if it's SQL2008-based, users will only execute writes during cube write-back. That is extremely small data, and you won't be expecting hundreds of users doing such activity at the same time. The key is still READ, where both the X25-M & X25-E have the same 250MB/s spec.
If you look at an HR/payroll system, it's still very READ-heavy with ad-hoc, light write operations. So why is WRITE performance so important to you? And to everybody out there? I am keen to know....perhaps there are some applications that are write-heavy and so crucial that you always have to get the "best"....
By the way, any idea what the maximum bandwidth of PCIe x8 is? 4000MB/s or 8000MB/s? Adaptec's (Intel IOP348 1200MHz) capability is 250,000 I/Os and 1.2GB/s based on the info given on their website. Does that mean it will be saturated by 5 Intel SSDs in terms of READ & 17pcs of X25-M in terms of WRITE?
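On the PCIe x8 question: per direction it's roughly 2GB/s for PCIe 1.x (250MB/s usable per lane) and 4GB/s for PCIe 2.0 (500MB/s per lane); the 8000MB/s figure is the 2.0 number counting both directions. A quick sketch of the saturation math, using the drive figures quoted in this thread:

```python
# Approximate per-direction PCIe payload bandwidth: ~250 MB/s per lane for
# gen1, ~500 MB/s per lane for gen2 (ignoring protocol overhead).
def pcie_bw_mbps(lanes: int, gen: int) -> int:
    return lanes * {1: 250, 2: 500}[gen]

print(pcie_bw_mbps(8, 1))  # x8 gen1: 2000 MB/s per direction
print(pcie_bw_mbps(8, 2))  # x8 gen2: 4000 MB/s per direction

# Drives needed to saturate the quoted 1.2 GB/s controller figure:
print(1200 / 250)  # sequential read at 250 MB/s each -> ~5 drives
print(1200 / 70)   # X25-M sequential write at 70 MB/s each -> ~17 drives
```

So yes, on those quoted numbers the 1.2GB/s ceiling arrives at roughly 5 drives for reads and 17 X25-Ms for writes, well before the PCIe x8 slot itself is the limit.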
Well, as said, SLC drives live longer (10 times).
Maybe MLC drives are OK, but I already have a RAID 0 with 6 128GB drives (MLC).
Right now I am not interested in buying more MLCs.
On the other hand, at the end of the year some big changes could be released.
Look at Intel/Micron with their 34nm technology.
I guess I will wait further.
True, PCIe cards with only a 1-year warranty are not so good.
I think even if MLC reads are fast, SLC will still perform better, especially at smaller file sizes.
Many GB are not needed for me, so it's nice to have some 80GB Intels, but each drive costs almost as much as a 32GB SLC.
I was wondering if anyone had any answers to the original question?
I currently have an Areca 1680ix (which is based on IOP348 @1200Mhz) with 4GB of cache. I don't have any SSDs yet. This card performs very well, with a few caveats (e.g. RAID 1 rebuilds are inexplicably slow). But, so long as the write cache isn't overrun, random and sequential writes go as fast as the PCIe x8 connection allows. With this much cache, I'm pretty sure that most of the time the slower write performance of the X25-M wouldn't be an issue.
Areca do not list the X25-E or the X25-M on their SATA compatibility sheet (http://www.areca.us//support/download/RaidCards/Documents/Hardware/HDDCompatibilityList.zip) - but they also miss a lot of drives, and don't list any SAS drives (not sure if that means they are all supposed to work!).
Yet they do have a performance report on using Intel SSDs with their 1231ML (SATA-only) controller.
Interestingly, Areca recommend their SATA-only controllers if you only have SATA drives, as the IOP348 does SATA by emulation.
LSI's latest firmware has some SSD-specific features, but I can't find a compatibility chart for them.
I am looking for reliability as well as performance. I will be using the SSDs in RAID 1, and possibly RAID 0 if reliability is good enough (I will be mirroring data across arrays, so it would effectively be a mirrored RAID 0).
I'm worried about SSDs dropping out of volume sets (which is common with some SATA drives like the WD VelociRaptor); I get this on both Areca and Dell (LSI) Perc6i with SATA drives. Is the X25-M/E firmware really RAID-optimised?
So has anyone got a RAID controller that they would recommend for the X25 (E or M)? I'd be happy with 1GB/sec throughput (sequential read) as that's the limit of 10GbE iSCSI (the servers are iSCSI targets), but I need a RAID controller and SSD that work reliably together. So ideally someone with a few months' trouble-free experience in a high-I/O environment. I wish there were SAS SSDs (apart from STEC!).
aitor_ibarra, may I know what kind of SAN software you are going to use?
4-8 X25-Ms are more than enough to go beyond iSCSI's specs in terms of bandwidth. The X25-E is just overkill.......unless you have a SAS X25-E and a newer generation of RAID controller with twice the speed of the IOP348. Otherwise, you're just wasting your money. The X25-E is good as a single drive or acting as a cache.
I am very happy with my Adaptec 5805Z with 8pcs of X25-M 160GB. I just need to find the best stripe size to optimize the performance of these SSDs. Due to the G2 recall, the G1 price shot up like mad.......the result of a huge shortage.
The highest I saw was like 3GB/s recorded in perfmon under W2K8 R2 x64. Pretty stable in the 1800MB/s range. Write is close to 1000MB/s with the cache enabled.
I'm considering bonding 8x 1Gbps RJ45 to the iSCSI SAN switch, coz a 10Gbps iSCSI switch is simply too expensive......
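For a rough line-rate comparison of the two options (this ignores TCP/iSCSI overhead, and note that with link aggregation a single iSCSI session typically rides one physical 1Gbps link, so you'd need MPIO across the ports to actually spread the load):

```python
# Line-rate ceilings in MB/s (bits -> bytes, no protocol overhead).
def gbps_to_mbps(gbps: float) -> float:
    return gbps * 1000 / 8

print(gbps_to_mbps(8 * 1.0))  # 8x 1GbE bonded: ~1000 MB/s aggregate
print(gbps_to_mbps(10.0))     # single 10GbE: ~1250 MB/s, usable by one flow
```

The aggregates look similar on paper; the practical difference is that 10GbE gives the full rate to a single initiator-target session.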
I'm going to be using Starwind. This is one of the few affordable options that supports persistent reservations, which is necessary to run Windows 2008 clustering. I'm currently using it in production as the storage for a Hyper-V cluster. Works very well. In tests I was able to max out 10GbE when using a RAM disk and presenting it as an iSCSI target. One tip: isolate the iSCSI traffic in a VLAN or on distinct physical switches, make sure you can trust everything connected to it, and turn off Windows Firewall on both initiator and target machines (just for your iSCSI network!). Windows Firewall doesn't dent performance very much at 1Gbit, but at 10GbE you won't achieve max throughput or IOPS without turning it off.
Can you say how long you've been running them off the Adaptec 5805Z, whether that's been 24/7, and whether there have been any problems in that time? I'm covering all my bases - the new server will have 7 PCIe 8x slots so I can have multiple RAID cards and NICs; I will have an Areca 1680ix in there so no problems pulling over existing RAID volumes, but I could add another RAID controller if necessary. It also has LSI SAS 2 on the motherboard, but no cache for that.
If you are considering 8x1Gbps and bonding them, you may find it cheaper to go 10GbE depending on how many endpoints are on your network. I have Dell PC6224 switches. They are Layer 3 capable so quite expensive for gigabit switches, but the cost of adding 10GbE (CX4) modules to them is very low, and each switch can have 4 CX4 ports. Also, Intel quad-port 1GbE cards are a similar price to dual CX4. You can also get CX4 cables quite cheaply now.
Aitor, just curious, why Starwind, the Windows-based SAN software? Why not Open-E? Or other non-Windows-based SAN software? Having another Windows on top pushes the cost up at least twice.....although we are on SELECT.
We are using 2U Nehalem servers only (e.g. the R710 from Dell or x3650 M2 from IBM) for both the SAN & Hyper-V R2 hosts. 2U servers only have 4 PCIe slots, and if you're lucky, you may be able to fit the RAID controller in the hidden PCIe slots meant for the slow LSI cards. Quad-port Intel 1Gbps cards are very affordable and I would probably have just 3 hosts max. Switch-wise I wanted the 6224, but Dell claimed it was not designed for iSCSI in the first place, so they recommended we take 2x 5224 instead, probably for simplicity?! Unless you see the need to expand more ports by stacking up....then Layer 3 is a must....I'm still a newbie in iSCSI.
Do you include your management tools like SCOM & SCVMM on the same iSCSI SAN or on another dedicated heartbeat network?
Well, I cannot guarantee it is super stable, as I am still waiting for my sample VM, with SQL2008 x64 installed, to simulate a 5000-employee payroll calculation, which I hope to complete within 5 mins; the current process takes more than 1hr, but that's on a single-core Xeon, 32-bit with 4GB RAM only, running some dBase kind of program. I would love to test for you if you have any sample VM that could torture the machine.
However, when you're talking about an iSCSI SAN setup, that's a different story compared to a standalone server. Everything counts! Actually I am not so worried about bandwidth; my VMs could be accessed by up to 400 users over 150 locations via VPN. I'm not sure what the impact will be when the DW gets drilled like mad....
The only downside.....SAS cables terminating with SFF-8087 on both ends are **** hard to source. The Adaptec engineer loaned me some cables, but they are very very short, 0.5m, and I am forced to run everything with the case open until I get my 1m SAS cables. The Dell & IBM SAS backplanes seem fully compatible with the 5805Z; otherwise, you gotta use the direct SATA connection, which loses the hot-plug capability. Adaptec has to make it work coz they have already validated them, so why bother? If it doesn't work well, you gotta complain to Adaptec and get them to fix every **** thing! I am not sure about Areca, but it seems meant for unknown DIY/OEM servers with limited support and service. We got 2hr express service from both Dell & IBM. Most importantly, I like the Zero Maintenance concept of the new range of Adaptec cards. You really can forget about the battery and turn on all your write cache without worries (but make sure you got the latest model; the July one got recalled).
OK, I do have to have another Windows license, but as I'm a service provider (I rent virtual machines), I'm on an SPLA license with Microsoft, which means I pay monthly for each CPU (or user, depending on the product). With the new networking features in R2 I could virtualise Starwind, if I have enough RAM and CPU left in the box after running the SAN.
Starwind worked first time with no hassle. Also their support is very, very good. At the time of purchase, Open-E and Openfiler and all the Linux-based iSCSI targets lacked persistent reservations, so they could not be used for Hyper-V clusters. This may have changed by now. Also, Open-E charges by how much storage you have. With Starwind it's just how many servers you run Starwind on. They don't care about how many TB/drives/clients etc.
The main disadvantage of Starwind is that it doesn't have full high availability yet (they are working on that right now). So if I have to take a Starwind server down (e.g. Windows patches, Starwind upgrade, hardware upgrade), then all the VMs running off it need to be shut down too. With HA I will have two servers with automatic, seamless failover between them. It doubles the number of disks I have to buy, but it's still way cheaper and better-performing than going for a hardware box with HA (such as EMC or the Dell MD3000i etc).
You don't need a Layer 3 switch, only Layer 2 with VLAN support, but Layer 3 features seem to be in every 10GbE switch I looked at! If you want to mix iSCSI and other traffic on the same switch, then you really need something with Layer 2 support (so you can do VLANs) - the 5224s qualify, but I didn't see anything in their feature set that makes them better for iSCSI than the 6224. I mean, it's nice that they can prioritize iSCSI traffic, but you can do that on the 6224 with a little config. Once you've got iSCSI traffic in its own VLAN, it will be isolated from the rest of the LAN - use another VLAN for management traffic. With 10GbE, you've got enough bandwidth to put both VLANs on the same NIC, but if you are using 1Gbit/sec I would definitely have them on different ports.
In the UK (where I am), the Intel PRO/1000 PT costs about £300 and the Intel dual 10GbE CX4 NIC costs about £465... so yes, more expensive, but way cheaper per Mbit/sec. So for me, it was definitely a worthwhile investment.
I really like Adaptec's use of a capacitor + flash instead of a battery. But I would like to see a bigger cache! It seems that only Areca hit 4GB. I would love to see much larger caches - I guess the CPUs used on the controllers haven't gone 64-bit yet?
Are you actually getting Dell to support you even though you are using RAID cards and SSDs sourced elsewhere? That's quite impressive...
Thanks for the offer of testing! That won't be necessary, as I'll have new hardware arriving soon; I just wanted to see that Intel SSDs don't have RAID problems like the VelociRaptors and other SATA drives... I guess I will know soon enough.