What is the best RAID stripe size for the X25-E and X25-M?
Will the existing line of Intel SSDs be compatible with Windows 7's TRIM command? If so, when is this likely to happen?
If TRIM is going to be supported, will it also be supported via RAID?
Is there any update on the new 34nm SSD product line? When will specs be available?
Will the 34nm technology use the same controller or will it be a new controller?
I can now answer part of my own question. After extensive testing with IOmeter I have concluded that a 128k stripe size is going to work best for RAID 0 in the vast majority of cases, and certainly for normal OS use. That is based on test results using two different controllers and stripe sizes ranging between 16k and 1024k. (Tests on one controller were limited to 256k due to limitations of the controller.) This involved a lot of work for something that could easily have been explained.
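The stripe-size tradeoff being tested above can be sketched with a toy model (purely illustrative, not IOmeter; the two-drive array and request sizes are assumptions): it counts how many stripe segments a single I/O request touches at different stripe sizes. Small stripes split each request into many per-drive commands, while very large stripes stop spreading a single request across both drives.

```python
# Toy model (not IOmeter): count how many stripe segments a single
# I/O request touches at different stripe sizes on a RAID 0 array.
# More segments per request means more per-drive commands and overhead.

KB = 1024

def segments_touched(request_bytes: int, stripe_bytes: int, offset: int = 0) -> int:
    """Number of stripe segments covered by a request starting at `offset`."""
    first = offset // stripe_bytes
    last = (offset + request_bytes - 1) // stripe_bytes
    return last - first + 1

for stripe_kb in (16, 64, 128, 256, 1024):
    segs = segments_touched(512 * KB, stripe_kb * KB)
    print(f"stripe {stripe_kb:>5}k: a 512k request spans {segs} segment(s)")
```

This says nothing about what the drive does internally; it only shows why per-request overhead grows as the stripe shrinks.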
You are currently dealing with enthusiasts in your new SSD market, who are interested in the technology and want to know as much as possible about it. Why adopt the anonymous corporate attitude to your new and exciting product line? Why the party line of saying nothing about such an exciting product? (Even something as basic as letting people know they should be using AHCI and not IDE mode.) MS seems to have learned that this is not the way to go, with all the fantastic work they have done on Windows 7. Enthusiasts are raving about Windows 7, and that is going to help it launch to the mainstream with maximum impact.
Every now and then it is good to throw your (loyal) dog a bone ;)
So, MS asked SSD manufacturers to work with them on SSD optimisations last November. The TRIM command spec was at rev 6 in 2007.
SSD optimisations, including TRIM, have been in Windows 7 from day one. The RC is publicly available... and no one can say if/when it will be supported in a firmware update?
Am I asking for a trade secret or something by asking if/when TRIM will be supported in the current range of SSDs? I don't understand the stone wall of silence on this subject...
I was wondering the same things.
Unfortunately, I don't think the information will become available until it's available, I know. It sucks.
And as you know, Intel has a policy of not commenting on unreleased products/technologies. I too wish there was more 'wiggle-room'.
I can maybe understand why they don't want to say anything about the new-gen drives, although quite a few clues from Intel already exist, but the current drives are released products and all I am asking is a question about functionality. A good product speaks for itself, and the X25s are an exceptional product... but a little more info for end users would be helpful. Intel don't seem to be so shy when it comes to CPUs.
@ redux and other Intel SSD Enthusiasts
Greetings from the NAND Solutions Group inside Intel!
First, we appreciate your support and enthusiasm for our product. We believe we have the highest-performing and most reliable SSD on the market today.
To address some of your questions:
We appreciate your loyalty and would love to throw each of you a specific "bone". Though we can't release "bones" in a public blog, stay tuned for upcoming product announcements.
Intel NAND Solutions Group
Thanks for the feedback.
Could you maybe explain a bit more about how the X25-E works with RAID 0? I appreciate this is normally an issue primarily related to conventional hard drive use, but obviously SSDs work quite differently to HDDs, and it would be really interesting to understand the issues specific to the X25-E.
For example, I've tried various stripe sizes and anything between 64k and 256k works quite well, but smaller stripe sizes seem to have a really negative impact. Why would that be the case? Do smaller stripe sizes have a detrimental impact on wear levelling and write amplification? I'm not looking for an insight into how the technology works (although that would be nice); I'm just trying to understand how to use the product to its best ability.
Good luck with the 34nm technology. I've heard that it will be quite a feat of engineering and I'm really looking forward to its release.
PS... does "stay tuned for upcoming product announcements" mean no TRIM support for existing product lines?
I'm not an expert on SSDs any more than most of you, but I do know that internally, SSDs use flash chips, not platters like HDDs. These chips use fixed-size addressable pages to read/write data. SSDs must read/write very specific page sizes, somewhat like sectors on HDDs, but not exactly. For example, if you write only 2K of data, the SSD may actually need to read a 128K page of data, make changes to the data in RAM/cache, erase the original 128K page, then re-write the full 128K. This might be why you see gains at specific stripe sizes. I cannot speak as to what page sizes are optimal for the Intel SSDs, but perhaps you've discovered it. If your striping is smaller (say, 4K), then the drive must repeat this procedure multiple times for the same 128K page, because you've taken perhaps 64K of data and broken it into 8 write operations. Does this make sense?
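The read-modify-write cycle described above can be put into numbers with a simplified model (the 128K page figure is the one quoted in this thread, not a confirmed Intel spec, and real controllers combine writes in ways this ignores):

```python
import math

# Simplified read-modify-write model. PAGE is the 128K figure discussed
# in this thread, not a confirmed spec; real drives combine writes.
PAGE = 128 * 1024

def write_amplification(host_bytes: int, chunk_bytes: int) -> float:
    """Bytes physically rewritten divided by bytes the host asked to write,
    assuming each chunk triggers a full page read-erase-rewrite."""
    chunks = math.ceil(host_bytes / chunk_bytes)
    return (chunks * PAGE) / host_bytes

# Writing 64K as a single chunk vs sixteen 4K chunks:
print(write_amplification(64 * 1024, 64 * 1024))  # -> 2.0
print(write_amplification(64 * 1024, 4 * 1024))   # -> 32.0
```

Under this worst-case assumption, shrinking the chunk size from 64K to 4K multiplies the physical writes sixteen-fold for the same host data.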
I have two X25-Es on a 5405. Your ATTO benchmark is seriously off. I can hit 500MB/s with my setup. You can find reviews linked below, which should give you a better idea of what you should be able to achieve.
Did you get to see any official validation document from Areca regarding their range of products with the X25-E and X25-M? I can easily find one on Adaptec's website, but not Areca's.
I read that the X25s have a 128k erase block; however, there is also a statement from Winslow (Intel) that says "We've figured out a way to program and erase just what is necessary (in small bytes rather than large blocks) so our drive has tremendous efficiency for the life of the computing environment." Maybe that is a reference to the 128k erase block size rather than the 512k erase block size on other SSDs, or maybe less than 128k gets erased. Only Intel knows.
Write combining may also be a factor, and the cache on hardware RAID may be a factor too.
Interestingly, in two of the reviews I linked above a 64k stripe was used.
I've also read someone on another forum stating that a 16k stripe worked best for them using two X25-Ms, but I find that hard to believe.
If the erase block size is 128k, it would seem logical that a small stripe size (on a full drive) would be a lot slower whilst also significantly increasing the erase count.
From my own testing with IOmeter this seemed to be the case, as small stripe sizes decreased speed but increased CPU usage (i.e. more work for less benefit).
Considering the potential that the wrong stripe size might be significantly increasing erase counts, it's a shame Intel can't help shed a little light on this issue.
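The erase-count worry above can be put as worst-case arithmetic (assuming, purely for illustration, a 128k erase block and no write combining in the drive, neither of which Intel has confirmed):

```python
import math

ERASE_BLOCK = 128 * 1024  # the figure discussed in this thread; unconfirmed

def worst_case_erases(host_bytes: int, stripe_bytes: int) -> int:
    """Worst case: every stripe-sized segment written triggers an erase of
    the 128k block it lands in (i.e. zero write combining in the drive)."""
    return math.ceil(host_bytes / stripe_bytes)

MB = 1024 * 1024
for kb in (16, 64, 128, 256):
    print(f"{kb:>3}k stripe: {worst_case_erases(1 * MB, kb * 1024)} erases per 1MB written")
```

In reality the controller almost certainly combines writes, so these numbers are an upper bound; the point is only that the bound grows as the stripe shrinks.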
I think we can speculate all day and all night about how fast a particular piece of hardware is going to be. We can argue up and down about whether or not a 'validated' product is better than one which is not. We can arm-wrestle over the numbers in a PDF file to see whether they're factual or not. However, it's always going to be a crapshoot until someone sits down and produces accurate and consistent test results.
First of all, which software benchmarking products out there are actually "validated" to give accurate test results with any specific SSD drives, or specific RAID controllers, or specific I/O drivers? Which ones produce tests which are truly realistic to server, hardware, or application expectations? Which ones take advantage of special or unique command sets and settings? What third-party testing has "validated" such claims of said piece of software?
Secondly, how can anybody know what exact configuration is going to work with a specific combination of hardware, until it is flat-out tested and recorded with an agreed upon set of tests and settings? For example, how does anybody know that the "page size" of a specific SSD drive will even line-up with the "stripe size" of a specific RAID controller? Just because an SSD writes in 128K doesn't mean a RAID controller will stripe it at the exact same addressing space -- does it, or does it not? Perhaps this is why we see so many different results from different folks.
Third, even if we know the "theoretical" speed of a RAID processor, who's to say there's no other bottleneck or bottlenecks? Perhaps the motherboard will cap the speed. Perhaps the testing software cannot accurately measure or enumerate the speeds. Perhaps the CPU will choke, or the RAM will explode, or the cables will leak out radiation. Perhaps super-powerful solar flares from outer space will cause an I/O controller to burst into flames!
The fact is, if we really, honestly, truly must have every piece of copper in our configuration pushed to the maximum limits of electrical resistance, maybe we ought to look the entire project over again. If the application seriously requires 12 GB/s transfer speeds, then maybe taking this "cheap" route of SSD/RAID on a single controller isn't the best way to go. Step back, look at what the honest-to-God requirements are, and determine just how realistic those requirements are considering the budget, and whether it'll just be wasted on the carbon-based organisms connecting to it. Seriously, how many applications out there fully take advantage of systems which can read or write terabytes of data in minutes? You'd have to be a monster of a company to need that kind of throughput, with hundreds if not thousands of users, running dozens of simultaneous data-spewing and data-chewing applications. Anybody raising their hand? Do tell.
Back to the point -- No matter how much research we do, we'll still need to thoroughly test, test again, and yet test one more time, all possible configurations from multiple angles, rinse, repeat. There are many more factors to consider in real-world use than simply "validation" or "GHz" or "ioMeter" or "stripe size."
One of those factors will be, inevitably: Is it good enough?
I understand some people's desire to benchmark record-breaking speeds on their systems, but honestly, 90% of the reviews and reports I read on the Internet about various specific products have any number of holes in their testing procedures or platforms because, quite frankly, few people are aware of all the facts.
I wish us all luck in our quest, and if you happen to find that Holy Grail, please do share with the rest of the world!
I agree with what you are saying. Obviously a RAID 0 stripe size would normally be set up for a specific usage pattern, so there would not be a right or wrong answer, but SSDs have rewritten the rules. I have tested extensively with IOmeter, but as you state, maybe that is not telling the right story.
To elaborate further on my interest in this issue... I only use RAID 0 for capacity, due to the small drive capacities currently available, and my guess is that a lot of other people are in the same position. It would be nice to know that I am not inadvertently incurring more erase counts than necessary, especially if it also incurs a performance penalty at the same time.
In real-life application testing, RAID 0 for desktop use does not seem to offer an advantage (i.e. boot-up times, application opening, games, etc.). In benchmarks it seems that hardware RAID adds a small latency penalty, and it also seems that hardware RAID has not caught up with the optimisations made possible by the different way SSDs work in comparison to HDDs... a case in point being the extremely quick ramp-up speed of the Intel drives, which does not seem to be utilised in hardware RAID.
We can only guess and only Intel can shed light.
Hi redux, the link you mentioned didn't say a single word about "validation". Look at page 12 of Adaptec's document at: http://www.adaptec.com/NR/rdonlyres/C466A497-0E83-4B11-B80F-C751B155BEBF/0/4_ARCSASCompatibilityRepo... It's clearly stated that both the Intel X25-E and X25-M models are supported.
zulishk, "official validation" is important to me, because I am going to use it for my enterprise virtualization implementation. Any "unvalidated" hardware combination, regardless of its performance, will not be allowed in my environment. This is important because if, for some reason, the combination refuses to work, you have somebody to blame, or rather somebody to push (e.g. Adaptec) to make sure it's up and running as normal, should it be a manufacturer's fault.
I do agree with you that all those benchmarks are not a clear indication of actual production usage. People go to great lengths just to produce their so-called "best results". But these are reference figures which we have to take into serious consideration, given that we can't buy each and every piece of hardware on the market and test it ourselves. So the indicative figures do carry a certain weight in our final procurement decision. However, I guess most people may go just a bit too extreme and get over-worried about under-provisioning instead of over-provisioning, lol! That reminds me of the old days when we played a lot with overclocking and tried to push everything to its limit, out of spec.
At the end of the day, it's the individual application's benchmark that counts. Nevertheless, as an application person, there is definitely room for improvement when you don't see good results on decent hardware. Perhaps your query is not optimized? Perhaps your vendors are just too lazy to code in the most optimized way? So long as there is one setting that fits all, not necessarily giving the best results but moderately meeting expectations, I guess I am fine with it.
Even stripe size could still be application-specific. But when it comes to virtualization, every VM is just a bunch of potentially very big files. Does that point to a particular stripe size at the hypervisor level? My guess is that we should strictly follow the smallest block size of the SSD design. Some say 512KB, some say 128KB, but I have also heard this interesting formula: 4K x number of drives x 2 x internal RAID level within an SSD, which makes things even more complicated! Perhaps I shall go and ask Adaptec what the correct size is when we scale up with more drives, if I were to buy from them. You may end up having to rebuild the whole disk array whenever you need to add more spindles...
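For what it's worth, the rumoured formula quoted above works out like this (the formula is hearsay from this thread, not a documented sizing rule, and both example values are hypothetical):

```python
# Rumoured stripe-size formula from the post above (hearsay, not a
# documented rule): 4K x number of drives x 2 x internal RAID level.

def rumoured_stripe_kb(num_drives: int, internal_raid_level: int) -> int:
    return 4 * num_drives * 2 * internal_raid_level

# Example: 8 drives with an assumed internal factor of 2 (both hypothetical):
print(rumoured_stripe_kb(8, 2))  # -> 128 (KB)
```

Notably, with those assumed values the formula happens to land on the 128k figure discussed earlier in the thread, which may be why it circulates.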
If it's a pure native application server, I think that really depends on how your application runs... is it creating many small files, or always writing in big blocks? This is not my problem from now on, because any new system has to be virtualized in my environment to leverage the HUGE savings in Microsoft licensing. It's no longer just "grab the best hardware out there if cost is not a concern" but finding the best configuration to squeeze the server so that I don't pay for an extra processor licence for SQL Server... your single dual-processor server probably costs a small fraction of that...
If you're on the DIY SAN path, the safest way is to have 2 separate RAID 0 disk array SAN units and mirror them (potentially RAID 10) in real time.
Do correct me if I got a wrong perspective.....
You seem to be looking for 'validated' products, which are not validated by any third-parties, but only the manufacturer of one device. There's nothing "official" about that, because that one company can't test every combination of hardware or software, such as chipsets, CPUs, motherboards, drivers, etc. So, even if they say "Yes, this drive works with our card," it doesn't mean it'll work in every brand of server or every patched or virtualized version of an operating system, nor at optimized performance. So, again I ponder, what exactly are the requirements (not desires) of the application? (This is a rhetorical question -- please don't answer it here.)
(On a side note, I do hope you are not sacrificing redundancy for lower-cost licensing. Virtualization does have its downfalls.)
You might begin a new topic here titled, "Has anybody done virtualization with the Intel SSDs?" and ask for people to post their configurations and satisfaction with the results. Just a suggestion!
zulishk, let me tell you why the answer will most probably be no...
Virtualization in a SAN environment can be extremely expensive. Dell told me that around 500GB of SSD SAN would easily cost USD 66K. Texas Memory Systems' MLC SAN costs about USD 88K for 2TB. I doubt anybody is ready for that... unless budget is not a concern. So that gave us the idea of building our own redundant SANs with RAID 0 on quite a number of X25-Ms (2 DIY SANs mirroring each other to give RAID 10-style "redundancy"). That solves the licensing issue very nicely, as I can easily scale up a system if necessary. I'll take your suggestion and open up this interesting topic separately...
Well, "validated hardware" does give a certain level of assurance. I am sure you don't want to sacrifice "validation" for better performance on a potentially risky hardware combination, right? So "validation" is still important. What I do not quite understand is why Intel didn't want to validate the X25-M along with the X25-E at the same time...
Anyway, I am more or less ready to get hold of a dual X5550 box with 6x4GB DDR3-1333MHz from Dell or Intel, with an Adaptec RAID card, and start playing with 6-8 X25-Ms at various stripe sizes on Hyper-V R2. This will just be a test setup to benchmark my own VMs with SQL 2008 applications. Till then, I can share more about my findings, which will be the basis for building the whole virtualization setup.
Well, I am new on the board, but I'd like to join in.
I currently have 6 OCZ Core V1s on an Adaptec 5805 (yes, with the horrible JMicron controller), but thanks to the Adaptec the RAID 0 works well.
I'm looking to buy 4 x X25-E to put in RAID 0, so I'm looking here for extra info.
My thought was that no matter what SSD you use right now, the controller always needs to erase 512KB after a delete, before writing to it.
The 128KB block is maybe for the future, but not now, I think. Even if Windows 7 deletes only 4KB or 128KB, the SSD will (for the moment) use 512KB. Windows cannot interfere with the hardware of the drive, right?
I was hoping that Intel would provide a TRIM program one could run, let's say once a week, but apparently this is not so.
Difficult to buy right now, when an improved product may arrive shortly...