Various questions regarding SSD

idata
Esteemed Contributor III

What is the best RAID stripe size for the X25-E and X25-M?

Will the existing line of Intel SSDs be compatible with the Windows 7 TRIM command? If so, when is this likely to happen?

If TRIM is going to be supported, will it be supported via RAID?

Is there any update on the new 34nm SSD product line? When will specs be available?

Will the 34nm technology use the same controller or will it be a new controller?

EDIT:

I can now answer part of my own question. After extensive testing with IOmeter I have concluded that a 128k stripe size is going to work best for RAID 0 in the vast majority of cases, and certainly for normal OS use. That is based on test results using two different controllers and stripe sizes ranging between 16k and 1024k. (Tests on one controller were limited to 256k due to limitations of that controller.) This involved a lot of work for something that could easily have been explained.
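For what it's worth, here is a rough Python illustration of the kind of comparison I ran; the throughput figures below are placeholders for the sake of the example, not my actual IOmeter numbers:

# Sketch only: rank RAID 0 stripe sizes by aggregate throughput taken from
# separate IOmeter runs. The values below are made up for illustration.
results_mib_s = {
    "16k": 212.0,
    "64k": 248.0,
    "128k": 261.0,   # hypothetical peak
    "256k": 255.0,
    "1024k": 240.0,
}

for stripe, throughput in sorted(results_mib_s.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stripe:>6}: {throughput:6.1f} MiB/s")

best = max(results_mib_s, key=results_mib_s.get)
print(f"Best stripe size in this hypothetical run: {best}")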

Intel....

you are currently dealing with enthusiasts in your new SSD market, people who are interested in the technology and want to know as much as possible about it. Why take the anonymous corporate attitude to your new and exciting product line? Why the party line of saying nothing about such an exciting product? (Even something as basic as letting people know they should be using AHCI and not IDE mode.) Microsoft seems to have learnt that this is not the way to go, judging by all the fantastic work they have done with Windows 7. Enthusiasts are raving about Windows 7, and that is going to help it launch to the mainstream with maximum impact.

Every now and then it is good to throw your (loyal) dog a bone 😉


45 REPLIES

idata
Esteemed Contributor III

All,

I think we can speculate all day and all night about how fast a particular piece of hardware is going to be. We can argue up and down about whether a 'validated' product is better than one which is not. We can arm-wrestle over the numbers in a PDF file to see whether they're factual or not. However, it's always going to be a crapshoot until someone sits down and produces accurate and consistent test results.

First of all, which software benchmarking products out there are actually "validated" to give accurate test results with any specific SSD drives, or specific RAID controllers, or specific I/O drivers? Which ones produce tests that are truly realistic for servers or hardware or application expectations? Which ones take advantage of special or unique command sets and settings? What third-party testing has "validated" such claims for said piece of software?

Secondly, how can anybody know what exact configuration is going to work with a specific combination of hardware until it is flat-out tested and recorded with an agreed-upon set of tests and settings? For example, how does anybody know that the "page size" of a specific SSD drive will even line up with the "stripe size" of a specific RAID controller? Just because an SSD writes in 128K doesn't mean a RAID controller will stripe it at the exact same addressing space -- does it, or does it not? Perhaps this is why we see so many different results from different folks.
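As a purely illustrative example of that alignment question, here is a small Python sketch; the 4 KiB page and 128 KiB erase-block figures are assumptions for the sake of the example, not published numbers for any particular drive:

# Assumed geometry, for illustration only.
PAGE_SIZE = 4 * 1024         # assumed NAND page size, bytes
ERASE_BLOCK = 128 * 1024     # assumed erase-block size, bytes

def check_alignment(stripe_kib, partition_offset_bytes):
    stripe = stripe_kib * 1024
    stripe_ok = stripe % ERASE_BLOCK == 0 or ERASE_BLOCK % stripe == 0
    offset_ok = partition_offset_bytes % ERASE_BLOCK == 0
    print(f"stripe {stripe_kib}k: "
          f"{'fits the assumed erase block' if stripe_ok else 'straddles erase blocks'}, "
          f"partition offset {partition_offset_bytes} bytes is "
          f"{'aligned' if offset_ok else 'misaligned'}")

# A 128k stripe with the old 63-sector partition offset vs a 1 MiB offset.
check_alignment(128, 63 * 512)
check_alignment(128, 1024 * 1024)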

Third, even if we know the "theoretical" speed of a RAID processor, who's to say there are no other bottlenecks? Perhaps the motherboard will cap the speed. Perhaps the testing software cannot accurately measure or enumerate the speeds. Perhaps the CPU will choke, or the RAM will explode, or the cables will leak radiation. Perhaps super-powerful solar flares from outer space will cause an I/O controller to burst into flames!

The fact is, if we really, honestly, truly must have every piece of copper in our configuration pushed to the maximum limits of electrical resistance, maybe we ought to look at the entire project all over again. If the application seriously requires 12 GB/s transfer speeds, then maybe taking this "cheap" route of SSD/RAID on a single controller isn't the best way to go. Step back, look at what the honest-to-God requirements are, and determine just how realistic those requirements are considering the budget, and whether it'll just be wasted on the carbon-based organisms connecting to it. Seriously, how many applications out there fully take advantage of systems which can read or write terabytes of data in minutes? You'd have to be a monster of a company to need that kind of throughput, with hundreds if not thousands of users running dozens of simultaneous data-spewing and data-chewing applications. Anybody raising their hand? Do tell.
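To put a number on that, here is the back-of-envelope arithmetic, assuming roughly 250 MB/s of sequential read per drive (an assumption on my part) and pretending RAID 0 scales perfectly with no controller, bus, or CPU limit:

import math

# Back-of-envelope only: drives needed to hit 12 GB/s of sequential read,
# assuming ~250 MB/s per drive and perfect RAID 0 scaling (the unrealistic part).
target_mb_s = 12 * 1000
per_drive_mb_s = 250

drives = math.ceil(target_mb_s / per_drive_mb_s)
print(f"~{drives} drives before any real bottleneck is even considered")
# => ~48 drives, far beyond what a single controller can feed.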

Back to the point -- No matter how much research we do, we'll still need to thoroughly test, test again, and yet test one more time, all possible configurations from multiple angles, rinse, repeat. There are many more factors to consider in real-world use than simply "validation" or "GHz" or "ioMeter" or "stripe size."

One of those factors will be, inevitably: Is it good enough?

I understand some people's desire to benchmark record-breaking speeds on their systems, but honestly, 90% of the reviews and reports I read on the Internet about specific products have any number of holes in their testing procedures or platforms because, quite frankly, few people are aware of all the facts.

I wish us all luck in our quest, and if you happen to find that Holy Grail, please do share with the rest of the world!

William

idata
Esteemed Contributor III

I agree with what you are saying. Obviously a RAID 0 stripe size would normally be chosen for a specific usage pattern, so there would not be a right or wrong answer, but SSDs have rewritten the rules. I have tested extensively with IOmeter, but as you say, maybe that is not telling the right story.

To elaborate further on my interest in this issue: I only use RAID 0 for capacity, due to the small drive capacities currently available, and my guess is that a lot of other people are in the same position. It would be nice to know that I am not inadvertently incurring more erase counts than necessary, especially if doing so also incurs a performance penalty at the same time.

In real-life application testing, RAID 0 for desktop use does not seem to offer an advantage (i.e. boot-up times, application opening, games, etc.). In benchmarks it seems that hardware RAID adds a small latency penalty, and it also highlights that hardware RAID has not caught up with the optimisations made possible by the different way SSDs work compared to HDDs... a case in point being the extremely quick ramp-up speed of the Intel drives, which does not seem to be utilised in hardware RAID.

We can only guess, and only Intel can shed light on this.

idata
Esteemed Contributor III

Hi redux, that link you mentioned didn't say a single word about "validation". Look at page 12 of Adaptec's document at http://www.adaptec.com/NR/rdonlyres/C466A497-0E83-4B11-B80F-C751B155BEBF/0/4_ARCSASCompatibilityRepo... and you'll see it clearly stated that both the Intel X25-E and X25-M models are supported.

zulishk, "official validation" is important to me. Coz I am going to use it for my enterprise virtualization implementation. Any "unvalidated" hardware combination, regardless its performance, will not be allowed in my environment. This is important because if for some reason, the combination refuse to work, you have somebody to "blame to" or rather, pushing the manufacturer, eg Adaptec to make sure it's up and running as per normal should it's a manufacturer's fault.

I do agree with you that all those benchmarks are not a clear indication of actual production usage. People chase numbers cluelessly just to produce their so-called "best results". Still, these are reference figures we have to take into serious consideration, given that we can't buy every piece of hardware on the market and test it ourselves, so the indicative figures do carry a certain weight in our final procurement decision. However, I guess most people go a bit too extreme and worry more about under-provisioning than over-provisioning, lol! That reminds me of the old days when we played a lot with overclocking and tried to push everything past its limits, out of spec.

At the end of the day, it's the benchmark of your own individual application that counts. Nevertheless, as an application person, there is definitely room for improvement when you don't see good results on decent hardware. Perhaps your query is not optimized? Perhaps your vendors are just too lazy to code in the most optimized way? As long as there is one setting that fits all, not necessarily giving the best results but moderately meeting expectations, I guess I am fine with it.

Even the stripe size could still be application specific. But when it comes to virtualization, every VM is just a bunch of potentially very big files. Does that point to a particular stripe size at the hypervisor level? My guess is that we should stick strictly to the smallest block size of the SSD design. Some say 512KB, some say 128KB, but I have also heard this interesting formula: 4K x number of drives x 2 x internal RAID level within the SSD, which makes things even more complicated! Perhaps I shall ask Adaptec what the correct size is when scaling up with more drives, if I were to buy from them. You may end up having to reconstruct the whole disk array whenever you need to add more spindles...
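Taking that last formula at face value (all the inputs below are placeholders, since the real per-drive figures are exactly what we don't know), a quick Python sketch shows how fast the suggested stripe grows:

# The quoted formula, evaluated with placeholder inputs:
# stripe = 4 KiB x number of drives x 2 x "internal RAID level within the SSD"
def stripe_size_kib(num_drives, internal_raid_level, base_kib=4):
    return base_kib * num_drives * 2 * internal_raid_level

for drives in (4, 6, 8):
    print(f"{drives} drives -> {stripe_size_kib(drives, internal_raid_level=1)} KiB stripe")
# With an assumed internal level of 1, 8 drives already suggest a 64 KiB stripe,
# so the recommended size would change every time you add spindles.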

If it's a pure native application server, I think that really depends on how your application runs: does it create many small files, or does it always work in big blocks? That is no longer my problem from now on, because any new system has to be virtualized in my environment to leverage the HUGE savings in Microsoft licensing. It's no longer just "grab the best hardware out there if cost is not a concern" but finding the best configuration to squeeze the server so that I don't pay for an extra processor license for SQL Server; a single dual-processor server probably costs a small fraction of that.

If you're on the DIY SAN path, the safest way is to have 2 separate RAID 0 disk-array SAN units and mirror them (effectively RAID 10) in real time.

Do correct me if I have the wrong perspective...

idata
Esteemed Contributor III

Hello tingshen,

You seem to be looking for 'validated' products which are not validated by any third party, only by the manufacturer of one device. There's nothing "official" about that, because that one company can't test every combination of hardware or software, such as chipsets, CPUs, motherboards, drivers, etc. So, even if they say "Yes, this drive works with our card," it doesn't mean it'll work in every brand of server or every patched or virtualized version of an operating system, nor that it will perform optimally. So, again I ponder: what exactly are the requirements (not desires) of the application? (This is a rhetorical question -- please don't answer it here.)

(On a side note, I do hope you are not sacrificing redundancy for lower-cost licensing. Virtualization does have its downfalls.)

You might begin a new topic here titled, "Has anybody done virtualization with the Intel SSDs?" and ask for people to post their configurations and satisfaction with the results. Just a suggestion!

Respectfully,

William

idata
Esteemed Contributor III

zulishk, let me tell you why the answer will most probably be no...

Virtualization in a SAN environment can be extremely expensive. Dell told me that around 500GB of SSD SAN would easily cost USD 66K. Texas Memory Systems' MLC SAN costs about USD 88K for 2TB. I doubt anybody is ready for that, unless budget is not a concern. So that gave us the idea of building our own redundant SANs with RAID 0 across quite a number of X25-Ms (2 DIY SANs mirroring each other to give a RAID 10 kind of "redundancy"). That also solves the licensing issue very nicely, as I can easily scale up a system if necessary. I will take your suggestion and open up this interesting topic separately...
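Just to show the per-GB arithmetic behind those quotes (prices as quoted, capacities taken as plain decimal GB):

# Rough cost-per-GB comparison of the two quotes above.
quotes_usd = {
    "Dell SSD SAN (500 GB)": (66_000, 500),
    "Texas Memory Systems MLC SAN (2 TB)": (88_000, 2000),
}
for name, (price, capacity_gb) in quotes_usd.items():
    print(f"{name}: ${price / capacity_gb:,.0f} per GB")
# => roughly $132/GB vs $44/GB, which is why a DIY X25-M array looks so tempting.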

Well, "validated hardware" does give certain level of assurance. I am sure you dun want to sacrificing "validation" for better performance but potentially risky hardware combination right? So "validation" is still important. What I do not quite understand was why Intel didn't want to validate their X25-M along with X25-E at the same time.....

Anyway, I am more or less pretty ready to get hold of a dual X5550 with 6x4GB DDR3-1333Mhz Dell or Intel box with Adaptec RAID and start playing with 6-8 X25-M with various of stripe size on Hyper-V R2. This will just be a test set up and benchmark my own VMs with SQL2008 applications. Til then I can share more about my findings , which will be the basis of building the whole virtualization set up.