Parallel efficiency = T1 / (TN * NP), where T1 is the runtime on one processor, TN is the runtime on N processors, and NP is the number of processors. For example, an application that takes 100 seconds to execute in serial but only 10 seconds to execute in parallel on 10 processors achieves 100% parallel efficiency. An application that takes 100 seconds to execute in serial but 50 seconds to execute in parallel on 10 processors achieves only 20% parallel efficiency.
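The formula and both examples above can be sketched in a few lines of Python (the function name here is just for illustration):

```python
def parallel_efficiency(t1, tn, np_procs):
    """Parallel efficiency E = T1 / (TN * NP), returned as a fraction
    where 1.0 corresponds to 100% efficiency."""
    return t1 / (tn * np_procs)

# The two examples from the text:
# 100 s serial vs. 10 s on 10 processors -> 100% efficiency
print(parallel_efficiency(100, 10, 10))   # 1.0
# 100 s serial vs. 50 s on 10 processors -> 20% efficiency
print(parallel_efficiency(100, 50, 10))   # 0.2
```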
Linear speed-up and scale-up alone are inadequate.
In some cases, these two measures are enough. Sure, the pencil pushers and accountants are going to be looking at price-performance. The scientists who use the equipment, however, will be looking only at performance. In my experience, if you can provide a machine that allows their application to run twice as fast (or with a data set twice the size) when you double the number of processors, you will be their hero, provided they have applications or data sets that can fill the larger machine, which really isn't as uncommon as it may sound.
So, CAS may be envious of the Thunder efficiency and look to upgrade their hardware to achieve a similar measure, but only if the system's users actually need the better efficiency. If the system is already serving the needs of the clients at CAS, there is no reason to invest resources in improving an efficiency that is already quite high.