I want to implement CPU-based chargeback on a large consolidated server (16x) running a mix of applications and am uncertain about the implications of hyperthreading. Does HT invalidate the CPU usage metric? Or can it be thought of as inflationary in that there are simply more CPU seconds, but the distribution of real vs virtual CPU seconds is still reasonably uniform across all applications running on the server. Therefore, you just charge less per CPU second on a hyperthread-enabled system.
I have read that virtual CPU seconds are about 25% as productive as real CPU seconds. I'm concerned that well-implemented multi-threaded apps may receive a higher percentage of virtual CPU seconds and be inadvertently penalized in CPU-based chargeback.
Relatively few applications see as much value as you quote from what you call "virtual" CPU seconds (the time charged against second logical processors). If you charge the same rate for "real" and "virtual" CPU seconds, you will be charging extra for loading the system heavily, or automatically giving greater value for running when the system is more lightly loaded and for restricting the number of threads in use.
I don't know of a way to charge differently for virtual and real CPU seconds. The kernel-mode and user-mode CPU counters that the OS tracks for each process don't break down further into which CPUs the process used.
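To make that limitation concrete, here is a minimal sketch (in Python, using only the standard library) of what per-process CPU accounting actually exposes: total user-mode and kernel-mode seconds, with no field indicating which logical processor accrued them.

```python
import os

# The OS-level CPU accounting for a process is just two running
# totals: user-mode time and kernel/system-mode time. Nothing here
# distinguishes seconds run on a "real" first logical processor
# from seconds run on a second ("virtual") one.
t = os.times()
user_seconds = t.user      # time spent in user mode
kernel_seconds = t.system  # time spent in kernel mode

# A CPU-second chargeback can therefore only bill on the sum; it
# cannot discount the seconds that ran on a second logical processor.
billable = user_seconds + kernel_seconds
print(f"user={user_seconds:.3f}s kernel={kernel_seconds:.3f}s billable={billable:.3f}s")
```

The same shape holds on Windows (GetProcessTimes returns kernel and user times only), so any discounting scheme has to be built from system-wide measurements, not per-process counters.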
In general, would you agree that hyperthreading and CPU-based chargeback are at odds with one another?
If you wished to make the effort, you might try to log the number of logical CPUs in use. Charge full rate during time intervals when that doesn't exceed the number of physical CPUs. Otherwise, discount according to the number of additional logical CPUs in use, reaching perhaps a 40% discount when all are in use, or less of a discount if you want a disincentive for running during peak load.
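That rate schedule could be sketched roughly as follows. This is an illustration only; the function name, the linear shape of the discount, and the 40% ceiling are assumptions taken from the suggestion above, not a prescribed policy.

```python
def discounted_rate(base_rate, logical_in_use, physical_cpus,
                    total_logical, max_discount=0.40):
    """Illustrative chargeback rate for one sampling interval.

    Full rate while concurrency fits within the physical CPUs, then
    a linear discount as additional logical CPUs come into use,
    reaching max_discount when every logical CPU is busy.
    All names and the linear ramp are assumptions for illustration.
    """
    extra = max(0, logical_in_use - physical_cpus)   # 2nd logical CPUs in use
    headroom = total_logical - physical_cpus         # 2nd logical CPUs available
    if headroom == 0 or extra == 0:
        return base_rate                             # no HT, or within physical capacity
    return base_rate * (1 - max_discount * extra / headroom)
```

For a 16-way box with hyperthreading (16 physical, 32 logical), this charges the full rate up through 16 logical CPUs in use, 80% of it at 24, and 60% of it when all 32 are busy. Shrinking max_discount is the knob for penalizing peak-load running.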
In general, I would agree that chargeback by CPU seconds was not taken into consideration in the design of hyperthreading, or in its implementation in the OS.
Please excuse my confusion, but I still need further clarification.
I have implemented CPU-based chargeback on a large server running an application mix. If I enable hyperthreading on this server, can I say that, while theoretical inequities may exist, they are very small and can be reasonably ignored? The response to hyperthreading would then be to reduce the billing charge per CPU second by one-half, because the total number of CPU seconds is doubled.
I am uncertain whether different applications running simultaneously on a large server could consume CPU seconds in SIGNIFICANTLY different ways on a hyperthreading system, such that CPU-based chargeback is inequitable. Theoretically, if applications with different threading strategies get a higher proportion of virtual CPU seconds charged to them, then they are effectively overcharged, and CPU-based chargeback is not equitable on a heterogeneous application mix.
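One way to gauge whether the inequity is ignorable is to put hypothetical numbers on it. Assuming the 25% productivity figure quoted earlier for virtual CPU seconds, and a flat billed rate per CPU second, the cost per unit of useful work for two illustrative apps looks like this (all figures are assumptions for the sake of the comparison):

```python
RATE = 1.00            # billed per CPU second, real or virtual (assumption)
VIRTUAL_YIELD = 0.25   # quoted productivity of a 2nd-logical-processor second

def cost_per_unit_work(real_s, virtual_s):
    # Useful work delivered vs. flat-rate bill for those seconds.
    work = real_s + VIRTUAL_YIELD * virtual_s
    bill = RATE * (real_s + virtual_s)
    return bill / work

# App A: lightly threaded, mostly real seconds.
a = cost_per_unit_work(real_s=90, virtual_s=10)
# App B: heavily threaded, half its seconds are virtual.
b = cost_per_unit_work(real_s=50, virtual_s=50)
print(f"A pays {a:.2f} per unit of work, B pays {b:.2f}")
```

Under these assumed numbers, A pays about 1.08 per unit of work while B pays 1.60, roughly a 48% premium for the well-threaded app, which suggests the overcharge is not automatically small enough to ignore when the threading mix is skewed.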
Or have I misconstrued the abstraction of virtual and real CPU seconds? Or perhaps the "overcharge" in CPU seconds is still so small as to not matter.