Thank you, Jim Dempsey.
Recently I have been studying load-balancing algorithms for multi-core architectures.
It is well known that static load-balancing algorithms are poorly suited to fine-grained scheduling.
Static load balancing also cannot fully exploit thread-level parallelism or achieve a high degree of balance on a multi-core architecture; in particular, it cannot adjust the load distribution to the specific environment when the system load varies.
I want to study heuristic algorithms that can dynamically adjust the load according to the actual runtime environment, so that the load is balanced across the cores.
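To make the idea concrete, here is a minimal sketch of the kind of heuristic I mean (my own toy illustration, not kernel code): a greedy rule that always gives the next task to the currently least-loaded core, taking the largest tasks first (the LPT rule). The task costs and core counts are made-up parameters.

```python
import heapq

def assign_tasks(task_costs, num_cores):
    """Greedy heuristic: give each task to the currently least-loaded core.

    task_costs: list of estimated task costs (arbitrary units).
    Returns the resulting total load on each core.
    """
    # Min-heap of (current_load, core_id) so the least-loaded core pops first.
    heap = [(0, core) for core in range(num_cores)]
    heapq.heapify(heap)
    loads = [0] * num_cores
    for cost in sorted(task_costs, reverse=True):  # largest tasks first (LPT)
        load, core = heapq.heappop(heap)
        loads[core] = load + cost
        heapq.heappush(heap, (loads[core], core))
    return loads
```

For example, `assign_tasks([5, 3, 3, 2, 2, 1], 2)` spreads a total cost of 16 evenly as 8 per core. A truly dynamic balancer would re-run a rule like this (or migrate tasks) as loads change, rather than deciding once up front.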
I hope everyone can give me some advice.
Thank you for reading.
Thank you, Robert Reed.
Your suggestions are very helpful for my further study of kernel scheduling.
I have been studying cache footprint these days and am trying to find some way to characterize it. Maybe I am a little fuzzy about the concept of cache footprint.
My understanding of cache footprint is as follows:
First, is cache footprint the number of accesses to cache entries (TLB or L1 data cache)?
For example, assume we access object B. We must first fetch the pointer to B stored in object A.
If A and B are on the same page, we can find the pointer to B directly through the TLB, so the access count is 1; if not, the count is 2.
If A and B are in the same aligned region, we can get the same result as above by accessing only the L1 data cache rather than the TLB.
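Here is how I would try to characterize it in a toy sketch: count the distinct cache lines a sequence of accesses touches. The 64-byte line size and all the addresses below are assumptions I made up for illustration; the real granularity (line vs. page vs. TLB entry) is exactly what I am unsure about.

```python
LINE_SIZE = 64  # assumed cache-line size in bytes

def cache_lines_touched(addresses, line_size=LINE_SIZE):
    """Count the distinct cache lines touched by a list of byte addresses.

    A smaller count means a smaller cache footprint for this access sequence.
    """
    return len({addr // line_size for addr in addresses})

# Pointer-chasing example: read the pointer field in A, then read B.
addr_A_field = 0x1000   # hypothetical address of A's pointer field
addr_B_near  = 0x1008   # B placed in the same cache line as A
addr_B_far   = 0x9000   # B placed in a different cache line
```

With these made-up addresses, `cache_lines_touched([addr_A_field, addr_B_near])` gives 1 and `cache_lines_touched([addr_A_field, addr_B_far])` gives 2, matching the "1 vs. 2" counting in my example above.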
Does a smaller cache footprint during task execution imply a higher cache hit ratio?
Does a smaller cache footprint imply less traffic between the cache and main memory?
Does a smaller cache footprint imply that the task executes faster?
The above is my understanding of cache footprint; please correct any errors.
I still have some questions that confuse me; please give me some explanation.
First, about the sentence: "processes with small cache footprint might be scheduled together to share the available cache while another process that is a cache hog would get its own, private HW thread; the idea can reduce cache thrashing between processes."
What does "cache hog" mean?
Does it mean cache-hot, or a large cache footprint, or something else?
Could you give an example of the above? I don't exactly understand it.
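To show what I think that sentence means, here is my guess as a toy sketch. All the names, footprint values, and the "more than half the cache" threshold are assumptions I invented; I would like to know whether this is the right picture.

```python
def schedule_by_footprint(footprints, cache_size):
    """Toy co-scheduling heuristic based on estimated cache footprints.

    footprints: dict of process name -> estimated footprint (same units
    as cache_size). A process whose footprint exceeds half the cache is
    treated as a "cache hog" and runs alone on a private HW thread; the
    remaining processes are paired (smallest with largest) to share a
    cache, as long as the pair's combined footprint fits.
    """
    hogs = [p for p, f in footprints.items() if f > cache_size / 2]
    small = sorted((p for p in footprints if p not in hogs),
                   key=lambda p: footprints[p])
    groups = [[h] for h in hogs]  # each hog gets its own HW thread
    while small:
        pair = [small.pop(0)]
        # Pair with the largest remaining process that still fits alongside.
        if small and footprints[pair[0]] + footprints[small[-1]] <= cache_size:
            pair.append(small.pop())
        groups.append(pair)
    return groups
```

For example, with footprints {"A": 1, "B": 2, "C": 6, "D": 3} and a cache of size 8, C is the hog and runs alone, while A and D share and B runs in its own group. Is this roughly the idea behind avoiding cache thrashing between processes?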
Thank you for reading, and I look forward to your reply.