Is it correct that, when operating in hyper-threading mode, the cache is split equally between the two threads? If so, what would happen to a memory buffer whose size is greater than half the cache size and is shared between two application threads, each running on one logical processor? Would it fit entirely into the cache?
The cache is not split evenly between logical processors. There is not necessarily any problem with threads sharing a memory buffer in L2 cache, provided that one thread doesn't write into the same pair of cache lines that the other is busy reading or writing. Various strategies have been taken with the L1 cache.
"When HyperThreading is active, the L1 cache (and several other resources on the processor) is evenly divided between the two threads. These divided resources are dedicated to each logical processor. A logical processor may only use that half of the L1 cache that is allocated to it. It cannot use any part of the cache that has been allocated to the other logical processor."
In light of this quote, do you want to refine what you said earlier about data sharing in the L1 cache?
I'm not certain that all HyperThreading systems are set up in the way described. I believe that the strategy of splitting L1 was adopted to alleviate 64K aliasing on the models which had that problem. It would appear to interfere with the ability of the two threads to read from the same cache line, but it makes the nature of the false-aliasing problem clearer. Before a thread can bring a cache line that the other thread has modified into its own segment of L1, the line must be flushed out to L2 and the resulting L1 cache miss satisfied from there. On such models, the basic penalty for an L1 miss which hits in L2 is not very large, so L2 still gives the two threads the ability to share data, provided that false aliasing is avoided. As cache-line reads are coupled in pairs (at least by default) by the adjacent-sector prefetch scheme, avoiding adverse interaction may require the two threads to work at least 128 bytes apart in address. If your question goes deeper into hardware issues than this, I'd better bow out.
The early HyperThreading CPUs mapped all address regions with the same address, mod 64K, to the same physical segment of cache. This could produce frequent cache evictions even without approaching cache capacity. You could read about recommendations to pad the stack differently for each thread, to avoid having the stack of each thread evict the stack of the other from cache, since the default in an important OS was to start the stacks at 1MB intervals. Recent steppings avoid such conflicts unless there is an address conflict mod 4M.