I have read a number of documents on the usage of the processor cache. However, I have a lingering question which has not yet been answered. How does the cache perform with a collection of pointers?
I assume that the collection of pointers will be loaded into the cache, and that each time I access a pointer the cache will be flushed and the destination of the pointer will be loaded into the cache. Is this correct?
So I have a collection of pointers pointing to objects elsewhere in memory. If I have a concurrent_vector of pointers, I assume the pointers will be loaded onto their own cache lines. However, the objects they point to have been allocated with the scalable_allocator, so I assume those objects live elsewhere in memory, and perhaps not on the same cache lines as each other. It seems to me that the first potential performance improvement would be to place the allocated objects themselves into a concurrent_vector so that they sit close together in memory when I use many of them together. It also seems that I can conserve cache space by referring to these objects by their int position in the concurrent_vector rather than by pointer (with 64-bit pointers, each reference would take half the space).
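As a rough sketch of that index-based layout (the Node type and its fields below are made up; only tbb::concurrent_vector and the general idea come from the question):

#include <cstddef>
#include <cstdint>
#include <tbb/concurrent_vector.h>

// Hypothetical payload type standing in for whatever the pointers currently target.
struct Node
{
    double value;
};

// Objects stored by value, so neighbouring elements tend to share cache lines.
tbb::concurrent_vector<Node> pool;

// 32-bit indices into pool: half the size of a 64-bit pointer, so twice as many
// references fit into one cache line.
tbb::concurrent_vector<std::uint32_t> refs;

double lookup(std::size_t i)
{
    return pool[refs[i]].value;   // one access to the index array, one to the pool
}

One caveat: concurrent_vector stores its elements in a series of contiguous segments rather than one flat block (so that growth never moves elements), so locality improves but is not perfectly contiguous.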
It seems to me that the best performance would occur if the object-holding concurrent_vector and the pointer concurrent_vector were far enough apart in memory that they were loaded onto their own cache lines; that way, each could be in the cache at the same time.
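If the worry is the two containers' buffers landing on the same cache line, a hedged sketch using TBB's cache_aligned_allocator (which pads each allocation out to a cache-line boundary) would look like the following; the Obj type is illustrative, and cache_aligned_allocator is already concurrent_vector's default allocator, so in practice the two buffers start on separate cache lines anyway:

#include <cstdint>
#include <tbb/cache_aligned_allocator.h>
#include <tbb/concurrent_vector.h>

// Illustrative object type.
struct Obj
{
    double payload[4];
};

// Each container's storage begins on its own cache-line boundary,
// so the object data and the index data never share a line.
tbb::concurrent_vector<Obj, tbb::cache_aligned_allocator<Obj>> objects;
tbb::concurrent_vector<std::uint32_t, tbb::cache_aligned_allocator<std::uint32_t>> indices;

Being on separate lines does not by itself keep both in the cache, though; any two lines can be cached at the same time unless they compete for the same associativity set.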
Any suggestions or general comments on how the processor cache handles pointers? I assume they are treated like any other data type.
Okay, suppose I have a data structure that consists of a number of pointers. These pointers point to a chunk of memory that has information to be used, and the pointer to follow is selected based on conditions. So suppose I have the Sample struct.
struct Sample
{
    vector<vector<int>*> pointers;   // placeholder element type: pointers to separately allocated blocks of data
};
I'm going to iterate through the pointers vector and find one to follow based on some other condition. Now, the question is how does this work in the cache? I assume that the pointers vector will get loaded into the L2 cache if it is used often enough. But what of the vectors that the pointers point to?
So I might have answered my own question. Pointers are just treated like normal memory: loaded into a cache line, then when they are dereferenced the cache line holding the pointers is flushed and a cache line holding the pointed-to memory is loaded. Is that right?
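To make the access pattern concrete, here is a sketch of the traversal described above (the matches predicate is a placeholder, and the vector<int> element type is carried over from the reconstructed struct):

#include <vector>
using std::vector;

struct Sample
{
    vector<vector<int>*> pointers;
};

// Placeholder selection condition.
bool matches(const vector<int>* p)
{
    return p != nullptr && !p->empty();
}

int follow(const Sample& s)
{
    // Scanning the pointer array streams through a few contiguous cache lines.
    for (vector<int>* p : s.pointers)
    {
        if (matches(p))
        {
            // Dereferencing loads additional cache lines: one or more for the selected
            // vector object and its separately allocated element buffer. The lines
            // holding s.pointers are not discarded just because of this access.
            return p->front();
        }
    }
    return -1;
}

The point of the comments is that following a pointer adds cache lines for the pointed-to data; it does not, by itself, evict the line that held the pointer.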
In my understanding, a cache line's content is usually replaced (this is called cache line eviction) only when there is no other space in the cache to load necessary data. Eviction strategies differ, but obviously hot lines should not be evicted. Bearing in mind that with an N-way set-associative cache any address has only N places where it can be loaded, a cache line does not seem likely to stay around for long if it is not used. On the other hand, neighbouring lines never conflict, because consecutive lines map to different sets.
Back to your example, I do not think the cache line holding the pointers is immediately evicted after the pointers are dereferenced. As I said above, that happens only when its content gets "cold" and a "competing" cache line needs the space. The line holding the pointed-to memory is loaded when that memory is requested, that's right. And even if it competes for cache space with the pointers, the pointer line is not likely to be evicted, since it is most probably still considered hot.
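To illustrate the "only N places" point with numbers, here is a small sketch; the cache parameters are invented for the example (64-byte lines, 8 ways, 512 KB total), since real values differ per CPU:

#include <cstdint>
#include <cstdio>

int main()
{
    // Example parameters only; real geometries vary by processor.
    const std::uint64_t line_size  = 64;               // bytes per cache line
    const std::uint64_t ways       = 8;                // the N in "N-way associative"
    const std::uint64_t cache_size = 512 * 1024;       // total cache bytes
    const std::uint64_t sets       = cache_size / (line_size * ways);   // 1024 sets here

    // Addresses one line apart map to consecutive sets, so neighbouring lines
    // never compete for the same N ways.
    for (std::uint64_t addr = 0x1000; addr < 0x1000 + 4 * line_size; addr += line_size)
    {
        std::uint64_t set = (addr / line_size) % sets;
        std::printf("address 0x%llx -> set %llu (one of %llu possible ways)\n",
                    (unsigned long long)addr,
                    (unsigned long long)set,
                    (unsigned long long)ways);
    }
    return 0;
}

This is also why a line is evicted only when something else that maps to the same set needs one of those N slots.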