I'm a newbie here, so I have a basic question about microarchitectures that support out-of-order execution.
Just to fix ideas, consider the "simple" Intel Core microarchitecture (see for instance http://arstechnica.com/gadgets/2006/04/core/3/). The Core execution core's Reservation Station (RS) has 6 issue (or dispatch) ports used to issue micro-ops (uops) for execution. In particular, Ports 3, 4, and 5 are dedicated to issuing Load/Store uops to the three Load/Store units belonging to the execution unit.
Now consider the case in which the RS issues a Load uop to the Load unit through Port 3. The latency to complete the Load uop depends on data availability in the caches (L1/L2/L3); in the case of a cache miss, an (off-chip) memory access is needed (just to fix ideas, suppose 100 clock cycles are needed).
During that interval of time (100 clock cycles), does the Load unit remain stalled by that load operation or not? If there is another Load uop stored in the RS buffer, can the RS issue it to the Load unit through Port 3 on the next clock cycle?
Basically, the question tries to establish whether the Load/Store units are pipelined deeply enough to accept uops on successive clock cycles, regardless of the latency needed to complete each individual load/store uop.
This is described in a couple of places....
The Intel Optimization Reference Manual (document 248966, latest revision 031, September 2015) describes several Intel microarchitectures in Chapter 2. Figure 2-2 shows the Haswell microarchitecture, and you should note the box in the upper left quadrant labelled "Load Buffers, Store Buffers, Reorder Buffers" and the box in the bottom center labelled "Line Fill Buffers".
Although it does not make it into a table in this chapter, the very end of section 2.2 notes that there are 72 load buffers and 42 store buffers to support "micro-ops execution in-flight". Awkward wording, but the point is clear enough -- the processor can support up to 72 concurrent load micro-ops before stalling due to lack of load buffers.
Note that these buffers are for "load micro-ops" and don't have any direct connection to cache misses.
L1 Data Cache misses are supported by the Line Fill Buffers. The number of Line Fill Buffers is only mentioned for some of the microarchitectures, but Table 2-18 and the accompanying text say that both Nehalem and Sandy Bridge have 10 Line Fill Buffers, and can therefore directly support up to 10 L1 Data Cache misses. My interpretation is that this has not changed in Haswell or Skylake, but I have not tested this explicitly.
A processor core can have more than 10 requests outstanding to the L3 and/or memory only by way of the L2 Hardware Prefetchers. Note that the L1 Hardware Prefetchers use the same 10 Line Fill Buffers as demand misses. L1 HW prefetching can therefore provide better overlap of latency, and it can provide concurrency in cases where the core cannot (e.g., prefetching ahead of a series of dependent loads, where the core can't issue out of order because it needs data from memory to compute the next address), but it does not increase the maximum amount of concurrency that the hardware supports.
I pay a lot of attention to concurrency because bandwidth is limited by concurrency in many situations. For example on a Xeon E5-2660 v3 system (10 core, 2.6 GHz nominal, 4-channel DDR4/2133 DRAM), the memory latency on my test systems is just over 85 ns. With a bandwidth of 4*8*2.133=68.2 GB/s, the amount of concurrent memory traffic required to fully overlap the latency is 85 ns * 68.2 GB/s = 5800 Bytes, or about 91 cache lines. If a core can only manage 10 concurrent cache transfers, it will only get 10*64 Bytes/85 ns = 7.5 GB/s of sustained bandwidth. Fortunately there are both L2 hardware prefetchers and multiple cores. With Haswell, Intel has done a very good job of increasing the effectiveness of the L2 prefetching, so that for the STREAM benchmark, I get the maximum performance using 5 of the 10 cores, and I get within 3%-4% of maximum performance using 4 of the 10 cores. (Performance degrades slightly as I use more than 5 cores because of increasing DRAM bank conflicts, giving about the same performance for 10 cores as for 4 cores.)
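The concurrency requirement above is just Little's Law (bytes in flight = latency × bandwidth). A small sketch using the numbers quoted for the Xeon E5-2660 v3 system:

```python
# Little's Law: bytes in flight = latency * bandwidth.
latency_s = 85e-9                    # measured memory latency, ~85 ns
bandwidth_Bps = 4 * 8 * 2.133e9      # 4 channels * 8 bytes * 2.133 GT/s ~= 68.2 GB/s
line_bytes = 64

bytes_in_flight = latency_s * bandwidth_Bps
lines_in_flight = bytes_in_flight / line_bytes
print(f"~{bytes_in_flight:.0f} bytes (~{lines_in_flight:.0f} cache lines) in flight")

# A single core limited to 10 concurrent cache line transfers:
single_core_bw = 10 * line_bytes / latency_s
print(f"single-core sustained bandwidth ~= {single_core_bw / 1e9:.1f} GB/s")
```

This reproduces the ~91 cache lines and ~7.5 GB/s figures in the text; any change in latency or DRAM configuration shifts both numbers proportionally.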
Another very useful resource for learning about the microarchitecture of Intel, AMD, and VIA processors is Agner Fog's excellent write-up "microarchitecture.pdf", available at http://www.agner.org/optimize/. This document is most helpful if you read it from front to back, because he describes the earlier processors first and then describes the improvements and increased complexity of each following processor in historical order. This incremental approach makes the complexity of the more recent processors a bit less overwhelming.
As always, John's comments are a valuable contribution to my understanding.
On the question of fill buffers, I think John may be influenced by their role in the ARM architecture, which is different from the role in Intel architectures beginning with Woodcrest, where they replaced the Write Combine Buffers of the NetBurst CPUs.
As John said, the number of Line Fill Buffers per core has remained at 10 for a number of years (except that MIC KNC returns to 8). Beginning (I think) with Nehalem, they are "competitively shared" between logical processors, which may limit performance of HyperThreading. Earlier CPUs gave each logical processor access to only half of the fill buffers, even if only one of the logical processors was in use.
The Intel optimization manual describes the use of the fill buffers only indirectly. There is just one brief comment about their use for non-temporal "streaming" stores which (to me) muddies the waters for the more usual case of non-streaming stores.
The fill buffers come into play on data stores (moves to memory). When a buffer is allocated for a non-streaming store, the core performs a "read for ownership" from memory to bring in the current contents of the cache line. This would be the only type of cache miss involved with fill buffers. When data are flushed from a fill buffer, they go directly to L1D. Ideally, this happens only when full cache lines have been updated, but a loop which writes alternately to too many different cache lines may thrash the fill buffers. In my past experience, for applications which operate in this mode, 6 to 9 write streams (with HT disabled) have appeared to be optimal. Intel compilers make some effort at -O3 to split and fuse loops so as to fall into this range.
John may have more knowledge about how the number of fill buffers influences performance of streaming stores. I suppose at least one of them needs to be sending data to memory while another is being filled.
On the NetBurst CPUs, the hardware attempted to keep 2 write combine buffers ready by initiating flush (write to L2) for the oldest buffers, regardless of whether they had been filled with updated data. There has been nothing presented about any similar action for fill buffers. It seems clear that attempting to write 10 streams already runs into bottlenecks, yet there are situations where the compiler correctly allows more than 10 streams by not splitting a loop when it would reduce register data locality.
On MIC, and AVX512, where optimized code stores full cache lines anyway, fill buffer thrashing should be a thing of the past.
If there is a read or prefetch to a cache line whose only valid copy is the one in the line fill buffer, that also has to be handled, possibly by flushing the buffer or by copying to a read stream buffer. The optimization guide refers briefly to the case where a prefetch has to check the line fill buffers, since we don't want to prefetch an invalid cache line. A read after a write to the same cache line might be handled better by this scheme of pushing the line fill buffer out to L1D than by the older scheme of flushing a Write Combining buffer to L2, at least when the same thread is both writing and reading. I didn't want to discuss these cases of false sharing, as they go even further beyond what is adequately documented, into territory where performance problems are expected regardless of implementation.
With such sketchy documentation, some low level research such as John has engaged in might be needed to clarify the role of the line fill buffers.
The Line Fill Buffers are associated with pending load or store requests for cache lines that are not in the L1 cache, but do not have a 1:1 relation to load and store micro-ops.
Pending load and store micro-ops are associated with the "load buffers" and "store buffers", and multiple load buffers and store buffers can point to each of the Line Fill Buffers.
For example, with 32-bit loads to consecutive addresses, you would end up with 16 "load buffers" pointing to a single Line Fill Buffer. This is why the number of load buffers and store buffers needs to be so much larger than the number of Line Fill Buffers.
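The arithmetic behind that example (64-byte lines, 4-byte loads) is straightforward; the sketch below just makes it explicit:

```python
line_bytes = 64
load_bytes = 4                              # 32-bit loads
loads_per_line = line_bytes // load_bytes   # load buffers sharing one Line Fill Buffer
print(loads_per_line)                       # 16

# With 10 Line Fill Buffers, up to 160 such loads could map onto pending
# lines, so the 72 load buffers become the binding limit in this pattern.
max_mappable = 10 * loads_per_line
print(max_mappable)                         # 160
```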
Yes, that description seems accurate. In this case (assuming no TLB misses), the core would be able to issue the 10 loads to 10 different cache lines in 5 cycles (two loads per cycle on the two load ports).
During and after these 5 cycles the core will continue fetching instructions, decoding them into uops, performing register renaming, sending the corresponding uops to the reservation stations, and then dispatching the uops to the execution ports. This will continue until the core runs out of some other resource.
According to http://www.anandtech.com/show/6355/intels-haswell-architecture/8, the "Reorder Buffer" holds 192 uops and the "Reservation Station" holds 60 uops. At a rate of 4 uops/cycle, it will take 15 more cycles to fill the Reservation Station and 48 cycles to fill the Reorder Buffer. Any of the instructions that make it as far as the Reservation Station (and that don't depend on the data from prior loads) can be executed and the result written back to the Reorder Buffer. So the Reservation Station will fill up if the subsequent instructions depend on the data from the loads (so they can't execute), while the Reorder Buffer will fill up if the subsequent instructions are independent of the results of the loads (so they get executed and returned to the Reorder Buffer).
Assuming that the data requested by the 10 initial loads is actually in memory (not in cache), it is not going to show up for a long time. For a single-socket system the latency may be as low as ~60 ns, so at 3.0 GHz the latency would be about 180 cycles. Unless there is a serious problem with instruction fetch, the Reorder Buffer is going to fill up a long time before that.
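Putting the numbers from the last two paragraphs together (Haswell-era sizes from the AnandTech link; the 60 ns / 3.0 GHz figures are the assumed single-socket case above):

```python
rob_uops, rs_uops = 192, 60
alloc_rate = 4                              # uops renamed/allocated per cycle

rs_fill_cycles = rs_uops / alloc_rate       # 15 cycles to fill the Reservation Station
rob_fill_cycles = rob_uops / alloc_rate     # 48 cycles to fill the Reorder Buffer

mem_latency_ns, freq_ghz = 60, 3.0
mem_latency_cycles = mem_latency_ns * freq_ghz   # ~180 cycles

# The Reorder Buffer fills long before the first load returns.
print(rs_fill_cycles, rob_fill_cycles, mem_latency_cycles)
```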
After an instruction has been executed, its corresponding uops are removed from the Reorder Buffer at Retirement. But for the program to be serially correct, the instructions must retire in program order. So in any cycle, the processor will attempt to retire instructions starting with the *oldest* instruction in the Reorder Buffer (i.e., the instruction that appeared first in program order). If the oldest instruction in the Reorder Buffer has not completed its execution, then nothing can be retired, and no uops can be removed from the Reorder Buffer. With a 200 cycle memory latency and no more than ~50 cycles to fill the Reorder Buffer, a memory-latency-bound code can be stalled for a long time.
In this scenario, the 10 loads to different cache lines are going to be processed mostly concurrently. The first data to be returned will arrive in about 200 cycles, but the next nine loads will get their data much more quickly (perhaps as fast as one load every 10 cycles, limited by the time required to transfer the cache lines from DRAM through the L3, L2, and L1 into the core), because most of the latency has been overlapped. It is possible for the memory controller to return the cache lines out of order, which would allow the loads to execute out of order. This, in turn, might allow the dispatch and execution of dependent operations in the Reservation Station, but nothing can retire until the data from the first (in program order) load is returned and that load instruction is completed.
Sorry for the confusion -- I was not as careful as I should have been in my description....
Anyway, the point is that you can fill up the Reorder Buffer in a fairly small fraction of a memory latency (~50 cycles out of ~200 cycles). So even if all of the subsequent instructions are independent of the data being fetched and can be executed out of order, you will only be able to tolerate a fairly small fraction of the memory latency. More commonly many of the subsequent instructions will be dependent on the data coming back from the loads, which will allow even less effective overlap.
But this is why the hardware has aggressive hardware prefetchers -- to try to get the data in the caches before the loads, so the latencies are not so long.
Looking at the numbers, one would expect that these OOO mechanisms would be extremely effective for data that is in the L2 cache (~12-14 cycle latency), and quite effective for data that is in the L3 cache (~30-40 cycles latency). Experience generally agrees -- the core can tolerate L2 latencies almost perfectly and L3 latencies very well. Bandwidth is another issue -- if you only have 3 instructions for every word coming from L3 or memory, then your average instruction retirement rate will be limited by the sustained bandwidth. (In other words, the ability to tolerate latency requires that you have work to do while waiting -- not all codes have extra work to do.)
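As a hypothetical illustration of that last point (all numbers below are assumptions for the sketch, not measurements): if every 8-byte word must come from memory and there are only 3 instructions of work per word, sustained bandwidth caps the retirement rate no matter how well latency is tolerated.

```python
sustained_bw_Bps = 20e9     # assumed per-core sustained bandwidth from L3/memory
word_bytes = 8
insns_per_word = 3          # assumed instructions of work per word fetched
freq_hz = 3.0e9

words_per_cycle = sustained_bw_Bps / word_bytes / freq_hz
max_ipc = insns_per_word * words_per_cycle
print(f"retirement rate capped at ~{max_ipc:.1f} instructions/cycle")
```

With these assumed numbers the cap works out to about 2.5 instructions/cycle, below the 4-wide retirement the core is otherwise capable of.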
I was using "in flight" and "in the Reorder Buffer" as synonyms.
Without knowing anything about Intel's implementation, one categorization of items in the Reorder Buffer might be: uops still waiting in the Reservation Station for their operands; uops dispatched to an execution port and in flight; and uops that have executed and written back results but have not yet retired.
Other states may exist for speculatively issued instructions. OOO and speculation have many similarities, but the implementations may need to be different. It is also not clear what happens to uops that are dispatched to an execution port, but then rejected because the data is not ready. These probably stay in the Reservation Station to be retried, but they could be kicked back to the Reorder Buffer.
When I say that "most of the latency has been overlapped", I simply mean that while each load takes about 200 cycles (from issue to completion or retirement), the elapsed time for 10 loads is much less than 10 loads * 200 cycles/load = 2000 cycles.
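A back-of-the-envelope model of that statement, using the cycle counts from the scenario above (the 10-cycle return gap is an assumption about transfer-limited timing):

```python
latency = 200       # cycles from issue to completion for one load
n_loads = 10
return_gap = 10     # assumed cycles between successive cache-line returns

serialized = n_loads * latency                       # fully dependent loads
overlapped = latency + (n_loads - 1) * return_gap    # fully independent loads
print(serialized, overlapped)                        # 2000 vs 290 cycles
```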
The rate at which cache lines are returned depends on lots of complex interactions between the Core, L1, L2, L3, Ring, Memory Controller, DRAM Controller, DRAM, and (in multi-chip systems) cache coherence transactions.
The specific numbers will depend on the processor model (and any user-configurable options, such as Turbo Boost), the memory configuration (e.g., maximum frequency of installed DRAM, number of populated DRAM channels, number of DIMMs and ranks/DIMM in each channel), the details of the test pattern (e.g., contiguous vs strided vs random addresses, number of read and write streams), and the number of cores used for the test.
For an older processor like the 2006-era Core processor mentioned above, a common configuration might have a 1066 MHz Front Side Bus. This has a peak bandwidth of 8*1.066=8.53 GB/s. Assuming that an appropriate set of DIMMs is installed, the sustained bandwidth on the STREAM benchmark should be in the range of ~5.1 GB/s. (e.g., http://www.cs.virginia.edu/stream/stream_mail/2007/0018.html for a similar Core2 Duo processor). With 64 byte cache lines, 5.12 GB/s is 80 Million cache lines per second, or one cache line every 12.5 ns. At a nominal 2 GHz frequency, this is one cache line every 25 core clock cycles. This is a long-term average -- the detailed timing is likely to be quite complex.
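The arithmetic in that paragraph, spelled out (numbers as quoted above):

```python
fsb_peak_Bps = 8 * 1.066e9     # 1066 MHz FSB * 8 bytes ~= 8.53 GB/s peak
sustained_Bps = 5.12e9         # approximate sustained STREAM bandwidth
line_bytes = 64
freq_hz = 2.0e9                # nominal core frequency

lines_per_sec = sustained_Bps / line_bytes        # 80 million cache lines/s
ns_per_line = 1e9 / lines_per_sec                 # 12.5 ns per cache line
cycles_per_line = ns_per_line * freq_hz / 1e9     # 25 core cycles per cache line
print(lines_per_sec / 1e6, ns_per_line, cycles_per_line)
```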
A wide range of values can be obtained depending on the specifics of the configuration.
Certainly there are parts of the memory subsystem that are fully pipelined -- each cache line load takes exactly 8 transfer cycles (4 major cycles) on the DRAM bus, and typically internal transfers are also "atomic". Consecutive transfers are only sometimes pipelined without gaps, and sometimes the gaps are quite large. Because of the large number of units that have to interact in the memory access path, there are many opportunities for stalls, and these cannot be understood in detail using publicly available information. That is why measurements (such as STREAM) continue to be useful.