WO2001025903A1 - A method for precise trap handling in case of speculative and out-of-order loads
- Publication number
- WO2001025903A1 (PCT/US2000/026815)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- stage
- load
- pipeline
- instruction
- bit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
- G06F9/3838—Dependency mechanisms, e.g. register scoreboarding
- G06F9/3854—Instruction completion, e.g. retiring, committing or graduating
- G06F9/3856—Reordering of instructions, e.g. using queues or age tags
- G06F9/3858—Result writeback, i.e. updating the architectural state or memory
- G06F9/3861—Recovery, e.g. branch miss-prediction, exception handling
- G06F9/3867—Concurrent instruction execution, e.g. pipeline or look ahead using instruction pipelines
Definitions
- This invention relates to processing, tracking, and managing out-of-order and speculative load instructions in a processor that performs precise trap handling.
- an automated system for various processing applications may handle multiple events or processes concurrently.
- a single process is termed a thread of control, or "thread", and is the basic unit of operation of independent dynamic action within the system.
- a program has at least one thread.
- a system performing concurrent operations typically has many threads, some of which are transitory and others enduring.
- Systems that execute among multiple processors allow for true concurrent threads.
- Single-processor systems can only have illusory concurrent threads, typically attained by time-slicing of processor execution, shared among a plurality of threads.
- the Java™ programming language is advantageously executed using an abstract computing machine, the Java Virtual Machine.
- a Java Virtual Machine is capable of supporting multiple threads of execution at one time.
- the multiple threads independently execute Java code that operates on Java values and objects residing in a shared main memory.
- the multiple threads may be supported using multiple hardware processors, by time-slicing a single hardware processor, or by time-slicing many hardware processors.
- Java™, Sun, Sun Microsystems and the Sun logo are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. All SPARC trademarks, including UltraSPARC I and UltraSPARC II, are used under license and are trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
- a method for facilitating precise trap handling for out-of-order and speculative load instructions tracks the age of a load instruction.
- the age of the load instruction is determined by the current stage of its execution in a sequence of pipeline stages.
- the age is tracked in an age indicator.
- the age indicator includes one bit for each pipeline stage that it tracks.
- the method determines when a precise trap has occurred. When a precise trap has occurred, it is determined whether the load instruction was issued before the trapping instruction. Whether the load instruction was issued before the trapping instruction is determined by examining the age indicator. If the age indicator indicates that the trapping instruction trapped before the load instruction completed its execution through all pipeline stages, then the load instruction is either the same age or younger than the trapping instruction, and the load instruction is invalidated. Invalidation is accomplished by resetting a valid bit. Invalidation effectively cancels the load instruction.
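- As a rough illustration of this scheme, the following C sketch models a tracked load with a one-hot age indicator (one bit per pipeline stage) and a valid bit; the structure and function names, and the choice of four tracked stages, are assumptions made for the example rather than details taken from the described hardware.

```c
#include <stdbool.h>
#include <stdint.h>

/* One tracked load: a one-hot age indicator (one bit per tracked pipeline
 * stage) and a valid bit.  Bit 3 = C/A1, bit 2 = A2, bit 1 = A3, bit 0 = T;
 * all zeros means the load has reached the WB stage. */
typedef struct {
    uint8_t age;    /* one-hot stage bits, 4 bits used          */
    bool    valid;  /* cleared to cancel (invalidate) the load  */
} load_entry;

/* On each pipeline-stage transition, shift the stage bits right by one. */
static void advance_stage(load_entry *e) {
    e->age >>= 1;
}

/* When a precise trap is detected, any load whose age indicator is still
 * non-zero is the same age as or younger than the trapping instruction,
 * so its valid bit is reset, effectively canceling the load. */
static void handle_precise_trap(load_entry *loads, int n) {
    for (int i = 0; i < n; i++) {
        if (loads[i].age != 0)
            loads[i].valid = false;
    }
}
```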
- the age indicator and valid bit are maintained by a load/store unit. In another embodiment, the age indicator and valid bit are maintained by a load annex.
- a processor is configured to perform the method for precise trap handling for out-of-order and speculative load instructions described above.
- the processor includes a main memory and a plurality of processing units. It keeps track of the age of load instructions in a shared scheme that includes a load buffer and a load annex. All precise exceptions are detected in a T phase of a load pipeline. Data and control information concerning load operations that hit in the cache are staged in a load annex during the A2, A3, and T pipeline stages until all exceptions in the same, or earlier, instruction packet are detected. Data and control information from all other load instructions is staged in the load annex after the load data is retrieved. If an exception occurs, any load in the same instruction packet as the instruction causing the exception is canceled. Any load instructions that are "younger" than the instruction that caused the exception are also canceled.
- the age of load instructions is determined by tracking the pipe stages of the instruction. When a trap occurs, any load instruction with a non-zero age indicator is canceled.
- FIGURE 1 is a schematic block diagram illustrating one embodiment of a multiple-thread processor.
- FIGURE 2 is a schematic block diagram showing the core of one embodiment of a multi-thread processor.
- FIGURE 3 is a schematic timing diagram illustrating one embodiment of a dedicated load/store pipeline.
- FIGURE 4 is a block diagram of at least one embodiment of a load/store unit.
- FIGURE 5 is a schematic diagram illustrating a load/store unit and a pipe control unit that share information concerning load instructions.
- Referring to FIGURE 1, a schematic block diagram illustrates a processor 100 having an improved architecture for multiple-thread operation on the basis of a highly parallel structure including multiple independent parallel execution paths, shown herein as two media processing units 110 and 112.
- the execution paths execute in parallel across threads and include a multiple-instruction parallel pathway within a thread.
- the multiple independent parallel execution paths include functional units executing an instruction set having special data-handling instructions that are advantageous in a multiple-thread environment.
- the multiple-threading architecture of the processor 100 is advantageous for usage in executing multiple-threaded applications using a language such as the Java™ language running under a multiple-threaded operating system on a multiple-threaded Java Virtual Machine™.
- the illustrative processor 100 includes two independent processor elements, the media processing units 110 and 112, forming two independent parallel execution paths.
- a language that supports multiple threads, such as the Java™ programming language, generates two threads that respectively execute in the two parallel execution paths with very little overhead incurred.
- the special instructions executed by the multiple-threaded processor include instructions for accessing arrays, and instructions that support garbage collection.
- a single integrated circuit chip implementation of a processor 100 includes a memory interface 102 for interfacing with a main memory, a geometry decompressor 104, the two media processing units 110 and 112, a shared data cache 106, and several interface controllers.
- the interface controllers support an interactive graphics environment with real-time constraints by integrating fundamental components of memory, graphics, and input/output bridge functionality on a single die.
- the components are mutually linked and closely linked to the processor core with high-bandwidth, low-latency communication channels to manage multiple high-bandwidth data streams efficiently and with a low response time.
- the interface controllers include an UltraPort Architecture Interconnect (UPA) controller 116 and a peripheral component interconnect (PCI) controller 120.
- the illustrative memory interface 102 is a direct Rambus dynamic RAM (DRDRAM) controller.
- the shared data cache 106 is a dual-ported storage that is shared among the media processing units 110 and 112 with one port allocated to each media processing unit.
- the data cache 106 is four-way set associative, follows a writeback protocol, and supports hits in the fill buffer (not shown).
- the data cache 106 allows fast data sharing and eliminates the need for a complex, error-prone cache coherency protocol between the media processing units 110 and 112.
- the processor 100 issues and retires instructions in order.
- processor 100 implements dynamic instruction rescheduling and speculative execution of load instructions, which allows instructions to execute and complete out of order. Even though the operations may finish out of order, and therefore may generate exceptions out of order, the processor 100 nonetheless provides precise trap handling and maintains the appearance of in-order execution following a trap.
- the media processing units 110 and 112 each include an instruction cache 210, an instruction aligner 212, an instruction buffer 214, a pipeline control unit (PCU) 226, a split register file 216, a plurality of functional units, and a load/store unit 218.
- the media processing units 110 and 112 use a plurality of functional units for executing instructions.
- the functional units for a media processing unit 110 include three media functional units (MFU) 220 and one general functional unit (GFU) 222.
- An individual independent parallel execution path 110 or 112 has operational units including instruction supply blocks and instruction preparation blocks, functional units 220 and 222, and a register file 216 that are separate and independent from the operational units of other paths of the multiple independent parallel execution paths.
- the instruction supply blocks include a separate instruction cache 210 for the individual independent parallel execution paths, however the multiple independent parallel execution paths share a single data cache 106 since multiple threads sometimes share data.
- the data cache 106 is dual-ported, allowing data access in both execution paths 110 and 112 in a single cycle. Sharing of the data cache 106 among independent processor elements 110 and 112 advantageously simplifies data handling, avoiding a need for a cache coordination protocol and the overhead incurred in controlling the protocol.
- the instruction supply blocks in an execution path include the instruction aligner 212, and the instruction buffer 214 that precisely format and align a full instruction group of four instructions to prepare to access the register file 216.
- An individual execution path has a single register file 216 that is physically split into multiple register file segments 224, each of which is associated with a particular functional unit of the multiple functional units. At any point in time, the register file segments as allocated to each functional unit each contain the same content.
- a multi-ported register file is typically metal-limited, such that the area consumed by the circuit is proportional to the square of the number of ports.
- the processor 100 has a register file structure divided into a plurality of separate and independent register files to form a layout structure with an improved layout efficiency.
- the read ports of the total register file structure 216 are allocated among the separate and individual register files.
- Each of the separate and individual register files has write ports that correspond to the total number of write ports in the total register file structure. Writes are fully broadcast so that all of the separate and individual register files are coherent.
- the media functional units 220 are multiple single-instruction-multiple-data (MSIMD) media functional units. Each of the media functional units 220 is capable of processing parallel 16-bit components. Various parallel 16-bit operations supply the single-instruction-multiple-data capability for the processor 100 including add, multiply- add, shift, compare, and the like.
- the media functional units 220 operate in combination as tightly coupled digital signal processors (DSPs). Each media functional unit 220 has a separate and individual sub-instruction stream, but all three media functional units 220 execute synchronously so that the subinstructions progress lock-step through pipeline stages. During operation of the processor 100, traps may occur.
- a trap is a vectored transfer of control to privileged software, taken by the processor 100 in response to the presence of certain conditions. Traps may occur due to internal events or external events.
- An external condition that will cause a trap is an interrupt.
- An interrupt is a request for service presented to the functional unit by a device external to the functional unit.
- An interrupt is asynchronous to the instruction stream of the functional unit receiving the interrupt.
- Internally, a trap may occur due to an exception.
- An exception is triggered by the execution of an instruction within the functional unit.
- An exception is a condition that makes it impossible for a functional unit to continue executing the current instruction stream without software intervention. The functional unit may be set to ignore some exceptions.
- a trap is generated by an attempt to execute an instruction.
- An instruction may generate an exception if it encounters some condition that makes it impossible to complete normal execution.
- Such an exception may, in turn, generate a precise trap, which is induced by a particular instruction and occurs before any program-visible state of the processor 100 has been changed by the trap-inducing instruction. For load instructions, this means that the trap occurs before the results of the trap-inducing load are written to the register file.
- When instructions are generated for processor 100, either by hand or by compiler, the instructions are organized into packets of instructions.
- the instruction packet may contain from one to N instructions, where N is the number of functional units included in the media processing units 110, 112. In at least one embodiment, the instruction packets include four instructions. Each instruction packet either executes to completion or causes an exception. If any instruction generates a recoverable error, the processor 100 provides precise trap handling by returning to its machine state at the time the exception occurred, and resuming operation. When a precise trap occurs, the processor 100 ensures a precise state by completing execution of all instruction packets issued before the one that induced the trap.
- the processor 100 prevents all instruction packets that issued after the one that induced the trap from completing execution, even if they finished out-of-order before the trap-inducing instruction.
- the processor 100 restores itself to its state at the time of the exception. After such restoration, execution may be resumed. Operation may either be resumed from the trapping instruction or from the instruction following the trapping instruction. In this manner the processor 100 provides that instructions that finish out of order with respect to other packet instructions, or other packets, and then generate an exception, will nonetheless allow the processor 100 to resume operation at a precise state, as long as the error is a recoverable error (i.e., the error does not prevent restoration of the exception-time machine state).
- Catastrophic errors are a class of errors that occur due to a hardware malfunction from which, due to the nature of the error, the state of the machine at the time of the exception cannot be restored. Since the machine state cannot be restored, execution after an exception caused by a catastrophic error may not be resumed.
- An example of such a catastrophic error is an uncorrectable bus parity error.
- FIGURE 3 is relevant to a discussion of precise trap handling for load instructions, it being understood that the load instructions may be scheduled speculatively and may also be scheduled to execute out of order.
- Processor 100 maintains a dedicated load/store pipe 300 for processing load and store memory operations.
- FIGURE 3 is a schematic timing diagram illustrating one embodiment of the dedicated load/store pipe 300.
- the load/store pipe 300 includes nine sequential stages, including three initiating stages, a plurality of execution stages, and two terminating stages.
- the operation of the GFU load/store pipe 300 is controlled by the Pipe Control Unit (PCU) 226.
- the first of the initiating stages of the load/store pipeline 300 is a fetch stage 310 (F stage).
- the processor 100 fetches instructions from the instruction cache 210.
- the fetched instructions are aligned in the instruction aligner 212 and forwarded to the instruction buffer 214 during an align stage 312 (A stage), a second stage of the initiating stages.
- during a decoding stage 314 (D stage), the third of the initiating stages, the PCU 226 decodes the fetched and aligned instruction out of the instruction packet.
- the PCU 226 sends information concerning the current load instruction to the LSU 219.
- the four register file segments 224 each hold either floating-point data or integer data.
- the register file 216 is read in the decoding (D) stage 314.
- the scoreboard (not shown) is read and updated.
- the scoreboard is a structure with information concerning unfinished loads. It provides a hardware interlock between any unfinished load operation and a younger instruction that has data/output dependency with the unfinished load operation.
- a new instruction enters the D stage 314, it compares its source and destination register operands with all of the scoreboard entries.
- the number of entries in the scoreboard allocated for unfinished loads is equal to the number of entries in the load buffer 400 (FIGURE 4) of the LSU, described below.
- the scoreboard contains 5 load instruction entries. Each scoreboard entry for a load instruction has a 5-bit stage field that indicates how old the unfinished instruction is.
- This stage field is similar to the load buffer status word 410 (FIGURE 4) discussed below.
- the stage bits are shifted right by one position as each pipeline stage executes. If a trap is detected before the load instruction's stage field indicates the WB stage (b'0000'), then the scoreboard entry is invalidated.
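- The following C sketch illustrates, under assumed field and function names, how a five-entry scoreboard of this kind could combine the operand-comparison interlock with the shift-right stage field and trap-time invalidation; it is a model of the described behavior, not the actual scoreboard logic.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical scoreboard entry for one unfinished load.  The field names
 * and the dependency check are illustrative assumptions; the shift-right
 * aging of the stage field follows the description above. */
typedef struct {
    bool    valid;
    uint8_t dest_reg;  /* destination register of the unfinished load  */
    uint8_t stage;     /* one-hot stage field; 0 => reached the WB stage */
} sb_entry;

#define NUM_SB_ENTRIES 5  /* matches the five-entry load buffer */

/* A new instruction in the D stage compares its source and destination
 * register operands against every scoreboard entry to detect a
 * dependency on an unfinished load. */
static bool has_dependency(const sb_entry *sb, const uint8_t *ops, int nops) {
    for (int i = 0; i < NUM_SB_ENTRIES; i++) {
        if (!sb[i].valid) continue;
        for (int j = 0; j < nops; j++)
            if (ops[j] == sb[i].dest_reg) return true;
    }
    return false;
}

/* A trap invalidates any entry whose stage field has not yet reached
 * zero, i.e. any unfinished load that has not reached the WB stage. */
static void sb_on_trap(sb_entry *sb) {
    for (int i = 0; i < NUM_SB_ENTRIES; i++)
        if (sb[i].stage != 0) sb[i].valid = false;
}
```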
- following the initiating stages, the execution stages are performed.
- in the E stage 332, the GFU 222 calculates the address of each load and store instruction.
- all load and store instructions in the instruction packet are sent to the load/store unit (LSU) 218 for execution.
- processing of load instructions during the remaining pipeline stages 334, 336, 338, 360, 362 is handled as follows. From the E stage 332 forward to the T stage 360, the LSU 218 keeps track of the load instruction's age. When forwarded to the LSU 218 in the E stage, the load instructions are placed into the load buffer 400 of the LSU. In at least one embodiment, the load buffer 400 has five entries and is therefore capable of maintaining up to five load instructions.
- processor 100 allows one hit under four misses (described immediately below).
- five load entries are supported in the load buffer 400, and five load entries are supported by the scoreboard, described above.
- for an explanation of a "hit under miss," reference is made to FIGURE 2.
- when the LSU 218 attempts to access an item of information requested in a load operation, the item is either already present in the data cache 106 or not. If present, a cache "hit" has occurred. If the item is not in the data cache 106 when requested by the LSU 218, a cache "miss" occurs.
- Processor 100 allows for a later-submitted load instruction that "hits" to obtain information from the data cache 106 before an earlier-submitted load instruction that suffers a cache miss. This situation is referred to as a "hit under miss".
- each status word 410A, 410B, 410C, 410D, 410E includes four stage bits, each stage bit corresponding to one of the C/A1, A2, A3, or T pipeline stages.
- the LSU detects the transition from one pipeline stage to the next and, upon each transition, shifts the stage bits to the right by one position.
- the age of a load instruction is tracked in the status word 410 as indicated below in Table 1.
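- Table 1 is not reproduced here, but the encoding it describes can be inferred from the surrounding text: the C/A1 value b'1000' and the all-zero WB value are stated explicitly, and the intermediate values follow from the shift-right scheme. The C sketch below captures that assumed encoding.

```c
#include <stdint.h>

/* Assumed encoding of the four-bit load buffer status word 410, inferred
 * from the description (Table 1 itself is not reproduced in this text):
 * one-hot stage bits that the LSU shifts right on every stage transition. */
enum {
    STATUS_CA1 = 0x8,  /* b'1000' - load is in the C/A1 stage       */
    STATUS_A2  = 0x4,  /* b'0100' - load is in the A2 stage         */
    STATUS_A3  = 0x2,  /* b'0010' - load is in the A3 stage         */
    STATUS_T   = 0x1,  /* b'0001' - load is in the T stage          */
    STATUS_WB  = 0x0   /* b'0000' - load has reached the WB stage   */
};

/* LSU behavior on each pipeline-stage transition. */
static inline uint8_t status_advance(uint8_t status_word) {
    return status_word >> 1;  /* C/A1 -> A2 -> A3 -> T -> WB */
}
```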
- the LSU 218 accesses the data cache 106 in the C/A1 stage 334 of the load/store pipeline 300. If the load hits the data cache 106, data returns from the data cache 106 and is forwarded to the PCU 226 in the same cycle. The LSU 218 also sends to the PCU 226 the status word 410 with the age of the load. In the case where the load hits the data cache 106 in the C/A1 stage 334, the status word will reflect a value of b'1000', indicating that the age of the load corresponds to the C/A1 pipeline stage 334. On such a cache hit, load data returns to the PCU 226 during the same C/A1 stage 334 in which the LSU 218 accessed the data cache 106.
- the load buffer 400 and LDX 500 can share functionality in terms of tracking the age of load instructions and invalidating "younger" instructions when an "older" instruction traps. This functionality is further described below and generally involves resetting, during the T stage 360, a valid bit associated with any load instruction in the same instruction packet as the trapping instruction, as well as resetting a valid bit for all other load instructions that are "younger" than the trapping instruction.
- when load data is received by the PCU 226, it is not immediately written to the register files 224. To do so might cause data incoherence in a machine that executes load instructions speculatively and out of order. Instead, the load data and associated load information enter a load annex (LDX) 500.
- Load data is staged in the LDX 500 for a sufficient number of cycles so that the load instruction can reach the T pipeline stage before its data is broadcast to the register files 224. While load data is being staged in the LDX 500, the data is available to be bypassed to other functional units.
- the load data is broadcast to the register files in the T stage 360 if no trap was detected. Traps are detected in the T pipeline stage 360 (FIGURE 3).
- the load data is staged in the LDX 500 for three stages before being broadcast to the register files 224.
- the register files 224 latch the data locally and update the registers in the next clock cycle.
- FIGURE 5 illustrates that LDX 500 contains four entries labeled ldx1, ldx2, ldx3, and ldx4. These LDX entries act as a FIFO queue, with newer load data from the LSU 218 being placed in ldx1, and older load data being written to the register files 224 from ldx4.
- the registers 224 have a dedicated write port for load instructions, so the load data is shifted down one entry in the FIFO LDX 500 each clock cycle.
- FIGURE 5 illustrates that the LDX 500 includes four entries ldx1, ldx2, ldx3, ldx4 even though the load data is only staged for three cycles.
- the fourth entry ldx4 is used to write the load data to the register files 224. Because load data cannot be accessed in the same cycle that it is being written to the register files 224, the additional ldx4 entry holds the load data while it is being written.
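- The following C sketch models the four-entry LDX as a shift-down FIFO; the entry layout and type widths are illustrative assumptions, while the shift-down-per-cycle behavior and the role of ldx4 follow the description above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the four-entry load annex (LDX 500).  Entry and
 * field names mirror the description; the data types are assumptions. */
typedef struct {
    bool     valid;
    uint8_t  stage;    /* 3-bit stage field 510 (A2, A3, T)          */
    bool     dsize64;  /* dsize field: 64-bit vs. 32-bit load data   */
    uint8_t  dest_reg;
    uint64_t data;
} ldx_entry;

typedef struct {
    ldx_entry e[4];    /* e[0] = ldx1 (newest) ... e[3] = ldx4 (oldest) */
} load_annex;

/* Each clock cycle the entries shift down one position; ldx4 holds the
 * load data while it is written to the register files through the
 * dedicated load write port. */
static void ldx_shift_down(load_annex *ldx, ldx_entry incoming) {
    ldx->e[3] = ldx->e[2];
    ldx->e[2] = ldx->e[1];
    ldx->e[1] = ldx->e[0];
    ldx->e[0] = incoming;
}
```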
- Each LDX entry ldx1, ldx2, ldx3, ldx4 includes a stage field 510.
- This stage field 510 is derived from the value of the load buffer status word 410 associated with the LDX entry when it enters the PCU 226.
- the value of the stage field 510 indicates the age of the load instruction in the LDX entry.
- the load data was received by the LDX 500, at the earliest, during the C/A1 stage, so the LDX 500 need only track the age of the particular load instruction through the A2, A3, and T stages to ensure that the data from load instructions that hit in the data cache 106 is not written to the register files 224 until the particular load instruction has completed the T stage.
- the stage bits in the four-bit status word 410 for the particular load instruction are therefore shifted right by one bit and the stage bits corresponding to the A2, A3, and T stages are placed in the 3-bit stage field 510 of the LDX entry associated with the particular load instruction.
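- A minimal sketch of that derivation, assuming the status word and stage field are held as small integers with the bit ordering used in the earlier sketches:

```c
#include <stdint.h>

/* When load data enters the PCU (at the earliest in the C/A1 stage), the
 * 4-bit status word 410 is shifted right by one and only the bits for the
 * A2, A3, and T stages are kept as the 3-bit LDX stage field 510.  This
 * helper is an illustrative assumption of that derivation. */
static inline uint8_t ldx_stage_from_status(uint8_t status_word) {
    return (status_word >> 1) & 0x7;  /* keep the A2/A3/T bits only */
}
```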
- the PCU 226 detects the transition from one pipeline stage to the next.
- the PCU 226 shifts the stage bits in the stage field 510 to the right by one bit position. Because only one stage bit, at the most, is set for a load instruction at any one time, shifting to the right effectively resets the stage bit for the last stage and sets the stage bit for the current stage.
- the values of the stage field 510 for each pipeline stage that the LDX tracks are set forth below in Table 2.
- Table 2 illustrates that the sequential shift-right scheme for each successive transition from one pipeline stage to the next has the effect that all stage bits are reset for the WB stage 362 and any stages that occur after the load instruction has reached its WB stage 362. If a trap is detected before a load instruction reaches the WB stage 362, the load instruction is invalidated.
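- Table 2 itself is not reproduced here; the values below are the ones implied by the shift-right scheme for the 3-bit stage field 510 and are given as an assumed reconstruction.

```c
/* Assumed contents of Table 2, implied by the shift-right scheme for the
 * 3-bit LDX stage field 510 (not reproduced in this text). */
enum {
    LDX_STAGE_A2 = 0x4,  /* b'100' - load is in the A2 stage              */
    LDX_STAGE_A3 = 0x2,  /* b'010' - load is in the A3 stage              */
    LDX_STAGE_T  = 0x1,  /* b'001' - load is in the T stage               */
    LDX_STAGE_WB = 0x0   /* b'000' - all bits reset; safe to write back   */
};
```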
- the valid bit 520 in the LDX entry is reset by the pcu_trap signal that indicates that the PCU 226 has detected a trap.
- Each LDX entry ldx1, ldx2, ldx3, ldx4 also includes a dsize field.
- the dsize field indicates whether the data associated with the load instruction is a 64-bit data word or a 32-bit data word.
- the data is staged in the LDX 500 during the A2 and A3 stages 336, 338.
- the presence of trap conditions is detected by the PCU 226 in the T stage 360 of the load/store pipeline 300.
- FIGURE 3 illustrates that the two terminating stages of the load/store pipe 300 include a trap-handling stage 360 (T stage) and a write-back stage 362 (WB stage) during which result data is written back to the register file 216. Processing of a load instruction during each of these stages is discussed in detail below.
- FIGURE 5 illustrates that, if the PCU 226 detects a trap, it generates a trap signal pcu_trap. This signal is used during the T stage 360 to reset the "valid" bit in the LDX entries for load instructions that are younger than the trapping instruction.
- the PCU 226 sends the pcu_trap signal to the LSU 218, and the LSU 218 then resets its valid bits 420A, 420B, 420C, 420D, 420E for any load instructions in the load buffer 400 that are younger than the trapping instruction.
- the load instruction will only be invalidated if it has not reached the WB stage 362 by the time the trap is detected. In other words, any load instruction that has reached the WB stage 362 may be written to the register files 224, regardless of its age, since it obviously was not canceled before or during the trap stage of its pipeline.
- the LSU 218 and PCU 226 determine whether a load instruction is "younger" than a trapping instruction as follows.
- the LDX stage field 510 and the load buffer status word 410 each keep track of the age of a load instruction.
- the LDX 500 will, at the earliest, receive the load instruction one cycle after the LSU 218 receives it; the stage field 510 has one less bit than the status word 410 as the PCU 226 keeps track of one less stage.
- as Table 1 and Table 2 demonstrate, the status word 410 and stage field 510 will always have a non-zero value until the load instruction reaches the WB stage 362.
- the PCU 226 and LSU 218 therefore determine that a load instruction is "younger" than the trapping instruction if the age of the load instruction is non-zero, since traps are detected in the T stage 360.
- the PCU 226 resets the valid bit 520 for any LDX entry ldx1, ldx2, ldx3 whose stage field 510 is non-zero.
- the valid bit 520 is reset before the LDX entries ldx1, ldx2, ldx3 are shifted down in cycle N+1.
- a value of all zeros in the stage field 510 indicates that the load data is safe to broadcast to the register files 224 because it has proceeded past the T stage 360 and has at least reached the WB stage 362.
- in the load buffer 400, a status word 410 value of all zeros means that a cache miss has occurred and has remained outstanding for a relatively long time. Only load instructions that have missed the cache or have otherwise not retrieved their data remain in the load buffer; cache hits and load instructions that have retrieved data from the memory interface 102 are sent to the LDX 500 as described above. All zeros in the status word 410 for a load instruction in the load buffer 400 means, then, that a miss has occurred while the load instruction's pipeline stages completed to, or past, the WB stage 362. In this case, the load instruction need not be canceled, since it is older than the trapping instruction. In contrast, the LSU 218 cancels any load that has a non-zero value in its status word 410 when the LSU 218 receives the pcu_trap indicator from the PCU 226.
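- Pulling the two sides together, the following C sketch models the T-stage trap response described above: the PCU cancels staged LDX entries with a non-zero stage field, and the LSU, on receiving pcu_trap, cancels load buffer entries with a non-zero status word. Entry layouts and names are assumptions for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the T-stage trap response, combining the PCU and
 * LSU sides.  The rule itself (a non-zero age means the load is the same
 * age as or younger than the trapping instruction, so it is canceled)
 * follows the description; the data layouts are assumptions. */
typedef struct { bool valid; uint8_t stage;  } pcu_ldx_entry;  /* LDX 500        */
typedef struct { bool valid; uint8_t status; } lsu_lb_entry;   /* load buffer 400 */

static void on_pcu_trap(pcu_ldx_entry ldx[4], lsu_lb_entry lb[5]) {
    /* PCU: reset the valid bit of any staged load whose 3-bit stage
     * field is still non-zero (it has not yet passed the T stage). */
    for (int i = 0; i < 4; i++)
        if (ldx[i].stage != 0) ldx[i].valid = false;

    /* PCU asserts pcu_trap to the LSU; the LSU cancels any load in the
     * load buffer whose 4-bit status word is still non-zero.  Loads with
     * an all-zero status word have already reached or passed the WB stage
     * (e.g. a long-outstanding miss), are older than the trapping
     * instruction, and are left alone. */
    for (int i = 0; i < 5; i++)
        if (lb[i].status != 0) lb[i].valid = false;
}
```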
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Advance Control (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU76224/00A AU7622400A (en) | 1999-10-01 | 2000-09-29 | A method for precise trap handling in case of speculative and out-of-order loads |
EP00965521A EP1221087A1 (en) | 1999-10-01 | 2000-09-29 | A method for precise trap handling in case of speculative and out-of-order loads |
JP2001528796A JP2003511754A (en) | 1999-10-01 | 2000-09-29 | Method for accurate trapping in case of speculative and out-of-order loading |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US41182499A | 1999-10-01 | 1999-10-01 | |
US09/411,824 | 1999-10-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001025903A1 (en) | 2001-04-12 |
Family
ID=23630480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/026815 WO2001025903A1 (en) | 1999-10-01 | 2000-09-29 | A method for precise trap handling in case of speculative and out-of-order loads |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1221087A1 (en) |
JP (1) | JP2003511754A (en) |
AU (1) | AU7622400A (en) |
WO (1) | WO2001025903A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6542988B1 (en) | 1999-10-01 | 2003-04-01 | Sun Microsystems, Inc. | Sending both a load instruction and retrieved data from a load buffer to an annex prior to forwarding the load data to register file |
WO2004068361A1 (en) * | 2003-01-27 | 2004-08-12 | Fujitsu Limited | Storage control device, data cache control device, central processing unit, storage device control method, data cache control method, and cache control method |
US7634639B2 (en) * | 2005-08-23 | 2009-12-15 | Sun Microsystems, Inc. | Avoiding live-lock in a processor that supports speculative execution |
US9720693B2 (en) | 2015-06-26 | 2017-08-01 | Microsoft Technology Licensing, Llc | Bulk allocation of instruction blocks to a processor instruction window |
US9792252B2 (en) | 2013-05-31 | 2017-10-17 | Microsoft Technology Licensing, Llc | Incorporating a spatial array into one or more programmable processor cores |
US9946548B2 (en) | 2015-06-26 | 2018-04-17 | Microsoft Technology Licensing, Llc | Age-based management of instruction blocks in a processor instruction window |
US9952867B2 (en) | 2015-06-26 | 2018-04-24 | Microsoft Technology Licensing, Llc | Mapping instruction blocks based on block size |
US10169044B2 (en) | 2015-06-26 | 2019-01-01 | Microsoft Technology Licensing, Llc | Processing an encoding format field to interpret header information regarding a group of instructions |
US10175988B2 (en) | 2015-06-26 | 2019-01-08 | Microsoft Technology Licensing, Llc | Explicit instruction scheduler state information for a processor |
US10191747B2 (en) | 2015-06-26 | 2019-01-29 | Microsoft Technology Licensing, Llc | Locking operand values for groups of instructions executed atomically |
US10346168B2 (en) | 2015-06-26 | 2019-07-09 | Microsoft Technology Licensing, Llc | Decoupled processor instruction window and operand buffer |
US10409606B2 (en) | 2015-06-26 | 2019-09-10 | Microsoft Technology Licensing, Llc | Verifying branch targets |
US10409599B2 (en) | 2015-06-26 | 2019-09-10 | Microsoft Technology Licensing, Llc | Decoding information about a group of instructions including a size of the group of instructions |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5903918A (en) * | 1995-08-23 | 1999-05-11 | Sun Microsystems, Inc. | Program counter age bits |
US5930491A (en) * | 1997-06-18 | 1999-07-27 | International Business Machines Corporation | Identification of related instructions resulting from external to internal translation by use of common ID field for each group |
2000
- 2000-09-29 AU AU76224/00A patent/AU7622400A/en not_active Abandoned
- 2000-09-29 WO PCT/US2000/026815 patent/WO2001025903A1/en active Application Filing
- 2000-09-29 EP EP00965521A patent/EP1221087A1/en not_active Withdrawn
- 2000-09-29 JP JP2001528796A patent/JP2003511754A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2003511754A (en) | 2003-03-25 |
AU7622400A (en) | 2001-05-10 |
EP1221087A1 (en) | 2002-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5887161A (en) | Issuing instructions in a processor supporting out-of-order execution | |
US6021485A (en) | Forwarding store instruction result to load instruction with reduced stall or flushing by effective/real data address bytes matching | |
JP3588755B2 (en) | Computer system | |
US5961636A (en) | Checkpoint table for selective instruction flushing in a speculative execution unit | |
US6542988B1 (en) | Sending both a load instruction and retrieved data from a load buffer to an annex prior to forwarding the load data to register file | |
US6138230A (en) | Processor with multiple execution pipelines using pipe stage state information to control independent movement of instructions between pipe stages of an execution pipeline | |
US7444498B2 (en) | Load lookahead prefetch for microprocessors | |
US5630149A (en) | Pipelined processor with register renaming hardware to accommodate multiple size registers | |
US5913048A (en) | Dispatching instructions in a processor supporting out-of-order execution | |
EP0751458B1 (en) | Method and system for tracking resource allocation within a processor | |
US5150469A (en) | System and method for processor pipeline control by selective signal deassertion | |
US7620799B2 (en) | Using a modified value GPR to enhance lookahead prefetch | |
US6098167A (en) | Apparatus and method for fast unified interrupt recovery and branch recovery in processors supporting out-of-order execution | |
US8255670B2 (en) | Replay reduction for power saving | |
US5931957A (en) | Support for out-of-order execution of loads and stores in a processor | |
EP0649085A1 (en) | Microprocessor pipe control and register translation | |
JP2003514274A (en) | Fast multithreading for closely coupled multiprocessors | |
US6073231A (en) | Pipelined processor with microcontrol of register translation hardware | |
KR19990072271A (en) | High performance speculative misaligned load operations | |
EP0649086B1 (en) | Microprocessor with speculative execution | |
EP1221087A1 (en) | A method for precise trap handling in case of speculative and out-of-order loads | |
US5841999A (en) | Information handling system having a register remap structure using a content addressable table | |
US7089405B2 (en) | Index-based scoreboarding system and method | |
US6928534B2 (en) | Forwarding load data to younger instructions in annex |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
ENP | Entry into the national phase |
Ref country code: JP Ref document number: 2001 528796 Kind code of ref document: A Format of ref document f/p: F |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2000965521 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2000965521 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |