WO2001061480A1 - Processor having replay architecture with fast and slow replay paths - Google Patents

Processor having replay architecture with fast and slow replay paths

Info

Publication number
WO2001061480A1
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
replay
error
execution
type
Prior art date
Application number
PCT/US2000/035590
Other languages
French (fr)
Inventor
Michael D. Upton
David A. Sager
Darrell D. Boggs
Glenn J. Hinton
Original Assignee
Intel Corporation
Priority date
Filing date
Publication date
Priority claimed from US 09/503,853 (published as US 6,735,688 B1)
Application filed by Intel Corporation
Priority to AU2001224640A1
Priority to DE10085438B4
Priority to GB2376328B
Priority to KR100508320B1
Publication of WO2001061480A1
Priority to HK1048872B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3861Recovery, e.g. branch miss-prediction, exception handling
    • G06F9/3863Recovery, e.g. branch miss-prediction, exception handling using multiple copies of the architectural state, e.g. shadow registers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3842Speculative instruction execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3861Recovery, e.g. branch miss-prediction, exception handling

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Advance Control (AREA)
  • Hardware Redundancy (AREA)
  • Debugging And Monitoring (AREA)

Abstract

According to one aspect of the invention, a microprocessor is provided that includes an execution core, a first replay mechanism and a second replay mechanism. The execution core performs data speculation in executing a first instruction. The first replay mechanism is used to replay the first instruction via a first replay path if an error of a first type is detected which indicates that the data speculation is erroneous. The second replay mechanism is used to replay the first instruction via a second replay path if an error of a second type is detected which indicates that the data speculation is erroneous.

Description

PROCESSOR HAVING REPLAY ARCHITECTURE WITH FAST AND SLOW REPLAY PATHS
CROSS-REFERENCES TO RELATED APPLICATIONS
This application is a continuation-in-part of Application No. 09/222,805, filed on December 30, 1998, which is a continuation-in-part of Application No. 08/746,547, filed on November 23, 1996, now U.S. Patent No. 5,966,544. This application and the above-identified applications are all assigned to Intel Corporation of Santa Clara, California.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to the field of processors, and more specifically to a replay architecture having fast and slow replay paths for facilitating data-speculating operations.
2. Background Information
Figure 1 shows a block diagram of one embodiment of a processor 100 disclosed in U.S. Patent No. 5,966,544. The processor 100 shown in Figure 1 includes an I/O ring 111 which operates at a first clock frequency (I/O clock), a latency-tolerant execution core 121 which operates at a second clock frequency (e.g., slow clock), a latency-intolerant execution sub-core 131 which operates at a third clock frequency (e.g., medium clock), and a latency-critical execution sub-core 141 which operates at a fourth clock frequency (e.g., fast clock). The processor 100 shown in Figure 1 also includes clock multiplication and/or division units 110, 120, and 130 which are configured to provide appropriate clocking to the various portions or sub-cores of the processor 100, as taught in the prior application. The specific portion of the prior application's teachings which is most pertinent here is that the execution core may include two or more portions (sub-cores) which operate at different clock rates.
In operation, the I/O ring 111 communicates with the rest of the computer system (not shown) by performing various I/O operations, such as memory reads and writes, at the I/O clock frequency. For example, the processor 100 may perform an I/O operation at the I/O ring 111 at the I/O clock frequency to read in data from an external memory device. The various execution sub-cores 121, 131, and 141 can perform various functions or operations with respect to the input instructions and/or input data at their respective clock frequencies. For example, the latency-tolerant execution sub-core 121 may perform an execution operation on the input data to produce a first result. The latency-intolerant sub-core 131 may perform an execution operation on the first result to produce a second result. Similarly, the latency-critical execution sub-core 141 may perform another execution operation on the second result to produce a third result. The various operations performed by the various execution sub-cores may include arithmetic operations, logic operations, and other operations, etc. It should be appreciated and understood by one skilled in the art that the execution order in which the various operations are performed need not necessarily follow the hierarchical order of the various execution sub-cores. For example, the input data could go immediately and directly to the innermost sub-core, and the result obtained therefrom could go from the innermost sub-core to any other sub-core or back to the I/O ring 111 for write-back. In addition, as disclosed and taught in the prior application, on-chip cache structures may be split across two or more portions of the processor 100. As such, certain operations and/or functions can be performed at one clock frequency with respect to one aspect of the data stored in the on-chip cache while other operations and/or functions can be performed at a different frequency with respect to another aspect of the data stored in the on-chip cache. For example, way-predictor hit/miss detection with respect to the on-chip cache may be performed in one sub-core at one clock frequency while TLB hit/miss detection and/or page-fault detection may be performed in another sub-core at a different frequency. As such, certain errors and conditions can be detected earlier in the execution process than other errors and conditions.
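To illustrate why split clock domains matter here, the following Python sketch (not part of the patent disclosure; the clock frequencies and cycle counts are assumed values) models how a check performed in a faster sub-core can complete well before a check performed in a slower clock domain, which is what makes earlier error detection possible.

```python
# Illustrative only: clock ratios and cycle counts are assumed, not taken
# from the patent.  The point is that a check running in a faster clock
# domain can finish well before a check running in a slower domain.

FAST_CLOCK_GHZ = 4.0      # assumed "fast clock" (latency-critical sub-core)
SLOW_CLOCK_GHZ = 1.0      # assumed "slow clock" (latency-tolerant portion)

def detection_time_ns(cycles: int, clock_ghz: float) -> float:
    """Time to complete a check that takes `cycles` cycles at `clock_ghz`."""
    return cycles / clock_ghz

# A way-predictor check done in the fast domain (e.g., 2 fast cycles)...
way_predictor_ns = detection_time_ns(2, FAST_CLOCK_GHZ)
# ...versus TLB/tag and hit/miss checks done in a slower domain.
tlb_tag_ns = detection_time_ns(3, SLOW_CLOCK_GHZ)

print(f"way-predictor check completes after ~{way_predictor_ns:.2f} ns")
print(f"TLB/tag + hit/miss check completes after ~{tlb_tag_ns:.2f} ns")
```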
Figure 2 illustrates a block diagram of one embodiment of a processor 200 disclosed in the prior application which includes a generalized replay architecture to facilitate data speculation operations. In this embodiment, the processor 200 includes a scheduler 231 coupled to a multiplexor 241 to provide instructions received from an instruction cache (I-cache) 211 to an execution core 251 for execution. The execution core 251 may perform data speculation in executing the various instructions received from the multiplexor 241. The processor 200 as shown in Figure 2 includes a checker unit 281 to send a copy of the executed instruction back to the execution core 251 for re-execution (replay) if it is determined that the data speculation is erroneous. However, in this generalized replay architecture, the checker unit 281 is positioned after the execution core 251, after the TLB and tag logic 261, and after the cache hit/miss logic 271. For some instructions, incorrect execution (i.e., erroneous data speculation) could have been detected earlier than this checker positioning permits. Specifically, there are cases in which certain errors and conditions indicating that the data speculation is erroneous can be detected even before the TLB/tag logic 261 and the hit/miss logic 271 have acted. Unfortunately, because of this positioning of the checker unit 281, instructions executed incorrectly due to erroneous data speculation are not sent back to the execution core 251 for re-execution or replay until they reach the checker unit 281. There is therefore an unnecessary delay between the time when an instruction is known to have been executed incorrectly and the time when it is actually sent back for re-execution, and system performance is not optimized as much as it could be if those incorrectly executed instructions were replayed earlier in the process.
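The cost of the single, late checker can be sketched with a small behavioral model (illustrative only; the stage names follow Figure 2, but the one-cycle-per-stage assumption is hypothetical):

```python
# Illustrative model of the Figure 2 arrangement: one checker positioned
# after the execution core, the TLB/tag logic, and the hit/miss logic.
STAGES = ["execute", "tlb_and_tag", "cache_hit_miss", "checker"]

def wasted_cycles(detectable_after: str, cycles_per_stage: int = 1) -> int:
    """Cycles spent between the point where the error was already detectable
    and the single checker that actually triggers the replay."""
    remaining = len(STAGES) - 1 - STAGES.index(detectable_after)
    return remaining * cycles_per_stage

# An error detectable right after execution still waits three more stages...
print(wasted_cycles("execute"))         # -> 3
# ...whereas an error only detectable after hit/miss detection waits one.
print(wasted_cycles("cache_hit_miss"))  # -> 1
```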
SUMMARY OF THE INVENTION
According to one aspect of the invention, a microprocessor is provided that includes an execution core, a first replay mechanism and a second replay mechanism. The execution core performs data speculation in executing a first instruction. The first replay mechanism is used to replay the first instruction via a first replay path if an error of a first type is detected which indicates that the data speculation is erroneous. The second replay mechanism is used to replay the first instruction via a second replay path if an error of a second type is detected which indicates that the data speculation is erroneous.
BRIEF DESCRIPTION OF THE DRAWINGS
The features and advantages of the present invention will be more fully understood by reference to the accompanying drawings, in which:
Figure 1 is a block diagram of one embodiment of a processor including various sub-cores operated at different frequencies;
Figure 2 shows a block diagram of one embodiment of a processor having a generalized replay architecture;
Figure 3 illustrates a flow diagram of one embodiment of a processor pipeline in which the teachings of the present invention are implemented;
Figure 4 shows a block diagram of one embodiment of a processor having first and second replay mechanisms; Figure 5 shows a more detailed block diagram of one embodiment of a processor having first and second replay mechanisms; and
Figure 6 shows a flow diagram of one embodiment of a method according to the teachings of the present invention.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be appreciated by one skilled in the art that the present invention may be practiced without these specific details.
In the discussion below, the teachings of the present invention are utilized to implement a method, apparatus, and system for facilitating data speculation in executing input instructions. To reduce execution time, an execution unit performs data speculation in executing an input instruction. If the data speculation is erroneous, the input instruction is re-executed by the execution unit until the correct result is obtained. In one embodiment, the data speculation is determined to be erroneous if errors of a first type or errors of a second type are detected. Errors of the first type can be detected earlier than errors of the second type. In one embodiment, a first checker is responsible for sending a first copy of the input instruction back to the execution unit for re-execution or replay if an error of the first type is detected with respect to the execution of the input instruction. A second checker is responsible for sending a second copy of the input instruction back to the execution unit for re-execution or replay if an error of the second type is detected with respect to the execution of the input instruction. In one embodiment, a selector is used to provide either a subsequent input instruction, the first copy of the incorrectly executed instruction, or the second copy of the incorrectly executed instruction to the execution unit for execution, based upon a predetermined priority scheme. The teachings of the present invention are applicable to any processor or machine that performs data speculation in executing instructions. However, the present invention is not limited to processors or machines that perform data speculation and can be applied to any processor or machine in which multiple levels of replay mechanisms are needed.
Figure 3 is a block diagram of one embodiment of a processor pipeline 300 within which the present invention may be implemented. For the purposes of the present specification, the term "processor" refers to any machine that is capable of executing a sequence of instructions and shall be taken to include, but not be limited to, general purpose microprocessors, special purpose microprocessors, graphics controllers, audio processors, video processors, multi-media controllers, and micro-controllers. The processor pipeline 300 includes various processing stages beginning with a fetch stage 310. At this stage, instructions are retrieved and fed into the pipeline 300. For example, a macroinstruction may be retrieved from a cache memory that is integral within the processor or closely associated therewith, or may be retrieved from an external memory unit via a system bus. The instructions retrieved at the fetch stage 310 are then input into a decode stage 320 where the instructions or macroinstructions are decoded into microinstructions or micro-operations (also referred to as UOPs or uops herein) for execution by the processor. At an allocate stage 330, processor resources necessary for the execution of the microinstructions are allocated. The next stage in the pipeline is a rename stage 340 where references to external registers are converted into internal register references to eliminate false dependencies caused by register reuse. At a schedule/dispatch stage 350, each microinstruction or UOP is scheduled and dispatched to an execution unit. The microinstructions or UOPs are then executed at an execute stage 360. After execution, the microinstructions are then retired at a retire stage 370.
In one embodiment, the various stages described above can be organized into three phases. The first phase can be referred to as an in-order front end including the fetch stage 310, decode stage 320, allocate stage 330, and rename stage 340. During the in-order front end phase, the instructions proceed through the pipeline 300 in their original program order. The second phase can be referred to as the out-of-order execution phase including the schedule/dispatch stage 350 and the execute stage 360. During this phase, each instruction may be scheduled, dispatched, and executed as soon as its data dependencies are resolved and the appropriate execution unit is available, regardless of its sequential position in the original program. The third phase, referred to as the in-order retirement phase, includes the retire stage 370, in which instructions are retired in their original, sequential program order to preserve the integrity and semantics of the program.
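A minimal Python sketch of the three-phase idea follows (the UOPs, dependencies, and latencies are hypothetical; real scheduling hardware is far more elaborate): instructions are dispatched as soon as their sources are ready, but retirement always follows program order.

```python
# Hypothetical UOPs: (name, dependencies, execution latency in cycles).
program = [
    ("load",  [],        3),   # long-latency load
    ("add",   ["load"],  1),   # depends on the load
    ("xor",   [],        1),   # independent of the load
    ("store", ["xor"],   1),
]

completed = {}        # name -> cycle at which the result became available
in_flight = []        # (finish_cycle, name)
pending = list(program)
dispatch_order = []
cycle = 0

while pending or in_flight:
    # Dispatch every UOP whose sources are ready, regardless of program order.
    for uop in list(pending):
        name, deps, latency = uop
        if all(d in completed for d in deps):
            in_flight.append((cycle + latency, name))
            dispatch_order.append(name)
            pending.remove(uop)
    # Complete anything whose latency has elapsed.
    for finish, name in list(in_flight):
        if finish <= cycle:
            completed[name] = cycle
            in_flight.remove((finish, name))
    cycle += 1

print("dispatch order:", dispatch_order)               # e.g. load, xor, store, add
print("retire order:  ", [n for n, _, _ in program])   # always program order
```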
In a processor having a replay architecture, certain liberties may be taken with respect to the scheduling and execution of the input instructions. For example, an input UOP may be dispatched to the execution unit for execution even though its source data may not yet be ready or known. If it is determined that the data speculation is erroneous with respect to the execution of the input UOP, the respective UOP can be sent back to the execution unit for re-execution (replay) until the correct result is obtained. It is, of course, desirable to limit the amount of replay or re-execution, as each replayed UOP uses available resources and degrades overall system performance. Nevertheless, a net performance gain may be obtained by taking such chances. For example, if the majority of UOPs get executed correctly in a reduced number of cycles, and only a few UOPs have to be replayed, then the overall throughput will improve compared with the lowest-common-denominator case of making all UOPs wait as long as the worst case might take.
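The net-performance argument can be made concrete with a back-of-the-envelope calculation; the hit rate, latencies, and replay penalty below are assumptions chosen for illustration, not figures from the patent.

```python
# Assumed, illustrative numbers -- not taken from the patent.
speculative_latency  = 4     # cycles if we dispatch assuming the data is ready
replay_penalty       = 12    # extra cycles when the speculation turns out wrong
conservative_latency = 10    # cycles if every UOP waits for the worst case
hit_rate = 0.95              # fraction of UOPs whose speculation is correct

expected_speculative = (hit_rate * speculative_latency
                        + (1 - hit_rate) * (speculative_latency + replay_penalty))
print(f"speculate-and-replay: {expected_speculative:.2f} cycles/UOP on average")
print(f"always wait (worst case): {conservative_latency} cycles/UOP")
```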
As taught in the prior application, the on-chip data cache (also referred to as the level zero or L0 data cache) may be split such that its data storage array resides in a higher clock domain than the logic which provides hit/miss determination with respect to the data storage array. The TLB and tag logic may also reside in a slower clock domain than the data storage array. The TLB and tag logic may be, but are not required to be, in the same clock domain as the hit/miss logic.
One of the instances where a net performance gain may be obtained is the case of UOPs whose execution relies on or uses data from the L0 data cache. Rather than making all UOPs wait until their source data are determined to be valid, it is better to speculatively dispatch and execute some UOPs early in the process, even though it cannot yet be known - only suspected as likely - that their source data reside in the L0 data cache. In the majority of cases, the L0 data cache will be hit and valid data will be used as sources. In only a small number of cases, the data speculation is erroneous and the UOPs will have to be replayed. As such, the majority of UOPs are correctly executed in a reduced number of cycles, thus improving the overall performance.
Figure 4 is a block diagram of one embodiment of a processor 400 having first (also referred to as fast or early) and second (also referred to as slow or late) replay paths to facilitate data speculation in executing instructions. As shown in Figure 4, the processor 400 includes a scheduler/dispatcher 411 coupled to an instruction cache (not shown) to schedule and dispatch a first instruction received from the instruction cache to an execution core 431 for execution via a selector (e.g., multiplexor) 421. The execution core 431, in one embodiment, performs data speculation in executing an input instruction. As described above, the input instruction may be dispatched and executed even though its source data may not have been ready or known. For example, the execution of the input instruction may require source data that may or may not be in the L0 data cache. However, as explained above, net performance may be gained by speculating that the required source data for the execution of the input instruction reside in the L0 data cache. The processor 400 further includes a first replay mechanism 441 to replay the input instruction if an error of a first type is detected indicating that the data speculation is erroneous. In one embodiment, the error of the first type is detectable within a first period. The processor 400 also includes a second replay mechanism 451 to replay the input instruction if an error of a second type is detected indicating that the data speculation is erroneous. In one embodiment, the error of the second type is detectable within a second period which is longer than the first period. As such, if an error of the first type is detected, the present invention allows the incorrectly executed instruction to be replayed much faster than it would have been if the instruction had to wait until an error of the second type could be detected. As shown in Figure 4, if it is determined that the execution of the input instruction was performed incorrectly because an error of the first type has been detected, indicating that the data speculation is erroneous, the first replay mechanism (the fast or early checker) 441 will send the respective instruction back to the execution core 431 for re-execution (replay) via the multiplexor 421. Likewise, if it is determined that the execution of the input instruction is incorrect because an error of the second type has been detected, indicating that the data speculation is erroneous or that other error conditions are present, the second replay mechanism (the slow or late checker) 451 will send the respective instruction back to the execution core 431 for re-execution (replay) via the multiplexor 421. The functions and operations of the first and second replay mechanisms shown in Figure 4 are described in more detail below.
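As a rough sketch of why the two replay paths matter, the following model compares the replay turnaround of the fast and slow paths; the pipeline depth and detection delays are hypothetical numbers, not values from the disclosure.

```python
# Hypothetical stage counts, for illustration only.
EXEC_LATENCY      = 4   # cycles from dispatch to the end of execution
FAST_DETECT_DELAY = 1   # additional cycles before the early (fast) checker decides
SLOW_DETECT_DELAY = 6   # additional cycles before the late (slow) checker decides

def replay_turnaround(detect_delay: int) -> int:
    """Cycles from the original dispatch until the replayed copy is re-dispatched."""
    return EXEC_LATENCY + detect_delay

print("fast-path replay turnaround:", replay_turnaround(FAST_DETECT_DELAY), "cycles")
print("slow-path replay turnaround:", replay_turnaround(SLOW_DETECT_DELAY), "cycles")
```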
Figure 5 shows a more detailed block diagram of one embodiment of a processor 500 having first and second replay paths as described above with respect to Figure 4. As shown in Figure 5, the processor 500 includes a scheduler 511 that schedules and dispatches instructions received from an instruction cache (not shown) to an execution core 531 for execution via a multiplexor 521. The function and operation of the multiplexor 521 are described in detail below. In one embodiment, the execution core 531 performs data speculation in executing an input instruction received from the multiplexor 521. The processor 500 further includes a first delay unit 541 to make a first copy of the input instruction and to hold the first copy of the input instruction for at least one clock cycle in a first clock domain. The processor 500 also includes a first checker 545 coupled to the first delay unit 541 and the execution core 531. In one embodiment, the first checker 545 is configured to determine whether the data speculation is erroneous with respect to a first set of error types and to send the first copy of the input instruction back to the execution core via a first buffer 547 for re-execution if the data speculation is erroneous with respect to the first set of error types. As shown in Figure 5, the processor 500 further includes a second delay unit 551 coupled to the first delay unit and configured, in one embodiment, to make and hold a second copy of the input instruction for at least one clock cycle in a second clock domain. The processor 500 includes a second checker 555 coupled to the second delay unit 551 and the first checker 545. In one embodiment, the second checker is configured to determine whether the execution of the instruction is erroneous with respect to a second set of error types and to send the second copy of the input instruction back to the execution core 531 for re-execution via a second buffer 557 if the execution is erroneous with respect to the second set of error types. As shown in Figure 5, the multiplexor 521 is coupled to the scheduler 511, the execution core 531, the first delay unit 541, the first checker 545, the second checker 555, the first buffer 547, and the second buffer 557. In one embodiment, the multiplexor 521 is configured to receive the input instruction and a subsequent instruction from the instruction cache, the first copy of the input instruction from the first checker, and the second copy of the input instruction from the second checker. In one embodiment, the multiplexor 521 is further configured to selectively provide either the subsequent instruction, the first copy of the input instruction, or the second copy of the input instruction to the execution core 531 for execution, based on a predetermined priority scheme. In one embodiment, the second copy of the input instruction is given a first priority for execution, the first copy of the input instruction is given a second priority for execution, and the subsequent instruction is given a third priority for execution. In one embodiment, the first priority is higher than the second priority and the second priority is higher than the third priority. In one embodiment, the first set of error types is a subset of the second set of error types. In another embodiment, the first set of error types is complementary to the second set of error types. In one embodiment, the first set of error types includes an error indicating that the level zero cache way predictor is missed, an error indicating that the level zero cache CAM extension is mismatched, and an error indicating that store forwarding buffer data is unknown. In one embodiment, the second set of error types contains an error indicating a TLB miss, an error indicating a page fault, or any other error type indicating that the instruction was executed incorrectly and needs to be replayed. In one embodiment, the first delay unit 541 is configured to provide the first copy of the input instruction to the first checker 545 after a predetermined number of clock cycles in the first clock domain. In one embodiment, the predetermined number of clock cycles in the first clock domain corresponds approximately to a time delay of the input instruction through the execution core.
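The division of error types between the two checkers described above can be summarized in a short sketch; the error names are invented identifiers that mirror the conditions listed in the text, and the subset arrangement shown is the one the description identifies as preferred.

```python
# Error types handled by the early (fast) checker in the described embodiment.
FIRST_SET = {
    "L0_WAY_PREDICTOR_MISS",      # way predictor missed -> data cannot be in L0
    "L0_CAM_EXTENSION_MISMATCH",  # way predictor hit but the tags do not match
    "STORE_FORWARD_DATA_UNKNOWN", # store-forwarding buffer data not yet available
}

# Error types handled by the late (slow) checker: TLB miss, page fault, and,
# in the preferred "subset" arrangement, everything the fast checker sees too.
SECOND_SET = FIRST_SET | {"TLB_MISS", "PAGE_FAULT"}

def replay_path(error: str) -> str:
    """Pick the replay path for a detected error (illustrative sketch only)."""
    if error in FIRST_SET:
        return "fast replay via first checker 545"
    if error in SECOND_SET:
        return "slow replay via second checker 555"
    return "no replay needed"

print(replay_path("L0_WAY_PREDICTOR_MISS"))  # -> fast replay
print(replay_path("TLB_MISS"))               # -> slow replay
```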
There are instances where another unit within the processor 500 can generate its own instructions to perform its corresponding function. For example, a memory control unit or memory execution unit (not shown) in the processor 500 may occasionally need to dispatch instructions for execution within its own pipeline, including full store operations or UOPs to handle page splits, TLB reloading, etc. These types of instructions are referred to as manufactured instructions because they are generated or manufactured by a unit within the processor 500 and are not in the instruction flow from the instruction cache. In one embodiment, the multiplexor 521 is also coupled to receive the manufactured instructions and send them to the execution core 531 for execution. Since the multiplexor 521 may receive instructions from different paths at the same time, a predetermined priority scheme is needed to coordinate the execution priority among the instructions sent to the multiplexor 521 from different paths. For example, the multiplexor may, in the same processing period or clock cycle, receive a subsequent instruction from the scheduler 511, a first copy of an input instruction to be replayed from the first checker 545, another input instruction to be replayed from the second checker 555, and also a manufactured instruction from another unit (e.g., the memory control or execution unit). In one embodiment, the multiplexor 521 gives a low priority to instructions coming from the instruction cache, a medium priority to replay instructions coming from the first checker, a high priority to replay instructions coming from the second checker, and the highest priority to manufactured instructions.
As mentioned above, in one embodiment, the error conditions detected by the first checker 545 can be a subset of the error conditions detected by the second checker 555. In this case, the second checker 555 needs to provide robust checking because a UOP cannot be replayed once it gets past the second checker 555. In another embodiment, the error conditions handled by the first checker 545 can be complementary to the error conditions handled by the second checker 555. In this case, the first checker 545 would need to provide robust checking on its set of error cases, rather than the "high confidence but not guaranteed" checking described above, because the late checker would not be re-checking the outcome of the early checker. The subset mode is, therefore, preferred.
As described previously, the second checker 555 can provide additional and/or complementary checking on error cases that are not handled by the first checker 545. The decision as to which cases are handled by which checker may be driven by various factors including, but not limited to, concerns of processor performance, design complexity, die area, etc. In one embodiment, the second checker 555 is responsible for replaying instructions due to TLB misses and other various problems that may arise, for example, in the memory control unit (not shown) of the processor 500. These various problems may include problems or error conditions that are hard to detect in a short amount of time, such as a cache miss based on a full physical address check, incorrect forwarding from a store based on a full physical address check, etc.
In one embodiment, the first checker 545 and the second checker 555 cooperate to control the operation of the multiplexor 521. As shown in Figure 5, the multiplexor 521 performs its corresponding function based upon the select signals received from the first checker 545, the second checker 555, and optionally another select signal from another unit, such as the memory control unit (not shown), that generates a manufactured instruction that is not in the instruction flow from the instruction cache. These various select signals are used by the multiplexor 521 to determine which instruction is to be sent to the execution core 531 for execution in a given processing cycle if more than one instruction from different paths is waiting to be executed. In one embodiment, manufactured instructions are given the first execution priority, the instructions coming from the second checker 555 are given the second priority which is lower than the first priority, the instructions coming from the first checker are given the third priority which is lower than the second priority, and subsequent instructions coming from the instruction cache via the scheduler 511 are given the fourth priority which is lower than the third priority.
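A minimal sketch of the arbitration the multiplexor 521 performs under this priority scheme follows; the function and its inputs are illustrative stand-ins for the hardware select logic driven by the checkers' select signals.

```python
def select_next_instruction(manufactured=None, slow_replay=None,
                            fast_replay=None, from_scheduler=None):
    """Pick one instruction per cycle using the priority order described above:
    manufactured > slow replay (second checker) > fast replay (first checker)
    > new instruction from the scheduler.  Illustrative sketch only."""
    for candidate in (manufactured, slow_replay, fast_replay, from_scheduler):
        if candidate is not None:
            return candidate
    return None  # nothing to execute this cycle

# Example: all four sources present a candidate in the same cycle.
chosen = select_next_instruction(manufactured="tlb_reload_uop",
                                 slow_replay="uop_A_copy2",
                                 fast_replay="uop_B_copy1",
                                 from_scheduler="uop_C")
print(chosen)  # -> 'tlb_reload_uop'
```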
In one embodiment, once a particular UOP has been sent around for fast replay by the first checker 545, the same instantiation of that UOP should not also be sent around for slow replay by the second checker 555, because a duplicate would then exist. To prevent this from happening, in one embodiment, each UOP may include special fields that are used by the first checker 545 and the second checker 555 to coordinate the replay activities between the two checkers. For example, in one embodiment, a UOP may include a field referred to as the NEEDS_FAST_REPLAY field, which is set by the first checker 545 to indicate that the first checker 545 wants to send the UOP around for fast replay. The respective UOP may also include another field called the GOT_FAST_REPLAY field, which, in one embodiment, is set by cooperation between the first checker 545 and the second checker 555. For example, assume that the first checker 545 wants to send a first instruction around for fast replay because an error of the first type has been detected. In this case, the first checker 545 will set the corresponding NEEDS_FAST_REPLAY field of the respective UOP to indicate that this particular UOP needs to be replayed on the fast replay path. If the second checker 555 wants to send a second UOP around for slow replay in the same clock cycle, the GOT_FAST_REPLAY field of the first instruction will be cleared and the multiplexor 521 will be controlled to select the slow-replay UOP instead of the one seeking fast replay. Later, when the first UOP reaches the second checker 555, it will be sent around for replay on the slow replay path because its corresponding NEEDS_FAST_REPLAY field has been set.
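The field-based coordination between the two checkers can be modeled as follows; this is a behavioral sketch paraphrasing the description, and the check on GOT_FAST_REPLAY in the second checker is an inference from the stated intent to avoid duplicate replays rather than an explicit statement in the text.

```python
from dataclasses import dataclass

@dataclass
class Uop:
    name: str
    needs_fast_replay: bool = False   # set by the first (fast) checker
    got_fast_replay: bool = False     # maintained jointly by the two checkers

def first_checker(uop: Uop, fast_error: bool, slow_replay_this_cycle: bool) -> str:
    """Fast checker requests a replay; it wins the mux only if the slow path is idle."""
    if fast_error:
        uop.needs_fast_replay = True
        if slow_replay_this_cycle:
            # Slow replay has priority on the multiplexor this cycle, so this
            # UOP does not actually get its fast replay.
            uop.got_fast_replay = False
            return "mux selects the slow-replay UOP"
        uop.got_fast_replay = True
        return "mux selects this UOP for fast replay"
    return "no fast replay requested"

def second_checker(uop: Uop) -> str:
    """Later, replay on the slow path any UOP that asked for but missed fast replay."""
    if uop.needs_fast_replay and not uop.got_fast_replay:
        return "send around for slow replay"
    return "no action"

u = Uop("load_r1")
print(first_checker(u, fast_error=True, slow_replay_this_cycle=True))
print(second_checker(u))   # -> 'send around for slow replay'
```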
Figure 6 illustrates a flow diagram of one embodiment of a method 600 for using fast and slow replay paths to facilitate data speculation operations. The method 600 starts at block 601 and proceeds to block 605. At block 605, an execution core or unit performs data speculation in executing an input instruction. The method 600 then proceeds from block 605 to block 609. At block 609, it is determined whether an error of a first type has been detected. As explained above, in one embodiment, an error of the first type occurs if the L0 data cache way predictor misses (in which case the data cannot be in the L0 data cache), if the L0 data cache CAM extension mismatches (i.e., the way predictor hits but the tags do not match), or if store forwarding buffer data is unknown (i.e., the data is supposed to be forwarded from a store forwarding buffer but the store data is not yet available), etc. At block 613, the input instruction is re-executed if an error of the first type has been detected. As described above, when an error of the first type is detected, a first checker unit (i.e., the fast or early checker) sends a copy of the input instruction around for replay or re-execution on the fast replay path. The method 600 then proceeds to block 617. At block 617, it is determined whether an error of a second type has been detected. In this embodiment, a second checker (i.e., a slow or late checker) is responsible for determining whether an error of the second type has occurred. At block 621, if an error of the second type has occurred, the input instruction is re-executed. As described above, the second checker is responsible for sending a copy of the input instruction around for replay on the slow replay path if an error of the second type has occurred.
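A compact behavioral sketch of method 600 follows, reusing the hypothetical replay_error_t classification from the earlier sketch; execute_with_speculation(), fast_replay(), and slow_replay() are placeholder names standing in for the execution core and the two replay paths and are not taken from the patent.

    /* Assumed placeholders for the execution core and the two replay paths. */
    replay_error_t execute_with_speculation(const void *uop);
    void fast_replay(const void *uop);
    void slow_replay(const void *uop);

    /* Behavioral sketch of method 600 (blocks 605 through 621); not
     * cycle-accurate and not a description of the claimed hardware. */
    static void method_600(const void *input_instruction)
    {
        /* Block 605: execute the input instruction with data speculation. */
        replay_error_t err = execute_with_speculation(input_instruction);

        /* Blocks 609 and 613: the fast (early) checker catches first-type
         * errors and re-executes the instruction via the fast replay path. */
        if (is_first_type(err))
            fast_replay(input_instruction);

        /* Blocks 617 and 621: the slow (late) checker catches second-type
         * errors and re-executes the instruction via the slow replay path. */
        if (is_second_type(err))
            slow_replay(input_instruction);
    }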
The invention has been described in conjunction with the preferred embodiment. It is evident that numerous alternatives, modifications, variations and uses will be apparent to those skilled in the art in light of the foregoing description.

Claims

What is claimed is:
1. A microprocessor comprising: an execution core to perform data speculation in executing a first instruction; a first replay mechanism to replay the first instruction via a first replay path if an error of a first type is detected indicating that the data speculation is erroneous; and a second replay mechanism to replay the first instruction via a second replay path if an error of a second type is detected indicating that the data speculation is erroneous.
2. The microprocessor of claim 1 wherein the error of the first type is detectable within a first period and the error of the second type is detectable within a second period which is longer than the first period.
3. The microprocessor of claim 1 wherein the first replay mechanism comprises: a first delay unit to make a first copy of the first instruction and to hold the first copy for at least one clock cycle in a first clock domain; and a first checker to determine whether the error of the first type has been detected and to send the first copy of the first instruction back to the execution core for replay if the error of the first type has been detected.
4. The microprocessor of claim 3 wherein the second replay mechanism comprises: a second delay unit to make a second copy of the first instruction and to hold the second copy for at least one clock cycle in a second clock domain; and a second checker to determine whether the error of the second type has been detected and to send the second copy of the first instruction back to the execution core for replay if the error of the second type has been detected.
5. The microprocessor of claim 4 further comprising: an instruction cache to store and provide the first instruction and a subsequent instruction to the execution core.
6. The microprocessor of claim 5 further comprising: a selector coupled to receive the subsequent instruction from the instruction cache, the first copy of the first instruction from the first checker, another instruction from the second checker, the selector to provide to the execution core for execution either the subsequent instruction from the instruction cache, the first copy of the first instruction from the first checker, or said another instruction from the second checker, based upon a predetermined priority scheme.
7. The microprocessor of claim 6 wherein the selector comprises a multiplexor.
8. The microprocessor of claim 6 wherein said another instruction is given a first priority for execution, the first copy of the first instruction is given a second priority for execution which is lower than the first priority, and the subsequent instruction is given a third priority for execution which is lower than the second priority.
9. The microprocessor of claim 1 wherein errors of the first type are a subset of errors of the second type.
10. The microprocessor of claim 1 wherein errors of the first type are complementary to errors of the second type.
11. The microprocessor of claim 1 wherein the error of the first type is selected from the group consisting of an error indicating that a level zero cache way predictor is missed, an error indicating that the level zero cache CAM extension is mismatched, and an error indicating that a store forwarding buffer data is unknown.
12. The microprocessor of claim 1 wherein the error of the second type is selected from the group consisting of an error indicating a TLB miss, and an error indicating incorrect forwarding from store based on full physical address check.
13. The microprocessor of claim 3 wherein the first delay unit is to provide the first copy of the first instruction after a predetermined number of clock cycles in the first clock domain, the predetermined number of clock cycles in the first clock domain corresponding approximately to a time delay of the first instruction through the execution core.
14. The microprocessor of claim 6 further comprising:
means for manufacturing instructions that are not in an instruction flow from the instruction cache.
15. The microprocessor of claim 14 wherein the selector is coupled to receive the manufactured instructions and send them to the execution core for execution.
16. The microprocessor of claim 15 wherein the selector gives a low priority to instructions coming from the instruction cache, a medium priority to replay instructions coming from the first checker, a high priority to replay instructions coming from the second checker, and a highest priority to the manufactured instructions.
17. A method comprising: performing data speculation in executing a first instruction in an execution core; re-executing the first instruction through a first replay path in response to an error of a first type indicating that the data speculation is erroneous; and re-executing the first instruction through a second replay path in response to an error of a second type indicating that the data speculation is erroneous.
18. A microprocessor comprising: means for performing data speculation in executing a first instruction; means for re-executing the first instruction via a first replay path if an error of a first type is detected indicating that the data speculation is incorrect; and means for re-executing the first instruction via a second replay path if an error of a second type is detected indicating that the data speculation is incorrect.
19. A processing system comprising: an instruction cache to store and provide a first instruction for execution; a scheduler coupled to the instruction cache, the scheduler to schedule and dispatch the first instruction received from the instruction cache for execution; an execution core coupled to execute the first instruction dispatched from the scheduler, the execution core to perform data speculation in executing the first instruction; a first replay mechanism to send a first copy of the first instruction back to the execution core for re-execution if a first error is detected indicating that the data speculation is erroneous; and a second replay mechanism to send a second copy of the first instruction back to the execution core for re-execution if a second error is detected indicating that the data speculation is erroneous.
PCT/US2000/035590 2000-02-14 2000-12-29 Processor having replay architecture with fast and slow replay paths WO2001061480A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
AU2001224640A AU2001224640A1 (en) 2000-02-14 2000-12-29 Processor having replay architecture with fast and slow replay paths
DE10085438T DE10085438B4 (en) 2000-02-14 2000-12-29 Repeating architecture processor with fast and slow repeat paths
GB0221325A GB2376328B (en) 2000-02-14 2000-12-29 Processor having replay architecture with fast and slow replay paths
KR10-2002-7010573A KR100508320B1 (en) 2000-02-14 2000-12-29 Processor having replay architecture with fast and slow replay paths
HK03101110.0A HK1048872B (en) 2000-02-14 2003-02-17 Processor having replay architecture with fast and slow replay paths

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/503,853 US6735688B1 (en) 1996-11-13 2000-02-14 Processor having replay architecture with fast and slow replay paths
US09/503,853 2000-02-14

Publications (1)

Publication Number Publication Date
WO2001061480A1 true WO2001061480A1 (en) 2001-08-23

Family

ID=24003786

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/035590 WO2001061480A1 (en) 2000-02-14 2000-12-29 Processor having replay architecture with fast and slow replay paths

Country Status (7)

Country Link
KR (1) KR100508320B1 (en)
CN (1) CN1208716C (en)
AU (1) AU2001224640A1 (en)
DE (1) DE10085438B4 (en)
GB (1) GB2376328B (en)
HK (1) HK1048872B (en)
WO (1) WO2001061480A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110320781A1 (en) 2010-06-29 2011-12-29 Wei Liu Dynamic data synchronization in thread-level speculation
KR101254911B1 (en) * 2012-01-31 2013-04-18 서울대학교산학협력단 Method, system and computer-readable recording medium for performing data input and output via multiple path
CN103744800B (en) * 2013-12-30 2016-09-14 龙芯中科技术有限公司 Caching method and device towards replay mechanism

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3618042A (en) * 1968-11-01 1971-11-02 Hitachi Ltd Error detection and instruction reexecution device in a data-processing apparatus
WO1998021684A2 (en) * 1996-11-13 1998-05-22 Intel Corporation Processor having replay architecture
US6094717A (en) * 1998-07-31 2000-07-25 Intel Corp. Computer processor with a replay system having a plurality of checkers

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100373330C (en) * 2003-05-02 2008-03-05 先进微装置公司 Speculation pointers to identify data-speculative operations in microprocessor
WO2004099978A3 (en) * 2003-05-02 2005-12-08 Advanced Micro Devices Inc Apparatus and method to identify data-speculative operations in microprocessor
GB2418045A (en) * 2003-05-02 2006-03-15 Advanced Micro Devices Inc Speculation pointers to identify data-speculative operations in microprocessor
JP2006525595A (en) * 2003-05-02 2006-11-09 アドバンスト・マイクロ・ディバイシズ・インコーポレイテッド A speculative pointer that identifies data speculative operations in a microprocessor
GB2418045B (en) * 2003-05-02 2007-02-28 Advanced Micro Devices Inc Speculation pointers to identify data-speculative operations in microprocessor
US7266673B2 (en) 2003-05-02 2007-09-04 Advanced Micro Devices, Inc. Speculation pointers to identify data-speculative operations in microprocessor
DE112004000741B4 (en) * 2003-05-02 2008-02-14 Advanced Micro Devices, Inc., Sunnyvale Speculation pointer for identifying data-speculative operations in a microprocessor
WO2004099978A2 (en) * 2003-05-02 2004-11-18 Advanced Micro Devices, Inc. Apparatus and method to identify data-speculative operations in microprocessor
JP4745960B2 (en) * 2003-05-02 2011-08-10 アドバンスト・マイクロ・ディバイシズ・インコーポレイテッド A speculative pointer that identifies data speculative operations in a microprocessor
KR101057163B1 (en) * 2003-05-02 2011-08-17 글로벌파운드리즈 인크. Inference Pointers to Identify Data-Inference Operations in a Microprocessor
CN100367196C (en) * 2003-06-10 2008-02-06 先进微装置公司 Load store unit with replay mechanism
CN100362536C (en) * 2004-09-10 2008-01-16 华中科技大学 Intelligent vehicle condition monitor system based on mobile communication
US11223761B2 (en) 2018-09-04 2022-01-11 Samsung Electronics Co., Ltd. Electronic device for obtaining images by controlling frame rate for external moving object through point of interest, and operating method thereof

Also Published As

Publication number Publication date
AU2001224640A1 (en) 2001-08-27
GB2376328A (en) 2002-12-11
DE10085438B4 (en) 2006-01-05
CN1208716C (en) 2005-06-29
HK1048872A1 (en) 2003-04-17
HK1048872B (en) 2004-10-21
DE10085438T1 (en) 2003-01-16
KR20030007425A (en) 2003-01-23
CN1452736A (en) 2003-10-29
GB0221325D0 (en) 2002-10-23
GB2376328B (en) 2004-04-21
KR100508320B1 (en) 2005-08-17

Similar Documents

Publication Publication Date Title
US6735688B1 (en) Processor having replay architecture with fast and slow replay paths
US6912648B2 (en) Stick and spoke replay with selectable delays
US6857064B2 (en) Method and apparatus for processing events in a multithreaded processor
US5799167A (en) Instruction nullification system and method for a processor that executes instructions out of order
US6079014A (en) Processor that redirects an instruction fetch pipeline immediately upon detection of a mispredicted branch while committing prior instructions to an architectural state
US7685410B2 (en) Redirect recovery cache that receives branch misprediction redirects and caches instructions to be dispatched in response to the redirects
EP0495165B1 (en) Overlapped serialization
US20030126405A1 (en) Stopping replay tornadoes
US6289445B2 (en) Circuit and method for initiating exception routines using implicit exception checking
US7603543B2 (en) Method, apparatus and program product for enhancing performance of an in-order processor with long stalls
US6076153A (en) Processor pipeline including partial replay
KR100472346B1 (en) A processor pipeline including instruction replay
US5898864A (en) Method and system for executing a context-altering instruction without performing a context-synchronization operation within high-performance processors
US8799628B2 (en) Early branch determination
US20240036876A1 (en) Pipeline protection for cpus with save and restore of intermediate results
JP4624988B2 (en) System and method for preventing instances of operations executing in a data speculation microprocessor from interrupting operation replay
US20080148026A1 (en) Checkpoint Efficiency Using a Confidence Indicator
US8977837B2 (en) Apparatus and method for early issue and recovery for a conditional load instruction having multiple outcomes
US6047370A (en) Control of processor pipeline movement through replay queue and pointer backup
US6189093B1 (en) System for initiating exception routine in response to memory access exception by storing exception information and exception bit within architectured register
US5727177A (en) Reorder buffer circuit accommodating special instructions operating on odd-width results
US7100012B2 (en) Processor and data cache with data storage unit and tag hit/miss logic operated at a first and second clock frequencies
WO2001061480A1 (en) Processor having replay architecture with fast and slow replay paths
US7822950B1 (en) Thread cancellation and recirculation in a computer processor for avoiding pipeline stalls
US6711670B1 (en) System and method for detecting data hazards within an instruction group of a compiled computer program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

ENP Entry into the national phase

Ref document number: 0221325

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20001229

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1020027010573

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 008194211

Country of ref document: CN

RET De translation (de og part 6b)

Ref document number: 10085438

Country of ref document: DE

Date of ref document: 20030116

WWE Wipo information: entry into national phase

Ref document number: 10085438

Country of ref document: DE

WWP Wipo information: published in national office

Ref document number: 1020027010573

Country of ref document: KR

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8607

WWG Wipo information: grant in national office

Ref document number: 1020027010573

Country of ref document: KR