US20060212680A1 - Methods and apparatus for dynamic prediction by software - Google Patents

Methods and apparatus for dynamic prediction by software

Info

Publication number
US20060212680A1
US20060212680A1
Authority
US
United States
Prior art keywords
instruction
processor
fetch
value
branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/344,403
Other versions
US7627740B2 (en)
Inventor
Masahiro Yasue
Akiyuki Hatakeyama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Sony Network Entertainment Platform Inc
Original Assignee
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc filed Critical Sony Computer Entertainment Inc
Priority to US11/344,403 (US7627740B2)
Publication of US20060212680A1
Assigned to SONY COMPUTER ENTERTAINMENT INC. Assignment of assignors interest. Assignors: HATAKEYAMA, AKIYUKI; YASUE, MASAHIRO
Priority to US12/540,522 (US8250344B2)
Application granted
Publication of US7627740B2
Assigned to SONY NETWORK ENTERTAINMENT PLATFORM INC. Change of name. Assignor: SONY COMPUTER ENTERTAINMENT INC.
Assigned to SONY COMPUTER ENTERTAINMENT INC. Assignment of assignors interest. Assignor: SONY NETWORK ENTERTAINMENT PLATFORM INC.
Active legal status
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3842 Speculative instruction execution
    • G06F9/3846 Speculative instruction execution using static prediction, e.g. branch taken strategy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/3004 Arrangements for executing specific machine instructions to perform operations on memory
    • G06F9/30047 Prefetch instructions; cache control instructions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30072 Arrangements for executing specific machine instructions to perform conditional operations, e.g. using predicates or guards
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3802 Instruction prefetching
    • G06F9/3804 Instruction prefetching for branches, e.g. hedging, branch folding


Abstract

A method, storage medium, processor instruction and processor are provided for specifying a value in a first portion of a conditional pre-fetch instruction associated with a branch instruction used for effectuating a branch operation, specifying a target instruction address in a second portion of the instruction, evaluating the value to determine whether a condition is met, and pre-fetching one or more instructions starting at the target instruction address into an instruction buffer of the processor when the condition is met.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of the filing date of U.S. Provisional Patent Application No. 60/650,157 filed Feb. 4, 2005, the disclosure of which is hereby incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • The present invention relates generally to the field of computing systems, and methods for improving instruction execution, for example, in reducing branch instruction delays in pipelined processors.
  • Computer processors often use instruction buffers to speed up the execution of programs. Such a buffer, also referred to as an “instruction cache,” allows the processor to queue the next several instructions in a pipeline while the processor is simultaneously executing another instruction. Thus, when the processor finishes executing an instruction the next instruction in the cache is available and ready for execution.
  • Many modern computing systems utilize a processor having a pipelined architecture to increase instruction throughput, and use one or more instruction caches implemented in hardware or firmware.
  • Pipelining of instructions in an instruction cache may not be effective, however, when it comes to conditional jumps. When a conditional jump is encountered, the next set of instructions to be executed will typically be either the instructions immediately following the conditional jump instruction in sequence, which is currently stored in the instruction cache, or a set of instructions at a different address, which may not be stored in the cache. If the next instruction to be executed is not located at an address within the instruction cache, the processor will be effectively paused (e.g., by executing “NOP” instructions) for a number of clock cycles while the necessary instructions are loaded into the instruction cache.
  • Accordingly, when a conditional branch or jump is made, the processor is likely to have to wait a number of clock cycles while a new set of instructions are retrieved. This branch instruction delay is also known as a “branch penalty.” A branch penalty may be shorter when branching to an instruction already within the cache and longer when the instruction must be loaded into the cache.
  • As an illustrative example of a branch penalty, consider the following set of processor instructions:
    Code: Inst A0
    Branch COND, L1
    Inst B0
    L1: Inst C0
    Inst C1
  • In this sample set of processor instructions, program execution will jump from the “Branch COND, L1” instruction to the “Inst C0” instruction at label L1 if COND is TRUE, or non-zero. Otherwise, program execution will proceed to the “Inst B0” instruction first. If the instructions at the L1 label are not in the processor's instruction buffer at the time the “Branch COND, L1” instruction is executed, the processor will have to read the instructions into the buffer, during which time no program instructions are executed. The processor clock cycles wasted by a processor awaiting instructions to be read into its instruction buffer are a measure of the branch penalty. To further illustrate a branch penalty using this scenario, consider the above set of processor instructions at execution time when the branch is taken and the instructions at L1 are not present in the processor's instruction buffer:
    Execution:      Cycle
    Inst A0         1
    Branch to L1    2
    NOP: penalty    3
    NOP: penalty    4
    . . .
    NOP: penalty    16
    NOP: penalty    17
    Inst C0         18
    Inst C1         19
  • For simplicity, this example assumes a single clock cycle per instruction, branch or NOP. Also, the number of clock cycles needed to provide the instruction “Inst C0” is only for exemplary purposes, i.e., the “Inst C0” instruction may be available to the processor before or after clock cycle 18. In the above example, although the “Branch to L1” instruction was executed at clock cycle 2, the instruction “Inst C0” at L1 was not available to the processor in its instruction buffer until clock cycle 18. The branch penalty in this example is thus 16 clock cycles.
  • Several methods have been developed to minimize or eliminate the branch penalty. These methods include both hardware and software approaches. Hardware methods have included the development of processor instruction pipeline architectures that attempt to predict whether an upcoming branch in an instruction set will be taken, and pre-fetch or pre-load the necessary instructions into the processor's instruction buffer. In theory, pipelined processors can execute one instruction per machine clock cycle when a well-ordered, sequential instruction stream is executed. This may be accomplished even though each instruction itself may implicate or require a number of separate micro-instructions to be effectuated.
  • In one pipeline architecture approach, a fixed algorithm is employed to predict if an instruction branch will be taken. This approach has a drawback in that the fixed algorithm is not changeable, and thus cannot be optimized for each program executed on the processor.
  • Another hardware approach uses a branch history table (“BHT”) to predict when a branch may be taken. A BHT may be in the form of a table of bitmaps wherein each entry corresponds to a branch instruction of the executing program, and each bit represents a single branch or no-branch decision. Some BHTs provide only a single bit for each branch instruction, so the prediction for each occurrence of the branch instruction corresponds to whatever happened the last time. This is also known as 1-bit dynamic prediction. Using 1-bit prediction, if a conditional branch is taken, it is predicted to be taken the next time. Otherwise, if the conditional branch is not taken, it is predicted to not be taken the next time.
  • A BHT is also used to perform 2-bit dynamic prediction. In 2-bit dynamic prediction, if a given conditional branch is taken twice in succession, it is predicted to be taken the next time. Likewise, if the branch is not taken twice in succession, it is predicted to not be taken the next time. If the branch was taken once and not taken once in the prior two instances, then the prediction for the next instance is the same as the last time. Generally, if the branch is used for loop or exception handling, 2-bit dynamic prediction using a BHT is better than 1-bit prediction because the branch outcome deviates only once per loop execution, and a single deviation does not flip the 2-bit prediction. Two-bit prediction tends to be more accurate but carries a greater hardware cost. Therefore, if the branch's behavior does not change frequently, 1-bit prediction may be sufficient for many purposes.
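  • For concreteness, the behavior of these two schemes can be sketched in “C”. This is an illustrative model only, not the patent's implementation; the 2-bit case is shown as the classic 2-bit saturating counter commonly used to realize such prediction in hardware, in which the prediction flips only after two consecutive outcomes in the opposite direction:
    #include <stdbool.h>

    /* 1-bit dynamic prediction: predict whatever happened last time. */
    typedef struct { bool last_taken; } OneBitPredictor;

    static bool predict_1bit(const OneBitPredictor *p) { return p->last_taken; }
    static void update_1bit(OneBitPredictor *p, bool taken) { p->last_taken = taken; }

    /* 2-bit dynamic prediction, modeled as a saturating counter 0..3:
       states 2 and 3 predict "taken"; states 0 and 1 predict "not taken". */
    typedef struct { unsigned state; } TwoBitPredictor;

    static bool predict_2bit(const TwoBitPredictor *p) { return p->state >= 2; }
    static void update_2bit(TwoBitPredictor *p, bool taken) {
        /* a single contrary outcome only nudges the counter; two in
           succession are needed to flip the prediction */
        if (taken)  { if (p->state < 3) p->state++; }
        else        { if (p->state > 0) p->state--; }
    }
  • For a loop branch that is taken many times and then falls through once, the 2-bit predictor mispredicts only the final iteration, while the 1-bit predictor also mispredicts the first iteration of the next execution of the loop.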
  • A BHT uses a significant amount of processor hardware resources, and may also result in significant branch penalties.
  • In a software approach, an instruction is provided that causes a pre-fetch of instructions starting at a specified address. In one implementation, this software takes the form of a HINT(ADDRESS) instruction, wherein the processor automatically pre-fetches the instructions beginning at ADDRESS as soon as the HINT is encountered. By placing the HINT several instructions before the actual branching instruction, the programmer can reduce the number of clock cycles during which the processor is awaiting the fetch of the next instruction.
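  • Illustratively, in the pseudo-assembly notation used elsewhere in this document, such a placement might look like the following (apart from the HINT spelling taken from the text above, the instruction names are made up for this example):
    Code: Inst A0
    HINT(L1)          ; pre-fetch of instructions at L1 begins here, unconditionally
    Inst A1
    Inst A2
    Branch COND, L1   ; the instructions at L1 may already be buffered by now
    Inst B0
    L1: Inst C0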
  • One drawback to the use of the HINT(ADDRESS) instruction is that it unconditionally causes the pre-fetching of instructions. Thus, whether or not the branch is taken, the instructions at the branch address will be pre-fetched. The programmer must therefore decide where to place the HINT(ADDRESS) instruction, and the programmer may not always make the best predictions as to when the branch will be taken. If the programmer is incorrect, i.e., if the HINT(ADDRESS) instruction is given and the branch is not taken, the pre-fetch is a wasted effort. A significant time penalty is then incurred from having to squash the erroneously pre-fetched instructions, flush the pipeline and re-load the correct instruction sequence. Depending on the size of the pipeline, this penalty can be quite large.
  • The hardware and software approaches to minimizing or eliminating branch penalties also do not account for the fact that the probability of a branch being taken varies throughout the execution of a program.
  • SUMMARY OF THE INVENTION
  • The present invention addresses these drawbacks.
  • In one aspect, the present invention provides a method that includes specifying a value in a first portion of a conditional pre-fetch instruction associated with a branch instruction used for effectuating a branch operation in a processor. Next, a target instruction address is specified in a second portion of the conditional pre-fetch instruction. The value is then evaluated to determine whether a condition is met. If the condition is met, one or more instructions are pre-fetched starting at the target instruction address into an instruction buffer of the processor.
  • In another aspect, the target instructions are pre-fetched as described above when the value is non-zero.
  • In a further aspect, a method is provided for operating a processor having an instruction cache and using a branch control instruction associated with a conditional pre-fetch instruction. The conditional pre-fetch instruction includes a test condition portion and an instruction address portion. The method includes the steps of: (a) determining if the test condition portion of the conditional pre-fetch instruction evaluates to a TRUE value; and (b) when the test condition portion of the conditional pre-fetch instruction evaluates to a TRUE value, preloading one or more instructions beginning at an address indicated by the instruction address portion of the conditional pre-fetch instruction into an instruction buffer of the processor.
  • A storage medium is also provided in an aspect of the present invention. The storage medium contains a program including a conditional pre-fetch instruction operable to cause a processor to perform several steps. First, a value is specified in a first portion of the conditional pre-fetch instruction. The conditional pre-fetch instruction is associated with a branch instruction used for effectuating a branch operation in the processor. A target instruction address is specified in a second portion of the conditional pre-fetch instruction. The value is used to determine whether a condition is met. If the condition is met, one or more instructions starting at the target instruction address are pre-fetched into an instruction buffer of the processor.
  • A processor under control of a program including a conditional pre-fetch instruction in conjunction with a branch instruction is also provided. The program causes the processor to decode a first portion of the conditional pre-fetch instruction, which specifies a value that evaluates to true or false. Next, if the value evaluates to true, a second portion of the conditional pre-fetch instruction is decoded. This second portion identifies an address of a particular target instruction, which is then pre-fetched. The pre-fetching operation is associated with an operation to move the particular target instruction from a cache to an instruction buffer of the processor.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic block diagram of a system in accordance with an embodiment of the present invention;
  • FIG. 2 is an exemplary flowchart of a method in accordance with an embodiment of the present invention; and
  • FIG. 3 is a flow chart of a method in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • As shown in FIG. 1, a computer system 100 in accordance with one embodiment of the invention comprises a central processing unit (“CPU”) 102, an instruction cache 104, a pipeline 106 connecting the CPU 102 to the instruction cache 104, and a data bus 108. Although only these components are depicted in FIG. 1, it should be appreciated that a typical system 100 can include a large number of components, peripherals, and communications buses. In a preferred embodiment, the computer system 100 is a general purpose computer having all the additional internal components normally found in a personal computer such as, for example, a display, a CD-ROM, a hard-drive, a mouse, a keyboard, speakers, a microphone, a modem and all of the components used for connecting these elements to one another. These components have not been depicted in FIG. 1 for clarity. Additionally, the computer system 100 may comprise any workstation, PDA, or other processor-controlled device or system capable of processing instructions.
  • The CPU 102 may be a processor of any type. The CPU 102 might also be a multiprocessor system. In a multiprocessor system, a more complex instruction cache 104 and pipeline 106 may be used than those depicted here, although the present invention is similarly applicable to such systems.
  • The instruction cache 104 may also be of any sort. For example, the instruction cache 104 can be a 32 kiloword (KW) instruction cache using four word blocks. Alternatively, any size instruction cache 104 using any size blocks may also be used. The terms “instruction cache” and “instruction buffer” are used herein interchangeably, with no distinction intended between the use of either term.
  • As depicted in FIG. 1, the instruction cache 104 and CPU 102 are interconnected by an instruction pipeline 106. In an embodiment of the invention, this pipeline represents the hardware and/or firmware used by the system 100 to manage and move instructions between the instruction cache 104 and the CPU 102. For example, many CPUs have specialized circuitry to manage their associated instruction pipelines and buffers.
  • In a preferred embodiment, the instruction cache 104 and pipeline 106 are packaged with the CPU 102 in a single integrated circuit (not depicted). Such packaging advantageously provides close proximity of the instruction cache 104, pipeline 106 and CPU 102, which minimizes power consumption and instruction transfer time.
  • In an embodiment of the invention, the instruction cache 104 and CPU 102 may each be in communication with a data bus 108, thereby allowing the transfer of instructions and data from other devices and memory (not depicted).
  • FIG. 2 is an exemplary flowchart 200 of a method in accordance with an embodiment of the present invention. Preferably, the CPU executes instructions sequentially. Thus, in FIG. 2, “Inst A0” 202 is first executed. Then the CPU executes the “CPIF(val1, L1)” instruction 204. The CPIF instruction of this example is the conditional instruction pre-fetch. In the present example, a programmer has placed the CPIF instruction several instructions before the “Branch on COND1” instruction 210. This placement allows time for the instructions at the branch-to address to be pre-fetched, if necessary, prior to the CPU executing the “Branch on COND1” instruction 210.
  • In an embodiment, the “CPIF(val1, L1)” instruction 204 causes the CPU or hardware instruction pipeline circuitry to evaluate the “val1” component of the “CPIF(val1, L1)” instruction 204. In a preferred embodiment, if the “val1” component evaluates as a non-zero value, i.e., val1 is ‘TRUE’ at action 216, then the CPU or hardware instruction pipeline circuitry begins the pre-fetch of instructions located at the address given by the “L1” component 218.
  • The CPU, meanwhile, may continue to execute the “Inst A1” instruction 206, as well as any others in sequence. At some point, as depicted here by “Inst AN” 208, the instruction immediately preceding the “Branch on COND1” instruction 210 is executed. In a preferred embodiment, the pre-fetch of instructions located at the address given by the “L1” component 218 of the “CPIF(val1, L1)” instruction is completed prior to the CPU execution of the “Branch on COND1” instruction 210.
  • When the CPU executes the “Branch on COND1” instruction 210 in an embodiment of the present invention, either the condition given by “COND1” is ‘TRUE’ or it is ‘FALSE’. Preferably, if COND1 is ‘FALSE’, the branch is not taken, and the CPU continues program execution with the next sequential instructions, “Inst B0” 212, then “Inst B1” 214, and so on. Otherwise, if COND1 is ‘TRUE’, the branch is taken, and the CPU preferably continues execution with “Inst C0” 222, then “Inst C1” 224, and so forth. If the pre-fetch of instructions located at the address given by the “L1” component 218, namely the “Inst C0” 222, “Inst C1” 224, etc. instructions, is not completed when the CPU executes the “Branch on COND1” instruction 210, then a branch penalty 220 occurs while the pre-fetch completes.
  • A greater branch penalty is encountered if the “CPIF(val1, L1)” instruction 204 causes the incorrect prediction that the branch will or will not be taken. For example, if the “CPIF(val1, L1)” instruction 204 causes the prediction that the branch will be taken and it is not taken, then the instruction cache may not have the “Inst B0” 212 and “Inst B1” 214 instructions when they are needed, likely resulting in the need to flush and re-fill the instruction cache. This is known as an instruction cache “miss”. Similarly, if the “CPIF(val1, L1)” instruction 204 causes the prediction that the branch will not be taken and it is taken, then the instruction cache may not have the “Inst C0” 222 and “Inst C1” 224 instructions when they are needed, also likely resulting in the need to flush and re-fill the instruction cache.
  • The present invention advantageously minimizes the number of instruction cache misses by allowing dynamic prediction of whether a given branch will be taken. In an embodiment of the present invention, this dynamic prediction is enabled by providing for testing a logical condition embedded within the CPIF instruction. By proper selection of the condition to test, a programmer can greatly increase the accuracy of the branching predictions.
  • In the example shown in FIG. 2, a relatively simple parameter of the CPIF instruction, ‘val1’, is evaluated to determine whether it is non-zero (‘TRUE’) or zero (‘FALSE’), corresponding to ‘pre-fetch’ and ‘do not pre-fetch’, respectively. Likewise, the preferably more complex condition ‘COND1’ of the “Branch on COND1” instruction 210 is evaluated to determine whether the branch is taken. By selecting a ‘val1’ whose TRUE or FALSE status closely corresponds with the ‘COND1’ of the “Branch on COND1” instruction, the programmer may improve the branching predictions. Indeed, in an embodiment of the present invention, ‘val1’ may be identical to ‘COND1’.
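  • As a hedged illustration in the C-with-CPIF pseudocode adopted later in this document (the variable names here are ours, not the patent's), the programmer might compute the branch condition once, early, and feed the same value to both the CPIF and the branch:
    cond = (x > threshold);   /* the branch condition, computed early */
    CPIF(cond, L1);           /* 'val1' chosen identical to 'COND1': pre-fetch
                                 exactly when the branch will be taken */
    /* ... other useful work: Inst A1 .. Inst AN ... */
    if (cond) goto L1;        /* the "Branch on COND1" instruction */
    /* ... fall-through path: Inst B0, Inst B1, ... */
    L1:
    /* ... branch target path: Inst C0, Inst C1, ... */
  • Provided ‘cond’ is not modified between the CPIF and the branch, the pre-fetch decision then matches the branch outcome every time.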
  • FIG. 3 presents a more detailed flowchart 300 of a simplified processor executing a conditional pre-fetch instruction in accordance with an embodiment of the present invention. In a general operation, a processor decodes the instruction addressed by its instruction pointer 302. The processor determines if the decoded instruction is a conditional pre-fetch instruction 304. If it is not a conditional pre-fetch, the processor proceeds with instruction processing 306. When the instruction processing is completed, the processor's instruction pointer is incremented 308 to point to the next sequential instruction, which is then decoded by the processor 302.
  • If the instruction is a conditional pre-fetch, the processor evaluates the ‘value’ component of the conditional pre-fetch instruction 310. In a preferred embodiment, the conditional pre-fetch instruction has the form CPIF(value, address), wherein ‘CPIF’ is the instruction mnemonic, ‘value’ is the expression to be evaluated, and ‘address’ is the beginning address of the instructions to be pre-fetched if value is TRUE. Although the mnemonic ‘CPIF’ is used herein to represent the conditional pre-fetch instruction, any mnemonic may be employed in an embodiment.
  • The processor then evaluates the ‘value’ component of the conditional pre-fetch instruction 312. If it is a non-zero value, which is also herein referred to as ‘TRUE’, the processor then pre-fetches the instructions at the location indicated by the ‘address’ component of the conditional pre-fetch instruction 314. Any number of instructions may be pre-fetched in a given embodiment of the invention, although the number of instructions pre-fetched is preferably related to the size of the instruction cache and the architecture of the processor's pipeline.
  • If the ‘value’ component of the conditional pre-fetch instruction is zero, which is referred to herein as ‘FALSE’, the processor does not perform a pre-fetch operation. Instead, it preferably proceeds to increment its instruction pointer 308 and decode the next sequential instruction 302.
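  • The decision flow of flowchart 300 can be sketched as a simple interpreter loop in “C”. Everything below, including the Instr type, the helper functions and the one-instruction-at-a-time model, is an illustrative assumption rather than the patent's implementation; the numbers in the comments refer to the actions of FIG. 3:
    typedef struct {
        int      is_cpif;   /* decoded as a conditional pre-fetch? (304) */
        long     value;     /* the 'value' component (310, 312) */
        unsigned address;   /* the 'address' component (314) */
    } Instr;

    Instr *decode_at(unsigned ip);      /* action 302: decode an instruction */
    void   process(const Instr *inst);  /* action 306: ordinary processing   */
    void   prefetch(unsigned address);  /* action 314: begin a pre-fetch     */

    void run(unsigned ip) {
        for (;;) {
            Instr *inst = decode_at(ip);        /* 302 */
            if (inst->is_cpif) {                /* 304 */
                if (inst->value != 0)           /* 312: non-zero is 'TRUE' */
                    prefetch(inst->address);    /* 314 */
                /* a zero 'value' ('FALSE') causes no pre-fetch */
            } else {
                process(inst);                  /* 306 */
            }
            ip += 1;                            /* 308: next sequential instruction */
        }
    }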
  • The flowchart of FIG. 3 is simplified for exemplary purposes. For example, as is well known in the art, many processors are of the multiprocessing variety, wherein several instructions are in various stages of execution by the processor at any given time. An embodiment of the present invention also envisions the use of such multiprocessing processors. These embodiments generally use more complex instruction pipeline architectures that allow for several instructions to be in various stages of execution at each processor clock cycle.
  • Additionally, in one embodiment the processor itself manages the instruction pipeline and the instruction cache.
  • In another embodiment of the present invention, the instruction pipeline and instruction cache may be managed by hardware associated with the processor but not actually considered part of the processor. As mentioned above, it is possible that one processor may manage these components for another processor.
  • As an illustrative example of an embodiment of the invention, consider the following set of processor instructions:
    Code: Inst A0
    CPIF VAL, L1
    Inst A1
    Inst A2
    Inst A3
    Branch COND, L1
    Inst B0
    L1: Inst C0
    Inst C1
  • In this exemplary set of processor instructions, program execution will jump from the “Branch COND, L1” instruction to the “Inst C0” instruction at label ‘L1’ if ‘COND’ is ‘TRUE’, or non-zero. Otherwise, program execution will proceed to the “Inst B0” instruction first. If the instructions at the ‘L1’ label are not in the processor's instruction buffer at the time the “Branch COND, L1” instruction is executed, the processor may incur a branch penalty.
  • To further elaborate on this exemplary use of the conditional pre-fetch instruction, consider the above sample set of processor instructions at execution time, when the branch is taken and the instructions at ‘L1’ are not present in the processor's instruction buffer. Exemplary clock cycles have been provided for further illustration:
    Execution:      Cycle   Notes
    Inst A0         1
    CPIF VAL, L1    2       Begin pre-fetch
    Inst A1         3
    Inst A2         4
    Inst A3         5       Pre-fetch completed
    Branch to L1    6
    Inst C0         7
    Inst C1         8
  • As indicated, the conditional pre-fetch operation may save a significant number of processor clock cycles when compared with a similar scenario that does not use a conditional pre-fetch, as described above: here the branch target “Inst C0” executes at cycle 7, immediately after the branch, whereas the earlier example incurred a 16-cycle branch penalty.
  • In another example of using a conditional pre-fetch instruction in accordance with an embodiment of the present invention, use of the conditional pre-fetch by a “C” language programmer is envisioned. Although this example uses the “C” programming language, any programming language may be used, including but not limited to any assembly language; any compiled language, such as “C”, “C++”, Cobol, Fortran, Ada, or Pascal; or any interpretive language, such as BASIC, JAVA, or XML.
  • Using the “C” language loop as an example (in pseudocode):
    for (i=0; i < n; i++)
    {
      instruction ...
    } /* conditional branch */
  • A conditional branch is implicit at the closing bracket “}”, where the “/* conditional branch */” comment has been placed. Thus, each time ‘instruction . . . ’ is executed, the variable i is incremented and compared with n. If i is less than n, then the loop repeats. This is often implemented in compiled machine language as a conditional branch operation.
  • Using an embodiment of the present invention, the programmer may rewrite this “C” language loop thus:
    L1: for (i=0; i < n; i++)
    {
      CPIF (( i != n−1), L1);
      instruction ...
    } /* conditional branch */
  • This version of the “C” language loop adds an address label ‘L1’ at the top of the ‘for’ loop, and a conditional pre-fetch instruction ‘CPIF’ at the outset of the loop body. Note that the value to be evaluated for the conditional pre-fetch instruction is (i != n-1). This expression evaluates to a non-zero value until the last iteration of the loop. Therefore, the instructions at address L1 will be pre-fetched for each loop iteration except the last. In this manner, the loop may be advantageously processed and executed without the processor incurring any branch penalty.
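  • For example, with n equal to 3 the pre-fetch expression behaves as follows:
    /* Worked trace of (i != n-1) for n == 3: */
    /* i = 0: (0 != 2) evaluates to 1 ('TRUE')  -> pre-fetch at L1; loop repeats */
    /* i = 1: (1 != 2) evaluates to 1 ('TRUE')  -> pre-fetch at L1; loop repeats */
    /* i = 2: (2 != 2) evaluates to 0 ('FALSE') -> no pre-fetch; loop falls through */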
  • In an embodiment of the invention, a 1-bit or 2-bit branch history is used together with a CPIF instruction to minimize branch penalties. For example, the branch history may be stored in one or two processor registers. Prior to the next conditional branch, a CPIF instruction may use the stored register values in its expression to be evaluated. A conventional branch history table uses only past history to determine when to pre-fetch instructions. One aspect of the invention, however, uses the information contained in the branch history table as only one of the parameters in a higher-level determination, as sketched below. In this manner, the CPIF can incorporate the advantages of using 1-bit or 2-bit branch histories.
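  • The sketch below uses the C-with-CPIF pseudocode from the preceding example, under the assumption that a software-visible variable ‘hist’ stands in for a processor register holding a 1-bit history; all names are illustrative rather than mandated by the patent:
    int hist = 1;                     /* 1-bit history: 1 = branch taken last time */

    L1: for (i = 0; i < n; i++)
    {
        /* the history bit is only one input to the pre-fetch decision;
           program-level knowledge (not the last iteration) is another */
        CPIF((hist && (i != n-1)), L1);
        /* ... loop body ... */
        hist = ((i + 1) < n);         /* record the outcome of the loop branch */
    }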
  • Most of the foregoing alternative embodiments are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the invention as defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the invention as defined by the claims.

Claims (23)

1. A method comprising:
(a) specifying a value in a first portion of a conditional pre-fetch instruction, the conditional pre-fetch instruction being associated with a branch instruction used for effectuating a branch operation in a processor;
(b) specifying a target instruction address in a second portion in the conditional pre-fetch instruction;
(c) evaluating the value to determine whether a condition is met; and
(d) pre-fetching one or more instructions starting at the target instruction address into an instruction buffer of the processor when the condition is met.
2. The method according to claim 1, wherein the condition is met when the value is non-zero.
3. The method according to claim 1, wherein the conditional pre-fetch instruction causes the processor to initiate a pre-fetch operation of the target instructions based on the value.
4. The method according to claim 1, further comprising pre-loading the one or more instructions in a hardware pipeline of the processor in response to the conditional pre-fetch instruction.
5. The method according to claim 1, wherein the value in the first portion of the conditional pre-fetch instruction includes information pertaining to whether branching occurred in a prior iteration of the branch instruction.
6. The method according to claim 1, wherein the value in the first portion of the conditional pre-fetch instruction includes information pertaining to whether branching occurred in the prior two iterations of the branch instruction.
7. A method of operating a processor having an instruction cache and using a branch control instruction associated with a conditional pre-fetch instruction, the conditional pre-fetch instruction including a test condition portion and an instruction address portion, the method including the steps of:
(a) determining if the test condition portion of the conditional pre-fetch instruction evaluates to a TRUE value; and
(b) when the test condition portion of the conditional pre-fetch instruction evaluates to a TRUE value, preloading one or more instructions beginning at an address indicated by the instruction address portion of the conditional pre-fetch instruction into an instruction buffer of the processor.
8. The method according to claim 7, wherein evaluation of a non-zero value of the test condition portion determines a TRUE evaluation.
9. The method according to claim 7, further including the step: initiating a pre-fetch operation of the one or more instructions based on the test condition.
10. The method according to claim 7, wherein the test condition portion includes whether branching occurred in an immediately preceding iteration of the branch control instruction.
11. The method according to claim 7, wherein the test condition portion includes whether branching occurred in two immediately preceding iterations of the branch control instruction.
12. A storage medium containing a program including a conditional pre-fetch instruction operable to cause a processor to perform steps comprising:
(a) specifying a value in a first portion of the conditional pre-fetch instruction, the conditional pre-fetch instruction being associated with a branch instruction used for effectuating a branch operation in the processor;
(b) specifying a target instruction address in a second portion in the conditional pre-fetch instruction;
(c) evaluating the value to determine whether a condition is met; and
(d) pre-fetching one or more instructions starting at the target instruction address into an instruction buffer of the processor when the condition is met.
13. The storage medium according to claim 12, wherein the conditional pre-fetch instruction causes the processor to initiate a pre-fetch operation of the particular target instruction into an instruction buffer of the processor based on the conditional value.
14. The storage medium according to claim 12, wherein the first portion of the conditional pre-fetch instruction comprises a processor register-loadable value.
15. The storage medium according to claim 12, wherein the address specified by the second portion of the conditional pre-fetch instruction comprises an offset of a processor cache.
16. The storage medium according to claim 12, wherein the conditional pre-fetch instruction is locatable in advance of the associated program branch instruction so that the processor can compute the target address before the program branch instruction is executed.
17. The storage medium according to claim 12, wherein the conditional pre-fetch instruction permits software control of instruction preloading in a hardware pipeline of the processor.
18. The storage medium according to claim 12, wherein the first portion of the conditional pre-fetch instruction includes information pertaining to whether branching has occurred in previous iterations of the associated branch instruction.
19. A processor under control of a program including a conditional pre-fetch instruction in conjunction with a branch instruction, the program causing the processor to perform steps comprising:
(a) decoding a first portion of the conditional pre-fetch instruction, the first portion specifying a value for evaluation, the value being evaluated as TRUE or FALSE;
(b) decoding a second portion of the conditional pre-fetch instruction, the second portion identifying an address of a particular target instruction; and
(c) pre-fetching the particular target instruction when the value evaluates to TRUE, the pre-fetching operation being associated with an operation to move the particular target instruction from a cache to an instruction buffer of the processor.
20. The processor according to claim 19, wherein a pre-fetch operation of the particular target instruction is executable by the processor based on evaluation of the value.
21. The processor according to claim 19, wherein when the value is non-zero the value evaluates to TRUE, and when the value is zero the value evaluates to FALSE.
22. The processor according to claim 19, wherein the conditional pre-fetch instruction permits software control of instruction pre-loading in a hardware pipeline of the processor.
23. The processor according to claim 19, wherein the value for evaluation of the first portion of the conditional pre-fetch instruction includes information indicating whether branching occurred during previous iterations of the branch instruction.
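The claims above describe one mechanism from three angles: a first instruction portion carrying a software-maintained condition value, a second portion carrying a target address, and a preload of the target instructions from cache into the processor's instruction buffer whenever the value is non-zero. The C sketch below simulates that evaluate-and-preload sequence for illustration only; it is not the patented implementation, and every name in it (exec_cond_prefetch, cond_prefetch_t, IBUF_SLOTS, the icache and ibuf arrays) is a hypothetical stand-in rather than anything taken from the patent.

/*
 * Minimal simulation of a conditional pre-fetch instruction as
 * characterized by claims 7, 12 and 19: evaluate a condition value
 * and, when it is TRUE (non-zero, per claims 8 and 21), move the
 * instructions at the target address from cache into the
 * instruction buffer ahead of the associated branch.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IBUF_SLOTS 4  /* hypothetical instruction-buffer capacity */

typedef struct {
    uint32_t condition;   /* first portion: test condition value     */
    uint32_t target_addr; /* second portion: target instruction addr */
} cond_prefetch_t;

static uint32_t icache[256];      /* stand-in instruction cache  */
static uint32_t ibuf[IBUF_SLOTS]; /* stand-in instruction buffer */

/* Steps (a) and (b) of claim 7: test, then preload on TRUE. */
static void exec_cond_prefetch(const cond_prefetch_t *op)
{
    if (op->condition != 0) {  /* non-zero evaluates to TRUE */
        /* cache -> instruction buffer, starting at target_addr */
        memcpy(ibuf, &icache[op->target_addr],
               IBUF_SLOTS * sizeof(uint32_t));
        printf("preloaded %d words from 0x%x\n",
               IBUF_SLOTS, (unsigned)op->target_addr);
    } else {
        printf("condition FALSE: no pre-fetch\n");
    }
}

int main(void)
{
    /* Fill the stand-in cache with dummy opcodes. */
    for (uint32_t i = 0; i < 256; i++)
        icache[i] = 0xD0000000u | i;

    /* Software sets the condition from its own recent branch
     * history (claims 10/11: branch taken on the last one or two
     * iterations), then issues the pre-fetch before the branch. */
    uint32_t taken_last_iter = 1;
    cond_prefetch_t op = { taken_last_iter, 0x40 };
    exec_cond_prefetch(&op); /* TRUE: branch target is preloaded */

    op.condition = 0;
    exec_cond_prefetch(&op); /* FALSE: no preload, fall through  */
    return 0;
}

Because the condition value is an ordinary register-loadable operand (claims 8, 14 and 21), software can recompute it on each iteration from the branch history it tracks itself, which is what lets the prediction adapt dynamically without dedicated branch-prediction hardware.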
US11/344,403 2005-02-04 2006-01-31 Methods and apparatus for dynamic prediction by software Active 2027-05-12 US7627740B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/344,403 US7627740B2 (en) 2005-02-04 2006-01-31 Methods and apparatus for dynamic prediction by software
US12/540,522 US8250344B2 (en) 2005-02-04 2009-08-13 Methods and apparatus for dynamic prediction by software

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US65015705P 2005-02-04 2005-02-04
US11/344,403 US7627740B2 (en) 2005-02-04 2006-01-31 Methods and apparatus for dynamic prediction by software

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/540,522 Continuation US8250344B2 (en) 2005-02-04 2009-08-13 Methods and apparatus for dynamic prediction by software

Publications (2)

Publication Number Publication Date
US20060212680A1 (en) 2006-09-21
US7627740B2 (en) 2009-12-01

Family

ID=36979187

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/344,403 Active 2027-05-12 US7627740B2 (en) 2005-02-04 2006-01-31 Methods and apparatus for dynamic prediction by software
US12/540,522 Active 2026-02-13 US8250344B2 (en) 2005-02-04 2009-08-13 Methods and apparatus for dynamic prediction by software

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/540,522 Active 2026-02-13 US8250344B2 (en) 2005-02-04 2009-08-13 Methods and apparatus for dynamic prediction by software

Country Status (2)

Country Link
US (2) US7627740B2 (en)
JP (1) JP4134179B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8533437B2 (en) * 2009-06-01 2013-09-10 Via Technologies, Inc. Guaranteed prefetch instruction
US8595471B2 (en) * 2010-01-22 2013-11-26 Via Technologies, Inc. Executing repeat load string instruction with guaranteed prefetch microcode to prefetch into cache for loading up to the last value in architectural register
PT2740795T (en) 2011-08-04 2017-01-09 Toray Industries Cancer treatment and/or prevention drug composition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151672A (en) * 1998-02-23 2000-11-21 Hewlett-Packard Company Methods and apparatus for reducing interference in a branch history table of a microprocessor
US6446197B1 (en) * 1999-10-01 2002-09-03 Hitachi, Ltd. Two modes for executing branch instructions of different lengths and use of branch control instruction and register set loaded with target instructions
US6877089B2 (en) * 2000-12-27 2005-04-05 International Business Machines Corporation Branch prediction apparatus and process for restoring replaced branch history for use in future branch predictions for an executing program
US7234040B2 (en) * 2002-01-24 2007-06-19 University Of Washington Program-directed cache prefetching for media processors

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7240159B2 (en) * 1993-08-05 2007-07-03 Hitachi, Ltd. Data processor having cache memory
US5732242A (en) * 1995-03-24 1998-03-24 Silicon Graphics, Inc. Consistently specifying way destinations through prefetching hints
US5742804A (en) * 1996-07-24 1998-04-21 Institute For The Development Of Emerging Architectures, L.L.C. Instruction prefetch mechanism utilizing a branch predict instruction
US5949995A (en) * 1996-08-02 1999-09-07 Freeman; Jackie Andrew Programmable branch prediction system and method for inserting prediction operation which is independent of execution of program code
US6282663B1 (en) * 1997-01-22 2001-08-28 Intel Corporation Method and apparatus for performing power management by suppressing the speculative execution of instructions within a pipelined microprocessor
US6092188A (en) * 1997-12-23 2000-07-18 Intel Corporation Processor and instruction set with predict instructions
US20020095566A1 (en) * 1998-10-12 2002-07-18 Harshvardhan Sharangpani Method for processing branch operations
US6560693B1 (en) * 1999-12-10 2003-05-06 International Business Machines Corporation Branch history guided instruction/data prefetching

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2696279A1 (en) * 2011-04-01 2014-02-12 ZTE Corporation Jump instruction coding method and system
EP2696279A4 (en) * 2011-04-01 2014-03-19 Zte Corp Jump instruction coding method and system
US20140019721A1 (en) * 2011-12-29 2014-01-16 Kyriakos A. STAVROU Managed instruction cache prefetching
US9811341B2 (en) * 2011-12-29 2017-11-07 Intel Corporation Managed instruction cache prefetching
US20150301830A1 (en) * 2014-04-17 2015-10-22 Texas Instruments Deutschland Gmbh Processor with variable pre-fetch threshold
US10628163B2 (en) * 2014-04-17 2020-04-21 Texas Instruments Incorporated Processor with variable pre-fetch threshold
US11231933B2 (en) 2014-04-17 2022-01-25 Texas Instruments Incorporated Processor with variable pre-fetch threshold
US11861367B2 (en) 2014-04-17 2024-01-02 Texas Instruments Incorporated Processor with variable pre-fetch threshold
US20160162294A1 (en) * 2014-12-07 2016-06-09 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Reconfigurable processors and methods for collecting computer program instruction execution statistics
US10540180B2 (en) * 2014-12-07 2020-01-21 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Reconfigurable processors and methods for collecting computer program instruction execution statistics

Also Published As

Publication number Publication date
JP2006216040A (en) 2006-08-17
US7627740B2 (en) 2009-12-01
JP4134179B2 (en) 2008-08-13
US20090313456A1 (en) 2009-12-17
US8250344B2 (en) 2012-08-21

Similar Documents

Publication Publication Date Title
US5136697A (en) System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache
JP5198879B2 (en) Suppress branch history register updates by branching at the end of the loop
US7487340B2 (en) Local and global branch prediction information storage
US6263427B1 (en) Branch prediction mechanism
US20110320787A1 (en) Indirect Branch Hint
US7010648B2 (en) Method and apparatus for avoiding cache pollution due to speculative memory load operations in a microprocessor
CA2659384C (en) Apparatus for generating return address predictions for implicit and explicit subroutine calls
US7444501B2 (en) Methods and apparatus for recognizing a subroutine call
US8250344B2 (en) Methods and apparatus for dynamic prediction by software
JP2011100466A5 (en)
KR20100132032A (en) System and method of selectively committing a result of an executed instruction
JPH0863356A (en) Branch estimation device
JP2008532142A5 (en)
US20070288732A1 (en) Hybrid Branch Prediction Scheme
US20040117606A1 (en) Method and apparatus for dynamically conditioning statically produced load speculation and prefetches using runtime information
US20070288731A1 (en) Dual Path Issue for Conditional Branch Instructions
US9710269B2 (en) Early conditional selection of an operand
US20070288734A1 (en) Double-Width Instruction Queue for Instruction Execution

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YASUE, MASAHIRO;HATAKEYAMA, AKIYUKI;REEL/FRAME:022426/0854;SIGNING DATES FROM 20090309 TO 20090316

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SONY NETWORK ENTERTAINMENT PLATFORM INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:027445/0657

Effective date: 20100401

AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY NETWORK ENTERTAINMENT PLATFORM INC.;REEL/FRAME:027481/0351

Effective date: 20100401

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12