WO2007016393A2 - Instruction cache having fixed number of variable length instructions - Google Patents

Instruction cache having fixed number of variable length instructions

Info

Publication number
WO2007016393A2
WO2007016393A2 (PCT/US2006/029523)
Authority
WO
WIPO (PCT)
Prior art keywords
instructions
cache
instruction
cache line
processor
Prior art date
Application number
PCT/US2006/029523
Other languages
French (fr)
Other versions
WO2007016393A3 (en)
Inventor
Jeffrey Todd Bridges
James Norris Dieffenderfer
Rodney Wayne Smith
Thomas Andrew Sartorius
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to JP2008524216A priority Critical patent/JP4927840B2/en
Priority to EP06788854A priority patent/EP1910919A2/en
Publication of WO2007016393A2 publication Critical patent/WO2007016393A2/en
Publication of WO2007016393A3 publication Critical patent/WO2007016393A3/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30145Instruction analysis, e.g. decoding, instruction word fields
    • G06F9/30149Instruction analysis, e.g. decoding, instruction word fields of variable length instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3802Instruction prefetching
    • G06F9/3816Instruction alignment, e.g. cache line crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3818Decoding for concurrent execution
    • G06F9/382Pipelined decoding, e.g. using predecoding


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Advance Control (AREA)
  • Executing Machine-Instructions (AREA)

Abstract

A fixed number of variable-length instructions are stored in each line of an instruction cache. The variable-length instructions are aligned along predetermined boundaries. Since the length of each instruction in the line, and hence the span of memory the instructions occupy, is not known, the address of the next following instruction is calculated and stored with the cache line. Ascertaining the instruction boundaries, aligning the instructions, and calculating the next fetch address are performed in a predecoder prior to placing the instructions in the cache.

Description

INSTRUCTION CACHE HAVING FIXED NUMBER OF VARIABLE
LENGTH INSTRUCTIONS
BACKGROUND
[0001] The present invention relates generally to the field of processors and in particular to a processor having an instruction cache storing a fixed number of variable length instructions.
[0002] Microprocessors perform computational tasks in a wide variety of applications, including portable electronic devices. In many cases, maximizing processor performance is a major design goal, to permit additional functions and features to be implemented in portable electronic devices and other applications. Additionally, power consumption is of particular concern in portable electronic devices, which have limited battery capacity. Hence, processor designs that increase performance and reduce power consumption are desirable.
[0003] Most modern processors employ one or more instruction execution pipelines, wherein the execution of many multi-step sequential instructions is overlapped to improve overall processor performance. Capitalizing on the spatial and temporal locality properties of most programs, recently executed instructions are stored in a cache - a high-speed, usually on-chip memory - for ready access by the execution pipeline.
[0004] Many processor Instruction Set Architectures (ISA) include variable length instructions. That is, the instruction op codes read from memory do not all occupy the same amount of space. This may result from the inclusion of operands with arithmetic or logical instructions, the amalgamation of multiple operations into a Very Long Instruction Word (VLIW), or other architectural features. One disadvantage of variable length instructions is that, upon fetching instructions from an instruction cache, the processor must ascertain the boundaries of each instruction, a computational task that consumes power and reduces performance.
[0005] One approach known in the art to improving instruction cache access in the presence of variable length instructions is to "pre-decode" the instructions prior to storing them in the cache, and additionally store some instruction boundary information in the cache line along with the instructions. This reduces, but does not eliminate, the additional computational burden of ascertaining instruction boundaries that is placed on the decode task.
[0006] Also, by packing instructions into the cache in the same compact form that they are read from memory, instructions are occasionally misaligned, with part of an instruction being stored at the end of one cache line and the remainder stored at the beginning of a successive cache line. Fetching this instruction requires two cache accesses, further reducing performance and increasing power consumption, particularly as the two accesses are required each time the instruction executes.
[0007] Figure 1 depicts a representative diagram of two lines 100, 140 of a prior art instruction cache storing variable length instructions (I1 - I9). In this representative example, each cache line comprises sixteen bytes, and a 32-bit word size is assumed. Most instructions are a word wide, or four bytes. Some instructions are of half-word width, comprising two bytes. A first cache line 100 and associated tag field 120 contain instructions I1 through I4, and half of instruction I5. A second cache line 140, with associated tag field 160, contains the second half of instruction I5, and instructions I6 through I9. The instruction lengths and their addresses are summarized in the following table:
Instruction   Length     Address
I1            Word       0x00
I2            Word       0x04
I3            Halfword   0x08
I4            Word       0x0A
I5            Word       0x0E
I6            Word       0x12
I7            Word       0x16
I8            Halfword   0x1A
I9            Word       0x1C
Table 1: Variable Length Instructions in Prior Art Cache
[0008] To read these instructions from the cache lines 100, 140, the processor must expend additional computational effort - at the cost of power consumption and delay - to determine the instruction boundaries. While this task may be assisted by pre-decoding the instructions and storing boundary information in or associated with the cache lines 100, 140, the additional computation is not obviated. Additionally, a fetch of instruction I5 will require two cache accesses. This dual access to fetch a misaligned instruction from the cache causes additional power consumption and processor delay.
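The extra cost described above can be illustrated with a small model of the Fig. 1 layout. This is a sketch, not the patent's mechanism: the instruction names and lengths follow the Fig. 1 example, and the helper functions are hypothetical.

```python
# Model of the prior-art cache in Fig. 1: instructions packed back-to-back,
# so an instruction may straddle two 16-byte cache lines.
LINE_SIZE = 16

# (name, length-in-bytes) per the Fig. 1 example: most instructions are
# fullword (4 bytes), some are halfword (2 bytes).
program = [("I1", 4), ("I2", 4), ("I3", 2), ("I4", 4), ("I5", 4),
           ("I6", 4), ("I7", 4), ("I8", 2), ("I9", 4)]

def packed_layout(prog):
    """Return {name: (address, length)} for densely packed instructions."""
    layout, addr = {}, 0
    for name, length in prog:
        layout[name] = (addr, length)
        addr += length
    return layout

def lines_touched(addr, length, line_size=LINE_SIZE):
    """Number of cache lines a fetch must access; >1 means a misaligned fetch."""
    first = addr // line_size
    last = (addr + length - 1) // line_size
    return last - first + 1

layout = packed_layout(program)
# I5 starts at 0x0E and spans the boundary between lines 100 and 140:
assert layout["I5"] == (0x0E, 4)
assert lines_touched(*layout["I5"]) == 2   # two cache accesses per fetch of I5
assert lines_touched(*layout["I1"]) == 1
```

Note that the double access recurs on every execution of I5, which is why the dense packing costs power each time through a loop containing a misaligned instruction.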
SUMMARY
[0009] A fixed number of variable-length instructions are stored in each line of an instruction cache. The variable-length instructions are aligned along predetermined boundaries. Since the length of each instruction in the line, and hence the span of memory the instructions occupy, is not known, the address of the next following instruction is calculated and stored with the cache line. Ascertaining the instruction boundaries, aligning the instructions, and calculating the next fetch address are performed in a predecoder prior to placing the instructions in the cache.
[0010] In one embodiment, a method of cache management in a processor having variable instruction length comprises storing a fixed number of instructions per cache line.
[0011] In another embodiment, a processor includes an instruction execution pipeline operative to execute instructions of variable length and an instruction cache operative to store a fixed number of the variable length instructions per cache line. The processor additionally includes a predecoder operative to align the variable length instructions along predetermined boundaries prior to writing the instructions into a cache line.
BRIEF DESCRIPTION OF DRAWINGS
[0012] Figure 1 is a diagram of a prior art instruction cache storing variable length instructions.
[0013] Figure 2 is a functional block diagram of a processor.
[0014] Figure 3 is a diagram of an instruction cache storing a fixed number of variable length instructions, aligned along predetermined boundaries.
DETAILED DESCRIPTION
[0015] Figure 2 depicts a functional block diagram of a representative processor 10, employing both a pipelined architecture and a hierarchical memory structure. The processor 10 executes instructions in an instruction execution pipeline 12 according to control logic 14. The pipeline includes various registers or latches 16, organized in pipe stages, and one or more Arithmetic Logic Units (ALU) 18. A General Purpose Register (GPR) file 20 provides registers comprising the top of the memory hierarchy.
[0016] The pipeline fetches instructions from an Instruction Cache (I-cache) 22, with memory addressing and permissions managed by an Instruction-side Translation Lookaside Buffer (ITLB) 24. A pre-decoder 21 inspects instructions fetched from memory prior to storing them in the I-cache 22. As discussed below, the pre-decoder 21 ascertains instruction boundaries, aligns the instructions, and calculates a next fetch address, which is stored in the I-cache 22 with the instructions.
[0017] Data is accessed from a Data Cache 26, with memory addressing and permissions managed by a main Translation Lookaside Buffer (TLB) 28. In various embodiments, the ITLB 24 may comprise a copy of part of the TLB 28. Alternatively, the ITLB 24 and TLB 28 may be integrated. Similarly, in various embodiments of the processor 10, the I-cache 22 and D-cache 26 may be integrated, or unified. Misses in the I-cache 22 and/or the D-cache 26 cause an access to main (off-chip) memory 32, under the control of a memory interface 30.
[0018] The processor 10 may include an Input/Output (I/O) interface 34, controlling access to various peripheral devices 36. Those of skill in the art will recognize that numerous variations of the processor 10 are possible. For example, the processor 10 may include a second-level (L2) cache for either or both the I and D caches 22, 26. In addition, one or more of the functional blocks depicted in the processor 10 may be omitted from a particular embodiment.
[0019] According to one or more embodiments disclosed herein, the processor 10 stores a fixed number of variable length instructions in each cache line. The instructions are preferably aligned along predetermined boundaries, such as for example word boundaries. This alleviates the decode pipe stage from the necessity of calculating instruction boundaries, allowing higher speed operation and thus improving processor performance. Storing instructions this way in the I-cache 22 also reduces power consumption by performing instruction length detection and alignment operation once. As I-cache 22 hit rates are commonly in the high 90%, considerable power savings may be realized by eliminating the need to ascertain instruction boundaries every time an instruction is executed from the I-cache 22.
[0020] The pre-decoder 21 comprises logic interposed in the path between main memory 32 and the I-cache 22. The pre-decoder 21 logic inspects the data retrieved from memory, and ascertains the number and length of instructions. The pre-decoder aligns the instructions along predetermined, e.g., word, boundaries, prior to passing the aligned instructions to the cache to be stored in a cache line.
[0021] Figure 3 depicts two representative lines 200, 260 of the I-cache 22, each containing a fixed number of the variable length instructions from Fig. 1 (in this example, four instructions are stored in each cache line 200, 260). The cache lines 200, 260 are 16 bytes. Word boundaries are indicated by dashed lines; half word boundaries are indicated by dotted lines. The instructions are aligned along word boundaries (i.e., each instruction starts at a word address). When an instruction is fetched from the I-cache 22 by the pipeline 12, the decode pipe stage may simply multiplex the relevant word from the cache line 200, 260 and immediately begin decoding the op code. In the case of half-word instructions (e.g., I3 and I8), one half-word of space in the cache line 200, 260, respectively, is unused, as indicated in Fig. 3 by shading.
[0022] Note that, as compared to the prior art cache depicted in Fig. 1, the cache 22 of Fig. 3 stores only eight instructions in two cache lines, rather than nine. The word space corresponding to the length of I9 - the halfwords at offsets 0x0A and 0x1E - is not utilized. This decrease in the efficiency of storing instructions in the cache 22 is the price of the simplicity, improved processor performance, and lower power consumption of the cache utilization depicted in Fig. 3.
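The word-aligned layout of Fig. 3 can be sketched the same way. This is a simplified model under the example's assumptions (four instructions per 16-byte line, word-aligned slots, I3 and I8 halfword):

```python
# Model of the Fig. 3 cache: exactly 4 instructions per 16-byte line, each
# occupying one word-aligned slot; halfword instructions waste a halfword.
WORD = 4
PER_LINE = 4

program = [("I1", 4), ("I2", 4), ("I3", 2), ("I4", 4), ("I5", 4),
           ("I6", 4), ("I7", 4), ("I8", 2), ("I9", 4)]

def aligned_lines(prog, per_line=PER_LINE):
    """Split the instruction stream into cache lines holding a fixed count."""
    return [prog[i:i + per_line] for i in range(0, len(prog), per_line)]

lines = aligned_lines(program)
# Lines 200 and 260 hold I1-I4 and I5-I8; I9 starts the next line.
assert [name for name, _ in lines[0]] == ["I1", "I2", "I3", "I4"]
assert [name for name, _ in lines[1]] == ["I5", "I6", "I7", "I8"]

# The slot offset of the k-th instruction in a line is simply k * WORD,
# so decode needs no boundary scan -- it multiplexes slot k directly.
slot_offset = lambda k: k * WORD

# Each halfword instruction (I3, I8) leaves half a slot unused (shaded in Fig. 3):
wasted = sum(WORD - length for line in lines[:2] for _, length in line)
assert wasted == 4   # two unused halfwords across the two lines
```

The `wasted` count makes the storage-efficiency trade-off of [0022] concrete: four bytes of the two lines hold no instruction, and I9 is pushed into a third line.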
[0023] Additionally, by allocating a fixed number of variable length instructions to a cache line 200, 260, and aligning the instructions along predetermined boundaries, no instruction is stored misaligned across cache lines, such as I5 in Fig. 1. Thus, the performance penalty and excess power consumption caused by two cache 22 accesses to retrieve a single instruction are completely obviated.
[0024] Because a fixed number of variable length instructions is stored, rather than a variable number of instructions having a known total length (the length of the cache line), the address of the next sequential instruction cannot be ascertained by simply incrementing the tag 220 of one cache line 200 by the memory size of the cache line 200. Accordingly, in one embodiment, a next fetch address is calculated by the pre-decoder 21 when the instructions are aligned (prior to storing them in the I-cache 22), and the next fetch address is stored in a field 240 along with the cache line 200.
[0025] As an alternative to calculating and storing a next fetch address, according to one embodiment an offset from the tag 220 may be calculated and stored along with the cache line 200, such as in an offset field 240. The next fetch address may then be easily calculated by adding the offset to the tag address. This embodiment incurs the processing delay and power consumption of performing this addition each time a successive address fetch crosses a cache line. In other embodiments, other information may be stored to assist in the calculation of the next fetch address. For example, a set of bits equal in number to the fixed number of instructions in a cache line may be stored, with e.g. a one indicating a fullword length instruction and a zero indicating a halfword length instruction stored in the corresponding instruction "slot." The addresses of the instructions in memory, and hence the address of the next sequential instruction, may then be calculated from this information. Those of skill in the art will readily recognize that additional next address calculation aids may be devised and stored to calculate the next instruction fetch address.
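The length-bit alternative of [0025] can be sketched as follows. The one-bit-per-slot encoding (1 = fullword, 0 = halfword) comes from the text; the function name and the worked addresses, which follow the Fig. 1/Fig. 3 example, are illustrative:

```python
# Next-fetch-address calculation from per-slot length bits, per [0025]:
# one bit per instruction slot, 1 = fullword (4 bytes), 0 = halfword (2 bytes).
WORD, HALFWORD = 4, 2

def next_fetch_address(tag_address, length_bits):
    """Add each instruction's memory footprint to the cache line's tag address."""
    span = sum(WORD if bit else HALFWORD for bit in length_bits)
    return tag_address + span

# First Fig. 3 line holds I1-I4; I3 is a halfword, so the four instructions
# span 4 + 4 + 2 + 4 = 14 bytes of memory: the next fetch is tag + 0x0E.
assert next_fetch_address(0x00, [1, 1, 0, 1]) == 0x0E   # address of I5
# Second line holds I5-I8 (I8 halfword), whose tag is I5's memory address:
assert next_fetch_address(0x0E, [1, 1, 1, 0]) == 0x1C   # address of I9
```

Storing the precomputed sum (the offset field 240) instead of the bits trades a few bits of storage for avoiding this addition on every line crossing, which is the trade-off the paragraph describes.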
[0026] While various embodiments have been explicated herein with respect to a representative ISA including word and halfword instruction lengths, the present invention is not limited to these embodiments. In general, any variable length instructions may be advantageously stored in an instruction cache 22 in a fixed number, aligned along predetermined boundaries. Additionally, a different size cache line 200, 260 than that depicted herein may be utilized in the practice of various embodiments.
[0027] Although embodiments of the present invention have been described herein with respect to particular features, aspects and embodiments thereof, it will be apparent that numerous variations, modifications, and other embodiments are possible within the broad scope of the present invention, and accordingly, all variations, modifications and embodiments are to be regarded as being within the scope of the invention. The present embodiments are therefore to be construed in all aspects as illustrative and not restrictive and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.

Claims

CLAIMS
What is claimed is:
1. A method of cache management in a processor having variable instruction length, comprising storing a fixed number of instructions per cache line.
2. The method of claim 1 further comprising inspecting instructions to determine their length and aligning the instructions along predetermined boundaries prior to placing them in the cache.
3. The method of claim 1 further comprising storing a next fetch address with each cache line.
4. The method of claim 3 further comprising determining the next fetch address prior to placing the instructions in the cache.
5. The method of claim 1 further comprising storing an offset with each cache line, the offset yielding the next fetch address when added to the cache line tag.
6. A processor, comprising:
an instruction execution pipeline operative to execute instructions of variable length;
an instruction cache operative to store a fixed number of the variable length instructions per cache line; and
a predecoder operative to align the variable length instructions along predetermined boundaries prior to writing the instructions into a cache line.
7. The processor of claim 6 further comprising a next fetch address field associated with each cache line.
8. The processor of claim 7 wherein the predecoder is additionally operative to calculate the address of the instruction following the last instruction written to a cache line, and to store the address in the next fetch address field of the cache line.
PCT/US2006/029523 2005-07-29 2006-07-26 Instruction cache having fixed number of variable length instructions WO2007016393A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2008524216A JP4927840B2 (en) 2005-07-29 2006-07-26 Instruction cache with a fixed number of variable-length instructions
EP06788854A EP1910919A2 (en) 2005-07-29 2006-07-26 Instruction cache having fixed number of variable length instructions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/193,547 2005-07-29
US11/193,547 US7568070B2 (en) 2005-07-29 2005-07-29 Instruction cache having fixed number of variable length instructions

Publications (2)

Publication Number Publication Date
WO2007016393A2 true WO2007016393A2 (en) 2007-02-08
WO2007016393A3 WO2007016393A3 (en) 2007-06-28

Family

ID=37451109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/029523 WO2007016393A2 (en) 2005-07-29 2006-07-26 Instruction cache having fixed number of variable length instructions

Country Status (6)

Country Link
US (1) US7568070B2 (en)
EP (1) EP1910919A2 (en)
JP (2) JP4927840B2 (en)
KR (1) KR101005633B1 (en)
CN (2) CN104657110B (en)
WO (1) WO2007016393A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009059103A1 (en) * 2007-11-02 2009-05-07 Qualcomm Incorporated Predecode repair cache for instructions that cross an instruction cache line

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7337272B2 (en) * 2006-05-01 2008-02-26 Qualcomm Incorporated Method and apparatus for caching variable length instructions
US8627017B2 (en) * 2008-12-30 2014-01-07 Intel Corporation Read and write monitoring attributes in transactional memory (TM) systems
US9753858B2 (en) 2011-11-30 2017-09-05 Advanced Micro Devices, Inc. DRAM cache with tags and data jointly stored in physical rows
JP5968693B2 (en) * 2012-06-25 2016-08-10 ルネサスエレクトロニクス株式会社 Semiconductor device
US11768689B2 (en) 2013-08-08 2023-09-26 Movidius Limited Apparatus, systems, and methods for low power computational imaging
US10001993B2 (en) 2013-08-08 2018-06-19 Linear Algebra Technologies Limited Variable-length instruction buffer management
US10853074B2 (en) * 2014-05-01 2020-12-01 Netronome Systems, Inc. Table fetch processor instruction using table number to base address translation
BR112017001981B1 (en) * 2014-07-30 2023-05-02 Movidius Limited METHOD FOR MANAGING RELATED INSTRUCTION BUFFER, SYSTEM AND COMPUTER READABLE MEMORY
US9916251B2 (en) 2014-12-01 2018-03-13 Samsung Electronics Co., Ltd. Display driving apparatus and cache managing method thereof
CN106528450B (en) * 2016-10-27 2019-09-17 上海兆芯集成电路有限公司 Extracting data in advance and the device for using the method
CN108415729A (en) * 2017-12-29 2018-08-17 北京智芯微电子科技有限公司 A kind of processing method and processing device of cpu instruction exception
CN110750303B (en) * 2019-09-25 2020-10-20 支付宝(杭州)信息技术有限公司 Pipelined instruction reading method and device based on FPGA

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6035387A (en) * 1997-03-18 2000-03-07 Industrial Technology Research Institute System for packing variable length instructions into fixed length blocks with indications of instruction beginning, ending, and offset within block
WO2001027749A1 (en) * 1999-10-14 2001-04-19 Advanced Micro Devices, Inc. Apparatus and method for caching alignment information
US6253287B1 (en) * 1998-09-09 2001-06-26 Advanced Micro Devices, Inc. Using three-dimensional storage to make variable-length instructions appear uniform in two dimensions

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5179680A (en) * 1987-04-20 1993-01-12 Digital Equipment Corporation Instruction storage and cache miss recovery in a high speed multiprocessing parallel processing apparatus
EP0498654B1 (en) * 1991-02-08 2000-05-10 Fujitsu Limited Cache memory processing instruction data and data processor including the same
JP2828219B2 (en) * 1995-03-23 1998-11-25 インターナシヨナル・ビジネス・マシーンズ・コーポレーシヨン Method of providing object code compatibility, method of providing object code compatibility and compatibility with scalar and superscalar processors, method for executing tree instructions, data processing system
EP0843848B1 (en) * 1996-05-15 2004-04-07 Koninklijke Philips Electronics N.V. Vliw processor which processes compressed instruction format
US6112299A (en) * 1997-12-31 2000-08-29 International Business Machines Corporation Method and apparatus to select the next instruction in a superscalar or a very long instruction word computer having N-way branching
US6253309B1 (en) 1998-09-21 2001-06-26 Advanced Micro Devices, Inc. Forcing regularity into a CISC instruction set by padding instructions
JP3490007B2 (en) * 1998-12-17 2004-01-26 富士通株式会社 Command control device
US6779100B1 (en) * 1999-12-17 2004-08-17 Hewlett-Packard Development Company, L.P. Method and device for address translation for compressed instructions
JP2003131945A (en) * 2001-10-25 2003-05-09 Hitachi Ltd Cache memory device
US7133969B2 (en) * 2003-10-01 2006-11-07 Advanced Micro Devices, Inc. System and method for handling exceptional instructions in a trace cache based processor

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009059103A1 (en) * 2007-11-02 2009-05-07 Qualcomm Incorporated Predecode repair cache for instructions that cross an instruction cache line
JP2011503719A (en) * 2007-11-02 2011-01-27 クゥアルコム・インコーポレイテッド Predecode repair cache for instructions that span instruction cache lines
EP2600240A3 (en) * 2007-11-02 2013-12-11 QUALCOMM Incorporated Predecode repair cache for instructions that cross an instruction cache line
JP2014044731A (en) * 2007-11-02 2014-03-13 Qualcomm Incorporated Predecode repair cache for instructions that cross instruction cache line
US8898437B2 (en) 2007-11-02 2014-11-25 Qualcomm Incorporated Predecode repair cache for instructions that cross an instruction cache line

Also Published As

Publication number Publication date
CN104657110B (en) 2020-08-18
US20070028050A1 (en) 2007-02-01
CN104657110A (en) 2015-05-27
JP4927840B2 (en) 2012-05-09
JP2012074046A (en) 2012-04-12
WO2007016393A3 (en) 2007-06-28
KR101005633B1 (en) 2011-01-05
JP2009503700A (en) 2009-01-29
EP1910919A2 (en) 2008-04-16
JP5341163B2 (en) 2013-11-13
CN101268440A (en) 2008-09-17
KR20080031981A (en) 2008-04-11
US7568070B2 (en) 2009-07-28

Similar Documents

Publication Publication Date Title
US7568070B2 (en) Instruction cache having fixed number of variable length instructions
EP2018609B1 (en) Pre-decoding variable length instructions
JP5837126B2 (en) System, method and software for preloading instructions from an instruction set other than the currently executing instruction set
US7818542B2 (en) Method and apparatus for length decoding variable length instructions
US6502185B1 (en) Pipeline elements which verify predecode information
US7676659B2 (en) System, method and software to preload instructions from a variable-length instruction set with proper pre-decoding
US20090019257A1 (en) Method and Apparatus for Length Decoding and Identifying Boundaries of Variable Length Instructions
US6092188A (en) Processor and instruction set with predict instructions
EP3550437B1 (en) Adaptive spatial access prefetcher apparatus and method
US6647490B2 (en) Training line predictor for branch targets
TWI438681B (en) Immediate and displacement extraction and decode mechanism
US6636959B1 (en) Predictor miss decoder updating line predictor storing instruction fetch address and alignment information upon instruction decode termination condition
CN113568663A (en) Code prefetch instruction
JP2023047283A (en) Scalable toggle point control circuit for clustered decode pipeline

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
    Ref document number: 200680034364.5; Country of ref document: CN
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase
    Ref document number: 2008524216; Country of ref document: JP
    Ref document number: 2006788854; Country of ref document: EP
NENP Non-entry into the national phase
    Ref country code: DE
WWE Wipo information: entry into national phase
    Ref document number: 235/MUMNP/2008; Country of ref document: IN
WWE Wipo information: entry into national phase
    Ref document number: 1020087004751; Country of ref document: KR