CN115563031A - Instruction cache prefetch control method, device, chip and storage medium

Instruction cache prefetch control method, device, chip and storage medium

Info

Publication number
CN115563031A
Authority
CN
China
Prior art keywords
instruction cache
hit
accessed
instruction
prefetch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211259393.9A
Other languages
Chinese (zh)
Inventor
乌绮
刘奔
汪争
韩文燕
张琦滨
陈逸飞
陈阳
黄颢彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Advanced Technology Research Institute
Original Assignee
Wuxi Advanced Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Advanced Technology Research Institute
Priority to CN202211259393.9A
Publication of CN115563031A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a prefetch control method, device and chip for an instruction cache, comprising the following steps: obtaining the physical address of an access request, and looking up, according to that physical address, whether the corresponding current data block hits in the instruction cache; if the current data block misses, looking up whether the set number of following data blocks hit; if the set number of following data blocks also miss, triggering the prefetch fill flow; if the set number of following data blocks hit, triggering the normal fill flow. The invention provides an ICache prefetching structure that, during the instruction fetch stage, determines in advance whether the blocks following the current instruction are already stored and, depending on that state, reads several instructions at once and loads them into the instruction cache. With this ICache design idea and method, the prefetch function of a multi-way set-associative ICache can be realized without changing the general Cacheline structure of the instruction cache line, thereby improving the ICache hit rate and processor performance.

Description

Instruction cache prefetch control method, device, chip and storage medium
Technical Field
The present application relates to the technical field of chip design, and in particular, to a prefetch method, device, chip, and storage medium capable of increasing an Instruction Cache (ICache) hit rate.
Background
A Cache is a bridge for transferring information between main memory and the processor core. In processor design, the Cache is used to reduce stalls in the core's internal pipeline caused by memory accesses, thereby improving processing performance. The Cache hit rate directly reflects the memory-access efficiency of the Cache in practical applications, so improving Cache access efficiency is a problem that must be considered in current processor design. Prefetching fills data blocks that the processor core will use from a lower-level memory into a higher-level memory in advance, so that they can be obtained more quickly when the core needs them, mitigating the latency caused by Cache misses and improving processing performance.
Many prefetching schemes have emerged to match the characteristics of different applications; according to the differing characteristics of instruction streams and data streams, they can be divided into prefetching suited to the instruction cache and prefetching suited to the data cache. The prefetch algorithms currently applied to instruction caches mainly include OBL (One-Block-Lookahead) prefetching, Tagged prefetching, and stream buffer prefetching.
OBL (One-Block-Lookahead) prefetch: whenever a cache data block is accessed, the next data block is prefetched. The algorithm is simple and the hardware cost is low, but its efficiency is poor.
Tagged prefetch: a tag bit is added in the instruction cache to mark prefetched data blocks. When a data block is prefetched, its tag bit is set to 1; when the block is accessed, the tag bit is reset to 0. If a subsequent access encounters a block whose tag bit is set, the prefetch mechanism is triggered and the sequentially following data is loaded into the instruction cache in advance. Using the flag bit while always prefetching the next data block achieves similar coverage at lower overhead, but inefficiency still exists (a rough behavioral sketch of this scheme is given below, after the stream buffer description).
Stream buffer prefetching: prefetching exploits the increasing or decreasing pattern shown by the cache block addresses accessed by a program over a period of time. During prefetching, a stream buffer is used to analyze the stream characteristics of the program, and data blocks are prefetched into it. A prefetched block is loaded into the instruction Cache only when it is hit, which improves Cache utilization and avoids polluting the Cache with unused data.
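To make the Tagged scheme above concrete, the following Python sketch models the described behavior; the class and method names are hypothetical, and the prefetch issued on a demand miss is an assumption taken from the classic algorithm rather than something stated in the text above.

```python
# Minimal behavioral sketch of Tagged prefetch (illustrative names, not from the patent).
class TaggedPrefetchCache:
    def __init__(self):
        self.tag_bit = {}  # block address -> 1 if block was prefetched and not yet used, else 0

    def _fill(self, block, prefetched):
        # Load a block from the lower-level memory and record its tag bit.
        self.tag_bit[block] = 1 if prefetched else 0

    def access(self, block):
        if block not in self.tag_bit:
            self._fill(block, prefetched=False)      # demand miss: normal fill (assumed trigger)
            self._fill(block + 1, prefetched=True)   # prefetch the sequentially next block
        elif self.tag_bit[block] == 1:
            self.tag_bit[block] = 0                  # block is accessed: reset its tag bit
            self._fill(block + 1, prefetched=True)   # tagged block hit: prefetch the next block
        # else: ordinary hit on an already-used block, no prefetch action
```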
Disclosure of Invention
To address the low efficiency of prior-art instruction cache prefetching methods, the present application provides an instruction cache prefetch control method, device, processor chip and storage medium.
The following technical scheme is adopted in the application.
In a first aspect, the present application provides a prefetch control method for an instruction cache, including:
obtaining the physical address of an access request, and looking up, according to that physical address, whether the instruction cache line corresponding to the current instruction hits in the instruction cache;
if the current instruction cache line misses, looking up whether the set number of following instruction cache lines hit; if the set number of following instruction cache lines also miss, triggering the prefetch fill flow; if the set number of following instruction cache lines hit, triggering the normal fill flow.
Furthermore, the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache; the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
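As a rough illustration of the flow described in this first aspect, the following Python sketch shows the decision logic with the set number fixed at one; `icache`, `fetch_lines_from_memory` and their methods are hypothetical placeholders, not part of the claimed design.

```python
# Illustrative sketch of the prefetch decision (hypothetical helpers, set number = 1).
def handle_fetch(pa, icache, fetch_lines_from_memory):
    if icache.hit(pa):
        return icache.read(pa)                        # current instruction cache line hits

    next_pa = icache.next_line_address(pa)            # physical address of the following cache line
    if icache.hit(next_pa):
        lines = fetch_lines_from_memory(pa, count=1)  # normal fill: only the missed line
    else:
        lines = fetch_lines_from_memory(pa, count=2)  # prefetch fill: missed line and the next line
    icache.fill(pa, lines)
    return icache.read(pa)
```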
In a second aspect, the present application provides a prefetch control apparatus for an instruction cache, comprising: a query unit, a detection unit and a fill trigger unit;
the query unit is used to obtain the physical address of an access request and, according to that physical address, to look up whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
the detection unit is used to look up whether the set number of following instruction cache lines hit if the currently accessed instruction cache line misses;
the fill trigger unit is used to trigger the prefetch fill flow if the set number of following instruction cache lines also miss, and to trigger the normal fill flow if the set number of following instruction cache lines hit.
Furthermore, the apparatus also comprises a fill unit used to complete the prefetch fill flow or the normal fill flow, where the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache, and the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
In a third aspect, a prefetch control method for an instruction cache includes:
obtaining the physical address of an access request, and looking up, according to that physical address, whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
if the currently accessed instruction cache line misses and the prefetch function is determined to be enabled, looking up whether the set number of following instruction cache lines hit;
if they also miss, triggering the prefetch fill flow; if the set number of following instruction cache lines hit, triggering the normal fill flow.
Further, a prefetch switch register is provided for storing information indicating whether to turn on or off the prefetch function.
In a fourth aspect, the present application provides a prefetch control apparatus for an instruction cache, comprising a query unit, a detection unit, and a fill trigger unit;
the query unit is used to obtain the physical address of an access request and, according to that physical address, to look up whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
the detection unit is used to look up whether the set number of following instruction cache lines hit if the currently accessed instruction cache line misses and the prefetch function is determined to be enabled;
the fill trigger unit is used to trigger the prefetch fill flow if the set number of following instruction cache lines also miss, and to trigger the normal fill flow if the set number of following instruction cache lines hit.
Further, the apparatus further includes a switch register for storing information indicating whether to turn on or off the prefetch function.
In a fifth aspect, a processor chip includes an instruction cache prefetch control apparatus according to any possible implementation of the above technical solution.
In a sixth aspect, a computer-readable storage medium stores a prefetch cache control program which, when executed by a processor, implements the steps of the prefetch cache control method according to any possible implementation of the above technical solution.
The invention has the following beneficial technical effects: the invention provides an ICache prefetching structure that, during the instruction fetch stage, determines in advance whether the instructions following the current instruction are already stored and, depending on that state, reads several instructions at once and loads them into the instruction cache. With this ICache design idea and method, when the current instruction cache line misses, the set number of following missed instruction cache lines are prefetched as well, which saves the processor from having to read them from main memory later, improving the hit rate and processing efficiency;
the method realizes the prefetch function of a multi-way set-associative ICache without changing the general Cacheline structure of the instruction cache line, further improving the ICache hit rate and processor performance.
Drawings
FIG. 1 is a flow chart of an implementation of a prefetch control method provided in embodiment 2 of the present invention;
FIG. 2 is a block diagram of an ICache in an embodiment of the present invention;
FIG. 3 is a diagram of a control state machine for implementing the ICache prefetch function according to an embodiment of the present invention;
FIG. 4 is a diagram of 4-way set associative ICache lookup according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the figures and the specific examples.
Example 1: the prefetch control method of the instruction cache comprises the following steps:
obtaining the physical address of an access request, and looking up, according to that physical address, whether the instruction cache line corresponding to the current instruction hits in the instruction cache;
if the current instruction cache line misses, looking up whether the set number of following instruction cache lines hit; if the set number of following instruction cache lines also miss, triggering the prefetch fill flow; if the set number of following instruction cache lines hit, triggering the normal fill flow.
In this embodiment, the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache; the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
Optionally, the set number is one.
Example 2: corresponding to the above embodiment, the present embodiment provides a prefetch control apparatus for an instruction cache, comprising: a query unit, a detection unit and a fill trigger unit;
the query unit is used to obtain the physical address of an access request and, according to that physical address, to look up whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
the detection unit is used to look up whether the set number of following instruction cache lines hit if the currently accessed instruction cache line misses;
the fill trigger unit is used to trigger the prefetch fill flow if the set number of following instruction cache lines also miss, and to trigger the normal fill flow if the set number of following instruction cache lines hit.
In this embodiment, the apparatus further includes a fill unit configured to complete the prefetch fill flow or the normal fill flow, where the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache, and the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
Example 3: as shown in fig. 1, the prefetch control method of the instruction cache includes:
obtaining the physical address of an access request, and looking up, according to that physical address, whether the corresponding instruction cache line to be accessed hits in the instruction cache (i.e., in FIG. 1, judging whether the prefetch line is in the cache);
if the currently accessed instruction cache line misses and the prefetch function is determined to be enabled, looking up whether the set number of following instruction cache lines hit;
if they also miss, triggering the prefetch fill flow; if the set number of following instruction cache lines hit, triggering the normal fill flow.
In this embodiment, the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache; the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
The set number is set to one. First, the hit status of the corresponding instruction cache line is looked up in the instruction cache according to the physical address; if it misses, the hit status of the next instruction cache line is looked up; if that also misses, prefetching is triggered and the currently missed Cacheline together with the next Cacheline are fetched from main memory and filled into the instruction cache ICache; if the next Cacheline hits, prefetching is not triggered and only the currently missed Cacheline is fetched from main memory and filled. In addition, a dedicated register must be added in the core as a switch for the prefetch function: if it is enabled, the prefetch detection flow is triggered; if it is not enabled, instruction fetch proceeds according to the normal flow.
In this embodiment, the instruction cache ICache adopts the 4-way set-associative architecture shown in FIG. 2. In this structure, the instruction cache ICache is divided into an ITag array, an ICache data (ICData) array and an access control module (ICache_Ctrl). The ITag array is used to store Tag information and comprises ITag way0 (the 1st way of the ITag array), ITag way1 (the 2nd way), ITag way2 (the 3rd way) and ITag way3 (the 4th way); the ICData array is used to store instruction data and comprises ICData way0 (the 1st way of the ICData array), ICData way1 (the 2nd way), ICData way2 (the 3rd way) and ICData way3 (the 4th way); the access control module (ICache_Ctrl) is responsible for handling the various ICache events and controlling the prefetch function. MEM denotes the memory.
The normal fill flow issues a request to the bus to fill 1 instruction cache line (Cacheline); the prefetch fill flow issues a request to the bus to fill 2 Cachelines.
In this embodiment, a prefetch switch register is provided for storing information indicating whether to turn on or off the prefetch function.
The prefetch control method provided by this embodiment is mainly controlled by a prefetch state machine, which has four states, as shown in fig. 3.
IDLE state: initial state
(1) If the processor has enabled the prefetch function, the ICache enters the CHECK state upon receiving an access instruction that misses;
(2) If the processor has disabled the prefetch function, the ICache enters the NORMAL state upon receiving an access instruction that misses;
(3) In all other cases the state machine remains in the IDLE state.
CHECK state: detection state, used to decide whether to trigger the prefetch flow
(1) If the next Cacheline is in the Cache (hit), prefetching is not triggered and the state machine jumps to the NORMAL state;
(2) If the next Cacheline is not in the Cache (miss), prefetching is triggered and the state machine jumps to the PREF state.
NORMAL state: normal fill state, used to fill the Cacheline returned from main memory.
(1) One Cacheline is fetched from main memory and filled into the ICache; after filling completes, the state machine jumps to the IDLE state;
(2) In all other cases it remains in the NORMAL state.
PREF state: prefetch fill state, used to fill the Cachelines returned from main memory.
(1) Two Cachelines are fetched from main memory and filled into the ICache one by one; after filling completes, the state machine jumps to the IDLE state;
(2) In all other cases it remains in the PREF state.
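A minimal Python sketch of this four-state control is given below; the state names follow FIG. 3, while the flag names (`prefetch_enabled`, `miss`, `next_line_hit`, `fill_done`) are hypothetical stand-ins for the corresponding hardware signals.

```python
# Behavioral sketch of the prefetch control state machine (signal names are illustrative).
IDLE, CHECK, NORMAL, PREF = "IDLE", "CHECK", "NORMAL", "PREF"

def next_state(state, prefetch_enabled, miss, next_line_hit, fill_done):
    if state == IDLE:
        if miss:
            return CHECK if prefetch_enabled else NORMAL
        return IDLE
    if state == CHECK:
        # Next Cacheline already cached -> fill only the missed line; otherwise prefetch both.
        return NORMAL if next_line_hit else PREF
    if state == NORMAL:
        return IDLE if fill_done else NORMAL   # one Cacheline fetched from main memory and filled
    if state == PREF:
        return IDLE if fill_done else PREF     # two Cachelines fetched and filled one by one
    return IDLE
```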
The prefetch control method provided by this embodiment is applied in practice to the 4-way set-associative ICache array.
The method for querying the next instruction Cacheline is as follows: according to the INDEX field of the obtained physical address of the access request, determine the row in the instruction cache where the cache line is located; compare the Tag field of the physical address with the tags stored in that row to determine the way number in which the next instruction cache line is located; then read the ICData array at the position obtained from the ITag lookup to obtain the required next Cacheline.
The next-Cacheline query process can be understood with reference to FIG. 4 as follows: (1) calculate the physical address of the next Cacheline, and determine from its INDEX field that the accessed Cacheline is located in row 1 of the instruction cache; (2) compare the TAG field of the physical address with the tags stored in that row; here TAG = c and Tag_Way2 = c, so the Cacheline is found in way 2 of the cache; (3) read the ICData array at the position obtained from the TAG lookup (row 1, way 2) to obtain the required Cacheline, and use the OFFSET field of the physical address (OFFSET = 3) to obtain the required instruction INST3.
It should be noted that if the processor prefetch function is enabled and an access instruction that misses is received (assuming the current physical address is PA), then in the CHECK state the physical address of the next Cacheline is {PA[TAG], PA[INDEX] + 1'b1, PA[OFFSET]} rather than PA + 1'b1, since the ICData array is organized in Cacheline units.
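The next-line address construction and the 4-way tag comparison can be sketched as follows; the field widths and helper names are assumptions chosen only to illustrate the {PA[TAG], PA[INDEX] + 1'b1, PA[OFFSET]} concatenation and the per-way lookup, not the actual parameters of the design.

```python
# Sketch of the next-Cacheline address computation and 4-way tag lookup (widths are assumed).
OFFSET_BITS, INDEX_BITS = 5, 7   # illustrative Cacheline offset / row index widths

def next_cacheline_address(pa):
    offset = pa & ((1 << OFFSET_BITS) - 1)
    index = (pa >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = pa >> (OFFSET_BITS + INDEX_BITS)
    index = (index + 1) & ((1 << INDEX_BITS) - 1)        # PA[INDEX] + 1'b1; TAG and OFFSET unchanged
    return (tag << (OFFSET_BITS + INDEX_BITS)) | (index << OFFSET_BITS) | offset

def lookup_way(pa, itag_array):
    # itag_array[row] holds the tags stored in the four ways of that row.
    index = (pa >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = pa >> (OFFSET_BITS + INDEX_BITS)
    for way, stored_tag in enumerate(itag_array[index]):
        if stored_tag == tag:
            return way                                   # hit: read ICData at (row, way)
    return None                                          # miss in all four ways
```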
The invention realizes the prefetching function of the ICache on the premise of not modifying the conventional Cacheline structure.
In the figure, the INDEX field represents the row address in the instruction cache; the OFFSET field represents the offset within the Cacheline; INST0-INST7 are the instruction data 0-7 contained in the cache line.
Example 4: corresponding to embodiment 3, the present embodiment provides a prefetch control apparatus for an instruction cache, comprising: a query unit, a detection unit and a fill trigger unit;
the query unit is used to obtain the physical address of an access request and, according to that physical address, to look up whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
the detection unit is used to look up whether the set number of following instruction cache lines hit if the currently accessed instruction cache line misses and the prefetch function is determined to be enabled;
the fill trigger unit is used to trigger the prefetch fill flow if the set number of following instruction cache lines also miss, and to trigger the normal fill flow if the set number of following instruction cache lines hit.
The apparatus also includes a switch register indicating whether the prefetch function is on or off.
Furthermore, the apparatus also comprises a fill unit used to complete the prefetch fill flow or the normal fill flow, where the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache, and the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Example 5: a processor chip comprising the instruction cache prefetch control apparatus provided by the above embodiments and their possible implementations.
Example 6: a computer-readable storage medium, wherein a prefetch cache control program is stored on the computer-readable storage medium, and when executed by a processor, implements the steps of the prefetch cache control method as provided by the above embodiments and possible implementations.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A method for controlling prefetching of an instruction cache, comprising:
obtaining the physical address of an access request, and looking up, according to that physical address, whether the instruction cache line corresponding to the current instruction hits in the instruction cache;
if the current instruction cache line misses, looking up whether the set number of following instruction cache lines hit; if the set number of following instruction cache lines also miss, triggering a prefetch fill flow; if the set number of following instruction cache lines hit, triggering a normal fill flow.
2. The method of claim 1, wherein the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache; the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
3. The prefetch control method of an instruction cache according to claim 1, wherein, when the instruction cache employs a multi-way set-associative instruction cache array, the instruction cache comprises an ITag array for storing Tag information and an ICache data (ICData) array for storing instruction data information;
the method for querying the next instruction cache line comprises: determining, according to the INDEX field of the obtained physical address of the access request, the row in the instruction cache where the cache line is located; comparing the Tag field of the physical address with the tags stored in that row to determine the way number in which the next instruction cache line is located; and reading the ICData array at the position obtained from the ITag lookup to obtain the required next instruction cache line.
4. A prefetch control apparatus for an instruction cache, comprising: a query unit, a detection unit and a fill trigger unit;
wherein the query unit is used to obtain the physical address of an access request and, according to that physical address, to look up whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
the detection unit is used to look up whether the set number of following instruction cache lines hit if the currently accessed instruction cache line misses;
the fill trigger unit is used to trigger the prefetch fill flow if the set number of following instruction cache lines also miss, and to trigger the normal fill flow if the set number of following instruction cache lines hit.
5. The apparatus according to claim 4, further comprising a fill unit configured to complete the prefetch fill flow or the normal fill flow, wherein the prefetch fill flow fetches the currently missed instruction cache line together with the set number of following missed instruction cache lines from main memory and fills them into the instruction cache; the normal fill flow fetches only the currently missed instruction cache line from main memory and fills it into the instruction cache.
6. A method for controlling prefetching of an instruction cache, comprising:
obtaining the physical address of an access request, and looking up, according to that physical address, whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
if the currently accessed instruction cache line misses and the prefetch function is determined to be enabled, looking up whether the set number of following instruction cache lines hit;
if they also miss, triggering a prefetch fill flow; if the set number of following instruction cache lines hit, triggering a normal fill flow.
7. The method of claim 6, wherein a prefetch switch register is provided for storing information indicating whether to turn on or off the prefetch function.
8. A prefetch control apparatus for an instruction cache, comprising: a query unit, a detection unit and a fill trigger unit;
wherein the query unit is used to obtain the physical address of an access request and, according to that physical address, to look up whether the instruction cache line corresponding to the current instruction to be accessed hits in the instruction cache;
the detection unit is used to look up whether the set number of following instruction cache lines hit if the currently accessed instruction cache line misses and the prefetch function is determined to be enabled;
the fill trigger unit is used to trigger the prefetch fill flow if the set number of following instruction cache lines also miss, and to trigger the normal fill flow if the set number of following instruction cache lines hit.
9. The apparatus as claimed in claim 8, further comprising a switch register for storing information indicating whether to turn on or off the prefetch function.
10. A processor chip comprising the instruction cache prefetch control apparatus according to any one of claims 4, 7 and 8.
11. A computer-readable storage medium, having a prefetch cache control program stored thereon, which when executed by a processor implements the steps of the prefetch cache control method of any of claims 1, 2, 3, 6, 7.
CN202211259393.9A 2022-10-14 2022-10-14 Instruction cache prefetch control method, device, chip and storage medium Pending CN115563031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211259393.9A CN115563031A (en) 2022-10-14 2022-10-14 Instruction cache prefetch control method, device, chip and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211259393.9A CN115563031A (en) 2022-10-14 2022-10-14 Instruction cache prefetch control method, device, chip and storage medium

Publications (1)

Publication Number Publication Date
CN115563031A (en) 2023-01-03

Family

ID=84744697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211259393.9A Pending CN115563031A (en) 2022-10-14 2022-10-14 Instruction cache prefetch control method, device, chip and storage medium

Country Status (1)

Country Link
CN (1) CN115563031A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116049033A (en) * 2023-03-31 2023-05-02 沐曦集成电路(上海)有限公司 Cache read-write method, system, medium and device for Cache


Similar Documents

Publication Publication Date Title
US6782454B1 (en) System and method for pre-fetching for pointer linked data structures
US7917701B2 (en) Cache circuitry, data processing apparatus and method for prefetching data by selecting one of a first prefetch linefill operation and a second prefetch linefill operation
US6138213A (en) Cache including a prefetch way for storing prefetch cache lines and configured to move a prefetched cache line to a non-prefetch way upon access to the prefetched cache line
US8140768B2 (en) Jump starting prefetch streams across page boundaries
US7406569B2 (en) Instruction cache way prediction for jump targets
EP0795820B1 (en) Combined prefetch buffer and instructions cache memory system and method for providing instructions to a central processing unit utilizing said system.
US6484239B1 (en) Prefetch queue
US20070186050A1 (en) Self prefetching L2 cache mechanism for data lines
KR101095204B1 (en) Methods and apparatus for low-complexity instruction prefetch system
US20010016897A1 (en) Maximizing sequential read streams while minimizing the impact on cache and other applications
US20070180158A1 (en) Method for command list ordering after multiple cache misses
US7047362B2 (en) Cache system and method for controlling the cache system comprising direct-mapped cache and fully-associative buffer
US7346741B1 (en) Memory latency of processors with configurable stride based pre-fetching technique
KR100234647B1 (en) Data processing system with instruction prefetch
US20070180156A1 (en) Method for completing IO commands after an IO translation miss
US20080140934A1 (en) Store-Through L2 Cache Mode
US9003123B2 (en) Data processing apparatus and method for reducing storage requirements for temporary storage of data
US6959363B2 (en) Cache memory operation
US5666505A (en) Heuristic prefetch mechanism and method for computer system
CN108874691B (en) Data prefetching method and memory controller
CN115563031A (en) Instruction cache prefetch control method, device, chip and storage medium
US20110022802A1 (en) Controlling data accesses to hierarchical data stores to retain access order
CN110737475B (en) Instruction cache filling and filtering device
CN112711383B (en) Non-volatile storage reading acceleration method for power chip
US9311247B1 (en) Method and apparatus for detecting patterns of memory accesses in a computing system with out-of-order program execution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination