CN109918131B - Instruction reading method based on non-blocking instruction cache

- Publication number: CN109918131B (application CN201910180780.5A)
- Authority: CN (China)
- Prior art keywords: instruction, cache, data, cache line, SRAM
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Abstract
The invention relates to the technical field of computer hardware, and in particular discloses an instruction reading method based on a non-blocking instruction cache, in which an index tag register file is provided in the non-blocking instruction cache to store the index tags. The method comprises the following steps: judging whether there is an instruction fetch request on the instruction bus; when there is a fetch request on the instruction bus, reading the index tag from the index tag register file; comparing the index tag with the address information in the fetch request; if the index tag matches the address information, indicating a cache hit, reading the instruction data from the data SRAM and returning it to the instruction bus; and if the index tag does not match the address information, indicating a cache miss, handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line. The instruction reading method based on the non-blocking instruction cache provided by the invention significantly improves processor performance.
Description
Technical Field
The invention relates to the technical field of computer hardware, and in particular to an instruction reading method based on a non-blocking instruction cache.
Background
With the rapid development of integrated-circuit manufacturing processes, processor frequency has grown by more than 40% per year in recent years, while memory speed has grown by only about 1% per year; the speed gap between processor and memory therefore keeps widening, and memory access speed has become the bottleneck limiting processor performance. The cache, serving as a high-speed buffer between the processor and main memory, bridges this speed difference.
Current mainstream cache designs are mostly set-associative and comprise a single-port index tag SRAM, a single-port data SRAM, and control logic. Based on the address the processor places on the instruction bus, the index value (TAG) is read from the index tag SRAM and compared with the current address. If they match, the cache hits, and the instruction data is read from the data SRAM and returned to the instruction bus. If they do not match, the cache misses: a full line of cache line data is read from main memory and backfilled into one way of the data SRAM chosen by an LRU (least recently used) algorithm, the index tag SRAM is updated, and only then does the pipeline advance to the next instruction.
In the existing scheme, the pipeline advances to the next instruction only after all the data of a cache line has been returned and backfilled into the data SRAM, even though the current operation usually needs only part of the data in that line. The processor's instruction pipeline is therefore blocked, and performance is reduced.
Furthermore, because the index tag is stored in an SRAM, the data read from the SRAM appears one clock late, so the comparison result is also available one clock late; only then, on a hit, is the corresponding instruction data read from the data SRAM and returned to the processor's instruction bus. Reading the index value from the tag SRAM and reading the instruction data from the data SRAM thus proceed serially, the processor waits at least one clock cycle per fetch, and performance is reduced.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and to that end provides an instruction reading method based on a non-blocking instruction cache.
As one aspect of the invention, an instruction reading method based on a non-blocking instruction cache is provided, wherein an index tag register file is disposed in the non-blocking instruction cache to store the index tags, and the method comprises:
judging whether there is an instruction fetch request on the instruction bus;
when there is a fetch request on the instruction bus, reading the index tag from the index tag register file;
comparing the index tag with the address information in the fetch request;
if the index tag matches the address information in the fetch request, indicating a cache hit, reading instruction data from the data SRAM and returning it to the instruction bus;
and if the index tag does not match the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line.
Preferably, handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line comprises:
initiating a cache line access request containing the critical word to main memory;
judging whether the critical word has been returned;
if the critical word has been returned, registering it in the cache line register buffer and backfilling it into the data SRAM;
and handling subsequent fetch requests according to how the fetch operation corresponds to the cache line.
Preferably, if the critical word has not been returned, the method returns to judging whether the critical word has been returned.
Preferably, the correspondence between the fetch operation and the cache line covers three cases: the fetch operation targets the same cache line in the cache, the fetch operation hits another cache line in the cache, or the fetch operation targets another cache line and the cache misses.
Preferably, handling subsequent fetch requests according to how the fetch operation corresponds to the cache line comprises:
judging whether the fetch operation hits another cache line in the cache;
if the fetch operation hits another cache line in the cache, reading the requested word from the data SRAM and suspending the backfilling of the returned cache line data into the data SRAM;
if the fetch operation targets another cache line and the cache misses, suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM;
and if the fetch operation targets the same cache line in the cache, obtaining the data from the cache line register buffer and backfilling the returned cache line data into the data SRAM.
Preferably, handling subsequent fetch requests according to how the fetch operation corresponds to the cache line further comprises, after the step of suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM:
judging whether the whole cache line has been read out of main memory;
if the whole cache line has been read out of main memory, judging whether any cache line data has not yet been backfilled into the data SRAM;
and if some cache line data has not been backfilled into the data SRAM, backfilling it into the data SRAM.
Preferably, if no cache line data remains to be backfilled into the data SRAM, the method returns to the step of initiating a cache line access request containing the critical word to main memory.
Preferably, if the whole cache line has not yet been read out of main memory, the method returns to the step of suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM.
Preferably, handling subsequent fetch requests according to how the fetch operation corresponds to the cache line further comprises: after either the step of reading the requested word from the data SRAM while the backfill is suspended, or the step of obtaining the data from the cache line register buffer while the returned cache line data is backfilled into the data SRAM, has completed, returning to the step of judging whether the fetch operation hits another cache line.
Preferably, an LRU algorithm module, a cache control logic module, a data SRAM, and a cache line register buffer are further disposed in the non-blocking instruction cache; the LRU algorithm module is communicatively connected to the index tag register file and to the cache control logic module, the index tag register file is communicatively connected to the cache control logic module, the data SRAM is communicatively connected to the cache control logic module, and the cache line register buffer is communicatively connected to the data SRAM.
By adopting critical-word-first handling and adding a cache line register buffer, the instruction reading method based on the non-blocking instruction cache provided by the invention removes the restriction that, after a cache miss, the next instruction can enter the pipeline only once all the cache line data has been returned and backfilled into the data SRAM. It thereby realizes a non-blocking instruction cache and significantly improves processor performance.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of the instruction reading method based on the non-blocking instruction cache according to the present invention.
FIG. 2 is a flowchart of a specific embodiment of the instruction reading method based on the non-blocking instruction cache according to the present invention.
FIG. 3 is a block diagram of the non-blocking instruction cache provided by the present invention.
Detailed Description
The following describes embodiments of the invention in detail with reference to the accompanying drawings. It should be understood that the embodiments described herein are intended only to illustrate and explain the invention, not to limit it.
As one aspect of the invention, an instruction reading method based on a non-blocking instruction cache is provided, wherein an index tag register file is disposed in the non-blocking instruction cache to store the index tags. As shown in FIG. 1, the method comprises:
S110, judging whether there is an instruction fetch request on the instruction bus;
S120, when there is a fetch request on the instruction bus, reading the index tag from the index tag register file;
S130, comparing the index tag with the address information in the fetch request;
S140, if the index tag matches the address information in the fetch request, indicating a cache hit, reading instruction data from the data SRAM and returning it to the instruction bus;
S150, if the index tag does not match the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line.
By adopting critical-word-first handling and adding a cache line register buffer, the instruction reading method based on the non-blocking instruction cache provided by the invention removes the restriction that, after a cache miss, the next instruction can enter the pipeline only once all the cache line data has been returned and backfilled into the data SRAM. It thereby realizes a non-blocking instruction cache and significantly improves processor performance.
With reference to FIG. 2, the following describes in detail how subsequent fetch requests are handled in critical-word-first fashion according to how the fetch operation corresponds to the cache line.
It should be noted that, as shown in FIG. 3, an LRU algorithm module, a cache control logic module, a data SRAM, and a cache line register buffer are further disposed in the non-blocking instruction cache; the LRU algorithm module is communicatively connected to the index tag register file and to the cache control logic module, the index tag register file is communicatively connected to the cache control logic module, the data SRAM is communicatively connected to the cache control logic module, and the cache line register buffer is communicatively connected to the data SRAM. The non-blocking instruction cache holds 128 words (512 bytes), organized as 4 sets of 4-way set-associative cache lines with 8 words (32 bytes) per line; a single-port SRAM is used as the data store, the index tags are kept in a register file, and a cache line register buffer is provided.
The LRU algorithm module: uses a least-recently-used policy. Each cache line in a set has an LRU count. On a cache miss, the line whose count is 0 is replaced; its count becomes the maximum and the counts of the other lines are decremented by 1. On a hit to the way holding the maximum count, all counts stay unchanged; on a hit to any other way, every way with a larger count is decremented by 1, and the hit way's count becomes the maximum.
Index tag register file: stores the address information (TAG) and valid flag bit of each set's cache lines and of the cache line register buffer.
Data SRAM: a single-port SRAM holding a mapping of main memory data.
Cache line register buffer: when a cache miss occurs, buffers the cache line data read from main memory.
Cache control logic: based on the comparison result, generates the reads and writes of the data SRAM and the control of the interface to main memory.
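For illustration only, the behaviour of these modules can be sketched in software. The following minimal C model captures the geometry (4 sets, 4 ways, 8 words per line) and the LRU count update described above; all identifiers (tag_entry_t, line_buffer_t, lru_touch, and so on) are hypothetical and are not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

/* Geometry from the description: 4 sets x 4 ways, 8 words (32 bytes) per line,
 * 128 words (512 bytes) in total. */
#define NUM_SETS   4
#define NUM_WAYS   4
#define LINE_WORDS 8

typedef struct {
    uint32_t tag;    /* fetch address bits 31-7 */
    bool     valid;  /* valid flag bit */
    uint8_t  lru;    /* LRU count: 0 = replaced next, NUM_WAYS-1 = most recent */
} tag_entry_t;

/* Index tag register file: one entry per cache line. */
static tag_entry_t tags[NUM_SETS][NUM_WAYS];

/* Single-port data SRAM, modelled as a word array. */
static uint32_t data_sram[NUM_SETS][NUM_WAYS][LINE_WORDS];

/* Cache line register buffer: holds the line currently being refilled. */
typedef struct {
    uint32_t words[LINE_WORDS];
    bool     arrived[LINE_WORDS];    /* word has returned from main memory */
    bool     backfilled[LINE_WORDS]; /* word has been written into the data SRAM */
    uint32_t tag;
    uint8_t  set, way;               /* victim way chosen by the LRU module */
    bool     active;
} line_buffer_t;

static line_buffer_t line_buf;

/* Hit: if the hit way already holds the maximum count nothing changes;
 * otherwise every way with a larger count is decremented by 1 and the
 * hit way's count becomes the maximum. */
static void lru_touch(uint8_t set, uint8_t hit_way)
{
    uint8_t cur = tags[set][hit_way].lru;
    if (cur == NUM_WAYS - 1)
        return;
    for (int w = 0; w < NUM_WAYS; w++)
        if (tags[set][w].lru > cur)
            tags[set][w].lru--;
    tags[set][hit_way].lru = NUM_WAYS - 1;
}

/* Miss: the way whose count is 0 is replaced; its count becomes the
 * maximum and the counts of the other ways are decremented by 1. */
static uint8_t lru_victim(uint8_t set)
{
    uint8_t victim = 0;
    for (int w = 0; w < NUM_WAYS; w++)
        if (tags[set][w].lru == 0)
            victim = (uint8_t)w;
    for (int w = 0; w < NUM_WAYS; w++) {
        if (w == victim)
            tags[set][w].lru = NUM_WAYS - 1;
        else if (tags[set][w].lru > 0)
            tags[set][w].lru--;
    }
    return victim;
}
```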
Specifically, handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line comprises:
initiating a cache line access request containing the critical word to main memory;
judging whether the critical word has been returned;
if the critical word has been returned, registering it in the cache line register buffer and backfilling it into the data SRAM;
and handling subsequent fetch requests according to how the fetch operation corresponds to the cache line.
More specifically, if the critical word has not been returned, the method returns to judging whether the critical word has been returned.
It should be understood that when there is a fetch request on the instruction bus, bits 6-5 of the address on the instruction bus select the entry in the index tag register file, and the TAG is compared with bits 31-7 of the fetch address. On a hit, the instruction data is read from the data SRAM and returned to the instruction bus, and the LRU algorithm module updates the count values. On a miss, the cache control logic module issues an access request to main memory that asks for the needed word (the critical word) first; as soon as the critical word has been read from main memory, it is simultaneously returned to the instruction bus, so the processor can proceed with the next instruction in the pipeline.
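Continuing the hypothetical model above, the address split just described (bits 4-0: offset within the 32-byte line, bits 6-5: set index, bits 31-7: TAG) and the hit/miss paths can be sketched as follows. main_mem_read and miss_critical_word_first are assumed stand-ins, not names from the patent; a real wrap burst is a bus transaction and is modelled here only functionally.

```c
/* Address split per the description above. */
static inline uint8_t  addr_set(uint32_t a)  { return (uint8_t)((a >> 5) & 0x3); }
static inline uint32_t addr_tag(uint32_t a)  { return a >> 7; }
static inline uint8_t  addr_word(uint32_t a) { return (uint8_t)((a >> 2) & 0x7); }

/* Assumed stand-in for a word read from main memory. */
static uint32_t main_mem_read(uint32_t addr) { return addr; /* dummy payload */ }

/* Tag lookup. Because the tags sit in registers rather than an SRAM, the
 * comparison completes in the same cycle as the request instead of one
 * clock later. */
static bool lookup(uint32_t addr, uint8_t *way_out)
{
    uint8_t set = addr_set(addr);
    for (uint8_t w = 0; w < NUM_WAYS; w++) {
        if (tags[set][w].valid && tags[set][w].tag == addr_tag(addr)) {
            *way_out = w;
            return true;
        }
    }
    return false;
}

/* Miss handling, critical word first: issue a wrap burst that returns the
 * requested word first, forward it to the instruction bus at once, and
 * stage the rest of the line in the cache line register buffer. */
static uint32_t miss_critical_word_first(uint32_t addr)
{
    uint8_t set  = addr_set(addr);
    uint8_t crit = addr_word(addr);

    line_buf.active = true;
    line_buf.tag    = addr_tag(addr);
    line_buf.set    = set;
    line_buf.way    = lru_victim(set);
    for (int i = 0; i < LINE_WORDS; i++)
        line_buf.arrived[i] = line_buf.backfilled[i] = false;

    line_buf.words[crit]   = main_mem_read(addr); /* critical word arrives first */
    line_buf.arrived[crit] = true;
    /* In hardware the remaining words stream in on later cycles; the
     * processor already resumes with the word returned below. */
    return line_buf.words[crit];
}

/* Fetch: hit -> read the data SRAM and update the LRU counts;
 * miss -> request the line critical-word-first. */
static uint32_t fetch(uint32_t addr)
{
    uint8_t way, set = addr_set(addr);
    if (lookup(addr, &way)) {
        lru_touch(set, way);
        return data_sram[set][way][addr_word(addr)];
    }
    return miss_critical_word_first(addr);
}
```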
It should be noted that the correspondence between the fetch operation and the cache line covers three cases: the fetch operation targets the same cache line in the cache, the fetch operation hits another cache line in the cache, or the fetch operation targets another cache line and the cache misses.
Specifically, handling subsequent fetch requests according to how the fetch operation corresponds to the cache line comprises:
judging whether the fetch operation hits another cache line in the cache;
if the fetch operation hits another cache line in the cache, reading the requested word from the data SRAM and suspending the backfilling of the returned cache line data into the data SRAM;
if the fetch operation targets another cache line and the cache misses, suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM;
and if the fetch operation targets the same cache line in the cache, obtaining the data from the cache line register buffer and backfilling the returned cache line data into the data SRAM.
It should be appreciated that after the critical word has returned, three cases can arise. In the first case, a subsequent fetch hits another cache line: the fetch result is returned normally, unaffected by the cache line currently being returned. In the second case, a subsequent fetch targets another cache line and misses: it is blocked and can continue only after the entire cache line of the earlier miss has returned and been backfilled into the data SRAM. In the third case, a subsequent fetch targets the same cache line: the data is obtained from the cache line register buffer.
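In the same hypothetical model, the three cases map onto a dispatch like the sketch below; step_memory and drain_refill are assumed helpers that let the outstanding refill advance or complete (the arrival order of a real wrap burst, which wraps from the critical word, is simplified here to ascending index).

```c
/* Assumed helper: let the next outstanding word of the refill arrive. */
static void step_memory(void)
{
    for (int i = 0; i < LINE_WORDS; i++) {
        if (!line_buf.arrived[i]) {
            uint32_t a = (line_buf.tag << 7) | ((uint32_t)line_buf.set << 5)
                       | ((uint32_t)i << 2);
            line_buf.words[i]   = main_mem_read(a);
            line_buf.arrived[i] = true;
            return;
        }
    }
}

/* Assumed helper: wait for the whole line, backfill it into the data SRAM,
 * and only then update the index tag register file. */
static void drain_refill(void)
{
    for (int i = 0; i < LINE_WORDS; i++) {
        while (!line_buf.arrived[i])
            step_memory();
        data_sram[line_buf.set][line_buf.way][i] = line_buf.words[i];
        line_buf.backfilled[i] = true;
    }
    tags[line_buf.set][line_buf.way].tag   = line_buf.tag;
    tags[line_buf.set][line_buf.way].valid = true;
    line_buf.active = false;
}

/* A fetch that arrives while a line refill is still in flight. */
static uint32_t fetch_during_refill(uint32_t addr)
{
    uint8_t way, set = addr_set(addr);

    /* Case 3: same cache line as the outstanding refill -- serve the word
     * from the cache line register buffer, waiting only if it has not
     * arrived from main memory yet. */
    if (line_buf.active && set == line_buf.set && addr_tag(addr) == line_buf.tag) {
        while (!line_buf.arrived[addr_word(addr)])
            step_memory();
        return line_buf.words[addr_word(addr)];
    }

    /* Case 1: hit in another cache line -- the single-port data SRAM serves
     * the read; backfilling of the returned line pauses this cycle. */
    if (lookup(addr, &way)) {
        lru_touch(set, way);
        return data_sram[set][way][addr_word(addr)];
    }

    /* Case 2: miss on another cache line -- block until the outstanding
     * line has fully returned and been backfilled, then issue a new
     * critical-word-first request. */
    drain_refill();
    return miss_critical_word_first(addr);
}
```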
Specifically, handling subsequent fetch requests according to how the fetch operation corresponds to the cache line further comprises, after the step of suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM:
judging whether the whole cache line has been read out of main memory;
if the whole cache line has been read out of main memory, judging whether any cache line data has not yet been backfilled into the data SRAM;
and if some cache line data has not been backfilled into the data SRAM, backfilling it into the data SRAM.
More specifically, if no cache line data remains to be backfilled into the data SRAM, the method returns to the step of initiating a cache line access request containing the critical word to main memory.
If the whole cache line has not yet been read out of main memory, the method returns to the step of suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM.
More specifically, handling subsequent fetch requests according to how the fetch operation corresponds to the cache line further comprises: after either the step of reading the requested word from the data SRAM while the backfill is suspended, or the step of obtaining the data from the cache line register buffer while the returned cache line data is backfilled into the data SRAM, has completed, returning to the step of judging whether the fetch operation hits another cache line.
It should be noted that while cache line data is returning to the cache line register buffer, in the second and third cases the data is backfilled into the data SRAM according to the LRU algorithm. If the first case occurs (a subsequent fetch of the processor hits another cache line), a read must be performed on the data SRAM; because the data SRAM is single-ported and cannot be read and written at the same time, the read is given priority and the instruction data not yet backfilled into the data SRAM is recorded. On the next cache miss, the data still sitting in the cache line register buffer is first written back to the data SRAM, and only then is a new wrap-burst access request issued to main memory. This saves backfill time and improves processor performance.
On that next cache miss, the index tag is updated only after the cache line data of the previous missed fetch has been backfilled into the data SRAM.
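A sketch of this deferred backfill in the same hypothetical model: a routine such as the one below would run at the start of the next miss, before the new wrap-burst request is issued, writing back every word that arrived but never reached the data SRAM and only then updating the index tag.

```c
/* Called at the start of the next miss, before the new wrap-burst request
 * is issued to main memory. */
static void flush_line_buffer(void)
{
    if (!line_buf.active)
        return;
    /* Write back every word that arrived but was never backfilled because
     * the single SRAM port was busy serving hits. */
    for (int i = 0; i < LINE_WORDS; i++) {
        if (line_buf.arrived[i] && !line_buf.backfilled[i]) {
            data_sram[line_buf.set][line_buf.way][i] = line_buf.words[i];
            line_buf.backfilled[i] = true;
        }
    }
    /* Only after the data is in place is the index tag register file updated. */
    tags[line_buf.set][line_buf.way].tag   = line_buf.tag;
    tags[line_buf.set][line_buf.way].valid = true;
    line_buf.active = false;
}
```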
In summary, the instruction reading method based on the non-blocking instruction cache provided by the invention reduces the cache miss penalty at very low implementation cost by adopting the critical-word-first technique, improving processor performance. Backfilling the data into the data SRAM while the cache line data is returning to the cache line register buffer, rather than waiting to backfill the buffered data until the next miss, improves processor performance with little logic overhead. And storing the index (TAG) values in a register file rather than in the traditional SRAM makes the comparison result available one clock earlier, which further improves performance.
It will be understood that the above embodiments are merely exemplary embodiments adopted to illustrate the principles of the invention, which is not limited thereto. Those skilled in the art can make various modifications and improvements without departing from the spirit and substance of the invention, and such modifications and improvements are also considered to fall within the scope of the invention.
Claims (9)
1. An instruction reading method based on a non-blocking instruction cache, characterized in that an index tag register file is disposed in the non-blocking instruction cache to store index tags, the method comprising:
judging whether there is an instruction fetch request on the instruction bus;
when there is a fetch request on the instruction bus, reading the index tag from the index tag register file;
comparing the index tag with the address information in the fetch request;
if the index tag matches the address information in the fetch request, indicating a cache hit, reading instruction data from the data SRAM and returning it to the instruction bus;
if the index tag does not match the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line;
wherein handling subsequent fetch requests in critical-word-first fashion according to how the fetch operation corresponds to the cache line comprises:
initiating a cache line access request containing the critical word to main memory;
judging whether the critical word has been returned;
if the critical word has been returned, registering it in the cache line register buffer and backfilling it into the data SRAM;
and handling subsequent fetch requests according to how the fetch operation corresponds to the cache line.
2. The instruction reading method based on the non-blocking instruction cache according to claim 1, wherein if the critical word has not been returned, the method returns to judging whether the critical word has been returned.
3. The instruction reading method based on the non-blocking instruction cache according to claim 1, wherein the correspondence between the fetch operation and the cache line covers three cases: the fetch operation targets the same cache line in the cache, the fetch operation hits another cache line in the cache, or the fetch operation targets another cache line and the cache misses.
4. The instruction reading method based on the non-blocking instruction cache according to claim 3, wherein handling subsequent fetch requests according to how the fetch operation corresponds to the cache line comprises:
judging whether the fetch operation hits another cache line in the cache;
if the fetch operation hits another cache line in the cache, reading the requested word from the data SRAM and suspending the backfilling of the returned cache line data into the data SRAM;
if the fetch operation targets another cache line and the cache misses, suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM;
and if the fetch operation targets the same cache line in the cache, obtaining the data from the cache line register buffer and backfilling the returned cache line data into the data SRAM.
5. The instruction reading method based on the non-blocking instruction cache according to claim 4, wherein handling subsequent fetch requests according to how the fetch operation corresponds to the cache line further comprises, after the step of suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM:
judging whether the whole cache line has been read out of main memory;
if the whole cache line has been read out of main memory, judging whether any cache line data has not yet been backfilled into the data SRAM;
and if some cache line data has not been backfilled into the data SRAM, backfilling it into the data SRAM.
6. The instruction reading method based on the non-blocking instruction cache according to claim 5, wherein if no cache line data remains to be backfilled into the data SRAM, the method returns to the step of initiating a cache line access request containing the critical word to main memory.
7. The instruction reading method based on the non-blocking instruction cache according to claim 5, wherein if the whole cache line has not yet been read out of main memory, the method returns to the step of suspending the reading of the data SRAM and backfilling the returned cache line data into the data SRAM.
8. The instruction reading method based on the non-blocking instruction cache according to claim 4, wherein handling subsequent fetch requests according to how the fetch operation corresponds to the cache line further comprises: after either the step of reading the requested word from the data SRAM while the backfill is suspended, or the step of obtaining the data from the cache line register buffer while the returned cache line data is backfilled into the data SRAM, has completed, returning to the step of judging whether the fetch operation hits another cache line.
9. The instruction reading method based on the non-blocking instruction cache according to claim 1, wherein an LRU algorithm module, a cache control logic module, a data SRAM, and a cache line register buffer are further disposed in the non-blocking instruction cache; the LRU algorithm module is communicatively connected to the index tag register file and to the cache control logic module, the index tag register file is communicatively connected to the cache control logic module, the data SRAM is communicatively connected to the cache control logic module, and the cache line register buffer is communicatively connected to the data SRAM.
Priority and Publication Data
- Application: CN201910180780.5A (CN), filed 2019-03-11
- Published as: CN109918131A, 2019-06-21
- Granted as: CN109918131B, 2021-04-30
- Family ID: 66964166