CN117149781A - Group-associative self-adaptive expansion cache architecture and access processing method thereof - Google Patents


Info

Publication number
CN117149781A
Authority
CN
China
Prior art keywords
cache
expansion
group
item
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311435178.4A
Other languages
Chinese (zh)
Other versions
CN117149781B (en)
Inventor
高杨
罗庆
赖安洲
邵健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cetc Shentai Information Technology Co ltd
Original Assignee
Cetc Shentai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cetc Shentai Information Technology Co ltd filed Critical Cetc Shentai Information Technology Co ltd
Priority to CN202311435178.4A priority Critical patent/CN117149781B/en
Publication of CN117149781A publication Critical patent/CN117149781A/en
Application granted granted Critical
Publication of CN117149781B publication Critical patent/CN117149781B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462 Approximate or statistical queries
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to the technical field of digital integrated circuits (ICs), and in particular to a set-associative adaptive extension cache architecture and an access processing method thereof. On top of a conventional cache architecture, the design adds an extension flag item extFlag, an extension state item extState, an extension index item extIndex, and a set access statistics module setAccessStats. When an access to the current cache set Set A misses, the setAccessStats module supplies the index number Index B of a cache set Set B that can be used for extension. The invention dynamically uses inactive cache sets to extend the storage capacity of currently full cache sets, reducing cache set conflict misses and improving system performance.

Description

Group-associative self-adaptive expansion cache architecture and access processing method thereof
Technical Field
The invention relates to the technical field of digital integrated circuits (ICs), and in particular to a set-associative adaptive extension cache architecture and an access processing method thereof.
Background
In modern processor designs, memory access speed has become a critical bottleneck limiting processor performance, and multi-level cache architectures are widely recognized as an important approach to the "memory wall" problem. The effectiveness of caches and the hierarchical storage structure rests on the temporal and spatial locality of memory accesses. A conventional cache typically adopts a fixed set-associative architecture: the cache storage space is uniformly divided into a number of sets, each containing the same number of equally sized cache blocks, and all cache blocks are assumed to be used with equal probability. In practice, however, depending on the program or program fragment being executed, the distribution of set and block accesses over the cache space is often non-uniform: some cache blocks are accessed frequently and suffer many conflict misses, while others are accessed rarely and sit idle for long periods. The overall utilization of the cache space is therefore low, and overall system performance degrades.
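To make the fixed set-associative organization concrete, the following sketch (our illustration, not part of the patent; the 32 B line size and 256 sets are example parameters borrowed from the embodiment later in the text) shows how a physical address is split into tag, set index, and block offset:

```python
def split_address(addr, line_size=32, num_sets=256):
    """Split an address into (tag, set index, block offset) for a
    conventional set-associative cache. Sizes must be powers of two."""
    offset_bits = line_size.bit_length() - 1   # log2(32)  = 5
    index_bits = num_sets.bit_length() - 1     # log2(256) = 8
    offset = addr & (line_size - 1)            # low 5 bits select the byte
    index = (addr >> offset_bits) & (num_sets - 1)  # next 8 bits select the set
    tag = addr >> (offset_bits + index_bits)   # remaining high bits are the tag
    return tag, index, offset
```

For example, `split_address(0x12345678)` yields tag `0x91A2`, set index `0xB3`, and block offset `0x18`. All addresses that share the same index bits compete for the same set, which is exactly why hot sets can overflow while cold sets sit idle.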
Existing approaches to mitigating the impact of these access characteristics on processor performance include, first, dynamically configuring cache parameters based on statistics of the program's memory access behavior collected at run time; and second, managing cache blocks using global access information. The main problems with these approaches are the low accuracy of the statistical characterization, the large hardware cost of the statistics module, configuration latency that can render the statistics stale, and the increased access latency introduced by maintaining global information, all of which make them difficult to deploy in practice.
Disclosure of Invention
Addressing the non-uniform characteristics of cache accesses, the invention provides a set-associative adaptive extension cache (Cache) architecture and an access processing method thereof, which dynamically use inactive cache sets to extend the storage capacity of currently full cache sets, reducing cache set conflict misses and improving system performance.
To solve the above technical problems, the invention provides a set-associative adaptive extension cache architecture, built on top of a conventional cache architecture and further comprising: an extension flag item extFlag, an extension state item extState, an extension index item extIndex, and a set access statistics module setAccessStats;
when an access to the current cache set Set A misses, the set access statistics module setAccessStats supplies the index number Index B of a cache set Set B that can be used for extension, and the storage capacity of the current cache set Set A is extended by setting the extension state item extState=1 and extension index item extIndex=Index B of the current cache set Set A, the extension state item extState=2 and extension index item extIndex=Index A of the extension cache set Set B, and the extension flag item extFlag=1 of the corresponding blocks.
Preferably, each cache block has a corresponding extension flag item extFlag. In the extension cache set Set B, extFlag=0 for a cache block indicates that the content stored in that block belongs to the extension cache set Set B itself; extFlag=1 indicates that the content stored in that block belongs to the current cache set Set A.
Preferably, each cache set has a corresponding pair of extension state item extState and extension index item extIndex. When extState=0, the current cache set Set A is not extended and the value of extIndex is invalid; when extState=1, the current cache set Set A has been extended, and the index number of the extension cache set Set B is the value stored in extIndex; when extState=2, the current cache set serves as an extension cache set Set B for another set, whose index number is the value stored in extIndex.
Preferably, each cache space has a corresponding set access statistics module setAccessStats, which tracks the activity level of each cache set based on cache accesses and hit/miss outcomes.
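The added metadata can be summarized in a minimal sketch (our illustration, not the patent's implementation; field names follow the patent's terminology, while the 4-way width and Python encodings are assumptions):

```python
from dataclasses import dataclass, field
from typing import List

# extState encodings as described in the text (2-bit field)
EXT_NONE = 0      # extState = 0: no extension relationship; extIndex is invalid
EXT_EXTENDED = 1  # extState = 1: this set borrows capacity from the set in extIndex
EXT_PROVIDER = 2  # extState = 2: this set lends capacity to the set in extIndex

@dataclass
class CacheBlock:
    valid: bool = False
    tag: int = 0
    ext_flag: int = 0  # extFlag (1 bit): 1 => block holds content of the extending set

@dataclass
class CacheSet:
    # 4 ways chosen to match the embodiment's configuration
    blocks: List[CacheBlock] = field(
        default_factory=lambda: [CacheBlock() for _ in range(4)])
    ext_state: int = EXT_NONE  # extState (2 bits)
    ext_index: int = 0         # extIndex (same width as the cache index)
```

In hardware these are just a 1-bit flag per block plus a 2-bit state and an index-width register per set, which is what keeps the storage overhead small.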
The invention also provides an access processing method for the set-associative adaptive extension cache architecture described above, comprising the following steps:
step 1: search the current cache set Set A according to the input Tag and Index information; on a hit, return the hit cache block and end the cache access; on a miss, proceed to access the extension cache block;
step 2: the extension cache block access first checks, from the extension state item extState of the current cache set Set A, whether an extension cache block exists; if no extension relationship has been established and the current cache set Set A is not full, no extension relationship is created: a Victim Block is selected directly from the current cache set Set A by the set replacement policy and returned, and the cache access ends;
step 3: when the current cache set Set A misses and is full, and no extension relationship has been established, obtain a suitable extension cache set Set B from the setAccessStats module, write the index Index B of the extension cache set Set B into the extension index item extIndex of the current cache set Set A and the index Index A of the current cache set Set A into the extension index item extIndex of the extension cache set Set B, and set the extension state item extState of the current cache set Set A to 1 and the extension state item extState of the extension cache set Set B to 2;
step 4: if the current cache set Set A has already established an extension relationship, access the extension cache set Set B via the value of its extension index item extIndex;
step 5: on a hit in the extension cache set Set B, return the hit cache block in Set B and end the cache access; on a miss in Set B, select a Victim Block in the extension cache set Set B by the set replacement policy and return it;
step 6: if no extension entries of the current cache set Set A remain in the extension cache set Set B, i.e. the extension flag items extFlag in the extension cache set Set B are all 0, obtain a currently free cache set Set B' from the set access statistics module setAccessStats, and establish a new extension relationship from the current cache set Set A to the free cache set Set B' by writing the index of Set B' into the extension index item extIndex of the current cache set Set A;
step 7: when an extension entry of the current cache set Set A is written into a cache block of the extension cache set Set B, the corresponding extension flag item extFlag is set to 1.
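The seven steps above can be sketched behaviorally as follows. This is a simplified illustration, not the patent's implementation: the `stats` object stands in for the setAccessStats module (its `pick_extension_set`/`pick_free_set` methods are assumed interfaces), the replacement policy is a placeholder, a set is assumed not to play both roles at once, and data movement and write-back are omitted.

```python
class Block:
    """Cache block with the patent's extra per-block extFlag bit."""
    def __init__(self):
        self.valid, self.tag, self.ext_flag = False, 0, 0

class Set:
    """Cache set with the patent's per-set extState / extIndex fields."""
    def __init__(self, ways=4):
        self.blocks = [Block() for _ in range(ways)]
        self.ext_state, self.ext_index = 0, 0

def lookup(cache, stats, index_a, tag):
    """One access; returns (outcome, set_index, block)."""
    set_a = cache[index_a]
    # Step 1: search the current set Set A (its own blocks only: ext_flag == 0)
    for blk in set_a.blocks:
        if blk.valid and blk.ext_flag == 0 and blk.tag == tag:
            return ("hit", index_a, blk)
    if set_a.ext_state == 0:
        # Step 2: no extension relationship; if Set A has a free block, use it
        victim = next((b for b in set_a.blocks if not b.valid), None)
        if victim is not None:
            return ("miss_alloc_a", index_a, victim)
        # Step 3: Set A is full -> establish an extension relationship A -> B
        index_b = stats.pick_extension_set(index_a)
        set_a.ext_state, set_a.ext_index = 1, index_b
        cache[index_b].ext_state, cache[index_b].ext_index = 2, index_a
    else:
        # Steps 4-5: follow the existing link and search Set A's entries in Set B
        set_b = cache[set_a.ext_index]
        for blk in set_b.blocks:
            if blk.valid and blk.ext_flag == 1 and blk.tag == tag:
                return ("ext_hit", set_a.ext_index, blk)
        # Step 6: all of Set A's entries were evicted from Set B -> relink to B'
        if all(b.ext_flag == 0 for b in set_b.blocks):
            new_b = stats.pick_free_set(index_a)
            set_a.ext_index = new_b
            cache[new_b].ext_state, cache[new_b].ext_index = 2, index_a
    # Step 7: allocate a victim in Set B and mark it as Set A's extension entry
    set_b = cache[set_a.ext_index]
    victim = set_b.blocks[0]   # stand-in for the real set replacement policy
    victim.ext_flag = 1
    return ("miss_alloc_b", set_a.ext_index, victim)
```

A design point worth noting: the relink in step 6 only happens on the path where a relationship already existed, so a freshly established relationship (step 3) proceeds straight to allocation in Set B.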
Compared with the prior art, the invention has the following beneficial effects:
The invention extends the capacity of a currently full cache set using inactive cache sets. In a typical embodiment, simulation results on the SPEC CPU 2017 test suite show that the architecture effectively reduces conflict misses, cutting the total miss count by up to 54.2%, which makes it well suited to cache-capacity-constrained applications such as embedded processors. The invention addresses the conflict misses caused by non-uniform cache accesses and can effectively improve processor system performance.
Drawings
FIG. 1 is a block diagram of the architecture of the present invention.
FIG. 2 is a flow chart of an access process of the Cache architecture of the present invention.
FIG. 3 is a diagram of the results of memory access uniformity simulation for the Cache architecture of the present invention.
FIG. 4 is a graph of Cache access miss performance of the Cache architecture of the present invention in different test items.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific embodiments, from which its advantages and features will become more apparent. Note that the drawings are in greatly simplified form and not to precise scale, serving only to conveniently and clearly illustrate the embodiments of the invention.
As shown in FIG. 1, a block diagram of the cache architecture of the present invention, a conventional cache architecture generally includes a Valid flag bit, comparators, selectors, data alignment, and tag and data storage units. The main feature of the present architecture is that an extension flag item extFlag, an extension state item extState, an extension index item extIndex, and a set access statistics module setAccessStats are added to the conventional architecture. Each cache block gains an extension flag item extFlag of 1-bit width: when its value is 0, the content stored in the block belongs to the block's own cache set; when its value is 1, the content belongs to the extending cache set. Each cache set gains a pair of extension state item extState and extension index item extIndex; extState is 2 bits wide, and extIndex matches the width of the cache index. When extState is 0, the current cache set is not extended and the extIndex value is invalid; when extState is 1, the current cache set has been extended and the index number of its extension cache set is the value in extIndex; when extState is 2, the current cache set serves as an extension of the set whose index number is the value in extIndex. Each cache space gains a set access statistics module setAccessStats, which tracks the activity level of each cache set based on cache accesses and hit/miss outcomes.
FIG. 2 is a flow chart of the access processing of the cache architecture. First, the current cache set Set A is searched according to the input Tag and Index information; on a hit, the hit cache block is returned and the cache access ends; on a miss, the extension cache block is accessed. The extension cache block access first checks, from the extState value of the current cache set Set A, whether an extension cache block exists. If no extension relationship has been established and Set A is not full, no extension relationship is created: a Victim Block is selected directly from Set A by the set replacement policy and returned, and the cache access ends. Apart from the check for an extension relationship, this flow is essentially the same as a conventional cache access flow.
The extension cache set access procedure of the cache is as follows. When the current cache set Set A misses and is full, if Set A has not established an extension relationship, a suitable extension cache set Set B is obtained from the setAccessStats module, and an extension relationship from Set A to Set B is established by writing Index B of Set B into the extIndex of Set A and Index A of Set A into the extIndex of Set B, while setting the extState of Set A to 1 and the extState of Set B to 2. If Set A has already established an extension relationship, the extension cache set Set B is accessed via the extIndex value; on a hit in Set B, the hit cache block in Set B is returned and the cache access ends; on a miss in Set B, a Victim Block is selected in Set B by the set replacement policy and returned. In particular, if no extension entries of Set A remain in Set B, i.e. the extFlag items in Set B are all 0, a currently free cache set Set B' is obtained from the set access statistics module setAccessStats, and a new extension relationship from Set A to Set B' is established by writing the index of Set B' into the extIndex of Set A. Whenever an extension entry of Set A is written into a cache block of Set B, the corresponding extFlag is set to 1.
FIG. 3 shows typical memory access statistics of the cache architecture on the gcc test item of the SPEC CPU 2017 test suite. The main configuration parameters of the simulated system in this embodiment are shown in Table 1.
Table 1 system simulation configuration table:
For the gcc test item, cache accesses to each cache set were counted in periods of 50,000 accesses; when a cache set is extended, the access count is credited to the cache set that actually holds the hit cache block or Victim Block. FIG. 3 shows a typical 50,000-access frequency distribution in which the access frequencies of the cache sets are sorted in descending order. After applying the cache architecture of the present invention, the per-set access frequency is clearly more even: the access frequency of the most heavily accessed cache sets drops by up to 22.75%, while for the many lightly accessed cache sets the frequency rises uniformly from 0 to about 0.38%. The access spread within the program section shrinks from the original 23.25% to 0.13%, improving cache access uniformity by 23.12%.
FIG. 4 shows test results for multiple test items of the SPEC CPU 2017 test suite. The first 5,000,000 cache access misses of each test item were counted and normalized to the miss count of a conventional cache architecture; the bar heights in FIG. 4 represent the normalized miss counts. As shown in FIG. 4, the miss counts under the cache architecture of the present invention are reduced by 0.39% (500.perlbench), 1.75% (502.gcc), 54.2% (503.bwaves), 3.96% (521.wrf), 0.26% (541.leela), 0.08% (548.exchange2), and 3.67% (549.fotonik3d), respectively.
The hardware resource consumption of an implementation of the cache architecture is shown in Table 2. Assume a physical address width (PA_WIDTH) of 40 bits, a cache size of 32 KB, a cache block size (LINE_SIZE) of 32 B, an associativity (ASSOC) of 4, 256 cache sets (SET_NUM), a set index length (INDEX_LENGTH) of 8 bits, and a TAG data width (TAG_LENGTH) of 27 bits. As the table shows, compared with a conventional cache architecture, the additional hardware of this architecture lies mainly in the per-set LRU (8 bits), the extFlag and extState flag bits (3 bits in total), the extIndex (8 bits), and the like; statistics show that in this embodiment the architecture adds only 1.92% more hardware resources, a very modest hardware cost.
Table 2 hardware resource consumption and comparison:
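As a rough cross-check of the reported overhead (our own arithmetic, not from the patent; the text does not fully itemize the added bits, so the field widths below are assumptions):

```python
# Per-set storage for the embodiment's configuration: 32 KB cache,
# 32 B lines, 4 ways, 256 sets, 27-bit tags, 1 valid bit per block.
ways, line_bits, tag_bits, valid_bits = 4, 32 * 8, 27, 1
per_set_baseline = ways * (valid_bits + tag_bits + line_bits)  # 4 * 284 = 1136 bits

# Assumed added fields per set: 8-bit set LRU, 2-bit extState,
# 8-bit extIndex, and one 1-bit extFlag per block (4 ways).
per_set_added = 8 + 2 + 8 + ways * 1  # 22 bits

overhead = per_set_added / per_set_baseline
print(f"added {per_set_added} of {per_set_baseline} bits per set = {overhead:.2%}")
```

Under these assumptions the overhead comes out to roughly 1.9%, close to the 1.92% reported; the small discrepancy presumably comes from how the patent itemizes the added fields.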
the above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (5)

1. A set-associative adaptive extension cache architecture, built on top of a conventional cache architecture, characterized by further comprising: an extension flag item extFlag, an extension state item extState, an extension index item extIndex, and a set access statistics module setAccessStats;
wherein, when an access to the current cache set Set A misses, the set access statistics module setAccessStats supplies the index number Index B of a cache set Set B that can be used for extension, and the storage capacity of the current cache set Set A is extended by setting the extension state item extState=1 and extension index item extIndex=Index B of the current cache set Set A, the extension state item extState=2 and extension index item extIndex=Index A of the extension cache set Set B, and the extension flag item extFlag=1 of the corresponding blocks.
2. The set-associative adaptive extension cache architecture of claim 1, further characterized in that each cache block has a corresponding extension flag item extFlag; in the extension cache set Set B, extFlag=0 for a cache block indicates that the content stored in that block belongs to the extension cache set Set B itself, and extFlag=1 indicates that the content stored in that block belongs to the current cache set Set A.
3. The set-associative adaptive extension cache architecture of claim 2, further characterized in that each cache set has a corresponding pair of extension state item extState and extension index item extIndex; when extState=0, the current cache set Set A is not extended and the value of extIndex is invalid; when extState=1, the current cache set Set A has been extended and the index number of the extension cache set Set B is the value stored in extIndex; when extState=2, the current cache set serves as an extension cache set Set B for another set, whose index number is the value stored in extIndex.
4. The set-associative adaptive extension cache architecture of claim 3, further characterized in that each cache space has a corresponding set access statistics module setAccessStats, which tracks the activity level of each cache set based on cache accesses and hit/miss outcomes.
5. An access processing method for a set-associative adaptive extension cache architecture, using the set-associative adaptive extension cache architecture of any one of claims 1 to 4, characterized by comprising the following steps:
step 1: search the current cache set Set A according to the input Tag and Index information; on a hit, return the hit cache block and end the cache access; on a miss, proceed to access the extension cache block;
step 2: the extension cache block access first checks, from the extension state item extState of the current cache set Set A, whether an extension cache block exists; if no extension relationship has been established and the current cache set Set A is not full, no extension relationship is created: a Victim Block is selected directly from the current cache set Set A by the set replacement policy and returned, and the cache access ends;
step 3: when the current cache set Set A misses and is full, and no extension relationship has been established, obtain a suitable extension cache set Set B from the setAccessStats module, write the index Index B of the extension cache set Set B into the extension index item extIndex of the current cache set Set A and the index Index A of the current cache set Set A into the extension index item extIndex of the extension cache set Set B, and set the extension state item extState of the current cache set Set A to 1 and the extension state item extState of the extension cache set Set B to 2;
step 4: if the current cache set Set A has already established an extension relationship, access the extension cache set Set B via the value of its extension index item extIndex;
step 5: on a hit in the extension cache set Set B, return the hit cache block in Set B and end the cache access; on a miss in Set B, select a Victim Block in the extension cache set Set B by the set replacement policy and return it;
step 6: if no extension entries of the current cache set Set A remain in the extension cache set Set B, i.e. the extension flag items extFlag in the extension cache set Set B are all 0, obtain a currently free cache set Set B' from the set access statistics module setAccessStats, and establish a new extension relationship from the current cache set Set A to the free cache set Set B' by writing the index of Set B' into the extension index item extIndex of the current cache set Set A;
step 7: when an extension entry of the current cache set Set A is written into a cache block of the extension cache set Set B, the corresponding extension flag item extFlag is set to 1.
CN202311435178.4A 2023-11-01 2023-11-01 Group-associative self-adaptive expansion cache architecture and access processing method thereof Active CN117149781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311435178.4A CN117149781B (en) 2023-11-01 2023-11-01 Group-associative self-adaptive expansion cache architecture and access processing method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311435178.4A CN117149781B (en) 2023-11-01 2023-11-01 Group-associative self-adaptive expansion cache architecture and access processing method thereof

Publications (2)

Publication Number Publication Date
CN117149781A true CN117149781A (en) 2023-12-01
CN117149781B CN117149781B (en) 2024-02-13

Family

ID=88897297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311435178.4A Active CN117149781B (en) 2023-11-01 2023-11-01 Group-associative self-adaptive expansion cache architecture and access processing method thereof

Country Status (1)

Country Link
CN (1) CN117149781B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126440A (en) * 2016-06-22 2016-11-16 中国科学院计算技术研究所 Caching method and device for improving data spatial locality in a cache
CN107861819A (en) * 2017-12-07 2018-03-30 郑州云海信息技术有限公司 Cache set load balancing method, apparatus, and computer-readable storage medium
CN108537719A (en) * 2018-03-26 2018-09-14 上海交通大学 System and method for improving graphics processing unit performance
CN115357196A (en) * 2022-08-31 2022-11-18 鹏城实验室 Dynamically expandable set-associative cache method, apparatus, device and medium


Also Published As

Publication number Publication date
CN117149781B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
US10387315B2 (en) Region migration cache
US11086792B2 (en) Cache replacing method and apparatus, heterogeneous multi-core system and cache managing method
US9361236B2 (en) Handling write requests for a data array
US6865647B2 (en) Dynamic cache partitioning
WO2020199061A1 (en) Processing method and apparatus, and related device
US11126555B2 (en) Multi-line data prefetching using dynamic prefetch depth
CN111602377B (en) Resource adjusting method in cache, data access method and device
US7577793B2 (en) Patrol snooping for higher level cache eviction candidate identification
CN110297787B (en) Method, device and equipment for accessing memory by I/O equipment
KR20180114497A (en) Techniques to reduce read-modify-write overhead in hybrid dram/nand memory
US9836396B2 (en) Method for managing a last level cache and apparatus utilizing the same
CN115357196A (en) Dynamically expandable set-associative cache method, apparatus, device and medium
CN116501249A (en) Method for reducing repeated data read-write of GPU memory and related equipment
CN108537719B (en) System and method for improving performance of general graphic processor
CN106201918A (en) A kind of method and system quickly discharged based on big data quantity and extensive caching
CN116303138B (en) Caching architecture, caching method and electronic equipment
CN117149781B (en) Group-associative self-adaptive expansion cache architecture and access processing method thereof
CN107861819B (en) Cache group load balancing method and device and computer readable storage medium
CN115981555A (en) Data processing method and device, electronic equipment and medium
US11334488B2 (en) Cache management circuits for predictive adjustment of cache control policies based on persistent, history-based cache control information
US20190013062A1 (en) Selective refresh mechanism for dram
WO2021008552A1 (en) Data reading method and apparatus, and computer-readable storage medium
US20090157968A1 (en) Cache Memory with Extended Set-associativity of Partner Sets
CN115080459A (en) Cache management method and device and computer readable storage medium
CN108509151B (en) Line caching method and system based on DRAM memory controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant