CN117609111A - Data prefetching method, device, equipment and medium for system cache - Google Patents

Data prefetching method, device, equipment and medium for system cache Download PDF

Info

Publication number
CN117609111A
CN117609111A (application CN202311423775.5A)
Authority
CN
China
Prior art keywords
address
page
cache
value
jump
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311423775.5A
Other languages
Chinese (zh)
Inventor
徐建国
王吴哲
孟昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hongjun Microelectronics Technology Co ltd
Original Assignee
Hangzhou Hongjun Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hongjun Microelectronics Technology Co ltd filed Critical Hangzhou Hongjun Microelectronics Technology Co ltd
Priority to CN202311423775.5A priority Critical patent/CN117609111A/en
Publication of CN117609111A publication Critical patent/CN117609111A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • G06F12/0897Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to the field of computer storage design and discloses a data prefetching method, apparatus, device and medium for a system cache. The method comprises: obtaining the first page corresponding to the address of a load access to the current preset-level cache; updating or recording, based on the first page, the address jump value and the corresponding address jump pattern for that page in a cache page history table; updating a multi-level address jump pattern table based on the address jump values and jump patterns recorded in the cache page history table; and generating a prefetch request based on the updated multi-level address jump pattern table. By dynamically optimizing and adjusting according to the actual running condition of the cache, the invention increases the proportion of cache misses eliminated by prefetching and thereby guarantees the effect of the data prefetching response.

Description

Data prefetching method, device, equipment and medium for system cache
Technical Field
The present invention relates to the field of computer storage design, and in particular, to a method, an apparatus, a device, and a medium for prefetching data in a system cache.
Background
Prefetching refers to fetching data blocks that are likely to be accessed into a cache in advance, so that the processor can access them quickly. The prefetcher trains a prefetching strategy on the memory access requests and related information generated by the processor CPU, speculates which data blocks are likely to be accessed, sends prefetch requests to the next-level storage, and prefetches the data blocks into the data cache. Prefetch algorithms based on delta patterns suffer from two key drawbacks. (1) Insufficient coverage: the coverage of a prefetch algorithm is the proportion of cache misses eliminated by prefetching. When recording and predicting access patterns, a delta-pattern-based prefetch algorithm may not fully cover the memory access behavior of the program, and therefore fails to prefetch all of the data blocks that may be accessed. (2) Lack of an effective feedback mechanism: increasing the look-ahead depth of prefetching amounts to a more aggressive prefetch strategy, which may improve coverage but may also bring in redundant prefetched data. Since only accuracy (i.e., whether a prefetch request hits) is used as the evaluation index, the effect of prefetch requests cannot be evaluated precisely. The lack of an effective feedback mechanism may prevent the prefetch algorithm from dynamically adjusting and optimizing according to actual cache hit conditions, thereby reducing the effectiveness of prefetching.
Disclosure of Invention
In view of the above, the present invention provides a data prefetching method, apparatus, device and medium for a system cache, so as to solve the problem of how to increase the proportion of cache misses eliminated by prefetching and guarantee the prefetching effect.
In a first aspect, the present invention provides a data prefetching method for a system cache, where the method includes: acquiring the first page corresponding to the address of a load access to the current preset-level cache; updating or recording, based on the first page, the address jump value corresponding to the page in a cache page history table and the corresponding address jump pattern; updating a multi-level address hopping pattern table based on the address jump value corresponding to the page in the cache page history table and the corresponding address jump pattern; generating a prefetch request based on the updated multi-level address hopping pattern table; and acquiring feedback information after the prefetch request in the preset-level cache and adjusting the generation process of the next prefetch request based on the feedback information.
By recording and updating the page address jump values and jump patterns in the cache page history table, the embodiment of the invention can accurately judge the likelihood and priority of prefetching during memory accesses. The multi-level address jump pattern table is updated from those page address jump values and jump patterns, which better characterizes the address jump patterns across different pages and improves the accuracy of the generated prefetch requests. Feedback information after the prefetch request in the preset-level cache is then obtained, and the generation of the next prefetch request is adjusted according to this feedback, so that the prefetcher is dynamically optimized according to the actual running condition of the cache. This increases the proportion of cache misses eliminated by prefetching and thereby guarantees the effect of the data prefetching response.
In an optional implementation manner, updating or recording, based on the first page, the address jump value corresponding to the page in the cache page history table and the corresponding address jump pattern includes: acquiring the page number corresponding to the first page, searching the cache page history table based on the page number, and judging whether the first page exists in the cache page history table; if so, calculating the address jump value between the address of the current preset-level cache load access and the address of the previous preset-level cache load access to the first page, and updating the address jump pattern corresponding to the first page in the cache page history table based on the address jump value; if not, recording the address jump value corresponding to the first page and the corresponding address jump pattern in the cache page history table.
By calculating the address jump value between the address of the current preset-level cache load access and that of the previous load access to the first page, the address jump behavior is captured accurately; by continuously capturing address jump values, the jump patterns among different pages are deduced, reducing the cache miss rate and providing a decision basis for subsequent data prefetching.
In an alternative embodiment, the multi-level address hopping pattern table includes a plurality of hopping pattern tables of different lengths, each of which records a hopping pattern, a next-bit hopping value, the count of each hopping value, whether each hopping value is the most recently used value, and a total hopping value count.
By designing hopping pattern tables of different lengths in the multi-level address hopping pattern table, the data corresponding to address hopping patterns of different lengths are recorded separately and the address hopping relations are captured comprehensively, ensuring that different data access patterns are covered and that the corresponding address hopping pattern is matched accurately.
In an alternative embodiment, generating the prefetch request based on the updated multi-level address hopping pattern table includes: acquiring the hopping data in the updated multi-level address hopping pattern table and generating the confidence of the current prefetch request; and judging whether the confidence of the current prefetch request is greater than a preset threshold value, and if so, generating the current prefetch request based on the next-bit hopping value in the multi-level address hopping pattern table.
The reliability of generating a prefetch request is evaluated through the confidence obtained from the multi-level address hopping pattern table. A prefetch request is generated based on the next-bit hopping value in the hopping pattern table only when the confidence is greater than the preset threshold, which reduces the probability of issuing unreliable requests and improves the accuracy of the prefetch requests.
In an optional implementation manner, obtaining the jump data in the updated multi-level address jump pattern table and generating the confidence of the current prefetch request includes: acquiring the jump value count C_n corresponding to the next-bit jump value of a jump pattern in the updated multi-level address jump pattern table, and the total jump value count C_t; acquiring the confidence P_{d-1} of the last generated prefetch request and the coefficient factor LF_lv; and, based on C_n, C_t, P_{d-1} and LF_lv, calculating the confidence P_d of the current prefetch request with the following confidence calculation formula:
P_d = (C_n / C_t) * P_{d-1} * LF_lv;
where lv denotes the length of the corresponding jump pattern.
The confidence of generating the current prefetch request is determined effectively by integrating different parameters, such as the jump value count corresponding to the next-bit jump value of the jump pattern, the total jump value count, the confidence of the last generated prefetch request, and the coefficient factor, so that prefetch requests with high confidence that meet the requirements can be screened out, improving the accuracy of the prefetch requests.
In an optional implementation manner, obtaining the feedback information after the prefetch request in the preset-level cache and adjusting the generation process of the next prefetch request based on the feedback information includes: using a dynamic filter to monitor the cache-line fill state and eviction state of prefetch requests in the preset-level cache; based on the fill state and eviction state, extracting the issued count C_iss, accurate count C_acc, timely count C_tim, pollution count C_pol, and miss count C_miss of the prefetch request states, and obtaining the prefetch accuracy ratio a, the prefetch timeliness ratio t, and the prefetch-induced cache pollution ratio p, where:
a = C_acc / C_iss;
t = C_tim / C_acc;
p = C_pol / C_miss;
and, based on the prefetch accuracy ratio a, the prefetch timeliness ratio t, and the prefetch-induced cache pollution ratio p, obtaining the coefficient factor LF_lv(a,t,p) through the following formula:
LF_lv(a,t,p) = B_LF + α_LF * a - β_LF * t - γ_LF * p;
where α_LF is a first weight corresponding to the prefetch accuracy ratio a, β_LF is a second weight corresponding to the prefetch timeliness ratio t, γ_LF is a third weight corresponding to the prefetch-induced cache pollution ratio p, and B_LF is a preset offset value. The LF_lv(a,t,p) is used as feedback information to optimize the confidence calculation formula and thereby adjust the generation process of the next prefetch request.
By monitoring the cache-line fill state and eviction state of the preset-level cache, the count values of the issued prefetch requests are obtained accurately and the proportions of the various prefetch states are calculated; the coefficient factor is then computed from these proportions and the corresponding weights, and the confidence calculation formula is optimized accordingly, so that the generation of the next prefetch request is dynamically optimized in real time and the accuracy of the prefetch requests is improved.
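The monitoring-and-feedback step can be sketched in Python as follows. This is an illustrative model only: the counter names follow the description, but the weight and offset values are made-up defaults, and the sign convention follows the formula as printed in the patent text.

```python
def coefficient_factor(c_iss, c_acc, c_tim, c_pol, c_miss,
                       b_lf=0.5, alpha_lf=0.4, beta_lf=0.2, gamma_lf=0.5):
    """Turn the dynamic filter's prefetch counters into the coefficient
    factor LF_lv(a, t, p) = B_LF + alpha_LF*a - beta_LF*t - gamma_LF*p.
    The weight and offset defaults are illustrative, not from the patent;
    the division guards avoid 0/0 before any requests have been observed."""
    a = c_acc / c_iss if c_iss else 0.0    # prefetch accuracy ratio
    t = c_tim / c_acc if c_acc else 0.0    # prefetch timeliness ratio
    p = c_pol / c_miss if c_miss else 0.0  # pollution ratio from prefetch
    return b_lf + alpha_lf * a - beta_lf * t - gamma_lf * p

# 100 issued, 80 accurate, 40 timely, 10 polluting out of 50 misses:
lf = coefficient_factor(100, 80, 40, 10, 50)
```

With these illustrative counters, a = 0.8, t = 0.5 and p = 0.2, giving a coefficient factor of 0.62 that then feeds back into the confidence calculation.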
In an alternative embodiment, the method further comprises: after eliminating duplicate prefetch requests with the dynamic filter, sending the remaining prefetch requests to the next-level storage below the preset-level cache.
By using the dynamic filter to eliminate duplicate prefetch requests, sending multiple identical prefetch requests to the next-level storage is avoided, reducing the load on the system cache.
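The duplicate-elimination behaviour can be sketched as a minimal FIFO filter. This is a sketch under stated assumptions: the patent's dynamic filter also monitors fill and eviction states (omitted here), and the 64-entry capacity is an illustrative choice.

```python
class DynamicFilter:
    """Drops duplicate prefetch requests before they are sent to the
    next storage level. Minimal FIFO sketch: state monitoring is
    omitted and the capacity is an illustrative choice."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.recent = []                  # recently issued addresses

    def should_issue(self, addr):
        if addr in self.recent:
            return False                  # duplicate request: filter out
        self.recent.append(addr)
        if len(self.recent) > self.capacity:
            self.recent.pop(0)            # forget the oldest address
        return True

f = DynamicFilter()
issued = [a for a in (0x40, 0x80, 0x40, 0xC0) if f.should_issue(a)]
```

Here the repeated request for 0x40 is filtered, so only three of the four candidate requests reach the next-level storage.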
In a second aspect, the present invention provides a data prefetching apparatus for a system cache, the apparatus comprising:
the page acquisition module is used for acquiring a first page corresponding to an address when the current preset level cache is accessed;
the cache page updating module is used for updating or recording an address jump value corresponding to a page in a cache page history table and a corresponding address jump mode based on the first page;
The address hopping pattern updating module is used for updating the multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern;
the prefetch request generation module is used for generating a prefetch request based on the updated multi-level address hopping pattern table;
and the prefetch feedback module is used for acquiring feedback information after the prefetch request in the preset level cache and adjusting the generation process of the next prefetch request based on the feedback information.
In a third aspect, the present invention provides a computer device comprising: the system comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions so as to execute the data prefetching method of the system cache of the first aspect or any implementation mode corresponding to the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon computer instructions for causing a computer to perform the data prefetching method of the system cache of the first aspect or any of the embodiments corresponding thereto.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow diagram of a method for prefetching data for a system cache according to an embodiment of the invention;
FIG. 2 is a flowchart of another data prefetching method for a system cache according to an embodiment of the invention;
FIG. 3 is a flowchart of another data prefetching method for a system cache according to an embodiment of the invention;
FIG. 4 is a schematic block diagram of a data prefetching apparatus for a system cache according to an embodiment of the invention;
fig. 5 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The data prefetching method for a system cache is applied in scenarios that require efficient use of cache resources and place high demands on data access speed, such as web servers, database management systems, and image processing. By recording and analyzing address jump patterns and updating the cache page history table and the multi-level address jump pattern table, the address likely to be accessed next is predicted accurately and prefetched into the cache in advance, which increases the proportion of cache misses eliminated by prefetching and thereby guarantees the effect of data prefetching.
In accordance with an embodiment of the present invention, an embodiment of a data prefetching method for a system cache is provided. It is noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one illustrated here.
In this embodiment, a method for prefetching data cached by a system is provided, which may be used in the above-mentioned computer device, and fig. 1 is a flowchart of a method for prefetching data cached by a system according to an embodiment of the present invention, as shown in fig. 1, where the flowchart includes the following steps:
Step S101, a first page corresponding to an address when a current preset level cache is accessed is acquired.
It should be noted that, the preset level cache refers to caches of different levels, for example, L2 level, L1D level, and the like, and the first page refers to a page determined according to a physical page number to which the access address belongs.
Illustratively, a program executing an instruction requires access to memory address 0x10000400; the data at this address is not in the L1D cache, so the processor issues a load access request to the L2 cache and determines the corresponding first page according to address 0x10000400.
Step S102, updating or recording address jump values corresponding to pages in a cache page history table and corresponding address jump modes based on the first page.
Note that the cache page history table (Page History Table, abbreviated PHT) is used to record the history of recently accessed physical pages. The address jump value (delta) refers to the difference between the current access address and the address at which the page was last accessed. The address jump pattern (Delta Pattern) refers to a pattern in which address jump values appear according to a certain rule when the same page is accessed several times in succession.
Illustratively, the format of the cache page history table is as shown in Table 1:
Table 1 Cache page history table

Key (hash_of(page_num))    Last offset    Last 4 Deltas
0x100                      15 -> 16       +1, +2, -1, +4 -> +2, -1, +4, +1
where Key = hash_of(page_num) = 0x100 means the most recently accessed physical page is 0x100; Last offset 15 -> 16 indicates that the offset of the last access to the page was 15 and the offset of the current access is 16, so the address jump value is +1; and Last 4 Deltas +1, +2, -1, +4 -> +2, -1, +4, +1 indicates that after the new jump value +1 is appended, the last four address jump values become +2, -1, +4, +1, forming a new address jump pattern.
Step S103, updating the multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern.
It should be noted that the multi-level address hopping pattern table (multi-length delta patterns, abbreviated as MLPT) is a multi-level table built to capture and record address hopping patterns of different lengths. The multi-level address hopping pattern table includes a plurality of hopping pattern tables (PT-lv, lv = 1, 2, 3, 4) of different lengths, each of which records a hopping pattern (Delta Pattern), a next-bit hopping value (Next_Delta), the count of each hopping value (Delta_Counter), whether each hopping value is the most recently used value (Most Recently Used, abbreviated MRU), and a total hopping value count (Total_Counter).
Illustratively, the format of the multi-level address hopping pattern table is as follows in Table 2:
TABLE 2 Multi-level address hopping pattern table PT-4

Delta Pattern       Next_Delta    Delta_Counter    MRU       Total_Counter
+1, +2, -1, +4      +1            3 -> 4           0 -> 1    8 -> 9
Here PT-4 denotes the hopping pattern table of length 4, i.e., its Delta Pattern contains four hopping values. Suppose the most recent Delta Pattern sequence is +1, +2, -1, +4 and a new access now occurs whose Next_Delta is +1. According to the update decision of the multi-level address hopping pattern table, the corresponding Delta Pattern is first found in the length-4 hopping pattern table (PT-4), and for the Next_Delta value +1 the Delta_Counter is updated 3 -> 4, MRU 0 -> 1, and Total_Counter 8 -> 9. Then, based on the most recent shorter suffix of the sequence, e.g., +2, -1, +4, the corresponding Delta Pattern is found in the length-3 hopping pattern table (PT-3, see Table 3) and its Next_Delta value, Delta_Counter, MRU, and Total_Counter are updated; and so on until the length-1 hopping pattern table (PT-1) is updated, at which point the whole multi-level address hopping pattern table has been updated (PT-2 and PT-1 are updated in the same way as PT-4 and PT-3; apart from the hopping pattern length, the formats are identical and are not repeated here). The hopping pattern table of each length is continuously updated and adjusted to fit Delta Patterns of different lengths, so that prefetch requests can be generated more accurately and prefetch coverage is improved.
Illustratively, the multi-level address hopping pattern table PT-3 is as follows in Table 3:
TABLE 3 Multi-level Address hopping pattern Table PT-3
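The PT-4 down to PT-1 update walk described above can be sketched in Python as follows. This is an illustrative model keyed by pattern suffix; the MRU bookkeeping is omitted and the data layout is an assumption, not the patent's hardware format.

```python
from collections import defaultdict

def update_mlpt(mlpt, pattern, next_delta):
    """Update PT-4 down to PT-1: for each length lv, the length-lv
    suffix of the current jump pattern indexes the table, and the
    count of next_delta plus the total count are incremented
    (MRU bookkeeping is omitted for brevity)."""
    for lv in range(len(pattern), 0, -1):
        key = tuple(pattern[-lv:])      # e.g. the PT-3 key is (+2, -1, +4)
        entry = mlpt[lv].setdefault(
            key, {"deltas": defaultdict(int), "total": 0})
        entry["deltas"][next_delta] += 1
        entry["total"] += 1

mlpt = {lv: {} for lv in (1, 2, 3, 4)}
# Most recent pattern +1, +2, -1, +4; the new access jumps by +1:
update_mlpt(mlpt, [1, 2, -1, 4], 1)
```

After one update, PT-4 holds the full pattern (+1, +2, -1, +4), PT-3 its suffix (+2, -1, +4), and so on down to PT-1, mirroring the Table 2/Table 3 walkthrough above.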
Step S104, generating a prefetch request based on the updated multi-stage address hopping pattern table.
It will be appreciated that the multi-level address hopping pattern table indicates the regularity of address accesses by recording Delta Patterns of different lengths: each table records historical statistics for Delta Patterns of one length, and these statistics can be used to generate prefetch requests. A prefetch request deduces the next address likely to be accessed from the Delta Patterns in the recorded address access history, and prefetches the data from main memory or another storage block into the current preset-level cache before that address is actually accessed.
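Step S104 can be sketched as follows. This is an illustrative model only: the table contents and the 64-byte line size are assumptions, and it simply picks the most frequent next-bit jump value over all lengths, omitting the confidence-threshold check introduced in the later embodiment.

```python
def generate_prefetch(mlpt, pattern, last_addr, line_size=64):
    """Deduce the next jump value from the pattern tables and turn it
    into a prefetch address. Sketch only: picks the most frequent
    next-bit jump value over all lengths; the confidence threshold of
    step S204 is not applied here."""
    best = None                                      # (count, delta)
    for lv in range(len(pattern), 0, -1):
        entry = mlpt.get(lv, {}).get(tuple(pattern[-lv:]))
        if entry:
            for delta, count in entry["deltas"].items():
                if best is None or count > best[0]:
                    best = (count, delta)
    if best is None:
        return None                                  # no pattern match
    return last_addr + best[1] * line_size           # prefetch target

# mlpt layout: {length: {pattern: {"deltas": {delta: count}, "total": n}}}
mlpt = {4: {(1, 2, -1, 4): {"deltas": {1: 3, 2: 1}, "total": 8}}}
addr = generate_prefetch(mlpt, [1, 2, -1, 4], 0x10000400)
```

With the illustrative table above, the jump value +1 wins (count 3 versus 1), so the prefetch targets the next cache line after 0x10000400.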
Step S105, feedback information after the pre-fetching request in the preset level cache is obtained, and the generation process of the next pre-fetching request is adjusted based on the feedback information.
In the embodiment of the invention, the feedback information after the prefetch request is used to optimize the generation of the next prefetch request. The feedback information is the coefficient factor calculated from the data obtained by monitoring the prefetch states with the dynamic filter, and it characterizes the accuracy and timeliness of the prefetch requests and the degree of cache pollution caused by prefetching. The generation of prefetch requests is thereby adjusted dynamically, improving the effect of the cache prefetch algorithm.
By recording and updating the page address jump values and jump patterns in the cache page history table, the embodiment of the invention can accurately judge the likelihood and priority of prefetching during memory accesses. The multi-level address jump pattern table is updated from those page address jump values and jump patterns, which better characterizes the address jump patterns across different pages and improves the accuracy of the generated prefetch requests. Feedback information after the prefetch request in the preset-level cache is then obtained, and the generation of the next prefetch request is adjusted according to this feedback, so that the prefetcher is dynamically optimized according to the actual running condition of the cache. This increases the proportion of cache misses eliminated by prefetching and thereby guarantees the effect of the data prefetching response.
In this embodiment, a data prefetching method for a system cache is provided, which may be used in the above-mentioned computer device. Fig. 2 is a flowchart of a data prefetching method for a system cache according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
step S201, a first page corresponding to an address when a current preset level cache is accessed is obtained. Please refer to step S101 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S202, updating or recording address jump values corresponding to pages in a cache page history table and corresponding address jump modes based on the first page. Specifically, step S202 includes:
In step S2021, the page number corresponding to the first page is obtained, and the cache page history table is searched based on the page number to determine whether the first page exists in the cache page history table.
For example, if the address corresponding to the first page is 0x10000400, the corresponding page number "0x100" is calculated and used for a traversal search of the cache page history table. If the page number is found, the information corresponding to the first page is already stored in the cache page history table and needs to be updated; otherwise, it is not yet stored and needs to be recorded.
Step S2022, if the first page exists, calculating the address jump value between the address of the current preset-level cache load access and the address of the previous preset-level cache load access to the first page, and updating the address jump pattern corresponding to the first page in the cache page history table based on the address jump value; if not, recording the address jump value corresponding to the first page and the corresponding address jump pattern in the cache page history table.
For example, suppose the cache page history table already records that the address jump pattern of page A is +1, +2, -1, +4. If the address corresponding to the current first page is 0x10000400, the corresponding page number "0x100" is calculated, and a traversal search of the cache page history table finds the corresponding page A, which means the address jump pattern of the first page is already stored. If the address jump value between the current load access address and the previous load access address of the page is calculated to be +1, this jump value is appended to the address jump pattern of page A (the first page), yielding the new address jump pattern +2, -1, +4, +1. Similarly, if no entry exists, the address jump pattern of the first page is directly recorded as +1.
According to the embodiment of the invention, the address jump behavior is captured accurately by calculating the address jump value between the address of the current preset-level cache load access and that of the previous load access to the first page; in the process of continuously capturing address jump values, the jump patterns among different pages are deduced, reducing the cache miss rate and providing a decision basis for subsequent data prefetching.
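The lookup-then-update flow of steps S2021 and S2022 can be sketched as follows. This is a minimal Python model under stated assumptions: entries are keyed by page number, offsets are line offsets within a page, and the jump-pattern window keeps the last four delta values; all names are illustrative, not from the patent.

```python
# Minimal model of the cache page history table (PHT) update of
# steps S2021-S2022. The four-entry pattern window matches the
# "Last 4 Deltas" field of Table 1.

PATTERN_LEN = 4

def update_pht(pht, page_num, offset):
    """Record (first access) or update (repeat access) the jump value
    and jump pattern for one load access to a page. Returns the jump
    value, or None on a first access."""
    entry = pht.get(page_num)
    if entry is None:
        # Page not in the history table yet: record it.
        pht[page_num] = {"last_offset": offset, "pattern": []}
        return None
    delta = offset - entry["last_offset"]              # address jump value
    entry["last_offset"] = offset
    entry["pattern"] = (entry["pattern"] + [delta])[-PATTERN_LEN:]
    return delta

pht = {}
for off in (15, 16, 18, 17, 21, 22):   # successive offsets in page 0x100
    update_pht(pht, 0x100, off)
# pattern is now the last four deltas: +2, -1, +4, +1
```

The access sequence produces deltas +1, +2, -1, +4, +1, so the retained window is +2, -1, +4, +1, matching the worked example above.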
Step S203, updating the multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern. Please refer to step S103 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S204, generating a prefetch request based on the updated multi-level address hopping pattern table. Specifically, step S204 includes:
step S2041, obtaining the jump data in the updated multi-level address jump pattern table, and generating the confidence of the current prefetch request. Specifically, step S2041 includes:
a1: acquiring a jump value count C corresponding to a next-bit jump value of a certain jump mode in the updated multi-stage address jump mode table n Total count of jump values C t
a2: acquiring confidence P of last generated prefetch request d-1 Coefficient factor LF lv
a3: hop value count C corresponding to the next bit hop value based on the hop pattern n Total count of jump values C t Confidence P of last generation prefetch request d-1 Coefficient factor LF lv The confidence P for generating the current prefetch request is calculated based on the following confidence calculation formula d
Wherein lv is different lengths corresponding to different hopping patterns.
For example, if lv is 4, the calculated confidence corresponds to the jump pattern of length 4. If the updated jump pattern is +2, -1, +4, +1, this pattern must be looked up in the current multi-level address jump pattern table (MLPT); suppose the matching entry has a next-bit jump value count of 3 and a total count of 8, the confidence of the last generated prefetch request is 0.7, and the coefficient factor is 0.5. Substituting into the formula gives a confidence of 0.13125 for the length-4 jump pattern. Likewise, the jump pattern tables of all lengths must be searched, obtaining the confidences of the updated pattern +2, -1, +4, +1 at each length; since lv may be 1, 2, 3, or 4, the confidence is computed for every length, and the next-bit jump value with the highest confidence is selected as the jump value for generating the prefetch request.
Furthermore, it will be appreciated that the confidence calculation above applies to the looping prefetch process, in which the confidence Pd-1 and coefficient factor LFlv of the last generated prefetch request are needed. For the first calculation, no previous Pd-1 or LFlv exists, so the confidence Pd of the current prefetch request is calculated directly from the jump value count Cn corresponding to the next-bit jump value of the jump pattern and the total jump value count Ct with the following confidence calculation formula:

Pd = Cn / Ct.
For example, when the confidence is calculated for the first time, with the corresponding jump pattern being +2, -1, +4, +1 and the generated jump pattern being -1, +4, +1, +3, the calculated confidence is 0.889. At this point the recursive looping prefetch process has started, and 0.889 is taken as the confidence Pd-1 for the next loop iteration.
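The two cases can be sketched as follows, assuming the confidence formulas Pd = Cn/Ct for the first calculation and Pd = (Cn/Ct) × Pd-1 × LFlv thereafter — formulas reconstructed so as to reproduce the worked values in this section:

```python
# Sketch of the confidence calculation (steps a1-a3).
def confidence(cn, ct, pd_prev=None, lf_lv=None):
    base = cn / ct
    if pd_prev is None or lf_lv is None:
        # First calculation in the loop: no previous Pd-1 or LFlv exists.
        return base
    return base * pd_prev * lf_lv

# Worked example from the text: Cn = 3, Ct = 8, Pd-1 = 0.7, LFlv = 0.5.
p = confidence(3, 8, pd_prev=0.7, lf_lv=0.5)   # ~0.13125
```

The result p would then be carried forward as Pd-1 into the next loop iteration, exactly as the 0.889 value is in the example above.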
Step S2042, judging whether the confidence coefficient of the current prefetch request is larger than a preset threshold value, and if so, generating the current prefetch request based on the next bit jump value in the multi-level address jump pattern table.
It should be noted that the preset threshold refers to the minimum prediction confidence. In the embodiment of the present invention, two preset thresholds, TC and TN, are set, used respectively to decide whether the data returned by a prefetch request is cached in the L2C or the LLC; if the confidence of a prefetch request is smaller than the preset threshold, the prefetch request is not generated. It will be appreciated that generating the current prefetch request based on the next-bit jump value in the multi-level address jump pattern table is accomplished by looking up the next-bit jump value count and MRU information of the corresponding delta pattern in the MLPT, together with feedback information from the DF (Dynamic Filter).
For example, given an address jump pattern of length 4 of +1, +2, -1, +4, the predicted next address jump value with the greatest probability is +1, and the confidences a, b, c, d corresponding to the jump patterns of the different lengths (lv = 1, 2, 3, 4) are calculated according to the formula above. If the maximum of these confidences exceeds the preset threshold, a prefetch request is generated based on the next-bit jump value of the corresponding address jump pattern.
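A sketch of this selection step (S2042), with hypothetical confidences and next-bit jump values for lv = 1..4 — the candidate numbers are illustrative, not from the embodiment:

```python
# Sketch of step S2042: among the confidences computed for all pattern
# lengths, pick the next-bit jump value with the highest confidence and
# issue a prefetch only if that confidence exceeds the preset threshold.
def select_prefetch(candidates, threshold):
    """candidates: list of (confidence, next_jump_value) over lv = 1..4."""
    best_conf, best_jump = max(candidates)
    return best_jump if best_conf > threshold else None

# Hypothetical confidences a, b, c, d with their predicted jump values.
cands = [(0.40, +2), (0.55, +1), (0.62, +1), (0.48, +4)]
select_prefetch(cands, threshold=0.5)   # -> +1 (highest confidence, 0.62)
```

Returning None models the case where no prefetch request is generated because every confidence falls below the threshold.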
Step S205, feedback information after the pre-fetching request in the preset level cache is obtained, and the generation process of the next pre-fetching request is adjusted based on the feedback information. Please refer to step S105 in the embodiment shown in fig. 1 in detail, which is not described herein.
The embodiment of the invention evaluates the credibility of generating the prefetch request by acquiring the confidence of generating the current prefetch request in the multi-level address hopping pattern table. When the confidence is larger than a preset threshold, a prefetch request is generated based on the next bit jump value in the jump pattern table, so that the probability of misjudgment of an unreliable request is reduced, and the accuracy of the prefetch request is improved.
In this embodiment, a method for prefetching data of a system cache is provided, which may be used in a computer or the like. Fig. 3 is a flowchart of a method for prefetching data of a system cache according to an embodiment of the present invention; as shown in fig. 3, the flow includes the following steps:
step S301, a first page corresponding to an address when a current preset level cache is accessed is acquired. Please refer to step S101 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S302, updating or recording address jump values corresponding to pages in a cache page history table and corresponding address jump modes based on the first page. Please refer to step S102 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S303, updating the multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern. Please refer to step S103 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S304, generating a prefetch request based on the updated multi-level address hopping pattern table. Please refer to step S104 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S305, obtaining feedback information after the pre-fetch request in the preset level cache, and adjusting the generation process of the next pre-fetch request based on the feedback information. Specifically, step S305 includes:
b1: the method comprises the steps of obtaining a prefetch request cache line filling state and a replacement state of a dynamic filter monitoring a preset level cache.
It should be noted that, in the embodiment of the present invention, a Dynamic negative feedback mechanism is introduced, and a Dynamic Filter (DF) is used to monitor a prefetch request and extract feedback information for recording the prefetch quality. Specifically including prefatch Status (fill Status of a cache line or data block), event Status (replacement Status of a cache line or data block), and Status Counter.
The Prefetch Status is used to record the Status of the Prefetch request, including three flags, i.e., issued (I), available (a), and Useful (U). Depending on the return of the prefetch request, it may be determined whether the prefetch request is valid, for example: when a prefetch request is first generated, i.e., not present in 1024 state records, then the (I) flag is set to 1, indicating that the corresponding prefetch request has been issued; when the prefetch request returns and the data therein has been successfully placed in the cache, the corresponding (a) flag is also set to 1, indicating that the prefetch request has been validated; when a subsequent read request hits, and where the (I) flag is 1 and the (U) flag is 0, then the (U) flag is also set to 1, at which point the prefetch request is considered valid.
Evchange Status records, via bit-vector, whether prefetched data resulted in Eviction of a cache line, such as: a buffer vector containing 4096 bits is used to record whether a certain buffer line has been replaced by prefetched data, wherein the bit vector determines specific bit positions by means of a hash of the buffer line address to evaluate the buffer pollution introduced by prefetching.
The Status Counter is used for counting quantity information according to prefatch Status and evocationstatus. For example: status Counter, record Status change information using a plurality of counters. (A) Ciss: issued Counter, ciss+1 when I is 0- > 1; (B) Cacc: accurate Counter, cacc+1 when U is 0- > 1; (C) Ctin: timely Counter, cti+1 when U is 0- >1, while A=1; (D) Cpol: pollution Counter, if the corresponding cache line search Eviction Status bit is 1 when the cache is in miss, cpol+1; (E) Cmsiss: miss Counter, count of global read request cache Miss.
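The bookkeeping above can be sketched as follows. The sizes follow the text (1024 state records, 4096-bit eviction vector), but the hash-to-bit mapping and capacity management of the state records are assumptions, not specified by the embodiment:

```python
# Sketch of the Dynamic Filter: Prefetch Status flags (I/A/U), the
# Eviction Status bit vector, and the Status Counters Ciss/Cacc/Ctim/
# Cpol/Cmiss. Eviction of old state records (the 1024-entry limit) is
# omitted for brevity.
class DynamicFilter:
    def __init__(self):
        self.status = {}              # line addr -> {"I","A","U"} flags
        self.evicted = [0] * 4096     # Eviction Status bit vector
        self.ciss = self.cacc = self.ctim = self.cpol = self.cmiss = 0

    def _bit(self, line_addr):
        return hash(line_addr) % 4096   # assumed hash-to-bit mapping

    def on_prefetch_issued(self, line):
        if line not in self.status:     # first time: set the I flag
            self.status[line] = {"I": 1, "A": 0, "U": 0}
            self.ciss += 1              # Issued Counter

    def on_prefetch_fill(self, line, evicted_line=None):
        s = self.status.get(line)
        if s is not None:
            s["A"] = 1                  # data placed in the cache
        if evicted_line is not None:    # mark the victim line as evicted
            self.evicted[self._bit(evicted_line)] = 1

    def on_read(self, line, hit):
        if hit:
            s = self.status.get(line)
            if s and s["I"] == 1 and s["U"] == 0:
                s["U"] = 1
                self.cacc += 1          # Accurate Counter
                if s["A"] == 1:
                    self.ctim += 1      # Timely Counter
        else:
            self.cmiss += 1             # global read-miss count
            if self.evicted[self._bit(line)] == 1:
                self.cpol += 1          # Pollution Counter
```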
b2: extracting an issued count value Ciss, an accurate count value Cacc, a timely count value Ctim, a pollution count value Cpol and a missing count value Cmiss of a prefetch request state based on the filling state and the replacement state, obtaining a prefetch accurate proportion a, a prefetch timely proportion t and a cache pollution proportion p caused by prefetching, wherein:
a=Cacc/Ciss;
t=Ctim/Cacc;
p=Cpol/Cmiss。
Illustratively, suppose the issued count Ciss of the prefetch request states is 1000, the accurate count Cacc is 800, the timely count Ctim is 600, the pollution count Cpol is 101, and the miss count Cmiss is 1010. The computed prefetch accuracy ratio is a = 0.8, the prefetch timeliness ratio is t = 0.75, and the cache pollution ratio caused by prefetching is p = 0.1.
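The three ratios of step b2 can be sketched directly from the counters. Cpol is taken as 101 here, the value consistent with the stated result p = 0.1 for Cmiss = 1010 (with the originally printed Cpol = 400 the ratio would be about 0.396):

```python
# Sketch of step b2: computing the three feedback ratios.
ciss, cacc, ctim, cpol, cmiss = 1000, 800, 600, 101, 1010

a = cacc / ciss    # prefetch accuracy ratio   -> 0.8
t = ctim / cacc    # prefetch timeliness ratio -> 0.75
p = cpol / cmiss   # cache pollution ratio     -> 0.1
```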
b3: based on the accurate ratio a of prefetching, the timely ratio t of prefetching, the cache pollution ratio p caused by prefetching obtains a coefficient factor LF through the following formula lv(a,t,p)
LF lv(a,t,p) =B LFLF *a-β LF *t-γ LF *p;
Wherein alpha is LF To prefetch the first weight corresponding to the accurate proportion a, beta LF For pre-fetching the second weight corresponding to the timely proportion t, gamma LF A third weight corresponding to the cache pollution ratio p caused by prefetching, B LF Is a preset offset value.
b4: using LFlv(a,t,p) as feedback information to optimize the confidence calculation formula, thereby adjusting the generation process of the next prefetch request.
For example, assume the preset offset value is 0.5, the prefetch accuracy ratio a is 0.8, the prefetch timeliness ratio t is 0.75, and the cache pollution ratio p caused by prefetching is 0.1. With the first, second, and third weights set to 0.6, 0.3, and 0.1 respectively, the coefficient factor is calculated as 0.745, which is used to adjust the generation process of the next prefetch request.
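The step-b3 formula with the example weights can be sketched as follows. The offset BLF is taken as 0.5, the only value that reproduces the stated result 0.745 with these inputs (the originally printed offset "10" would give 10.245):

```python
# Sketch of step b3: the coefficient factor LFlv(a, t, p).
def coefficient_factor(a, t, p, alpha=0.6, beta=0.3, gamma=0.1, b_lf=0.5):
    # LFlv(a,t,p) = BLF + alpha*a - beta*t - gamma*p
    return b_lf + alpha * a - beta * t - gamma * p

lf = coefficient_factor(a=0.8, t=0.75, p=0.1)
# 0.5 + 0.48 - 0.225 - 0.01 = 0.745
```

This value would then feed back into the confidence formula as LFlv for the next prefetch request.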
According to the embodiment of the invention, the count value of the sent prefetch request can be accurately obtained by monitoring the data filling state and the replacement state of the preset level cache, the proportion of various states of prefetching is calculated, the coefficient factor is calculated by combining various weights with the proportion, and the confidence coefficient calculation formula is further optimized, so that the generation process of the next prefetch request is dynamically optimized more accurately in real time, and the accuracy of the prefetch request is further improved.
Step S306, after eliminating repeated prefetch requests based on the dynamic filter, the prefetch requests are sent to the next-level storage of the preset-level cache.
The embodiment of the invention eliminates repeated prefetch requests by using the dynamic filter, thereby avoiding sending a plurality of identical prefetch requests to the next-stage storage and reducing the load of the system cache.
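The duplicate elimination of step S306 can be sketched with a simple seen-set standing in for the dynamic filter's duplicate check (the real filter would use its bounded state records rather than an unbounded set):

```python
# Sketch of step S306: suppress duplicate prefetch requests before they
# are sent to the next-level storage of the preset-level cache.
def filter_duplicates(requests, seen=None):
    seen = set() if seen is None else seen
    out = []
    for addr in requests:
        if addr not in seen:   # forward each address at most once
            seen.add(addr)
            out.append(addr)
    return out

filter_duplicates([0x100, 0x140, 0x100, 0x180])   # -> [0x100, 0x140, 0x180]
```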
The embodiment also provides a data prefetching device of a system cache, which is used for implementing the foregoing embodiments and preferred embodiments, and the description is omitted herein. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The embodiment provides a data prefetching apparatus for system cache, as shown in fig. 4, including:
the page obtaining module 401 is configured to obtain a first page corresponding to an address when the current preset level cache loads and accesses;
a cache page updating module 402, configured to update or record, based on the first page, an address jump value corresponding to a page in the cache page history table and a corresponding address jump pattern;
an address hopping pattern updating module 403, configured to update the multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern;
a prefetch request generation module 404, configured to generate a prefetch request based on the updated multi-level address hopping pattern table;
the prefetch feedback module 405 is configured to obtain feedback information after a prefetch request in a preset level cache, and adjust a generation process of a next prefetch request based on the feedback information.
In an alternative embodiment, the cache page updating module 402 is specifically configured to obtain a page number corresponding to a first page, and search a cache page history table based on the page number, and determine whether the first page exists in the cache page history table; if so, calculating an address jump value between a first page corresponding to an address in the current preset level cache loading access and a first page corresponding to an address in the last preset level cache loading access, and updating an address jump mode corresponding to the first page in the cache page history table based on the address jump value; if not, the address jump value corresponding to the first page and the corresponding address jump mode are recorded in the cache page history table.
In an alternative embodiment, the multi-level address hopping pattern table comprises a plurality of hopping pattern tables of different lengths, the hopping pattern tables comprising hopping patterns, next-bit hopping values, corresponding hopping value counts, whether the corresponding hopping values are most recently used values, and a hop value total count.
In an alternative embodiment, the prefetch request generation module 404 is specifically configured to obtain jump data in the updated multi-level address jump mode table, and generate a confidence level of the current prefetch request; judging whether the confidence coefficient of the current prefetch request is larger than a preset threshold value, and if so, generating the current prefetch request based on the next bit jump value in the multi-level address jump mode table.
In an alternative embodiment, the prefetch request generation module 404 further includes a confidence calculation subunit, configured to: obtain the jump value count Cn corresponding to the next-bit jump value of a given jump pattern in the updated multi-level address jump pattern table and the total jump value count Ct; obtain the confidence Pd-1 of the last generated prefetch request and the coefficient factor LFlv; and, based on Cn, Ct, Pd-1, and LFlv, calculate the confidence Pd of the current prefetch request with the confidence calculation formula Pd = (Cn / Ct) × Pd-1 × LFlv, wherein lv denotes the different lengths corresponding to different jump patterns.
In an alternative embodiment, the prefetch feedback module 405 is specifically configured to: obtain the cache-line fill state and replacement state of prefetch requests, monitored by the dynamic filter on the preset level cache; based on the fill state and replacement state, extract the issued count Ciss, accurate count Cacc, timely count Ctim, pollution count Cpol, and miss count Cmiss of the prefetch request states, and obtain the prefetch accuracy ratio a, the prefetch timeliness ratio t, and the cache pollution ratio p caused by prefetching, wherein:

a=Cacc/Ciss;

t=Ctim/Cacc;

p=Cpol/Cmiss;

based on the prefetch accuracy ratio a, the prefetch timeliness ratio t, and the cache pollution ratio p caused by prefetching, obtain the coefficient factor LFlv(a,t,p) through the following formula:

LFlv(a,t,p) = BLF + αLF*a - βLF*t - γLF*p;

wherein αLF is the first weight corresponding to the prefetch accuracy ratio a, βLF is the second weight corresponding to the prefetch timeliness ratio t, γLF is the third weight corresponding to the cache pollution ratio p caused by prefetching, and BLF is a preset offset value; and use LFlv(a,t,p) as feedback information to optimize the confidence calculation formula, thereby adjusting the generation process of the next prefetch request.
In an alternative embodiment, the apparatus further comprises: and the repeated request filtering module is used for sending the pre-fetching request to the next-stage storage of the preset-stage cache after eliminating the repeated pre-fetching request based on the dynamic filter.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
According to the embodiment of the invention, by recording and updating page address jump values and jump patterns in the cache page history table, the likelihood and priority of prefetching during memory access can be judged accurately. The multi-level address jump pattern table is updated from those page address jump values and jump patterns, so that the address jump patterns among different pages are better represented and the accuracy of generating prefetch requests is improved. Feedback information after the prefetch request in the preset level cache is acquired, and the generation process of the next prefetch request is adjusted according to that feedback, allowing dynamic optimization according to the actual running condition of the cache; this raises the proportion of cache misses eliminated by prefetching and further ensures the effect of data prefetch responses.
The data prefetching device of the system cache in this embodiment is presented in the form of functional units, where a unit may be an ASIC (Application Specific Integrated Circuit), a processor and memory executing one or more software or firmware programs, and/or another device that can provide the above-described functions.
The embodiment of the invention also provides a computer device, which is provided with the data prefetching device of the system cache shown in the figure 4.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 5, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 5.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform a method for implementing the embodiments described above.
The memory 20 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code that can be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium, downloaded over a network, and stored on a local storage medium, so that the method described herein can be processed by such software on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above types of memory. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method for prefetching data cached by a system, the method comprising:
acquiring a first page corresponding to an address when a current preset level cache is accessed;
updating or recording an address jump value corresponding to a page in a cache page history table and a corresponding address jump mode based on the first page;
updating a multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern;
generating a prefetch request based on the updated multi-level address hopping pattern table;
and acquiring feedback information after the pre-fetching request in the preset level cache, and adjusting the generation process of the next pre-fetching request based on the feedback information.
2. The method of claim 1, wherein updating or recording the address hopping value and the corresponding address hopping pattern for the page in the cache page history table based on the first page comprises:
Acquiring a page number corresponding to a first page, searching a cache page history table based on the page number, and judging whether the first page exists in the cache page history table;
if so, calculating an address jump value between a first page corresponding to an address in the current preset level cache loading access and a first page corresponding to an address in the last preset level cache loading access, and updating an address jump mode corresponding to the first page in the cache page history table based on the address jump value; if not, the address jump value corresponding to the first page and the corresponding address jump mode are recorded in the cache page history table.
3. The method of claim 1, wherein the multi-level address hopping pattern table comprises a plurality of hopping pattern tables of different lengths, the hopping pattern tables comprising a hopping pattern, a next-bit hopping value, a corresponding hopping value count, whether the corresponding hopping value is a most recently used value, and a hopping value total count.
4. The method of claim 1, wherein generating the prefetch request based on the updated multi-level address hopping pattern table comprises:
acquiring jump data in the updated multi-level address jump mode table, and generating the confidence of the current prefetch request;
Judging whether the confidence coefficient of the current prefetch request is larger than a preset threshold value or not, and if the confidence coefficient is larger than the preset threshold value, generating the current prefetch request based on the next bit jump value in the multi-level address jump mode table.
5. The method of claim 4, wherein the obtaining the hopping data in the updated multi-level address hopping pattern table to generate the confidence level of the current prefetch request comprises:
acquiring a jump value count Cn corresponding to a next-bit jump value of a jump pattern in the updated multi-level address jump pattern table and a total jump value count Ct;

acquiring a confidence Pd-1 of a last generated prefetch request and a coefficient factor LFlv;

based on the jump value count Cn corresponding to the next-bit jump value of the jump pattern, the total jump value count Ct, the confidence Pd-1 of the last generated prefetch request, and the coefficient factor LFlv, calculating a confidence Pd of the current prefetch request based on the following confidence calculation formula:

Pd = (Cn / Ct) × Pd-1 × LFlv;

wherein lv is different lengths corresponding to different jump patterns.
6. The method of claim 4, wherein the obtaining feedback information after the prefetch request in the preset level cache and adjusting the generation process of the next prefetch request based on the feedback information comprises:
Acquiring a prefetch request cache line filling state and a replacement state of a dynamic filter monitoring the preset level cache;
extracting an issued count value Ciss, an accurate count value Cacc, a timely count value Ctim, a pollution count value Cpol, and a miss count value Cmiss of the prefetch request states based on the filling state and the replacement state, and obtaining a prefetch accuracy ratio a, a prefetch timeliness ratio t, and a cache pollution ratio p caused by prefetching, wherein:
a=Cacc/Ciss;
t=Ctim/Cacc;
p=Cpol/Cmiss;
based on the prefetch accuracy ratio a, the prefetch timeliness ratio t, and the cache pollution ratio p caused by prefetching, obtaining a coefficient factor LFlv(a,t,p) through the following formula:

LFlv(a,t,p) = BLF + αLF*a - βLF*t - γLF*p;

wherein αLF is a first weight corresponding to the prefetch accuracy ratio a, βLF is a second weight corresponding to the prefetch timeliness ratio t, γLF is a third weight corresponding to the cache pollution ratio p caused by prefetching, and BLF is a preset offset value;
using said LFlv(a,t,p) as feedback information to optimize a confidence calculation formula to adjust a generation process of a next prefetch request.
7. The method according to any one of claims 1 to 6, further comprising:
and after eliminating repeated prefetch requests based on the dynamic filter, sending prefetch requests to the next-level storage of the preset-level cache.
8. An apparatus for prefetching data cached by a system, the apparatus comprising:
the page acquisition module is used for acquiring a first page corresponding to an address when the current preset level cache is accessed;
the cache page updating module is used for updating or recording an address jump value corresponding to a page in a cache page history table and a corresponding address jump mode based on the first page;
the address hopping pattern updating module is used for updating the multi-level address hopping pattern table based on the address hopping value corresponding to the page in the cache page history table and the corresponding address hopping pattern;
the prefetch request generation module is used for generating a prefetch request based on the updated multi-level address hopping pattern table;
and the prefetch feedback module is used for acquiring feedback information after the prefetch request in the preset level cache and adjusting the generation process of the next prefetch request based on the feedback information.
9. A computer device, comprising:
a memory and a processor in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the method of prefetching data cached by the system of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of prefetching data cached by the system of any one of claims 1 to 7.
CN202311423775.5A 2023-10-27 2023-10-27 Data prefetching method, device, equipment and medium for system cache Pending CN117609111A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311423775.5A CN117609111A (en) 2023-10-27 2023-10-27 Data prefetching method, device, equipment and medium for system cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311423775.5A CN117609111A (en) 2023-10-27 2023-10-27 Data prefetching method, device, equipment and medium for system cache

Publications (1)

Publication Number Publication Date
CN117609111A true CN117609111A (en) 2024-02-27

Family

ID=89943314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311423775.5A Pending CN117609111A (en) 2023-10-27 2023-10-27 Data prefetching method, device, equipment and medium for system cache

Country Status (1)

Country Link
CN (1) CN117609111A (en)

Similar Documents

Publication Publication Date Title
US6453389B1 (en) Optimizing computer performance by using data compression principles to minimize a loss function
US9471497B2 (en) Methods for combining access history and sequentiality for intelligent prefetching and devices thereof
TWI684099B (en) Profiling cache replacement
US7757045B2 (en) Synchronizing recency information in an inclusive cache hierarchy
US8176258B2 (en) System and method for cache management
US8601216B2 (en) Method and system for removing cache blocks
US8583874B2 (en) Method and apparatus for caching prefetched data
EP3089039B1 (en) Cache management method and device
CN104156323B (en) A kind of adaptive read method of the data block length of cache memory and device
CN107544926A (en) Processing system and its access method
CN110162272B (en) Memory computing cache management method and device
WO2023173991A1 (en) Cache line compression prediction and adaptive compression
CN117609111A (en) Data prefetching method, device, equipment and medium for system cache
US20230022190A1 (en) Systems and methods for adaptive hybrid hardware pre-fetch
CN114461590A (en) Database file page prefetching method and device based on association rule
CN115080459A (en) Cache management method and device and computer readable storage medium
CN114764416A (en) Data caching method, device and equipment and computer readable storage medium
US8484423B2 (en) Method and apparatus for controlling cache using transaction flags
EP4261712A1 (en) Data elimination method and apparatus, cache node, and cache system
US20230297382A1 (en) Cache line compression prediction and adaptive compression
CN116467353B (en) Self-adaptive adjustment caching method and system based on LRU differentiation
CN112015679B (en) Cache optimization method and system based on access frequency
US11797446B2 (en) Multi purpose server cache directory
US11216382B1 (en) Intelligent hierarchical caching based on metrics for objects in different cache levels
CN113778693B (en) Cache operation method, cache operation device, electronic equipment and processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 07-1, 2001, No. 37 Huangge Section, Fanzhong Road, Nansha District, Guangzhou City, Guangdong Province, China

Applicant after: Guangdong Hongjun Microelectronics Technology Co.,Ltd.

Address before: 813-3, Building 1, No. 371, Mingxing Road, Economic and Technological Development Zone, Xiaoshan District, Hangzhou City, Zhejiang Province, 311200

Applicant before: Hangzhou Hongjun Microelectronics Technology Co.,Ltd.

Country or region before: China
