CN112306909A - Cache elimination method and device and electronic equipment - Google Patents


Publication number
CN112306909A
CN112306909A
Authority
CN
China
Prior art keywords
linked list
cache
access data
current access
history
Prior art date
Legal status
Granted
Application number
CN202011163795.XA
Other languages
Chinese (zh)
Other versions
CN112306909B (en)
Inventor
刘少荘
张亚东
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202011163795.XA
Publication of CN112306909A
Application granted
Publication of CN112306909B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893: Caches characterised by their organisation or structure
    • G06F 12/0895: Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block erasable memory, e.g. flash memory


Abstract

The invention provides a cache elimination method and device and an electronic device, belongs to the technical field of network equipment, and solves the technical problem that existing cache elimination algorithms have a low cache hit rate. The cache elimination method is applied to an electronic device whose memory includes a first cache linked list, a second cache linked list, a first history linked list and a second history linked list, and includes the following steps: judging the position of the current access data; if the current access data appears in the first cache linked list, inserting it at the head of the second cache linked list and deleting it from the first cache linked list; if the current access data does not appear in the first cache linked list, judging whether it appears in the second cache linked list; and if the current access data appears in the second cache linked list, moving it to the head of the second cache linked list.

Description

Cache elimination method and device and electronic equipment
Technical Field
The invention relates to the technical field of network equipment, in particular to a cache elimination method and device, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of network technology, electronic devices are used ever more frequently, and optimization of the storage structure has an important influence on their performance.
At present, some data on the disk is used frequently, yet fetching it from the disk takes a long time. If commonly used data is read from the disk every time it is needed and released after reading, resources are badly wasted: frequent Input/Output (IO) operations consume Central Processing Unit (CPU) time. Caching commonly used data in memory makes reads far faster than reading from the disk, but memory is limited and cannot hold all the data. An algorithm is therefore needed that keeps recently used data in memory and releases from memory the data unlikely to be accessed soon; deciding which data will not be accessed in the near future is a key question in caching research, namely the cache elimination algorithm. Common cache elimination algorithms include Least Recently Used (LRU), Least Frequently Used (LFU), First In First Out (FIFO), and the like, each with its own drawbacks. LRU evicts the data with the oldest timestamp, but its caching effect is poor when a large batch of data is accessed cyclically; LFU evicts cache data with a low access frequency, but is unfriendly to new data, and historical data with a high access count is hard to evict; FIFO cannot achieve a good caching effect for frequently accessed data when the cache space is small. These drawbacks degrade the caching effect to some extent and reduce the cache hit rate.
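For concreteness, the classic LRU strategy mentioned above can be sketched in a few lines of Python (an illustrative sketch, not part of the patent; the class and method names are invented for this example):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the entry whose most recent access lies farthest in the past."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()        # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None                  # miss: caller fetches from disk
        self.data.move_to_end(key)       # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # drop the least recently used
```

A cyclic scan over more keys than the capacity illustrates LRU's weakness noted above: each access evicts exactly the entry that will be needed next, so the hit rate collapses.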
Therefore, existing cache elimination algorithms suffer from the technical problem of a low cache hit rate.
Disclosure of Invention
The invention aims to provide a cache elimination method, a cache elimination device, electronic equipment and a computer readable storage medium, and solves the technical problem that the cache hit rate is low in the existing cache elimination algorithm.
In a first aspect, the present invention provides a cache elimination method applied to an electronic device, where the memory of the electronic device includes a first cache linked list, a second cache linked list, a first history linked list and a second history linked list; the first cache linked list is used for caching recently accessed data that has been accessed once, the second cache linked list is used for caching recently accessed data that has been accessed more than once, the first history linked list is used for recording access records recently deleted from the first cache linked list, and the second history linked list is used for recording access records recently deleted from the second cache linked list. The method includes the following steps:
judging the position of the current access data;
if the current access data appears in the first cache linked list, inserting the current access data at the head of the second cache linked list and deleting it from the first cache linked list;
if the current access data does not appear in the first cache linked list, judging whether the current access data appears in the second cache linked list;
if the current access data appears in the second cache linked list, moving the current access data to the head of the second cache linked list;
if the current access data does not appear in the second cache linked list, judging whether the current access data appears in the first history linked list;
if the current access data does not appear in the first history linked list, judging whether the current access data appears in the second history linked list;
and if the current access data does not appear in any of the first cache linked list, the second cache linked list, the first history linked list and the second history linked list, inserting the current access data at the head of the first cache linked list.
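The claim steps above can be sketched as a single decision flow (an illustrative Python sketch; the names t1/t2/b1/b2 for the two cache lists and two history lists are our own shorthand, and plain Python lists with index 0 as the head stand in for the linked lists):

```python
def handle_access(key, t1, t2, b1, b2):
    """Route one access according to where the key currently appears.

    t1: first cache list (data accessed once recently)
    t2: second cache list (data accessed more than once)
    b1/b2: history lists of records recently deleted from t1/t2
    """
    if key in t1:                 # second recent access: promote
        t1.remove(key)
        t2.insert(0, key)         # head of the second cache list
        return "promoted to t2"
    if key in t2:                 # high-frequency data hit again
        t2.remove(key)
        t2.insert(0, key)         # move to the head of t2
        return "refreshed in t2"
    if key in b1:                 # history hit: t1 was too short
        return "grow t1"          # length adjustment handled separately
    if key in b2:                 # history hit: t2 was too short
        return "grow t2"
    t1.insert(0, key)             # brand-new data: head of t1
    return "inserted into t1"
```

The two "grow" branches only signal the history hit here; the corresponding length adjustment and re-caching are described in the further claims that follow.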
Further, after the step of judging whether the current access data appears in the first history linked list when it does not appear in the second cache linked list, the method further includes: if the access record of the current access data appears in the first history linked list, increasing the length of the first cache linked list and reducing the length of the second cache linked list;
and if the current access data does not appear in the first history linked list, judging whether it appears in the second history linked list, and if the access record of the current access data appears in the second history linked list, increasing the length of the second cache linked list and reducing the length of the first cache linked list.
Further, when the first cache linked list is full, the access data at the tail of the first cache linked list is deleted, and its record is inserted at the head of the first history linked list;
and when the second cache linked list is full, the access data at the tail of the second cache linked list is deleted, and its record is inserted at the head of the second history linked list.
Further, when the first history linked list or the second history linked list is full, its tail record is deleted.
Further, the sum of the lengths of the first cache linked list and the second cache linked list is a fixed value.
Further, after the steps of increasing the length of the first cache linked list and reducing the length of the second cache linked list when the access record of the current access data appears in the first history linked list, the method further includes:
inserting the current access data at the head of the first cache linked list, and deleting the record from the first history linked list.
Further, after the steps of increasing the length of the second cache linked list and reducing the length of the first cache linked list when the access record of the current access data appears in the second history linked list, the method further includes:
inserting the current access data at the head of the second cache linked list, and deleting the record from the second history linked list.
In a second aspect, a cache elimination apparatus is applied to an electronic device, where the memory of the electronic device includes a first cache linked list, a second cache linked list, a first history linked list and a second history linked list; the first cache linked list is used for caching recently accessed data that has been accessed once, the second cache linked list is used for caching recently accessed data that has been accessed more than once, the first history linked list is used for recording access records recently deleted from the first cache linked list, and the second history linked list is used for recording access records recently deleted from the second cache linked list. The apparatus includes:
a judging module, used for judging the position of the current access data;
and an execution module, used for executing the following: if the current access data appears in the first cache linked list, inserting it at the head of the second cache linked list and deleting it from the first cache linked list; if the current access data does not appear in the first cache linked list, judging whether it appears in the second cache linked list; if the current access data appears in the second cache linked list, moving it to the head of the second cache linked list; if the current access data does not appear in the second cache linked list, judging whether it appears in the first history linked list; if the current access data does not appear in the first history linked list, judging whether it appears in the second history linked list; and if the current access data does not appear in any of the four linked lists, inserting it at the head of the first cache linked list.
In a third aspect, the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements the method steps of the first aspect when executing the computer program.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of the first aspect.
The invention provides a cache elimination method applied to an electronic device, where the memory of the electronic device includes a first cache linked list, a second cache linked list, a first history linked list and a second history linked list; the first cache linked list is used for caching recently accessed data that has been accessed once, the second cache linked list is used for caching recently accessed data that has been accessed more than once, the first history linked list is used for recording access records recently deleted from the first cache linked list, and the second history linked list is used for recording access records recently deleted from the second cache linked list.
The method includes the following steps: judge the position of the current access data, from which the access frequency of the current access data can be inferred.
If the current access data appears in the first cache linked list, insert the current access data at the head of the second cache linked list and delete it from the first cache linked list. Its appearance in the first cache linked list shows that the data has recently been accessed once; now that it is being accessed again, it should enter the second cache linked list, which caches data accessed multiple times.
If the current access data does not appear in the first cache linked list, judge whether it appears in the second cache linked list. Absence from the first cache linked list shows that the data is not new data accessed once recently, so it is next judged whether it is recently accessed high-frequency data.
If the current access data appears in the second cache linked list, move it to the head of the second cache linked list. A hit in the second cache linked list means high-frequency data has been accessed again, showing that its access frequency is high; moving it to the head of the list improves its hit rate.
If the current access data does not appear in the second cache linked list, judge whether it appears in the first history linked list. This judges whether the current access preference tends toward multiple accesses to new data, providing a reference for the dynamic adjustment of the lengths of the first and second cache linked lists described below.
If the current access data does not appear in the first history linked list, judge whether it appears in the second history linked list. This judges whether the current access preference tends toward multiple accesses to high-frequency data, again providing a reference for the dynamic length adjustment.
If the current access data does not appear in any of the first cache linked list, the second cache linked list, the first history linked list and the second history linked list, insert it at the head of the first cache linked list. That is, the current access data is being accessed for the first time recently, so it is inserted at the head of the first cache linked list, providing a basis for later judging repeated accesses to new data.
With the cache elimination method provided by the invention, the position of the current access data reveals whether it is data accessed for the first time recently or recently accessed high-frequency data, and the data is inserted into the first or second cache linked list accordingly. This satisfies the caching requirement of high-frequency data while also accommodating repeated accesses to new data, improving the cache hit rate.
Accordingly, the cache elimination apparatus and the electronic device provided by the embodiments of the invention also achieve the above technical effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a first flowchart of a cache elimination method according to an embodiment of the present invention;
Fig. 2 is a diagram illustrating a memory structure according to an embodiment of the present invention;
Fig. 3 is a second flowchart of a cache elimination method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a cache elimination apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprising" and "having," and any variations thereof, as referred to in embodiments of the present invention, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
At present, common cache elimination algorithms include Least Recently Used (LRU), Least Frequently Used (LFU), First In First Out (FIFO), and the like, each with its own drawbacks. LRU evicts the data with the oldest timestamp, but its caching effect is poor when a large batch of data is accessed cyclically; LFU evicts cache data with a low access frequency, but is unfriendly to new data, and historical data with a high access count is hard to evict; FIFO cannot achieve a good caching effect for frequently accessed data when the cache space is small. These drawbacks degrade the caching effect to some extent and reduce the cache hit rate.
Therefore, existing cache elimination algorithms suffer from the technical problem of a low cache hit rate.
To solve the above problems, embodiments of the present invention provide a cache elimination method.
Example 1:
As shown in fig. 1, fig. 2 and fig. 3, a cache elimination method provided in an embodiment of the present invention is applied to an electronic device, where the memory of the electronic device includes a first cache linked list, a second cache linked list, a first history linked list and a second history linked list; the first cache linked list is used for caching recently accessed data that has been accessed once, the second cache linked list is used for caching recently accessed data that has been accessed more than once, the first history linked list is used for recording access records recently deleted from the first cache linked list, and the second history linked list is used for recording access records recently deleted from the second cache linked list.
The method comprises the following steps:
s1: and judging the position of the current access data, and judging the access frequency of the current access data according to the position of the current access data.
S2: and if the current access data appear in the first cache linked list, inserting the current access data into the header of the second cache linked list, and deleting the current access data from the first cache linked list. The current access data appears in the first cache linked list, which shows that the access data is the access data which has been accessed for the last time, and when the access data is accessed for the next time, the access data should enter the second cache linked list for caching the access data for multiple times.
S3: and if the current access data does not appear in the first cache linked list, judging whether the current access data appears in the second cache linked list. If the data is not present in the first cache linked list, the accessed data is not the new data which is accessed for the last time, and then whether the accessed data is the data which is accessed for the last high frequency is judged.
S4: and if the current access data appear in the second cache linked list, moving the current access data to the head of the second cache linked list. The current access data appears in the second cache linked list, namely the high-frequency data is hit and accessed again, which shows that the access frequency of the access data is higher, and then the access data is moved to the head of the second cache linked list, so that the hit rate of the access data is improved.
S5: and if the current access data does not appear in the second cache linked list, judging whether the current access data appears in the first history linked list. Whether the access preference is prone to accessing the new data for multiple times or not is judged by means of the data, and judgment reference is provided for the following dynamic adjustment of the lengths of the first cache linked list and the second cache linked list.
S6: and if the current access data does not appear in the first history linked list, judging whether the current access data appears in the second history linked list. Whether the access preference is prone to multiple accesses to the high-frequency access data or not can be judged according to the data, and judgment reference is provided for the following dynamic adjustment of the lengths of the first cache linked list and the second cache linked list.
S7: and if the current access data does not appear in the first cache linked list, the second cache linked list, the first history linked list and the second history linked list, inserting the current access data into the head of the first cache linked list. That is to say, the currently accessed data is the data accessed for the latest first time, so the table header of the first cache linked list is inserted, and a basis is provided for judging multiple accesses of new data.
By adopting the cache elimination method provided by the invention, according to the position of the current access data, whether the data is the data which is accessed for the latest first time or not can be judged, whether the data is the data which is accessed for the latest high frequency or not can also be judged, and the current access data can be inserted into the first cache linked list or the second cache linked list, so that the cache requirement of the high frequency data can be met, the requirement of accessing the new data for multiple times can also be considered, and the cache hit rate is improved.
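Putting steps S1 through S7 together with the list maintenance described in the further implementations, a runnable sketch of the whole scheme might look as follows. This is an illustration only: OrderedDict stands in for the linked lists (head first), the names t1/t2/b1/b2 and the target length p are our own reading of the text, and the scheme as sketched is close in spirit to the known ARC replacement policy rather than a definitive implementation of the patent:

```python
from collections import OrderedDict

class FourListCache:
    """Illustrative sketch of the four-linked-list scheme (S1-S7).

    t1/t2 model the first/second cache linked lists (most recent
    entry first); b1/b2 model the first/second history linked lists.
    len(t1) + len(t2) never exceeds a fixed capacity, and p is the
    adjustable target length of t1.
    """

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.p = capacity // 2            # target length of t1
        self.t1 = OrderedDict()           # accessed once recently
        self.t2 = OrderedDict()           # accessed more than once
        self.b1 = OrderedDict()           # records evicted from t1
        self.b2 = OrderedDict()           # records evicted from t2

    def access(self, key, value=None):
        if key in self.t1:                        # S2: promote to t2
            value = self.t1.pop(key)
            self._insert_head(self.t2, key, value)
        elif key in self.t2:                      # S4: refresh in t2
            value = self.t2.pop(key)
            self._insert_head(self.t2, key, value)
        elif key in self.b1:                      # history hit: grow t1
            self.p = min(self.capacity, self.p + 1)
            del self.b1[key]
            self._insert_head(self.t1, key, value)
        elif key in self.b2:                      # history hit: grow t2
            self.p = max(0, self.p - 1)
            del self.b2[key]
            self._insert_head(self.t2, key, value)
        else:                                     # S7: brand-new data
            self._insert_head(self.t1, key, value)
        return value

    def _insert_head(self, lst, key, value):
        lst[key] = value
        lst.move_to_end(key, last=False)          # head = most recent
        self._evict()

    def _evict(self):
        # keep len(t1) near its target p and the total within capacity
        while len(self.t1) + len(self.t2) > self.capacity:
            if len(self.t1) > self.p:
                k, _ = self.t1.popitem()            # delete tail of t1
                self.b1[k] = None
                self.b1.move_to_end(k, last=False)  # record at b1 head
            else:
                k, _ = self.t2.popitem()            # delete tail of t2
                self.b2[k] = None
                self.b2.move_to_end(k, last=False)  # record at b2 head
        for hist in (self.b1, self.b2):
            while len(hist) > self.capacity:        # history list full
                hist.popitem()                      # drop its tail record
```

In this sketch a hit in b1 enlarges p (more room for once-accessed data) and a hit in b2 shrinks it, which mirrors the dynamic length adjustment described in the following implementations.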
In a possible implementation, after the step of judging whether the current access data appears in the first history linked list when it does not appear in the second cache linked list, the method further includes: if the access record of the current access data appears in the first history linked list, increasing the length of the first cache linked list and reducing the length of the second cache linked list. The appearance of the current access data in the first history linked list shows that the data was recently accessed only once and should still have been cached in the first cache linked list, so that it could be read directly from the cache; however, it was deleted because the first cache linked list was full, so when accessed again it can only be fetched from the disk. This indicates that the current access preference tends toward multiple accesses to new data and that the length of the first cache linked list cannot satisfy the current caching demand for new data. The length of the first cache linked list is therefore increased, raising the caching capacity for new data and satisfying the current access preference. Because the memory capacity is a fixed value, the length of the second cache linked list must be reduced at the same time.
If the current access data does not appear in the first history linked list, whether it appears in the second history linked list is judged; if the access record of the current access data appears in the second history linked list, the length of the second cache linked list is increased and the length of the first cache linked list is reduced. The appearance of the current access data in the second history linked list shows that the data was recently accessed multiple times and should still have been cached in the second cache linked list; however, it was deleted because the second cache linked list was full, so when accessed again it can only be fetched from the disk. This indicates that the current access preference tends toward multiple accesses to high-frequency data and that the length of the second cache linked list cannot satisfy the current caching demand for high-frequency data. The length of the second cache linked list is therefore increased, raising the caching capacity for high-frequency data and satisfying the current access preference. Because the memory capacity is a fixed value, the length of the first cache linked list must be reduced at the same time.
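The two adjustments are symmetric and can be written as one helper (an illustrative sketch; `target` holds the target lengths of the two cache lists, and their sum stays equal to the fixed total):

```python
def adapt_lengths(hit_list, target, total):
    """Shift cache capacity toward the list whose history was hit.

    hit_list: "b1" if the first history list was hit, "b2" for the
    second; target: dict with the target lengths of t1 and t2.
    """
    if hit_list == "b1":          # preference: repeated access to new data
        target["t1"] = min(total, target["t1"] + 1)
        target["t2"] = total - target["t1"]
    elif hit_list == "b2":        # preference: high-frequency data
        target["t2"] = min(total, target["t2"] + 1)
        target["t1"] = total - target["t2"]
    return target
```

The invariant `target["t1"] + target["t2"] == total` reflects the fixed memory capacity stated above.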
On this basis, the method further includes:
When the first cache linked list is full, the access data at the tail of the list is deleted and its record is inserted at the head of the first history linked list. Deleting the tail data means deleting the data whose single access lies farthest in the past; inserting the deleted record into the first history linked list provides important basic data for judging access preference over different time periods. If the current access data later appears in the first history linked list, the current access preference is multiple accesses to new data, and the lengths of the cache linked lists are adjusted accordingly to improve the caching effect.
When the second cache linked list is full, the access data at the tail of the list is deleted and its record is inserted at the head of the second history linked list. Deleting the tail data means deleting the multiply accessed data farthest from the present; inserting the deleted record into the second history linked list likewise provides basic data for judging access preference over different time periods. If the current access data later appears in the second history linked list, the current access preference is multiple accesses to high-frequency data, and the lengths of the cache linked lists are adjusted accordingly to improve the caching effect.
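Both eviction rules follow the same pattern, sketched below (illustrative; plain Python lists with index 0 as the head stand in for the linked lists, and the function name is invented for this example):

```python
def evict_to_history(cache, history, cache_cap, hist_cap):
    """When a cache list is full, delete its tail entry and record it
    at the head of the matching history list; when the history list
    itself overflows, drop its tail record (the oldest one)."""
    while len(cache) > cache_cap:
        victim = cache.pop()          # tail: access farthest from now
        history.insert(0, victim)     # newest record goes to the head
    while len(history) > hist_cap:
        history.pop()                 # history full: drop tail record
```

The same helper serves both the first cache/history pair and the second, differing only in which lists and capacities are passed in.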
In a possible implementation, when the first history linked list or the second history linked list is full, its tail record is deleted. Deleting the tail record means deleting the history record farthest from the present; keeping the most recent records gives a more accurate reference for judging access preference.
In a possible implementation, the sum of the lengths of the first cache linked list and the second cache linked list is a fixed value, because the capacity of the memory is fixed.
In a possible implementation manner, if the access record of the currently accessed data appears in the first history record linked list, the method further includes, after the steps of increasing the length of the first cache linked list and decreasing the length of the second cache linked list: and inserting the current access data into the head of the first cache linked list, and deleting the record in the first history record linked list. After the current access data is inserted into the head of the first cache linked list, the data is cached, the data does not need to exist in the first history linked list, if the history is not deleted, the recordable space of the first history linked list can be occupied, the current access preference can be further influenced, if the cache data of the first cache linked list and the history in the first history linked list are the same data, when the access data is hit again, the position of the access data cannot be specifically judged, and further the operation of the whole elimination algorithm is influenced.
In a possible implementation, after the steps of increasing the length of the second cache linked list and decreasing the length of the first cache linked list, the method further includes: inserting the current access data at the head of the second cache linked list and deleting its record from the second history record linked list. Once the data has been inserted at the head of the second cache linked list it is cached, so its record need not remain in the second history record linked list. If the history record were not deleted, it would occupy recordable space in the second history record linked list and distort the judgment of current access preference; moreover, if the same data existed both in the second cache linked list and in the second history record linked list, its position could not be determined unambiguously on the next hit, which would disrupt the operation of the whole elimination algorithm.
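The length adjustment and history cleanup described in the two implementations above can be sketched together. This is a hedged illustration only: `OrderedDict` stands in for a linked list with its head at the front, and the `targets` bookkeeping is our assumption — the patent requires only that the two cache list lengths change in opposite directions by the same amount, keeping their sum fixed.

```python
from collections import OrderedDict

def handle_history_hit(key, value, cache, history, targets, which, step=1):
    """On a hit in a history list: grow that side's cache-length target
    (the other side shrinks by the same amount, keeping the sum fixed),
    cache the data at the head of the matching cache list, and delete
    the now-redundant history record so the same key never exists in
    both a cache list and its history list."""
    other = "second" if which == "first" else "first"
    targets[which] += step                   # this cache list's share grows...
    targets[other] -= step                   # ...the other's shrinks equally
    del history[key]                         # free recordable history space
    cache[key] = value
    cache.move_to_end(key, last=False)       # insert at the head of the cache list
```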
Example 2:
as shown in fig. 4, an embodiment of the present invention further provides a cache elimination apparatus applied to an electronic device, where a memory of the electronic device includes a first cache linked list, a second cache linked list, a first history linked list, and a second history linked list, where the first cache linked list is used to cache recently accessed data that is accessed once, the second cache linked list is used to cache recently accessed data that is accessed more than once, the first history linked list is used to record a recently deleted access record in the first cache linked list, and the second history linked list is used to record a recently deleted access record in the second cache linked list, and the apparatus includes:
and the judging module is used for judging the position of the current access data.
The execution module is used to execute the following: if the current access data appears in the first cache linked list, the current access data is inserted at the head of the second cache linked list and deleted from the first cache linked list; if the current access data does not appear in the first cache linked list, whether it appears in the second cache linked list is judged; if the current access data appears in the second cache linked list, it is moved to the head of the second cache linked list; if the current access data does not appear in the second cache linked list, whether it appears in the first history record linked list is judged; if the current access data does not appear in the first history record linked list, whether it appears in the second history record linked list is judged; and if the current access data does not appear in any of the first cache linked list, the second cache linked list, the first history record linked list and the second history record linked list, it is inserted at the head of the first cache linked list.
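The decision flow carried out by the judging and execution modules can be sketched end to end. This is a simplified Python sketch under stated assumptions: class, attribute and method names are illustrative, `OrderedDict` front stands in for the linked-list head, and eviction/trimming when the lists fill up is omitted so the lookup logic enumerated above stays readable.

```python
from collections import OrderedDict

class FourListCache:
    """Sketch of the four-list scheme: t1 caches recently accessed data seen
    once, t2 data seen more than once; b1/b2 record entries recently deleted
    from t1/t2."""

    def __init__(self, capacity):
        self.capacity = capacity              # fixed sum of the two cache lengths
        self.p = capacity // 2                # adjustable target length of t1
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def access(self, key, value=None):
        if key in self.t1:                    # seen a second time: promote to t2 head
            value = self.t1.pop(key)
            self._insert_head(self.t2, key, value)
        elif key in self.t2:                  # repeat hit: move to t2 head
            self.t2.move_to_end(key, last=False)
        elif key in self.b1:                  # history hit: t1's share grows
            self.p = min(self.capacity, self.p + 1)
            del self.b1[key]
            self._insert_head(self.t1, key, value)
        elif key in self.b2:                  # history hit: t2's share grows
            self.p = max(0, self.p - 1)
            del self.b2[key]
            self._insert_head(self.t2, key, value)
        else:                                 # complete miss: insert at t1 head
            self._insert_head(self.t1, key, value)

    @staticmethod
    def _insert_head(lst, key, value):
        lst[key] = value
        lst.move_to_end(key, last=False)
```

A production version would additionally evict tails into the history lists once `t1`/`t2` exceed their target lengths, as the embodiments above describe.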
Example 3:
the embodiment of the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor implements the steps of the method provided in embodiment 1 when executing the computer program.
Example 4:
embodiments of the present invention further provide a computer-readable storage medium in which machine-executable instructions are stored; when invoked and executed by a processor, the machine-executable instructions cause the processor to execute the method provided in embodiment 1.
The cache eviction apparatus, the electronic device, and the computer-readable storage medium provided in embodiments 2, 3, and 4 have the same technical features as the cache eviction method provided in the above embodiment, and can therefore solve the same technical problems and achieve the same technical effects.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
The apparatus provided by the embodiments of the present invention may be specific hardware on the device, or software or firmware installed on the device. The apparatus has the same implementation principle and technical effect as the method embodiments; for brevity, where the apparatus embodiments are silent on a point, reference may be made to the corresponding content in the method embodiments. Those skilled in the art will appreciate that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
For another example, the division of the unit is only one division of logical functions, and there may be other divisions in actual implementation, and for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions and not to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or substitute equivalents for some of the technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A cache elimination method is characterized in that the method is applied to electronic equipment, wherein a memory of the electronic equipment comprises a first cache linked list, a second cache linked list, a first history linked list and a second history linked list, the first cache linked list is used for caching recently accessed data which is accessed once, the second cache linked list is used for caching recently accessed data which is accessed more than once, the first history linked list is used for recording recently deleted access records in the first cache linked list, and the second history linked list is used for recording recently deleted access records in the second cache linked list, and the method comprises the following steps:
judging the position of the current access data;
if the current access data appears in the first cache linked list, inserting the current access data into the head of the second cache linked list, and deleting the current access data from the first cache linked list;
if the current access data does not appear in the first cache linked list, judging whether the current access data appears in the second cache linked list;
if the current access data appear in the second cache linked list, moving the current access data to the head of the second cache linked list;
if the current access data does not appear in the second cache linked list, judging whether the current access data appears in the first history linked list or not;
if the current access data does not appear in the first history linked list, judging whether the current access data appears in the second history linked list;
and if the current access data does not appear in the first cache linked list, the second cache linked list, the first history linked list and the second history linked list, inserting the current access data into the head of the first cache linked list.
2. The cache eviction method of claim 1, wherein after the step of determining whether the current access data is present in the first history list if the current access data is not present in the second cache list, the cache eviction method further comprises: if the access record of the current access data appears in the first history linked list, increasing the length of the first cache linked list and reducing the length of the second cache linked list;
if the current access data does not appear in the first history record linked list, judging whether the current access data appears in the second history record linked list, and if the access record of the current access data appears in the second history record linked list, increasing the length of the second cache linked list and reducing the length of the first cache linked list.
3. The cache eviction method of claim 2, further comprising:
when the first cache linked list is full, deleting the access data at the tail part of the first cache linked list, and inserting the access data record into the head of the first history record linked list;
and when the second cache linked list is full, deleting the access data at the tail part of the second cache linked list, and inserting the access data record into the head of the second history record linked list.
4. The cache eviction method of claim 1, wherein when a first history linked list or a second history linked list is full, a tail record of the first history linked list or the second history linked list is deleted.
5. The cache eviction method of claim 1, wherein a sum of the linked list lengths of the first cache linked list and the second cache linked list is a fixed value.
6. The cache eviction method of claim 2, wherein if the access record of the currently accessed data appears in the first history record linked list, after the steps of increasing the length of the first cache linked list and decreasing the length of the second cache linked list, the method further comprises:
and inserting the current access data into the head of the first cache linked list, and deleting the record in the first history record linked list.
7. The cache eviction method of claim 2, wherein if the access record of the currently accessed data appears in the second history record linked list, after the steps of increasing the length of the second cache linked list and decreasing the length of the first cache linked list, the method further comprises:
and inserting the current access data into the head of the second cache linked list, and deleting the record in the second history record linked list.
8. A cache eviction apparatus, applied to an electronic device, wherein a memory of the electronic device includes a first cache linked list, a second cache linked list, a first history record linked list and a second history record linked list, the first cache linked list is used to cache recently accessed data that is accessed once, the second cache linked list is used to cache recently accessed data that is accessed more than once, the first history record linked list is used to record access records recently deleted from the first cache linked list, and the second history record linked list is used to record access records recently deleted from the second cache linked list, the apparatus comprising:
the judging module is used for judging the position of the current access data;
the execution module is used for executing the following contents, if the current access data appear in the first cache linked list, the current access data are inserted into the header of the second cache linked list and deleted from the first cache linked list; if the current access data does not appear in the first cache linked list, judging whether the current access data appears in the second cache linked list; if the current access data appear in the second cache linked list, moving the current access data to the head of the second cache linked list; if the current access data does not appear in the second cache linked list, judging whether the current access data appears in the first history linked list or not; if the current access data does not appear in the first history linked list, judging whether the current access data appears in the second history linked list; and if the current access data does not appear in the first cache linked list, the second cache linked list, the first history linked list and the second history linked list, inserting the current access data into the head of the first cache linked list.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to execute the method of any of claims 1 to 7.
CN202011163795.XA 2020-10-27 2020-10-27 Cache elimination method and device and electronic equipment Active CN112306909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011163795.XA CN112306909B (en) 2020-10-27 2020-10-27 Cache elimination method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112306909A true CN112306909A (en) 2021-02-02
CN112306909B CN112306909B (en) 2022-08-05

Family

ID=74330362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011163795.XA Active CN112306909B (en) 2020-10-27 2020-10-27 Cache elimination method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112306909B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5249217A (en) * 1990-11-23 1993-09-28 Goldstar Telecommunications Co., Ltd. Both way recording method of a wireless telephone
CN110471939A (en) * 2019-07-11 2019-11-19 平安普惠企业管理有限公司 Data access method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
庄绪路 (Zhuang Xulu): "Research on caching technology in wide-area sensor databases" (广域传感器数据库中的缓存技术研究), 《计算机时代》 (Computer Era) *

Also Published As

Publication number Publication date
CN112306909B (en) 2022-08-05

Similar Documents

Publication Publication Date Title
US9235508B2 (en) Buffer management strategies for flash-based storage systems
US8601216B2 (en) Method and system for removing cache blocks
US9122607B1 (en) Hotspot detection and caching for storage devices
KR100577384B1 (en) Method for page replacement using information on page
US11113195B2 (en) Method, device and computer program product for cache-based index mapping and data access
CN111258967A (en) Data reading method and device in file system and computer readable storage medium
US10860497B2 (en) Method, apparatus, and system for caching data
US10853250B2 (en) Storage management method, electronic device and computer program product
CN112286459A (en) Data processing method, device, equipment and medium
CN111427804B (en) Method for reducing missing page interruption times, storage medium and intelligent terminal
CN113094392A (en) Data caching method and device
CN111708720A (en) Data caching method, device, equipment and medium
CN112306909B (en) Cache elimination method and device and electronic equipment
CN111913913A (en) Access request processing method and device
CN114116634B (en) Caching method and device and readable storage medium
US7631145B1 (en) Inter-frame texel cache
CN114461590A (en) Database file page prefetching method and device based on association rule
CN112650694B (en) Data reading method and device, cache proxy server and storage medium
CN114296635A (en) Cache elimination method and device of cache data, terminal and storage medium
US20170097899A1 (en) Non-transitory computer-readable storage medium, data management device, and data management method
CN105740167B (en) A kind of method and system that file system cache is deleted
WO2022156452A1 (en) Cache management method and apparatus, and device
CN110825652A (en) Method, device and equipment for eliminating cache data on disk block
CN113296710B (en) Cloud storage data reading method and device, electronic equipment and storage medium
CN110134509B (en) Data caching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant