CN111752868A - LRU cache implementation method and device, computer readable storage medium and equipment - Google Patents

LRU cache implementation method and device, computer readable storage medium and equipment

Info

Publication number: CN111752868A
Authority: CN (China)
Prior art keywords: tbb, container, cache, lru, cache data
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN201910239405.3A
Other languages: Chinese (zh)
Inventors: 鲁宝宏, 邓丹, 刘洋
Current Assignee: Beijing Wodong Tianjun Information Technology Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Beijing Wodong Tianjun Information Technology Co Ltd
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN201910239405.3A
Publication of CN111752868A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1032 Reliability improvement, data loss prevention, degraded operation etc.

Abstract

The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for implementing an LRU cache, and a computer-readable storage medium and an electronic device for implementing the method. The implementation method of the LRU cache includes: determining target cache data; in response to a cache instruction, performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container; and recording, based on a second TBB container, the access history of the LRU cache data in the first TBB container. This technical solution uses parallel containers to implement the LRU cache, which provide rich thread-safe interfaces, support the various operations on LRU cache data, and help improve LRU cache performance. Meanwhile, the LRU caching mechanism adopts a lock-free design, which improves the concurrency of caching applications and meets the needs of multi-threaded concurrent usage scenarios.

Description

LRU cache implementation method and device, computer readable storage medium and equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for implementing an LRU cache, and a computer-readable storage medium and an electronic device for implementing the method for implementing the LRU cache.
Background
Least Recently Used (LRU) caching refers to a caching mechanism based on the LRU algorithm, a data elimination algorithm frequently used in computer science. Its core idea is: if data has been accessed recently, the probability of future access is higher; if data has rarely been accessed, the probability of future access is lower. When the LRU cache capacity reaches the preset maximum value, the least recently used data object must be evicted from the LRU cache to keep the cache capacity within the preset range and thereby maintain the operational stability of the computer system.
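For illustration only, the following is a minimal single-threaded sketch of these LRU semantics (hypothetical names, not the implementation of the present disclosure): a doubly linked list keeps entries in recency order, a hash map provides O(1) lookup, and the least recently used entry is evicted once the preset capacity is reached.

```cpp
// Minimal single-threaded LRU cache sketch (hypothetical names, illustrative
// only). The list front holds the most recently used entry; once capacity is
// reached, the entry at the list back (least recently used) is evicted.
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

class SimpleLruCache {
public:
    explicit SimpleLruCache(std::size_t capacity) : capacity_(capacity) {}

    void Put(const std::string& key, const std::string& value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            it->second->second = value;             // update in place
            Touch(it->second);                      // mark as most recent
            return;
        }
        if (items_.size() == capacity_) {           // evict least recently used
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
    }

    std::optional<std::string> Get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;
        Touch(it->second);                          // a read refreshes recency
        return it->second->second;
    }

private:
    using Entry = std::pair<std::string, std::string>;
    void Touch(std::list<Entry>::iterator pos) {
        items_.splice(items_.begin(), items_, pos); // move to front in O(1)
    }
    std::size_t capacity_;
    std::list<Entry> items_;
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};
```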
The prior art provides a lock-free single-threaded LRU cache implementation, but it cannot support safe simultaneous reads and writes from multiple threads and therefore cannot serve multi-threaded application scenarios (such as advertisement background services). To address this, the prior art also provides an improved LRU cache implementation: a global lock is added on top of the lock-free single-threaded implementation, and any thread must acquire the global lock before performing a cache read or write operation and release it after the operation completes. In a multi-threaded concurrent usage scenario, this method thus meets the safety requirements of read and write operations.
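A hedged sketch of this prior-art global-lock approach is shown below, reusing the SimpleLruCache type from the previous sketch; every read and write serializes on a single std::mutex.

```cpp
// Prior-art style global-lock wrapper (illustrative sketch). All readers and
// writers contend for the same mutex, so only one thread touches the cache
// at any moment; other threads queue up waiting for the lock.
#include <mutex>
#include <optional>
#include <string>

class GlobalLockLruCache {
public:
    explicit GlobalLockLruCache(std::size_t capacity) : cache_(capacity) {}

    void Put(const std::string& key, const std::string& value) {
        std::lock_guard<std::mutex> guard(lock_);   // acquire global lock
        cache_.Put(key, value);
    }                                               // lock released on return

    std::optional<std::string> Get(const std::string& key) {
        std::lock_guard<std::mutex> guard(lock_);   // even reads serialize
        return cache_.Get(key);
    }

private:
    std::mutex lock_;
    SimpleLruCache cache_;  // the single-threaded sketch shown earlier
};
```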
However, the above LRU cache implementation method cannot satisfy usage scenarios with high concurrency requirements.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiments of the present disclosure provide a method for implementing an LRU cache, an apparatus for implementing an LRU cache, and a computer-readable storage medium and an electronic device for implementing the LRU cache, so as to improve, at least to a certain extent, the concurrency performance of LRU cache implementations and better serve usage scenarios with high concurrency requirements.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for implementing an LRU cache, including:
determining target cache data;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container;
recording the access history of the LRU cache data in the first TBB container based on the second TBB container.
In some embodiments of the present disclosure, based on the foregoing,
each LRU cache data stored in the first TBB container corresponds to a node object in a pre-allocation queue;
and a first access record queue is stored in the second TBB container and contains pointers of a plurality of node objects.
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a write instruction for writing the target cache data into the first TBB container, inserting the target cache data into the first TBB container, and adding the target cache data in a first node object in the pre-allocation queue through an atomic operation;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding the pointer of the first node object to the tail part of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to an update instruction for updating the first TBB container by using the target cache data, pointing a pointer of a second node object corresponding to LRU cache data to be updated to a hole memory;
adding the target cache data to a third node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated by using the target cache data;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding a pointer of the third node object to the tail part of the first access record queue in the second TBB container so as to record historical data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
responding to a reading instruction for reading the target cache data from the first TBB container, and pointing a pointer of a fourth node object corresponding to the target cache data to a hole memory;
adding the target cache data to a fifth node object in the pre-allocation queue through an atomic operation;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding a pointer of the fifth node object to the tail part of the first access record queue in the second TBB container so as to record historical data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a deletion instruction for deleting the target cache data from the first TBB container, pointing a pointer of a sixth node object corresponding to the target cache data to a hole memory;
deleting the target cache data from the first TBB container;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and keeping the pointer of the sixth node object unchanged at the position of the first access record queue in the second TBB container.
In some embodiments of the present disclosure, based on the foregoing, the method further comprises:
acquiring the current capacity value of the second TBB container, and judging whether the current capacity value of the second TBB container exceeds a second preset threshold;
if the current capacity value of the second TBB container exceeds the second preset threshold, then:
judging, in the direction from the head of the first access record queue to the tail of the first access record queue, whether the ith pointer of the first access record queue points to a hole memory;
if the ith pointer points to the hole memory, deleting the ith pointer to compress the first access record queue, and judging whether the (i+1)th pointer points to the hole memory;
if the (i+1)th pointer does not point to the hole memory, transferring the (i+1)th pointer to the tail of a second access record queue in the second TBB container, and judging whether the (i+2)th pointer points to the hole memory.
In some embodiments of the present disclosure, based on the foregoing,
determining target cache data, comprising:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to an update instruction for updating the first TBB container by using the target cache data, pointing a jth pointer to a hole memory;
adding the target cache data to a seventh node object in the pre-allocation queue through an atomic operation;
updating the LRU cache data to be updated by using the target cache data;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
adding the pointer of the seventh node object to the tail of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing,
determining target cache data, comprising:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a read instruction for reading the target cache data from the first TBB container, pointing the jth pointer to a hole memory;
adding the target cache data to an eighth node object in the pre-allocation queue through an atomic operation;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding the pointer of the eighth node object to the tail part of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing,
determining target cache data, comprising:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a delete instruction for deleting the target cache data from the first TBB container, pointing a jth pointer to a hole memory;
deleting the target cache data from the first TBB container;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and in the second TBB container, keeping the position of the jth pointer in the second access record queue unchanged.
In some embodiments of the present disclosure, based on the foregoing, the method further comprises:
acquiring a current capacity value of the first TBB container, and judging whether the current capacity value of the first TBB container exceeds a first preset threshold value or not;
if the current capacity value of the first TBB container exceeds a first preset threshold value, then:
traversing in the direction from the head of the second access record queue to the tail of the second access record queue, and judging whether the mth pointer of the second access record queue points to a hole memory;
if the mth pointer points to the hole memory, judging whether the (m+1)th pointer points to the hole memory;
if the (m+1)th pointer does not point to the hole memory, determining and deleting first LRU cache data to be deleted in the first TBB container, wherein the first LRU cache data to be deleted corresponds to the same node object as the (m+1)th pointer.
In some embodiments of the present disclosure, based on the foregoing, the method further comprises:
acquiring the current length value of the second access record queue, and judging whether the current length value of the second access record queue exceeds a third preset threshold value;
if the current length value of the second access record queue exceeds a third preset threshold, then:
and traversing the second access record queue, and deleting the pointers in the second access record queue that point to hole memories.
In some embodiments of the present disclosure, based on the foregoing, the method further comprises:
if none of the pointers in the second access record queue points to a hole memory, then:
sequentially deleting the nth pointer in the direction from the head of the second access record queue to the tail of the second access record queue until the current length value of the second access record queue is less than or equal to the third preset threshold, wherein n is a positive integer;
and determining and deleting second LRU cache data to be deleted in the first TBB container, wherein the second LRU cache data to be deleted corresponds to the same node object as the nth pointer.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for implementing an LRU cache, including:
the determining module is used for determining target cache data;
the processing module is used for responding to a cache instruction and carrying out cache processing corresponding to the cache instruction on the target cache data based on a first TBB container;
and the recording module is used for recording the access history of the LRU cache data in the first TBB container based on the second TBB container.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for implementing the LRU cache as described in the first aspect of the embodiments above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for implementing LRU cache as described in the first aspect of the embodiments above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
On the one hand, the LRU caching mechanism is implemented based on TBB parallel containers. The first TBB container stores the LRU cache data, and the second TBB container records the access history of the LRU cache data in the first TBB container. The two parallel containers provide rich thread-safe interfaces that support the various operations on LRU cache data (such as read, delete, and write operations), which helps improve LRU cache performance and meets the needs of multi-threaded concurrent usage scenarios.
On the other hand, compared with the existing multi-threaded LRU cache techniques, the LRU caching mechanism provided by this solution adopts a lock-free design, avoiding the lock-induced problem that only one thread can read or write the cache at a time while the read and write requests of other threads queue up waiting for the lock. This improves the concurrency of caching applications and further meets the needs of multi-threaded concurrent usage scenarios.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 shows a flow diagram of a method of implementing an LRU cache according to an embodiment of the present disclosure;
FIG. 2 shows a schematic structural diagram of an LRU caching mechanism according to an embodiment of the present disclosure;
FIG. 3 shows a flow diagram of a method for implementing an LRU cache according to another embodiment of the present disclosure;
FIG. 4 shows a flow diagram for writing LRU cache data according to an embodiment of the present invention;
FIG. 5 shows a flow diagram of a method for implementing an LRU cache according to yet another embodiment of the present disclosure;
FIG. 6 is a schematic flow chart illustrating updating LRU cache data according to an embodiment of the present invention;
FIG. 7 shows a flow diagram of a method for implementing an LRU cache according to yet another embodiment of the present disclosure;
FIG. 8 is a schematic flow chart illustrating reading LRU cache data according to an embodiment of the present invention;
FIG. 9 is a flow chart diagram illustrating a method for implementing an LRU cache according to an embodiment of the present disclosure;
FIG. 10 is a schematic flow diagram illustrating deletion of LRU cache data according to an embodiment of the present invention;
FIG. 11 shows a flow diagram of a method for implementing an LRU cache according to another embodiment of the present disclosure;
FIG. 12 illustrates a flow diagram for monitoring capacity values, according to an embodiment of the invention;
FIG. 13 shows a flow diagram of a method for implementing an LRU cache according to yet another embodiment of the present disclosure;
FIG. 14 shows a flow diagram of a method for implementing an LRU cache according to yet another embodiment of the present disclosure;
FIG. 15 is a flow chart diagram illustrating a method for implementing an LRU cache according to an embodiment of the present disclosure;
fig. 16 shows a flow diagram of a method of implementing an LRU cache according to an embodiment of the present disclosure;
fig. 17 shows a flow diagram of a method of implementing an LRU cache according to another embodiment of the present disclosure;
fig. 18 is a schematic structural diagram illustrating an apparatus for implementing an LRU cache according to an embodiment of the present disclosure;
fig. 19 schematically illustrates a computer-readable storage medium for implementing the above-described LRU cache implementation method; and
fig. 20 schematically illustrates an example block diagram of an electronic device for implementing the LRU cache implementation method described above.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
For multi-threaded LRU caches, the existing related art uses locking to guarantee thread safety. This approach typically adds a global lock to a lock-free single-threaded implementation, where the function of the global lock is to serialize the concurrent accesses of multiple threads. Specifically, any thread that wants to read or write the cache must first acquire the global lock and then release it after completing the read/write operation. Due to the nature of the lock, only one thread can read or write the cache at a time, and the read and write requests of other threads must queue for the lock. This caching method therefore reduces application concurrency and cannot satisfy scenarios with higher performance and concurrency requirements.
Meanwhile, the existing related art also provides lock-free multi-threaded LRU cache implementations for specific scenarios. For example, leveldb, which targets write-heavy and read-light workloads, implements an LRU cache that supports multi-thread-safe access. However, it internally locks each hash bucket and restricts the type of the key (index) of the cached data, which greatly sacrifices general applicability. In an actual application scenario, integrating this code requires secondary modification, which incurs a high cost of use.
Another example is: a tool Thread building module (TBB for short) project developed by Intel corporation and developed by parallel programming is also internally provided with an LRU cache implementation supporting multithreading safety. However, the interface provided by its LRU cache is too single, and its application program cannot control the total cache capacity, and cannot actively delete and eliminate the cache. Therefore, the amount of cache data continuously increases, and further, the memory continuously increases, and further, the practicability is poor.
Fig. 1 illustrates a flow chart of an implementation method of an LRU cache according to an embodiment of the present disclosure, which at least partially overcomes the above-mentioned problems of the implementation method of the LRU cache provided by the related art.
The execution subject of the implementation method of the LRU cache provided by this embodiment may be a device having a calculation processing function, such as a server.
Referring to fig. 1, the embodiment provides a method for implementing an LRU cache, including:
step S101, determining target cache data;
step S102, in response to a cache instruction, performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container; and
and step S103, recording the access history of the LRU cache data in the first TBB container based on the second TBB container.
In the solution provided by the embodiment shown in fig. 1, on the one hand, the LRU caching mechanism is implemented based on TBB parallel containers: the first TBB container stores the LRU cache data, and the second TBB container records the access history of the LRU cache data in the first TBB container. The two parallel containers provide rich thread-safe interfaces that support the various operations on LRU cache data (such as read, delete, and write operations), which helps improve LRU cache performance and meets the needs of multi-threaded concurrent usage scenarios. On the other hand, compared with the existing multi-threaded LRU cache techniques, the LRU caching mechanism provided by this solution adopts a lock-free design, avoiding the lock-induced problem that only one thread can read or write the cache at a time while the read and write requests of other threads queue up waiting for the lock. This improves the concurrency of caching applications and further meets the needs of multi-threaded concurrent usage scenarios.
Fig. 2 shows a schematic structural diagram of an LRU caching mechanism according to an embodiment of the present disclosure.
Referring to FIG. 2, in an exemplary embodiment, an LRU caching mechanism includes: the first TBB container 21, the second TBB container 22, and a pre-allocation queue 23.
In an exemplary embodiment, tbb::concurrent_hash_map is used as the storage container of the Cache, i.e., the first TBB container 21, and tbb::concurrent_bounded_queue is used as the container recording the access history of the Cache, i.e., the second TBB container 22. The two TBB parallel containers provide rich thread-safe interfaces and can satisfy the various operations the Cache requires in its usage scenarios. Meanwhile, based on the Cache access history recorded in the second TBB container 22, the second TBB container 22 can be effectively compressed and LRU K-V Cache entries can be evicted, so that the capacity values of both TBB containers stay under control and cache data services can be provided externally in a stable manner over the long term.
In an exemplary embodiment, referring again to FIG. 2, the LRU K-V Cache in the first TBB container 21 and the second TBB container 22 simultaneously hold pointers to Node objects in the pre-allocation queue. Each item of LRU cache data stored in the first TBB container 21 corresponds to a node object in the pre-allocation queue 23, and the second TBB container 22 stores a first access record queue (e.g., called LRU Queue) containing pointers to a plurality of these node objects. Thread safety when multiple threads operate on the same Node is guaranteed by an atomic variable (std::atomic), so the external interfaces of the LRU K-V Cache layer are likewise multi-thread safe.
In an exemplary embodiment, the pre-allocation queue 23 recycles memory to reduce the overhead caused by frequently allocating and releasing memory, as in the sketch below.
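The following is a sketch of how the structure of fig. 2 could be declared with TBB's public containers; the names Node, CacheMap, HistoryQueue, and NodePool are hypothetical illustrations under the assumption that the K-V map stores pointers to Node objects in the pre-allocation queue, not the patent's actual code.

```cpp
// Illustrative declarations for the structure of fig. 2 (hypothetical names).
#include <atomic>
#include <cstddef>
#include <string>
#include <tbb/concurrent_hash_map.h>
#include <tbb/concurrent_queue.h>

struct Node {
    std::string key;
    std::string value;
    std::atomic<bool> ready{false};  // std::atomic flag: false marks a "hole"
};

// First TBB container: the LRU K-V Cache itself.
using CacheMap = tbb::concurrent_hash_map<std::string, Node*>;
// Second TBB container: access-record queues of Node pointers (LRU Queue).
using HistoryQueue = tbb::concurrent_bounded_queue<Node*>;

// Pre-allocation queue: Node objects are allocated once and recycled, which
// avoids the overhead of frequently allocating and releasing memory.
class NodePool {
public:
    explicit NodePool(std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) free_.push(new Node());
    }
    Node* Acquire() {
        Node* n = nullptr;
        free_.pop(n);                 // blocks until a recycled Node is free
        return n;
    }
    void Release(Node* n) {
        n->ready.store(false);
        free_.push(n);
    }
private:
    tbb::concurrent_bounded_queue<Node*> free_;
};
```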
Fig. 3 is a flow chart illustrating a method for implementing an LRU cache according to another embodiment of the present disclosure. In this embodiment, a method for implementing the LRU cache when the cache instruction is the write operation instruction is specifically described, and may be specifically implemented by a Put interface of an LRU cache mechanism.
Referring to fig. 3, the method for implementing the LRU cache provided in this embodiment includes steps S301 to S304.
In step S301, target cache data is determined.
In an exemplary embodiment, when the cache instruction is a write operation instruction, the target cache data may be data to be written into the first TBB container.
In an exemplary embodiment, when the Put interface executes the write operation instruction, it first checks whether the target cache data already exists in the LRU K-V Cache; if not, the data is added to the LRU K-V Cache (i.e., the implementation method of the LRU cache described in this embodiment). If the LRU K-V Cache already contains the target cache data, the LRU cache data to be updated in the LRU K-V Cache is updated instead (i.e., the implementation method introduced in the embodiment shown in fig. 5).
In an exemplary embodiment, step S302 and step S303 are a specific implementation manner of step S102 in the embodiment shown in fig. 1, and step S304 is a specific implementation manner of step S103 in the embodiment shown in fig. 1. Specifically, the method comprises the following steps:
in step S302, in response to a write instruction to write the target cache data into the first TBB container, inserting the target cache data into the first TBB container;
in step S303, adding the target cache data to the first node object in the pre-allocation queue through an atomic operation; and
in step S304, a pointer of the first node object is added to the tail of the first access record queue in the second TBB container to record history data of accessing the LRU cache data.
In an exemplary embodiment, fig. 4 shows a flow diagram for writing LRU cache data according to an embodiment of the invention. Referring to FIG. 4, before the write operation, the LRU K-V Cache in the first TBB container 41 currently contains key1, key2, and key3. The target cache data key4 in step S301 is the data to be added to the LRU K-V Cache.
A specific embodiment of the above steps S302 to S304 is explained below with reference to fig. 4.
Referring to fig. 4, first, the specific implementation of step S302: in the first TBB container 41, the target cache data key4 is inserted into the LRU K-V Cache (step ① in fig. 4), and the returned accessor guarantees the thread safety of this insertion, i.e., no other thread can operate on key4 while it is being inserted.
Next, the specific implementation of step S303: a Node object (i.e., the first node object) and its address p4 are obtained from the pre-allocation queue 43, and the first node object pointed to by p4 is filled with the target cache data key4 (step ② in fig. 4). Illustratively, the atomic variable std::atomic<bool> ready guarantees that the fill operation is thread safe, i.e., no other thread can act on key4 while key4 is being filled into the first node object pointed to by p4. The Node pointer in the accessor can then be updated.
Further, the specific implementation of step S304: in the second TBB container 42, the pointer p4 of the same first node object is inserted at the tail of the LRU Queue (step ③ in fig. 4). At this point, the Put add-cache operation is complete. The tail of the LRU Queue holds the most recently accessed data, and the head of the LRU Queue holds the oldest accessed data.
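Under the hypothetical declarations sketched earlier, the Put insert path might read as follows; the accessor returned by tbb::concurrent_hash_map::insert provides the per-entry protection described in step ①.

```cpp
// Sketch of the Put insert path (steps ①-③ above), assuming the earlier
// hypothetical Node/CacheMap/HistoryQueue/NodePool declarations.
void PutNew(CacheMap& cache, HistoryQueue& lru_queue, NodePool& pool,
            const std::string& key, const std::string& value) {
    CacheMap::accessor acc;           // accessor locks this entry only
    if (cache.insert(acc, key)) {     // step ①: insert the key into the map;
                                      // no other thread can touch it yet
        Node* node = pool.Acquire();  // step ②: take a pre-allocated Node
        node->key = key;
        node->value = value;
        node->ready.store(true);      // atomic flag publishes the filled Node
        acc->second = node;           // update the Node pointer in the accessor
        lru_queue.push(node);         // step ③: record the access at the tail
    }
}
```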
Fig. 5 is a flow chart illustrating a method for implementing an LRU cache according to yet another embodiment of the present disclosure. In this embodiment, a method for implementing the LRU cache when the cache instruction is the update operation instruction is specifically described, and may be implemented by a Put interface of an LRU cache mechanism.
Referring to fig. 5, the method for implementing the LRU cache provided by this embodiment includes steps S501 to S504.
In step S501, target cache data is determined.
In an exemplary embodiment, when the cache instruction is an update operation instruction, the target cache data may be data to be written into the first TBB container.
In an exemplary embodiment, when the Put interface executes the update operation instruction, it first checks whether the target cache data already exists in the LRU K-V Cache; if so, the LRU cache data to be updated in the LRU K-V Cache is updated.
In an exemplary embodiment, step S502 and step S503 are a specific implementation manner of step S102 in the embodiment shown in fig. 1, and step S504 is a specific implementation manner of step S103 in the embodiment shown in fig. 1. Specifically, the method comprises the following steps:
in step S502, in response to an update instruction for updating the first TBB container using the target cache data, a pointer of a second node object corresponding to LRU cache data to be updated is pointed to a hole memory;
in step S503, adding the target cache data to a third node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated by using the target cache data; and
in step S504, a pointer of the third node object is added to the tail of the first access record queue in the second TBB container to record history data of accessing the LRU cache data.
In an exemplary embodiment, fig. 6 shows a flow chart of updating LRU cache data according to an embodiment of the invention. Referring to fig. 6, before the update operation, the LRU K-V Cache in the first TBB container 61 currently contains key1, key2, key3, and key4. The target cache data key2 in step S501 updates the original key2 in the LRU K-V Cache.
A specific embodiment of the above steps S502 to S504 is explained below with reference to fig. 6.
Referring to fig. 6, first, the specific implementation of step S502: the LRU K-V Cache in the first TBB container 61 is searched and key2 (the LRU cache data to be updated) is found to already exist, so the second node object corresponding to it is set as a hole memory through the returned accessor (step ① in fig. 6).
Next, the specific implementation of step S503: a new Node object (the third node object) and its address p2New are obtained from the pre-allocation queue 63, and the third node object pointed to by p2New is filled with the target cache data key2 (step ② in fig. 6). Illustratively, the atomic variable std::atomic<bool> ready guarantees that the fill operation is thread safe, i.e., no other thread can act on key2 while key2 is being filled into the third node object pointed to by p2New. The Node pointer of the accessor is then updated (p2 to p2New), as is the value (value2 to value2New).
Further, the specific implementation of step S504: in the second TBB container 62, the same pointer p2New is inserted at the tail of the LRU Queue (step ③ in fig. 6). As can be seen in FIG. 6, after the Put update operation the tail of the LRU Queue still holds the most recently accessed data, and the previous pointer p2 no longer appears in any accessor object of the LRU K-V Cache.
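Continuing the same hypothetical sketch, a Put update could proceed as below, with the hole memory modeled by clearing the old Node's ready flag before a freshly filled Node is appended at the queue tail.

```cpp
// Sketch of the Put update path (steps ①-③ above); hypothetical names, with
// the "hole memory" modeled as a Node whose ready flag is cleared.
void PutUpdate(CacheMap& cache, HistoryQueue& lru_queue, NodePool& pool,
               const std::string& key, const std::string& new_value) {
    CacheMap::accessor acc;
    if (cache.find(acc, key)) {           // the key is already cached
        acc->second->ready.store(false);  // step ①: old Node p2 becomes a hole
        Node* fresh = pool.Acquire();     // step ②: fill new Node p2New
        fresh->key = key;
        fresh->value = new_value;
        fresh->ready.store(true);
        acc->second = fresh;              // accessor now holds p2New
        lru_queue.push(fresh);            // step ③: tail records the access;
                                          // the holed Node is reclaimed later
    }
}
```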
Fig. 7 is a flow chart illustrating a method for implementing an LRU cache according to another embodiment of the present disclosure. In this embodiment, a method for implementing the LRU cache when the cache instruction is the read operation instruction is specifically described, and may be implemented by a Get interface of an LRU cache mechanism.
Referring to fig. 7, the method for implementing the LRU cache provided in this embodiment includes steps S701 to S704.
In step S701, target cache data is determined.
In an exemplary embodiment, when the cache instruction is a read instruction, the target cache data may be hit data in the first TBB container.
In an exemplary embodiment, step S702 and step S703 are a specific implementation manner of step S102 in the embodiment shown in fig. 1, and step S704 is a specific implementation manner of step S103 in the embodiment shown in fig. 1. Specifically, the method comprises the following steps:
in step S702, in response to a read instruction for reading the target cache data from the first TBB container, pointing a pointer of a fourth node object corresponding to the target cache data to a hole memory;
in step S703, adding the target cache data to a fifth node object in the pre-allocation queue through an atomic operation; and
in step S704, a pointer of the fifth node object is added to the tail of the first access record queue in the second TBB container to record history data of accessing the LRU cache data.
In an exemplary embodiment, fig. 8 shows a flow diagram of reading LRU cache data according to an embodiment of the invention. Referring to FIG. 8, the LRU K-V Cache in the first TBB container 81 currently contains key1, key2, and key3. The target cache data key2 in step S701 is the key2 to be read from the first TBB container 81.
A specific embodiment of the above steps S702 to S704 is explained below with reference to fig. 8.
Referring to fig. 8, first, the specific implementation of step S702: when key2 (the LRU cache data to be read) is found in the LRU K-V Cache of the first TBB container 81, the fourth node object corresponding to the returned accessor is set as a hole memory (step ① in fig. 8).
Next, the specific implementation of step S703: a new Node object (the fifth node object) and its address p2New are obtained from the pre-allocation queue 83, and the fifth node object pointed to by p2New is filled with the target cache data key2 (step ② in fig. 8). Illustratively, the atomic variable std::atomic<bool> ready guarantees that the fill operation is thread safe, i.e., no other thread can act on key2 while key2 is being filled into the fifth node object pointed to by p2New. The Node pointer of the accessor is then updated (p2 changed to p2New).
Further, the specific implementation of step S704: in the second TBB container 82, the same pointer p2New is inserted at the tail of the LRU Queue (step ③ in fig. 8). As can be seen in FIG. 8, the Get read operation leaves the tail of the LRU Queue holding the most recently accessed data.
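In the same hypothetical sketch, a Get reads the value and refreshes recency with the same hole-and-append pattern:

```cpp
// Sketch of the Get path (steps ①-③ above); hypothetical names as before.
bool GetValue(CacheMap& cache, HistoryQueue& lru_queue, NodePool& pool,
              const std::string& key, std::string* out_value) {
    CacheMap::accessor acc;
    if (!cache.find(acc, key)) return false;  // cache miss
    *out_value = acc->second->value;
    acc->second->ready.store(false);      // step ①: old Node becomes a hole
    Node* fresh = pool.Acquire();         // step ②: refill a new Node
    fresh->key = key;
    fresh->value = *out_value;
    fresh->ready.store(true);
    acc->second = fresh;
    lru_queue.push(fresh);                // step ③: tail = most recent access
    return true;
}
```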
Fig. 9 is a flowchart illustrating a method for implementing an LRU cache according to an embodiment of the present disclosure. In this embodiment, a method for implementing the LRU cache when the cache instruction is the delete instruction is specifically described, and may be implemented by a Remove interface of an LRU cache mechanism.
Referring to fig. 9, the method for implementing the LRU cache provided in this embodiment includes steps S901 to S904.
In step S901, target cache data is determined.
In an exemplary embodiment, when the cache instruction is a delete instruction, the target cache data may be data to be deleted in the first TBB container.
In an exemplary embodiment, step S902 and step S903 are a specific implementation manner of step S102 in the embodiment shown in fig. 1, and step S904 is a specific implementation manner of step S103 in the embodiment shown in fig. 1. Specifically, the method comprises the following steps:
in step S902, in response to a delete instruction for deleting the target cache data from the first TBB container, pointing a pointer of a sixth node object corresponding to the target cache data to a hole memory;
in step S903, deleting the target cache data from the first TBB container; and
in step S904, the pointer of the sixth node object is kept unchanged in the second TBB container at the position of the first access record queue.
In an exemplary embodiment, fig. 10 shows a flowchart of deleting LRU cache data according to an embodiment of the present invention. Referring to fig. 10, the LRU K-V Cache in the first TBB container 101 currently contains key1, key2, and key3. The target cache data key2 in step S901 is the key2 to be deleted from the first TBB container 101.
A specific embodiment of the above steps S902 to S904 is explained below with reference to fig. 10.
Referring to fig. 10, the specific implementation of step S902: when key2 (the LRU cache data to be deleted) is found in the LRU K-V Cache of the first TBB container 101, the sixth node object corresponding to key2 is set as a hole memory (step ① in fig. 10).
The specific implementation of step S903: the accessor entry corresponding to key2 is removed from the LRU K-V Cache (step ② in fig. 10).
The specific implementation of step S904: in the second TBB container 102, the pointer p2 of the sixth node object is kept unchanged at its position in the first access record queue LRU Queue (step ③ in fig. 10).
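A corresponding Remove sketch under the same hypothetical declarations: the map entry disappears, while the holed pointer deliberately stays in the LRU Queue until a later compaction pass.

```cpp
// Sketch of the Remove path (steps ①-③ above); hypothetical names as before.
void RemoveKey(CacheMap& cache, const std::string& key) {
    CacheMap::accessor acc;
    if (cache.find(acc, key)) {
        acc->second->ready.store(false);  // step ①: Node becomes a hole
        cache.erase(acc);                 // step ②: drop the entry from the map
    }
    // step ③: the pointer in the LRU Queue is intentionally left in place;
    // the compaction pass described later removes such holes.
}
```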
Fig. 11 is a flow chart illustrating a method for implementing an LRU cache according to another embodiment of the present disclosure. In this embodiment, a method for monitoring the capacity value of the second TBB container is specifically described, which may be implemented by a RemoveOverflowed interface of the LRU caching mechanism.
It should be noted that, in general, the number of elements in the first access record queue LRU Queue is large, and performing capacity control by traversing all of them would consume significant system overhead. In this embodiment, pointers to hole memories are deleted in sequence in the direction from the head of the first access record queue to its tail, and once the capacity value of the second TBB container is back within the preset range, the deletion of hole pointers is suspended. Similarly, when the capacity value of the second TBB container is again detected to be outside the preset range, pointers to hole memories are deleted in sequence starting again from the head of the LRU Queue. That is, this embodiment keeps the current capacity value of the second TBB container within the preset range by acquiring it periodically.
Referring to fig. 11, the method for implementing the LRU cache provided in this embodiment includes steps S111 to S116.
In step S111, the current capacity value of the second TBB container is acquired.
In step S112, it is determined whether the current capacity value of the second TBB container exceeds a second preset threshold.
In an exemplary embodiment, if the current capacity value of the second TBB container does not exceed the second preset threshold, step S111 is periodically executed.
In an exemplary embodiment, if the current capacity value of the second TBB container exceeds the second preset threshold, steps S113 to S116 are performed.
In step S113, it is determined whether the ith pointer of the first access record queue points to a hole memory.
In an exemplary embodiment, in the direction from the head of the first access record queue to its tail, it is determined whether the ith pointer of the first access record queue points to a hole memory. If it does, the hole may have been produced by an operation of the Put, Get, or Remove interface in the above embodiments. If it does not, the ith pointer still appears in some accessor object of the LRU K-V Cache in the first TBB container 121.
In an exemplary embodiment, if the ith pointer of the first access record queue points to the hole memory, step S114 and step S115 are sequentially performed. If the ith pointer of the first access record queue does not point to the hole memory, step S116 and step S115 are sequentially executed.
In step S114, the ith pointer is deleted to compress the first access record queue. After i is assigned i+1 in step S115, step S113 is performed again, that is, it is determined whether the (i+1)th pointer (the pointer following the ith pointer) points to a hole memory. Referring to fig. 12: in the first access record queue LRU Queue, each pointer is examined in sequence from head to tail; since Node_p4-1 points to a hole memory, Node_p4-1 is deleted to compress the first access record queue, and the next pointer is examined.
Further, in step S116, the ith pointer is transferred to the tail of the second access record queue in the second TBB container. After i is assigned i+1 in step S115, step S113 is performed again, that is, it is determined whether the (i+2)th pointer (the pointer following the (i+1)th pointer) points to a hole memory. Referring to fig. 12: since the next pointer Node_p1 after Node_p4-1 does not point to a hole memory, Node_p1 is transferred to the tail of the second access record queue LRU Queue2 in the second TBB container 122.
For example, referring to fig. 12, continuing in the same direction, the pointers Node_p4-2, Node_p2-1, and Node_p3-1 behind Node_p1 all point to hole memories, so they are deleted to compress the first access record queue according to step S114.
For another example, referring to fig. 12, continuing in the same direction, none of the pointers Node_p2, Node_p3, and Node_p4 behind Node_p3-1 points to a hole memory, so the pointers Node_p2, Node_p3, and Node_p4 are transferred to the tail of the second access record queue in the second TBB container according to step S116.
It should be noted that when the ith pointer of the first access record queue does not point to a hole memory, the Cache object corresponding to that pointer has simply not been accessed for a long time. To maximize the hit rate of the Cache, the technical solution of this embodiment extends the life cycle of such cache entries rather than removing them from the first TBB container immediately.
Meanwhile, when the current capacity value of the second TBB container exceeds the second preset threshold, the hole memories that may follow the ith pointer (a pointer that does not point to a hole memory) still need to be cleaned up in order to compress the length of the LRU Queue.
At this time, a pointer in the first access record queue that does not point to a hole memory acts like a dam blocking the road. To solve this problem, this embodiment introduces at least two levels of record queues into the second TBB container; referring to fig. 12, these are the first access record queue LRU Queue and the second access record queue LRU Queue2. When a pointer in the first access record queue that does not point to a hole memory is encountered, it is stored, in its original order, into the second access record queue LRU Queue2. Through the cooperation of the two levels of LRU queues, the technical solution of this embodiment compresses the LRU Queue, extends the life cycle of LRU cache data to the maximum extent, and to a certain degree improves the hit probability of non-hot data, as sketched below.
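A hedged sketch of this two-level compaction (one possible reading of the RemoveOverflowed pass, with the same hypothetical names and holes modeled by a cleared ready flag):

```cpp
// Sketch of compressing LRU Queue with a second-level LRU Queue2. Holes at
// the head are dropped; live "roadblock" pointers are parked, in original
// order, at the tail of LRU Queue2 instead of being evicted immediately.
#include <cstddef>

void CompactHistory(HistoryQueue& lru_queue, HistoryQueue& lru_queue2,
                    NodePool& pool, std::size_t second_threshold) {
    while (static_cast<std::size_t>(lru_queue.size()) > second_threshold) {
        Node* node = nullptr;
        if (!lru_queue.try_pop(node)) break;  // queue drained concurrently
        if (!node->ready.load()) {
            pool.Release(node);   // hole: deleting it compresses LRU Queue
            continue;
        }
        lru_queue2.push(node);    // live pointer: extend its life cycle
    }
}
```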
In an exemplary embodiment, when the ith pointer of the first access record queue does not point to a hole memory, the pointer is saved into the second access record queue LRU Queue2. Several cache processing methods for such cached entries of the first TBB container are explained below through the embodiments shown in fig. 13 to 15, respectively.
Fig. 13 is a flow chart illustrating a method for implementing an LRU cache according to yet another embodiment of the present disclosure. In this embodiment, the method for updating the LRU cache data with a low hit rate in the first TBB container may be specifically implemented by a Put interface of an LRU cache mechanism.
Referring to fig. 13, the method for implementing the LRU cache provided in this embodiment includes steps S131 to S134.
In step S131, it is determined that the target cache data is data corresponding to the jth pointer in the second access record queue.
In an exemplary embodiment, when the cache instruction is an update operation instruction, the target cache data may be data to be written into the first TBB container, and the target cache data is the same as data corresponding to the jth pointer in the second access record queue.
In an exemplary embodiment, when the Put interface executes the update operation instruction, it first checks whether the target cache data already exists in the LRU K-V Cache; if so, the LRU cache data to be updated in the LRU K-V Cache is updated.
In step S132, in response to an update instruction for updating the first TBB container with the target cache data, a jth pointer is pointed to a hole memory.
In an exemplary embodiment, the specific implementation of step S132 is the same as the specific implementation of step S502 in fig. 5, and is not described again here.
In step S133, the target cache data is added to the seventh node object in the pre-allocation queue by an atomic operation, and the LRU cache data to be updated is updated using the target cache data.
In an exemplary embodiment, the specific implementation of step S133 is the same as the specific implementation of step S503 in fig. 5, and is not described herein again.
In step S134, a pointer of the seventh node object is added to the tail of the first access record queue in the second TBB container to record history data of accessing the LRU cache data.
In an exemplary embodiment, the specific implementation of step S134 is the same as the specific implementation of step S504 in fig. 5, and is not described again here.
Fig. 14 is a flow chart illustrating a method for implementing an LRU cache according to another embodiment of the present disclosure. In this embodiment, the method for reading the LRU cache data with a low hit rate in the first TBB container may be specifically implemented by a Get interface of an LRU cache mechanism.
Referring to fig. 14, the method for implementing the LRU cache provided in this embodiment includes steps S141 to S144.
In step S141, it is determined that the target cache data is data corresponding to the jth pointer in the second access record queue.
In an exemplary embodiment, when the cache instruction is a read instruction, the target cache data may be hit data in the first TBB container, and the target cache data is the same as data corresponding to the jth pointer in the second access record queue.
In step S142, in response to a read instruction for reading the target cache data from the first TBB container, the jth pointer is pointed to a hole memory.
In an exemplary embodiment, the specific implementation of step S142 is the same as the specific implementation of step S702 in fig. 7, and is not described herein again.
In step S143, the target cache data is added to the eighth node object in the pre-allocation queue by an atomic operation.
In an exemplary embodiment, the specific implementation of step S143 is the same as the specific implementation of step S703 in fig. 7, and is not described herein again.
In step S144, a pointer of the eighth node object is added to the tail of the first access record queue in the second TBB container to record history data of accessing the LRU cache data.
In an exemplary embodiment, the specific implementation of step S144 is the same as the specific implementation of step S704 in fig. 7, and is not described herein again.
Fig. 15 is a flowchart illustrating an implementation method of an LRU cache according to an embodiment of the present disclosure. In this embodiment, the method for deleting LRU cache data with a low hit rate in the first TBB container may be specifically implemented by a Remove interface of an LRU cache mechanism.
Referring to fig. 15, the method for implementing the LRU cache provided in this embodiment includes steps S151 to S154.
In step S151, it is determined that the target cache data is data corresponding to the jth pointer in the second access record queue.
In an exemplary embodiment, when the cache instruction is a delete instruction, the target cache data may be data to be deleted in the first TBB container, and the target cache data is the same as data corresponding to the jth pointer in the second access record queue.
In step S152, in response to a delete instruction for deleting the target cache data from the first TBB container, the jth pointer is pointed to the hole memory.
In an exemplary embodiment, the specific implementation of step S152 is the same as the specific implementation of step S902 in fig. 9, and is not described herein again.
In step S153, the target cache data is deleted from the first TBB container.
In an exemplary embodiment, the specific implementation of step S153 is the same as the specific implementation of step S903 in fig. 9, and is not described herein again.
In step S154, the position of the j-th pointer in the second access record queue is kept unchanged in the second TBB container.
In an exemplary embodiment, the specific implementation of step S154 is the same as the specific implementation of step S904 in fig. 9, and is not described again here.
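Under the same assumed types, the Remove flow of fig. 15 reduces to marking the queue2 record stale and erasing the K-V entry; the record's position in queue2 is deliberately left unchanged, matching step S154.

// Remove (steps S151-S154), reusing the LruCache/Node sketch above.
bool removeEntry(LruCache& c, const std::string& key, Node* record) {
    record->hole.store(true, std::memory_order_release);  // S152: jth pointer -> hole memory
    return c.kv.erase(key);                               // S153: delete from first TBB container
}                                                         // S154: queue2 position unchanged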
As can be seen from the embodiments of figs. 13 to 15, pointers to the hole memory also appear in the second access record queue of the second TBB container. The technical solution therefore also compresses the second access record queue LRU Queue2 effectively, by limiting the maximum length of LRU Queue2 and by periodically traversing it.
It should be noted that the overhead of traversing LRU Queue2 is small, because its maximum capacity is controlled to a small value. In the present technical solution, the second access record queue is thus compressed by traversing LRU Queue2 and deleting the pointers that point to the hole memory. The compression method provided by the embodiment shown in fig. 16 is explained in detail below.
Referring to fig. 16, the method for implementing the LRU cache provided in this embodiment includes steps S161 to S165.
In step S161, a current length value of the second access record queue is obtained.
In step S162, it is determined whether the current length value of the second access record queue exceeds a third preset threshold.
In an exemplary embodiment, if the current length value of the second access record queue does not exceed the third preset threshold, step S161 is periodically executed.
In an exemplary embodiment, if the current length value of the second access record queue exceeds the third preset threshold, steps S163 and S164 are performed, or steps S163 to S166 are performed.
In step S163, the second access record queue is traversed, and the pointer in the second access record queue to the hole memory is deleted.
In step S164, the current length value of the second access record queue is obtained again, and it is determined whether the current length value of the second access record queue exceeds a third preset threshold.
After all pointers to the hole memory have been deleted from the second access record queue in step S163, there are two possible cases. In the first case, the length of the second access record queue is now within the third preset threshold; the goal of compressing the queue has been reached. In the second case, the length value of the second access record queue still exceeds the third preset threshold. This may happen either because the queue remains too long even after the pointers to the hole memory were deleted, or because the queue contained no pointers to the hole memory at all, so that step S163 deleted nothing effectively. In this case, pointers are deleted from the second access record queue even though they do not point to the hole memory, as follows.
For example, in step S165, the nth pointer is deleted sequentially in the direction from the head of the second access record queue to the tail of the first access record queue until the current length value of the second access record queue is less than or equal to the third preset threshold, where n is a positive integer; and in step S166, second LRU cache data to be deleted is determined and deleted in the first TBB container, where the second LRU cache data to be deleted is the same as the node object corresponding to the nth pointer.
In the technical solution provided by steps S165 and S166, the closer a pointer is to the head of the second access record queue, the longer its corresponding Cache has gone without being hit. Pointers are therefore deleted in sequence from the head toward the tail of the second access record queue until the current length value of the queue is less than or equal to the third preset threshold, and correspondingly, the Cache entries corresponding to the deleted pointers are deleted from the first TBB container.
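A sketch of the fig. 16 compression pass follows, under the same assumptions as before. tbb::concurrent_queue has no concurrency-safe in-place traversal, so the sketch rotates the queue by popping and re-pushing, and it relies on unsafe_size(), which is only approximate under concurrent use; treating that as acceptable for a periodic maintenance pass is this sketch's choice, not something the disclosure specifies.

// Compress queue2 (steps S161-S166): drop stale records; if the queue is
// still too long, evict live records from the head along with their K-V entries.
void compressQueue2(LruCache& c, std::size_t maxLen /* third preset threshold */) {
    if (c.queue2.unsafe_size() <= maxLen) return;        // S161/S162
    std::size_t pass = c.queue2.unsafe_size();           // S163: one full rotation
    Node* rec = nullptr;
    for (std::size_t i = 0; i < pass && c.queue2.try_pop(rec); ++i) {
        if (rec->hole.load(std::memory_order_acquire))
            c.pool.push(rec);                            // stale: delete (recycle the node)
        else
            c.queue2.push(rec);                          // live: keep, re-queued at the tail
    }
    // S165/S166: still over the threshold => evict from the head onward
    while (c.queue2.unsafe_size() > maxLen && c.queue2.try_pop(rec)) {
        c.kv.erase(rec->key);                            // delete the matching cache entry
        c.pool.push(rec);
    }
}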
Fig. 17 is a flowchart illustrating an implementation method of an LRU cache according to another embodiment of the present disclosure. In this embodiment, a method for monitoring the capacity value of the first TBB container is specifically described, which may be specifically implemented by a RemoveOverflowed interface of an LRU cache mechanism.
It should be noted that, in practical applications, the maximum capacity value (the first preset threshold) of the first TBB container is generally determined according to the average number of hits within the expiration time of the LRU K-V Cache in the first TBB container. The maximum capacity value of the first access record queue LRU Queue in the second TBB container is typically set to several tens of times (e.g., 20 times) the maximum capacity value of the first TBB container.
In an exemplary embodiment, after the first access record queue LRU Queue has been compressed, its length is shortened to within the allowable maximum (i.e., the second preset threshold is no longer exceeded), while the second access record queue LRU Queue2 holds the pointers corresponding to the least recently used Cache entries. When the Cache capacity in the first TBB container is detected to be over the limit, nodes can be popped sequentially from the head of the second access record queue LRU Queue2, and the corresponding LRU cache data removed from the LRU K-V Cache, until the capacity value of the first TBB container no longer exceeds the limit.
Specifically, while the capacity value of the first TBB container is controlled within the preset range, deletion of the Cache entries corresponding to the pointers in the second access record queue LRU Queue2 is suspended. Conversely, when the capacity value of the first TBB container is detected to be out of the preset range, the Cache entries corresponding to the pointers in LRU Queue2 are again deleted in sequence, starting from the head of LRU Queue2. That is, the present embodiment periodically acquires the current capacity value of the first TBB container and thereby controls it within the preset range.
Referring to fig. 17, the method for implementing the LRU cache provided in this embodiment includes steps S171 to S175.
In step S171, the current capacity value of the first TBB container is acquired.
In step S172, it is determined whether the current capacity value of the first TBB container exceeds a first preset threshold.
In an exemplary embodiment, if the current capacity value of the first TBB container does not exceed the first preset threshold, step S171 is periodically executed.
In an exemplary embodiment, if the current capacity value of the first TBB container exceeds the first preset threshold, steps S173-S175 are performed.
In step S173, it is determined whether the mth pointer of the second access record queue points to a hole memory.
In an exemplary embodiment, if the mth pointer of the second access record queue points to the hole memory, step S174 is executed. If the mth pointer of the second access record queue does not point to the hole memory, step S175 and step S174 are executed in sequence.
In step S174, m is assigned m + 1, and step S172 is performed again. That is, it is determined whether the (m + 1)th pointer (i.e., the pointer next to the mth pointer) points to the hole memory. In other words, in the second access record queue LRU Queue2, the pointers are checked one by one in the direction from the head to the tail of the second access record queue; if, say, node p5 points to the hole memory, it is directly determined whether the pointer next to node p5 points to the hole memory.
In step S175, first LRU cache data to be deleted, which is the same as the node object corresponding to the (m + 1)th pointer, is determined and deleted in the first TBB container. That is, if the (m + 1)th pointer in the second access record queue does not point to the hole memory, the corresponding Cache in the first TBB container was still not accessed during its extended life cycle, and it can therefore be eliminated to reduce the capacity value.
After step S175, m is again assigned m + 1 and step S173 is executed again; that is, it is determined whether the (m + 2)th pointer (i.e., the pointer next to the (m + 1)th pointer) points to the hole memory.
In the technical solution provided by the embodiment shown in fig. 17, the pointers stored in the second access record queue are used to extend the life cycle of Cache entries that have not been accessed for a long time, so as to maximize the Cache hit rate. Once the capacity value of the first TBB container exceeds the first preset threshold, however, the pointers in the second access record queue are traversed. A pointer that points to the hole memory means the corresponding Cache was accessed during its extended life cycle; a pointer that does not point to the hole memory means the corresponding Cache was not accessed during its extended life cycle. The capacity value of the first TBB container can therefore be reduced by deleting the Cache entries that were not accessed during the extended life cycle, until the capacity value no longer exceeds the first preset threshold.
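The fig. 17 watchdog can then be sketched as a single loop, under the same assumed types; the function name follows the RemoveOverflowed interface mentioned above. Records already pointing to the hole memory were re-accessed during their extended life cycle and are simply skipped; the first live records mark entries that were never touched again and are evicted.

// Capacity control for the first TBB container (steps S171-S175).
void removeOverflowed(LruCache& c, std::size_t maxCapacity /* first preset threshold */) {
    Node* rec = nullptr;
    while (c.kv.size() > maxCapacity && c.queue2.try_pop(rec)) {  // S171/S172
        if (!rec->hole.load(std::memory_order_acquire))           // S173
            c.kv.erase(rec->key);                                 // S175: evict the cold entry
        c.pool.push(rec);                                         // S174: move on to the next pointer
    }
}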
The following describes an embodiment of an apparatus of the present disclosure, which may be used to implement the LRU cache implementation method of the present disclosure.
Fig. 18 is a schematic structural diagram of an implementation apparatus of an LRU cache according to an embodiment of the present disclosure. Referring to fig. 18, the LRU cache implementation apparatus 180 includes: a determination module 181, a processing module 182, and a recording module 183.
The determining module 181 is configured to determine target cache data;
the processing module 182 is configured to, in response to a cache instruction, perform, based on a first TBB container, cache processing corresponding to the cache instruction on the target cache data; and
a recording module 183, configured to record, based on the second TBB container, an access history to LRU cached data in the first TBB container.
Wherein, in some embodiments of the present disclosure, based on the foregoing scheme,
each LRU cache data stored in the first TBB container corresponds to a node object in a pre-allocation queue;
and a first access record queue is stored in the second TBB container and contains pointers of a plurality of node objects.
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
the processing module 182 is specifically configured to:
in response to a write instruction for writing the target cache data into the first TBB container, inserting the target cache data into the first TBB container, and adding the target cache data in a first node object in the pre-allocation queue through an atomic operation;
the recording module 183 is specifically configured to:
and adding the pointer of the first node object to the tail part of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
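For the write path just described, a sketch under the same assumptions might read as follows; insert-or-overwrite semantics are this sketch's choice, since the text only says the data is inserted.

// Put (write path): insert into the first TBB container, then record the
// access by appending a fresh node pointer to the tail of queue1.
void put(LruCache& c, const std::string& key, const std::string& value) {
    KvMap::accessor acc;
    c.kv.insert(acc, key);                         // insert (or open) the K-V entry
    acc->second = value;
    Node* fresh = nullptr;
    if (!c.pool.try_pop(fresh)) fresh = new Node;  // first node object from the pre-allocation queue
    fresh->key = key;
    fresh->hole.store(false, std::memory_order_relaxed);
    c.queue1.push(fresh);                          // tail of the first access record queue
}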
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
the processing module 182 is specifically configured to:
in response to an update instruction for updating the first TBB container by using the target cache data, pointing a pointer of a second node object corresponding to LRU cache data to be updated to a hole memory;
adding the target cache data to a third node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated by using the target cache data;
the recording module 183 is specifically configured to:
and adding a pointer of the third node object to the tail part of the first access record queue in the second TBB container so as to record historical data of accessing the LRU cache data.
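The update path differs from the write path only in that the stale record must first be pointed to the hole memory; a sketch building on put() above:

// Update: mark the record of the LRU cache data to be updated as stale,
// then write the new value and append a fresh record, exactly as in put().
void update(LruCache& c, const std::string& key, const std::string& value,
            Node* oldRecord /* pointer of the second node object */) {
    oldRecord->hole.store(true, std::memory_order_release);  // old pointer -> hole memory
    put(c, key, value);                                      // new node at the tail of queue1
}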
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
the processing module 182 is specifically configured to:
responding to a reading instruction for reading the target cache data from the first TBB container, and pointing a pointer of a fourth node object corresponding to the target cache data to a hole memory;
adding the target cache data to a fifth node object in the pre-allocation queue through an atomic operation;
the recording module 183 is specifically configured to:
and adding a pointer of the fifth node object to the tail part of the first access record queue in the second TBB container so as to record historical data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing scheme, wherein,
the processing module 182 is specifically configured to:
in response to a deletion instruction for deleting the target cache data from the first TBB container, pointing a pointer of a sixth node object corresponding to the target cache data to a hole memory;
deleting the target cache data from the first TBB container;
the recording module 183 is specifically configured to:
and keeping the pointer of the sixth node object unchanged at the position of the first access record queue in the second TBB container.
In some embodiments of the present disclosure, based on the foregoing scheme, the apparatus 180 for implementing an LRU cache further includes: the device comprises a first acquisition module and a first judgment module;
the system comprises a first acquisition module, a first judgment module and a second judgment module, wherein the first acquisition module is used for acquiring the current capacity value of the second TBB container;
if the current capacity value of the second TBB container exceeds a second preset threshold value, then:
the first judging module is further configured to judge, in the direction from the head of the first access record queue to the tail of the first access record queue, whether the ith pointer of the first access record queue points to the hole memory;
if the ith pointer points to the hole memory, the recording module 183 is specifically configured to delete the ith pointer to compress the first access record queue, and to determine whether the (i + 1)th pointer points to the hole memory;
if the (i + 1)th pointer does not point to the hole memory, the recording module 183 is specifically configured to transfer the (i + 1)th pointer to the tail of the second access record queue in the second TBB container, and to determine whether the (i + 2)th pointer points to the hole memory.
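The compression of the first access record queue described by these modules (the i / i + 1 walk) can be sketched the same way: stale pointers are deleted, live ones are transferred to the tail of queue2 so that their cache entries receive an extended life cycle. Using queue1's own length as the trigger stands in for "the current capacity value of the second TBB container exceeds the second preset threshold", which is an approximation made by this sketch.

// Compress queue1: delete hole pointers, move live pointers to queue2.
void shrinkQueue1(LruCache& c, std::size_t maxLen /* second preset threshold */) {
    Node* rec = nullptr;
    while (c.queue1.unsafe_size() > maxLen && c.queue1.try_pop(rec)) {
        if (rec->hole.load(std::memory_order_acquire))
            c.pool.push(rec);     // ith pointer points to the hole memory: delete it
        else
            c.queue2.push(rec);   // live: transfer to the tail of the second access record queue
    }
}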
In some embodiments of the present disclosure, based on the foregoing,
the determining module 181 is specifically configured to:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
the processing module 182 is specifically configured to:
in response to an update instruction for updating the first TBB container by using the target cache data, pointing a jth pointer to a hole memory;
adding the target cache data to a seventh node object in the pre-allocation queue through an atomic operation;
updating the LRU cache data to be updated by using the target cache data;
the recording module 183 is specifically configured to:
adding the pointer of the seventh node object to the tail of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing,
the determining module 181 is specifically configured to:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
the processing module 182 is specifically configured to:
responding to a reading instruction for reading the target cache data from the first TBB container, and pointing a j-th pointer to the hole memory;
adding the target cache data to an eighth node object in the pre-allocation queue through an atomic operation;
the recording module 183 is specifically configured to:
and adding the pointer of the eighth node object to the tail part of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
In some embodiments of the present disclosure, based on the foregoing,
the determining module 181 is specifically configured to:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
the processing module 182 is specifically configured to:
in response to a delete instruction for deleting the target cache data from the first TBB container, pointing a jth pointer to a hole memory;
deleting the target cache data from the first TBB container;
the recording module 183 is specifically configured to:
and in the second TBB container, keeping the position of the j-th pointer in the second access record queue unchanged.
In some embodiments of the present disclosure, based on the foregoing scheme, the apparatus 180 for implementing an LRU cache further includes: the second acquisition module and the second judgment module;
the second acquiring module is used for acquiring the current capacity value of the first TBB container, and the second judging module is used for judging whether the current capacity value of the first TBB container exceeds a first preset threshold value;
if the current capacity value of the first TBB container exceeds a first preset threshold value, then:
the second judging module is further configured to traverse in a direction from the head of the second access record queue to the tail of the first access record queue, and judge whether the mth pointer of the second access record queue points to the void memory;
if the mth pointer points to the hole memory, the recording module 183 is further configured to determine whether the (m + 1) th pointer points to the hole memory;
if the (m + 1) th pointer does not point to the hole memory, the processing module 182 is further configured to determine and delete the first LRU cache data to be deleted in the first TBB container, where the first LRU cache data to be deleted is the same as the node object corresponding to the (m + 1) th pointer.
In some embodiments of the present disclosure, based on the foregoing scheme, the apparatus 180 for implementing an LRU cache further includes: the third acquisition module, the third judgment module and the traversal module;
the third obtaining module is configured to obtain a current length value of the second access record queue, and the third judging module is configured to judge whether the current length value of the second access record queue exceeds a third preset threshold;
if the current length value of the second access record queue exceeds a third preset threshold, then:
and the traversing module is used for traversing the second access record queue and deleting the pointer pointing to the hole memory in the second access record queue.
In some embodiments of the present disclosure, based on the foregoing,
if none of the pointers in the second access record queue point to the hole memory, then:
the recording module 183 is further configured to delete an nth pointer in sequence in a direction from the head of the second access record queue to the tail of the first access record queue until the current length value of the second access record queue is less than or equal to the third preset threshold, where n is a positive integer;
the processing module 182 is further configured to determine and delete a second LRU cache data to be deleted in the first TBB container, where the second LRU cache data to be deleted is the same as the node object corresponding to the nth pointer.
Since the functional modules of the LRU cache implementation apparatus of the exemplary embodiment of the present disclosure correspond to the steps of the exemplary embodiments of the LRU cache implementation method described above, for details not disclosed in the apparatus embodiments of the present disclosure, reference is made to the above-described method embodiments of the present disclosure.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be broken down into multiple steps, and so on.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) execute the implementation method of the LRU cache according to the embodiments of the present disclosure.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
Referring to fig. 19, a program product 1900 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 2000 according to this embodiment of the present disclosure is described below with reference to fig. 20. The electronic device 2000 shown in fig. 20 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 20, the electronic device 2000 is embodied in the form of a general purpose computing device. The components of the electronic device 2000 may include, but are not limited to: the at least one processing unit 2010, the at least one memory unit 2020, and the bus 2030 connecting the various system components including the memory unit 2020 and the processing unit 2010.
Wherein the memory unit stores program code executable by the processing unit 2010 to cause the processing unit 2010 to perform steps according to various exemplary embodiments of the present disclosure as described in the "exemplary methods" section of the specification above. For example, the processing unit 2010 may execute the following as shown in fig. 1: step S101, determining target cache data; step S102, responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container; and step S103, recording the access history of the LRU cache data in the first TBB container based on the second TBB container.
The storage unit 2020 may include readable media in the form of volatile storage units such as a random access memory unit (RAM) 20201 and/or a cache memory unit 20202, and may further include a read only memory unit (ROM) 20203.
The storage unit 2020 may also include a program/utility 20204 having a set (at least one) of program modules 20205, such program modules 20205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 2030 may be one or more of any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 2000 may also communicate with one or more external devices 2100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 2000, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 2000 to communicate with one or more other computing devices. Such communication may occur over an input/output (I/O) interface 2050. Also, the electronic device 2000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 2060. As shown, the network adapter 2060 communicates with the other modules of the electronic device 2000 via the bus 2030. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 2000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (16)

1. A method for implementing an LRU cache, comprising:
determining target cache data;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container;
recording the access history of the LRU cache data in the first TBB container based on the second TBB container.
2. The method for implementing an LRU cache according to claim 1, wherein
each LRU cache data stored in the first TBB container corresponds to a node object in a pre-allocation queue;
and a first access record queue is stored in the second TBB container and contains pointers of a plurality of node objects.
3. The method for implementing an LRU cache according to claim 2, wherein
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a write instruction for writing the target cache data into the first TBB container, inserting the target cache data into the first TBB container, and adding the target cache data in a first node object in the pre-allocation queue through an atomic operation;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding the pointer of the first node object to the tail part of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
4. The method for implementing an LRU cache according to claim 2, wherein
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to an update instruction for updating the first TBB container by using the target cache data, pointing a pointer of a second node object corresponding to LRU cache data to be updated to a hole memory;
adding the target cache data to a third node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated by using the target cache data;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding a pointer of the third node object to the tail part of the first access record queue in the second TBB container so as to record historical data of accessing the LRU cache data.
5. The method for implementing an LRU cache according to claim 2, wherein
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
responding to a reading instruction for reading the target cache data from the first TBB container, and pointing a pointer of a fourth node object corresponding to the target cache data to a hole memory;
adding the target cache data to a fifth node object in the pre-allocation queue through an atomic operation;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding a pointer of the fifth node object to the tail part of the first access record queue in the second TBB container so as to record historical data of accessing the LRU cache data.
6. The method for implementing an LRU cache according to claim 2, wherein
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a deletion instruction for deleting the target cache data from the first TBB container, pointing a pointer of a sixth node object corresponding to the target cache data to a hole memory;
deleting the target cache data from the first TBB container;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and keeping the pointer of the sixth node object unchanged at the position of the first access record queue in the second TBB container.
7. The method for implementing an LRU cache according to any one of claims 2 to 6, wherein the method further comprises:
acquiring the current capacity value of the second TBB container, and judging whether the current capacity value of the second TBB container exceeds a second preset threshold value or not;
if the current capacity value of the second TBB container exceeds a second preset threshold value, then:
judging, in the direction from the head of the first access record queue to the tail of the first access record queue, whether the ith pointer of the first access record queue points to the hole memory;
if the ith pointer points to the hole memory, deleting the ith pointer to compress the first access record queue, and judging whether the (i + 1)th pointer points to the hole memory;
if the (i + 1)th pointer does not point to the hole memory, transferring the (i + 1)th pointer to the tail of a second access record queue in the second TBB container, and judging whether the (i + 2)th pointer points to the hole memory.
8. The method for implementing an LRU cache according to claim 7, wherein
determining target cache data, comprising:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to an update instruction for updating the first TBB container by using the target cache data, pointing a jth pointer to a hole memory;
adding the target cache data to a seventh node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated by using the target cache data;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
adding the pointer of the seventh node object to the tail of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
9. The method for implementing an LRU cache according to claim 7, wherein
determining target cache data, comprising:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
responding to a reading instruction for reading the target cache data from the first TBB container, and pointing a j-th pointer to the hole memory;
adding the target cache data to an eighth node object in the pre-allocation queue through an atomic operation;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and adding the pointer of the eighth node object to the tail part of the first access record queue in the second TBB container so as to record the history data of accessing the LRU cache data.
10. The method for implementing an LRU cache according to claim 7, wherein
determining target cache data, comprising:
determining that the target cache data is data corresponding to a jth pointer in the second access record queue;
responding to a cache instruction, and performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container, wherein the cache processing comprises the following steps:
in response to a delete instruction for deleting the target cache data from the first TBB container, pointing a jth pointer to a hole memory;
deleting the target cache data from the first TBB container;
recording access history to LRU cache data in the first TBB container based on a second TBB container, comprising:
and in the second TBB container, keeping the position of the j-th pointer in the second access record queue unchanged.
11. The method for implementing an LRU cache according to any one of claims 8 to 10, wherein the method further comprises:
acquiring a current capacity value of the first TBB container, and judging whether the current capacity value of the first TBB container exceeds a first preset threshold value or not;
if the current capacity value of the first TBB container exceeds a first preset threshold value, then:
traversing from the head of the second access record queue to the tail of the first access record queue, and judging whether the mth pointer of the second access record queue points to the hole memory;
if the mth pointer points to the hole memory, judging whether the (m + 1)th pointer points to the hole memory;
if the (m + 1) th pointer does not point to the hole memory, determining and deleting first LRU cache data to be deleted in the first TBB container, wherein the first LRU cache data to be deleted is the same as the node object corresponding to the (m + 1) th pointer.
12. The method for implementing an LRU cache according to any one of claims 8 to 10, wherein the method further comprises:
acquiring the current length value of the second access record queue, and judging whether the current length value of the second access record queue exceeds a third preset threshold value;
if the current length value of the second access record queue exceeds a third preset threshold, then:
and traversing the second access record queue, and deleting the pointers pointing to the hole memory in the second access record queue.
13. The method for implementing an LRU cache according to claim 12, wherein the method further comprises:
if none of the pointers in the second access record queue point to the hole memory, then:
sequentially deleting the nth pointer from the head of the second access record queue to the tail of the first access record queue until the current length value of the second access record queue is less than or equal to the third preset threshold, wherein n is a positive integer;
and determining and deleting second LRU cache data to be deleted in the first TBB container, wherein the second LRU cache data to be deleted is the same as the node object corresponding to the nth pointer.
14. An apparatus for implementing an LRU cache, comprising:
the determining module is used for determining target cache data;
the processing module is used for responding to a cache instruction and carrying out cache processing corresponding to the cache instruction on the target cache data based on a first TBB container;
and the recording module is used for recording the access history of the LRU cache data in the first TBB container based on the second TBB container.
15. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for implementing the LRU cache according to any one of claims 1 to 13.
16. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the LRU cache implementing method of any one of claims 1 to 13.