CN116089425A - Method and device for improving redis memory elimination strategy based on LFU algorithm - Google Patents

Method and device for improving redis memory elimination strategy based on LFU algorithm

Info

Publication number
CN116089425A
CN116089425A (application CN202211696513.1A)
Authority
CN
China
Prior art keywords
hash table
node
key
target
linked list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211696513.1A
Other languages
Chinese (zh)
Inventor
粟相颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202211696513.1A
Publication of CN116089425A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2255Hash tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/215Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for improving the redis memory elimination strategy based on the LFU algorithm, applied to a redis database and belonging to the technical field of data caching. The method comprises: querying the active time of keywords in a third hash table; when the active time of a target keyword among the keywords is greater than or equal to a preset value, reducing the access count of the node corresponding to the target keyword; determining the target node corresponding to the target keyword in a second hash table; moving the node information corresponding to the target node in a first hash table to the position corresponding to the reduced access count and updating the doubly linked list; and deleting a first doubly linked list in the first hash table together with the data corresponding to it in the second and third hash tables, where the first doubly linked list is a doubly linked list whose access count is smaller than a preset count.

Description

Method and device for improving redis memory elimination strategy based on LFU algorithm
Technical Field
The invention belongs to the technical field of data caching, and particularly relates to a method and a device for improving a redis memory elimination strategy based on an LFU algorithm.
Background
Redis is a widely used pure in-memory database with high availability and outstanding read/write performance; all data are stored in memory. To cope with excessive data volume, redis has built-in data elimination strategies, of which volatile-lfu and allkeys-lfu are two common ones, both based on the LFU algorithm. LFU stands for Least Frequently Used, i.e., the least frequently used key is selected for elimination.
The LFU policy is based purely on access counts, so the time at which cached data was added greatly affects how long it is retained: data added early is easier to keep in the cache than data added later, later data struggles to stay cached, and newly added entries are easily removed, producing "jitter" at the tail of the cache.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for improving the redis memory elimination strategy based on the LFU algorithm, so as to avoid the problem that cache data added early on stays in the cache without being cleaned for a long time.
In order to solve the technical problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for improving a redis memory elimination policy based on an LFU algorithm, which is applied to a redis database, and is characterized in that the method includes:
Querying the active time of the key words in the third hash table;
when the active time of a target keyword in the keywords is greater than or equal to a preset value, reducing the access times of the nodes corresponding to the target keyword;
determining a target node corresponding to the target keyword in a second hash table;
moving node information corresponding to a target node in a first hash table to a position corresponding to the reduced access times in the first hash table, and updating a doubly linked list;
deleting a first double-linked list in the first hash table, deleting data corresponding to the first double-linked list in a second hash table and a third hash table, wherein the first double-linked list is a double-linked list with access times smaller than preset times;
the Redis database comprises a first hash table, a second hash table and a third hash table, wherein the first hash table stores access times and node information of a plurality of nodes with the access times, the relation among the plurality of nodes is stored through the doubly linked list, and the node information comprises keywords, key values and the access times;
the second hash table stores the key words and nodes corresponding to the key words;
The third hash table stores the key and an active time corresponding to the key.
Optionally, before the querying the active time of the key in the third hash table, the method further comprises:
receiving a write operation of a user for a first keyword;
in response to the write operation, looking up the first key in the second hash table;
determining a first node corresponding to the first key under the condition that the first key exists in the second hash table;
updating a first key value corresponding to the first node in the first hash table, and increasing the access times corresponding to the first node;
moving node information corresponding to the first node in a first hash table to a position corresponding to the increased access times in the first hash table, and updating the doubly-linked list;
and updating the active time corresponding to the first keyword in the third hash table to be the current time.
Optionally, after the first key is looked up in the second hash table, the method further includes:
adding a first keyword and the first node corresponding to the first keyword in the second hash table under the condition that the first keyword does not exist in the second hash table;
Setting the access times of the first node as target times;
inserting node information of the first node into a doubly linked list corresponding to the target times in the first hash table, wherein the node information of the first node comprises the first key word, a key value of the first key word and the target times;
and newly adding the first keyword in the third hash table and setting the corresponding active time as the current time.
Optionally, the inserting the node information of the first node into the doubly linked list corresponding to the target times in the first hash table includes:
inserting node information of the first node into a doubly linked list corresponding to the target times in the first hash table under the condition that the access times in the first hash table comprise the target times;
and under the condition that the access times in the first hash table do not comprise the target times, creating a target doubly-linked list corresponding to the target times in the first hash table, and inserting node information of the first node into the target doubly-linked list.
Optionally, before the querying the active time of the key in the third hash table, the method further comprises:
Receiving a checking operation of a user for a key value corresponding to the second keyword;
searching the second key word in the second hash table in response to the checking operation;
returning a null value when the second key is not present in the second hash table;
determining a second node corresponding to the second key when the second key exists in the second hash table;
determining a second key value corresponding to the second node in the first hash table, and returning the second key value;
increasing the access times corresponding to the second node, moving the node information corresponding to the second node in a first hash table to a position corresponding to the increased access times in the first hash table, and updating the doubly-linked list;
and updating the active time corresponding to the second keyword in the third hash table to be the current time.
In a second aspect, an embodiment of the present invention provides a device for implementing an improved redis memory elimination policy based on LFU algorithm, where the device is applied to a redis database, and is characterized in that the device includes:
the first query module is used for querying the active time of the keywords in the third hash table;
The first adjusting module is used for reducing the access times of the nodes corresponding to the target keywords under the condition that the active time of the target keywords in the keywords is larger than or equal to a preset value;
the determining module is used for determining a target node corresponding to the target keyword in the second hash table;
the first updating module moves the node information corresponding to the target node in the first hash table to a position corresponding to the reduced access times in the first hash table and updates the doubly linked list;
the deleting module is used for deleting a first double-linked list in the first hash table, deleting data corresponding to the first double-linked list in the second hash table and the third hash table, wherein the first double-linked list is a double-linked list with access times smaller than preset times;
the redis database comprises a first hash table, a second hash table and a third hash table, wherein the first hash table stores access times and node information of a plurality of nodes with the access times, the relation among the plurality of nodes is stored through the doubly linked list, and the node information comprises keywords, key values and the access times;
The second hash table stores the key words and nodes corresponding to the key words;
the third hash table stores the key and an active time corresponding to the key.
Optionally, the apparatus further comprises:
the first receiving module is used for receiving a writing operation of a user aiming at the first keyword;
a first response module configured to search the second hash table for the first key in response to the write operation;
determining a first node corresponding to the first key under the condition that the first key exists in the second hash table;
the second adjusting module is used for updating a first key value corresponding to the first node in the first hash table and increasing the access times corresponding to the first node;
the second updating module is used for moving the node information corresponding to the first node in the first hash table to a position corresponding to the increased access times in the first hash table and updating the doubly-linked list;
and a third updating module, configured to update the active time corresponding to the first key in the third hash table to a current time.
Optionally, the first response module is further configured to:
Adding a first keyword and the first node corresponding to the first keyword in the second hash table under the condition that the first keyword does not exist in the second hash table;
the apparatus further comprises:
the setting module is used for setting the access times of the first node as target times;
the inserting module is used for inserting the node information of the first node into a doubly linked list corresponding to the target times of the first hash table, wherein the node information of the first node comprises the first key word, the key value of the first key word and the target times;
the setting module is used for newly adding the first keyword in the third hash table and setting the corresponding active time as the current time.
Optionally, the insertion module includes:
an inserting unit, configured to insert node information of the first node into a doubly linked list corresponding to the target number of times in the first hash table, where the access number of times in the first hash table includes the target number of times;
and the creating unit is used for creating a target doubly-linked list corresponding to the target times in the first hash table and inserting the node information of the first node into the target doubly-linked list under the condition that the access times in the first hash table do not comprise the target times.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving the checking operation of the key value corresponding to the second key word by the user;
the second response module is used for responding to the check operation and searching the second keyword in the second hash table;
determining a second node corresponding to the second key when the second key exists in the second hash table;
the first return module is used for determining a second key value corresponding to the second node in the first hash table and returning the second key value;
a fourth updating module, configured to increase the number of accesses corresponding to the second node, and move node information corresponding to the second node in a first hash table to a position corresponding to the increased number of accesses in the first hash table, and update the doubly linked list;
a fifth updating module, configured to update an active time corresponding to the second key in the third hash table to a current time;
and the second return module is used for returning a null value when the second key word does not exist in the second hash table.
The embodiment of the invention provides a method for improving the redis memory elimination strategy based on the LFU algorithm, applied to a redis database, comprising: querying the active time of keywords in a third hash table; when the active time of a target keyword among the keywords is greater than or equal to a preset value, reducing the access count of the node corresponding to the target keyword; determining the target node corresponding to the target keyword in a second hash table; moving the node information corresponding to the target node in a first hash table to the position corresponding to the reduced access count and updating the doubly linked list; and deleting a first doubly linked list in the first hash table together with the data corresponding to it in the second and third hash tables, where the first doubly linked list is a doubly linked list whose access count is smaller than a preset count. The redis database comprises the first hash table, the second hash table and the third hash table: the first hash table stores access counts and the node information of the nodes having each access count, the relation among the nodes being maintained by the doubly linked list, and the node information comprising the keyword, the key value and the access count; the second hash table stores the keywords and the nodes corresponding to the keywords; the third hash table stores the keywords and the active time corresponding to each keyword. The embodiment of the invention improves the original LFU algorithm so that the LFU elimination strategy balances access counts against active time, preventing cache data added early on from remaining uncleaned for a long time.
Drawings
FIG. 1 is a schematic diagram of a conventional LFU data structure;
FIG. 2 is a schematic diagram of the result of a structural change after a key access by a conventional LFU;
FIG. 3 is a schematic diagram of an improved LFU data structure provided by an embodiment of the present invention;
FIG. 4 is a flow chart of a method for improving a redis memory elimination strategy based on an LFU algorithm according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the result of a structural change after key access by an improved LFU according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for implementing a redis memory elimination strategy based on LFU algorithm improvement according to an embodiment of the present invention;
The objects, functional features and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present invention may be implemented in orders other than those illustrated or described herein. In addition, objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects.
The method for eliminating the redis memory based on the improvement of the LFU algorithm provided by the embodiment of the invention is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The conventional LFU algorithm has various implementations, but the basic idea is the same: a hash table stores, for each access count, all the node information corresponding to that count, a node containing the key, the key value and the access counter. The nodes sharing an access count are linked together in a doubly linked list. The advantage is that when a node is found by a key lookup, its access count is obtained at the same time, which in turn gives all the nodes with that access count. Besides this body structure of the LFU algorithm, a value must also be obtainable from a key, so a second hash table is required to store the correspondence between keys and nodes, as shown in FIG. 1.
Flow of viewing data:
S111: if the key does not exist, return null and terminate the flow;
S112: if the key exists, return the value of the corresponding node;
S113: remove the node from its original doubly linked list and add it to the doubly linked list corresponding to the new access count.
The flow of writing data:
S121: if the key exists, go to step S123; if it does not exist, add a data node;
S122: set the node's access counter to 1, insert the node into the doubly linked list corresponding to that count in hash table 1 (creating the list if it does not exist), and end the flow;
S123: update the value of the corresponding node;
S124: remove the node from its original doubly linked list and add it to the doubly linked list corresponding to the new access count;
S125: assuming the operated key is key4, the position after the operation is as shown in FIG. 2;
S126: when the elimination strategy needs to be executed, delete a certain number of doubly linked lists from hash table 1 in order of access count from small to large, and update the data of hash table 2 accordingly.
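The conventional structure and the two flows above can be summarized in a minimal Python sketch. This is an illustrative reconstruction rather than code from the patent: an OrderedDict stands in for each doubly linked list, and the names ConventionalLFU, freq_table, key_to_node and evict_lists are assumed.

```python
from collections import defaultdict, OrderedDict

class ConventionalLFU:
    """Hash table 1: access count -> doubly linked list of nodes (OrderedDict here).
    Hash table 2: key -> node, where a node holds the key, value and counter."""

    def __init__(self):
        self.freq_table = defaultdict(OrderedDict)   # hash table 1
        self.key_to_node = {}                        # hash table 2

    def get(self, key):                              # viewing data (S111-S113)
        node = self.key_to_node.get(key)
        if node is None:
            return None                              # key absent: return null
        self._bump(node)
        return node["value"]

    def put(self, key, value):                       # writing data (S121-S125)
        node = self.key_to_node.get(key)
        if node is None:                             # new key: counter starts at 1
            node = {"key": key, "value": value, "counter": 1}
            self.key_to_node[key] = node
            self.freq_table[1][key] = node
            return
        node["value"] = value                        # existing key: update the value
        self._bump(node)

    def _bump(self, node):
        # move the node from its old counter bucket to the counter + 1 bucket
        old = node["counter"]
        del self.freq_table[old][node["key"]]
        if not self.freq_table[old]:
            del self.freq_table[old]
        node["counter"] = old + 1
        self.freq_table[node["counter"]][node["key"]] = node

    def evict_lists(self, lists_to_drop=1):
        # S126: delete whole linked lists from the smallest access count upward,
        # removing their keys from hash table 2 as well
        for counter in sorted(self.freq_table)[:lists_to_drop]:
            for k in self.freq_table[counter]:
                del self.key_to_node[k]
            del self.freq_table[counter]
```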
Example 1
Referring to FIG. 3, a schematic diagram of the improved LFU data structure according to an embodiment of the present invention is shown. Hash table 1 stores all the node information corresponding to each access count: its key is the access count and its value is a doubly linked list. All nodes in one linked list are data nodes with the same access count, each containing the key, the value and the access counter.
Hash table 2 stores the correspondence between keys and nodes.
The active time of each key, expressed by the variable activeTime, is stored in an ordered hash table. Each key has one active time, namely the time of its most recent access, and the entries in the ordered hash table are sorted by active time.
A degradation time parameter, degrade, is introduced: when the time a key has gone without activity exceeds this value, the key's heat is degraded, i.e., its access count is reduced by 1, so that heat is kept under this threshold.
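As a rough sketch of how the three tables and the degrade parameter of FIG. 3 could be laid out in Python (the field names and the 300-second threshold are assumptions for illustration, not values taken from the patent):

```python
from collections import OrderedDict, defaultdict

# hash table 1: access count -> doubly linked list of nodes
# (an OrderedDict stands in for the linked list; a node holds key, value, counter)
freq_table = defaultdict(OrderedDict)

# hash table 2: key -> node
key_to_node = {}

# ordered hash table: key -> activeTime; the update code keeps the most
# recently active key at the front, so entries stay ordered by active time
active_time = OrderedDict()

# degrade: when a key has been inactive longer than this, its access count
# is decreased by 1 (the 300-second value is only an example)
DEGRADE_SECONDS = 300
```

A production implementation inside redis itself would of course be written in C against its own dictionary and list structures; the sketch only mirrors the logical layout described above.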
Based on the improved LFU data structure, referring to fig. 4, a flowchart of a method for implementing the redis memory elimination strategy based on LFU algorithm improvement according to the embodiment of the present invention is shown.
The invention provides a method for improving the redis memory elimination strategy based on the LFU algorithm, which comprises the following steps:
s401: querying the active time of the key words in the third hash table;
as shown in fig. 3, the third hash table is the ordered hash table in fig. 3, and is used for storing the key words and the active time.
S402: when the active time of a target keyword in the keywords is greater than or equal to a preset value, reducing the access times of the nodes corresponding to the target keyword;
the method and the device can avoid the cache data added in early stage from being cleaned for a long time, so that the method and the device introduce an active time parameter in the elimination strategy, the active time is long to indicate that the node is not accessed for a long time, the active time is short to indicate that the node is accessed recently, a preset value can be set according to requirements, and the node corresponding to the active time which is greater than or equal to the preset value is considered as the node to be processed.
Further, in order to balance the two parameters of the access times and the active time at the same time, it may be set to adjust the access times according to the active time, and in an exemplary case, when the active time of the keyword is greater than or equal to a preset value, the access times of the nodes corresponding to the keyword are reduced by one.
S403: determining a target node corresponding to the target keyword in a second hash table;
As shown in FIG. 3, the second hash table is hash table 2 in FIG. 3 and stores the correspondence between keys and nodes. It is used to determine the node from the key, after which the value stored in the first hash table is obtained from the node.
S404: moving node information corresponding to a target node in a first hash table to a position corresponding to the reduced access times in the first hash table, and updating a doubly linked list;
After the access count of a node has been reduced because of the active time parameter, the node information in the first hash table no longer matches its current access count, so the node must be moved to the position corresponding to the reduced count. This is called the degradation process; for example, in FIGS. 3 to 5, key1 is degraded and moved to the linked list whose access count is 1.
S405: deleting a first double-linked list in the first hash table, deleting data corresponding to the first double-linked list in a second hash table and a third hash table, wherein the first double-linked list is a double-linked list with access times smaller than preset times;
the Redis database comprises a first hash table, a second hash table and a third hash table, wherein the first hash table stores access times and node information of a plurality of nodes with the access times, the relation among the plurality of nodes is stored through the doubly linked list, and the node information comprises keywords, key values and the access times;
the second hash table stores the key words and nodes corresponding to the key words;
the third hash table stores the key and an active time corresponding to the key.
When the elimination strategy needs to be executed, a certain number of doubly linked lists are deleted from hash table 1 in order of access count from small to large, and the data corresponding to them in hash table 2 and the ordered hash table are deleted at the same time.
The embodiment of the invention improves the original LFU algorithm so that the LFU elimination strategy balances access counts against active time, preventing cache data added early on from remaining uncleaned for a long time.
In a possible implementation manner, before step S401, steps S406 to S409 are further included:
s406: receiving a write operation of a user for a first keyword;
s407: in response to the write operation, looking up the first key in the second hash table;
determining a first node corresponding to the first key under the condition that the first key exists in the second hash table;
updating a first key value corresponding to the first node in the first hash table, and increasing the access times corresponding to the first node;
s408: moving node information corresponding to the first node in a first hash table to a position corresponding to the increased access times in the first hash table, and updating the doubly-linked list;
s409: and updating the active time corresponding to the first keyword in the third hash table to be the current time.
For example, receiving a user's write operation for the first keyword, when the key already exists, corresponds to the process of writing data, and the specific steps may be:
First, judge whether the key exists; if it exists, update the value of the corresponding node.
Then, increase the node's access counter by 1, remove the node from its original doubly linked list, and add it to the doubly linked list corresponding to the new access count. Update the activeTime of the key in the ordered hash table to the current time; the table re-sorts itself automatically, with key4 placed at the front.
Next, at a fixed time or according to a preset period, check the activeTime of every key in the ordered hash table against the current time; if the gap reaches or exceeds the set threshold degrade, decrease the key's access count by 1 and move the data node corresponding to the key to the matching doubly linked list. This is called the degradation process.
Optionally, degradation stops when the key's counter reaches 1.
Illustratively, as in FIG. 5, the operated key4 is moved to the linked list with the higher access count, while key1, whose activity has decayed, is degraded and moved to the linked list with access count 1.
Finally, when the elimination strategy needs to be executed, delete a certain number of doubly linked lists from hash table 1 in order of access count from small to large, and delete the data corresponding to them in hash table 2 and the ordered hash table.
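A sketch of this write path for an existing key (steps S406 to S409), again under the assumed table layout; move_to_end keeps the most recently active key at the front of the ordered table, mirroring key4 in FIG. 5.

```python
import time
from collections import OrderedDict

def write_existing(key, value, freq_table, key_to_node, active_time):
    """Update an existing key: new value, access count + 1, activeTime refreshed."""
    node = key_to_node[key]                      # found via hash table 2
    node["value"] = value                        # update the key value
    old = node["counter"]
    node["counter"] = old + 1                    # increase the access count
    del freq_table[old][key]                     # move node info to the new bucket
    if not freq_table[old]:
        del freq_table[old]
    freq_table.setdefault(node["counter"], OrderedDict())[key] = node
    active_time[key] = time.time()               # activeTime := current time
    active_time.move_to_end(key, last=False)     # most recently active key at the front
```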
In a possible embodiment, after step S407, steps S410-S413 are further included:
s410: adding a first keyword and the first node corresponding to the first keyword in the second hash table under the condition that the first keyword does not exist in the second hash table;
s411: setting the access times of the first node as target times;
S412: inserting node information of the first node into a doubly linked list corresponding to the target times in the first hash table, wherein the node information of the first node comprises the first key word, a key value of the first key word and the target times;
s413: and newly adding the first keyword in the third hash table and setting the corresponding active time as the current time.
Optionally, the inserting the node information of the first node into the doubly linked list corresponding to the target times in the first hash table includes:
inserting node information of the first node into a doubly linked list corresponding to the target times in the first hash table under the condition that the access times in the first hash table comprise the target times;
and under the condition that the access times in the first hash table do not comprise the target times, creating a target doubly-linked list corresponding to the target times in the first hash table, and inserting node information of the first node into the target doubly-linked list.
For example, receiving a user's write operation for the first keyword, when the key does not yet exist, corresponds to the process of writing new data, and the specific steps may be:
First, judge whether the key exists; if it does not exist, add a data node.
Then, set the node's access counter to 1 and insert the node into the corresponding doubly linked list in hash table 1, creating the list if it does not exist.
Finally, add the key to the ordered hash table, set its activeTime to the current time, and end the flow.
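A sketch of the insertion path for a key that does not yet exist (steps S410 to S413), with the same assumed layout; setdefault creates the doubly linked list for the target count when it is missing.

```python
import time
from collections import OrderedDict

def write_new(key, value, freq_table, key_to_node, active_time, target_count=1):
    """Insert a key that is not yet present: the counter starts at the target count,
    the node joins the matching linked list (created if absent), activeTime is set."""
    node = {"key": key, "value": value, "counter": target_count}
    key_to_node[key] = node                                          # hash table 2
    freq_table.setdefault(target_count, OrderedDict())[key] = node   # hash table 1
    active_time[key] = time.time()                                   # ordered table
    active_time.move_to_end(key, last=False)
```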
In a possible implementation manner, before the querying the active time of the key in the third hash table, the method further includes:
receiving a checking operation of a user for a key value corresponding to the second keyword;
searching the second key word in the second hash table in response to the checking operation;
returning a null value when the second key is not present in the second hash table;
determining a second node corresponding to the second key when the second key exists in the second hash table;
determining a second key value corresponding to the second node in the first hash table, and returning the second key value;
increasing the access times corresponding to the second node, moving the node information corresponding to the second node in a first hash table to a position corresponding to the increased access times in the first hash table, and updating the doubly-linked list;
And updating the active time corresponding to the second keyword in the third hash table to be the current time.
For example, receiving a user's viewing operation for the key value corresponding to the second keyword is the process of viewing data, and the specific steps may be:
First, judge whether the key exists; if it does not exist, return null and end the flow;
if it exists, return the value of the corresponding node.
Then increase the node's access count, remove the node from its original doubly linked list, and add it to the doubly linked list corresponding to the new access count. Update the activeTime of the key in the ordered hash table to the current time; the table re-sorts itself automatically, with key4 placed at the front as in FIG. 5.
Finally, check the activeTime of all keys in the ordered hash table against the current time; if the gap reaches or exceeds the set threshold degrade, decrease the key's access count by 1 and move the data node corresponding to the key to the matching doubly linked list. This is called the degradation process, which stops when the key's counter becomes 1.
When the elimination strategy needs to be executed, a certain number of doubly linked lists are deleted from hash table 1 in order of access count from small to large, and the data corresponding to them in hash table 2 and the ordered hash table are deleted at the same time.
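A sketch of the viewing path for the second keyword, under the same assumed layout; it returns None for a missing key, otherwise returns the value, bumps the access count and refreshes the activeTime.

```python
import time
from collections import OrderedDict

def view(key, freq_table, key_to_node, active_time):
    """Look up a key: None when absent; otherwise return its value,
    move it to the bucket for the new count and refresh its activeTime."""
    node = key_to_node.get(key)
    if node is None:
        return None                               # key absent: return null
    old = node["counter"]
    node["counter"] = old + 1
    del freq_table[old][key]
    if not freq_table[old]:
        del freq_table[old]
    freq_table.setdefault(node["counter"], OrderedDict())[key] = node
    active_time[key] = time.time()
    active_time.move_to_end(key, last=False)
    return node["value"]
```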
The embodiment of the invention improves the traditional LFU algorithm by adding the influence of access time on the heat of data. Compared with the original scheme, it effectively avoids the problem of early data accumulating in the cache indefinitely; it is a strategy that dynamically balances LFU and LRU and combines the advantages of both.
Example two
Referring to FIG. 6, a schematic structural diagram of an apparatus 60 for implementing the improved redis memory elimination strategy based on the LFU algorithm according to an embodiment of the present invention is shown. The apparatus, applied to a redis database, includes:
a first query module 601, configured to query an active time of a keyword in the third hash table;
a first adjustment module 602, configured to reduce, when an active time of a target keyword in the keywords is greater than or equal to a preset value, the number of accesses of a node corresponding to the target keyword;
a determining module 603, configured to determine a target node corresponding to the target key in the second hash table;
the first updating module 604 moves the node information corresponding to the target node in the first hash table to a position corresponding to the reduced access times in the first hash table, and updates the doubly linked list;
the deleting module 605 is configured to delete a first doubly linked list in the first hash table, delete data corresponding to the first doubly linked list in the second hash table and the third hash table, where the first doubly linked list is a doubly linked list with access times less than a preset number of times;
The redis database comprises a first hash table, a second hash table and a third hash table, wherein the first hash table stores access times and node information of a plurality of nodes with the access times, the relation among the plurality of nodes is stored through the doubly linked list, and the node information comprises keywords, key values and the access times;
the second hash table stores the key words and nodes corresponding to the key words;
the third hash table stores the key and an active time corresponding to the key.
Optionally, the apparatus 60 further comprises:
a first receiving module 606, configured to receive a write operation of a user for the first keyword;
a first response module 607, configured to search the second hash table for the first key in response to the write operation;
determining a first node corresponding to the first key under the condition that the first key exists in the second hash table;
a second adjustment module 608, configured to update a first key value corresponding to the first node in the first hash table, and increase the number of accesses corresponding to the first node;
A second updating module 609, configured to move node information corresponding to the first node in the first hash table to a position corresponding to the increased access number in the first hash table, and update the doubly linked list;
and a third updating module 610, configured to update the active time corresponding to the first key in the third hash table to the current time.
Optionally, the first response module 607 is further configured to:
adding a first keyword and the first node corresponding to the first keyword in the second hash table under the condition that the first keyword does not exist in the second hash table;
the apparatus 60 further comprises:
a setting module 611, configured to set the number of accesses of the first node to a target number;
an inserting module 612, configured to insert node information of the first node into a doubly linked list corresponding to the target number of times in the first hash table, where the node information of the first node includes the first key, a key value of the first key, and the target number of times;
a setting module 613, configured to newly add the first key to the third hash table and set the corresponding active time as the current time.
Optionally, the inserting module 612 includes:
an inserting unit 6121, configured to insert node information of the first node into a doubly linked list corresponding to the target number of times in the first hash table, where the number of accesses in the first hash table includes the target number of times;
and a creating unit 6122, configured to create a target doubly-linked list corresponding to the target number of times in the first hash table, and insert node information of the first node into the target doubly-linked list, where the number of accesses in the first hash table does not include the target number of times.
Optionally, the apparatus 60 further comprises:
a second receiving module 614, configured to receive a view operation of a user on a key value corresponding to the second keyword;
a second response module 615, configured to search the second hash table for the second key in response to the viewing operation;
determining a second node corresponding to the second key when the second key exists in the second hash table;
a first returning module 616, configured to determine a second key value corresponding to the second node in the first hash table, and return the second key value;
A fourth updating module 617, configured to increase the number of accesses corresponding to the second node, and move node information corresponding to the second node in the first hash table to a position corresponding to the increased number of accesses in the first hash table, and update the doubly linked list;
a fifth updating module 618, configured to update an active time corresponding to the second key in the third hash table to a current time;
and a second returning module 619, configured to return a null value when the second key does not exist in the second hash table.
The device 60 for eliminating the redis memory based on the LFU algorithm improvement provided in the embodiment of the present invention can implement each process implemented in the above method embodiment, and in order to avoid repetition, a description is omitted here.
In the embodiment of the invention, the first query module queries the active time of keywords in the third hash table; the first adjustment module reduces the access count of the node corresponding to a target keyword when the active time of that keyword is greater than or equal to a preset value; the determining module determines the target node corresponding to the target keyword in the second hash table; the first updating module moves the node information corresponding to the target node in the first hash table to the position corresponding to the reduced access count and updates the doubly linked list; the deleting module deletes a first doubly linked list in the first hash table and deletes the data corresponding to it in the second and third hash tables, the first doubly linked list being a doubly linked list whose access count is smaller than a preset count. The redis database comprises the first hash table, the second hash table and the third hash table: the first hash table stores access counts and the node information of the nodes having each access count, the relation among the nodes being maintained by the doubly linked list, and the node information comprising the keyword, the key value and the access count; the second hash table stores the keywords and the nodes corresponding to them; the third hash table stores the keywords and the active time corresponding to each keyword. The embodiment of the invention improves the original LFU algorithm so that the LFU elimination strategy balances access counts against active time, preventing cache data added early on from remaining uncleaned for a long time.
The virtual apparatus in the embodiment of the invention may be a device, a component in a terminal, an integrated circuit, or a chip.
In addition, it should be noted that the above embodiment of the apparatus is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select some or all modules according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, for technical details not described in this embodiment, reference may be made to the method provided in any embodiment of the present invention, which is not repeated here.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. The method for improving the redis memory elimination strategy based on the LFU algorithm is applied to a redis database and is characterized by comprising the following steps of:
querying the active time of the key words in the third hash table;
when the active time of a target keyword in the keywords is greater than or equal to a preset value, reducing the access times of the nodes corresponding to the target keyword;
Determining a target node corresponding to the target keyword in a second hash table;
moving node information corresponding to a target node in a first hash table to a position corresponding to the reduced access times in the first hash table, and updating a doubly linked list;
deleting a first double-linked list in the first hash table, deleting data corresponding to the first double-linked list in a second hash table and a third hash table, wherein the first double-linked list is a double-linked list with access times smaller than preset times;
the redis database comprises a first hash table, a second hash table and a third hash table, wherein the first hash table stores access times and node information of a plurality of nodes with the access times, the relation among the plurality of nodes is stored through the doubly linked list, and the node information comprises keywords, key values and the access times;
the second hash table stores the key words and nodes corresponding to the key words;
the third hash table stores the key and an active time corresponding to the key.
2. The method of claim 1, wherein prior to the active time of the key in the query third hash table, the method further comprises:
Receiving a write operation of a user for a first keyword;
in response to the write operation, looking up the first key in the second hash table;
determining a first node corresponding to the first key under the condition that the first key exists in the second hash table;
updating a first key value corresponding to the first node in the first hash table, and increasing the access times corresponding to the first node;
moving node information corresponding to the first node in a first hash table to a position corresponding to the increased access times in the first hash table, and updating the doubly-linked list;
and updating the active time corresponding to the first keyword in the third hash table to be the current time.
3. The method of claim 2, wherein after the first key is looked up in the second hash table, the method further comprises:
adding a first keyword and the first node corresponding to the first keyword in the second hash table under the condition that the first keyword does not exist in the second hash table;
setting the access times of the first node as target times;
Inserting node information of the first node into a doubly linked list corresponding to the target times in the first hash table, wherein the node information of the first node comprises the first key word, a key value of the first key word and the target times;
and newly adding the first keyword in the third hash table and setting the corresponding active time as the current time.
4. The method of claim 3, wherein inserting the node information of the first node into the doubly linked list of the first hash table corresponding to the target number of times comprises:
inserting node information of the first node into a doubly linked list corresponding to the target times in the first hash table under the condition that the access times in the first hash table comprise the target times;
and under the condition that the access times in the first hash table do not comprise the target times, creating a target doubly-linked list corresponding to the target times in the first hash table, and inserting node information of the first node into the target doubly-linked list.
5. The method of claim 1, wherein prior to the active time of the key in the query third hash table, the method further comprises:
Receiving a checking operation of a user for a key value corresponding to the second keyword;
searching the second key word in the second hash table in response to the checking operation;
returning a null value when the second key is not present in the second hash table;
determining a second node corresponding to the second key when the second key exists in the second hash table;
determining a second key value corresponding to the second node in the first hash table, and returning the second key value;
increasing the access times corresponding to the second node, moving the node information corresponding to the second node in a first hash table to a position corresponding to the increased access times in the first hash table, and updating the doubly-linked list;
and updating the active time corresponding to the second keyword in the third hash table to be the current time.
6. An apparatus for implementing a Redis memory elimination strategy based on LFU algorithm improvement, applied to a Redis database, comprising:
the first query module is used for querying the active time of the keywords in the third hash table;
the first adjusting module is used for reducing the access times of the nodes corresponding to the target keywords under the condition that the active time of the target keywords in the keywords is larger than or equal to a preset value;
The determining module is used for determining a target node corresponding to the target keyword in the second hash table;
the first updating module moves the node information corresponding to the target node in the first hash table to a position corresponding to the reduced access times in the first hash table and updates the doubly linked list;
the deleting module is used for deleting a first double-linked list in the first hash table, deleting data corresponding to the first double-linked list in the second hash table and the third hash table, wherein the first double-linked list is a double-linked list with access times smaller than preset times;
the Redis database comprises a first hash table, a second hash table and a third hash table, wherein the first hash table stores access times and node information of a plurality of nodes with the access times, the relation among the plurality of nodes is stored through the doubly linked list, and the node information comprises keywords, key values and the access times;
the second hash table stores the key words and nodes corresponding to the key words;
the third hash table stores the key and an active time corresponding to the key.
7. The apparatus of claim 6, wherein the apparatus further comprises:
The first receiving module is used for receiving a writing operation of a user aiming at the first keyword;
a first response module configured to search the second hash table for the first key in response to the write operation;
determining a first node corresponding to the first key under the condition that the first key exists in the second hash table;
the second adjusting module is used for updating a first key value corresponding to the first node in the first hash table and increasing the access times corresponding to the first node;
the second updating module is used for moving the node information corresponding to the first node in the first hash table to a position corresponding to the increased access times in the first hash table and updating the doubly-linked list;
and a third updating module, configured to update the active time corresponding to the first key in the third hash table to a current time.
8. The apparatus of claim 7, wherein the first response module is further configured to:
adding a first keyword and the first node corresponding to the first keyword in the second hash table under the condition that the first keyword does not exist in the second hash table;
The apparatus further comprises:
the setting module is used for setting the access times of the first node as target times;
the inserting module is used for inserting the node information of the first node into a doubly linked list corresponding to the target times of the first hash table, wherein the node information of the first node comprises the first key word, the key value of the first key word and the target times;
the setting module is used for newly adding the first keyword in the third hash table and setting the corresponding active time as the current time.
9. The apparatus of claim 8, wherein the insertion module comprises:
an inserting unit, configured to insert node information of the first node into a doubly linked list corresponding to the target number of times in the first hash table, where the access number of times in the first hash table includes the target number of times;
and the creating unit is used for creating a target doubly-linked list corresponding to the target times in the first hash table and inserting the node information of the first node into the target doubly-linked list under the condition that the access times in the first hash table do not comprise the target times.
10. The apparatus of claim 6, wherein the apparatus further comprises:
The second receiving module is used for receiving the checking operation of the key value corresponding to the second key word by the user;
the second response module is used for responding to the check operation and searching the second keyword in the second hash table;
determining a second node corresponding to the second key when the second key exists in the second hash table;
the first return module is used for determining a second key value corresponding to the second node in the first hash table and returning the second key value;
a fourth updating module, configured to increase the number of accesses corresponding to the second node, and move node information corresponding to the second node in a first hash table to a position corresponding to the increased number of accesses in the first hash table, and update the doubly linked list;
a fifth updating module, configured to update an active time corresponding to the second key in the third hash table to a current time;
and the second return module is used for returning a null value when the second key word does not exist in the second hash table.
CN202211696513.1A 2022-12-28 2022-12-28 Method and device for improving redis memory elimination strategy based on LFU algorithm Pending CN116089425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211696513.1A CN116089425A (en) 2022-12-28 2022-12-28 Method and device for improving redis memory elimination strategy based on LFU algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211696513.1A CN116089425A (en) 2022-12-28 2022-12-28 Method and device for improving redis memory elimination strategy based on LFU algorithm

Publications (1)

Publication Number Publication Date
CN116089425A 2023-05-09

Family

ID=86187787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211696513.1A Pending CN116089425A (en) 2022-12-28 2022-12-28 Method and device for improving redis memory elimination strategy based on LFU algorithm

Country Status (1)

Country Link
CN (1) CN116089425A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117349483A (en) * 2023-12-05 2024-01-05 杭州行芯科技有限公司 Parasitic parameter searching method and device, electronic equipment and storage medium
CN117349483B (en) * 2023-12-05 2024-04-09 杭州行芯科技有限公司 Parasitic parameter searching method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination