CN113434285B - Memory management method and device based on key value cache system - Google Patents
Memory management method and device based on key value cache system
- Publication number
- CN113434285B CN113434285B CN202010208691.XA CN202010208691A CN113434285B CN 113434285 B CN113434285 B CN 113434285B CN 202010208691 A CN202010208691 A CN 202010208691A CN 113434285 B CN113434285 B CN 113434285B
- Authority
- CN
- China
- Prior art keywords
- eliminated
- memory
- memory pages
- pages
- page
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1009—Address translation using page tables, e.g. page table structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/122—Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/123—Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
- G06F12/127—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/46—Caching storage objects of specific type in disk cache
- G06F2212/465—Structured object, e.g. database record
Abstract
The application discloses a memory management method and device based on a key-value cache system. Whether a memory page to be eliminated is actually eliminated is decided from the access states of the key-value pairs it holds; memory pages that cannot be eliminated are re-added to the pending memory pages, so that all memory pages can be eliminated and reclaimed periodically.
Description
Technical Field
The present disclosure relates to Cache technology, and more particularly, to a memory management method and apparatus based on Key-Value Cache (Key-Value Cache) system.
Background
Currently, the mainstream Key-Value Cache systems are Memcached and Redis. Both have limitations in memory management, mainly in two aspects: data elimination and memory defragmentation.
Memcached adopts a slab memory management mode: memory pages are divided into memory blocks of different sizes, and data of different sizes are allocated into memory blocks of the corresponding size, which makes memory fragments easier to consolidate. However, a slab-based elimination strategy ignores data balance among slabs, which affects the fairness of the elimination strategy. A slab is a unit for dividing and managing memory of a fixed size.
Redis does not implement its own memory management policy; instead, memory management is delegated to an external memory allocator such as jemalloc or tcmalloc through encapsulation. Such external allocators can only perform general-purpose memory management and cannot be adaptively optimized, so Redis cannot achieve good memory defragmentation while ensuring fairness of the elimination strategy, and memory utilization is therefore low. Here, jemalloc and tcmalloc are widely used external memory allocators.
Disclosure of Invention
The memory management method and device based on a key-value cache system provided by the application can achieve high-quality memory defragmentation while guaranteeing the fairness of the elimination strategy.
The embodiment of the invention provides a memory management method based on a key value cache system, which comprises the following steps:
determining whether to eliminate the memory pages to be eliminated according to the access states of key value pairs in the memory pages to be eliminated;
if the memory pages to be eliminated are determined to be eliminated, releasing the memory pages to be eliminated; and if the memory pages to be eliminated are determined not to be eliminated, adding the memory pages to be eliminated into the memory pages to be processed.
In an exemplary embodiment, the determining whether to eliminate the memory page to be eliminated includes:
selecting the memory pages to be eliminated from the memory pages to be processed according to an elimination strategy;
if it is determined according to the access states of the key-value pairs that all the data units stored in the memory page to be eliminated can be eliminated, determining to eliminate the memory page to be eliminated; and if it is determined according to the access states of the key-value pairs that data units which cannot be eliminated exist among the stored data units, determining not to eliminate the memory page to be eliminated.
In one illustrative example, the elimination strategy includes: a first-in first-out (FIFO) policy, a least frequently used (LFU) policy, or a least recently used (LRU) policy.
In an exemplary embodiment, the method further comprises: setting an identifier for the memory page to be eliminated so that it is marked as an eliminated page.
In an exemplary embodiment, the releasing the memory page to be eliminated includes:
adding all data units in the memory page to be eliminated into an elimination queue, and releasing all data units added into the elimination queue; and releasing the memory pages to be eliminated.
In an exemplary embodiment, the method further comprises:
and managing the memory pages to be processed by using the FIFO strategy.
In an exemplary embodiment, adding the memory page to be eliminated to the memory page to be processed includes: and adding the memory pages to be eliminated into the FIFO queue.
In an exemplary embodiment, after determining to eliminate the memory page to be eliminated and before releasing it, the method further includes:
calculating release space information according to the access frequency of the memory pages to be eliminated and the access frequency of the memory pages with the minimum access frequency in the LFU heat collecting pool;
according to the calculated release space information, releasing the space of the memory page with the minimum access frequency in the LFU heat collecting pool;
wherein the priority of the LFU heat accumulation pool is higher than that of the FIFO queue.
In one illustrative example, the calculating the release space information includes:
calculating the ratio of the access frequency of the memory page to be eliminated to the access frequency of the memory page with the minimum access frequency in the LFU heat collecting pool; and calculating the product of this ratio and a preset weight to obtain a release space ratio value β serving as the release space information.
In an exemplary embodiment, the preset weight is determined according to practical situations, and the preset weight is a value between 0 and 1.
The present application also provides a computer-readable storage medium storing computer-executable instructions for performing any one of the memory management methods described above.
The application further provides a device for realizing memory management, which comprises a memory and a processor, wherein the memory stores the following instructions executable by the processor: the method for performing any one of the above memory management methods.
The application further provides a memory management device, which comprises: a determining module and a first processing module; wherein,
the determining module is used for determining whether to eliminate the memory pages to be eliminated according to the access states of the key value pairs in the memory pages to be eliminated;
the first processing module is used for releasing the memory pages to be eliminated if determining to eliminate the memory pages to be eliminated; and if the memory pages to be eliminated are determined not to be eliminated, adding the memory pages to be eliminated into the memory pages to be processed.
In an exemplary embodiment, the determining module is specifically configured to:
selecting the memory page to be eliminated from the memory pages to be processed according to an elimination strategy; if it is determined according to the access states of the key-value pairs that all the data units stored in the memory page to be eliminated can be eliminated, determining to eliminate the memory page to be eliminated; and if it is determined according to the access states of the key-value pairs that data units which cannot be eliminated exist among the stored data units, determining not to eliminate the memory page to be eliminated.
In an exemplary embodiment, the releasing the memory page in the first processing module includes:
adding all data units in the memory page to be eliminated into an elimination queue, and releasing all data units added into the elimination queue; and releasing the memory pages to be eliminated.
In an exemplary example, the first processing module is further configured to manage the memory pages to be processed using the FIFO policy. Correspondingly,
the memory pages to be processed are included in a FIFO queue, and adding the memory page to be eliminated to the memory pages to be processed is: adding the memory page to be eliminated to the FIFO queue.
In an exemplary embodiment, the apparatus further includes a second processing module configured to:
calculating a release space according to the access frequency of the memory pages to be eliminated and the access frequency of the memory pages with the minimum access frequency in the LFU heat collection pool; according to the calculated release space, releasing the space of the memory page with the minimum access frequency in the LFU heat collecting pool; then executing the release of the memory pages to be eliminated;
wherein the priority of the LFU heat accumulation pool is higher than that of the FIFO queue.
The method and device of the application are based on an elimination strategy at the granularity of memory pages: an eliminated memory page changes from the slab-divided state (i.e., divided into memory blocks) back to the undivided state, so that it can be re-divided according to the sizes of other slabs, which guarantees fairness of the elimination strategy across all slabs. A memory page that cannot be eliminated is re-added to the memory pages to be processed, i.e., it re-enters the cyclic elimination process, so that all memory pages can be eliminated and reclaimed periodically.
Furthermore, through the FIFO policy, the embodiments of the application ensure that all memory pages can be eliminated and reclaimed periodically, so that expired data and memory fragments are naturally cleaned up during the cyclic elimination of memory pages, without adding an extra memory management mechanism.
Further, through the processing of the LFU heat collecting pool, after the memory page to be eliminated is released later, if hot data exists, the hot data is written into the released space in the LFU heat collecting pool, so that the aggregation of the hot data is realized, and the elimination probability of the hot data is reduced.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technical solutions of the present application and constitute a part of this specification; together with the embodiments of the present application they serve to explain the technical solutions, and they do not constitute a limitation thereof.
FIG. 1 is a flow chart of a memory management method based on a key value cache system according to the present application;
FIG. 2 is a schematic diagram of an application scenario embodiment of a memory management method based on a key-value cache system according to the present application;
fig. 3 is a schematic diagram of the composition structure of a memory management device based on a key-value cache system in the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail hereinafter with reference to the accompanying drawings. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be arbitrarily combined with each other.
In one typical configuration of the present application, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media (transmission media) such as modulated data signals and carrier waves.
The steps illustrated in the flowchart of the figures may be performed in a computer system, such as a set of computer-executable instructions. Also, while a logical order is depicted in the flowchart, in some cases, the steps depicted or described may be performed in a different order than presented herein.
Fig. 1 is a flow chart of a memory management method based on a key value cache system according to the present application, as shown in fig. 1, including:
step 100: and determining whether to eliminate the memory page to be eliminated according to the access state of the key value pair in the memory page to be eliminated.
In one illustrative example, the present step may include:
selecting a memory page to be eliminated from the memory pages to be processed according to an elimination strategy;
if it is determined according to the access states of the key-value pairs that all data units (items) stored in the memory page to be eliminated can be eliminated, the memory page is eliminated; if it is determined that items that cannot be eliminated exist among the stored items, for example thread pre-allocated items or items currently being written, it is determined not to eliminate the memory page to be eliminated.
In one illustrative example, it may be determined whether an item is in a thread pre-allocation state or in a writing state based on the access state of a key-value pair.
In one illustrative example, the memory page to be processed may be a memory page that has been memory block partitioned.
In one illustrative example, the elimination strategy in step 100 may include, but is not limited to, a first-in first-out (FIFO) policy. The elimination strategy may also include, for example, a least frequently used (LFU) policy, a least recently used (LRU) policy, and the like. Taking the FIFO policy as an example, the memory page to be eliminated is the page that was allocated first among the memory pages to be processed.
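Taking the FIFO case as an example, the selection step can be sketched as follows (a minimal illustrative model; the `Page` class and all names are assumptions, not taken from the patent):

```python
from collections import deque

class Page:
    """Hypothetical model of a slab-divided memory page pending elimination."""
    def __init__(self, page_id):
        self.page_id = page_id
        self.evicting = False  # set when chosen as the page to be eliminated

def select_victim_fifo(pending):
    """Pick the earliest-allocated page from the pending FIFO queue."""
    victim = pending.popleft()  # FIFO: first allocated, first considered
    victim.evicting = True      # flagged so no new writes land on it
    return victim
```

Under an LRU or LFU elimination strategy only the selection line would change; the flagging of the chosen page stays the same.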
In an illustrative example, step 100 may further comprise:
the memory page to be eliminated is set as an elimination page, such as a set identifier, so that the writing operation of the memory page is not performed any more, and the condition that new writing data is eliminated immediately is avoided.
Step 101: if the memory page to be eliminated is determined to be eliminated, releasing the memory page to be eliminated; and if the memory page to be eliminated is determined not to be eliminated, adding the memory page to be eliminated into the memory page to be processed.
In an exemplary embodiment, the step of releasing the memory page to be eliminated includes:
and adding all the items in the memory page to be eliminated into an elimination queue, releasing all the items added into the elimination queue, and then releasing the memory page to be eliminated.
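The evict-or-requeue decision of steps 100 and 101 can be sketched as below (field names such as `pinned` and `writing` are hypothetical stand-ins for the key-value access states described above):

```python
def try_evict(page, pending, elimination_queue):
    """Return True and release the page if every item in it can be eliminated;
    otherwise re-add the page to the pending queue for a later cycle."""
    if any(it["pinned"] or it["writing"] for it in page["items"]):
        pending.append(page)                 # cannot eliminate: requeue the page
        return False
    elimination_queue.extend(page["items"])  # stage all items for release
    page["items"] = []
    page["freed"] = True                     # then release the page itself
    return True
```

A page containing even one thread pre-allocated or in-flight item is requeued whole; nothing is partially freed.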
Based on this page-granularity elimination strategy, an eliminated memory page changes from the slab-divided state (i.e., divided into memory blocks) back to the undivided state, so that it can be re-divided according to the sizes of other slabs, which guarantees fairness of the elimination strategy across all slabs. A memory page that cannot be eliminated is re-added to the memory pages to be processed, i.e., it re-enters the cyclic elimination process, so that all memory pages can be eliminated and reclaimed periodically.
In one illustrative example, the present application may further include:
and managing the memory pages to be processed by using the FIFO strategy.
In one illustrative example, the present application may further include:
the memory pages to be processed in step 100 are included in the FIFO queue, and the re-adding the non-obsolete memory pages to the memory pages to be processed in step 101 includes: and adding the memory pages which cannot be eliminated into the FIFO queue to wait for the subsequent cycle elimination processing.
According to the embodiments of the invention, the FIFO policy ensures that all memory pages can be eliminated and reclaimed periodically, so that expired data and memory fragments are naturally cleaned up during the cyclic elimination of memory pages, without adding an extra memory management mechanism.
In an exemplary embodiment, after determining to eliminate the memory page to be eliminated and before releasing it, the method may further include:
according to the access frequency of the memory pages to be eliminated and the access frequency of the memory pages with the minimum access frequency in the LFU heat collection pool, calculating release space information;
and releasing the space of the memory page with the minimum access frequency in the LFU heat collecting pool according to the calculated release space information.
The priority of the LFU heat collecting pool is higher than that of the FIFO queue: as long as the LFU heat collecting pool has free space, data is written into it, so that over time, once a steady state is reached, the access frequency of memory pages in the LFU heat collecting pool is far higher than that of memory pages in the FIFO record chain, ensuring that hot data gathers in the LFU heat collecting pool. The memory pages in the LFU heat collecting pool and those in the FIFO record chain are managed separately; that is, the memory pages in the LFU heat collecting pool are not recorded in the FIFO record chain and do not participate in the elimination process of step 100 and step 101.
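The separate bookkeeping described above might look like the following sketch (the class, the capacity value, and all names are assumptions for illustration):

```python
class MemoryManager:
    """Hot-pool pages are not recorded in the FIFO chain and never enter
    the step 100/101 elimination cycle."""
    def __init__(self, hot_capacity=2):
        self.fifo_chain = []    # ordinary pages, in allocation order
        self.lfu_hot_pool = []  # hot pages, managed separately
        self.hot_capacity = hot_capacity

    def place_page(self, page):
        # Higher priority: write into the hot pool whenever it has space.
        if len(self.lfu_hot_pool) < self.hot_capacity:
            self.lfu_hot_pool.append(page)
        else:
            self.fifo_chain.append(page)

    def elimination_candidates(self):
        # Only FIFO-chain pages participate in cyclic elimination.
        return list(self.fifo_chain)
```

Because every write prefers the hot pool, pages that survive there accumulate accesses, while the FIFO chain keeps feeding the elimination cycle.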
In one illustrative example, calculating the release space information may include:
calculating the ratio of the access frequency of the memory pages to be eliminated to the access frequency of the memory page with the minimum access frequency in the LFU heat collecting pool; and calculating the product of the ratio and a preset weight to obtain a release space ratio value beta.
In an exemplary embodiment, the preset weight may be determined according to practical situations, where the preset weight is a value between 0 and 1.
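Numerically, the release space ratio β described above reduces to one line; in this sketch the default weight of 0.5 is an arbitrary example of the preset value between 0 and 1, and the zero-frequency guard is an added assumption:

```python
def release_ratio(victim_freq, hot_min_freq, weight=0.5):
    """beta = weight * (access frequency of the page to be eliminated
    / minimum access frequency among LFU hot-pool pages)."""
    if hot_min_freq <= 0:
        return 0.0  # assumption: no reference frequency, release nothing
    return weight * victim_freq / hot_min_freq
```

For example, a victim page accessed 10 times against a coldest hot-pool page accessed 20 times yields β = 0.5 × 10 / 20 = 0.25 of that page's space to release.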
In one embodiment of the present application, the allocation order of memory pages is recorded through the FIFO queue (also called the FIFO record chain), and the LFU heat collecting pool characterizes the access frequency of memory pages. Through the processing of the LFU heat collecting pool, after the memory page to be eliminated is later released, hot data (if any) is written into the released space in the LFU heat collecting pool, which aggregates hot data and reduces the probability that hot data is eliminated.
The present application also provides a computer-readable storage medium storing computer-executable instructions for performing the memory management method of any one of the above.
The application further provides a device for implementing memory management, including a memory and a processor, where the memory stores the following instructions executable by the processor: a step for performing the memory management method of any of the above.
Fig. 2 is a schematic diagram of an application scenario of the memory management method based on a key-value cache system. As shown in Fig. 2, the memory pages in the LFU heat collecting pool and the memory pages in the FIFO record chain are managed separately; that is, the memory pages in the LFU heat collecting pool are not recorded in the FIFO queue. The FIFO queue (also called the FIFO record chain) records the allocation order of memory pages, and the LFU heat collecting pool characterizes their access frequency. The LFU heat collecting pool has a higher priority than the FIFO queue.
As shown in Fig. 2, when the elimination process is triggered, for example when the remaining free memory space is less than 5% of the total memory space, the memory page that was allocated first is selected from the FIFO queue as the memory page to be eliminated according to the FIFO policy, and an identifier is set on it to indicate that it is an eliminated page, so that write operations to the page are no longer performed; this avoids newly written data being eliminated immediately.
Then, the items in the eliminated page are examined one by one: if it is determined according to the access states of the key-value pairs that all the items stored in the eliminated page can be eliminated, it is determined to eliminate the page; if items that cannot be eliminated exist among the stored items, for example the access state of a key-value pair shows a thread pre-allocated item or an item currently being written, it is determined not to eliminate the page.
Then, if it is determined not to eliminate the page, the page is put back into the FIFO queue to wait for a subsequent elimination cycle;
if it is determined to eliminate the page, the release space is calculated from the access frequency of the page to be eliminated and the access frequency of the memory page with the minimum access frequency in the LFU heat collecting pool; the corresponding share of that coldest page's space in the LFU heat collecting pool is released; and the memory page to be eliminated is then released. In this way, after the eliminated page is released, hot data (if any) can be written into the released space in the LFU heat collecting pool, which aggregates hot data and reduces the probability that hot data is eliminated.
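The whole Fig. 2 flow, from trigger to release, can be condensed into one illustrative pass; the 5% threshold comes from the text, while the data layout, field names, and the 0.5 weight are assumptions:

```python
def eviction_cycle(fifo_queue, hot_pool, free_space, total_space, weight=0.5):
    """One pass of the cyclic elimination flow; returns what happened."""
    if free_space >= 0.05 * total_space:
        return "not_triggered"           # trigger: free memory below 5% of total
    victim = fifo_queue.pop(0)           # FIFO: earliest-allocated page
    victim["evicting"] = True            # eliminated-page flag blocks new writes
    if any(it["pinned"] or it["writing"] for it in victim["items"]):
        victim["evicting"] = False
        fifo_queue.append(victim)        # requeue; wait for a later cycle
        return "requeued"
    if hot_pool:                         # free a share of the coldest hot page first
        coldest = min(hot_pool, key=lambda p: p["freq"])
        coldest["released"] = weight * victim["freq"] / coldest["freq"]
    victim["items"] = []                 # release all items, then the page itself
    victim["freed"] = True
    return "evicted"
```

After "evicted", the freed share of the coldest hot-pool page is available for any hot data displaced from the released page.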
Fig. 3 is a schematic structural diagram of a memory management device based on a key-value cache system according to the present application, and as shown in fig. 3, at least includes: a determining module and a first processing module; wherein,
the determining module is used for determining whether to eliminate the memory page to be eliminated according to the access state of the key value pair in the memory page to be eliminated;
the first processing module is used for releasing the memory pages to be eliminated if the memory pages to be eliminated are determined to be eliminated; and if the memory page to be eliminated is determined not to be eliminated, adding the memory page to be eliminated into the memory page to be processed.
In one illustrative example, the determination module is specifically configured to:
selecting the memory page to be eliminated from the memory pages to be processed according to an elimination strategy; if it is determined according to the access states of the key-value pairs that all the items stored in the memory page to be eliminated can be eliminated, the memory page is eliminated; if items that cannot be eliminated exist among the stored items, for example thread pre-allocated items or items currently being written, it is determined not to eliminate the memory page.
In an exemplary embodiment, releasing the memory page by the first processing module includes:
adding all the items in the memory page to be eliminated into an elimination queue, releasing all the items added into the elimination queue, and then releasing the memory page to be eliminated.
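The two-phase release above can be sketched as follows; `free_item` and `free_page` are hypothetical callbacks standing in for the cache's actual deallocation routines.

```python
from collections import deque

def release_page(page_items, free_item, free_page):
    """Phase 1: move every item into an elimination queue and free the
    items from there; phase 2: only then return the page itself."""
    elimination_queue = deque(page_items)      # enqueue all items first
    while elimination_queue:
        free_item(elimination_queue.popleft()) # free items one by one
    free_page()                                # page is empty: release it
```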
According to the page-based elimination strategy of the present application, an eliminated memory page is changed from a slab-divided state (that is, divided into memory blocks) to a non-slab-divided state, so that it can subsequently be divided according to the size of any other slab, ensuring the fairness of the elimination strategy across all slabs. A memory page that cannot be eliminated is re-added to the memory pages to be processed, that is, it re-enters the cyclic elimination process, so that all memory pages can be eliminated and reclaimed periodically.
In one illustrative example, the first processing module is further configured to manage the memory pages to be processed by using the FIFO strategy; accordingly, the memory pages to be processed are held in a FIFO queue, and re-adding a non-eliminated memory page to the memory pages to be processed means appending it to the FIFO queue to await a subsequent round of elimination processing.
According to the embodiment of the present application, the FIFO strategy ensures that all memory pages are eliminated and reclaimed periodically, so that expired data and memory fragments are naturally reclaimed during the cyclic elimination of memory pages, without adding an extra memory management mechanism.
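The cyclic FIFO elimination described above can be sketched as a single eviction round: the oldest page is popped, released if possible, and otherwise re-appended to the tail so it is revisited later. All names are illustrative assumptions.

```python
from collections import deque

def evict_one(fifo: deque, can_eliminate, release) -> bool:
    """Pop the oldest page; release it if eliminable, else requeue it
    at the tail so every page is periodically reconsidered."""
    if not fifo:
        return False
    page = fifo.popleft()
    if can_eliminate(page):
        release(page)      # expired data and fragments are reclaimed here
        return True
    fifo.append(page)      # pinned right now: retry in a later round
    return False
```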
In an exemplary embodiment, the memory management device further includes a second processing module, configured to:
calculating a release space according to the access frequency of the memory page to be eliminated and the access frequency of the memory page with the minimum access frequency in the LFU heat collecting pool; releasing, according to the calculated release space, the corresponding space of the memory page with the minimum access frequency in the LFU heat collecting pool; and then releasing the memory page to be eliminated.
The priority of the LFU heat collecting pool is higher than that of the FIFO queue; that is, as long as the LFU heat collecting pool has space, data is written into it. Over time, once a steady state is reached, the access frequency of memory pages in the LFU heat collecting pool is therefore far higher than that of memory pages in the FIFO record chain, ensuring that hot data is aggregated in the LFU heat collecting pool. The memory pages in the LFU heat collecting pool and those in the FIFO record chain are managed separately; that is, memory pages in the LFU heat collecting pool are not recorded in the FIFO record chain and do not participate in the elimination process shown in step 100 and step 101.
In one embodiment of the present application, the allocation order of memory pages is recorded through a FIFO queue (also called a FIFO record chain), while the LFU heat collecting pool characterizes the access frequency of memory pages. Through the processing of the LFU heat collecting pool, after the memory page to be eliminated is released, any hot data can be written into the space released in the LFU heat collecting pool, so that hot data is aggregated and the probability of eliminating hot data is reduced.
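The write-path priority described above can be summarized in a small placement function; the region names and byte-based free-space check are illustrative assumptions, not the patented implementation.

```python
def place_data(lfu_pool_free: int, size: int) -> str:
    """Decide which region receives a write: the LFU hot pool takes
    priority whenever it has enough free space, and only otherwise is
    a page from the FIFO record chain used."""
    if lfu_pool_free >= size:
        return "lfu_pool"    # hot pool has room: aggregate data there
    return "fifo_chain"      # otherwise allocate from FIFO-managed pages
```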
Although embodiments of the present application are disclosed above, they are provided only to facilitate understanding of the present application and are not intended to limit it. Any person skilled in the art to which this application pertains may make modifications and variations in form and detail of implementation without departing from the spirit and scope of the disclosure, but the scope of protection of the application remains subject to the appended claims.
Claims (13)
1. A memory management method based on a key value cache system comprises the following steps:
determining whether to eliminate the memory pages to be eliminated according to the access states of key value pairs in the memory pages to be eliminated;
if the memory pages to be eliminated are determined to be eliminated, releasing the memory pages to be eliminated; if the memory pages to be eliminated are determined not to be eliminated, adding the memory pages to be eliminated into the memory pages to be processed;
the adding the memory page to be eliminated into the memory page to be processed includes: adding the memory pages to be eliminated into an FIFO queue;
after determining to eliminate the memory pages to be eliminated and before releasing the memory pages to be eliminated, the method further comprises: calculating the ratio of the access frequency of the memory page to be eliminated to the access frequency of the memory page with the minimum access frequency in the LFU heat collecting pool; calculating the product of the ratio and a preset weight to obtain a release space ratio value beta serving as release space information; and releasing, according to the calculated release space information, the space of the memory page with the minimum access frequency in the LFU heat collecting pool; wherein the priority of the LFU heat collecting pool is higher than that of the FIFO queue.
2. The memory management method of claim 1, wherein the determining whether to retire the memory page to be retired comprises:
selecting the memory pages to be eliminated from the memory pages to be processed according to an elimination strategy;
if it is determined, according to the access states of the key value pairs, that all the data units stored in the memory page to be eliminated can be eliminated, eliminating the memory page to be eliminated; and if it is determined, according to the access states of the key value pairs, that data units which cannot be eliminated exist among the data units stored in the memory page to be eliminated, determining that the memory page to be eliminated is not eliminated.
3. The memory management method of claim 2, wherein the elimination policy comprises: a first-in first-out (FIFO) policy, a least frequently used (LFU) policy, or a least recently used (LRU) policy.
4. The memory management method of claim 2, the method further comprising: setting an identifier for the memory page to be eliminated to indicate that it is an eliminated page.
5. The memory management method according to claim 1, wherein the releasing the memory page to be eliminated includes:
adding all data units in the memory page to be eliminated into an elimination queue, and releasing all data units added into the elimination queue; and releasing the memory pages to be eliminated.
6. The memory management method of claim 2, the method further comprising:
and managing the memory pages to be processed by using the FIFO strategy.
7. The memory management method according to claim 1, wherein the preset weight is determined according to actual conditions and is a value between 0 and 1.
8. A computer readable storage medium storing computer executable instructions for performing the memory management method of any one of claims 1 to 7.
9. An apparatus for implementing memory management, comprising a memory and a processor, wherein the memory stores instructions executable by the processor to perform the steps of the memory management method of any one of claims 1 to 7.
10. A memory management device, comprising: a determining module and a first processing module; wherein,
the determining module is used for determining whether to eliminate the memory pages to be eliminated according to the access states of the key value pairs in the memory pages to be eliminated;
the first processing module is used for releasing the memory pages to be eliminated if determining to eliminate the memory pages to be eliminated; if the memory pages to be eliminated are determined not to be eliminated, adding the memory pages to be eliminated into the memory pages to be processed;
the adding the memory page to be eliminated into the memory page to be processed includes: adding the memory pages to be eliminated into an FIFO queue;
after determining to eliminate the memory pages to be eliminated and before releasing the memory pages to be eliminated, the first processing module is further configured to: calculate the ratio of the access frequency of the memory page to be eliminated to the access frequency of the memory page with the minimum access frequency in the LFU heat collecting pool; calculate the product of the ratio and a preset weight to obtain a release space ratio value beta serving as release space information; and release, according to the calculated release space information, the space of the memory page with the minimum access frequency in the LFU heat collecting pool; wherein the priority of the LFU heat collecting pool is higher than that of the FIFO queue.
11. The memory management device of claim 10, wherein the determining module is specifically configured to:
selecting the memory pages to be eliminated from the memory pages to be processed according to an elimination strategy; if it is determined, according to the access states of the key value pairs, that all the data units stored in the memory page to be eliminated can be eliminated, eliminating the memory page to be eliminated; and if it is determined, according to the access states of the key value pairs, that data units which cannot be eliminated exist among the data units stored in the memory page to be eliminated, determining that the memory page to be eliminated is not eliminated.
12. The memory management device of claim 10, wherein releasing the memory page by the first processing module comprises:
adding all data units in the memory page to be eliminated into an elimination queue, and releasing all data units added into the elimination queue; and releasing the memory pages to be eliminated.
13. The memory management device of claim 12, wherein the first processing module is further configured to: manage the memory pages to be processed by using the FIFO strategy.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010208691.XA CN113434285B (en) | 2020-03-23 | 2020-03-23 | Memory management method and device based on key value cache system |
PCT/CN2021/082256 WO2021190468A1 (en) | 2020-03-23 | 2021-03-23 | Memory management method and apparatus based on key-value cache system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113434285A CN113434285A (en) | 2021-09-24 |
CN113434285B true CN113434285B (en) | 2024-03-26 |
Family
ID=77752632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010208691.XA Active CN113434285B (en) | 2020-03-23 | 2020-03-23 | Memory management method and device based on key value cache system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113434285B (en) |
WO (1) | WO2021190468A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106294200A (en) * | 2016-08-26 | 2017-01-04 | 昆明理工大学 | A kind of operating system page life cycle algorithm |
CN106469121A (en) * | 2016-09-09 | 2017-03-01 | 深圳大学 | A kind of method and device of Memory Allocation |
CN108173974A (en) * | 2018-03-01 | 2018-06-15 | 南京邮电大学 | A kind of HC Model inner buffer data based on distributed caching Memcached eliminate method |
CN110134514A (en) * | 2019-04-18 | 2019-08-16 | 华中科技大学 | Expansible memory object storage system based on isomery memory |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9742860B2 (en) * | 2012-02-28 | 2017-08-22 | International Business Machines Corporation | Bi-temporal key value cache system |
- 2020-03-23 CN CN202010208691.XA patent/CN113434285B/en active Active
- 2021-03-23 WO PCT/CN2021/082256 patent/WO2021190468A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
Hai Jin et al. "Hotspot-Aware Hybrid Memory Management for In-Memory Key-Value Stores." IEEE Transactions on Parallel and Distributed Systems, vol. 31, no. 4, 2019. *
Also Published As
Publication number | Publication date |
---|---|
CN113434285A (en) | 2021-09-24 |
WO2021190468A1 (en) | 2021-09-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||