CN117093508B - Memory resource management method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN117093508B (application CN202311340587.6A)
- Authority
- CN
- China
- Prior art keywords
- memory block
- queue
- memory
- stored
- resource management
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0253—Garbage collection, i.e. reclamation of unreferenced memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to the field of computer technology, and discloses a memory resource management method and apparatus, an electronic device and a storage medium. The method includes: acquiring a memory resource request from a user for a disk array control card; allocating a target memory block to the user in response to the memory resource request, and at the same time inserting the identifier of the target memory block into a memory resource management queue; and when a preset reclamation condition is met, determining the memory blocks to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue, so as to reclaim those memory blocks. Because the identifiers of the memory blocks allocated to users are stored in the memory resource management queue and memory blocks are reclaimed according to the allocation heat represented by that queue, the reclaimed memory blocks have low allocation heat, a just-reclaimed memory block is rarely allocated again immediately, memory reclamation efficiency is improved, and a foundation is laid for improving the performance of the disk array control card.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a memory resource management method and apparatus, an electronic device, and a storage medium.
Background
Currently, RAID (Redundant Array of Independent Disks) technology is widely used in various fields as a high-reliability data storage solution. A user applies for memory resources of the disk array control card (RAID card) in order to access data on the RAID, and the allocation of the RAID card's memory resources directly affects its performance, so how to manage the memory resources of the RAID card has become an important research topic.
In the prior art, the memory resources of a RAID card are generally allocated by means of a memory pool. A one-dimensional array corresponding to the memory pool is first constructed; when a user applies to access the RAID, a memory block is taken from the one-dimensional array and allocated to the user; and when the memory pool runs short of resources, the system's memory reclamation process is started to reclaim all idle memory blocks.
However, because memory pool resources are limited, reclaiming memory blocks in this way requires a large number of system calls and extra time, and a just-reclaimed memory block may immediately be allocated again, so memory reclamation efficiency is low and the performance of the disk array control card is degraded.
Disclosure of Invention
The application provides a memory resource management method and apparatus, an electronic device and a storage medium, which are used to overcome the defects of the prior art that memory recovery efficiency is low and the performance of the disk array control card is not guaranteed.
A first aspect of the present application provides a memory resource management method, including:
acquiring a memory resource request of a user for a disk array control card;
while allocating the target memory block to the user in response to the memory resource request, inserting the target memory block identifier into a memory resource management queue;
and when the preset recovery condition is met, determining the memory blocks to be recovered according to the allocation heat of each memory block represented by the memory resource management queue so as to recover the memory blocks to be recovered.
In an alternative embodiment, the inserting the target memory block identifier into a memory resource management queue includes:
and inserting the target memory block identifier into the head of a memory resource management queue.
In an alternative embodiment, before inserting the target memory block identification into the head of a memory resource management queue, the method further comprises:
judging whether the target memory block identifier is stored in the memory resource management queue or not;
If the target memory block identifier is stored in the memory resource management queue, moving the target memory block identifier to the head of the memory resource management queue;
and if the target memory block identifier is not stored in the memory resource management queue, executing the step of inserting the target memory block identifier into the head of the memory resource management queue.
In an alternative embodiment, the inserting the target memory block identifier into a memory resource management queue includes:
judging whether the target memory block identifier is stored in the memory resource management queue or not;
if the target memory block identifier is stored in the memory resource management queue, increasing the reference count corresponding to the target memory block identifier by 1;
the memory block identifiers in the memory resource management queue are ordered from largest to smallest reference count, the head of the memory resource management queue is used for storing the memory block identifier with the largest reference count, and the tail of the memory resource management queue is used for storing the memory block identifier with the smallest reference count.
In an alternative embodiment, the method further comprises:
If the target memory block identifier is not stored in the memory resource management queue, inserting the target memory block identifier into the tail of the memory resource management queue, and setting the reference count corresponding to the target memory block identifier to be 1.
In an optional implementation manner, when the preset reclamation condition is met, determining the memory block to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue includes:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the memory resource management queue as the memory block to be recycled;
and the allocation heat of the memory blocks corresponding to the memory block identifiers stored at the tail of the memory resource management queue is the lowest.
In an alternative embodiment, the memory resource management queue includes a first queue and a second queue, and the inserting the target memory block identifier into the memory resource management queue includes:
judging whether the target memory block identifier is stored in the first queue or not;
if the target memory block identification is stored in the first queue, migrating the target memory block identification from the first queue to the head of the second queue;
And if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
In an alternative embodiment, before determining whether the target memory block identifier is already saved in the first queue, the method further includes:
judging whether the target memory block identifier is stored in the second queue or not;
if the target memory block identification is stored in the second queue, moving the target memory block identification to the head of the second queue;
and if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue.
In an alternative embodiment, the memory resource management queue includes a first queue and a second queue, and the inserting the target memory block identifier into the memory resource management queue includes:
judging whether the target memory block identifier is stored in the first queue or not;
if the target memory block identifier is stored in the first queue, migrating the target memory block identifier from the first queue to the tail of the second queue, and setting the reference count corresponding to the target memory block identifier to be 1;
And if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
In an alternative embodiment, before determining whether the target memory block identifier is already saved in the first queue, the method further includes:
judging whether the target memory block identifier is stored in the second queue or not;
if the target memory block identification is stored in the second queue, increasing the reference count corresponding to the target memory block identification by 1;
if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue;
the memory block identifiers in the second queue are ordered from largest to smallest reference count, the head of the second queue is used for storing the memory block identifier with the largest reference count, and the tail of the second queue is used for storing the memory block identifier with the smallest reference count.
In an optional implementation manner, when the preset reclamation condition is met, determining the memory block to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue includes:
When a preset recycling condition is met, determining the memory blocks corresponding to the memory block identifiers stored at the tail of the first queue and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue as the memory blocks to be recycled;
the memory blocks corresponding to the memory block identifiers stored at the tail of the first queue are lowest in allocated heat in the first queue, and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue are lowest in allocated heat in the second queue.
In an alternative embodiment, the memory resource management queue includes a first priority queue and a second priority queue, and the inserting the target memory block identifier into the memory resource management queue includes:
judging whether the target memory block identifier is stored in the first priority queue or the second priority queue;
if the target memory block identification is stored in the first priority queue or the second priority queue, increasing the reference count of the target memory block identification by 1;
and when the reference count of any memory block identifier stored in the first priority queue reaches a preset reference count threshold, moving the memory block identifier from the first priority queue to the head of the second priority queue.
In an alternative embodiment, the method further comprises:
and when the idle time length of any memory block identifier stored in the second priority queue reaches a preset idle time length threshold value, moving the memory block identifier from the second priority queue to the head of the first priority queue.
In an alternative embodiment, the method further comprises:
if the target memory block identifier is not stored in the first priority queue or the second priority queue, inserting the target memory block identifier into the first priority queue, and setting the reference count corresponding to the target memory block identifier to be 1.
In an optional implementation manner, when the preset reclamation condition is met, determining the memory block to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue includes:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the first priority queue as the memory block to be recycled;
and the allocation heat of the memory block corresponding to the memory block identifier stored at the tail of the first priority queue is the lowest.
In an alternative embodiment, the method further comprises:
After the memory block to be recycled is recycled, the memory block identifier to be recycled is saved to a preset recycling queue;
when any memory block identifier in the preset recovery queue is redetermined as a target memory block, determining a target priority queue corresponding to the memory block according to the reference count of the memory block, so as to insert the memory block identifier into the target priority queue.
In an alternative embodiment, the method further comprises:
and when the number of the idle memory blocks of the disk array control card is lower than a preset recovery threshold value or when the timing reaches a preset recovery period, determining that a preset recovery condition is met.
A second aspect of the present application provides a memory resource management device, including:
the acquisition module is used for acquiring a memory resource request of a user for the disk array control card;
the allocation module is used for allocating the target memory block to the user in response to the memory resource request and inserting the target memory block identifier into a memory resource management queue;
and the recovery module is used for determining the memory blocks to be recovered according to the distribution heat of each memory block represented by the memory resource management queue when the preset recovery condition is met so as to recover the memory blocks to be recovered.
In an alternative embodiment, the allocation module is specifically configured to:
and inserting the target memory block identifier into the head of a memory resource management queue.
In an alternative embodiment, the allocation module is further configured to:
before inserting the target memory block identifier into the head of a memory resource management queue, judging whether the target memory block identifier is stored in the memory resource management queue or not;
if the target memory block identifier is stored in the memory resource management queue, moving the target memory block identifier to the head of the memory resource management queue;
and if the target memory block identifier is not stored in the memory resource management queue, executing the step of inserting the target memory block identifier into the head of the memory resource management queue.
In an alternative embodiment, the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in the memory resource management queue or not;
if the target memory block identifier is stored in the memory resource management queue, increasing the reference count corresponding to the target memory block identifier by 1;
the memory block identifiers in the memory resource management queue are ordered from largest to smallest reference count, the head of the memory resource management queue is used for storing the memory block identifier with the largest reference count, and the tail of the memory resource management queue is used for storing the memory block identifier with the smallest reference count.
In an alternative embodiment, the allocation module is further configured to:
if the target memory block identifier is not stored in the memory resource management queue, inserting the target memory block identifier into the tail of the memory resource management queue, and setting the reference count corresponding to the target memory block identifier to be 1.
In an alternative embodiment, the recycling module is specifically configured to:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the memory resource management queue as the memory block to be recycled;
and the allocation heat of the memory blocks corresponding to the memory block identifiers stored at the tail of the memory resource management queue is the lowest.
In an optional implementation manner, the memory resource management queue includes a first queue and a second queue, and the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in the first queue or not;
if the target memory block identification is stored in the first queue, migrating the target memory block identification from the first queue to the head of the second queue;
and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
In an alternative embodiment, the allocation module is further configured to:
before judging whether the target memory block identifier is stored in the first queue, judging whether the target memory block identifier is stored in the second queue;
if the target memory block identification is stored in the second queue, moving the target memory block identification to the head of the second queue;
and if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue.
In an optional implementation manner, the memory resource management queue includes a first queue and a second queue, and the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in the first queue or not;
if the target memory block identifier is stored in the first queue, migrating the target memory block identifier from the first queue to the tail of the second queue, and setting the reference count corresponding to the target memory block identifier to be 1;
and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
In an alternative embodiment, the allocation module is further configured to:
before judging whether the target memory block identifier is stored in the first queue, judging whether the target memory block identifier is stored in the second queue;
if the target memory block identification is stored in the second queue, increasing the reference count corresponding to the target memory block identification by 1;
if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue;
the memory block identifiers in the second queue are ordered from largest to smallest reference count, the head of the second queue is used for storing the memory block identifier with the largest reference count, and the tail of the second queue is used for storing the memory block identifier with the smallest reference count.
In an alternative embodiment, the recycling module is specifically configured to:
when a preset recycling condition is met, determining the memory blocks corresponding to the memory block identifiers stored at the tail of the first queue and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue as the memory blocks to be recycled;
The memory blocks corresponding to the memory block identifiers stored at the tail of the first queue are lowest in allocated heat in the first queue, and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue are lowest in allocated heat in the second queue.
In an optional implementation manner, the memory resource management queue includes a first priority queue and a second priority queue, and the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in the first priority queue or the second priority queue;
if the target memory block identification is stored in the first priority queue or the second priority queue, increasing the reference count of the target memory block identification by 1;
and when the reference count of any memory block identifier stored in the first priority queue reaches a preset reference count threshold, moving the memory block identifier from the first priority queue to the head of the second priority queue.
In an alternative embodiment, the allocation module is further configured to:
and when the idle time length of any memory block identifier stored in the second priority queue reaches a preset idle time length threshold value, moving the memory block identifier from the second priority queue to the head of the first priority queue.
In an alternative embodiment, the allocation module is further configured to:
if the target memory block identifier is not stored in the first priority queue or the second priority queue, inserting the target memory block identifier into the first priority queue, and setting the reference count corresponding to the target memory block identifier to be 1.
In an alternative embodiment, the recycling module is specifically configured to:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the first priority queue as the memory block to be recycled;
and the allocation heat of the memory block corresponding to the memory block identifier stored at the tail of the first priority queue is the lowest.
In an alternative embodiment, the recycling module is further configured to:
after the memory block to be recycled is recycled, the memory block identifier to be recycled is saved to a preset recycling queue;
when any memory block identifier in the preset recovery queue is redetermined as a target memory block, determining a target priority queue corresponding to the memory block according to the reference count of the memory block, so as to insert the memory block identifier into the target priority queue.
In an alternative embodiment, the recycling module is further configured to:
and when the number of the idle memory blocks of the disk array control card is lower than a preset recovery threshold value or when the timing reaches a preset recovery period, determining that a preset recovery condition is met.
A third aspect of the present application provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored by the memory such that the at least one processor performs the method as described above in the first aspect and the various possible designs of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method as described above in the first aspect and the various possible designs of the first aspect.
The technical scheme of the application has the following advantages:
The present application provides a memory resource management method and apparatus, an electronic device and a storage medium. The method includes: acquiring a memory resource request from a user for a disk array control card; allocating a target memory block to the user in response to the memory resource request, and at the same time inserting the identifier of the target memory block into a memory resource management queue; and when a preset reclamation condition is met, determining the memory blocks to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue, so as to reclaim those memory blocks. Because the identifiers of the memory blocks allocated to users are stored in the memory resource management queue and memory blocks are reclaimed according to the allocation heat represented by that queue, the reclaimed memory blocks have low allocation heat, a just-reclaimed memory block is rarely allocated again immediately, memory reclamation efficiency is improved, and a foundation is laid for improving the performance of the disk array control card.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a memory resource management system according to an embodiment of the present application;
fig. 2 is a flow chart of a memory resource management method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a memory resource management method according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a memory resource management queue according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of another memory resource management queue according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of still another memory resource management queue according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of yet another memory resource management queue according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a memory resource management device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but to illustrate the concepts of the present application to those skilled in the art with reference to the specific embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. In the following description of the embodiments, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
With the rapid development of data centers and cloud computing, the performance and data management of storage systems are becoming increasingly important. RAID (Redundant Array of Independent Disks) technology, as a high-reliability data storage solution, is widely used in data centers, enterprise applications and other fields. A RAID card is a dedicated hardware device that implements the RAID function: it supports a plurality of RAID arrays, combines multiple hard disks into a logical volume, and provides data redundancy and improved read/write performance. The RAID card has its own memory and processor for managing disks and implementing RAID levels, so it provides higher storage performance than a single hard disk as well as data backup capabilities; it is mounted on the PCIe bus of the storage server and can therefore be regarded as a peripheral of the storage server. RAID technology improves data storage performance and reliability by combining multiple disks into one logical disk, and the RAID card implements RAID in hardware, which greatly improves data read/write speed and I/O efficiency. In a RAID card, memory allocation and reclamation account for a significant proportion of the work.
However, as hard disk capacity and data access speed keep growing, how to allocate and reclaim the memory of the RAID card more efficiently has become a bottleneck, and a new technique is needed to improve the efficiency of RAID card memory allocation and reclamation so as to achieve higher read/write performance and better data protection. When a RAID card is used for data storage, the efficiency of memory allocation and reclamation is tied to the performance and stability of the entire storage system, and during RAID card operation it directly affects overall system performance. Traditional memory allocation and reclamation methods cannot meet the demands of high concurrency, which causes a series of problems: (1) memory over-allocation: traditional allocation methods often over-allocate memory, wasting memory resources; (2) low memory reclamation efficiency: traditional reclamation approaches require a large number of system calls and extra processing time, resulting in low efficiency; (3) memory leaks in extreme cases: if memory is not reclaimed in time, memory leaks may occur.
Memory pool technology not only reduces memory fragmentation but also optimizes the memory allocation and reclamation mechanism, thereby improving memory utilization and storage system performance. A memory pool is a method of pre-allocating and managing a set of memory, which can greatly improve the efficiency of memory allocation and reclamation and reduce the generation of memory fragments. There are many different ways to implement a memory pool, such as a circular buffer or a buddy algorithm.
Although the memory pool can effectively reduce memory fragmentation and improve memory utilization, there are some disadvantages.
First, the size of the memory pool is typically fixed and cannot be dynamically expanded. This wastes memory when demand is low and fails to meet demand when memory is insufficient. A memory pool manager that can expand dynamically based on memory usage is therefore needed.
Second, because multiple threads use the same memory pool, contention between threads arises. If this contention is not resolved effectively, the efficiency of memory allocation and reclamation drops, and problems such as deadlock may even occur. A sound synchronization mechanism therefore needs to be introduced into the memory pool to resolve thread contention.
In view of the above problems, the embodiments of the present application provide a memory resource management method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a memory resource request from a user for a disk array control card; allocating a target memory block to the user in response to the memory resource request, and at the same time inserting the identifier of the target memory block into a memory resource management queue; and when a preset reclamation condition is met, determining the memory blocks to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue, so as to reclaim those memory blocks. Because the identifiers of the memory blocks allocated to users are stored in the memory resource management queue and memory blocks are reclaimed according to the allocation heat represented by that queue, the reclaimed memory blocks have low allocation heat, a just-reclaimed memory block is rarely allocated again immediately, memory reclamation efficiency is improved, and a foundation is laid for improving the performance of the disk array control card.
The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
First, a description will be given of a structure of a memory resource management system on which the present application is based:
The memory resource management method and apparatus, electronic device and storage medium provided by the present application are suitable for allocating and reclaiming the memory resources of a disk array control card. FIG. 1 is a schematic structural diagram of a memory resource management system according to an embodiment of the present application, which mainly includes a disk array control card, a data acquisition device and a memory resource management device. Specifically, the data acquisition device collects a user's memory resource request for the disk array control card and forwards it to the memory resource management device; the memory resource management device builds a memory resource management queue according to the memory resource requests and, when memory resources need to be reclaimed, performs the corresponding reclamation operation based on the memory resource management queue.
The embodiment of the application provides a memory resource management method which is used for distributing and recycling memory resources of a disk array control card. The execution body of the embodiment of the application is electronic equipment, such as a server, a desktop computer, a notebook computer, a tablet computer and other electronic equipment capable of distributing and recycling memory resources of a disk array control card.
As shown in fig. 2, a flow chart of a memory resource management method according to an embodiment of the present application is shown, where the method includes:
step 201, a memory resource request of a user for a disk array control card is obtained.
Specifically, when a user needs to access any disk area in the disk array, a memory request is sent to the disk array control card to request a target memory block corresponding to the disk area.
Step 202, inserting the target memory block identifier into the memory resource management queue while allocating the target memory block to the user in response to the memory resource request.
Specifically, the target memory block requested by the user is first determined according to the memory resource request; when the target memory block is allocated to the user, the target memory block identifier is inserted into the memory resource management queue, so that the memory block allocation status of the disk array control card is recorded in the memory resource management queue. The target memory block identifier is the unique identifier of the target memory block within the disk array control card, for example its number.
And 203, determining the memory blocks to be recycled according to the allocation heat of each memory block represented by the memory resource management queue when the preset recycling condition is met, so as to recycle the memory blocks to be recycled.
It should be noted that the preset recovery condition may be determined according to an actual application scenario.
Specifically, in an embodiment, when the number of free memory blocks of the disk array control card is lower than a preset reclamation threshold, or when the timing reaches a preset reclamation period, it is determined that a preset reclamation condition is satisfied.
Specifically, when the number of idle memory blocks of the disk array control card is lower than the preset reclamation threshold, it can be determined that the current memory resources of the disk array control card are insufficient. The timer is reset after each memory reclamation; when the timer reaches the preset reclamation period, it indicates that the disk array control card has not reclaimed memory for a long time.
Specifically, when a preset recycling condition is met, determining the allocation heat of each memory block according to the storage position of each memory block identifier in the memory resource management queue, and further determining a plurality of memory blocks with lower allocation heat as memory blocks to be recycled so as to recycle the memory blocks to be recycled.
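By way of illustration only, the following C sketch shows one possible form of the reclamation trigger described above; the structure and names (mem_pool, RECLAIM_THRESHOLD, RECLAIM_PERIOD_MS) are assumptions introduced for the example, not details taken from the embodiment.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical state of the RAID card memory pool; names and values are illustrative. */
#define RECLAIM_THRESHOLD   64          /* minimum number of free blocks            */
#define RECLAIM_PERIOD_MS   (5 * 1000)  /* reclaim at least every 5 seconds         */

struct mem_pool {
    uint32_t free_block_count;   /* idle memory blocks currently in the pool        */
    uint64_t last_reclaim_ms;    /* timestamp of the last reclamation, reset each time */
};

/* Returns true when the preset reclamation condition is met: either the pool is
 * running low on free blocks, or the timer since the last reclamation has
 * reached the preset reclamation period. */
static bool reclaim_condition_met(const struct mem_pool *pool, uint64_t now_ms)
{
    if (pool->free_block_count < RECLAIM_THRESHOLD)
        return true;
    return (now_ms - pool->last_reclaim_ms) >= RECLAIM_PERIOD_MS;
}
```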
On the basis of the foregoing embodiment, as a practical implementation manner, in one embodiment, inserting the target memory block identifier into the memory resource management queue includes:
at step 2021, the target memory block identification is inserted into the head of the memory resource management queue.
As shown in fig. 3, which is a schematic diagram of the memory resource management method provided in the embodiment of the present application, when a user requests a memory resource, a target memory block is taken from the freelist linked list; when a memory block is reclaimed or added, it is put back at the head of the freelist (pointed to by the freelist Head pointer). After an idle target memory block has been allocated to the user, its identification information is added to the head of the usedlist linked list (the memory resource management queue, pointed to by the usedlist Head pointer); for example, the allocation thread allocates memory blocks No. 1, No. 2, No. 3 and No. 4 in turn and places their identifiers at the head of the usedlist. When the memory resources of the disk array control card are insufficient, the reclamation thread traverses the usedlist in reverse order, searches for reclaimable memory pages (memory blocks), i.e., pages that are not dirty, not mirror pages and not being written back, and then releases those reclaimable memory blocks.
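A minimal C sketch of this freelist/usedlist scheme follows. The list structures and the reduction of the dirty/mirror/write-back checks to a single flags field are illustrative assumptions; a real driver would use its own block descriptors and page-state tests.

```c
#include <stddef.h>

struct mem_block {
    int               id;      /* unique number of the block in the RAID card */
    unsigned          flags;   /* BLK_DIRTY | BLK_MIRROR | BLK_WRITEBACK       */
    struct mem_block *prev, *next;
};

enum { BLK_DIRTY = 1u, BLK_MIRROR = 2u, BLK_WRITEBACK = 4u };

struct block_list { struct mem_block *head, *tail; };

static void list_push_head(struct block_list *l, struct mem_block *b)
{
    b->prev = NULL;
    b->next = l->head;
    if (l->head) l->head->prev = b; else l->tail = b;
    l->head = b;
}

static struct mem_block *list_pop_head(struct block_list *l)
{
    struct mem_block *b = l->head;
    if (!b) return NULL;
    l->head = b->next;
    if (l->head) l->head->prev = NULL; else l->tail = NULL;
    b->prev = b->next = NULL;
    return b;
}

/* Allocation: take a free block from the freelist and record it at the
 * head of the usedlist, so recently allocated blocks sit near the head. */
static struct mem_block *alloc_block(struct block_list *freelist,
                                     struct block_list *usedlist)
{
    struct mem_block *b = list_pop_head(freelist);
    if (b) list_push_head(usedlist, b);
    return b;
}

/* Reclamation: traverse the usedlist from the tail (coldest blocks first) and
 * release blocks that are not dirty, not mirrored and not being written back. */
static void reclaim_blocks(struct block_list *usedlist,
                           struct block_list *freelist, int want)
{
    struct mem_block *b = usedlist->tail;
    while (b && want > 0) {
        struct mem_block *prev = b->prev;
        if ((b->flags & (BLK_DIRTY | BLK_MIRROR | BLK_WRITEBACK)) == 0) {
            /* unlink from the usedlist and return the block to the freelist */
            if (b->prev) b->prev->next = b->next; else usedlist->head = b->next;
            if (b->next) b->next->prev = b->prev; else usedlist->tail = b->prev;
            b->prev = b->next = NULL;
            list_push_head(freelist, b);
            want--;
        }
        b = prev;
    }
}
```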
Specifically, in one embodiment, before inserting the target memory block identifier into the head of the memory resource management queue, it may be determined whether the target memory block identifier is already stored in the memory resource management queue; if the target memory block identification is stored in the memory resource management queue, moving the target memory block identification to the head of the memory resource management queue; and if the target memory block identifier is not stored in the memory resource management queue, executing the step of inserting the target memory block identifier into the head of the memory resource management queue.
It should be noted that the memory resource management queue may specifically be an LRU (Least Recently Used) queue. LRU is a page replacement algorithm that evicts the page that has not been used for the longest time.
Specifically, as shown in fig. 4, which is a schematic structural diagram of a memory resource management queue according to an embodiment of the present application, when a target memory block identifier is to be stored in the memory resource management queue, it is first determined whether the identifier is already stored in the queue; if so, the target memory block identifier is moved to the head of the memory resource management queue, otherwise the target memory block identifier is directly inserted at the head of the memory resource management queue.
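The following is a minimal C sketch of this LRU-style behaviour. The node/queue types and the linear lookup are assumptions made for brevity; an implementation would normally pair the list with a hash map for O(1) lookup.

```c
#include <stddef.h>

struct lru_node {
    int              block_id;
    struct lru_node *prev, *next;
};

struct lru_queue { struct lru_node *head, *tail; };

static void lru_unlink(struct lru_queue *q, struct lru_node *n)
{
    if (n->prev) n->prev->next = n->next; else q->head = n->next;
    if (n->next) n->next->prev = n->prev; else q->tail = n->prev;
    n->prev = n->next = NULL;
}

static void lru_push_head(struct lru_queue *q, struct lru_node *n)
{
    n->prev = NULL;
    n->next = q->head;
    if (q->head) q->head->prev = n; else q->tail = n;
    q->head = n;
}

/* On allocation of block_id: if its identifier is already in the queue, move it
 * to the head; otherwise insert the caller-provided node at the head. */
static void lru_record_alloc(struct lru_queue *q, struct lru_node *fresh, int block_id)
{
    for (struct lru_node *n = q->head; n; n = n->next) {
        if (n->block_id == block_id) {   /* already saved: move to head */
            lru_unlink(q, n);
            lru_push_head(q, n);
            return;
        }
    }
    fresh->block_id = block_id;          /* not saved yet: insert at head */
    lru_push_head(q, fresh);
}
```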
Specifically, in one embodiment, it may be determined whether the target memory block identifier is already stored in the memory resource management queue; if the target memory block identifier is stored in the memory resource management queue, increasing the reference count corresponding to the target memory block identifier by 1.
The memory block identifiers in the memory resource management queue are ordered from largest to smallest reference count; the head of the memory resource management queue is used for storing the memory block identifier with the largest reference count, and the tail of the memory resource management queue is used for storing the memory block identifier with the smallest reference count.
Accordingly, in one embodiment, if the target memory block identifier is not stored in the memory resource management queue, the target memory block identifier is inserted into the tail of the memory resource management queue, and the reference count corresponding to the target memory block identifier is set to 1.
It should be noted that the memory resource management queue may specifically be an LFU (Least Frequently Used) queue. LFU is a page replacement algorithm that replaces the page with the smallest reference count, since frequently used pages should have larger reference counts.
Specifically, as shown in fig. 5, which is a schematic structural diagram of another memory resource management queue provided in the embodiment of the present application, when a target memory block identifier is to be stored in the memory resource management queue, it is first determined whether the identifier is already stored in the queue; if so, the reference count corresponding to the target memory block identifier is increased by 1 and the memory resource management queue is reordered according to the increased reference count, otherwise the target memory block identifier is directly inserted at the tail of the memory resource management queue and its reference count is set to 1. If two memory block identifiers in the memory resource management queue have the same reference count, they are ordered by allocation time: the one allocated more recently is placed closer to the head.
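A minimal C sketch of this LFU-style ordering is given below; the types and the forward-walk re-sorting strategy are assumptions for illustration, and the tie-break places the most recently allocated identifier ahead of identifiers with the same count, as described above.

```c
#include <stddef.h>

struct lfu_node {
    int              block_id;
    unsigned         ref_count;
    struct lfu_node *prev, *next;
};

struct lfu_queue { struct lfu_node *head, *tail; };

static void lfu_unlink(struct lfu_queue *q, struct lfu_node *n)
{
    if (n->prev) n->prev->next = n->next; else q->head = n->next;
    if (n->next) n->next->prev = n->prev; else q->tail = n->prev;
    n->prev = n->next = NULL;
}

/* Insert n before pos; pos == NULL means append at the tail. */
static void lfu_insert_before(struct lfu_queue *q, struct lfu_node *pos, struct lfu_node *n)
{
    n->next = pos;
    n->prev = pos ? pos->prev : q->tail;
    if (n->prev) n->prev->next = n; else q->head = n;
    if (pos) pos->prev = n; else q->tail = n;
}

/* On allocation of block_id: if present, increase its reference count by 1 and
 * move it towards the head; otherwise insert it at the tail with count 1. */
static void lfu_record_alloc(struct lfu_queue *q, struct lfu_node *fresh, int block_id)
{
    for (struct lfu_node *n = q->head; n; n = n->next) {
        if (n->block_id != block_id)
            continue;
        n->ref_count++;
        /* walk forward past nodes with a smaller or equal count
         * (equal counts: this block was allocated most recently) */
        struct lfu_node *pos = n->prev;
        while (pos && pos->ref_count <= n->ref_count)
            pos = pos->prev;
        lfu_unlink(q, n);
        lfu_insert_before(q, pos ? pos->next : q->head, n);
        return;
    }
    fresh->block_id  = block_id;
    fresh->ref_count = 1;
    lfu_insert_before(q, NULL, fresh);   /* NULL position == queue tail */
}
```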
Further, in an embodiment, for the memory resource management queue shown in fig. 4 and 5, when a preset reclamation condition is satisfied, a memory block corresponding to a memory block identifier stored at the tail of the memory resource management queue is determined as a memory block to be reclaimed.
The allocation heat of the memory blocks corresponding to the memory block identifiers stored at the tail of the memory resource management queue is the lowest.
Specifically, when the preset reclamation condition is met, the memory block corresponding to the memory block identifier stored at the tail of the memory resource management queue is first determined as the memory block to be reclaimed; while that memory block is being reclaimed, its identifier is deleted from the memory resource management queue, i.e., the tail of the memory resource management queue is released. The capacity of the memory resource management queue can be set according to the number of memory blocks of the disk array control card; when the memory resource management queue is full, it can be determined that the memory resources of the disk array control card are insufficient, and therefore that the preset reclamation condition is currently met.
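The following short C sketch illustrates tail reclamation for the single-queue schemes of fig. 4 and fig. 5; the queue type, the capacity check and the use of -1 as an "empty" marker are assumptions introduced for the example.

```c
#include <stddef.h>

struct mgmt_node  { int block_id; struct mgmt_node *prev, *next; };
struct mgmt_queue { struct mgmt_node *head, *tail; size_t len, capacity; };

/* Returns the identifier of the block to reclaim (the tail has the lowest
 * allocation heat), or -1 if the queue is empty. The caller releases the
 * block and returns it to the free list. */
static int pick_block_to_reclaim(struct mgmt_queue *q)
{
    struct mgmt_node *victim = q->tail;
    if (!victim)
        return -1;
    q->tail = victim->prev;
    if (q->tail) q->tail->next = NULL; else q->head = NULL;
    q->len--;
    return victim->block_id;
}

/* The capacity can be set to the number of memory blocks of the RAID card;
 * a full queue is one way of detecting that memory resources are tight. */
static int queue_is_full(const struct mgmt_queue *q)
{
    return q->len >= q->capacity;
}
```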
Based on the above embodiment, to further improve the memory reclamation efficiency, so as to ensure that the reclaimed memory block is a memory block with low allocation heat, as an implementation manner, in one embodiment, the memory resource management queue includes a first queue and a second queue, and inserting the target memory block identifier into the memory resource management queue includes:
Step 2022, determining whether the target memory block identifier is already stored in the first queue;
step 2023, if the target memory block identifier is already stored in the first queue, migrating the target memory block identifier from the first queue to the head of the second queue;
if the target memory block identification is not stored in the first queue, step 2024, the target memory block identification is inserted into the head of the first queue.
The first queue may specifically be a FIFO queue, and the second queue may be an LRU queue.
Specifically, in an embodiment, before determining whether the target memory block identifier is already stored in the first queue, it may also be determined whether the target memory block identifier is already stored in the second queue; if the target memory block identification is stored in the second queue, moving the target memory block identification to the head of the second queue; if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue.
Specifically, as shown in fig. 6, which is a schematic structural diagram of still another memory resource management queue provided in this embodiment, when a target memory block identifier is to be stored in the memory resource management queue, it is first determined whether the identifier is already stored in the second queue; if so, the target memory block identifier is moved to the head of the second queue; otherwise it is further determined whether the identifier is already stored in the first queue. If it is stored in the first queue, the target memory block is a block being allocated for the second time, and the target memory block identifier is migrated from the first queue to the head of the second queue; otherwise the target memory block identifier is inserted at the head of the first queue.
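A minimal C sketch of this two-queue scheme is given below, assuming (as stated above) that the first queue is a FIFO queue and the second an LRU queue; the type names and linear lookup are illustrative assumptions.

```c
#include <stddef.h>

struct tq_node { int block_id; struct tq_node *prev, *next; };
struct tq_list { struct tq_node *head, *tail; };

static void tq_unlink(struct tq_list *l, struct tq_node *n)
{
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    n->prev = n->next = NULL;
}

static void tq_push_head(struct tq_list *l, struct tq_node *n)
{
    n->prev = NULL; n->next = l->head;
    if (l->head) l->head->prev = n; else l->tail = n;
    l->head = n;
}

static struct tq_node *tq_find(struct tq_list *l, int block_id)
{
    for (struct tq_node *n = l->head; n; n = n->next)
        if (n->block_id == block_id) return n;
    return NULL;
}

/* Record an allocation of block_id following the insertion rules of FIG. 6:
 * q1_fifo holds identifiers allocated once, q2_lru those allocated twice or more. */
static void tq_record_alloc(struct tq_list *q1_fifo, struct tq_list *q2_lru,
                            struct tq_node *fresh, int block_id)
{
    struct tq_node *n = tq_find(q2_lru, block_id);
    if (n) {                       /* already in the second queue: move to its head */
        tq_unlink(q2_lru, n);
        tq_push_head(q2_lru, n);
        return;
    }
    n = tq_find(q1_fifo, block_id);
    if (n) {                       /* second allocation: promote from Q1 to Q2 */
        tq_unlink(q1_fifo, n);
        tq_push_head(q2_lru, n);
        return;
    }
    fresh->block_id = block_id;    /* first allocation: insert at the head of Q1 */
    tq_push_head(q1_fifo, fresh);
}
```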
Specifically, in an embodiment, it may also be determined whether the target memory block identifier is already stored in the first queue; if the target memory block identification is stored in the first queue, migrating the target memory block identification from the first queue to the tail of the second queue, and setting the reference count corresponding to the target memory block identification to be 1; and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
Specifically, in an embodiment, before determining whether the target memory block identifier is already stored in the first queue, it may also be determined whether the target memory block identifier is already stored in the second queue;
if the target memory block identification is stored in the second queue, increasing the reference count corresponding to the target memory block identification by 1; if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue.
The memory block identifiers in the second queue are ordered from largest to smallest reference count; the head of the second queue is used for storing the memory block identifier with the largest reference count, and the tail of the second queue is used for storing the memory block identifier with the smallest reference count.
The first queue may specifically be a FIFO queue, and the second queue may be an LFU queue. When the second queue adopts an LFU queue, the resulting memory resource management queue can be regarded as a combination of the queues shown in fig. 5 and fig. 6; the implementation principle is the same and is not repeated here.
Further, in an embodiment, when the preset reclamation condition is satisfied, the memory block corresponding to the memory block identifier stored at the tail of the first queue and the memory block corresponding to the memory block identifier stored at the tail of the second queue may be determined as the memory block to be reclaimed.
The memory block corresponding to the memory block identifier stored at the tail of the first queue has the lowest allocation heat in the first queue, and the memory block corresponding to the memory block identifier stored at the tail of the second queue has the lowest allocation heat in the second queue.
Specifically, when a preset reclaiming condition is met, the memory blocks corresponding to the memory block identifiers stored at the tail parts of the first queue and the tail parts of the second queue are determined to be memory blocks to be reclaimed, and the identifiers of the two memory blocks to be reclaimed are deleted from the first queue and the second queue when the memory blocks to be reclaimed are reclaimed, namely the tail parts of the first queue and the second queue are released.
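As a rough illustration, the reclamation step for the two-queue scheme can be sketched in C as follows; the types and the use of -1 for an empty queue are assumptions for the example.

```c
#include <stddef.h>

struct dq_node { int block_id; struct dq_node *prev, *next; };
struct dq_list { struct dq_node *head, *tail; };

/* Remove and return the identifier at the tail of a queue, or -1 if empty. */
static int dq_pop_tail(struct dq_list *l)
{
    struct dq_node *n = l->tail;
    if (!n) return -1;
    l->tail = n->prev;
    if (l->tail) l->tail->next = NULL; else l->head = NULL;
    return n->block_id;
}

/* Pick one victim from each queue: the tails carry the lowest allocation
 * heat in their respective queues. -1 means that queue was empty. */
static void pick_two_queue_victims(struct dq_list *q1, struct dq_list *q2,
                                   int *victim_q1, int *victim_q2)
{
    *victim_q1 = dq_pop_tail(q1);
    *victim_q2 = dq_pop_tail(q2);
}
```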
In order to further improve the memory reclamation efficiency based on the above embodiments, as an implementation manner, in one embodiment, the memory resource management queue includes a first priority queue and a second priority queue, and inserting the target memory block identifier into the memory resource management queue includes:
step 2025, determining whether the target memory block identifier is already stored in the first priority queue or the second priority queue;
step 2026, if the target memory block identifier is already stored in the first priority queue or the second priority queue, increasing the target memory block identifier by 1 from the corresponding reference count;
at step 2027, when the reference count of any memory block identifier stored in the first priority queue reaches the preset reference count threshold, the memory block identifier is moved from the first priority queue to the head of the second priority queue.
When the idle duration of any memory block identifier stored in the second priority queue reaches a preset idle duration threshold, the memory block identifier is moved from the second priority queue to the head of the first priority queue. The idle duration indicates how long the memory block has gone without being requested by a user since it was last allocated.
It should be noted that, the first priority queue is the queue with the lowest priority, and the second priority queue has a higher priority than the first priority queue, and in practical application, the memory resource management queue may include a plurality of priority queues, and each priority queue may be an LRU queue.
Specifically, in an embodiment, if the target memory block identifier is not stored in the first priority queue or the second priority queue, the target memory block identifier is inserted into the first priority queue, and the reference count corresponding to the target memory block identifier is set to 1.
Specifically, as shown in fig. 7, which is a schematic structural diagram of yet another memory resource management queue provided in this embodiment of the present application, fig. 7 takes as an example a memory resource management queue that includes a plurality of priority queues: Q0 denotes the first priority queue, Q1 the second priority queue, and Qk a third priority queue whose priority is higher than that of the second priority queue. When a target memory block identifier is to be stored in the memory resource management queue, it is first determined whether the identifier is already stored in Q0, Q1 or Qk; if it is stored in any of the priority queues, the reference count corresponding to the target memory block identifier is increased by 1. When the reference count of a memory block identifier in any priority queue reaches the preset reference count threshold corresponding to that priority queue, the memory block identifier is promoted, i.e., moved to the next-higher priority queue. When the idle duration of the memory block corresponding to a memory block identifier in any priority queue other than the lowest-priority queue reaches the preset idle duration threshold, the memory block identifier is demoted, i.e., moved to the next-lower priority queue. If the target memory block identifier is not stored in any of the priority queues, it is inserted into the lowest-priority queue (Q0) and its reference count is set to 1.
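A minimal C sketch of the promotion/demotion logic follows. The number of levels, the per-level threshold formula and the idle-time constant are assumptions introduced for the example; the handling of brand-new identifiers (inserted into Q0 with count 1) is omitted for brevity.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_LEVELS           3      /* illustrative: Q0, Q1, Qk                 */
#define PROMOTE_THRESHOLD    10     /* per-level reference count step (assumed) */
#define IDLE_DEMOTE_MS       60000  /* idle time that triggers demotion         */

struct ml_node {
    int             block_id;
    unsigned        ref_count;
    uint64_t        last_alloc_ms;
    int             level;
    struct ml_node *prev, *next;
};
struct ml_list { struct ml_node *head, *tail; };

static void ml_unlink(struct ml_list *l, struct ml_node *n)
{
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    n->prev = n->next = NULL;
}

static void ml_push_head(struct ml_list *l, struct ml_node *n)
{
    n->prev = NULL; n->next = l->head;
    if (l->head) l->head->prev = n; else l->tail = n;
    l->head = n;
}

/* On allocation: bump the reference count and promote the identifier to the
 * next-higher priority queue when its level's threshold is reached. */
static void ml_record_alloc(struct ml_list levels[NUM_LEVELS],
                            struct ml_node *n, uint64_t now_ms)
{
    n->ref_count++;
    n->last_alloc_ms = now_ms;
    if (n->level + 1 < NUM_LEVELS &&
        n->ref_count >= (unsigned)((n->level + 1) * PROMOTE_THRESHOLD)) {
        ml_unlink(&levels[n->level], n);
        n->level++;
        ml_push_head(&levels[n->level], n);      /* promote */
    }
}

/* Periodic scan: demote identifiers that have stayed idle too long, except
 * those already in the lowest-priority queue. */
static void ml_demote_idle(struct ml_list levels[NUM_LEVELS], uint64_t now_ms)
{
    for (int lvl = 1; lvl < NUM_LEVELS; lvl++) {
        struct ml_node *n = levels[lvl].head;
        while (n) {
            struct ml_node *next = n->next;
            if (now_ms - n->last_alloc_ms >= IDLE_DEMOTE_MS) {
                ml_unlink(&levels[lvl], n);
                n->level = lvl - 1;
                ml_push_head(&levels[lvl - 1], n);  /* demote */
            }
            n = next;
        }
    }
}
```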
Specifically, in an embodiment, when a preset reclamation condition is satisfied, a memory block corresponding to a memory block identifier stored at the tail of the first priority queue is determined as a memory block to be reclaimed.
The allocation heat of the memory block corresponding to the memory block identifier stored at the tail of the first priority queue is the lowest.
Specifically, when the preset reclaiming condition is satisfied, the memory block corresponding to the memory block identifier stored at the tail of the lowest priority queue may be used as the memory block to be reclaimed.
Specifically, in one embodiment, to reduce the risk of erroneously reclaiming a memory block, as shown in fig. 7, after the memory block to be reclaimed is reclaimed, its memory block identifier is saved to a preset reclamation queue (Q-history); when any memory block identifier in the preset reclamation queue is determined again as a target memory block, the target priority queue corresponding to the memory block is determined according to the reference count of the memory block, so as to insert the memory block identifier into the target priority queue.
Specifically, when any memory block identifier in the preset reclamation queue is determined again as a target memory block, the target priority queue corresponding to the memory block can be determined according to the reference count the memory block had when it was reclaimed. For example, suppose the reference count range corresponding to Q0 is 0-10 and the range corresponding to Q1 is 11-20. If the reference count of the memory block was 10 when it was reclaimed, then after the memory block is determined again as the target memory block its reference count is increased by 1, i.e. the current reference count is 11, and the corresponding target priority queue is Q1. If the reference count of the memory block was 9 when it was reclaimed, the current reference count is 10, and the corresponding target priority queue is Q0.
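Following the worked example above, the rule for choosing the target priority queue on re-insertion from the history queue could be sketched as below; the count ranges (0-10 for Q0, 11-20 for Q1) and the helper name are illustrative assumptions taken from the example rather than fixed values of the embodiment.

```python
# Assumed reference count ranges per priority queue, taken from the example:
# Q0 covers counts 0-10, Q1 covers counts 11-20, higher counts go to Qk.
QUEUE_RANGES = [(0, 10), (11, 20)]

def target_queue_index(ref_count_at_reclaim: int) -> int:
    """Queue a block from the history queue should re-enter, given the
    reference count it had when reclaimed plus one for the new allocation."""
    current = ref_count_at_reclaim + 1
    for index, (low, high) in enumerate(QUEUE_RANGES):
        if low <= current <= high:
            return index
    return len(QUEUE_RANGES)          # anything larger goes to the highest queue

# Matches the example: count 10 at reclaim time becomes 11 and maps to Q1,
# count 9 becomes 10 and maps to Q0.
assert target_queue_index(10) == 1
assert target_queue_index(9) == 0
```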
Specifically, after the memory block is reclaimed, its reference count may instead be cleared; in that case, when any memory block identifier in the preset reclamation queue is determined again as the target memory block, its reference count is set to 1, and the corresponding target priority queue is the lowest priority queue Q0.
According to the memory resource management method provided by the embodiments of the application, a memory resource request of a user for the disk array control card is obtained; while the target memory block is allocated to the user in response to the memory resource request, the target memory block identifier is inserted into a memory resource management queue; and when the preset reclamation condition is met, the memory blocks to be reclaimed are determined according to the allocation heat of each memory block represented by the memory resource management queue, so that the memory blocks to be reclaimed are reclaimed. In this method, the identifiers of the memory blocks allocated to users are stored in the memory resource management queue, and memory blocks are reclaimed according to the allocation heat represented by that queue, so the reclaimed memory blocks have low allocation heat and the situation in which a just-reclaimed memory block is immediately allocated again rarely occurs, which improves memory reclamation efficiency and lays a foundation for improving the performance of the disk array control card. A memory allocation and reclamation mechanism based on multiple multi-level queues is also provided, which further improves memory reclamation efficiency. In addition, by means of the preset reclamation queue, the risk of erroneously reclaiming memory blocks is reduced.
The embodiment of the application provides a memory resource management device, which is used for executing the memory resource management method provided by the embodiment.
Fig. 8 is a schematic structural diagram of a memory resource management device according to an embodiment of the present application. The memory resource management device 80 includes: an acquisition module 801, an allocation module 802, and a reclamation module 803.
The acquisition module is used for acquiring a memory resource request of a user for the disk array control card; the allocation module is used for allocating the target memory block to the user in response to the memory resource request and inserting the target memory block identifier into the memory resource management queue; and the reclamation module is used for, when the preset reclamation condition is met, determining the memory blocks to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue, so as to reclaim the memory blocks to be reclaimed.
Specifically, in an embodiment, the allocation module is specifically configured to:
the target memory block identification is inserted into the head of the memory resource management queue.
Specifically, in an embodiment, the allocation module is further configured to:
before inserting the target memory block identifier into the head of the memory resource management queue, judging whether the target memory block identifier is stored in the memory resource management queue;
If the target memory block identification is stored in the memory resource management queue, moving the target memory block identification to the head of the memory resource management queue;
and if the target memory block identifier is not stored in the memory resource management queue, executing the step of inserting the target memory block identifier into the head of the memory resource management queue.
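A minimal sketch of this single-queue, move-to-head behavior is shown below; it is only an illustration, and the use of a Python OrderedDict as the memory resource management queue is an assumption.

```python
from collections import OrderedDict

class LruQueueSketch:
    """Single memory resource management queue kept in LRU order:
    head = most recently allocated identifier, tail = reclaim candidate."""

    def __init__(self):
        self._queue = OrderedDict()            # keys stored head-to-tail

    def on_allocate(self, block_id):
        if block_id in self._queue:
            # Already stored: move the identifier to the head of the queue.
            self._queue.move_to_end(block_id, last=False)
        else:
            # Not stored yet: insert the identifier at the head of the queue.
            self._queue[block_id] = True
            self._queue.move_to_end(block_id, last=False)

    def pick_victim(self):
        # The tail holds the identifier with the lowest allocation heat.
        return next(reversed(self._queue)) if self._queue else None
```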
Specifically, in an embodiment, the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in a memory resource management queue;
if the target memory block identifier is stored in the memory resource management queue, increasing the reference count corresponding to the target memory block identifier by 1;
the memory block identifiers in the memory resource management queue are ordered from large to small according to the reference count, the head of the memory resource management queue is used for storing the memory block identifier with the largest reference count, and the tail of the memory resource management queue is used for storing the memory block identifier with the smallest reference count.
Specifically, in an embodiment, the allocation module is further configured to:
if the target memory block identifier is not stored in the memory resource management queue, inserting the target memory block identifier into the tail of the memory resource management queue, and setting the reference count corresponding to the target memory block identifier to be 1.
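The reference-count-ordered variant described above might look like the following sketch; keeping the queue sorted by fully re-sorting it on every allocation is a simplification chosen for readability, not a statement about the data structure actually used.

```python
class CountOrderedQueueSketch:
    """Queue ordered by reference count, largest count at the head;
    a new identifier enters at the tail with a count of 1."""

    def __init__(self):
        self._entries = []                     # block identifiers, head first
        self._counts = {}                      # block id -> reference count

    def on_allocate(self, block_id):
        if block_id in self._counts:
            self._counts[block_id] += 1        # already stored: bump the count
        else:
            self._counts[block_id] = 1         # not stored: count 1, tail position
            self._entries.append(block_id)
        # Keep the head holding the largest reference count.
        self._entries.sort(key=lambda b: self._counts[b], reverse=True)

    def pick_victim(self):
        # The tail identifier has the smallest count, i.e. the lowest heat.
        return self._entries[-1] if self._entries else None
```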
Specifically, in one embodiment, the recycling module is specifically configured to:
when the preset recovery condition is met, determining the memory block corresponding to the memory block identifier stored at the tail of the memory resource management queue as the memory block to be recovered;
the allocation heat of the memory blocks corresponding to the memory block identifiers stored at the tail of the memory resource management queue is the lowest.
Specifically, in one embodiment, the memory resource management queue includes a first queue and a second queue, and the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in a first queue or not;
if the target memory block identification is stored in the first queue, migrating the target memory block identification from the first queue to the head of the second queue;
and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
Specifically, in an embodiment, the allocation module is further configured to:
before judging whether the target memory block identifier is stored in the first queue, judging whether the target memory block identifier is stored in the second queue;
if the target memory block identification is stored in the second queue, moving the target memory block identification to the head of the second queue;
If the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue.
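The first-queue/second-queue behavior described above (insert at the head of the first queue on a first allocation, promote to the head of the second queue on a repeat allocation, refresh the position on later hits) can be sketched as follows; the OrderedDict-based queues are again an assumption for illustration.

```python
from collections import OrderedDict

class TwoQueueSketch:
    """First queue holds identifiers allocated once; a repeat allocation
    promotes an identifier to the head of the second queue."""

    def __init__(self):
        self.first = OrderedDict()             # head-to-tail order
        self.second = OrderedDict()

    def on_allocate(self, block_id):
        if block_id in self.second:
            # Already promoted: move it back to the head of the second queue.
            self.second.move_to_end(block_id, last=False)
        elif block_id in self.first:
            # Second allocation: migrate from the first queue to the head of the second.
            del self.first[block_id]
            self.second[block_id] = True
            self.second.move_to_end(block_id, last=False)
        else:
            # First allocation: insert at the head of the first queue.
            self.first[block_id] = True
            self.first.move_to_end(block_id, last=False)
```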
Specifically, in one embodiment, the memory resource management queue includes a first queue and a second queue, and the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in a first queue or not;
if the target memory block identification is stored in the first queue, migrating the target memory block identification from the first queue to the tail of the second queue, and setting the reference count corresponding to the target memory block identification to be 1;
and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
Specifically, in an embodiment, the allocation module is further configured to:
before judging whether the target memory block identifier is stored in the first queue, judging whether the target memory block identifier is stored in the second queue;
if the target memory block identification is stored in the second queue, increasing the reference count corresponding to the target memory block identification by 1;
if the target memory block identification is not stored in the second queue, executing the step of judging whether the target memory block identification is stored in the first queue;
The memory block identifiers in the second queue are ordered from large to small according to the reference count, the head of the second queue is used for storing the memory block identifier with the largest reference count, and the tail of the second queue is used for storing the memory block identifier with the smallest reference count.
Specifically, in one embodiment, the recycling module is specifically configured to:
when the preset recovery condition is met, determining the memory block corresponding to the memory block identifier stored at the tail of the first queue and the memory block corresponding to the memory block identifier stored at the tail of the second queue as memory blocks to be recovered;
the memory blocks corresponding to the memory block identifiers stored at the tail of the first queue have the lowest allocation heat in the first queue, and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue have the lowest allocation heat in the second queue.
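The count-based variant above, where the second queue is kept ordered by reference count and reclamation takes the tail of both queues, might be sketched like this; the list-plus-sort representation of the second queue is an illustrative simplification.

```python
from collections import OrderedDict

class TwoQueueCountSketch:
    """First queue: LRU of identifiers allocated once.
    Second queue: ordered by reference count, largest count at the head."""

    def __init__(self):
        self.first = OrderedDict()             # head-to-tail order
        self.second = []                       # block identifiers, head first
        self.counts = {}                       # counts for identifiers in the second queue

    def on_allocate(self, block_id):
        if block_id in self.counts:
            # Already in the second queue: increase its reference count.
            self.counts[block_id] += 1
        elif block_id in self.first:
            # Repeat allocation: migrate to the tail of the second queue with count 1.
            del self.first[block_id]
            self.counts[block_id] = 1
            self.second.append(block_id)
        else:
            # First allocation: insert at the head of the first queue.
            self.first[block_id] = True
            self.first.move_to_end(block_id, last=False)
        # Keep the head of the second queue holding the largest reference count.
        self.second.sort(key=lambda b: self.counts[b], reverse=True)

    def pick_victims(self):
        """Reclaim candidates: the tail of each queue."""
        victims = []
        if self.first:
            victims.append(next(reversed(self.first)))
        if self.second:
            victims.append(self.second[-1])
        return victims
```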
Specifically, in one embodiment, the memory resource management queue includes a first priority queue and a second priority queue, and the allocation module is specifically configured to:
judging whether the target memory block identification is stored in a first priority queue or a second priority queue;
if the target memory block identification is stored in the first priority queue or the second priority queue, increasing the reference count corresponding to the target memory block identification by 1;
And when the reference count of any memory block identifier stored in the first priority queue reaches a preset reference count threshold, moving the memory block identifier from the first priority queue to the head of the second priority queue.
Specifically, in an embodiment, the allocation module is further configured to:
and when the idle time of any memory block identifier stored in the second priority queue reaches a preset idle time threshold, moving the memory block identifier from the second priority queue to the head of the first priority queue.
Specifically, in an embodiment, the allocation module is further configured to:
if the target memory block identifier is not stored in the first priority queue or the second priority queue, inserting the target memory block identifier into the first priority queue, and setting the reference count corresponding to the target memory block identifier to be 1.
Specifically, in one embodiment, the recycling module is specifically configured to:
when a preset recovery condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the first priority queue as a memory block to be recovered;
the allocation heat of the memory block corresponding to the memory block identifier stored at the tail of the first priority queue is the lowest.
Specifically, in an embodiment, the recycling module is further configured to:
After the memory blocks to be recycled are recycled, the memory block identifiers to be recycled are stored in a preset recycling queue;
when any memory block identifier in the preset recovery queue is redetermined as a target memory block, determining a target priority queue corresponding to the memory block according to the reference count of the memory block, so as to insert the memory block identifier into the target priority queue.
Specifically, in an embodiment, the recycling module is further configured to:
and when the number of the idle memory blocks of the disk array control card is lower than a preset reclamation threshold value or when the timing reaches a preset reclamation period, determining that a preset reclamation condition is met.
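The preset reclamation condition described here, namely that the number of free memory blocks falls below a threshold or that a reclamation period elapses, could be checked with a small helper like the one below; the threshold and period values are assumptions for the example.

```python
import time

class ReclaimTriggerSketch:
    """Sketch of the preset reclamation condition check."""

    def __init__(self, free_block_threshold=32, period_seconds=30.0):
        self.free_block_threshold = free_block_threshold   # assumed threshold
        self.period_seconds = period_seconds               # assumed period
        self._last_run = time.monotonic()

    def should_reclaim(self, free_block_count: int) -> bool:
        now = time.monotonic()
        if free_block_count < self.free_block_threshold:
            return True                         # too few free blocks
        if now - self._last_run >= self.period_seconds:
            self._last_run = now                # periodic reclamation
            return True
        return False
```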
The specific manner in which the respective modules perform the operations of the memory resource management device in this embodiment has been described in detail in the embodiments related to the method, and will not be described in detail here.
The memory resource management device provided in the embodiments of the present application is configured to execute the memory resource management method provided in the foregoing embodiments, and the implementation manner and principle of the method are the same and are not repeated.
The embodiment of the application provides an electronic device, which is used for executing the memory resource management method provided by the embodiment.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 90 includes: at least one processor 91 and a memory 92.
The memory stores computer-executable instructions; at least one processor executes computer-executable instructions stored in the memory, causing the at least one processor to perform the memory resource management method as provided in the above embodiments.
The electronic device provided in the embodiment of the present application is configured to execute the memory resource management method provided in the foregoing embodiment, and its implementation manner and principle are the same and are not repeated.
The embodiment of the application provides a computer readable storage medium, in which computer executable instructions are stored, and when a processor executes the computer executable instructions, the memory resource management method provided in any embodiment is implemented.
The storage medium including the computer executable instructions provided in the embodiments of the present application may be used to store the computer executable instructions of the memory resource management method provided in the foregoing embodiments, and the implementation manner and principle of the implementation are the same, and are not repeated.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working process of the above-described device may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (15)
1. A memory resource management method, comprising:
acquiring a memory resource request of a user for a disk array control card;
While distributing the target memory block to the user in response to the memory resource request, inserting a target memory block identifier into a memory resource management queue;
when a preset recycling condition is met, determining a memory block to be recycled according to the allocation heat of each memory block represented by the memory resource management queue so as to recycle the memory block to be recycled;
the memory resource management queue includes a first priority queue and a second priority queue, and the inserting the target memory block identifier into the memory resource management queue includes:
judging whether the target memory block identifier is stored in the first priority queue or the second priority queue;
if the target memory block identification is stored in the first priority queue or the second priority queue, increasing the reference count of the target memory block identification by 1;
when the reference count of any memory block identifier stored in the first priority queue reaches a preset reference count threshold, moving the memory block identifier from the first priority queue to the head of the second priority queue;
when the idle time of any memory block identifier stored in the second priority queue reaches a preset idle time threshold, moving the memory block identifier from the second priority queue to the head of the first priority queue;
If the target memory block identifier is not stored in the first priority queue or the second priority queue, inserting the target memory block identifier into the first priority queue, and setting the reference count corresponding to the target memory block identifier to be 1;
when the preset recycling condition is met, determining the memory block to be recycled according to the allocation heat of each memory block represented by the memory resource management queue, including:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the first priority queue as the memory block to be recycled;
wherein the allocation heat of the memory block corresponding to the memory block identifier stored at the tail of the first priority queue is the lowest;
after the memory block to be recycled is recycled, the memory block identifier to be recycled is saved to a preset recycling queue;
when any memory block identifier in the preset recovery queue is redetermined as a target memory block, determining a target priority queue corresponding to the memory block according to the reference count of the memory block, so as to insert the memory block identifier into the target priority queue.
2. The method of claim 1, wherein inserting the target memory block identification into a memory resource management queue comprises:
And inserting the target memory block identifier into the head of a memory resource management queue.
3. The method of claim 2, wherein prior to inserting the target memory block identification into the head of a memory resource management queue, the method further comprises:
judging whether the target memory block identifier is stored in the memory resource management queue or not;
if the target memory block identifier is stored in the memory resource management queue, moving the target memory block identifier to the head of the memory resource management queue;
and if the target memory block identifier is not stored in the memory resource management queue, executing the step of inserting the target memory block identifier into the head of the memory resource management queue.
4. The method of claim 1, wherein inserting the target memory block identification into a memory resource management queue comprises:
judging whether the target memory block identifier is stored in the memory resource management queue or not;
if the target memory block identifier is stored in the memory resource management queue, increasing the reference count corresponding to the target memory block identifier by 1;
the memory block identifiers in the memory resource management queue are ordered from large to small according to the reference count, the head of the memory resource management queue is used for storing the memory block identifier with the largest reference count, and the tail of the memory resource management queue is used for storing the memory block identifier with the smallest reference count.
5. The method according to claim 4, wherein the method further comprises:
if the target memory block identifier is not stored in the memory resource management queue, inserting the target memory block identifier into the tail of the memory resource management queue, and setting the reference count corresponding to the target memory block identifier to be 1.
6. The method according to any one of claims 1-5, wherein when a preset reclamation condition is met, determining the memory block to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue includes:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the memory resource management queue as the memory block to be recycled;
and the allocation heat of the memory blocks corresponding to the memory block identifiers stored at the tail of the memory resource management queue is the lowest.
7. The method of claim 1, wherein the memory resource management queue comprises a first queue and a second queue, and wherein inserting the target memory block identification into the memory resource management queue comprises:
judging whether the target memory block identifier is stored in the first queue or not;
If the target memory block identification is stored in the first queue, migrating the target memory block identification from the first queue to the head of the second queue;
and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
8. The method of claim 7, wherein prior to determining whether the target memory block identification has been saved in the first queue, the method further comprises:
judging whether the target memory block identifier is stored in the second queue or not;
if the target memory block identification is stored in the second queue, moving the target memory block identification to the head of the second queue;
and if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue.
9. The method of claim 1, wherein the memory resource management queue comprises a first queue and a second queue, and wherein inserting the target memory block identification into the memory resource management queue comprises:
judging whether the target memory block identifier is stored in the first queue or not;
If the target memory block identifier is stored in the first queue, migrating the target memory block identifier from the first queue to the tail of the second queue, and setting the reference count corresponding to the target memory block identifier to be 1;
and if the target memory block identification is not stored in the first queue, inserting the target memory block identification into the head of the first queue.
10. The method of claim 9, wherein prior to determining whether the target memory block identification has been saved in the first queue, the method further comprises:
judging whether the target memory block identifier is stored in the second queue or not;
if the target memory block identification is stored in the second queue, increasing the reference count corresponding to the target memory block identification by 1;
if the target memory block identifier is not stored in the second queue, executing the step of judging whether the target memory block identifier is stored in the first queue;
the memory block identifiers in the second queue are ordered from large to small according to the reference count, the head of the second queue is used for storing the memory block identifier with the largest reference count, and the tail of the second queue is used for storing the memory block identifier with the smallest reference count.
11. The method according to any one of claims 7-10, wherein when a preset reclamation condition is met, determining the memory block to be reclaimed according to the allocation heat of each memory block represented by the memory resource management queue includes:
when a preset recycling condition is met, determining the memory blocks corresponding to the memory block identifiers stored at the tail of the first queue and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue as the memory blocks to be recycled;
the memory blocks corresponding to the memory block identifiers stored at the tail of the first queue have the lowest allocation heat in the first queue, and the memory blocks corresponding to the memory block identifiers stored at the tail of the second queue have the lowest allocation heat in the second queue.
12. The method according to claim 1, wherein the method further comprises:
and when the number of the idle memory blocks of the disk array control card is lower than a preset recovery threshold value or when the timing reaches a preset recovery period, determining that a preset recovery condition is met.
13. A memory resource management device, comprising:
the acquisition module is used for acquiring a memory resource request of a user for the disk array control card;
The allocation module is used for allocating the target memory block to the user in response to the memory resource request and inserting the target memory block identifier into a memory resource management queue;
the recovery module is used for determining the memory blocks to be recovered according to the allocation heat of each memory block represented by the memory resource management queue when the preset recovery condition is met, so as to recover the memory blocks to be recovered;
the memory resource management queue comprises a first priority queue and a second priority queue, and the allocation module is specifically configured to:
judging whether the target memory block identifier is stored in the first priority queue or the second priority queue;
if the target memory block identification is stored in the first priority queue or the second priority queue, increasing the reference count of the target memory block identification by 1;
when the reference count of any memory block identifier stored in the first priority queue reaches a preset reference count threshold, moving the memory block identifier from the first priority queue to the head of the second priority queue;
the allocation module is further configured to:
when the idle time of any memory block identifier stored in the second priority queue reaches a preset idle time threshold, moving the memory block identifier from the second priority queue to the head of the first priority queue;
The allocation module is further configured to:
if the target memory block identifier is not stored in the first priority queue or the second priority queue, inserting the target memory block identifier into the first priority queue, and setting the reference count corresponding to the target memory block identifier to be 1;
the recovery module is specifically configured to:
when a preset recycling condition is met, determining a memory block corresponding to a memory block identifier stored at the tail of the first priority queue as the memory block to be recycled;
wherein the allocation heat of the memory block corresponding to the memory block identifier stored at the tail of the first priority queue is the lowest;
the recovery module is further configured to:
after the memory block to be recycled is recycled, the memory block identifier to be recycled is saved to a preset recycling queue;
when any memory block identifier in the preset recovery queue is redetermined as a target memory block, determining a target priority queue corresponding to the memory block according to the reference count of the memory block, so as to insert the memory block identifier into the target priority queue.
14. An electronic device, comprising: at least one processor and memory;
The memory stores computer-executable instructions;
the at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the method of any one of claims 1 to 12.
15. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor implement the method of any one of claims 1 to 12.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311340587.6A CN117093508B (en) | 2023-10-17 | 2023-10-17 | Memory resource management method and device, electronic equipment and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN117093508A (en) | 2023-11-21 |
| CN117093508B (en) | 2024-01-23 |
Family
ID=88783629
Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104714846A (en) * | 2013-12-17 | 2015-06-17 | 华为技术有限公司 | Resource processing method, operating system and equipment |
| CN108763103A (en) * | 2018-05-24 | 2018-11-06 | 郑州云海信息技术有限公司 | A kind of EMS memory management process, device, system and computer readable storage medium |
| CN113485822A (en) * | 2020-06-19 | 2021-10-08 | 中兴通讯股份有限公司 | Memory management method, system, client, server and storage medium |
| CN113495789A (en) * | 2020-04-08 | 2021-10-12 | 大唐移动通信设备有限公司 | Memory allocation method and device |
| CN113946444A (en) * | 2021-10-14 | 2022-01-18 | 杭州国芯科技股份有限公司 | Memory allocation method |
| WO2022233272A1 (en) * | 2021-05-06 | 2022-11-10 | 北京奥星贝斯科技有限公司 | Method and apparatus for eliminating cache memory block, and electronic device |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |