CN110209493B - Memory management method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110209493B
Authority
CN
China
Prior art keywords
memory
circular queue
target object
position number
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910290004.0A
Other languages
Chinese (zh)
Other versions
CN110209493A (en)
Inventor
赵文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910290004.0A priority Critical patent/CN110209493B/en
Publication of CN110209493A publication Critical patent/CN110209493A/en
Application granted granted Critical
Publication of CN110209493B publication Critical patent/CN110209493B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present application provide a memory management method, a memory management device, an electronic device, and a storage medium. The method comprises the following steps: determining a first circular queue according to the capacity requirement of a first target object; when an available memory index exists in the first circular queue, reading a target memory index from the first position indicated by the head pointer of the first circular queue, and updating the first position indicated by the head pointer; and allocating the memory resource corresponding to the target memory index to the first target object. In the technical scheme provided by the embodiments of the present application, the time required for memory allocation, namely the time for reading an available memory index at the position indicated by the queue head pointer, is independent of the amount of memory already in use; that is, the time complexity of the scheme is O(1), which improves the efficiency of memory allocation.

Description

Memory management method, device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of storage, in particular to a memory management method, a memory management device, electronic equipment and a storage medium.
Background
Currently, an electronic device generally establishes a complete memory management mechanism, so that a running process or thread can be allocated sufficient memory space, and a process or thread that stops running can release its memory space in time.
In the related art, the memory allocation process is as follows: when a process or thread applies for memory space, the memory management module traverses the memory blocks in sequence. If a memory block is occupied, or does not match the memory capacity requested by the process or thread, the module moves on to the next memory block, until it reaches a memory block that is not occupied by another process or thread and that meets the capacity requirement of the requester; that memory block is then allocated to the process or thread that applied for memory space.
In the related art, the larger the amount of memory already occupied, the longer it takes the memory management process to traverse to a memory block that meets the allocation conditions, and the lower the allocation efficiency.
Disclosure of Invention
The embodiments of the present application provide a memory management method, a memory management device, an electronic device, and a storage medium, which can solve the problem in the related art that allocation efficiency decreases as the time required for the memory management process to traverse to a memory block meeting the allocation conditions grows.
In one aspect, an embodiment of the present application provides a memory management method, where the method includes:
determining a first circular queue according to the capacity requirement of the first target object; the capacity requirement is used for indicating a first memory capacity required by the first target object, and the first circular queue is used for storing an available memory index;
when the available memory index exists in the first circular queue, reading a target memory index in the available memory index from a first position indicated by a head pointer of the first circular queue, and updating the first position indicated by the head pointer;
and allocating the memory resource corresponding to the target memory index to the first target object.
In another aspect, an embodiment of the present application provides a memory management device, where the device includes:
the first determining module is used for determining a first circular queue according to the capacity requirement of the first target object; the capacity requirement is used for indicating a first memory capacity required by the first target object, and the first circular queue is used for storing an available memory index;
the index acquisition module is used for reading a target memory index in the available memory index from a first position indicated by a head pointer of the first circular queue when the available memory index exists in the first circular queue;
the position updating module is used for updating the first position indicated by the head pointer;
and the memory allocation module is used for allocating the memory resources corresponding to the target memory index to the first target object.
In yet another aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores at least one instruction, at least one section of program, a code set, or an instruction set, and the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by the processor to implement the memory management method described above.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium having stored therein at least one instruction, at least one section of program, a code set, or an instruction set, where the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by a processor to implement the memory management method described above.
In yet another aspect, a computer program product is provided which, when executed, performs the memory management method described above.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
When a process or thread needs to apply for memory space, it only needs to determine the circular queue matching the requested memory capacity and then read an available memory index directly at the head pointer of that circular queue to obtain the corresponding memory space.
Drawings
Fig. 1 is an interface schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a memory management method according to one embodiment of the present application;
FIG. 3 is a schematic diagram of an initialized circular queue provided by one embodiment of the present application;
FIG. 4 is a flowchart of a memory allocation method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of memory allocation according to one embodiment of the present application;
FIG. 6 is a flowchart of a memory management method according to another embodiment of the present application;
FIG. 7 is a flowchart of a memory release method according to another embodiment of the present application;
FIG. 8 is a schematic diagram of memory release according to another embodiment of the present application;
FIG. 9 is a bar graph of memory utilization provided by the related art;
FIG. 10 is a bar graph of memory utilization provided in one embodiment of the present application;
FIG. 11 is a block diagram of a memory management device according to one embodiment of the present application;
FIG. 12 is a block diagram of a memory management device according to one embodiment of the present application;
fig. 13 is a block diagram of an electronic device provided in one embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In the technical scheme provided by the embodiments of the present application, all available memory resources are managed with circular queues. When a process or thread needs to apply for memory space, once the circular queue matching the requested memory capacity has been determined, an available memory index is read directly at the queue head pointer of that circular queue, and the memory space corresponding to that index is obtained.
The related art also provides a memory management method in which the memory management module allocates a memory space to each process, and each process manages its allocated memory space individually, isolated from the others. When a process's memory utilization is low, its remaining memory resources cannot be shared with other processes, so the overall memory utilization is low.
In the technical scheme provided by the embodiments of the present application, all available memory resources are managed by circular queues, and each process or thread can access a circular queue according to its own capacity requirement to apply for memory space. Compared with the prior art in which each process manages its own memory space independently, this reduces memory fragmentation and increases the overall memory utilization.
According to the technical scheme provided by the embodiment of the application, the execution main body of each step can be electronic equipment. The electronic device may be a terminal device such as a smart phone, a tablet computer, a notebook computer or a desktop computer, or may be a server, which is not limited in this embodiment of the present application. Optionally, the electronic device includes a memory management module, and the execution subject of each step may also be the memory management module.
The technical scheme provided by the embodiments of the present application can be applied to multi-process, highly concurrent application scenarios. For example, such a scenario may be the management of groups in a social application. As users create groups, disband groups, add group members, and delete group members, memory must be continuously allocated and released, and this allocation/release process can be completed with the technical scheme provided by the embodiments of the present application.
Referring to fig. 1 in combination, an interface schematic diagram of a group provided in an embodiment of the present application is shown. When a user operates any one module in the group, the memory management module adopts the technical scheme provided by the embodiment of the application to complete the memory allocation and release process.
Referring to fig. 2, a flowchart of a memory management method according to an embodiment of the present application is shown, and the method includes the following steps (step 201 to step 203).
In step 201, a first circular queue is determined according to the capacity requirement of the first target object.
The first target object may be a process or thread that has a memory allocation requirement. In one possible implementation, a process or thread has a memory allocation requirement when it is about to start running. In another possible implementation, a process or thread has a memory allocation requirement when its memory capacity is insufficient. The capacity requirement is used to indicate the first memory capacity required by the first target object. The first memory capacity may be determined according to the running duration, running conditions, running results, and the like of the first target object, which is not limited in the embodiments of the present application. Illustratively, the first memory capacity required by the first target object is 212kb.
The first circular queue is used to store available memory indexes, and the number of available memory indexes stored in the first circular queue ranges from 0 to the queue length of the first circular queue. In the embodiments of the present application, when the number of available memory indexes stored in the first circular queue is greater than 0, the memory management module can continue to execute the subsequent memory allocation procedure.
When the number of available memory indexes stored in the first circular queue is not 0, the memory capacities of the memory blocks corresponding to the available memory indexes may be the same or different. This embodiment takes as an example only the case in which the memory blocks corresponding to the available memory indexes all have the same capacity. On the one hand, the memory management module then only needs to record the memory capacity corresponding to the circular queue, rather than the memory capacity corresponding to each available memory index in the queue, which reduces memory occupation. On the other hand, an object only needs to find the circular queue matching its required memory capacity, rather than the position of a specific available memory index matching that capacity, which improves memory allocation efficiency.
Alternatively, step 201 may be embodied as two sub-steps:
In step 201a, the correspondence between different circular queues and different memory capacities is obtained.
The memory capacity corresponding to a circular queue indicates the memory capacity of the memory blocks corresponding to the available memory indexes stored in that queue. In the embodiments of the present application, the memory management module may divide the memory space into memory blocks with different capacities as required; when a service subsequently needs to apply for memory space, it can apply for a suitable memory block according to the capacity it requires, so that memory utilization is improved as much as possible while service requirements are met. For example, the correspondence between circular queues and memory capacities may be as shown in Table-1.
TABLE-1
Circular queue Memory capacity
Circular queue 1 1kb
Circular queue 2 2kb
Circular queue 3 8kb
Circular queue 4 256kb
Taking the circular queue 3 as an example, the memory capacity of the memory block corresponding to the available memory index stored in the circular queue 3 is 8kb.
Alternatively, the above correspondence is stored in the header information of the first target object, and the first target object may directly acquire the above correspondence from the header information.
In step 201b, a circular queue having a memory capacity that is the target memory capacity is determined as a first circular queue.
The target memory capacity is a memory capacity that is larger than the first memory capacity and whose difference from the first memory capacity meets a preset condition. The preset condition may be that the difference between the target memory capacity and the first memory capacity is the smallest. In this way, on the premise that the capacity of the memory block applied for by the first target object meets its usage requirement, memory space is saved as much as possible, thereby improving the utilization of the memory blocks.
With Table-1 above, if the first memory capacity required by the first target object is 6kb, the memory management module determines that the target memory capacity is 8kb, and then determines circular queue 3 as the first circular queue.
In other possible implementations, the preset condition may be that the difference between the target memory capacity and the first memory capacity is smaller than a preset threshold, where the preset threshold may be set based on experiment or experience.
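The queue selection of step 201b can be sketched as follows. This is a minimal illustration, not the patent's implementation: the queue ids and capacities mirror Table-1, and the function name is an assumption.

```python
# Illustrative capacity table mirroring Table-1: queue id -> block size (kb).
QUEUE_CAPACITIES_KB = {1: 1, 2: 2, 3: 8, 4: 256}

def select_queue(required_kb):
    """Return the id of the circular queue whose memory capacity exceeds
    required_kb by the smallest margin (the preset condition of minimal
    difference), or None if no queue's blocks are large enough."""
    candidates = [(cap, qid) for qid, cap in QUEUE_CAPACITIES_KB.items()
                  if cap >= required_kb]
    return min(candidates)[1] if candidates else None
```

With the 6kb example from the description, `select_queue(6)` picks circular queue 3 (8kb blocks), the smallest capacity that still satisfies the request.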
Optionally, before step 201, the memory management module also needs to detect whether the first target object has memory acquisition permission. Memory acquisition permission refers to the permission to obtain an available memory index by accessing the head pointer position of the first circular queue.
Optionally, the memory management module determines whether the first target object has memory acquisition permission through a resource lock. If the first target object acquires the resource lock, it has memory acquisition permission; if it does not acquire the resource lock, it does not have memory acquisition permission. It should be noted that the memory management module may determine the acquisition order of the resource lock according to the execution priority of the objects: an object with high priority acquires the resource lock preferentially and releases it after successfully applying for memory space, at which point other objects may acquire the resource lock. The resource lock used in the embodiments of the present application is a semaphore.
When the memory management module detects that the first target object has memory acquisition permission, step 201 is executed; when the memory management module detects that the first target object does not have memory acquisition permission, the subsequent steps are not executed. In this way, conflicts caused by multiple threads accessing the first circular queue at the same time can be avoided, so that the memory management mechanism operates in an orderly and healthy manner.
Step 202, reading a target memory index from the available memory indexes from the first location indicated by the head pointer of the first circular queue, and updating the first location indicated by the head pointer.
The first location indicated by the head pointer of the first circular queue is an access location of the first target object at which the first circular queue is accessed. The first location indicated by the head pointer may be any location in the first circular queue. When the first circular queue is initialized, the first position indicated by the head pointer is the first position in the first circular queue. Referring to FIG. 3 in combination, a schematic diagram of the initialized first circular queue is shown in accordance with an exemplary embodiment of the present application. The queue length of the first circular queue is N, and in the initialized first circular queue, the position indicated by the head pointer is the first position in the first circular queue.
When the first position indicated by the head pointer is accessed by a process or thread, the first position indicated by the head pointer is updated according to the order of positions in the first circular queue. Illustratively, when the first position indicated by the head pointer is the 2nd position in the first circular queue and that position is accessed by an object, the first position indicated by the head pointer becomes the 3rd position in the queue. In a circular queue, when the first position indicated by the head pointer is the last position in the queue and that position is accessed by an object, the first position indicated by the head pointer is updated to the first position in the queue.
In this embodiment of the present application, one of the available memory indexes is stored in the first location indicated by the head pointer of the first circular queue, and when an object accesses the first location, the available memory index may be read, and the read available memory index is the target memory index.
In addition, when the available memory index at the first position is read, the first position indicated by the head pointer needs to be updated accordingly, so that another available memory index is stored at the first position indicated by the head pointer. Then, when another object accesses the first circular queue, it can likewise read an available memory index directly at the first position indicated by the head pointer, so that the memory management mechanism operates in an orderly and healthy manner.
Optionally, the first circular queue has a queue length n (n is an integer greater than 1). Updating the first position indicated by the head pointer may then be specifically implemented as: when the first position number corresponding to the first position is smaller than n, adding 1 to the first position number to obtain the updated first position number; when the first position number is equal to n, updating the first position number to 1, so that the first position indicated by the head pointer is updated from the last position in the circular queue to the first position.
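The wrap-around update rule above can be expressed as a one-line helper. It is an illustrative sketch assuming 1-indexed position numbers, as in the description; the function name is not from the patent.

```python
def advance_position(position, n):
    """Advance a 1-indexed position in a circular queue of length n:
    add 1 while position < n, and wrap back to 1 when position == n."""
    return position + 1 if position < n else 1
```

For example, with n = 8, position 2 advances to 3, and position 8 wraps back to 1.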
Optionally, the available memory indexes stored in the first circular queue are gradually allocated. When more objects apply for available memory indexes and fewer objects release memory space, the number of available memory indexes stored in the first circular queue gradually decreases, eventually reaching 0. Therefore, before reading the target memory index at the first position indicated by the head pointer of the first circular queue, the memory management module needs to first detect whether the number of available memory indexes stored in the first circular queue is greater than zero.
Optionally, the memory management module calculates the number of available memory indexes stored in the first circular queue according to the size relationship between the first position number and the second position number.
The first position number is a position number corresponding to the first position. The second position number is a position number corresponding to a second position indicated by the tail pointer in the first circular queue.
The second position indicated by the tail pointer of the first circular queue is the last access position of the first target object when accessing the first circular queue. The second position indicated by the tail pointer may be any position in the first circular queue. When the first circular queue is initialized, the second position indicated by the tail pointer is the last position in the first circular queue. Referring to FIG. 3, in the initialized first circular queue of queue length N, the position indicated by the tail pointer is the last position in the queue.
When an object releases memory and the released memory capacity matches the first circular queue, the second position indicated by the tail pointer is updated according to the order of positions in the first circular queue. Illustratively, when the second position indicated by the tail pointer is the 2nd position in the first circular queue and an object releases memory whose capacity is the same as the memory capacity corresponding to the first circular queue, the second position indicated by the tail pointer becomes the 3rd position in the queue. It should be noted that, in a circular queue, when the second position indicated by the tail pointer is the last position in the queue and an object releases memory whose capacity is the same as the memory capacity corresponding to the first circular queue, the second position indicated by the tail pointer is updated to the first position in the queue.
Optionally, when the second position number is greater than the first position number, the difference between the second position number and the first position number is determined as the number of available memory indexes stored in the first circular queue. Illustratively, if the first position number is 1 and the second position number is n (i.e., the queue length of the first circular queue), the number of available memory indexes stored in the first circular queue is n-1.
When the second position number is smaller than the first position number, subtracting the first position number from the queue length of the first circular queue and adding the second position number to obtain the number of available memory indexes stored in the first circular queue.
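The two counting rules above can be combined into one helper. This is an illustrative sketch with assumed names; the description does not distinguish the case where the two position numbers are equal, so that case is simply left to the second branch here.

```python
def available_count(head_pos, tail_pos, n):
    """Number of available memory indexes in a circular queue of length n,
    computed from the 1-indexed head (first) and tail (second) position
    numbers as described: tail - head when tail > head, otherwise
    n - head + tail (the tail pointer has wrapped around)."""
    if tail_pos > head_pos:
        return tail_pos - head_pos
    return n - head_pos + tail_pos
```

For example, with head position 1 and tail position 8 in a queue of length 8, there are 7 available indexes, matching the n-1 example above.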
When the number of available memory indexes stored in the first circular queue is greater than zero, the step of reading a target memory index from the available memory indexes at the first position indicated by the head pointer of the first circular queue is performed. When the number of available memory indexes stored in the first circular queue is equal to zero, the memory management module needs to re-determine a circular queue and read an available memory index at the position indicated by the head pointer of the newly determined queue.
In step 203, the memory resource corresponding to the target memory index is allocated to the first target object.
Optionally, after the memory management module obtains the available memory index, it returns information such as the memory block number and memory block position contained in the available memory index to the first target object; at this point, the memory management module has successfully allocated memory to the first target object.
Referring to fig. 4, a flowchart of memory allocation according to one embodiment of the present application is shown. First, it is detected whether the process requiring memory allocation has acquired the resource lock. If so, it is detected whether an available memory index exists in the circular queue; if one exists, the available memory index is read at the position indicated by the queue head pointer and returned to the process, and the head pointer is then moved to update the position it indicates. If no available memory index exists, the flow ends. In addition, if the process fails to acquire the resource lock, it keeps trying until the resource lock is acquired successfully.
Referring to fig. 5, a schematic diagram of memory allocation provided in an embodiment of the present application is shown. The initial position indicated by the head pointer is 1. After process 1 acquires the resource lock and applies for memory allocation, the memory management module allocates the available memory index 1 stored at position 1 to process 1 and then moves the head pointer, so that the position indicated by the head pointer is updated to 2.
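The allocation flow described for Fig. 4 can be sketched as a minimal thread-safe structure. The class and member names are illustrative assumptions, not from the patent, and a `threading.Lock` stands in for the semaphore used as the resource lock.

```python
import threading

class CircularQueueAllocator:
    """Sketch of the allocation flow: acquire the resource lock, check
    that an available memory index exists, read it at the head position,
    then advance the head pointer."""

    def __init__(self, indexes):
        self.slots = list(indexes)    # available memory indexes
        self.head = 0                 # 0-indexed head position
        self.count = len(self.slots)  # number of available indexes
        self.lock = threading.Lock()  # stands in for the semaphore

    def allocate(self):
        with self.lock:                                    # resource lock
            if self.count == 0:                            # queue empty
                return None
            index = self.slots[self.head]                  # read at head
            self.head = (self.head + 1) % len(self.slots)  # move head
            self.count -= 1
            return index
```

As in Fig. 5, the first allocation returns the index stored at the initial head position, and the head pointer then indicates the next position.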
In summary, in the technical scheme provided by the embodiments of the present application, when a process or thread needs to apply for memory space, it only needs to determine the circular queue matching the requested memory capacity and then read an available memory index directly at the head pointer of that circular queue to obtain the corresponding memory space; the time this takes is independent of the amount of memory already in use, i.e., the time complexity is O(1).
In addition, all available memory resources are managed by the circular queue, and each process or thread can access the circular queue according to the capacity of the process or thread to apply for the memory space.
The memory release flow is explained below. In an alternative embodiment based on the method embodiment described above, the memory management method further includes the following steps:
step 601, determining a second circular queue according to the release capacity of the second target object.
The second target object refers to a process or thread for which there is a memory release requirement. In one possible implementation, when a process or thread ends running, the process or thread has a memory release requirement. In another possible implementation, when a process or thread applies for a memory space with a larger memory capacity, the memory space previously applied can be released, and at this time, the process or thread has a memory release requirement. The release capacity is used for indicating the memory capacity released by the second target object. The release capacity may be actually determined according to the memory capacity of the memory space occupied by the second target object, which is not limited in the embodiment of the present application. Illustratively, the release capacity of the second target object is 256kb.
The second circular queue is used to store available memory indexes. The number of available memory indexes stored in the second circular queue ranges from 0 to the queue length of the second circular queue. In the embodiments of the present application, when the number of available memory indexes stored in the second circular queue is smaller than the queue length of the second circular queue, the memory management module may continue to execute the subsequent memory release procedure.
Alternatively, step 601 may be implemented as: determining the circular queue whose corresponding memory capacity equals the release capacity as the second circular queue. Taking table-1 as an example, if the release capacity of the second target object is 256 KB, the memory management module determines circular queue 4 as the second circular queue.
Optionally, before step 601, the memory management module further needs to detect whether the second target object has memory release permission, that is, permission to release the memory and write the memory index of the released memory into the updated position indicated by the queue tail pointer of the second circular queue.
Optionally, the memory management module determines whether the second target object has memory release permission by means of a resource lock. If the second target object has acquired the resource lock, it has memory release permission; if not, it does not. It should be noted that the memory management module may determine the acquisition order of the resource lock according to the execution priority of the objects: an object with a higher priority acquires the resource lock preferentially, the lock is released after that object successfully applies for memory space, and other objects may then acquire it.
When the memory management module detects that the second target object has memory release permission, step 601 and the subsequent steps are executed; when it detects that the second target object does not have memory release permission, the subsequent steps are not executed. In this way, conflicts caused by multiple threads releasing memory simultaneously can be avoided, so that the memory management mechanism operates in an orderly and reliable manner.
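The priority-ordered resource lock described above can be sketched as follows. This is our own minimal construction, not the patent's implementation: the class name, the heap-based waiter queue, and the single-threaded grant logic are all assumptions made for illustration.

```python
import heapq

# Minimal sketch of granting a resource lock in execution-priority order:
# waiters are queued by priority, and the highest-priority waiter acquires
# the lock as soon as it is free.
class PriorityResourceLock:
    def __init__(self):
        self.holder = None
        self.waiters = []            # min-heap of (-priority, name)

    def request(self, name, priority):
        """Queue a request; return True if `name` now holds the lock."""
        heapq.heappush(self.waiters, (-priority, name))
        self._grant()
        return self.holder == name

    def release(self):
        self.holder = None
        self._grant()                # hand the lock to the next waiter

    def _grant(self):
        if self.holder is None and self.waiters:
            _, name = heapq.heappop(self.waiters)
            self.holder = name

lock = PriorityResourceLock()
lock.request("low", priority=1)          # lock is free, "low" acquires it
print(lock.request("high", priority=9))  # -> False: must wait for release
lock.release()
print(lock.holder)                       # -> high: next grant goes by priority
```

A real implementation would of course need atomic operations or an OS mutex to make the grant step safe across threads; the sketch only shows the priority ordering.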
Step 602, updating the location indicated by the queue tail pointer of the second circular queue, and storing the memory index corresponding to the memory resource to be released at the updated location indicated by the queue tail pointer of the second circular queue.
Optionally, the queue length of the second circular queue is m. When the position number indicated by the queue tail pointer before the update is m, the position number indicated by the queue tail pointer is updated to 1; when the position number indicated by the queue tail pointer before the update is smaller than m, the position number is updated by adding 1 to it.
Step 603, releasing the memory resource to be released.
The execution order of step 602 and step 603 is not limited in the embodiments of the present application. The memory management module may execute step 602 first and then step 603, execute step 603 first and then step 602, or execute step 602 and step 603 simultaneously.
Referring to fig. 7 in combination, a flow chart of memory release according to one embodiment of the present application is shown. First, it is detected whether the process that needs to release memory has acquired the resource lock. If the process successfully acquires the resource lock, it is detected whether the queue tail pointer is legal; if the queue tail pointer is legal, the queue tail pointer is moved to update the position it indicates, and the memory index corresponding to the memory resource to be released is then stored at the updated position indicated by the queue tail pointer. In addition, if the process fails to acquire the resource lock, it continues attempting to acquire it until it succeeds.
Referring to fig. 8 in combination, a schematic diagram of memory release provided in an embodiment of the present application is shown. The queue length of the second circular queue is N and the initial position indicated by the queue tail pointer is N. After acquiring the resource lock, process 3 applies for memory release; the memory management module then moves the queue tail pointer so that the position it indicates is updated to 1, and stores the memory index corresponding to the memory resource to be released (i.e., the index of memory block 3) at position 1.
In summary, in the technical solution provided by the embodiments of the present application, when a process or thread needs to release memory space through a circular queue, it only needs to determine the circular queue matching the released memory capacity, update the position indicated by the queue tail pointer, and store the memory index corresponding to the released memory at the updated position indicated by the queue tail pointer.
Referring to fig. 9 in combination, a histogram of memory utilization provided by the related art is shown, and referring to fig. 10 in combination, a histogram of memory utilization provided by one embodiment of the present application is shown. In a group management scenario, each figure shows the memory usage rate of a process managing group basic information for different numbers of group members.
As can be seen by comparing fig. 9 and fig. 10, compared with the memory management scheme provided by the related art, the technical solution provided by the embodiments of the present application improves memory utilization and reduces memory fragmentation.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 11, a block diagram of a memory management device according to an embodiment of the present application is shown. The device has the function of implementing the above method; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may include:
A first determining module 1101, configured to determine a first circular queue according to a capacity requirement of a first target object; the capacity requirement is used for indicating a first memory capacity required by the first target object, and the first circular queue is used for storing an available memory index.
An index obtaining module 1102, configured to, when an available memory index exists in the first circular queue, read a target memory index from the first location indicated by the head pointer of the first circular queue.
A location updating module 1103 is configured to update the first location indicated by the head pointer.
The memory allocation module 1104 is configured to allocate a memory resource corresponding to the target memory index to the first target object.
In an alternative embodiment provided based on the embodiment shown in fig. 11, referring to fig. 12, the apparatus further includes:
The calculating module 1105 is configured to calculate the number of available memory indexes stored in the first circular queue according to a size relationship between a first position number and a second position number, where the first position number is the position number corresponding to the first position, and the second position number is the position number corresponding to a second position indicated by a tail pointer in the first circular queue.
The index obtaining module 1102 is configured to read a target memory index from the first location indicated by the head pointer of the first circular queue when the number of available memory indexes stored in the first circular queue is greater than zero.
Optionally, the calculating module 1105 is configured to:
when the second position number is larger than the first position number, determining a difference value between the second position number and the first position number as the number of the available memory indexes stored in the first circular queue;
and when the second position number is smaller than the first position number, subtracting the first position number from the queue length of the first circular queue and adding the second position number to obtain the number of the available memory indexes stored in the first circular queue.
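The two cases above can be sketched as a small helper. Positions are 1-based and `length` is the queue length; the function name is an assumption, and treating the equal case (head number equal to tail number) as an empty queue is our own assumption, since the text only covers the greater-than and less-than cases.

```python
# Sketch of the calculating module's rule: count the available memory
# indexes from the head and tail position numbers (1-based).
def available_count(head_pos, tail_pos, length):
    if tail_pos > head_pos:
        return tail_pos - head_pos              # no wrap between head and tail
    if tail_pos < head_pos:
        return length - head_pos + tail_pos     # tail has wrapped past the end
    return 0   # head == tail: assumed empty (not spelled out in the text)

print(available_count(1, 4, 8))   # -> 3 (4 - 1)
print(available_count(6, 2, 8))   # -> 4 (8 - 6 + 2)
```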
In an alternative embodiment provided based on the embodiment shown in fig. 11, the first determining module 1101 is configured to:
acquiring corresponding relations between different circular queues and different memory capacities, wherein the memory capacities corresponding to the circular queues are used for indicating the memory capacities of memory blocks corresponding to available memory indexes stored in the circular queues;
And determining a circular queue with the memory capacity being a target memory capacity as the first circular queue, wherein the target memory capacity is larger than the first memory capacity, and the difference value between the target memory capacity and the first memory capacity accords with a preset condition.
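The queue-selection rule above can be sketched as follows. The capacity mapping, the function name, and reading the "preset condition" as "smallest difference between the queue's block capacity and the requested capacity" are all illustrative assumptions; the text itself only requires the target capacity to be larger than the requested capacity with a difference meeting a preset condition.

```python
# Hedged sketch of selecting the first circular queue by capacity.
# Capacities are in KB; the mapping below is illustrative only.
QUEUE_CAPACITY = {"queue1": 64, "queue2": 128, "queue3": 256, "queue4": 512}

def select_queue(required_kb):
    # keep only queues whose block capacity exceeds the requested capacity
    candidates = {q: c for q, c in QUEUE_CAPACITY.items() if c > required_kb}
    if not candidates:
        return None   # no queue can satisfy the request
    # assumed "preset condition": pick the smallest capacity difference
    return min(candidates, key=lambda q: candidates[q] - required_kb)

print(select_queue(100))   # -> queue2 (128 KB is the smallest capacity > 100)
```

Allocating from the closest larger block size is what keeps internal fragmentation low in this scheme.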
In an alternative embodiment provided based on the embodiment shown in fig. 11, referring to fig. 12, the apparatus further includes: the rights detection module 1106.
The permission detection module 1106 is configured to detect whether the first target object has memory acquisition permission.
The first determining module 1101 is further configured to execute the step of determining the first circular queue according to the capacity requirement of the first target object if the first target object is detected to have the memory acquisition permission.
In an alternative embodiment provided based on the embodiment shown in fig. 11, the queue length of the first circular queue is n, and the location updating module 1103 is configured to:
when the first position number corresponding to the first position is smaller than n, adding 1 to the first position number to obtain a first position number corresponding to the updated first position;
and updating the first position number to 1 when the first position number is equal to the n.
In an alternative embodiment provided based on the embodiment shown in fig. 11, referring to fig. 12, the apparatus further includes:
a second determining module 1107, configured to determine a second circular queue according to the release capacity of the second target object; the release capacity is used for indicating a second memory capacity released by the second target object, and the second circular queue is used for storing an available memory index.
The location updating module 1108 is configured to update a location indicated by a tail pointer of the second circular queue.
And an index storage module 1108, configured to store a memory index corresponding to a memory resource to be released in a location indicated by the updated tail pointer of the second circular queue.
The memory release module 1109 is configured to release the memory resource to be released.
Fig. 13 shows a block diagram of an electronic device 1300 provided in an exemplary embodiment of the present application. The electronic device 1300 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 1300 may also be referred to by other names such as user device, portable electronic device, laptop electronic device, or desktop electronic device.
In general, the electronic device 1300 includes: a processor 1301, and a memory 1302.
The processor 1301 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1301 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 1301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1302 stores at least one instruction, which is executed by the processor 1301 to implement the memory management methods provided by the method embodiments herein.
In some embodiments, the electronic device 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, a display screen 1305, a camera assembly 1306, audio circuitry 1307, a positioning assembly 1308, and a power supply 1309.
In some embodiments, the electronic device 1300 also includes one or more sensors. The one or more sensors include, but are not limited to: acceleration sensor, gyroscope sensor, pressure sensor, fingerprint sensor, optical sensor, and proximity sensor.
Those skilled in the art will appreciate that the structure shown in fig. 13 is not limiting of the electronic device 1300 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor of an electronic device to implement the memory management method described above.
Alternatively, the above computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, which, when executed, is adapted to carry out the above-described memory management method.
It should be understood that references herein to "a plurality" mean two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first", "second", and the like used herein do not denote any order, quantity, or importance, but are merely used to distinguish one element from another.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
The foregoing is merely illustrative of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (7)

1. A memory management method, the method comprising:
detecting whether a first target object acquires a resource lock; if the first target object acquires the resource lock, the first target object has memory acquisition authority, a target object with higher priority acquires the resource lock in preference to a target object with lower priority, and the resource lock is released after the target object with higher priority successfully applies for the memory resource;
under the condition that the first target object has the memory acquisition authority, determining a first circulation queue according to the capacity requirement of the first target object; the capacity requirement is used for indicating a first memory capacity required by the first target object, and the first circular queue is used for storing an available memory index;
determining the number of the available memory indexes stored in the first circular queue according to a first position number and a second position number, and determining the difference value between the second position number and the first position number as the number of the available memory indexes stored in the first circular queue when the second position number is larger than the first position number, wherein the first position number is a position number corresponding to a first position indicated by a head pointer in the first circular queue, and the second position number is a position number corresponding to a second position indicated by a tail pointer in the first circular queue; when the second position number is smaller than the first position number, subtracting the first position number from the queue length of the first circular queue and adding the second position number to obtain the number of the available memory indexes stored in the first circular queue;
When the number of the available memory indexes stored in the first circular queue is greater than zero, reading a target memory index in the available memory indexes from the first position, and updating the first position indicated by the head pointer;
and distributing the memory resources corresponding to the target memory index to the first target object.
2. The method of claim 1, wherein the determining a first circular queue based on the capacity requirement of the first target object comprises:
acquiring corresponding relations between different circular queues and different memory capacities, wherein the memory capacities corresponding to the circular queues are used for indicating the memory capacities of memory blocks corresponding to available memory indexes stored in the circular queues;
and determining a circular queue with the memory capacity being a target memory capacity as the first circular queue, wherein the target memory capacity is larger than the first memory capacity, and the difference value between the target memory capacity and the first memory capacity accords with a preset condition.
3. The method according to claim 1 or 2, wherein the first circular queue has a queue length n, the n being an integer greater than 1; the updating the first position indicated by the queue head pointer comprises the following steps:
When the first position number corresponding to the first position is smaller than n, adding 1 to the first position number to obtain a first position number corresponding to the updated first position;
and updating the first position number to 1 when the first position number is equal to the n.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
determining a second circular queue according to the release capacity of the second target object; the release capacity is used for indicating a second memory capacity released by the second target object, and the second circular queue is used for storing an available memory index;
updating the position indicated by the queue tail pointer of the second circular queue, and storing a memory index corresponding to the memory resource to be released in the updated position indicated by the queue tail pointer of the second circular queue;
and releasing the memory resource to be released.
5. A memory management device, the device comprising:
the permission detection module is used for detecting whether the first target object acquires the resource lock or not; if the first target object acquires the resource lock, the first target object has memory acquisition authority, a target object with higher priority acquires the resource lock in preference to a target object with lower priority, and the resource lock is released after the target object with higher priority successfully applies for the memory resource;
The first determining module is used for determining a first circulation queue according to the capacity requirement of the first target object under the condition that the first target object has the memory acquisition authority; the capacity requirement is used for indicating a first memory capacity required by the first target object, and the first circular queue is used for storing an available memory index;
the computing module is used for determining the number of the available memory indexes stored in the first circular queue according to a first position number and a second position number, determining the difference value between the second position number and the first position number as the number of the available memory indexes stored in the first circular queue when the second position number is larger than the first position number, wherein the first position number is a position number corresponding to a first position indicated by a head pointer in the first circular queue, and the second position number is a position number corresponding to a second position indicated by a tail pointer in the first circular queue; when the second position number is smaller than the first position number, subtracting the first position number from the queue length of the first circular queue and adding the second position number to obtain the number of the available memory indexes stored in the first circular queue;
An index obtaining module, configured to read a target memory index in the available memory indexes from the first location when the number of the available memory indexes stored in the first circular queue is greater than zero;
the position updating module is used for updating the first position indicated by the head pointer;
and the memory allocation module is used for allocating the memory resources corresponding to the target memory index to the first target object.
6. An electronic device comprising a processor and a memory, the memory having stored therein a computer program that is loaded and executed by the processor to implement the memory management method of any of claims 1-4.
7. A computer readable storage medium having stored therein a computer program that is loaded and executed by a processor to implement the memory management method of any of claims 1 to 4.
CN201910290004.0A 2019-04-11 2019-04-11 Memory management method, device, electronic equipment and storage medium Active CN110209493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290004.0A CN110209493B (en) 2019-04-11 2019-04-11 Memory management method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910290004.0A CN110209493B (en) 2019-04-11 2019-04-11 Memory management method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110209493A CN110209493A (en) 2019-09-06
CN110209493B (en) 2023-08-01

Family

ID=67785292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290004.0A Active CN110209493B (en) 2019-04-11 2019-04-11 Memory management method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110209493B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112596889B (en) * 2020-12-29 2023-09-29 凌云光技术股份有限公司 Method for managing chained memory based on state machine
CN112732448A (en) * 2021-01-18 2021-04-30 国汽智控(北京)科技有限公司 Memory space allocation method and device and computer equipment
CN112799842A (en) * 2021-02-01 2021-05-14 安徽芯纪元科技有限公司 Memory management method for embedded operating system of digital signal processor
CN114240730B (en) * 2021-12-20 2024-01-02 苏州凌云光工业智能技术有限公司 Processing method of detection data in AOI detection equipment
CN116521606B (en) * 2023-06-27 2023-09-05 太初(无锡)电子科技有限公司 Task processing method, device, computing equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5873089A (en) * 1995-09-04 1999-02-16 Hewlett-Packard Company Data handling system with circular queue formed in paged memory
CN109522121A (en) * 2018-11-15 2019-03-26 郑州云海信息技术有限公司 A kind of memory application method, device, terminal and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100517242C (en) * 2007-10-10 2009-07-22 中兴通讯股份有限公司 Quick memory application method
CN101493787B (en) * 2009-02-18 2011-05-11 中兴通讯股份有限公司 Internal memory operation management method and system
GB201322290D0 (en) * 2013-12-17 2014-01-29 Ibm Method and device for managing a memory
KR20190026231A (en) * 2017-09-04 2019-03-13 에스케이하이닉스 주식회사 Memory system and operating method of memory system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5873089A (en) * 1995-09-04 1999-02-16 Hewlett-Packard Company Data handling system with circular queue formed in paged memory
CN109522121A (en) * 2018-11-15 2019-03-26 郑州云海信息技术有限公司 A kind of memory application method, device, terminal and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Memory management and applications for GPUs; Xu Yandong; Hua Bei; Electronic Technology (Issue 07); 86-90 *

Also Published As

Publication number Publication date
CN110209493A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110209493B (en) Memory management method, device, electronic equipment and storage medium
US9081504B2 (en) Write bandwidth management for flash devices
US9411650B2 (en) Ledger-based resource tracking
US11360884B2 (en) Reserved memory in memory management system
CN111324427B (en) Task scheduling method and device based on DSP
US20190026317A1 (en) Memory use in a distributed index and query system
US10789170B2 (en) Storage management method, electronic device and computer readable medium
CN103631624A (en) Method and device for processing read-write request
US9389997B2 (en) Heap management using dynamic memory allocation
CN113625973B (en) Data writing method, device, electronic equipment and computer readable storage medium
CN111984425A (en) Memory management method, device and equipment for operating system
CN111444117B (en) Method and device for realizing fragmentation of storage space, storage medium and electronic equipment
CN109101438B (en) Method and apparatus for storing data
CN109491785B (en) Memory access scheduling method, device and equipment
US9405470B2 (en) Data processing system and data processing method
US10152258B1 (en) Big block allocation of persistent main memory
US9658976B2 (en) Data writing system and method for DMA
CN108959517B (en) File management method and device and electronic equipment
CN108345551B (en) Data storage method and device
US10496318B1 (en) System and method for capacity management in multi-tiered storage
US11360901B2 (en) Method and apparatus for managing page cache for multiple foreground applications
CN113419871B (en) Object processing method based on synchronous groove and related product
CN111913657B (en) Block data read-write method, device, system and storage medium
WO2015004570A1 (en) Method and system for implementing a dynamic array data structure in a cache line
US20140201493A1 (en) Optimizing large page processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant