CN110245091B - Memory management method and device and computer storage medium - Google Patents


Info

Publication number
CN110245091B
CN110245091B
Authority
CN
China
Prior art keywords
memory
memory pool
capacity
pool
small
Prior art date
Legal status
Active
Application number
CN201811266497.6A
Other languages
Chinese (zh)
Other versions
CN110245091A (en)
Inventor
曾华安
陈梁
徐杨波
袁文君
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201811266497.6A
Publication of CN110245091A
Application granted
Publication of CN110245091B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023: Free address space management
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a memory management method, a memory management device and a computer storage medium, which solve the prior-art problems of excessive memory fragmentation and low memory utilization during memory use. The method comprises the following steps: judging whether the total available capacity of a first memory pool determined for a memory application is not less than the capacity the memory application requires, wherein the first memory pool is either the large memory pool or a pool in a small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from that of the small memory pools in the group, and the total capacity of each small memory pool in the group equals the page capacity of the large memory pool; and if the total available capacity of the first memory pool is not less than the required capacity, allocating memory from the first memory pool to the memory application.

Description

Memory management method and device and computer storage medium
Technical Field
The present invention relates to the field of memory management technologies, and in particular, to a method and an apparatus for memory management, and a computer storage medium.
Background
In memory management, the common basic unit is the page: the smallest unit of memory allocation. Memory allocated to a user by the system is always an integral multiple of pages.
Currently, a common memory management technique manages the pages of a memory pool with the buddy algorithm (rendered elsewhere in this translation as the "partner algorithm") at its core. The buddy algorithm stores the pages of a memory pool in linked lists; one memory pool may contain multiple linked lists, nodes on different lists contain different numbers of pages, and the node sizes are powers of two (2^n pages). For example, the free memory of a pool may be stored in a list whose nodes are 1 page each, a list whose nodes are 2 pages each, a list whose nodes are 4 pages each, and so on.
When a user applies for memory, the minimum number of pages covering the requested size is computed, and the linked list whose nodes contain that number of pages is queried for an available node. If one exists, it is returned; otherwise, the lists with larger nodes are queried. When the node found is larger than the required memory, it is split, and the resulting smaller nodes are stored in the corresponding lists. When memory is returned to the pool, adjacent free blocks of equal size are merged pairwise.
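The lookup-and-split cycle above can be sketched as a small model (Python for brevity; a real allocator would live in C, and all names here are illustrative, not from the patent):

```python
# Illustrative model of the buddy free-list search described above.
# Free blocks are kept in lists keyed by their size in pages (1, 2, 4, ...).

def buddy_alloc(free_lists, need_pages):
    """Allocate the smallest power-of-two block >= need_pages, splitting a
    larger block if needed. free_lists maps block size (pages) -> list of
    free block offsets. Returns the offset, or None if nothing fits."""
    # round the request up to the next power of two
    size = 1
    while size < need_pages:
        size *= 2
    # search this size class, then progressively larger ones
    cur = size
    while cur in free_lists:
        if free_lists[cur]:
            off = free_lists[cur].pop()
            # split the block back down, returning halves to smaller lists
            while cur > size:
                cur //= 2
                free_lists.setdefault(cur, []).append(off + cur)
            return off
        cur *= 2
    return None  # no block large enough
```

For an 8-page pool whose free memory initially sits in the 8-page list, a 1-page request splits the block repeatedly, leaving one free block each in the 1-, 2- and 4-page lists; merging on free (not shown) would reverse these splits only for equal-sized, physically adjacent buddies.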
However, in multi-thread, multi-process scenarios there are typically many small-memory applications; the small blocks are scattered across threads and released in an unpredictable order, so the memory usage of the whole system becomes scattered, fragmentation readily occurs, and subsequent large-memory applications are impeded. The reason is that the buddy algorithm merges blocks only under two conditions: the blocks must be of equal size, and their physical address ranges must be adjacent. A free node on a memory linked list can therefore only merge with its physically adjacent buddy; if that buddy remains occupied, the free node can never merge, and it can only serve requests no larger than itself. When such fragments accumulate, the pool becomes unhealthy: only small memory can be allocated while large memory cannot, wasting memory resources.
For example, if the total size of the memory pool is 8K and the page size is 1K, the memory pool has 4 linked lists in total, see table 1 (assuming that the memory pool is not used yet).
TABLE 1
[Table 1: the 1K, 2K, 4K and 8K linked lists of the as-yet-unused 8K memory pool, marking each node slot NULL (empty) or NODE (available)]
In Table 1, the 1K linked list holds up to 8 nodes of 1K (one page) each; the 2K linked list holds up to 4 nodes of 2K (two pages) each; the other lists are analogous. NULL denotes an empty node slot, and NODE denotes an available node.
Assume the user's usage of the above memory pool is as shown in Table 2.
TABLE 2
[Table 2: the same linked lists after allocation, with USE marking occupied nodes; free nodes NODE1 and NODE2 remain in the 1K list and NODE3 in the 2K list]
In table 2, USE represents a used node.
Under the buddy algorithm, only two physically adjacent free blocks of equal size can be merged. Hence NODE1 and NODE2 in the 1K linked list cannot be merged into a node of the 2K list, nor can NODE3 in the 2K list be merged further into the 4K list, leaving fragments (NODE1, NODE2 and NODE3) in the memory pool.
In view of this, how to reduce the generation of memory fragments and improve the utilization rate of the memory becomes a technical problem to be solved urgently.
Disclosure of Invention
The invention provides a memory management method, a memory management device and a computer storage medium, which are used for solving the technical problems of more memory fragments and lower memory utilization rate in the memory use process in the prior art.
In a first aspect, to solve the above technical problem, a technical solution of a method for memory management provided in an embodiment of the present invention is as follows:
judging whether the total available capacity of a first memory pool determined for a memory application is not less than the capacity the memory application requires; the first memory pool is either the large memory pool or a pool in a small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from that of the small memory pools in the group, and the total capacity of each small memory pool in the group equals the page capacity of the large memory pool;
and if the total available capacity of the first memory pool is not less than the required capacity, allocating memory from the first memory pool to the memory application.
The device memory is divided into a large memory pool and a small memory pool group, the total capacity of each small memory pool in the group equals the page capacity of the large memory pool, and the page capacity of the small memory pools differs from that of the large memory pool. Memory applications with small required capacities can therefore be handled centrally in the small memory pools, which effectively prevents small requests from occupying large-pool capacity, reduces the generation of memory fragments, and thus improves memory utilization.
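A minimal sketch of the availability check in the first aspect, assuming a pool is tracked simply by its remaining capacity (the field names are hypothetical):

```python
def allocate(pool, need):
    """Grant the memory application only if the chosen pool's total
    available capacity is not less than the requested capacity; otherwise
    the caller reports insufficient memory to the service."""
    if pool["available"] >= need:
        pool["available"] -= need  # model the grant by reducing availability
        return True
    return False
```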
With reference to the first aspect, in a first possible implementation manner of the first aspect, the determining whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required for allocation of the memory application includes:
and if the total available capacity of the first memory pool is smaller than the required capacity, returning information of insufficient memory for the service corresponding to the memory application.
With reference to the first aspect, in a second possible implementation manner of the first aspect, before determining whether a total available capacity of the first memory pool determined for the memory application is not less than a capacity required for allocation of the memory application, the method further includes:
judging whether the required capacity is not less than the page capacity of the large memory pool or not;
if so, determining the large memory pool as the first memory pool;
and if not, determining a small memory pool from the small memory pool group as the first memory pool.
Judging, before the required capacity is allocated to the memory application, whether the required capacity reaches the page capacity of the large memory pool quickly determines whether the application should be served from the large memory pool or a small memory pool. Memory applications from services with small memory demands are thus concentrated in the small memory pools, which reduces the generation of memory fragments and improves application efficiency.
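The routing rule can be sketched as follows; the threshold comparison against the large pool's page capacity is taken from the text, while the function name and return values are illustrative:

```python
def choose_pool(need, large_page_cap):
    """Route a memory application: a request of at least one large-pool
    page goes to the large pool, anything smaller to the small pool group."""
    return "large" if need >= large_page_cap else "small"
```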
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the determining, from the small memory pool groups, one small memory pool as the first memory pool includes:
selecting a small memory pool with the total available capacity larger than the required capacity and the minimum number from the small memory pool group as the first memory pool; wherein the number is used for uniquely marking the small memory pool.
Serving the memory application from the lowest-numbered small memory pool that satisfies the required capacity makes the small memory pools easier to manage and reduces the generation of memory fragments;
if the total available capacity of none of the small memory pools in the small memory pool group is larger than the required capacity, borrowing memory space from the large memory pool to configure a new small memory pool, and taking the memory space as the total memory capacity of the new small memory pool; and taking the new small memory pool as the first memory pool.
When no small memory pool in the group can allocate the memory for the application, memory can be borrowed from the large memory pool, which further reduces the generation of memory fragments and improves memory utilization.
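Selecting the lowest-numbered small pool with sufficient capacity might look like this sketch (field names are assumptions; the strict "larger than" comparison follows the wording above):

```python
def pick_small_pool(small_pools, need):
    """Pick the small pool whose total available capacity exceeds the
    required capacity and whose number is smallest (numbers uniquely
    identify small pools); None if no small pool fits."""
    candidates = [p for p in small_pools if p["available"] > need]
    return min(candidates, key=lambda p: p["number"]) if candidates else None
```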
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the borrowing a memory space from the large memory pool includes:
judging whether the available space of the large memory pool is not less than one page or not;
if so, allocating a page from the large memory pool as the total storage space of the new small memory pool; the storage space configuration of the new small memory pool is the same as the configuration of the small memory pools in the small memory pool group, and the number of the new small memory pool is the value obtained by adding one to the maximum number in all the small memory pools.
When borrowing memory from the large memory pool, whether the large memory pool can lend memory to the small memory pool group is quickly determined by checking that its available capacity is not less than one page.
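The borrowing step, including the rule that the new pool's number is the current maximum plus one, can be modeled as follows (a sketch under assumed data structures, not the patent's implementation):

```python
def borrow_small_pool(large_pool, small_pools, page_cap):
    """Borrow one large-pool page to form a new small pool whose number is
    the current maximum small-pool number plus one; returns None when the
    large pool has less than one free page (borrowing fails and the
    application is told no memory is available)."""
    if large_pool["available"] < page_cap:
        return None
    large_pool["available"] -= page_cap
    new_pool = {"number": max(p["number"] for p in small_pools) + 1,
                "available": page_cap,
                "borrowed": True}  # marked so it can be recycled later
    small_pools.append(new_pool)
    return new_pool
```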
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, after the judging whether the available space of the large memory pool is not less than one page, the method further includes:
if the available space of the large memory pool is less than one page, the borrowing of the memory space fails;
and informing the memory application that no available memory space exists currently.
With reference to the first aspect to the fourth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the allocating a memory from the first memory pool to the memory application includes:
determining a first node from a linked list of the first memory pool based on a first page number corresponding to the required capacity; the first node is the node in the first memory pool's linked lists whose page count is closest to the first page number, and the nodes in the linked lists correspond to the memory space of the first memory pool;
and allocating the memory space corresponding to the first node to the memory application.
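One reading of this node lookup, interpreting "closest" as the smallest node not below the first page number, is sketched below; `find_first_node` and its arguments are hypothetical names:

```python
import math

def find_first_node(linked_lists, need_bytes, page_size):
    """Compute the first page number from the required capacity, then
    return (node_pages, node) for the smallest free node covering it,
    or None if no list has a suitable free node."""
    first_pages = math.ceil(need_bytes / page_size)
    for pages in sorted(linked_lists):          # scan size classes upward
        if pages >= first_pages and linked_lists[pages]:
            return pages, linked_lists[pages][0]
    return None
```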
With reference to the first aspect to the fourth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, after the allocating the memory from the first memory pool to the memory application, the method further includes:
after the service corresponding to the memory application has finished using the memory allocated from the first memory pool,
judging whether the first memory pool is a small memory pool or not according to the number of the first memory pool;
if yes, further judging whether the first memory pool is completely idle;
and when the first memory pool is determined to be fully idle, determining from its number and the maximum originally configured number in the small memory pool group that the first memory pool came from the large memory pool, and recycling the first memory pool to the large memory pool.
When the memory allocated to a service is released after use, if it is determined to have come from the large memory pool and the corresponding first memory pool is further determined to be fully idle, the first memory pool is promptly recycled to the large memory pool, which further reduces the generation of memory fragments and improves memory utilization.
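The recycling check can be sketched as follows, assuming a borrowed pool is recognizable because its number exceeds the originally configured maximum (`base_count` is a hypothetical parameter for that maximum):

```python
def try_recycle(small_pools, pool, large_pool, base_count, page_cap):
    """Return a fully idle small pool to the large pool if its number shows
    it was borrowed (number greater than base_count, the highest number
    among the originally configured small pools)."""
    if pool["available"] == pool["total"] and pool["number"] > base_count:
        small_pools.remove(pool)            # drop it from the group
        large_pool["available"] += page_cap  # give the page back
        return True
    return False
```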
With reference to the seventh possible implementation manner of the first aspect, in an eighth possible implementation manner of the first aspect, before the determining whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required for allocation of the memory application, the method includes:
configuring the designated memory pool into the large memory pool and the small memory pool group according to the configuration parameters;
the configuration parameters at least comprise the total capacity of the large memory pool and the small memory pool, the page capacity of the large memory pool and the small memory pool, and the total number of the small memory pools.
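A minimal sketch of such configuration parameters and the resulting split of the designated memory pool; all field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class PoolConfig:
    total_capacity: int    # capacity split between the two kinds of pool
    large_page_cap: int    # page capacity of the large pool
    small_page_cap: int    # page capacity of each small pool
    small_pool_count: int  # total number of small pools

def split_memory(cfg: PoolConfig):
    """Divide the memory: each small pool's total capacity equals one
    large-pool page, and the remainder forms the large pool.
    Returns (large_pool_capacity, small_group_capacity)."""
    small_total = cfg.small_pool_count * cfg.large_page_cap
    return cfg.total_capacity - small_total, small_total
```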
In a second aspect, an embodiment of the present invention provides a method for memory management, used to configure memory parameters, the method comprising:
when the service is operated, recording the use condition of the memory pool, and obtaining a memory use record; the memory usage record at least comprises the memory application capacity and times of all services, and the total usage of the memory pool is sampled according to a specified time interval;
carrying out statistical analysis on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool for the memory pool; the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacities of the small memory pools in the large memory pool and the small memory pool group are different, and the total capacity of the small memory pools in the small memory pool group is equal to the page capacity of the large memory pool.
When a service runs, the usage of the memory pool, such as each application's capacity and count and the pool's total usage, is recorded automatically to obtain a memory usage record. Statistical analysis of this record then yields the configuration parameters for dividing the memory pool into a large memory pool and a small memory pool group, where the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the large memory pool differs from that of the small memory pools in the group, and the total capacity of each small memory pool equals the page capacity of the large memory pool. In this way the device's memory pool configuration parameters can be determined rapidly, the efficiency of memory configuration is effectively improved, and when new services appear the parameters can be re-derived automatically, making memory configuration more intelligent.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the obtaining configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool includes:
performing statistical analysis on the memory application capacity and times of all the services to obtain the page capacity of the large memory pool and the page capacity of the small memory pool; the page capacity of the large memory pool and the page capacity of the small memory pool are parameters in the configuration parameters;
the page capacity of the small memory pool is obtained by taking, from the memory application capacities and counts of all services, the application capacity with the most applications as a first application capacity, and multiplying the first application capacity by a preset proportionality coefficient; the preset proportionality coefficient balances the small memory pool's allocation efficiency against its utilization;
the page capacity of the large memory pool is obtained by first taking, from the memory application capacities and counts of all services, the application capacity larger than the first application capacity with the fewest applications as a second application capacity, and then using the second application capacity as the page capacity of the large memory pool.
Taking the most-applied-for capacity from the memory application capacities and counts of all services as the first application capacity, and scaling it by a proportionality coefficient, quickly determines the page capacity of the small memory pool; services with small memory applications are thereby concentrated in the small memory pools for centralized handling, which reduces memory fragments and improves application efficiency.
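This derivation of the two page capacities from recorded requests can be sketched as follows (the coefficient value and function names are illustrative):

```python
from collections import Counter

def derive_page_caps(request_sizes, coeff):
    """Small-pool page: the most-requested size scaled by a tuning
    coefficient. Large-pool page: among sizes larger than the
    most-requested one, the size with the fewest applications."""
    counts = Counter(request_sizes)
    first_cap, _ = counts.most_common(1)[0]   # first application capacity
    small_page = int(first_cap * coeff)
    bigger = {s: c for s, c in counts.items() if s > first_cap}
    second_cap = min(bigger, key=bigger.get)  # fewest applications among larger sizes
    return small_page, second_cap
```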
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the obtaining configuration parameters for configuring a large memory pool and a small memory pool for the memory pool includes:
carrying out statistical analysis on the total usage of the memory pool collected in a specified time period to obtain the occurrence probability of each total usage value; wherein the occurrence probability is the percentage of the occurrence frequency of each total usage value to the occurrence frequency of all total usage values in the specified time period;
sorting the occurrence probabilities of all total-usage values to obtain the specified number of total-usage values with the highest occurrence probability;
carrying out weighted calculation on the total usage value of the specified quantity to obtain a third memory value; wherein the third memory value is the total capacity of the small memory pool group;
dividing the third memory value by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group; wherein the total number of the small memory pools is a parameter in the configuration parameters;
performing difference operation on the total capacity of the memory pool and the third memory value to obtain the total capacity of the large memory pool; and the total capacity of the large memory pool is a parameter in the configuration parameters.
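A sketch of this statistical split; the exact weighting is not fully specified in the text, so the probability-weighted average used here for the "third memory value" is one plausible reading:

```python
from collections import Counter

def derive_pool_split(usage_samples, top_n, total_capacity, large_page_cap):
    """Take the top_n most probable total-usage values, weight them by
    their occurrence probability to get the third memory value (the small
    pool group's capacity), then derive the small-pool count and the
    large pool's capacity as the remainder."""
    counts = Counter(usage_samples)
    total = len(usage_samples)
    top = counts.most_common(top_n)
    probs = [c / total for _, c in top]           # occurrence probabilities
    third = sum(v * p for (v, _), p in zip(top, probs)) / sum(probs)
    small_count = int(third // large_page_cap)    # pools per large page
    return small_count, total_capacity - small_count * large_page_cap
```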
In a third aspect, an embodiment of the present invention further provides a device for memory management, including:
the judging unit is used for judging whether the total available capacity of a first memory pool determined for a memory application is not less than the capacity the memory application requires; the first memory pool is either the large memory pool or a pool in a small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from that of the small memory pools in the group, and the total capacity of each small memory pool in the group equals the page capacity of the large memory pool;
and the allocation unit is used for allocating the memory from the first memory pool to the memory application if the total available capacity of the first memory pool is not less than the required capacity.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the determining unit is configured to:
and if the total available capacity of the first memory pool is smaller than the required capacity, returning information of insufficient memory for the service corresponding to the memory application.
With reference to the third aspect, in a second possible implementation manner of the third aspect, the determining unit is further configured to:
judging whether the required capacity is not less than the page capacity of the large memory pool or not;
if so, determining the large memory pool as the first memory pool;
and if not, determining a small memory pool from the small memory pool group as the first memory pool.
With reference to the second possible implementation manner of the third aspect, in a third possible implementation manner of the third aspect, the determining unit is further configured to:
selecting the minimum memory pool with the total available capacity larger than the required capacity and the minimum number from the small memory pool group as the first memory pool; wherein, the number is used for uniquely marking the small memory pool; or
If the total available capacity of none of the small memory pools in the small memory pool group is larger than the required capacity, borrowing memory space from the large memory pool to configure a new small memory pool, and taking the memory space as the total memory capacity of the new small memory pool; and taking the new small memory pool as the first memory pool.
Serving the memory application from the lowest-numbered small memory pool that satisfies the required capacity makes the small memory pools easier to manage and reduces the generation of memory fragments.
With reference to the third possible implementation manner of the third aspect, in a fourth possible implementation manner of the third aspect, the determining unit is further configured to:
judging whether the available space of the large memory pool is not less than one page or not;
if so, allocating a page from the large memory pool as the total storage space of the new small memory pool; the storage space configuration of the new small memory pool is the same as the configuration of the small memory pools in the small memory pool group, and the number of the new small memory pool is the value obtained by adding one to the maximum number in all the small memory pools.
When borrowing memory from the large memory pool, whether the large memory pool can lend memory to the small memory pool group is quickly determined by checking that its available capacity is not less than one page.
With reference to the fourth possible implementation manner of the third aspect, in a fifth possible implementation manner of the third aspect, the determining unit is further configured to:
if the available space of the large memory pool is less than one page, the borrowing of the memory space fails;
and informing the memory application that no available memory space exists currently.
With reference to the third aspect to the fourth possible implementation manner of the third aspect, in a sixth possible implementation manner of the third aspect, the allocating unit is configured to:
determining a first node from a linked list of the first memory pool based on a first page number corresponding to the required capacity; the first node is the node in the first memory pool's linked lists whose page count is closest to the first page number, and the nodes in the linked lists correspond to the memory space of the first memory pool;
and allocating the memory space corresponding to the first node to the memory application.
With reference to the third aspect to the fourth possible implementation manner of the third aspect, in a seventh possible implementation manner of the third aspect, the allocating unit is further configured to:
after the service corresponding to the memory application has finished using the memory allocated from the first memory pool,
judging whether the first memory pool is a small memory pool or not according to the number of the first memory pool;
if yes, further judging whether the first memory pool is completely idle;
and when the first memory pool is determined to be fully idle, determining from its number and the maximum originally configured number in the small memory pool group that the first memory pool came from the large memory pool, and recycling the first memory pool to the large memory pool.
With reference to the seventh possible implementation manner of the third aspect, in an eighth possible implementation manner of the third aspect, the apparatus includes:
the parameter configuration unit is used for configuring the specified memory pool into the large memory pool and the small memory pool group according to configuration parameters; the configuration parameters at least comprise the total capacity of the large memory pool and the small memory pool, the page capacity of the large memory pool and the small memory pool, and the total number of the small memory pools.
In a fourth aspect, an embodiment of the present invention further provides a device for memory management, configured to configure memory parameters, where the device includes:
the recording unit is used for recording the use condition of the memory pool when the service is operated to obtain the memory use record; the memory usage record at least comprises the memory application capacity and times of all services, and the total usage of the memory pool is sampled according to a specified time interval;
a parameter obtaining unit, configured to perform statistical analysis on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool for the memory pool; the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacities of the small memory pools in the large memory pool and the small memory pool group are different, and the total capacity of the small memory pools in the small memory pool group is equal to the page capacity of the large memory pool.
With reference to the fourth aspect, in a first possible implementation manner of the fourth aspect, the parameter obtaining unit is configured to:
performing statistical analysis on the memory application capacity and the times of all the services to obtain the page capacity of the large memory pool and the page capacity of the small memory pool; the page capacity of the large memory pool and the page capacity of the small memory pool are parameters in the configuration parameters;
when the page capacity of the small memory pool is obtained, the memory application capacity with the most applications is first taken from the memory application capacities and counts of all services as a first application capacity; the first application capacity is multiplied by a preset proportionality coefficient to obtain the page capacity of the small memory pool; the preset proportionality coefficient balances the small memory pool's allocation efficiency against its utilization;
the page capacity of the large memory pool is obtained by firstly obtaining, from the memory application capacities and times of all services, the memory application capacity that is larger than the first application capacity and has the fewest applications, as a second application capacity, and then taking the second application capacity as the page capacity of the large memory pool.
With reference to the first possible implementation manner of the fourth aspect, in a second possible implementation manner of the fourth aspect, the parameter obtaining unit is configured to:
carrying out statistical analysis on the total usage of the memory pool collected in a specified time period to obtain the occurrence probability of each total usage value; wherein the occurrence probability is the percentage of the occurrence frequency of each total usage value to the occurrence frequency of all total usage values in the specified time period;
sorting the occurrence probabilities of all the total usage values to obtain a specified number of total usage values with the highest occurrence probabilities;
performing a weighted calculation on the specified number of total usage values to obtain a third memory value; wherein the third memory value is the total capacity of the small memory pool group;
dividing the third memory value by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group; wherein the total number of the small memory pools is a parameter in the configuration parameters;
performing difference operation on the total capacity of the memory pool and the third memory value to obtain the total capacity of the large memory pool; and the total capacity of the large memory pool is a parameter in the configuration parameters.
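The parameter derivation described above can be illustrated with a short sketch. The function below is an illustration under stated assumptions, not the patented implementation: the proportionality coefficient value (`coeff`), the weighting scheme (a probability-weighted average of the top-N usage values), and all names are hypothetical.

```python
from collections import Counter

def derive_config(app_stats, usage_samples, total_capacity, coeff=2, top_n=3):
    """Derive memory-pool configuration parameters from a usage record.

    app_stats: maps requested capacity (bytes) -> number of applications.
    usage_samples: total-usage values sampled at the specified interval.
    coeff: the preset proportionality coefficient (assumed value).
    """
    # First application capacity: the size requested most often.
    first_cap = max(app_stats, key=app_stats.get)
    # Small-pool page capacity = first capacity x proportionality coefficient.
    small_page = first_cap * coeff
    # Second application capacity: the least-requested size larger than the
    # first; it becomes the large-pool page capacity.
    larger = {c: n for c, n in app_stats.items() if c > first_cap}
    large_page = min(larger, key=larger.get)
    # Occurrence probability of each sampled total-usage value.
    counts = Counter(usage_samples)
    total = sum(counts.values())
    probs = {v: c / total for v, c in counts.items()}
    # Weighted average of the top-N most probable values -> third memory value.
    top = sorted(probs, key=probs.get, reverse=True)[:top_n]
    third = sum(v * probs[v] for v in top) / sum(probs[v] for v in top)
    pool_count = int(third // large_page)      # total number of small pools
    large_total = total_capacity - pool_count * large_page
    return {"small_page": small_page, "large_page": large_page,
            "small_pool_count": pool_count, "large_total": large_total}
```

A usage record dominated by 256B requests with occasional 1KB requests would, under these assumptions, yield a 512B small-pool page and a 1KB large-pool page.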
In a fifth aspect, an embodiment of the present invention further provides a device for memory management, where the device includes:
at least one processor, and
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor performs the method according to any of the embodiments of the first to second aspects by executing the instructions stored in the memory.
In a sixth aspect, an embodiment of the present invention further provides a computer-readable storage medium, including:
the computer-readable storage medium stores computer instructions which, when executed on a computer, cause the computer to perform the method according to any one of the embodiments of the first to second aspects.
Through the technical solutions in one or more of the above embodiments of the present invention, the embodiments of the present invention have at least the following technical effects:
in the embodiments provided by the invention, the memory of the device is divided into a large memory pool and a small memory pool group, the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool, and the page capacity of each small memory pool differs from the page capacity of the large memory pool, so that memory applications with small required capacity can be concentrated in the small memory pools for processing, thereby effectively preventing small-capacity applications from occupying the capacity of the large memory pool, reducing the occurrence of memory fragmentation, and further improving the utilization rate of the memory.
In the embodiments provided by the invention, the usage of the memory pool, such as the memory application capacities and times and the total usage of the memory pool, is automatically recorded when the service runs, so as to obtain a memory usage record; statistical analysis is then performed on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool. The sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the large memory pool differs from that of the small memory pools in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool. In this way, the configuration parameters of the memory pool of the device can be rapidly determined, which effectively improves the efficiency of memory configuration; when a new service appears, the configuration parameters can be automatically determined according to the method, improving the intelligence of memory configuration. Furthermore, by configuring the memory pool of the device into a large memory pool and a small memory pool group according to the configuration parameters, services applying for small memories request from the small memory pools and services applying for large memories request from the large memory pool, which effectively reduces memory fragmentation and improves memory utilization; in addition, when a user develops a service, the service's memory usage can be dynamically analyzed and adjusted by the method, further improving the smoothness of service operation.
Drawings
Fig. 1 is a flowchart of a memory management method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of the linked list generation result after configuring the memory of a device into a large memory pool and small memory pools according to an embodiment of the present invention;
fig. 3 is a flowchart of applying for a small memory pool according to an embodiment of the present invention;
fig. 4 is a flowchart of applying for a large memory pool according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating releasing an applied memory according to an embodiment of the present invention;
fig. 6 is a flowchart of a memory management method for configuring a memory according to an embodiment of the present invention;
fig. 7 is a trend chart of memory application capacity-application times provided by the embodiment of the present invention;
fig. 8 is a schematic diagram of a memory management structure according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a memory management structure for configuring memory parameters according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide a method and an apparatus for memory management, and a computer storage medium, so as to solve the technical problems of excessive memory fragmentation and low memory utilization during memory use in the prior art.
In order to solve the technical problems, the general idea of the embodiment of the present application is as follows:
provided is a memory management method, including: judging whether the total available capacity of a first memory pool determined for a memory application is not less than the capacity required by the memory application; the first memory pool is one of a large memory pool or a small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from the page capacity of the small memory pools in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool; and if the total available capacity of the first memory pool is not less than the required capacity, allocating memory from the first memory pool to the memory application.
In the above scheme, the memory of the device is divided into a large memory pool and a small memory pool group, the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool, and the page capacity of the small memory pool is different from the page capacity of the large memory pool, so that the memory application with small required capacity can be concentrated in the small memory pool for processing, thereby effectively avoiding the memory with small required capacity from occupying the capacity of the large memory pool, reducing the occurrence of memory fragmentation, and further improving the utilization rate of the memory.
In order to better understand the technical solutions of the present invention, the technical solutions are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention are detailed descriptions of the technical solutions, not limitations of them, and the technical features in the embodiments and examples may be combined with each other without conflict.
In a first embodiment, referring to fig. 1, a method for memory management is provided in an embodiment of the present invention.
Step 101: judging whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required by the memory application; the first memory pool is one of a large memory pool or a small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from the page capacity of the small memory pools in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool.
Step 102: and if the total available capacity of the first memory pool is not less than the required capacity, allocating the memory from the first memory pool to the memory application.
Before the large memory pool and the small memory pool group can be used in the device, the memory of the device must be configured, and the device must be restarted for the configuration to take effect.
Specifically, the step of configuring the memory of the device is to configure the specified memory pool into a large memory pool and a small memory pool according to the configuration parameters.
It should be understood that, for a real device such as a computer, the designated memory pool refers to the total memory of the computer, and for a virtual device such as a virtual machine, the designated memory pool refers to the total memory allocated to the virtual machine, which may be a partial memory of one computer or the sum of the memories of multiple computers or servers.
Specifically, the configuration parameters at least include total capacities of the large memory pool and the small memory pool, page capacities of the large memory pool and the small memory pool, and a total number of the small memory pools.
Of course, the setting of the configuration parameters may be manual setting by a user, or may be adaptive configuration by the method provided in the second embodiment of the present invention, specifically please refer to the second embodiment.
Before judging whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required by the memory application, it is judged whether the required capacity is not less than the page capacity of the large memory pool; if so, the large memory pool is determined as the first memory pool; if not, a small memory pool from the small memory pool group is determined as the first memory pool.
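This routing decision can be sketched in a few lines. A minimal illustration, assuming the 1MB large-pool page capacity used in the example that follows; the names are illustrative, not from the patent:

```python
LARGE_PAGE = 1 << 20  # assumed 1MB page capacity of the large memory pool

def choose_pool(required_bytes):
    """Requests of at least one large-pool page go to the large pool;
    smaller requests go to the small memory pool group."""
    return "large" if required_bytes >= LARGE_PAGE else "small_group"
```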
For example, assume that the total memory capacity of the device is 50MB, the total capacity of the large memory pool is 48MB with a page capacity of 1MB, and the small memory pool group contains 2 small memory pools, each with a total capacity of 1MB and a page capacity of 512B. Table 2 shows the storage structure of the large memory pool (number 0), Table 3 shows the storage structure of small memory pool 1 (number 1) in the small memory pool group, and Table 4 shows the storage structure of small memory pool 2 (number 2) in the small memory pool group.
In Tables 2-4, NULL indicates that a linked list has no available node, e.g. the 1MB linked list to 24MB linked list in the large memory pool have no available nodes (i.e. no storage space of the large memory pool is mapped to them); NODE1 indicates that a linked list has an available node, e.g. the 48MB linked list in the large memory pool has an available node (i.e. storage space of the large memory pool, here 48MB, is mapped to that node).
TABLE 2
Large memory pool, ID 0:
1MB linked list: NULL
2MB linked list: NULL
4MB linked list: NULL
...
48MB linked list: NODE1
TABLE 3
Small memory pool, ID 1:
(storage structure table reproduced as an image in the original publication)
TABLE 4
Small memory pool, ID 2:
(storage structure table reproduced as an image in the original publication)
Assuming the required capacity of a memory application is 32MB, it is judged whether 32MB is not less than the 1MB page capacity of the large memory pool; since it is, the large memory pool is selected to allocate the required capacity for the memory application (i.e., the large memory pool serves as the first memory pool). If the required capacity of the memory application is 800KB, it is judged whether 800KB is not less than the 1MB page capacity of the large memory pool; since it is not, a small memory pool in the small memory pool group is selected to allocate the required capacity (i.e., a small memory pool serves as the first memory pool).
After determining whether the first memory pool is from a large memory pool or a small memory pool group, it is further determined whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required for allocation of the memory application.
For example, if the required capacity is 32MB but the total available capacity of the large memory pool (i.e., the first memory pool) is 30MB, then after determining that the large memory pool is to allocate the required capacity for the memory application, it is further judged that the total available capacity (30MB) of the first memory pool is smaller than the required capacity (32MB), and information of insufficient memory is returned to the service corresponding to the memory application. If the required capacity is 25MB, then after determining that the large memory pool is to allocate the required capacity, it is further judged that the total available capacity (30MB) of the first memory pool is greater than the required capacity (25MB), and the required memory is allocated to the memory application from the first memory pool (i.e., the large memory pool).
If the first memory pool is to be determined from the small memory pool group, a small memory pool is further selected from the group to provide the required capacity for the memory application.
Specifically, a small memory pool with the minimum number and the total available capacity larger than the required capacity is selected from the small memory pool group as a first memory pool; wherein, the number is used for uniquely marking the small memory pool.
For example, the required capacity is 800KB, and the small memory pool comprises: the small memory pool 1 numbered 1, the small memory pool 2 numbered 2, the small memory pool 3 numbered 3, and the small memory pool 4 numbered 4, totally 4 small memory pools, the total capacity of which is 1MB, and the page capacity is 512B. Wherein, the total available capacity of the small memory pools 1-4 is 500KB, 600KB, 900KB and 1MB in sequence.
To determine which small memory pool in the group provides the required capacity for the memory application, first, the small memory pools whose total available capacity is larger than the required capacity (800KB) are selected from the group (i.e., small memory pools 3-4), and then the one with the smallest number, small memory pool 3, is selected as the first memory pool to provide the required capacity for the memory application.
If the total available capacities of the small memory pools 1-4 are 100KB, 150KB, 200KB, and 230KB in sequence, it can be determined that none of the small memory pools in the group can provide the required capacity (800KB) for the memory application. In that case, memory space borrowed from the large memory pool can be configured as a new small memory pool, with the borrowed space serving as the storage space of the new small memory pool, and the new small memory pool serving as the first memory pool.
Specifically, when borrowing memory space from the large memory pool, it is judged whether the total available space of the large memory pool is not less than one page; if so, one page is allocated from the large memory pool as the total storage space of the new small memory pool. The storage space configuration of the new small memory pool is the same as that of the small memory pools in the small memory pool group, and the number of the new small memory pool is the maximum number among all small memory pools plus one.
For example, after determining that no small memory pool in the group (small memory pools 1-4) has a total available capacity greater than the required capacity, memory space is borrowed from the large memory pool. If the total available space of the large memory pool is judged to be not less than one page, one page of the large memory pool (e.g., split from the 48MB linked list) is configured as a small memory pool (i.e., a new small memory pool) and becomes a member of the small memory pool group. The total capacity of the new small memory pool is the capacity of one page of the large memory pool, its configuration is the same as that of the small memory pools in the group, and its number is set to 5 (i.e., the maximum number 4 of the original small memory pools plus 1). If storage space borrowed from the large memory pool is next configured as another new small memory pool, that pool is numbered 6, and so on; details are not repeated.
If, when borrowing memory space from the large memory pool, the available space of the large memory pool is judged to be less than one page, it is determined that borrowing storage space from the large memory pool has failed, and the memory application is notified that no memory space is currently available.
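The select-or-borrow behavior above can be sketched as follows. A minimal illustration, assuming pools are tracked as (number, available bytes) pairs; the function name and return shape are illustrative:

```python
def pick_small_pool(small_pools, required, large_free_pages, large_page):
    """Select the lowest-numbered small pool that fits the request, or
    borrow one large-pool page as a new small pool.

    small_pools: list of (number, available_bytes) tuples.
    Returns (chosen_number, updated_pools, remaining_large_free_pages).
    """
    fitting = [num for num, avail in small_pools if avail >= required]
    if fitting:
        # Smallest-numbered pool with enough available capacity wins.
        return min(fitting), small_pools, large_free_pages
    if large_free_pages < 1:
        # Borrowing fails: no memory space is currently available.
        raise MemoryError("large pool has no free page to lend")
    # New pool number = maximum existing number + 1; its capacity is one
    # large-pool page.
    new_number = max(num for num, _ in small_pools) + 1
    updated = small_pools + [(new_number, large_page)]
    return new_number, updated, large_free_pages - 1
```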
Regardless of whether the first memory pool is a large memory pool or a small memory pool in a small memory pool group, when allocating memory from the first memory pool to a memory application, it is necessary to determine which node of the linked list corresponding to the first memory pool allocates the required memory for the memory application.
Specifically, a first node is determined from the linked lists of the first memory pool based on a first page number corresponding to the required capacity, and the memory space corresponding to the first node is allocated to the memory application. The first node is the node whose page count is closest to the first page number among the linked lists of the first memory pool, and the nodes in the linked lists map to the memory space of the first memory pool.
Specifically, the first page number is calculated as:

N = 2^⌈log2(⌈SIZE / P⌉)⌉ (1)

where N is the first page number, SIZE is the required capacity, and P is the page capacity of the first memory pool; that is, the number of pages needed is rounded up to the nearest power of two.
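The first-page-number computation can be sketched as follows. A minimal sketch, assuming formula (1) means "round the needed page count up to the next power of two", which is consistent with the worked 800KB / 512B example below:

```python
import math

def first_page_number(size, page):
    """Number of pages needed for `size`, rounded up to a power of two."""
    pages = -(-size // page)                      # ceiling division
    return 1 << max(0, math.ceil(math.log2(pages)))
```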
For example, assuming that a small memory pool 3 from among the small memory pool groups has been determined (assuming that the total available capacity is 1MB), the required capacity (800KB) is allocated for the memory application.
The first page number corresponding to the required capacity (800KB) can be calculated by equation (1) as:

N = 2^⌈log2(⌈800KB / 512B⌉)⌉ = 2^⌈log2 1600⌉ = 2^11 = 2048

Since 512B × 2^11 = 1MB, the small memory pool 3 needs to split a node from its 1MB linked list to allocate the required capacity for the memory application. Specifically, please refer to Table 5, the memory management linked list of the small memory pool 3 before splitting, and Table 6, the memory management list of the small memory pool 3 after splitting.
TABLE 5
Small memory pool, ID 3:
512B linked list: NULL
1KB linked list: NULL
2KB linked list: NULL
...
1MB linked list: NODE1
TABLE 6
Small memory pool, ID 3:
(storage structure table reproduced as an image in the original publication)
In Table 6, the original node in the 1MB linked list is split into a node in the 1MB linked list (NODE1, corresponding to a capacity of 800K) and a node in the 256K linked list (NODE1, corresponding to a capacity of 200K). The 800KB memory space corresponding to NODE1 in the 1MB linked list of the small memory pool 3 is then allocated to the memory application.
Further, after the service corresponding to the memory application has finished using the memory allocated from the first memory pool, it is first judged, according to the number of the first memory pool, whether the first memory pool is a small memory pool; if so, it is further judged whether the first memory pool is completely idle. When the first memory pool is determined to be completely idle, whether the first memory pool came from the large memory pool is determined by comparing its number with the maximum number in the original small memory pool group, and if so, the first memory pool is recycled to the large memory pool.
For example, after the service corresponding to the memory application has used up the 800KB allocated from the first memory pool (i.e., the small memory pool 3), the occupied 800KB of memory space is released. It is then judged from the number 3 of the first memory pool that it is a small memory pool (the large memory pool is numbered 0, so any number greater than 0 indicates a small memory pool). After further determining that the first memory pool (i.e., the small memory pool 3) is completely idle, it is determined from its number 3 and the maximum number 4 of the original small memory pool group that the first memory pool is an original small memory pool (because 3 is less than or equal to 4; a new small memory pool configured from memory space borrowed from the large memory pool would have a number greater than 4), so no recycling is needed. If the number of the first memory pool were 5, it could be determined that the first memory pool came from the large memory pool, and the first memory pool would need to be recycled to the large memory pool.
In order that those skilled in the art will more clearly understand the above-described arrangements, a specific example will be provided below for a brief description.
Take a motion detection service as an example: when a picture change is detected, a high-definition snapshot of about 1MB must be captured and stored at the moment of the change. Most small memory applications during device operation are within 512B, and the baseline memory usage is stable at about 16MB. The device memory is configured (manually, or automatically in the manner of the second embodiment) as: a large memory pool with total capacity 48MB and page capacity 1MB, and 16 small memory pools each with total capacity 1MB and page capacity 512 bytes. The resulting linked lists of the large and small memory pools are shown in fig. 2.
In FIG. 2, the node capacity of each linked list in the large and small memory pools is 2^n × page capacity, where n is a natural number; this determines the capacity of each node in the corresponding linked list. For example, the 1MB linked list in the large memory pool is 2^0 × 1MB = 1MB, indicating that each node in the 1MB linked list corresponds to 1MB of memory capacity. If a list has no node it is marked NULL; if a node maps memory space it is marked NODE, the first such node being NODE1. The other lists are similar and are not described again.
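The 2^n × page-capacity list structure can be enumerated with a small sketch. An illustration only: it covers the strict power-of-two lists, while the example large pool additionally keeps a terminal 48MB list covering the whole pool, which is not a power of two of its page capacity.

```python
def list_capacities(page_capacity, total_capacity):
    """Node capacities of a pool's linked lists: 2**n * page_capacity for
    natural n, while the capacity does not exceed the pool's total."""
    caps, n = [], 0
    while page_capacity << n <= total_capacity:
        caps.append(page_capacity << n)   # 2**n * page capacity
        n += 1
    return caps
```

For a 1MB small pool with 512B pages this yields the 512B, 1KB, ..., 1MB lists of Table 5.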
Specifically, when a user applies for memory blocks with different required capacities:
In the first case, when the program flow applies for a required capacity of 512B, it is judged that 512B is smaller than the 1MB page capacity of the large memory pool, so the capacity is applied from the small memory pool numbered 1. During application, the linked lists are queried level by level to find the list whose nodes are not smaller than 512B (the 1KB linked list); a node is cut, and the remaining memory blocks are stored in the corresponding lists. For the specific splitting method, refer to the buddy algorithm.
In the second case, when a capacity of 1M is applied for storing a high-definition picture, the required capacity 1M is judged to be not smaller than the 1MB page capacity of the large memory pool, so the required capacity is applied from the large memory pool.
In the third case, if the number of 512B applications exceeds 2048, the small memory pool numbered 1 is fully occupied, and small memories are allocated from the small memory pool numbered 2.
In the fourth case, if 512B applications continue and none of the small memory pools in the group (numbers 1-16) can provide memory space, one page of the large memory pool is configured as a new small memory pool (number 17), which provides memory space for the service.
In the fifth case, when the service has used up the requested capacity, the memory is released. At this time, the number of the first memory pool that allocated the requested capacity for the service is checked. If the number is 0, the first memory pool used by the service is the large memory pool, and the storage space occupied by the requested capacity is released directly back to the linked lists of the large memory pool. If the number is 1-16, the storage space is released directly back to the linked lists of the corresponding small memory pool. If the number of the small memory pool is greater than 16 (the small memory pool came from the large memory pool), it must be checked whether the pool is completely idle; if so, the small memory pool is released back to the large memory pool.
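The release routing in the fifth case can be sketched as a small decision function. An illustration only; the string results stand in for the actual linked-list operations, and the names are hypothetical:

```python
def release(pool_number, base_pool_count, pool_fully_free):
    """Decide where freed memory goes.

    pool_number: 0 for the large pool, 1..base_pool_count for the
    preconfigured small pools, larger numbers for pools borrowed
    from the large pool.
    """
    if pool_number == 0:
        return "release to large-pool linked list"
    if pool_number <= base_pool_count:
        return "release to small pool %d's linked list" % pool_number
    # Borrowed pool: recycle the whole page once it is fully free.
    if pool_fully_free:
        return "recycle pool %d back to the large pool" % pool_number
    return "release to small pool %d's linked list" % pool_number
```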
In order to make those skilled in the art fully understand the above scheme, the flows of applying for the small memory pool and the large memory pool will be briefly described below.
First, when the memory requested by the user is the small memory pool, please refer to fig. 3.
Step 301: and allocating a memory pool for the user according to the memory application request of the user, wherein the capacity of the memory application request is the required capacity.
Step 302: and if the required capacity of the user memory application request is smaller than the total capacity of the small memory pool, selecting the small memory pool.
Step 303: and calculating the first page number of the small memory pool required by the memory application request according to the required capacity and the page size of the small memory pool.
Step 304: and judging whether the existing small memory pool has enough allocation.
If the total free page number of at least one node in the existing small memory pool is not less than the first page number, it is determined that the existing small memory pool is sufficiently allocated, and step 305-step 307 are performed.
Step 305: and judging whether the nodes in the existing small memory pool are suitable or not.
And determining a node closest to the first page number from the at least one node as a first node, and if the number of idle pages in the first node is greater than the first page number, determining that the node in the existing small memory pool is not suitable, and executing step 306.
Step 306: and cutting the first node.
And returning the cut new node for the user to use, namely executing step 307.
If the number of idle pages in the first node is equal to the first number of pages, it is determined that the node in the existing small memory pool is appropriate, and step 307 is executed.
Step 307: the node is returned for use.
The corresponding node is returned to the user for use according to step 305 or step 306.
If the total free page number of none of the nodes in the existing small memory pool is not less than the first page number, it is determined that the existing small memory pool is not allocated enough, and step 308-step 312 are executed.
Step 308: requesting to borrow memory from the large memory pool.
Step 309: judging whether the large memory pool has 1 free page.
If no page of the large memory pool is free, step 310 is executed.
Step 310: returning the memory shortage.
If the large memory pool has 1 free page, step 311 and step 312 are executed, and after step 312 is executed, steps 304 to 307 are repeated.
Step 311: the large memory pool allocates 1 page of memory as a new small memory pool.
Step 312: and initializing the new small memory pool.
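The small-pool application flow of Fig. 3 (steps 301-312) can be condensed into one sketch. A simplified illustration, assuming free capacity is tracked as a per-pool page count and that the 512B page / 2048-page pool sizes from the running example apply; real node cutting is elided:

```python
def apply_small(required, pools, large_free_pages, page=512, pool_pages=2048):
    """pools: dict mapping pool number -> free pages.
    Returns (pool_number or None, remaining_large_free_pages)."""
    need = -(-required // page)                  # step 303: pages required
    for number in sorted(pools):                 # step 304: enough to allocate?
        if pools[number] >= need:
            pools[number] -= need                # steps 305-307: cut, return node
            return number, large_free_pages
    if large_free_pages < 1:                     # steps 308-310: cannot borrow
        return None, large_free_pages            # memory shortage
    new_number = max(pools) + 1                  # step 311: one large-pool page
    pools[new_number] = pool_pages - need        # step 312, then retry 304-307
    return new_number, large_free_pages - 1
```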
Second, please refer to fig. 4 when the memory requested by the user is a large memory pool.
Step 401: and selecting a memory pool for the user according to the memory application request of the user.
If the required capacity requested by the memory application request is greater than the total capacity of the small memory pool, step 402 is executed.
Step 402: and selecting a large memory pool.
Step 403: the required number of pages is calculated.
And calculating the number of pages (recorded as the first number of pages) required by the memory application request according to the required capacity and the page capacity of the large memory pool.
Step 404: and judging whether the large memory pool is enough to be allocated.
If the total free page number of at least one node in the existing large memory pool is not less than the first page number, it is determined that the existing large memory pool is sufficiently allocated, and step 405-step 407 are performed.
Step 405: and judging whether the nodes in the existing large memory pool are suitable or not.
And determining a node closest to the first page number from the at least one node as a first node, and if the number of idle pages in the first node is greater than the first page number, determining that the node in the existing large memory pool is not suitable, and executing step 406.
Step 406: and cutting the first node.
And returning the cut new node for the user to use, namely executing step 407.
If the number of free pages in the first node is equal to the first page number, it is determined that the node in the existing large memory pool is appropriate, and step 407 is executed.
Step 407: the node is returned for use.
The corresponding node is returned to the user for use according to step 405 or step 406.
If the total free page number of none of the nodes in the existing large memory pool is not less than the first page number, it is determined that the existing large memory pool is not allocated enough, and step 408 is executed.
Step 408: returning that the memory is insufficient.
After the user finishes using the applied memory, it needs to be released. The memory release process is briefly described as follows; please refer to fig. 5:
step 501: after the user has used the applied memory, receive the user's memory release request.
Step 502: judge whether the user memory comes from the large memory pool or a small memory pool.
If it comes from a small memory pool, steps 503-506 are executed.
Step 503: release the user memory.
Step 504: judge whether the small memory pool where the user memory is located is fully idle.
If the small memory pool where the user memory is located is not fully idle, step 506 is executed.
Step 506: the memory release process ends.
If the determination result in step 504 is that the small memory pool where the user memory is located is fully idle, step 505 is executed.
Step 505: judge whether the small memory pool where the user memory is located comes from the large memory pool.
If the small memory pool where the user memory is located does not come from the large memory pool, step 506 is executed.
If the small memory pool where the user memory is located comes from the large memory pool, step 507, step 508, and step 506 are executed in sequence.
Step 507: recycle the released memory as a node.
Release the memory of the small memory pool where the user memory is located, and recycle it as a node of the large memory pool.
Step 508: merge nodes.
Merge the node containing the recycled memory with other nodes in the large memory pool to obtain a node with larger capacity.
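Merging in step 508 requires knowing which pages each node covers. A sketch, assuming each node records a starting page index and the free list is kept sorted by that index; the layout and names are assumptions, not the patent's implementation:

```c
#include <stddef.h>
#include <stdlib.h>

/* Assumed node layout: each free node records which pages it covers so that
   adjacency can be detected when a small pool is recycled (step 508). */
struct pnode {
    size_t start;          /* first page covered by this node  */
    size_t npages;         /* number of free pages             */
    struct pnode *next;    /* free list kept sorted by `start` */
};

/* Insert a recycled node into the sorted free list and merge it with any
   neighbour whose page range is contiguous, yielding a larger node. */
static void recycle_merge(struct pnode **head, struct pnode *rec)
{
    struct pnode *prev = NULL, *cur = *head;

    while (cur != NULL && cur->start < rec->start) {
        prev = cur;
        cur = cur->next;
    }
    rec->next = cur;
    if (prev != NULL)
        prev->next = rec;
    else
        *head = rec;

    if (cur != NULL && rec->start + rec->npages == cur->start) {
        rec->npages += cur->npages;        /* merge with the following node */
        rec->next = cur->next;
        free(cur);
    }
    if (prev != NULL && prev->start + prev->npages == rec->start) {
        prev->npages += rec->npages;       /* merge with the preceding node */
        prev->next = rec->next;
        free(rec);
    }
}
```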
In a second embodiment, based on the same inventive concept, an embodiment of the present invention provides a memory management method for configuring memory; referring to fig. 6, the method includes:
step 601: when a service is running, record the usage of the memory pool to obtain a memory usage record; the memory usage record at least includes the memory application capacities and times of all services, and the total usage of the memory pool sampled at a specified time interval.
For example, the memory usage record shows that, in a specified time period, a memory of 1KB was applied for 10 times, 1.5KB 15 times, 2KB 20 times, 2.5KB 25 times, 3KB 30 times, 4KB 25 times, 6KB 20 times, 7KB 10 times, 8KB 10 times, 9KB 13 times, and 10KB 11 times. The total usage of the pool is sampled once per specified interval (every 5s), e.g. 15KB, 25KB, 50KB, 70KB, etc.
After the memory usage record is obtained, step 602 may be performed.
Step 602: carrying out statistical analysis on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool; the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pools, the page capacities of the small memory pools in the large memory pool and the small memory pool group are different, and the total capacity of the small memory pools in the small memory pool group is equal to the page capacity of the large memory pool.
Specifically, configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool are obtained, statistical analysis needs to be performed on the memory application capacity and the number of times of all services, and the page capacity of the large memory pool and the page capacity of the small memory pool are obtained; the page capacity of the large memory pool and the page capacity of the small memory pool are parameters in the configuration parameters.
Statistical analysis is performed on the memory application capacity and the number of times of all services, and a mode of drawing the application capacity and the number of times of application as a trend graph can be adopted, and the data in step 601 is taken as an example, and the drawn trend graph is shown in fig. 7.
The page capacity of the small memory pool is obtained as follows:
first, obtain, from the memory application capacities and times of all services, the application capacity with the most applications as the first application capacity; then multiply the first application capacity by a preset proportionality coefficient to obtain the page capacity of the small memory pool. The preset proportionality coefficient is used to balance the allocation efficiency and the utilization rate of the small memory pool.
Specifically, if the statistical analysis of the memory application capacities and times is performed in the manner shown in fig. 7, it can be read directly from fig. 7 that the largest number of applications is 30 and the corresponding application capacity is 3KB, so the first application capacity is determined to be 3KB; the first application capacity of 3KB is then multiplied by a preset proportionality coefficient of 0.5, giving a small memory pool page capacity of 3KB × 0.5 = 1.5KB.
It should be understood that the preset proportionality coefficient may be adjusted according to actual needs; its main purpose is to balance memory allocation efficiency against memory utilization, and in general, the higher the allocation efficiency, the lower the utilization. The specific value chosen for the preset proportionality coefficient is therefore not limited herein.
After the page capacity of the small memory pool is determined, the page capacity of the large memory pool can be determined. Specifically, from the memory application capacities and times of all services, the application capacity that is greater than the first application capacity and was applied for the fewest times may be obtained as the second application capacity, and the second application capacity is used as the page capacity of the large memory pool.
Referring to fig. 7, it can be determined that, among the application capacities greater than the first application capacity, the fewest applications is 10 and the corresponding capacity is 8KB; the second application capacity is therefore determined to be 8KB and used as the page capacity of the large memory pool.
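Both page capacities can be derived mechanically from the (capacity, times) records. A sketch using the example data; the names and the tie-break toward the larger capacity (7KB and 8KB both occur 10 times, the description picks 8KB) are assumptions:

```c
#include <stddef.h>

/* Assumed record: one (application capacity, number of applications) pair. */
struct app_stat { double cap_kb; int times; };

/* Derive the small-pool and large-pool page capacities (steps 601-602).
   `scale` is the preset proportionality coefficient, e.g. 0.5. */
static void derive_page_sizes(const struct app_stat *s, size_t n,
                              double scale,
                              double *small_page_kb, double *large_page_kb)
{
    /* first application capacity: the capacity applied for most often */
    size_t most = 0;
    for (size_t i = 1; i < n; i++)
        if (s[i].times > s[most].times)
            most = i;
    *small_page_kb = s[most].cap_kb * scale;

    /* second application capacity: among capacities larger than the first,
       the one applied for least often (ties resolved to the larger capacity,
       an assumption matching the example's 8KB choice) */
    size_t least = n;   /* sentinel: none found yet */
    for (size_t i = 0; i < n; i++)
        if (s[i].cap_kb > s[most].cap_kb &&
            (least == n || s[i].times <= s[least].times))
            least = i;
    *large_page_kb = (least == n) ? 0.0 : s[least].cap_kb;
}
```

Fed the example record (3KB applied 30 times, scale 0.5), this yields the 1.5KB small page and 8KB large page from the description.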
It should be understood that, although the statistical analysis of the memory application capacity and the number of times of all services is performed in the above description by using a trend graph, in practical applications, an algorithm may be used to implement the same function as that described above, and therefore, the specific way of performing the statistical analysis of the memory application capacity and the number of times of all services is not limited.
By the method, the respective page capacities of the large memory pool and the small memory pool in the configuration parameters can be determined, and then other parameters in the configuration parameters can be further determined.
In order to obtain the remaining configuration parameters, such as the total capacity and number of the small memory pools, the occurrence probabilities of all total usage values may first be ranked, and the specified number of total usage values with the highest occurrence probability obtained; a weighted calculation is then performed on these total usage values to obtain a third memory value, which is the total capacity of the small memory pool group; finally, the third memory value is divided by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group, which is also a parameter among the configuration parameters.
For example, the total usage values of the memory pool recorded in the memory usage record are, in order: 32KB, 15KB, 25KB, 50KB, 70KB, 25KB, 15KB, 32KB, 15KB, 50KB, 25KB, 50KB, 32KB, 20KB, 15KB, 25KB, 15KB.
The number of occurrences of each total usage value is counted first: 15KB, 25KB, 32KB, 50KB, and 70KB occur 8, 7, 3, 2, and 1 times respectively.
The occurrence probability of each total usage value is then calculated by dividing its number of occurrences by the total number of occurrences of all values; specifically, the occurrence probabilities are, in order: 38%, 33.3%, 14.4%, 9.5%, 4.8%.
All the occurrence probabilities can then be ranked; if the specified number is 3, the 3 values with the highest occurrence probability are taken, namely 38%, 33.3%, and 14.4%, whose corresponding total usage values are 15KB, 25KB, and 32KB. A weighted calculation over these 3 total usage values gives the third memory value: 15KB × 38% + 25KB × 33.3% + 32KB × 14.4% = 18.633KB. Since the total capacity of each small memory pool equals the page capacity of the large memory pool (8KB), the total capacity of the small memory pool group is rounded up from the calculated 18.633KB to 24KB, and the total number of small memory pools is therefore 24KB/8KB = 3.
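The weighted sum and the rounding up to a whole number of large-pool pages can be computed as below; the function name is an assumption made for illustration:

```c
#include <stddef.h>

/* Weighted total usage (third memory value), rounded up to a whole number of
   large-pool pages; returns the pool count and writes the group capacity. */
static size_t small_group_pools(const double *usage_kb, const double *prob,
                                size_t n, double large_page_kb,
                                double *group_total_kb)
{
    double third = 0.0;
    for (size_t i = 0; i < n; i++)
        third += usage_kb[i] * prob[i];            /* third memory value */

    size_t pools = (size_t)(third / large_page_kb);
    if ((double)pools * large_page_kb < third)     /* round up, no ceil() */
        pools++;
    *group_total_kb = (double)pools * large_page_kb;
    return pools;
}
```

With the example inputs (15KB, 25KB, 32KB weighted by 38%, 33.3%, 14.4% and an 8KB large page), this reproduces the 24KB group capacity and 3 small pools.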
After determining the total capacity of the small memory pool group, the total capacity of the large memory pool can be determined according to the total capacity of the memory pool and the total capacity of the small memory pool group.
Assuming that the total capacity of the memory pool is 100KB, the total capacity of the large memory pool is 100KB - 24KB = 76KB.
At this point, the main parameters of the memory configuration are determined.
It should be noted that the configuration parameters are not limited to the calculated parameters described above. For example, the number of the large memory pool is set to 0, and the numbers of the small memory pool group start from 1; each time a small memory pool is added, the total number of the group increases by 1. If memory space is borrowed from the large memory pool, it is configured according to the configuration parameters of the small memory pools, and its number is the maximum number in the small memory pool group plus one.
It should also be noted that, when a new service is enabled or a special situation occurs, the system can automatically record the memory usage and re-determine the configuration parameters by the above method; the user can then reconfigure the memory pool with the new configuration parameters when needed.
Based on the same inventive concept, referring to fig. 8, an embodiment of the present invention provides an apparatus for memory management, including:
a determining unit 801, configured to determine whether a total available capacity of the first memory pool determined for the memory application is not less than a capacity required for allocating the memory application; the first memory pool is one of a large memory pool or a small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the equipment, the page capacities of the small memory pools in the large memory pool and the small memory pool group are different, and the total capacity of the small memory pools in the small memory pool group is equal to the page capacity of the large memory pool;
an allocating unit 802, configured to allocate a memory from the first memory pool to the memory application if the total available capacity of the first memory pool is not less than the required capacity.
Optionally, the determining unit 801 is configured to:
if the total available capacity of the first memory pool is smaller than the required capacity, return information indicating insufficient memory to the service corresponding to the memory application.
Optionally, the determining unit 801 is further configured to:
judging whether the required capacity is not less than the page capacity of the large memory pool or not;
if so, determining the large memory pool as the first memory pool;
and if not, determining a small memory pool from the small memory pool group as the first memory pool.
Optionally, the determining unit 801 is further configured to:
select, from the small memory pool group, the small memory pool whose total available capacity is greater than the required capacity and whose number is the smallest as the first memory pool; the number uniquely identifies a small memory pool.
Allocating the required capacity from the lowest-numbered small memory pool that can satisfy the memory application makes the small memory pools easier to manage and reduces the generation of memory fragments.
Optionally, the determining unit 801 is further configured to:
if the total available capacity of none of the small memory pools in the small memory pool group is larger than the required capacity, borrowing memory space from the large memory pool to configure a new small memory pool, and taking the memory space as the total memory capacity of the new small memory pool; and taking the new small memory pool as the first memory pool.
Optionally, the determining unit 801 is further configured to:
judging whether the available space of the large memory pool is not less than one page or not;
if so, allocating a page from the large memory pool as the total storage space of the new small memory pool; the storage space configuration of the new small memory pool is the same as the configuration of the small memory pools in the small memory pool group, and the number of the new small memory pool is the value obtained by adding one to the maximum number in all the small memory pools.
Optionally, the determining unit 801 is further configured to:
if the available space of the large memory pool is less than one page, the memory space borrowing fails;
and informing the memory application that no available memory space exists currently.
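The borrowing path can be sketched as a small check-and-configure routine; the struct fields and numbering rule follow the description, but every name here is an assumption:

```c
#include <stddef.h>

/* Assumed descriptor of a small memory pool. */
struct small_pool {
    int number;            /* unique number; borrowed pools extend the group */
    size_t total_kb;       /* equals the large pool's page capacity          */
    int from_large;        /* nonzero if borrowed from the large pool        */
};

/* Borrow one page from the large pool as a new small pool; returns -1 when
   the large pool has less than one free page (memory space borrowing fails,
   and the application is notified that no memory is available). */
static int borrow_small_pool(size_t large_free_pages, size_t large_page_kb,
                             int max_small_number, struct small_pool *out)
{
    if (large_free_pages < 1)
        return -1;
    out->number = max_small_number + 1;   /* max number in the group plus one */
    out->total_kb = large_page_kb;        /* one large-pool page of storage   */
    out->from_large = 1;
    return 0;
}
```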
Optionally, the allocating unit 802 is configured to:
determining a first node from a linked list of the first memory pool based on a first page number corresponding to the required capacity; the number of pages corresponding to the first node is the node with the page number closest to the first page number in the linked list of the first memory pool, and the nodes in the linked list correspond to the memory space of the first memory pool;
and allocating the memory space corresponding to the first node to the memory application.
Optionally, the allocating unit 802 is further configured to:
after the service corresponding to the memory application uses up the memory allocated by the first memory pool,
judging whether the first memory pool is a small memory pool or not according to the number of the first memory pool;
if yes, further judging whether the first memory pool is completely idle;
and when the first memory pool is determined to be fully idle, if the number of the first memory pool is greater than the maximum number in the small memory pool group, the first memory pool is determined to come from the large memory pool and is recycled to the large memory pool.
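The release-time decision reduces to a numbering comparison, since borrowed pools are the only ones numbered beyond the configured group; a sketch with an assumed predicate name:

```c
/* A fully idle small pool whose number exceeds the configured group's maximum
   number was borrowed from the large pool and should be recycled there. */
static int should_recycle_to_large(int pool_number, int max_group_number,
                                   int fully_idle)
{
    return fully_idle && pool_number > max_group_number;
}
```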
Optionally, the apparatus includes:
a parameter configuration unit 803, configured to configure the specified memory pool as the large memory pool and the small memory pool according to configuration parameters;
the configuration parameters at least comprise the total capacity of the large memory pool and the small memory pool, the page capacity of the large memory pool and the small memory pool, and the total number of the small memory pools.
Based on the same inventive concept, referring to fig. 9, an embodiment of the present invention provides an apparatus for memory management, configured to configure memory parameters, including:
a recording unit 901, configured to record a use condition of a memory pool when a service is running, and obtain a memory use record; the memory usage record at least comprises the memory application capacity and times of all services, and the total usage of the memory pool is sampled according to a specified time interval;
a parameter obtaining unit 902, configured to perform statistical analysis on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool for the memory pool; the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacities of the small memory pools in the large memory pool and the small memory pool group are different, and the total capacity of the small memory pools in the small memory pool group is equal to the page capacity of the large memory pool.
Optionally, the parameter obtaining unit 902 is configured to:
performing statistical analysis on the memory application capacity and times of all the services to obtain the page capacity of the large memory pool and the page capacity of the small memory pool; the page capacity of the large memory pool and the page capacity of the small memory pool are parameters in the configuration parameters;
when obtaining the page capacity of the small memory pool, firstly obtain, from the memory application capacities and times of all services, the application capacity with the most applications as a first application capacity; multiply the first application capacity by a preset proportionality coefficient to obtain the page capacity of the small memory pool; the preset proportionality coefficient is used to balance the allocation efficiency and the utilization rate of the small memory pool;
when obtaining the page capacity of the large memory pool, firstly obtain, from the memory application capacities and times of all services, the application capacity that is greater than the first application capacity and was applied for the fewest times as a second application capacity, and then use the second application capacity as the page capacity of the large memory pool.
Optionally, the parameter obtaining unit 902 is configured to:
carrying out statistical analysis on the total usage of the memory pool collected in a specified time period to obtain the occurrence probability of each total usage value; wherein the occurrence probability is the percentage of the occurrence frequency of each total usage value to the occurrence frequency of all total usage values in the specified time period;
sequencing the occurrence probabilities of all the total use quantity values to obtain the total use quantity values of the designated number with the highest occurrence probability;
carrying out weighted calculation on the total usage value of the specified quantity to obtain a third memory value; wherein, the third memory value is the total capacity of the small memory pool group;
dividing the third memory value by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group; wherein the total number of the small memory pools is a parameter in the configuration parameters;
performing difference operation on the total capacity of the memory pool and the third memory value to obtain the total capacity of the large memory pool; and the total capacity of the large memory pool is a parameter in the configuration parameters.
Based on the same inventive concept, an embodiment of the present invention provides an apparatus for memory management, including: at least one processor, and
a memory coupled to the at least one processor;
the memory stores instructions executable by the at least one processor, and the at least one processor executes the memory management method in the first embodiment or the second embodiment by executing the instructions stored in the memory.
Based on the same inventive concept, an embodiment of the present invention further provides a computer-readable storage medium, including:
the computer-readable storage medium stores computer instructions that, when executed on a computer, cause the computer to perform the memory management method in the first embodiment or the second embodiment.
In the embodiment provided by the invention, the memory of the equipment is divided into a large memory pool and a small memory pool group, the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool, and the page capacity of each small memory pool is different from the page capacity of the large memory pool, so that the memory application with small required capacity can be concentrated in the small memory pools for processing, thereby effectively avoiding the memory with small required capacity occupying the capacity of the large memory pool, reducing the occurrence of memory fragmentation and further improving the utilization rate of the memory.
In the embodiment provided by the invention, the use condition of the memory pool, such as the memory application capacity and times and the total use amount of the memory pool, is automatically recorded when the service is operated, so that the memory use record is obtained; then, carrying out statistical analysis on the memory use record to obtain configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool; the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacities of the small memory pools in the large memory pool and the small memory pool group are different, and the total capacity of the small memory pools in the small memory pool group is equal to the page capacity of the large memory pool. Therefore, the configuration parameters of the memory pool of the equipment can be rapidly determined, the efficiency of memory configuration can be effectively improved, the configuration parameters can be automatically determined according to the method when new services are found, and the intelligence of the memory configuration is improved. Furthermore, by configuring the memory pool of the device into a large memory pool and a small memory pool according to the configuration parameters, the service applying for a small memory can request from the small memory pool, and the service applying for a large memory can request from the large memory pool, so that memory fragments can be effectively reduced, the memory utilization rate can be improved, and the user can dynamically analyze and adjust the memory using condition of the service by the method when developing the service, thereby further improving the smoothness of service operation.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (13)

1. A method for memory management, configured to allocate memory, the method comprising:
judging whether the capacity required by the memory application allocation is not less than the page capacity of the large memory pool or not;
if so, determining the large memory pool as a first memory pool;
if not, determining a small memory pool from the small memory pool group as the first memory pool;
judging whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required by the memory application allocation; the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the equipment, the page capacity of the large memory pool is different from the page capacity of each small memory pool in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group is equal to the page capacity of the large memory pool;
and if the total available capacity of the first memory pool is not less than the required capacity, allocating memory from the first memory pool to the memory application.
2. The method of claim 1, wherein said determining a small memory pool from said small memory pool group as said first memory pool comprises:
selecting the minimum memory pool with the total available capacity larger than the required capacity and the minimum number from the small memory pool group as the first memory pool; wherein, the number is used for uniquely marking the small memory pool; or
If the total available capacity of none of the small memory pools in the small memory pool group is larger than the required capacity, borrowing memory space from the large memory pool to configure a new small memory pool, and taking the memory space as the total memory capacity of the new small memory pool; and taking the new small memory pool as the first memory pool.
3. The method of claim 2, wherein said borrowing memory space from said large memory pool comprises:
judging whether the available space of the large memory pool is not less than one page or not;
if so, allocating a page from the large memory pool as the total storage space of the new small memory pool; and the storage space configuration of the new small memory pool is the same as the configuration of the small memory pools in the small memory pool group, and the number of the new small memory pool is the value obtained by adding one to the maximum number in all the small memory pools.
4. The method as claimed in claim 3, wherein after said determining whether the available space of the large memory pool is not less than one page, the method further comprises:
if the available space of the large memory pool is less than one page, the memory space borrowing fails;
and informing the memory application that no available memory space exists currently.
5. The method according to any of claims 1-3, wherein said allocating memory from said first memory pool to said memory application comprises:
determining a first node from a linked list of the first memory pool based on a first page number corresponding to the required capacity; the number of pages corresponding to the first node is the node with the page number closest to the first page number in the linked list of the first memory pool, and the nodes in the linked list correspond to the memory space of the first memory pool;
and allocating the memory space corresponding to the first node to the memory application.
6. The method of any of claims 1-3, wherein after said allocating memory from said first memory pool to said memory application, further comprising:
after the service corresponding to the memory application uses up the memory allocated by the first memory pool,
judging whether the first memory pool is a small memory pool or not according to the number of the first memory pool;
if yes, further judging whether the first memory pool is completely idle;
and when the first memory pool is determined to be fully idle, if the number of the first memory pool is larger than the maximum number in the small memory pool group, determining that the first memory pool is from the large memory pool, and recycling the first memory pool to the large memory pool.
7. The method as claimed in claim 6, wherein before determining whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required for allocation of the memory application, comprising:
configuring the designated memory pool into the large memory pool and the small memory pool group according to the configuration parameters;
the configuration parameters at least comprise the total capacity of the large memory pool and the small memory pool, the page capacity of the large memory pool and the small memory pool, and the total number of the small memory pools.
8. A method of memory management for configuring memory, the method comprising:
when a service is running, recording usage of the memory pool to obtain a memory usage record; the memory usage record at least comprising the memory application capacity and application times of each service, and the total usage of the memory pool sampled at a specified time interval;
performing statistical analysis on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool, including: performing statistical analysis on the total usage of the memory pool collected in a specified time period to obtain the occurrence probability of each total usage value, wherein the occurrence probability is the percentage of the occurrence count of each total usage value out of the occurrence counts of all total usage values in the specified time period; sorting the occurrence probabilities of all the total usage values to obtain a specified number of total usage values with the highest occurrence probabilities; performing a weighted calculation on the specified number of total usage values to obtain a third memory value, wherein the third memory value is the total capacity of the small memory pool group; dividing the third memory value by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group; and subtracting the third memory value from the total capacity of the memory pool to obtain the total capacity of the large memory pool, the total capacity of the large memory pool being one of the configuration parameters;
wherein the sum of the capacities of the large memory pool and the small memory pool group equals the total capacity of the memory pool, the page capacity of the large memory pool differs from the page capacity of each small memory pool in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group equals the page capacity of the large memory pool.
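The statistics pipeline of claim 8 can be sketched end to end. The claim does not fix the weights of the "weighted calculation"; a probability-weighted average over the top-k usage values is one reasonable reading and is labeled as an assumption here, as are all function names.

```python
from collections import Counter
import math

def third_memory_value(samples, k):
    """Probability-weighted average of the k most frequent total-usage values
    (one reading of the claim's 'weighted calculation')."""
    counts = Counter(samples)
    total = len(samples)
    probs = {v: c / total for v, c in counts.items()}  # occurrence probability
    top = sorted(probs, key=probs.get, reverse=True)[:k]
    weight_sum = sum(probs[v] for v in top)
    return sum(v * probs[v] for v in top) / weight_sum

def derive_config(samples, k, pool_total, large_page):
    """Derive the small-pool count and the large pool's total capacity."""
    third = third_memory_value(samples, k)       # total capacity of small pool group
    small_count = math.ceil(third / large_page)  # total number of small pools
    large_total = pool_total - third             # remainder goes to the large pool
    return third, small_count, large_total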
9. The method of claim 8, wherein obtaining the configuration parameters that configure a large memory pool and a small memory pool group for the memory pool comprises:
performing statistical analysis on the memory application capacities and application times of all services to obtain the page capacity of the large memory pool and the page capacity of the small memory pools; the page capacity of the large memory pool and the page capacity of the small memory pools being parameters among the configuration parameters;
wherein obtaining the page capacity of the small memory pools comprises: obtaining, from the memory application capacities and application times of all services, the memory application capacity with the largest number of applications as a first application capacity, and multiplying the first application capacity by a preset proportionality coefficient to obtain the page capacity of the small memory pools, the preset proportionality coefficient balancing the allocation efficiency and the utilization rate of the small memory pools; and obtaining the page capacity of the large memory pool comprises: obtaining, from the memory application capacities and application times of all services, the memory application capacity, among those not smaller than the first application capacity, with the least number of applications as a second application capacity, and taking the second application capacity as the page capacity of the large memory pool.
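Claim 9's two page-capacity rules can be sketched together. Note the hedge: the original wording for the large-pool rule is garbled, and reading it as "the least-frequently requested capacity that is not smaller than the first application capacity" is an assumption, as is the function name.

```python
from collections import Counter

def page_capacities(applications, scale):
    """applications: list of requested sizes; scale: the preset proportionality
    coefficient trading allocation efficiency against utilization rate."""
    counts = Counter(applications)
    # the capacity requested most often -> first application capacity
    first_cap = max(counts, key=counts.get)
    small_page = int(first_cap * scale)
    # assumed reading: among capacities >= first_cap, the one requested least often
    candidates = [c for c in counts if c >= first_cap]
    large_page = min(candidates, key=counts.get)
    return small_page, large_page
```

Scaling the most common request size up by the coefficient means a typical request fits in one small-pool page with a little slack (fewer multi-page allocations) without wasting too much of each page.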
10. An apparatus for memory management, configured to allocate memory, the apparatus comprising:
a judging unit, configured to judge whether the capacity required by a memory allocation application is not less than the page capacity of the large memory pool; if so, determine the large memory pool as a first memory pool; if not, determine a small memory pool from the small memory pool group as the first memory pool; and judge whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity the memory application requires; wherein the sum of the capacities of the large memory pool and the small memory pool group equals the total memory capacity of the device, the page capacity of the large memory pool differs from the page capacity of each small memory pool in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group equals the page capacity of the large memory pool;
and an allocation unit, configured to allocate memory from the first memory pool to the memory application if the total available capacity of the first memory pool is not less than the required capacity.
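The judging unit's routing decision in claim 10 reduces to a small function. How a small pool is chosen within the group is not fixed by the claim; taking the first one with enough available capacity is an illustrative assumption, as is the dict-based pool representation.

```python
def choose_pool(required, large_pool, small_pools):
    """Return the pool that should serve `required` bytes, or None if the
    chosen pool lacks capacity. Each pool is a dict with 'page' and 'available'."""
    if required >= large_pool["page"]:
        # requests of at least one large-pool page go to the large pool
        first = large_pool
    else:
        # assumed policy: first small pool in the group with enough room
        first = next((p for p in small_pools if p["available"] >= required),
                     small_pools[0] if small_pools else None)
    if first is None or first["available"] < required:
        return None  # total available capacity is less than the required capacity
    return first
```

Routing by the large pool's page capacity keeps small requests out of the large pool, so the large pool stays free of fine-grained fragmentation.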
11. An apparatus for memory management, configured to configure memory, the apparatus comprising:
a recording unit, configured to record usage of the memory pool while a service is running to obtain a memory usage record; the memory usage record at least comprising the memory application capacity and application times of each service, and the total usage of the memory pool sampled at a specified time interval;
a parameter obtaining unit, configured to perform statistical analysis on the memory usage record to obtain configuration parameters for configuring a large memory pool and a small memory pool group for the memory pool, including: performing statistical analysis on the total usage of the memory pool collected in a specified time period to obtain the occurrence probability of each total usage value, wherein the occurrence probability is the percentage of the occurrence count of each total usage value out of the occurrence counts of all total usage values in the specified time period; sorting the occurrence probabilities of all the total usage values to obtain a specified number of total usage values with the highest occurrence probabilities; performing a weighted calculation on the specified number of total usage values to obtain a third memory value, wherein the third memory value is the total capacity of the small memory pool group; dividing the third memory value by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group; and subtracting the third memory value from the total capacity of the memory pool to obtain the total capacity of the large memory pool, the total capacity of the large memory pool being one of the configuration parameters; wherein the sum of the capacities of the large memory pool and the small memory pool group equals the total capacity of the memory pool, the page capacity of the large memory pool differs from the page capacity of each small memory pool in the small memory pool group, and the total capacity of each small memory pool in the small memory pool group equals the page capacity of the large memory pool.
12. A memory management device, comprising:
at least one processor, and
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor performs the method of any one of claims 1-9 by executing the instructions stored in the memory.
13. A computer-readable storage medium, characterized in that:
the computer-readable storage medium stores computer instructions that, when run on a computer, cause the computer to perform the method of any one of claims 1-9.
CN201811266497.6A 2018-10-29 2018-10-29 Memory management method and device and computer storage medium Active CN110245091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811266497.6A CN110245091B (en) 2018-10-29 2018-10-29 Memory management method and device and computer storage medium


Publications (2)

Publication Number Publication Date
CN110245091A CN110245091A (en) 2019-09-17
CN110245091B true CN110245091B (en) 2022-08-26

Family

ID=67882385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811266497.6A Active CN110245091B (en) 2018-10-29 2018-10-29 Memory management method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110245091B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111090521B (en) * 2019-12-10 2023-05-02 Oppo(重庆)智能科技有限公司 Memory allocation method and device, storage medium and electronic equipment
CN112214313B (en) * 2020-09-22 2024-09-27 深圳云天励飞技术股份有限公司 Memory allocation method and related equipment
CN112241325B (en) * 2020-12-15 2021-03-23 南京集成电路设计服务产业创新中心有限公司 Ultra-large-scale integrated circuit database based on memory pool and design method
CN113504994B (en) * 2021-07-26 2022-05-10 上海遁一信息科技有限公司 Method and system for realizing elastic expansion and contraction of memory pool performance
CN113961485A (en) * 2021-10-26 2022-01-21 西安广和通无线通信有限公司 Memory optimization method, device, terminal and storage medium
CN116361234B (en) * 2023-06-02 2023-08-08 深圳中安辰鸿技术有限公司 Memory management method, device and chip

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1635482A (en) * 2003-12-29 2005-07-06 北京中视联数字系统有限公司 A memory management method for embedded system
CN101122883A (en) * 2006-08-09 2008-02-13 中兴通讯股份有限公司 Memory allocation method for avoiding RAM fragmentation
CN101266575A (en) * 2007-03-13 2008-09-17 中兴通讯股份有限公司 Method for enhancing memory pool utilization ratio
CN105893269A (en) * 2016-03-31 2016-08-24 武汉虹信技术服务有限责任公司 Memory management method used in Linux system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8321651B2 (en) * 2008-04-02 2012-11-27 Qualcomm Incorporated System and method for memory allocation in embedded or wireless communication systems
US8793444B2 (en) * 2011-05-05 2014-07-29 International Business Machines Corporation Managing large page memory pools


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Cross-Platform Memory Pool; Liu Juan; Journal of Bengbu University (蚌埠学院学报); 2017-04-20; full text *


Similar Documents

Publication Publication Date Title
CN110245091B (en) Memory management method and device and computer storage medium
CN103077197A (en) Data storing method and device
CN106407207B (en) Real-time newly-added data updating method and device
CN107783734B (en) Resource allocation method, device and terminal based on super-fusion storage system
US10356150B1 (en) Automated repartitioning of streaming data
CN111538586A (en) Cluster GPU resource management scheduling system, method and computer readable storage medium
CN105700948A (en) Method and device for scheduling calculation task in cluster
US8682850B2 (en) Method of enhancing de-duplication impact by preferential selection of master copy to be retained
CN107368260A (en) Memory space method for sorting, apparatus and system based on distributed system
CN110737717B (en) Database migration method and device
CN106325756B (en) Data storage method, data calculation method and equipment
CN111737168A (en) Cache system, cache processing method, device, equipment and medium
CN112559165A (en) Memory management method and device, electronic equipment and computer readable storage medium
CN116302461A (en) Deep learning memory allocation optimization method and system
CN110413539B (en) Data processing method and device
CN109033365B (en) Data processing method and related equipment
CN117234732A (en) Shared resource allocation method, device, equipment and medium
CN112988383A (en) Resource allocation method, device, equipment and storage medium
CN109788013B (en) Method, device and equipment for distributing operation resources in distributed system
CN110750330A (en) Virtual machine creating method, system, electronic equipment and storage medium
CN111291018B (en) Data management method, device, equipment and storage medium
CN112800020A (en) Data processing method and device and computer readable storage medium
CN111984650A (en) Storage method, system and related device of tree structure data
CN111599015A (en) Space polygon gridding filling method and device under constraint condition
CN117349023A (en) Application deployment method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant