CN110245091A - Memory management method, apparatus, and computer storage medium - Google Patents

Memory management method, apparatus, and computer storage medium

Info

Publication number
CN110245091A
CN110245091A
Authority
CN
China
Prior art keywords
memory pool
memory
capacity
small
pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811266497.6A
Other languages
Chinese (zh)
Other versions
CN110245091B (en)
Inventor
曾华安
陈梁
徐杨波
袁文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201811266497.6A priority Critical patent/CN110245091B/en
Publication of CN110245091A publication Critical patent/CN110245091A/en
Application granted granted Critical
Publication of CN110245091B publication Critical patent/CN110245091B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/0223 - User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 - Free address space management
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a memory management method, apparatus, and computer storage medium, to solve the prior-art problems of heavy memory fragmentation and low memory utilization during memory use. The method comprises: judging whether the total available capacity of a first memory pool determined for a memory request is no less than the capacity the request needs; wherein the first memory pool is either the big memory pool or one pool in a group of small memory pools, the combined capacity of the big memory pool and the small-memory-pool group is the device's total memory, the page capacity of the big memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool; and, if the total available capacity of the first memory pool is no less than the required capacity, allocating memory from the first memory pool to the request.

Description

Memory management method, apparatus, and computer storage medium
Technical field
The present invention relates to the field of memory management technology, and in particular to a memory management method, apparatus, and computer storage medium.
Background art
In memory management, the common basic unit is the page: the smallest unit in which memory can be requested and allocated. When memory is in use, the system allocates memory to users in whole multiples of a page.
Currently, the common memory management technique manages the pages of a memory pool with the buddy algorithm at its core. Specifically, the buddy algorithm keeps pages in the memory pool in linked lists. One pool may contain several lists; the nodes on different lists hold different numbers of pages, and the page counts are powers of two (2^n). For example, the free memory in a pool may be stored, respectively, in a list whose nodes are 1 page each, a list whose nodes are 2 pages each, a list whose nodes are 4 pages each, and so on.
When a user requests memory, the smallest page size covering the requested amount is computed, and the list whose nodes hold that many pages is searched for an available node. If one exists, that node is returned; otherwise, lists whose nodes hold more pages are searched for an available node. When the block found is larger than needed, the large block is split ("fissioned") into several smaller nodes that are stored back into the corresponding lists. When memory is returned to the pool, adjacent blocks of equal size are merged pairwise.
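The split-on-allocate behavior described above can be sketched as follows. This is a minimal model, not the patent's implementation: a pool is assumed to be a dict mapping block size in pages to a list of free block offsets.

```python
def sizes_up_to(total_pages):
    """Power-of-two block sizes up to the pool size: 1, 2, 4, ..."""
    size, out = 1, []
    while size <= total_pages:
        out.append(size)
        size *= 2
    return out

def make_pool(total_pages):
    """An unused pool starts as one free block covering everything."""
    pool = {size: [] for size in sizes_up_to(total_pages)}
    pool[total_pages].append(0)
    return pool

def alloc(pool, pages_needed):
    """Round up to the next power of two, then split larger blocks as needed."""
    size = 1
    while size < pages_needed:
        size *= 2
    cur = size
    while cur in pool and not pool[cur]:  # find the smallest free block that fits
        cur *= 2
    if cur not in pool:
        return None                       # no block large enough
    offset = pool[cur].pop()
    while cur > size:                     # "fission" down to the requested size
        cur //= 2
        pool[cur].append(offset + cur)    # the upper half stays free
    return offset
```

With an 8-page pool, the first 1-page request splits the single 8-page block into free buddies of 4, 2, and 1 pages, exactly the cascade the paragraph above describes.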
However, in multi-thread, multi-process scenarios, small blocks are often requested in large numbers and scattered across threads, and the order in which they are released is unpredictable, so the memory use of the whole system becomes scattered and fragmentation easily arises, hampering later large allocations. This is because merging under the buddy algorithm requires two conditions: the two blocks must be the same size, and their physical address ranges must be adjacent. That means a free node on a list can only merge with its adjacent buddy on that list; if the adjacent node stays occupied, the free node can never be merged and can only serve requests no larger than itself. If too many such fragments accumulate, the pool ends up in a very unhealthy state: it can hand out small blocks but not large ones, and memory is wasted.
For example, if the total size of a memory pool is 8K and the page size is 1K, the pool has 4 linked lists, as shown in Table 1 (assuming the pool is not yet in use).
Table 1
  1K chain: NULL  NULL  NULL  NULL  NULL  NULL  NULL  NULL
  2K chain: NULL  NULL  NULL  NULL
  4K chain: NULL  NULL
  8K chain: NODE
In Table 1, the 1K chain is a list with up to 8 nodes, each node 1K (one page) in size; the 2K chain is a list with up to 4 nodes, each node 2K (two pages) in size; and so on for the others. NULL denotes a node slot that is empty; NODE denotes a node that is available.
Assume the pool is then used as shown in Table 2.
Table 2
In Table 2, USE denotes a node that is in use.
In the buddy algorithm, only two adjacent blocks of equal size can merge. So NODE1 and NODE2 in the 1K chain cannot merge into one node of the 2K chain, and NODE3 in the 2K chain cannot in turn merge into one node of the 4K chain; the pool is therefore left with several memory fragments (NODE1, NODE2, NODE3).
In view of this, how to reduce the generation of memory fragmentation and improve memory utilization is a technical problem urgently to be solved.
Summary of the invention
The present invention provides a memory management method, apparatus, and computer storage medium, to solve the prior-art problems of heavy memory fragmentation and low memory utilization during memory use.
In a first aspect, to solve the above technical problems, the technical scheme of a memory management method provided by an embodiment of the present invention is as follows:
Judging whether the total available capacity of a first memory pool determined for a memory request is no less than the capacity the request needs to allocate; wherein the first memory pool is either the big memory pool or one pool in a small-memory-pool group, the combined capacity of the big memory pool and the small-memory-pool group is the device's total memory, the page capacity of the big memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool;
If the total available capacity of the first memory pool is no less than the required capacity, allocating memory from the first memory pool to the request.
By dividing the device's memory into one big memory pool and a group of small memory pools, making each small pool's total capacity equal to the big pool's page capacity, and giving the small pools a page capacity different from the big pool's, requests for small capacities can be concentrated in, and handled by, the small pools. This effectively keeps small allocations from occupying the big pool's capacity, reduces the generation of memory fragmentation, and thus improves memory utilization.
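The capacity relations above can be expressed as a small consistency check. The concrete numbers used in the test (a 1024-unit device, 64-unit big-pool pages, 4-unit small-pool pages) are illustrative assumptions, not values from the patent.

```python
def check_layout(device_total, big_total, big_page, small_page, n_small_pools):
    """Validate the scheme's capacity relations and return the pool layout.

    Relations (from the scheme):
      - each small pool's total capacity == the big pool's page capacity
      - the small-pool page capacity differs from the big-pool page capacity
      - big pool total + all small pool totals == the device's total memory
    """
    small_total_each = big_page
    assert small_page != big_page, "page capacities must differ"
    assert small_total_each % small_page == 0, "small pool must hold whole pages"
    assert big_total + n_small_pools * small_total_each == device_total
    return {"big": big_total, "small_pools": [small_total_each] * n_small_pools}
```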
With reference to the first aspect, in a first possible embodiment of the first aspect, judging whether the total available capacity of the first memory pool determined for the memory request is no less than the capacity the request needs comprises:
returning an out-of-memory notification to the service corresponding to the memory request if the total available capacity of the first memory pool is less than the required capacity.
With reference to the first aspect, in a second possible embodiment of the first aspect, before judging whether the total available capacity of the first memory pool determined for the memory request is no less than the required capacity, the method further comprises:
judging whether the required capacity is no less than the page capacity of the big memory pool;
if so, taking the big memory pool as the first memory pool;
if not, selecting one small memory pool from the small-memory-pool group as the first memory pool.
By first checking whether the required capacity reaches the big pool's page capacity before allocating, a request can quickly be routed to the big pool or to the small-pool group. Services with small memory needs are thus concentrated in the small-pool group, which reduces fragmentation and speeds up allocation.
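A sketch of this routing decision, assuming "no less than the big pool's page capacity" sends a request to the big pool and everything smaller goes to the small-pool group:

```python
def choose_pool_kind(required, big_page_capacity):
    """Route a request: large requests go to the big pool, small ones to the
    small-pool group, so small allocations never fragment the big pool."""
    return "big" if required >= big_page_capacity else "small"
```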
With reference to the second possible embodiment of the first aspect, in a third possible embodiment of the first aspect, determining one small memory pool from the group as the first memory pool comprises:
selecting, from the group, the lowest-numbered small memory pool whose total available capacity exceeds the required capacity as the first memory pool; wherein the number uniquely identifies the small pool.
Serving each request from the lowest-numbered pool that can satisfy it makes the small pools easy to manage and reduces the generation of memory fragmentation.
If no small pool in the group has a total available capacity exceeding the required capacity, memory space is borrowed from the big memory pool and configured as a new small memory pool whose total capacity is the borrowed space, and the new pool is taken as the first memory pool.
When no small pool in the group can serve the request, borrowing from the big pool in this way further reduces fragmentation and improves memory utilization.
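The selection and fallback can be sketched as follows. This is a simplified model in which each small pool is reduced to its number and free capacity; the names and return shape are illustrative.

```python
def pick_small_pool(small_pools, required, big_pool_free, big_page):
    """small_pools: dict of pool number -> free capacity.
    Returns (chosen pool number or None, small_pools, big_pool_free).
    Falls back to borrowing exactly one big-pool page as a new small pool,
    numbered one past the current maximum, as the scheme describes."""
    candidates = [n for n, free in sorted(small_pools.items()) if free > required]
    if candidates:
        return candidates[0], small_pools, big_pool_free   # lowest number wins
    if big_pool_free >= big_page:                          # borrow one page
        new_no = max(small_pools) + 1 if small_pools else 0
        small_pools[new_no] = big_page
        return new_no, small_pools, big_pool_free - big_page
    return None, small_pools, big_pool_free                # out of memory
```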
With reference to the third possible embodiment of the first aspect, in a fourth possible embodiment of the first aspect, borrowing memory space from the big memory pool comprises:
judging whether the available space of the big memory pool is no less than one page;
if so, allocating one page from the big pool as the total memory space of the new small pool; wherein the memory-space configuration of the new small pool is identical to that of the small pools in the group, and its number is the current maximum small-pool number plus one.
Checking whether the big pool's available capacity is no less than one page makes it quick to decide whether the big pool can lend memory to the small-pool group.
With reference to the fourth possible embodiment of the first aspect, in a fifth possible embodiment of the first aspect, after judging whether the available space of the big memory pool is no less than one page, the method further comprises:
declaring the borrowing of memory space failed if the available space of the big memory pool is less than one page;
and notifying the requester that no free memory space is currently available.
With reference to the first aspect up to its fourth possible embodiment, in a sixth possible embodiment of the first aspect, allocating memory from the first memory pool to the request comprises:
determining, based on a first page count corresponding to the required capacity, a first node from the linked lists of the first memory pool; wherein the first node is the node whose page count is closest to the first page count among the lists, and the nodes in the lists correspond to the memory space of the first pool;
and allocating the memory space corresponding to the first node to the request.
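The page-count rounding and node lookup might be modeled as below, under the assumption that "closest to the first page count" means the smallest node that still covers it:

```python
import math

def pages_for(required_bytes, page_capacity):
    """Smallest whole number of pages covering the request."""
    return math.ceil(required_bytes / page_capacity)

def find_first_node(chains, first_pages):
    """chains: dict of pages-per-node -> list of free nodes.
    Pick the chain whose node size is closest to (and at least) the
    needed page count, and return (node size, first free node)."""
    usable = [p for p, nodes in chains.items() if nodes and p >= first_pages]
    if not usable:
        return None
    best = min(usable, key=lambda p: p - first_pages)
    return best, chains[best][0]
```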
With reference to the first aspect up to its fourth possible embodiment, in a seventh possible embodiment of the first aspect, after allocating memory from the first memory pool to the request, the method further comprises:
after the service corresponding to the request has finished using the memory allocated from the first pool,
judging, from the first pool's number, whether it is a small memory pool;
if so, further judging whether the first pool is entirely idle;
and, when the first pool is entirely idle, determining from its number and the maximum number in the small-memory-pool group whether the first pool came from the big memory pool, and if so returning the first pool to the big memory pool.
When a service releases the memory it requested, if the corresponding first pool is determined to have come from the big pool and is now entirely idle, returning it to the big pool promptly further reduces fragmentation and improves memory utilization.
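A sketch of the recycling check, under the assumption that a borrowed pool is recognizable because its number exceeds the group's original maximum number:

```python
def release_pool(pool_no, fully_idle, group_max_no, small_pools,
                 big_pool_free, big_page):
    """If a borrowed small pool (number > the original group's maximum)
    becomes fully idle, return its page to the big pool so large
    requests can use it again."""
    if pool_no > group_max_no and fully_idle:
        del small_pools[pool_no]
        big_pool_free += big_page
    return small_pools, big_pool_free
```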
With reference to the seventh possible embodiment of the first aspect, in an eighth possible embodiment of the first aspect, before judging whether the total available capacity of the first memory pool determined for the memory request is no less than the required capacity, the method comprises:
configuring a designated memory pool into the big memory pool and the small-memory-pool group according to configuration parameters;
wherein the configuration parameters comprise at least the respective total capacities of the big pool and the small pools, their respective page capacities, and the total number of small pools.
In a second aspect, an embodiment of the present invention provides a memory management method, used to configure memory parameters, the method comprising:
recording, while services run, how the memory pool is used, obtaining a memory usage record; wherein the record includes at least the requested capacity and request count of every service, and the pool's total usage sampled at a specified time interval;
statistically analyzing the usage record to obtain configuration parameters that divide the pool into a big memory pool and a small-memory-pool group; wherein the combined capacity of the big pool and the group is the pool's total capacity, the page capacity of the big pool differs from that of each small pool in the group, and each small pool's total capacity equals the big pool's page capacity.
By automatically recording how the pool is used while services run (for example, requested capacities, request counts, and the pool's total usage) and then statistically analyzing the record, the configuration parameters for the big pool and the small-pool group can be determined quickly. This effectively improves the efficiency of memory configuration, and when new services appear the parameters can be determined automatically in the same way, making configuration more intelligent.
With reference to the second aspect, in a first possible embodiment of the second aspect, obtaining the configuration parameters that divide the pool into a big memory pool and a small-memory-pool group comprises:
statistically analyzing the requested capacities and request counts of all services to obtain the page capacity of the big memory pool and the page capacity of the small memory pools; wherein both page capacities are parameters among the configuration parameters;
wherein the small pools' page capacity is obtained by first taking, from the requested capacities and request counts of all services, the most frequently requested capacity as a first request capacity, then multiplying it by a preset coefficient; the coefficient tunes the trade-off between the small pools' allocation efficiency and their utilization;
and the big pool's page capacity is obtained by taking, among the requested capacities larger than the first request capacity, the least frequently requested one as a second request capacity, which serves as the big pool's page capacity.
Taking the most frequently requested capacity as the first request capacity and multiplying it by a coefficient quickly yields the small pools' page capacity, so that services with small requests are all concentrated in, and handled by, the small pools, which reduces fragmentation and speeds up allocation.
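The two statistics can be sketched as below. The request-size data in the test is hypothetical, and ties between equally frequent sizes are resolved arbitrarily, which the patent does not specify.

```python
from collections import Counter

def derive_page_capacities(request_sizes, coefficient):
    """request_sizes: recorded per-request capacities across all services.
    Returns (small-pool page capacity, big-pool page capacity).
    coefficient trades allocation efficiency against utilisation (assumed > 1)."""
    counts = Counter(request_sizes)
    first_cap, _ = max(counts.items(), key=lambda kv: kv[1])  # most frequent size
    small_page = first_cap * coefficient
    # among capacities larger than the first, take the least frequently requested
    larger = {cap: n for cap, n in counts.items() if cap > first_cap}
    second_cap = min(larger, key=larger.get) if larger else first_cap
    return small_page, second_cap
```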
With reference to the first possible embodiment of the second aspect, in a second possible embodiment of the second aspect, obtaining the configuration parameters that divide the pool into a big memory pool and a small-memory-pool group comprises:
statistically analyzing the pool's total usage sampled over a designated period to obtain the probability of occurrence of each total-usage value; wherein a value's probability of occurrence is its number of occurrences in the period as a percentage of the occurrences of all total-usage values;
sorting all total-usage values by probability of occurrence and taking the specified number of values with the highest probabilities;
computing a weighted sum of these values to obtain a third memory value; wherein the third memory value is the total capacity of the small-memory-pool group;
dividing the third memory value by the big pool's page capacity to obtain the total number of small pools in the group; wherein that total number is a parameter among the configuration parameters;
and subtracting the third memory value from the pool's total capacity to obtain the big pool's total capacity, also a parameter among the configuration parameters.
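These steps might look as follows. The weighting scheme is an assumption: here each selected value is weighted by its probability of occurrence, renormalized over the selected values.

```python
from collections import Counter

def derive_group_sizes(samples, top_n, pool_total, big_page):
    """samples: periodic total-usage readings of the pool.
    Weighted-average the top_n most frequent usage values to get the
    'third memory value', then split the pool.
    Returns (small-group total, number of small pools, big-pool total)."""
    counts = Counter(samples)
    probs = {v: c / len(samples) for v, c in counts.items()}
    top = sorted(probs, key=probs.get, reverse=True)[:top_n]
    weight_sum = sum(probs[v] for v in top)
    third = sum(v * probs[v] / weight_sum for v in top)  # weighted total usage
    n_small = int(third // big_page)                     # pools of one big page each
    return third, n_small, pool_total - third
```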
In a third aspect, an embodiment of the present invention further provides an apparatus for memory management, comprising:
a judging unit configured to judge whether the total available capacity of a first memory pool determined for a memory request is no less than the capacity the request needs to allocate; wherein the first memory pool is either the big memory pool or one pool in a small-memory-pool group, the combined capacity of the big memory pool and the group is the device's total memory, the page capacity of the big pool differs from that of each small pool in the group, and each small pool's total capacity equals the big pool's page capacity;
and an allocation unit configured to allocate memory from the first memory pool to the request if the pool's total available capacity is no less than the required capacity.
With reference to the third aspect, in a first possible embodiment of the third aspect, the judging unit is configured to:
return an out-of-memory notification to the service corresponding to the memory request if the total available capacity of the first memory pool is less than the required capacity.
With reference to the third aspect, in a second possible embodiment of the third aspect, the judging unit is further configured to:
judge whether the required capacity is no less than the page capacity of the big memory pool;
if so, take the big memory pool as the first memory pool;
if not, select one small memory pool from the small-memory-pool group as the first memory pool.
With reference to the second possible embodiment of the third aspect, in a third possible embodiment of the third aspect, the judging unit is further configured to:
select, from the group, the lowest-numbered small memory pool whose total available capacity exceeds the required capacity as the first memory pool, the number uniquely identifying the small pool; or,
if no small pool in the group has a total available capacity exceeding the required capacity, borrow memory space from the big memory pool, configure it as a new small memory pool whose total capacity is the borrowed space, and take the new pool as the first memory pool.
Serving each request from the lowest-numbered pool that can satisfy it makes the small pools easy to manage and reduces the generation of memory fragmentation.
With reference to the third possible embodiment of the third aspect, in a fourth possible embodiment of the third aspect, the judging unit is further configured to:
judge whether the available space of the big memory pool is no less than one page;
if so, allocate one page from the big pool as the total memory space of the new small pool; wherein the memory-space configuration of the new small pool is identical to that of the small pools in the group, and its number is the current maximum small-pool number plus one.
Checking whether the big pool's available capacity is no less than one page makes it quick to decide whether the big pool can lend memory to the small-pool group.
With reference to the fourth possible embodiment of the third aspect, in a fifth possible embodiment of the third aspect, the judging unit is further configured to:
declare the borrowing of memory space failed if the available space of the big memory pool is less than one page;
and notify the requester that no free memory space is currently available.
With reference to the third aspect up to its fourth possible embodiment, in a sixth possible embodiment of the third aspect, the allocation unit is configured to:
determine, based on a first page count corresponding to the required capacity, a first node from the linked lists of the first memory pool; wherein the first node is the node whose page count is closest to the first page count among the lists, and the nodes in the lists correspond to the memory space of the first pool;
and allocate the memory space corresponding to the first node to the request.
With reference to the third aspect up to its fourth possible embodiment, in a seventh possible embodiment of the third aspect, the allocation unit is further configured to:
after the service corresponding to the request has finished using the memory allocated from the first pool,
judge, from the first pool's number, whether it is a small memory pool;
if so, further judge whether the first pool is entirely idle;
and, when the first pool is entirely idle, determine from its number and the maximum number in the small-memory-pool group whether the first pool came from the big memory pool, and if so return the first pool to the big memory pool.
With reference to the seventh possible embodiment of the third aspect, in an eighth possible embodiment of the third aspect, the apparatus comprises:
a parameter configuration unit configured to configure a designated memory pool into the big memory pool and the small-memory-pool group according to configuration parameters; wherein the configuration parameters comprise at least the respective total capacities of the big pool and the small pools, their respective page capacities, and the total number of small pools.
In a fourth aspect, an embodiment of the present invention further provides an apparatus for memory management, used to configure memory parameters, comprising:
a recording unit configured to record, while services run, how the memory pool is used, obtaining a memory usage record; wherein the record includes at least the requested capacity and request count of every service, and the pool's total usage sampled at a specified time interval;
a parameter obtaining unit configured to statistically analyze the usage record and obtain configuration parameters that divide the pool into a big memory pool and a small-memory-pool group; wherein the combined capacity of the big pool and the group is the pool's total capacity, the page capacity of the big pool differs from that of each small pool in the group, and each small pool's total capacity equals the big pool's page capacity.
With reference to the fourth aspect, in a first possible embodiment of the fourth aspect, the parameter obtaining unit is configured to:
statistically analyze the requested capacities and request counts of all services to obtain the page capacity of the big memory pool and the page capacity of the small memory pools; wherein both page capacities are parameters among the configuration parameters;
wherein the small pools' page capacity is obtained by first taking, from the requested capacities and request counts of all services, the most frequently requested capacity as a first request capacity, then multiplying it by a preset coefficient; the coefficient tunes the trade-off between the small pools' allocation efficiency and their utilization;
and the big pool's page capacity is obtained by taking, among the requested capacities larger than the first request capacity, the least frequently requested one as a second request capacity, which serves as the big pool's page capacity.
With reference to the first possible implementation of the fourth aspect, in a second possible implementation of the fourth aspect, the parameter obtaining unit is configured to:
perform statistical analysis on the total usage of the memory pool sampled within a specified time period to obtain the occurrence probability of each total-usage value; where the occurrence probability is the percentage of the number of occurrences of each total-usage value within the specified time period relative to the number of occurrences of all total-usage values;
sort all total-usage values by occurrence probability to obtain a specified number of total-usage values with the highest occurrence probabilities;
weight the specified number of total-usage values to obtain a third memory value; where the third memory value is the total capacity of the small memory pool group;
divide the third memory value by the page capacity of the large memory pool to obtain the total number of small memory pools in the small memory pool group; where the total number of small memory pools is a parameter among the configuration parameters;
and perform a difference operation on the total capacity of the memory pool and the third memory value to obtain the total capacity of the large memory pool; where the total capacity of the large memory pool is a parameter among the configuration parameters.
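That statistical pipeline can be sketched as below, under the assumption (not spelled out above) that "weighting" means a mean of the top total-usage values weighted by their occurrence probabilities; names are illustrative.

```python
from collections import Counter

def derive_pool_split(usage_samples, top_k, big_page, pool_total):
    """usage_samples: total pool usage sampled at fixed intervals (bytes).
    Returns (small-group total, small-pool count, large-pool total)."""
    counts = Counter(usage_samples)
    n = len(usage_samples)
    # Occurrence probability of each total-usage value.
    probs = {v: c / n for v, c in counts.items()}
    # The top_k values with the highest occurrence probabilities.
    top = sorted(probs, key=probs.get, reverse=True)[:top_k]
    # Probability-weighted mean -> "third memory value" (small-group total).
    third = sum(v * probs[v] for v in top) / sum(probs[v] for v in top)
    small_count = int(third // big_page)      # total number of small pools
    big_total = pool_total - third            # difference -> large-pool total
    return third, small_count, big_total
```

With samples mostly around 16 units and occasionally 32, the third memory value lands near the common case, so the small-pool group is sized for typical load and the remainder goes to the large pool.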
In a fifth aspect, an embodiment of the present invention further provides a memory management apparatus, including:
at least one processor, and
a memory connected to the at least one processor;
where the memory stores instructions executable by the at least one processor, and the at least one processor executes the instructions stored in the memory to perform the method according to any implementation of the first aspect or the second aspect.
In a sixth aspect, an embodiment of the present invention further provides a computer-readable storage medium, including:
the computer-readable storage medium stores computer instructions that, when run on a computer, cause the computer to perform the method according to any implementation of the first aspect or the second aspect.
Through one or more of the above technical solutions, the embodiments of the present invention achieve at least the following technical effects:
In the embodiments provided by the present invention, the memory of a device is divided into one large memory pool and a group of small memory pools, the total capacity of each small memory pool in the group is made equal to the page capacity of the large memory pool, and the page capacity of the small memory pools differs from that of the large memory pool. Memory requests with small required capacities can thus be concentrated in the small memory pools, which effectively prevents small requests from occupying the capacity of the large memory pool, reduces memory fragmentation, and in turn improves memory utilization.
In the embodiments provided by the present invention, the usage of the memory pool is recorded automatically, for example the memory request capacities and counts of the running services and the total usage of the memory pool, to obtain a memory usage record; the memory usage record is then statistically analyzed to obtain the configuration parameters for configuring the memory pool into a large memory pool and a small memory pool group. Here, the sum of the capacities of the large memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the large memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the large memory pool. The configuration parameters of the device memory pool can thus be determined quickly, which effectively improves the efficiency of memory configuration; when a new service appears, the configuration parameters can also be determined automatically in the same way, improving the intelligence of memory configuration. Further, by configuring the device memory pool into a large memory pool and a small memory pool group according to the configuration parameters, services requesting small amounts of memory are served from the small memory pool group and services requesting large amounts are served from the large memory pool, which effectively reduces memory fragmentation and improves memory utilization; during development, users can also use the above method to dynamically analyze and adjust the memory used by a service, further improving the fluency of service operation.
Brief Description of the Drawings
Fig. 1 is a flowchart of a memory management method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the linked lists generated after the device memory is configured into a large memory pool and a small memory pool group according to an embodiment of the present invention;
Fig. 3 is a flowchart of requesting memory from a small memory pool according to an embodiment of the present invention;
Fig. 4 is a flowchart of requesting memory from the large memory pool according to an embodiment of the present invention;
Fig. 5 is a flowchart of releasing requested memory according to an embodiment of the present invention;
Fig. 6 is a flowchart of a memory management method for configuring memory according to an embodiment of the present invention;
Fig. 7 is a trend chart of memory request capacity versus request count according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a memory management apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a memory management apparatus for configuring memory parameters according to an embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention provide a memory management method, apparatus, and computer storage medium, to solve the technical problems in the prior art of excessive memory fragmentation and low memory utilization during memory use.
To solve the above technical problems, the general idea of the technical solutions in the embodiments of the present application is as follows:
A memory management method is provided, including: judging whether the total available capacity of a first memory pool determined for a memory request is not less than the capacity required by the memory request; where the first memory pool is the large memory pool or one memory pool in the small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the large memory pool; and if the total available capacity of the first memory pool is not less than the required capacity, allocating memory from the first memory pool to the memory request.
In the above solution, since the memory of the device is divided into one large memory pool and a small memory pool group, the total capacity of each small memory pool in the group equals the page capacity of the large memory pool, and the page capacity of the small memory pools differs from that of the large memory pool, memory requests with small required capacities can be concentrated in the small memory pools. This effectively prevents small requests from occupying the capacity of the large memory pool, reduces memory fragmentation, and in turn improves memory utilization.
To better understand the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific implementations. It should be understood that the specific features in the embodiments and implementations of the present invention are detailed descriptions of, rather than limitations on, the technical solutions of the present invention, and that, in the absence of conflict, the technical features in the embodiments and implementations may be combined with each other.
Embodiment 1. Referring to Fig. 1, an embodiment of the present invention provides a memory management method whose processing flow is as follows.
Step 101: judge whether the total available capacity of a first memory pool determined for a memory request is not less than the capacity required by the memory request; where the first memory pool is the large memory pool or one memory pool in the small memory pool group, the sum of the capacities of the large memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the large memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the large memory pool.
Step 102: if the total available capacity of the first memory pool is not less than the required capacity, allocate memory from the first memory pool to the memory request.
Before the above large memory pool and small memory pool group can be used in a device, the device memory needs to be configured; the configuration usually takes effect only after the device is restarted.
Specifically, configuring the device memory means configuring a specified memory pool into a large memory pool and a small memory pool group according to the configuration parameters.
It should be understood that, for a real device such as a computer, the specified memory pool refers to the total memory of the computer, while for a virtual device such as a virtual machine, it refers to the memory allocated to the virtual machine, which may be part of the memory of one computer or the sum of the memory of multiple computers or servers.
Specifically, the configuration parameters at least include the respective total capacities of the large memory pool and the small memory pools, their respective page capacities, and the total number of small memory pools.
Of course, the above configuration parameters may be set manually by the user, or configured adaptively by the method provided in Embodiment 2 of the present invention; see Embodiment 2 for details.
Before judging whether the total available capacity of the first memory pool determined for the memory request is not less than the required capacity, it is first necessary to judge whether the required capacity is not less than the page capacity of the large memory pool; if yes, the large memory pool is determined as the first memory pool; if no, a small memory pool is determined from the small memory pool group as the first memory pool.
As an example, assume the total memory capacity of the device is 50MB, the total capacity of the large memory pool is 48MB with a page capacity of 1MB, and the small memory pool group contains 2 small memory pools, each with a total capacity of 1MB and a page capacity of 512B. Table 2 shows the storage structure of the large memory pool (number 0), Table 3 shows that of small memory pool 1 (number 1) in the group, and Table 4 shows that of small memory pool 2 (number 2) in the group.
In Tables 2-4, NULL means the linked list has no available node; for example, the 1MB to 24MB lists of the large memory pool have no available node (i.e. no memory space of the large memory pool is mapped to them). NODE1 means the list has an available node; for example, the 48MB list of the large memory pool has an available node (memory space of the large memory pool, 48MB in size, is mapped to that node).
Table 2
Large memory pool, ID 0:
1MB list: NULL …
2MB list: NULL …
4MB list: NULL …
… : NULL …
48MB list: NODE1
Table 3
Small memory pool, ID 1:
512B list: NULL …
1KB list: NULL …
… : NULL …
1MB list: NODE1
Table 4
Small memory pool, ID 2:
512B list: NULL …
1KB list: NULL …
… : NULL …
1MB list: NODE1
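The table layout above — one linked list per power-of-two multiple of the page capacity, with a single node initially covering the whole pool — can be modeled as below. A minimal sketch, assuming list sizes double from the page capacity upward, with one extra list for the pool total when it is not itself reached by doubling (as with the 48MB large pool); names are illustrative.

```python
def build_pool(total, page):
    """Build the free lists of one pool: {block size: [offsets of free nodes]}."""
    lists = {}
    size = page
    while size <= total:
        lists[size] = []     # empty list -> NULL in the tables
        size *= 2
    if total not in lists:
        lists[total] = []    # e.g. the 48MB list of the large pool
    # Initially the largest list holds one node (NODE1) covering the pool.
    lists[total].append(0)
    return lists
```

For a small pool (total 1MB, page 512B) this produces lists for 512B, 1KB, …, 1MB, with only the 1MB list non-empty, matching Tables 3-5.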
Assume the required capacity of a memory request is 32MB. Judging whether 32MB is not less than the 1MB page capacity of the large memory pool gives yes, so the required capacity is allocated from the large memory pool (i.e. the large memory pool is taken as the first memory pool). If the required capacity of the memory request is 800KB, judging whether 800KB is not less than the 1MB page capacity of the large memory pool gives no, so a small memory pool from the small memory pool group is chosen to allocate the required capacity (i.e. a small memory pool is taken as the first memory pool).
After determining whether the first memory pool is the large memory pool or comes from the small memory pool group, it is further necessary to judge whether the total available capacity of the first memory pool determined for the memory request is not less than the capacity required by the request.
For example, if the required capacity is 32MB but the total available capacity of the large memory pool (i.e. the first memory pool) is 30MB, then after determining that the large memory pool should serve the request, it is further judged that the total available capacity (30MB) of the first memory pool is less than the required capacity (32MB), and an out-of-memory message is returned to the service corresponding to the memory request. If the required capacity is 25MB, then after determining that the large memory pool should serve the request, it is further judged that the total available capacity (30MB) of the first memory pool is greater than the required capacity (25MB), and the required capacity is allocated from the first memory pool (i.e. the large memory pool) to the memory request.
If the first memory pool is a small memory pool determined from the small memory pool group, it is further necessary to determine which small memory pool in the group provides the required capacity for the memory request.
Specifically, the small memory pool whose total available capacity is greater than the required capacity and whose number is smallest may be chosen from the small memory pool group as the first memory pool; where the number uniquely identifies a small memory pool.
For example, the required capacity is 800KB, and the small memory pool group includes 4 small memory pools: small memory pool 1 (number 1), small memory pool 2 (number 2), small memory pool 3 (number 3), and small memory pool 4 (number 4), each with a total capacity of 1MB and a page capacity of 512B. The total available capacities of small memory pools 1-4 are 500KB, 600KB, 900KB, and 1MB respectively.
To determine which small memory pool in the group provides the required capacity for the memory request, first the small memory pools whose total available capacity is greater than the required capacity (800KB) are chosen from the group (small memory pools 1-4), namely small memory pools 3-4; then the one with the smallest number, small memory pool 3, is chosen as the first memory pool to provide the required capacity.
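The selection rule just described — the large pool for requests at or above its page capacity, otherwise the smallest-numbered small pool with enough free capacity — can be sketched as follows; the representation of the pools is illustrative.

```python
def pick_pool(size, big_page, small_free):
    """size: required capacity; big_page: page capacity of the large pool (number 0);
    small_free: {small-pool number: total available capacity}.
    Returns the pool number to allocate from, or None if no small pool fits."""
    if size >= big_page:
        return 0  # the large memory pool
    # Smallest-numbered small pool whose available capacity exceeds the request.
    fitting = [n for n, free in small_free.items() if free > size]
    return min(fitting) if fitting else None
```

Replaying the example: an 800KB request against small pools with 500KB, 600KB, 900KB, and 1MB available selects pool 3, while a 32MB request goes to the large pool.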
If the total available capacities of small memory pools 1-4 are 100KB, 150KB, 200KB, and 230KB respectively, it can be determined that no small memory pool in the group can provide the required capacity (800KB) for the memory request. In this case, memory space can be temporarily borrowed from the large memory pool and configured as a new small memory pool, the borrowed memory space serving as the memory space of the new small memory pool, and the new small memory pool is then taken as the first memory pool.
Specifically, to borrow memory space from the large memory pool, it is first necessary to judge whether the total available space of the large memory pool is not less than one page; if yes, one page is allocated from the large memory pool as the total memory space of the new small memory pool; where the memory space of the new small memory pool is configured identically to the small memory pools in the group, and the number of the new small memory pool is the maximum number among all small memory pools plus one.
For example, after determining that no small memory pool in the group (small memory pools 1-4) has a total available capacity greater than the required capacity, memory space is borrowed from the large memory pool. If the total available space of the large memory pool is judged to be not less than one page, one page of the large memory pool (e.g. split off from the 48MB list) is configured as a small memory pool (i.e. a new small memory pool) and used as a member of the group. The total capacity of this new small memory pool is one page of the large memory pool, its configuration is identical to that of the small memory pools in the group, and its number is set to 5 (i.e. the maximum number, 4, among the original small memory pools in the group plus 1). If another piece of memory space is later borrowed from the large memory pool and configured as a new small memory pool, its number will be 6, and so on; this is not repeated here.
If, when borrowing memory space from the large memory pool, the available space of the large memory pool is judged to be less than one page, it is determined that borrowing memory space from the large memory pool has failed, and the memory request is notified that no memory space is currently available.
Regardless of whether the first memory pool is the large memory pool or a small memory pool in the group, when allocating memory from the first memory pool to the memory request it is necessary to determine from which node of which linked list of the first memory pool the requested memory is allocated.
Specifically, based on a first page count corresponding to the required capacity, a first node is determined from the linked lists of the first memory pool, and the memory space corresponding to the first node is allocated to the memory request; where the page count of the first node is the closest to the first page count among the nodes of the linked lists of the first memory pool, and the nodes in a linked list are mapped to the memory space of the first memory pool.
Specifically, the calculation formula of the first page count is:
N = 2^⌈log2(⌈SIZE / P⌉)⌉ (1)
where N is the first page count, SIZE is the required capacity, and P is the page capacity of the first memory pool.
For example, assume small memory pool 3 (with a total available capacity of 1MB) has been determined from the small memory pool group to allocate the required capacity (800KB) for the memory request.
The first page count corresponding to the required capacity (800KB) can be calculated by formula (1) as:
N = 2^⌈log2(⌈800KB / 512B⌉)⌉ = 2^⌈log2(1600)⌉ = 2^11 = 2048
Since 512B × 2^11 = 1MB, small memory pool 3 needs to split a node off its 1MB list to allocate the required capacity for the memory request. Specifically, Table 5 shows the memory management lists of small memory pool 3 before the split, and Table 6 shows those after the split.
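The page-count computation of formula (1) — the number of pages needed, rounded up to a power of two, which identifies the free list that serves the request — can be sketched as:

```python
def first_page_count(size, page):
    """Pages needed for `size`, rounded up to the next power of two."""
    pages = -(-size // page)                           # ceil(size / page)
    return 1 << (pages - 1).bit_length() if pages > 1 else 1

def serving_list(size, page):
    """Block size (free-list key) that serves a request of `size` bytes."""
    return page * first_page_count(size, page)
```

For the 800KB request against a 512B page, `first_page_count` gives 2048 = 2^11 and `serving_list` gives 1MB, matching the split of the 1MB list in Tables 5-6.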
Table 5
Small memory pool, ID 3:
512B list: NULL …
1KB list: NULL …
2KB list: NULL …
… : NULL …
1MB list: NODE1
Table 6
Small memory pool, ID 3:
512B list: NULL …
… : NULL …
256KB list: NODE1 (200KB)
… : NULL …
1MB list: NODE1 (800KB)
In Table 6, the original 1MB list node has been split into a node in the 1MB list (NODE1, corresponding to a capacity of 800KB) and a node in the 256KB list (NODE1, corresponding to a capacity of 200KB). The memory space of 800KB corresponding to the node (NODE1) in the 1MB list of small memory pool 3 is allocated to the memory request.
Further, after the service corresponding to the memory request has finished using the memory allocated from the first memory pool, it is first judged from the number of the first memory pool whether the first memory pool is a small memory pool; if yes, it is further judged whether the first memory pool is entirely idle. When the first memory pool is determined to be entirely idle, it is determined from the number of the first memory pool and the maximum number in the small memory pool group whether the first memory pool came from the large memory pool; if so, the first memory pool is recycled into the large memory pool.
For example, after the service corresponding to the memory request has finished using the 800KB allocated from the first memory pool (i.e. small memory pool 3), the occupied 800KB of the first memory pool is released. At this point, it is judged from the number 3 of the first memory pool that it is a small memory pool (because the number of the large memory pool is 0, any number greater than 0 identifies a small memory pool); it is then further judged whether the first memory pool (i.e. small memory pool 3) is entirely idle. If so, from the number 3 of the first memory pool and the maximum number 4 of the small memory pool group, it is determined that the first memory pool did not come from the large memory pool (because 3 ≤ 4, whereas a new small memory pool configured from memory space allocated out of the large memory pool would have a number greater than 4), so no recycling is needed. If the number of the first memory pool were 5, it would be determined that the first memory pool came from the large memory pool, and the first memory pool would need to be recycled into the large memory pool.
To help those skilled in the art understand the above solution more clearly, a specific example is briefly described below.
Take a motion detection service as an example. Suppose that when a picture change is detected, a high-definition picture of the change moment, about 1MB in size, needs to be captured and saved; most small memory requests during device operation are within 512B; and the baseline memory usage is stable at around 16MB. The device memory is configured manually (it may also be configured automatically in the manner of Embodiment 2) as follows: the large memory pool has a total size of 48MB and a page size of 1MB; each small memory pool has a total size of 1MB and a page size of 512 bytes; and the number of small memory pools is 16. The linked lists generated for the resulting large memory pool and small memory pool group are shown in Fig. 2.
In Fig. 2, the node capacity of each linked list of the large memory pool and the small memory pools is 2^n × page capacity, where n is a natural number. For example, the 1MB list of the large memory pool, i.e. 2^0 × 1MB = 1MB, means each node in that list corresponds to 1MB of memory. A node that is empty is denoted NULL; a node that has been mapped to memory space is denoted NODE, the first such node being NODE1. The others are similar and are not repeated here.
Specifically, when users request memory blocks of different required capacities:
In the first situation, the required capacity is 512B for use by program flow. Since the required capacity 512B is judged to be less than the 1MB page capacity of the large memory pool, the capacity is requested from the small memory pool with number 1. During the request, the lists are searched level by level; when a list whose node size is not less than 512B (the 1KB list) is reached, its node is cut, and the resulting free memory blocks are stored in the corresponding lists; for the specific splitting method, refer to the buddy algorithm.
In the second situation, when more than 1MB is requested to save a high-definition picture, the required capacity is 1MB; since 1MB is judged to be not less than the 1MB page capacity of the large memory pool, the required capacity is requested from the large memory pool.
In the third situation, if 512B of small memory is requested more than 2048 times, the small memory pool with number 1 becomes fully occupied, and small memory will be allocated from the small memory pool with number 2.
In the fourth situation, small 512B requests continue; if none of the small memory pools of the group (numbers 1-16) can provide memory space, one page will be borrowed from the large memory pool and configured as a new small memory pool (number 17) to provide memory space to the above service.
In the fifth situation, when a service has finished using the requested capacity, the memory is released. The number of the first memory pool that allocated the capacity is checked first: if the number is 0, the first memory pool used by the service is the large memory pool, and the memory space occupied by the required capacity is released directly back into the lists of the large memory pool. If it is a small memory pool numbered 1-16, the memory space is released directly back into the lists of the corresponding small memory pool. If it is a small memory pool with a number greater than 16 (indicating a small memory pool that came from the large memory pool), it is additionally checked whether that pool is entirely idle; if so, the whole small memory pool is released back into the large memory pool.
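The level-by-level search and node splitting mentioned in the first situation follow the buddy algorithm; a minimal sketch of that split, operating on free lists keyed by block size ({block size: [offsets]}), is given below. The function name is illustrative.

```python
def alloc_block(lists, target):
    """Buddy-style allocation: find the smallest non-empty list that can
    serve `target`, split downward, and return the block's offset."""
    size = target
    # Search the lists level by level for a free node of sufficient size.
    while size <= max(lists) and not lists.get(size):
        size *= 2
    if size > max(lists):
        return None  # out of memory in this pool
    # Cut the found node until a block of exactly `target` size is free.
    while size > target:
        off = lists[size].pop()
        size //= 2
        lists[size] += [off, off + size]  # the two buddies
    return lists[target].pop()
```

Starting from one free 2KB node, a 512B request splits it into a 1KB buddy pair and then a 512B buddy pair, returning one 512B block and leaving the rest on the lists.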
To enable those skilled in the art to fully understand the above solution, the processes of requesting memory from a small memory pool and from the large memory pool are briefly introduced below.
1. When a user's memory request asks for a small memory pool, refer to Fig. 3.
Step 301: according to the user's memory request, allocate a memory pool for the user; where the capacity asked for by the memory request is the required capacity.
Step 302: if the required capacity of the user's memory request is less than the total capacity of a small memory pool, select a small memory pool.
Step 303: according to the required capacity and the page size of the small memory pool, calculate the first page count of the small memory pool needed by the memory request.
Step 304: judge whether the existing small memory pools have enough to allocate.
If at least one node in an existing small memory pool has a total idle page count not less than the first page count, it is determined that the existing small memory pools have enough to allocate, and steps 305-307 are performed.
Step 305: judge whether the node in the existing small memory pool is suitable.
The node closest to the first page count is determined from the at least one node as the first node; if the idle page count of the first node is greater than the first page count, the node in the existing small memory pool is determined to be unsuitable, and step 306 is performed.
Step 306: cut the first node.
The new node obtained after cutting is returned for the user to use, i.e. step 307 is performed.
If the idle page count of the first node equals the first page count, the node in the existing small memory pool is determined to be suitable, and step 307 is performed.
Step 307: return the node for use.
The corresponding node obtained in step 305 or step 306 is returned to the user for use.
If no node in the existing small memory pools has a total idle page count not less than the first page count, it is determined that the existing small memory pools do not have enough to allocate, and steps 308-312 are performed.
Step 308: request to borrow memory from the large memory pool, i.e. borrow from the large memory pool.
Step 309: judge whether the large memory pool has one idle page of memory.
If no page of the large memory pool is idle, step 310 is performed.
Step 310: return out-of-memory.
If one page of the large memory pool is idle, steps 311-312 are performed, and after step 312, steps 304-307 are repeated.
Step 311: the large memory pool allocates one page of memory as a new small memory pool.
Step 312: initialize the new small memory pool.
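Steps 308-312 — borrowing one page from the large pool when every small pool is exhausted and initializing it as a new small pool — can be sketched as follows, reusing a free-list representation ({block size: [offsets]}); the function name and pool representation are illustrative.

```python
def borrow_small_pool(big_lists, big_page, small_pools, small_page):
    """Take one idle page from the large pool and register it as a new
    small pool whose number is the current maximum number plus one."""
    # Steps 308-309: find any free node of at least one page.
    size = big_page
    while size <= max(big_lists) and not big_lists.get(size):
        size *= 2
    if size > max(big_lists):
        return None  # step 310: out of memory
    # Step 311: cut down to exactly one page.
    while size > big_page:
        off = big_lists[size].pop()
        size //= 2
        big_lists[size] += [off, off + size]
    page_off = big_lists[big_page].pop()
    # Step 312: initialize the new small pool, configured like the others.
    number = max(small_pools) + 1
    lists = {}
    s = small_page
    while s <= big_page:
        lists[s] = []
        s *= 2
    lists[big_page].append(page_off)
    small_pools[number] = lists
    return number
```

With small pools 1-16 registered, the first borrowed page becomes pool 17, the next pool 18, and so on, matching the fourth situation above.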
2. When a user's memory request asks for the large memory pool, refer to Fig. 4.
Step 401: according to the user's memory request, select a memory pool for the user.
If the required capacity asked for by the memory request is greater than the total capacity of a small memory pool, step 402 is performed.
Step 402: select the large memory pool.
Step 403: calculate the needed page count.
According to the required capacity and the page capacity of the large memory pool, the page count needed by the memory request (denoted the first page count) is calculated.
Step 404: judge whether the large memory pool has enough to allocate.
If at least one node in the existing large memory pool has a total idle page count not less than the first page count, it is determined that the existing large memory pool has enough to allocate, and steps 405-407 are performed.
Step 405: judge whether the node in the existing large memory pool is suitable.
The node closest to the first page count is determined from the at least one node as the first node; if the idle page count of the first node is greater than the first page count, the node in the existing large memory pool is determined to be unsuitable, and step 406 is performed.
Step 406: cut the first node.
The new node obtained after cutting is returned for the user to use, i.e. step 407 is performed.
If the idle page count of the first node equals the first page count, the node in the existing large memory pool is determined to be suitable, and step 407 is performed.
Step 407: return the node for use.
The corresponding node obtained in step 405 or step 406 is returned to the user for use.
If no node in the existing large memory pool has a total idle page count not less than the first page count, it is determined that the existing large memory pool does not have enough to allocate, and step 408 is performed.
Step 408: return out-of-memory.
After the user has finished using the requested memory, the requested memory needs to be released. The memory release process is briefly described below with reference to Fig. 5:
Step 501: after the user has finished using the requested user memory, a memory release request from the user is received.
Step 502: judge whether the user memory comes from the big memory pool or from a small memory pool.
If the judging result is that it comes from a small memory pool, steps 503-506 are executed.
Step 503: release the user memory.
Step 504: judge whether the small memory pool where the user memory resides is completely idle.
If the judging result is that the small memory pool where the user memory resides is not completely idle, step 506 is executed.
Step 506: end the whole memory release process.
If the judging result in step 504 is that the small memory pool where the user memory resides is completely idle, step 505 is executed.
Step 505: judge whether the small memory pool where the user memory resides was borrowed from the big memory pool.
If the judging result is that the small memory pool where the user memory resides was not borrowed from the big memory pool, step 506 is executed.
If the judging result is that the small memory pool where the user memory resides was borrowed from the big memory pool, step 507, step 508 and step 506 are executed in sequence.
Step 507: recycle the node whose memory has been released.
The small memory pool where the user memory resides is released and recycled into a node of the big memory pool.
Step 508: merge nodes.
The node whose memory has been recycled is merged with other nodes of the big memory pool to obtain a node of larger capacity.
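The release flow of Fig. 5 (steps 501-508) can be sketched as below. The dictionary layout, the reference-count view of "completely idle", and folding all free nodes into one merged node are simplifying assumptions for illustration:

```python
def release_user_memory(pool_id, pools, big_pool_free):
    """pools: pool id -> {'from_big': bool, 'in_use': int, 'pages': int};
    big_pool_free: list of free-node sizes (pages) in the big memory pool."""
    pool = pools[pool_id]
    pool['in_use'] -= 1                      # step 503: release the user memory
    if pool['in_use'] > 0:                   # step 504: pool not completely idle
        return 'done'                        # step 506: end
    if not pool['from_big']:                 # step 505: not borrowed from big pool
        return 'done'
    # step 507: recycle the borrowed pages as a node of the big memory pool
    big_pool_free.append(pools.pop(pool_id)['pages'])
    # step 508: merge with the other free nodes (simplified: one merged node)
    big_pool_free[:] = [sum(big_pool_free)]
    return 'recycled'
```

Releasing the last allocation of a borrowed pool returns its page to the big pool and merges it with the existing free node; a pool that was never borrowed is simply left idle.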
Embodiment two: based on the same inventive concept, an embodiment of the present invention provides a memory management method for configuring memory. Referring to Fig. 6, the method comprises:
Step 601: while the service is running, record the usage of the memory pool to obtain a memory usage record; wherein the memory usage record includes at least the requested memory capacities and request counts of all services, and the total usage of the memory pool sampled at a specified time interval.
For example, the memory usage record shows that, within the specified period, 1 KB of memory was requested 10 times, 1.5 KB was requested 15 times, 2 KB 20 times, 2.5 KB 25 times, 3 KB 30 times, 4 KB 25 times, 6 KB 20 times, 7 KB 10 times, 8 KB 10 times, 9 KB 13 times, and 10 KB 11 times. In addition, the total usage of the memory pool is sampled at the specified interval (every 5 s), e.g. 15 KB, 25 KB, 50 KB, 70 KB.
After the memory usage record is obtained, step 602 can be executed.
Step 602: perform statistical analysis on the memory usage record to obtain configuration parameters for configuring the memory pool as a big memory pool and a group of small memory pools; wherein the sum of the capacities of the big memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the big memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool.
Specifically, to obtain the configuration parameters for configuring the memory pool as a big memory pool and a small memory pool group, the requested memory capacities and request counts of all services are statistically analysed to obtain the page capacity of the big memory pool and the page capacity of the small memory pools; both page capacities are parameters among the configuration parameters.
The statistical analysis of the requested memory capacities and request counts of all services may be performed by plotting the requested capacities against the request counts as a trend chart; taking the data in step 601 as an example, the resulting chart is shown in Fig. 7.
The page capacity of the small memory pools can be obtained as follows:
First, the most frequently requested memory capacity is taken from the requested capacities and counts of all services as the first application capacity; then the first application capacity is multiplied by a preset ratio coefficient to obtain the page capacity of the small memory pools; the preset ratio coefficient is used to balance the allocation efficiency and utilization of the small memory pools.
Specifically, if the statistical analysis of the requested capacities and counts is performed as in Fig. 7, the highest request count, 30, can be read directly from the chart; the corresponding requested capacity is 3 KB, so the first application capacity is 3 KB. Multiplying the first application capacity 3 KB by the preset ratio coefficient 0.5 gives the page capacity of the small memory pools: 3 KB × 0.5 = 1.5 KB.
It should be appreciated that the preset ratio coefficient can be adjusted as needed, primarily to balance allocation efficiency against memory utilization; in general, higher allocation efficiency comes with lower utilization. The specific value of the coefficient is therefore not limited here.
After the page capacity of the small memory pools is determined, the page capacity of the big memory pool can be determined. Specifically, from the requested memory capacities and counts of all services, the least frequently requested capacity among those greater than the first application capacity is taken as the second application capacity, and the second application capacity is used as the page capacity of the big memory pool.
Continuing with Fig. 7, the smallest request count among capacities greater than the first application capacity is 10, for a requested capacity of 8 KB; the second application capacity is therefore 8 KB and is used as the page capacity of the big memory pool.
It should be appreciated that although the statistical analysis of the requested capacities and counts of all services is illustrated above with a trend chart, the same result can also be obtained algorithmically in practice; the concrete way of performing the analysis is therefore not limited.
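As one possible algorithmic form of the analysis above, both page capacities can be derived directly from the request histogram. This is a sketch under assumptions: the function name is illustrative, and ties for the least-requested capacity are broken toward the larger size so that the example yields 8 KB (7 KB is requested equally often in the data):

```python
def page_capacities(requests, coeff=0.5):
    """requests: {capacity_kb: request_count}. Returns (small_page_kb, big_page_kb)."""
    # First application capacity: the most-requested size (3 KB in the example)
    first = max(requests, key=requests.get)
    small_page = first * coeff               # 3 KB * 0.5 = 1.5 KB
    # Second application capacity: among sizes larger than `first`, the least
    # requested one; ties broken toward the larger size (assumption)
    bigger = {c: n for c, n in requests.items() if c > first}
    least = min(bigger.values())
    big_page = max(c for c, n in bigger.items() if n == least)
    return small_page, big_page
```

Applied to the request counts of step 601, this reproduces the 1.5 KB small-pool page and 8 KB big-pool page derived from Fig. 7.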
In the manner above, the page capacities of the big memory pool and the small memory pools among the configuration parameters can be determined; the remaining configuration parameters can then be determined.
To obtain the remaining parameters, such as the total capacity and number of the small memory pools, the occurrence probabilities of all total-usage values are first sorted, and the total-usage values of the specified number with the highest probabilities are taken; these values are then weighted to obtain a third memory value, which is the total capacity of the small memory pool group; finally, the third memory value is divided by the page capacity of the big memory pool to obtain the total number of small memory pools in the group, which is also a parameter among the configuration parameters.
For example, the total usage of the memory pool recorded in the memory usage record is, in chronological order: 32 KB, 15 KB, 25 KB, 50 KB, 70 KB, 25 KB, 15 KB, 25 KB, 15 KB, 32 KB, 15 KB, 50 KB, 25 KB, 50 KB, 32 KB, 20 KB, 15 KB, 25 KB, 15 KB, 15 KB, 25 KB, 15 KB, 25 KB, 20 KB.
The occurrences of each total usage are counted first: 15 KB, 25 KB, 32 KB, 50 KB and 70 KB occur 8, 7, 3, 2 and 1 times respectively.
From these counts, the occurrence probability of each total usage is calculated as its occurrence count divided by the total number of occurrences of all total usages; the probabilities are, in order: 38%, 33.3%, 14.4%, 9.5%, 4.8%.
All the occurrence probabilities are then sorted; with a specified number of 3, the three highest probabilities are 38%, 33.3% and 14.4%, whose corresponding total-usage values are 15 KB, 25 KB and 32 KB. Weighting these three values gives the third memory value: 15 KB × 38% + 25 KB × 33.3% + 32 KB × 14.4% = 18.633 KB. However, since the total capacity of each small memory pool equals the page capacity of the big memory pool, 8 KB, the total capacity of the small memory pool group is determined from the calculated third memory value 18.633 KB and the big memory pool's page capacity 8 KB to be 24 KB, and the total number of small memory pools is therefore 24 KB / 8 KB = 3.
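The computation of the group's total capacity and pool count can be sketched as below. The sketch uses exact fractions rather than the rounded percentages above, so the third memory value comes out near 18.6 KB; rounding up to a whole number of 8 KB pools is the step described in the text. Names are illustrative:

```python
import math

def small_pool_group(usage_counts, big_page_kb, top_n=3):
    """usage_counts: {total_usage_kb: occurrences}.
    Returns (group_total_kb, pool_count)."""
    total = sum(usage_counts.values())
    # Total-usage values with the top_n highest occurrence probabilities
    top = sorted(usage_counts, key=usage_counts.get, reverse=True)[:top_n]
    # Third memory value: probability-weighted sum (about 18.6 KB here)
    third = sum(v * usage_counts[v] / total for v in top)
    # Each small pool's total capacity equals the big pool's page capacity,
    # so round the group total up to a whole number of small pools
    count = math.ceil(third / big_page_kb)
    return count * big_page_kb, count
```

With the occurrence counts of the example, this yields a 24 KB group made up of 3 small pools.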
After the total capacity of the small memory pool group is determined, the total capacity of the big memory pool can be determined from the total capacity of the memory pool and the total capacity of the small memory pool group.
Assuming the total capacity of the memory pool is 100 KB, the total capacity of the big memory pool is 100 KB - 24 KB = 76 KB.
At this point, all the main parameters of the memory configuration have been determined.
It should be noted that the configuration parameters include not only the parameters calculated above but also, for example, memory pool numbers: the number of the big memory pool is set to 0, and the numbers of the small memory pool group start from 1, increasing by 1 for each additional small memory pool. If memory space is temporarily borrowed from the big memory pool, it is configured with the small-pool configuration parameters and given a number one above the current maximum number in the small memory pool group; for details, see the example of temporarily borrowing big-pool memory in embodiment one, which is not repeated here.
It should also be noted that when a new service is enabled or a special situation occurs, the system can automatically record usage and redetermine the configuration parameters by the method above, so that the user can reconfigure the memory pool with the new configuration parameters when needed.
Based on the same inventive concept, referring to Fig. 8, an embodiment of the present invention provides a device for memory management, comprising:
a judging unit 801, configured to judge whether the total available capacity of a first memory pool determined for a memory application is not less than the capacity required by the memory application; wherein the first memory pool is a memory pool in the big memory pool or the small memory pool group, the sum of the capacities of the big memory pool and the small memory pool group is the total memory capacity of the device, the page capacity of the big memory pool differs from that of each small memory pool in the small memory pool group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool;
an allocation unit 802, configured to allocate memory from the first memory pool to the memory application if the total available capacity of the first memory pool is not less than the required capacity.
Optionally, the judging unit 801 is configured to:
return a low-memory notification to the service corresponding to the memory application if the total available capacity of the first memory pool is less than the required capacity.
Optionally, the judging unit 801 is further configured to:
judge whether the required capacity is not less than the page capacity of the big memory pool;
if so, determine the big memory pool as the first memory pool;
if not, determine a small memory pool from the small memory pool group as the first memory pool.
Optionally, the judging unit 801 is further configured to:
select from the small memory pool group the small memory pool whose total available capacity is greater than the required capacity and whose number is the smallest as the first memory pool; wherein the number uniquely identifies the small memory pool; or
By letting the smallest-numbered small memory pool whose capacity meets the requirement serve the memory application, the small memory pools are easier to manage and the generation of memory fragments is reduced.
Optionally, the judging unit 801 is further configured to:
if the total available capacity of none of the small memory pools in the small memory pool group is greater than the required capacity, temporarily borrow memory space from the big memory pool and configure it as a new small memory pool, the borrowed memory space serving as the total memory capacity of the new small memory pool; and take the new small memory pool as the first memory pool.
Optionally, the judging unit 801 is further configured to:
judge whether the available space of the big memory pool is not less than one page;
if so, allocate the one page from the big memory pool as the total memory space of the new small memory pool; wherein the memory space configuration of the new small memory pool is identical to that of the small memory pools in the small memory pool group, and the number of the new small memory pool is the maximum number among all small memory pools plus one.
Optionally, the judging unit 801 is further configured to:
fail the borrowing of memory space if the available space of the big memory pool is less than one page;
notify the memory application that no memory space is currently free.
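The borrowing logic the judging unit describes, where one free page of the big pool becomes a new small pool numbered one above the current maximum, might look like the following sketch; all names and the data layout are illustrative assumptions:

```python
def borrow_small_pool(big_free_pages, small_pools, big_page_kb):
    """small_pools: number -> pool dict (number 0 is reserved for the big pool).
    Returns (new_pool_number, remaining_big_free_pages); (None, unchanged)
    when borrowing fails."""
    if big_free_pages < 1:                    # less than one page free: fail
        return None, big_free_pages
    new_id = max(small_pools, default=0) + 1  # maximum number plus one
    small_pools[new_id] = {'total_kb': big_page_kb, 'from_big': True}
    return new_id, big_free_pages - 1
```

Marking the borrowed pool with `from_big` is what later lets the release flow (Fig. 5, step 505) decide whether to recycle it back into the big pool.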
Optionally, the allocation unit 802 is configured to:
determine a first node from the linked list of the first memory pool based on a first page count corresponding to the required capacity; wherein the first node is the node in the linked list whose page count is closest to the first page count, and the nodes in the linked list correspond to the memory space of the first memory pool;
allocate the memory space corresponding to the first node to the memory application.
Optionally, the allocation unit 802 is further configured to:
after the service corresponding to the memory application has finished using the memory allocated from the first memory pool,
judge, according to the number of the first memory pool, whether the first memory pool is a small memory pool;
if so, further judge whether the first memory pool is completely idle;
when it is determined that the first memory pool is completely idle, determine, from the number of the first memory pool and the maximum number in the small memory pool group, that the first memory pool was borrowed from the big memory pool, and then recycle the first memory pool to the big memory pool.
Optionally, the device comprises:
a parameter configuration unit 803, configured to configure a specified memory pool into the big memory pool and the small memory pool group according to configuration parameters;
wherein the configuration parameters consist at least of the respective total capacities of the big memory pool and the small memory pools, their respective page capacities, and the total number of small memory pools.
Based on the same inventive concept, referring to Fig. 9, an embodiment of the present invention provides a device for memory management, used to configure memory parameters, comprising:
a recording unit 901, configured to record the usage of the memory pool while the service is running to obtain a memory usage record; wherein the memory usage record includes at least the requested memory capacities and request counts of all services, and the total usage of the memory pool sampled at a specified time interval;
a parameter obtaining unit 902, configured to perform statistical analysis on the memory usage record to obtain configuration parameters for configuring the memory pool as a big memory pool and a small memory pool group; wherein the sum of the capacities of the big memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the big memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool.
Optionally, the parameter obtaining unit 902 is configured to:
perform statistical analysis on the requested memory capacities and request counts of all services to obtain the page capacity of the big memory pool and that of the small memory pools; wherein both page capacities are parameters among the configuration parameters;
wherein the page capacity of the small memory pools is obtained by first taking, from the requested memory capacities and counts of all services, the most frequently requested capacity as the first application capacity, and then multiplying the first application capacity by a preset ratio coefficient; the preset ratio coefficient is used to balance the allocation efficiency and utilization of the small memory pools;
and the page capacity of the big memory pool is obtained by first taking, from the requested memory capacities and counts of all services, the least frequently requested capacity among those greater than the first application capacity as the second application capacity, and then using the second application capacity as the page capacity of the big memory pool.
Optionally, the parameter obtaining unit 902 is configured to:
perform statistical analysis on the total usage of the memory pool collected within a specified period to obtain the occurrence probability of each total-usage value; wherein the occurrence probability is the percentage of occurrences of each total-usage value among all occurrences of total-usage values within the period;
sort the occurrence probabilities of all total-usage values and take the total-usage values of the specified number with the highest probabilities;
weight the total-usage values of the specified number to obtain a third memory value; wherein the third memory value is the total capacity of the small memory pool group;
divide the third memory value by the page capacity of the big memory pool to obtain the total number of small memory pools in the group; wherein the total number of small memory pools is a parameter among the configuration parameters;
subtract the third memory value from the total capacity of the memory pool to obtain the total capacity of the big memory pool; the total capacity of the big memory pool is a parameter among the configuration parameters.
Based on the same inventive concept, an embodiment of the present invention provides a device for memory management, comprising: at least one processor, and
a memory connected to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor, by executing the instructions stored in the memory, performs the memory management method of embodiment one or embodiment two described above.
Based on the same inventive concept, an embodiment of the present invention also provides a computer-readable storage medium, comprising:
the computer-readable storage medium stores computer instructions which, when run on a computer, cause the computer to execute the memory management method of embodiment one or embodiment two described above.
In the embodiments provided by the invention, the memory of the device is divided into one big memory pool and a group of small memory pools, with the total capacity of each small memory pool in the group equal to the page capacity of the big memory pool and the page capacity of the small memory pools different from that of the big memory pool. Memory applications with small required capacities are thus concentrated in the small memory pools, which effectively prevents small applications from occupying the capacity of the big memory pool, reduces the generation of memory fragments, and in turn improves memory utilization.
In the embodiments provided by the invention, the usage of the memory pool, such as the requested memory capacities and counts and the total usage of the memory pool, is automatically recorded while the service is running to obtain a memory usage record; statistical analysis of the record then yields the configuration parameters for configuring the memory pool as a big memory pool and a small memory pool group, where the sum of the capacities of the big memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the big memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool equals the page capacity of the big memory pool. The configuration parameters of the device's memory pool can thus be determined quickly, which effectively improves the efficiency of memory configuration; when a new service is found, the configuration parameters can also be determined automatically by the method above, making memory configuration more intelligent. Further, by configuring the device's memory pool into a big memory pool and a small memory pool group with the configuration parameters, services applying for small amounts of memory are served from the small memory pool group and services applying for large amounts from the big memory pool, which effectively reduces memory fragments and improves memory utilization; users can also use the method above during development to dynamically analyse and adjust the memory used by services, further improving the smoothness of service operation.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a means for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to include them as well.

Claims (15)

1. A method of memory management for allocating memory, characterized in that the method comprises:
judging whether the total available capacity of a first memory pool determined for a memory application is not less than the capacity required by the memory application; wherein the first memory pool is a memory pool in a big memory pool or a small memory pool group, the sum of the capacities of the big memory pool and the small memory pool group is the total memory capacity of a device, the page capacity of the big memory pool differs from that of each small memory pool in the small memory pool group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool;
if the total available capacity of the first memory pool is not less than the required capacity, allocating memory from the first memory pool to the memory application.
2. The method of claim 1, characterized in that before judging whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required by the memory application, the method further comprises:
judging whether the required capacity is not less than the page capacity of the big memory pool;
if so, determining the big memory pool as the first memory pool;
if not, determining a small memory pool from the small memory pool group as the first memory pool.
3. The method of claim 2, characterized in that determining a small memory pool from the small memory pool group as the first memory pool comprises:
selecting from the small memory pool group the small memory pool whose total available capacity is greater than the required capacity and whose number is the smallest as the first memory pool; wherein the number uniquely identifies the small memory pool; or
if the total available capacity of none of the small memory pools in the small memory pool group is greater than the required capacity, temporarily borrowing memory space from the big memory pool and configuring it as a new small memory pool, the borrowed memory space serving as the total memory capacity of the new small memory pool; and taking the new small memory pool as the first memory pool.
4. The method of claim 3, characterized in that temporarily borrowing memory space from the big memory pool comprises:
judging whether the available space of the big memory pool is not less than one page;
if so, allocating the one page from the big memory pool as the total memory space of the new small memory pool; wherein the memory space configuration of the new small memory pool is identical to that of the small memory pools in the small memory pool group, and the number of the new small memory pool is the maximum number among all small memory pools plus one.
5. The method of claim 4, characterized in that after judging whether the available space of the big memory pool is not less than one page, the method further comprises:
if the available space of the big memory pool is less than one page, failing the borrowing of memory space;
notifying the memory application that no memory space is currently free.
6. The method of any one of claims 1-4, characterized in that allocating memory from the first memory pool to the memory application comprises:
determining a first node from a linked list of the first memory pool based on a first page count corresponding to the required capacity; wherein the first node is the node in the linked list whose page count is closest to the first page count, and the nodes in the linked list correspond to the memory space of the first memory pool;
allocating the memory space corresponding to the first node to the memory application.
7. The method of any one of claims 1-4, characterized in that after allocating memory from the first memory pool to the memory application, the method further comprises:
after the service corresponding to the memory application has finished using the memory allocated from the first memory pool,
judging, according to the number of the first memory pool, whether the first memory pool is a small memory pool;
if so, further judging whether the first memory pool is completely idle;
when it is determined that the first memory pool is completely idle, determining, from the number of the first memory pool and the maximum number in the small memory pool group, that the first memory pool was borrowed from the big memory pool, and then recycling the first memory pool to the big memory pool.
8. The method of claim 7, characterized in that before judging whether the total available capacity of the first memory pool determined for the memory application is not less than the capacity required by the memory application, the method comprises:
configuring a specified memory pool into the big memory pool and the small memory pool group according to configuration parameters;
wherein the configuration parameters consist at least of the respective total capacities of the big memory pool and the small memory pools, their respective page capacities, and the total number of small memory pools.
9. A method of memory management for configuring memory, characterized in that the method comprises:
while a service is running, recording the usage of a memory pool to obtain a memory usage record; wherein the memory usage record includes at least the requested memory capacities and request counts of all services, and the total usage of the memory pool sampled at a specified time interval;
performing statistical analysis on the memory usage record to obtain configuration parameters for configuring the memory pool as a big memory pool and a small memory pool group; wherein the sum of the capacities of the big memory pool and the small memory pool group is the total capacity of the memory pool, the page capacity of the big memory pool differs from that of each small memory pool in the group, and the total capacity of each small memory pool in the group equals the page capacity of the big memory pool.
10. The method of claim 9, characterized in that obtaining the configuration parameters for configuring the memory pool as a big memory pool and a small memory pool group comprises:
performing statistical analysis on the requested memory capacities and request counts of all services to obtain the page capacity of the big memory pool and that of the small memory pools; wherein both page capacities are parameters among the configuration parameters;
wherein the page capacity of the small memory pools is obtained by first taking, from the requested memory capacities and counts of all services, the most frequently requested capacity as a first application capacity, and then multiplying the first application capacity by a preset ratio coefficient, the preset ratio coefficient being used to balance the allocation efficiency and utilization of the small memory pools; and the page capacity of the big memory pool is obtained by first taking, from the requested memory capacities and counts of all services, the least frequently requested capacity among those greater than the first application capacity as a second application capacity, and then using the second application capacity as the page capacity of the big memory pool.
11. The method according to claim 10, wherein obtaining the configuration parameters for configuring the big memory pool and the small memory pool group for the memory pool comprises:
Performing statistical analysis on the total usages of the memory pool collected within a specified time period to obtain the probability of occurrence of each total-usage value; wherein the probability of occurrence is the percentage, within the specified time period, of the number of occurrences of each total-usage value relative to the number of occurrences of all total-usage values;
Sorting all the total-usage values by probability of occurrence to obtain a specified quantity of total-usage values with the highest probabilities of occurrence;
Performing a weighted average on the specified quantity of total-usage values to obtain a third memory value; wherein the third memory value is the total capacity of the small memory pool group;
Dividing the third memory value by the page capacity of the big memory pool to obtain the total quantity of small memory pools in the small memory pool group; wherein the total quantity of small memory pools is a parameter among the configuration parameters;
Performing a difference operation on the total capacity of the memory pool and the third memory value to obtain the total capacity of the big memory pool; the total capacity of the big memory pool is a parameter among the configuration parameters.
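The derivation of the third memory value and the remaining configuration parameters in claim 11 could be sketched as follows; weighting each total-usage value by its occurrence count is an assumed interpretation of the claimed weighted average, and all identifiers are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* A sampled total-usage value and its number of occurrences in the period. */
typedef struct { size_t usage; unsigned occurrences; } usage_stat;

/* Third memory value: the occurrence-weighted average of the top-k
 * total-usage values (assumed already sorted by occurrence, descending). */
size_t third_memory_value(const usage_stat *s, size_t k) {
    double num = 0, den = 0;
    for (size_t i = 0; i < k; i++) {
        num += (double)s[i].usage * s[i].occurrences;
        den += s[i].occurrences;
    }
    return den > 0 ? (size_t)(num / den) : 0;
}

/* Number of small pools: third memory value / big-pool page capacity. */
size_t small_pool_count(size_t third, size_t big_page) {
    return third / big_page;
}

/* Big-pool total capacity: pool total minus the small-pool group total. */
size_t big_pool_capacity(size_t pool_total, size_t third) {
    return pool_total - third;
}
```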
12. a kind of device of memory management is used for storage allocation, which is characterized in that the described method includes:
Judging unit, for being judged as whether total active volume of the first determining memory pool of memory application is not less than the memory Capacity needed for application distributes;Wherein, first memory pool is a memory pool in big memory pool or small memory pool group, described The sum of big memory pool and the capacity of the small memory pool group are total memory size of equipment, and the big memory pool and it is described it is small in The page capacity for depositing each small memory pool in the group of pond is different, and the total capacity of each small memory pool and the imperial palace in the small memory pool group The page capacity for depositing pond is equal;
Allocation unit, if total active volume for first memory pool is not less than the required capacity, from described first Storage allocation gives the memory application in memory pool.
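As a non-authoritative sketch of the judging and allocation units of claim 12: the bump-pointer allocation strategy and all identifiers below are assumptions; the claim only requires checking available capacity before allocating from the first memory pool.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical state of the "first memory pool" chosen for an application. */
typedef struct {
    unsigned char *base;    /* start of this pool's backing storage */
    size_t total_capacity;  /* total capacity of this memory pool   */
    size_t used;            /* bytes already handed out             */
} mem_pool;

/* Judging unit: is the pool's available capacity >= the requested size? */
int pool_can_satisfy(const mem_pool *p, size_t need) {
    return p->total_capacity - p->used >= need;
}

/* Allocation unit: allocate from the pool only if the check passes. */
void *pool_alloc(mem_pool *p, size_t need) {
    if (!pool_can_satisfy(p, need)) return NULL;
    void *out = p->base + p->used;
    p->used += need;
    return out;
}
```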
13. a kind of device of memory management, for internally depositing into capable configuration, which is characterized in that described device includes:
Recording unit obtains memory usage record in the business of operation, recording the service condition of memory pool;Wherein, described Memory usage record includes at least the memory application capacity and number of all business, and by memory described in specified time interval sampling Total usage amount in pond;
Gain of parameter unit is obtained and configures imperial palace for the memory pool for for statistical analysis to the memory usage record Deposit the configuration parameter in pond and small memory pool group;Wherein, the big memory pool and the sum of the capacity of the small memory pool group are described The page capacity of each small memory pool is different and described small in the total capacity of memory pool, the big memory pool and the small memory pool group The total capacity of each small memory pool is equal with the page capacity of the big memory pool in memory pool group.
14. a kind of memory management device characterized by comprising
At least one processor, and
The memory being connect at least one described processor;
Wherein, the memory is stored with the instruction that can be executed by least one described processor, at least one described processor By executing the instruction of the memory storage, such as the described in any item methods of claim 1-11 are executed.
15. a kind of computer readable storage medium, it is characterised in that:
The computer-readable recording medium storage has computer instruction, when the computer instruction is run on computers, So that computer executes such as method of any of claims 1-11.
CN201811266497.6A 2018-10-29 2018-10-29 Memory management method and device and computer storage medium Active CN110245091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811266497.6A CN110245091B (en) 2018-10-29 2018-10-29 Memory management method and device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811266497.6A CN110245091B (en) 2018-10-29 2018-10-29 Memory management method and device and computer storage medium

Publications (2)

Publication Number Publication Date
CN110245091A true CN110245091A (en) 2019-09-17
CN110245091B CN110245091B (en) 2022-08-26

Family

ID=67882385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811266497.6A Active CN110245091B (en) 2018-10-29 2018-10-29 Memory management method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110245091B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1635482A * 2003-12-29 2005-07-06 Beijing Zhongshilian Digital Systems Co., Ltd. A memory management method for embedded system
CN101122883A * 2006-08-09 2008-02-13 ZTE Corporation Memory allocation method for avoiding RAM fragmentation
CN101266575A * 2007-03-13 2008-09-17 ZTE Corporation Method for enhancing memory pool utilization ratio
US20090254731A1 * 2008-04-02 2009-10-08 Qualcomm Incorporated System and method for memory allocation in embedded or wireless communication systems
US20120284479A1 * 2011-05-05 2012-11-08 International Business Machines Corporation Managing large page memory pools
CN105893269A * 2016-03-31 2016-08-24 Wuhan Hongxin Technical Services Co., Ltd. Memory management method used in Linux system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chris Cant (US): "Windows WDM Device Driver Development Guide", 31 January 2000 *
H. Frank Cervone (US): "Solaris Performance Management", 31 December 2000 *
Liu Juan: "Design and Implementation of a Cross-Platform Memory Pool", Journal of Bengbu University *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111090521A * 2019-12-10 2020-05-01 OPPO (Chongqing) Intelligent Technology Co., Ltd. Memory allocation method and device, storage medium and electronic equipment
CN112214313A * 2020-09-22 2021-01-12 Shenzhen Intellifusion Technologies Co., Ltd. Memory allocation method and related equipment
CN112241325A * 2020-12-15 2021-01-19 Nanjing Integrated Circuit Design Service Industry Innovation Center Co., Ltd. Ultra-large-scale integrated circuit database based on memory pool and design method
CN112241325B * 2020-12-15 2021-03-23 Nanjing Integrated Circuit Design Service Industry Innovation Center Co., Ltd. Ultra-large-scale integrated circuit database based on memory pool and design method
CN113504994A * 2021-07-26 2021-10-15 Shanghai Dunyi Information Technology Co., Ltd. Method and system for realizing elastic expansion and contraction of memory pool performance
CN113504994B * 2021-07-26 2022-05-10 Shanghai Dunyi Information Technology Co., Ltd. Method and system for realizing elastic expansion and contraction of memory pool performance
WO2023071158A1 * 2021-10-26 2023-05-04 Xi'an Fibocom Wireless Communication Co., Ltd. Memory optimization method and apparatus, terminal, and storage medium
CN116361234A * 2023-06-02 2023-06-30 Shenzhen Zhong'an Chenhong Technology Co., Ltd. Memory management method, device and chip
CN116361234B * 2023-06-02 2023-08-08 Shenzhen Zhong'an Chenhong Technology Co., Ltd. Memory management method, device and chip

Also Published As

Publication number Publication date
CN110245091B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN110245091A (en) A kind of method, apparatus and computer storage medium of memory management
CN109783237A Resource allocation method and device
CN106453146B Method, system, device and readable storage medium for allocating private cloud computing resources
CN110162388A Task scheduling method, system and terminal device
CN107864211B Cluster resource scheduling method and system
CN106407244A Multi-database-based data query method, system and apparatus
CN107229693A Method and system for big data system configuration parameter tuning based on deep learning
CN108279974A Cloud resource allocation method and device
CN105096174A Transaction matching method and transaction matching system
CN103310460A Image characteristic extraction method and system
CN114416352A Computing resource allocation method and device, electronic equipment and storage medium
CN103176849A Virtual machine clustering deployment method based on resource classification
CN108647081A Order-based automatic virtual machine resource allocation system
CN106202092A Data processing method and system
CN107729514A Hadoop-based replica placement node determination method and device
CN103701894A Dynamic resource scheduling method and system
CN104536832A Virtual machine deployment method
WO2022257302A1 Method, apparatus and system for creating training task of AI training platform, and medium
CN106502918A Memory scheduling method and device
CN106874109A Distributed job distribution and processing method and system
CN108241531A Method and apparatus for allocating resources to virtual machines in a cluster
CN109324890A Resource management method, device and computer-readable storage medium
CN107291720A Method, system and computer cluster for batch data processing
CN107124473A Cloud platform construction method and cloud platform
CN114327811A (en) Task scheduling method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant