CN107153618A - Memory allocation processing method and device - Google Patents

Memory allocation processing method and device

Info

Publication number
CN107153618A
Authority
CN
China
Prior art keywords
memory
memory block
block
free
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610116806.6A
Other languages
Chinese (zh)
Inventor
欧阳圣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201610116806.6A priority Critical patent/CN107153618A/en
Publication of CN107153618A publication Critical patent/CN107153618A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0871 Allocation or management of cache space

Abstract

The embodiments of the present application disclose a memory allocation processing method and device. The method includes: obtaining a memory request, and judging whether the requested memory is smaller than the maximum allocatable memory of a memory block in an established memory pool; when the requested memory is judged to be larger than the maximum allocatable memory, allocating memory to the request from bulk memory according to the bulk memory information recorded in the data header of the memory block; otherwise, judging whether the requested memory is larger than the free memory of the current memory block, and when the requested memory is judged to be smaller than the free memory of the current memory block, judging whether the current remaining space of the current memory block meets the demand of the request; if the current remaining space of the current memory block meets the demand of the request, allocating memory to the request from the current remaining space of the current memory block. The method and device can reduce memory allocation time, improve system memory utilization, and optimize system memory allocation efficiency.

Description

Memory allocation processing method and device
Technical field
The present application relates to the field of memory management technology, and in particular to a memory allocation processing method and device.
Background technology
With the development of communication technology, people can exchange data and information by establishing network connections between terminals and servers. Different terminals can also communicate with each other through data transmission, and people can obtain the information they need over the internet built on such transmission.
Normally, on the internet, when two parties communicate they must first establish a connection, and after the data transfer is completed the connection is closed; that is, each connection completes only one transaction. Such a connection is called a short connection. However, a user on a given terminal may communicate many times within a short period. For example, a program or thread with networking capability running on the terminal system may initiate multiple TCP (Transmission Control Protocol) connection requests to the server. If every communication must first establish a connection before transmitting, the processing speed of data transfer is severely affected. Meanwhile, frequently creating short connections requires the server to keep listening and confirming connections, which not only increases the server's workload but also wastes network bandwidth. To avoid the overhead caused by the frequent setup and teardown of TCP short connections, TCP long (persistent) connections can be used instead: a long connection can transmit multiple packets continuously over a single connection. Although communicating over a long connection avoids repeated connection requests within a short time, long connections are usually established by the terminal, and each long connection may transmit a large amount of data. When there are a large number of connection requests, the server must provide a large batch of uninterrupted long connections for data transmission. At this point, a method of flexibly allocating memory with an optimized resource allocation mechanism is needed to relieve the burden on the server.
In the prior art, during long-connection communication, when the server receives a request to allocate memory of a certain size, it first searches an internally maintained free-block list and, according to some algorithm (for example, allocating the first free block found that is not smaller than the requested size), finds a free memory block of a suitable size. If the free block is too large, it must also be split into an allocated part and a smaller free block. The server then updates the free-block list, completing one memory allocation. However, with this allocation approach, memory fragments are easily generated in the free blocks after allocation and memory utilization is low. Moreover, since memory must be allocated frequently during long-connection communication and the physical addresses of the free blocks are not contiguous, the server spends more time searching for available free memory blocks during each call, making memory allocation inefficient.
In the prior art, the memory allocation processing of a long-connection server under a large number of connection requests consumes computing resources continuously, spends considerable time searching for available free memory blocks, and easily produces memory fragments in the free blocks after allocation, resulting in low system memory utilization and inefficient memory allocation.
The content of the invention
The purpose of the embodiments of the present application is to provide a memory allocation processing method and device that can reduce memory allocation time, improve system memory utilization, and optimize system memory allocation efficiency.
The memory allocation method and device provided by the embodiments of the present application are implemented as follows:
A memory allocation processing method, which may include:
obtaining a memory request, and judging whether the requested memory is smaller than the maximum allocatable memory of a memory block in an established memory pool;
when the requested memory is judged to be larger than the maximum allocatable memory, allocating memory to the request from bulk memory according to the bulk memory information recorded in the data header of the memory block;
otherwise, judging whether the requested memory is larger than the free memory of the current memory block, and when the requested memory is judged to be smaller than the free memory of the current memory block, judging whether the current remaining space of the current memory block meets the demand of the requested memory;
if the current remaining space of the current memory block meets the demand of the requested memory, allocating memory to the request from the current remaining space of the current memory block.
A memory allocation processing device, which may include:
a maximum memory judging module, configured to obtain a memory request and judge whether the requested memory is smaller than the maximum allocatable memory of a memory block in an established memory pool;
a bulk memory allocation module, configured to, when the maximum memory judging module judges that the requested memory is larger than the maximum allocatable memory, allocate memory to the request from bulk memory according to the bulk memory information recorded in the data header of the memory block;
a remaining space judging module, configured to judge whether the requested memory is larger than the free memory of the current memory block, and when the requested memory is judged to be smaller than the free memory of the current memory block, judge whether the current remaining space of the current memory block meets the demand of the requested memory;
a remaining space allocation module, configured to, when the remaining space judging module judges that the current remaining space of the current memory block meets the demand of the requested memory, allocate memory to the request from the current remaining space of the current memory block.
With the memory allocation processing method and device provided by the present application, memory can be allocated from the memory pool or from bulk memory according to the size of the request, and within the memory pool the allocation can come from free memory or from a new memory block according to the relationship between the requested size and the free memory. This not only meets the demand for requested memory space but also makes full use of the space of each memory block. The memory pool facilitates the server's centralized management of small memory spaces; threads that frequently request small memory spaces in particular can obtain space quickly from the memory pool, which greatly reduces memory fragmentation and greatly improves space utilization.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flow diagram of an embodiment of the memory allocation processing method provided by the present application;
Fig. 2 is a schematic structural diagram of an embodiment of the memory block provided by the present application;
Fig. 3 is a flowchart of an embodiment of the memory processing method when the requested memory is larger than the free memory, provided by the present application;
Fig. 4 is a schematic structural diagram of an embodiment of creating a new memory block, provided by the present application;
Fig. 5 is a flowchart of an embodiment of the method of allocating memory from free memory, provided by the present application;
Fig. 6 is a flow diagram of an embodiment of the memory release method provided by the present application;
Fig. 7 is a schematic modular diagram of an embodiment of the memory allocation processing device provided by the present application;
Fig. 8 is a schematic modular diagram of another embodiment of the memory allocation processing device provided by the present application;
Fig. 9 is a schematic modular diagram of another embodiment of the memory allocation processing device provided by the present application;
Fig. 10 is a schematic modular diagram of an embodiment of the remaining space allocation module provided by the present application;
Fig. 11 is a schematic modular diagram of another embodiment of the memory allocation processing device provided by the present application;
Fig. 12 is a schematic modular diagram of another embodiment of the memory allocation processing device provided by the present application;
Fig. 13 is a schematic modular diagram of another embodiment of the released memory linked-list update module provided by the present application.
Embodiment
In order to enable those skilled in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present application.
The memory allocation processing method described herein is described in detail below with reference to the accompanying drawings. Fig. 1 is a flow diagram of an embodiment of the memory allocation processing method provided by the present application. Although the present application provides the method operation steps shown in the following embodiments or drawings, more or fewer operation steps may be included in the method based on routine work or without creative effort. For steps with no necessary causal relationship in logic, the execution order of these steps is not limited to the execution order provided in the embodiments of the present application. When the method is executed in an actual memory allocation process or by a device, the steps may be executed in the order shown in the embodiments or drawings, or executed in parallel (for example, in an environment of parallel processors or multi-threaded processing).
A specific embodiment of the memory allocation processing method provided by the present application is shown in Fig. 1. The method may include:
S1: obtaining a memory request, and judging whether the requested memory is smaller than the maximum allocatable memory of a memory block in the established memory pool.
During a long connection, a process typically requests small memory spaces from the server frequently; the server may therefore keep receiving, within a short time, the memory request information sent by the long-connection process, which may include information such as the size of the requested memory space. In the embodiments of the present application, the server can establish a memory pool in memory, and this memory pool can be used by the process that frequently requests such small memory spaces, until the process is completed. The memory pool can be a memory space pre-divided in the server memory. The established memory pool can be configured as follows:
SS1: dividing the pool into at least one memory block of a preset fixed size, each including a data header and a data storage area;
SS2: arranging the memory blocks in the memory pool in descending order of the free memory of each block.
At least one memory block can be divided in the memory pool; the size of each memory block is a preset value, such as 10K or 1M. When the memory pool includes two or more memory blocks, the memory blocks of the whole pool form a linked list of memory blocks. Fig. 2 is a schematic structural diagram of an embodiment of the memory block provided by the present application. As shown in Fig. 2, the memory block can include a data header and a data storage area, where the data storage area is used to store data and the data header is used to store header information. The data header information can include:
the start address of the current memory block;
the start address of the previous memory block and the start address of the next memory block;
the address of the unallocated memory space and the size of the free memory;
the start address of the bulk memory and the size of the bulk memory;
the address of the released memory space and the size of the released memory.
The current memory block start address stores the physical address of the current memory block, for example the start address "0x8049320". As mentioned above, when the memory pool includes multiple memory blocks, the physical addresses of the blocks are contiguous, and the previous and next memory block start address fields store the start addresses of the blocks before and after the current one. The next block's start address can be obtained by offsetting the current block's start address by the preset fixed block size. As shown in Fig. 2, the free memory of a memory block can include unallocated memory and released memory: the unallocated memory is the part of the data storage area that has never stored any data, and the released memory is the part that has stored data but whose stored data has been marked as released. The bulk memory is one or more memory spaces outside the memory pool; in this embodiment the size of a bulk memory space is not fixed and is generally larger than the maximum allocatable memory of a memory block, where the maximum allocatable memory can include both the allocated and the allocatable memory of the block. When there are multiple bulk memory spaces, they can be chained together in the form of a linked list; as shown in Fig. 2, the next-memory address of bulk memory 1 points to bulk memory 2. The bulk memory start address field can store the start address of the bulk memory outside the memory pool, and the bulk memory size field can store the size of the corresponding bulk memory space; through the bulk memory start addresses, the available bulk memory can be traversed and the size of each bulk memory space can be known.
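As a rough model of the data header fields listed above, the following C sketch shows one possible layout of the per-block bookkeeping, the bulk memory chain, and the released-memory records. The struct and field names are assumptions made for illustration and are not taken from the patent text.

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of the per-block data header described above; names are invented. */
    typedef struct released_node {
        struct released_node *next;    /* next released sub-block record          */
        uint8_t              *addr;    /* start address of the released sub-block */
        size_t                size;    /* size of the released sub-block          */
    } released_node_t;

    typedef struct bulk_mem {
        struct bulk_mem *next;         /* next bulk memory space in the chain */
        uint8_t         *addr;         /* current start address of the space  */
        size_t           size;         /* size of the bulk memory space       */
    } bulk_mem_t;

    typedef struct mem_block {
        void             *self;        /* start address of the current block      */
        struct mem_block *prev;        /* start address of the previous block     */
        struct mem_block *next;        /* start address of the next block         */
        uint8_t          *unalloc;     /* address of the unallocated memory space */
        size_t            free_size;   /* free memory = unallocated + released    */
        bulk_mem_t       *bulk_head;   /* bulk memory start-address chain         */
        size_t            bulk_size;   /* size of the first bulk memory space     */
        released_node_t  *released;    /* released memory space address records   */
        size_t            released_size;
        /* the data storage area follows the header inside the fixed-size block */
    } mem_block_t;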
In a process that frequently requests small memory spaces, a larger memory request may also occur occasionally; in that case the memory blocks of the memory pool cannot meet the demand of the request. In this embodiment, by judging whether the requested memory is smaller than the maximum allocatable memory of a memory block, it can be decided whether to allocate from a memory block or from bulk memory.
S2: when the requested memory is judged to be larger than the maximum allocatable memory, allocating memory to the request from bulk memory according to the bulk memory information recorded in the data header of the memory block.
In this embodiment the maximum allocatable memory is usually the size of the data storage area of a memory block; in general it is a fixed preset value. When the judgment result is that the requested memory is larger than the maximum allocatable memory of a memory block, memory can be allocated to the request from bulk memory according to the bulk memory information recorded in the data header of the current memory block. The data header stores the addresses and sizes of the available bulk memory outside the memory pool. According to the size of the request, the bulk memory is traversed to determine a bulk memory space from which the request can be served, for example the first bulk memory space in the bulk memory start-address linked list whose memory is larger than the requested memory.
It should be noted that after part of a bulk memory space has been allocated to the request, the remaining space of that bulk memory shrinks and its start address is offset accordingly. In this embodiment, after allocating memory to the request from the bulk memory, the following steps may be included:
updating the start address of the bulk memory;
updating the start address and size of the bulk memory into the data headers of the memory blocks in the memory pool.
Allocating large requests from bulk memory outside the memory pool solves the problem that the small memory blocks of the memory pool cannot occasionally meet the demand of a request. In addition, after memory is allocated from bulk memory, the address and size of the bulk memory are updated in every memory block, ensuring that each memory block stores the latest bulk memory address and size information.
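A minimal sketch of the bulk-memory path of S2, under the assumption that the bulk spaces form a singly linked list recorded in the block data headers: the chain is walked for the first space large enough, the request is carved off its front, and the recorded start address and size are updated as described above. All names are illustrative.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct bulk_mem {
        struct bulk_mem *next;   /* next bulk memory space in the chain */
        uint8_t         *addr;   /* current start address of the space  */
        size_t           size;   /* bytes still available in this space */
    } bulk_mem_t;

    /* First-fit allocation of `want` bytes from the bulk memory chain. */
    static void *bulk_alloc(bulk_mem_t *head, size_t want)
    {
        for (bulk_mem_t *b = head; b != NULL; b = b->next) {
            if (b->size >= want) {
                void *p = b->addr;
                b->addr += want;   /* offset the recorded start address */
                b->size -= want;   /* shrink the recorded size          */
                return p;          /* caller re-records addr/size in every block header */
            }
        }
        return NULL;               /* no bulk space is large enough */
    }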
S3: otherwise, judging whether the requested memory is larger than the free memory of the current memory block, and when the requested memory is judged to be smaller than the free memory of the current memory block, judging whether the current remaining space of the current memory block meets the demand of the request.
When the judgment result is that the requested memory is smaller than the maximum allocatable memory, it can be determined that the request is served from the memory pool. Two cases must be distinguished: one is that the requested memory is smaller than the free memory of the current memory block, and the other is that the requested memory is larger than or equal to the free memory of the current memory block. Memory is allocated differently in these two cases.
Fig. 3 is a flowchart of an embodiment of the memory processing method when the requested memory is larger than the free memory, provided by the present application. As shown in Fig. 3, the method includes:
S11: when the requested memory is judged to be larger than or equal to the free memory of the current memory block, searching the memory pool in turn, based on the memory block linked list recorded in the data header of the current block, for an available memory block whose free memory is larger than or equal to the requested memory.
When the free memory of the current memory block cannot meet the request, the next memory block in the memory pool whose free memory is larger than or equal to the requested memory can be searched for, based on the memory block linked-list information recorded in the data header of the current block.
In another implementation scenario of the present application, when the memory block linked list of the memory pool is arranged in descending order of the free memory of the blocks, whether the free memory of the block preceding the current memory block is larger than or equal to the requested memory can be judged based on the linked-list information recorded in the data header of the current block. In this embodiment, searching a sorted memory block linked list for a free memory block whose free memory is larger than or equal to the requested memory saves search time and improves search efficiency.
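Because the pool keeps its blocks sorted by free memory in descending order, the lookup of S11 can stop at the nearest block with enough space instead of scanning the whole pool. A small sketch under that assumption; the types and names are invented for illustration.

    #include <stddef.h>

    typedef struct mem_block {
        struct mem_block *prev, *next;
        size_t            free_size;   /* unallocated + released bytes */
    } mem_block_t;

    /* The pool list is sorted by free_size, largest first.  Walking from the
     * current block toward the head, the first block whose free memory covers
     * the request is the tightest usable fit. */
    static mem_block_t *find_block_with_space(mem_block_t *current, size_t want)
    {
        for (mem_block_t *b = current; b != NULL; b = b->prev)
            if (b->free_size >= want)
                return b;
        return NULL;   /* no existing block fits: a new block must be created */
    }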
S12: correspondingly, if such a free memory block is found, judging whether the current remaining space of the current memory block meets the request includes judging whether the current remaining space of the found free memory block meets the demand of the requested memory.
If a free memory block whose free memory is larger than or equal to the requested memory is found, whether the current remaining space of that free memory block meets the request can be judged.
As shown in Fig. 3, in another embodiment of the present application, the method may further include:
S21: if no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, creating a new memory block in the memory pool and using the new memory block to allocate memory to the request.
If no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, a new memory block can be created in the memory pool and used to allocate memory to the request. In this embodiment the new memory block can be created at the end of the memory pool, with the last memory block of the pool pointing to the new block. Fig. 4 is a schematic structural diagram of an embodiment of creating a new memory block provided by the present application. As shown in Fig. 4, when the new memory block is created, part of its data storage area is first allocated to the request, and the remaining part is treated as unallocated memory, which keeps the maximum allocatable memory of the new block the same size as that of the other memory blocks. The bulk memory start address and size in the data header of the new memory block can be copied from the corresponding fields of the previous memory block. In this implementation the allocation to the request is completed while the new memory block is being created, which reduces the number of operation steps and improves allocation efficiency.
As shown in Fig. 3, in another embodiment of the present application, the method may further include:
S31: based on the next memory block indicated by the data header of each memory block, comparing in turn the free memory of the memory blocks in the memory pool with the free memory of the new memory block, and inserting the new memory block into the linked list between the previous memory block whose free memory is larger than that of the new block and the following memory block whose free memory is smaller than that of the new block.
In another embodiment of the present application, after the new memory block has been created and memory has been allocated to the request, the new memory block can be inserted at the appropriate position in the memory pool, ensuring that the memory blocks of the pool remain arranged in descending order of free memory. Specifically, this can be done as follows: compare in turn the free memory of the memory blocks in the pool with the free memory of the new block, and insert the new block into the linked list between the adjacent pair of blocks where the preceding block's free memory is larger than, and the following block's free memory is smaller than, the free memory of the new block.
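One way to realize the insertion of S31, assuming the same descending order by free memory; the names are invented, and the patent only requires that the new block end up between a predecessor with more free memory and a successor with less.

    #include <stddef.h>

    typedef struct mem_block {
        struct mem_block *prev, *next;
        size_t            free_size;
    } mem_block_t;

    /* Insert `nb` into a doubly linked list sorted by free_size, descending.
     * `head` points to the block with the most free memory. */
    static void insert_sorted(mem_block_t **head, mem_block_t *nb)
    {
        mem_block_t *cur = *head, *prev = NULL;
        while (cur != NULL && cur->free_size >= nb->free_size) {
            prev = cur;            /* blocks with more (or equal) free memory */
            cur = cur->next;       /* stay in front of the new block          */
        }
        nb->prev = prev;
        nb->next = cur;
        if (prev != NULL)
            prev->next = nb;
        else
            *head = nb;            /* the new block has the most free memory */
        if (cur != NULL)
            cur->prev = nb;
    }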
S4: if the current remaining space of the current memory block meets the demand of the requested memory, allocating memory to the request from the current remaining space of the current memory block.
If the current remaining space of the current memory block meets the demand of the request, that is, when the current remaining space of the current block is larger than or equal to the requested memory, memory can be allocated to the request from the current remaining space of the current block. Specifically, Fig. 5 is a flowchart of an embodiment of the method of allocating memory from free memory provided by the present application. As shown in Fig. 5, if the current remaining space of the current memory block meets the demand of the request, allocating memory to the request from the current remaining space of the current block can include:
S101: if the current remaining space of the current memory block meets the demand of the request, preferentially allocating memory to the request from the current unallocated memory space of the current memory block.
If the current remaining space of the current memory block meets the demand of the request, memory can be allocated to the request preferentially from the current unallocated memory space of the current block. Specifically, it can first be judged whether the requested memory is smaller than or equal to the unallocated memory within the free memory.
If the requested memory is smaller than or equal to the unallocated memory within the free memory, memory can be allocated to the request from the unallocated memory space. The unallocated memory is the part of a block's free memory that has never stored data; the released memory within the free memory can also be used, but it still holds the released stored data, so before storing new data there the old data must first be erased and the space reallocated. Therefore, in this embodiment, the unallocated memory within the free memory is preferentially allocated to the request.
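S101 prefers the never-used tail of the block because it can be handed out without first erasing old contents. A minimal bump-pointer style sketch under that assumption; the field and function names are invented.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct mem_block {
        uint8_t *unalloc;        /* start of the never-used part of the block */
        size_t   unalloc_size;   /* bytes remaining in the never-used part    */
        size_t   free_size;      /* unallocated + released bytes              */
    } mem_block_t;

    /* Serve the request from the unallocated region first; returns NULL when
     * the caller must fall back to the released-memory records (S102). */
    static void *alloc_from_unallocated(mem_block_t *blk, size_t want)
    {
        if (want > blk->unalloc_size)
            return NULL;               /* not enough never-used space left */
        void *p = blk->unalloc;
        blk->unalloc      += want;     /* bump the unallocated pointer     */
        blk->unalloc_size -= want;
        blk->free_size    -= want;
        return p;
    }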
S102: if the current unallocated memory of the current memory block does not meet the demand of the request, searching the space address information of the released memory recorded in the data header of the current block for a memory space that meets the demand of the request and allocating it.
If the current unallocated memory of the current memory block does not meet the size of the request, a memory space meeting the request can be searched for in the released memory recorded in the data header of the current block and allocated. Specifically, it can be judged whether the released memory within the free memory contains a released sub-block whose memory is larger than or equal to the requested memory. In this embodiment, because memory is used and released randomly and at inconsistent times, the released memory is usually not contiguous and can contain multiple released sub-blocks. The sizes of these released sub-blocks generally differ, so the released memory can be searched for a released sub-block whose memory is larger than or equal to the requested memory.
It should be noted that when the released memory contains multiple released sub-blocks, the space address information of the released memory can include the start addresses of the multiple released sub-blocks.
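The scan of S102 then looks for a single released sub-block that is large enough, since the request needs contiguous space. A sketch under the assumption that the released records form a singly linked list of (address, size) nodes; names are invented.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct released_node {
        struct released_node *next;
        uint8_t              *addr;   /* start of the released sub-block  */
        size_t                size;   /* length of the released sub-block */
    } released_node_t;

    /* Return the first released sub-block that can hold `want` contiguous
     * bytes, keeping any unused tail on the list as a smaller record. */
    static void *alloc_from_released(released_node_t **head, size_t want)
    {
        for (released_node_t **link = head; *link != NULL; link = &(*link)->next) {
            released_node_t *n = *link;
            if (n->size < want)
                continue;              /* sub-block too small, keep searching */
            void *p = n->addr;
            if (n->size > want) {      /* keep the unused tail as a record    */
                n->addr += want;
                n->size -= want;
            } else {
                *link = n->next;       /* exact fit: drop the record; the node
                                        * itself is recycled by the caller    */
            }
            return p;
        }
        return NULL;                   /* no released sub-block is big enough */
    }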
As shown in Fig. 5, in another embodiment of the present application, the method may further include:
S201: if no memory space meeting the demand of the request can be found in the released memory recorded in the data header of the current memory block, searching the memory pool in turn, based on the address information recorded in the data header of the current block, for a free memory block whose free memory is larger than or equal to the requested memory.
Because the released sub-blocks may not be contiguous while the requested memory needs contiguous space, the released sub-blocks contained in the released memory may be unable to meet the demand of the request. Therefore, when the judgment result is that the released memory within the free memory does not contain a released sub-block whose memory is larger than or equal to the requested memory, the next free memory block in the memory pool whose free memory is larger than or equal to the requested memory can be searched for, based on the memory block linked list recorded in the data header of the current block.
S202: correspondingly, if such a free memory block is found, judging whether the current remaining space of the current memory block meets the request includes judging whether the current remaining space of the found free memory block meets the demand of the requested memory.
If such a free memory block is found, whether the current remaining space of the free memory block meets the request can be judged. If the current remaining space of the free memory block meets the request, steps S101 to S102 can be repeated; if the current remaining space of the free memory block cannot meet the request, steps S11 to S12 can be repeated. The specific implementations are not repeated here.
As shown in Fig. 5, in another embodiment of the present application, the method may further include:
S301: if no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, creating a new memory block in the memory pool and using the new memory block to allocate memory to the request.
The implementation of S301 can refer to the implementation of S21 and is not repeated here.
In another embodiment of the present application, memory can be released from bulk memory and from the memory blocks of the memory pool. Fig. 6 is a flow diagram of an embodiment of the memory release method provided by the present application. As shown in Fig. 6, the method includes:
S1001: obtaining a memory release request, and judging whether the memory to be released is larger than the maximum allocatable memory.
The memory release request can include the address and size of the memory requested to be released. Because the allocation and release of memory-pool memory coexist with the worker threads, when a worker thread needs to release a block of memory, it is first judged whether that memory comes from a memory block of the memory pool. Specifically, it can first be judged whether the memory to be released is larger than the maximum allocatable memory.
S1002: when the judgment result is that the memory to be released is larger than the maximum allocatable memory, locating the start address of the bulk memory corresponding to the memory to be released according to the memory release request and the bulk memory information recorded in the data header.
When the memory to be released is judged to be larger than the maximum allocatable memory, it can be determined that what is to be released is bulk memory. In this case, the start address of the bulk memory can be located through the bulk memory start-address link in the data header, according to the address information in the memory release request and the bulk memory information recorded in the data header.
S1003: releasing the data of the bulk memory that was located, and deleting the start address of the bulk memory corresponding to the released memory from the bulk memory address linked list recorded in the data header of the memory block.
After the start address of the bulk memory is located, the data of the located bulk memory is released according to the size recorded in the release request. After the bulk memory has been released, the start address of the corresponding bulk memory can be deleted from the bulk memory address linked list recorded in the data header of the memory block, keeping the bulk memory address information recorded in the data header up to date.
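For the bulk path of S1001 to S1003, the release reduces to finding the record whose start address matches the request and deleting it from the chain kept in the block data headers. The sketch below only shows that bookkeeping; how the bulk space itself is handed back to the system depends on how it was obtained, which the patent leaves open. All names are invented.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct bulk_mem {
        struct bulk_mem *next;
        uint8_t         *addr;   /* start address recorded in the data headers */
        size_t           size;
    } bulk_mem_t;

    /* Delete the bulk record whose start address matches the release request
     * and return it so the caller can release the data and recycle the record. */
    static bulk_mem_t *bulk_release(bulk_mem_t **head, uint8_t *addr)
    {
        for (bulk_mem_t **link = head; *link != NULL; link = &(*link)->next) {
            bulk_mem_t *b = *link;
            if (b->addr == addr) {
                *link = b->next;   /* remove the start address from the chain */
                return b;
            }
        }
        return NULL;               /* no bulk record matches `addr` */
    }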
As shown in Fig. 6, in another embodiment of the present application, the method may further include:
S2001: when the judgment result is that the memory to be released is smaller than the maximum allocatable memory, locating, according to the memory release request, the address of the memory to be released within the corresponding memory block of the memory pool, and releasing the memory space of the memory to be released in that memory block.
When the memory to be released is judged to be smaller than the maximum allocatable memory, the address of the memory to be released within the corresponding memory block of the memory pool can be located according to the address information in the memory release request, and the memory space of the memory to be released in that block can be released.
S2002: obtaining the memory space address of the released memory generated after the memory block releases the memory, and inserting that memory space address into the space address information of the released memory recorded in the data header of the memory block.
After the address of the memory to be released within the corresponding memory block of the memory pool has been located, the located memory to be released can be classified as released memory. It should be noted that because the memory blocks of the memory pool have a preset size, the memory inside a block cannot be erased by the server the way the stored information in bulk memory is; it can only be overwritten. Therefore the memory to be released can be marked as released memory, and the next time the released memory is used, the stored information in it can be overwritten.
In this embodiment, inserting the memory space address into the space address information of the released memory recorded in the data header of the memory block to be released can include:
judging whether the current node where the memory space address is located, the node preceding the current node, and the node following the current node are contiguous memory;
if the judgment result is that they are contiguous memory, merging the current node, the node preceding it, and the node following it into a new node of contiguous memory and recording it in the space address information of the released memory recorded in the data header of the memory block; otherwise, inserting the memory space address as a new node into the space address information of the released memory recorded in the data header of the memory block.
The current node in this embodiment refers to the currently released memory, and the preceding node and following node refer respectively to the previously released memory and the subsequently released memory adjacent to it. By checking whether the currently released space is contiguous with the adjacent released spaces, and merging them into one contiguous released space if so, the information about released space can later be found accurately and conveniently.
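The merge check described here is a coalescing pass: a newly released range is joined with its neighbours when their address ranges are contiguous, so later lookups see one larger released record instead of several fragments. A sketch under the assumption that the released records are kept sorted by address; names are invented.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct released_node {
        struct released_node *next;
        uint8_t              *addr;
        size_t                size;
    } released_node_t;

    /* Insert a newly released range [addr, addr + size) into an address-ordered
     * list and merge it with the previous and/or next node when contiguous.
     * Nodes absorbed by a merge are left for the caller to recycle. */
    static void insert_released(released_node_t **head, released_node_t *nn,
                                uint8_t *addr, size_t size)
    {
        nn->addr = addr;
        nn->size = size;
        released_node_t **link = head, *prev = NULL;
        while (*link != NULL && (*link)->addr < addr) {
            prev = *link;
            link = &(*link)->next;
        }
        nn->next = *link;          /* insert as a new node */
        *link = nn;
        if (nn->next != NULL && nn->addr + nn->size == nn->next->addr) {
            released_node_t *n = nn->next;   /* contiguous with the next node */
            nn->size += n->size;
            nn->next = n->next;
        }
        if (prev != NULL && prev->addr + prev->size == nn->addr) {
            prev->size += nn->size;          /* contiguous with the previous node */
            prev->next = nn->next;
        }
    }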
As shown in Fig. 6, in another embodiment of the present application, the method may further include:
S3001: judging whether the new node is located in the first memory block of the memory pool; if the new node is not located in the first memory block of the pool, judging whether the free memory of the memory block to be released is equal to the maximum allocatable memory;
S3002: if the judgment result is that the free memory of the memory block to be released is equal to the maximum allocatable memory, releasing the memory space of the data storage area of the memory block to be released and deleting the node of the memory block to be released.
When the new node is in the first memory block of the memory pool, the memory release flow ends. When the new node is located in another memory block node of the pool, it can be judged whether the free memory of the memory block to be released is equal to the maximum allocatable memory. When the free memory of the block to be released equals the maximum allocatable memory, it can be determined that after this release the memory of the block is entirely idle. At this point, the space of the data storage area of the block can be released and the node of the memory block to be released can be deleted; for example, the next-block address in the previous memory block K-1 of the block K to be released is set to point to the next memory block K+1 of block K.
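When a block's free memory grows back to the size of its data storage area, S3002 removes it from the pool; the removal itself is the ordinary doubly linked list deletion in the K-1 / K+1 example above. A sketch with invented names:

    #include <stddef.h>

    typedef struct mem_block {
        struct mem_block *prev, *next;
        size_t            free_size;
        size_t            max_alloc;   /* size of the data storage area */
    } mem_block_t;

    /* If the block to be released is entirely free and is not the pool's first
     * block, detach it from the pool list; the caller then releases its data
     * storage area.  Returns 1 when the block node was deleted, 0 otherwise. */
    static int remove_if_empty(mem_block_t **head, mem_block_t *blk)
    {
        if (blk == *head || blk->free_size != blk->max_alloc)
            return 0;
        blk->prev->next = blk->next;   /* block K-1 now points to block K+1 */
        if (blk->next != NULL)
            blk->next->prev = blk->prev;
        return 1;
    }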
As shown in Fig. 6, in another embodiment of the present application, the method may further include:
S4001: if the judgment result is that the free memory of the memory block to be released is not equal to the maximum allocatable memory, comparing the free memory of the memory blocks in the memory pool and arranging the memory blocks in descending order of free memory.
When the free memory of the memory block whose memory was released is not equal to the maximum allocatable memory, the free memory of the memory blocks in the memory pool can be compared and the blocks re-sorted in descending order of free memory, ensuring that the free memory of the blocks in the pool remains arranged from largest to smallest.
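Re-establishing the descending order after a release only requires moving the affected block: its free memory has grown, so it may need to slide toward the head of the list. A sketch of that re-sort, using the same ordering rule as the sorted insert shown earlier; names are invented.

    #include <stddef.h>

    typedef struct mem_block {
        struct mem_block *prev, *next;
        size_t            free_size;
    } mem_block_t;

    /* After a release increases blk->free_size, swap it forward until the pool
     * list is again sorted by free memory in descending order. */
    static void resort_after_release(mem_block_t **head, mem_block_t *blk)
    {
        while (blk->prev != NULL && blk->prev->free_size < blk->free_size) {
            mem_block_t *p = blk->prev;      /* swap blk with its predecessor */
            p->next = blk->next;
            if (blk->next != NULL)
                blk->next->prev = p;
            blk->prev = p->prev;
            if (p->prev != NULL)
                p->prev->next = blk;
            else
                *head = blk;                 /* blk now has the most free memory */
            blk->next = p;
            p->prev = blk;
        }
    }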
In the embodiments of the present application, when it is detected that the next-block start address pointed to by the first memory block of the memory pool is empty, and the free memory of the first memory block equals its maximum allocatable memory, the memory pool is destroyed and the memory pool space is released.
With the memory allocation processing method provided by the present application, memory can be allocated from the memory pool or from bulk memory according to the size of the request, and within the memory pool the allocation can come from free memory or from a new memory block according to the relationship between the requested size and the free memory. This not only meets the demand for requested memory space but also makes full use of the space of each memory block. The memory pool facilitates the server's centralized management of small memory spaces; threads that frequently request small memory spaces in particular can obtain space quickly from the memory pool, which greatly reduces memory fragmentation and greatly improves space utilization.
In another aspect, the present application also provides a memory allocation device. Fig. 7 is a schematic modular diagram of an embodiment of the memory allocation processing device provided by the present application. As shown in Fig. 7, the device 70 can include:
a maximum memory judging module 71, configured to obtain a memory request and judge whether the requested memory is smaller than the maximum allocatable memory of a memory block in the established memory pool;
a bulk memory allocation module 72, configured to, when the maximum memory judging module judges that the requested memory is larger than the maximum allocatable memory, allocate memory to the request from bulk memory according to the bulk memory information recorded in the data header of the memory block;
a remaining space judging module 73, configured to judge whether the requested memory is larger than the free memory of the current memory block, and when the requested memory is judged to be smaller than the free memory of the current memory block, judge whether the current remaining space of the current memory block meets the demand of the requested memory;
a remaining space allocation module 74, configured to, when the remaining space judging module judges that the current remaining space of the current memory block meets the demand of the requested memory, allocate memory to the request from the current remaining space of the current memory block.
It should be noted that the established memory pool in this embodiment can be configured as follows:
dividing the pool into at least one memory block of a preset fixed size, each including a data header and a data storage area;
arranging the memory blocks in the memory pool in descending order of the free memory of each block.
Fig. 8 is a schematic modular diagram of another embodiment of the memory allocation device provided by the present application. As shown in Fig. 8, the device 80 can also include:
a first free memory block searching module 81, configured to, when the remaining space judging module judges that the requested memory is larger than or equal to the free memory of the current memory block, search the memory pool in turn, based on the memory block linked list recorded in the data header of the current block, for a free memory block whose free memory is larger than or equal to the requested memory;
a first free memory block judging module 82, configured to, if such a free memory block is found, make the judgment of whether the current remaining space of the current memory block meets the request include judging whether the current remaining space of the found free memory block meets the demand of the requested memory.
Fig. 9 is a schematic modular diagram of another embodiment of the memory allocation device provided by the present application. As shown in Fig. 9, the device 90 can also include:
a first new memory block creation module 91, configured to, if no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, create a new memory block in the memory pool and use the new memory block to allocate memory to the request;
a first memory block ordering module 92, configured to, based on the next memory block indicated by the data header of each memory block, compare in turn the free memory of the memory blocks in the memory pool with the free memory of the new memory block, and insert the new memory block into the linked list between the previous memory block whose free memory is larger than that of the new block and the following memory block whose free memory is smaller than that of the new block.
Fig. 10 is a schematic modular diagram of an embodiment of the remaining space allocation module 74 provided by the present application. In one implementation scenario of the present application, the remaining space allocation module 74 can include:
an unallocated memory allocation module 101, configured to, if the current remaining space of the current memory block meets the demand of the request, preferentially allocate memory to the request from the current unallocated memory space of the current memory block;
a released memory allocation module 102, configured to, if the current unallocated memory of the current memory block does not meet the demand of the request, search the space address information of the released memory recorded in the data header of the current block for a memory space that meets the demand of the request and allocate it.
As shown in Fig. 10, in another embodiment of the present application, the remaining space allocation module 74 can also include:
a second free memory block searching module 103, configured to, if no memory space meeting the demand of the request can be found in the released memory recorded in the data header of the current memory block, search the memory pool in turn, based on the address information recorded in the data header of the current block, for a free memory block whose free memory is larger than or equal to the requested memory;
a second free memory block judging module 104, configured to, if such a free memory block is found, judge whether the current remaining space of the free memory block meets the demand of the request.
As shown in Fig. 10, in another embodiment of the present application, the remaining space allocation module 74 can also include:
a second new memory block creation unit 105, configured to, when the judgment result of the second free memory block judging module is that no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, create a new memory block in the memory pool and use the new memory block to allocate memory to the request.
Fig. 11 is a schematic modular diagram of another embodiment of the memory allocation processing device provided by the present application. As shown in Fig. 11, the device 110 can also include:
a released memory judging module 1101, configured to obtain a memory release request and judge whether the memory to be released is larger than the maximum allocatable memory;
a bulk memory locating module 1102, configured to, when the judgment result of the released memory judging module is that the memory to be released is larger than the maximum allocatable memory, locate the start address of the bulk memory corresponding to the memory to be released according to the memory release request and the bulk memory information recorded in the data header;
a bulk memory release module 1103, configured to release the data of the located bulk memory and delete the start address of the bulk memory corresponding to the released memory from the bulk memory address linked list recorded in the data header of the memory block.
Fig. 12 is a schematic modular diagram of another embodiment of the memory allocation processing device provided by the present application. As shown in Fig. 12, the device 120 can also include:
a released memory locating module 1201, configured to, when the judgment result of the released memory judging module is that the memory to be released is smaller than the maximum allocatable memory, locate, according to the memory release request, the address of the memory to be released within the corresponding memory block of the memory pool, and release the memory space of the memory to be released in that block;
a released memory linked-list update module 1202, configured to obtain the memory space address of the released memory generated after the memory block releases the memory, and insert that memory space address into the space address information of the released memory recorded in the data header of the memory block.
Fig. 13 is a schematic modular diagram of another embodiment of the released memory linked-list update module provided by the present application. As shown in Fig. 13, the released memory linked-list update module 1202 can include:
a contiguous memory judging module 1301, configured to judge whether the current node where the memory space address is located, the node preceding the current node, and the node following the current node are contiguous memory;
a memory merging module 1302, configured to, if the judgment result of the contiguous memory judging module is that they are contiguous memory, merge the current node, the node preceding it, and the node following it into a new node of contiguous memory and record it in the space address information of the released memory recorded in the data header of the memory block; otherwise, insert the memory space address as a new node into the space address information of the released memory recorded in the data header of the memory block.
As shown in Fig. 13, in another embodiment of the present application, the released memory linked-list update module 1202 can also include:
a new node judging module 1303, configured to judge whether the new node is located in the first memory block of the memory pool, and if the new node is not located in the first memory block of the pool, judge whether the free memory of the memory block to be released is equal to the maximum allocatable memory;
a node release module 1304, configured to, when the judgment result of the new node judging module is that the free memory of the memory block to be released is equal to the maximum allocatable memory, release the memory space of the data storage area of the memory block to be released and delete the node of the memory block to be released.
As shown in Fig. 13, in another embodiment of the present application, the released memory linked-list update module 1202 can also include:
a first memory block ordering module 1305, configured to, when the judgment result of the new node judging module is that the free memory of the memory block to be released is not equal to the maximum allocatable memory, compare the free memory of the memory blocks in the memory pool and arrange the blocks in descending order of free memory.
In one embodiment of the present application, the device also includes:
a memory pool destruction module, configured to, when it is detected that the next-block start address pointed to by the first memory block of the memory pool is empty and the free memory of the first memory block equals its maximum allocatable memory, destroy the memory pool and release the memory pool space.
With the memory allocation processing device provided by the present application, memory can be allocated from the memory pool or from bulk memory according to the size of the request, and within the memory pool the allocation can come from free memory or from a new memory block according to the relationship between the requested size and the free memory. This not only meets the demand for requested memory space but also makes full use of the space of each memory block. The memory pool facilitates the server's centralized management of small memory spaces; threads that frequently request small memory spaces in particular can obtain space quickly from the memory pool, which greatly reduces memory fragmentation and greatly improves space utilization.
Although mentioning the address information recording side of the dividing mode of memory pool memory block, memory block and memory headroom in teachings herein The memory information data of formula or the like are set, stored, deletion mode is described, and still, the application is not limited to be complete Meet industry or certain computer language and perform standard or the situation described by embodiment.The computer language or reality of some standardization Apply example description on the basis of embodiment amended slightly can also carry out above-described embodiment it is identical, equivalent or close or deformation Afterwards it is anticipated that implementation result.Certainly, though not by the way of upper data processing, judging, as long as it is above-mentioned each to meet the application Internal storage data processing, information exchange and the information of embodiment judge feedback system, still can realize identical application, herein not Repeat again.
Although the present application provides the method operation steps described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of the steps enumerated in the embodiments is only one of many possible execution orders and does not represent the only execution order. When an actual device or client product is executed, the steps may be executed in the order shown in the embodiments or drawings, or in parallel (for example, in a parallel-processor or multi-threaded environment).
The modules, devices or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. For convenience of description, the above device is described by dividing its functions into various modules. When the present application is implemented, the functions of the modules may be realized in one or more pieces of software and/or hardware, and a module realizing a given function may also be implemented by a combination of multiple sub-modules or sub-units.
It is also known in the art that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to implement the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like by logically programming the method steps. Such a controller may therefore be regarded as a hardware component, and the means included in it for realizing various functions may also be regarded as structures within the hardware component. The means for realizing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The present application may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, classes and so on that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
From the above description of the embodiments, those skilled in the art can clearly understand that the present application can be implemented by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application, or the part that contributes to the prior art, can essentially be embodied in the form of a software product. The computer software product may be stored in a storage medium such as ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a mobile terminal, a server, a network device or the like) to execute the methods described in the embodiments of the present application or in some parts of the embodiments.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
Although the present application has been described through embodiments, those of ordinary skill in the art will appreciate that the present application has many variations and changes that do not depart from its spirit, and it is intended that the appended claims cover such variations and changes without departing from the spirit of the present application.

Claims (28)

1. A memory allocation processing method, characterized in that the method comprises:
acquiring memory application information, and judging whether the requested memory is smaller than the maximum allocatable memory of a memory block in an established memory pool;
when it is judged that the requested memory is larger than the maximum allocatable memory, allocating memory from large-block memory to the requested memory according to large-block memory information recorded in a data head of the memory block;
otherwise, judging whether the requested memory is larger than the free memory of a current memory block, and, when it is judged that the requested memory is smaller than the free memory of the current memory block, judging whether the current remaining space of the current memory block meets the demand of the requested memory;
if the current remaining space of the current memory block meets the demand of the requested memory, allocating memory from the current remaining space of the current memory block to the requested memory.
2. The memory allocation processing method according to claim 1, characterized in that the established memory pool is configured as follows:
the memory pool is divided into at least one memory block of a preset fixed size;
the memory blocks in the memory pool are arranged in descending order of the free memory of the memory blocks.
3. The memory allocation processing method according to claim 1 or 2, characterized in that the method further comprises:
when it is judged that the requested memory is larger than or equal to the free memory of the current memory block, successively searching the memory pool, based on a memory block linked list recorded in the data head of the current memory block, for a free memory block whose free memory is larger than or equal to the requested memory;
correspondingly, if the free memory block is found, judging whether the current remaining space of the current memory block meets the demand of the requested memory comprises judging whether the current remaining space of the free memory block meets the demand of the requested memory.
4. The memory allocation processing method according to claim 3, characterized in that the method further comprises:
if no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, creating a new memory block in the memory pool, and using the new memory block to allocate memory for the requested memory.
5. The memory allocation processing method according to claim 4, characterized in that the method further comprises:
based on the next memory block indicated by the data head of each memory block, successively comparing the free memory of the memory blocks in the memory pool with the free memory of the new memory block, and inserting the linked-list node of the new memory block between the previous memory block in the memory pool whose free memory is larger than the free memory of the new memory block and the following memory block whose free memory is smaller than the free memory of the new memory block.
6. The memory allocation processing method according to claim 1, characterized in that, if the current remaining space of the current memory block meets the demand of the requested memory, allocating memory from the current remaining space of the current memory block to the requested memory comprises:
if the current remaining space of the current memory block meets the demand of the requested memory, preferentially allocating memory to the requested memory from the currently unallocated memory space of the current memory block;
if the currently unallocated memory of the current memory block does not meet the demand of the requested memory, searching the space address information of released memory recorded in the data head of the current memory block for a memory space that meets the demand of the requested memory and allocating it.
7. The memory allocation processing method according to claim 6, characterized in that the method further comprises:
if no memory space meeting the demand of the requested memory is found in the released memory recorded in the data head of the current memory block, successively searching the memory pool, based on the address information recorded in the data head of the current memory block, for a free memory block whose free memory is larger than or equal to the requested memory;
correspondingly, if the free memory block is found, judging whether the current remaining space of the current memory block meets the demand of the requested memory comprises judging whether the current remaining space of the free memory block meets the demand of the requested memory.
8. The memory allocation processing method according to claim 7, characterized in that the method further comprises:
if no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, creating a new memory block in the memory pool, and using the new memory block to allocate memory for the requested memory.
9. The memory allocation processing method according to claim 1, characterized in that the method further comprises:
acquiring memory release request information, and judging whether the memory to be released is larger than the maximum allocatable memory;
when the judgment result is that the memory to be released is larger than the maximum allocatable memory, locating the first address of the large-block memory corresponding to the memory to be released according to the memory release request information and the large-block memory information recorded in the data head;
releasing the data information of the located large-block memory, and deleting the first address information of the large-block memory corresponding to the released memory from the large-block memory address linked list recorded in the data head of the memory block.
10. The memory allocation processing method according to claim 9, characterized in that the method further comprises:
when the judgment result is that the memory to be released is smaller than the maximum allocatable memory, locating, according to the memory release request information, the address of the memory to be released in the corresponding to-be-released memory block of the memory pool, and releasing the memory space of the released memory of the to-be-released memory block;
and acquiring the memory space address of the released memory generated after the to-be-released memory block releases the memory, and inserting the memory space address into the space address information of released memory recorded in the data head of the to-be-released memory block.
11. The memory allocation processing method according to claim 10, characterized in that inserting the memory space address into the space address information of released memory recorded in the data head of the to-be-released memory block comprises:
judging whether the current node where the memory space address is located, the previous node of the current node and the next node of the current node are contiguous memory;
if the judgment result is that they are contiguous memory, merging the current node, the previous node of the current node and the next node of the current node into a new node of one contiguous memory and recording it in the space address information of released memory recorded in the data head of the to-be-released memory block; otherwise, inserting the memory space address as a new node into the space address information of released memory recorded in the data head of the to-be-released memory block.
12. The memory allocation processing method according to claim 11, characterized in that the method further comprises:
judging whether the new node is located in the first memory block of the memory pool, and, if the new node is not located in the first memory block of the memory pool, judging whether the free memory of the to-be-released memory block is equal to the maximum allocatable memory;
if the judgment result is that the free memory of the to-be-released memory block is equal to the maximum allocatable memory, releasing the memory space of the data storage area of the to-be-released memory block, and deleting the node of the to-be-released memory block.
13. The memory allocation processing method according to claim 12, characterized in that the method further comprises:
if the judgment result is that the free memory of the to-be-released memory block is not equal to the maximum allocatable memory, comparing the free memory of the memory blocks in the memory pool and re-arranging the memory blocks in descending order of free memory.
14. The memory allocation processing method according to any one of claims 1-2 and 6-13, characterized in that the method further comprises:
when it is detected that the header address of the next memory block pointed to by the first memory block of the memory pool is null, and the free memory of the first memory block is equal to the maximum allocatable memory of the memory block, destroying the memory pool and releasing the memory pool space.
15. A memory allocation processing device, characterized in that the device comprises:
a maximum memory judging module, configured to acquire memory application information and judge whether the requested memory is smaller than the maximum allocatable memory of a memory block in an established memory pool;
a large-block memory allocation module, configured to, when the maximum memory judging module judges that the requested memory is larger than the maximum allocatable memory, allocate memory from large-block memory to the requested memory according to large-block memory information recorded in the data head of the memory block;
a remaining space judging module, configured to judge whether the requested memory is larger than the free memory of a current memory block, and, when it is judged that the requested memory is smaller than the free memory of the current memory block, judge whether the current remaining space of the current memory block meets the demand of the requested memory;
a remaining space allocation module, configured to, when the remaining space judging module judges that the current remaining space of the current memory block meets the demand of the requested memory, allocate memory from the current remaining space of the current memory block to the requested memory.
16. The memory allocation processing device according to claim 15, characterized in that the established memory pool is configured as follows:
the memory pool is divided into at least one memory block of a preset fixed size, each memory block comprising a data head and a data storage area;
the memory blocks in the memory pool are arranged in descending order of the free memory of the memory blocks.
17. The memory allocation processing device according to claim 15 or 16, characterized in that the device further comprises:
a first free memory block searching module, configured to, when the remaining space judging module judges that the requested memory is larger than or equal to the free memory of the current memory block, successively search the memory pool, based on the memory block linked list recorded in the data head of the current memory block, for a free memory block whose free memory is larger than or equal to the requested memory;
a first free memory block judging module, configured to, if the free memory block is found, judge whether the current remaining space of the current memory block meets the demand of the requested memory by judging whether the current remaining space of the free memory block meets the demand of the requested memory.
18. The memory allocation processing device according to claim 17, characterized in that the device further comprises:
a first new memory block creating module, configured to, if no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, create a new memory block in the memory pool and use the new memory block to allocate memory for the requested memory.
19. The memory allocation processing device according to claim 18, characterized in that the device further comprises:
a first memory block ordering module, configured to, based on the next memory block indicated by the data head of each memory block, successively compare the free memory of the memory blocks in the memory pool with the free memory of the new memory block, and insert the linked-list node of the new memory block between the previous memory block in the memory pool whose free memory is larger than the free memory of the new memory block and the following memory block whose free memory is smaller than the free memory of the new memory block.
20. The memory allocation processing device according to claim 15, characterized in that the remaining space allocation module comprises:
an unallocated memory allocation module, configured to, if the current remaining space of the current memory block meets the demand of the requested memory, preferentially allocate memory to the requested memory from the currently unallocated memory space of the current memory block;
a released memory allocation module, configured to, if the currently unallocated memory of the current memory block does not meet the demand of the requested memory, search the space address information of released memory recorded in the data head of the current memory block for a memory space that meets the demand of the requested memory and allocate it.
21. The memory allocation processing device according to claim 20, characterized in that the device further comprises:
a second free memory block searching module, configured to, if no memory space meeting the demand of the requested memory is found in the released memory recorded in the data head of the current memory block, successively search the memory pool, based on the address information recorded in the data head of the current memory block, for a free memory block whose free memory is larger than or equal to the requested memory;
a second free memory block judging module, configured to, if the free memory block is found, judge whether the current remaining space of the current memory block meets the demand of the requested memory by judging whether the current remaining space of the free memory block meets the demand of the requested memory.
22. The memory allocation processing device according to claim 21, characterized in that the device further comprises:
a second new memory block creating unit, configured to, when the judgment result of the second free memory block judging module is that no free memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, create a new memory block in the memory pool and use the new memory block to allocate memory for the requested memory.
23. The memory allocation processing device according to claim 15, characterized in that the device further comprises:
a released memory judging unit, configured to acquire memory release request information and judge whether the memory to be released is larger than the maximum allocatable memory;
a large-block memory locating module, configured to, when the judgment result of the released memory judging unit is that the memory to be released is larger than the maximum allocatable memory, locate the first address of the large-block memory corresponding to the memory to be released according to the memory release request information and the large-block memory information recorded in the data head;
a large-block memory releasing module, configured to release the data information of the located large-block memory and delete the first address information of the large-block memory corresponding to the released memory from the large-block memory address linked list recorded in the data head of the memory block.
24. The memory allocation processing device according to claim 23, characterized in that the device further comprises:
a released memory locating module, configured to, when the judgment result of the released memory judging unit is that the memory to be released is smaller than the maximum allocatable memory, locate, according to the memory release request information, the address of the memory to be released in the corresponding to-be-released memory block of the memory pool, and release the memory space of the released memory of the to-be-released memory block;
a released memory linked-list update module, configured to acquire the memory space address of the released memory generated after the to-be-released memory block releases the memory, and insert the memory space address into the space address information of released memory recorded in the data head of the to-be-released memory block.
25. The memory allocation processing device according to claim 24, characterized in that the released memory linked-list update module comprises:
a contiguous memory judging module, configured to judge whether the current node where the memory space address is located, the previous node of the current node and the next node of the current node are contiguous memory;
a memory merging module, configured to, if the judgment result of the contiguous memory judging module is that they are contiguous memory, merge the current node, the previous node of the current node and the next node of the current node into a new node of one contiguous memory and record it in the space address information of released memory recorded in the data head of the to-be-released memory block; otherwise, insert the memory space address as a new node into the space address information of released memory recorded in the data head of the to-be-released memory block.
26. The memory allocation processing device according to claim 25, characterized in that the device further comprises:
a new node judging module, configured to judge whether the new node is located in the first memory block of the memory pool, and, if the new node is not located in the first memory block of the memory pool, judge whether the free memory of the to-be-released memory block is equal to the maximum allocatable memory;
a node releasing module, configured to, when the judgment result of the new node judging module is that the free memory of the to-be-released memory block is equal to the maximum allocatable memory, release the memory space of the data storage area of the to-be-released memory block and delete the node of the to-be-released memory block.
27. The memory allocation processing device according to claim 26, characterized in that the device further comprises:
a first memory block ordering module, configured to, when the judgment result of the new node judging module is that the free memory of the to-be-released memory block is not equal to the maximum allocatable memory, compare the free memory of the memory blocks in the memory pool and re-arrange the memory blocks in descending order of free memory.
28. The memory allocation processing device according to any one of claims 15-16 and 20-27, characterized in that the device further comprises:
a memory pool destruction module, configured to, when it is detected that the header address of the next memory block pointed to by the first memory block of the memory pool is null and the free memory of the first memory block is equal to the maximum allocatable memory of the memory block, destroy the memory pool and release the memory pool space.
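For illustration only, the descending-order insertion of a newly created memory block described in claims 5 and 19 might be sketched in C as follows, again using the hypothetical mem_pool and mem_block structures introduced after the device description; this is a simplified reading of the claims, not the application's actual implementation.

/* Insert a newly created block so that the pool's linked list stays ordered
 * from the largest to the smallest free memory. */
void insert_block_sorted(mem_pool *pool, mem_block *new_blk)
{
    mem_block *cur = pool->first, *prev = NULL;

    /* Follow the "next memory block" pointer of each data head until the
     * first block with less free memory than the new block is reached. */
    while (cur != NULL && cur->free_bytes >= new_blk->free_bytes) {
        prev = cur;
        cur = cur->next;
    }

    new_blk->next = cur;        /* following block: smaller free memory         */
    if (prev != NULL)
        prev->next = new_blk;   /* previous block: larger free memory           */
    else
        pool->first = new_blk;  /* new block currently has the most free memory */
}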
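Similarly, the merging of a freed range with adjacent freed ranges described in claims 10-11 and 24-25 can be sketched as below. The free_range list and its fields are a hypothetical, simplified form of the "space address information of released memory" kept in a block's data head, used only to make the coalescing step concrete.

#include <stdlib.h>

/* Hypothetical node of a block's released-space list: one contiguous freed
 * range inside the block's data storage area. */
typedef struct free_range {
    char  *start;               /* first byte of the freed range   */
    size_t len;                 /* length of the freed range       */
    struct free_range *prev;
    struct free_range *next;
} free_range;

/* Insert the freed range [start, start + len) into the address-ordered list
 * and merge it with the previous and/or next node when they are contiguous
 * in memory; returns the (possibly new) list head. */
free_range *insert_freed_range(free_range *head, char *start, size_t len)
{
    free_range *node = malloc(sizeof *node);
    if (node == NULL)
        return head;
    node->start = start;
    node->len = len;

    /* Link the new node in address order. */
    free_range *cur = head, *prev = NULL;
    while (cur != NULL && cur->start < start) { prev = cur; cur = cur->next; }
    node->prev = prev;
    node->next = cur;
    if (prev != NULL) prev->next = node; else head = node;
    if (cur != NULL) cur->prev = node;

    /* Merge with the next node if the two ranges touch. */
    if (node->next != NULL && node->start + node->len == node->next->start) {
        free_range *n = node->next;
        node->len += n->len;
        node->next = n->next;
        if (n->next != NULL) n->next->prev = node;
        free(n);
    }

    /* Merge with the previous node if it ends where this range begins. */
    if (node->prev != NULL && node->prev->start + node->prev->len == node->start) {
        free_range *p = node->prev;
        p->len += node->len;
        p->next = node->next;
        if (node->next != NULL) node->next->prev = p;
        free(node);
    }

    return head;
}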
CN201610116806.6A 2016-03-02 2016-03-02 A kind of processing method and processing device of Memory Allocation Pending CN107153618A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610116806.6A CN107153618A (en) 2016-03-02 2016-03-02 A kind of processing method and processing device of Memory Allocation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610116806.6A CN107153618A (en) 2016-03-02 2016-03-02 A kind of processing method and processing device of Memory Allocation

Publications (1)

Publication Number Publication Date
CN107153618A true CN107153618A (en) 2017-09-12

Family

ID=59792015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610116806.6A Pending CN107153618A (en) 2016-03-02 2016-03-02 A kind of processing method and processing device of Memory Allocation

Country Status (1)

Country Link
CN (1) CN107153618A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897603A (en) * 2018-07-03 2018-11-27 郑州云海信息技术有限公司 A kind of memory source management method and device
CN109144890A (en) * 2018-07-02 2019-01-04 珠海格力电器股份有限公司 A kind of date storage method and device
CN109324870A (en) * 2018-09-20 2019-02-12 郑州云海信息技术有限公司 A kind of method and apparatus for deleting the snapshot disk of virtual machine
CN109710408A (en) * 2018-12-24 2019-05-03 杭州迪普科技股份有限公司 EMS memory management process and device
CN109739629A (en) * 2018-12-29 2019-05-10 中国银联股份有限公司 A kind of system multithread scheduling method and device
CN109947560A (en) * 2019-02-25 2019-06-28 深圳市创联时代科技有限公司 A kind of EMS memory management process
CN110532198A (en) * 2019-09-09 2019-12-03 成都西山居互动娱乐科技有限公司 A kind of method and device of memory allocation
WO2019237811A1 (en) * 2018-06-13 2019-12-19 华为技术有限公司 Memory allocation method and apparatus for neural network
CN110647397A (en) * 2019-09-16 2020-01-03 北京方研矩行科技有限公司 Method and system for dynamically controlling total memory amount based on block memory
CN110928803A (en) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 Memory management method and device
CN111027290A (en) * 2019-11-22 2020-04-17 贝壳技术有限公司 Data report naming method and device, electronic equipment and storage medium
CN112214313A (en) * 2020-09-22 2021-01-12 深圳云天励飞技术股份有限公司 Memory allocation method and related equipment
WO2021087662A1 (en) * 2019-11-04 2021-05-14 深圳市欢太科技有限公司 Memory allocation method and apparatus, terminal, and computer readable storage medium
CN113051066A (en) * 2019-12-27 2021-06-29 阿里巴巴集团控股有限公司 Memory management method, device, equipment and storage medium
CN113296961A (en) * 2021-07-22 2021-08-24 广州中望龙腾软件股份有限公司 GPU-based dynamic memory allocation method and device and memory linked list
CN114327868A (en) * 2021-12-08 2022-04-12 中汽创智科技有限公司 Dynamic memory regulation and control method, device, equipment and medium
CN114518962A (en) * 2022-04-15 2022-05-20 北京奥星贝斯科技有限公司 Memory management method and device
CN116361234A (en) * 2023-06-02 2023-06-30 深圳中安辰鸿技术有限公司 Memory management method, device and chip
CN116991595A (en) * 2023-09-27 2023-11-03 太初(无锡)电子科技有限公司 Memory allocation method, device, equipment and medium based on Bitmap
CN117130949A (en) * 2023-08-28 2023-11-28 零束科技有限公司 Memory management method, device, electronic equipment and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1635482A (en) * 2003-12-29 2005-07-06 北京中视联数字系统有限公司 A memory management method for embedded system
CN101937398A (en) * 2010-09-14 2011-01-05 中兴通讯股份有限公司 Configuration method and device for built-in system memory pool
CN102915276A (en) * 2012-09-25 2013-02-06 武汉邮电科学研究院 Memory control method for embedded systems
US20150074066A1 (en) * 2013-09-06 2015-03-12 Sap Ag Database operations on a columnar table database
US20150169454A1 (en) * 2013-11-19 2015-06-18 Wins Co., Ltd. Packet transfer system and method for high-performance network equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gao Zhongyi et al. (eds.): "编译原理及编译程序构造" (Principles of Compilation and Compiler Construction), 31 December 1990, Beihang University Press *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597616B (en) * 2018-06-13 2022-07-29 华为技术有限公司 Memory allocation method and device for neural network
WO2019237811A1 (en) * 2018-06-13 2019-12-19 华为技术有限公司 Memory allocation method and apparatus for neural network
CN110597616A (en) * 2018-06-13 2019-12-20 华为技术有限公司 Memory allocation method and device for neural network
CN109144890B (en) * 2018-07-02 2021-02-05 珠海格力电器股份有限公司 Data storage method and device
CN109144890A (en) * 2018-07-02 2019-01-04 珠海格力电器股份有限公司 A kind of date storage method and device
CN108897603A (en) * 2018-07-03 2018-11-27 郑州云海信息技术有限公司 A kind of memory source management method and device
CN108897603B (en) * 2018-07-03 2021-10-01 郑州云海信息技术有限公司 Memory resource management method and device
CN110928803B (en) * 2018-09-19 2023-04-25 阿里巴巴集团控股有限公司 Memory management method and device
CN110928803A (en) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 Memory management method and device
CN109324870A (en) * 2018-09-20 2019-02-12 郑州云海信息技术有限公司 A kind of method and apparatus for deleting the snapshot disk of virtual machine
CN109710408A (en) * 2018-12-24 2019-05-03 杭州迪普科技股份有限公司 EMS memory management process and device
CN109739629B (en) * 2018-12-29 2023-04-25 中国银联股份有限公司 System multithreading scheduling method and device
CN109739629A (en) * 2018-12-29 2019-05-10 中国银联股份有限公司 A kind of system multithread scheduling method and device
CN109947560A (en) * 2019-02-25 2019-06-28 深圳市创联时代科技有限公司 A kind of EMS memory management process
CN110532198A (en) * 2019-09-09 2019-12-03 成都西山居互动娱乐科技有限公司 A kind of method and device of memory allocation
CN110532198B (en) * 2019-09-09 2023-08-08 成都西山居互动娱乐科技有限公司 Storage space allocation method and device
CN110647397A (en) * 2019-09-16 2020-01-03 北京方研矩行科技有限公司 Method and system for dynamically controlling total memory amount based on block memory
CN110647397B (en) * 2019-09-16 2022-06-03 北京方研矩行科技有限公司 Method and system for dynamically controlling total memory amount based on block memory
WO2021087662A1 (en) * 2019-11-04 2021-05-14 深圳市欢太科技有限公司 Memory allocation method and apparatus, terminal, and computer readable storage medium
CN111027290A (en) * 2019-11-22 2020-04-17 贝壳技术有限公司 Data report naming method and device, electronic equipment and storage medium
CN113051066A (en) * 2019-12-27 2021-06-29 阿里巴巴集团控股有限公司 Memory management method, device, equipment and storage medium
CN112214313A (en) * 2020-09-22 2021-01-12 深圳云天励飞技术股份有限公司 Memory allocation method and related equipment
WO2022062833A1 (en) * 2020-09-22 2022-03-31 深圳云天励飞技术股份有限公司 Memory allocation method and related device
CN113296961A (en) * 2021-07-22 2021-08-24 广州中望龙腾软件股份有限公司 GPU-based dynamic memory allocation method and device and memory linked list
CN114327868A (en) * 2021-12-08 2022-04-12 中汽创智科技有限公司 Dynamic memory regulation and control method, device, equipment and medium
CN114327868B (en) * 2021-12-08 2023-12-26 中汽创智科技有限公司 Memory dynamic regulation and control method, device, equipment and medium
CN114518962A (en) * 2022-04-15 2022-05-20 北京奥星贝斯科技有限公司 Memory management method and device
CN116361234A (en) * 2023-06-02 2023-06-30 深圳中安辰鸿技术有限公司 Memory management method, device and chip
CN116361234B (en) * 2023-06-02 2023-08-08 深圳中安辰鸿技术有限公司 Memory management method, device and chip
CN117130949A (en) * 2023-08-28 2023-11-28 零束科技有限公司 Memory management method, device, electronic equipment and storage medium
CN116991595A (en) * 2023-09-27 2023-11-03 太初(无锡)电子科技有限公司 Memory allocation method, device, equipment and medium based on Bitmap
CN116991595B (en) * 2023-09-27 2024-02-23 太初(无锡)电子科技有限公司 Memory allocation method, device, equipment and medium based on Bitmap

Similar Documents

Publication Publication Date Title
CN107153618A (en) A kind of processing method and processing device of Memory Allocation
US10652319B2 (en) Method and system for forming compute clusters using block chains
US11637889B2 (en) Configuration recommendation for a microservice architecture
US20110138396A1 (en) Method and system for data distribution in high performance computing cluster
CN109873868A (en) A kind of computing capability sharing method, system and relevant device
CN112667414A (en) Message queue-based message consumption method and device, computer equipment and medium
CN109478147B (en) Adaptive resource management in distributed computing systems
US9424096B2 (en) Task allocation in a computer network
CN101977242A (en) Layered distributed cloud computing architecture and service delivery method
JP2022549527A (en) Data processing method, apparatus, distributed dataflow programming framework and related components
CN103248696B (en) Dynamic configuration method for virtual resource under a kind of cloud computing environment
CN107291536B (en) Application task flow scheduling method in cloud computing environment
CN111274033B (en) Resource deployment method, device, server and storage medium
CN105516086A (en) Service processing method and apparatus
CN107291544A (en) Method and device, the distributed task scheduling execution system of task scheduling
CN112559182A (en) Resource allocation method, device, equipment and storage medium
US20060031444A1 (en) Method for assigning network resources to applications for optimizing performance goals
CN109743202B (en) Data management method, device and equipment and readable storage medium
CN111459641A (en) Cross-machine-room task scheduling and task processing method and device
CN112600761A (en) Resource allocation method, device and storage medium
CN116069493A (en) Data processing method, device, equipment and readable storage medium
CN105427149A (en) Cross-border e-commerce BPO service method and device based on SOA expansion framework
CN112003930A (en) Task allocation method, device, equipment and storage medium
CN111858035A (en) FPGA equipment allocation method, device, equipment and storage medium
CN107454137B (en) Method, device and equipment for on-line business on-demand service

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20201013

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20201013

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Greater Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170912