Embodiments
To enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The processing method for memory allocation described herein is described in detail below with reference to the accompanying drawings. Fig. 1 is a flowchart of an embodiment of a processing method for memory allocation provided by the present application. Although the present application provides the method operation steps shown in the following embodiments or drawings, the method may, based on routine practice or without creative effort, include more or fewer operation steps. For steps that have no logically necessary causal relationship, the execution order of those steps is not limited to the execution order provided in the embodiments of the present application. When the method is executed in an actual memory allocation process or by a device, the steps may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the order shown in the embodiments or drawings.
An embodiment of the processing method for memory allocation provided by the present application is shown in Fig. 1. The method may include:

S1: Obtaining memory request information, and judging whether the requested memory is smaller than the maximum allocatable memory of a memory block in an established memory pool.
A long-connection process typically requests small memory spaces from a server frequently. In this case, the server may continuously receive, within a short time, memory request information sent by the long-connection process; the memory request information may include information such as the size of the requested memory space. In this embodiment of the present application, the server may establish a memory pool in memory, and the memory pool may serve such a process that frequently requests small memory spaces until the process is completed. The memory pool may be a memory space pre-divided in the server memory. The established memory pool may be configured as follows:

SS1: dividing the memory pool into at least one memory block of a preset fixed size, each memory block including a data head and a data storage area;

SS2: configuring the memory blocks in the memory pool to be arranged in descending order of the free memory of the memory blocks.
The memory pool may be divided into at least one memory block, and the size of each memory block is a preset value, such as 10K or 1M. When the memory pool includes two or more memory blocks, the memory blocks in the whole memory pool form a memory block linked list. Fig. 2 is a schematic structural diagram of an embodiment of a memory block provided by the present application. As shown in Fig. 2, the memory block may include a data head and a data storage area, where the data storage area is used to store data, and the data head is used to store data header information. The data header information may include:

first-address information of the current memory block;

first-address information of the previous memory block, and first-address information of the next memory block;

address information of the unallocated memory space, and size information of the free memory;

first-address information of the bulk memory, and size information of the bulk memory;

address information of the released memory space, and size information of the released memory.
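The data head described above can be pictured as a C structure. The following is a minimal sketch under the field list above; all names (`block_head`, `bulk_node`, and so on) are hypothetical, since the application does not specify a concrete layout.

```c
#include <assert.h>
#include <stddef.h>

/* One node in the chain of bulk memories outside the pool (Fig. 2). */
typedef struct bulk_node {
    struct bulk_node *next;   /* next bulk memory in the chain */
    char             *addr;   /* first address of this bulk memory */
    size_t            size;   /* size of this bulk memory */
} bulk_node;

/* Data head of a memory block, mirroring the fields listed above. */
typedef struct block_head {
    char              *self_addr;     /* first address of the current block */
    struct block_head *prev;          /* previous memory block */
    struct block_head *next;          /* next memory block */
    char              *unalloc_addr;  /* start of the unallocated space */
    size_t             free_size;     /* free memory: unallocated + released */
    bulk_node         *bulk_list;     /* bulk memories outside the pool */
    char              *released_addr; /* released memory space address info */
    size_t             released_size; /* released memory size info */
} block_head;
```

The struct keeps the previous/next pointers that form the memory block linked list and the bulk-memory chain head, so every block can reach both its neighbours and the bulk memories.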
The first-address information of the current memory block may store the physical address of the current memory block, for example the first address "0X8049320" of the current block. As stated above, when the memory pool includes multiple memory blocks, the physical addresses of the memory blocks are contiguous; the first-address information of the previous memory block and of the next memory block may respectively store the first addresses of the previous and next memory blocks of the current block. The first address of the next memory block may be the address obtained by offsetting the first address of the current memory block by the preset fixed size of a memory block. As shown in Fig. 2, the free memory of a memory block may include unallocated memory and released memory. The unallocated memory is the portion of the data storage area of the memory block in which no data has yet been stored; the released memory is the portion of the data storage area in which data has been stored but the stored data has been marked as released. The bulk memory may be one or more memory spaces outside the memory pool. In this embodiment, the size of a bulk memory space is not fixed and is generally larger than the maximum allocatable memory of a memory block, where the maximum allocatable memory may be the maximum memory that a memory block can allocate. When there are multiple bulk memories, the multiple bulk memories may be linked together in the form of a linked list; as shown in Fig. 2, the next-memory address of bulk memory 1 points to bulk memory 2. The bulk-memory first-address information may store the first addresses of the bulk memories outside the memory pool, and the bulk-memory size information may store the memory-space size of the corresponding bulk memory. Through the bulk-memory first-address information, the available bulk memories can be traversed, and the memory size of each corresponding bulk memory can be learned.
A process that frequently requests small memory spaces may also occasionally request a larger memory; in that case, the memory blocks of the memory pool cannot satisfy the memory request of the process. In this embodiment, whether to allocate memory-block memory or bulk memory can be determined by judging whether the requested memory is smaller than the maximum allocatable memory of a memory block.
S2: When it is judged that the requested memory is larger than the maximum allocatable memory, allocating memory from the bulk memory to the requested memory according to the bulk memory information recorded in the data head of the memory block.

The maximum allocatable memory in this embodiment is usually the size of the data storage area of a memory block; generally, the size of the maximum allocatable memory is a fixed preset value. When the judgment result is that the requested memory is larger than the maximum allocatable memory of a memory block, memory may be allocated from the bulk memory to the requested memory according to the bulk memory information recorded in the data head of the current memory block. The data header information stores the addresses and the corresponding size information of the available bulk memories outside the memory pool. According to the size of the requested memory, the bulk memories are traversed to determine a bulk memory from which the requested memory can be allocated, for example the first bulk memory in the bulk-memory first-address linked list whose memory is larger than the requested memory.
It should be noted that, after part of a bulk memory is allocated to the requested memory, the memory space of that bulk memory is reduced, and the first address of the bulk memory is offset correspondingly. In this embodiment, after memory is allocated from the bulk memory to the requested memory, the method may include the following steps:

updating the first address of the bulk memory;

updating the first-address information and the size information of the bulk memory into the data heads of the memory blocks in the memory pool.

Allocating the requested memory from the bulk memory outside the memory pool solves the problem that the small memory blocks in the memory pool occasionally cannot satisfy a memory request. In addition, after the bulk memory is allocated, the address and size information of the bulk memory is updated into each memory block, which ensures that the memory blocks store the latest bulk-memory address and size information.
S3: Otherwise, judging whether the requested memory is larger than the free memory of the current memory block, and when it is judged that the requested memory is smaller than the free memory of the current memory block, judging whether the current remaining space of the current memory block satisfies the demand of the requested memory.

When the judgment result is that the requested memory is smaller than the maximum allocatable memory, it may be determined that the memory is to be allocated from the memory pool. Two cases need to be distinguished: in one case, the requested memory is smaller than the free memory of the current memory block; in the other, the requested memory is larger than or equal to the free memory of the current memory block. In these two different cases, the memory is allocated in different ways. Fig. 3 is a flowchart of an embodiment of a memory processing method provided by the present application for the case where the requested memory is larger than the free memory. As shown in Fig. 3, the method includes:
S11: When it is judged that the requested memory is larger than or equal to the free memory of the current memory block, successively searching the memory pool, based on the memory block linked list recorded in the data head of the current memory block, for an available memory block whose free memory is larger than or equal to the requested memory.

When the free memory of the current memory block cannot satisfy the requested memory, the next available memory block whose free memory is larger than or equal to the requested memory may be searched for in the memory pool based on the memory block linked-list information recorded in the data head of the current memory block.

In another implementation scenario of the present application, when the memory block linked list of the memory pool is configured to be arranged in descending order of the free memory of the memory blocks, whether the free memory of the previous memory block of the current memory block is larger than or equal to the requested memory may be judged based on the memory block linked-list information recorded in the data head of the current memory block. In this embodiment, searching a sorted memory block linked list for an available memory block whose free memory is larger than or equal to the requested memory can save search time and improve search efficiency.
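The efficiency gain of the sorted list can be seen in a short sketch: because the blocks are kept in descending order of free memory, the head of the list has the most free memory, so a single comparison decides whether any block in the pool can serve the request. Names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

typedef struct blk {
    struct blk *next;
    size_t      free_size;   /* free memory of this block */
} blk;

/* In a list sorted by free memory (largest first), the head block is the
 * best candidate: if it cannot hold the request, no block can. */
static blk *find_available(blk *head, size_t request) {
    if (head != NULL && head->free_size >= request)
        return head;
    return NULL;   /* no block fits: a new block must be created (S21) */
}
```

An unsorted list would instead require a full traversal in the worst case, which is the search time the sorted arrangement saves.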
S12: Correspondingly, if the available memory block is found, judging whether the current remaining space of the current memory block satisfies the memory request includes judging whether the current remaining space of the available memory block satisfies the demand of the requested memory.

If an available memory block whose free memory is larger than or equal to the requested memory is found, whether the current remaining space of that available memory block satisfies the memory request may be judged.
As shown in Fig. 3, in another embodiment of the present application, the method may further include:

S21: If no available memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, creating a new memory block in the memory pool, and using the new memory block to allocate memory for the requested memory.

If no available memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, a new memory block may be created in the memory pool and used to allocate memory for the requested memory. The new memory block in this embodiment may be created at the end of the memory pool, with the last memory block of the memory pool pointing to the new memory block. Fig. 4 is a schematic structural diagram of an embodiment of creating a new memory block provided by the present application. As shown in Fig. 4, when the new memory block is created, part of the data storage area of the new memory block is first allocated to the requested memory, and the remaining part is used as unallocated memory, ensuring that the maximum allocatable memory of the new memory block is the same size as that of the other memory blocks. The bulk-memory first-address information and bulk-memory size information in the data head of the new memory block may be copied from the corresponding stored information of the previous memory block. In this implementation, the memory allocation for the requested memory is completed at the same time as the new memory block is created, which reduces operation steps and improves allocation efficiency.
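S21 — appending a fixed-size block at the end of the pool and satisfying the request from it in the same step — can be sketched as follows. The `DATA_AREA` size, type names, and function name are assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define DATA_AREA 4096   /* assumed preset fixed data-area size per block */

typedef struct nblk {
    struct nblk *next;
    size_t       unalloc;          /* remaining unallocated bytes */
    char         data[DATA_AREA];  /* data storage area */
} nblk;

/* Append a new block after `tail`, allocate `request` bytes from its data
 * area (written to *out), and keep the remainder as unallocated memory, so
 * allocation completes while the block is created. */
static nblk *append_block(nblk *tail, size_t request, char **out) {
    if (request > DATA_AREA)
        return NULL;                    /* would exceed max allocatable */
    nblk *nb = calloc(1, sizeof *nb);
    if (nb == NULL)
        return NULL;
    *out = nb->data;                    /* first part serves the request */
    nb->unalloc = DATA_AREA - request;  /* remainder stays unallocated */
    if (tail != NULL)
        tail->next = nb;                /* last block points to new block */
    return nb;
}
```

Copying the bulk-memory information from the previous block's data head, as the paragraph above describes, would be one extra assignment after `append_block` returns.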
As shown in Fig. 3, in another embodiment of the present application, the method may further include:

S31: Based on the next memory block indicated by the data head of each memory block, successively comparing the free memory of the memory blocks in the memory pool with the free memory of the new memory block, and inserting the new memory block into the linked list between the previous memory block in the memory pool whose free memory is larger than the free memory of the new memory block and the next memory block whose free memory is smaller than the free memory of the new memory block.

In another embodiment of the present application, after the new memory block has been created and the memory allocation for the requested memory has been completed, the new memory block may be inserted into the appropriate position in the memory pool, ensuring that the memory blocks in the memory pool are arranged in descending order of free memory. Specifically, this may be accomplished as follows: the free memory of the memory blocks in the memory pool is successively compared with the free memory of the new memory block, and the new memory block is inserted into the linked list between the adjacent memory blocks of which the previous one has free memory larger than, and the next one has free memory smaller than, the free memory of the new memory block.
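The sorted insertion of S31 is a standard linked-list operation; a sketch under hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

typedef struct sblk {
    struct sblk *next;
    size_t       free_size;
} sblk;

/* Insert `node` so the list stays in descending order of free memory:
 * walk until the next block's free memory is no longer larger than the
 * node's, then splice the node in between. */
static sblk *sorted_insert(sblk *head, sblk *node) {
    if (head == NULL || node->free_size >= head->free_size) {
        node->next = head;   /* node has the most free memory: new head */
        return node;
    }
    sblk *cur = head;
    while (cur->next != NULL && cur->next->free_size > node->free_size)
        cur = cur->next;     /* cur is the last block larger than node */
    node->next = cur->next;
    cur->next = node;
    return head;
}
```

This is O(n) per insertion, which matches the "successively comparing" traversal described in S31.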
S4: If the current remaining space of the current memory block satisfies the demand of the requested memory, allocating memory from the current remaining space of the current memory block to the requested memory.

If the current remaining space of the current memory block satisfies the demand of the requested memory, that is, when the current remaining space of the current memory block is larger than or equal to the requested memory, memory may be allocated from the current remaining space of the current memory block to the requested memory. Specifically, Fig. 5 is a flowchart of an embodiment of a method for allocating memory from free memory provided by the present application. As shown in Fig. 5, if the current remaining space of the current memory block satisfies the demand of the requested memory, allocating memory from the current remaining space of the current memory block to the requested memory may include:
S101: If the current remaining space of the current memory block satisfies the demand of the requested memory, preferentially allocating memory to the requested memory from the current unallocated memory space of the current memory block.

If the current remaining space of the current memory block satisfies the demand of the requested memory, memory may preferentially be allocated to the requested memory from the current unallocated memory space of the current memory block. Specifically, it may first be judged whether the requested memory is smaller than or equal to the unallocated memory in the free memory.

If the requested memory is smaller than or equal to the unallocated memory in the free memory, memory may be allocated from the unallocated memory space to the requested memory. The unallocated memory is the part of the free memory of a memory block in which data has never been stored, whereas the released memory in the free memory, although usable, still holds the released stored data: before data can be stored in the released memory, the stored data must first be erased and the storage space reallocated. Therefore, in this embodiment, the unallocated memory in the free memory may preferentially be allocated to the requested memory.
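The preference in S101 pays off because allocating from the unallocated region is just a pointer bump, with no erase step. A sketch, with assumed field names:

```c
#include <assert.h>
#include <stddef.h>

typedef struct ublk {
    char  *unalloc_addr;   /* start of the unallocated space */
    size_t unalloc_size;   /* bytes never yet written */
} ublk;

/* Serve the request from the never-written region: advance the pointer
 * past the allocated bytes. Returns NULL when the unallocated memory is
 * too small, in which case S102 falls back to the released memory. */
static char *alloc_unallocated(ublk *b, size_t request) {
    if (request > b->unalloc_size)
        return NULL;
    char *p = b->unalloc_addr;
    b->unalloc_addr += request;
    b->unalloc_size -= request;
    return p;
}
```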
S102: If the current unallocated memory of the current memory block does not satisfy the demand of the requested memory, searching the released memory space address information recorded in the data head of the current memory block for a memory space that satisfies the demand of the requested memory, and allocating it.

If the current unallocated memory of the current memory block does not satisfy the space size of the requested memory, a memory space that satisfies the memory request may be searched for in the released memory recorded in the data head of the current memory block and allocated. Specifically, it may be judged whether the released memory in the free memory includes a released sub-memory whose memory is larger than or equal to the requested memory. In this embodiment, because memory is used and released randomly and at inconsistent times, the released memory is usually not contiguous, and the released memory may include multiple released sub-memories. The memory sizes of the released sub-memories are generally also different; the released memory can be searched for a released sub-memory whose memory is larger than or equal to the requested memory.

It should be noted that, when the released memory includes multiple released sub-memories, the released memory space address information may include the first-address information of the multiple released sub-memories.
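S102 can be sketched as a first-fit walk over the released sub-memories. Each sub-memory must individually hold the whole request, since the request needs contiguous space. Names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

typedef struct rel_sub {
    struct rel_sub *next;   /* next released sub-memory */
    char           *addr;   /* first address of this sub-memory */
    size_t          size;
} rel_sub;

/* First-fit over the released sub-memories: carve the request out of the
 * first sub-memory large enough to hold it contiguously. The old data is
 * simply overwritten on next use, as described above. */
static char *alloc_released(rel_sub *head, size_t request) {
    for (rel_sub *r = head; r != NULL; r = r->next) {
        if (r->size >= request) {
            char *p = r->addr;
            r->addr += request;
            r->size -= request;
            return p;
        }
    }
    return NULL;   /* no sub-memory fits: search the next block (S201) */
}
```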
As shown in Fig. 5, in another embodiment of the present application, the method may further include:

S201: If no memory space that satisfies the memory request is found in the released memory recorded in the data head of the current memory block, successively searching the memory pool, based on the address information recorded in the data head of the current memory block, for an available memory block whose free memory is larger than or equal to the requested memory.

Because the released sub-memories may be non-contiguous, while the requested memory needs a contiguous memory space, the released sub-memories included in the released memory may be unable to satisfy the demand of the requested memory. Therefore, when the judgment result is that the released memory in the free memory does not include a released sub-memory whose memory is larger than or equal to the requested memory, the next available memory block whose free memory is larger than or equal to the requested memory may be searched for in the memory pool based on the memory block linked list recorded in the data head of the current memory block.

S202: Correspondingly, if the available memory block is found, judging whether the current remaining space of the current memory block satisfies the memory request includes judging whether the current remaining space of the available memory block satisfies the demand of the requested memory.

If the available memory block is found, whether the current remaining space of the available memory block satisfies the memory request may be judged. If the current remaining space of the available memory block satisfies the memory request, steps S101-S102 may be repeated; if the current remaining space of the available memory block cannot satisfy the memory request, steps S11-S12 may be repeated. The specific implementations are not repeated here.
As shown in Fig. 5, in another embodiment of the present application, the method may further include:

S301: If no available memory block whose free memory is larger than or equal to the requested memory is found in the memory pool, creating a new memory block in the memory pool, and using the new memory block to allocate memory for the requested memory.

For the implementation of S301, reference may be made to the implementation of S21, which is not repeated here.
In another embodiment of the present application, memory release may be performed on the bulk memory and on the memory blocks in the memory pool. Fig. 6 is a flowchart of an embodiment of a memory release method provided by the present application. As shown in Fig. 6, the method includes:

S1001: Obtaining memory release request information, and judging whether the memory to be released is larger than the maximum allocatable memory.

The memory release request information may include the address information and the size of the memory requested to be released. Because the memory allocation and release of the memory pool exist synchronously with the worker threads, when a worker thread needs to release a certain block of memory, it first judges whether the memory belongs to a memory block in the memory pool. Specifically, whether the memory to be released is larger than the maximum allocatable memory may first be judged.
S1002: When the judgment result is that the memory to be released is larger than the maximum allocatable memory, locating the first address of the bulk memory corresponding to the memory to be released according to the memory release request information and the bulk memory information recorded in the data head.

When it is judged that the memory to be released is larger than the maximum allocatable memory, it may be determined that a bulk memory is requested to be released. In this case, according to the address information of the memory to be released in the memory release request information and the bulk memory information recorded in the data head, the first address of the bulk memory may be located through the bulk-memory first-address information linked in the data head.

S1003: Releasing the data information of the located bulk memory, and deleting the first-address information of the bulk memory corresponding to the released memory from the bulk-memory address linked list recorded in the data heads of the memory blocks.

After the first address of the bulk memory is located, the data information of the located bulk memory is released according to the size information of the memory to be released. After the bulk memory is released, the first-address information of the bulk memory corresponding to the released memory may be deleted from the bulk-memory address linked list recorded in the data heads of the memory blocks, so that the data heads keep the latest bulk-memory address information.
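S1002-S1003 amount to locating a node in the bulk chain by its first address, unlinking it, and releasing it. A sketch with hypothetical names; a real implementation would also propagate the updated chain into every block's data head, as described above.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct bnode {
    struct bnode *next;
    char         *addr;   /* first address of this bulk memory */
    size_t        size;
} bnode;

/* Locate the bulk memory by its first address, delete its entry from the
 * chain, and free the node. Returns the (possibly new) chain head. */
static bnode *release_bulk(bnode *head, char *addr) {
    bnode **link = &head;
    while (*link != NULL) {
        bnode *b = *link;
        if (b->addr == addr) {
            *link = b->next;   /* unlink: delete from the address list */
            free(b);           /* release the located bulk memory */
            break;
        }
        link = &b->next;
    }
    return head;
}
```

Using a pointer-to-pointer (`link`) handles the head and interior nodes uniformly, so no special case is needed when the first bulk memory is the one released.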
As shown in Fig. 6, in another embodiment of the present application, the method may further include:

S2001: When the judgment result is that the memory to be released is smaller than the maximum allocatable memory, locating, according to the memory release request information, the address of the memory to be released in the corresponding to-be-released memory block of the memory pool, and releasing the memory space of the memory to be released in the to-be-released memory block.

When it is judged that the memory to be released is smaller than the maximum allocatable memory, the address of the memory to be released in the corresponding to-be-released memory block of the memory pool may be located according to the address information in the memory release request information, and the memory space of the memory to be released in the to-be-released memory block may be released.

S2002: Obtaining the memory space address of the released memory generated after the to-be-released memory block releases the memory, and inserting the memory space address into the released memory space address information recorded in the data head of the to-be-released memory block.

After the address of the memory to be released in the to-be-released memory block of the memory pool is located, the located memory to be released can be marked as released memory. It should be noted that, because the memory blocks in the memory pool are of a preset size, the memory in a memory block cannot be erased by the server in the way the stored information in a bulk memory is; it can only be overwritten. Therefore, the memory to be released can be marked as released memory, and the next time the released memory is used, the stored information in the released memory can be overwritten.
In this embodiment, inserting the memory space address into the released memory space address information recorded in the data head of the to-be-released memory block may include:

judging whether the current node where the memory space address is located forms contiguous memory with the previous node of the current node and with the next node of the current node;

if the judgment result is that they are contiguous memory, merging the current node, the previous node of the current node and the next node of the current node into a new node of one contiguous memory, and recording the new node into the released memory space address information recorded in the data head of the to-be-released memory block; otherwise, inserting the memory space address as a new node into the released memory space address information recorded in the data head of the to-be-released memory block.

The current node in this embodiment refers to the currently released memory, and the previous node and the next node refer respectively to the released memory preceding and following the currently released memory. By checking whether the address of the currently released space adjoins the adjacent released spaces, and, if so, merging the currently released space with the adjacent released spaces into one contiguous released space, the information of the released space can subsequently be found accurately and conveniently.
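The node-merging rule above is the classic coalescing insert into an address-ordered free list. The following is a sketch under assumed names, tracking released spans as (offset, length) pairs inside a block; it is not the application's exact implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct span {
    struct span *next;
    size_t       off;   /* offset of the released span inside the block */
    size_t       len;
} span;

/* Insert a released span keeping the list ordered by offset, then merge it
 * with the next and/or previous node when the spans are contiguous. */
static span *insert_span(span *head, size_t off, size_t len) {
    span *node = calloc(1, sizeof *node);
    if (node == NULL)
        return head;
    node->off = off;
    node->len = len;

    span *prev = NULL, *cur = head;
    while (cur != NULL && cur->off < off) { prev = cur; cur = cur->next; }
    node->next = cur;
    if (prev != NULL) prev->next = node; else head = node;

    /* merge with the next node if the current span adjoins it */
    if (node->next != NULL && node->off + node->len == node->next->off) {
        span *n = node->next;
        node->len += n->len;
        node->next = n->next;
        free(n);
    }
    /* merge with the previous node if it adjoins the current span */
    if (prev != NULL && prev->off + prev->len == node->off) {
        prev->len += node->len;
        prev->next = node->next;
        free(node);
    }
    return head;
}
```

Releasing the spans (0,10), (20,10) and then (10,10) leaves a single contiguous node covering the whole range, which is the three-way merge the embodiment describes.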
As shown in Fig. 6, in another embodiment of the present application, the method may further include:

S3001: Judging whether the new node is located in the first memory block of the memory pool; if the new node is not located in the first memory block of the memory pool, judging whether the free memory of the to-be-released memory block is equal to the maximum allocatable memory;

S3002: If the judgment result is that the free memory of the to-be-released memory block is equal to the maximum allocatable memory, releasing the data storage area memory space of the to-be-released memory block, and deleting the node of the to-be-released memory block.

When the new node is located in the first memory block of the memory pool, the memory release flow ends. When the new node is located in a non-first memory block of the memory pool, whether the free memory of the to-be-released memory block is equal to the maximum allocatable memory may be judged. When the free memory of the to-be-released memory block is equal to the maximum allocatable memory, it may be determined that, after this memory release, the memory of the memory block is in an idle state. At this point, the space of the data storage area of that memory block may be released, and the node of the to-be-released memory block may be deleted, for example by setting the next-memory-block address in the previous memory block K-1 of the to-be-released memory block K to point to the next memory block K+1 of the to-be-released memory block K.
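The deletion in S3002 is a doubly linked list unlink: block K-1's next pointer is set to K+1, exactly as in the example above. A sketch with hypothetical types:

```c
#include <assert.h>
#include <stddef.h>

typedef struct pblk {
    struct pblk *prev, *next;   /* previous / next memory block */
} pblk;

/* Unlink a fully idle block K: K-1 now points to K+1 and vice versa. */
static void unlink_block(pblk *k) {
    if (k->prev != NULL) k->prev->next = k->next;
    if (k->next != NULL) k->next->prev = k->prev;
}
```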
As shown in Fig. 6, in another embodiment of the present application, the method may further include:

S4001: If the judgment result is that the free memory of the to-be-released memory block is not equal to the maximum allocatable memory, performing size comparison on the free memory of the memory blocks in the memory pool, and then sorting the memory blocks in descending order of free memory.

When the free memory of the to-be-released memory block is not equal to the maximum allocatable memory, size comparison may be performed on the free memory of the memory blocks in the memory pool, and the memory blocks may then be sorted in descending order of free memory, ensuring that the free memory of the memory blocks in the memory pool is arranged in descending order.

In the embodiments of the present application, when it is detected that the next-memory-block header address pointed to by the first memory block of the memory pool is empty, and the free memory of the first memory block is equal to the maximum allocatable memory of the memory block, the memory pool is destroyed and the memory pool space is released.
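The pool-destruction condition above — the first block points at no next block and its free memory equals the maximum allocatable memory — reduces to two comparisons. Names and the `MAX_ALLOC` constant are assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_ALLOC 4096   /* assumed maximum allocatable memory per block */

typedef struct fblk {
    struct fblk *next;        /* next-memory-block pointer */
    size_t       free_size;   /* free memory of this block */
} fblk;

/* True when the pool has shrunk to one entirely idle block and may be
 * destroyed, releasing the memory pool space. */
static bool pool_destroyable(const fblk *first) {
    return first->next == NULL && first->free_size == MAX_ALLOC;
}
```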
With the processing method for memory allocation provided by the present application, memory can be allocated from the memory pool or from the bulk memory according to the size of the requested memory, and within the memory pool, the free memory or a new memory block can be allocated according to the comparison of the requested memory with the free memory. This not only satisfies the demand for the requested memory space, but also makes full use of the space of the memory blocks. The memory pool facilitates centralized management of small memory spaces by the server; in particular, a thread that frequently requests small memory spaces can quickly obtain space from the memory pool, memory fragmentation is greatly reduced, and space utilization is greatly improved.
In another aspect, the present application also provides a memory allocation processing apparatus. Fig. 7 is a schematic diagram of the modular structure of an embodiment of the memory allocation processing apparatus provided by the present application. As shown in Fig. 7, the apparatus 70 may include:
a maximum memory judging module 71, configured to obtain memory request information and judge whether the requested memory is less than the maximum allocatable memory of the memory blocks in the established memory pool;
a bulk memory allocation module 72, configured to, when the maximum memory judging module judges that the requested memory is greater than the maximum allocatable memory, allocate memory from bulk memory to the memory request according to the bulk memory information recorded in the data head of the memory block;
a remaining space judging module 73, configured to judge whether the requested memory is greater than the free memory of the current memory block and, when the requested memory is judged to be less than the free memory of the current memory block, judge whether the current remaining space of the current memory block satisfies the memory request;
a remaining space allocation module 74, configured to, if the remaining space judging module judges that the current remaining space of the current memory block satisfies the memory request, allocate memory to the request from the current remaining space of the current memory block.
It should be noted that the memory pool established in this embodiment may be configured as follows:
the memory pool is divided into at least one memory block of a preset fixed size, each comprising a data head and a data storage area;
the memory blocks in the memory pool are arranged in descending order of their free memory.
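For illustration only, the pool configuration just described can be sketched in Python. All names and sizes here (`MemoryBlock`, `BLOCK_SIZE = 4096`, a 64-byte data head) are assumptions made for the sketch, not values fixed by the embodiments.

```python
BLOCK_SIZE = 4096    # preset fixed size of each memory block (assumed)
HEADER_SIZE = 64     # space reserved for the data head (assumed)
MAX_ALLOCATABLE = BLOCK_SIZE - HEADER_SIZE   # largest request one block can serve

class MemoryBlock:
    """One memory block: a data head (metadata) plus a data storage area."""
    def __init__(self):
        self.free = MAX_ALLOCATABLE   # free memory in the data storage area

class MemoryPool:
    """Memory blocks kept in descending order of free memory."""
    def __init__(self, n_blocks=1):
        self.blocks = [MemoryBlock() for _ in range(n_blocks)]

    def sort_blocks(self):
        # descending order of free memory, as the configuration requires
        self.blocks.sort(key=lambda b: b.free, reverse=True)

pool = MemoryPool(n_blocks=3)
pool.blocks[0].free = 100     # simulate partial use of one block
pool.sort_blocks()
order = [b.free for b in pool.blocks]   # the partly used block moves last
```

With this ordering, the first block of the pool always holds the most free memory, which is what makes the later search and destruction checks cheap.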
Fig. 8 is a schematic diagram of the modular structure of another embodiment of the memory allocation apparatus provided by the present application. As shown in Fig. 8, the apparatus 80 may further include:
a first free memory block searching module 81, configured to, when the remaining space judging module judges that the requested memory is greater than or equal to the free memory of the current memory block, search the memory pool in sequence, based on the memory block linked list recorded in the data head of the current memory block, for a free memory block whose free memory is greater than or equal to the requested memory;
a first free memory block judging module 82, configured such that, if such a free memory block is found, judging whether the current remaining space of the current memory block satisfies the memory request comprises judging whether the current remaining space of that free memory block satisfies the memory request.
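The search performed by module 81 amounts to a first-fit walk over the block linked list. The Python sketch below assumes a minimal `Block` node with a `next` pointer; the names are illustrative, not taken from the embodiments.

```python
class Block:
    def __init__(self, free):
        self.free = free      # free memory of this block
        self.next = None      # next block recorded in the data head

def link(blocks):
    """Chain blocks into the linked list recorded in the data head."""
    for a, b in zip(blocks, blocks[1:]):
        a.next = b
    return blocks[0] if blocks else None

def find_free_block(head, request):
    """Walk the list and return the first block whose free memory is
    greater than or equal to the requested size, or None if none fits."""
    node = head
    while node is not None:
        if node.free >= request:
            return node
        node = node.next
    return None

# Blocks are held in descending order of free memory (see the pool setup).
head = link([Block(2048), Block(512), Block(64)])
```

Because the list is kept in descending order of free memory, a request that the first block cannot satisfy cannot be satisfied anywhere in the pool, so the walk terminates quickly in the failure case.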
Fig. 9 is a schematic diagram of the modular structure of another embodiment of the memory allocation apparatus provided by the present application. As shown in Fig. 9, the apparatus 90 may further include:
a first new memory block creation module 91, configured to, if no free memory block whose free memory is greater than or equal to the requested memory is found in the memory pool, create a new memory block in the memory pool and use the new memory block to allocate memory for the request;
a first memory block ordering module 92, configured to compare in sequence, via the next memory block indicated by the data head of each memory block, the free memory of the memory blocks in the memory pool with the free memory of the new memory block, and to insert the new memory block into the linked list between the preceding memory block whose free memory is greater than that of the new memory block and the following memory block whose free memory is less than that of the new memory block.
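The sorted insertion performed by module 92 can be sketched as follows; `Block`, `insert_sorted`, and `to_list` are illustrative names assumed for the sketch.

```python
class Block:
    def __init__(self, free):
        self.free = free
        self.next = None

def insert_sorted(head, new):
    """Insert `new` so the list stays in descending order of free memory:
    after the last block with more free memory than the new block, and
    before the first block with less."""
    if head is None or new.free > head.free:
        new.next = head
        return new            # new block becomes the list head
    prev = head
    while prev.next is not None and prev.next.free >= new.free:
        prev = prev.next
    new.next = prev.next
    prev.next = new
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.free)
        head = head.next
    return out

head = None
for free in (1000, 4032, 100):
    head = insert_sorted(head, Block(free))
head = insert_sorted(head, Block(500))   # lands between the 1000 and 100 blocks
```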
Fig. 10 is a schematic diagram of the modular structure of an embodiment of the remaining space allocation module 74 provided by the present application. In one embodiment scenario of the present application, the remaining space allocation module 74 may include:
an unallocated memory allocation module 101, configured to, if the current remaining space of the current memory block satisfies the memory request, preferentially allocate memory to the request from the currently unallocated memory space of the current memory block;
a released memory allocation module 102, configured to, if the currently unallocated memory of the current memory block does not satisfy the memory request, search the space address information of released memory recorded in the data head of the current memory block for a memory space satisfying the memory request and allocate it.
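Modules 101 and 102 together define an allocation preference inside a single block: never-used space first, recycled spans second. A minimal Python sketch, with assumed names and (offset, length) pairs standing in for the recorded address information:

```python
class Block:
    def __init__(self, size):
        self.unallocated = size   # never-used tail of the data storage area
        self.cursor = 0           # next offset inside the unallocated tail
        self.released = []        # (offset, length) spans recorded in the data head

def allocate_from_block(block, request):
    """Prefer the currently unallocated space; otherwise search the
    released-memory spans recorded in the data head. Returns an offset,
    or None when neither source can satisfy the request."""
    if block.unallocated >= request:
        offset = block.cursor
        block.cursor += request
        block.unallocated -= request
        return offset
    for i, (off, length) in enumerate(block.released):
        if length >= request:
            if length == request:
                block.released.pop(i)          # span consumed entirely
            else:
                block.released[i] = (off + request, length - request)
            return off
    return None

blk = Block(100)
first = allocate_from_block(blk, 60)     # served from unallocated space
blk.released = [(200, 80)]               # pretend an earlier span was freed
second = allocate_from_block(blk, 60)    # unallocated (40) too small, uses released span
```

Preferring unallocated space keeps the released list short; the released list is only consulted once the fresh tail of the block is exhausted.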
As shown in Fig. 10, in another embodiment of the present application, the remaining space allocation module 74 may further include:
a second free memory block searching module 103, configured to, if no memory space satisfying the memory request can be found in the released memory recorded in the data head of the current memory block, search the memory pool in sequence, based on the address information recorded in the data head of the current memory block, for a free memory block whose free memory is greater than or equal to the requested memory;
a second free memory block judging module 104, configured to, if such a free memory block is found, judge whether the current remaining space of the free memory block satisfies the memory request.
As shown in Fig. 10, in another embodiment of the present application, the remaining space allocation module 74 may further include:
a second new memory block creation unit 105, configured to, when the judgment result of the second free memory block judging module is that no free memory block whose free memory is greater than or equal to the requested memory is found in the memory pool, create a new memory block in the memory pool and use the new memory block to allocate memory for the request.
Fig. 11 is a schematic diagram of the modular structure of another embodiment of the memory allocation processing apparatus provided by the present application. As shown in Fig. 11, the apparatus 110 may further include:
a released memory judging module 1101, configured to obtain memory release request information and judge whether the memory to be released is greater than the maximum allocatable memory;
a bulk memory locating module 1102, configured to, when the judgment result of the released memory judging module is that the memory to be released is greater than the maximum allocatable memory, locate the first address of the bulk memory corresponding to the memory to be released according to the memory release request information and the bulk memory information recorded in the data head;
a bulk memory release module 1103, configured to release the data of the located bulk memory and delete the first address information of the bulk memory corresponding to the released memory from the bulk memory address linked list recorded in the data head of the memory block.
Fig. 12 is a schematic diagram of the modular structure of another embodiment of the memory allocation processing apparatus provided by the present application. As shown in Fig. 12, the apparatus 120 may further include:
a released memory locating module 1201, configured to, when the judgment result of the released memory judging module is that the memory to be released is less than the maximum allocatable memory, locate, according to the memory release request information, the address of the memory to be released within the corresponding to-be-released memory block in the memory pool, and release the memory space of the released memory in the to-be-released memory block;
a released memory linked list updating module 1202, configured to obtain the address of the released memory space generated after the to-be-released memory block releases the memory, and insert that memory space address into the space address information of released memory recorded in the data head of the to-be-released memory block.
Fig. 13 is a schematic diagram of the modular structure of another embodiment of the released memory linked list updating module provided by the present application. As shown in Fig. 13, the released memory linked list updating module 1202 may include:
an adjacent memory judging module 1301, configured to judge whether the current node at which the memory space address is located forms contiguous memory with the preceding node and the following node of the current node;
a memory merging module 1302, configured to, if the judgment result of the adjacent memory judging module is that they form contiguous memory, merge the current node, the preceding node of the current node, and the following node of the current node into a new node of contiguous memory and record it in the space address information of released memory recorded in the data head of the to-be-released memory block; otherwise, insert the memory space address as a new node into the space address information of released memory recorded in the data head of the to-be-released memory block.
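The merging behavior of modules 1301 and 1302 can be sketched over (offset, length) spans. This sketch keeps the released list sorted and merges any spans that become contiguous, which is a slight generalization of merging only with the immediate neighbors; all names are illustrative.

```python
def insert_and_merge(released, span):
    """Insert a freed (offset, length) span into the released-memory list,
    merging it with the preceding and following spans when contiguous."""
    spans = sorted(released + [span])
    merged = [spans[0]]
    for off, length in spans[1:]:
        prev_off, prev_len = merged[-1]
        if prev_off + prev_len == off:          # contiguous with predecessor
            merged[-1] = (prev_off, prev_len + length)
        else:
            merged.append((off, length))        # gap: keep as a new node
    return merged

both = insert_and_merge([(0, 100), (200, 50)], (100, 100))   # bridges both sides
apart = insert_and_merge([(0, 10)], (50, 10))                # nothing contiguous
```

Merging on release is what keeps the released list from degenerating into many tiny spans, which is the fragmentation the memory pool is designed to avoid.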
As shown in Fig. 13, in another embodiment of the present application, the released memory linked list updating module 1202 may further include:
a new node judging module 1303, configured to judge whether the new node is located in the first memory block of the memory pool and, if the new node is not located in the first memory block of the memory pool, judge whether the free memory of the to-be-released memory block is equal to the maximum allocatable memory;
a node release module 1304, configured to, when the judgment result of the new node judging module is that the free memory of the to-be-released memory block is equal to the maximum allocatable memory, release the memory space of the data storage area of the to-be-released memory block and delete the node of the to-be-released memory block.
As shown in Fig. 13, in another embodiment of the present application, the released memory linked list updating module 1202 may further include:
a first memory block ordering module 1305, configured to, when the judgment result of the new node judging module is that the free memory of the to-be-released memory block is not equal to the maximum allocatable memory, compare the free memory of the memory blocks in the memory pool and arrange the memory blocks in descending order of free memory.
In one embodiment of the present application, the apparatus further includes:
a memory pool destruction module, configured to, when it is detected that the header address by which the first memory block of the memory pool points to the next memory block is null and the free memory of the first memory block is equal to the maximum allocatable memory of the memory block, destroy the memory pool and release the memory pool space.
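The destruction condition just described, that only the first block remains and it is entirely free, reduces to a two-part check. A sketch under the same assumed sizes as earlier:

```python
BLOCK_SIZE, HEADER_SIZE = 4096, 64           # assumed sizes
MAX_ALLOCATABLE = BLOCK_SIZE - HEADER_SIZE

class Block:
    def __init__(self, free, nxt=None):
        self.free = free
        self.next = nxt          # header address of the next memory block

def should_destroy_pool(first_block):
    """The pool may be destroyed once only the first block remains (its
    next pointer is null) and that block's free memory equals the
    maximum allocatable memory, i.e. nothing is allocated anywhere."""
    return first_block.next is None and first_block.free == MAX_ALLOCATABLE

empty_pool = should_destroy_pool(Block(MAX_ALLOCATABLE))             # destroyable
busy_pool = should_destroy_pool(Block(100))                          # still in use
multi_block = should_destroy_pool(Block(MAX_ALLOCATABLE, Block(100)))  # more blocks remain
```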
The memory allocation processing apparatus provided by the present application can allocate memory from the memory pool or from bulk memory according to the size of the requested memory; within the memory pool, it allocates either from existing free memory or from a newly created memory block according to a comparison of the requested size with the available free memory. This not only satisfies the request for memory space but also makes full use of the space of each memory block. The memory pool facilitates centralized management of small memory spaces by the server; in particular, threads that frequently request small amounts of memory can quickly obtain space from the pool, which greatly reduces memory fragmentation and greatly improves space utilization.
Although the teachings herein mention the setting, storage, and deletion of memory information data, such as the manner of dividing the memory pool into memory blocks and the manner of recording the address information of memory blocks and memory spaces, the present application is not limited to situations that fully comply with an industry standard, a particular computer language, an execution standard, or the situations described in the embodiments. Embodiments slightly modified on the basis of certain standardized computer languages or embodiment descriptions can also achieve implementation effects identical, equivalent, or similar to those of the above embodiments, or effects foreseeable after such modification. Of course, even without adopting the above manners of data processing and judgment, the same application can still be realized as long as the memory data processing, information interaction, and information judgment and feedback mechanisms of the above embodiments of the present application are satisfied, which will not be described again here.
Although the present application provides method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of the steps enumerated in the embodiments is only one of many orders in which the steps may be executed and does not represent the only execution order. When an actual apparatus or client product executes, the steps may be executed in the order of the embodiments or the methods shown in the drawings, or executed in parallel (for example, in an environment of parallel processors or multithreaded processing).
The modules or apparatus illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. For convenience of description, the above apparatus is described by dividing it into various modules by function. When the present application is implemented, the functions of the modules may be realized in one or more pieces of software and/or hardware, or a module realizing one function may be realized by a combination of multiple sub-modules or sub-units.
It is also known in the art that, in addition to realizing a controller in purely computer-readable program code, it is entirely possible, by logically programming the method steps, to cause the controller to realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the devices included within it for realizing various functions may also be regarded as structures within the hardware component. Or even, the devices for realizing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
From the above description of the embodiments, those skilled in the art can clearly understand that the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a mobile terminal, a server, a network device, or the like) to execute the methods described in the embodiments of the present application or in certain parts of the embodiments.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. The present application may be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
Although the present application has been depicted through embodiments, those of ordinary skill in the art will appreciate that the present application admits many variations and changes without departing from its spirit, and it is intended that the appended claims cover these variations and changes without departing from the spirit of the present application.