CN100359489C - Method for memory allocation in an embedded real-time operating system


Info

Publication number
CN100359489C
CN100359489C · CNB2004100414592A · CN200410041459A
Authority
CN
China
Prior art keywords
memory
memory pool
memory block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100414592A
Other languages
Chinese (zh)
Other versions
CN1722106A (en)
Inventor
何先波
张芝萍
徐立锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Global Innovation Polymerization LLC
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CNB2004100414592A priority Critical patent/CN100359489C/en
Publication of CN1722106A publication Critical patent/CN1722106A/en
Application granted
Publication of CN100359489C publication Critical patent/CN100359489C/en


Abstract

The present invention relates to a method for memory allocation in an embedded real-time operating system, comprising the following steps: a large memory region is requested from the operating system in advance; the region is divided into memory pools of different sizes, each pool consisting of a number of equally sized memory blocks; the control head of each pool is initialized; when a memory block is needed, the corresponding pool is located according to the required block size and checked for a free block; if one exists, a block is taken from the head of the free queue and the relevant fields of the pool's control head are updated, completing the allocation; if none exists, the pool is adjusted dynamically; when memory is released, the block is returned to the tail of the queue of the corresponding pool, determined by the size of the memory being freed, and the relevant control-head fields are updated. The invention can dynamically adjust the capacity of each memory pool while preserving the real-time performance of high-priority tasks, improving the reliability and stability of the system.

Description

A Method of Memory Allocation in an Embedded Real-Time Operating System
Technical field
The present invention relates to the computer field, and in particular to memory allocation in embedded real-time multi-task operating systems.
Background technology
Memory management in an embedded real-time operating system is a key factor in guaranteeing the real-time behavior of applications. In communication applications, memory requests typically arrive in large numbers for a small set of recurring data structures (that is, memory blocks of identical sizes). To improve real-time performance, the following "wrapper" is usually built on top of the simple allocation mechanism provided by the real-time operating system:
Step 1: request a large memory region from the operating system in advance.
Step 2: divide this region into several sub-areas (called memory pools here), each pool consisting of a number of equally sized memory blocks. The number of blocks in each pool is generally derived from usage statistics and extensive field experience. A typical division for the communications field is: (64, 1200), (128, 1000), (256, 800), (512, 500), (1024, 20), (2048, 10), (4096, 5), (8192, 2), (16384, 1), where the first number in each pair is the block size in bytes and the second is the total number of blocks in that pool; block sizes in adjacent pools generally differ by a factor of two. The blocks of each pool are organized into a queue via a static or dynamic linked list: a free block is taken from the head of the queue on allocation and returned to the tail on release.
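The division above can be captured in a small static configuration table. The following is a minimal C sketch (the names `pool_cfg`, `POOL_TABLE`, and `region_bytes` are illustrative, not from the patent); it also computes how large the initial region must be for the example division:

```c
#include <stddef.h>

struct pool_cfg {
    size_t block_size;   /* bytes per block in this pool */
    size_t block_count;  /* initial number of blocks     */
};

/* Adjacent pools double in block size; counts come from field statistics. */
static const struct pool_cfg POOL_TABLE[] = {
    {64, 1200}, {128, 1000}, {256, 800}, {512, 500}, {1024, 20},
    {2048, 10}, {4096, 5},   {8192, 2},  {16384, 1},
};

enum { NPOOLS = sizeof POOL_TABLE / sizeof POOL_TABLE[0] };

/* Total bytes the large initial region must hold for this division. */
static size_t region_bytes(void)
{
    size_t total = 0;
    for (int i = 0; i < NPOOLS; i++)
        total += POOL_TABLE[i].block_size * POOL_TABLE[i].block_count;
    return total;
}
```

For this example division, the nine pools together occupy just under 760 KB, which is the size of the region to request up front.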
For ease of management, each memory pool has a control and management head (called the memory pool control head here). It generally contains: the block size of the pool, the total number of blocks, the number of free blocks, the head pointer of the free-block queue (the address of the first node in the queue), the tail pointer of the free-block queue (the address of the last node), a semaphore for mutually exclusive access to the pool structure (a semaphore is a mechanism provided by the operating system for exclusive access to a shared resource), and so on.
The prior art has two main deficiencies: 1) the number of blocks in each pool is fixed statically, usually from previously accumulated statistics and empirical values, and cannot be adjusted while the application tasks are running; 2) a real-time system must do its utmost to guarantee the real-time behavior of high-priority tasks, which this allocation method does not take into account.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art by proposing a new memory allocation method that can adjust pools dynamically while guaranteeing the real-time behavior of high-priority tasks.
The technical scheme of the present invention comprises:
1.1 Request a large memory region from the operating system in advance.
1.2 Divide the requested region into memory pools of different sizes, each pool consisting of a number of equally sized blocks, and initialize each pool's control head.
1.3 When a memory block is needed, locate the corresponding pool by the required block size and clear both the pool's "not recently accessed" flag and its "never accessed" flag. If the pool has a free block, take one from the head of the queue, update the relevant fields of the control head, and the allocation is complete. Otherwise, compare the requesting task's priority with the "highest priority of failed allocations" recorded in the pool's control structure, keeping the highest priority among recently failed requesters; increment the recent allocation-failure count; and adjust the pool dynamically.
1.4 When memory is released, locate the corresponding pool by the size of the memory being freed, return the block to the tail of the queue, and update the relevant fields of the pool's control head.
The memory pool control head contains: the block size of the pool (bs), the current total number of blocks (cbcnt), the number of free blocks (fbcnt), the head pointer of the free-block queue (pfbhead), the tail pointer of the free-block queue (pfbrear), the number of recent allocation failures on the pool (failcnt), the highest priority among tasks whose allocations failed (mprior), a flag indicating whether the pool has not been accessed recently (lrna), a minimum block-count threshold (min), and a flag indicating whether the pool has never been accessed (never).
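As a rough illustration, the fields listed above might be rendered in C as follows; the types and the `pool_ctrl` and `mem_block` names are assumptions, since the patent gives only the field names and their meanings:

```c
#include <stddef.h>

struct mem_block;                /* free blocks linked through their headers */

struct pool_ctrl {
    size_t            bs;        /* block size of this pool (bytes)          */
    unsigned          cbcnt;     /* current total number of blocks           */
    unsigned          fbcnt;     /* number of free blocks                    */
    struct mem_block *pfbhead;   /* head of the free-block queue             */
    struct mem_block *pfbrear;   /* tail of the free-block queue             */
    unsigned          failcnt;   /* recent allocation failures on this pool  */
    int               mprior;    /* highest priority among failed tasks      */
    int               lrna;      /* 1 = not accessed recently by any task    */
    unsigned          min;       /* minimum block-count threshold            */
    int               never;     /* 1 = never accessed since initialization  */
    /* plus a semaphore for mutual exclusion, omitted in this sketch */
};
```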
Updating the control head in step 1.4 includes setting the "highest priority of failed allocations" field to the lowest priority, clearing the recent allocation-failure count to 0, and setting the "not recently accessed" flag to true.
The dynamic adjustment of memory pools is the core of the present invention. It means: traverse all pool control heads, looking for a pool whose free-block count is 0 and whose "highest priority of failed allocations" field (the mprior field of the control head) is the largest. If no such pool exists, the dynamic adjustment task goes to sleep. (Sleep, ready, and running are standard task states: the adjustment task runs on the computer like any other task, performs its adjustment only while in the running state, and returns to sleep when the adjustment finishes, letting other tasks run; when it is woken up, or its sleep period expires, it may run again and perform a new round of adjustment.) If such a pool exists, attempt to expand it by the following steps:
4.1 Search in order through all pools whose block size is larger than that of the pool to be expanded, until one satisfies: 1) the pool has never been requested by any process; 2) its free-block count is greater than its minimum block-count threshold. If such a pool exists, take one block from the head of its free queue and go to 4.4; otherwise, go to 4.2.
4.2 Search in order through all larger pools until one satisfies: 1) the pool has not been accessed by any user task recently; 2) its free-block count exceeds a certain proportion of its total block count (this ratio typically lies between 3/5 and 1); 3) its free-block count is greater than its minimum threshold. If such a pool exists, take one block from the head of its free queue and go to 4.4; otherwise, go to 4.3.
4.3 Search in order through all larger pools until one satisfies: 1) its free-block count exceeds a certain proportion of its total block count (generally a larger ratio than in 4.2); 2) its free-block count is greater than the minimum threshold. If such a pool exists, take one block from the head of its free queue; otherwise, go to 4.7.
4.4 If the donor block satisfies the condition that the number of blocks it can be split into for the pool to be expanded exceeds a certain threshold (usually a value related to that pool's recent allocation-failure count), split the block step by step across several pools: first divide it into two halves; add one half to the pool adjacent to the donor pool but with the smaller block size, expanding it and updating its control head with reference to step 4.6; apply the same test to the other half and, if it still satisfies the condition, divide it in two again, adding one part to the pool adjacent to the one just expanded, and so on, until the remaining block no longer satisfies the condition.
4.5 Use the block finally obtained in 4.4 to expand the pool to be expanded.
4.6 Clear the expanded pool's "not recently accessed by a user task" flag (lrna), clear its recent allocation-failure count (failcnt) to 0, reset its priority field (mprior) to the minimum set at initialization, and update the other relevant data of the control head.
4.7 The adjustment task goes to sleep.
Compared with the traditional allocation method used in the communications field, the inventive method can adjust the capacity of each memory pool dynamically while the application system is running, while guaranteeing the real-time behavior of high-priority tasks as far as possible. It reduces the risk that a misconfigured initial pool capacity degrades system performance or crashes the system, exhibits adaptive behavior, and greatly improves the reliability and stability of the system.
Description of drawings
Fig. 1 shows the memory image after pool initialization;
Fig. 2 shows the pool control head array after initialization;
Fig. 3 is the initialization flowchart of one embodiment of the present invention;
Fig. 4 is the block allocation flowchart of one embodiment of the present invention;
Fig. 5 is the block release flowchart of one embodiment of the present invention.
Embodiment
The technical scheme of the present invention can be divided into three phases: initialization, memory allocation, and memory release.
Phase one: initialization
Step 1: request a large memory region from the operating system in advance.
Step 2: statically divide the region obtained in step 1 into the individual memory pools and initialize each pool's control head. To improve lookup performance, the control heads are stored together in an array, ordered by the block size of the corresponding pool. The main management information in a control head comprises: the block size of the pool (bs), the current total number of blocks (cbcnt), the number of free blocks (fbcnt), the free-block queue head pointer (pfbhead), the free-block queue tail pointer (pfbrear), the number of recent allocation failures on the pool (failcnt), the highest priority among tasks whose allocations failed (mprior), a flag indicating whether the pool has not been accessed recently (lrna), a minimum block-count threshold (min), and a flag indicating whether the pool has never been accessed (never).
Step 3: create the dynamic pool-adjustment task and put it in the sleep state.
Phase two: memory allocation
Step 1: when a memory block is needed, locate the corresponding pool by the requested block size.
Step 2: clear the pool control head's "not recently accessed" flag and its "never accessed" flag.
Step 3: check whether the pool still has a free block; if not, go to step 5.
Step 4: take a block from the head of the queue, update the other relevant fields of the control head, and go to step 7.
Step 5: the request cannot be satisfied from this pool; compare the requesting task's priority with the "highest priority of failed allocations" recorded in the pool's control structure so that the highest priority among recently failed requesters is retained, and increment the recent allocation-failure count.
Step 6: wake up the dynamic pool-adjustment task.
Step 7: the allocation is finished.
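The allocation steps above can be sketched in C roughly as follows. All names are illustrative, `wake_adjust_task` merely stands in for waking the dynamic adjustment task, and a numerically larger priority is assumed to mean a higher priority, as in the embodiment's example:

```c
#include <stddef.h>

struct blk { struct blk *next; };

struct pool {
    size_t bs;                 /* block size of this pool            */
    unsigned fbcnt, failcnt;   /* free blocks, recent failures       */
    int mprior, lrna, never;   /* failed-task priority, access flags */
    struct blk *head, *tail;   /* free-block queue                   */
};

static int woke_adjuster;                 /* stand-in for the wake-up step */
static void wake_adjust_task(void) { woke_adjuster = 1; }

static struct blk *pool_alloc(struct pool *p, int prio)
{
    p->lrna  = 0;                         /* step 2: mark as accessed       */
    p->never = 0;
    if (p->fbcnt == 0) {                  /* steps 5-6: record the failure  */
        if (prio > p->mprior)
            p->mprior = prio;             /* keep highest failed priority   */
        p->failcnt++;
        wake_adjust_task();               /* step 6: wake adjustment task   */
        return NULL;
    }
    struct blk *b = p->head;              /* step 4: take from queue head   */
    p->head = b->next;
    if (p->head == NULL)
        p->tail = NULL;
    p->fbcnt--;
    return b;
}
```

In a real system the body would also take the pool's semaphore around the queue manipulation; that is omitted here for brevity.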
The function of the dynamic adjustment task woken in step 6 is described in detail as follows: traverse all pool control heads, looking for a pool whose free-block count is 0 and whose "highest priority of failed allocations" field (mprior) is the largest. If no such pool exists, the adjustment task goes back to sleep; if one exists, attempt to expand it by the following steps:
4.1 Search in order through all pools whose block size is larger than that of the pool to be expanded, until one satisfies: 1) the pool has never been requested by any process; 2) its free-block count is greater than its minimum block-count threshold. If such a pool exists, take one block from the head of its free queue and go to 4.4; otherwise, go to 4.2.
4.2 Search in order through all larger pools until one satisfies: 1) the pool has not been accessed by any user task recently; 2) its free-block count exceeds a certain proportion of its total block count (this ratio typically lies between 3/5 and 1); 3) its free-block count is greater than its minimum threshold. If such a pool exists, take one block from the head of its free queue and go to 4.4; otherwise, go to 4.3.
4.3 Search in order through all larger pools until one satisfies: 1) its free-block count exceeds a certain proportion of its total block count (generally a larger ratio than in 4.2); 2) its free-block count is greater than the minimum threshold. If such a pool exists, take one block from the head of its free queue and go to 4.4; otherwise, go to 4.7.
4.4 If the donor block satisfies the condition that the number of blocks it can be split into for the pool to be expanded exceeds a certain threshold (usually a value related to that pool's recent allocation-failure count), split the block step by step across several pools: first divide it into two halves; add one half to the pool adjacent to the donor pool but with the smaller block size, expanding it and updating its control head with reference to step 4.6; apply the same test to the other half and, if it still satisfies the condition, divide it in two again, adding one part to the pool adjacent to the one just expanded, and so on, until the remaining block no longer satisfies the condition. [Splitting step by step is used because, if the donor block found by the three searches above is far larger than the blocks of the pool to be expanded, handing all of it over would complicate later adjustments and severely aggravate the clustering of memory in the small-block pools; for time-performance reasons, the present invention never moves memory from small-block pools back into large-block pools.]
4.5 Use the block finally obtained in 4.4 to expand the pool to be expanded.
4.6 Clear the expanded pool's "not recently accessed by a user task" flag (lrna), clear its recent allocation-failure count (failcnt) to 0, reset its priority field (mprior) to the minimum set at initialization, and update the other relevant data of the control head.
4.7 The dynamic adjustment task goes to sleep.
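The three donor-search passes 4.1-4.3 can be sketched as a single routine over the pool array, assumed here to be ordered by increasing block size with `i` the pool to be expanded. This is an illustrative reading of the text, with the embodiment's 3/5 and 4/5 ratios hard-coded and all names assumed:

```c
#include <stddef.h>

struct dpool {
    unsigned cbcnt, fbcnt, min;  /* total blocks, free blocks, threshold */
    int lrna, never;             /* access flags, 1 = true               */
};

/* Returns the index of a donor pool larger than pool i, or -1 if the
   adjustment must give up (step 4.7). */
static int find_donor(const struct dpool *p, int n, int i)
{
    for (int j = i + 1; j < n; j++)       /* pass 4.1: never-used pools    */
        if (p[j].never && p[j].fbcnt > p[j].min)
            return j;
    for (int j = i + 1; j < n; j++)       /* pass 4.2: idle, mostly free   */
        if (p[j].lrna
            && 5 * p[j].fbcnt > 3 * p[j].cbcnt   /* free > 3/5 of total    */
            && p[j].fbcnt > p[j].min)
            return j;
    for (int j = i + 1; j < n; j++)       /* pass 4.3: any mostly-free pool */
        if (5 * p[j].fbcnt > 4 * p[j].cbcnt      /* free > 4/5 of total    */
            && p[j].fbcnt > p[j].min)
            return j;
    return -1;
}
```

The three passes deliberately escalate: a never-used pool is the cheapest donor, an idle and mostly free pool the next cheapest, and only then is an actively used pool raided.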
Phase three: memory release
Step 1: locate the corresponding memory pool by the size of the memory to be released.
Step 2: return the block to the tail of the queue, set the "highest priority of failed allocations" field in the corresponding control head to the lowest priority, clear the recent allocation-failure count to 0, and set the "not recently accessed" flag to true.
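The release path can be sketched as follows; the names are illustrative, and `LOWEST_PRIO` assumes the example convention in which 1 is the system's lowest priority:

```c
#include <stddef.h>

#define LOWEST_PRIO 1   /* assumed lowest priority, as in the example */

struct rblk { struct rblk *next; };

struct rpool {
    unsigned fbcnt, failcnt;
    int mprior, lrna;
    struct rblk *head, *tail;
};

static void pool_free(struct rpool *p, struct rblk *b)
{
    b->next = NULL;                       /* append at the queue tail    */
    if (p->tail) p->tail->next = b;
    else         p->head = b;
    p->tail = b;
    p->fbcnt++;
    p->mprior  = LOWEST_PRIO;             /* reset failed-task priority  */
    p->failcnt = 0;                       /* clear recent failure count  */
    p->lrna    = 1;                       /* "not recently accessed"     */
}
```

The failure-tracking fields are reset here because, as the text notes below, the pool now has an allocatable block again.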
When the present invention adjusts pools dynamically, the priority of the adjustment task is chosen mainly by weighing the effect of memory allocation on system reliability against its effect on real-time behavior. If a task's allocation failure would tend to crash the whole system, the adjustment task's priority can be set to the highest and it can run periodically, so that the blocks in a depleted pool are replenished in time and allocation failures are, to some extent, prevented in advance. If an allocation failure merely blocks the requesting task, the adjustment task can be given an intermediate priority and run only on allocation failure, so that frequent runs do not degrade system performance.
Taking the traditionally estimated pool structure (64, 1200), (128, 1000), (256, 800), (512, 500), (1024, 20), (2048, 10), (4096, 5), (8192, 2), (16384, 1) as a basis, the present invention is described in more detail below:
As shown in Fig. 3, a large memory region (the shaded area in Fig. 1) is first requested from the operating system and divided into pools of nine block sizes: 64, 128, 256, 512, 1024, 2048, 4096, 8192, and 16384 bytes. All pool control heads form a nine-element array ordered by block size; the contents of the array after initialization are shown in the table in Fig. 2. The first row of the table is the control head of the (64, 1200) pool; since its blocks cannot be diverted elsewhere, both its current block count (cbcnt) and its minimum block-count threshold (min) are set to the initial statistical value 1200. The last row corresponds to the (16384, 1) pool; since the application rarely requests blocks of this size, and to facilitate dynamic adjustment, one extra block is assigned to it at initialization to serve as backup memory. The minimum thresholds (min) of the remaining pools are simply initialized to 7/8, 6/7, 5/6, 4/5, 3/4, 2/3, and 1/2 of their initial block counts (cbcnt), although these values could also be set from statistical experience. The "highest priority of failed tasks" field (mprior) is initialized to 1, on the assumption that 1 is the system's lowest priority; this value is best chosen according to the concrete application system.
When a task requests memory, the corresponding pool is located by the requested block size, as shown in Fig. 4. The pool control head's "not recently accessed" flag and "never accessed" flag are cleared, indicating that a task has now accessed the pool. The pool is then checked for a free block: if one exists, it is taken from the pool and the allocation succeeds; if not, the task's allocation fails, the relevant control head data is updated, and the pool is adjusted dynamically.
Fig. 5 shows the process of a task releasing a memory block. The corresponding pool is first located by the size of the block to be released; the block is then appended to the pool's free queue and the relevant control information in the control head is updated. The "highest priority of failed allocations" field is set to the low priority 1, the recent failure count cleared to 0, and the "not recently accessed" flag set to true, mainly because the pool now has an allocatable block again.
The dynamic adjustment can be implemented as follows:
In the control head array after initialization (the table shown in Fig. 2), find the entry whose free-block count (fbcnt) is 0 and whose failed-task priority value (mprior) is the largest. If no such entry exists, no adjustment is needed and the adjustment task goes to sleep. If one exists, suppose entry i satisfies the condition; the pool it represents must then be expanded, by the following steps:
Step 1: search from entry i+1 to the end of the table until an entry satisfies: 1) never is 1 (the pool has never been requested by any process); 2) fbcnt is greater than min (the free-block count exceeds the minimum threshold). If such an entry exists, take one block from the head of the corresponding pool's free queue and go to step 4.
Step 2: search from entry i+1 to the end of the table until an entry satisfies: 1) lrna is 1 (the pool has not been accessed by any user task recently); 2) fbcnt is greater than 3/5 of cbcnt (the free-block count exceeds 3/5 of the current total); 3) fbcnt is greater than min. If such an entry exists, take one block from the head of the corresponding pool's free queue and go to step 4.
Step 3: search from entry i+1 to the end of the table until an entry satisfies: 1) fbcnt is greater than 4/5 of cbcnt; 2) fbcnt is greater than min. If such an entry exists, take one block from the head of the corresponding pool's free queue and go to step 4; if not, the adjustment has failed, go to step 7.
Step 4: if the block obtained in the preceding three steps can be split into more blocks of the target size than failcnt (the recent allocation-failure count) of entry i plus 2, split it step by step across several pools: first divide it into two halves; one half joins the pool adjacent to the donor pool (assumed to be entry j, with j > i by the search above) but with the smaller block size, that is, the pool of entry j-1, which it expands, its control head fields being updated with reference to step 6; apply the same test to the other half and, if it still satisfies the condition, divide it in two again, one part joining the pool adjacent to the one just expanded (the pool of entry j-2), and so on, until the remaining block no longer satisfies the condition.
Step 5: use the block finally obtained in step 4 to expand the pool represented by entry i.
Step 6: set the expanded pool's "not recently accessed by a user task" flag (lrna) to 0, clear its recent allocation-failure count (failcnt) to 0, set its failed-task priority field (mprior) to 1, and update the other relevant control information of its control head.
Step 7: the adjustment task goes to sleep.
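The cascading split in step 4 can be modelled as repeated halving: one half is donated to the next smaller pool, and the other half continues downward while it can still yield more than the threshold number of target-sized blocks. The sketch below is an illustrative simulation under the assumption that pool k's block size doubles with each index (so one pool-j block equals 2^(j-i) pool-i blocks); it is not the patent's code:

```c
/* Splits a donor block from pool j down toward pool i.
   donated[k] counts blocks handed to intermediate pool k on the way;
   the return value is how many pool-i blocks finally expand pool i. */
static int cascade_split(int j, int i, int threshold, unsigned donated[])
{
    int equiv = 1 << (j - i);  /* remaining piece, in pool-i blocks     */
    int level = j;
    while (level > i && equiv / 2 > threshold) {
        donated[level - 1] += 1;   /* one half joins the next smaller pool */
        equiv /= 2;                /* keep splitting the other half        */
        level--;
    }
    return equiv;
}
```

For example, splitting a donor block three pool levels above the target with threshold 2 donates one block to the intermediate pool and delivers the remaining piece, worth four target-sized blocks, to the pool being expanded.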

Claims (5)

1. A method of memory allocation in an embedded real-time operating system, comprising:
1.1 requesting a large memory region from the operating system in advance;
1.2 dividing the requested region into memory pools of different sizes, each pool consisting of a number of equally sized memory blocks, and initializing each pool's control head;
1.3 when a memory block is needed, locating the corresponding pool by the required block size, clearing both the pool control head's "not recently accessed" flag and its "never accessed" flag, and checking whether the pool has a free block: if so, taking one from the head of the queue and updating the relevant fields of the control head, whereupon the allocation is finished; if not, comparing the requesting task's priority with the "highest priority of failed allocations" recorded in the pool's control structure so as to record the highest priority among recently failed requesters, incrementing the recent allocation-failure count, and dynamically adjusting the pool;
1.4 when memory is released, locating the corresponding pool by the size of the memory being freed, returning the block to the tail of the queue, and updating the relevant fields of the control head.
2. The method of memory allocation in an embedded real-time operating system of claim 1, characterized in that the memory pool control head contains: the block size of the pool, the current total number of blocks, the number of free blocks, the head pointer of the free-block queue, the tail pointer of the free-block queue, the number of recent allocation failures on the pool, the highest priority among tasks whose allocations failed, a flag indicating whether the pool has been accessed by a user task recently, a minimum block-count threshold, and a flag indicating whether the pool has ever been accessed by a user task.
3. The method of memory allocation in an embedded real-time operating system of claim 2, characterized in that updating the control head information in step 1.4 means setting the "highest priority of failed allocations" field to the lowest priority, clearing the recent allocation-failure count to 0, and setting the "not recently accessed" flag to true.
4. The method of memory allocation in an embedded real-time operating system of any one of claims 1 to 3, characterized in that dynamically adjusting the memory pool means: traversing all pool control heads, looking for a pool whose free-block count is 0 and whose "highest priority of failed allocations" field is the largest; if no such pool exists, the dynamic adjustment task goes to sleep; if one exists, attempting to expand it by the following steps:
4.1 searching in order through all pools whose block size is larger than that of the pool to be expanded, until one satisfies: 1) the pool has never been requested by any process; 2) its free-block count is greater than its minimum block-count threshold; if such a pool exists, taking one block from the head of its free queue and going to 4.4; otherwise going to 4.2;
4.2 searching in order through all pools whose block size is larger than that of the pool to be expanded, until one satisfies: 1) the pool has not been accessed by any user task recently; 2) its free-block count exceeds a certain proportion of its total block count; 3) its free-block count is greater than its minimum threshold; if such a pool exists, taking one block from the head of its free queue and going to 4.4; otherwise going to 4.3;
4.3 searching in order through all pools whose block size is larger than that of the pool to be expanded, until one satisfies: 1) its free-block count exceeds a certain proportion of its total block count; 2) its free-block count is greater than the minimum threshold; if such a pool exists, taking one block from the head of its free queue; otherwise going to 4.7;
4.4 if the donor block can be split into more than a certain threshold number of blocks of the pool to be expanded, splitting it step by step across several pools: first dividing it into two halves, adding one half to the pool adjacent to the donor pool but with the smaller block size to expand it and updating that pool's control head accordingly, then applying the same test to the other half and, while the condition holds, dividing it in two again and adding one part to the pool adjacent to the one just expanded, until the remaining block no longer satisfies the condition;
4.5 using the block finally obtained in 4.4 to expand the pool to be expanded;
4.6 clearing the expanded pool's "not recently accessed by a user task" flag, clearing its recent allocation-failure count to 0, resetting the pool's priority field to the minimum set at initialization, and updating the other relevant data of the control head;
4.7 the adjustment task enters sleep state.
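The three donor-search passes of steps 4.1 to 4.3 amount to applying successively weaker predicates to every pool with a larger block size. The following C sketch illustrates that structure under stated assumptions: the pool array, field names, iteration order, and the concrete ratio values are all hypothetical, since the claims leave them unspecified.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified pool descriptor carrying only the fields the search reads.
 * All identifiers are illustrative. */
typedef struct {
    size_t   block_size;
    unsigned total_blocks;
    unsigned free_blocks;
    unsigned min_blocks_threshold;
    bool     recently_accessed;
    bool     ever_accessed;
} pool_t;

/* Donor criteria for steps 4.1, 4.2 and 4.3; `ratio` is the
 * "certain proportion" referred to in claim 5. */
static bool pass1(const pool_t *p) {               /* step 4.1 */
    return !p->ever_accessed && p->free_blocks > p->min_blocks_threshold;
}
static bool pass2(const pool_t *p, double ratio) { /* step 4.2 */
    return !p->recently_accessed &&
           p->free_blocks > (unsigned)(ratio * p->total_blocks) &&
           p->free_blocks > p->min_blocks_threshold;
}
static bool pass3(const pool_t *p, double ratio) { /* step 4.3 */
    return p->free_blocks > (unsigned)(ratio * p->total_blocks) &&
           p->free_blocks > p->min_blocks_threshold;
}

/* Return the index of the first larger-block pool satisfying one of the
 * three passes, or -1 if none exists (step 4.7: adjustment task sleeps). */
int find_donor(const pool_t pools[], int n, size_t victim_block_size) {
    for (int pass = 1; pass <= 3; pass++) {
        for (int i = 0; i < n; i++) {
            const pool_t *p = &pools[i];
            if (p->block_size <= victim_block_size)
                continue;                  /* only larger-block pools qualify */
            if ((pass == 1 && pass1(p)) ||
                (pass == 2 && pass2(p, 0.6)) ||  /* 3/5, per claim 5 */
                (pass == 3 && pass3(p, 0.8)))    /* larger ratio, per claim 5 */
                return i;
        }
    }
    return -1;
}
```

The ordering of the passes preserves the design intent of the claims: a never-used pool is preferred over a recently idle one, which is in turn preferred over any pool that merely has a surplus of free blocks.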
5. The method for memory allocation in an embedded real-time operating system according to claim 4, characterized in that the proportion value in step 4.2 generally ranges from 3/5 to 1, and the proportion value in step 4.3 is greater than that in step 4.2.
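The step-by-step splitting of step 4.4 can be sketched for the common case where pool block sizes form a power-of-two chain (e.g. 256, 128, 64, 32 bytes). This is an assumption for illustration only; the claims do not require halving, merely division into two parts of which one goes to the adjacent smaller pool.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative split of a donor block (step 4.4) for power-of-two pool
 * sizes: the block of size `donor` is halved repeatedly; one half is
 * donated to the intermediate pool of that size, and the other half is
 * split again, until the remainder reaches the block size `target` of
 * the pool to be expanded (step 4.5). `donated[]` receives the block
 * size of each intermediate donation; the return value is the number
 * of intermediate pools expanded along the way. */
int split_donor(size_t donor, size_t target, size_t donated[], int max) {
    int n = 0;
    size_t part = donor / 2;
    while (part > target && n < max) {
        donated[n++] = part;   /* one half expands the intermediate pool */
        part /= 2;             /* the other half is split again          */
    }
    /* The final target-sized parts are added to the pool to be
     * expanded, and its control head is updated as in step 4.6. */
    return n;
}
```

For example, splitting a 256-byte donor block for a 32-byte pool would donate one 128-byte block and one 64-byte block to the intermediate pools on the way down.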
CNB2004100414592A 2004-07-13 2004-07-13 Method for internal memory allocation in the embedded real-time operation system Expired - Fee Related CN100359489C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100414592A CN100359489C (en) 2004-07-13 2004-07-13 Method for internal memory allocation in the embedded real-time operation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100414592A CN100359489C (en) 2004-07-13 2004-07-13 Method for internal memory allocation in the embedded real-time operation system

Publications (2)

Publication Number Publication Date
CN1722106A CN1722106A (en) 2006-01-18
CN100359489C true CN100359489C (en) 2008-01-02

Family

ID=35912429

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100414592A Expired - Fee Related CN100359489C (en) 2004-07-13 2004-07-13 Method for internal memory allocation in the embedded real-time operation system

Country Status (1)

Country Link
CN (1) CN100359489C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391253A (en) * 2017-06-08 2017-11-24 珠海金山网络游戏科技有限公司 Method for reducing system memory allocation release conflict

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101266575B (en) * 2007-03-13 2010-05-19 中兴通讯股份有限公司 Method for enhancing memory pool utilization ratio
CN100530140C (en) * 2007-11-08 2009-08-19 Ut斯达康通讯有限公司 Memory management method for application program
CN101594478B (en) * 2008-05-30 2013-01-30 新奥特(北京)视频技术有限公司 Method for processing ultralong caption data
CN101630992B (en) * 2008-07-14 2013-06-05 中兴通讯股份有限公司 Method for managing shared memory
CN101853215B (en) * 2010-06-01 2012-05-02 恒生电子股份有限公司 Memory allocation method and device
CN102455976B (en) * 2010-11-02 2015-09-23 上海宝信软件股份有限公司 Middleware memory management method
CN102004675A (en) * 2010-11-11 2011-04-06 福建星网锐捷网络有限公司 Cross-process data transmission method, device and network equipment
CN102263701B (en) * 2011-08-19 2017-03-22 中兴通讯股份有限公司 Queue regulation method and device
CN103678161B (en) * 2012-09-06 2016-08-03 中兴通讯股份有限公司 Memory management method and device
CN102968378B (en) * 2012-10-23 2016-06-15 融创天下(上海)科技发展有限公司 A kind of method of random memory, Apparatus and system
CN103810115B (en) * 2012-11-15 2017-10-13 深圳市腾讯计算机系统有限公司 Memory pool management method and device
CN103888827A (en) * 2012-12-20 2014-06-25 中山大学深圳研究院 Digital television application management layer system and method based on embedded kernel
CN103399821A (en) * 2013-06-28 2013-11-20 贵阳朗玛信息技术股份有限公司 jitterbuf memory processing method and device
CN103425592B (en) * 2013-08-05 2016-08-10 大唐移动通信设备有限公司 Memory management method and device in a multi-process system
CN103744736B (en) * 2014-01-09 2018-10-02 深圳Tcl新技术有限公司 The method and Linux terminal of memory management
CN103942150A (en) * 2014-04-01 2014-07-23 上海网达软件股份有限公司 Memory management method for real-time streaming media transmission system
CN105630779A (en) * 2014-10-27 2016-06-01 杭州海康威视系统技术有限公司 Hadoop distributed file system based small file storage method and apparatus
CN106155917A (en) * 2015-04-28 2016-11-23 北京信威通信技术股份有限公司 Memory management method and device
CN105138289A (en) * 2015-08-20 2015-12-09 上海联影医疗科技有限公司 Storage management method and device for computation module
CN105159837A (en) * 2015-08-20 2015-12-16 广东睿江科技有限公司 Memory management method
CN105159615A (en) * 2015-09-10 2015-12-16 上海斐讯数据通信技术有限公司 Dynamic memory control method and dynamic memory control system
CN105302738B (en) * 2015-12-09 2018-09-11 北京东土科技股份有限公司 A kind of memory allocation method and device
CN105700968A (en) * 2016-01-11 2016-06-22 厦门雅迅网络股份有限公司 Method and device for memory leakage diagnosis processing in embedded system
CN105718319B (en) * 2016-02-23 2019-03-15 中国科学院微电子研究所 A kind of memory pool domain analytic method and memory pool device
CN106550010A (en) * 2016-09-21 2017-03-29 南京途牛科技有限公司 A kind of real-time control distributed system calls external system to service the method and system of the frequency
CN107168890B (en) * 2017-04-01 2021-03-19 杭州联吉技术有限公司 Memory pool management method and device
CN107273141B (en) * 2017-07-10 2020-12-29 无锡走向智能科技有限公司 Embedded real-time operating system
CN110162395B (en) * 2018-02-12 2021-07-20 杭州宏杉科技股份有限公司 Memory allocation method and device
CN109614240A (en) * 2018-12-13 2019-04-12 锐捷网络股份有限公司 Memory request method, device and storage medium
CN110928680B (en) * 2019-11-09 2023-09-12 上交所技术有限责任公司 Order memory allocation method suitable for securities trading system
CN112685188A (en) * 2021-03-22 2021-04-20 四川九洲电器集团有限责任公司 Embedded memory management method and device based on global byte array
CN114020461B (en) * 2021-11-03 2022-10-11 无锡沐创集成电路设计有限公司 Memory allocation method, system, storage medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1393780A (en) * 2001-06-28 2003-01-29 华为技术有限公司 Adaptive dynamic memory management method
CN1427342A (en) * 2001-12-21 2003-07-02 上海贝尔有限公司 Memory management system and its allocation method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1393780A (en) * 2001-06-28 2003-01-29 华为技术有限公司 Adaptive dynamic memory management method
CN1427342A (en) * 2001-12-21 2003-07-02 上海贝尔有限公司 Memory management system and its allocation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391253A (en) * 2017-06-08 2017-11-24 珠海金山网络游戏科技有限公司 Method for reducing system memory allocation release conflict
CN107391253B (en) * 2017-06-08 2020-12-08 珠海金山网络游戏科技有限公司 Method for reducing system memory allocation release conflict

Also Published As

Publication number Publication date
CN1722106A (en) 2006-01-18

Similar Documents

Publication Publication Date Title
CN100359489C (en) Method for internal memory allocation in the embedded real-time operation system
CN100407152C (en) Methods and systems for multi-policy resource scheduling
CN101799797B (en) Dynamic allocation method of user disk quota in distributed storage system
US20110113215A1 (en) Method and apparatus for dynamic resizing of cache partitions based on the execution phase of tasks
CN102426542A (en) Resource management system for data center and operation calling method
CN103246612B (en) A kind of method of data buffer storage and device
CN104679594B (en) A kind of middleware distributed computing method
CN110058932A (en) A kind of storage method and storage system calculated for data flow driven
CN101373445B (en) Method and apparatus for scheduling memory
CN101387987A (en) Storage device, method and program for controlling storage device
US20110106934A1 (en) Method and apparatus for controlling flow of management tasks to management system databases
CN103051691A (en) Subarea distribution method, device and distributed type storage system
CN108845958A (en) A kind of mapping of interleaver and dynamic EMS memory management system and method
CN101187884A (en) Resource management method and management system
CN101968755A (en) Application load change adaptive snapshot generating method
CN110727517A (en) Memory allocation method and device based on partition design
CN103425435A (en) Disk storage method and disk storage system
CN110321331A (en) The object storage system of storage address is determined using multistage hash function
CN105094751A (en) Memory management method used for parallel processing of streaming data
CN103793332B (en) Date storage method based on internal memory, device, processor and electronic equipment
CN106383671A (en) Block device storage cluster capability expansion system and method
CN101819459A (en) Heterogeneous object memory system-based power consumption control method
CN102439570A (en) Memory management method and device aiming at multi-step length non conformance memory access numa framework
Huang et al. Load balancing for clusters of VOD servers
CN103294609A (en) Information processing device, and memory management method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180427

Address after: California, USA

Patentee after: Global innovation polymerization LLC

Address before: 518057 Department of law, Zhongxing building, South hi tech Industrial Park, Nanshan District hi tech Industrial Park, Guangdong, Shenzhen

Patentee before: ZTE Corp.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080102