CN108121603B - Memory management method for embedded system

Info

Publication number
CN108121603B
CN108121603B
Authority
CN
China
Prior art keywords
memory
pool
block
batch
application
Prior art date
Legal status
Active
Application number
CN201711379808.5A
Other languages
Chinese (zh)
Other versions
CN108121603A (en)
Inventor
刘东栋
Current Assignee
Anhui Wantong Post And Telecommunications Co ltd
Original Assignee
Anhui Wantong Post And Telecommunications Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Wantong Post And Telecommunications Co ltd
Priority to CN201711379808.5A
Publication of CN108121603A
Application granted
Publication of CN108121603B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources to service a request
    • G06F9/5011: Allocation of resources, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016: Allocation of resources, the resource being the memory
    • G06F9/5022: Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

A memory management method for an embedded system solves the memory fragmentation and degraded allocation efficiency caused by large numbers of small memory allocations in embedded applications. The method comprises the following steps: centralized allocation and centralized release; generating a memory buffer pool through configuration, so that management structures do not allocate small memory directly from dynamic memory; managing the batch memory pools with queues; and fault-tolerant handling of memory bad blocks caused by program exceptions, isolating the damaged memory blocks. The invention effectively overcomes the memory fragmentation and allocation-efficiency problems that traditional memory methods cannot avoid, provides good fault tolerance, and makes programs run more robustly and stably.

Description

Memory management method for embedded system
Technical Field
The invention relates to the technical field of memory management for embedded operating systems, and in particular to a memory management method for an embedded operating system.
Background
In embedded systems running in the common real-address mode (where real addresses map one-to-one to physical addresses), many memory management methods exist. The common allocation methods have the following advantages and disadvantages:
(1) First fit. The VxWorks operating system adopts this allocation method. Allocation searches from the head of the free-partition list until a free partition large enough for the request is found; a piece of the required size is then split off and given to the requester, and the remainder stays in the free-partition list. This method tends to consume free partitions in the low-address part of memory while rarely touching the high-address part, preserving large free areas there and creating conditions for allocating large memory to large jobs arriving later. Its disadvantages are that the low-address part is repeatedly split, leaving many tiny, hard-to-use free areas, and that every search starts from the low end, which undoubtedly increases search overhead. It also causes memory fragmentation.
(2) Next fit (circular first fit). This method evolved from first fit. Instead of starting each search from the list head, it resumes from the free partition found last time, continuing until a partition that satisfies the request is found, from which a piece is carved off for the job. This spreads allocations more evenly across free partitions, but tends to destroy large free partitions.
(3) Best fit. This method always assigns the job the smallest free partition that can satisfy the request. To speed up searching, it requires all free areas to be sorted by size into an ascending list, so the first free area found to satisfy the request is necessarily optimal. In isolation this seems optimal, but in practice it is not: the space remaining after each allocation is minimal by construction, so many tiny, hard-to-use free areas accumulate in memory. The list must also be re-sorted after each allocation, which incurs additional overhead.
(4) Worst fit. The free-area list is sorted in descending order of size, and allocation is served directly from the first free partition on the list (if even that cannot satisfy the request, no partition can). At first sight this seems unreasonable, but it has strong intuitive appeal: after a program is placed in a large free area, the remaining free area is often still large, so a new, sizable program can still be loaded. Worst fit sorts in the opposite order from best fit; its queue pointer always points at the largest free area, where every search begins. It overcomes best fit's tendency to leave many small fragments, but reduces the chance of keeping large free areas, and free-area reclamation is as complicated as in best fit.
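For concreteness, the following is a minimal first-fit sketch in C over a singly linked free list; the structure layout and names are illustrative assumptions, not the VxWorks implementation. Every allocation walks the list from the head, which is exactly the search overhead and low-address splitting described above:

#include <stddef.h>

typedef struct free_block {
    size_t size;              /* usable bytes in this free partition */
    struct free_block *next;  /* next free partition in the chain */
} free_block;

static free_block *free_head; /* head of the free-partition list */

void *first_fit_alloc(size_t want)
{
    free_block **prev = &free_head;
    for (free_block *b = free_head; b != NULL; prev = &b->next, b = b->next) {
        if (b->size < want)
            continue;                          /* too small: keep searching */
        if (b->size >= want + sizeof(free_block)) {
            /* Split: carve the request off the front and keep the
             * remainder in the free list as a smaller partition. */
            free_block *rest = (free_block *)((char *)(b + 1) + want);
            rest->size = b->size - want - sizeof(free_block);
            rest->next = b->next;
            *prev = rest;
        } else {
            *prev = b->next;                   /* hand out the whole partition */
        }
        return (void *)(b + 1);
    }
    return NULL;                               /* no partition large enough */
}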
Protocol stacks in embedded router equipment (such as DHCP, RSVP, LDP, etc.) must instantly allocate and release large numbers of equal-sized small memories when many and varied routes are generated and oscillate, and the amount requested fluctuates over time. The allocation pattern in this situation is irregular allocation and release of small memories. Because the system has other processes that also allocate memory alongside this pattern, memory fragments form easily. Generally, the operating system maintains a free-memory linked list whose length grows as fragmentation grows; every allocation traverses this list once, so the longer the list, the lower the allocation efficiency. Once fragmentation exceeds a certain degree, no fragment can be combined into a large block, and requests for large blocks of memory begin to fail. The conventional memory management methods above cannot solve this problem, so a new memory allocation policy needs to be proposed.
Disclosure of Invention
The memory management method for an embedded system provided by the invention solves the memory fragmentation and degraded allocation efficiency caused by large numbers of small memory allocations in embedded applications.
To achieve this purpose, the invention adopts the following technical scheme:
A memory management method for an embedded system provides an index of batch-memory-allocation structures, used to index the many batch memory allocations of multiple protocols. The index assigns an index number to each kind of batch memory allocation and records the entry address of the corresponding batch memory management structure. When a protocol process needs to allocate memory, it finds the entry address of its batch memory management structure by index number and allocates from there. Each time, the batch memory management structure acquires a fixed-size batch of memory and provides it to the protocol process; when the batch falls idle, it is released together, embodying the idea of centralized allocation and centralized release.
Meanwhile, to prevent each protocol process's batch-allocation structure from requesting large blocks of memory directly from dynamic memory, the invention provides a large batch memory pool called a BLOCK. During initialization, a large block of memory with contiguous addresses (the size of N BLOCKs, where N is determined by actual conditions) is allocated statically and dedicated to batch memory allocation; only when this static memory is used up is a further BLOCK requested from dynamic memory. The BLOCK pool thus acts as a buffer between the batch memory allocation management structures and dynamic memory. Every batch allocation obtains its required batch of memory, called a POOL, from this pool; a POOL is a block of memory with contiguous addresses, divided into a certain number of small pieces of memory called UNITs, and the UNIT is the minimum unit of batch memory allocation.
The index of batch-allocation structures records the entry address of each batch memory management structure, and each such address points to one batch memory management structure, which manages the allocation, release, statistics, etc. of one kind of batch memory. The structure adopts queue management: it obtains a POOL from the BLOCK pool and hangs it on the appropriate queue of the management structure, and the protocol process obtains the small memory it needs from that queue. The batch memory management structure manages its POOLs through three chained queues, one for each POOL state: empty, full, and half-full (available). UNIT allocation always draws from the queue holding available POOLs. If a UNIT allocation makes a whole POOL full, that POOL is removed and hung on the full queue, ensuring a UNIT can always be allocated immediately without traversing the queue. If a UNIT release makes a whole POOL empty, the POOL is hung on the empty queue and then released back to the BLOCK pool according to a certain policy. Likewise, a BLOCK extended from dynamic memory is released back to dynamic memory once it is no longer used.
When an application module is first used, it registers an entry in the batch-allocation structure index and initializes the corresponding batch-allocation structure, which manages a certain number of BLOCKs in its initial state. The initial number of BLOCKs is generally determined case by case, on the premise that memory is guaranteed under normal use; during a burst of oscillation, the structure expands from dynamic memory. This improves efficiency and reduces fragmentation.
If an abnormal condition in the application module damages one or more small memory blocks, each block is checked for errors on allocation and on release; an erroneous block is isolated, so that memory allocation does not fail outright. Because the small memory is managed as a circular queue, isolating a bad block is very convenient: the pointer to that small memory is simply set to NULL, and the next allocation skips it directly and searches for the next available memory, achieving the isolation. The isolated memory does not stay isolated forever: when the whole POOL becomes idle, the POOL and its isolated blocks are released together.
The technical scheme of the invention comprises the following steps:
a memory management method of an embedded system comprises the following steps:
step 1: centralized allocation and centralized release; each allocation hands a protocol process a batch of equal-sized small memory pieces at once, and likewise, when the protocol process needs to release memory, the same batch is released back to dynamic memory together; meanwhile, an index table whose index numbers correspond one-to-one to management structures allows multiple protocol processes to perform batch memory allocation simultaneously;
step 2: a memory buffer pool is generated through configuration, so that management structures do not allocate small memory directly from dynamic memory; small blocks of memory are the source of memory fragmentation, so establishing the memory buffer pool greatly reduces fragment generation;
step 3: managing the batch memory pools with queues; this speeds up memory allocation and release, reduces how often batch memory pools are allocated from and released back to the memory buffer pool, and further improves memory utilization efficiency;
step 4: fault-tolerant handling of memory bad blocks caused by program exceptions; damaged memory blocks are isolated so that erroneous memory does not affect normal memory allocation and release, and isolated bad blocks can later be released back to the buffer pool for reuse.
Further, step 1 further comprises:
step 11: initializing a batch memory management index structure according to the size class and quantity of each service's memory;
step 12: initializing memory blocks of customized sizes, such as 64K or 1M bytes, under the batch memory management index structure; the block size is determined by actual usage requirements and passed in through initialization parameters.
Further, step 2 further comprises:
step 21: according to the size of memory used by the service, memory pools of a certain size are divided within the memory blocks; for example, a 64K-byte memory block is divided into 16 memory pools of 4K bytes each. The pools are divided according to the service's batch allocation and release characteristics, avoiding intra-block fragments and memory waste as far as possible.
Further, step 3 further comprises:
step 31: when a service needs to allocate memory, memory unit blocks are carved from the memory pool and returned to the user according to the memory-management index handle obtained at service initialization. The memory pool manages its memory units with queues, divided into an idle queue and an in-use queue. When all memory units are idle, the pool they belong to is idle and can be released back to its memory block's queue; when all the pools in a dynamic memory block are idle, the block is released back to the system memory pool; if it is a static memory block, it stays idle and is used when other services initialize batch memory.
Further, the memory pool in step 21 has three states, managed by three queues: a full queue (full list), an available queue (avail list), and an empty queue (empty list); according to the usage of its memory units in step 4, a pool sits in the queue corresponding to its current state.
The invention has the beneficial effects that:
1. Applying the centralized-allocation, centralized-release memory pattern to the many protocol processes of embedded router equipment effectively solves the memory fragmentation caused by allocating and releasing large numbers of small memories;
2. The multi-queue management strategy effectively solves the drop in allocation efficiency caused by a growing memory-list length;
3. It solves the performance problem of a whole memory block being unable to return to the memory pool because a few small pieces of memory are occupied long-term;
4. It solves the failure to allocate memory caused by memory bad blocks. In a word, the invention effectively overcomes the memory fragmentation and allocation-efficiency problems that traditional memory methods cannot avoid, has good fault tolerance, and makes programs run more robustly and stably.
Drawings
FIG. 1 is a diagram of the index array of memory management structures according to the present invention;
FIG. 2 is the BLOCK memory management structure of the present invention;
FIG. 3 is the structure of a batch memory BLOCK of the present invention;
FIG. 4 is a schematic diagram of the POOL management structure of the present invention;
FIG. 5 is a schematic structural diagram of the memory POOL of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a schematic diagram of the index array of memory management structures; the functions it implements are as follows:
A system may contain several protocol types with batch-allocation requirements of different memory sizes; for example, protocol process A needs 32-byte batch allocations while protocol process B needs 64-byte batch allocations, and different modules of process A may themselves need batch allocations of different sizes. Batch-allocation requirements of each size must therefore be distinguished, and each corresponds to one batch-allocation management structure. The invention implements this with a global M×2 two-dimensional array, called the memory management structure index array, where the size of M is determined by actual conditions. Element [n][0] of the array records the index value, and [n][1] records a pointer to the corresponding batch-allocation management structure. After initialization, the index values Index1 through IndexM are all set to 0 and the corresponding pointers are all set to null. The initialized index array is shown in FIG. 1.
When a protocol process needs batch allocation of a certain size, the whole index array is first traversed to find the first element whose index value [i-1][0] is 0; that element is set to i, the current array subscript plus one, which serves as the index. The index uniquely identifies one kind of batch memory. A batch-allocation management structure is then initialized, and its pointer is recorded in [i][1] of the i-th entry of the index array; this pointer serves as the entry address for batch allocation under that protocol process's index. Conversely, when a batch-allocation management structure is deleted, its index value is set back to 0 and its pointer to null.
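A minimal sketch of this registration and deletion in C; struct batch_mgr, M, and the function names are illustrative assumptions, since the patent specifies only the semantics of the M×2 array:

#include <stdint.h>
#include <stddef.h>

#define M 64  /* capacity of the index array; the patent leaves M to actual conditions */

struct batch_mgr;  /* batch memory allocation management structure (opaque here) */

/* The M x 2 index array: [n][0] holds the index value, [n][1] the pointer.
 * Modeled here as an array of rows for type safety. */
static struct {
    uintptr_t index;        /* 0 means the slot is unused */
    struct batch_mgr *mgr;  /* entry address of the management structure */
} index_array[M];

/* Register a management structure; returns its index, or 0 if the array is full. */
uintptr_t index_register(struct batch_mgr *mgr)
{
    for (unsigned i = 0; i < M; i++) {
        if (index_array[i].index == 0) {
            index_array[i].index = i + 1;  /* subscript plus one, as described */
            index_array[i].mgr = mgr;
            return i + 1;
        }
    }
    return 0;
}

/* Delete: set the index value back to 0 and the pointer to null. */
void index_unregister(uintptr_t idx)
{
    if (idx >= 1 && idx <= M) {
        index_array[idx - 1].index = 0;
        index_array[idx - 1].mgr = NULL;
    }
}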
The BLOCK management structure shown in FIG. 2 is implemented as follows:
The batch allocation method does not request memory POOLs directly from dynamic memory; it first requests one large memory pool, a BLOCK, from dynamic memory. Its size is 64K or more (256K or 1M); the BLOCK size is determined by actual needs, and these figures are only examples. A BLOCK holds a number of equal-sized small memory POOLs; for example, a 64K BLOCK can be cut into 16 POOLs of 4K each. A POOL contains a number of small memory UNITs; such a POOL is one batch of memory, allocated and released as a whole by the batch-allocation management structure. Since every POOL has the same size, batch allocations of different index types can all take POOLs from the same large block. During initialization, a large block of memory with contiguous addresses (N BLOCKs, with N determined by actual conditions) is requested from dynamic memory and dedicated to batch allocation; these N BLOCKs are called static BLOCKs and are never released back to dynamic memory. When the static memory is used up, a further BLOCK is requested from dynamic memory; this is called a dynamic BLOCK and is released back to dynamic memory once it is completely idle. The BLOCK pool thus buffers between the batch-allocation management structures and dynamic memory. The BLOCK memory pools are connected by a doubly linked list, and a BLOCK management structure is defined to manage BLOCK allocation, release, and statistics, as shown in the figure. Each BLOCK has a forward pointer to its predecessor and a backward pointer to its successor, forming a bidirectional chain in which a desired BLOCK is easily found.
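A sketch of the BLOCK chain in C, under assumed field names; the patent specifies only the forward/backward pointers, the static/dynamic distinction, and allocation/release/statistics management:

#include <stdbool.h>
#include <stddef.h>

#define BLOCK_SIZE (64 * 1024)   /* 64K; the patent also allows 256K, 1M, etc. */

/* One BLOCK on the doubly linked chain. Only the two pointers and the
 * static/dynamic distinction come from the text; the other fields are
 * assumed bookkeeping. */
struct block {
    struct block *prev;            /* forward pointer: previous BLOCK */
    struct block *next;            /* backward pointer: next BLOCK */
    bool is_static;                /* static BLOCKs are never released */
    unsigned pools_in_use;         /* statistics: POOLs currently handed out */
    unsigned char mem[BLOCK_SIZE]; /* memory to be cut into equal POOLs */
};

/* BLOCK management structure: chain head plus statistics. */
struct block_mgr {
    struct block *head;
    unsigned total_blocks;
    unsigned dynamic_blocks;       /* BLOCKs extended from dynamic memory */
};

/* Unlink a completely idle dynamic BLOCK so it can be freed back to
 * dynamic memory; static BLOCKs stay on the chain forever. */
static void block_unlink(struct block_mgr *m, struct block *b)
{
    if (b->is_static || b->pools_in_use != 0)
        return;
    if (b->prev) b->prev->next = b->next; else m->head = b->next;
    if (b->next) b->next->prev = b->prev;
    m->total_blocks--;
    m->dynamic_blocks--;
}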
FIG. 3 shows the internal structure of a BLOCK. Because one BLOCK serves batch allocations under different indexes, the invention divides the BLOCK into POOLs of equal size (e.g. 4K) rather than POOLs sized per request, which greatly benefits generality. With unequal POOLs, size mismatches arise: for example, if two requesters take a 3K and a 5K POOL and the 3K POOL sits between two 5K POOLs, then when the 3K POOL is released, a 5K batch allocation cannot use it, wasting memory. With equal sizes, a POOL released by one process can still be used by any later requester. Equal-sized POOLs do mean that different processes get different numbers of small memories per POOL, but this has no great impact on their memory use. POOLs within the same BLOCK are likewise linked by a doubly linked list, and when requesting a POOL, one can reach it through the list entry node pointer in the BLOCK management header structure.
FIG. 4 is a schematic diagram of the batch-allocation management structure, the entry point for batch memory allocation. It records settings and statistics for memory use, such as an initialization flag, index check, memory unit size setting, usage statistics, and the like.
A memory POOL has three states: full, partially used, and empty. The batch-allocation management structure therefore defines three management queues: the full queue (full list), the available queue (avail list), and the empty queue (empty list). The full queue manages full POOLs, the available queue manages partially used POOLs, and the empty queue manages empty POOLs; the POOLs on each queue are linked by a doubly linked list. Initially the full and available queues are empty, and a certain number of POOLs (generally set to 3 to 6) hang on the empty queue. The management strategy is: (1) a small memory UNIT is always allocated from the available queue (avail list), normally from its first POOL; when that POOL is used up it becomes a full POOL, is removed from the available queue, and is hung at the tail of the full queue (full list); (2) when a memory is released and its POOL was full beforehand (sitting on the full queue), the POOL is removed from the full queue and hung at the tail of the available queue; (3) when a memory is released and its POOL was in the available state (on the available queue), the POOL is checked after the release: if it is not empty, nothing is done; if it is empty, it is removed from the available queue and hung on the empty queue (empty list); (4) when the number of POOLs on the empty queue exceeds the initialized count (3 to 6), the surplus idle POOLs are released back to the BLOCK.
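The following C sketch shows this four-rule policy; the types and helper names (pool, pool_list, free_slot, list_remove, list_push_tail) are illustrative assumptions, not identifiers from the patent:

#include <stddef.h>

struct pool {
    struct pool *prev, *next;
    unsigned free_units, total_units;
    void **free_slot;          /* next free UNIT address (simplified; FIG. 5
                                  actually uses a circular queue) */
};

struct pool_list { struct pool *head, *tail; };

static struct pool_list full_q, avail_q, empty_q;

static void list_remove(struct pool_list *q, struct pool *p)
{
    if (p->prev) p->prev->next = p->next; else q->head = p->next;
    if (p->next) p->next->prev = p->prev; else q->tail = p->prev;
    p->prev = p->next = NULL;
}

static void list_push_tail(struct pool_list *q, struct pool *p)
{
    p->prev = q->tail; p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* (1) Allocate a UNIT: always from the first POOL on the avail queue. */
void *unit_alloc(void)
{
    struct pool *p = avail_q.head;
    if (p == NULL)
        return NULL;              /* real code would refill from empty_q/BLOCK */
    void *u = *p->free_slot++;    /* take the next recorded UNIT address */
    if (--p->free_units == 0) {   /* POOL became full: move to full queue */
        list_remove(&avail_q, p);
        list_push_tail(&full_q, p);
    }
    return u;
}

/* (2)(3) Release a UNIT back into its POOL. */
void unit_free(struct pool *p, void *u)
{
    *--p->free_slot = u;          /* record the address back */
    if (p->free_units++ == 0) {   /* was full: full -> tail of avail queue */
        list_remove(&full_q, p);
        list_push_tail(&avail_q, p);
    } else if (p->free_units == p->total_units) {
        list_remove(&avail_q, p); /* became empty: avail -> empty queue */
        list_push_tail(&empty_q, p);
        /* (4) surplus empty POOLs beyond the initialized 3-6 go back to BLOCK */
    }
}

Because allocation is pinned to the first POOL on the avail queue, both paths run in constant time without traversing any list, which is the speed advantage described next.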
Compared with a single queue, three chained queues make management simpler and allocation faster. First, the full queue removes exhausted POOLs from consideration, leaving only available POOLs hung on the avail queue; an allocation goes straight to the avail list, whose first POOL always has a free small memory UNIT, so allocation is greatly accelerated and never traverses the linked list searching. Second, the empty queue acts as a buffer: with only one queue, any POOL that fell idle would be released back to the large block of memory, increasing how often POOLs are requested from the BLOCK, and each POOL re-obtained from a BLOCK must be initialized, computed, split, and filled again, a relatively time-consuming process that degrades performance if done too often. The empty queue greatly reduces requests for POOLs from the BLOCK: up to 3 to 6 idle POOLs (set at initialization) are kept on hand, and idle POOLs are released into the large block only when their number exceeds the initialized count, providing buffering and reducing the number of POOL initializations.
The three queues also overcome the hidden fragmentation problem of a whole large block being unreleasable because some small memories stay occupied long-term: since one index generally serves only one application protocol, the chance of memories with different occupation lifetimes sharing one large block is fundamentally reduced; and since small memory is always allocated from the first POOL on the avail queue, the randomness of allocation drops sharply, greatly reducing fragmentation from long-occupied small memories.
FIG. 5 is a schematic structural diagram of the memory POOL.
Because all the small UNIT memories within a POOL have the same size, the number of UNITs in a POOL is determined by the UNIT size configured at initialization, according to the formula:
number of UNITs = (total POOL size - POOL header size) / UNIT size
If the POOL does not divide evenly, the leftover memory that was not divided is filled with a special value.
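A worked instance of the formula in C, with assumed sizes: a 4K POOL, an illustrative 64-byte header, 40-byte UNITs, and an assumed pad value, since the patent does not specify the special fill value:

#include <string.h>

#define POOL_SIZE   4096u
#define POOL_HDR    64u          /* assumed POOL header size */
#define UNIT_SIZE   40u
#define PAD_BYTE    0xA5         /* assumed "special value" for leftover bytes */

#define UNIT_COUNT  ((POOL_SIZE - POOL_HDR) / UNIT_SIZE)   /* (4096-64)/40 = 100 */
#define LEFTOVER    ((POOL_SIZE - POOL_HDR) % UNIT_SIZE)   /* 4032 % 40 = 32 */

/* Fill the remainder that does not divide evenly with the pad value. */
static void pool_fill_pad(unsigned char *pool)
{
    memset(pool + POOL_HDR + UNIT_COUNT * UNIT_SIZE, PAD_BYTE, LEFTOVER);
}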
Besides the pointers to the next and previous POOL, the POOL management structure has a pointer to the head and a pointer to the tail of the recorded UNIT address area, plus a counting semaphore. The mechanism is as follows: when a UNIT is allocated, the head address currently pointed to by the head pointer of the UNIT address area is taken, the head pointer is advanced, and the counting semaphore is decremented by one; when a UNIT is released, the tail pointer is advanced, the counting semaphore is incremented by one, and the head address of that UNIT is recorded in the UNIT address area.
When a UNIT is destroyed, to keep already-allocated memory unaffected, the invention adopts memory bad-block isolation, as shown in FIG. 5: when verification at allocation time fails, the pointer to that memory in the pointer area of the POOL management header structure is set to NULL. When the head pointer reaches that address and finds it is NULL, the memory is not allocated; the entry is skipped and the search continues backward for an available UNIT. This is equivalent to isolating the memory, and the statistics of the POOL management structure are adjusted accordingly; a field counting the number of damaged memory units is also added to the POOL management structure. When everything in the POOL except the isolated memory is idle, the whole POOL is released. The isolated bad block returns to the buffer pool, and that block of memory is reused after reinitialization at the next allocation.
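A C sketch of the circular UNIT-address queue with bad-block isolation; field and function names are illustrative, and the counting semaphore is modeled as a plain counter where a real RTOS would use its counting-semaphore API:

#include <stddef.h>

#define RING_SIZE 128            /* >= number of UNITs in the POOL */

struct pool_hdr {
    void *ring[RING_SIZE];       /* recorded UNIT head addresses */
    unsigned head;               /* allocation side of the circular queue */
    unsigned tail;               /* release side of the circular queue */
    unsigned count_sem;          /* counting semaphore: free UNITs */
    unsigned bad_units;          /* statistics: isolated damaged UNITs */
};

/* Allocate: take the address under the head pointer, advance it, and
 * decrement the semaphore; NULL entries are isolated bad blocks and are
 * skipped while searching backward for the next available UNIT. */
void *pool_alloc_unit(struct pool_hdr *p)
{
    unsigned scanned = 0;
    while (p->count_sem > 0 && scanned++ < RING_SIZE) {
        void *u = p->ring[p->head];
        p->head = (p->head + 1) % RING_SIZE;
        if (u != NULL) {
            p->count_sem--;
            return u;
        }
        /* NULL slot: skip the isolated block; it stays out of circulation
         * until the whole POOL is released and reinitialized. */
    }
    return NULL;
}

/* Release: record the UNIT's head address at the tail pointer, advance it,
 * and increment the semaphore. A UNIT that fails verification is isolated
 * by recording NULL instead, so future allocations skip it. */
void pool_free_unit(struct pool_hdr *p, void *u, int corrupted)
{
    if (corrupted) {
        p->ring[p->tail] = NULL;
        p->tail = (p->tail + 1) % RING_SIZE;
        p->bad_units++;          /* the damaged-unit statistics field */
        return;                  /* not counted as a free UNIT */
    }
    p->ring[p->tail] = u;
    p->tail = (p->tail + 1) % RING_SIZE;
    p->count_sem++;
}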
The above embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope; any modifications and improvements to the technical solutions of the present invention made by those skilled in the art without departing from its design spirit shall fall within the protection scope of the present invention.

Claims (5)

1. A memory management method of an embedded system, characterized in that the method comprises the following steps:
step 1: centralized allocation and centralized release; each allocation hands a protocol process a batch of equal-sized small memory pieces at once, and likewise, when the protocol process needs to release memory, the same batch is released back to dynamic memory together; meanwhile, an index table whose index numbers correspond one-to-one to management structures allows multiple protocol processes to perform batch memory allocation simultaneously;
step 2: a memory buffer pool is generated through configuration, so that management structures do not allocate small memory directly from dynamic memory; the batch allocation method does not request memory POOLs directly from dynamic memory but first requests one large memory pool, a BLOCK, of size 64K or larger; one BLOCK holds several equal-sized small memory POOLs, and one POOL comprises several small memory UNITs, so that one POOL is one batch of memory, allocated and released as a whole by the batch-allocation management structure; since every POOL is the same size, batch allocations of different index types can all take POOLs from the large block; during initialization, a large block of memory with contiguous addresses is requested from dynamic memory and dedicated to batch allocation, the N BLOCKs are called static BLOCKs, and static BLOCKs are never released to dynamic memory; when the static memory is used up, a BLOCK is requested from dynamic memory, called a dynamic BLOCK, which is released back to dynamic memory when it is idle, so that a buffer is formed between the batch-allocation management structure and dynamic memory; the BLOCK memory pools are connected by a doubly linked list, and a BLOCK management structure is defined to manage BLOCK allocation, release, and statistics;
step 3: managing the batch memory pools with queues;
step 4: fault-tolerant handling of memory bad blocks caused by program exceptions; damaged memory blocks are isolated so that erroneous memory does not affect normal memory allocation and release, and isolated bad blocks can later be released back to the buffer pool for reuse.
2. The memory management method of an embedded system according to claim 1, wherein step 1 further comprises:
step 11: initializing a batch memory management index structure according to the size class and quantity of each service's memory;
step 12: initializing memory blocks of customized sizes under the batch memory management index structure, the block sizes being determined by actual usage requirements and passed in through initialization parameters.
3. The memory management method of an embedded system according to claim 2, wherein step 2 further comprises:
step 21: dividing memory pools of a determined size within the memory block according to the size of memory the service uses, the division following the service's batch allocation and release characteristics and avoiding intra-block fragments and memory waste as far as possible.
4. The memory management method of an embedded system according to claim 3, wherein step 3 further comprises:
step 31: when a service needs to allocate memory, memory unit blocks are carved from the memory pool and returned to the user according to the memory-management index handle obtained at service initialization; the memory pool manages its memory units with queues, divided into an idle queue and an in-use queue; when all memory units are idle, the pool they belong to is idle and can be released back to its memory block's queue; when all the pools in a dynamic memory block are idle, the block is released back to the system memory pool; if it is a static memory block, it stays idle and is used when other services initialize batch memory.
5. The memory management method of an embedded system according to claim 4, wherein the memory pool in step 21 has three states, managed by a full queue (full list), an available queue (avail list), and an empty queue (empty list); according to the usage of its memory units in step 4, the memory pool sits in the queue corresponding to its state.
CN201711379808.5A 2017-12-20 2017-12-20 Memory management method for embedded system Active CN108121603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711379808.5A CN108121603B (en) 2017-12-20 2017-12-20 Memory management method for embedded system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711379808.5A CN108121603B (en) 2017-12-20 2017-12-20 Memory management method for embedded system

Publications (2)

Publication Number Publication Date
CN108121603A CN108121603A (en) 2018-06-05
CN108121603B (en) 2021-11-02

Family

ID=62229516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711379808.5A Active CN108121603B (en) 2017-12-20 2017-12-20 Memory management method for embedded system

Country Status (1)

Country Link
CN (1) CN108121603B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508235B (en) * 2018-09-28 2020-12-15 深圳市紫光同创电子有限公司 Memory pool management method and device and computer readable storage medium
CN109684232B (en) * 2018-10-23 2021-09-14 许继集团有限公司 Embedded protocol stack memory management method
CN110109677B (en) * 2019-05-07 2023-08-29 北京善讯互动科技有限公司 Dynamic object cache pool allocation method
CN110727514A (en) * 2019-10-12 2020-01-24 北京无线电测量研究所 Memory management method based on index queue and embedded equipment
CN111162937B (en) * 2019-12-20 2023-05-16 北京格林威尔科技发展有限公司 Method and device for realizing memory pool in transmission equipment
CN112231128B (en) * 2020-09-11 2024-06-21 中科可控信息产业有限公司 Memory error processing method, device, computer equipment and storage medium
CN114518961A (en) * 2022-02-24 2022-05-20 上海金卓科技有限公司 Method and device for managing dynamic memory of real-time operating system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1635482A (en) * 2003-12-29 2005-07-06 北京中视联数字系统有限公司 A memory management method for embedded system
CN103631721A (en) * 2012-08-23 2014-03-12 华为技术有限公司 Method and system for isolating bad blocks in internal storage
CN106681829A (en) * 2016-12-09 2017-05-17 上海斐讯数据通信技术有限公司 Memory management method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10108539B2 (en) * 2013-06-13 2018-10-23 International Business Machines Corporation Allocation of distributed data structures
US9740481B2 (en) * 2013-12-03 2017-08-22 Samsung Electronics Co., Ltd. Electronic device and method for memory allocation in electronic device


Also Published As

Publication number Publication date
CN108121603A (en) 2018-06-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant