CN117435352B - Lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data - Google Patents
- Publication number
- CN117435352B CN117435352B CN202311757383.2A CN202311757383A CN117435352B CN 117435352 B CN117435352 B CN 117435352B CN 202311757383 A CN202311757383 A CN 202311757383A CN 117435352 B CN117435352 B CN 117435352B
- Authority
- CN
- China
- Prior art keywords
- memory
- length
- variable
- fixed
- idle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0253—Garbage collection, i.e. reclamation of unreferenced memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The specification discloses a lightweight memory optimization allocation method for mixed management of fixed-length and variable-length data, in the technical field of data memory allocation. The method comprises: pre-allocating memory space based on the determined characteristics of the stored data, to obtain a fixed-length data storage space and a variable-length data storage space; setting up a linked list of free memory blocks for each space; when an application requests storage of variable-length data, selecting the foremost block in the variable-length free list whose space is greater than or equal to the requested size and cutting it; when an application requests storage of fixed-length data, selecting the foremost block in the fixed-length free list whose space equals the requested size; allocating several remaining memory blocks to the same piece of variable-length data; and sorting the free memory blocks in the variable-length free list after allocation and judging whether adjacent blocks can be merged. This solves the problems of the existing data memory allocation methods, in which memory fragmentation optimization is too heavyweight, occupies excessive resources, and incurs high performance and time costs.
Description
Technical Field
The invention belongs to the technical field of data memory allocation, and particularly relates to a lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data.
Background
In embedded database systems, allocating data to memory sensibly is critical to application performance. Three methods currently dominate data memory allocation: instant (on-demand) allocation, memory pool allocation, and adaptive model-based allocation using machine learning.
Instant application allocation dynamically allocates memory as needed while the program runs. It is the most common approach: the application requests memory through standard allocation functions (such as malloc and new) and releases it through the corresponding functions (such as free and delete). Its advantages are high flexibility, efficient use of memory, and simple implementation; it suits data objects of all kinds and sizes, allocates memory only when needed, and wastes no memory resources. However, the frequent allocation and release operations increase the overhead of the memory allocator, reduce performance, and can produce a large amount of memory fragmentation, particularly in long-running applications.
Memory pool allocation is a custom memory management method in which an application pre-allocates a number of memory blocks at initialization and then allocates and releases those blocks as needed. Memory pools are typically used to manage large numbers of objects of the same size. They reduce allocation overhead and the frequency of allocation and release operations, improving performance, and because the blocks are uniformly sized the pool generates little fragmentation. The method is also highly flexible: the pool size and allocation strategy can be customized to the application's requirements.
Adaptive model building and allocation based on machine learning is a newer method that uses machine learning techniques to build a memory allocation model dynamically. The model predicts a suitable allocation strategy from the application's behavior and data characteristics, thereby optimizing memory management. It can adaptively select the best allocation strategy for the application's requirements and performance indicators and minimize memory fragmentation, since the model can predict the allocation pattern that produces the fewest fragments.
Nevertheless, in the existing data memory allocation methods, memory fragmentation optimization remains too heavyweight: it occupies excessive resources and incurs high performance and time costs.
Disclosure of Invention
The invention aims to provide a lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data, so as to solve the problems of the existing data memory allocation methods, in which memory fragmentation optimization is too heavyweight, occupies excessive resources, and incurs high performance and time costs.
In order to achieve the above purpose, the invention adopts the following technical scheme:
On the one hand, the specification provides a lightweight memory optimization allocation method for mixed management of fixed-length and variable-length data, comprising the following steps:
pre-allocating the memory space based on the determined characteristics of the stored data, to obtain an original data storage space, a fixed-length data storage space and a variable-length data storage space;
setting a linked list of free memory blocks in the fixed-length data storage space and the variable-length data storage space respectively, to obtain a variable-length free linked list and a fixed-length free linked list;
when storage of variable-length data is requested, selecting the foremost memory block in the variable-length free linked list whose memory space is greater than or equal to the requested space and cutting it, to obtain the memory block granted to the variable-length data and a remaining memory block; when storage of fixed-length data is requested, selecting, based on the length of the fixed-length data, the foremost memory block in the fixed-length free linked list whose memory space equals the requested space and allocating it to the fixed-length data;
establishing a mapping table, and connecting a plurality of remaining memory blocks so that they are allocated to the same piece of variable-length data;
and sorting the free memory blocks in the variable-length free linked list after memory allocation, judging from the addresses and sizes of the free memory blocks whether adjacent blocks can be merged, and if so, merging them to obtain the merged variable-length free linked list.
On the other hand, the present specification provides a lightweight memory optimization allocation device for mixed management of fixed-length and variable-length data, comprising:
a space pre-allocation module, used for pre-allocating the memory space based on the determined characteristics of the stored data, to obtain an original data storage space, a fixed-length data storage space and a variable-length data storage space;
a free linked list setting module, used for setting a linked list of free memory blocks in the fixed-length data storage space and the variable-length data storage space respectively, to obtain a variable-length free linked list and a fixed-length free linked list;
a memory request execution and allocation module, used, when storage of variable-length data is requested, for selecting the foremost memory block in the variable-length free linked list whose memory space is greater than or equal to the requested space and cutting it, to obtain the memory block granted to the variable-length data and a remaining memory block; and, when storage of fixed-length data is requested, for selecting, based on the length of the fixed-length data, the foremost memory block in the fixed-length free linked list whose memory space equals the requested space and allocating it to the fixed-length data;
a large-memory-fragment optimization module, used for establishing a mapping table and connecting a plurality of remaining memory blocks so that they are allocated to the same piece of variable-length data;
and a small-memory-fragment optimization module, used for sorting the free memory blocks in the variable-length free linked list after memory allocation, judging from the addresses and sizes of the free memory blocks whether adjacent blocks can be merged, and if so, merging them to obtain the merged variable-length free linked list.
Based on the technical scheme, the following technical effects can be obtained in the specification:
based on the prior art of memory pool management, the method extends the strategy of variable-length data separation storage on the basis of memory Chi Ding long data storage management, uses the characteristics of data stored in an application memory to separate and store different data types, performs preprocessing on a distribution space, then performs space application processing, uses a large memory fragment combination chain to use in the process, and performs memory fragment optimization of the adaptive storage content in a small memory fragment optimization combination mode, and the two modes are combined to greatly reduce system resources occupied by a memory fragment optimization process, so that the system turnaround time caused by the optimization process of the traditional method is basically eliminated, and the space allocation application of a real-time database is more adapted, thereby solving the problems of excessive weight, excessive occupied resources and high performance cost and time cost of the memory fragment optimization of the existing data memory allocation method.
Drawings
Fig. 1 is a flow chart of a lightweight memory optimization allocation method for mixed management of fixed-length and variable-length data according to an embodiment of the invention.
FIG. 2 is a diagram illustrating memory allocation according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a lightweight memory optimization allocation method for mixed management of fixed-length and variable-length data according to an embodiment of the invention.
Fig. 4 is a schematic structural diagram of a lightweight memory optimization allocation device for mixed management of fixed-length and variable-length data according to an embodiment of the invention.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The advantages and features of the present invention will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. It should be noted that the drawings are in a very simplified form and are not drawn to precise scale; they serve merely to aid in describing the embodiments of the invention conveniently and clearly.
It should be noted that, for clarity, the embodiments below specifically illustrate different implementations of the present invention; the embodiments listed are not exhaustive. Furthermore, for simplicity of explanation, details mentioned in an earlier embodiment are often omitted in a later one, so for anything not mentioned in a later embodiment, refer back to the earlier embodiment.
Example 1
Referring to fig. 1, fig. 1 illustrates the lightweight memory optimization allocation method for mixed management of fixed-length and variable-length data according to the present embodiment. In this embodiment, the method includes:
step 102, pre-allocating the memory space based on the determined characteristics of the stored data, to obtain an original data storage space, a fixed-length data storage space and a variable-length data storage space;
in this embodiment, the stored data includes database data and temporary variable requests; the database data is stored in the fixed-length data storage space and the variable-length data storage space, and temporary variable requests are stored in the variable-length data storage space.
Specifically, the method manages the memory space requested when the database program starts: all memory space in the heap is requested through this method, which performs its allocation and management.
Based on the requester of the memory space, memory usage can be divided into database data storage and temporary variable requests. To facilitate allocation and management of the memory space, the data stored for the database is divided into fixed-length data storage and variable-length data storage. Temporary variables are used and then deleted immediately, and the memory size requested for them is not necessarily the same each time, so temporary variable requests are also classed as variable-length data storage.
In this embodiment, one implementation manner of step 102 is:
step 202, storing the database data and data table metadata information at the initial position of the memory space, to obtain an original data storage space and the remaining memory;
step 204, dividing the remaining memory into an upper half and a lower half;
step 206, using the upper half part for variable length data storage to obtain a variable length data storage space; and using the lower half part for fixed-length data storage to obtain a fixed-length data storage space.
Specifically, the method is based on an analysis of the characteristics of the stored data. After acquiring the memory space to be managed, the data characteristics, and other such information, the data information of the database and its data tables is first stored at the initial position of the managed memory, and the remaining memory is then divided into an upper part and a lower part for management: the upper half is used for allocating variable-length data, and the lower half for allocating fixed-length data.
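The pre-partitioning of steps 202 to 206 can be sketched as follows. This is a minimal illustration in C; the struct and function names are illustrative, not taken from the patent, and offsets stand in for real addresses.

```c
#include <stddef.h>

/* Illustrative sketch: after the metadata (original data) region at the start
 * of the managed heap, the remaining memory is split into an upper half for
 * variable-length data and a lower half for fixed-length data. */
typedef struct {
    size_t meta_end;   /* end offset of the original data storage space */
    size_t var_begin;  /* variable-length region: [var_begin, var_end) */
    size_t var_end;
    size_t fix_begin;  /* fixed-length region: [fix_begin, fix_end) */
    size_t fix_end;
} Partition;

Partition partition_heap(size_t heap_size, size_t meta_size) {
    Partition p;
    p.meta_end = meta_size;
    size_t half = (heap_size - meta_size) / 2;
    p.var_begin = meta_size;       /* upper half: variable-length data */
    p.var_end   = meta_size + half;
    p.fix_begin = p.var_end;       /* lower half: fixed-length data */
    p.fix_end   = heap_size;
    return p;
}
```

With a 1024-byte heap and 128 bytes of metadata, the variable-length half covers offsets 128-576 and the fixed-length half 576-1024.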
Step 104, setting a linked list of free memory blocks in the fixed-length data storage space and the variable-length data storage space respectively, to obtain a variable-length free linked list and a fixed-length free linked list;
in this embodiment, a plurality of nodes are stored in each linked list, and each node stores the start address and length of a free memory block. The start address stored in the variable-length free linked list is the head address of the memory block, and data is allocated from head to tail; the start address stored in the fixed-length free linked list is the tail address of the memory block, and data is allocated from tail to head.
Specifically, the memory space is divided into a fixed-length part and a variable-length part, each with its own linked list of free memory blocks, and each node in the list stores the start address and length of one block. At the beginning there is obviously only one memory block, spanning the whole of each managed space: the single node in the variable-length free list and the single node in the fixed-length free list each point to their entire block, and the lengths stored in the two nodes are the same. The difference is that the variable-length part occupies the upper half of the managed memory, its free-list node stores the head address of the block, and data is allocated from head to tail; the fixed-length part occupies the lower half, its free-list node stores the tail address of the block, and data is allocated from tail to head. That is, fixed-length data is allocated from the tail of the memory block toward its head and stored in reverse order.
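A minimal sketch of this node layout and the initial single-node state, with illustrative names (the patent does not give a concrete structure definition):

```c
#include <stddef.h>

/* Each free-list node stores a start address and a length. For the
 * variable-length list the address is the block's head; for the fixed-length
 * list it is the block's tail, since allocation there runs tail to head. */
typedef struct FreeNode {
    size_t addr;  /* variable list: head address; fixed list: tail address */
    size_t len;   /* block length in bytes */
    struct FreeNode *next;
} FreeNode;

/* At initialization each half's list holds one node spanning the whole half. */
void init_half(FreeNode *n, size_t begin, size_t end, int fixed) {
    n->addr = fixed ? end : begin;  /* fixed half records the tail address */
    n->len  = end - begin;
    n->next = NULL;
}
```

For equal halves the two initial nodes store the same length and differ only in which end of their block the address points to.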
In this embodiment, after step 104, the method further includes:
step 302, calculating the lengths of the various pieces of fixed-length data based on the data types of the fields stored in the fixed-length data;
step 304, obtaining, based on the length of the fixed-length data, the memory space size each piece of fixed-length data needs to request;
step 306, pre-allocating a plurality of fixed-length free memory blocks in the fixed-length data storage space based on the memory space size each piece of fixed-length data needs to request, and mounting them directly in the fixed-length free linked list.
Specifically, whether stored data is fixed-length or variable-length is judged from the field data types and data format of the data table meta-information. Variable-length data is stored in the variable-length space, with the appropriate allocation optimization. For fixed-length data, the data length is calculated from the data types of the stored fields, which fixes the memory size of one storage request. The field storage order is rearranged using memory alignment, so that the space of one data request is minimized. Because the space required for each table's stored data is thereby fixed in size, fixed-length storage produces no memory fragments. In effect, a template is constructed for each type of fixed-length data, and the fixed-length free linked list stores a number of free memory blocks formed from these templates, ready for memory requests. Since the storage length of the fixed-length data is known, a certain number of fixed-length data blocks can be allocated in advance, using the known read/write frequency characteristics of the data, and mounted directly in the fixed-length free linked list. This reduces the time spent on early fixed-length requests in a system using the method and smooths the overall request flow.
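Why the field storage order matters can be illustrated with ordinary C struct padding (this example is not from the patent; the field names are invented, and the exact sizes assume a typical 64-bit ABI):

```c
#include <stdint.h>

/* Declaration order 1 + 8 + 1 bytes: the 8-byte field's alignment forces
 * 7 padding bytes before it and 7 after the trailing byte. */
typedef struct {
    uint8_t  flag_a;
    uint64_t key;
    uint8_t  flag_b;
} Unordered;   /* typically 24 bytes */

/* Same fields, widest first: padding shrinks to the trailing 6 bytes. */
typedef struct {
    uint64_t key;
    uint8_t  flag_a;
    uint8_t  flag_b;
} Reordered;   /* typically 16 bytes */
```

Computing the fixed-length record size from the reordered layout thus minimizes the space of one data request, exactly as the paragraph above describes.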
It should be noted that, once the fixed-length space is used up, a memory block of the same size as the current space is allocated again from the largest free block, split into blocks of the fixed data length, and placed back into the free linked list to serve fixed-length memory requests.
Step 106, when storage of variable-length data is requested, selecting the foremost memory block in the variable-length free linked list whose memory space is greater than or equal to the requested space and cutting it, to obtain the memory block granted to the variable-length data and a remaining memory block; when storage of fixed-length data is requested, selecting, based on the length of the fixed-length data, the foremost memory block in the fixed-length free linked list whose memory space equals the requested space and allocating it to the fixed-length data;
specifically, after the pre-allocation of the overall memory space is performed, the system starts to process the application and release requests of the memory block according to the allocation method.
When memory is requested, the free-block linked list is searched from top to bottom for the first block of suitable size. For variable-length allocation, the first block whose space is greater than or equal to the requested size is found, cut, and granted to the requester; if part of the block remains after cutting, the remainder stays at its original position in the free list. For fixed-length allocation, the first block whose space exactly equals the requested size is found directly, which prevents memory fragments from arising in the fixed-length storage space; if the search reaches the last block, which the fixed-length and variable-length parts share, the variable-length allocation procedure is executed instead. Allocation from a fixed-length free block differs from allocation from a variable-length free block as follows: a variable-length block stores its head address, and the granted head address plus the granted size yields the new head address, which is stored back in the management data; a fixed-length block stores its tail address, and the granted tail address minus the granted size yields the new tail address, which is stored back in the management data.
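The two search rules can be sketched as first-fit walks over the free list. This is a simplified illustration, not the patent's implementation: exhausted nodes are zeroed rather than unlinked, and `(size_t)-1` stands in for allocation failure.

```c
#include <stddef.h>

typedef struct Node { size_t addr, len; struct Node *next; } Node;

/* Variable-length: first block with len >= size; the remainder keeps the
 * node's position in the list. Returns the granted head address. */
size_t alloc_varlen(Node *head, size_t size) {
    for (Node *n = head; n; n = n->next) {
        if (n->len >= size) {
            size_t granted = n->addr;
            n->addr += size;      /* new head address of the remainder */
            n->len  -= size;
            return granted;
        }
    }
    return (size_t)-1;
}

/* Fixed-length: exact fit only, so no fragment is ever created here.
 * The node stores the tail address; the data starts at tail - size. */
size_t alloc_fixlen(Node *head, size_t size) {
    for (Node *n = head; n; n = n->next) {
        if (n->len == size) {
            n->len = 0;           /* a real allocator would unlink the node */
            return n->addr - size;
        }
    }
    return (size_t)-1;
}
```

Note how the variable-length path advances the head address while the fixed-length path counts back from the tail, matching the two directional updates described above.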
Step 108, establishing a mapping table, and connecting a plurality of remaining memory blocks so that they are allocated to the same piece of variable-length data;
in this embodiment, the mapping table records the plurality of remaining memory blocks allocated to the same piece of variable-length data and, for each such block, its correspondence to the next remaining memory block.
Specifically, referring to fig. 2: because variable-length data continually requests new memory blocks, old blocks are continually divided, so ever smaller memory blocks keep arising throughout the memory. To use these small blocks effectively, several of them together are used to satisfy a large-capacity space request during allocation. This requires a new mapping table, which is essential to the combined-chain use of memory fragments: it lets the system track the relationship between memory blocks so that they can be connected together to satisfy a large memory allocation request. For each allocated memory block, the mapping table stores the information of the next memory block holding that block's data, so that several small blocks can be granted to the same request and the whole memory space used effectively. For example, in the allocation mapping table of fig. 2, the two free memory blocks 28 and 48 are granted to the same variable-length data request; as shown, the next memory block after 28 is 48, so the mapping value recorded for 28 in the next table is 48, and since 48 has no next memory block, the mapping value in its next-table entry is 0.
Alternatively, the mapping value in the mapping table may be the start address of the next memory block.
It should be noted that, to prevent the mapping table from storing so much redundancy that query efficiency suffers, the chain of memory blocks should not be too long; query performance must be traded off against space usage. When the number of small blocks required is too large, allocating directly from the blocks at the end of the free-block linked list is better for system performance.
On this basis, combined-chain allocation of memory blocks is an effective memory management strategy: the system can effectively use the small blocks produced by splitting to satisfy large-capacity memory requirements.
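The next-block mapping of fig. 2 can be sketched as a simple index table. This is an illustrative model, not the patent's concrete structure: `next_of[i]` holds the number of the block continuing block i's data, with 0 meaning "no next block" (as for block 48 in the figure), so index 0 itself is treated as reserved.

```c
#include <stddef.h>

#define MAX_BLOCKS 64

/* Allocation mapping table: next_of[i] is the next block of block i's chain,
 * or 0 if block i is the last block (its chain ends there). */
static size_t next_of[MAX_BLOCKS];

void chain_blocks(size_t from, size_t to) { next_of[from] = to; }

/* Walk one chain and count how many blocks serve the same variable-length
 * request; release must perform exactly this walk to free every block. */
size_t chain_length(size_t first) {
    size_t n = 0;
    for (size_t b = first; b != 0; b = next_of[b]) n++;
    return n;
}
```

With fig. 2's example, chaining 28 to 48 yields a two-block chain whose last entry maps to 0.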
After step 108, the method further includes space recovery:
when a fixed-length memory block is released, we directly hang it on the head of the corresponding free block linked list without any processing to wait for the next allocation. When a variable-length memory block is released, the next memory block for storing data of the memory block needs to be continuously found along the mapping table, all the memory blocks are released, the mapping value of the mapping table is marked as empty, and then each memory block is put into a variable-length idle block linked list.
Step 110, sorting the free memory blocks in the variable-length free linked list after memory allocation, judging from the addresses and sizes of the free memory blocks whether adjacent blocks can be merged, and if so, merging them to obtain the merged variable-length free linked list.
Specifically, after a certain amount of memory allocation, reclamation and reallocation, the variable-length free linked list may contain memory fragments too small to be worth reusing: even when a single request is served by combined-chain allocation, too long a chain makes the performance loss on reading and writing data severe, so such fragments are never allocated from the free list. Once a certain number of these fragments accumulates, a large amount of free space can no longer be used effectively, which seriously affects the utilization of the heap's memory space. For these small fragments, a lightweight, non-blocking optimization method is used.
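The merge test of step 110 can be sketched as a single pass over the address-sorted free list: two neighbouring nodes are mergeable exactly when the first block's address plus its length equals the second block's address. A simplified illustration (node memory is assumed to live in the pool, so no deallocation of the unlinked node is shown):

```c
#include <stddef.h>

typedef struct Node { size_t addr, len; struct Node *next; } Node;

/* Precondition: the list is sorted by ascending address. Merges every pair of
 * contiguous neighbours and returns the number of merges performed. */
size_t merge_adjacent(Node *head) {
    size_t merges = 0;
    for (Node *n = head; n && n->next; ) {
        if (n->addr + n->len == n->next->addr) {
            n->len += n->next->len;   /* absorb the contiguous neighbour */
            n->next = n->next->next;  /* unlink it from the list */
            merges++;
        } else {
            n = n->next;              /* gap between blocks: move on */
        }
    }
    return merges;
}
```

Staying on the merged node after each merge lets runs of three or more contiguous fragments collapse into one block in a single pass.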
In this embodiment, before step 110, the method further includes:
step 402, judging whether the size of the idle memory block is smaller than a memory fragmentation threshold, if yes, judging that the idle memory block is a memory fragmentation and performing reallocation optimization;
in this embodiment, the memory fragmentation threshold is a memory space size corresponding to the longest fixed-length data in the fixed-length data.
Specifically, the memory fragmentation threshold is set to the length of the largest piece of fixed-length data. If a fragment is smaller than this threshold, it is judged too small to use and must be reallocated and optimized. Because the threshold equals the length of one data record, a fragment smaller than the threshold can never arise in the fixed-length part, i.e. the lower half of the memory needs no space optimization at all. This narrows the scope of space optimization: only the variable-length part, i.e. the upper half, needs defragmentation, which greatly reduces the time and performance cost of optimized allocation.
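The threshold test itself is trivial; the following sketch makes the rule concrete. The threshold value of 64 bytes is an assumed example standing in for the longest fixed-length record:

```c
#include <stdbool.h>
#include <stddef.h>

/* Assumed example: the longest fixed-length record in the store is 64 bytes.
 * Any free block smaller than this is classified as a fragment and queued
 * for merge optimization; fixed-length blocks can never fall below it. */
static const size_t max_fixed_len = 64;

bool is_fragment(size_t block_size)
{
    return block_size < max_fixed_len;
}
```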
Step 404, selecting an optimization strategy based on the data characteristics of the stored data; the optimization strategy determines when memory optimization is performed.
In this embodiment, the optimization strategies include frequency-based optimization and space-triggered optimization. Frequency-based optimization is further divided into optimization by time interval and optimization by allocation count: with time-interval optimization, memory space management optimization is performed once every preset period; with allocation-count optimization, memory management optimization is performed after a preset number of allocation requests have been served. Space-triggered optimization performs memory space management optimization once the number of memory fragments exceeds a preset number threshold.
Optionally, a suitable method is selected according to the observed data characteristics: when the workload is read-heavy, the optimization time can be chosen by allocation count; when it is write-heavy, by time interval; and when the length of the variable-length data fluctuates over too wide a range, the space-triggered mode is used to choose the optimization time.
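The selection rule above can be sketched as a small decision function. The enum names, the `DataProfile` fields, and the "4x spread" criterion for "length fluctuates over too wide a range" are all assumptions for illustration:

```c
#include <stddef.h>

typedef enum {
    OPT_BY_ALLOC_COUNT,    /* read-heavy: optimize every N allocations   */
    OPT_BY_TIME,           /* write-heavy: optimize every fixed interval */
    OPT_BY_SPACE_TRIGGER   /* wide length spread: optimize on fragment count */
} OptStrategy;

typedef struct {
    unsigned long reads, writes;
    size_t min_len, max_len;   /* observed variable-length record extremes */
} DataProfile;

OptStrategy select_strategy(const DataProfile *p)
{
    /* Assumed criterion: a >4x spread counts as "too large" a floating interval. */
    if (p->max_len > 4 * p->min_len)
        return OPT_BY_SPACE_TRIGGER;
    return (p->reads >= p->writes) ? OPT_BY_ALLOC_COUNT : OPT_BY_TIME;
}
```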
In this embodiment, one implementation manner of step 110 is:
step 502, dividing a variable length idle linked list after memory allocation into two sub-linked lists by using a fast and slow pointer, wherein one sub-linked list comprises a first half node and the other sub-linked list comprises a second half node;
step 504, splitting the two sub-linked lists respectively to obtain a plurality of unit linked lists, wherein each unit linked list at most comprises a node;
step 506, sorting the plurality of unit linked lists from front to back according to their head addresses;
step 508, calculating and obtaining the tail address of the previous free memory block based on the head address and the block size of the previous free memory block in the two adjacent free memory blocks in sequence;
step 510, determining whether the tail address of the previous free memory block is equal to the head address of the next free memory block, if so, merging two adjacent free memory blocks to obtain a merged variable length free linked list.
Specifically, the process of optimizing away memory fragments uses lightweight sorting plus merging. First, the whole variable-length linked list is sorted by address, so that memory blocks with adjacent addresses become neighbors in the free linked list; then, based on block addresses and block sizes, adjacent blocks are checked for mergeability and small blocks are combined. Because the free memory blocks are stored as a linked list, merge sort is chosen so that no extra space is needed, balancing time and space performance. Moreover, in the merge phase of merge sort, whether two blocks can be coalesced can be judged directly, saving one full traversal and further shortening the overall fragment-optimization process.
The specific process of merging optimization is as follows:
1. Splitting the linked list: first, the original free linked list is split into two sub-linked lists, using fast and slow pointers. The fast pointer moves two steps at a time and the slow pointer one step at a time; when the fast pointer reaches the end of the list, the slow pointer points at its middle node. The list is then split in two, the left part containing the first half of the nodes and the right part the second half.
2. Recursive sorting: merge optimization is applied recursively to each of the two sub-linked lists, which continues splitting them until every sub-linked list contains one node or none.
3. Merging sub-linked lists and coalescing fragments: once the recursion returns sorted sub-linked lists, they are merged into one ordered list. During the merge, the head addresses of the two sub-lists' head nodes are compared and the node with the smaller head address is selected as the next node of the merged list; the tail address of the previously appended fragment is computed from its head address and block length and compared with the head address of the newly selected fragment — if they are equal, the two are coalesced into one. This operation is repeated, recursively merging the remaining nodes.
4. Returning the merged linked list: when the recursive merging completes, the entire linked list is ordered and contains no address-adjacent memory fragments, and the merged linked list is returned.
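Steps 1-4 can be sketched as a merge sort over the free list that coalesces during the merge phase. This is an illustrative implementation, not the patent's code: struct and function names are assumptions, and the example keeps only head address, size, and next pointer per node.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct Block {
    char *addr;                 /* head address of the free block */
    size_t size;
    struct Block *next;
} Block;

/* Step 1: fast/slow-pointer split; returns the second half. */
static Block *split_half(Block *head)
{
    Block *slow = head, *fast = head->next;
    while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
    Block *right = slow->next;
    slow->next = NULL;
    return right;
}

/* Step 3: merge two address-sorted lists; when the previous block's tail
 * address equals the next block's head address, absorb it (coalesce). */
static Block *merge(Block *a, Block *b)
{
    Block dummy = {0}, *tail = &dummy;
    while (a || b) {
        Block **min = (!b || (a && a->addr <= b->addr)) ? &a : &b;
        Block *n = *min;
        *min = n->next;
        n->next = NULL;
        if (tail != &dummy && tail->addr + tail->size == n->addr) {
            tail->size += n->size;     /* adjacent: grow previous block */
            free(n);                   /* drop the now-redundant node   */
        } else {
            tail->next = n;
            tail = n;
        }
    }
    return dummy.next;
}

/* Steps 2 and 4: recurse, then return the merged, coalesced list. */
Block *sort_and_coalesce(Block *head)
{
    if (!head || !head->next) return head;
    Block *right = split_half(head);
    return merge(sort_and_coalesce(head), sort_and_coalesce(right));
}
```

Because coalescing happens inside the merge comparison that merge sort performs anyway, no separate pass over the sorted list is needed, matching the "saves one traversal" observation above.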
On this basis, this lightweight method of eliminating memory fragments imposes little performance loss on the system: chain allocation and memory block sorting and merging complete in a short time, and compared with the common approach of wholesale data relocation and reuse, it needs almost no extra lock resources or system wait time, greatly reducing the waste of system resources.
Fig. 3 is a schematic diagram of a lightweight memory optimization allocation flow for mixed management of fixed-length variable-length data.
On this basis, a lightweight multi-element memory resource optimization allocation method is designed on top of the traditional memory allocation approach. By analyzing the types and characteristics of the stored data, fixed-length and variable-length data are logically separated yet spatially mixed for storage allocation, and a suitable time is chosen for storage optimization, so that memory is used and allocated more flexibly, solving the problem that existing allocation methods allocate in an overly rigid, uniform way without considering the actual stored content.
In summary, on the basis of memory pool fixed-length data storage management, the method extends the existing memory pool technique with a strategy of separate variable-length data storage. It uses the characteristics of the data stored in the application's memory to store different data types separately, preprocesses the allocation space before handling space requests, uses combined chains for large memory fragments during allocation, and optimizes small memory fragments by merging, adapting the optimization to the stored content. Together, the two mechanisms greatly reduce the system resources consumed by fragment optimization, essentially eliminating the system turnaround time incurred by traditional optimization methods and better fitting the space allocation requests of a real-time database, thereby solving the problems of current memory allocation methods: overly heavyweight fragment optimization, excessive resource occupation, and high performance and time costs.
Example 2
Referring to fig. 4, fig. 4 shows a lightweight memory optimization allocation device for mixed management of fixed-length and variable-length data according to the present embodiment, which includes:
the space pre-allocation module is used for pre-allocating the memory space based on the determined characteristics of the stored data to obtain an original data storage space, a fixed-length data storage space and a variable-length data storage space;
the idle linked list setting module is used for setting a linked list of idle memory blocks in the fixed-length data storage space and the variable-length data storage space respectively to obtain a variable-length idle linked list and a fixed-length idle linked list;
the memory request execution allocation module is used for, when variable-length data is to be stored, selecting the first memory block in the variable-length free linked list whose memory space is greater than or equal to the memory space requested for the variable-length data, and cutting it to obtain the memory block for the variable-length data request and a remaining memory block; and, when fixed-length data is to be stored, selecting, based on the length of the fixed-length data, the first memory block in the fixed-length free linked list whose memory space equals the memory space requested for the fixed-length data, and allocating it to the fixed-length data;
the large memory fragment optimization module is used for establishing a mapping table, connecting a plurality of residual memory blocks and distributing the residual memory blocks to the same variable-length data;
and the small memory fragment optimization module is used for sorting the free memory blocks in the variable-length free linked list after memory allocation, judging whether adjacent free memory blocks can be merged based on their addresses and space sizes, and if so, merging them to obtain the merged variable-length free linked list.
Optionally, the stored data includes database data and a temporary variable application; dividing the storage of the database data into fixed-length data storage and variable-length data storage; and storing the temporary variable application as variable length data.
Optionally, the spatial pre-allocation module includes:
the original data storage unit is used for storing database data and data table metadata information at the initial position of the memory space to obtain an original data storage space and a residual memory;
the memory dividing unit is used for dividing the residual memory into an upper half part and a lower half part;
the memory pre-allocation unit is used for storing the upper half part of the memory pre-allocation unit into the variable-length data to obtain a variable-length data storage space; and using the lower half part for fixed-length data storage to obtain a fixed-length data storage space.
Optionally, the method further comprises:
the fixed-length data length calculation module is used for calculating and obtaining the lengths of a plurality of fixed-length data based on the data types of the storage fields of the fixed-length data in the storage data;
the fixed-length data pre-allocation module is used for obtaining the memory space size to be applied for each fixed-length data based on the length of the fixed-length data;
and the fixed-length idle memory block mounting module is used for pre-distributing a plurality of fixed-length idle memory blocks in the fixed-length data storage space based on the memory space size of each fixed-length data to be applied and directly mounting the fixed-length idle memory blocks in the fixed-length idle linked list.
Optionally, a plurality of nodes are stored in each linked list, and each node stores the starting address and length of a free memory block. The starting address stored in the variable-length free linked list is the head address of the memory block, and data is allocated from head to tail starting at that head address; the starting address stored in the fixed-length free linked list is the tail address of the memory block, and data is allocated from tail to head.
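An illustrative node layout for this dual-direction scheme, under which the two regions grow toward each other (field and function names are assumed, not from the patent):

```c
#include <stddef.h>

/* Each free-list node records one free block.  For variable-length blocks
 * `start` is the head address and data grows toward higher addresses; for
 * fixed-length blocks `start` is the tail address and data grows toward
 * lower addresses. */
typedef struct FreeNode {
    char  *start;              /* head addr (variable) or tail addr (fixed) */
    size_t len;
    struct FreeNode *next;
} FreeNode;

/* First usable byte of the block under each convention. */
char *var_block_begin(const FreeNode *n)   { return n->start; }
char *fixed_block_begin(const FreeNode *n) { return n->start - n->len; }
```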
Optionally, the mapping table includes a plurality of remaining memory blocks allocated to the same variable length data and a correspondence relationship between a next remaining memory block corresponding to each remaining memory block.
Optionally, the method further comprises:
the memory fragmentation judging module is used for judging whether the size of the idle memory block is smaller than a memory fragmentation threshold value, if so, judging that the idle memory block is a memory fragmentation and performing reallocation optimization;
the optimization strategy selection module is used for selecting an optimization strategy based on the data characteristics of the stored data; the optimization strategy is the time for optimizing the memory.
Optionally, the memory fragmentation threshold is a memory space size corresponding to the longest fixed-length data in the fixed-length data.
Optionally, the optimization strategies include frequency-based optimization and space-triggered optimization. Frequency-based optimization is divided into optimization by time interval and optimization by allocation count: with time-interval optimization, memory space management optimization is performed once every preset period; with allocation-count optimization, memory management optimization is performed after a preset number of allocation requests have been served. Space-triggered optimization performs memory space management optimization once the number of memory fragments exceeds a preset number threshold.
Optionally, the small memory fragment optimization module includes:
the chain table dividing unit is used for dividing the variable-length idle chain table after memory allocation into two sub-chain tables by using a fast and slow pointer, wherein one sub-chain table comprises a first half node and the other sub-chain table comprises a second half node;
the sub-linked list splitting unit is used for splitting the two sub-linked lists respectively to obtain a plurality of unit linked lists, and each unit linked list at most comprises a node;
the unit chain list ordering unit is used for ordering the unit chain lists from front to back according to the head addresses of the unit chain lists;
the tail address calculation unit is used for sequentially calculating and obtaining the tail address of the previous free memory block based on the head address and the block size of the previous free memory block in the two adjacent free memory blocks;
and the merging judging unit is used for judging whether the tail address of the previous idle memory block is equal to the head address of the next idle memory block, if so, merging the two adjacent idle memory blocks to obtain a merged variable-length idle linked list.
In summary, on the basis of memory pool fixed-length data storage management, the device extends the existing memory pool technique with a strategy of separate variable-length data storage. It uses the characteristics of the data stored in the application's memory to store different data types separately, preprocesses the allocation space before handling space requests, uses combined chains for large memory fragments during allocation, and optimizes small memory fragments by merging, adapting the optimization to the stored content. Together, the two mechanisms greatly reduce the system resources consumed by fragment optimization, essentially eliminating the system turnaround time incurred by traditional optimization methods and better fitting the space allocation requests of a real-time database, thereby solving the problems of current memory allocation methods: overly heavyweight fragment optimization, excessive resource occupation, and high performance and time costs.
Example 3
Referring to fig. 5, the present embodiment provides an electronic device, which includes a processor, an internal bus, a network interface, a memory, and a nonvolatile memory, and may include hardware required by other services. The processor reads the corresponding computer program from the nonvolatile memory into memory and runs it, forming, at the logical level, the lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data. Of course, this specification does not exclude other implementations, such as logic devices or combinations of hardware and software; that is, the execution subject of the following processing flow is not limited to logical units and may also be hardware or logic devices.
The network interface, processor and memory may be interconnected by a bus system. The buses may be classified into address buses, data buses, control buses, and the like.
The memory is used for storing programs. In particular, the program may include program code including computer-operating instructions. The memory may include read only memory and random access memory and provide instructions and data to the processor.
The processor is used for executing the program stored in the memory and specifically executing:
step 102, pre-distributing the memory space based on the determined characteristics of the stored data to obtain an original data storage space, a fixed-length data storage space and a variable-length data storage space;
step 104, setting a chain table of idle memory blocks in the fixed-length data storage space and the variable-length data storage space respectively to obtain a variable-length idle chain table and a fixed-length idle chain table;
step 106, when applying for storing variable length data, selecting the memory block which is the forefront in the variable length idle linked list and has the memory space larger than or equal to the memory space of the variable length data application, and cutting to obtain the memory block and the rest memory block of the variable length data application; when applying for storing fixed-length data, selecting a memory block which is the forefront in sequence and has a memory space equal to the memory space of the fixed-length data application in a fixed-length idle chain table based on the length of the fixed-length data, and distributing the memory block to the fixed-length data;
step 108, a mapping table is established, and a plurality of residual memory blocks are connected and distributed to the same variable-length data;
step 110, sorting the idle memory blocks in the variable-length idle linked list after memory allocation, judging whether adjacent idle memory blocks can be combined based on the addresses and the space sizes of the idle memory blocks, if so, combining the idle memory blocks to obtain the combined variable-length idle linked list.
The processor may be an integrated circuit chip having signal processing capabilities. In implementation, each step of the above method may be implemented by an integrated logic circuit of hardware of a processor or an instruction in a software form.
Based on the same inventive concept, the embodiments of the present specification further provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data provided by the embodiments corresponding to figs. 1 to 3.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-readable storage media having computer-usable program code embodied therein.
In addition, for the device embodiments described above, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only required. Moreover, it should be noted that in the respective modules of the system of the present application, the components thereof are logically divided according to functions to be implemented, but the present application is not limited thereto, and the respective components may be re-divided or combined as necessary.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing describes specific embodiments of the present specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the particular order depicted in the drawings is not necessarily required to achieve the desired results; in some embodiments, multitasking and parallel processing may also be possible or advantageous.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.
Claims (10)
1. The lightweight memory optimal allocation method for the mixed management of the fixed-length variable-length data is characterized by comprising the following steps of:
pre-distributing the memory space based on the determined characteristics of the stored data to obtain an original data storage space, a fixed-length data storage space and a variable-length data storage space;
setting a chain table of idle memory blocks in a fixed-length data storage space and a variable-length data storage space respectively to obtain a variable-length idle chain table and a fixed-length idle chain table;
when applying to store variable-length data, selecting the first memory block in the variable-length free linked list whose memory space is greater than or equal to the memory space requested for the variable-length data, and cutting it to obtain the memory block for the variable-length data request and a remaining memory block; when applying to store fixed-length data, selecting, based on the length of the fixed-length data, the first memory block in the fixed-length free linked list whose memory space equals the memory space requested for the fixed-length data, and allocating it to the fixed-length data;
establishing a mapping table, connecting a plurality of residual memory blocks and distributing the residual memory blocks to the same variable-length data;
and sorting the free memory blocks in the variable-length free linked list after memory allocation, judging whether adjacent free memory blocks can be merged based on their addresses and space sizes, and if so, merging them to obtain the merged variable-length free linked list.
2. The method of claim 1, wherein the stored data includes database data and temporary variable applications; the database data are stored in the fixed-length data storage space and the variable-length data storage space; the temporary variable application is stored in the variable-length data storage space.
3. The method of claim 2, wherein the pre-allocating the memory space based on the determined characteristics of the stored data to obtain the original data storage space, the fixed length data storage space, and the variable length data storage space comprises:
storing the database data and the data table metadata information at the initial position of the memory space to obtain an original data storage space and a residual memory;
dividing the residual memory into an upper half and a lower half;
the upper half part is used for storing the variable-length data, so as to obtain a variable-length data storage space; and using the lower half part for fixed-length data storage to obtain a fixed-length data storage space.
4. A method according to claim 3, wherein after setting a linked list of free memory blocks in the fixed-length data storage space and the variable-length data storage space, respectively, obtaining the variable-length free linked list and the fixed-length free linked list further comprises:
calculating the length of a plurality of fixed-length data based on the data type of a storage field of the fixed-length data in the storage data;
based on the length of the fixed-length data, the memory space size to be applied for each fixed-length data is obtained;
and pre-distributing a plurality of fixed-length idle memory blocks in the fixed-length data storage space based on the size of the memory space to be applied for each fixed-length data, and directly mounting the fixed-length idle memory blocks in the fixed-length idle linked list.
5. A method according to claim 3, wherein a plurality of nodes are stored in a linked list, each node storing a starting address and length of a free memory block; the starting address of the idle memory block stored in the variable-length idle linked list is the head address of the idle memory block, and data is distributed from head to tail by the head address; and the starting address of the idle memory block stored in the fixed-length idle linked list is the tail address of the idle memory block, and data are distributed from tail to head.
6. The method of claim 5, wherein the mapping table includes a number of remaining memory blocks allocated to the same variable length data and a correspondence between each remaining memory block and a corresponding next remaining memory block.
7. The method of claim 6, wherein the steps of sorting the free memory blocks in the variable-length free link list after memory allocation and determining whether neighboring free memory blocks can be merged based on the addresses and space sizes of the free memory blocks, if so, merging the free memory blocks, and obtaining the merged variable-length free link list further comprise:
judging whether the size of the idle memory block is smaller than a memory fragmentation threshold, if so, judging that the idle memory block is a memory fragmentation and performing reallocation optimization;
selecting an optimization strategy based on data characteristics of the stored data; the optimization strategy is the time for optimizing the memory.
8. The method of claim 7, wherein the memory fragmentation threshold is a memory space size corresponding to longest-length fixed-length data of the fixed-length data.
9. The method of claim 7, wherein the optimization strategies comprise frequency-based optimization and space-triggered optimization; frequency-based optimization is divided into optimization by time interval and optimization by allocation count; optimization by time interval performs memory space management optimization once every preset period; optimization by allocation count performs memory management optimization after a preset number of allocation requests; and space-triggered optimization performs memory space management optimization after the number of memory fragments exceeds a preset number threshold.
10. The method of claim 1, wherein the steps of ordering the free memory blocks in the variable-length free linked list after memory allocation and determining whether neighboring free memory blocks can be merged based on the addresses and space sizes of the free memory blocks, if so, merging the free memory blocks, and obtaining the merged variable-length free linked list comprise:
dividing the variable-length idle linked list after memory allocation into two sub-linked lists by using a fast and slow pointer, wherein one sub-linked list comprises a first half node and the other sub-linked list comprises a second half node;
splitting the two sub-linked lists respectively to obtain a plurality of unit linked lists, wherein each unit linked list comprises at most one node;
sorting the plurality of unit linked lists from front to back according to the first addresses;
sequentially calculating and obtaining the tail address of the previous free memory block based on the head address and the block size of the previous free memory block in the two adjacent free memory blocks;
and judging whether the tail address of the previous idle memory block is equal to the head address of the next idle memory block, if so, combining the two adjacent idle memory blocks to obtain a combined variable-length idle linked list.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311757383.2A CN117435352B (en) | 2023-12-20 | 2023-12-20 | Lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117435352A CN117435352A (en) | 2024-01-23 |
CN117435352B true CN117435352B (en) | 2024-03-29 |
Family
ID=89552061
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311757383.2A Active CN117435352B (en) | 2023-12-20 | 2023-12-20 | Lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117435352B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1635482A (en) * | 2003-12-29 | 2005-07-06 | 北京中视联数字系统有限公司 | A memory management method for embedded system |
CN101226553A (en) * | 2008-02-03 | 2008-07-23 | 中兴通讯股份有限公司 | Method and device for storing length-various field of embedded database |
CN114077620A (en) * | 2020-08-17 | 2022-02-22 | 中国科学院声学研究所 | Structured streaming data oriented caching method and system |
CN114328285A (en) * | 2022-01-04 | 2022-04-12 | 北京广利核系统工程有限公司 | Heap memory allocation management method and device of embedded operating system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100528967B1 (en) * | 2002-12-18 | 2005-11-15 | 한국전자통신연구원 | Apparatus and method for controlling memory allocation for variable sized packets |
Non-Patent Citations (1)
Title |
---|
SVBSMP: an adaptive variable-length-block memory pool; Wu Jie et al.; Computer Applications (计算机应用); 2008-06-30 (No. S1); pp. 280-283 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9971681B2 (en) | Lazy real time garbage collection method | |
CN108038002B (en) | Embedded software memory management method | |
JP3611305B2 (en) | Persistent and robust storage allocation system and method | |
US6175900B1 (en) | Hierarchical bitmap-based memory manager | |
JP3771803B2 (en) | System and method for persistent and robust memory management | |
JP2858795B2 (en) | Real memory allocation method | |
US7571163B2 (en) | Method for sorting a data structure | |
CN109690498B (en) | Memory management method and equipment | |
CN110750356B (en) | Multi-core interaction method, system and storage medium suitable for nonvolatile memory | |
CN115599544A (en) | Memory management method and device, computer equipment and storage medium | |
CN110727517A (en) | Memory allocation method and device based on partition design | |
WO2023029982A1 (en) | Method and system for memory allocation | |
JP2006519438A (en) | Configuration and method for managing available memory resources | |
CN106294189B (en) | Memory defragmentation method and device | |
KR100907477B1 (en) | Apparatus and method for managing index of data stored in flash memory | |
US20060236065A1 (en) | Method and system for variable dynamic memory management | |
WO2007097581A1 (en) | Method and system for efficiently managing a dynamic memory in embedded system | |
CN112256441B (en) | Memory allocation method and device for neural network inference | |
CN117435352B (en) | Lightweight memory optimal allocation method for mixed management of fixed-length and variable-length data | |
US8990537B2 (en) | System and method for robust and efficient free chain management | |
US20240053892A1 (en) | Dynamic memory management apparatus and method for hls | |
CN113535392B (en) | Memory management method and system for realizing support of large memory continuous allocation based on CMA | |
US11016685B2 (en) | Method and defragmentation module for defragmenting resources | |
CN117724991B (en) | Dynamic memory management method, system, terminal and storage medium of embedded system | |
US12111756B2 (en) | Systems, methods, and apparatus for wear-level aware memory allocation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |