CN107665146B - Memory management device and method - Google Patents


Publication number
CN107665146B
CN107665146B (application CN201610618811.7A)
Authority
CN
China
Prior art keywords
memory
fragments
length
blocks
idle
Prior art date
Legal status
Active
Application number
CN201610618811.7A
Other languages
Chinese (zh)
Other versions
CN107665146A (en)
Inventor
赵庆贺
任勇
史洪波
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201610618811.7A priority Critical patent/CN107665146B/en
Priority to PCT/CN2017/076666 priority patent/WO2018018896A1/en
Publication of CN107665146A publication Critical patent/CN107665146A/en
Application granted granted Critical
Publication of CN107665146B publication Critical patent/CN107665146B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F12/0292 User address space allocation using tables or multilevel address translation means

Abstract

The invention discloses a memory management device and a memory management method, and belongs to the field of computers. The memory management device manages memory blocks and memory fragments hierarchically, applying for or releasing memory blocks according to the actual occupancy of the memory fragments. This achieves dynamic management of memory blocks, accommodates service bursts, enhances the sharing of memory resources, avoids resource waste, and improves memory usage efficiency.

Description

Memory management device and method
Technical Field
The present invention relates to the field of computers, and in particular, to a memory management device and method.
Background
With the development of computer and communication technology, data communication services have become pervasive in daily life. Such services generally include data receiving, data storing, and data forwarding, all of which operate on memory frequently: memory must be applied for when data is received, the data is stored in the applied memory, and after the data is processed and forwarded, the memory must be released. This frequent memory access places high requirements on memory management, namely low fragmentation, high efficiency, and strong burst tolerance.
In the related art, memory is usually managed statically: the memory is partitioned into several memory blocks of the same size, each memory block is divided into several memory fragments of equal length, and the addresses of fragments of the same length are stored in the same memory queue. Fragment allocation is maintained through these queues; for example, when a service applies for memory, a free fragment's address is taken from the head of the queue for allocation, and when memory is released, the address of the released fragment is appended at the tail of the queue.
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
in static memory management, each service is assigned a fixed number of memory blocks. Services, however, are often bursty, and a service may well be unable to fill all the memory blocks backing its memory queue, which wastes memory resources and lowers memory usage efficiency.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a memory management device and method.
The invention provides a memory management device comprising a number of functional modules that cooperate to: monitor the states of at least two memory blocks, a memory block being in the occupied state once it has been divided into memory fragments and in the idle state otherwise; monitor the states of at least two memory fragments, where the fragments cover at least two fragment lengths and each fragment is obtained by dividing a memory block; if the number of free fragments of a given length falls below the memory block application threshold, apply for a memory block in the idle state, according to the states of the at least two memory blocks, and divide it; if the number of free fragments of a given length exceeds the memory block release threshold, release those free fragments; when a service request is received, allocate a free fragment of the corresponding length to it; and store the data corresponding to the service request at the memory address of the fragment allocated by the memory fragment management module.
By managing memory blocks and memory fragments hierarchically and applying for or releasing memory blocks according to the actual occupancy of the memory fragments, the memory management device provided by the invention achieves dynamic management of memory blocks, accommodates service bursts, enhances the sharing of memory resources, avoids resource waste, and improves memory usage efficiency.
Secondly, the invention also provides a memory management method, which comprises: monitoring the states of at least two memory blocks, a memory block being in the occupied state once it has been divided into memory fragments and in the idle state otherwise; monitoring the states of at least two memory fragments, where the fragments cover at least two fragment lengths and each fragment is obtained by dividing a memory block; if the number of free fragments of a given length falls below the memory block application threshold, applying for a memory block in the idle state, according to the states of the at least two memory blocks, and dividing it; if the number of free fragments of a given length exceeds the memory block release threshold, releasing those free fragments; when a service request is received, allocating a free fragment of the corresponding length to it; and storing the data corresponding to the service request at the allocated fragment's memory address.
According to the memory management method provided by the invention, the memory blocks and the memory fragments are hierarchically managed, and the memory blocks are flexibly applied or released according to the actual occupation condition of the memory fragments, so that the dynamic management of the memory blocks is realized, the burstiness of services can be met, the sharing performance of memory resources is enhanced, the resource waste is avoided, and the memory use efficiency is improved.
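The threshold-driven apply/release behavior described above can be sketched as follows. This is an illustrative software model, not the patented hardware implementation; the class, its field names, and all threshold and size values are invented for illustration.

```python
# Illustrative sketch: a per-length pool of free fragment addresses that carves
# a new block when free fragments run low, and returns a block's worth of
# fragments when they pile up. All names and values are invented.
BLOCK_SIZE = 4096

class FragmentPool:
    def __init__(self, frag_len, apply_threshold, release_threshold):
        self.frag_len = frag_len
        self.apply_threshold = apply_threshold      # carve a new block below this
        self.release_threshold = release_threshold  # give a block back above this
        self.free = []                              # addresses of free fragments

    def rebalance(self, free_blocks):
        """Apply for or release memory blocks based on the free-fragment count."""
        per_block = BLOCK_SIZE // self.frag_len
        if len(self.free) < self.apply_threshold and free_blocks:
            base = free_blocks.pop()                # block switches to occupied state
            self.free += [base + i * self.frag_len for i in range(per_block)]
        elif len(self.free) > self.release_threshold:
            # Release one block's worth of fragments; a real manager would
            # aggregate addresses of the same block before freeing (see below).
            released = [self.free.pop() for _ in range(per_block)]
            free_blocks.append(min(released))

free_blocks = [0x10000, 0x20000]
pool = FragmentPool(frag_len=512, apply_threshold=4, release_threshold=12)
pool.rebalance(free_blocks)          # 0 free < 4: carve a block into 8 fragments
print(len(pool.free), len(free_blocks))  # -> 8 1
```

The two thresholds create hysteresis: a pool neither thrashes between applying and releasing nor hoards blocks another fragment length could use.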
In one possible design, the method further includes:
for memory fragments of any given length, if no application for fragments of that length is received within a preset time, the memory blocks occupied by fragments of that length are released.
With this method, whether a memory block should be reclaimed can be decided from actual service activity, and the reclaimed block can then be re-divided into fragments for other services, improving memory usage efficiency.
In one possible design, for memory fragments of any given length, the memory addresses of the blocks occupied by fragments of that length are managed through three queues: an unused memory block queue, an exhausted memory block queue, and an in-use memory block queue;
the unused memory block queue holds the addresses of blocks whose fragments of that length are all free;
the exhausted memory block queue holds the addresses of blocks whose fragments of that length are all occupied;
the in-use memory block queue holds the addresses of blocks in which only some fragments of that length are occupied.
With this method, managing the blocks occupied by fragments of a given length through these purpose-specific queues greatly reduces the latency of a service's memory application. It also reduces page-table switching and the computation spent on page-table lookups and switches.
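The three-queue scheme can be sketched as follows; this is a simplified illustration with invented names, not the patented implementation. A block starts in the unused queue, moves to in-use once one of its fragments is allocated, and to exhausted when all of its fragments are allocated.

```python
# Sketch of the per-length block queues: unused / in-use / exhausted.
from collections import deque

class Block:
    def __init__(self, addr, n_frags):
        self.addr = addr
        self.free = n_frags    # free fragments remaining in this block
        self.total = n_frags

class BlockQueues:
    def __init__(self):
        self.unused, self.in_use, self.exhausted = deque(), deque(), deque()

    def alloc_fragment(self):
        # Prefer a partially used block so whole unused blocks stay reclaimable.
        if self.in_use:
            blk = self.in_use.popleft()
        elif self.unused:
            blk = self.unused.popleft()
        else:
            return None
        blk.free -= 1
        (self.exhausted if blk.free == 0 else self.in_use).append(blk)
        return blk

q = BlockQueues()
q.unused.append(Block(addr=0x1000, n_frags=2))
b1 = q.alloc_fragment()   # unused -> in_use
b2 = q.alloc_fragment()   # in_use -> exhausted
print(len(q.exhausted))   # -> 1
```

Because a block is always in exactly one queue, finding a block with a free fragment is a constant-time dequeue rather than a scan.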
In one possible design, when a service request is received, allocating a free memory slice with a corresponding length to the service request includes:
when the service request is received, among the free memory fragments of the length corresponding to the request, a fragment is preferentially allocated from the occupied memory block that has the fewest free fragments.
With this method, data is written into memory more contiguously and excessive fragmentation is avoided.
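The allocation preference above can be sketched as a one-line policy; the data layout here is invented for illustration. Draining nearly full blocks first means partially used blocks empty out sooner and can be returned whole.

```python
# Sketch of the allocation preference: among occupied blocks that still have
# free fragments, pick the block with the fewest free fragments.
def pick_block(in_use_blocks):
    """in_use_blocks: list of (block_addr, free_fragment_count), count > 0."""
    return min(in_use_blocks, key=lambda b: b[1]) if in_use_blocks else None

blocks = [(0x1000, 5), (0x2000, 1), (0x3000, 3)]
print(hex(pick_block(blocks)[0]))  # -> 0x2000: the nearly full block wins
```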
In a possible design, for memory fragments of any given length, the occupancy of fragments of that length is maintained in a multi-level Bitmap lookup table.
A multi-level Bitmap lookup table keeps the actual occupancy of the memory fragments in order.
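A multi-level bitmap can be sketched as follows. The patent's figures show a three-level table; this illustrative model uses two levels to stay short, and its class name, word width, and bit convention (1 = free) are all invented. Each top-level bit summarizes one leaf word, so locating a free fragment costs two short scans instead of one long one.

```python
# Sketch of a two-level bitmap free map: bit set = fragment free.
WORD = 8  # leaf word width, chosen small for illustration

class TwoLevelBitmap:
    def __init__(self, n_frags):
        self.leaves = [(1 << WORD) - 1] * ((n_frags + WORD - 1) // WORD)
        self.top = (1 << len(self.leaves)) - 1   # bit i: leaf i has a free bit

    def alloc(self):
        if not self.top:
            return None                            # nothing free anywhere
        i = (self.top & -self.top).bit_length() - 1    # first leaf with space
        leaf = self.leaves[i]
        j = (leaf & -leaf).bit_length() - 1            # first free bit in leaf
        self.leaves[i] &= ~(1 << j)
        if self.leaves[i] == 0:
            self.top &= ~(1 << i)                  # leaf exhausted: clear summary
        return i * WORD + j

bm = TwoLevelBitmap(16)
print([bm.alloc() for _ in range(3)])  # -> [0, 1, 2]
```

With real word widths (e.g. 64 bits per level), three levels index 64^3 fragments while any lookup touches only three words.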
In one possible design, the method further includes:
for memory blocks of any given size, detecting whether the proportion of blocks of that size in the occupied state is below a first preset threshold;
and if the proportion is below the first preset threshold, unloading at least one memory block in the idle state from the blocks of that size.
In one possible design, the method further includes:
for memory blocks of any given size, detecting whether the proportion of blocks of that size in the occupied state exceeds a second preset threshold;
and if the proportion exceeds the second preset threshold, loading at least one memory block in the idle state from the system memory.
Deciding to load from or unload to the system memory according to the actual occupancy of the memory blocks enables flexible memory management and meets the data-volume demands of services in different scenarios.
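The two occupancy thresholds can be sketched in a few lines; the function name, the return labels, and the example thresholds (25% and 75%) are invented for illustration.

```python
# Sketch: decide whether to unload idle blocks to, or load extra blocks from,
# system memory, based on the occupied ratio of one block size class.
def rebalance_blocks(occupied, total, low=0.25, high=0.75):
    """Return 'unload', 'load', or 'hold' for one block size class."""
    ratio = occupied / total
    if ratio < low:
        return "unload"   # too many idle blocks: give some back
    if ratio > high:
        return "load"     # running low: borrow from system memory
    return "hold"

print(rebalance_blocks(2, 16), rebalance_blocks(14, 16))  # -> unload load
```

The band between the two thresholds prevents oscillation: a size class near 50% occupancy neither sheds nor acquires blocks.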
In one possible design, the method further includes:
for memory fragments of any given length, caching a preset number of free-fragment addresses of that length in a double-pointer last-in first-out (LIFO) cache;
accordingly, when a service request is received, a free fragment's memory address is allocated from the double-pointer LIFO cache in response to the request.
With this method, the addresses of some memory fragments are fetched in advance for allocation to service requests, shortening the time needed to locate a free fragment and greatly reducing the latency of a service's memory application.
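The address cache can be sketched as a small LIFO stack refilled in batches when it runs low. This abstracts away the patent's "double-pointer" detail; the class, its parameters, and the backing list are all invented, and only the refill-below-threshold behavior described above is shown.

```python
# Sketch of the per-length address cache: a LIFO stack of prefetched
# free-fragment addresses, topped up in batches from a slower backing store.
class AddrCache:
    def __init__(self, backing, capacity=4, refill_at=1):
        self.backing = backing          # slow path: full free-fragment list
        self.capacity = capacity
        self.refill_at = refill_at
        self.stack = []
        self._refill()

    def _refill(self):
        while len(self.stack) < self.capacity and self.backing:
            self.stack.append(self.backing.pop())

    def alloc(self):
        addr = self.stack.pop() if self.stack else None
        if len(self.stack) <= self.refill_at:
            self._refill()              # keep the fast path stocked
        return addr

cache = AddrCache(backing=[0x100, 0x200, 0x300, 0x400, 0x500, 0x600])
a = cache.alloc()                       # served from the cache, LIFO order
print(hex(a), len(cache.stack))         # -> 0x300 3
```

LIFO order also helps locality: the most recently freed (and thus most recently touched) address is handed out first.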
In one possible design, the method further includes:
for memory fragments of any given length, detecting whether the number of memory addresses in the double-pointer LIFO cache has fallen below a preset threshold;
and if the number of memory addresses is below the preset threshold, fetching a preset number of fragment addresses and storing them in the double-pointer LIFO cache.
With this method, the cache stays stocked with memory addresses, greatly reducing the latency of a service's memory application.
The embodiment of the present invention further provides a data communication device comprising a receiving device, a sending device, a system host, a memory management device, and a memory. The memory management device is configured to: monitor the states of at least two memory blocks, a memory block being in the occupied state once it has been divided into memory fragments and in the idle state otherwise; monitor the states of at least two memory fragments, where the fragments cover at least two fragment lengths and each fragment is obtained by dividing a memory block; if the number of free fragments of a given length falls below the memory block application threshold, apply for a memory block in the idle state, according to the states of the at least two memory blocks, and divide it; if the number of free fragments of a given length exceeds the memory block release threshold, release those free fragments; when a service request is received, allocate a free fragment of the corresponding length to it; and store the data corresponding to the service request at the allocated fragment's memory address.
By managing memory blocks and memory fragments hierarchically and applying for or releasing memory blocks according to the actual occupancy of the memory fragments, the data communication device provided by the invention achieves dynamic management of memory blocks, accommodates service bursts, enhances the sharing of memory resources, avoids resource waste, and improves memory usage efficiency.
In a possible design, the memory management device is further configured so that, for memory fragments of any given length, if no application for fragments of that length is received within a preset time, the memory blocks occupied by fragments of that length are released.
With this data communication device, whether a memory block should be reclaimed can be decided from actual service activity, and the reclaimed block can then be re-divided into fragments for other services, improving memory usage efficiency.
In one possible design, for memory fragments of any given length, the memory addresses of the blocks occupied by fragments of that length are managed through three queues: an unused memory block queue, an exhausted memory block queue, and an in-use memory block queue;
the unused memory block queue holds the addresses of blocks whose fragments of that length are all free;
the exhausted memory block queue holds the addresses of blocks whose fragments of that length are all occupied;
the in-use memory block queue holds the addresses of blocks in which only some fragments of that length are occupied.
With this data communication device, managing the blocks occupied by fragments of a given length through these purpose-specific queues greatly reduces the latency of a service's memory application. It also reduces page-table switching and the computation spent on page-table lookups and switches.
In one possible design, the memory management device is configured to, when receiving the service request, preferentially allocate, from the free memory fragments of the length corresponding to the request, a fragment in the occupied memory block that has the fewest free fragments.
With this data communication device, data is written into memory more contiguously and excessive fragmentation is avoided.
In a possible design, for memory fragments of any given length, the occupancy of fragments of that length is maintained in a multi-level Bitmap lookup table.
A multi-level Bitmap lookup table keeps the actual occupancy of the memory fragments in order.
In one possible design, the memory management device is configured to detect, for memory blocks of any given size, whether the proportion of blocks of that size in the occupied state is below a first preset threshold, and if so, to unload at least one memory block in the idle state from the blocks of that size.
In one possible design, the memory management device is configured to detect, for memory blocks of any given size, whether the proportion of blocks of that size in the occupied state exceeds a second preset threshold, and if so, to load at least one memory block in the idle state from the system memory.
Deciding to load from or unload to the system memory according to the actual occupancy of the memory blocks enables flexible memory management and meets the data-volume demands of services in different scenarios.
In one possible design, the memory management device is configured to cache, for memory fragments of any given length, a preset number of free-fragment addresses of that length in a double-pointer last-in first-out (LIFO) cache; accordingly, when a service request is received, a free fragment's memory address is allocated from the double-pointer LIFO cache in response to the request.
With this data communication device, the addresses of some memory fragments are fetched in advance for allocation to service requests, shortening the time needed to locate a free fragment and greatly reducing the latency of a service's memory application.
In one possible design, the memory management device is configured to detect, for memory fragments of any given length, whether the number of memory addresses in the double-pointer LIFO cache has fallen below a preset threshold;
and if the number of memory addresses is below the preset threshold, to fetch a preset number of fragment addresses and store them in the double-pointer LIFO cache.
With this data communication device, the cache stays stocked with memory addresses, greatly reducing the latency of a service's memory application.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. The drawings show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a hardware structure of a data communication device according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a memory management device according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a queue maintained by a memory block management module according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a three-level Bitmap lookup table according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating a configuration manner of a memory block queue and a memory fragmentation queue according to an embodiment of the present invention.
Fig. 6 is a flowchart of a memory management method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
To help the reader understand the technical solutions of the embodiments of the present invention, the hardware structure of the data communication device is first briefly described.
As shown in fig. 1, a hardware structure of the data communication device may include at least a receiving device 110, a sending device 120, a system host 130, a memory management device 140, and a memory 150. The receiving device 110 is configured to receive a data packet, the sending device 120 is configured to send a data packet, the system host 130 is configured to perform specific processing on the received data packet, and the memory management device 140 is configured to allocate a memory to data corresponding to a service request according to the service request generated in a processing process during the processing of the data packet.
Referring to fig. 2, a schematic structural diagram of a memory management device according to an embodiment of the invention is shown. As shown in fig. 2, the apparatus may include, but is not limited to: memory block state monitoring module 210, memory fragment monitoring module 220, memory fragment management module 230, and data storage module 240. The functions of these modules are described separately below:
the memory block status monitoring module 210.
The memory block state monitoring module 210 is configured to monitor states of at least two memory blocks; when one memory block is subjected to memory fragmentation, the state of the memory block is an occupied state, and when one memory block is not subjected to memory fragmentation, the state of the memory block is an idle state.
The memory 150 may be divided into a plurality of memory blocks. Block sizes may be set by a technician according to the requirements of the data communication service, and the blocks may be equal or unequal in size; in practical applications, the whole memory may even be treated as a single memory block. The present invention places no particular limitation on how the memory is divided, such as the number or size of the blocks.
Since the data communication device can process multiple different services, each service may pre-occupy a certain number of memory blocks to meet its requirements: at least one block of the same size, or at least two blocks of different sizes, without specific limitation here. To avoid wasting memory resources, an occupied block may be further divided into at least one memory fragment, so that memory is applied for and allocated per fragment against service requests. It should be noted that the size and number of fragments in each block may likewise be set by a technician according to the requirements of the data communication service, and may be the same or different across blocks; the present invention places no specific limitation on this.
It should be noted that when the memory 150 is divided into blocks, only part of the memory may be divided, with the rest left undivided for flexible later use. Further, to cope with bursts of service requests, the number of enabled memory blocks can be adjusted flexibly according to block occupancy. Specifically, the memory block state monitoring module may be used in either of the following implementation processes:
in a first implementation process, the memory block state monitoring module is configured to detect, for a memory block of any size, whether a proportion of memory blocks in an occupied state in the memory blocks of the size is smaller than a first preset threshold; and if the ratio is smaller than the first preset threshold, unloading at least one memory block in an idle state from the memory blocks with the sizes.
For this implementation process, if the proportion of occupied blocks of the size is detected to be below the first preset threshold, the current services need little data and too many blocks are occupied, so at least one idle block of that size can be unloaded.
In a second implementation process, the memory block state monitoring module is configured to detect, for a memory block of any size, whether a proportion of memory blocks in an occupied state in the memory blocks of the size is greater than a second preset threshold; and if the ratio is larger than the second preset threshold, loading at least one memory block in an idle state from the system memory.
For this implementation process, if it is detected that the proportion of the memory blocks in the occupied state in the memory blocks of the size is greater than the second preset threshold, it indicates that the remaining amount of the memory block of the size is insufficient, and may not be enough to handle the data amount of the service, so that at least one memory block in an idle state may be loaded from the system memory to ensure that the service processing is not affected.
It should be noted that, in addition to determining whether to unload or load the memory blocks according to the above ratio, the determination may be performed according to the number of the memory blocks in the occupied state, and the present invention is not limited to this.
For the management of the memory blocks, a queue form may be adopted, each queue is used to store addresses of the memory blocks, each queue may be used to manage the memory blocks with the same size, and the number of the addresses stored in each queue may be the same or different, which is not specifically limited in this embodiment of the present invention. It should be noted that the queue exists in the form of a linked list or other data structure, which may include at least the following information: queue head and tail pointers, the number of memory blocks, the size of the memory blocks and the like.
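The per-size queue record described above can be sketched as a linked list carrying head and tail pointers, a block count, and the block size it manages. The class and field names here are invented for illustration; the patent does not prescribe this exact layout.

```python
# Sketch of one per-size memory block queue: a linked list of block addresses
# with head/tail pointers plus the count and size metadata the text lists.
class BlockNode:
    def __init__(self, addr):
        self.addr = addr
        self.next = None

class BlockQueue:
    def __init__(self, block_size):
        self.head = self.tail = None
        self.count = 0                  # number of blocks in the queue
        self.block_size = block_size    # all blocks in one queue share a size

    def push(self, addr):               # release: append at the tail
        node = BlockNode(addr)
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node
        self.count += 1

    def pop(self):                      # apply: take from the head
        if not self.head:
            return None
        node, self.head = self.head, self.head.next
        if self.head is None:
            self.tail = None
        self.count -= 1
        return node.addr

bq = BlockQueue(block_size=4096)
bq.push(0x1000); bq.push(0x2000)
print(hex(bq.pop()), bq.count)          # -> 0x1000 1
```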
In summary, the memory block is loaded and unloaded according to the actual occupation situation of the memory block, so that the memory block can be dynamically adjusted according to the actual business requirement, thereby enhancing the sharing of the memory resource and improving the memory use efficiency.
Memory fragment monitoring module 220.
The memory slice monitoring module 220 is configured to monitor states of at least two memory slices, where the at least two memory slices include at least two slice lengths, and each memory slice is formed by dividing a memory block; if the number of idle memory fragments with the same length is less than the application threshold value of the memory blocks, applying for the memory blocks in the idle state to be segmented according to the states of the at least two memory blocks; and if the number of the idle memory fragments with the same length is larger than the memory block release threshold value, releasing the idle memory fragments with the same length.
A memory fragment has two states: it is in the occupied state while allocated, and in the idle state when released or not yet allocated. If the number of free fragments of a given length falls below the memory block application threshold, fragments of that length may not suffice for the service; at least one block may then be applied for from the blocks in the idle state and divided into fragments of that length, whereupon the divided block switches to the occupied state. If the number of free fragments of a given length exceeds the memory block release threshold, service demand for that length is limited, and the free fragments may be released to avoid wasting memory resources. On release, the fragments' memory addresses are aggregated back into the address of a whole block, which is then freed.
Further, the memory slice monitoring module is configured to, for memory slices of any given length, release the memory blocks occupied by slices of that length if no application for a slice of that length is received within a preset time. The absence of applications within the preset time indicates that services have no demand for that length, so the occupied memory blocks may be released to avoid wasting memory resources.
To reduce memory fragmentation and increase memory utilization, for memory slices of any given length, several queues with different roles may be used to manage the memory blocks occupied by slices of that length, as described below with reference to fig. 3. The memory slice monitoring module manages the memory addresses of the memory blocks occupied by slices of a given length in the form of an unused memory block queue, an exhausted memory block queue, and a currently used memory block queue. The unused memory block queue contains the addresses of memory blocks in which all slices of that length are idle. The exhausted memory block queue contains the addresses of memory blocks in which all slices of that length are occupied. The currently used memory block queue contains the addresses of memory blocks in which some slices of that length are occupied.
It should be noted that a memory block occupied by memory slices can be in one of three states: the unused state, in which all slices divided from the block are idle; the used state, in which some slices are idle and some are allocated; and the exhausted state, in which all slices are allocated. As the device operates, the states of the memory slices change, so the state of a memory block may also change at any time; for example, when one slice in an unused memory block is allocated, that block becomes a currently used memory block. To reflect these changes accurately, the queues need to be maintained dynamically, and the dynamic maintenance process may include the following cases (a) to (d):
a) When all memory slices in a first specified memory block of the currently used memory block queue become idle, the memory slice monitoring module deletes the first specified memory block from the currently used memory block queue and adds it to the tail of the unused memory block queue.
b) When the currently used memory block queue contains no memory block, the memory slice monitoring module takes at least one idle memory block from the head of the unused memory block queue and adds it to the tail of the currently used memory block queue.
c) When at least one memory slice of a second specified memory block in the exhausted memory block queue becomes idle, the memory slice monitoring module deletes the second specified memory block from the exhausted memory block queue and adds it to the head of the currently used memory block queue, so that the slices in that block are allocated preferentially, which reduces memory fragmentation and increases memory utilization.
d) When all memory slices of a third specified memory block in the currently used memory block queue have been allocated, the memory slice monitoring module deletes the third specified memory block from the currently used memory block queue and adds it to the tail of the exhausted memory block queue.
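Cases (a) to (d) can be sketched as simple queue transitions. The `Block` class, the `deque` queues, and the handler names below are assumptions made for illustration, not structures named in the patent.

```python
from collections import deque

class Block:
    """One memory block, tracking how many of its slices are idle."""
    def __init__(self, total):
        self.free = total      # number of idle slices in this block
        self.total = total

unused, in_use, exhausted = deque(), deque(), deque()

def on_slice_freed(blk):
    blk.free += 1
    if blk in exhausted:                    # (c) exhausted -> head of in-use,
        exhausted.remove(blk)               #     so its slices are drained first
        in_use.appendleft(blk)
    elif blk in in_use and blk.free == blk.total:
        in_use.remove(blk)                  # (a) fully idle -> tail of unused
        unused.append(blk)

def on_slice_allocated(blk):
    blk.free -= 1
    if blk.free == 0:                       # (d) fully allocated -> tail of exhausted
        in_use.remove(blk)
        exhausted.append(blk)

def refill_in_use():
    if not in_use and unused:               # (b) replenish from head of unused
        in_use.append(unused.popleft())
```

Exercising the four handlers walks a block through all three states exactly as cases (a) to (d) describe.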
In summary, the memory blocks occupied by memory slices of a given length are managed with several queues of different roles, so that slices in the currently used memory blocks are allocated preferentially. This reduces memory fragmentation, increases memory utilization, improves the flexibility of memory allocation, and avoids the negative effects of frequent page table switching.
Further, to speed up the lookup of memory slices during subsequent memory applications and reduce the latency of a service applying for memory, the present invention provides a way of maintaining the occupation status of memory slices: the memory slice monitoring module maintains the occupation status of memory slices of any given length with a multi-level Bitmap lookup table.
For example, for memory slices of one length, a multi-level Bitmap lookup table may be configured, comprising a first-level Bitmap scheduling table and a three-level Bitmap lookup table, where the three-level Bitmap lookup table comprises a first-level Bitmap lookup table, a second-level Bitmap lookup table, and a third-level Bitmap lookup table.
The first-level Bitmap scheduling table stores the mapping between the memory blocks in the unused, exhausted, and currently used memory block queues and the three-level Bitmap lookup table. The first-level Bitmap lookup table is a one-dimensional storage structure containing a number of first-level storage units, each of which corresponds to one row of the second-level Bitmap lookup table. The second-level Bitmap lookup table is a two-dimensional storage structure containing several rows of second-level storage units, each of which corresponds to one row of the third-level Bitmap lookup table. The third-level Bitmap lookup table is a two-dimensional storage structure containing a number of third-level storage units, each of which corresponds to one memory slice. Fig. 4 illustrates this three-level Bitmap lookup table. Each first-level, second-level, and third-level storage unit holds a preset value that reflects the occupation state of the memory corresponding to that unit. For example, the preset value may be 0 or 1, where 0 indicates that the corresponding memory is unoccupied or not fully occupied, and 1 indicates that it is fully occupied. These preset values are merely exemplary; in practical applications, the preset value may be any value, and more than two values may be used, which the present invention does not specifically limit. Likewise, in practical applications, the multi-level Bitmap lookup table may include only a Bitmap lookup table without a Bitmap scheduling table, or may include a Bitmap lookup table with any number of levels, such as two or four, which the present invention also does not specifically limit. Because lookups in Bitmap lookup tables with different numbers of levels are very similar, the present invention is described using a three-level Bitmap lookup table as an example, and the lookup methods for Bitmap lookup tables with other numbers of levels are not described in detail.
To facilitate understanding of how the multi-level Bitmap lookup table locates an idle memory slice in the currently used memory block queue, the method is briefly illustrated below.
For example, to find the first idle memory slice in the currently used memory block queue, the first-level Bitmap scheduling table may first be searched to obtain the three-level Bitmap lookup table corresponding to the memory block at the head of that queue. The first-level Bitmap lookup table is then scanned in order for a first-level storage unit whose preset value is 0 (0 indicating unoccupied or not fully occupied memory); assume the number of the first such unit is a. Row a of the second-level Bitmap lookup table is then scanned for a second-level storage unit with preset value 0; assume the number of the first such unit is b. The memory management device 200 then searches row ab of the third-level Bitmap lookup table for a third-level storage unit with preset value 0; assuming the number of the first such unit is ijkl0124, the memory slice numbered ijkl0124 is the first idle memory slice in the currently used memory block queue. The numbers in this example are merely illustrative and do not limit the present invention.
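The level-by-level search above can be sketched as nested scans over bit arrays. The data layout below (plain lists and a dict keyed by row coordinates) is an assumption made for the example, not the patented storage structure.

```python
def find_first_free(l1, l2, l3):
    """Locate the first idle slice via a three-level bitmap scan (illustrative).

    l1: first-level table, a list of bits.
    l2: second-level table, one row of bits per first-level unit.
    l3: third-level table, dict mapping (a, b) -> row of bits, one bit per slice.
    Returns the (a, b, c) coordinates of the first idle slice, or None.
    """
    for a, bit1 in enumerate(l1):
        if bit1 != 0:              # row a of the second-level table is fully occupied
            continue
        for b, bit2 in enumerate(l2[a]):
            if bit2 != 0:          # row (a, b) of the third-level table is fully occupied
                continue
            for c, bit3 in enumerate(l3[(a, b)]):
                if bit3 == 0:      # preset value 0: this slice is idle
                    return (a, b, c)
    return None
```

The scan only descends into rows whose higher-level bit is 0, which is what makes the hierarchical lookup fast.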
When memory blocks and memory slices are both managed as queues, the logical relationship between memory block queues and memory slice queues may be as shown in fig. 5: several memory slice queues may correspond to one memory block queue, the slices managed by each slice queue being divided from the blocks managed by that block queue, and in an actual scenario the memory 150 may be managed by several memory block queues.
Memory fragmentation management module 230
The memory slice management module 230 is configured to, when receiving a service request, allocate an idle memory slice with a corresponding length to the service request.
To reduce memory fragmentation, when allocating memory slices, the slices in the memory block at the head of the memory block queue may be allocated preferentially. Because the correspondence between logical addresses and physical addresses is stored in page tables, allocating a memory slice requires the page table to resolve the mapping between the slice's memory address and its memory block. By always allocating from the block at the head of the queue, the mapping can be obtained without switching page tables, so this method effectively suppresses the negative effects of frequent page table switching.
In a possible design, the memory slice management module 230 is configured to, when a service request is received, preferentially allocate, from the idle memory slices of the length corresponding to the request, an idle slice in the occupied memory block that has the fewest idle slices.
The inventor realized that, beyond preferentially allocating slices in currently used memory blocks, preferentially allocating slices in the currently used block with the fewest idle slices further reduces memory fragmentation and increases memory utilization. Therefore, in an embodiment of the present invention, for each slice length, the number of idle slices in each memory block of the currently used memory block queue may be detected at intervals of a predetermined time, and the queue may be sorted by idle-slice count in ascending order, so that slices in the block with the fewest idle slices are allocated first. The predetermined time may be set by a technician, which the present invention does not specifically limit.
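A minimal sketch of the periodic re-sort, assuming the queue is held as a list of (block, idle-count) pairs; the function name is an illustrative assumption.

```python
def resort_in_use(in_use_queue):
    """Sort the currently used memory block queue so that the block with the
    fewest idle slices sits at the head and is drained first (illustrative)."""
    in_use_queue.sort(key=lambda entry: entry[1])   # ascending idle-slice count
    return in_use_queue
```

Running this on a timer keeps the nearly-full blocks at the head, so they are exhausted (and removed from the queue) as quickly as possible.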
In a possible design, the memory slice management module is configured to, for memory slices of any given length, cache a preset number of memory addresses of idle slices of that length in a dual-pointer last-in first-out (LIFO) buffer, and, when a service request is received, allocate the memory address of an idle slice from the dual-pointer LIFO buffer in response to the request.
To further reduce the latency of a service applying for memory, a dual-pointer LIFO buffer can be provided for each slice length to cache a certain number of memory addresses, so that when a service applies for a slice, one can be allocated directly from the buffer. When the multi-level Bitmap lookup table is used for maintenance, this avoids querying the lookup table: memory is allocated directly from the cache, which further reduces the latency of the service's memory application and the amount of computation that memory management requires.
In the scenario where a preset number of memory addresses of idle slices of a given length are cached in a dual-pointer LIFO buffer, after a slice use request is received, the address in the storage unit indicated by the stack-top pointer may be allocated preferentially. After a memory release request is received, the address of the slice to be released may be written into the storage unit indicated by the stack-top pointer, so that it is reallocated preferentially; this avoids frequent page table switching when services apply for memory.
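The dual-pointer LIFO cache can be sketched as a ring buffer with a stack-top pointer for released addresses and a stack-bottom pointer for prefetched batches. The class below is an illustrative model; its names and the ring-buffer layout are assumptions, not the patented design.

```python
class DualPointerLIFO:
    """Address cache with a stack-top and a stack-bottom pointer (illustrative)."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.top = 0          # next free unit above the newest (hottest) entry
        self.bottom = 0       # unit just below the oldest entry

    def __len__(self):
        return self.top - self.bottom

    def push_top(self, addr):
        """Store a just-released slice address; it will be reallocated first."""
        self.buf[self.top % len(self.buf)] = addr
        self.top += 1

    def pop_top(self):
        """Allocate: take the most recently stored address."""
        self.top -= 1
        return self.buf[self.top % len(self.buf)]

    def push_bottom(self, addrs):
        """Write a prefetched batch below the existing entries."""
        for addr in addrs:
            self.bottom -= 1
            self.buf[self.bottom % len(self.buf)] = addr
```

Freed addresses pushed at the top are reused before the prefetched batch at the bottom, which is the behaviour the two pointers exist to separate.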
In a possible design, the memory slice management module 230 is configured to, for memory slices of any given length, detect whether the number of memory addresses in the dual-pointer LIFO buffer is lower than a preset threshold, and, if it is, acquire the memory addresses of a preset number of idle slices and store them in the buffer.
If the number of cached memory addresses is lower than the preset threshold, the slices available for allocation are running low, and the memory addresses of a preset number of slices of that length may be acquired. The preset threshold and the preset number may be set by a technician, which the present invention does not specifically limit. For example, taking the currently used memory block queue, the memory slice management module may search the multi-level Bitmap lookup table level by level for idle slices in the currently used memory blocks, following the order of the blocks in the queue (a currently used memory block being one in which some slices are occupied), obtain a preset number of idle slices in the order found, and fetch their memory addresses into the cache.
To assist the reader, the acquisition process is explained below using the structure of the multi-level Bitmap lookup table. Suppose that comparing the number of to-be-allocated slice addresses in the cache with the preset threshold shows that 10 idle memory slices need to be acquired. The multi-level Bitmap lookup table is then queried as follows. First, the first-level Bitmap scheduling table is searched to obtain the three-level Bitmap lookup table corresponding to the first memory block at the head of the currently used memory block queue. The first-level Bitmap lookup table is then scanned in order for a first-level storage unit with preset value 0 (0 indicating unoccupied or not fully occupied memory); assume the number of the first such unit is a. Row a of the second-level Bitmap lookup table is then scanned for a second-level storage unit with preset value 0; assume the number of the first such unit is b. Row ab of the third-level Bitmap lookup table is then scanned for third-level storage units with preset value 0. If 10 such units are found in row ab, the memory addresses of the 10 corresponding slices are prefetched into the cache. If fewer than 10 are found, the search returns to row a of the second-level Bitmap lookup table to find the next second-level storage unit with preset value 0; assuming the number of the second such unit is c, row ac of the third-level Bitmap lookup table is scanned in the same way. This query process can be repeated until 10 memory addresses have been obtained; of course, if not enough idle slices are found after returning to the second-level Bitmap lookup table, the search may continue back up to the first-level Bitmap lookup table or even the first-level Bitmap scheduling table. The preset values and storage unit numbers above are merely exemplary and do not limit the present invention.
It should be noted that, after the memory addresses of the preset number of idle slices have been fetched into the cache, the preset values of the corresponding third-level storage units may be modified in the multi-level Bitmap lookup table to indicate that those addresses are now in the cache, which avoids errors in the next acquisition.
Further, for memory slices of any given length, whether the number of memory addresses in the dual-pointer LIFO buffer is higher than a recovery threshold may be detected; if it is, the memory addresses of a preset number of slices are released from the buffer. Releasing here means returning those slices to the three-level Bitmap lookup table.
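The low-watermark refill and recovery-threshold release can be sketched together as one balance step. The helper name, the list-based pools, and the parameters below are assumptions made for illustration.

```python
def balance_cache(cache, free_pool, low_mark, high_mark, batch):
    """Keep the per-length address cache between two watermarks (illustrative).

    cache: addresses ready for immediate allocation.
    free_pool: idle slice addresses tracked by the Bitmap lookup table.
    """
    if len(cache) < low_mark:
        # Below the preset threshold: prefetch a batch of idle slice addresses.
        take = free_pool[:batch]
        del free_pool[:batch]
        cache.extend(take)
    elif len(cache) > high_mark:
        # Above the recovery threshold: return a batch to the pool,
        # i.e. back into the three-level Bitmap lookup table.
        give = cache[-batch:]
        del cache[-batch:]
        free_pool.extend(give)
    return cache, free_pool
```

Calling this periodically (or on each allocate/release) keeps the cache stocked without letting it hoard idle slices.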
It should also be noted that the storage structure of the buffer may vary: it may be a LIFO buffer, a first-in first-out (FIFO) buffer, or similar, which the present invention does not specifically limit. When the buffer is a dual-pointer LIFO buffer, it may include a stack-top pointer and a stack-bottom pointer, and the preset number of idle slice addresses may be written into the storage units indicated by the stack-bottom pointer.
Data storage module 240
The data storage module 240 is configured to store data corresponding to the service request according to the memory address of the idle memory slice allocated by the memory slice management module. The data storage module 240 serves as a front end for processing the service request, and a specific storage process thereof is not described herein.
The device provided by the embodiment of the present invention manages memory blocks and memory slices hierarchically and applies for or releases memory blocks flexibly according to the actual occupation of the memory slices. This realizes dynamic management of memory blocks, accommodates bursty services, enhances the sharing of memory resources, avoids waste, and improves memory utilization. In addition, some addresses of to-be-allocated slices can be fetched into a cache in advance, so that when a service applies for memory an address is allocated directly from the cache without searching for an idle slice, which reduces the latency of the service's memory application.
Fig. 6 is a flowchart of a memory management method according to an embodiment of the present invention. Referring to fig. 6, this embodiment specifically includes:
601. Monitor the states of at least two memory blocks, where a memory block that has been divided into memory slices is in the occupied state, and a memory block that has not been divided is in the idle state.
602. Monitoring the states of at least two memory fragments, wherein the at least two memory fragments comprise at least two fragment lengths, and each memory fragment is formed by dividing a memory block.
603. If the number of idle memory slices of a given length is less than the memory block application threshold, apply for an idle memory block, according to the states of the at least two memory blocks, and divide it; if the number of idle memory slices of a given length is greater than the memory block release threshold, release the idle memory slices of that length.
It should be noted that the execution sequence of the steps 601 to 603 may be parallel, or may be any execution sequence, and the monitoring cycles may be the same or different, which is not specifically limited in this embodiment of the present invention.
604. And when a service request is received, allocating an idle memory fragment with a corresponding length to the service request.
605. And storing the data corresponding to the service request according to the allocated memory address of the idle memory fragment.
In one possible design, the method further includes:
for memory slices of any given length, if no application for a slice of that length is received within the preset time, releasing the memory blocks occupied by slices of that length.
In one possible design, for memory slices of any given length, the memory addresses of the memory blocks occupied by slices of that length are managed in the form of an unused memory block queue, an exhausted memory block queue, and a currently used memory block queue, where:
the unused memory block queue includes the memory addresses of memory blocks in which all slices of that length are idle;
the exhausted memory block queue includes the memory addresses of memory blocks in which all slices of that length are occupied;
the currently used memory block queue includes the memory addresses of memory blocks in which some slices of that length are occupied.
In one possible design, allocating an idle memory slice of the corresponding length to a service request when the request is received includes:
when the service request is received, preferentially allocating, from the idle memory slices of the length corresponding to the request, an idle slice in the occupied memory block that has the fewest idle slices.
In one possible design, for any length of memory slice, the occupation condition of the memory slice with the length is maintained by adopting a multi-level Bitmap lookup table.
In one possible design, the method further includes:
for memory blocks of any given size, detecting whether the proportion of occupied memory blocks among the blocks of that size is smaller than a first preset threshold;
and if the proportion is smaller than the first preset threshold, unloading at least one idle memory block from the blocks of that size.
In one possible design, the method further includes:
for memory blocks of any given size, detecting whether the proportion of occupied memory blocks among the blocks of that size is larger than a second preset threshold;
and if the proportion is larger than the second preset threshold, loading at least one idle memory block from the system memory.
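The two ratio checks above (unloading an idle block when occupancy is low, loading one from system memory when it is high) can be sketched as follows; the function name and the thresholds are illustrative assumptions.

```python
def rebalance_blocks(occupied, idle, low_ratio, high_ratio, system_memory):
    """Adjust the idle-block pool for one block size (illustrative).

    occupied: count of occupied blocks of this size.
    idle: list of idle blocks of this size.
    system_memory: list of blocks available from the system.
    """
    total = occupied + len(idle)
    if total == 0:
        return idle
    ratio = occupied / total
    if ratio < low_ratio and idle:
        idle.pop()                            # unload one idle block to the system
    elif ratio > high_ratio and system_memory:
        idle.append(system_memory.pop())      # load one block from system memory
    return idle
```

Keeping the occupancy ratio between the two thresholds is what lets the pool shrink when demand falls and grow ahead of demand when it rises.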
In one possible design, the method further includes:
for memory slices of any given length, caching a preset number of memory addresses of idle slices of that length in a dual-pointer last-in first-out (LIFO) buffer;
correspondingly, when a service request is received, allocating the memory address of an idle slice from the dual-pointer LIFO buffer in response to the request.
In one possible design, the method further includes:
for memory slices of any given length, detecting whether the number of memory addresses in the dual-pointer LIFO buffer is lower than a preset threshold;
and if the number of memory addresses is lower than the preset threshold, acquiring the memory addresses of a preset number of slices and storing them in the buffer.
It should be noted that the specific implementation process of the method embodiment and the device embodiment are similar to those described above, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific processes of the method described above may refer to corresponding processes in the apparatus embodiment, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (16)

1. A memory management device, comprising:
the memory block state monitoring module is configured to monitor the states of at least two memory blocks, where a memory block that has been divided into memory slices is in the occupied state and a memory block that has not been divided is in the idle state;
the memory fragment monitoring module is used for monitoring the states of at least two memory fragments, wherein the at least two memory fragments comprise at least two fragment lengths, and each memory fragment is formed by dividing a memory block;
the memory fragment monitoring module is further configured to apply for division of the memory blocks in the idle state according to the states of the at least two memory blocks if the number of idle memory fragments of the same length is less than a memory block application threshold; if the number of the idle memory fragments with the same length is larger than the memory block release threshold value, releasing the idle memory fragments with the same length;
the memory slice management module is configured to, when a service request is received, preferentially allocate, from the idle memory slices of the length corresponding to the request, an idle memory slice in the occupied memory block that has the fewest idle memory slices;
and the data storage module is configured to store the data corresponding to the service request according to the memory address of the idle memory slice allocated by the memory slice management module.
2. The apparatus of claim 1, wherein the memory slice monitor module is configured to:
for any length of memory fragments, if the application for the length of memory fragments is not received within the preset time, releasing the memory blocks occupied by the length of memory fragments.
3. The apparatus according to claim 1, wherein the memory segment monitoring module is configured to manage, for a memory segment of any length, memory addresses of memory blocks occupied by the memory segment of the length in the form of an unused memory block queue, a depleted memory block queue, and a currently used memory block queue;
the unused memory block queue comprises the memory addresses of memory blocks in which all memory slices of the length are idle;
the exhausted memory block queue comprises memory addresses of memory blocks of which all memory slices in the memory slices with the length are occupied;
the currently used memory block queue includes memory addresses of memory blocks occupied by part of the memory slices in the memory slices with the length.
4. The apparatus according to claim 1, wherein the memory slice monitoring module is configured to maintain, for a memory slice of any length, an occupation status of the memory slice of the length by using a multi-level Bitmap lookup table.
5. The apparatus according to claim 1, wherein the memory block status monitoring module is further configured to:
for any memory block with any size, detecting whether the proportion of the memory blocks in the occupied state in the memory blocks with the size is smaller than a first preset threshold value;
and if the ratio is smaller than the first preset threshold, unloading at least one memory block in an idle state from the memory blocks with the sizes.
6. The apparatus according to claim 1, wherein the memory block status monitoring module is further configured to:
for any memory block with any size, detecting whether the proportion of the memory blocks in the occupied state in the memory blocks with the size is larger than a second preset threshold value;
and if the ratio is larger than the second preset threshold, loading at least one memory block in an idle state from the system memory.
7. The apparatus according to claim 1, wherein the memory slice management module is configured to, for any length of memory slice, buffer a preset number of memory addresses of free memory slices of the length in a form of a dual pointer last in first out LIFO buffer, and when a service request is received, allocate the memory addresses of the free memory slices from the dual pointer last in first out LIFO buffer in response to the service request.
8. The apparatus according to claim 7, wherein the memory slice management module is configured to, for memory slices of any given length, detect whether the number of memory addresses in the dual-pointer last-in first-out LIFO buffer is lower than a preset threshold;
and if the number of the memory addresses is lower than the preset threshold value, acquiring the memory addresses of the memory fragments with the preset number and storing the memory addresses into the double-pointer last-in first-out LIFO cache.
9. A memory management method, comprising:
monitoring the states of at least two memory blocks, where a memory block that has been divided into memory slices is in the occupied state and a memory block that has not been divided is in the idle state;
monitoring the states of at least two memory fragments, wherein the at least two memory fragments comprise at least two fragment lengths, and each memory fragment is formed by dividing a memory block;
if the number of idle memory fragments with the same length is less than the application threshold value of the memory blocks, applying the memory blocks in the idle state to divide according to the states of the at least two memory blocks; if the number of the idle memory fragments with the same length is larger than the memory block release threshold value, releasing the idle memory fragments with the same length;
when a service request is received, preferentially distributing idle memory fragments in memory blocks occupying the memory blocks with the least number of idle memory fragments from the idle memory fragments with the length corresponding to the service request;
and storing the data corresponding to the service request according to the allocated memory address of the idle memory fragment.
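The method steps of claim 9 resemble a slab-style allocator: each slice length owns a pool of blocks, blocks are split into fixed-length slices, and two thresholds bound the number of free slices per length. The sketch below is a minimal model under assumed data structures; `SlicePool`, `BLOCK_SIZE`, and both threshold names are invented for illustration.

```python
# Illustrative sketch of the method of claim 9; identifiers are invented.
BLOCK_SIZE = 4096

class SlicePool:
    def __init__(self, slice_len, request_threshold, release_threshold):
        self.slice_len = slice_len
        self.request_threshold = request_threshold  # min free slices before requesting a block
        self.release_threshold = release_threshold  # max free slices before releasing a block
        self.blocks = {}        # block base address -> set of free slice addresses
        self.next_base = 0      # stand-in for the idle-block supply

    def free_count(self):
        return sum(len(s) for s in self.blocks.values())

    def _request_block(self):
        # Claim 9: if free slices of this length fall below the request
        # threshold, take an idle block and divide it into slices.
        base, self.next_base = self.next_base, self.next_base + BLOCK_SIZE
        self.blocks[base] = set(range(base, base + BLOCK_SIZE, self.slice_len))

    def allocate(self):
        if self.free_count() < self.request_threshold:
            self._request_block()
        # Claim 9: prefer the block with the fewest free slices, so lightly
        # used blocks drain empty and can later be released whole.
        base = min((b for b, s in self.blocks.items() if s),
                   key=lambda b: len(self.blocks[b]))
        return self.blocks[base].pop()

    def release(self, addr):
        base = addr - addr % BLOCK_SIZE
        self.blocks[base].add(addr)
        # Claim 9: if free slices exceed the release threshold, return one
        # fully free block to the idle state.
        if self.free_count() > self.release_threshold:
            for b, s in list(self.blocks.items()):
                if len(s) * self.slice_len == BLOCK_SIZE:
                    del self.blocks[b]
                    break

# Usage: a pool of 256-byte slices cut from 4 KiB blocks.
pool = SlicePool(slice_len=256, request_threshold=1, release_threshold=64)
a = pool.allocate()
```

Preferring the block with the fewest free slices concentrates live slices into few blocks, which is what makes the block-release threshold effective against fragmentation.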
10. The method of claim 9, further comprising:
for memory slices of any length, if no request for a memory slice of that length is received within a preset time, releasing the memory blocks occupied by memory slices of that length.
11. The method according to claim 9, wherein, for memory slices of any length, the memory addresses of the memory blocks occupied by memory slices of that length are managed in the form of an unused memory block queue, an exhausted memory block queue and an in-use memory block queue;
the unused memory block queue comprises memory addresses of memory blocks in which all memory slices of that length are free;
the exhausted memory block queue comprises memory addresses of memory blocks in which all memory slices of that length are occupied;
and the in-use memory block queue comprises memory addresses of memory blocks in which part of the memory slices of that length are occupied.
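The three-queue bookkeeping of claim 11 can be sketched as follows. The class and method names are invented; the claim only fixes the invariant that a block sits in exactly one of the unused, in-use, or exhausted queues according to how many of its slices are occupied.

```python
# Minimal sketch of the three-queue block bookkeeping of claim 11.
from collections import deque

class BlockQueues:
    def __init__(self, slices_per_block):
        self.slices_per_block = slices_per_block
        self.unused = deque()      # all slices of the block are free
        self.in_use = deque()      # some slices occupied, some free
        self.exhausted = deque()   # all slices occupied
        self.used = {}             # block address -> occupied-slice count

    def add_block(self, addr):
        self.used[addr] = 0
        self.unused.append(addr)

    def note_alloc(self, addr):
        # A slice in this block was allocated; migrate between queues.
        self.used[addr] += 1
        if self.used[addr] == 1:
            self.unused.remove(addr)
            self.in_use.append(addr)
        if self.used[addr] == self.slices_per_block:
            self.in_use.remove(addr)
            self.exhausted.append(addr)

    def note_free(self, addr):
        # A slice in this block was freed; migrate back toward unused.
        if self.used[addr] == self.slices_per_block:
            self.exhausted.remove(addr)
            self.in_use.append(addr)
        self.used[addr] -= 1
        if self.used[addr] == 0:
            self.in_use.remove(addr)
            self.unused.append(addr)

# Usage: a toy block holding two slices.
q = BlockQueues(slices_per_block=2)
q.add_block(0x1000)
q.note_alloc(0x1000)   # block becomes partially occupied
```

Keeping exhausted blocks out of the allocation path means the allocator only ever scans queues whose blocks are guaranteed to contain a free slice.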
12. The method according to claim 9, wherein, for memory slices of any length, the occupation status of memory slices of that length is maintained using a multi-level Bitmap lookup table.
13. The method of claim 9, further comprising:
for memory blocks of any size, detecting whether the proportion of memory blocks in the occupied state among the memory blocks of that size is smaller than a first preset threshold;
and if the proportion is smaller than the first preset threshold, unloading at least one memory block in the idle state from among the memory blocks of that size.
14. The method of claim 9, further comprising:
for memory blocks of any size, detecting whether the proportion of memory blocks in the occupied state among the memory blocks of that size is larger than a second preset threshold;
and if the proportion is larger than the second preset threshold, loading at least one memory block in the idle state from the system memory.
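Claims 13 and 14 together describe an elastic-footprint policy: per block size, the occupied ratio is compared against a low and a high threshold, and idle blocks are returned to or taken from system memory accordingly. A hedged sketch, where `system_alloc` and `system_free` are stand-ins for whatever the host system provides:

```python
# Illustrative rebalancing step for claims 13 and 14; all names invented.

def rebalance(blocks, low, high, system_alloc, system_free):
    """blocks maps a block address to True (occupied) or False (idle)."""
    occupied = sum(blocks.values())
    ratio = occupied / len(blocks)
    if ratio < low:
        # Claim 13: too few blocks are occupied at this size;
        # unload one idle block back to system memory.
        addr = next(a for a, busy in blocks.items() if not busy)
        del blocks[addr]
        system_free(addr)
    elif ratio > high:
        # Claim 14: nearly all blocks occupied; load one more
        # idle block from system memory.
        blocks[system_alloc()] = False
    return blocks

# Usage: one occupied block out of four -> ratio 0.25, below low=0.5.
freed = []
blocks = {0x0000: False, 0x1000: False, 0x2000: False, 0x3000: True}
rebalance(blocks, low=0.5, high=0.9,
          system_alloc=lambda: 0x4000, system_free=freed.append)
```

The two thresholds form a hysteresis band: as long as the occupied ratio stays between them, no blocks move, which avoids thrashing against system memory under a stable load.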
15. The method of claim 9, further comprising:
for memory slices of any length, caching a preset number of memory addresses of free memory slices of that length in a dual-pointer last-in-first-out (LIFO) buffer;
and accordingly, when a service request is received, allocating the memory address of a free memory slice from the dual-pointer LIFO buffer in response to the service request.
16. The method of claim 15, further comprising:
for memory slices of any length, detecting whether the number of memory addresses in the dual-pointer LIFO buffer is lower than a preset threshold;
and if the number of memory addresses is lower than the preset threshold, acquiring the memory addresses of a preset number of free memory slices and storing them into the dual-pointer LIFO buffer.
CN201610618811.7A 2016-07-29 2016-07-29 Memory management device and method Active CN107665146B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610618811.7A CN107665146B (en) 2016-07-29 2016-07-29 Memory management device and method
PCT/CN2017/076666 WO2018018896A1 (en) 2016-07-29 2017-03-14 Memory management apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610618811.7A CN107665146B (en) 2016-07-29 2016-07-29 Memory management device and method

Publications (2)

Publication Number Publication Date
CN107665146A CN107665146A (en) 2018-02-06
CN107665146B CN107665146B (en) 2020-07-07

Family

ID=61016909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610618811.7A Active CN107665146B (en) 2016-07-29 2016-07-29 Memory management device and method

Country Status (2)

Country Link
CN (1) CN107665146B (en)
WO (1) WO2018018896A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108984323B (en) * 2018-07-13 2022-04-01 上海联影医疗科技股份有限公司 Scheduling method and system for shared storage space
CN109213596B (en) * 2018-08-01 2023-03-10 青岛海信移动通信技术股份有限公司 Method and equipment for allocating terminal memory
CN109656836A (en) * 2018-12-24 2019-04-19 新华三技术有限公司 A kind of data processing method and device
CN109800089A (en) * 2019-01-24 2019-05-24 湖南国科微电子股份有限公司 A kind of buffer resource distribution method, module and electronic equipment
CN110008021A (en) * 2019-03-05 2019-07-12 平安科技(深圳)有限公司 EMS memory management process, device, electronic equipment and computer readable storage medium
CN110471759B (en) * 2019-07-04 2023-09-01 中科晶上(苏州)信息技术有限公司 Method for dynamically managing memory of multi-core embedded processor in real time
CN111309482B (en) * 2020-02-20 2023-08-15 浙江亿邦通信科技有限公司 Hash algorithm-based block chain task allocation system, device and storable medium
CN113326120B (en) * 2020-02-29 2023-12-26 杭州迪普科技股份有限公司 Apparatus and method for managing memory
CN111984652B (en) * 2020-08-28 2022-08-12 苏州浪潮智能科技有限公司 Method for searching idle block in bitmap data and related components
CN114253457A (en) * 2020-09-21 2022-03-29 华为技术有限公司 Memory control method and device
CN112650449B (en) * 2020-12-23 2022-12-27 展讯半导体(南京)有限公司 Method and system for releasing cache space, electronic device and storage medium
CN112685333A (en) * 2020-12-28 2021-04-20 上海创功通讯技术有限公司 Heap memory management method and device
CN114327917A (en) * 2022-03-11 2022-04-12 武汉深之度科技有限公司 Memory management method, computing device and readable storage medium
CN117573377A (en) * 2024-01-15 2024-02-20 摩尔线程智能科技(北京)有限责任公司 Memory management method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1489334A (en) * 2002-10-11 2004-04-14 深圳市中兴通讯股份有限公司 Method for storage area management with static and dynamic joint
CN1532708A (en) * 2003-03-19 2004-09-29 华为技术有限公司 Static internal storage management method
CN101122883A (en) * 2006-08-09 2008-02-13 中兴通讯股份有限公司 Memory allocation method for avoiding RAM fragmentation
CN101847127A (en) * 2010-06-18 2010-09-29 福建星网锐捷网络有限公司 Memory management method and device
CN102455974A (en) * 2010-10-21 2012-05-16 上海宝信软件股份有限公司 High-speed internal memory application and release management system with controllable internal memory consumption and high-speed internal memory application release management method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136104B (en) * 2011-11-24 2016-04-06 深圳市快播科技有限公司 A kind of EMS memory management process and system
US9541984B2 (en) * 2013-06-05 2017-01-10 Apple Inc. L2 flush and memory fabric teardown
CN105469173A (en) * 2014-08-19 2016-04-06 西安慧泽知识产权运营管理有限公司 Method of optimal management on static memory


Also Published As

Publication number Publication date
WO2018018896A1 (en) 2018-02-01
CN107665146A (en) 2018-02-06

Similar Documents

Publication Publication Date Title
CN107665146B (en) Memory management device and method
US11340812B2 (en) Efficient modification of storage system metadata
US8949518B2 (en) Method for tracking memory usages of a data processing system
US9965196B2 (en) Resource reservation for storage system metadata updates
CN110858162B (en) Memory management method and device and server
US9769081B2 (en) Buffer manager and methods for managing memory
CN109614377A (en) File delet method, device, equipment and the storage medium of distributed file system
US11314689B2 (en) Method, apparatus, and computer program product for indexing a file
CN111324427B (en) Task scheduling method and device based on DSP
CN107209716B (en) Memory management device and method
US20190286582A1 (en) Method for processing client requests in a cluster system, a method and an apparatus for processing i/o according to the client requests
JP2011248920A (en) Configuration and method for managing usable memory resource
CN113535633A (en) On-chip cache device and read-write method
US20130232124A1 (en) Deduplicating a file system
US20030120886A1 (en) Method and apparatus for buffer partitioning without loss of data
WO2022135160A1 (en) Releasing method and releasing system for buffer space, and electronic device and storage medium
CN111190541B (en) Flow control method of storage system and computer readable storage medium
CN106537321B (en) Method, device and storage system for accessing file
KR101915945B1 (en) A Method for processing client requests in a cluster system, a Method and an Apparatus for processing I/O according to the client requests
CN113204382A (en) Data processing method, data processing device, electronic equipment and storage medium
US9116814B1 (en) Use of cache to reduce memory bandwidth pressure with processing pipeline
CN111324438A (en) Request scheduling method and device, storage medium and electronic equipment
US9965211B2 (en) Dynamic packet buffers with consolidation of low utilized memory banks
CN117539796A (en) Electronic device and buffer memory management method
US10747672B2 (en) Managing a datalog space of a data cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant