CN110928803B - Memory management method and device - Google Patents


Info

Publication number
CN110928803B
Authority
CN
China
Prior art keywords
memory
address space
memory block
pool
current
Prior art date
Legal status
Active
Application number
CN201811093671.1A
Other languages
Chinese (zh)
Other versions
CN110928803A (en)
Inventor
常怀鑫
Current Assignee
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811093671.1A
Publication of CN110928803A
Application granted
Publication of CN110928803B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15 Use in a specific computing environment
    • G06F2212/151 Emulated environment, e.g. virtual machine

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)

Abstract

The invention discloses a memory management method and apparatus. The memory management method comprises the following steps: receiving a memory allocation request, wherein the request includes the size of the memory block to be allocated; judging whether a memory block satisfying a predetermined condition exists in the current memory pool; if no memory block satisfying the predetermined condition exists in the current memory pool, determining an available address space from the virtual address space pre-allocated to the current memory pool; requesting a memory chunk from the operating system according to the determined available address space; and allocating a corresponding memory block from the memory chunk according to the size of the memory block to be allocated, wherein the memory chunk comprises at least one memory block. The invention also discloses a corresponding computing device.

Description

Memory management method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a memory management method and apparatus.
Background
For memory management of user processes in an operating system, a common approach is to allocate memory blocks of arbitrary size directly with the low-level allocate/free primitives (e.g., malloc), but this approach easily produces memory fragmentation. An improved approach is to manage memory block allocation through a memory pool. A memory pool contains several pools that manage memory allocation, each pool allocating memory blocks of a different size. When memory is requested, the pools are visited in order from small to large; if the block size in a pool is not large enough, the pool that allocates larger blocks is visited next. Compared with using the low-level allocate/free primitives directly, a memory pool brings several advantages: (1) the memory pool can manage a number of pre-allocated memory blocks as a cache, which speeds up memory allocation; (2) released memory can also be managed by the memory pool; (3) the memory pool requests larger memory chunks from the system in a fixed size, which reduces memory fragmentation. Although a memory pool improves memory performance and utilization, investigating memory problems inside a memory pool is no simpler than investigating general memory problems. Taking the UAF (use-after-free) problem as an example: if the previous user of a piece of memory still accesses it after releasing it, the new user to whom that memory is then allocated is affected. The symptom is a memory error seen by the new user while the old user still accesses the memory, but there is no logical association between the new and old users, so finding the faulty old user becomes a challenge. Investigating memory problems is particularly difficult in projects with many modules and complex code.
In summary, when a memory pool is used, the memory pool and all other memory allocations share the same address space; the distribution of system memory at run time is disordered, and a piece of memory, once released, may be handed out again by the system or by any memory pool. The memory pool provides no logical clue for solving memory problems, so it can be said that using a memory pool brings no convenience to troubleshooting them.
Based on this, a reasonable memory management scheme is needed to solve the above-mentioned problems.
Disclosure of Invention
Accordingly, the present invention provides a memory management method and apparatus that seeks to solve, or at least mitigate, at least one of the problems identified above.
According to one aspect of the present invention, there is provided a memory management method, the method comprising the steps of: receiving a memory allocation request, wherein the request includes the size of the memory block to be allocated; judging whether a memory block satisfying a predetermined condition exists in the current memory pool; if no memory block satisfying the predetermined condition exists in the current memory pool, determining an available address space from the virtual address space pre-allocated to the current memory pool; requesting a memory chunk from the operating system according to the determined available address space; and allocating a corresponding memory block from the memory chunk according to the size of the memory block to be allocated, wherein the memory chunk comprises at least one memory block.
Optionally, the method according to the present invention further includes a step of pre-allocating a virtual address space for each memory pool, including: mapping a virtual address space in the process space for each memory pool according to a predetermined rule, so that each memory pool has a virtual address space independent of the others.
Optionally, in the method according to the present invention, the step of pre-allocating a virtual address space for each memory pool according to a predetermined rule, so that each memory pool has a virtual address space independent of the others, further includes: dividing the virtual address space into a plurality of sub-address spaces by displacement, wherein each sub-address space corresponds to one memory chunk of the memory pool, and each sub-address space has a state identifier indicating whether its current state is available.
Optionally, in the method according to the present invention, the step of determining the available address space from the virtual address space pre-allocated to the current memory pool comprises: determining the state of each sub-address space in the virtual address space corresponding to the current memory pool according to the state identifiers; and selecting an available sub-address space, in order from high address to low address, as the available address space of the current memory pool.
Optionally, in the method according to the present invention, the step of requesting the memory chunk according to the available address space further comprises: requesting the memory chunk according to the starting address and the length of the available address space.
Optionally, in the method according to the present invention, after the step of judging whether a memory block satisfying the predetermined condition exists in the current memory pool, the method further includes the step of: if a memory block satisfying the predetermined condition exists in the current memory pool, allocating a corresponding memory block from the memory blocks satisfying the predetermined condition according to the size of the memory block to be allocated.
Optionally, in the method according to the present invention, the step of pre-allocating virtual address spaces for the plurality of memory pools according to a predetermined rule further includes: generating a memory chunk linked list to record the request state of the memory chunks in the memory pool; and generating a memory block array to record the usage state of each memory block in the memory chunks.
Optionally, in the method according to the present invention, the memory chunk linked list is updated each time a memory chunk is requested; the memory block array is updated each time memory blocks are allocated to a user process; and the memory block array is updated when the user process releases memory.
Optionally, in the method according to the present invention, the step of judging whether a memory block satisfying a predetermined condition exists in the current memory pool includes: judging, according to the memory block array, whether a memory block satisfying the predetermined condition exists in the current memory pool, wherein the predetermined condition is: one or more usable memory blocks whose total size is not smaller than the size of the memory block to be allocated.
According to another aspect of the present invention, there is provided a memory management apparatus including: a connection management module adapted to receive a memory allocation request, wherein the request includes the size of the memory block to be allocated; a first memory allocation module adapted to judge whether a memory block satisfying a predetermined condition exists in the current memory pool and, when no such memory block exists, to determine an available address space from the virtual address space pre-allocated to the current memory pool; and a second memory allocation module adapted to request a memory chunk from the operating system according to the available address space and to allocate a corresponding memory block from the requested memory chunk according to the size of the memory block to be allocated, wherein the memory chunk comprises at least one memory block.
Optionally, the apparatus according to the present invention further comprises an address mapping module adapted to map a virtual address space in the process space for each memory pool according to a predetermined rule, so that each memory pool has a mutually independent virtual address space; the address mapping module is further adapted to divide the virtual address space into a plurality of sub-address spaces by displacement, wherein each sub-address space corresponds to one memory chunk of the memory pool, and each sub-address space has a state identifier indicating whether its current state is available.
According to yet another aspect of the present invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are adapted to be executed by the at least one processor and comprise instructions for performing any of the methods described above.
According to still another aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform any one of the methods described above.
According to the memory management scheme of the present invention, mutually independent virtual address spaces are pre-allocated for the memory pools, the address spaces are managed, and the memory requested by a user process belongs to the pre-allocated address space. In this way, the memory used by a memory pool is isolated in address space from all other memory allocations, so the scheme according to the present invention can provide a direct clue from the victim site of a memory error to its source.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which set forth the various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The above, as well as additional objects, features, and advantages of the present disclosure will become more apparent from the following detailed description when read in conjunction with the accompanying drawings. Like reference numerals generally refer to like parts or elements throughout the present disclosure.
FIG. 1 shows a schematic diagram of a computing device 100 according to one embodiment of the invention;
FIG. 2 illustrates a flow chart of a memory management method 200 according to one embodiment of the invention;
FIG. 3 illustrates an exemplary diagram of a memory block linked list and memory block array in accordance with one embodiment of the present invention;
FIG. 4 shows a schematic diagram of an MMAP space according to one embodiment of the invention; and
FIG. 5 shows a block diagram of a memory management apparatus 500 according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 shows a schematic diagram of a computing device 100 according to one embodiment of the invention.
As shown in FIG. 1, in a basic configuration 102, a computing device 100 typically includes a system memory 106 and one or more processors 104. The memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor including, but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of caches, such as a first level cache 110 and a second level cache 112, a processor core 114, and registers 116. The example processor core 114 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 118 may be used with the processor 104, or in some implementations, the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 106 may include an operating system 120, one or more applications 122, and programs 124. In some implementations, the application 122 may be arranged to be executed by the one or more processors 104 on the operating system using the program 124.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to basic configuration 102 via bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices such as a display or speakers via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 via one or more communication ports 164 over a network communication link.
The network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules, and may include any information delivery media in a modulated data signal, such as a carrier wave or other transport mechanism. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or dedicated network, and wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
According to embodiments of the invention, computing device 100 may be implemented as a server, such as a file server, a database server, an application server, a WEB server, etc., as well as a personal computer including desktop and notebook computer configurations. Of course, computing device 100 may also be implemented as part of a small-sized portable (or mobile) electronic device, such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that may include any of the above functions.
The present invention provides a memory management method 200 for allocating corresponding memory blocks to a user process; a plurality of instructions for performing the method 200 are stored in the program 124 of the computing device 100. It should be noted that the corresponding instructions may also be executed by arranging a corresponding application 122 (e.g., a memory management apparatus) on the computing device 100, the application being executed by the one or more processors 104 on the operating system using the program 124.
Fig. 2 shows a flow chart of a memory management method 200 according to an embodiment of the invention. The following describes the execution of the memory management method 200 according to the present invention in detail with reference to fig. 2. As shown in fig. 2, the method 200 begins at step S210.
In step S210, the memory pool receives a memory allocation request from a user process, where the request includes the size of the memory block to be allocated.
According to one embodiment, at least one memory pool is arranged in the process space, each memory pool allocating memory blocks of different sizes. Generally, a memory pool requests memory from the operating system in a larger unit, denoted a memory chunk; after a chunk is obtained, it is divided into a number of smaller memories, denoted memory blocks, and it is these memory blocks that the memory pool allocates to user processes. That is, each memory chunk comprises a plurality of memory blocks.
In addition, in order to manage the memory chunks requested from the operating system, each memory pool also has a corresponding memory chunk linked list (denoted chunk list) for recording the request state of its chunks. In order to accelerate allocation of memory blocks, each memory pool further has a corresponding memory block array (denoted block array) for recording the usage state of each memory block in its chunks. That is, the chunk list records the chunks the memory pool has already obtained, and the block array records the usage states of the blocks within those chunks. In some embodiments, the usage status of a memory block is either used or unused, denoted used and free respectively. Referring to FIG. 3, an example of a chunk list and a block array according to one embodiment of the present invention is shown. The chunk list contains the chunks obtained by the memory pool; each chunk contains, in addition to its memory blocks, a header, which may contain an identifier uniquely identifying the chunk. The block array records all memory blocks by storing pointers to them, each pointer pointing to the block's location within its chunk. The block array also records the usage status of all blocks; for example, in FIG. 3, all used blocks are recorded together and all unused blocks are recorded together.
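The chunk list and block array described above can be sketched in C as follows. This is a minimal illustration, not the patent's implementation; all names (mem_chunk, block_entry, mem_pool, pool_add_chunk) and the chunk/block sizes are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>

#define BLOCKS_PER_CHUNK 8  /* illustrative; a real pool derives this from chunk/block sizes */

/* A memory chunk: a header (with a unique identifier) followed by its blocks. */
typedef struct mem_chunk {
    unsigned long id;                   /* uniquely identifies this chunk */
    struct mem_chunk *next;             /* link in the chunk list */
    char blocks[BLOCKS_PER_CHUNK][64];  /* the blocks carved out of this chunk */
} mem_chunk;

/* One entry of the block array: a pointer into a chunk plus the usage state. */
typedef struct {
    void *ptr;  /* points to the block's location within its chunk */
    int used;   /* 1 = used, 0 = free */
} block_entry;

typedef struct {
    mem_chunk *chunks;          /* head of the chunk list */
    block_entry entries[1024];  /* block array */
    size_t nentries;
} mem_pool;

/* Record a freshly requested chunk: prepend it to the chunk list and
 * register each of its blocks as "free" in the block array. */
void pool_add_chunk(mem_pool *p, mem_chunk *c) {
    c->next = p->chunks;
    p->chunks = c;
    for (int i = 0; i < BLOCKS_PER_CHUNK; i++) {
        p->entries[p->nentries].ptr = c->blocks[i];
        p->entries[p->nentries].used = 0;
        p->nentries++;
    }
}

/* Count blocks whose usage state is "free". */
size_t pool_free_blocks(const mem_pool *p) {
    size_t n = 0;
    for (size_t i = 0; i < p->nentries; i++)
        if (!p->entries[i].used)
            n++;
    return n;
}
```

Keeping the usage state in a flat array, rather than walking the chunk list, is what lets the pool answer "is there a free block?" quickly, which step S220 relies on.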
According to one embodiment, the chunk list and the block array are updated accordingly each time memory is requested or released.
Then in step S220, it is determined whether there is a memory block in the current memory pool that satisfies a predetermined condition.
According to one embodiment, the current memory pool queries the usage state of each memory block recorded in its block array and judges accordingly whether a memory block satisfying the predetermined condition exists. The predetermined condition is defined as: one or more usable memory blocks whose total size is not smaller than the size of the memory block to be allocated. The condition thus has two parts: first, a usable memory block is one whose usage state in the block array is unused; second, the memory obtained by combining one or more unused blocks is not smaller than the requested size. In one embodiment, if the current memory pool contains unused memory blocks and the size of the memory block requested by the user process does not exceed their total size, the memory pool directly allocates one or more of the unused blocks to the user process according to the requested size.
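The two-part condition above amounts to a simple capacity check over the block array. A minimal C sketch follows; the fixed block size and the function name pool_can_satisfy are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>

#define BLOCK_SIZE 64  /* illustrative block size for this pool */

typedef struct {
    void *ptr;  /* block location within its chunk */
    int used;   /* 1 = used, 0 = free */
} block_entry;

/* Step S220 sketch: a request of `size` bytes is satisfiable from the pool
 * if and only if the unused blocks together cover at least `size` bytes. */
int pool_can_satisfy(const block_entry *arr, size_t n, size_t size) {
    size_t free_bytes = 0;
    for (size_t i = 0; i < n; i++)
        if (!arr[i].used)
            free_bytes += BLOCK_SIZE;
    return free_bytes >= size;
}
```

If this check fails, the method falls through to step S230 and a new chunk is requested from the pre-allocated virtual address space.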
Then in step S230, if there is no memory block in the current memory pool that satisfies the predetermined condition, an available address space is determined from the virtual address space pre-allocated in the current memory pool.
According to an embodiment of the invention, the method 200 further comprises a step of pre-allocating a virtual address space for each memory pool. Specifically, a virtual address space in the process space is mapped for each memory pool according to a predetermined rule, so that each memory pool has a virtual address space independent of the others. The memory pool then uses this virtual address space when allocating memory, thereby achieving isolation between memory pools (in embodiments according to the present invention, memory pools having mutually independent virtual address spaces are referred to as isolated memory pools).
Take a user-mode memory pool in the Linux operating system as an example. The memory pool uses pre-allocated MMAP space; the high-address side of the MMAP space adjoins the stack space and the low-address side adjoins the heap space. FIG. 4 is a partial schematic diagram of the MMAP space according to one embodiment of the invention. One condition for pre-allocating an exclusive virtual address space for each memory pool within the MMAP space is: the virtual address space used by one memory pool must not intersect the heap space, the stack space, the virtual address spaces of other memory pools, or other portions of the MMAP space. As shown in FIG. 4, assume that virtual address space 410 and virtual address space 420 are pre-allocated for two memory pools, memboost_1 and memboost_2, respectively, where memory pool memboost_1 corresponds to the 16 TB virtual address space from 0x500000000000 to 0x5FFFFFFFFFFF and memory pool memboost_2 corresponds to the 16 TB virtual address space from 0x400000000000 to 0x4FFFFFFFFFFF. The two segments of virtual address space pre-allocated from the MMAP space are thus isolated from the stack space; when pre-allocating, high addresses near the stack space are preferred by default. In other embodiments, rational design can likewise keep the pre-allocated virtual address space isolated from the default allocations of the MMAP space.
After pre-allocating an exclusive virtual address space for each memory pool, the allocated virtual address space needs to be managed further. According to some embodiments of the present invention, the virtual address space may be managed as follows: it is divided into a plurality of sub-address spaces by displacement, wherein each sub-address space corresponds to one memory chunk of the memory pool, and each sub-address space has a state identifier indicating whether its current state is available. In a preferred embodiment, a bitmap is used to manage the virtual address space corresponding to each memory pool. As shown in FIG. 4, virtual address space 420 is divided into a plurality of sub-address spaces, and the numerals 0, 1, 2, 3, ..., n represent each sub-address space's displacement within the bitmap. Assuming a memory chunk size of 4 MB, the 16 TB of virtual address space 420 can be divided into 4M sub-address spaces, i.e., the bitmap has 4M entries in total. Each sub-address space also has a state identifier indicating its current state. For ease of illustration, assume that a bitmap entry filled with a diagonal "/" in FIG. 4 indicates that the corresponding sub-address space is currently "unavailable" and an unfilled entry indicates "available"; then, according to the current state of the bitmap, the sub-address spaces at displacements "2" and "4" are currently available, with corresponding start addresses 0x400000800000 and 0x400001000000 respectively.
After the above configuration is completed (i.e., a virtual address space has been pre-allocated for each memory pool according to the predetermined rule and is being managed), when a memory pool needs to request a memory chunk, an available address space is determined from its pre-allocated virtual address space. Specifically, the state of each sub-address space in the virtual address space corresponding to the current memory pool is determined from the state identifiers in the bitmap, and then one available sub-address space is selected, in order from high address to low address, as the available address space of the current memory pool. Taking the memory pool memboost_2 as an example, querying the bitmap shows that the sub-address spaces at displacements "2" and "4" are currently available; selecting from high address to low address, the sub-address space at displacement "4" (start address 0x400001000000) is chosen as the available address space.
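The bitmap lookup and the displacement-to-address arithmetic can be sketched in C as follows, using the memboost_2 figures from the example (base 0x400000000000, 4 MB sub-address spaces). The function names and the bit convention (bit set = unavailable) are illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

#define POOL_BASE   0x400000000000ULL  /* memboost_2's pre-allocated space, from the example */
#define CHUNK_BYTES (4ULL << 20)       /* 4 MB per sub-address space */

/* Start address of the sub-address space at a given bitmap displacement. */
uint64_t sub_space_addr(uint64_t disp) {
    return POOL_BASE + disp * CHUNK_BYTES;
}

/* Select the available sub-address space with the highest address: since the
 * address grows with the displacement, scan the bitmap (bit set = unavailable)
 * from the largest displacement downward. Returns -1 if none is available. */
long pick_available(const unsigned char *bitmap, long nbits) {
    for (long d = nbits - 1; d >= 0; d--)
        if (!(bitmap[d / 8] & (1u << (d % 8))))
            return d;
    return -1;
}
```

With displacements 2 and 4 free, this scan returns displacement 4, whose start address 0x400001000000 matches the chunk requested in step S240.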
Then, in step S240, the memory chunk is requested from the operating system according to the determined available address space.
According to an embodiment of the present invention, a memory chunk is requested according to the starting address and length of the determined available address space. Taking the memory pool memboost_2 as an example, the determination in step S230 was: a memory chunk of size 4 MB starting at address 0x400001000000 must be requested from the operating system; that is, the starting address of the available address space is 0x400001000000 and its length is 4 MB. The request to the operating system is made by calling the mmap() function, whose parameters include at least addr and length, where addr is the starting address of the available address space (here 0x400001000000) and length is its length (here 4 MB).
Then, in step S250, corresponding memory blocks are allocated from the requested memory chunk, according to the size of the memory block to be allocated, for use by the user process. As described above, the memory chunk is divided into a plurality of memory blocks, so one or more of them are selected in order according to the size requested by the user process.
According to the embodiment of the invention, each time a memory chunk is requested, the chunk list is updated and the chunk (including its header, the memory blocks it contains, and so on) is added to the list. After memory blocks are allocated to the user process, the block array is updated accordingly, recording the usage state of each block in the newly added chunk. Similarly, when the user process releases memory, the block array is updated accordingly and the usage status of the released block or blocks is set back to "unused".
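The allocation-side and release-side updates of the block array reduce to flipping the usage state of the affected entries. A minimal C sketch, with illustrative names:

```c
#include <stddef.h>

typedef struct {
    void *ptr;  /* block location within its chunk */
    int used;   /* 1 = used, 0 = free */
} block_entry;

/* On allocation to the user process, mark the chosen block "used". */
void mark_allocated(block_entry *arr, size_t i) {
    arr[i].used = 1;
}

/* On release, look the pointer up in the block array and mark the block
 * "unused" again. Returns -1 if the pointer does not belong to this pool. */
int release_block(block_entry *arr, size_t n, void *ptr) {
    for (size_t i = 0; i < n; i++) {
        if (arr[i].ptr == ptr) {
            arr[i].used = 0;
            return 0;
        }
    }
    return -1;
}
```

The lookup-by-pointer on release is also what lets an isolated pool reject pointers that fall outside its own address space, which is the basis of the error-attribution argument that follows.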
It should be noted that pre-allocating a virtual address space for a memory pool only plans the use of the address space; the actual allocation is deferred until the memory chunks are used. Before a reserved region is actually requested, other modules cannot be prevented from using it within the process address space. In an embodiment according to the present invention, when such a conflict occurs, the memory pool falls back from an isolated memory pool to an ordinary memory pool so that the conflict does not affect program operation. A conflict in the use of the virtual address space indicates that its dedicated use is contested and requires unified coordination to resolve. In another implementation, if all memory in its address space is requested when the memory pool is created, no such run-time conflict can occur, but this implementation requires requesting a huge amount of memory in advance, which somewhat affects its usability. Those skilled in the art can select a suitable memory management manner according to the embodiments of the present invention in combination with a specific application scenario; the options are not enumerated here.
According to the memory management scheme of the present invention, a separate address space is pre-allocated for a memory pool, that address space is managed, and the memory applied for by a user process belongs to the pre-allocated address space. By isolating, at the address-space level, the memory used by a pool from all other memory applications, the scheme according to the present invention can provide a direct clue from the victim site of a memory error to its source.
Again taking the memory pool memboost_2 from the foregoing as an example, assume that a memory use error occurs in some allocated data structure in the memory chunk corresponding to the sub-address space with displacement "2". Because of memory pool isolation, this sub-address space (i.e., the 4MB space starting at 0x400000800000) can only be used by memboost_2 — not by data structures allocated from the MMAP space or other memory pools, let alone data structures allocated from the stack or heap — so the influence of the memory error is limited to memboost_2 itself. The source of the memory error, that is, the prior user of this piece of sub-address space, is therefore necessarily a user of memboost_2, which directly clears every memory user other than memboost_2 of suspicion of having corrupted the victim.
Similarly, when a memory use error occurs in an address space that does not correspond to any memory pool, it can be directly determined that the memory pools with isolated address spaces are not the source of the error, because those pools do not use that address space.
In general, the memory management scheme of the present invention makes it possible to quickly determine whether the source of a memory error is a specific isolated memory pool, according to whether the address of the victim belongs to a memory pool with a pre-allocated virtual address space (i.e., an isolated memory pool). Meanwhile, if the assignment of data structures to memory pools is planned reasonably, the error can be attributed directly to the few data structures that use the pool, rapidly narrowing the investigation. In addition, if the victim address does not belong to the address space of any isolated memory pool, all isolated memory pools can be cleared of suspicion, which likewise narrows the scope of the investigation.
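The triage step above reduces to a range check: given the victim address, look up which isolated pool's pre-allocated space contains it. The sketch below illustrates this; the pool table, names, and base addresses are invented for the example and are not values prescribed by the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* One isolated memory pool's pre-allocated virtual address range. */
typedef struct {
    const char *name;
    uintptr_t   base;   /* start of the pool's pre-allocated space */
    size_t      size;   /* length of that space */
} pool_range_t;

/* Return the name of the isolated pool owning the victim address, or
 * NULL if no isolated pool owns it (clearing them all of suspicion). */
static const char *owning_pool(const pool_range_t *pools, size_t n,
                               uintptr_t victim) {
    for (size_t i = 0; i < n; i++)
        if (victim >= pools[i].base &&
            victim < pools[i].base + pools[i].size)
            return pools[i].name;
    return NULL;
}
```

In a debugger or crash handler this check immediately splits the search space: a hit names the single pool (and hence the small set of data structures) to examine; a miss excludes every isolated pool.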
Fig. 5 shows a block diagram of a memory management device 500 according to an embodiment of the invention. As shown in fig. 5, the apparatus 500 includes: the connection management module 510, the first memory allocation module 520, and the second memory allocation module 530. The components of the apparatus 500 are described below.
The connection management module 510 receives a request for applying for memory, where the request includes a size of a memory block to be applied.
The first memory allocation module 520 determines whether a memory block satisfying a predetermined condition exists in the current memory pool and, when no such block exists, determines an available address space from the virtual address space pre-allocated for the current memory pool.
The second memory allocation module 530 applies to the operating system for a memory chunk according to the determined available address space, and allocates corresponding memory blocks from the applied chunk according to the size of the memory block to be applied for.
The apparatus 500 further includes an address mapping module 540, which maps the virtual address spaces of the plurality of memory pools in the process space according to a predetermined rule, so that each memory pool has a virtual address space independent of the others. In addition, the address mapping module 540 divides the virtual address space into a plurality of sub-address spaces by displacement, where each sub-address space corresponds to one memory chunk of the memory pool and carries a status identifier indicating whether the sub-address space is currently in the available state.
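The flow through the modules above can be sketched end to end. This is an illustrative toy, not the patent's implementation: the helper names, the per-displacement status array, and the use of `malloc` as a stand-in for the operating-system chunk request are all assumptions made for the example.

```c
#include <stddef.h>
#include <stdlib.h>

#define NCHUNKS 4   /* number of sub-address spaces (displacements) */

typedef struct {
    void  *chunk[NCHUNKS];      /* applied chunks, indexed by displacement */
    int    available[NCHUNKS];  /* status identifier per sub-address space */
    size_t chunk_used[NCHUNKS]; /* bytes already carved from each chunk */
    size_t chunk_size;
} pool_t;

/* First allocation module: find an applied chunk with enough room. */
static void *find_free_block(pool_t *p, size_t want) {
    for (int i = 0; i < NCHUNKS; i++)
        if (p->chunk[i] && p->chunk_used[i] + want <= p->chunk_size) {
            void *blk = (char *)p->chunk[i] + p->chunk_used[i];
            p->chunk_used[i] += want;
            return blk;
        }
    return NULL;
}

/* Address mapping bookkeeping: claim an available sub-address space. */
static int reserve_displacement(pool_t *p) {
    for (int i = 0; i < NCHUNKS; i++)
        if (p->available[i]) { p->available[i] = 0; return i; }
    return -1;
}

/* Request path: satisfy from an existing chunk if possible; otherwise
 * pick a sub-address space, apply for a chunk, and carve from it. */
void *pool_request(pool_t *p, size_t want) {
    void *blk = find_free_block(p, want);      /* first allocation module */
    if (blk) return blk;
    int d = reserve_displacement(p);           /* address mapping module */
    if (d < 0) return NULL;
    p->chunk[d] = malloc(p->chunk_size);       /* stand-in for OS request */
    if (!p->chunk[d]) return NULL;
    p->chunk_used[d] = 0;
    return find_free_block(p, want);           /* second allocation module */
}
```

A bump-pointer carve is used for brevity; the block-array scheme shown earlier would slot in where `chunk_used` is incremented.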
For a detailed description of the components of the apparatus 500, refer to the descriptions relating to figs. 1-4; for brevity, it is not repeated here.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard disks, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code, and the processor is configured to perform the method of the invention in accordance with the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media comprise readable storage media and communication media. The readable storage medium stores information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the examples of the invention. The structure required to construct such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features, but not others, included in other embodiments, combinations of features of different embodiments are within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods or combinations of method elements that may be implemented by a processor of a computer system or by other means of performing the functions. Thus, a processor with the necessary instructions for implementing the described method or method element forms a means for implementing the method or method element. Furthermore, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is for carrying out the functions performed by the elements for carrying out the objects of the invention.
As used herein, unless otherwise specified, the ordinal terms "first," "second," "third," etc., used to describe a common object merely denote different instances of like objects, and are not intended to imply that the objects so described must have a given order, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (13)

1. A memory management method, the method comprising the steps of:
receiving a request for applying for a memory, wherein the request comprises the size of a memory block to be applied;
judging whether a memory block meeting a predetermined condition exists in the current memory pool;
if no such memory block exists in the current memory pool, determining an available address space from the virtual address space pre-allocated for the current memory pool;
applying for a memory chunk from the operating system according to the available address space; and
allocating a corresponding memory block from the memory chunk according to the size of the memory block to be applied for,
wherein the memory chunk comprises at least one memory block,
the memory management method further comprising a step of pre-allocating a virtual address space for each memory pool, the step comprising:
mapping the virtual address space of each memory pool in the process space according to a predetermined rule, so that each memory pool has a virtual address space independent of the others.
2. The method of claim 1, wherein pre-allocating the virtual address space for each memory pool according to the predetermined rule, such that each memory pool has an independent virtual address space, further comprises:
dividing the virtual address space into a plurality of sub-address spaces by displacement, wherein each sub-address space corresponds to one memory chunk of the memory pool and has a status identifier indicating whether the current state of the sub-address space is an available state.
3. The method of claim 2, wherein the determining available address space from the pre-allocated virtual address space of the current memory pool comprises:
determining the state of each sub-address space in the virtual address space corresponding to the current memory pool according to the state identifier; and
and selecting, in order from high addresses to low addresses, one sub-address space in the available state as the available address space of the current memory pool.
4. The method of any of claims 1-3, wherein applying for the memory chunk according to the available address space further comprises:
applying for the memory chunk according to the starting address and the space length of the available address space.
5. The method according to any one of claims 1-3, further comprising, after the step of judging whether a memory block meeting a predetermined condition exists in the current memory pool:
if a memory block meeting the predetermined condition exists in the current memory pool, allocating a corresponding memory block from the memory blocks meeting the predetermined condition according to the size of the memory block to be applied for.
6. The method of any of claims 2-3, wherein pre-allocating the virtual address space for the plurality of memory pools according to the predetermined rule further comprises:
generating a memory chunk linked list to record the application state of the memory chunks in the memory pool; and
generating a memory block array to record the use state of each memory block in each memory chunk.
7. The method of claim 6, further comprising the step of:
updating the memory chunk linked list each time a memory chunk is applied for; and
updating the memory block array each time a memory block is allocated to the user process.
8. The method of claim 6, further comprising the step of:
updating the memory block array when the user process releases memory.
9. The method of claim 6, wherein the step of determining whether there is a memory block in the current memory pool that satisfies a predetermined condition comprises:
judging, according to the memory block array, whether a memory block meeting a predetermined condition exists in the current memory pool,
wherein the predetermined condition comprises: one or more memory blocks that are usable and larger than the size of the memory block to be applied for.
10. A memory management device, comprising:
the connection management module, adapted to receive a request for applying for memory, wherein the request comprises the size of a memory block to be applied for;
the first memory allocation module, adapted to judge whether a memory block meeting a predetermined condition exists in the current memory pool, and to determine an available address space from the virtual address space pre-allocated for the current memory pool when no such memory block exists;
the second memory allocation module, adapted to apply to the operating system for a memory chunk according to the available address space and to allocate a corresponding memory block from the applied memory chunk according to the size of the memory block to be applied for, wherein the memory chunk comprises at least one memory block; and
the address mapping module, adapted to map the virtual address spaces of the plurality of memory pools in the process space according to a predetermined rule, so that each memory pool has a mutually independent virtual address space.
11. The apparatus of claim 10, the address mapping module further adapted to divide the virtual address space into a plurality of sub-address spaces by displacement, wherein each sub-address space corresponds to one memory chunk of the memory pool, and each sub-address space has a state identifier for indicating whether a current state of the sub-address space is an available state.
12. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-9.
13. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-9.
CN201811093671.1A 2018-09-19 2018-09-19 Memory management method and device Active CN110928803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811093671.1A CN110928803B (en) 2018-09-19 2018-09-19 Memory management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811093671.1A CN110928803B (en) 2018-09-19 2018-09-19 Memory management method and device

Publications (2)

Publication Number Publication Date
CN110928803A CN110928803A (en) 2020-03-27
CN110928803B true CN110928803B (en) 2023-04-25

Family

ID=69855170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811093671.1A Active CN110928803B (en) 2018-09-19 2018-09-19 Memory management method and device

Country Status (1)

Country Link
CN (1) CN110928803B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112114962A (en) * 2020-09-04 2020-12-22 北京达佳互联信息技术有限公司 Memory allocation method and device
CN114116194A (en) * 2021-09-03 2022-03-01 济南外部指针科技有限公司 Memory allocation method and system
CN116501511B (en) * 2023-06-29 2023-09-15 恒生电子股份有限公司 Memory size processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063385A (en) * 2010-12-23 2011-05-18 深圳市金宏威实业发展有限公司 Memory management method and system
CN103593243A (en) * 2013-11-01 2014-02-19 浪潮电子信息产业股份有限公司 Dynamic extensible method for increasing virtual machine resources
CN107153618A (en) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 A kind of processing method and processing device of Memory Allocation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9535606B2 (en) * 2014-12-22 2017-01-03 Intel Corporation Virtual serial presence detect for pooled memory
US9983914B2 (en) * 2015-05-11 2018-05-29 Mentor Graphics Corporation Memory corruption protection by tracing memory

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063385A (en) * 2010-12-23 2011-05-18 深圳市金宏威实业发展有限公司 Memory management method and system
CN103593243A (en) * 2013-11-01 2014-02-19 浪潮电子信息产业股份有限公司 Dynamic extensible method for increasing virtual machine resources
CN107153618A (en) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 A kind of processing method and processing device of Memory Allocation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fu Guojiang. Improvements based on a VxWorks virtual platform. China Master's Theses Full-text Database. 2007, (3), full text. *
Sun Yong; Lin Fei. An automatic memory management algorithm for large data structures. Journal of Hangzhou Dianzi University. 2005, (06), full text. *

Also Published As

Publication number Publication date
CN110928803A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
US11669444B2 (en) Computing system and method for controlling storage device
US11467955B2 (en) Memory system and method for controlling nonvolatile memory
US10324834B2 (en) Storage device managing multi-namespace and method of operating the storage device
KR102137761B1 (en) Heterogeneous unified memory section and method for manaing extended unified memory space thereof
CN109725846B (en) Memory system and control method
JP2020046963A (en) Memory system and control method
US9952788B2 (en) Method and apparatus for providing a shared nonvolatile memory system using a distributed FTL scheme
US20220058117A1 (en) Memory system and method for controlling nonvolatile memory
WO2016082196A1 (en) File access method and apparatus and storage device
US20180285376A1 (en) Method and apparatus for operating on file
TW202024926A (en) Memory system and control method
CN110928803B (en) Memory management method and device
JP2021510222A (en) Data processing methods, equipment, and computing devices
US11151064B2 (en) Information processing apparatus and storage device access control method
US9563363B2 (en) Flexible storage block for a solid state drive (SSD)-based file system
TWI777720B (en) Method and apparatus for performing access management of memory device in host performance booster architecture with aid of device side table information
JP2008217208A (en) Storage device, computer system and management method for storage device
CN117472795A (en) Storage medium management method and server
JP2022121655A (en) Memory system and control method
JP2022019787A (en) Memory system and control method
KR20070030041A (en) A method of memory management for a mobile terminal using a paging form
CN117991992A (en) Small partition ZNS SSD-based data writing method, system, equipment and medium
JP2010039676A (en) Data management method
WO2010113380A1 (en) Calculator system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40026853

Country of ref document: HK

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231128

Address after: Room 1-2-A06, Yungu Park, No. 1008 Dengcai Street, Sandun Town, Xihu District, Hangzhou City, Zhejiang Province

Patentee after: Aliyun Computing Co.,Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, Grand Cayman

Patentee before: ALIBABA GROUP HOLDING Ltd.