CN115599544A - Memory management method and device, computer equipment and storage medium - Google Patents

Memory management method and device, computer equipment and storage medium

Info

Publication number
CN115599544A
CN115599544A (application number CN202211248341.1A)
Authority
CN
China
Prior art keywords
memory
block
metadata
target
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211248341.1A
Other languages
Chinese (zh)
Inventor
郑豪 (Zheng Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba China Co Ltd
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd, Alibaba Cloud Computing Ltd
Priority to CN202211248341.1A
Publication of CN115599544A
Priority to PCT/CN2023/123475 (WO2024078429A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F 12/0646 Configuration or reconfiguration
    • G06F 12/0653 Configuration or reconfiguration with centralised address assignment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System (AREA)

Abstract

Embodiments of the present specification provide a memory management method, an apparatus, a computer device, and a storage medium. The memory includes a plurality of memory blocks, and each memory block is divided into a plurality of memory segments; the memory is used for storing total metadata and block metadata corresponding to each allocated memory block. The block metadata includes allocation state information of each memory segment in the allocated memory block; the total metadata includes quantity information of the unallocated memory segments in each allocated memory block. The method includes: in response to a memory adjustment request, determining, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted; adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request; and after the allocation state of the target memory segment is adjusted, updating the allocation state information in the block metadata of the target memory block to which the target memory segment belongs, and updating, in the total metadata, the quantity information of the unallocated memory segments in the target memory block.

Description

Memory management method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a memory management method and apparatus, a computer device, and a storage medium.
Background
In a conventional memory management scheme, the memory is divided into a plurality of memory pages (pages), and metadata (e.g., struct page) needs to be created for each page in order to manage it. Since memory pages are usually small (e.g., 4 KB) and each page may require, for example, 64 bytes of metadata, a large amount of memory space is consumed for storing metadata in large-memory scenarios, resulting in metadata occupying a large amount of memory.
In other solutions, to avoid metadata occupying a large amount of memory, a larger granularity is used as the management unit; for example, the memory is divided into memory blocks of 2 MB or the like for management. However, at a larger management granularity, each memory block may not be completely filled with data, so storage space inside the memory block is wasted. In addition, scenarios that require small blocks of memory, such as storing compressed memory data, also need finer-grained management within memory blocks. Therefore, under a large management granularity, how to avoid memory waste is a technical problem to be urgently solved.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present specification provide a memory management method, an apparatus, a computer device, and a storage medium.
According to a first aspect of embodiments of the present specification, a memory management method is provided, where a memory includes multiple memory blocks, and each memory block is divided into multiple memory segments; the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata includes: the allocation state information of each memory segment in the allocated memory block;
the total metadata includes: the quantity information of the unallocated memory segments in each allocated memory block;
the method comprises the following steps:
in response to a memory adjustment request, determining, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted;
adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
after the allocation state of the target memory segment is adjusted, updating the allocation state information of the block metadata of the target memory block to which the target memory segment belongs, and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata.
According to a second aspect of the embodiments of the present specification, there is provided a memory management apparatus, where the memory includes a plurality of memory blocks, and each memory block is divided into a plurality of memory segments;
the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata includes: the allocation state information of each memory segment in the allocated memory block;
the total metadata includes: the quantity information of the unallocated memory segments in each allocated memory block;
the device comprises:
a determination module configured to: in response to a memory adjustment request, determine, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted;
an adjustment module configured to: adjust the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
an update module to: after the allocation state of the target memory segment is adjusted, updating the allocation state information of the block metadata of the target memory block to which the target memory segment belongs, and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata.
According to a third aspect of the embodiments of the present specification, there is provided a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method embodiments of the aforementioned first aspect when executing the computer program.
According to a fourth aspect of the embodiments of the present specification, there is provided a computer program product comprising a computer program that, when executed by a processor, performs the steps of the method embodiments of the aforementioned first aspect.
According to a fifth aspect of the embodiments of the present specification, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method embodiments of the first aspect.
The technical solutions provided by the embodiments of the present specification may have the following beneficial effects:
In the embodiments of the present specification, the memory includes multiple memory blocks, and each memory block is divided into multiple memory segments. The memory block can therefore be designed with a larger granularity, which reduces the space occupied by the block metadata of the memory blocks; meanwhile, the memory segments inside the memory blocks can still be managed individually. To this end, two layers of metadata are designed: total metadata and block metadata for each allocated memory block. The block metadata includes allocation state information of each memory segment in the allocated memory block, which is used to determine the memory segments available for allocation in that block; the total metadata includes quantity information of the unallocated memory segments in each allocated memory block, which is used to determine the allocable memory blocks in the memory. When there is a memory adjustment request, a target memory segment whose allocation state needs to be adjusted can be determined; after the allocation state of the target memory segment is adjusted, the allocation state information in the block metadata of the target memory block to which the target memory segment belongs is updated, and the quantity information of the target memory block in the total metadata is updated. Memory can thus be allocated at memory-segment granularity, reducing the waste of the remaining space in large-granularity memory blocks and realizing finer-grained management inside the memory blocks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
Fig. 1A and 1B are schematic diagrams of a memory architecture shown in accordance with an exemplary embodiment of the present description.
Fig. 2A is a schematic diagram illustrating a memory block partitioning a memory segment according to an exemplary embodiment of the present disclosure.
FIG. 2B is a diagram of a singly linked list shown in accordance with an exemplary embodiment of the present specification.
FIG. 2C is a diagram illustrating a doubly linked list in accordance with an illustrative embodiment.
FIG. 2D is a diagram illustrating two doubly linked lists in accordance with an exemplary embodiment.
FIG. 2E is a diagram of a linked list array shown in accordance with an exemplary embodiment of the present specification.
FIG. 2F is a schematic illustration of the overall metadata shown in the present specification according to an exemplary embodiment.
Fig. 2G to fig. 2J are schematic diagrams illustrating memory management according to an exemplary embodiment of the present disclosure.
Fig. 3 is a block diagram of a computer device in which a memory management apparatus according to an exemplary embodiment is shown.
Fig. 4 is a block diagram of a memory management device according to an example embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with this description. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various kinds of information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present specification, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In the memory management field, the metadata of a memory is data that records state information for each management unit of the memory (which may be a memory page, a memory block, or the like) in order to facilitate management of the memory. Various types of state information may be recorded in the metadata based on specific management needs. It is understood that, during operation of the computer device, the metadata is stored in the memory. As mentioned in the background, a small management granularity may cause a large amount of metadata to occupy the memory; for example, it may cause high memory management overhead in scenarios such as virtual machines. Fig. 1A is a schematic diagram illustrating virtual machines running on a host machine (Host) according to an exemplary embodiment.
The host in this embodiment refers to a physical computer on which virtual machine software is installed; "host" is a concept defined relative to a virtual machine.
A virtual machine in this embodiment refers to a complete computer system that is simulated by software, has complete hardware system functionality, and runs in a completely isolated environment. A virtual machine can perform the work done by a physical computer. When creating a virtual machine on a computer, part of the hard disk and memory capacity of the physical machine is used as the hard disk and memory capacity of the virtual machine. Each virtual machine has an independent operating system and can be operated like a physical machine. Common virtual machine software includes, but is not limited to: VMware (VMware ACE), VirtualBox, Virtual PC, or KVM (Kernel-based Virtual Machine), etc., which can virtualize multiple computers in one physical machine system.
Referring to fig. 1A, multiple virtual machines VM1, VM2, …, VMn may run in a HOST, and the memory used by the virtual machines comes from the memory of the HOST; the kernel of the HOST and other applications on the HOST (such as applications 1 to 3 in the figure) also use this memory. During operation, the memory used by the kernel and the applications competes with the memory used by the virtual machines, so the amount of memory the host can sell to virtual machines is uncertain; in particular, when memory is severely scarce, virtual machine memory may be swapped out, or a virtual machine may even become unusable, affecting the performance and stability of the system.
Based on this, a memory allocation architecture with reserved memory may be adopted. Fig. 1B is a schematic diagram of an exemplary reserved-memory scenario of this specification, in which the memory of the host includes two storage spaces, shown with different fillings: an unreserved storage space A for the kernel (diagonal filling in the drawing) and a reserved storage space B for the virtual machines (vertical-line and gray filling in the drawing). That is, the unreserved storage space A is used by the kernel, and applications running on the operating system (such as application 1 to application 3 in the figure) can use the unreserved storage space A. The reserved storage space B is available to Virtual Machines (VMs), such as the n virtual machines VM1 to VMn shown in the figure. The two storage spaces may adopt different management granularities, that is, each may be divided differently. For ease of illustration, the two storage spaces are shown as contiguous in fig. 1B; it will be appreciated that in practice they may be non-contiguous.
The reserved storage space occupies most of the memory and is unavailable to the host kernel; a reserved memory module can be inserted into the kernel of the operating system to manage it specially. To facilitate management of this memory while avoiding a large amount of metadata occupying the memory, and considering that even the minimum memory allocated to a virtual machine is hundreds of MB (MByte, megabyte), the reserved memory module usually manages the reserved memory at a larger granularity, for example dividing it into memory blocks (ms) of 2 MB for management; in scenarios where large memory is common, other granularities such as 1 GB (GigaByte) are also optional.
However, at a larger management granularity, each memory block may not be completely filled with data, so storage space inside the memory block is wasted. In addition, in a memory compression scenario, the compressed data is also smaller than the size of a memory block. Therefore, under a large management granularity, how to avoid memory waste and efficiently manage the inside of a memory block is a technical problem to be urgently solved.
Based on this, an embodiment of the present specification provides a memory management method, where the memory includes a plurality of memory blocks and each memory block is divided into a plurality of memory segments. The memory block can therefore be designed with a larger granularity, which reduces the space occupied by the block metadata of the memory blocks; meanwhile, the memory segments inside the memory blocks can still be managed individually. In this embodiment, two layers of metadata are designed: in addition to the block metadata of each allocated memory block, total metadata is also maintained. The block metadata includes allocation state information of each memory segment in the allocated memory block, which is used to determine the memory segments available for allocation in that block; the total metadata includes quantity information of the unallocated memory segments in each allocated memory block, which is used to determine the allocable memory blocks in the memory. When there is a memory adjustment request, a target memory segment whose allocation state needs to be adjusted can be determined; after the allocation state of the target memory segment is adjusted, the allocation state information in the block metadata of the target memory block to which the target memory segment belongs is updated, and the quantity information of the target memory block in the total metadata is updated. Memory can thus be allocated at memory-segment granularity, reducing the waste of the remaining space in large-granularity memory blocks. The present embodiment is described in detail below.
The memory of this embodiment includes a plurality of memory blocks, and each of the memory blocks is divided into a plurality of memory segments. The size of the memory block may be flexibly configured according to needs, for example, the aforementioned 2MB or 1GB, and the like, which is not limited in this embodiment. The memory blocks may be continuous or discontinuous.
As shown in fig. 2A, in an embodiment of the present disclosure each memory block is divided into a plurality of memory segments, where the size of the memory segments can be flexibly configured as needed, and this embodiment does not limit it. For example, taking a 2 MB memory block divided at a granularity of 1 KB, the memory block can be divided into 2048 memory segments.
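The segment count in this example can be checked with a short calculation (a sketch; the constant names are illustrative, not part of the embodiment):

```python
# Segment-count arithmetic for the 2 MB block / 1 KB granularity example above.
BLOCK_SIZE = 2 * 1024 * 1024       # 2 MB memory block
SEGMENT_SIZE = 1024                # 1 KB memory segment granularity
NUM_SEGMENTS = BLOCK_SIZE // SEGMENT_SIZE

print(NUM_SEGMENTS)  # 2048
```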
Based on the above design of dividing the memory block into a plurality of memory segments, a data structure for managing each memory segment needs to be designed, and the data structure of this embodiment includes block metadata and total metadata.
In this embodiment, the memory blocks are used as the granularity, and metadata of each allocated memory block is established, which is called block metadata (header). The block metadata includes allocation status information of each memory segment in the allocated memory block. In practical applications, the data structure of the block metadata may be flexibly implemented according to needs, and the implementation of the allocation status information may also be implemented in various ways, which is not limited in this embodiment. It will be appreciated that block metadata need only be established for memory blocks that have been allocated, i.e. have stored data.
For example, the memory segments of a memory block may be numbered in a predetermined order (for example, from higher to lower addresses, or from lower to higher addresses), and each number may be given an allocation state flag indicating whether the corresponding memory segment is allocated. Alternatively, the allocation state of each memory segment in the memory block may be represented by a bitmap. A bitmap is a data structure comprising at least one element, with the elements arranged in order; each element uses a "0" or a "1" to indicate whether the item it corresponds to is absent or present. In this embodiment, the memory segments may be sorted in a set order, and the two bit values 0 and 1 are used to represent the two states of a memory segment (unallocated or allocated). Thus the allocation state information of all memory segments in one memory block can be represented by a single bitmap, which occupies little space and makes it easy to quickly analyze the occupancy of each segment in the memory block during subsequent processing. Of course, it is clear to those skilled in the art that the allocation state information may be implemented in various ways in practical applications, which this embodiment does not limit.
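As a rough sketch of the bitmap idea (the class and method names below are hypothetical, not taken from the embodiment), the allocation state of each segment can be kept as one bit:

```python
class SegmentBitmap:
    """Allocation-state bitmap for one memory block: bit i is 1 if
    memory segment i is allocated, 0 if it is free."""

    def __init__(self, num_segments):
        self.num_segments = num_segments
        self.bits = 0  # all segments start unallocated

    def mark_allocated(self, i):
        self.bits |= 1 << i

    def mark_free(self, i):
        self.bits &= ~(1 << i)

    def is_allocated(self, i):
        return bool(self.bits >> i & 1)


bm = SegmentBitmap(2048)   # e.g. a 2 MB block with 1 KB segments
bm.mark_allocated(7)
print(bm.is_allocated(7))  # True
bm.mark_free(7)
print(bm.is_allocated(7))  # False
```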
The block metadata may likewise be stored in various locations. For example, since the block metadata occupies little memory, it may be stored in the memory segments of the memory block itself; depending on the size of the block metadata and the size of a memory segment, the block metadata may occupy one or more memory segments. Which memory segments store the block metadata may be configured as needed; for example, the block metadata may be stored starting from the first memory segment of the memory block, or starting from the last memory segment of the memory block. In other examples, the block metadata may also be stored together at other locations in the memory, which is not limited in this embodiment.
In the reserved memory scenario, the memory may include a first storage space (non-reserved memory) used by an operating system of the computer device and a second storage space (reserved memory) used by the virtual machine, where the second storage space includes the plurality of memory blocks. The first storage space and the second storage space may adopt different management granularities, the first storage space may be managed by a first memory management module of the operating system, and the method of this embodiment may be applied to a second memory management module for managing the second storage space in the operating system. If the second memory management module uses the first storage space, memory allocation needs to be initiated to the first memory management module. If the block metadata frequently changes, the second memory management module is required to frequently interact with the first memory management module, and based on this, the block metadata is stored in the memory segment of the memory block and is directly managed by the second memory management module, so that the processing efficiency can be improved. Moreover, the memory block of the reserved memory has a larger granularity and is often not completely used, and the space occupied by the block metadata is very limited, so that the use of the memory block is not influenced. And the corresponding relation between the block metadata and the memory block does not need to be specially established, and the block metadata of the memory block can be directly determined when the address of the memory block is determined.
As an example, the size of the memory segment may be determined based on the size of the block metadata, such that the size of each memory segment is greater than or equal to the size of the block metadata. The block metadata can then be stored in a single memory segment, for example the first memory segment, which facilitates management and improves management efficiency.
For example, the block metadata may further include other information as needed, such as the physical address paddr of the memory block ms, the number of free memory segments (free), the maximum number of free segments (max_free, i.e., the maximum number of consecutive unallocated memory segments in the allocated memory block), the sequence number of the start position of the maximum consecutive free run (max), and the like, so as to facilitate subsequent allocation or release of memory segments.
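A minimal sketch of such a block metadata record, using the field names from the translated text (paddr, free, max_free; `max_start` is a hypothetical name for the start-position sequence number):

```python
from dataclasses import dataclass

@dataclass
class BlockHeader:
    """Per-block metadata (header) sketch for an allocated memory block."""
    paddr: int       # physical address of the memory block ms
    free: int        # number of free memory segments in the block
    max_free: int    # longest run of consecutive free segments
    max_start: int   # segment index where that longest run starts
    bitmap: int = 0  # per-segment allocation bitmap (bit i set = allocated)


# A freshly allocated, still-empty 2 MB block with 2048 segments:
hdr = BlockHeader(paddr=0x200000, free=2048, max_free=2048, max_start=0)
print(hdr.free, hdr.max_free)  # 2048 2048
```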
In this embodiment, total metadata is further established, where the total metadata includes information of the number of unallocated memory segments in each allocated memory block, so as to determine the allocable memory blocks in the memory.
The quantity information of the unallocated memory segments in each allocated memory block may include information on the free memory segments in the allocated memory block, such as the number of free memory segments and/or the maximum number of free segments, where the maximum number of free segments is the maximum number of consecutive free memory segments in the memory block. For example, a memory block ms may have 200 free memory segments consisting of two consecutive free runs, one of 50 segments and the other of 150 segments; the maximum number of free segments is then 150. Allocated memory blocks fall into two types: memory blocks that still have free memory segments, as in the example above; and completely occupied memory blocks, i.e., blocks with no free memory segments, whose number of free memory segments is zero. With this information, when a memory allocation request is received, the total metadata can be used to quickly determine whether any allocated memory block satisfies the request. In practical applications, the data structure of the total metadata can be flexibly configured as needed, which is not limited in this embodiment.
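The free-segment statistics in the example above (200 free segments in runs of 50 and 150, maximum free run 150) can be derived from a block's allocation bitmap; the following helper is an illustrative sketch, not part of the embodiment:

```python
def free_stats(bits, num_segments):
    """Scan an allocation bitmap (bit i set = segment i allocated) and return
    (free count, longest free run, start index of that longest free run)."""
    free = max_run = best_start = 0
    run = start = 0
    for i in range(num_segments):
        if bits >> i & 1:          # an allocated segment ends any free run
            run = 0
        else:                      # a free segment
            if run == 0:
                start = i
            run += 1
            free += 1
            if run > max_run:
                max_run, best_start = run, start
    return free, max_run, best_start


# Reproduce the example: free runs of 50 (segments 0-49) and 150 (100-249).
bits = (1 << 2048) - 1             # start fully allocated
for i in list(range(0, 50)) + list(range(100, 250)):
    bits &= ~(1 << i)
print(free_stats(bits, 2048))      # (200, 150, 100)
```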
Illustratively, the storage location of the overall metadata may also be in a variety of ways; for example, in the reserved memory scenario, the total metadata may be stored in the unreserved storage space of the memory, or may be stored in the reserved storage space of the memory. For example, the total metadata may be stored in unreserved storage space due to the size of the total metadata and the requirement that the memory is desired to be reserved for use by the virtual machine as much as possible in a reserved memory scenario.
In some examples, the total metadata may include an address of each of the block metadata, and after determining that there is at least one alternative memory block that satisfies the memory allocation request, the method further includes: the block metadata of the at least one candidate memory block is read according to the address of the block metadata of the at least one candidate memory block, so that the block metadata of the candidate memory block can be quickly read after the candidate memory block is determined in this embodiment.
For convenience of management and to implement fast allocation, since there are two types of allocated memory blocks, in some examples the information on fully allocated memory blocks and the information on incompletely allocated memory blocks in the total metadata may also be managed and stored separately.
In practical applications, there may be a plurality of implementation manners for recording the total metadata of the quantity information of the unallocated memory segments in each allocated memory block. In some examples, the information about the number of unallocated memory segments in each allocated memory block in the total metadata may be stored in a linked list manner.
A linked list is a storage structure that is non-contiguous and non-sequential at the physical level; the logical order of its data elements is realized through the pointer links in the list. A linked list is composed of a series of nodes (each element in the list is called a node), which can be generated dynamically at runtime. Each node comprises two parts: a data field that stores the data element, and a pointer field that stores an address.
Linked lists include singly linked lists and doubly linked lists. Fig. 2B is a schematic diagram of a singly linked list according to an embodiment: the first node of the list holds the head pointer head, its data field is null, and the head pointer points to the data field of the next node. The next pointer of the last node points to null, which means the list is non-circular; in other examples, the next pointer of the last node may instead point to the head pointer, forming a circular linked list.
The pointer field of each node in a doubly linked list includes a front pointer prev (pointing to the previous node) and a back pointer next, so that, compared with a singly linked list, the node preceding the current node can be found quickly. Similarly, depending on where the back pointer next of the last node points, doubly linked lists also divide into non-circular and circular variants. Fig. 2C is a schematic diagram of a doubly linked list according to an embodiment, taking a circular doubly linked list as an example: the pointer field of the first node (the head pointer) includes a front pointer and a back pointer, and its data field head may be empty or may store data as needed; the pointer fields of the other nodes likewise include a front pointer and a back pointer, and their data fields in the figure are a1, a2 and a3 in sequence. The nodes are drawn in the order front pointer, data field, back pointer; in practical applications other layouts may be used as needed, for example front pointer, back pointer, data field, which this embodiment does not limit. Likewise, the type of doubly linked list may be selected as needed, which this embodiment does not limit.
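The circular doubly linked list described above can be sketched in C as follows. This is an illustrative sketch only; the names (dnode, dlist_init, and so on) are assumptions of the sketch and not part of this embodiment.

```c
#include <assert.h>
#include <stddef.h>

/* A node of a circular doubly linked list: a front pointer prev,
   a back pointer next, and a data field. */
typedef struct dnode {
    struct dnode *prev;   /* front pointer: previous node */
    struct dnode *next;   /* back pointer: next node */
    int data;             /* data field (a1, a2, ... in the figure) */
} dnode;

/* An empty circular list: the head's prev and next point to itself. */
static void dlist_init(dnode *head) {
    head->prev = head;
    head->next = head;
}

/* Insert node n immediately after node pos. */
static void dlist_insert_after(dnode *pos, dnode *n) {
    n->prev = pos;
    n->next = pos->next;
    pos->next->prev = n;
    pos->next = n;
}

/* Unlink node n from whatever list it is in. */
static void dlist_remove(dnode *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
}
```

With this layout, finding the node preceding any given node is a single pointer dereference, which is the advantage over a singly linked list noted above.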
In some examples, the total metadata includes one or more first linked lists, different first linked lists corresponding to different quantity information; in practical applications, a first linked list may be a singly or doubly linked list as needed, which this embodiment does not limit.
A first linked list includes at least one node, and each node is configured to store the address of the block metadata of one allocated memory block, so that the block metadata of the memory block can be accessed quickly through the total metadata. The addresses of the block metadata of allocated memory blocks having the same quantity information are stored in different nodes of the same first linked list.
Two doubly linked lists are shown in fig. 2D: List_a and List_b. The linked lists of this embodiment include head nodes; in practical applications, whether to provide a head node is optional as needed, which this embodiment does not limit.
List_a is a circular doubly linked list: the pointer field (head pointer) of its first node includes a front pointer and a back pointer and also holds data head_a; the back pointer points to the next node a1, and the front pointer points to the last node, which is also a1. Similarly, the front pointer of node a1 points to head_a and its back pointer also points to head_a.
List_b is likewise a circular doubly linked list: the pointer field (head pointer) of its first node includes a front pointer and a back pointer and also holds data head_b; the back pointer points to the next node b1, and the front pointer points to the last node b2. The other two nodes point analogously, as shown in the figure.
In this embodiment, a node may store the address of the block metadata header of an allocated memory block. Nodes a1, b1 and b2 shown in the figure respectively store the addresses of the block metadata headers of their corresponding allocated memory blocks. The block metadata is stored in a memory segment of the memory block ms, such as the first memory segment, so the header of the memory block can be accessed through the node.
Node a1 in List_a links to memory block m1. Nodes b1 and b2 in List_b respectively represent memory blocks m2 and m3; that is, the block metadata of m2 and of m3 are linked in the same linked list, indicating that these two allocated memory blocks have the same quantity information (e.g., max_free) of unallocated memory segments.
The block metadata of the allocated memory block corresponding to a1 and that of the allocated memory block corresponding to b1 use different linked lists; that is, the quantity information (e.g., max_free) of the unallocated memory segments of the block corresponding to a1 differs from that of the block corresponding to b1.
In practical applications, when the number of memory segments is large, the quantity information of the unallocated memory segments of each allocated memory block can also take many possible values. For example, if one memory block has 2048 memory segments, then when there are many memory blocks there are 2048 possible values for the number of unallocated memory segments in an allocated memory block; since each first linked list corresponds to one such value, there may be many first linked lists. To make it easy to query allocable memory blocks through the total metadata during memory allocation, in this embodiment the total metadata includes a linked list array, and each element of the array corresponds to a different number range. Each element is used to link to one or more first linked lists, and the quantity corresponding to each linked first linked list lies within the number range of that element. For example, the linked list array in the total metadata may be separate metadata that manages the linked lists under each element of the array.
The number ranges may be divided according to the number of memory segments; there are multiple number ranges, and the ranges may be of the same or different sizes. For example, 2048 memory segments may be divided into 16 ranges: 1 to 128 as one number range, 129 to 256 as the next, and so on. It is clear to those skilled in the art that various other divisions are possible in practice, which this embodiment does not limit.
On this basis, if n number ranges are divided, the linked list array has n elements, and each element of the array is a linked list. By way of example, a linked list array partial[nlist] (where nlist denotes the number of list elements) includes 16 elements: partial[0] to partial[15]. First linked lists whose number of unallocated memory segments lies in the range 1 to 128 are linked below the first element partial[0] of the array, and so on.
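Under the example division above (2048 segments split into 16 equal ranges of 128), mapping a max_free value to the index of its array element reduces to integer arithmetic. A sketch, with illustrative names:

```c
#include <assert.h>

/* Map a max_free value to the index of its element in the linked list
   array partial[], assuming the example division in the text: 2048
   segments, 16 equal ranges of 128 (1-128 -> partial[0],
   129-256 -> partial[1], and so on). */
#define NR_SEGMENTS 2048
#define NR_RANGES   16
#define RANGE_SIZE  (NR_SEGMENTS / NR_RANGES)   /* 128 */

static int range_index(int max_free) {
    if (max_free < 1 || max_free > NR_SEGMENTS)
        return -1;                 /* value falls in no range */
    return (max_free - 1) / RANGE_SIZE;
}
```

With unequal ranges the mapping would instead be a small table lookup; the arithmetic form only works because every range here has the same size.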
Fig. 2E is a schematic diagram of a linked list array according to an exemplary embodiment. The array shown in fig. 2E includes 16 elements, and the number range corresponding to each element is shown in the figure. For example, assuming the following three allocated memory blocks exist in the memory, the corresponding linked lists, according to the maximum number of free memory segments of each block, may be:
assume the maximum number of free memory segments of memory block ms1 is 100; the block metadata header of this memory block may be linked in List_a shown in fig. 2D;
assume the maximum number max_free of free memory segments of memory blocks ms2 and ms3 is 120; the block metadata headers of these two memory blocks may be linked in List_b shown in fig. 2D, where b1 denotes the header of ms2 and b2 denotes the header of ms3.
Since the maximum number of free memory segments of all three memory blocks lies in the range 1 to 128, they can be linked to the first element partial[0] of the array.
In practical applications, multiple linking schemes may be adopted as needed; for example, each element corresponds to one general linked list and stores the head pointer of that general list, and the head pointer of each first linked list is stored in a node of the general linked list of the element to which that first linked list corresponds.
Each element of the linked list array is a linked list, and the information stored by an element may be the information of the first node of that list. For example, in the linked list array shown in fig. 2E, the first element partial[0] stores the first-node information of the circular doubly linked List_k. Specifically, List_k under partial[0] includes node head_k, which points to node k1; k1 points to node k2; and the next pointer of k2 may point back to head_k, forming a circular doubly linked list. That is, head_k, k1 and k2 form List_k. The two other lists, List_a and List_b, may each be linked with List_k: node k1 may store the first node of List_a, and node k2 may store the first node of List_b. As shown in fig. 2F, k1 actually stores the information of the first node of List_a enclosed by the dashed line in the figure, and k2 actually stores the information of the first node of List_b enclosed by the dashed line, thereby linking List_k, List_a and List_b together. For ease of understanding, the information within the dashed boxes is not drawn inside k1 and k2.
Illustratively, the above example involves two max_free values, 100 and 120; max_free may also be stored in the linked lists as needed, for example in the data field head_a of the first node of List_a and the data field head_b of the first node of List_b, respectively.
The order of the two lists List_a and List_b linked under List_k may also be configured flexibly as needed: they may be sorted in ascending order of max_free, in descending order, or in any other custom order, which this embodiment does not limit.
Because the maximum number of free memory segments max_free can take many values, in this embodiment a corresponding first linked list may be created only when a max_free value actually occurs. For example, in the range 1 to 128 corresponding to the first element partial[0], since max_free takes only the two values 100 and 120, only List_a (corresponding to 100) and List_b (corresponding to 120) are created, which reduces resource occupation; accordingly, List_k includes only the nodes linking these two lists. It may be understood that, in practical applications, a linked list may instead be created for every max_free value, and when a given max_free has no corresponding allocated memory block its linked list may store a null value; this embodiment does not limit this.
In practical applications there may be fully allocated memory blocks, i.e., allocated memory blocks all of whose memory segments are allocated, so that the number of free memory segments is zero. In this case, as in the foregoing example, a first linked list indicating that max_free is zero may be created. In other examples, fully allocated memory blocks may be managed separately: another linked list, referred to in this embodiment as the second linked list, may be created. The second linked list is not linked into the linked list array with the first linked lists, and the data fields of its nodes may store the addresses of the block metadata of fully allocated memory blocks, thereby linking their block metadata headers, so that the headers of all fully allocated memory blocks are mounted on the same list. In a memory allocation scenario, a fully allocated memory block has no free memory segment and cannot be used for allocation; managing fully allocated blocks separately on this basis therefore improves the processing efficiency of memory allocation.
As can be seen from the above embodiments, the elements of the linked list array need not be linked directly to the block metadata headers; a further layer of structures (the first linked lists, one list per max_free value) is provided in between and is allocated on demand according to the maximum number of free segments. For example, if the range 1 to 128 corresponding to partial[0] contains only one ms with 5 consecutive free segments, a single list is allocated with its max_free set to 5, linked upward into partial[0] and downward to the header; lists for other maximum free segment counts that have not occurred are not allocated in advance, avoiding wasted metadata.
The linked list array and the second linked list above may be organized in a pool structure, which serves as the total metadata and may be used to manage all memory blocks of this embodiment. Optionally, the total metadata may further include other information, for example the number nr of memory blocks ms it contains, a protection flag lock for protecting linked list operations, a cache pool for caching list metadata, and the like.
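A hypothetical layout of such a pool structure is sketched below. The field names mirror the text (partial, full, nr, lock), but the exact layout and types are assumptions of this sketch, not a definitive implementation.

```c
#include <assert.h>
#include <stddef.h>

#define NR_RANGES 16

/* Minimal circular doubly linked list head, as used throughout the text. */
struct list_head { struct list_head *prev, *next; };

/* Pool structure acting as the total metadata: the linked list array,
   the second (full) list, a block counter and a lock flag. */
struct pool {
    struct list_head partial[NR_RANGES]; /* one element per number range */
    struct list_head full;               /* second linked list: fully allocated blocks */
    unsigned long nr;                    /* number of memory blocks ms managed */
    int lock;                            /* protection flag for linked list operations */
};
```

A real implementation would likely also hold the cache pool for list metadata mentioned above; it is omitted here for brevity.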
Based on the above metadata design, the present specification further provides an embodiment of a memory management method. As shown in fig. 2G and fig. 2H, which are flowcharts illustrating a memory management method according to an exemplary embodiment of the present disclosure, the method may include the following steps:
in step 202, in response to the memory adjustment request, a target memory segment whose allocation state needs to be adjusted is determined according to the total metadata and the block metadata.
In step 204, the allocation status of the target memory segment is adjusted based on the memory adjustment type corresponding to the memory adjustment request;
in step 206, after the allocation status of the target memory segment is adjusted, the allocation status information of the block metadata of the target memory block to which the target memory segment belongs is updated, and the quantity information of the unallocated memory segments in the target memory block in the total metadata is updated.
The memory management method of this embodiment may be applied to any scenario that requires memory management, including but not limited to the aforementioned reserved memory scenario. In some examples, the method of the present embodiment may manage all the storage space of the internal memory, or may manage part of the storage space; for example, in the reserved memory scenario, a memory space dedicated to the virtual machine is reserved in the memory.
When the method is applied to a reserved memory scenario, the memory may include a first storage space used by an operating system of the computer device and a second storage space used by the virtual machine, where the second storage space includes the multiple memory blocks. The first storage space and the second storage space may adopt different management units, and the first storage space may be managed by a first memory management module of the operating system.
As shown in fig. 2I and 2J, memory management generally involves two operations: memory allocation and memory release, which are described separately below. Taking the application of the method of this embodiment to a memory management module as an example (in practical applications, memory allocation and memory release may be functions that operate independently): when a memory allocation request 21 is input to the memory management module, step 211 of determining the target memory segment and step 212 of updating after the allocation state of the target memory segment has been adjusted may be executed, the latter specifically including updating the block metadata of the target memory block and updating the total metadata. Similarly, when a memory release request 22 is input to the memory management module, step 221 of determining the target memory segment and step 222 of updating after the allocation state has been adjusted may be executed, likewise including updating the block metadata of the target memory block and updating the total metadata.
In some examples, the memory adjustment request includes: a memory allocation request; the determining a target memory segment whose allocation state needs to be adjusted according to the total metadata and the block metadata includes:
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata;
if so, determining a target memory block and a target memory segment for memory allocation in the target memory block in the at least one alternative memory block according to the block metadata corresponding to the at least one alternative memory block.
In this embodiment, the memory allocation request may carry the size of the storage space to be allocated. In practical applications, this size may be larger or smaller than the size of one memory block. When it is smaller than one memory block, whether a suitable free memory segment is available for allocation may be determined through the total metadata and the block metadata.
In some examples, the information on the number of unallocated memory segments includes a maximum number of idle segments, where the maximum number of idle segments represents a maximum number of consecutive unallocated memory segments in the allocated memory block; determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata includes:
determining the number of memory segments needing to be allocated, which are required by meeting the memory allocation request;
and determining whether at least one alternative memory block with the maximum number of idle segments larger than or equal to the number of the memory segments needing to be allocated exists or not according to the total metadata.
For example, the number of memory segments to be allocated, chunk, may be obtained by dividing the size of the storage space by the size of a memory segment and rounding up.
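The rounding-up division just described can be sketched as follows; the 4 KiB segment granularity SEG_SIZE is an assumption of the sketch, not fixed by this embodiment.

```c
#include <assert.h>

/* Assumed memory segment granularity: 4 KiB. */
#define SEG_SIZE 4096UL

/* Number of whole memory segments needed to cover a requested size:
   divide by the segment size and round up. */
static unsigned long chunks_needed(unsigned long size) {
    return (size + SEG_SIZE - 1) / SEG_SIZE;
}
```

The `+ SEG_SIZE - 1` before the integer division is the standard way to round up without floating point.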
Since the total metadata includes the quantity information of the unallocated memory segments in each allocated memory block, it can be determined whether an allocable memory block exists in the memory, and then the allocable memory segments are queried. In some examples, the amount information of the unallocated memory segments recorded in the total metadata may be the amount of free memory segments, and the storage space required by one memory allocation request may be a non-consecutive memory segment.
In other examples, the storage space required by a memory allocation request may be a contiguous run of memory segments. In this embodiment, based on the design of the maximum number of free segments, consecutive target memory segments can be allocated in response to each memory allocation request, thereby reducing the complexity of memory management. The total metadata stores the max_free of each memory block: the number range to which the required chunk count belongs is determined, the information stored by the element corresponding to that range in the linked list array is queried, and if a non-empty first linked list whose max_free is greater than or equal to chunk is linked below the element, the allocable memory blocks can be determined.
In some examples, the determining, according to the block metadata corresponding to the at least one candidate memory block, a target memory block and a target memory segment in the target memory block, where the target memory segment is used for allocating memory, in the at least one candidate memory block includes:
if the candidate memory blocks with the maximum number of idle sections equal to the number of the memory sections needing to be allocated exist, determining the candidate memory blocks and the maximum idle sections in the candidate memory blocks as target memory blocks and target memory sections used for allocation in the target memory blocks according to the block metadata of the candidate memory blocks;
if the maximum number of the idle segments of the at least one candidate memory block is greater than the number of the memory segments to be allocated, determining a difference between the number of consecutive unallocated memory segments in the candidate memory block and the number of the memory segments to be allocated according to the block metadata of the at least one candidate memory block, and determining a target memory block and a target memory segment for allocation in the target memory block according to the difference.
In this embodiment, if the total metadata stores a max_free exactly equal to the chunk count to be allocated, one of the memory blocks linked by the first linked list corresponding to that max_free may be used as the target memory block. Where the list has multiple nodes, one may be selected flexibly as needed; for example, to simplify updating the metadata, the memory block whose block metadata is linked at the last node of that first linked list may be used as the target memory block, so that the node can be removed from the list quickly, achieving fast updating of the total metadata.
If there is no max_free exactly equal to the chunk count to be allocated, memory blocks corresponding to other max_free values may be selected as needed. For example, if the chunk count to be allocated is 110 and the existing max_free values in ascending order include 120, 150, 200 and so on, the allocated memory block corresponding to 120 may be selected, so that after the 110 memory segments are allocated from that block, memory segment fragmentation is reduced as much as possible; of course, allocated memory blocks corresponding to other max_free values may also be selected, which this embodiment does not limit.
Taking the selection of the allocated memory blocks whose max_free is 120 as candidate memory blocks as an example, there may be multiple memory blocks with max_free of 120; assume there are 2, the candidate memory block ms2 and the candidate memory block ms3. Because the maximum number of consecutive free segments of ms2 and ms3 is greater than the chunk count to be allocated, smaller runs of consecutive free memory segments may also exist in ms2 and ms3, possibly matching chunk exactly. To reduce memory segment fragmentation, one of the blocks may optionally be selected as needed and its block metadata traversed to determine whether a more suitable run of consecutive segments exists. Of course, in other examples, selecting several or all of the memory blocks and traversing the block metadata of each is also optional, but this may incur overhead when the system is busy; it can be configured flexibly in practice as needed, which this embodiment does not limit. Illustratively, taking ms2 as an example, the block metadata of ms2 is read and the allocation state information of each memory segment is traversed to find the run of consecutive free memory segments required by chunk. For example, suppose a run of 115 consecutive free memory segments is found in the block metadata of ms2; since the difference between 115 and chunk is smaller than the difference between 118 (another candidate run) and chunk, ms2 is determined as the target memory block, and the found run of 115 consecutive free memory segments is determined to contain the target memory segments.
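The best-fit traversal described above — scanning the allocation state of each segment and keeping the free run whose length exceeds chunk by the least — can be sketched as follows. For clarity the sketch uses one byte per segment (0 = free, 1 = allocated) rather than a packed bitmap; the function name is illustrative.

```c
#include <assert.h>

/* Scan a per-segment allocation map of nseg entries and return the
   start index of the free run that fits chunk segments with the
   smallest leftover, or -1 if no run is long enough. */
static int best_fit(const unsigned char *bitmap, int nseg, int chunk) {
    int best_idx = -1;
    int best_diff = nseg + 1;      /* larger than any possible diff */
    int i = 0;
    while (i < nseg) {
        if (bitmap[i]) { i++; continue; }   /* skip allocated segments */
        int start = i, run = 0;
        while (i < nseg && !bitmap[i]) { run++; i++; }  /* measure the free run */
        if (run >= chunk && run - chunk < best_diff) {
            best_diff = run - chunk;
            best_idx = start;
            if (best_diff == 0)
                break;             /* exact fit: stop early, as in the text */
        }
    }
    return best_idx;
}
```

A packed-bitmap version would replace the byte tests with bit tests but follow the same run-measuring structure.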
On this basis, the address of the target memory segment in the determined target memory block, i.e., the memory used for the current allocation, may be returned to the requester, and the allocation state of each target memory segment is adjusted from unallocated to allocated. The block metadata of the target memory block is then updated, i.e., the allocation state information of each memory segment of the target memory block.
The quantity information of the unallocated memory segments of the target memory block in the total metadata is also updated. For example, if after the adjustment the target memory block has become fully allocated, the header of the target memory block is removed from its original first linked list and linked into the second linked list representing fully allocated memory blocks. If the maximum number of free segments of the target memory block has changed, its header is removed from the original first linked list and the new maximum number of free segments is determined; if no memory block headers remain linked under the original first linked list after the removal, that first linked list is deleted, i.e., its list head metadata is deleted. If a first linked list already exists for the new maximum number of free segments, the header is added to it; otherwise a first linked list is created and linked to the corresponding element of the linked list array.
Next, an embodiment of memory allocation is described:
1. Receive a memory allocation request and determine the size of the storage space to be allocated; the information of the memory block to be allocated needs to be recorded in the total metadata pool. In this embodiment, the size is converted into the required number of memory segments, chunk, according to the memory segment granularity.
2. Search the linked list array partial[] of pool for a max_free satisfying chunk; if one exists, the address information of the block metadata header of the target memory block can be obtained from the corresponding linked list; return the found header address and jump to step 5; otherwise execute step 3.
3. Since the free memory segments of all existing memory blocks ms in pool cannot meet the allocation requirement, a new free memory block ms needs to be allocated. If memory is insufficient, the allocation fails and the procedure exits directly; otherwise execute step 4.
4. Allocate a new memory block ms, initialize its block metadata header, and return the address information of the header; other related processing, such as establishing virtual address mappings, may also be performed, which is not described in this embodiment.
5. From the returned header address information, the block metadata header that can meet the allocation requirement has been found. If max_free is exactly equal to the required chunk, set the allocation start position sidx to the start position max of the largest consecutive free run recorded in the block metadata header, and jump directly to step 14.
6. Otherwise, i.e., max_free is greater than the required chunk (there may be multiple memory blocks with the same max_free at this point), traverse the allocation bitmap recorded in the header and find the position idx of the first free segment.
7. Add 1 to the count free of consecutive free segments and continue to judge whether the next segment is free; if so, perform step 11, otherwise perform step 8.
8. Judge whether the free run count free equals the required size chunk; if so, the allocation requirement is met, and go directly to step 13.
9. If free is greater than chunk, record the difference diff between them and compare it with the minimum difference min_diff; if diff is less than min_diff, record the start position min_idx of this run and update min_diff.
10. Judge whether the end flag is set; if so, go to step 13; otherwise go to step 11.
11. Continue searching for the start position of the next free memory segment.
12. At this point, judge whether a next free memory segment exists and whether the traversal is finished (when there are multiple memory blocks with the same max_free, the bitmaps of one or more of them may be traversed as needed); if the traversal is finished, set the end flag and jump to step 8; otherwise jump back to step 7.
13. The run with the minimum difference min_diff found at this point is the required target memory segment run; set the allocation start position sidx to min_idx.
14. Set the chunk segments starting at allocation position sidx to the allocated state.
15. Return the virtual address handle where the sidx-th memory segment of the memory block is located; this location can be used to store the small block of memory.
16. If the memory block is not a newly allocated one, remove the address information of its header from the existing linked list; if no memory block headers remain linked under the original linked list after the removal, delete the original first linked list, i.e., delete its list head metadata.
17. Judge the number of free memory segments in the header at this point; if the block is full, move it into the full linked list and jump to step 23; otherwise execute step 18.
18. Since the memory block is not full, first update the position max and the size max_free of the largest consecutive free run in the block metadata header.
19. Find the position partial[i] of the corresponding not-full array element according to max_free.
20. Traverse the list heads under partial[i] and check whether a linked list node for this max_free exists; if not, execute step 21; otherwise the list is found, execute step 22.
21. First allocate a linked list node list, set its max_free value, and link it upward into partial[i].
22. A list satisfying max_free has been found; link the header downward into it.
23. The entire allocation process is complete.
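Two pieces of the flow above — marking the run [sidx, sidx+chunk) allocated (step 14) and recomputing the position and size of the largest consecutive free run (step 18) — can be sketched as follows. As before, one byte per segment is used instead of a packed bitmap for clarity, and the function names are illustrative.

```c
#include <assert.h>

/* Set chunk consecutive segments starting at start to the given
   allocation state (1 = allocated, 0 = free). */
static void mark_range(unsigned char *bitmap, int start, int chunk,
                       unsigned char allocated) {
    for (int i = 0; i < chunk; i++)
        bitmap[start + i] = allocated;
}

/* Recompute max_free for a block; the start of the largest free run
   (the "max" position in the header) is stored into *max. */
static int recompute_max_free(const unsigned char *bitmap, int nseg, int *max) {
    int max_free = 0, run = 0, start = 0;
    for (int i = 0; i < nseg; i++) {
        if (bitmap[i]) { run = 0; continue; }   /* allocated: run ends */
        if (run == 0) start = i;                /* a new free run begins */
        run++;
        if (run > max_free) { max_free = run; *max = start; }
    }
    return max_free;
}
```

The same pair of helpers serves the release path: freeing is mark_range with allocated = 0 followed by the same recomputation (step 7 of the release flow below).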
An embodiment of memory release is provided next. The memory adjustment request includes a memory release request, which carries the size of the memory to be released and the address of the memory to be released. Determining the target memory segment whose allocation state needs to be adjusted according to the total metadata and the block metadata includes: determining the target memory block according to the size of a memory block and the address of the memory to be released; and determining the target memory segments to be released in the target memory block according to the size of a memory segment and the size of the memory to be released.
1. In response to the memory release request, release a small memory region back to the pool: determine, from the memory release request, the position handler to be released and its size size, and convert size into the number of memory segments to be released according to the memory segment size;
2. determine the memory block in which the handler is located according to the handler's address, and access the block metadata header of that memory block;
3. determine the position idx of the handler within the header according to the block metadata header;
4. set the chunk memory segments starting from idx to the free state;
5. remove the header from the linked list it currently occupies; if no headers of memory blocks remain linked under the original linked list after the removal, delete that linked list, that is, delete its linked-list head metadata;
6. check the number of free memory segments in the header at this point: if the block is completely empty, directly release the memory block ms, return it to the upstream reserved memory management system, and jump to step 12; otherwise, perform step 7;
7. since the block is not empty, update the position max and the size max_free of the largest contiguous free memory segment in the header of memory block ms;
8. search the linked list array for the position partial[i] of the corresponding element according to max_free;
9. traverse the heads under partial[i] and check whether a linked-list node list with the value max_free already exists; if not, perform step 10; otherwise the list has been found, so perform step 11;
10. first allocate a linked-list node list, set its max_free value, and link it upward into partial[i];
11. find the list that satisfies max_free and link the header downward under it.
12. The entire release process is complete.
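The size conversion and segment-freeing steps above (steps 1, 4, and 6) can be modeled minimally as follows. The segment size SEG_SIZE, the boolean per-segment representation, and the function names are illustrative assumptions, not part of the patent.

```python
SEG_SIZE = 4096  # hypothetical memory-segment size in bytes

def segments_needed(size):
    """Step 1: convert a byte size into a whole number of memory segments
    (rounding up)."""
    return (size + SEG_SIZE - 1) // SEG_SIZE

def release(segments, idx, chunk):
    """Steps 4 and 6: free `chunk` segments starting at `idx` (True = free),
    then report whether the whole block is now empty, in which case the
    block would be returned to the upstream reserved memory manager."""
    for i in range(idx, idx + chunk):
        segments[i] = True
    return all(segments)

segs = [False] * 8                  # a fully allocated 8-segment block
fully_empty = release(segs, 2, 3)   # free segments 2..4 only
```

Because emptiness is checked immediately after freeing, a fully drained block is handed back to the upstream manager in the same request, rather than lingering in the not-full lists.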
Corresponding to the foregoing embodiments of the memory management method, the present specification further provides embodiments of a memory management apparatus and of a computer device to which it is applied.
The embodiments of the memory management apparatus in this specification can be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus is formed, as a logical device, by the processor of the computer device in which it is located reading the corresponding computer program instructions from nonvolatile memory into memory and executing them. In terms of hardware, fig. 3 shows a hardware structure diagram of the computer device in which the memory management apparatus of this specification is located; besides the processor 310, the memory 330, the network interface 320, and the nonvolatile memory 340 shown in fig. 3, the computer device in which the memory management apparatus 331 is located may further include other hardware according to its actual functions, which is not described again here.
As shown in fig. 4, fig. 4 is a block diagram of a memory management apparatus according to an exemplary embodiment in this specification, where the memory includes a plurality of memory blocks, and each memory block is divided into a plurality of memory segments;
the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata includes: the allocation state information of each memory segment in the allocated memory block;
the total metadata includes: the quantity information of the unallocated memory segments in each allocated memory block;
the device comprises:
a determining module 41 configured to: respond to a memory adjustment request, and determine, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted;
an adjustment module 42 configured to: adjust the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
an update module 43 configured to: after the allocation state of the target memory segment is adjusted, updating the allocation state information of the block metadata of the target memory block to which the target memory segment belongs, and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata.
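A compact sketch of how the three modules might cooperate on an allocation request is given below. The class and method names mirror modules 41-43, but all bodies are illustrative assumptions; for simplicity this sketch takes the first free segments rather than a contiguous run.

```python
class MemoryManager:
    """Toy model of the apparatus: block metadata tracks per-segment state,
    total metadata tracks per-block free-segment counts."""

    def __init__(self, num_blocks, segs_per_block):
        # block metadata: free/used state of each segment (True = free)
        self.block_meta = [[True] * segs_per_block for _ in range(num_blocks)]
        # total metadata: count of unallocated segments per block
        self.total_meta = [segs_per_block] * num_blocks

    def determine(self, needed):
        """Determining module 41: pick a block with enough free segments."""
        for blk, free in enumerate(self.total_meta):
            if free >= needed:
                return blk
        return None

    def adjust(self, blk, needed):
        """Adjustment module 42: mark the first `needed` free segments used."""
        taken = []
        for i, free in enumerate(self.block_meta[blk]):
            if free and len(taken) < needed:
                self.block_meta[blk][i] = False
                taken.append(i)
        return taken

    def update(self, blk, taken):
        """Update module 43: refresh the total-metadata free count."""
        self.total_meta[blk] -= len(taken)

mm = MemoryManager(num_blocks=2, segs_per_block=4)
blk = mm.determine(3)
taken = mm.adjust(blk, 3)
mm.update(blk, taken)
```

The split matters: module 41 touches only the small total metadata to shortlist blocks, so the per-segment block metadata is read only for blocks that can actually satisfy the request.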
In some examples, the memory adjustment request includes: a memory allocation request;
the determining module is further configured to:
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata;
if so, determining a target memory block and a target memory segment for memory allocation in the target memory block in the at least one alternative memory block according to the block metadata corresponding to the at least one alternative memory block.
In some examples, the information on the number of unallocated memory segments includes a maximum number of idle segments, where the maximum number of idle segments represents a maximum number of consecutive unallocated memory segments in the allocated memory block;
the determining module is further configured to:
determining the number of memory segments needing to be allocated, which are required by meeting the memory allocation request;
and determining whether at least one alternative memory block with the maximum number of idle segments larger than or equal to the number of the memory segments needing to be allocated exists according to the total metadata.
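The screening described above, keeping only blocks whose maximum number of idle segments can cover the request, can be sketched as follows; the dict-based layout of the total metadata is an illustrative assumption.

```python
def find_candidates(total_metadata, needed):
    """Return blocks whose longest run of consecutive free segments can
    satisfy the request. total_metadata maps block id -> max idle-segment
    count (the per-block quantity information)."""
    return [blk for blk, max_free in total_metadata.items()
            if max_free >= needed]

total = {"blk0": 1, "blk1": 4, "blk2": 2}
cands = find_candidates(total, 2)   # blk1 and blk2 qualify
```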
In some examples, the determining module is further configured to:
if an alternative memory block whose maximum number of idle segments equals the number of memory segments to be allocated exists, determining, according to the block metadata of that alternative memory block, the alternative memory block and the maximum idle segment therein as the target memory block and the target memory segment for allocation in the target memory block;
if the maximum number of idle segments of the at least one alternative memory block is greater than the number of memory segments to be allocated, determining, according to the block metadata of the at least one alternative memory block, a difference between the number of consecutive unallocated memory segments in each alternative memory block and the number of memory segments to be allocated, and determining the target memory block and the target memory segment for allocation in the target memory block according to the difference.
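The difference-based selection above amounts to a best-fit policy over each block's longest free run. A minimal sketch follows; the block names and dict layout are assumptions.

```python
def best_fit(candidates, needed):
    """Pick the candidate block whose longest free run exceeds `needed` by
    the least, limiting fragmentation; an exact fit wins with difference 0.
    candidates maps block id -> max consecutive free segments."""
    return min(candidates, key=lambda blk: candidates[blk] - needed)

cands = {"blkA": 5, "blkB": 3, "blkC": 8}
target = best_fit(cands, 3)   # blkB is an exact fit (difference 0)
```

Preferring the smallest difference keeps large free runs intact for future large requests, at the cost of scanning the candidate set.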
In some examples, the total metadata further includes an address of each of the block metadata; the determining module is further configured to, after determining that at least one alternative memory block satisfying the memory allocation request exists, read the block metadata of the at least one alternative memory block according to the address of that block metadata.
In some examples, the total metadata includes one or more first linked lists, with different first linked lists corresponding to different quantity information;
the first linked list includes at least one node, and each node is configured to store the address of the block metadata of an allocated memory block, so that the block metadata of an alternative memory block can be accessed after the alternative memory block is determined; the addresses of the block metadata of allocated memory blocks having the same quantity information are stored in different nodes of the same first linked list.
In some examples, the total metadata includes a linked list array, with each element in the linked list array corresponding to a different quantity range;
each element is used to link to one or more first linked lists, and the quantity information corresponding to each linked first linked list falls within the quantity range corresponding to the element.
In some examples, each of the elements corresponds to a total linked list and is configured to store the head pointer of the corresponding total linked list;
the head pointer of each first linked list is stored in a node of the total linked list corresponding to the element to which that first linked list corresponds.
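The two-level index described above, a linked list array whose elements lead to first linked lists keyed by quantity information, can be modeled with nested containers. All names and ranges here are illustrative; a real implementation would use intrusive linked-list nodes and pointers rather than Python containers.

```python
# partial[i] stands in for one element of the linked list array; each maps a
# max_free value to the "first linked list" of block-metadata addresses
# sharing that quantity information.
partial = [
    {},   # element 0: hypothetical quantity range 1-2 segments
    {},   # element 1: hypothetical quantity range 3-4 segments
]

def link_block(partial, elem, max_free, block_meta_addr):
    """Insert a block-metadata address under the first linked list keyed by
    its max_free value, creating that list on first use (as in step 21 of
    the allocation flow)."""
    partial[elem].setdefault(max_free, []).append(block_meta_addr)

link_block(partial, 1, 3, 0x1000)
link_block(partial, 1, 3, 0x2000)   # same quantity info, same first list
link_block(partial, 1, 4, 0x3000)
```

Grouping blocks by identical max_free lets an exact-fit lookup touch only one short list instead of scanning every not-full block.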
In some examples, the memory adjustment request includes: a memory release request, the memory release request carrying: the size of the memory to be released and the address of the memory to be released;
the determining module is further configured to:
determining a target memory block according to the size of the memory block and the address of the memory to be released;
and determining a target memory segment to be released in the target memory block according to the size of the memory segment and the size of the memory to be released.
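The two determinations above can be sketched with simple address arithmetic, assuming (our assumption, not the patent's) that memory blocks are aligned to the block size.

```python
BLOCK_SIZE = 1 << 16   # hypothetical 64 KiB memory block, block-aligned
SEG_SIZE = 4096        # hypothetical 4 KiB memory segment

def locate(addr, size):
    """Derive the target block base from the release address, the index of
    the first segment to free within that block, and how many segments the
    release covers."""
    block_base = addr & ~(BLOCK_SIZE - 1)          # block from address
    seg_index = (addr - block_base) // SEG_SIZE    # segment within block
    seg_count = (size + SEG_SIZE - 1) // SEG_SIZE  # segments to release
    return block_base, seg_index, seg_count

base, idx, n = locate(0x32000, 9000)   # frees 3 segments starting at idx 2
```

With block-size alignment, the target block is recovered from the address alone, with no lookup table from addresses to blocks.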
In some examples, the memory includes a first storage space for use by an operating system of the computer device and a second storage space for use by a virtual machine, the second storage space including the plurality of memory blocks;
the first storage space is managed by a first memory management module of the operating system, and the device is applied to a second memory management module which is used for managing the second storage space in the operating system;
the block metadata of the allocated memory block is stored in the memory segment of the memory block, and the total metadata is stored in the first storage space by calling the first memory management module.
For details of how the functions and actions of each module in the memory management apparatus are implemented, refer to the implementation of the corresponding steps in the memory management method, which are not repeated here.
Accordingly, embodiments of the present specification further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the steps of the foregoing memory management method embodiment are implemented.
Accordingly, embodiments of the present specification further provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the memory management method embodiments when executing the program.
Accordingly, embodiments of the present specification further provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the embodiments of the memory management method.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement without inventive effort.
The above embodiments may be applied to one or more electronic devices, which are devices capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; their hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device may also include a network device and/or a user device. Wherein the network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network where the electronic device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The steps of the above methods are divided only for clarity of description; in implementation, they may be combined into one step, or a step may be split into multiple steps, as long as the same logical relationship is preserved, all of which fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without altering the core design, also falls within the protection scope of this application.
The description herein of "particular examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following the general principles of the specification and including such departures from the present disclosure as come within known or customary practice in the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (13)

1. A memory management method, wherein the memory includes a plurality of memory blocks, and each memory block is divided into a plurality of memory segments; the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata includes: the allocation state information of each memory segment in the allocated memory block;
the total metadata includes: the quantity information of the unallocated memory segments in each allocated memory block;
the method comprises the following steps:
responding to a memory adjustment request, and determining, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted;
adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
after the allocation state of the target memory segment is adjusted, updating the allocation state information of the block metadata of the target memory block to which the target memory segment belongs, and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata.
2. The method of claim 1, the memory adjustment request comprising: a memory allocation request;
the determining a target memory segment whose allocation state needs to be adjusted according to the total metadata and the block metadata includes:
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata;
if so, determining a target memory block and a target memory segment for memory allocation in the target memory block in the at least one alternative memory block according to the block metadata corresponding to the at least one alternative memory block.
3. The method according to claim 2, wherein the information on the number of unallocated memory segments includes a maximum number of idle segments, and the maximum number of idle segments represents a maximum number of consecutive unallocated memory segments in the allocated memory block;
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata includes:
determining the number of memory segments needing to be allocated, which are required by meeting the memory allocation request;
and determining whether at least one alternative memory block with the maximum number of idle segments larger than or equal to the number of the memory segments needing to be allocated exists according to the total metadata.
4. The method according to claim 3, wherein the determining, according to the block metadata corresponding to the at least one alternative memory block, a target memory block in the at least one alternative memory block and a target memory segment for memory allocation in the target memory block includes:
if an alternative memory block whose maximum number of idle segments equals the number of memory segments to be allocated exists, determining, according to the block metadata of that alternative memory block, the alternative memory block and the maximum idle segment therein as the target memory block and the target memory segment for allocation in the target memory block;
if the maximum number of idle segments of the at least one alternative memory block is greater than the number of memory segments to be allocated, determining, according to the block metadata of the at least one alternative memory block, a difference between the number of consecutive unallocated memory segments in each alternative memory block and the number of memory segments to be allocated, and determining the target memory block and the target memory segment for allocation in the target memory block according to the difference.
5. The method of claim 2, the total metadata further comprising an address of each of the block metadata; after determining that at least one alternative memory block satisfying the memory allocation request exists, the method further includes: reading the block metadata of the at least one alternative memory block according to the address of the block metadata of the at least one alternative memory block.
6. The method of claim 5, the total metadata comprising one or more first linked lists, different ones of the first linked lists corresponding to different ones of the quantity information;
the first linked list includes at least one node, and each node is configured to store the address of the block metadata of an allocated memory block, so that the block metadata of an alternative memory block can be accessed after the alternative memory block is determined; the addresses of the block metadata of allocated memory blocks having the same quantity information are stored in different nodes of the same first linked list.
7. The method of claim 6, the total metadata comprising a linked list array, each element in the linked list array corresponding to a different range of numbers;
each element is used for linking to one or more first linked lists, and the quantity information corresponding to the linked first linked list is in the quantity range corresponding to the element.
8. The method of claim 7, wherein each of the elements corresponds to a total linked list and is configured to store the head pointer of the corresponding total linked list;
the head pointer of each first linked list is stored in a node of the total linked list corresponding to the element to which that first linked list corresponds.
9. The method of claim 1, the memory adjustment request comprising: a memory release request, the memory release request carrying: the size of the memory to be released and the address of the memory to be released;
the determining a target memory segment whose allocation state needs to be adjusted according to the total metadata and the block metadata includes:
determining a target memory block according to the size of the memory block and the address of the memory to be released;
and determining a target memory segment to be released in the target memory block according to the size of the memory segment and the size of the memory to be released.
10. The method according to any one of claims 1 to 9, wherein the memory includes a first storage space used by an operating system of the computer device and a second storage space used by a virtual machine, and the second storage space includes the plurality of memory blocks;
the first storage space is managed by a first memory management module of the operating system, and the method is applied to a second memory management module which is used for managing the second storage space in the operating system;
the block metadata of the allocated memory block is stored in the memory segment of the memory block, and the total metadata is stored in the first storage space by calling the first memory management module.
11. A memory management apparatus, wherein the memory includes a plurality of memory blocks, and each memory block is divided into a plurality of memory segments;
the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata includes: the allocation state information of each memory segment in the allocated memory block;
the total metadata includes: the quantity information of the unallocated memory segments in each allocated memory block;
the device comprises:
a determining module configured to: respond to a memory adjustment request, and determine, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted;
an adjustment module configured to: adjust the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
an update module to: after the allocation state of the target memory segment is adjusted, updating the allocation state information of the block metadata of the target memory block to which the target memory segment belongs, and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN202211248341.1A 2022-10-12 2022-10-12 Memory management method and device, computer equipment and storage medium Pending CN115599544A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211248341.1A CN115599544A (en) 2022-10-12 2022-10-12 Memory management method and device, computer equipment and storage medium
PCT/CN2023/123475 WO2024078429A1 (en) 2022-10-12 2023-10-09 Memory management method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211248341.1A CN115599544A (en) 2022-10-12 2022-10-12 Memory management method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115599544A true CN115599544A (en) 2023-01-13

Family

ID=84847498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211248341.1A Pending CN115599544A (en) 2022-10-12 2022-10-12 Memory management method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115599544A (en)
WO (1) WO2024078429A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7533228B1 (en) * 2005-05-27 2009-05-12 Sun Microsystems, Inc. Two-pass sliding compaction
CN108304259B (en) * 2017-01-11 2023-04-14 中兴通讯股份有限公司 Memory management method and system
CN110287127A (en) * 2019-05-14 2019-09-27 江苏大学 A kind of Nonvolatile memory management method and system that more granularity multicores are expansible
CN111143058A (en) * 2019-12-17 2020-05-12 长沙新弘软件有限公司 Memory management method based on backup list
CN114546661A (en) * 2022-03-01 2022-05-27 浙江大学 Dynamic memory allocation method and device based on memory transformation

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116991595A (en) * 2023-09-27 2023-11-03 太初(无锡)电子科技有限公司 Memory allocation method, device, equipment and medium based on Bitmap
CN116991595B (en) * 2023-09-27 2024-02-23 太初(无锡)电子科技有限公司 Memory allocation method, device, equipment and medium based on Bitmap
CN117130565A (en) * 2023-10-25 2023-11-28 苏州元脑智能科技有限公司 Data processing method, device, disk array card and medium
CN117130565B (en) * 2023-10-25 2024-02-06 苏州元脑智能科技有限公司 Data processing method, device, disk array card and medium
CN117555674A (en) * 2023-10-26 2024-02-13 南京集成电路设计服务产业创新中心有限公司 Efficient multithreading batch processing block memory pool management method
CN117555674B (en) * 2023-10-26 2024-05-14 南京集成电路设计服务产业创新中心有限公司 Efficient multithreading batch processing block memory pool management method

Also Published As

Publication number Publication date
WO2024078429A1 (en) 2024-04-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination