CN116225693A - Metadata management method, device, computer equipment and storage medium - Google Patents

Metadata management method, device, computer equipment and storage medium

Info

Publication number
CN116225693A
CN116225693A (application CN202211679912.7A)
Authority
CN
China
Prior art keywords
memory
metadata
memory block
management information
linked list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211679912.7A
Other languages
Chinese (zh)
Inventor
郑豪 (Zheng Hao)
Current Assignee
Alibaba China Co Ltd
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba China Co Ltd
Alibaba Cloud Computing Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd and Alibaba Cloud Computing Ltd
Priority to CN202211679912.7A
Publication of CN116225693A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present disclosure provides a metadata management method, a device, a computer device, and a storage medium. The method is applied to a first memory management module of an operating system; the first memory management module and a second memory management module of the operating system manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module is stored in memory blocks applied for from the second memory management module. The method comprises: in response to release of metadata stored in a memory block, reserving the memory block if a preset reservation condition is determined to be met; and in response to a storage request for target metadata, if a memory block fitting the target metadata is determined from among the reserved memory blocks, storing the target metadata using the determined memory block.

Description

Metadata management method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a metadata management method, a metadata management device, a computer device, and a storage medium.
Background
A computer device may employ a conventional memory management architecture, in which the entire memory is managed by one memory management module of the operating system (commonly called the kernel's memory management module). In other scenarios, such as virtual machine scenarios, the computer device may employ a reserved-memory allocation architecture: in addition to the kernel's memory management module, the operating system has other memory management modules (e.g., a reserved memory management module). The two memory management modules manage different storage spaces of the memory and may adopt different management granularities; for example, the kernel's memory management module uses a smaller granularity while the reserved memory management module uses a larger one.
For the storage space managed by the reserved memory management module, that module records information about each memory block of the space using metadata, and the memory blocks used to store this metadata can be applied for from the kernel's memory management module.
In practical applications, for example in memory swapping scenarios, the states of memory blocks change frequently, causing their metadata to change frequently as well; the reserved memory management module therefore has to interact frequently with the kernel's memory management module to apply for or release memory blocks, which incurs considerable processing overhead.
Disclosure of Invention
In order to overcome the problems in the related art, the present specification provides a metadata management method, apparatus, computer device, and storage medium.
According to a first aspect of embodiments of the present disclosure, a metadata management method is provided. The method is applied to a first memory management module of an operating system; the first memory management module and a second memory management module of the operating system manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module is stored in memory blocks applied for from the second memory management module.
The method comprises the following steps:
in response to release of metadata stored in a memory block, reserving the memory block if a preset reservation condition is determined to be met;
and in response to a storage request for target metadata, if a memory block fitting the target metadata is determined from among the reserved memory blocks, storing the target metadata using the determined memory block.
According to a second aspect of embodiments of the present disclosure, a metadata management apparatus is provided. The apparatus is applied to a first memory management module of an operating system; the first memory management module and a second memory management module of the operating system manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module is stored in memory blocks applied for from the second memory management module.
The apparatus comprises:
a reservation processing module, configured to: in response to release of metadata stored in a memory block, reserve the memory block if a preset reservation condition is determined to be met; and
a storage processing module, configured to: in response to a storage request for target metadata, if a memory block fitting the target metadata is determined from among the reserved memory blocks, store the target metadata using the determined memory block.
According to a third aspect of embodiments of the present specification, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method embodiments of the first aspect are implemented when the computer program is executed by the processor.
According to a fourth aspect of embodiments of the present specification, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method embodiments of the first aspect described above.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
in embodiments of the present disclosure, for a memory block previously applied for to store metadata, the first memory management module does not, upon release of that metadata, release the block back to the second memory management module; instead, provided the reservation condition is met, it reserves the block and manages it itself. When metadata needs to be stored again later, a suitable block can first be sought among the managed blocks, which reduces overhead and the performance loss caused by interaction between the first and second memory management modules.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the specification and together with the description, serve to explain the principles of the specification.
FIG. 1A is a schematic diagram of page table data according to an exemplary embodiment of the present description.
FIG. 1B is a schematic diagram of a memory according to an exemplary embodiment of the present disclosure.
Fig. 2A is a flowchart illustrating a metadata management method according to an exemplary embodiment of the present description.
FIG. 2B is a diagram of a NUMA architecture shown in accordance with an exemplary embodiment of the specification.
Fig. 3 is a block diagram of a computer device hosting a metadata management apparatus, according to an exemplary embodiment of the present specification.
Fig. 4 is a block diagram of a metadata management apparatus according to an exemplary embodiment of the present specification.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present description as detailed in the accompanying claims.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, without departing from the scope of the present description, first information may also be referred to as second information, and similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
When a process runs, the operating system allocates a virtual address space and a physical address space for the process and creates a page table for it, recording the mapping between the virtual address space and the physical address space. The operating system also maintains metadata for the memory it manages. During memory swapping, the process's page table must be updated because the storage location of the process's data changes; and since that change also alters the state of the memory block, the metadata must be updated as well.
Page tables are a concept from virtual memory technology. To let programs obtain more usable memory and to extend physical memory into a larger logical memory, operating systems use virtual memory: physical memory is abstracted into an address space, each process is allocated an independent set of virtual addresses, and the virtual addresses of different processes are mapped to physical addresses in memory. When a program accesses a virtual address, the operating system translates it into the corresponding physical address. Two kinds of addresses are involved here:
the memory addresses used by a program are called virtual addresses (Virtual Memory Address, VA);
the addresses within the actual hardware are called physical addresses (Physical Memory Address, PA).
Virtual addresses and physical addresses are mapped through the page table. The page table is stored in memory, and translation from virtual memory to physical memory is performed by the MMU (Memory Management Unit) of the CPU (Central Processing Unit). When a virtual address the process wants to access cannot be found in the page table, the system raises a page fault exception, enters kernel space to allocate physical memory and update the process's page table, and finally returns to user space to resume the process.
Referring to fig. 1A, a schematic diagram of page table data according to an exemplary embodiment of the present disclosure: the memory management unit of the operating system divides the memory according to a set management granularity, and each unit of that granularity may be called a page or a block. In this embodiment, taking pages 0 to N allocated to the process as an example, the page table contains N page table entries, each representing the correspondence between the virtual address and the physical address of one page. The page table as a whole thus records the relationship between the process's virtual address space and the physical address space, and any virtual address of the process can be mapped to the corresponding physical address through the page table.
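The translation flow just described (lookup by virtual page number, page fault on a missing entry) can be sketched as follows. This is an illustrative single-level page table, not code from the patent; the page size, table size, and function names are assumptions.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u  /* assumed page size for this sketch */
#define NUM_PAGES 16u    /* pages 0..N with N = 15 here */

/* One page-table entry: maps a virtual page number to a physical frame. */
typedef struct {
    uint32_t frame;      /* physical frame number */
    int present;         /* 0 => access would raise a page fault */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address to a physical address.
 * Returns 0 and sets *pa on success; returns -1 to model a page fault,
 * upon which the kernel would allocate memory and update the page table. */
int translate(uint32_t va, uint32_t *pa) {
    uint32_t vpn = va / PAGE_SIZE;  /* virtual page number */
    uint32_t off = va % PAGE_SIZE;  /* offset within the page */
    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return -1;
    *pa = page_table[vpn].frame * PAGE_SIZE + off;
    return 0;
}
```

In a real MMU this lookup is done in hardware and page tables are multi-level, but the entry-per-page mapping is the same idea as in fig. 1A.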
In some examples, the computer device may employ a conventional memory management architecture, i.e., the entire memory is managed by the operating system. In other scenarios, such as virtual machine scenarios, the computer device may employ a reserved-memory allocation architecture. Fig. 1B is a schematic diagram of a reserved memory scenario according to an exemplary embodiment of the present disclosure. The memory of the host may comprise multiple storage spaces, two of which are shown in fig. 1B with different fill patterns: an unreserved storage space A (diagonal fill) for the kernel, and a reserved storage space B (vertical-line and gray fill) for virtual machines. That is, the unreserved storage space A serves the kernel, and applications running on the operating system (e.g., application 1 to application 3 in the figure) can use it; the reserved storage space B is available to virtual machines (VMs), such as VM1 to VMn in the figure. The two storage spaces may be managed at different granularities, i.e., the memory may be partitioned differently. For ease of illustration, the two storage spaces are drawn as contiguous in fig. 1B; in practice they may be non-contiguous, and the memory may be divided into more storage spaces.
The reserved storage space occupies most of the memory and is not available to the host kernel; a module can be inserted into the operating system kernel specifically to manage it. To ease management of this memory and avoid metadata occupying a large amount of it, and considering that allocations to virtual machines are at minimum hundreds of MB (megabytes), the reserved storage space is divided for management into memory blocks of, for example, 2 MB each. In some scenarios larger granularities, such as 1 GB (gigabyte), are also commonly used and optional; this embodiment is not limited in this respect.
When applied to the reserved memory scenario, the operating system can adopt different modules to manage the reserved and unreserved storage spaces separately; for example, a reserved memory management module of the operating system manages the reserved storage space, while the kernel's memory management module manages the unreserved storage space. For the storage space managed by the reserved memory management module, that module records information about each memory block of the space using metadata, and the memory blocks used to store this metadata can be applied for from the kernel's memory management module.
In practical applications, for example in memory swapping scenarios, the states of memory blocks change frequently, causing their metadata to change frequently as well; the reserved memory management module therefore has to interact frequently with the kernel's memory management module to apply for or release memory blocks, which brings considerable processing overhead.
Based on this, the present embodiment provides a metadata management method, which is applied to a first memory management module of an operating system, where the first memory management module and a second memory management module of the operating system respectively manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module are stored in the memory blocks applied from the second memory management module.
As shown in fig. 2A, there is a flowchart of a metadata management method according to an exemplary embodiment of the present disclosure, the method including:
in step 202, in response to the metadata stored in the memory block being released, if it is determined that the predetermined reservation condition is satisfied, the memory block is reserved.
In step 204, in response to a storage request of the target metadata, if a memory block adapting to the target metadata is determined from the reserved memory blocks, the determined memory block is used to store the target metadata.
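As a hedged sketch of steps 202 and 204, the first memory management module can keep a bounded pool of released metadata blocks and satisfy new storage requests from that pool first. The fixed pool cap standing in for the reservation condition, and malloc/free standing in for interaction with the second memory management module, are illustrative assumptions, not the patent's mechanism.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_RESERVED 8  /* assumed cap acting as the reservation condition */

typedef struct {
    void  *buf;   /* block obtained earlier from the second module */
    size_t size;
} reserved_block_t;

static reserved_block_t pool[MAX_RESERVED];
static int pool_count = 0;

/* Step 202: on metadata release, reserve the block if the pool is not full;
 * otherwise release it back to the second memory management module (free). */
void on_metadata_released(void *buf, size_t size) {
    if (pool_count < MAX_RESERVED) {
        pool[pool_count].buf = buf;
        pool[pool_count].size = size;
        pool_count++;
    } else {
        free(buf);  /* stand-in for returning it to the kernel */
    }
}

/* Step 204: on a storage request, first look for a reserved block that fits;
 * fall back to applying to the second module (malloc) only if none fits. */
void *get_block_for_metadata(size_t size) {
    for (int i = 0; i < pool_count; i++) {
        if (pool[i].size >= size) {
            void *buf = pool[i].buf;
            pool[i] = pool[--pool_count];  /* remove hit from the pool */
            return buf;
        }
    }
    return malloc(size);
}
```

The design point is that the common release-then-store cycle touches only the pool and never crosses into the second module, which is exactly the interaction the method seeks to avoid.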
In some examples, the embodiment may be applied to the foregoing reserved memory scenario, where the first memory management module may be the foregoing reserved memory management module for managing reserved memory space, and the second memory management module may be the foregoing memory management module for managing a kernel that is not reserved memory space.
The solution of this embodiment may be applied to any computer device. The computer device may have a single CPU or multiple physical CPUs, and may adopt a non-uniform memory access (Non-Uniform Memory Access, NUMA) architecture as needed. A NUMA architecture includes at least two NUMA nodes; as shown in fig. 2B, taking two nodes as an example, the host may include NUMA node 1 and NUMA node 2. Under a NUMA architecture, the host's physical CPUs and memories belong to different NUMA nodes. Each NUMA node includes at least one physical CPU and at least one physical memory; fig. 2B shows one of each per node. Within a NUMA node, the physical CPU communicates with the physical memory over an Integrated Memory Controller Bus (IMC Bus), while NUMA nodes communicate with each other over the QuickPath Interconnect (QPI). Because QPI has higher latency than the IMC Bus, memory access from a physical CPU is either local or remote: a physical CPU accesses the physical memory of its own node faster, and the physical memory of other NUMA nodes more slowly.
In a NUMA architecture scenario, the memory of this embodiment may include any of the physical memories described above. Optionally, any physical memory in the NUMA architecture may also use a reserved memory architecture. Based on this, the storage space managed in this embodiment may also refer to a reserved storage space in any physical memory in the NUMA architecture.
It can be understood that in practical application, the computer device may also adopt other architectures, and according to actual needs, the memory referred to in this embodiment may have various implementation manners according to actual application scenarios, which are not listed here.
For ease of distinction, the memory blocks managed by the first memory management module are referred to as first-class memory blocks, and those managed by the second memory management module as second-class memory blocks. Metadata of the first-class memory blocks is stored in memory blocks applied for from the second memory management module; the memory blocks reserved by the first memory management module in this embodiment are exactly those it applied for from the second memory management module.
In some examples, the metadata of the first-class memory blocks is of multiple types. Depending on the scenario, the metadata may include, but is not limited to, metadata indicating whether a memory block is allocated, overall metadata of the memory, metadata indicating the hot/cold state of a memory block, metadata indicating the process to which a memory block belongs, metadata indicating the memory allocation status of that process, and so on. Taking a virtual machine scenario as an example: in some schemes the computer device is dedicated to virtual machines, and in the virtual machine memory allocation scheme a fixed virtual address space is configured for each virtual machine. To simplify querying the correspondence between virtual and physical addresses, metadata called mmap (memory map, the mapping between virtual addresses and physical addresses of the memory) representing the memory allocation status is created in addition to the page table data; through this metadata, bidirectional lookup between virtual and physical addresses is possible, improving query efficiency.
The first memory management module may apply to the second memory management module for second-class memory blocks in which to store the metadata of the first-class memory blocks. In the reserved memory scenario, for example, the metadata of each memory block of the reserved storage space is small, and the kernel's memory management module provides a small-block memory facility that can supply blocks of finer granularity, e.g., small blocks of 32, 64, or 128 bytes for storing small data; some kernel memory management modules can also provide memory blocks of specific sizes.
When the metadata stored in a second-class memory block is destroyed, the block no longer needs to store data; the first memory management module then interacts with the second memory management module to release the block back to it.
In practical applications, metadata of the first-class memory blocks may be created and destroyed frequently. For example, for the memory blocks allocated to each process, the memory management module records their hot/cold state information; blocks in a cold state are released through certain processing mechanisms, such as swapping the memory out to secondary storage, memory compression, or memory merging, to free space and improve resource utilization. Taking memory swapping as an example, a large number of metadata updates are required during the swap, and many metadata allocations and releases may occur; the performance loss caused by such frequent allocation and release is considerable.
To reduce interaction between the first and second memory management modules, in this embodiment, for a memory block previously applied for to store metadata, the first memory management module does not, upon release of that metadata, release the block back to the second memory management module; instead, provided the reservation condition is met, it reserves the block and manages it itself. When metadata needs to be stored again later, it can first check whether a suitable block exists among the managed blocks, which reduces overhead and the performance loss caused by interaction between the two modules.
Here, release of the metadata stored in a memory block means the metadata is deleted, leaving the memory block idle. In this embodiment, whether to reserve the memory block can be decided according to a preset reservation condition; for example, if the memory blocks already reserved by the first memory management module are sufficient, it may be decided not to reserve this block and to let the second memory management module reclaim it.
The preset reservation condition can be implemented in various ways: for example, it may be determined based on the number of memory blocks the first memory management module has reserved, optionally combined with the total size of those reserved blocks, and it can be configured flexibly as needed in practical applications.
Since the first memory management module needs to manage the reserved memory blocks, at least one memory block is reserved. For efficient management, in some examples the method may further include: establishing and storing management information for the reserved memory blocks, the management information being used to manage the reserved memory blocks.
In that case, determining a memory block fitting the target metadata from among the reserved memory blocks and using it to store the target metadata includes:
if a memory block fitting the target metadata is determined from among the reserved memory blocks according to the stored management information, storing the target metadata using the determined memory block.
In this embodiment, the management information can be configured flexibly as needed and may include the size information or address information of the memory block. The management information may be stored in a first-class memory block managed by the first memory management module, or in a second-class memory block that the first memory management module applies for from the second memory management module.
The first memory management module reserving second-class memory blocks amounts to caching them, and storing the management information of the reserved blocks can be understood as establishing a cache pool that maintains the management information entries. The management information of each reserved second-class memory block can be regarded as metadata of that block. In practice, the metadata of the first-class memory blocks managed by the first memory management module may be of multiple types, and each type may correspond to its own cache pool.
In practical applications, metadata of the same type for first-class memory blocks may or may not have a fixed size. For example, metadata indicating the allocation status of a memory block has a fixed size, whereas per-process metadata does not, e.g., metadata indicating the merge state of each memory block of a process. Consider the aforementioned mmap metadata representing a virtual machine's memory: its structure contains an entry field recording each memory block of the virtual machine, and when a memory block changes, the corresponding entry must be updated. Since the entries of the whole mmap structure take the form of an array, adding or removing an entry requires allocating a new mmap structure for the update.
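The point about array-form entries can be illustrated with a structure ending in a flexible array member: changing the entry count forces allocation of a new structure of a different total size, so the metadata is inherently variable-sized. The layout and names below are illustrative assumptions, not the patent's actual mmap definition.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for mmap-style metadata: a header followed by
 * a flexible array of per-block entries, so total size depends on count. */
typedef struct {
    size_t   count;
    uint64_t entry[];  /* one entry per memory block of the VM */
} vm_map_t;

static vm_map_t *vm_map_alloc(size_t count) {
    vm_map_t *m = malloc(sizeof(vm_map_t) + count * sizeof(uint64_t));
    if (m) m->count = count;
    return m;
}

/* Adding an entry means allocating a *new* structure at the larger size
 * and copying the old entries over; the old one cannot grow in place. */
static vm_map_t *vm_map_add_entry(vm_map_t *old, uint64_t e) {
    vm_map_t *m = vm_map_alloc(old->count + 1);
    if (!m) return NULL;
    memcpy(m->entry, old->entry, old->count * sizeof(uint64_t));
    m->entry[old->count] = e;
    free(old);
    return m;
}
```

Each such reallocation is exactly the apply/release traffic to the second memory management module that the reserved pool of this embodiment is meant to absorb.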
For metadata of non-fixed size, the size of the second-class memory block storing it is also non-fixed. To ease management of the second-class memory blocks, blocks of different sizes can therefore be managed in multiple groups, so that a suitable block can be found quickly and accurately at allocation time. As an example, the management information includes the size of the memory block;
establishing and storing management information for the reserved memory blocks then comprises:
establishing the management information according to the size of the reserved memory block and assigning it to the corresponding group for storage, where different groups correspond to different size ranges;
and determining a memory block fitting the target metadata from among the reserved memory blocks according to the stored management information and using it to store the target metadata includes:
if the management information corresponding to the group to which the size of the target metadata belongs is obtained according to that size, determining a memory block fitting the target metadata from the reserved memory blocks corresponding to the obtained management information, and storing the target metadata using the determined memory block.
In this embodiment, the number of groups and the size range corresponding to each group may both be flexibly configured as needed; this embodiment does not limit them. Either the correspondence between each group and its size range may be stored, or a processing function that maps a size to the corresponding group may be set.
The data structure of a group may be implemented in various ways, for example through multiple queues, where each group corresponds to one queue, each queue contains one or more members, and each member corresponds to the management information of one managed memory block.
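As an illustrative sketch of the grouping described above (the size ranges, names, and linear scan are assumptions of this sketch, not part of the embodiment), a size-to-group mapping in the spirit of the processing function might look like:

```python
# Hypothetical mapping of a requested metadata size to a size-class group.
# Each group covers a contiguous byte range; allocation first resolves the
# group, then searches only that group's queue.

GROUP_RANGES = [(1, 64), (65, 128), (129, 256), (257, 512)]  # bytes, inclusive

def group_index(size: int) -> int:
    """Return the index of the group whose size range contains `size`."""
    for i, (lo, hi) in enumerate(GROUP_RANGES):
        if lo <= size <= hi:
            return i
    raise ValueError(f"size {size} exceeds the largest managed range")
```

With ranges chosen as powers of two, the same mapping could also be computed from the bit length of the size instead of a linear scan.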
In some examples, each group corresponds to a first linked list, and each piece of management information of the same group is linked to a node of the first linked list corresponding to that group;
the distributing the management information to the corresponding group comprises:
linking the management information to a node of the corresponding first linked list;
and the acquiring the management information corresponding to the group to which the size of the target metadata belongs comprises:
acquiring each piece of management information linked to each node of the first linked list corresponding to the size of the target metadata.
In this embodiment, a linked list implementation may be employed for each group. A linked list provides a non-contiguous, non-sequential storage structure, which suits the storage of per-memory-block management information in this embodiment, and enables efficient lookup, insertion, and deletion of management information.
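To make the per-group linked list concrete, the following is a minimal, hypothetical sketch (class and field names are assumptions): each node carries the management information of one reserved block, insertion links at the head, and allocation unlinks the first block large enough for the request.

```python
class MgmtNode:
    """Node of a first linked list, carrying one piece of management info."""
    def __init__(self, block_addr, block_size):
        self.block_addr = block_addr   # address of the reserved memory block
        self.block_size = block_size   # management information: block size
        self.next = None

class GroupList:
    """First linked list for one group (one size range)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # size range covered by this group
        self.head = None

    def link(self, addr, size):
        node = MgmtNode(addr, size)
        node.next = self.head          # O(1) insertion at the head
        self.head = node

    def unlink_fit(self, need):
        """Remove and return the first node whose block can hold `need` bytes."""
        prev, cur = None, self.head
        while cur:
            if cur.block_size >= need:
                if prev:
                    prev.next = cur.next
                else:
                    self.head = cur.next
                return cur
            prev, cur = cur, cur.next
        return None                    # caller falls back to the second module
```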
In some examples, each first linked list is linked to a node of a total linked list, and the first linked lists are ordered in a set order;
the linking the management information to a node of the corresponding first linked list comprises:
determining, from the total linked list according to the set order and the size range corresponding to each first linked list, the first linked list to which the management information belongs, and linking the management information to that first linked list;
and the acquiring each piece of management information linked to each node of the first linked list corresponding to the size of the target metadata comprises:
determining, from the total linked list according to the set order and the size range corresponding to each first linked list, the first linked list corresponding to the size of the target metadata, and acquiring each piece of management information linked to each node of that first linked list.
In this embodiment, to access each first linked list efficiently, the first linked lists are linked through a total linked list: each node of the total linked list links to one first linked list, for example by storing the root node of that first linked list. The first linked lists may be ordered according to a set order, for example from high to low or from low to high by their corresponding size ranges; with low-to-high ordering, the front-to-back order of the nodes in the total linked list corresponds to the first linked lists sorted from low to high.
In this case, when the group to which a first size belongs needs to be determined from the target storage space, the first linked list to which the first size belongs can be determined from the total linked list according to the set order and the size range of each first linked list. Likewise, when the metadata of a target second-class memory block needs to be stored in the group to which a second size belongs, the first linked list to which the second size belongs can be determined in the same way, and the metadata of the target second-class memory block stored in it.
In some examples, the management information of a memory block is stored in the memory block itself, and the address of the management information is stored in a node of the first linked list. Because the management information is generally small, storing it inside the memory block it describes avoids maintaining a separate correspondence between the two: when the management information is read through a node of the first linked list, the memory block it belongs to can be accessed directly.
An embodiment will now be described, taking a reserved-memory virtual machine scenario as an example. When a virtual machine is created, metadata mmap is created for it to record the information of the storage space allocated to the virtual machine; the mmap structure records the physical memory range allocated at that time, the corresponding virtual address, the process, and so on, and all allocated memory is mounted in the linked list of the mmap.
To manage the aging of memory, the set of memory blocks in a cold state can be found through metadata representing the hot/cold state of memory. When memory is tight, this portion of memory may be migrated to secondary storage to free more memory for active virtual machines; the freed memory space can also be used for inventory scheduling, improving host resource utilization. However, each such move requires updating the metadata that records the virtual machine's memory condition, such as the aforementioned mmap structure: it includes an array-form field recording each memory block of the virtual machine, the array must be updated when the corresponding memory block changes, and because it is an array, adding or removing an entry requires allocating a new mmap structure for the update.
This swap-out is temporary. If the secondary storage is not a directly accessible device (e.g., a disk), subsequent virtual machine accesses will trigger page faults, requiring a quick reload operation; memory blocks swapped out to secondary storage may also be actively swapped back to main memory as scheduling requires. Each of these operations again requires updating the metadata mmap, so the array in the structure grows and shrinks frequently, which may lead to frequent allocation and release of the metadata and a large performance overhead.
In order to reduce performance overhead, the present embodiment provides a metadata management scheme, which may be applied to a first memory management module of an operating system (for example, a reserved memory management module in a reserved memory scenario, etc.), and the present embodiment may design a cache pool, where the cache pool manages a second type of memory block applied from a second memory management module of the operating system through management information.
By way of example, the cache pool may be represented by the following data structure:
(1) The main structure pool, which records the overall information of the cache pool.
For example, it may include the total water level of the cache (used to judge whether the preset reservation condition is met), the number of cache structures (i.e., the number of reserved memory blocks), the number of queues (i.e., the number of groups), the processing function list_seq that maps a memory block size to a queue, the array of queues, an optional private cache structure, related statistics, and so on.
As an example, the statistics may take various forms. They may, for instance, include the number of times no adapted memory block was found among the managed memory blocks and an application had to be made to the second memory management module. If this count is large, it indicates that the memory blocks reserved by the first memory management module do not meet the allocation demand well; the reservation condition can then be adjusted to raise the probability of obtaining an adapted block from the reserved blocks and to reduce interaction between the first and second memory management modules.
(2) The next layer of the main structure is the queues: each queue is linked under the main structure, and the queue structure pool_list records the information of each queue, including the number of caches in the queue (i.e., the number of memory blocks managed in the queue), the queue's water level, and the list of queue members.
(3) The next layer under a queue is its members pool_elem, each member corresponding to one managed second-class memory block. The head of each second-class memory block is reused as a linked-list pointer and linked into the cache queue.
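The three-layer pool/queue/member organization above can be sketched as follows (a hypothetical Python illustration; the names, the queue layout as plain lists, and the miss counter are assumptions of this sketch):

```python
class CachePool:
    """Sketch of the main structure pool and its queues: per-group member
    lists, an overall water level, and a miss counter whose growth suggests
    the reservation condition should be loosened."""
    def __init__(self, num_queues):
        self.queues = [[] for _ in range(num_queues)]  # (2) queue layer
        self.total_blocks = 0    # overall water level of the pool
        self.miss_count = 0      # times the second memory manager was needed

    def take(self, qidx):
        """Pop a cached member (3) from queue `qidx`, or record a miss."""
        if self.queues[qidx]:
            self.total_blocks -= 1
            return self.queues[qidx].pop()
        self.miss_count += 1     # statistic used to tune the reservation condition
        return None
```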
In practical applications, whether to apply the scheme of this embodiment may be decided according to the type of metadata of the first-class memory blocks to be managed. For example, the memory swap function may be enabled on demand during memory management; because the metadata of memory swapping changes frequently, its overhead can be reduced by the scheme of this embodiment.
When the scheme of this embodiment is to be applied, a pool is created; creating the pool means creating the metadata corresponding to the pool, whose structure may refer to the foregoing embodiment. Optionally, the number of queues is determined according to a configured queue count, and the metadata structures corresponding to the queues are created. The number of queues is flexibly configurable and may be one or several.
As in the previous embodiment, the second memory management module may provide both general-purpose memory blocks and memory blocks of a specific size; in this embodiment, the general mechanism is called the general-purpose cache and the specific-size mechanism the private cache. For a single queue, whether to establish a corresponding private cache can be decided according to whether the private-cache parameter is passed in; if so, subsequent allocations are handled through the private cache, otherwise through the general-purpose cache.
When the first memory management module needs to store metadata of a first-class memory block, it can first query, through the cache pool structure, whether a suitable second-class memory block exists. For example, with multiple queues, the information of second-class memory blocks of a given size is stored in a specific queue; the required size can be converted into a queue sequence number through the list_seq function, and the corresponding queue accessed by that number. If the queue has no members, it can be concluded that no block of a fitting size exists. Otherwise, the queue members are searched for a block adapted to the required size: since the members' sizes are not necessarily adapted to it, the size of each member must be checked. If an adapted member is found, an adapted second-class memory block has been found; if not, a suitable second-class memory block is applied for through interaction with the second memory management module.
For first-class metadata of fixed size, a single-queue approach may be adopted, in which all the second-class memory blocks corresponding to the members of the queue have the same size. At allocation time, no size-to-queue mapping or queue lookup is needed; it is only necessary to check whether the queue has a member. If it does, a member is taken out; otherwise a second-class memory block is applied for from the second memory management module. Optionally, whether a private cache is configured can be checked first: if so, the block is applied for from the private cache, otherwise from the general-purpose cache.
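The single-queue fast path for fixed-size metadata can be sketched as follows (a hypothetical illustration; kernel_alloc stands in for the second memory management module and is an assumption of this sketch):

```python
def kernel_alloc(size):
    """Stand-in for applying to the second memory management module."""
    return bytearray(size)

class SingleQueuePool:
    """Single queue of same-size reserved blocks: allocation never maps a
    size to a queue, it just pops a reserved block or falls back."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.queue = []                       # reserved (free) blocks

    def alloc(self):
        if self.queue:                        # reuse a reserved block if any
            return self.queue.pop()
        return kernel_alloc(self.block_size)  # otherwise ask the second module
```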
When a piece of first-class metadata needs to be released, the second-class memory block storing it can be released, i.e., restored to an idle state. At release time, it can be decided as required whether to release the block to the second memory management module or keep it under the management of the first memory management module. In this embodiment this is realized through the preset reservation condition: failure to meet the condition indicates that the first memory management module already manages many memory blocks, so the block need not be retained and can be reclaimed by the second memory management module.
For example, various parameters may be configured, such as the number of second-class memory blocks currently managed by the first memory management module, or the total size of all managed second-class memory blocks. With multiple groups, per-group parameters may also be used, such as the number or total size of the second-class memory blocks of each group. Various preset reservation conditions may be set based on these parameters, for example any of the following: the number of second-class memory blocks currently managed by the first memory management module is smaller than or equal to a first number threshold; the total size of all managed second-class memory blocks is smaller than or equal to a first size threshold; the number of second-class memory blocks of a group is smaller than or equal to a second number threshold; or the total size of the second-class memory blocks of a group is smaller than or equal to a second size threshold.
For a given second-class memory block, in the multi-queue scheme the release path also maps to a queue through the list_seq function. If the water level of the mapped queue, or of the whole cache pool, is above the set water level, the block is returned directly to the kernel's general-purpose cache; otherwise it is mounted in the corresponding cache queue.
Similarly, the release path of the single queue is much simpler: no size-to-queue mapping is needed, and it is directly judged whether the pool's water level satisfies the preset reservation condition. If so, the block is enqueued directly; if not, it is released to the private cache if one is configured, and otherwise to the general-purpose cache.
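The release-time reservation decision can be sketched as follows (a hypothetical illustration; the count-based condition and threshold are assumptions, standing in for any of the preset reservation conditions described above):

```python
def free_block(pool_queue, block, count_threshold=8):
    """On release of a second-class block: keep it in the pool while the
    preset reservation condition (queue length below threshold) holds,
    otherwise hand it back to the second memory management module (here,
    simply by dropping the reference)."""
    if len(pool_queue) < count_threshold:
        pool_queue.append(block)       # reserve for future metadata storage
        return "reserved"
    return "released-to-kernel"        # condition failed: do not retain
```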
Finally, when the cache pool is destroyed, all cache elements in the queues are cleared first, each being released to the private cache if one is configured and otherwise to the general-purpose cache; if a private cache exists, it is destroyed after this release finishes, and finally the structure of the whole cache pool is destroyed.
As in the foregoing embodiments, the metadata of the memory blocks managed by the first memory management module comes in multiple types, and the scheme of this embodiment applies to the management of all of them. In the memory-merging scenario, memory blocks in a cold state are processed by merging identical memory blocks, reducing memory usage; however, additional metadata is required to record the information of the merged memory blocks. In the reserved-memory scenario, for example, a reserved memory block would otherwise need only one byte to represent its usage, but after merging, an additional structure is needed to record the information of each merged memory block. To avoid excessive metadata overhead affecting system stability, this embodiment optimizes the metadata of merged memory blocks so as to reduce the overhead and its impact on system stability.
The memory block merging process is as follows: after the memory merging module of the operating system starts, a cold memory block to be released is taken out; the first such block serves as a candidate memory block. Each subsequently retrieved block to be released is compared with the data stored in each candidate block to determine whether a mergeable block exists, until a mergeable candidate is found or all candidates have been traversed. If a mergeable candidate is found, the virtual address of the block to be released is remapped to the physical address of the found candidate, so that the physical memory of the block to be released can be freed.
All merged memory blocks that can be merged with one another store the same data. Among blocks to be freed, in some embodiments zero blocks are managed separately when memory is merged: a zero block is a memory block in which every bit is zero, and each zero block also counts as a merged memory block.
To improve management efficiency, note that a memory block taken out of the cold page set may or may not immediately find a block it can merge with; the merged and not-yet-merged candidate blocks can therefore be managed separately. As an example, two tree structures may be maintained: a merge tree and an unmerged tree. The blocks managed in the merge tree are already merged (at least two identical blocks), and their page table attribute is changed to read-only; the unmerged tree holds blocks taken from the cold page set that have not yet been merged, whose page table attribute remains read-write. That is, the set of merged candidate blocks is maintained by the merge tree and the set of unmerged candidates by the unmerged tree. Both trees may be designed as desired; for example, tree nodes are used to associate memory blocks, and each node in the merge tree associates the merged memory blocks that store the same data.
Zero blocks need no tree structure; a zero-block node is used to maintain the metadata of each zero block.
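The three candidate sets above (zero blocks, merged blocks, unmerged candidates) can be sketched as follows. This is a hypothetical illustration: dicts keyed by a content digest stand in for the red-black trees, and digest equality stands in for the full content comparison the embodiment performs.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MergeState:
    """Sketch of zero-block list, merge tree, and unmerged tree."""
    def __init__(self):
        self.zero_node = []     # zero blocks, managed separately
        self.merged = {}        # digest -> merged blocks (read-only pages)
        self.unmerged = {}      # digest -> lone candidate (still read-write)

    def try_merge(self, block: bytes):
        if not any(block):                      # every bit zero: zero block
            self.zero_node.append(block)
            return "zero"
        d = digest(block)
        if d in self.merged:                    # matches an existing merge node
            self.merged[d].append(block)
            return "merged"
        if d in self.unmerged:                  # matches an unmerged candidate
            self.merged[d] = [self.unmerged.pop(d), block]
            return "merged"
        self.unmerged[d] = block                # no match yet: becomes candidate
        return "candidate"
```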
In some examples, the metadata of the memory block managed by the first memory management module includes: merging metadata of the memory blocks;
the metadata of the merged memory block includes: non-shared metadata for recording non-shared fields of the merged memory block and shared metadata for recording shared fields of the merged memory block; wherein each of the merged memory blocks storing the same content shares the same common metadata.
A merged memory block in this embodiment refers to a memory block that stores the same content as another and participates in merging; it includes the aforementioned zero blocks as well as other memory blocks.
For the merged memory blocks storing the same content, metadata must be established for each block to record merging information. The metadata comprises a plurality of fields, each storing one piece of merging information, and for blocks storing the same content the information in some of the fields is identical: for example, the merge flag, the physical address of the basic memory block, or the encoded information of the block's stored content.
In this embodiment, these same pieces of information are referred to as common information, and these pieces of common information are separated to individually build a structure (item_ext in this embodiment). Thus, each of the merged memory blocks storing the same content shares the same common metadata. For example, there are 3 merged memory blocks storing the same content, each merged memory block corresponds to a respective non-common metadata, and the 3 merged memory blocks share the same common metadata. In this embodiment, the item_ext may be stored in the foregoing cache pool.
In some examples, the shared metadata is used to store at least one of the following shared information: merging marks, physical addresses of basic memory blocks or coding information of memory block storage contents.
To facilitate finding common metadata, in some examples, the non-common metadata includes a linking field that stores an address of the common metadata to link the non-common metadata with the common metadata. Therefore, by accessing the link field in the non-shared metadata of the merged memory block, the shared metadata can be quickly accessed through the stored address, and the link between the non-shared metadata and the shared metadata is realized.
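The split into shared and non-shared metadata can be sketched as follows (a hypothetical illustration; the class and field names mirror the description but are assumptions of this sketch):

```python
class ItemExt:
    """Shared (common) metadata: one instance per distinct stored content."""
    def __init__(self, merge_flag, base_paddr, checksum):
        self.merge_flag = merge_flag   # merge flag
        self.base_paddr = base_paddr   # physical address of the basic block
        self.checksum = checksum       # encoded info of the stored content

class Item:
    """Non-shared metadata, one per merged block; `ext` is the link field
    storing a reference to the shared metadata."""
    def __init__(self, paddr, ext):
        self.paddr = paddr             # per-block field
        self.ext = ext                 # link to the shared ItemExt

# Three blocks with identical content share one ItemExt instance.
shared = ItemExt(merge_flag=True, base_paddr=0x1000, checksum="abc123")
items = [Item(paddr, shared) for paddr in (0x2000, 0x3000, 0x4000)]
```

The saving is that the shared fields are stored once per distinct content rather than once per block.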
In some examples, the method further comprises:
when a node of a merging tree mounts a first merging memory block, information in common metadata of the first merging memory block is recorded in metadata of the node of the merging tree, and then the common metadata of the first merging memory block is released.
If the merged memory block mounted by the node of the merged tree is restored, creating the shared metadata of the restored merged memory block according to the information recorded by the metadata of the node of the merged tree.
Metadata for the combined memory block is described in further detail below. In order to record information related to memory merging, in this embodiment, the data structure of metadata of the merged memory block in the first memory management module may be as follows:
1. The main structure zsm records the various data structures related to memory merging, including:
a) The merged-memory information node of each memory node (i.e., the NUMA node under a NUMA architecture); all merged and unmerged memory blocks of each node are kept in that node's own node structure;
b) The cache pools corresponding to the various lower-layer metadata (structures) mentioned later, established to avoid the overhead of frequently allocating and releasing the metadata; for the establishment of a cache pool, refer to the foregoing embodiment. The metadata of this embodiment includes: the metadata slot of each process; the snode representing a node of the merge tree; and the metadata item of each memory block mounted on a node of the merge tree;
c) The link entries connecting the metadata slots of all processes, comprising a linked-list entry and a hash-table entry for fast lookup;
d) A record of the position of the last scan of unmerged pages, including the process scanned last time, the corresponding slot, and the item position within the slot, so that the next scan can start from this record;
e) The swap device information corresponding to the merged memory;
f) Various statistics, lock protection fields, and so on.
2. Each zsm node structure node contains the aggregate entries of the node's various merged and unmerged memory blocks, mainly including:
a) The node entries of all merged zero pages of the node, the corresponding lock protection, and statistics fields;
b) The node tree entries of all identical pages of the node, the corresponding lock protection, and statistics fields;
c) The entries of the node's tree of all unmerged pages, the corresponding lock protection, and statistics fields.
3. The metadata snode, which manages the merged zero pages and identical pages respectively, mainly comprises:
a) A red-black tree node field, used to link the snode into the red-black tree of its memory node;
b) A linked-list head field head, under which each merged memory block structure item is mounted;
c) For a node merging identical pages (head not empty), the physical address paddr of the merge base block is also recorded;
d) The kernel-mode virtual address kva (kernel virtual address) of the basic memory block, recorded to facilitate content comparison;
e) The node nid to which the snode belongs, the merge attribute (zero page, identical page, etc.), the sequence number of the memory block ms corresponding to paddr, statistics on the number of merged pages, and so on.
4. The metadata item of a merged memory block, the most basic unit of operation of memory merging; the structure contains the following important fields:
a) The physical address paddr of the memory block;
b) The kernel-mode virtual address kva: a memory block is generally allocated to a user-mode process, and if kernel mode is to access its content, a kernel-mode virtual address must first be established;
c) The corresponding page table entry pmd: the merging process involves updating the block's original page table structure, so this entry must be recorded;
d) Link fields: the management structure may be linked under a merge node or organized in the unmerged tree, so the space can be multiplexed; a linked-list node link with a corresponding snode pointer, and a tree node for red-black tree linkage, are established respectively;
e) The merge attribute of the item (zero-page node, identical-page node tree, or unmerged tree);
f) For the unmerged case, the hash check value of the current content of the data in the memory block;
g) Finally, some fields associated with slots.
5. The structure of the metadata slot, which associates allocated physical memory blocks with their process, mainly includes the following fields:
a) The corresponding process attributes; to facilitate correspondence with each memory allocation mmap, the associated vma structure is stored. The vma corresponds one-to-one with each piece of reserved-memory allocation information mmap, and the mmap also contains information such as the process pid;
b) Hash linkage: since many processes may exist in the system, the slots are linked to the main structure zsm through a hash table so that the corresponding slot can be found quickly;
c) In addition, all slots are chained together in the zsm through a linked list to facilitate full traversal;
d) All items of the process are then linked in the slot;
e) Auxiliary fields such as a protection lock.
6. The unmerged memory blocks are organized into a red-black tree according to the kva in each item, and the red-black tree is rebuilt on every scan. Starting from the position of the previous scan, each item in each subsequent slot is traversed and the checksum of the unmerged item is computed; if the value is consistent with the original value recorded in the item, the item enters the unmerged tree for further comparison to find identical memory blocks. Otherwise the data has changed since the last scan, and the item does not enter the merge candidates for now.
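The stability check in step 6 can be sketched as follows (a hypothetical illustration: items are plain dicts, CRC-32 stands in for the hash check value, and only the checksum-comparison logic of the scan is shown):

```python
import zlib

def rescan(items):
    """One scan pass: an item enters the unmerged candidate set only if its
    content checksum is unchanged since the previous scan, i.e. the page
    has been stable; otherwise the new checksum is recorded and the item
    is skipped this round."""
    stable = []
    for item in items:
        now = zlib.crc32(item["data"])
        if item.get("checksum") == now:
            stable.append(item)          # unchanged: merge candidate
        else:
            item["checksum"] = now       # changed: remember for next scan
    return stable
```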
Metadata overhead matters here: the reserved-memory scenario was designed from the start to save metadata overhead, so in the basic reserved-memory scenario each memory block needs only one byte to represent its state information. Merged memory blocks, by contrast, currently require a metadata item structure for every block participating in merging, representing its various merge information, which greatly increases the overhead compared with the basic version.
To minimize this metadata overhead, this embodiment may optimize the structure of the memory block metadata and thereby greatly reduce the size of the metadata item structure. The metadata item includes a plurality of fields recording the physical address paddr, the encoded information checksum, the attribute flag, and other information of the memory block. This information is only meaningful while the block is unmerged: once the block is merged, its metadata is mounted in a node of the merge tree and is consistent with the information recorded by that snode, so this information in the metadata of a merged memory block can be optimized away.
Among the plurality of fields in the metadata of a merged memory block, those that can be shared are separated out into a dedicated structure item_ext; a corresponding cache pool is established in the main structure zsm, and an ext field is added to the item to link the item_ext structure.
When a memory block is merged, if its item is the first to enter the snode, the paddr, flag, checksum, and other fields of its item_ext are recorded in the corresponding fields of the snode, and the item_ext structure is released into the cache pool.
When a memory block is restored, its item needs to be restored from the snode. If the block is to participate in memory-merge tracking later, an item_ext structure must be reallocated, filled with the restored memory block address paddr, and its checksum recalculated.
In addition, during operation, if information such as the physical address paddr of an item is to be obtained, the ext field is first checked to see whether the item has its own item_ext structure; if so, the information is read from the item_ext, otherwise from the corresponding snode.
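The hand-off of shared fields between item_ext and snode can be sketched as follows (a hypothetical illustration; item_ext is a plain dict and the cache pool a list, which are assumptions of this sketch):

```python
class Snode:
    """Merge-tree node; after the first item is mounted it carries the
    fields that were in that item's item_ext."""
    def __init__(self):
        self.paddr = self.flag = self.checksum = None

def mount_first_item(snode, item_ext, ext_pool):
    """First merge into a snode: copy the shared fields into the snode,
    then release the item_ext structure back into the cache pool."""
    snode.paddr = item_ext["paddr"]
    snode.flag = item_ext["flag"]
    snode.checksum = item_ext["checksum"]
    ext_pool.append(item_ext)            # item_ext returns to the cache pool

def restore_item(snode, ext_pool):
    """Un-merge: reallocate an item_ext from the pool and refill it from
    the fields recorded in the snode."""
    ext = ext_pool.pop() if ext_pool else {}
    ext.update(paddr=snode.paddr, flag=snode.flag, checksum=snode.checksum)
    return ext
```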
Corresponding to the foregoing embodiments of the metadata management method, the present specification also provides embodiments of the metadata management apparatus and a computer device to which the metadata management apparatus is applied.
Embodiments of the metadata management apparatus of the present specification may be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, the apparatus in the logical sense is formed by the processor of the computer device in which it is located reading the corresponding computer program instructions from nonvolatile storage into memory. In terms of hardware, fig. 3 shows a hardware structure diagram of the computer device in which the metadata management apparatus of the present disclosure is located. In addition to the processor 310, memory 330, network interface 320, and nonvolatile storage 340 shown in fig. 3, the computer device in which the apparatus 331 is located in the embodiment may generally include other hardware according to the actual functions of the computer device, which is not described here again.
As shown in fig. 4, fig. 4 is a block diagram of a metadata management apparatus according to an exemplary embodiment of the present disclosure, where the apparatus is applied to a first memory management module of an operating system, the first memory management module and a second memory management module of the operating system respectively manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module is stored in the memory blocks applied from the second memory management module;
the device comprises:
a reservation processing module 41 for: responding to the release of metadata stored in a memory block, and if the preset reservation condition is determined to be met, reserving the memory block;
a storage processing module 42 for: and responding to a storage request of the target metadata, and if a memory block adapting to the target metadata is determined from the reserved memory blocks, storing the target metadata by using the determined memory block.
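The reserve-on-release and reuse-on-request flow of the two modules above can be sketched as follows. The fixed-capacity pool, the names reserve_block and take_reserved_block, and the use of a simple capacity cap as the preset reservation condition are illustrative assumptions, not the claimed policy:

```c
#include <stddef.h>
#include <stdbool.h>

/* Illustrative reservation pool; MAX_RESERVED and the capacity-cap
 * condition are assumptions standing in for the preset reservation
 * condition of the description. */
#define MAX_RESERVED 8

static void  *reserved_pool[MAX_RESERVED];
static size_t reserved_size[MAX_RESERVED];
static size_t reserved_count = 0;

/* Called when the metadata stored in `block` is released: keep the block
 * instead of returning it to the second memory management module. */
bool reserve_block(void *block, size_t size)
{
    if (reserved_count >= MAX_RESERVED)   /* reservation condition not met */
        return false;                     /* caller frees the block normally */
    reserved_pool[reserved_count] = block;
    reserved_size[reserved_count] = size;
    reserved_count++;
    return true;
}

/* Called on a storage request for target metadata of `need` bytes: return
 * a reserved block that fits, or NULL so the caller allocates a fresh one. */
void *take_reserved_block(size_t need)
{
    for (size_t i = 0; i < reserved_count; i++) {
        if (reserved_size[i] >= need) {
            void *block = reserved_pool[i];
            reserved_count--;                       /* swap-remove entry i */
            reserved_pool[i] = reserved_pool[reserved_count];
            reserved_size[i] = reserved_size[reserved_count];
            return block;
        }
    }
    return NULL;
}
```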
In some examples, the reservation processing module is further to:
establishing management information for the reserved memory blocks and storing the management information, wherein the management information is used for managing the reserved memory blocks;
and the storage processing module executes the process of determining that the memory block adapting to the target metadata exists in the reserved memory blocks, and stores the target metadata by using the determined memory block, and the process comprises the following steps:
and if the memory blocks adapting to the target metadata are determined from the reserved memory blocks according to the stored management information, storing the target metadata by using the determined memory blocks.
In some examples, the management information includes a size of a memory block;
the reservation processing module executes the establishment management information of the reserved memory block and stores the management information, and the method comprises the following steps:
establishing management information according to the size of the reserved memory block, and assigning the management information to a corresponding group for storage; wherein different groups correspond to different size ranges;
and the storage processing module executes the steps of determining a memory block adapting to the target metadata from the reserved memory blocks according to the stored management information, and storing the target metadata by using the determined memory block, wherein the steps comprise:
and if the management information corresponding to the group to which the size of the target metadata belongs is acquired according to the size of the target metadata, determining a memory block adapting to the target metadata from reserved memory blocks corresponding to the acquired management information, and storing the target metadata by using the determined memory block.
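The size-range grouping can be illustrated with a small sketch. The concrete ranges (powers of two) and the name group_index are assumptions, since the description only requires that different groups correspond to different size ranges:

```c
#include <stddef.h>

/* Illustrative size-range groups; the description fixes only that each
 * group covers a distinct size range. */
#define NUM_GROUPS 4
/* group 0: size < 128, group 1: [128, 256), group 2: [256, 512), group 3: >= 512 */

size_t group_index(size_t size)
{
    size_t bound = 128;
    for (size_t g = 0; g < NUM_GROUPS - 1; g++, bound <<= 1)
        if (size < bound)
            return g;
    return NUM_GROUPS - 1;
}
```

Both sides use the same mapping: a reserved block's management information is filed under `group_index(block_size)`, and a storage request for target metadata of a given size looks up `group_index(size)` first.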
In some examples, each group corresponds to a first linked list; each piece of management information of the same group is linked to a respective node of the first linked list corresponding to the group;
the reservation processing module performs the assigning of the management information to a corresponding group, including:
linking the management information to the nodes of the corresponding first linked list;
the storage processing module executing the obtaining of the respective management information corresponding to the group to which the size of the target metadata belongs, includes:
and acquiring each piece of management information linked with each node in a first linked list corresponding to the size of the target metadata.
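A sketch of the per-group first linked list, where each piece of management information occupies one node of its group's list; all structure and function names here are illustrative:

```c
#include <stddef.h>

/* Illustrative management-information node; one per reserved block. */
typedef struct mgmt_info {
    size_t            block_size; /* size of the reserved memory block */
    void             *block_addr; /* address of the reserved memory block */
    struct mgmt_info *next;       /* next node in the group's linked list */
} mgmt_info_t;

/* One first linked list per size-range group. */
typedef struct {
    mgmt_info_t *head;
} group_list_t;

/* Link management information onto the first linked list of its group. */
void group_link(group_list_t *g, mgmt_info_t *mi)
{
    mi->next = g->head;
    g->head = mi;
}

/* Walk the group's list, visiting each piece of management information
 * (here just counting the entries). */
size_t group_count(const group_list_t *g)
{
    size_t n = 0;
    for (const mgmt_info_t *p = g->head; p; p = p->next)
        n++;
    return n;
}
```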
In some examples, each first linked list is linked to a respective node of a total linked list, and the first linked lists are ordered in a set order;
the reservation processing module executes the node linking the management information to the corresponding first linked list, including:
determining, from the total linked list according to the set order and the size range corresponding to each first linked list, the first linked list to which the management information belongs, and linking the management information to that first linked list;
The storage processing module executes the obtaining of each management information linked with each node in the first linked list corresponding to the size of the target metadata, and the method comprises the following steps:
and determining, from the total linked list according to the set order and the size range corresponding to each first linked list, the first linked list corresponding to the size of the target metadata, and acquiring each piece of management information linked to the nodes in that first linked list.
In some examples, the management information of the memory block is stored in the memory block, and an address of the management information of the memory block is stored in a node of the first linked list.
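This intrusive layout, where the management information lives at the head of the reserved block itself and the list node stores only its address, might look as follows; the header placement and all names are assumptions:

```c
#include <stddef.h>

/* Management information stored inside the reserved block itself. */
typedef struct blk_mgmt {
    size_t size;              /* usable size remaining in the block */
} blk_mgmt_t;

/* Node of the first linked list: holds only the ADDRESS of the
 * management information, which sits at the start of the block. */
typedef struct list_node {
    blk_mgmt_t       *mgmt;
    struct list_node *next;
} list_node_t;

/* Place management information at the head of the reserved block. */
blk_mgmt_t *install_mgmt(void *block, size_t block_size)
{
    blk_mgmt_t *m = (blk_mgmt_t *)block;
    m->size = block_size - sizeof(blk_mgmt_t);
    return m;
}

/* The payload area that remains usable after the header. */
void *block_payload(blk_mgmt_t *m)
{
    return (char *)m + sizeof(blk_mgmt_t);
}
```

This keeps the linked-list nodes small: no extra allocation is needed for the management information, since it reuses memory the reserved block already owns.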
In some examples, the metadata of the memory block managed by the first memory management module includes: merging metadata of the memory blocks;
the metadata of the merged memory block includes: non-shared metadata for recording non-shared fields of the merged memory block and shared metadata for recording shared fields of the merged memory block; wherein merged memory blocks storing the same data share the same shared metadata.
In some examples, the non-shared metadata includes a link field that stores an address of the shared metadata to link the non-shared metadata with the shared metadata.
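The shared/non-shared split with a link field can be sketched as below. Beyond the link field and the shared information listed later (merge mark, base physical address, content coding), the concrete fields and the reference count are illustrative assumptions:

```c
#include <stdint.h>

/* One copy of the shared metadata per distinct data content; all merged
 * blocks holding that content point at it. */
typedef struct shared_meta {
    int      merge_flag;       /* merge mark */
    uint64_t base_paddr;       /* physical address of the basic memory block */
    uint32_t content_code;     /* coding information of the stored content */
    unsigned refcount;         /* assumed: how many blocks share this copy */
} shared_meta_t;

/* Per-block (non-shared) metadata with the link field. */
typedef struct nonshared_meta {
    uint64_t       vaddr;      /* assumed per-block field, e.g. a mapping */
    shared_meta_t *link;       /* link field: address of the shared metadata */
} nonshared_meta_t;

/* Link a merged block's non-shared metadata to the shared metadata. */
void link_shared(nonshared_meta_t *ns, shared_meta_t *s)
{
    ns->link = s;
    s->refcount++;
}
```

Because the shared fields are stored once rather than once per merged block, the per-block metadata footprint shrinks as more blocks with identical contents are merged.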
In some examples, the apparatus is further configured to:
when a node of a merge tree mounts a first merged memory block, record the information in the shared metadata of the first merged memory block in the metadata of the node of the merge tree, and then release the shared metadata of the first merged memory block;
if the merged memory block mounted on the node of the merge tree is restored, create the shared metadata of the restored merged memory block according to the information recorded in the metadata of the node of the merge tree.
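A sketch of this mount-and-restore flow; the types, fields, and the use of malloc/free stand in for whatever allocation the actual implementation uses:

```c
#include <stdint.h>
#include <stdlib.h>

/* Shared metadata of a merged memory block (fields are assumptions). */
typedef struct {
    int      merge_flag;
    uint64_t base_paddr;
} shared_meta_t;

/* Metadata of a merge-tree node: holds a copy of the shared information
 * while the merged block is mounted, so the shared metadata can be freed. */
typedef struct {
    int      merge_flag;
    uint64_t base_paddr;
} tree_node_meta_t;

/* Mounting: record the shared info in the tree node's metadata, then
 * release the shared metadata itself to save memory. */
void mount_merged_block(tree_node_meta_t *node, shared_meta_t **sm)
{
    node->merge_flag = (*sm)->merge_flag;
    node->base_paddr = (*sm)->base_paddr;
    free(*sm);
    *sm = NULL;
}

/* Restoring: recreate the shared metadata from the information recorded
 * in the tree node's metadata. */
shared_meta_t *restore_merged_block(const tree_node_meta_t *node)
{
    shared_meta_t *sm = malloc(sizeof *sm);
    if (!sm)
        return NULL;
    sm->merge_flag = node->merge_flag;
    sm->base_paddr = node->base_paddr;
    return sm;
}
```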
In some examples, the shared metadata is used to store at least one of the following shared information: a merge mark, a physical address of a basic memory block, or coding information of the content stored in the memory block.
The implementation process of the functions and roles of each module in the metadata management apparatus is described in detail in the implementation process of the corresponding steps in the foregoing method, and will not be repeated here.
Accordingly, the present description also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the foregoing metadata management method embodiments.
Accordingly, the embodiments of the present specification further provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the steps of the metadata management method embodiments are implemented when the processor executes the program.
Accordingly, the present description also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the metadata management method embodiments.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solutions of the present specification. Those of ordinary skill in the art can understand and implement them without undue burden.
The above-described embodiments may be applied to one or more computer devices, i.e., devices capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, whose hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
The computer device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, tablet computer, smart phone, personal digital assistant (Personal Digital Assistant, PDA), game console, interactive internet protocol television (Internet Protocol Television, IPTV), smart wearable device, etc.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing.
The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The above division of the methods into steps is for clarity of description only; when implemented, the steps may be combined into one step, or a single step may be split into multiple steps, and all such variants are within the protection scope of this patent as long as they embody the same logical relationship. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or flow without altering its core design is also within the scope of this application.
Reference throughout this specification to "a specific example", "some examples", or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.

Claims (13)

1. A metadata management method, applied to a first memory management module of an operating system, wherein the first memory management module and a second memory management module of the operating system respectively manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module is stored in memory blocks allocated from the second memory management module;
the method comprises the following steps:
responding to the release of metadata stored in a memory block, and if the preset reservation condition is determined to be met, reserving the memory block;
and responding to a storage request of the target metadata, and if a memory block adapting to the target metadata is determined from the reserved memory blocks, storing the target metadata by using the determined memory block.
2. The method of claim 1, the method further comprising:
establishing management information for the reserved memory blocks and storing the management information, wherein the management information is used for managing the reserved memory blocks;
and if a memory block adapting to the target metadata is determined from the reserved memory blocks, storing the target metadata by using the determined memory block, wherein the method comprises the following steps:
and if the memory blocks adapting to the target metadata are determined from the reserved memory blocks according to the stored management information, storing the target metadata by using the determined memory blocks.
3. The method of claim 2, wherein the management information includes a memory block size;
the establishing and storing management information for the reserved memory blocks comprises the following steps:
establishing management information according to the size of the reserved memory block, and assigning the management information to a corresponding group for storage; wherein different groups correspond to different size ranges;
and if a memory block adapting to the target metadata is determined from the reserved memory blocks according to the stored management information, storing the target metadata by using the determined memory block, including:
and if the management information corresponding to the group to which the size of the target metadata belongs is acquired according to the size of the target metadata, determining a memory block adapting to the target metadata from reserved memory blocks corresponding to the acquired management information, and storing the target metadata by using the determined memory block.
4. The method according to claim 3, wherein each group corresponds to a first linked list; each piece of management information of the same group is linked to a respective node of the first linked list corresponding to the group;
the assigning the management information to the corresponding group includes:
linking the management information to the nodes of the corresponding first linked list;
the obtaining each piece of management information corresponding to the group to which the size of the target metadata belongs includes:
and acquiring each piece of management information linked with each node in a first linked list corresponding to the size of the target metadata.
5. The method of claim 4, wherein each first linked list is linked to a respective node of a total linked list, and the first linked lists are ordered in a set order;
the linking the management information to the node of the corresponding first linked list includes:
determining, from the total linked list according to the set order and the size range corresponding to each first linked list, the first linked list to which the management information belongs, and linking the management information to that first linked list;
the obtaining each piece of management information linked with each node in the first linked list corresponding to the size of the target metadata includes:
and determining, from the total linked list according to the set order and the size range corresponding to each first linked list, the first linked list corresponding to the size of the target metadata, and acquiring each piece of management information linked to the nodes in that first linked list.
6. The method of claim 4, wherein the management information of the memory block is stored in the memory block, and an address of the management information of the memory block is stored in a node of the first linked list.
7. The method of claim 1, wherein the metadata of the memory block managed by the first memory management module comprises: merging metadata of the memory blocks;
the metadata of the merged memory block includes: non-shared metadata for recording non-shared fields of the merged memory block and shared metadata for recording shared fields of the merged memory block; wherein merged memory blocks storing the same data share the same shared metadata.
8. The method of claim 7, the non-shared metadata comprising a link field storing an address of the shared metadata to link the non-shared metadata with the shared metadata.
9. The method of claim 7, the method further comprising:
when a node of a merge tree mounts a first merged memory block, recording the information in the shared metadata of the first merged memory block in the metadata of the node of the merge tree, and then releasing the shared metadata of the first merged memory block;
if the merged memory block mounted on the node of the merge tree is restored, creating the shared metadata of the restored merged memory block according to the information recorded in the metadata of the node of the merge tree.
10. The method of claim 7, wherein the shared metadata is used to store at least one of the following shared information: a merge mark, a physical address of a basic memory block, or coding information of the content stored in the memory block.
11. A metadata management apparatus, applied to a first memory management module of an operating system, wherein the first memory management module and a second memory management module of the operating system respectively manage different memory blocks of a memory, and metadata of the memory blocks managed by the first memory management module is stored in memory blocks allocated from the second memory management module;
The device comprises:
a reservation processing module for: responding to the release of metadata stored in a memory block, and if the preset reservation condition is determined to be met, reserving the memory block;
the storage processing module is used for: and responding to a storage request of the target metadata, and if a memory block adapting to the target metadata is determined from the reserved memory blocks, storing the target metadata by using the determined memory block.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any of claims 1 to 10 when the computer program is executed.
13. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of claims 1 to 10.
CN202211679912.7A 2022-12-26 2022-12-26 Metadata management method, device, computer equipment and storage medium Pending CN116225693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211679912.7A CN116225693A (en) 2022-12-26 2022-12-26 Metadata management method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211679912.7A CN116225693A (en) 2022-12-26 2022-12-26 Metadata management method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116225693A true CN116225693A (en) 2023-06-06

Family

ID=86579528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211679912.7A Pending CN116225693A (en) 2022-12-26 2022-12-26 Metadata management method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116225693A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116821058A (en) * 2023-08-28 2023-09-29 腾讯科技(深圳)有限公司 Metadata access method, device, equipment and storage medium
CN116821058B (en) * 2023-08-28 2023-11-14 腾讯科技(深圳)有限公司 Metadata access method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US10552337B2 (en) Memory management and device
CN105740164B (en) Multi-core processor supporting cache consistency, reading and writing method, device and equipment
US10289555B1 (en) Memory read-ahead using learned memory access patterns
US8095736B2 (en) Methods and systems for dynamic cache partitioning for distributed applications operating on multiprocessor architectures
CN114860163B (en) Storage system, memory management method and management node
KR20120068454A (en) Apparatus for processing remote page fault and method thereof
CN110555001B (en) Data processing method, device, terminal and medium
US9208088B2 (en) Shared virtual memory management apparatus for providing cache-coherence
CN113674133A (en) GPU cluster shared video memory system, method, device and equipment
CN116225693A (en) Metadata management method, device, computer equipment and storage medium
US20140289739A1 (en) Allocating and sharing a data object among program instances
WO2019245445A1 (en) Memory allocation in a hierarchical memory system
CN113138851B (en) Data management method, related device and system
US20170364442A1 (en) Method for accessing data visitor directory in multi-core system and device
CN116302491A (en) Memory management method, device, computer equipment and storage medium
US20200409833A1 (en) Reducing fragmentation of computer memory
CN115617542A (en) Memory exchange method and device, computer equipment and storage medium
CN115712500A (en) Memory release method, memory recovery method, memory release device, memory recovery device, computer equipment and storage medium
CN113535392B (en) Memory management method and system for realizing support of large memory continuous allocation based on CMA
CN114518962A (en) Memory management method and device
US11093169B1 (en) Lockless metadata binary tree access
CN114116194A (en) Memory allocation method and system
CN115793957A (en) Method and device for writing data and computer storage medium
US8762647B2 (en) Multicore processor system and multicore processor
CN112114962A (en) Memory allocation method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination