WO2016187974A1 - Storage space management method and apparatus - Google Patents

Storage space management method and apparatus

Info

Publication number
WO2016187974A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage space
free
free storage
contiguous
consecutive
Application number
PCT/CN2015/087705
Other languages
English (en)
French (fr)
Inventor
李林
熊先奎
葛聪
王庆
潘睿
Original Assignee
中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司
Publication of WO2016187974A1



Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation

Definitions

  • the present invention relates to the field of communications, and in particular to a storage space management method and apparatus.
  • DRAM: Dynamic Random Access Memory
  • Flash: Flash memory
  • Both DRAM and Flash have encountered capacity and density bottlenecks; that is, for the same area, it has become difficult to increase their capacity. In addition, in many handheld devices the power consumption of DRAM, especially its refresh energy, already accounts for about 40% of the energy consumption of the handheld device system, and in some data centers the cost increase caused by DRAM refresh energy is not to be underestimated.
  • New non-volatile memories (Non-Volatile Memory, NVM for short) have appeared; their large capacity, high density, low energy consumption, fast read/write speed, long wear cycle and other characteristics have attracted broad attention from academia and industry, and have led people to see the hope of improving storage system performance in the context of cloud computing and big data.
  • many applications require their data or data structures to be stored persistently.
  • Persistent memory not only meets this requirement, but also reduces the storage stack hierarchy and improves storage efficiency.
  • One of the most efficient ways for an application to use persistent memory is to map persistent memory to the process address space. As a result, the application can directly read and write persistent memory regions, greatly reducing overhead.
  • the entire persistent memory is effectively organized and managed, and the persistent storage area of the specified size is mapped to the process address space according to the requirements of the application.
  • the persistent storage area can be called a persistent heap, where the application can store data or data structures that need to be persisted. This is the application scenario and main tasks of the persistent memory management mechanism.
  • Free space does not increase by itself; if the allocation and release operations for persistent space are not optimized, the free space is likely to be exhausted in a short time. Therefore, space efficiency is very important for persistent memory management.
  • The present invention provides a storage space management method and apparatus, so as to at least solve the problem in the related art that the lack of a storage management mechanism suited to NVM leads to inefficient memory space allocation.
  • According to an embodiment of the present invention, a storage space management method is provided, including: receiving a memory space request of an application; obtaining the number of free page frames requested in the memory space request; querying a free page frame organization according to the number of free page frames to obtain a contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames; and allocating the contiguous free storage space to the application.
  • the free page frame organization includes: a page frame linked list or a tree
  • the page frame linked list or the tree includes: at least one allocation unit descriptor, and at least one allocation unit descriptor is used to describe a storage state of the continuous storage space.
  • the step of querying the free page frame organization according to the number of free page frames to obtain a contiguous free storage space of the corresponding size includes: querying at least one allocation unit descriptor in the page frame linked list or the tree; matching the size of the contiguous free storage space recorded in the at least one allocation unit descriptor against the contiguous storage space corresponding to the number of free page frames requested by the application, to query whether a contiguous free storage space of that size exists; and, if such a contiguous free storage space exists, extracting it as the contiguous free storage space to be allocated for the requested number of free page frames; wherein the storage state of a contiguous storage space includes: free and allocated.
  • the step of allocating the contiguous free storage space to the application includes: if the size of a contiguous free storage space in the page frame linked list or the tree equals the contiguous storage space corresponding to the number of free page frames, allocating that contiguous free storage space to the application; if the contiguous free storage spaces in the page frame linked list or the tree are smaller than the contiguous storage space corresponding to the number of free page frames, querying N contiguous free storage spaces in the page frame linked list or the tree and determining whether their total size equals the contiguous storage space corresponding to the number of free page frames, where N is an integer greater than 1; and, if the determination result is yes, allocating the N contiguous free storage spaces to the application.
  • the method further includes: if the N contiguous free storage spaces are together smaller than the contiguous storage space corresponding to the number of free page frames, searching the page frame linked list or the tree for a first contiguous free storage space that is larger than the contiguous storage space corresponding to the number of free page frames; if the first contiguous free storage space is obtained, cutting it to obtain a second contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames, and allocating the second contiguous free storage space to the application, wherein the remainder of the first contiguous free storage space left after cutting is returned to the page frame linked list or the tree; and, if the largest contiguous free storage space in the page frame linked list or the tree is smaller than the contiguous storage space corresponding to the number of free page frames, querying the free page frame linked list or the tree for contiguous free storage spaces smaller than that largest space whose combined size matches the contiguous storage space corresponding to the number of free page frames, repeating the query over progressively smaller contiguous free storage spaces until a matching combination is obtained, or prompting that there is currently no contiguous free storage space that can match the contiguous storage space corresponding to the number of free page frames.
  • the method further includes: receiving a persistent memory request of the application, where the persistent memory request is used to indicate that data pre-stored by the application is to be queried; and, according to the persistent memory request, querying through an object descriptor in the memory system to obtain the data pre-stored by the application;
  • the object descriptor is used to indicate a page frame linked list or tree containing at least one allocation unit descriptor.
  • According to another embodiment of the present invention, a storage space management apparatus is provided, including: a first receiving module, configured to receive a memory space request of an application; an obtaining module, configured to obtain the number of free page frames requested in the memory space request; a first query module, configured to query the free page frame organization according to the number of free page frames and obtain a contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames; and an allocating module, configured to allocate the contiguous free storage space to the application.
  • the free page frame organization includes: a page frame linked list or a tree
  • the page frame linked list or the tree includes: at least one allocation unit descriptor, and at least one allocation unit descriptor is used to describe a storage state of the continuous storage space.
  • the first query module includes: a query unit, configured to query the at least one allocation unit descriptor in the page frame linked list or the tree; a first matching unit, configured to match the size of the contiguous free storage space recorded in the at least one allocation unit descriptor against the contiguous storage space corresponding to the number of free page frames requested by the application, to query whether a contiguous free storage space of that size exists; and a second matching unit, configured to, if such a contiguous free storage space exists, extract it as the contiguous free storage space to be allocated for the requested number of free page frames; wherein the storage state of a contiguous storage space includes: free and allocated.
  • the allocating module includes: a first allocating unit, configured to allocate the contiguous free storage space to the application if the size of a contiguous free storage space in the page frame linked list or the tree equals the contiguous storage space corresponding to the number of free page frames;
  • a determining unit, configured to, when the contiguous free storage spaces in the page frame linked list or the tree are smaller than the contiguous storage space corresponding to the number of free page frames, query N contiguous free storage spaces in the page frame linked list or the tree and determine whether their total size equals the contiguous storage space corresponding to the number of free page frames, where N is an integer greater than 1; and a second allocating unit, configured to allocate the N contiguous free storage spaces to the application when the determination result is yes.
  • the allocating module further includes: a space query unit, configured to, if the N contiguous free storage spaces are together smaller than the contiguous storage space corresponding to the number of free page frames, search the page frame linked list or the tree for a first contiguous free storage space that is larger than the contiguous storage space corresponding to the number of free page frames; a space cutting unit, configured to, when the first contiguous free storage space is obtained, cut it to obtain a second contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames; a third allocating unit, configured to allocate the second contiguous free storage space to the application, wherein the remainder of the first contiguous free storage space left after cutting is returned to the page frame linked list or the tree; and a fourth allocating unit, configured to, when the largest contiguous free storage space in the page frame linked list or the tree is smaller than the contiguous storage space corresponding to the number of free page frames, query the free page frame linked list or the tree for contiguous free storage spaces smaller than that largest space whose combined size matches the request, repeating the query until such a combination is obtained, or indicating that there is currently no contiguous free storage space matching the contiguous storage space corresponding to the number of free page frames.
  • the apparatus further includes: a second receiving module, configured to, after the contiguous free storage space is allocated to the application, receive a persistent memory request of the application, where the persistent memory request is used to indicate that data pre-stored by the application is to be queried; and a second query module, configured to query, according to the persistent memory request, through an object descriptor in the memory system to obtain the data pre-stored by the application; wherein the object descriptor is used to indicate a page frame linked list or tree including at least one allocation unit descriptor.
  • By means of the present invention, a memory space request of an application is received; the number of free page frames requested in the memory space request is obtained; the free page frame organization is queried according to the number of free page frames to obtain a contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames; and the contiguous free storage space is allocated to the application, thereby improving the efficiency of memory space allocation.
  • FIG. 1 is a flow chart of a storage space management method according to an embodiment of the present invention.
  • FIG. 2 is a structural block diagram of a memory pool structure according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing an initial state of a metadata memory pool subpool according to an embodiment of the present invention.
  • FIG. 4 is a state diagram of a metadata storage pool subpool after a first allocation according to an embodiment of the present invention
  • FIG. 5 is a state diagram of a second allocation of metadata memory pool subpools according to an embodiment of the present invention.
  • FIG. 6 is a state diagram of a metadata memory pool subpool releasing a first element according to an embodiment of the present invention
  • FIG. 7 is a diagram showing an organization structure of a free page frame according to an embodiment of the present invention.
  • FIG. 8 is a structural block diagram of a storage space management apparatus according to an embodiment of the present invention.
  • FIG. 9 is a structural block diagram of a storage space management apparatus according to an embodiment of the present invention.
  • FIG. 10 is a block diagram showing the structure of another storage space management apparatus according to an embodiment of the present invention.
  • FIG. 11 is a block diagram showing the structure of still another storage space management apparatus according to an embodiment of the present invention.
  • The storage space management method provided in this embodiment may be applied to non-volatile memory, where the non-volatile memory (NVM) may include at least: resistive memory (RRAM), Phase Change Memory (PCM), Magnetic RAM (MRAM), and Spin-Torque Transfer RAM (STT RAM).
  • NVM: Non-Volatile Memory
  • RRAM: Resistive Memory
  • PCM: Phase Change Memory
  • MRAM: Magnetic RAM
  • STT RAM: Spin-Torque Transfer RAM
  • this type of memory has the characteristics of large capacity, high density, low power consumption, fast reading and writing speed, long wear cycle, etc.
  • This type of memory can be directly connected to the processor memory subsystem, that is, connected to the memory bus.
  • Such NVM can be called persistent memory (Persistent Memory).
  • In the context of cloud computing and big data, many applications require their data or data structures to be persisted.
  • Persistent memory not only meets this requirement, but also reduces the storage stack hierarchy and improves storage efficiency.
  • One of the most efficient ways for an application to use persistent memory is to map persistent memory to the process address space. As a result, the application can directly read and write persistent memory regions, greatly reducing overhead.
  • the entire persistent memory is effectively organized and managed, and the persistent storage area of the specified size is mapped to the process address space according to the requirements of the application. Once the mapping is complete, the persistent storage area can be called a persistent heap, where the application can store data or data structures that need to be persisted.
  • However, the existing page frame management mechanism does not take into account NVM characteristics such as its limited wear cycle and asymmetric read/write performance, nor the space waste caused by the buddy system.
  • the problem is mainly reflected in the following aspects:
  • When space is requested from the buddy system, even if the number of page frames requested by the user is not a power of 2, the buddy system always allocates a power of 2. For example, if the user requests only 300 page frames, the buddy system assigns 512 page frames, wasting 212 page frames. For the same reason, it may happen that even though there is enough contiguous space, it cannot be assigned to the user. For example, if the current maximum number of consecutive free page frames is 384 and the user requests 300 page frames, then although the number of free page frames is greater than the number requested, the buddy system will still try to allocate 512 consecutive page frames. The allocation request cannot be satisfied, and the 384 consecutive page frames are not fully utilized, resulting in wasted space.
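For illustration only (this numeric example is not part of the patent text), the power-of-2 rounding behaviour described above can be sketched as follows; buddy_round_up is a hypothetical helper name.

```c
#include <stdint.h>
#include <stdio.h>

/* Round a page-frame request up to the next power of two, as the buddy system does. */
static uint32_t buddy_round_up(uint32_t frames) {
    uint32_t granted = 1;
    while (granted < frames)
        granted <<= 1;
    return granted;
}

int main(void) {
    uint32_t requested = 300;
    uint32_t granted = buddy_round_up(requested);          /* 512 */
    printf("requested=%u granted=%u wasted=%u\n",
           requested, granted, granted - requested);       /* wasted=212 */
    return 0;
}
```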
  • In fact, the mapped storage space does not have to be physically contiguous when it is mapped into the process/kernel address space.
  • the buddy system did not make full use of this, and only provided the allocation of consecutive page frames. As a result, it may happen that even if there is enough discontinuous free space, the user's allocation requirements cannot be met. In this case, it is very likely that some small memory fragments will not be used for a long time, resulting in wasted space.
  • Although the buddy system can only allocate page frames in powers of 2, a memory management mechanism built on top of the buddy system can call it multiple times to satisfy a non-power-of-2 space request without allocating too much extra space to the user.
  • This method is based on the fact that any integer can be represented in binary form. For example, a request for 255 page frames can be converted into 8 calls to the buddy system, requesting 128, 64, 32, 16, 8, 4, 2 and 1 consecutive page frames respectively. Although this avoids the waste caused by power-of-2-only allocation, it introduces too much metadata: each buddy-system allocation needs one piece of metadata to describe the space obtained, so the example above requires 8 pieces of metadata for 8 calls. The main drawback of this method is therefore that its metadata space overhead is too large.
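As a sketch of the decomposition just described (the helper name buddy_decompose is hypothetical and not from the patent), a request is split into one power-of-two chunk per set bit, and each chunk would need its own metadata entry:

```c
#include <stdint.h>
#include <stdio.h>

/* Split a page-frame request into power-of-two chunks, one buddy-system call each. */
static int buddy_decompose(uint32_t frames, uint32_t chunks[32]) {
    int n = 0;
    for (int bit = 31; bit >= 0; --bit)
        if (frames & (1u << bit))
            chunks[n++] = 1u << bit;    /* one metadata entry per chunk */
    return n;
}

int main(void) {
    uint32_t chunks[32];
    int n = buddy_decompose(255, chunks);  /* 128, 64, 32, 16, 8, 4, 2, 1 */
    printf("255 page frames need %d buddy calls and %d metadata entries\n", n, n);
    return 0;
}
```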
  • Another related approach manages persistent memory with a file system: a file system is built on the NVM, and the persistent memory is then mapped using a file, or another object that can be mapped into memory, via mmap.
  • the application scenario of the file system is quite different from the application scenario of the persistent memory described above.
  • the file system can be used to accomplish the task of persisting memory mapping and demapping, many of the designs in the file system are not suitable for persistent memory applications.
  • the main problem with using the file system to manage persistent memory is that the space overhead of the metadata portion is too large. For example, in some file systems, bitmaps are used to manage data blocks, where the length of the bitmap is proportional to the size of the storage space.
  • the larger the storage space the larger the space occupied by the metadata portion represented by the bitmap.
  • the amount of metadata used is not scalable, and its space overhead is too large.
  • some file systems recognize the poor spatial performance of the above bitmap design scheme, and use the B-tree to organize several consecutive storage spaces of one file.
  • the size of the B-tree node is the size of a data block, such as 4KBytes. This design may be efficient for file systems, but it is not suitable for persistent memory applications. When an application requests a mapping space from a persistent memory management mechanism, it usually gives the size of the mapped space.
  • One of the most efficient ways to use a file system to manage persistent memory is to call a function such as fallocate on the file to be mapped before the mmap operation, reserving contiguous physical space as far as possible.
  • the mapping file corresponding to a persistent heap contains a limited number of contiguous spaces relative to ordinary files. Therefore, in this case, it is very wasteful to use a B-tree, or even a node of the B-tree, to organize a very limited number of contiguous spaces of mapped files.
  • some metadata structures in the file system such as block groups and their descriptor tables, are completely unnecessary for persistent memory applications. Obviously, the space occupied by this part of the metadata is also wasted.
  • the embodiment of the present invention provides a storage space management method for the above problem, based on the characteristics of the NVM medium and the persistent memory application scenario, as follows:
  • FIG. 1 is a flowchart of a storage space management method according to an embodiment of the present invention. As shown in FIG. 1 , the process includes the following steps:
  • Step S102: a memory space request of an application is received;
  • Step S104: the number of free page frames requested in the memory space request is obtained;
  • Step S106: the free page frame organization is queried according to the number of free page frames, and a contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames is obtained;
  • Step S108: the contiguous free storage space is allocated to the application.
  • In this embodiment, the memory space request from the application is first received; secondly, in the course of parsing the memory space request, the number of free page frames requested by the application is obtained; thirdly, after the number of free page frames is obtained, the page frame linked list or the tree is queried to obtain a contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames; and finally, the contiguous free storage space is allocated to the application.
  • Through the above steps, the memory space request of the application is received, the number of free page frames requested in it is obtained, the page frame linked list or the tree is queried according to that number to obtain a contiguous free storage space of the corresponding size, and the contiguous free storage space is allocated to the application, thereby improving the efficiency of memory space allocation.
  • One is the allocation unit descriptor, which represents a run of contiguous page frames; both free space and allocated space are represented by allocation unit descriptors, and each allocation unit descriptor stores information such as the start address and length of the contiguous page frames it describes.
  • the second is the object descriptor that represents the persistent heap.
  • the object descriptor points to a linked list containing a number of allocation unit descriptors to represent several consecutive page frames contained in its corresponding persistent heap.
  • the embodiment also designs a metadata memory pool. Since there are two types of metadata, the allocation unit descriptor memory pool and the object descriptor memory pool are instantiated separately.
  • the free page frame organization includes: a page frame linked list or a tree
  • the page frame linked list or the tree includes: at least one allocation unit descriptor, and at least one allocation unit descriptor is used to describe a storage state of the continuous storage space.
  • In an optional implementation of this embodiment, the step S106 of querying the free page frame organization according to the number of free page frames to obtain a contiguous free storage space of the corresponding size includes the following steps:
  • Step 1: at least one allocation unit descriptor in the page frame linked list or the tree is queried;
  • Step 2: the size of the contiguous free storage space recorded in the at least one allocation unit descriptor is matched against the contiguous storage space corresponding to the number of free page frames requested by the application, to query whether a contiguous free storage space of that size exists;
  • Step 3: if such a contiguous free storage space exists, it is extracted as the contiguous free storage space to be allocated for the requested number of free page frames.
  • The storage state of a contiguous storage space includes: allocated and free.
  • In an optional implementation, the allocation unit descriptor corresponding to a contiguous free storage space is put into a page frame linked list or tree according to the size of the contiguous page frames it describes.
  • This embodiment provides an example in which a total of 127 free page frame linked lists are used, wherein each page frame linked list is composed of a plurality of allocation unit descriptors, and the allocation unit descriptors in the same page frame linked list correspond to storage spaces of the same size.
  • In the first page frame linked list, each allocation unit descriptor corresponds to 1 page frame; in the second, to 2 consecutive page frames; in the third, to 3 consecutive page frames; and so on, up to the 127th page frame linked list, in which each allocation unit descriptor corresponds to 127 consecutive page frames.
  • this embodiment also provides an example in which five balanced binary trees are used, wherein the five balanced binary trees are used for storing large consecutive page frames.
  • the key of the tree is the size of the continuous page frame.
  • Each tree is likewise composed of a plurality of allocation unit descriptors, and the space sizes corresponding to the allocation unit descriptors in the same tree fall within a fixed range. For example, in the first tree each allocation unit descriptor corresponds to between 128 and 255 page frames; in the second tree, to between 256 and 511 page frames; in the third tree, to between 512 and 1023 page frames; in the fourth tree, to between 1024 and 2047 page frames; and free spaces of 2048 or more consecutive page frames correspond to allocation unit descriptors located in the fifth tree (a small sketch of this bucketing follows below).
  • It is possible for several allocation unit descriptors to correspond to the same tree node, that is, the contiguous spaces they describe are equal in size. In that case these allocation unit descriptors are organized in a linked list pointed to by the tree node.
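For illustration only, the bucketing above can be expressed as a small helper that maps a requested page-frame count to a FreePageLists index or a FreePageTrees index; locate_bucket is a hypothetical name, and the boundaries follow the example ranges given in this embodiment.

```c
#include <stdio.h>

/* Hypothetical mapping from a page-frame count to the free-space bucket:
 * lists hold sizes 1..127, trees hold 128-255, 256-511, 512-1023,
 * 1024-2047, and 2048 or more consecutive page frames. */
static void locate_bucket(unsigned frames) {
    if (frames >= 1 && frames <= 127) {
        printf("%u frames -> FreePageLists[%u]\n", frames, frames - 1);
    } else if (frames >= 128) {
        unsigned tree;
        if (frames <= 255)       tree = 0;
        else if (frames <= 511)  tree = 1;
        else if (frames <= 1023) tree = 2;
        else if (frames <= 2047) tree = 3;
        else                     tree = 4;
        printf("%u frames -> FreePageTrees[%u]\n", frames, tree);
    }
}

int main(void) {
    locate_bucket(3);      /* FreePageLists[2]  */
    locate_bucket(300);    /* FreePageTrees[1]  */
    locate_bucket(5000);   /* FreePageTrees[4]  */
    return 0;
}
```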
  • By matching the sizes of the contiguous free storage spaces recorded in the allocation unit descriptors against the contiguous storage space corresponding to the number of free page frames requested by the application, it can be determined whether the page frame linked list or the tree contains a contiguous free storage space whose size exactly equals the request.
  • In an optional implementation, the step S108 of allocating the contiguous free storage space to the application includes:
  • Step 1: if the page frame linked list or the tree contains a contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames, that contiguous free storage space is allocated to the application; that is, a contiguous space of exactly the requested size is searched for in the page frame linked list or the tree.
  • Step 2: if the contiguous free storage spaces in the page frame linked list or the tree are smaller than the contiguous storage space corresponding to the number of free page frames, N contiguous free storage spaces are queried in the page frame linked list or the tree, and it is determined whether their total size equals the contiguous storage space corresponding to the number of free page frames, where N is an integer greater than 1;
  • Step 3: when the determination result is yes, the N contiguous free storage spaces are allocated to the application.
  • That is, if the requested size cannot be satisfied on the basis of Step 1, then according to Step 2 and Step 3, two contiguous free storage spaces whose combined size equals the contiguous storage space corresponding to the number of free page frames are searched for in the page frame linked list or the tree.
  • The description here takes two contiguous free storage spaces as an example of implementing the storage space management method provided in this embodiment; this is not a limitation.
  • Optionally, the storage space management method provided in this embodiment further includes:
  • Step 4: if the N contiguous free storage spaces are together smaller than the contiguous storage space corresponding to the number of free page frames, a first contiguous free storage space that is larger than the contiguous storage space corresponding to the number of free page frames is searched for in the page frame linked list or the tree;
  • Step 5: if the first contiguous free storage space is obtained, it is cut, yielding a second contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames;
  • Step 6: the second contiguous free storage space is assigned to the application.
  • In Step 4 to Step 6, the first contiguous free storage space that is larger than, and closest to, the requested size is found in the page frame linked list or the tree; after it is found, it is cut, the second contiguous free storage space whose size equals the contiguous storage space corresponding to the number of free page frames is separated out, and the remaining contiguous free storage space is placed back into the corresponding page frame linked list or tree.
  • Step 7: if the largest contiguous free storage space in the page frame linked list or the tree is smaller than the contiguous storage space corresponding to the number of free page frames, the free page frame linked list or the tree is queried for contiguous free storage spaces smaller than that largest space whose combined size matches the contiguous storage space corresponding to the number of free page frames; the query is repeated over progressively smaller contiguous free storage spaces until a combination matching the request is obtained, or it is prompted that there is currently no contiguous free storage space that can satisfy the request.
  • That is, Step 7 applies when Step 1 to Step 3 and Step 4 to Step 6 cannot satisfy the contiguous storage space corresponding to the number of free page frames requested by the application, in other words when the current maximum contiguous free space in the system is smaller than the requested one.
  • When a page frame is released, the state of the page frames physically contiguous with it is checked; if a physically contiguous page frame is free, a merge operation is initiated, so that only one allocation unit descriptor is used to represent the merged run of contiguous page frames.
  • In this way, the storage space management method provided by this embodiment can satisfy the application's required contiguous free storage space at the cost of the fewest allocation unit descriptors, and after an allocated contiguous storage space is released it can still be merged and represented by a single allocation unit descriptor, thereby saving system resources.
  • the storage space management method further includes:
  • Step S109 receiving a persistent memory request of the application, where the persistent memory request is used to indicate data pre-stored by the query application;
  • Step S110 according to the persistent memory request, querying through the object descriptor in the memory system, and obtaining data pre-stored by the application;
  • the object descriptor is used to indicate a page frame linked list or a tree including at least one allocation unit descriptor.
  • the object mentioned here is the persistent heap.
  • The object, that is, the persistent heap, is represented by an object descriptor, which stores the identifier of the persistent heap in addition to the page frame linked list composed of allocation unit descriptors.
  • This identifier allows the application to retrieve the persistent memory it previously applied for.
  • all object descriptors are organized into a balanced binary tree. Among them, the key of the tree is the identifier of the persistent heap.
  • Unlike a solution based on the traditional page frame management mechanism, the storage space management method provided in this embodiment does not allocate persistent memory to the application in powers of 2. Rather, according to the four allocation strategies described in Step 1 to Step 7, allocation follows the number of page frames requested by the application; the application gets exactly as many page frames as it requests, so there is no wasted space.
  • Furthermore, in this embodiment there is no situation in which sufficient free space exists but cannot be allocated.
  • With the four allocation strategies described in Step 1 to Step 7, whenever page frames are released, adjacent free contiguous page frames are merged if possible. In other words, the contiguous page frame runs in the free lists and trees cannot be merged any further, so, given the cutting allocation strategy, it cannot happen that sufficient contiguous space exists yet cannot be allocated.
  • In addition, the combined allocation and best effort allocation strategies combine multiple discrete contiguous spaces for a single allocation, so it also cannot happen that enough discrete free space exists but cannot be allocated. Obviously, these two advantages help improve the space utilization of persistent memory.
  • the metadata mainly includes an allocation unit descriptor and an object descriptor, and the two descriptors constitute a node of a linked list or a tree.
  • the allocation unit descriptor includes information such as a start address and a length of the continuous page frame, an address of an allocation unit descriptor corresponding to a space physically adjacent to the described continuous space, a pointer constituting a linked list or a tree, and the like.
  • the object descriptor contains information such as the persistent heap identifier, a pointer to the allocation unit descriptor list, and a pointer to the tree.
  • The two kinds of metadata designed in this embodiment each occupy only a few tens of bytes, unlike the file system case, in which an entire data block is treated as a B-tree node.
  • When cutting allocation is performed, the found free space is divided, and a new allocation unit descriptor is needed to represent the remaining space after cutting or the allocated space.
  • space allocation is performed according to the best effort allocation strategy, several allocation unit descriptors and their corresponding free spaces are found. If the sum of the sizes of these free spaces is exactly equal to the requested space size, there is no need to add an allocation unit descriptor. Otherwise, you will need to cut the last free space found. In this case, an allocation unit descriptor will be added.
  • When implementing the four allocation strategies, this embodiment allocates new space at the tail (that is, the high-address end) of the original space, so that the new space and the original space are physically adjacent. In this way the original space and the new space can be merged, reducing the number of allocation unit descriptors.
  • In addition, this embodiment merges physically adjacent free spaces, so the return operation may also reduce the number of allocation unit descriptors.
  • the metadata portion further includes: a fixed-size NVM total descriptor and a log area. Among them, the NVM total descriptor and log area occupy a small space, only one page frame.
  • The spatial complexity of the metadata part is O(C·N + α), where N is the number of persistent heaps, C is the number of times a heap is expanded, and α is the fixed overhead represented by the NVM total descriptor; C·N is the total number of persistent space requests.
  • N: the number of persistent heaps.
  • C: the number of times a heap is expanded.
  • α: the fixed overhead represented by the NVM total descriptor.
  • C·N: the total number of persistent space requests.
  • The expansion operation is usually triggered only when the remaining free space of the persistent heap is not enough. Therefore, the number of expansions C is usually very small and can be regarded as a constant.
  • the spatial complexity of the metadata part is only related to the number of persistent heaps, and has good scalability regardless of the allocated space size or the entire persistent memory size.
  • an index structure is established for the persistent heap inside the persistent memory, that is, a balanced binary tree composed of object descriptors.
  • Below, the overall layout of the persistent memory used in this embodiment is first introduced, then the metadata and its memory pools are analyzed, and on this basis the organization and allocation of page frames and the organization and management of object descriptors are introduced.
  • the NVM total descriptor contains control description information for the entire persistent memory management mechanism, such as various types of flag information, free page frame management structure information, and so on.
  • the structure definition of this header area is given below, namely the NVM total descriptor.
  • FormatFlag A format flag that determines whether the current persistent memory has been formatted.
  • InitialFlag Initialization flag that identifies each stage in the system during fault recovery at startup.
  • NVMSize The size of the NVM area, which is used to represent the amount of space in the entire persistent memory.
  • MetaDataSize The size of the metadata portion of the entire persistent memory.
  • FreePageNumber The number of unused page frames in the entire persistent memory.
  • FreePageLists Free page frame linked list entry. The entry contains a total of 127 pointers to the corresponding free page frame list NvmAllocUnitDescriptorInList (this structure will be described later).
  • FreePageTrees Free page frame tree entry. The entry contains a total of five pointers to the corresponding free page frame tree NvmAllocUnitDescriptorInTree (this structure will be described later).
  • NvmObjectTree Object descriptor tree entry, which points to the object descriptor tree NvmObjectDestriptor (this structure will be described later).
  • AllocUnitDescriptorMemoryPoolForList The allocation unit descriptor memory pool entry point structure in the linked list. This entry point type NvmMetaDataPool will be described later.
  • AllocUnitDescriptorMemoryPoolForTree The allocation unit descriptor memory pool entry point structure located in the tree. This entry point type NvmMetaDataPool will be described later.
  • NvmObjectDescriptorMemoryPool Object descriptor memory pool entry point structure. This entry point type NvmMetaDataPool will be described later.
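For readability, the fields above can be collected into a C sketch of the NVM total descriptor; the patent does not give field types or widths, so the integer types, the use of pointers for the entry points, and the forward-declared helper types below are assumptions for illustration only.

```c
#include <stdint.h>

struct NvmAllocUnitDescriptorInList;   /* described later in the text */
struct NvmAllocUnitDescriptorInTree;   /* described later in the text */
struct NvmObjectDescriptor;            /* described later in the text */
struct NvmMetaDataPool;                /* described later in the text */

/* Sketch of the NVM total descriptor occupying the head of persistent memory. */
struct NvmDescriptor {
    uint64_t FormatFlag;      /* has the persistent memory been formatted? */
    uint64_t InitialFlag;     /* stage marker used for fault recovery at startup */
    uint64_t NVMSize;         /* size of the entire NVM area */
    uint64_t MetaDataSize;    /* size of the metadata portion */
    uint64_t FreePageNumber;  /* number of unused page frames */
    struct NvmAllocUnitDescriptorInList *FreePageLists[127]; /* free lists for sizes 1..127 */
    struct NvmAllocUnitDescriptorInTree *FreePageTrees[5];   /* free trees for larger ranges */
    struct NvmObjectDescriptor *NvmObjectTree;               /* root of the object descriptor tree */
    struct NvmMetaDataPool *AllocUnitDescriptorMemoryPoolForList;
    struct NvmMetaDataPool *AllocUnitDescriptorMemoryPoolForTree;
    struct NvmMetaDataPool *NvmObjectDescriptorMemoryPool;
};
```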
  • the above NvmDescriptor structure occupies the head area of the persistent memory, that is, the first page frame.
  • In an optional implementation, the size of a page frame is set to 4 KBytes. Since the NvmDescriptor occupies only about 1 KByte, the rest of the first page frame is used as the log area.
  • this patent scheme uses two kinds of metadata, one is the allocation unit descriptor, and the other is the object descriptor.
  • the allocation unit descriptor is used to represent a contiguous page frame, and the definitions of its various fields are given below.
  • NvmAllocUnitDescriptor is an allocation unit descriptor whose meanings are as follows.
  • SpaceAddress The base address of the consecutive page frame represented by the current allocation unit descriptor.
  • SpaceSize The size of the contiguous page frame represented by the current allocation unit descriptor.
  • PreSpaceAddress A contiguous space (precursor) physically adjacent to the space described by the current allocation unit descriptor, corresponding to the address of the allocation unit descriptor.
  • NextSpaceAddress A contiguous space physically adjacent (successor) to the space described by the current allocation unit descriptor, corresponding to the address of the allocation unit descriptor.
  • Flags Status flags of consecutive spaces corresponding to the current allocation unit descriptor, including information such as whether it is free space.
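A corresponding C sketch of the allocation unit descriptor follows; the field types are assumptions, since the patent only names the fields.

```c
#include <stdint.h>

/* Sketch of the allocation unit descriptor describing one run of contiguous page frames. */
struct NvmAllocUnitDescriptor {
    uint64_t SpaceAddress;      /* base address of the contiguous page frames */
    uint64_t SpaceSize;         /* size of the contiguous page frames */
    uint64_t PreSpaceAddress;   /* descriptor address of the physically preceding space */
    uint64_t NextSpaceAddress;  /* descriptor address of the physically following space */
    uint64_t Flags;             /* status flags, e.g. whether the space is free */
};
```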
  • the NvmAllocUnitDescriptor structure derives two wrapping structures, NvmAllocUnitDescriptorInList and NvmAllocUnitDescriptorInTree.
  • the definition of these two structures is as follows.
  • the NvmAllocUnitDescriptorInList structure represents the allocation unit descriptor located in the linked list
  • the NvmAllocUnitDescriptorInTree structure represents the allocation unit descriptor located in the tree.
  • the Prev and Next fields respectively point to the predecessor and successor nodes of the doubly linked list node
  • the LeftChild, RightChild and Parent fields point to the left and right children and parent nodes of the nodes in the tree respectively.
  • The SameSizeList field is special: it links together allocation unit descriptors in the tree whose contiguous spaces are of equal size, so that they can hang off the same tree node, as described above for the free page frame organization.
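Building on the NvmAllocUnitDescriptor sketch above, the two wrapping structures can be sketched as follows; embedding the base descriptor and the pointer types are assumptions for illustration.

```c
/* Sketch of the wrapping structure used in the free page frame linked lists. */
struct NvmAllocUnitDescriptorInList {
    struct NvmAllocUnitDescriptor Base;
    struct NvmAllocUnitDescriptorInList *Prev;   /* predecessor in the doubly linked list */
    struct NvmAllocUnitDescriptorInList *Next;   /* successor in the doubly linked list */
};

/* Sketch of the wrapping structure used in the free page frame trees. */
struct NvmAllocUnitDescriptorInTree {
    struct NvmAllocUnitDescriptor Base;
    struct NvmAllocUnitDescriptorInTree *LeftChild;
    struct NvmAllocUnitDescriptorInTree *RightChild;
    struct NvmAllocUnitDescriptorInTree *Parent;
    struct NvmAllocUnitDescriptorInTree *SameSizeList; /* equal-sized descriptors on one tree node */
};
```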
  • the object descriptor represents a persistent heap whose structure is as follows.
  • SpaceList points to the allocation unit descriptor list, that is, a linked list consisting of allocation unit descriptors. Each element of the linked list represents the contiguous space owned by the persistent heap.
  • NvmObjectSize The total size of the persistent heap space represented by the current object descriptor.
  • Flags Various flags for the current persistent heap.
  • NvmUUID 128-bit persistent heap identifier
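A C sketch of the object descriptor follows; the field types and the tree-link fields are assumptions (the text states that object descriptors are organized into a balanced binary tree keyed by the heap identifier, but does not name those links).

```c
#include <stdint.h>

struct NvmAllocUnitDescriptorInList;   /* wrapping structure sketched earlier */

/* Sketch of the object descriptor representing one persistent heap. */
struct NvmObjectDescriptor {
    struct NvmAllocUnitDescriptorInList *SpaceList; /* contiguous spaces owned by the heap */
    uint64_t NvmObjectSize;                         /* total size of the persistent heap */
    uint64_t Flags;                                 /* various flags for the heap */
    uint8_t  NvmUUID[16];                           /* 128-bit persistent heap identifier */
    struct NvmObjectDescriptor *LeftChild;          /* assumed links for the object descriptor tree */
    struct NvmObjectDescriptor *RightChild;
    struct NvmObjectDescriptor *Parent;
};
```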
  • the patent scheme establishes three memory pools for metadata, namely, an allocation unit descriptor memory pool located in a linked list, an allocation unit descriptor memory pool located in a tree, and an object descriptor memory pool.
  • the three memory pools are identical in structure, except that the metadata managed internally is of different sizes.
  • the structure definition of the memory pool entry point is given below.
  • TotalElementNumber The total number of metadata in the memory pool.
  • FreeElementNumber The number of metadata that can be allocated in the memory pool.
  • MetaDataFreeList Points to a list of sub-pools of assignable elements. Each element in the linked list is an allocation unit descriptor.
  • MetaDataFullList Points to the list of sub-pools that have no allocatable elements, that is, sub-pools all of whose metadata has been allocated; each element of the linked list is an allocation unit descriptor.
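A sketch of the memory pool entry point follows; as above, the field types are assumed.

```c
#include <stdint.h>

struct NvmAllocUnitDescriptorInList;   /* wrapping structure sketched earlier */

/* Sketch of the metadata memory pool entry point (one instance per metadata type). */
struct NvmMetaDataPool {
    uint64_t TotalElementNumber;  /* total number of metadata elements in the pool */
    uint64_t FreeElementNumber;   /* metadata elements still available for allocation */
    struct NvmAllocUnitDescriptorInList *MetaDataFreeList; /* sub-pools that still have free elements */
    struct NvmAllocUnitDescriptorInList *MetaDataFullList; /* sub-pools whose elements are all allocated */
};
```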
  • a memory pool is made up of several memory subpools, and each memory subpool is represented by an allocation unit descriptor.
  • the two pointers contained in NvmMetaDataPool point to the list of assignable element subpools and the list of unallocated element subpools.
  • the former also contains idle metadata, and all of the latter's metadata has been allocated.
  • Figure 2 shows the basic structure of the above memory pool.
  • In addition to the above information, FIG. 2 also shows the basic structure of the memory sub-pool.
  • the memory pool subpool is represented by the allocation unit descriptor, that is, the actual storage space of the memory subpool can be found by the allocation unit descriptor.
  • At the head of this actual storage space is an instance of the NvmMetaDataSubPool structure, which controls allocation and release operations within the memory sub-pool. The definition of the structure is given below.
  • TotalElementNumber Represents the total number of elements in the current memory subpool.
  • FreeElementNumber Represents the number of elements available for allocation in the current memory subpool.
  • FirstFreeElementAddress Indicates the address of the first element available for allocation in the current memory subpool.
  • FreedElementNumber indicates the number of elements that are released in the current memory subpool, that is, the number of metadata organized in a linked list. The specific meaning of this field can be found in the analysis below.
  • the rest is used for element storage.
  • the first 8 bytes of each such element are used to describe the control information associated with that element.
  • the state of the element is different, and the meaning of the first 8 bytes is different.
  • An element has three states: unallocated, allocated, and released (where unallocated and released are idle).
  • The meaning of the first 8 bytes in each state is as follows:
  • For an allocated element, the first 8 bytes store the address of the allocation unit descriptor corresponding to the memory pool sub-pool in which the element resides, that is, the address of the NvmAllocUnitDescriptorInList instance.
  • For a free (unallocated or released) element, the first 8 bytes store the address of the next allocatable element.
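A sketch of the sub-pool header and of the per-element control word follows; the field types and the struct name NvmPoolElementHeader are assumptions for illustration.

```c
#include <stdint.h>

/* Sketch of the header placed at the start of each memory sub-pool. */
struct NvmMetaDataSubPool {
    uint64_t TotalElementNumber;      /* total elements in this sub-pool */
    uint64_t FreeElementNumber;       /* elements still available for allocation */
    uint64_t FirstFreeElementAddress; /* address of the first allocatable element */
    uint64_t FreedElementNumber;      /* released elements currently linked into a list */
};

/* Each element starts with an 8-byte control word whose meaning depends on its state:
 * allocated -> address of the owning sub-pool's NvmAllocUnitDescriptorInList;
 * free (unallocated or released) -> address of the next allocatable element. */
struct NvmPoolElementHeader {
    uint64_t ControlWord;
};
```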
  • the allocation and release of metadata is based on the above metadata memory pool structure. The specific allocation and release steps are described below.
  • the requested metadata type is first determined, that is, an allocation unit descriptor located in the linked list, an allocation unit descriptor located in the tree, or an object descriptor.
  • Then, information about an allocatable memory sub-pool is obtained from the NvmDescriptor structure and the NvmMetaDataPool structure.
  • the release process of the metadata is as follows:
  • First, the allocation unit descriptor corresponding to the memory sub-pool to which the metadata belongs, that is, the address of the NvmAllocUnitDescriptorInList structure instance, is found.
  • the metadata release operation is performed in the found memory pool subpool.
  • Figure 3 shows the initial state.
  • the FirstFreeElementAddress field of NvmMetaDataSubPool points to the first element of the subpool and is the first element that can be allocated.
  • the first assignable element is assigned, as shown in Figure 4.
  • the FirstFreeElementAddress field points to the second element.
  • the FirstFreeElementAddress field will point to the third element, as shown in Figure 5.
  • When an element is released, the address currently stored in the FirstFreeElementAddress field is written into the released element's first 8 bytes, and the FirstFreeElementAddress field is then updated to point to the released element.
  • the final result of the whole process will be as shown in Figure 6. In this way, the released elements will be organized into a linked list for easy distribution.
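For illustration only, the allocation and release behaviour walked through above resembles a simple intrusive free list; the sketch below (subpool_alloc and subpool_free are hypothetical names) pops and pushes elements through their first 8 bytes as described.

```c
#include <stddef.h>
#include <stdint.h>

struct SubPool {
    uint64_t FreeElementNumber;
    void    *FirstFreeElementAddress;
};

/* Pop the element currently pointed to by FirstFreeElementAddress. */
void *subpool_alloc(struct SubPool *sp) {
    void *elem = sp->FirstFreeElementAddress;
    if (elem == NULL)
        return NULL;                               /* sub-pool exhausted */
    /* a free element's first 8 bytes hold the address of the next allocatable element */
    sp->FirstFreeElementAddress = *(void **)elem;
    sp->FreeElementNumber--;
    return elem;
}

/* Push a released element back: store the old head in it, then make it the new head. */
void subpool_free(struct SubPool *sp, void *elem) {
    *(void **)elem = sp->FirstFreeElementAddress;
    sp->FirstFreeElementAddress = elem;
    sp->FreeElementNumber++;
}

int main(void) {
    /* Two fake elements living in ordinary memory, just to exercise the helpers. */
    uint64_t e1[4], e2[4];
    struct SubPool sp = { 2, e1 };
    *(void **)e1 = e2;            /* e1's control word points to e2 */
    *(void **)e2 = NULL;          /* e2 is the last free element */

    void *a = subpool_alloc(&sp); /* returns e1 */
    subpool_free(&sp, a);         /* pushes e1 back as the new head */
    return 0;
}
```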
  • The elements mentioned in this embodiment may also be free memory spaces corresponding to allocation unit descriptors; the storage space management method provided in this embodiment is equally applicable, and this is not a limitation.
  • this embodiment organizes consecutive free page frames into 127 linked lists and 5 balanced binary trees according to their sizes, that is, the FreePageLists and FreePageTrees fields in the NvmDescriptor structure.
  • Figure 7 shows the organization of the free page frames. On the left of the figure are the free page frame linked list entry point and the free page frame tree entry point structures; on the right are the allocation unit descriptors corresponding to several runs of consecutive free page frames.
  • The nodes of these linked lists and balanced binary trees are all allocation unit descriptors.
  • the tree adopted in the patent scheme is a red-black tree.
  • Step 1: after receiving the allocation request, first access the NvmDescriptor structure located in the persistent memory header and, according to its FreePageNumber field, determine whether there are enough free page frames; if not, an error is returned.
  • Step 2: if there are enough free page frames, judge whether the requested number of page frames n is greater than 127. If it is, go to step 7); otherwise go to step 3).
  • Step 3: determine whether the FreePageLists[n-1] linked list in the NvmDescriptor structure is empty. If it is not, take an allocation unit descriptor from the linked list, return it to the requester, and update the FreePageNumber field in the NvmDescriptor structure. This is the exact allocation strategy.
  • Step 4: if the FreePageLists[n-1] list is empty, traverse the linked lists FreePageLists[0] through FreePageLists[n-2] to determine whether there are two allocation unit descriptors whose corresponding spaces sum to n. If found, return the found allocation unit descriptors to the requester and update the FreePageNumber field in the NvmDescriptor structure. This is the combined allocation strategy.
  • Step 5: if step 4) finds nothing and n < 127, search from FreePageLists[n] onward until a non-empty list is found in the FreePageLists array. If none is found, or n is equal to 127, go to step 7).
  • Step 6: if a non-empty linked list is found in the FreePageLists array, take an allocation unit descriptor from it and cut the contiguous space it represents according to n. During cutting, request a free allocation unit descriptor from the allocation unit descriptor memory pool to represent the space remaining after the cut, and add it to a linked list in FreePageLists according to the size of the remaining space. Finally, return the original allocation unit descriptor to the requester and update the FreePageNumber field in the NvmDescriptor structure. This is the cutting allocation strategy.
  • Step 7: determine which tree FreePageTrees[m] should serve the allocation, according to the requested size n.
  • Step 8: if the found tree FreePageTrees[m] is not empty, look it up according to the value of n. If an allocation unit descriptor whose corresponding space is exactly n page frames can be found, take it out, return it to the requester, and update the FreePageNumber field in the NvmDescriptor structure. This is the exact allocation strategy.
  • Step 9: if the tree found in step 7) is empty, or step 8) does not find such an allocation unit descriptor, traverse FreePageLists[0] through FreePageTrees[m] to determine whether there are two allocation unit descriptors whose corresponding spaces sum to n. If found, return the found allocation unit descriptors to the requester and update the FreePageNumber field in the NvmDescriptor structure. This is the combined allocation strategy.
  • Step 10: if step 9) fails, traverse from FreePageTrees[m] onward until an allocation unit descriptor whose corresponding space is larger than n page frames is found.
  • Step 11: if an allocation unit descriptor satisfying step 10) is found, cut it. During cutting, request a free allocation unit descriptor from the allocation unit descriptor memory pool to represent the space remaining after the cut, and add it to one of the linked lists in FreePageLists or one of the trees in FreePageTrees according to the size of the remaining space. Finally, return the original allocation unit descriptor to the requester and update the FreePageNumber field in the NvmDescriptor structure. This is the cutting allocation strategy.
  • Step 12: if step 10) fails, search recursively starting from the largest allocation unit descriptor in FreePageTrees[4] until enough free page frames have been gathered. Finally, return the found allocation unit descriptors to the requester and update the FreePageNumber field in the NvmDescriptor structure. This is the best effort allocation strategy.
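For illustration only, the order in which the four strategies are tried can be sketched as below; the dispatch function and its boolean parameters are hypothetical stand-ins for the real list and tree lookups and are not defined by the patent.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical illustration of the dispatch order in Steps 1-12; the boolean
 * "hits" stand in for whether each list strategy succeeds and are not real lookups. */
static void dispatch(unsigned n, unsigned free_frames,
                     bool list_exact, bool list_combined, bool list_cut) {
    if (free_frames < n) {                                  /* Steps 1-2 */
        printf("n=%u: error, not enough free page frames\n", n);
        return;
    }
    if (n <= 127) {                                         /* Steps 3-6 */
        if (list_exact)    { printf("n=%u: exact allocation from lists\n", n); return; }
        if (list_combined) { printf("n=%u: combined allocation from lists\n", n); return; }
        if (list_cut)      { printf("n=%u: cutting allocation from lists\n", n); return; }
    }
    /* Steps 7-12: large requests, or small requests the lists could not serve */
    printf("n=%u: trees -> exact, combined, cutting, then best effort\n", n);
}

int main(void) {
    dispatch(3, 1000, true,  false, false);   /* served exactly from FreePageLists[2] */
    dispatch(40, 1000, false, true,  false);  /* served by combining two list entries */
    dispatch(300, 1000, false, false, false); /* large request, handled by FreePageTrees */
    return 0;
}
```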
  • the release process of the page frame is as follows:
  • metadata such as an object descriptor is used to represent a persistent heap.
  • the identifier of the persistent heap and several consecutive page frames owned by the persistent heap that is, the allocation unit descriptor linked list in the object descriptor) are recorded in the object descriptor.
  • all object descriptors are organized into a balanced binary tree according to the identifier of each persistent heap, and the root node of the tree is pointed by the NvmObjectDestriptor field in the NvmDescriptor structure.
  • When a persistent heap is created, the tree is searched for the object UUID passed by the creator; if the UUID is already found, an error is returned.
  • The expansion operation of the persistent heap is similar to the above creation process, except that no new object descriptor needs to be created. Once a persistent heap has been created, the requested page frames can be mapped into the process address space. The main steps of persistent heap deletion are described below.
  • the main reduction process of this embodiment is similar to deleting a persistent heap, except that the object descriptors need not be deleted from the tree.
  • the number of persistent heaps is related to the number of applications in the same system.
  • An application usually only has a small number of persistent heaps. Therefore, the number of persistent heaps in the entire system is relatively small.
  • the number of operations for expansion and reduction is small.
  • only the application, deletion, expansion, and reduction operations of the persistent heap will affect the concurrency of the persistent memory management mechanism to a large extent. As can be seen from the above analysis, the number of these operations is relatively small. Therefore, the probability of a persistent memory management mechanism for contention is small. Therefore, this patent scheme uses a simple coarse-grained lock, which is locked at the entrance of the above operation.
  • to ensure consistency, in particular after failures such as a system power loss, this patent scheme adopts a traditional undo-type log. All modifications to the metadata section are logged when a persistent heap is created, deleted, expanded, or reduced.
  • the area used to save the log is the part of the first page frame of the persistent memory that remains after the NvmDescriptor structure.
  • the NvmDescriptor structure occupies a little more than one kilobyte, so the log area is nearly 3 kilobytes, which is ample for the modified items of the metadata section. A sketch of such an undo log follows.
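  • A sketch of such an undo log, with an illustrative layout and entry count; the cache-line flushes and store fences a real persistent-memory log would also need are omitted:
      /* Undo log sketch: the old value of each metadata word is recorded before it
         is overwritten, a commit clears the log, and recovery rolls old values back. */
      #include <stdint.h>

      #define LOG_CAPACITY 128                    /* entries fitting the spare ~3 KB */

      typedef struct { uint64_t *addr; uint64_t old; } undo_entry;

      typedef struct {
          uint64_t   count;                       /* 0 means no transaction in flight */
          undo_entry entry[LOG_CAPACITY];
      } undo_log;

      static void log_write(undo_log *log, uint64_t *addr, uint64_t new_val) {
          log->entry[log->count].addr = addr;     /* record the old value ...          */
          log->entry[log->count].old  = *addr;
          log->count++;                           /* ... publish the entry ...         */
          *addr = new_val;                        /* ... then modify the metadata word */
      }

      static void log_commit(undo_log *log) { log->count = 0; }

      static void log_recover(undo_log *log) {
          while (log->count) {                    /* undo in reverse order */
              log->count--;
              *log->entry[log->count].addr = log->entry[log->count].old;
          }
      }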
  • the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware alone, but in many cases the former is the better implementation.
  • based on this understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • a storage space management apparatus is also provided, which is used to implement the above embodiments and preferred implementations; details that have already been described are not repeated.
  • as used below, the term "module" may refer to a combination of software and/or hardware that implements a predetermined function.
  • although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
  • FIG. 8 is a structural block diagram of a storage space management apparatus according to an embodiment of the present invention. As shown in FIG. 8, the apparatus includes: a first receiving module 82, an obtaining module 84, a first querying module 86, and an allocating module 88, where
  • the first receiving module 82 is configured to receive a memory space request of the application
  • the obtaining module 84 establishes an electrical connection with the first receiving module 82 and is configured to obtain the number of free page frames requested in the memory space request;
  • the first query module 86 establishes an electrical connection with the obtaining module 84 and is configured to query the free page frame organization according to the number of free page frames and obtain a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames;
  • the allocation module 88 establishes an electrical connection with the first query module 86 and is configured to allocate the contiguous free storage space to the application. One way to read this decomposition in code is sketched below.
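  • Reading the module diagram as a table of hooks is only an illustration of the decomposition, not an interface defined by this application:
      /* Illustrative decomposition of the apparatus into hooks. */
      typedef struct {
          unsigned long (*receive_request)(void *request);           /* first receiving module 82 */
          unsigned long (*get_frame_count)(const void *request);     /* obtaining module 84       */
          void         *(*find_free_space)(unsigned long nframes);   /* first query module 86     */
          int           (*assign_to_app)(void *space, void *app);    /* allocation module 88      */
      } storage_manager_ops;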
  • FIG. 9 is a structural block diagram of a storage space management apparatus according to an embodiment of the present invention.
  • the free page frame organization includes a page frame linked list or tree, and the page frame linked list or tree includes at least one allocation unit descriptor, where the at least one allocation unit descriptor is used to describe the storage state of a contiguous storage space and the start address and length of the page frames corresponding to that contiguous storage space.
  • the first query module 86 includes: a query unit 861, a first matching unit 862 and a second matching unit 863, wherein
  • the query unit 861 is configured to query at least one allocation unit descriptor in the page frame linked list or the tree;
  • the first matching unit 862 establishes an electrical connection with the query unit 861 and is configured to match the size of the contiguous free storage space corresponding to the at least one allocation unit descriptor against the contiguous free storage space corresponding to the number of free page frames requested by the application, and to query whether there is a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames;
  • the second matching unit 863 establishes an electrical connection with the first matching unit 862 and is configured to, if there is a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames, extract that contiguous free storage space as the contiguous free storage space allocated for the number of free page frames; where the storage state of a contiguous storage space is either allocated or free. The exact-size lookup is sketched below.
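  • The exact-size match on the small runs amounts to a head check on a size-indexed list; the sketch below assumes the 127-list layout described earlier and omits the trees used for larger runs:
      /* Exact-size lookup over size-indexed free lists. */
      #include <stddef.h>

      #define SMALL_LISTS 127

      typedef struct run {
          size_t base, npages;
          struct run *next;
      } run;

      static run *free_page_lists[SMALL_LISTS];   /* index i holds runs of i+1 frames */

      static run *take_exact(size_t n) {
          if (n == 0 || n > SMALL_LISTS) return NULL;    /* tree case not shown */
          run *r = free_page_lists[n - 1];
          if (r) free_page_lists[n - 1] = r->next;
          return r;        /* NULL: fall back to combined / cutting / best-effort */
      }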
  • FIG. 10 is a structural block diagram of another storage space management apparatus according to an embodiment of the present invention.
  • the allocation module 88 includes: a first allocating unit 881, a determining unit 882, a second allocating unit 883, a spatial query unit 884, a spatial cutting unit 885, a third allocating unit 886, or a fourth allocating unit 887, wherein
  • the first allocating unit 881 is configured to allocate the continuous free storage space to the application in a case where the size of the continuous free storage space in the page frame linked list or the tree is equal to the size of the continuous free storage space corresponding to the number of free page frames;
  • the determining unit 882 is configured to, if the size of the contiguous free storage space in the page frame linked list or tree is smaller than the contiguous free storage space corresponding to the number of free page frames, query N contiguous free storage spaces in the page frame linked list or tree and determine whether the N contiguous free storage spaces together equal the contiguous free storage space corresponding to the number of free page frames in size, where N is an integer greater than 1;
  • the second allocating unit 883 establishes an electrical connection with the determining unit 882, and is set to allocate N consecutive free storage spaces to the application if the determination result is YES.
  • the allocation module 88 further includes: a spatial query unit 884, configured to, if the N contiguous free storage spaces are smaller than the contiguous free storage space corresponding to the number of free page frames, search the page frame linked list or tree for a first contiguous free storage space larger than the contiguous free storage space corresponding to the number of free page frames;
  • the spatial cutting unit 885 establishes an electrical connection with the spatial query unit 884 and is configured to, when the first contiguous free storage space is obtained, cut the first contiguous free storage space to obtain a second contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames;
  • the third allocating unit 886 establishes an electrical connection with the spatial cutting unit 885 and is configured to allocate the second contiguous free storage space to the application, where the contiguous free storage space remaining after the first contiguous free storage space has been cut is returned to the page frame linked list or tree (the cutting path is sketched below);
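  • A sketch of that cutting path, with a simplified run structure and a stand-in descriptor_pool_get helper; re-filing the remainder under its exact size index is omitted:
      /* Cutting path: take the first run larger than n, hand an n-frame piece to the
         caller (reusing the original descriptor), and file a new descriptor for the
         remainder. */
      #include <stddef.h>
      #include <stdlib.h>

      typedef struct frun {
          size_t base, npages;
          struct frun *next;
      } frun;

      static frun *descriptor_pool_get(void) { return malloc(sizeof(frun)); }

      static frun *cut_from(frun **list, size_t n) {
          for (frun **pp = list; *pp; pp = &(*pp)->next) {
              frun *big = *pp;
              if (big->npages <= n) continue;
              frun *rest = descriptor_pool_get();     /* descriptor for the remainder */
              if (!rest) return NULL;
              *pp = big->next;                        /* take the big run out */
              rest->base   = big->base + n;
              rest->npages = big->npages - n;
              rest->next   = *list;                   /* return the remainder to the free structure */
              *list        = rest;
              big->npages  = n;                       /* original descriptor now covers n frames */
              big->next    = NULL;
              return big;
          }
          return NULL;
      }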
  • the fourth allocating unit 887 is configured to, if the largest contiguous free storage space in the page frame linked list or tree is smaller than the contiguous free storage space corresponding to the number of free page frames, query the free page frame linked list or tree for a contiguous storage space smaller than the largest contiguous free storage space whose size matches the contiguous free storage space corresponding to the number of free page frames; if the found contiguous storage space is still smaller than the contiguous free storage space corresponding to the number of free page frames, the free page frame linked list or tree is queried again for a yet smaller contiguous storage space, and so on, until either contiguous free storage space matching the contiguous free storage space corresponding to the number of free page frames is obtained, or a prompt is given that there is currently no contiguous free storage space able to match the contiguous free storage space corresponding to the number of free page frames.
  • FIG. 11 is a structural block diagram of yet another storage space management apparatus according to an embodiment of the present invention. As shown in FIG. 11, the storage space management apparatus further includes: a second receiving module 1100 and a second query module 1101, wherein
  • the second receiving module 1100 is configured to receive, after the contiguous free storage space has been allocated to the application, a persistent memory request of the application, where the persistent memory request is used to indicate querying the data pre-stored by the application;
  • the second query module 1101 establishes an electrical connection with the second receiving module 1100 and is configured to query, according to the persistent memory request, through the object descriptor in the memory system and obtain the data pre-stored by the application; where the object descriptor is used to indicate the page frame linked list or tree containing the at least one allocation unit descriptor. A sketch of mapping a retrieved heap into the process address space is given below.
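  • Once the object descriptor for a UUID has been found back, each contiguous extent it records can be mapped into the process address space. The sketch below assumes a hypothetical /dev/nvm0 character device and an extent array; this application does not define such a mapping interface:
      /* Map every contiguous extent of a found-back heap into the process. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/types.h>
      #include <unistd.h>

      typedef struct { off_t offset; size_t length; } extent;   /* from the descriptor's space list */

      int map_heap(const extent *ext, int n, void **out) {
          int fd = open("/dev/nvm0", O_RDWR);                   /* hypothetical NVM device */
          if (fd < 0) { perror("open"); return -1; }
          for (int i = 0; i < n; i++) {
              out[i] = mmap(NULL, ext[i].length, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, ext[i].offset);
              if (out[i] == MAP_FAILED) { perror("mmap"); close(fd); return -1; }
          }
          close(fd);
          return 0;   /* the application now reads and writes its persisted data in place */
      }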
  • each of the above modules may be implemented by software or hardware.
  • for the latter, this may be implemented in, but is not limited to, the following manner: the above modules are all located in the same processor, or the above modules are respectively located in multiple processors.
  • Embodiments of the present invention also provide a storage medium.
  • the foregoing storage medium may be configured to store program code for performing the following steps: S1, receiving a memory space request of an application; S2, obtaining the number of free page frames requested in the memory space request; S3, querying the free page frame organization according to the number of free page frames and obtaining a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames; S4, allocating the contiguous free storage space to the application.
  • the storage medium is further configured to store program code for performing the following steps: where the free page frame organization includes a page frame linked list or tree, the page frame linked list or tree includes at least one allocation unit descriptor, and the at least one allocation unit descriptor describes the storage state of a contiguous storage space and the start address and length of the page frames corresponding to that contiguous storage space, the step of querying the free page frame organization according to the number of free page frames and obtaining a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames includes: S1, querying the at least one allocation unit descriptor in the page frame linked list or tree; S2, matching the size of the contiguous free storage space corresponding to the at least one allocation unit descriptor against the contiguous free storage space corresponding to the number of free page frames requested by the application, and querying whether there is a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames, where the storage state of a contiguous storage space is either allocated or free; S3, if such a contiguous free storage space exists, extracting it as the contiguous free storage space allocated for the number of free page frames.
  • the foregoing storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • the step, performed by the processor according to the program code stored in the storage medium, of allocating the contiguous free storage space to the application includes: if the size of the contiguous free storage space in the page frame linked list or tree is equal to the contiguous free storage space corresponding to the number of free page frames, allocating the contiguous free storage space to the application; if the size of the contiguous free storage space in the page frame linked list or tree is smaller than the contiguous free storage space corresponding to the number of free page frames, querying N contiguous free storage spaces in the page frame linked list or tree and determining whether the N contiguous free storage spaces together equal the contiguous free storage space corresponding to the number of free page frames in size, where N is an integer greater than 1; and, if the determination is yes, allocating the N contiguous free storage spaces to the application.
  • the processor, according to the program code stored in the storage medium, further performs: if the N contiguous free storage spaces are smaller than the contiguous free storage space corresponding to the number of free page frames, searching the page frame linked list or tree for a first contiguous free storage space larger than the contiguous free storage space corresponding to the number of free page frames; if the first contiguous free storage space is obtained, cutting it to obtain a second contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames, and allocating the second contiguous free storage space to the application, where the contiguous free storage space remaining after the cut is returned to the page frame linked list or tree; and, if the largest contiguous free storage space in the page frame linked list or tree is smaller than the contiguous free storage space corresponding to the number of free page frames, querying the free page frame linked list or tree for a contiguous storage space smaller than the largest contiguous free storage space whose size matches the contiguous free storage space corresponding to the number of free page frames, and, if the found contiguous storage space is still smaller than what is required, continuing to query for smaller contiguous storage spaces, until either a contiguous free storage space matching the contiguous free storage space corresponding to the number of free page frames is obtained, or a prompt is given that there is currently no contiguous free storage space able to match it.
  • after the contiguous free storage space has been allocated to the application, the storage space management method further includes: receiving a persistent memory request of the application, the persistent memory request being used to indicate querying the data pre-stored by the application; and, according to the persistent memory request, querying through the object descriptor in the memory system to obtain the data pre-stored by the application, where the object descriptor is used to indicate the page frame linked list or tree containing the at least one allocation unit descriptor.
  • the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; and in some cases the steps shown or described may be performed in an order different from that given here, or they may be made into individual integrated circuit modules, or multiple modules or steps of them may be made into a single integrated circuit module.
  • the invention is not limited to any specific combination of hardware and software.
  • the technical solution provided by the embodiments of the present invention can be applied to the management of storage space: a memory space request of an application is received; the number of free page frames requested in the memory space request is obtained; the free page frame organization is queried according to the number of free page frames, and a contiguous free storage space equal in size to the contiguous free storage space corresponding to the number of free page frames is obtained; and the contiguous free storage space is allocated to the application. This solves the problem in the related art that the lack of a memory management mechanism for NVM leads to inefficient memory space allocation, thereby improving the memory management efficiency of NVM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Memory System (AREA)

Abstract

一种存储空间管理方法及装置,上述方法具体包括:接收应用程序的内存空间请求(S102);获取内存空间请求中所请求的空闲页框数目(S104);依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间(S106);将连续空闲存储空间分配给应用程序(S108)。上述方法及装置解决了相关技术中缺少针对NVM的内存管理机制导致内存空间分配效率低下的问题,进而达到了提升NVM的内存管理效率的效果。

Description

一种存储空间管理方法及装置 技术领域
本发明涉及通信领域,具体而言,涉及一种存储空间管理方法及装置。
背景技术
随着云计算和大数据技术的迅猛发展,用户对存储系统的存储效率与存储质量的要求越来越高,即,需要具备容量大、密度高、能耗低、读写速度快等特点的存储系统,而以动态随机存取存储器(Dynamic Random Access Memory,简称DRAM)与闪存(Flash)为代表的存储介质逐渐达到了技术瓶颈,例如容量和密度瓶颈。即,在相同面积下,DRAM与Flash的容量已经很难再增加;此外,在很多手持设备中,DRAM的能耗,特别是刷新能耗已经占据手持设备系统能耗的40%左右。而在一些数据中心中,DRAM的刷新能耗所带来的成本增加也是不容小觑的。对于Flash而言,虽然密度大于DRAM,但其读写速度远远慢于DRAM,且写入次数限制过小。这些不利因素,限制了Flash的未来应用。因此,长期以来研究人员都在不断地寻找满足要求的、新的存储介质。
随着新型非易失存储器(Non-Volatile Memory,简称NVM)取得了不小的进步,它们的大容量、高密度、低能耗、读写速度快、磨损周期长等特点引起了学术界和工业界的广泛关注,使人们在云计算、大数据时代背景下,看到了存储系统性能提升的希望,在云计算和大数据背景下,很多应用程序都会要求自己的数据或数据结构被持久化存储。而持久化内存不仅可以满足这一要求,还可以减少存储栈层次,提高存储效率。应用程序使用持久化内存最有效的一种方式,即将持久化内存映射到进程地址空间中。如此一来,应用程序就可以直接读写持久化内存区域,极大地减少额外开销。当有多个进程都需要映射持久化内存时,就要对整个持久化内存进行有效的组织和管理,并根据应用的需求将指定大小的持久化存储区域,映射到进程地址空间。映射完成后,该持久化存储区域便可称之为持久化堆,应用程序就可以在其中存储需要持久化的数据或数据结构。这就是持久化内存管理机制的应用场景和主要任务。但是由于NVM的持久性,即使重启系统,空闲空间也不会增多。若不对持久化空间的分配和释放操作进行优化处理,空闲空间很有可能在较短时间内耗尽。因此,空间性能对于持久化内存管理而言是非常重要的。
然而,上述的持久化内存管理方案,却在很大程度上忽视了空间效率。因此在如何解决NVM的内存管理问题上,目前尚未提出有效的解决方案。
发明内容
本发明提供了一种存储空间管理方法及装置,以至少解决相关技术中缺少针对NVM的内 存管理机制导致内存空间分配效率低下的问题。
根据本发明的一个实施例,提供了一种存储空间管理方法,包括:接收应用程序的内存空间请求;获取内存空间请求中所请求的空闲页框数目;依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;将连续空闲存储空间分配给应用程序。
在本发明实施例中,在空闲页框组织中包括:页框链表或树,页框链表或树包括:至少一个分配单元描述符,至少一个分配单元描述符用于描述连续存储空间的存储状态以及连续存储空间对应的页框的起始地址和长度的情况下,依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间的步骤包括:查询页框链表或树中的至少一个分配单元描述符;将至少一个分配单元描述符中对应的连续空闲存储空间的大小,与应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;若存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取连续空闲存储空间作为分配至空闲页框数目对应的连续空闲存储空间;其中,连续存储空间的存储状态包括:已分配和空闲。
在本发明实施例中,将连续空闲存储空间分配给应用程序的步骤包括:若页框链表或树中的连续空闲存储空间的大小等于空闲页框数目对应的连续空闲存储空间大小,则将连续空闲存储空间分配给应用程序;若页框链表或树中的连续空闲存储空间的大小小于空闲页框数目对应的连续空闲存储空间大小,则在页框链表或树中查询N个连续空闲存储空间,判断N个连续空闲存储空间是否与空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;在判断结果为是的情况下,将N个连续空闲存储空间分配给应用程序。
在本发明实施例中,上述方法还包括:若N个连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间,则在页框链表或树中查找大于空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;若得到第一连续空闲存储空间,则对第一连续空闲存储空间进行切割,得到与空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;将第二连续空闲存储空间分配给应用程序;其中,将第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至页框链表或树;若页框链表或树中最大的连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且连续存储空间的大小与空闲页框数目对应的连续空闲存储空间匹配,若连续存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于连续空闲存储空间的连续存储空间,直至查询得到与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间,或,提示当前没有能够与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
在本发明实施例中,在将连续空闲存储空间分配给应用程序之后,方法还包括:接收应用程序的持久化内存请求,持久化内存请求用于指示查询应用程序预先存储的数据;依据持久化内存请求,通过内存系统中的对象描述符进行查询,得到应用程序预先存储的数据;其 中,对象描述符用于指示包含至少一个分配单元描述符的页框链表或树。
根据本发明的另一个实施例,提供了一种存储空间管理装置,包括:第一接收模块,设置为接收应用程序的内存空间请求;获取模块,设置为获取内存空间请求中所请求的空闲页框数目;第一查询模块,设置为依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;分配模块,设置为将连续空闲存储空间分配给应用程序。
在本发明实施例中,在空闲页框组织中包括:页框链表或树,页框链表或树包括:至少一个分配单元描述符,至少一个分配单元描述符用于描述连续存储空间的存储状态以及连续存储空间对应的页框的起始地址和长度的情况下,第一查询模块包括:查询单元,设置为查询页框链表或树中的至少一个分配单元描述符;第一匹配单元,设置为将至少一个分配单元描述符中对应的连续空闲存储空间的大小,与应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;第二匹配单元,设置为若存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取连续空闲存储空间作为分配至空闲页框数目对应的连续空闲存储空间;其中,连续存储空间的存储状态包括:已分配和空闲。
在本发明实施例中,分配模块包括:第一分配单元,设置为在页框链表或树中的连续空闲存储空间的大小等于空闲页框数目对应的连续空闲存储空间大小的情况下,将连续空闲存储空间分配给应用程序;判断单元,设置为在页框链表或树中的连续空闲存储空间的大小小于空闲页框数目对应的连续空闲存储空间大小的情况下,在页框链表或树中查询N个连续空闲存储空间,判断N个连续空闲存储空间是否与空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;第二分配单元,设置为在判断结果为是的情况下,将N个连续空闲存储空间分配给应用程序。
在本发明实施例中,分配模块还包括:空间查询单元,设置为在N个连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间的情况下,在页框链表或树中查找大于空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;空间切割单元,设置为在得到第一连续空闲存储空间的情况下,对第一连续空闲存储空间进行切割,得到与空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;第三分配单元,设置为将第二连续空闲存储空间分配给应用程序;其中,将第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至页框链表或树;第四分配单元,设置为在页框链表或树中最大的连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间的情况下,在空闲页框链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且连续存储空间的大小与空闲页框数目对应的连续空闲存储空间匹配,若连续存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于连续空闲存储空间的连续存储空间,直至查询得到与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间,或,提示当前没有能够与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
在本发明实施例中,上述装置还包括:第二接收模块,设置为在将连续空闲存储空间分 配给应用程序之后,接收应用程序的持久化内存请求,持久化内存请求用于指示查询应用程序预先存储的数据;第二查询模块,设置为依据持久化内存请求,通过内存系统中的对象描述符进行查询,得到应用程序预先存储的数据;其中,对象描述符用于指示包含至少一个分配单元描述符的页框链表或树。
通过本发明实施例,采用接收应用程序的内存空间请求;获取内存空间请求中所请求的空闲页框数目;依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;将连续空闲存储空间分配给应用程序。解决了相关技术中缺少针对NVM的内存管理机制导致内存空间分配效率低下的问题,进而达到了提升NVM的内存管理效率的效果。
附图说明
此处所说明的附图用来提供对本发明的进一步理解,构成本申请的一部分,本发明的示意性实施例及其说明用于解释本发明,并不构成对本发明的不当限定。在附图中:
图1是根据本发明实施例的存储空间管理方法的流程图;
图2是根据本发明实施例的一种内存池结构的结构框图;
图3是根据本发明实施例的一种元数据内存池子池初始状态图;
图4是根据本发明实施例一种元数据内存池子池第一次分配后状态图;
图5是根据本发明实施例一种元数据内存池子池第二次分配后状态图;
图6是根据本发明实施例一种元数据内存池子池释放第一个元素后状态图;
图7是根据本发明实施例一种空闲页框组织结构图;
图8是根据本发明实施例的存储空间管理装置的结构框图;
图9是根据本发明实施例的一种存储空间管理装置的结构框图;
图10是根据本发明实施例的另一种存储空间管理装置的结构框图;以及,
图11是根据本发明实施例的又一种存储空间管理装置的结构框图。
具体实施方式
下文中将参考附图并结合实施例来详细说明本发明。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
需要说明的是,本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
实施例一
本实施例提供的存储空间管理方法可以适用于非易失存储存储器,其中,非易失存储存储器(Non-Volatile Memory,简称NVM)至少可以包括:阻变式存储器(Resistive RAM,简称RRAM)、相变存储器(Phase Change Memory,简称PCM)、磁性随机存储器(Magnetic RAM,简称MRAM)以及自旋力矩转移存储器(Spin-Torque Transfer RAM,简称STT RAM)。其中,该类型的存储器具备大容量、高密度、低能耗、读写速度快、磨损周期长等特点,该类型的存储器可以直接连接到处理器内存子系统中,即和内存总线相连。在这种情况下,NVM可被称为持久化内存,即持久化内存(Persistent Memory)。在云计算和大数据背景下,很多应用程序都会要求自己的数据或数据结构被持久化存储。而持久化内存不仅可以满足这一要求,还可以减少存储栈层次,提高存储效率。应用程序使用持久化内存最有效的一种方式,即将持久化内存映射到进程地址空间中。如此一来,应用程序就可以直接读写持久化内存区域,极大地减少额外开销。当有多个进程都需要映射持久化内存时,就要对整个持久化内存进行有效的组织和管理,并根据应用的需求将指定大小的持久化存储区域,映射到进程地址空间。映射完成后,该持久化存储区域便可称之为持久化堆,应用程序就可以在其中存储需要持久化的数据或数据结构。
但是由于相关持久化内存管理方案中基于传统页框管理机制在buddy系统基础之上,针对NVM磨损周期短、读写性能不对称等特性进行优化设计,而未考虑buddy系统带来的空间浪费问题。该问题主要表现在以下几个方面:
首先,在向buddy系统请求分配空间时,即使用户申请的页框数不是2的幂,但buddy系统总是按照2的幂进行分配。比如,用户只请求300个页框,但buddy系统会分配给它512个页框。显然,这浪费了212个页框。同样由于上述原因,可能会出现即使有足够的连续空间,也不能分配给用户的情况。比如,当前最大的连续空闲页框数是384,而用户同样申请300个页框。虽然空闲页框数大于请求的页框数,但buddy系统依然会尝试分配512个连续页框。显然,此时不能满足分配请求,而无形中这384个连续页框没有得到充分利用,导致了空间浪费。
其次,由于页表机制的存在,在向进程/内核地址空间映射时,被映射的存储空间不一定非要连续。而buddy系统没有充分利用这一点,只提供了连续页框的分配方式。如此一来,就可能会出现即使有足够的不连续空闲空间,也不能满足用户分配要求的情况。在这种情况下,很有可能造成一些小的内存碎片长期得不到使用,导致空间浪费。
最后,虽然buddy系统只能按照2的幂进行页框分配,但是构建于buddy系统之上的内存管理机制,可以通过多次调用buddy系统,以达到满足非2的幂的空间请求的同时,保证不多分配空间给用户的目的。该方法是基于如下规律,即任何一个整数都能以二进制形式表示。比如用户申请255个页框,可以转换成8次对buddy系统的调用。其中,每次分别申请 128、64、32、16、8、4、2、1个连续页框。该方法虽然避免了因buddy系统只按照2的幂进行分配所造成的空间浪费,但是引入了过多的元数据。每次buddy系统进行分配后,都需要一个元数据描述当前获得的空间。显然,上述例子中需要8个元数据,才能描述8次调用所获得的空间。因此,上述方法的主要缺陷,即元数据空间开销过大。
并且,在文件系统进行管理时,在NVM上建立文件系统,然后使用一个文件或其他对象映射进内存(mmap)的方式映射持久化内存。实际上,文件系统的应用场景和前面描述的持久化内存的应用场景是截然不同的。虽然可以使用文件系统完成持久化内存映射和反映射的任务,但是文件系统中的许多设计,对持久化内存应用场景而言是不合适的。使用文件系统管理持久化内存的主要问题,即元数据部分的空间开销过大。例如,在一些文件系统中,使用位图管理数据块,其中位图的长度和存储空间的大小成比例。如此一来,存储空间越大,位图所代表的元数据部分所占据的空间也就越大。换句话说,元数据的使用量不具备可扩展性,其空间开销过大。再比如,一些文件系统认识到了上述位图设计方案空间性能不佳,而采用了B树组织一个文件的若干连续存储空间。其中,B树节点的大小即一个数据块的大小,比如4KBytes。这种设计对于文件系统而言可能是高效的,但是对于持久化内存应用场景却是不适合的。应用程序在向持久化内存管理机制申请映射空间时,通常都会给出被映射的空间大小。因此,若要使用文件系统管理持久化内存,最高效的做法之一,就是在进行mmap操作之前,首先调用fallocate等函数为被映射的文件,尽可能地预留连续物理空间。虽然应用程序可以通过增大被映射文件的方式,以扩容其持久化堆空间,但是该操作往往只会发生在其持久化堆的剩余空间过小时。换句话说,一个持久化堆对应的映射文件,所包含的连续空间的个数,相对于普通文件而言很有限。因此在这种情况下,使用B树,甚至是使用B树的一个节点,组织被映射文件的、个数极其有限的连续空间,是非常浪费的。另外,文件系统中的一些元数据结构,比如块组及其描述符表等,对于持久化内存应用场景而言,完全是不需要的。显然,这部分元数据所占用的空间,也是被浪费掉了。
因此,本发明实施例针对上述问题提供一种存储空间管理方法,基于NVM介质特点和持久化内存应用场景,具体如下:
在本实施例中提供了一种存储空间管理方法,图1是根据本发明实施例的存储空间管理方法的流程图,如图1所示,该流程包括如下步骤:
步骤S102,接收应用程序的内存空间请求;
步骤S104,获取内存空间请求中所请求的空闲页框数目;
步骤S106,依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
步骤S108,将连续空闲存储空间分配给应用程序。
具体的,首先接收来自应用程序的内存空间请求,其次,在对该内存空间请求进行解析的过程中,获取该内存空间请求中该应用程序所请求的空闲页框数目,第三,在获取到空闲页框数目后查询页框链表或树,进而获取与该空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,最后,将连续空闲存储空间分配给应用程序。
通过上述步骤,采用接收应用程序的内存空间请求;获取内存空间请求中所请求的空闲页框数目;依据空闲页框数目查询页框链表或树,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;将连续空闲存储空间分配给应用程序。解决了相关技术中缺少针对NVM的内存管理机制导致内存空间分配效率低下的问题,进而达到了提升NVM的内存管理效率的效果。
本实施例在实现存储空间管理方法的过程中还提供了以下两种元数据:
一是代表连续页框的分配单元描述符,其中,无论是空闲空间,还是已分配空间,均是分配单元描述符表示;且,分配单元描述符中,会存放其描述的连续页框的起始地址和长度等信息。
二是代表持久化堆的对象描述符。对应的,对象描述符会指向一个包含了若干分配单元描述符的链表,以表示其对应的持久化堆所包含的若干连续页框。为了减少元数据分配和释放操作所产生的内存碎片,本实施例还设计了元数据内存池。由于有两种元数据,所以分别实例化了分配单元描述符内存池和对象描述符内存池。
在本发明实施例中,在空闲页框组织中包括:页框链表或树,页框链表或树包括:至少一个分配单元描述符,至少一个分配单元描述符用于描述连续存储空间的存储状态以及连续存储空间对应的页框的起始地址和长度的情况下,步骤S106中的依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间的步骤包括:
Step1,查询页框链表或树中的至少一个分配单元描述符;
Step2,将至少一个分配单元描述符中对应的连续空闲存储空间的大小,与应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
Step3,若存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取连续空闲存储空间作为分配至空闲页框数目对应的连续空闲存储空间。
其中,连续存储空间的存储状态包括:已分配和空闲。
具体的,在空闲页框组织(即,对应的连续空闲存储空间)方面,本实施例根据连续页框对应的连续空闲存储空间的大小,将连续空闲存储空间对应的分配单元描述符放入页框链表或树中。本实施例以提供了一共使用了127个空闲页框链表为例进行说明,其中,每个页框链表由多个分配单元描述符组成,同一个页框链表中分配单元描述符对应的存储空间大小 相同。例如,第1个页框链表中,每个分配单元描述符对应的空间大小是1个页框;第2个页框链表中,每个分配单元描述符对应的空间大小是2个连续页框;第3个页框链表中,每个分配单元描述符对应的空间大小是3个连续页框,直至第127个页框链表中,每个分配单元描述符对应的空间大小时127个连续页框。
另外,本实施例还提供了以使用了5棵平衡二叉树为例进行说明,其中,该5可平衡二叉树用于存放较大的连续页框。其中,树的key即连续页框的大小。每棵树也是由多个分配单元描述符组成,同一棵树内的分配单元描述符对应的空间大小设置为在固定的范围内。例如,第1棵树中,每个分配单元描述符对应的空间大小在128个页框到255个页框之间,第2棵树中是在256个页框到511个页框之间,第3棵树中是在512个页框到1023个页框之间,第4棵树中是在1024个页框到2047个页框之间,超过2048个连续页框空闲空间对应的分配单元描述符,位于第5棵树。有可能存在多个分配单元描述符对应同一个树节点,即这些分配单元描述符所对应的连续空间大小相等。此时,会以链表的形式组织这些分配单元描述符,并由树节点指向该链表。
综上,通过查询页框链表或树中的至少一个分配单元描述符,将至少一个分配单元描述符中对应的连续空闲存储空间的大小,与应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,在页框链表或树中是否存在与应用程序请求的空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间。
在本发明实施例中,步骤S108中的将连续空闲存储空间分配给应用程序的步骤包括:
Step1,若页框链表或树中的连续空闲存储空间的大小等于空闲页框数目对应的连续空闲存储空间大小,则将连续空闲存储空间分配给应用程序;
具体的,根据应用程序请求的空闲页框数目对应的连续空闲存储空间的大小,在页框链表或树中查找和该空闲页框数目对应的连续空闲存储空间大小相等的连续空间。
或者,
Step2,若页框链表或树中的连续空闲存储空间的大小小于空闲页框数目对应的连续空闲存储空间大小,则在页框链表或树中查询N个连续空闲存储空间,判断N个连续空闲存储空间是否与空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;
Step3,在判断结果为是的情况下,将N个连续空闲存储空间分配给应用程序。
具体的,若在Step1的基础上,应用程序所请求的空闲页框数目对应的连续空闲存储空间的大小无法被满足,则依据Step2和Step3在页框链表或树中查找两个连续空闲存储空间的大小与该空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间。
本实施例以2个连续空闲存储空间为例对存储空间管理方法进行说明,以实现本实施例提供的存储空间管理方法为准,具体不做限定。
在本发明实施例中,在上述步骤S108中Step1至Step3并列的方案中,本实施了提供的 存储空间管理方法还包括:
Step4,若N个连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间,则在页框链表或树中查找大于空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;
Step5,若得到第一连续空闲存储空间,则对第一连续空闲存储空间进行切割,得到与空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;
Step6,将第二连续空闲存储空间分配给应用程序;
其中,将第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至页框链表或树;
具体的,基于Step1,Step2和Step3对应的连续空闲存储空间的分配方式,当Step1,Step2和Step3提供的分配方案均无法满足应用程序所请求的空闲页框数目对应的连续空闲存储空间的大小时,依据Step4至Step6提供的方法在页框链表或树中查找最接近、但大于所请求大小的第一连续空闲存储空间,在找到该第一连续空闲存储空间后,会对该第一连续空闲存储空间进行切割,分离出与空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间,并将剩下的连续空闲存储空间放入对应的页框链表或树中。
Step7,若页框链表或树中最大的连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且连续存储空间的大小与空闲页框数目对应的连续空闲存储空间匹配,若连续存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于连续空闲存储空间的连续存储空间,直至查询得到与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间,或,提示当前没有能够与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
具体的,当Step1,Step2和Step3以及Step4至Step6提供的分配方案均无法满足应用程序所请求的空闲页框数目对应的连续空闲存储空间的大小时,即当前系统最大连续空间都小于所请求的大小时,首先在空闲页框链表或树中查找最大的连续空间;其次,会递归地查找次大的连续空间,直到获取到满足该空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间为止。
此外,在进行页框释放时,会检查与被释放页框物理连续的页框的状态。若该物理连续的页框是空闲的,则会引发合并操作。即,只使用一个分配单元描述符表示合并后的连续页框。
由此得出,在页框分配以及释放的过程(即,连续空闲存储空间分配及释放的过程)中,本实施例提供的存储空间管理方法能够以使用最少的分配单元描述符为代价,满足应用程序对连续空闲存储空间的大小需求,以及在已分配的连续存储空间释放后,依然能够以一个分配单元描述符将已释放后的连续存储空间进行合并,从而节约系统资源。
在本发明实施例中,在步骤S108将连续空闲存储空间分配给应用程序之后,本实施例提 供的存储空间管理方法还包括:
步骤S109,接收应用程序的持久化内存请求,持久化内存请求用于指示查询应用程序预先存储的数据;
步骤S110,依据持久化内存请求,通过内存系统中的对象描述符进行查询,得到应用程序预先存储的数据;
其中,对象描述符用于指示包含至少一个分配单元描述符的页框链表或树。
具体的,这里所说的对象,即持久化堆。由步骤S109和步骤S110中的方法,对象或持久化堆,是由对象描述符表示,该对象描述符中除了包含由分配单元描述符组成的页框链表外,还保存了持久化堆的标识。通过该标识,可以让应用程序找回之前申请到的持久化内存。本实施例中,将所有的对象描述符组织到一个平衡二叉树中。其中,树的key即持久化堆的标识。
本实施例提供的存储空间管理方法与相关技术的区别是:与基于传统页框管理机制的方案不同,本实施例不会多分配持久化内存给应用程序。而是依据Step1至Step7中对应描述的四种分配策略,无论哪一种策略,都是按照应用申请的页框数进行分配。即应用申请多少页框,就分配多少页框,不会出现浪费空间的情况。
其次,与基于传统页框管理机制的方案不同,本实施例不会出现有足够空闲空间,却不能进行分配的情况。仍旧依据Step1至Step7中对应描述的四种分配策略为例,在本实施例中当释放页框时,总是会尽力合并相邻的空闲连续页框。换句话说,在空闲链表和树中的连续页框都是不可再合并的。因此按照切割分配的原则,是不可能出现有足够的连续空间,却不能进行分配的情况。另一方面,组合分配和尽力分配的策略,会将离散的多个连续空间组合起来进行分配。因此,也不可能出现有足够的若干不连续空间,却不能进行分配的情况。很显然,上述两项优势,有利于提升持久化内存的空间利用率。
第三,与相关的持久化内存管理方案相比,本实施例的元数据空间开销很小,且元数据使用量具有可扩展性。具体的,本实施例中,元数据主要包括分配单元描述符和对象描述符,而这两个描述符会构成链表或树的节点。其中,分配单元描述符包含了所描述的连续页框的起始地址和长度、与所描述的连续空间物理相邻的空间所对应的分配单元描述符的地址、构成链表或树的指针等信息;而对象描述符则包含了持久化堆标识、指向分配单元描述符链表的指针、构成树的指针等信息。很显然,本实施例设计的两种元数据所占的空间都很小,均在几十个字节以内。而与文件系统不同的是,将整个数据块作为一个B树节点。
在进行空闲页框分配时,每次只会增加一个或两个元数据。如果是新映射一个持久化堆,则会新增一个对象描述符;而如果只是对已有的持久化堆扩容,则不会增加对象描述符。下面说明分配单元描述符的增加情况。其中,在按照确切分配策略进行空间分配时,可以直接将找到的分配单元描述符,加入到对象描述符所包含的页框链表中。因此,不会增加新的分配单元描述符。在按照组合分配策略进行空间分配时,由于只会使用找到的两个分配单元描 述符,因此该种情况也不会新增分配单元描述符。在按照切割分配策略进行空间分配时,会对找到的空闲空间进行分割,此时就需要一个新的分配单元描述符以表示切割后剩余的空间或分配出去的空间。在按照尽力分配策略进行空间分配时,会找到若干个分配单元描述符及其对应的空闲空间。如果这些空闲空间的大小之和,恰好等于请求的空间大小,则无需新增分配单元描述符。否则,将需要对最后找到的那个空闲空间进行切割。在这种情况下,将会新增一个分配单元描述符。进行持久化堆扩容时,本实施例在实施四项分配策略时,会尽量在原有空间的尾部(即高地址部分)分配新空间,即新空间和原有空间物理相邻。如此一来,就可以将原有空间和新空间进行合并,即减少分配单元描述符的个数。
在归还页框时,本实施例会合并物理相邻的空闲空间。因此,归还操作也可能会减少分配单元描述符。
本实施例中,除了对象描述符和分配单元描述符外,元数据部分还包括:大小固定的NVM总描述符和日志区域。其中,NVM总描述符和日志区域所占空间很小,只有一个页框。
综上,元数据部分的空间复杂度即为O(CN+ε),其中N为持久化堆的个数、C为一个堆进行扩容操作的次数、ε为NVM总描述符等代表的固定开销,而CN即持久化空间申请的次数。如前所述,通常情况下只有当持久化堆的剩余空闲空间不够用时,才会引发扩容操作,因此扩容次数即C通常很小,完全可以当作一个常量。如此一来,元数据部分的空间复杂度,只和持久化堆的个数有关,而与分配的空间大小或整个持久化内存大小无关,具备良好的可扩展性。
第四,本实施例在持久化内存内部为持久化堆建立了索引结构,即对象描述符组成的一棵平衡二叉树。通过持久化堆的标识,就可以在这棵树中找到对应的对象描述符,以及持久化堆所拥有的若干连续存储空间,即实现了页框找回。上述方案所提供的名字服务,仅依赖于持久化内存内部维护的数据结构,避免了之前分析的通过外部存储找回页框而带来的不可靠问题。
具体的,下面首先介绍本实施例中应用到的持久化内存的总体布局,然后再分析元数据及其内存池,并在此基础之上,介绍页框的组织与分配,以及对象描述符的组织与管理。
第一,内存布局:
在整个持久化内存空间的头部,有一个被称为NVM总描述符的固定大小区域。该区域包含了整个持久化内存管理机制的控制描述信息,比如各类标志信息、空闲页框管理结构信息等等。下面给出了这个头部区域的结构定义,即NVM总描述符。
Figure PCTCN2015087705-appb-000001
Figure PCTCN2015087705-appb-000002
NvmDescriptor结构体中的各个字段解释如下:
FormatFlag:格式化标志,用于判断当前持久化内存是否已经完成了格式化。
InitialFlag:初始化标志,用于标识系统启动时故障恢复中的各个阶段。
NVMSize:NVM区大小,用于表示整个持久化内存的空间大小。
MetaDataSize:整个持久化内存中,元数据部分所占大小。
FreePageNumber:整个持久化内存中,未被使用的页框数目。
FreePageLists:空闲页框链表入口。该入口一共包含了127个指针,分别指向对应的空闲页框链表NvmAllocUnitDescriptorInList(该结构将在后面描述)。
FreePageTrees:空闲页框树入口。该入口一共包含了5个指针,分别指向对应的空闲页框树NvmAllocUnitDescriptorInTree(该结构将在后面描述)。
NvmObjectTree:对象描述符树入口,即指向对象描述符树NvmObjectDestriptor(该结构将在后面描述)。
AllocUnitDescriptorMemoryPoolForList:位于链表中的分配单元描述符内存池入口点结构。该入口点类型NvmMetaDataPool,将在后面描述。
AllocUnitDescriptorMemoryPoolForTree:位于树中的分配单元描述符内存池入口点结构。该入口点类型NvmMetaDataPool,将在后面描述。
NvmObjectDescriptorMemoryPool:对象描述符内存池入口点结构。该入口点类型NvmMetaDataPool,将在后面描述。
实际上,上述NvmDescriptor结构体占据在持久化内存的头部区域,即第一个页框中。其中,页框的大小设为4KBytes。由于NvmDescriptor的大小仅1KBbytes多,因此第一个页框的剩余部分作为了日志区域使用。
第二,元数据及内存池
a、元数据结构
如前所述,本专利方案使用了两种元数据,一是分配单元描述符,二是对象描述符。分配单元描述符用于表示一段连续页框,下面给出了其各个字段的定义。
Figure PCTCN2015087705-appb-000003
NvmAllocUnitDescriptor即分配单元描述符,其各个字段含义如下。
SpaceAddress:当前分配单元描述符所表示的连续页框的基地址。
SpaceSize:当前分配单元描述符所表示的连续页框的大小。
PreSpaceAddress:与当前分配单元描述符所描述空间物理相邻(前驱)的连续空间,对应的分配单元描述符的地址。
NextSpaceAddress:与当前分配单元描述符所描述空间物理相邻(后继)的连续空间,对应的分配单元描述符的地址。
Flags:当前分配单元描述符对应的连续空间的状态标志,包括是否是空闲空间等信息。
由于分配单元描述符可以位于链表或树中,因此NvmAllocUnitDescriptor结构体衍生出了两种包裹结构,即NvmAllocUnitDescriptorInList和NvmAllocUnitDescriptorInTree。这两个结构体的定义如下。
Figure PCTCN2015087705-appb-000004
Figure PCTCN2015087705-appb-000005
从上述定义可以看出,NvmAllocUnitDescriptorInList结构代表了位于链表中的分配单元描述符,而NvmAllocUnitDescriptorInTree结构则代表了位于树中的分配单元描述符。其中,Prev和Next字段分别指向双向链表节点的前驱和后继节点,而LeftChild、RightChild和Parent字段,则分别指向了树中节点的左右孩子和父节点。另外,SameSizeList字段比较特殊。当分配单元描述符位于树中时,可能存在这样一种情况,即多个分配单元描述符所对应的连续空间大小相等。此时,本专利方案就将这些分配单元描述符,以链表形式组织,并由树节点指向。而SameSizeList字段,即为了构成上述链表。
对象描述符代表了持久化堆,其结构如下所示。
Figure PCTCN2015087705-appb-000006
NvmObjectDestriptor中各个字段含义如下:
LeftChild、RightChild和Parent:由于持久化堆对应的对象描述符会位于一棵树中,因此上述三个字段分别指向了树中的左右孩子、父节点。
SpaceList:指向分配单元描述符链表,即一个由分配单元描述符组成的链表。该链表的每个元素,表示了持久化堆所拥有的连续空间。
NvmObjectSize:当前对象描述符表示的持久化堆空间的总大小。
Flags:当前持久化堆的各种标志。
NvmUUID:128位的持久化堆标识
b、元数据内存池
为了减少内存碎片,本专利方案为元数据建立了三种内存池,即位于链表中的分配单元描述符内存池、位于树中的分配单元描述符内存池和对象描述符内存池。这三种内存池的结构完全一致,不同点仅在于其内部管理的元数据大小不一。下面给出了内存池入口点的结构定义。
Figure PCTCN2015087705-appb-000007
NvmMetaDataPool中的各个字段定义如下:
TotalElementNumber:内存池中总共元数据的个数。
FreeElementNumber:内存池中可分配的元数据个数。
MetaDataFreeList:指向可分配元素子池链表,链表中的每个元素是分配单元描述符。
MetaDataFullList:指向不可分配元素子池链表,链表中的每个元素是分配单元描述符。
一个内存池是由若干个内存子池构成的,而每个内存子池是由分配单元描述符表示。NvmMetaDataPool中包含的两个指针,分别指向了可分配元素子池链表和不可分配元素子池链表。前者还包含有空闲元数据,而后者的所有元数据均已被分配出去。图2给出了上述内存池的基本结构。
图2除了给出上述信息外,也给出了子池的基本结构。如前所述,内存池子池是由分配单元描述符表示,即通过该分配单元描述符可找到内存子池的实际存储空间。在这实际存储空间的头部,是一个NvmMetaDataSubPool结构体实例,用于控制内存子池中的分配和释放操作。下面给出了该结构体的定义。
Figure PCTCN2015087705-appb-000008
NvmMetaDataSubPool中各个字段的含义如下:
TotalElementNumber:表示当前内存子池中总共的元素个数。
FreeElementNumber:表示当前内存子池中可供分配的元素个数。
FirstFreeElementAddress:表示当前内存子池中可供分配的第一个元素的地址。
FreedElementNumber:表示当前内存子池中被释放的元素个数,即以链表形式组织起来的元数据个数。该字段的具体含义见后面的分析。
内存子池实际存储区域中,除了头部的NvmMetaDataSubPool结构外,剩余部分都是用于元素存储。在内存池子池的内部存在一个链表,该链表是用于链接被释放的元素。每一个这样的元素的头8个字节,用于描述该元素相关的控制信息。元素的状态不同,其头8个字节的含义不同。元素存在三种状态:从未分配、已分配、被释放(其中从未分配和被释放都是空闲状态)。头8个字节的含义如下:
当元素为从未分配状态时,该元素后面可能存在若干连续元素,其状态都是从未分配。这种情况下,当前元素的头8个字节存储这种连续元素的个数。
当元素为已分配状态时,该元素的头8个字节存储其所在内存池子池对应的分配单元描述符结构的地址,即NvmAllocUnitDescriptorInList的地址。
当元素为被释放状态时,该元素的头8个字节存储下一个可分配元素的地址。
c、元数据的分配和释放
元数据的分配和释放建立在上述元数据内存池结构之上,下面说明具体的分配和释放步骤。
当接收到元数据分配请求后,首先判断所请求的元数据类型,即位于链表中的分配单元描述符、位于树中的分配单元描述符或对象描述符。
根据元数据类型,在NvmDescriptor结构和NvmMetaDataPool结构中获取可分配的内存子池的信息。
然后,在内存子池中完成元数据的分配操作。
对于释放操作而言,元数据的释放过程如下:
首先,当接收到元数据释放请求时,根据存储在元数据头8个字节内的信息,找到该元数据所属的内存子池对应的分配单元描述符,即NvmAllocUnitDescriptorInList结构体实例的地址。
其次,在找到的内存池子池中执行元数据释放操作。
从上面的讨论可知,元数据的分配和释放操作,主要发生在内存池子池内。下面将列举 一个例子,说明如何在子池中进行元素的分配和释放。
初始化时,内存池子池的所有元素都是未分配的,图3给出了初始时的状态。在图3中,NvmMetaDataSubPool的FirstFreeElementAddress字段指向了该子池的第一个元素,也是第一个可以分配出去的元素。
当进行了第一次元素分配后,第一个可分配元素被分配出去了,即如图4所示。此时在图4中,FirstFreeElementAddress字段,指向了第二个元素。当再一次分配元素后,FirstFreeElementAddress字段将指向第三个元素,即如图5所示。假设第一个元素被释放,FirstFreeElementAddress字段将会指向被释放的元素,并在该元素中存储FirstFreeElementAddress字段之前指向的元素的地址。整个过程的最终结果,将会如图6所示。这样一来,被释放的元素将会组织进一个链表中,方便再次分配使用。
综上,本实施例中提到的元素可以为分配单元描述符对应的空闲的内存空间,以实现本实施例提供的存储空间管理方法为准,具体不做限定。
第三,空闲页框的组织、分配和释放
如前所述,本实施例将连续的空闲页框,按照其大小组织到了127个链表和5棵平衡二叉树中,即NvmDescriptor结构中的FreePageLists和FreePageTrees字段。图7给出了空闲页框的组织示意,该图的左侧是空闲页框链表入口点和空闲页框树入口点结构,而右侧则是若干连续空闲页框对应的分配单元描述符,组成的若干链表和平衡二叉树。在图7中,链表的元素和树的节点,均是分配单元描述符。另外,从图7中也可以看出,本专利方案采用的树是红黑树。
由于前面已经讨论了页框分配和释放的策略,这里仅就分配和释放的流程进行说明。主要的分配过程如下:
Step1,当接收到分配请求后,首先访问位于持久化内存头部的NvmDescriptor结构体。并根据其FreePageNumber字段,判断是否拥有足够的空闲页框。若空闲页框数不够,则出错返回。
Step2,若有足够的空闲页框,则判断请求的页框数量n,是否大于127个页框。若大于,则转向7)执行;否则,执行3)。
Step3,判断NvmDescriptor结构中的FreePageLists[n-1]链表,是否为空。若不是,则从该链表中取出一个分配单元描述符并将其返回给请求者,同时更新NvmDescriptor结构中的FreePageNumber字段。即,执行了确切分配的策略。
Step4,若FreePageLists[n-1]链表为空,则遍历FreePageLists[0]到FreePageLists[n-2]中的链表,以判断是否存在两个分配单元描述符,其对应的空间之和等于n。若找到,则将找到的分配单元描述符返回给请求者,同时更新NvmDescriptor结构中的FreePageNumber字段。即,执行了组合分配的策略。
Step5,若4)步骤查找失败,且n<127,则从FreePageLists[n]开始查找,直到在FreePageLists数组中找到一个非空的链表。若找不到或n等于127,则转向7)执行。
Step6,若在FreePageLists数组中找到一个非空的链表,则从其中取出一个分配单元描述符,并对其所表示的连续空间按照n的大小进行切割。在切割的过程中,会从分配单元描述符内存池中申请一个空闲的分配单元描述符,以表示切割后剩余的空间,并根据剩余空间的大小将其加入到FreePageLists中的某个链表。最后,将原来的分配单元描述符返回给请求者,同时更新NvmDescriptor结构中的FreePageNumber字段。上述过程,即切割分配。
Step7,根据n的大小,确定在FreePageTrees中的哪棵树内进行分配。
Step8,若找到的树FreePageTrees[m]不为空,则根据n的值,在树中进行查找。若能找到一个分配单元描述符,其对应的空间大小正好是n个页框,则取出该分配单元描述符并将其返回给请求者,同时更新NvmDescriptor结构中的FreePageNumber字段。即,执行了确切分配的策略。
Step9,若7)中找到的树为空,或8)中未找到上述分配单元描述符,则遍历FreePageLists[0]到FreePageTrees[m],以判断是否存在两个分配单元描述符,其对应的空间之和等于n。若找到,则将找到的分配单元描述符返回给请求者,同时更新NvmDescriptor结构中的FreePageNumber字段。即,执行了组合分配的策略。
Step10,若9)执行失败,则从FreePageTrees[m]开始遍历,直到找到一个分配单元描述符,其对应的空间大小大于n个页框。
Step11,若找到一个分配单元描述符满足10)的要求,则对该分配单元描述符进行切割。在切割的过程中,会从分配单元描述符内存池中申请一个空闲的分配单元描述符,以表示切割后剩余的空间,并根据剩余空间的大小将其加入到FreePageLists中的某个链表或FreePageTrees中的某个树。最后,将原来的分配单元描述符返回给请求者,同时更新NvmDescriptor结构中的FreePageNumber字段。上述过程,即切割分配。
Step12,若10)执行失败,则从FreePageTrees[4]内的最大分配单元描述符开始递归查找,直到找到足够的空闲页框为止。最后,会将找到的分配单元描述符返回给请求者,并更新NvmDescriptor结构中的FreePageNumber字段。上述过程,即尽力分配。
具体的,页框的释放流程如下:
1、接收到被释放的连续页框的分配单元描述符后,根据NvmAllocUnitDescriptor结构体的PreSpaceAddress字段和NextSpaceAddress字段,找到与被释放页框物理相邻的两个分配单元描述符。
2、根据找到的分配单元描述符NvmAllocUnitDescriptor的Flags字段,判断其对应的空间是否是空闲的。若不是空闲的,则转入4)执行。
3、若是空闲的,则进行合并操作。即使用一个分配单元描述符,表示合并后的空间。合 并时,需要从原来所在的FreePageLists或FreePageTrees中,取下被合并的分配单元描述符;之后,再将合并后的分配单元描述符,按照其空间大小放入FreePageLists或FreePageTrees中,并更新NvmDescriptor结构的FreePageNumber字段。
4、根据释放的分配单元描述符对应的空间大小,将其放入到FreePageLists或FreePageTrees中,并更新NvmDescriptor结构的FreePageNumber字段。
第四,对象描述符的组织与管理
由于存在页框找回,即持久化堆找回的问题,所以必须要记录应用程序创建了哪些持久化堆,以及每个持久化堆又包含了哪些页框。本实施例中,使用对象描述符这种元数据,表示一个持久化堆。对象描述符中会记录持久化堆的标识、持久化堆所拥有的若干连续页框(即对象描述符中的分配单元描述符链表)。本实施例中,会根据每个持久化堆的标识,将所有的对象描述符组织成一棵平衡二叉树,该树的根节点即由NvmDescriptor结构中的NvmObjectDestriptor字段指向。当应用程序新创建一个持久化堆时,其主要的操作过程如下:
首先,根据NvmDescriptor结构中的NvmObjectTree字段,查找创建者传递而来的对象UUID。若找到UUID,则出错返回。
其次,若找不到UUID,则首先根据NvmObjectDescriptorMemoryPool字段,从对象描述符内存池中申请一个空闲的对象描述符,并存入UUID等信息。
第三,将申请到的对象描述符插入到NvmObjectTree指向的红黑树中。
最后,根据创建者要求的空间大小,申请若干空闲页框,并将申请到的分配单元描述符,加入到NvmObjectDestriptor结构的NvmAllocUnitDescriptorInList链表中。
持久化堆的扩充操作和上述过程类似,只是不用创建新的对象描述符而已。当创建完持久化堆后,就可以将申请的页框映射到进程地址空间。下面描述持久化堆删除的主要步骤。
a,当应用程序要删除一个持久化堆时,需要给出被删堆的标识,即UUID。
b,根据NvmDescriptor结构中的NvmObjectTree字段,查找删除者传递而来的对象UUID。若找不到UUID,则出错返回。
c,找到UUID后,即获得对象描述符后,遍历其NvmAllocUnitDescriptorInList链表。
d,对于上述链表中的每个分配单元描述符,都调用页框释放操作,归还分配单元描述符对应的空间。
e,最后,从NvmObjectTree指向的红黑树中删除对象描述符,并归还给对象描述符内存池。
在执行持久化堆的缩减操作时,需要指定缩减的大小。本实施例主要的缩减过程同删除一个持久化堆类似,只是不需要从树中删除对象描述符。
第五,其他说明
在介绍了内存布局、元数据组织、页框的组织与管理,以及对象描述符的组织与管理后,接下来将对锁和日志这两部分内容进行介绍。
由于持久化堆是用来存放应用程序需要持久化保存的数据或数据结构,因此,在同一个系统中,持久化堆的个数和应用的个数相关。而一个应用,通常只会拥有少量的持久化堆。因此,整个系统中持久化堆的个数相对较少。如前所述,只有当持久化堆内剩余的空闲空间过小或过大时,才会导致持久化堆进行扩容或缩减。换句话说,扩容和缩减的操作次数很少。通常情况下,只有持久化堆的申请、删除、扩容和缩减操作,才会在较大程度上影响持久化内存管理机制的并发性。从上面的分析可以看出,这些操作的次数相对比较少。因此,持久化内存管理机制产生争用的概率较小。所以,本专利方案采用了简单的粗粒度锁,在上述操作入口处即进行加锁。
另外,为了确保持久化内存管理机制的一致性,特别是为了确保出现系统掉电等故障后的一致性,本专利方案采用了传统的undo类型日志方案。在进行持久化堆的申请、删除、扩容和缩减等操作时,会记录元数据部分的所有修改项。在实施例中,用于保存日志的区域,即持久化内存的第一个页框除去NvmDescriptor结构的部分。NvmDescriptor结构占据了1千多字节,因此日志区域有近3千字节。而这3千字节,完全能够容纳元数据部分的修改项。
实施例二
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
在本实施例中还提供了一种存储空间管理装置,该装置用于实现上述实施例及优选实施方式,已经进行过说明的不再赘述。如以下所使用的,术语“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。
图8是根据本发明实施例的存储空间管理装置的结构框图,如图8所示,该装置包括:第一接收模块82,获取模块84,第一查询模块86和分配模块88,其中,
第一接收模块82,设置为接收应用程序的内存空间请求;
获取模块84,与第一接收模块82建立电连接,设置为获取内存空间请求中所请求的空闲页框数目;
第一查询模块86,与获取模块84建立电连接,设置为依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
分配模块88,与第一查询模块86建立电连接,设置为将连续空闲存储空间分配给应用程序。
在本发明实施例中,图9是根据本发明实施例的一种存储空间管理装置的结构框图,如图9所示,在空闲页框组织中包括:页框链表或树,页框链表或树包括:至少一个分配单元描述符,至少一个分配单元描述符用于描述连续存储空间的存储状态以及连续存储空间对应的页框的起始地址和长度的情况下,第一查询模块86包括:查询单元861,第一匹配单元862和第二匹配单元863,其中,
查询单元861,设置为查询页框链表或树中的至少一个分配单元描述符;
第一匹配单元862,与查询单元861建立电连接,设置为将至少一个分配单元描述符中对应的连续空闲存储空间的大小,与应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
第二匹配单元863,与第一匹配单元862建立电连接,设置为若存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取连续空闲存储空间作为分配至空闲页框数目对应的连续空闲存储空间;其中,连续存储空间的存储状态包括:已分配和空闲。
在本发明实施例中,图10是根据本发明实施例的另一种存储空间管理装置的结构框图,如图10所示,分配模块88包括:第一分配单元881,判断单元882,第二分配单元883,空间查询单元884,空间切割单元885,第三分配单元886,或第四分配单元887,其中,
第一分配单元881,设置为在页框链表或树中的连续空闲存储空间的大小等于空闲页框数目对应的连续空闲存储空间大小的情况下,将连续空闲存储空间分配给应用程序;
或者,
判断单元882,设置为在页框链表或树中的连续空闲存储空间的大小小于空闲页框数目对应的连续空闲存储空间大小的情况下,在页框链表或树中查询N个连续空闲存储空间,判断N个连续空闲存储空间是否与空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;
第二分配单元883,与判断单元882建立电连接,设置为在判断结果为是的情况下,将N个连续空闲存储空间分配给应用程序。
在本发明实施例中,分配模块88还包括:空间查询单元884,设置为在N个连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间的情况下,在页框链表或树中查找大于空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;
空间切割单元885,与空间查询单元884建立电连接,设置为在得到第一连续空闲存储空间的情况下,对第一连续空闲存储空间进行切割,得到与空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;
第三分配单元886,与空间切割单元885建立电连接,设置为将第二连续空闲存储空间分配给应用程序;其中,将第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至页框链表或树;
或者,
第四分配单元887,设置为在页框链表或树中最大的连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间的情况下,在空闲页框链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且连续存储空间的大小与空闲页框数目对应的连续空闲存储空间匹配,若连续存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于连续空闲存储空间的连续存储空间,直至查询得到与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间,或,提示当前没有能够与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
在本发明实施例中,图11是根据本发明实施例的又一种存储空间管理装置的结构框图,如图11所示,该存储空间管理装置还包括:第二接收模块1100和第二查询模块1101,其中
第二接收模块1100,设置为在将连续空闲存储空间分配给应用程序之后,接收应用程序的持久化内存请求,持久化内存请求用于指示查询应用程序预先存储的数据;
第二查询模块1101,与第二接收模块1100建立电连接关系,设置为依据持久化内存请求,通过内存系统中的对象描述符进行查询,得到应用程序预先存储的数据;其中,对象描述符用于指示包含至少一个分配单元描述符的页框链表或树。
需要说明的是,上述各个模块是可以通过软件或硬件来实现的,对于后者,可以通过以下方式实现,但不限于此:上述模块均位于同一处理器中;或者,上述模块分别位于多个处理器中。
本发明的实施例还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的程序代码:
S1,接收应用程序的内存空间请求;
S2,获取内存空间请求中所请求的空闲页框数目;
S3,依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
S4,将连续空闲存储空间分配给应用程序。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在空闲页框组织中包括:页框链表或树,页框链表或树包括:至少一个分配单元描述符,至少一个分配单元描述符用于描述连续存储空间的存储状态以及连续存储空间对应的页框的起始地址和长度的情况下,依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间的步骤包括:
S1,查询页框链表或树中的至少一个分配单元描述符;
S2,将至少一个分配单元描述符中对应的连续空闲存储空间的大小,与应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;其中,连续存储空间的存储状态包括:已分配和空闲;
S3,若存在与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取连续空闲存储空间作为分配至空闲页框数目对应的连续空闲存储空间。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
可选地,在本实施例中,处理器根据存储介质中已存储的程序代码执行将连续空闲存储空间分配给应用程序的步骤包括:若页框链表或树中的连续空闲存储空间的大小等于空闲页框数目对应的连续空闲存储空间大小,则将连续空闲存储空间分配给应用程序;若页框链表或树中的连续空闲存储空间的大小小于空闲页框数目对应的连续空闲存储空间大小,则在页框链表或树中查询N个连续空闲存储空间,判断N个连续空闲存储空间是否与空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;在判断结果为是的情况下,将N个连续空闲存储空间分配给应用程序。
可选地,在本实施例中,处理器根据存储介质中已存储的程序代码执行若N个连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间,则在页框链表或树中查找大于空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;若得到第一连续空闲存储空间,则对第一连续空闲存储空间进行切割,得到与空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;将第二连续空闲存储空间分配给应用程序;其中,将第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至页框链表或树;若页框链表或树中最大的连续空闲存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且连续存储空间的大小与空闲页框数目对应的连续空闲存储空间匹配,若连续存储空间小于空闲页框数目对应的连续空闲存储空间,则在空闲页框链表或树中查询是否存在小于连续空闲存储空间的连续存储空间,直至查询得到与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间,或,提示当前没有能够与空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
可选地,在本实施例中,处理器根据存储介质中已存储的程序代码执行将连续空闲存储空间分配给应用程序之后,本实施例提供的存储空间管理方法还包括:接收应用程序的持久化内存请求,持久化内存请求用于指示查询应用程序预先存储的数据;依据持久化内存请求,通过内存系统中的对象描述符进行查询,得到应用程序预先存储的数据;其中,对象描述符用于指示包含至少一个分配单元描述符的页框链表或树。
可选地,本实施例中的具体示例可以参考上述实施例及可选实施方式中所描述的示例, 本实施例在此不再赘述。
显然,本领域的技术人员应该明白,上述的本发明的各模块或各步骤可以用通用的计算装置来实现,它们可以集中在单个的计算装置上,或者分布在多个计算装置所组成的网络上,可选地,它们可以用计算装置可执行的程序代码来实现,从而,可以将它们存储在存储装置中由计算装置来执行,并且在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤,或者将它们分别制作成各个集成电路模块,或者将它们中的多个模块或步骤制作成单个集成电路模块来实现。这样,本发明不限制于任何特定的硬件和软件结合。
以上所述仅为本发明的优选实施例而已,并不用于限制本发明,对于本领域的技术人员来说,本发明可以有各种更改和变化。凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。
工业实用性
本发明实施例提供的技术方案,可以应用于存储空间的管理,采用接收应用程序的内存空间请求;获取内存空间请求中所请求的空闲页框数目;依据空闲页框数目查询空闲页框组织,获取与空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;将连续空闲存储空间分配给应用程序。解决了相关技术中缺少针对NVM的内存管理机制导致内存空间分配效率低下的问题,进而达到了提升NVM的内存管理效率的效果。

Claims (10)

  1. 一种存储空间管理方法,包括:
    接收应用程序的内存空间请求;
    获取所述内存空间请求中所请求的空闲页框数目;
    依据所述空闲页框数目查询空闲页框组织,获取与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
    将所述连续空闲存储空间分配给所述应用程序。
  2. 根据权利要求1所述的方法,其中,在所述空闲页框组织中包括:页框链表或树,所述页框链表或树包括:至少一个分配单元描述符,所述至少一个分配单元描述符用于描述连续存储空间的存储状态以及所述连续存储空间对应的页框的起始地址和长度的情况下,所述依据所述空闲页框数目查询空闲页框组织,获取与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间的步骤包括:
    查询所述页框链表或树中的至少一个分配单元描述符;
    将所述至少一个分配单元描述符中对应的连续空闲存储空间的大小,与所述应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
    若存在与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取所述连续空闲存储空间作为分配至所述空闲页框数目对应的连续空闲存储空间;
    其中,所述连续存储空间的存储状态包括:已分配和空闲。
  3. 根据权利要求2所述的方法,其中,所述将所述连续空闲存储空间分配给所述应用程序的步骤包括:
    若所述页框链表或树中的连续空闲存储空间的大小等于所述空闲页框数目对应的连续空闲存储空间大小,则将所述连续空闲存储空间分配给所述应用程序;
    若所述页框链表或树中的连续空闲存储空间的大小小于所述空闲页框数目对应的连续空闲存储空间大小,则在所述页框链表或树中查询N个连续空闲存储空间,判断所述N个连续空闲存储空间是否与所述空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;
    在判断结果为是的情况下,将所述N个连续空闲存储空间分配给所述应用程序。
  4. 根据权利要求3所述的方法,其中,所述方法还包括:
    若所述N个连续空闲存储空间小于所述空闲页框数目对应的连续空闲存储空间,则在所述页框链表或树中查找大于所述空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;
    若得到所述第一连续空闲存储空间,则对所述第一连续空闲存储空间进行切割,得到与所述空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;
    将所述第二连续空闲存储空间分配给所述应用程序;
    其中,将所述第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至所述页框链表或树;
    若所述页框链表或树中最大的连续空闲存储空间小于所述空闲页框数目对应的连续空闲存储空间,则在所述空闲页框链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且所述连续存储空间的大小与所述空闲页框数目对应的连续空闲存储空间匹配,若所述连续存储空间小于所述空闲页框数目对应的连续空闲存储空间,则在所述空闲页框链表或树中查询是否存在小于所述连续空闲存储空间的连续存储空间,直至查询得到与所述空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间,或,提示当前没有能够与所述空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
  5. 根据权利要求2所述的方法,其中,在将所述连续空闲存储空间分配给所述应用程序之后,所述方法还包括:
    接收所述应用程序的持久化内存请求,所述持久化内存请求用于指示查询所述应用程序预先存储的数据;
    依据所述持久化内存请求,通过内存系统中的对象描述符进行查询,得到所述应用程序预先存储的数据;
    其中,所述对象描述符用于指示包含所述至少一个分配单元描述符的所述页框链表或树。
  6. 一种存储空间管理装置,包括:
    第一接收模块,设置为接收应用程序的内存空间请求;
    获取模块,设置为获取所述内存空间请求中所请求的空闲页框数目;
    第一查询模块,设置为依据所述空闲页框数目查询空闲页框组织,获取与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
    分配模块,设置为将所述连续空闲存储空间分配给所述应用程序。
  7. 根据权利要求6所述的装置,其中,在所述空闲页框组织中包括:页框链表或树,所述页框链表或树包括:至少一个分配单元描述符,所述至少一个分配单元描述符用于描述连续存储空间的存储状态以及所述连续存储空间对应的页框的起始地址和长度的情况下,所述第一查询模块包括:
    查询单元,设置为查询所述页框链表或树中的至少一个分配单元描述符;
    第一匹配单元,设置为将所述至少一个分配单元描述符中对应的连续空闲存储空间的大小,与所述应用程序所请求的空闲页框数目对应的连续空闲存储空间进行匹配,查询是否存在与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间;
    第二匹配单元,设置为若存在与所述空闲页框数目对应的连续空闲存储空间大小相等的连续空闲存储空间,则提取所述连续空闲存储空间作为分配至所述空闲页框数目对应的连续空闲存储空间;
    其中,所述连续存储空间的存储状态包括:已分配和空闲。
  8. 根据权利要求7所述的装置,其中,所述分配模块包括:
    第一分配单元,设置为在所述页框链表或树中的连续空闲存储空间的大小等于所述空闲页框数目对应的连续空闲存储空间大小的情况下,将所述连续空闲存储空间分配给所述应用程序;
    判断单元,设置为在所述页框链表或树中的连续空闲存储空间的大小小于所述空闲页框数目对应的连续空闲存储空间大小的情况下,在所述页框链表或树中查询N个连续空闲存储空间,判断所述N个连续空闲存储空间是否与所述空闲页框数目对应的连续空闲存储空间大小相等,N为整数,且大于1;
    第二分配单元,设置为在判断结果为是的情况下,将所述N个连续空闲存储空间分配给所述应用程序。
  9. 根据权利要求8所述的装置,其中,所述分配模块还包括:
    空间查询单元,设置为在所述N个连续空闲存储空间小于所述空闲页框数目对应的连续空闲存储空间的情况下,在所述页框链表或树中查找大于所述空闲页框数目对应的连续空闲存储空间大小的第一连续空闲存储空间;
    空间切割单元,设置为在得到所述第一连续空闲存储空间的情况下,对所述第一连续空闲存储空间进行切割,得到与所述空闲页框数目对应的连续空闲存储空间大小相等的第二连续空闲存储空间;
    第三分配单元,设置为将所述第二连续空闲存储空间分配给所述应用程序;其中,将所述第一连续空闲存储空间切割后的剩余连续空闲存储空间归还至所述页框链表或树;
    第四分配单元,设置为在所述页框链表或树中最大的连续空闲存储空间小于所述空闲页框数目对应的连续空闲存储空间的情况下,在所述链表或树中查询是否存在小于最大连续空闲存储空间的连续存储空间,且所述连续存储空间的大小与所述空闲页框数目对应的连续空闲存储空间匹配,若所述连续存储空间小于所述空闲页框数目对应的连续空闲存储空间,则在所述空闲页框链表或树中查询是否存在小于所述连续空闲存储空间的连续存储空间,直至查询得到与所述空闲页框数目对应的连续空闲存储空间匹配的连 续空闲存储空间,或,提示当前没有能够与所述空闲页框数目对应的连续空闲存储空间匹配的连续空闲存储空间。
  10. 根据权利要求7所述的装置,其中,所述装置还包括:
    第二接收模块,设置为在将所述连续空闲存储空间分配给所述应用程序之后,接收所述应用程序的持久化内存请求,所述持久化内存请求用于指示查询所述应用程序预先存储的数据;
    第二查询模块,设置为依据持久化内存请求,通过内存系统中的对象描述符进行查询,得到所述应用程序预先存储的数据;
    其中,所述对象描述符用于指示包含所述至少一个分配单元描述符的所述页框链表或树。
PCT/CN2015/087705 2015-05-25 2015-08-20 一种存储空间管理方法及装置 WO2016187974A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510271015.6 2015-05-25
CN201510271015.6A CN106294190B (zh) 2015-05-25 2015-05-25 一种存储空间管理方法及装置

Publications (1)

Publication Number Publication Date
WO2016187974A1 true WO2016187974A1 (zh) 2016-12-01

Family

ID=57393556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/087705 WO2016187974A1 (zh) 2015-05-25 2015-08-20 一种存储空间管理方法及装置

Country Status (2)

Country Link
CN (1) CN106294190B (zh)
WO (1) WO2016187974A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352863A (zh) * 2020-03-10 2020-06-30 腾讯科技(深圳)有限公司 内存管理方法、装置、设备及存储介质
CN111858393A (zh) * 2020-07-13 2020-10-30 Oppo(重庆)智能科技有限公司 内存页面管理方法、内存页面管理装置、介质与电子设备
CN111913657A (zh) * 2020-07-10 2020-11-10 长沙景嘉微电子股份有限公司 块数据读写方法、装置、系统及存储介质
CN113849311A (zh) * 2021-09-28 2021-12-28 苏州浪潮智能科技有限公司 内存空间管理方法、装置、计算机设备和存储介质

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273061A (zh) * 2017-07-12 2017-10-20 郑州云海信息技术有限公司 一种固态硬盘创建多namespace的方法及系统
CN108038002B (zh) * 2017-12-15 2021-11-02 天津津航计算技术研究所 一种嵌入式软件内存管理方法
CN108132842B (zh) * 2017-12-15 2021-11-02 天津津航计算技术研究所 一种嵌入式软件内存管理系统
CN109960662B (zh) * 2017-12-25 2021-10-01 华为技术有限公司 一种内存回收方法及设备
CN110209489B (zh) * 2018-02-28 2020-07-31 贵州白山云科技股份有限公司 一种适用于内存页结构的内存管理方法及装置
CN109542356B (zh) * 2018-11-30 2021-12-31 中国人民解放军国防科技大学 面向容错的nvm持久化过程冗余信息的压缩方法和装置
CN111078587B (zh) * 2019-12-10 2022-05-06 Oppo(重庆)智能科技有限公司 内存分配方法、装置、存储介质及电子设备
CN111857575A (zh) * 2020-06-24 2020-10-30 国汽(北京)智能网联汽车研究院有限公司 计算平台内存空间确定方法、装置、设备及存储介质
CN111984201B (zh) * 2020-09-01 2023-01-31 云南财经大学 基于持久化内存的天文观测数据高可靠性采集方法和系统
CN112181790B (zh) * 2020-09-15 2022-12-27 苏州浪潮智能科技有限公司 一种存储设备的容量统计方法、系统及相关组件
CN113296940B (zh) * 2021-03-31 2023-12-08 阿里巴巴新加坡控股有限公司 数据处理方法及装置
CN113553142B (zh) * 2021-09-18 2022-01-25 云宏信息科技股份有限公司 云平台的存储空间整理方法、配置方法及可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604051A (zh) * 2003-09-30 2005-04-06 三星电子株式会社 用于通过面向对象的程序执行动态内存管理的方法和设备
US20050154851A1 (en) * 2004-01-14 2005-07-14 Charles Andrew A. Fast, high reliability dynamic memory manager
CN101470665A (zh) * 2007-12-27 2009-07-01 Tcl集团股份有限公司 一种无mmu平台的应用系统内存管理的方法及系统
CN101470667A (zh) * 2007-12-28 2009-07-01 英业达股份有限公司 Linux系统平台上指定地址范围分配物理内存的方法
CN102866953A (zh) * 2011-07-08 2013-01-09 风网科技(北京)有限公司 存储管理系统及其存储管理方法
CN104317926A (zh) * 2014-10-31 2015-01-28 北京思特奇信息技术股份有限公司 一种持久化的数据存储和查询方法及对应的装置和系统

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9072972B2 (en) * 2011-04-28 2015-07-07 Numecent Holdings Ltd Application distribution network
CN104111896B (zh) * 2014-07-30 2017-07-14 云南大学 大数据处理中的虚拟内存管理方法及其装置
CN104375899B (zh) * 2014-11-21 2016-03-30 北京应用物理与计算数学研究所 高性能计算机numa感知的线程和内存资源优化方法与系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604051A (zh) * 2003-09-30 2005-04-06 三星电子株式会社 用于通过面向对象的程序执行动态内存管理的方法和设备
US20050154851A1 (en) * 2004-01-14 2005-07-14 Charles Andrew A. Fast, high reliability dynamic memory manager
CN101470665A (zh) * 2007-12-27 2009-07-01 Tcl集团股份有限公司 一种无mmu平台的应用系统内存管理的方法及系统
CN101470667A (zh) * 2007-12-28 2009-07-01 英业达股份有限公司 Linux系统平台上指定地址范围分配物理内存的方法
CN102866953A (zh) * 2011-07-08 2013-01-09 风网科技(北京)有限公司 存储管理系统及其存储管理方法
CN104317926A (zh) * 2014-10-31 2015-01-28 北京思特奇信息技术股份有限公司 一种持久化的数据存储和查询方法及对应的装置和系统

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352863A (zh) * 2020-03-10 2020-06-30 腾讯科技(深圳)有限公司 内存管理方法、装置、设备及存储介质
CN111352863B (zh) * 2020-03-10 2023-09-01 腾讯科技(深圳)有限公司 内存管理方法、装置、设备及存储介质
CN111913657A (zh) * 2020-07-10 2020-11-10 长沙景嘉微电子股份有限公司 块数据读写方法、装置、系统及存储介质
CN111858393A (zh) * 2020-07-13 2020-10-30 Oppo(重庆)智能科技有限公司 内存页面管理方法、内存页面管理装置、介质与电子设备
CN111858393B (zh) * 2020-07-13 2023-06-02 Oppo(重庆)智能科技有限公司 内存页面管理方法、内存页面管理装置、介质与电子设备
CN113849311A (zh) * 2021-09-28 2021-12-28 苏州浪潮智能科技有限公司 内存空间管理方法、装置、计算机设备和存储介质
CN113849311B (zh) * 2021-09-28 2023-11-17 苏州浪潮智能科技有限公司 内存空间管理方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN106294190A (zh) 2017-01-04
CN106294190B (zh) 2020-10-16

Similar Documents

Publication Publication Date Title
WO2016187974A1 (zh) 一种存储空间管理方法及装置
CN106874383B (zh) 一种分布式文件系统元数据的解耦合分布方法
US8161244B2 (en) Multiple cache directories
CN108628753B (zh) 内存空间管理方法和装置
US20220261377A1 (en) Scalable multi-tier storage structures and techniques for accessing entries therein
US20150293933A1 (en) Method and apparatus for large scale data storage
CN108804510A (zh) 键值文件系统
KR102440128B1 (ko) 통합된 객체 인터페이스를 위한 메모리 관리 장치, 시스템 및 그 방법
CN106682110B (zh) 一种基于哈希格网索引的影像文件存储和管理系统及方法
WO2015078194A1 (zh) 一种哈希数据库的配置方法和装置
CN104899297A (zh) 具有存储感知的混合索引结构
US9307024B2 (en) Efficient storage of small random changes to data on disk
CN106570113B (zh) 一种海量矢量切片数据云存储方法及系统
KR20080060117A (ko) 파일 시스템 장치 및 그 파일 시스템의 파일 저장 및 파일 탐색 방법
CN100424699C (zh) 一种属性可扩展的对象文件系统
WO2020125630A1 (zh) 文件读取
CN111984425B (zh) 用于操作系统的内存管理方法、装置及设备
CN105117433A (zh) 一种基于Hive解析HFile统计查询HBase的方法和系统
CN102163232A (zh) 一种支持iec61850对象查询的sql接口实现方法
CN113805816B (zh) 一种磁盘空间管理方法、装置、设备及存储介质
WO2016187975A1 (zh) 内存碎片整理方法及装置
CN117271531B (zh) 一种数据存储方法、系统、设备及介质
US20170316042A1 (en) Index page with latch-free access
KR20090007926A (ko) 플래시 메모리에 저장된 데이터의 인덱스 정보 관리 장치및 방법
CN113138859A (zh) 一种基于共享内存池的通用数据存储方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15893055

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15893055

Country of ref document: EP

Kind code of ref document: A1