CN114356213A - Parallel space management method for NVM wear balance under NUMA architecture - Google Patents

Parallel space management method for NVM wear balance under NUMA architecture

Info

Publication number
CN114356213A
Authority
CN
China
Prior art keywords
wear
data blocks
linked list
range
data block
Prior art date
Legal status
Granted
Application number
CN202111431298.8A
Other languages
Chinese (zh)
Other versions
CN114356213B (en)
Inventor
吴挺
严琪韵
陈阔
钱鹰
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202111431298.8A priority Critical patent/CN114356213B/en
Publication of CN114356213A publication Critical patent/CN114356213A/en
Application granted granted Critical
Publication of CN114356213B publication Critical patent/CN114356213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a parallel space management method for NVM wear leveling under a NUMA architecture. When an existing wear-leveling strategy is applied directly to a NUMA architecture, its space management does not take the NUMA multi-node processor and memory organization into account, which causes three problems: 1) non-uniform wear of the non-volatile memory within a node; 2) non-uniform wear of the non-volatile memory storage space across nodes; 3) poor scalability when multiple processors request space allocation. By adopting an intra-node wear-leveling mechanism, a parallel allocation and reclamation mechanism, and an inter-node wear-leveling mechanism, the method balances the wear of the multiple non-volatile memory modules within and between the nodes of a NUMA (non-uniform memory access) architecture and improves the parallel performance of space allocation.

Description

Parallel space management method for NVM wear leveling under NUMA architecture
Technical Field
The invention relates to the field of emerging storage technologies, and in particular to a parallel space management method for wear leveling of non-volatile memory (NVM) under a non-uniform memory access (NUMA) architecture.
Background
In recent years, emerging non-volatile memories (NVM) such as PCM, 3D XPoint, and Intel's recently released Optane DC Persistent Memory have offered fast access, byte addressability, high storage density, data non-volatility, low latency, and low power consumption. These memories are expected to narrow the performance gap between fast processors and slow storage and bring new opportunities for the development of storage systems. To exploit their characteristics, academia and industry have designed and implemented a number of non-volatile memory file systems, such as EXT4-DAX, SCMFS, PMFS, NOVA, SIMFS, and HiNFS. These memory file systems use the non-volatile memory directly as the storage device for file data, avoiding the software overhead that traditional block-device file systems incur in the operating system cache and the I/O software stack; data is copied directly between the application buffers and the non-volatile memory device, yielding a substantial performance improvement.
Although non-volatile memories have many advantages, they suffer from limited write endurance; the write endurance of PCM, for example, is about 10^8 writes: after the phase-change material of a PCM storage cell has switched between the crystalline and amorphous states roughly 10^8 times, the cell becomes unstable, causing errors in the stored data and seriously affecting the reliability of the file system. This drawback hinders the widespread use of non-volatile memory and is a critical problem that must be solved. At present, an important method for preventing premature wear-out of non-volatile memory is wear leveling, that is, extending the lifetime of the non-volatile memory by distributing write operations evenly over the entire storage space. Many non-volatile memory file systems combine wear-leveling algorithms with the access characteristics of their data structures, avoiding concentrated writes to particular storage cells through data migration, solving the problem of unbalanced wear of the underlying storage medium, and effectively extending the lifetime of the non-volatile memory.
To achieve greater throughput, reliability, and economy, modern computer systems are commonly built with multiple processors. The memory architecture of a multiprocessor system can be classified, according to whether memory access time is uniform across processors, into Uniform Memory Access (UMA) and Non-Uniform Memory Access (NUMA). Compared with UMA, the NUMA architecture partitions processors and memory into nodes for management, avoiding the scalability problem caused by ever-increasing memory access conflicts when many processors address the same memory address space, and offers lower local memory access latency and better scalability. NUMA computers are widely used in high-performance computing and cloud computing and are the mainstream architecture of data-center servers. Much system software is specially optimized for NUMA computer systems: the Linux kernel memory management supports a non-uniform memory access model and the scheduler performs load balancing with the node as its management domain; many research works propose data allocation policies and memory allocation optimizations for the NUMA architecture; and enterprises tune database management systems and virtualization platforms for NUMA.
However, existing wear-leveling strategies for non-volatile memory file systems are designed for the UMA architecture and are not optimized for NUMA. When such a strategy is applied directly to a NUMA architecture, its space management does not take the NUMA multi-node processor and memory organization into account, which causes three problems: 1) non-uniform wear of the non-volatile memory within a node; 2) non-uniform wear of the non-volatile memory storage space across nodes; 3) poor scalability when multiple processors request space allocation. Wear-leveling strategies for non-volatile memory file systems therefore require a design specific to the NUMA architecture.
Disclosure of Invention
To address the problems in the prior art, the invention provides a parallel space management method for wear leveling of non-volatile memory (NVM) under a NUMA architecture, aiming to balance the wear of the multiple non-volatile memory modules within and between the nodes of the NUMA architecture and to improve the parallel performance of space allocation.
To this end, the technical scheme adopted by the invention is as follows: a parallel space management method for wear leveling of non-volatile memory (NVM) under a NUMA architecture comprises three mechanisms: an intra-node wear-leveling mechanism, a parallel allocation and reclamation mechanism, and an inter-node wear-leveling mechanism. Specifically, the method includes the following steps:
Wear within and across the multiple non-volatile memory modules in a single NUMA node is balanced: a data-block write-count counter management module manages the write count of each data block, a bucket-sort-based data-block wear-range management module manages the free data blocks in the node, and an online wear-range adjustment algorithm adjusts the number of data blocks in the two wear-range buckets on line.
A parallel allocation and reclamation mechanism allocates and reclaims data blocks in parallel: a data-block parallel allocation and reclamation management module manages the allocation and reclamation linked lists, a round-robin parallel allocation policy balances the wear of data blocks across the multiple linked lists, and the mapping between processor core id and linked list id is updated periodically.
An inter-node wear-leveling mechanism takes the NUMA node as the management domain, distributes files uniformly across the NUMA nodes and write operations randomly to the nodes, and thus balances the wear of the non-volatile memory between nodes.
Further, the data-block write-count counter management module manages the write count of each data block. Taking the NUMA node as the scope, all non-volatile memory within the node is managed as one domain; a write-count counter is allocated for each data block in the domain to record the number of writes, which represents the wear level of the block, and a region of non-volatile memory within the node is reserved as the storage space for the counters.
Further, the bucket-sort-based data-block wear-range management module places each free data block in a single NUMA node into a low-wear-range bucket or a high-wear-range bucket according to the following rule: the maximum write count among the current free data blocks is used as the boundary between the two buckets; data blocks whose write count exceeds the boundary are placed into the high-wear-range bucket, and the others into the low-wear-range bucket. The free data blocks in the low-wear-range bucket and the high-wear-range bucket are managed through a linked-list management structure and an unordered singly linked list.
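The following C sketch illustrates one possible realization of this classification rule; the structure and function names (nvm_block, wear_bucket, classify_block) and their fields are illustrative assumptions rather than elements prescribed by the invention.

    /*
     * Minimal sketch of the bucket-sort-based wear-range classification,
     * with assumed (hypothetical) structure and field names.
     */
    struct nvm_block {
        unsigned long blocknr;       /* block number within the node's NVM domain */
        unsigned long write_count;   /* writes recorded by the per-block counter  */
        struct nvm_block *next;      /* unordered singly linked list              */
    };

    struct wear_bucket {
        struct nvm_block *head;      /* head pointer of the free-block list        */
        unsigned long nr_free;       /* number of free blocks in the bucket        */
        unsigned long max_writes;    /* maximum write count seen in the bucket     */
    };

    /* Place a reclaimed free block into the low or high wear-range bucket.
     * 'boundary' is the maximum write count among the current free blocks. */
    static void classify_block(struct nvm_block *blk,
                               struct wear_bucket *low, struct wear_bucket *high,
                               unsigned long *boundary)
    {
        struct wear_bucket *dst;

        if (blk->write_count > *boundary) {
            dst = high;
            *boundary = blk->write_count;   /* write count becomes the new boundary */
        } else {
            dst = low;
        }
        blk->next = dst->head;              /* push onto the unordered list head */
        dst->head = blk;
        dst->nr_free++;
        if (blk->write_count > dst->max_writes)
            dst->max_writes = blk->write_count;
    }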
Further, the online wear-range adjustment algorithm preferentially allocates data blocks from the low-wear-range bucket; once the data blocks in the low-wear-range bucket are exhausted, the data blocks in the high-wear-range bucket become the least-worn free blocks, so allocation continues from the high-wear-range bucket; as data blocks are reclaimed, a new higher-wear-range bucket is constructed.
Further, the data-block parallel allocation and reclamation management module maintains as many allocation linked lists and reclamation linked lists as there are processor cores in the NUMA node, which avoids contention among cores for the linked lists during allocation and reclamation and improves the parallel performance of space allocation and reclamation.
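A minimal C sketch of the per-core list structures is given below; the type and field names (block_list, node_space_mgr, core_to_list) are hypothetical and only illustrate how one allocation list and one reclamation list per core might be organized.

    /* Minimal sketch, assuming the nvm_block type from the sketch above. */
    struct nvm_block;

    struct block_list {
        struct nvm_block *head;        /* free data blocks in this list            */
        unsigned long nr_free;
        int occupied;                  /* occupied flag checked by the round-robin
                                          allocation policy to skip busy lists     */
    };

    struct node_space_mgr {
        int nr_cores;                  /* CPU cores in this NUMA node              */
        struct block_list *alloc_lists;   /* one allocation list per core          */
        struct block_list *reclaim_lists; /* one reclamation list per core         */
        int *core_to_list;             /* mapping from core id to list id, updated
                                          periodically by the round-robin policy   */
    };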
Further, the round-robin parallel allocation policy balances the wear of data blocks across the multiple linked lists and periodically updates the mapping between processor core id and linked list id, so that the processor cores allocate data blocks from the allocation linked lists in turn and the lists are consumed uniformly.
Further, when new files are created, the inter-node wear-leveling mechanism distributes them uniformly across the NUMA nodes and distributes write operations randomly to the nodes, balancing the wear of the non-volatile memory between nodes.
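A possible node-selection sketch is shown below, assuming a hypothetical helper pick_creation_node; a simple round-robin cursor yields the uniform placement of new files described above, and a random choice would serve equally well.

    /* Minimal sketch of inter-node placement; names are illustrative. */
    #include <stdatomic.h>

    static atomic_ulong next_node;     /* global round-robin cursor */

    /* Choose the NUMA node on which a newly created file is placed. */
    static int pick_creation_node(int nr_online_nodes)
    {
        /* uniform, contention-free selection: each creation advances the cursor */
        return (int)(atomic_fetch_add(&next_node, 1) % (unsigned long)nr_online_nodes);
    }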
The present invention also provides a computer-readable storage medium storing a computer program which, when executed, implements the aforementioned parallel space management for NVM wear leveling.
The parallel space management method for wear leveling of non-volatile memory (NVM) under the NUMA architecture achieves wear leveling across nodes and across the non-volatile memory modules within a node through the intra-node and inter-node wear-leveling mechanisms, and, through the parallel allocation and reclamation mechanism, maintains one allocation and one reclamation linked list per processor core, improving the parallel performance of space allocation and reclamation under the NUMA architecture.
Drawings
FIG. 1 is a schematic diagram of the architecture of the NVM wear-leveling parallel space management method;
FIG. 2 is a schematic diagram of the parallel space management method for NVM wear leveling under a NUMA architecture;
FIG. 3 is a schematic diagram of the round-robin parallel allocation policy;
FIG. 4 is a workflow diagram of the round-robin parallel allocation policy;
FIG. 5 is a schematic diagram of the bucket-sort-based data-block wear-range management method;
FIG. 6 is a workflow diagram of the online wear-range adjustment algorithm.
Detailed Description
Fig. 1 is a schematic structural diagram of the wear-leveling parallel space management method, which comprises a parallel allocation and reclamation module and a wear management module. The method achieves wear leveling of the non-volatile memory by preferentially allocating the data blocks with the lowest wear. When an application issues a write or update request to the non-volatile memory file system, the file system must allocate new data blocks for the accessed file to perform an append or replacement write, so it issues a data-block allocation request and obtains new data blocks from the space management module; when an application deletes a file and the file's reference count drops to 0, the file system performs the file deletion, issues a data-block reclamation request, and reclaims the data blocks of the file. In the NUMA architecture the CPUs are partitioned into nodes, each containing several processor cores; the parallel allocation and reclamation management module avoids contention among the CPUs in a node for the linked lists by maintaining as many allocation lists and reclamation lists as there are CPU cores in the NUMA node. The parallel allocation policy prevents the data blocks in some allocation lists from being consumed too quickly and those in others too slowly, which would otherwise lead to uneven wear across the lists. The wear management module balances wear within and across the multiple non-volatile memory modules in a single node: it obtains free data blocks from the parallel reclamation module, divides them into a low-wear bucket and a high-wear bucket according to the write counts of the reclaimed blocks, and the parallel allocation module obtains data blocks from the low-wear bucket.
Fig. 2 is a schematic diagram of the parallel space management method for wear leveling of non-volatile memory under a NUMA architecture; the upper part of the figure shows the logical structure of the NUMA architecture, the lower part shows the wear-aware parallel space management within a node, and wear leveling between nodes is performed during file creation. To reduce the influence of the bandwidth and latency characteristics of NUMA memory access on the space allocation and reclamation performance of the non-volatile memory file system, the method manages the multiple non-volatile memories in each node as one wear-management domain; the non-volatile memory in the domain is divided into 4KB data blocks, and each data block corresponds to a write-count counter recording its wear level. The wear-aware parallel space management within a node comprises a data-block write-count management module, a bucket-sort-based data-block wear-range management module, an online wear-range adjustment algorithm, and the parallel allocation and reclamation mechanism. The inter-node wear-leveling mechanism balances the wear of the non-volatile memory across nodes by distributing files uniformly over the NUMA nodes and distributing write operations randomly to the nodes.
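The following C sketch shows one way the per-block write counters could be laid out in a reserved NVM region; the names (wear_domain, account_write, NVM_BLOCK_SIZE) are illustrative assumptions and not part of the claimed method.

    /* Minimal sketch of per-block write counters in a reserved NVM region. */
    #include <stdint.h>
    #include <stddef.h>

    #define NVM_BLOCK_SIZE 4096UL            /* data block size used by the method */

    struct wear_domain {
        void     *nvm_base;                  /* start of the node's NVM domain     */
        size_t    nvm_size;                  /* total bytes managed in the domain  */
        uint32_t *counters;                  /* reserved NVM region: one counter
                                                per 4KB data block                 */
        unsigned long nr_blocks;             /* nvm_size / NVM_BLOCK_SIZE          */
    };

    /* Record one write to the block containing 'offset' within the domain. */
    static void account_write(struct wear_domain *d, size_t offset)
    {
        unsigned long blocknr = offset / NVM_BLOCK_SIZE;
        if (blocknr < d->nr_blocks)
            d->counters[blocknr]++;          /* counter value represents wear level */
    }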
Fig. 3 is a schematic diagram of the round-robin parallel allocation policy. To allow multiple CPU cores to allocate and reclaim data blocks in parallel and to improve scalability, existing non-volatile memory file systems usually manage free space with as many linked lists as there are CPU cores; NOVA, for example, maintains one free data block linked list per CPU core, indexed by the core id. However, existing mechanisms do not account for the limited write endurance of non-volatile memory in multiprocessor systems, which leads to two problems: 1) statically assigning a segment of non-volatile memory to each CPU core reduces the non-volatile memory space available to that CPU; 2) different CPU cores have different write patterns, so the data blocks in the free lists are consumed at different rates, causing unbalanced wear of the non-volatile memory. To address these problems, the invention adopts a round-robin allocation policy in the parallel space allocation process so that the data blocks in the linked lists are consumed uniformly. As shown in fig. 3, the round-robin parallel allocation mechanism manages the free data blocks with as many allocation linked lists as there are CPU cores in the node and dynamically maintains the mapping between CPU cores and linked lists in a linked-list array, avoiding both uneven consumption of some lists and contention for the lists among CPUs. Each CPU core allocates data blocks from the multiple free lists in a round-robin manner; the allocation process is shown in fig. 4 and sketched in C after the steps below:
in step S401, the system first obtains the number of CPU cores and idle linked lists in the NUMA node, and when mounting the file system, divides the idle space into a plurality of linked lists according to the number of CPU cores to manage, creates a linked list array, uses the CPU core identifier as an index, and initializes the linked list array as a default value.
In step S402, the kernel thread running on the ith CPU core initiates a request to allocate a free data block to the file system space management module.
In step S403, the default List number List is looked up in the linked List array by the CPU core identification iiThrough Listi++mod CPUnumAnd calculating the target chain table number.
In step S404, a corresponding linked list management structure is found according to the target linked list number in S403, and whether the linked list is being accessed is determined by determining whether the linked list structure occupies the identifier. If the target linked list is already occupied, the method returns to step S403 to calculate a new target linked list number according to the current target linked list number.
In step S405, the corresponding linked list structure is accessed through the found unoccupied target linked list number, and an occupied identification bit in the linked list structure is set.
In step S406, free data blocks are allocated from the target linked list.
In step S407, the occupied flag in the target linked list structure is cleared.
In step S408, the element value corresponding to the CPU core in the linked list array is modified using the target linked list number.
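The sketch below follows steps S401 to S408 using the illustrative node_space_mgr and block_list types from the earlier sketch; it is simplified in that the occupied flag would need an atomic test-and-set or try-lock in a real multi-core kernel implementation, and the function name alloc_block_round_robin is hypothetical.

    /* Uses struct node_space_mgr, block_list, nvm_block from the sketches above. */
    struct nvm_block *alloc_block_round_robin(struct node_space_mgr *mgr, int core_id)
    {
        int list_id = mgr->core_to_list[core_id];     /* S403: default list number      */
        struct block_list *list;
        struct nvm_block *blk;

        for (;;) {
            list_id = (list_id + 1) % mgr->nr_cores;  /* S403: (List_i + 1) mod CPU_num */
            list = &mgr->alloc_lists[list_id];

            if (list->occupied)                       /* S404: list busy, try the next  */
                continue;
            list->occupied = 1;                       /* S405: mark list as in use      */

            blk = list->head;                         /* S406: allocate a free block    */
            if (blk) {
                list->head = blk->next;
                list->nr_free--;
            }

            list->occupied = 0;                       /* S407: clear the occupied flag  */
            mgr->core_to_list[core_id] = list_id;     /* S408: remember the new mapping */
            return blk;                               /* may be NULL if the list is empty */
        }
    }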
FIG. 5 illustrates the bucket-sort-based data-block wear-range management method. Free data blocks are placed into a low-wear-range bucket or a high-wear-range bucket according to their write counts, the current maximum write count is used as the boundary between the two buckets, and the free data blocks in the low-wear-range bucket are consumed first, achieving balanced wear of the non-volatile memory within the node. The method is motivated by the following three considerations. 1) Wear leveling is achieved by preferentially allocating free data blocks with lower write counts. Finding the least-worn free blocks by fully sorting the write counts of all blocks would incur high performance overhead, and it is sufficient to allocate blocks whose write counts are relatively low. Therefore, after data blocks are reclaimed in parallel, the current free blocks are partitioned into a low-wear-range bucket and a high-wear-range bucket by the first step of a bucket sort, which identifies the free blocks with relatively low write counts. 2) The boundary of the wear-range buckets is determined dynamically from the write counts of reclaimed data blocks. The write count of each block only increases, so the boundary must grow as the write counts of the free blocks grow; the current maximum write count is therefore used as the boundary between the low- and high-wear-range buckets. 3) When the data blocks in the low-wear-range bucket are exhausted, the numbers of blocks in the two wear ranges must be adjusted dynamically so that the least-worn free blocks can still be allocated preferentially. An online wear-range adjustment algorithm is therefore provided: the current high-wear-range bucket is converted into the low-wear-range bucket, and a new higher-wear-range bucket is built by comparing write counts as data blocks are reclaimed.
Based on this design, when the parallel reclamation management module reclaims a free data block, its write count is compared with the current maximum write count; if it is smaller, the block is placed into the low-wear-range bucket, otherwise it is placed into the high-wear-range bucket and its write count becomes the new bucket boundary. When the number of data blocks held by the parallel allocation management mechanism falls below a certain percentage of the total number of free data blocks, free data blocks are obtained from the low-wear-range bucket. The free data blocks in a bucket are managed through a linked-list management structure and an unordered singly linked list; the management structure stores the head pointer of the list, the number of free blocks in the list, and the maximum write count in the list. When the data blocks in the low wear range are exhausted, the numbers of blocks in the two wear-range buckets are adjusted dynamically by the online wear-range adjustment algorithm, whose flow is shown in fig. 6; a C sketch follows the steps below:
In step S601, when the number of free data blocks in the parallel allocation module's lists falls below the threshold, free data blocks are obtained from the low-wear-range bucket.
In step S602, it is determined whether the number of free data blocks in the low-wear-range bucket is 0; if not, data blocks are allocated from the head of the low-wear-range bucket's list; if it is 0, step S603 is performed.
In step S603, the pointers of the two buckets are swapped, and the bucket that was the high-wear-range bucket now supplies data blocks, providing the least-worn free blocks currently available.
In step S604, it is determined whether the number of free data blocks in the current high-wear bucket is 0; if so, no free data blocks remain and the system space is insufficient; otherwise, the procedure returns to step S602 to obtain a free data block.
In step S605, an out-of-space error is returned.
In step S606, a free data block with a low wear level has been obtained successfully.
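The following sketch, again using the illustrative wear_bucket and nvm_block types from the earlier sketches, shows how steps S601 to S606 could be realized by swapping the bucket pointers when the low-wear-range bucket runs dry; the function name get_low_wear_block is hypothetical.

    #include <stddef.h>

    /* Uses struct wear_bucket and struct nvm_block from the sketches above;
     * 'low' and 'high' are passed by reference so their pointers can be swapped. */
    struct nvm_block *get_low_wear_block(struct wear_bucket **low,
                                         struct wear_bucket **high)
    {
        struct nvm_block *blk;

        for (;;) {
            if ((*low)->nr_free > 0) {                 /* S602: low bucket not empty    */
                blk = (*low)->head;                    /* allocate from the list head   */
                (*low)->head = blk->next;
                (*low)->nr_free--;
                return blk;                            /* S606: low-wear block obtained */
            }
            if ((*high)->nr_free == 0)                 /* S604: both buckets empty      */
                return NULL;                           /* S605: system space exhausted  */

            /* S603: swap bucket pointers; the former high-wear bucket now holds
             * the least-worn free blocks and supplies allocations. */
            struct wear_bucket *tmp = *low;
            *low = *high;
            *high = tmp;
        }
    }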
The scope of protection of the present invention is not limited to the above embodiments. Those skilled in the art will recognize that changes may be made without departing from the spirit and scope of the inventive concept, which is defined by the following claims.

Claims (9)

1. A parallel space management method for NVM wear leveling under a NUMA architecture, characterized by comprising the following steps:
balancing wear within and across the multiple non-volatile memory modules in a single NUMA node, comprising: managing the write count of each data block with a data-block write-count counter management module, managing the free data blocks in the single NUMA node with a bucket-sort-based data-block wear-range management module, and adjusting the number of data blocks in the two wear-range buckets on line with an online wear-range adjustment algorithm;
adopting a parallel allocation and reclamation mechanism to allocate and reclaim data blocks in parallel, comprising: managing the allocation and reclamation linked lists with a data-block parallel allocation and reclamation management module, balancing the wear of data blocks across the multiple linked lists with a round-robin parallel allocation policy, and periodically updating the mapping between processor core id and linked list id;
adopting an inter-node wear-leveling mechanism that takes the NUMA node as the management domain, distributes files uniformly across the NUMA nodes and write operations randomly to the nodes, and balances the wear of the non-volatile memory between nodes.
2. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 1, wherein: the data-block write-count counter management module takes the NUMA node as its scope and manages all non-volatile memory in the node as one domain, allocates a write-count counter for each data block in the domain to record the number of writes representing the wear level of the block, and reserves a region of non-volatile memory in the node as the storage space for the counters.
3. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 1, wherein: the bucket-sort-based data-block wear-range management module places each free data block in a single NUMA node into a low-wear-range bucket or a high-wear-range bucket according to the following rule: the maximum write count among the current free data blocks is used as the boundary between the two buckets; data blocks whose write count exceeds the boundary are placed into the high-wear-range bucket, and the others into the low-wear-range bucket.
4. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 3, wherein: the free data blocks in the low-wear-range bucket and the high-wear-range bucket are managed through a linked-list management structure and an unordered singly linked list.
5. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 1, wherein: the online wear-range adjustment algorithm preferentially allocates data blocks from the low-wear-range bucket; once the data blocks in the low-wear-range bucket are exhausted, the data blocks in the high-wear-range bucket become the least-worn free blocks, so allocation continues from the high-wear-range bucket; as data blocks are reclaimed, a new higher-wear-range bucket is constructed.
6. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 5, wherein the online wear-range adjustment algorithm comprises the following steps:
step S601, when the number of free data blocks in the parallel allocation module's lists falls below a threshold, obtaining free data blocks from the low-wear-range bucket;
step S602, determining whether the number of free data blocks in the low-wear-range bucket is 0; if not, allocating data blocks from the head of the low-wear-range bucket's list; if so, performing step S603;
step S603, swapping the pointers of the two buckets so that the current high-wear-range bucket supplies data blocks, providing the least-worn free blocks currently available;
step S604, determining whether the number of free data blocks in the current high-wear bucket is 0; if so, no free data blocks remain and the system space is insufficient; otherwise, returning to step S602 to obtain a free data block;
step S605, returning an out-of-space error;
step S606, successfully obtaining a free data block with a low wear level.
7. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 1, wherein the round-robin parallel allocation policy for data blocks comprises the following steps:
step S401, the system first obtains the number of CPU cores and free linked lists in the NUMA node; when the file system is mounted, the free space is divided into multiple linked lists according to the number of CPU cores, a linked-list array is created with the CPU core identifier as index, and the array is initialized with default values;
step S402, the kernel thread running on the i-th CPU core issues a request to the file-system space management module to allocate a free data block;
step S403, the default list number List_i is looked up in the linked-list array using the CPU core identifier i, and the target list number is computed as (List_i + 1) mod CPU_num;
step S404, the corresponding linked-list management structure is found according to the target list number from step S403, and whether the list is being accessed is determined from the occupied flag in the list structure; if the target list is occupied, returning to step S403 to compute a new target list number from the current one;
step S405, the corresponding linked-list structure is accessed through the found unoccupied target list number, and the occupied flag in the structure is set;
step S406, free data blocks are allocated from the target list;
step S407, the occupied flag in the target list structure is cleared;
step S408, the array element corresponding to the CPU core is updated with the target list number.
8. The parallel space management method for NVM wear leveling under a NUMA architecture according to claim 1, wherein: when new files are created, the inter-node wear-leveling mechanism distributes them uniformly across the NUMA nodes and distributes write operations randomly to the nodes.
9. A computer-readable storage medium storing a computer program, characterized in that: when executed, the computer program implements the parallel space management for NVM wear leveling under a NUMA architecture of any one of claims 1-8.
CN202111431298.8A 2021-11-29 2021-11-29 Parallel space management method for NVM wear balance under NUMA architecture Active CN114356213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111431298.8A CN114356213B (en) 2021-11-29 2021-11-29 Parallel space management method for NVM wear balance under NUMA architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111431298.8A CN114356213B (en) 2021-11-29 2021-11-29 Parallel space management method for NVM wear balance under NUMA architecture

Publications (2)

Publication Number Publication Date
CN114356213A true CN114356213A (en) 2022-04-15
CN114356213B CN114356213B (en) 2023-07-21

Family

ID=81097329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111431298.8A Active CN114356213B (en) 2021-11-29 2021-11-29 Parallel space management method for NVM wear balance under NUMA architecture

Country Status (1)

Country Link
CN (1) CN114356213B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103946812A (en) * 2011-09-30 2014-07-23 英特尔公司 Apparatus and method for implementing a multi-level memory hierarchy
CN105930280A (en) * 2016-05-27 2016-09-07 诸葛晴凤 Efficient page organization and management method facing NVM (Non-Volatile Memory)
US20170206033A1 (en) * 2016-01-19 2017-07-20 SK Hynix Inc. Mechanism enabling the use of slow memory to achieve byte addressability and near-dram performance with page remapping scheme
CN106990911A (en) * 2016-01-19 2017-07-28 爱思开海力士有限公司 OS and application program transparent memory compress technique
CN107193756A (en) * 2013-03-15 2017-09-22 英特尔公司 For marking the beginning and the instruction of end that need to write back the non-transactional code area persistently stored
CN107515728A (en) * 2016-06-17 2017-12-26 清华大学 Play the data managing method and device of concurrent characteristic inside flash memory device
CN109656483A (en) * 2018-12-19 2019-04-19 中国人民解放军国防科技大学 Static wear balancing method and device for solid-state disk
CN110134492A (en) * 2019-04-18 2019-08-16 华中科技大学 A kind of non-stop-machine memory pages migratory system of isomery memory virtual machine
US10515014B1 (en) * 2015-06-10 2019-12-24 EMC IP Holding Company LLC Non-uniform memory access (NUMA) mechanism for accessing memory with cache coherence
US20200089434A1 (en) * 2017-05-31 2020-03-19 FMAD Engineering GK Efficient Storage Architecture for High Speed Packet Capture
CN112416813A (en) * 2020-11-19 2021-02-26 苏州浪潮智能科技有限公司 Wear leveling method and device for solid state disk, computer equipment and storage medium
CN113377291A (en) * 2021-06-09 2021-09-10 北京天融信网络安全技术有限公司 Data processing method, device, equipment and medium of cache equipment

Also Published As

Publication number Publication date
CN114356213B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN110134514B (en) Extensible memory object storage system based on heterogeneous memory
CN109196459B (en) Decentralized distributed heterogeneous storage system data distribution method
KR102193689B1 (en) Systems and methods for efficient cache line handling based on predictions
US9734070B2 (en) System and method for a shared cache with adaptive partitioning
US8990506B2 (en) Replacing cache lines in a cache memory based at least in part on cache coherency state information
US20080040554A1 (en) Providing quality of service (QoS) for cache architectures using priority information
US9971698B2 (en) Using access-frequency hierarchy for selection of eviction destination
WO2015058695A1 (en) Memory resource optimization method and apparatus
CN111427969A (en) Data replacement method of hierarchical storage system
WO2005121966A2 (en) Cache coherency maintenance for dma, task termination and synchronisation operations
US10705977B2 (en) Method of dirty cache line eviction
US20130007341A1 (en) Apparatus and method for segmented cache utilization
CN113254358A (en) Method and system for address table cache management
CN111597125B (en) Wear balancing method and system for index nodes of nonvolatile memory file system
US11360891B2 (en) Adaptive cache reconfiguration via clustering
JPWO2014142337A1 (en) Storage apparatus, method and program
US9552295B2 (en) Performance and energy efficiency while using large pages
CN116364148A (en) Wear balancing method and system for distributed full flash memory system
WO2021143154A1 (en) Cache management method and device
US9699263B1 (en) Automatic read and write acceleration of data accessed by virtual machines
CN114356213B (en) Parallel space management method for NVM wear balance under NUMA architecture
CN108897618B (en) Resource allocation method based on task perception under heterogeneous memory architecture
Suei et al. Endurance-aware flash-cache management for storage servers
Wang et al. CLOCK-RWRF: A read-write-relative-frequency page replacement algorithm for PCM and DRAM of hybrid memory
Wu et al. Efficient space management and wear leveling for PCM-based storage systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant