CN112486857A - Multilayer nonvolatile caching method for wear sensing and load balancing - Google Patents


Info

Publication number
CN112486857A
Authority
CN
China
Prior art keywords
cache
wear
node
read
load balancing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011182462.1A
Other languages
Chinese (zh)
Other versions
CN112486857B (en)
Inventor
刘芳
蔡振华
陈志广
苏屹宏
黄志杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN202011182462.1A priority Critical patent/CN112486857B/en
Publication of CN112486857A publication Critical patent/CN112486857A/en
Application granted granted Critical
Publication of CN112486857B publication Critical patent/CN112486857B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0811 Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0873 Mapping of cache memory to specific storage devices or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a multilayer nonvolatile caching method for wear sensing and load balancing, comprising the following steps: establishing a multilayer nonvolatile cache architecture to process read-write requests for objects in a large-scale storage system; before a read-write request is processed, caching objects with an independent cache partitioning algorithm; while read-write requests are processed, accessing objects with a multi-path selection method; and, while read-write requests are processed, monitoring the wear of the multilayer nonvolatile cache architecture and, when the wear exceeds a set threshold, exchanging objects with a data migration method. The independent cache partitioning algorithm and the multi-path selection method improve the load balancing capability of the system, while the data migration method improves its wear leveling capability and prolongs the service life of the whole system.

Description

Multilayer nonvolatile caching method for wear sensing and load balancing
Technical Field
The invention relates to the field of nonvolatile caching, and in particular to a multilayer nonvolatile caching method for wear sensing and load balancing.
Background
Data-intensive applications and services place ever greater demands on data storage, and large-scale storage systems provide both large capacity and fast response. Load balancing is a challenge for such systems because real workloads are usually skewed (e.g., the request distribution follows Zipf's law): some nodes receive many more requests than others, and the resulting load imbalance turns hotspot nodes into a system bottleneck and degrades performance.
Caching has been applied to mitigate the load balancing problem. In a DRAM-based cache architecture, DRAM chips offer small capacity at high cost; by contrast, nonvolatile memory devices offer large capacity and nonvolatility, and are therefore widely deployed in large-scale storage systems. However, nonvolatile memory devices have a limited write lifetime, tolerating only a certain number of write operations, and their read and write performance is asymmetric, so write operations incur higher latency. This asymmetry affects load balancing, and concentrated write operations accelerate the wear-out of some nodes. Existing cache mechanisms do not account for the asymmetric reads and writes or the limited lifetime of nonvolatile storage devices, so problems such as uneven load and uneven wear arise. How to improve load balancing and wear leveling is therefore the key issue for nonvolatile caches.
In the prior art, Chinese invention patent CN103268292A, published on August 28, 2013, discloses a method for prolonging the lifetime of a nonvolatile external memory and a high-speed, long-life external memory system that works by maintaining a write cache alongside the original cache. The maintenance method comprises a mixed-granularity write-cache scheduling method, a byte-based compare-and-write-back method, and a dual-cache coordination method. The write cache merges multiple file-system writes to the same data, and new data is written back after a byte-level comparison between the write cache and the original cache. Although this solution extends the lifetime of the nonvolatile memory to some extent by reducing the amount of data written to it, it does not solve the problems described above. A multilayer nonvolatile caching method for wear sensing and load balancing is therefore needed.
Disclosure of Invention
The invention provides a multilayer nonvolatile caching method for wear sensing and load balancing, which aims to solve the problems of uneven load and uneven wear caused by the skewed access load of large-scale storage systems and by the asymmetric reads and writes and limited write lifetime of nonvolatile storage devices.
The primary objective of the present invention is to solve the above technical problems, and the technical solution of the present invention is as follows:
a multi-layer nonvolatile caching method for wear sensing and load balancing comprises the following steps: establishing a multilayer nonvolatile cache architecture for processing read-write requests of objects in a large-scale storage system; before processing the read-write request of the object, performing object caching by adopting an independent cache partition algorithm; in the process of processing the read-write request of the object, a multi-path selection method is adopted for object access; and in the process of processing the read-write request of the object, monitoring the wear condition of the multilayer nonvolatile cache architecture, and when the wear condition exceeds a set threshold value, carrying out object exchange by adopting a data migration method.
Preferably, the multi-layer nonvolatile cache architecture is built through a client, a coordination node, a cache layer and a storage cluster; wherein the client is connected with a coordination node; the client is connected with the cache layer; the client is connected with the storage cluster; the cache layer is connected with the storage cluster.
Preferably, the cache layer comprises cache nodes; the cache node is connected with a client; the cache nodes are connected with the storage cluster.
Preferably, the storage cluster comprises a plurality of storage servers; the storage server is connected with the client; the storage server is connected with the cache node.
Preferably, the specific process of object caching is as follows: and each cache layer adopts an independent cache partition algorithm to hash data.
Preferably, the specific process of object access is as follows: and accessing the same object cached in different cache nodes by adopting a multi-path selection method.
Preferably, the coordinating node comprises a remapping table; the remapping table is used for storing the position information of the migrated object.
Preferably, the coordination node further includes a wear leveling module; the wear leveling module records the number of write operations processed by each cache node and, at a fixed interval, calculates the variance of write-operation counts across the cache nodes.
Preferably, the specific process of object exchange is as follows: the wear leveling module decides based on the variance of write-operation counts; if the variance exceeds the set threshold, data migration between cache nodes is triggered; if it is less than or equal to the threshold, no migration between cache nodes is needed.
Preferably, the data migration is performed between different cache nodes in the same cache layer.
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
The invention processes read-write requests in a large-scale storage system through a multilayer nonvolatile cache architecture. Before a request is processed, objects are cached with an independent cache partitioning algorithm, which prevents the two cache layers from producing the same hotspot nodes and thereby skewing the load. While requests are processed, objects are accessed with a multi-path selection method, which prevents any single cache node from handling so many requests that it becomes overloaded. Together these greatly improve the load balancing capability of the large-scale storage system. Moreover, when the wear of a cache node reaches a threshold, objects are exchanged by a data migration method, which prevents a node that frequently handles hot data from wearing out prematurely; this greatly improves the wear leveling capability of the large-scale storage system and prolongs the service life of the whole system.
Drawings
FIG. 1 is a diagram of the steps of the method;
FIG. 2 is a schematic diagram of a multi-level non-volatile cache architecture;
FIG. 3 is a schematic diagram of an independent cache partitioning algorithm;
FIG. 4 is a schematic diagram of the multi-path selection method;
FIG. 5 is a schematic diagram of data migration within the same cache layer;
FIG. 6 is a diagram illustrating data migration between different cache layers.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
As shown in fig. 1, a multilayer nonvolatile caching method for wear sensing and load balancing includes the following steps: establishing a multilayer nonvolatile cache architecture to process read-write requests for objects in a large-scale storage system; before a read-write request is processed, caching objects with an independent cache partitioning algorithm; while read-write requests are processed, accessing objects with a multi-path selection method; and, while read-write requests are processed, monitoring the wear of the multilayer nonvolatile cache architecture and, when the wear exceeds a set threshold, exchanging objects with a data migration method.
In this scheme, read-write requests in the large-scale storage system are processed by the established multilayer nonvolatile cache architecture. Caching objects with the independent cache partitioning algorithm avoids the load skew caused by identical hotspot nodes; accessing objects through the multi-path selection method avoids overloading any single node with too many requests. Avoiding load skew and node overload effectively improves the load balancing capability of the large-scale storage system. Furthermore, the wear of the multilayer nonvolatile cache architecture is monitored, and when it reaches a threshold, objects are exchanged by the data migration method, so no node that frequently handles hot data wears more severely than the others; avoiding excessive node wear effectively improves the wear leveling capability of the large-scale storage system and thus prolongs the service life of the whole system.
Specifically, as shown in fig. 2, the multi-layer nonvolatile cache architecture is built by a client, a coordination node, a cache layer, and a storage cluster; wherein the client is connected with a coordination node; the client is connected with the cache layer; the client is connected with the storage cluster; the cache layer is connected with the storage cluster.
In the above scheme, the clients send requests whose distribution follows Zipf's law. Requests are divided by type into read requests (querying an object) and write requests (updating an object); read requests further divide into cache hits (the object is in the cache) and cache misses (it is not). The coordination node stores the addresses of migrated objects and the wear condition of the cache layers. The cache layers (shown as cache layer 1 and cache layer 2) cache recently accessed data; when cache space runs out, the LRU algorithm performs cache replacement, i.e., entries are ordered by their last access time and the one accessed longest ago is evicted.
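The LRU replacement policy mentioned above can be sketched with an ordered dictionary (a minimal illustration under stated assumptions, not the patent's implementation; the class and method names are chosen for clarity):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the entry whose last access is oldest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> object, ordered by recency

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, obj):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = obj
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

A cache node would apply this policy locally whenever its nonvolatile cache space is exhausted.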
Specifically, the cache layer includes cache nodes; the cache node is connected with a client; the cache nodes are connected with the storage cluster.
In the above scheme, a cache node is deployed in each cluster. Following the locality principle, it caches the most recently accessed objects and handles most requests, while requests for uncached objects are handed to the storage servers. The cache nodes use nonvolatile storage devices as their storage medium: the high integration density of these devices increases cache capacity, and their nonvolatility preserves data across power failures, enabling rapid data recovery and improving the reliability of the large-scale storage system.
Specifically, the storage cluster comprises a plurality of storage servers; the storage server is connected with the client; the storage server is connected with the cache node.
In the above scheme, the multilayer nonvolatile cache architecture processes read and write requests as follows. Read request: the client sends the read request to the coordination node; if the coordination node holds the address of the requested object, the client forwards the request directly to the corresponding cache node, which responds without involving a storage server. If the coordination node does not hold the address, the object is either a cached object that has not been migrated or an uncached object; the client then forwards the request, according to the cache partitioning algorithm, to the corresponding cache node. If the object is present there, the result is returned; otherwise the cache node forwards the request to the storage server, which responds. Write request: the client sends the write request directly to the storage server to update the data, and the storage server then forwards the write to the cache node so that the data stays consistent.
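The read path above can be sketched as follows (a minimal model under the assumption that cache nodes behave like dictionaries; all function and parameter names are illustrative, not from the patent):

```python
def handle_read(key, remap_table, layers, hash_fns, storage):
    """Sketch of the read path: first consult the coordination node's
    remapping table; on a miss there, locate the object via each layer's
    own partition hash; if no cache layer holds it, the storage server
    responds."""
    # 1. Migrated object: the coordination node knows its current location.
    if key in remap_table:
        layer_id, node_id = remap_table[key]
        return layers[layer_id][node_id][key]
    # 2. Not migrated: each layer hashes the key with its own function.
    for layer, hash_fn in zip(layers, hash_fns):
        node = layer[hash_fn(key) % len(layer)]
        if key in node:
            return node[key]  # cache hit
    # 3. Cache miss in every layer: fall back to the storage cluster.
    return storage[key]
```

A write would go to `storage` first and then be propagated to the owning cache node, mirroring the consistency order described above.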
Specifically, the specific process of object caching is as follows: and each cache layer adopts an independent cache partition algorithm to hash data.
In the above scheme, if different cache layers used the same cache partitioning algorithm, both layers would produce the same hotspot nodes; with different algorithms, data that lands on one node in one layer is, with high probability, scattered across different nodes in the other layer. As shown in fig. 3, cache layer 1 hashes data with the hash1 function: objects A and B map to node C3, object C to node C4, and objects D and E to node C5. Cache layer 2 hashes data with the hash2 function: objects B and C map to node C0, object D to node C1, and objects A and E to node C2. Objects A and B, co-located on one node in cache layer 1, are thus scattered in cache layer 2. The hash function may be the MD5 algorithm, the XXHASH algorithm, or similar. The concrete steps of object caching are: the storage server sends the objects to be cached to the different cache layers; each cache layer hashes the data with its own, mutually independent cache partitioning algorithm; and the objects are dispersed to the corresponding cache nodes accordingly.
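The independent per-layer partitioning can be sketched by salting a hash with a layer identifier. The patent names MD5 and XXHASH as candidate hash functions; the salting scheme and the names below are illustrative assumptions:

```python
import hashlib

def node_for(obj_key, layer_salt, num_nodes):
    """Map an object to a cache node within one layer. Salting the hash
    per layer makes the layers' partitions independent, so objects that
    share a node in one layer tend to scatter in the other."""
    digest = hashlib.md5((layer_salt + obj_key).encode()).hexdigest()
    return int(digest, 16) % num_nodes
```

For example, layer 1 would use `node_for(key, "layer1", n)` and layer 2 `node_for(key, "layer2", n)`, giving two independent placements of the same object.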
Specifically, the specific process of the object access is as follows: and accessing the same object cached in different cache nodes by adopting a multi-path selection method.
In the above scheme, the same cached object exists in multiple cache layers at the same time, so there are multiple paths by which it can be accessed. Based on the load of the candidate cache nodes (the number of requests they have recently processed), the request is sent to the more lightly loaded node, which relieves pressure on the heavily loaded node, reduces request-queuing delay, and further improves the load balancing capability of the large-scale storage system. As shown in fig. 4, object A is cached on both nodes C2 and C3; the load information recorded by the coordination node shows that C2 has recently processed more requests, so the request is forwarded to C3. The specific steps of object access are: the client determines the object to access; the coordination node finds the cache nodes holding that object; the coordination node compares their load; it selects the more lightly loaded node; and the selected cache node responds.
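The load-based path choice reduces to picking the least-loaded candidate (a minimal sketch; the function and parameter names are illustrative, not from the patent):

```python
def select_path(candidates, request_counts):
    """Among the cache nodes holding a requested object, pick the one
    that has processed the fewest recent requests (the lightest load).
    Nodes absent from the counter are treated as unloaded."""
    return min(candidates, key=lambda node: request_counts.get(node, 0))
```

In the figure's example, with C2 having processed more recent requests than C3, the request for object A goes to C3.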
In particular, the coordinating node comprises a remapping table; the remapping table is used for storing the position information of the migrated object.
In the above scheme, the remapping table is responsible for storing the location information of the migrated object.
Specifically, the coordination node further comprises a wear leveling module; the wear leveling module records the number of write operations processed by each cache node and, at a fixed interval, calculates the variance of write-operation counts across the cache nodes.
In the above scheme, the wear leveling module represents the wear condition of each cache node by the number of write operations it has processed.
Specifically, the specific process of object exchange is as follows: the wear leveling module decides based on the variance of write-operation counts; if the variance exceeds the set threshold, data migration between cache nodes is triggered; if it is less than or equal to the threshold, no migration between cache nodes is needed.
In the above scheme, data migration moves hot data (objects with a higher update frequency) from a heavily worn node to a lightly worn node, and cold data (objects with a lower update frequency) from the lightly worn node to the heavily worn node. The specific steps of the object exchange are: the wear leveling module records the wear of each cache node; it decides, based on that wear, whether to trigger data migration; it selects the two cache nodes to migrate between; and the two nodes exchange objects.
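The variance check and node-pair selection described above can be sketched as follows (a minimal illustration; `threshold` and the helper names are assumptions, and the patent does not specify whether population or sample variance is used):

```python
from statistics import pvariance

def migration_needed(write_counts, threshold):
    """Trigger data migration when the variance of per-node write
    counts exceeds the configured threshold."""
    return pvariance(write_counts) > threshold

def pick_exchange_pair(write_counts):
    """Exchange partners: the most-worn node (hot data moves out)
    and the least-worn node (cold data moves out)."""
    heavy = max(range(len(write_counts)), key=write_counts.__getitem__)
    light = min(range(len(write_counts)), key=write_counts.__getitem__)
    return heavy, light
```

Run periodically at the coordination node, this keeps migrations rare while the write load stays even, and reacts once one node absorbs a disproportionate share of writes.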
Specifically, the data migration is performed between different cache nodes in the same cache layer.
In the above scheme, the heavily worn node and the lightly worn node of the multilayer nonvolatile cache architecture may lie in the same cache layer or in different cache layers. For data migration within the same cache layer, as shown in fig. 5, the hot data on the heavily worn node is exchanged with the cold data on the lightly worn node, after which the system operates normally. For migration between different cache layers, as shown in fig. 6, the selected objects cannot simply be exchanged: both invalid and improper exchanges increase the wear of the cache nodes and shorten the overall service life of the large-scale storage system. If the selected objects in the different layers are the same (e.g., object E in cache layer 2 exchanged with object E in cache layer 1), the migration is invalid: the object distribution is unchanged before and after the exchange, so neither the write-request distribution nor the wear balance improves. If the migration leaves two copies of the same object on one cache node (e.g., object E in cache layer 2 exchanged with object D in cache layer 1), subsequent read-write requests for those objects concentrate on that node and form a hotspot, unbalancing both load and wear and degrading system performance. Data migration is therefore performed only between different cache nodes within the same cache layer.
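The validity rules above can be condensed into a small check (a sketch; representing a cache node as a `(layer, node)` tuple is an assumption made for illustration):

```python
def valid_swap(hot_obj, cold_obj, heavy_node, light_node):
    """A swap is valid only between *different* nodes of the *same*
    cache layer, and only for distinct objects; otherwise it is a
    no-op or risks co-locating copies of one object on one node."""
    if hot_obj == cold_obj:
        return False  # identical objects: distribution unchanged
    if heavy_node[0] != light_node[0]:
        return False  # cross-layer swap can create a duplicate hotspot
    if heavy_node[1] == light_node[1]:
        return False  # same node: nothing to balance
    return True
```

The wear leveling module would apply such a check before committing an exchange, discarding candidate pairs that would be invalid or improper.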
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A multi-layer nonvolatile caching method for wear sensing and load balancing is characterized by comprising the following steps:
establishing a multilayer nonvolatile cache architecture for processing read-write requests of objects in a large-scale storage system;
before processing the read-write request of the object, performing object caching by adopting an independent cache partition algorithm;
in the process of processing the read-write request of the object, a multi-path selection method is adopted for object access;
and in the process of processing the read-write request of the object, monitoring the wear condition of the multilayer nonvolatile cache architecture, and when the wear condition exceeds a set threshold value, carrying out object exchange by adopting a data migration method.
2. The multilayer nonvolatile caching method for wear awareness and load balancing according to claim 1, wherein the multilayer nonvolatile caching architecture is established by a client, a coordinating node, a caching layer and a storage cluster; wherein the client is connected with a coordination node; the client is connected with the cache layer; the client is connected with the storage cluster; the cache layer is connected with the storage cluster.
3. The method of claim 2, wherein the cache tier comprises cache nodes; the cache node is connected with a client; the cache nodes are connected with the storage cluster.
4. The multi-tier non-volatile caching method for wear awareness and load balancing according to claim 3, wherein said storage cluster comprises a plurality of storage servers; the storage server is connected with the client; the storage server is connected with the cache node.
5. The multi-layer nonvolatile caching method for wear awareness and load balancing according to claim 2, wherein the specific process of object caching is as follows: and each cache layer adopts an independent cache partition algorithm to hash data.
6. The multi-level nonvolatile caching method for wear awareness and load balancing according to claim 3, wherein the specific process of object access is as follows: and accessing the same object cached in different cache nodes by adopting a multi-path selection method.
7. The method of claim 2, wherein the coordinating node comprises a remapping table; the remapping table is used for storing the position information of the migrated object.
8. The method as claimed in claim 3, wherein the coordinating node further comprises a wear leveling module, and the wear leveling module records the write operation times processed by each cache node and calculates the write operation time variance between different cache nodes according to a fixed interval.
9. The method for multi-level nonvolatile caching for wear awareness and load balancing according to claim 8, wherein the specific process of object swapping is: the wear leveling module judges according to the write operation frequency variance, and if the write operation frequency variance is larger than a set threshold, data migration between cache nodes is triggered; and if the variance of the write operation times is less than or equal to the set threshold, data migration between the cache nodes is not required.
10. The method of claim 9, wherein the data migration is between different cache nodes in the same cache tier.
CN202011182462.1A 2020-10-29 2020-10-29 Multi-layer nonvolatile caching method for wear sensing and load balancing Active CN112486857B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011182462.1A CN112486857B (en) 2020-10-29 2020-10-29 Multi-layer nonvolatile caching method for wear sensing and load balancing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011182462.1A CN112486857B (en) 2020-10-29 2020-10-29 Multi-layer nonvolatile caching method for wear sensing and load balancing

Publications (2)

Publication Number Publication Date
CN112486857A true CN112486857A (en) 2021-03-12
CN112486857B CN112486857B (en) 2023-08-29

Family

ID=74927770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011182462.1A Active CN112486857B (en) 2020-10-29 2020-10-29 Multi-layer nonvolatile caching method for wear sensing and load balancing

Country Status (1)

Country Link
CN (1) CN112486857B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115113798A (en) * 2021-03-17 2022-09-27 中国移动通信集团山东有限公司 Data migration method, system and equipment applied to distributed storage

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090216910A1 (en) * 2007-04-23 2009-08-27 Duchesneau David D Computing infrastructure
US20100241881A1 (en) * 2009-03-18 2010-09-23 International Business Machines Corporation Environment Based Node Selection for Work Scheduling in a Parallel Computing System
CN104811493A (en) * 2015-04-21 2015-07-29 华中科技大学 Network-aware virtual machine mirroring storage system and read-write request handling method
CN105354152A (en) * 2014-08-19 2016-02-24 华为技术有限公司 Nonvolatile memory and wear leveling method
CN106980799A (en) * 2017-03-10 2017-07-25 华中科技大学 The nonvolatile memory encryption system that a kind of abrasion equilibrium is perceived
US20190102111A1 (en) * 2016-08-06 2019-04-04 Wolley Inc. Apparatus and Method of Wear Leveling for Storage Class Memory Using Address Cache
US10318180B1 (en) * 2016-12-20 2019-06-11 EMC IP Holding Company LLC Metadata paging mechanism tuned for variable write-endurance flash
US20200105354A1 (en) * 2018-09-29 2020-04-02 Western Digital Technologies, Inc. Wear leveling with wear-based attack detection for non-volatile memory
US20200133841A1 (en) * 2018-10-25 2020-04-30 Pure Storage, Inc. Scalable garbage collection
CN111258925A (en) * 2020-01-20 2020-06-09 中国科学院微电子研究所 Nonvolatile memory access method, nonvolatile memory access device, memory controller, nonvolatile memory device and nonvolatile memory medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIAXIN OU ET AL.: "EDM: An Endurance-aware Data Migration Scheme for Load Balancing in SSD Storage Clusters", 2014 IEEE 28th International Parallel & Distributed Processing Symposium, pages 787-796 *
LINGYU ZHU ET AL.: "Wear Leveling for Non-Volatile Memory: A Runtime System Approach", IEEE Access, vol. 6, pages 60622-60634, XP011698482, DOI: 10.1109/ACCESS.2018.2875820 *
SHEN FANFAN ET AL.: "An SRAM-Assisted Wear Leveling Method for Emerging Non-Volatile Caches", Chinese Journal of Computers, no. 03 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115113798A (en) * 2021-03-17 2022-09-27 中国移动通信集团山东有限公司 Data migration method, system and equipment applied to distributed storage
CN115113798B (en) * 2021-03-17 2024-03-19 中国移动通信集团山东有限公司 Data migration method, system and equipment applied to distributed storage

Also Published As

Publication number Publication date
CN112486857B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN105740164B (en) Multi-core processor supporting cache consistency, reading and writing method, device and equipment
US6324621B2 (en) Data caching with a partially compressed cache
US8171223B2 (en) Method and system to increase concurrency and control replication in a multi-core cache hierarchy
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
US7574556B2 (en) Wise ordering for writes—combining spatial and temporal locality in write caches
JP6613375B2 (en) Profiling cache replacement
CN110058822B (en) Transverse expansion method for disk array
KR100978156B1 (en) Method, apparatus, system and computer readable recording medium for line swapping scheme to reduce back invalidations in a snoop filter
US20160034195A1 (en) Memory network
US11861204B2 (en) Storage system, memory management method, and management node
US6766424B1 (en) Computer architecture with dynamic sub-page placement
CN108762671A (en) Mixing memory system and its management method based on PCM and DRAM
US7925857B2 (en) Method for increasing cache directory associativity classes via efficient tag bit reclaimation
US8402198B1 (en) Mapping engine for a storage device
US11360891B2 (en) Adaptive cache reconfiguration via clustering
CN112486857B (en) Multi-layer nonvolatile caching method for wear sensing and load balancing
WO2016131175A1 (en) Method and device for accessing data visitor directory in multi-core system
CN111506517B (en) Flash memory page level address mapping method and system based on access locality
US11836089B2 (en) Cache memory, memory system including the same and operating method thereof
CN115563029A (en) Caching method and device based on two-layer caching structure
CN115509962A (en) Multi-level cache management method, device and equipment and readable storage medium
CN112445794B (en) Caching method of big data system
CN109669882B (en) Bandwidth-aware dynamic cache replacement method, apparatus, system, and medium
CN112231241B (en) Data reading method and device and computer readable storage medium
CN112631789A (en) Distributed memory system facing short video data and video data management method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant