CN111722797A - SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device


Info

Publication number: CN111722797A
Application number: CN202010420508.2A
Authority: CN (China)
Prior art keywords: write, zone, SMR, block, cache
Other languages: Chinese (zh)
Other versions: CN111722797B (en)
Inventors: 伍卫国, 张驰, 张晨, 聂世强, 郑旭达, 马春苗
Assignee (current and original): Xian Jiaotong University
Application filed by Xian Jiaotong University
Legal status: Granted; Active


Classifications

    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F12/0891 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
    • G06F3/0616 Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/064 Management of blocks
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays

Abstract

The invention discloses a data management method, storage medium and device for an SSD and HA-SMR hybrid storage system. IO streams are directed to different storage tiers according to whether the LBA of the target data block is aligned with the write pointer of the corresponding zone in the SMR, combined with the size of the non-sequential write request. For the cache-layer management strategy, data blocks in the cache layer are grouped into sets according to whether they belong to the same zone; write and read flag bits are set for each data block entering the cache, and when a cache scrubbing operation occurs, the comprehensive heat of each block is computed from these flag bits, the coverage rate of the set corresponding to each zone is computed, the comprehensive weight of each set is obtained from both, and the sets are added to the corresponding cache eviction linked list. The invention fully considers SSD lifetime, cache hit rate and SMR write amplification, reduces the response latency to upper-layer application requests, and improves the overall performance of the hybrid storage system.

Description

SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device
Technical Field
The invention belongs to the technical field of computer storage, and particularly relates to a write-friendly data management method, storage medium and device for an SSD and HA-SMR hybrid storage system.
Background
As a new type of inexpensive magnetic medium suited in recent years to mass storage of cold data, SMR (shingled magnetic recording) disks offer low price, high storage density, and a simple production process, making them suitable for today's mass data storage. Capacity is increased by overlapping tracks like roof shingles, which requires a wider write head generating a stronger magnetic field. For read requests and sequential writes, SMR behaves no differently from a conventional HDD; on random writes, however, the write head overlaps the tracks downstream of the target track and would erase the data on them, so each such write requires a read-modify-write operation. The resulting "write amplification" adds response latency to upper-layer IO. SSDs, generally built from flash chips, have far stronger random read/write capability than magnetic disks, so using an SSD as a cache can effectively absorb the random IO of upper-layer services.
However, SSDs have asymmetric read/write characteristics and a limited number of erase cycles: frequent erase operations wear out the flash chips and irreversibly degrade the stability of data storage. Meanwhile, compared with early SMR, host-managed and host-aware SMR can expose the internal state and device information of the disk to upper-layer applications, helping them understand the underlying device. Yet the data management policies of existing hybrid storage systems usually just apply an LRU algorithm to evict long-unused data, or write as much data as possible into the SSD on the locality principle so that the cache layer achieves a high hit rate. This ignores the characteristics of the underlying storage media and significantly affects device lifetime and overall system performance.
Therefore, two questions must be considered for efficient data management in a hybrid storage system: how upper-layer applications can use the internal information of the HA-SMR (host-aware shingled disk) to better organize and order the service IO stream and reduce low-value writes to the SSD, thereby extending SSD lifetime; and how the SSD cache layer can use that information to formulate a suitable cache replacement strategy that writes data of different hotness back to the SMR sensibly, reducing the impact of SMR write amplification on upper-layer service IO.
Disclosure of Invention
The technical problem solved by the present invention is to provide a write-friendly data management method for a hybrid storage system composed of an SSD as the cache layer and an HA-SMR as the capacity layer, targeting application scenarios dominated by write requests mixed with a smaller share of reads. The method is implemented at the host side and considers SSD lifetime, cache hit rate, and SMR write amplification. When data arrives, the write path is chosen, directly to the capacity layer or into the cache layer, according to the write pointer state information of the HA-SMR. When cache-layer data is written back to the capacity layer, data belonging to the same zone is written back together, which both increases the available cache space and reduces the scrubbing time of the persistent buffer inside the SMR, converting more random writes into sequential writes and lowering the SMR write amplification rate, thereby improving the performance of the hybrid storage system as a whole.
The invention adopts the following technical scheme:
a write-friendly data management method facing an SSD and HA-SMR hybrid storage system comprises the following steps:
s1, clustering blocks belonging to the same zone in the SMR into a set in the SSD cache layer, and inserting the blocks into the set in an LBA ascending order modeblockManaging a linked list, and writing back to an SMR capacity layer by taking set as a basic unit; inserting set into zone in FIFO first-in first-out modesetA temporary management linked list;
s2, respectively setting a write flag bit and a read flag bit for each block in the set, recording the reading and writing times of the block, setting an initial value to be 0, and adding 1 each time after the reading and writing request hits;
s3, calculating the comprehensive heat of each block in the updated set according to the reading and writing times recorded in the step S2; simultaneously calculating the coverage rate of the updated set;
s4, calculating and updating the final weight of each set according to the comprehensive heat and coverage rate of each block in the step S3, and adding the weight to the zone again in a weight descending mannersetCaching the elimination linked list;
s5, when the remaining space of the buffer is less than the prescribed threshold, the zone from step S4setSelecting an elimination object from the cache elimination linked list, executing a cache replacement algorithm, replacing data meeting the requirement from the cache to a capacity layer, and executing the step S6 after the replacement is finished;
s6, clearing the write and read flag bits of the replaced data block in the SSD cache layer;
s7, when the read request arrives, firstly checking whether the requested block exists in the cache, if so, reading the target data from the SSD cache layer and returning the target data to the upper layer application, and then executing the step S2; if the read request is not hit in the cache, reading target data from the SMR capacity layer and returning the target data to the upper application;
s8, when the write request arrives, firstly checking whether the requested data block exists in the cache, if so, directly updating the target data in the SSD cache layer, and then executing the step S2; if the target data does not exist in the cache layer, judging whether the LBA of the target data block is aligned with a write pointer of a zone pair in the SMR, and if the LBA of the target data block is aligned with the write pointer of the zone pair in the SMR, directly writing the zone pair in the SMR; otherwise, judging whether the size of the non-sequential write request exceeds the set non-sequential write threshold value T or notNSWIf the number of the SMR data exceeds the preset value, the SMR data is regarded as large non-sequential writing, and the large non-sequential writing is written into a persistent buffer area inside the SMR disk; if the two cases are not met, the data needs to be written into the SSD cache layer, and steps S1 and S2 are executed; judging whether the residual space is less than the residual space threshold T or not after the write request is completedfreeIf it is less than the threshold value TfreeThen step S5 is executed to implement write-friendly data management of the SSD and HA-SMR hybrid storage system.
Specifically, in step S1, the minimum unit of an upper-layer IO request is a 4KB block, and the HA-SMR uses a 256MB zone as its management unit, consisting of 65536 blocks of 4KB each. A set consists of all blocks in the SSD that belong to the same zone; each block within a set is inserted into the set_block management linked list in ascending LBA order. Each set in the SSD thus corresponds to one zone in the SMR; a set is a minimum of 4KB and a maximum of 256MB, and each set is inserted into the zone_set temporary management linked list in FIFO order.
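As a concrete illustration of these structures, the sketch below models blocks, sets, and the two linked lists in Python. All class and field names are invented for the example; a sorted list and a deque stand in for the set_block and zone_set linked lists, under the 4KB-block/256MB-zone geometry stated above.

from collections import deque
import bisect

BLOCK_SIZE = 4 * 1024        # 4KB minimum IO unit
ZONE_BLOCKS = 65536          # 4KB blocks per 256MB zone

class Block:
    """A cached 4KB block with its write/read flag-bit counters."""
    def __init__(self, lba):
        self.lba = lba
        self.writes = 0      # W_i: update count after entering the cache
        self.reads = 0       # R_i: read count after entering the cache

class Set:
    """All cached blocks of one SMR zone, kept in ascending LBA order."""
    def __init__(self, zone_id):
        self.zone_id = zone_id
        self.lbas = []       # sorted: stands in for the set_block linked list
        self.blocks = {}     # lba -> Block

    def insert(self, block):
        if block.lba not in self.blocks:
            bisect.insort(self.lbas, block.lba)   # LBA-ascending insertion
            self.blocks[block.lba] = block

class CacheLayer:
    """SSD cache layer; new sets join the zone_set temporary list in FIFO order."""
    def __init__(self):
        self.sets = {}                 # zone_id -> Set
        self.zone_set_fifo = deque()   # stands in for the zone_set linked list

    def insert_block(self, zone_id, block):
        s = self.sets.get(zone_id)
        if s is None:
            s = self.sets[zone_id] = Set(zone_id)
            self.zone_set_fifo.append(zone_id)    # FIFO admission of the new set
        s.insert(block)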
Specifically, in step S3, a write flag bit and a read flag bit are added to each block entering the cache, the update and read times are recorded, and the comprehensive heat of each block in the cache is calculated.
Specifically, in step S3, the coverage rate of the updated set is C_i = set_block / zone_block, where 0 ≤ C_i ≤ 1; zone_block is a constant representing the 65536 blocks of 4KB in a 256MB zone, and set_block represents the number of 4KB blocks in the set, 0 ≤ set_block ≤ 65536.
Specifically, in step S4, the comprehensive weight of each set is obtained from the comprehensive heat of its blocks and its coverage rate from step S3, and the sets are inserted into the zone_set management linked list in descending order of comprehensive weight, each node carrying its comprehensive weight W_i and coverage rate C_i.
Specifically, step S5 includes:
S501, when a read request arrives, checking from the mapping table whether the requested block exists in the cache; if so, reading the target data from the SSD cache layer, returning it to the upper-layer application, and then executing step S502;
S502, updating the read flag bit of the block hit by the read request, incrementing its count by 1;
S503, when the read request misses in the cache, reading the target data from the SMR capacity layer and returning it to the upper-layer application.
Specifically, step S6 includes:
S601, when a write request arrives, checking from the mapping table whether the requested data block exists in the cache; if so, directly updating the target data in the SSD cache layer and then executing step S602;
S602, updating the write flag bit of the block hit by the write request, incrementing its count by 1;
S603, if the target data is not in the cache layer, judging whether the LBA of the target data block is aligned with the write pointer of the corresponding zone in the SMR; if aligned, writing directly into that zone, otherwise entering step S604;
S604, per the judgment of S603, if the LBA of the target data block is not aligned with the write pointer of the corresponding zone, judging whether the size of the non-sequential write request exceeds the set non-sequential write threshold T_NSW (1MB); if it exceeds T_NSW, it is regarded as a large non-sequential write and is written into the persistent buffer inside the SMR disk, otherwise entering step S605;
S605, since the conditions of steps S603 and S604 are not satisfied, writing the data into the SSD cache layer and entering step S606;
S606, adding a write flag bit and a read flag bit to each block newly entering the SSD cache layer and incrementing the write flag count by 1; subsequently, if the corresponding set_block management linked list already exists, adding the block to it, otherwise creating a new set_block management linked list; after completing these steps, adding any new set to the zone_set temporary management linked list;
S607, after the write request completes, judging whether the remaining space is below the remaining-space threshold T_free, i.e. T_free = 10% of the SSD cache capacity; if it is below T_free, executing step S7 to start the cache scrubbing operation.
Specifically, step S7 includes:
S701, calculating the comprehensive heat of the blocks in each set from the update counts recorded in step S2, and calculating the coverage rate of each set; then calculating the final comprehensive weight of each set in the zone_set temporary management linked list;
S702, adding the sets to the zone_set eviction linked list in descending order of comprehensive weight;
S703, starting from the tail of the zone_set eviction linked list, i.e. the end with the lowest weight, first checking each node's set_block management linked list for candidates whose starting LBA is aligned with the write pointer of the destination zone to be written back and whose coverage rate C_i reaches 10%; if such candidates exist, writing them back as evictees to the zones of the SMR by sequential writing until the cache space capacity returns above the remaining-space threshold T_free; if no such candidate exists, proceeding to step S704;
S704, again starting from the tail of the zone_set linked list, checking the coverage information of each node and writing sets whose coverage rate C_i reaches 50% or more back as candidates to the persistent buffer in the SMR until the cache space capacity returns above the remaining-space threshold T_free; the write-back now becomes non-sequential writing, and the device side performs read-modify-write operations to sequentially write the persistent buffer's data back to the zones of the SMR; if this condition is not met either, proceeding to step S705;
S705, if neither kind of candidate exists, starting from the tail of the zone_set linked list and selecting the lowest-weight nodes as candidates to write back to the persistent buffer in the SMR until the cache space capacity returns above the remaining-space threshold T_free; the write-back now becomes non-sequential writing, and the device side performs read-modify-write operations to sequentially write the persistent buffer's data back to the zones of the SMR.
A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-8.
A management device, comprising:
one or more processors, memory, and one or more programs stored in the memory and configured for execution by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-8.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention aims at mixed storage of SSD and HA-SMRThe write-friendly data management method of the storage system is based on the fact that the service life of the limited erasing times of the SSD and the fact that the SMR random writing can generate ' write amplification ', on one hand, the SSD is used as a cache layer due to the fact that the SSD has stronger random writing capability, and on the other hand, the sequential ' IO requests meeting the writing pointer alignment condition are directly directed to the SMR on the basis of the principle that whether the IO request initial address of the upper layer is aligned with the writing pointer of the SMR or not, so that the sequential writing capability of the mechanical hard disk is better utilized; for write pointer misalignment but exceeding a set threshold TNSWThe non-sequential writing can be written into a persistent buffer area in the SMR disk, the write flow with low SSD value is further filtered, the writing frequency of the SSD is reduced through the steps, frequent erasing is avoided, and the service life of the SSD is prolonged; and finally, only small non-sequential writes can enter the SSD buffer layer with good random write capability, and blocks of the same zone are clustered and reordered to be converted into larger continuous blocks which are beneficial to the execution of 'read-modify-write' cleaning operation of the persistent buffer layer in the HA-SMR, so that the PRT-system performance recovery time is shortened, and the write amplification rate of the SMR is reduced.
Furthermore, by clustering blocks belonging to the same zone and writing them back to the SMR together, the frequency of read-modify-write operations executed on the device side is reduced, the scrubbing efficiency of the device-side persistent buffer is improved, the performance recovery time (PRT) of the disk is shortened, the SMR write amplification rate is lowered, and overall system performance improves.
Furthermore, by adding read and write flag bits to the block, the read-write characteristics of the block can be distinguished with finer granularity.
Furthermore, the read heat and write heat of a block can each be calculated from the recorded read and write counts, yielding the block's comprehensive heat; counting the blocks that a given zone's set holds in the cache yields the corresponding coverage rate, i.e. that zone's share of the cache.
Further, by calculating the comprehensive heat of each block in a set and the coverage rate of the set corresponding to its zone in the cache, the hotness and quantity of the data blocks are considered together; the comprehensive weight of the set is obtained from both, and the sets are added to the zone_set cache eviction linked list in descending weight order.
Furthermore, when cache replacement occurs, the computed comprehensive weights order the zone_set cache eviction linked list, and the best evictee is selected starting from its tail.
Further, the data block replaced out of the SSD cache layer will not enter the cache layer again from the capacity layer, and therefore the write and read flag bits thereof are cleared.
Further, when a read request arrives, the data's storage location is found by looking up the mapping table, and the data is read from the corresponding layer and returned to the upper-layer application.
Furthermore, when a write request arrives, the mapping table is searched for the data's storage location; on a cache hit the data is written directly. Otherwise, whether to write the SMR zone is decided by whether the data's LBA logical address is aligned with the write pointer; larger random writes go into the SMR's persistent buffer, and only small, unavoidable random writes are written into the SSD cache layer.
In summary, the invention comprehensively considers three factors, namely the cache hit rate, the SSD service life and the SMR write amplification, so as to improve the comprehensive performance of the hybrid storage system as much as possible.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a schematic diagram of a hybrid storage system architecture of the present invention;
FIG. 2 is a data flow diagram of the present invention;
FIG. 3 is a schematic diagram of the FIFO-managed zone_set temporary management linked list of the present invention;
FIG. 4 is a schematic diagram of the zone_set cache eviction linked list of the present invention, managed in descending weight order;
FIG. 5 is a flow chart of the present invention process for handling a read I/O request;
FIG. 6 is a flow chart of the present invention process for handling a write I/O request;
FIG. 7 is a flow chart of cache replacement operation according to the present invention.
Detailed Description
Referring to fig. 1, the management method of the present invention is implemented as a single kernel module within the overall logic structure of the hybrid storage system, so the Linux operating system does not need to be modified. The module sits in the block layer in the form of a pseudo block device and mainly comprises cache management and IO redirection functions, used respectively to trigger the data write-back operation during cache-space scrubbing and to respond to read/write requests under different conditions to realize IO relocation.
Referring to fig. 2, which shows the specific data flows of the present invention: when a read/write request arrives, different types of data are directed to different storage locations, and the data flow also changes when cache replacement occurs.
The invention provides a write-friendly data management method for the SSD and HA-SMR hybrid storage system that directs different types of requests to different storage tiers by judging the data flow, and writes qualifying data back to the capacity layer when the cache scrubbing operation is triggered. The specific steps are as follows:
S1, cluster blocks belonging to the same zone of the SMR into a set in the SSD cache layer; insert the blocks into the set_block management linked list in ascending LBA order, and write back to the SMR capacity layer with set as the basic unit. Insert each set into the zone_set temporary management linked list in FIFO (first-in first-out) order, as shown in FIG. 3;
The minimum unit of an upper-layer IO request is a 4KB block, and each 256MB HA-SMR zone serves as a management unit consisting of 65536 blocks of 4KB. That is, a set is composed of all blocks in the SSD that belong to the same zone, and each block inside the set is inserted into the set_block management linked list in ascending LBA order. Therefore, each set in the SSD corresponds to one zone in the SMR; a set is a minimum of 4KB and a maximum of 256MB, and each set is inserted into the zone_set temporary management linked list in FIFO order.
S2, respectively setting a write flag bit and a read flag bit for each block in the set, recording the reading and writing times of the block, setting an initial value to be 0, and adding 1 each time after the reading and writing request hits;
s3, calculating the comprehensive heat of each block in the updated set according to the reading and writing times recorded in S2; simultaneously calculating the coverage rate of the updated set;
In order for the cache to achieve a higher hit rate on read/write requests, write and read flag bits are added to each block entering the cache, recording its update and read counts respectively. The comprehensive heat of each block in the cache is calculated with the formula H_i = W_i + X * R_i, where 0 < X < 1 and X represents the proportion of read requests in a workload dominated by writes mixed with a smaller share of reads. W_i and R_i both have initial value 0 and represent the numbers of updates and reads after the block enters the cache.
Further, the set coverage rate indicates the proportion of blocks belonging to a given SMR zone that reside in the cache layer, computed as C_i = set_block / zone_block, where 0 ≤ C_i ≤ 1. Here zone_block is a constant representing the 65536 blocks of 4KB in a 256MB zone, and set_block represents the number of 4KB blocks in the set, 0 ≤ set_block ≤ 65536.
S4, calculate and update the final weight of each set from the comprehensive heat and coverage rate obtained in S3, and re-insert the sets into the zone_set cache eviction linked list in descending weight order;
Referring to FIG. 4, with the comprehensive heat of each block and the coverage rate of each set obtained in step S3, the comprehensive weight of a set is determined by the formula W_i = ΣH_i / C_i, where C_i denotes the coverage rate of the i-th set and ΣH_i denotes the sum of the comprehensive heat of all blocks of the i-th set in the cache. Each set is then inserted into the zone_set management linked list in descending order of comprehensive weight, each node carrying its comprehensive weight W_i and coverage rate C_i.
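Taken together, the three formulas reduce to a few lines of arithmetic. The Python sketch below restates H_i = W_i + X * R_i, C_i = set_block / zone_block, and W_i = ΣH_i / C_i; the parameter x and the block objects are assumptions made for the illustration.

ZONE_BLOCKS = 65536          # zone_block: 4KB blocks per 256MB zone

def block_heat(writes, reads, x):
    """H_i = W_i + X * R_i, with 0 < X < 1 the read share of the mixed workload."""
    return writes + x * reads

def set_coverage(cached_blocks):
    """C_i = set_block / zone_block: the fraction of a zone resident in cache."""
    return cached_blocks / ZONE_BLOCKS

def set_weight(blocks, x):
    """W_i = sum(H_i) / C_i over the blocks of one set."""
    total_heat = sum(block_heat(b.writes, b.reads, x) for b in blocks)
    return total_heat / set_coverage(len(blocks))

# zone_set eviction list: descending weight, so eviction scans start at the tail.
# eviction_list = sorted(sets, key=lambda s: set_weight(s.blocks, x), reverse=True)

Because C_i divides the summed heat, a set that is hot but holds only a small slice of its zone gets a large weight and stays cached, while cold, high-coverage sets sink toward the tail of the eviction list, where they make good bulk write-back candidates.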
S5, when the remaining cache space falls below the prescribed threshold, select eviction candidates from the zone_set cache eviction linked list of S4, execute the cache replacement algorithm, replace qualifying data from the cache to the capacity layer, and execute step S6 when replacement finishes;
referring to fig. 5, for analyzing the data flow of the IO redirection function, if a read request arrives, the processing of the read request includes the following steps:
s501, when a read request is reached, firstly checking whether a requested block exists in a cache according to a mapping table, if so, reading target data from an SSD cache layer and returning the target data to an upper layer application, wherein the data flow path of the target data is as a black dotted line in the graph 2, and then executing the step S502;
s502, updating the read zone bit of the block which hits the reading request, and adding 1 to the recording times;
s503, when the read request is not hit in the cache, reading target data from the SMR capacity layer and returning the target data to an upper layer application, wherein the data may exist in a persistent buffer layer or a zone of the SMR, the data flow path of the data flow path returns to the upper layer along the reference numeral 2 or the reference numeral 3 in the figure 2, and the position of the set in the corresponding linked list does not need to be adjusted.
S6, clearing the write and read flag bits of the replaced data block in the SSD cache layer;
referring to fig. 6, if a write request arrives, the specific steps are as follows:
s601, when a write request arrives, firstly checking whether a requested data block exists in a cache according to a mapping table, if so, directly updating target data in an SSD cache layer, wherein the data flow direction is as indicated by a reference numeral 1 in FIG. 2, and then executing a step S602;
s602, updating the write zone bit of the block which hits the write request, and adding 1 to the recording times;
s603, if the target data does not exist in the cache layer, judging whether the LBA of the target data block is aligned with the write pointer of the zone pair in the SMR, if the conditions are met, directly writing the zone in the SMR, wherein the data flow path is indicated by the reference number 2 in FIG. 2, otherwise, entering the step S604;
s604, according to the judgment result of S603, if the LBA of the target data block is not aligned with the write pointer of the zone pair in the SMR, judging again whether the size of the non-sequential write request exceeds the set non-sequential write threshold TNSWIf it exceeds TNSWThe method is regarded as larger non-sequential writing, which is beneficial to improving the cleaning efficiency of the disk persistent buffer, the method can be written into the persistent buffer inside the SMR disk, the data flow path of the persistent buffer is indicated by a mark 3 in figure 2, otherwise, the method enters the step S605;
s605, because the conditions of S603 and S604 are not satisfied, at this time, the data needs to be written into the SSD cache layer, the data flow path is changed to the direction indicated by reference numeral 1 in the figure again, and the process proceeds to step S606;
s606, adding a write flag bit and a read flag bit to the block newly entering the SSD cache layer, and adding 1 to the write flag bit recording times. Subsequently, if the corresponding set already existsblockAdding the management linked list into the linked list, otherwise, creating new setblockManaging linked list, adding new set into zone after completing the above stepssetA temporary management linked list;
s607, after the write request is finished, judging whether the residual space is less than the residual space threshold value TfreeIf it is less than the threshold value TfreeThen, step S7 is executed to start the cache scrubbing operation.
After a certain time has elapsed, the persistent buffer within the SMR capacity layer performs its read-modify-write clean-up operation, at which time data is written back to the zones along the data flow indicated by reference numeral 4 in FIG. 2.
S7, when the read request arrives, firstly checking whether the requested block exists in the cache, if so, reading the target data from the SSD cache layer and returning the target data to the upper layer application, and then executing the step S2; if the read request is not hit in the cache, reading target data from the SMR capacity layer and returning the target data to the upper application;
referring to fig. 7, for analyzing the data flow direction of the cache management function, when the remaining space is insufficient and the cache replacement algorithm needs to be executed, the specific steps are as follows;
s701, according to the updating times recorded in the step S2, utilizing a formula Hi=Wi+X*RiCalculating the comprehensive heat of block blocks in each set and obtaining the heat through a formula Ci=setblock/zoneblockCalculating the coverage rate of each set; then, using the formula Wi=∑Hi/CiCalculating the zone shown in FIG. 3setTemporarily managing the final comprehensive weight of each set in the linked list;
s702, adding the set sets to the zone in sequence according to the rule that the comprehensive weight value is descending from large to smallsetEliminating the linked list;
s703 slave zonesetAt the end of the culling list, i.e. beginning at the end with the lowest weight, the set of each node is checked firstblockWhether the starting LBA is aligned with the write pointer of the destination zone to be written back or not in the management chain table and the coverage rate CiThe candidate up to 10%, if any, is written back to the SMR zone as an evictor in a sequential write, the data flow is as shown by reference numeral 5 in FIG. 2, and the scrubbing operation is performed until the cache space capacity returns to the remaining space threshold TfreeAbove. If no such candidate exists, proceed to step S704;
s704, again starting from the zonesetStarting from the end of the linked list, checking the coverage rate information of each node, and comparing the coverage rate CiThe set up to 50% or more is written back as a candidate to the persistent buffer in SMR, the data flow is shown as number 6 in FIG. 2, and the flush operation is performed until the capacity of the cache space returns to the remaining space threshold TfreeAbove. At this time, the write-back operation will be changed to non-sequential write, when the device side buffer is full or the device is in an idle state, the device side executes read-change-write operation to write the data of the persistent buffer back to the zone of the SMR in sequence, the data flow is as shown by reference numeral 4 in fig. 2, if the above condition is not met, the step S705 is entered;
s705, when the candidates of the two cases do not exist, the slave zonesetStarting from the end of the linked list, the node with the lowest weight is selected as a candidate to be written back to the persistent buffer in the SMR, and the data flow is as shown by reference numeral 6 in FIG. 2 until the capacity of the buffer space returns to the residual space threshold TfreeAbove. At this time, the write-back operation is changed into non-sequential write, and when the device side buffer is full or the device is in an idle state, the device side executes read-write-back operation to sequentially write the data in the persistent buffer back to the zone of the SMR.
S8, when a write request arrives, first check whether the requested data block exists in the cache; if so, directly update the target data in the SSD cache layer and then execute step S2. If the target data is not in the cache layer, judge whether the LBA of the target data block is aligned with the write pointer of the corresponding zone in the SMR; if aligned, write directly into that SMR zone. Otherwise, judge whether the size of the non-sequential write request exceeds the set non-sequential write threshold T_NSW, i.e. 1MB; if it exceeds T_NSW it is regarded as a larger "non-sequential write" and is written into the persistent buffer inside the SMR disk. If neither case holds, the data must be written into the SSD cache layer, and steps S1 and S2 are executed. After the write request completes, judge whether the remaining space is below the remaining-space threshold T_free, i.e. T_free = 10% of the SSD cache capacity; if it is below T_free, execute step S5.
In summary, the write-friendly data management method for the SSD and HA-SMR hybrid storage system considers three factors: SSD lifetime, the SMR write amplification phenomenon, and cache hit rate. The SSD serves as a cache layer absorbing random writes, while sequential IO requests that meet the write-pointer alignment condition are directed straight to the SMR based on whether the upper-layer IO request's start address is aligned with the SMR's write pointer, making better use of the mechanical disk's sequential-write capability. Non-sequential writes whose pointers are misaligned but whose size exceeds the set threshold are written into the persistent buffer inside the SMR disk, further filtering write traffic of "low value" to the SSD, so that only small, unavoidable non-sequential writes enter the SSD cache. These steps reduce SSD write counts, avoid frequent erasure, and extend SSD lifetime; at the same time, blocks of the same zone are clustered and reordered into larger contiguous extents that favor the "read-modify-write" scrubbing of the persistent buffer inside the HA-SMR, shortening the performance recovery time (PRT) of the system and lowering the SMR write amplification rate. In these ways the overall performance of the hybrid storage system is effectively improved.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. A data management method for a hybrid storage system of an SSD and an HA-SMR is characterized by comprising the following steps:
s1, clustering blocks belonging to the same zone in the SMR into a set in the SSD cache layer, and inserting the blocks into the set in an LBA ascending order modeblockManaging a linked list, and writing back to an SMR capacity layer by taking set as a basic unit; inserting set into zone in FIFO first-in first-out modesetA temporary management linked list;
s2, respectively setting a write flag bit and a read flag bit for each block in the set, recording the reading and writing times of the block, setting an initial value to be 0, and adding 1 each time after the reading and writing request hits;
s3, calculating the comprehensive heat of each block in the updated set according to the reading and writing times recorded in the step S2; simultaneously calculating the coverage rate of the updated set;
s4, calculating and updating the final weight of each set according to the comprehensive heat and coverage rate of each block in the step S3, and adding the weight to the zone again in a weight descending mannersetCaching the elimination linked list;
s5, when the remaining space of the buffer is less than the prescribed threshold, the zone from step S4setSelecting obsolete objects from the cache obsolete linked list, executing a cache replacement algorithm, and caching the data meeting the requirementsReplacing to the capacity layer, and executing step S6 after completion;
s6, clearing the write and read flag bits of the replaced data block in the SSD cache layer;
s7, when the read request arrives, firstly checking whether the requested block exists in the cache, if so, reading the target data from the SSD cache layer and returning the target data to the upper layer application, and then executing the step S2; if the read request is not hit in the cache, reading target data from the SMR capacity layer and returning the target data to the upper application;
s8, when the write request arrives, firstly checking whether the requested data block exists in the cache, if so, directly updating the target data in the SSD cache layer, and then executing the step S2; if the target data does not exist in the cache layer, judging whether the LBA of the target data block is aligned with a write pointer of a zone pair in the SMR, and if the LBA of the target data block is aligned with the write pointer of the zone pair in the SMR, directly writing the zone pair in the SMR; otherwise, judging whether the size of the non-sequential write request exceeds the set non-sequential write threshold value T or notNSWIf the number of the SMR data exceeds the preset value, the SMR data is regarded as large non-sequential writing, and the large non-sequential writing is written into a persistent buffer area inside the SMR disk; if the two cases are not met, the data needs to be written into the SSD cache layer, and steps S1 and S2 are executed; judging whether the residual space is less than the residual space threshold T or not after the write request is completedfreeIf it is less than the threshold value TfreeThen step S5 is executed to implement write-friendly data management of the SSD and HA-SMR hybrid storage system.
2. The method according to claim 1, wherein in step S1, the minimum unit of an upper-layer IO request is a 4KB block, and the HA-SMR uses a 256MB zone as a management unit consisting of 65536 blocks of 4KB; a set is composed of all blocks in the SSD that belong to the same zone, and each block in the set is inserted into the set_block management linked list in ascending LBA order; each set in the SSD corresponds to one zone in the SMR, a set is a minimum of 4KB and a maximum of 256MB, and each set is inserted into the zone_set temporary management linked list in FIFO order.
3. The method of claim 1, wherein in step S3, a write flag bit and a read flag bit are added to each block entering the cache, the number of updates and reads are recorded, and the comprehensive heat of each block in the cache is calculated.
4. The method according to claim 1, wherein in step S3, the coverage rate of the updated set is C_i = set_block / zone_block, where 0 ≤ C_i ≤ 1; zone_block is a constant representing the 65536 blocks of 4KB in a 256MB zone; set_block represents the number of 4KB blocks in the set, 0 ≤ set_block ≤ 65536.
5. The method of claim 1, wherein in step S4, the comprehensive weight of each set is determined by the comprehensive heat of its blocks and its coverage rate obtained in step S3, and each set is inserted into the zone_set management linked list in descending order of comprehensive weight, each node including its comprehensive weight W_i and coverage rate C_i.
6. The method according to claim 1, wherein step S5 is specifically:
S501, when the read request arrives, checking from the mapping table whether the requested block exists in the cache; if so, reading the target data from the SSD cache layer and returning it to the upper-layer application, and then executing step S502;
S502, updating the read flag bit of the block hit by the read request, incrementing its count by 1;
S503, when the read request misses in the cache, reading the target data from the SMR capacity layer and returning it to the upper-layer application.
7. The method according to claim 1, wherein step S6 is specifically:
S601, when a write request arrives, checking from the mapping table whether the requested data block exists in the cache; if so, directly updating the target data in the SSD cache layer, and then executing step S602;
S602, updating the write flag bit of the block hit by the write request, incrementing its count by 1;
S603, if the target data does not exist in the cache layer, judging whether the LBA of the target data block is aligned with the write pointer of the corresponding zone in the SMR; if so, directly writing into that zone in the SMR, otherwise entering step S604;
S604, according to the judgment result of S603, if the LBA of the target data block is not aligned with the write pointer of the corresponding zone, judging whether the size of the non-sequential write request exceeds the set non-sequential write threshold T_NSW (1MB); if it exceeds T_NSW, regarding it as a large non-sequential write and writing it into the persistent buffer inside the SMR disk, otherwise entering step S605;
S605, since the conditions of step S603 and step S604 are not satisfied, writing the data into the SSD cache layer and entering step S606;
S606, adding a write flag bit and a read flag bit to each block newly entering the SSD cache layer and incrementing the write flag count by 1; subsequently, if the corresponding set_block management linked list already exists, adding the block to it, otherwise creating a new set_block management linked list; after completing the above steps, adding any new set to the zone_set temporary management linked list;
S607, after the write request is finished, judging whether the remaining space is less than the remaining-space threshold T_free, i.e. T_free = 10% of the SSD cache capacity; if it is less than T_free, executing step S7 to start the cache scrubbing operation.
8. The method according to claim 1, wherein step S7 is specifically:
S701, calculating the comprehensive heat of the blocks in each set according to the update counts recorded in step S2, and calculating the coverage rate of each set; subsequently, calculating the final comprehensive weight of each set in the zone_set temporary management linked list;
S702, adding the sets to the zone_set eviction linked list in descending order of comprehensive weight;
S703, starting from the tail of the zone_set eviction linked list, i.e. the end with the lowest weight, first checking each node's set_block management linked list for candidates whose starting LBA is aligned with the write pointer of the destination zone to be written back and whose coverage rate C_i reaches 10%; if such candidates exist, writing them back as evictees to the zones of the SMR by sequential writing until the cache space capacity returns above the remaining-space threshold T_free; if no such candidate exists, proceeding to step S704;
S704, again starting from the tail of the zone_set linked list, checking the coverage information of each node, and writing sets whose coverage rate C_i reaches 50% or more back as candidates to the persistent buffer in the SMR until the cache space capacity returns above the remaining-space threshold T_free; at this time the write-back becomes non-sequential writing, and the device side performs read-modify-write operations to sequentially write the persistent buffer's data back to the zones of the SMR; if this condition is not met, proceeding to step S705;
S705, if neither kind of candidate exists, starting from the tail of the zone_set linked list and selecting the lowest-weight nodes as candidates to write back to the persistent buffer in the SMR until the cache space capacity returns above the remaining-space threshold T_free; at this time the write-back becomes non-sequential writing, and the device side performs read-modify-write operations to sequentially write the persistent buffer's data back to the zones of the SMR.
9. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-8.
10. A management device, comprising:
one or more processors, memory, and one or more programs stored in the memory and configured for execution by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-8.
CN202010420508.2A 2020-05-18 2020-05-18 SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device Active CN111722797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010420508.2A CN111722797B (en) 2020-05-18 2020-05-18 SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device


Publications (2)

Publication Number Publication Date
CN111722797A (en) 2020-09-29
CN111722797B (en) 2021-06-29

Family

ID=72564655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010420508.2A Active CN111722797B (en) 2020-05-18 2020-05-18 SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device

Country Status (1)

Country Link
CN (1) CN111722797B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300325A1 (en) * 2011-05-23 2012-11-29 David Robison Hall Shingle-written Magnetic Recording (SMR) Device with Hybrid E-region
US20130232292A1 (en) * 2012-03-01 2013-09-05 Hitachi Global Storage Technologies Netherlands B. V. Implementing large block random write hot spare ssd for smr raid
US10282096B1 (en) * 2014-12-17 2019-05-07 Western Digital Technologies, Inc. Identification of data with predetermined data pattern
CN105138286A (en) * 2015-08-11 2015-12-09 智云创新(北京)科技有限公司 Method for mixed utilization of SSD and SMR hard disks in disk file system
WO2017063495A1 (en) * 2015-10-12 2017-04-20 中兴通讯股份有限公司 Data migration method and apparatus
CN105389135A (en) * 2015-12-11 2016-03-09 华中科技大学 Solid-state disk internal cache management method
CN105955664A (en) * 2016-04-29 2016-09-21 华中科技大学 Method for reading and writing segment-based shingle translation layer (SSTL)
US20180144015A1 (en) * 2016-11-18 2018-05-24 Microsoft Technology Licensing, Llc Redoing transaction log records in parallel
CN108804019A (en) * 2017-04-27 2018-11-13 华为技术有限公司 A kind of data storage method and device
CN110663019A (en) * 2017-05-26 2020-01-07 微软技术许可有限责任公司 File system for Shingled Magnetic Recording (SMR)
US20190050311A1 (en) * 2017-08-11 2019-02-14 Seagate Technology Llc Routing of conductive traces in a printed circuit board
US20190050353A1 (en) * 2017-08-11 2019-02-14 Western Digital Technologies, Inc. Hybrid data storage array
CN110633051A (en) * 2018-06-25 2019-12-31 阿里巴巴集团控股有限公司 Method and system for data placement in hard disk drives based on access frequency for improved IOPS and utilization efficiency
CN109558084A (en) * 2018-11-29 2019-04-02 文华学院 A kind of data processing method and relevant device
CN109710184A (en) * 2018-12-19 2019-05-03 中国人民解放军国防科技大学 Hierarchical hybrid storage method and system for tile record disk perception
CN109783020A (en) * 2018-12-28 2019-05-21 西安交通大学 A kind of rubbish recovering method based on SSD-SMR mixing key assignments storage system
CN109800185A (en) * 2018-12-29 2019-05-24 上海霄云信息科技有限公司 A kind of data cache method in data-storage system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE XUCHAO et al.: "SMRC An Endurable SSD Cache for Host-Aware", IEEE Access *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446173A (en) * 2020-11-25 2021-03-05 河南省高速公路联网管理中心 Bridge temperature prediction method, medium and equipment based on long-term and short-term memory network
CN112446173B (en) * 2020-11-25 2024-02-23 河南省高速公路联网管理中心 Bridge temperature prediction method, medium and equipment based on long-short-term memory network
CN112764681A (en) * 2021-01-21 2021-05-07 上海七牛信息技术有限公司 Cache elimination method and device with weight judgment function and computer equipment
CN112764681B (en) * 2021-01-21 2024-02-13 上海七牛信息技术有限公司 Cache elimination method and device with weight judgment and computer equipment
CN114138178A (en) * 2021-10-15 2022-03-04 苏州浪潮智能科技有限公司 IO processing method and system
CN114138178B (en) * 2021-10-15 2023-06-09 苏州浪潮智能科技有限公司 IO processing method and system

Also Published As

Publication number Publication date
CN111722797B (en) 2021-06-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant