WO2016082550A1 - Method and apparatus for flushing cache data - Google Patents

Method and apparatus for flushing cache data

Info

Publication number
WO2016082550A1
Authority
WO
WIPO (PCT)
Prior art keywords
dirty data
lba
brushed
data blocks
cache
Prior art date
Application number
PCT/CN2015/083285
Other languages
English (en)
Chinese (zh)
Inventor
张志乐
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2016082550A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Definitions

  • Embodiments of the present invention relate to storage technology, and in particular to a method and apparatus for flushing cache data.
  • SSD: Solid State Drive.
  • When the storage system performs write caching using the cache technology, the controller first receives a write request from the processor; the write request includes the data to be written and the logical block address (LBA, Logical Block Address) corresponding to that data. The controller then writes the data into a cache block of the SSD, sets the state of that cache block to dirty data block, and stores the identifier of the dirty data block in association with the LBA corresponding to the data. Finally, the data is synchronized to the disk when the cache is flushed (that is, the data in the write cache is sent to the disk for storage; this is also referred to as cache flushing in the present invention).
  • In the prior art, when flushing, the controller sends multiple cache data read requests to the SSD in an asynchronous IO manner to obtain the cache data of multiple dirty data blocks. After receiving the cache data corresponding to any read request, it sends that cache data together with its corresponding LBA to the disk, so that the disk stores the cache data at that LBA.
  • Because the disk must first step its head to the position indicated by the LBA before it can store the cache data, the prior art suffers from low controller flushing efficiency.
  • The present invention provides a method and apparatus for flushing cache data, to solve the problem of the controller's low flushing efficiency in the prior art.
  • In a first aspect, the present invention provides a method for flushing cache data, applied to a storage system that includes a controller, a disk, and a solid state drive (SSD), where the SSD serves as a cache for the disk. The method is executed by the controller and includes:
  • sending a cache data read request to the SSD, where the request includes the identifiers of M dirty data blocks to be flushed; receiving a cache data read response from the SSD, where the response includes the cache data of N dirty data blocks to be flushed, and M is greater than or equal to N;
  • determining, according to the correspondence between dirty data block identifiers and the logical block addresses (LBAs) of the disk, the LBAs corresponding to the N dirty data blocks to be flushed; and storing the cache data of the N dirty data blocks to the disk sequentially in ascending LBA order.
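The core of the method, sorting the received dirty blocks by LBA before writing, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the block identifiers, the in-memory identifier-to-LBA mapping, and the return format are illustrative assumptions.

```python
def flush_in_lba_order(received_blocks, id_to_lba):
    """Order the N received dirty blocks by their disk LBA and return
    the (lba, data) pairs in the order they would be written.

    received_blocks: dict mapping dirty-block identifier -> cached data,
                     in arbitrary arrival order.
    id_to_lba:       the identifier -> LBA correspondence kept by the
                     controller (steps 203-204).
    """
    # Look up the LBA for each received block, then emit writes in
    # ascending LBA order so the disk head only steps forward.
    ordered = sorted(received_blocks.items(), key=lambda kv: id_to_lba[kv[0]])
    return [(id_to_lba[bid], data) for bid, data in ordered]

# Blocks arrive out of order: identifiers 1, 3, 5, 2, 4.
arrival = {1: b"a", 3: b"c", 5: b"e", 2: b"b", 4: b"d"}
lbas = {1: 10, 2: 20, 3: 30, 4: 40, 5: 50}
writes = flush_in_lba_order(arrival, lbas)
print([lba for lba, _ in writes])  # ascending: [10, 20, 30, 40, 50]
```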
  • Optionally, before sending the cache data read request to the SSD, the method further includes: selecting, according to the LBA of the current flush operation, the M dirty data blocks to be flushed from all the dirty data blocks, where each of the M LBAs corresponding to the selected blocks is greater than the LBA of the current flush operation.
  • Optionally, selecting the M dirty data blocks to be flushed includes: placing each dirty data block into a first queue or a second queue according to the LBA of the current flush operation and the LBA of that dirty data block, where the LBAs of the dirty data blocks in the first queue are greater than or equal to the LBA of the current flush operation and the LBAs of the dirty data blocks in the second queue are smaller than it; and selecting the M dirty data blocks to be flushed from the first queue.
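The two-queue split can be sketched as follows. The identifier names and the dict-based interface are hypothetical; only the partitioning rule comes from the text.

```python
def partition_dirty_blocks(dirty_lbas, current_lba):
    """Split dirty blocks into two queues around the LBA of the
    current flush operation.

    dirty_lbas: dict mapping dirty-block identifier -> LBA.
    Returns (first_queue, second_queue): identifiers whose LBA is
    greater than or equal to current_lba go to the first queue,
    the rest go to the second queue.
    """
    first, second = [], []
    for block_id, lba in dirty_lbas.items():
        (first if lba >= current_lba else second).append(block_id)
    return first, second

# Current flush position is LBA 9; the block at LBA 2 falls behind the head.
first_q, second_q = partition_dirty_blocks({"b1": 2, "b3": 20, "b4": 25, "b5": 30}, 9)
print(first_q, second_q)  # ['b3', 'b4', 'b5'] ['b1']
```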
  • Optionally, the method further includes: merging, according to the LBAs of the selected M dirty data blocks to be flushed, the dirty data blocks with consecutive LBAs among them to obtain merged dirty data blocks to be flushed;
  • in this case, sending the cache data read request to the SSD includes sending a cache data read request that contains the identifiers of the merged dirty data blocks to be flushed.
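The merging of consecutive LBAs can be sketched as follows, representing each merged block as a hypothetical (start_lba, length) extent; the extent representation is an assumption, not the patent's data structure.

```python
def merge_consecutive(lbas):
    """Merge runs of consecutive LBAs among the selected blocks into
    (start_lba, block_count) extents, so one read request can cover
    each run.

    lbas: LBAs of the M selected dirty blocks, in any order.
    """
    merged = []
    for lba in sorted(lbas):
        if merged and lba == merged[-1][0] + merged[-1][1]:
            merged[-1] = (merged[-1][0], merged[-1][1] + 1)  # extend current run
        else:
            merged.append((lba, 1))                          # start a new run
    return merged

print(merge_consecutive([5, 1, 2, 3, 7, 8]))  # [(1, 3), (5, 1), (7, 2)]
```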
  • In a second aspect, the present invention provides a controller applied to a storage system, where the storage system includes the controller, a disk, and a solid state drive (SSD), and the SSD serves as a cache for the disk.
  • The controller includes:
  • a read request sending module, configured to send a cache data read request to the SSD, where the request includes the identifiers of M dirty data blocks to be flushed;
  • a read response receiving module, configured to receive a cache data read response from the SSD, where the response includes the cache data of N dirty data blocks to be flushed, and M is greater than or equal to N;
  • an LBA determining module, configured to determine, according to the correspondence between dirty data block identifiers and the logical block addresses (LBAs) of the disk, the LBAs corresponding to the N dirty data blocks to be flushed; and
  • a flushing module, configured to store the cache data of the N dirty data blocks to the disk sequentially in ascending LBA order according to their corresponding LBAs.
  • Optionally, the read request sending module is further configured to: select, according to the LBA of the current flush operation, the M dirty data blocks to be flushed from all the dirty data blocks, where each of the M corresponding LBAs is greater than the LBA of the current flush operation.
  • Optionally, the read request sending module is specifically configured to: place each dirty data block into the first queue or the second queue according to the LBA of the current flush operation and the LBA of that dirty data block, where the LBAs of the dirty data blocks in the first queue are greater than or equal to the LBA of the current flush operation and the LBAs of the dirty data blocks in the second queue are smaller than it; and select the M dirty data blocks to be flushed from the first queue.
  • Optionally, the read request sending module is specifically configured to: select, from the first queue, the M dirty data blocks to be flushed with the smallest LBA values.
  • Optionally, the read request sending module is further configured to: merge, according to the LBAs of the selected M dirty data blocks to be flushed, the dirty data blocks with consecutive LBAs among them to obtain merged dirty data blocks to be flushed;
  • the read request sending module is then specifically configured to: send a cache data read request to the SSD, where the request includes the identifiers of the merged dirty data blocks to be flushed.
  • In summary, the present invention provides a method and apparatus for flushing cache data. By storing the cache data of the N dirty data blocks to be flushed to the disk sequentially in ascending LBA order according to their corresponding LBAs, the disk no longer wastes time seeking back and forth, which improves the controller's flushing efficiency.
  • FIG. 1 is a schematic diagram of an application scenario of the method for flushing cache data according to the present invention;
  • FIG. 2 is a flowchart of Embodiment 1 of the method for flushing cache data according to the present invention;
  • FIG. 3 is a flowchart of Embodiment 2 of the method for flushing cache data according to the present invention;
  • FIG. 4 is a flowchart of Embodiment 3 of the method for flushing cache data according to the present invention;
  • FIG. 5 is a schematic diagram of LBAs according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of Embodiment 1 of a controller according to the present invention;
  • FIG. 7 is a schematic structural diagram of Embodiment 4 of a controller according to the present invention.
  • FIG. 1 is a schematic diagram of an application scenario of the method for flushing cache data. As shown in FIG. 1, in a storage system, a processor 11 sends data to be written to a controller 12; the controller 12 first sends the received data to the SSD 13 for caching, and then synchronizes the data cached in the SSD 13 to the disk 14 when the cache is flushed.
  • In the prior art, when the controller 12 synchronizes the data cached in the SSD 13 to the disk 14, it sends multiple cache data read requests to the SSD 13 in an asynchronous IO manner to obtain multiple pieces of cached data. After receiving the cache data corresponding to any read request, it sends that cache data together with its corresponding LBA to the disk 14, so that the disk 14 stores the cache data at that LBA. Because the disk must first step its head to the position indicated by the LBA before storing the cache data, and the head can only step in one direction, the controller's flushing efficiency is low.
  • FIG. 2 is a flowchart of Embodiment 1 of the method for flushing cache data according to the present invention. As shown in FIG. 2, the method of this embodiment may include:
  • Step 201: Send a cache data read request to the SSD, where the cache data read request includes the identifiers of M dirty data blocks to be flushed.
  • Specifically, the sending may take either of two forms: sending M cache data read requests to the SSD, where each request includes the identifier of one dirty data block to be flushed; or sending a single cache data read request to the SSD that includes the identifiers of all M dirty data blocks to be flushed.
  • The M dirty data blocks to be flushed may be any M of all the dirty data blocks.
  • Step 202: Receive a cache data read response from the SSD, where the response includes the cache data of N dirty data blocks to be flushed, and M is greater than or equal to N.
  • Step 203: Determine, according to the correspondence between dirty data block identifiers and LBAs, the LBAs corresponding to the N dirty data blocks to be flushed.
  • Step 204: Store, according to the LBAs corresponding to the N dirty data blocks to be flushed, the cache data of the N dirty data blocks to the disk sequentially in ascending LBA order.
  • For example, even if the controller receives the cache data of dirty data block 1, dirty data block 3, dirty data block 5, dirty data block 2, and dirty data block 4 in that order, it still stores the cache data of the five dirty data blocks to the disk sequentially in ascending LBA order.
  • In the prior art, the controller sends multiple cache data read requests to the SSD in an asynchronous IO manner and, after receiving the cache data corresponding to any read request, immediately synchronizes that cache data to the disk. Because the disk must first step its head to the position indicated by the LBA before storing the cache data, and the head can only step in one direction, out-of-order writes are expensive. For example, suppose the head is currently at LBA 3, the LBA of cache data 1 (received first) is 8, the LBA of cache data 2 (received second) is 6, and the LBA of cache data 3 (received third) is 2. Then the head first steps to LBA 8 to store cache data 1; next it must step to the maximum LBA, return to LBA 0, and step to LBA 6 to store cache data 2; and then again step to the maximum LBA, return to LBA 0, and step to LBA 2 to store cache data 3. Each piece of cache data thus requires a round trip (stepping to the maximum LBA, returning to LBA 0, and stepping to the LBA corresponding to the cache data), and these back-and-forth seeks take a long time; hence the prior art's low flushing efficiency.
  • In this embodiment, by contrast, the cache data of the N dirty data blocks to be flushed is stored to the disk sequentially in ascending LBA order, which avoids the disk time wasted on back-and-forth seeks and improves the controller's flushing efficiency.
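The seek penalty in the example above can be quantified with a simple one-directional head model. This is an illustrative sketch: the disk size (`MAX_LBA = 100`) and a unit cost per LBA step are assumptions, not figures from the text.

```python
MAX_LBA = 100  # illustrative disk size in LBAs (assumption)

def seek_steps(start_lba, targets):
    """Total head steps for a head that can only step toward larger
    LBAs: to reach a smaller LBA it must run out to MAX_LBA, return
    home to LBA 0, and step forward again."""
    pos, steps = start_lba, 0
    for lba in targets:
        if lba >= pos:
            steps += lba - pos
        else:
            steps += (MAX_LBA - pos) + lba  # run out, home to 0, step up
        pos = lba
    return steps

# Arrival order from the example: LBAs 8, 6, 2, with the head at LBA 3.
unsorted_cost = seek_steps(3, [8, 6, 2])         # two full round trips
sorted_cost = seek_steps(3, sorted([8, 6, 2]))   # at most one wrap
print(unsorted_cost, sorted_cost)  # 199 105
```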
  • In the present invention, the method is executed by the controller.
  • FIG. 3 is a flowchart of Embodiment 2 of the method for flushing cache data according to the present invention. As shown in FIG. 3, the method of this embodiment may include:
  • Step 301: Obtain the proportion of dirty data blocks among all cache blocks of the SSD.
  • Specifically, when the controller receives data to be written from the processor, it writes the data into several cache blocks of the SSD, sets the states of those cache blocks to dirty data block, and associates each of them with a dirty data block identifier.
  • The proportion of dirty data blocks among all cache blocks of the SSD may be: the number of cache blocks whose state is dirty data block, divided by the total number of SSD cache blocks, multiplied by one hundred percent.
  • Step 302: Determine whether the proportion is greater than or equal to a preset proportion.
  • If yes, go to step 303; otherwise, end.
  • For example, the preset proportion may be 40%.
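Steps 301 and 302 amount to a threshold check. A minimal sketch: the 40% figure comes from the text's example, while the function and parameter names are hypothetical.

```python
PRESET_RATIO = 40.0  # percent, per the example in the text

def should_flush(dirty_block_count, total_cache_blocks):
    """Return True when the dirty-block proportion reaches the preset
    ratio, i.e. when a flush cycle (step 303 onward) should start."""
    ratio = dirty_block_count / total_cache_blocks * 100.0
    return ratio >= PRESET_RATIO

print(should_flush(400, 1000))  # 40.0% -> True
print(should_flush(399, 1000))  # 39.9% -> False
```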
  • Step 303: Select, according to the LBA of the current flush operation, M dirty data blocks to be flushed from all the dirty data blocks, where each of the M LBAs corresponding to the selected blocks is greater than the LBA of the current flush operation.
  • The size of M is determined by the proportion of dirty data blocks among all cache blocks of the SSD: the larger that proportion, the larger M.
  • The LBA of the current flush operation may be the LBA corresponding to the dirty data block whose cache data was most recently stored to the disk.
  • Because the M dirty data blocks to be flushed are selected from all the dirty data blocks according to the LBA of the current flush operation, with each of the M corresponding LBAs greater than the LBA of the current flush operation, the LBA of every piece of cache data the controller synchronizes to the disk is greater than or equal to the LBA of the current flush operation. The head therefore always steps in one direction during flushing, which reduces head seek time and further improves the controller's flushing efficiency.
  • Optionally, selecting the M dirty data blocks to be flushed from all the dirty data blocks according to the LBA of the current flush operation includes:
  • placing each dirty data block into the first queue or the second queue according to the LBA of the current flush operation and the LBA of that dirty data block, where the LBAs of the dirty data blocks in the first queue are greater than or equal to the LBA of the current flush operation and the LBAs of the dirty data blocks in the second queue are smaller than it; and selecting the M dirty data blocks to be flushed from the first queue.
  • Specifically, a dirty data block whose LBA equals the LBA of the current flush operation is placed in the first queue.
  • For example, suppose the LBA of dirty data block 1 is 2, the LBAs of dirty data blocks 3, 4, and 5 are 20, 25, and 30 respectively, and the LBA of the current flush operation is 9. Then dirty data block 2, dirty data block 3, dirty data block 4, and dirty data block 5 are placed in the first queue, and dirty data block 1 is placed in the second queue.
  • Optionally, the M dirty data blocks to be flushed with the smallest LBA values are selected from the first queue.
  • For example, suppose the first queue includes dirty data block A, dirty data block B, dirty data block C, dirty data block D, and dirty data block E, whose LBAs are 5, 9, 20, 3, and 10 respectively, and M is 2. Then the M dirty data blocks to be flushed selected from the first queue are dirty data block D and dirty data block A.
  • In this way, when flushing, the cache data of the M dirty data blocks in the first queue can be stored to the disk sequentially from the smallest of their LBAs upward; this further reduces head seek time and improves the controller's flushing efficiency.
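Selecting the M smallest-LBA blocks from the first queue can be done with a standard partial-selection routine; a sketch using Python's `heapq`, with a hypothetical (lba, block_id) queue representation:

```python
import heapq

def select_m_smallest(first_queue, m):
    """Pick the m dirty blocks with the smallest LBAs from the first
    queue, so flushing proceeds from the head position upward.

    first_queue: list of (lba, block_id) pairs.
    Returns the m pairs in ascending LBA order.
    """
    return heapq.nsmallest(m, first_queue)

# The text's example: blocks A..E at LBAs 5, 9, 20, 3, 10 and M = 2
# yield blocks D and A.
queue = [(5, "A"), (9, "B"), (20, "C"), (3, "D"), (10, "E")]
print(select_m_smallest(queue, 2))  # [(3, 'D'), (5, 'A')]
```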
  • Step 304: Send a cache data read request to the SSD, where the cache data read request includes the identifiers of the M dirty data blocks to be flushed.
  • Specifically, the controller may send the cache data read request to the SSD in an asynchronous IO manner.
  • Step 305: Receive a cache data read response from the SSD, where the response includes the cache data of N dirty data blocks to be flushed, and M is greater than or equal to N.
  • Step 306: Determine, according to the correspondence between dirty data block identifiers and LBAs, the LBAs corresponding to the N dirty data blocks to be flushed.
  • Step 307: Store, according to the LBAs corresponding to the N dirty data blocks to be flushed, the cache data of the N dirty data blocks to the disk sequentially in ascending LBA order.
  • M may be greater than N. For example, suppose the controller sends a cache data read request to the SSD that includes the identifiers of 8 (i.e., M) dirty data blocks to be flushed, and within a period of time (e.g., 10 ms) receives the cache data of 6 (i.e., N) dirty data blocks to be flushed; the controller may then synchronize those six pieces of cache data to the disk first.
  • That is, the cache data read request includes the identifiers of the M dirty data blocks to be flushed, and after receiving the cache data of N of them, the controller synchronizes those N pieces of cache data to the disk; this avoids wasting a long time waiting for the remaining cache data to arrive.
  • The cache data of the remaining M-N dirty data blocks to be flushed is stored to the disk later.
  • Step 308: Recycle the cache space.
  • Specifically, the cache space may be recycled by updating the states of the cache blocks in the SSD that store the N pieces of cache data from dirty data block to clean cache block; once a cache block is a clean cache block, the controller can use it to cache data again.
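The state transition in step 308 can be sketched as a simple dirty-to-clean update; the state constants and the dict-based cache map are illustrative assumptions.

```python
DIRTY, CLEAN = "dirty", "clean"

def recycle(cache_state, flushed_ids):
    """Mark the cache blocks whose data was just synchronized to the
    disk as clean, making them reusable for new writes (step 308)."""
    for block_id in flushed_ids:
        if cache_state.get(block_id) == DIRTY:
            cache_state[block_id] = CLEAN

state = {1: DIRTY, 2: DIRTY, 3: CLEAN}
recycle(state, [1, 2])  # blocks 1 and 2 were flushed this cycle
print(state)  # {1: 'clean', 2: 'clean', 3: 'clean'}
```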
  • After step 308 is performed, step 301 is performed again.
  • FIG. 4 is a flowchart of Embodiment 3 of the method for flushing cache data according to the present invention. As shown in FIG. 4, the method of this embodiment may include:
  • Step 401: Select M dirty data blocks to be flushed from the first queue.
  • Specifically, the M dirty data blocks to be flushed with the smallest LBA values are selected from the first queue.
  • Step 402: Merge, according to the LBAs of the selected M dirty data blocks to be flushed, the dirty data blocks with consecutive LBAs among them to obtain L merged dirty data blocks to be flushed, where L is less than or equal to M.
  • FIG. 5 is a schematic diagram of LBAs according to an embodiment of the present invention.
  • For example, suppose M is 80, the 80 dirty data blocks to be flushed correspond to LBAs 1 to 80 respectively, the cache data of each dirty data block is 32 KB in size, and one LBA corresponds to 32 KB. Then, after the dirty data blocks with consecutive LBAs are merged, three (i.e., L equal to 3) merged dirty data blocks to be flushed can be obtained: merged dirty data block 1, whose corresponding LBA is 1; merged dirty data block 2, whose corresponding LBA is 33; and merged dirty data block 3, whose corresponding LBA is 65.
  • The cache data read request may then be sent to the SSD according to the merged dirty data blocks to be flushed.
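The FIG. 5 example, 80 consecutive 32 KB blocks yielding merged blocks at LBAs 1, 33, and 65, suggests a cap on merged-extent size of 32 blocks (32 × 32 KB = 1 MB). That cap is inferred from the numbers, not stated in the text; a sketch under that assumption:

```python
MAX_RUN = 32  # inferred cap: 32 blocks x 32 KB = 1 MB per merged extent

def merge_with_cap(lbas, max_run=MAX_RUN):
    """Merge consecutive LBAs into (start_lba, length) extents, but
    split any run that would exceed max_run blocks."""
    merged = []
    for lba in sorted(lbas):
        if (merged and lba == merged[-1][0] + merged[-1][1]
                and merged[-1][1] < max_run):
            merged[-1] = (merged[-1][0], merged[-1][1] + 1)
        else:
            merged.append((lba, 1))
    return merged

# 80 consecutive blocks at LBAs 1..80 -> three merged extents,
# starting at LBAs 1, 33, and 65 as in FIG. 5.
print(merge_with_cap(range(1, 81)))  # [(1, 32), (33, 32), (65, 16)]
```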
  • Step 403: Send a cache data read request to the SSD, where the cache data read request includes the identifiers of the merged dirty data blocks to be flushed.
  • Step 404: Receive a cache data read response from the SSD, where the response includes the cache data of the merged dirty data blocks to be flushed.
  • Step 405: Determine, according to the correspondence between dirty data block identifiers and LBAs, the LBAs corresponding to the merged dirty data blocks to be flushed.
  • Step 406: Store, according to the LBAs corresponding to the merged dirty data blocks to be flushed, the cache data of the merged dirty data blocks to the disk sequentially in ascending LBA order.
  • In this embodiment, the dirty data blocks with consecutive LBAs among the M dirty data blocks to be flushed are merged according to their LBAs to obtain merged dirty data blocks to be flushed; the cache data of the merged dirty data blocks is then stored to the disk sequentially in ascending LBA order according to their corresponding LBAs. This reduces the number of times the controller synchronizes cache data to the disk.
  • FIG. 6 is a schematic structural diagram of Embodiment 1 of a controller according to the present invention.
  • The controller is applied to a storage system, where the storage system includes the controller, a disk, and an SSD, and the SSD serves as a cache for the disk. As shown in FIG. 6, the controller of this embodiment may include: a read request sending module 601, configured to send a cache data read request to the SSD, where the request includes the identifiers of M dirty data blocks to be flushed; a read response receiving module 602, configured to receive a cache data read response from the SSD, where the response includes the cache data of N dirty data blocks to be flushed, and M is greater than or equal to N; an LBA determining module 603, configured to determine, according to the correspondence between dirty data block identifiers and the logical block addresses (LBAs) of the disk, the LBAs corresponding to the N dirty data blocks to be flushed; and a flushing module 604, configured to store the cache data of the N dirty data blocks to the disk sequentially in ascending LBA order according to their corresponding LBAs.
  • the controller of this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 2, and the implementation principle and technical effects are similar, and details are not described herein again.
  • Optionally, the read request sending module 601 is further configured to: select, according to the LBA of the current flush operation, the M dirty data blocks to be flushed from all the dirty data blocks, where each of the M corresponding LBAs is greater than the LBA of the current flush operation.
  • Optionally, the read request sending module 601 is specifically configured to: place each dirty data block into the first queue or the second queue according to the LBA of the current flush operation and the LBA of that dirty data block, where the LBAs of the dirty data blocks in the first queue are greater than or equal to the LBA of the current flush operation and the LBAs of the dirty data blocks in the second queue are smaller than it; and select the M dirty data blocks to be flushed from the first queue.
  • Optionally, the read request sending module 601 is specifically configured to: select, from the first queue, the M dirty data blocks to be flushed with the smallest LBA values.
  • the controller of this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 3, and the implementation principle and technical effects are similar, and details are not described herein again.
  • Optionally, the read request sending module 601 is further configured to: merge, according to the LBAs of the selected M dirty data blocks to be flushed, the dirty data blocks with consecutive LBAs among them to obtain merged dirty data blocks to be flushed;
  • the read request sending module 601 is then specifically configured to: send a cache data read request to the SSD, where the request includes the identifiers of the merged dirty data blocks to be flushed.
  • the controller of this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 4, and the implementation principle and technical effects are similar, and details are not described herein again.
  • FIG. 7 is a schematic structural diagram of Embodiment 4 of the controller of the present invention.
  • the controller of this embodiment may include: a transmitter 701, a receiver 702, and a processor 703.
  • The transmitter 701 is configured to send a cache data read request to the SSD, where the request includes the identifiers of M dirty data blocks to be flushed. The receiver 702 is configured to receive a cache data read response from the SSD, where the response includes the cache data of N dirty data blocks to be flushed, and M is greater than or equal to N. The processor 703 is configured to determine, according to the correspondence between dirty data block identifiers and LBAs, the LBAs corresponding to the N dirty data blocks to be flushed, and to store, according to those LBAs, the cache data of the N dirty data blocks to the disk sequentially in ascending LBA order.
  • the controller of this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 2, and the implementation principle and technical effects are similar, and details are not described herein again.
  • Optionally, the processor 703 is further configured to: select, according to the LBA of the current flush operation, the M dirty data blocks to be flushed from all the dirty data blocks, where each of the M corresponding LBAs is greater than the LBA of the current flush operation.
  • Optionally, the processor 703 is specifically configured to: place each dirty data block into the first queue or the second queue according to the LBA of the current flush operation and the LBA of that dirty data block, where the LBAs of the dirty data blocks in the first queue are greater than or equal to the LBA of the current flush operation and the LBAs of the dirty data blocks in the second queue are smaller than it; and select the M dirty data blocks to be flushed from the first queue.
  • Optionally, the processor 703 is specifically configured to: select, from the first queue, the M dirty data blocks to be flushed with the smallest LBA values.
  • the controller of this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 3, and the implementation principle and technical effects are similar, and details are not described herein again.
  • Optionally, the processor 703 is further configured to: merge, according to the LBAs of the selected M dirty data blocks to be flushed, the dirty data blocks with consecutive LBAs among them to obtain merged dirty data blocks to be flushed;
  • the transmitter 701 is then specifically configured to: send a cache data read request to the SSD, where the request includes the identifiers of the merged dirty data blocks to be flushed.
  • the controller of this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 4, and the implementation principle and technical effects are similar, and details are not described herein again.
  • Persons of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium.
  • When executed, the program performs the steps of the foregoing method embodiments; the storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Disclosed are a method and an apparatus for flushing cache data. The method is applied to a storage system comprising a controller (12), a magnetic disk (14) and an SSD (13), the SSD (13) serving as a cache for the magnetic disk (14). The method is executed by the controller (12) and comprises: sending a cache data read request to the SSD, the cache data read request containing the identifiers of M dirty data blocks to be flushed (201); receiving a cache data read response sent by the SSD, the cache data read response containing the cache data of N dirty data blocks to be flushed, where M is greater than or equal to N (202); determining, according to a correspondence between the identifiers of the dirty data blocks and LBAs, the LBAs corresponding to the N dirty data blocks to be flushed (203); and storing the cache data of the N dirty data blocks to be flushed sequentially on the magnetic disk in ascending order of LBA, according to the LBAs corresponding to the N dirty data blocks to be flushed (204).
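The four numbered steps of the abstract (201–204) can be sketched end to end as follows. The `ssd` and `disk` objects and their `read_cache`/`write` methods are illustrative stand-ins, not an API defined by the application.

```python
# Sketch of steps 201-204: request M dirty blocks from the SSD cache, accept
# that only N <= M may come back, map identifiers to LBAs, then write to the
# magnetic disk in ascending LBA order so the writes are sequential.

def flush_cache(id_to_lba, ssd, disk, block_ids):
    returned = ssd.read_cache(block_ids)        # 201/202: SSD may return fewer blocks
    with_lba = [(id_to_lba[bid], data)          # 203: identifier -> LBA lookup
                for bid, data in returned.items()]
    for lba, data in sorted(with_lba):          # 204: ascending LBA order
        disk.write(lba, data)
```

Writing in ascending LBA order turns scattered dirty blocks into a largely sequential stream for the magnetic disk, which is the point of the method.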
PCT/CN2015/083285 2014-11-28 2015-07-03 Method and apparatus for flushing cache data WO2016082550A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410712971.9 2014-11-28
CN201410712971.9A CN104461936B (zh) Method and apparatus for flushing cached data

Publications (1)

Publication Number Publication Date
WO2016082550A1 true WO2016082550A1 (fr) 2016-06-02

Family

ID=52908022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/083285 WO2016082550A1 (fr) Method and apparatus for flushing cache data

Country Status (2)

Country Link
CN (1) CN104461936B (fr)
WO (1) WO2016082550A1 (fr)


Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461936B (zh) * 2014-11-28 2017-10-17 Huawei Technologies Co., Ltd. Method and apparatus for flushing cached data
CN105095112B (zh) * 2015-07-20 2019-01-11 Huawei Technologies Co., Ltd. Cache flushing control method and apparatus, and non-volatile computer-readable storage medium
CN104991745B (zh) * 2015-07-21 2018-06-01 Inspur (Beijing) Electronic Information Industry Co., Ltd. Storage system data writing method and system
CN106557430B (zh) * 2015-09-19 2019-06-21 Chengdu Huawei Technologies Co., Ltd. Cache data flushing method and apparatus
CN106227675B (zh) * 2016-07-19 2019-05-24 Huawei Technologies Co., Ltd. Method and apparatus for coordinating space allocation with flushing
CN108132756B (zh) * 2016-11-30 2021-01-05 Chengdu Huawei Technologies Co., Ltd. Method and apparatus for flushing a storage array
CN108089818B (zh) * 2017-12-12 2021-09-07 Tencent Technology (Shenzhen) Co., Ltd. Data processing method and apparatus, and storage medium
CN109542348B (zh) * 2018-11-19 2022-05-10 Zhengzhou Yunhai Information Technology Co., Ltd. Data flushing method and apparatus
CN109783023B (zh) * 2019-01-04 2024-06-07 Ping An Technology (Shenzhen) Co., Ltd. Data flushing method and related apparatus
CN111797080A (zh) * 2019-04-09 2020-10-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Model training method, data recycling method, apparatus, storage medium and device
CN110502457B (zh) * 2019-08-23 2022-02-18 Beijing Inspur Data Technology Co., Ltd. Metadata storage method and apparatus
CN110928496B (zh) * 2019-11-12 2022-04-22 Hangzhou Macrosan Technologies Co., Ltd. Data processing method and apparatus for a multi-controller storage system
CN111177022B (zh) * 2019-12-26 2022-08-12 Guangdong Inspur Big Data Research Co., Ltd. Feature extraction method, apparatus, device and storage medium
CN111857589B (zh) * 2020-07-16 2023-04-18 Suzhou Inspur Intelligent Technology Co., Ltd. Method and system for controlling SSD cache flushing speed in a distributed storage system
CN112306904B (zh) * 2020-11-20 2022-03-29 New H3C Big Data Technologies Co., Ltd. Cache data flushing method and apparatus
CN112817520B (zh) * 2020-12-31 2022-08-09 Hangzhou Macrosan Technologies Co., Ltd. Data flushing method and apparatus
CN113986118B (zh) * 2021-09-28 2024-06-07 New H3C Big Data Technologies Co., Ltd. Data processing method and apparatus
CN115168304B (zh) * 2022-09-06 2023-01-20 Beijing OceanBase Technology Co., Ltd. Data processing method, apparatus, storage medium and device
CN115268798B (zh) * 2022-09-27 2023-01-10 Tianjin Zhuolang Kunlun Cloud Software Technology Co., Ltd. Method and system for flushing dirty cache data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241420A (zh) * 2008-03-20 2008-08-13 Hangzhou H3C Technologies Co., Ltd. Method and storage device for improving the storage efficiency of data with non-consecutive write addresses
CN101727299A (zh) * 2010-02-08 2010-06-09 Beijing Toyou Feiji Electronics Co., Ltd. RAID5-oriented write optimization design method for continuous data storage
CN102147768A (zh) * 2010-05-21 2011-08-10 Suzhou Jietaike Information Technology Co., Ltd. Memory, solid-state cache system and cache data processing method
CN103049222A (zh) * 2012-12-28 2013-04-17 No. 709 Research Institute of China Shipbuilding Industry Corporation Write I/O optimization method for RAID5
US20140013025A1 (en) * 2012-07-06 2014-01-09 Seagate Technology Llc Hybrid memory with associative cache
CN104461936A (zh) * 2014-11-28 2015-03-25 Huawei Technologies Co., Ltd. Method and apparatus for flushing cached data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354633B (zh) * 2008-08-22 2010-09-22 Hangzhou H3C Technologies Co., Ltd. Method for improving write efficiency of a virtual storage system, and virtual storage system
JP2012078939A (ja) * 2010-09-30 2012-04-19 Toshiba Corp Information processing apparatus and cache control method
CN102541468B (zh) * 2011-12-12 2015-03-04 Huazhong University of Science and Technology Dirty data write-back system in a virtualized environment
CN103631528B (zh) * 2012-08-21 2016-05-18 Suzhou Jietaike Information Technology Co., Ltd. Read/write method and system using a solid state drive as a cache, and read/write controller
JP6060277B2 (ja) * 2012-12-26 2017-01-11 Huawei Technologies Co., Ltd. Disk array flushing method and disk array flushing apparatus
CN103488431A (zh) * 2013-09-10 2014-01-01 Huawei Technologies Co., Ltd. Data writing method and storage device
CN103577349B (zh) * 2013-11-06 2016-11-23 Huawei Technologies Co., Ltd. Method and apparatus for selecting data in a cache for flushing


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084048A (zh) * 2020-09-25 2020-12-15 China Construction Bank Corporation Kafka synchronous flushing method, apparatus and message server
CN112181315A (zh) * 2020-10-30 2021-01-05 New H3C Big Data Technologies Co., Ltd. Data flushing method and apparatus
CN112181315B (zh) * 2020-10-30 2022-08-30 New H3C Big Data Technologies Co., Ltd. Data flushing method and apparatus
CN112835528A (zh) * 2021-02-22 2021-05-25 Beijing Kingsoft Cloud Network Technology Co., Ltd. Dirty page flushing method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN104461936A (zh) 2015-03-25
CN104461936B (zh) 2017-10-17

Similar Documents

Publication Publication Date Title
WO2016082550A1 (fr) Method and apparatus for flushing cache data
US9400603B2 (en) Implementing enhanced performance flash memory devices
KR101824295B1 (ko) 솔리드 스테이트 장치 가상화를 포함하는 캐시 관리
JP5759623B2 (ja) メモリシステムコントローラを含む装置および関連する方法
JP4788528B2 (ja) ディスク制御装置、ディスク制御方法、ディスク制御プログラム
US20160026575A1 (en) Selective mirroring in caches for logical volumes
JP6893897B2 (ja) ソリッドステートドライブ(ssd)、そのガーベッジコレクションに係る方法、及びその具現に係る物品
US20150253992A1 (en) Memory system and control method
US9348747B2 (en) Solid state memory command queue in hybrid device
KR20140013098A (ko) 메모리 시스템 컨트롤러들을 포함하는 장치 및 관련 방법들
WO2017006674A1 (fr) Système de traitement d'informations, dispositif de commande de stockage, procédé de commande de stockage et programme de commande de stockage
US20130067147A1 (en) Storage device, controller, and read command executing method
US8327041B2 (en) Storage device and data transfer method for the same
WO2017006675A1 (fr) Système de traitement d'informations, dispositif de commande de stockage, procédé de commande de stockage et programme de commande de stockage
US11593262B1 (en) Garbage collection command scheduling
KR101481898B1 (ko) Ssd의 명령어 큐 스케줄링 장치 및 방법
US20160283379A1 (en) Cache flushing utilizing linked lists
US20160070648A1 (en) Data storage system and operation method thereof
KR102061069B1 (ko) 텍스쳐 맵핑 파이프라인을 위한 논블로킹 방식의 텍스쳐 캐쉬 메모리 시스템 및 논블로킹 방식의 텍스쳐 캐쉬 메모리의 동작 방법
US9910797B2 (en) Space efficient formats for scatter gather lists
WO2016091124A1 (fr) Procédé et dispositif d'interception de fichier
US9471227B2 (en) Implementing enhanced performance with read before write to phase change memory to avoid write cancellations
US20190129863A1 (en) Performance Booster With Resolution Of Picket-Fence I/O Flushing In A Storage System With Heterogeneous I/O Workloads
WO2015170702A1 (fr) Dispositif de stockage, système de traitement d'informations, procédé de commande de stockage et programme
JP6521694B2 (ja) 記憶制御システム及び記憶制御装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15862779

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15862779

Country of ref document: EP

Kind code of ref document: A1