CN113254265A - Snapshot implementation method and storage system based on solid state disk - Google Patents


Info

Publication number
CN113254265A
Authority
CN
China
Prior art keywords
snapshot; chunk; timestamp; solid state disk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110506296.4A
Other languages
Chinese (zh)
Other versions
CN113254265B (en)
Inventor
杨国华
朱文禧
许毅
Current Assignee
Suzhou Kuhan Information Technology Co Ltd
Original Assignee
Suzhou Kuhan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Kuhan Information Technology Co Ltd filed Critical Suzhou Kuhan Information Technology Co Ltd
Priority to CN202110506296.4A
Publication of CN113254265A
Priority to PCT/CN2022/102018 (WO2022237916A1)
Application granted
Publication of CN113254265B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14: Error detection or correction of the data by redundancy in operation
    • G06F 11/1402: Saving, restoring, recovering or retrying
    • G06F 11/1446: Point-in-time backing up or restoration of persistent data
    • G06F 11/1456: Hardware arrangements for backup
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory

Abstract

The application relates to the technical field of storage and discloses a snapshot implementation method and a storage system based on a solid state disk. The method divides the LBA range of the solid state disk into a number of logical blocks and correspondingly divides the L2P mapping table into a number of L2P chunks, each logical block corresponding to one L2P chunk. On receiving a snapshot-generation command, the solid state disk adds a timestamp to each L2P chunk, generating a snapshot with that timestamp. On a write, it checks the logical block to which the written LBA belongs; if the corresponding L2P chunk already carries a timestamp, it creates a new L2P chunk and records the new flash physical address in it. On receiving the snapshot-generation command again, it traverses the latest L2P chunk of every logical block; any latest chunk that has no timestamp is given the new timestamp, generating a snapshot with the latest timestamp. The multiple L2P chunks corresponding to each logical block are then associated with one another. Because the snapshot is realized inside the solid state disk, the method saves DRAM resources in the software layer and reduces cost.

Description

Snapshot implementation method and storage system based on solid state disk
Technical Field
The present application relates to the field of storage technologies, and in particular, to a snapshot implementation method and system based on a solid state disk.
Background
A snapshot (Snapshot) is a technique for recording the state of a storage system at a given moment. A user may create multiple snapshots and may later access, modify, copy, or roll back to any previously created snapshot.
A storage system consists mainly of a software layer (i.e., the computer host) and hard disks (i.e., the storage devices), the software layer communicating with one or more users, as shown in fig. 1. In the prior art the snapshot technique is implemented in the software layer, and the hard disk need not, and indeed cannot, perceive its existence. To implement snapshots, the software layer must occupy a large amount of the computer's DRAM and CPU resources, and snapshot commands take a long time to execute.
The current trend in the storage industry is that solid state disks (SSDs) are gradually replacing mechanical hard disks (HDDs), and an SSD already contains large, expensive DRAM resources inside, used for fast access to the flash media.
The snapshot function of a prior-art storage system is implemented in the software layer of fig. 1. Specifically, as shown in fig. 2, the software layer maintains a mapping table (map table) from user logical addresses (LA) to hard-disk physical addresses (PA). The data size represented by each LA and PA is 4 KB (mainstream enterprise storage systems use 4 KB as the minimum read/write unit), and the data of any LA may be written to any PA, so each entry in the mapping table records the PA location to which an LA has been written. Each entry typically occupies 4 bytes of DRAM, so if the hard-disk space exposed to the user is 4 TB, the size of the mapping table is 4 TB / 4 KB × 4 B = 4 GB. As shown in fig. 2, the data of LA(0) is written to PA(3), LA(1) to PA(9), LA(2) to PA(7), and LA(3) to PA(2). Suppose the user now wants to generate a snapshot at this moment (T1), i.e. to archive the storage-system state at time T1 for later access; the software layer must permanently save the mapping table of time T1 to the hard disk, as follows:
the first step is as follows: the user initiates a snapshot generation command.
The second step is that: the software layer locks the mapping table, i.e. does not respond to subsequent user write commands, preventing the mapping table from being updated.
The third step: the software layer writes the 4GB mapping table at time T1 to the hard disk as shown in fig. 3. The step takes a long time, and the current mainstream solid state disk interface speed is 4GB/s, so the step takes 1 s. During which the storage system may respond to a read command from the user because the read command does not change the contents of the mapping table.
The fourth step: the software layer unlocks the mapping table, i.e., continues to respond to subsequent write commands.
Afterwards the user continues to write new LAs or to overwrite LAs written before. Suppose the state of the storage system at time T2 is as shown in fig. 4, i.e. LA(0) has been overwritten to PA(8) and LA(5) newly written to PA(4), and a new snapshot (T2) is to be generated; the above steps are repeated to write the T2 snapshot to the hard disk. A storage system generally sets an upper limit TH on the number of supported snapshots; once the number of snapshots generated by the user exceeds TH, the storage system automatically overwrites the oldest snapshot, and the user is aware of this behavior.
The storage system can thus store at most TH snapshots (system states), and the user can access the data of any snapshot by specifying it in a read command. For example, to read LA(0) as of snapshot T1, the storage system first reads the T1 mapping table from the hard disk into DRAM, looks up LA(0) in that table to obtain PA(3), and finally reads the data at PA(3) from the hard disk and returns it to the user.
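The prior-art flow described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual software-layer implementation: each snapshot freezes and persists a full copy of the LA-to-PA mapping table, which is why the table must be locked while it is saved.

```python
# Hypothetical sketch of the prior-art software-layer snapshot: every
# snapshot copies the entire LA -> PA mapping table, so snapshot cost is
# proportional to disk capacity (4 GB of DRAM for a 4 TB disk).

class SoftwareLayerStore:
    def __init__(self, num_entries):
        self.map_table = [None] * num_entries  # LA -> PA, one entry per 4 KB LA
        self.snapshots = {}                    # timestamp -> full table copy

    def write(self, la, pa):
        self.map_table[la] = pa                # any LA may land in any PA

    def take_snapshot(self, ts):
        # Models "lock table, write 4 GB table to disk, unlock":
        # the whole table is copied regardless of how little changed.
        self.snapshots[ts] = list(self.map_table)

    def read(self, la, ts=None):
        table = self.snapshots[ts] if ts is not None else self.map_table
        return table[la]

store = SoftwareLayerStore(16)
for la, pa in [(0, 3), (1, 9), (2, 7), (3, 2)]:  # state of fig. 2
    store.write(la, pa)
store.take_snapshot("T1")
store.write(0, 8)                                # overwrite LA(0) -> PA(8)
store.write(5, 4)                                # new write LA(5) -> PA(4)
store.take_snapshot("T2")
```

Reading LA(0) under snapshot T1 then returns PA(3) even though the live table has since moved LA(0) to PA(8), matching the behavior described above.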
The problems of the prior art are as follows:
1) The software layer must use expensive DRAM to hold the mapping table, whose size is 1/1024 of the hard-disk capacity.
2) The software layer cannot respond to user write commands while the mapping table is being saved, so most storage systems choose to perform snapshot operations late at night; this does not fundamentally solve the problem.
Disclosure of Invention
The aim of the application is to provide a snapshot implementation method based on a solid state disk, in which the snapshot is implemented inside the solid state disk; this saves the DRAM resources that the software layer requires in the prior art and greatly reduces cost.
The application discloses a snapshot implementation method based on a solid state disk, which divides an LBA segment in the solid state disk into a plurality of logic blocks and correspondingly divides an L2P mapping table into a plurality of L2P chunks, wherein each logic block corresponds to an L2P chunk, and the solid state disk is configured to execute the following steps:
receiving a snapshot generating command, adding a time stamp to each L2P chunk, and generating a snapshot with the time stamp;
checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, creating a new L2P chunk and updating the corresponding flash memory physical address into the new L2P chunk;
receiving the snapshot generating command again, traversing the latest L2P chunk corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk, and generating a snapshot with the latest timestamp;
and associating a plurality of L2P chunks corresponding to each logic block.
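The four steps above can be modeled by a short sketch. This is a hypothetical data-structure model, not the patent's firmware: a snapshot command merely stamps the head chunk of each logical block, and a write into a stamped chunk triggers a per-chunk copy-on-write.

```python
# Hypothetical model of the claimed method: the L2P table is split into
# per-logical-block chunks; a snapshot command stamps unstamped head chunks,
# and a write into a stamped chunk allocates a fresh (unstamped) chunk.

CHUNK_LBAS = 4  # LBAs per logical block (illustrative; the claims allow 1..1G)

class L2PChunk:
    def __init__(self, size):
        self.timestamp = None          # None => mutable, not yet in a snapshot
        self.entries = [None] * size   # LBA offset -> flash physical address

class SnapshotSSD:
    def __init__(self, num_blocks):
        # One chunk chain per logical block, newest chunk at the head.
        self.chains = [[L2PChunk(CHUNK_LBAS)] for _ in range(num_blocks)]

    def generate_snapshot(self, ts):
        # Only heads written since the last snapshot lack a timestamp.
        for chain in self.chains:
            if chain[0].timestamp is None:
                chain[0].timestamp = ts   # stamped chunks become read-only

    def write(self, lba, pba):
        chain = self.chains[lba // CHUNK_LBAS]
        if chain[0].timestamp is not None:          # head belongs to a snapshot
            chain.insert(0, L2PChunk(CHUNK_LBAS))   # new head of the chain
        chain[0].entries[lba % CHUNK_LBAS] = pba
```

Note the cost profile this models: a snapshot touches only one flag per logical block, and a logical block that is never rewritten between two snapshots keeps a single chunk shared by both.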
In a preferred example, the method for associating the plurality of L2P chunks corresponding to each of the logic blocks includes: linked lists, arrays, and hash algorithms.
In a preferred example, the method for associating the plurality of L2P chunks corresponding to each of the logic blocks includes: and connecting a plurality of L2P chunks corresponding to each logic block by using a linked list, wherein the latest L2P chunk is positioned at the head of the linked list.
In a preferred example, the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot of the appointed LBA and the timestamp, determining the logical block to which the LBA belongs and the L2P chunk of the appointed timestamp corresponding to the logical block, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk.
In a preferred example, the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot of the specified LBA and the timestamp, determining the logical block to which the LBA belongs and the logical block does not have the corresponding L2P chunk of the specified timestamp, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk of the timestamp previous to the logical block.
In a preferred example, the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot with the appointed LBA and the timestamp, determining that the logical block to which the LBA belongs and the flash memory physical address corresponding to the LBA in the L2P chunk with the appointed timestamp corresponding to the logical block are empty, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk with the timestamp previous to the logical block.
In a preferred example, the flash memory physical address in the solid state disk is divided into a plurality of physical blocks, and each physical block includes a plurality of physical pages.
In a preferred example, each of the logic blocks includes 1 to 1G consecutive LBAs.
In a preferred embodiment, the step of generating a snapshot with a timestamp by the solid state disk further includes:
determining a system timestamp corresponding to the snapshot generating command when the snapshot generating command is received as a timestamp of the current snapshot, or:
the system timestamp maintained before receiving the snapshot generation command is determined as the timestamp of the next snapshot, and the maintained system timestamp is updated after receiving the snapshot generation command to serve as the timestamp of the new L2P chunk.
The present application also discloses a storage system, comprising: a software layer and a solid state disk, the software layer communicating with one or more users, the LBA segment in the solid state disk being divided into a number of logical blocks and a corresponding L2P mapping table being divided into a number of L2P chunks, each logical block corresponding to one L2P chunk, the software layer being configured to perform the steps of:
initiating a snapshot generation command to enable the solid state disk to timestamp each L2P chunk and generate a snapshot with a timestamp;
initiating a write command to enable the solid state disk to: checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, creating a new L2P chunk and updating the corresponding flash memory physical address into the new L2P chunk;
and initiating a snapshot generation command again to enable the solid state disk to: traversing the latest L2P chunk corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk, generating a snapshot with the latest timestamp, and associating a plurality of L2P chunks corresponding to each logic block.
In a preferred example, the software layer is configured to perform the following steps: initiating a read command to take a snapshot of the specified LBA and timestamp from the solid state disk.
Compared with the prior art, the method has the following beneficial effects:
in the embodiment of the application, the snapshot is realized in the solid state disk, so that DRAM (dynamic random access memory) resources of a software layer in the prior art can be saved, and the cost is greatly reduced. The solid state disk optimizes the snapshot implementation technology, so that the response time of the snapshot command approaches zero.
The present specification describes a number of technical features distributed among the various technical solutions; listing all possible combinations of these features (i.e. all technical solutions) would make the specification excessively long. To avoid this problem, the technical features disclosed in the above summary, in the following embodiments and examples, and in the drawings may be freely combined with one another to constitute various new technical solutions (all of which should be regarded as described in this specification), unless such a combination is technically infeasible. For example, suppose feature A + B + C is disclosed in one example and feature A + B + D + E in another, where C and D are equivalent means for the same purpose of which technically only one would be employed, and where E can technically be combined with C. Then the solution A + B + C + D should not be considered described, because it is not technically feasible, whereas the solution A + B + C + E should be considered described.
Drawings
FIG. 1 is a schematic diagram of a prior art storage system.
Fig. 2 is a mapping relationship of logical addresses and physical addresses.
Fig. 3 is a diagram illustrating a system state at time T1.
Fig. 4 is a diagram illustrating a system state at time T2.
Fig. 5 is a flowchart illustrating a snapshot implementation method based on a solid state disk according to a first embodiment of the present application.
Fig. 6 is a mapping relationship between logical addresses and physical addresses based on a solid state disk according to an embodiment of the present application.
Fig. 7 is a diagram illustrating a system state at time T1 according to an embodiment of the present application.
FIG. 8 is a diagram of creating a new L2P chunk according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a system state at time T2 according to an embodiment of the present application.
Fig. 10 is a schematic diagram illustrating an initial state of a solid state disk according to an embodiment of the present application.
FIG. 11 is a schematic diagram of writing LBAs in accordance with an embodiment of the present application.
Fig. 12 is a schematic diagram of a first time snapshot command is generated according to an embodiment of the present application.
FIG. 13 is a schematic diagram of overwriting an LBA in accordance with an embodiment of the present application.
Detailed Description
In the following description, numerous technical details are set forth in order to provide a better understanding of the present application. However, it will be understood by those skilled in the art that the technical solutions claimed in the present application may be implemented without these technical details and with various changes and modifications based on the following embodiments.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
A first embodiment of the present application relates to a snapshot implementation method based on a solid state disk; the flow of the method is shown in fig. 5. A solid state disk, also called a Solid State Drive (SSD), is a hard disk made of an array of solid-state electronic memory chips and consists mainly of a main controller (or processor) and memory chips. The main controller processes commands from the software layer, e.g. snapshot-generation commands, write commands, read commands, and erase commands, and performs the related operations. The memory chips may be NAND flash memory, NOR flash memory, magnetoresistive random access memory (MRAM), resistive random access memory (RRAM), phase-change random access memory (PCRAM), Nano-RAM, and the like. NAND flash memory is used below as an example to demonstrate the snapshot implementation technique.
As shown in fig. 6, due to the unique characteristics of flash memory, the storage space (e.g., DRAM) of the solid state disk maintains an L2P (Logical _2_ Physical) linear table to record the Physical location information of each Logical Block Address (LBA) written in the flash memory.
The LBA segment (i.e. all the LBAs) in the solid state disk is divided into a number of logical blocks, each consisting of a number of consecutive LBAs, and the L2P mapping table is correspondingly divided into a number of L2P chunks, one L2P chunk per logical block. Referring to fig. 7, the LBAs are divided into logical blocks 0, 1, etc., each of which includes 1 to 1G LBAs. L2P is correspondingly divided into a number of L2P chunks, and the physical address space (PBA) is divided into a number of physical blocks, each containing several physical pages (for example, 5). A physical page is the minimum read/write unit and a physical block is the minimum erase unit. A physical address identifies a physical block and a physical page within it; for example, the 3rd physical page of the 2nd physical block is written {2,3}. In the initial state, each logical block corresponds to one L2P chunk.
In one embodiment, the snapshot implementation method comprises the following steps:
step 501, the solid state disk receives a snapshot generating command, and adds a timestamp to each L2P chunk to generate a snapshot with a timestamp.
Step 502, the solid state disk checks the logical block to which the written LBA belongs, determines that the L2P chunk corresponding to the logical block already has a timestamp, creates a new L2P chunk, and updates the corresponding physical address to the new L2P chunk.
Step 503, the solid state disk receives the snapshot generating command again, traverses the latest L2P chunks corresponding to all the logical blocks, determines that the latest L2P chunk does not have a timestamp, and adds a new timestamp to the latest L2P chunk to generate a snapshot with the latest timestamp.
And step 504, the solid state disk associates multiple L2P chunks corresponding to each of the logic blocks.
In one embodiment, the method for associating the plurality of L2P chunks corresponding to each of the logic blocks includes: linked lists, arrays, and hash algorithms. For example, in one embodiment, the association is performed by using a linked list method, and a plurality of L2P chunks corresponding to each logical block are linked by using a linked list, and the latest L2P chunk is located at the head of the linked list.
In one embodiment, when data in the solid state disk needs to be read, the method further includes:
receiving a read command for reading data under the snapshot with the appointed LBA and the timestamp, determining a logical block to which the LBA belongs and an L2P chunk with the appointed timestamp corresponding to the logical block, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk;
determining a logical block to which the LBA belongs and the logical block does not have a corresponding L2P chunk with a specified timestamp, and returning data stored in a flash memory physical address corresponding to the LBA in an L2P chunk with a previous timestamp of the logical block;
and determining that the logical block to which the LBA belongs and the flash memory physical address corresponding to the LBA in the L2P chunk with the assigned timestamp corresponding to the logical block are null, and returning data stored in the flash memory physical address corresponding to the LBA in the L2P chunk with the timestamp previous to the logical block.
In an embodiment, a system timestamp corresponding to the time when the snapshot generating command is received may be determined as a timestamp of the current snapshot to generate the snapshot with the timestamp. And, the latest system timestamp corresponding to when the snapshot generation command is received again may be determined as the timestamp of the latest snapshot.
In an embodiment, the solid state disk maintains a system timestamp before receiving the snapshot generating command, and when the snapshot generating command is received, the system timestamp maintained by the solid state disk is determined as the timestamp of the snapshot, and the system timestamp maintained by the solid state disk is updated, and the updated system timestamp is used as the timestamp of the new L2P chunk.
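This pre-maintained-timestamp variant can be sketched as follows. The structure is hypothetical (a single logical block's chunk chain, integer timestamps): the disk records the next snapshot's timestamp in advance, so a chunk allocated during a write is stamped immediately and the snapshot command itself only advances the counter.

```python
# Hypothetical sketch of the pre-maintained system timestamp: new chunks
# are stamped with the *next* snapshot's timestamp at allocation time,
# so the snapshot command only has to advance the maintained counter.

class BlockChain:
    def __init__(self, size):
        self.size = size
        self.next_ts = 1                     # timestamp of the next snapshot
        self.chain = [(1, [None] * size)]    # (timestamp, entries), newest first

    def write(self, offset, pba):
        ts, entries = self.chain[0]
        if ts != self.next_ts:               # head was frozen by a snapshot
            entries = [None] * self.size
            self.chain.insert(0, (self.next_ts, entries))
        entries[offset] = pba                # new chunk already carries next_ts

    def generate_snapshot(self):
        ts = self.next_ts
        self.next_ts += 1                    # update the maintained timestamp
        return ts
```

The design choice modeled here is that no traversal is needed at snapshot time at all, since every chunk created since the previous snapshot was stamped when it was allocated.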
Disclosed in a second embodiment of the present application is a storage system comprising: a software layer and a solid state disk, the software layer communicating with one or more users, the LBA segment in the solid state disk being divided into a number of logical blocks and a corresponding L2P mapping table being divided into a number of L2P chunks, each logical block corresponding to one L2P chunk, the software layer being configured to perform the steps of:
initiating a snapshot generation command to enable the solid state disk to timestamp each L2P chunk and generate a snapshot with a timestamp;
initiating a write command to enable the solid state disk to: checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, newly creating a new L2P chunk and updating the corresponding physical address into the new L2P chunk;
and initiating a snapshot generation command again to enable the solid state disk to: traversing the latest L2P chunk corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk, generating a snapshot with the latest timestamp, and associating a plurality of L2P chunks corresponding to each logic block.
It should be understood that the technical details in the first embodiment can be applied to the present embodiment, and the technical details in the present embodiment can also be applied to the first embodiment.
In order to better understand the technical solution of the present application, the following description is given with reference to a specific example, in which the listed details are mainly for the sake of understanding and are not intended to limit the scope of the present application.
The mainstream medium for storing data in a solid state disk is NAND flash, which has unique characteristics: a physical page is the minimum read/write unit, a physical block is the minimum erase unit, and a page can only be written again after its block has been erased. So that the user (specifically, the software layer of the storage system) does not perceive these characteristics of the flash, mainstream solid state disks maintain an L2P linear table in their storage space to record the physical location of each written LBA in the flash. Each table entry occupies 4 B of DRAM space, so for a solid state disk of 4 TB capacity the size of L2P is 4 TB / 4 KB × 4 B = 4 GB. It should be understood that the PA(x) of the hard disk as seen by the software layer is equivalent to LBA_x inside the solid state disk.
Therefore, the L2P table in the solid state disk is substantially similar to the mapping table in the software layer of the prior art, and the DRAM hardware resources in the software layer can be saved by storing the L2P in the storage space of the solid state disk.
Further, supporting multiple snapshots (at different times) nominally requires saving multiple L2P mapping tables. In practice, however, it is highly probable that only a small number of LBAs are written or updated between two snapshots, while most LBAs are unchanged; for these unchanged LBAs the previous snapshot can be referenced directly.
As shown in fig. 7, to demonstrate the generality of this technique, the LBA interval (segment) of the solid state disk is logically divided into a number of logical blocks, where each logical block may have any size (for example, any number of consecutive LBAs from 1 to 1G). Taking 1K LBAs per logical block as an example, the LBA interval of the solid state disk is divided into 1G / 1K = 1M logical blocks: LBA(0) through LBA(1K-1) belong to logical block 0, LBA(1K) through LBA(2K-1) belong to logical block 1, and so on. From the LBA written by the software layer, the solid state disk can determine the logical block to which it belongs: LBA / 1K equals the logical block, and LBA mod 1K equals the LBA's position within that block. Similarly, the L2P mapping table is managed at logical-block granularity, i.e. the portion of L2P corresponding to each logical block has size 1K × 4 B = 4 KB and is called an L2P Chunk. In the initial state, each logical block corresponds to only one L2P Chunk.
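The index arithmetic above can be checked with a few lines, using K = 1024 LBAs per logical block (the 1K example in the text):

```python
# Worked check of the logical-block index arithmetic: integer division
# gives the logical block, the remainder gives the position inside it.

K = 1024                       # LBAs per logical block
TOTAL_LBAS = 2**30             # 1G LBAs, i.e. a 4 TB disk at 4 KB per LBA

num_blocks = TOTAL_LBAS // K   # 1G / 1K = 1M logical blocks
lba = 2050
block = lba // K               # logical block the LBA belongs to
offset = lba % K               # position of the LBA inside that block
chunk_bytes = K * 4            # each L2P Chunk: 1K entries x 4 B = 4 KB
```

For example, LBA 2050 lands at offset 2 of logical block 2.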
The solid state disk realizes the snapshot as follows:
the first step is as follows: assuming that the state of the solid state disk at time T1 is as shown in fig. 7, the software layer initiates the snapshot generation command.
The second step is that: after the solid state disk receives the snapshot command, each L2P Chunk is marked with a [ T1], and T1 indicates the timestamp to which the snapshot belongs, and also implies that the L2P Chunk is locked and cannot change its content. And the operation of marking the mark can be executed in the background task of the solid state disk, so that the response time of the snapshot command is very fast, and the microsecond level can be completed.
Third step: suppose the user overwrites LBA0, which belongs to logical block 0. The solid state disk checks that the L2P Chunk corresponding to logical block 0 has been marked [T1], so it belongs to a previous snapshot and cannot be changed; the solid state disk therefore allocates an additional new (unmarked) L2P Chunk in DRAM and records the newly written physical address {physical block 2, physical page 3} of logical block 0 in it. This process, too, costs almost nothing. Logical block 0 now corresponds to two L2P Chunks, which must therefore be associated, by means including but not limited to a linked list, an array, or a hash. Taking a linked list as an example, each logical block may correspond to several L2P Chunks concatenated in a linked list, with the latest L2P Chunk appended at the head. As more LBAs are written, more logical blocks come to correspond to two L2P Chunks; assume that at time T2 the state of the solid state disk is as shown in fig. 8 and the software layer initiates the snapshot command again.
Fourth step: the solid state disk traverses the latest L2P Chunk of every logical block. If a Chunk is unmarked, new data was written to its logical block during the period T1->T2, and the Chunk is marked [T2]; otherwise, no new data was written to that logical block during T1->T2, its state is the same as at time T1, and nothing needs to be done, as shown in fig. 9.
Fifth step: the third and fourth steps are repeated to continue processing subsequent write commands and snapshot commands.
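The five steps above can be replayed as a minimal sketch with the concrete values of this worked example (hypothetical structures and field names; the overwrite of LBA0 to flash physical address {2,3} happens between the T1 and T2 snapshot commands):

```python
# Hypothetical replay of the five steps for two logical blocks: the first
# snapshot stamps both head chunks, the overwrite of LBA0 forces a new
# chunk for logical block 0, and the second snapshot stamps only that chunk.

def stamp_all(chains, ts):
    """Generate-snapshot command: mark every unstamped head chunk."""
    for chain in chains.values():
        if chain[0]["ts"] is None:
            chain[0]["ts"] = ts       # unchanged blocks keep their old stamp

def cow_write(chains, block, offset, pba, nlbas=4):
    """Write command: allocate a new head chunk if the head is stamped."""
    chain = chains[block]
    if chain[0]["ts"] is not None:
        chain.insert(0, {"ts": None, "map": [None] * nlbas})
    chain[0]["map"][offset] = pba

chains = {0: [{"ts": None, "map": [(0, 0), (0, 1), (0, 2), None]}],
          1: [{"ts": None, "map": [(2, 0), None, None, None]}]}
stamp_all(chains, "T1")           # first snapshot command (steps one and two)
cow_write(chains, 0, 0, (2, 3))   # overwrite LBA0 -> {2,3} (step three)
stamp_all(chains, "T2")           # second snapshot command (step four)
```

After the replay, logical block 0 carries a [T2] chunk on top of its [T1] chunk, while logical block 1, untouched between the two commands, still has only its [T1] chunk.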
Compared with the prior art, this snapshot implementation technique is highly efficient and adds almost no extra latency to user write commands.
The software layer may specify a timestamp together with an LBA in a read command to read data under a specific snapshot. For example, to read [LBA0, T2], the solid state disk determines that LBA0 belongs to logical block 0, searches from the head of logical block 0's L2P Chunk linked list for the Chunk marked [T2], and after finding it reads the data at flash physical address {2,3} and returns it to the software layer. Two special cases can occur here:
1) The logical block to which the specified LBA belongs has no L2P Chunk with the specified timestamp. For example, for a read of [LBA(K), T2], the solid state disk finds no L2P Chunk marked [T2] in logical block 1, to which LBA(K) belongs, because no new data was written to logical block 1 during T1->T2. In this case the solid state disk reads the data from the snapshot before [T2] and returns it to the software layer, i.e. it returns the data at flash physical address {2,0}, which corresponds to LBA(K) in the L2P Chunk marked [T1]; this is exactly the data the software layer wants.
2) The flash physical address of the LBA is empty in the L2P Chunk with the specified timestamp. For example, a read of [LBA(2), T2] is specified; the solid state disk can find the L2P Chunk marked [T2] of logical block 0, to which LBA(2) belongs, but the flash physical address corresponding to LBA(2) in that Chunk is [-] (i.e., empty), indicating that LBA(2) was not updated during the T1->T2 period, so the data the user wants lives in the [T1] snapshot. As in case 1), the solid state disk returns data from the snapshot preceding T2: it reads the data at the flash physical address {0,1} corresponding to LBA(2) in the L2P Chunk marked [T1] and returns it to the software layer.
In both special cases, the software layer still obtains the data the user wants.
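The timestamped read path, including both special cases, might be sketched as follows. This is an illustrative Python model under stated assumptions: the chunk contents mirror the example state after [T2], LBA(k) is assumed to be LBA(4) so that it falls in logical block 1, and the names and in-memory layout are not from the patent.

```python
CHUNK_SIZE = 4

# Chunk lists per logical block, newest-first; each entry is
# (timestamp, {LBA offset: flash physical address}). The contents mirror the
# example: LBA0 was rewritten between T1 and T2, LBA(2) was not, and logical
# block 1 saw no writes at all in that period (so it has no [T2] chunk).
chunks = {
    0: [("T2", {0: (2, 3)}),
        ("T1", {0: (0, 1), 2: (0, 1)})],
    1: [("T1", {0: (2, 0)})],
}

SNAPSHOTS = ["T1", "T2"]   # snapshot timestamps, oldest first

def read(lba, ts):
    """Return the flash physical address holding the data of `lba` under snapshot `ts`."""
    block, offset = divmod(lba, CHUNK_SIZE)
    # Try the requested snapshot first, then fall back to older ones: this
    # covers case 1 (no chunk with that timestamp) and case 2 (address empty).
    for want in reversed(SNAPSHOTS[: SNAPSHOTS.index(ts) + 1]):
        for chunk_ts, mapping in chunks[block]:
            if chunk_ts == want and mapping.get(offset) is not None:
                return mapping[offset]
    return None   # LBA never written before this snapshot
```

With this state, `read(0, "T2")` hits the [T2] chunk directly and returns `(2, 3)`; `read(4, "T2")` (case 1) falls back to logical block 1's [T1] chunk and returns `(2, 0)`; and `read(2, "T2")` (case 2) skips the empty entry in the [T2] chunk and returns `(0, 1)`.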
In the implementation above, the solid state disk determines the timestamp of a snapshot only when the software layer initiates the snapshot generating command. It should be noted that other, similar ways of determining the snapshot timestamp are possible. For example, a system timestamp [Tx] may be determined before the software layer initiates a snapshot generating command: the solid state disk internally maintains a system timestamp [Tx] that records, in advance, the timestamp of the next snapshot command, and whenever the solid state disk allocates a new L2P Chunk while processing a write command, it can immediately stamp [Tx] onto the newly allocated L2P Chunk. This works as follows:
The first step: as shown in FIG. 10, in the initial state, that is, before the software layer has ever issued a snapshot generating command or a write command, the system timestamp maintained inside the solid state disk is pre-set to [T1], to serve as the timestamp of the first future snapshot generating command.
The second step: as shown in FIG. 11, the software layer issues a series of write commands; the solid state disk allocates several L2P Chunks according to the LBAs of the write commands and, at allocation time, stamps the system timestamp [T1] onto each L2P Chunk.
The third step: as shown in FIG. 12, the software layer issues the first snapshot generating command. All the solid state disk needs to do at this point is update the system timestamp to [T2], i.e., record in advance the timestamp of the second snapshot generating command.
The fourth step: repeat the second and third steps. Suppose the software layer issues a command that overwrites LBA0; the solid state disk creates a new L2P Chunk and stamps the system timestamp [T2] onto the newly allocated L2P Chunk, as shown in FIG. 13.
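This alternative embodiment, in which chunks are stamped at allocation time and the snapshot command merely advances the pre-recorded timestamp, can be sketched as follows. The sketch is a minimal Python model under assumptions: timestamps are modeled as integers (1 meaning [T1]), and the names are illustrative, not from the patent.

```python
class SSD:
    def __init__(self, num_logical_blocks, chunk_size):
        self.chunk_size = chunk_size
        self.system_ts = 1   # pre-set to [T1] before any command (FIG. 10)
        self.chunks = [[] for _ in range(num_logical_blocks)]

    def write(self, lba, phys_addr):
        block, offset = divmod(lba, self.chunk_size)
        head = self.chunks[block][0] if self.chunks[block] else None
        if head is None or head["ts"] != self.system_ts:
            # Allocate a new chunk and stamp the pre-recorded timestamp of
            # the *next* snapshot onto it immediately, so the later snapshot
            # command has nothing to traverse.
            head = {"ts": self.system_ts, "map": [None] * self.chunk_size}
            self.chunks[block].insert(0, head)
        head["map"][offset] = phys_addr

    def snapshot(self):
        # Generating a snapshot only advances the system timestamp (FIG. 12):
        # every existing chunk already carries the correct tag.
        self.system_ts += 1

ssd = SSD(num_logical_blocks=1, chunk_size=4)
ssd.write(0, (0, 1))   # allocates a chunk stamped T1 (FIG. 11)
ssd.snapshot()         # first snapshot: system timestamp becomes T2 (FIG. 12)
ssd.write(0, (2, 3))   # overwrite: a new chunk stamped T2 (FIG. 13)
```

The design trade-off is visible here: the traversal of the fourth step in the earlier embodiment disappears, at the cost of stamping every chunk eagerly at allocation time.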
It is noted that, in the present patent application, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. In the present patent application, a statement that an action is performed according to an element means that the action is performed according to at least that element, covering two cases: performing the action according to that element only, and performing the action according to that element together with other elements. Expressions such as "a plurality of" mean two or more.
All documents mentioned in this specification are incorporated by reference into the disclosure of the present application in their entirety, so that they can be relied upon as needed. It should be understood that the above description covers only preferred embodiments of the present disclosure and is not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of the present disclosure shall fall within the scope of protection of one or more embodiments of the present disclosure.
In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.

Claims (11)

1. A snapshot implementation method based on a solid state disk, characterized in that an LBA segment in the solid state disk is divided into a plurality of logical blocks and an L2P mapping table is correspondingly divided into a plurality of L2P chunks, each logical block corresponding to one L2P chunk, the solid state disk being configured to execute the following steps:
receiving a snapshot generating command, adding a timestamp to each L2P chunk, and generating a snapshot with the timestamp;
checking the logical block to which a written LBA belongs and, upon determining that the L2P chunk corresponding to the logical block already has a timestamp, creating a new L2P chunk and updating the corresponding flash memory physical address into the new L2P chunk;
receiving the snapshot generating command again, traversing the latest L2P chunk corresponding to each logical block and, upon determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk and generating a snapshot with the latest timestamp; and
associating the plurality of L2P chunks corresponding to each logical block.
2. The solid state disk-based snapshot implementation method of claim 1, wherein the manner of associating the plurality of L2P chunks corresponding to each of the logical blocks comprises: a linked list, an array, or a hash algorithm.
3. The solid state disk-based snapshot implementation method of claim 1, wherein the manner of associating the plurality of L2P chunks corresponding to each of the logical blocks comprises: connecting the plurality of L2P chunks corresponding to each logical block with a linked list, wherein the latest L2P chunk is located at the head of the linked list.
4. The solid state disk-based snapshot implementation method of claim 1, wherein the solid state disk is further configured to perform the following steps: receiving a read command for reading data under a snapshot with a specified LBA and timestamp, determining the logical block to which the LBA belongs and the L2P chunk with the specified timestamp corresponding to the logical block, and returning the data stored at the flash memory physical address corresponding to the LBA in that L2P chunk.
5. The solid state disk-based snapshot implementation method of claim 1, wherein the solid state disk is further configured to perform the following steps: receiving a read command for reading data under a snapshot with a specified LBA and timestamp, determining that the logical block to which the LBA belongs has no corresponding L2P chunk with the specified timestamp, and returning the data stored at the flash memory physical address corresponding to the LBA in the logical block's L2P chunk with the previous timestamp.
6. The solid state disk-based snapshot implementation method of claim 1, wherein the solid state disk is further configured to perform the following steps: receiving a read command for reading data under a snapshot with a specified LBA and timestamp, determining that the flash memory physical address corresponding to the LBA is empty in the L2P chunk with the specified timestamp corresponding to the logical block to which the LBA belongs, and returning the data stored at the flash memory physical address corresponding to the LBA in the logical block's L2P chunk with the previous timestamp.
7. The solid state disk-based snapshot implementation method of claim 1, wherein the flash memory of the solid state disk is divided into a plurality of physical blocks, each physical block comprising a plurality of physical pages.
8. The solid state disk-based snapshot implementation method of claim 1, wherein each of the logical blocks comprises 1 to 1G consecutive LBAs.
9. The solid state disk-based snapshot implementation method of claim 1, wherein the step in which the solid state disk generates the snapshot with the timestamp further comprises:
determining, when the snapshot generating command is received, a system timestamp corresponding to the snapshot generating command as the timestamp of the current snapshot; or:
determining the system timestamp maintained before the snapshot generating command is received as the timestamp of the next snapshot, and updating the maintained system timestamp after the snapshot generating command is received, to serve as the timestamp of subsequently created L2P chunks.
10. A storage system, comprising: a software layer and a solid state disk, the software layer communicating with one or more users, wherein an LBA segment in the solid state disk is divided into a plurality of logical blocks and a corresponding L2P mapping table is divided into a plurality of L2P chunks, each logical block corresponding to one L2P chunk, the software layer being configured to perform the following steps:
initiating a snapshot generating command to cause the solid state disk to timestamp each L2P chunk and generate a snapshot with the timestamp;
initiating a write command to cause the solid state disk to: check the logical block to which the written LBA belongs and, upon determining that the L2P chunk corresponding to the logical block already has a timestamp, create a new L2P chunk and update the corresponding flash memory physical address into the new L2P chunk; and
initiating the snapshot generating command again to cause the solid state disk to: traverse the latest L2P chunk corresponding to each logical block and, upon determining that the latest L2P chunk does not have a timestamp, add a new timestamp to the latest L2P chunk, generate a snapshot with the latest timestamp, and associate the plurality of L2P chunks corresponding to each logical block.
11. The storage system of claim 10, wherein the software layer is further configured to perform the following step: initiating a read command to read, from the solid state disk, data under a snapshot with a specified LBA and timestamp.
CN202110506296.4A 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk Active CN113254265B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110506296.4A CN113254265B (en) 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk
PCT/CN2022/102018 WO2022237916A1 (en) 2021-05-10 2022-06-28 Snapshot implementation method based on solid state drive, and storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110506296.4A CN113254265B (en) 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk

Publications (2)

Publication Number Publication Date
CN113254265A true CN113254265A (en) 2021-08-13
CN113254265B CN113254265B (en) 2023-03-14

Family

ID=77224012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110506296.4A Active CN113254265B (en) 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk

Country Status (2)

Country Link
CN (1) CN113254265B (en)
WO (1) WO2022237916A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237916A1 (en) * 2021-05-10 2022-11-17 苏州库瀚信息科技有限公司 Snapshot implementation method based on solid state drive, and storage system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100153620A1 (en) * 2008-12-17 2010-06-17 Mckean Brian Storage system snapshot assisted by SSD technology
CN102033793A (en) * 2010-12-14 2011-04-27 成都市华为赛门铁克科技有限公司 Snapshot method and solid-state hard disk
CN102591790A (en) * 2011-12-30 2012-07-18 记忆科技(深圳)有限公司 Method for implementing data storage snapshot based on solid state disk, and solid state disk
US20140195725A1 (en) * 2013-01-08 2014-07-10 Violin Memory Inc. Method and system for data storage
US20170091115A1 (en) * 2015-09-25 2017-03-30 Beijing Lenovo Software Ltd. Method and electronic device for a mapping table in a solid-state memory
US20190146908A1 (en) * 2017-11-13 2019-05-16 Silicon Motion Inc. Method for accessing flash memory module and associated flash memory controller and electronic device
CN112631950A (en) * 2020-12-11 2021-04-09 苏州浪潮智能科技有限公司 L2P table saving method, system, device and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789368B (en) * 2012-06-21 2015-10-21 记忆科技(深圳)有限公司 A kind of solid state hard disc and data managing method, system
US10754785B2 (en) * 2018-06-28 2020-08-25 Intel Corporation Checkpointing for DRAM-less SSD
CN112506438B (en) * 2020-12-14 2024-03-26 深圳大普微电子科技有限公司 Mapping table management method and solid state disk
CN113254265B (en) * 2021-05-10 2023-03-14 苏州库瀚信息科技有限公司 Snapshot implementation method and storage system based on solid state disk




Similar Documents

Publication Publication Date Title
US11232041B2 (en) Memory addressing
US8130554B1 (en) Securely erasing flash-based memory
US11782632B2 (en) Selective erasure of data in a SSD
JP6732684B2 (en) Information processing device, storage device, and information processing system
US10956071B2 (en) Container key value store for data storage devices
WO2018194772A1 (en) Persistent memory for key-value storage
KR101813786B1 (en) System and method for copy on write on an ssd
US20070094445A1 (en) Method to enable fast disk caching and efficient operations on solid state disks
US8214581B2 (en) System and method for cache synchronization
WO2015112864A1 (en) Garbage collection and data relocation for data storage system
US10552045B2 (en) Storage operation queue
US20170010810A1 (en) Method and Apparatus for Providing Wear Leveling to Non-Volatile Memory with Limited Program Cycles Using Flash Translation Layer
US11150819B2 (en) Controller for allocating memory blocks, operation method of the controller, and memory system including the controller
US20140325168A1 (en) Management of stored data based on corresponding attribute data
CN113254265B (en) Snapshot implementation method and storage system based on solid state disk
US10642531B2 (en) Atomic write method for multi-transaction
CN110231914B (en) Data storage device and method of operating the same
KR102589609B1 (en) Snapshot management in partitioned storage
US6910214B1 (en) Method, system, and program for converting an input parameter list into an output parameter list
TWI669610B (en) Data storage device and control method for non-volatile memory
CN114661238B (en) Method for recovering storage system space with cache and application
CN115269274B (en) Data recovery method, device, computer equipment and storage medium
US20220164119A1 (en) Controller, and memory system and data processing system including the same
KR100939814B1 (en) Method of managing and writing log file for flash memory
US20140258610A1 (en) RAID Cache Memory System with Volume Windows

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20240218

Granted publication date: 20230314
