CN113254265B - Snapshot implementation method and storage system based on solid state disk - Google Patents

Snapshot implementation method and storage system based on solid state disk

Info

Publication number: CN113254265B
Application number: CN202110506296.4A
Authority: CN (China)
Prior art keywords: timestamp, snapshot, chunk, solid state disk
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113254265A
Inventors: 杨国华, 朱文禧, 许毅
Current assignee: Suzhou Kuhan Information Technology Co Ltd
Original assignee: Suzhou Kuhan Information Technology Co Ltd
Application filed by Suzhou Kuhan Information Technology Co Ltd
Priority to CN202110506296.4A
Related application: PCT/CN2022/102018 (WO2022237916A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1456 Hardware arrangements for backup
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory

Abstract

The application relates to the technical field of storage and discloses a snapshot implementation method and a storage system based on a solid state disk. In the method, the LBA segment in the solid state disk is divided into a plurality of logical blocks and the L2P mapping table is correspondingly divided into a plurality of L2P chunks, with each logical block corresponding to one L2P chunk. When the solid state disk receives a snapshot generation command, it adds a timestamp to each L2P chunk to generate a snapshot with that timestamp. When an LBA is subsequently written, the solid state disk checks the logical block to which the LBA belongs; if the corresponding L2P chunk already carries a timestamp, it creates a new L2P chunk and updates the corresponding flash physical address into the new chunk. When a snapshot generation command is received again, the solid state disk traverses the latest L2P chunk of every logical block; any latest chunk that does not yet carry a timestamp is given the new timestamp, generating a snapshot with the latest timestamp. The multiple L2P chunks corresponding to each logical block are then associated with one another. Because the snapshot is implemented inside the solid state disk, DRAM resources of the software layer can be saved and cost is reduced.

Description

Snapshot implementation method and storage system based on solid state disk
Technical Field
The present application relates to the field of storage technologies, and in particular, to a snapshot implementation method and system based on a solid state disk.
Background
A snapshot is a technique for recording the state of a storage system at a certain point in time. A user may create multiple snapshots and, at any later time, access, modify, copy, or roll back to any snapshot that was previously created.
A storage system is mainly composed of a software layer (i.e., the computer host) and hard disks (i.e., the storage devices), and the software layer communicates with one or more users, as shown in fig. 1. In the prior art, the snapshot technique is implemented in the software layer, and the hard disk does not need to, and in fact cannot, perceive its existence. To implement snapshots, the software layer has to occupy a large amount of the computer's DRAM hardware resources and of the CPU's computing resources, and the execution time of a snapshot command is long.
The current trend in the storage industry is that solid state disks (SSDs) are gradually replacing mechanical hard disks (HDDs), and an SSD internally contains a large amount of relatively expensive DRAM used for fast access to the flash media.
The snapshot function of a prior art storage system is implemented in the software layer of fig. 1. Specifically, as shown in fig. 2, a mapping table (map table) from user Logical Addresses (LA) to hard disk Physical Addresses (PA) is maintained in the software layer. The data size represented by each LA and each PA is 4 KB (mainstream enterprise-level storage systems use 4 KB as the minimum read/write unit of data), and the data of each LA can be written to any PA, so the mapping table records, for every written LA, the PA location of its data. Generally, each entry occupies 4 B of DRAM, so assuming that the hard disk space exposed to the user is 4 TB, the size of the mapping table is 4 TB / 4 KB × 4 B = 4 GB. As shown in fig. 2, the data of LA(0) is written at PA(3), the data of LA(1) at PA(9), the data of LA(2) at PA(7), and the data of LA(3) at PA(2). Now suppose the user wants to generate a snapshot at this moment (T1), i.e., archive the storage system state at time T1 for later viewing; the software layer then permanently saves the mapping table at time T1 into the hard disk, as follows:
the first step is as follows: the user initiates a snapshot generation command.
The second step: the software layer locks the mapping table, i.e. does not respond to subsequent user write commands, preventing the mapping table from being updated.
The third step: the software layer writes the 4GB mapping table at time T1 to the hard disk, as shown in fig. 3. The step takes a long time, and the current mainstream solid state disk interface speed is 4GB/s, so the step takes 1s. During which the storage system may respond to a read command from the user because the read command does not change the contents of the mapping table.
The fourth step: the software layer unlocks the mapping table, i.e., continues to respond to subsequent write commands.
Thereafter, the user continues to write new LAs or LAs that were written before. Assume that the state of the storage system at time T2 is as shown in fig. 4, i.e., LA(0) has been rewritten to PA(8) and LA(5) has been newly written to PA(4), and a new snapshot (T2) is to be generated; the above steps are repeated to write snapshot T2 into the hard disk. Generally, the storage system sets an upper limit TH on the number of supported snapshots; if the number of snapshots generated by the user exceeds TH, the storage system automatically overwrites the oldest snapshot, and the user is aware of this.
Therefore, the storage system can store at most TH snapshots (system states), and a user can access the data of any snapshot simply by specifying that snapshot when issuing a read command. For example, to read LA(0) of the T1 snapshot, the storage system first reads the mapping table of time T1 from the hard disk into DRAM, then looks up PA = PA(3) for LA(0) in that mapping table, and finally reads the data at PA(3) from the hard disk and returns it to the user.
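To make this prior-art flow concrete, the following is a minimal sketch in Python of a software-layer snapshot as just described: the mapping table is locked, a full copy is persisted, the table is unlocked, and a snapshot read looks up the saved table. All names (SoftwareLayerSnapshots, save_snapshot, etc.) and the in-memory stand-in for "persist to the hard disk" are illustrative assumptions, not part of the patent.

```python
import copy

class SoftwareLayerSnapshots:
    """Prior-art style: the host software layer keeps the full LA->PA mapping
    table in DRAM and persists a complete copy of it for every snapshot."""

    def __init__(self):
        self.mapping = {}        # LA -> PA, lives in host DRAM
        self.snapshots = {}      # timestamp -> full copy of the mapping table
        self.locked = False

    def write(self, la, pa):
        if self.locked:
            raise RuntimeError("write rejected: mapping table is being snapshotted")
        self.mapping[la] = pa

    def save_snapshot(self, timestamp):
        self.locked = True                                        # step 2: stop accepting writes
        self.snapshots[timestamp] = copy.deepcopy(self.mapping)   # step 3: persist the whole table
        self.locked = False                                       # step 4: resume writes

    def read_from_snapshot(self, timestamp, la):
        table = self.snapshots[timestamp]    # load the saved table (from disk in reality)
        return table.get(la)                 # PA to read from the hard disk

# Usage mirroring fig. 2 to fig. 4: LA(0)->PA(3) at T1, rewritten to PA(8) before T2.
host = SoftwareLayerSnapshots()
host.write(0, 3); host.write(1, 9); host.write(2, 7); host.write(3, 2)
host.save_snapshot("T1")
host.write(0, 8); host.write(5, 4)
host.save_snapshot("T2")
print(host.read_from_snapshot("T1", 0))  # -> 3
print(host.read_from_snapshot("T2", 0))  # -> 8
```

The cost of this scheme is visible in save_snapshot: the entire table is copied out while writes are blocked, which is exactly the 4 GB, roughly 1 s pause described above.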
The problems of the prior art are as follows:
1) The software layer must use expensive DRAM to maintain the mapping table, and the size of the mapping table is 1/1024 of the capacity of the hard disk.
2) The software layer cannot respond to user write commands while the mapping table is being saved, so most storage systems choose to perform snapshot operations late at night; however, this does not fundamentally solve the problem.
Disclosure of Invention
The application aims to provide a snapshot implementation method based on a solid state disk, wherein the snapshot is implemented in the solid state disk, DRAM resources of a software layer in the prior art can be saved, and the cost is greatly reduced.
The application discloses a snapshot implementation method based on a solid state disk, which divides an LBA segment in the solid state disk into a plurality of logic blocks and correspondingly divides an L2P mapping table into a plurality of L2P chunks, wherein each logic block corresponds to one L2P chunk, and the solid state disk is configured to execute the following steps:
receiving a snapshot generating command, adding a timestamp to each L2P chunk, and generating a snapshot with the timestamp;
checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, creating a new L2P chunk and updating a corresponding flash memory physical address into the new L2P chunk;
receiving a snapshot generating command again, traversing the latest L2P chunk corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk, and generating a snapshot with the latest timestamp;
and associating a plurality of L2P chunks corresponding to each logic block.
In a preferred example, the method for associating the plurality of L2P chunks corresponding to each of the logic blocks includes: linked lists, arrays, and hash algorithms.
In a preferred example, the method for associating the plurality of L2P chunks corresponding to each of the logic blocks includes: and connecting a plurality of L2P chunks corresponding to each logic block by using a linked list, wherein the latest L2P chunk is positioned at the head of the linked list.
In a preferred example, the solid state disk is further configured to perform the following steps: receiving a reading command for reading data under the snapshot of the appointed LBA and the timestamp, determining the logical block to which the LBA belongs and the L2P chunk of the appointed timestamp corresponding to the logical block, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk.
In a preferred example, the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot with the appointed LBA and the timestamp, determining a logical block to which the LBA belongs and the logical block does not have a corresponding L2P chunk with the appointed timestamp, and returning data stored in a flash memory physical address corresponding to the LBA in the L2P chunk with the timestamp before the logical block.
In a preferred example, the solid state disk is further configured to perform the following steps: receiving a reading command for reading data under the snapshot with the appointed LBA and the timestamp, determining that the flash memory physical address corresponding to the LBA in the logic block to which the LBA belongs and the L2P chunk with the appointed timestamp corresponding to the logic block are empty, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk with the timestamp previous to the logic block.
In a preferred example, the flash memory physical address in the solid state disk is divided into a plurality of physical blocks, and each physical block includes a plurality of physical pages.
In a preferred example, each of the logic blocks includes 1 to 1G consecutive LBAs.
In a preferred embodiment, the step of generating a snapshot with a timestamp by the solid state disk further includes:
determining a system timestamp corresponding to the snapshot generating command when the snapshot generating command is received as a timestamp of the current snapshot, or:
determining a system timestamp maintained before receiving the snapshot generation command as a timestamp of a next snapshot, and updating the maintained system timestamp after receiving the snapshot generation command to be used as a timestamp of a new L2P chunk.
The present application also discloses a storage system, comprising a software layer and a solid state disk, wherein the software layer communicates with one or more users, the LBA segment in the solid state disk is divided into a plurality of logic blocks, the L2P mapping table is correspondingly divided into a plurality of L2P chunks, each logic block corresponds to one L2P chunk, and the software layer is configured to execute the following steps:
initiating a snapshot generation command to enable the solid state disk to add a timestamp to each L2P chunk and generate a snapshot with the timestamp;
initiating a write command to enable the solid state disk to: checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block has a timestamp, creating a new L2P chunk and updating a corresponding flash memory physical address into the new L2P chunk;
and initiating a snapshot generation command again to enable the solid state disk to: traversing the latest L2P chunks corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk to generate a snapshot with the latest timestamp, and associating a plurality of L2P chunks corresponding to each logic block.
In a preferred example, the software layer is configured to perform the following steps: initiating a read command to take a snapshot of the specified LBA and timestamp from the solid state disk.
Compared with the prior art, the method has the following beneficial effects:
in the embodiment of the application, the snapshot is realized in the solid state disk, so that DRAM (dynamic random access memory) resources of a software layer in the prior art can be saved, and the cost is greatly reduced. The solid state disk optimizes snapshot implementation technology, so that snapshot command response time approaches zero.
The present invention is not limited to the embodiments described above; the embodiments can be implemented in a variety of forms and combinations. To avoid unnecessary repetition, the technical features disclosed in the above summary of the invention, the technical features disclosed in the following embodiments and examples, and the technical features disclosed in the drawings may be freely combined with each other to constitute various new technical solutions (all of which should be considered as having been described in this specification), unless such a combination of technical features is technically infeasible. For example, if feature A+B+C is disclosed in one example and feature A+B+D+E in another, where features C and D are equivalent technical means performing the same function of which technically only one can be used at a time, and feature E can technically be combined with feature C, then the solution A+B+C+D should not be considered as described because it is technically infeasible, whereas the solution A+B+C+E should be considered as having been described.
Drawings
FIG. 1 is a schematic diagram of a prior art storage system.
Fig. 2 is a mapping relationship of logical addresses and physical addresses.
Fig. 3 is a schematic diagram of the system state at time T1.
Fig. 4 is a schematic diagram of the system state at time T2.
Fig. 5 is a flowchart illustrating a snapshot implementation method based on a solid state disk according to a first embodiment of the present application.
Fig. 6 is a mapping relationship between a logical address and a physical address based on a solid state disk according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a system state at time T1 according to an embodiment of the present application.
Fig. 8 is a schematic diagram of creating an L2P chunk according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a system state at time T2 according to an embodiment of the present application.
Fig. 10 is a schematic diagram illustrating an initial state of a solid state disk according to an embodiment of the application.
FIG. 11 is a schematic diagram of writing LBAs in accordance with an embodiment of the present application.
Fig. 12 is a schematic diagram of generating the first snapshot command according to an embodiment of the present application.
FIG. 13 is a schematic diagram of overwriting an LBA in accordance with an embodiment of the present application.
Detailed Description
In the following description, numerous technical details are set forth in order to provide a better understanding of the present application. However, it will be understood by those skilled in the art that the technical solutions claimed in the present application may be implemented without these technical details and with various changes and modifications based on the following embodiments.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
A first embodiment of the present application relates to a snapshot implementation method based on a solid state disk, and the flow of the method is shown in fig. 5. A solid state disk, also called a Solid State Drive (SSD), is a hard disk made of an array of solid-state electronic memory chips; an SSD mainly consists of a main controller (or processor) and memory chips. The main controller processes commands from the software layer, e.g., snapshot generation commands, write commands, read commands and erase commands, and performs the related operations. The memory chips may be NAND flash memory, NOR flash memory, magnetoresistive random access memory (MRAM), resistive random access memory (RRAM), phase change random access memory (PCRAM), Nano-RAM, and the like. NAND flash memory is used below as an example to demonstrate the snapshot implementation technique.
As shown in fig. 6, due to the unique characteristics of flash memory, the storage space (e.g., DRAM) of the solid state disk maintains an L2P (Logical-to-Physical) linear table to record, for each written logical block address (LBA), its physical location in the flash memory.
The LBA segment (i.e., all the LBAs) in the solid state disk is divided into a plurality of logical blocks, with consecutive LBAs grouped into one logical block, and the L2P mapping table is correspondingly divided into a plurality of L2P chunks, with each logical block corresponding to one L2P chunk. Referring to fig. 7, the LBAs are divided into logical block 0, logical block 1, and so on, each of which contains 1 to 1G consecutive LBAs. The L2P table is correspondingly divided into a plurality of L2P chunks, and the flash physical address space (PBA) is divided into a plurality of physical blocks, each physical block containing a plurality of physical pages (for example, 5 physical pages); a physical page is the minimum read/write unit and a physical block is the minimum erase unit. A physical address consists of the corresponding physical block and physical page, e.g., {2, 3} denotes the 3rd physical page of the 2nd physical block. Each logical block corresponds to one L2P chunk.
In one embodiment, the snapshot implementation method comprises the following steps:
step 501, the solid state disk receives a snapshot generating command, adds a timestamp to each L2P chunk, and generates a snapshot with a timestamp.
Step 502, the solid state disk checks the logical block to which the written LBA belongs, determines that the L2P chunk corresponding to the logical block already has a timestamp, creates a new L2P chunk, and updates the corresponding physical address to the new L2P chunk.
Step 503, the solid state disk receives the snapshot generating command again, traverses the latest L2P chunks corresponding to all the logical blocks, determines that the latest L2P chunk does not have a timestamp, and adds a new timestamp to the latest L2P chunk to generate a snapshot with the latest timestamp.
Step 504, the solid state disk associates multiple L2P chunks corresponding to each of the logic blocks.
In one embodiment, the method for associating the plurality of L2P chunks corresponding to each of the logical blocks includes: linked lists, arrays, and hash algorithms. For example, in one embodiment, the association is performed by using a linked list method, and the plurality of L2P chunks corresponding to each logical block are connected by using a linked list, where the latest L2P chunk is located at the head of the linked list.
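As a small illustration of the association step, the following sketch (in Python; the names L2PChunk, LogicalBlock, prepend_chunk and tag_head are chosen for this example and do not come from the patent) shows two of the options listed above: chaining the L2P chunks of one logical block in a linked list with the latest chunk at the head, and, as an alternative, keeping a mapping keyed by timestamp.

```python
from dataclasses import dataclass, field
from typing import Optional, Dict

@dataclass
class L2PChunk:
    timestamp: Optional[str] = None          # snapshot tag, None while still writable
    next: Optional["L2PChunk"] = None        # older chunk (linked-list association)

@dataclass
class LogicalBlock:
    head: L2PChunk = field(default_factory=L2PChunk)                  # newest chunk at the head
    by_timestamp: Dict[str, L2PChunk] = field(default_factory=dict)   # alternative: hash/array style

def prepend_chunk(block: LogicalBlock) -> L2PChunk:
    """Create a new writable chunk and associate it with the block's older chunks."""
    new_chunk = L2PChunk(next=block.head)    # linked-list association, newest first
    block.head = new_chunk
    return new_chunk

def tag_head(block: LogicalBlock, timestamp: str) -> None:
    block.head.timestamp = timestamp
    block.by_timestamp[timestamp] = block.head  # keep the timestamp-keyed view in sync

blk = LogicalBlock()
tag_head(blk, "T1")          # first snapshot freezes the initial chunk
prepend_chunk(blk)           # a later write creates a fresh chunk at the head
tag_head(blk, "T2")          # second snapshot freezes it
print(blk.head.timestamp, blk.head.next.timestamp)  # -> T2 T1
```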
In one embodiment, when data in the solid state disk needs to be read, the method further includes:
receiving a reading command for reading data under the snapshot with the appointed LBA and the timestamp, determining a logic block to which the LBA belongs and an L2P chunk with the appointed timestamp corresponding to the logic block, and returning data stored in a flash memory physical address corresponding to the LBA in the L2P chunk;
determining the logical block to which the LBA belongs, wherein the logical block does not have a corresponding L2P chunk with the specified timestamp, and returning the data stored at the flash memory physical address corresponding to the LBA in the L2P chunk with the previous timestamp of the logical block;
and determining that the logical block to which the LBA belongs and the flash memory physical address corresponding to the LBA in the L2P chunk with the appointed timestamp corresponding to the logical block are null, and returning data stored in the flash memory physical address corresponding to the LBA in the L2P chunk with the previous timestamp of the logical block.
In an embodiment, the system timestamp corresponding to the moment the snapshot generating command is received may be determined as the timestamp of the current snapshot, so as to generate the snapshot with that timestamp. Likewise, the latest system timestamp corresponding to the moment a snapshot generating command is received again may be determined as the timestamp of the latest snapshot.
In another embodiment, the solid state disk maintains a system timestamp before any snapshot generating command is received; when a snapshot generating command is received, the maintained system timestamp is taken as the timestamp of that snapshot and is updated at the same time, and the updated system timestamp is then used as the timestamp of any newly created L2P chunk.
Disclosed in a second embodiment of the present application is a storage system comprising: a software layer and a solid state disk, the software layer communicating with one or more users, the LBA segment in the solid state disk being divided into a plurality of logical blocks and the L2P mapping table being correspondingly divided into a plurality of L2P chunks, each logical block corresponding to one L2P chunk, the software layer being configured to perform the steps of:
initiating a snapshot generating command to enable the solid state disk to add a timestamp to each L2P chunk and generate a snapshot with the timestamp;
initiating a write command to enable the solid state disk to: checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, creating a new L2P chunk and updating a corresponding physical address into the new L2P chunk;
and initiating a snapshot generation command again to enable the solid state disk to: traversing the latest L2P chunks corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk to generate a snapshot with the latest timestamp, and associating a plurality of L2P chunks corresponding to each logic block.
It should be understood that the technical details in the first embodiment can be applied to the present embodiment, and the technical details in the present embodiment can also be applied to the first embodiment.
In order to better understand the technical solutions of the present application, the following description is given with reference to a specific example, in which the listed details are mainly for the sake of understanding, and are not intended to limit the scope of the present application.
The mainstream medium for storing data in a solid state disk is flash memory (NAND Flash), which has unique characteristics: a physical page is the minimum read/write unit, a physical block is the minimum erase unit, and a physical page can only be written after its block has been erased. To prevent the user (specifically, the software layer in the storage system) from perceiving these characteristics of the flash memory, an L2P linear table is maintained in the storage space of a mainstream solid state disk to record the physical location in flash of each written LBA, and each table entry occupies 4 B of DRAM space; assuming the capacity of the solid state disk is 4 TB, the size of the L2P table is 4 TB / 4 KB × 4 B = 4 GB. It should be understood that PA(x) of the hard disk as seen by the software layer is equivalent to LBA(x) inside the solid state disk.
Therefore, the L2P table in the solid state disk is basically similar to the mapping table in the software layer in the prior art, and the L2P is stored in the storage space of the solid state disk, so that the DRAM hardware resources in the software layer can be saved.
Further, keeping multiple snapshots (taken at different times) means storing multiple L2P mapping tables. In reality, however, there is a high probability that only a small number of LBAs are written or updated between two snapshots while most LBAs are unchanged, and for these unchanged LBAs the previous snapshot can be referenced directly.
As shown in fig. 7, to demonstrate the generality of the present technique, the LBA interval (segment) of the solid state disk is logically divided into a plurality of logical blocks, and each logical block may have an arbitrary size (for example, any 1 to 1G consecutive LBAs). Taking K = 1K LBAs per logical block as an example, the LBA interval of the hard disk is divided into 1G / 1K = 1M logical blocks: LBA(0) to LBA(K-1) belong to logical block 0, LBA(K) to LBA(2K-1) belong to logical block 1, and so on. The solid state disk can determine the logical block to which an LBA belongs from the LBA written by the software layer, i.e., LBA / K (rounded down) is the index of the logical block to which the LBA belongs, and LBA mod K gives the position of the LBA within that logical block. Similarly, the L2P mapping table is also managed at the granularity of logical blocks, i.e., the L2P portion corresponding to each logical block has a size of 1K × 4 B = 4 KB and is referred to as an L2P Chunk. In the initial state, each logical block corresponds to only one L2P Chunk.
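As a minimal sketch of this address decomposition, assuming K = 1024 LBAs per logical block (the constant and function name below are illustrative, not from the patent):

```python
LBAS_PER_LOGICAL_BLOCK = 1024  # K; the patent allows any value from 1 to 1G

def locate_lba(lba: int):
    """Return (index of the logical block the LBA belongs to, offset of the LBA within it)."""
    return lba // LBAS_PER_LOGICAL_BLOCK, lba % LBAS_PER_LOGICAL_BLOCK

# With K = 1024: LBA 0 is offset 0 of logical block 0, LBA 2050 is offset 2 of logical block 2.
print(locate_lba(0))     # -> (0, 0)
print(locate_lba(2050))  # -> (2, 2)
```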
The solid state disk realizes the snapshot as follows:
the first step is as follows: assuming that the state of the solid state disk at time T1 is as shown in fig. 7, the software layer initiates a snapshot generation command at this time.
The second step is that: after the solid state disk receives the snapshot command, marking each L2P Chunk with a [ T1] mark, wherein T1 represents the time stamp to which the snapshot belongs, and meanwhile, the L2P Chunk is also implied to be locked and the content of the L2P Chunk cannot be changed. And the operation of marking the mark can be executed in the background task of the solid state disk, so that the response time of the snapshot command is very fast, and the microsecond level can be completed.
The third step: assuming that the user overwrites LBA0, which belongs to logical block 0, the solid state disk checks that the L2P Chunk corresponding to logical block 0 has been marked with [ T1] so that it belongs to a previous snapshot and cannot be changed, the solid state disk additionally applies for a new L2P Chunk (not marked) in the DRAM, and then updates the physical address { physical block 2, physical page 3} newly written in logical block 0 into the L2P Chunk. This process is also almost cost free. It is found that logic block 0 corresponds to two L2P chunks, and therefore they can be associated, including but not limited to linked list, array, hash, etc. For example, taking a linked list as an example, that is, each logical block may correspond to multiple L2P chunks, the L2P chunks are chained together by the linked list, and the latest L2P Chunk is appended to the head of the linked list. With the writing of more other LBAs, there will be more cases that one logical block corresponds to two L2P chunks, assuming that at time T2, the state of the solid state disk is as shown in fig. 8, and the software layer has the snapshot command initiated.
The fourth step: the solid state disk traverses the latest L2P Chunk of all the logic blocks, and if the Chunk is not marked, the logic block is marked with [ T2] mark in the time period of T1- > T2, which indicates that new data are written in the logic block; otherwise, it is stated that there is no new data written in the logical block in the time period T1- > T2, and its status is the same as that at time T1, so what does not need to be done, as shown in fig. 9.
The fifth step: and repeating the third step and the fourth step, and continuously processing the subsequent write command and the snapshot command.
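To tie the second to fourth steps together, here is a self-contained sketch in Python. Everything in it (the class names SsdSnapshots and L2PChunk, the tiny K = 4, the method names) is invented for illustration; it is a simplified model of the scheme described above, not firmware code from the patent.

```python
from typing import Optional, Tuple, List

K = 4  # LBAs per logical block, kept tiny so the example is easy to follow

class L2PChunk:
    def __init__(self):
        self.entries: List[Optional[Tuple[int, int]]] = [None] * K  # offset -> (phys block, phys page)
        self.timestamp: Optional[str] = None     # None = not yet frozen by a snapshot
        self.next: Optional["L2PChunk"] = None   # older chunk in the per-block linked list

class SsdSnapshots:
    def __init__(self, num_logical_blocks: int):
        # head of each logical block's chunk linked list (newest chunk first)
        self.heads: List[L2PChunk] = [L2PChunk() for _ in range(num_logical_blocks)]

    def write_lba(self, lba: int, phys: Tuple[int, int]) -> None:
        lb, off = lba // K, lba % K
        if self.heads[lb].timestamp is not None:  # head chunk is frozen by a previous snapshot
            new_chunk = L2PChunk()                # apply for a new chunk
            new_chunk.next = self.heads[lb]       # chain it in front of the old one
            self.heads[lb] = new_chunk
        self.heads[lb].entries[off] = phys

    def take_snapshot(self, timestamp: str) -> None:
        # traverse the newest chunk of every logical block; tag only unmarked chunks
        for head in self.heads:
            if head.timestamp is None:
                head.timestamp = timestamp

ssd = SsdSnapshots(num_logical_blocks=2)
ssd.write_lba(0, (0, 0))        # LBA0 -> {physical block 0, page 0}
ssd.take_snapshot("T1")         # first snapshot: every head chunk gets [T1]
ssd.write_lba(0, (2, 3))        # overwrite LBA0: a new head chunk is created for block 0
ssd.take_snapshot("T2")         # only logical block 0 has an unmarked head, so only it gets [T2]
print(ssd.heads[0].timestamp, ssd.heads[0].next.timestamp)  # -> T2 T1
print(ssd.heads[1].timestamp)                               # -> T1
```

Note how the snapshot command itself only tags the unmarked head chunks, which is why its cost stays close to zero.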
Compared with the prior art, this snapshot implementation technique is highly efficient and introduces almost no extra latency for the user's write commands.
The software layer may specify a timestamp and an LBA in a read command to read data under a specific snapshot. For example, if [LBA0, T2] is specified, the solid state disk determines that LBA0 belongs to logical block 0, searches from the head of the L2P Chunk linked list of logical block 0 for the L2P Chunk marked [T2], and, after finding it, reads out the data at the flash physical address {2, 3} and returns it to the software layer. Two special cases can occur here:
1) The logical block to which the specified LBA belongs has no L2P Chunk with the specified timestamp. For example, if [LBA(K), T2] is specified, there is no L2P Chunk of logical block 1 (to which LBA(K) belongs) marked [T2] in the solid state disk, because no new data was written to logical block 1 during the period T1 -> T2. In this case, the solid state disk reads the data from the snapshot before [T2] and returns it to the software layer, e.g., it reads the data at the flash physical address {2, 0} corresponding to LBA(K) in the L2P Chunk marked [T1]; this is indeed the data the software layer wants.
2) The flash physical address of the specified LBA in the L2P Chunk with the specified timestamp of the logical block to which the LBA belongs is null. For example, if [LBA(2), T2] is specified, the L2P Chunk of logical block 0 (to which LBA(2) belongs) marked [T2] can be found in the solid state disk, but the flash physical address corresponding to LBA(2) in that Chunk is [-] (i.e., null). This indicates that no data update occurred for LBA(2) during the period T1 -> T2, i.e., the data the user wants is in the [T1] snapshot. Therefore, as in case 1), the solid state disk reads the data from the snapshot before T2 and returns it to the software layer, e.g., it reads the data at the flash physical address {0, 1} corresponding to LBA(2) in the L2P Chunk marked [T1].
In the above two special cases, the software layer can still obtain the data desired by the user.
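The read path just described, including both special cases, can be sketched as follows. The sample chunk contents mirror the example above (LBA0 -> {2, 3} under [T2]; LBA(2) -> {0, 1} and LBA(K) -> {2, 0} under [T1]); the tiny K = 4, the tuple-based chunk representation and the function name read_snapshot are assumptions made for this sketch only.

```python
from typing import Optional, Dict, List, Tuple

K = 4  # LBAs per logical block (illustrative)

# Per logical block: chunks ordered newest-first; each chunk is (timestamp, {offset: phys address}).
Chunk = Tuple[str, Dict[int, Tuple[int, int]]]
chunk_lists: Dict[int, List[Chunk]] = {
    0: [("T2", {0: (2, 3)}),               # [T2] chunk of logical block 0: only LBA0 was rewritten
        ("T1", {0: (0, 0), 2: (0, 1)})],   # [T1] chunk: LBA0 -> {0,0}, LBA(2) -> {0,1}
    1: [("T1", {0: (2, 0)})],              # logical block 1 has no [T2] chunk (nothing written after T1)
}

def read_snapshot(lba: int, timestamp: str) -> Optional[Tuple[int, int]]:
    lb, off = lba // K, lba % K
    started = False
    for tag, entries in chunk_lists[lb]:
        # skip chunks newer than the requested snapshot (assumes tags T1 < T2 < ... compare lexicographically)
        if not started and tag > timestamp:
            continue
        started = True
        if off in entries:        # physical address present in this chunk
            return entries[off]
        # address empty in this chunk: fall back to the previous (older) snapshot's chunk
    return None                   # LBA never written before this snapshot

print(read_snapshot(0, "T2"))   # -> (2, 3): found in the [T2] chunk of logical block 0
print(read_snapshot(2, "T2"))   # -> (0, 1): empty in [T2], falls back to the [T1] chunk
print(read_snapshot(4, "T2"))   # -> (2, 0): logical block 1 has no [T2] chunk, uses [T1]
```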
In the implementation described above, the solid state disk determines the timestamp of a snapshot only when the software layer initiates a snapshot generation command. It should be noted that other, similar ways of determining the snapshot timestamp may be used. For example, a system timestamp [Tx] may be determined before the software layer initiates a snapshot generation command, i.e., a system timestamp [Tx] is maintained inside the solid state disk, recording in advance the timestamp that will correspond to the next snapshot command; when the solid state disk applies for a new L2P Chunk while processing a write command, it can immediately stamp the system timestamp [Tx] onto the newly applied L2P Chunk. The implementation process is as follows:
The first step: As shown in fig. 10, in the initial state, i.e., before the software layer has ever issued a snapshot generation command or a write command, the system timestamp maintained inside the solid state disk is preset to [T1], indicating the timestamp that will correspond to the first future snapshot generation command.
The second step: As shown in fig. 11, the software layer initiates a series of write commands; the solid state disk applies for a plurality of L2P Chunks according to the LBAs of the write commands and stamps the system timestamp [T1] onto each Chunk as it is applied for.
The third step: As shown in fig. 12, the software layer initiates the first snapshot generation command. All the solid state disk needs to do at this point is update the system timestamp to [T2], i.e., record in advance the timestamp corresponding to the second snapshot generation command.
The fourth step: The second and third steps are repeated. Assuming the software layer initiates a command that overwrites LBA0, the solid state disk creates a new L2P Chunk and stamps the system timestamp [T2] onto the newly applied Chunk, as shown in fig. 13.
It is noted that, in the present patent application, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element. In the present patent application, if it is mentioned that an action is performed according to a certain element, it means that the action is performed according to at least that element, which covers two cases: the action is performed based on that element only, and the action is performed based on that element and other elements. Expressions such as "a plurality of" cover two or more, two or more kinds, and two or more times.
All documents mentioned in this specification are considered to be incorporated into the disclosure of the present application in their entirety, so that they can serve as a basis for amendment where necessary. It should be understood that the above description covers only preferred embodiments of the present disclosure and is not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of one or more embodiments of the present disclosure shall be included in the scope of protection of one or more embodiments of the present disclosure.
In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may also be possible or may be advantageous.

Claims (11)

1. A snapshot implementation method based on a solid state disk is characterized in that an LBA segment in the solid state disk is divided into a plurality of logic blocks and an L2P mapping table is correspondingly divided into a plurality of L2P chunks, each logic block corresponds to one L2P chunk, and the solid state disk is configured to execute the following steps:
receiving a snapshot generating command, adding a timestamp to each L2P chunk, and generating a snapshot with the timestamp;
checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, creating a new L2P chunk and updating a corresponding flash memory physical address into the new L2P chunk;
receiving a snapshot generating command again, traversing the latest L2P chunk corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, and adding a new timestamp to the latest L2P chunk to generate a snapshot with the latest timestamp;
and associating a plurality of L2P chunks corresponding to each logic block.
2. The solid state disk-based snapshot implementation method according to claim 1, wherein the method for associating the plurality of L2P chunks corresponding to each of the logical blocks comprises: linked lists, arrays, and hash algorithms.
3. The solid state disk-based snapshot implementation method according to claim 1, wherein the method for associating the plurality of L2P chunks corresponding to each of the logical blocks comprises: and connecting a plurality of L2P chunks corresponding to each logic block by using a linked list, wherein the latest L2P chunk is positioned at the head of the linked list.
4. The solid state disk-based snapshot implementation method of claim 1, wherein the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot with the appointed LBA and the timestamp, determining a logical block to which the LBA belongs and an L2P chunk with the appointed timestamp corresponding to the logical block, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk.
5. The solid state disk-based snapshot implementation method of claim 1, wherein the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot with the appointed LBA and the timestamp, determining a logical block to which the LBA belongs and the logical block does not have a corresponding L2P chunk with the appointed timestamp, and returning data stored in a flash memory physical address corresponding to the LBA in the L2P chunk with the timestamp before the logical block.
6. The solid state disk-based snapshot implementation method of claim 1, wherein the solid state disk is further configured to perform the following steps: receiving a read command for reading data under the snapshot with the appointed LBA and the timestamp, determining that the logical block to which the LBA belongs and the flash memory physical address corresponding to the LBA in the L2P chunk with the appointed timestamp corresponding to the logical block are null, and returning the data stored in the flash memory physical address corresponding to the LBA in the L2P chunk with the timestamp before the logical block.
7. The solid state disk-based snapshot implementation method of claim 1, wherein a flash memory physical address in the solid state disk is divided into a plurality of physical blocks, each physical block comprising a plurality of physical pages.
8. The solid state disk-based snapshot implementation method of claim 1, wherein each of the logical blocks comprises 1-1G consecutive LBAs.
9. The method for implementing the snapshot based on the solid state disk according to claim 1, wherein the step of generating the snapshot with the timestamp by the solid state disk further comprises:
determining a system timestamp corresponding to the snapshot generating command when the snapshot generating command is received as a timestamp of the current snapshot, or:
determining a system timestamp maintained before receiving the snapshot generating command as a timestamp of a next snapshot, and updating the maintained system timestamp after receiving the snapshot generating command to be used as a timestamp of a new L2P chunk.
10. A storage system, comprising: a software layer and a solid state disk, the software layer communicating with one or more users, the LBA segment in the solid state disk being divided into a plurality of logical blocks, and the L2P mapping table being correspondingly divided into a plurality of L2P chunks, each logical block corresponding to one L2P chunk, the software layer being configured to perform the following steps:
initiating a snapshot generating command to enable the solid state disk to add a timestamp to each L2P chunk and generate a snapshot with the timestamp;
initiating a write command to enable the solid state disk to: checking the logic block to which the written LBA belongs, determining that the L2P chunk corresponding to the logic block already has a timestamp, creating a new L2P chunk and updating a corresponding flash memory physical address into the new L2P chunk;
and initiating a snapshot generation command again to enable the solid state disk to: traversing the latest L2P chunks corresponding to all the logic blocks, determining that the latest L2P chunk does not have a timestamp, adding a new timestamp to the latest L2P chunk to generate a snapshot with the latest timestamp, and associating a plurality of L2P chunks corresponding to each logic block.
11. The storage system of claim 10, wherein the software layer is configured to perform the steps of: initiating a read command to take a snapshot of the specified LBA and timestamp from the solid state disk.
CN202110506296.4A 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk Active CN113254265B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110506296.4A CN113254265B (en) 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk
PCT/CN2022/102018 WO2022237916A1 (en) 2021-05-10 2022-06-28 Snapshot implementation method based on solid state drive, and storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110506296.4A CN113254265B (en) 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk

Publications (2)

Publication Number Publication Date
CN113254265A CN113254265A (en) 2021-08-13
CN113254265B (en) 2023-03-14

Family

ID=77224012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110506296.4A Active CN113254265B (en) 2021-05-10 2021-05-10 Snapshot implementation method and storage system based on solid state disk

Country Status (2)

Country Link
CN (1) CN113254265B (en)
WO (1) WO2022237916A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113254265B (en) * 2021-05-10 2023-03-14 苏州库瀚信息科技有限公司 Snapshot implementation method and storage system based on solid state disk

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8200922B2 (en) * 2008-12-17 2012-06-12 Netapp, Inc. Storage system snapshot assisted by SSD technology
CN102591790B (en) * 2011-12-30 2015-11-25 记忆科技(深圳)有限公司 Data based on solid state hard disc store snapshot implementing method and solid state hard disc
CN102789368B (en) * 2012-06-21 2015-10-21 记忆科技(深圳)有限公司 A kind of solid state hard disc and data managing method, system
US9378135B2 (en) * 2013-01-08 2016-06-28 Violin Memory Inc. Method and system for data storage
US10552335B2 (en) * 2015-09-25 2020-02-04 Beijing Lenovo Software Ltd. Method and electronic device for a mapping table in a solid-state memory
TWI661303B (en) * 2017-11-13 2019-06-01 慧榮科技股份有限公司 Method for accessing flash memory module and associated flash memory controller and electronic device
US10754785B2 (en) * 2018-06-28 2020-08-25 Intel Corporation Checkpointing for DRAM-less SSD
CN112506438B (en) * 2020-12-14 2024-03-26 深圳大普微电子科技有限公司 Mapping table management method and solid state disk
CN113254265B (en) * 2021-05-10 2023-03-14 苏州库瀚信息科技有限公司 Snapshot implementation method and storage system based on solid state disk

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102033793A (en) * 2010-12-14 2011-04-27 成都市华为赛门铁克科技有限公司 Snapshot method and solid-state hard disk
CN112631950A (en) * 2020-12-11 2021-04-09 苏州浪潮智能科技有限公司 L2P table saving method, system, device and medium

Also Published As

Publication number Publication date
WO2022237916A1 (en) 2022-11-17
CN113254265A (en) 2021-08-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PP01 Preservation of patent right (effective date of registration: 20240218; granted publication date: 20230314)