WO2022121471A1 - Method, system, device, and medium for saving an L2P table - Google Patents

Method, system, device, and medium for saving an L2P table

Info

Publication number
WO2022121471A1
WO2022121471A1 (PCT/CN2021/121914)
Authority
WO
WIPO (PCT)
Prior art keywords
lba
data
incremental data
snapshot
response
Prior art date
Application number
PCT/CN2021/121914
Other languages
English (en)
French (fr)
Inventor
陈庆陆
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司 filed Critical 苏州浪潮智能科技有限公司
Priority to US18/034,541 priority Critical patent/US20240020240A1/en
Publication of WO2022121471A1 publication Critical patent/WO2022121471A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1471Saving, restoring, recovering or retrying involving logging of persistent data for recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/0292User address space allocation, e.g. contiguous or non contiguous base addressing using tables or multilevel address translation means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2282Tablespace storage structures; Management thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1032Reliability improvement, data loss prevention, degraded operation etc
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7201Logical to physical mapping or translation of blocks or pages

Definitions

  • The present disclosure relates to the field of solid-state drives, and in particular, to a method, system, device, and storage medium for saving an L2P table.
  • The FTL mapping table, also known as the L2P table, implements the mapping from the host logical address space (LBA) to the flash physical address space (PBA) and is one of the core pieces of metadata managed by the SSD.
  • the L2P table is a linear table with LBA as the index and PBA as the content.
  • While the SSD is working, it flushes each piece of user data to the flash address space and records the mapping between the logical address and the physical address in the L2P table.
  • When the host reads the data, it sends the LBA of the data to the SSD; the SSD can look up the L2P table by the LBA, find the corresponding flash physical address PBA, read the data stored in the flash, and return it to the user.
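  • For illustration only, the following minimal C sketch (not part of the original disclosure; the table size and 32-bit address widths are assumptions) models the L2P table as a flat array indexed by LBA whose content is the PBA:

```c
#include <stdint.h>

#define L2P_ENTRIES (1u << 20)   /* assumed number of mapped logical blocks */

/* L2P table: a linear table indexed by LBA whose content is the PBA. */
static uint32_t l2p_table[L2P_ENTRIES];

/* Record a new logical-to-physical mapping after user data is flushed to flash. */
static void l2p_update(uint32_t lba, uint32_t pba)
{
    l2p_table[lba] = pba;
}

/* Host read path: translate the LBA into the PBA used to read from flash. */
static uint32_t l2p_lookup(uint32_t lba)
{
    return l2p_table[lba];
}
```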
  • While the SSD is running, the L2P table resides in DDR for fast access by the SSD controller.
  • When the SSD is powered off, the L2P table can be written from DDR to flash; when the SSD is powered on, it is read from flash and loaded into a specified DDR area.
  • The mapping table is 1/1024 of the SSD capacity, so the amount of data in the L2P table of an SSD is large; for example, the L2P table of a 4 TB drive is about 4 GB.
  • Although enterprise SSDs are equipped with energy-storage capacitors that can supply power for tens of milliseconds after an abnormal power failure, this is not enough to flush the entire L2P table to flash.
  • Therefore, SSDs generally save the L2P table by means of snapshots: during SSD operation, a snapshot of the L2P table is written to flash at certain intervals or when certain conditions are met, so that on an abnormal power failure only the few unsaved snapshots need to be flushed. After the SSD is powered on, the entire L2P table is rebuilt in DDR from the saved snapshots.
  • Real-time operating system architectures for multi-core processor platforms include the symmetric multi-processing (SMP) architecture and the asymmetric multi-processing (AMP) architecture.
  • In an SSD using the AMP system, the entire NAND space of the SSD is divided into multiple partitions; each partition independently maintains an L2P table and is configured with a separate JM and a WM responsible for data reads/writes and L2P table updates.
  • The JM and the WM reside on the same core.
  • Within a partition, any change made by a manager such as the WM to any entry of the L2P table generates an L2P delta (LBA, PBA); the generated L2P deltas can be sent to the JM in order, and the JM can also receive the L2P deltas in order, ensuring that the delta data can be stored in order.
  • To overcome at least one aspect of the above problems, an embodiment of the present disclosure proposes a method for saving an L2P table, including the following steps: in response to detecting that the L2P table is updated, acquiring the LBA whose mapping relationship is updated in the L2P table; sending the LBA to a log manager; in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data; and saving the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
  • In some embodiments, the method further includes: creating a recovery table; and recording snapshot information of the snapshot into the recovery table.
  • In some embodiments, sending the LBA to the log manager further includes: storing the LBA into an LBA cache; and sending the LBA cache to the log manager in response to the number of the LBAs in the LBA cache reaching a threshold.
  • In some embodiments, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data further includes: the log manager sequentially reading the LBAs in the LBA cache to obtain the corresponding PBAs through the L2P table, thereby obtaining a plurality of pieces of incremental data, and storing the plurality of pieces of incremental data into a write cache in order.
  • In some embodiments, saving the incremental data and several pieces of basic data currently to be saved in the L2P table into the non-volatile memory as a snapshot further includes: in response to the incremental data filling up a first preset space of the write cache, acquiring several pieces of basic data currently to be saved in the L2P table and filling them into a second preset space of the write cache; and, after filling in header information of the write cache, saving the write cache into the non-volatile memory as a snapshot.
  • the amount of incremental data stored in the first preset space of the write cache is an integer multiple of the number of LBAs stored in the LBA cache.
  • In some embodiments, the method further includes: in response to receiving an instruction to restore the L2P table, acquiring a plurality of pieces of snapshot data from the non-volatile memory according to the snapshot information recorded in the recovery table; and restoring each piece of incremental data in each piece of snapshot data to the L2P table according to the order of the snapshot information in the recovery table and the order of the plurality of pieces of incremental data in each piece of snapshot data.
  • An embodiment of the present disclosure also provides a system for saving an L2P table, including:
  • an acquisition module configured to, in response to detecting that the L2P table is updated, acquire the LBA whose mapping relationship is updated in the L2P table;
  • a sending module configured to send the LBA to a log manager;
  • an assembling module configured to, in response to the log manager receiving the LBA, read the corresponding PBA in the L2P table according to the received LBA so as to assemble the LBA and the corresponding PBA into incremental data;
  • a saving module configured to save the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
  • An embodiment of the present disclosure also provides a computer device, including: at least one processor; and a memory storing computer-readable instructions executable on the processor, wherein when the processor executes the instructions, it performs the steps of any one of the above methods for saving an L2P table.
  • An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-readable instructions; when the computer-readable instructions are executed by a processor, the processor performs the steps of any one of the above methods for saving an L2P table.
  • The present disclosure has at least one of the following beneficial technical effects: aiming at the problem that, when an SSD of an SMP system saves L2P updates, the traditional approach of sending deltas (LBA, PBA) causes out-of-order storage of delta data, or a mutual-exclusion approach degrades performance, the present disclosure sends only the LBA of the L2P update to the JM; the JM accesses the L2P table with the received LBA to obtain the PBA, assembles a delta (LBA, PBA), and then saves it, which effectively solves the problem of out-of-order delta storage.
  • FIG. 1 is a schematic flowchart of a method for saving an L2P table according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a storage system for an L2P table provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • In the embodiments of the present disclosure: WM: Write Manager; JM: Journal Manager; LBA: Logical Block Address; PBA: Physical Block Address; L2P: Logical-to-Physical table, i.e., the mapping table from logical blocks to physical blocks, which is the FTL mapping table.
  • For a certain LBA, at a first time it is updated by WM0, producing delta0 (LBA, PBA0), and at a second time it is updated by WM1, producing delta1 (LBA, PBA1). If there is no further update to this LBA, the entry corresponding to the LBA in the L2P table is PBA1. According to the order-preservation requirement of JM storage, the JM needs to store delta0 (LBA, PBA0) in the delta buffer first, and then store delta1 (LBA, PBA1).
  • If the deltas are sent through messages, that is, WM0 sends delta0 (LBA, PBA0) and WM1 sends delta1 (LBA, PBA1), then because the JM polls in turn whether there are messages in the inbound queues from each core to the core where the JM resides, there is no guarantee that the JM will receive the delta0 message before the delta1 message.
  • If the JM receives and saves the delta1 message first, and then receives and saves the delta0 message, then during power-on recovery the JM will patch delta1 first and delta0 afterwards, so that in the finally restored L2P table the entry corresponding to the LBA is PBA0. This is inconsistent with the LBA entry in the L2P table being PBA1 before power-off. This is the out-of-order storage problem.
  • According to one aspect of the present disclosure, an embodiment proposes a method for saving an L2P table, as shown in FIG. 1, which may include the steps: S1, in response to detecting that the L2P table is updated, acquiring the LBA whose mapping relationship is updated in the L2P table; S2, sending the LBA to a log manager; S3, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data; and S4, saving the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
  • As used herein, the journal manager (JM) is a software module in the SSD firmware that is mainly responsible for managing metadata (such as the L2P table).
  • In some embodiments, the method further includes: creating a recovery table; and recording snapshot information of the snapshot into the recovery table.
  • After a snapshot is flushed to NAND, a mapping relationship is produced, namely the relationship between the JM_LBA and the SLC NAND address where the snapshot is stored; this mapping relationship is recorded in the recovery table (also called the First table), a linear table indexed by JM_LBA with the SLC NAND address as its content.
  • Specifically, a recovery table, for example the First table, can be created and maintained in the JM: whenever the JM flushes a snapshot to NAND, a new mapping relationship between the JM_LBA and the SLC NAND address storing the snapshot is formed and recorded in the First table. When the JM restores the L2P table at power-on, it obtains the snapshot data to be restored from the non-volatile memory according to the mapping relationships recorded in the First table. The snapshot information may include the SLC NAND address.
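  • As a sketch only (the slot count and the 32-bit NAND address width are assumptions, not from the disclosure), the First table can be pictured as another linear table, updated once per flushed snapshot:

```c
#include <stdint.h>

#define JM_LBA_COUNT 4096   /* assumed number of snapshot slots in the First table */

/* First table: indexed by JM_LBA, content is the SLC NAND address of the snapshot. */
static uint32_t first_table[JM_LBA_COUNT];

/* Called each time the JM flushes one snapshot (one write buffer) to NAND. */
static void first_table_record(uint32_t jm_lba, uint32_t slc_nand_addr)
{
    first_table[jm_lba] = slc_nand_addr;
}
```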
  • In some embodiments, step S2, sending the LBA to the log manager, further includes: storing the LBA into an LBA cache; and sending the LBA cache to the log manager in response to the number of the LBAs in the LBA cache reaching a threshold.
  • Specifically, an SSD based on an SMP system can be configured with 4 or 8 WMs, which are distributed on different cores and are responsible for maintaining and updating the L2P table.
  • One JM is configured, which resides on a fixed core and is responsible for storing the L2P table update data (deltas).
  • Each WM can request an LBA cache (LBA buffer) from the buffer manager, which is used to store the LBAs for which L2P table updates occur; each LBA occupies 4 bytes.
  • After a WM obtains an LBA buffer, it stores the LBAs generated by L2P table updates into the LBA buffer in turn.
  • When the LBA buffer is full, the number of LBAs stored in the LBA buffer and the buffer address are sent to the JM through a message.
  • After the JM receives the message, it obtains the buffer address, reads the LBAs in the buffer in turn, accesses the L2P table to obtain the PBAs, assembles deltas (LBA, PBA), and saves them.
  • Having the WM send L2P table update data to the JM through the LBA buffer, rather than sending a message for every single L2P update, greatly reduces the frequency and overhead of message interaction, which benefits SSD performance.
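  • The WM-side batching described above might look like the minimal sketch below; it is an illustrative assumption, not the disclosed implementation (the 512-entry threshold and the msg_send_to_jm stub are hypothetical, and real firmware would hand the buffer address to the JM through its own inter-core messaging):

```c
#include <stdint.h>
#include <stdio.h>

#define LBA_BUF_ENTRIES 512   /* assumed threshold; each stored LBA occupies 4 bytes */

struct lba_buffer {
    uint32_t lba[LBA_BUF_ENTRIES];
    uint32_t count;
};

/* Stub for the inter-core message that hands a full LBA buffer to the JM. */
static void msg_send_to_jm(struct lba_buffer *buf)
{
    printf("to JM: %u LBAs at buffer %p\n", (unsigned)buf->count, (void *)buf);
}

/* Called by a WM whenever it updates an L2P entry: only the LBA is recorded. */
static void wm_on_l2p_update(struct lba_buffer *buf, uint32_t lba)
{
    buf->lba[buf->count++] = lba;
    if (buf->count == LBA_BUF_ENTRIES) {
        msg_send_to_jm(buf);   /* send the LBA count and the buffer address */
        buf->count = 0;        /* real firmware would request a fresh buffer here */
    }
}
```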
  • In some embodiments, step S3, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data, further includes:
  • the log manager sequentially reading the LBAs in the LBA cache to obtain the corresponding PBAs through the L2P table, thereby obtaining a plurality of pieces of incremental data, and storing the plurality of pieces of incremental data into a write cache in order.
  • Storing the plurality of pieces of incremental data into the write cache in order may mean storing the incremental data in the order in which the LBAs are read from the LBA buffer and saving them into the JM write buffer.
  • the amount of incremental data stored in the first preset space of the write cache is an integer multiple of the number of LBAs stored in the LBA cache.
  • Specifically, the JM is configured with several 16K write buffers for storing the received L2P update data.
  • In some embodiments, the number of deltas stored in a single write buffer is exactly an integer multiple of the number of LBAs stored in a single LBA buffer.
  • Preferably, the number of LBAs needed to fill a single LBA buffer is exactly enough to fill a single JM write buffer.
  • After receiving the LBA buffer, the JM reads the LBAs in turn, obtains the PBAs from the L2P table, assembles deltas (LBA, PBA), and stores them in turn into the delta buffer of the write buffer.
  • Normally, after one LBA buffer has been processed, the delta buffer of one write buffer is full.
  • The JM then releases the LBA buffer so that the WM can continue to use it; the JM continues by filling in the buffer header information and the base data, after which the write buffer can be sent to the NAND controller to be flushed to flash.
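  • On the JM side, processing one received LBA buffer could look like the sketch below, reusing the illustrative types from the earlier sketches; the structure layout is an assumption, and the header/base handling is only hinted at in the comments:

```c
#include <stdint.h>

struct l2p_delta {
    uint32_t lba;
    uint32_t pba;
};

struct jm_write_buffer {
    struct l2p_delta delta[LBA_BUF_ENTRIES];  /* one LBA buffer fills one delta buffer */
    uint32_t ndelta;
};

/* JM handler: walk the LBAs in the order they were stored, look up the current
 * (real-time) PBA in the L2P table, and append the delta to the write buffer. */
static void jm_consume_lba_buffer(struct jm_write_buffer *wb,
                                  const struct lba_buffer *lb)
{
    for (uint32_t i = 0; i < lb->count; i++) {
        uint32_t lba = lb->lba[i];
        wb->delta[wb->ndelta].lba = lba;
        wb->delta[wb->ndelta].pba = l2p_lookup(lba);  /* always the live value */
        wb->ndelta++;
    }
    /* The delta buffer is now normally full: release the LBA buffer to the WM,
     * fill in the header and one segment of base data, then flush to NAND. */
}
```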
  • It should be noted that if different WMs modify the same LBA multiple times, multiple LBAs (as many as there are modifications) are sent to the JM, so that no matter from which WM the JM receives an LBA, it can obtain the real-time PBA through the L2P table; and although it is not certain whether the currently obtained PBA is the latest one, the PBA obtained for the last received LBA must be the latest.
  • For example, at a first time WM0 updates the entry, producing delta0 (LBA1, PBA0), and at a second time WM1 updates it, producing delta1 (LBA1, PBA1). The JM will receive LBA1 twice. After the first reception, because of the different message-sending delays of the WMs, the PBA looked up through LBA1 may be PBA0 or PBA1; however, because the data in the L2P table is updated before the LBA is sent, the PBA that the JM looks up in the L2P table after receiving the LBA for the second time must be PBA1.
  • In this way, when restoring in the order of the snapshot data and the order of the incremental data, the incremental data (LBA1, PBA0) is restored first and the incremental data (LBA1, PBA1) afterwards, which guarantees that in the finally restored L2P table the entry corresponding to the LBA is PBA1, consistent with the LBA entry in the L2P table being PBA1 before power-off.
  • Therefore, the JM does not have the problem of out-of-order storage.
  • In some embodiments, step S4, saving the incremental data and several pieces of basic data currently to be saved in the L2P table into the non-volatile memory as a snapshot, further includes: in response to the incremental data filling up the first preset space of the write cache, acquiring several pieces of basic data currently to be saved in the L2P table and filling them into the second preset space of the write cache; and, after filling in the header information of the write cache, saving the write cache into the non-volatile memory as a snapshot.
  • Specifically, the SSD saves metadata using a snapshot scheme of delta (incremental data) + base (basic data).
  • A delta is incremental data, which is derived from the base data.
  • For example, the entire L2P table in DDR is called the base data.
  • If an entry (LBA0, PBA0) of the L2P table is updated to (LBA0, PBA1), then (LBA0, PBA1) is an L2P-type delta.
  • the reading and writing of NAND is performed in units of pages, and the current single snapshot size is the page size of the SLC mode (in the case of using NAND in the SLC mode, the page size is 16KiB).
  • a single snapshot consists of 3 parts, which are formed in the SSD DDR in the form of write buffers, including header, delta buffer, and base buffer.
  • the entire L2P table is segmented according to the size of the base buffer, and each segment is called "a number of basic data currently to be saved”.
  • The metadata is stored in SLC, and several 16K write buffers are opened in DDR.
  • The first 64 bytes of each write buffer are used for buffer header information, the following delta buffer (the first preset space) is used to cache the generated delta data, and the remaining base buffer (the second preset space), whose size is 16K minus the 64-byte buffer header minus the delta buffer size, is used to store one segment of base data.
  • The complete L2P table can be divided into multiple segments of base data according to the size of the base buffer, while the incremental data relates to the complete L2P table. Therefore, when the incremental data fills up the first preset space, one segment of base data is filled into the base buffer of the current write buffer in a round-robin fashion.
  • The incremental data in the current write buffer may correspond to the base data of other segments, or to the base data in the current write buffer. In this way, the updated L2P data is stored either in the delta buffer or in the base buffer, so that the later the data appears in the recovery table, the newer it is.
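  • Under the sizes stated above (16 KiB SLC page, 64-byte header), the single-snapshot layout might be declared as in this sketch; the split between delta space and base space is an assumption, since the disclosure fixes only the header size and the page size:

```c
#include <stdint.h>

#define SLC_PAGE_SIZE  (16 * 1024)   /* one snapshot = one SLC-mode NAND page          */
#define HDR_SIZE       64            /* buffer header information                       */
#define DELTA_SPACE    (512 * 8)     /* first preset space: 512 assumed 8-byte deltas   */
#define BASE_SPACE     (SLC_PAGE_SIZE - HDR_SIZE - DELTA_SPACE) /* second preset space  */

struct snapshot_page {
    uint8_t header[HDR_SIZE];        /* e.g. which L2P segment the base buffer holds    */
    uint8_t delta_buf[DELTA_SPACE];  /* cached (LBA, PBA) incremental data              */
    uint8_t base_buf[BASE_SPACE];    /* one segment of the base L2P table               */
};

/* The struct mirrors the on-flash layout, so its size must equal one SLC page. */
_Static_assert(sizeof(struct snapshot_page) == SLC_PAGE_SIZE,
               "a snapshot must be exactly one SLC page");
```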
  • In some embodiments, the method further includes: in response to receiving an instruction to restore the L2P table, acquiring a plurality of pieces of snapshot data from the non-volatile memory according to the snapshot information recorded in the recovery table; and restoring each piece of incremental data in each piece of snapshot data to the L2P table according to the order of the snapshot information in the recovery table and the order of the plurality of pieces of incremental data in each piece of snapshot data.
  • Specifically, when the JM restores the L2P table at power-on, it reads the stored 16K data blocks in turn according to the First table (the new mapping relationship formed whenever the JM flushes a 16K write buffer is recorded in the First table, which is maintained internally by the JM). After one 16K block has been restored to DDR, the base data is first moved to its actual position in the L2P table; after the base has been restored, the delta (LBA, PBA) data is read in turn, the entry address in the L2P table is obtained from the LBA, and the PBA is written, completing the patch of a single delta. When all 16K mappings recorded in the First table have been read and the above base and patch processes are finished, the entire L2P table has been constructed in DDR.
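  • Power-on rebuild can then be sketched as below, reusing the illustrative structures from the earlier sketches; nand_read_snapshot() is a hypothetical stand-in for the flash read path, and the per-snapshot metadata fields are assumptions:

```c
#include <stdint.h>
#include <string.h>

struct snapshot_meta {
    uint32_t base_seg;   /* which L2P segment the base buffer holds */
    uint32_t ndelta;     /* number of valid deltas in delta_buf     */
};

/* Hypothetical NAND read: real firmware would issue a flash read command here. */
static void nand_read_snapshot(uint32_t slc_addr, struct snapshot_page *page,
                               struct snapshot_meta *meta)
{
    (void)slc_addr;
    memset(page, 0, sizeof(*page));
    meta->base_seg = 0;
    meta->ndelta = 0;
}

/* Rebuild the L2P table in DDR in First-table order: base first, then patch deltas. */
static void jm_rebuild_l2p(void)
{
    struct snapshot_page page;
    struct snapshot_meta meta;
    const uint32_t seg_entries = BASE_SPACE / sizeof(uint32_t);

    for (uint32_t jm_lba = 0; jm_lba < JM_LBA_COUNT; jm_lba++) {
        nand_read_snapshot(first_table[jm_lba], &page, &meta);

        /* 1. Move the base segment to its actual position in the L2P table. */
        memcpy(&l2p_table[meta.base_seg * seg_entries], page.base_buf, BASE_SPACE);

        /* 2. Patch the deltas in stored order, so later (newer) mappings win. */
        const struct l2p_delta *d = (const struct l2p_delta *)page.delta_buf;
        for (uint32_t i = 0; i < meta.ndelta; i++)
            l2p_table[d[i].lba] = d[i].pba;
    }
}
```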
  • An embodiment of the present disclosure further provides a system 400 for saving an L2P table, as shown in FIG. 2, including:
  • an acquisition module 401 configured to, in response to detecting that the L2P table is updated, acquire the LBA whose mapping relationship is updated in the L2P table;
  • a sending module 402 configured to send the LBA to a log manager;
  • an assembling module 403 configured to, in response to the log manager receiving the LBA, read the corresponding PBA in the L2P table according to the received LBA so as to assemble the LBA and the corresponding PBA into incremental data;
  • a saving module 404 configured to save the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
  • An embodiment of the present disclosure further provides a computer device 501, including: at least one processor 520; and a memory 510 storing computer-readable instructions 511 executable on the processor; when the processor 520 executes the readable instructions, the processor 520 performs the steps of any of the above methods for saving an L2P table.
  • An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium 601; the non-volatile computer-readable storage medium 601 stores computer-readable instructions 610, and when the computer-readable instructions 610 are executed by a processor, the steps of any of the above methods for saving an L2P table are performed.
  • It should be understood that the computer-readable storage medium (e.g., memory) herein may be volatile memory or non-volatile memory, or may include both volatile memory and non-volatile memory.

Abstract

The present disclosure provides a method for saving an L2P table, including the following steps: in response to detecting that the L2P table is updated, acquiring the LBA whose mapping relationship is updated in the L2P table; sending the LBA to a log manager; in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data; and saving the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot. The present disclosure also provides a system, a computer device, and a readable storage medium.

Description

Method, system, device, and medium for saving an L2P table
Cross-Reference to Related Application
This application claims the priority of and the benefit of the Chinese patent application No. 202011437985.6, entitled "Method, system, device, and medium for saving an L2P table" and filed with the China Patent Office on December 11, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of solid-state drives, and in particular to a method, a system, a device, and a storage medium for saving an L2P table.
Background
The FTL mapping table, also known as the L2P table, implements the mapping from the host logical address space (LBA) to the flash physical address space (PBA) and is one of the core pieces of metadata managed by an SSD. The L2P table is a linear table indexed by LBA with PBA as its content. While working, the SSD flushes each piece of user data to the flash address space and records the mapping from the logical address to the physical address in the L2P table. When the host reads the data, it sends the LBA of the data to the SSD; the SSD can look up the L2P table by the LBA, find the corresponding flash physical address PBA, read the data stored in the flash, and return it to the user.
While the SSD is running, the L2P table resides in DDR so that the SSD controller can access it quickly. When the SSD powers off, the L2P table can be flushed from DDR to flash; when the SSD powers on, it is read from flash and loaded into a specified DDR region. The mapping table is 1/1024 of the SSD capacity. Because the L2P table of an SSD is large (for example, the L2P table of a 4 TB drive is about 4 GB), even though enterprise SSDs are equipped with energy-storage capacitors that can supply power for tens of milliseconds after an abnormal power failure, this is not enough to flush the entire L2P table to flash. Therefore, SSDs generally save the L2P table with snapshots: during operation, a snapshot of the L2P table is flushed to flash at certain intervals or when certain conditions are met, so that on an abnormal power failure only the few unsaved snapshots need to be flushed. After the SSD powers on, the entire L2P table is rebuilt in DDR from the saved snapshots.
Real-time operating system architectures for multi-core processor platforms include the symmetric multi-processing (SMP) architecture and the asymmetric multi-processing (AMP) architecture.
In an SSD using the AMP system, the entire NAND space of the SSD is divided into multiple partitions. Each partition independently maintains an L2P table and is configured with a separate JM and a WM responsible for data reads/writes and L2P table updates; the JM and the WM reside on the same core. Within a partition, any change made by a manager such as the WM to any entry of the L2P table generates an L2P delta (LBA, PBA); the generated L2P deltas can be sent to the JM in order, and the JM can also receive the L2P deltas in order, ensuring that the delta data is stored in order.
However, in an SSD using the SMP system, partitions are no longer strictly configured, in pursuit of better read/write performance. Multiple WMs on different cores access the L2P table in parallel and send L2P table updates to the JM through messages. This raises the question of how the JM can store L2P deltas in order. In existing solutions, this can be solved with a mutex: the 16K write buffer is treated as a shared resource visible to all WMs; after a WM updates the L2P table, it must apply for the mutex before writing the delta into the write buffer, can write the delta only after obtaining the lock, and releases the mutex after writing; if it cannot obtain the lock, it must wait until it does. However, evaluation and measurement show that the system overhead of multiple WMs using a mutex is too large and directly hurts read/write performance.
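To make the overhead concrete, the following minimal sketch (an illustrative assumption, not taken from the disclosure; pthread primitives merely stand in for the firmware's own locking, and 2048 eight-byte deltas are assumed to fill a 16K buffer) shows the mutex-protected delta store that this disclosure seeks to avoid:

```c
#include <pthread.h>
#include <stdint.h>

/* Prior-art sketch: every WM serializes on one lock to append its delta
 * (LBA, PBA) into the shared 16K write buffer. */
struct delta { uint32_t lba, pba; };
struct wbuf  { struct delta d[2048]; uint32_t n; };

static struct wbuf     shared_wbuf;
static pthread_mutex_t wbuf_lock = PTHREAD_MUTEX_INITIALIZER;

static void wm_store_delta_with_mutex(uint32_t lba, uint32_t pba)
{
    pthread_mutex_lock(&wbuf_lock);     /* wait until the mutex is obtained */
    shared_wbuf.d[shared_wbuf.n].lba = lba;
    shared_wbuf.d[shared_wbuf.n].pba = pba;
    shared_wbuf.n++;
    pthread_mutex_unlock(&wbuf_lock);   /* release after the delta is written */
}
```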
Summary
In view of this, to overcome at least one aspect of the above problems, an embodiment of the present disclosure proposes a method for saving an L2P table, including the following steps:
in response to detecting that the L2P table is updated, acquiring the LBA whose mapping relationship is updated in the L2P table;
sending the LBA to a log manager;
in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data; and
saving the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
In some embodiments, the method further includes:
creating a recovery table; and
recording snapshot information of the snapshot into the recovery table.
In some embodiments, sending the LBA to the log manager further includes:
storing the LBA into an LBA cache; and
sending the LBA cache to the log manager in response to the number of the LBAs in the LBA cache reaching a threshold.
In some embodiments, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data further includes:
the log manager sequentially reading the LBAs in the LBA cache to obtain the corresponding PBAs through the L2P table, thereby obtaining a plurality of pieces of incremental data, and storing the plurality of pieces of incremental data into a write cache in order.
In some embodiments, saving the incremental data and several pieces of basic data currently to be saved in the L2P table into the non-volatile memory as a snapshot further includes:
in response to the incremental data filling up a first preset space of the write cache, acquiring several pieces of basic data currently to be saved in the L2P table and filling them into a second preset space of the write cache; and
after filling in header information of the write cache, saving the write cache into the non-volatile memory as a snapshot.
In some embodiments, the number of pieces of incremental data stored in the first preset space of the write cache is an integer multiple of the number of LBAs stored in the LBA cache.
In some embodiments, the method further includes:
in response to receiving an instruction to restore the L2P table, acquiring a plurality of pieces of snapshot data from the non-volatile memory according to the snapshot information recorded in the recovery table; and
restoring each piece of incremental data in each piece of snapshot data to the L2P table according to the order of the snapshot information in the recovery table and the order of the plurality of pieces of incremental data in each piece of snapshot data.
Based on the same inventive concept, according to another aspect of the present disclosure, an embodiment of the present disclosure further provides a system for saving an L2P table, including:
an acquisition module configured to, in response to detecting that the L2P table is updated, acquire the LBA whose mapping relationship is updated in the L2P table;
a sending module configured to send the LBA to a log manager;
an assembling module configured to, in response to the log manager receiving the LBA, read the corresponding PBA in the L2P table according to the received LBA so as to assemble the LBA and the corresponding PBA into incremental data; and
a saving module configured to save the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
Based on the same inventive concept, according to another aspect of the present disclosure, an embodiment of the present disclosure further provides a computer device, including:
at least one processor; and
a memory storing computer-readable instructions executable on the processor, wherein when the processor executes the instructions, it performs the steps of any one of the above methods for saving an L2P table.
Based on the same inventive concept, according to another aspect of the present disclosure, an embodiment of the present disclosure further provides a computer-readable storage medium storing computer-readable instructions, wherein when the computer-readable instructions are executed by a processor, the processor performs the steps of any one of the above methods for saving an L2P table.
The present disclosure has at least one of the following beneficial technical effects: aiming at the problem that, when an SSD of an SMP system saves L2P updates, the traditional approach of sending deltas (LBA, PBA) causes out-of-order storage of delta data, or a mutual-exclusion approach degrades performance, the present disclosure proposes sending only the LBA of the L2P update to the JM; the JM accesses the L2P table with the received LBA to obtain the PBA, assembles a delta (LBA, PBA), and then saves it, which effectively solves the problem of out-of-order delta storage. With the LBA-only approach, the PBA that the JM obtains from the L2P table according to the LBA is always the real-time value; as long as the WM does not miss sending any LBA, the JM has no out-of-order storage problem.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other embodiments from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a method for saving an L2P table according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a system for saving an L2P table according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the embodiments of the present disclosure are further described in detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present disclosure are intended to distinguish two entities with the same name that are not identical, or two non-identical parameters; "first" and "second" are merely for convenience of expression and should not be construed as limiting the embodiments of the present disclosure, which will not be explained one by one in the subsequent embodiments.
In the embodiments of the present disclosure: WM: Write Manager; JM: Journal Manager; LBA: Logical Block Address; PBA: Physical Block Address; L2P: Logical-to-Physical table, i.e., the mapping table from logical blocks to physical blocks, which is the FTL mapping table.
In the embodiments of the present disclosure, for a certain LBA, at a first time it is updated by WM0, producing delta0 (LBA, PBA0), and at a second time it is updated by WM1, producing delta1 (LBA, PBA1). If there is no further update to this LBA, the entry corresponding to the LBA in the L2P table is PBA1. According to the order-preservation requirement of JM storage, the JM needs to store delta0 (LBA, PBA0) into the delta buffer first, and then store delta1 (LBA, PBA1). If the deltas are sent through messages, i.e., WM0 sends delta0 (LBA, PBA0) and WM1 sends delta1 (LBA, PBA1), then because the JM polls in turn whether there are messages in the inbound queues from each core to the core where the JM currently resides, there is no guarantee that the JM will receive the delta0 message before the delta1 message. If the JM first receives and saves the delta1 message and then receives and saves the delta0 message, then during power-on recovery the JM will patch delta1 first and delta0 afterwards, so that in the finally restored L2P table the entry corresponding to the LBA is PBA0, which is inconsistent with the LBA entry in the L2P table being PBA1 before power-off. This is the out-of-order storage problem.
According to one aspect of the present disclosure, an embodiment of the present disclosure proposes a method for saving an L2P table, as shown in FIG. 1, which may include the following steps:
S1, in response to detecting that the L2P table is updated, acquiring the LBA whose mapping relationship is updated in the L2P table;
S2, sending the LBA to a log manager;
S3, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data; and
S4, saving the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
As used herein, the journal manager (Journal Manager, JM for short) is a software module in the SSD firmware that is mainly responsible for managing metadata (such as the L2P table).
Aiming at the problem that, when an SSD of an SMP system saves L2P updates, the traditional approach of sending deltas (LBA, PBA) causes out-of-order storage of delta data, or a mutual-exclusion approach degrades performance, the present disclosure proposes sending only the LBA of the L2P update to the JM; the JM accesses the L2P table with the received LBA to obtain the PBA, assembles a delta (LBA, PBA), and then saves it, which effectively solves the problem of out-of-order delta storage. With the LBA-only approach, the PBA that the JM obtains from the L2P table according to the LBA is always the real-time value; as long as the WM does not miss sending any LBA, the JM has no out-of-order storage problem.
In some embodiments, the method further includes:
creating a recovery table; and
recording snapshot information of the snapshot into the recovery table.
After a snapshot is flushed to NAND, a mapping relationship is produced, namely the relationship between the JM_LBA and the SLC NAND address where the snapshot is stored; this mapping relationship is recorded in the recovery table (also called the First table, a linear table indexed by JM_LBA with the SLC NAND address as its content).
Specifically, a recovery table, for example the First table, can be created and maintained in the JM. Whenever the JM flushes a snapshot to NAND, a new mapping relationship is formed, namely the relationship between the JM_LBA and the SLC NAND address where the snapshot is stored, and this mapping relationship is recorded in the First table. The First table is a linear table indexed by JM_LBA with the SLC NAND address as its content. When the JM restores the L2P table at power-on, it obtains the snapshot data to be restored from the non-volatile memory according to the mapping relationships recorded in the First table. The snapshot information may include the SLC NAND address.
In some embodiments, step S2, sending the LBA to the log manager, further includes:
storing the LBA into an LBA cache; and
sending the LBA cache to the log manager in response to the number of the LBAs in the LBA cache reaching a threshold.
Specifically, an SSD based on an SMP system may be configured with 4 or 8 WMs, distributed on different cores and responsible for maintaining and updating the L2P table, and with one JM, residing on a fixed core and responsible for storing the L2P table update data (deltas). Each WM can request an LBA cache (LBA buffer) from the buffer manager, which is used to store the LBAs for which L2P table updates occur, each LBA occupying 4 bytes. After a WM obtains an LBA buffer, it stores the LBAs generated by L2P table updates into the LBA buffer in turn. When the LBA buffer is full, the number of LBAs stored in the LBA buffer and the buffer address are sent to the JM through a message. After receiving the message, the JM obtains the buffer address, reads the LBAs in the buffer in turn, accesses the L2P table to obtain the PBAs, assembles deltas (LBA, PBA), and then saves them. Having the WM send L2P table update data to the JM through the LBA buffer, rather than sending a message for every single L2P update, greatly reduces the frequency and overhead of message interaction, which benefits SSD performance.
In some embodiments, step S3, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data, further includes:
the log manager sequentially reading the LBAs in the LBA cache to obtain the corresponding PBAs through the L2P table, thereby obtaining a plurality of pieces of incremental data, and storing the plurality of pieces of incremental data into a write cache in order. Storing the plurality of pieces of incremental data into the write cache in order may mean storing the incremental data in the order in which the LBAs are read from the LBA buffer and saving them into the JM write buffer.
In some embodiments, the number of pieces of incremental data stored in the first preset space of the write cache is an integer multiple of the number of LBAs stored in the LBA cache.
Specifically, the JM is configured with several 16K write caches (write buffers) for storing the received L2P update data. In some embodiments, the number of deltas stored in a single write buffer is exactly an integer multiple of the number of LBAs stored in a single LBA buffer. Preferably, the number of LBAs needed to fill a single LBA buffer is exactly enough to fill a single JM write buffer. After receiving an LBA buffer, the JM reads the LBAs in turn, obtains the PBAs from the L2P table, assembles deltas (LBA, PBA), and stores them in turn into the delta buffer of the write buffer. Normally, after one LBA buffer has been processed, the delta buffer of one write buffer is full. The JM releases the LBA buffer so that the WM can continue to use it; the JM then fills in the buffer header information and the base data, after which the write buffer can be sent to the NAND controller to be flushed to flash.
It should be noted that if different WMs modify the same LBA multiple times, multiple LBAs (as many as there are modifications) are sent to the JM. In this way, no matter from which WM the JM receives an LBA, it can obtain the real-time PBA through the L2P table; and although it is not certain whether the currently obtained PBA is the latest one, the PBA obtained for the last received LBA must be the latest. For example, at a first time WM0 updates the entry, producing delta0 (LBA1, PBA0), and at a second time WM1 updates it, producing delta1 (LBA1, PBA1). The JM will receive LBA1 twice. After the first reception, because of the different message-sending delays of the WMs, the PBA looked up through LBA1 may be PBA0 or PBA1; however, because the data in the L2P table is updated before the LBA is sent, the PBA that the JM looks up in the L2P table after receiving the LBA for the second time must be PBA1. In this way, when restoring in the order of the snapshot data and the order of the incremental data, the incremental data (LBA1, PBA0) is restored first and the incremental data (LBA1, PBA1) afterwards, which guarantees that in the finally restored L2P table the entry corresponding to the LBA is PBA1, consistent with the LBA entry in the L2P table being PBA1 before power-off; the JM has no out-of-order storage problem.
In some embodiments, step S4, saving the incremental data and several pieces of basic data currently to be saved in the L2P table into the non-volatile memory as a snapshot, further includes:
in response to the incremental data filling up a first preset space of the write cache, acquiring several pieces of basic data currently to be saved in the L2P table and filling them into a second preset space of the write cache; and
after filling in header information of the write cache, saving the write cache into the non-volatile memory as a snapshot.
Specifically, the SSD saves metadata using a snapshot scheme of delta (incremental data) + base (basic data). A delta is incremental data, derived from the base data. For example, the entire L2P table in DDR is called the base data. If an entry (LBA0, PBA0) of the L2P table is updated to (LBA0, PBA1), then (LBA0, PBA1) is an L2P-type delta. NAND is read and written in units of pages, and the current size of a single snapshot is the page size in SLC mode (when NAND is used in SLC mode, the page size is 16 KiB). A single snapshot consists of three parts, constructed in the SSD DDR in the form of a write buffer: a header, a delta buffer, and a base buffer. The entire L2P table is divided into segments according to the size of the base buffer, and each segment is called "several pieces of basic data currently to be saved". The metadata is stored in SLC, and several 16K write buffers are opened in DDR; the first 64 bytes of each write buffer are used for buffer header information, the following delta buffer (the first preset space) is used to cache the generated delta data, and the remaining base buffer (the second preset space), whose size is 16K minus the 64-byte buffer header minus the delta buffer size, is used to store one segment of base data.
It should be noted that the complete L2P table can be divided into multiple segments of base data according to the size of the base buffer, while the incremental data relates to the complete L2P table. Therefore, when the incremental data fills up the first preset space, one segment of base data is filled into the base buffer of the current write buffer in a round-robin fashion; the incremental data in the current write buffer may correspond to base data of other segments, or to the base data in the current write buffer. In this way, the updated L2P data is stored either in the delta buffer or in the base buffer, so that the later the data appears in the recovery table, the newer it is.
In some embodiments, the method further includes:
in response to receiving an instruction to restore the L2P table, acquiring a plurality of pieces of snapshot data from the non-volatile memory according to the snapshot information recorded in the recovery table; and
restoring each piece of incremental data in each piece of snapshot data to the L2P table according to the order of the snapshot information in the recovery table and the order of the plurality of pieces of incremental data in each piece of snapshot data.
Specifically, when the JM restores the L2P table at power-on, it reads the stored 16K data blocks in turn according to the First table (the new mapping relationship formed when the JM flushes a 16K write buffer is recorded in the First table, which is maintained internally by the JM). After one 16K block has been restored to DDR, the base data is first moved to its actual position in the L2P table; after the base has been restored, the delta (LBA, PBA) data is read in turn, the entry address in the L2P table is obtained from the LBA, and the PBA is written, completing the patch of a single delta. When all 16K mappings recorded in the First table have been read and the above base and patch processes are finished, the entire L2P table has been constructed in DDR.
Aiming at the problem that, when an SSD of an SMP system saves L2P updates, the traditional approach of sending deltas (LBA, PBA) causes out-of-order storage of delta data, or a mutual-exclusion approach degrades performance, the present disclosure proposes sending only the LBA of the L2P update to the JM; the JM accesses the L2P table with the received LBA to obtain the PBA, assembles a delta (LBA, PBA), and then saves it, which effectively solves the problem of out-of-order delta storage. With the LBA-only approach, the PBA that the JM obtains from the L2P table according to the LBA is always the real-time value; as long as the WM does not miss sending any LBA, the JM has no out-of-order storage problem.
Based on the same inventive concept, according to another aspect of the present disclosure, an embodiment of the present disclosure further provides a system 400 for saving an L2P table, as shown in FIG. 2, including:
an acquisition module 401 configured to, in response to detecting that the L2P table is updated, acquire the LBA whose mapping relationship is updated in the L2P table;
a sending module 402 configured to send the LBA to a log manager;
an assembling module 403 configured to, in response to the log manager receiving the LBA, read the corresponding PBA in the L2P table according to the received LBA so as to assemble the LBA and the corresponding PBA into incremental data; and
a saving module 404 configured to save the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
Based on the same inventive concept, according to another aspect of the present disclosure, as shown in FIG. 3, an embodiment of the present disclosure further provides a computer device 501, including:
at least one processor 520; and
a memory 510 storing computer-readable instructions 511 executable on the processor; when the processor 520 executes the readable instructions, it performs the steps of any one of the above methods for saving an L2P table.
Based on the same inventive concept, according to another aspect of the present disclosure, as shown in FIG. 4, an embodiment of the present disclosure further provides a non-volatile computer-readable storage medium 601 storing computer-readable instructions 610; when the computer-readable instructions 610 are executed by a processor, the steps of any one of the above methods for saving an L2P table are performed.
Finally, it should be noted that those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through computer-readable instructions; the readable instructions can be stored in a computer-readable storage medium, and when executed, they may include the processes of the embodiments of the above methods.
In addition, it should be understood that the computer-readable storage medium (e.g., memory) herein may be volatile memory or non-volatile memory, or may include both volatile memory and non-volatile memory.
Those skilled in the art will also understand that the various exemplary logic blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components, blocks, modules, circuits, and steps have been described generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends on the specific application and the design constraints imposed on the overall system. Those skilled in the art may implement the functionality in various ways for each specific application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present disclosure.
The above are exemplary embodiments of the present disclosure, but it should be noted that various changes and modifications can be made without departing from the scope of the embodiments of the present disclosure as defined by the claims. The functions, steps, and/or actions of the method claims according to the disclosed embodiments described herein need not be performed in any particular order. In addition, although elements disclosed in the embodiments of the present disclosure may be described or claimed in the singular, they may also be understood as plural unless explicitly limited to the singular.
It should be understood that, as used herein, the singular form "a/an" is intended to include the plural form as well, unless the context clearly supports an exception. It should also be understood that "and/or" as used herein refers to any and all possible combinations of one or more of the associated listed items.
The serial numbers of the above embodiments of the present disclosure are for description only and do not represent the merits of the embodiments.
Those of ordinary skill in the art can understand that all or part of the steps for implementing the above embodiments can be completed by hardware, or by instructing relevant hardware through computer-readable instructions; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope (including the claims) of the embodiments of the present disclosure is limited to these examples; within the spirit of the embodiments of the present disclosure, technical features in the above embodiments or in different embodiments may also be combined, and there are many other variations of different aspects of the embodiments of the present disclosure as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the embodiments of the present disclosure should be included in the protection scope of the embodiments of the present disclosure.

Claims (10)

  1. A method for saving an L2P table, comprising the following steps:
    detecting an L2P table, and in response to detecting an update of the L2P table, acquiring an LBA whose mapping relationship is updated in the L2P table;
    sending the LBA to a log manager;
    in response to the log manager receiving the LBA, reading a corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data; and
    saving the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
  2. The method according to claim 1, further comprising:
    creating a recovery table; and
    recording snapshot information of the snapshot into the recovery table.
  3. The method according to claim 1, wherein the step of sending the LBA to the log manager further comprises:
    storing the LBA into an LBA cache; and
    sending the LBAs in the LBA cache to the log manager in response to the number of the LBAs in the LBA cache reaching a threshold.
  4. The method according to claim 3, wherein the step of, in response to the log manager receiving the LBA, reading the corresponding PBA in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into incremental data further comprises:
    the log manager sequentially reading the LBAs in the LBA cache to obtain the corresponding PBAs through the L2P table, thereby obtaining a plurality of pieces of incremental data, and storing the plurality of pieces of incremental data into a write cache in order.
  5. The method according to claim 4, wherein the step of saving the incremental data and several pieces of basic data currently to be saved in the L2P table into the non-volatile memory as a snapshot further comprises:
    in response to the incremental data filling up a first preset space of the write cache, acquiring several pieces of basic data currently to be saved in the L2P table and filling them into a second preset space of the write cache; and
    after filling in header information of the write cache, saving the write cache into the non-volatile memory as a snapshot.
  6. The method according to claim 5, wherein the number of pieces of incremental data stored in the first preset space of the write cache is an integer multiple of the number of LBAs stored in the LBA cache.
  7. The method according to claim 2, further comprising:
    in response to receiving an instruction to restore the L2P table, acquiring a plurality of pieces of snapshot data from the non-volatile memory according to the snapshot information recorded in the recovery table; and
    restoring each piece of incremental data in each piece of snapshot data to the L2P table according to the order of the snapshot information in the recovery table and the order of the plurality of pieces of incremental data in each piece of snapshot data.
  8. A system for saving an L2P table, comprising:
    an acquisition module configured to, in response to detecting that the L2P table is updated, acquire an LBA whose mapping relationship is updated in the L2P table;
    a sending module configured to send the LBA to a log manager;
    an assembling module configured to, in response to the log manager receiving the LBA, read a corresponding PBA in the L2P table according to the received LBA so as to assemble the LBA and the corresponding PBA into incremental data; and
    a saving module configured to save the incremental data and several pieces of basic data currently to be saved in the L2P table into a non-volatile memory as a snapshot.
  9. A computer device, comprising:
    at least one processor; and
    a memory storing computer-readable instructions executable on the processor, wherein when the processor executes the computer-readable instructions, it performs the steps of the method according to any one of claims 1-7.
  10. A non-volatile computer-readable storage medium storing computer-readable instructions, wherein when the computer-readable instructions are executed by a processor, the steps of the method according to any one of claims 1-7 are performed.
PCT/CN2021/121914 2020-12-11 2021-09-29 Method, system, device, and medium for saving an L2P table WO2022121471A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/034,541 US20240020240A1 (en) 2020-12-11 2021-09-29 Method for storing l2p table, system, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011437985.6 2020-12-11
CN202011437985.6A CN112631950B (zh) 2020-12-11 2020-12-11 Method, system, device, and medium for saving an L2P table

Publications (1)

Publication Number Publication Date
WO2022121471A1 true WO2022121471A1 (zh) 2022-06-16

Family

ID=75309333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121914 WO2022121471A1 (zh) 2020-12-11 2021-09-29 Method, system, device, and medium for saving an L2P table

Country Status (3)

Country Link
US (1) US20240020240A1 (zh)
CN (1) CN112631950B (zh)
WO (1) WO2022121471A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631950B (zh) * 2020-12-11 2022-07-19 苏州浪潮智能科技有限公司 Method, system, device, and medium for saving an L2P table
CN113254265B (zh) * 2021-05-10 2023-03-14 苏州库瀚信息科技有限公司 Snapshot implementation method based on a solid-state drive, and storage system
CN115543865B (zh) * 2022-11-25 2023-04-11 成都佰维存储科技有限公司 Power-loss protection method and apparatus, readable storage medium, and electronic device
CN116991757B (zh) * 2023-09-26 2023-12-15 四川云海芯科微电子科技有限公司 Method and system for compressing L2P table incremental data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107402724A (zh) * 2017-07-31 2017-11-28 郑州云海信息技术有限公司 Method and system for saving journal metadata in an SSD
CN107422992A (zh) * 2017-07-31 2017-12-01 郑州云海信息技术有限公司 Method and system for saving a journal during SSD operation
CN109213690A (zh) * 2018-09-21 2019-01-15 浪潮电子信息产业股份有限公司 Method for rebuilding an L2P table and related apparatus
US20190018601A1 (en) * 2017-07-11 2019-01-17 Western Digital Technologies, Inc. Bitmap Processing for Log-Structured Data Store
CN110647295A (zh) * 2019-09-12 2020-01-03 苏州浪潮智能科技有限公司 Method, system, device, and storage medium for shortening SSD power-on recovery time
CN112631950A (zh) * 2020-12-11 2021-04-09 苏州浪潮智能科技有限公司 Method, system, device, and medium for saving an L2P table

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111581126B (zh) * 2020-05-08 2023-08-01 苏州浪潮智能科技有限公司 SSD-based log data saving method, apparatus, device, and medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190018601A1 (en) * 2017-07-11 2019-01-17 Western Digital Technologies, Inc. Bitmap Processing for Log-Structured Data Store
CN107402724A (zh) * 2017-07-31 2017-11-28 郑州云海信息技术有限公司 Method and system for saving journal metadata in an SSD
CN107422992A (zh) * 2017-07-31 2017-12-01 郑州云海信息技术有限公司 Method and system for saving a journal during SSD operation
CN109213690A (zh) * 2018-09-21 2019-01-15 浪潮电子信息产业股份有限公司 Method for rebuilding an L2P table and related apparatus
CN110647295A (zh) * 2019-09-12 2020-01-03 苏州浪潮智能科技有限公司 Method, system, device, and storage medium for shortening SSD power-on recovery time
CN112631950A (zh) * 2020-12-11 2021-04-09 苏州浪潮智能科技有限公司 Method, system, device, and medium for saving an L2P table

Also Published As

Publication number Publication date
CN112631950A (zh) 2021-04-09
CN112631950B (zh) 2022-07-19
US20240020240A1 (en) 2024-01-18

Similar Documents

Publication Publication Date Title
WO2022121471A1 (zh) Method, system, device, and medium for saving an L2P table
US9886383B2 (en) Self-journaling and hierarchical consistency for non-volatile storage
CN109643275B (zh) 存储级存储器的磨损均衡设备和方法
US9213633B2 (en) Flash translation layer with lower write amplification
US9996542B2 (en) Cache management in a computerized system
US11175850B2 (en) Selective erasure of data in a SSD
WO2016082524A1 (zh) 一种进行数据存储的方法、装置及系统
TWI645404B (zh) 資料儲存裝置以及非揮發式記憶體操作方法
US20150331624A1 (en) Host-controlled flash translation layer snapshot
US20140344539A1 (en) Managing data in a storage system
US10108503B2 (en) Methods and systems for updating a recovery sequence map
CN105718530B (zh) 文件存储系统及其文件存储控制方法
US20180260319A1 (en) Writing ssd system data
US8370587B2 (en) Memory system storing updated status information and updated address translation information and managing method therefor
US9086991B2 (en) Solid state drive cache recovery in a clustered storage system
KR20170010729A (ko) 비휘발성 메모리의 메타 데이터 관리 방법 및 스토리지 시스템
JP2012512482A (ja) Ssd技術支援のストレージシステムのスナップショット
US10521148B2 (en) Data storage device backup
US10459803B2 (en) Method for management tables recovery
US10289321B1 (en) Bad block table recovery in a solid state drives
KR20170085951A (ko) 버저닝 저장 장치 및 방법
TWI823504B (zh) 非暫態電腦可讀取媒體、儲存裝置、及儲存方法
CN110928890B (zh) 数据存储方法、装置、电子设备及计算机可读存储介质
TWI817638B (zh) 條件更新,延遲查找
JP2013196155A (ja) メモリシステム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902179

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18034541

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21902179

Country of ref document: EP

Kind code of ref document: A1