WO2022134669A1 - Method for accelerating reading of a storage medium, read acceleration hardware module, and memory - Google Patents


Info

Publication number
WO2022134669A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
pma
npa
algorithm
storage medium
Application number
PCT/CN2021/118472
Other languages
English (en)
French (fr)
Inventor
陈祥
曹学明
杨颖
黄朋
杨州
Original Assignee
深圳大普微电子科技有限公司
Application filed by 深圳大普微电子科技有限公司
Publication of WO2022134669A1
Priority to US18/201,754 (published as US20230305956A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F12/023: Free address space management
    • G06F12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246: Memory management in non-volatile memory in block-erasable memory, e.g. flash memory
    • G06F12/0292: User address space allocation using tables or multilevel address translation means
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10: Address translation
    • G06F2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10: Providing a specific technical effect
    • G06F2212/1016: Performance improvement
    • G06F2212/1024: Latency reduction
    • G06F2212/1032: Reliability improvement, data loss prevention, degraded operation etc.
    • G06F2212/72: Details relating to flash memory management
    • G06F2212/7201: Logical to physical mapping or translation of blocks or pages
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to the field of data reading, in particular to a method for accelerating reading of a storage medium, a read acceleration hardware module and a memory.
  • the parsed information (including the LBA (Logical Block Address)) is sent to the FTL (Flash Translation Layer); the FTL converts the LBA information into NPA (Nand Physical Address) information and issues it to the BE (Back End); after receiving the NPA information, the BE reads the data corresponding to the NPA information from the storage medium and sends it back to the FE, which returns the data to the host, completing the data read flow.
  • FTL: Flash Translation Layer
  • NPA: Nand Physical Address
  • BE: Back End
  • the existing FTL suffers from high latency and low efficiency due to its processing method, which reduces read performance.
  • the purpose of the present invention is to provide a method for accelerating reading of a storage medium, a hardware module for reading acceleration and a memory, which abandons the processing method of FTL, and uses an algorithm solidified in hardware to process LBA information to obtain NPA information.
  • the read bandwidth of the host is greatly improved, so that the data read per unit time is significantly increased, thereby greatly improving the read performance.
  • the present invention provides a method for speeding up reading a storage medium, including:
  • the valid PMA information is converted into NPA information, and corresponding data is read from the storage medium of the memory according to the NPA information.
  • the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information includes:
  • the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information also includes:
  • the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information also includes:
  • the process of converting valid PMA information into NPA information based on an address translation algorithm solidified in hardware includes:
  • the PMA information is composed of SuperBlock information, superPage information, and mau information in sequence
  • the NPA information is composed of block information, page information, lun information, ce information, chan information, and mauoff information in sequence
  • the process of converting valid PMA information into NPA information includes:
  • the SuperBlock information is multiplied by the preset coefficient value to obtain the block information of the NPA information;
  • the bit information of the mau information corresponds to the lun information, ce information, chan information, mauoff information.
  • a read acceleration hardware module comprising:
  • the DB processing hardware module is used to trigger the algorithm processing hardware module to process the LBA information to obtain NPA information after receiving the LBA information issued by the FE of the memory;
  • the algorithm processing hardware module solidified with the table lookup algorithm and the address translation algorithm is used to implement the steps of any of the above-mentioned methods for accelerating the reading of the storage medium when the table lookup algorithm and the address translation algorithm are sequentially executed.
  • the DB processing hardware module and the algorithm processing hardware module are integrated in the BE of the memory;
  • an FPH respectively connected to the algorithm processing hardware module and the storage medium of the memory, for reading out corresponding data from the storage medium according to the NPA information and transmitting it to the algorithm processing hardware module;
  • the ADM which is respectively connected with the algorithm processing hardware module and the FE, is used to send back the corresponding data read from the storage medium to the FE.
  • the L2P table, trim table and remap table required by the table lookup algorithm are stored in the DDR.
  • the present invention also provides a memory, including FE, BE, storage medium, DDR, and any of the above read acceleration hardware modules.
  • the invention provides a method for accelerating reading of a storage medium: receiving LBA information issued by the FE of a memory; performing a table lookup operation based on a table-lookup algorithm solidified in hardware, so as to obtain valid PMA information corresponding to the LBA information through the table lookup; and, based on the address translation algorithm solidified in hardware, converting the valid PMA information into NPA information, so that the BE of the memory reads the corresponding data from the storage medium according to the NPA information and sends it back to the FE.
  • this application abandons the FTL processing method and uses algorithms solidified in hardware to process the LBA information into NPA information. Experiments show that the host's read bandwidth can be greatly increased, so that significantly more data is read per unit time, which greatly improves read performance.
  • the present invention also provides a read acceleration hardware module and a memory, which have the same beneficial effects as the above acceleration method.
  • FIG. 1 is a schematic diagram of a data read flow in the prior art;
  • FIG. 2 is a flowchart of a method for accelerating reading of a storage medium provided by an embodiment of the present invention;
  • FIG. 3 is a detailed flowchart of a method for accelerating reading of a storage medium provided by an embodiment of the present invention;
  • FIG. 4 is a diagram of the bit correspondence between PMA information and NPA information provided by an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a read acceleration hardware module according to an embodiment of the present invention.
  • the core of the present invention is to provide a method for accelerating reading of a storage medium, a hardware module for reading acceleration and a memory, which abandons the processing method of FTL, and uses an algorithm solidified in hardware to process LBA information to obtain NPA information.
  • the read bandwidth of the host is greatly improved, so that the data read per unit time is significantly increased, thereby greatly improving the read performance.
  • FIG. 2 is a flowchart of a method for accelerating reading of a storage medium provided by an embodiment of the present invention.
  • the method for speeding up reading a storage medium includes:
  • Step S1 Receive the LBA information delivered by the FE of the memory.
  • RACC: read acceleration hardware module
  • the host sends a read command (containing LBA information) to the FE of the memory; the FE parses the read command, obtains the LBA information, and sends it to the RACC; upon receiving the LBA information issued by the FE of the memory, the RACC starts the LBA information processing flow.
  • Step S2 Perform a table lookup operation based on a table lookup algorithm solidified in the hardware to obtain valid PMA information corresponding to the LBA information.
  • a table-lookup algorithm for obtaining valid PMA (Physical Media Address) information corresponding to LBA information is solidified in the hardware in advance, so after the RACC receives the LBA information issued by the FE of the memory, it performs a table lookup operation based on this algorithm, in order to obtain the valid PMA information corresponding to the LBA information and then enter the subsequent NPA address translation process.
  • PMA: Physical Media Address
  • Step S3 Based on the address conversion algorithm solidified in the hardware, convert the valid PMA information into NPA information, and read the corresponding data from the storage medium of the memory according to the NPA information.
  • an address translation algorithm for converting PMA information into NPA information is also solidified in the RACC hardware in advance; therefore, after obtaining valid PMA information, the RACC converts it into NPA information and then sends the NPA information to the BE of the memory.
  • the BE of the memory reads the corresponding data from the storage medium of the memory according to the NPA information and sends it back to the FE of the memory, so that the FE of the memory returns the data read from the storage medium to the host, thereby completing the data read process.
  • the read command issued by the host to the FE of the memory may also include information such as namespaceId (namespace ID), portId (port ID), and dataFormat (the format of the read data); the FE sends the LBA information together with the namespaceId, portId, and dataFormat to the RACC.
  • after the RACC processes the LBA information into NPA information, it sends the NPA information together with the namespaceId, portId, dataFormat, and other information to the BE of the memory, so that the BE can read the corresponding data from the storage medium according to the NPA information and the namespaceId, portId, and dataFormat, and send it back to the FE; read acceleration thus supports multiple namespaces, multiple dataFormats, and multiple ports at the same time.
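The flow described above can be modeled roughly in software as follows. This is only an illustrative sketch: the structure, function names, and the dictionary-based tables stand in for the hardware blocks and are not from the patent itself.

```python
from dataclasses import dataclass

@dataclass
class ReadCmd:
    # Field names mirror the information listed above; types/widths are assumed.
    lba: int
    namespace_id: int
    port_id: int
    data_format: int

def racc_process(cmd, lba_to_npa):
    """Stand-in for the RACC: map the LBA to an NPA, pass the metadata through."""
    npa = lba_to_npa[cmd.lba]  # real hardware runs the solidified lookup/translation
    return npa, (cmd.namespace_id, cmd.port_id, cmd.data_format)

def be_read(medium, npa, metadata):
    """Stand-in for the BE: fetch the data at the NPA; metadata selects namespace/port/format."""
    return medium[npa]

# usage: a host read of LBA 7 that the tables map to NPA 0x1234
medium = {0x1234: b"payload"}
cmd = ReadCmd(lba=7, namespace_id=0, port_id=1, data_format=0)
npa, meta = racc_process(cmd, {7: 0x1234})
data = be_read(medium, npa, meta)  # the FE would return this to the host
```

The key point the sketch captures is that the namespaceId/portId/dataFormat metadata is carried alongside the NPA all the way to the BE rather than being consumed by the translation step.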
  • the invention provides a method for accelerating reading of a storage medium: receiving LBA information issued by the FE of a memory; performing a table lookup operation based on a table-lookup algorithm solidified in hardware, so as to obtain valid PMA information corresponding to the LBA information through the table lookup; and, based on the address translation algorithm solidified in hardware, converting the valid PMA information into NPA information, so that the BE of the memory reads the corresponding data from the storage medium according to the NPA information and sends it back to the FE.
  • this application abandons the FTL processing method and uses algorithms solidified in hardware to process the LBA information into NPA information. Experiments show that the host's read bandwidth can be greatly increased, so that significantly more data is read per unit time, which greatly improves read performance.
  • FIG. 3 is a specific flowchart of a method for accelerating reading of a storage medium provided by an embodiment of the present invention.
  • the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information includes:
  • the present application is provided with an L2P (Logical to Physical) table for representing the mapping relationship between LBA information and PMA information; that is, the RACC can find the PMA information corresponding to the LBA information by looking up the L2P table.
  • L2P: Logical to Physical
  • a trim table indicates the invalid PMA information corresponding to erased data. Specifically, each bit of the trim table indicates whether a piece of PMA information is valid; for example, "0" indicates that the corresponding PMA information is invalid, and "1" indicates that the corresponding PMA information is valid.
  • the RACC judges, by looking up the trim table, whether the found PMA information exists in the invalid PMA information contained in the trim table. If it does, the found PMA information is invalid and should be filtered out, without entering the subsequent NPA address translation process; if it does not, the found PMA information is valid at the first check and, if there are no other problems, can enter the subsequent NPA address translation process.
  • the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information also includes:
  • the PMA information corresponding to bad data blocks is invalid and should be filtered out, without entering the subsequent NPA address translation process, so this application provides a remap table for data blocks that have gone bad.
  • after the RACC determines by looking up the trim table that the PMA information is valid at the first check, it looks up the remap table to determine whether that PMA information exists in the invalid PMA information corresponding to bad data blocks. If it does, the PMA information is invalid and should be filtered out, without entering the subsequent NPA address translation process; if it does not, the PMA information is still valid at the second check and, if there are no other problems, can enter the subsequent NPA address translation process.
  • the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information also includes:
  • the value of the PMA information has a maximum value; if the value of the PMA information found through the L2P table is greater than this maximum, the found PMA information is abnormal, that is, invalid, and should be filtered out without entering the subsequent NPA address translation process. Therefore, this application reasonably presets the maximum PMA value according to the actual situation.
  • for example, the maximum PMA value may be set to maxU32-5 (maxU32: the 32-bit binary maximum converted to a decimal value; that value minus 5 is used as the maximum PMA value. The 5 reserved values are used for the determination of special PMAs such as UNMAP, UNC, DEBUG, INVALID, and TRIM. It should be noted that the reserved count can be adjusted according to the actual situation).
  • specifically, the RACC determines whether the value of the PMA information that is valid at the second check is less than the preset maximum PMA value. If it is not, that PMA information is invalid and should be filtered out, without entering the subsequent NPA address translation process; if it is, the PMA information is finally valid and can directly enter the subsequent NPA address translation process.
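Putting the three checks together, the lookup stage described above can be sketched as follows. The table representations (a mapping for L2P, a bit-per-PMA flag table for trim, a set for remap) are illustrative assumptions for software, not the patent's actual hardware data layout:

```python
MAX_U32 = 2**32 - 1
PMA_MAX = MAX_U32 - 5  # 5 values reserved for special PMAs (UNMAP, UNC, DEBUG, INVALID, TRIM)

def lookup_valid_pma(lba, l2p, trim_bits, remap_bad):
    """Return the PMA for `lba`, or None if any of the three validity checks fails.

    l2p:       mapping LBA -> PMA                        (L2P table)
    trim_bits: mapping PMA -> 0/1 flag, 0 = trimmed      (trim table, assumed layout)
    remap_bad: set of PMAs on blocks that have gone bad  (remap table, assumed layout)
    """
    pma = l2p[lba]
    if trim_bits[pma] == 0:   # check 1: data at this PMA was erased (trimmed)
        return None
    if pma in remap_bad:      # check 2: PMA lies on a bad (remapped) block
        return None
    if pma >= PMA_MAX:        # check 3: value not below the preset maximum
        return None
    return pma                # finally valid: proceed to address translation
```

A PMA that survives all three filters is the "finally valid" PMA information that enters the address translation stage.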
  • the process of converting valid PMA information into NPA information based on an address translation algorithm solidified in hardware includes:
  • the RACC can convert the valid PMA information into the NPA information according to the correspondence between the bits of the PMA information and the NPA information.
  • the PMA information consists, in sequence, of SuperBlock information, superPage information, and mau information (mau refers to the media AU, i.e. the smallest unit of the medium); the NPA information consists, in sequence, of block information, page information, lun information (lun refers to the logical unit number), ce information (ce refers to chip select), chan information (chan refers to channel), and mauoff information;
  • the process of converting valid PMA information into NPA information includes:
  • the bit information of the mau information corresponds to the lun information, ce information, chan information, and mauoff information of the NPA information.
  • the PMA information is sequentially composed of SuperBlock (super block) information, superPage (super page) information, and mau information
  • the NPA information is sequentially composed of block (block) information, page (page) information, lun information, ce information, chan information, and mauoff information
  • the superPage information value of the PMA information = the page information value of the NPA information; the bits of the mau information of the PMA information correspond to the lun information, ce information, chan information, and mauoff information of the NPA information.
  • in the example of FIG. 4, the mauoff information occupies 3 bits, the ce information and chan information occupy 2 bits each, and the lun information occupies 1 bit (the bits occupied by these fields can be adjusted according to the actual situation):
  • mauoff information value = the value composed of bits 2, 1, and 0 of the mau information;
  • chan information value = the value composed of bits 4 and 3 of the mau information;
  • ce information value = the value composed of bits 6 and 5 of the mau information;
  • lun information value = the value of bit 7 of the mau information.
  • the conversion process of converting PMA information into NPA information is as follows: disassemble the PMA information bit by bit to obtain SuperBlock information, superPage information, and mau information. Multiply the SuperBlock information by the preset coefficient value to obtain the block information of the NPA information. Use the superPage information as the page information of the NPA information.
  • the bit information of the mau information corresponds to the lun information, ce information, chan information, and mauoff information of the NPA information.
  • correspondingly, the mauoff information, chan information, ce information, and lun information of the NPA information are obtained in sequence by shifting: first, the trailing bits of the mau information occupying the same number of bits as the mauoff information are taken as the mauoff information value (as shown in FIG. 4, the mauoff information occupies 3 bits, so bits 2, 1, and 0 of the mau information are taken as the mauoff information value); the mau information is then shifted right to remove those trailing bits (after the shift, the original bits 5, 4, and 3 of the mau information occupy positions 2, 1, and 0); the trailing bits of the shifted mau information occupying the same number of bits as the chan information are taken as the chan information value (as shown in FIG. 4, the chan information occupies 2 bits, so the original bits 4 and 3 of the mau information are taken as the chan information value, and likewise below); the mau information is shifted right again to remove those bits, and the trailing bits occupying the same number of bits as the ce information are taken as the ce information value; the mau information is shifted right once more to remove the ce bits, and the trailing bits occupying the same number of bits as the lun information are taken as the lun information value, thereby obtaining the NPA information.
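The disassembly and shifting steps above can be sketched in software as follows. The mau field widths follow the FIG. 4 example (mauoff 3 bits, chan 2, ce 2, lun 1, so mau occupies 8 bits), while the superPage width and the preset coefficient value are made-up parameters for illustration:

```python
MAU_BITS = 8         # lun(1) | ce(2) | chan(2) | mauoff(3), per the FIG. 4 example
SUPERPAGE_BITS = 10  # width of the superPage field: an assumed value
COEFF = 4            # the "preset coefficient value": an assumed value

def pma_to_npa(pma):
    """Disassemble PMA = SuperBlock | superPage | mau and build the NPA fields."""
    mau = pma & ((1 << MAU_BITS) - 1)            # trailing mau field
    rest = pma >> MAU_BITS
    superpage = rest & ((1 << SUPERPAGE_BITS) - 1)
    superblock = rest >> SUPERPAGE_BITS
    mauoff = mau & 0b111   # bits 2..0 of mau
    mau >>= 3              # shift right to drop the mauoff bits
    chan = mau & 0b11      # originally bits 4..3 of mau
    mau >>= 2              # shift right to drop the chan bits
    ce = mau & 0b11        # originally bits 6..5 of mau
    mau >>= 2              # shift right to drop the ce bits
    lun = mau & 0b1        # originally bit 7 of mau
    return {
        "block": superblock * COEFF,  # SuperBlock times the preset coefficient
        "page": superpage,            # superPage used directly as page
        "lun": lun, "ce": ce, "chan": chan, "mauoff": mauoff,
    }
```

Because every NPA field comes from fixed shifts, masks, and one multiply, the whole translation is a handful of combinational operations, which is what makes it cheap to solidify in hardware.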
  • tests show that the above method of accelerating reading of the storage medium significantly improves the host read bandwidth. In the measured data, the traditional method handles host reads with a measured bandwidth of 2000 KiB/s, while the above accelerated method handles host reads with a measured bandwidth of 3999 KiB/s.
  • FIG. 5 is a schematic structural diagram of a read acceleration hardware module according to an embodiment of the present invention.
  • the read acceleration hardware module includes:
  • the DB processing hardware module 1 is used to trigger the algorithm processing hardware module 2 to process the LBA information to obtain the NPA information after receiving the LBA information issued by the FE of the memory;
  • the algorithm processing hardware module 2 solidified with the table lookup algorithm and the address translation algorithm is used to implement the steps of any of the above-mentioned methods for accelerating the reading of the storage medium when the table lookup algorithm and the address translation algorithm are executed in sequence.
  • specifically, the read acceleration hardware module (referred to as the RACC) of this application includes a DB (DoorBell) processing hardware module 1 and an algorithm processing hardware module 2. After receiving the LBA information issued by the FE of the memory, the DB processing hardware module 1 triggers the algorithm processing hardware module 2 to perform address processing; the algorithm processing hardware module 2 mainly processes the LBA information to obtain the NPA information. For the specific processing principle, please refer to the above embodiments of the method for accelerating reading of the storage medium, which will not be repeated in this application.
  • the DB processing hardware module 1 and the algorithm processing hardware module 2 are integrated in the BE of the memory;
  • the FPH that is respectively connected with the algorithm processing hardware module 2 and the storage medium of the memory is used to read out the corresponding data from the storage medium according to the NPA information and transmit it to the algorithm processing hardware module 2;
  • the ADMs which are respectively connected with the algorithm processing hardware module 2 and the FE, are used to send the corresponding data read out from the storage medium back to the FE.
  • specifically, the RACC of the present application can be integrated into the BE of the memory, and the BE of the memory includes the ADM (Advanced Data Management) and the FPH (Flash Protocol Handler).
  • ADM: Advanced Data Management
  • FPH: Flash Protocol Handler
  • if the FPH finds an UNC (uncorrectable error) in the read data when reading from the storage medium, it will not return the data to the algorithm processing hardware module 2, thereby confirming that the data read is abnormal.
  • UNC: uncorrectable error
  • the L2P table, trim table and remap table required by the table lookup algorithm are stored in the DDR.
  • specifically, the L2P table, trim table, and remap table required by the table-lookup algorithm of the present application can be stored in DDR (Double Data Rate synchronous dynamic random access memory); the algorithm processing hardware module 2 then interacts with the DDR to complete the lookups of the L2P table, trim table, and remap table.
  • DDR: Double Data Rate synchronous dynamic random access memory
  • the present application also provides a memory, including FE, BE, storage medium, DDR, and any of the above read acceleration hardware modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method for accelerating reading of a storage medium, a read acceleration hardware module, and a memory: LBA information issued by the FE of a memory is received; a table lookup operation is performed based on a table-lookup algorithm solidified in hardware, to obtain valid PMA information corresponding to the LBA information; based on an address translation algorithm solidified in hardware, the valid PMA information is converted into NPA information, and the corresponding data is read from the storage medium of the memory according to the NPA information. It can be seen that this application abandons the FTL processing method and uses algorithms solidified in hardware to process the LBA information into NPA information; experiments show that the host's read bandwidth can be greatly increased, so that significantly more data is read per unit time, greatly improving read performance.

Description

Method for accelerating reading of a storage medium, read acceleration hardware module, and memory
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 23, 2020, with application No. 202011539885.4 and entitled "Method for accelerating reading of a storage medium, read acceleration hardware module, and memory" (一种加速读存储介质的方法、读加速硬件模块及存储器), the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of data reading, and in particular to a method for accelerating reading of a storage medium, a read acceleration hardware module, and a memory.
Background
With the development of the big-data era, ever higher data processing speeds are required. Data processing involves a series of operations such as data reading, data scanning, and data analysis. For the data reading stage, the conventional prior-art read flow is shown in Fig. 1: the host issues a read command to the FE (Front End) of the memory; the FE receives and parses the read command and sends the parsed information (including the LBA (Logical Block Address)) to the FTL (Flash Translation Layer); the FTL converts the LBA information into NPA (Nand Physical Address) information and issues it to the BE (Back End); after receiving the NPA information, the BE reads the data corresponding to the NPA information from the storage medium and sends it back to the FE, which returns the data to the host, completing the data read flow. However, the existing FTL suffers from high latency and low efficiency due to its processing method, which reduces read performance.
Therefore, how to provide a solution to the above technical problem is a problem that those skilled in the art currently need to solve.
Summary of the Invention
The purpose of the present invention is to provide a method for accelerating reading of a storage medium, a read acceleration hardware module, and a memory, which abandon the FTL processing method and instead use an algorithm solidified in hardware to process LBA information into NPA information. Experiments show that this greatly increases the host's read bandwidth, so that significantly more data is read per unit time, thereby greatly improving read performance.
To solve the above technical problem, the present invention provides a method for accelerating reading of a storage medium, including:
receiving LBA information issued by the FE of a memory;
performing a table lookup operation based on a table-lookup algorithm solidified in hardware, to obtain valid PMA information corresponding to the LBA information;
based on an address translation algorithm solidified in hardware, converting the valid PMA information into NPA information, and reading the corresponding data from the storage medium of the memory according to the NPA information.
Preferably, the process of performing a table lookup operation based on a table-lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information includes:
obtaining the PMA information corresponding to the LBA information by looking up an L2P table representing the mapping relationship between LBA information and PMA information;
determining, by looking up a trim table representing the invalid PMA information corresponding to erased data, whether the PMA information exists in that invalid PMA information;
if so, determining that the PMA information is invalid;
if not, determining that the PMA information is valid at the first check.
Preferably, the process of performing a table lookup operation based on a table-lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information further includes:
after determining by looking up the trim table that the PMA information is valid at the first check, determining, by looking up a remap table representing bad data blocks, whether the PMA information exists in the invalid PMA information corresponding to bad data blocks;
if so, determining that the PMA information is invalid;
if not, determining that the PMA information is valid at the second check.
Preferably, the process of performing a table lookup operation based on a table-lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information further includes:
after determining by looking up the remap table that the PMA information is valid at the second check, determining whether the value of the PMA information is less than a preset maximum PMA value;
if so, determining that the PMA information is finally valid;
if not, determining that the PMA information is invalid.
Preferably, the process of converting the valid PMA information into NPA information based on an address translation algorithm solidified in hardware includes:
converting the valid PMA information into NPA information according to the bit-by-bit correspondence between the PMA information and the NPA information.
Preferably, the PMA information is composed, in sequence, of SuperBlock information, superPage information, and mau information; the NPA information is composed, in sequence, of block information, page information, lun information, ce information, chan information, and mauoff information;
correspondingly, the process of converting the valid PMA information into NPA information according to the bit-by-bit correspondence between the PMA information and the NPA information includes:
disassembling the PMA information bit by bit to obtain the SuperBlock information, superPage information, and mau information;
multiplying the SuperBlock information by a preset coefficient value to obtain the block information of the NPA information;
using the superPage information as the page information of the NPA information;
according to the bit correspondence between the mau information and the lun, ce, chan, and mauoff information of the NPA information, using the bits of the mau information as the lun information, ce information, chan information, and mauoff information of the NPA information.
To solve the above technical problem, the present invention further provides a read acceleration hardware module, including:
a DB processing hardware module, configured to trigger, after receiving LBA information issued by the FE of a memory, an algorithm processing hardware module to process the LBA information to obtain NPA information;
the algorithm processing hardware module, in which the table-lookup algorithm and the address translation algorithm are solidified, configured to implement the steps of any one of the above methods for accelerating reading of a storage medium when executing the table-lookup algorithm and the address translation algorithm in sequence.
Preferably, the DB processing hardware module and the algorithm processing hardware module are integrated in the BE of the memory;
and the BE includes:
an FPH connected to the algorithm processing hardware module and to the storage medium of the memory respectively, configured to read the corresponding data from the storage medium according to the NPA information and transmit it to the algorithm processing hardware module;
an ADM connected to the algorithm processing hardware module and to the FE respectively, configured to send the corresponding data read from the storage medium back to the FE.
Preferably, the L2P table, trim table, and remap table required by the table-lookup algorithm are stored in DDR.
To solve the above technical problem, the present invention further provides a memory, including an FE, a BE, a storage medium, DDR, and any one of the above read acceleration hardware modules.
The present invention provides a method for accelerating reading of a storage medium: receiving LBA information issued by the FE of a memory; performing a table lookup operation based on a table-lookup algorithm solidified in hardware, so as to obtain valid PMA information corresponding to the LBA information through the table lookup; and, based on an address translation algorithm solidified in hardware, converting the valid PMA information into NPA information, so that the BE of the memory reads the corresponding data from the storage medium according to the NPA information and sends it back to the FE. It can be seen that this application abandons the FTL processing method and uses algorithms solidified in hardware to process the LBA information into NPA information; experiments show that the host's read bandwidth can be greatly increased, so that significantly more data is read per unit time, greatly improving read performance.
The present invention further provides a read acceleration hardware module and a memory, which have the same beneficial effects as the above acceleration method.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required in the prior art and the embodiments are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a data read flow in the prior art;
FIG. 2 is a flowchart of a method for accelerating reading of a storage medium according to an embodiment of the present invention;
FIG. 3 is a detailed flowchart of a method for accelerating reading of a storage medium according to an embodiment of the present invention;
FIG. 4 is a diagram of the bit-wise correspondence between PMA information and NPA information according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a read acceleration hardware module according to an embodiment of the present invention.
Detailed Description of the Embodiments
The core of the present invention is to provide a method for accelerating reading of a storage medium, a read acceleration hardware module, and a memory, which abandon the FTL processing approach and instead process LBA information into NPA information using algorithms hardwired in hardware. Experiments show that this substantially increases the host's read bandwidth, so that significantly more data is read per unit time, greatly improving read performance.
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Please refer to FIG. 2, which is a flowchart of a method for accelerating reading of a storage medium according to an embodiment of the present invention.
The method for accelerating reading of a storage medium comprises:
Step S1: receiving LBA information issued by the FE of a memory.
It should be noted that the method for accelerating reading of a storage medium of the present application is implemented by a read acceleration hardware module (referred to as the RACC).
Specifically, the host issues a read command (containing LBA information) to the FE of the memory; the FE parses the read command issued by the host to obtain the LBA information and sends it to the RACC; upon receiving the LBA information issued by the FE of the memory, the RACC starts the LBA information processing flow.
Step S2: performing a table lookup based on a lookup algorithm hardwired in hardware, to obtain valid PMA information corresponding to the LBA information.
Specifically, a lookup algorithm for obtaining valid PMA (Physical Media Address) information corresponding to LBA information is hardwired in the RACC's hardware in advance. Therefore, after receiving the LBA information issued by the FE of the memory, the RACC performs a table lookup based on this hardwired lookup algorithm, with the aim of obtaining valid PMA information corresponding to the LBA information for the subsequent NPA address translation flow.
Step S3: converting the valid PMA information into NPA information based on an address translation algorithm hardwired in hardware, and reading corresponding data from the storage medium of the memory according to the NPA information.
Specifically, an address translation algorithm for converting PMA information into NPA information is also hardwired in the RACC's hardware in advance. Therefore, after obtaining valid PMA information, the RACC converts it into NPA information based on this hardwired address translation algorithm and then sends the NPA information to the BE of the memory. The BE reads the corresponding data from the storage medium of the memory according to the NPA information and returns it to the FE of the memory, which returns the data read from the storage medium to the host, completing the data read flow.
In addition, the read command issued by the host to the FE of the memory may further contain information such as namespaceId (namespace ID), portId (port ID), and dataFormat (format of the data to be read); the FE sends the LBA information together with the namespaceId, portId, dataFormat, and other information to the RACC. After processing the LBA information into NPA information, the RACC sends the NPA information together with the namespaceId, portId, dataFormat, and other information to the BE of the memory, so that the BE reads the corresponding data from the storage medium according to the NPA information and this accompanying information and returns it to the FE. Read acceleration thus simultaneously supports multiple namespaces, multiple dataFormats, and multiple ports.
The present invention provides a method for accelerating reading of a storage medium: receiving LBA information issued by the FE of a memory; performing a table lookup based on a lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information; and converting the valid PMA information into NPA information based on an address translation algorithm hardwired in hardware, so that the BE of the memory reads corresponding data from the storage medium of the memory according to the NPA information and returns it to the FE. As can be seen, the present application abandons the FTL processing approach and processes LBA information into NPA information using algorithms hardwired in hardware; experiments show that this substantially increases the host's read bandwidth, so that significantly more data is read per unit time, greatly improving read performance.
Based on the above embodiments:
Please refer to FIG. 3, which is a detailed flowchart of a method for accelerating reading of a storage medium according to an embodiment of the present invention.
In an optional embodiment, the process of performing a table lookup based on the lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information comprises:
obtaining PMA information corresponding to the LBA information by looking up an L2P table representing the mapping between LBA information and PMA information;
determining, by looking up a trim table representing invalid PMA information corresponding to erased data, whether the PMA information is among the invalid PMA information;
if so, determining that the PMA information is invalid;
if not, determining that the PMA information is initially valid.
Specifically, the present application provides an L2P (Logical to Physical) table representing the mapping between LBA information and PMA information; that is, by looking up the L2P table, the RACC can find the PMA information corresponding to the LBA information.
Meanwhile, considering that data stored in the memory may be erased by the user, the PMA information corresponding to erased data is invalid and should be filtered out rather than entering the subsequent NPA address translation flow. The present application therefore provides a trim table representing invalid PMA information corresponding to erased data. Concretely, one bit of the trim table may indicate whether one piece of PMA information is valid; for example, "0" indicates that the corresponding PMA information is invalid and "1" indicates that it is valid. On this basis, after finding the PMA information corresponding to the LBA information in the L2P table, the RACC determines, by looking up the trim table, whether the found PMA information is among the invalid PMA information contained in the trim table. If it is, the found PMA information is invalid and should be filtered out, not entering the subsequent NPA address translation flow; if it is not, the found PMA information is initially valid and, barring other problems, may enter the subsequent NPA address translation flow.
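The one-bit-per-PMA trim table described above can be sketched as a plain bitmap test. This is an illustrative sketch rather than the patent's hardware logic; the function name and the byte-array layout (least significant bit first within each byte) are assumptions.

```python
# Hypothetical sketch of the trim-table check: one bit per PMA,
# 0 = PMA invalid (data erased), 1 = PMA valid, as described above.

def pma_valid_in_trim(trim_table: bytes, pma: int) -> bool:
    """Return True if the trim-table bit for this PMA is 1 (valid)."""
    byte_index, bit_index = divmod(pma, 8)
    return (trim_table[byte_index] >> bit_index) & 1 == 1

# Example: an 8-entry trim table in which only PMA 3 has been trimmed.
trim = bytes([0b11110111])
assert pma_valid_in_trim(trim, 2)       # untouched PMA: still valid
assert not pma_valid_in_trim(trim, 3)   # erased PMA: filtered out
```

A bitmap keeps the table compact, which matters when it is held in DDR and consulted by hardware on every read.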
In an optional embodiment, the process of performing a table lookup based on the lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information further comprises:
after determining, by looking up the trim table, that the PMA information is initially valid, determining, by looking up a remap table representing bad data blocks, whether the PMA information is among the invalid PMA information corresponding to the bad data blocks;
if so, determining that the PMA information is invalid;
if not, determining that the PMA information is secondarily valid.
Further, considering that a data block used to store data in the memory may go bad, the PMA information corresponding to a bad data block is invalid and should be filtered out rather than entering the subsequent NPA address translation flow. The present application therefore provides a remap table representing bad data blocks. On this basis, after determining via the trim table that the PMA information is initially valid, the RACC determines, by looking up the remap table, whether the initially valid PMA information is among the invalid PMA information corresponding to bad data blocks. If it is, the initially valid PMA information is in fact invalid and should be filtered out, not entering the subsequent NPA address translation flow; if it is not, the initially valid PMA information remains valid and, barring other problems, may enter the subsequent NPA address translation flow.
In an optional embodiment, the process of performing a table lookup based on the lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information further comprises:
after determining, by looking up the remap table, that the PMA information is secondarily valid, determining whether the value of the PMA information is less than a preset maximum PMA value;
if so, determining that the PMA information is finally valid;
if not, determining that the PMA information is invalid.
Further, the value of the PMA information has a maximum. If the value of the PMA information found via the L2P table exceeds this maximum, the found PMA information is abnormal, i.e., invalid, and should be filtered out rather than entering the subsequent NPA address translation flow. The present application therefore sets a reasonable maximum PMA value in advance according to actual conditions. For example, when the PMA information is 32 bits wide, the maximum PMA value is set to maxU32-5 (maxU32: the maximum 32-bit binary value expressed in decimal; that value minus 5 serves as the maximum PMA value; the 5 reserved values are used to identify special PMAs such as UNMAP, UNC, DEBUG, INVALID, and TRIM; it should be noted that the number of reserved values can be adjusted according to actual conditions). On this basis, after determining via the remap table that the PMA information is secondarily valid, the RACC determines whether the value of the secondarily valid PMA information is less than the preset maximum PMA value. If it is not, the secondarily valid PMA information is in fact invalid and should be filtered out, not entering the subsequent NPA address translation flow; if it is, the secondarily valid PMA information remains valid and may directly enter the subsequent NPA address translation flow.
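The full three-stage filter (trim table, then remap table, then the preset maximum PMA value) can be put together in one sketch. This is a hypothetical software model, not the hardwired logic: the function name and the use of Python sets in place of the DDR-resident tables are assumptions; the maxU32-5 limit follows the 32-bit example above.

```python
# Hypothetical model of the three-stage PMA validity check described above.

MAX_U32 = (1 << 32) - 1
MAX_PMA = MAX_U32 - 5  # 5 values reserved for UNMAP, UNC, DEBUG, INVALID, TRIM

def lookup_valid_pma(lba, l2p_table, trimmed_pmas, bad_block_pmas):
    """Return the PMA for this LBA, or None if it must be filtered out."""
    pma = l2p_table[lba]                 # L2P lookup: LBA -> PMA
    if pma in trimmed_pmas:              # stage 1: erased data (trim table)
        return None
    if pma in bad_block_pmas:            # stage 2: bad data blocks (remap table)
        return None
    if pma >= MAX_PMA:                   # stage 3: abnormal / special PMA value
        return None
    return pma                           # finally valid

l2p = {0: 100, 1: 200, 2: MAX_U32}
assert lookup_valid_pma(0, l2p, {200}, set()) == 100   # passes all stages
assert lookup_valid_pma(1, l2p, {200}, set()) is None  # trimmed
assert lookup_valid_pma(2, l2p, set(), set()) is None  # exceeds MAX_PMA
```

Only a PMA that survives all three stages proceeds to the NPA address translation flow.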
In an optional embodiment, the process of converting the valid PMA information into NPA information based on the address translation algorithm hardwired in hardware comprises:
converting the valid PMA information into NPA information according to the bit-wise correspondence between PMA information and NPA information.
Specifically, since there is a certain correspondence between the bits of the PMA information and those of the NPA information, the RACC can convert the valid PMA information into NPA information according to this bit-wise correspondence.
In an optional embodiment, the PMA information consists, in order, of SuperBlock information, superPage information, and mau information (mau refers to media AU, the smallest unit of the medium); the NPA information consists, in order, of block information, page information, lun information (lun refers to the logical unit number), ce information (ce refers to chip-select information), chan information (chan refers to the channel), and mauoff information;
correspondingly, the process of converting the valid PMA information into NPA information according to the bit-wise correspondence between PMA information and NPA information comprises:
splitting the PMA information bit-wise to obtain the SuperBlock information, the superPage information, and the mau information;
multiplying the SuperBlock information by a preset coefficient value to obtain the block information of the NPA information;
taking the superPage information as the page information of the NPA information;
taking the bits of the mau information as the lun information, ce information, chan information, and mauoff information of the NPA information, according to the bit-wise correspondence between the mau information and the lun information, ce information, chan information, and mauoff information of the NPA information.
Specifically, as shown in FIG. 4, the PMA information consists, in order, of SuperBlock (super block) information, superPage (super page) information, and mau information, and the NPA information consists, in order, of block information, page information, lun information, ce information, chan information, and mauoff information. Their bits correspond as follows: the SuperBlock value of the PMA information × the preset coefficient value = the block value of the NPA information (the preset coefficient value is related to the NAND die; multiplying by it converts the SuperBlockId in the PMA into the BlockId of the actual physical location in the NAND, and it differs between dies); the superPage value of the PMA information = the page value of the NPA information; and the bits of the mau information of the PMA information correspond to the lun, ce, chan, and mauoff information of the NPA information. As shown in FIG. 4, the mauoff information occupies 3 bits, the ce information and chan information each occupy 2 bits, and the lun information occupies 1 bit (the numbers of bits occupied by these fields are not limited thereto and can be configured according to the actual die). Then the mauoff value = the value formed by bits 2, 1, and 0 of the mau information; the chan value = the value formed by bits 4 and 3; the ce value = the value formed by bits 6 and 5; and the lun value = the value of bit 7 of the mau information.
On this basis, the flow of converting the PMA information into NPA information is as follows. The PMA information is split bit-wise into the SuperBlock information, superPage information, and mau information. The SuperBlock information is multiplied by the preset coefficient value to obtain the block information of the NPA information. The superPage information is taken as the page information of the NPA information. According to the bit-wise correspondence between the mau information and the lun, ce, chan, and mauoff information of the NPA information, the bits of the mau information are taken as the lun, ce, chan, and mauoff information of the NPA information. Concretely, the mauoff, chan, ce, and lun information can be obtained in sequence by shifting: the lowest bits of the mau information, as many as the mauoff field occupies, are taken as the mauoff value (as shown in FIG. 4, mauoff occupies 3 bits, so bits 2, 1, and 0 of the mau information are taken as the mauoff value); the mau information is then shifted right to remove those bits (as shown in FIG. 4, bits 2, 1, and 0 are removed, so that the former bits 5, 4, and 3 take their place), and the lowest bits of the shifted mau information, as many as the chan field occupies, are taken as the chan value (chan occupies 2 bits, so the now-lowest bits 4 and 3 are taken as the chan value; the subsequent steps follow the same principle and are not described in detail again); the mau information is shifted right again to remove the chan bits, and the lowest bits, as many as the ce field occupies, are taken as the ce value; the mau information is shifted right once more to remove the ce bits, and the lowest bits, as many as the lun field occupies, are taken as the lun value, thereby obtaining the NPA information.
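The mask-and-shift conversion above can be sketched in a few lines. This is an illustrative model only: the field widths follow the FIG. 4 example (mauoff 3 bits, chan 2, ce 2, lun 1), while the superPage width and the coefficient value are hypothetical placeholders, since the real values depend on the NAND die.

```python
# Sketch of the shift-based PMA -> NPA conversion described above.
# Field widths per the FIG. 4 example; SUPERPAGE_BITS and BLOCK_COEFF
# are assumed values for illustration only.

MAUOFF_BITS, CHAN_BITS, CE_BITS, LUN_BITS = 3, 2, 2, 1
MAU_BITS = MAUOFF_BITS + CHAN_BITS + CE_BITS + LUN_BITS  # 8 bits in total
SUPERPAGE_BITS = 8   # hypothetical width of the superPage field
BLOCK_COEFF = 4      # hypothetical preset coefficient value (die-dependent)

def pma_to_npa(pma: int) -> dict:
    # Split the PMA bit-wise into SuperBlock / superPage / mau.
    mau = pma & ((1 << MAU_BITS) - 1)
    superpage = (pma >> MAU_BITS) & ((1 << SUPERPAGE_BITS) - 1)
    superblock = pma >> (MAU_BITS + SUPERPAGE_BITS)

    npa = {}
    npa["block"] = superblock * BLOCK_COEFF  # SuperBlockId -> physical BlockId
    npa["page"] = superpage                  # superPage taken as-is
    # Peel the fields off the low end of mau, shifting right after each one.
    npa["mauoff"] = mau & ((1 << MAUOFF_BITS) - 1); mau >>= MAUOFF_BITS
    npa["chan"]   = mau & ((1 << CHAN_BITS) - 1);   mau >>= CHAN_BITS
    npa["ce"]     = mau & ((1 << CE_BITS) - 1);     mau >>= CE_BITS
    npa["lun"]    = mau & ((1 << LUN_BITS) - 1)
    return npa

# mau = 0b1_01_10_011: lun=1, ce=0b01, chan=0b10, mauoff=0b011.
npa = pma_to_npa((5 << 16) | (7 << 8) | 0b10110011)
assert npa == {"block": 20, "page": 7, "mauoff": 3, "chan": 2, "ce": 1, "lun": 1}
```

Because every step is a mask, shift, or multiply, the same conversion maps directly onto combinational hardware logic.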
In summary, compared with the conventional approach, the above method for accelerating reading of a storage medium significantly increases the host read bandwidth. With the CPU at 5M, the measured data are: the conventional approach handles host reads at a measured bandwidth of 2000 KiB/s, while the above method handles host reads at a measured bandwidth of 3999 KiB/s.
Please refer to FIG. 5, which is a schematic structural diagram of a read acceleration hardware module according to an embodiment of the present invention.
The read acceleration hardware module comprises:
a DB processing hardware module 1, configured to, upon receiving LBA information issued by the FE of a memory, trigger an algorithm processing hardware module 2 to process the LBA information into NPA information;
the algorithm processing hardware module 2, hardwired with a lookup algorithm and an address translation algorithm, configured to implement the steps of any of the above methods for accelerating reading of a storage medium when executing the lookup algorithm and the address translation algorithm in sequence.
Specifically, the read acceleration hardware module (referred to as the RACC) of the present application comprises the DB (DoorBell) processing hardware module 1 and the algorithm processing hardware module 2. Upon receiving the LBA information issued by the FE of the memory, the DB processing hardware module 1 triggers the algorithm processing hardware module 2 to carry out the address processing work; the algorithm processing hardware module 2 mainly processes the LBA information into NPA information. For the specific processing principle, refer to the above embodiments of the method for accelerating reading of a storage medium, which are not repeated here.
In an optional embodiment, the DB processing hardware module 1 and the algorithm processing hardware module 2 are integrated in the BE of the memory;
and the BE comprises:
an FPH connected respectively to the algorithm processing hardware module 2 and the storage medium of the memory, configured to read corresponding data from the storage medium according to the NPA information and transfer it to the algorithm processing hardware module 2;
an ADM connected respectively to the algorithm processing hardware module 2 and the FE, configured to return the corresponding data read from the storage medium to the FE.
Specifically, as shown in FIG. 5, the RACC of the present application may be integrated in the BE of the memory. The BE comprises an ADM (Advanced Data Management) and an FPH (Flash Protocol Handler); the FPH in the BE reads the corresponding data from the storage medium according to the NPA information and transfers it to the algorithm processing hardware module 2, and the ADM in the BE returns the data read from the storage medium to the FE of the memory, so that the FE returns the data read from the storage medium to the host.
In addition, if the FPH, when reading data from the storage medium, finds that the read data has a UNC (uncorrectable error), it does not return the data to the algorithm processing hardware module 2, and a data read exception is confirmed. Besides this, an unmap error may also occur, i.e., no data has ever been written to the location of the storage medium corresponding to the translated NPA information, so the data read flow cannot proceed and a data read exception is confirmed.
In an optional embodiment, the L2P table, trim table, and remap table required by the lookup algorithm are stored in DDR.
Specifically, the L2P table, trim table, and remap table required by the lookup algorithm of the present application may be stored in DDR (Double Data Rate synchronous dynamic random-access memory); the algorithm processing hardware module 2 then interacts with the DDR to complete the lookup of the L2P table, trim table, and remap table.
The present application further provides a memory, comprising an FE, a BE, a storage medium, DDR, and any of the above read acceleration hardware modules.
For an introduction to the memory provided by the present application, refer to the above embodiments of the read acceleration hardware module, which are not repeated here.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above description of the disclosed embodiments enables a person skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. A method for accelerating reading of a storage medium, characterized by comprising:
    receiving LBA information issued by the FE of a memory;
    performing a table lookup based on a lookup algorithm hardwired in hardware, to obtain valid PMA information corresponding to the LBA information;
    converting the valid PMA information into NPA information based on an address translation algorithm hardwired in hardware, and reading corresponding data from the storage medium of the memory according to the NPA information.
  2. The method for accelerating reading of a storage medium according to claim 1, characterized in that the process of performing a table lookup based on the lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information comprises:
    obtaining PMA information corresponding to the LBA information by looking up an L2P table representing the mapping between LBA information and PMA information;
    determining, by looking up a trim table representing invalid PMA information corresponding to erased data, whether the PMA information is among the invalid PMA information;
    if so, determining that the PMA information is invalid;
    if not, determining that the PMA information is initially valid.
  3. The method for accelerating reading of a storage medium according to claim 2, characterized in that the process of performing a table lookup based on the lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information further comprises:
    after determining, by looking up the trim table, that the PMA information is initially valid, determining, by looking up a remap table representing bad data blocks, whether the PMA information is among the invalid PMA information corresponding to the bad data blocks;
    if so, determining that the PMA information is invalid;
    if not, determining that the PMA information is secondarily valid.
  4. The method for accelerating reading of a storage medium according to claim 3, characterized in that the process of performing a table lookup based on the lookup algorithm hardwired in hardware to obtain valid PMA information corresponding to the LBA information further comprises:
    after determining, by looking up the remap table, that the PMA information is secondarily valid, determining whether the value of the PMA information is less than a preset maximum PMA value;
    if so, determining that the PMA information is finally valid;
    if not, determining that the PMA information is invalid.
  5. The method for accelerating reading of a storage medium according to claim 1, characterized in that the process of converting the valid PMA information into NPA information based on the address translation algorithm hardwired in hardware comprises:
    converting the valid PMA information into NPA information according to the bit-wise correspondence between PMA information and NPA information.
  6. The method for accelerating reading of a storage medium according to claim 5, characterized in that the PMA information consists, in order, of SuperBlock information, superPage information, and mau information; the NPA information consists, in order, of block information, page information, lun information, ce information, chan information, and mauoff information;
    correspondingly, the process of converting the valid PMA information into NPA information according to the bit-wise correspondence between PMA information and NPA information comprises:
    splitting the PMA information bit-wise to obtain the SuperBlock information, the superPage information, and the mau information;
    multiplying the SuperBlock information by a preset coefficient value to obtain the block information of the NPA information;
    taking the superPage information as the page information of the NPA information;
    taking the bits of the mau information as the lun information, ce information, chan information, and mauoff information of the NPA information, according to the bit-wise correspondence between the mau information and the lun information, ce information, chan information, and mauoff information of the NPA information.
  7. A read acceleration hardware module, characterized by comprising:
    a DB processing hardware module, configured to, upon receiving LBA information issued by the FE of a memory, trigger an algorithm processing hardware module to process the LBA information into NPA information;
    the algorithm processing hardware module, hardwired with a lookup algorithm and an address translation algorithm, configured to implement the steps of the method for accelerating reading of a storage medium according to any one of claims 1-6 when executing the lookup algorithm and the address translation algorithm in sequence.
  8. The read acceleration hardware module according to claim 7, characterized in that the DB processing hardware module and the algorithm processing hardware module are integrated in the BE of the memory;
    and the BE comprises:
    an FPH connected respectively to the algorithm processing hardware module and the storage medium of the memory, configured to read corresponding data from the storage medium according to the NPA information and transfer it to the algorithm processing hardware module;
    an ADM connected respectively to the algorithm processing hardware module and the FE, configured to return the corresponding data read from the storage medium to the FE.
  9. The read acceleration hardware module according to claim 7, characterized in that the L2P table, trim table, and remap table required by the lookup algorithm are stored in DDR.
  10. A memory, characterized by comprising an FE, a BE, a storage medium, DDR, and the read acceleration hardware module according to any one of claims 7-9.
PCT/CN2021/118472 2020-12-23 2021-09-15 Method for accelerating reading of storage medium, read acceleration hardware module, and memory WO2022134669A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/201,754 US20230305956A1 (en) 2020-12-23 2023-05-24 Method for accelerating reading of storage medium, read acceleration hardware module, and memory

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011539885.4A CN112559392B (zh) 2020-12-23 2020-12-23 Method for accelerating reading of storage medium, read acceleration hardware module, and memory
CN202011539885.4 2020-12-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/201,754 Continuation US20230305956A1 (en) 2020-12-23 2023-05-24 Method for accelerating reading of storage medium, read acceleration hardware module, and memory

Publications (1)

Publication Number Publication Date
WO2022134669A1 true WO2022134669A1 (zh) 2022-06-30

Family

ID=75032275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/118472 WO2022134669A1 (zh) 2020-12-23 2021-09-15 一种加速读存储介质的方法、读加速硬件模块及存储器

Country Status (3)

Country Link
US (1) US20230305956A1 (zh)
CN (1) CN112559392B (zh)
WO (1) WO2022134669A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559392B (zh) * 2020-12-23 2023-08-15 深圳大普微电子科技有限公司 一种加速读存储介质的方法、读加速硬件模块及存储器

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104205059A (zh) * 2012-04-27 2014-12-10 株式会社日立制作所 Storage system and storage control device
US9588904B1 (en) * 2014-09-09 2017-03-07 Radian Memory Systems, Inc. Host apparatus to independently schedule maintenance operations for respective virtual block devices in the flash memory dependent on information received from a memory controller
CN110297780A (zh) * 2018-03-22 2019-10-01 东芝存储器株式会社 Storage device and computer system
CN110308863A (zh) * 2018-03-27 2019-10-08 东芝存储器株式会社 Storage device, computer system, and operation method of storage device
CN112559392A (zh) * 2020-12-23 2021-03-26 深圳大普微电子科技有限公司 Method for accelerating reading of storage medium, read acceleration hardware module, and memory


Also Published As

Publication number Publication date
US20230305956A1 (en) 2023-09-28
CN112559392A (zh) 2021-03-26
CN112559392B (zh) 2023-08-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21908671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21908671

Country of ref document: EP

Kind code of ref document: A1