CN111399750B - Flash memory data writing method and computer readable storage medium - Google Patents

Flash memory data writing method and computer readable storage medium

Info

Publication number
CN111399750B
Authority
CN
China
Prior art keywords
host write
host
queue
write instruction
user data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910220318.3A
Other languages
Chinese (zh)
Other versions
CN111399750A (en)
Inventor
黄国庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Silicon Motion Inc
Original Assignee
Silicon Motion Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Motion Inc filed Critical Silicon Motion Inc
Priority to US16/445,702 priority Critical patent/US11288185B2/en
Publication of CN111399750A publication Critical patent/CN111399750A/en
Priority to US17/667,801 priority patent/US11960396B2/en
Application granted granted Critical
Publication of CN111399750B publication Critical patent/CN111399750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0658Controller construction arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0253Garbage collection, i.e. reclamation of unreferenced memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7205Cleaning, compaction, garbage collection, erase control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7209Validity control, e.g. using flags, time stamps or sequence numbers
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Memory System (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data writing method for a flash memory, executed by a processing unit, comprising the following steps: before performing a portion of a logical-to-physical mapping table update or a garbage collection procedure, determining whether a host write command that requires immediate processing exists in a submission queue; and, when such a host write command exists, executing the host write command in a batch and then performing the portion of the logical-to-physical mapping table update or garbage collection procedure.

Description

Flash memory data writing method and computer readable storage medium
Technical Field
The present invention relates to a memory device, and more particularly, to a data writing method for a flash memory and a computer readable storage medium.
Background
Flash memory is generally classified into NOR flash memory and NAND flash memory. NOR flash memory is a random access device: a host device (Host) may present any address on the NOR flash memory's address pins and promptly obtain the data stored at that address from its data pins. NAND flash memory, in contrast, is not random access but sequential access. The host device cannot access an arbitrary address of the NAND flash memory directly; instead, it must write a sequence of bytes into the NAND flash memory to define the command (Command) type (e.g., read, write, erase) and the address used by the command. The address may point to a page (the smallest data unit for a write operation in flash memory) or a block (the smallest data unit for an erase operation in flash memory).
The latency (Latency) of data writes is one of the important measures of quality of service (Quality of Service, QoS). A typical test first randomly writes 4 KB of data to the storage unit for several hours so that it enters dirty mode (Dirty Mode), then randomly writes 4 KB of data for 180 seconds at queue depths of QD1/QD128 and measures the latency. Because the storage unit is in dirty mode, the NAND flash memory must also schedule time to write the updated logical-to-physical mapping table (Host-to-Flash table, H2F table) from the SRAM or DRAM to the storage unit, so as to reduce the time needed for sudden-power-off recovery (SPO Recovery, SPOR) after a sudden power off (Sudden Power Off, SPO). In addition, when the NAND flash memory is in dirty mode, time must also be scheduled to perform garbage collection (Garbage Collection, GC), so that the storage unit does not run out of space for writing user data. The invention provides a data writing method for a flash memory and a computer program product that satisfy the latency measurement requirement when the storage unit is in dirty mode.
Disclosure of Invention
In view of this, how to alleviate or eliminate the above-mentioned drawbacks of the related art is a real problem to be solved.
The invention provides a data writing method for a flash memory, implemented when a processing unit loads and executes program code of a software or firmware module, comprising the following steps: before performing a portion of a logical-to-physical mapping table update or a garbage collection procedure, determining whether a host write command that requires immediate processing exists in a submission queue; and, when such a host write command exists, executing the host write command in a batch and then performing the portion of the logical-to-physical mapping table update or garbage collection procedure.
The present invention further provides a computer readable storage medium for flash memory data writing, storing a computer program executable by a processing unit. When executed by the processing unit, the computer program implements the following steps: before performing a portion of a logical-to-physical mapping table update or a garbage collection procedure, determining whether a host write command that requires immediate processing exists in a submission queue; and, when such a host write command exists, executing the host write command in a batch and then performing the portion of the logical-to-physical mapping table update or garbage collection procedure.
One advantage of the above embodiments is that the determination prevents some host write commands from waiting excessively long in the submission queue because of a logical-to-physical mapping table update or a garbage collection procedure.
Other advantages of the present invention will be explained in more detail in connection with the following description and accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application.
Fig. 1 is a system architecture diagram of a flash memory according to an embodiment of the invention.
Fig. 2 is a schematic diagram of connection between a flash memory interface and a LUN.
FIG. 3 is a schematic diagram of an instruction queue.
Fig. 4 is a schematic diagram of a flash translation layer (Flash Translation Layer, FTL) architecture.
FIG. 5 is a flow chart of a data writing method of some embodiments.
FIG. 6 is a flow chart of a method for processing a host write command according to an embodiment of the invention.
FIG. 7 is a schematic diagram illustrating the arrival and processing of a host write command according to an embodiment of the present invention.
FIG. 8 is a flowchart of a method for updating a logical-to-physical mapping table (Host-to-Flash table, H2F table) according to an embodiment of the present invention.
Fig. 9 is a physical storage control schematic.
Fig. 10 is a flowchart of a method for executing a garbage collection (Garbage Collection, GC) procedure according to an embodiment of the invention.
[ List of reference numerals ]
100 Electronic device
110 Host device
120, 131 Random access memories
130 Storage device
132 Host interface
133 Processing unit
135 Flash memory controller
137 Flash memory interface
139 LUN
139#0~139#11 LUNs
CH#0 to CH#3 Input/output channels
CE#0 to CE#2 Enable signals
310 Submission queue
330 Completion queue
CQH, CQT, SQH, SQT Pointers
410, 430, 450, 470 Software or firmware modules
S510-S590, S611-S635, S810-S870, S1010-S1070 Method steps
70 Execution of a batch of host write commands
70a Start-of-execution time point
70b, T0, T1, T2, T3 End-of-execution time points
W0-W12 Host write commands
910 H2F table
930 Physical address information
930-0 (Physical) block number
930-1 (Physical) page number and offset
930-2 (Physical) plane number
930-3 Logical unit number
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings. In the drawings, like reference numerals designate identical or similar components or process flows.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification, are taken to specify the presence of stated features, values, method steps, operation processes, components, and/or groups of components, but do not preclude the addition of further features, values, method steps, operation processes, components, and groups of components, or groups of any of the above.
In the present invention, terms such as "first," "second," and "third" are used to modify elements in the claims and are not intended to indicate priority, precedence, or the order of the steps of a method; they are used only to distinguish elements having the same name.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Conversely, when an element is described as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between components may also be interpreted in a similar fashion, such as "between" and "directly between" or "adjacent" and "directly adjacent" or the like.
Reference is made to fig. 1. The electronic device 100 includes a host device 110, a random access memory (Random Access Memory, RAM) 120, and a storage device 130. The host device 110 may establish queues (Queues) according to its requirements. The electronic device 100 may be an electronic product such as a personal computer, a notebook computer (Laptop PC), a tablet computer, a mobile phone, a digital camera, or a digital video camera. A particular portion of the random access memory 120 may be configured as data buffers, queues, and the like. The storage device 130 may include a processing unit 133 and may further include a random access memory 131 to improve the performance of the storage device 130. The processing unit 133 may receive commands from the host device 110 through a host interface (Host Interface) 132 and instruct the flash memory controller 135 accordingly to perform data reads, writes, erases, and so on. Communication protocols such as Universal Flash Storage (UFS), Non-Volatile Memory Express (NVMe), Universal Serial Bus (USB), Advanced Technology Attachment (ATA), Serial Advanced Technology Attachment (SATA), and Peripheral Component Interconnect Express (PCI-E) may be used for communication between the host device 110 and the processing unit 133. Either of the host device 110 and the processing unit 133 may be implemented in a variety of ways, such as using general-purpose hardware (e.g., a single processor, a multiprocessor capable of parallel processing, a graphics processor, or another processor with computing capability) that provides the functionality described below when executing software and/or firmware instructions. The random access memories 120 and 131 may store data needed during execution, such as variables, data tables, and the like.
The logical unit number (Logical Unit Number, LUN) 139 provides a large amount of storage space, typically hundreds of gigabytes or even terabytes, for storing large amounts of user data such as high-resolution pictures, movies, and the like. The LUN 139 contains control circuitry and a memory array (Memory Array), in which the memory cells may be triple-level cells (Triple-Level Cells, TLCs) or quad-level cells (Quad-Level Cells, QLCs). The random access memory 131 may be used to buffer user data that the host device 110 is to write to the LUN 139, user data read from the LUN 139 that is to be returned to the host device 110, and the logical-to-physical mapping table (Logical-to-Physical Mapping Table, L2P table) needed for address lookup. The random access memory 131 may also store data needed during the execution of software and firmware instructions, such as variables, data tables, and the like. The random access memory 131 may include a static random access memory (Static Random Access Memory, SRAM), a dynamic random access memory (Dynamic Random Access Memory, DRAM), or both.
The storage device 130 also includes the flash memory controller 135, the flash memory interface 137, and the LUN 139. The flash memory controller 135 communicates with the LUN 139 through the flash memory interface 137; in particular, a double data rate (Double Data Rate, DDR) communication protocol may be employed, such as the Open NAND Flash Interface (ONFI), DDR Toggle, or another interface. The flash memory controller 135 of the storage device 130 writes user data to a specified address (destination address) in the LUN 139 and reads user data from a specified address (source address) in the LUN 139 through the flash memory interface 137. The flash memory interface 137 uses a plurality of electronic signals to coordinate data and command transfers between the flash memory controller 135 and the LUN 139, including data lines (Data Lines), clock signals (Clock Signals), and control signals (Control Signals). The data lines may be used to transmit commands, addresses, and data to be read or written; the control signal lines may be used to transmit control signals such as Chip Enable (CE), Address Latch Enable (ALE), Command Latch Enable (CLE), Write Enable (WE), and so on. The processing unit 133 and the flash memory controller 135 may exist separately or be integrated into the same chip.
Referring to fig. 2, the flash memory interface 137 may include four input/output channels (I/O channels, hereinafter referred to as channels) ch#0 to ch#3, each channel connecting three LUNs, for example, channel ch#0 connects LUNs 139#0, 139#4, and 139#8. It should be noted that, to meet various system requirements, one skilled in the art may set up multiple channels in flash interface 137 and connect each channel with at least one LUN, and the present invention is not limited thereto. Flash controller 135 may drive flash interface 137 to issue one of enable signals CE#0 to CE#2 to enable LUNs 139#0 to 139#3, 139#4 to 139#7, or 139#8 to 139#11, and then read user data from or write user data to the enabled LUNs in a parallel manner.
Referring to FIG. 3, the command queues may include a submission queue (Submission Queue, SQ) 310 and a completion queue (Completion Queue, CQ) 330 for buffering host commands and completion elements (Completion Elements, CEs), respectively. The submission queue 310 and the completion queue 330 are preferably established in the same device; for example, both may be established in the random access memory 120 on the host side (Host Side), or both in the random access memory 131 of the storage device 130. The submission queue 310 and the completion queue 330 may also be established in different devices. Each of the submission queue 310 and the completion queue 330 comprises a set of multiple entries (Entries). Each entry in the submission queue 310 may store an input/output command (I/O Command), such as an erase, read, or write command. The entries in a set are stored sequentially. The basic operating principle of the set is that an entry is added (which may be referred to as enqueuing) at the end position (such as the location indicated by the pointer SQT or CQT) and an entry is removed (which may be referred to as dequeuing) from the start position (such as the location indicated by the pointer SQH or CQH). That is, the first command added to the submission queue 310 is also the first command to be removed. The host device 110 may write multiple write commands to the submission queue 310, and the processing unit 133 reads (or fetches) the earliest-arriving write command from the submission queue 310 and executes it. After execution of the write command is completed, the processing unit 133 writes a completion element to the completion queue 330, and the host device 110 may read or fetch the completion element to determine the execution result of the write command.
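For illustration, the following C sketch models the enqueue/dequeue discipline described above for a submission queue managed with head (SQH) and tail (SQT) indices; the structure layout, the fixed queue depth, and the helper names are assumptions made for this example and are not taken from the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

#define QUEUE_DEPTH 64u           /* hypothetical queue depth */

typedef struct {
    uint8_t  opcode;              /* e.g., read, write, erase */
    uint64_t lba;                 /* starting logical block address */
    uint32_t length;              /* number of logical blocks */
} io_command_t;                   /* one entry of the submission queue */

typedef struct {
    io_command_t entries[QUEUE_DEPTH];
    uint32_t head;                /* SQH: next entry to fetch (dequeue side) */
    uint32_t tail;                /* SQT: next free slot for the host (enqueue side) */
} submission_queue_t;

static bool sq_is_empty(const submission_queue_t *sq)
{
    return sq->head == sq->tail;
}

/* Host side: add a command at the tail (enqueue). */
static bool sq_push(submission_queue_t *sq, const io_command_t *cmd)
{
    uint32_t next = (sq->tail + 1u) % QUEUE_DEPTH;
    if (next == sq->head)         /* queue full */
        return false;
    sq->entries[sq->tail] = *cmd;
    sq->tail = next;              /* host advances SQT */
    return true;
}

/* Device side: fetch the oldest command at the head (dequeue). */
static bool sq_pop(submission_queue_t *sq, io_command_t *out)
{
    if (sq_is_empty(sq))
        return false;
    *out = sq->entries[sq->head];
    sq->head = (sq->head + 1u) % QUEUE_DEPTH;   /* device advances SQH */
    return true;
}
```

The same first-in, first-out discipline applies to the completion queue 330, with the roles of the producer and consumer reversed.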
Referring to fig. 4, the flash translation layer (Flash Translation Layer, FTL) architecture includes a write command read module 410, a write command execution module 430, an H2F table write module 450, and a garbage collection (Garbage Collection, GC) operation module 470. The function hw_pushiocmdinofrdinfo() may contain the program code of the write command read module 410; when loaded and executed by the processing unit 133, it reads a specified number of host write commands (Host Write Commands) from the submission queue and buffers, in the random access memory 131, the user data to be written to the specific logical addresses (Logical Addresses) indicated by those host write commands. The function ftl_handleprdinfo() may contain the program code of the write command execution module 430; when loaded and executed by the processing unit 133, it writes the user data temporarily stored in the random access memory 131 into the LUN 139 through the flash memory controller 135 and the flash memory interface 137 according to the host write commands, obtains the physical addresses (Physical Addresses) from the information returned by the flash memory controller 135, and then updates the correspondence between the logical addresses and the physical addresses at the appropriate positions of the H2F table in the random access memory 131. The function SaveMap() may contain the program code of the H2F table write module 450; when loaded and executed by the processing unit 133, it writes the updated H2F table to the LUN 139 through the flash memory controller 135 and the flash memory interface 137. When the processing unit 133 loads and executes the GC operation module 470, fragmented valid user data in multiple physical pages is collected and written into new physical pages in the LUN 139 through the flash memory controller 135 and the flash memory interface 137, so that the released physical pages can, after being erased, be used to store other user data.
In some embodiments, the processing unit 133 may implement the method flow shown in FIG. 5 when loading and executing the program code of a control module. When the processing unit 133 detects that the host device 110 has started writing host write commands to the submission queue 310, it may repeatedly execute a loop (steps S510 to S590) until no host write command remains in the submission queue 310 (the "no" path of step S590). In each round (Iteration), the processing unit 133 may execute the write command read module 410, the write command execution module 430, the H2F table write module 450, and the GC operation module 470 in sequence. However, when the H2F table write module 450 or the GC operation module 470 runs for too long, the waiting time of host write commands in the submission queue 310 may become too long to meet the latency requirement of quality of service (Quality of Service, QoS). In addition, the host device 110 may write any number of host write commands to the submission queue 310 at any point in time, while the host interface 132 (which may be referred to as the hardware, HW) can only read up to an upper-limit number of host write commands at a time. If the host device 110 issues more than the upper-limit number of host write commands at once, the host interface 132 can only read the upper-limit number of host write commands for the write command read module 410 to process; the remaining host write commands can only be processed by the write command read module 410 in the next round. Because there is no time information recording when each host write command reached the submission queue 310, the control module (which may be referred to simply as the firmware, FW) cannot know how long a host write command fetched from the hardware has already been delayed.
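A minimal sketch of how the per-round loop of FIG. 5 (steps S510 to S590) could be organized is given below; the function names and signatures are illustrative assumptions standing in for the modules 410, 430, 450, and 470, not the actual firmware interfaces.

```c
/* Illustrative control loop for FIG. 5 (steps S510-S590).
 * The module functions are assumed to exist with these simplified
 * signatures; the real firmware interfaces are not specified here. */
extern void read_host_write_commands(void);    /* write command read module 410 */
extern void execute_host_write_commands(void); /* write command execution module 430 */
extern void save_h2f_table_segment(void);      /* H2F table write module 450 */
extern void run_gc(void);                      /* GC operation module 470 */
extern int  submission_queue_has_commands(void);

void control_loop(void)
{
    do {
        /* Each iteration handles at most an upper-limit number of
         * host write commands because of hardware constraints. */
        read_host_write_commands();
        execute_host_write_commands();
        save_h2f_table_segment();
        run_gc();
    } while (submission_queue_has_commands()); /* step S590 */
}
```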
To supplement the time information of when host write commands arrive at the submission queue 310, in some embodiments, the write command read module 410 may be modified to attach a timestamp to host write commands that newly arrive at the submission queue 310 during the processing of host write commands. Refer to the embodiment of the host write command processing method shown in FIG. 6, which is implemented when the processing unit 133 loads and executes the program code of the write command read module 410. First, a loop is repeatedly executed (steps S611 to S613) to read, in one batch (Batch), all host write commands in the submission queue 310 that need immediate processing. Because of hardware limitations, the processing unit 133 does not read more than the upper-limit number of host write commands per round. In step S611, when the loop is first entered, the processing unit 133 may read, from the random access memory 131, the time information about when host write commands reached the submission queue 310 and determine, according to this time information, which host write commands need immediate processing. The arrival time information for the submission queue 310 may be maintained as shown in Table 1 below:
TABLE 1
Instruction set number | Host write commands | Arrival timestamp
S0 | W0-W4 | T0
S1 | W5-W9 | T1
Each entry in Table 1 is associated with one instruction set and includes an instruction set number, the numbers of the host write commands that the set contains, and the arrival timestamp associated with all of those host write commands. For example, instruction set "S0" contains host write commands "W0" through "W4", and their arrival time at the submission queue 310 is "T0". "W0" through "W4" may also represent the host write commands in entries 0 to 4 of the submission queue 310. The processing unit 133 may use equation (1) to determine whether the host write commands in an instruction set require immediate processing:
Tnow - Ti > Ttr    (1)
where Tnow represents the current time, i represents a positive integer, Ti represents the arrival time of the ith host write command in the submission queue 310, and Ttr represents a threshold. The threshold may be set with reference to the latency requirement; for example, if the latency requirement demands that 99% of host write commands complete in less than 5 milliseconds (ms), the threshold may be set to a value between 4 and 5 ms. When the condition of equation (1) is satisfied, the ith host write command in the submission queue 310 requires immediate processing.
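A minimal sketch of the check in equation (1) is shown below, assuming arrival timestamps are kept per instruction set as in Table 1; the data layout and the millisecond time base are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t first_cmd;     /* e.g., index of "W0" in the submission queue */
    uint32_t last_cmd;      /* e.g., index of "W4" */
    uint64_t arrival_ms;    /* arrival timestamp Ti, in milliseconds */
} cmd_set_t;                /* one row of Table 1 */

/* Returns true when Tnow - Ti > Ttr, i.e. the set needs immediate processing.
 * threshold_ms (Ttr) might be set between 4 and 5 ms for a 5 ms / 99% target. */
static bool needs_immediate_processing(const cmd_set_t *set,
                                       uint64_t now_ms,
                                       uint64_t threshold_ms)
{
    return (now_ms - set->arrival_ms) > threshold_ms;
}
```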
In cache mode (Cache Mode), the processing unit 133 may retrieve each host write command from the submission queue 310 through the host interface 132, read, through the host interface 132, the user data to be written to the LUN 139 from the random access memory 120 according to the address information in the host write command, and store the user data in the random access memory 131. Since the host write command is regarded as completed once the user data is stored in the random access memory 131, the processing unit 133 can write the completion element (Completion Element, CE) corresponding to the host write command to the completion queue 330 through the host interface 132. Thereafter, the processing unit 133 may schedule, at an appropriate time, execution of the program code of the write command execution module 430 for writing the user data buffered in the random access memory 131 to the LUN 139 through the flash memory controller 135 and the flash memory interface 137.
In non-cache mode (Non-cache Mode), or when the storage device 130 does not configure storage space for temporarily storing user data, the processing unit 133 may, after obtaining one or more host write commands and the user data to be written through the host interface 132, jump directly to executing the program code of the write command execution module 430 for writing the user data into the LUN 139 through the flash memory interface 137. After the LUN 139 has been written successfully, the processing unit 133 may switch back to executing the program code of the write command read module 410 for writing the completion element(s) corresponding to the host write command(s) to the completion queue 330. In some embodiments, the write command read module 410 and the write command execution module 430 may be integrated into a single module, and are not limited to the FTL architecture.
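The following sketch illustrates cache-mode handling of a single host write command as described above (fetch the command, buffer the user data in the random access memory 131, then post a completion element); every helper function is an assumed interface rather than the patent's actual firmware API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative cache-mode handling of one host write command. */
typedef struct { uint64_t lba; uint32_t length; uint64_t host_buf; } host_write_cmd_t;

extern bool  host_if_fetch_cmd(host_write_cmd_t *cmd);        /* from submission queue 310 */
extern void *device_ram_alloc(uint32_t bytes);                /* buffer in RAM 131 */
extern void  host_if_read_user_data(uint64_t host_buf, void *dst, uint32_t bytes);
extern void  host_if_post_completion(const host_write_cmd_t *cmd); /* to completion queue 330 */

static void handle_host_write_cached(void)
{
    host_write_cmd_t cmd;
    if (!host_if_fetch_cmd(&cmd))
        return;

    uint32_t bytes = cmd.length * 512u;            /* 512-byte logical blocks */
    void *buf = device_ram_alloc(bytes);
    host_if_read_user_data(cmd.host_buf, buf, bytes);

    /* The command is treated as complete once the data sits in RAM 131;
     * the actual flush to the LUN is scheduled later by module 430. */
    host_if_post_completion(&cmd);
}
```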
After the loop completes, the processing unit 133 obtains, from the random access memory 131, a timestamp Tpre representing the end of reading the previous batch of host write commands (step S631); updates the arrival time information in the random access memory 131 by adding records, stamped with Tpre, for the host write commands newly present in the submission queue 310 (step S633); and updates Tpre to a timestamp representing the current time, for reference when the next batch of host write commands is processed (step S635).
The following example helps describe the process flow of FIG. 6. Referring to FIG. 7, the execution of the write command read module 410 for the previous batch ends at time point T2, and the execution 70 of the write command read module 410 for the current batch begins at time point 70a and ends at time point T3 (70b). At time point 70a, the random access memory 131 stores the end-of-execution timestamp Tpre of the previous batch of host write commands as time point T2, as well as the time information about when host write commands "W0" through "W9" arrived at the submission queue 310, as shown in Table 1. Assume that instruction set "S0" (i.e., host write commands "W0" through "W4") satisfies the condition of equation (1) and therefore needs immediate processing. The processing unit 133 then reads host write commands "W0" through "W4" from the submission queue 310 (steps S611 to S613). Near time point T3, the read operation for host write commands "W0" through "W4" ends. After the operation ends, the processing unit 133 reads the end-of-execution timestamp Tpre (=T2) of the previous batch of host write commands from the random access memory 131 (step S631). Assume that between time points T2 and T3, the host device 110 writes host write commands "W10" through "W12" to the submission queue 310 and changes the pointer SQT to point to entry 13 of the submission queue 310. By comparing the arrival time information in the random access memory 131 with the address currently pointed to by the pointer SQT, the processing unit 133 can know that the host device 110 has newly written host write commands "W10" through "W12" to the submission queue 310. Next, the processing unit 133 updates the arrival time information as shown in Table 2 (step S633):
TABLE 2
Instruction set number | Host write commands | Arrival timestamp
S1 | W5-W9 | T1
S2 | W10-W12 | T2
Although the actual arrival times of host write commands "W10" through "W12" are later than time point T2, the write command read module 410 does not know the actual arrival time of any host write command; using the earliest possible arrival time T2 as the timestamp therefore reduces the possibility that the actual delay of a host write command exceeds the latency measurement requirement.
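The bookkeeping of steps S631 to S635 might be organized as in the sketch below, under the assumption that newly arrived commands are detected by comparing the recorded entries against the current SQT pointer; the table layout follows Tables 1 and 2, and all names are hypothetical.

```c
#include <stdint.h>

#define MAX_SETS 16u

typedef struct {
    uint32_t first_entry;   /* first submission-queue index in the set */
    uint32_t last_entry;    /* last submission-queue index in the set  */
    uint64_t arrival_ts;    /* timestamp assigned to the whole set     */
} arrival_set_t;

typedef struct {
    arrival_set_t sets[MAX_SETS];
    uint32_t      count;
    uint32_t      next_entry; /* first queue index not yet covered by a set */
    uint64_t      t_pre;      /* end-of-read timestamp of the previous batch */
} arrival_info_t;

extern uint32_t current_sqt(void);   /* tail pointer written by the host */
extern uint64_t now_ts(void);

/* Steps S631-S635: tag commands that arrived since the previous batch
 * with Tpre (the earliest time they could have arrived), then refresh Tpre. */
static void update_arrival_info(arrival_info_t *info)
{
    uint64_t t_pre = info->t_pre;                  /* step S631 */
    uint32_t sqt = current_sqt();

    if (sqt != info->next_entry && info->count < MAX_SETS) {
        arrival_set_t *s = &info->sets[info->count++]; /* step S633 */
        s->first_entry = info->next_entry;
        s->last_entry  = sqt - 1u;                 /* wrap-around omitted for brevity */
        s->arrival_ts  = t_pre;
        info->next_entry = sqt;
    }
    info->t_pre = now_ts();                        /* step S635 */
}
```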
Although FIG. 3 shows only the two queues 310 and 330, the host may create a greater number of submission sub-queues (Submission Sub-queues) and completion sub-queues (Completion Sub-queues) depending on the application requirements. Table 1 may be modified to include the arrival time information of host write commands in different submission sub-queues, and a collective determination may be made as to whether the host write commands in all submission sub-queues need immediate processing; the invention is not limited in this respect.
To solve the technical problem that occurs when the LUN 139 is in dirty mode, the method flows shown in FIGS. 8 and 10 provide a data writing method for a flash memory, implemented when the processing unit 133 loads and executes the program code of the relevant software or firmware modules, including the following steps: before performing a portion of the H2F table update or GC procedure, determining whether there is at least one host write command in the submission queue 310 that needs immediate processing; when there is a host write command that needs immediate processing, executing the host write command(s) in a batch and then performing that portion of the H2F table update or GC procedure; and, when there is no host write command that needs immediate processing, directly performing that portion of the H2F table update or GC procedure. Those skilled in the art will appreciate that the H2F table update and the GC procedure are initiated by the storage device 130 itself to optimize the performance of the storage device 130, rather than being initiated by the host device 110 through a host write command. The details are described below.
To avoid frequently updating the H2F table in the LUN 139, the processing unit 133 may cache all or part of the H2F table in the random access memory 131 (typically a DRAM) and update the contents of the cached H2F table after a write operation completes. To reduce the time needed for sudden-power-off recovery (SPO Recovery, SPOR) after a sudden power off (Sudden Power Off, SPO), the processing unit 133 writes the updated contents of the cached H2F table to the LUN 139 each time a certain number of records have been updated. When the storage device 130 is in dirty mode, such H2F table write operations may be frequent. However, the processing unit 133 and the flash memory interface 137 need a period of time to complete the entire write operation for the updated contents, which may cause some host write commands in the submission queue 310 to wait too long to meet the QoS latency test requirement. To avoid this problem, in some embodiments, the H2F table write module 450 may be modified to divide all the updated contents of the H2F table into segments and, before writing each segment of updated contents, determine whether there is a host write command that needs immediate processing. When there are host write commands that need immediate processing, these host write commands are processed first.
Referring to fig. 9, the H2F table 910 preferably stores, in order, the physical address information corresponding to each logical address (or Logical Block Address, LBA). The space required for the H2F table 910 is preferably proportional to the total number of logical addresses. A logical address may be represented as a logical block address, where each LBA corresponds to a fixed-size logical block, e.g., 512 bytes (Bytes), and is mapped to a physical address. For example, the H2F table 910 sequentially stores physical address information from LBA#0 to LBA#65535. The data of several consecutive logical addresses (e.g., LBA#0 to LBA#7) may form one host page (Host Page). The physical address information 930 includes, for example, four bytes: field 930-0 records the (physical) block number; field 930-1 records the (physical) page number and offset (Offset); field 930-2 records the (physical) plane number; and field 930-3 records the logical unit number, the input/output channel number, and the like. For example, the physical address information 930 corresponding to LBA#2 may point to a region 951 in block 950.
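The four-byte physical address record of FIG. 9 can be pictured with a packed structure such as the one below; the exact bit widths are assumptions chosen only so that the fields total four bytes, since the text names the fields but not their sizes.

```c
#include <stdint.h>

/* Illustrative 4-byte physical address record of the H2F table (FIG. 9).
 * Field widths are assumptions; only the field names follow the text. */
typedef struct {
    uint32_t block    : 12;   /* 930-0: physical block number             */
    uint32_t page_off : 10;   /* 930-1: physical page number and offset   */
    uint32_t plane    : 4;    /* 930-2: physical plane number             */
    uint32_t lun_ch   : 6;    /* 930-3: logical unit / I/O channel number */
} h2f_entry_t;

/* The H2F table stores one such entry per LBA, in LBA order,
 * e.g. h2f_table[2] would describe where LBA#2 resides. */
```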
Refer to the embodiment of the H2F table update method shown in FIG. 8, which is implemented when the processing unit 133 loads and executes the program code of the H2F table write module 450. The processing unit 133 may repeatedly execute a loop (steps S810 to S870) to write all the updated contents of the H2F table to the LUN 139 in segments. For example, when the physical address information in the cached H2F table has been updated for LBA#0 to LBA#2047, the processing unit 133 may write the physical address information for LBA#0 to LBA#1023 (i.e., the first segment) in one batch and write the physical address information for LBA#1024 to LBA#2047 (i.e., the second segment) in the next batch. In each round, the processing unit 133 first determines whether there is a host write command that needs immediate processing (step S810). For this determination, refer to the descriptions of Table 1, step S613, and equation (1) above, which are not repeated for brevity. When there is a host write command requiring immediate processing (the "yes" path of step S810), the processing unit 133 first reads the host write command requiring immediate processing (step S830) and then stores one updated segment of the H2F table to the LUN 139 (step S850). When there is no host write command requiring immediate processing (the "no" path of step S810), one updated segment of the H2F table is stored directly to the LUN 139 (step S850).
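A minimal sketch of the segmented H2F table flush of FIG. 8 follows; the segment size of 1024 LBAs matches the example above, while the helper functions are assumed interfaces.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative segmented H2F-table flush (FIG. 8, steps S810-S870).
 * Helper functions are assumed interfaces, not the actual firmware API. */
extern bool have_urgent_host_writes(void);      /* check of step S810, per equation (1) */
extern void read_urgent_host_writes(void);      /* step S830 */
extern void save_h2f_segment(uint32_t first_lba, uint32_t count); /* step S850 */

#define SEGMENT_LBAS 1024u   /* e.g., flush 1024 LBAs of mapping data per round */

static void save_dirty_h2f_range(uint32_t first_lba, uint32_t total_lbas)
{
    for (uint32_t done = 0; done < total_lbas; done += SEGMENT_LBAS) {
        if (have_urgent_host_writes())          /* step S810 */
            read_urgent_host_writes();          /* step S830: serve the host first */

        uint32_t count = total_lbas - done;
        if (count > SEGMENT_LBAS)
            count = SEGMENT_LBAS;
        save_h2f_segment(first_lba + done, count); /* step S850 */
    }                                              /* step S870: more segments? */
}
```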
When the storage device 130 is in dirty mode, many physical pages in the LUN 139 may contain both valid and invalid sectors (the latter also referred to as stale sectors), where valid sectors store valid user data and invalid sectors store invalid (old) user data. When the processing unit 133 detects that the available space in the LUN 139 is insufficient, it may instruct the flash memory controller 135 to read and collect the user data in the valid sectors of source blocks, and then instruct the flash memory controller 135 to rewrite the collected valid user data to empty physical pages of an active block (or destination block), so that the data blocks (source blocks) containing invalid user data can be turned into idle blocks. After being erased, an idle block can serve as an active block to provide data storage space. The procedure described above is called garbage collection.
However, the processing unit 133 and the flash memory interface 137 require a period of time to complete the entire GC procedure, which may cause some host write commands in the submission queue 310 to wait too long to meet the QoS latency requirement. To avoid this problem, in some embodiments, the GC operation module 470 may be modified to divide the entire garbage collection procedure into several stages and, before performing the work of one stage, determine whether there are host write commands that need to be processed immediately. When there are host write commands that need immediate processing, these host write commands are processed first.
In some embodiments, the entire GC procedure can be divided into five stages of operation. In the first stage, the processing unit 133 may determine the source addresses of the valid user data in the source block and the destination addresses in the destination block. In the second stage, the processing unit 133 may instruct the flash memory controller 135 to read user data from the source addresses of the LUN 139 and instruct the flash memory controller 135 to write the read user data to the destination addresses of the LUN 139. In the third and fourth stages, the processing unit 133 may update the H2F table and the physical-to-logical mapping table (Physical-to-Logical Mapping Table, P2L table), respectively. In the fifth stage, the processing unit 133 may change the source block into an idle block. The five stages above are merely examples; one skilled in the art may, in the GC operation module 470, combine several stages into a single stage or split one stage into several sub-stages according to the operating speeds of the processing unit 133, the flash memory controller 135, and the flash memory interface 137. In addition, the GC operation module 470 may optimize the execution order of the five stages according to the processing status, for example, arranging the first and second stages into a loop until the destination block can no longer accept user data from the source block, and then executing the third to fifth stages.
Refer to the embodiment of the GC procedure execution method shown in FIG. 10, which is implemented when the processing unit 133 loads and executes the program code of the GC operation module 470. The processing unit 133 may repeatedly execute a loop (steps S1010 to S1070) to execute the GC procedure in stages. In each round, the processing unit 133 first determines whether there is a host write command that needs immediate processing (step S1010). For this determination, refer to the descriptions of Table 1, step S613, and equation (1) above, which are not repeated for brevity. When there is a host write command requiring immediate processing (the "yes" path of step S1010), the processing unit 133 reads the host write command requiring immediate processing (step S1030) and then performs the GC operation of the first or next stage (step S1050). When there is no host write command requiring immediate processing (the "no" path of step S1010), the GC operation of the first or next stage is performed directly (step S1050).
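A minimal sketch of the phased GC procedure of FIG. 10 follows, with the five stages of the preceding paragraphs expressed as an enumerated state; all functions are assumed interfaces rather than the actual firmware.

```c
#include <stdbool.h>

/* Illustrative phased garbage collection (FIG. 10, steps S1010-S1070).
 * The five stages follow the text; all functions are assumed interfaces. */
typedef enum {
    GC_PICK_ADDRESSES,   /* stage 1: choose source and destination addresses  */
    GC_MOVE_DATA,        /* stage 2: read valid data and rewrite it           */
    GC_UPDATE_H2F,       /* stage 3: update logical-to-physical table         */
    GC_UPDATE_P2L,       /* stage 4: update physical-to-logical table         */
    GC_RELEASE_BLOCK,    /* stage 5: turn the source block into an idle block */
    GC_DONE
} gc_stage_t;

extern bool       have_urgent_host_writes(void);   /* check of step S1010 */
extern void       read_urgent_host_writes(void);   /* step S1030 */
extern gc_stage_t run_gc_stage(gc_stage_t stage);  /* step S1050: one stage of work */

static void run_gc_procedure(void)
{
    gc_stage_t stage = GC_PICK_ADDRESSES;
    while (stage != GC_DONE) {                     /* step S1070 */
        if (have_urgent_host_writes())             /* step S1010 */
            read_urgent_host_writes();             /* step S1030 */
        stage = run_gc_stage(stage);               /* step S1050 */
    }
}
```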
In some embodiments of steps S830 or S1030, the processing unit 133 may call and execute the function hw_pushiocmdinofrdinfo () for completing the method steps as described in fig. 6. In alternative embodiments of steps S830 or S1030, the H2F table writing module 450 or GC operation module 470 may embed program code of the method steps as described in fig. 6 for execution by the processing unit 133.
All or part of the steps of the methods described in the present invention may be implemented by a computer program, for example, an operating system of a computer, a driver for specific hardware in a computer, or a software program. In addition, other types of programs, as shown above, may also be implemented. Those of ordinary skill in the art can write the methods of the embodiments of the present invention as computer programs, which are not described again for brevity. Computer programs implementing methods according to embodiments of the present invention may be stored on a suitable computer-readable storage medium, such as a DVD, a CD-ROM, a USB drive, or a hard disk, or may be placed on a network server accessible via a network (e.g., the Internet) or another suitable medium.
Although the components described above are included in fig. 1, it is not excluded that many other additional components may be used to achieve a better technical result without violating the spirit of the invention. In addition, although the flowcharts of fig. 6, 8 and 10 are executed in the specified order, the order among these steps may be modified by those skilled in the art without departing from the spirit of the invention, and therefore, the present invention is not limited to the use of only the order described above. Furthermore, one skilled in the art may integrate several steps into one step or perform more steps in addition to these steps, sequentially or in parallel, and the invention is not limited thereby.
While the invention has been illustrated by the above examples, it should be noted that the description is not intended to limit the invention. On the contrary, this invention covers modifications and similar arrangements apparent to those skilled in the art. Therefore, the scope of the claims is to be accorded the broadest interpretation so as to encompass all such obvious modifications and similar arrangements.

Claims (14)

1. A method for writing flash memory data, implemented by a processing unit when loading and executing program code of a software or firmware module, comprising:
before performing a portion of a logical-to-physical mapping table update or a garbage collection procedure, determining whether a host write command that needs immediate processing exists in a submission queue; and
when the host write command that needs immediate processing exists, executing the host write command in a batch and then performing the portion of the logical-to-physical mapping table update or garbage collection procedure,
wherein executing each host write command comprises: retrieving the host write command from the submission queue through a host interface; reading, from the random access memory through the host interface, user data to be written into the storage unit according to address information in the host write command; storing the user data in a random access memory; and writing a completion element corresponding to the host write command to a completion queue through the host interface.
2. The method for writing flash memory data according to claim 1, comprising:
when there is no host write command that needs immediate processing, performing the portion of the logical-to-physical mapping table update or garbage collection procedure.
3. The method of any one of claims 1 to 2, wherein the following formula is used to determine whether the host write command requiring immediate processing is present in the submission queue:
Tnow - Ti > Ttr
wherein Tnow represents the current time, i represents a positive integer, Ti represents the arrival time of the i-th host write command in the submission queue, and Ttr represents a threshold; when the condition of the formula is satisfied, the i-th host write command in the submission queue needs immediate processing.
4. The method of claim 3, wherein the arrival time of each host write command in the submission queue is the time point at which execution of a previous batch of host write commands ended, as detected by the processing unit when the host write command entered the submission queue.
5. The method of any one of claims 1 to 2, wherein the portion of the logical-to-physical mapping table update comprises writing physical address information associated with a segment of consecutive logical addresses to the storage unit through a flash memory interface.
6. The method of any one of claims 1 to 2, wherein the garbage collection procedure is divided into a plurality of stages, and the portion of the garbage collection procedure comprises the operation of one stage.
7. The method of claim 6, wherein the operations of the plurality of stages comprise: determining a source address of a sector containing valid user data and a destination address of an empty physical page of an idle block or an active block; instructing a flash memory controller to read user data from the source address of a storage unit and to write the read user data to the destination address of the storage unit; updating a logical-to-physical mapping table; or instructing the flash memory controller to erase a data block containing the source address in the storage unit.
8. The method of any one of claims 1 to 2, wherein executing each host write command comprises: retrieving the host write command from the submission queue through a host interface; reading, from the random access memory through the host interface, user data to be written into the storage unit according to the address information in the host write command; writing the user data into the storage unit through a flash memory interface; and writing a completion element corresponding to the host write command to a completion queue through the host interface.
9. A computer readable storage medium for flash memory data writing, storing a computer program executable by a processing unit, wherein the computer program, when executed by the processing unit, implements the following steps:
before performing a portion of a logical-to-physical mapping table update or a garbage collection procedure, determining whether a host write command that needs immediate processing exists in a submission queue; and
when the host write command that needs immediate processing exists, executing the host write command in a batch and then performing the portion of the logical-to-physical mapping table update or garbage collection procedure,
wherein executing each host write command comprises: retrieving the host write command from the submission queue through a host interface; reading, from the random access memory through the host interface, user data to be written into the storage unit according to address information in the host write command; storing the user data in a random access memory; and writing a completion element corresponding to the host write command to a completion queue through the host interface.
10. The computer readable storage medium of claim 9, wherein the following formula is used to determine whether the host write command requiring immediate processing is present in the submission queue:
Tnow - Ti > Ttr
wherein Tnow represents the current time, i represents a positive integer, Ti represents the arrival time of the i-th host write command in the submission queue, and Ttr represents a threshold; when the condition of the formula is satisfied, the i-th host write command in the submission queue needs immediate processing.
11. The computer readable storage medium of claim 10, wherein the arrival time of each host write command in the submission queue is the time point at which execution of a previous batch of host write commands ended, as detected by the processing unit when the host write command entered the submission queue.
12. The computer readable storage medium of claim 9, wherein the portion of the logical-to-physical mapping table update comprises writing physical address information associated with a segment of consecutive logical addresses to the storage unit through a flash memory interface.
13. The computer readable storage medium of claim 9, wherein the garbage collection procedure is divided into a plurality of stages, and the portion of the garbage collection procedure comprises the operation of one stage.
14. The computer readable storage medium of any one of claims 9 to 13, wherein executing each host write command comprises: retrieving the host write command from the submission queue through a host interface; reading, from the random access memory through the host interface, user data to be written into the storage unit according to the address information in the host write command; writing the user data into the storage unit through a flash memory interface; and writing a completion element corresponding to the host write command to a completion queue through the host interface.
CN201910220318.3A 2019-01-03 2019-03-22 Flash memory data writing method and computer readable storage medium Active CN111399750B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/445,702 US11288185B2 (en) 2019-01-03 2019-06-19 Method and computer program product for performing data writes into a flash memory
US17/667,801 US11960396B2 (en) 2019-01-03 2022-02-09 Method and computer program product for performing data writes into a flash memory

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962787810P 2019-01-03 2019-01-03
US62/787,810 2019-01-03

Publications (2)

Publication Number Publication Date
CN111399750A CN111399750A (en) 2020-07-10
CN111399750B (en) 2023-05-26

Family

ID=71428322

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910220318.3A Active CN111399750B (en) 2019-01-03 2019-03-22 Flash memory data writing method and computer readable storage medium
CN201910486615.2A Active CN111399752B (en) 2019-01-03 2019-06-05 Control device and method for different types of storage units

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910486615.2A Active CN111399752B (en) 2019-01-03 2019-06-05 Control device and method for different types of storage units

Country Status (2)

Country Link
CN (2) CN111399750B (en)
TW (3) TWI739075B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111949580B (en) * 2020-08-12 2021-11-12 深圳安捷丽新技术有限公司 Multi-frequency memory interface and configuration method thereof
CN114327240A (en) 2020-09-29 2022-04-12 慧荣科技股份有限公司 Computer readable storage medium, data storage method and device of flash memory
TWI754396B (en) * 2020-09-29 2022-02-01 慧榮科技股份有限公司 Method and apparatus and computer program product for storing data in flash memory
CN112379830B (en) * 2020-11-03 2022-07-26 成都佰维存储科技有限公司 Method and device for creating effective data bitmap, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375779A (en) * 2010-08-16 2012-03-14 深圳市朗科科技股份有限公司 Data processing method and data processing module
CN102799534A (en) * 2012-07-18 2012-11-28 上海宝存信息科技有限公司 Storage system and method based on solid state medium and cold-hot data identification method
CN107844431A (en) * 2017-11-03 2018-03-27 合肥兆芯电子有限公司 Map table updating method, memorizer control circuit unit and memory storage apparatus

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8296467B2 (en) * 2000-01-06 2012-10-23 Super Talent Electronics Inc. Single-chip flash device with boot code transfer capability
US6973554B2 (en) * 2003-04-23 2005-12-06 Microsoft Corporation Systems and methods for multiprocessor scalable write barrier
KR101404083B1 (en) * 2007-11-06 2014-06-09 삼성전자주식회사 Solid state disk and operating method thereof
US7409489B2 (en) * 2005-08-03 2008-08-05 Sandisk Corporation Scheduling of reclaim operations in non-volatile memory
US7716411B2 (en) * 2006-06-07 2010-05-11 Microsoft Corporation Hybrid memory device with single interface
US7444461B2 (en) * 2006-08-04 2008-10-28 Sandisk Corporation Methods for phased garbage collection
US7441071B2 (en) * 2006-09-28 2008-10-21 Sandisk Corporation Memory systems for phased garbage collection using phased garbage collection block or scratch pad block as a buffer
CN101599295B (en) * 2008-06-02 2011-12-07 联阳半导体股份有限公司 integrated storage device and control method thereof
TWI442222B (en) * 2010-07-21 2014-06-21 Silicon Motion Inc Flash memory device and method for managing a flash memory device
CN104160384B (en) * 2012-01-27 2017-06-16 马维尔国际贸易有限公司 For the system and method for dynamic priority control
US9417820B2 (en) * 2012-12-06 2016-08-16 Kabushiki Kaisha Toshiba Low-overhead storage of a hibernation file in a hybrid disk drive
US9348747B2 (en) * 2013-10-29 2016-05-24 Seagate Technology Llc Solid state memory command queue in hybrid device
US9684568B2 (en) * 2013-12-26 2017-06-20 Silicon Motion, Inc. Data storage device and flash memory control method
US9471254B2 (en) * 2014-04-16 2016-10-18 Sandisk Technologies Llc Storage module and method for adaptive burst mode
CN104361113B (en) * 2014-12-01 2017-06-06 中国人民大学 A kind of OLAP query optimization method under internal memory flash memory mixing memory module
CN106326136A (en) * 2015-07-02 2017-01-11 广明光电股份有限公司 Method for collecting garage block in solid state disk
US20170060434A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Transaction-based hybrid memory module
US10409719B2 (en) * 2016-03-17 2019-09-10 Samsung Electronics Co., Ltd. User configurable passive background operation
TWI595412B (en) * 2016-09-09 2017-08-11 大心電子(英屬維京群島)股份有限公司 Data transmitting method, memory storage device and memory control circuit unit
US10359933B2 (en) * 2016-09-19 2019-07-23 Micron Technology, Inc. Memory devices and electronic systems having a hybrid cache including static and dynamic caches with single and multiple bits per cell, and related methods
US10359953B2 (en) * 2016-12-16 2019-07-23 Western Digital Technologies, Inc. Method and apparatus for offloading data processing to hybrid storage devices
CN108959108B (en) * 2017-05-26 2021-08-24 上海宝存信息科技有限公司 Solid state disk access method and device using same

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375779A (en) * 2010-08-16 2012-03-14 深圳市朗科科技股份有限公司 Data processing method and data processing module
CN102799534A (en) * 2012-07-18 2012-11-28 上海宝存信息科技有限公司 Storage system and method based on solid state medium and cold-hot data identification method
CN107844431A (en) * 2017-11-03 2018-03-27 合肥兆芯电子有限公司 Map table updating method, memorizer control circuit unit and memory storage apparatus

Also Published As

Publication number Publication date
TW202026891A (en) 2020-07-16
TWI719494B (en) 2021-02-21
TWI828963B (en) 2024-01-11
CN111399752B (en) 2023-11-28
CN111399752A (en) 2020-07-10
TWI739075B (en) 2021-09-11
CN111399750A (en) 2020-07-10
TW202026893A (en) 2020-07-16
TW202137017A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
CN111399750B (en) Flash memory data writing method and computer readable storage medium
TWI506430B (en) Method of recording mapping information method, and memory controller and memory storage apparatus using the same
US9183136B2 (en) Storage control apparatus and storage control method
US20190095123A1 (en) Methods for internal data movements of a flash memory device and apparatuses using the same
US10101914B2 (en) Memory management method, memory control circuit unit and memory storage device
US9448946B2 (en) Data storage system with stale data mechanism and method of operation thereof
KR101301840B1 (en) Method of data processing for non-volatile memory
US9176865B2 (en) Data writing method, memory controller, and memory storage device
CN110633048B (en) Namespace operation method of flash memory storage device
TWI592865B (en) Data reading method, data writing method and storage controller using the same
CN103999060A (en) Solid-state storage management
CN1658171A (en) Faster write operations to nonvolatile memory by manipulation of frequently accessed sectors
US11960396B2 (en) Method and computer program product for performing data writes into a flash memory
US11675698B2 (en) Apparatus and method and computer program product for handling flash physical-resource sets
TWI726314B (en) A data storage device and a data processing method
US11210226B2 (en) Data storage device and method for first processing core to determine that second processing core has completed loading portion of logical-to-physical mapping table thereof
CN110895449A (en) Apparatus and method for managing valid data in a memory system
TWI629590B (en) Memory management method, memory control circuit unit and memory storage device
CN113885808A (en) Mapping information recording method, memory control circuit unit and memory device
US11494113B2 (en) Computer program product and method and apparatus for scheduling execution of host commands
TWI758745B (en) Computer program product and method and apparatus for scheduling executions of host commands
US20240143226A1 (en) Data storage device and method for managing a write buffer
US20240126473A1 (en) Data storage device and method for managing a write buffer
CN116149540A (en) Method for updating host and flash memory address comparison table, computer readable storage medium and device
CN111159065A (en) Hardware buffer management unit with key word (BMU)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant