CN108228483B - Method and apparatus for processing atomic write commands - Google Patents

Method and apparatus for processing atomic write commands

Info

Publication number
CN108228483B
CN108228483B
Authority
CN
China
Prior art keywords
cache
write command
atomic write
data
command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611159579.1A
Other languages
Chinese (zh)
Other versions
CN108228483A (en)
Inventor
孙清涛
殷雪冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Memblaze Technology Co Ltd filed Critical Beijing Memblaze Technology Co Ltd
Priority to CN201611159579.1A priority Critical patent/CN108228483B/en
Publication of CN108228483A publication Critical patent/CN108228483A/en
Application granted granted Critical
Publication of CN108228483B publication Critical patent/CN108228483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure

Abstract

The present disclosure provides a method of processing an atomic write command, comprising: receiving an atomic write command; allocating one or more cache units for the atomic write command; in response to the one or more cache units all receiving the data to be written by the atomic write command, writing the data to be written by the atomic write command into the one or more cache units; and indicating to the host that the atomic write command processing is complete. The present disclosure provides, at a minimum, a technique for efficiently implementing atomic write commands in solid-state storage devices, thereby meeting the requirements of the NVMe specification.

Description

Method and apparatus for processing atomic write commands
Technical Field
The present application relates to the field of storage, in particular, to the field of solid state disks, and more particularly, to a method and apparatus for processing an atomic write command.
Background
FIG. 1 illustrates a block diagram of a storage device. As shown in FIG. 1, a solid-state storage device 102 is coupled to a host to provide storage capability to the host. The host and the solid-state storage device 102 may be coupled in various ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, a wireless communication network, etc. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, personal digital assistant, etc. The solid-state storage device 102 includes an interface 103, a control component 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.
NAND flash Memory, phase change Memory, FeRAM (Ferroelectric RAM), MRAM (magnetoresistive Memory), RRAM (Resistive Random Access Memory), etc. are common NVM.
The interface 103 may be adapted to exchange data with the host by means such as SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, Fibre Channel, etc.
The control component 104 is used to control data transfer between the interface 103, the NVM chips 105, and the DRAM 110, and is also used for memory management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control component 104 can be implemented in various forms of software, hardware, firmware, or a combination thereof; for example, the control component 104 can be an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control component 104 may also include a processor or controller in which software is executed to manipulate the hardware of the control component 104 to process IO commands. The control component 104 may also be coupled to the DRAM 110 and may access data of the DRAM 110. FTL tables and/or cached IO command data may be stored in the DRAM.
Control section 104 includes a flash interface controller (or referred to as a media interface controller, a flash channel controller) that is coupled to NVM chip 105 and issues commands to NVM chip 105 in a manner that conforms to an interface protocol of NVM chip 105 to operate NVM chip 105 and receive command execution results output from NVM chip 105. Known NVM chip interface protocols include "Toggle", "ONFI", etc.
The software and/or firmware (hereinafter collectively referred to as "firmware") running in the control component 104 may be stored in the NVM chip 105 or another firmware memory. Upon power up of the solid state storage device 102, firmware is loaded from the firmware memory into the DRAM 110 and/or memory internal to the control component 104. Optionally, the firmware is received and loaded through interface 103 or a debug interface.
Data is typically stored on and read from the NVM in units of pages, while data is erased in units of blocks. A block contains a plurality of pages. Pages on the storage medium (referred to as physical pages) have a fixed size, e.g., 17664 bytes. Physical pages may also have other sizes.
In the solid-state storage device, mapping information from logical addresses to physical addresses is maintained using an FTL (Flash Translation Layer). The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as an operating system. A physical address is an address used to access a physical storage location of the solid-state storage device. Address mapping may also be implemented in the prior art using an intermediate address form, e.g., the logical address is mapped to an intermediate address, which in turn is further mapped to a physical address.
A table structure storing mapping information from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in a solid-state storage device. Usually, an entry of the FTL table records the address mapping relationship at the granularity of a data page in the solid-state storage device.
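As an illustration only (not the patent's implementation), a flat page-granularity FTL lookup might be sketched as follows; ftl_table_t, ftl_lookup, ftl_update, and INVALID_PPA are invented names:

```c
#include <stdint.h>

#define INVALID_PPA 0xFFFFFFFFu   /* hypothetical marker for an unmapped logical page */

/* One entry per logical data page: logical page number -> physical page address. */
typedef struct {
    uint32_t *lpn_to_ppa;   /* flat mapping table kept in DRAM */
    uint32_t  num_pages;    /* number of logical data pages    */
} ftl_table_t;

/* Translate a logical page number to a physical page address. */
static uint32_t ftl_lookup(const ftl_table_t *ftl, uint32_t lpn)
{
    if (lpn >= ftl->num_pages)
        return INVALID_PPA;
    return ftl->lpn_to_ppa[lpn];
}

/* Record a new mapping after the logical page's data has been written to flash. */
static void ftl_update(ftl_table_t *ftl, uint32_t lpn, uint32_t ppa)
{
    if (lpn < ftl->num_pages)
        ftl->lpn_to_ppa[lpn] = ppa;
}
```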
Atomic Operation is defined in the NVMe specification; see http://nvmexpress.org/wp-content/uploads/NVM_Express_1_2_1_Gold_20160603.pdf. Atomic operations include atomic write commands. To execute an atomic write command, the solid-state storage device needs to ensure that the data indicated in the atomic write command is either all written to the solid-state storage device or none of it is written, with no other possible outcome. When two or more atomic write commands that write data to the same or partially the same addresses exist at the same time, the atomic write commands are executed serially.
For example, referring to Table 1, an atomic write command A writes data to logical addresses (LBAs) 0-3, and an atomic write command B writes data to logical addresses (LBAs) 1-4 (data written by atomic write command A is indicated by "A" and data written by atomic write command B is indicated by "B" in Table 1). Rows 2 and 3 of Table 1 show the results of correct execution of command A and command B. Referring to Table 1, one possible result (as shown in row 2 of Table 1) is that LBA 0-LBA 3 hold the data written by write command A and LBA 4 holds the data written by write command B; in other words, write command B takes effect first, atomically updating LBAs 1-4, and write command A takes effect next, atomically updating LBAs 0-3. Another possible result (as shown in row 3 of Table 1) is that LBA 0 holds the data written by write command A and LBA 1-LBA 4 hold the data written by write command B; in other words, write command A takes effect first, atomically updating LBAs 0-3, and write command B takes effect next, atomically updating LBAs 1-4. Apart from the two results above, no other result meets the NVMe specification requirements for atomic write commands. A result such as that in row 4 of Table 1 is not allowed to occur for atomic write commands.
LBA             0  1  2  3  4  5  6
Correct result  A  A  A  A  B
Correct result  A  B  B  B  B
Invalid result  A  A  B  B  B
Table 1
However, the prior art does not provide a way to implement atomic write commands in solid-state storage devices that meets the requirements of the NVMe specification.
Disclosure of Invention
The present application aims to provide a technique capable of implementing atomic write commands in a solid-state storage device while meeting the requirements of the NVMe specification.
According to a first aspect of the present disclosure, there is provided a first method of processing an atomic write command, comprising: receiving an atomic write command; allocating one or more cache units for the atomic write command; in response to the one or more cache units all receiving the data to be written by the atomic write command, writing the data to be written by the atomic write command into the one or more cache units; and indicating to the host that the atomic write command processing is complete.
According to the first method of processing an atomic write command of the first aspect of the present disclosure, there is provided a second method of processing an atomic write command, wherein the one or more cache units allocated for the atomic write command include: one or more cache units hit by the atomic write command; and/or one or more cache units applied for the atomic write command when part or all of the atomic write command misses the cache units.
According to the first or second method of processing an atomic write command of the first aspect of the present disclosure, there is provided a third method of processing an atomic write command, wherein if any one of the one or more cache units cannot receive data to be written by the atomic write command, processing of the atomic write command is suspended.
According to a third method of processing an atomic write command of the first aspect of the present disclosure, there is provided a fourth method of processing an atomic write command, wherein if the atomic write command hits in a cache unit, but data is stored in the hit cache unit, the hit cache unit cannot receive the data to be written by the atomic write command.
According to a third method of processing an atomic write command of the first aspect of the present disclosure, there is provided a fifth method of processing an atomic write command, wherein if the atomic write command hits in a cache unit, but a logical address range of data stored in the hit cache unit overlaps with a logical address range of the atomic write command, the hit cache unit cannot receive data to be written by the atomic write command.
According to one of the third to fifth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a sixth method of processing an atomic write command, wherein if data is stored in one or more cache units applied for the atomic write command, the applied one or more cache units cannot receive the data to be written by the atomic write command.
According to one of the third to sixth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a seventh method of processing an atomic write command, further comprising: and if any one of the one or more cache units can not receive the data to be written by the atomic write command, emptying the cache unit which can not receive the data to be written by the atomic write command.
According to a seventh method of processing an atomic write command of the first aspect of the present disclosure, there is provided an eighth method of processing an atomic write command, wherein flushing a cache unit to which data is to be written that cannot receive the atomic write command comprises: and writing the data in the one or more cache units into the NVM.
According to a seventh method of processing an atomic write command of the first aspect of the present disclosure, there is provided a ninth method of processing an atomic write command, wherein each cache unit includes a plurality of cache subunits, and emptying a cache unit that cannot receive the data to be written by the atomic write command includes: if the cache unit has blank cache subunits to which no data has been written, sending a read command to the logical address corresponding to each blank cache subunit to fill it; and when all the cache subunits in the cache unit are filled with data, writing the data of the cache unit into the NVM as a whole.
According to one of the first to ninth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a tenth method of processing an atomic write command, wherein if the atomic write command hits one or more cache units, and a logical address of data stored in the hit one or more cache units does not overlap with the logical address of the atomic write command, the hit one or more cache units may receive data to be written by the atomic write command.
According to one of the second to tenth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided an eleventh method of processing an atomic write command, wherein if valid data does not exist in one or more cache units applied for the atomic write command when part or all of the atomic write command misses a cache unit, the one or more cache units applied for the atomic write command may receive data to be written by the atomic write command.
According to one of the second to eleventh methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a twelfth method of processing an atomic write command, wherein a cache unit is applied from a pool of cache units.
According to one of the seventh to ninth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a thirteenth method of processing an atomic write command, wherein processing of the atomic write command is suspended during emptying of a cache unit.
According to one of the first to thirteenth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a fourteenth method of processing an atomic write command, further comprising: splitting the atomic write command into one or more subcommands according to the size of a cache unit; distributing a buffer unit for each sub-command; wherein the range of the logical address accessed by each sub-command does not exceed the range of the logical address of one cache unit.
According to a fourteenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided a fifteenth method of processing an atomic write command, wherein the cache unit allocated to each sub-command includes: a cache unit hit by the sub-command; or a cache unit applied for the sub-command when the sub-command misses all cache units.
According to a fourteenth or fifteenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided a sixteenth method of processing an atomic write command, wherein, if any of the buffer units allocated to the one or more sub-commands cannot receive data to be written by the sub-command, the processing of the atomic write command is suspended until all of the buffer units allocated to the one or more sub-commands can receive data to be written by the sub-command.
According to a sixteenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided a seventeenth method of processing an atomic write command, wherein if the sub-command hits in a first cache unit but a logical address range of data stored in the first cache unit overlaps with a logical address range of the sub-command, the first cache unit cannot receive the data to be written by the sub-command.
According to a seventeenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided an eighteenth method of processing an atomic write command, wherein, when the sub-command misses all cache units, if the logical address range of the data stored in a second cache unit applied for the sub-command overlaps with the logical address range of the sub-command, the second cache unit cannot receive the data to be written by the sub-command.
According to one of the sixteenth to eighteenth methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a nineteenth method of processing an atomic write command, further comprising: emptying a buffer unit which cannot receive the data to be written by the sub-command, so that the data can be written in the buffer unit.
According to a nineteenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided a twentieth method of processing an atomic write command, wherein the flushing a cache unit that cannot receive data to be written by the sub command includes: and writing the data in the cache unit into the NVM.
According to a nineteenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided a twenty-first method of processing an atomic write command, wherein each buffer unit includes a plurality of buffer subunits, and emptying a buffer unit that cannot receive data to be written by the subcommand includes: if the cache subunits in the cache units have blank cache subunits which are not written with data, sending a read command to a logic address corresponding to the blank cache subunits to fill the blank cache subunits; and when all the cache subunits in the cache unit are filled with data, writing the data of the cache unit into the NVM in a whole manner.
According to one of the fourteenth to twenty-first methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a twenty-second method of processing an atomic write command, wherein if the sub-command hits a third cache unit, and a logical address of data already stored in the third cache unit does not overlap with a logical address of the sub-command, the third cache unit may receive data to be written by the sub-command.
According to a sixteenth method of processing an atomic write command of the first aspect of the present disclosure, there is provided a twenty-third method of processing an atomic write command, wherein if there is no valid data in a fourth cache unit applied for the sub-command when the sub-command misses any cache unit, the fourth cache unit may receive data to be written by the sub-command.
According to one of the first to twenty-third methods of processing an atomic write command of the first aspect of the present disclosure, there is provided a twenty-fourth method of processing an atomic write command, further comprising: in response to a power loss, data in the cache unit is written to the NVM using the backup power supply.
According to a second aspect of the present disclosure, there is provided a first method of processing a write command, comprising: receiving a write command; detecting whether the write command hits a cache unit; if the write command hits the cache unit, writing data into the hit cache unit; and if the write command misses the cache unit, allocating the cache unit for the write command.
According to a first method of processing a write command of the second aspect of the present disclosure, there is provided a second method of processing a write command, further comprising: if data is stored in the allocated cache unit, suspending execution of the write command until the data stored in the allocated cache unit is emptied.
According to the first or second method of processing a write command of the second aspect of the present disclosure, there is provided a third method of processing a write command, further comprising: and if the allocated cache unit does not store valid data, writing data corresponding to the write command into the cache unit.
According to a first method of processing a write command of a second aspect of the present disclosure, there is provided a fourth method of processing a write command, wherein the write command is split into a plurality of sub-commands according to the size of a cache unit, wherein a range of logical addresses accessed by each sub-command does not exceed a range of logical addresses of one cache unit.
According to a fourth method of processing a write command of a second aspect of the present disclosure, there is provided a fifth method of processing a write command, wherein the plurality of sub-commands include a first sub-command that hits in a cache unit, and data of the first sub-command is written in the hit cache unit regardless of whether or not other sub-commands hit in the cache unit.
According to a fourth method of processing a write command of the second aspect of the present disclosure, there is provided a sixth method of processing a write command, further comprising: in response to the data corresponding to the write command being written into the cache unit, indicating to the host that the write command processing is complete.
According to one of the first to sixth methods of processing a write command of the second aspect of the present disclosure, there is provided a seventh method of processing a write command, wherein, in response to a power failure, data in the cache unit is written to the NVM using a backup power supply.
According to a third aspect of the present disclosure, there is provided a storage device comprising: one or more processors; one or more memories; and a program stored in the one or more memories which, when executed by the one or more processors, causes the storage device to perform the method described above.
According to a fourth aspect of the present disclosure, there is provided an apparatus for processing an atomic write command, comprising: means for receiving an atomic write command; means for allocating one or more cache units for the atomic write command; means for writing data to be written by the atomic write command to the one or more cache units in response to all of the one or more cache units receiving the data to be written by the atomic write command; and means for indicating to the host that the atomic write command processing is complete.
According to a fifth aspect of the present disclosure, there is provided an apparatus for processing a write command, comprising: means for receiving a write command; means for detecting whether the write command hits in a cache unit; means for writing data into the hit cache location if the write command hits the cache location; means for allocating a cache unit for the write command if the write command misses the cache unit.
According to a sixth aspect of the present disclosure, there is provided a computer-readable storage medium storing a program which, when executed by an apparatus, causes the apparatus to perform the method described above.
The present disclosure at least provides a technique for implementing atomic write commands in solid-state storage devices that meets the requirements of the NVMe specification.
Drawings
FIG. 1 illustrates a block diagram of a prior art storage device;
FIG. 2 shows a block diagram of a control component of a storage device according to an embodiment of the disclosure;
FIG. 3 is a state transition diagram illustrating various states of a cache unit and state transitions;
FIG. 4 illustrates a flow diagram of a method of processing an atomic write command in accordance with an embodiment of the disclosure;
FIGS. 5A-5B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache units, according to one embodiment of the present disclosure;
FIGS. 6A-6B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache units, according to another embodiment of the disclosure;
FIGS. 7A-7B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache units, according to yet another embodiment of the disclosure;
FIGS. 8A-8B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache units, according to yet another embodiment of the disclosure;
FIGS. 9A-9B are schematic diagrams of a non-atomic write command and corresponding state change diagrams of cache units, according to yet another embodiment of the disclosure; and
FIG. 10 is a flow chart of a power-down process according to an embodiment of the disclosure.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the terms "first," "second," and the like in this disclosure are used merely for convenience in referring to objects, and are not intended to limit the number and/or order.
FIG. 2 shows a block diagram of control components of a storage device according to an embodiment of the disclosure. The control unit 104 includes a host interface 210, a front-end processing module 220, a flash management module 230, and a back-end processing module 240.
The host interface 210 is used to exchange commands and data with the host. In one example, the host and the storage device communicate via the NVMe/PCIe protocol; the host interface 210 processes PCIe protocol data packets, extracts NVMe protocol commands, and returns the processing results of the NVMe protocol commands to the host. The flash management (FTL) module 230 converts the logical address of a flash memory access command into a physical address and manages the flash memory, providing services such as wear leveling and garbage collection. The back-end processing module 240 accesses the one or more NVM chips according to the physical address. Processing before the FTL is accessed is referred to as front-end processing, and processing after the FTL is accessed is referred to as back-end processing. The control component 104 is also coupled to an external memory (e.g., RAM) 260. A portion of the space of the memory 260 is used as a front-end cache (front-end cache 265), which the front-end processing module 220 accesses in the memory 260. Optionally, a front-end cache module 225 is provided within the control component 104 for use as the front-end cache.
The front-end cache of the present disclosure provides a plurality of cache units. Each cache unit may be in one of a number of different states. In an alternative embodiment, the cache units are provided by memory 260 (see FIG. 2), while the metadata of the cache units is stored in memory internal to the control component 104. The metadata records the state of the cache unit, the logical address corresponding to the cache unit, and/or the usage of the cache subunits of the cache unit. FIG. 3 shows a state transition diagram of a cache unit. A cache unit may be in any of a variety of states, including "idle" (free), "occupied", and "evicted"; optionally, a "busy" state may also be included.
The "idle" state indicates that the cache unit is unused; no valid data is cached in a cache unit in the "idle" state. After data is written to a cache unit in the "idle" state, the cache unit changes to the "occupied" state, indicating that data has been stored in the cache unit. Optionally, since the process of writing data takes a certain amount of time, the "busy" state indicates that writing data to the cache unit has started but has not yet completed.
The process of writing the data cached in a cache unit in the "occupied" state into the NVM is called "eviction". In response to the eviction process beginning, the cache unit enters the "evicted" state. In response to the end of the eviction, the cache unit re-enters the "idle" state. The "evicted" state may also be referred to as the "emptied" state.
By way of example, each buffer unit may be 4KB in size. Obviously, the cache units may have other sizes. Preferably, the size of the cache unit is a data unit size corresponding to a physical address range of an entry in the FTL table. Optionally, the cache unit further includes a cache subunit. By way of example, each buffer subunit is 1KB in size. Preferably, the size of the cache subunit is equal to the minimum data unit size of the IO command sent by the host to the solid-state storage device.
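As an illustration only, the cache unit metadata described above might be represented as follows. This is a minimal sketch: all type, field, and constant names (cache_unit_t, filled_mask, etc.) are assumptions rather than definitions from the patent, and the 4KB unit / 1KB subunit sizes follow the example sizes given above.

```c
#include <stdint.h>
#include <stdbool.h>

#define CACHE_UNIT_SIZE     4096u   /* bytes per cache unit (example size)    */
#define CACHE_SUBUNIT_SIZE  1024u   /* bytes per cache subunit (example size) */
#define SUBUNITS_PER_UNIT   (CACHE_UNIT_SIZE / CACHE_SUBUNIT_SIZE)

/* States from the state transition diagram of FIG. 3. */
typedef enum {
    CU_IDLE,      /* "idle"/"free": no valid data cached           */
    CU_BUSY,      /* optional: a write into the unit has started   */
    CU_OCCUPIED,  /* data is stored in the unit                    */
    CU_EVICTING   /* data is being written ("evicted") to the NVM  */
} cache_unit_state_t;

/* Metadata kept (e.g., in memory internal to the control component) per cache unit. */
typedef struct {
    cache_unit_state_t state;
    uint64_t lba_base;      /* 4KB-aligned logical address recorded for the unit */
    uint8_t  filled_mask;   /* bit i set => subunit i holds written data         */
    uint8_t *data;          /* 4KB buffer provided by memory 260                 */
} cache_unit_t;

/* Has subunit idx of this cache unit already been filled with data? */
static bool subunit_filled(const cache_unit_t *cu, unsigned idx)
{
    return (cu->filled_mask >> idx) & 1u;
}
```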
FIG. 4 illustrates a method of processing an atomic write command according to an embodiment of an aspect of the disclosure.
First, in operation S410, an atomic write command is received.
Next, one or more buffer units are allocated for the atomic write command in operation S420.
The range of logical addresses (i.e., the logical addresses indicated by the metadata of the cache unit) corresponding to the cache units are all aligned by, for example, 4KB (the starting address is located at an integer multiple of 4KB, e.g., 0, 4KB, 8KB), and the size of the logical address space corresponding to the cache units is, for example, 4 KB. The size of the logical address range of the atomic write command may be different from the size of the cache unit (e.g., 4 KB).
In an embodiment according to the present disclosure, one or more sub-commands are generated for an atomic write command according to a logical address range of the atomic write command, where each sub-command accesses a logical address range that does not exceed a logical address range corresponding to one cache unit. And allocating a buffer unit for each sub-command.
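A minimal sketch of this splitting step, assuming byte-granularity logical addresses and 4KB cache units as in the examples below (subcmd_t and split_atomic_write are invented names):

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_UNIT_SIZE 4096u

typedef struct {
    uint64_t start;   /* starting logical byte address of the sub-command */
    uint32_t length;  /* bytes written by the sub-command                 */
} subcmd_t;

/* Split [start, start+length) at 4KB boundaries so that each sub-command
 * stays inside the logical address range of exactly one cache unit.
 * Returns the number of sub-commands produced (at most max_subcmds). */
static size_t split_atomic_write(uint64_t start, uint32_t length,
                                 subcmd_t *out, size_t max_subcmds)
{
    size_t n = 0;
    uint64_t cur = start, end = start + length;

    while (cur < end && n < max_subcmds) {
        uint64_t unit_end = (cur / CACHE_UNIT_SIZE + 1) * CACHE_UNIT_SIZE;
        uint64_t stop = unit_end < end ? unit_end : end;
        out[n].start  = cur;
        out[n].length = (uint32_t)(stop - cur);
        cur = stop;
        n++;
    }
    return n;
}
```

Applied to the 1KB-10KB write of Example 1 below (start = 1KB, length = 10KB), this yields three sub-commands of sizes 3KB, 4KB, and 3KB, matching sub-commands L1/L2/L3.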
The cache unit allocated to the subcommand may be the cache unit hit by the subcommand, and in case that the subcommand misses any cache unit, the cache unit is applied for the subcommand.
According to one embodiment of the present disclosure, the allocated cache unit may be one or more cache units hit by the atomic write command. Whether the cache unit hits is determined by comparing the logical address of the sub-command with the logical address recorded in the cache unit metadata. If the logical address of the sub-command is the same as the logical address of the metadata record of the cache unit, or the logical address range of the sub-command is contained by the logical address range of the metadata record of the cache unit, the sub-command hits the cache unit.
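A hedged sketch of this hit test, reusing the hypothetical cache_unit_t and subcmd_t types from the sketches above (subcmd_hits is an invented name):

```c
#include <stdint.h>
#include <stdbool.h>

/* A sub-command hits a cache unit when its logical address range is the same
 * as, or contained in, the range recorded in the unit's metadata. */
static bool subcmd_hits(const cache_unit_t *cu, const subcmd_t *sc)
{
    uint64_t sc_end = sc->start + sc->length;
    uint64_t cu_end = cu->lba_base + CACHE_UNIT_SIZE;
    return sc->start >= cu->lba_base && sc_end <= cu_end;
}
```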
The cache unit applied for the sub-command may be a cache unit in the "idle" state into which no data has been written, or a cache unit in the "occupied", "busy", or "evicted" state into which data has been written.
It can be understood that, if the atomic write command is split into a plurality of sub-commands, all of the sub-commands may hit cache units, or some sub-commands may hit cache units while the others miss, in which case cache units are applied for the sub-commands that miss.
After the cache units are allocated for the atomic write command, in operation S430, in response to all of the allocated cache units being able to receive the data to be written by the atomic write command, the data to be written by the atomic write command is written into the cache units.
In one embodiment, in order to allocate a cache unit for a sub-command that misses any cache unit, a cache unit pool is established for cache units in an "idle" state, where all cache units in the cache unit pool are in the "idle" state. When a sub-command misses any cache unit, the cache unit is fetched from the pool of cache units, thereby enabling convenient allocation of the cache unit for the sub-command. Further, the emptied cache units may be returned to the cache unit pool.
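The pool of "idle" cache units can be kept as a simple free list. A minimal sketch building on the hypothetical cache_unit_t above (cache_pool_t, pool_take, and pool_put are invented names):

```c
#include <stddef.h>
#include <stdint.h>

/* A simple free list of idle cache units. */
typedef struct {
    cache_unit_t **units;   /* pointers to idle cache units            */
    size_t         count;   /* number of units currently in the pool   */
} cache_pool_t;

/* Take an idle unit for a sub-command that missed all cache units;
 * record the 4KB-aligned logical address of the sub-command in its metadata. */
static cache_unit_t *pool_take(cache_pool_t *p, uint64_t lba_base)
{
    if (p->count == 0)
        return NULL;                 /* no idle unit available right now */
    cache_unit_t *cu = p->units[--p->count];
    cu->lba_base    = lba_base;
    cu->filled_mask = 0;
    cu->state       = CU_IDLE;
    return cu;
}

/* Return an emptied ("evicted") unit to the pool so it can be reused. */
static void pool_put(cache_pool_t *p, cache_unit_t *cu)
{
    cu->state = CU_IDLE;
    p->units[p->count++] = cu;
}
```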
In operation S440, in response to all of the data to be written by the atomic write command having been written to the cache units, completion of the atomic write command is indicated to the host. At this time, although the data corresponding to the atomic write command may not yet have been written to the NVM, the host is notified that the atomic write is complete as long as the data has been written to the cache units. This is advantageous in reducing the latency of write command processing.
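Putting operations S410-S440 together, the overall flow might look like the following skeleton. This is only a sketch: the extern helpers stand in for the steps described in this section and are hypothetical, not functions defined by the patent.

```c
#include <stdbool.h>

struct atomic_cmd;   /* opaque handle for one atomic write command (hypothetical) */

/* Hypothetical helpers standing in for the steps described in this section. */
extern void allocate_units(struct atomic_cmd *cmd);             /* S420            */
extern bool all_units_can_accept(const struct atomic_cmd *cmd); /* conflict check  */
extern void evict_conflicting_units(struct atomic_cmd *cmd);    /* flush to NVM    */
extern void wait_until_evictions_done(struct atomic_cmd *cmd);  /* command waits   */
extern void dma_data_from_host(struct atomic_cmd *cmd);         /* S430            */
extern void ack_atomic_write_complete(struct atomic_cmd *cmd);  /* S440            */

/* Top-level handling of one atomic write command. */
static void handle_atomic_write(struct atomic_cmd *cmd)
{
    allocate_units(cmd);            /* S420: one cache unit per sub-command */

    /* Atomicity: data transfer starts only when every allocated unit can
     * accept its sub-command's data; otherwise the command is suspended
     * while the conflicting units are evicted. */
    while (!all_units_can_accept(cmd)) {
        evict_conflicting_units(cmd);
        wait_until_evictions_done(cmd);
    }

    dma_data_from_host(cmd);        /* S430: fill all allocated cache units   */
    ack_atomic_write_complete(cmd); /* S440: data need not be in the NVM yet  */
}
```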
In operation S430 according to an embodiment of the present disclosure, a cache unit can receive the data to be written by the atomic write command when: (1) a sub-command of the atomic write command hits the cache unit, the hit cache unit is in the "occupied" state, and the logical address range of the data stored in the cache unit does not overlap with the logical address range accessed by the sub-command; or (2) the cache unit is applied for the sub-command and is in the "idle" state.
If any one or more of the cache units allocated for the atomic write command cannot receive the data of the atomic write command, according to the embodiment of the present disclosure, the processing of the atomic write command is suspended (the processing of all sub-commands of the atomic write command is suspended).
A cache unit cannot receive the data of the atomic write command when:
(1) the sub-command of the atomic write command hits the cache unit, the hit cache unit is in the "occupied" state, but the logical address range of the data stored in the hit cache unit overlaps with the logical address range accessed by the sub-command;
or (2) the sub-command misses all cache units, and the cache unit applied for the sub-command is in a non-idle state (the "occupied", "busy", or "evicted" state).
The situation in which a cache unit cannot receive the data of the atomic write command is also referred to as the atomic write command (or its sub-command) conflicting with the cache unit.
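A sketch of the acceptance test implied by conditions (1) and (2) above, building on the hypothetical types and the subunit_filled helper from the earlier sketches (filled_range_overlaps and unit_can_accept are invented names):

```c
#include <stdint.h>
#include <stdbool.h>

/* Does the sub-command's byte range overlap any already-filled subunit of a
 * hit cache unit? Assumes the sub-command's range lies inside the unit. */
static bool filled_range_overlaps(const cache_unit_t *cu, const subcmd_t *sc)
{
    uint64_t off   = sc->start - cu->lba_base;            /* offset inside the unit */
    unsigned first = (unsigned)(off / CACHE_SUBUNIT_SIZE);
    unsigned last  = (unsigned)((off + sc->length - 1) / CACHE_SUBUNIT_SIZE);
    for (unsigned i = first; i <= last; i++)
        if (subunit_filled(cu, i))
            return true;
    return false;
}

/* A cache unit can accept the data of an atomic sub-command when:
 * (1) it is a hit unit in the "occupied" state whose stored data does not
 *     overlap the sub-command's logical address range, or
 * (2) it is a unit newly applied for the sub-command and still "idle". */
static bool unit_can_accept(const cache_unit_t *cu, const subcmd_t *sc, bool hit)
{
    if (hit)
        return cu->state == CU_OCCUPIED && !filled_range_overlaps(cu, sc);
    return cu->state == CU_IDLE;
}
```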
Cache units that cannot receive the data to be written by the atomic write command are brought into the "idle" state through an "eviction" (emptying) process, so that the conflict is eliminated and the cache units become able to receive the data of the atomic write command.
The cache unit is emptied by writing the data in the cache unit into the NVM.
To "evict" or empty a cache unit, some cache subunits of the cache unit may already be filled with data while others are not. In this case, a read command is issued to the logical address corresponding to each cache subunit not filled with data, the data read from that logical address is used to fill the subunit, and after all cache subunits have been filled with data, the data of the cache unit is written into the NVM as a whole, so that the cache unit is emptied.
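A hedged sketch of this eviction (read-fill of blank subunits, then a whole-unit write), building on the earlier sketches; nvm_read and nvm_write are hypothetical helpers standing in for the media access path:

```c
#include <stdint.h>

/* Hypothetical NVM access helpers: read/write a byte range at a logical address. */
extern void nvm_read(uint64_t lba, void *buf, uint32_t len);
extern void nvm_write(uint64_t lba, const void *buf, uint32_t len);

/* Empty ("evict") a cache unit so it can return to the idle state. */
static void evict_cache_unit(cache_unit_t *cu)
{
    cu->state = CU_EVICTING;

    /* Read-fill any blank subunit from its corresponding logical address. */
    for (unsigned i = 0; i < SUBUNITS_PER_UNIT; i++) {
        if (!subunit_filled(cu, i)) {
            nvm_read(cu->lba_base + (uint64_t)i * CACHE_SUBUNIT_SIZE,
                     cu->data + i * CACHE_SUBUNIT_SIZE, CACHE_SUBUNIT_SIZE);
            cu->filled_mask |= (uint8_t)(1u << i);
        }
    }

    /* All subunits now hold data: write the whole unit to the NVM. */
    nvm_write(cu->lba_base, cu->data, CACHE_UNIT_SIZE);

    cu->filled_mask = 0;
    cu->state = CU_IDLE;   /* emptied; may be reused or returned via pool_put() */
}
```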
Example 1
Fig. 5A and 5B are schematic diagrams of an atomic write command and corresponding state change diagrams of a cache unit according to an embodiment of the disclosure. For clarity, the ranges of logical addresses (i.e., the logical addresses indicated by the metadata of the cache unit) corresponding to the cache units are all aligned by 4KB (the starting addresses of the ranges are located at integer multiples of 4KB, e.g., 0, 4KB, 8KB), and the size of the logical address space corresponding to the cache units is 4 KB. For example, the atomic write command 510 indicates to write data to a logical address space of 1KB-10 KB.
The atomic write command 510 is split into a plurality of subcommands according to the logical address range of the cache unit, and the logical address range accessed by each subcommand does not exceed the logical address range of one cache unit. And distributing the cache units for the sub-commands according to the logic address range accessed by the sub-commands. The logical address range of the cache unit filled with data is recorded in the metadata of the cache unit.
Referring to FIG. 5A, the atomic write command 510 is split into sub-commands L1/L2/L3: sub-command L1 accesses the 3KB logical address range 1KB-3KB, sub-command L2 accesses the 4KB logical address range 4KB-7KB, and sub-command L3 accesses the 3KB logical address range 8KB-10KB; the logical address range accessed by each sub-command does not exceed the logical address range of the cache unit allocated to it.
Alternatively, the logical address spaces corresponding to the write commands need not be contiguous, and the logical address spaces of the sub-commands need not be contiguous.
FIG. 5B illustrates the state changes of the cache units when the atomic write command misses all cache units and free cache units are allocated for the sub-commands of the atomic write command.
As shown in FIG. 5B, a cache unit is allocated for each sub-command based on the logical address accessed by that sub-command. As an example, sub-commands L1/L2/L3 each miss all cache units, so cache units 512/514/516 are fetched for sub-commands L1/L2/L3, respectively, and these cache units 512/514/516 are each in the "idle" state (as shown in "state 51" in FIG. 5B). Then, the logical addresses of sub-commands L1/L2/L3 are recorded in the metadata of the allocated cache units 512/514/516, respectively.
Since all the cache units (512/514/516) allocated for the atomic write command 510 are in the "idle" state, they can receive the data to be written by all the sub-commands L1/L2/L3, and the data corresponding to the atomic write command 510 is therefore written into these cache units. A DMA transfer is initiated between the host that issued the atomic write command 510 and cache units 512/514/516, transferring the data to be written to cache units 512/514/516 according to its logical addresses. In response to data being written to a cache unit, the state recorded in the metadata of that cache unit (512/514/516) is changed to "occupied", as shown by "state 52" in FIG. 5B.
In response to all of the subcommands L1/L2/L3 of the atomic write command 510 having had data written to the cache location, a message is sent to the host indicating that the atomic write command 510 processing is complete.
Example 2
Fig. 6A and 6B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache units according to another embodiment of the disclosure. As an exemplary embodiment, the write command 610 writes data to a 1KB sized space of logical address range 0-1KB, a 4KB sized space of logical address range 4KB-7KB, and a 1KB sized space of logical address range 10KB-11 KB.
Referring to FIG. 6A, the atomic write command 610 is split into sub-commands L4/L5/L6, where sub-command L4 writes data to the space of logical address range 0-1KB, sub-command L5 writes data to the space of logical address range 4KB-7KB, and sub-command L6 writes data to the space of logical addresses 10KB-11 KB.
By way of example, there are cache units 612, 614, and 616. The logical address range of cache unit 612 is 0-3KB and it is in the "occupied" state, but no data is stored in the logical address range 0-1KB; the logical address range of cache unit 614 is 4-7KB, it is in the "occupied" state, and data has been stored in the logical address range 4-7KB; the logical address range of cache unit 616 is 8-11KB, it is in the "occupied" state, and no data is stored in the logical addresses 10KB-11KB, as shown in "state 61" of FIG. 6B.
Sub-command L4 writes data to a 1KB size space of logical address range 0-1KB, hitting the logical address range (0-3KB range, 4KB in size) of cache unit 612; sub-command L5 writes data to a 4KB size space of logical address range 4-7KB, hitting the logical address range (4-7KB range, 4KB in size) of cache unit 614; sub-command L6 writes data to a 1KB size space with a range of logical addresses from 10KB to 11KB, hitting the range of logical addresses of cache unit 616 (a range of 8KB to 11KB, 4KB in size).
Further, since no data has been written to the logical address range 0-1KB of cache unit 612 and no data has been written to the logical address range 10KB-11KB of cache unit 616 (i.e., the logical address ranges of the data stored in cache units 612/616 do not overlap with the logical address ranges accessed by the corresponding sub-commands L4/L6), cache units 612/616 can receive the data to be written by sub-commands L4/L6, respectively. Since data is stored in the logical address range 4KB-7KB of cache unit 614, cache unit 614 cannot receive the data corresponding to sub-command L5 for the time being. In this case, since not all 3 cache units required by atomic write command 610 can yet receive the data to be written by their sub-commands, in an embodiment according to the present disclosure, to ensure atomicity of the operation result, processing of atomic write command 610 is suspended; atomic write command 610 is suspended or added to the waiting command set. Next, an "eviction" or flush process for cache unit 614 is initiated, and the state of cache unit 614 is marked as "evicted", as shown in "state 62" of FIG. 6B.
During the "eviction" process, the data of cache unit 614 is written to the NVM. After the data of cache unit 614 is written to the NVM, the "eviction" process is complete and the state of cache unit 614 is marked as "idle". Further, when evicting the data of cache unit 614, the data of other cache units assigned to the same atomic write command 610 that can already receive data need not be evicted; e.g., the data in cache units 612 and 616 need not be evicted.
After the eviction of cache unit 614 is complete, cache unit 614 enters the "idle" state, while cache units 612/616 remain in the "occupied" state (as shown in "state 63" of FIG. 6B). At this time, all the cache units 612/614/616 allocated to the atomic write command 610 can receive the data to be written by sub-commands L4/L5/L6 of the atomic write command 610, so data transfer between the host and cache units 612/614/616 is initiated, and the data corresponding to sub-commands L4/L5/L6 is written to cache units 612/614/616. In response to the data corresponding to sub-commands L4/L5/L6 being completely written to cache units 612/614/616, the state of cache units 612/614/616 is marked as "occupied" and a message is sent to the host indicating that atomic write command 610 processing is complete, as shown in "state 64" in FIG. 6B.
Example 3
FIGS. 7A-7B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache units according to yet another embodiment of the disclosure. As shown in FIG. 7A, to process the atomic write command 710, it is split into 3 sub-commands (L7/L8/L9) according to the logical addresses accessed. By way of example, the cache units are queried and it is found that sub-commands L7/L8/L9 miss all cache units, so cache units (712/714/716) are allocated for them. The cache units can be allocated according to a cache mapping scheme such as direct mapping, multi-way set association, or full association, or can be allocated from the cache unit pool. As an exemplary embodiment, referring to FIG. 7B, the obtained cache unit 712 is in the "idle" state, while cache units 714/716 are in the "occupied" state, as shown in "state 71" in FIG. 7B.
Since the atomic write command 710 misses cache units 714/716, the data in cache units 714/716, which are in the "occupied" state, needs to be flushed, i.e., written to the NVM. This is accomplished by initiating an "eviction" or flush process and setting the state of cache units 714/716 to "evicted". Further, to evict the data of cache units 714/716, the data of other cache units assigned to the same atomic write command 710 (e.g., cache unit 712) need not be evicted. Since not all 3 cache units required by the atomic write command 710 can yet receive the data to be written by their sub-commands, to ensure atomicity of the operation result, the processing of the atomic write command 710 is suspended, and the atomic write command 710 is suspended or added to the waiting command set.
After writing the data of cache location 714/716 to NVM, the "eviction" process is complete and the state of cache location 714/716 is marked as "free," as shown by "state 72" in FIG. 7B.
At this time, cache units 712/714/716 can each receive the data to be written by sub-commands L7/L8/L9 of the atomic write command 710, respectively, and thus (after the atomic write command 710 is fetched from the waiting command set) data transfer between the host and cache units 712/714/716 is initiated, writing the data corresponding to sub-commands L7/L8/L9 to cache units 712/714/716. In response to the data corresponding to sub-commands L7/L8/L9 being written in its entirety to cache units 712/714/716, respectively, the state of cache units 712/714/716 is marked as "occupied" (as indicated by "state 73" in FIG. 7B), and a message is sent to the host indicating that atomic write command 710 processing is complete.
Example 4
FIGS. 8A and 8B are schematic diagrams of an atomic write command and corresponding state change diagrams of cache locations according to yet another embodiment of the disclosure. As shown in FIG. 8A, to process an atomic write command 810, it is split into 3 subcommands (L10/L11/L12) according to its logical address. By looking up the logical address of the cache location, sub-command L10 is found to hit cache location 812, while sub-command L11/L12 misses any cache location, and a cache location is allocated for sub-command L11/L12 (814/816). By way of example, referring to FIG. 8B, the obtained cache location 812 is in an "occupied" state, while cache location 814/816 is also in an "occupied" state, as shown in "state 81".
To use cache units 814/816, which were missed, the data in cache units 814/816 (in the "occupied" state) needs to be written to the NVM. This is accomplished by initiating an "eviction" process and setting the state of cache units 814/816 to "evicted". For the hit cache unit 812, it is further checked whether the logical address range to be written by sub-command L10 overlaps with the logical address range of cache unit 812 to which data has already been written. If there is any overlap, the data of cache unit 812 also needs to be written into the NVM, and only after all 3 cache units (812/814/816) allocated for the atomic write command 810 can receive the data of their sub-commands is cache unit 812 used to receive the data of sub-command L10, so as to ensure the atomicity of the data. To write the data of cache unit 812 to the NVM, an "eviction" process is likewise initiated.
Since not all 3 cache units required by the atomic write command 810 can yet receive the data to be written, to ensure atomicity of the operation result, processing of the atomic write command 810 is suspended, and the atomic write command 810 is suspended or added to the waiting command set.
In the example of FIG. 8B, since the logical address range to be written by sub-command L10 overlaps with the logical address range of cache unit 812 to which data has already been written, the completion of the "eviction" process of cache unit 812 is waited for, in addition to the completion of the "eviction" process of cache units 814/816. In another example, if the logical address range to be written by sub-command L10 did not overlap with the logical address range of cache unit 812 to which data has already been written, no "eviction" process would need to be initiated for cache unit 812, and only the "eviction" process of cache units 814/816 would be waited for.
After cache units 812/814/816 can each receive the data to be written by sub-commands L10/L11/L12 of the atomic write command 810, for example, when cache units 812/814/816 are all in the "idle" state (as shown in "state 82" of FIG. 8B), the atomic write command 810 is fetched from the waiting command set or resumed from the suspended state, data transfer between the host and cache units 812/814/816 is initiated, and the data corresponding to sub-commands L10/L11/L12 is written into cache units 812/814/816, respectively.
In response to the data corresponding to sub-commands L10/L11/L12 being completely written to cache units 812/814/816, the state of cache units 812/814/816 is marked as "occupied" and a message is sent to the host indicating that the atomic write command 810 processing is complete, as shown in "state 83" of FIG. 8B.
Example 5
FIGS. 9A and 9B of this disclosure are schematic diagrams of processing a non-atomic write command (a normal write command) 910. As shown in FIG. 9A, by way of example, the write command 910 is split into 3 sub-commands (L13/L14/L15). By looking up the logical addresses of the cache units, it is found that sub-command L13 hits cache unit 912, while sub-commands L14/L15 miss all cache units and are allocated cache units 914/916. By way of example, referring to FIG. 9B, the hit cache unit 912 is in the "occupied" state, and the allocated cache units 914/916 are also in the "occupied" state, as shown in "state 91" in FIG. 9B.
Since the write command 910 does not require atomicity, any sub-command of the write command 910 may initiate a data transfer to its cache unit as soon as that cache unit can receive data, without waiting for all cache units of the write command 910 to be able to receive data. According to one embodiment of the present disclosure, even if one sub-command of the write command 910 misses the cache units and must wait for a cache unit to be allocated, the other sub-commands that can already write to their cache units need not wait for it.
Referring to FIG. 9B, since sub-command L13 hits cache unit 912 and cache unit 912 is in the "occupied" state, data is transferred (e.g., by DMA) between the host and cache unit 912 according to sub-command L13. Even if the logical address range to be written by sub-command L13 overlaps with the logical address range of cache unit 912 to which data has already been written, data can be immediately written to cache unit 912 according to sub-command L13. Sub-commands L14 and L15, however, miss cache units 914 and 916, and cache units 914/916 are in the "occupied" state (meaning that the data already written in cache units 914/916 is at different logical addresses than the data to be written by sub-commands L14/L15), so the data of cache units 914/916 is written to the NVM; this is done by initiating an "eviction" process and setting the state of cache units 914/916 to "evicted".
When the "eviction" process is complete, cache unit 912 remains in the "occupied" state and cache units 914/916 change to the "idle" state (shown as "state 92" in FIG. 9B), and data is written to cache units 914/916 according to sub-commands L14/L15. In response to the data corresponding to sub-commands L13/L14/L15 being completely written to cache units 912/914/916, cache units 912/914/916 each change to the "occupied" state and a message is sent to the host indicating that processing of the write command 910 is complete, as shown in "state 93" of FIG. 9B.
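For contrast with the atomic path, a sketch of the per-sub-command handling of a non-atomic write, reusing the hypothetical types and helpers from the earlier sketches (dma_subcmd_from_host is an invented placeholder for the host-to-cache DMA):

```c
#include <stdbool.h>

/* Hypothetical per-sub-command DMA helper: moves the host data for one
 * sub-command into its cache unit. */
extern void dma_subcmd_from_host(cache_unit_t *cu, const subcmd_t *sc);

/* Non-atomic write: each sub-command proceeds on its own as soon as its
 * cache unit can take data; no wait across sub-commands is required. */
static void handle_plain_write_subcmd(cache_unit_t *cu, const subcmd_t *sc, bool hit)
{
    if (!hit && cu->state != CU_IDLE) {
        /* Missed unit still holds other data (e.g. units 914/916): evict it first. */
        evict_cache_unit(cu);
    }
    /* For a hit (e.g. unit 912), overlapping data may simply be overwritten. */
    dma_subcmd_from_host(cu, sc);
    cu->state = CU_OCCUPIED;
}
```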
Fig. 10 is a flow chart of a power down process according to an embodiment of the disclosure.
When power failure occurs due to an abnormality (S1010), the solid-state storage device is powered by the backup power supply, the data written in (for example, all) cache units in the "occupied" state is written into the NVM (S1020), and it is ensured that the "eviction" operation of cache units in the "evicted" state is completed. Commands for which completion has not yet been indicated (acknowledged, ACK) to the host are discarded (or ignored) (S1030).
Thus, according to the embodiments of the present disclosure, for an atomic write command and/or a non-atomic write command whose completion has been indicated to the host, the data to be written has at least been written into the cache units and is written into the NVM upon power loss, thereby ensuring that the processing of the atomic/non-atomic write command conforms to the NVMe specification. Even when no power failure occurs, the data in a cache unit is written to the NVM when the cache unit is evicted. Optionally, the data of cache units in the "occupied" state is written into the NVM periodically, when the solid-state storage device is idle, or according to a "clear" (flush) command issued by the host.
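A hedged sketch of the power-loss path of FIG. 10, building on the earlier sketches; backup_power_engaged, wait_pending_evictions_done, and drop_unacked_commands are hypothetical platform helpers:

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical platform helpers for the power-loss path. */
extern bool backup_power_engaged(void);
extern void wait_pending_evictions_done(void);
extern void drop_unacked_commands(void);

/* Abnormal power loss (S1010): running from the backup power supply, flush
 * every "occupied" cache unit to the NVM (S1020), let in-flight evictions
 * finish, and discard commands whose completion was never reported (S1030). */
static void on_power_loss(cache_unit_t *units, size_t nunits)
{
    if (!backup_power_engaged())
        return;

    for (size_t i = 0; i < nunits; i++) {
        if (units[i].state == CU_OCCUPIED)
            evict_cache_unit(&units[i]);
    }

    wait_pending_evictions_done();
    drop_unacked_commands();
}
```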
The methods and apparatus of the embodiments of the present application may be implemented by hardware, software, firmware, or any combination thereof. The hardware may include digital circuitry, analog circuitry, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), and so forth. The software may include information processing apparatus readable programs that, when executed by an information processing apparatus, implement methods provided according to embodiments of the present application.
For example, embodiments of the present application may be implemented as a storage controller, which may include: one or more processors; a memory; and programs stored in the memory which, when executed by the one or more processors, cause the storage controller to perform the methods provided by the embodiments of the present application.
The software of the embodiments of the present application may also be stored in a computer-readable storage medium, such as a hard disk, an optical disk, etc., which stores a program that, when executed by an apparatus, causes the apparatus to perform the method provided according to the embodiments of the present invention.
The foregoing description is merely exemplary rather than exhaustive of the present invention, and those skilled in the art may add, delete, modify, replace, etc. the above methods, apparatuses, devices, modules, etc. without departing from the spirit and scope of the present invention.

Claims (6)

1. A method of processing an atomic write command, comprising:
receiving an atomic write command;
allocating one or more cache units for the atomic write command;
in response to the one or more cache units all receiving the data to be written by the atomic write command, writing the data to be written by the atomic write command into the one or more cache units; and
indicating to a host that the atomic write command processing is complete;
wherein a cache unit being able to receive the data to be written by the atomic write command means that (1) a sub-command of the atomic write command hits the cache unit, the hit cache unit is in an available state, and the logical address range of the data stored in the cache unit does not overlap with the logical address range accessed by the sub-command; or (2) a cache unit is applied for the sub-command and is in an idle state;
if any one or more of the cache units allocated for the atomic write command cannot receive the data of the atomic write command, suspending processing of the atomic write command; and
for a cache unit that needs to be written with data of the atomic write command but cannot receive it, placing the cache unit in an idle state through an "eviction" or flush process, so that the conflict is removed and the cache unit can receive the data of the atomic write command.
2. The method of claim 1, wherein the one or more cache units allocated for the atomic write command comprise:
one or more cache units hit by the atomic write command; and/or
one or more cache units applied for the atomic write command when part or all of the atomic write command misses the cache units.
3. The method of claim 1, further comprising:
splitting the atomic write command into one or more sub-commands according to the size of a cache unit;
allocating a cache unit for each sub-command;
wherein the logical address range accessed by each sub-command does not exceed the logical address range of one cache unit.
4. The method of claim 3, wherein the cache unit allocated for each sub-command comprises:
a cache unit hit by the sub-command; or
a cache unit applied for the sub-command when the sub-command does not hit any cache unit.
5. The method of claim 3 or 4,
wherein, if any one of the cache units allocated to the one or more sub-commands cannot receive the data to be written by its sub-command, processing of the atomic write command is suspended until all of the cache units allocated to the one or more sub-commands can receive the data to be written by their sub-commands.
6. A storage device, comprising:
one or more processors;
one or more memories;
and a program stored in the one or more memories which, when executed by the one or more processors, causes the one or more processors to perform the method of any one of claims 1-5.
CN201611159579.1A 2016-12-15 2016-12-15 Method and apparatus for processing atomic write commands Active CN108228483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611159579.1A CN108228483B (en) 2016-12-15 2016-12-15 Method and apparatus for processing atomic write commands

Publications (2)

Publication Number Publication Date
CN108228483A CN108228483A (en) 2018-06-29
CN108228483B true CN108228483B (en) 2021-09-14

Family

ID=62651441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611159579.1A Active CN108228483B (en) 2016-12-15 2016-12-15 Method and apparatus for processing atomic write commands

Country Status (1)

Country Link
CN (1) CN108228483B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010540A1 (en) * 2018-07-11 2020-01-16 华为技术有限公司 Atomic operation execution method and apparatus
CN111290974A (en) * 2018-12-07 2020-06-16 北京忆恒创源科技有限公司 Cache elimination method for storage device and storage device
CN114840452A (en) * 2018-12-24 2022-08-02 北京忆芯科技有限公司 Control component
WO2020168516A1 (en) * 2019-02-21 2020-08-27 Alibaba Group Holding Limited Method and system for facilitating fast atomic write operations in shingled magnetic recording hard disk drives
CN110390969B (en) * 2019-06-28 2021-03-09 苏州浪潮智能科技有限公司 Method and system for realizing atomic writing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6931487B2 (en) * 2001-10-22 2005-08-16 Hewlett-Packard Development Company L.P. High performance multi-controller processing
CN101425052A (en) * 2008-12-04 2009-05-06 中国科学院计算技术研究所 Method for implementing transactional memory
CN104267975A (en) * 2014-06-11 2015-01-07 大唐微电子技术有限公司 Card, device and method for processing extensive application data
CN104899158A (en) * 2014-03-05 2015-09-09 华为技术有限公司 Memory access optimization method and memory access optimization device
CN105183378A (en) * 2015-08-31 2015-12-23 北京神州云科数据技术有限公司 Adaptive cache mixed reading/writing method
CN105849688A (en) * 2014-12-01 2016-08-10 华为技术有限公司 Data write-in method, apparatus and device, and storage system
CN106201335A (en) * 2015-05-29 2016-12-07 株式会社东芝 Storage system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6922666B2 (en) * 2000-12-22 2005-07-26 Bull Hn Information Systems Inc. Method and data processing system for performing atomic multiple word reads
US20060294300A1 (en) * 2005-06-22 2006-12-28 Seagate Technology Llc Atomic cache transactions in a distributed storage system
CN101403986B (en) * 2008-11-12 2010-12-08 中国船舶重工集团公司第七○九研究所 Disaster tolerance technology for storing and leading out flash memory data
US20130198447A1 (en) * 2012-01-30 2013-08-01 Infinidat Ltd. Storage system for atomic write which includes a pre-cache
US20140344503A1 (en) * 2013-05-17 2014-11-20 Hitachi, Ltd. Methods and apparatus for atomic write processing

Also Published As

Publication number Publication date
CN108228483A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108228483B (en) Method and apparatus for processing atomic write commands
JP6224253B2 (en) Speculative prefetching of data stored in flash memory
CN108572796B (en) SSD with heterogeneous NVM types
CN106354615B (en) Solid state disk log generation method and device
US8321639B2 (en) Command tracking for direct access block storage devices
CN109164976B (en) Optimizing storage device performance using write caching
US10802733B2 (en) Methods and apparatus for configuring storage tiers within SSDs
US20100287217A1 (en) Host control of background garbage collection in a data storage device
CN107797759B (en) Method, device and driver for accessing cache information
CN107908571B (en) Data writing method, flash memory device and storage equipment
CN107797760B (en) Method and device for accessing cache information and solid-state drive
US10459803B2 (en) Method for management tables recovery
US10223001B2 (en) Memory system
KR20130107070A (en) A solid state drive controller and a method controlling thereof
US11422930B2 (en) Controller, memory system and data processing system
CN108628760B (en) Method and device for atomic write command
CN109213425B (en) Processing atomic commands in solid state storage devices using distributed caching
CN108664212B (en) Distributed caching for solid state storage devices
CN108628761B (en) Atomic command execution method and device
TWI626540B (en) Methods for regular and garbage-collection data access and apparatuses using the same
TWI782847B (en) Method and apparatus for performing pipeline-based accessing management in a storage server
CN110865945B (en) Extended address space for memory devices
CN110515861B (en) Memory device for processing flash command and method thereof
CN111290975A (en) Method for processing read command and pre-read command by using unified cache and storage device thereof
CN115993930A (en) System, method and apparatus for in-order access to data in block modification memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100192 room A302 / 303 / 305 / 306 / 307, 3rd floor, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: 100192 room A302 / 303 / 305 / 306 / 307, 3rd floor, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant before: MEMBLAZE TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant