CN114385235A - Command eviction using host memory buffering - Google Patents
- Publication number
- CN114385235A (application CN202110636725.XA)
- Authority
- CN
- China
- Prior art keywords
- storage device
- data storage
- command
- data
- hmb
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/3004—Arrangements for executing specific machine instructions to perform operations on memory
- G06F9/30047—Prefetch instructions; cache control instructions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/1642—Handling requests for interconnection or transfer for access to memory bus based on arbitration with request queuing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1673—Details of memory controller using buffers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/3004—Arrangements for executing specific machine instructions to perform operations on memory
- G06F9/30043—LOAD or STORE instructions; Clear instruction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/32—Address formation of the next instruction, e.g. by incrementing the instruction counter
- G06F9/322—Address formation of the next instruction, e.g. by incrementing the instruction counter for non-sequential address
- G06F9/327—Address formation of the next instruction, e.g. by incrementing the instruction counter for non-sequential address for interrupts
Abstract
The present disclosure generally relates to efficiently aborting commands using a Host Memory Buffer (HMB). A command contains pointers that direct the data storage device to the various locations where the associated data is located. Upon receiving an abort command, the contents of the host pointers stored in the data storage device RAM are changed to point to the HMB. The data storage device then waits until any already-started transactions over the interface bus that are associated with the command have completed. Thereafter, a failure completion message is issued to the host device.
Description
Cross Reference to Related Applications
This application claims the benefit of U.S. provisional patent application serial No. 63/087,737, filed October 5, 2020, which is incorporated herein by reference.
Background
Technical Field
Embodiments of the present disclosure generally relate to efficiently aborting commands using Host Memory Buffering (HMB).
Description of the related Art
In a storage system, a host device or a data storage device sometimes needs to abort pending commands. There are several situations in which a command should be aborted. The first is when the host device issues an abort command specifying the ID of the command to be aborted, and the data storage device must act accordingly. The second is when the host device deletes a queue that the host device previously created. The host device may delete a submission or completion queue, and the data storage device must abort all associated commands before the queue is deleted.
The third situation is an internal command timeout, where the data storage device itself must terminate a command. The termination may be due to recovery from a NAND failure, which involves a recovery mechanism that rebuilds data from parity information, and the rebuild takes a long time. Termination may also be due to maintenance starvation, which can result from extreme fragmentation of the physical space. Fragmentation reduces throughput, which can cause fatal timeouts when commands and maintenance operations are interleaved. Interleaving of commands and maintenance typically occurs during aggressive power management, where no time is allotted for maintenance, or during dense, high queue depth random write workloads. Termination may also be due to a very high queue depth, where, if a command within the device is stalled for the reasons described above, a not-yet-fetched command may time out before the data storage device ever fetches it.
The fourth scenario is advanced command retry (ACR), where the data storage device decides to fail a command while requiring the host device to re-queue the command at a later time. In general, aborting a command is not a simple flow. The challenge arises when the command has already entered the execution phase. Before aborting such a command, the data storage device must first terminate all tasks associated with the command, and only then issue a completion message to the host device. After issuing the completion message, the data storage device no longer has access to the command's associated host memory buffers.
Previously, prior to aborting a command, the data storage device first terminated all tasks associated with the command by scanning for pending activity, and only then issued a completion message to the host device. Alternatively, the data storage device first waited until already-started tasks completed, and only then issued the completion message to the host device.
Therefore, there is a need in the art to process abort commands more efficiently.
Disclosure of Invention
The present disclosure generally relates to efficiently aborting commands using a Host Memory Buffer (HMB). A command contains pointers that direct the data storage device to the various locations where the associated data is located. Upon receiving an abort command, the contents of the host pointers stored in the data storage device RAM are changed to point to the HMB. The data storage device then waits until any already-started transactions over the interface bus that are associated with the command have completed. Thereafter, a failure completion message is issued to the host device.
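The pointer-eviction flow summarized above can be sketched as a minimal host-language model. This is an illustrative sketch only, not the disclosed firmware; the `Command` class, the `abort_by_eviction` helper, and the drain-buffer address are all hypothetical names introduced here.

```python
class Command:
    """Device-RAM copy of a host command's data pointers (illustrative model)."""
    def __init__(self, cmd_id, host_pointers, inflight_transfers=0):
        self.cmd_id = cmd_id
        self.pointers = list(host_pointers)        # e.g., PRP-style host pointers
        self.inflight_transfers = inflight_transfers  # bus transactions already started

def abort_by_eviction(cmd, hmb_drain_addr):
    """Redirect the command's host pointers into the HMB drain buffer, wait for
    already-started bus transactions, then post a failure completion."""
    # 1. Evict: every host pointer now targets the HMB, so any remaining
    #    transfers for this command land harmlessly in host memory set aside
    #    for the device, rather than in the command's original buffers.
    cmd.pointers = [hmb_drain_addr] * len(cmd.pointers)
    # 2. Wait until transactions that already started on the interface bus finish.
    while cmd.inflight_transfers > 0:
        cmd.inflight_transfers -= 1   # stand-in for polling the DMA engine
    # 3. Only now post the failure completion to the host device.
    return {"cmd_id": cmd.cmd_id, "status": "FAILURE_COMPLETION"}
```

The key property the sketch illustrates is ordering: the pointers are retargeted first, and the completion is posted only after in-flight transfers drain.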
In one embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the controller is configured to: receiving an original command from a host device; starting to execute the original command; receiving an abort request command to abort the original command, wherein the abort request command is received from a host device or generated by a data storage device; modifying one or more metrics of an original command residing in a data storage device; evicting a set of data associated with the original command to a Host Memory Buffer (HMB); and returning a failure completion message to the host device, wherein the failure completion message is returned to the host device after completion of the already issued data transfer using the original command metrics.
In another embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the controller is configured to: receiving an original command from a host device; determining to complete the original command with an Advanced Command Retry (ACR); allocating one or more Host Memory Buffers (HMBs) for holding a set of data associated with the original command; returning a completion message to the host device, wherein the completion message requests the host device to retry the original command; executing the original command while transferring the data to the allocated one or more buffers within the HMB; receiving a reissued original command from the host device; and copying data of the original command for reissue from the allocated one or more buffers within the HMB.
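The ACR embodiment above can be modeled in a few lines: data for the first attempt is staged in HMB buffers so the reissued command is served from the HMB copy rather than re-read from the NVM. The `AcrController` class and method names are illustrative assumptions, not terms from the disclosure.

```python
class AcrController:
    """Illustrative model of the advanced-command-retry (ACR) staging flow."""
    def __init__(self):
        self.hmb = {}   # cmd_id -> data staged in allocated HMB buffers

    def fail_with_retry(self, cmd_id, data_from_nvm):
        # Allocate HMB space and keep executing the command, redirecting the
        # transfer into the HMB, while asking the host to retry later.
        self.hmb[cmd_id] = bytes(data_from_nvm)
        return {"cmd_id": cmd_id, "status": "RETRY_REQUESTED"}

    def handle_reissue(self, cmd_id):
        # The reissued command is completed by copying from the HMB staging
        # buffers, avoiding a second read of the NVM.
        return self.hmb.pop(cmd_id)
```

A usage sequence would be: `fail_with_retry` on the first attempt, then `handle_reissue` when the host re-queues the same command.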
In another embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the controller is configured to: receiving an abort command request from a host device; allocating a first Host Memory Buffer (HMB) and a second HMB for holding a series of data associated with the abort command request, wherein: the first HMB is configured to drain a series of data associated with the abort command request; and the second HMB is configured to point to the drain buffer; and returning a completion message to the host device.
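The two-buffer arrangement in the last embodiment can be pictured as follows: one HMB buffer absorbs (drains) the evicted data, and a second HMB buffer holds pointer entries that all reference the drain buffer. The helper below is a hypothetical sketch with made-up names.

```python
def build_eviction_buffers(drain_addr, num_entries):
    """Return (drain buffer address, pointer buffer) for command eviction.

    The first HMB buffer is the drain area where evicted data lands; the
    second HMB buffer is a pointer list whose every entry points at the
    drain buffer, so all transfers collapse into the same region.
    """
    drain_buffer = drain_addr
    pointer_buffer = [drain_addr] * num_entries
    return drain_buffer, pointer_buffer
```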
Drawings
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
FIG. 1 is a schematic block diagram illustrating a storage system in which a data storage device may be used as a storage device for a host device, according to one embodiment.
FIG. 2 is a schematic illustration of an abort request.
FIG. 3 is a flow diagram illustrating an abort request process, according to one embodiment.
FIG. 4 is a timing diagram of processing an abort request, according to one embodiment.
Fig. 5 is a schematic diagram of a PRP list as described in the NVMe standard.
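In the NVMe scheme that FIG. 5 refers to, a transfer is described by physical region page (PRP) entries: the first entry may start at an offset within a memory page, while every subsequent entry must be page-aligned. A simplified builder, assuming a 4 KiB memory page size (an assumption for illustration; NVMe allows other page sizes):

```python
PAGE_SIZE = 4096  # assumed NVMe memory page size for this sketch

def build_prp_entries(start_addr, length):
    """Return the PRP entries covering [start_addr, start_addr + length).

    The first entry may carry an intra-page offset; all following entries
    are page-aligned, one per additional memory page touched.
    """
    entries = [start_addr]
    # First page covers from start_addr up to the next page boundary.
    next_page = (start_addr // PAGE_SIZE + 1) * PAGE_SIZE
    remaining = length - (next_page - start_addr)
    while remaining > 0:
        entries.append(next_page)  # page-aligned entry
        next_page += PAGE_SIZE
        remaining -= PAGE_SIZE
    return entries
```

For example, an unaligned one-page transfer spans two entries because it crosses a page boundary.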
FIG. 6 is a diagram of two Host Memory Buffers (HMBs) for command eviction, according to one embodiment.
Fig. 7 is a flow diagram illustrating Advanced Command Retry (ACR) according to one embodiment.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
Detailed Description
Hereinafter, reference is made to embodiments of the present disclosure. It should be understood, however, that the disclosure is not limited to the specifically described embodiments. Rather, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the present disclosure. Moreover, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not a limitation of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to "the disclosure" should not be construed as a generalization of any inventive subject matter disclosed herein and should not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
The present disclosure generally relates to efficiently aborting commands using a Host Memory Buffer (HMB). A command contains pointers that direct the data storage device to the various locations where the associated data is located. Upon receiving an abort command, the contents of the host pointers stored in the data storage device RAM are changed to point to the HMB. The data storage device then waits until any already-started transactions over the interface bus that are associated with the command have completed. Thereafter, a failure completion message is issued to the host device.
FIG. 1 is a schematic block diagram illustrating a storage system 100 in which a data storage device 106 may serve as a storage device for a host device 104, according to the disclosed embodiments. For example, the host device 104 may utilize a non-volatile memory (NVM) 110 included in the data storage device 106 to store and retrieve data. The host device 104 includes host DRAM 138, where a portion of the host DRAM 138 is allocated as a Host Memory Buffer (HMB) 140. The HMB 140 may be used by the data storage device 106 as an additional working area or an additional storage area. In some examples, the HMB 140 may not be accessible to the host device. In some examples, the storage system 100 may include a plurality of storage devices, such as the data storage device 106, operable as a storage array. For example, the storage system 100 may include a plurality of data storage devices 106 configured as a redundant array of inexpensive/independent disks (RAID) that collectively function as a mass storage device for the host device 104.
The data storage device 106 includes a controller 108, an NVM 110, a power supply 111, a volatile memory 112, an interface 114, and a write buffer 116. In some examples, the data storage device 106 may include additional components not shown in FIG. 1 for the sake of clarity. For example, the data storage device 106 may include a printed circuit board (PCB) to which the components of the data storage device 106 are mechanically attached and which includes conductive traces that electrically interconnect the components of the data storage device 106, and the like. In some examples, the physical dimensions and connector configuration of the data storage device 106 may conform to one or more standard form factors. Some example standard form factors include, but are not limited to, 3.5" data storage devices (e.g., HDD or SSD), 2.5" data storage devices, 1.8" data storage devices, Peripheral Component Interconnect (PCI), PCI-extended (PCI-X), and PCI Express (PCIe) (e.g., PCIe x1, x4, x8, x16, PCIe Mini Card, MiniPCI, etc.). In some examples, the data storage device 106 may be directly coupled (e.g., directly soldered) to a motherboard of the host device 104.
The interface 114 of the data storage device 106 may include one or both of a data bus for exchanging data with the host device 104 and a control bus for exchanging commands with the host device 104. The interface 114 may operate in accordance with any suitable protocol. For example, the interface 114 may operate according to one or more of the following protocols: Advanced Technology Attachment (ATA) (e.g., serial ATA (SATA) and parallel ATA (PATA)), Fibre Channel Protocol (FCP), Small Computer System Interface (SCSI), serially attached SCSI (SAS), PCI and PCIe, Non-Volatile Memory Express (NVMe), OpenCAPI, GenZ, Cache Coherent Interconnect for Accelerators (CCIX), Open Channel SSD (OCSSD), and the like. The electrical connection (e.g., the data bus, the control bus, or both) of the interface 114 is electrically connected to the controller 108, providing an electrical connection between the host device 104 and the controller 108 and allowing data to be exchanged between the host device 104 and the controller 108. In some examples, the electrical connection of the interface 114 may also permit the data storage device 106 to receive power from the host device 104. For example, as shown in FIG. 1, the power supply 111 may receive power from the host device 104 via the interface 114.
The NVM 110 can include multiple memory devices or memory cells. The NVM 110 can be configured to store and/or retrieve data. For example, a storage unit of the NVM 110 can receive data and receive a message from the controller 108 instructing the storage unit to store the data. Similarly, a storage unit of the NVM 110 can receive a message from the controller 108 instructing the storage unit to retrieve data. In some examples, each of the memory cells may be referred to as a die. In some examples, a single physical chip may include multiple dies (i.e., multiple memory cells). In some examples, each memory cell may be configured to store a relatively large amount of data (e.g., 128MB, 256MB, 512MB, 1GB, 2GB, 4GB, 8GB, 16GB, 32GB, 64GB, 128GB, 256GB, 512GB, 1TB, etc.).
In some examples, each memory cell of NVM 110 may include any type of non-volatile memory device, such as a flash memory device, a Phase Change Memory (PCM) device, a resistive random access memory (ReRAM) device, a Magnetoresistive Random Access Memory (MRAM) device, a ferroelectric random access memory (F-RAM), a holographic memory device, and any other type of non-volatile memory device.
The NVM 110 may include a plurality of flash memory devices or memory cells. NVM flash memory devices may include NAND- or NOR-based flash memory devices and may store data based on the charge contained in the floating gate of the transistor of each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of blocks, which may be further divided into a plurality of pages. Each of the plurality of blocks within a particular memory device may include a plurality of NVM cells. A row of NVM cells may be electrically connected using a word line to define a page of the plurality of pages. The respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), or quad-level cell (QLC). The controller 108 may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level.
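The die/block/page hierarchy described above maps naturally onto a simple address decomposition. The geometry constants below are made-up example values, not figures from the disclosure; the point is only how a flat page index resolves to a (die, block, page) triple.

```python
# Example geometry (illustrative values, not from the disclosure).
PAGES_PER_BLOCK = 256
BLOCKS_PER_DIE = 1024

def locate_page(flat_page_index):
    """Resolve a flat page index into (die, block, page) coordinates."""
    die, rem = divmod(flat_page_index, BLOCKS_PER_DIE * PAGES_PER_BLOCK)
    block, page = divmod(rem, PAGES_PER_BLOCK)
    return die, block, page
```

Writes and reads would address individual pages this way, while an erase would target a whole (die, block) pair, matching the page-level program / block-level erase granularity noted above.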
The data storage device 106 includes a power supply 111 that can provide power to one or more components of the data storage device 106. When operating in the standard mode, the power supply 111 may power one or more components using power provided by an external device, such as the host device 104. For example, the power supply 111 may power one or more components using power received from the host device 104 via the interface 114.
In some examples, the power supply 111 may include one or more power storage components configured to provide power to one or more components when operating in an off mode, such as in the event of cessation of power received from an external device. In this manner, power supply 111 may be used as an on-board backup power supply. Some examples of one or more power storage components include, but are not limited to, capacitors, supercapacitors, batteries, and the like.
In some examples, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or size of the one or more power storage components also increases.
The data storage device 106 also includes a volatile memory 112 that may be used by the controller 108 to store information. The volatile memory 112 may include one or more volatile memory devices. In some examples, the controller 108 may use the volatile memory 112 as a cache. For example, the controller 108 may store cached information in the volatile memory 112 until the cached information is written to the non-volatile memory 110. As shown in FIG. 1, the volatile memory 112 may consume power received from the power supply 111. Examples of the volatile memory 112 include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static RAM (SRAM), and synchronous dynamic RAM (SDRAM) (e.g., DDR1, DDR2, DDR3, DDR3L, LPDDR3, DDR4, LPDDR4, etc.).
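The cache-then-persist behavior described above can be sketched minimally: data written by the host is held in volatile memory until it is flushed to the NVM. The `CachingController` class is a hypothetical illustration, not the controller 108's actual implementation.

```python
class CachingController:
    """Illustrative cache-then-flush model of volatile memory use."""
    def __init__(self):
        self.volatile = {}   # LBA -> data held in volatile memory (cache)
        self.nvm = {}        # LBA -> data persisted to the NVM

    def write(self, lba, data):
        # Cached in volatile memory until written to the NVM.
        self.volatile[lba] = data

    def flush(self):
        # Persist all cached entries, then release the volatile copies.
        self.nvm.update(self.volatile)
        self.volatile.clear()
```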
The data storage device 106 includes a controller 108 that may manage one or more operations of the data storage device 106. For example, the controller 108 may manage reading data from the NVM 110 and/or writing data to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 can initiate a data storage command to store data to the NVM 110 and monitor the progress of the data storage command. The controller 108 can determine at least one operating characteristic of the storage system 100 and store the at least one operating characteristic to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 temporarily stores data associated with the write command in an internal memory or write buffer 116 before sending the data to the NVM 110. In some other embodiments, HMB 140 may be utilized.
FIG. 2 is a schematic illustration of an abort request. Aspects of FIG. 2 may be similar to the storage system 100 of FIG. 1. For example, the host 220 may be the host device 104, the controller 202 may be the controller 108, and the NVM 222 may be the NVM 110. During operation of a data storage device, such as the data storage device 106 of FIG. 1, the host 220 or the controller 202 may abort pending commands, such as host-generated read commands or host-generated write commands. The abort command may be issued by the host 220 or by the controller 202. For example, the abort command may be generated by the main processor 204 of the controller 202, where the main processor 204 sends the abort command to one or more processors 206a-206n of the controller 202.
When an abort command is received by one or more of the processors 206a-206n, the one or more processors may either terminate all tasks associated with the command to be aborted by scanning the pending commands, or wait to terminate the pending commands that have not yet been initiated, where pending commands that have already been allowed to start are completed before all other pending commands are terminated. After terminating the associated pending commands, a completion message is issued to the host 220.
With respect to FIG. 2, the main processor 204 issues an abort command request to one or more of the processors 206a-206n. The one or more processors 206a-206n utilize a hardware (HW) accelerator 208 to scan each pending command and terminate the associated pending commands. After terminating the associated pending commands, the one or more processors issue a completion message to the data path 210, which may be a failure completion message if the abort command was initiated by the data storage device, where the data path 210 transmits the completion message to the host 220.
In conventional operation, the data path 210 may be used to transfer data to and from the NVM 222 by utilizing a Direct Memory Access (DMA) module 212, encode and decode data using an Error Correction Code (ECC) engine 218, apply security protocols via the security engine 214, and manage the storage of data via the RAID module 216. The abort command operation may have a high latency before a completion message or a failure completion message is issued to the host 220. Due to the high latency, the buffers and resources of the data storage device may be utilized inefficiently. Furthermore, certain instances of the abort command operation may need to be handled separately or may require a separate procedure to complete.
FIG. 3 is a flow diagram illustrating an abort request procedure 300, according to one embodiment. At block 302, one or more processors, such as one or more of the processors 206a-206n of FIG. 2, where the one or more processors may be a component of a controller (such as the controller 202 of FIG. 2), receive an abort request or abort command. In some embodiments, the abort request may be generated by a host, such as the host 220, and transmitted to the controller via a data bus. In other embodiments, the abort request may be generated by a main processor, such as the main processor 204 of FIG. 2, where the main processor sends the abort request to the relevant processor of the one or more processors.
At block 304, the controller modifies the contents of the buffer pointers residing in the internal copy of the command. The internal copy of the command may be the command stored in volatile memory, such as DRAM, of the data storage device. The buffer pointers may point to an HMB, such as HMB 140 of FIG. 1. In some embodiments, the HMB includes two 4 KB HMB buffers. The previously listed values are not intended to be limiting, but rather to provide examples of possible embodiments. At block 306, the controller determines whether all current transfers are complete. A current transfer is a transfer of a command that has been started but not yet completed. If the current transfers have not completed, the controller waits for them to complete. However, if the current transfers are complete at block 306, then at block 308, the controller issues a completion message or a failure completion message to the host device.
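The pointer-modification and wait steps of blocks 304-308 can be sketched as follows. This is an illustrative model only: the `Command` class, the buffer addresses, and the in-flight counter are hypothetical stand-ins for the controller's internal command copy and DMA state, not actual firmware structures.

```python
# Hypothetical sketch of blocks 304-308 of FIG. 3. All names and addresses
# are illustrative; they model the controller's internal copy of a command.

DRAIN_HMB_ADDR = 0x1000  # assumed address of the 4 KB HMB drain buffer
LIST_HMB_ADDR = 0x2000   # assumed address of the 4 KB HMB pointer-list buffer

class Command:
    def __init__(self, cid, prp1, prp2):
        self.cid = cid       # command identifier
        self.prp1 = prp1     # internal copy of the host PRP1 pointer
        self.prp2 = prp2     # internal copy of the host PRP2 pointer
        self.inflight = 0    # transfers already issued with the original pointers

def abort(cmd, host_generated):
    # Block 304: redirect the internal buffer pointers to the HMB, so any
    # further transfers drain into the HMB instead of the host data buffers.
    cmd.prp1 = DRAIN_HMB_ADDR
    cmd.prp2 = LIST_HMB_ADDR
    # Block 306: wait for transfers already started with the original
    # pointers to finish (modeled here by draining a counter).
    while cmd.inflight:
        cmd.inflight -= 1
    # Block 308: per FIG. 2, a device-initiated abort returns a failure
    # completion message; a host-generated abort completes normally.
    return "completion" if host_generated else "failure completion"
```

The key point of the sketch is that the pointer rewrite at block 304 happens before the wait at block 306, so transfers issued after the rewrite can only land in the HMB.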
FIG. 4 is a timing diagram of processing an abort request, according to one embodiment. Aspects of FIG. 4 may be similar to those described in FIG. 3. At time 1, a host device (such as the host device 104 of FIG. 1) issues a command to a data storage device (such as the data storage device 106 of FIG. 1). The command may be a read command, a write command, or the like. At some time after the host issues the command to the data storage device (such as time 2), due to transfer delays and the like, a controller (such as the controller 202 of FIG. 2) initiates a data transfer operation.
While the data transfer operation is being performed, the data storage device receives an abort command at time 3. In one embodiment, the abort command may be generated by the host device. In another embodiment, the abort command may be generated by the data storage device, where the abort command is generated by the controller or a main processor, such as the main processor 204 of FIG. 2. At time 4, the data storage device modifies one or more pointers associated with the aborted command residing in the data storage device.
At time 5, the data storage device sends a failure completion message to the host device, which occurs after the data transfer operation of time 2 completes. At time 6, the data transfer operation has stopped, and the data storage device drains the set of data associated with the aborted command to an HMB, such as HMB 140 of FIG. 1. In some embodiments, the draining of the set of data begins before the failure completion message is issued to the host. In other embodiments, the failure completion message is issued before the data transfer to the HMB begins.
FIG. 5 is a schematic diagram of a PRP list as described in the NVMe standard. The command 502 includes a plurality of Physical Region Page (PRP) pointers, such as a first PRP1 504 and a second PRP2 506, where each PRP pointer points to a buffer of a plurality of buffers. The plurality of buffers may be part of an HMB, such as HMB 140 of FIG. 1. Further, in FIG. 5, each page, page 0 518, page 1 520, page 2 522, and page 3 524, represents a different buffer. In one example, each of the buffers may have a size that is aligned with the size of the command or virtual command, such as about 4K. The virtual command may be a command generated by the data storage device to set a parameter for the size of the buffers in the HMB. The first PRP1 504 and the second PRP2 506 include an offset "xx", where the offset is an offset from a location (such as a header). Each PRP pointer may be a pointer to a buffer or a pointer to a list of entries.
For example, the first PRP1 504 includes a first pointer 526 pointing to the first page 0 518. The second PRP2 506 includes a second pointer 528 that points to the first PRP entry 0 510 of the PRP list 508. The PRP list 508 has an offset of 0, such that the PRP list 508 is aligned with the size of the buffer. For example, the first PRP entry 0 510 includes a third pointer 530 pointing to the second page 1 520, the second PRP entry 1 512 includes a fourth pointer 532 pointing to the third page 2 522, and the third PRP entry 2 514 includes a fifth pointer 534 pointing to the fourth page 3 524. The last entry of the PRP list 508 can include a pointer to a subsequent or new PRP list.
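The pointer chain of FIG. 5 can be illustrated with a toy, byte-addressed model. This is a simplification, not the NVMe data structures themselves: 4096-byte pages and 8-byte list entries are assumed, and `memory` is a plain dictionary standing in for host DRAM.

```python
# Toy model of the PRP walk in FIG. 5; assumes 4096-byte pages and
# 8-byte PRP list entries, with `memory` standing in for host DRAM.
PAGE = 4096
ENTRY = 8

def gather_buffers(prp1, prp2, nbytes, memory):
    """Return the page addresses an nbytes-long transfer touches."""
    pages = [prp1 & ~(PAGE - 1)]               # PRP1 points into the first page
    remaining = nbytes - (PAGE - prp1 % PAGE)  # bytes left after the first page
    entry = prp2                               # PRP2 points at a PRP list
    while remaining > 0:
        addr = memory[entry]
        # The last slot of a full list chains to the next list, not a buffer.
        if entry % PAGE == PAGE - ENTRY and remaining > PAGE:
            entry = addr
            continue
        pages.append(addr)
        remaining -= PAGE
        entry += ENTRY
    return pages
```

For example, with PRP1 pointing 0x10 bytes into page 0x4000 and a PRP list at 0x2000 holding three entries, a transfer spanning four pages resolves to those four page addresses in order.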
FIG. 6 is a diagram of two Host Memory Buffers (HMBs) for command draining, according to one embodiment. The NVMe command 602 is a stored copy of a command received by the controller, where the NVMe command 602 may be stored in the volatile memory or the non-volatile memory of the data storage device. The first PRP1 604a can be the first PRP1 504 of FIG. 5, and the second PRP2 604b can be the second PRP2 506 of FIG. 5. The value of the first PRP1 604a is overwritten to point to a first HMB buffer 606a. The second PRP2 604b points to a second HMB buffer 606b.
The HMB, such as HMB 140, includes the first HMB buffer 606a and the second HMB buffer 606b. The first HMB buffer 606a and the second HMB buffer 606b may each have a size of about 4 KB. The first HMB buffer 606a may serve as a drain buffer into which data associated with the aborted command is drained or transferred in both read and write operations. The second HMB buffer 606b holds a list of pointers 608a-608n.
In an initialization phase, the second HMB buffer 606b may be initialized by a controller (such as the controller 202 of FIG. 2) of a data storage device (such as the data storage device 106 of FIG. 1). The initialization phase may occur during a wake-up operation of the data storage device, such as when power is provided to the data storage device. Each pointer of the plurality of pointers 608a-608n of the second HMB buffer 606b points to the first HMB buffer 606a. Further, rather than the last pointer 608n pointing to a subsequent or next buffer list, the last pointer 608n points to the first pointer 608a of the same HMB buffer. By pointing each pointer of the second HMB buffer 606b to the first HMB buffer 606a, pointing the last pointer 608n of the second HMB buffer 606b to the first pointer 608a, and pointing the first PRP1 pointer to the first HMB buffer 606a, the relevant data associated with a read operation or a write operation will be drained to the first HMB buffer 606a upon receipt of an abort command.
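The initialization described above can be sketched with the same kind of toy model; the addresses, the 8-byte entry size, and the dictionary memory are assumptions for illustration. Every slot of the pointer-list buffer resolves to the drain buffer, and the last slot chains back to the head of the same list, so a drain of any length keeps landing in the single 4 KB drain buffer.

```python
# Sketch of FIG. 6: a 4 KB pointer-list buffer whose entries all resolve to a
# single 4 KB drain buffer. Addresses and entry size are illustrative.
PAGE = 4096
ENTRY = 8
DRAIN_ADDR = 0x1000   # first HMB buffer (the drain buffer)
LIST_ADDR = 0x2000    # second HMB buffer (the pointer list)

def init_drain_list(memory):
    slots = PAGE // ENTRY
    for i in range(slots - 1):
        memory[LIST_ADDR + i * ENTRY] = DRAIN_ADDR  # every entry -> drain buffer
    # Last entry chains back to the head of the same list, not to a next list.
    memory[LIST_ADDR + (slots - 1) * ENTRY] = LIST_ADDR

def drain(memory, nbytes):
    """Walk the list as a DMA engine would; return the distinct destinations."""
    entry, written, dests = LIST_ADDR, 0, set()
    while written < nbytes:
        addr = memory[entry]
        if addr == LIST_ADDR:     # chain entry: follow it, write nothing
            entry = LIST_ADDR
            continue
        dests.add(addr)
        written += PAGE
        entry += ENTRY
    return dests
```

Even a drain far larger than the 511 data slots of the list wraps around via the last entry and still touches only the one drain buffer.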
FIG. 7 is a flow diagram 700 illustrating Advanced Command Retry (ACR), according to one embodiment. When the data storage device receives a command that includes an ACR request, one or more HMB buffers may be allocated to hold the set of data for the command. When a failed command has an ACR, the host (such as the host device 104 of FIG. 1) is notified of the failed command, and the host can re-queue the failed command in the command buffer after a delay (such as about 10 seconds). The delay time may be published by the data storage device, such as the data storage device 106 of FIG. 1, via an Identify Controller command.
Rather than having the data associated with the failed command re-queued in a host buffer (such as the host DRAM 138 of FIG. 1), the data associated with the failed command is queued in the HMB (such as HMB 140 of FIG. 1) by the data storage device. The flow diagram 700 is initiated at block 702 when an ACR request for a command is received. At block 704, HMB buffers are allocated. The HMB buffers include a first HMB buffer (such as the first HMB buffer 606a of FIG. 6) and a second HMB buffer (such as the second HMB buffer 606b of FIG. 6), where the first HMB buffer is a drain buffer and the second HMB buffer is a list of buffer pointers that point to the first HMB buffer.
At block 706, the internal versions of the pointers (i.e., PRP1 and PRP2) are modified to point to the allocated HMB buffers. For example, the PRP1 pointer may point to the first HMB buffer and the PRP2 pointer may point to the second HMB buffer. At block 708, the controller determines whether all current transfers of the command that have already started to the associated target host buffers are complete. If the current transfers have not completed, the controller waits for them to complete.
At block 710, after all current transfers are completed, the controller issues a failure completion message with an ACR indication to the host for the command that has failed. At block 712, the one or more HMB buffers are accessed such that the data of the failed command is transferred to the location of the one or more HMB buffers. A representation of the series of transfers is issued on the interface of the host device, where the series of transfers is stored in the one or more HMB buffers. When the HMB buffers are accessed, the data associated with the failed command is transferred to the first HMB buffer (i.e., the data is drained to the HMB buffer). At block 714, the host device re-queues the command to the data storage device, where the re-queued command is the original command that had failed. At block 716, the data associated with the re-queued command is copied from the relevant location in the HMB (or, in some embodiments, the one or more HMB buffers) to the host buffer. The re-queued command is executed by the controller using the data stored in the host buffer.
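The ACR round trip of FIG. 7 can be condensed into a minimal sketch, under the assumption that the HMB and the host buffer can be modeled as dictionaries; the function names and status fields are hypothetical, not NVMe-defined.

```python
# Hypothetical sketch of blocks 702-716 of FIG. 7. The dict-based "memories"
# and the status fields are illustrative only.

def fail_with_acr(cmd_data, hmb, retry_delay_s=10):
    # Blocks 704-712: allocate HMB space, drain the command's data set into
    # it, and fail the command with an ACR indication plus a retry delay
    # (the delay itself would be published via the Identify Controller command).
    hmb["cmd"] = list(cmd_data)
    return {"status": "failed", "acr": True, "retry_after_s": retry_delay_s}

def retry(hmb, host_buffer):
    # Blocks 714-716: the host re-queues the original command; the device
    # copies the saved data from the HMB into the host buffer and executes
    # the re-queued command using that data.
    host_buffer["cmd"] = list(hmb["cmd"])
    return host_buffer["cmd"]
```

The benefit modeled here is that the retried command never re-sources its data: the device already holds it in the HMB and only needs a copy back to the host buffer.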
By changing the contents of the command pointers, abort commands can be processed more efficiently, thereby improving storage device performance. Aborting commands in a simple manner and without delay improves efficiency compared to the complex, high-latency flows that exist today. Additionally, using the HMB as a cache buffer for commands that fail with an ACR will speed up processing.
In one embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the controller is configured to: receive an original command from a host device; begin executing the original command; receive an abort request command to abort the original command, wherein the abort request command is received from the host device or generated by the data storage device; modify one or more pointers of the original command residing in the data storage device; drain a set of data associated with the original command to a Host Memory Buffer (HMB); and return a failure completion message to the host device, wherein the failure completion message is returned to the host device after completion of the data transfers already issued using the original command pointers. The controller is further configured to continue processing data transfers associated with the original command after receiving the abort request. Processing the data transfers continues after the modifying of the one or more pointers is completed. The draining of the set of data occurs after returning the failure completion message, before returning the failure completion message, or a combination thereof. The failure completion message is delivered while data transfers associated with the original command are still being processed, wherein the data transfers occurring after delivery of the failure completion message utilize the modified one or more pointers. Draining the set of data includes pointing each pointer to a drain buffer. The last pointer points to the same buffer list in which the last pointer resides.
In another embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the controller is configured to: receive an original command from a host device; determine to complete the original command with an Advanced Command Retry (ACR); allocate one or more buffers within a Host Memory Buffer (HMB) for holding a set of data associated with the original command; return a completion message to the host device, wherein the completion message requests the host device to retry the original command; execute the original command while transferring the data to the allocated one or more buffers within the HMB; receive a reissued original command from the host device; and copy the data for the reissued original command from the allocated one or more buffers within the HMB. When the controller returns the completion message to the host device, a representation of the data is issued on an interface of the host device, and the data is stored in the HMB, wherein the HMB is not used to drain data, and wherein the HMB includes a plurality of buffers of sufficient size to hold the data to ensure that the data storage device is able to copy the data from the HMB to the host device upon receiving a command from the host device to retrieve the data. The controller is further configured to receive a reissued command of the original command from the host device. The controller is further configured to copy data from the one or more HMB buffers. The copying includes copying a series of transfers from the one or more HMB buffers to a host buffer for the reissued command.
The controller is configured to wait for completion of current transfers associated with the original command that have already begun before the completion message is returned, wherein after the controller returns the completion message, the data storage device does not access the original buffers of the original command, and wherein after the controller returns the completion message, the data storage device is able to access the one or more HMB buffers. During the wait, and before returning the completion message, the data storage device can access the original buffers and the one or more HMB buffers in parallel.
In another embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the controller is configured to: receive an abort command request from a host device; allocate a first Host Memory Buffer (HMB) and a second HMB for holding a series of data associated with the abort command request, wherein the first HMB is configured to drain the series of data associated with the abort command request and the second HMB is configured to point to a drain buffer; and return a completion message to the host device. The first HMB is the drain buffer. Data associated with the abort command is drained to the drain buffer in both read and write operations. The second HMB is configured to contain a list of buffer pointers. All pointers in the buffer pointer list except the last pointer point to the drain buffer. The last pointer in the list of buffer pointers points to another pointer in the list of buffer pointers.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (20)
1. A data storage device, the data storage device comprising:
one or more memory devices; and
a controller coupled to the one or more memory devices, wherein the controller is configured to:
receive an original command from a host device;
begin executing the original command;
receive an abort request command to abort the original command, wherein the abort request command is received from the host device or generated by the data storage device;
modify one or more pointers of the original command residing in the data storage device;
drain a set of data associated with the original command to a Host Memory Buffer (HMB); and
return a failure completion message to the host device, wherein the failure completion message is returned to the host device after completion of the data transfers already issued using the original command pointers.
2. The data storage device of claim 1, wherein the controller is further configured to continue processing data transfers associated with the original command after receiving the abort request.
3. The data storage device of claim 2, wherein processing of the data transfers continues after the modifying of the one or more pointers is completed.
4. The data storage device of claim 1, wherein the draining of the set of data occurs after returning the failure completion message, before returning the failure completion message, or a combination thereof.
5. The data storage device of claim 1, wherein the failure completion message is delivered while a data transfer associated with the original command is still being processed, wherein the data transfer occurring after delivery of the failure completion message utilizes the modified one or more pointers.
6. The data storage device of claim 1, wherein draining the set of data comprises pointing each pointer to a drain buffer.
7. The data storage device of claim 6, wherein a last pointer points to the same buffer list in which the last pointer resides.
8. A data storage device, the data storage device comprising:
one or more memory devices; and
a controller coupled to the one or more memory devices, wherein the controller is configured to:
receive an original command from a host device;
determine to complete the original command with an Advanced Command Retry (ACR);
allocate one or more buffers within a Host Memory Buffer (HMB) for holding a set of data associated with the original command;
return a completion message with an ACR indication to the host device, wherein the completion message requests the host device to retry the original command;
execute the original command while transferring data to the allocated one or more buffers within the HMB;
receive a reissued original command from the host device; and
copy data for the reissued original command from the allocated one or more buffers within the HMB.
9. The data storage device of claim 8, wherein, when the controller returns the completion message to the host device:
a representation of the data is issued on an interface of the host device; and
the data is stored in the HMB, wherein the HMB is not used to drain data, and wherein the HMB includes a plurality of buffers of sufficient size to hold the data to ensure that the data storage device is able to copy the data from the HMB to the host device upon receiving a command from the host device to retrieve the data.
10. The data storage device of claim 8, wherein the controller is further configured to:
receive a reissued command of the original command from the host device.
11. The data storage device of claim 10, wherein the controller is further configured to copy data from the one or more HMBs.
12. The data storage device of claim 11, wherein the copying comprises copying the data from the one or more HMBs to a host buffer for the reissued command.
13. The data storage device of claim 8, wherein the controller is configured to wait for completion of current transfers associated with the original command that have already begun before returning the completion message, wherein after the controller returns the completion message, the data storage device does not access the original buffers of the original command, and wherein after the controller returns the completion message, the data storage device has access to the one or more HMBs.
14. The data storage device of claim 13, wherein during the waiting and before returning the completion message, the data storage device can access the original buffers and the one or more HMBs in parallel.
15. A data storage device, the data storage device comprising:
one or more memory devices; and
a controller coupled to the one or more memory devices, wherein the controller is configured to:
receive an abort command request from a host device;
allocate a first Host Memory Buffer (HMB) and a second HMB for holding a series of data associated with the abort command request, wherein:
the first HMB is configured to drain the series of data associated with the abort command request; and
the second HMB is configured to point to a drain buffer; and
return a completion message to the host device.
16. The data storage device of claim 15, wherein the first HMB is the drain buffer.
17. The data storage device of claim 15, wherein data associated with the abort command is drained to the drain buffer in both read and write operations.
18. The data storage device of claim 15, wherein the second HMB is configured to contain a list of buffer pointers.
19. The data storage device of claim 18, wherein all pointers in the list of buffer pointers except the last pointer point to the drain buffer.
20. The data storage device of claim 19, wherein the last pointer in the list of buffer pointers points to another pointer in the list of buffer pointers.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063087737P | 2020-10-05 | 2020-10-05 | |
US63/087,737 | 2020-10-05 | ||
US17/184,527 | 2021-02-24 | ||
US17/184,527 US11500589B2 (en) | 2020-10-05 | 2021-02-24 | Command draining using host memory buffer |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114385235A true CN114385235A (en) | 2022-04-22 |
Family
ID=80738159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110636725.XA Pending CN114385235A (en) | 2020-10-05 | 2021-06-07 | Command eviction using host memory buffering |
Country Status (4)
Country | Link |
---|---|
US (2) | US11500589B2 (en) |
KR (1) | KR102645982B1 (en) |
CN (1) | CN114385235A (en) |
DE (1) | DE102021114458A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI831474B (en) * | 2022-11-15 | 2024-02-01 | 瑞昱半導體股份有限公司 | Electronic apparatus and control method for managing available pointers of packet buffer |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022071543A (en) * | 2020-10-28 | 2022-05-16 | キヤノン株式会社 | Control device and method for controlling control device |
US11941298B2 (en) * | 2021-05-11 | 2024-03-26 | Mediatek Inc. | Abort handling by host controller for storage device |
US11809742B2 (en) * | 2021-09-20 | 2023-11-07 | Western Digital Technologies, Inc. | Recovery from HMB loss |
US11914900B2 (en) | 2022-05-31 | 2024-02-27 | Western Digital Technologies, Inc. | Storage system and method for early command cancelation |
US20240168682A1 (en) * | 2022-11-18 | 2024-05-23 | Western Digital Technologies, Inc. | Failure Recovery Using Command History Buffer in Storage Device |
US20240232068A1 (en) * | 2023-01-05 | 2024-07-11 | Western Digital Technologies, Inc. | Data Storage Device and Method for Race-Based Data Access in a Multiple Host Memory Buffer System |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4603382A (en) * | 1984-02-27 | 1986-07-29 | International Business Machines Corporation | Dynamic buffer reallocation |
US6694390B1 (en) * | 2000-09-11 | 2004-02-17 | Intel Corporation | Managing bus transaction dependencies |
US7752340B1 (en) | 2006-03-31 | 2010-07-06 | Emc Corporation | Atomic command retry in a data storage system |
WO2009060500A1 (en) * | 2007-11-07 | 2009-05-14 | Fujitsu Limited | Read/write processing method for medium storage device and medium storage device |
US20130179614A1 (en) | 2012-01-10 | 2013-07-11 | Diarmuid P. Ross | Command Abort to Reduce Latency in Flash Memory Access |
US9792046B2 (en) * | 2014-07-31 | 2017-10-17 | Sandisk Technologies Llc | Storage module and method for processing an abort command |
CN104657145B (en) * | 2015-03-09 | 2017-12-15 | 上海兆芯集成电路有限公司 | The system and method that repeating transmission for microprocessor is stopped |
US9996262B1 (en) | 2015-11-09 | 2018-06-12 | Seagate Technology Llc | Method and apparatus to abort a command |
US10725677B2 (en) * | 2016-02-19 | 2020-07-28 | Sandisk Technologies Llc | Systems and methods for efficient power state transitions |
US10521305B2 (en) * | 2016-04-29 | 2019-12-31 | Toshiba Memory Corporation | Holdup time measurement for solid state drives |
US10521118B2 (en) * | 2016-07-13 | 2019-12-31 | Sandisk Technologies Llc | Methods, systems, and computer readable media for write classification and aggregation using host memory buffer (HMB) |
US10372378B1 (en) | 2018-02-15 | 2019-08-06 | Western Digital Technologies, Inc. | Replacement data buffer pointers |
US10642536B2 (en) | 2018-03-06 | 2020-05-05 | Western Digital Technologies, Inc. | Non-volatile storage system with host side command injection |
KR102599188B1 (en) * | 2018-11-09 | 2023-11-08 | 삼성전자주식회사 | Storage device using host memory and operating method thereof |
US11861217B2 (en) * | 2020-10-05 | 2024-01-02 | Western Digital Technologies, Inc. | DRAM-less SSD with command draining |
US20220113901A1 (en) * | 2020-10-12 | 2022-04-14 | Qualcomm Incorporated | Read optional and write optional commands |
2021

- 2021-02-24 US US17/184,527 patent/US11500589B2/en active Active
- 2021-06-04 DE DE102021114458.2A patent/DE102021114458A1/en active Pending
- 2021-06-07 CN CN202110636725.XA patent/CN114385235A/en active Pending
- 2021-06-11 KR KR1020210076083A patent/KR102645982B1/en active IP Right Grant

2022

- 2022-11-03 US US17/980,177 patent/US11954369B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US11954369B2 (en) | 2024-04-09 |
KR20220045548A (en) | 2022-04-12 |
US20220107758A1 (en) | 2022-04-07 |
US11500589B2 (en) | 2022-11-15 |
US20230051007A1 (en) | 2023-02-16 |
KR102645982B1 (en) | 2024-03-08 |
DE102021114458A1 (en) | 2022-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102645982B1 (en) | Command Draining Using Host Memory Buffer | |
US9927999B1 (en) | Trim management in solid state drives | |
US11861217B2 (en) | DRAM-less SSD with command draining | |
US11204833B1 (en) | NVM endurance group controller using shared resource architecture | |
US11556268B2 (en) | Cache based flow for a simple copy command | |
KR20220010424A (en) | Parallel boot execution of memory devices | |
CN113744783A (en) | Write data transfer scheduling in a partitioned namespace (ZNS) drive | |
US11513736B2 (en) | Revised host command generation for unaligned access | |
WO2024063821A1 (en) | Dynamic and shared cmb and hmb allocation | |
WO2024063822A1 (en) | Partial speed changes to improve in-order transfer | |
US20230289226A1 (en) | Instant Submission Queue Release | |
US11853571B2 (en) | Storage devices hiding parity swapping behavior | |
US11733920B2 (en) | NVMe simple copy command support using dummy virtual function | |
JP2024525777A (en) | Host Memory Buffer Cache Management | |
US11138066B1 (en) | Parity swapping to DRAM | |
US20210333996A1 (en) | Data Parking for SSDs with Streams | |
WO2021257117A1 (en) | Fast recovery for persistent memory region (pmr) of a data storage device | |
US20230297277A1 (en) | Combining Operations During Reset | |
US20220405011A1 (en) | Latency On Indirect Admin Commands | |
US20230214254A1 (en) | PCIe TLP Size And Alignment Management | |
US20240143512A1 (en) | Write buffer linking for easy cache reads | |
US20240086108A1 (en) | Parallel fragmented sgl fetching for hiding host turnaround time | |
WO2023064003A1 (en) | Efficient data path in compare command execution | |
CN116126746A (en) | DRAM-free SSD with HMB cache management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20240827 Address after: California, USA Applicant after: SanDisk Technology Co. Country or region after: U.S.A. Address before: California, USA Applicant before: Western Digital Technologies, Inc. Country or region before: U.S.A. |
TA01 | Transfer of patent application right |