CN116700597A - DRAM-free SSD with low latency secure HMB

DRAM-free SSD with low latency secure HMB

Info

Publication number
CN116700597A
Authority
CN
China
Prior art keywords
data
hmb
host
signature
memory devices
Prior art date
Legal status
Pending
Application number
CN202210545036.2A
Other languages
Chinese (zh)
Inventor
S. Benisty
Current Assignee
Western Digital Technologies Inc
Original Assignee
Western Digital Technologies Inc
Priority date
Filing date
Publication date
Priority claimed from U.S. patent application Ser. No. 17/652,652 (published as US 2022/0179593 A1)
Application filed by Western Digital Technologies Inc filed Critical Western Digital Technologies Inc
Publication of CN116700597A

Classifications

    • G06F3/061 Improving I/O performance
    • G06F21/52 Monitoring users, programs or devices to maintain the integrity of platforms during program execution, e.g. stack integrity; preventing unwanted data erasure; buffer overflow
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1048 Error detection or correction in individual solid state devices using arrangements adapted for a specific error detection or correction feature
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F12/0868 Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F12/1433 Protection against unauthorised use of memory, the protection being physical, for a module or a part of a module
    • G06F13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G06F21/64 Protecting data integrity, e.g. using checksums, certificates or signatures
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0611 Improving I/O performance in relation to response time
    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0653 Monitoring storage devices or systems
    • G06F3/0656 Data buffering arrangements
    • G06F3/0658 Controller construction arrangements
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F12/145 Protection against unauthorised use of memory, the protection being virtual, e.g. for virtual blocks or segments before a translation mechanism
    • G06F2212/1052 Security improvement
    • G06F2212/214 Solid state disk
    • G06F2212/311 Disk cache provided in the host system
    • G06F2212/312 Disk cache provided in the storage controller
    • G06F2212/466 Caching of metadata and control data in disk cache
    • G06F2212/507 Control mechanisms for virtual memory, cache or TLB using speculative control
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F2213/28 DMA

Abstract

Aspects of the present disclosure generally relate to data storage devices and related methods using secure host memory buffers and low latency operations. In one aspect, a controller is configured to obtain a command from a host device and to obtain entry data from a Host Memory Buffer (HMB) of the host device in response to the command. The HMB is used in place of DRAM in the controller, so the data storage device is DRAM-free. In one embodiment, the entry data includes a logical to physical (L2P) address. The controller is further configured to obtain read data from one or more memory devices using the entry data, perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices, and transmit validity result data to the host device.

Description

DRAM-free SSD with low latency secure HMB
Cross Reference to Related Applications
The present application is a continuation-in-part of co-pending U.S. patent application Ser. No. 17/183,140, filed 2/23/2021, which claims the benefit of U.S. provisional patent application Ser. No. 63/086,966, filed 2/10/2020. Each of the aforementioned related patent applications is incorporated herein by reference.
Background
Technical Field
Aspects of the present disclosure generally relate to data storage devices and related methods using secure Host Memory Buffers (HMBs) and low latency operations. In one aspect, a data storage device facilitates low latency using simultaneous memory sense operations and validity check operations.
Description of the Related Art
A Host Memory Buffer (HMB) of a host device is used in conjunction with a data storage device, such as a Solid State Drive (SSD). The HMB is a dedicated storage location within the host device. The host device provides the HMB for the data storage device to use as needed. The host device does not control how the data storage device operates on the HMB; the host device only controls access to the HMB. Thus, the host device may sometimes block access to the HMB. The HMB may be used as additional storage for the data storage device, in addition to any DRAM located within the controller.
The HMB may be subject to security attacks, such as network attacks, including replay attacks. Such attacks may expose the host device and affect performance. However, efforts to protect the HMB of the host device from such attacks may introduce delays, such as 4 μsec (microseconds) or more.
Accordingly, there is a need in the art for a data storage device that practically and simply protects HMB while facilitating latency reduction and performance enhancement.
Disclosure of Invention
Aspects of the present disclosure generally relate to data storage devices and related methods using secure host memory buffers and low latency operations. In one aspect, a controller of a data storage device coupled to one or more memory devices is configured to obtain a command from a host device and to obtain entry data from a Host Memory Buffer (HMB) of the host device in response to the command from the host device. The HMB is used in place of DRAM in the controller, so the data storage device is DRAM-free. In one embodiment, the entry data includes a logical to physical (L2P) address. The controller is further configured to obtain read data from the one or more memory devices using the entry data, perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices, and transmit validity result data to the host device.
In one embodiment, a data storage device includes one or more memory devices and a controller coupled to the one or more memory devices, wherein the controller is devoid of DRAM. The controller is configured to obtain a command from a host device, and the host device includes a Host Memory Buffer (HMB). The controller is configured to obtain entry data from the HMB in response to the command from the host device. The controller is configured to obtain read data from the one or more memory devices using the entry data. The controller is configured to perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices, and to transmit validity result data to the host device.
In one embodiment, a data storage device includes one or more memory devices and a controller coupled to the one or more memory devices, wherein the controller is devoid of DRAM. The controller is configured to obtain a command from the host device. The host device includes a Host Memory Buffer (HMB), and the HMB includes a Merkle tree having a plurality of hashes. The controller is configured to obtain entry data from the HMB in response to the command from the host device. The controller is configured to obtain read data from the one or more memory devices using the entry data. The controller is configured to perform a validity check on the entry data obtained from the HMB while the read data is obtained from the one or more memory devices. The validity check includes comparing a signature of a top-level hash of the plurality of hashes with a stored signature stored within the controller and determining whether the signature is the same as or different from the stored signature.
In one embodiment, a data storage device includes a memory arrangement and a controller, wherein the controller is devoid of DRAM, and wherein the controller is configured to: obtain a command from a host device, the host device including a Host Memory Buffer (HMB); obtain entry data from the HMB in response to the command from the host device; retrieve read data from one or more memory devices using the entry data; perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices; and transmit validity result data to the host device.
Drawings
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
Fig. 1 is a schematic diagram of a Merkle tree according to one implementation.
Fig. 2A and 2B are schematic diagrams of data systems according to various implementations.
FIG. 3 is a schematic diagram of the data system shown in FIG. 2A during an operational flow using the data system.
FIG. 4 is a schematic diagram of a method of operating a data system according to one implementation.
FIG. 5 is a schematic diagram of a method of operating a data system according to one implementation.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
Detailed Description
Hereinafter, reference is made to embodiments of the present disclosure. However, it should be understood that the present disclosure is not limited to the specifically described embodiments. Rather, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the present disclosure. Furthermore, although embodiments of the present disclosure may achieve advantages over other possible solutions and/or over the prior art, whether a particular advantage is achieved by a given embodiment is not a limitation of the present disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, references to "the present disclosure" should not be construed as a generalization of any inventive subject matter disclosed herein and should not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim.
Aspects of the present disclosure generally relate to data storage devices and related methods using secure host memory buffers and low latency operations. In one aspect, a controller of a data storage device coupled to one or more memory devices is configured to receive a command from a host device and to retrieve entry data from a Host Memory Buffer (HMB) of the host device in response to the command from the host device. The HMB is used in place of DRAM in the controller, so the data storage device is DRAM-free. In one embodiment, the entry data includes a logical to physical (L2P) address. The controller is further configured to obtain read data from the one or more memory devices using the entry data, perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices, and transmit validity result data to the host device.
Fig. 1 is a schematic diagram of a Merkle tree 100 according to one implementation. The Merkle tree 100 includes data, such as entry data corresponding to data stored in one or more memory devices. The data is stored in a plurality of data blocks 101-104. The Merkle tree 100 is part of a host device, such as part of the operating system of the host device. The Merkle tree 100 includes a first plurality of hashes 111-114 of a first hash layer 110 and a second plurality of hashes 121, 122 of a second hash layer 120. The first plurality of hashes 111-114 is created using the plurality of data blocks 101-104, and each hash of the first plurality of hashes 111-114 corresponds to a data block of the plurality of data blocks 101-104. The second plurality of hashes 121, 122 is created by combining the hashes of the first plurality of hashes 111-114. The Merkle tree 100 includes a top-level hash 131 of a top hash layer 130. The top hash layer 130 includes a signature that is created, in effect, from all of the hashes of the Merkle tree 100. The signature of the top-level hash 131 is created by combining the two hashes 121, 122 of the hash layer (e.g., the second hash layer 120) disposed immediately below the top hash layer 130. The top hash layer 130 includes a single hash (e.g., the top-level hash 131). Moving up the Merkle tree 100 from the plurality of data blocks 101-104 toward the top-level hash 131, the hashes of each hash layer 110, 120 are progressively combined until the signature of the single top-level hash 131 is created for the top hash layer 130.
The Merkle tree 100 is used to protect and authenticate (e.g., through the use of validity checks) a portion of the host device. Because of the progressive nature of the hash layers 110, 120, 130, if the data of one of the plurality of data blocks 101-104 is changed or corrupted, such as during a network attack, the signature of the top-level hash 131 changes or becomes corrupted as well. An altered or corrupted signature of the top-level hash 131 indicates that the data of one or more of the data blocks 101-104 has been altered or corrupted. When data is written and stored in the data blocks 101-104, the Merkle tree 100 and the signature of the top-level hash 131 are created. The signature of the top-level hash 131 is stored as a stored signature.
The present disclosure contemplates that FIG. 1 is exemplary and that the Merkle tree may include more data blocks than the data blocks 101-104 shown in FIG. 1, more hash layers than the hash layers 110, 120, 130 shown in FIG. 1, and more hashes than the hashes 111-114, 121, 122, 131 shown in FIG. 1.
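The following minimal C sketch illustrates the tree structure of FIG. 1 under stated assumptions: four fixed-size data blocks, two hash layers, and a single top-level hash whose value would be kept as the stored signature. FNV-1a is used purely as a stand-in for a cryptographic hash such as SHA-256, and the block sizes and payloads are invented for illustration; none of these choices are taken from the patent.

```c
/* Sketch of the Merkle tree of FIG. 1: data blocks 101-104, hash layers
 * 110 and 120, and top-level hash 131.  Placeholder hash only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 32u
#define NUM_BLOCKS 4u

/* FNV-1a: a simple non-cryptographic hash standing in for SHA-256. */
static uint64_t fnv1a(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint64_t h = 0xcbf29ce484222325ULL;
    while (len--) {
        h ^= *p++;
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Combine two child hashes into a parent hash, layer by layer. */
static uint64_t combine(uint64_t left, uint64_t right)
{
    uint64_t pair[2] = { left, right };
    return fnv1a(pair, sizeof pair);
}

int main(void)
{
    uint8_t blocks[NUM_BLOCKS][BLOCK_SIZE] = { {0} };
    memcpy(blocks[0], "L2P entries 0..N", 16);   /* example payloads */
    memcpy(blocks[1], "L2P entries N..M", 16);

    /* First hash layer: one hash per data block (hashes 111-114). */
    uint64_t layer1[NUM_BLOCKS];
    for (unsigned i = 0; i < NUM_BLOCKS; i++)
        layer1[i] = fnv1a(blocks[i], BLOCK_SIZE);

    /* Second hash layer (hashes 121, 122) and top-level hash (131). */
    uint64_t layer2[2] = { combine(layer1[0], layer1[1]),
                           combine(layer1[2], layer1[3]) };
    uint64_t top_hash = combine(layer2[0], layer2[1]);

    /* The controller would keep this value as the "stored signature".
     * Any change to a data block changes this value. */
    printf("stored signature (top-level hash): %016llx\n",
           (unsigned long long)top_hash);
    return 0;
}
```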
Fig. 2A is a schematic diagram of a data system 200 according to one implementation. The data system 200 includes a data storage device 201. In one embodiment, which may be combined with other embodiments, the data storage device 201 is a Solid State Drive (SSD). The present disclosure contemplates that aspects of data storage device 201 may be used in other data storage devices. The data storage device 201 includes a controller 210 coupled to one or more memory devices 220 (one memory device is shown). In one embodiment, which may be combined with other embodiments, one or more memory devices 220 are NAND devices. The present disclosure contemplates that aspects of data storage device 201 may be used in other memory devices.
The present disclosure contemplates that terms such as "coupled" may include, but are not limited to, an operable coupling, such as a wired or wireless coupling for communication purposes. The present disclosure contemplates that terms such as "coupled" may include, but are not limited to, direct coupling and/or indirect coupling.
The controller 210 is coupled to a Host Memory Buffer (HMB) 231 of a host device 230 of the data system 200. The HMB 231 stores the Merkle tree 100 shown in Fig. 1. Each node of the Merkle tree 100 (shown in Fig. 1) is stored in the HMB 231 except for the top-level hash 131 of the top hash layer 130. The top-level hash 131 and the associated stored signature, in contrast, are stored in the data storage device 201 and are not visible to the host device 230.
The host device 230 stores an internal database in the HMB 231, such as entry data that may include logical to physical (L2P) addresses. The HMB 231 is part of a host memory 242 and is therefore external memory from the perspective of the data storage device 201. Since the HMB 231 is part of external memory, the Merkle tree 100 is used to protect the HMB 231 of the host device 230, such as by using a validity check to determine whether the HMB 231 has been altered or corrupted by a network attack, such as a replay attack. When data is written and stored in the data blocks 101-104 using the host device 230, the Merkle tree 100 and the signature of the top-level hash 131 are created, and the signature of the top-level hash 131 is stored within the controller 210 as a stored signature. The present disclosure contemplates that aspects disclosed herein may be used in conjunction with other security operations, such as security algorithms other than the Merkle tree 100.
The controller 210 includes a host interface module 211, a control path 212 having one or more processors, a Direct Memory Access (DMA) 213, an Error Correction Code (ECC) 214, and a flash interface module 215. The DMA 213, the control path 212, and the ECC 214 are part of the Flash Translation Layer (FTL) of the controller 210. The control path 212 is configured to control the one or more memory devices 220 and to determine whether the HMB 231 is secure. The control path 212 is also configured to control other aspects of the data storage device 201. The host interface module 211 is configured to communicate with the HMB 231 of the host device 230 and with the control path 212. The host interface module 211 is configured to manage the HMB 231. The DMA 213 is configured to communicate with the host interface module 211 and the control path 212. The DMA 213 is configured to control the transfer of data from the HMB 231 to the controller 210 and from the controller 210 to the host memory 242. The ECC 214 is configured to communicate with the DMA 213 and is configured to encode and decode data to correct errors relative to the DMA 213. The flash interface module 215 is configured to communicate with the one or more memory devices 220. The controller 210 is configured to: in response to a command from the host device 230, retrieve data from the one or more memory devices 220 while determining whether the HMB 231 is secure, and, if the HMB 231 is secure, forward the data from the one or more memory devices 220 to the HMB 231. If the HMB 231 is not secure (such as having been changed or corrupted), the controller 210 cancels forwarding of the data to the HMB 231.
Flash interface module 215 is coupled to one or more memory devices 220. The control path 212 is coupled to one or more memory devices 220 through a flash interface module 215. Each of the control path 212, flash interface module 215, and ECC 214 are coupled to the DMA 213. The DMA 213 is coupled to the host interface module 211. One or more memory devices 220 are coupled to the DMA 213 through a flash interface module 215. The host interface module 211 is coupled to the HMB 231 of the host device 230. The control path 212 is coupled to the host interface module 211, and the control path 212 is coupled to the HMB 231 through the host interface module 211.
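As a hedged illustration of this arrangement, the structure below models the controller components and couplings described above; the type and field names are assumptions made for this sketch and do not reflect the patent's or any vendor's actual firmware interface.

```c
#include <stdint.h>

/* Illustrative model of controller 210 in FIG. 2A (DRAM-free variant). */
struct ssd_controller {
    void *host_interface_module;   /* HIM 211: talks to HMB 231 and host memory 242 */
    void *control_path;            /* 212: one or more processors; drives the NAND   */
    void *dma;                     /* 213: HMB->controller and controller->host moves */
    void *ecc;                     /* 214: encodes/decodes data passing through DMA   */
    void *flash_interface_module;  /* 215: interface to the memory devices 220        */
    uint64_t stored_signature;     /* top-level hash signature kept inside the HIM    */
    /* no DRAM member: in the DRAM-free variant the HMB takes DRAM's role */
};
```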
The one or more memory devices 220 may include a plurality of memory devices or memory units. The one or more memory devices 220 may be configured to store and/or retrieve data. For example, the storage units of one or more memory devices 220 may receive data and receive a message from the controller 210 indicating that the storage units store the data. Similarly, a storage unit may receive a message from controller 210 indicating that the storage unit retrieves data. In some examples, each of the memory cells may be referred to as a die. In some examples, one or more memory devices 220 may include multiple dies (i.e., multiple memory cells). In some examples, each storage unit may be configured to store a relatively large amount of data (e.g., 128MB, 256MB, 512MB, 1GB, 2GB, 4GB, 8GB, 16GB, 32GB, 64GB, 128GB, 256GB, 512GB, 1TB, etc.).
In some examples, each memory cell may include any type of nonvolatile memory device, such as a flash memory device, a Phase Change Memory (PCM) device, a resistive random access memory (ReRAM) device, a Magnetoresistive Random Access Memory (MRAM) device, a ferroelectric random access memory (F-RAM), a holographic memory device, and any other type of nonvolatile memory device.
The one or more memory devices 220 may include a plurality of flash memory devices or memory cells. NVM flash memory devices may include NAND or NOR based flash memory devices, and may store data based on charge contained in the floating gate of the transistor for each flash memory cell. In an NVM flash memory device, the flash memory device may be divided into a plurality of dies, wherein each die of the plurality of dies includes a plurality of physical or logical blocks, which may be further divided into a plurality of pages. Each of the plurality of blocks within a particular memory device may include a plurality of NVM cells. The rows of NVM cells can be electrically connected using word lines to define pages of the plurality of pages. The respective cells in each of the plurality of pages may be electrically connected to a respective bit line. Further, the NVM flash memory device may be a 2D or 3D device, and may be a Single Level Cell (SLC), a multi-level cell (MLC), a three-level cell (TLC), or a four-level cell (QLC). The controller 210 can write data to and read data from the NVM flash memory devices at the page level and erase data from the NVM flash memory devices at the block level.
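As an illustrative aside, the entry data discussed above carries L2P addresses that ultimately resolve to die, block, and page coordinates like those described in this paragraph. The sketch below shows one plausible layout for such a record; the field names and widths are assumptions for illustration, not the patent's.

```c
#include <stdint.h>

/* One hypothetical way an L2P "entry data" record held in the HMB could map
 * a host logical block address onto NAND coordinates. */
struct nand_phys_addr {
    uint16_t die;     /* which die of the multi-die memory device            */
    uint16_t block;   /* physical (erase) block within the die               */
    uint16_t page;    /* page within the block; reads and writes are per page */
    uint16_t offset;  /* sector within the page (assumed geometry)           */
};

struct l2p_entry {
    uint64_t lba;                 /* host logical block address            */
    struct nand_phys_addr paddr;  /* where that LBA currently lives in NAND */
};
```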
Fig. 2B is a schematic diagram of a data system 250 according to one implementation. The data system 250 is similar to the data system 200 shown in Fig. 2A, except that the data system 250 includes DRAM 252 in the device controller 210, while the data system 200 is devoid of DRAM. In a data system that includes the DRAM 252, such as the data system 250, data may be stored in the DRAM 252, in the HMB 231, or in both. When data is stored in the HMB 231, the Merkle tree 100 is used to protect and verify the data stored in the HMB 231.
Fig. 3 is a schematic diagram of the data system 200 shown in Fig. 2A during an operational flow using the data system 200. In the operational flow, the stored signature of the Merkle tree 100 is held in the host interface module 211. The controller 210 is configured to perform the operations described herein with respect to the operational flow shown in Fig. 3.
In the operational flow, the control path 212 retrieves a command from the host memory 242 of the host device 230. The command is a random read command from the host device 230. In response to the command from the host device 230, the controller 210 obtains entry data from the HMB 231 of the host device 230. In one embodiment, which may be combined with other embodiments, the entry data includes a logical to physical (L2P) address. Obtaining the entry data includes the control path 212 sending 301 a request to the host interface module 211 to obtain the entry data from the HMB 231, and the host interface module 211 sending 302 a read request to the HMB 231. In response to the read request from the host interface module 211, the HMB 231 sends 303 the entry data to the control path 212 through the host interface module 211. In response to the entry data, the control path 212 speculatively uses the entry data and obtains read data from the one or more memory devices 220 using the entry data. In one embodiment, which may be combined with other embodiments, the read data obtained from the one or more memory devices 220 corresponds to the entry data, such as an L2P address.
Retrieving the read data from the one or more memory devices 220 includes the control path 212 sending 304 a sense request to the one or more memory devices 220 through the flash interface module 215. While the read data is being retrieved from the one or more memory devices 220, the host interface module 211 performs a validity check on the entry data retrieved from the HMB 231. The entry data obtained from the HMB 231 includes a signature of a top-level hash of the plurality of hashes of the Merkle tree 100 of the HMB 231. The validity check includes comparing the signature of the top-level hash 131 of the plurality of hashes with a stored signature stored within the host interface module 211 of the controller 210. The validity check also includes determining whether the signature is the same as or different from the stored signature. The validity check includes obtaining data, such as a hash, from the HMB 231 and/or calculating a hash of the HMB 231. The validity check is performed by the host interface module 211. In one embodiment, which may be combined with other embodiments, the sense request is sent to the one or more memory devices 220 before the validity check determines whether the signature is the same as or different from the stored signature. The host interface module 211 sends 305 the result of the validity check to the control path 212. The control path 212 sends 307 the result received from the host interface module 211 to the DMA 213. The one or more memory devices 220 send 306 the read data to the DMA 213 through the flash interface module 215.
In response to the validity check and the result received by the DMA 213, the DMA 213 determines whether to send the read data to the host memory 242. If the result indicates that the signature analyzed in the validity check is the same as the stored signature, the DMA 213 sends 309 an instruction to the host interface module 211 to transfer 308 validity result data to the host memory 242 of the host device 230, and the validity result data transferred from the host interface module 211 to the host memory 242 includes the read data obtained from the one or more memory devices 220. If the result indicates that the signature analyzed in the validity check is different from the stored signature, the DMA 213 cancels the transmission of the read data to the host memory 242, and the DMA 213 sends 309 an instruction to the host interface module 211 to transfer 308 validity result data to the host memory 242 of the host device 230. In this case, the validity result data transferred from the host interface module 211 to the host memory 242 includes garbage data that is different from the read data. In one embodiment, which may be combined with other embodiments, the garbage data includes a random pattern of 0s and 1s that is different from the read data and from other data stored on the one or more memory devices 220.
The DMA 213 also sends an instruction to the host interface module 211 to issue 310 a completion message to the host memory 242 of the host device 230, and the host interface module 211 issues 310 the completion message to the host memory 242. If the DMA 213 determines that the signature is the same as the stored signature, the completion message includes a valid notification. The valid notification indicates to the host memory 242 of the host device 230 that the data stored in the HMB 231 is valid and has not been changed or corrupted (such as by a network attack). If the DMA 213 determines that the signature is different from the stored signature, the completion message includes an error notification. The error notification indicates to the host memory 242 of the host device 230 that the data stored in the HMB 231 is invalid and has been changed or corrupted (such as by a network attack).
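The self-contained simulation below sketches this read flow end to end under stated assumptions: the NAND sense is issued speculatively with the unverified entry data, a signature comparison stands in for the full validity check, and the DMA decision then forwards either the read data with a valid notification or garbage data with an error notification. All helper functions, sizes, and the tamper flag are invented for this sketch; the reference numerals in the comments point back to FIG. 3, and nothing here represents actual firmware.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define READ_SIZE 16u

static uint64_t stored_signature = 0x1234abcd5678ef00ULL;  /* kept in HIM 211 */

/* Steps 301-303: fetch entry data (L2P address + current top-hash signature). */
static void hmb_fetch_entry(uint64_t lba, uint64_t *phys, uint64_t *signature,
                            int hmb_tampered)
{
    *phys = lba * 7 + 3;                            /* dummy L2P translation  */
    *signature = hmb_tampered ? 0xdeadbeefULL       /* corrupted by an attack */
                              : stored_signature;   /* matches stored value   */
}

/* Steps 304/306: speculative sense of the NAND using the (unverified) entry. */
static void nand_sense(uint64_t phys, uint8_t out[READ_SIZE])
{
    for (unsigned i = 0; i < READ_SIZE; i++)
        out[i] = (uint8_t)(phys + i);               /* stand-in read data */
}

/* Steps 305-310: validity check result, DMA decision, completion message. */
static void complete_read(uint64_t lba, int hmb_tampered)
{
    uint64_t phys, signature;
    uint8_t data[READ_SIZE];

    hmb_fetch_entry(lba, &phys, &signature, hmb_tampered);
    nand_sense(phys, data);                         /* issued before the check */

    int valid = (signature == stored_signature);    /* the validity check      */
    if (!valid) {
        /* forward garbage instead of the read data (step 308, failure case) */
        for (unsigned i = 0; i < READ_SIZE; i++)
            data[i] = (uint8_t)rand();
    }
    printf("LBA %llu: first byte forwarded to host: 0x%02x, completion = %s\n",
           (unsigned long long)lba, (unsigned)data[0],
           valid ? "valid notification" : "error notification");
}

int main(void)
{
    complete_read(100, 0);   /* HMB intact: read data + valid notification  */
    complete_read(100, 1);   /* HMB tampered: garbage + error notification  */
    return 0;
}
```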
FIG. 4 is a schematic diagram of a method 400 of operating a data system according to one implementation. Method 400 includes the operations, aspects, features, components, and/or characteristics of the operational flow illustrated in fig. 3. The present disclosure contemplates an operational flow including the operations, aspects, features, components, and/or characteristics of method 400 shown in fig. 4.
Operation 401 of method 400 includes retrieving a command, such as a random read command, from a host device. Operation 403 comprises retrieving entry data from a Host Memory Buffer (HMB) of the host device. In one embodiment, which may be combined with other embodiments, the entry data includes a logical to physical (L2P) address. Operation 405 comprises performing a validity check on the entry data obtained from the HMB. Performing the validity check includes comparing a signature of a top-level hash of a plurality of hashes of a Merkle tree with a stored signature previously stored with respect to the Merkle tree. Performing the validity check also includes determining whether the signature is the same as or different from the stored signature.
While the validity check of operation 405 is being performed, read data is retrieved from one or more memory devices (such as one or more NAND devices) in operation 407. Operation 409 includes determining whether the validity check passes. If the signature of the top-level hash is the same as the stored signature, the validity check passes. If the signature of the top-level hash is different from the stored signature, the validity check fails. If the validity check passes, the HMB is secure and has not been altered or corrupted (such as by a network attack). If the validity check fails, the HMB is not secure and has been altered or corrupted (such as by a network attack).
If the validity check passes, the read data obtained from the one or more memory devices is transmitted to the host memory of the host device at operation 411, and a completion message with a valid notification is issued to the host memory of the host device at operation 413.
If the validity check fails, the transfer of the read data to the host memory of the host device is canceled at operation 415, and garbage data is transferred to the host memory of the host device instead at operation 415. If the validity check fails, operation 417 includes issuing a completion message with an error notification to the host memory of the host device.
FIG. 5 is a schematic diagram of a method 500 of operating a data system according to one implementation. Operation 502 of method 500 includes obtaining a command, such as a random read command, from a host device. Once the command is obtained from the host device, a determination is made at 504 as to whether the entry data is located in the HMB or in DRAM located in the data storage device controller. If the entry data is located in the HMB, operation 506 comprises obtaining the entry data from the HMB. In one embodiment, which may be combined with other embodiments, the entry data includes a logical to physical (L2P) address. Operation 508 comprises performing a validity check on the entry data obtained from the HMB. Performing the validity check includes comparing a signature of a top-level hash of a plurality of hashes of a Merkle tree with a stored signature previously stored with respect to the Merkle tree. Performing the validity check also includes determining whether the signature is the same as or different from the stored signature.
While the validity check of operation 508 is being performed, read data is retrieved from one or more memory devices (such as one or more NAND devices) at operation 510. Operation 512 comprises determining whether the validity check passes. If the signature of the top-level hash is the same as the stored signature, the validity check passes. If the signature of the top-level hash is different from the stored signature, the validity check fails. If the validity check passes, the HMB is secure and has not been altered or corrupted (such as by a network attack). If the validity check fails, the HMB is not secure and has been altered or corrupted (such as by a network attack).
If the validity check passes, the read data obtained from the one or more memory devices is transmitted to the host memory of the host device at operation 514 and a completion message with a valid notification is issued to the host memory of the host device at operation 516.
If the validity check fails, the transfer of the read data to the host memory of the host device is canceled at operation 518, and garbage data is transferred to the host memory of the host device instead at operation 518. If the validity check fails, operation 520 comprises issuing a completion message with an error notification to the host memory of the host device.
If it is determined at 504 that the entry data is stored in the DRAM, the entry data is retrieved from the DRAM at 522, the read data is retrieved and transmitted to the host device at 514, and a valid notification is issued to the host device at 516. Because the entry data is stored in the DRAM rather than in the HMB, a validity check for HMB storage is not required. However, it should be appreciated that the same validity check performed when the entry data is stored in the HMB may also be performed even though the entry data is stored in the DRAM. In such a scenario, rather than proceeding to 514 after 522, the method 500 proceeds to 508 and 510 after 522. The method of FIG. 5 represents an embodiment in which both the HMB and DRAM are present, while the method of FIG. 4 represents an embodiment in which the data storage device is devoid of DRAM.
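A short sketch of the branch at 504, under the assumption that a device may optionally run the HMB validity check even when the entry data came from DRAM; the function and parameter names are hypothetical and used only for illustration.

```c
/* Returns nonzero if an HMB validity check must run for this read. */
int needs_hmb_validity_check(int entry_in_dram, int always_verify_hmb)
{
    if (entry_in_dram && !always_verify_hmb)
        return 0;   /* 522 then 514: entry came from DRAM, skip the HMB check */
    return 1;       /* 506/508 (and 510): fetch the entry from the HMB and verify */
}
```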
Benefits of the present disclosure include practical and simple protection of the HMB of the host device while facilitating reduced latency and enhanced performance and operational efficiency. For example, the aspects described herein help to implement Merkle tree security on the HMB 231 in a practical and simple manner with reduced latency. Using the Merkle tree to verify the security of the HMB 231 may otherwise take 5 μsec (microseconds) or longer. Using the aspects described herein, however, verifying the security of the HMB 231 with the Merkle tree 100 occurs simultaneously with retrieving the read data from the one or more memory devices 220, which may take 50 μsec or more, reducing latency and operational delay by 4-5 μsec or more.
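As a rough, hedged illustration of this timing claim (only the 5 μsec check time and the 50 μsec sense time are taken from this paragraph; the arithmetic is an assumption about how they combine): a sequential design that verifies the HMB before sensing the NAND would need roughly 5 + 50 = 55 μsec per read, whereas performing the validity check concurrently with the roughly 50 μsec sense hides the check entirely, for a total of about max(5, 50) = 50 μsec, which corresponds to the 4-5 μsec savings cited above.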
It is contemplated that one or more aspects disclosed herein may be combined. Further, it is contemplated that one or more aspects disclosed herein may include some or all of the foregoing benefits. For example, the operations, aspects, components, features, and/or characteristics of the data system 200 shown in fig. 2A and the operational flow shown in fig. 3 may be combined with the method 400 shown in fig. 4.
In one embodiment, a data storage device includes one or more memory devices and a controller coupled to the one or more memory devices, wherein the controller is devoid of DRAM. The controller is configured to obtain a command from a host device, and the host device includes a Host Memory Buffer (HMB). The controller is configured to obtain entry data from the HMB in response to the command from the host device. The controller is configured to obtain read data from the one or more memory devices using the entry data. The controller is configured to perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices, and to transmit validity result data to the host device. The HMB includes a Merkle tree having a plurality of hashes. The validity check includes comparing a signature of a top-level hash of the plurality of hashes with a stored signature stored within the controller and determining whether the signature is the same as or different from the stored signature. If the signature is the same as the stored signature, the validity result data transmitted to the host device includes the read data. If the signature is different from the stored signature, the validity result data transmitted to the host device includes garbage data. The controller is further configured to issue a completion message to the host device. If the signature is the same as the stored signature, the completion message includes a valid notification. If the signature is different from the stored signature, the completion message includes an error notification. Obtaining the read data from the one or more memory devices includes sending a sense request to the one or more memory devices before determining whether the signature is the same as or different from the stored signature. In one example, the one or more memory devices are one or more NAND devices.
In one embodiment, a data storage device includes one or more memory devices and a controller coupled to the one or more memory devices, wherein the controller is devoid of DRAM. The controller is configured to obtain a command from the host device. The host device includes a Host Memory Buffer (HMB), and the HMB includes a Merkle tree having a plurality of hashes. The controller is configured to obtain entry data from the HMB in response to the command from the host device. The controller is configured to obtain read data from the one or more memory devices using the entry data. The controller is configured to perform a validity check on the entry data obtained from the HMB while the read data is obtained from the one or more memory devices. The validity check includes comparing a signature of a top-level hash of the plurality of hashes with a stored signature stored within the controller and determining whether the signature is the same as or different from the stored signature. The controller includes a control path including one or more processors, and the control path is configured to control the one or more memory devices. The controller includes a host interface module, and the host interface module is configured to communicate with the host device and the control path. The host interface module is configured to manage the HMB. The controller includes a Direct Memory Access (DMA), and the DMA is configured to communicate with the host interface module and the control path. The controller includes an Error Correction Code (ECC), and the ECC is configured to communicate with the DMA and to encode and decode data for error correction. The controller includes a flash interface module, and the flash interface module is configured to communicate with the one or more memory devices. Obtaining the read data from the one or more memory devices includes the control path sending a sense request to the one or more memory devices. The host interface module performs the validity check. The DMA is configured to send an instruction to the host interface module to transfer the validity result data to the host device. The DMA is further configured to send an instruction to the host interface module to issue a completion message to the host device.
In one embodiment, a data storage device includes a memory arrangement and a controller, wherein the controller is devoid of DRAM, and wherein the controller is configured to: obtain a command from a host device, the host device including a Host Memory Buffer (HMB); obtain entry data from the HMB in response to the command from the host device; retrieve read data from one or more memory devices using the entry data; perform a validity check on the entry data obtained from the HMB while obtaining the read data from the one or more memory devices; and transmit validity result data to the host device. Performing the validity check of the entry data includes comparing a signature of a top-level hash of a plurality of hashes stored in the Merkle tree of the HMB to the stored signature and determining whether the signature is the same as or different from the stored signature. The controller includes a host interface module coupled to a Direct Memory Access (DMA). The DMA is coupled to an error correction module and a Flash Interface Module (FIM). The validity check is performed simultaneously with the retrieval of the read data.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (20)

1. A data storage device, the data storage device comprising:
one or more memory devices; and
a controller coupled to the one or more memory devices, wherein the controller is devoid of DRAM, and wherein the controller is configured to:
obtain a command from a host device, wherein the host device includes a Host Memory Buffer (HMB);
obtain entry data from the HMB in response to the command from the host device;
retrieve read data from the one or more memory devices using the entry data;
perform a validity check on the entry data acquired from the HMB while the read data is acquired from the one or more memory devices; and
transmit validity result data to the host device.
2. The data storage device of claim 1, wherein the HMB comprises a Merkle tree having a plurality of hashes, and the validity check comprises:
comparing a signature of a top-level hash of the plurality of hashes with a stored signature stored within the controller; and
determining whether the signature is the same as or different from the stored signature.
3. The data storage device of claim 2, wherein the validity result data transmitted to the host device includes the read data if the signature is the same as the stored signature.
4. The data storage device of claim 2, wherein the validity result data transmitted to the host device comprises garbage data if the signature is different from the stored signature.
5. The data storage device of claim 2, wherein the controller is further configured to issue a completion message to the host device.
6. The data storage device of claim 5, wherein the completion message comprises a valid notification if the signature is the same as the stored signature.
7. The data storage device of claim 5, wherein the completion message comprises an error notification if the signature is different from the stored signature.
8. The data storage device of claim 2, wherein obtaining the read data from the one or more memory devices comprises sending a sense request to the one or more memory devices before determining whether the signature is the same as or different from the stored signature.
9. The data storage device of claim 1, wherein the one or more memory devices are one or more NAND devices.
10. A data storage device, the data storage device comprising:
one or more memory devices; and
a controller coupled to the one or more memory devices, wherein the controller is devoid of DRAM, and wherein the controller is configured to:
obtain a command from a host device, wherein the host device comprises a Host Memory Buffer (HMB), and the HMB comprises a Merkle tree having a plurality of hashes;
obtain entry data from the HMB in response to the command from the host device;
retrieve read data from the one or more memory devices using the entry data; and
perform a validity check on the entry data obtained from the HMB while the read data is retrieved from the one or more memory devices, the validity check comprising:
comparing a signature of a top-level hash of the plurality of hashes with a stored signature stored within the controller, and
determining whether the signature is the same as or different from the stored signature.
11. The data storage device of claim 10, wherein the controller comprises:
a control path comprising one or more processors, wherein the control path is configured to control the one or more memory devices;
a host interface module, wherein the host interface module is configured to communicate with the host device and the control path, and the host interface module is configured to manage the HMB;
a Direct Memory Access (DMA), wherein the DMA is configured to communicate with the host interface module and the control path;
an Error Correction Code (ECC), wherein the ECC is configured to communicate with the DMA and is configured to encode and decode data for error correction; and
a flash interface module, wherein the flash interface module is configured to communicate with the one or more memory devices.
12. The data storage device of claim 11, wherein retrieving the read data from the one or more memory devices comprises the control path sending a sense request to the one or more memory devices.
13. The data storage device of claim 11, wherein the host interface module performs the validity check.
14. The data storage device of claim 11, wherein the DMA is configured to send instructions to the host interface module to transfer validity result data to the host device.
15. The data storage device of claim 14, wherein the DMA is further configured to send an instruction to the host interface module to issue a completion message to the host device.
16. A data storage device, the data storage device comprising:
a memory device; and
a controller coupled to the memory device, wherein the controller is devoid of DRAM, and wherein the controller is configured to:
obtain a command from a host device, wherein the host device includes a Host Memory Buffer (HMB);
obtain entry data from the HMB in response to the command from the host device;
retrieve read data from one or more memory devices using the entry data;
perform a validity check on the entry data obtained from the HMB while the read data is retrieved from the one or more memory devices; and
transmit validity result data to the host device.
17. The data storage device of claim 16, wherein performing the validity check on the entry data comprises:
comparing a signature of a top-level hash of a plurality of hashes stored in a Merkle tree of the HMB with a stored signature; and
determining whether the signature is the same as or different from the stored signature.
18. The data storage device of claim 16, wherein the controller comprises a host interface module coupled to a Direct Memory Access (DMA).
19. The data storage device of claim 18, wherein the DMA is coupled to an error correction module and a Flash Interface Module (FIM).
20. The data storage device of claim 18, wherein the validity check is performed concurrently with retrieving the read data.
CN202210545036.2A 2022-02-25 2022-05-19 DRAM-free SSD with low latency secure HMB Pending CN116700597A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/652,652 US20220179593A1 (en) 2020-10-02 2022-02-25 DRAM-Less SSD With Secure HMB For Low Latency
US17/652,652 2022-02-25

Publications (1)

Publication Number Publication Date
CN116700597A true CN116700597A (en) 2023-09-05

Family

ID=87557206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210545036.2A Pending CN116700597A (en) 2022-02-25 2022-05-19 DRAM-free SSD with low latency secure HMB

Country Status (3)

Country Link
KR (1) KR20230127822A (en)
CN (1) CN116700597A (en)
DE (1) DE102022112533B4 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102589402B1 (en) 2018-10-04 2023-10-13 삼성전자주식회사 Storage device and method for operating storage device

Also Published As

Publication number Publication date
DE102022112533B4 (en) 2023-09-07
DE102022112533A1 (en) 2023-08-31
KR20230127822A (en) 2023-09-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination