US20210286554A1 - Combined QLC Programming Method - Google Patents

Combined QLC Programming Method

Info

Publication number
US20210286554A1
Authority
US
United States
Prior art keywords: memory, data, slc, foggy, mlc
Prior art date
Legal status
Granted
Application number
US16/818,571
Other versions
US11137944B1 (en)
Inventor
Sergey Anatolievich Gorobets
Alan D. Bennett
Ryan R. Jones
Current Assignee
Western Digital Technologies Inc
Original Assignee
Western Digital Technologies Inc
Priority date
Filing date
Publication date
Assigned to WESTERN DIGITAL TECHNOLOGIES, INC. reassignment WESTERN DIGITAL TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BENNETT, ALAN D., GOROBETS, SERGEY ANATOLIEVICH, JONES, RYAN R.
Priority to US16/818,571 (US11137944B1)
Application filed by Western Digital Technologies Inc filed Critical Western Digital Technologies Inc
Assigned to JPMORGAN CHASE BANK, N.A., AS AGENT reassignment JPMORGAN CHASE BANK, N.A., AS AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WESTERN DIGITAL TECHNOLOGIES, INC.
Priority to DE102020116189.1A (DE102020116189B3)
Priority to CN202010564338.5A (CN113393884A)
Priority to KR1020200074823A (KR102345454B1)
Publication of US20210286554A1
Publication of US11137944B1
Application granted
Assigned to WESTERN DIGITAL TECHNOLOGIES, INC. reassignment WESTERN DIGITAL TECHNOLOGIES, INC. RELEASE OF SECURITY INTEREST AT REEL 053482 FRAME 0453 Assignors: JPMORGAN CHASE BANK, N.A.
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. PATENT COLLATERAL AGREEMENT - DDTL LOAN AGREEMENT Assignors: WESTERN DIGITAL TECHNOLOGIES, INC.
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. PATENT COLLATERAL AGREEMENT - A&R LOAN AGREEMENT Assignors: WESTERN DIGITAL TECHNOLOGIES, INC.
Legal status: Active

Classifications

    • G11C 16/10 - Programming or data input circuits
    • G06F 3/0659 - Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G11C 11/5628 - Programming or writing circuits; Data input circuits (storage elements with more than two stable states, using charge storage in a floating gate)
    • G06F 12/0246 - Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 11/1068 - Error detection or correction by adding special bits or symbols to the coded information, in sector programmable memories, e.g. flash disk
    • G06F 3/0604 - Improving or facilitating administration, e.g. storage management
    • G06F 3/0679 - Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G11C 11/56 - Digital stores using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C 16/0483 - Erasable programmable read-only memories comprising cells having several storage transistors connected in series
    • G11C 16/08 - Address circuits; Decoders; Word-line control circuits
    • G11C 29/42 - Response verification devices using error correcting codes [ECC] or parity check
    • G06F 2212/7203 - Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
    • G06F 2212/7208 - Multiple device management, e.g. distributing data over multiple flash devices

Definitions

  • Embodiments of the present disclosure generally relate to improving foggy-fine writing to QLC.
  • Programming or writing data may require two writing phases: foggy and fine.
  • In foggy-fine programming, the bits to be written cannot simply be written once. Rather, the data needs to be first written by foggy programming, where voltage pulses are provided to push the current state to a more resolved, but not completely resolved, state.
  • Fine programming is performed at a point in time after foggy programming to write the data again in the completely resolved state.
  • In foggy-fine programming, there is a four page transfer for foggy programming and a four page transfer for fine programming, for a 128 KB transfer in total for a two-plane device.
  • the foggy state is unreadable, and the data needs to be protected in case of a possible power loss event (PLI).
  • foggy-fine programming occurs in a staggered word line sequence, which means that data in transit is five times or eight times the programmable unit of 128 KB.
  • To perform foggy-fine programming multiple megabytes may be programmed multiple times. To perform the multiple programming, a large amount of data needs to be set aside in order to perform repeat programming with the exact same data.
  • the present disclosure generally relates to improved foggy-fine programming.
  • the data to be written initially passes through an encoder before being written to SLC. While the data is being written to SLC, the data also passes through DRAM before going through the encoder to prepare for fine writing.
  • the data that is to be stored in SLC is in latches in the memory device and is then written to MLC as a foggy write. Thereafter, the data that has passed through the encoder is fine written to MLC.
  • the programming occurs in a staggered fashion where the ratio of SLC:foggy:fine writing is 4:1:1. To ensure sufficient XOR context management, programming across multiple dies, as well as across multiple super-devices, is staggered so that only four XOR parity contexts are necessary across 64 dies.
  • a data storage device comprises: one or more memory devices, the one or more memory devices including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory; foggy write the data to MLC memory, wherein the foggy writing the data to the MLC memory includes retrieving the data from latches in the one or more memory devices and writing the retrieved data to the MLC memory; and fine writing the data to the MLC memory.
  • a data storage device comprises: one or more memory devices, the one or more memory devices each including a plurality of dies with each die including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: stagger writing to the SLC memory, foggy writing to the MLC memory, and fine writing to the MLC memory, wherein a ratio of writing to the SLC memory to foggy writing to the MLC memory to fine writing to the MLC memory is 4:1:1.
  • a data storage device comprises: one or more memory devices, wherein each memory device has a plurality of dies, wherein the plurality of dies are arranged into four strings, wherein the one or more memory devices each include SLC memory and MLC memory; a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory of a first string on a first word line for a first set of dies; foggy write data to the MLC memory of the first string on the first word line for the first set of dies; write data to the SLC memory of a second string on the first word line for the first set of dies; foggy write data to the MLC memory of the second string on the first word line for the first set of dies; write data to the SLC memory of a third string on the first word line for the first set of dies; foggy write data to the MLC memory of the third string on the first word line for the first set of dies; write data to the SLC memory of the first string on the first word line for a second set of dies different from the first set of dies; foggy write data to the MLC memory of the first string on the first word line for the second set of dies; write data to the SLC memory of a fourth string on the first word line for the first set of dies; and foggy write data to the MLC memory of the fourth string on the first word line for the first set of dies.
  • FIG. 1 is a schematic illustration of a system for storing data according to one embodiment.
  • FIGS. 2A-2C are schematic illustrations of scheduling foggy-fine programming according to various embodiments.
  • FIG. 3 is a chart illustrating staggering foggy-fine programming.
  • FIGS. 4A-4C are collectively a schematic illustration showing a programming ratio of SLC:foggy:fine according to one embodiment.
  • FIGS. 5A-5C are collectively a schematic illustration showing foggy-fine programming for a single super-device.
  • FIGS. 6A-6C are collectively a schematic illustration showing foggy-fine programming for multiple super-devices.
  • the present disclosure generally relates to improved foggy-fine programming.
  • the data to be written initially passes through an encoder before being written to SLC. While the data is being written to SLC, the data also passes through DRAM before going through the encoder to prepare for fine writing.
  • the data that is to be stored in SLC is in latches in the memory device and is then written to MLC as a foggy write. Thereafter, the data that has passed through the encoder is fine written to MLC.
  • the programming occurs in a staggered fashion where the ratio of SLC:foggy:fine writing is 4:1:1. To ensure sufficient XOR context management, programming across multiple dies, as well as across multiple super-devices, is staggered so that only four XOR parity contexts are necessary across 64 dies.
  • FIG. 1 is a schematic illustration of a system 100 for storing data according to one embodiment.
  • the system 100 for storing data according to one embodiment includes a host device 102 and a data storage device 104 .
  • the host device 102 includes a dynamic random-access memory (DRAM) 112 .
  • the host device 102 may include a wide range of devices, such as computer servers, network attached storage (NAS) units, desktop computers, notebook (i.e., laptop) computers, tablet computers (i.e., “smart” pad), set-top boxes, telephone handsets (i.e., “smart” phones), televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, and automotive applications (i.e., mapping, autonomous driving).
  • host device 102 includes any device having a processing unit or any form of hardware capable of processing data, including a general purpose processing unit, dedicated hardware (such as an application specific integrated circuit (ASIC)), configurable hardware such as a field programmable gate array (FPGA), or any other form of processing unit configured by software instructions, microcode, or firmware.
  • the data storage device 104 communicates with the host device 102 through an interface 106 included in the data storage device 104 .
  • the data storage device 104 includes a controller 108 , a buffer 114 , and one or more memory devices 110 .
  • the data storage device 104 may be an internal storage drive, such as a notebook hard drive or a desktop hard drive.
  • Data storage device 104 may be a removable mass storage device, such as, but not limited to, a handheld, removable memory device, such as a memory card (e.g., a secure digital (SD) card, a micro secure digital (micro-SD) card, or a multimedia card (MMC)) or a universal serial bus (USB) device.
  • Data storage device 104 may take the form of an embedded mass storage device, such as an eSD/eMMC embedded flash drive, embedded in host device 102 .
  • Data storage device 104 may also be any other type of internal storage device, removable storage device, embedded storage device, external storage device, or network storage device.
  • Memory device 110 may be, but is not limited to, internal or external storage units.
  • the memory device 110 relies on a semiconductor memory chip, in which data can be stored as random-access memory (RAM), read-only memory (ROM), or other forms of RAM and ROM.
  • RAM is utilized for temporary storage of data whereas ROM is utilized for storing data permanently.
  • Data storage device 104 includes a controller 108 which manages operations of data storage device 104 , such as writes to or reads from memory device 110 .
  • the controller 108 executes computer-readable program code (e.g., software or firmware) executable instructions (herein referred to as “instructions”) for the transfer of data.
  • the instructions may be executed by various components of controller 108 such as a processor, logic gates, switches, application specific integrated circuits (ASICs), programmable logic controllers, embedded microcontrollers, and other components of controller 108 .
  • Data storage device 104 includes a buffer 114 which is a region of physical memory storage used to temporarily store data while the data is being moved from one place to another (i.e., from host device 102 to memory device 110 ).
  • Data may be transferred to or from the DRAM 112 of the host device 102 to the data storage device 104 .
  • One data transfer pathway may originate from the DRAM 112 of the host device 102 and communicate through the interface 106 of the data storage device 104 to the controller 108 .
  • the data will then pass through the buffer 114 of the data storage device 104 and be stored in the memory device 110 . If the data is written to a SLC memory, then the data is simply written. If, however, the data is written to a MLC, such as a QLC memory, then a foggy-fine writing process occurs. It is to be noted that writing and programming may be used interchangeably throughout the disclosure. In one embodiment, the data is first written to SLC memory and then moved to MLC memory.
  • all data is written to SLC cache first and then moved to QLC for sequential or non-repetitive writes.
  • the moving of the data to QLC is scheduled by the data storage device 104 so as to create free space in SLC for the following writes from the host device 102 .
  • repetitive write comprises the host rewriting recently written LBAs where recently means the data is still in SLC cache.
  • FIGS. 2A-2C are schematic illustrations of scheduling foggy-fine programming according to various embodiments.
  • the Front End (FE) module 202 comprises an XOR engine 204 and a static random-access memory (SRAM) 206 .
  • Host data may be initially delivered to the FE module 202 .
  • the data passes through the XOR engine 204 and is written to the SRAM 206 .
  • the XOR engine 204 generates XOR parity information prior to writing to SRAM 206 .
  • Exclusive OR (XOR) parity information is used to improve reliability of storage device for storing data, such as enabling data recovery of failed writes or failed reads of data to and from NVM or enabling data recovery in case of power loss.
  • the storage device may be the data storage device 104 of FIG. 1 .
  • the reliability may be provided by using XOR parity information generated or computed based on data stored to storage device.
  • the XOR engine 204 may generate a parity stream to be written to SRAM 206 .
  • SRAM 206 may contain a plurality of dies in which data may be written to.
  • the Second Flash Manager (FM2) module 210 comprises an encoder 212 , a SRAM 216 , and a decoder 214 .
  • the decoder 214 may comprise a low gear (LG) decoder and a high gear (HG) decoder.
  • the LG decoder can implement low power bit flipping algorithms, such as a low density parity check (LDPC) algorithm.
  • the LG decoder may be operable to decode data and correct bit flips where such data has a low bit error rate (BER).
  • the HG decoder can implement full power decoding and error correction algorithms, which may be initiated upon a failure of the LG decoder to decode and correct bit flips in data.
  • the HG decoder can be operable to correct bit flips where such data has a high BER.
  • FM2 may be replaced with a combined FE-FM monochip.
  • the encoder 212 and decoder 214 can include processing circuitry or a processor (with a computer-readable medium that stores computer-readable program code (e.g., firmware) executable by the processor), logic circuitry, an application specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, a combination thereof, or the like, for example.
  • the encoder 212 and the decoder 214 are separate from the storage controller, and in other examples, the encoder 212 and the decoder 214 are embedded in or part of the storage controller.
  • the LG decoder is a hardened circuit, such as logic circuitry, an ASIC, or the like.
  • the HG decoder can be a soft decoder (e.g., implemented by a processor). Data may be written to SRAM 216 after being decoded at the decoder 214 . The data at SRAM 216 may be further delivered to the encoder 212 , as discussed below.
  • the memory device may be a NAND memory device.
  • the memory device 220 may comprise a SLC 222 and a MLC 224 . It is to be understood that the embodiments discussed herein are applicable to any multilevel cell such as MLC, TLC or QLC. MLC is simply exemplified. SLC 222 , MLC, TLC, QLC, and PLC are named according to the number of bits that a memory cell may accept. For example, SLC may accept one bit per memory cell and QLC may accept four bits per memory cell. Each bit is registered on the storage device as a 1 or a 0. Additionally, while SLC memory is exemplified as a memory device, it is also contemplated that the SLC memory may be replaced with a 2-bit cell or MLC memory device.
  • FIG. 2A is a schematic illustration of a foggy-fine writing process, according to one embodiment.
  • Host data is fed to the FE module 202 .
  • the host data is sent through the XOR engine 204 , and XOR parity information is generated.
  • the data is then written to the SRAM 206 at the FE module 202 .
  • at the FM2 module 210 , data is delivered to the encoder 212 from the SRAM 206 along stream 1 .
  • the data is then written to the SLC 222 of the memory device 220 along stream 2 .
  • the data is read from SLC 222 and then decoded at the decoder 214 of the FM2 module 210 along stream 3 .
  • the decoded data is then written to the SRAM 216 of the FM2 module 210 in stream 4 .
  • the data is then sent through the encoder 212 along stream 5 for encoding.
  • the foggy write occurs after the data is encoded at the encoder 212 of the FM2 module 210 from the SRAM 216 of the FM2 module 210 , along stream 6 .
  • the foggy write is the initial write from encoder 212 of the FM2 module 210 to the MLC 224 of the memory device 220 .
  • To proceed with the fine write, data is then read from SLC 222 and delivered to the decoder 214 along stream 7 .
  • the data is then written in SRAM along stream 8 and then delivered to the encoder 212 along stream 9 for encoding.
  • the now encoded data is then fine written to MLC 224 along stream 10 .
  • the SLC and MLC programming may be de-coupled.
  • the foggy-fine writing process may incorporate multi-stream with direct write hot/cold sorting support.
  • the bus traffic may be higher.
  • FIG. 2B is a schematic illustration of a foggy-fine writing process, according to another embodiment.
  • Host data is delivered to the FE module 202 .
  • the host data passes through the XOR engine 204 and XOR parity information is generated.
  • the data is then written to the SRAM 206 at the FE module 202 .
  • the data is then transferred to the encoder 212 along stream 1 .
  • the data is written to the SLC 222 along stream 2 . Simultaneous with transferring the data to the encoder 212 along stream 1 , the data is transferred to the DRAM 230 along stream 3 .
  • the foggy-fine writing process involves first sending the data that was written to the DRAM 230 to the encoder 212 along stream 4 for encoding. The encoded data is then foggy written to MLC along stream 5 . Thereafter, the data is again sent from DRAM 230 to the encoder 212 along stream 6 for encoding. Following encoding, the data is then fine written along stream 7 to MLC 224 .
  • the foggy write step transfers the data from the DRAM 230 to the encoder 212 and writes the data to the MLC 224 .
  • the fine write step occurs after the foggy write step.
  • the fine write step transfers the data from the DRAM 230 to the encoder 212 and writes the data to the MLC 224 .
  • the SLC and MLC programs may occur in a sequential write process due to buffer limitations.
  • FIG. 2C is a schematic illustration of a foggy-fine writing process, according to another embodiment.
  • Host data is delivered to the FE module 202 .
  • the host data passes through the XOR engine 204 and XOR parity information is generated.
  • the data is then written to the SRAM 206 at the FE module 202 .
  • From the SRAM 206 at the FE module the data is then transferred to the encoder 212 along stream 1 and then written to the SLC 222 along stream 2 .
  • the foggy write step writes the data from SLC 222 to MLC 224 . More specifically, the data is read from SLC 222 and then foggy written to MLC 224 along stream 3 .
  • the fine write step involves sending the data from DRAM 230 to the encoder 212 along stream 5 , along with XOR data, and then fine writing the encoded data to MLC 224 along stream 6 .
  • the data path uses NAND data latches to stage data between the SLC and foggy programs, so that a single 4-page data transfer may be used for the SLC and foggy MLC programs.
  • the fine write may also occur from SLC 222 to MLC 224 after the foggy write in the case that the original fine write becomes corrupted.
  • the SLC and MLC programing may occur in a sequential write process due to buffer limitations.
  • FIG. 3 is a chart illustrating staggering foggy-fine programming. It is to be understood that the disclosure is not limited to the staggered foggy-fine programming exemplified in FIG. 3 , but rather, other sequences are contemplated as well. More specifically, to perform foggy-fine programming, foggy programming along a word line for a particular string cannot occur back-to-back. As shown in FIG. 3 , to properly foggy-fine write to word line 0 at string 0 , several additional writes need to occur between the foggy write to word line 0 , string 0 and the fine write to word line 0 , string 0 . The foggy-fine write process proceeds as follows.
  • data is foggy written to word line 0 , string 0 .
  • data is foggy written to word line 0 , string 1 .
  • data is foggy written to word line 0 , string 2 .
  • data is foggy written to word line 0 , string 3 .
  • data is foggy written to word line 1 , string 0 .
  • data can be fine written to word line 0 , string 0 .
  • the arrows in FIG. 3 illustrate the path of writing in the foggy-fine writing process. Basically, to properly foggy-fine write data, data is initially foggy written to the specific data location. Then, three additional foggy data writes occur to the same word line, but at different strings.
  • a fourth foggy write occurs in an adjacent word line along the same string of the specific data location. Only after the fourth foggy write to the adjacent word line and same string may the fine writing to the original word line and original string (i.e., the original data location) be performed. In total, four additional foggy writes occur prior to the fine writing.
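The staggering rule described above can be expressed as a small generator. The sketch below is illustrative Python that reproduces only the ordering exemplified in FIG. 3 ; the disclosure notes that other sequences are contemplated as well.

```python
def staggered_foggy_fine_order(num_word_lines, num_strings=4):
    """Yield ('foggy' | 'fine', word_line, string) operations such that the fine
    program of (wl, s) is issued only after the foggy program of (wl + 1, s),
    i.e. after four additional foggy programs. A sketch of the FIG. 3 ordering,
    not the only sequence the disclosure contemplates."""
    for wl in range(num_word_lines):
        for s in range(num_strings):
            yield ("foggy", wl, s)
            if wl > 0:
                # Since ('foggy', wl - 1, s), foggy programs have been issued on
                # the remaining strings of word line wl - 1 and on strings 0..s
                # of word line wl (four in total), so (wl - 1, s) may now be
                # fine programmed.
                yield ("fine", wl - 1, s)
    # Note: the final word line is left in the foggy state in this sketch.
```

For example, list(staggered_foggy_fine_order(2)) begins with the foggy programs of word line 0, strings 0-3, then the foggy program of word line 1, string 0, and only then the fine program of word line 0, string 0, matching the sequence listed above.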
  • FIGS. 4A-4C are collectively a schematic illustration showing a programming ratio of SLC:foggy:fine according to one embodiment.
  • SLC programming is significantly faster than foggy and fine MLC (or TLC or QLC) programs (i.e., about a 10× difference).
  • MLC or TLC or QLC
  • the overall transfers may be more consistent with little to no performance loss and without throttling the data transfer (e.g., slowing down the host transfers to artificially make the drive have a more consistent performance).
  • the ratio of (x)SLC:(y)foggy:(z)fine programs should generally be as close as possible to 4·tPROG_SLC : tPROG_Foggy : tPROG_Fine, where the value 4 reflects the 1:4 density difference between SLC and QLC.
  • This principle may be applied to a different mix of NAND modes, such as MLC and TLC.
  • FIGS. 4A-4C show the staggering principle in a drive with three active super-devices comprising 96 dies each.
  • data may be written to the various word lines, strings, and foggy/fine programming that may resemble the process outlined in FIG. 3 .
  • SLC writes are denoted by the small vertical bars adjacent the foggy writes.
  • 2 MB of SLC data should be written every 1.5 ms.
  • FIGS. 5A-5C are collectively a schematic illustration showing foggy-fine programming for a single super-device.
  • 128 KB of XOR context may be generated for each foggy program, and written to DRAM.
  • FIG. 5 demonstrates the principle of sharing a limited number of parity contexts to achieve the staggered scheduling of different program types in a super-device.
  • the limited number of parity contexts refers to the value of 4, which is the 1:4 density difference between SLC and QLC.
  • Each parity group refers to a super-WL, which is a unit of all data in the same WL-string in all devices.
  • FIG. 5 may refer to the method in FIG. 3 to demonstrate the method for SLC/foggy-fine programming to a super-device.
  • the XOR data is generated by the XOR engine, and the XOR engine is then available for generating XOR parity data for the next completed word line. Similar to FIG. 4 , the staggering of the SLC-foggy-fine writing, and the proportions of 4:1:1 for writing, results in the four XOR generators being sufficient to provide the needed parity data by staggering the availability of the XOR generators.
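A toy model of this parity-context sharing is sketched below: four contexts, matching the 1:4 SLC-to-QLC density difference, are claimed when foggy programming of a super-WL begins and released once the matching fine program completes. The class and method names are assumptions for illustration, not the disclosure's firmware interfaces.

```python
class ParityContextPool:
    """Illustrative pool of four shared XOR parity contexts (not firmware code)."""

    def __init__(self, num_contexts=4, page_size=4096):
        self.page_size = page_size
        self.free = list(range(num_contexts))
        self.active = {}  # super word line -> (context id, running parity)

    def open(self, super_wl):
        # Claimed when foggy programming of a super-WL begins; the 4:1:1
        # staggering keeps at most four super-WLs open at any time.
        if not self.free:
            raise RuntimeError("all parity contexts busy; schedule is not staggered")
        ctx = self.free.pop()
        self.active[super_wl] = (ctx, bytearray(self.page_size))
        return ctx

    def accumulate(self, super_wl, page):
        # XOR each programmed page into the running parity for this super-WL.
        _, acc = self.active[super_wl]
        for i, b in enumerate(page):
            acc[i] ^= b

    def close(self, super_wl):
        # Released after the matching fine program completes; the accumulated
        # parity is returned so it can be written out (e.g., to DRAM).
        ctx, acc = self.active.pop(super_wl)
        self.free.append(ctx)
        return bytes(acc)
```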
  • FIGS. 6A-6C are collectively a schematic illustration showing foggy-fine programming for multiple super-devices.
  • the XOR context generated from each foggy program is unique to its own super device (i.e., there is no XOR relationship between super devices).
  • a minimum time gap may be required before starting a SLC/foggy write sequence on String 0 (i.e., the initial write). The time gap may cause the host-write performance to be more consistent. However, the minimum time gap may depend on the drive capacity.
  • SLC programs on the dies should not overlap within a super-device. To ensure smooth 2 GB/s write performance, 2 MB of SLC data should be written every 1.5 ms. An overlap of SLC programs may cause a performance bottleneck. Priority for data write is given to a WL that has already been utilized so that SLC writes may not overlap.
  • Fine data transfer occurs between about 400 μs and about 500 μs (with a 400 MT/s TM bus). Furthermore, there may be one fine data transfer for each 2 MB of SLC transfer/programming.
  • the QLC fine data writes may not overlap.
  • the string number may be used. In general, the lower string number will be transferred prior to the higher string number.
  • a SLC write may occur in order to maintain a more consistent host performance.
  • the 4 page transfer limitation refers to the 4 XOR contexts that a single super-device may require.
  • the next set of SLC/foggy dies to program are determined by round-robin.
  • the SLC write, foggy write, and fine write outlined in the embodiment may significantly reduce data transfers over the NAND-bus and DRAM-bus. The reduced data transfers may improve host write performance and may reduce device power consumption.
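The round-robin selection mentioned above can be sketched as follows; the grouping of 64 dies into four sets of 16 is a hypothetical example for illustration, not a grouping stated in the disclosure.

```python
from itertools import cycle

def die_set_scheduler(die_sets):
    """Round-robin over candidate die sets for the next SLC/foggy write sequence."""
    for die_set in cycle(die_sets):
        yield die_set

# Hypothetical example: 64 dies split into four sets of 16.
die_sets = [tuple(range(start, start + 16)) for start in (0, 16, 32, 48)]
picker = die_set_scheduler(die_sets)
first_set = next(picker)   # dies 0-15
second_set = next(picker)  # dies 16-31
```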
  • a data storage device comprises: one or more memory devices, the one or more memory devices including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory; foggy write the data to MLC memory, wherein the foggy writing the data to the MLC memory includes retrieving the data from latches in the one or more memory devices and writing the retrieved data to the MLC memory; and fine writing the data to the MLC memory.
  • the data that is finely written to the MLC memory does not pass through the SLC memory.
  • the data that is finely written to the MLC memory passes through DRAM and an encoder before being finely written in the MLC memory.
  • the data written to the SLC memory passes through an encoder.
  • the data that is written to the SLC memory does not pass through the DRAM that the data finely written to the MLC memory passes through.
  • the data that is foggy written to the MLC memory does not pass through the DRAM that the data finely written to the MLC memory passes through.
  • a single four page transfer is used for both the writing the data to the SLC memory and foggy writing the data to the MLC memory.
  • a data storage device comprises: one or more memory devices, the one or more memory devices each including a plurality of dies with each die including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: stagger writing to the SLC memory, foggy writing to the MLC memory, and fine writing to the MLC memory, wherein a ratio of writing to the SLC memory to foggy writing to the MLC memory to fine writing to the MLC memory is 4:1:1.
  • Writing to the SLC memory occurs on only one word line at a time for a given memory device of the one or more memory devices.
  • Foggy writing to the MLC memory occurs for multiple word lines simultaneously. Simultaneous foggy writing occurs for different dies.
  • the simultaneous foggy writing occurs for the same string.
  • the MLC memory is QLC memory.
  • the controller is configured to write data to the SLC memory of at least a first memory device of the one or more memory devices simultaneous with foggy writing data to the MLC memory of a second memory device of the one or more memory devices and simultaneous with fine writing data to the MLC memory of at least a third memory device of the one or more memory devices.
  • a data storage device comprises: one or more memory devices, wherein each memory device has a plurality of dies, wherein the plurality of dies are arranged into four strings, wherein the one or more memory devices each include SLC memory and MLC memory; a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory of a first string on a first word line for a first set of dies; foggy write data to the MLC memory of the first string on the first word line for the first set of dies; write data to the SLC memory of a second string on the first word line for the first set of dies; foggy write data to the MLC memory of the second string on the first word line for the first set of dies; write data to the SLC memory of a third string on the first word line for the first set of dies; foggy write data to the MLC memory of the third string on the first word line for the first set of dies; write data to the SLC memory of the first string on the first word line for a second set of dies different from the first set of dies; foggy write data to the MLC memory of the first string on the first word line for the second set of dies; write data to the SLC memory of a fourth string on the first word line for the first set of dies; and foggy write data to the MLC memory of the fourth string on the first word line for the first set of dies.
  • the writing data to the SLC memory of the first string on the first word line for the second set of dies occurs simultaneous with the foggy writing data to the MLC memory of the third string on the first word line for the first set of dies.
  • the controller is further configured to: write data to the SLC memory of the second string on the first word line for the second set of dies; foggy write data to the MLC memory of the second string on the first word line for the second set of dies; and fine write data to the MLC memory of the first string of a different word line from the first word line and the second word line for the first set of dies.
  • the one or more memory devices each have four XOR parity contexts.
  • the one or more memory devices comprises two memory devices and wherein writing to SLC memory is staggered across the two memory devices. There is a time gap between starting an SLC writing on a string in a first memory device of the two memory devices and a string in a second memory device of the two memory devices.

Abstract

The present disclosure generally relates to improved foggy-fine programming. The data to be written initially passes through an encoder before being written to SLC. While the data is being written to SLC, the data also passes through DRAM before going through the encoder to prepare for fine writing. The data that is to be stored in SLC is in latches in the memory device and is then written to MLC as a foggy write. Thereafter, the data that has passed through the encoder is fine written to MLC. The programming occurs in a staggered fashion where the ratio of SLC:foggy:fine writing is 4:1:1. To ensure sufficient XOR context management, programming across multiple dies, as well as across multiple super-devices, is staggered so that only four XOR parity contexts are necessary across 64 dies.

Description

    BACKGROUND OF THE DISCLOSURE
    Field of the Disclosure
  • Embodiments of the present disclosure generally relate to improving foggy-fine writing to QLC.
  • Description of the Related Art
  • Programming or writing data may require two writing phases: foggy and fine. In foggy-fine programming, the bits to be written cannot simply be written once. Rather, the data needs to be first written by foggy programming where voltage pulses are provided to push the current state to a more resolved, but not completely resolved, state. Fine programming is performed at a point in time after foggy programming to write the data again in the completely resolved state.
  • In foggy-fine programming, there is a four page transfer for foggy programming and a four page transfer for fine programming for a 128 KB transfer in total for a two-plane device. The foggy state is unreadable, and the data needs to be protected in case of a possible power loss event (PLI). Additionally, foggy-fine programming occurs in a staggered word line sequence, which means that data in transit is five times or eight times the programmable unit of 128 KB. To perform foggy-fine programming, multiple megabytes may be programmed multiple times. To perform the multiple programming, a large amount of data needs to be set aside in order to perform repeat programming with the exact same data.
  • Therefore, there is a need in the art for improved foggy-fine programming.
  • SUMMARY OF THE DISCLOSURE
  • The present disclosure generally relates to improved foggy-fine programming. The data to be written initially passes through an encoder before being written to SLC. While the data is being written to SLC, the data also passes through DRAM before going through the encoder to prepare for fine writing. The data that is to be stored in SLC is in latches in the memory device and is then written to MLC as a foggy write. Thereafter, the data that has passed through the encoder is fine written to MLC. The programming occurs in a staggered fashion where the ratio of SLC:foggy:fine writing is 4:1:1. To ensure sufficient XOR context management, programming across multiple dies, as well as across multiple super-devices, is staggered so that only four XOR parity contexts are necessary across 64 dies.
  • In one embodiment, a data storage device comprises: one or more memory devices, the one or more memory devices including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory; foggy write the data to MLC memory, wherein the foggy writing the data to the MLC memory includes retrieving the data from latches in the one or more memory devices and writing the retrieved data to the MLC memory; and fine writing the data to the MLC memory.
  • In another embodiment, a data storage device comprises: one or more memory devices, the one or more memory devices each including a plurality of dies with each die including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: stagger writing to the SLC memory, foggy writing to the MLC memory, and fine writing to the MLC memory, wherein a ratio of writing to the SLC memory to foggy writing to the MLC memory to fine writing to the MLC memory is 4:1:1.
  • In another embodiment, a data storage device comprises: one or more memory devices, wherein each memory device has a plurality of dies, wherein the plurality of dies are arranged into four strings, wherein the one or more memory devices each include SLC memory and MLC memory; a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory of a first string on a first word line for a first set of dies; foggy write data to the MLC memory of the first string on the first word line for the first set of dies; write data to the SLC memory of a second string on the first word line for the first set of dies; foggy write data to the MLC memory of the second string on the first word line for the first set of dies; write data to the SLC memory of a third string on the first word line for the first set of dies; foggy write data to the MLC memory of the third string on the first word line for the first set of dies; write data to the SLC memory of the first string on the first word line for a second set of dies different from the first set of dies; foggy write data to the MLC memory of the first string on the first word line for the second set of dies; write data to the SLC memory of a fourth string on the first word line for the first set of dies; and foggy write data to the MLC memory of the fourth string on the first word line for the first set of dies. It is to be understood that the writing may occur in a different order than discussed above. Specifically, it is to be understood that the writing order dictates how many word lines will be in the foggy state prior to being written in the fine state.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
  • FIG. 1 is a schematic illustration of a system for storing data according to one embodiment.
  • FIGS. 2A-2C are schematic illustrations of scheduling foggy-fine programming according to various embodiments.
  • FIG. 3 is a chart illustrating staggering foggy-fine programming.
  • FIGS. 4A-4C are collectively a schematic illustration showing a programming ratio of SLC:foggy:fine according to one embodiment.
  • FIGS. 5A-5C are collectively a schematic illustration showing foggy-fine programming for a single super-device.
  • FIGS. 6A-6C are collectively a schematic illustration showing foggy-fine programming for multiple super-devices.
  • To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
  • DETAILED DESCRIPTION
  • In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
  • The present disclosure generally relates to improved foggy-fine programming. The data to be written initially passes through an encoder before being written to SLC. While the data is being written to SLC, the data also passes through DRAM before going through the encoder to prepare for fine writing. The data that is to be stored in SLC is in latches in the memory device and is then written to MLC as a foggy write. Thereafter, the data that has passed through the encoder is fine written to MLC. The programming occurs in a staggered fashion where the ratio of SLC:foggy:fine writing is 4:1:1. To ensure sufficient XOR context management, programming across multiple dies, as well as across multiple super-devices, is staggered so that only four XOR parity contexts are necessary across 64 dies.
  • FIG. 1 is a schematic illustration of a system 100 for storing data according to one embodiment. The system 100 for storing data according to one embodiment includes a host device 102 and a data storage device 104. The host device 102 includes a dynamic random-access memory (DRAM) 112. The host device 102 may include a wide range of devices, such as computer servers, network attached storage (NAS) units, desktop computers, notebook (i.e., laptop) computers, tablet computers (i.e., “smart” pad), set-top boxes, telephone handsets (i.e., “smart” phones), televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, and automotive applications (i.e., mapping, autonomous driving). In certain embodiments, host device 102 includes any device having a processing unit or any form of hardware capable of processing data, including a general purpose processing unit, dedicated hardware (such as an application specific integrated circuit (ASIC)), configurable hardware such as a field programmable gate array (FPGA), or any other form of processing unit configured by software instructions, microcode, or firmware.
  • The data storage device 104 communicates with the host device 102 through an interface 106 included in the data storage device 104. The data storage device 104 includes a controller 108, a buffer 114, and one or more memory devices 110. The data storage device 104 may be an internal storage drive, such as a notebook hard drive or a desktop hard drive. Data storage device 104 may be a removable mass storage device, such as, but not limited to, a handheld, removable memory device, such as a memory card (e.g., a secure digital (SD) card, a micro secure digital (micro-SD) card, or a multimedia card (MMC)) or a universal serial bus (USB) device. Data storage device 104 may take the form of an embedded mass storage device, such as an eSD/eMMC embedded flash drive, embedded in host device 102. Data storage device 104 may also be any other type of internal storage device, removable storage device, embedded storage device, external storage device, or network storage device.
  • Memory device 110 may be, but is not limited to, internal or external storage units. The memory device 110 relies on a semiconductor memory chip, in which data can be stored as random-access memory (RAM), read-only memory (ROM), or other forms of RAM and ROM. RAM is utilized for temporary storage of data whereas ROM is utilized for storing data permanently.
  • Data storage device 104 includes a controller 108 which manages operations of data storage device 104, such as writes to or reads from memory device 110. The controller 108 executes computer-readable program code (e.g., software or firmware) executable instructions (herein referred to as “instructions”) for the transfer of data. The instructions may be executed by various components of controller 108 such as a processor, logic gates, switches, application specific integrated circuits (ASICs), programmable logic controllers, embedded microcontrollers, and other components of controller 108.
  • Data storage device 104 includes a buffer 114 which is a region of physical memory storage used to temporarily store data while the data is being moved from one place to another (i.e., from host device 102 to memory device 110).
  • Data may be transferred to or from the DRAM 112 of the host device 102 to the data storage device 104. One data transfer pathway may originate from the DRAM 112 of the host device 102 and communicate through the interface 106 of the data storage device 104 to the controller 108. The data will then pass through the buffer 114 of the data storage device 104 and be stored in the memory device 110. If the data is written to a SLC memory, then the data is simply written. If, however, the data is written to a MLC, such as a QLC memory, then a foggy-fine writing process occurs. It is to be noted that writing and programming may be used interchangeably throughout the disclosure. In one embodiment, the data is first written to SLC memory and then moved to MLC memory. In another embodiment, all data is written to SLC cache first and then moved to QLC for sequential or non-repetitive writes. In such a scenario, the moving of the data to QLC is scheduled by the data storage device 104 so as to create free space in SLC for the following writes from the host device 102. In another embodiment, a repetitive write comprises the host rewriting recently written LBAs, where recently means the data is still in SLC cache. In such a scenario, three possibilities exist: move all the data, including the old obsolete LBAs, to QLC and create obsolete “holes”; move valid data only, skipping obsolete data in SLC; and, if the amount of obsolete data is high, compact the SLC cache (i.e., do a garbage collection) without moving any data to QLC.
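The three options listed above for folding SLC cache to QLC under repetitive writes can be summarized in a short policy sketch. The Python below is illustrative only: the SlcBlock model, the helper names, and the 0.5 threshold are assumptions, not details from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SlcBlock:
    """Toy model of an SLC cache block (illustrative only)."""
    valid_lbas: List[int] = field(default_factory=list)
    obsolete_lbas: List[int] = field(default_factory=list)

def schedule_slc_fold(block: SlcBlock, obsolete_threshold: float = 0.5) -> Tuple[str, List[int]]:
    """Pick one of the three options for handling repetitive writes when
    folding SLC cache to QLC. The threshold is an assumed policy knob."""
    total = len(block.valid_lbas) + len(block.obsolete_lbas)
    obsolete_ratio = len(block.obsolete_lbas) / total if total else 0.0

    if obsolete_ratio >= obsolete_threshold:
        # High amount of obsolete data: compact the SLC cache (garbage
        # collection) without moving any data to QLC.
        return ("compact_slc", block.valid_lbas)
    if block.obsolete_lbas:
        # Move valid data only, skipping obsolete data in SLC.
        return ("move_valid_only", block.valid_lbas)
    # Otherwise move all the data to QLC (the disclosure also allows moving
    # obsolete LBAs along with valid ones, creating obsolete "holes" in QLC).
    return ("move_all", block.valid_lbas + block.obsolete_lbas)
```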
  • FIGS. 2A-2C are schematic illustrations of scheduling foggy-fine programming according to various embodiments. The Front End (FE) module 202 comprises an XOR engine 204 and a static random-access memory (SRAM) 206. Host data may be initially delivered to the FE module 202. The data passes through the XOR engine 204 and is written to the SRAM 206. The XOR engine 204 generates XOR parity information prior to writing to SRAM 206. Exclusive OR (XOR) parity information is used to improve reliability of storage device for storing data, such as enabling data recovery of failed writes or failed reads of data to and from NVM or enabling data recovery in case of power loss. The storage device may be the data storage device 104 of FIG. 1. The reliability may be provided by using XOR parity information generated or computed based on data stored to storage device. The XOR engine 204 may generate a parity stream to be written to SRAM 206. SRAM 206 may contain a plurality of dies in which data may be written to.
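As a rough illustration of the XOR parity protection described above, the sketch below computes one parity page over a set of data pages and rebuilds a single lost page. It is a minimal model of the role of the XOR engine 204, not its actual implementation.

```python
def xor_parity(pages):
    """Compute one XOR parity page over equal-length data pages (sketch only)."""
    parity = bytearray(len(pages[0]))
    for page in pages:
        for i, b in enumerate(page):
            parity[i] ^= b
    return bytes(parity)

def recover_page(parity, surviving_pages):
    """Rebuild one lost page from the parity and the remaining pages, e.g.
    after a failed program, a failed read, or a power loss event."""
    return xor_parity([parity, *surviving_pages])

# Example: protect four pages, lose one, recover it.
pages = [bytes([i]) * 16 for i in range(4)]
parity = xor_parity(pages)
assert recover_page(parity, pages[1:]) == pages[0]
```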
  • The Second Flash Manager (FM2) module 210 comprises an encoder 212, a SRAM 216, and a decoder 214. The decoder 214 may comprise a low gear (LG) decoder and a high gear (HG) decoder. The LG decoder can implement low power bit flipping algorithms, such as a low density parity check (LDPC) algorithm. The LG decoder may be operable to decode data and correct bit flips where such data has a low bit error rate (BER). The HG decoder can implement full power decoding and error correction algorithms, which may be initiated upon a failure of the LG decoder to decode and correct bit flips in data. The HG decoder can be operable to correct bit flips where such data has a high BER. Alternatively, FM2 may be replaced with a combined FE-FM monochip.
  • The encoder 212 and decoder 214 (including the LG decoder and HG decoder) can include processing circuitry or a processor (with a computer-readable medium that stores computer-readable program code (e.g., firmware) executable by the processor), logic circuitry, an application specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, a combination thereof, or the like, for example. In some examples, the encoder 212 and the decoder 214 are separate from the storage controller, and in other examples, the encoder 212 and the decoder 214 are embedded in or part of the storage controller. In some examples, the LG decoder is a hardened circuit, such as logic circuitry, an ASIC, or the like. In some examples, the HG decoder can be a soft decoder (e.g., implemented by a processor). Data may be written to SRAM 216 after being decoded at the decoder 214. The data at SRAM 216 may be further delivered to the encoder 212, as discussed below.
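A minimal sketch of the two-tier decode flow described for decoder 214 follows. The decoder callables and their return convention are placeholders, since the disclosure does not define a firmware interface for them.

```python
def decode_codeword(codeword, lg_decoder, hg_decoder):
    """Try the low-power low gear (LG) bit-flipping decoder first and fall back
    to the full-power high gear (HG) decoder only if LG fails (sketch)."""
    ok, data = lg_decoder(codeword)      # suited to data with a low BER
    if ok:
        return data
    ok, data = hg_decoder(codeword)      # full-power decode for high-BER data
    if not ok:
        raise ValueError("uncorrectable: both LG and HG decoding failed")
    return data
```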
  • The memory device may be a NAND memory device. The memory device 220 may comprise a SLC 222 and a MLC 224. It is to be understood that the embodiments discussed herein are applicable to any multilevel cell such as MLC, TLC or QLC. MLC is simply exemplified. SLC 222, MLC, TLC, QLC, and PLC are named according to the number of bits that a memory cell may accept. For example, SLC may accept one bit per memory cell and QLC may accept four bits per memory cell. Each bit is registered on the storage device as a 1 or a 0. Additionally, while SLC memory is exemplified as a memory device, it is also contemplated that the SLC memory may be replaced with a 2-bit cell or MLC memory device.
  • FIG. 2A is a schematic illustration of a foggy-fine writing process, according to one embodiment. Host data is fed to the FE module 202. The host data is sent through the XOR engine 204, and XOR parity information is generated. The data is then written to the SRAM 206 at the FE module 202. At the FM2 module 210, data is delivered to the encoder 212 from the SRAM 206 along stream 1. The data is then written to the SLC 222 of the memory device 220 along stream 2. To proceed with the foggy-fine writing to the MLC 224, the data is read from SLC 222 and then decoded at the decoder 214 of the FM2 module 210 along stream 3. The decoded data is then written to the SRAM 216 of the FM2 module 210 in stream 4. The data is then sent through the encoder 212 along stream 5 for encoding. The foggy write occurs after the data is encoded at the encoder 212 of the FM2 module 210 from the SRAM 216 of the FM2 module 210, along stream 6. The foggy write is the initial write from encoder 212 of the FM2 module 210 to the MLC 224 of the memory device 220. To proceed with the fine write, data is then read from SLC 222 and delivered to the decoder 214 along stream 7. Following decoding, the data is then written in SRAM along stream 8 and then delivered to the encoder 212 along stream 9 for encoding. The now encoded data is then fine written to MLC 224 along stream 10.
  • According to the embodiment referred to in FIG. 2A, there may be no DRAM-bus traffic. Furthermore, the SLC and MLC programming may be decoupled. The foggy-fine writing process may incorporate multi-stream writing with direct-write hot/cold sorting support. However, the bus traffic may be higher.
  • FIG. 2B is a schematic illustration of a foggy-fine writing process, according to another embodiment. Host data is delivered to the FE module 202. The host data passes through the XOR engine 204, and XOR parity information is generated. The data is then written to the SRAM 206 at the FE module 202. From the SRAM 206 at the FE module, the data is then transferred to the encoder 212 along stream 1. Once the data is encoded, the data is written to the SLC 222 along stream 2. Simultaneous with transferring the data to the encoder 212 along stream 1, the data is transferred to the DRAM 230 along stream 3. The foggy-fine writing process first sends the data from the DRAM 230 to the encoder 212 along stream 4 for encoding. The encoded data is then foggy written to the MLC 224 along stream 5. Thereafter, the data is again sent from the DRAM 230 to the encoder 212 along stream 6 for encoding. Following encoding, the data is fine written along stream 7 to the MLC 224. The foggy write step thus transfers the data from the DRAM 230 to the encoder 212 and writes the data to the MLC 224; the fine write step occurs after the foggy write step and likewise transfers the data from the DRAM 230 to the encoder 212 and writes the data to the MLC 224. The SLC and MLC programs may occur in a sequential write process due to buffer limitations.
  • FIG. 2C is a schematic illustration of a foggy-fine writing process, according to another embodiment. Host data is delivered to the FE module 202. The host data passes through the XOR engine 204, and XOR parity information is generated. The data is then written to the SRAM 206 at the FE module 202. From the SRAM 206 at the FE module, the data is then transferred to the encoder 212 along stream 1 and then written to the SLC 222 along stream 2. The foggy write step writes the data from the SLC 222 to the MLC 224. More specifically, the data is read from the SLC 222 and then foggy written to the MLC 224 along stream 3. Concurrently with sending the data to the encoder 212, the data is transferred to the DRAM 230 from the SRAM 206 at the FE module 202 along stream 4. The fine write step involves sending the data from the DRAM 230 to the encoder 212 along stream 5, along with XOR data, and then fine writing the encoded data to the MLC 224 along stream 6. The data path uses NAND data latches to stage data between the SLC and foggy programs, so that a single 4-page data transfer may be used for both the SLC and foggy MLC programs. The fine write may also occur from the SLC 222 to the MLC 224 after the foggy write in the case that the original fine write becomes corrupted. The SLC and MLC programming may occur in a sequential write process due to buffer limitations.
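  • A minimal Python sketch of the FIG. 2C idea is shown below, assuming a simplified die model: a single 4-page transfer over the NAND bus lands the data in the die's latches, and both the SLC program and the foggy MLC program are then issued from those latches, while the fine data arrives later via the DRAM path. The class and method names are illustrative and do not correspond to an actual NAND command set.

      class Die:
          """Toy model of a NAND die with 4-page data latches."""
          def __init__(self):
              self.latches = None
              self.slc, self.mlc_foggy, self.mlc_fine = [], [], []

          def transfer_4_pages(self, pages):
              self.latches = pages                 # single bus transfer stages the data

          def program_slc(self):
              self.slc.append(self.latches)        # SLC program from the latches

          def program_mlc_foggy(self):
              self.mlc_foggy.append(self.latches)  # foggy program reuses the latches

          def program_mlc_fine(self, pages):
              self.mlc_fine.append(pages)          # fine data supplied later (DRAM path)

      die = Die()
      die.transfer_4_pages(["p0", "p1", "p2", "p3"])
      die.program_slc()           # SLC write
      die.program_mlc_foggy()     # foggy write with no second bus transfer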
  • FIG. 3 is a chart illustrating staggered foggy-fine programming. It is to be understood that the disclosure is not limited to the staggered foggy-fine programming exemplified in FIG. 3; rather, other sequences are contemplated as well. More specifically, to perform foggy-fine programming, the foggy write and the fine write to a particular word line and string cannot occur back-to-back. As shown in FIG. 3, to properly foggy-fine write to word line 0 at string 0, several additional writes need to occur between the foggy write to word line 0, string 0 and the fine write to word line 0, string 0. The foggy-fine write process proceeds as follows.
  • Initially, data is foggy written to word line 0, string 0. Then, data is foggy written to word line 0, string 1. Thereafter, data is foggy written to word line 0, string 2. Then, data is foggy written to word line 0, string 3. Thereafter, data is foggy written to word line 1, string 0. Only now can data be fine written to word line 0, string 0. The arrows in FIG. 3 illustrate the path of writing in the foggy-fine writing process. In short, to properly foggy-fine write data, the data is initially foggy written to the specific data location. Then, three additional foggy data writes occur to the same word line, but at different strings. A fourth foggy write occurs on an adjacent word line along the same string as the specific data location. Only after the fourth foggy write to the adjacent word line and same string may the fine writing to the original word line and original string (i.e., the original data location) be performed. In total, four additional foggy writes occur prior to the fine writing.
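  • The ordering rule of FIG. 3 can be expressed, under the assumption of four strings per word line, by the small Python helper below: the fine write to a given (word line, string) location is gated on the foggy writes to all four strings of that word line plus the foggy write to the same string on the next word line.

      NUM_STRINGS = 4   # assumed, per the example of FIG. 3

      def fine_prerequisites(wl, s):
          """Foggy programs that must complete before fine-writing (wl, s)."""
          same_wl = [(wl, other) for other in range(NUM_STRINGS)]
          next_wl = [(wl + 1, s)]
          return same_wl + next_wl

      print(fine_prerequisites(0, 0))
      # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0)] -> the location's own foggy
      # write plus four additional foggy writes, matching the description above.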
  • FIGS. 4A-4C are collectively a schematic illustration showing a programming ratio of SLC:foggy:fine according to one embodiment. In general, SLC programming is significantly faster than foggy and fine MLC (or TLC or QLC) programs (i.e., about a 10x difference). By staggering writing to the drive (e.g., utilizing single 4-page data transfers), the overall transfers may be more consistent, with little to no performance loss and without throttling the data transfer (e.g., slowing down the host transfers to artificially make the drive have a more consistent performance). The ratio of (x)SLC:(y)foggy:(z)fine programs should generally be as close as possible to 4*tPROGSLC:tPROGFoggy:tPROGFine, where the factor of 4 reflects the 1:4 density difference between SLC and QLC. This principle may be applied to a different mix of NAND modes, such as MLC and TLC.
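  • As a worked illustration of the scheduling arithmetic, the Python snippet below uses placeholder program times (not datasheet values) to show the time budget implied by pairing four SLC programs with one foggy and one fine QLC program, i.e., the 4:1:1 program-count ratio used elsewhere in this description.

      t_prog_slc = 150     # us, hypothetical SLC program time (roughly 10x faster)
      t_prog_foggy = 1500  # us, hypothetical foggy QLC program time
      t_prog_fine = 1500   # us, hypothetical fine QLC program time

      slc_count, foggy_count, fine_count = 4, 1, 1   # the 4:1:1 program-count ratio
      time_budget = (slc_count * t_prog_slc,
                     foggy_count * t_prog_foggy,
                     fine_count * t_prog_fine)
      print("time spent (SLC, foggy, fine):", time_budget)   # (600, 1500, 1500) us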
  • FIGS. 4A-4C show the staggering principle in a drive with three active super-devices comprising 96 dies each. Within each super-device, data may be written to the various word lines, strings, and foggy/fine programs in a process that may resemble the one outlined in FIG. 3. Furthermore, for every four writes (i.e., 4-page data transfers) to SLC (denoted by the small vertical bars adjacent the foggy writes), there is one foggy write to QLC or one fine write to QLC. To ensure smooth 2 GB/s write performance, 2 MB of SLC data should be written every 1.5 ms.
  • FIGS. 5A-5C are collectively a schematic illustration showing foggy-fine programming for a single super-device. For each word line, 128 KB of XOR context may be generated for each foggy program and written to DRAM. FIG. 5 demonstrates the principle of sharing a limited number of parity contexts to achieve the staggered scheduling of different program types in a super-device. The limited number of parity contexts refers to the value of 4, which is the 1:4 density difference between SLC and QLC. Each parity group refers to a super-WL, which is the unit of all data in the same WL-string in all devices. FIG. 5 may refer to the method in FIG. 3 to demonstrate the method for SLC/foggy-fine programming to a super-device. After a string in a word line is at capacity, the XOR data is generated by the XOR engine, and the XOR engine is then available for generating XOR parity data for the next completed word line. Similar to FIG. 4, the staggering of the SLC-foggy-fine writing, and the 4:1:1 proportions for writing, result in the four XOR generators being sufficient to provide the needed parity data by staggering the availability of the XOR generators.
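  • The parity-context sharing can be sketched as follows, assuming a pool of four contexts that is reused in rotation across parity groups (super-WLs). The class and the byte-wise XOR are simplifications for illustration; they are not the disclosed XOR engine.

      NUM_CONTEXTS = 4   # matches the 1:4 SLC:QLC density difference

      class ParityContext:
          """Accumulates XOR parity for one parity group (super-WL)."""
          def __init__(self):
              self.parity = 0
          def accumulate(self, page_bytes):
              for b in page_bytes:
                  self.parity ^= b
          def reset(self):
              self.parity = 0

      contexts = [ParityContext() for _ in range(NUM_CONTEXTS)]

      def context_for_super_wl(super_wl):
          # Super-WLs reuse the pool in rotation; a context can safely be
          # recycled because the staggered 4:1:1 schedule ensures the previous
          # group that used it has already been finely programmed.
          return contexts[super_wl % NUM_CONTEXTS]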
  • FIGS. 6A-6C are collectively a schematic illustration showing foggy-fine programming for multiple super-devices. The XOR context generated from each foggy program is unique to its own super-device (i.e., there is no XOR relationship between super-devices). A minimum time gap may be required before starting an SLC/foggy write sequence on String 0 (i.e., the initial write). The time gap may cause the host-write performance to be more consistent. However, the minimum time gap may depend on the drive capacity. Furthermore, to ensure smooth performance, SLC programs on the dies should not overlap within a super-device. To ensure smooth 2 GB/s write performance, 2 MB of SLC data should be written every 1.5 ms. An overlap of SLC programs may cause a performance bottleneck. Priority for data writes is given to a WL that has already been utilized so that SLC writes do not overlap.
  • A fine data transfer takes between about 400 μs and about 500 μs (with a 400 MT/s TM bus). Furthermore, there may be one fine data transfer for each 2 MB of SLC transfer/programming. The QLC fine data writes may not overlap. To determine which fine data transfer occurs before another, the string number may be used; in general, the lower string number is transferred prior to the higher string number. In between QLC fine data transfers, an SLC write may occur in order to maintain a more consistent host performance.
  • The 4-page transfer limitation refers to the 4 XOR contexts that a single super-device may require. In general, the next set of SLC/foggy dies to program is determined in a round-robin fashion. Overall, the SLC write, foggy write, and fine write outlined in this embodiment may significantly reduce data transfers over the NAND-bus and DRAM-bus. The reduced data transfers may improve host write performance and may reduce device power consumption.
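  • The two selection rules above can be illustrated with the following Python sketch: pending fine transfers are issued lowest string number first, and the next set of dies to receive an SLC/foggy program is chosen round-robin. The die-set labels and queue contents are hypothetical examples.

      from collections import deque
      from itertools import cycle

      # Pending fine transfers as (string number, die set); lower string goes first.
      pending_fine = [(2, "dies 0-31"), (0, "dies 32-63"), (1, "dies 64-95")]
      pending_fine.sort(key=lambda item: item[0])
      fine_queue = deque(pending_fine)

      # Round-robin selection of the next SLC/foggy die set.
      die_sets = cycle(["dies 0-31", "dies 32-63", "dies 64-95"])

      print(fine_queue.popleft())   # (0, 'dies 32-63') is transferred first
      print(next(die_sets))         # 'dies 0-31' receives the next SLC/foggy program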
  • In one embodiment, a data storage device comprises: one or more memory devices, the one or more memory devices including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory; foggy write the data to the MLC memory, wherein foggy writing the data to the MLC memory includes retrieving the data from latches in the one or more memory devices and writing the retrieved data to the MLC memory; and fine write the data to the MLC memory. The data that is finely written to the MLC memory does not pass through the SLC memory. The data that is finely written to the MLC memory passes through DRAM and an encoder before being finely written in the MLC memory. The data written to the SLC memory passes through an encoder. The data that is written to the SLC memory does not pass through the DRAM that the data finely written to the MLC memory passes through. The data that is foggy written to the MLC memory does not pass through the DRAM that the data finely written to the MLC memory passes through. A single four-page transfer is used for both writing the data to the SLC memory and foggy writing the data to the MLC memory.
  • In another embodiment, a data storage device comprises: one or more memory devices, the one or more memory devices each including a plurality of dies with each die including SLC memory and MLC memory; and a controller coupled to the one or more memory devices, the controller configured to: stagger writing to the SLC memory, foggy writing to the MLC memory, and fine writing to the MLC memory, wherein a ratio of writing to the SLC memory to foggy writing to the MLC memory to fine writing to the MLC memory is 4:1:1. Writing to the SLC memory occurs on only one word line at a time for a given memory device of the one or more memory devices. Foggy writing to the MLC memory occurs for multiple word lines simultaneously. Simultaneous foggy writing occurs for different dies. The simultaneous foggy writing occurs for the same string. The MLC memory is QLC memory. The controller is configured to write data to the SLC memory of at least a first memory device of the one or more memory devices simultaneous with foggy writing data to the MLC memory of a second memory device of the one or more memory devices and simultaneous with fine writing data to the MLC memory of at least a third memory device of the one or more memory devices.
  • In another embodiment, a data storage device comprises: one or more memory devices, wherein each memory device has a plurality of dies, wherein the plurality of dies are arranged into four strings, wherein the one or more memory devices each include SLC memory and MLC memory; a controller coupled to the one or more memory devices, the controller configured to: write data to the SLC memory of a first string on a first word line for a first set of dies; foggy write data to the MLC memory of the first string on the first word line for the first set of dies; write data to the SLC memory of a second string on the first word line for the first set of dies; foggy write data to the MLC memory of the second string on the first word line for the first set of dies; write data to the SLC memory of a third string on the first word line for the first set of dies; foggy write data to the MLC memory of the third string on the first word line for the first set of dies; write data to the SLC memory of the first string on the first word line for a second set of dies different from the first set of dies; foggy write data to the MLC memory of the first string on the first word line for the second set of dies; write data to the SLC memory of a fourth string on the first word line for the first set of dies; and foggy write data to the MLC memory of the fourth string on the first word line for the first set of dies. The writing of data to the SLC memory of the first string on the first word line for the second set of dies occurs simultaneous with the foggy writing of data to the MLC memory of the third string on the first word line for the first set of dies. The controller is further configured to: write data to the SLC memory of the second string on the first word line for the second set of dies; foggy write data to the MLC memory of the second string on the first word line for the second set of dies; and fine write data to the MLC memory of the first string of a different word line from the first word line and the second word line for the first set of dies. The one or more memory devices each have four XOR parity contexts. The one or more memory devices comprise two memory devices, and writing to SLC memory is staggered across the two memory devices. There is a time gap between starting an SLC write on a string in a first memory device of the two memory devices and a string in a second memory device of the two memory devices.
  • By foggy writing data in MLC that has been stored in latches in the memory device, improved foggy-fine programming occurs. Additionally, by programming in a staggered fashion at a ratio of 4:1:1, efficient XOR context management occurs.
  • While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (20)

1. A data storage device, comprising:
one or more memory devices, the one or more memory devices including SLC memory and MLC memory; and
a controller coupled to the one or more memory devices, the controller configured to:
write data to the SLC memory;
foggy write the data to MLC memory, wherein the foggy writing the data to the MLC memory includes:
retrieving the data from the SLC memory; and
writing the retrieved data to the MLC memory; and
fine writing the data to the MLC memory.
2. The data storage device of claim 1, wherein the data that is finely written to the MLC memory does not pass through the SLC memory.
3. The data storage device of claim 2, wherein the data that is finely written to the MLC memory passes through DRAM and an encoder before being finely written in the MLC memory.
4. The data storage device of claim 3, wherein the data written to the SLC memory passes through an encoder.
5. The data storage device of claim 4, wherein the data that is written to the SLC memory does not pass through the DRAM that the data finely written to the MLC memory passes through.
6. The data storage device of claim 5, wherein the data that is foggy written to the MLC memory does not pass through the DRAM that the data finely written to the MLC memory passes through.
7. The data storage device of claim 1, wherein a single four page transfer is used for both the writing the data to the SLC memory and foggy writing the data to the MLC memory.
8. A data storage device, comprising:
one or more memory devices, the one or more memory devices each including a plurality of dies with each die including SLC memory and MLC memory; and
a controller coupled to the one or more memory devices, the controller configured to:
stagger writing to the SLC memory, foggy writing to the MLC memory, and fine writing to the MLC memory, wherein a ratio of writing to the SLC memory to foggy writing to the MLC memory to fine writing to the MLC memory is 4:1:1.
9. The data storage device of claim 8, wherein writing to the SLC memory occurs on only one word line at a time for a given memory device of the one or more memory devices.
10. The data storage device of claim 8, wherein foggy writing to the MLC memory occurs for multiple word lines simultaneously.
11. The data storage device of claim 10, wherein simultaneous foggy writing occurs for different dies.
12. The data storage device of claim 11, wherein the simultaneous foggy writing occurs for the same string.
13. The data storage device of claim 12, wherein the MLC memory is QLC memory.
14. The data storage device of claim 8, wherein the controller is configured to write data to the SLC memory of at least a first memory device of the one or more memory devices simultaneous with foggy writing data to the MLC memory of a second memory device of the one or more memory devices and simultaneous with fine writing data to the MLC memory of at least a third memory device of the one or more memory devices.
15. A data storage device, comprising:
one or more memory devices, wherein each memory device has a plurality of dies, wherein the plurality of dies are arranged into four strings, wherein the one or more memory devices each include SLC memory and MLC memory;
a controller coupled to the one or more memory devices, the controller configured to:
write data to the SLC memory of a first string on a first word line for a first set of dies;
foggy write data to the MLC memory of the first string on the first word line for the first set of dies;
write data to the SLC memory of a second string on the first word line for the first set of dies;
foggy write data to the MLC memory of the second string on the first word line for the first set of dies;
write data to the SLC memory of a third string on the first word line for the first set of dies;
foggy write data to the MLC memory of the third string on the first word line for the first set of dies;
write data to the SLC memory of the first string on the first word line for a second set of dies different from the first set of dies;
foggy write data to the MLC memory of the first string on the first word line for the second set of dies;
write data to the SLC memory of a fourth string on the first word line for the first set of dies; and
foggy write data to the MLC memory of the fourth string on the first word line for the first set of dies.
16. The data storage device of claim 15, wherein the writing data to the SLC memory of the first string on the first word line for the second set of dies occurs simultaneous with the foggy writing data to the MLC memory of the third string on the first word line for the first set of dies.
17. The data storage device of claim 15, wherein the controller is further configured to:
write data to the SLC memory of the second string on the first word line for the second set of dies;
foggy write data to the MLC memory of the second string on the first word line for the second set of dies; and
fine write data to the MLC memory of the first string of a different word line from the first word line and the second word line for the first set of dies.
18. The data storage device of claim 15, wherein the one or more memory devices each have four XOR parity contexts.
19. The data storage device of claim 15, wherein the one or more memory devices comprises two memory devices and wherein writing to SLC memory is staggered across the two memory devices.
20. The data storage device of claim 19, where there is a time gap between starting an SLC writing on a string in a first memory device of the two memory devices and a string in a second memory device of the two memory devices.
US16/818,571 2020-03-13 2020-03-13 Combined QLC programming method Active US11137944B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/818,571 US11137944B1 (en) 2020-03-13 2020-03-13 Combined QLC programming method
DE102020116189.1A DE102020116189B3 (en) 2020-03-13 2020-06-18 COMBINED QLC PROGRAMMING PROCEDURE
CN202010564338.5A CN113393884A (en) 2020-03-13 2020-06-19 Combined QLC programming method
KR1020200074823A KR102345454B1 (en) 2020-03-13 2020-06-19 Combined qlc programming method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/818,571 US11137944B1 (en) 2020-03-13 2020-03-13 Combined QLC programming method

Publications (2)

Publication Number Publication Date
US20210286554A1 true US20210286554A1 (en) 2021-09-16
US11137944B1 US11137944B1 (en) 2021-10-05

Family

ID=76968942

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/818,571 Active US11137944B1 (en) 2020-03-13 2020-03-13 Combined QLC programming method

Country Status (4)

Country Link
US (1) US11137944B1 (en)
KR (1) KR102345454B1 (en)
CN (1) CN113393884A (en)
DE (1) DE102020116189B3 (en)


Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7719889B2 (en) 2007-06-25 2010-05-18 Sandisk Corporation Methods of programming multilevel cell nonvolatile memory
US8144512B2 (en) * 2009-12-18 2012-03-27 Sandisk Technologies Inc. Data transfer flows for on-chip folding
US8054684B2 (en) * 2009-12-18 2011-11-08 Sandisk Technologies Inc. Non-volatile memory and method with atomic program sequence and write abort detection
US9092340B2 (en) 2009-12-18 2015-07-28 Sandisk Technologies Inc. Method and system for achieving die parallelism through block interleaving
US8355280B2 (en) * 2010-03-09 2013-01-15 Samsung Electronics Co., Ltd. Data storage system having multi-bit memory device and operating method thereof
KR101903091B1 (en) * 2011-10-05 2018-10-02 삼성전자주식회사 Memory system comprising a non-volatile memory device and operating method thereof
KR102125376B1 (en) 2013-07-01 2020-06-23 삼성전자주식회사 Storage device and writing method thereof
KR102163872B1 (en) 2013-08-09 2020-10-13 삼성전자 주식회사 Multi-bit memory device, onchip buffered program method thereof and multi-bit memory system
US20150120988A1 (en) * 2013-10-28 2015-04-30 Skymedi Corporation Method of Accessing Data in Multi-Layer Cell Memory and Multi-Layer Cell Storage Device Using the Same
US9564212B2 (en) * 2014-05-06 2017-02-07 Western Digital Technologies, Inc. Solid-state memory corruption mitigation
JP6502880B2 (en) * 2016-03-10 2019-04-17 東芝メモリ株式会社 Semiconductor memory device
US10090044B2 (en) * 2016-07-21 2018-10-02 Sandisk Technologies Llc System and method for burst programming directly to MLC memory
US10109361B1 (en) * 2017-06-29 2018-10-23 Intel Corporation Coarse pass and fine pass multi-level NVM programming
JP7051546B2 (en) 2018-04-16 2022-04-11 キオクシア株式会社 Memory system and control method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160098319A1 (en) * 2014-10-02 2016-04-07 Sandisk Technologies Inc. System and method for pre-encoding of data for direct write to multi-level cell memory

Also Published As

Publication number Publication date
KR102345454B1 (en) 2021-12-29
CN113393884A (en) 2021-09-14
KR20210116160A (en) 2021-09-27
DE102020116189B3 (en) 2021-08-12
US11137944B1 (en) 2021-10-05

Similar Documents

Publication Publication Date Title
US8782329B2 (en) Method for performing data shaping, and associated memory device and controller thereof
US8145855B2 (en) Built in on-chip data scrambler for non-volatile memory
US8429330B2 (en) Method for scrambling data in which scrambling data and scrambled data are stored in corresponding non-volatile memory locations
TWI527048B (en) Error correction code unit, self-test method and associated controller applied to flash memory device
US20170315867A1 (en) Method for accessing flash memory module and associated flash memory controller and memory device
US9348694B1 (en) Detecting and managing bad columns
US10110255B2 (en) Method for accessing flash memory module and associated flash memory controller and memory device
WO2018192488A1 (en) Data processing method and apparatus for nand flash memory device
Tanakamaru et al. Highly reliable and low power SSD using asymmetric coding and stripe bitline-pattern elimination programming
US11631457B2 (en) QLC programming method with staging of fine data
CN115083486A (en) TLC data programming with hybrid parity
US10133645B2 (en) Data recovery in three dimensional non-volatile memory array after word line short
CN111033483A (en) Memory address verification method and memory device using the same
US20160077913A1 (en) Method of controlling nonvolatile memory
US11137944B1 (en) Combined QLC programming method
US20230396269A1 (en) Scaled bit flip thresholds across columns for irregular low density parity check decoding
US11886718B2 (en) Descrambling of scrambled linear codewords using non-linear scramblers
US10360973B2 (en) Data mapping enabling fast read multi-level 3D NAND to improve lifetime capacity
JP2008102693A (en) Memory controller, flash memory system, and method of controlling flash memory
KR102593374B1 (en) Qlc data programming
US20230162806A1 (en) Apparatus and method for reducing signal interference in a semiconductor device
US20240062840A1 (en) Read verification cadence and timing in memory devices
US9146805B2 (en) Data protecting method, memory storage device, and memory controller
CN114077515A (en) Data writing method, memory control circuit unit and memory storage device
CN115331721A (en) Memory system, memory device and programming method and reading method thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOROBETS, SERGEY ANATOLIEVICH;BENNETT, ALAN D.;JONES, RYAN R.;REEL/FRAME:052112/0320

Effective date: 20200312

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS AGENT, ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:053482/0453

Effective date: 20200511

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST AT REEL 053482 FRAME 0453;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:058966/0279

Effective date: 20220203

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., ILLINOIS

Free format text: PATENT COLLATERAL AGREEMENT - A&R LOAN AGREEMENT;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:064715/0001

Effective date: 20230818

Owner name: JPMORGAN CHASE BANK, N.A., ILLINOIS

Free format text: PATENT COLLATERAL AGREEMENT - DDTL LOAN AGREEMENT;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:067045/0156

Effective date: 20230818