CN111221676A - Apparatus and method for NAND device hybrid parity management

Apparatus and method for NAND device hybrid parity management

Info

Publication number
CN111221676A
Authority
CN
China
Prior art keywords
data
parity
memory
segment
nand device
Prior art date
Legal status
Withdrawn
Application number
CN201911185085.4A
Other languages
Chinese (zh)
Inventor
D. A. Palmer
Current Assignee
Micron Technology Inc
Original Assignee
Micron Technology Inc
Application filed by Micron Technology Inc
Publication of CN111221676A

Classifications

    • G06F 12/0804: Addressing or allocation in hierarchically structured memory systems, e.g. virtual memory systems; addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
    • G06F 11/1076: Error detection or correction by redundancy in data representation; parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F 11/1004: Adding special bits or symbols to the coded information, e.g. parity check, to protect a block of data words, e.g. CRC or checksum
    • G06F 2212/1044: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures; resource optimization; space efficiency improvement
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Security & Cryptography (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

The application relates to an apparatus and method for NAND device hybrid parity management. Devices and techniques for improving the efficiency of flush transfers in a storage device are described herein. A flush trigger event for a user data write may be identified. Here, user data corresponding to the user data write is stored in a buffer, and the size of the user data stored in the buffer is less than the write width of the storage device undergoing the write. The difference between the size of the user data in the buffer and the write width is the buffer available space. Additional data may be collated in response to identification of the flush trigger event. Here, the size of the additional data is less than or equal to the buffer available space. The user data and the additional data may then be written to the storage device.

Description

Apparatus and method for NAND device hybrid parity management
Technical Field
The present application relates to memory devices.
Background
Memory devices are typically provided as internal semiconductor integrated circuits in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory.
Volatile memory requires power to maintain its data and includes Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Synchronous Dynamic Random Access Memory (SDRAM), among others.
Non-volatile memory can hold stored data when not powered and includes flash memory, Read Only Memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Static RAM (SRAM), Erasable Programmable ROM (EPROM), resistance variable memory such as Phase Change Random Access Memory (PCRAM), Resistive Random Access Memory (RRAM), or Magnetoresistive Random Access Memory (MRAM), among others.
Flash memory is used as non-volatile memory for a wide range of electronic applications. Flash memory devices typically include one or more groups of single transistor floating gate or charge well memory cells that allow for high memory density, high reliability, and low power consumption.
Two common types of flash memory array architectures include NAND and NOR architectures, named after the logical form in which the basic memory cell configuration of each is arranged. The memory cells of a memory array are typically arranged in a matrix. In an example, the gate of each floating gate memory cell in a row of the array is coupled to an access line (e.g., a word line). In the NOR architecture, the drain of each memory cell in a column of the array is coupled to a data line (e.g., a bit line). In a NAND architecture, the drains of each memory cell in a string of the array are coupled together in series, source to drain, between a source line and a bit line.
Both NOR and NAND architecture semiconductor memory arrays are accessed by a decoder that activates a particular memory cell by selecting the word line coupled to its gate. In a NOR architecture semiconductor memory array, once activated, selected memory cells place their data values on bit lines, causing different currents to flow depending on the state in which the particular cell is programmed. In a NAND architecture semiconductor memory array, a high bias voltage is applied to a drain side Select Gate (SGD) line. The word lines coupled to the gates of each group of unselected memory cells are driven at a specified pass voltage (e.g., Vpass) to cause each group of unselected memory cells to operate as pass transistors (e.g., pass current in a manner that is unrestricted by their stored data values). Current then flows from the source line through each series-coupled group to the bit lines, limited only by the selected memory cells in each group, placing the currently encoded data values of the selected memory cells on the bit lines.
Each flash memory cell in a NOR or NAND architecture semiconductor memory array can be programmed individually or collectively to one or several programmed states. For example, a Single Level Cell (SLC) may represent one of two programmed states (e.g., 1 or 0), representing one bit of data.
However, flash memory cells can also represent one of more than two programmed states, allowing higher density memory to be fabricated without increasing the number of memory cells, as each cell can represent more than one binary digit (e.g., more than one bit). Such cells may be referred to as multi-state memory cells, multi-digit cells, or multi-level cells (MLCs). In some examples, an MLC may refer to a memory cell that may store two bits of data per cell (e.g., one of four programmed states), a three-level cell (TLC) may refer to a memory cell that may store three bits of data per cell (e.g., one of eight programmed states), and a four-level cell (QLC) may store four bits of data per cell. MLC is used herein in its broader context to refer to any memory cell that can store more than one bit of data per cell (i.e., can represent more than two programmed states).
Conventional memory arrays are two-dimensional (2D) structures disposed on a surface of a semiconductor substrate. To increase memory capacity and reduce cost for a given area, the size of individual memory cells has been reduced. However, there are technical limitations to the reduction in size of individual memory cells, and thus, of the memory density of 2D memory arrays. In response, three-dimensional (3D) memory structures, such as 3D NAND architecture semiconductor memory devices, are being developed to further increase memory density and reduce memory cost.
Such 3D NAND devices often include strings of memory cells coupled in series (e.g., drain to source) between one or more source-side Select Gates (SGS) proximate to the source and one or more drain-side Select Gates (SGD) proximate to the bit line. In an example, the SGS or SGD may include one or more Field Effect Transistors (FETs), or metal-oxide semiconductor (MOS) structure devices, among others. In some examples, the strings will extend vertically through a plurality of vertically spaced levels containing respective word lines. A semiconductor structure (e.g., a polysilicon structure) may extend adjacent to a string of memory cells to form a channel for the memory cells of the string. In the example of a vertical string, the polysilicon structures may be in the form of vertically extending pillars. In some examples, the string may be "folded" and thus arranged relative to a U-shaped pillar. In other examples, multiple vertical structures may be stacked on top of each other to form a stacked array of strings of storage cells.
Memory arrays or devices can be combined to form the storage capacity of a memory system, such as Solid State Drives (SSDs), Universal Flash Storage (UFS™) devices, multi-media card (MMC) solid state memory devices, embedded MMC (eMMC™) devices, and the like. SSDs are particularly useful as the primary storage device for computers, with advantages over traditional hard disk drives with moving parts with respect to, for example, performance, size, weight, robustness, operating temperature range, and power consumption. For example, SSDs may have reduced seek times, latency, or other delays associated with disk drives (e.g., electromechanical, etc.). SSDs use non-volatile memory cells, such as flash memory cells, to avoid internal battery power requirements, thus allowing the drive to be more versatile and compact.
An SSD may include a number of memory devices, including a number of dies or logic units (e.g., logic unit numbers or LUNs), and may include one or more processors or other controllers that perform the logic functions required to operate the memory devices or interface with external systems. Such SSDs may include one or more flash memory dies including a number of memory arrays and peripheral circuitry thereon. A flash memory array may include a number of blocks of memory cells organized into physical pages. In many examples, an SSD will also include DRAM or SRAM (or other forms of memory die or other memory structures). The SSD can receive commands from the host associated with memory operations, such as read or write operations to transfer data (e.g., user data and associated integrity data, such as error data and address data, etc.) between the memory device and the host, or erase operations to erase data from the memory device.
Disclosure of Invention
In one aspect, the present application provides an array controller for NAND device hybrid parity management, the array controller comprising: a volatile memory; and processing circuitry to: receiving a first portion of data corresponding to a first segment of data defined with respect to a structure of the NAND device; receiving a second data portion corresponding to a second segment of data defined with respect to the structure of the NAND device, the second segment of data being different from the first segment; calculating a parity value using the first data portion and the second data portion; and storing the parity value in the volatile memory.
In another aspect, the present application provides a method for hybrid parity management for a NAND device, the method comprising: receiving a first data portion corresponding to a first data segment defined with respect to a structure of the NAND device; receiving a second data portion corresponding to a second segment of data defined with respect to the structure of the NAND device, the second segment of data being different from the first segment; calculating a parity value using the first data portion and the second data portion; and storing the parity value.
In another aspect, the present application provides a system comprising means for performing the above method.
In another aspect, the present application provides a machine-readable medium comprising instructions that when executed by processing circuitry cause the processing circuitry to perform the above method.
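The parity computation recited in these aspects can be illustrated with a short sketch. The following C fragment is illustrative only and not the claimed implementation; the portion size, structure, and function names are assumptions. It shows the common approach of XOR-accumulating equal-sized data portions from different segments into a parity value held in volatile controller memory.

    /* Minimal sketch (not the claimed implementation): accumulate a RAIN-style
     * parity value by XOR-ing equal-sized data portions from different NAND
     * segments into a buffer held in controller volatile memory. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define PORTION_SIZE 4096u              /* assumed portion size in bytes */

    typedef struct {
        uint8_t parity[PORTION_SIZE];       /* lives in volatile memory */
    } parity_ctx;

    static void parity_reset(parity_ctx *ctx)
    {
        memset(ctx->parity, 0, sizeof ctx->parity);
    }

    /* XOR a received data portion (e.g., from the first or second segment)
     * into the running parity value. */
    static void parity_accumulate(parity_ctx *ctx, const uint8_t *portion)
    {
        for (size_t i = 0; i < PORTION_SIZE; i++)
            ctx->parity[i] ^= portion[i];
    }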
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, and not by way of limitation, various embodiments discussed in the present document.
FIG. 1 shows an example of an environment including a memory device.
Fig. 2 shows an example of data flow in a system implementing improved flush transfer efficiency.
Fig. 3 shows an example of a sequence of messages between components when performing an enhanced flush transfer.
Fig. 4 shows a flow chart of a method for improving the efficiency of flush transfers.
Fig. 5 is a block diagram illustrating an example of a machine on which one or more embodiments may be implemented.
Detailed Description
Data written to a non-volatile memory device (e.g., a flash memory device) is typically buffered (e.g., in a write buffer or write cache) prior to being written to the underlying storage array. Buffering generally enables faster write transfers to the memory device, and also enables any special handling of data, such as resolving logical-to-physical (L2P) relationships between storage units, such as virtual blocks and physical blocks.
Typically, writes to the storage array are performed in increments of a defined size. Example increments may include thirty-two bits, sixty-four bits, one hundred twenty-eight bits, and so on. The particular increments available for writing are typically defined by the underlying hardware. Thus, a thirty-two bit increment may correspond to thirty-two connections of a buffer or other device to an array element. The size of the increment is typically a granularity design decision made by the array developer to balance speed against complexity. The size of the buffer may correspond to a multiple of the storage array write increment.
In general, data is flushed from the buffer to the array in response to a triggering event. Example trigger events include a full buffer, an aging factor, or another condition or state in response to which the buffer is to be emptied (examples of which are discussed further below with respect to FIG. 1). The aging factor defines the longest period that data may reside in the buffer. An aging factor is an instance of a triggering event that causes the buffer to be emptied before the data equals the storage array write increment. Because data is written to the array in full write increments, the buffered data is typically padded with zeros or similar fill material to equal the write increment before being written to the array.
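As a rough illustration of how such trigger events might be evaluated, the following C sketch checks a buffer-full condition, an aging factor, and an explicit host flush command. The threshold values and field names are assumptions and are not taken from the application.

    /* Sketch of flush-trigger evaluation for a simple write buffer with a fill
     * level and an age timestamp. Thresholds and names are illustrative. */
    #include <stdbool.h>
    #include <stdint.h>

    #define WRITE_INCREMENT_BYTES 16384u    /* assumed array write increment */
    #define MAX_BUFFER_AGE_MS     100u      /* assumed aging-factor limit */

    struct write_buffer {
        uint32_t bytes_used;                /* user data currently buffered */
        uint64_t oldest_entry_ms;           /* arrival time of oldest data */
    };

    static bool flush_triggered(const struct write_buffer *wb,
                                uint64_t now_ms, bool host_flush_cmd)
    {
        bool full = wb->bytes_used >= WRITE_INCREMENT_BYTES;
        bool aged = (now_ms - wb->oldest_entry_ms) >= MAX_BUFFER_AGE_MS;
        return full || aged || host_flush_cmd;
    }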
When writing a large piece of data (e.g., a megabyte media file), the padding is not particularly inefficient because typically only one write increment out of thousands is padded. However, if a smaller write increment is used, for example to reduce the number of padding bits, efficiency suffers because there are more writes for the same amount of data, and the additional writes involve more overhead. Thus, the reduction in padding data comes at the cost of greater overhead. It is therefore generally more efficient to write large segments of data using large write increments, because overhead is reduced and the relative amount of padding data to real data is generally small.
However, when writing small segments of real data, the ratio of real data to padding data decreases, and the padding data may consume a large amount of storage. This scenario becomes more common with the growing prevalence of devices that wake for a short period, write a small state update to storage, and then resume a low power (e.g., sleep, hibernate, etc.) state. Such devices include mobile phones, tablets, internet of things (IoT) devices and sensors, and the like. When flash memory devices make up the storage array, not only can the padding data consume a large amount of storage, but the writing of unrelated padding data can cause increased wear on the device, thereby shortening the operating life of the storage array.
To address these issues, some or all of the padding data may be replaced with useful data. Generally, a memory device writes certain maintenance (e.g., management) data to the storage array. Such maintenance data may include L2P table portions, metadata, statistics (e.g., write error statistics for blocks), bad block tables, and the like. In some conventional arrangements, the maintenance data is written to a portion of the storage array reserved for maintenance data, or is otherwise separated from the user data. Here, however, the maintenance data replaces the padding data, so that less is written to the storage array and the underlying storage is used more efficiently.
In response to a flush trigger event, the maintenance data is collated (e.g., collected, retrieved, assembled, etc.) to replace the padding data for the write. Maintenance data may be accumulated in one or more buffers separate from the write buffer described above. The collated, fill-sized maintenance data may come from these separate buffers. Direct component queries may also be used to collate maintenance data. In an example, the memory controller updates a lookup data structure to locate the maintenance data. This is useful because the maintenance data may be intermixed with user data and distributed throughout the storage array. Additional details and examples are provided below.
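A minimal sketch of this collation step follows, assuming the pending maintenance data is available as a list of discrete items. Only whole items that fit in the remaining buffer space are taken, and any residue is zero padded; the names and the flat item representation are illustrative assumptions.

    /* Sketch of collating maintenance data in place of padding: take only whole
     * items that fit in the space remaining in the write buffer and zero-pad
     * whatever is left. All names are illustrative. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct maint_item { const uint8_t *data; size_t len; };

    static size_t collate_maintenance(uint8_t *buf, size_t avail,
                                      const struct maint_item *items, size_t n)
    {
        size_t used = 0;
        for (size_t i = 0; i < n; i++) {
            if (items[i].len <= avail - used) {     /* whole units only */
                memcpy(buf + used, items[i].data, items[i].len);
                used += items[i].len;
            }
        }
        memset(buf + used, 0, avail - used);        /* pad the residue */
        return used;
    }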
FIG. 1 shows an example of an environment 100 including a host device 105 and a memory device 110 configured to communicate over a communication interface. The host device 105 or the memory device 110 may be included in a variety of products 150, such as internet of things (IoT) devices (e.g., refrigerators or other appliances, sensors, motors or actuators, mobile communication devices, automobiles, drones, etc.) to support processing, communication, or control of the products 150.
The memory device 110 includes a memory controller 115 and a memory array 120 including, for example, a number of individual memory dies (e.g., a stack of three-dimensional (3D) NAND dies). In 3D architecture semiconductor memory technology, vertical structures are stacked, increasing the number of levels, physical pages, and correspondingly increasing the density of memory devices (e.g., storage devices). In an example, memory device 110 may be a discrete memory or storage component of host device 105. In other examples, memory device 110 may be part of an integrated circuit (e.g., a system on a chip (SOC), etc.) stacked or otherwise included with one or more other components of host device 105. In these examples, the memory device 110 communicates with the host device 105 components through an interconnect 111 (e.g., a bus). Thus, as described herein, host or host device 105 operation is different than that of memory device 110, even when memory device 110 is integrated into host device 105.
One or more communication interfaces, such as interconnect 111, may be used to communicate between the memory device 110 and one or more other components of the host device 105 (e.g., a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, a Universal Flash Storage (UFS) interface, an eMMC™ interface, or one or more other connectors or interfaces). Host device 105 may include a host system, an electronic device, a processor, a memory card reader, or one or more other electronic devices external to memory device 110. In some examples, host 105 may be a machine having some or all of the components discussed with reference to machine 500 of FIG. 5.
Memory controller 115 may receive instructions from host 105 and may communicate with memory array 120 in order to transfer (e.g., write or erase) data to or from one or more of the memory cells, planes, sub-blocks, or pages of memory array 120. Memory controller 115 may include, among other things, circuitry or firmware, including one or more components or integrated circuits. For example, the memory controller 115 may include one or more memory control units, circuits, or components configured to control access across the memory array 120 and provide a translation layer between the host 105 and the memory device 110. Although memory controller 115 is shown here as part of the memory device 110 package, other configurations may also be employed, such as memory controller 115 being a component of host 105 (e.g., as a discrete package on a system-on-chip of host 105 separate from memory device 110), or even implemented by a Central Processing Unit (CPU) of host 105.
Memory manager 125 may include, among other things, circuitry or firmware, such as components or integrated circuits associated with various memory management functions. For purposes of the present description, example memory operation and management functions will be described in the context of a NAND memory. Those skilled in the art will recognize that other forms of non-volatile memory may have similar memory operation or management functions. Such NAND management functions include wear leveling (e.g., garbage collection or reclamation), error detection or correction, block retirement, or one or more other memory management functions. The memory manager 125 may parse or format host commands (e.g., commands received from a host) into device commands (e.g., commands associated with operation of a memory array, etc.), or generate device commands for the array controller 135 or one or more other components of the memory device 110 (e.g., to achieve various memory management functions).
The memory manager 125 may include a set of management tables 130 configured to maintain various information associated with one or more components of the memory device 110 (e.g., various information associated with a memory array or one or more memory cells coupled to the memory controller 115). For example, the management table 130 may include information regarding block age, block erase counts, error history, or one or more error counts (e.g., write operation error counts, read bit error counts, read operation error counts, erase error counts, etc.) of one or more blocks of memory cells coupled to the memory controller 115. In some examples, a bit error may be referred to as an uncorrectable bit error if the number of errors detected for one or more of the error counts is above a threshold. The management table 130 may maintain, among other things, a count of bit errors that may or may not be correctable. In an example, the management table 130 may include a translation table or logical to physical (L2P) mapping.
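Purely as an illustration of the kind of per-block information such a management table 130 might hold, the following C structure sketches one possible entry. The field names and widths are assumptions and not the device's actual layout.

    /* Illustrative per-block entry for a management table such as table 130;
     * field names and widths are assumptions, not the actual layout. */
    #include <stdint.h>

    struct block_mgmt_entry {
        uint32_t erase_count;               /* block age / wear */
        uint32_t write_error_count;
        uint32_t read_bit_error_count;
        uint32_t erase_error_count;
        uint8_t  retired;                   /* nonzero if the block is retired */
    };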
The array controller 135 may include, among other things, circuitry or components configured to control memory operations associated with: writing data to, reading data from, or erasing one or more memory cells of memory device 110 coupled to memory controller 115. For example, the memory operation may be based on a host command received from the host 105 or generated internally by the memory manager 125 (e.g., in conjunction with wear leveling, error detection or correction, or the like).
The array controller 135 may include an Error Correction Code (ECC) component 140, which may include, among other things, an ECC engine or other circuitry configured to detect or correct errors associated with writing data to or reading data from one or more memory cells of memory device 110 coupled to memory controller 115. The memory controller 115 may be configured to actively detect and recover from error occurrences (e.g., bit errors, operational errors, etc.) associated with various operations or data storage based on ECC data maintained by the array controller 135. This enables memory controller 115 to maintain the integrity of data transferred between host 105 and memory device 110 or to maintain the integrity of stored data. Part of this integrity maintenance may include removing (e.g., retiring) failed memory resources (e.g., memory cells, memory arrays, pages, blocks, etc.) to prevent future errors. Redundant Array of Independent NAND (RAIN) is another technique that may be used by memory device 110 to maintain data integrity. The array controller 135 may be arranged to implement RAIN parity data generation and storage in the array 120. Memory controller 115 may be involved in using the parity data to reconstruct corrupted data.
Memory array 120 may include a number of memory cells arranged, for example, in a number of devices, planes, sub-blocks, or pages. As one example, a 48GB TLC NAND memory device may include 18,592 bytes (B) of data per page (16,384+2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device. As another example, a 32GB MLC memory device, storing two bits of data per cell (i.e., 4 programmable states), may include 18,592 bytes (B) of data per page (16,384+2208 bytes), 1024 pages per block, 548 blocks per plane, and 4 planes per device, but with half the required write time and twice the program/erase (P/E) cycles of a corresponding TLC memory device. Other examples may include other numbers or arrangements. In some examples, the memory device or portions thereof may be selectively operated in SLC mode or in a desired MLC mode (e.g., TLC, QLC, etc.).
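As a quick check on these example figures, the arithmetic below simply multiplies the quoted per-page, per-block, per-plane, and per-device numbers for the TLC example. It is a worked example only; the split of 18,592 bytes into 16,384 data bytes plus 2,208 further bytes per page is taken from the text above.

    /* Worked arithmetic for the example 48GB TLC geometry quoted above:
     * 18,592 B/page (16,384 B data + 2,208 B extra), 1536 pages/block,
     * 548 blocks/plane, 4 planes/device. Sketch only. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pages = 1536ull * 548 * 4;     /* pages per device */
        uint64_t raw   = pages * 18592ull;      /* all bytes, incl. 2,208 B/page */
        uint64_t data  = pages * 16384ull;      /* 16,384 data bytes per page */
        printf("pages per device: %llu\n", (unsigned long long)pages);
        printf("raw bytes:        %llu\n", (unsigned long long)raw);
        printf("data bytes:       %llu\n", (unsigned long long)data);
        return 0;
    }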
In operation, data is typically written to or read from the NAND memory device 110 in pages and erased in blocks. However, one or more memory operations (e.g., read, write, erase, etc.) may be performed on larger or smaller groups of memory cells as desired. The data transfer size of the NAND memory device 110 is commonly referred to as a page, while the data transfer size of the host is commonly referred to as a sector.
Although a page of data may include a number of bytes of user data (e.g., a data payload including a number of sectors of data) and their corresponding metadata, the size of a page often refers only to the number of bytes used to store user data. As an example, a page of data having a page size of 4KB may include 4KB of user data (e.g., 8 sectors assuming a sector size of 512B) and metadata corresponding to a number of bytes of user data (e.g., 32B, 54B, 224B, etc.), such as integrity data (e.g., error detection or correction code data), address data (e.g., logical address data, etc.), or other metadata associated with the user data.
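The accounting in this example can be sketched as a simple layout, assuming a 4 KB page built from 512 B sectors and one of the example metadata sizes mentioned above. The structure is illustrative and does not represent an actual on-die format.

    /* Rough page-accounting sketch: 8 sectors of 512 B user data per 4 KB page
     * plus a metadata region, using one of the example sizes above. */
    #include <stdint.h>

    #define SECTOR_SIZE      512u
    #define SECTORS_PER_PAGE 8u
    #define PAGE_USER_BYTES  (SECTOR_SIZE * SECTORS_PER_PAGE)   /* 4096 B */
    #define PAGE_META_BYTES  224u                               /* example */

    struct nand_page {
        uint8_t user[PAGE_USER_BYTES];      /* data payload (8 sectors) */
        uint8_t meta[PAGE_META_BYTES];      /* ECC, address data, other metadata */
    };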
Different types of memory cells or memory arrays 120 may provide different page sizes, or may require different amounts of metadata associated therewith. For example, different memory device types may have different bit error rates, which may result in a different amount of metadata being required to ensure the integrity of a data page (e.g., a memory device with a higher bit error rate may require more bytes of error correction code data than a memory device with a lower bit error rate). As an example, a multi-level cell (MLC) NAND flash memory device may have a higher bit error rate than a corresponding single-level cell (SLC) NAND flash memory device. Thus, MLC devices may require more bytes of metadata for error data than corresponding SLC devices.
Several of the foregoing components may be arranged to implement an efficient flush transfer, such as the memory controller 115 or the array controller 135. The following example uses the memory controller 115 as the implementing component, but any component of the host 105 or memory device that includes the following arrangement of components also implements the effective flush transfers described herein.
As noted above, the problem addressed by the described efficient flush transfer is the discrepancy between the amount of user data buffered in the memory controller 115 from the host and the write width of the array 120. As used herein, the write width is the size of data that can be written to the array 120. As noted above, in general, the smallest unit of data that can be written to a flash array is a page (e.g., between two kilobytes and sixteen kilobytes). However, some designs may allow for smaller write widths or larger write widths, such as blocks, super blocks, and the like.
A typical design for memory controller 115 includes a buffer at least equal to the minimum write width supported by array 120. Thus, if the buffer is full, a write-width increment can be written to empty it, enabling efficient use of the storage array 120. However, any other flush triggering event, such as expiration of a timer period (e.g., timeout, data aging, etc.), a power down interrupt of memory device 110, or an explicit flush request from host 105, may leave buffered user data that is typically significantly smaller than the write width, resulting in a size deviation that is traditionally addressed by adding padding data (e.g., irrelevant or dummy data) to the user data until it equals the write width. An efficient flush transfer replaces the padding data with useful data, such as data that would otherwise be written to the array 120, to reduce the inefficiency of writing padding data.
To this end, the memory controller 115 is arranged to identify a flush triggering event for a user data write that causes user data to be buffered in the memory controller 115. The identification may include, among other things, receiving a command or interrupt from host 105 to flush a write, receiving a maintenance operation to preserve data to the array 120 (such as a power down signal for memory device 110 or an extreme operating condition of the memory device), or expiration of a timer period (e.g., to prevent the risk of losing data by retaining it in the volatile memory of the buffer for too long). In any case, the flush triggering event includes a signal to write user data from the buffer to the array 120 when the user data is less than the write width. In an example, memory controller 115 identifies this condition by comparing the user data in the buffer to the write width. In an example, the memory controller 115 identifies this condition from the type of the flush triggering event. The difference between the user data size and the write width is the buffer available space.
The memory controller 115 is arranged to collate (e.g., collect, retrieve, query, or otherwise obtain) additional data in response to identifying the flush triggering event. The additional data is collated to fill the buffer available space. However, where the user data plus additional data units does not align exactly with the write width (e.g., it is slightly larger or slightly smaller than the write width), additional data units are used that fit into the available space. Here, an additional data unit refers to a discrete element that gives the additional data meaning. For example, if the additional data is a count of bad blocks, the additional data unit is the whole count plus the metadata that typically accompanies the bad block data structure. Anything smaller renders the additional data unintelligible, which is equivalent to padding data. Padding data is used to fill any buffer available space that remains after the additional data is accounted for.
In an example, the additional data is maintenance (e.g., management) data of the storage device. In an example, the maintenance data includes an L2P data map. The L2P map is typically segmented in this example to enable smaller volatile memory buffers in the memory controller 115. For example, when needed, an L2P map segment is loaded from the array 120 into the memory controller 115 and used to determine which physical address corresponds to a logical address provided by the host 105. These mappings are frequently used and updated by most write and maintenance operations, such as garbage collection. Thus, the L2P map segments may often be updated and rewritten to the array 120, making them generally available as additional data.
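A sketch of this segmented L2P lookup is shown below: the logical address selects a map segment, the segment is loaded from the array into controller memory if it is not already resident, and the entry within the segment yields the physical address. The segment size and helper names are assumptions.

    /* Sketch of segmented L2P lookup: the logical block address selects a map
     * segment; the segment is loaded from the array if not resident, then the
     * entry gives the physical address. Sizes and helpers are assumptions. */
    #include <stdint.h>

    #define ENTRIES_PER_SEGMENT 1024u

    struct l2p_segment {
        uint32_t base_lba;                      /* first LBA covered */
        uint32_t phys[ENTRIES_PER_SEGMENT];     /* physical address per entry */
    };

    /* Assumed helper: reads a segment from NAND and fills base_lba and phys. */
    extern void load_segment_from_array(uint32_t segment_index,
                                        struct l2p_segment *out);

    static uint32_t l2p_lookup(uint32_t lba, struct l2p_segment *cached)
    {
        uint32_t segment_index = lba / ENTRIES_PER_SEGMENT;
        if (cached->base_lba != segment_index * ENTRIES_PER_SEGMENT)
            load_segment_from_array(segment_index, cached);   /* cache miss */
        return cached->phys[lba % ENTRIES_PER_SEGMENT];
    }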
In one example, the maintenance data includes wear leveling data. In one example, the maintenance data includes crossover temperature data. In one example, the maintenance data includes power down data. These forms of maintenance data may be accumulated in a scratch pad buffer of the memory controller 115 and occasionally written for future use. They are typically smaller than the L2P map described above, and so enable smaller amounts of buffer available space to be used for additional data.
Once the additional data is collated and combined with the user data to approximate or match the write width, the memory controller 115 is arranged to write the user data and the additional data to the array 120. The result is that the user data and the additional data are co-located within the write width on the array 120. This arrangement differs from conventional systems, which generally separate memory device 110 data from user data by designating certain areas of the array 120 for memory device 110 data, or by making the write width (e.g., page or block) entirely user data or entirely memory device 110 data. To account for this change in data organization, the memory controller 115 is arranged to tag the additional data to enable its future retrieval. In an example, the additional data is blended with the user data based on the physical address. In an example, the additional data includes metadata indicating that it is not user data. Here, the user data and the additional data share the same physical address and are thus blended. However, the metadata written with the additional data indicates its nature. Thus, to retrieve the additional data, the memory controller 115 loads, for example, the page specified by the physical address and scans the page until the metadata tag is reached, treating the remaining data in the page as additional data. Compared to the techniques used in traditional maintenance data tracking, this technique does not rely on extra tracking structures, but does consume some additional bits to write the metadata and may involve extra processing for reads.
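The metadata-tag variant can be sketched as follows: the additional data is written after the user data in the same page, preceded by a small marker, and a read scans the page until the marker is found, treating the remainder as non-user data. The marker value and layout are assumptions; a real device would likely place the tag in a fixed metadata field rather than rely on an in-line byte pattern.

    /* Sketch of the metadata-tag approach: additional data follows the user
     * data in the same page, preceded by a marker; a read scans for the marker
     * and treats everything after it as non-user data. Marker and layout are
     * assumptions. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static const uint8_t ADD_DATA_TAG[4] = { 0xA5, 0xD0, 0x5A, 0x0D };

    /* Returns the offset of the additional data within the page, or page_len
     * if no tag is present (the page holds only user data and padding). */
    static size_t find_additional_data(const uint8_t *page, size_t page_len)
    {
        for (size_t off = 0; off + sizeof ADD_DATA_TAG <= page_len; off++) {
            if (memcmp(page + off, ADD_DATA_TAG, sizeof ADD_DATA_TAG) == 0)
                return off + sizeof ADD_DATA_TAG;
        }
        return page_len;
    }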
In an example, the additional data is separated from the user data based on the physical address. In an example, the additional data includes an address identifier upon which the separation is based. In an example, the address identifier is in the metadata of the additional data. In an example, the address identifier is one of an absolute address or an address relative to the address of the user data. In these examples, the user data and the additional data are addressed distinctly. For example, in the case of an address relative to the user data, the additional data is specified by the user data address and an offset. Thus, the memory controller 115 may simply read, for example, a page and skip the offset bits to reach the additional data. In the case of an absolute address, the memory controller 115 may retrieve the additional data directly without reference to the user data address. Such direct addressing may be supported by virtual addresses or the like that support write-width resolution for reads. In all of these cases, the metadata supporting the separately addressed additional data is typically stored elsewhere, e.g., in one of the tables 130 or otherwise managed by the memory manager 125. Such techniques may include the overhead of conventional techniques, but may result in more efficient use of the storage array 120 or improved read performance.
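The separate-addressing variant can be sketched with a small locator record, kept in metadata elsewhere (e.g., in one of the tables 130), that holds either an absolute physical address or an offset relative to the co-located user data. The structure and field names are illustrative assumptions.

    /* Sketch of the separate-addressing variant: a locator, kept in metadata
     * elsewhere, resolves the additional data either by an absolute physical
     * address or by an offset from the co-located user data. Illustrative. */
    #include <stdbool.h>
    #include <stdint.h>

    struct add_data_locator {
        bool     relative;          /* true: offset from the user data address */
        uint64_t user_phys_addr;    /* physical address of the user data */
        uint64_t addr_or_offset;    /* absolute address, or offset to skip */
    };

    static uint64_t resolve_additional_data(const struct add_data_locator *loc)
    {
        return loc->relative ? loc->user_phys_addr + loc->addr_or_offset
                             : loc->addr_or_offset;
    }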
Fig. 2 shows an example of data flow in a system implementing improved flush transfer efficiency. Here, the host 205 provides host data 230 to the memory device 210. The memory device 210 buffers the host data 230 in a local cache 220 (e.g., DRAM, SRAM, or storage class memory).
Upon a flush triggering event, a component 215 (e.g., memory controller, array controller, etc.) of the memory device 210 populates the local cache 220 with additional device data 235, such that the local cache 220 is full or as full as may be achieved given the device data size and the amount of available space in the local cache 220 after consideration of the host data 230.
Once the device data 235 is collated and written to the local cache 220, the local cache is flushed (e.g., written) to the non-volatile storage array 225. In an example, the device data 235 is not actually written to the local cache 220, but is instead appended to the host data 230 during the flush.
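A sketch of this flush path, under the same illustrative assumptions as above, appends device data into the space remaining after the host data and pads only what is left before writing the full write-width unit to the array. The write width value and helper names are assumptions standing in for controller internals.

    /* Sketch of the FIG. 2 flush path: host data already occupies the front of
     * the local cache, device data is appended into the remaining space, the
     * residue is padded, and the full unit is written to the array. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define WRITE_WIDTH 16384u                  /* assumed write width in bytes */

    extern size_t gather_device_data(uint8_t *dst, size_t max_len);  /* assumed */
    extern void   array_write(const uint8_t *unit, size_t len);      /* assumed */

    static void flush_local_cache(uint8_t cache[WRITE_WIDTH], size_t host_bytes)
    {
        size_t space = WRITE_WIDTH - host_bytes;
        size_t added = gather_device_data(cache + host_bytes, space);
        memset(cache + host_bytes + added, 0, space - added);  /* pad residue */
        array_write(cache, WRITE_WIDTH);
    }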
Fig. 3 shows an example of a sequence of messages between components when performing an enhanced flush transfer. When several small segments of data are written by the host, typically, the controller appends each write to the buffer. Here, the host also transmits a flush command (indicated by the dashed arrow) to the controller. The controller identifies the flush trigger event and then collects (e.g., marshals) additional data to fill the available space remaining in the buffer. The additional data is then stored to a buffer. The controller then clears the buffer to store user data from the host and additional device data from the memory device to the base storage.
Fig. 4 illustrates a flow chart of a method 400 for improving the efficiency of flush transfers. The operations of method 400 are performed by computer hardware, such as that described above (e.g., memory controller, array controller, etc.) or that described below (e.g., processing circuitry).
At operation 405, a flush trigger event for a user data write is identified. Here, user data corresponding to the user data write is stored in a buffer, and the size of the user data stored in the buffer is smaller than the write width of the storage device undergoing the write. This creates the buffer available space. In one example, the storage device is a NAND flash memory device. In one example, the write width is one page. In one example, the write width is one block. In one example, the write width is a super block.
At operation 410, additional data is collated in response to the identification of the flush trigger event. Here, the size of the additional data is less than or equal to the buffer available space. In an example, the flush triggering event is at least one of receipt of a flush command or expiration of a time period.
In one example, the additional data is maintenance data for the storage device. In one example, maintenance data is gathered by a controller of the storage device. In one example, the maintenance data includes an L2P data map. In one example, the maintenance data includes wear leveling data. In one example, the maintenance data includes crossover temperature data. In one example, the maintenance data includes power down data.
In an example, the additional data is blended with the user data based on the physical address. In an example, the additional data includes metadata indicating that it is not user data.
In an example, the additional data is separated from the user data based on the physical address. In one example, the additional data includes an address identifier upon which the separation is performed. In an example, the address identifier is in metadata of the additional data. In an example, the address identifier is one of an absolute address or an address relative to an address of the user data.
At operation 415, the user data and additional data are written to the storage device.
Fig. 5 illustrates a block diagram of an example machine 500 on which any one or more of the techniques (e.g., methods) discussed herein may be executed. In alternative embodiments, the machine 500 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 500 may operate in the capacity of a server machine, a client machine, or both, in server-client network environments. In an example, the machine 500 may operate as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 500 may be a Personal Computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, an IoT device, an automotive system, or any machine capable of executing instructions that specify actions to be taken by that machine (whether sequentially or otherwise). Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein (e.g., cloud computing, software as a service (SaaS), other computer cluster configurations).
As described herein, an example may include, or may operate with, logic, components, devices, packages, or mechanisms. Circuitry is an aggregate (e.g., set) of circuits implemented in a tangible entity that includes hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may be flexible over time and with underlying hardware variability. Circuitries include members that may, alone or in combination, perform specific tasks when operating. In an example, the hardware of the circuitry may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer-readable medium physically modified (e.g., magnetically, electrically, by movable placement of invariant massed particles, etc.) to encode instructions of a specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable participating hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific tasks when in operation. Accordingly, the computer-readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuitry. For example, under operation, execution units may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry, at a different time.
A machine (e.g., computer system) 500 (e.g., host device 105, memory device 110, etc.) may include a hardware processor 502 (e.g., a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a hardware processor core, or any combination thereof, such as memory controller 115, etc.), a main memory 504, and a static memory 506, some or all of which may communicate with each other through an interconnect (e.g., bus) 508. The machine 500 may further include a display unit 510, an alphanumeric input device 512 (e.g., a keyboard), and a User Interface (UI) navigation device 514 (e.g., a mouse). In an example, the display unit 510, the input device 512, and the UI navigation device 514 may be a touchscreen display. The machine 500 may additionally include a storage device (e.g., drive unit) 521, a signal generation device 518 (e.g., a speaker), a network interface device 520, and one or more sensors 516, such as a Global Positioning System (GPS) sensor, compass, accelerometer, or other sensor. The machine 500 may include an output controller 528, such as a serial (e.g., Universal Serial Bus (USB)), parallel, or other wired or wireless (e.g., Infrared (IR), Near Field Communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The storage device 521 may include a machine-readable medium 522 on which is stored one or more sets of data structures or instructions 524 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504, within static memory 506, or within the hardware processor 502 during execution thereof by the machine 500. In an example, one or any combination of the hardware processor 502, the main memory 504, the static memory 506, or the storage device 521 may constitute the machine-readable medium 522.
While the machine-readable medium 522 is shown to be a single medium, the term "machine-readable medium" can include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 524.
The term "machine-readable medium" may include any medium that is capable of storing, encoding or carrying instructions for execution by the machine 500 and that cause the machine 500 to perform any one or more of the techniques of this disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting examples of machine-readable media can include solid-state memory and optical and magnetic media. In an example, a centralized machine-readable medium includes a machine-readable medium having a plurality of particles with an invariant (e.g., static) mass. Thus, the centralized machine-readable medium is a non-transitory propagating signal. Particular examples of a centralized machine-readable medium may include: non-volatile memories such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM) or electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
Instructions 524 (e.g., software, programs, an Operating System (OS), etc.) or other data stored on the storage device 521 may be accessed by the memory 504 for use by the processor 502. The memory 504 (e.g., DRAM) is typically fast but volatile, and is thus a different type of storage than the storage device 521 (e.g., an SSD), which is suitable for long-term storage, including while in an "off" condition. Instructions 524 or data in use by a user or the machine 500 are typically loaded into the memory 504 for use by the processor 502. When the memory 504 is full, virtual space from the storage device 521 may be allocated to supplement the memory 504; however, because the storage device 521 is typically slower than the memory 504, and write speeds are typically at least twice as slow as read speeds, the use of virtual memory may greatly reduce the user experience due to storage device latency (in contrast to the memory 504, e.g., DRAM). Furthermore, the use of the storage device 521 for virtual memory may greatly reduce the usable lifetime of the storage device 521.
In contrast to virtual memory, virtual memory compression (e.g., the Linux® kernel feature "ZRAM") uses part of the memory as compressed block storage to avoid paging to the storage device 521. Paging takes place in the compressed block until it is necessary to write such data to the storage device 521. Virtual memory compression increases the usable size of the memory 504, while reducing wear on the storage device 521.
Storage devices optimized for mobile electronic devices, or mobile storage, traditionally include MMC solid state storage devices (e.g., micro Secure Digital (microSD™) cards, etc.). MMC devices include a number of parallel interfaces (e.g., an 8-bit parallel interface) with a host device, and are often removable and separate components from the host device. In contrast, eMMC™ devices are attached to a circuit board and considered a component of the host device, with read speeds that rival serial ATA™ based SSD devices (Serial Advanced Technology (AT) Attachment, or SATA). However, demand for mobile device performance continues to increase, such as to fully enable virtual or augmented reality devices, utilize increasing network speeds, etc. In response to this demand, storage devices have shifted from parallel to serial communication interfaces. Universal Flash Storage (UFS) devices, including controllers and firmware, communicate with a host device using a Low Voltage Differential Signaling (LVDS) serial interface with dedicated read/write paths, further advancing greater read/write speeds.
The instructions 524 may further be transmitted or received over a communication network 526 using a transmission medium by the network interface device 520 utilizing any one of a number of transfer protocols, such as frame relay, Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), hypertext transfer protocol (HTTP), etc. Example communication networks can include a Local Area Network (LAN), a Wide Area Network (WAN), a packet data network (e.g., the internet), mobile telephone networks (e.g., cellular networks, such as those specified by the third generation partnership project (3GPP) family of standards, e.g., 3G, 4G, 5G, Long Term Evolution (LTE), etc.), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, and so forth). In an example, the network interface device 520 may include one or more physical jacks (e.g., ethernet, coaxial, or telephone jacks) or one or more antennas to connect to the communication network 526. In an example, network interface device 520 may include multiple antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 500, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Additional examples:
example 1 is a memory controller for improving efficiency of a flush transfer, the memory controller comprising: an interface to a storage device; a buffer; and processing circuitry for: identifying a flush triggering event for a user data write, user data corresponding to the user data write having been stored in the buffer, the size of the user data stored in the buffer being less than a write width of the storage device undergoing the write thereby generating buffer usable space; sorting additional data in response to identifying the flush trigger event, the additional data having a size less than or equal to the buffer available space; and writing the user data and the additional data to the storage device through the interface.
In example 2, the subject matter of example 1, wherein the storage device is a NAND flash memory device.
In example 3, the subject matter of example 2, wherein the write width is a block.
In example 4, the subject matter of any of examples 2-3, wherein the write width is a super block.
In example 5, the subject matter of any of examples 1-4, wherein the additional data is maintenance data for the storage device.
In example 6, the subject matter of example 5, wherein the processing circuitry is arranged to gather the maintenance data.
In example 7, the subject matter of any of examples 5-6, wherein the maintenance data includes a logical to physical data mapping.
In example 8, the subject matter of any of examples 5-7, wherein the maintenance data comprises wear leveling data.
In example 9, the subject matter of any of examples 5-8, wherein the maintenance data includes crossover temperature data.
In example 10, the subject matter of any of examples 5-9, wherein the maintenance data includes power down data.
In example 11, the subject matter of any of examples 1-10, wherein the additional data is blended with user data based on a physical address.
In example 12, the subject matter of example 11, wherein the additional data includes metadata indicating that it is not user data.
In example 13, the subject matter of any of examples 1-12, wherein the additional data is separated from user data based on a physical address.
In example 14, the subject matter of example 13, wherein the additional data includes an address identifier based on which the separation was performed.
In example 15, the subject matter of example 14, wherein the address identifier is in metadata of the additional data.
In example 16, the subject matter of any of examples 14-15, wherein the address identifier is one of an absolute address or an address relative to an address of the user data.
In example 17, the subject matter of any of examples 1-16, wherein the flush triggering event is at least one of receipt of a flush command or expiration of a time period.
Example 18 is a method of improving efficiency of flush transfers, the method comprising: identifying a flush triggering event for a user data write of user data stored in a buffer, a size of the user data stored in the buffer being smaller than a write width of a storage device undergoing the write, thereby producing buffer available space; collating additional data in response to identifying the flush trigger event, the additional data having a size less than or equal to the buffer available space; and writing the user data and the additional data to the storage device.
In example 19, the subject matter of example 18, wherein the storage device is a NAND flash memory device.
In example 20, the subject matter of example 19, wherein the write width is a block.
In example 21, the subject matter of any of examples 19-20, wherein the write width is a super block.
In example 22, the subject matter of any of examples 18-21, wherein the additional data is maintenance data for the storage device.
In example 23, the subject matter of example 22, wherein the maintenance data is gathered by a controller of the storage device.
In example 24, the subject matter of any of examples 22-23, wherein the maintenance data comprises a logical to physical data mapping.
In example 25, the subject matter of any of examples 22-24, wherein the maintenance data comprises wear leveling data.
In example 26, the subject matter of any of examples 22-25, wherein the maintenance data includes crossover temperature data.
In example 27, the subject matter of any of examples 22-26, wherein the maintenance data includes power down data.
In example 28, the subject matter of any of examples 18-27, wherein the additional data is blended with user data based on a physical address.
In example 29, the subject matter of example 28, wherein the additional data includes metadata indicating that it is not user data.
In example 30, the subject matter of any of examples 18-29, wherein the additional data is separated from user data based on a physical address.
In example 31, the subject matter of example 30, wherein the additional data includes an address identifier based on which the separation was performed.
In example 32, the subject matter of example 31, wherein the address identifier is in metadata of the additional data.
In example 33, the subject matter of any of examples 31-32, wherein the address identifier is one of an absolute address or an address relative to an address of the user data.
In example 34, the subject matter of any of examples 18-33, wherein the flush triggering event is at least one of receipt of a flush command or expiration of a time period.
Example 35 is a machine-readable medium comprising instructions for improving flush transfer efficiency, the instructions, when executed by processing circuitry, causing the processing circuitry to perform operations comprising: identifying a flush triggering event for a user data write, user data corresponding to the user data write having been stored in a buffer, a size of the user data stored in the buffer being less than a write width of a storage device undergoing the write, thereby producing buffer available space; collating additional data in response to identifying the flush triggering event, the additional data having a size less than or equal to the buffer available space; and writing the user data and the additional data to the storage device.
In example 36, the subject matter of example 35, wherein the storage device is a NAND flash memory device.
In example 37, the subject matter of example 36, wherein the write width is a block.
In example 38, the subject matter of any of examples 36-37, wherein the write width is a super block.
In example 39, the subject matter of any of examples 35-38, wherein the additional data is maintenance data for the storage device.
In example 40, the subject matter of example 39, wherein the maintenance data is gathered by a controller of the storage device.
In example 41, the subject matter of any of examples 39-40, wherein the maintenance data comprises a logical to physical data mapping.
In example 42, the subject matter of any of examples 39-41, wherein the maintenance data comprises wear leveling data.
In example 43, the subject matter of any of examples 39-42, wherein the maintenance data includes cross-temperature data.
In example 44, the subject matter of any of examples 39-43, wherein the maintenance data includes power down data.
In example 45, the subject matter of any of examples 35-44, wherein the additional data is blended with the user data based on a physical address.
In example 46, the subject matter of example 45, wherein the additional data includes metadata indicating that it is not user data.
In example 47, the subject matter of any of examples 35-46, wherein the additional data is separated from user data based on a physical address.
In example 48, the subject matter of example 47, wherein the additional data includes an address identifier based on which the separation was performed.
In example 49, the subject matter of example 48, wherein the address identifier is in metadata of the additional data.
In example 50, the subject matter of any of examples 48-49, wherein the address identifier is one of an absolute address or an address relative to an address of the user data.
In example 51, the subject matter of any of examples 35-50, wherein the flush triggering event is at least one of receipt of a flush command or an expiration of a time period.
Example 52 is a system for improving flush transfer efficiency, the system comprising: means for identifying a flush triggering event for a user data write, user data corresponding to the user data write having been stored in a buffer, a size of the user data stored in the buffer being less than a write width of a storage device undergoing the write, thereby producing buffer available space; means for collating additional data in response to identifying the flush triggering event, the additional data having a size less than or equal to the buffer available space; and means for writing the user data and the additional data to the storage device.
In example 53, the subject matter of example 52, wherein the storage device is a NAND flash memory device.
In example 54, the subject matter of example 53, wherein the write width is a block.
In example 55, the subject matter of any of examples 53-54, wherein the write width is a super block.
In example 56, the subject matter of any of examples 52-55, wherein the additional data is maintenance data for the storage device.
In example 57, the subject matter of example 56, wherein the maintenance data is gathered by a controller of the storage device.
In example 58, the subject matter of any of examples 56-57, wherein the maintenance data comprises a logical to physical data mapping.
In example 59, the subject matter of any of examples 56-58, wherein the maintenance data comprises wear leveling data.
In example 60, the subject matter of any of examples 56-59, wherein the maintenance data includes cross-temperature data.
In example 61, the subject matter of any of examples 56-60, wherein the maintenance data includes power down data.
In example 62, the subject matter of any of examples 52-61, wherein the additional data is blended with user data based on a physical address.
In example 63, the subject matter of example 62, wherein the additional data includes metadata indicating that it is not user data.
In example 64, the subject matter of any of examples 52-63, wherein the additional data is separated from user data based on a physical address.
In example 65, the subject matter of example 64, wherein the additional data includes an address identifier based on which the separation was performed.
In example 66, the subject matter of example 65, wherein the address identifier is in metadata of the additional data.
In example 67, the subject matter of any of examples 65-66, wherein the address identifier is one of an absolute address or an address relative to an address of the user data.
In example 68, the subject matter of any of examples 52-67, wherein the flush triggering event is at least one of receipt of a flush command or an expiration of a time period.
Example 69 is at least one machine readable medium comprising instructions that when executed by processing circuitry cause the processing circuitry to perform operations to implement any of examples 1-68.
Example 70 is an apparatus comprising means to implement any one of examples 1-68.
Example 71 is a system to implement any one of examples 1-68.
Example 72 is a method to implement any one of examples 1-68.
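The padding behavior recited in examples 18, 35, and 52 can be pictured with a short sketch. This is a minimal illustration only, assuming a hypothetical fixed write width, a queue of maintenance records, and the function names shown; none of these appear in the examples themselves.

```python
# Minimal sketch; WRITE_WIDTH, flush, and the maintenance queue are assumed names,
# not elements of the claimed apparatus.
WRITE_WIDTH = 16 * 4096  # assumed device write width, in bytes


def flush(user_data: bytearray, maintenance_queue: list[bytes], write) -> None:
    """On a flush triggering event, collate queued maintenance data into the
    buffer available space so one write fills the device write width."""
    available = WRITE_WIDTH - len(user_data)   # buffer available space
    padding = bytearray()
    while maintenance_queue and len(padding) + len(maintenance_queue[0]) <= available:
        # each record would carry metadata marking it as non-user data (example 12)
        padding += maintenance_queue.pop(0)
    write(bytes(user_data) + bytes(padding))   # single write-width aligned transfer
```

In this reading, the flush triggering event of example 17 (a flush command or an expired time period) simply invokes flush, and any maintenance record too large for the remaining space stays queued for a later write.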
The foregoing detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as "examples". Such examples may include elements in addition to those shown or described. However, the inventors also contemplate examples in which only the elements shown or described are provided. Moreover, the inventors also contemplate examples (or one or more aspects thereof) using any combination or permutation of those elements shown or described, either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein.
In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, regardless of any other instances or usages of "at least one" or "one or more". In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" may include "A but not B", "B but not A", and "A and B", unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein". Furthermore, in the following claims, the terms "comprising" and "including" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still considered to be within the scope of that claim. Furthermore, in the appended claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
In various examples, the components, controllers, processors, units, engines, or tables described herein may include, among other things, physical circuitry or firmware stored on a physical device. As used herein, "processor" means any type of computing circuit, such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a Digital Signal Processor (DSP), or any other type of processor or processing circuit, including a group of processors or multi-core devices.
The terms "wafer" and "substrate" are used herein to generally refer to any structure on which integrated circuits are formed, and also to such structures during various stages of integrated circuit fabrication. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
Various embodiments in accordance with the present invention and described herein include memories that utilize a vertical structure of memory cells (e.g., NAND strings of memory cells). As used herein, directional adjectives will be employed with respect to the substrate surface on which the memory cells are formed (i.e., the vertical structures will be considered to extend away from the substrate surface, the bottom ends of the vertical structures will be considered to be the ends closest to the substrate surface, and the top ends of the vertical structures will be considered to be the ends furthest from the substrate surface).
As used herein, operating a memory cell includes reading from, writing to, or erasing a memory cell. The operation of placing a memory cell in a desired state is referred to herein as "programming," and may include writing to or erasing from the memory cell (e.g., the memory cell may be programmed to an erased state).
In accordance with one or more embodiments of the present disclosure, a memory controller (e.g., processor, controller, firmware, etc.) located internal or external to a memory device is capable of determining (e.g., selecting, setting, adjusting, calculating, changing, clearing, communicating, adapting, deriving, defining, utilizing, modifying, applying, etc.) a number of wear cycles or a wear state (e.g., recording wear cycles, counting operations of the memory device as they occur, tracking the operations of the memory device it initiates, evaluating memory device characteristics corresponding to a wear state, etc.).
In accordance with one or more embodiments of the present disclosure, a memory access device may be configured to provide wear cycle information to a memory device with respect to each memory operation. Memory device control circuitry (e.g., control logic) can be programmed to compensate for memory device performance changes corresponding to wear cycle information. The memory device may receive wear cycle information and determine one or more operating parameters (e.g., values, characteristics) in response to the wear cycle information.
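As a rough illustration of the wear-cycle tracking and compensation just described, the following sketch assumes a hypothetical per-block counter and a read-offset parameter derived from it; the names and the 1000-cycle step are illustrative assumptions, not part of this disclosure.

```python
# Minimal sketch, assuming a controller-side table of per-block wear counts.
wear_cycles: dict[int, int] = {}  # block address -> operations counted as they occur


def record_operation(block: int) -> None:
    """Track an operation of the memory device as it occurs."""
    wear_cycles[block] = wear_cycles.get(block, 0) + 1


def read_offset_steps(block: int) -> int:
    """Determine an operating parameter from the tracked wear state
    (one offset step per 1000 cycles is an assumed compensation rule)."""
    return wear_cycles.get(block, 0) // 1000
```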
The method examples described herein may be implemented at least in part by a machine or computer. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. Implementations of such methods may include code, such as microcode, assembly language code, high-level language code, and the like. Such code may contain computer-readable instructions for performing various methods. The code may form part of a computer program product. Further, the code may be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, e.g., during execution or at other times. Examples of such tangible computer-readable media may include (but are not limited to): hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, Random Access Memories (RAMs), Read Only Memories (ROMs), Solid State Drives (SSDs), Universal Flash Storage (UFS) devices, embedded MMC (eMMC) devices, and so forth.
The above description is intended to be illustrative and not restrictive. For example, the examples described above (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above detailed description, various features may be grouped together to simplify the present disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (24)

1. An array controller for NAND device hybrid parity management, the array controller comprising:
a volatile memory; and
processing circuitry to:
receive a first data portion corresponding to a first data segment defined with respect to a structure of the NAND device;
receive a second data portion corresponding to a second data segment defined with respect to the structure of the NAND device, the second data segment being different from the first data segment;
calculate a parity value using the first data portion and the second data portion; and
store the parity value in the volatile memory.
2. The array controller of claim 1, wherein the volatile memory is random access memory.
3. The array controller of claim 2, wherein the parity value replaces a previous parity value in the random access memory for the first data segment.
4. The array controller of claim 2, wherein the processing circuitry is arranged to flush parity data stored in the random access memory to a NAND block.
5. The array controller of claim 4, wherein, to flush the parity data, the processing circuitry decouples parity data of the first data portion and the second data portion from parity data derived from the parity value prior to writing the parity data to a switch block.
6. The array controller of claim 1, wherein the processing circuitry is arranged to decouple parity data of the first data portion and the second data portion from parity data derived from the parity value in response to a trigger event.
7. The array controller of claim 6, wherein the triggering event is a failed write of the first data portion or the second data portion to a block of the NAND device.
8. The array controller of claim 6, wherein the triggering event is a validation error of a closed block of the NAND device to which the first data portion or the second data portion is written.
9. The array controller of claim 6, wherein the triggering event is writing the parity data to a block of the NAND device to which the first data portion is written.
10. A method for NAND device hybrid parity management, the method comprising:
receiving a first data portion corresponding to a first data segment defined with respect to a structure of the NAND device;
receiving a second data portion corresponding to a second data segment defined with respect to the structure of the NAND device, the second data segment being different from the first data segment;
calculating a parity value using the first data portion and the second data portion; and
storing the parity value.
11. The method of claim 10, wherein calculating the parity value comprises applying an exclusive OR (XOR) operation between bits of the first data portion and the second data portion.
12. The method of claim 10, wherein calculating the parity value comprises maintaining a data structure to store a mapping between the first data portion, the second data portion, and the parity value.
13. The method of claim 10, wherein the parity value is stored in random access memory.
14. The method of claim 13, wherein the parity value replaces a previous parity value in the random access memory for the first data segment.
15. The method of claim 13, comprising flushing parity data stored in the random access memory to a NAND block.
16. The method of claim 15, wherein flushing the parity data comprises decoupling parity data of the first data portion and the second data portion from parity data derived from the parity value prior to writing the parity data to a switch block.
17. The method of claim 10, comprising decoupling parity data of the first data portion and the second data portion from parity data derived from the parity value in response to a triggering event.
18. The method of claim 17, wherein the triggering event is a failed write of the first data portion or the second data portion to a block of the NAND device.
19. The method of claim 17, wherein the triggering event is a validation error of a closed block of the NAND device to which the first data portion or the second data portion is written.
20. The method of claim 17, wherein the triggering event is writing the parity data to a block of the NAND device to which the first data portion is written.
21. The method of claim 10, wherein the structure of the NAND device defining the first data segment and the second data segment is a block.
22. The method of claim 21, wherein the block is a logical block, and wherein the first data portion and a third data portion in the first data segment are different pages allocated to different physical blocks of the NAND device.
23. A system comprising means to perform any of the methods of claims 10-22.
24. A machine-readable medium comprising instructions which, when executed by processing circuitry, cause the processing circuitry to perform any of the methods of claims 10-22.
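The parity calculation of claims 10-12 and the decoupling of claims 16-20 can be illustrated with a short sketch. It is a minimal illustration only, assuming pure XOR parity over equal-length portions and a simple in-memory mapping; the class, its methods, and the table layout are assumptions rather than the claimed controller.

```python
# Minimal sketch of mixing and later decoupling parity for two data segments.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


class MixedParity:
    def __init__(self) -> None:
        # volatile-memory mapping: (first segment, second segment) -> parity value
        self.table: dict[tuple[int, int], bytes] = {}

    def update(self, seg_a: int, portion_a: bytes, seg_b: int, portion_b: bytes) -> None:
        """Calculate one parity value covering portions from two different segments,
        replacing any previous value held for that segment pair."""
        self.table[(seg_a, seg_b)] = xor_bytes(portion_a, portion_b)

    def decouple(self, seg_a: int, seg_b: int, portion_b: bytes) -> bytes:
        """On a triggering event (e.g., a failed write to the first segment's block),
        XOR out the second portion to recover parity data for the first segment."""
        return xor_bytes(self.table[(seg_a, seg_b)], portion_b)
```

Because XOR is its own inverse, XORing the mixed value with one portion recovers the contribution of the other, which is the sense in which per-segment parity data can be separated from the shared parity value before it is written out.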

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/201,537 US10997071B2 (en) 2018-11-27 2018-11-27 Write width aligned storage device buffer flush
US16/201,537 2018-11-27

Publications (1)

Publication Number Publication Date
CN111221676A 2020-06-02

Family

ID=70771703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911185085.4A Withdrawn CN111221676A (en) 2018-11-27 2019-11-27 Apparatus and method for NAND device hybrid parity management

Country Status (2)

Country Link
US (2) US10997071B2 (en)
CN (1) CN111221676A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023039212A (en) 2021-09-08 2023-03-20 キオクシア株式会社 Memory system and control method
US11922020B2 (en) * 2022-01-20 2024-03-05 Dell Products L.P. Read-disturb-based read temperature information persistence system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102346694A (en) * 2007-03-29 2012-02-08 提琴存储器公司 Method of calculating parity in memory system
CN106445724A (en) * 2015-08-11 2017-02-22 Hgst荷兰公司 Storing parity data separate from protected data
CN111274062A (en) * 2018-12-05 2020-06-12 美光科技公司 NAND device hybrid parity management

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4914577A (en) * 1987-07-16 1990-04-03 Icon International, Inc. Dynamic memory management system and method
US8200904B2 (en) 2007-12-12 2012-06-12 Sandisk Il Ltd. System and method for clearing data from a cache
US9213633B2 (en) * 2013-04-30 2015-12-15 Seagate Technology Llc Flash translation layer with lower write amplification
US9244858B1 (en) * 2014-08-25 2016-01-26 Sandisk Technologies Inc. System and method of separating read intensive addresses from non-read intensive addresses
US10346362B2 (en) * 2014-09-26 2019-07-09 Oracle International Corporation Sparse file access
US9658966B2 (en) * 2014-11-24 2017-05-23 Sandisk Technologies Llc Systems and methods of write cache flushing
KR20180005858A (en) * 2016-07-07 2018-01-17 에스케이하이닉스 주식회사 Memory system and operating method of memory system
US10254966B2 (en) * 2016-12-28 2019-04-09 Western Digital Technologies, Inc. Data management based on I/O traffic profiling
US10430285B2 (en) * 2017-02-17 2019-10-01 International Business Machines Corporation Backing up metadata
US10468077B2 (en) * 2018-02-07 2019-11-05 Intel Corporation Adaptive object buffering and meta-data indexing using persistent memory to improve flash memory durability in tiered storage
US10573391B1 (en) * 2018-12-03 2020-02-25 Micron Technology, Inc. Enhanced flush transfer efficiency via flush prediction

Also Published As

Publication number Publication date
US10997071B2 (en) 2021-05-04
US20210326257A1 (en) 2021-10-21
US20200167279A1 (en) 2020-05-28

Similar Documents

Publication Publication Date Title
CN111538618B (en) Apparatus and techniques for one-time parity check
US11720489B2 (en) Scheme to improve efficiency of device garbage collection in memory devices
CN111383689B (en) Tunable NAND write performance
US20210096984A1 (en) L2p translation techniques in limited ram systems
US11397640B2 (en) Extended error correction in storage device
US11609819B2 (en) NAND device mixed parity management
US11693732B2 (en) Cryptographic data integrity protection
US11210093B2 (en) Large data read techniques
US11663120B2 (en) Controlling NAND operation latency
US20210390014A1 (en) Parity protection
US10930354B2 (en) Enhanced flush transfer efficiency via flush prediction
CN112055843A (en) Synchronizing NAND logical to physical table tracking
US20210326257A1 (en) Write width aligned storage device buffer flush
US11868245B2 (en) Pre-load techniques for improved sequential memory access in a memory device
US20240054070A1 (en) A dynamic read disturb management algorithm for flash-based memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200602)