US20200026600A1 - Die-level error recovery scheme - Google Patents
- Publication number
- US20200026600A1 (application US16/041,204)
- Authority
- US
- United States
- Prior art keywords
- memory
- data
- die
- memory die
- xor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0727—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a storage system, e.g. in a DASD or network based storage system
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/005—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor comprising combined but independently operative RAM-ROM, RAM-PROM, RAM-EPROM cells
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0793—Remedial or corrective actions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1012—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices using codes or arrangements adapted for a specific type of error
- G06F11/1032—Simple parity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1068—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices in sector programmable memories, e.g. flash disk
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/108—Parity data distribution in semiconductor storages, e.g. in SSD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0658—Controller construction arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/004—Reading or sensing circuits or methods
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/0069—Writing or programming circuits or methods
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/04—Detection or location of defective memory elements, e.g. cell construction details, timing of test signals
- G11C29/08—Functional testing, e.g. testing during refresh, power-on self testing [POST] or distributed testing
- G11C29/12—Built-in arrangements for testing, e.g. built-in self testing [BIST] or interconnection details
- G11C29/38—Response verification devices
- G11C29/42—Response verification devices using error correcting codes [ECC] or parity check
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/52—Protection of memory contents; Detection of errors in memory contents
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/70—Masking faults in memories by using spares or by reconfiguring
- G11C29/74—Masking faults in memories by using spares or by reconfiguring using duplex memories, i.e. using dual copies
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/70—Masking faults in memories by using spares or by reconfiguring
- G11C29/78—Masking faults in memories by using spares or by reconfiguring using programmable devices
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/22—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using ferroelectric elements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0004—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements comprising amorphous/crystalline phase transition cells
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/04—Erasable programmable read-only memories electrically programmable using variable threshold transistors, e.g. FAMOS
- G11C16/0483—Erasable programmable read-only memories electrically programmable using variable threshold transistors, e.g. FAMOS comprising cells having several storage transistors connected in series
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/04—Detection or location of defective memory elements, e.g. cell construction details, timing of test signals
- G11C2029/0409—Online test
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/04—Detection or location of defective memory elements, e.g. cell construction details, timing of test signals
- G11C2029/0411—Online error correction
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2213/00—Indexing scheme relating to G11C13/00 for features not covered by this group
- G11C2213/70—Resistive array aspects
- G11C2213/71—Three dimensional array
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K1/00—Printed circuits
- H05K1/18—Printed circuits structurally associated with non-printed electric components
- H05K1/181—Printed circuits structurally associated with non-printed electric components associated with surface mounted components
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K2201/00—Indexing scheme relating to printed circuits covered by H05K1/00
- H05K2201/10—Details of components or other objects attached to or integrated in a printed circuit board
- H05K2201/10007—Types of components
- H05K2201/10159—Memory
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K2201/00—Indexing scheme relating to printed circuits covered by H05K1/00
- H05K2201/10—Details of components or other objects attached to or integrated in a printed circuit board
- H05K2201/10431—Details of mounted components
- H05K2201/10507—Involving several components
- H05K2201/10522—Adjacent components
Definitions
- a computing system includes processing circuitry, such as one or more processors or other suitable components, and memory devices, such as chips or integrated circuits.
- One or more memory devices may be implemented on a memory module, such as a dual in-line memory module (DIMM), to store data accessible to the processing circuitry.
- the processing circuitry may request that a memory module retrieve data corresponding to the user input from its memory devices.
- the retrieved data may include instructions executable by the processing circuitry to perform an operation and/or may include data to be used as an input for the operation.
- data output from the operation may be stored in memory, for example, to enable subsequent retrieval.
- the data stored in the memory devices may include particular data that is desired to be preserved, retained, or recreated in the case of data loss or memory device malfunction. Resources dedicated to storing such data may be unavailable for other uses and may thus constrain device operability.
- FIG. 1 is a block diagram of a computing system that includes client devices and one or more remote computing devices, in accordance with an embodiment
- FIG. 2 is a block diagram of a memory module that may be implemented in a remote computing device of FIG. 1 , in accordance with an embodiment
- FIG. 3 is a block diagram of the memory module of FIG. 2 arranged in a first non-volatile memory arrangement, in accordance with an embodiment
- FIG. 4 is a block diagram of the memory module of FIG. 2 arranged in a second non-volatile memory arrangement, in accordance with an embodiment
- FIG. 5 is a block diagram of the memory module of FIG. 2 arranged in a third non-volatile memory arrangement, in accordance with an embodiment
- FIG. 6 is a flow diagram of a process for operating the memory module of FIGS. 4-5 to perform die-level redundancy operations, in accordance with an embodiment.
- a memory device may be designated to store parity data.
- the parity data may be stored or backed-up in non-volatile memory, or volatile memory powered by an additional power supply, for example, to protect against data loss from power loss or component defect.
- the memory device may store parity data used to recover data for additional memory devices as a way to back-up the data of the additional memory devices.
- backing-up a whole memory device may lead to excessive overprovisioning of memory and wasting of resources. So as described herein, a die-level redundancy scheme may be employed in which parity data associated with particular die (rather than a whole memory device) may be stored.
- hardware of a computing system includes processing circuitry and memory, for example, implemented using one or more processors and/or one or more memory devices (e.g., chips or integrated circuits).
- the processing circuitry may perform various operations (e.g., tasks) by executing corresponding instructions, for example, based on a user input to determine output data by performing operations on input data.
- data accessible to the processing circuitry may be stored in a memory device, such that the memory device stores the input data, the output data, data indicating the executable instructions, or any combination thereof.
- multiple memory devices may be implemented on a memory module, thereby enabling the memory devices to be communicatively coupled to the processing circuitry as a unit.
- a dual in-line memory module may include a printed circuit board (PCB) and multiple memory devices.
- Memory modules respond to commands from a memory controller communicatively coupled to a client device or a host device via a communication network.
- a memory controller may be implemented on the host-side of a memory-host interface; for example, a processor, microcontroller, or ASIC may include a memory controller.
- This communication network may enable data communication therebetween and, thus, the client device to utilize hardware resources accessible through the memory controller.
- processing circuitry of the memory controller may perform one or more operations to facilitate the retrieval or transmission of data between the client device and the memory devices.
- Data communicated between the client device and the memory devices may be used for a variety of purposes including, but not limited to, presentation of a visualization to a user through a graphical user interface (GUI) at the client device, processing operations, calculations, or the like.
- memory devices may be implemented using different memory types.
- a memory device may be implemented as volatile memory, such as dynamic random-access memory (DRAM) or static random-access memory (SRAM).
- the memory device may be implemented as non-volatile memory, such as flash (e.g., NAND, NOR) memory, phase-change memory (e.g., 3D XPoint™), or ferroelectric random access memory (FeRAM).
- memory devices generally include at least one memory die (i.e., an array of memory cells configured on a portion or “die” of a semiconductor wafer) to store data bits (e.g., “0” bit or “1” bit) transmitted to the memory device through a channel (e.g., data channel, communicative coupling) and may be functionally similar from the perspective of the processing circuitry even when implemented using different memory types.
- volatile memory may provide faster data transfer (e.g., read and/or write) speeds compared to non-volatile memory.
- non-volatile memory may provide higher data storage density compared to volatile memory.
- a combination of non-volatile memory cells and volatile memory cells may be used in a computing system to balance the costs and benefits of each type of memory.
- Non-volatile memory cells, in contrast to volatile memory cells, may also maintain their stored values or data bits while in an unpowered state.
- implementing a combination of non-volatile memory cells and volatile memory cells may change how data redundancy operations are managed in the computing system.
- data of non-volatile or volatile memory cells may be backed-up by non-volatile memory to protect the data of the computing system.
- memory may be protected against data loss through various redundancy schemes.
- An example of a redundancy scheme is a redundant array of independent disks, DIMMs, DRAM, 3D XPoint™ memory, or any other suitable form of memory, through which memory cells are protected against data loss by digital logic verification and/or protection techniques, such as exclusive-or (XOR) verification and XOR protection.
- the data stored in the non-volatile memories are subjected to an XOR logical operation.
- the result of the XOR logical operation is stored as the XOR result indicative of the correct data initially stored across the non-volatile memory.
- the data of the defective non-volatile memory may be recreated using the parity data as a replacement for the missing or lost data.
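The XOR parity scheme described above can be sketched in a few lines. The byte-string model of die contents and all function names below are illustrative assumptions for the example, not details from the patent.

```python
# Hypothetical sketch of XOR parity protection and recovery: parity is
# computed across the data of several memory dies, and the contents of one
# lost die can be rebuilt from the surviving dies plus the parity.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def compute_parity(dies: list[bytes]) -> bytes:
    """XOR together the data of every die to form the parity data."""
    parity = bytes(len(dies[0]))  # all-zero starting value
    for die in dies:
        parity = xor_bytes(parity, die)
    return parity

def recover_die(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild a single lost die: XOR the parity with every surviving die."""
    missing = parity
    for die in surviving:
        missing = xor_bytes(missing, die)
    return missing

dies = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]
parity = compute_parity(dies)
# Pretend die 1 is lost; rebuild it from the other dies and the parity.
rebuilt = recover_die([dies[0], dies[2]], parity)
assert rebuilt == dies[1]
```

The same recovery step works for any single missing member, because XOR-ing the parity with every surviving member cancels their contributions and leaves only the missing data.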
- Redundancy schemes like the one described above provide a reliable means of protecting memory against data loss.
- Causes of data loss include memory malfunction, power loss (e.g., power loss that prevents data stored in volatile memory from being refreshed to preserve its values), or other similar hardware defects.
- Redundancy schemes like the one described above may be used to recover data down to the smallest granularity of data used in the XOR logical operation. Thus, if a whole memory device is subjected to an XOR logical operation with other memory devices and the parity data is used for recovery, the XOR recovery may recover the data of the entire memory device after a data loss event.
- Some redundancy schemes operate to protect the entire memory device; that is, they are package-level redundancy schemes that use data of the whole memory device without regard to smaller, more practical data granularities. This may cause overprovisioning, since malfunction of an entire memory device is unlikely. In some instances, this overprovisioning leads to using larger memories to store the parity data and, thus, may increase the costs of providing the data protection.
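The overprovisioning argument above can be made concrete with simple arithmetic; the device and die counts below are assumptions chosen for illustration, not figures from the patent.

```python
# Illustrative comparison of parity overhead at package level vs die level.
devices, dies_per_device = 8, 4  # assumed module geometry

# Package-level scheme: one whole memory device is reserved for parity data.
package_overhead = 1 / devices                  # 1 of 8 devices

# Die-level scheme: only one memory die is reserved for parity data.
die_overhead = 1 / (devices * dies_per_device)  # 1 of 32 dies

print(f"package-level: {package_overhead:.1%}, die-level: {die_overhead:.1%}")
```

Under these assumed numbers, reserving one die instead of one device cuts the parity overhead from 12.5% to about 3.1% of total capacity.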
- Die-level redundancy schemes may reduce the overall overprovisioning while also providing one or more spare memory die.
- a redundant array of independent 3D XPoint™ memory (RAIX) is used as an example redundancy scheme that may be improved through die-level redundancy operations.
- a die-level RAIX scheme may enable the memory module to have access to an increased amount of spare memory.
- Die-level RAIX schemes enable the memory module to back-up data stored in individual memory die regardless of the number of memory devices included on the memory module.
- These memory die receive data from a memory controller through a channel; in some embodiments, a single channel provides data to multiple individual memory die located on the same or different memory devices.
- a memory die may receive data through a dedicated channel (e.g., a 1:1 channel-to-memory-die ratio) or through a channel shared with additional memory die (e.g., an M:N ratio of M channels to N memory die).
- a die-level RAIX scheme may operate to back-up the data stored in individual memory die (corresponding to the data transmitted through a channel to each memory die), and in this way may decrease overprovisioning and production costs while providing adequate protection of the memory module data.
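The channel-to-die ratios mentioned above can be illustrated with a minimal mapping sketch; the round-robin assignment policy and the function name are illustrative assumptions, not the patent's mechanism.

```python
# Minimal sketch of mapping M channels onto N memory dies (the M:N ratio
# discussed above), using a simple round-robin assignment.

def assign_channels(num_channels: int, num_dies: int) -> dict[int, list[int]]:
    """Map each of num_dies memory dies onto one of num_channels channels."""
    mapping: dict[int, list[int]] = {ch: [] for ch in range(num_channels)}
    for die in range(num_dies):
        mapping[die % num_channels].append(die)
    return mapping

# Dedicated 1:1 ratio: each memory die has its own channel.
print(assign_channels(4, 4))  # {0: [0], 1: [1], 2: [2], 3: [3]}
# Shared 2:8 ratio: each channel serves four memory dies.
print(assign_channels(2, 8))  # {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
```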
- a variety of computing systems may implement die-level RAIX schemes including one or more client devices communicatively coupled to one or more remote computing devices.
- certain computing processes are separated from each other to improve operational efficiency of the computing system.
- the memory processing circuitry may be implemented to perform data processing operations, for example, which would otherwise be performed by host processing circuitry.
- die-level RAIX is described below as implemented in a computing system using these remote computing devices; however, it should be understood that a variety of valid embodiments may implement die-level RAIX schemes.
- a computing system that does not use remote computing devices, and instead combines components of a client device with the memory modules and processing circuitry of the remote computing devices, may be employed.
- FIG. 1 depicts an example of a computing system 10 , which includes one or more remote computing devices 11 .
- the remote computing devices 11 may be communicatively coupled to the one or more client devices 12 via a communication network 14 .
- the depicted embodiment is merely intended to be illustrative and not limiting.
- the remote computing devices 11 may be communicatively coupled to a single client device 12 or more than two client devices 12 .
- the communication network 14 may enable data communication between the client devices 12 and the remote computing devices 11 .
- the client devices 12 may be physically remote (e.g., separate) from the remote computing devices 11 , for example, such that the remote computing devices 11 are located at a centralized data center.
- the communication network 14 may be a wide area network (WAN), such as the Internet.
- the remote computing devices 11 and the client devices 12 may each include a network interface 16 .
- a client device 12 may include input devices 18 and/or an electronic display 20 to enable a user to interact with the client device 12 .
- the input devices 18 may receive user inputs and, thus, may include buttons, keyboards, mice, trackpads, and/or the like.
- the electronic display 20 may include touch sensing components that receive user inputs by detecting occurrence and/or position of an object touching its screen (e.g., surface of the electronic display 20 ).
- the electronic display 20 may facilitate providing visual representations of information by displaying a graphical user interface (GUI) of an operating system, an application interface, text, a still image, video content, or the like.
- the communication network 14 may enable data communication between the remote computing devices 11 and one or more client devices 12 .
- the communication network 14 may enable user inputs to be communicated from a client device 12 to a remote computing device 11 .
- the communication network 14 may enable results of operations performed by the remote computing device 11 based on the user inputs to be communicated back to the client device 12 , for example, as image data to be displayed on its electronic display 20 .
- data communication provided by the communication network 14 may be leveraged to make centralized hardware available to multiple users, such that hardware at client devices 12 may be reduced.
- the remote computing devices 11 may provide data storage for multiple different client devices 12 , thereby enabling data storage (e.g., memory) provided locally at the client devices 12 to be reduced.
- the remote computing devices 11 may provide processing for multiple different client devices 12 , thereby enabling processing power provided locally at the client devices 12 to be reduced.
- the remote computing devices 11 may include processing circuitry 22 and one or more memory modules 24 (e.g., sub-systems) communicatively coupled via a data bus 25 .
- the processing circuitry 22 and/or the memory modules 24 may be implemented across multiple remote computing devices 11 , for example, such that a first remote computing device 11 includes a portion of the processing circuitry 22 and the first memory module 24 A, while an Mth remote computing device 11 includes another portion of the processing circuitry 22 and the Mth memory module 24 M.
- the processing circuitry 22 and the memory modules 24 may be implemented in a single remote computing device 11 .
- the processing circuitry 22 may generally execute instructions to perform operations, for example, indicated by user inputs received from a client device 12 .
- the processing circuitry 22 may include one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more processor cores, or any combination thereof.
- the processing circuitry 22 may additionally perform operations based on circuit connections formed (e.g., programmed) in the processing circuitry 22 .
- the processing circuitry 22 may additionally include one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or both.
- a memory module 24 may provide data storage accessible to the processing circuitry 22 .
- a memory module 24 may store data received from a client device 12 , data resulting from an operation performed by the processing circuitry 22 , data to be input to the operation performed by the processing circuitry 22 , instructions executable by the processing circuitry 22 to perform the operation, or any combination thereof.
- a memory module 24 may include one or more memory devices 26 (e.g., chips or integrated circuits).
- the memory devices 26 may each be a tangible, non-transitory, computer-readable medium that stores data accessible to the processing circuitry 22 .
- a memory module 24 may store data corresponding with different client devices 12 .
- the data may be grouped and stored as data blocks 28 .
- data corresponding with each client device 12 may be stored as a separate data block 28 .
- the memory devices 26 in the first memory module 24 A may store a first data block 28 A corresponding with the first client device 12 A and an Nth data block 28 N corresponding with the Nth client device 12 N.
- One or more data blocks 28 may be stored within a memory die of the memory device 26 .
- a data block 28 may correspond to a virtual machine (VM) provided to a client device 12 .
- a remote computing device 11 may provide the first client device 12 A a first virtual machine via the first data block 28 A and provide the Nth client device 12 N an Nth virtual machine via the Nth data block 28 N.
- the first client device 12 A may communicate the user inputs to the remote computing devices 11 via the communication network 14 .
- the remote computing device 11 may retrieve the first data block 28 A, execute instructions to perform corresponding operations, and communicate the results of the operations back to the first client device 12 A via the communication network 14 .
- the Nth client device 12 N may communicate the user inputs to the remote computing devices 11 via the communication network 14 .
- the remote computing device 11 may retrieve the Nth data block 28 N, execute instructions to perform corresponding operations, and communicate the results of the operations back to the Nth client device 12 N via the communication network 14 .
- the remote computing devices 11 may access (e.g., read and/or write) various data blocks 28 stored in a memory module 24 .
- a memory module 24 may include a memory controller 30 that controls storage of data in its memory devices 26 .
- the memory controller 30 may operate based on circuit connections formed (e.g., programmed) in the memory controller 30 .
- the memory controller 30 may include one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or both.
- a memory module 24 may include memory devices 26 that implement different memory types, for example, which provide varying tradeoffs between data access speed and data storage density.
- the memory controller 30 may control data storage across multiple memory devices 26 to facilitate leveraging the various tradeoffs, for example, such that the memory module 24 provides fast data access speed as well as high data storage capacity.
- FIG. 2 depicts an example of a memory module 24 including different types of memory devices 26 .
- the memory module 24 includes one or more non-volatile memory devices 32 and one or more volatile memory devices 34 .
- the volatile memory devices 34 may be implemented as dynamic random-access memory (DRAM) and/or static random-access memory (SRAM).
- the memory module 24 may include one or more DRAM devices (e.g., chips or integrated circuits), one or more SRAM devices (e.g., chips or integrated circuits), or both.
- the non-volatile memory devices 32 may be implemented as flash (e.g., NAND) memory, phase-change (e.g., 3D XPointTM) memory, and/or ferroelectric random access memory (FeRAM).
- the memory module 24 may include one or more NAND memory devices, one or more 3D XPointTM memory devices, or both.
- the non-volatile memory devices 32 may provide storage class memory (SCM), which, at least in some instances, may facilitate reducing implementation-associated costs, for example, by obviating other non-volatile data storage devices in the computing system 10 .
- the memory module 24 may be implemented by disposing each of the non-volatile memory devices 32 and the volatile memory devices 34 on a flat (e.g., front and/or back) surface of a printed circuit board (PCB).
- the memory module 24 may include a bus interface 36 .
- the bus interface 36 may include data pins (e.g., contacts) formed along an (e.g., bottom) edge of the printed circuit board.
- the memory module 24 may be a single in-line memory module (SIMM), a dual in-line memory module (DIMM), or the like.
- the bus interface 36 may include logic that enables the memory module 24 to communicate via a communication protocol implemented on the data bus 25 .
- the bus interface 36 may control timing of data output from the memory module 24 to the data bus 25 and/or interpret data input to the memory module 24 from the data bus 25 in accordance with the communication protocol.
- the bus interface 36 may be a double data rate fourth-generation (DDR4) interface, a double data rate fifth-generation (DDR5) interface, a peripheral component interconnect express (PCIe) interface, a non-volatile dual in-line memory module (e.g., NVDIMM-P) interface, or the like.
- a memory controller 30 may control data storage within the memory module 24 , for example, to facilitate improving data access speed and/or data storage efficiency by leveraging the various tradeoffs provided by memory types implemented in the memory module 24 .
- the memory controller 30 may be coupled between the bus interface 36 and the memory devices 26 via one or more internal buses 37 , for example, implemented via conductive traces formed on the printed circuit board.
- the memory controller 30 may control whether a data block 28 is stored in the non-volatile memory devices 32 or in the volatile memory devices 34 . In other words, the memory controller 30 may transfer a data block 28 from the non-volatile memory devices 32 into the volatile memory devices 34 or vice versa.
- the memory controller 30 may include buffer memory 38 , for example, to provide temporary data storage.
- the buffer memory 38 may include static random-access memory (SRAM) and, thus, may provide faster data access speed compared to the volatile memory devices 34 and the non-volatile memory devices 32 .
- the buffer memory 38 may be DRAM or FeRAM in some cases.
- the memory module 24 may include an address map, for example, stored in the buffer memory 38 , a non-volatile memory device 32 , a volatile memory device 34 , a dedicated address map memory device 26 , or any combination thereof.
- the remote computing device 11 may communicate with a service processor and/or a service bus included in or separate from the processing circuitry 22 and/or the data bus 25 .
- the service processor, the processing circuitry 22 , and/or the memory controller 30 may perform error detection operations and/or error correction code (ECC) operations, and may be disposed external from the remote computing device 11 such that error detection and error correction operations may continue if power to the remote computing device 11 is lost.
- the functions of the service processor are described as being included in and performed by the memory controller 30 ; however, it should be noted that in some embodiments the error correction operations or data recovery operations may be implemented as functions performed by the service processor, the processing circuitry 22 , or additional processing circuitry located internal or external to the remote computing device 11 or the client device 12 .
- the memory module 24 is depicted in FIG. 2 as a single device that includes various components or submodules.
- a remote computing device may include one or several discrete components equivalent to the various devices, modules, and components that make up memory module 24 .
- a remote computing device may include non-volatile memory, volatile memory, and a controller that are positioned on one or several different chips or substrates.
- the features and functions of memory module 24 need not be implemented in a single module to achieve the benefits described herein.
- FIG. 3 depicts a block diagram of an example of a package-level RAIX scheme.
- FIG. 3 depicts an embodiment of the memory module 24 , memory module 24A, that includes nine non-volatile memory devices 32 arranged to form a symmetric RAIX scheme where a full non-volatile memory device 32I is used to store parity data corresponding to the other eight non-volatile memory devices 32A-32H.
- Each non-volatile memory device 32 may store a segment of data corresponding to a memory address in a package 52 .
- the segment of data may be smaller than the overall size of the package 52 , for example, the segment of data may be 512 bytes while the package 52 may store several gigabytes.
- RAIX schemes may be implemented using greater than or less than nine non-volatile memory devices 32 with components of any suitable size.
- each non-volatile memory device 32 stores a particular amount of data accessible to the client device 12 .
- the processing circuitry 22 and/or the memory controller 30 may facilitate communication between the non-volatile memory device 32 and the client device 12 via channels. It may be desirable to be able to recover data stored in the packages 52 in the case of data loss. Thus, a package-level RAIX scheme may be used to protect data of the package 52 stored in the non-volatile memory devices 32 .
- a package-level RAIX scheme is implemented in the memory module 24 A, meaning that in the event of data loss of a package 52 , data transmitted via respective channels to each non-volatile memory device 32 and stored in the packages 52 may be recovered.
- the package-level RAIX scheme uses an XOR logical operation to back up the data of each package 52 . That is, the data of the package 52A is XOR'd with the data of the package 52B, the XOR result is XOR'd with the data of the package 52C, and so on until the second-to-last XOR result is XOR'd with the data of the package 52H. The last XOR result is the parity data and is stored in the package 52I.
- the resulting parity data is the same size as the segment of data stored in each package 52 .
- the parity data stored on the package 52I may equal 512 bytes, matching the size of the individual segments of data backed up through the package-level RAIX scheme, and the package 52I may have the capacity to store 512 bytes, the same as the other packages 52 .
- the parity data stored in the package 52 may be used to recreate the lost data (e.g., by substituting the parity data in the XOR logical operation to recreate the lost data).
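The chained XOR described above can be sketched in software. The following is a minimal illustration, not the patented implementation: the 512-byte segments and their contents are hypothetical, and a real memory controller would perform this operation in hardware.

```python
from functools import reduce

SEGMENT_SIZE = 512  # bytes per data segment, per the example above

def xor_segments(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length segments byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical contents for the eight data packages (52A-52H).
packages = [bytes([i]) * SEGMENT_SIZE for i in range(8)]

# Chain the XOR across all eight segments; the last XOR result is the
# parity data stored in the ninth package (52I).
parity = reduce(xor_segments, packages)

# The parity data is the same size as each individual data segment.
assert len(parity) == SEGMENT_SIZE
```

Because XOR is associative and commutative, the order in which the segments are chained does not affect the resulting parity data.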
- XOR refers to the exclusive-or logical function, which outputs a logical high (e.g., 1) when exactly one of its two inputs is a logical high (e.g., the first input is 0 and the second input is 1, or the first input is 1 and the second input is 0), and outputs a logical low (e.g., 0) when both inputs are the same (e.g., both inputs are 0 or both inputs are 1).
- This output relationship may be leveraged to back-up data stored in the various non-volatile memory devices 32 , as described above.
- the package-level RAIX scheme operates to back-up package 52 A and 52 B with the parity data.
- the package 52 A is XOR'd with the package 52 B to create the parity data.
- for example, if the package 52A stores 111 and the package 52B stores 000, the XOR result of 111 XOR 000 is 111, which is stored as the parity data.
- if the package 52A later experiences data loss, this parity data, 111, may be XOR'd with the data of the package 52B to recreate the data of the package 52A (111 XOR 000 equals 111).
- the package 52 A stores 101 and the package 52 B stores 110, the parity data equals 011. If package 52 B were to experience data loss, 011 XOR 101 recreates the data of the package 52 B and equals 110.
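The toy 3-bit example above can be checked directly; a minimal sketch:

```python
# Toy values from the text: package 52A stores 0b101, package 52B stores 0b110.
a = 0b101
b = 0b110

# XOR the two packages to create the parity data.
parity = a ^ b                 # 0b011

# If package 52B experiences data loss, XOR-ing the parity data with the
# surviving package 52A recreates the lost data.
recovered = parity ^ a
assert recovered == 0b110      # the data of package 52B is recreated
```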
- because the parity data is computed over whole packages 52 , any smaller groupings of data making up the packages 52 may not be able to be separately recreated.
- a memory die may malfunction while the rest of the package 52 functions as desired, but because the parity data represents the XOR result of the packages 52 , the whole package 52 is recreated from the parity data to recover the data lost to the physical malfunction of the memory die.
- this depicted package-level RAIX scheme overprovisions, using more memory to store the parity data in the package 52I than is sufficient to protect the data of the memory module 24 .
- the depicted package-level RAIX scheme follows an 8:1 protection ratio (e.g., eight packages 52A-52H storing data backed up by one package 52I storing parity data). This protection ratio translates into 12.5% overprovisioning (e.g., 1/8) of the packages 52 .
- the amount of overprovisioning correlates to RAIX scheme efficiency; in other words, the lower the percentage of overprovisioning, the less memory is used to provide data protection for the memory module 24 . Moreover, data loss affecting an entire package 52 at once is relatively unlikely; it is more likely that a non-volatile memory device 32 experiences data loss at a memory die level (not depicted in FIG. 3 ).
- a RAIX scheme to protect against data loss at the memory die level is more applicable to normal operation of the computing system 10 .
- FIG. 4 depicts a block diagram of an example of a die-level RAIX scheme.
- FIG. 4 depicts a second embodiment of a memory module 24 , memory module 24 B, that includes nine non-volatile memory devices 32 each represented as storing a particular amount of data in a memory die 58 .
- the memory module 24B follows a die-level RAIX scheme where each package 52 is divided into memory die 58 to store segments of data of size 256 bytes. Using the individual memory die 58 for determination of the parity data, instead of the individual packages 52 , decreases the overprovisioning from 12.5% (e.g., 1/8) to about 5.9% (e.g., 1/17).
- This separation may increase circuit complexity because an increased number of signal routings, components, and/or pins may be used to provide the increased number of channels.
- the increased design complexity may also increase manufacturing and/or design costs associated with memory module 24 production.
- increasing the number of signal routings (e.g., channels) may cause signal integrity to decrease as well, for example, from signal interferences.
- a scheme that balances these trade-offs with the overall level of overprovisioning may be desirable for some embodiments, while other embodiments may implement memory module 24 B.
- FIG. 5 depicts a block diagram of a second example of a die-level RAIX scheme.
- This third embodiment of the memory module 24 includes a Z number of non-volatile memory devices 32 each represented as storing a particular amount of data in a package 52 , where the package 52 is separated into multiple memory die 58 .
- the depicted example is merely intended to be illustrative and not limiting.
- die-level RAIX schemes may be implemented using any number of memory die 58 per non-volatile memory device 32 .
- the packages 52 from FIG. 3 are generally divided into separate memory die 58 .
- memory die 58A1, 58B1, . . . , 58X1 are located on the same non-volatile memory device 32A and the same package 52A.
- the memory controller 30 and/or the processing circuitry 22 may operate to protect the memory module 24 C data via the depicted asymmetric die-level RAIX scheme.
- each memory die 58 respectively undergoes the XOR logical operation, as opposed to the whole package 52 undergoing the XOR logical operation to create the parity data.
- the resulting parity data is stored in the memory die 58XZ of the non-volatile memory device 32Z. It should be noted that while the parity data is depicted as stored in the last memory die 58XZ, there is no restriction on which memory die 58 the parity data is to be stored in; for example, the parity data may instead be stored in the memory die 58AZ or the memory die 58A1. Because the parity data fits within a single memory die 58, less memory may be allocated for the purpose of storing it; a single memory die 58XZ serves the same purpose as the whole package 52I used to support the package-level RAIX scheme of FIG. 3 .
- the remaining memory die of the non-volatile memory device 32Z may be allocated as spare memory, where the spare memory die 58AZ, 58BZ, . . . , 58CZ may be used for operational overflow, additional data storage, information used by the memory controller 30 and/or the processing circuitry 22 to translate logical addresses into physical addresses, and the like.
- the memory module 24C is an improvement over the memory module 24A, which had relatively high overprovisioning and no spare memory, and over the memory module 24B, which has no spare memory and high design complexity.
- Dividing the packages 52 , for the purposes of redundancy, into the memory die 58 creates an overprovisioning of about 6.25% (e.g., 1/16), which is a decrease from the 12.5% (e.g., 1/8) overprovisioning of the memory module 24A and an increase from the roughly 5.9% (e.g., 1/17) overprovisioning of the memory module 24B.
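The overprovisioning figures compared above reduce to simple ratios of parity capacity to protected data capacity; a quick arithmetic check:

```python
# Overprovisioning = parity capacity / protected data capacity.
# Ratios follow the three schemes compared in the text.
overprovisioning = {
    "24A (package-level)": 1 / 8,   # one parity package per eight data packages
    "24B (die-level)": 1 / 17,      # one parity die per seventeen data dies
    "24C (die-level)": 1 / 16,      # one parity die per sixteen data dies
}

for scheme, ratio in overprovisioning.items():
    print(f"{scheme}: {ratio * 100:.2f}%")
# prints 12.50%, 5.88%, and 6.25% for the three schemes
```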
- the die-level RAIX scheme is an improvement over package-level RAIX schemes due to the simplicity of design and the minimal overprovisioning of memory used to support the redundancy or protection.
- the client device 12 receives inputs from users or other components and, in response to the inputs, requests the memory controller 30 of the memory module 24 C to facilitate performing memory operations.
- the client device 12 may issue these requests as commands and may indicate a logical address from where to retrieve or store the corresponding data.
- the client device 12 is generally unaware of the true physical address where the corresponding data is stored, since data is sometimes divided and stored across multiple locations referenced via one logical address.
- the memory controller 30 may receive these commands and translate the logical addresses into physical addresses to appropriately access stored data.
- the memory controller 30 may operate to read the data stored in each respective memory die 58 or may operate to write the data to be written in each respective memory die 58 .
- the memory controller 30 may also parse or interpret data stored in each respective memory die 58 as part of this read/write operation to complete the requested operation from the client device 12 . These operations are performed by transmitting segments of data through channels communicatively coupling the non-volatile memory device 32 to the memory controller 30 .
- the memory controller 30 may facilitate the updating of the parity data stored in the memory die 58 . To do this, the data to be stored in each memory die 58 is XOR'd with the data of the subsequent memory die 58 until each memory die 58 is reflected in the parity data.
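The parity update described above can be sketched as a full recompute across the data dies. This is an illustrative sketch only; the die contents and segment sizes are hypothetical, and a real controller could also update the parity incrementally rather than recomputing it.

```python
from functools import reduce

def xor_segments(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length data segments byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(data_dies: list) -> bytes:
    # Chain the XOR across every data die so that each die is
    # reflected in the resulting parity data.
    return reduce(xor_segments, data_dies)

# Hypothetical 4-byte segments standing in for memory die contents.
dies = [b"\x0f\x0f\x0f\x0f", b"\xf0\xf0\xf0\xf0", b"\xaa\xaa\xaa\xaa"]
parity = update_parity(dies)

# A write event changes one die's data, so the parity data is updated.
dies[1] = b"\x55\x55\x55\x55"
parity = update_parity(dies)   # 0x0f ^ 0x55 ^ 0xaa = 0xf0 per byte
```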
- the memory controller 30 may also facilitate verifying the quality of data stored in the memory die 58 . In some embodiments, the memory controller 30 may perform the XOR-ing of the data in the memory die 58 to verify that the recomputed parity data matches the stored parity data.
- the memory controller 30 may perform these redundancy operations in response to an event or a control signal, in response to performing a reading or writing operation, in response to a defined amount of time passing (e.g., for example, data in the memory die 58 is refreshed periodically, including the parity data), or any other suitable indication or event.
- the depicted components of the computing system 10 may be used to perform memory operations.
- the die-level RAIX scheme is integrated into the memory operation control flow.
- the die-level RAIX scheme is performed in response to a particular indication, signal, event, at periodic or defined time intervals, or the like.
- the die-level RAIX scheme is performed both at certain times during memory operations and in response to a control signal.
- die-level RAIX schemes may be incorporated into memory operations in a variety of ways.
- FIG. 6 depicts an example of a process 74 for controlling memory operations and die-level RAIX back-up schemes of a memory module 24 .
- the process 74 includes the memory controller 30 waiting for a memory operation request from the host (e.g., processing circuitry 22 and/or client device 12 ) (process block 76 ), receiving a memory operation request from the host (process block 78 ), and determining if the memory operation request corresponds to a data read event (decision block 80 ).
- the memory controller 30 may update the parity data, append the parity data to a segment of data for writing, and write the segment of data (process block 82 ), where upon completion of the writing, the memory controller 30 may wait for additional memory operation requests from the host (process block 76 ). However, in response to the memory operation request corresponding to a data read event, the memory controller 30 may read a segment of data from a corresponding memory address (process block 84 ) and determine if a data error occurred (decision block 86 ).
- the memory controller 30 may wait for additional memory operation requests from the host (process block 76 ), however, in response to determining that a data error did occur, the memory controller 30 may attempt to resolve the error using error correction code (ECC) techniques (process block 88 ), and determine whether the data error is eliminated (decision block 90 ). In response to determining that the data error is eliminated, the memory controller 30 may send the read data to the host (process block 92 ), and proceed to wait for additional memory operation requests from the host (process block 76 ).
- the memory controller 30 may determine the faulty memory die 58 (process block 94 ), use an XOR logical operation to recover lost data based on the faulty memory die 58 (process block 96 ), send the recovered data to the host (process block 92 ), and proceed to wait for an additional memory operation request from the host (process block 76 ).
- a memory controller 30 may wait for a memory operation request from its host device (process block 76 ). In this way, the memory controller 30 may be idle, and not performing memory operations (e.g., read, write) in-between read or write access events initiated by the host device.
- the memory controller 30 may receive a memory operation request from the host (process block 78 ) and may perform memory operations in response to the received memory operation request.
- the memory operation request may identify the requested data block 28 or segment of data by a corresponding logical address.
- a memory controller 30 may convert the logical address into a physical address. This physical address indicates where the data is actually stored in the memory module 24 .
- the memory controller 30 may use an address map, a look-up table, an equation conversion, or any suitable method to convert the logical address to a physical address.
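A look-up-table conversion like the one described above might be sketched as follows; the logical addresses, die names, and offsets are purely illustrative and not taken from the source.

```python
# Hypothetical address map from logical addresses to (memory die, offset)
# physical locations, as a memory controller might maintain.
address_map = {
    0x0000: ("58A1", 0x000),
    0x0200: ("58B1", 0x000),
    0x0400: ("58A2", 0x100),
}

def logical_to_physical(logical_address: int) -> tuple:
    # Convert the host-supplied logical address into the physical
    # location where the data is actually stored.
    physical = address_map.get(logical_address)
    if physical is None:
        raise KeyError(f"no mapping for logical address {logical_address:#06x}")
    return physical
```

In practice the map itself may live in the buffer memory 38, a dedicated address map memory device, or elsewhere, as noted earlier in the text.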
- the processing circuitry 22 receives the various memory operation requests via communication with the client device 12 , however in some embodiments, the processing circuitry 22 may initiate various memory operation requests independent of the client device 12 . These memory operation requests may include requests to retrieve, or read, data from one or more of the non-volatile memory devices 32 or requests to store, or write, data into one or more of the non-volatile memory devices 32 . In this way, during memory operations, the memory controller 30 may receive a logical address from the host, may translate the logical address into a physical address indicative of where the corresponding data is to be stored (e.g., writing operations) or is stored (e.g., reading operations), and may operate to read or write the corresponding data based on a corresponding physical address.
- the memory controller 30 may determine if the memory operation request corresponds to a data read event (decision block 80 ). The memory controller 30 may check for changes to data stored in the non-volatile memory devices 32 and/or may operate by assuming data stored in the non-volatile memory devices 32 changes after each data write. Thus, the memory controller 30 generally determines whether a data write event occurred, where the data write event changes data stored in any one of the memory die 58 . This determination is performed to facilitate keeping parity data stored in the memory die 58 relevant and/or accurate.
- the memory controller 30 may append parity bits to the segment of data to be written and may write the segment of data to memory (process block 82 ). These parity bits may be used in future error correcting code operations to resolve minor transmission errors (e.g., process block 88 ). In addition, the memory controller 30 may update the parity data to reflect the changed segment of data.
- the memory controller 30 of the memory module 24 may perform the XOR logical operation to each of the memory die 58 and may store the XOR result as the updated parity data into a parity data memory die 58 (e.g., memory die 58 XZ).
- the memory controller 30 may include data of the spare memory in the XOR logical operation, such that the XOR result represents the XOR of each memory die 58 and data stored in the spare memory. It should be noted that, in some embodiments, the memory controller 30 updates the parity data in response to receiving an indication created in response to a timer tracking minimum parity data update intervals or an indication transmitted from the client device 12 to request the update of the parity data.
- the memory controller 30 may update the parity data more frequently than just in response to data write operations. By determining whether the memory operation request corresponds to a data read event, the memory controller 30 may update the parity data in response to every memory operation request that does not correspond to a data read event, including, for example, requests based on tracked time intervals. Upon appending and writing the segment of data to memory, the memory controller 30 may wait to receive an additional memory operation request from the host (process block 76 ).
- the memory controller 30 may read a segment of data at a corresponding memory address (process block 84 ).
- the memory operation request includes a logical address at which a desired segment of data is stored.
- the memory controller 30 may retrieve the desired segment of data at the indicated logical address in response to the memory operation request (e.g., through referencing a converted physical address and operating to retrieve the segment of data from the corresponding memory die 58 ).
- the memory controller 30 may determine if the data is correct (e.g., not defective) (decision block 86 ).
- the memory controller 30 may perform various data verification techniques to confirm the data is correct by verifying the data read is the same as was initially represented with the parity data stored on memory die 58 . These data verification techniques may facilitate the detection of both physical and digital defects associated with the memory module 24 . These defects may include issues such as data writing errors, mechanical defects associated with the physical memory die 58 , mechanical defects associated with the non-volatile memory device 32 , and the like.
- the memory controller 30 may proceed to use XOR verification to determine if the data read in response to the data read event is uncorrupted and correct.
- the memory controller 30 of the memory module 24 may XOR the data of each memory die 58 , and in some embodiments the data of each memory die 58 and the spare memory, to determine an additional XOR result. Upon calculating the additional XOR result, the memory controller 30 may determine if the XOR results are the same. The memory controller 30 of the memory module 24 may compare the additional XOR result and the parity data stored in memory die 58 to determine if the XOR results are equal or substantially similar (e.g., within a threshold of similarity such that the results are considered equal).
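The XOR verification step described above can be sketched as follows, with hypothetical die contents; a mismatch between the additional XOR result and the stored parity data signals a data error.

```python
from functools import reduce

def xor_segments(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length data segments byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

def verify_parity(data_dies: list, stored_parity: bytes) -> bool:
    # Recompute the XOR across every data die (the "additional XOR
    # result") and compare it against the stored parity data.
    return reduce(xor_segments, data_dies) == stored_parity

# Hypothetical die contents and the parity computed from them.
dies = [b"\x11\x11", b"\x22\x22", b"\x44\x44"]
parity = reduce(xor_segments, dies)      # 0x77 per byte

assert verify_parity(dies, parity)       # data intact: results match
dies[0] = b"\x10\x11"                    # simulate a corrupted die
assert not verify_parity(dies, parity)   # mismatch flags a data error
```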
- the memory controller 30 may proceed to wait for an additional memory operation request from the host (process block 76 ). However, in response to determining that the XOR results are not the same and thus the read data is incorrect (e.g., found data error), the memory controller 30 may attempt to resolve the data error with error correcting code (ECC) techniques (process block 88 ).
- Error correcting code techniques may include adding redundant parity data to a segment of data such that, upon reading, the original segment of data may still be recovered even if minor data corruption occurs. There are a wide variety of valid ways to perform this preliminary quality control step to verify that the data error is not caused by a minor transmission issue, such as convolutional code and block code methods.
- the memory controller 30 may determine if the data error has been eliminated by the correction (decision block 90 ). If the memory controller 30 determines that the error is eliminated after implementing the error correcting code techniques, the memory controller 30 may send the read data to the host device for further processing and/or use in computing activities (process block 92 ). After transmission of the read data, the memory controller 30 waits for an additional memory operation request from the host (process block 76 ).
- the memory controller 30 may proceed to determine which of the memory die 58 is defective or faulty (process block 94 ).
- the memory controller 30 may perform various determination activities to determine which memory die 58 is faulty, such as systematic testing of the memory die 58 responses to test write or read operations.
- the memory controller 30 may communicate the data error to the client device 12 and receive an indication from the host, such as an indication originating from a user of the client device 12 , communicating which memory die 58 is defective or faulty.
- the memory controller 30 may use the parity data to recover the data lost in response to the faulty memory die 58 (process block 96 ).
- the memory controller 30 may recover the lost data by leveraging the fact that XOR is its own inverse. That is, the memory controller may XOR the data of each memory die 58 , excluding the faulty memory die 58 , and include the parity data in the XOR operation.
- for example, to recreate the lost data of the faulty memory die 58A2, the memory controller 30 XORs the data of each memory die 58 other than the faulty memory die 58A2, substituting the parity data for the data of the faulty memory die 58A2 (e.g., the data of the memory die 58A1 XOR'd with the data of the memory die 58B2 XOR'd with the parity data recreates the lost data of the memory die 58A2).
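The recovery operation can be sketched as follows; the die contents are hypothetical, and the faulty die's index is assumed to have already been determined (process block 94).

```python
from functools import reduce

def xor_segments(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length data segments byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

def recover_die(surviving_dies: list, parity: bytes) -> bytes:
    # XOR every surviving data die together with the parity data;
    # because XOR is its own inverse, the result is the lost die's data.
    return reduce(xor_segments, surviving_dies + [parity])

# Hypothetical 2-byte die contents and the parity computed from them.
dies = [b"\x0f\x0f", b"\xf0\xf0", b"\x3c\x3c"]
parity = reduce(xor_segments, dies)

lost = dies.pop(1)                  # simulate a faulty memory die
recovered = recover_die(dies, parity)
assert recovered == lost            # the lost data is recreated
```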
- the memory controller 30 performs this recovery operation in response to receiving a proceed indication from the processing circuitry 22 , or other suitable processing circuitry. In this way, in these embodiments, the memory controller 30 may wait to recover the lost data until a physical repair is performed.
- the memory controller 30 may transmit the recovered data to the host (process block 92 ) and proceed to wait for an additional memory operation request (process block 76 ).
- the memory controller 30 may continue the process 74 to keep the parity data up to date, to monitor data quality stored within the non-volatile memory devices 32 , and/or to perform recovery operations in the event of data loss.
- technical effects of the present disclosure include facilitating improved redundancy operations to protect against data loss at a die-level or memory die sized granularity.
- These techniques describe systems and methods for performing XOR logical operations to create parity data, verify data integrity or quality, and to recover data in the event of data loss, all at the die-level instead of the package-level.
- These techniques also provide for one or more additional spare memory die, an improvement from package-level redundancy operations.
Description
- This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light and not as admissions of prior art.
- Generally, a computing system includes processing circuitry, such as one or more processors or other suitable components, and memory devices, such as chips or integrated circuits. One or more memory devices may be implemented on a memory module, such as a dual in-line memory module (DIMM), to store data accessible to the processing circuitry. For example, based on a user input to the computing system, the processing circuitry may request that a memory module retrieve data corresponding to the user input from its memory devices. In some instances, the retrieved data may include instructions executable by the processing circuitry to perform an operation and/or may include data to be used as an input for the operation. In addition, in some cases, data output from the operation may be stored in memory, for example, to enable subsequent retrieval.
- Furthermore, the data stored in the memory devices may include particular data that is desired to be preserved, retained, or recreated in the case of data loss or memory device malfunction. Resources dedicated to storing such data may be unavailable for other uses and may thus constrain device operability.
- Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
- FIG. 1 is a block diagram of a computing system that includes client devices and one or more remote computing devices, in accordance with an embodiment;
- FIG. 2 is a block diagram of a memory module that may be implemented in a remote computing device of FIG. 1, in accordance with an embodiment;
- FIG. 3 is a block diagram of the memory module of FIG. 2 arranged in a first non-volatile memory arrangement, in accordance with an embodiment;
- FIG. 4 is a block diagram of the memory module of FIG. 2 arranged in a second non-volatile memory arrangement, in accordance with an embodiment;
- FIG. 5 is a block diagram of the memory module of FIG. 2 arranged in a third non-volatile memory arrangement, in accordance with an embodiment; and
- FIG. 6 is a flow diagram of a process for operating the memory module of FIGS. 4-5 to perform die-level redundancy operations, in accordance with an embodiment.
- A memory device may be designated to store parity data. The parity data may be stored or backed up in non-volatile memory, or in volatile memory powered by an additional power supply, for example, to protect against data loss from power loss or component defect. In some cases, the memory device may store parity data used to recover data for additional memory devices as a way to back up the data of the additional memory devices. However, in many cases, backing up a whole memory device may lead to excessive overprovisioning of memory and wasting of resources. Thus, as described herein, a die-level redundancy scheme may be employed in which parity data associated with a particular die (rather than a whole memory device) may be stored.
- Generally, hardware of a computing system includes processing circuitry and memory, for example, implemented using one or more processors and/or one or more memory devices (e.g., chips or integrated circuits). During operation of the computing system, the processing circuitry may perform various operations (e.g., tasks) by executing corresponding instructions, for example, based on a user input to determine output data by performing operations on input data. To facilitate operation of the computing system, data accessible to the processing circuitry may be stored in a memory device, such that the memory device stores the input data, the output data, data indicating the executable instructions, or any combination thereof.
- In some instances, multiple memory devices may be implemented on a memory module, thereby enabling the memory devices to be communicatively coupled to the processing circuitry as a unit. For example, a dual in-line memory module (DIMM) may include a printed circuit board (PCB) and multiple memory devices. Memory modules respond to commands from a memory controller communicatively coupled to a client device or a host device via a communication network. In some cases, however, a memory controller may instead be implemented on the host side of a memory-host interface; for example, a processor, microcontroller, or ASIC may include a memory controller. The communication network may enable data communication therebetween and, thus, may enable the client device to utilize hardware resources accessible through the memory controller. Based at least in part on user input to the client device, processing circuitry of the memory controller may perform one or more operations to facilitate the retrieval or transmission of data between the client device and the memory devices. Data communicated between the client device and the memory devices may be used for a variety of purposes including, but not limited to, presentation of a visualization to a user through a graphical user interface (GUI) at the client device, processing operations, calculations, or the like.
- Additionally, in some instances, memory devices may be implemented using different memory types. For example, a memory device may be implemented as volatile memory, such as dynamic random-access memory (DRAM) or static random-access memory (SRAM). Alternatively, the memory device may be implemented as non-volatile memory, such as flash (e.g., NAND, NOR) memory, phase-change memory (e.g., 3D XPoint™), or ferroelectric random access memory (FeRAM). In any case, memory devices generally include at least one memory die (i.e., an array of memory cells configured on a portion or “die” of a semiconductor wafer) to store data bits (e.g., “0” bit or “1” bit) transmitted to the memory device through a channel (e.g., data channel, communicative coupling) and may be functionally similar from the perspective of the processing circuitry even when implemented using different memory types.
- However, different memory types may provide varying tradeoffs that affect implementation associated cost of a computing system. For example, volatile memory may provide faster data transfer (e.g., read and/or write) speeds compared to non-volatile memory. On the other hand, non-volatile memory may provide higher data storage density compared to volatile memory. Thus, a combination of non-volatile memory cells and volatile memory cells may be used in a computing system to balance the costs and benefits of each type of memory. Non-volatile memory cells, in contrast to volatile memory, may also maintain their stored value or data bits while in an unpowered state. Thus, implementing a combination of non-volatile memory cells and volatile memory cells may change how data redundancy operations are managed in the computing system.
- In particular, data of non-volatile or volatile memory cells may be backed up by non-volatile memory to protect the data of the computing system. In some circumstances, memory may also be protected against data loss through various redundancy schemes. An example of a redundancy scheme is a redundant array of independent disks, DIMMs, DRAM, 3D XPoint™, or any other suitable form of memory, in which memory cells are protected against data loss through digital logic verification and/or protection techniques, such as exclusive-or (XOR) verification and XOR protection. In XOR protection techniques, the data stored in the non-volatile memories is subjected to an XOR logical operation. The result of the XOR logical operation, often referred to as parity data or parity bits, is stored as an XOR result indicative of the correct data initially stored across the non-volatile memory. In the event of data loss, the data of the defective non-volatile memory may be recreated using the parity data as a replacement for the missing or lost data.
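As an illustrative sketch of the XOR protection technique described above (a simplified software model, not the claimed hardware; the three one-byte segments are hypothetical), the parity data is the bitwise XOR fold of every memory's data, and substituting the parity for any one lost segment recreates that segment:

```python
from functools import reduce

def make_parity(segments: list[bytes]) -> bytes:
    """XOR corresponding bits of every segment to form the parity data."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), segments)

# Hypothetical one-byte segments stored in three memories.
segments = [bytes([0b1010]), bytes([0b1100]), bytes([0b0110])]
parity = make_parity(segments)  # 0b1010 ^ 0b1100 ^ 0b0110 = 0b0000

# Substituting the parity for any one lost segment recreates it:
assert make_parity([segments[0], segments[1], parity]) == segments[2]
```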
- Redundancy schemes, like the one described above, provide a reliable means of protecting memory against data loss. A variety of circumstances may cause data loss, including memory malfunction, power loss (e.g., power loss causing data stored in volatile memory to not be refreshed to preserve data values), or other similar hardware defects that cause data loss. Redundancy schemes, like the one described above, may be used to recover data down to the smallest granularity of data used in the XOR logical operation. Thus, if a memory device is subjected to an XOR logical operation with other memory devices and the parity data is used for recovery, the XOR recovery may recover data for the entire memory device after a data loss event.
- Commonly, redundancy schemes operate to protect the entire memory device, that is, package-level redundancy schemes that use data of the whole memory device without regard to smaller, more practical data granularity. This may cause overprovisioning since malfunction of the entire memory device is uncommon and unlikely. In some instances, this overprovisioning leads to using larger sized memories to store the parity data and, thus, may increase costs of providing the data protection. Thus, there may be particular advantages to implementing a die-level redundancy scheme to provide protection to individual memory die of the memory device, instead of the memory device or channel as a whole. Die-level redundancy schemes may reduce the overall overprovisioning while also providing one or more spare memory die. For purposes of this disclosure, a redundant array of independent 3D XPoint™ memory (RAIX) is used as an example redundancy scheme that may be improved through die-level redundancy operations.
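The overprovisioning comparison can be made concrete with simple arithmetic. This sketch assumes the ratios discussed elsewhere in this disclosure (eight data packages per parity package at the package level, and seventeen data die per parity die at the die level):

```python
def overprovisioning(data_units: int, parity_units: int) -> float:
    """Fraction of protected capacity consumed by parity storage."""
    return parity_units / data_units

# Package-level: one parity package backs up eight data packages.
package_level = overprovisioning(data_units=8, parity_units=1)   # 1/8 = 12.5%

# Die-level: one parity die backs up seventeen data die.
die_level = overprovisioning(data_units=17, parity_units=1)      # 1/17 ≈ 5.88%

assert package_level > die_level  # die-level dedicates less capacity to parity
```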
- To facilitate improving RAIX schemes, the present disclosure provides techniques for implementing and operating memory modules to provide die-level RAIX schemes (i.e., die-level redundancy schemes). In particular, a die-level RAIX scheme may enable the memory module to have access to an increased amount of spare memory. Die-level RAIX schemes enable the memory module to back up data stored in individual memory die regardless of the number of memory devices included on the memory module. These memory die receive data from a memory controller through a channel or, in some embodiments, through a channel that provides data to multiple individual memory die located on the same or different memory devices. In this way, a memory die may receive data through a dedicated channel (e.g., a 1:1 channel-to-memory-die ratio) or through a channel shared with additional memory die (e.g., an M:N ratio of channels, M, to memory die, N). Thus, several channels may be allocated to a memory device that includes two or more memory die, and one or more memory die may be associated with one or more channels. A die-level RAIX scheme may operate to back up the data stored in the individual memory die, thus corresponding to the data transmitted through a channel to the memory die, and in this way may decrease overprovisioning and decrease costs of production while providing adequate protection of the memory module data.
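One way to picture the die-level grouping just described (a hypothetical model, not a data structure prescribed by this disclosure) is to treat every memory die, across all packages, as an independent contributor to the parity calculation. The package and die labels below reuse the reference numerals of the figures, while the one-byte contents and dictionary layout are illustrative assumptions:

```python
# Hypothetical layout: each package contributes two die; every die gets
# its own logical channel, and all die (not packages) feed the parity.
packages = {
    "52A": {"58A1": b"\x11", "58A2": b"\x22"},
    "52B": {"58B1": b"\x33", "58B2": b"\x44"},
}

# Flatten to die-level: one entry per die/channel.
dies = {die: data for pkg in packages.values() for die, data in pkg.items()}

# Parity is the XOR fold over every die's data.
parity = bytes([0])
for data in dies.values():
    parity = bytes(p ^ d for p, d in zip(parity, data))

# A single faulty die can be rebuilt even though its package is healthy.
lost = "58A2"
rebuilt = parity
for die, data in dies.items():
    if die != lost:
        rebuilt = bytes(r ^ d for r, d in zip(rebuilt, data))
assert rebuilt == dies[lost]
```

Rebuilding one die while the other die in its package remain in service is the granularity advantage over the package-level scheme.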
- In accordance with embodiments described herein, a variety of computing systems may implement die-level RAIX schemes, including one or more client devices communicatively coupled to one or more remote computing devices. In these devices, certain computing processes are separated from each other to improve operational efficiency of the computing system. For example, beyond merely controlling data access (e.g., storage and/or retrieval), the memory processing circuitry may be implemented to perform data processing operations, for example, which would otherwise be performed by host processing circuitry. For ease of description, die-level RAIX is described below as implemented in a computing system using these remote computing devices; however, it should be understood that a variety of valid embodiments may implement die-level RAIX schemes. For example, a computing system that does not use remote computing devices and instead combines components of a client device with the memory modules and processing circuitry of the remote computing devices may be employed.
- To help illustrate,
FIG. 1 depicts an example of a computing system 10, which includes one or more remote computing devices 11. As in the depicted embodiment, the remote computing devices 11 may be communicatively coupled to the one or more client devices 12 via a communication network 14. It should be appreciated that the depicted embodiment is merely intended to be illustrative and not limiting. For example, in other embodiments, the remote computing devices 11 may be communicatively coupled to a single client device 12 or more than two client devices 12.
- In any case, the communication network 14 may enable data communication between the client devices 12 and the remote computing devices 11. In some embodiments, the client devices 12 may be physically remote (e.g., separate) from the remote computing devices 11, for example, such that the remote computing devices 11 are located at a centralized data center. Thus, in some embodiments, the communication network 14 may be a wide area network (WAN), such as the Internet. To facilitate communication via the communication network 14, the remote computing devices 11 and the client devices 12 may each include a network interface 16.
- In addition to the
network interface 16, aclient device 12 may includeinput devices 18 and/or anelectronic display 20 to enable a user to interact with theclient device 12. For example, theinput devices 18 may receive user inputs and, thus, may include buttons, keyboards, mice, trackpads, and/or the like. Additionally or alternatively, theelectronic display 20 may include touch sensing components that receive user inputs by detecting occurrence and/or position of an object touching its screen (e.g., surface of the electronic display 20). In addition to enabling user inputs, theelectronic display 20 may facilitate providing visual representations of information by displaying a graphical user interface (GUI) of an operating system, an application interface, text, a still image, video content, or the like. - As described above, the
communication network 14 may enable data communication between theremote computing devices 11 and one ormore client devices 12. In other words, thecommunication network 14 may enable user inputs to be communicated from aclient device 12 to aremote computing device 11. Additionally or alternatively, thecommunication network 14 may enable results of operations performed by theremote computing device 11 based on the user inputs to be communicated back to theclient device 12, for example, as image data to be displayed on itselectronic display 20. - In fact, in some embodiments, data communication provided by the
communication network 14 may be leveraged to make centralized hardware available to multiple users, such that hardware at the client devices 12 may be reduced. For example, the remote computing devices 11 may provide data storage for multiple different client devices 12, thereby enabling data storage (e.g., memory) provided locally at the client devices 12 to be reduced. Additionally or alternatively, the remote computing devices 11 may provide processing for multiple different client devices 12, thereby enabling processing power provided locally at the client devices 12 to be reduced.
- Thus, in addition to the network interface 16, the remote computing devices 11 may include processing circuitry 22 and one or more memory modules 24 (e.g., sub-systems) communicatively coupled via a data bus 25. In some embodiments, the processing circuitry 22 and/or the memory modules 24 may be implemented across multiple remote computing devices 11, for example, such that a first remote computing device 11 includes a portion of the processing circuitry 22 and the first memory module 24A, while an Mth remote computing device 11 includes another portion of the processing circuitry 22 and the Mth memory module 24M. Additionally or alternatively, the processing circuitry 22 and the memory modules 24 may be implemented in a single remote computing device 11.
- In any case, the
processing circuitry 22 may generally execute instructions to perform operations, for example, indicated by user inputs received from a client device 12. Thus, the processing circuitry 22 may include one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more processor cores, or any combination thereof. In some embodiments, the processing circuitry 22 may additionally perform operations based on circuit connections formed (e.g., programmed) in the processing circuitry 22. Thus, in such embodiments, the processing circuitry 22 may additionally include one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or both.
- Additionally, a memory module 24 may provide data storage accessible to the processing circuitry 22. For example, a memory module 24 may store data received from a client device 12, data resulting from an operation performed by the processing circuitry 22, data to be input to the operation performed by the processing circuitry 22, instructions executable by the processing circuitry 22 to perform the operation, or any combination thereof. To facilitate providing data storage, a memory module 24 may include one or more memory devices 26 (e.g., chips or integrated circuits). In other words, the memory devices 26 may each be a tangible, non-transitory, computer-readable medium that stores data accessible to the processing circuitry 22.
- Since hardware of the
remote computing devices 11 may be utilized by multiple client devices 12, at least in some instances, a memory module 24 may store data corresponding with different client devices 12. To facilitate identifying appropriate data, in some embodiments, the data may be grouped and stored as data blocks 28. In fact, in some embodiments, data corresponding with each client device 12 may be stored as a separate data block 28. For example, the memory devices 26 in the first memory module 24A may store a first data block 28A corresponding with the first client device 12A and an Nth data block 28N corresponding with the Nth client device 12N. One or more data blocks 28 may be stored within a memory die of the memory device 26.
- Additionally, in some embodiments, a data block 28 may correspond to a virtual machine (VM) provided to a client device 12. In other words, as an illustrative example, a remote computing device 11 may provide the first client device 12A a first virtual machine via the first data block 28A and provide the Nth client device 12N an Nth virtual machine via the Nth data block 28N. Thus, when the first client device 12A receives user inputs intended for the first virtual machine, the first client device 12A may communicate the user inputs to the remote computing devices 11 via the communication network 14. Based at least in part on the user inputs, the remote computing device 11 may retrieve the first data block 28A, execute instructions to perform corresponding operations, and communicate the results of the operations back to the first client device 12A via the communication network 14.
- Similarly, when the
Nth client device 12N receives user inputs intended for the Nth virtual machine, the Nth client device 12N may communicate the user inputs to the remote computing devices 11 via the communication network 14. Based at least in part on the user inputs, the remote computing device 11 may retrieve the Nth data block 28N, execute instructions to perform corresponding operations, and communicate the results of the operations back to the Nth client device 12N via the communication network 14. Thus, the remote computing devices 11 may access (e.g., read and/or write) various data blocks 28 stored in a memory module 24.
- To facilitate improving access to stored data blocks 28, a memory module 24 may include a memory controller 30 that controls storage of data in its memory devices 26. In some embodiments, the memory controller 30 may operate based on circuit connections formed (e.g., programmed) in the memory controller 30. Thus, in such embodiments, the memory controller 30 may include one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or both. In any case, as described above, a memory module 24 may include memory devices 26 that implement different memory types, for example, which provide varying tradeoffs between data access speed and data storage density. Thus, in such embodiments, the memory controller 30 may control data storage across multiple memory devices 26 to facilitate leveraging the various tradeoffs, for example, such that the memory module 24 provides fast data access speed as well as high data storage capacity.
- To help illustrate,
FIG. 2 depicts an example of a memory module 24 including different types of memory devices 26. In particular, the memory module 24 includes one or more non-volatile memory devices 32 and one or more volatile memory devices 34. In some embodiments, the volatile memory devices 34 may be implemented as dynamic random-access memory (DRAM) and/or static random-access memory (SRAM). In other words, in such embodiments, the memory module 24 may include one or more DRAM devices (e.g., chips or integrated circuits), one or more SRAM devices (e.g., chips or integrated circuits), or both.
- Additionally, in some embodiments, the non-volatile memory devices 32 may be implemented as flash (e.g., NAND) memory, phase-change (e.g., 3D XPoint™) memory, and/or ferroelectric random access memory (FeRAM). In other words, in such embodiments, the memory module 24 may include one or more NAND memory devices, one or more 3D XPoint™ memory devices, or both. In fact, in some embodiments, the non-volatile memory devices 32 may provide storage class memory (SCM), which, at least in some instances, may facilitate reducing implementation associated cost, for example, by obviating other non-volatile data storage devices in the computing system 10.
- In any case, in some embodiments, the
memory module 24 may be implemented by disposing each of the non-volatile memory devices 32 and the volatile memory devices 34 on a flat (e.g., front and/or back) surface of a printed circuit board (PCB). To facilitate data communication via the data bus 25, the memory module 24 may include a bus interface 36. For example, the bus interface 36 may include data pins (e.g., contacts) formed along an (e.g., bottom) edge of the printed circuit board. Thus, in some embodiments, the memory module 24 may be a single in-line memory module (SIMM), a dual in-line memory module (DIMM), or the like.
- Additionally, in some embodiments, the bus interface 36 may include logic that enables the memory module 24 to communicate via a communication protocol implemented on the data bus 25. For example, the bus interface 36 may control timing of data output from the memory module 24 to the data bus 25 and/or interpret data input to the memory module 24 from the data bus 25 in accordance with the communication protocol. Thus, in some embodiments, the bus interface 36 may be a double data rate fourth-generation (DDR4) interface, a double data rate fifth-generation (DDR5) interface, a peripheral component interconnect express (PCIe) interface, a non-volatile dual in-line memory module (e.g., NVDIMM-P) interface, or the like.
- In any case, as described above, a
memory controller 30 may control data storage within the memory module 24, for example, to facilitate improving data access speed and/or data storage efficiency by leveraging the various tradeoffs provided by memory types implemented in the memory module 24. Thus, as in the depicted example, the memory controller 30 may be coupled between the bus interface 36 and the memory devices 26 via one or more internal buses 37, for example, implemented via conductive traces formed on the printed circuit board. For example, the memory controller 30 may control whether a data block 28 is stored in the non-volatile memory devices 32 or in the volatile memory devices 34. In other words, the memory controller 30 may transfer a data block 28 from the non-volatile memory devices 32 into the volatile memory devices 34 or vice versa.
- To facilitate data transfers, the memory controller 30 may include buffer memory 38, for example, to provide temporary data storage. In some embodiments, the buffer memory 38 may include static random-access memory (SRAM) and, thus, may provide faster data access speed compared to the volatile memory devices 34 and the non-volatile memory devices 32. The buffer memory 38 may be DRAM or FeRAM in some cases. Additionally, to facilitate accessing stored data blocks 28, the memory module 24 may include an address map, for example, stored in the buffer memory 38, a non-volatile memory device 32, a volatile memory device 34, a dedicated address map memory device 26, or any combination thereof.
- In addition, the
remote computing device 11 may communicate with a service processor and/or a service bus included in or separate from the processing circuitry 22 and/or the data bus 25. The service processor, the processing circuitry 22, and/or the memory controller 30 may perform error detection operations and/or error correction operations (ECC), and may be disposed external to the remote computing device 11 such that error detection and error correction operations may continue if power to the remote computing device 11 is lost. For simplicity of description, the functions of the service processor are described as being included in and performed by the memory controller 30; however, it should be noted that in some embodiments the error correction operations or data recovery operations may be implemented as functions performed by the service processor, the processing circuitry 22, or additional processing circuitry located internal or external to the remote computing device 11 or the client device 12.
- The memory module 24 is depicted in FIG. 2 as a single device that includes various components or submodules. In some examples, however, a remote computing device may include one or several discrete components equivalent to the various devices, modules, and components that make up the memory module 24. For instance, a remote computing device may include non-volatile memory, volatile memory, and a controller that are positioned on one or several different chips or substrates. In other words, the features and functions of the memory module 24 need not be implemented in a single module to achieve the benefits described herein.
- To help illustrate,
FIG. 3 depicts a block diagram of an example of a package-level RAIX scheme. Generally, FIG. 3 depicts an embodiment of the memory module 24, memory module 24A, that includes nine non-volatile memory devices 32 arranged to form a symmetric RAIX scheme, where a full non-volatile memory device 32I is used to store parity data corresponding to the other eight non-volatile memory devices 32A-32H. Each non-volatile memory device 32 may store a segment of data corresponding to a memory address in a package 52. The segment of data may be smaller than the overall size of the package 52; for example, the segment of data may be 512 bytes while the package 52 may store several gigabytes. It should be appreciated that the depicted example is merely intended to be illustrative and not limiting. In fact, in some embodiments, RAIX schemes may be implemented using greater than or fewer than nine non-volatile memory devices 32 with components of any suitable size.
- In any case, with regard to the depicted embodiment shown in FIG. 3, each non-volatile memory device 32 stores a particular amount of data accessible to the client device 12. The processing circuitry 22 and/or the memory controller 30 may facilitate communication between the non-volatile memory devices 32 and the client device 12 via channels. It may be desirable to be able to recover data stored in the packages 52 in the case of data loss. Thus, a package-level RAIX scheme may be used to protect data of the packages 52 stored in the non-volatile memory devices 32.
- As depicted, a package-level RAIX scheme is implemented in the
memory module 24A, meaning that in the event of data loss of a package 52, data transmitted via respective channels to each non-volatile memory device 32 and stored in the packages 52 may be recovered. The package-level RAIX scheme uses an XOR logical operation to back up data of each package 52. That is, the data of the package 52A is XOR'd with the data of the package 52B, and the XOR result is XOR'd with the data of the package 52C, and so on until the second-to-last XOR result is XOR'd with the data of the package 52H. The last XOR result is considered the parity data and is stored into a package 52I. Since each bit of the packages 52A-52H is XOR'd with its corresponding bit of the subsequent package 52, the ending size of the parity data is the same size as the segment of data stored in the packages 52. Thus, in this example, the parity data stored on the package 52I may equal 512 bytes (equal to the size of the individual segments of data backed up through the package-level RAIX scheme) and the package 52I may have the capacity to store 512 bytes, the same as the other packages 52. As described earlier, if any portion of a respective non-volatile memory device 32 malfunctions and data loss occurs, the parity data stored in the package 52I may be used to recreate the lost data (e.g., by substituting the parity data in the XOR logical operation to recreate the lost data).
- To help illustrate, the basic logical properties of XOR (exclusive-or, or the XOR logical function) are that it outputs a logical high (e.g., 1) if a first input is a logical low and a second input is a logical high, or vice versa (e.g., 0 is the first input and 1 is the second input, or 1 is the first input and 0 is the second input), but outputs a logical low if both the first input and the second input are either a logical high or a logical low (e.g., 0 is the first and second input, or 1 is the first and second input). This output relationship may be leveraged to back up data stored in the various
non-volatile memory devices 32, as described above. As a simplified example, if thepackage 52A stores 111 and thepackage 52B stores 000, the package-level RAIX scheme operates to back-uppackage package 52A is XOR'd with thepackage 52B to create the parity data. The XOR result of 111 XOR 000 is 111. In the event that the data of thepackage 52A was lost, this parity data, 111, may be XOR'd with the data of thepackage 52B to recreate the data of thepackage 52A—that is, 111 XOR 000 equals 111. If thepackage 52A stores 101 and thepackage 52B stores 110, the parity data equals 011. Ifpackage 52B were to experience data loss, 011 XOR 101 recreates the data of thepackage 52B and equals 110. - However, since the data of the package 52 may be the smallest granularity used in the XOR logical operation, any smaller groupings of data creating the packages 52, such as individual memory die of the
non-volatile memory device 32, may not be able to be separately recreated. For example, a memory die may malfunction and the rest of the package 52 may function as desired, but because the parity data represents the XOR result of the packages 52, the whole package 52 is recreated from the parity data to save the lost data from the physical malfunction of the memory die. In actual operation, it is unlikely that a whole package 52 of thenon-volatile memory device 32 experiences data loss. In fact, this depicted package-level RAIX scheme overprovisions and uses more memory to store the parity data in the package 52I than the amount of memory sufficient to protect data of thememory module 24. - The depicted package-level RAIX scheme follows an 8:1 protection ratio (e.g., eight
packages 52A-52H storing data backed up by one package 52I storing parity data). This protection ratio translates into 12.5% overprovisioning (e.g., ⅛) of the packages 52. In general, the amount of overprovisioning correlates to RAIX scheme efficiency; in other words, the lower the percentage of overprovisioning, the less memory is used to provide memory module 24 data protection. Instead, it is more likely that a non-volatile memory device 32 experiences data loss at a memory die level (not depicted in FIG. 3). Thus, a RAIX scheme that protects against data loss at the memory die level is more applicable to normal operation of the computing system 10. - To help illustrate the differences between package-level and die-level RAIX schemes,
FIG. 4 depicts a block diagram of an example of a die-level RAIX scheme. Generally, FIG. 4 depicts a second embodiment of a memory module 24, memory module 24B, that includes nine non-volatile memory devices 32, each represented as storing a particular amount of data in a memory die 58. It should be appreciated that the depicted example is merely intended to be illustrative and not limiting. In fact, in some embodiments, RAIX schemes may be implemented using more or fewer than nine non-volatile memory devices 32, using more or fewer than eighteen channels, and may include components of any suitable size. - The
memory module 24B follows a die-level RAIX scheme where each package 52 is divided into memory die 58 that each store a 256-byte segment of data. Using the individual memory die 58, instead of the individual packages 52, to determine the parity data decreases the overprovisioning from 12.5% (e.g., ⅛) to about 5.8% (e.g., 1/17). This separation, however, may increase circuit complexity because an increased number of signal routings, components, and/or pins may be used to provide the increased number of channels. The increased design complexity may also increase manufacturing and/or design costs associated with memory module 24 production. Furthermore, increasing the number of signal routings (e.g., channels) may also decrease signal integrity, for example, from signal interference. Thus, a scheme that balances these trade-offs with the overall level of overprovisioning may be desirable for some embodiments, while other embodiments may implement memory module 24B. - To illustrate this compromise,
FIG. 5 depicts a block diagram of a second example of a die-level RAIX scheme. This third embodiment of the memory module 24, memory module 24C, includes a Z number of non-volatile memory devices 32, each represented as storing a particular amount of data in a package 52, where the package 52 is separated into multiple memory die 58. It should be appreciated that the depicted example is merely intended to be illustrative and not limiting. In fact, in some embodiments, die-level RAIX schemes may be implemented using any number of memory die 58 per non-volatile memory device 32. - In the depicted die-level RAIX scheme, the packages 52 from
FIG. 3 are generally divided into separate memory die 58. For example, memory die 58A1, 58B1, . . . , 58X1 are stored on the same non-volatile memory device 32A and the same package 52A. During operation, the memory controller 30 and/or the processing circuitry 22 may operate to protect the memory module 24C data via the depicted asymmetric die-level RAIX scheme. In the die-level RAIX scheme, each memory die 58 respectively undergoes the XOR logical operation to create the parity data, as opposed to the whole package 52 undergoing the XOR logical operation. The resulting parity data is stored in the memory die 58XZ of non-volatile memory device 32Z. It should be noted that while the parity data is depicted as stored in the last memory die 58XZ, there is no restriction on which memory die 58 the parity data is to be stored in. That is, for example, the parity data may be stored in a memory die 58AZ or on memory die 58A1. Because the parity data fits on a single memory die 58, less memory may be allocated for storing it; the memory die 58XZ alone may serve the same purpose as the whole package 52 used to support the package-level RAIX scheme of FIG. 3. The remaining memory die of the non-volatile memory device 32Z may be allocated as spare memory, where the spare memory die 58AZ, 58BZ, . . . , 58CZ may be used for operational overflow, additional data storage, information used by the memory controller 30 and/or processing circuitry 22 to translate logical addresses into physical addresses, and the like. Thus, the memory module 24C is an improvement over the memory module 24A, which had relatively high overprovisioning and no spare memory, and over the memory module 24B, which has no spare memory and high design complexity.
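The die-level parity arrangement described above can be sketched in Python; the die names, contents, and segment sizes below are illustrative assumptions, not values from the patent:

```python
from functools import reduce

DIE_BYTES = 256  # assumed die segment size, per the FIG. 4 discussion

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length segments."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical data dies spread across several packages (contents made up).
data_dies = {f"58{pkg}{n}": bytes([ord(pkg) + n]) * DIE_BYTES
             for pkg in "ABC" for n in (1, 2)}

# Die-level parity: chain-XOR every data die; the result fits in one die.
parity_die = reduce(xor_bytes, data_dies.values())
assert len(parity_die) == DIE_BYTES

# Recovery: XOR every surviving die with the parity die to rebuild a lost one.
lost = "58B2"
survivors = [d for name, d in data_dies.items() if name != lost]
rebuilt = reduce(xor_bytes, survivors + [parity_die])
assert rebuilt == data_dies[lost]
```

Recovery works because XOR-ing the parity with every surviving die cancels their contributions, leaving only the lost die's data.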
- Dividing the packages 52 into the memory die 58 for purposes of redundancy creates an overprovisioning of about 6.25% (e.g., 1/16), which is a decrease from the 12.5% (e.g., ⅛) overprovisioning of
memory module 24A and an increase from the 5.8% (e.g., 1/17) overprovisioning of memory module 24B. Despite the small increase in overprovisioning relative to memory module 24B, the die-level RAIX scheme is an improvement over package-level RAIX schemes due to its simplicity of design and the minimal overprovisioning of memory used to support the redundancy or protection. - In general, during computing operations, the
client device 12 receives inputs from users or other components and, in response to the inputs, requests that the memory controller 30 of the memory module 24C facilitate performing memory operations. The client device 12 may issue these requests as commands and may indicate a logical address from which to retrieve, or at which to store, the corresponding data. The client device 12, however, is unaware of the true physical address where the corresponding data is stored, since data is sometimes divided and stored in a multitude of locations referenced via one logical address. The memory controller 30 may receive these commands and translate the logical addresses into physical addresses to appropriately access stored data. - Upon determining the physical address for the corresponding data, the
memory controller 30 may operate to read the data stored in each respective memory die 58 or to write data to each respective memory die 58. The memory controller 30 may also parse or interpret data stored in each respective memory die 58 as part of this read/write operation to complete the operation requested by the client device 12. These operations are performed by transmitting segments of data through channels communicatively coupling the non-volatile memory device 32 to the memory controller 30. - The
memory controller 30, or other suitable processing circuitry, may facilitate updating the parity data stored in the memory die 58. To do this, the data to be stored in each memory die 58 is XOR'd with the data of the subsequent memory die 58 until each memory die 58 is reflected in the parity data. The memory controller 30, or the other suitable processing circuitry, may also facilitate verifying the quality of data stored in the memory die 58. In some embodiments, the memory controller 30 may re-XOR the data in the memory die 58 to verify that the resulting parity data is the same as the stored parity data. If an error is detected (e.g., the parity data is not the same and thus was determined based on defective data), this may mean that a memory die 58 is physically malfunctioning, that a data reading or writing error occurred, or the like. The memory controller 30 may perform these redundancy operations in response to an event or a control signal, in response to performing a reading or writing operation, in response to a defined amount of time passing (e.g., data in the memory die 58, including the parity data, is refreshed periodically), or in response to any other suitable indication or event. - As described above, the depicted components of the
computing system 10 may be used to perform memory operations. In some embodiments, the die-level RAIX scheme is integrated into the memory operation control flow. In other embodiments, the die-level RAIX scheme is performed in response to a particular indication, signal, or event, at periodic or defined time intervals, or the like. However, in certain embodiments, the die-level RAIX scheme is performed both at certain times during memory operations and in response to a control signal. Thus, it should be understood that die-level RAIX schemes may be incorporated into memory operations in a variety of ways. - To help illustrate,
FIG. 6 depicts an example of a process 74 for controlling memory operations and die-level RAIX back-up schemes of a memory module 24. Generally, the process 74 includes the memory controller 30 waiting for a memory operation request from the host (e.g., processing circuitry 22 and/or client device 12) (process block 76), receiving a memory operation request from the host (process block 78), and determining if the memory operation request corresponds to a data read event (decision block 80). In response to the memory operation request not corresponding to a data read event, the memory controller 30 may update the parity data, append parity bits to a segment of data for writing, and write the segment of data (process block 82), where, upon completion of the writing, the memory controller 30 may wait for additional memory operation requests from the host (process block 76). However, in response to the memory operation request corresponding to a data read event, the memory controller 30 may read a segment of data from a corresponding memory address (process block 84) and determine if a data error occurred (decision block 86). In response to determining that a data error did not occur, the memory controller 30 may wait for additional memory operation requests from the host (process block 76); however, in response to determining that a data error did occur, the memory controller 30 may attempt to resolve the error using error correction code (ECC) techniques (process block 88) and determine whether the data error is eliminated (decision block 90). In response to determining that the data error is eliminated, the memory controller 30 may send the read data to the host (process block 92) and proceed to wait for additional memory operation requests from the host (process block 76). 
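The ECC attempt at process block 88 can be illustrated with a small block code. The following Hamming(7,4) sketch is an assumed example, not a code the patent specifies; it corrects any single flipped bit in a seven-bit codeword:

```python
# Hamming(7,4) single-error-correcting block code. Bit positions 1..7,
# with parity bits at positions 1, 2, and 4 (a textbook layout, assumed here).
def hamming_encode(d3, d5, d6, d7):
    p1 = d3 ^ d5 ^ d7          # covers positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7          # covers positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7          # covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]   # codeword, positions 1..7

def hamming_correct(code):
    c = code[:]                            # positions 1..7 at indices 0..6
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4        # nonzero syndrome names the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1               # flip the single erroneous bit
    return c

word = hamming_encode(1, 0, 1, 1)
corrupted = word[:]
corrupted[4] ^= 1                          # flip data bit d5 (position 5)
assert hamming_correct(corrupted) == word  # single-bit error eliminated
```

If two bits flip, a code this small cannot correct the word, which corresponds to decision block 90 finding the error not eliminated and the process falling through to die-level recovery.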
However, in response to determining that the data error is not eliminated, the memory controller 30 may determine the faulty memory die 58 (process block 94), use an XOR logical operation to recover the lost data of the faulty memory die 58 (process block 96), send the recovered data to the host (process block 92), and proceed to wait for an additional memory operation request from the host (process block 76). - In any case, as described above, a
memory controller 30 may wait for a memory operation request from its host device (process block 76). In this way, the memory controller 30 may be idle, not performing memory operations (e.g., read, write), in between read or write access events initiated by the host device. - The
memory controller 30 may receive a memory operation request from the host (process block 78) and may perform memory operations in response to the received memory operation request. In some embodiments, the memory operation request may identify the requested data block 28 or segment of data by a corresponding logical address. As described above, when the data is identified by a logical address, a memory controller 30 may convert the logical address into a physical address. This physical address indicates where the data is actually stored in the memory module 24. For example, the memory controller 30 may use an address map, a look-up table, an equation conversion, or any other suitable method to convert the logical address to a physical address. The processing circuitry 22 receives the various memory operation requests via communication with the client device 12; however, in some embodiments, the processing circuitry 22 may initiate various memory operation requests independent of the client device 12. These memory operation requests may include requests to retrieve, or read, data from one or more of the non-volatile memory devices 32 or requests to store, or write, data into one or more of the non-volatile memory devices 32. In this way, during memory operations, the memory controller 30 may receive a logical address from the host, may translate the logical address into a physical address indicating where the corresponding data is to be stored (e.g., writing operations) or is stored (e.g., reading operations), and may operate to read or write the corresponding data based on the corresponding physical address. - In response to the memory operation request, the
memory controller 30 may determine if the memory operation request corresponds to a data read event (decision block 80). The memory controller 30 may check for changes to data stored in the non-volatile memory devices 32 and/or may operate by assuming that data stored in the non-volatile memory devices 32 changes after each data write. Thus, the memory controller 30 generally determines whether a data write event occurred, where the data write event changes data stored in any one of the memory die 58. This determination is performed to facilitate keeping the parity data stored in the memory die 58 relevant and/or accurate. - If the memory operation request corresponds to a data write event (e.g., not a data read event), the
memory controller 30 may append parity bits to the segment of data to be written and may write the segment of data to memory (process block 82). These parity bits may be used in future error correcting code operations to resolve minor transmission errors (e.g., process block 88). In addition, the memory controller 30 may update the parity data to reflect the changed segment of data. The memory controller 30 of the memory module 24 may apply the XOR logical operation to each of the memory die 58 and may store the XOR result as the updated parity data in a parity data memory die 58 (e.g., memory die 58XZ). In some embodiments, the memory controller 30 may include data of the spare memory in the XOR logical operation, such that the XOR result represents the XOR of each memory die 58 and the data stored in the spare memory. It should be noted that, in some embodiments, the memory controller 30 updates the parity data in response to receiving an indication created by a timer tracking minimum parity data update intervals, or an indication transmitted from the client device 12 requesting the update of the parity data. In these embodiments, it may be desirable for the memory controller 30 to update the parity data more frequently than just in response to data write operations; thus, by determining whether the memory operation request corresponds to a data read event, the memory controller 30 may update the parity data in response to each memory operation request except those corresponding to a data read event, including, for example, requests based on tracked time intervals. Upon appending the parity bits and writing the segment of data to memory, the memory controller 30 may wait to receive an additional memory operation request from the host (process block 76). - However, in response to determining that the memory operation request corresponds to a data read event, the
memory controller 30 may read a segment of data at a corresponding memory address (process block 84). The memory operation request includes a logical address at which a desired segment of memory is stored. The memory controller 30 may retrieve the desired segment of memory at the indicated logical address in response to the memory operation request (e.g., by referencing a converted physical address and operating to retrieve the segment of data from the corresponding memory die 58). - After reading the segment of data, the
memory controller 30 may determine if the data is correct (e.g., not defective) (decision block 86). The memory controller 30 may perform various data verification techniques to confirm the data is correct by verifying that the data read is the same as the data initially represented in the parity data stored on the memory die 58. These data verification techniques may facilitate the detection of both physical and digital defects associated with the memory module 24, including issues such as data writing errors, mechanical defects associated with the physical memory die 58, mechanical defects associated with the non-volatile memory device 32, and the like. To verify the data, for example, the memory controller 30 may use XOR verification to determine if the data read in response to the data read event is uncorrupted and correct. To do this, the memory controller 30 of the memory module 24 may XOR the data of each memory die 58 (and, in some embodiments, the data of the spare memory as well) to determine an additional XOR result. Upon calculating the additional XOR result, the memory controller 30 may determine if the XOR results are the same. The memory controller 30 of the memory module 24 may compare the additional XOR result to the parity data stored in the memory die 58 to determine if the XOR results are equal or substantially similar (e.g., within a threshold of similarity such that the results are considered equal). - In response to determining that the XOR results are the same and thus the read data is correct (e.g., no data error was found), the
memory controller 30 may proceed to wait for an additional memory operation request from the host (process block 76). However, in response to determining that the XOR results are not the same and thus the read data is incorrect (e.g., a data error was found), the memory controller 30 may attempt to resolve the data error with error correcting code (ECC) techniques (process block 88). Error correcting code techniques may include adding redundant parity data to a segment of data such that, upon reading, the original segment of data may still be recovered even if minor data corruption occurs. There are a wide variety of valid ways to perform this preliminary quality control step to verify that the data error is not caused by a minor transmission issue, such as convolutional code and block code methods. - After attempting to resolve the data error with error correcting code techniques, the
memory controller 30 may determine if the data error has been eliminated by the correction (decision block 90). If the memory controller 30 determines that the error has been eliminated after implementing the error correcting code techniques, the memory controller 30 may send the read data to the host device (process block 92) for further processing and/or use in computing activities. After transmission of the read data, the memory controller 30 waits for an additional memory operation request from the host (process block 76). - However, if the
memory controller 30 determines that the data error is not eliminated (e.g., the error does not equal zero), the memory controller 30 may proceed to determine which of the memory die 58 is defective or faulty (process block 94). The memory controller 30 may perform various determination activities to determine which memory die 58 is faulty, such as systematically testing the memory die 58 responses to test write or read operations. Furthermore, in some embodiments, the memory controller 30 may communicate the data error to the client device 12 and receive an indication from the host, such as an indication originating from a user of the client device 12, communicating which memory die 58 is defective or faulty. - When the
memory controller 30 determines which memory die 58 is faulty, the memory controller 30 may use the parity data to recover the data lost due to the faulty memory die 58 (process block 96). The memory controller 30 may recover the lost data by performing an inverse of the XOR logical operation. That is, the memory controller 30 may XOR each of the memory die 58, excluding the faulty memory die 58 and including the parity data in its place. Assume, for example, that a memory die 58A2 is faulty. In this example, the memory controller 30 XORs all of the memory die 58 except the faulty memory die 58A2, substituting the parity data for the data of the memory die 58A2, to recreate the lost data of the memory die 58A2 (e.g., the data of memory die 58A1 XOR'd with the data of memory die 58B2 XOR'd with the parity data recreates the lost data of the memory die 58A2). Furthermore, in some embodiments, the memory controller 30 performs this recovery operation in response to receiving a proceed indication from the processing circuitry 22, or other suitable processing circuitry. In this way, in these embodiments, the memory controller 30 may wait to recover the lost data until a physical repair is performed. - Upon recovering the lost data, the
memory controller 30 may transmit the recovered data to the host (process block 92) and proceed to wait for an additional memory operation request (process block 76). The memory controller 30 may continue the process 74 to keep the parity data up to date, to monitor the quality of data stored within the non-volatile memory devices 32, and/or to perform recovery operations in the event of data loss. - Thus, technical effects of the present disclosure include facilitating improved redundancy operations to protect against data loss at a die-level, or memory-die-sized, granularity. These techniques describe systems and methods for performing XOR logical operations to create parity data, to verify data integrity or quality, and to recover data in the event of data loss, all at the die level instead of the package level. These techniques also provide for one or more additional spare memory die, an improvement over package-level redundancy operations.
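As a summary sketch, the die-level portions of the process 74 (parity update on write, XOR verification on read, and XOR recovery) might be modeled as follows. The class, the segment sizes, and the simplifying assumption that the requested die is the faulty one are illustrative, not taken from the patent:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class DieLevelRAIX:
    """Toy in-memory model of the die-level XOR scheme (illustrative only)."""

    def __init__(self, num_dies: int, die_bytes: int = 4):
        self.dies = [bytes(die_bytes) for _ in range(num_dies)]
        self.parity = bytes(die_bytes)

    def write(self, die: int, data: bytes) -> None:
        # process block 82: store the segment, then refresh the parity die
        self.dies[die] = data
        self.parity = reduce(xor_bytes, self.dies)

    def read(self, die: int) -> bytes:
        # decision block 86: re-XOR all dies and compare with stored parity
        if reduce(xor_bytes, self.dies) == self.parity:
            return self.dies[die]
        # process blocks 94/96: assume the requested die is the faulty one and
        # rebuild it from the surviving dies plus the parity (XOR substitution)
        survivors = [d for i, d in enumerate(self.dies) if i != die]
        return reduce(xor_bytes, survivors + [self.parity])

# Usage: write three segments, corrupt one die, and read it back intact.
raix = DieLevelRAIX(num_dies=3)
for i, seg in enumerate([b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd",
                         b"\x01\x02\x03\x04"]):
    raix.write(i, seg)
good = raix.dies[1]
raix.dies[1] = b"\x00\x00\x00\x00"           # simulate a die failure
assert raix.read(1) == good                  # recovered via XOR parity
```

Here the faulty die is assumed to be the one being read; a real controller would first isolate the faulty die (process block 94), for example by systematic test reads and writes.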
- The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
- The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/041,204 US10628258B2 (en) | 2018-07-20 | 2018-07-20 | Die-level error recovery scheme |
CN201910432583.8A CN110737539B (en) | 2018-07-20 | 2019-05-23 | Die level error recovery scheme |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200026600A1 true US20200026600A1 (en) | 2020-01-23 |
US10628258B2 US10628258B2 (en) | 2020-04-21 |
Family
ID=69161288
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220137835A1 (en) * | 2020-10-30 | 2022-05-05 | Kioxia Corporation | Systems and methods for parity-based failure protection for storage devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICRON TECHNOLOGY, INC., IDAHO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BASU, RESHMI;REEL/FRAME:046599/0759 Effective date: 20180719 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A.., AS COLLATERAL AGENT, ILLINOIS Free format text: SUPPLEMENT NO. 1 TO PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:047630/0756 Effective date: 20181015 Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND Free format text: SUPPLEMENT NO. 10 TO PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:048102/0420 Effective date: 20181015 |
|
AS | Assignment |
Owner name: MICRON TECHNOLOGY, INC., IDAHO Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:050719/0550 Effective date: 20190731 |
|
AS | Assignment |
Owner name: MICRON TECHNOLOGY, INC., IDAHO Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:051028/0835 Effective date: 20190731 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |