US20200104068A1 - Data erasure in memory sub-systems - Google Patents

Data erasure in memory sub-systems Download PDF

Info

Publication number
US20200104068A1
US20200104068A1 (application US16/148,564)
Authority
US
United States
Prior art keywords
block
erase
blocks
retired
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/148,564
Other versions
US10628076B1 (en)
Inventor
Kevin R. Brandt
Thomas Cougar Van Eaton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micron Technology Inc filed Critical Micron Technology Inc
Priority to US16/148,564 (granted as US10628076B1)
Assigned to JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT: Supplement No. 3 to Patent Security Agreement. Assignors: MICRON TECHNOLOGY, INC.
Assigned to MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT: Supplement No. 12 to Patent Security Agreement. Assignors: MICRON TECHNOLOGY, INC.
Priority to CN202210692044.XA (published as CN115048054A)
Priority to EP19869194.1A (published as EP3861428A4)
Priority to PCT/US2019/053590 (published as WO2020072321A1)
Priority to KR1020217013274A (published as KR20210057192A)
Priority to CN201980072847.1A (published as CN112997139B)
Assigned to MICRON TECHNOLOGY, INC.: Assignment of assignors interest (see document for details). Assignors: BRANDT, Kevin R.; VAN EATON, Thomas Cougar
Assigned to MICRON TECHNOLOGY, INC.: Release by secured party (see document for details). Assignors: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT
Assigned to MICRON TECHNOLOGY, INC.: Release by secured party (see document for details). Assignors: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT
Priority to US16/824,335 (granted as US11237755B2)
Publication of US20200104068A1
Publication of US10628076B1
Application granted
Priority to US17/529,908 (granted as US11775198B2)
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0894 Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652 Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/78 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/064 Management of blocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0688 Non-volatile semiconductor memory arrays
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C16/00 Erasable programmable read-only memories
    • G11C16/02 Erasable programmable read-only memories electrically programmable
    • G11C16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C16/10 Programming or data input circuits
    • G11C16/102 External programming circuits, e.g. EPROM programmers; In-circuit programming or reprogramming; EPROM emulators
    • G11C16/105 Circuits or methods for updating contents of nonvolatile memory, especially with 'security' features to ensure reliable replacement, i.e. preventing that old data is lost before new data is reliably written
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C16/00 Erasable programmable read-only memories
    • G11C16/02 Erasable programmable read-only memories electrically programmable
    • G11C16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C16/10 Programming or data input circuits
    • G11C16/14 Circuits for erasing electrically, e.g. erase voltage switching circuits
    • G11C16/16 Circuits for erasing electrically, e.g. erase voltage switching circuits, for erasing blocks, e.g. arrays, words, groups
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C16/00 Erasable programmable read-only memories
    • G11C16/02 Erasable programmable read-only memories electrically programmable
    • G11C16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C16/34 Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C16/3436 Arrangements for verifying correct programming or erasure
    • G11C16/3468 Prevention of overerasure or overprogramming, e.g. by verifying whilst erasing or writing
    • G11C16/3472 Circuits or methods to verify correct erasure of nonvolatile memory cells whilst erasing is in progress, e.g. by detecting onset or cessation of current flow in cells and using the detector output to terminate erasure
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0891 Revocation or update of secret information, e.g. encryption key update or rekeying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7204 Capacity control, e.g. partitioning, end-of-life degradation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03 Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/034 Test or assess a computer or a system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2143 Clearing memory, e.g. to prevent the data from being stolen
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C29/00 Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/04 Detection or location of defective memory elements, e.g. cell construction details, timing of test signals
    • G11C29/08 Functional testing, e.g. testing during refresh, power-on self testing [POST] or distributed testing
    • G11C29/12 Built-in arrangements for testing, e.g. built-in self testing [BIST] or interconnection details
    • G11C2029/4402 Internal storage of test result, quality data, chip identification, repair information

Definitions

  • the memory system controller 115 can communicate with the memory components 112 A to 112 N to perform operations such as reading data, writing data, or erasing data at the memory components 112 A to 112 N and other such operations.
  • the controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof.
  • the controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
  • the controller 115 can include a processor (processing device) 117 configured to execute instructions stored in local memory 119 .
  • the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110 , including handling communications between the memory sub-system 110 and the host system 120 .
  • the local memory 119 can include memory registers storing memory pointers, fetched data, etc.
  • the local memory 119 can also include read-only memory (ROM) for storing micro-code.
  • a memory sub-system 110 may rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system) to perform some or all of the management of the memory sub-system 110 .
  • the controller 115 may be omitted.
  • the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112 A to 112 N.
  • the controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address and a physical address that are associated with the memory components 112 A to 112 N.
  • the controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface.
  • the host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 112 A to 112 N as well as convert responses associated with the memory components 112 A to 112 N into information for the host system 120 .
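  • As an illustration of the address-translation responsibility mentioned above, the following is a minimal C sketch of a flat logical-to-physical lookup table; the table size, field layout, and function names are assumptions made for illustration and are not defined by this disclosure.

      #include <stdint.h>
      #include <stdio.h>

      #define LBA_COUNT   1024u         /* illustrative number of logical block addresses */
      #define PPA_INVALID 0xFFFFFFFFu   /* marker for an unmapped logical address */

      /* Hypothetical flat logical-to-physical map; a real controller would use a
         multi-level, persisted table, but the lookup idea is the same. */
      static uint32_t l2p_table[LBA_COUNT];

      static void l2p_init(void) {
          for (uint32_t lba = 0; lba < LBA_COUNT; lba++)
              l2p_table[lba] = PPA_INVALID;        /* nothing mapped yet */
      }

      /* Record where a logical address currently lives on the media. */
      static void l2p_update(uint32_t lba, uint32_t ppa) {
          if (lba < LBA_COUNT)
              l2p_table[lba] = ppa;
      }

      /* Resolve a host logical address to a physical address, if mapped. */
      static uint32_t l2p_lookup(uint32_t lba) {
          return (lba < LBA_COUNT) ? l2p_table[lba] : PPA_INVALID;
      }

      int main(void) {
          l2p_init();
          l2p_update(7u, 0x00020005u);             /* e.g., logical 7 -> block 2, page 5 */
          printf("logical 7 maps to physical 0x%08X\n", (unsigned)l2p_lookup(7u));
          return 0;
      }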
  • the memory sub-system 110 can also include additional circuitry or components that are not illustrated.
  • the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive a logical address from the controller 115 and decode the logical address to one or more physical addresses at the memory components 112 A to 112 N.
  • the memory sub-system 110 of FIG. 1 executes an example sanitize operation 130 .
  • the sanitize operation 130 is directed to a set of blocks 136 A, 136 B, 136 C, 136 D, 136 E, 136 F, 136 G, 136 H, 136 I, 136 J, 136 K, 136 L. Although twelve blocks are shown, the sanitize operation 130 may be directed to any suitable number of blocks. In some examples, the sanitize operation 130 is directed to all blocks at a memory component 112 A, 112 B or all blocks in the memory sub-system 110.
  • in the example of FIG. 1 , a first portion of the blocks is unretired, including example blocks 136 A, 136 C, 136 D, 136 F, 136 G, 136 I, 136 J, 136 K, and a second portion of the blocks is retired, including example blocks 136 B, 136 E, 136 H, 136 L.
  • the retired blocks 136 B, 136 E, 136 H, 136 L may have been retired for any suitable reason including, for example, a failure to write, a failure to read, a failure to erase, etc.
  • the sanitize operation 130 includes erase cycles 132 .
  • the erase cycles 132 may include one erase cycle for each block 136 A, 136 B, 136 C, 136 D, 136 E, 136 F, 136 G, 136 H, 136 I, 136 J, 136 K, 136 L of the set of blocks.
  • the memory sub-system 110 sets the erase flag (or other indicator) 138 with a set flag operation 134 . For example, if each of the erase cycles 132 were successful, then the erase flag 138 is set to true. If any of the erase cycles 132 were unsuccessful, then the erase flag 138 is set to indicate that the erase was unsuccessful.
  • the erase flag 138 may be available to the host system 120 or other user of the memory sub-system 110 in various ways.
  • the erase flag 138 may be provided to the host system, for example, in response to a status request command.
  • the erase flag 138 may be provided by itself or as part of a status page or word.
  • the status request command may be generic or, in some examples, may be specific to the manufacturer of the memory sub-system 110 .
  • the erase flag 138 is stored at local memory 119 and/or a memory component 112 A, 112 B as a log page.
  • the log page may be accessible to the host system 120 and/or the controller 115 with a read request.
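  • As a hedged illustration of how such an erase flag might be surfaced to the host as a log page, the following C sketch stores the flag in a small fixed-size page and copies it out on a status or log read request; the page layout, size, and function names are assumptions and not a format defined by this disclosure or by any host interface standard.

      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>
      #include <stdio.h>

      /* Illustrative log page carrying the sanitize result; the layout is assumed. */
      struct sanitize_log_page {
          uint8_t erase_flag;     /* 1 = every erase cycle succeeded, 0 = at least one failed */
          uint8_t reserved[15];   /* padding so the page has a fixed size */
      };

      static struct sanitize_log_page g_sanitize_log;   /* could live in local memory 119 or on media */

      /* Called by the sanitize logic once all erase cycles have completed. */
      static void record_sanitize_result(bool all_erases_ok) {
          memset(&g_sanitize_log, 0, sizeof(g_sanitize_log));
          g_sanitize_log.erase_flag = all_erases_ok ? 1u : 0u;
      }

      /* Handler for a host status or log read request: copy the page to the host buffer. */
      static void read_sanitize_log(uint8_t *dst, size_t len) {
          size_t n = len < sizeof(g_sanitize_log) ? len : sizeof(g_sanitize_log);
          memcpy(dst, &g_sanitize_log, n);
      }

      int main(void) {
          uint8_t buf[16];
          record_sanitize_result(false);            /* e.g., one retired block failed to erase */
          read_sanitize_log(buf, sizeof(buf));
          printf("erase flag reported to host: %u\n", (unsigned)buf[0]);
          return 0;
      }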
  • the sanitize operation 130 includes setting one or more indicators (e.g., flags) such as, for example, a retired block erase flag and an unretired block erase flag.
  • the retired block erase flag is set to true if all retired blocks have been successfully erased.
  • the retired block erase flag is set to false if any retired blocks have not been successfully erased.
  • the unretired block erase flag is set to true if all unretired blocks have been successfully erased.
  • the unretired block erase flag is set to false if any unretired blocks have not been successfully erased.
  • the local memory 119 of the controller 115 includes instructions for executing the sanitize operation 130.
  • the sanitize operation 130 is executed by the controller 115 .
  • some or all of the logic for executing the sanitize operation can be executed at various other locations.
  • some or all of the logic for executing the sanitize operation 130 is included as code executed at the host system 120 .
  • some or all of the logic for executing the sanitize operation 130 is included, for example, as processing logic at one or more of the memory components 112 A, 112 B.
  • FIG. 2 is a flow diagram of an example method 200 to execute a sanitize operation, such as the sanitize operation 130 of FIG. 1 , in accordance with some embodiments of the present disclosure.
  • the sanitize operation is directed to one or more blocks at a memory component or memory components.
  • the sanitize operation is directed to all the blocks at a memory component and/or all the blocks at a memory media.
  • Memory blocks to which the sanitize operation is directed are referred to as blocks within the scope of the sanitize operation.
  • the method 200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
  • the processing logic deletes one or more cryptographic keys corresponding to the blocks within the scope of the sanitize operation.
  • data at some or all the blocks within the scope of the sanitize operation may be encrypted by the memory sub-system.
  • the memory sub-system manages cryptographic keys to encrypt and decrypt the data. For example, when the memory sub-system receives a write request including a data unit, it generates and/or retrieves a cryptographic key. The memory sub-system uses the cryptographic key to encrypt the data unit, resulting in an encrypted data unit.
  • the cryptographic key is stored at the local memory of the controller, at a memory component, or at another suitable location at the memory sub-system 110 .
  • the same cryptographic key may be used for multiple blocks or, in some examples, different cryptographic keys are used for different blocks.
  • the encrypted data unit is written to a block at a memory component.
  • the memory sub-system retrieves the appropriate cryptographic key and decrypts the encrypted data unit.
  • the decrypted data unit is then returned in response to the read request. For blocks that utilize encryption, deleting the cryptographic key or keys that correspond to the blocks within the scope of the sanitize operation increases the difficulty of recovering data from the blocks after the sanitize operation, even if the blocks are not successfully erased.
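  • The per-block key handling described above can be pictured with the following C sketch, in which deleting the keys for blocks within the sanitize scope is the first step of the operation; the key table, key size, and scope representation are assumptions made for illustration, and no real cipher is invoked.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>
      #include <string.h>

      #define BLOCK_COUNT 12u   /* matches the twelve example blocks 136A-136L */
      #define KEY_BYTES   32u   /* illustrative 256-bit per-block key */

      /* Hypothetical key table: one media-encryption key per block. A real
         sub-system might instead share one key across several blocks. */
      struct block_key {
          bool    valid;
          uint8_t key[KEY_BYTES];
      };

      static struct block_key key_table[BLOCK_COUNT];

      /* Scrub and invalidate a key, so ciphertext left in a block that cannot
         be erased can no longer be decrypted. */
      void delete_block_key(uint32_t block) {
          if (block >= BLOCK_COUNT)
              return;
          memset(key_table[block].key, 0, KEY_BYTES);
          key_table[block].valid = false;
      }

      /* Operation 202 (and its counterparts in FIGS. 3 and 5): drop every key
         within the sanitize scope before any erase cycles are attempted. */
      void delete_keys_in_scope(const uint32_t *blocks, size_t count) {
          for (size_t i = 0; i < count; i++)
              delete_block_key(blocks[i]);
      }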
  • the processing logic initiates an erase cycle for a first block within the scope of the sanitize operation.
  • the erase cycle may be executed in any suitable manner.
  • the host device and/or controller may send an erase command to the memory component including the block, causing the memory component to execute an erase cycle for the block.
  • the first block comprises NAND memory cells.
  • the NAND memory cells of the block may be erased by holding the sources of the respective memory cells to ground and raising the control gates to an erase voltage for a cycle time. After the cycle time, the memory component including the block returns an indication that the erase cycle was successful or unsuccessful.
  • the processing logic determines if the erase cycle for the block was successful. If the erase cycle was not successful, the processing logic changes the erase flag to false at operation 208 , indicating that not all blocks have been successfully erased. If the erase cycle was successful, the processing logic determines at operation 210 whether there are additional blocks in the scope of the sanitize operation that have not been subjected to an erase cycle. If no blocks remain, the processing logic returns the current erase flag value at operation 212 . If blocks do remain, then processing logic moves to the next block at operation 214 and returns to operation 204 to execute an erase cycle at the next block.
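  • A compact C sketch of the control flow of the method 200 follows; the nand_erase_block() and delete_keys_in_scope() hooks are stand-ins for the memory component's erase command and the key-deletion step, not interfaces defined by this disclosure.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>

      /* Placeholder for the memory component's erase command (operation 204);
         a real implementation would issue the erase and read back the
         component's pass/fail status indicator. */
      static bool nand_erase_block(uint32_t block) {
          (void)block;
          return true;                       /* pretend the erase cycle passed */
      }

      /* Placeholder for operation 202 (see the key-deletion sketch above). */
      static void delete_keys_in_scope(const uint32_t *blocks, size_t count) {
          (void)blocks; (void)count;
      }

      /* Method 200: erase every block in scope and return a single erase flag
         that remains true only if every erase cycle succeeded. */
      bool sanitize_method_200(const uint32_t *blocks, size_t count) {
          bool erase_flag = true;            /* assume success until a cycle fails */

          delete_keys_in_scope(blocks, count);          /* operation 202 */

          for (size_t i = 0; i < count; i++) {          /* operations 204-214 */
              if (!nand_erase_block(blocks[i]))
                  erase_flag = false;                   /* operation 208 */
              /* the loop continues so the remaining blocks are still erased */
          }
          return erase_flag;                            /* operation 212 */
      }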
  • FIG. 3 is a flow diagram of another example method 300 to execute a sanitize operation, such as the sanitize operation 130 of FIG. 1 , in accordance with some embodiments of the present disclosure.
  • the sanitize operation is directed to one or more blocks at a memory component or memory components.
  • the sanitize operation is directed to all the blocks at a memory component and/or all the blocks at a memory media.
  • Memory blocks to which the sanitize operation is directed are referred to as blocks within the scope of the sanitize operation.
  • the method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
  • the processing logic deletes one or more cryptographic keys for blocks within the scope of the sanitize operation, for example, similar to operation 202 above.
  • the processing logic executes an erase cycle for a first block within the scope of the sanitize operation.
  • the processing logic determines if the erase cycle was successful.
  • the processing logic attempts to read the block at operation 308 .
  • a block may be unreadable after an erase operation even if the erase operation failed.
  • the read at operation 308 checks whether the block is unreadable despite the failure of the erase cycle at operation 304 .
  • the block may be readable by page, where there are multiple pages in the block.
  • the reading of the block at operation 308 may include reading each page of the block. In some examples, the reading of the block at operation 308 includes reading less than all the pages of the block (e.g., every other page, every third page, etc.) or executing an operation that reads multiple pages simultaneously.
  • the processing logic determines if the read was successful at operation 310 .
  • the read is successful, for example, if the read produces a data unit without bit errors, for example, as determined by Error Correction Code (ECC) or other suitable technique.
  • the read may be considered successful if any of the pages of the block are successfully read. In some examples, the read is considered successful if less than a threshold number of pages of the block are successfully read. If the read is unsuccessful, it indicates that the block is effectively erased. Accordingly, if the read is unsuccessful, the processing logic proceeds to the next block at operation 316 without setting the erase flag to false. If the read is successful, the processing logic sets the erase flag to false at operation 312 .
  • the processing logic retires the block, for example, by storing an indication of the block to a retired blocks list.
  • the processing logic determines if there are additional blocks within the scope of the sanitize operation. If yes, then the processing logic proceeds to the next block at operation 318 and returns to operation 304 to execute an erase cycle at that block. If there are no more blocks within the scope of the sanitize operation, the processing logic returns the current erase flag value at operation 320 .
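  • The read-back check of the method 300 can be sketched in C as follows; the driver hooks and the retired-block list update are stand-ins assumed for illustration, not interfaces defined by this disclosure.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>

      /* Placeholder hooks standing in for the memory component driver and the
         retired-block bookkeeping. */
      static bool nand_erase_block(uint32_t block)    { (void)block; return true;  }
      static bool block_read_succeeds(uint32_t block) { (void)block; return false; }
      static void add_to_retired_list(uint32_t block) { (void)block; }

      /* Method 300: when an erase cycle fails, report failure only if the block
         can still be read back; an unreadable block is treated as effectively
         erased. Blocks that still hold readable data are retired (operation 314). */
      bool sanitize_method_300(const uint32_t *blocks, size_t count) {
          bool erase_flag = true;

          for (size_t i = 0; i < count; i++) {
              if (nand_erase_block(blocks[i]))          /* operations 304-306 */
                  continue;                             /* erase cycle succeeded */

              if (block_read_succeeds(blocks[i])) {     /* operations 308-310 */
                  erase_flag = false;                   /* operation 312 */
                  add_to_retired_list(blocks[i]);       /* operation 314 */
              }
              /* read failed: nothing recoverable remains, flag stays unchanged */
          }
          return erase_flag;                            /* operation 320 */
      }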
  • FIG. 4 is a flow diagram of an example method 400 to read a block for which an erase cycle has failed, in accordance with some embodiments of the present disclosure.
  • the method 400 is one example of a way that the processing logic can execute the operation 308 of the method 300 described herein.
  • the method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified.
  • the processing logic reads a first page at the block.
  • the processing logic determines if the read was a success, for example, based on an ECC or other method. If the read is a success, then the processing logic returns an indication that the read was successful at operation 406 . If the read was not successful, the processing logic determines at operation 408 whether there are additional pages at the block to be read. This may include determining whether there are any pages at the block that have not been read yet. If there are no more pages, the processing logic returns a read failure at operation 410 indicating that the block was not successfully read. If there are remaining pages, the processing logic proceeds to the next page at operation 412 and reads the next page at operation 402 .
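  • A C sketch of the page-by-page read of the method 400 is shown below; the page count and the single-page read hook are illustrative assumptions, and the block is reported readable as soon as any one page reads back cleanly.

      #include <stdbool.h>
      #include <stdint.h>

      #define PAGES_PER_BLOCK 64u   /* illustrative page count; real geometry varies */

      /* Placeholder for a single-page read with ECC checking; returns true only
         if the page decodes without uncorrectable bit errors. */
      static bool read_page_ok(uint32_t block, uint32_t page) {
          (void)block; (void)page;
          return false;             /* stand-in: pretend every page fails to read */
      }

      /* Method 400: the block counts as readable if any one of its pages can
         still be read after the failed erase cycle. */
      bool block_read_succeeds(uint32_t block) {
          for (uint32_t page = 0; page < PAGES_PER_BLOCK; page++) {   /* operations 402, 408, 412 */
              if (read_page_ok(block, page))
                  return true;      /* operation 406: at least one page survived */
          }
          return false;             /* operation 410: no page could be read */
      }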
  • FIG. 5 is a flow diagram of another example method 500 to execute a sanitize operation, such as the sanitize operation 130 of FIG. 1 , in accordance with some embodiments of the present disclosure.
  • the sanitize operation is directed to one or more blocks at a memory component or memory components.
  • the sanitize operation is directed to all the blocks at a memory component and/or all the blocks at a memory media.
  • Memory blocks to which the sanitize operation is directed are referred to as blocks within the scope of the sanitize operation.
  • the method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
  • the method 500 shows an example where the processing logic maintains two erase flags.
  • An unretired block erase flag corresponds to blocks that were unretired at the outset of the sanitize operation and a retired block erase flag corresponds to retired blocks.
  • the processing logic deletes one or more cryptographic keys corresponding to the blocks within the scope of the sanitize operation.
  • the processing logic initiates an erase cycle for a first block within the scope of the sanitize operation.
  • the processing logic determines if the erase cycle for the block was successful.
  • if the erase cycle was not successful, the processing logic changes the erase flag corresponding to the type of the block to false at operation 508 , indicating that not all blocks of that type have been successfully erased. For example, if the block was previously retired, the processing logic changes the retired block erase flag to false. If the block was previously unretired, the processing logic changes the unretired block erase flag to false. (In some examples, the processing logic reads the block before setting the appropriate flag to false, for example, as described herein with respect to FIGS. 3 and 4 .)
  • the processing logic determines at operation 510 whether there are additional blocks in the scope of the sanitize operation that have not been subjected to an erase cycle. If no blocks remain, the processing logic returns the current retired block erase flag value and unretired block erase flag value at operation 512 . If blocks do remain, then processing logic moves to the next block at operation 514 and returns to operation 504 to execute an erase cycle at the next block.
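  • The two-flag variant of the method 500 can be sketched in C as follows; the per-block descriptor and the result structure are assumptions made for illustration.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>

      /* Minimal description of one block within the sanitize scope. */
      struct scoped_block {
          uint32_t block;
          bool     retired;           /* true if the block was retired before the sanitize */
      };

      /* Separate pass/fail indicators for the two block populations. */
      struct sanitize_result {
          bool retired_erase_flag;    /* all previously retired blocks erased   */
          bool unretired_erase_flag;  /* all previously unretired blocks erased */
      };

      /* Placeholder erase hook, as in the earlier sketches. */
      static bool nand_erase_block(uint32_t block) { (void)block; return true; }

      /* Method 500: track erase failures separately for retired and unretired
         blocks (operations 504-514). */
      struct sanitize_result sanitize_method_500(const struct scoped_block *blocks, size_t count) {
          struct sanitize_result r = { true, true };

          for (size_t i = 0; i < count; i++) {
              if (nand_erase_block(blocks[i].block))
                  continue;                             /* erase cycle succeeded */
              if (blocks[i].retired)
                  r.retired_erase_flag = false;         /* operation 508, retired block */
              else
                  r.unretired_erase_flag = false;       /* operation 508, unretired block */
          }
          return r;                                     /* operation 512 */
      }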
  • FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed.
  • the computer system 600 can correspond to a host system (e.g., the host system 120 of FIG. 1 ) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1 ) or can be used to perform the operations of a controller (e.g., to execute an operating system to execute instructions 113 for executing a sanitize operation).
  • the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
  • the machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
  • the machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the example computer system 600 includes a processing device 602 , a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random-access memory (SRAM), etc.), and a data storage system 618 , which communicate with each other via a bus 630 .
  • Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein.
  • the computer system 600 can further include a network interface device 608 to communicate over the network 620 .
  • the data storage system 618 can include a non-transitory machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein.
  • the instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600 , the main memory 604 and the processing device 602 also constituting machine-readable storage media.
  • the machine-readable storage medium 624 , data storage system 618 , and/or main memory 604 can correspond to the memory sub-system 110 of FIG. 1 .
  • the instructions 626 include the instructions 113 to implement functionality corresponding to the sanitize operation, as described herein. While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • the present disclosure also relates to an apparatus for performing the operations herein.
  • This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure.
  • a machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer).
  • a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.

Abstract

Various examples are directed to memory systems comprising a memory component and a processing device. The memory system may comprise a plurality of blocks. A first portion of the plurality of blocks may be retired and a second portion of the plurality of blocks may be unretired. The processing device receives a sanitize operation for the plurality of blocks. The processing device initiates a first erase cycle at a first retired block of the plurality of blocks. The processing device determines that the first erase cycle was not successful and sets an erase indicator to false.

Description

    TECHNICAL FIELD
  • The present disclosure generally relates to memory sub-systems, and more specifically, relates to data erasure in memory sub-systems.
  • BACKGROUND
  • A memory sub-system can be a storage system, such as a solid-state drive (SSD), and can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.
  • FIG. 1 illustrates an example computing environment that includes a memory sub-system in accordance with some embodiments of the present disclosure.
  • FIG. 2 is a flow diagram of an example method to execute a sanitize operation, such as the sanitize operation of FIG. 1, in accordance with some embodiments of the present disclosure.
  • FIG. 3 is a flow diagram of another example method to execute a sanitize operation, such as the sanitize operation of FIG. 1, in accordance with some embodiments of the present disclosure.
  • FIG. 4 is a flow diagram of an example method to read a block for which an erase cycle has failed, in accordance with some embodiments of the present disclosure.
  • FIG. 5 is a flow diagram of another example method to execute a sanitize operation, such as the sanitize operation of FIG. 1, in accordance with some embodiments of the present disclosure.
  • FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
  • DETAILED DESCRIPTION
  • Aspects of the present disclosure are directed to a memory sub-system with erasure verification. A memory sub-system is also hereinafter referred to as a “memory system.” An example of a memory sub-system is a storage system, such as a solid-state drive (SSD). In some embodiments, the memory sub-system is a hybrid memory/storage sub-system. In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.
  • The memory sub-system can include multiple memory components that can store data from the host system. Different memory components can include different types of media. Examples of media include, but are not limited to, a cross-point array of non-volatile memory and flash-based memory cells. In many memory sub-systems, there is a distinction between deleting memory cells and sanitizing memory cells. A delete operation involves indicating at a controller of the memory sub-system and/or an operating system of a host machine that the affected memory cells are no longer in use. A delete operation does not include immediately erasing data stored at the memory cells. Deleted memory cells may be later erased and configured for re-use by periodic garbage collection operations. In garbage collection operations, memory cells including data that is no longer needed (e.g., deleted) are identified and erased. When it is desirable to erase data, a sanitize operation is executed. During a sanitize operation, the memory sub-system erases data at the indicated memory cells, for example, by changing the programmed state of some or all of the memory cells.
  • In many memory components, including those that include flash memory cells, a block or blocks of memory cells can be retired, for example, if the block fails to operate correctly. For example, a block can be retired if one or more pages at the block cannot be correctly written or read. Also, in some examples, a block can be retired if the block cannot be successfully erased. When a block is retired, it is no longer used to store data.
  • In different memory sub-systems, retired blocks are handled in different ways during delete and sanitize operations. For example, when a retired block is deleted, the block is indicated to be no longer in use. In some examples, when a retired block is deleted, garbage collection operations do not erase and configure the block for re-use as it does other blocks. For example, retired blocks may not be included in the garbage collection operation and may remain unused. Similarly, during a sanitize operation, retired blocks may be erased without being configured for re-use.
  • In conventional arrangements, however, a sanitize command does not provide an indication of whether blocks, including retired blocks, were successfully erased. Accordingly, the successful completion of a sanitize command may not provide an indication that all data at the relevant block or blocks has been erased. Consider an example block that was retired for failure to properly erase. The block may have been retired because a previous erase cycle at the block failed to complete within a threshold time. This is also referred to as timing out. During a sanitize operation, the retired block is subjected to an additional erase cycle. The additional erase cycle may be successful or unsuccessful, for example, depending on the reasons that the first erase cycle timed out. If the additional erase cycle fails, then data may be stored at the retired block, and potentially recoverable, despite the sanitize operation.
  • Aspects of the present disclosure address the above and other deficiencies by tracking the success or failure of the erase cycles at blocks of a memory component during a sanitize operation and generating an indication of whether any erase cycles were unsuccessful. The indication may include an erase indicator. If an erase cycle during the sanitize operation fails, the erase indicator is set to false. If all erase cycles during the sanitize operation are successful, the erase indicator is set to true. Any suitable erase indicator value can be selected to correspond to true or false. The value of the erase indicator corresponding to true is any value indicating that all erase cycles during the sanitize operation were successful. The value of the erase indicator corresponding to false is any value indicating that not all of the erase cycles during the sanitize operation were successful. For example, in some implementations, the erase indicator value corresponding to true can be logic “1” while the erase indicator value corresponding to false is logic “0.” In other implementations, the erase indicator value corresponding to true can be logic “0” while the erase indicator value corresponding to false is logic “1.” In other examples, the erase indicator values can include more than one bit. The erase indicator values for true and false are different from one another. For example, the erase indicator value for true is not equal to the erase indicator value for false.
  • The memory component, in some examples, provides a status indicator after an erase cycle is performed at a block. The status indicator provides an indication of whether the erase cycle was successful or unsuccessful. In some examples, in addition to or instead of relying on a status indicator provided by a memory component, the memory sub-system attempts read operations at some or all of the blocks that are the subject of the sanitize operation after erase cycles are executed. Attempting a read operation at a block can include attempting a read operation at some or all of the pages or other sub-components included at the block. If a read operation is successful, then the block was not successfully erased and the erase indicator is set to false. In some examples, the status indicator after an erase cycle is set based on an erase verify operation that can check that the pages of erased blocks are erased by reading groups of pages in a single operation or set of operations.
  • FIG. 1 illustrates an example computing environment 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure. The memory sub-system 110 can include media 121, such as memory components 112A to 112N. The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such. In some embodiments, the memory sub-system 110 is a storage system. An example of a storage system is a solid-state drive (SSD). In some embodiments, the memory sub-system 110 is a hybrid memory/storage sub-system. In general, the computing environment 100 can include a host system 120 that uses the memory sub-system 110. For example, the host system 120 can write data to the memory sub-system 110 and read data from the memory sub-system 110.
  • The host system 120 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or such computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory sub-system 110 so that the host system 120 can read data from or write data to the memory sub-system 110. The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory sub-system 110 is coupled with the host system 120 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120.
  • The memory components 112A to 112N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes a negative-and (NAND) type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells such as Single Level Cells (SLCs) or Multilevel Cells (MLCs). (MLCs refer generally to memory cells that store more than one bit of data, including two level cells, triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and a MLC portion of memory cells. Each of the memory cells can store one or more bits of data used by the host system 120 or memory sub-system 110. Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A to 112N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data.
  • The memory system controller 115 (hereinafter referred to as “controller”) can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code.
  • While the example memory sub-system 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory sub-system 110 may rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system) to perform some or all of the management of the memory sub-system 110. In examples where some or all of the management of the memory sub-system 110 are performed by an external host, the controller 115 may be omitted.
  • In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address and a physical address that are associated with the memory components 112A to 112N. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 112A to 112N as well as convert responses associated with the memory components 112A to 112N into information for the host system 120.
  • The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive a logical address from the controller 115 and decode the logical address to one or more physical addresses at the memory components 112A to 112N.
  • The memory sub-system 110 of FIG. 1 executes an example sanitize operation 130. The sanitize operation 130 is directed to a set of blocks 136A, 136B, 136C, 136D, 136E, 136F, 136G, 136H, 136I, 136J, 136K, 136L. Although twelve blocks are shown, the sanitize operation 130 may be directed to any suitable number of blocks. In some examples, the sanitize operation 130 is directed to all blocks at a memory component 112A, 112B or all blocks in the memory sub-system 110. In the example of FIG. 1, a first portion of the blocks are unretired, including example blocks 136A, 136C, 136D, 136F, 136G, 136I, 136J, 136K. A second portion of the blocks are retired, including example blocks 136B, 136E, 136H, 136L. The retired blocks 136B, 136E, 136H, 136L may have been retired for any suitable reason including, for example, a failure to write, a failure to read, a failure to erase, etc.
  • The sanitize operation 130 includes erase cycles 132. The erase cycles 132 may include one erase cycle for each block 136A, 136B, 136C, 136D, 136E, 136F, 136G, 136H, 136I, 136J, 136K, 136L of the set of blocks. Based on the results of the erase cycles 132, the memory sub-system 110 sets the erase flag (or other indicator) 138 with a set flag operation 134. For example, if each of the erase cycles 132 was successful, then the erase flag 138 is set to true. If any of the erase cycles 132 was unsuccessful, then the erase flag 138 is set to indicate that the erase was unsuccessful. The erase flag 138 may be available to the host system 120 or other user of the memory sub-system 110 in various ways. For example, the erase flag 138 may be provided to the host system 120, for example, in response to a status request command. The erase flag 138 may be provided by itself or as part of a status page or word. The status request command may be generic or, in some examples, may be specific to the manufacturer of the memory sub-system 110. Also, in some examples, the erase flag 138 is stored at local memory 119 and/or a memory component 112A, 112B as a log page. The log page may be accessible to the host system 120 and/or the controller 115 with a read request.
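For illustration only, the following Python sketch shows one way firmware might latch the erase flag 138 and expose it to a host through a log-page style status query, consistent with the description above. The names SanitizeStatus, ControllerModel, LOG_PAGE_SANITIZE, and read_log_page are hypothetical and do not correspond to any standardized or vendor command set.

```python
# Hypothetical sketch: latching the erase flag and exposing it via a log page.
# SanitizeStatus, ControllerModel, LOG_PAGE_SANITIZE, and read_log_page are
# illustrative names only, not a standardized command set.
from dataclasses import dataclass

LOG_PAGE_SANITIZE = 0xC0  # assumed vendor-specific log page identifier


@dataclass
class SanitizeStatus:
    erase_flag: bool = True  # remains true until any erase cycle fails


class ControllerModel:
    def __init__(self) -> None:
        self.status = SanitizeStatus()

    def record_erase_result(self, success: bool) -> None:
        # A single failed erase cycle latches the flag to false.
        if not success:
            self.status.erase_flag = False

    def read_log_page(self, page_id: int) -> dict:
        # Host-visible status request returning the current flag value.
        if page_id == LOG_PAGE_SANITIZE:
            return {"erase_flag": self.status.erase_flag}
        raise ValueError("unknown log page")
```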
  • In some examples, the sanitize operation 130 includes setting one or more indicators (e.g., flags) such as, for example, a retired block erase flag and an unretired block erase flag. The retired block erase flag is set to true if all retired blocks have been successfully erased. The retired block erase flag is set to false if any retired blocks have not been successfully erased. The unretired block erase flag is set to true if all unretired blocks have been successfully erased. The unretired block erase flag is set to false if any unretired blocks have not been successfully erased.
  • In the example environment 100 of FIG. 1, the local memory 119 of the controller 115 includes instructions 113 for executing the sanitize operation 130. In this example, the sanitize operation 130 is executed by the controller 115. In other examples, some or all of the logic for executing the sanitize operation 130 can be executed at various other locations. In some examples, some or all of the logic for executing the sanitize operation 130 is included as code executed at the host system 120. Also, in some examples, some or all of the logic for executing the sanitize operation 130 is included, for example, as processing logic at one or more of the memory components 112A, 112B.
  • FIG. 2 is a flow diagram of an example method 200 to execute a sanitize operation, such as the sanitize operation 130 of FIG. 1, in accordance with some embodiments of the present disclosure. The sanitize operation is directed to one or more blocks at a memory component or memory components. In some examples, the sanitize operation is directed to all the blocks at a memory component and/or all the blocks at a memory media. Memory blocks to which the sanitize operation is directed are referred to as blocks within the scope of the sanitize operation. The method 200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
  • At optional operation 202, the processing logic deletes one or more cryptographic keys corresponding to the blocks within the scope of the sanitize operation. For example, data at some or all the blocks within the scope of the sanitize operation may be encrypted by the memory sub-system. When data at a block is encrypted by the memory sub-system, the memory sub-system manages cryptographic keys to encrypt and decrypt the data. For example, when the memory sub-system receives a write request including a data unit, it generates and/or retrieves a cryptographic key. The memory sub-system uses the cryptographic key to encrypt the data unit, resulting in an encrypted data unit. The cryptographic key is stored at the local memory of the controller, at a memory component, or at another suitable location at the memory sub-system 110. The same cryptographic key may be used for multiple blocks or, in some examples, different cryptographic keys are used for different blocks. The encrypted data unit is written to a block at a memory component. Upon receiving a read request for the data unit, the memory sub-system retrieves the appropriate cryptographic key and decrypts the encrypted data unit. The decrypted data unit is then returned in response to the read request. For blocks that utilize encryption, deleting the cryptographic key or keys that correspond to the blocks within the scope of the sanitize operation increases the difficulty of recovering data from the blocks after the sanitize operation, even if the blocks are not successfully erased.
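The key handling sketched below is a minimal illustration of the crypto-erase step described above, assuming a per-block key map held in controller-local memory. The KeyStore class, its method names, and the 256-bit key size are assumptions, and the cipher used to encrypt the data units is intentionally elided.

```python
# Illustrative key management for the optional crypto-erase step; the cipher
# applied to data units is elided. The key map, 256-bit key size, and method
# names are assumptions, not the disclosed implementation.
import os


class KeyStore:
    def __init__(self) -> None:
        self._keys: dict[int, bytes] = {}  # block_id -> key in controller-local memory

    def key_for_block(self, block_id: int) -> bytes:
        # Generate a key on the first write to a block; reuse it on later writes.
        if block_id not in self._keys:
            self._keys[block_id] = os.urandom(32)
        return self._keys[block_id]

    def delete_keys(self, block_ids) -> None:
        # Optional first step of a sanitize operation: discard the keys so that
        # data left behind by a failed erase cannot be decrypted.
        for block_id in block_ids:
            self._keys.pop(block_id, None)
```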
  • At operation 204, the processing logic initiates an erase cycle for a first block within the scope of the sanitize operation. The erase cycle may be executed in any suitable manner. For example, the host device and/or controller may send to the memory component including the block an erase command that causes the memory component to execute an erase cycle for the block. In some examples, the first block comprises NAND memory cells. The NAND memory cells of the block may be erased by holding the sources of the respective memory cells to ground and raising the control gates to an erase voltage for a cycle time. After the cycle time, the memory component including the block returns an indication that the erase cycle was successful or unsuccessful.
  • At operation 206, the processing logic determines if the erase cycle for the block was successful. If the erase cycle was not successful, the processing logic changes the erase flag to false at operation 208, indicating that not all blocks have been successfully erased. If the erase cycle was successful, the processing logic determines at operation 210 whether there are additional blocks in the scope of the sanitize operation that have not been subjected to an erase cycle. If no blocks remain, the processing logic returns the current erase flag value at operation 212. If blocks do remain, then processing logic moves to the next block at operation 214 and returns to operation 204 to execute an erase cycle at the next block.
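A compact sketch of the control flow of method 200 follows. The callables erase_block and delete_keys stand in for whatever commands the controller issues to the memory components and are assumed names, not interfaces defined by this disclosure.

```python
def sanitize_method_200(blocks, erase_block, delete_keys=None):
    """Sketch of method 200: erase every block in scope and report one flag.

    blocks is the set of blocks within the scope of the sanitize operation;
    erase_block(block) is an assumed callable returning True when the erase
    cycle succeeds; delete_keys is an assumed optional crypto-erase hook.
    """
    if delete_keys is not None:
        delete_keys(blocks)            # optional operation 202
    erase_flag = True
    for block in blocks:               # operations 204, 210, 214
        if not erase_block(block):     # operation 206
            erase_flag = False         # operation 208
    return erase_flag                  # operation 212
```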
  • FIG. 3 is a flow diagram of another example method 300 to execute a sanitize operation, such as the sanitize operation 130 of FIG. 1, in accordance with some embodiments of the present disclosure. The sanitize operation is directed to one or more blocks at a memory component or memory components. In some examples, the sanitize operation is directed to all the blocks at a memory component and/or all the blocks at a memory media. Memory blocks to which the sanitize operation is directed are referred to as blocks within the scope of the sanitize operation. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
  • At optional operation 302, the processing logic deletes one or more cryptographic keys for blocks within the scope of the sanitize operation, for example, similar to operation 202 above. At operation 304, the processing logic executes an erase cycle for a first block within the scope of the sanitize operation. At operation 306, the processing logic determines if the erase cycle was successful.
  • If the erase cycle is unsuccessful, the processing logic attempts to read the block at operation 308. For example, in some situations, a block may be unreadable after an erase operation even if the erase operation failed. The read at operation 308 checks whether the block is unreadable despite the failure of the erase cycle at operation 304. When the block includes NAND flash memory cells, the block may be readable by page, where there are multiple pages in the block. The reading of the block at operation 308 may include reading each page of the block. In some examples, the reading of the block at operation 308 includes reading less than all the pages of the block (e.g., every other page, every third page, etc.) or executing an operation that reads multiple pages simultaneously.
  • The processing logic determines if the read was successful at operation 310. The read is successful, for example, if the read produces a data unit without bit errors, for example, as determined by Error Correction Code (ECC) or other suitable technique. The read may be considered successful if any of the pages of the block are successfully read. In some examples, the read is considered successful if less than a threshold number of pages of the block are successfully read. If the read is unsuccessful, it indicates that the block is effectively erased. Accordingly, if the read is unsuccessful, the processing logic proceeds to the next block at operation 316 without setting the erase flag to false. If the read is successful, the processing logic sets the erase flag to false at operation 312. At optional operation 314, the processing logic retires the block, for example, by storing an indication of the block to a retired blocks list.
  • At operation 316, the processing logic determines if there are additional blocks within the scope of the sanitize operation. If yes, then the processing logic proceeds to the next block at operation 318 and returns to operation 304 to execute an erase cycle at that block. If there are no more blocks within the scope of the sanitize operation, the processing logic returns the current erase flag value at operation 320.
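The following sketch illustrates the flow of method 300, in which a failed erase cycle clears the erase flag only if the block can still be read back. The callables erase_block, block_is_readable, and retire_block are assumed stand-ins for controller and memory-component operations.

```python
def sanitize_method_300(blocks, erase_block, block_is_readable,
                        retire_block=None, delete_keys=None):
    """Sketch of method 300: a failed erase cycle clears the flag only when the
    block can still be read back; all callables are assumed stand-ins."""
    if delete_keys is not None:
        delete_keys(blocks)                  # optional operation 302
    erase_flag = True
    for block in blocks:                     # operations 304, 316, 318
        if erase_block(block):               # operation 306
            continue
        if block_is_readable(block):         # operations 308, 310
            erase_flag = False               # operation 312
            if retire_block is not None:
                retire_block(block)          # optional operation 314
        # an unreadable block is treated as effectively erased
    return erase_flag                        # operation 320
```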
  • FIG. 4 is a flow diagram of an example method 400 to read a block for which an erase cycle has failed, in accordance with some embodiments of the present disclosure. The method 400 is one example way that the processing logic can execute the operation 308 of the method 300 described herein. The method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
  • At operation 402, the processing logic reads a first page at the block. At operation 404, the processing logic determines if the read was a success, for example, based on an ECC or other method. If the read is a success, then the processing logic returns an indication that the read was successful at operation 406. If the read was not successful, the processing logic determines at operation 408 whether there are additional pages at the block to be read. This may include determining whether there are any pages at the block that have not been read yet. If there are no more pages, the processing logic returns a read failure at operation 410 indicating that the block was not successfully read. If there are remaining pages, the processing logic proceeds to the next page at operation 412 and reads the next page at operation 402.
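A possible realization of method 400 is sketched below, treating a block as an iterable of page addresses and reporting a successful read if any page is read without bit errors. The helpers read_page and check_ecc are assumptions used only for illustration.

```python
def block_is_readable(block, read_page, check_ecc):
    """Sketch of method 400: report a successful read if any page of the block
    is read without bit errors; read_page and check_ecc are assumed helpers,
    and block is treated as an iterable of page addresses."""
    for page in block:                              # operations 402, 408, 412
        data = read_page(page)
        if data is not None and check_ecc(data):    # operation 404
            return True                             # operation 406
    return False                                    # operation 410: read failure
```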
  • FIG. 5 is a flow diagram of another example method 500 to execute a sanitize operation, such as the sanitize operation 130 of FIG. 1, in accordance with some embodiments of the present disclosure. The sanitize operation is directed to one or more blocks at a memory component or memory components. In some examples, the sanitize operation is directed to all the blocks at a memory component and/or all the blocks at a memory media. Memory blocks to which the sanitize operation is directed are referred to as blocks within the scope of the sanitize operation. The method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
  • The method 500 shows an example where the processing logic maintains two erase flags. An unretired block erase flag corresponds to blocks that were unretired at the outset of the sanitize operation and a retired block erase flag corresponds to retired blocks. At optional operation 502, the processing logic deletes one or more cryptographic keys corresponding to the blocks within the scope of the sanitize operation. At operation 504, the processing logic initiates an erase cycle for a first block within the scope of the sanitize operation. At operation 506, the processing logic determines if the erase cycle for the block was successful.
  • If the erase cycle was not successful, the processing logic changes the erase flag corresponding to the type of the block to false at operation 508, indicating that not all blocks of that type have been successfully erased. For example, if the block was previously retired, the processing logic changes the retired block erase flag to false. If the block was previously unretired, the processing logic changes the unretired block erase flag to false. (In some examples, the processing logic reads the block before setting the appropriate flag to false, for example, as described herein with respect to FIGS. 3 and 4.)
  • If the erase cycle was successful, the processing logic determines at operation 510 whether there are additional blocks in the scope of the sanitize operation that have not been subjected to an erase cycle. If no blocks remain, the processing logic returns the current retired block erase flag value and unretired block erase flag value at operation 512. If blocks do remain, then processing logic moves to the next block at operation 514 and returns to operation 504 to execute an erase cycle at the next block.
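The two-flag variant of method 500 might be sketched as follows, with one flag covering blocks that were retired at the outset of the sanitize operation and one covering unretired blocks. The callables is_retired, erase_block, and delete_keys are assumed names.

```python
def sanitize_method_500(blocks, is_retired, erase_block, delete_keys=None):
    """Sketch of method 500: separate erase flags for retired and unretired
    blocks; is_retired, erase_block, and delete_keys are assumed callables."""
    if delete_keys is not None:
        delete_keys(blocks)                      # optional operation 502
    retired_flag, unretired_flag = True, True
    for block in blocks:                         # operations 504, 510, 514
        if erase_block(block):                   # operation 506
            continue
        if is_retired(block):                    # operation 508
            retired_flag = False
        else:
            unretired_flag = False
    return retired_flag, unretired_flag          # operation 512
```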
  • FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to execute instructions 113 for executing a sanitize operation). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
  • The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random-access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.
  • Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 can further include a network interface device 608 to communicate over the network 620.
  • The data storage system 618 can include a non-transitory machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to the memory sub-system 110 of FIG. 1.
  • In one embodiment, the instructions 626 include the instructions 113 to implement functionality corresponding to the sanitize operation, as described herein. While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
  • It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
  • The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.
  • The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.
  • In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims (20)

What is claimed is:
1. A memory system comprising:
a memory component comprising a plurality of blocks; and
a processing device programmed to perform operations comprising:
receiving a sanitize command for the plurality of blocks, wherein a first portion of the plurality of blocks are retired and a second portion of the plurality of blocks are unretired;
initiating a first erase cycle at a first retired block of the plurality of blocks;
determining that the first erase cycle was successful;
initiating a second erase cycle at a second retired block of the plurality of blocks;
determining that the second erase cycle was not successful; and
setting an erase indicator to false.
2. The memory system of claim 1, wherein the processing device is further programmed to perform operations comprising deleting at least one cryptographic key for decrypting data stored at the first retired block.
3. The memory system of claim 1, wherein the processing device is further programmed to perform operations comprising:
sending to the memory component a first erase command directed to the first retired block; and
receiving, from the memory component, a status indicator indicating that the first erase command was unsuccessful.
4. The memory system of claim 1, wherein the processing device is further programmed to perform operations comprising:
determining that all the blocks of the second portion are erased; and
setting an unretired block erase indicator to true.
5. The memory system of claim 1, wherein the processing device is further programmed to perform operations comprising:
reading the first retired block; and
determining that the reading of the first retired block was successful.
6. The memory system of claim 1, further comprising storing to the memory system an indication that the first retired block is retired.
7. The memory system of claim 1, wherein the processing device is further programmed to perform operations comprising:
determining that a third block of the plurality of blocks failed to erase;
reading the third block; and
determining that the reading of the third block was unsuccessful.
8. The memory system of claim 1, wherein the processing device is further programmed to perform operations comprising:
receiving, from a host device, a request to read the erase indicator; and
sending, to the host device, an indication of the erase indicator.
9. A method comprising:
receiving a sanitize operation for a plurality of blocks, wherein a first portion of the plurality of blocks are retired and a second portion of the plurality of blocks are unretired;
initiating a first erase cycle at a first retired block of the plurality of blocks;
determining that the first erase cycle was successful;
initiating a second erase cycle at a second retired block of the plurality of blocks;
determining that the second erase cycle was not successful; and
setting an erase indicator to false.
10. The method of claim 9, further comprising deleting at least one cryptographic key for decrypting data stored at the first retired block.
11. The method of claim 9, further comprising:
sending to a memory component comprising the first retired block a first erase command directed to the first retired block; and
receiving, from the memory system, a status indicator indicating that the first erase command was unsuccessful.
12. The method of claim 9, further comprising:
determining that all the blocks of the second portion are erased; and
setting an unretired block erase indicator to true.
13. The method of claim 9, further comprising:
reading the first retired block; and
determining that the reading of the first retired block was successful.
14. The method of claim 9, further comprising storing to the memory system an indication that the first retired block is retired.
15. The method of claim 9, further comprising:
determining that a second block of the plurality of blocks failed to erase;
reading the second block; and
determining that the reading of the second block was unsuccessful.
16. The method of claim 9, further comprising:
receiving, from a host device, a request to read the erase indicator; and
sending, to the host device, an indication of the erase indicator.
17. A non-transitory machine-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising:
receiving a sanitize operation for a plurality of blocks, wherein a first portion of the plurality of blocks are retired and a second portion of the plurality of blocks are unretired;
initiating a first erase cycle at a first retired block of the plurality of blocks;
determining that the first erase cycle was successful;
initiating a second erase cycle at a second retired block of the plurality of blocks;
determining that the second erase cycle was not successful; and
setting an erase indicator to false.
18. The non-transitory machine-readable storage medium of claim 17, further comprising instructions that, when executed by the processing device, cause the processing device to perform operations comprising deleting at least one cryptographic key for decrypting data stored at the first retired block.
19. The non-transitory machine-readable storage medium of claim 17, further comprising instructions that, when executed by the processing device, cause the processing device to perform operations comprising:
sending to a memory component comprising the first retired block a first erase command directed to the first retired block; and
receiving, from the memory component, a status indicator indicating that the first erase command was unsuccessful.
20. The non-transitory machine-readable storage medium of claim 17, further comprising instructions that, when executed by the processing device, cause the processing device to perform operations comprising:
determining that all the blocks of the second portion are erased; and
setting an unretired block erase indicator to true.
US16/148,564 2018-10-01 2018-10-01 Data erasure in memory sub-systems Active US10628076B1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US16/148,564 US10628076B1 (en) 2018-10-01 2018-10-01 Data erasure in memory sub-systems
CN202210692044.XA CN115048054A (en) 2018-10-01 2019-09-27 Data erasure in memory subsystems
EP19869194.1A EP3861428A4 (en) 2018-10-01 2019-09-27 Data erasure in memory sub-systems
PCT/US2019/053590 WO2020072321A1 (en) 2018-10-01 2019-09-27 Data erasure in memory sub-systems
KR1020217013274A KR20210057192A (en) 2018-10-01 2019-09-27 Erasing data from the memory subsystem
CN201980072847.1A CN112997139B (en) 2018-10-01 2019-09-27 Data erasure in memory subsystems
US16/824,335 US11237755B2 (en) 2018-10-01 2020-03-19 Data erasure in memory sub-systems
US17/529,908 US11775198B2 (en) 2018-10-01 2021-11-18 Data erasure in memory sub-systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/148,564 US10628076B1 (en) 2018-10-01 2018-10-01 Data erasure in memory sub-systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/824,335 Continuation US11237755B2 (en) 2018-10-01 2020-03-19 Data erasure in memory sub-systems

Publications (2)

Publication Number Publication Date
US20200104068A1 true US20200104068A1 (en) 2020-04-02
US10628076B1 US10628076B1 (en) 2020-04-21

Family

ID=69947551

Family Applications (3)

Application Number Title Priority Date Filing Date
US16/148,564 Active US10628076B1 (en) 2018-10-01 2018-10-01 Data erasure in memory sub-systems
US16/824,335 Active 2038-10-31 US11237755B2 (en) 2018-10-01 2020-03-19 Data erasure in memory sub-systems
US17/529,908 Active 2038-10-23 US11775198B2 (en) 2018-10-01 2021-11-18 Data erasure in memory sub-systems

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/824,335 Active 2038-10-31 US11237755B2 (en) 2018-10-01 2020-03-19 Data erasure in memory sub-systems
US17/529,908 Active 2038-10-23 US11775198B2 (en) 2018-10-01 2021-11-18 Data erasure in memory sub-systems

Country Status (5)

Country Link
US (3) US10628076B1 (en)
EP (1) EP3861428A4 (en)
KR (1) KR20210057192A (en)
CN (2) CN115048054A (en)
WO (1) WO2020072321A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11579772B2 (en) 2020-11-25 2023-02-14 Micron Technology, Inc. Managing page retirement for non-volatile memory

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010011318A1 (en) * 1997-02-27 2001-08-02 Vishram P. Dalvi Status indicators for flash memory
US7169043B2 (en) 2000-10-17 2007-01-30 Atlantic City Coin & Slot Service Company, Inc. Gaming display device and method of use
US7003621B2 (en) * 2003-03-25 2006-02-21 M-System Flash Disk Pioneers Ltd. Methods of sanitizing a flash-based data storage device
US7139864B2 (en) * 2003-12-30 2006-11-21 Sandisk Corporation Non-volatile memory and method with block management system
US7624239B2 (en) * 2005-11-14 2009-11-24 Sandisk Corporation Methods for the management of erase operations in non-volatile memories
KR20100049809A (en) * 2008-11-04 2010-05-13 삼성전자주식회사 Method of erasing a non-volatile memory device
US8281227B2 (en) * 2009-05-18 2012-10-02 Fusion-10, Inc. Apparatus, system, and method to increase data integrity in a redundant storage system
US8321727B2 (en) 2009-06-29 2012-11-27 Sandisk Technologies Inc. System and method responsive to a rate of change of a performance parameter of a memory
JP5612514B2 (en) * 2010-03-24 2014-10-22 パナソニック株式会社 Nonvolatile memory controller and nonvolatile storage device
US8832402B2 (en) * 2011-04-29 2014-09-09 Seagate Technology Llc Self-initiated secure erasure responsive to an unauthorized power down event
JP5687648B2 (en) 2012-03-15 2015-03-18 株式会社東芝 Semiconductor memory device and program
FI125308B (en) * 2012-07-05 2015-08-31 Blancco Oy Ltd Device, arrangement, procedure and computer program for erasing data stored in a mass memory
US9483397B2 (en) * 2013-07-16 2016-11-01 Intel Corporation Erase management in memory systems
KR20160016481A (en) 2014-07-31 2016-02-15 삼성전자주식회사 Memory controller for controlling data sanitization and memory system including the same
US20160034217A1 (en) * 2014-07-31 2016-02-04 Samsung Electronics Co., Ltd. Memory controller configured to control data sanitization and memory system including the same
EP3133604B1 (en) * 2015-08-17 2020-11-11 Harman Becker Automotive Systems GmbH Method and device for fail-safe erase of flash memory
US20170115900A1 (en) * 2015-10-23 2017-04-27 International Business Machines Corporation Dummy page insertion for flexible page retirement in flash memory storing multiple bits per memory cell
US10452532B2 (en) 2017-01-12 2019-10-22 Micron Technology, Inc. Directed sanitization of memory
US10248515B2 (en) * 2017-01-19 2019-04-02 Apple Inc. Identifying a failing group of memory cells in a multi-plane storage operation
US9996458B1 (en) * 2017-07-12 2018-06-12 Nxp Usa, Inc. Memory sector retirement in a non-volatile memory
KR102603916B1 (en) * 2018-04-25 2023-11-21 삼성전자주식회사 Storage device comprising nonvolatile memory device and controller
CN110473584B (en) * 2018-05-11 2021-07-23 建兴储存科技(广州)有限公司 Method for re-verifying erased block in solid state storage device
US10628076B1 (en) 2018-10-01 2020-04-21 Micron Technology, Inc. Data erasure in memory sub-systems

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11237755B2 (en) 2018-10-01 2022-02-01 Micron Technology, Inc. Data erasure in memory sub-systems
US11775198B2 (en) 2018-10-01 2023-10-03 Micron Technology, Inc. Data erasure in memory sub-systems
US20230063167A1 (en) * 2021-09-01 2023-03-02 Micron Technology, Inc. Internal resource monitoring in memory devices

Also Published As

Publication number Publication date
US11775198B2 (en) 2023-10-03
KR20210057192A (en) 2021-05-20
CN112997139A (en) 2021-06-18
CN115048054A (en) 2022-09-13
EP3861428A1 (en) 2021-08-11
EP3861428A4 (en) 2022-06-15
CN112997139B (en) 2022-06-21
US20220075549A1 (en) 2022-03-10
US10628076B1 (en) 2020-04-21
US20200218467A1 (en) 2020-07-09
WO2020072321A1 (en) 2020-04-09
US11237755B2 (en) 2022-02-01

Similar Documents

Publication Publication Date Title
US11775198B2 (en) Data erasure in memory sub-systems
US11749373B2 (en) Bad block management for memory sub-systems
US11567825B2 (en) Generating error checking data for error detection during modification of data in a memory sub-system
US20200219573A1 (en) Using a status indicator in a memory sub-system to detect an event
US20230005548A1 (en) Data erase operations for a memory system
US20210389910A1 (en) Managing a memory system including memory devices with different characteristics
US11705216B2 (en) Data redirection upon failure of a program operation
US10909251B2 (en) Modification of a segment of data based on an encryption operation
US20210191816A1 (en) Storing critical data at a memory system
US11221912B2 (en) Mitigating an undetectable error when retrieving critical data during error handling

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, ILLINOIS

Free format text: SUPPLEMENT NO. 3 TO PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:048951/0902

Effective date: 20190416

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND

Free format text: SUPPLEMENT NO. 12 TO PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:048948/0677

Effective date: 20190416

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BRANDT, KEVIN R;VAN EATON, THOMAS COUGAR;REEL/FRAME:050668/0931

Effective date: 20181011

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:050724/0392

Effective date: 20190731

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:051041/0317

Effective date: 20190731

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4