US20170228180A1 - System and method of updating metablocks - Google Patents

System and method of updating metablocks

Info

Publication number
US20170228180A1
US20170228180A1 (application US15/495,946)
Authority
US
United States
Prior art keywords
metablock
relinking
blocks
memory
update
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/495,946
Inventor
Zhenlei Shen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Western Digital Technologies Inc
Original Assignee
Western Digital Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Western Digital Technologies Inc filed Critical Western Digital Technologies Inc
Priority to US15/495,946
Assigned to SANDISK TECHNOLOGIES INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHEN, ZHENLEI
Assigned to WESTERN DIGITAL TECHNOLOGIES, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SANDISK TECHNOLOGIES LLC
Assigned to SANDISK TECHNOLOGIES LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SANDISK TECHNOLOGIES INC.
Publication of US20170228180A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0616Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653Monitoring storage devices or systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0688Non-volatile semiconductor memory arrays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1032Reliability improvement, data loss prevention, degraded operation etc
    • G06F2212/1036Life time enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7205Cleaning, compaction, garbage collection, erase control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7207Details relating to flash memory management management of metadata or control data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7208Multiple device management, e.g. distributing data over multiple flash devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7211Wear leveling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0626Reducing size or complexity of storage systems

Definitions

  • the present disclosure is generally related to data storage devices that link physical blocks of memory to form metablocks.
  • Non-volatile data storage devices such as embedded memory devices (e.g., embedded MultiMedia Card (eMMC) devices) and removable memory devices (e.g., removable universal serial bus (USB) flash memory devices and other removable storage cards), have allowed for increased portability of data and software applications. Users of non-volatile data storage devices increasingly rely on the non-volatile storage devices to store and provide rapid access to a large amount of data.
  • Non-volatile data storage devices may include multiple memory dies and may group blocks of multiple dies for fast write performance.
  • a logical grouping of blocks may be referred to as a metablock or a superblock.
  • a linking of the group of blocks included in a metablock is generally static, and, as a result, when one block included in the metablock fails, the whole metablock is identified as unusable. Thus, a life of the metablock may be cut short based on failure of a single block.
  • Although a metablock can be “relinked” to replace a failed block with a spare block (if a spare block is available), data recovery and transfer from the failed block to the spare block is resource intensive and time consuming and may result in diminished performance and non-compliance with designated command response times.
  • a data storage device may include a controller coupled to a memory that has multiple memory dies. Each memory die may include multiple blocks of storage elements, and metablocks of the data storage device may be defined as groups of blocks from multiple memory dies.
  • An erased metablock may be sent to a free metablock pool to be available for a data write operation, and a determination may be made whether there is a large difference between individual blocks of the metablock in terms of block health.
  • the metablock may be identified as a relinking candidate, assigned a low write priority, and provided to a relinking pool associated with the free metablock pool.
  • a relinking process may be performed to update the linkings of the metablocks that are in the relinking pool. For example, blocks with similar health values may be grouped together to generate updated metablocks. After the relinking process, the updated metablocks may be removed from the relinking pool and reused during subsequent memory operations.
  • FIG. 1 is a block diagram of a particular illustrative embodiment of a system including a data storage device configured to relink metablocks;
  • FIG. 2 is a diagram illustrating an example of a process of relinking metablocks;
  • FIG. 3 is a flow diagram of a first illustrative embodiment of a method to relink metablocks; and
  • FIG. 4 is a flow diagram of a second illustrative embodiment of a method to relink metablocks.
  • FIG. 1 is a block diagram of a particular illustrative embodiment of a system 100 including a data storage device 102 and a host device 130 .
  • the data storage device 102 includes a controller 120 (e.g., a memory controller) coupled to a memory device 104 including multiple memory dies 103 .
  • the data storage device 102 is configured to logically link together blocks from the multiple memory dies 103 to define “metablocks” (or “superblocks”) as groups of blocks that span the multiple memory dies 103 for read and write operations.
  • the data storage device 102 may identify metablocks that are metablock update candidates—that is, candidates for block relinking, also referred to as “relinking candidates”—and, when a sufficient number of metablocks are identified as relinking candidates, the data storage device 102 may update a linking of the blocks of the identified metablocks to generate updated metablocks.
  • the blocks may be relinked so that the updated metablocks have an average useful life that is longer than the average useful life of the identified metablocks prior to relinking.
  • the data storage device 102 may be embedded within the host device 130 , such as in accordance with an embedded MultiMedia Card (eMMC®) (trademark of Joint Electron Devices Engineering Council (JEDEC) Solid State Technology Association, Arlington, Va.) configuration.
  • the data storage device 102 may be removable from (i.e., “removably” coupled to) the host device 130 .
  • the data storage device 102 may be removably coupled to the host device 130 in accordance with a removable universal serial bus (USB) configuration.
  • the data storage device 102 may include or correspond to a solid state drive (SSD), which may be used as an embedded storage drive, an enterprise storage drive (ESD), or a cloud storage drive (CSD), as illustrative, non-limiting examples.
  • the data storage device 102 may be configured to be coupled to the host device 130 via a communication path 110 , such as a wired communication path and/or a wireless communication path.
  • the data storage device 102 may include an interface 108 (e.g., a host interface) that enables communication (via the communication path 110 ) between the data storage device 102 and the host device 130 , such as when the interface 108 is coupled to the host device 130 .
  • the data storage device 102 may be configured to be coupled to the host device 130 as embedded memory, such as embedded MultiMedia Card (eMMC®) (trademark of JEDEC Solid State Technology Association, Arlington, Va.) and embedded secure digital (eSD) (Secure Digital (SD®) is a trademark of SD-3C LLC, Wilmington, Del.), as illustrative examples.
  • the data storage device 102 may correspond to an eMMC (embedded MultiMedia Card) device.
  • the data storage device 102 may correspond to a memory card, such as a Secure Digital (SD®) card, a microSD® card, a miniSD™ card (trademarks of SD-3C LLC, Wilmington, Del.), a MultiMediaCard™ (MMC™) card (trademark of JEDEC Solid State Technology Association, Arlington, Va.), or a CompactFlash® (CF) card (trademark of SanDisk Corporation, Milpitas, Calif.).
  • the data storage device 102 may operate in compliance with a JEDEC industry specification.
  • the data storage device 102 may operate in compliance with a JEDEC eMMC specification, a JEDEC Universal Flash Storage (UFS) specification, one or more other specifications, or a combination thereof.
  • the host device 130 may include a processor and a memory.
  • the memory may be configured to store data and/or instructions that may be executable by the processor.
  • the memory may be a single memory or may include one or more memories, such as one or more non-volatile memories, one or more volatile memories, or a combination thereof.
  • the host device 130 may issue one or more commands to the data storage device 102 , such as one or more requests to read data from or write data to the memory 104 of the data storage device 102 .
  • the host device 130 may be configured to provide data, such as user data 132 , to be stored at the memory 104 or to request data to be read from the memory 104 .
  • the user data 132 may have a size that corresponds to a size of a metablock at the data storage device 102 (rather than corresponding to a size of an individual block).
  • the host device 130 may include a mobile telephone, a music player, a video player, a gaming console, an electronic book reader, a personal digital assistant (PDA), a computer, such as a laptop computer or notebook computer, any other electronic device, or any combination thereof.
  • the host device 130 communicates via a memory interface that enables reading from the memory 104 and writing to the memory 104 .
  • the host device 130 may operate in compliance with a Joint Electron Devices Engineering Council (JEDEC) industry specification, such as a Universal Flash Storage (UFS) Host Controller Interface specification.
  • the host device 130 may operate in compliance with one or more other specifications, such as a Secure Digital (SD) Host Controller specification as an illustrative example.
  • the data storage device 102 includes the controller 120 coupled to the memory 104 that includes the multiple memory dies 103 , illustrated as three memory dies 152 - 156 .
  • the controller 120 may be coupled to the memory dies 103 via a bus 106 , an interface (e.g., interface circuitry), another structure, or a combination thereof.
  • the bus 106 may include multiple distinct channels to enable the controller 120 to communicate with each of the memory dies 103 in parallel with, and independently of, communication with the other memory dies 103 .
  • Each of the memory dies 152 - 156 includes multiple blocks, illustrated as five blocks per die.
  • the first memory die 152 is illustrated as having five blocks (block 1 - 1 to block 1 - 5 ).
  • each of the memory dies 152 - 156 may have more than five blocks (or fewer than five blocks).
  • Each block may include multiple word lines, and each word line may include (e.g., may be coupled to) multiple storage elements.
  • each storage element may be configured as a single-level cell (SLC, storing one bit per storage element) or a multi-level cell (MLC, storing multiple bits per storage element).
  • each block is an erase unit and data is erasable from the memory 104 according to a block-by-block granularity.
  • One or more of the memory dies 152 - 156 may include a two dimensional (2D) memory configuration or a three dimensional (3D) memory configuration.
  • the memory 104 may store data, such as the user data 132 or encoded user data, such as a codeword 133 , as described further herein.
  • the memory 104 may include support circuitry associated with the memory 104 .
  • the memory 104 may be associated with circuitry to support operation of the storage elements of the memory dies 152 - 156 , such as read circuitry 140 and write circuitry 142 .
  • the read circuitry 140 and the write circuitry 142 may be combined into a single component (e.g., hardware and/or software) of the memory 104 .
  • each of the individual memory dies 152 - 156 may include read and write circuitry that is operable to read and/or write from the individual memory die independent of any other read and/or write operations at any of the other memory dies 152 - 156 .
  • the controller 120 is configured to receive data and instructions from and to send data to the host device 130 while the data storage device 102 is operatively coupled to the host device 130 .
  • the controller 120 is further configured to send data and commands to the memory 104 and to receive data from the memory 104 .
  • the controller 120 is configured to send data and a write command to instruct the memory 104 to store the data to a specified metablock.
  • the controller 120 may be configured to send a first portion of write data and a first physical address (of a first block of the metablock, e.g., block 1 - 3 ) to the first memory die 152 , a second portion of the write data and a second physical address (of a second block of the metablock) to the second memory die 154 , and a third portion of the write data and a third physical address (of a third block of the metablock) to the third memory die 156 .
  • the controller 120 may be configured to send a read request to the memory 104 to read the first portion from the first physical address in the first memory die 152 , to read the second portion from the second physical address in the second memory die 154 , and to read the third portion from the third physical address in the third memory die 156 .
  • the controller 120 includes a metablock data table 192 , a free metablock pool 180 , a metablock relinker 186 , and a metablock relinking metric generator 194 .
  • the metablock data table 192 may correspond to a data structure that tracks metablocks and the blocks that form the metablocks.
  • the controller 120 may define a metablock as a first block from the first memory die 152 , a second block from the second memory die 154 , and a third block from the third memory die 156 .
  • the controller 120 may populate an entry in the metablock data table 192 that associates a metablock identifier with the selected blocks from the memory dies 152 - 156 that form the metablock, forming a metablock through linking of the selected blocks from the multiple memory dies 152 - 156 .
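  • As an illustrative, non-limiting sketch (the disclosure does not prescribe a concrete layout, and the field names below are assumptions), an entry of the metablock data table 192 might record the per-die block numbers together with the associated block health values, relinking metric, and write priority; the example values follow metablocks 1 and 5 of table 250 in FIG. 2:

      from dataclasses import dataclass
      from typing import Dict, List

      @dataclass
      class MetablockEntry:
          block_per_die: List[int]      # one physical block number per memory die
          health_per_die: List[int]     # block health value for each linked block
          relinking_metric: int = 0     # e.g., max(health) - min(health)
          write_priority: str = "H"     # "H" (high) or "L" (low / relinking candidate)

      # Metablock identifier -> entry describing its linked blocks.
      metablock_data_table: Dict[int, MetablockEntry] = {
          1: MetablockEntry(block_per_die=[1, 1, 1], health_per_die=[1, 1, 1]),
          5: MetablockEntry(block_per_die=[12, 5, 5], health_per_die=[1, 8, 8],
                            relinking_metric=7, write_priority="L"),
      }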
  • the metablock data table 192 may also include additional information, such as block health values, metablock relinking metrics, one or more other types of data, or a combination thereof.
  • the free metablock pool 180 may be a data structure, such as a prioritized list or table of identifiers of metablocks that are available for storing data.
  • a metablock may be erased by erasing each of the individual blocks that form the metablock, such as via an erase request 162 .
  • a metablock may be erased responsive to a command from the host device 130 or based on an internal process at the data storage device 102 , such as a garbage collection operation that copies valid data from multiple metablocks into a single metablock and flags the multiple metablocks to be erased (e.g., to be erased as part of a background operation).
  • the metablock may be “added” to the free metablock pool 180 by adding an identifier of the erased metablock to the free metablock pool 180 .
  • Although metablocks are described herein as being “added to” the free metablock pool 180 , “in” the free metablock pool 180 , or “removed from” the free metablock pool 180 for ease of explanation, it should be understood that metablock identifiers or other indications of the metablocks are added to, stored in, or removed from the free metablock pool 180 .
  • a write priority may be assigned to or otherwise determined for the metablocks in the free metablock pool 180 .
  • Metablocks in the free metablock pool 180 having a higher write priority may be selected for write operations before selection of metablocks in the free metablock pool 180 having a lower write priority. For example, metablocks formed of blocks exhibiting better block “health” (e.g., fewer data errors or having a smaller count of program/erase cycles) may be assigned a higher write priority than metablocks formed of blocks exhibiting lesser health (e.g., having more data errors or having a larger count of program/erase cycles).
  • when a metablock is selected for a write operation, the metablock may be removed from the free metablock pool 180 .
  • Metablocks in the free metablock pool 180 that are identified as candidates for relinking may be added to a relinking pool 182 .
  • the relinking pool 182 may include a data structure that includes identifiers of relinking candidates.
  • Metablocks added to the relinking pool 182 may be assigned a low write priority to delay selection of the metablocks for write operations and to enable accumulation of a sufficient number of relinking candidates 184 in the relinking pool 182 to begin a relinking operation at the metablock relinker 186 .
  • metablocks added to the relinking pool 182 may be prevented from selection for write operations until after relinking has been performed.
  • the relinking pool 182 is described as a data structure within the free metablock pool 180 , in other implementations the relinking pool 182 may correspond to a data structure that is separate from the free metablock pool 180 . In other implementations, the relinking pool 182 may not correspond to a dedicated data structure and may instead correspond to a logical grouping of metablocks in the free metablock pool 180 that have been assigned the low write priority. For example, a lowest write priority (or a dedicated write priority value) may be assigned to the identified relinking candidate(s) 184 , and the relinking pool 182 may correspond to all metablocks in the free metablock pool 180 having the lowest write priority (or the dedicated write priority value).
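  • For the variant in which the relinking pool 182 is only a logical grouping rather than a separate structure, membership can be derived directly from the write priority recorded for each free metablock. A minimal sketch, continuing the hypothetical MetablockEntry model above and assuming a dedicated low-priority value "L":

      from typing import Dict, List

      def relinking_pool_members(free_pool: Dict[int, "MetablockEntry"]) -> List[int]:
          """Identifiers of free metablocks carrying the dedicated low write
          priority; together they form the logical relinking pool."""
          return [mb_id for mb_id, entry in free_pool.items()
                  if entry.write_priority == "L"]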
  • the metablock relinker 186 is configured to determine whether a metablock added to the free metablock pool 180 is a relinking candidate 184 .
  • a metablock may be identified as a relinking candidate based on an amount of variation in the health of the blocks that form the metablock.
  • block health values 196 may be determined for each block of the metablock to indicate a relative health of the block.
  • a block health value may be determined based on a bit error rate, a number of program/erase (P/E) cycles, time to complete an erase operation, time to complete a programming operation, one or more other factors associated with the block, or any combination thereof.
  • a bit error rate may be determined during one or more data read operations from the block (e.g., by an error correction coding (ECC) decoder of the data storage device 102 ), with a higher error rate corresponding to lower health.
  • a count of P/E cycles may be maintained by the data storage device 102 (e.g., such as for use with wear leveling), with higher count of P/E cycles corresponding to lower health.
  • a time to complete an erase operation may correspond to a measured duration between initiating an erase operation and completing the erase operation.
  • the time to complete the erase operation may correspond to a count of erase pulses applied during the erase operation (such as modified by a magnitude of an erase voltage applied during the erase pulses), with a longer time corresponding to lower health.
  • a time to complete a programming operation may correspond to a measured duration between initiating a programming operation and completing the programming operation.
  • the time to complete the programming operation may correspond to a count of programming pulses applied during the programming operation (such as modified by a magnitude of a programming voltage applied during the programming pulses), with a longer time corresponding to lower health.
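  • One possible way to fold these factors into a single block health value is a weighted combination; the weights and scale factors below are purely illustrative assumptions rather than values given by the disclosure, and a larger value indicates lower health, matching the convention of FIG. 2:

      def block_health_value(bit_error_rate: float,
                             pe_cycles: int,
                             erase_time_ms: float,
                             program_time_ms: float) -> int:
          """Fold the factors named above into a single score; a larger value
          means lower health. Weights and normalization are assumptions."""
          score = (bit_error_rate * 1000.0      # higher bit error rate -> worse
                   + pe_cycles / 1000.0         # more program/erase cycles -> worse
                   + erase_time_ms / 2.0        # slower erase -> worse
                   + program_time_ms / 2.0)     # slower programming -> worse
          return max(1, round(score))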
  • a useful life of a metablock may be limited by the shortest useful life of its blocks, which may be predicted using the block health values 196 .
  • the metablocks' average useful life may be maximized when all of the blocks of each particular metablock reach the end of their respective useful lives at the same time.
  • the metablock relinking metric generator 194 may be configured to determine a value of a metablock relinking metric for a metablock based on a difference of the block health values of its component blocks.
  • the metablock relinking metric generator 194 may determine the metablock relinking metric by identifying the highest block health value of a component block of a metablock, identifying the lowest block health value of a component block of the metablock, and assigning the metablock relinking metric as a computed difference between the highest block health value and the lowest block health value.
  • the metablock relinker 186 may be configured to determine whether a metablock is a relinking candidate 184 based on a value of the metablock relinking metric for the metablock. For example, the metablock relinker 186 may be configured to receive the metablock relinking metric and to identify the metablock as a relinking candidate 184 in response to the metablock relinking metric satisfying a relinking metric threshold 188 . In some implementations, the metablock relinking metric satisfies the relinking metric threshold 188 when the metablock relinking metric equals or exceeds the relinking metric threshold 188 . In other implementations, the metablock relinking metric satisfies the relinking metric threshold 188 when the metablock relinking metric exceeds the relinking metric threshold 188 .
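  • A compact sketch of the metric and of the candidate test against the relinking metric threshold 188, using the “equals or exceeds” comparison variant; the default threshold of 4 and the example health values follow FIG. 2:

      from typing import List

      def metablock_relinking_metric(health_values: List[int]) -> int:
          """Difference between the highest (worst) and lowest (best) block
          health values of a metablock's component blocks."""
          return max(health_values) - min(health_values)

      def is_relinking_candidate(health_values: List[int],
                                 relinking_metric_threshold: int = 4) -> bool:
          """Candidate when the metric equals or exceeds the threshold."""
          return metablock_relinking_metric(health_values) >= relinking_metric_threshold

      # FIG. 2 example: metablock 5 has health values 1, 8, 8 -> metric 7 >= 4.
      assert is_relinking_candidate([1, 8, 8])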
  • the metablock relinker 186 may be configured to determine when a sufficient number of the relinking candidates 184 have been identified to perform a relinking operation to the relinking candidates 184 .
  • the metablock relinker 186 may be configured to receive or to otherwise determine a number of relinking candidates 172 that are in the relinking pool 182 .
  • the metablock relinker 186 may be configured to initiate a relinking operation in response to the number of relinking candidates 172 satisfying a relinking pool threshold 190 .
  • the number of relinking candidates 172 satisfies the relinking pool threshold 190 when the number of relinking candidates 172 equals or exceeds the relinking pool threshold 190 . In other implementations, the number of relinking candidates 172 satisfies the relinking pool threshold 190 when the number of relinking candidates 172 exceeds the relinking pool threshold 190 .
  • the metablock relinker 186 may be configured to perform a relinking operation on blocks that form the relinking candidates 184 . As described further below with reference to FIG. 2 , the relinking operation may determine updated groupings of the blocks based on block health values 196 . For example, the metablock relinker 186 may be configured to access the block health values 196 corresponding to the blocks of the relinking candidates 184 and to sort the blocks in order of block health value for each memory die 152 - 156 . The metablock relinker 186 may generate updated metablocks by re-grouping the sorted blocks according to the sort order.
  • a first updated metablock may be formed by grouping the first block in the sort order for the first die 152 , the first block in the sort order for the second die 154 , and the first block in the sort order for the third die 156 .
  • a second updated metablock may be formed by grouping the second block in the sort order for the first die 152 , the second block in the sort order for the second die 154 , and the second block in the sort order for the third die 156 .
  • the metablock relinker 186 may generate the updated metablocks by relinking blocks from the relinking candidates 184 .
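  • A minimal sketch of the relinking operation itself: the component blocks of the candidates are sorted per die by block health and regrouped by sort position. Data shapes and names are assumptions; the disclosure only requires that blocks of similar health end up linked together.

      from typing import Dict, List, Tuple

      Block = Tuple[int, int]   # (block number, block health value)

      def relink(candidates: Dict[int, List[Block]]) -> List[List[int]]:
          """candidates maps a die index to the (block number, health value)
          pairs contributed by all relinking candidates on that die. Returns
          updated metablocks as lists of block numbers (one block per die),
          grouped so that blocks of similar health are linked together."""
          # Sort each die's blocks from best (lowest value) to worst health.
          sorted_per_die = {die: sorted(blocks, key=lambda b: b[1])
                            for die, blocks in candidates.items()}
          count = len(next(iter(sorted_per_die.values())))
          # The i-th block of every sorted list forms the i-th updated metablock.
          return [[sorted_per_die[die][i][0] for die in sorted(sorted_per_die)]
                  for i in range(count)]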
  • the controller 120 may perform an erase operation to erase the blocks that form a particular metablock, such as a representative metablock 160 formed of a first block (block 1 - 2 ) 162 in the first memory die 152 , a second block (block 2 - 2 ) 164 in the second memory die 154 , and a third block (block 3 - 2 ) 166 in the third memory die 156 .
  • a metablock identifier 170 of the erased metablock 160 (e.g., metablock “ 2 ”) may be provided to the metablock relinker 186 .
  • the block health values 196 corresponding to the blocks 162 - 166 of the erased metablock 160 may be updated, such as based on a time to complete the erase operation at each of the blocks 162 - 166 or based on a number of errors detected in data most recently read from each of the blocks 162 - 166 (e.g., when copying data during a garbage collection operation).
  • the metablock relinking metric generator 194 may generate a metablock relinking metric for the erased metablock 160 .
  • the metablock relinking metric generator 194 may subtract the lowest block health value of the blocks 162 - 166 from the highest block health value of the blocks 162 - 166 .
  • the difference between the lowest and highest of the block health values of the blocks 162 - 166 may be provided to the metablock relinker 186 as the metablock relinking metric.
  • the metablock relinker 186 may compare the metablock relinking metric to the relinking metric threshold 188 . In response to the metablock relinking metric satisfying the relinking metric threshold 188 (e.g., equaling or exceeding the relinking metric threshold 188 ), the metablock relinker 186 may determine a low write priority for the metablock 160 , designating the metablock 160 as a relinking candidate for the relinking pool 182 . Otherwise, in response to the metablock relinking metric not satisfying the relinking metric threshold 188 , the metablock relinker 186 may determine a higher write priority for the metablock 160 .
  • all metablocks other than the relinking candidates 184 may be assigned the “high” write priority.
  • the write priority may be at least partially based on block health values.
  • the write priority may be based on an average of the block health values of the blocks of a metablock so that metablocks with better average health have a higher write priority than metablocks with lower average health.
  • Data including a metablock identifier and write priority for the metablock 160 may be provided to the free metablock pool 180 .
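  • For the variant in which the write priority is based on average block health, a small sketch (the priority encoding and the sentinel value are assumptions; only the principle that healthier metablocks are written first, while relinking candidates receive the lowest priority, comes from the disclosure):

      from typing import List

      def write_priority(health_values: List[int], relinking_candidate: bool) -> int:
          """Return a numeric write priority (smaller = selected sooner).
          Relinking candidates always receive the lowest possible priority so
          they stay in the relinking pool; otherwise a better (smaller) average
          block health value yields a higher write priority."""
          if relinking_candidate:
              return 10**6   # arbitrary 'lowest possible' priority sentinel
          return round(sum(health_values) / len(health_values))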
  • the metablock relinker 186 may relink the blocks of the relinking candidates 184 based on the block health values 196 , as described above.
  • the metablock relinker 186 may provide updated linkings 174 to the metablock data table 192 to indicate the updated metablocks resulting from the relinking operation.
  • Updated write priorities 176 may be provided to the free metablock pool 180 to indicate a higher write priority for the updated metablocks that are removed from the relinking pool 182 .
  • Operation of the metablock relinker 186 may be adjusted over the life of the data storage device 102 by adjusting values of one or both of the thresholds 188 , 190 , such as may be determined by the controller 120 or the host device 130 .
  • the thresholds 188 , 190 may have relatively large values to reduce an impact of relinking operations on device performance.
  • one or both of the thresholds 188 , 190 may be decreased to more tightly group the blocks by health value and to increase a frequency of performing relinking operations to further extend the average useful life of the metablocks.
  • the data storage device 102 is able to improve average metablock useful life while avoiding a delay and computational complexity associated with relinking all of the metablocks in the free metablock pool 180 .
  • Metablocks selected to undergo a relinking operation may be limited to the metablocks that satisfy the relinking metric threshold 188 (e.g., by having a relatively large variance in block health) and that therefore are more likely to provide greater improvement of average useful life as a result of relinking as compared to metablocks that do not satisfy the relinking metric threshold 188 (e.g., by having a relatively low variance in block health).
  • a graphical representation 200 illustrates an example of operation of the data storage device 102 of FIG. 1 .
  • An erased metablock in the free metablock pool 180 may be selected for a data write operation and provided to a filled metablock pool 210 .
  • after the data stored at the metablock is erased, the metablock may be returned to the free metablock pool 180 .
  • Erased metablocks identified as relinking candidates may be provided to the relinking pool 182 .
  • a table 250 illustrates an example of contents of the free metablock pool 180 .
  • the table 250 indicates, for each metablock in the free metablock pool 180 , a metablock identifier (MB#), a block number (B#) and health value (HV) for a component block in the first memory die 152 (Die_ 1 ), a block number (B#) and health value (HV) for a component block in the second die 154 (Die_ 2 ), and a block number (B#) and health value (HV) for a component block in the third die 156 (Die_ 3 ).
  • the table 250 also includes a metric value of the metablock relinking metric (RMV) and a write priority for each of the metablocks in the free metablock pool 180 .
  • the free metablock pool 180 includes five metablocks: metablock 1 (MB_ 1 ), metablock 4 (MB_ 4 ), metablock 5 (MB_ 5 ), metablock 9 (MB_ 9 ), and metablock 10 (MB_ 10 ).
  • Metablock 1 (MB_ 1 ) includes block 1 from die 1 , block 1 from die 2 , and block 1 from die 3 .
  • Each of the blocks has a block health value of “1”, which may indicate a relatively high (or highest) block health value. Because the variation among the block health values is zero, the metablock relinking metric has a value of “0” (RMV_ 0 ), and metablock 1 has a high write priority (H).
  • Metablock 4 includes block 4 from die 1 with health value “3,” block 4 from die 2 with health value “5,” and block 6 from die 3 with health value “4.” The metablock relinking metric therefore has a value of “2,” which does not satisfy the relinking threshold, so metablock 4 also has a high write priority (H).
  • Metablock 5 includes block 12 from die 1 with health value “1,” block 5 from die 2 with health value “8,” and block 5 from die 3 with health value “8.”
  • the metablock relinking metric has a value of “7.” Because the metablock relinking metric satisfies the relinking threshold (e.g., the relinking metric of 7 equals or exceeds the relinking threshold of 4), metablock 5 is identified as a relinking candidate and has a low write priority (L).
  • Metablock 9 includes block 9 from die 1 with health value “9” (e.g., indicating a relatively low, or lowest, block health value), block 13 from die 2 with health value “2,” and block 4 from die 3 with health value “5.”
  • the metablock relinking metric has a value of “7.” Because the metablock relinking metric satisfies the relinking threshold (e.g., the relinking metric of 7 equals or exceeds the relinking threshold of 4), metablock 9 is identified as a relinking candidate and has a low write priority.
  • Metablock 10 includes block 10 from die 1 with health value “7,” block 10 from die 2 with health value “7,” and block 14 from die 3 with health value “2.”
  • the metablock relinking metric has a value of “5” and metablock 10 is identified as a relinking candidate having a low write priority.
  • a relinking operation may be initiated when the number of relinking candidates (e.g., three in table 250 ) satisfies a relinking pool threshold (e.g., 3).
  • the relinking operation may include sorting the die 1 blocks of the relinking candidates in order of block health value, such as to generate a first sorted list (from best to worst health) of block 12 , block 10 , and block 9 .
  • the relinking candidate blocks of die 2 may be sorted to generate a second sorted list of block 13 , block 10 , and block 5 .
  • the relinking candidate blocks of die 3 may be sorted to generate a third sorted list of block 14 , block 4 , and block 5 .
  • Results of the relinking operation including the updated metablocks 5 , 9 , and 10 are shown in table 270 . Because no metablock in the free metablock pool 180 has a metablock relinking metric that satisfies the relinking metric threshold, no relinking candidates remain, the relinking pool 182 is empty, and all metablocks in the free metablock pool 180 are assigned a high write priority.
  • Prior to the relinking operation, metablocks 5 , 9 , and 10 may be expected to have a useful life limited by one or two blocks of relatively poor block health.
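  • Feeding the FIG. 2 numbers for relinking candidates 5, 9, and 10 into the relink() sketch above reproduces the sorted lists described: die 1 yields block 12, block 10, block 9; die 2 yields block 13, block 10, block 5; die 3 yields block 14, block 4, block 5. Which metablock identifier each regrouped set keeps in table 270 is not shown in this excerpt, so the grouping below is only positional:

      candidates = {
          1: [(12, 1), (9, 9), (10, 7)],   # die 1 blocks of metablocks 5, 9, 10
          2: [(5, 8), (13, 2), (10, 7)],   # die 2 blocks of metablocks 5, 9, 10
          3: [(5, 8), (4, 5), (14, 2)],    # die 3 blocks of metablocks 5, 9, 10
      }
      updated = relink(candidates)
      print(updated)   # [[12, 13, 14], [10, 10, 4], [9, 5, 5]]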
  • Although FIGS. 1-2 illustrate the memory 104 as including three memory dies 152 - 156 , in other implementations the non-volatile memory may include two memory dies or more than three memory dies.
  • Although metablocks are described as including a single block from each of the memory dies of the memory 104 , in other implementations a metablock may include multiple blocks from one or more of the memory dies, may exclude blocks from one or more of the memory dies, or a combination thereof.
  • one or more of the memory dies 103 may include multiple planes of storage elements, each plane being accessible and erasable independently of the other plane(s), and a metablock may include blocks from multiple planes on the same memory die.
  • the data storage device may include or correspond to the data storage device 102 of FIG. 1 .
  • the data storage device may include memory having multiple memory dies, such as the memory 104 having the multiple dies 103 of FIG. 1 .
  • Each memory die of the multiple memory dies may include multiple blocks of storage elements and metablocks may be defined in the data storage device as groups of blocks from the multiple memory dies.
  • the method 300 may be performed by the controller 120 (e.g., the metablock relinker 186 ) or the memory 104 of the data storage device 102 , or by the host device 130 of FIG. 1 .
  • the method 300 may include receiving a metablock at a free pool, at 302 .
  • the free pool may include or correspond to the free metablock pool 180 of FIG. 1 .
  • the method 300 may also include determining whether the metablock is a relinking candidate, at 304 .
  • the metablock may be associated with (e.g., correspond to) a metablock relinking metric.
  • the metablock relinking metric may be compared to a relinking metric threshold.
  • the metablock may be identified as a relinking candidate when the metablock relinking metric satisfies the relinking metric threshold.
  • when the metablock is not identified as a relinking candidate, the method 300 advances to identify a next metablock, at 312 . The next metablock may be a free metablock to be added to the free pool.
  • when the metablock is identified as a relinking candidate, the method 300 advances to 306 .
  • the metablock is added to the relinking pool and assigned a lowest write priority, at 306 .
  • the relinking pool may include or correspond to the relinking pool 182 of FIG. 1 .
  • the relinking pool may be associated with (e.g., included in) the free pool.
  • the relinking pool may include a designated portion of the free pool.
  • the relinking pool may correspond to one or more entries of the free pool that have a lowest priority.
  • the method 300 may further include determining whether a number of relinking candidates matches or exceeds a relinking pool threshold, at 308 .
  • the number of relinking candidates may be determined and compared to the relinking pool threshold.
  • the number of relinking candidates 172 may be compared to the relinking pool threshold 190 of FIG. 1 .
  • when the number of relinking candidates does not satisfy the relinking pool threshold, the method 300 advances to identify the next metablock, at 312 .
  • when the number of relinking candidates satisfies the relinking pool threshold, the method 300 advances to 310 .
  • Metablocks that are in the relinking pool are relinked and assigned new write priorities, at 310 .
  • one or more of the relinked metablocks may be removed from the relinking pool.
  • the one or more relinked metablocks may be removed from the relinking pool and each of the one or more relinked metablocks may be assigned a higher priority.
  • the method 300 advances to identify the next metablock, at 312 .
  • metablocks identified as relinking candidates may be relinked (e.g., re-grouped) and may result in similar quality blocks being grouped together.
  • metablocks included in the free pool and not in the relinking pool may be available to be selected for write operations. Accordingly, relinking of the metablocks in the relinking pool does not prohibit selection of metablocks that are included in the free pool and not in the relinking pool.
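  • A high-level sketch of the flow of method 300; the function and parameter names are assumptions, and the default thresholds follow the FIG. 2 example values. It simply strings together the steps at 302 through 312 described above:

      def handle_erased_metablock(mb_id, free_pool, relinking_pool,
                                  metric_of, relink_all,
                                  relinking_metric_threshold=4,
                                  relinking_pool_threshold=3):
          """metric_of(mb_id) returns the metablock relinking metric;
          relink_all(pool) performs the relinking of step 310."""
          free_pool.add(mb_id)                                   # 302: receive at free pool
          if metric_of(mb_id) >= relinking_metric_threshold:     # 304: relinking candidate?
              relinking_pool.add(mb_id)                          # 306: lowest write priority
              if len(relinking_pool) >= relinking_pool_threshold:    # 308: enough candidates?
                  relink_all(relinking_pool)                     # 310: relink, reassign priorities
                  relinking_pool.clear()
          # 312: the caller's loop identifies the next metablock.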
  • the data storage device may include or correspond to the data storage device 102 of FIG. 1 .
  • the data storage device may include memory having multiple memory dies, such as the memory 104 having the multiple dies 103 of FIG. 1 .
  • Each memory die of the multiple memory dies may include multiple blocks of storage elements and metablocks may be formed in the data storage device through linking of blocks from the multiple memory dies.
  • the method 400 may be performed by the controller 120 (e.g., the metablock relinker 186 ) or the memory 104 of the data storage device 102 , or by the host device 130 of FIG. 1 .
  • the method 400 includes determining whether a metablock is a metablock update candidate (e.g., a relinking candidate 184 of FIG. 1 ) based on a relinking metric corresponding to the metablock, at 402 .
  • one or more metablocks may be determined to be metablock update candidates based on relinking metrics corresponding to the one or more metablocks, such as upon adding the metablocks to the free metablock pool 180 of FIG. 1 .
  • a value of the relinking metric for a metablock may be determined based on a difference in block health values of the blocks forming the metablock, such as a difference between a largest block health value and a smallest block health value of the blocks forming the metablock.
  • the block health value of each of the blocks may be determined based on a bit error rate, a count of program/erase cycles, a time to complete an erase operation, a time to complete a data program operation, or any combination thereof.
  • the metablock may be determined to be a metablock update candidate based on a comparison of the relinking metric to a relinking metric threshold, such as the relinking metric threshold 188 of FIG. 1 .
  • the metablock (e.g., the metablock identifier) may be added to the relinking pool of a free metablock pool.
  • the metablock update candidate may be added to the relinking pool 182 of the free metablock pool 180 of FIG. 1 .
  • the metablock that is added to the relinking pool may be assigned a lowest possible write priority associated with the free metablock pool. For example, in response to determining that the metablock is a metablock update candidate, a write priority of the metablock may be assigned a value indicating a low write priority.
  • the metablock (e.g., the metablock identifier) may be added to the free metablock pool, but may not be included in the relinking pool.
  • the metablock that is added to the free pool and not included in the relinking pool may be assigned a write priority other than the lowest possible write priority associated with the free metablock pool.
  • the method 400 also includes comparing a number of the metablock update candidates to a relinking pool threshold, at 404 .
  • the number of the metablock update candidates and the relinking pool threshold may include or correspond to the number of relinking candidates 172 and the relinking pool threshold 190 of FIG. 1 , respectively.
  • the method 400 further includes, in response to the number of the metablock update candidates satisfying the relinking pool threshold, updating the linking of the blocks of the metablock update candidates to form updated metablocks, at 406 .
  • updated linkings may be generated for multiple metablock update candidates included in the relinking pool.
  • the linkings of the blocks may be updated by changing data within a field of a metablock data table, such as the metablock data table 192 of FIG. 1 or a field of the table 250 of FIG. 2 .
  • the updated linkings, such as the updated linkings 174 of FIG. 1 , may be provided to and recorded in the metablock data table (e.g., the metablock data table 192 of FIG. 1 ) that tracks which group of blocks from the multiple memory dies is included in (e.g., defines) a corresponding metablock.
  • one or more write priorities associated with the metablock update candidates may be updated (e.g., reassigned). For example, one or more of the updated metablocks may be assigned a write priority other than the lowest possible write priority associated with the free metablock pool such that the one or more updated metablocks are not included in the relinking pool.
  • metablocks identified as metablock update candidates may be relinked (e.g., re-grouped) which may result in similar quality blocks being grouped together.
  • a life of one or more of the updated metablocks may be extended and an average useful life of the metablocks in the data storage device may be improved.
  • the method 300 of FIG. 3 and/or the method 400 of FIG. 4 may be initiated or controlled by an application-specific integrated circuit (ASIC), a processing unit, such as a central processing unit (CPU), a digital signal processor (DSP), a controller, another hardware device, a firmware device, a field-programmable gate array (FPGA) device, or any combination thereof.
  • the method 300 of FIG. 3 and/or the method 400 of FIG. 4 can be initiated or controlled by one or more processors, such as one or more processors included in or coupled to a controller or a memory of the data storage device 102 and/or the host device 130 of FIG. 1 .
  • a controller configured to perform the method 300 of FIG. 3 and/or the method 400 of FIG. 4 may be
  • the controller 120 , the memory 104 , and/or the host 130 of FIG. 1 includes a processor executing instructions that are stored at a memory, such as a non-volatile memory of the data storage device 102 or the host device 130 .
  • executable instructions that are executed by the processor may be stored at a separate memory location that is not part of the non-volatile memory, such as at a read-only memory (ROM) of the data storage device 102 or the host device 130 of FIG. 1 .
  • the processor may execute the instructions to determine whether a metablock is a metablock update candidate based on a relinking metric corresponding to the metablock.
  • the relinking metric may be accessed from a data structure corresponding to the metablock (e.g., a table entry of a metablock data table) and compared to a relinking metric threshold. When the relinking metric satisfies the relinking metric threshold, the metablock may be determined to be a metablock update candidate.
  • the processor may also execute instructions to compare a number of metablock update candidates to a relinking pool threshold.
  • a data structure may include a list of entries corresponding to metablocks that are available for write operations, such as the free metablock pool 180 of FIG. 1 .
  • Each of the entries may include a field storing an indicator of a write priority of the associated metablock.
  • the data structure may be traversed and the indicator of write priority may be read from each entry and compared to a relinking metric threshold.
  • a counter may initialized and, in response to each instance of an indicator of write priority being detected as equaling or exceeding the relinking metric threshold, the counter may be incremented.
  • a resulting count of the entries having write priorities that equal or exceed the relinking metric threshold may be compared to the relinking pool threshold.
  • a dedicated data structure such as the relinking pool 182
  • the number of metablock update candidates may be determined by accessing a property of the dedicated data structure, such as a size of the dedicated data structure or a number of entries included in the dedicated data structure.
  • the processor may execute instructions to, in response to the number of metablock update candidates matching or exceeding the relinking pool threshold, update the linking of the blocks of the metablock update candidates to form updated metablocks. For example, a list of blocks may be generated for each of the memory dies. The list of blocks may be populated with entries by traversing a list of blocks of each of the metablock update candidates (e.g., via accessing the block listings in the metablock data table 192 ) and, for each block of the metablock update candidates, adding an entry to one of the generated lists based on which memory die the block is in. The added entry may indicate a block identifier and a block health value of the block.
  • each of the lists may be sorted based on block health value to result in sorted lists.
  • the list of blocks of each of the metablock update candidates may be updated to include blocks having a common index value in each of the sorted lists. For example, blocks having a first index value (e.g., having highest health values occurring in each of the sorted lists) in each of the lists may be assigned to a first of the metablock update candidates. Blocks having a second index value (e.g., having second-highest health values occurring in each of the sorted lists) in each of the lists may be assigned to a second of the metablock update candidates.
  • Semiconductor memory devices such as the memory 104 (e.g., the first memory die 152 , the second memory die 154 , and/or the third memory die 156 ) of FIG. 1 may include volatile memory devices, such as dynamic random access memory (“DRAM”) or static random access memory (“SRAM”) devices, non-volatile memory devices, such as resistive random access memory (“ReRAM”), electrically erasable programmable read only memory (“EEPROM”), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (“FRAM”), and other semiconductor elements capable of storing information.
  • volatile memory devices such as dynamic random access memory (“DRAM”) or static random access memory (“SRAM”) devices
  • non-volatile memory devices such as resistive random access memory (“ReRAM”), electrically erasable programmable read only memory (“EEPROM”), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (“FRAM”), and other semiconductor elements capable of storing information.
  • The memory devices can be formed from passive and/or active elements, in any combinations. Passive semiconductor memory elements include ReRAM device elements, which in some embodiments include a resistivity switching storage element, such as an anti-fuse, phase change material, etc., and optionally a steering element, such as a diode, etc. Active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements containing a charge storage region, such as a floating gate, conductive nanoparticles, or a charge storage dielectric material.
  • Multiple memory elements may be configured so that they are connected in series or so that each element is individually accessible. For example, flash memory devices in a NAND configuration typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible, e.g., in a NOR memory array. The NAND and NOR memory configurations described here are presented as examples, and memory elements may be otherwise configured.
  • The semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two dimensional memory structure or a three dimensional memory structure. In a two dimensional memory structure, the semiconductor memory elements are arranged in a single plane or a single memory device level; for example, the memory elements may be arranged in a plane (e.g., in an x-z direction plane) that extends substantially parallel to a major surface of a substrate that supports the memory elements. The substrate may be a wafer over or in which the layer of the memory elements is formed, or it may be a carrier substrate that is attached to the memory elements after they are formed. The substrate may include a semiconductor material, such as silicon.
  • The memory elements may be arranged in the single memory device level in an ordered array, such as in a plurality of rows and/or columns. However, the memory elements may be arranged in non-regular or non-orthogonal configurations. The memory elements may each have two or more electrodes or contact lines, such as bit lines and wordlines.
  • A three dimensional memory array is arranged so that memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x, y, and z directions, where the y direction is substantially perpendicular and the x and z directions are substantially parallel to the major surface of the substrate). For example, a three dimensional memory structure may be vertically arranged as a stack of multiple two dimensional memory device levels. As another example, a three dimensional memory array may be arranged as multiple vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y direction), with each column having multiple memory elements. The columns may be arranged in a two dimensional configuration (e.g., in an x-z plane), resulting in a three dimensional arrangement of memory elements with elements arranged on multiple vertically stacked memory planes. Other configurations of memory elements in three dimensions can also constitute a three dimensional memory array.
  • For example, the memory elements may be coupled together to form a NAND string within a single horizontal (e.g., x-z) memory device level. Alternatively, the memory elements may be coupled together to form a vertical NAND string that traverses across multiple horizontal memory device levels. Other three dimensional configurations can be envisioned wherein some NAND strings contain memory elements in a single memory level while other strings contain memory elements that span multiple memory levels. Three dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.
  • In a monolithic three dimensional memory array, typically one or more memory device levels are formed above a single substrate. The monolithic three dimensional memory array may also have one or more memory layers at least partially within the single substrate. The substrate may include a semiconductor material, such as silicon. In a monolithic three dimensional array, the layers constituting each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.
  • Alternatively, two dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, non-monolithic stacked memories can be constructed by forming memory levels on separate substrates and then stacking the memory levels atop each other. Each of the memory device levels may have a corresponding substrate thinned or removed before stacking, and because each of the memory device levels is initially formed over a separate substrate, the resulting memory arrays are not monolithic three dimensional memory arrays. Further, multiple two dimensional memory arrays or three dimensional memory arrays may be formed on separate chips and then packaged together to form a stacked-chip memory device.
  • In some implementations, the memory 104 (e.g., the first memory die 152, the second memory die 154, and/or the third memory die 156) of FIG. 1 is a non-volatile memory having a three-dimensional (3D) memory configuration that is monolithically formed in one or more physical levels of arrays of memory cells having an active area disposed above a silicon substrate. The active area of a memory cell may be an area of the memory cell that is conductively throttled by a charge trap portion of the memory cell. The data storage device 102 and/or the host device 130 of FIG. 1 may include circuitry, such as read/write circuitry, as an illustrative, non-limiting example, associated with operation of the memory cells.
  • Associated circuitry is typically used for operation of the memory elements and for communication with the memory elements. For example, memory devices may have circuitry for controlling and driving memory elements to perform functions such as programming and reading. The associated circuitry may be on the same substrate as the memory elements and/or on a separate substrate. For example, a controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.

Abstract

A method includes, in a data storage device that includes a non-volatile memory having multiple memory dies, determining whether one or more metablocks are metablock update candidates based on relinking metrics corresponding to the one or more metablocks. Each memory die includes multiple blocks of storage elements and metablocks are formed through linking of blocks from the multiple memory dies. The method also includes comparing a number of the metablock update candidates to a relinking pool threshold. The method further includes, in response to the number of the metablock update candidates satisfying the relinking pool threshold, updating the linking of the blocks of the metablock update candidates to form updated metablocks. Linking of blocks may be updated by changing fields of a metablock data table, and blocks may be grouped based on block health values to extend an average useful life of the updated metablocks.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of co-pending U.S. patent application Ser. No. 14/322,781, filed Jul. 2, 2014, which is herein incorporated by reference.
  • FIELD OF THE DISCLOSURE
  • The present disclosure is generally related to data storage devices that link physical blocks of memory to form metablocks.
  • BACKGROUND
  • Non-volatile data storage devices, such as embedded memory devices (e.g., embedded MultiMedia Card (eMMC) devices) and removable memory devices (e.g., removable universal serial bus (USB) flash memory devices and other removable storage cards), have allowed for increased portability of data and software applications. Users of non-volatile data storage devices increasingly rely on the non-volatile storage devices to store and provide rapid access to a large amount of data.
  • Non-volatile data storage devices may include multiple memory dies and may group blocks of multiple dies for fast write performance. For example, a logical grouping of blocks may be referred to as a metablock or a superblock. A linking of the group of blocks included in a metablock is generally static, and, as a result, when one block included in the metablock fails, the whole metablock is identified as unusable. Thus, a life of the metablock may be cut short based on failure of a single block. Although a metablock can be “relinked” to replace a failed block with a spare block (if a spare block is available), data recovery and transfer from the failed block to the spare block is resource intensive and time consuming and may result in diminished performance and non-compliance with designated command response times.
  • SUMMARY
  • The present disclosure describes one or more techniques for dynamically relinking metablocks (e.g., updating the groups of blocks that form the metablocks) in a data storage device. A data storage device may include a controller coupled to a memory that has multiple memory dies. Each memory die may include multiple blocks of storage elements, and metablocks of the data storage device may be defined as groups of blocks from multiple memory dies.
  • An erased metablock may be sent to a free metablock pool to be available for a data write operation, and a determination may be made whether there is a large difference between individual blocks of the metablock in terms of block health. When there is a large difference, the metablock may be identified as a relinking candidate, assigned a low write priority, and provided to a relinking pool associated with the free metablock pool. When a number of metablocks included in the relinking pool reaches a threshold number, a relinking process may be performed to update the linkings of the metablocks that are in the relinking pool. For example, blocks with similar health values may be grouped together to generate updated metablocks. After the relinking process, the updated metablocks may be removed from the relinking pool and reused during subsequent memory operations.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a particular illustrative embodiment of a system including a data storage device configured to relink metablocks;
  • FIG. 2 is a diagram illustrating an example of a process of relinking metablocks;
  • FIG. 3 is a flow diagram of a first illustrative embodiment of a method to relink metablocks; and
  • FIG. 4 is a flow diagram of a second illustrative embodiment of a method to relink metablocks.
  • DETAILED DESCRIPTION
  • Particular embodiments of the present disclosure are described with reference to the drawings. In the description, common features are designated by common reference numbers throughout the drawings.
  • FIG. 1 is a block diagram of a particular illustrative embodiment of a system 100 including a data storage device 102 and a host device 130. The data storage device 102 includes a controller 120 (e.g., a memory controller) coupled to a memory device 104 including multiple memory dies 103. The data storage device 102 is configured to logically link together blocks from the multiple memory dies 103 to define “metablocks” (or “superblocks”) as groups of blocks that span the multiple memory dies 103 for read and write operations. The data storage device 102 may identify metablocks that are metablock update candidates—that is, candidates for block relinking, also referred to as “relinking candidates”—and, when a sufficient number of metablocks are identified as relinking candidates, the data storage device 102 may update a linking of the blocks of the identified metablocks to generate updated metablocks. The blocks may be relinked so that the updated metablocks have an average useful life that is longer than the average useful life of the identified metablocks prior to relinking.
  • The data storage device 102 may be embedded within the host device 130, such as in accordance with an embedded MultiMedia Card (eMMC®) (trademark of Joint Electron Devices Engineering Council (JEDEC) Solid State Technology Association, Arlington, Va.) configuration. Alternatively, the data storage device 102 may be removable from (i.e., “removably” coupled to) the host device 130. For example, the data storage device 102 may be removably coupled to the host device 130 in accordance with a removable universal serial bus (USB) configuration. In some embodiments, the data storage device 102 may include or correspond to a solid state drive (SSD), which may be used as an embedded storage drive, an enterprise storage drive (ESD), or a cloud storage drive (CSD), as illustrative, non-limiting examples.
  • The data storage device 102 may be configured to be coupled to the host device 130 via a communication path 110, such as a wired communication path and/or a wireless communication path. For example, the data storage device 102 may include an interface 108 (e.g., a host interface) that enables communication (via the communication path 110) between the data storage device 102 and the host device 130, such as when the interface 108 is coupled to the host device 130.
  • For example, the data storage device 102 may be configured to be coupled to the host device 130 as embedded memory, such as embedded MultiMedia Card (eMMC®) (trademark of JEDEC Solid State Technology Association, Arlington, Va.) and embedded secure digital (eSD) (Secure Digital (SD®) is a trademark of SD-3C LLC, Wilmington, Del.), as illustrative examples. To illustrate, the data storage device 102 may correspond to an eMMC (embedded MultiMedia Card) device. As another example, the data storage device 102 may correspond to a memory card, such as a Secure Digital (SD®) card, a microSD® card, a miniSD™ card (trademarks of SD-3C LLC, Wilmington, Del.), a MultiMediaCard™ (MMC™) card (trademark of JEDEC Solid State Technology Association, Arlington, Va.), or a CompactFlash® (CF) card (trademark of SanDisk Corporation, Milpitas, Calif.). The data storage device 102 may operate in compliance with a JEDEC industry specification. For example, the data storage device 102 may operate in compliance with a JEDEC eMMC specification, a JEDEC Universal Flash Storage (UFS) specification, one or more other specifications, or a combination thereof.
  • The host device 130 may include a processor and a memory. The memory may be configured to store data and/or instructions that may be executable by the processor. The memory may be a single memory or may include one or more memories, such as one or more non-volatile memories, one or more volatile memories, or a combination thereof. The host device 130 may issue one or more commands to the data storage device 102, such as one or more requests to read data from or write data to the memory 104 of the data storage device 102. For example, the host device 130 may be configured to provide data, such as user data 132, to be stored at the memory 104 or to request data to be read from the memory 104. For example, the user data 132 may have a size that corresponds to a size of a metablock at the data storage device 102 (rather than corresponding to a size of an individual block). The host device 130 may include a mobile telephone, a music player, a video player, a gaming console, an electronic book reader, a personal digital assistant (PDA), a computer, such as a laptop computer or notebook computer, any other electronic device, or any combination thereof. The host device 130 communicates via a memory interface that enables reading from the memory 104 and writing to the memory 104. For example, the host device 130 may operate in compliance with a Joint Electron Devices Engineering Council (JEDEC) industry specification, such as a Universal Flash Storage (UFS) Host Controller Interface specification. As other examples, the host device 130 may operate in compliance with one or more other specifications, such as a Secure Digital (SD) Host Controller specification as an illustrative example. The host device 130 may communicate with the memory 104 in accordance with any other suitable communication protocol.
  • The data storage device 102 includes the controller 120 coupled to the memory 104 that includes the multiple memory dies 103, illustrated as three memory dies 152-156. The controller 120 may be coupled to the memory dies 103 via a bus 106, an interface (e.g., interface circuitry), another structure, or a combination thereof. For example, the bus 106 may include multiple distinct channels to enable the controller 120 to communicate with each of the memory dies 103 in parallel with, and independently of, communication with the other memory dies 103.
  • Each of the memory dies 152-156 includes multiple blocks, illustrated as five blocks per die. For example, the first memory die 152 is illustrated as having five blocks (block 1-1 to block 1-5). However, each of the memory dies 152-156 may have more than five blocks (or fewer than five blocks). Each block may include multiple word lines, and each word line may include (e.g., may be coupled to) multiple storage elements. For example, each storage element may be configured as a single-level cell (SLC, storing one bit per storage element) or a multi-level cell (MLC, storing multiple bits per storage element). In some implementations, each block is an erase unit and data is erasable from the memory 104 according to a block-by-block granularity. One or more of the memory dies 152-156 may include a two dimensional (2D) memory configuration or a three dimensional (3D) memory configuration. The memory 104 may store data, such as the user data 132 or encoded user data, such as a codeword 133, as described further herein.
  • The memory 104 may include support circuitry associated with the memory 104. For example, the memory 104 may be associated with circuitry to support operation of the storage elements of the memory dies 152-156, such as read circuitry 140 and write circuitry 142. Although depicted as separate components, the read circuitry 140 and the write circuitry 142 may be combined into a single component (e.g., hardware and/or software) of the memory 104. Although the read circuitry 140 and the write circuitry 142 are depicted as external to the memory dies 103, each of the individual memory dies 152-156 may include read and write circuitry that is operable to read and/or write from the individual memory die independent of any other read and/or write operations at any of the other memory dies 152-156.
  • The controller 120 is configured to receive data and instructions from and to send data to the host device 130 while the data storage device 102 is operatively coupled to the host device 130. The controller 120 is further configured to send data and commands to the memory 104 and to receive data from the memory 104. For example, the controller 120 is configured to send data and a write command to instruct the memory 104 to store the data to a specified metablock. To illustrate, the controller 120 may be configured to send a first portion of write data and a first physical address (of a first block of the metablock, e.g., block 1-3) to the first memory die 152, a second portion of the write data and a second physical address (of a second block of the metablock, e.g., block 2-3) to the second memory die 154, and a third portion of the write data and a third physical address (of a third block of the metablock, e.g., block 3-3) to the third memory die 156. The controller 120 may be configured to send a read request to the memory 104 to read the first portion from the first physical address in the first memory die 152, to read the second portion from the second physical address in the second memory die 154, and to read the third portion from the third physical address in the third memory die 156.
  • The controller 120 includes a metablock data table 192, a free metablock pool 180, a metablock relinker 186, and a metablock relinking metric generator 194. The metablock data table 192 may correspond to a data structure that tracks metablocks and the blocks that form the metablocks. For example, the controller 120 may define a metablock as a first block from the first memory die 152, a second block from the second memory die 154, and a third block from the third memory die 156. The controller 120 may populate an entry in the metablock data table 192 that associates a metablock identifier with the selected blocks from the memory dies 152-156 that form the metablock, forming a metablock through linking of the selected blocks from the multiple memory dies 152-156. The metablock data table 192 may also include additional information, such as block health values, metablock relinking metrics, one or more other types of data, or a combination thereof.
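As a rough illustration of the bookkeeping described above, the following sketch models a metablock data table entry in Python; the field names, types, and the choice of dictionaries keyed by die index are illustrative assumptions, not the structure used by the controller 120.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MetablockEntry:
    """Hypothetical entry of a metablock data table (field names are assumptions)."""
    metablock_id: int
    blocks: Dict[int, int]                                      # die index -> physical block number (the linking)
    block_health: Dict[int, int] = field(default_factory=dict)  # die index -> block health value
    relinking_metric: int = 0

# Example: a metablock linking block 1-2, block 2-2, and block 3-2.
metablock_data_table: Dict[int, MetablockEntry] = {
    2: MetablockEntry(metablock_id=2, blocks={1: 2, 2: 2, 3: 2}),
}
```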
  • The free metablock pool 180 may be a data structure, such as a prioritized list or table of identifiers of metablocks that are available for storing data. For example, a metablock may be erased by erasing each of the individual blocks that form the metablock, such as via an erase request 162. To illustrate, a metablock may be erased responsive to a command from the host device 130 or based on an internal process at the data storage device 102, such as a garbage collection operation that copies valid data from multiple metablocks into a single metablock and flags the multiple metablocks to be erased (e.g., to be erased as part of a background operation). After a metablock is erased, the metablock may be “added” to the free metablock pool 180 by adding an identifier of the erased metablock to the free metablock pool 180. Although metablocks are described herein as being “added to” the free metablock pool 180, “in” the free metablock pool 180, or “removed from” the free metablock pool 180 for ease of explanation, it should be understood that metablock identifiers or other indications of the metablocks are added to, stored in, or removed from the free metablock pool 180.
  • A write priority may be assigned to or otherwise determined for the metablocks in the free metablock pool 180. Metablocks in the free metablock pool 180 having a higher write priority may be selected for write operations before selection of metablocks in the free metablock pool 180 having a lower write priority. For example, metablocks formed of blocks exhibiting better block “health” (e.g., fewer data errors or having a smaller count of program/erase cycles) may be assigned a higher write priority than metablocks formed of blocks exhibiting lesser health (e.g., having more data errors or having a larger count of program/erase cycles). When a metablock in the free metablock pool 180 is selected for a data write operation, the metablock may be removed from the free metablock pool 180.
  • Metablocks in the free metablock pool 180 that are identified as candidates for relinking (e.g., as a relinking candidate 184) may be added to a relinking pool 182. The relinking pool 182 may include a data structure that includes identifiers of relinking candidates. Metablocks added to the relinking pool 182 may be assigned a low write priority to delay selection of the metablocks for write operations and to enable accumulation of a sufficient number of relinking candidates 184 in the relinking pool 182 to begin a relinking operation at the metablock relinker 186. Alternatively, metablocks added to the relinking pool 182 may be prevented from selection for write operations until after relinking has been performed.
  • Although the relinking pool 182 is described as a data structure within the free metablock pool 180, in other implementations the relinking pool 182 may correspond to a data structure that is separate from the free metablock pool 180. In other implementations, the relinking pool 182 may not correspond to a dedicated data structure and may instead correspond to a logical grouping of metablocks in the free metablock pool 180 that have been assigned the low write priority. For example, a lowest write priority (or a dedicated write priority value) may be assigned to the identified relinking candidate(s) 184, and the relinking pool 182 may correspond to all metablocks in the free metablock pool 180 having the lowest write priority (or the dedicated write priority value).
  • The metablock relinker 186 is configured to determine whether a metablock added to the free metablock pool 180 is a relinking candidate 184. A metablock may be identified as a relinking candidate based on an amount of variation in the health of the blocks that form the metablock. For example, block health values 196 may be determined for each block of the metablock to indicate a relative health of the block. For example, a block health value may be determined based on a bit error rate, a number of program/erase (P/E) cycles, time to complete an erase operation, time to complete a programming operation, one or more other factors associated with the block, or any combination thereof.
  • To illustrate, a bit error rate may be determined during one or more data read operations from the block (e.g., by an error correction coding (ECC) decoder of the data storage device 102), with a higher error rate corresponding to lower health. A count of P/E cycles may be maintained by the data storage device 102 (e.g., such as for use with wear leveling), with higher count of P/E cycles corresponding to lower health. A time to complete an erase operation may correspond to a measured duration between initiating an erase operation and completing the erase operation. As another example, the time to complete the erase operation may correspond to a count of erase pulses applied during the erase operation (such as modified by a magnitude of an erase voltage applied during the erase pulses), with a longer time corresponding to lower health. A time to complete a programming operation may correspond to a measured duration between initiating a programming operation and completing the programming operation. As another example, the time to complete the programming operation may correspond to a count of programming pulses applied during the programming operation (such as modified by a magnitude of a programming voltage applied during the programming pulses), with a longer time corresponding to lower health.
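The following sketch shows one way such factors could be folded into a single block health value; the weights, the scaling of each factor, and the 1-to-9 scale are assumptions chosen to match the example values in FIG. 2, not values given by the disclosure.

```python
def block_health_value(bit_error_rate: float, pe_cycles: int,
                       erase_time_us: float, program_time_us: float) -> int:
    """Combine wear indicators into a single block health value.

    Larger values indicate worse health, matching FIG. 2 where a value of 1 is
    healthiest and 9 is least healthy. All weights are illustrative assumptions.
    """
    score = (bit_error_rate * 1000.0      # higher error rate -> lower health
             + pe_cycles / 500.0          # more program/erase cycles -> lower health
             + erase_time_us / 4000.0     # slower erase -> lower health
             + program_time_us / 2000.0)  # slower programming -> lower health
    return max(1, min(9, round(score)))   # clamp to the 1..9 scale used in FIG. 2
```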
  • A useful life of a metablock may be limited by the shortest useful life of its blocks, which may be predicted using the block health values 196. The metablocks' average useful life may be maximized when all of the blocks of each particular metablock reach the end of their respective useful lives at the same time. The metablock relinking metric generator 194 may be configured to determine a value of a metablock relinking metric for a metablock based on a difference of the block health values of its component blocks. For example, the metablock relinking metric generator 194 may determine the metablock relinking metric by identifying the highest block health value of a component block of a metablock, identifying the lowest block health value of a component block of the metablock, and assigning the metablock relinking metric as a computed difference between the highest block health value and the lowest block health value.
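The metric itself reduces to a max-minus-min computation over the block health values of the metablock; a minimal sketch, assuming the health values are kept in a per-die mapping as in the earlier sketch:

```python
def relinking_metric(block_health: dict) -> int:
    """Difference between the worst (highest) and best (lowest) block health values."""
    values = list(block_health.values())
    return max(values) - min(values)

# From FIG. 2: metablock 5 has health values 1, 8, and 8, giving a metric of 7.
assert relinking_metric({1: 1, 2: 8, 3: 8}) == 7
```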
  • The metablock relinker 186 may be configured to determine whether a metablock is a relinking candidate 184 based on a value of the metablock relinking metric for the metablock. For example, the metablock relinker 186 may be configured to receive the metablock relinking metric and to identify the metablock as a relinking candidate 184 in response to the metablock relinking metric satisfying a relinking metric threshold 188. In some implementations, the metablock relinking metric satisfies the relinking metric threshold 188 when the metablock relinking metric equals or exceeds the relinking metric threshold 188. In other implementations, the metablock relinking metric satisfies the relinking metric threshold 188 when the metablock relinking metric exceeds the relinking metric threshold 188.
  • The metablock relinker 186 may be configured to determine when a sufficient number of the relinking candidates 184 have been identified to perform a relinking operation to the relinking candidates 184. For example, the metablock relinker 186 may be configured to receive or to otherwise determine a number of relinking candidates 172 that are in the relinking pool 182. The metablock relinker 186 may be configured to initiate a relinking operation in response to the number of relinking candidates 172 satisfying a relinking pool threshold 190. In some implementations, the number of relinking candidates 172 satisfies the relinking pool threshold 190 when the number of relinking candidates 172 equals or exceeds the relinking pool threshold 190. In other implementations, the number of relinking candidates 172 satisfies the relinking pool threshold 190 when the number of relinking candidates 172 exceeds the relinking pool threshold 190.
  • In response to determining that a sufficient number of the relinking candidates 184 have been identified, the metablock relinker 186 may be configured to perform a relinking operation on blocks that form the relinking candidates 184. As described further below with reference to FIG. 2, the relinking operation may determine updated groupings of the blocks based on block health values 196. For example, the metablock relinker 186 may be configured to access the block health values 196 corresponding to the blocks of the relinking candidates 184 and to sort the blocks in order of block health value for each memory die 152-156. The metablock relinker 186 may generate updated metablocks by re-grouping the sorted blocks according to the sort order. For example, a first updated metablock may be formed by grouping the first block in the sort order for the first die 152, the first block in the sort order for the second die 154, and the first block in the sort order for the third die 156. A second updated metablock may be formed by grouping the second block in the sort order for the first die 152, the second block in the sort order for the second die 154, and the second block in the sort order for the third die 156. In this manner, the metablock relinker 186 may generate the updated metablocks by relinking blocks from the relinking candidates 184.
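A compact sketch of this per-die sort-and-regroup step follows; the plain-dictionary representation of the candidates and the way regrouped linkings are handed back to the caller are assumptions, and the sketch does not itself update the metablock data table.

```python
from typing import Dict, List

def relink_candidates(blocks: Dict[int, Dict[int, int]],
                      health: Dict[int, Dict[int, int]]) -> Dict[int, Dict[int, int]]:
    """Regroup the blocks of the relinking candidates so that blocks of similar
    health are linked together.

    blocks: metablock_id -> {die: block_number}       (current linkings)
    health: metablock_id -> {die: block_health_value} (lower value = healthier)
    Returns updated linkings: metablock_id -> {die: block_number}.
    """
    if not blocks:
        return {}
    candidate_ids: List[int] = list(blocks)
    dies = sorted(next(iter(blocks.values())))
    # For each die, sort that die's candidate blocks from best to worst health.
    per_die = {
        die: sorted((health[mb][die], blocks[mb][die]) for mb in candidate_ids)
        for die in dies
    }
    # The i-th healthiest block on every die is grouped into the i-th updated metablock.
    return {
        mb: {die: per_die[die][i][1] for die in dies}
        for i, mb in enumerate(candidate_ids)
    }
```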
  • During operation, the controller 120 may perform an erase operation to erase the blocks that form a particular metablock, such as a representative metablock 160 formed of a first block (block 1-2) 162 in the first memory die 152, a second block (block 2-2) 164 in the second memory die 154, and a third block (block 3-2) 166 in the third memory die 156. A metablock identifier 170 of the erased metablock 160 (e.g., metablock “2”) may be provided to the metablock relinker 186. The block health values 196 corresponding to the blocks 162-166 of the erased metablock 160 may be updated, such as based on a time to complete the erase operation at each of the blocks 162-166 or based on a number of errors detected in data most recently read from each of the blocks 162-166 (e.g., when copying data during a garbage collection operation).
  • The metablock relinking metric generator 194 may generate a metablock relinking metric for the erased metablock 160. For example, the metablock relinking metric generator 194 may subtract the lowest block health value of the blocks 162-166 from the highest block health value of the blocks 162-166. The difference between the lowest and highest of the block heath values of the block 162-166 may be provided to the metablock relinker 186 as the metablock relinking metric.
  • The metablock relinker 186 may compare the metablock relinking metric to the relinking metric threshold 188. In response to the metablock relinking metric satisfying the relinking metric threshold 188 (e.g., equaling or exceeding the relinking metric threshold 188), the metablock relinker 186 may determine a low write priority for the metablock 160, designating the metablock 160 as a relinking candidate for the relinking pool 182. Otherwise, in response to the metablock relinking metric not satisfying the relinking metric threshold 188, the metablock relinker 186 may determine a higher write priority for the metablock 160. For example, in an implementation using two write priority values (e.g., “low” and “high”), all metablocks other than the relinking candidates 184 may be assigned the “high” write priority. In other implementations, the write priority may be at least partially based on block health values. For example, the write priority may be based on an average of the block health values of the blocks of a metablock so that metablocks with better average health have a higher write priority than metablocks with lower average health. Data including a metablock identifier and write priority for the metablock 160 may be provided to the free metablock pool 180.
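To make the priority handling concrete, here is a small sketch that merges the two illustrative schemes mentioned above (a two-level scheme and a health-weighted scheme); the numeric priority encoding and the blending of the two schemes are assumptions.

```python
from statistics import mean
from typing import List

LOW_PRIORITY = 0.0    # assumed encoding: relinking candidates wait in the relinking pool
HIGH_PRIORITY = 1.0

def write_priority(metric: int, metric_threshold: int,
                   block_health_values: List[int]) -> float:
    """Return a write priority for a freshly erased metablock.

    Relinking candidates receive the lowest priority so they accumulate in the
    relinking pool; other metablocks receive a priority that improves as their
    average block health value decreases (lower value = healthier block).
    """
    if metric >= metric_threshold:
        return LOW_PRIORITY
    return HIGH_PRIORITY + 1.0 / (1.0 + mean(block_health_values))

# Example with an assumed metric threshold of 4 and the health values of metablock 5 in FIG. 2.
print(write_priority(metric=7, metric_threshold=4, block_health_values=[1, 8, 8]))  # -> 0.0
```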
  • In response to the number of relinking candidates 172 satisfying the relinking pool threshold 190, the metablock relinker 186 may relink the blocks of the relinking candidates 184 based on the block health values 196, as described above. The metablock relinker 186 may provide updated linkings 174 to the metablock data table 192 to indicate the updated metablocks resulting from the relinking operation. Updated write priorities 176 may be provided to the free metablock pool 180 to indicate a higher write priority for the updated metablocks that are removed from the relinking pool 182.
  • Operation of the metablock relinker 186 may be adjusted over the life of the data storage device 102 by adjusting values of one or both of the thresholds 188, 190, such as may be determined by the controller 120 or the host device 130. For example, early in the life of the data storage device 102, the thresholds 188, 190 may have relatively large values to reduce an impact of relinking operations on device performance. As the data storage device 102 approaches a predicted end of its useful life, one or both of the thresholds 188, 190 may be decreased to more tightly group the blocks by health value and to increase a frequency of performing relinking operations to further extend the average useful life of the metablocks.
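One plausible way to taper both thresholds as the device wears is sketched below; the rated endurance, the linear interpolation, and the start and end values are all assumptions used only to illustrate the idea.

```python
def adjusted_thresholds(avg_pe_cycles: int, rated_pe_cycles: int = 3000) -> tuple:
    """Linearly tighten the relinking metric threshold (8 -> 2) and the relinking
    pool threshold (16 -> 4) as average wear approaches the rated endurance.
    All constants are illustrative assumptions."""
    wear = min(1.0, avg_pe_cycles / rated_pe_cycles)
    metric_threshold = round(8 + wear * (2 - 8))
    pool_threshold = round(16 + wear * (4 - 16))
    return metric_threshold, pool_threshold

print(adjusted_thresholds(100))    # early life: (8, 16), relinking is infrequent
print(adjusted_thresholds(2900))   # near end of life: (2, 4), relinking is more aggressive
```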
  • By relinking the metablocks in the relinking pool 182 without also relinking the metablocks in the free metablock pool 180 that are not relinking candidates, the data storage device 102 is able to improve average metablock useful life while avoiding a delay and computational complexity associated with relinking all of the metablocks in the free metablock pool 180. Metablocks selected to undergo a relinking operation may be limited to the metablocks that satisfy the relinking metric threshold 188 (e.g., by having a relatively large variance in block health) and that therefore are more likely to provide greater improvement of average useful life as a result of relinking as compared to metablocks that do not satisfy the relinking metric threshold 188 (e.g., by having a relatively low variance in block health).
  • Referring to FIG. 2, a graphical representation 200 illustrates an example of operation of the data storage device 102 of FIG. 1. An erased metablock in the free metablock pool 180 may be selected for a data write operation and provided to a filled metablock pool 210. Upon erasing of the metablock (e.g., in response to an erase instruction from the host device 130 or during a housekeeping operation in the data storage device 102), the metablock may be returned to the free metablock pool 180. Erased metablocks identified as relinking candidates may be provided to the relinking pool 182.
  • A table 250 illustrates an example of contents of the free metablock pool 180. The table 250 indicates, for each metablock in the free metablock pool 180, a metablock identifier (MB#), a block number (B#) and health value (HV) for a component block in the first memory die 152 (Die_1), a block number (B#) and health value (HV) for a component block in the second die 154 (Die_2), and a block number (B#) and health value (HV) for a component block in the third die 156 (Die_3). The table 250 also includes a metric value of the metablock relinking metric (RMV) and a write priority for each of the metablocks in the free metablock pool 180.
  • As illustrated, the free metablock pool 180 includes five metablocks: metablock 1 (MB_1), metablock 4 (MB_4), metablock 5 (MB_5), metablock 9 (MB_9), and metablock 10 (MB_10). Metablock 1 (MB_1) includes block 1 from die 1, block 1 from die 2, and block 1 from die 3. Each of the blocks has a block health value of “1”, which may indicate a relatively high (or highest) block health value. Because a variation of the block health values is zero, the metablock relinking metric has a value of “0” (RMV_0) and a high write priority (H).
  • Metablock 4 includes block 4 from die 1 with health value "3," block 4 from die 2 with health value "5," and block 6 from die 3 with health value "4." The metablock relinking metric has a value of "2" (i.e., 5 (from die 2) − 3 (from die 1) = 2). Because the metablock relinking metric does not satisfy a relinking threshold (e.g., the relinking metric of 2 is less than a relinking threshold of 4), metablock 4 has a high write priority (H).
  • Metablock 5 includes block 12 from die 1 with health value “1,” block 5 from die 2 with health value “8,” and block 5 from die 3 with health value “8.” The metablock relinking metric has a value of “7.” Because the metablock relinking metric satisfies the relinking threshold (e.g., the relinking metric of 7 equals or exceeds the relinking threshold of 4), metablock 5 is identified as a relinking candidate and has a low write priority (L).
  • Metablock 9 includes block 9 from die 1 with health value “9” (e.g., indicating a relatively low, or lowest, block health value), block 13 from die 2 with health value “2,” and block 4 from die 3 with health value “5.” The metablock relinking metric has a value of “7.” Because the metablock relinking metric satisfies the relinking threshold (e.g., the relinking metric of 7 equals or exceeds the relinking threshold of 4), metablock 9 is identified as a relinking candidate and has a low write priority.
  • Metablock 10 includes block 10 from die 1 with health value “7,” block 10 from die 2 with health value “7,” and block 14 from die 3 with health value “2.” The metablock relinking metric has a value of “5” and metablock 10 is identified as a relinking candidate having a low write priority.
  • A relinking operation may be initiated when the number of relinking candidates (e.g., three in table 250) satisfies a relinking pool threshold (e.g., 3). The relinking operation may include sorting the die 1 blocks of the relinking candidates in order of block health, such as to generate a first sorted list (from best health to worst) of block 12, block 10, and block 9. The die 2 blocks of the relinking candidates may be sorted to generate a second sorted list of block 13, block 10, and block 5. The die 3 blocks of the relinking candidates may be sorted to generate a third sorted list of block 14, block 4, and block 5.
  • Relinking may be performed based on the sort order of the sorted lists. For example, metablock 5 may be relinked to include the first block in the sort order of each of the sorted lists: block 12 of die 1 (HV=1), block 13 of die 2 (HV=2), and block 14 of die 3 (HV=2), resulting in a metablock relinking metric value of "1". Metablock 10 may be relinked to include the second block in the sort order of each of the sorted lists: block 10 of die 1 (HV=7), block 10 of die 2 (HV=7), and block 4 of die 3 (HV=5), resulting in a metablock relinking metric value of "2." Metablock 9 may be relinked to include the third block in the sort order of each of the sorted lists: block 9 of die 1 (HV=9), block 5 of die 2 (HV=8), and block 5 of die 3 (HV=8), resulting in a metablock relinking metric value of "1."
  • Results of the relinking operation including the updated metablocks 5, 9, and 10 are shown in table 270. Because no metablock in the free metablock pool 180 has a metablock relinking metric that satisfies the relinking metric threshold, no relinking candidates remain, the relinking pool 182 is empty, and all metablocks in the free metablock pool 180 are assigned a high write priority.
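The numbers in tables 250 and 270 can be reproduced with the relink_candidates sketch given after the FIG. 1 discussion; which candidate identifier ends up attached to which regrouped set of blocks is an assumption here, since the disclosure does not fix that assignment.

```python
# Relinking candidates from table 250: metablock -> {die: (block_number, health_value)}.
candidates = {
    5:  {1: (12, 1), 2: (5, 8),  3: (5, 8)},
    9:  {1: (9, 9),  2: (13, 2), 3: (4, 5)},
    10: {1: (10, 7), 2: (10, 7), 3: (14, 2)},
}
blocks = {mb: {d: b for d, (b, _) in dies.items()} for mb, dies in candidates.items()}
health = {mb: {d: h for d, (_, h) in dies.items()} for mb, dies in candidates.items()}
block_hv = {(d, b): h for dies in candidates.values() for d, (b, h) in dies.items()}

updated = relink_candidates(blocks, health)   # sketch defined earlier
for mb, linking in updated.items():
    hvs = [block_hv[(die, blk)] for die, blk in linking.items()]
    print(mb, linking, "relinking metric:", max(hvs) - min(hvs))
# The regrouped metablocks have relinking metrics of 1, 2, and 1, as in table 270
# (which identifier keeps which regrouped set of blocks may differ from the figure).
```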
  • Prior to the relinking operation, metablocks 5, 9, and 10 may be expected to have a useful life limited by one or two blocks of relatively poor block health. For example, the useful life of metablock 9 may be expected to end when block 9 on die 1 fails (HV=9), even though the blocks on die 2 and die 3 may have good block health and may otherwise continue to be usable. The useful life of metablock 5 may be expected to end at approximately the same time as metablock 9, when block 5 on die 2 or block 5 on die 3 fails (HV=8), even though the block on die 1 may have good block health and may continue to be usable. After the relinking operation, the useful life of metablock 9 may still be limited by block 9 on die 1 (HV=9), but the useful life of metablock 5 may be extended significantly by the replacement of its weaker blocks with blocks having relatively good block health (HV=2). As a result, an average useful life of the metablocks may be extended.
  • Although FIGS. 1-2 illustrate the memory 104 as including three memory dies 152-156, in other implementations the non-volatile memory may include two memory dies or more than three memory dies. Although metablocks are described as including a single block from each of the memory dies of the memory 104, in other implementations a metablock may include multiple blocks from one or more of the memory dies, may exclude blocks from one or more of the memory dies, or a combination thereof. For example, one or more of the memory dies 103 may include multiple planes of storage elements, each plane being accessible and erasable independently of the other plane(s), and a metablock may include blocks from multiple planes on the same memory die.
  • Referring to FIG. 3, a first illustrative embodiment of a method 300 to relink metablocks associated with a data storage device is shown. The data storage device may include or correspond to the data storage device 102 of FIG. 1. The data storage device may include memory having multiple memory dies, such as the memory 104 having the multiple dies 103 of FIG. 1. Each memory die of the multiple memory dies may include multiple blocks of storage elements and metablocks may be defined in the data storage device as groups of blocks from the multiple memory dies. For example, the method 300 may be performed by the controller 120 (e.g., the metablock relinker 186) or the memory 104 of the data storage device 102, or by the host device 130 of FIG. 1.
  • The method 300 may include receiving a metablock at a free pool, at 302. For example, the free pool may include or correspond to the free metablock pool 180 of FIG. 1.
  • The method 300 may also include determining whether the metablock is a relinking candidate, at 304. The metablock may be associated with (e.g., correspond to) a metablock relinking metric. To determine whether the metablock is the relinking candidate, the metablock relinking metric may be compared to a relinking metric threshold. The metablock may be identified as a relinking candidate when the metablock relinking metric satisfies the relinking metric threshold. When the metablock is not a relinking candidate, the method 300 advances to identify a next metablock, at 312. The next metablock, identified at 312, may be a free metablock to be added to the free pool. When the metablock is a relinking candidate, the method 300 advances to 306.
  • The metablock is added to the relinking pool and assigned a lowest write priority, at 306. The relinking pool may include or correspond to the relinking pool 182 of FIG. 1. The relinking pool may be associated with (e.g., included in) the free pool. In some embodiments, the relinking pool may include a designated portion of the free pool. In other embodiments, the relinking pool may correspond to one or more entries of the free pool that have a lowest priority.
  • The method 300 may further include determining whether a number of relinking candidates matches or exceeds a relinking pool threshold, at 308. The number of relinking candidates may be determined and compared to the relinking pool threshold. For example, the number of relinking candidates 172 may be compared to the relinking pool threshold 190 of FIG. 1. When the number of relinking candidates neither matches nor exceeds the relinking pool threshold, the method 300 advances to identify the next metablock, at 312. When the number of relinking candidates matches or exceeds the relinking pool threshold, the method 300 advances to 310.
  • Metablocks that are in the relinking pool are relinked and assigned new write priorities, at 310. By relinking the metablocks, one or more of the relinked metablocks may be removed from the relinking pool. For example, the one or more relinked metablocks may be removed from the relinking pool and each of the one or more relinked metablocks may be assigned a higher priority. After performing the relinking operation, the method 300 advances to identify the next metablock, at 312.
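The flow of FIG. 3 can be summarized in a few lines of Python; the helper relink_fn and the set-based pools are assumptions standing in for the structures described above.

```python
def handle_erased_metablock(mb_id, metric, free_pool, relinking_pool,
                            metric_threshold, pool_threshold, relink_fn):
    """Steps 302-312 of FIG. 3 for one freshly erased metablock (illustrative only)."""
    free_pool.add(mb_id)                          # 302: receive metablock at the free pool
    if metric >= metric_threshold:                # 304: is it a relinking candidate?
        relinking_pool.add(mb_id)                 # 306: add to relinking pool, lowest priority
        if len(relinking_pool) >= pool_threshold: # 308: enough candidates accumulated?
            relink_fn(relinking_pool)             # 310: relink and assign new write priorities
            relinking_pool.clear()
    # 312: continue with the next metablock
```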
  • Thus, metablocks identified as relinking candidates may be relinked (e.g., re-grouped) and may result in similar quality blocks being grouped together. During relinking of metablocks identified as relinking candidates, metablocks included in the free pool and not in the relinking pool may be available to be selected for write operations. Accordingly, relinking of the metablocks in the relinking pool does not prohibit selection of metablocks that are included in the free pool and not in the relinking pool.
  • Referring to FIG. 4, a second illustrative embodiment of a method 400 to relink blocks to form updated metablocks associated with a data storage device is shown. The data storage device may include or correspond to the data storage device 102 of FIG. 1. The data storage device may include memory having multiple memory dies, such as the memory 104 having the multiple dies 103 of FIG. 1. Each memory die of the multiple memory dies may include multiple blocks of storage elements and metablocks may be formed in the data storage device through linking of blocks from the multiple memory dies. For example, the method 400 may be performed by the controller 120 (e.g., the metablock relinker 186) or the memory 104 of the data storage device 102, or by the host device 130 of FIG. 1.
  • The method 400 includes determining whether a metablock is a metablock update candidate (e.g., a relinking candidate 184 of FIG. 1) based on a relinking metric corresponding to the metablock, at 402. For example, one or more metablocks may be determined to be metablock update candidates based on relinking metrics corresponding to the one or more metablocks, such as upon adding the metablocks to the free metablock pool 180 of FIG. 1. A value of the relinking metric for a metablock may be determined based on a difference in block health values of the blocks forming the metablock, such as a difference between a largest block health value and a smallest block health value of the blocks forming the metablock. For example, the block health value of each of the blocks may be determined based on a bit error rate, a count of program/erase cycles, a time to complete an erase operation, a time to complete a data program operation, or any combination thereof. The metablock may be determined to be a metablock update candidate based on a comparison of the relinking metric to a relinking metric threshold, such as the relinking metric threshold 188 of FIG. 1.
  • When the metablock is determined to be a metablock update candidate, the metablock (e.g., the metablock identifier) may be added to the relinking pool of a free metablock pool. To illustrate, the metablock update candidate may be added to the relinking pool 182 of the free metablock pool 180 of FIG. 1. The metablock that is added to the relinking pool may be assigned a lowest possible write priority associated with the free metablock pool. For example, in response to determining that the metablock is a metablock update candidate, a write priority of the metablock may be assigned a value indicating a low write priority. When the metablock is determined to not be a metablock update candidate, the metablock (e.g., the metablock identifier) may be added to the free metablock pool, but may not be included in the relinking pool. The metablock that is added to the free pool and not included in the relinking pool may be assigned a write priority other than the lowest possible write priority associated with the free metablock pool.
  • The method 400 also includes comparing a number of the metablock update candidates to a relinking pool threshold, at 404. For example, the number of the metablock update candidates and the relinking pool threshold may include or correspond to the number of relinking candidates 172 and the relinking pool threshold 190 of FIG. 1, respectively.
  • The method 400 further includes, in response to the number of the metablock update candidates satisfying the relinking pool threshold, updating the linking of the blocks of the metablock update candidates to form updated metablocks, at 406. For example, updated linkings may be generated for multiple metablock update candidates included in the relinking pool. The number of metablock update candidates may satisfy the relinking pool threshold when the number of metablock update candidates matches or exceeds the relinking pool threshold. Updating the linking of the blocks of the metablock update candidates may be performed by grouping the blocks according to block health value such that an average useful life of the updated metablocks exceeds an average useful life of the metablock update candidates.
  • The linkings of the blocks may be updated by changing data within a field of a metablock data table, such as the metablock data table 192 of FIG. 1 or a field of the table 250 of FIG. 2. The updated linkings, such as the updated linkings 174 of FIG. 1, may be provided to and recorded in the metablock data table (e.g., the metablock data table 192 of FIG. 1), that tracks which group of blocks from the multiple memory dies is included in (e.g., define) a corresponding metablock. Based on the updated linking of the blocks of the metablock update candidates, one or more write priorities associated with the metablock update candidates may be updated (e.g., reassigned). For example, one or more of the updated metablocks may be assigned a write priority other than the lowest possible write priority associated with the free metablock pool such that the one or more updated metablocks are not included in the relinking pool.
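A minimal sketch of recording the updated linkings and reassigning write priorities follows, assuming the dictionary-based table and priority encoding used in the earlier sketches.

```python
def record_updated_linkings(metablock_data_table: dict, write_priorities: dict,
                            updated_linkings: dict, normal_priority: float = 1.0) -> None:
    """Write the new die-to-block mappings into the metablock data table and move
    the updated metablocks out of the relinking pool by raising their priority.
    The structures and the priority encoding are illustrative assumptions."""
    for mb_id, linking in updated_linkings.items():
        metablock_data_table[mb_id] = dict(linking)   # change the linking fields of the table
        write_priorities[mb_id] = normal_priority     # no longer the lowest write priority
```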
  • Thus, metablocks identified as metablock update candidates may be relinked (e.g., re-grouped) which may result in similar quality blocks being grouped together. By grouping similar quality blocks together, a life of one or more of the updated metablocks may be extended and an average useful life of the metablocks in the data storage device may be improved.
  • The method 300 of FIG. 3 and/or the method 400 of FIG. 4 may be initiated or controlled by an application-specific integrated circuit (ASIC), a processing unit, such as a central processing unit (CPU), a digital signal processor (DSP), a controller, another hardware device, a firmware device, a field-programmable gate array (FPGA) device, or any combination thereof. As an example, the method 300 of FIG. 3 and/or the method 400 of FIG. 4 can be initiated or controlled by one or more processors, such as one or more processors included in or coupled to a controller or a memory of the data storage device 102 and/or the host device 130 of FIG. 1. A controller configured to perform the method 300 of FIG. 3 and/or the method 400 of FIG. 4 may be able to relink metablocks.
  • Although various components of the data storage device 102 and the host device 130 of FIG. 1 are illustrated herein as block components and described in general terms, such components may include one or more microprocessors, state machines, or other circuits configured to enable the various components to perform operations described herein. One or more aspects of the various components may be implemented using a microprocessor or microcontroller programmed to perform operations described herein, such as one or more operations of the method 300 of FIG. 3 and/or the method 400 of FIG. 4. In a particular embodiment, the controller 120, the memory 104, and/or the host 130 of FIG. 1 includes a processor executing instructions that are stored at a memory, such as a non-volatile memory of the data storage device 102 or the host device 130. Alternatively or additionally, executable instructions that are executed by the processor may be stored at a separate memory location that is not part of the non-volatile memory, such as at a read-only memory (ROM) of the data storage device 102 or the host device 130 of FIG. 1.
  • In an illustrative example, the processor may execute the instructions to determine whether a metablock is a metablock update candidate based on a relinking metric corresponding to the metablock. For example, a data structure corresponding to the metablock (e.g., a table entry of a metablock data table) may include a field storing a value of a relinking metric corresponding to the metablock. The relinking metric may be accessed and compared to a relinking metric threshold. In response to the relinking metric equaling or exceeding the relinking metric threshold, the metablock may be determined to be a metablock update candidate.
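  • For example, assuming the relinking metric is the spread between the largest and smallest block health values of the linked blocks (one of the examples recited elsewhere in this disclosure), the candidacy test may be sketched as follows; the function and parameter names are hypothetical:

        def is_update_candidate(block_health_values, relinking_metric_threshold):
            # A metablock is a metablock update candidate when its relinking metric
            # equals or exceeds the relinking metric threshold.
            relinking_metric = max(block_health_values) - min(block_health_values)
            return relinking_metric >= relinking_metric_threshold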
  • The processor may also execute instructions to compare a number of metablock update candidates to a relinking pool threshold. For example, a data structure may include a list of entries corresponding to metablocks that are available for write operations, such as the free metablock pool 180 of FIG. 1. Each of the entries may include a field storing an indicator of a write priority of the associated metablock. The data structure may be traversed and the indicator of write priority may be read from each entry and compared to a relinking metric threshold. A counter may be initialized and, in response to each instance of an indicator of write priority being detected as equaling or exceeding the relinking metric threshold, the counter may be incremented. A resulting count of the entries having write priorities that equal or exceed the relinking metric threshold (e.g., a counter value after traversing the data structure) may be compared to the relinking pool threshold. As another example, a dedicated data structure, such as the relinking pool 182, may be populated with entries corresponding to metablock update candidates, and the number of metablock update candidates may be determined by accessing a property of the dedicated data structure, such as a size of the dedicated data structure or a number of entries included in the dedicated data structure.
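  • A sketch of the counting approach, assuming the free metablock pool is a list of entries carrying a hypothetical "write_priority" field (the representation is illustrative only):

        def count_update_candidates(free_metablock_pool, relinking_metric_threshold):
            # Traverse the pool, read the write-priority indicator of each entry, and count
            # the entries whose indicator equals or exceeds the threshold.
            count = 0
            for entry in free_metablock_pool:
                if entry["write_priority"] >= relinking_metric_threshold:
                    count += 1
            return count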
  • The processor may execute instructions to, in response to the number of metablock update candidates matching or exceeding the relinking pool threshold, update the linking of the blocks of the metablock update candidates to form updated metablocks. For example, a list of blocks may be generated for each of the memory dies. The list of blocks may be populated with entries by traversing a list of blocks of each of the metablock update candidates (e.g., via accessing the block listings in the metablock data table 192) and, for each block of the metablock update candidates, adding an entry to one of the generated lists based on which memory die the block is in. The added entry may indicate a block identifier and a block health value of the block. After populating each of the generated lists, each of the lists may be sorted based on block health value to result in sorted lists. The list of blocks of each of the metablock update candidates may be updated to include blocks having a common index value in each of the sorted lists. For example, blocks having a first index value (e.g., having highest health values occurring in each of the sorted lists) in each of the lists may be assigned to a first of the metablock update candidates. Blocks having a second index value (e.g., having second-highest health values occurring in each of the sorted lists) in each of the lists may be assigned to a second of the metablock update candidates.
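  • A sketch of the regrouping step described above, assuming each metablock update candidate is represented as a list of (block_id, block_health) pairs indexed by memory die (a hypothetical representation used only for illustration):

        def relink_by_health(candidates, num_dies):
            # Build one list of (block_id, block_health) entries per memory die.
            per_die = [[] for _ in range(num_dies)]
            for metablock in candidates:
                for die_index, (block_id, health) in enumerate(metablock):
                    per_die[die_index].append((block_id, health))
            # Sort each die's list by block health so blocks of similar health share an index.
            for die_list in per_die:
                die_list.sort(key=lambda item: item[1], reverse=True)
            # Blocks having a common index value across the sorted lists form one updated metablock.
            return [[per_die[die_index][index] for die_index in range(num_dies)]
                    for index in range(len(candidates))]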
  • Semiconductor memory devices, such as the memory 104 (e.g., the first memory die 152, the second memory die 154, and/or the third memory die 156) of FIG. 1, may include volatile memory devices, such as dynamic random access memory (“DRAM”) or static random access memory (“SRAM”) devices, non-volatile memory devices, such as resistive random access memory (“ReRAM”), electrically erasable programmable read only memory (“EEPROM”), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (“FRAM”), and other semiconductor elements capable of storing information. Each type of memory device may have different configurations. For example, flash memory devices may be configured in a NAND or a NOR configuration.
  • The memory devices can be formed from passive and/or active elements, in any combinations. By way of non-limiting example, passive semiconductor memory elements include ReRAM device elements, which in some embodiments include a resistivity switching storage element, such as an anti-fuse, phase change material, etc., and optionally a steering element, such as a diode, etc. Further by way of non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements containing a charge storage region, such as a floating gate, conductive nanoparticles or a charge storage dielectric material.
  • Multiple memory elements may be configured so that they are connected in series or so that each element is individually accessible. By way of non-limiting example, flash memory devices in a NAND configuration (NAND memory) typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible, e.g., in a NOR memory array. NAND and NOR memory configurations described have been presented as examples, and memory elements may be otherwise configured.
  • The semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two dimensional memory structure or a three dimensional memory structure.
  • In a two dimensional memory structure, the semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two dimensional memory structure, memory elements are arranged in a plane (e.g., in an x-z direction plane) which extends substantially parallel to a major surface of a substrate that supports the memory elements. The substrate may be a wafer over or in which the layers of the memory elements are formed, or it may be a carrier substrate which is attached to the memory elements after they are formed. As a non-limiting example, the substrate may include a semiconductor material, such as silicon.
  • The memory elements may be arranged in the single memory device level in an ordered array, such as in a plurality of rows and/or columns. However, the memory elements may be arranged in non-regular or non-orthogonal configurations. The memory elements may each have two or more electrodes or contact lines, such as bit lines and wordlines.
  • A three dimensional memory array is arranged so that memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x, y and z directions, where the y direction is substantially perpendicular and the x and z directions are substantially parallel to the major surface of the substrate).
  • As a non-limiting example, a three dimensional memory structure may be vertically arranged as a stack of multiple two dimensional memory device levels. As another non-limiting example, a three dimensional memory array may be arranged as multiple vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y direction) with each column having multiple memory elements. The columns may be arranged in a two dimensional configuration (e.g., in an x-z plane), resulting in a three dimensional arrangement of memory elements with elements arranged on multiple vertically stacked memory planes. Other configurations of memory elements in three dimensions can also constitute a three dimensional memory array.
  • By way of non-limiting example, in a three dimensional NAND memory array, the memory elements may be coupled together to form a NAND string within a single horizontal (e.g., x-z) memory device level. Alternatively, the memory elements may be coupled together to form a vertical NAND string that traverses across multiple horizontal memory device levels. Other three dimensional configurations can be envisioned wherein some NAND strings contain memory elements in a single memory level while other strings contain memory elements which span multiple memory levels. Three dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.
  • Typically, in a monolithic three dimensional memory array, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array may also have one or more memory layers at least partially within the single substrate. As a non-limiting example, the substrate may include a semiconductor material, such as silicon. In a monolithic three dimensional array, the layers constituting each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.
  • Two dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, non-monolithic stacked memories can be constructed by forming memory levels on separate substrates and then stacking the memory levels atop each other. To illustrate, each of the memory device levels may have a corresponding substrate thinned or removed before stacking the memory device levels to form memory arrays. Because each of the memory device levels is initially formed over a separate substrate, the resulting memory arrays are not monolithic three dimensional memory arrays. Further, multiple two dimensional memory arrays or three dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.
  • In some implementations, the memory 104 (e.g., the first memory die 152, the second memory die 154, and/or the third memory die 156) of FIG. 1 is a non-volatile memory having a three-dimensional (3D) memory configuration that is monolithically formed in one or more physical levels of arrays of memory cells having an active area disposed above a silicon substrate. The active area of a memory cell may be an area of the memory cell that is conductively throttled by a charge trap portion of the memory cell. The data storage device 102 and/or the host device 130 of FIG. 1 may include circuitry, such as read/write circuitry, as an illustrative, non-limiting example, associated with operation of the memory cells.
  • Associated circuitry is typically used for operation of the memory elements and for communication with the memory elements. As non-limiting examples, memory devices may have circuitry for controlling and driving memory elements to perform functions such as programming and reading. The associated circuitry may be on the same substrate as the memory elements and/or on a separate substrate. For example, a controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.
  • One of skill in the art will recognize that this disclosure is not limited to the two dimensional and three dimensional structures described but covers all relevant memory structures within the spirit and scope of the disclosure as described herein and as understood by one of skill in the art.
  • The Abstract of the Disclosure is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments.
  • The illustrations of the embodiments described herein are intended to provide a general understanding of the various embodiments. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments.
  • The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.

Claims (20)

What is claimed is:
1. A data storage device, comprising:
a non-volatile memory including multiple memory dies, wherein each memory die of the multiple memory dies includes multiple blocks of storage elements;
means to form metablocks through linking of blocks from the multiple memory dies;
means to determine whether one or more metablocks are metablock update candidates based on relinking metrics corresponding to the one or more metablocks;
means to compare a number of the metablock update candidates to a threshold; and
means to update the linking of the blocks of the metablock update candidates to form updated metablocks in response to the number of the metablock update candidates satisfying the threshold, wherein the means to form, the means to determine, the means to compare and the means to update are coupled to the non-volatile memory.
2. The data storage device of claim 1, further comprising means to update the linking of the blocks of the metablock update candidates by changing data within one or more fields of a metablock data table, and wherein the data includes identifiers of blocks that are associated with each of the metablock update candidates.
3. The data storage device of claim 1, further comprising means to update the linking of the blocks of the metablock update candidates by grouping the blocks of the metablock update candidates according to block health value so that an average useful life of the updated metablocks exceeds an average useful life of the metablock update candidates.
4. The data storage device of claim 1, wherein the threshold corresponds to a lowest count of the metablock update candidates that is sufficient to initiate a relinking operation, and further comprising means to determine whether the number of the metablock update candidates satisfies the threshold by determining whether the number of the metablock update candidates matches or exceeds the threshold.
5. The data storage device of claim 4, further comprising means to determine a value of the relinking metric of a first metablock based on a difference between a largest block health value of the blocks forming the first metablock and a smallest block health value of the blocks forming the first metablock, and further comprising means to determine the block health value of each of the blocks forming the first metablock based on a time to complete an erase operation, a time to complete a data program operation, or a combination thereof.
6. The data storage device of claim 1, further comprising means to select a first metablock of the one or more metablocks for write operations at least partially based on a write priority of the first metablock, and further comprising means to assign, in response to determining that the first metablock is a metablock update candidate, the write priority of the first metablock to a value indicating a low write priority.
7. The data storage device of claim 1, further comprising means to maintain a free metablock pool and to determine whether a first metablock of the one or more metablocks is a metablock update candidate upon assignment of the first metablock to the free metablock pool, and means to assign the first metablock to a relinking pool in response to determining that the first metablock is a metablock update candidate.
8. The data storage device of claim 1, further comprising means to compare the relinking metric of a first metablock of the one or more metablocks to a relinking metric threshold to determine whether the first metablock is a metablock update candidate.
9. The data storage device of claim 1, wherein the multiple memory dies are coupled in a stacked configuration.
10. The data storage device of claim 1, wherein the non-volatile memory comprises a resistive random access memory (ReRAM).
11. The data storage device of claim 1, wherein the non-volatile memory comprises a flash memory.
12. The data storage device of claim 1, wherein at least one of the multiple memory dies includes a three-dimensional (3D) memory configuration that is monolithically formed in one or more physical levels of arrays of memory cells having an active area disposed above a silicon substrate, and wherein the non-volatile memory includes circuitry associated with operation of the memory cells.
13. A device, comprising:
a non-volatile memory including dies that include blocks of storage elements;
means to determine a value of a relinking metric of a first metablock based on a difference between a largest block health value of the blocks forming the first metablock and a smallest block health value of the blocks forming the first metablock;
means to determine one or more metablock update candidates based on multiple relinking metrics;
means to compare a number of the metablock update candidates to a threshold; and
means to update the linking of the blocks of the metablock update candidates to form updated metablocks in response to the number of the metablock update candidates satisfying the threshold, wherein the means to determine a value of a relinking metric, the means to determine one or more metablock update candidates, the means to compare and the means to update are all coupled to the non-volatile memory.
14. The device of claim 13, further comprising means to update the linking of the blocks of the metablock update candidates by changing data within one or more fields of a metablock data table, and wherein the data includes identifiers of blocks that are associated with each of the metablock update candidates.
15. The device of claim 13, further comprising means to update the linking of the blocks of the metablock update candidates by grouping the blocks of the metablock update candidates according to block health value.
16. The device of claim 13, wherein the threshold corresponds to a lowest count of the metablock update candidates that is sufficient to initiate a relinking operation, and further comprising means to determine whether the number of the metablock update candidates satisfies the threshold by determining whether the number of the metablock update candidates matches or exceeds the threshold.
17. The device of claim 13, wherein at least one of the dies includes a three-dimensional (3D) memory configuration that is monolithically formed in one or more physical levels of arrays of memory cells having an active area disposed above a silicon substrate, and wherein the non-volatile memory includes circuitry associated with operation of the memory cells.
18. A data storage device, comprising:
a non-volatile memory including dies that include blocks of storage elements;
means to receive a metablock relinking metric and to identify the metablock as a relinking candidate;
means to initiate a relinking operation in response to a number of relinking candidates; and
means to access block health values corresponding to blocks of relinking candidates and to sort the blocks in order of block health for each memory die.
19. The data storage device of claim 18, further comprising means to generate a metablock relinking metric for an erased metablock.
20. The data storage device of claim 19, further comprising means to provide updated linkings to a metablock data table to indicate the updated metablocks resulting from a relinking operation.
US15/495,946 2014-07-02 2017-04-24 System and method of updating metablocks Abandoned US20170228180A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/495,946 US20170228180A1 (en) 2014-07-02 2017-04-24 System and method of updating metablocks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/322,781 US9632712B2 (en) 2014-07-02 2014-07-02 System and method of updating metablocks associated with multiple memory dies
US15/495,946 US20170228180A1 (en) 2014-07-02 2017-04-24 System and method of updating metablocks

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/322,781 Continuation US9632712B2 (en) 2014-07-02 2014-07-02 System and method of updating metablocks associated with multiple memory dies

Publications (1)

Publication Number Publication Date
US20170228180A1 true US20170228180A1 (en) 2017-08-10

Family

ID=53674330

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/322,781 Active 2034-08-11 US9632712B2 (en) 2014-07-02 2014-07-02 System and method of updating metablocks associated with multiple memory dies
US15/495,946 Abandoned US20170228180A1 (en) 2014-07-02 2017-04-24 System and method of updating metablocks

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/322,781 Active 2034-08-11 US9632712B2 (en) 2014-07-02 2014-07-02 System and method of updating metablocks associated with multiple memory dies

Country Status (2)

Country Link
US (2) US9632712B2 (en)
WO (1) WO2016004072A1 (en)


Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9626289B2 (en) * 2014-08-28 2017-04-18 Sandisk Technologies Llc Metalblock relinking to physical blocks of semiconductor memory in adaptive wear leveling based on health
US9740425B2 (en) * 2014-12-16 2017-08-22 Sandisk Technologies Llc Tag-based wear leveling for a data storage device
JP6541369B2 (en) * 2015-02-24 2019-07-10 キヤノン株式会社 Data processing apparatus for processing data in memory, data processing method, and program
US10439650B2 (en) * 2015-05-27 2019-10-08 Quantum Corporation Cloud-based solid state device (SSD) with dynamically variable error correcting code (ECC) system
US9886208B2 (en) * 2015-09-25 2018-02-06 International Business Machines Corporation Adaptive assignment of open logical erase blocks to data streams
KR20180064198A (en) * 2016-12-05 2018-06-14 에스케이하이닉스 주식회사 Data storage device and operating method thereof
US10223022B2 (en) * 2017-01-27 2019-03-05 Western Digital Technologies, Inc. System and method for implementing super word line zones in a memory device
JP2018142240A (en) * 2017-02-28 2018-09-13 東芝メモリ株式会社 Memory system
US10649661B2 (en) * 2017-06-26 2020-05-12 Western Digital Technologies, Inc. Dynamically resizing logical storage blocks
US10691540B2 (en) * 2017-11-21 2020-06-23 SK Hynix Inc. Soft chip-kill recovery for multiple wordlines failure
US10387243B2 (en) * 2017-12-08 2019-08-20 Macronix International Co., Ltd. Managing data arrangement in a super block
US10445230B2 (en) * 2017-12-08 2019-10-15 Macronix International Co., Ltd. Managing block arrangement of super blocks
US10949113B2 (en) * 2018-01-10 2021-03-16 SK Hynix Inc. Retention aware block mapping in flash-based solid state drives
US11055002B2 (en) 2018-06-11 2021-07-06 Western Digital Technologies, Inc. Placement of host data based on data characteristics
US10990320B2 (en) * 2019-02-01 2021-04-27 Western Digital Technologies, Inc. Systems and methods to optimally select metablocks
US11581048B2 (en) 2020-11-30 2023-02-14 Cigent Technology, Inc. Method and system for validating erasure status of data blocks
US20230061800A1 (en) * 2021-09-01 2023-03-02 Micron Technology, Inc. Dynamic superblock construction
CN116382598B (en) * 2023-06-05 2023-09-08 深圳大普微电子科技有限公司 Data moving method, flash memory device controller and flash memory device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050144516A1 (en) 2003-12-30 2005-06-30 Gonzalez Carlos J. Adaptive deterministic grouping of blocks into multi-block units
US7433993B2 * 2003-12-30 2008-10-07 Sandisk Corporation Adaptive metablocks
US20080052446A1 (en) 2006-08-28 2008-02-28 Sandisk Il Ltd. Logical super block mapping for NAND flash memory
US20080162612A1 (en) 2006-12-28 2008-07-03 Andrew Tomlin Method for block relinking
US8040744B2 (en) * 2009-01-05 2011-10-18 Sandisk Technologies Inc. Spare block management of non-volatile memories
US8832507B2 (en) * 2010-08-23 2014-09-09 Apple Inc. Systems and methods for generating dynamic super blocks
TWI446345B (en) 2010-12-31 2014-07-21 Silicon Motion Inc Method for performing block management, and associated memory device and controller thereof
US9158670B1 (en) * 2011-06-30 2015-10-13 Western Digital Technologies, Inc. System and method for dynamically adjusting garbage collection policies in solid-state memory
US9098399B2 (en) 2011-08-31 2015-08-04 SMART Storage Systems, Inc. Electronic system with storage management mechanism and method of operation thereof
US9417803B2 (en) * 2011-09-20 2016-08-16 Apple Inc. Adaptive mapping of logical addresses to memory devices in solid state drives
US8700961B2 (en) * 2011-12-20 2014-04-15 Sandisk Technologies Inc. Controller and method for virtual LUN assignment for improved memory bank mapping
US9355929B2 (en) 2012-04-25 2016-05-31 Sandisk Technologies Inc. Data storage based upon temperature considerations
US8953398B2 (en) 2012-06-19 2015-02-10 Sandisk Technologies Inc. Block level grading for reliability and yield improvement
US9430322B2 (en) 2012-08-02 2016-08-30 Sandisk Technologies Llc Device based wear leveling using intrinsic endurance
US9195584B2 (en) * 2012-12-10 2015-11-24 Sandisk Technologies Inc. Dynamic block linking with individually configured plane parameters
US9626289B2 (en) * 2014-08-28 2017-04-18 Sandisk Technologies Llc Metalblock relinking to physical blocks of semiconductor memory in adaptive wear leveling based on health

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10126970B2 (en) * 2015-12-11 2018-11-13 Sandisk Technologies Llc Paired metablocks in non-volatile storage device
US10289550B1 (en) 2016-12-30 2019-05-14 EMC IP Holding Company LLC Method and system for dynamic write-back cache sizing in solid state memory storage
US10338983B2 (en) 2016-12-30 2019-07-02 EMC IP Holding Company LLC Method and system for online program/erase count estimation
US11069418B1 (en) 2016-12-30 2021-07-20 EMC IP Holding Company LLC Method and system for offline program/erase count estimation
US10290331B1 (en) 2017-04-28 2019-05-14 EMC IP Holding Company LLC Method and system for modulating read operations to support error correction in solid state memory
US10403366B1 (en) 2017-04-28 2019-09-03 EMC IP Holding Company LLC Method and system for adapting solid state memory write parameters to satisfy performance goals based on degree of read errors
US10861556B2 (en) 2017-04-28 2020-12-08 EMC IP Holding Company LLC Method and system for adapting solid state memory write parameters to satisfy performance goals based on degree of read errors

Also Published As

Publication number Publication date
WO2016004072A1 (en) 2016-01-07
US9632712B2 (en) 2017-04-25
US20160004464A1 (en) 2016-01-07

Similar Documents

Publication Publication Date Title
US9632712B2 (en) System and method of updating metablocks associated with multiple memory dies
US9740425B2 (en) Tag-based wear leveling for a data storage device
US10304559B2 (en) Memory write verification using temperature compensation
US10572169B2 (en) Scheduling scheme(s) for a multi-die storage device
US9983828B2 (en) Health indicator of a storage device
US10102119B2 (en) Garbage collection based on queued and/or selected write commands
US9880760B2 (en) Managing data stored in a nonvolatile storage device
US20160162215A1 (en) Meta plane operations for a storage device
US9396080B2 (en) Storage module and method for analysis and disposition of dynamically tracked read error events
US9720769B2 (en) Storage parameters for a data storage device
US9626312B2 (en) Storage region mapping for a data storage device
US20160232088A1 (en) Garbage Collection in Storage System with Distributed Processors
US10002042B2 (en) Systems and methods of detecting errors during read operations and skipping word line portions
US9251891B1 (en) Devices and methods to conditionally send parameter values to non-volatile memory
US20150339187A1 (en) System and method of storing redundancy data
US9244858B1 (en) System and method of separating read intensive addresses from non-read intensive addresses
WO2017151474A1 (en) Temperature variation compensation of a memory
US9812209B2 (en) System and method for memory integrated circuit chip write abort indication
US20160141029A1 (en) Health data associated with a resistance-based memory
CN111373383B (en) Memory cache management
US9760481B2 (en) Multiport memory
CN107980126B (en) Method for scheduling multi-die storage device, data storage device and apparatus
US20160070643A1 (en) System and method of counting program/erase cycles
US9870167B2 (en) Systems and methods of storing data
US20170102879A1 (en) Descriptor data management

Legal Events

Date Code Title Description
AS Assignment

Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SANDISK TECHNOLOGIES LLC;REEL/FRAME:042133/0940

Effective date: 20170328

Owner name: SANDISK TECHNOLOGIES INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHEN, ZHENLEI;REEL/FRAME:042133/0938

Effective date: 20140702

Owner name: SANDISK TECHNOLOGIES LLC, TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:SANDISK TECHNOLOGIES INC.;REEL/FRAME:042333/0166

Effective date: 20160516

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION