US10860474B2 - Multilevel addressing - Google Patents

Multilevel addressing

Info

Publication number
US10860474B2
Authority
US
United States
Prior art keywords
segment
segments
storage memory
volatile storage
address
Prior art date
Legal status
Active, expires
Application number
US15/841,378
Other versions
US20190188124A1 (en)
Inventor
Gianfranco Ferrante
Dionisio Minopoli
Current Assignee
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date
Application filed by Micron Technology Inc
Assigned to MICRON TECHNOLOGY, INC. (assignment of assignors interest). Assignors: FERRANTE, GIANFRANCO; MINOPOLI, DIONISIO
Priority to US15/841,378
Assigned to MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT (Supplement No. 7 to patent security agreement). Assignor: MICRON TECHNOLOGY, INC.
Assigned to JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT (security interest). Assignors: MICRON SEMICONDUCTOR PRODUCTS, INC.; MICRON TECHNOLOGY, INC.
Priority to KR1020207020187A
Priority to CN201880078165.7A
Priority to EP18888075.1A
Priority to PCT/US2018/061201
Priority to JP2020532688A
Publication of US20190188124A1
Assigned to MICRON TECHNOLOGY, INC. (release by secured party). Assignor: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT
Assigned to MICRON SEMICONDUCTOR PRODUCTS, INC. and MICRON TECHNOLOGY, INC. (release by secured party). Assignor: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT
Priority to US17/112,268
Publication of US10860474B2
Application granted
Legal status: Active (adjusted expiration)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F 2212/7202 Allocation control and policies
    • G06F 2212/7206 Reconfiguration of flash memory system
    • G06F 2212/7211 Wear leveling

Definitions

  • the present disclosure relates generally to apparatus, such as storage systems, and their operation, and, more particularly, to multilevel addressing.
  • Storage systems may be implemented in electronic systems, such as computers, cell phones, hand-held electronic devices, etc.
  • Some storage systems, such as solid state drives (SSDs), may include non-volatile storage memories for storing user data from a host.
  • Non-volatile storage memories provide persistent data by retaining stored data when not powered and may include cross-point memory and NAND flash memory, among other types of memory that can be written only a particular number of times throughout their lifetime.
  • Storage systems typically perform an initialization procedure to locate information vital to the operation of the storage systems.
  • FIG. 1A is a block diagram of an apparatus, in accordance with a number of embodiments of the present disclosure.
  • FIG. 1B illustrates an example of multiple reads of a storage memory according to a multiple level addressing scheme, in accordance with a number of embodiments of the present disclosure.
  • FIG. 2 is a flowchart of a method of locating vital information, in accordance with a number of embodiments of the present disclosure.
  • FIG. 3A is an example of a fragment of a storage memory, in accordance with a number of embodiments of the present disclosure.
  • FIG. 3B illustrates an example of wear leveling a fragment of a storage memory, in accordance with a number of embodiments of the present disclosure.
  • a starting address corresponding to a location of particular information within a non-volatile storage memory is determined during an initialization process using a multilevel addressing scheme.
  • Using the multilevel addressing scheme may include performing multiple reads of the storage memory at respective address levels to determine the starting address corresponding to the location of the particular information.
  • the actual location in the storage memory of a storage system in which particular information vital to the operation of the storage system, such as initialization information (e.g., vital information), is stored may be addressed directly during the initialization.
  • the initialization may be performed in response to executing instructions, such as firmware, that may specify the address of the actual physical location of the vital information.
  • previous approaches may use single-level, direct addressing to locate the vital information.
  • Some previous approaches may store new vital information to the location at the single address level of a memory each time the storage system is powered down, such as by writing the new vital information to the location during each power down. For example, the previously written vital information at the location may be written over with the new vital information.
  • memory cells in storage memories may be written to a particular number of times during the lifetime of the memory cells, and the number of power downs may be many orders of magnitude (e.g., five orders of magnitude) greater than the particular number of times.
  • storage systems may need to write vital information to a location at a single address level more times than the location can be written during the lifetime of the memory cells at the location.
  • Embodiments of the present disclosure provide a technical advantage over previous approaches: instead of writing to a location at a single address level in the storage memory that is directly specified by an address during initialization of the storage device, they locate the vital information by performing multilevel, indirect addressing.
  • the embodiments disclosed herein determine a starting address corresponding to a location of vital information within a non-volatile storage memory during initialization of the apparatus using a multilevel addressing scheme that may include performing multiple reads of the storage memory.
  • the storage system may use the same initial address that indirectly addresses the location of the vital information during each initialization throughout the lifetime of the apparatus.
  • a disclosed storage system may read an intermediate address from a location having the initial address at an initial address level and read the address of the location of the vital information from a location at the intermediate address at an intermediate address level to determine the location of the vital information at a final address level.
  • the intermediate address stored at the location having the initial address is changed each time a different intermediate-level segment is used, and the final address stored at the intermediate-level segment is changed each time the location of the vital information is changed.
  • the number of times the intermediate address is changed and the number of times the location of the vital information is changed may be selected such that the number of changes at the location having the initial address during the lifetime of the storage device remains below a threshold number of times.
  • the threshold number of times may be the number of times the memory cells at the location having the initial address may be written to throughout their lifetime, meaning that the initial address may remain the same during the lifetime of the storage device.
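The arithmetic behind this claim can be sketched numerically. The endurance and lifetime-write figures below are invented for illustration (the disclosure only says the write load exceeds endurance by roughly five orders of magnitude); the point is that each extra address level divides the write traffic seen by the level above it by the per-segment endurance:

```python
# Hypothetical numbers, not from the patent: assume each segment endures
# at most 100,000 writes, while the vital information is rewritten
# 10**10 times over the drive's life (about 5 orders of magnitude more).
ENDURANCE = 100_000
TOTAL_VITAL_WRITES = 10**10

# Final-level segments 135 absorb the writes directly; a new segment is
# used each time one reaches its endurance limit.
final_segment_changes = TOTAL_VITAL_WRITES // ENDURANCE       # 100,000 changes

# Each change of the final segment is one write of a new final address
# into a segment 138 at the intermediate level, which wears the same way.
intermediate_changes = final_segment_changes // ENDURANCE     # 1 change

# Each intermediate change is, at most, one write reaching the fixed
# segment at the initial address, so that segment stays far below its
# endurance limit and its address can remain fixed for the drive's life.
writes_to_initial_segment = intermediate_changes
```

With these assumed numbers, ten billion rewrites of the vital information produce only a single write at the initial address level, which is why the initial address can stay constant.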
  • FIG. 1A is a block diagram of an apparatus in the form of a computing system 100 in accordance with a number of embodiments of the present disclosure.
  • the computing system 100 includes a storage system 102 that may be, for example, a solid-state drive (SSD).
  • storage system 102 is coupled to a host 104 and includes a storage memory 106 that can be a non-volatile memory, such as a cross-point memory (e.g., a three-dimensional (3D) cross-point memory), among others.
  • A controller 108 (e.g., an SSD controller) of the storage system (e.g., 102 ) is coupled to the storage memory (e.g., 106 ).
  • storage memory 106 may be a 3D cross-point memory that may include cross-point memory cells, such as 3D cross-point memory cells, located at intersections of first and second signal lines (e.g., at intersections of word lines and bit lines) that are used to access the cells.
  • Some cross-point memory cells can be, for example, resistance variable memory cells whose state (e.g., stored data value) depends on the programmed resistance of the memory cell.
  • the memory cells may be resistance-variable memory cells that can be overwritten individually, without first being erased.
  • the memory cells may include a material programmable to different data states.
  • Some resistance variable memory cells can comprise a select element (e.g., a diode, transistor, or other switching device) in series with a storage element (e.g., a phase change material, metal oxide material, and/or some other material programmable to different resistance levels).
  • Some variable resistance memory cells which may be referred to as self-selecting memory cells, comprise a single material that can serve as both a select element and a storage element for the memory cell.
  • each of the memory cells may include a material that may act as a selector material (e.g., a switching material) and a storage material, so that each memory cell may act as both a selector device and a memory element.
  • each memory cell may include a chalcogenide material that may be formed of various doped or undoped materials, that may or may not be a phase-change material, and/or that may or may not undergo a phase change during reading and/or writing the memory cell.
  • each memory cell may include a ternary composition that may include selenium (Se), arsenic (As), and germanium (Ge), a quaternary composition that may include silicon (Si), Se, As, and Ge, etc.
  • storage memory 106 may be arranged in a single tier (e.g., deck) of memory cells or in multiple tiers of memory cells.
  • controller 108 can comprise a state machine, a sequencer, and/or some other type of control circuitry, which may be implemented in the form of an application specific integrated circuit (ASIC) coupled to a printed circuit board.
  • Controller 108 includes an initialization component 110 , a read only memory (ROM) 114 , a wear leveling component 116 , and a mapping component, such as a logical address to a physical address (e.g., a L2P) mapping component 118 .
  • ROM 114 may be a hardware component that includes instructions that may be executed during initialization of storage system 102 .
  • Controller 108 is coupled to volatile memory, such as random access memory (RAM) 112 .
  • Controller 108 is configured to perform the methods disclosed herein, such as initializing storage system 102 , in accordance with a number of embodiments.
  • initialization component 110 performs the methods during the initialization of storage system 102 .
  • Initialization component 110 may initialize storage system 102 by determining a starting address, such as a starting address of logical to physical mapping information, corresponding to a location of information within storage memory 106 that is vital to the operation of storage system 102 , using a multilevel addressing scheme to read storage memory 106 .
  • initialization component 110 may determine the location of the vital information indirectly by performing multiple reads of storage memory 106 while using the multilevel addressing scheme.
  • storage system 102 may not be ready to accept commands, such as read and write commands, from host 104 .
  • storage system 102 may send a ready signal to host 104 to indicate that storage system 102 is ready to accept commands from host 104 .
  • initialization component 110 may initialize storage system 102 in response to executing instructions (e.g., firmware code) stored in storage memory 106 .
  • the initialization component 110 may read the instructions from storage memory 106 into RAM 112 and execute the instructions from the RAM 112 .
  • initialization component 110 may locate the instructions in storage memory 106 during the initialization. For example, initialization component 110 may determine the location of the instructions using a multilevel addressing scheme. For example, initialization component 110 may determine the location of the instructions by performing multiple reads of storage memory 106 while using the multilevel addressing scheme. In some examples, initialization component 110 may perform the multiple reads in response to executing the instructions, such as ROM (e.g., hardware) code, stored in ROM 114 .
  • FIG. 1B illustrates an example of multiple reads of storage memory 106 according to a multiple level addressing scheme, in accordance with a number of embodiments of the present disclosure.
  • controller 108 may assign a number of different address levels to portions of storage memory 106 .
  • An initial (e.g., a highest) address level 120 is assigned to a portion 122 of storage memory 106 .
  • a final (e.g., a lowest) address level 124 is assigned to a portion 128 of storage memory 106 .
  • a number (e.g., one or more) intermediate address levels are assigned between initial address level 120 and final address level 124 .
  • intermediate address level 130 and intermediate address level 132 are assigned to a portion 134 of storage memory 106 .
  • address level 130 may be a higher address level than address level 132 .
  • Portion 134 may be used to store L2P mapping information, such as L2P tables, that may be loaded into L2P mapping component 118 during initialization of storage system 102 .
  • host 104 may send a logical address to controller 108 , corresponding to data, such as user data (e.g., host data) to be written to or read from storage system 102 .
  • L2P mapping component 118 may then map the logical address to a physical address corresponding to a physical location in storage memory 106 , such as in portion 128 .
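The L2P step described above is an ordinary table lookup. A minimal sketch, with made-up addresses (the patent does not specify the table layout):

```python
# Toy L2P table: host logical addresses -> physical segment addresses
# in portion 128. All values are invented for illustration.
l2p_table = {
    0x0000: 0x8_0000,
    0x0001: 0x8_0040,
}

def logical_to_physical(logical_addr):
    """Map a host logical address to a physical address in storage memory,
    as L2P mapping component 118 would after the table is loaded."""
    return l2p_table[logical_addr]
```

During initialization the table itself is read from the set of segments 139, starting from the starting address that the multilevel addressing locates.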
  • Storage memory 106 includes segments of memory cells.
  • a segment is the smallest addressable unit when accessing storage memory 106 .
  • some segments may be 16 bytes or 64 bytes, among others.
  • memory 106 may include segments, such as 512-byte or four-kilobyte segments, etc., for storing user data, such as host data from host 104 , and/or the vital information.
  • Portion 128 includes addressable segments 135 .
  • a set of segments 135 at the final address level 124 is used to store the vital information and/or user data, and segments 135 are addressed using final addresses of the multilevel addressing and/or physical addresses from L2P mappings.
  • L2P mapping component 118 may use the L2P mapping information to map a logical address to a physical address of a segment 135 used to store user data and/or the vital information. Therefore, segments 135 may be referred to as physical blocks.
  • segments 135 may be the smallest addressable unit used by host 104 and may be 512 bytes or 4 kilobytes, etc.
  • the segments in portion 134 , such as segments 137 , 138 , and 139 , and segments 140 in portion 122 may be 16 bytes, 64 bytes, etc., and may be the smallest addressable units. Therefore, the multiple level addressing scheme portrayed in FIG. 1B may use, for example, different sized addressable units at address level 124 than at address levels 120 , 130 , and 132 .
  • Portion 134 includes addressable segments 137 at intermediate address level 130 and addressable segments 138 at intermediate address level 132 .
  • a set of segments 138 is used to store the final addresses of the segments 135 at final address level 124 .
  • a segment 138 T of the set of segments 138 may store the final address of a segment 135 T that stores the vital information.
  • a set of segments 137 is used to store intermediate addresses of the segments 138 .
  • a segment 137 T of the set of segments 137 may store the intermediate address of the segment 138 T at address level 132 .
  • portion 134 may include a set of segments 139 for storing the L2P mapping information.
  • Portion 122 includes segments 140 .
  • segments 140 are used to store intermediate addresses of the segments 137 .
  • a segment 140 T may store the intermediate address of the segment 137 T.
  • a segment 140 B may include a copy (e.g., a backup copy) of the intermediate address stored in segment 140 T.
  • intermediate address level 130 may be omitted, in which case segment 140 T may store the intermediate address of the segment 138 T.
  • the address of segment 140 T may be an initial address of the multiple addressing scheme.
  • the initial address, and thus segment 140 T may be fixed for the lifetime of storage system 102 .
  • the initial address may be stored in a register 111 of component 110 or stored in storage memory 106 , such as in the firmware code.
  • the initial address may be used by initialization component 110 to determine the location of segment 140 T each time storage system 102 is initialized throughout the lifetime of storage system 102 .
  • the initial address may be used to indirectly address (e.g., indirectly determine the location of) segment 135 T via the multiple address levels 120 , 130 , and 132 .
  • the intermediate addresses stored in segment 140 T and used to address segments 137 at intermediate address level 130 may be variable in that they may change in response to using another segment 137 .
  • the intermediate addresses stored in a segment 137 and used to address segments 138 at intermediate address level 132 may be variable in that they may change in response to using another segment 138 .
  • the final addresses stored in a segment 138 and used to address segments 135 at final address level 124 may be variable in that they may change in response to using another segment 135 .
  • Initialization component 110 may initialize storage system 102 . During the initialization, initialization component 110 may perform the method 250 depicted in the flowchart in FIG. 2 to determine the vital information. At block 252 , initialization component 110 reads an address of segment 137 T from segment 140 T in response to the initial address. At block 254 , initialization component 110 reads an address of segment 138 T from segment 137 T. At block 256 , initialization component 110 reads the address of segment 135 T from segment 138 T to determine the location of the vital information. At block 258 , initialization component 110 reads the vital information from segment 135 T.
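Method 250 can be sketched as three dependent reads followed by a final read. Here a dict stands in for storage memory 106, and the segment names are the figure's labels used as stand-in addresses:

```python
# Each key is a segment address, each value is that segment's contents.
memory = {
    "140T": "137T",        # initial level 120: stores address of 137T
    "137T": "138T",        # intermediate level 130: stores address of 138T
    "138T": "135T",        # intermediate level 132: stores address of 135T
    "135T": "vital-info",  # final level 124: the vital information itself
}

def locate_vital_information(memory, initial_address):
    """Blocks 252-258 of method 250: follow the address chain."""
    addr_137t = memory[initial_address]  # block 252
    addr_138t = memory[addr_137t]        # block 254
    addr_135t = memory[addr_138t]        # block 256
    return memory[addr_135t]             # block 258
```

Note that only the initial address is hard-coded; every other address is read from the memory itself, which is what makes the scheme indirect.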
  • the vital information may be a starting address of L2P mapping information.
  • the starting address may be the address of segment 139 S in portion 134 .
  • Initialization component 110 may then read the L2P mapping information, starting from the starting address, into RAM 112 .
  • Initialization component 110 may perform the method, such as method 250 , for determining the vital information in response to instructions stored in storage memory 106 , such as in a set of the segments 135 at address level 124 . As part of the initialization, initialization component 110 may use the multiple reads in FIG. 1B to locate the instructions.
  • Initialization component 110 may execute instructions in ROM 114 to locate the instructions at address level 124 during the initialization. For example, an initial address in ROM 114 may address a segment 140 . Initialization component 110 may read an address of a segment 137 from the segment 140 . Initialization component 110 may read an address of a segment 138 from the segment 137 , and then read the address of a segment 135 that may contain the instructions from the segment 138 . Initialization component 110 may then execute the instructions to locate the vital information, as previously described. In some examples, controller 108 may update the instructions in a segment 135 , so that initialization component 110 may retrieve the updated instructions using the multilevel addressing, as previously described.
  • the segments described previously in conjunction with FIG. 1B can be written a threshold number of times during their lifetime after which time they might be replaced by other segments.
  • the memory cells in a segment, and thus the segment, may be overwritten directly, without first being erased and/or moved, until they are written the threshold number of times.
  • the vital information in a segment 135 may be written during each power down or a number of times between successive power downs of storage system 102 throughout the lifetime of storage system 102 .
  • user data may be written in the segment 135 between successive power downs.
  • the number of power downs may be greater than the threshold number of times a segment 135 can be written. Therefore, the number of times the vital information and user data is written during the lifetime of storage system 102 is expected to be much greater, such as several (e.g., about 5 or greater than 5) orders of magnitude greater, than the threshold number of times a segment 135 can be written.
  • the address of segment 135 T may be changed (e.g., updated) to a new address of a new segment 135 by overwriting the address of segment 135 T with the new address, either in segment 138 T or in one of the segments of the set of segments 139 currently storing L2P mapping information.
  • the new segment 135 may be used to store the vital information or user data until the new segment 135 is written a certain number of times, at which time the new segment is changed to a different new segment by overwriting the address of the new segment 135 with a different new address of the different new segment 135 either into segment 138 T or in a segment of the set of segments 139 currently storing the L2P mapping information.
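The rotation described in the two bullets above can be sketched as follows. The threshold, segment names, and data structures are illustrative assumptions, not the patent's implementation:

```python
THRESHOLD = 3  # illustrative only; real endurance is far larger

write_counts = {}                           # per-segment write counters
segment_138t = {"final_address": "135-A"}   # 138T holds the current final address
free_segments = ["135-B", "135-C"]          # replacement segments 135

def write_vital(info, storage):
    """Write the vital info, retiring the current segment 135 and
    overwriting 138T with a new final address once the threshold is hit."""
    addr = segment_138t["final_address"]
    if write_counts.get(addr, 0) >= THRESHOLD:
        addr = free_segments.pop(0)
        segment_138t["final_address"] = addr  # redirect 138T to fresh segment
    storage[addr] = info
    write_counts[addr] = write_counts.get(addr, 0) + 1

storage = {}
for i in range(5):
    write_vital(f"vital-v{i}", storage)
```

After five writes with this toy threshold, segment 135-A has absorbed three writes and the rotation has redirected 138T to 135-B, which has absorbed the remaining two.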
  • Controller 108 may allocate a number of segments 135 at address level 124 to have a combined number of potential writes that is greater than an expected number of times vital information and user data is written during the lifetime of storage system 102 .
  • the number of allocated segments 135 may be greater than at least the expected number of times vital information and user data is written during the lifetime of storage system 102 divided by an expected number of times each respective segment 135 is to be written during the lifetime of the respective segment 135 .
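The sizing rule in the bullet above is a simple ceiling division. The numbers here are invented; the rule itself (allocated segments at least equal to expected lifetime writes divided by expected writes per segment) is the one stated in the text:

```python
import math

expected_lifetime_writes = 2_500_000  # vital info + user data (assumed figure)
writes_per_segment = 100_000          # expected writes per segment 135 (assumed)

# Minimum number of segments 135 whose combined potential writes cover
# the expected lifetime write load.
min_segments_135 = math.ceil(expected_lifetime_writes / writes_per_segment)
print(min_segments_135)  # 25
```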
  • the controller 108 may keep the number of segments 135 at address level 124 fixed throughout the lifetime of storage system 102 .
  • controller 108 may statically allocate the number of segments 135 .
  • Segment 138 T may be overwritten until it is written the threshold number of times. Therefore, in response to writing to segment 138 T the threshold number of times, the address of segment 138 T in segment 137 T may be changed to a new address of a new segment 138 by overwriting the address of segment 138 T with the new address, thereby changing from segment 138 T to the new segment 138 .
  • Controller 108 may allocate a number of segments 137 of a set of segments 137 at address level 130 , a number of segments 138 of a set of segments 138 at address level 132 , and a number of segments 139 of a set of segments 139 in portion 134 to have a combined number of potential writes that is greater than an expected number of times the vital information and the user data is written during the lifetime of storage system 102 .
  • controller 108 may allocate a number of segments 138 of a set of segments 138 in portion 134 to have a combined number of potential writes that is greater than the number of segments 135 used to store vital information.
  • the number of segments 138 in the set may be determined to be greater than at least the number of segments 135 used to store vital information divided by an expected number of times each respective segment 138 is to be written during the lifetime of the respective segment 138 .
  • the controller 108 may keep the number of segments of the set 138 at address level 132 fixed throughout the lifetime of storage system 102 . For example, controller 108 may statically allocate the number of segments 138 of the set.
  • Segment 137 T may be overwritten until it is written the threshold number of times. Therefore, in response to writing to segment 137 T the threshold number of times, the address of segment 137 T in segment 140 T may be changed to a new address of a new segment 137 by overwriting the address of segment 137 T with the new address, thereby changing from segment 137 T to the new segment 137 .
  • a previous address of a previously written segment 137 in segment 140 T may be changed to a new address of a new segment 137 each time a new segment 137 is used, for example, by overwriting the previous address in segment 140 T with the new address.
  • using multiple address levels as described previously can keep the number of writes to segment 140 T below the threshold number of times segment 140 T can be written, thereby allowing the segment 140 T to be used during the lifetime of storage system 102 .
  • controller 108 may keep track of the number of times the segments 135 , 137 , 138 , and 140 have been written by maintaining a write count that may be stored in segments 135 , 137 , 138 , and 140 or in entries in a table corresponding to the segments 135 , 137 , 138 , and 140 , which table may be stored in storage memory 106 .
  • controller 108 may dynamically assign sets (e.g., fragments) of segments 137 of a number of sets of segments 137 at address level 130 and sets (e.g., fragments) of segments 138 of a number of sets of segments 138 at address level 132 .
  • initialization component 110 may dynamically assign the sets of segments 137 and/or the sets of segments 138 during the initialization, as part of wear leveling performed outside of background operations, in addition to the other wear leveling of storage memory 106 that wear leveling component 116 performs during background operations.
  • initialization component 110 may dynamically assign a different set of segments in response to a previously dynamically assigned set of segments being wear leveled and released. For example, a set of segments may be wear leveled when the segments have been written a common (e.g., the same) number of times.
  • FIG. 3A illustrates an example of a fragment 360 of segments 362 , such as segments 362 - 0 to 362 -N.
  • Fragment 360 may be dynamically assigned at an intermediate address level, such as address level 130 and/or address level 132 , by initialization component 110 during the initialization as part of wear leveling that is not performed during background operations.
  • the fragment 360 might be one of a number of fragments at address level 130 and/or might be one of a number of fragments at address level 132 .
  • Each respective segment 362 includes a storage region 367 .
  • segments 362 - 0 to 362 -N respectively include storage regions 367 - 0 to 367 -N.
  • Each respective segment 362 includes a write count region 369 configured to store a write count WrtCnt.
  • segments 362 - 0 to 362 -N respectively include write count regions 369 - 0 to 369 -N respectively configured to store write counts WrtCnt ( 0 ) to WrtCnt (N).
  • Segments 362 may be addressed by addresses stored at a higher address level.
  • Storage region 367 may store an address of a lower address level and may be overwritten with a new address without being first erased or moved.
  • a respective segment 362 may be addressed by a fragment address and a respective offset stored at a higher address level.
  • a respective write count WrtCnt may be a number of writes performed on the respective segment.
  • the respective write counts WrtCnt ( 0 ) to WrtCnt (N) may be the respective number of writes performed on respective segments 362 - 0 to 362 -N.
  • controller 108 may increment the respective write count WrtCnt each time the respective storage region 367 is overwritten.
  • controller 108 may keep track of a number of overwrites of a respective storage region 367 between when storage system 102 is initialized and about to be powered down and, during each power down, may increment the respective write count WrtCnt by the number of overwrites.
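The deferred bookkeeping in the bullet above, where overwrites are tallied in volatile memory and the persistent write counts WrtCnt are incremented once at power down, can be sketched like this. Names and data structures are illustrative assumptions:

```python
# Persistent WrtCnt values (as stored in write count regions 369).
persistent_write_counts = {"362-0": 40, "362-1": 7}
# Tally kept in RAM 112 while the system is powered.
ram_overwrite_tally = {}

def overwrite(segment, data, storage):
    """Overwrite a storage region 367 and count it in RAM only."""
    storage[segment] = data
    ram_overwrite_tally[segment] = ram_overwrite_tally.get(segment, 0) + 1

def on_power_down():
    """Fold the RAM tallies into the persistent counts, then clear them."""
    for segment, n in ram_overwrite_tally.items():
        persistent_write_counts[segment] = (
            persistent_write_counts.get(segment, 0) + n
        )
    ram_overwrite_tally.clear()

storage = {}
overwrite("362-0", "x", storage)
overwrite("362-0", "y", storage)
overwrite("362-1", "z", storage)
on_power_down()
```

This trades write traffic on the count regions (one persistent update per power cycle instead of one per overwrite) for the risk of losing the tally on an unclean shutdown.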
  • the write count WrtCnt may be omitted from segments 362 and controller 108 may store the write count for each segment 362 in a table that may be stored in storage memory 106 .
  • fragment 360 may be dynamically assigned before fragment 360 has previously been written.
  • each segment may be written the same predetermined number of times and released when each segment is written the same predetermined number of times.
  • Another fragment may be dynamically assigned to replace fragment 360 in response to each segment of fragment 360 being written the same predetermined number of times.
  • fragment 360 may be dynamically assigned after fragment 360 is previously written, for example, after segments 362 - 0 to 362 -N are respectively previously written a respective different number of times.
  • FIG. 3B illustrates an example of wear leveling fragment 360 (e.g., during initialization of storage system 102 ) in accordance with a number of embodiments of the present disclosure.
  • FIG. 3B shows an initial state of fragment 360 when fragment 360 is first dynamically assigned and the corresponding initial values of the write counts WrtCnt ( 0 ) to WrtCnt (N) respectively of segments 362 - 0 to 362 -N.
  • the respective initial values of the write counts are the number of times the respective segments 362 - 0 to 362 -N have been previously written at the time of the assignment.
  • initial values of the write counts WrtCnt ( 0 ) to WrtCnt (N) are respectively M, P, and m.
  • FIG. 3B further shows the wear-leveled state of fragment 360 and the corresponding common wear-leveled value of the write counts WrtCnt ( 0 ) to WrtCnt (N).
  • the write counts WrtCnt ( 0 ) to WrtCnt (N) all have the same wear-leveled value, and fragment 360 is about to be released and subsequently replaced by another fragment 360 .
  • the wear-leveled value may less than the threshold number times the segments can be written during their lifetime.
  • the wear leveling in the example of FIG. 3B includes controller 108 determining the common wear-leveled value of the write counts for the wear-leveled state. Controller 108 may determine the common wear-leveled value by determining which segment 362 of fragment 360 has been previously written the greatest number of times. For example, controller 108 may read the initial values of the write counts (e.g., from the respective write count regions 369 ) and determine the maximum of the initial values of the write counts. For example, controller 108 may determine the maximum of M, P, and m (e.g., M) and the corresponding segment (e.g. segment 362 - 0 ).
  • Controller 108 may then determine the common wear-leveled value by adding a fixed number “Fixed” to M so that the common wear-leveled value is M+Fixed, in which Fixed is a number of times segment 362 - 0 is to be overwritten during the wear leveling.
  • the fixed number is selected so that the common wear-leveled value M+Fixed is less than the threshold number times the segments can be written during their lifetime so that wear-leveled fragment 360 can be dynamically reassigned (e.g., for storing addresses at an intermediate level) after it is released.
  • Controller 108 may then determine the number of times each of the respective remaining segments 362 - 1 to 362 -N is to be overwritten during wear leveling. For example, the respective number of times each respective remaining segment is to be overwritten is the common number of times minus the respective initial value of the write count of the respective remaining segment. For example, segment 326 - 1 may be overwritten Fixed+M ⁇ P times, and segment 326 -N may be overwritten Fixed+M ⁇ m times. Note that the wear-leveled value of each respective write count of each respective remaining segment is the initial value of the respective write count of the respective remaining segment plus the respective number of times the respective remaining segment is to be overwritten.

Abstract

In an example, a starting address corresponding to a location of particular information within a non-volatile storage memory is determined during an initialization process using a multilevel addressing scheme. Using the multilevel addressing scheme may include performing multiple reads of the storage memory at respective address levels to determine the starting address corresponding to the location of the particular information.

Description

TECHNICAL FIELD
The present disclosure relates generally to apparatus, such as storage systems, and their operation, and, more particularly, to multilevel addressing.
BACKGROUND
Storage systems may be implemented in electronic systems, such as computers, cell phones, hand-held electronic devices, etc. Some storage systems, such as solid state drives (SSDs), may include non-volatile storage memories for storing user data from a host. Non-volatile storage memories provide persistent storage by retaining stored data when not powered and may include cross-point memory and NAND flash memory, among other types of memory that can be written a particular number of times throughout their lifetime. Storage systems typically perform an initialization procedure to locate information vital to the operation of the storage systems.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a block diagram of an apparatus, in accordance with a number of embodiments of the present disclosure.
FIG. 1B illustrates an example of multiple reads of a storage memory according to a multiple level addressing scheme, in accordance with a number of embodiments of the present disclosure.
FIG. 2 is a flowchart of a method of locating vital information, in accordance with a number of embodiments of the present disclosure.
FIG. 3A is an example of a fragment of a storage memory, in accordance with a number of embodiments of the present disclosure.
FIG. 3B illustrates an example of wear leveling a fragment of a storage memory, in accordance with a number of embodiments of the present disclosure.
DETAILED DESCRIPTION
In an example, a starting address corresponding to a location of particular information within a non-volatile storage memory is determined during an initialization process using a multilevel addressing scheme. Using the multilevel addressing scheme may include performing multiple reads of the storage memory at respective address levels to determine the starting address corresponding to the location of the particular information.
In previous approaches, the actual location in the storage memory of a storage system in which particular information, such as initialization information (e.g., vital information) vital to the operation of the storage system, is stored may be addressed directly during the initialization. For example, in previous approaches, the initialization may be performed in response to executing instructions, such as firmware, that may specify the address of the actual physical location of the vital information. For instance, previous approaches may use single-level, direct addressing to locate the vital information.
Some previous approaches may store new vital information to the location at the single address level of a memory each time the storage system is powered down, such as by writing the new vital information to the location during each power down. For example, the previously written vital information at the location may be written over with the new vital information.
However, memory cells in storage memories may be written to a particular number of times during the lifetime of the memory cells, and the number of power downs may be many orders of magnitude (e.g., five orders of magnitude) greater than the particular number of times. For instance, in previous approaches, storage systems may need to write vital information to a location at a single address level more times than the location can be written during the lifetime of the memory cells at the location.
Embodiments of the present disclosure provide a technical advantage over previous approaches by performing multilevel, indirect addressing instead of writing to a location at a single address level in the storage memory that is directly specified by an address during initialization of the storage device. For example, the embodiments disclosed herein determine a starting address corresponding to a location of vital information within a non-volatile storage memory during initialization of the apparatus using a multilevel addressing scheme that may include performing multiple reads of the storage memory.
The storage system may use the same initial address that indirectly addresses the location of the vital information during each initialization throughout the lifetime of the apparatus. In some examples, a disclosed storage system may read an intermediate address from a location having the initial address at an initial address level and read the address of the location of the vital information from a location at the intermediate address at an intermediate address level to determine the location of the vital information at a final address level.
In some embodiments, the intermediate address stored at the location having the initial address is overwritten each time the intermediate address is changed, and the intermediate address is changed each time a new intermediate-level location is used. For example, the number of times the intermediate address is changed and the number of times the location of the vital information is changed may be selected such that the number of writes at the location having the initial address during the lifetime of the storage device remains below a threshold number of times. For example, the threshold number of times may be the number of times the memory cells at the location having the initial address can be written throughout their lifetime, meaning that the initial address may remain the same during the lifetime of the storage device.
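As a rough illustration of the amortization described above, consider the following sketch. The threshold and update counts are hypothetical numbers, not values from the disclosure; the point is that each level of indirection divides the write burden on the level above by the per-segment threshold.

```python
# Hypothetical numbers: a per-segment lifetime write threshold T and a
# total count of vital-information updates over the system lifetime.
T = 100_000                 # assumed per-segment write threshold
vital_updates = 10**10      # assumed lifetime vital-information updates

# Each time a final-level segment wears out (every T writes), its address
# is overwritten once at the intermediate level; each time an intermediate
# segment wears out, one write lands at the initial-level segment.
intermediate_writes = vital_updates // T   # writes at the intermediate level
initial_writes = intermediate_writes // T  # writes at the initial-level segment

print(intermediate_writes)  # 100000
print(initial_writes)       # 1 (far below the threshold T)
```

With these assumed numbers, the fixed initial-level segment absorbs only one write while the vital information itself is updated ten billion times.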
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific examples. In the drawings, like numerals describe substantially similar components throughout the several views. Other examples may be utilized and structural and electrical changes may be made without departing from the scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined only by the appended claims and equivalents thereof.
FIG. 1A is a block diagram of an apparatus in the form of a computing system 100 in accordance with a number of embodiments of the present disclosure. The computing system 100 includes a storage system 102 that may be, for example, a solid-state drive (SSD). In the example of FIG. 1A, storage system 102 is coupled to a host 104 and includes a storage memory 106 that can be a non-volatile memory, such as a cross-point memory (e.g., a three-dimensional (3D) cross-point memory), among others. A controller 108 (e.g., an SSD controller), such as a processing device, is coupled to memory 106. As used herein, a storage system (e.g., 102), a controller (e.g., 108), and/or a storage memory (e.g., 106) may separately be considered an “apparatus.”
In some examples, storage memory 106 may be a 3D cross-point memory that may include cross-point memory cells, such as 3D cross-point memory cells, located at intersections of first and second signal lines (e.g., at intersections of word lines and bit lines) that are used to access the cells. Some cross-point memory cells can be, for example, resistance variable memory cells whose state (e.g., stored data value) depends on the programmed resistance of the memory cell. For example, the memory cells may be resistance-variable memory cells that can be overwritten individually, without first being erased. The memory cells may include a material programmable to different data states.
Some resistance variable memory cells can comprise a select element (e.g., a diode, transistor, or other switching device) in series with a storage element (e.g., a phase change material, metal oxide material, and/or some other material programmable to different resistance levels). Some variable resistance memory cells, which may be referred to as self-selecting memory cells, comprise a single material that can serve as both a select element and a storage element for the memory cell. In some examples, each of the memory cells may include a material that may act as a selector material (e.g., a switching material) and a storage material, so that each memory cell may act as both a selector device and a memory element. For example, each memory cell may include a chalcogenide material that may be formed of various doped or undoped materials, that may or may not be a phase-change material, and/or that may or may not undergo a phase change during reading and/or writing the memory cell. In some examples, each memory cell may include a ternary composition that may include selenium (Se), arsenic (As), and germanium (Ge), a quaternary composition that may include silicon (Si), Se, As, and Ge, etc. In some examples, storage memory 106 may be arranged in a single tier (e.g., deck) of memory cells or in multiple tiers of memory cells.
In some examples, the controller 108 can comprise a state machine, a sequencer, and/or some other type of control circuitry, which may be implemented in the form of an application-specific integrated circuit (ASIC) coupled to a printed circuit board. Controller 108 includes an initialization component 110, a read only memory (ROM) 114, a wear leveling component 116, and a mapping component, such as a logical-to-physical (L2P) mapping component 118. In an example, ROM 114 may be a hardware component that includes instructions that may be executed during initialization of storage system 102. Controller 108 is coupled to volatile memory, such as random access memory (RAM) 112.
Controller 108 is configured to perform the methods disclosed herein, such as initializing storage system 102, in accordance with a number of embodiments. For example, initialization component 110 performs the methods during the initialization of storage system 102. Initialization component 110 may initialize storage system 102 by determining a starting address, such as a starting address of logical-to-physical mapping information, corresponding to a location of vital information (i.e., information vital to the operation of storage system 102) within storage memory 106, using a multilevel addressing scheme to read storage memory 106. For example, initialization component 110 may determine the location of the vital information indirectly by performing multiple reads of storage memory 106 while using the multilevel addressing scheme. In some examples, during the initialization, storage system 102 may not be ready to accept commands, such as read and write commands, from host 104. Upon completion of the initialization, storage system 102 may send a ready signal to host 104 to indicate that storage system 102 is ready to accept commands from host 104.
In some examples, initialization component 110 may initialize storage system 102 in response to executing instructions (e.g., firmware code) stored in storage memory 106. For example, the initialization component 110 may read the instructions from storage memory 106 into RAM 112 and execute the instructions from the RAM 112.
In some examples, initialization component 110 may locate the instructions in storage memory 106 during the initialization. For example, initialization component 110 may determine the location of the instructions using a multilevel addressing scheme. For example, initialization component 110 may determine the location of the instructions by performing multiple reads of storage memory 106 while using the multilevel addressing scheme. In some examples, initialization component 110 may perform the multiple reads in response to executing the instructions, such as ROM (e.g., hardware) code, stored in ROM 114.
FIG. 1B illustrates an example of multiple reads of storage memory 106 according to a multiple level addressing scheme, in accordance with a number of embodiments of the present disclosure. For example, controller 108 may assign a number of different address levels to portions of storage memory 106.
An initial (e.g., a highest) address level 120 is assigned to a portion 122 of storage memory 106. A final (e.g., a lowest) address level 124 is assigned to a portion 128 of storage memory 106. A number (e.g., one or more) intermediate address levels are assigned between initial address level 120 and final address level 124. For example, intermediate address level 130 and intermediate address level 132 are assigned to a portion 134 of storage memory 106. For example, address level 130 may be a higher address level than address level 132.
Portion 134 may be used to store L2P mapping information, such as L2P tables, that may be loaded into L2P mapping component 118 during initialization of storage system 102. In an example, host 104 may send a logical address to controller 108, corresponding to data, such as user data (e.g., host data) to be written to or read from storage system 102. L2P mapping component 118 may then map the logical address to a physical address corresponding to a physical location in storage memory 106, such as in portion 128.
Storage memory 106 includes segments of memory cells. A segment is the smallest addressable unit when accessing storage memory 106. In some examples, some segments may be 16 bytes or 64 bytes, among others. In addition, memory 106 may include segments, such as 512-byte or four-kilobyte segments, etc., for storing user data, such as host data from host 104, and/or the vital information.
Portion 128 includes addressable segments 135. For example, a set of segments 135 at the final address level 124 is used to store the vital information and/or user data, and segments 135 are addressed using final addresses of the multilevel addressing and/or physical addresses from L2P mappings. For example, L2P mapping component 118 may use the L2P mapping information to map a logical address to a physical address of a segment 135 used to store user data and/or the vital information. Therefore, segments 135 may be referred to as physical blocks. In some examples, segments 135 may be the smallest addressable unit used by host 104 and may be 512 bytes or 4 kilobytes, etc. However, the segments in portion 134, such as segments 137, 138, and 139, and segments 140 in portion 122 may be 16 bytes, 64 bytes, etc., and may be smallest addressable units. Therefore, the multiple level addressing scheme portrayed in FIG. 1B may use, for example, different sized addressable units at address level 124 than at address levels 120, 130, and 132.
Portion 134 includes addressable segments 137 at intermediate address level 130 and addressable segments 138 at intermediate address level 132. For example, a set of segments 138 is used to store the final addresses of the segments 135 at final address level 124. For example, a segment 138T of the set of segments 138 may store the final address of a segment 135T that stores the vital information.
A set of segments 137 is used to store intermediate addresses of the segments 138. For example, a segment 137T of the set of segments 137 may store the intermediate address of the segment 138T at address level 132. In some examples, portion 134 may include a set of segments 139 for storing the L2P mapping information.
Portion 122 includes segments 140. For example, segments 140 are used to store intermediate addresses of the segments 137. For example, a segment 140T may store the intermediate address of the segment 137T. In some examples, a segment 140B may include a copy (e.g., a backup copy) of the intermediate address stored in segment 140T. In an example, intermediate address level 130 may be omitted, in which case segment 140T may store the intermediate address of the segment 138T. In other examples, there may be other intermediate address levels assigned to portion 134 in addition to intermediate address levels 130 and 132.
The address of segment 140T may be an initial address of the multilevel addressing scheme. The initial address, and thus segment 140T, may be fixed for the lifetime of storage system 102. For example, the initial address may be stored in a register 111 of component 110 or stored in storage memory 106, such as in the firmware code. The initial address may be used by initialization component 110 to determine the location of segment 140T each time storage system 102 is initialized throughout the lifetime of storage system 102. For example, the initial address may be used to indirectly address (e.g., indirectly determine the location of) segment 135T via the multiple address levels 120, 130, and 132.
The intermediate addresses stored in segment 140T and used to address segments 137 at intermediate address level 130 may be variable in that they may change in response to using another segment 137. The intermediate addresses stored in a segment 137 and used to address segments 138 at intermediate address level 132 may be variable in that they may change in response to using another segment 138. The final addresses stored in a segment 138 and used to address segments 135 at final address level 124 may be variable in that they may change in response to using another segment 135.
Initialization component 110 may initialize storage system 102. During the initialization, initialization component 110 may perform the method 250 depicted in the flowchart in FIG. 2 to locate the vital information. At block 252, initialization component 110 reads an address of segment 137T from segment 140T in response to the initial address. At block 254, initialization component 110 reads an address of segment 138T from segment 137T. At block 256, initialization component 110 reads the address of segment 135T from segment 138T to determine the location of the vital information. At block 258, initialization component 110 reads the vital information from segment 135T.
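The chained reads of method 250 can be sketched as follows. This is a minimal model, not the patented implementation; the dictionary-backed memory image and the segment addresses are hypothetical.

```python
def read_segment(memory, addr):
    """Return the contents stored in the segment at address `addr`."""
    return memory[addr]

def locate_vital_info(memory, initial_addr):
    addr_137t = read_segment(memory, initial_addr)  # block 252: initial level 120
    addr_138t = read_segment(memory, addr_137t)     # block 254: intermediate level 130
    addr_135t = read_segment(memory, addr_138t)     # block 256: intermediate level 132
    return read_segment(memory, addr_135t)          # block 258: read vital information

# Hypothetical memory image keyed by segment address:
memory = {0x10: 0x40, 0x40: 0x80, 0x80: 0xC0, 0xC0: "vital-info"}
print(locate_vital_info(memory, 0x10))  # vital-info
```

Only the initial address (0x10 here) is fixed; every other address in the chain can be overwritten to point at a fresh segment without disturbing the entry point.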
The vital information may be a starting address of L2P mapping information. For example, the starting address may be the address of segment 139S in portion 134. Initialization component 110 may then read the L2P mapping information, starting from the starting address, into RAM 112.
Initialization component 110 may perform the method, such as method 250, for determining the vital information in response to instructions stored in storage memory 106, such as in a set of the segments 135 at address level 124. As part of the initialization, initialization component 110 may use the multiple reads in FIG. 1B to locate the instructions.
Initialization component 110 may execute instructions in ROM 114 to locate the instructions at address level 124 during the initialization. For example, an initial address in ROM 114 may address a segment 140. Initialization component 110 may read an address of a segment 137 from the segment 140. Initialization component 110 may read an address of a segment 138 from the segment 137, and then read the address of a segment 135, which may contain the instructions, from the segment 138. Initialization component 110 may then execute the instructions to locate the vital information, as previously described. In some examples, controller 108 may update the instructions in a segment 135, so that initialization component 110 may retrieve the updated instructions using the multilevel addressing, as previously described.
The segments described previously in conjunction with FIG. 1B can be written a threshold number of times during their lifetime after which time they might be replaced by other segments. For example, the memory cells in a segment, and thus the segment, may be overwritten directly, without being first erased and/or moved, until they are written the threshold number of times.
The vital information in a segment 135, such as segment 135T, may be written during each power down, or a number of times between successive power downs, of storage system 102 throughout the lifetime of storage system 102. In addition, user data may be written to the segment 135 between successive power downs. However, the number of power downs may be greater than the threshold number of times a segment 135 can be written. Therefore, the number of times the vital information and user data are written during the lifetime of storage system 102 is expected to be much greater, such as several (e.g., about five or more) orders of magnitude greater, than the threshold number of times a segment 135 can be written. Therefore, in response to writing vital information, such as vital firmware information, or user data to segment 135T a certain number of times, the address of segment 135T may be changed (e.g., updated) to a new address of a new segment 135 by overwriting the address of segment 135T with the new address, either in segment 138T or in one of the segments of the set of segments 139 currently storing L2P mapping information. The new segment 135 may be used to store the vital information or user data until the new segment 135 is written a certain number of times, at which time the new segment 135 is changed to a different new segment 135 by overwriting the address of the new segment 135 with a different new address of the different new segment 135, either in segment 138T or in a segment of the set of segments 139 currently storing the L2P mapping information.
Controller 108 may allocate a number of segments 135 at address level 124 to have a combined number of potential writes that is greater than an expected number of times vital information and user data is written during the lifetime of storage system 102. For example, the number of allocated segments 135 may be greater than at least the expected number of times vital information and user data is written during the lifetime of storage system 102 divided by an expected number of times each respective segment 135 is to be written during the lifetime of the respective segment 135. The controller 108 may keep the number of segments 135 at address level 124 fixed throughout the lifetime of storage system 102. For example, controller 108 may statically allocate the number of segments 135.
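The sizing rule above amounts to a ceiling division. A sketch with assumed numbers (neither value comes from the disclosure):

```python
import math

expected_writes = 10**10       # assumed lifetime writes of vital info and user data
writes_per_segment = 100_000   # assumed expected writes per segment 135

# Statically allocate enough segments that their combined write budget
# covers the expected lifetime writes.
num_segments = math.ceil(expected_writes / writes_per_segment)
assert num_segments * writes_per_segment >= expected_writes
print(num_segments)  # 100000
```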
Segment 138T may be overwritten until it is written the threshold number of times. Therefore, in response to writing to segment 138T the threshold number of times, the address of segment 138T in segment 137T may be changed to a new address of a new segment 138 by overwriting the address of segment 138T with the new address, thereby changing from segment 138T to the new segment 138.
Controller 108 may allocate a number of segments 137 of a set of segments 137 at address level 130, a number of segments 138 of a set of segments 138 at address level 132 and a number of segments 139 of a set of segments 139 in portion 134 to have a combined number of potential writes that is greater than an expected number of times the vital information and the user data is written during the lifetime of storage system 102. For example, controller 108 may allocate a number of segments 138 of a set of segments 138 in portion 134 to have a combined number of potential writes that is greater than the number of segments 135 used to store vital information. The number of segments 138 in the set, for example, may be determined to be greater than at least the number of segments 135 used to store vital information divided by an expected number of times each respective segment 138 is to be written during the lifetime of the respective segment 138. The controller 108 may keep the number of segments of the set 138 at address level 132 fixed throughout the lifetime of storage system 102. For example, controller 108 may statically allocate the number of segments 138 of the set.
Segment 137T may be overwritten until it is written the threshold number of times. Therefore, in response to writing to segment 137T the threshold number of times, the address of segment 137T in segment 140T may be changed to a new address of a new segment 137 by overwriting the address of segment 137T with the new address, thereby changing from segment 137T to the new segment 137. A previous address of a previously written segment 137 in segment 140T may be changed to a new address of a new segment 137 each time a new segment 137 is used, for example, by overwriting the previous address in segment 140T with the new address. However, using multiple address levels as described previously can keep the number of writes to segment 140T below the threshold number of times segment 140T can be written, thereby allowing the segment 140T to be used during the lifetime of storage system 102.
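The replacement step just described can be sketched as follows. The addresses, the threshold value, and the free-segment list are hypothetical; the controller's actual bookkeeping is not specified here.

```python
THRESHOLD = 3  # assumed per-segment write limit for this small example

def current_child(memory, counts, parent_addr, free_segments):
    """Return the active child segment for `parent_addr`, replacing it
    with a fresh segment (and overwriting the address stored in the
    parent) once the child has been written THRESHOLD times."""
    child = memory[parent_addr]
    if counts.get(child, 0) >= THRESHOLD:
        child = free_segments.pop(0)   # pick a new segment (e.g., a new 137)
        memory[parent_addr] = child    # overwrite the address in the parent
        counts[parent_addr] = counts.get(parent_addr, 0) + 1
    return child

memory = {0x10: 0x40}   # 0x10 plays the role of segment 140T
counts = {0x40: 3}      # the current child segment is worn out
child = current_child(memory, counts, 0x10, [0x44])
print(hex(child))         # 0x44: a new segment replaces the old one
print(hex(memory[0x10]))  # 0x44: the parent now stores the new address
print(counts[0x10])       # 1 write accrued at the parent level
```

Note that one write at the parent level pays for a full threshold's worth of writes at the child level, which is the amortization that keeps segment 140T usable for the system's lifetime.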
In some examples, controller 108 may keep track of the number of times the segments 135, 137, 138, and 140 have been written by maintaining a write count that may be stored in segments 135, 137, 138, and 140 or in entries in a table corresponding to the segments 135, 137, 138, and 140, which table may be stored in storage memory 106.
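A table-based write count of the kind described can be sketched as a mapping from segment address to count. This is a simplification; persisting the table to storage memory 106 is omitted.

```python
from collections import defaultdict

write_counts = defaultdict(int)  # one entry per segment address

def record_overwrite(segment_addr):
    """Increment the write count each time a segment is overwritten."""
    write_counts[segment_addr] += 1

record_overwrite(0x80)
record_overwrite(0x80)
print(write_counts[0x80])  # 2
```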
In some examples, controller 108 may dynamically assign sets (e.g., fragments) of segments 137 of a number of sets of segments 137 at address level 130 and sets (e.g., fragments) of segments 138 of a number of sets of segments 138 at address level 132. For example, initialization component 110 may dynamically assign the sets of segments 137 and/or the sets of segments 138 during the initialization, as part of wear leveling performed during initialization rather than during background operations, in addition to the other wear leveling of storage memory 106 that wear leveling component 116 performs during background operations. In some examples, initialization component 110 may dynamically assign a different set of segments in response to a previously dynamically assigned set of segments being wear leveled and released. For example, a set of segments may be wear leveled when the segments have been written a common (e.g., the same) number of times.
FIG. 3A illustrates an example of a fragment 360 of segments 362, such as segments 362-0 to 362-N. Fragment 360 may be dynamically assigned at an intermediate address level, such as address level 130 and/or address level 132, by initialization component 110 during the initialization, as part of wear leveling performed during initialization rather than during background operations. For example, the fragment 360 might be one of a number of fragments at address level 130 and/or might be one of a number of fragments at address level 132.
Each respective segment 362 includes a storage region 367. For example, segments 362-0 to 362-N respectively include storage regions 367-0 to 367-N. Each respective segment 362 includes a write count region 369 configured to store a write count WrtCnt. For example, segments 362-0 to 362-N respectively include write count regions 369-0 to 369-N respectively configured to store write counts WrtCnt (0) to WrtCnt (N).
Segments 362 may be addressed by addresses stored at a higher address level. Storage region 367 may store an address of a lower address level and may be overwritten with a new address without being first erased or moved. In some examples, a respective segment 362 may be addressed by a fragment address and a respective offset stored at a higher address level.
A respective write count WrtCnt may be a number of writes performed on the respective segment. For example, the respective write counts WrtCnt (0) to WrtCnt (N) may be the respective number of writes performed on respective segments 362-0 to 362-N. In some examples, controller 108 may increment the respective write count WrtCnt each time the respective storage region 367 is overwritten. In other examples, controller 108 may keep track of a number of overwrites of a respective storage region 367 between when storage system 102 is initialized and about to be powered down and, during each power down, may increment the respective write count WrtCnt by the number of overwrites. In an example, the write count WrtCnt may be omitted from segments 362 and controller 108 may store the write count for each segment 362 in a table that may be stored in storage memory 106.
In some examples, fragment 360 may be dynamically assigned before fragment 360 has been previously written. In such examples, each segment may be written the same predetermined number of times, and fragment 360 may be released once each segment has been written that predetermined number of times. Another fragment may be dynamically assigned to replace fragment 360 in response to each segment of fragment 360 being written the same predetermined number of times.
In other examples, fragment 360 may be dynamically assigned after fragment 360 is previously written, for example, after segments 362-0 to 362-N are respectively previously written a respective different number of times. FIG. 3B illustrates an example of wear leveling fragment 360 (e.g., during initialization of storage system 102) in accordance with a number of embodiments of the present disclosure.
FIG. 3B shows an initial state of fragment 360 when fragment 360 is first dynamically assigned and the corresponding initial values of the write counts WrtCnt (0) to WrtCnt (N) respectively of segments 362-0 to 362-N. For example, the respective initial values of the write counts are the number of times the respective segments 362-0 to 362-N have been previously written at the time of the assignment. In the example of FIG. 3B, initial values of the write counts WrtCnt (0) to WrtCnt (N) are respectively M, P, and m.
FIG. 3B further shows the wear-leveled state of fragment 360 and the corresponding common wear-leveled value of the write counts WrtCnt (0) to WrtCnt (N). For example, at the wear-leveled state, the write counts WrtCnt (0) to WrtCnt (N) all have the same wear-leveled value, and fragment 360 is about to be released and subsequently replaced by another fragment 360. For example, the wear-leveled value may be less than the threshold number of times the segments can be written during their lifetime.
The wear leveling in the example of FIG. 3B includes controller 108 determining the common wear-leveled value of the write counts for the wear-leveled state. Controller 108 may determine the common wear-leveled value by determining which segment 362 of fragment 360 has been previously written the greatest number of times. For example, controller 108 may read the initial values of the write counts (e.g., from the respective write count regions 369) and determine the maximum of the initial values of the write counts. For example, controller 108 may determine the maximum of M, P, and m (e.g., M) and the corresponding segment (e.g. segment 362-0).
Controller 108 may then determine the common wear-leveled value by adding a fixed number “Fixed” to M so that the common wear-leveled value is M+Fixed, in which Fixed is a number of times segment 362-0 is to be overwritten during the wear leveling. In some examples, the fixed number is selected so that the common wear-leveled value M+Fixed is less than the threshold number of times the segments can be written during their lifetime, so that wear-leveled fragment 360 can be dynamically reassigned (e.g., for storing addresses at an intermediate level) after it is released.
Controller 108 may then determine the number of times each of the respective remaining segments 362-1 to 362-N is to be overwritten during wear leveling. For example, the respective number of times each respective remaining segment is to be overwritten is the common wear-leveled value minus the respective initial value of the write count of the respective remaining segment. For example, segment 362-1 may be overwritten Fixed+M−P times, and segment 362-N may be overwritten Fixed+M−m times. Note that the wear-leveled value of each respective write count of each respective remaining segment is the initial value of the respective write count of the respective remaining segment plus the respective number of times the respective remaining segment is to be overwritten. For example, the wear-leveled value of the write count WrtCnt (1) of segment 362-1 is P+Fixed+M−P=Fixed+M, and the wear-leveled value of the write count WrtCnt (N) of segment 362-N is m+Fixed+M−m=Fixed+M.
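The wear-leveling computation described above, finding the maximum initial write count, adding the fixed number, and deriving the per-segment overwrite counts, can be sketched as follows (an illustrative sketch only; the function name, argument names, and the example counts M=100, P=60, m=10 with Fixed=5 are hypothetical):

```python
# Sketch of the wear-leveling computation: the common wear-leveled value
# is max(initial counts) + Fixed, and each segment is overwritten
# (common - its initial count) times so all counts converge.

def wear_level_counts(initial_counts, fixed):
    """Return the common wear-leveled value and, per segment, the number
    of overwrites needed to bring its write count up to that value."""
    max_count = max(initial_counts)      # most-written segment, e.g. M
    common = max_count + fixed           # common wear-leveled value M+Fixed
    overwrites = [common - c for c in initial_counts]
    return common, overwrites

# Hypothetical initial counts M=100, P=60, m=10 with Fixed=5:
common, extra = wear_level_counts([100, 60, 10], fixed=5)
assert common == 105            # M + Fixed
assert extra == [5, 45, 95]     # Fixed, Fixed+M-P, Fixed+M-m
```

After applying the overwrites, every segment's count equals `common`, matching the identity P+(Fixed+M−P)=Fixed+M worked through above.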
Although specific examples have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. The scope of one or more examples of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

Claims (21)

What is claimed is:
1. A method, comprising:
determining a starting address corresponding to a location of particular information within a non-volatile storage memory of an apparatus during an initialization process using a multilevel addressing scheme, wherein using the multilevel addressing scheme comprises:
performing multiple reads of the storage memory at respective address levels to determine the starting address corresponding to the location of the particular information;
wherein the respective address levels comprise an initial address level assigned to a first portion of the non-volatile storage memory, a final address level assigned to a second portion of the non-volatile storage memory, and an intermediate address level that is between the initial address level and the final address level and that is assigned to a third portion of the non-volatile storage memory;
wherein performing the multiple reads of the storage memory comprises:
performing a read of a segment in the first portion of the non-volatile storage memory to determine an address of a segment in the third portion of the non-volatile storage memory, wherein the segment in the third portion is a segment in a set of segments in the third portion used to store addresses of segments in a set of segments in the second portion;
performing a read of the segment in the third portion of the non-volatile storage memory to determine an address of the segment in the second portion of the non-volatile storage memory, wherein the segment in the second portion of the non-volatile storage memory is a segment in the set of segments in the second portion used to store the particular information, and indicates the starting address corresponding to the location of the particular information;
allocating, by a controller, a number of segments of the set of segments in the second portion to accommodate a final number of potential writes that is greater than an expected number of times the particular information and user data is to be written during a lifetime of the non-volatile storage memory; and
allocating, by the controller, a number of segments of the set of segments in the third portion to accommodate an intermediate number of potential writes that is greater than the final number of potential writes accommodated by the number of segments in the second portion.
2. The method of claim 1, further comprising, during the initialization process, reading the particular information into volatile memory from the location in the storage memory addressed by the starting address.
3. The method of claim 1, wherein an address of the segment in the first portion of the non-volatile storage memory is fixed and is used during each initialization of the apparatus.
4. The method of claim 1, further comprising:
responsive to updating the segment in the second portion of the non-volatile storage memory a threshold number of times:
changing the segment in the second portion of the non-volatile storage memory to a different segment in the second portion of the non-volatile storage memory; and
updating the segment in the third portion of the non-volatile storage memory such that the segment in the third portion of the non-volatile storage memory stores an indication of an address of the different segment in the second portion of the non-volatile storage memory.
5. The method of claim 1, further comprising performing wear leveling of the third portion of the non-volatile storage memory as part of using the multilevel addressing scheme during the initialization process.
6. The method of claim 5, wherein performing the wear leveling of the third portion of the non-volatile storage memory comprises overwriting each respective segment of a plurality of segments in the third portion of the non-volatile storage memory a respective number of times so that the respective segments are written a same number of times, wherein at least one of the plurality of segments in the third portion of the non-volatile storage memory is overwritten each time a segment in the second portion of the non-volatile storage memory used to store the particular information is changed.
7. The method of claim 1, wherein the starting address comprises a starting address of logical to physical mapping information, the method further comprising:
reading the starting address of the logical to physical mapping information from the segment in the second portion of the non-volatile storage memory; and
using the starting address to access logical to physical mapping information in a segment of a set of segments in the third portion of the non-volatile storage memory.
8. The method of claim 7, further comprising mapping a logical address corresponding to user data from a host to a physical address of an additional segment in the second portion of the non-volatile storage memory using the logical to physical mapping information.
9. The method of claim 8, further comprising storing the user data in the additional segment.
10. The method of claim 1, further comprising writing a copy of the address of the segment in the third portion of the non-volatile storage memory to an additional segment in the first portion of the non-volatile storage memory.
11. An apparatus, comprising:
a non-volatile storage memory, comprising first, second, and third portions, wherein an initial address level is assigned to the first portion, a final address level is assigned to the second portion, and an intermediate address level, between the initial address level and the final address level, is assigned to the third portion; and
a controller coupled to the non-volatile storage memory, the controller configured to perform multiple reads of the non-volatile storage memory according to a multilevel addressing scheme to determine a location of initialization information during initialization of the apparatus wherein the multiple reads comprise:
a read of a segment in the first portion to determine an address of a segment in the third portion of the non-volatile storage memory, wherein the segment in the third portion is a segment of a set of segments in the third portion used to store addresses of segments of a set of segments in the second portion;
a read of the segment in the third portion to determine an address of a segment of the set of segments in the second portion to determine the location of the initialization information, wherein the segment in the second portion of the non-volatile storage memory is a segment of the set of segments in the second portion used to store the initialization information;
wherein the controller is configured to allocate a number of segments of the set of segments in the second portion to accommodate a final number of potential writes that is greater than an expected number of times the initialization information is to be written during a lifetime of the non-volatile storage memory; and
wherein the controller is configured to allocate a number of segments of the set of segments in the third portion to accommodate an intermediate number of potential writes that is greater than the final number of potential writes accommodated by the number of segments of the set of segments in the second portion.
12. The apparatus of claim 11, wherein the initialization information comprises a starting address of a logical to physical mapping table.
13. The apparatus of claim 11, wherein the controller being configured to perform the multiple reads comprises the controller being configured to perform the multiple reads in response to instructions stored in the storage memory.
14. The apparatus of claim 13, wherein the controller is configured to execute additional instructions to locate the instructions stored in the storage memory.
15. The apparatus of claim 14, wherein the controller comprises read only memory configured to store the additional instructions.
16. The apparatus of claim 13, wherein the controller is configured to perform additional multiple reads to locate the instructions stored in the storage memory.
17. The apparatus of claim 16, wherein the controller is configured to perform the additional multiple reads according to an additional multilevel addressing scheme.
18. The apparatus of claim 13, wherein the controller is configured to update the instructions.
19. An apparatus, comprising:
a non-volatile storage memory;
a controller coupled to the non-volatile storage memory, the controller configured to implement multilevel addressing to find, during initialization of the apparatus, a location in the non-volatile storage memory that stores information vital to the operation of the apparatus, wherein the controller is configured to implement the multilevel addressing by:
assigning a first portion of the non-volatile storage memory to an initial address level;
assigning a second portion of the non-volatile storage memory to a final address level;
dynamically assigning a different set of segments of a plurality of sets of segments of a third portion of the non-volatile storage memory to an intermediate address level, between the initial and final address levels, in response to each segment of a respective previous set of segments of the plurality of sets of segments being written a same number of times;
performing a read of a segment in the first portion to determine an address of a segment of a dynamically assigned set of segments of the third portion;
performing a read of the segment of the dynamically assigned set of segments of the third portion to determine an address of a segment in the second portion to determine the location in the non-volatile storage memory that stores the information vital to the operation of the apparatus;
wherein the segment in the second portion is one of a number of segments at the final address level;
allocating, by the controller, the number of segments of a set of segments at the final address level to accommodate a final number of potential writes that is greater than an expected number of times the information vital to the operation of the apparatus and user data is to be written during a lifetime of the non-volatile storage memory; and
allocating, by the controller, a number of segments of a set of segments at the intermediate address level to accommodate an intermediate number of potential writes that is greater than the final number of potential writes accommodated by the number of segments at the final address level.
20. The apparatus of claim 19, wherein the controller is configured to determine the same number of times by:
determining a number of times each respective segment of the respective previous set of segments is previously written to determine a particular segment of the respective previous set that has been previously written a most number of times; and
determining the same number of times to be a sum of the most number of times and a fixed number of times the particular segment is to be overwritten.
21. The apparatus of claim 20, wherein the controller is configured to determine the number of times each respective segment of the respective previous set of segments is previously written by reading a respective write count stored in each respective segment of the respective previous set of segments.
US15/841,378 2017-12-14 2017-12-14 Multilevel addressing Active 2038-04-05 US10860474B2 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US15/841,378 US10860474B2 (en) 2017-12-14 2017-12-14 Multilevel addressing
KR1020207020187A KR20200089338A (en) 2017-12-14 2018-11-15 Multi-level addressing
CN201880078165.7A CN111433748A (en) 2017-12-14 2018-11-15 Multi-level addressing
EP18888075.1A EP3724769A4 (en) 2017-12-14 2018-11-15 Multilevel addressing
PCT/US2018/061201 WO2019118125A1 (en) 2017-12-14 2018-11-15 Multilevel addressing
JP2020532688A JP6908789B2 (en) 2017-12-14 2018-11-15 Multi-level addressing
US17/112,268 US11461228B2 (en) 2017-12-14 2020-12-04 Multilevel addressing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/841,378 US10860474B2 (en) 2017-12-14 2017-12-14 Multilevel addressing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/112,268 Continuation US11461228B2 (en) 2017-12-14 2020-12-04 Multilevel addressing

Publications (2)

Publication Number Publication Date
US20190188124A1 US20190188124A1 (en) 2019-06-20
US10860474B2 true US10860474B2 (en) 2020-12-08

Family

ID=66816044

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/841,378 Active 2038-04-05 US10860474B2 (en) 2017-12-14 2017-12-14 Multilevel addressing
US17/112,268 Active 2038-04-11 US11461228B2 (en) 2017-12-14 2020-12-04 Multilevel addressing

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/112,268 Active 2038-04-11 US11461228B2 (en) 2017-12-14 2020-12-04 Multilevel addressing

Country Status (6)

Country Link
US (2) US10860474B2 (en)
EP (1) EP3724769A4 (en)
JP (1) JP6908789B2 (en)
KR (1) KR20200089338A (en)
CN (1) CN111433748A (en)
WO (1) WO2019118125A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10732881B1 (en) * 2019-01-30 2020-08-04 Hewlett Packard Enterprise Development Lp Region cloning for deduplication
TWI761748B (en) * 2020-01-06 2022-04-21 慧榮科技股份有限公司 Data storage device with hierarchical mapping information management and non-volatile memory control method
CN111758086B (en) * 2020-05-22 2021-06-22 长江存储科技有限责任公司 Method for refreshing mapping table of SSD
US11579786B2 (en) * 2021-04-23 2023-02-14 Vmware, Inc. Architecture utilizing a middle map between logical to physical address mapping to support metadata updates for dynamic block relocation
US11487456B1 (en) 2021-04-23 2022-11-01 Vmware, Inc. Updating stored content in an architecture utilizing a middle map between logical and physical block addresses
CN114448890B (en) * 2021-12-22 2023-10-10 天翼云科技有限公司 Addressing method, addressing device, electronic equipment and storage medium
US11797214B2 (en) * 2022-01-04 2023-10-24 Vmware, Inc. Micro-batching metadata updates to reduce transaction journal overhead during snapshot deletion

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080256416A1 (en) 2007-04-11 2008-10-16 Nec Computertechno, Ltd. Apparatus and method for initializing memory
US20080320214A1 (en) * 2003-12-02 2008-12-25 Super Talent Electronics Inc. Multi-Level Controller with Smart Storage Transfer Manager for Interleaving Multiple Single-Chip Flash Memory Devices
US20090055618A1 (en) 2005-07-29 2009-02-26 Matsushita Electric Industrial Co., Ltd. Memory controller, nonvolatile storage device, nonvolatile storage system, and nonvolatile memory address management method
US20090198948A1 (en) 2008-02-01 2009-08-06 Arimilli Ravi K Techniques for Data Prefetching Using Indirect Addressing
US20100005226A1 (en) 2006-07-26 2010-01-07 Panasonic Corporation Nonvolatile memory device, access device, and nonvolatile memory system
US20100115192A1 (en) * 2008-11-05 2010-05-06 Samsung Electronics Co., Ltd. Wear leveling method for non-volatile memory device having single and multi level memory cell blocks
US20110283048A1 (en) 2010-05-11 2011-11-17 Seagate Technology Llc Structured mapping system for a memory device
US20120023282A1 (en) 2010-07-21 2012-01-26 Seagate Technology Llc Multi-Tier Address Mapping in Flash Memory
US20120239855A1 (en) * 2009-07-23 2012-09-20 Stec, Inc. Solid-state storage device with multi-level addressing
US20130080726A1 (en) * 2011-09-25 2013-03-28 Andrew G. Kegel Input/output memory management unit with protection mode for preventing memory access by i/o devices
WO2013048503A1 (en) 2011-09-30 2013-04-04 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy having different operating modes
US20140297990A1 (en) * 2011-01-06 2014-10-02 Micron Technology, Inc. Memory address translation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090089515A1 (en) * 2007-10-02 2009-04-02 Qualcomm Incorporated Memory Controller for Performing Memory Block Initialization and Copy
US8180995B2 (en) * 2009-01-21 2012-05-15 Micron Technology, Inc. Logical address offset in response to detecting a memory formatting operation

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080320214A1 (en) * 2003-12-02 2008-12-25 Super Talent Electronics Inc. Multi-Level Controller with Smart Storage Transfer Manager for Interleaving Multiple Single-Chip Flash Memory Devices
US20090055618A1 (en) 2005-07-29 2009-02-26 Matsushita Electric Industrial Co., Ltd. Memory controller, nonvolatile storage device, nonvolatile storage system, and nonvolatile memory address management method
US20100005226A1 (en) 2006-07-26 2010-01-07 Panasonic Corporation Nonvolatile memory device, access device, and nonvolatile memory system
US20080256416A1 (en) 2007-04-11 2008-10-16 Nec Computertechno, Ltd. Apparatus and method for initializing memory
US20090198948A1 (en) 2008-02-01 2009-08-06 Arimilli Ravi K Techniques for Data Prefetching Using Indirect Addressing
US20100115192A1 (en) * 2008-11-05 2010-05-06 Samsung Electronics Co., Ltd. Wear leveling method for non-volatile memory device having single and multi level memory cell blocks
US20120239855A1 (en) * 2009-07-23 2012-09-20 Stec, Inc. Solid-state storage device with multi-level addressing
US20110283048A1 (en) 2010-05-11 2011-11-17 Seagate Technology Llc Structured mapping system for a memory device
US20120023282A1 (en) 2010-07-21 2012-01-26 Seagate Technology Llc Multi-Tier Address Mapping in Flash Memory
US20140297990A1 (en) * 2011-01-06 2014-10-02 Micron Technology, Inc. Memory address translation
US9274973B2 (en) 2011-01-06 2016-03-01 Micron Technology, Inc. Memory address translation
US20130080726A1 (en) * 2011-09-25 2013-03-28 Andrew G. Kegel Input/output memory management unit with protection mode for preventing memory access by i/o devices
WO2013048503A1 (en) 2011-09-30 2013-04-04 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy having different operating modes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion from related international application No. PCT/US2018/061201, dated Mar. 6, 2019, 14 pp.

Also Published As

Publication number Publication date
EP3724769A4 (en) 2021-09-08
JP2021507373A (en) 2021-02-22
WO2019118125A1 (en) 2019-06-20
EP3724769A1 (en) 2020-10-21
KR20200089338A (en) 2020-07-24
US20190188124A1 (en) 2019-06-20
JP6908789B2 (en) 2021-07-28
CN111433748A (en) 2020-07-17
US11461228B2 (en) 2022-10-04
US20210089443A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
US11461228B2 (en) Multilevel addressing
CN110603532B (en) Memory management
US10853233B2 (en) Reconstruction of address mapping in a host of a storage system
US8095724B2 (en) Method of wear leveling for non-volatile memory and apparatus using via shifting windows
US7711892B2 (en) Flash memory allocation for improved performance and endurance
KR101343237B1 (en) Memory block selection
US7120729B2 (en) Automated wear leveling in non-volatile storage systems
US11361840B2 (en) Storage system having a host that manages physical data locations of storage device
US9627072B2 (en) Variant operation sequences for multibit memory
CN108959112B (en) Memory system and wear leveling method using the same
KR20080082601A (en) Flash drive fast wear leveling
US10073771B2 (en) Data storage method and system thereof
CN112306902A (en) Memory controller and method of operating the same
CN112114740A (en) Storage device and operation method thereof
CN111352854A (en) Storage device, controller and method for operating storage device
EP1713085A1 (en) Automated wear leveling in non-volatile storage systems
US20110055459A1 (en) Method for managing a plurality of blocks of a flash memory, and associated memory device and controller thereof
US11288007B2 (en) Virtual physical erase of a memory of a data storage device
CN111949208B (en) Virtual physical erasure of memory of data storage device
US20230089246A1 (en) Memory system
CN111324292A (en) Memory recovery method of nonvolatile memory

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FERRANTE, GIANFRANCO;MINOPOLI, DIONISIO;REEL/FRAME:044393/0516

Effective date: 20171213

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND

Free format text: SUPPLEMENT NO. 7 TO PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:045267/0833

Effective date: 20180123

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNORS:MICRON TECHNOLOGY, INC.;MICRON SEMICONDUCTOR PRODUCTS, INC.;REEL/FRAME:047540/0001

Effective date: 20180703

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:050716/0678

Effective date: 20190731

AS Assignment

Owner name: MICRON SEMICONDUCTOR PRODUCTS, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:051028/0001

Effective date: 20190731

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:051028/0001

Effective date: 20190731

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE