EP2761471B1 - Statistical wear leveling for non-volatile system memory - Google Patents

Statistical wear leveling for non-volatile system memory

Info

Publication number
EP2761471B1
Authority
EP
European Patent Office
Prior art keywords
block
blocks
wear
write
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP11873322.9A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP2761471A1 (en)
EP2761471A4 (en)
Inventor
Raj K. Ramanujan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Publication of EP2761471A1 publication Critical patent/EP2761471A1/en
Publication of EP2761471A4 publication Critical patent/EP2761471A4/en
Application granted granted Critical
Publication of EP2761471B1 publication Critical patent/EP2761471B1/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block erasable memory, e.g. flash memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0683: Plurality of storage devices
    • G06F 3/0689: Disk arrays, e.g. RAID, JBOD
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 16/00: Erasable programmable read-only memories
    • G11C 16/02: Erasable programmable read-only memories electrically programmable
    • G11C 16/06: Auxiliary circuits, e.g. for writing into memory
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 16/00: Erasable programmable read-only memories
    • G11C 16/02: Erasable programmable read-only memories electrically programmable
    • G11C 16/06: Auxiliary circuits, e.g. for writing into memory
    • G11C 16/34: Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C 16/349: Arrangements for evaluating degradation, retention or wearout, e.g. by counting erase cycles
    • G11C 16/3495: Circuits or methods to detect or delay wearout of nonvolatile EPROM or EEPROM memory devices, e.g. by counting numbers of erase or reprogram cycles, by using multiple memory areas serially or cyclically
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72: Details relating to flash memory management
    • G06F 2212/7211: Wear leveling

Definitions

  • The present disclosure relates to the field of wear leveling for memory arrays and, in particular, to leveling the wear of memory cell blocks using block lists and move operations.
  • A wear-leveling system levels the wear across the memory cells by distributing writes throughout the device.
  • The wear leveling may be implemented in an operating system, a software application, a memory controller, or a memory module.
  • PCM: Phase Change Memory
  • PCMS: Phase Change Memory and Switch
  • NVM: Non-Volatile Memory
  • PCMS provides fast reads and writes and can allow a single memory cell or a small group of cells to be written at one time. This makes PCMS suitable not just for replacing conventional mass storage memory but also for short-term and buffer memory.
  • DRAM: Dynamic Random Access Memory
  • SRAM: Static Random Access Memory
  • PCMS has a limit to the total number of write cycles that can be performed to any one storage cell.
  • US 2009/089485 A1 describes a wear leveling method for a non-volatile memory, wherein the non-volatile memory is substantially divided into a plurality of blocks, and these blocks are grouped into at least a data area, a spare area, a substitute area, and a temporary area.
  • The wear leveling method includes selecting blocks from the spare area according to different purposes and executing a wear leveling procedure.
  • The present invention uses a statistical wear leveling mechanism that is fast and has very low overhead. It works in the presence of small-granularity write operations and still maximizes the service life of the connected memory array.
  • A method implementing a statistical wear leveling mechanism is provided and defined in claim 1, and a corresponding apparatus is provided and defined in claim 10.
  • The described wear-leveling scheme is particularly useful in high-performance applications with small write granularity, but is not so limited.
  • The scheme enables small-granularity writes directly to a PCMS array for the highest performance and bandwidth efficiency.
  • Wear leveling with a smaller block granularity, such as a 4KB block, is allowed.
  • The wear-level frequency can be modulated using a fixed write cycle count trigger that trades off the uniformity of wear leveling against the bandwidth cost. Hot spots can be avoided by using the entire PCMS address space for uniform wear-level moves.
  • Workload behavior can be decoupled from the selection of a block for a wear-leveling move. Malicious or abusive software can be protected against.
  • An overall wear count can be maintained for each block to help with "wear band" classification requirements.
  • ECC: Error Correction Code
  • Error detection codes can be used to identify blocks that are reaching the end of their wear-out life. These blocks can be treated as bad blocks and removed from use.
  • These and other advantages can be obtained using one or more of the features that are described in more detail below.
  • These features include a high-speed address indirection mechanism and a periodic trigger, based on write frequency, to perform a wear-level move.
  • The scheme has the ability to allow small-granularity writes into a PCMS system while wear leveling at a larger block granularity.
  • Bad blocks can be automatically excluded, and a periodic trigger can be combined with random selection of free wear-level blocks to ensure some statistical uniformity in the wear leveling.
  • The wear-leveling scheme described herein divides up the System Physical Address space (SPA) that software uses to address memory into blocks of 4KB size.
  • The PCMS Device Address space (PDA) is also divided up into blocks of the same size.
  • Each SPA block is mapped to a specific PDA block. Writes into this block are counted and tracked.
  • The SPA block is moved to a different PDA block selected using any of a variety of different algorithms that ensure randomization with respect to the software write pattern.
  • SPA blocks are continuously remapped to different PDA blocks depending on the frequency of writes. The remapping scheme ensures that the entire PDA space is covered without creating any software-generated hot spots.
  • The shuffle is initiated using a predetermined seed and a key. This key can be tracked for each block in the AIT. The next time the SPA sub-block is accessed, this same seed and key are used to regenerate the 64B address within the PDA block, as sketched below.
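  • The description does not fix a particular shuffle function. As one illustrative possibility (the function name and constants below are hypothetical, not from the claims), a keyed affine permutation over the 64 sub-block indices of a 4KB block regenerates the same mapping from the stored seed and per-block key:

```c
#include <stdint.h>

#define SUBBLOCKS_PER_BLOCK 64   /* 4KB block / 64B sub-block */

/* Keyed permutation of sub-block indices within one PDA block.
 * (a*i + b) mod 64 is a bijection whenever 'a' is odd, so the same
 * seed and per-block key always regenerate the same 64B address. */
static uint32_t shuffle_subblock(uint32_t spa_subblock,
                                 uint32_t seed, uint32_t key)
{
    uint32_t a = (seed << 1) | 1u;   /* force an odd multiplier */
    return (a * spa_subblock + key) % SUBBLOCKS_PER_BLOCK;
}
```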
  • The target blocks can be selected using a sequential walk through the physical addresses, a patterned walk, or a random walk. The target blocks can also be selected based on a list generated during startup. Additional alternatives include a random shuffle using a static seed, sequential rotation, applying an XOR to address bits with a random number, applying an XOR to PDA address bits with SPA address bits, etc.
  • FIG. 1 shows a block diagram of a memory address indirection system that may include a wear leveling system as described in more detail below.
  • The address indirection unit 100 sends and receives memory addresses, commands, and data over a memory channel 112 that is coupled to a memory controller, a general purpose controller, or a processor, depending on the particular implementation.
  • The read commands and data are received into a read queue 114, and the write commands and data are received into a write queue 116.
  • Reads and writes are both sent to an address control unit 110 that interprets the memory channel addresses and applies them to physical addresses of a physical memory cell array 122.
  • The address control unit is further coupled to a write buffer 118 and a read buffer 120 to buffer reads and writes after they have been translated in the address control unit for application to the memory 122.
  • FIG. 1 shows two of the data structures that may be used by the address control unit in one embodiment of the invention.
  • An Address Indirection Table (AIT) 101 stores the current map of address translation from System Physical Address (SPA) space, used by the memory channel 112, to PCMS Device Address (PDA) space, used by the memory 122. It also holds the number of writes into each block of address space since the most recent wear leveling move.
  • The AIT is kept in some type of fast memory, such as DRAM, for quick access. It may be accessed with each read and write transaction.
  • The AIT is frequently modified by the wear leveling processes described below.
  • The security of the system and the leveling of the wear can be enhanced further by randomizing the mapping of the AIT when it is initially configured.
  • The AIT can also be randomized again for the erased blocks. This not only levels the wear but eliminates any ability for outsiders to predict which PDA corresponds to any particular SPA.
  • The wear may be further leveled by randomizing the ordering of sub-blocks of the PDA within each block.
  • The AIT is organized as a simple linear table of translation entries. Each entry maps, for example, a 4KB PCMS page. Each entry in the AIT may be only a few bytes, so the size of the AIT is kept small. While the AIT may be maintained in the PCMS, it may then also require wear leveling, which renders the system more complex.
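  • As a minimal sketch of such an entry, assuming the fields described above (the names and widths are illustrative, not from the claims):

```c
#include <stdint.h>

/* One AIT entry: maps a 4KB SPA block to its current PDA block and
 * carries the wear-leveling state described in the text. */
struct ait_entry {
    uint32_t pda_block;    /* current PDA block for this SPA block */
    uint16_t write_count;  /* writes since the last wear-level move */
    uint8_t  wl_pending;   /* a wear-level move is queued (pending bit) */
    uint8_t  shuffle_key;  /* per-block key for the sub-block shuffle */
};
```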
  • The AIT is located in a separate DRAM chip that is easily accessible by, and separate from, the memory controller chip.
  • The off-die AIT may be augmented with an AIT cache (see Figure 2) stored locally in the memory controller.
  • The cache may be used for the most recently accessed AIT entries so that future accesses into the blocks covered by these entries will not require an off-die AIT access. Since software often accesses the same location multiple times in short succession, the cache may provide faster memory access times. On the other hand, the cache adds complexity in that it must be kept up to date, and any address not in the cache will be accessed more slowly.
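  • A small direct-mapped cache illustrates the idea; the size, the tag scheme, and the ait_dram_read() helper here are assumptions made for the sketch, not details from the description:

```c
#define AIT_CACHE_SETS 1024              /* illustrative cache size */

struct ait_cache_line {
    uint32_t tag;                        /* SPA block number */
    uint8_t  valid;
    struct ait_entry entry;              /* cached copy of the DRAM entry */
};

static struct ait_cache_line ait_cache[AIT_CACHE_SETS];

/* Hypothetical helper: fetch one entry from the off-die DRAM AIT. */
void ait_dram_read(uint32_t spa_block, struct ait_entry *out);

/* Look up the AIT entry for an SPA block, filling from DRAM on a miss. */
struct ait_entry *ait_lookup(uint32_t spa_block)
{
    struct ait_cache_line *l = &ait_cache[spa_block % AIT_CACHE_SETS];
    if (!l->valid || l->tag != spa_block) {
        ait_dram_read(spa_block, &l->entry);
        l->tag = spa_block;
        l->valid = 1;
    }
    return &l->entry;
}
```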
  • FIG. 1 also shows a PCMS Descriptor Table (PDT) 102 between the read and write buffers and the physical memory.
  • The PCMS Descriptor Table is a reverse AIT. It stores the same map in reverse to translate the address of each PDA block to the address of the corresponding SPA block. In addition, it can also be used to hold a type for each block. This can include a record of which blocks are bad blocks.
  • The PDT can be used to pick blocks in PDA space as the destinations of wear-level moves. Such blocks are called "Free Blocks" and are kept in a Free Block List in the memory controller. Since a block selected to be a Free Block can contain valid data, that data must first be moved to some unused block before the block is placed on the Free Block List.
  • The AIT is appropriately modified to change the SPA to PDA map for the original block that was moved.
  • The PDT is maintained in persistent memory and typically will reside in the PCMS memory itself.
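  • A corresponding sketch of a PDT entry, again with illustrative names (the type encoding is an assumption):

```c
enum pda_block_type { PDA_IN_USE, PDA_FREE, PDA_UNUSED, PDA_BAD };

/* One PDT entry: the reverse map from a PDA block back to the SPA
 * block that currently owns it, plus the per-block type record. */
struct pdt_entry {
    uint32_t spa_block;          /* owning SPA block, if in use */
    enum pda_block_type type;    /* in use, free, unused, or bad */
};
```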
  • FIG. 2 shows the Address Indirection Unit 100 of Figure 1 in more detail.
  • Figure 2 shows detail of Wear Level Logic 200 within the Address Indirection Unit 100.
  • The wear level logic is shown as a set of queues or lists that interact with state machines.
  • The wear level logic may be implemented as hardware components as illustrated, as software modules, as firmware, or as a combination of different such items.
  • Block 103 is a Pending Wear Level Queue (PWLQ) within the Wear Level Logic 200.
  • PWLQ: Pending Wear Level Queue
  • This queue holds a descriptor of each SPA block to indicate each block's level of wear. This can be used to determine when a block should be moved for wear leveling purposes.
  • This queue, as with most of the other data structures and state machines described below, may be maintained inside the memory controller or in some other quickly accessible location.
  • An entry is made at the bottom of the queue when a write request through the AIT finds that the write count value in the AIT entry for that block is greater than or equal to a write-count low threshold and less than or equal to a write-count high threshold.
  • The AIT maintains a write count register for each block.
  • The value may be an actual count of write operations or some other value that is related to the number of writes.
  • In one example, the write count is a fraction of the actual write count, such as one-sixteenth or some other convenient value.
  • The write counts can be initialized to random values. This will also begin the wear leveling process sooner, so that the Pending Queue does not remain empty for some amount of time and then very quickly fill as several blocks reach the write count threshold at around the same time.
  • The various block lists are able to statistically distribute the use of the blocks without knowledge of total write counts. Accordingly, when a block is moved back into use, a new write count can be initialized either at zero or at some other random or predetermined value.
  • Block 104 is a Free Wear Level Block List (FWLBL) in the wear level logic.
  • This list contains descriptors of the PDA blocks that are available for a wear leveling move. The blocks in this list are not currently mapped to SPA blocks and are therefore available to receive a move from a block that is currently mapped to an SPA block.
  • There are different ways to pick a block that is to receive a move. In some implementations, it may be desired to pick an algorithm that is decoupled and random relative to the software using the memory and to the software or workload memory allocation or read/write patterns. It may also be desired to select a technique that covers the entire PCMS address space uniformly, avoids any potential hot-spot-creating patterns, and biases the selection towards low write-count blocks.
  • BIOS: Basic Input/Output System
  • The BIOS or a similar startup resource can randomize the SPA to PDA mapping and use this to initialize the AIT and the PDT.
  • The entries for the Free Wear Level Block List can be picked by sequentially walking through the entire PDA address space, excluding only the free blocks and the bad blocks.
  • Block 105 is a Target Wear Level Block List (TWLBL) in the wear level logic.
  • TWLBL: Target Wear Level Block List
  • This list contains descriptors of PDA blocks to be used to create new free blocks as the free block list 104 starts running low. The blocks in this list are still assigned to SPA blocks. After a block on this list is copied to another block in a wear level move, the block is moved to the Unused Block List.
  • Block 106 is an Unused Block List (UBL) in the wear level logic.
  • UBL: Unused Block List
  • This list contains descriptors of PDA blocks that are not currently being used.
  • The blocks may be blocks that have never been used because the system has not yet needed them.
  • More typically, most or all of the blocks will be blocks that were recently holding SPA blocks that were then wear-level moved to another block.
  • In other words, this list contains the blocks that previously were mapped into the system memory space, but that system memory space was then moved to another physical block, leaving this physical block unused.
  • The diagram of Figure 2 shows the wear level logic as operating using state machines.
  • The wear leveling is initiated during system configuration.
  • The BIOS or another memory configuration agent sets aside a set of PDA blocks for use as free blocks for wear leveling purposes. This is the initial set of blocks that makes up the free block list.
  • During operation, blocks are continuously used and added to the free block list to maintain some constant number of free blocks on average.
  • The free blocks are, accordingly, subtracted from the total block count available to the system software.
  • The blocks that are not set aside are assigned to MCA (Memory Channel Address) blocks.
  • MCA is derived from SPA for each PCMS controller. This initial MCA to PDA assignment can use a variety of different mapping schemes. In one example, it uses a randomized mapping scheme.
  • FIG. 2 shows a Wear Level Move State Machine 201 as part of the wear level logic 200.
  • This state machine takes an entry from the Pending Queue 103 and moves it to a PDA block taken from the Free Block List 104. When this is completed, it will update the AIT 101 to show the new map between SPA and PDA. It will put the previously used PDA block into the Unused Block List 106. The PDT 102 is also updated with the correct reverse map.
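  • A sketch of one such move, assuming the entry layouts above and hypothetical queue/list helpers (pwlq_pop(), fwlbl_pop(), ubl_push(), pdt_set(), pcms_copy_block()):

```c
void pcms_copy_block(uint32_t src_pda, uint32_t dst_pda);  /* 4KB data move */
uint32_t pwlq_pop(void);                 /* Pending Queue 103 */
uint32_t fwlbl_pop(void);                /* Free Block List 104 */
void ubl_push(uint32_t pda);             /* Unused Block List 106 */
void pdt_set(uint32_t pda, uint32_t spa, enum pda_block_type type);

#define PDT_NO_SPA 0xFFFFFFFFu           /* marker: block owns no SPA */

/* One step of the Wear Level Move State Machine 201. */
void wear_level_move_step(void)
{
    uint32_t spa_blk = pwlq_pop();               /* block due for a move */
    uint32_t dst_pda = fwlbl_pop();              /* fresh destination */

    struct ait_entry *e = ait_lookup(spa_blk);
    uint32_t src_pda = e->pda_block;

    pcms_copy_block(src_pda, dst_pda);           /* move the 4KB of data */

    e->pda_block   = dst_pda;                    /* update the AIT 101 */
    e->write_count = 0;
    e->wl_pending  = 0;

    pdt_set(dst_pda, spa_blk, PDA_IN_USE);       /* update the PDT 102 */
    pdt_set(src_pda, PDT_NO_SPA, PDA_UNUSED);
    ubl_push(src_pda);                           /* retire the old block */
}
```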
  • The Wear Level Move State Machine is coupled to each of the queues and lists and also to the Address Control Logic 110. Through the Address Control Logic, the Wear Level State Machine can reach the AIT and the PDT.
  • The AIT is reached through a DRAM (Dynamic Random Access Memory) Controller 212 that includes an AIT cache 214 for recently used AIT values.
  • The DRAM Controller is coupled through a DRAM memory channel bus to the AIT, which resides off-die in an external location (not shown).
  • The Free Block List Expansion State Machine 202 takes an entry from the Target Block List 105 and moves its contents to an entry from the Unused Block List 106. It will then move the Target Block List entry into the Free Block List 104. Through the Address Control Logic, to which it is connected, it will then update the AIT 101 and PDT 102 appropriately.
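  • A companion sketch of this expansion step, using the same hypothetical helpers plus twlbl_pop(), ubl_pop(), fwlbl_push(), and a reverse lookup pdt_spa_for():

```c
uint32_t twlbl_pop(void);                /* Target Block List 105 */
uint32_t ubl_pop(void);                  /* Unused Block List 106 */
void fwlbl_push(uint32_t pda);           /* Free Block List 104 */
uint32_t pdt_spa_for(uint32_t pda);      /* reverse map via the PDT */

/* One step of the Free Block List Expansion State Machine 202. */
void free_list_expand_step(void)
{
    uint32_t tgt_pda = twlbl_pop();              /* still holds live data */
    uint32_t dst_pda = ubl_pop();                /* unused destination */

    uint32_t spa_blk = pdt_spa_for(tgt_pda);     /* who owns the target? */
    pcms_copy_block(tgt_pda, dst_pda);           /* evacuate the target */

    struct ait_entry *e = ait_lookup(spa_blk);   /* remap SPA -> new PDA */
    e->pda_block = dst_pda;

    pdt_set(dst_pda, spa_blk, PDA_IN_USE);
    pdt_set(tgt_pda, PDT_NO_SPA, PDA_FREE);
    fwlbl_push(tgt_pda);                         /* target is now free */
}
```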
  • The Target Free Block Generation State Machine 203 selects the next block for the Free Block List Expansion State Machine to use.
  • A list of such blocks is maintained in the memory controller, for example, in the wear level logic or in the address indirection unit.
  • The Target Free Block Generation State Machine will use a particular technique to pick the next PDA block that is destined to enter the Target Block List 105.
  • This technique may be selected based on a variety of different criteria. Two such criteria are to cover all of the PDA space in the selection process and to decouple any association between system software and its use of a particular SPA address space. In other words, for the first criterion, the technique spreads wear over the entire memory array and not just some portion of it. This helps to ensure that the whole array wears out at about the same time.
  • The second criterion prevents software running on the computing system from determining the PDA addresses to be used. Some software programs will be written to use specific addresses. This would cause the memory cells corresponding to those particular addresses to wear out more quickly. By changing the address mapping, the physical address designated by the software will be changed over time to different actual physical memory cells, leveling the wear.
  • Different techniques for picking the next PDA block can be selected depending on the needs of a particular design for a particular application. As with the random shuffling of sub-blocks within a block, the accuracy and precision of the technique can be balanced against its complexity and the latency that it might introduce. Similar techniques can be used as are used for shuffling sub-blocks, including a patterned walk, a random walk, a startup list, a random shuffle, sequential rotation, etc.
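  • As one concrete option among those listed (a sequential walk that skips free and bad blocks, per the startup discussion above; the cursor state and the TOTAL_PDA_BLOCKS constant are illustrative):

```c
#define TOTAL_PDA_BLOCKS (1u << 20)          /* illustrative: 4GB / 4KB */

enum pda_block_type pdt_type(uint32_t pda);  /* type lookup via the PDT */

/* Sequential walk over the whole PDA space; returns the next block
 * eligible to become a target, skipping free and bad blocks. */
uint32_t next_target_block(void)
{
    static uint32_t cursor;                  /* persists across calls */

    for (;;) {
        cursor = (cursor + 1) % TOTAL_PDA_BLOCKS;
        enum pda_block_type t = pdt_type(cursor);
        if (t != PDA_FREE && t != PDA_BAD)
            return cursor;
    }
}
```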
  • The AIT Cache 214 assists in the operation of the AIT 101. Since the AIT is a very large table and is in the critical timing path for each transaction, an AIT cache may be kept in SRAM, DRAM, or some other type of memory within a PCMS controller to provide very fast access to the SPA to PDA mapping table stored in the AIT.
  • When a write transaction is received by the PCMS controller, the write transaction will include an address in the SPA.
  • The controller uses the SPA address to look up an entry in the AIT to determine the corresponding PDA block address.
  • The write controller will also increment a Write Count saved in the accessed AIT entry. If the Write Count hits a certain pre-determined value, the entry is added to the Pending Queue 103. If not, the AIT entry is simply updated with the new Write Count and the write proceeds as normal to the memory array 122, such as a PCMS.
  • The granularity of writes to the PCMS is much smaller than the wear level block.
  • In one example, the sub-block is 64 bytes and the wear level block is 4 kilobytes. This trade-off provides for efficient write bandwidth use into the PCMS while keeping the AIT a reasonable size (one entry for each 4KB block).
  • Other sizes may be selected for the write and wear-leveling granularity to suit different implementations.
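  • Putting these pieces together, a sketch of the write path (the threshold values and the pwlq_push() and pcms_write() helpers are hypothetical):

```c
#include <stddef.h>

#define WRITE_COUNT_LOW  1000u           /* illustrative thresholds */
#define WRITE_COUNT_HIGH 1200u

void pwlq_push(uint32_t spa_block);      /* enqueue on Pending Queue 103 */
void pcms_write(uint64_t pda, const void *data, size_t len);

/* Handle one small-granularity (e.g. 64B) write arriving in SPA space. */
void handle_write(uint64_t spa, const void *data, size_t len)
{
    uint32_t spa_block = (uint32_t)(spa >> 12);      /* 4KB block index */
    struct ait_entry *e = ait_lookup(spa_block);

    e->write_count++;
    if (!e->wl_pending &&
        e->write_count >= WRITE_COUNT_LOW &&
        e->write_count <= WRITE_COUNT_HIGH) {
        e->wl_pending = 1;                           /* pending bit in AIT */
        pwlq_push(spa_block);                        /* schedule a move */
    }

    uint64_t pda = ((uint64_t)e->pda_block << 12) | (spa & 0xFFFu);
    pcms_write(pda, data, len);                      /* device-level write */
}
```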
  • FIG. 3A shows the operations described above in terms of a process flow diagram.
  • The process begins at block 301.
  • The Wear Level Move State Machine 201 takes an entry from the Pending Queue 103 and moves it to a PDA block taken from the Free Block List 104.
  • A write request instruction will be executed and will invoke the memory controller, the address indirection logic, or the wear leveling system to read the AIT in order to find the appropriate physical address corresponding to the logical address of the write request.
  • The write request is executed, for example, by writing data into a PCMS write buffer 118, from which it will be written into the PCMS 122 in the regular course of operations.
  • Upon reading the AIT, the system will check the AIT Write-Count value. This value may be stored in the AIT or in another location. If this value is greater than a low threshold, then the system can post a move request in the Pending Queue 103. At the same time, the Write Count value may be updated, depending on the particular embodiment. In addition, a Wear Level pending bit can be set in the AIT. This will indicate that a wear leveling move is pending for the corresponding block of physical memory.
  • The Wear Level Move State Machine picks a wear level request from the Pending Queue and enters it into an Active Wear Level Move Entry (AWLM).
  • AWLM: Active Wear Level Move Entry
  • The AWLM entry is the context associated with a currently active wear leveling move operation. The AWLM entry remains valid until the completion of the corresponding wear leveling move and the associated state updates.
  • The AWLM can include a valid/invalid bit to indicate whether a wear leveling move is in progress, a source address indicating the block from which the data is to be moved, and a destination address indicating the destination block to which the data from the source block will be moved.
  • The AWLM entry may be cleared, or set to invalid, after the corresponding wear leveling move is completed.
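  • A sketch of the AWLM context holding just the fields named above (the layout is illustrative):

```c
/* Context for the one currently active wear leveling move. */
struct awlm_entry {
    uint8_t  valid;      /* set while a move is in progress */
    uint32_t src_pda;    /* block the data is being moved from */
    uint32_t dst_pda;    /* block the data is being moved to */
};
```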
  • While the move is in progress, any future write operations to that block of the memory can be held in a buffer until after the wear leveling move is completed.
  • The instruction can be held, or the write can be performed to a write buffer from which it will be written to the PCMS after the move.
  • When the wear leveling move is finished, the pending write operations will be sent to the new target block.
  • The Wear Level Move State Machine picks a free block from the Free Block List, reads the PCMS block to be moved, and moves the data from the physical memory of the source block into a buffer. Any entries in the write buffer waiting to go to the physical memory may also be gathered into the move buffer. Alternatively, these may be left in the write buffer for execution after the move is completed. The Wear Level State Machine may then write the data that was in the source block from the move buffer into the destination block. Upon completion of the wear leveling move, the AWLM entry can be cleared or set to invalid.
  • The Wear Level Move State Machine updates the AIT to show a new map between SPA and PDA. This will reflect the change in the mapping from the source block to the destination block.
  • The AIT is also updated to refresh the Write Count and clear the list of pending write operations, if any.
  • It will put the previously used PDA block into the Unused Block List, and at block 304 it will update the PDT also to reflect the move. After the move, the write count for the previously used PDA block can be deleted or overwritten with a default or random value. There is no need to track the write count for the unused blocks. This increases the system's robustness through power cycling and catastrophic events.
  • The Free Block List is expanded to continually add new blocks. These replace the blocks consumed by a wear leveling move when a block is moved to a block in the Free Wear Level Block List.
  • The Free Block List Expansion State Machine takes blocks from the Unused Block List and moves them to the Free Wear Level Block List so that they can become destination blocks in a wear leveling move.
  • The Free Block List Expansion State Machine 202 takes an entry from the Target Block List 105 and moves its contents to an entry from the Unused Block List 106.
  • This move may involve many of the operations mentioned above for the Wear Level Move State Machine, including setting the Wear Level Pending bit, moving data into a write buffer, gathering any pending write operations, and then clearing the pending bit when the move is completed.
  • The block from the Unused Block List is the destination block in this case and is removed from the Unused Block List before or after the move.
  • The block from the Target Block List is the source block, and it is moved to the Free Block List.
  • The Free Block List Expansion State Machine moves the Target Block List 105 entry into the Free Block List 104. In other words, the block from the Target Block List is now empty and can be moved to the Free Block List.
  • The state machine will then update the AIT and PDT appropriately.
  • The free block list expansion consumes blocks from the Target Block List, so the Target Free Block Generation State Machine replenishes the Target Block List.
  • The Target Free Block Generation State Machine 203 picks the next PDA block that is to enter the Free Block List.
  • The selection can be done in a variety of different ways depending on the particular implementation. This may include a random walk, a check of wear level count values, etc.
  • In one example, the state machine selects the next block that is not identified as a bad block.
  • When a write transaction is received by the PCMS controller, it looks up the system address (SPA) in the AIT to determine the PDA block address. At block 311, it will increment the Write Count saved in the AIT entry. At block 312, if the Write Count hits a certain pre-determined value, the entry is deposited in the Pending Queue at block 313. If not, then the AIT is simply updated with the new Write Count at block 314, and the write proceeds as normal to the PCMS.
  • SPA: System Physical Address
  • Figures 4A through 4D represent the movement of data from one block to another among the blocks of memory cells in a memory array, for example, a PCMS array.
  • The free blocks F represent free PDA blocks.
  • The SPA blocks 1-16 represent PDA blocks that are currently assigned to SPA blocks. In other words, the assigned PDA blocks are currently being used to store data.
  • Figure 4A shows a stable state in which a first set of blocks, labeled F, are available or free and a second set of blocks, labeled 1-16, are in use.
  • The F blocks are listed in the Free Block List 104, and blocks 1-16 are indicated in the AIT 101 and PDT 102.
  • In Figure 4B, SPA Block 8 becomes "hot" due to a large total number of writes to it. It is then placed in the pending queue and will, accordingly, be scheduled for a Wear Level Move.
  • The Wear Level Move State Machine copies the hot block 8 to the first available free block, the F block at the top of the diagram. The former block 8 becomes an unused block, indicated by U. This block will be added to the Unused Block List 106.
  • The Target Free Block state machine 203 picks the next available PDA block as the next free block. As indicated in Figure 4C, the next free block is identified as block 1. The free blocks are indicated in the Free Block List 104. After the next free block is identified, the Free Block Expansion state machine 202 moves the current contents of the target free block, indicated as block 1 in Figure 4C, to the unused block indicated as U in Figure 4C. It then adds an identifier of the target free block to the Target Block List 105. This condition is shown in Figure 4D, in which there are four free blocks F; however, the former first used block, 1, is not a free block F, and the contents formerly in block 1 have been moved to the former block 8.
  • The technique described above helps to provide that the free blocks slowly move through the entire PDA space and that the identity of the free blocks is decoupled from the software-generated SPA addresses.
  • The process flow described above uses the Pending Queue 103, from which the Wear Level Move State Machine 201 selects blocks for a wear leveling move.
  • The blocks can be selected using a first-in-first-out approach or using some sort of pattern or random selection.
  • All of the blocks in the Pending Queue are ready for a move, and it is not important which ones move first, provided that the number of blocks in the queue does not become too large.
  • The process flow described above also uses an Unused Block List 106, a Target Block List 105, and a Free Block List 104.
  • The Unused Block List is used to allow blocks to be moved from the Target Block List to the Free Block List.
  • The Free Block List is used to allow blocks in use with a high write count (in the Pending Queue) to be moved to the Unused Block List.
  • The Wear Level Move State Machine moves data to blocks in the Free Block List.
  • The Free Block List Expansion State Machine moves data from blocks in the Target Block List to blocks of the Unused Block List.
  • The wear leveling process takes blocks from the Pending Queue and moves them to the Unused Block List. To do this, it also takes blocks from the Free Block List and puts them into use, where they may eventually return to the Pending Queue.
  • The free block list expansion process takes blocks from the Target Block List and moves them to the Free Block List. To do this, it also takes blocks from the Unused Block List and puts them into use, where they may eventually return to the Pending Queue.
  • The Pending Queue, from which the wear leveling draws, ties these two processes and the three block lists together so that blocks can move from one process to the other.
  • The two simultaneous moving processes rely on different selection criteria for their operation.
  • The wear leveling process relies on the write count. Blocks are placed in the pending queue based on the write count.
  • The free block list expansion process does not use the write count. This eliminates any need to track write counts for unused and free blocks. As mentioned above, it can use a wide range of different techniques to fill the target free block list.
  • The second process adds a mixing of sorts to the use of the physical address blocks that improves the integrity of the wear leveling system. The second process also reduces the predictability of the system. This makes the system harder for an outsider to attack.
  • The second process, by adding another factor to the overall use of the memory blocks, helps to ensure that all blocks are used and that the wear leveling is applied across the entire physical memory, regardless of the system addresses that might be invoked by particular software or BIOS programs.
  • As mentioned above, the target blocks can be selected using a sequential walk through the physical addresses, a patterned walk, or a random walk.
  • The target blocks can also be selected based on a list generated during startup. Additional alternatives include a random shuffle using a static seed, sequential rotation, applying an XOR to address bits with a random number, or applying an XOR to PDA address bits with SPA address bits; one such XOR variant is sketched below.
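  • A minimal sketch of the XOR option, assuming a power-of-two block count so that the masked XOR remains a bijection (names are illustrative):

```c
/* Derive a candidate PDA block by XORing a block index with either a
 * random value or the SPA block bits, folded into the PDA range. */
uint32_t xor_select(uint32_t block_bits, uint32_t mix_bits)
{
    return (block_bits ^ mix_bits) & (TOTAL_PDA_BLOCKS - 1u);
}
```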
  • In another embodiment, the second process (the free block list expansion process), the Target Block List, and the Free Block List are not used. Instead, the Wear Level State Machine takes blocks from the Pending Queue, moves content from those blocks into a block selected from the Unused Block List and, after the move, assigns the first block to the Unused Block List. In such an embodiment, the selection of blocks from the Unused Block List can be done using the processes described above for selecting target blocks.
  • In another embodiment, the first process (the wear leveling process), the Unused Block List, and the Pending Queue are not used.
  • In that case, the free block expansion process takes blocks from the target free block list, moves the content to a block selected from the free block list and, after the move, assigns the first block to the free block list.
  • The selection of blocks from the free block list can be done as described above for selecting target blocks. While in this embodiment wear leveling is not addressed directly, it will be addressed indirectly by remapping the address translation after each move.
  • As further variations, the AIT can be randomized initially or periodically for free or unused blocks.
  • The mapping of sub-blocks within the AIT can be randomized initially and each time the AIT is updated after a move.
  • The write counts can be randomized.
  • Current values can be randomized at intervals using addition, replacement, or other operations.
  • Many of the other parameters of the system can also be similarly manipulated to spread operations throughout the memory array and to change the timing of operations.
  • FIG. 5 shows an example of a system to which embodiments of the present invention may be applied.
  • A Central Processing Unit (CPU) 501 is coupled to a controller hub or south bridge 502.
  • The CPU includes a memory controller and is coupled directly to system memory 505, typically in the form of DRAM, although in the examples described above this memory may be a non-volatile memory, such as PCMS.
  • The CPU is also coupled to a peripheral graphics bus 503 for use by one or more video cards and may also provide video directly from a video port 504 to a display.
  • The controller hub 502 also has a memory controller interface to mass memory 506, such as magnetic disk, flash-based memory, or other memory types, including PCMS.
  • In other embodiments, the mass memory is coupled directly to the system memory (shown in dotted line) or to the CPU and does not connect to the controller hub.
  • In other embodiments, both the system memory and the mass memory are coupled to the CPU through the controller hub, on different buses or through the same bus.
  • An address indirection unit may be incorporated into the CPU, the controller hub, or the memory array.
  • The controller hub is also connected to a number of other devices, depending upon the intended use of the computer system.
  • The controller hub is also coupled to an external serial bus 507, such as one or more of USB (Universal Serial Bus), Firewire, Thunderbolt, or DisplayPort.
  • The controller hub is also coupled to a firmware hub 508 to permit connections to system devices (not shown).
  • User I/O devices 509 and Audio I/O 510 are also coupled to the controller hub.
  • Video may also be supported, together with peripheral component buses 511.
  • Network interfaces, such as Ethernet 512, WiFi 513, or cellular data, may also be supported by independent connection to the controller hub or through connection to one or more of the other buses shown in the diagram.
  • Embodiments of the present invention may be used as an integral part of a PCMS controller in a microcomputer architecture, for a hierarchical server memory subsystem using PCMS as the bulk memory, or for other uses.
  • The technique optimizes performance and PCMS wear-out and hence provides a uniquely valuable solution for server platforms.
  • It can be used with any PCMS-based memory subsystem that uses a DRAM-based Address Indirection Table.
  • The described technique allows, for example, direct 64B writes into PCMS.
  • Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parentboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA).
  • The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware.
  • References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.
  • "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Memory System (AREA)
  • Read Only Memory (AREA)
EP11873322.9A 2011-09-30 2011-09-30 Statistical wear leveling for non-volatile system memory Active EP2761471B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2011/054384 WO2013048470A1 (en) 2011-09-30 2011-09-30 Statistical wear leveling for non-volatile system memory

Publications (3)

Publication Number Publication Date
EP2761471A1 EP2761471A1 (en) 2014-08-06
EP2761471A4 EP2761471A4 (en) 2015-05-27
EP2761471B1 true EP2761471B1 (en) 2017-10-25

Family

ID=47996201

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11873322.9A Active EP2761471B1 (en) 2011-09-30 2011-09-30 Statistical wear leveling for non-volatile system memory

Country Status (5)

Country Link
US (1) US9298606B2 (en)
EP (1) EP2761471B1 (en)
CN (1) CN103946819B (zh)
TW (2) TWI578157B (zh)
WO (1) WO2013048470A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019231554A1 (en) * 2018-05-30 2019-12-05 Micron Technology, Inc. Memory management

Families Citing this family (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9294224B2 (en) 2011-09-28 2016-03-22 Intel Corporation Maximum-likelihood decoder in a memory controller for synchronization
US9317429B2 (en) 2011-09-30 2016-04-19 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy over common memory channels
WO2013048493A1 (en) 2011-09-30 2013-04-04 Intel Corporation Memory channel that supports near memory and far memory access
WO2013048485A1 (en) 2011-09-30 2013-04-04 Intel Corporation Autonomous initialization of non-volatile random access memory in a computer system
EP2761476B1 (en) 2011-09-30 2017-10-25 Intel Corporation Apparatus, method and system that stores bios in non-volatile random access memory
EP2761467B1 (en) 2011-09-30 2019-10-23 Intel Corporation Generation of far memory access signals based on usage statistic tracking
EP2761464B1 (en) 2011-09-30 2018-10-24 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy having different operating modes
CN103946816B (zh) 2011-09-30 2018-06-26 Intel Corporation Non-volatile random access memory (NVRAM) as a replacement for traditional mass storage devices
CN103946824B (zh) 2011-11-22 2016-08-24 Intel Corporation Access control method, apparatus and system for non-volatile random access memory
CN104106057B (zh) 2011-12-13 2018-03-30 Intel Corporation Method and system for providing instant responses to sleep state transitions with non-volatile random access memory
WO2013089685A1 (en) 2011-12-13 2013-06-20 Intel Corporation Enhanced system sleep state support in servers using non-volatile random access memory
CN103999161B (zh) 2011-12-20 2016-09-28 Intel Corporation Apparatus and method for phase change memory drift management
WO2013095465A1 (en) 2011-12-21 2013-06-27 Intel Corporation High-performance storage structures and systems featuring multiple non-volatile memories
DE112011106032B4 (de) 2011-12-22 2022-06-15 Intel Corporation Power saving through memory channel shutdown
WO2013095644A1 (en) 2011-12-23 2013-06-27 Intel Corporation Page miss handler including wear leveling logic
US20140189284A1 (en) * 2011-12-23 2014-07-03 Nevin Hyuseinova Sub-block based wear leveling
US9396118B2 (en) 2011-12-28 2016-07-19 Intel Corporation Efficient dynamic randomizing address remapping for PCM caching to improve endurance and anti-attack
US9418700B2 (en) 2012-06-29 2016-08-16 Intel Corporation Bad block management mechanism
US9754648B2 (en) 2012-10-26 2017-09-05 Micron Technology, Inc. Apparatuses and methods for memory operations having variable latencies
US20150143021A1 (en) * 2012-12-26 2015-05-21 Unisys Corporation Equalizing wear on storage devices through file system controls
US9734097B2 (en) 2013-03-15 2017-08-15 Micron Technology, Inc. Apparatuses and methods for variable latency memory operations
US9727493B2 (en) 2013-08-14 2017-08-08 Micron Technology, Inc. Apparatuses and methods for providing data to a configurable storage area
US9513815B2 (en) * 2013-12-19 2016-12-06 Macronix International Co., Ltd. Memory management based on usage specifications
CN104932833B (zh) * 2014-03-21 2018-07-31 Huawei Technologies Co., Ltd. Wear leveling method, apparatus and storage device
US10365835B2 (en) * 2014-05-28 2019-07-30 Micron Technology, Inc. Apparatuses and methods for performing write count threshold wear leveling operations
WO2016068907A1 (en) * 2014-10-29 2016-05-06 Hewlett Packard Enterprise Development Lp Committing altered metadata to a non-volatile storage device
US10055267B2 (en) * 2015-03-04 2018-08-21 Sandisk Technologies Llc Block management scheme to handle cluster failures in non-volatile memory
US10204047B2 (en) 2015-03-27 2019-02-12 Intel Corporation Memory controller for multi-level system memory with coherency unit
US10073659B2 (en) 2015-06-26 2018-09-11 Intel Corporation Power management circuit with per activity weighting and multiple throttle down thresholds
US9952801B2 (en) * 2015-06-26 2018-04-24 Intel Corporation Accelerated address indirection table lookup for wear-leveled non-volatile memory
US10387259B2 (en) 2015-06-26 2019-08-20 Intel Corporation Instant restart in non volatile system memory computing systems with embedded programmable data checking
US10108549B2 (en) 2015-09-23 2018-10-23 Intel Corporation Method and apparatus for pre-fetching data in a system having a multi-level system memory
US10185501B2 (en) 2015-09-25 2019-01-22 Intel Corporation Method and apparatus for pinning memory pages in a multi-level system memory
US10261901B2 (en) 2015-09-25 2019-04-16 Intel Corporation Method and apparatus for unneeded block prediction in a computing system having a last level cache and a multi-level system memory
US9792224B2 (en) 2015-10-23 2017-10-17 Intel Corporation Reducing latency by persisting data relationships in relation to corresponding data in persistent memory
US10033411B2 (en) 2015-11-20 2018-07-24 Intel Corporation Adjustable error protection for stored data
US10095618B2 (en) 2015-11-25 2018-10-09 Intel Corporation Memory card with volatile and non volatile memory space having multiple usage model configurations
GB2545409B (en) * 2015-12-10 2020-01-08 Advanced Risc Mach Ltd Wear levelling in non-volatile memories
US9747041B2 (en) 2015-12-23 2017-08-29 Intel Corporation Apparatus and method for a non-power-of-2 size cache in a first level memory device to cache data present in a second level memory device
TWI571882B (zh) * 2016-02-19 2017-02-21 Phison Electronics Corp. Wear leveling method, memory control circuit unit and memory storage device
US10007606B2 (en) 2016-03-30 2018-06-26 Intel Corporation Implementation of reserved cache slots in computing system having inclusive/non inclusive tracking and two level system memory
US10007462B1 (en) * 2016-03-31 2018-06-26 EMC IP Holding Company LLC Method and system for adaptive data migration in solid state memory
US10185619B2 (en) 2016-03-31 2019-01-22 Intel Corporation Handling of error prone cache line slots of memory side cache of multi-level system memory
US10031845B2 (en) 2016-04-01 2018-07-24 Intel Corporation Method and apparatus for processing sequential writes to a block group of physical blocks in a memory device
US10019198B2 (en) 2016-04-01 2018-07-10 Intel Corporation Method and apparatus for processing sequential writes to portions of an addressable unit
US10120806B2 (en) 2016-06-27 2018-11-06 Intel Corporation Multi-level system memory with near memory scrubbing based on predicted far memory idle time
US10528462B2 (en) * 2016-09-26 2020-01-07 Intel Corporation Storage device having improved write uniformity stability
CN106547629B (zh) * 2016-11-03 2020-05-26 Sun Yat-sen University Optimization method for a state machine replica management model
US10430085B2 (en) * 2016-11-08 2019-10-01 Micron Technology, Inc. Memory operations on data
US10261876B2 (en) 2016-11-08 2019-04-16 Micron Technology, Inc. Memory management
CN109219804B (zh) * 2016-12-28 2023-12-29 Huawei Technologies Co., Ltd. Non-volatile memory access method, apparatus and system
US10915453B2 (en) 2016-12-29 2021-02-09 Intel Corporation Multi level system memory having different caching structures and memory controller that supports concurrent look-up into the different caching structures
US10445261B2 (en) 2016-12-30 2019-10-15 Intel Corporation System memory having point-to-point link that transports compressed traffic
KR20180094391 (ko) SK Hynix Inc. Memory system and operating method of memory system
US11074018B2 (en) * 2017-04-06 2021-07-27 International Business Machines Corporation Network asset management
KR20180125694 (ko) SK Hynix Inc. Memory system and operating method thereof
CN106990926A (zh) * 2017-06-14 2017-07-28 Zhengzhou Yunhai Information Technology Co., Ltd. Processing method for solid state drive wear leveling
US10304814B2 (en) 2017-06-30 2019-05-28 Intel Corporation I/O layout footprint for multiple 1LM/2LM configurations
US10642727B1 (en) * 2017-09-27 2020-05-05 Amazon Technologies, Inc. Managing migration events performed by a memory controller
US11188467B2 (en) 2017-09-28 2021-11-30 Intel Corporation Multi-level system memory with near memory capable of storing compressed cache lines
US10698819B2 (en) * 2017-12-14 2020-06-30 SK Hynix Inc. Memory system and operating method thereof
KR102655350B1 (ko) * 2017-12-14 2024-04-09 SK Hynix Inc. Memory system and operating method thereof
US10860244B2 (en) 2017-12-26 2020-12-08 Intel Corporation Method and apparatus for multi-level memory early page demotion
KR102398540B1 (ko) 2018-02-19 2022-05-17 SK Hynix Inc. Memory device, semiconductor device and semiconductor system
US11099995B2 (en) 2018-03-28 2021-08-24 Intel Corporation Techniques for prefetching data to a first level of memory of a hierarchical arrangement of memory
US10983715B2 (en) * 2018-09-19 2021-04-20 Western Digital Technologies, Inc. Expandable memory for use with solid state systems and devices
US10860219B2 (en) * 2018-10-05 2020-12-08 Micron Technology, Inc. Performing hybrid wear leveling operations based on a sub-total write counter
CN111258491B (zh) * 2018-11-30 2021-10-15 Beijing Memblaze Technology Co., Ltd. Method and apparatus for reducing read command processing latency
US11055228B2 (en) 2019-01-31 2021-07-06 Intel Corporation Caching bypass mechanism for a multi-level memory
US11144230B2 (en) * 2019-02-18 2021-10-12 International Business Machines Corporation Data copy amount reduction in data replication
JP2022545859A (ja) * 2019-08-31 2022-11-01 Cryptography Research, Inc. Random accessing
CN115757193B (zh) * 2019-11-15 2023-11-03 Honor Device Co., Ltd. Memory management method and electronic device
US11244717B2 (en) 2019-12-02 2022-02-08 Micron Technology, Inc. Write operation techniques for memory systems
US11836351B2 (en) * 2022-03-31 2023-12-05 Lenovo Global Technology (United States) Inc. Causing a storage device to switch storage tiers based on a wear level
US20230342060A1 (en) * 2022-04-26 2023-10-26 Micron Technology, Inc. Techniques for data transfer operations
CN116027988B (zh) * 2023-03-22 2023-06-23 University of Electronic Science and Technology of China Wear leveling method for memory and control method of its chip controller
CN117632038B (zh) * 2024-01-25 2024-04-02 Hefei Core Storage Electronic Ltd. Wear leveling method, memory storage device and memory control circuit unit
CN117742619B (zh) * 2024-02-21 2024-04-19 Hefei Kangxinwei Storage Technology Co., Ltd. Memory and data processing method thereof

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6000006A (en) * 1997-08-25 1999-12-07 Bit Microsystems, Inc. Unified re-map and cache-index table with dual write-counters for wear-leveling of non-volatile flash RAM mass storage
JP2004310650A (ja) 2003-04-10 2004-11-04 Renesas Technology Corp Memory device
US7441067B2 (en) 2004-11-15 2008-10-21 Sandisk Corporation Cyclic flash memory wear leveling
JP4751163B2 (ja) 2005-09-29 2011-08-17 Toshiba Corp Memory system
TWI303828B (en) 2006-03-03 2008-12-01 Winbond Electronics Corp A method and a system for erasing a nonvolatile memory
US20070208904A1 (en) 2006-03-03 2007-09-06 Wu-Han Hsieh Wear leveling method and apparatus for nonvolatile memory
JP4863749B2 (ja) * 2006-03-29 2012-01-25 Hitachi Ltd. Storage device using flash memory, erase count leveling method therefor, and erase count leveling program
TWI366828B (en) 2007-09-27 2012-06-21 Phison Electronics Corp Wear leveling method and controller using the same
CN101645309B (zh) * 2008-08-05 2013-05-22 ADATA Technology (Suzhou) Co., Ltd. Non-volatile storage device and control method thereof
KR101662273B1 (ko) * 2009-11-27 2016-10-05 Samsung Electronics Co., Ltd. Non-volatile memory device, memory system including the same, and wear management method thereof
US8949506B2 (en) * 2010-07-30 2015-02-03 Apple Inc. Initiating wear leveling for a non-volatile memory
US8612804B1 (en) * 2010-09-30 2013-12-17 Western Digital Technologies, Inc. System and method for improving wear-leveling performance in solid-state memory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019231554A1 (en) * 2018-05-30 2019-12-05 Micron Technology, Inc. Memory management
US10636459B2 (en) 2018-05-30 2020-04-28 Micron Technology, Inc. Wear leveling
KR20210003305 (ko) 2018-05-30 2021-01-11 Micron Technology, Inc. Memory management
US11056157B2 (en) 2018-05-30 2021-07-06 Micron Technology, Inc. Wear leveling
EP3803875A4 (en) * 2018-05-30 2022-03-09 Micron Technology, Inc. MEMORY MANAGEMENT
US11646065B2 (en) 2018-05-30 2023-05-09 Micron Technology, Inc. Wear leveling

Also Published As

Publication number Publication date
TWI578157B (zh) 2017-04-11
EP2761471A1 (en) 2014-08-06
EP2761471A4 (en) 2015-05-27
CN103946819B (zh) 2017-05-17
TW201627865A (zh) 2016-08-01
US20130282967A1 (en) 2013-10-24
US9298606B2 (en) 2016-03-29
CN103946819A (zh) 2014-07-23
TWI518503B (zh) 2016-01-21
WO2013048470A1 (en) 2013-04-04
TW201324147A (zh) 2013-06-16

Similar Documents

Publication Publication Date Title
EP2761471B1 (en) Statistical wear leveling for non-volatile system memory
CN113138713B (zh) Memory system
US20180239697A1 (en) Method and apparatus for providing multi-namespace using mapping memory
US9183136B2 (en) Storage control apparatus and storage control method
KR101324688B1 (ko) Memory system with perpetual garbage collection
US9239781B2 (en) Storage control system with erase block mechanism and method of operation thereof
US9336133B2 (en) Method and system for managing program cycles including maintenance programming operations in a multi-layer memory
JP4518951B2 (ja) Automatic wear leveling in non-volatile storage systems
US7890550B2 (en) Flash memory system and garbage collection method thereof
US8452911B2 (en) Synchronized maintenance operations in a multi-bank storage system
US6831865B2 (en) Maintaining erase counts in non-volatile storage systems
EP1576478B1 (en) Method and apparatus for grouping pages within a block
JP2021128582 (ja) Memory system and control method
CN110806984B (zh) Apparatus and method for searching for valid data in a memory system
US20090300269A1 (en) Hybrid memory management
WO2000002126A1 (en) Method and apparatus for performing erase operations transparent to a solid state storage system
JP2011519095A (ja) Method and system for storage address remapping for multi-bank storage devices
KR20120059569 (ko) In-situ memory annealing
JP7353934B2 (ja) Memory system and control method
CN113924546A (zh) Wear-aware block mode conversion in non-volatile memory
CN111435291A (zh) Apparatus and method for erasing data programmed in a non-volatile memory block
Kwon et al. Data pattern aware FTL for SLC+ MLC hybrid SSD
McEwan et al. A real-time dependable flash storage system
CN114746848B (zh) Cache architecture for a storage device
TW202414222A (zh) Data processing method and corresponding data storage device

Legal Events

PUAI   Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
17P    Request for examination filed (effective date: 20140320)
AK     Designated contracting states (kind code of ref document: A1; designated states: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
DAX    Request for extension of the European patent (deleted)
RA4    Supplementary search report drawn up and despatched (corrected) (effective date: 20150429)
RIC1   Information provided on IPC code assigned before grant (IPC: G11C 16/06 20060101ALI20150422BHEP; G06F 12/06 20060101AFI20150422BHEP; G06F 12/02 20060101ALI20150422BHEP)
GRAP   Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
STAA   Information on the status of an EP patent application or granted EP patent (status: grant of patent is intended)
INTG   Intention to grant announced (effective date: 20170509)
GRAS   Grant fee paid (original code: EPIDOSNIGR3)
GRAA   (Expected) grant (original code: 0009210)
STAA   Information on the status of an EP patent application or granted EP patent (status: the patent has been granted)
AK     Designated contracting states (kind code of ref document: B1; designated states: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
REG    Reference to a national code (GB: FG4D)
REG    Reference to a national code (CH: EP)
REG    Reference to a national code (AT: REF; ref document number: 940521; kind code: T; effective date: 20171115)
REG    Reference to a national code (IE: FG4D)
REG    Reference to a national code (DE: R096; ref document number: 602011042814)
REG    Reference to a national code (NL: MP; effective date: 20171025)
REG    Reference to a national code (LT: MG4D)
REG    Reference to a national code (AT: MK05; ref document number: 940521; kind code: T; effective date: 20171025)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]: NL, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
PG25   Lapsed in a contracting state: ES, FI, LT, SE (effective date: 20171025) and NO (effective date: 20180125), for failure to submit a translation of the description or to pay the fee within the prescribed time-limit
PG25   Lapsed in a contracting state: RS, AT, HR, LV (effective date: 20171025), BG (20180125), GR (20180126) and IS (20180225), for failure to submit a translation of the description or to pay the fee within the prescribed time-limit
REG    Reference to a national code (DE: R097; ref document number: 602011042814)
PG25   Lapsed in a contracting state: DK, SK, CZ, EE and CY, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
PG25   Lapsed in a contracting state: PL, SM, IT and RO, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
PLBE   No opposition filed within time limit (original code: 0009261)
STAA   Information on the status of an EP patent application or granted EP patent (status: no opposition filed within time limit)
26N    No opposition filed (effective date: 20180726)
PG25   Lapsed in a contracting state: SI, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
PG25   Lapsed in a contracting state: MC, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
REG    Reference to a national code (CH: PL)
REG    Reference to a national code (BE: MM; effective date: 20180930)
REG    Reference to a national code (IE: MM4A)
PG25   Lapsed in a contracting state: LU, because of non-payment of due fees (effective date: 20180930)
PG25   Lapsed in a contracting state: IE, because of non-payment of due fees (effective date: 20180930)
PG25   Lapsed in a contracting state: BE, LI, CH and FR, because of non-payment of due fees (effective date: 20180930)
PG25   Lapsed in a contracting state: MT, because of non-payment of due fees (effective date: 20180930)
PG25   Lapsed in a contracting state: TR, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
PG25   Lapsed in a contracting state: PT, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025); HU, same ground, invalid ab initio (effective date: 20110930)
PG25   Lapsed in a contracting state: MK, because of non-payment of due fees (effective date: 20171025)
PG25   Lapsed in a contracting state: AL, for failure to submit a translation of the description or to pay the fee within the prescribed time-limit (effective date: 20171025)
P01    Opt-out of the competence of the unified patent court (UPC) registered (effective date: 20230518)
PGFP   Annual fee paid to national office [announced via postgrant information from national office to EPO]: GB (payment date: 20230817; year of fee payment: 13)
PGFP   Annual fee paid to national office: DE (payment date: 20230822; year of fee payment: 13)