WO2011146823A2 - Method and apparatus for using cache memory in a system that supports a low power state - Google Patents

Method and apparatus for using cache memory in a system that supports a low power state

Info

Publication number
WO2011146823A2
WO2011146823A2 · PCT/US2011/037319
Authority
WO
WIPO (PCT)
Prior art keywords
error correction
cache line
cache
logic
correction logic
Prior art date
Application number
PCT/US2011/037319
Other languages
English (en)
French (fr)
Other versions
WO2011146823A3 (en)
Inventor
Christopher B. Wilkerson
Wei Wu
Alaa R. Alameldeen
Shih-Lien Lu
Zeshan A. Chishti
Dinesh Somasekhar
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to BRPI1105243A priority Critical patent/BRPI1105243A8/pt
Priority to KR20127033246A priority patent/KR101495049B1/ko
Priority to DE112011100579.2T priority patent/DE112011100579B4/de
Priority to GB1122300.5A priority patent/GB2506833B/en
Priority to JP2012517938A priority patent/JP5604513B2/ja
Publication of WO2011146823A2 publication Critical patent/WO2011146823A2/en
Publication of WO2011146823A3 publication Critical patent/WO2011146823A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1008Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
    • G06F11/1064Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices in cache or content addressable memories
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Definitions

  • the present invention relates generally to memory, and more particularly to reducing the power consumption of cache memory while a system is in a low power state.
  • Embedded Dynamic Random Access Memory (DRAM) may be used to provide large caches integrated with a Central Processing Unit (CPU).
  • Embedded DRAM is significantly denser than traditional Static Random Access Memories (SRAMs), but must be periodically refreshed to retain data.
  • embedded DRAM is susceptible to device variations, which play a role in determining a refresh period for embedded DRAM cells. Power consumed to refresh eDRAM represents a large portion of overall system power, particularly during low-power states when the CPU is idle.
  • Fig. 1 is an embodiment of a processor that includes a cache memory and error code correction logic (ECC) according to the principles of the present invention
  • Fig. 2 is a block diagram of a system including an embodiment of a Recently
  • Accessed Lines Table (RALT) and the cache memory and ECC logic shown in Fig. 1 illustrating a fast access to a cache line in the cache memory;
  • Fig. 3 is a block diagram of the system shown in Fig. 2 illustrating a subsequent read of a cache line within the refresh period;
  • Fig. 4A is a block diagram illustrating an embodiment of an ECC encoder included in the quick ECC logic shown in Fig. 1;
  • Fig. 4B is a block diagram illustrating an embodiment of an ECC decoder included in the quick ECC logic shown in Fig. 1;
  • Fig. 5 is a flow graph illustrating an embodiment of a method for using the system shown in Fig. 1 according to the principles of the present invention.
  • Fig. 6 is a block diagram of a system that includes an embodiment of the processor shown in Fig. 1.
  • On-chip caches in a device and memory devices typically use simple and fast ECC, such as Single Error Correction and Double Error Detection (SECDED) Hamming codes.
  • Slower devices such as flash memories use multi-bit ECCs with strong error correcting capabilities, for example, Reed-Solomon codes.
  • the higher decoding latencies of the strong ECC mechanisms do not pose a problem for mass storage devices, for example, disk drives because the encoding/decoding latency is insignificant as compared to intrinsic device access time.
  • the on-chip memory arrays are more susceptible to multi-bit errors.
  • strong ECC codes are also desirable for on-chip cache.
  • the storage overhead of the additional ECC bits is an obstacle to using multi-bit ECC for on-chip cache memories.
  • micro-processors implement a number of idle states to support lower power modes (states). Reducing the power consumed during idle states is particularly important because the typical Central Processing Unit (CPU) spends a significant amount of time in an idle state.
  • Embedded DRAM technology enables smaller memory cells as compared to SRAM cells, resulting in a large increase in memory density.
  • DRAM may be used to replace SRAM as the last-level on-chip cache in high performance processors.
  • the retention time of an eDRAM cell is defined as the length of time for which the cell can retain its state (charge).
  • Cell retention time is dependent on the leakage current, which, in turn, is dependent on the device leakage.
  • the refresh period needs to be less than the cell retention time. Since eDRAM is DRAM integrated on a conventional logic process it uses fast logic transistors with a higher leakage current than transistors used in conventional DRAM. Therefore, the refresh time for eDRAM is about a thousand times shorter than conventional DRAM. The shorter refresh period increases power consumed during the idle state and also leads to reduced availability.
  • a method to reduce cache power is to use power gates.
  • Power gates are switches on the power supply that allow power to be completely shut off to a block of transistors. Since memory technologies such as eDRAM and SRAM are unable to retain state when deprived of power, power-gating is performed at the cost of losing memory state.
  • the DRAM refresh period may be increased through the use of error-correcting codes (ECC) to dynamically identify and repair cells that lose their state.
  • the refresh rate is set irrespective of the weakest eDRAM cells, using ECC to compensate for lost state.
  • a stronger error-correcting code, with the ability to correct multi-bit errors, allows an increased refresh period (that is, a reduced refresh rate) and thus reduced power consumption.
  • multi-bit ECC codes have a high storage and complexity overhead which limit their applicability.
  • An embodiment of the present invention provides a flexible memory structure that uses multi-bit ECC codes with a low storage and complexity overhead and can operate at very low idle power, without dramatically increasing transition latency to and from the idle power state due to loss of state of cells (bits).
  • Fig. 1 is an embodiment of a processor 100 that includes a cache memory and error code correction (ECC) logic 122 according to the principles of the present invention.
  • the ECC logic 122 is low-latency, low-cost, multi-bit error-correcting logic that compensates for high failure rates in volatile memory such as the memory cache 110 shown in Fig. 1.
  • the memory cache 110 is embedded DRAM (eDRAM).
  • the memory cache 110 may be Static Random Access Memory (SRAM) or any other type of volatile memory.
  • a Single Error Correcting, Double Error Detecting (SECDED) code for a 64 Byte (512-bit) cache line requires 11 bits, which is an overhead of about 2%.
  • the number of bits in an ECC code relative to the number of bits in the data word diminishes as the number of bits in the data word increases.
  • SECDED code for a 64 Byte cache line has an 11-bit overhead (2%)
  • SECDED code for a 1024 Byte (1KB) cache line has a 15-bit overhead (0.18%).
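This diminishing relative overhead can be checked directly: a Hamming single-error-correcting (SEC) code needs r check bits satisfying 2^r ≥ k + r + 1, and one extra overall-parity bit upgrades it to SECDED. A small illustrative sketch (not part of the patent):

```python
def secded_check_bits(k: int) -> int:
    """Check bits for a SECDED code over k data bits:
    SEC needs r with 2**r >= k + r + 1; +1 overall parity bit for DED."""
    r = 1
    while (1 << r) < k + r + 1:
        r += 1
    return r + 1

for line_bytes in (64, 1024):
    k = line_bytes * 8
    r = secded_check_bits(k)
    print(f"{line_bytes} B line: {r} check bits ({r / k:.2%} overhead)")
```

Running this reproduces the figures quoted above: 11 check bits (about 2%) for a 64 B line and 15 check bits (about 0.18%) for a 1 KB line.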
  • BCH inherits the additive property of linear systems, which ensures that ECC check bits can be updated using only the information of the modified sub-block (chunk of data).
  • the data word d (representing a cache line) is divided into multiple chunks (sub-blocks) [d_{i-1}, d_{i-2}, ..., d_0].
  • Equation (1) shows that the generation of new check bits requires only the old value of check bits and the old and new values of the sub-block being modified.
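Because the code is linear over GF(2), the new check bits can be computed from the old check bits and the XOR difference of the modified sub-block alone; the rest of the cache line never needs to be read. A toy sketch (the parity-check matrix H here is random and purely illustrative, not the patent's BCH code):

```python
import random

random.seed(1)
K, R = 32, 6                                   # data bits, check bits
H = [[random.randint(0, 1) for _ in range(K)] for _ in range(R)]

def check_bits(d):
    # c_i = XOR of the data bits selected by row i of H
    return [sum(h & b for h, b in zip(row, d)) % 2 for row in H]

old = [random.randint(0, 1) for _ in range(K)]
new = old[:]
for i in range(8, 16):                         # rewrite one 8-bit sub-block
    new[i] = random.randint(0, 1)

delta = [a ^ b for a, b in zip(old, new)]      # nonzero only in the sub-block
# Linearity: c_new = c_old XOR H*delta -- only the modified chunk matters
c_incremental = [c ^ s for c, s in zip(check_bits(old), check_bits(delta))]
assert c_incremental == check_bits(new)
```

Since delta is zero outside the modified sub-block, H*delta depends only on the columns covering that sub-block, which is exactly the property Equation (1) exploits.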
  • the ECC logic 122 is low-latency, low-cost, multi-bit error-correction logic that can compensate for high failure rates in the eDRAM cache 110.
  • the ECC logic 122 implements a strong BCH code with the ability to correct five errors (5EC) and to detect six errors (6ED) (hereafter referred to as a 5EC6ED code).
  • a traditional approach using multi-bit ECC suffers from two prohibitive overheads that limit its applicability. First, building a low-latency decoder for multi-bit ECC codes is extremely costly. Second, the storage overhead of ECC bits is high (around 10% for a 5EC6ED ECC code for a cache line having 64 bytes).
  • the ECC logic 122 implements a multi-bit error-correcting code with very small area, latency, and power overhead.
  • the ECC logic 122 minimizes embedded DRAM power consumption in low-power operating modes (idle states) without penalizing performance in the normal operating mode.
  • the ECC logic 122 includes a quick ECC logic 104 that is optimized for the cache lines that require little or no correction.
  • the ECC logic 122 includes a high latency ECC logic 106 for cache lines that require complex multi-bit correction.
  • the ECC logic 122 disables lines with multi-bit failures.
  • the ECC logic 122 leverages the natural spatial locality of the data to reduce the cost of storing the ECC bits.
  • the embedded DRAM 110 is a 128 Mega Bytes (MB) last level (Level 3 (L3)) cache included in the processor 100.
  • the time between refreshes for the embedded DRAM cache 110 is 30 microseconds (us). This results in a significant amount of power consumed even when the Central Processing Unit (CPU) 102 is idle. Power consumed during refresh (refresh power) may be reduced by flushing and power gating the embedded DRAM cache 110 during low-power operating modes, for example, idle states.
  • refresh power consumption may be reduced by decreasing the refresh frequency, that is, increasing the refresh period (time between refreshes) of the data stored in cache lines in the embedded DRAM cache 110.
  • the ECC logic 122 implements a 5EC6ED code on each 1KB cache line, requiring an additional 71 bits (0.87% overhead) per cache line to store the 5EC6ED check bits.
  • the baseline configuration with no failure mitigation operates at the baseline refresh time of 30 micro seconds (us).
  • the error correction code logic 122 allows an increase in the refresh period to 440 microseconds, almost a 15-fold increase over the baseline refresh period (and a corresponding reduction in refresh power).
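The benefit follows from refresh power scaling roughly with refresh frequency, the inverse of the refresh period. A back-of-the-envelope check using the two periods from the text:

```python
baseline_us, extended_us = 30, 440    # refresh periods stated above

speedup = extended_us / baseline_us           # how much longer each period is
relative_power = baseline_us / extended_us    # refresh power ~ 1/period
print(f"period extended {speedup:.1f}x")
print(f"refresh power ~{relative_power:.1%} of baseline")
```

This gives a roughly 14.7x longer period, cutting refresh power to about 7% of the baseline, under the simplifying assumption that each refresh costs the same energy.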
  • Logic to support a 5EC6ED code is very complex and imposes a long decoding latency penalty, proportional to both the number of error bits corrected and the number of data bits. If full encoding/decoding is required for every access to the cache memory, this can significantly increase cache access latency. In an embodiment of the present invention, error-prone portions of the cache can be disabled, avoiding the high latency of decode during operation.
  • the error code correction logic 122 includes a quick error correction code (ECC) logic (first error correction logic) 104 and a high-latency error correction code (ECC) logic (second error code correction logic) 106.
  • the Quick-ECC logic (unit) 104 includes syndrome generation logic and error correction logic for cache lines in eDRAM 110 with zero or one failures.
  • the Quick-ECC logic 104 also classifies cache lines into two groups based on the syndrome: cache lines that require complex multi-bit error correction and cache lines that have less than two, that is, zero or one errors.
  • Cache lines that require multi-bit error correction are forwarded to the high latency ECC processing logic (unit) 106 that performs multi-bit error correction.
  • Cache lines that are corrected by the Quick ECC logic 104 are forwarded to the CPU 102 via Ll/L2 cache 124.
  • the high latency ECC processing logic 106 performs error correction using software. In another embodiment, the high latency multi-bit ECC processing logic 106 performs multi-bit error correction using a state machine.
  • the combination of the quick ECC logic 104 and the high-latency ECC processing logic 106 allows cache lines in the eDRAM 110 that require one or fewer error corrections to be immediately corrected and forwarded with low latency to the CPU 102 via the L1/L2 cache 124. Latency increases for forwarding of cache lines in the eDRAM 110 with two or more failures to the CPU 102.
  • the quick ECC logic 104 in the ECC logic 122 performs a one cycle ECC to correct a single bit error in a cache line in the embedded DRAM 110.
  • the high latency correction logic 106 in the ECC logic 122 performs un-pipelined, high-latency ECC processing to correct multiple bit errors in a cache line.
  • When a cache line is read from the embedded DRAM 110, it is passed through data buffer 114 to the quick error correction logic 104 together with the tag and ECC associated with the cache line read from the tag/ECC array 108.
  • the tag and ECC are passed through data buffer 116 to the Quick ECC logic 104.
  • a decoder (not shown) in the quick ECC logic 104 generates the syndrome for the received cache line.
  • the generated syndrome includes information on whether the cache line has zero, one, or a higher number of errors. If the cache line has zero or one bit failures, the decoder in the quick ECC logic 104 performs the correction of the one bit failure in a short period of time.
  • the short period of time can be a single cycle (500 picoseconds (ps)). In other embodiments, the short period of time can be more than one cycle. The period of time is shorter than the time to perform multi-bit error correction by the high-latency ECC processing logic 106.
  • the high latency associated with handling multi-bit failures may significantly reduce performance.
  • disabling problematic lines or a mechanism such as bit-fix may be integrated in repair logic 120.
  • the frequency of errors plays a role in the disable strategy. If there is a low multi-bit error rate, an approach such as disabling cache lines containing multi-bit errors reduces the performance penalty. However, cache line disable results in unacceptable cache capacity loss if multi-bit error rates are high. If there is a high multi-bit error rate, a more complex mechanism such as bit-fix may be used to minimize the capacity lost to disabling cache lines.
  • repair logic 120 is coupled between the data buffers 114, 116 and the quick ECC logic 104. With the additional repair logic 120, the performance penalty of multi-bit decoding is incurred only once, that is, the first time an error due to a weak cell in the eDRAM 110 is identified.
  • the repair logic 120 allows the number of errors to be reduced prior to forwarding the cache line to the ECC logic 122. Thus, overall latency is reduced by first using a repair mechanism to fix known errors in a cache line prior to applying ECC to the cache line.
  • the repair logic 120 includes bit fix logic.
  • Bit fix logic identifies "broken” bit-pairs and maintains patches to repair the "broken" bit-pairs in the cache line.
  • the bit fix logic uses a quarter of the ways in a cache set to store positions and fixing bits for failing bits (that is, the correct state (value) for the failing bits in other ways of the set).
  • two of the eight ways are reserved to store defect-correction information to correct the "broken" bit pairs.
  • the bit fix logic allows defective pairs, that is, groups of 2-bits in the cache line in which at least one bit is defective (due to a logic state retention failure) to be disabled.
  • the bit fix logic maintains a 2-bit "patch" (correct bit state) that can be used to correct the defective 2-bit pair.
  • Repair patterns are stored in selected cache lines in cache memory (eDRAM) 110. During low-voltage operation, the repair patterns (repair pointers and patches) are stored in the cache memory 110.
  • a read or write operation on a cache-line first fetches the repair patterns for the cache line. When reading, repair patterns allow reads to avoid reading data from "broken" bits (defective bits).
  • repair patterns allow writes to avoid writing to failed bits. New patches are written to the repair patterns to reflect new data written to the cache.
  • An embodiment of a repair mechanism (repair logic 120 that uses bit-fix logic) has been described. In other embodiments, repair mechanisms other than bit-fix can be used to fix known errors prior to applying ECC.
  • the cache memory 110 is a 32K 8-way cache having 64B cache lines. Each access to data stored in the cache memory 110 requires an additional access to retrieve the appropriate repair patterns.
  • the bit-fix scheme organizes the cache memory 110 into two banks. Two fix-lines are maintained, one in each bank, and each is used for repairing cache-lines in the opposite bank. The repair patterns for three cache lines fit in a single cache line. Thus a single fix-line (a cache line storing repair patterns) for every three cache lines is maintained. A fix-line is assigned to the bank opposite to the three cache lines that use its repair patterns. This allows a cache line to be fetched in parallel with its repair patterns without increasing the number of cache ports.
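The fix-line placement just described reduces to a simple mapping: one fix-line serves three data lines and always lives in the opposite bank. A sketch (function name and two-bank encoding are illustrative assumptions, not the patent's terms):

```python
def fix_line_for(data_line: int, data_bank: int) -> tuple[int, int]:
    """Repair patterns for three data lines share one fix-line, stored in
    the opposite bank so data and repair pattern fetch in parallel."""
    return (data_line // 3, 1 - data_bank)

# Data lines 0..2 in bank 0 all use fix-line 0 in bank 1
assert fix_line_for(0, 0) == (0, 1)
assert fix_line_for(2, 0) == (0, 1)
assert fix_line_for(3, 0) == (1, 1)
```

Because the fix-line is in the other bank, reading it does not contend with the data-line read, which is why no extra cache port is needed.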
  • On a cache hit, the data line is read from one bank in the cache memory 110 and a fix-line is read from another bank in the cache memory 110.
  • the data line passes through 'n' bit shift stages, where 'n' represents the number of defective bit pairs. Each stage removes a defective pair, replacing it with the fixed pair.
  • SECDED ECC is applied to correct the repair patterns in the fix-line before they are used. After the repair patterns have been fixed, they are used to correct the data-line. Repairing a single defective pair consists of three parts. First, SECDED ECC repairs any defective bits in the repair pattern. Second, a defect pointer identifies the defective pair. Third, after the defective pair has been removed, a patch reintroduces the missing correct bits into the cache line.
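A minimal sketch of the patch step, treating the line as 2-bit groups (the data layout and names are assumptions, and the SECDED step on the repair pattern itself is elided):

```python
def repair_line(pairs, defect_ptrs, patches):
    """Bit-fix sketch: each defect pointer names a defective 2-bit group;
    the stored 2-bit patch reintroduces the correct value."""
    fixed = list(pairs)
    for ptr, patch in zip(defect_ptrs, patches):
        fixed[ptr] = patch
    return fixed

raw = [(0, 1), (1, 1), (0, 0), (1, 0)]   # line read with broken group 1
assert repair_line(raw, [1], [(0, 1)]) == [(0, 1), (0, 1), (0, 0), (1, 0)]
```

Only the defective groups are touched; all other bits pass through unchanged, so known weak cells are repaired before the line ever reaches the ECC logic.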
  • Fig. 2 is a block diagram of a system 200 including an embodiment of a Recently Accessed Line Table (RALT) 112 and the embedded DRAM cache 110 and ECC logic 122 shown in Fig. 1 illustrating a fast access to a cache line in the eDRAM cache 110.
  • a cache line size greater than 64 bytes is used to reduce the memory storage required to store multi-bit ECC codes.
  • the eDRAM cache 110 is a Level 3 (L3) cache which is 128MB embedded DRAM, and the size of a cache line 202 is 1024 Bytes (1 Kilobyte (KB)).
  • a Level 2 (L2) cache/Level 1 (L1) cache 124 has a 64 Byte (B) cache line (referred to as a sub-block of the L3 cache line).
  • Most writes to the L3 eDRAM cache 110 are in the form of smaller 64 Byte sub-blocks generated at lower-level (L1 or L2) cache memories 124 or fetched from non-cache memory (main memory).
  • a read-modify-write operation is performed by the CPU 102 in order to compute the ECC code.
  • the 64B sub-block 204 that is being overwritten is read from the eDRAM cache 110 together with the ECC code 208 for the entire 1 KB cache line 202.
  • the old data, old ECC code, and new data are used to compute the new ECC 208 for the entire 1KB cache line 202.
  • the new 64B sub-block 204 and a new ECC code 208 are written back to the L3 eDRAM cache 110.
  • the entire 1 KB cache line 202 is not read in order to compute the new ECC 208 as will be discussed later.
  • Most reads from the L3 cache 110 are performed to provide cache lines for allocation in lower-level (L1 and L2) caches 124. Processing any sub-block 204 of a cache line 202 requires the ECC 208 to be processed with the entire data word (a 1KB cache line) 202 that it protects. As each 64B sub-block 204 in the 1KB cache line 202 needs to be checked, each reference to a 64B sub-block 204 is accompanied by a reference to the surrounding 64B sub-blocks 204. Thus, any read of the L3 embedded DRAM cache 110 accesses all sixteen 64B sub-blocks 204 in the 1KB cache line 202, in addition to the ECC 208 (per cache line) that all of the sub-blocks 204 share in the cache line 202.
  • the majority of eDRAM failures are due to retention failures because as already discussed, the eDRAM cache 110 needs to be periodically refreshed to maintain the current state of each memory cell.
  • the retention time is 30 micro seconds (us), and each read of a particular cache line automatically implies a refresh of that cache line.
  • retention failures should not occur for 30us in a particular cache line after that cache line has been read. This observation allows the number of superfluous reads to be minimized.
  • the RALT 112 is used to track cache lines that have been referenced (read) within the last 30us.
  • the first read to a cache line 202 in the eDRAM cache 110 results in all of the sub- blocks 204 in the cache line 202 being read and checked for errors.
  • the address of the cache line 202 that is read is stored in a RALT entry 206 in the RALT 112.
  • the stored address indicates that the cache line 202 has recently been read and checked and thus should remain free from retention errors for the next 30us. While the address of the read cache line is stored in the RALT 112, any subsequent read of a sub-block from that cache line 202 can forgo ECC processing and thus avoid reading the ECC 208 associated with the cache line 202 and other sub-blocks 204 in the cache line 202.
  • the RALT 112 ensures that none of its entries 206 have been stored for more than 30us by dividing each 30us time period into a plurality of equal "cache line read" periods. Entries 206 that are allocated in the RALT 112 during each period are marked with a period identifier 214 identifying the sub-refresh period. A transition between sub-refresh periods results in all RALT entries previously allocated in one of the plurality of "cache line read" periods being invalidated (as indicated by the state of the "valid" field associated with the entry 206 in the RALT).
  • Each entry 206 in the RALT 112 includes the following fields: a line address field 209 to identify the cache line that the entry is associated with; a valid field 212; a period identifier field 214 to indicate in which period the line was allocated; and a parity field 211 that includes one parity bit for each sub-block in the cache line.
  • the period identifier field 214 has two bits to indicate in which of four periods (P0, P1, P2, P3) the cache line was allocated, and the parity field 211 has 16 bits, one per 64B sub-block in the cache line.
  • the RALT 112 is direct mapped, but supports a CAM (Content Addressable Memory) search to support invalidation.
  • The first time a sub-block 204 is read, the entire ECC 208 is also read along with each sub-block in the 1KB cache line 202 to allow ECC processing for a single 64B sub-block 204.
  • the entry 206 associated with the cache line 202 in the RALT 112 is updated with the line address of the referenced cache line 202, a period identifier, and a single parity bit for each sub-block 204 in the cache line 202.
  • the first read to a cache line causes all sub-blocks in the line to be read and checked for failures.
  • the address of the line is then stored in the RALT to indicate that it has recently been checked and will remain free from retention failures for the next 30us. As long as the address of the line is stored in the RALT, any sub-block reads from the line can forgo ECC processing and thus avoid reading the ECC code and other sub-blocks in the line.
  • the RALT 112 ensures that none of its entries are more than 30us old.
  • a counter 216 is used to measure the passage of each 30us period.
  • Each 30us period is divided into four equal sub-periods (P0, P1, P2, P3).
  • Entries allocated in the RALT 112 during each period are marked with a 2-bit identifier to specify the allocation sub-period, which can be determined by checking the current value of the counter. For example, the passage of 30us in a 2GHz processor 100 can be measured using a counter 216 that increments every cycle, counting to 60000.
  • When the counter 216 reaches 60000, it resets to 0, resulting in a transition from P3 to P0.
  • Each sub-period transition can cause the invalidation of some or all of the RALT entries allocated during the previous instance of that sub-period. For example, a transition from sub-period P0 to sub-period P1 will result in all RALT entries previously allocated in sub-period P1 being invalidated.
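The counter and invalidation scheme can be sketched as follows (cycle counts follow the 2 GHz example in the text; the RALT structure itself is heavily simplified):

```python
CYCLES_PER_30US = 60_000          # 30 us at 2 GHz
SUB_PERIODS = 4                   # P0..P3

def sub_period(cycle: int) -> int:
    return (cycle % CYCLES_PER_30US) * SUB_PERIODS // CYCLES_PER_30US

class Ralt:
    def __init__(self):
        self.entries = {}         # line address -> allocation sub-period

    def allocate(self, addr: int, cycle: int):
        self.entries[addr] = sub_period(cycle)

    def on_tick(self, prev_cycle: int, cycle: int):
        new = sub_period(cycle)
        if new != sub_period(prev_cycle):
            # Entering a sub-period invalidates entries allocated in its
            # previous instance, so no entry survives past ~30 us.
            self.entries = {a: p for a, p in self.entries.items()
                            if p != new}

ralt = Ralt()
ralt.allocate(0x1000, cycle=5_000)               # allocated in P0
ralt.on_tick(prev_cycle=14_000, cycle=16_000)    # P0 -> P1: entry survives
assert 0x1000 in ralt.entries
ralt.on_tick(prev_cycle=59_000, cycle=61_000)    # P3 -> P0: old P0 drops
assert 0x1000 not in ralt.entries
```

Invalidating by sub-period instead of per-entry timestamps keeps the hardware cost to a 2-bit tag per entry plus one shared counter.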
  • Fig. 3 is a block diagram of the system shown in Fig. 2 illustrating a subsequent read of a cache line within the refresh period.
  • Parity for the 64B sub-block 204 is computed and compared to the parity 211 for that 64B sub-block 204 of the cache line 202 stored in the RALT 112. If there is a match, the inference is that the 64B sub-block 204 is valid, and the 64B sub-block 204 is forwarded to the requesting cache 124 or processor 102.
  • a parity mismatch is treated as a RALT miss and the entire 1KB cache line 202 is read.
  • the RALT 112 is used to track recently accessed cache lines 202 to avoid reading the entire 1KB cache line 202 on every cache read, thus minimizing dynamic power.
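Putting the read path together, a hedged sketch of the fast path (the function interfaces and storage stubs are invented for illustration):

```python
def parity(bits):
    return sum(bits) % 2

def read_sub_block(addr, sub_idx, ralt, read_sub, read_full_line):
    """RALT fast path: if the line was checked within the current retention
    window and the stored per-sub-block parity matches, skip full-line ECC."""
    data = read_sub(addr, sub_idx)                # 64 B sub-block only
    entry = ralt.get(addr)                        # (period, parity bits)
    if entry is not None and parity(data) == entry[1][sub_idx]:
        return data                               # fast hit: no ECC read
    return read_full_line(addr, sub_idx)          # miss: full 1 KB + ECC

# Tiny demo with stub storage: two sub-blocks, a matching RALT entry
line = {0: [1, 0, 1], 1: [0, 0, 1]}
ralt = {0x40: (0, [parity(line[0]), parity(line[1])])}
full_reads = []
def read_full(addr, i):
    full_reads.append(addr)                       # tracks slow-path use
    return line[i]

assert read_sub_block(0x40, 0, ralt, lambda a, i: line[i], read_full) == [1, 0, 1]
assert full_reads == []                           # ECC path never taken
```

A parity mismatch or missing entry falls through to the full-line read, matching the "parity mismatch is treated as a RALT miss" rule above.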
  • Fig. 4A is a block diagram illustrating an embodiment of an ECC encoder 400 included in the quick ECC logic 104 shown in Fig. 1.
  • BCH codes are a large class of multi-bit error-correcting codes which can correct both highly concentrated and widely scattered errors.
  • each BCH code is a linear block code defined over a finite Galois Field GF(2^m) with a generator polynomial, where 2^m represents the maximum number of code word bits.
  • u(x) = d(x) · G, where G is the generator matrix of the code.
  • BCH is a systematic code: the original k-bit data is retained in the code word u(x) and is followed by r check bits.
  • Fig. 4B is a block diagram illustrating an embodiment of an ECC decoder (decoding logic) 402 included in the quick ECC logic shown in Fig. 1.
  • the decoding logic 402 detects and corrects any errors in the received code word u(x) to recover the original value of data.
  • the decoding logic 402 includes syndrome generation logic 404, error classification logic 406 and error correction logic 408.
  • the general form of the H-matrix is H = [H1; H3; ...; H(2t-1)], where each row block Hi = [1, α^i, α^(2i), ..., α^((n-1)i)].
  • the error classification logic uses the syndrome S = H · u to detect whether the code word has any errors. Since a valid code word u satisfies H · u = 0, any nonzero syndrome indicates that errors are present.
  • the error correction logic uses the syndrome value to pinpoint the locations of corrupted bits, if the above equation is not satisfied.
  • each syndrome component Si can be specified as Si = u(α^i), that is, the received polynomial evaluated at the field element α^i.
  • the correction logic implements the following three steps:
  • Step 1: Determine the coefficients of the error location polynomial σ(x), where σ(x) is defined such that the roots of σ(x) are given by the inverses of the error location elements α^(j1), α^(j2), ..., α^(jt) respectively.
  • Step 1 of error correction is based on a t-step iterative algorithm, where each iteration involves a Galois Field inversion, which alone takes 2m operations
  • Step 2: Solve for the roots of σ(x), which are the error locations. When the polynomial σ(x) is determined, each field element is substituted into the polynomial. Those elements which make the polynomial equal to zero are the roots.
  • the implementation of Step 2 can either take n cycles with one circuit, or a single cycle with n parallel circuits. Either way, the base circuit is O(t·m^2).
  • Step 3: Calculate the correct value for the data bits. This is done by simply flipping the bits at the error locations.
  • In the case of a single-bit error, the syndrome exactly matches the H-matrix column that corresponds to the error bit. Therefore, a single-bit error can be detected by comparing each column of the H-matrix with the syndrome. This correction is significantly faster than the general case of t-bit correction (with t > 1) because it does not require Step 1 and most of Step 2 of the error correction logic. All the syndrome components do not need to be matched with entire H-matrix columns; it suffices to compare S1 to each column in H1 and verify that the remaining syndrome components are consistent with a single-bit error at that position.
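The fast single-error path can be illustrated with a tiny Hamming(7,4) code (a stand-in for the patent's BCH code, used here only because its H-matrix is easy to write down): column j of H is the binary encoding of j+1, and the syndrome of a single-bit error equals the column at the error position.

```python
# Hamming(7,4): column j of H is the 3-bit binary encoding of j+1
H = [[(j + 1) >> r & 1 for j in range(7)] for r in range(3)]

def syndrome(word):
    return [sum(H[r][j] & word[j] for j in range(7)) % 2 for r in range(3)]

word = [0] * 7            # the all-zero word is a valid codeword
word[4] ^= 1              # inject a single-bit error

s = syndrome(word)
# Locate the error by matching the syndrome against H's columns
col = next(j for j in range(7) if [H[r][j] for r in range(3)] == s)
assert col == 4           # syndrome matched the column at the error bit
word[col] ^= 1            # flip the bit to correct
assert syndrome(word) == [0, 0, 0]
```

This column lookup is pure combinational matching, which is why the quick ECC path can run in about one cycle while the general multi-bit decode cannot.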
  • Fig. 5 is a flow graph illustrating an embodiment of a method for using the system 100 shown in Fig. 1 according to the principles of the present invention.
  • the cache line address (addr) is stored in a line address field 209 in a RALT entry 206 in the RALT 112, as discussed earlier in conjunction with Fig. 2.
  • Data stored in the cache line in cache memory 110 and the Tag/ECC stored in the Tag/ECC array 118 corresponding to the address is read and forwarded through data buffers 114, 116. Processing continues with block 504.
  • In an embodiment that includes repair logic 120, processing continues with block 512 to repair the cache line. In an embodiment that does not include repair logic, processing continues with block 506.
  • quick ECC is performed by quick ECC logic 104 to determine if there are errors in the cache line. Processing continues with block 508.
  • At block 508, if there are two or more errors, processing continues with block 514. If there are fewer than two errors, any single-bit error is corrected by the quick ECC logic 104 and processing continues with block 510. At block 510, the corrected cache line data is forwarded to the CPU 102 via the L1/L2 cache 124.
  • At block 514, the high-latency ECC logic 106 corrects the multi-bit error in the cache line and processing continues with block 510.
  • Fig. 6 is a block diagram of a system 600 that includes an embodiment of the processor 100 shown in Fig. 1.
  • the system 600 includes a processor 100 with embedded cache memory, a Memory Controller Hub (MCH) 602 and an Input/Output (I/O) Controller Hub (ICH) 604.
  • The MCH 602 includes a memory controller 606 that controls communication between the processor 100 and external memory (main memory) 610.
  • The processor 100 and MCH 602 communicate over a system bus 616.
  • The CPU 102 may be any one of a plurality of processors such as a single core Intel® Pentium IV® processor, a single core Intel® Celeron® processor, an Intel® XScale processor, or a multi-core processor such as an Intel® Pentium D, Intel® Xeon® processor, or Intel® Core® Duo processor, or any other type of processor.
  • The memory 610 may be Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Synchronized Dynamic Random Access Memory (SDRAM), Double Data Rate 2 (DDR2) RAM, Rambus Dynamic Random Access Memory (RDRAM), or any other type of memory.
  • The ICH 604 may be coupled to the MCH 602 using a high speed chip-to-chip interconnect 614 such as Direct Media Interface (DMI).
  • DMI supports 2 Gigabit/second concurrent transfer rates via two unidirectional lanes.
  • The ICH 604 may include a storage Input/Output (I/O) controller for controlling communication with at least one storage device 612 coupled to the ICH 604.
  • The storage device may be, for example, a disk drive, Digital Video Disk (DVD) drive, Compact Disk (CD) drive, Redundant Array of Independent Disks (RAID), tape drive or other storage device.
  • The ICH 604 may communicate with the storage device 612 over a storage protocol interconnect 618 using a serial storage protocol such as Serial Attached Small Computer System Interface (SAS) or Serial Advanced Technology Attachment (SATA).
  • A computer usable medium may consist of a read only memory device, such as a Compact Disk Read Only Memory (CD ROM) disk or conventional ROM devices, or a computer diskette, having a computer readable program code stored thereon.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Detection And Correction Of Errors (AREA)
  • Control Of Vending Devices And Auxiliary Devices For Vending Devices (AREA)
PCT/US2011/037319 2010-05-21 2011-05-20 Method and apparatus for using cache memory in a system that supports a low power state WO2011146823A2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
BRPI1105243A BRPI1105243A8 (pt) 2010-05-21 2011-05-20 "método e aparelho de uso de memória de cache em um sistema que sustenta estado de baixa potência"
KR20127033246A KR101495049B1 (ko) 2010-05-21 2011-05-20 저전력 상태를 지원하는 시스템에서 캐시 메모리를 이용하는 방법 및 장치
DE112011100579.2T DE112011100579B4 (de) 2010-05-21 2011-05-20 Verfahren und vorrichtung zum verwenden von cachespeicher in einem system, welches einen niedrigleistungszustand unterstützt
GB1122300.5A GB2506833B (en) 2010-05-21 2011-05-20 Method and apparatus for using cache memory in a system that supports a low power state
JP2012517938A JP5604513B2 (ja) 2010-05-21 2011-05-20 低電力状態をサポートするシステムにおいてキャッシュメモリを利用する方法および装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/785,182 US8640005B2 (en) 2010-05-21 2010-05-21 Method and apparatus for using cache memory in a system that supports a low power state
US12/785,182 2010-05-21

Publications (2)

Publication Number Publication Date
WO2011146823A2 true WO2011146823A2 (en) 2011-11-24
WO2011146823A3 WO2011146823A3 (en) 2012-04-05

Family

ID=44973483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/037319 WO2011146823A2 (en) 2010-05-21 2011-05-20 Method and apparatus for using cache memory in a system that supports a low power state

Country Status (9)

Country Link
US (1) US8640005B2 (de)
JP (1) JP5604513B2 (de)
KR (1) KR101495049B1 (de)
CN (1) CN102253865B (de)
BR (1) BRPI1105243A8 (de)
DE (1) DE112011100579B4 (de)
GB (1) GB2506833B (de)
TW (1) TWI502599B (de)
WO (1) WO2011146823A2 (de)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8484539B1 (en) * 2009-06-09 2013-07-09 Sk Hynix Memory Solutions Inc. Controlling power consumption in iterative ECC processing systems
US8533572B2 (en) 2010-09-24 2013-09-10 Intel Corporation Error correcting code logic for processor caches that uses a common set of check bits
US8924817B2 (en) * 2010-09-29 2014-12-30 Advanced Micro Devices, Inc. Method and apparatus for calculating error correction codes for selective data updates
US8788904B2 (en) * 2011-10-31 2014-07-22 Hewlett-Packard Development Company, L.P. Methods and apparatus to perform error detection and correction
US9304570B2 (en) 2011-12-15 2016-04-05 Intel Corporation Method, apparatus, and system for energy efficiency and energy conservation including power and performance workload-based balancing between multiple processing elements
GB2514970B (en) * 2012-03-29 2020-09-09 Intel Corp Enhanced storage of metadata utilizing improved error detection and correction in computer memory
US9444496B2 (en) * 2012-04-04 2016-09-13 University Of Southern California Correctable parity protected memory
US9323608B2 (en) 2012-06-07 2016-04-26 Micron Technology, Inc. Integrity of a data bus
KR101979734B1 (ko) 2012-08-07 2019-05-17 삼성전자 주식회사 메모리 장치의 독출 전압 제어 방법 및 이를 이용한 데이터 독출 방법
US9703364B2 (en) * 2012-09-29 2017-07-11 Intel Corporation Rotational graphics sub-slice and execution unit power down to improve power performance efficiency
KR102081980B1 (ko) * 2012-10-08 2020-02-27 삼성전자 주식회사 메모리 시스템에서의 라이트 동작 또는 리드 동작 수행 방법
US9092353B1 (en) 2013-01-29 2015-07-28 Pmc-Sierra Us, Inc. Apparatus and method based on LDPC codes for adjusting a correctable raw bit error rate limit in a memory system
US9128858B1 (en) * 2013-01-29 2015-09-08 Pmc-Sierra Us, Inc. Apparatus and method for adjusting a correctable raw bit error rate limit in a memory system using strong log-likelihood (LLR) values
KR102024033B1 (ko) 2013-03-04 2019-09-24 삼성전자주식회사 이동 통신 시스템에서 메모리 제어 방법 및 장치
US10230396B1 (en) 2013-03-05 2019-03-12 Microsemi Solutions (Us), Inc. Method and apparatus for layer-specific LDPC decoding
US9813080B1 (en) 2013-03-05 2017-11-07 Microsemi Solutions (U.S.), Inc. Layer specific LDPC decoder
US9397701B1 (en) 2013-03-11 2016-07-19 Microsemi Storage Solutions (Us), Inc. System and method for lifetime specific LDPC decoding
US9590656B2 (en) 2013-03-15 2017-03-07 Microsemi Storage Solutions (Us), Inc. System and method for higher quality log likelihood ratios in LDPC decoding
US9454414B2 (en) 2013-03-15 2016-09-27 Microsemi Storage Solutions (Us), Inc. System and method for accumulating soft information in LDPC decoding
US9450610B1 (en) 2013-03-15 2016-09-20 Microsemi Storage Solutions (Us), Inc. High quality log likelihood ratios determined using two-index look-up table
JP2014211800A (ja) * 2013-04-19 2014-11-13 株式会社東芝 データ記憶装置、ストレージコントローラおよびデータ記憶制御方法
TWI502601B (zh) * 2013-04-24 2015-10-01 Ind Tech Res Inst 混合式錯誤修復方法及其記憶體裝置
US9898365B2 (en) 2013-07-31 2018-02-20 Hewlett Packard Enterprise Development Lp Global error correction
WO2015016879A1 (en) * 2013-07-31 2015-02-05 Hewlett-Packard Development Company, L.P. Operating a memory unit
CN107193684B (zh) * 2013-08-23 2020-10-16 慧荣科技股份有限公司 存取快闪存储器中储存单元的方法以及使用该方法的装置
JP6275427B2 (ja) * 2013-09-06 2018-02-07 株式会社東芝 メモリ制御回路およびキャッシュメモリ
US9286224B2 (en) 2013-11-26 2016-03-15 Intel Corporation Constraining prefetch requests to a processor socket
CN103811047B (zh) * 2014-02-17 2017-01-18 上海新储集成电路有限公司 一种基于分块dram的低功耗刷新方法
JP6140093B2 (ja) * 2014-03-18 2017-05-31 株式会社東芝 キャッシュメモリ、誤り訂正回路およびプロセッサシステム
US9417804B2 (en) 2014-07-07 2016-08-16 Microsemi Storage Solutions (Us), Inc. System and method for memory block pool wear leveling
KR102193682B1 (ko) 2014-08-01 2020-12-21 삼성전자주식회사 선택적 ecc 기능을 갖는 반도체 메모리 장치
US9442801B2 (en) 2014-09-26 2016-09-13 Hewlett Packard Enterprise Development Lp Platform error correction
US9703632B2 (en) * 2014-11-07 2017-07-11 Nxp B. V. Sleep mode operation for volatile memory circuits
US9489255B2 (en) 2015-02-12 2016-11-08 International Business Machines Corporation Dynamic array masking
US10332613B1 (en) 2015-05-18 2019-06-25 Microsemi Solutions (Us), Inc. Nonvolatile memory system with retention monitor
US9740558B2 (en) 2015-05-31 2017-08-22 Intel Corporation On-die ECC with error counter and internal address generation
US9799405B1 (en) 2015-07-29 2017-10-24 Ip Gem Group, Llc Nonvolatile memory system with read circuit for performing reads using threshold voltage shift read instruction
US9842021B2 (en) 2015-08-28 2017-12-12 Intel Corporation Memory device check bit read mode
US9886214B2 (en) 2015-12-11 2018-02-06 Ip Gem Group, Llc Nonvolatile memory system with erase suspend circuit and method for erase suspend management
US10268539B2 (en) * 2015-12-28 2019-04-23 Intel Corporation Apparatus and method for multi-bit error detection and correction
US9892794B2 (en) 2016-01-04 2018-02-13 Ip Gem Group, Llc Method and apparatus with program suspend using test mode
US11169707B2 (en) * 2016-01-22 2021-11-09 Netapp, Inc. Garbage collection pacing in a storage system
US9899092B2 (en) 2016-01-27 2018-02-20 Ip Gem Group, Llc Nonvolatile memory system with program step manager and method for program step management
US10283215B2 (en) 2016-07-28 2019-05-07 Ip Gem Group, Llc Nonvolatile memory system with background reference positioning and local reference positioning
US10291263B2 (en) 2016-07-28 2019-05-14 Ip Gem Group, Llc Auto-learning log likelihood ratio
US10236915B2 (en) 2016-07-29 2019-03-19 Microsemi Solutions (U.S.), Inc. Variable T BCH encoding
US10379944B2 (en) * 2017-04-17 2019-08-13 Advanced Micro Devices, Inc. Bit error protection in cache memories
US10642683B2 (en) 2017-10-11 2020-05-05 Hewlett Packard Enterprise Development Lp Inner and outer code generator for volatile memory
KR102606009B1 (ko) * 2018-08-16 2023-11-27 에스케이하이닉스 주식회사 캐시 버퍼 및 이를 포함하는 반도체 메모리 장치
KR20200042360A (ko) * 2018-10-15 2020-04-23 에스케이하이닉스 주식회사 에러 정정 회로, 이를 포함하는 메모리 컨트롤러 및 메모리 시스템
US10884940B2 (en) * 2018-12-21 2021-01-05 Advanced Micro Devices, Inc. Method and apparatus for using compression to improve performance of low voltage caches
KR20200140074A (ko) 2019-06-05 2020-12-15 에스케이하이닉스 주식회사 휘발성 메모리 장치 및 이의 동작 방법
US11036636B2 (en) 2019-06-28 2021-06-15 Intel Corporation Providing improved efficiency for metadata usages
KR20210015087A (ko) 2019-07-31 2021-02-10 에스케이하이닉스 주식회사 오류 정정 회로, 이를 포함하는 메모리 컨트롤러 및 메모리 시스템
US11049585B1 (en) * 2020-03-27 2021-06-29 Macronix International Co., Ltd. On chip block repair scheme
KR20210122455A (ko) 2020-04-01 2021-10-12 삼성전자주식회사 반도체 메모리 장치
CN112181712B (zh) * 2020-09-28 2022-02-22 中国人民解放军国防科技大学 一种提高处理器核可靠性的方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020038442A1 (en) * 1998-09-24 2002-03-28 Robert Cypher Technique for correcting single-bit errors in caches with sub-block parity bits
US20030167437A1 (en) * 2002-03-04 2003-09-04 Desota Donald R. Cache entry error-correcting code (ECC) based at least on cache entry data and memory address
US20060031708A1 (en) * 2004-08-04 2006-02-09 Desai Kiran R Method and apparatus for correcting errors in a cache array
US20100070809A1 (en) * 2005-12-30 2010-03-18 Dempsey Morgan J Repair bits for a low voltage cache

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4139148A (en) * 1977-08-25 1979-02-13 Sperry Rand Corporation Double bit error correction using single bit error correction, double bit error detection logic and syndrome bit memory
US4236247A (en) * 1979-01-15 1980-11-25 Organisation Europeene De Recherches Spatiales Apparatus for correcting multiple errors in data words read from a memory
JP2696212B2 (ja) 1987-05-06 1998-01-14 セイコーエプソン株式会社 誤り訂正装置
JPH0275039A (ja) 1988-09-12 1990-03-14 Mitsubishi Electric Corp メモリ回路
US5604213A (en) * 1992-03-31 1997-02-18 British Technology Group Limited 17-substituted steroids useful in cancer treatment
US5604753A (en) * 1994-01-04 1997-02-18 Intel Corporation Method and apparatus for performing error correction on data from an external memory
CN1159648C (zh) 1994-12-02 2004-07-28 现代电子美国公司 有限游程转移预测方法
JPH0991206A (ja) * 1995-09-27 1997-04-04 Toshiba Corp メモリ制御装置およびメモリ検査方法
US5802582A (en) * 1996-09-10 1998-09-01 International Business Machines Corporation Explicit coherence using split-phase controls
US6044479A (en) * 1998-01-29 2000-03-28 International Business Machines Corporation Human sensorially significant sequential error event notification for an ECC system
US6480975B1 (en) * 1998-02-17 2002-11-12 International Business Machines Corporation ECC mechanism for set associative cache array
US6772383B1 (en) * 1999-05-27 2004-08-03 Intel Corporation Combined tag and data ECC for enhanced soft error recovery from cache tag errors
US6505318B1 (en) * 1999-10-01 2003-01-07 Intel Corporation Method and apparatus for partial error detection and correction of digital data
JP2003203010A (ja) 2002-01-07 2003-07-18 Nec Computertechno Ltd L2キャッシュメモリ
US7296213B2 (en) * 2002-12-11 2007-11-13 Nvidia Corporation Error correction cache for flash memory
JP4299558B2 (ja) * 2003-03-17 2009-07-22 株式会社ルネサステクノロジ 情報記憶装置および情報処理システム
US7069494B2 (en) * 2003-04-17 2006-06-27 International Business Machines Corporation Application of special ECC matrix for solving stuck bit faults in an ECC protected mechanism
US7389465B2 (en) * 2004-01-30 2008-06-17 Micron Technology, Inc. Error detection and correction scheme for a memory device
JP4041076B2 (ja) * 2004-02-27 2008-01-30 株式会社東芝 データ記憶システム
US7653862B2 (en) 2005-06-15 2010-01-26 Hitachi Global Storage Technologies Netherlands B.V. Error detection and correction for encoded data
US7590920B2 (en) * 2005-08-05 2009-09-15 Hitachi Global Storage Technologies Netherlands, B.V. Reduced complexity error correction encoding techniques
US7590913B2 (en) * 2005-12-29 2009-09-15 Intel Corporation Method and apparatus of reporting memory bit correction
US7512847B2 (en) * 2006-02-10 2009-03-31 Sandisk Il Ltd. Method for estimating and reporting the life expectancy of flash-disk memory
US7890836B2 (en) 2006-12-14 2011-02-15 Intel Corporation Method and apparatus of cache assisted error detection and correction in memory
US8010875B2 (en) * 2007-06-26 2011-08-30 International Business Machines Corporation Error correcting code with chip kill capability and power saving enhancement
JP4672743B2 (ja) * 2008-03-01 2011-04-20 株式会社東芝 誤り訂正装置および誤り訂正方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020038442A1 (en) * 1998-09-24 2002-03-28 Robert Cypher Technique for correcting single-bit errors in caches with sub-block parity bits
US20030167437A1 (en) * 2002-03-04 2003-09-04 Desota Donald R. Cache entry error-correcting code (ECC) based at least on cache entry data and memory address
US20060031708A1 (en) * 2004-08-04 2006-02-09 Desai Kiran R Method and apparatus for correcting errors in a cache array
US20100070809A1 (en) * 2005-12-30 2010-03-18 Dempsey Morgan J Repair bits for a low voltage cache

Also Published As

Publication number Publication date
GB2506833B (en) 2018-12-19
BRPI1105243A8 (pt) 2018-04-24
JP2012531683A (ja) 2012-12-10
US20110289380A1 (en) 2011-11-24
WO2011146823A3 (en) 2012-04-05
GB201122300D0 (en) 2012-02-01
KR101495049B1 (ko) 2015-02-24
TW201209841A (en) 2012-03-01
CN102253865B (zh) 2014-03-05
KR20130020808A (ko) 2013-02-28
DE112011100579B4 (de) 2021-09-02
TWI502599B (zh) 2015-10-01
US8640005B2 (en) 2014-01-28
JP5604513B2 (ja) 2014-10-08
DE112011100579T5 (de) 2013-02-07
GB2506833A (en) 2014-04-16
CN102253865A (zh) 2011-11-23
BRPI1105243A2 (pt) 2017-06-20

Similar Documents

Publication Publication Date Title
US8640005B2 (en) Method and apparatus for using cache memory in a system that supports a low power state
Wilkerson et al. Reducing cache power with low-cost, multi-bit error-correcting codes
KR101684045B1 (ko) 로컬 에러 검출 및 글로벌 에러 정정
US10901840B2 (en) Error correction decoding with redundancy data
Jian et al. Low-power, low-storage-overhead chipkill correct via multi-line error correction
US7437597B1 (en) Write-back cache with different ECC codings for clean and dirty lines with refetching of uncorrectable clean lines
US8276039B2 (en) Error detection device and methods thereof
Nair et al. Citadel: Efficiently protecting stacked memory from tsv and large granularity failures
US10020822B2 (en) Error tolerant memory system
US8869007B2 (en) Three dimensional (3D) memory device sparing
Son et al. CiDRA: A cache-inspired DRAM resilience architecture
Mittal et al. A survey of techniques for improving error-resilience of DRAM
US9229803B2 (en) Dirty cacheline duplication
US11170869B1 (en) Dual data protection in storage devices
Chen et al. RATT-ECC: Rate adaptive two-tiered error correction codes for reliable 3D die-stacked memory
US9430375B2 (en) Techniques for storing data in bandwidth optimized or coding rate optimized code words based on data access frequency
Chen et al. E-ecc: Low power erasure and error correction schemes for increasing reliability of commodity dram systems
Jian et al. High performance, energy efficient chipkill correct memory with multidimensional parity
Longofono et al. Predicting and mitigating single-event upsets in DRAM using HOTH
Yalcin et al. Flexicache: Highly reliable and low power cache under supply voltage scaling
Hijaz et al. NUCA-L1: A non-uniform access latency level-1 cache architecture for multicores operating at near-threshold voltages
Nair Architectural techniques to enable reliable and scalable memory systems
Wang Architecting Memory Systems Upon Highly Scaled Error-Prone Memory Technologies
US11625173B1 (en) Reduced power consumption by SSD using host memory buffer
US12019881B1 (en) Reduced power consumption by SSD using host memory buffer

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2012517938

Country of ref document: JP

ENP Entry into the national phase

Ref document number: 1122300

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20110520

WWE Wipo information: entry into national phase

Ref document number: 1122300.5

Country of ref document: GB

WWE Wipo information: entry into national phase

Ref document number: 1120111005792

Country of ref document: DE

Ref document number: 112011100579

Country of ref document: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: PI1105243

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 20127033246

Country of ref document: KR

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 11784312

Country of ref document: EP

Kind code of ref document: A2

ENP Entry into the national phase

Ref document number: PI1105243

Country of ref document: BR

Kind code of ref document: A2