US20140304561A1 - Shared fuse wrapper architecture for memory repair - Google Patents


Info

Publication number
US20140304561A1
US20140304561A1 (application US14/305,975)
Authority
US
Grant status
Application
Patent type
Prior art keywords
repair data
memory
repair
fuse
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14305975
Inventor
Viraj Vikram SINGH
Ashish Bansal
Rangarajan Ramanujam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
STMicroelectronics International NV
Original Assignee
STMicroelectronics International NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • G06F 11/1088 Reconstruction on already foreseen single or plurality of spare disks (error detection or correction by redundancy, e.g. parity data used in redundant arrays of independent storages such as RAID systems)
    • G11C 29/44 Indication or identification of errors, e.g. for repair (checking stores for correct operation; subsequent repair; built-in arrangements for testing, e.g. built-in self testing [BIST])
    • G11C 29/4401 Indication or identification of errors for self repair
    • G11C 29/785 Masking faults in memories by using spares or by reconfiguring, using programmable devices with redundancy programming schemes
    • G11C 29/787 Redundancy programming schemes using a fuse hierarchy
    • G11C 29/802 Masking faults using programmable devices with improved layout, by encoding redundancy signals
    • G11C 2029/4402 Internal storage of test result, quality data, chip identification, repair information

Abstract

A memory repair mechanism for memories clustered across multiple power domains that can be switched on and off independently of each other, thereby enabling low-power operation. Enhancements in the shared Fuse Wrapper architecture enable sharing of a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.

Description

    RELATED APPLICATION
  • The present application is a divisional of U.S. patent application Ser. No. 12/784,424, filed May 20, 2010, which claims priority of Indian Patent Application No. 1203/Del/2009, filed Jun. 11, 2009, both of which are incorporated herein in their entirety by this reference.
  • TECHNICAL FIELD
  • The present disclosure relates to memory repair and more specifically to a shared fuse wrapper architecture using a single fuse macro cell covering multiple power domains.
  • BACKGROUND
  • Advancement in semiconductor technologies allows implementation of multiple memories on a single chip. Due to their high density, memories are prone to faults, which reduce the total chip yield. In order to correct such defects, memory sub-systems are provided with a repair mechanism that makes use of redundant memory locations.
  • A memory is periodically tested by external test hardware or by dedicated on-chip hardware (i.e. memory Built-In Self-Test (BIST)) in order to identify fault locations. A BIST mechanism uses a built-in algorithm which performs a series of read-write operations to identify one or more defective row or column addresses. The defective row/column is then replaced by a redundant row/column, depending upon whether there is row or column redundancy support. At the end of the test process, the repair data of the repairable memories is programmed onto the fuse macro cells provided on the chip for all small/medium memories during wafer production. This fused information is then used during chip functional operation as the repair solution.
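As an illustrative aside (not part of the patent disclosure), the write-then-read idea behind such a BIST can be sketched in Python. The function name, test pattern and read/write callback interface below are assumptions chosen for illustration; real memory BIST engines use more elaborate march algorithms.

```python
def march_test(memory_read, memory_write, size):
    """Tiny sketch of a BIST-style read-write test: write a fixed pattern to
    every address, read it back, and record the addresses that mismatch.
    Real BIST algorithms (e.g. March C-) run several passes with
    complementary patterns; this only shows the idea."""
    pattern = 0xA5
    defective = []
    for addr in range(size):
        memory_write(addr, pattern)
    for addr in range(size):
        if memory_read(addr) != pattern:
            defective.append(addr)  # candidate row/column for redundancy repair
    return defective
```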
  • U.S. Patent Publication No. 2008/0065929A1 entitled “Method and Apparatus for storing and distributing memory repair information” to Nadeau-Dostie, et al. published on Mar. 13, 2008 discloses a system for repairing embedded memories on an integrated circuit. The system comprises an external Built-In Self-repair Register (BISR) associated with every reparable memory on the circuit. Each BISR is configured to accept a serial input from a daisy chain connection and to generate a serial output to a daisy chain connection, so that a plurality of BISRs are connected in a daisy chain with a fuse box controller. The fuse box controller has no information as to the number, configuration or size of the embedded memories, but determines, upon power up, the length of the daisy chain. With this information, the fuse box controller may perform a corresponding number of serial shift operations to move repair data to and from the BISRs and into and out of a fuse box associated with the controller. Memories having a parallel repair interface are supported by a parallel address bus and enable control signal on the BISR, while those having a serial repair interface are supported by a parallel daisy chain path that may be selectively cycled to shift the contents of the BISR to an internal serial register in the memory. Preferably, each of the BISRs has an associated repair analysis facility having a parallel address bus and enable control signal by which fuse data may be dumped in parallel into the BISR and from there, either uploaded to the fuse box through the controller or downloaded into the memory to effect repairs. Advantageously, pre-designed circuit blocks may provide daisy chain inputs and access ports to affect the inventive system there along or to permit the circuit block to be bypassed for testing purposes. While U.S. Patent Publication No. 
2008/0065929A1 provides a fuse box module to repair memory structures present in a single power domain, it remains specific to the number, size and configuration of the embedded memory locations and is incapable of supporting memories spanning a plurality of power domains.
  • SUMMARY
  • A system of the present invention includes multiple memory blocks spanning a plurality of power domains. The system includes a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks on a centralized fuse macro cell operatively coupled therewith and at least one repair data register for storing memory repair data thereon. Each of the repair data registers is operatively coupled with a corresponding memory block and is operative for transmitting the memory repair data thereto to effect repairs of the corresponding memory block. The system includes a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.
  • An embodiment of the present invention includes a multiple memory architecture comprising multiple memory blocks spanning a plurality of power domains. The architecture includes a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks on a centralized fuse macro cell operatively coupled therewith, at least one repair data register for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory block and transmits the memory repair data thereto to effect repairs thereof, and a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.
  • In another embodiment, a device comprising multiple memory blocks spanning a plurality of power domains includes a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks on a centralized fuse macro cell operatively coupled therewith, at least one repair data register for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory block and transmits the memory repair data thereto to effect repairs thereof, and a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.
  • A method of the present invention for memory repair across multiple power domains includes determining defective memory locations and corresponding repair data for each memory block in each power domain, encoding the address and the repair data of each defective memory location, storing the encoded address and repair data obtained by incremental encoding across power domains, decoding the encoded address and the repair data during functional operation, and repairing the defective memory locations.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure explains the various embodiments of the instant disclosure in the following description, taken in conjunction with the accompanying drawings, wherein:
  • FIG. 1 illustrates an arrangement of memory repair system on an integrated circuit according to first embodiment of the present disclosure.
  • FIG. 2 illustrates an arrangement of memory repair system on an integrated circuit according to second embodiment of the present disclosure.
  • FIG. 3 illustrates internal structure of a fuse wrapper according to an embodiment of the present disclosure.
  • FIG. 4 illustrates frame structure of the encoded stream according to an embodiment of the present disclosure as described in FIG. 1.
  • FIG. 5 illustrates frame structure of the encoded stream according to another embodiment of the present disclosure as described in FIG. 2.
  • FIG. 6 illustrates architecture of the serial interface in accordance with the present disclosure.
  • FIG. 7 illustrates a flow chart for a method for memory testing according to an embodiment of the present disclosure.
  • FIG. 8 illustrates a flow chart for a method for memory repair according to an embodiment of the present disclosure.
  • While the disclosure will be described in conjunction with the illustrated embodiment, it will be understood that it is not intended to limit the disclosure to such embodiment. On the contrary, it is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the disclosure as defined by the appended claims.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • The embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. However, the present disclosure is not limited to the embodiments. The present disclosure can be modified in various forms. Thus, the embodiments of the present disclosure are only provided to explain more clearly the present disclosure to the ordinarily skilled in the art of the present disclosure. In the accompanying drawings, like reference numerals are used to indicate like components.
  • The present disclosure describes a system comprising multiple memory blocks spanning a plurality of power domains, said system comprising: a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks on a centralized fuse macro cell operatively coupled therewith; at least one repair data register for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory block and transmits the memory repair data thereto to effect repairs thereof; and a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.
  • An embodiment of the present disclosure describes a multiple memory architecture comprising multiple memory blocks spanning a plurality of power domains, said architecture comprising: a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks on a centralized fuse macro cell operatively coupled therewith; at least one repair data register for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory block and transmits the memory repair data thereto to effect repairs thereof; and a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.
  • An embodiment of the present disclosure describes a device comprising multiple memory blocks spanning a plurality of power domains, said device comprising: a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks on a centralized fuse macro cell operatively coupled therewith; at least one repair data register for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory block and transmits the memory repair data thereto to effect repairs thereof; and a plurality of parallel links connecting the memory blocks of each power domain to the Shared Fuse Wrapper architecture.
  • An embodiment of the present disclosure illustrates a method for memory repair across multiple power domains, said method comprising: determining defective memory locations and corresponding repair data for each memory block in each power domain; encoding the address and the repair data of each defective memory location; storing the encoded address and repair data obtained by incremental encoding across power domains; decoding the encoded address and the repair data during functional operation; and repairing the defective memory locations.
  • In the present disclosure, two exemplary embodiments are described to perform memory repair through a common shared fuse wrapper architecture across multiple power domains on an integrated circuit. Those having ordinary skill in this art will appreciate that the two configurations share comparable distinguishing features.
  • FIG. 1 illustrates an arrangement of a memory repair system on an integrated circuit incorporating multiple power domains according to a first embodiment of the present disclosure. The memory repair system includes a common shared fuse wrapper architecture 101 to which the memory blocks in the individual power domains 102 are connected in parallel. The common shared fuse wrapper architecture 101 further includes a Fuse Macro cell 103 to store information related to the different power domains. Each power domain includes multiple memory blocks coupled with BIST controllers.
  • FIG. 2 illustrates another arrangement of a memory repair system on an integrated circuit according to a second embodiment of the present disclosure. The memory repair system includes a common shared fuse wrapper architecture 201 to which the memory blocks in the individual power domains 202 are connected in parallel. The common shared fuse wrapper architecture 201 further includes a Fuse Macro cell 203 to store information related to the different power domains on the integrated circuit. Each power domain consists of memories coupled with BIST controllers. The system also includes at least one external repair register 204 for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory and transmits the memory repair data thereto to effect repairs thereof.
  • FIG. 3 illustrates the internal structure of a fuse wrapper according to an embodiment of the present disclosure. The fuse wrapper includes an encoder module 301 used to encode the defective row/column data and the corresponding repair parameters for all the power domains at the end of the test process; a register data bank 302 to store the encoded defective addresses across power domains; and counters 303 to keep track of the fuse bits used and remaining. The fuse wrapper also includes a decoder module 304 for decoding the encoded data during normal operation. Individual decoder Finite State Machines (FSMs) are provided for each power domain to provide for fast wake-up.
  • Further, the register data bank 302 includes shared fuse data registers 305 and a common address register bank 306. The shared fuse data registers 305 store the complete encoded data stream during encoding. The common address register bank 306 stores the address of each defective memory and its Power Domain Identity (ID). The Power Domain IDs are needed to eventually store the information in the Fuse Macro cell in ascending order of IDs. Further, maintaining the common address bank in the register bank also helps in pooling of resources, thereby reducing the overall area of the chip. There are also some additional repair data registers which store data temporarily during encoding. The size of these repair data registers is optimized after considering all combinations of the minimum number of memories to be repaired on the chip.
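The register data bank described above can be modeled schematically in Python. This is an illustrative data-structure sketch only; the class and field names are assumptions, not signal or register names from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class RegisterDataBank:
    """Illustrative model of register data bank 302: shared fuse data
    registers (305) holding the encoded stream, plus a common address
    register bank (306) of (defective address, power-domain ID) pairs."""
    fuse_data: list = field(default_factory=list)     # encoded data stream (305)
    address_bank: list = field(default_factory=list)  # (address, pd_id) pairs (306)

    def record(self, address, pd_id):
        # Store a defective memory address with its Power Domain ID; keeping
        # the bank sorted by pd_id mirrors the eventual ascending-ID order
        # used when writing to the Fuse Macro cell.
        self.address_bank.append((address, pd_id))
        self.address_bank.sort(key=lambda entry: entry[1])
```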
  • FIG. 4 illustrates the frame structure containing the fuse bits of the encoded stream according to the embodiment of the present disclosure described in FIG. 1. This frame structure shows the final alignment of the encoded data before shifting to the tester. The frame comprises a header 401, the number of defective memories 402, the defective memory addresses 403 and the total repair data 404. The total number of fuse bits for the frame is computed as follows:
  • Fuse Bits = 1 + Σ(R=1 to N) log2(Mr + 1) + Σ(R=1 to N) Krd(length) + Kmin · log2(Bf) + Krd(max)
  • where ‘N’ is the number of Power Domains; ‘Mr’ is the number of redundant memories in a Power Domain; Krd(length) is the length of the total repair data (corresponding to Kmin memories) in a Power Domain; ‘Kmin’ is the minimum number of memories that can be repaired on a chip; log2(Bf) is the logarithm to base 2 of the maximum number of memories in a Power Domain; and ‘Krd(max)’ is the largest repair data length corresponding to Kmin memories across Power Domains.
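The frame-size formula above can be sketched directly in Python using the definitions just given. This is an illustrative reading of the formula, not code from the patent; in particular, taking the ceiling of each logarithm (so field widths are whole bits) is an assumption.

```python
import math

def fuse_bits_embodiment1(Mr, Krd_length, Kmin, Bf, Krd_max):
    """Total fuse bits for the FIG. 4 frame, per the formula in the text.
    Mr:         list, redundant-memory count per Power Domain (length N)
    Krd_length: list, total repair-data length per Power Domain (length N)
    Kmin:       minimum number of memories repairable on the chip
    Bf:         maximum number of memories in a Power Domain
    Krd_max:    largest repair data length across Power Domains
    Ceiling on log2 is an assumption so every field is an integer width."""
    bits = 1  # header 401: single global repair bit
    bits += sum(math.ceil(math.log2(m + 1)) for m in Mr)  # counts field 402
    bits += sum(Krd_length)                               # repair data 404
    bits += Kmin * math.ceil(math.log2(Bf))               # addresses field 403
    bits += Krd_max                                       # largest repair length
    return bits
```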
  • Further, header 401 contains a single global repair bit. When the global bit is at logic 0 it indicates there are no bad memories on the chip to be repaired. When the global bit is at logic 1, it indicates that a repair is needed for the bad memories on the chip.
  • Furthermore, the number of defective memories field 402 is sized by the following:
  • Σ(R=1 to N) log2(Mr + 1)
  • and the value of all defective memory addresses 403 is computed using Kmin · log2(Bf). (While the memories that always need to be repaired number Kmin, these Kmin memories can fail in a single power domain or across more than one power domain.) Therefore, the repair capability of the Fuse Wrapper varies from 1 to Kmax memories, where Kmax denotes the capability of the shared fuse wrapper architecture to repair a larger number of memories, which is possible when more memories fail in a power domain but with a smaller repair data length. Memories up to Kmin will always be repaired, while memories within Kmin < K <= Kmax may or may not be repaired. It will be appreciated by those having ordinary skill in this art that other frame structures may have comparable arrangements.
  • FIG. 5 illustrates the frame structure containing the fuse bits of the encoded stream according to the embodiment of the present disclosure described in FIG. 2. This frame structure shows the final alignment of the encoded data before shifting to the tester. The frame comprises a header 501, the number of defective memories 502, an offset corresponding to each power domain 503 and the total repair data 504. The total number of fuse bits needed to repair Kmin memories can be computed as follows:
  • Fuse Bits = Header + Σ(i=1 to N) log2(Mi(r) + 1) + log2(Offset_i) · Kmin + Krd-rel_i(Max) · Kmin + Krd(max)
  • where ‘i’ = 1 to N indexes the power domains; the term {log2(Offset_i) · Kmin + Krd-rel_i(Max) · Kmin + Krd(max)} is taken for the power domain that has the largest value of the sum; ‘Offset_i’ corresponds to the total repair data of all the repaired memories in a power domain; and Krd-rel corresponds to the maximum relative repair data length within a power domain. It will be appreciated by those having ordinary skill in this art that other frame structures may have comparable arrangements.
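The second-embodiment formula can likewise be sketched in Python, taking the bracketed per-domain term at the domain where its sum is largest, as the text describes. The function name, argument layout and the ceiling on log2 are assumptions for illustration.

```python
import math

def fuse_bits_embodiment2(header_bits, Mi_r, offsets, Krd_rel_max, Kmin, Krd_max):
    """Total fuse bits for the FIG. 5 frame, per the formula in the text.
    header_bits:  size of header 501
    Mi_r:         list, redundant-memory count per power domain (length N)
    offsets:      list, total repair data (Offset_i) per power domain
    Krd_rel_max:  list, max relative repair data length per power domain
    The per-domain term is evaluated for every domain and the worst case
    (largest sum) is reserved in the frame."""
    bits = header_bits
    bits += sum(math.ceil(math.log2(m + 1)) for m in Mi_r)  # counts field 502
    bits += max(math.ceil(math.log2(off)) * Kmin + kr * Kmin + Krd_max
                for off, kr in zip(offsets, Krd_rel_max))   # worst-case domain term
    return bits
```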
  • FIG. 6 illustrates the interconnections between a BIST and a memory having a serial repair interface in accordance with the present disclosure. Those having ordinary skill in the art will appreciate that other built-in self-repair configurations may be comparable. In this serial interface, one serial chain runs across the repair status register 601; another serial chain passes through the repair data registers 602 of the BIST and of the memory. The repair data register contains a bypass flop 603 and other flip-flops. In the case of a non-redundant memory, the repair data register is a single bit. In functional operation, the repair data register (except for the bypass flop) is bypassed if the memory is good.
  • Embodiments of the method for memory test and repair routing are described in FIG. 7 and FIG. 8. The methods are illustrated as a collection of blocks in a logical flow graph, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. The order in which the process is described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order to implement the process, or an alternate process.
  • FIG. 7 illustrates a flow chart of a method for memory testing according to an embodiment of the present disclosure. In step 701, testing starts with switching on the power domain and asserting a handshake signal pd_active. In step 702, BIST is run to analyze all the memories in that power domain and determine the defective addresses and corresponding repair data. Once the BIST has completed, in step 703 the global status signal is checked. If all the memories are good, the global chip status (a two-bit status output generated from the fuse wrapper) is “01”, the encoding process is skipped to save time, and testing proceeds to the other power domains. If any of the memories is dead, the global chip status is “10” and encoding is again terminated early, as dead memories cannot be repaired. If the memories in a power domain are repairable, the global chip status is “11” and the encoding mode is asserted by the fuse wrapper.
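The step 703 dispatch on the two-bit global chip status can be summarized in a small table-driven sketch. The status codes come from the text; the function name and the action labels it returns are illustrative, not patent signal names.

```python
def encoding_action(global_status):
    """Map the two-bit global chip status (step 703) to the next action:
    '01' = all memories good, '10' = dead memory present, '11' = repairable."""
    actions = {
        "01": "skip_encoding",  # all good: save time, move to next power domain
        "10": "abort",          # dead memory: terminate early, unrepairable
        "11": "encode",         # repairable memories: assert encoding mode
    }
    return actions.get(global_status, "unknown")
```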
  • Further, in step 704 the Fuse Wrapper is run in encoding mode. The encoding is done sequentially even if more than one Power Domain is switched on simultaneously. If all the memories are good, encoding is terminated earlier than usual to save test time. Otherwise, in the case of repairable memories, the finite state machine (FSM) of the encoder calculates the logical addresses of the defective memories and stores them in the shared fuse data registers. The memory repair data for the defective memories is then shifted serially from BIST to the Fuse Wrapper. After completion of this process for a single power domain, the Fuse Wrapper provides a handshake signal called pd_ready to indicate the end of encoding for this Power Domain. The encoding for the other Power Domains is performed in the same manner. When encoding for all Power Domains has been completed, a handshake signal called system_dataready is asserted to show the end of encoding, and the encoded data per Power Domain is realigned and arranged in ascending order of Power Domain IDs as shown in FIG. 4. This arrangement obviates the need to store the power domain ID in the fuse macro cell and thereby saves area. The encoded stream is transferred to an external tester and the fuses are programmed with this encoded stream for chip repair. Finally, in step 705, after the completion of the encoding process, the integrated circuits are packaged with the encoded information for shipment.
  • Referring to FIG. 7 the encoding process of addresses and repair data for both the embodiments is explained in the subsequent lines separately. It will be apparent to those having ordinary skill in this art that various modifications and variations may be made to the encoding process disclosed herein, consistent with the present invention, without departing from the spirit and scope of the present invention. According to the first exemplary embodiment the encoding of addresses and repair data follows two cycles:
      • (a) In the first cycle, the number of defective memories, the total repair data length and their logical addresses are found by transferring 0/1 mode and repair status in BIST, which loads the serial registers for redundant memories with 11 . . . 11110, while for a non-redundant memory ‘0’ is loaded. If the status is good, only one flop will come into the chain. Further, the transferred data is shifted out from BIST to the shared Fuse Wrapper, and the following pattern search is performed on the repair status of each memory based on its header content:
        • 10: start encoding a redundant memory; increment the defective memory count; store the logical address of the defective memory and start counting the repair data length in the counter.
        • 11: increment the repair data length counter.
        • 01: end encoding of the redundant memory and store the value of the repair data length counter.
        • 00: ignore it, as it corresponds to data of a non-redundant memory.
        • If the fuse bits are insufficient, the global chip status (the two-bit status output generated from the fuse wrapper) is “00” and the encoding process is skipped.
      • (b) In the second cycle, the actual repair data is transferred in BIST and shifted out from BIST to the Fuse Wrapper. Whenever a 1 is obtained, it marks the start of a redundant memory, and the repair data length stored in the first cycle is loaded. These cycles are repeated until all defective memories are encoded completely.
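The first-cycle status handling for the first embodiment (codes 10/11/01/00 above) can be sketched as a small scanner. This is an illustrative reading only: the function and variable names are not from the patent, and treating the opening “10” code as the first counted repair bit is an assumption.

```python
def encode_first_cycle(status_codes):
    """Scan two-bit repair status codes shifted out of BIST (embodiment 1,
    first cycle): count defective memories and latch each one's repair data
    length. Returns (defective_count, list_of_lengths)."""
    defective = 0
    lengths = []
    current = 0
    for code in status_codes:
        if code == "10":        # start of a defective redundant memory
            defective += 1
            current = 1         # assumption: the start code opens the count at 1
        elif code == "11":      # another repair data bit: increment the counter
            current += 1
        elif code == "01":      # end of this memory: store the counted length
            lengths.append(current)
            current = 0
        elif code == "00":      # non-redundant memory: ignore
            pass
    return defective, lengths
```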
  • Further, according to the second exemplary embodiment the encoding of the defective memory addresses and repair data also follows two cycles:
      • (a) The first cycle finds the number of defective memories, the total repair data length and their logical addresses by transferring 0/1 mode and repair status in BIST, where the serial registers for good redundant memories are loaded with 0111 . . . 1 and for repairable memories with 011 . . . 01 (ending in 01), while a non-redundant memory is loaded with ‘0’. The above data is shifted out of the Serial Interface as shown in FIG. 5, and a pattern search is performed for all memories: if a 0 is obtained, ignore it as it is a bypass flop; if a 1 is followed by a 0, it is a good redundant memory: add to the offset length until a next 0 is obtained, which signifies the end of this memory, then reset the pattern. If a 1 is followed by a 1, it is a defective redundant memory: add to the defective memory count, add to the failing memory data length until a next ‘0’ is obtained, which signifies the end of this memory, save the current offset value, and reset the pattern. Repeat these steps for all the bits in a power domain.
        • If the fuse bits are insufficient, the global chip status (the two-bit status output generated from the fuse wrapper) is “00” and the encoding process is skipped.
      • (b) In the second cycle, the actual repair data and repair status are transferred in the Serial Interface and the repair data is shifted from the Serial Interface to the Fuse Wrapper. Whenever a 1 is obtained, shift for a number of cycles equal to the length stored in the first cycle. Save this repair data, continue, and repeat until a next 1 is obtained. Terminate early when the number of memories encoded equals the defective memory count.
  • FIG. 8 illustrates a flow chart of a method for memory repair according to an embodiment of the present disclosure. During chip in-field operation, step 801 starts the process by turning the Fuse Wrapper on. In step 802, the global repair bit is checked; if the repair bit is 0 (representing a good chip), decoding is instantly terminated for fast chip wake-up. If the global repair bit is 1, the pd_active signal is asserted and the decoding process is started. In step 803, the Fuse Wrapper downloads the encoded stream and stores the defective memory count of each Power Domain ID in the internal registers. This defective memory count is then used to identify the addresses of each Power Domain ID. If there is an address match, the index pointer is shifted to point to the next address in the address register bank with the corresponding Power Domain ID. In this way the entire data is shifted out, and the end signal pd_ready is asserted to show completion of the decoding. If two or more Power Domains are turned on simultaneously, multiple decoder finite state machines (FSMs) are run in tandem, transferring the repair data in all chains independently of each other and enabling fast memory repair by significantly reducing the wake-up time.
  • Similarly, referring to FIG. 8, the decoding process of addresses and repair data for both embodiments is explained separately in the subsequent lines. It will be apparent to those having ordinary skill in this art that various modifications and variations may be made to the decoding process disclosed herein, consistent with the present invention, without departing from the spirit and scope of the present invention. According to the first exemplary embodiment, the decoding follows the following steps:
  • The following initialization step is performed as soon as the integrated circuit is turned on:
  • Shift the encoded data from the Fuse Macro Cell to the shared Fuse Wrapper registers and segregate the number of failing memories, their logical addresses and the repair data, and store them in the shared Fuse Wrapper registers. (This step is performed irrespective of whether a power domain is turned on, so as to enable fast operation.)
  • The following steps are performed to send the repair data to memory when a Power Domain is turned on:
  • Shift 11 into the repair status registers of the BIST Serial Interface so that all the repair registers of the BIST are connected. Load the 0/1 pattern mode in the BIST Serial Interface repair registers and shift out the 0/1 pattern from the BIST Serial Interface to the shared Fuse Wrapper. If the shifted address matches the failing memory address stored in the shared Fuse Wrapper, shift that memory's repair data out into the Memory; if the address does not match, shift 0's into the Memory (this corresponds to a good memory). At the end of all shifts, the correct data will be initialized in all the memories of a Power Domain. If a Power Domain is turned off and on again, the above steps will be performed again to reinitialize the memory. If two or more power domains are turned on simultaneously, the repair of the memories in both power domains happens in parallel, implying fast wake up.
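  • The match-or-zeros rule of the first-embodiment decode can be sketched as follows. The helper name, the data shapes and the flat-chain layout are hypothetical, chosen only to illustrate the rule; the actual hardware shifts these bits serially.

```python
def build_repair_chain(memory_addrs, repair_lengths, failing_repair_data):
    """Expand per-memory repair data into one serial repair chain.

    All names and data shapes here are illustrative assumptions:
      memory_addrs        -- logical addresses in chain order, as shifted
                             out of the BIST Serial Interface
      repair_lengths      -- repair register length of each memory
      failing_repair_data -- address -> repair bits held in the shared
                             Fuse Wrapper for failing memories
    """
    chain = []
    for addr, length in zip(memory_addrs, repair_lengths):
        if addr in failing_repair_data:
            # address match: shift this memory's stored repair data in
            chain.extend(failing_repair_data[addr])
        else:
            # no match: the memory is good, shift 0's instead
            chain.extend([0] * length)
    return chain
```

With three memories of which only the second is failing, only that memory's stored bits appear in the chain; the other positions are filled with 0's.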
  • Further, according to the second exemplary embodiment, the decoding follows the following steps:
  • The following initialization steps are performed as soon as the integrated circuit is turned on, irrespective of a power domain's switching on, to enable fast operation:
      • Shift the encoded addresses and repair data from Fuse Macro cell to shared Fuse Wrapper registers and segregate the number of defective memories, their logical addresses and the repair data and store these in shared Fuse Wrapper registers.
      • Load first offset value.
      • Shift 0's into the local repair data registers till the Offset count is 0.
      • Now load the length of the first repair data.
      • Shift the repair data into the Local registers till the length of the first repair data.
  • The above mentioned steps are repeated for all the repair data for a particular Power Domain/Repair Chain, until all the repair data registers for all the Power Domains are configured. The data is held inside the local registers, which are in the always-on domain; this avoids having to perform the initialization process again each time the power domains are turned off and on. Thus, there is an overall improvement in the processing time of the repair mechanism in subsequent wake ups.
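  • The offset-and-length expansion steps above can be sketched as follows. This is a minimal sketch assuming the encoded stream for one Power Domain/Repair Chain is a list of (offset, repair data) pairs; the function and parameter names are hypothetical.

```python
def fill_local_registers(encoded, chain_length):
    """Expand (offset, repair_data) pairs into a local repair register chain.

    encoded      -- assumed list of (offset, repair_bits) pairs for one
                    Power Domain/Repair Chain
    chain_length -- total length of that chain's repair registers
    """
    regs = [0] * chain_length    # local repair data registers, reset to 0
    pos = 0
    for offset, repair_bits in encoded:
        pos += offset            # shift 0's until the offset count is 0
        for bit in repair_bits:  # then shift in this repair data
            regs[pos] = bit
            pos += 1
    return regs
```

Each offset skips over the good memories' register positions, so only the failing memories' repair bits need to be stored in the fuses.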
  • The shared Fuse Wrapper architecture proposed in the present disclosure saves chip area by optimally sharing resources across all power domains and provides fast wake up. According to the first embodiment, the described architecture reduces the memory repair time of the various Power Domains by removing order dependency: the repair information for a particular Power Domain can be accessed independent of the order in which it was stored in the Fuse Macro cell, i.e., one or more Power Domains can be switched on independently of the others.
  • The present disclosure is applicable to all types of on-chip and off-chip memories used in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof. Apparatus of the invention can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method actions can be performed by a programmable processor executing a program of instructions to perform functions of the invention by operating on input data and generating output. The invention can be implemented advantageously on a programmable system including at least one input device and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language.
  • Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks and cards, such as internal hard disks, and removable disks and cards; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of volatile and non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; CD-ROM and DVD-ROM disks; and buffer circuits such as latches and/or flip flops. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits), FPGAs (field-programmable gate arrays) and/or DSPs (digital signal processors).
  • It will be apparent to those having ordinary skill in this art that various modifications and variations may be made to the embodiments disclosed herein, consistent with the present invention, without departing from the spirit and scope of the present invention. Other embodiments consistent with the present invention will become apparent from consideration of the specification and the practice of the invention disclosed herein.
  • Although the instant disclosure has been described in connection with the embodiment of the present disclosure illustrated in the accompanying drawings, it is not limited thereto. It will be apparent to those skilled in the art that various substitutions, modifications and changes may be made thereto without departing from the scope and spirit of the disclosure.

Claims (13)

    We claim:
  1. A method for memory repair across multiple power domains, said method comprising:
    determining defective memory locations and corresponding repair data for each memory block in each power domain;
    encoding the address and the repair data of each defective memory location;
    storing the encoded address and repair data obtained by incremental encoding across power domains;
    decoding the encoded address and the repair data during functional operation; and
    repairing the defective memory locations.
  2. The method as claimed in claim 1, wherein the encoding of the defective memory addresses and repair data comprises:
    transferring the memory repair status to a fuse wrapper to determine the number of defective memories, total repair data length and corresponding logical addresses of defective memory locations; and
    processing the repair status of each memory based on its header content as follows:
    for a status value 10, starting encoding of a defective redundancy memory location; incrementing the defective memory count; storing the logical address of the defective memory location; and starting the repair data length count;
    for a status value 11, incrementing the repair data length count;
    for a status value 01, ending encoding of the defective redundancy memory location and storing the repair data length count;
    ignoring a status value of 00, as it corresponds to data of non redundancy memory; and
    transferring the actual repair data of the defective memory location to the fuse wrapper.
  3. The method as claimed in claim 1, wherein the encoding of the defective memory addresses and repair data comprises:
    transferring the memory repair status to fuse wrapper in order to determine the number of defective memories, their individual repair data lengths and corresponding offsets of the repair data of defective faulty memory locations; and
    processing the repair status of each memory based on its header content as follows:
    for a status value of 01, starting encoding of a redundancy memory location and incrementing the offset length till a next 0 is obtained, which signifies the end of this redundancy memory location;
    for a status value of 11, incrementing the defective memory count and storing the repair data length till a next 0 is obtained;
    ignoring a status value of 00, as it corresponds to data of non redundancy memory; and
    transferring the actual repair data of the defective memory location to the fuse wrapper.
  4. The method as claimed in claim 1, wherein the encoded addresses and repair data are stored in a common set of fuse data registers.
  5. The method as claimed in claim 1, wherein the decoding of the encoded addresses and repair data comprises:
    initializing by shifting the encoded data from Fuse Macro Cell to shared Fuse Wrapper and segregating the number of defective memories, their logical addresses and the repair data and storing these in shared Fuse Wrapper;
    repairing the memory;
    transferring 11 from fuse wrapper in order to configure the serial chain;
    wherein, if the location address matches with the defective memory address as stored in shared Fuse Wrapper, shift out that memory's repair data into the repair data register chain; and
    if the location address does not match, shift 0 into the repair data register chain.
  6. The method as claimed in claim 1, wherein the decoding of the encoded addresses and repair data comprises:
    transferring the encoded addresses and repair data from the Fuse Macro Cell to the shared Fuse Wrapper and segregating the number of defective memories, the offsets of the repair data, the individual repair data lengths and the repair data, and storing these in the Fuse Wrapper;
    loading the first offset value internally in the Fuse Wrapper;
    shifting 0's into the local repair data registers till the offset count becomes 0;
    loading the first repair data length internally in the Fuse Wrapper;
    shifting the repair data from the Fuse Wrapper into the repair data registers till the repair data length count becomes 0;
    wherein, repeating the above mentioned steps for next offsets and repair data until all the repair data chains have been configured.
  7. The method as claimed in claim 1, wherein the data to be fused in the fuse data registers is approximated by:
    Fuse Bits = 1 + Σ(R=1 to N) [log2(Mr + 1)] + Σ(R=1 to N) [Krd length] + Kmin × log2(Bf) + Krd(max)
    where Mr is the number of redundancy memories in a power domain, Krd is the length of the total repair data in a power domain, Kmin is the minimum number of memories to be always repaired, Bf is the number of bypass flops corresponding to logical addresses (one per redundancy memory and one per non-redundancy collar), and Krd(max) is the largest repair data length corresponding to Kmin memories across power domains.
  8. The method as claimed in claim 1, wherein the data to be fused in the fuse data registers is approximated by:
    Fuse Bits = (header) + Σ(i=1 to N) [log2(Mi(r+1))] + log2(Offseti) × Kmin + Krd-reli(max) × Kmin + Krd(max)
    where i=1 to N denotes the total number of power domains, {log2(Offseti) × Kmin + Krd-reli(max) × Kmin} corresponds to the power domain that has the largest value of the sum, Offseti corresponds to the total repair data corresponding to all the repair memories in a power domain, and Krd-reli corresponds to the maximum relative repair data length within a power domain.
  9. The method as claimed in claim 1, wherein said memories are an SRAM, a DRAM or a ROM memory.
  10. The method as claimed in claim 1, further comprising providing a Shared Fuse Wrapper architecture for storing the repair data of the memory blocks in each power domain.
  11. The method as claimed in claim 1, further comprising providing at least one repair data register for storing memory repair data thereon, each of which is operatively coupled with a corresponding memory block for transmitting the memory repair data.
  12. The method as claimed in claim 1, further comprising providing a plurality of parallel links for coupling the memory blocks of each power domain to a Shared Fuse Wrapper architecture.
  13. The method as claimed in claim 5, wherein decoding the encoded addresses and repair data is performed in parallel for all serial link chains that are powered on.
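The fuse-bit approximation of claim 7 can be illustrated numerically. The sketch below is hypothetical: the function name, the use of ceil on the log2 terms, and the sample values are assumptions, not part of the claim.

```python
import math

def fuse_bits_estimate(Mr, Krd, Kmin, Bf, Krd_max):
    """Approximate the fuse bits needed, per the claim-7 formula (sketch).

    Mr      -- redundancy memory count per power domain (list, one per domain)
    Krd     -- total repair data length per power domain (list)
    Kmin    -- minimum number of memories to be always repaired
    Bf      -- number of bypass flops corresponding to logical addresses
    Krd_max -- largest repair data length for Kmin memories across domains
    """
    bits = 1                                               # global repair bit
    bits += sum(math.ceil(math.log2(m + 1)) for m in Mr)   # logical addresses
    bits += sum(Krd)                                       # repair data payload
    bits += Kmin * math.ceil(math.log2(Bf))                # bypass-flop field
    bits += Krd_max                                        # largest Kmin data
    return bits
```

For example, with two power domains having Mr = [3, 7] redundancy memories and Krd = [10, 20] repair bits, Kmin = 2, Bf = 16 bypass flops and Krd(max) = 8, the estimate comes to 52 fuse bits.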
US14305975 2009-06-11 2014-06-16 Shared fuse wrapper architecture for memory repair Abandoned US20140304561A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
IN1203/DEL/2009 2009-06-11
IN1203DE2009 2009-06-11
US12784424 US8775880B2 (en) 2009-06-11 2010-05-20 Shared fuse wrapper architecture for memory repair
US14305975 US20140304561A1 (en) 2009-06-11 2014-06-16 Shared fuse wrapper architecture for memory repair

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14305975 US20140304561A1 (en) 2009-06-11 2014-06-16 Shared fuse wrapper architecture for memory repair

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12784424 Division US8775880B2 (en) 2009-06-11 2010-05-20 Shared fuse wrapper architecture for memory repair

Publications (1)

Publication Number Publication Date
US20140304561A1 (en) 2014-10-09

Family

ID=43307455

Family Applications (2)

Application Number Title Priority Date Filing Date
US12784424 Active 2031-09-21 US8775880B2 (en) 2009-06-11 2010-05-20 Shared fuse wrapper architecture for memory repair
US14305975 Abandoned US20140304561A1 (en) 2009-06-11 2014-06-16 Shared fuse wrapper architecture for memory repair

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12784424 Active 2031-09-21 US8775880B2 (en) 2009-06-11 2010-05-20 Shared fuse wrapper architecture for memory repair

Country Status (1)

Country Link
US (2) US8775880B2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5720552B2 (en) * 2011-12-09 2015-05-20 富士通株式会社 Memory device
US8942051B2 (en) * 2012-07-27 2015-01-27 Taiwan Semiconductor Manufacturing Company, Ltd. Mechanisms for built-in self test and repair for memory devices
KR20140078292A (en) * 2012-12-17 2014-06-25 에스케이하이닉스 주식회사 fuse repair apparatus and method of the same
US9053799B2 (en) * 2013-02-07 2015-06-09 Texas Instruments Incorporated Optimizing fuseROM usage for memory repair
US9383932B2 (en) * 2013-12-27 2016-07-05 Intel Corporation Data coherency model and protocol at cluster level
US9659616B2 (en) * 2014-08-14 2017-05-23 Apple Inc. Configuration fuse data management in a partial power-on state
KR20170008553A (en) * 2015-07-14 2017-01-24 에스케이하이닉스 주식회사 Semiconductor apparatus and repair method of the same

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5617531A (en) * 1993-11-02 1997-04-01 Motorola, Inc. Data Processor having a built-in internal self test controller for testing a plurality of memories internal to the data processor
US5664089A (en) * 1994-04-26 1997-09-02 Unisys Corporation Multiple power domain power loss detection and interface disable
US5999463A (en) * 1997-07-21 1999-12-07 Samsung Electronics Co., Ltd. Redundancy fuse box and semiconductor device including column redundancy fuse box shared by a plurality of memory blocks
US6041000A (en) * 1998-10-30 2000-03-21 Stmicroelectronics, Inc. Initialization for fuse control
US6052700A (en) * 1998-09-17 2000-04-18 Bull Hn Information Systems Inc. Calendar clock caching in a multiprocessor data processing system
US20030196143A1 (en) * 2002-04-11 2003-10-16 Lsi Logic Corporation Power-on state machine implementation with a counter to control the scan for products with hard-BISR memories
US20040066684A1 (en) * 2002-10-08 2004-04-08 Hiroshi Ito Semiconductor integrated circuit device
US20060156189A1 (en) * 2004-12-21 2006-07-13 Andrew Tomlin Method for copying data in reprogrammable non-volatile memory
US20070247886A1 (en) * 2006-04-19 2007-10-25 Russell Andrew C Memory circuit
US7313038B2 (en) * 2005-04-06 2007-12-25 Kabushiki Kaisha Toshiba Nonvolatile memory including a verify circuit
US7415640B1 (en) * 2003-10-13 2008-08-19 Virage Logic Corporation Methods and apparatuses that reduce the size of a repair data container for repairable memories
US7434122B2 (en) * 2004-08-04 2008-10-07 Samsung Electronics Co., Ltd. Flash memory device for performing bad block management and method of performing bad block management of flash memory device
US7460421B2 (en) * 2006-01-18 2008-12-02 Kabushiki Kaisha Toshiba Semiconductor integrated circuit device
US7477564B2 (en) * 2004-12-20 2009-01-13 International Business Machines Corporation Method and apparatus for redundant memory configuration in voltage island
US20090132876A1 (en) * 2007-11-19 2009-05-21 Ronald Ernest Freking Maintaining Error Statistics Concurrently Across Multiple Memory Ranks
US7793179B2 (en) * 2006-06-27 2010-09-07 Silicon Image, Inc. Test clock control structures to generate configurable test clocks for scan-based testing of electronic circuits using programmable test clock controllers
US7954023B2 (en) * 2007-12-25 2011-05-31 Renesas Electronics Corporation Semiconductor integrated circuit including power domains

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7757135B2 (en) 2006-09-11 2010-07-13 Mentor Graphics Corporation Method and apparatus for storing and distributing memory repair information

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5617531A (en) * 1993-11-02 1997-04-01 Motorola, Inc. Data Processor having a built-in internal self test controller for testing a plurality of memories internal to the data processor
US5664089A (en) * 1994-04-26 1997-09-02 Unisys Corporation Multiple power domain power loss detection and interface disable
US5999463A (en) * 1997-07-21 1999-12-07 Samsung Electronics Co., Ltd. Redundancy fuse box and semiconductor device including column redundancy fuse box shared by a plurality of memory blocks
US6052700A (en) * 1998-09-17 2000-04-18 Bull Hn Information Systems Inc. Calendar clock caching in a multiprocessor data processing system
US6041000A (en) * 1998-10-30 2000-03-21 Stmicroelectronics, Inc. Initialization for fuse control
US20030196143A1 (en) * 2002-04-11 2003-10-16 Lsi Logic Corporation Power-on state machine implementation with a counter to control the scan for products with hard-BISR memories
US20040066684A1 (en) * 2002-10-08 2004-04-08 Hiroshi Ito Semiconductor integrated circuit device
US6804156B2 (en) * 2002-10-08 2004-10-12 Kabushiki Kaisha Toshiba Semiconductor integrated circuit device
US7415640B1 (en) * 2003-10-13 2008-08-19 Virage Logic Corporation Methods and apparatuses that reduce the size of a repair data container for repairable memories
US7434122B2 (en) * 2004-08-04 2008-10-07 Samsung Electronics Co., Ltd. Flash memory device for performing bad block management and method of performing bad block management of flash memory device
US7477564B2 (en) * 2004-12-20 2009-01-13 International Business Machines Corporation Method and apparatus for redundant memory configuration in voltage island
US20060156189A1 (en) * 2004-12-21 2006-07-13 Andrew Tomlin Method for copying data in reprogrammable non-volatile memory
US7313038B2 (en) * 2005-04-06 2007-12-25 Kabushiki Kaisha Toshiba Nonvolatile memory including a verify circuit
US7460421B2 (en) * 2006-01-18 2008-12-02 Kabushiki Kaisha Toshiba Semiconductor integrated circuit device
US20070247886A1 (en) * 2006-04-19 2007-10-25 Russell Andrew C Memory circuit
US7793179B2 (en) * 2006-06-27 2010-09-07 Silicon Image, Inc. Test clock control structures to generate configurable test clocks for scan-based testing of electronic circuits using programmable test clock controllers
US20090132876A1 (en) * 2007-11-19 2009-05-21 Ronald Ernest Freking Maintaining Error Statistics Concurrently Across Multiple Memory Ranks
US7954023B2 (en) * 2007-12-25 2011-05-31 Renesas Electronics Corporation Semiconductor integrated circuit including power domains

Also Published As

Publication number Publication date Type
US20100318843A1 (en) 2010-12-16 application
US8775880B2 (en) 2014-07-08 grant

Similar Documents

Publication Publication Date Title
US6829728B2 (en) Full-speed BIST controller for testing embedded synchronous memories
US6940765B2 (en) Repair apparatus and method for semiconductor memory device to be selectively programmed for wafer-level test or post package test
US6373758B1 (en) System and method of operating a programmable column fail counter for redundancy allocation
US6898776B1 (en) Method for concurrently programming a plurality of in-system-programmable logic devices by grouping devices to achieve minimum configuration time
US6370661B1 (en) Apparatus for testing memory in a microprocessor
US5469390A (en) Semiconductor memory system with the function of the replacement to the other chips
US6205564B1 (en) Optimized built-in self-test method and apparatus for random access memories
US6550023B1 (en) On-the-fly memory testing and automatic generation of bitmaps
US7415640B1 (en) Methods and apparatuses that reduce the size of a repair data container for repairable memories
US6691252B2 (en) Cache test sequence for single-ported row repair CAM
US6408401B1 (en) Embedded RAM with self-test and self-repair with spare rows and columns
US5764577A (en) Fusleless memory repair system and method of operation
US20020108073A1 (en) System for and method of operating a programmable column fail counter for redundancy allocation
US7139204B1 (en) Method and system for testing a dual-port memory at speed in a stressed environment
US20020133770A1 (en) Circuit and method for test and repair
US20060156134A1 (en) Programmable memory built-in-self-test (MBIST) method and apparatus
US6768694B2 (en) Method of electrically blowing fuses under control of an on-chip tester interface apparatus
US20050128830A1 (en) Semiconductor memory device
US20130173970A1 (en) Memory device with background built-in self-testing and background built-in self-repair
US6667917B1 (en) System and method for identification of faulty or weak memory cells under simulated extreme operating conditions
US7127647B1 (en) Apparatus, method, and system to allocate redundant components
US6973605B1 (en) System and method for assured built in self repair of memories
US5920515A (en) Register-based redundancy circuit and method for built-in self-repair in a semiconductor memory device
US7237154B1 (en) Apparatus and method to generate a repair signature
US6445627B1 (en) Semiconductor integrated circuit