CA2313951A1 - Scheme for accelerating bit line equalization in a high speed dram architecture - Google Patents
- Publication number
- CA2313951A1
- Authority
- CA
- Canada
- Prior art keywords
- bitline
- bit line
- equalization
- memory
- scheme
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/12—Bit line control circuits, e.g. drivers, boosters, pull-up circuits, pull-down circuits, precharging circuits, equalising circuits, for bit lines
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2207/00—Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
- G11C2207/12—Equalization of bit lines
Description
A Scheme for Accelerating Bit Line Equalization in a High Speed DRAM Architecture by Paul W. DeMone, June 6, 2000

A) Problem

The bitline equalization and precharge portion of a DRAM row access cycle represents operational overhead that increases the average latency of memory operations and reduces the rate at which row accesses can be performed. Part of the difficulty in reducing this dead time is due to typical DRAM architectures, which maximize memory capacity per unit area by favouring large DRAM cell arrays. Long, highly capacitive bitlines require a relatively large amount of current to change their voltage quickly. At the same time, the width of large DRAM arrays requires the simultaneous precharge and equalization of thousands of bit lines. The large number of active bitlines limits the drive strength of the precharge and equalization devices for individual bit line pairs, to avoid the difficulties associated with large peak aggregate currents.
New DRAM architectures for embedded applications often focus on performance rather than bit density. This is achieved by increasing the degree of subdivision of the overall memory into a larger number of sub-arrays. Smaller active sub-arrays permit the use of higher drive, faster precharge and equalization circuits than is possible in commodity memory devices. But this approach runs into a fundamental limit on how much the bitline equalization period can be shortened: the distributed resistive and capacitive parasitic characteristics of the bitline material.
B) Previous Approaches

Traditionally, the designers of commodity DRAM devices focused strongly on achieving low cost per bit through high aggregate bit density rather than on higher memory performance. The cell capacity of a two-dimensional memory array increases quadratically with scaling, while the "overhead" area of bit line sense amps, word line drivers, and X and Y address decoders increases only linearly. The focus on memory density therefore meant that commodity DRAM devices were architected with sub-arrays as large as practically possible, despite the strongly negative effect on the time required to perform bitline precharge and equalization (as well as cell readout, sensing, and the writing of new values).
The latency impact of slow bitline equalization and precharge has traditionally been minimized by the creation of two different classes of memory operations: bank accesses (full row and column access) and faster page accesses (column access only, to a row left open from a bank operation). The efficacy of page accesses in reducing average latency is due to the statistical spatial locality in the memory access patterns of many computing and communication applications, that is, the strong probability that consecutive memory accesses will target the same row.
But this architecture is undesirable for many applications, such as real-time control and digital signal processing, that value deterministic, or at least minimum assured, levels of memory performance regardless of the memory address access pattern. One solution is to perform a complete row and column access for every memory operation and automatically close the row at the end of the operation. Unfortunately, even a highly sub-divided, small sub-array DRAM architecture is performance limited by the distributed RC parasitic characteristics of the bit line material under current DRAM design and layout practices.
C) Key Aspects of the Invention

Current DRAM design and layout practices related to bitline precharge and equalization are shown in Figure 1. The DRAM array is composed of a number of pairs of bitlines, each of which shares sense amplifiers and precharge/equalization circuitry. The DRAM may be arranged with all circuitry associated with the bitlines on one side of the memory cell array (1A), or with the peripheral circuitry for adjacent bitline pairs distributed on opposite sides of the array (1B).
Bitline precharge and equalization is performed by three n-channel transistors, N1, N2, and N3. N1 helps to equalize the voltage on the associated true and complementary bitlines, while N2 and N3 drive the true and complementary bitlines, respectively, to the precharge voltage level.
During a DRAM access the bitline sense amplifiers SA sense the voltage difference between the true and complementary bitlines induced by the readout of the charge within the accessed memory cell. The sense amp amplifies the difference until the bitline with the higher voltage is raised close to Vdd while the bitline with the lower voltage is pulled down close to Vss. Common practice is for the bitline precharge voltage Vblp to be set close to midway between Vdd and Vss.
Ideally, only device N1 is needed, because the precharge voltage can be achieved by charge sharing between the true and complementary bit lines when the two are shorted through N1. In practice, leakage, capacitive coupling, asymmetries in bitline capacitance, and other effects mean that some current must be supplied through N2 and N3 to restore the bitlines to Vblp.
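The midpoint result of equalization by shorting alone follows directly from charge conservation. A minimal sketch (the function name and the capacitance values are illustrative, not from the patent) also shows why a capacitance mismatch between the two bitlines leaves the shorted pair off Vblp, motivating the restoring current through N2 and N3:

```python
def charge_share(v_a, v_b, c_a, c_b):
    """Final voltage after shorting two capacitors together:
    total charge (c_a*v_a + c_b*v_b) is conserved on c_a + c_b."""
    return (c_a * v_a + c_b * v_b) / (c_a + c_b)

vdd, vss = 1.8, 0.0
# Symmetric bitlines: shorting alone lands exactly at (Vdd + Vss) / 2.
print(charge_share(vdd, vss, 1.0, 1.0))   # 0.9
# A 5% capacitance mismatch pulls the shared voltage above Vblp.
print(charge_share(vdd, vss, 1.05, 1.0))
```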
D) The Invention

The difficulty involved in performing the bitline equalization and precharge quickly is illustrated in Figure 2A. The necessary circuitry, transistors N1, N2, and N3, is located at one end of a bitline pair. The bitlines have significant distributed RC parasitics due to the minimum or near-minimum width of the bitlines and the drain capacitance of the memory array access transistors attached to them. The time needed to equalize and precharge a bit line pair is approximately proportional to the square of the length of the bitline within the memory array.
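The quadratic dependence on bitline length can be checked with the Elmore delay of a uniform n-segment RC ladder driven from one end. This is a standard first-order model, offered here as an illustration rather than an analysis from the patent; the unit values of r and c are arbitrary:

```python
def elmore_far_end(n, r=1.0, c=1.0):
    """Elmore delay from the driven end to the far end of an
    n-segment RC ladder: capacitor j sees the j segment
    resistances upstream of it, so the delay is sum(j*r*c)."""
    return sum(j * r * c for j in range(1, n + 1))

# Doubling the line length roughly quadruples the delay
# (exactly n*(n+1)/2, which approaches n**2/2 for large n):
print(elmore_far_end(16))  # 136.0
print(elmore_far_end(32))  # 528.0
```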
The invention is the addition of an extra equalization transistor, N4, connected across each bitline pair as shown in Figure 2B. The N4 device is located on the opposite side of the memory array from the sense amplifier and the traditional equalization devices. The addition of the N4 device effectively halves the length of the bitline as far as the RC delay is concerned, and reduces the time needed to perform bitline equalization and precharge by about 75%. The location of N4 is the key to the invention, not the extra drive N4 represents.
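The roughly 75% reduction can be sanity-checked with a crude discretized model of the differential bitline voltage: an RC ladder in which each shorting device is a conductance pulling the true/complement difference toward zero at its node. All parameters (node count, conductances, threshold, time step) are illustrative choices of mine, not values from the patent:

```python
import numpy as np

def equalization_time(n, eq_nodes, r=1.0, c=1.0, g=25.0,
                      v0=1.0, threshold=0.05, dt=0.02, t_max=2000.0):
    """Time for the differential bitline voltage to decay below
    threshold*v0 on an n-node RC ladder, by forward-Euler integration.
    eq_nodes: node indices where a shorting device of conductance g
    drives the difference toward zero."""
    d = np.full(n, v0)            # differential voltage at each node
    eq = np.asarray(eq_nodes)
    t = 0.0
    while t < t_max:
        i = np.zeros(n)
        i[1:] += (d[:-1] - d[1:]) / r    # current in from the left
        i[:-1] += (d[1:] - d[:-1]) / r   # current in from the right
        i[eq] -= g * d[eq]               # equalizer shorting current
        d = d + dt * i / c
        t += dt
        if np.max(np.abs(d)) < threshold * v0:
            return t
    raise RuntimeError("did not equalize within t_max")

n = 16
t_one = equalization_time(n, [0])           # N1 at the sense-amp end only
t_both = equalization_time(n, [0, n - 1])   # N1 plus a far-end N4
reduction = 1.0 - t_both / t_one
print(f"one end: {t_one:.1f}  both ends: {t_both:.1f}  "
      f"reduction: {reduction:.0%}")
```

Because the settling time of a distributed RC line grows as the square of the distance to the nearest equalizer, halving that distance cuts the time by roughly a factor of four, in line with the 75% figure above.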
E) Design variations

The invention can be implemented for DRAM architectures with bitline peripheral circuitry located on one or both sides of the memory sub-array, as shown in Figures 3A and 3B respectively. An alternative arrangement places the secondary bitline equalization shorting transistor N4 in the middle of the array. In this case the size of the primary shorting transistor N1 may be greatly reduced, because it is only needed to compensate for the capacitance of the sense amplifier and column access devices; the central location of N4 is sufficient to cut the effective length of the distributed RC delay of the bitlines in half. This variant is shown in Figures 4A and 4B for the single-sided and dual-sided bitline peripheral circuit arrangements respectively.
F) Other Applications

The invention can be applied to other situations where long pairs of wires are used to transmit data, either differentially or dual rail, and the signal pair is equalized between data items. This may include high performance SRAMs, other types of electronic memories that are arranged in arrays, and long, high fanout data buses within the datapaths of digital signal processors and microprocessors.
Claims
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA 2313951 CA2313951A1 (en) | 2000-07-07 | 2000-07-07 | Scheme for accelerating bit line equalization in a high speed dram architecture |
PCT/CA2000/001008 WO2002005289A1 (en) | 2000-07-07 | 2000-08-31 | A method and apparatus for accelerating signal equalization between a pair of signal lines |
KR10-2003-7000243A KR20030037263A (en) | 2000-07-07 | 2000-08-31 | A method and apparatus for accelerating signal equalization between a pair of signal lines |
CN00819733A CN1454385A (en) | 2000-07-07 | 2000-08-31 | A method and apparatus for accelerating signal equalization between a pair of signal lines |
GB0230353A GB2379546A (en) | 2000-07-07 | 2000-08-31 | A method and apparatus for accelerating signal equalization between a pair of signal lines |
DE10085476T DE10085476T1 (en) | 2000-07-07 | 2000-08-31 | Method and device for accelerating signal equalization between a pair of signal lines |
CA002414249A CA2414249A1 (en) | 2000-07-07 | 2000-08-31 | A method and apparatus for accelerating signal equalization between a pair of signal lines |
AU2000268134A AU2000268134A1 (en) | 2000-07-07 | 2000-08-31 | A method and apparatus for accelerating signal equalization between a pair of signal lines |
US10/336,851 US6785176B2 (en) | 2000-07-07 | 2003-01-06 | Method and apparatus for accelerating signal equalization between a pair of signal lines |
US10/855,410 US20040264272A1 (en) | 2000-07-07 | 2004-05-28 | Method and apparatus for accelerating signal equalization between a pair of signal lines |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA 2313951 CA2313951A1 (en) | 2000-07-07 | 2000-07-07 | Scheme for accelerating bit line equalization in a high speed dram architecture |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2313951A1 (en) | 2002-01-07 |
Family
ID=4166721
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA 2313951 Abandoned CA2313951A1 (en) | 2000-07-07 | 2000-07-07 | Scheme for accelerating bit line equalization in a high speed dram architecture |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2313951A1 (en) |
- 2000-07-07: CA application 2313951 filed, published as CA2313951A1; status: Abandoned (not active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6522565B2 (en) | Semiconductor storage device | |
KR100443029B1 (en) | Semiconductor memory device, semiconductor device, data processing device and computer system | |
US7613057B2 (en) | Circuit and method for a sense amplifier | |
US7746716B2 (en) | Memory having a dummy bitline for timing control | |
KR100824798B1 (en) | Memory core capable of writing a full data pattern to edge sub arrays, semiconductor memory device having the same, and method for testing edge sub arrays | |
US6421290B2 (en) | Output circuit for alternating multiple bit line per column memory architecture | |
US6058065A (en) | Memory in a data processing system having improved performance and method therefor | |
KR100702355B1 (en) | Semiconductor memory having dual port cell supporting hidden refresh | |
US10153007B2 (en) | Apparatuses including a memory array with separate global read and write lines and/or sense amplifier region column select line and related methods | |
US20210134371A1 (en) | Sram memory having subarrays with common io block | |
US6266266B1 (en) | Integrated circuit design exhibiting reduced capacitance | |
US4418399A (en) | Semiconductor memory system | |
US8107278B2 (en) | Semiconductor storage device | |
EP0454061B1 (en) | Dynamic random access memory device with improved power supply system for speed-up of rewriting operation on data bits read-out from memory cells | |
US6785176B2 (en) | Method and apparatus for accelerating signal equalization between a pair of signal lines | |
KR20080009129A (en) | Storage circuit and method therefor | |
US5666306A (en) | Multiplication of storage capacitance in memory cells by using the Miller effect | |
US5745423A (en) | Low power precharge circuit for a dynamic random access memory | |
US5375097A (en) | Segmented bus architecture for improving speed in integrated circuit memories | |
US6977860B1 (en) | SRAM power reduction | |
US6697293B2 (en) | Localized direct sense architecture | |
US6876571B1 (en) | Static random access memory having leakage reduction circuit | |
CA2313951A1 (en) | Scheme for accelerating bit line equalization in a high speed dram architecture | |
CA2414249A1 (en) | A method and apparatus for accelerating signal equalization between a pair of signal lines | |
WO2022269492A1 (en) | Low-power static random access memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Dead |