US20010025333A1 - Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache - Google Patents
Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache
- Publication number
- US20010025333A1 (application US 09/864,458)
- Authority
- US
- United States
- Prior art keywords
- cache
- memory device
- volatile memory
- memory array
- row
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/005—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor comprising combined but independently operative RAM-ROM, RAM-PROM, RAM-EPROM cells
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/20—Employing a main memory using a specific memory technology
- G06F2212/202—Non-volatile memory
- G06F2212/2022—Flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/30—Providing cache or TLB in specific location of a processing system
- G06F2212/304—In main memory subsystem
- G06F2212/3042—In main memory subsystem being part of a memory device, e.g. cache DRAM
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/26—Sensing or reading circuits; Data output circuits
Definitions
- the present invention relates, in general, to the field of non-volatile integrated circuit (“IC”) memory devices. More particularly, the present invention relates to an integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache integrated monolithically therewith.
- main memory has been made up of numbers of asynchronous dynamic random access memory (“DRAM”) integrated circuits and it was not until the introduction of faster static random access memory (“SRAM”) cache memory that the performance of systems with DRAM main memory improved.
- This method of copying memory is referred to as “caching” a memory system and is a technique made possible by virtue of the fact that much of the CPU accesses to memory is directed at localized memory address regions. Once such a region is copied from main memory to the cache, the CPU can access the cache through many bus cycles before needing to refresh the cache with a new memory address region.
- This method of memory copying is advantageous in memory Read cycles which, in contrast to Write cycles, have been shown to constitute 90% of the external accesses of the CPU.
- DRAM memory devices are designed utilizing a volatile, dynamic memory cell architecture, typically with each cell comprising a single transistor and capacitor. They are “volatile” in the sense that upon powerdown, the memory contents are lost and “dynamic” in the sense that they must be constantly refreshed to maintain the charge in the cell capacitor. The refresh operation is accomplished when the memory contents of a row of cells in the memory array is read by the sense amplifiers and the logic states in the cells that have been read are amplified and written back to the cells. As mentioned previously, DRAM is used primarily for memory reads and writes and is relatively inexpensive to produce in terms of die area. It does, however, provide relatively slow access times.
- SRAM devices are designed utilizing a volatile static memory cell architecture. They are considered to be “static” in that the contents of the memory cells need not be refreshed and the memory contents may be maintained indefinitely as long as power is supplied to the device.
- the individual memory cells of an SRAM generally comprise a simple, bi-stable transistor-based latch, using four or six transistors, that is either set or reset depending on the state of the data that was written to it.
- SRAM provides much faster read and write access time than DRAM and, as previously mentioned, is generally used as a memory cache. However, because the individual memory cell size is significantly larger, it is much more expensive to produce in terms of on-chip die area than DRAM and it also generates more heat. Typical devices cost three to four times that of DRAM.
- non-volatile memory devices are also currently available, by means of which data, can be retained without continuously applied power. These include, for example, erasable programmable read only memory (“EPROM”) devices, including electrically erasable (“EEPROM”) devices, and Flash memory. While providing non-volatile data storage, their relatively slow access times (and in particular their very slow “write” times) present a significant disadvantage to their use in certain applications.
- ferroelectric memory devices, such as the FRAM® family of solid state, random access memory integrated circuits available from Ramtron International Corporation, provide non-volatile data storage through the use of a ferroelectric dielectric material which may be polarized in one direction or another in order to store a binary value.
- the ferroelectric effect allows for the retention of a stable polarization in the absence of an applied electric field due to the alignment of internal dipoles within the Perovskite crystals in the dielectric material. This alignment may be selectively achieved by application of an electric field which exceeds the coercive field of the material. Conversely, reversal of the applied field reverses the internal dipoles.
- both elements are polarized in the same direction and the sense amps measure the difference between the amount of charge transferred from the cells to a pair of complementary bit lines. In either case, since a “read” to a ferroelectric memory is a destructive operation, the correct data is then restored to the cell during a precharge operation.
- the conventional write mechanism for a 2T/2C memory cell includes inverting the dipoles on one cell capacitor and holding the electrode, or plate, to a positive potential greater than the coercive voltage for a nominal 100 nanosecond (“nsec.”) time period. The electrode is then brought back to circuit ground for the other cell capacitor to be written for an additional nominal 100 nsec.
- non-volatile memory device that provides the traditional benefits of non-volatile memory retention in the absence of applied power yet also provides the enhanced access times approaching that of other memory technologies when utilized as an on-chip integrated cache in conjunction with a non-volatile memory array.
- the cache may be provided as SRAM and the non-volatile memory array provided as ferroelectric random access memory (for example, FRAM®) wherein, on a read, the row is cached and the write back cycle is started, allowing subsequent in-page reads to occur very quickly. If in-page accesses are sufficiently frequent, the precharge of a ferroelectric based memory array may be hidden and writes can occur utilizing write back or write through caching.
- the non-volatile memory array may comprise EPROM, EEPROM or Flash memory in conjunction with an SRAM cache or a ferroelectric random access memory based cache (for example, FRAM®) which has symmetric read/write times and faster write times than EPROM, EEPROM or Flash memory.
- a memory device comprising a non-volatile memory array.
- the device includes an address bus for receiving row and column address signals for accessing specified locations within the memory array and a data bus for receiving data to be written to a location in the memory array specified by the row and column address signals and for presenting data read from the memory array at a location specified by the row and column address signals.
- the memory device further comprises a cache associated with the memory array and coupled to the data bus for storing at least a portion of the data to be read from the memory array, the cache having a relatively faster access time than the memory array.
- a non-volatile memory device which includes a non-volatile memory array having associated row and column decoders; an address bus for receiving row and column address signals for application to the row and column decoders respectively; a cache interposed between the column decoder and the non-volatile memory array, the cache having a relatively faster access time than the non-volatile memory array; and a data bus coupled to the cache for receiving data to be written to a location in the non-volatile memory array specified by the row and column decoders and for presenting data read from the memory array at a location specified by the row and column decoders.
- FIG. 1 is a simplified logic block diagram of a representative parallel version of an integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache in accordance with the present invention
- FIG. 2 is a logic flow chart of an exemplary memory device read cycle operation in an embodiment of the present invention utilizing a FRAM technology-based non-volatile memory array and an SRAM-based memory for the cache in a “write back” caching scheme;
- FIG. 3 is a corresponding logic flow chart of an exemplary memory device write cycle operation in an embodiment of the present invention corresponding to the embodiment characterized in FIG. 2 utilizing a “write back” caching scheme;
- FIG. 4 is a logic flow chart of an exemplary memory device read cycle operation in an embodiment of the present invention utilizing a FRAM technology-based non-volatile memory array and an SRAM-based memory for the cache in a “write through” caching scheme;
- FIG. 5 is a corresponding logic flow chart of an exemplary memory device write cycle operation in an embodiment of the present invention corresponding to the embodiment characterized in FIG. 4 utilizing a “write through” caching scheme;
- FIG. 6 is a logic flow chart of an exemplary memory device read cycle operation in an embodiment of the present invention utilizing an EEPROM or Flash technology-based non-volatile memory array and an SRAM-based memory for the cache in a “write back” caching scheme;
- FIG. 7 is a corresponding logic flow chart of an exemplary memory device write cycle operation in an embodiment of the present invention corresponding to the embodiment characterized in FIG. 6 utilizing a “write back” caching scheme.
- In FIG. 1, a simplified logic block diagram of a representative integrated circuit memory device 10 incorporating a non-volatile memory array 12 and a relatively faster access time memory cache 14 in accordance with the present invention is shown. It should be noted that although a parallel memory device 10 has been illustrated, the principles of the present invention are likewise applicable to those incorporating a serial data bus as well as synchronous devices.
- the exemplary memory device 10 illustrated is accessed by means of an external address bus 16 comprising a number of address lines A0 through An inclusive.
- the address bus is applied to a row address latch 18 as well as a column address latch 20 .
- the row address latch 18 and column address latch 20 are operative to respectively maintain a row and column address for accesses to the non-volatile memory array 12 .
- the output of the row address latch 18 is supplied directly to a row decoder 22 associated with the non-volatile memory array 12 for accessing a specified row therein as well as to a row address compare block 24 .
- the output of the column address latch 20 is supplied to a corresponding column decoder 26 for accessing a specified column of the non-volatile memory array 12 as determined by that portion of the address signal supplied to the address bus 16 maintained in the column address latch 20 .
- the cache 14 may be interposed between the column decoder 26 and a number of sense amplifiers 28 bi-directionally coupling the column decoder and the non-volatile memory array 12 .
- the cache 14 may comprise a row of SRAM registers for maintaining a last row read (“LRR”) from the non-volatile memory array 12 , which itself may be constructed utilizing FRAM technology memory cells.
- the cache 14 may be rendered essentially non-volatile through the use of a pair of FRAM memory cells associated with SRAM memory cells as disclosed in U.S. Pat. No. 4,809,225 assigned to Ramtron International Corporation, the disclosure of which is herein incorporated by this reference.
- An input/output (“I/O”) decoder (or controller) 30 is coupled to an output of the row address compare block 24 and is bi-directionally coupled to the cache 14 .
- the I/O decoder 30 presents an external “Ready” (or “Not Ready”) signal, either of which might be active “high” or active “low”, on line 32.
- Data output from (i.e., “Q”) or data to be written to (i.e., “D”) the memory device 10 is handled by means of an input/output (“I/O”) bus 34 which may comprise any number of bi-directional signal lines I/O0 through I/ON. In a serial implementation the “Q” and “D” signals would be separate outputs and inputs respectively.
- An externally supplied chip select (“CS”) signal on line 38, write enable (“WE”) signal on line 40 and output enable (“OE”) signal on line 42 are also supplied to the memory device 10 through the I/O decoder 30.
- an external clock signal (“CLK”) may be supplied on an optional clock line 36 .
- data is read from and written to the non-volatile memory array 12 through the cache 14 .
- an address bus 16 (A0-An), the I/O bus 34, chip select (or chip enable) line 38 (“CS” or “CE”), a write enable line 40 (“WE”), an output enable line 42 (“OE”) and a Ready line 32.
- the row address compare block 24 and Ready line 32 signal the user when data is present in the row register, or cache 14 , and a fast access is practicable.
- This function may be implemented externally (by the user) but inclusion as a portion of the memory device 10 has certain advantages.
- Alternative access and control schemes (i.e., multiplexed addresses, burst counters, read only, common I/O, etc.) are also within the contemplation of the present invention.
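The access and control structure summarized above (row and column address latches, a row address compare block, a row-register cache with a “dirty” state, and a Ready indication) can be pictured with a small behavioral model. The sketch below is illustrative only: the type and size names (nv_cached_mem_t, NUM_ROWS, ROW_SIZE) are assumptions, not values taken from the patent.

```c
/* Hypothetical behavioral model of the FIG. 1 blocks; sizes and names are assumed. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_ROWS 1024            /* rows in the non-volatile memory array 12 (assumed)     */
#define ROW_SIZE 256             /* bytes per row held in the SRAM row register (assumed)  */

typedef struct {
    uint8_t array[NUM_ROWS][ROW_SIZE]; /* non-volatile memory array 12            */
    uint8_t row_reg[ROW_SIZE];         /* row register / cache 14 (last row read) */
    int     cached_row;                /* row held in the cache, -1 when empty    */
    bool    dirty;                     /* set when the cache has been written     */
    bool    ready;                     /* state presented on the Ready line 32    */
} nv_cached_mem_t;

/* Row address compare block 24: a fast, in-page access is practicable on a hit. */
static bool row_hit(const nv_cached_mem_t *m, int row)
{
    return m->cached_row == row;
}

/* Fill the row register from the array (the "last row read" cache load). */
static void load_row(nv_cached_mem_t *m, int row)
{
    memcpy(m->row_reg, m->array[row], ROW_SIZE);
    m->cached_row = row;
    m->dirty = false;
}
```

On every access the controller would first consult row_hit(); only a miss has to pay the non-volatile array's access time.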
- the use of FRAM memory cells for the non-volatile memory array 12 supports a much faster write cycle than does either Flash or EEPROM and therefore both “write through” and “write back” caching mechanisms might be utilized depending on the particular application.
- the following discussion describes the logical control for a memory device 10 for each of these technologies in a single bank implementation, although it should be noted that the principles of the present invention are likewise applicable to multiple non-volatile memory array 12 banks within a given memory device 10 .
- In FIG. 2, the operation of a particular implementation of a memory device 10 in accordance with the present invention is shown for a read operation utilizing an FRAM memory cell based non-volatile memory array 12 in a write back caching scheme.
- a write back operation is contemplated wherein all accesses to the memory device 10 are made via the SRAM cache 14 .
- the contents of the cache 14 are only written to the non-volatile memory array 12 on a row “miss” or if the chip select (or chip enable, “CE”) line 38 transitions to an inactive state (i.e., the memory device 10 is deactivated).
- the control logic must, therefore, know if the cache 14 has been written, for which the embodiment of the present invention shown sets a “dirty” bit if a write has occurred.
- the SRAM row cache 14 also allows for reads from the cache 14 while a precharge cycle is completed.
- the current implementation of FRAM memory cell based memories inverts the data in the memory cell to determine the state. The data is written back to the cell during the precharge cycle. If the previous cycle had been a read “miss”, the precharge cycle could be in progress. Accesses to the non-volatile memory array 12 cannot be performed until the cycle is complete. In future implementations of ferroelectric memories this may no longer be necessary and, therefore, these delays could be eliminated.
- This will reduce the cycle time in applications where reads are either local or sequential (cache “hit”s).
- the memory device 10 operation begins with the chip enable (or chip select) line 38 going active (either active “high” or active “low” as a design choice) and depends on the state of the cache 14 (“dirty” or “clean”) and the operation preceding the cycle.
- the process 100 begins with the memory device 10 in a standby mode until the CE line 38 becomes active at decision step 102 .
- the address on the address bus 16 is detected at step 104 and latched at step 106 .
- a read or write cycle is determined by the state of write enable line 40 at decision step 142 following a row address compare step 108 . If at decision step 142 the WE line 40 is active, the process 100 proceeds to a write cycle as will be more fully described hereinafter with respect to FIG. 3.
- the read cycle is preceded by a page (row) detect operation at decision step 110 to determine if the data is in the cache 14 (row register). If the address is in the cache (a read “hit”), the Ready line 32 is asserted at step 112 , the column address is acquired at step 114 , the appropriate data is output at step 116 , the Ready signal on line 32 is de-asserted at step 118 (after a predetermined delay). At this point the memory device 10 will wait for a new valid address or a transition of the chip enable line 38 to an inactive state at decision step 120 . If the chip enable line 38 has gone inactive, the “dirty” bit is checked at decision step 122 .
- If the cache 14 is “dirty”, it is written back at step 124 to the non-volatile memory array 12 (if a precharge cycle is in progress, it must complete before the write back begins), and the “dirty” bit is cleared at step 126. This maintains coherency between the contents of the cache 14 and the non-volatile memory array 12 should a power down cycle occur before the CE line 38 becomes active again.
- the memory device 10 then waits for the chip enable line 38 to become active at decision step 102 . If the chip enable line 38 remains active, the memory device 10 waits for a valid address at step 104 .
- In a read “miss”, the memory device 10 again remains in standby until CE becomes active at decision step 102.
- the address is detected at step 104 and latched at step 106 .
- a read or write cycle is determined by the state of write enable line 40 as previously described.
- the read cycle is preceded by a page (row) detect at decision step 110 to determine if the data is in the cache 14 . Since the address is not in the cache 14 (a read “miss”), it must be determined if a precharge cycle is in progress at decision step 128 .
- When the precharge cycle is completed at step 130, a new row is loaded in the cache 14 at step 138, the Ready line 32 is asserted at step 112, the column address is acquired at step 114, the precharge cycle is initiated at step 140 in parallel, the data is output at step 116, the Ready line 32 is de-asserted at step 118 after a specified delay, and the memory device waits for a CE line 38 transition at step 120 or a valid address at step 104. If the CE line 38 transitions, it is handled as previously described with respect to a read “hit”. If a precharge cycle at decision step 128 is not in progress, the “dirty” bit is checked at decision step 132 to see if the cache 14 had been previously written.
- If the cache 14 is “clean”, the cache 14 is loaded at step 138 and the process 100 proceeds as previously described.
- Alternatively, if the cache 14 is “dirty”, the contents of the cache 14 are written back to the non-volatile memory array 12 at step 134 (full cycle including precharge), the “dirty” bit is cleared at step 136, the cache 14 is loaded at step 138, the Ready line 32 is asserted at step 112 and the process 100 proceeds as hereinbefore described.
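A minimal sketch of the read-cycle control just described (process 100) is given below, assuming a simple row-register model; the fram_dev_t type, the array dimensions and the helper names are illustrative assumptions rather than the patent's implementation, and the Ready-line timing (steps 112 through 118) is reduced to comments.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum { ROWS = 1024, COLS = 256 };          /* assumed array geometry */

typedef struct {
    uint8_t array[ROWS][COLS];             /* FRAM non-volatile memory array 12 */
    uint8_t row_reg[COLS];                 /* SRAM row cache 14                 */
    int     cached_row;                    /* -1 when empty                     */
    bool    dirty;                         /* "dirty" bit                       */
    bool    precharge_busy;                /* FRAM restore/precharge running    */
} fram_dev_t;

static void finish_precharge(fram_dev_t *d) { d->precharge_busy = false; } /* step 130 */
static void start_precharge(fram_dev_t *d)  { d->precharge_busy = true;  } /* step 140 */

static void write_back_row(fram_dev_t *d)   /* step 134: full cycle incl. precharge */
{
    memcpy(d->array[d->cached_row], d->row_reg, COLS);
    d->dirty = false;                       /* step 136 */
}

/* Read one byte; Ready (line 32) would be asserted before the data is driven
 * and de-asserted after a fixed delay (steps 112-118). */
static uint8_t read_cycle(fram_dev_t *d, int row, int col)
{
    if (d->cached_row != row) {             /* page (row) detect, decision step 110 */
        if (d->precharge_busy)              /* decision step 128                    */
            finish_precharge(d);            /* complete it, step 130                */
        else if (d->dirty)                  /* decision step 132                    */
            write_back_row(d);              /* steps 134/136                        */
        memcpy(d->row_reg, d->array[row], COLS);  /* load the new row, step 138     */
        d->cached_row = row;
        d->dirty = false;
        start_precharge(d);                 /* step 140: restore runs in parallel   */
    }
    return d->row_reg[col];                 /* in-page reads come from the cache    */
}
```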
- a write cycle process 200 is shown.
- In a write “hit” mode of operation, the memory device 10 is in standby until the CE line 38 becomes active at decision step 202.
- the address is detected at step 204 and latched at step 206 .
- a read or write cycle is determined by the state of the write enable line 40 , and if it is not active at decision step 210 , a read cycle is entered at step 212 .
- the write cycle is preceded by a page detect operation at a row address compare step 208. If the address is in the cache 14 at decision step 214 (i.e., a cache “hit”), the Ready line 32 is asserted at step 216, the column address is acquired at step 218, the data is written to the cache 14 at step 220, the Ready line 32 is de-asserted at step 222 and the “dirty” bit is set at step 224.
- At this point, the memory device 10 will wait for a new valid address or for the chip enable line 38 to transition to an inactive state at decision step 226. If the chip enable line becomes inactive, the contents of the cache 14 are written back to the non-volatile memory array 12 at step 228 (if a precharge cycle is in progress it must complete before the write back begins), the “dirty” bit is cleared at step 230 and the process 200 proceeds as previously described with respect to a Read Cycle.
- With respect to a write miss, the memory device 10 waits for an active chip enable at decision step 202 and a valid address at step 204.
- a write cycle is determined by the state of write enable line 40 .
- the address is latched at step 206 and compared at step 208 . If the address is not in the cache 14 , it must then be determined if a precharge cycle is in progress at decision step 232 . If the precharge cycle is in progress, it must be allowed to complete at step 234 before loading the cache 14 at step 242 , the Ready line 32 is asserted at step 216 and the process flow 200 completes as previously described. If a precharge is not in progress, the “dirty” bit is checked at step 236 .
- If the cache 14 is “clean”, the new row is loaded into the cache 14 at step 242, the Ready line 32 is asserted at step 216 and the process 200 completes as described. If the “dirty” bit is set at decision step 236, the contents of the cache 14 are written back to the non-volatile memory array 12 at step 238 and the process 200 completes as previously described.
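The corresponding write-cycle control (process 200) and the flush on chip-enable deactivation can be sketched the same way; as before, fram_dev_t and the helper names are assumptions, and only the cache/array bookkeeping is modeled.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum { ROWS = 1024, COLS = 256 };          /* assumed array geometry */

typedef struct {
    uint8_t array[ROWS][COLS];             /* FRAM non-volatile memory array 12 */
    uint8_t row_reg[COLS];                 /* SRAM row cache 14                 */
    int     cached_row;
    bool    dirty;
    bool    precharge_busy;
} fram_dev_t;

static void handle_row_miss(fram_dev_t *d, int row)
{
    if (d->precharge_busy)                 /* decision step 232                     */
        d->precharge_busy = false;         /* allow it to complete, step 234        */
    else if (d->dirty) {                   /* decision step 236                     */
        memcpy(d->array[d->cached_row], d->row_reg, COLS);  /* write back, step 238 */
        d->dirty = false;
    }
    memcpy(d->row_reg, d->array[row], COLS);   /* load the new row, step 242        */
    d->cached_row = row;
    d->dirty = false;
}

/* All writes land in the SRAM row cache at cache speed (steps 216-224); the
 * FRAM array is only updated on a row miss or when CE goes inactive. */
static void write_cycle(fram_dev_t *d, int row, int col, uint8_t data)
{
    if (d->cached_row != row)              /* row address compare, decision step 214 */
        handle_row_miss(d, row);
    d->row_reg[col] = data;                /* write to cache, step 220               */
    d->dirty = true;                       /* set the "dirty" bit, step 224          */
}

/* CE line 38 going inactive: a dirty row is written back (steps 228/230) so the
 * non-volatile array stays coherent across a possible power down. */
static void chip_deselect(fram_dev_t *d)
{
    if (d->precharge_busy)
        d->precharge_busy = false;         /* a precharge must finish first */
    if (d->dirty) {
        memcpy(d->array[d->cached_row], d->row_reg, COLS);
        d->dirty = false;
    }
}
```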
- Non-volatile memory devices utilizing FRAM memory cells may benefit some applications using a write through mechanism.
- the writes in this case will go directly to the non-volatile memory array 12 or to the cache 14 and the non-volatile memory array 12 in the event of a cache “hit”.
- the control in this instance is similar to that of the previously described write back case except there is no analogous write “hit”. Writes to the non-volatile memory array 12 will always require the access time of the FRAM array.
- a read process 300 is shown.
- In a read “hit”, the memory device 10 is in standby until the CE line 38 becomes active at decision step 302.
- the address is detected at step 304 and latched at step 310 .
- a read or write cycle is determined by the state of the write enable line 40 at decision step 306 , and if it is active, then the process 300 proceeds to a write cycle at step 308 .
- the read cycle is preceded by a page (row) detect at step 312 to see if the data is in the cache 14 .
- the Ready line 32 is asserted at step 316 , the column address is acquired at step 318 , the data is output on I/O bus 34 at step 320 and the Ready line is de-asserted at step 322 (after a predetermined delay).
- the memory device 10 then waits for an active CE line 38 at decision step 302 and a valid address at step 304 .
- In a read “miss” operation, the memory device 10 will remain in standby until the CE line 38 becomes active at decision step 302.
- the address is detected at step 304 and latched at step 310 as before.
- a read or write cycle is determined by the state of the write enable line 40 .
- the read cycle is preceded by a page (row) detect at decision step 314 to determine if the data is in the cache 14 . Since the data is not in the cache in the case of a read miss, it must be determined if a precharge cycle is in progress at decision step 324 .
- When the precharge cycle is completed, a new row is loaded in the cache 14 at step 328, the Ready line 32 is asserted at step 316, the column address is acquired at step 318, the precharge cycle is initiated at step 330 in parallel, the data is output at step 320, the Ready line 32 is de-asserted at step 322 (following a predetermined delay) and the memory device 10 again waits for the CE line 38 to become active at decision step 302 (if CE is active, for another valid address) followed by a valid address at step 304.
- a write process 400 for a write through mode of operation is shown.
- Writes are written to the non-volatile memory array 12 directly and begin with an active chip enable line 38 at decision step 402 .
- the address is detected at step 404 and latched at step 410 . If the write enable line 40 is not active at decision step 406 , then a read cycle is initiated at step 408 .
- the row address is compared at step 412 to determine if the address is in the cache 14 . If the address is in the cache 14 (a row hit) at decision step 414 , it must then be determined if a precharge cycle is in progress at decision step 416 .
- the column address is acquired at step 420 , the data is written to the cache 14 and the non-volatile memory array 12 simultaneously at step 422 , the Ready line 32 is asserted at step 424 after the access time requirement is met, and the Ready line is de-asserted at step 426 after a specified delay.
- the memory device 10 then waits for an active chip enable at decision step 402 and a valid address at step 404 .
- If the address is not in the cache 14 at decision step 414 (a row miss), it is determined if a precharge cycle is in progress at decision step 428 and, if so, it completes at step 430, the column address is acquired at step 432, the data is written to the non-volatile memory array only at step 434 (a write through) and the process 400 continues as previously described.
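As a contrast with the write-back sketches above, the write-through write cycle (process 400) can be reduced to the following; the fram_wt_dev_t model is again an illustrative assumption. On a row hit the data goes to the row cache and the FRAM array simultaneously, on a miss it goes to the array only, so every write pays the FRAM access time and no dirty bit is required.

```c
#include <stdbool.h>
#include <stdint.h>

enum { ROWS = 1024, COLS = 256 };          /* assumed array geometry */

typedef struct {
    uint8_t array[ROWS][COLS];             /* FRAM non-volatile memory array 12     */
    uint8_t row_reg[COLS];                 /* SRAM row cache 14                     */
    int     cached_row;                    /* -1 when empty                         */
    bool    precharge_busy;                /* array is always current: no dirty bit */
} fram_wt_dev_t;

static void write_through(fram_wt_dev_t *d, int row, int col, uint8_t data)
{
    if (d->precharge_busy)                 /* decision steps 416/428                */
        d->precharge_busy = false;         /* it completes first (step 430)         */

    if (d->cached_row == row)              /* row hit, decision step 414            */
        d->row_reg[col] = data;            /* cache and array written together, step 422 */

    d->array[row][col] = data;             /* miss path writes the array only, step 434  */
    /* Ready (line 32) is asserted once the array access time is met (step 424)
     * and de-asserted after a specified delay (step 426). */
}
```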
- a memory device 10 with a direct mapped row cache 14 (i.e. an SRAM Row Register) coupled with either an EEPROM or Flash non-volatile memory array 12 is similar to a memory device 10 utilizing a FRAM memory cell memory array 12 as described with respect to FIGS. 2 - 5 inclusive, except that the write mechanisms and write speeds are very much different.
- a read process 500 for a write back caching system utilizing EEPROM, Flash or similar technologies for the non-volatile memory array 12 is shown.
- the memory device 10 will remain in standby until the CE line 38 becomes active at decision step 502 .
- the address is detected at step 504 and latched at step 506 as before.
- a read or write cycle is determined by the state of the write enable line 40 at decision step 510 and if it is active, the process 500 proceeds to a write cycle at step 512 .
- the read cycle is preceded by a page (row) detect at step 508 to see if the data is in the cache 14 (row register).
- the memory device 10 will wait for a new valid address or a transition of the chip enable line 38 to an inactive state. If the chip enable line 38 has transitioned to an inactive state at decision step 524 , the “dirty” bit is checked at decision step 526 .
- If the cache 14 is “dirty”, it is written back at step 528 to the EEPROM/Flash non-volatile memory array 12, and the “dirty” bit is cleared at step 530. This maintains coherency should a power down cycle occur before the CE line 38 becomes active again.
- the memory device 10 then waits for the chip enable line 38 to become active at decision step 502 . If the chip enable line 38 remains active, the memory device 10 waits for a valid address at step 504 .
- In a read miss operation, the memory device 10 will remain in standby until the CE line 38 becomes active at decision step 502; the address is detected at step 504 and latched at step 506.
- a read or write cycle is determined by the state of the write enable line 40 .
- the read cycle is preceded by a page (row) detect step 508 to determine if the data is in the cache 14 at decision step 514 . Since the address is not in the cache 14 (a read miss), it must be determined if a write cycle is in progress at decision step 532 .
- When the write cycle is completed at step 534, a new row is loaded in the cache 14 at step 542, the Ready line 32 is asserted at step 516, the column address is acquired at step 518, the data is output at step 520, the Ready line 32 is de-asserted at step 522 after a predetermined delay, and the memory device 10 waits for a CE line 38 transition at step 524 or (if CE line 38 is active) a valid address at step 504. If the CE line 38 transitions, it is handled as previously described.
- the “dirty” bit is checked at decision step 536 to determine if the cache 14 had been written previously. If the cache 14 is “clean”, the cache 14 is loaded at step 542 and the process 500 completes as aforedescribed.
- If the cache 14 is “dirty”, the contents of the cache 14 are written back to the EEPROM/Flash non-volatile memory array 12 at step 538, the “dirty” bit is cleared at step 540, the write cycle completes at step 534, the cache is loaded at step 542, the Ready line 32 is asserted at step 516 and the memory device 10 returns to wait for an active CE line at decision step 502 and a valid address at step 504.
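The EEPROM/Flash read-cycle control just described (process 500) differs from the FRAM case mainly in what a read miss may have to wait for: a slow array write (program) cycle rather than a precharge. The sketch below models only that bookkeeping; flash_dev_t and the helper names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum { ROWS = 1024, COLS = 256 };          /* assumed array geometry */

typedef struct {
    uint8_t array[ROWS][COLS];             /* EEPROM/Flash non-volatile memory array 12 */
    uint8_t row_reg[COLS];                 /* SRAM row cache 14                          */
    int     cached_row;
    bool    dirty;
    bool    write_busy;                    /* slow array program cycle in progress       */
} flash_dev_t;

static void wait_write_done(flash_dev_t *d) { d->write_busy = false; }   /* step 534 */

static uint8_t read_cycle(flash_dev_t *d, int row, int col)
{
    if (d->cached_row != row) {            /* page (row) detect, decision step 514       */
        if (d->write_busy) {               /* decision step 532                          */
            wait_write_done(d);            /* step 534                                   */
        } else if (d->dirty) {             /* decision step 536                          */
            memcpy(d->array[d->cached_row], d->row_reg, COLS); /* write back, step 538   */
            d->dirty = false;              /* step 540                                   */
            /* the resulting program cycle must complete (step 534) before the load      */
        }
        memcpy(d->row_reg, d->array[row], COLS);   /* load the new row, step 542         */
        d->cached_row = row;
        d->dirty = false;
    }
    return d->row_reg[col];                /* in-page reads are SRAM-fast, steps 516-522 */
}
```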
- a write process 600 is shown.
- In a write “hit” mode of operation, the memory device 10 remains in standby until the CE line 38 becomes active at decision step 602.
- the address is detected at step 604 and latched at step 606.
- a read or write cycle is determined by the state of write enable line 40 at decision step 610 , and if it is not active, a read cycle is entered at step 612 .
- the write cycle is preceded by a page detect at step 608 .
- the memory device 10 will wait for a new valid address at step 604 or for the chip enable line 38 to transition to an inactive state. If the chip enable line 38 becomes inactive, the contents of the cache 14 are written back to the EEPROM/Flash non-volatile memory array at step 628 , the “dirty” bit is cleared at step 630 and the process 600 returns to wait for an active CE line 38 .
- In a write “miss” mode of operation, the memory device 10 again waits for an active chip enable line 38 at decision step 602 and a valid address at step 604.
- the write cycle is determined by the state of the write enable line 40 .
- the address is latched at step 606 and compared at step 608 . If the address is not in the cache 14 , it must be determined if a write cycle is in progress at decision step 632 . If the write cycle is in progress, it must complete at step 634 before loading the cache 14 at step 642 , the Ready line 32 is asserted at step 616 and the process 600 proceeds as described above. If a write cycle is not in progress, the “dirty” bit is checked at step 636 .
- If the cache 14 is “clean”, the new row is loaded into the cache 14 at step 642, the Ready line 32 is asserted at step 616 and the process 600 proceeds as previously described. If the “dirty” bit is set at decision step 636, the contents of the cache 14 are written back to the EEPROM/Flash non-volatile memory array at step 638, the write cycle is completed at step 634, the Ready line 32 is asserted at step 616 and the process 600 proceeds as described above.
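The matching write-cycle control (process 600) keeps every write at SRAM speed and defers the slow EEPROM/Flash program to a row miss or to chip-enable deactivation; the model below is a sketch under the same assumptions as the read-cycle example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum { ROWS = 1024, COLS = 256 };          /* assumed array geometry */

typedef struct {
    uint8_t array[ROWS][COLS];             /* EEPROM/Flash non-volatile memory array 12 */
    uint8_t row_reg[COLS];                 /* SRAM row cache 14                          */
    int     cached_row;
    bool    dirty;
    bool    write_busy;                    /* slow array program cycle in progress       */
} flash_dev_t;

static void write_cycle(flash_dev_t *d, int row, int col, uint8_t data)
{
    if (d->cached_row != row) {            /* page detect, step 608                      */
        if (d->write_busy) {               /* decision step 632                          */
            d->write_busy = false;         /* let the program cycle complete, step 634   */
        } else if (d->dirty) {             /* decision step 636                          */
            memcpy(d->array[d->cached_row], d->row_reg, COLS); /* write back, step 638   */
            d->dirty = false;
            /* the resulting program cycle must complete (step 634) before the load      */
        }
        memcpy(d->row_reg, d->array[row], COLS);   /* load the new row, step 642         */
        d->cached_row = row;
        d->dirty = false;
    }
    d->row_reg[col] = data;                /* the write itself only touches the cache    */
    d->dirty = true;                       /* and marks it "dirty"                       */
}

/* CE line 38 going inactive: a dirty row is programmed back to the array
 * (steps 628/630) so the non-volatile copy stays coherent. */
static void chip_deselect(flash_dev_t *d)
{
    if (d->dirty) {
        memcpy(d->array[d->cached_row], d->row_reg, COLS);
        d->dirty = false;
        d->write_busy = true;              /* program cycle runs to completion */
    }
}
```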
- a non-volatile memory array is cached by use of a non-volatile memory-based cache.
- While an asynchronous parallel memory device has been illustrated and described herein, the principles of the present invention are likewise applicable to serial and synchronous memory device architectures as well.
Abstract
Description
- The present invention is related to the subject matter of U.S. patent application Ser. No. 08/319,289 filed Oct. 6, 1994, now U.S. Pat. No. 5,699,317 and 08/460,665 filed Jun. 2, 1995, now U.S. Pat. No. 5,721,862, both assigned to Enhanced Memory Systems, Inc., a subsidiary of Ramtron International Corporation, Colorado Springs, Colo., assignee of the present invention, the disclosures of which are herein specifically incorporated by this reference.
- The present invention relates, in general, to the field of non-volatile integrated circuit (“IC”) memory devices. More particularly, the present invention relates to an integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache integrated monolithically therewith.
- As the performance of computer central processing units (“CPUs”) has increased dramatically in recent years, this performance improvement has far exceeded that of any corresponding increase in the performance of computer main memory. Typically, main memory has been made up of numbers of asynchronous dynamic random access memory (“DRAM”) integrated circuits and it was not until the introduction of faster static random access memory (“SRAM”) cache memory that the performance of systems with DRAM main memory improved. This performance improvement was achieved by making a high speed locally-accessed copy of memory available to the CPU so that even during memory accesses, the CPU would not always need to operate at the slower speeds of the system bus and the main memory DRAM. This method of copying memory is referred to as “caching” a memory system and is a technique made possible by virtue of the fact that many of the CPU's accesses to memory are directed at localized memory address regions. Once such a region is copied from main memory to the cache, the CPU can access the cache through many bus cycles before needing to refresh the cache with a new memory address region. This method of memory copying is advantageous in memory Read cycles which, in contrast to Write cycles, have been shown to constitute 90% of the external accesses of the CPU.
- As mentioned previously, the most popular hardware realization of a cache memory employs a separate high-speed SRAM cache component and a slower but less expensive DRAM component. A proprietary Enhanced DRAM (EDRAM®) integrated circuit memory device, developed by Enhanced Memory Systems, Inc., integrates both of these memory elements on one chip along with on-chip tag maintenance circuitry to further enhance performance of computer main memory over separate SRAM and DRAM components. Access to the chip is provided by a single bus. Details of the EDRAM device are disclosed and claimed in the aforementioned United States Patents.
- DRAM memory devices are designed utilizing a volatile, dynamic memory cell architecture, typically with each cell comprising a single transistor and capacitor. They are “volatile” in the sense that upon powerdown, the memory contents are lost and “dynamic” in the sense that they must be constantly refreshed to maintain the charge in the cell capacitor. The refresh operation is accomplished when the memory contents of a row of cells in the memory array are read by the sense amplifiers and the logic states in the cells that have been read are amplified and written back to the cells. As mentioned previously, DRAM is used primarily for memory reads and writes and is relatively inexpensive to produce in terms of die area. It does, however, provide relatively slow access times.
- On the other hand, SRAM devices are designed utilizing a volatile static memory cell architecture. They are considered to be “static” in that the contents of the memory cells need not be refreshed and the memory contents may be maintained indefinitely as long as power is supplied to the device. The individual memory cells of an SRAM generally comprise a simple, bi-stable transistor-based latch, using four or six transistors, that is either set or reset depending on the state of the data that was written to it. SRAM provides much faster read and write access time than DRAM and, as previously mentioned, is generally used as a memory cache. However, because the individual memory cell size is significantly larger, it is much more expensive to produce in terms of on-chip die area than DRAM and it also generates more heat. Typical devices cost three to four times that of DRAM.
- In contrast to DRAM and SRAM, various types of non-volatile memory devices are also currently available, by means of which data can be retained without continuously applied power. These include, for example, erasable programmable read only memory (“EPROM”) devices, including electrically erasable (“EEPROM”) devices, and Flash memory. While providing non-volatile data storage, their relatively slow access times (and in particular their very slow “write” times) present a significant disadvantage to their use in certain applications.
- In contrast, ferroelectric memory devices, such as the FRAM® family of solid state, random access memory integrated circuits available from Ramtron International Corporation, provide non-volatile data storage through the use of a ferroelectric dielectric material which may be polarized in one direction or another in order to store a binary value. The ferroelectric effect allows for the retention of a stable polarization in the absence of an applied electric field due to the alignment of internal dipoles within the Perovskite crystals in the dielectric material. This alignment may be selectively achieved by application of an electric field which exceeds the coercive field of the material. Conversely, reversal of the applied field reverses the internal dipoles.
- Data stored in a ferroelectric memory cell is “read” by applying an electric field to the cell capacitor. If the field is applied in a direction to switch the internal dipoles, more charge will be moved than if the dipoles are not reversed. As a result, sense amplifiers can measure the charge applied to the cell bit lines and produce either a logic “1” or “0” at the IC output pins. In a conventional two transistor/two capacitor (“2T/2C”) ferroelectric memory cell (one transistor/one capacitor, “1T/1C”, devices have also been described), a pair of data storage elements is utilized, each polarized in opposite directions. To “read” the state of a 2T/2C memory cell, both elements are polarized in the same direction and the sense amps measure the difference between the amount of charge transferred from the cells to a pair of complementary bit lines. In either case, since a “read” to a ferroelectric memory is a destructive operation, the correct data is then restored to the cell during a precharge operation.
- In a simple “write” operation, an electric field is applied to the cell capacitor to polarize it to the desired state. Briefly, the conventional write mechanism for a 2T/2C memory cell includes inverting the dipoles on one cell capacitor and holding the electrode, or plate, to a positive potential greater than the coercive voltage for a nominal 100 nanosecond (“nsec.”) time period. The electrode is then brought back to circuit ground for the other cell capacitor to be written for an additional nominal 100 nsec.
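For orientation only, the nominal figures quoted above imply that a complete 2T/2C write, with the plate held high for one cell capacitor and then returned to ground for the other, occupies roughly

$$ t_{\text{write}} \approx t_{\text{plate high}} + t_{\text{plate low}} \approx 100\ \text{ns} + 100\ \text{ns} = 200\ \text{ns}, $$

which is consistent with the symmetric, relatively fast FRAM write times relied upon by the caching schemes described below.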
- In light of the foregoing, it would be highly advantageous to provide a non-volatile memory device that provides the traditional benefits of non-volatile memory retention in the absence of applied power yet also provides enhanced access times approaching those of other memory technologies when utilized as an on-chip integrated cache in conjunction with a non-volatile memory array.
- Disclosed herein is an integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache integrated monolithically therewith which improves the overall in-page access time and provides faster cycle time for read operations. In a particular embodiment, the cache may be provided as SRAM and the non-volatile memory array provided as ferroelectric random access memory (for example, FRAM®) wherein, on a read, the row is cached and the write back cycle is started, allowing subsequent in-page reads to occur very quickly. If in-page accesses are sufficiently frequent, the precharge of a ferroelectric based memory array may be hidden and writes can occur utilizing write back or write through caching. In alternative embodiments, the non-volatile memory array may comprise EPROM, EEPROM or Flash memory in conjunction with an SRAM cache or a ferroelectric random access memory based cache (for example, FRAM®) which has symmetric read/write times and faster write times than EPROM, EEPROM or Flash memory.
- Particularly disclosed herein is a memory device comprising a non-volatile memory array. The device includes an address bus for receiving row and column address signals for accessing specified locations within the memory array and a data bus for receiving data to be written to a location in the memory array specified by the row and column address signals and for presenting data read from the memory array at a location specified by the row and column address signals. The memory device further comprises a cache associated with the memory array and coupled to the data bus for storing at least a portion of the data to be read from the memory array, the cache having a relatively faster access time than the memory array.
- Further disclosed herein is a non-volatile memory device which includes a non-volatile memory array having associated row and column decoders; an address bus for receiving row and column address signals for application to the row and column decoders respectively; a cache interposed between the column decoder and the non-volatile memory array, the cache having a relatively faster access time than the non-volatile memory array; and a data bus coupled to the cache for receiving data to be written to a location in the non-volatile memory array specified by the row and column decoders and for presenting data read from the memory array at a location specified by the row and column decoders.
- The aforementioned and other features and objects of the present invention and the manner of attaining them will become more apparent and the invention itself will be best understood by reference to the following description of a preferred embodiment taken in conjunction with the accompanying drawings, wherein:
- FIG. 1 is a simplified logic block diagram of a representative parallel version of an integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache in accordance with the present invention;
- FIG. 2 is a logic flow chart of an exemplary memory device read cycle operation in an embodiment of the present invention utilizing a FRAM technology-based non-volatile memory array and an SRAM-based memory for the cache in a “write back” caching scheme;
- FIG. 3 is a corresponding logic flow chart of an exemplary memory device write cycle operation in an embodiment of the present invention corresponding to the embodiment characterized in FIG. 2 utilizing a “write back” caching scheme;
- FIG. 4 is a logic flow chart of an exemplary memory device read cycle operation in an embodiment of the present invention utilizing a FRAM technology-based non-volatile memory array and an SRAM-based memory for the cache in a “write through” caching scheme;
- FIG. 5 is a corresponding logic flow chart of an exemplary memory device write cycle operation in an embodiment of the present invention corresponding to the embodiment characterized in FIG. 4 utilizing a “write through” caching scheme;
- FIG. 6 is a logic flow chart of an exemplary memory device read cycle operation in an embodiment of the present invention utilizing an EEPROM or Flash technology-based non-volatile memory array and an SRAM-based memory for the cache in a “write back” caching scheme; and
- FIG. 7 is a corresponding logic flow chart of an exemplary memory device write cycle operation in an embodiment of the present invention corresponding to the embodiment characterized in FIG. 6 utilizing a “write back” caching scheme.
- With reference now to FIG. 1, a simplified logic block diagram of a representative integrated
circuit memory device 10 incorporating anon-volatile memory array 12 and a relatively faster accesstime memory cache 14 in accordance with the present invention is shown. It should be noted that although aparallel memory device 10 has been illustrated, the principles of the present invention are likewise applicable to those incorporating a serial data bus as well as synchronous devices. - The
exemplary memory device 10 illustrated is accessed by means of anexternal address bus 16 comprising a number of address lines A0 through An inclusive. The address bus is applied to arow address latch 18 as well as a column address latch 20. Therow address latch 18 and column address latch 20 are operative to respectively maintain a row and column address for accesses to thenon-volatile memory array 12. The output of therow address latch 18 is supplied directly to arow decoder 22 associated with thenon-volatile memory array 12 for accessing a specified row therein as well as to a row address compareblock 24. The output of the column address latch 20 is supplied to acorresponding column decoder 26 for accessing a specified column of thenon-volatile memory array 12 as determined by that portion of the address signal supplied to theaddress bus 16 maintained in the column address latch 20. - As shown, the
cache 14 may be interposed between thecolumn decoder 26 and a number ofsense amplifiers 28 bi-directionally coupling the column decoder and thenon-volatile memory array 12. In a specific embodiment of the present invention, thecache 14 may comprise a row of SRAM registers for maintaining a last row read (“LRR”) from thenon-volatile memory array 12, which itself may be constructed utilizing FRAM technology memory cells. In a particular embodiment of the present invention, thecache 14 may be rendered essentially non-volatile through the use of a pair of FRAM memory cells associated with SRAM memory cells as disclosed in U.S. Pat. No. 4,809,225 assigned to Ramtron International Corporation, the disclosure of which is herein incorporated by this reference. - An input/output (“I/O”) decoder (or controller)30 is coupled to an output of the row address compare
block 24 and is bi-directionally coupled to thecache 14. The I/O decoder 30 presents an external “Ready” (or “Not Ready” signal, either of which might be active “high” or active “low”) online 32. Data output from, (i.e. “Q”) or data to be written to, (i.e. “D”) thememory device 10 is handled by means of an input/output (“I/O”)bus 34 which may comprise any number of bi-directional signal lines I/O0 through I/ON. In a serial implementation the “Q” and “D” signals would be separate outputs and inputs respectively. An externally supplied chip select (“CS”)signal online 38, write enable (“WE”) signal online 40 and output enable (“OE”) signal online 42 are also supplied to thememory device 10 through the I/O decoder 30. In a synchronous embodiment of thememory device 10, an external clock signal (“CLK”) may be supplied on anoptional clock line 36. - In the particular embodiment shown, data is read from and written to the
non-volatile memory array 12 through thecache 14. In other implementations of thememory device 10 of the present invention, it may be advantageous to write all data directly to thenon-volatile memory array 12 while reading all data from thecache 14 as it is written to thecache 14 from thenon-volatile memory array 12. - The operation of the
exemplary memory device 10 will be explained in greater detail hereinafter with respect to “write back” and “write through” caching schemes in conjunction with an FRAM memory cell technology-basednon-volatile memory array 12 with an associatedSRAM cache 14 as well as with an EEPROM or Flash technology-basednon-volatile memory array 12 utilizing asimilar cache 14. It should also be noted that, since FRAM memory cell read and write times are symmetric, and the latter time is significantly faster than that of EEPROM or Flash, the principles of the present invention are likewise applicable to an EEPROM, Flash or othernon-volatile memory array 12 utilizing FRAM memory cells for thecache 14. - As stated previously, access and control is accomplished via, an address bus16 (A0-An), the I/
O bus 34, chip select (or chip enable) line 38 (“CS” or “CE”), a write enable line 40 (“WE”), an output enable line 42 (“OE”) and aReady line 32. The row address compareblock 24 andReady line 32, signal the user when data is present in the row register, orcache 14, and a fast access is practicable. This function may be implemented externally (by the user) but inclusion as a portion of thememory device 10 has certain advantages. Alternative access and control schemes (i.e. multiplexed addresses, burst counters, read only, common I/O, etc.) are also within the contemplation of the present invention. - As also previously noted, the use of FRAM memory cells for the
non-volatile memory array 12 supports a much faster write cycle then does either Flash or EEPROM and therefore both “write through” and “write back” caching mechanisms might be utilized depending on the particular application. The following discussion describes the logical control for amemory device 10 for each of these technologies in a single bank implementation, although it should be noted that the principles of the present invention are likewise applicable to multiplenon-volatile memory array 12 banks within a givenmemory device 10. - With reference additionally now to FIG. 2, the operation of a particular implementation of a
memory device 10 in accordance with the present invention is shown for a read operation utilizing an FRAM memory cell basednon-volatile memory array 12 in a write back caching scheme. A write back operation is contemplated wherein all accesses to thememory device 10 are made via theSRAM cache 14. The contents of thecache 14 is only written to thenon-volatile memory array 12 on a row “miss” or if the chip select (or chip enable “CE”)line 38 transitions to an inactive state ( i.e. thememory device 10 is deactivated). The control logic must, therefore, know if thecache 14 has been written for which the embodiment of the present invention shown sets a ““dirty” bit” if a write has occurred. - The
SRAM row cache 14 also allows for reads from thecache 14 while a precharge cycle is completed. (The current implementation of FRAM memory cell based memories inverts the data in the memory cell to determine the state. The data is written back to the cell during the precharge cycle. If the previous cycle had been a read “miss”, the precharge cycle could be in progress. Accesses to thenon-volatile memory array 12 cannot be performed until the cycle is complete. In future implementations of ferroelectric memories this may no longer be necessary and, therefore, these delays could be eliminated.) This will reduce the cycle time in applications where reads are either local or sequential (cache “hit”s). - As shown, the
memory device 10 operation begins with the chip enable (or chip select)line 38 going active (either active “high” or active “low” as a design choice) and depends on the state of the cache 14 (“dirty” or “clean”) and the operation preceding the cycle. - Read Cycle
- The
process 100 begins with thememory device 10 in a standby mode until theCE line 38 becomes active atdecision step 102. The address on theaddress bus 16 is detected atstep 104 and latched atstep 106. A read or write cycle is determined by the state of write enableline 40 atdecision step 142 following a row address comparestep 108. If atdecision step 142 theWE line 40 is active, theprocess 100 proceeds to a write cycle as will be more fully described hereinafter with respect to FIG. 3. - The read cycle is preceded by a page (row) detect operation at
decision step 110 to determine if the data is in the cache 14 (row register). If the address is in the cache (a read “hit”), theReady line 32 is asserted atstep 112, the column address is acquired atstep 114, the appropriate data is output atstep 116, the Ready signal online 32 is de-asserted at step 118 (after a predetermined delay). At this point thememory device 10 will wait for a new valid address or a transition of the chip enableline 38 to an inactive state atdecision step 120. If the chip enableline 38 has gone inactive, the “dirty” bit is checked atdecision step 122. If thecache 14 is “dirty”, it is written back atstep 124 to the non-volatile memory array 12 (if a precharge cycle is in progress, it must complete before the write back begins), and the “dirty” bit is cleared atstep 126. This maintains coherency between the contents of thecache 14 and thenon-volatile memory array 12 should a power down cycle occur before theCE line 38 becomes active again. Thememory device 10 then waits for the chip enableline 38 to become active atdecision step 102. If the chip enableline 38 remains active, thememory device 10 waits for a valid address atstep 104. - In a read “miss”, the
- In a read “miss”, the memory device 10 again remains in standby until CE becomes active at decision step 102. The address is detected at step 104 and latched at step 106. A read or write cycle is determined by the state of write enable line 40 as previously described. The read cycle is preceded by a page (row) detect at decision step 110 to determine if the data is in the cache 14. Since the address is not in the cache 14 (a read “miss”), it must be determined if a precharge cycle is in progress at decision step 128. When the precharge cycle is completed at step 130, a new row is loaded in the cache 14 at step 138, the Ready line 32 is asserted at step 112, the column address is acquired at step 114, the precharge cycle is initiated at step 140 in parallel, the data is output at step 116, the Ready line 32 is de-asserted at step 118 after a specified delay, and the memory device waits for a CE line 38 transition at step 120 or a valid address at step 104. If the CE line 38 transitions, it is handled as previously described with respect to a read “hit”. If a precharge cycle at decision step 128 is not in progress, the “dirty” bit is checked at decision step 132 to see if the cache 14 had been previously written. If the cache 14 is “clean”, the cache 14 is loaded at step 138 and the process 100 proceeds as previously described. Alternatively, if the cache 14 is “dirty”, the contents of the cache 14 are written back to the non-volatile memory array 12 at step 134 (a full cycle including precharge), the “dirty” bit is cleared at step 136, the cache 14 is loaded at step 138, the Ready line 32 is asserted at step 112 and the process 100 proceeds as hereinbefore described.
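- The read cycle just traced may be summarized by the following behavioral sketch of process 100. It models the control flow only, not the circuit; the step numbers in the comments refer to FIG. 2, and the device attributes and helper names (cached_row, load_row, start_precharge and so forth) are assumptions standing in for internal operations of the memory device 10.

```python
def read_cycle_100(dev, row, col):
    """Illustrative control flow for read process 100 (FIG. 2)."""
    if row == dev.cached_row:                    # page detect, step 110: hit
        dev.assert_ready()                       # step 112
        value = dev.cache_data[col]              # steps 114 / 116
        dev.deassert_ready()                     # step 118
        return value

    if dev.precharge_busy:                       # decision step 128: miss
        dev.wait_for_precharge()                 # step 130
    elif dev.dirty:                              # decision step 132
        dev.write_back_cache()                   # step 134 (full cycle)
        dev.dirty = False                        # step 136

    dev.load_row(row)                            # step 138: new row to cache
    dev.assert_ready()                           # step 112
    dev.start_precharge()                        # step 140, in parallel
    value = dev.cache_data[col]                  # steps 114 / 116
    dev.deassert_ready()                         # step 118
    return value
```

Consistent with the flow chart, the “dirty” bit is only examined when no precharge is in progress.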
- Write Cycle
- With reference additionally now to FIG. 3, a write cycle process 200 is shown. In a write “hit” mode of operation, the memory device 10 is in standby until the CE line 38 becomes active at decision step 202. The address is detected at step 204 and latched at step 206. A read or write cycle is determined by the state of the write enable line 40, and if it is not active at decision step 210, a read cycle is entered at step 212. The write cycle is preceded by a page detect operation at a row address compare step 208. If the address is in the cache 14 at decision step 214 (i.e. a cache “hit”), the Ready line 32 is asserted at step 216, the column address is acquired at step 218, the data is written to the cache 14 at step 220, the Ready line 32 is de-asserted at step 222 and the “dirty” bit is set at step 224. At this point, the memory device 10 will wait for a new valid address or for the chip enable line 38 to transition to an inactive state at decision step 226. If the chip enable line becomes inactive, the contents of the cache 14 are written back to the non-volatile memory array 12 at step 228 (if a precharge cycle is in progress it must complete before the write back begins), the “dirty” bit is cleared at step 230 and the process 200 proceeds as previously described with respect to a Read Cycle.
- With respect to a write “miss”, the memory device 10 waits for an active chip enable at decision step 202 and a valid address at step 204. As before, a write cycle is determined by the state of write enable line 40. The address is latched at step 206 and compared at step 208. If the address is not in the cache 14, it must then be determined if a precharge cycle is in progress at decision step 232. If the precharge cycle is in progress, it must be allowed to complete at step 234 before loading the cache 14 at step 242, after which the Ready line 32 is asserted at step 216 and the process flow 200 completes as previously described. If a precharge is not in progress, the “dirty” bit is checked at step 236. If the cache 14 is “clean”, the new row is loaded into the cache 14 at step 242, the Ready line 32 is asserted at step 216 and the process 200 completes as described. If the “dirty” bit is set at decision step 236, the contents of the cache 14 are written back to the non-volatile memory array 12 at step 238 and the process 200 completes as previously described.
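- Write process 200 follows the same pattern, with data written into the cache 14 and the “dirty” bit set in place of a data output. A corresponding sketch, using the same assumed device interface as the read cycle sketch above and with step numbers referring to FIG. 3, is given below.

```python
def write_cycle_200(dev, row, col, value):
    """Illustrative control flow for write process 200 (FIG. 3)."""
    if row == dev.cached_row:                    # compare/detect, step 214: hit
        dev.assert_ready()                       # step 216
        dev.cache_data[col] = value              # steps 218 / 220
        dev.deassert_ready()                     # step 222
        dev.dirty = True                         # step 224
        return

    if dev.precharge_busy:                       # decision step 232: miss
        dev.wait_for_precharge()                 # step 234
    elif dev.dirty:                              # decision step 236
        dev.write_back_cache()                   # step 238
        dev.dirty = False

    dev.load_row(row)                            # step 242
    dev.assert_ready()                           # step 216
    dev.cache_data[col] = value                  # steps 218 / 220
    dev.deassert_ready()                         # step 222
    dev.dirty = True                             # step 224
```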
- Non-volatile memory devices utilizing FRAM memory cells may benefit some applications using a write through mechanism. The writes in this case will go directly to the non-volatile memory array 12, or to the cache 14 and the non-volatile memory array 12 in the event of a cache “hit”. The control in this instance is similar to that of the previously described write back case except that there is no analogous write “hit”. Writes to the non-volatile memory array 12 will always require the access time of the FRAM array.
- Read Cycle
- With reference additionally now to FIG. 4, a read process 300 is shown. In a read “hit”, the memory device 10 is in standby until the CE line 38 becomes active at decision step 302. The address is detected at step 304 and latched at step 310. A read or write cycle is determined by the state of the write enable line 40 at decision step 306, and if it is active, then the process 300 proceeds to a write cycle at step 308. The read cycle is preceded by a page (row) detect at step 312 to see if the data is in the cache 14. If the data is in the cache 14, the Ready line 32 is asserted at step 316, the column address is acquired at step 318, the data is output on I/O bus 34 at step 320 and the Ready line is de-asserted at step 322 (after a predetermined delay). The memory device 10 then waits for an active CE line 38 at decision step 302 and a valid address at step 304.
- In a read “miss” operation, the memory device 10 will remain in standby until the CE line 38 becomes active at decision step 302. The address is detected at step 304 and latched at step 310 as before. A read or write cycle is determined by the state of the write enable line 40. As previously described, the read cycle is preceded by a page (row) detect at decision step 314 to determine if the data is in the cache 14. Since the data is not in the cache in the case of a read “miss”, it must be determined if a precharge cycle is in progress at decision step 324. When the precharge cycle is completed, a new row is loaded in the cache 14 at step 328, the Ready line 32 is asserted at step 316, the column address is acquired at step 318, the precharge cycle is initiated at step 330 in parallel, the data is output at step 320, the Ready line 32 is de-asserted at step 322 (following a predetermined delay) and the memory device 10 again waits for the CE line 38 to become active at decision step 302 (or, if CE remains active, for another valid address at step 304).
- Write Cycle
- With reference additionally now to FIG. 5, a write process 400 for a write through mode of operation is shown. Writes are written to the non-volatile memory array 12 directly and begin with an active chip enable line 38 at decision step 402. The address is detected at step 404 and latched at step 410. If the write enable line 40 is not active at decision step 406, then a read cycle is initiated at step 408. The row address is compared at step 412 to determine if the address is in the cache 14. If the address is in the cache 14 (a row “hit”) at decision step 414, it must then be determined if a precharge cycle is in progress at decision step 416. If there is no precharge operation in progress, the column address is acquired at step 420, the data is written to the cache 14 and the non-volatile memory array 12 simultaneously at step 422, the Ready line 32 is asserted at step 424 after the access time requirement is met, and the Ready line is de-asserted at step 426 after a specified delay. The memory device 10 then waits for an active chip enable at decision step 402 and a valid address at step 404. If the address is not in the cache 14 at decision step 414 (a row “miss”), it is determined if a precharge cycle is in progress at decision step 428 and, if so, it completes at step 430, the column address is acquired at step 432, the data is written to the non-volatile memory array 12 only at step 434 (a write through) and the process 400 continues as previously described.
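- The essential difference of the write through mode is captured in the short sketch below: every write is applied to the non-volatile memory array 12 (and therefore pays the FRAM array access time), and the cache 14 is additionally updated only when the addressed row is already cached. Step numbers refer to FIG. 5; the write_array helper and the other names are assumptions under the same illustrative device interface as above.

```python
def write_through_400(dev, row, col, value):
    """Illustrative control flow for write through process 400 (FIG. 5)."""
    if dev.precharge_busy:                       # decision steps 416 / 428
        dev.wait_for_precharge()                 # the array is needed, so any
                                                 # precharge must complete first
    if row == dev.cached_row:                    # compare steps 412 / 414: hit
        dev.cache_data[col] = value              # step 422: cache and array
        dev.write_array(row, col, value)         #            written together
    else:                                        # row miss
        dev.write_array(row, col, value)         # step 434: array only
    dev.assert_ready()                           # step 424, once access time met
    dev.deassert_ready()                         # step 426
```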
- A memory device 10 with a direct mapped row cache 14 (i.e. an SRAM Row Register) coupled with either an EEPROM or Flash non-volatile memory array 12 is similar to a memory device 10 utilizing a FRAM memory cell memory array 12 as described with respect to FIGS. 2-5 inclusive, except that the write mechanisms and write speeds are very much different.
- The following description in conjunction with the flow charts of the succeeding figures describes the operation of such a memory device 10 using a “write back” caching mechanism.
- Read Cycle
- With reference additionally now to FIG. 6, a read process 500 for a write back caching system utilizing EEPROM, Flash or similar technologies for the non-volatile memory array 12 is shown. The memory device 10 will remain in standby until the CE line 38 becomes active at decision step 502. The address is detected at step 504 and latched at step 506 as before. A read or write cycle is determined by the state of the write enable line 40 at decision step 510 and, if it is active, the process 500 proceeds to a write cycle at step 512. The read cycle is preceded by a page (row) detect at step 508 to see if the data is in the cache 14 (row register). If the address is in the cache 14 (a read “hit”) at decision step 514, the Ready line 32 is asserted at step 516, the column address is acquired at step 518, the appropriate data is output at step 520 and the Ready line 32 is de-asserted at step 522 (after a predetermined delay). At this point, the memory device 10 will wait for a new valid address or a transition of the chip enable line 38 to an inactive state. If the chip enable line 38 has transitioned to an inactive state at decision step 524, the “dirty” bit is checked at decision step 526. If the cache 14 is “dirty”, it is written back at step 528 to the EEPROM/Flash non-volatile memory array 12, and the “dirty” bit is cleared at step 530. This maintains coherency should a power down cycle occur before the CE line 38 becomes active again. The memory device 10 then waits for the chip enable line 38 to become active at decision step 502. If the chip enable line 38 remains active, the memory device 10 waits for a valid address at step 504.
- In a read “miss” operation, the memory device 10 will remain in standby until the CE line 38 becomes active at decision step 502, the address is detected at step 504 and latched at step 506. As before, a read or write cycle is determined by the state of the write enable line 40. The read cycle is preceded by a page (row) detect step 508 to determine if the data is in the cache 14 at decision step 514. Since the address is not in the cache 14 (a read “miss”), it must be determined if a write cycle is in progress at decision step 532. If so, the write cycle is completed at step 534, a new row is loaded in the cache 14 at step 542, the Ready line 32 is asserted at step 516, the column address is acquired at step 518, the data is output at step 520, the Ready line 32 is de-asserted at step 522 after a predetermined delay, and the memory device 10 waits for a CE line 38 transition at step 524 or (if the CE line 38 is active) a valid address at step 504. If the CE line 38 transitions, it is handled as previously described.
- Alternatively, if a write cycle is not in progress at decision step 532, the “dirty” bit is checked at decision step 536 to determine if the cache 14 had been written previously. If the cache 14 is “clean”, the cache 14 is loaded at step 542 and the process 500 completes as aforedescribed. If the cache 14 is “dirty”, the contents of the cache 14 are written back to the EEPROM/Flash non-volatile memory array 12 at step 538, the “dirty” bit is cleared at step 540, the write cycle completes at step 534, the cache 14 is loaded at step 542, the Ready line 32 is asserted at step 516 and the memory device 10 returns to wait for an active CE line at decision step 502 and a valid address at step 504.
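- For the EEPROM/Flash case the controlling constraint is the slow array write rather than the precharge, as the following sketch of the read “miss” handling of process 500 indicates: any write cycle still in progress must finish, and a “dirty” cache must first be written back, before the new row can be loaded. Step numbers refer to FIG. 6; the helper names are assumptions under the same illustrative device interface used above.

```python
def read_miss_500(dev, row, col):
    """Illustrative read-miss handling for process 500 (FIG. 6)."""
    if dev.array_write_busy:                     # decision step 532
        dev.wait_for_array_write()               # step 534: slow write finishes
    elif dev.dirty:                              # decision step 536
        dev.write_back_cache()                   # step 538 (slow array write)
        dev.dirty = False                        # step 540
        dev.wait_for_array_write()               # step 534: let it complete
    dev.load_row(row)                            # step 542
    dev.assert_ready()                           # step 516
    value = dev.cache_data[col]                  # steps 518 / 520
    dev.deassert_ready()                         # step 522
    return value
```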
- Write Cycle
- With reference additionally now to FIG. 7, a write process 600 is shown. In a write “hit” mode of operation, the memory device 10 remains in standby until the CE line 38 becomes active at decision step 602. The address is detected at step 604 and latched at step 606. A read or write cycle is determined by the state of write enable line 40 at decision step 610, and if it is not active, a read cycle is entered at step 612. The write cycle is preceded by a page detect at step 608. If the address is in the cache 14 at decision step 614, the Ready line 32 is asserted at step 616, the column address is acquired at step 618, the data is written to the cache 14 at step 620, the Ready line 32 is de-asserted at step 622, and the “dirty” bit is set at step 624. At this point, the memory device 10 will wait for a new valid address at step 604 or for the chip enable line 38 to transition to an inactive state. If the chip enable line 38 becomes inactive, the contents of the cache 14 are written back to the EEPROM/Flash non-volatile memory array 12 at step 628, the “dirty” bit is cleared at step 630 and the process 600 returns to wait for an active CE line 38.
- In a write “miss” mode of operation, the memory device 10 again waits for an active chip enable line 38 at decision step 602 and a valid address at step 604. As before, the write cycle is determined by the state of the write enable line 40. The address is latched at step 606 and compared at step 608. If the address is not in the cache 14, it must be determined if a write cycle is in progress at decision step 632. If the write cycle is in progress, it must complete at step 634 before loading the cache 14 at step 642, after which the Ready line 32 is asserted at step 616 and the process 600 proceeds as described above. If a write cycle is not in progress, the “dirty” bit is checked at step 636. If the cache 14 is “clean”, the new row is loaded into the cache 14 at step 642, the Ready line 32 is asserted at step 616 and the process 600 proceeds as previously described. If the “dirty” bit is set at decision step 636, the contents of the cache 14 are written back to the EEPROM/Flash non-volatile memory array 12 at step 638, the write cycle is completed at step 634, the Ready line 32 is asserted at step 616 and the process 600 proceeds as described above.
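- For completeness, a corresponding sketch of write process 600 is given below: a write “hit” simply updates the cache 14 and sets the “dirty” bit, while a write “miss” must first resolve any in-progress array write or “dirty” cache before the new row is loaded and written. Step numbers refer to FIG. 7; as with the other sketches, the names are illustrative assumptions only.

```python
def write_cycle_600(dev, row, col, value):
    """Illustrative control flow for write process 600 (FIG. 7)."""
    if row != dev.cached_row:                    # compare steps 608 / 614: miss
        if dev.array_write_busy:                 # decision step 632
            dev.wait_for_array_write()           # step 634
        elif dev.dirty:                          # decision step 636
            dev.write_back_cache()               # step 638
            dev.wait_for_array_write()           # step 634: let it complete
        dev.load_row(row)                        # step 642
    dev.assert_ready()                           # step 616
    dev.cache_data[col] = value                  # steps 618 / 620
    dev.deassert_ready()                         # step 622
    dev.dirty = True                             # step 624
```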
- While there have been described above the principles of the present invention in conjunction with specific non-volatile memory array technologies and an SRAM-based cache, it is to be clearly understood that the foregoing description is made only by way of example and not as a limitation to the scope of the invention. For example, other non-volatile memory technologies may be used to construct the memory array and, in fact, any relatively faster access time memory technology may then be utilized in fabricating the cache. A specific example would be an EEPROM or Flash-based memory array wherein the cache is constructed of FRAM-based memory (requiring a non-volatile “dirty” bit) since it exhibits a faster (and symmetric) read and write time than that of the memory array itself. In this example, a non-volatile memory array is cached by use of a non-volatile memory-based cache. Moreover, although an asynchronous parallel memory device has been illustrated and described herein, the principles of the present invention are likewise applicable to serial and synchronous memory device architectures as well.
- Particularly, it is recognized that the teachings of the foregoing disclosure will suggest other modifications to those persons skilled in the relevant art. Such modifications may involve other features which are already known per se and which may be used instead of or in addition to features already described herein. Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure herein also includes any novel feature or any novel combination of features disclosed either explicitly or implicitly or any generalization or modification thereof which would be apparent to persons skilled in the relevant art, whether or not such relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as confronted by the present invention. The applicants hereby reserve the right to formulate new claims to such features and/or combinations of such features during the prosecution of the present application or of any further application derived therefrom.
Claims (30)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/864,458 US20010025333A1 (en) | 1998-02-10 | 2001-05-24 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/021,132 US6263398B1 (en) | 1998-02-10 | 1998-02-10 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
US09/864,458 US20010025333A1 (en) | 1998-02-10 | 2001-05-24 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/021,132 Continuation US6263398B1 (en) | 1998-02-10 | 1998-02-10 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
Publications (1)
Publication Number | Publication Date |
---|---|
US20010025333A1 true US20010025333A1 (en) | 2001-09-27 |
Family
ID=21802518
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/021,132 Expired - Lifetime US6263398B1 (en) | 1998-02-10 | 1998-02-10 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
US09/864,458 Abandoned US20010025333A1 (en) | 1998-02-10 | 2001-05-24 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/021,132 Expired - Lifetime US6263398B1 (en) | 1998-02-10 | 1998-02-10 | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache |
Country Status (1)
Country | Link |
---|---|
US (2) | US6263398B1 (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004090725A2 (en) * | 2003-04-14 | 2004-10-21 | Nec Electronics (Europe) Gmbh | Secure storage system with flash memories and cache memory |
US20050015557A1 (en) * | 2002-12-27 | 2005-01-20 | Chih-Hung Wang | Nonvolatile memory unit with specific cache |
US20050050261A1 (en) * | 2003-08-27 | 2005-03-03 | Thomas Roehr | High density flash memory with high speed cache data interface |
US20050251617A1 (en) * | 2004-05-07 | 2005-11-10 | Sinclair Alan W | Hybrid non-volatile memory system |
US20060136656A1 (en) * | 2004-12-21 | 2006-06-22 | Conley Kevin M | System and method for use of on-chip non-volatile memory write cache |
US20070016719A1 (en) * | 2004-04-09 | 2007-01-18 | Nobuhiro Ono | Memory device including nonvolatile memory and memory controller |
US7173863B2 (en) | 2004-03-08 | 2007-02-06 | Sandisk Corporation | Flash controller cache architecture |
US20070189072A1 (en) * | 2006-02-14 | 2007-08-16 | Matsushita Electric Industrial Co., Ltd. | Semiconductor memory device |
CN100378656C (en) * | 2002-04-30 | 2008-04-02 | Nxp股份有限公司 | Integrated circuit with a non-volatile memory and method for fetching data from said memory |
WO2009117251A1 (en) * | 2008-03-19 | 2009-09-24 | Rambus Inc. | Optimizing storage of common patterns in flash memory |
US20100293337A1 (en) * | 2009-05-13 | 2010-11-18 | Seagate Technology Llc | Systems and methods of tiered caching |
US20120036310A1 (en) * | 2010-08-06 | 2012-02-09 | Renesas Electronics Corporation | Data processing device |
US8458404B1 (en) * | 2008-08-14 | 2013-06-04 | Marvell International Ltd. | Programmable cache access protocol to optimize power consumption and performance |
JP2015069520A (en) * | 2013-09-30 | 2015-04-13 | ルネサスエレクトロニクス株式会社 | Data processing device, microcontroller, and semiconductor device |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6622243B1 (en) * | 1999-11-19 | 2003-09-16 | Intel Corporation | Method for securing CMOS configuration information in non-volatile memory |
US6711043B2 (en) | 2000-08-14 | 2004-03-23 | Matrix Semiconductor, Inc. | Three-dimensional memory cache system |
US6545891B1 (en) | 2000-08-14 | 2003-04-08 | Matrix Semiconductor, Inc. | Modular memory device |
US6765813B2 (en) | 2000-08-14 | 2004-07-20 | Matrix Semiconductor, Inc. | Integrated systems using vertically-stacked three-dimensional memory cells |
US6798599B2 (en) | 2001-01-29 | 2004-09-28 | Seagate Technology Llc | Disc storage system employing non-volatile magnetoresistive random access memory |
US20020138702A1 (en) * | 2001-03-26 | 2002-09-26 | Moshe Gefen | Using non-executable memory as executable memory |
US6836816B2 (en) * | 2001-03-28 | 2004-12-28 | Intel Corporation | Flash memory low-latency cache |
KR100389867B1 (en) | 2001-06-04 | 2003-07-04 | 삼성전자주식회사 | Flash memory management method |
JP3770171B2 (en) * | 2002-02-01 | 2006-04-26 | ソニー株式会社 | Memory device and memory system using the same |
US6768661B2 (en) * | 2002-06-27 | 2004-07-27 | Matrix Semiconductor, Inc. | Multiple-mode memory and method for forming same |
US6707702B1 (en) | 2002-11-13 | 2004-03-16 | Texas Instruments Incorporated | Volatile memory with non-volatile ferroelectric capacitors |
US6944042B2 (en) * | 2002-12-31 | 2005-09-13 | Texas Instruments Incorporated | Multiple bit memory cells and methods for reading non-volatile data |
US7107414B2 (en) * | 2003-01-15 | 2006-09-12 | Avago Technologies Fiber Ip (Singapore) Ptd. Ltd. | EEPROM emulation in a transceiver |
JP4241175B2 (en) * | 2003-05-09 | 2009-03-18 | 株式会社日立製作所 | Semiconductor device |
US20050013181A1 (en) * | 2003-07-17 | 2005-01-20 | Adelmann Todd C. | Assisted memory device with integrated cache |
US7315951B2 (en) * | 2003-10-27 | 2008-01-01 | Nortel Networks Corporation | High speed non-volatile electronic memory configuration |
US6947310B1 (en) | 2004-05-13 | 2005-09-20 | Texas Instruments Incorporated | Ferroelectric latch |
US7877566B2 (en) * | 2005-01-25 | 2011-01-25 | Atmel Corporation | Simultaneous pipelined read with multiple level cache for improved system performance using flash technology |
JP4961693B2 (en) * | 2005-07-29 | 2012-06-27 | ソニー株式会社 | Computer system |
US8103822B2 (en) * | 2009-04-26 | 2012-01-24 | Sandisk Il Ltd. | Method and apparatus for implementing a caching policy for non-volatile memory |
US8180981B2 (en) | 2009-05-15 | 2012-05-15 | Oracle America, Inc. | Cache coherent support for flash in a memory hierarchy |
US8719957B2 (en) * | 2011-04-29 | 2014-05-06 | Altera Corporation | Systems and methods for detecting and mitigating programmable logic device tampering |
US20130173864A1 (en) * | 2012-01-04 | 2013-07-04 | Elpida Memory, Inc. | Semiconductor device including row cache register |
US10373665B2 (en) * | 2016-03-10 | 2019-08-06 | Micron Technology, Inc. | Parallel access techniques within memory sections through section independence |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4809225A (en) | 1987-07-02 | 1989-02-28 | Ramtron Corporation | Memory cell with volatile and non-volatile portions having ferroelectric capacitors |
JPH02144641A (en) * | 1988-11-25 | 1990-06-04 | Nec Corp | Microcomputer |
EP0552667B1 (en) * | 1992-01-22 | 1999-04-21 | Enhanced Memory Systems, Inc. | Enhanced dram with embedded registers |
US5737748A (en) * | 1995-03-15 | 1998-04-07 | Texas Instruments Incorporated | Microprocessor unit having a first level write-through cache memory and a smaller second-level write-back cache memory |
US5682344A (en) | 1995-09-11 | 1997-10-28 | Micron Technology, Inc. | Destructive read protection using address blocking technique |
US5860113A (en) * | 1996-06-03 | 1999-01-12 | Opti Inc. | System for using a dirty bit with a cache memory |
WO1998013762A1 (en) * | 1996-09-26 | 1998-04-02 | Philips Electronics N.V. | Processing system and method for reading and restoring information in a ram configuration |
US5802583A (en) * | 1996-10-30 | 1998-09-01 | Ramtron International Corporation | Sysyem and method providing selective write protection for individual blocks of memory in a non-volatile memory device |
- 1998-02-10: US US09/021,132 patent/US6263398B1/en not_active Expired - Lifetime
- 2001-05-24: US US09/864,458 patent/US20010025333A1/en not_active Abandoned
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100378656C (en) * | 2002-04-30 | 2008-04-02 | Nxp股份有限公司 | Integrated circuit with a non-volatile memory and method for fetching data from said memory |
US20050015557A1 (en) * | 2002-12-27 | 2005-01-20 | Chih-Hung Wang | Nonvolatile memory unit with specific cache |
WO2004090725A3 (en) * | 2003-04-14 | 2005-09-15 | Nec Electronics Europ Gmbh | Secure storage system with flash memories and cache memory |
WO2004090725A2 (en) * | 2003-04-14 | 2004-10-21 | Nec Electronics (Europe) Gmbh | Secure storage system with flash memories and cache memory |
WO2005022550A1 (en) * | 2003-08-27 | 2005-03-10 | Infineon Technologies Ag | High density flash memory with high speed cache data interface |
US20050050261A1 (en) * | 2003-08-27 | 2005-03-03 | Thomas Roehr | High density flash memory with high speed cache data interface |
US20080250202A1 (en) * | 2004-03-08 | 2008-10-09 | Sandisk Corporation | Flash controller cache architecture |
US9678877B2 (en) | 2004-03-08 | 2017-06-13 | Sandisk Technologies Llc | Flash controller cache architecture |
US7173863B2 (en) | 2004-03-08 | 2007-02-06 | Sandisk Corporation | Flash controller cache architecture |
US20070143545A1 (en) * | 2004-03-08 | 2007-06-21 | Conley Kevin M | Flash Controller Cache Architecture |
US7408834B2 (en) | 2004-03-08 | 2008-08-05 | Sandisck Corporation Llp | Flash controller cache architecture |
US20070016719A1 (en) * | 2004-04-09 | 2007-01-18 | Nobuhiro Ono | Memory device including nonvolatile memory and memory controller |
US20050251617A1 (en) * | 2004-05-07 | 2005-11-10 | Sinclair Alan W | Hybrid non-volatile memory system |
US20100023681A1 (en) * | 2004-05-07 | 2010-01-28 | Alan Welsh Sinclair | Hybrid Non-Volatile Memory System |
US20060136656A1 (en) * | 2004-12-21 | 2006-06-22 | Conley Kevin M | System and method for use of on-chip non-volatile memory write cache |
US7882299B2 (en) * | 2004-12-21 | 2011-02-01 | Sandisk Corporation | System and method for use of on-chip non-volatile memory write cache |
KR101040961B1 (en) | 2004-12-21 | 2011-06-16 | 쌘디스크 코포레이션 | System and method for use of on-chip non-volatile memory write cache |
WO2006068916A1 (en) * | 2004-12-21 | 2006-06-29 | Sandisk Corporation | System and method for use of on-chip non-volatile memory write cache |
US20070189072A1 (en) * | 2006-02-14 | 2007-08-16 | Matsushita Electric Industrial Co., Ltd. | Semiconductor memory device |
WO2009117251A1 (en) * | 2008-03-19 | 2009-09-24 | Rambus Inc. | Optimizing storage of common patterns in flash memory |
US20110202709A1 (en) * | 2008-03-19 | 2011-08-18 | Rambus Inc. | Optimizing storage of common patterns in flash memory |
US8458404B1 (en) * | 2008-08-14 | 2013-06-04 | Marvell International Ltd. | Programmable cache access protocol to optimize power consumption and performance |
US8769204B1 (en) | 2008-08-14 | 2014-07-01 | Marvell International Ltd. | Programmable cache access protocol to optimize power consumption and performance |
US20100293337A1 (en) * | 2009-05-13 | 2010-11-18 | Seagate Technology Llc | Systems and methods of tiered caching |
US8327076B2 (en) * | 2009-05-13 | 2012-12-04 | Seagate Technology Llc | Systems and methods of tiered caching |
US20120036310A1 (en) * | 2010-08-06 | 2012-02-09 | Renesas Electronics Corporation | Data processing device |
JP2015069520A (en) * | 2013-09-30 | 2015-04-13 | ルネサスエレクトロニクス株式会社 | Data processing device, microcontroller, and semiconductor device |
Also Published As
Publication number | Publication date |
---|---|
US6263398B1 (en) | 2001-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6263398B1 (en) | Integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache | |
US11024367B2 (en) | Memory with on-die data transfer | |
US6665209B2 (en) | Semiconductor memory apparatus, semiconductor apparatus, data processing apparatus and computer system | |
US5025421A (en) | Single port dual RAM | |
US6484246B2 (en) | High-speed random access semiconductor memory device | |
JP3304413B2 (en) | Semiconductor storage device | |
US6134180A (en) | Synchronous burst semiconductor memory device | |
US6546461B1 (en) | Multi-port cache memory devices and FIFO memory devices having multi-port cache memory devices therein | |
JP3280704B2 (en) | Semiconductor storage device | |
CA2313954A1 (en) | High speed dram architecture with uniform latency | |
US20080031067A1 (en) | Block erase for volatile memory | |
US20140029326A1 (en) | Ferroelectric random access memory with a non-destructive read | |
KR100977339B1 (en) | Semiconductor device | |
US7506100B2 (en) | Static random access memory (SRAM) compatible, high availability memory array and method employing synchronous dynamic random access memory (DRAM) in conjunction with a data cache and separate read and write registers and tag blocks | |
US20070016748A1 (en) | Method for self-timed data ordering for multi-data rate memories | |
US6337821B1 (en) | Dynamic random access memory having continuous data line equalization except at address translation during data reading | |
US7093047B2 (en) | Integrated circuit memory devices having clock signal arbitration circuits therein and methods of performing clock signal arbitration | |
US6246603B1 (en) | Circuit and method for substantially preventing imprint effects in a ferroelectric memory device | |
US20060190678A1 (en) | Static random access memory (SRAM) compatible, high availability memory array and method employing synchronous dynamic random access memory (DRAM) in conjunction with a single DRAM cache and tag | |
JP5189887B2 (en) | Ferroelectric memory device and operation method thereof | |
US7146454B1 (en) | Hiding refresh in 1T-SRAM architecture | |
US5802002A (en) | Cache memory device of DRAM configuration without refresh function | |
US6456519B1 (en) | Circuit and method for asynchronously accessing a ferroelectric memory device | |
CN107844430B (en) | Memory system and processor system | |
JPH02227897A (en) | Semiconductor memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
| | AS | Assignment | Owner name: CYPRESS SEMICONDUCTOR CORPORATION, CALIFORNIA. Free format text: PARTIAL RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:039708/0001. Effective date: 20160811. Owner name: SPANSION LLC, CALIFORNIA. Free format text: PARTIAL RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:039708/0001. Effective date: 20160811 |
| | AS | Assignment | Owner name: MONTEREY RESEARCH, LLC, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYPRESS SEMICONDUCTOR CORPORATION;REEL/FRAME:040911/0238. Effective date: 20160811 |
Owner name: MONTEREY RESEARCH, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYPRESS SEMICONDUCTOR CORPORATION;REEL/FRAME:040911/0238 Effective date: 20160811 |