WO2008042605A1 - Reverse reading in non-volatile memory with compensation for coupling - Google Patents

Reverse reading in non-volatile memory with compensation for coupling

Info

Publication number
WO2008042605A1
Authority
WO
WIPO (PCT)
Prior art keywords
word line
data
storage elements
adjacent
reading
Prior art date
Application number
PCT/US2007/078950
Other languages
French (fr)
Inventor
Nima Mokhlesi
Original Assignee
Sandisk Corporation
Priority date
Filing date
Publication date
Priority claimed from US 11/537,556 (US7447076B2)
Priority claimed from US 11/537,548 (US7684247B2)
Application filed by Sandisk Corporation
Publication of WO2008042605A1

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/56 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C 11/5621 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency using charge storage in a floating gate
    • G11C 11/5628 Programming or writing circuits; Data input circuits
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/56 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C 11/5621 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency using charge storage in a floating gate
    • G11C 11/5642 Sensing or reading circuits; Data output circuits
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/04 Erasable programmable read-only memories electrically programmable using variable threshold transistors, e.g. FAMOS
    • G11C 16/0483 Erasable programmable read-only memories electrically programmable using variable threshold transistors, e.g. FAMOS comprising cells having several storage transistors connected in series
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/34 Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C 16/3418 Disturbance prevention or evaluation; Refreshing of disturbed memory data

Definitions

  • Embodiments of the present disclosure are directed to non-volatile memory technology.
  • Non-volatile semiconductor memory devices have become more popular for use in various electronic devices.
  • Non-volatile semiconductor memory is used in cellular telephones, digital cameras, personal digital assistants, mobile computing devices, non-mobile computing devices and other devices.
  • Electrically Erasable Programmable Read Only Memory (EEPROM), including flash EEPROM, and Electronically Programmable Read Only Memory (EPROM) are among the most popular non-volatile semiconductor memories.
  • FIG. 1 is a top view showing one NAND string.
  • Figure 2 is an equivalent circuit thereof.
  • The NAND string depicted in Figures 1 and 2 includes four transistors 10, 12, 14 and 16 in series between a first select gate 20 and a second select gate 22.
  • Select gate 20 connects the NAND string to bit line terminal 26.
  • Select gate 22 connects the NAND string to source line terminal 28.
  • Select gate 20 is controlled by applying appropriate voltages to control gate 20CG via selection line SGD.
  • Select gate 22 is controlled by applying the appropriate voltages to control gate 22CG via selection line SGS.
  • Each of transistors 10, 12, 14 and 16 includes a control gate and a floating gate, forming the gate elements of a memory cell.
  • Transistor 10 includes control gate 10CG and floating gate 10FG.
  • Transistor 12 includes control gate 12CG and a floating gate 12FG.
  • Transistor 14 includes control gate 14CG and floating gate 14FG.
  • Transistor 16 includes a control gate 16CG and a floating gate 16FG.
  • Control gate 10CG is connected to word line WL3,
  • control gate 12CG is connected to word line WL2,
  • control gate 14CG is connected to word line WL1, and
  • control gate 16CG is connected to word line WL0.
  • Although Figures 1 and 2 show four memory cells in the NAND string, the use of four transistors is provided only as an example.
  • A NAND string can have fewer than four memory cells or more than four memory cells.
  • some NAND strings will include eight memory cells, 16 memory cells, 32 memory cells, etc.
  • the discussion herein is not limited to any particular number of memory cells in a NAND string.
  • a typical architecture for a flash memory system using a NAND structure will include several NAND strings.
  • Relevant examples of NAND-type flash memories and their operation are provided in the following U.S. Patents/Patent Applications, all of which are incorporated herein by reference in their entirety: U.S. Pat. No. 5,570,315; U.S. Pat. No.
  • Shifts in the apparent charge stored on a floating gate can occur because of coupling of an electric field based on the charge stored in neighboring floating gates.
  • This floating gate to floating gate coupling phenomena is described in U.S. Patent No. 5,867,429, which is incorporated herein by reference in its entirety.
  • The floating gate to floating gate coupling phenomenon occurs most pronouncedly, although not exclusively, between sets of adjacent memory cells that have been programmed at different times. For example, a first memory cell can be programmed to add a level of charge to its floating gate that corresponds to one set of data. Subsequently, one or more adjacent memory cells are programmed to add levels of charge to their floating gates that correspond to a second set of data.
  • the charge level read from the first memory cell may appear to be different than when it was programmed because of the effect of the charge on the adjacent memory cell(s) being coupled to the first memory cell.
  • the coupling from adjacent memory cells can shift the apparent charge level being read from a selected memory cell by a sufficient amount to lead to an erroneous reading of the stored data.
  • Shifts in the apparent charge stored by a charge storage region such as a floating gate in a non-volatile memory cell can occur because of electrical field coupling based on charge stored in adjacent (or other) charge storage regions. Although not exclusively, the effects are most pronounced in situations where adjacent memory cells are programmed after a selected memory cell.
  • one or more compensations are applied when reading storage elements of a selected word line based on the charge stored by storage elements of other word lines. Efficient compensation techniques are provided by reverse reading blocks (or portions thereof) of memory cells. By reading in the opposite direction of programming, the information needed to apply (or select the results of) an appropriate compensation when reading a selected cell is determined during the actual read operation for the adjacent word line rather than dedicating a read operation to determine the information.
  • One embodiment includes programming non-volatile storage elements coupled to a plurality of word lines beginning with a first word line adjacent a first set of select gates and ending with a last word line adjacent a second set of select gates.
  • Programming includes changing a threshold voltage of selected ones of the storage elements in accordance with a target memory state.
  • Reading the non-volatile storage elements coupled to the plurality of word lines begins with the last word line and ends with the first word line. Reading the storage elements includes, for each word line except the last word line, applying one or more compensations based on reading a word line adjacent to each word line in a direction of the second set of select gates.
  • Another embodiment includes receiving a request for data stored in a set of non-volatile storage elements coupled to a plurality of word lines.
  • the data is programmed to the set of non-volatile storage elements beginning with storage elements coupled to a first word line and ending with storage elements coupled to a last word line.
  • the first word line is adjacent to a second word line of the plurality.
  • the set of non-volatile storage elements is read in a sequence beginning with storage elements coupled to the last word line and ending with storage elements coupled to the first word line, wherein reading storage elements coupled to the first word line includes applying one or more compensations based on charge stored in storage elements of the second word line.
  • Reading further includes buffering data from storage elements of the second word line prior to reading the storage elements of said first word line, buffering data from storage elements of the first word line after buffering data from storage elements of the second word line, and maintaining the buffered data from storage elements of the second word line after buffering data from storage elements of the first word line.
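  • As a rough illustration of the reverse-read ordering and buffering described above, the following sketch (not part of the specification; read_raw and read_with_compensation are placeholder names for the sensing operations) shows how reading from the last-programmed word line back toward the first lets each word line's data double as the neighbor information for the next word line read:

```python
# Hypothetical sketch of reverse reading with neighbor-based compensation.
# read_raw(wl) and read_with_compensation(wl, neighbor_data) stand in for the
# sensing operations described in the text; they are not real device APIs.

def reverse_read_block(num_word_lines, read_raw, read_with_compensation):
    """Read word lines in the opposite order from programming (WL0 was programmed
    first, so reading runs from the last word line back to WL0)."""
    pages = {}
    # The last-programmed word line has no later-programmed neighbor, so it is
    # read without compensation.
    neighbor_data = read_raw(num_word_lines - 1)
    pages[num_word_lines - 1] = neighbor_data
    # Every other word line WLn is read using the data already buffered for
    # WLn+1, so no separate read is dedicated to gathering neighbor information.
    for wl in range(num_word_lines - 2, -1, -1):
        pages[wl] = read_with_compensation(wl, neighbor_data)
        neighbor_data = pages[wl]   # this page is the neighbor data for the next read
    return pages
```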
  • Figure 1 is a top view of a NAND string.
  • Figure 2 is an equivalent circuit diagram of the NAND string of Figure 1.
  • Figure 3 is a block diagram of an array of NAND flash memory cells.
  • Figure 4 is a block diagram of a non-volatile memory system.
  • Figure 5 is a block diagram of one embodiment of a sense block.
  • Figure 6 depicts an exemplary set of threshold voltage distributions and a full sequence programming process.
  • Figure 7 depicts an exemplary set of threshold voltage distributions and an upper page/lower page programming process.
  • Figures 8A-8C depict an exemplary set of threshold voltage distributions and a two-pass programming process.
  • Figure 9 is a timing diagram that explains the behavior of certain signals during read/verify operations.
  • Figure 10 is a flow chart of one embodiment for reading non-volatile memory.
  • Figure 11 is a flow chart of one embodiment for performing a read operation for non-volatile memory.
  • Figure 12 is a block diagram illustrating capacitive coupling between two neighboring memory cells.
  • Figure 13 is an exemplary set of threshold voltage distributions illustrating the effects of floating gate coupling.
  • Figure 14 is an exemplary set of threshold voltage distributions illustrating a technique for compensating for floating gate coupling.
  • Figure 15 is a flowchart for compensating for floating gate coupling using the technique of Figure 14.
  • Figure 16 is a flowchart for compensating for floating gate coupling using another technique.
  • Figure 17 is a flowchart illustrating an order of reading a set of memory cells using compensation according to the prior art.
  • Figure 18 is a table illustrating reading of a set of memory cells using compensation according to the prior art.
  • Figure 19 is a table illustrating reading of a set of memory cells using compensation according to one embodiment.
  • Figure 20 is a flowchart for reading a set of memory cells using compensation in accordance with one embodiment.
  • Figure 21 is a flowchart for reporting data to a host in accordance with one embodiment.
  • Figure 22 is a table illustrating reading of a set of memory cells with compensation using read blocks in accordance with one embodiment.
  • Figure 23 is a flowchart for reading a set of memory cells using read blocks in accordance with one embodiment.
  • Figure 24 is a flowchart for reading a set of memory cells using threshold requested data sizes.
  • Memory cells can be used to store data represented in analog or digital form through manipulation of the cell's threshold voltage.
  • the range of possible threshold voltages of a memory cell can be divided into ranges which represent distinct memory states. For example, two ranges of threshold voltages can be used to establish two memory states that are assigned logical data 1 and 0.
  • At least one reference threshold voltage level is generally established so as to partition the threshold voltage memory window of the memory cell into the two ranges.
  • Predetermined, fixed voltages (e.g., read reference voltages) are applied to the cell during a read operation. If the current read is higher than the reference current level, the cell is determined to be "on" and in one logical state. If the current is less than the reference current level, the cell is determined to be "off" and in the other logical state.
  • The threshold voltage is negative after the memory cell is erased, and is defined as logic 1.
  • The threshold voltage is positive after a program operation, and is defined as logic 0.
  • When the threshold voltage is negative and a read is attempted by applying 0V to the control gate, the memory cell will turn on to indicate that logic 1 is being stored.
  • When the threshold voltage is positive and a read operation is attempted by applying 0V to the control gate, the memory cell will not turn on, which indicates that logic 0 is being stored.
  • a memory cell can also store multiple bits of digital data by utilizing more than two ranges of threshold voltages to represent distinct memory states.
  • the threshold voltage window can be divided into the number of desired memory states and multiple voltage breakpoint levels used to resolve the individual states. For example, if four states are used, there will be four threshold voltage ranges representing four distinct memory states which are assigned the data values 11, 10, 01, and 00.
  • the specific relationship between the data programmed into the memory cell and the threshold voltage ranges of the cell depends upon the data encoding scheme adopted for the memory cells. For example, U.S. Patent No. 6,222,762 and U.S. Patent Application No. 10/461,244, "Tracking Cells For A Memory System," filed on June 13, 2003, both of which are incorporated herein by reference in their entirety, describe various data encoding schemes for multi-state flash memory cells.
  • the conduction current of a memory cell is measured by the rate it discharges a dedicated capacitor in the sense amplifier.
  • the conduction current of the selected memory cell allows (or fails to allow) the NAND string that included the memory cell to discharge the bit line. The charge on the bit line is measured after a period of time to see whether it has been discharged or not.
  • Another type of memory cell useful in flash EEPROM systems utilizes a non-conductive dielectric material in place of a conductive floating gate to store charge in a non-volatile manner.
  • Such a cell is described in an article by Chan et al., "A True Single-Transistor Oxide-Nitride-Oxide EEPROM Device," IEEE Electron Device Letters, Vol. EDL-8, No. 3, March 1987, pp. 93-95.
  • a triple layer dielectric formed of silicon oxide, silicon nitride and silicon oxide (“ONO”) is sandwiched between a conductive control gate and a surface of a semi-conductive substrate above the memory cell channel.
  • the cell is programmed by injecting electrons from the cell channel into the nitride, where they are trapped and stored in a limited region. This stored charge then changes the threshold voltage of a portion of the channel of the cell in a manner that is detectable.
  • the cell is erased by injecting hot holes into the nitride. See also Nozaki et al., "A 1-Mb EEPROM with MONOS Memory Cell for Semiconductor Disk Application," IEEE Journal of Solid-State Circuits, Vol. 26, No. 4, April 1991, pp. 497-501, which describes a similar cell in a split-gate configuration where a doped polysilicon gate extends over a portion of the memory cell channel to form a separate select transistor.
  • Figure 3 illustrates an exemplary array 100 of NAND strings 50, such as those shown in Figures 1-2.
  • a bit line 27 is coupled to a drain terminal 26 of the bit line select gate for the NAND string 50.
  • a source line 29 may connect all the source terminals 28 of the source line select gates of the NAND strings.
  • the array 100 of memory cells is divided into a large number of blocks of memory cells.
  • the block is the unit of erase and may be referred to as an erase block or physical block.
  • Each block can contain the minimum number of memory cells that are erased together.
  • A block such as block 30 includes all the cells connected to a common set of word lines WL0-WLi.
  • Each block is typically divided into a number of pages.
  • a page is typically a minimum unit of programming or reading, although more than one page may be programmed or read in a single operation.
  • the individual pages may be divided into segments and the segments may contain the fewest number of cells that are written at one time as a basic programming operation.
  • One or more pages of data are typically stored in one row of memory cells.
  • a page can store one or more sectors of data, the size of which is generally defined by a host system.
  • a sector includes user data and overhead data.
  • Overhead data typically includes an Error Correction Code (ECC) that has been calculated from the user data of the sector.
  • a portion of the controller calculates the ECC when data is being programmed into the array, and also checks it when data is being read from the array.
  • the ECCs and/or other overhead data are stored in different pages, or even different blocks, than those of the user data to which they pertain.
  • a sector of user data is typically 512 bytes, corresponding to the size of a sector commonly used in magnetic disk drives. Overhead data is typically an additional 16-20 bytes.
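  • As a rough sketch only (the 512-byte and 16-byte sizes follow the text, but the CRC used here is merely a stand-in for a real error correction code), a sector can be modeled as user data plus an overhead area:

```python
# Illustrative only: a 512-byte user-data sector plus a small overhead area that
# carries a code computed over the user data. A CRC cannot correct errors; a real
# ECC would be used by the controller.
import zlib

SECTOR_DATA_BYTES = 512      # typical user-data size noted in the text
OVERHEAD_BYTES = 16          # typical overhead size (16-20 bytes per the text)

def build_sector(user_data: bytes) -> bytes:
    assert len(user_data) == SECTOR_DATA_BYTES
    ecc = zlib.crc32(user_data).to_bytes(4, "little")   # stand-in for a real ECC
    overhead = ecc.ljust(OVERHEAD_BYTES, b"\x00")       # pad the rest of the overhead
    return user_data + overhead

def check_sector(sector: bytes) -> bool:
    user_data, overhead = sector[:SECTOR_DATA_BYTES], sector[SECTOR_DATA_BYTES:]
    return zlib.crc32(user_data).to_bytes(4, "little") == overhead[:4]
```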
  • a large number of pages form a block, anywhere from 8 pages, for example, up to 32, 64 or more pages.
  • a row of NAND strings comprises a block.
  • Memory cells are erased in one embodiment by raising the p-well to an erase voltage (e.g., 20 volts) for a sufficient period of time and grounding the word lines of a selected block while the source and bit lines are floating. A strong electric field is thus applied to the tunnel oxide layers of selected memory cells and the data of the selected memory cells are erased as electrons of the floating gates are emitted to the substrate side. As electrons are transferred from the floating gate to the p-well region, the threshold voltage of a selected cell is lowered. Those cells whose erase is to be inhibited have their word lines set to a floating condition.
  • Each block of memory cells includes a set of bit lines forming columns and a set of word lines forming rows.
  • the bit lines are divided into odd bit lines and even bit lines. Memory cells along a common word line and connected to the odd bit lines are programmed at one time, while memory cells along a common word line and connected to even bit lines are programmed at another time ("odd/even programming").
  • memory cells are programmed along a word line for all bit lines in the block ("all bit line programming").
  • the bit lines or block can be broken up into other groupings (e.g., left and right, more than two groupings, etc.).
  • FIG. 4 illustrates a memory device 110 having read/write circuits for reading and programming a page of memory cells in parallel.
  • Memory device 110 may include one or more memory die or chips 112.
  • Memory die 112 includes a two-dimensional array of memory cells 100, control circuitry 120, and read/write circuits 130A and 130B.
  • access to the memory array 100 by the various peripheral circuits is implemented in a symmetric fashion, on opposite sides of the array, so that the densities of access lines and circuitry on each side are reduced by half.
  • the read/write circuits 130A and 130B include multiple sense blocks 200 which allow a page of memory cells to be read or programmed in parallel.
  • the memory array 100 is addressable by word lines via row decoders 140A and 140B and by bit lines via column decoders 142A and 142B.
  • a controller 144 is included in the same memory device 110 (e.g., a removable storage card or package) as the one or more memory die 112. Commands and data are transferred between the host and controller 144 via lines 132 and between the controller and the one or more memory die 112 via lines 134.
  • the controller can include an optional RAM memory 131 in one embodiment to assist in data transfer.
  • the control circuitry 120 cooperates with the read/write circuits 130A and 130B to perform memory operations on the memory array 100.
  • the control circuitry 120 includes a state machine 122, an on-chip address decoder 124 and a power control module 126.
  • the state machine 122 provides chip-level control of memory operations.
  • The on-chip address decoder 124 provides an address interface between the addresses used by the host or a memory controller and the hardware addresses used by the decoders 140A, 140B, 142A, and 142B.
  • the power control module 126 controls the power and voltages supplied to the word lines and bit lines during memory operations.
  • An optional RAM memory 133 is provided in one embodiment to assist in memory operations.
  • FIG. 5 is a block diagram of an individual sense block 200 partitioned into a core portion, referred to as a sense module 210, and a common portion 220.
  • In one embodiment, there is a separate sense module 210 for each bit line and one common portion 220 for a set of multiple sense modules 210.
  • As one example, a sense block will include one common portion 220 and eight sense modules 210.
  • Each of the sense modules in a group will communicate with the associated common portion via a data bus 206.
  • U.S. Patent Application 11/026,536, "Non-Volatile Memory & Method with Shared Processing for an Aggregate of Sense Amplifiers"
  • Sense module 210 comprises sense circuitry 204 that determines whether a conduction current in a connected bit line is above or below a predetermined threshold level.
  • Sense module 210 also includes a bit line latch 202 that is used to set a voltage condition on the connected bit line. For example, a predetermined state latched in bit line latch 202 will result in the connected bit line being pulled to a state designating program inhibit (e.g., Vdd).
  • Common portion 220 includes a processor 212, a set of data latches 214 and an I/O Interface 216 coupled between the set of data latches 214 and data bus 134. Processor 212 performs computations.
  • one of its functions is to determine the data stored in the sensed memory cell and store the determined data in the set of data latches.
  • the set of data latches 214 is used to store data bits determined by processor 212 during a read operation. It is also used to store data bits imported from the data bus 134 during a program operation. The imported data bits represent write data meant to be programmed into the memory.
  • I/O interface 216 provides an interface between data latches 214 and the data bus 134.
  • bit line latch 202 serves double duty, both as a latch for latching the output of the sense module 210 and also as a bit line latch as described above.
  • the data to be programmed is stored in the set of data latches 214 from the data bus 134.
  • the program operation under the control of the state machine, comprises a series of programming voltage pulses applied to the control gates of the addressed memory cells. Each programming pulse is followed by a read back (verify) to determine if the cell has been programmed to the desired memory state target threshold voltage.
  • Processor 212 monitors the read back memory state relative to the desired memory state. When the two are in agreement, the processor 212 sets the bit line latch 202 so as to cause the bit line to be pulled to a condition designating program inhibit (e.g., Vdd). This inhibits the cell coupled to the bit line from further programming even if programming pulses appear on its control gate.
  • the processor initially loads the bit line latch 202 and the sense circuitry sets it to an inhibit value during the verify process.
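  • A minimal sketch of the program/verify loop just described, assuming placeholder apply_pulse and verify callables and ignoring device-level details, might look like this:

```python
# Hypothetical model of the program/verify loop: apply a pulse, verify each cell
# against the verify level of its target state, and lock out (inhibit) cells that
# have passed by setting their bit line latch. Names are placeholders.

def program_page(cells, targets, verify_levels, apply_pulse, verify, max_loops=20):
    inhibited = set()                       # cells whose bit line latch marks "program inhibit"
    for _ in range(max_loops):
        apply_pulse(exclude=inhibited)      # programming pulse on the selected word line
        for cell in cells:
            if cell in inhibited:
                continue
            if verify(cell, verify_levels[targets[cell]]):
                inhibited.add(cell)         # bit line pulled to Vdd; no further programming
        if len(inhibited) == len(cells):
            return True                     # all cells reached their target states
    return False                            # some cells failed to verify in time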
  • Data latch stack 214 contains a stack of data latches corresponding to the sense module. In one embodiment, there are three data latches per sense module 210. In some implementations (but not required), the data latches are implemented as a shift register so that the parallel data stored therein is converted to serial data for data bus 134, and vice versa. In the preferred embodiment, all the data latches corresponding to the read/write block of memory cells can be linked together to form a block shift register so that a block of data can be input or output by serial transfer. In particular, the bank of read/write modules is adapted so that each of its set of data latches will shift data in to or out of the data bus in sequence as if they are part of a shift register for the entire read/write block.
  • a page controller (not shown) expediently provides control and timing signals to the sense modules operated in parallel.
  • For more information about sense modules 210 and their operation, see United States Patent Application Ser. No. 11/099,133, entitled "Compensating For Coupling During Read Operations Of Non-Volatile Memory," filed April 5, 2005, incorporated by reference in its entirety. Additional information about the structure and/or operations of various embodiments of non-volatile storage devices can be found in (1) United States Patent Application Pub. No.
  • data is programmed to memory cells along a common word line.
  • This word line may be referred to as a selected word line.
  • the remaining word lines of a block are referred to as unselected word lines.
  • The selected word line may have one or two neighboring word lines. If the selected word line has two neighboring word lines, then the neighboring word line on the drain side is referred to as the drain side neighboring word line and the neighboring word line on the source side is referred to as the source side neighboring word line. For example, if WL2 of Figure 2 is the selected word line, then WL1 is the source side neighboring word line and WL3 is the drain side neighboring word line.
  • the threshold voltages of the memory cells should be within one or more distributions of threshold voltages for programmed memory cells or within a distribution of threshold voltages for erased memory cells, as appropriate.
  • Figure 6 illustrates threshold voltage distributions for a group of memory cells when each memory cell stores two bits of data.
  • Figure 6 shows a first threshold voltage distribution E for erased memory cells and three threshold voltage distributions, A, B and C for programmed memory cells.
  • the threshold voltages in the E distribution are negative and the threshold voltages in the A, B and C distributions are positive.
  • Each distinct threshold voltage range of Figure 6 corresponds to predetermined values for the set of data bits.
  • the specific relationship between the data programmed into the memory cell and the threshold voltage levels of the cell depends upon the data encoding scheme adopted for the cells.
  • data values are assigned to the threshold voltage ranges using a gray code assignment so that if the threshold voltage of a floating gate erroneously shifts to its neighboring physical state, only one bit will be affected.
  • gray coding is not used.
  • One example assigns "11" to threshold voltage range E (state E), “10” to threshold voltage range A (state A), “00” to threshold voltage range B (state B) and "01” to threshold voltage range C (state C).
  • Although Figure 6 shows four states, embodiments in accordance with the present disclosure can also be used with other binary or multi-state structures, including those that include more or fewer than four states.
  • Figure 6 shows three read reference voltages, Vra, Vrb and Vrc, for reading data from the memory cells.
  • By testing whether the threshold voltage of a given memory cell is above or below Vra, Vrb and Vrc, the system can determine what state the memory cell is in. If a memory cell conducts with Vra applied to its control gate, then the memory cell is in state E. If a memory cell conducts at Vrb and Vrc but not at Vra, then the memory cell is in state A. If the memory cell conducts at Vrc but not at Vra and Vrb, then the memory cell is in state B. If the memory cell does not conduct at Vra, Vrb or Vrc, then the memory cell is in state C.
  • Figure 6 also shows three verify reference voltages, Vva, Vvb and Vvc.
  • Figure 6 also depicts a full sequence programming technique.
  • In full sequence programming, memory cells are programmed from the erased state E directly to any of the programmed states A, B or C.
  • a population of memory cells to be programmed may first be erased so that all the memory cells are in erased state E.
  • a series of program voltage pulses is then applied to the control gates of the selected memory cells to program the memory cells directly into states A, B or C. While some memory cells are being programmed from state E to state A, other memory cells are being programmed from state E to state B and/or from state E to state C.
  • Figure 7 illustrates an example of a two-pass technique of programming multi-state memory cells that store data for two different pages: a lower page and an upper page.
  • Four states are depicted: E, A, B and C.
  • In state E, both pages store a "1."
  • In state A, the lower page stores a 0 and the upper page stores a 1.
  • In state B, both pages store 0.
  • In state C, the lower page stores 1 and the upper page stores 0.
  • In a first programming pass, the cell's threshold voltage level is set according to the bit to be programmed into the lower logical page. If that bit is a logic 1, the threshold voltage is not changed since it is in the appropriate state as a result of having been earlier erased. However, if the bit to be programmed is a logic 0, the threshold level of the cell is increased to be within state A, as shown by arrow 250. That concludes the first programming pass.
  • In a second programming pass, the cell's threshold voltage level is set according to the bit being programmed into the upper logical page. If the upper logical page bit is to store a logic 1, then no programming occurs since the cell is in one of the states E or A, depending upon the programming of the lower page bit, both of which carry an upper page bit of 1. If the upper page bit is to be a logic 0, then the threshold voltage is shifted. If the first pass resulted in the cell remaining in the erased state E, then in the second pass the cell is programmed so that the threshold voltage is increased to be within state C, as depicted by arrow 254.
  • If the cell had been programmed into state A as a result of the first programming pass, then the memory cell is further programmed in the second pass so that the threshold voltage is increased to be within state B, as depicted by arrow 252.
  • the result of the second pass is to program the cell into the state designated to store a logic "0" for the upper page without changing the data for the lower page.
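  • The two-pass target-state selection described above can be summarized with the following sketch (illustrative only; the state and bit assignments follow Figure 7 as described in the text):

```python
# Sketch of the two-pass programming of Figure 7. Encoding from the text, written
# as (upper bit, lower bit): E=11, A=10, B=00, C=01. This only models which state
# a cell should end up in, not the pulse/verify mechanics.

def first_pass(lower_bit):
    # Lower page pass: a 1 leaves the cell erased (state E); a 0 programs it to state A.
    return "E" if lower_bit == 1 else "A"

def second_pass(state_after_first_pass, upper_bit):
    # Upper page pass: a 1 leaves the cell where it is; a 0 moves E -> C (arrow 254)
    # and A -> B (arrow 252).
    if upper_bit == 1:
        return state_after_first_pass
    return "C" if state_after_first_pass == "E" else "B"
```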
  • Figures 8A-8C disclose a process for programming non-volatile memory that reduces floating gate to floating gate coupling by, for any particular memory cell, writing to that particular memory cell with respect to a particular page subsequent to writing to adjacent memory cells for previous pages.
  • This technique may be referred to herein as the last first mode (LM) of programming.
  • Each memory cell stores two bits of data, using four data states. Erased state E stores data 11, state A stores data 01, state B stores data 10, and state C stores data 00. Other encodings of data to physical data states can also be used.
  • Each memory cell stores a portion of two logical pages of data. For reference purposes, these pages are called upper page and lower page but can be given other labels.
  • State A is encoded to store bit 0 for the upper page and bit 1 for the lower page
  • state B is encoded to store bit 1 for the upper page and bit 0 for the lower page
  • state C is encoded to store bit 0 for both pages.
  • the lower page data for memory cells at a word line WLn are programmed in a first step depicted in Figure 8A and upper page data for the cells is programmed in a second step depicted in Figure 8C. If the lower page data is to remain data 1 for a cell, then the cell's threshold voltage remains at state E during the first step. If the lower page data is to be programmed to 0, then the threshold voltage of the memory cell is raised to state B'.
  • State B' is an interim state B having a verify level Vvb', which is lower than Vvb.
  • After the lower page for the memory cells at WLn is programmed, the neighboring memory cells at adjacent word line WLn+1 will be programmed with respect to their lower page.
  • For example, the lower page for memory cells at WL2 in Figures 1-3 may be programmed after the lower page for memory cells at WL1.
  • Floating gate coupling may raise the apparent threshold voltage of memory cell 12 if the threshold voltage of memory cell 10 is raised from state E to state B' after programming memory cell 12.
  • The cumulative coupling effect on the memory cells at WLn will widen the apparent threshold voltage distribution of threshold voltages for the cells, as depicted in Figure 8B.
  • the apparent widening of the threshold voltage distribution can be remedied when programming the upper page for the word line of interest, as shown in Figure 8C.
  • Figure 8C depicts the process of programming the upper page for the cell at WLn. If a memory cell is in erased state E and its upper page bit is to remain at 1, the memory cell remains in state E. If the memory cell is in state E and its upper page data bit is to be programmed to 0, the threshold voltage of the memory cell is raised to be within the range for state A. If the memory cell was in intermediate threshold voltage distribution B' and its upper page data is to remain 1, the memory cell is programmed to final state B. If the memory cell is in intermediate threshold voltage distribution B' and its upper page data is to become data 0, the threshold voltage of the memory cell is raised to be within the range for state C.
  • The technique of Figures 8A-8C reduces the effect of floating gate coupling because only the upper page programming of neighboring memory cells will affect the apparent threshold voltage of a given memory cell.
  • An example of an alternate state coding for this technique is to move from intermediate state B' to state C when the upper page data is a 1, and to move to state B when the upper page data is 0.
  • Although Figures 8A-8C provide an example with respect to four data states and two pages of data, the concepts can be applied to other implementations with more or fewer than four states and different numbers of pages.
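  • For illustration, the lower-page/upper-page sequence of Figures 8A-8C, including the interim state B', can be sketched as follows (state labels and bit assignments follow the text; this is a model, not an implementation):

```python
# Sketch of the programming of Figures 8A-8C with the interim state B'.
# Encoding from the text, written as (upper bit, lower bit): E=11, A=01, B=10, C=00.

def lower_page_pass(lower_bit):
    # Lower page (Figure 8A): a 1 keeps the cell erased; a 0 raises it to interim
    # state B', which is verified at the lower level Vvb'.
    return "E" if lower_bit == 1 else "B'"

def upper_page_pass(interim_state, upper_bit):
    # Upper page (Figure 8C): cells in E move to A only for upper bit 0; cells in B'
    # settle to final state B for upper bit 1, or are raised further to C for upper bit 0.
    if interim_state == "E":
        return "E" if upper_bit == 1 else "A"
    return "B" if upper_bit == 1 else "C"
```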
  • Figure 9 is a timing diagram depicting the behavior of various signals during one iteration of a read or verify process. Each iteration of the process of Figure 9 represents a single sense operation for each memory cell. If the memory cells are binary memory cells, the process of Figure 9 may be performed once. If the memory cells are multi-state memory cells with four states (e.g., E, A, B, and C), the process of Figure 9 may be performed three times (three sense operations) for each memory cell.
  • the selected word line is connected to a read reference voltage, a level of which is specified for each read and verify operation in order to determine whether a threshold voltage of the concerned memory cell has reached such level.
  • the conduction current of the memory cell is measured to determine whether the memory cell turned on in response to the voltage applied to the word line. If the conduction current is measured to be greater than a particular value, then it is assumed that the memory cell turned on and the voltage applied to the word line is greater than the threshold voltage of the memory cell. If the conduction current is not measured to be greater than the particular value, then it is assumed that the memory cell did not turn on and the voltage applied to the word line is not greater than the threshold voltage of the memory cell.
  • the conduction current of a memory cell is measured by the rate it discharges a dedicated capacitor in the sense amplifier.
  • the conduction current of the selected memory cell allows (or fails to allow) the NAND string that included the memory cell to discharge the bit line. The charge on the bit line is measured after a period of time to see whether it has been discharged or not.
  • Figure 9 shows signals SGD, WL_unsel, WLn+1, WLn, SGS, Selected BL, BLCLAMP, and Source starting at Vss (approximately 0 volts).
  • SGD is the gate selection line of the drain side select gate.
  • SGS is the gate selection line of the source side select gate.
  • WLn is the word line selected for reading/verification.
  • WLn+1 is the unselected word line that is the drain side neighboring word line to WLn.
  • WL_unsel represents the unselected word lines other than the drain side neighboring word line.
  • Selected BL is the bit line selected for reading/verification.
  • Source is the source line for the memory cells (see Figure 3).
  • BLCLAMP is an analog signal that sets the value of the bit line when charged from the sense amplifier. Note that there are two versions of SGS, Selected BL and BLCLAMP depicted. One set of these signals SGS (B), Selected BL (B) and BLCLAMP (B) depict a read/verify operation for an array of memory cells that measure the conduction current of a memory cell by determining whether the bit line has discharged. Another set of these signals SGS (C), Selected BL (C) and BLCLAMP (C) depict a read/verify operation for an array of memory cells that measure the conduction current of a memory cell by the rate it discharges a dedicated capacitor in the sense amplifier.
  • SGD is raised to Vdd (e.g., approximately 3.5 volts)
  • the unselected word lines (WL_unsel) are raised to Vread (e.g., approximately 5.5 volts)
  • the drain side neighboring word line (WLn+1) is raised to VreadX
  • the selected word line WLn is raised to Vcgr (e.g., Vra, Vrb, or Vrc of Figures 6, 7, and 8A-8C) for a read operation or a verify level (e.g., Vva, Vvb, or Vvc of Figure 6) for a verify operation
  • BLCLAMP (B) is raised to a pre-charging voltage to pre-charge the selected bit line Selected BL (B) (e.g., to approximately 0.7V).
  • Vread and VreadX act as pass voltages because they cause the unselected memory cells to turn on (regardless of physical state or threshold voltage) and act as pass gates.
  • BLCLAMP (B) is lowered to Vss so the NAND string can control the bit line.
  • the source side select gate is turned on by raising SGS (B) to Vdd. This provides a path to dissipate the charge on the bit line. If the threshold voltage of the memory cell selected for reading is greater than Vcgr or the verify level applied to the selected word line WLn, then the selected memory cell will not turn on and the bit line will not discharge, as depicted by signal line 260.
  • the memory cell selected for reading will turn on (conduct) and the bit line voltage will dissipate, as depicted by curve 262.
  • At some point between t2 and t3, the sense amplifier will determine whether the bit line has dissipated a sufficient amount. In between t2 and t3, BLCLAMP (B) is raised to let the sense amplifier measure the evaluated BL voltage and then lowered. At time t3, the depicted signals will be lowered to Vss (or another value for standby or recovery). Note that in other embodiments, the timing of some of the signals can be changed (e.g., shift the signal applied to the neighbor).
  • SGD is raised to Vdd (e.g., approximately 3.5 volts)
  • the unselected word lines (WL_unsel) are raised to Vread (e.g., approximately 5.5 volts)
  • the drain side neighboring word line (WLn+1) is raised to VreadX
  • the selected word line WLn is raised to Vcgr (e.g., Vra, Vrb, or Vrc of Figure 6) for a read operation or a verify level (e.g., Vva, Vvb, or Vvc of Figure 6) for a verify operation
  • BLCLAMP (C) is raised.
  • The sense amplifier holds the bit line voltage constant regardless of what the NAND string is doing, so the sense amplifier measures the current flowing with the bit line "clamped" to that voltage. Therefore, BLCLAMP (C) rises at t1 and does not change from t1 to t3. At some point after time t1 and prior to time t3 (as determined by the particular implementation), the sense amplifier will determine whether the capacitor in the sense amplifier has dissipated a sufficient amount. At time t3, the depicted signals will be lowered to Vss (or another value for standby or recovery). Note that in other embodiments, the timing of some of the signals can be changed.
  • Figure 10 is a flow chart describing one embodiment for reading data from non-volatile memory cells, such as those programmed according to the techniques of Figures 6, 7, or 8A-8C.
  • Figure 10 provides the read process at the system level.
  • a request to read data is received.
  • a read operation is performed for a particular page in response to the request to read data.
  • The system will also create extra bits used for Error Correction Codes (ECCs) and write those ECC bits along with the page of data.
  • the ECC process used can include any suitable ECC process known in the art.
  • When reading data from a page, the ECC bits will be used to determine whether there are any errors in the data at step 304.
  • the ECC process can be performed by the controller, the state machine or elsewhere in the system. If there are no errors in the data, the data is reported to the user at step 306. If an error is found at step 304, it is determined whether the error is correctable at step 308. The error may be due to floating gate to floating gate coupling or other reasons.
  • Various ECC methods have the ability to correct a predetermined number of errors in a set of data. If the ECC process can correct the data, then the ECC process is used to correct that data at step 310 and the data, as corrected, is reported to the user at step 312.
  • If the error is not correctable, a data recovery process is performed at step 314. In some embodiments, an ECC process will be performed after step 314. More details about the data recovery process are described below. After the data is recovered, that data is reported at step 316. At step 318, the process loops back to step 302 if additional pages are to be read or ends at step 320. Note that the process of Figure 10 can be used with data programmed using all bit line programming or odd/even bit line programming.
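  • A simplified sketch of this system-level flow, with ecc_check, ecc_correct and recover_data standing in as placeholders for the ECC engine and the data recovery process, is shown below:

```python
# Sketch of the system-level read flow of Figure 10. All callables are placeholders;
# recover_data would typically be a compensated re-read such as those described later.

def read_page_with_ecc(page, read_page, ecc_check, ecc_correct, recover_data):
    data = read_page(page)          # step 302: ordinary read operation for the page
    error = ecc_check(data)         # step 304: None if the ECC bits show no error
    if error is None:
        return data                 # step 306: report the data as read
    if error.correctable:
        return ecc_correct(data)    # steps 310/312: correct with ECC, then report
    return recover_data(page)       # steps 314/316: data recovery read, then report
```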
  • Figure 11 is a flow chart describing one embodiment of a process for performing a read operation for a page (step 302 of Figure 10).
  • the process of Figure 11 can be performed for a page that encompasses all bit lines of a block, only odd bit lines of a block, only even bit lines of a block, or other subsets of bit lines of a block.
  • read reference voltage Vra is applied to the appropriate word line associated with the page.
  • bit lines associated with the page are sensed to determine whether the addressed memory cells turn on or do not turn on based on the application of Vra to their control gates.
  • Bit lines that conduct indicate that the memory cells were turned on; therefore, the threshold voltages of those memory cells are below Vra (e.g., in state E).
  • the result of the sensing for the bit lines is stored in the appropriate latches (e.g., latches 202) for those bit lines.
  • read reference voltage Vrb is applied to the word lines associated with the page being read.
  • the bit lines are sensed as described above.
  • the results are stored in the appropriate latches for the bit lines.
  • read reference voltage Vrc is applied to the word lines associated with the page.
  • the bit lines are sensed to determine which memory cells turn on, as described above.
  • the results from the sensing step are stored in the appropriate latches for the bit lines.
  • the data values for each bit line are determined. For example, if a memory cell conducts at Vra, then the memory cell is in state E. If a memory cell conducts at Vrb and Vrc but not at Vra, then the memory cell is in state A. If the memory cell conducts at Vrc but not at Vra and Vrb, then the memory cell is in state B. If the memory cell does not conduct at Vra, Vrb or Vrc, then the memory cell is in state C. In one embodiment, the data values are determined by processor 212.
  • processor 212 will store the determined data values in the appropriate latches (e.g., latches 214) for each bit line. In other embodiments, sensing the various levels (Vra, Vrb, and Vrc) may occur in different orders.
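  • The read operation of Figure 11 can be sketched as three sense passes followed by a per-bit-line decode (sense() below is a placeholder for the word line biasing and bit line sensing described above):

```python
# Sketch of the page read of Figure 11: sense once at each of Vra, Vrb and Vrc,
# latch whether each cell conducted, then decode the state per bit line.
# sense(level) is assumed to return, per cell, True if the cell conducted.

def read_page_full_sequence(cells, sense):
    conducts = {level: sense(level) for level in ("Vra", "Vrb", "Vrc")}
    states = {}
    for cell in cells:
        if conducts["Vra"][cell]:
            states[cell] = "E"          # conducts even at the lowest reference level
        elif conducts["Vrb"][cell]:
            states[cell] = "A"          # conducts at Vrb and Vrc but not at Vra
        elif conducts["Vrc"][cell]:
            states[cell] = "B"          # conducts only at Vrc
        else:
            states[cell] = "C"          # does not conduct at any reference level
    return states
```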
  • Floating gate coupling can cause errors during read operations.
  • the charge stored on the floating gate of a memory cell can undergo an apparent shift because of electrical field coupling associated with charge stored at a neighboring memory cell's floating gate or other charge storage region (e.g., dielectric charge storage region). While in theory the electric field from a charge on the floating gate of any memory cell in a memory array can couple to the floating gate of any other memory cell in the array, the effect is most pronounced and noticeable with adjacent memory cells.
  • Adjacent memory cells may include neighboring memory cells that are on the same bit line, neighboring memory cells on the same word line, or neighboring memory cells that are on both a neighboring bit line and neighboring word line, and thus, adjacent from each other in a diagonal direction.
  • the apparent shift in charge can result in errors when reading the memory state of a memory cell.
  • Rows of memory cells are programmed starting with the word line (WL0) adjacent to the source side select gate line.
  • Programming proceeds sequentially thereafter by word line (WL1, WL2, WL3, etc.) through the strings of cells such that at least one page of data is programmed in an adjacent word line (WLn+1) after completing programming (placing each cell of the word line into its final state) of the preceding word line (WLn).
  • This pattern of programming results in an apparent shift of the threshold voltage of memory cells after being programmed due to floating gate coupling.
  • an adjacent word line is programmed subsequent to completing programming of the word line of interest. The negative charge added to the floating gates of memory cells on the adjacent, later programmed word line raises the apparent threshold voltage of the memory cells on the word line of interest.
  • Figure 12 graphically explains the concept of floating gate to floating gate coupling.
  • Figure 12 depicts neighboring floating gates 372 and 374, which are on the same NAND string. Floating gates 372 and 374 are situated above NAND channel/substrate 376, which has source/drain regions 378, 380 and 382. Above floating gate 372 is control gate 384 that is connected to and part of word line WLn. Above floating gate 374 is control gate 386 that is connected to and part of word line WLn+1.
  • Although floating gate 372 will likely be subject to coupling from multiple other floating gates, for simplicity Figure 12 only shows the effects from one neighboring memory cell.
  • Figure 12 shows three components of coupling provided to floating gate 372 from its neighbor: r1, r2 and Cr.
  • The component r1 is the coupling ratio between the neighboring floating gates (372 and 374), and is calculated as the capacitance between the neighboring floating gates divided by the sum of all capacitive couplings of floating gate 372 to all the other electrodes surrounding it.
  • the component r2 is the coupling ratio between the floating gate 372 and the drain side neighbor control gate 386, and is calculated as the capacitance of floating gate 372 and control gate 386 divided by the sum of all capacitive couplings of floating gate 372 to all the other electrodes surrounding it.
  • the component Cr is the control gate coupling ratio and is calculated as the capacitance between floating gate 374 and its corresponding control gate 386 divided by the sum of all capacitive couplings of floating gate 372 to all the other electrodes surrounding it.
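  • Written out explicitly, the three ratios described above can be expressed as follows, with notation assumed only for illustration: C(x, y) is the capacitance between electrodes x and y, and C_total(372) is the sum of all capacitive couplings of floating gate 372 to the electrodes surrounding it.

```latex
r_1 = \frac{C(\mathrm{FG}_{372}, \mathrm{FG}_{374})}{C_{\mathrm{total}}(372)}, \qquad
r_2 = \frac{C(\mathrm{FG}_{372}, \mathrm{CG}_{386})}{C_{\mathrm{total}}(372)}, \qquad
C_r = \frac{C(\mathrm{FG}_{374}, \mathrm{CG}_{386})}{C_{\mathrm{total}}(372)}
```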
  • Figure 13 shows the apparent threshold voltage distributions for a row of memory cells (e.g., WLn) before (solid curves) and after (dotted curves) its adjacent neighboring row (WLn+1) is programmed.
  • Each distribution is widened as a result of adding negative charge to the floating gates of the memory cells of the adjacent word line. Because of floating gate coupling, the negative charge of a later programmed memory cell on WLn+1 will raise the apparent threshold voltage of a memory cell on WLn that is connected to the same bit line.
  • Distributions 400 and 402 represent the cells of a selected word line WLn in state A before and after programming the adjacent word line WLn+1, respectively.
  • Distributions 404 and 406 represent the cells of WLn in state B before and after programming WLn+1, respectively.
  • Distributions 408 and 410 represent the cells of WLn in state C before and after programming WLn+1, respectively. Because the distributions are widened, memory cells may be read incorrectly as being in an adjacent state. Memory cells at the upper end of each distribution can have an apparent threshold voltage above a corresponding read compare point. For example, when applying reference voltage Vrb, certain memory cells programmed to state A may not conduct sufficiently because of the shift in their apparent threshold voltage. These cells may incorrectly be read as in state B, causing read errors. The later programmed cells can also affect the apparent threshold voltage of memory cells of WLn that are connected to different bit lines, such as those connected to adjacent bit lines.
  • Figure 14 graphically depicts a reading technique that can be used to address some of the apparent shifts in threshold voltages illustrated in Figure 13.
  • Figure 15 is a flowchart describing this technique.
  • The data of word line WLn+1 can also be read (step 420), and if the data on word line WLn+1 has disturbed the data on WLn, the read process for WLn can compensate for that disturb.
  • State or charge level information for the memory cells at word line WLn+1 can be determined in order to select appropriate read reference voltages for reading individual memory cells of word line WLn.
  • The process of Figure 11 can be used to read WLn+1.
  • Figure 14 depicts individual read reference voltages for reading WLn based on the state of an adjacent memory cell at word line WLn+1.
  • Different offsets (e.g., 0V, 0.1V, 0.2V, 0.3V) can be added to the nominal read reference voltages.
  • The memory cells at word line WLn are sensed using each of the different read reference voltages including offsets.
  • The results from sensing at an appropriate one of the read reference voltages can be selected based upon the state of an adjacent memory cell at word line WLn+1.
  • In some embodiments, the read operation for WLn+1 determines the actual data stored at WLn+1, while in other embodiments, the read operation for WLn+1 only determines the charge levels of these cells, which may or may not accurately reflect the data stored at WLn+1. In some embodiments, the levels and/or the number of levels used to read WLn+1 may not be exactly the same as those used to read WLn. Some approximation of the floating gate threshold value can be sufficient for WLn correction purposes in some implementations. In one embodiment, the results of reading at WLn+1 can be stored in latches 214 at each bit line to be used when reading WLn (step 422).
  • a read operation can first be performed for the word line of interest WLn at the nominal read reference voltage levels Vra, Vrb, and Vrc which do not compensate for any coupling effects (step 424).
  • The results of reading at the nominal reference levels are stored in the appropriate latches for bit lines with memory cells where the neighboring cell at WLn+1 was determined to be in state E (step 426).
  • For the other bit lines, the data is disregarded and the WLn+1 data is maintained.
  • a read operation is then performed for word line WLn using a first set of offsets to the read reference voltages at step 428.
  • The read process can use Vra1 (Vra + 0.1V), Vrb1 (Vrb + 0.1V) and Vrc1 (Vrc + 0.1V) in the process of Figure 11.
  • the results from using these reference values are stored for bit lines with memory cells having neighboring memory cells at WLn+1 in state A at step 430. Data for other bit lines is disregarded.
  • a read operation is then performed at step 432 with a second set of offsets using read reference levels Vra2 (Vra + 0.2V), Vrb2 (Vrb + 0.2V) and Vrc2 (Vrc + 0.2V) in the process of Figure 11.
  • the results are stored in latches for bit lines with memory cells having neighbors at WLn+1 in state B at step 434.
  • a read operation is performed for word line WLn with a third set of offsets using reference levels Vra3 (Vra + 0.3V), Vrb3 (Vrb + 0.3V) and Vrc3 (Vrc + 0.3V) in the process of Figure 11 at step 436.
  • the results are stored for those bit lines having memory cells with neighboring cells at WLn+1 in state C at step 438.
  • no offsets will be used at Vra because of the larger natural margin between state E and state A.
  • Such an embodiment is depicted in Figure 14 where a single read reference voltage Vra is depicted at the state A level. Other embodiments may use offsets for this level as well.
  • the process of Figure 15 can be used for recovering data (e.g., step 314 of Figure 10) or as an initial read process (e.g., step 302).
  • the different offsets to the nominal read reference voltages can be selected as a function of the state of a memory cell on the adjacent word line.
  • A set of offset values could include a 0V offset corresponding to an adjacent cell in state E, a 0.1V offset corresponding to an adjacent cell in state A, a 0.2V offset corresponding to an adjacent cell in state B, and a 0.3V offset corresponding to an adjacent cell in state C.
  • the offset values will vary according to implementation. In one embodiment, the offset values are equal to the amount of shift in apparent threshold voltage that results from an adjacent cell being programmed to a corresponding state.
  • 0.3V may represent the shift in apparent threshold voltage for a cell at WLn when an adjacent cell at WLn+1 is programmed to state C after programming WLn.
  • the offset values need not be the same for every reference voltage.
  • The offset values for the Vrb reference voltage may be 0V, 0.1V, 0.2V, and 0.3V while those for the Vrc reference voltage may be 0V, 0.15V, 0.25V and 0.35V.
  • the increment in offsets need not be the same for every state.
  • A set of offsets in one embodiment could include 0V, 0.1V, 0.3V, and 0.4V for adjacent cells in states E, A, B, and C, respectively.
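  • Putting the pieces of Figures 14 and 15 together, a simplified sketch of the offset-read technique is given below (placeholder function names; the example 0V/0.1V/0.2V/0.3V offsets are the ones given in the text):

```python
# Sketch of offset-based compensation: WLn+1 is read first, WLn is then sensed at
# the nominal reference levels and at each offset, and the result kept for each bit
# line is the one matching the state of its drain side neighbor.

OFFSET_FOR_NEIGHBOR_STATE = {"E": 0.0, "A": 0.1, "B": 0.2, "C": 0.3}   # volts

def read_wl_with_offsets(bit_lines, read_neighbor_states, sense_wl_at_offset):
    neighbor_states = read_neighbor_states()          # step 420: read WLn+1, keep in latches
    results_by_offset = {
        offset: sense_wl_at_offset(offset)            # steps 424/428/432/436: one read per offset
        for offset in sorted(set(OFFSET_FOR_NEIGHBOR_STATE.values()))
    }
    final = {}
    for bl in bit_lines:                              # steps 426/430/434/438: keep only the result
        offset = OFFSET_FOR_NEIGHBOR_STATE[neighbor_states[bl]]   # matching the neighbor's state
        final[bl] = results_by_offset[offset][bl]
    return final
```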
  • Another technique used to compensate for floating gate coupling effects provides compensation to a memory cell adjacent a selected memory cell in order to reduce the coupling effect that the adjacent memory cell has on the selected memory cell.
  • One such embodiment includes setting up, during the verification process, the required conditions for later application of compensation to the adjacent memory cell.
  • the pass voltage, otherwise known as Vread, applied to WLn+1 is reduced from a typical value of, for example, 6V that is applied to each other unselected word line down to, for example, 3V.
  • the compensation consists of application of higher voltage, as compared to the voltage used during the verify phase of program/verify operations, to WLn+ 1 during the read operation performed on WLn.
  • ΔVread = [Vread(WLn+1 during read of WLn)] − [Vread(WLn+1 during verify of WLn)].
  • One example of such setting up for later compensation is depicted in Figure 9 as the application of VreadX to the drain side neighboring word line while the other unselected word lines receive Vread. In many prior art devices, all of the unselected word lines would receive Vread. In the embodiment of Figure 9, all of the unselected word lines, except for the drain side neighbor, receive Vread, while the drain side neighbor receives VreadX.
  • in one embodiment, Vread4 is equal to 3.7V; however, other values can also be used.
  • different values of Vread4 or VreadX can be determined based on device characterization, experimentation and/or simulation.
  • ΔVTn+1 is the change in threshold voltage of the drain side neighbor memory cell between the time of program/verify of WLn and the present time.
  • ΔVTn+1 and r1 are the root causes of the word line to word line parasitic coupling effect that is mitigated by the present method.
  • ΔVread is the compensation that is brought to bear in order to combat this effect.
  • Figure 16 is a flow chart describing one embodiment of a process for performing an initial read operation (step 302) or recovering data (step 314) using the aforementioned technique.
  • the process depicted in Figure 16 applies to the full sequence programming described above with respect to Figure 11 in which two bits of one logical page are stored in each cell and will be read and reported out together.
  • a read operation for the neighboring word line WLn+ 1 is performed at step 450. This can include the process of Figure 11 for the neighboring word line.
  • the results are stored in the appropriate latches at step 452.
  • at step 454, a read process is performed for WLn with word line WLn+1 receiving Vread1; the results are stored at step 456 in the appropriate latches for bit lines with memory cells where the adjacent cell at WLn+1 was determined (at step 450) to be in state C. Therefore, the maximum compensation, CompC, is engaged for cells whose drain side neighbors had experienced the highest change in threshold voltage by being programmed from state E to state C. Note that these drain side neighbors were in state E during program/verify of WLn, but now are in state C. What has to be compensated for under all circumstances is the change in state of the drain side neighbor on WLn+1 experienced between the time of write of WLn and the present time of read of WLn. For other bit lines whose drain side neighbors are not presently detected to be in state C, the data of this read of WLn, which used Vread1 on WLn+1, will be disregarded.
  • at step 458, a read process is performed for WLn with word line WLn+1 receiving Vread2. Vread2, as compared to Vread1, is closer in value to the Vreadp used during program verify.
  • a smaller compensation appropriate for cells whose drain side neighbors are now in state B is delivered.
  • Vread2 differs from Vreadp by compB.
  • the results of step 458 are stored for bit lines with memory cells having neighboring memory cells at WLn+ 1 in state B. Data for other bit lines is disregarded.
  • a read process is then performed for WLn at step 462, with word line WLn+1 receiving Vread3.
  • Vread3 is closer still to the Vreadp used during program verify.
  • a yet smaller compensation amount appropriate for cells whose drain side neighbors are now in state A is delivered.
  • Vread3 differs from Vreadp by compA.
  • the results of step 462 are stored for bit lines with memory cells having neighboring memory cells at WLn+ 1 in state A. Data for other bit lines is disregarded.
  • at step 466, a read process is performed for WLn with word line WLn+1 receiving Vread4.
  • at step 468, the results of step 466 are stored for bit lines with memory cells having neighboring memory cells at WLn+1 in state E. Data for other bit lines is disregarded. During the process of Figure 16, the neighboring word line will receive four voltages.
  • each selected memory cell of WLn being read will only make use of or select the results when sensing at the one appropriate voltage corresponding to the state of its adjacent cell at WLn+ 1.
  • different values of Vread1, Vread2, Vread3 and Vread4 can be determined based on device characterization, experimentation and/or simulation (a sketch of this Vread-based compensation flow is given below). For more information on the technique of Figure 16, see U.S. Patent Application No. 11/384,057, entitled "Read Operation for Non-Volatile Storage with Compensation for Coupling," by Nima Mokhlesi, incorporated by reference herein in its entirety.
  • word lines are typically programmed beginning with the word line adjacent the source side select gates and ending at the word line adjacent the drain side select gates although the opposite order can be used as well.
  • the physical block comprised of the cells connected to word lines WL0 through WLi would be programmed beginning with word line WL0. After completing programming of the memory cells connected to word line WL0, programming would proceed to word line WL1, WL2, etc., ending with word line WLi.
  • the read operation will proceed in the same manner as the programming operation. Reading will begin with word line WLO and proceed sequentially thereafter until word line WLi is reached.
  • Figure 17 is a flowchart depicting a method for reading a physical block of data in accordance with such traditional techniques using one or more compensations.
  • a physical block is comprised of four word lines and it is assumed that programming begins at word line WLO and ends with word line WL3.
  • a read operation is performed for word line WLl. No compensation is used when reading word line WLl as this read operation is performed merely to acquire data or charge level information that can be used to more accurately read word line WLO.
  • in one embodiment, the read operation for word line WL1 is performed according to the technique of Figure 11.
  • the results (data values or charge level information) are stored at step 502.
  • step 502 corresponds to step 360 of Figure 11.
  • WLO is read using one or more compensations at step 504.
  • the data values read from WLO are stored at step 506 in the appropriate data latches 214.
  • the data from word line WLO is reported to the host at step 508.
  • Step 504 can include performing a number of subreads at the selected word line in one embodiment.
  • applying one or more compensations at step 504 (514, 524) includes performing four subreads (each including three sense operations assuming four state memory cells) corresponding to steps 424, 428, 432, and 436 of Figure 15.
  • in another embodiment, step 504 (514, 524) includes performing four subreads (each including three sense operations assuming four state memory cells) corresponding to steps 454, 458, 462, and 466 of Figure 16.
  • other techniques for applying one or more compensations can be used. For example, a single subread can be used in some embodiments by applying a bit line specific compensation.
  • a bit line specific compensation can address each memory cell individually so that each memory cell simultaneously receives its appropriate compensation in parallel with each other memory cell of a common word line being read.
  • One set of conditions is applied to the word lines while each bit line is setup based on the state of the adjacent memory cell of that bit line on the neighboring word line.
  • a single subread can be used.
  • the trip point voltage used to determine if a cell is conductive or non-conductive under an applied read reference voltage can be tailored for each bit line, based on the state of the adjacent cell at WLn+ 1 for that bit line.
  • Other parameters such as integration time and pre-charge voltage can also be used to provide bit line specific compensation so that a single subread can be utilized. Tailoring combinations of these various parameters can be used in one embodiment (a sketch of such per-bit-line tailoring is given below).
  • at step 510, a read operation for word line WL2 is performed.
  • step 510 can include performing the read method depicted in Figure 11 in one embodiment. No compensation is used when reading the word line WL2 as this read operation is performed merely to acquire data or charge level information that can be used to more accurately read word line WLl.
  • the data values or charge level information read from word line WL2 is stored at step 512.
  • step 512 corresponds to step 660 of Figure 11.
  • a read operation using one or more compensations is performed for word line WLl at step 514.
  • the data values determined at step 514 are stored in the appropriate data latches at step 516 and reported to the host device at step 518.
  • a read operation is performed for word line WL3 which can include performing the method of Figure 11.
  • the data values or charge level information are stored for word line WL3 in the appropriate latches.
  • a read operation including one or more compensations is performed for word line WL2.
  • the data values for word line WL2 are stored in the appropriate latches at step 526 and reported to the host device at step 528.
  • a read operation is performed for word line WL3.
  • word line WL3 is the last word line to be programmed and read for the memory system. Accordingly no compensations are used at word line WL3.
  • the data values are stored for word line WL3 at step 532 and reported to the host at step 534.
  • In order to read word line WL0, for example, word line WL1 must first be read in order to ascertain its data state or charge level information so that the appropriate compensation can be applied while reading word line WL0. Before reading word line WL1 at step 514 for its actual data to be reported to the host, a read operation for word line WL2 must be performed. These additional read operations can affect system performance by making read operations take longer to complete.
  • Figure 18 is a table depicting the number of sense operations that are necessary in such a memory system in some embodiments.
  • a 16 word line physical or erase block is depicted.
  • Column 550 lists the word lines of the physical block in order, with word line WL 15 - the word line adjacent the drain side select gates of the physical block - at the top of the page, and word line WLO - the word line adjacent the source side select gates for the physical block - at the bottom of the page.
  • the programming sequence for the memory cells of the word lines is depicted in column 552.
  • Programming the physical block begins with word line WLO and proceeds sequentially thereafter to word line WL15.
  • the sequence for reading the physical block of data is depicted in column 554.
  • Word line WLO is read first, followed by word line WLl, and proceeding sequentially thereafter to word line WL15.
  • Column 556 illustrates what is necessary to perform the read operation for each word line.
  • In order to read word line WL0, for example, word line WL1 must first be read, as shown in column 556. After reading word line WL1 to determine its data values or charge level information, word line WL0 is read using the information from word line WL1 to apply (or select the results of) an appropriate compensation.
  • Column 558 sets forth the number of sense operations and subreads (SR) that are necessary for each read operation.
  • Looking at word line WL0 again, for example, when first reading word line WL1, three sense operations are performed (assuming a four state memory cell). A first sense operation is performed using the read reference level Vra, a second sense operation is performed using the second read reference voltage level Vrb, and a third sense operation is performed using the third read reference voltage level Vrc. These three sense operations comprise one subread at word line WL1. After reading word line WL1, the word line of interest, word line WL0, is read using the data from word line WL1 to apply or select the results at the appropriate compensation. Reading word line WL0 involves 12 sense operations corresponding to four subreads at word line WL0.
  • a first subread includes three sense operations at the levels Vra, Vrb and Vrc.
  • a second subread includes three sense operations at the read reference levels Vra plus a first offset, Vrb plus a first offset, and Vrc plus a first offset.
  • a third subread includes three sense operations at the levels Vra plus a second offset, Vrb plus a second offset and Vrc plus a second offset.
  • the fourth subread will include another three sense operations at the levels Vra plus a third offset, Vrb plus a third offset and Vrc plus a third offset.
  • applications of different levels of Vread to the neighboring word line while sensing the word line of interest can be used.
  • the first subread will include three sense operations at the levels Vra, Vrb and Vrc while word line WL1 receives Vread1.
  • a second subread will also include application of the read reference voltage levels Vra, Vrb and Vrc, but word line WLl will receive Vread2.
  • the third subread will include three sense operations at the Vra, Vrb and Vrc levels while word line WLl receives Vread3.
  • the fourth subread will again include application of the read reference voltage levels Vra, Vrb and Vrc to word line WLO while word line WLl receives Vread4. In total it can be seen that in order to read each word line of the physical block a total of 15 sense operations are necessary.
  • Figure 18 sets forth the total number of sense operations in box 610. It can be seen that in order to read a 16 word line physical block utilizing one of the compensation schemes of Figures 15 or 16, a total number of 228 sense operations is necessary.
  • If bit line specific compensations are utilized with a single subread at the word line of interest, the number of sense operations can be reduced, but reading will still include additional reads of the neighboring word line when reading a selected word line. Looking at word line WL0 again, reading word line WL1 will still be performed, but reading WL0 will only include one subread (3 sense operations). The total number of sense operations when reading all 16 word lines will be 93, as also shown in Figure 18 (a short counting sketch follows).
  • Embodiments in accordance with the present disclosure seek to reduce the number of sense operations for read operations spanning multiple word lines, such as a read operation in response to a request for data from a physical or erase block.
  • Figure 19 is a table similar to that of Figure 18, incorporating a reverse reading technique in accordance with one embodiment.
  • Word lines WLO through WL15 are listed in column 570.
  • the programming sequence begins with word line WLO and ends with WL 15 as shown in column 572.
  • Column 574 sets forth the read sequence, beginning with word line WL 15 and ending with word line WLO.
  • the manner of selecting and reading the word lines to accomplish read sequence 574 is listed in column 576. For each of the word lines when selected for reading, no other word line must be read to accomplish a compensation technique.
  • the information from an adjacent word line needed to apply or select the results of an appropriate read compensation for the selected word line are already known.
  • the adjacent word line data can be maintained from the already performed substantive read operation for that word line, thereby avoiding duplicative reads.
  • when word line WL0 is selected for reading during the 16th read operation of the sequence, the data from the WL1 read operation, performed 15th, can be utilized. This stands in contrast to the technique depicted in column 556 of Figure 18, where in order to read word line WL0 it is necessary to first read word line WL1, and then repeat reading at word line WL1 at a later time.
  • the number of sense operations is set forth in column 578.
  • When reading word line WL15, one subread and three sense operations are performed since WL15 has no later-programmed neighbor.
  • Word line WL14 is then read.
  • the data read from the memory cells of word line WL15 is already known, and maintained (e.g., in data latches 214), when reading word line WL14. Accordingly, four subreads at word line WL14, requiring 12 sense operations, are all that are necessary to accurately read the word line. Contrast this with Figure 18, where reading word line WL14 first requires reading WL15, and thus an extra subread and three sense operations.
  • Box 580 sets forth the total number of sense operations when using the technique of Figure 19.
  • only 183 sense operations, compared with 228 using the technique of Figure 18, are required. By reducing the number of sense operations from 228 to 183, performance improvements when reading the non-volatile memory system are obtained. If bit line specific compensations are utilized with a single subread at the word line of interest, the total number of sense operations would be 48. Each word line will be read in one subread using three sense operations. This too provides improvements over traditional techniques, reducing the number of sense operations from 93 to 48 (see the counting sketch below).
  • Figure 19 sets forth a read sequence that can be used for cells programmed by full-sequence programming, upper/lower page programming, and LM mode programming.
  • the number of sense operations and subreads may vary for upper-lower page reading or LM reading, but will be apparent to those of ordinary skill in the art. While much of the discussion may be presented with respect to full-sequence programmed cells for exemplary purposes, the present disclosure is equally applicable to other processes for cells.
  • Figure 20 is a flowchart of a method in accordance with one embodiment for reverse reading a physical block of memory cells.
  • although a four word line, four state memory system is illustrated for exemplary purposes, the disclosed techniques may be applied to systems with other numbers of word lines and memory states.
  • the last word line, WL3, to be programmed for the physical block is read at step 600. Because the last word line does not have a later programmed neighboring word line, no compensation is used and the method of Figure 11 can be applied.
  • the data values determined for WL3 are stored in the latches (e.g., latches 214) for each corresponding bit line.
  • the data values are buffered at step 604, to be reported to the host device later, as will be described hereinafter.
  • a read operation is performed for word line WL2 at step 606 using one or more compensations.
  • Compensation subreads (e.g., as in Figure 15 or 16) can be used.
  • one or more bit line based compensations can be applied for each bit line and a single subread utilized.
  • the data values from the appropriate subread, or from the single subread when using bit line compensation, are stored in the data latches for each bit line.
  • the data values stored at step 602 are replaced with the WL2 values at step 608.
  • step 608 may be performed in between or as part of the subreads at step 606.
  • steps 606 and 608 can include performing steps 424-438 of Figure 15. In one embodiment that utilizes different values of Vread as shown in Figure 16, steps 606 and 608 can include performing steps 454-468 of Figure 16. After storing data values for word line WL2 in the latches, the data values are buffered at step 610.
  • a read operation is performed for word line WL1 using compensation subreads or bit line compensation. Because word line WL2 was read at step 606, reading word line WL1 does not require an additional read of word line WL2.
  • the values stored in the latches for each bit line at step 608 can be used to select the appropriate read results or apply the appropriate bit line compensation.
  • the data values for word line WLl are stored in the data latches. The data stored in the latches at step 608 is overwritten in one embodiment. After storing the data values for word line WLl, the data values are buffered at step 616.
  • a read operation is performed at word line WLO utilizing compensation subreads or bit line compensation.
  • the data values for the memory cells at word line WLO are stored in the appropriate latches at step 620.
  • the data values stored at step 620 are buffered.
  • the data buffered at steps 604, 610, 616 and 622 is reported to the host or requesting device at step 624 (a sketch of this reverse-read flow appears below).
  • Figure 21 is a flowchart depicting a method for reporting data to the host or requesting device, as can be performed at step 624 of Figure 20.
  • the data buffered at steps 604, 610, 616, and 622 is reordered at step 630.
  • Reordering the data can include putting the data from word line WLO first, followed by the data from word line WLl, the data from word line WL2, and finally the data from word line WL3.
  • the data was placed into the buffer in the opposite sequence to which it was programmed in steps 604, 610, 616, and 622.
  • host devices will expect data in the order it was programmed.
  • the data is reordered to coincide with the programming sequence.
  • the data from word line WLO is reported to the host device.
  • the data from word line WLl is reported to the host device.
  • the data from word line WL2 is reported to the host device, and finally at step 638, the data from word line WL3 is reported to the host device (a short sketch of this reordering and reporting follows).
  • Buffering the data values and reporting them to the host can vary by implementation. For example, the data from each word line can be reported to the host after reading each individual word line. After reading word line WL3 and storing the data value in the latches for example, the data could be immediately reported to the host. In such an embodiment, buffering the data values at step 604 may not be necessary.
  • the host device will receive the data in the opposite order of programming and can reorder the data if necessary.
  • the location and type of memory for buffering the data may vary by embodiment.
  • the data is buffered in a memory such as RAM 131 ( Figure 4) in or accessible by controller 144.
  • the controller can reorder the data and report it to the host.
  • the data is buffered in a memory such as RAM 133 in or accessible by control circuitry 120.
  • the control circuitry or controller can reorder the data for delivery to the host by the controller.
  • physical blocks are divided into read blocks to decrease the amount of data required to be buffered at one time, and consequently, the size of memory for buffering.
  • the memory system can reverse read a portion of the physical block (read block), buffer the data, and report that data to the host device.
  • the memory system can read the next read block of the physical block as the data is reported, and buffer the data from the next read block in its place. This process can be performed as many times as necessary until the entire physical block requested by the host device has been read and reported.
  • Various sizes and ways of partitioning the physical block into individual read blocks can be utilized in accordance with different embodiments.
  • Figure 22 is a table illustrating the utilization of read blocks in accordance with one embodiment.
  • a 16 word line physical block is illustrated, however physical blocks of other sizes can be divided in a similar manner.
  • the word lines, listed from WL15 to WL0 in column 650, are programmed beginning with word line WL0 and proceeding sequentially thereafter to word line WL15, as shown in column 652.
  • the physical block of Figure 22 has been partitioned into four individual read blocks.
  • Read block 1 includes word lines WLO, WLl, WL2 and WL3.
  • Read block 2 includes word lines WL4, WL5, WL6 and WL7.
  • Read block 3 includes word lines WL8, WL9, WL10 and WL11.
  • Read block 4 includes word lines WL12, WL13, WL14 and WL15. More or fewer read blocks including different numbers of word lines can also be used.
  • the sequence for reading the physical block is set forth in column 654.
  • the memory system begins reading and reporting individual read blocks.
  • the read sequence begins with word line WL3 in read block 1.
  • Word line WL3 is the last to be programmed for this read block.
  • the individual operations for reading the selected word line are set forth in column 656.
  • In order to read word line WL3, its neighboring drain side word line WL4 is first read. Actual data values or charge level information can be read from the cells of word line WL4. When reading word line WL4, no compensations are utilized. A single subread and three sense operations (assuming a four state device) are used as shown in column 658. After reading word line WL4, word line WL3 is read.
  • Reading word line WL3 includes four subreads and 12 sense operations when using a word line compensation scheme as shown in Figures 15 or 16. The data values for each subread will be selected for individual bit lines based on the state of the corresponding adjacent memory cell at word line WL4. An appropriate bit line compensation could alternatively be applied in one embodiment.
  • after reading word line WL3, the read sequence proceeds to word line WL2.
  • Reading word line WL2 only includes four subreads and 12 sense operations. The performance of an additional read operation at word line WL3 is not necessary since the data from word line WL3 is already known.
  • the read sequence proceeds to word line WLl and word line WLO. After reading read block 1 and buffering the data, the data can be reported to the host device. Reporting the data can include reordering the data to begin with word line WLO and end with WL3 as shown in Figure 21.
  • read block 2 can be read.
  • reading read block 2 can begin as the data from read block 1 is being reported.
  • the data from read block 2 can replace the data from read block 1 as it is reported to the host from the buffer memory.
  • Reading read block 2 begins at word line WL7, which includes first reading word line WL8 in order to determine data values or charge level information that can be used when reading word line WL7. Reading the remainder of read block 2 proceeds as described with respect to read block 1, buffering the data from each word line.
  • the data can be reordered and reported to the host, beginning with the data from word line WL4, then word line WL5, then word line WL6, and finally word line WL7.
  • Reading word line WL11 will include first reading word line WL12 to apply compensations when reading WL11, as shown in column 656.
  • After reading word line WL11 and selecting the appropriate values based on the data from word line WL12, the data will be buffered in the memory system. Reading then proceeds to word lines WL10, WL9 and WL8 in a similar fashion to that already described. The memory system will reorder the data beginning with word line WL8 and ending with word line WL11 and report this data to the host device.
  • Reading read block 4 begins at word line WL15 as shown in column 654. Because word line WL15 is the last to be programmed, it is not necessary to first read any other word line. After reading word line WL15, word line WL14 is read using the data from WL15 to apply an appropriate compensation. After reading and buffering the data from each of the remaining word lines of read block 4, the data can be reordered and reported to the host beginning with word line WL12 and ending with word line WL15.
  • the read block division of Figure 22 utilizes a total of 192 sense operations, as shown at 660, when a word line compensation scheme is used. If a bit line compensation scheme is used, 57 sense operations are used. As can be seen by comparison with Figure 18, the read block technique of Figure 22 requires fewer sense operations while keeping the required buffer memory size down.
  • Figure 23 is a flowchart depicting a method for reading non-volatile memory in a reverse fashion using individual read blocks.
  • the embodiment of Figure 23 is set forth utilizing exemplary conditions of four read blocks as was shown in Figure 22. However, other embodiments may include different numbers of read blocks.
  • a read operation is performed for the first word line of the second read block.
  • the first word line refers to the first word line to be programmed for the second read block.
  • step 700 would include reading word line WL4.
  • the read operation at step 700 does not include the utilization of any compensation subreads (or bit line compensation) because the read operation is intended to obtain information that can be used when reading the source side neighboring word line.
  • the data values (or charge level information) for the first word line of read block two are stored in the data latches.
  • a reverse read is performed for the first read block. Reverse reading the first read block will first include reading the last word line of the first read block using compensation based on the first word line of the second read block. Each bit line of the last word line will store the data from the appropriate subread, etc. based on the information determined from reading the first word line of the second read block. After reading the last word line of the first read block, the second to last word line of the first read block will be read and the data values selected based on the previous read of the last word line of the first read block. This will continue until the first word line of the first read block to be programmed has been read.
  • step 704 would include reading word line WL3, then word line WL2, then word line WLl and finally word line WLO.
  • the memory system will buffer the data read from the first read block. In one embodiment, buffering will include buffering the data from each individual word line of the first read block after it has been read at step 704. After buffering the data and reading each word line of the first read block, the memory system will reorder the data to correspond to the order in which the data was programmed to the read block. Referring again to Figure 22, this will include reordering the data to begin with the data from word line WL0 and end with the data from word line WL3. After the data has been reordered at step 706, the memory system will report the data of the first read block to the host device at step 708.
  • a read operation is performed for the first word line of the third read block.
  • step 710 would include reading word line WL8 of read block 3.
  • the read operation for the first word line of read block 3 will not include compensation.
  • at step 712, the data values read from the first word line of read block 3 are stored in the latches.
  • at step 714, a reverse read operation for the second read block is performed. This reverse read operation will first include reading the last word line of read block two utilizing compensation based on the information determined from reading the first word line of read block 3 at step 710.
  • performing step 714 can include reading word line WL7 (using data from WL8), followed by word line WL6 (using data from WL7), word line WL5 (using data from WL6), and finally word line WL4 (using data from WL5).
  • the data from reverse reading read block two is buffered. Again buffering the data may include buffering the data after reading each individual word line of read block 2.
  • the memory system will reorder the data of read block 2, also at step 716.
  • the data is reported to the host device at step 718.
  • step 718 can include reporting the data from word line WL4, then the data from word line WL5, then the data from word line WL6, and concluding with the data from word line WL7.
  • a read operation for the first word line of the fourth read block is performed. Again, this may include reading word line WL12, as depicted in column 656 for word line WL11 in Figure 22.
  • the data values for the first word line of the fourth read block are stored in the data latches at step 722.
  • a reverse read operation for the third read block is performed as described for read blocks 1 and 2. This will include first reading the last word line of the third read block to be programmed.
  • the memory system will buffer and reorder the data into the order in which it was originally programmed.
  • the reordered data will be reported to the host device.
  • a reverse read of the fourth read block is performed.
  • Reverse reading read block 4 in the embodiment of Figure 22 can include reading word line WL15, then word line WL14, then word line WL13, and concluding with word line WL12.
  • the data from the individual word lines will be buffered at step 732.
  • the memory system will reorder the data to begin with the data from word line WL12 and conclude with the data from word line WL15. This reordered data is then reported to the host device at step 734 (a sketch of this read-block reverse-read sequence follows).
  • threshold sizes of requested data from host devices are utilized in order to determine whether and to what extent to perform reverse reading operations.
  • Figure 24 depicts one embodiment utilizing two such thresholds.
  • a request to read data is received from the host device.
  • the memory system determines the size of the requested data at step 742.
  • the memory system will compare the size of the requested data to a first threshold at step 744. If the requested size of data is smaller than the first threshold size of data, no reverse read operations are performed.
  • the host device will have requested a small amount of data and in order to efficiently process this small amount of data, the memory system will quickly read the requested data and provide it to the host device.
  • a standard read operation using compensation can be performed at step 750.
  • the operation performed at step 750 will be performed as depicted in Figure 15 or Figure 16.
  • the memory system can read the appropriate neighboring word line as shown at step 420 of Figure 15 or step 450 of Figure 16. After reading the neighboring word line, the selected word line will be read utilizing compensation subreads as depicted in steps 422-438 of Figure 15 or steps 452-468 of Figure 16. Bit line compensation can also be used. If the size of the requested data corresponds to more than one word line, individual performances of the process of Figure 15 or Figure 16 can be used to read those multiple word lines.
  • the memory system will compare the size of the requested data with a second threshold at step 746.
  • the second threshold size is larger than the first. If the size of the requested data is larger than the first threshold but smaller than the second threshold, a reverse read operation of one or more read blocks is performed at step 752. For example, a reverse read of the read block containing the requested data, for example read block 1, can be performed at step 752. If the size of the requested data is larger than the second threshold, a reverse read operation for one or more physical blocks can be performed at step 748.
  • a reverse read of at least one read block could be performed in all cases.
  • the host may later request the data and it may be efficient to read and buffer it to avoid reading it later.
  • reverse reading one or more read blocks at step 752 may always include reverse reading at least two read blocks.
  • Another embodiment may utilize a single threshold value. In such a case if the requested data is smaller than a first threshold size, the memory system may perform standard read operations as depicted in Figures 15 and 16 without reverse reading.
  • the memory system may reverse read the physical block beginning with the requested word line and then proceeding in the opposite direction of programming to the end of the physical block. Another option could include moving in the direction of programming by a few word lines before beginning the reverse read. Where one threshold is used, if the size of the requested data is larger than the threshold size, the memory system can reverse read the corresponding physical block(s) (a sketch of this threshold-based strategy selection follows).

Abstract

Shifts in the apparent charge stored by a charge storage region such as a floating gate in a non- volatile memory cell can occur because of electrical field coupling based on charge stored in adjacent (or other) charge storage regions. Although not exclusively, the effects are most pronounced in situations where adjacent memory cells are programmed after a selected memory cell. To account for the shift in apparent charge, one or more compensations are applied when reading storage elements of a selected word line based on the charge stored by storage elements of other word lines. Efficient compensation techniques are provided by reverse reading blocks (or portions thereof) of memory cells. By reading in the opposite direction of programming, the information needed to apply (or select the results of) an appropriate compensation when reading a selected cell is determined during the actual read operation for the adjacent word line rather than dedicating a read operation to determine the information.

Description

REVERSE READING IN NON-VOLATILE MEMORY WITH COMPENSATION FOR COUPLING
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] Embodiments of the present disclosure are directed to non-volatile memory technology.
Description of the Related Art
[0002] Semiconductor memory devices have become more popular for use in various electronic devices. For example, non-volatile semiconductor memory is used in cellular telephones, digital cameras, personal digital assistants, mobile computing devices, non-mobile computing devices and other devices. Electrical Erasable Programmable Read Only Memory (EEPROM), including flash EEPROM, and Electronically Programmable Read Only Memory (EPROM) are among the most popular non-volatile semiconductor memories.
[0003] One example of a flash memory system uses the NAND structure, which includes arranging multiple transistors in series between two select gates. The transistors in series and the select gates are referred to as a NAND string. Figure 1 is a top view showing one NAND string. Figure 2 is an equivalent circuit thereof. The NAND string depicted in Figures 1 and 2 includes four transistors 10, 12, 14 and 16 in series between a first select gate 20 and a second select gate 22. Select gate 20 connects the NAND string to bit line terminal 26. Select gate 22 connects the NAND string to source line terminal 28. Select gate 20 is controlled by applying appropriate voltages to control gate 20CG via selection line SGD. Select gate 22 is controlled by applying the appropriate voltages to control gate 22CG via selection line SGS. Each of the transistors 10, 12, 14 and 16 includes a control gate and a floating gate, forming the gate elements of a memory cell. For example, transistor 10 includes control gate 10CG and floating gate 10FG. Transistor 12 includes control gate 12CG and a floating gate 12FG. Transistor 14 includes control gate 14CG and floating gate 14FG. Transistor 16 includes a control gate 16CG and a floating gate 16FG. Control gate 10CG is connected to word line WL3, control gate 12CG is connected to word line WL2, control gate 14CG is connected to word line WL1, and control gate 16CG is connected to word line WL0.
[0004] Note that although Figures 1 and 2 show four memory cells in the NAND string, the use of four transistors is only provided as an example. A NAND string can have less than four memory cells or more than four memory cells. For example, some NAND strings will include eight memory cells, 16 memory cells, 32 memory cells, etc. The discussion herein is not limited to any particular number of memory cells in a NAND string. A typical architecture for a flash memory system using a NAND structure will include several NAND strings. Relevant examples of NAND-type flash memories and their operation are provided in the following U.S. Patents/Patent Applications, all of which are incorporated herein by reference in their entirety: U.S. Pat. No. 5,570,315; U.S. Pat. No. 5,774,397; U.S. Pat. No. 6,046,935; U.S. Pat. No. 5,386,422; U.S. Pat. No. 6,456,528; and U.S. Pat. Application Ser. No. 09/893,277 (Publication No. US2003/0002348). Other types of non-volatile memory in addition to NAND flash memory can also be used in accordance with embodiments.
[0005] When programming an EEPROM or flash memory device, typically a program voltage is applied to the control gate and the bit line is grounded. Electrons from the channel are injected into the floating gate. When electrons accumulate in the floating gate, the floating gate becomes negatively charged and the threshold voltage of the memory cell is raised so that the memory cell is in a programmed state. The floating gate charge and threshold voltage of the cell can be indicative of a particular state corresponding to stored data. More information about programming can be found in U.S. Patent Application 10/379,608, titled "Self Boosting Technique," filed on March 5, 2003; and in U.S. Patent Application 10/629,068, titled "Detecting Over Programmed Memory," filed on July 29, 2003, both applications are incorporated herein by reference in their entirety.
[0006] Shifts in the apparent charge stored on a floating gate can occur because of coupling of an electric field based on the charge stored in neighboring floating gates. This floating gate to floating gate coupling phenomena is described in U.S. Patent No. 5,867,429, which is incorporated herein by reference in its entirety. The floating gate to floating gate coupling phenomena occurs most pronouncedly, although not exclusively, between sets of adjacent memory cells that have been programmed at different times. For example, a first memory cell can be programmed to add a level of charge to its floating gate that corresponds to a set of data. Subsequently, one or more adjacent memory cells are programmed to add a level of charge to their floating gates that correspond to a set of data. After one or more of the adjacent memory cells are programmed, the charge level read from the first memory cell may appear to be different than when it was programmed because of the effect of the charge on the adjacent memory cell(s) being coupled to the first memory cell. The coupling from adjacent memory cells can shift the apparent charge level being read from a selected memory cell by a sufficient amount to lead to an erroneous reading of the stored data.
[0007] As memory cells continue to shrink in size, the natural programmed and erase distributions of threshold voltages are expected to increase due to short channel effects, greater oxide thickness/coupling ratio variations and more channel dopant fluctuations, reducing the available separation between adjacent states. This effect is much more significant for multi-state memories than binary memories using only two states. The reduction of the space between word lines and between bit lines will also increase the coupling between adjacent floating gates. The effect of the floating gate to floating gate coupling is of greater concern for multi-state devices because the allowed threshold voltage ranges and the forbidden ranges (range between two distinct threshold voltage ranges that represent distinct memory states) are narrower than in binary devices. Therefore, floating gate to floating gate coupling can result in memory cells being shifted from an allowed threshold voltage range to a forbidden range.
SUMMARY OF THE INVENTION
[0008] Shifts in the apparent charge stored by a charge storage region such as a floating gate in a non-volatile memory cell can occur because of electrical field coupling based on charge stored in adjacent (or other) charge storage regions. Although not exclusively, the effects are most pronounced in situations where adjacent memory cells are programmed after a selected memory cell. To account for the shift in apparent charge, one or more compensations are applied when reading storage elements of a selected word line based on the charge stored by storage elements of other word lines. Efficient compensation techniques are provided by reverse reading blocks (or portions thereof) of memory cells. By reading in the opposite direction of programming, the information needed to apply (or select the results of) an appropriate compensation when reading a selected cell is determined during the actual read operation for the adjacent word line rather than dedicating a read operation to determine the information.
[0009] One embodiment includes programming non-volatile storage elements coupled to a plurality of word lines beginning with a first word line adjacent a first set of select gates and ending with a last word line adjacent a second set of select gates. Programming includes changing a threshold voltage of selected ones of the storage elements in accordance with a target memory state. Reading the non-volatile storage elements coupled to the plurality of word lines begins with the last word line and end with the first word line. Reading the storage elements includes, for each word line except the last word line, applying one or more compensations based on reading a word line adjacent to each word line in a direction of the second set of select gates.
[0010] Another embodiment includes receiving a request for data stored in a set of non-volatile storage elements coupled to a plurality of word lines. The data is programmed to the set of non-volatile storage elements beginning with storage elements coupled to a first word line and ending with storage elements coupled to a last word line. The first word line is adjacent to a second word line of the plurality. In response to the request, the set of non-volatile storage elements is read in a sequence beginning with storage elements coupled to the last word line and ending with storage elements coupled to the first word line, wherein reading storage elements coupled to the first word line includes applying one or more compensations based on charge stored in storage elements of the second word line. Reading further includes buffering data from storage elements of the second word line prior to reading the storage elements of said first word line, buffering data from storage elements of the first word line after buffering data from storage elements of the second word line, and maintaining the buffered data from storage elements of the second word line after buffering data from storage elements of the first word line.
BRIEF DESCRIPTION OF THE DRAWINGS
[0001] Figure 1 is a top view of a NAND string.
[0002] Figure 2 is an equivalent circuit diagram of the NAND string of Figure 1.
[0003] Figure 3 is a block diagram of an array of NAND flash memory cells.
[0004] Figure 4 is a block diagram of a non-volatile memory system.
[0005] Figure 5 is a block diagram of one embodiment of a sense block.
[0006] Figure 6 depicts an exemplary set of threshold voltage distributions and a full sequence programming process.
[0011] Figure 7 depicts an exemplary set of threshold voltage distributions and an upper page/lower page programming process.
[0012] Figures 8A-8C depict an exemplary set of threshold voltages and a two-pass programming process.
[0013] Figure 9 is a timing diagram that explains the behavior of certain signals during read/verify operations.
[0014] Figure 10 is a flow chart of one embodiment for reading non-volatile memory.
[0015] Figure 11 is a flow chart of one embodiment for performing a read operation for non-volatile memory.
[0016] Figure 12 is a block diagram illustrating capacitive coupling between two neighboring memory cells.
[0017] Figure 13 is an exemplary set of threshold voltage distributions illustrating the effects of floating gate coupling.
[0018] Figure 14 is an exemplary set of threshold voltage distributions illustrating a technique for compensating for floating gate coupling.
[0019] Figure 15 is a flowchart for compensating for floating gate coupling using the technique of Figure 14.
[0020] Figure 16 is a flowchart for compensating for floating gate coupling using another technique.
[0021] Figure 17 is a flowchart illustrating an order of reading a set of memory cells using compensation according to the prior art.
[0022] Figure 18 is a table illustrating reading of a set of memory cells using compensation according to the prior art.
[0023] Figure 19 is a table illustrating reading of a set of memory cells using compensation according to one embodiment.
[0024] Figure 20 is a flowchart for reading a set of memory cells using compensation in accordance with one embodiment.
[0025] Figure 21 is flowchart for reporting data to a host in accordance with one embodiment.
[0026] Figure 22 is a table illustrating reading of a set of memory cells with compensation using read blocks in accordance with one embodiment.
[0027] Figure 23 is a flowchart for reading a set of memory cells using read blocks in accordance with one embodiment.
[0028] Figure 24 is a flowchart for reading a set of memory cells using threshold requested data sizes.
DETAILED DESCRIPTION
[0029] Memory cells can be used to store data represented in analog or digital form through manipulation of the cell's threshold voltage. The range of possible threshold voltages of a memory cell can be divided into ranges which represent distinct memory states. For example, two ranges of threshold voltages can be used to establish two memory states that are assigned logical data 1 and 0. At least one reference threshold voltage level is generally established so as to partition the threshold voltage memory window of the memory cell into the two ranges. When the cell is read by applying predetermined, fixed voltages (e.g., read reference voltages) to its gate that correspond to the reference threshold voltage level, its source/drain conduction state is established by comparing the conduction with a breakpoint level or reference current. If the current read is higher than that of the reference current level, the cell is determined to be "on" and in one logical state. If the current is less than the reference current level, the cell is determined to be "off" and in the other logical state. In one example of a NAND-type flash memory, the voltage threshold is negative after the memory cell is erased, and defined as logic 1. The threshold voltage is positive after a program operation, and defined as logic 0. When the threshold voltage is negative and a read is attempted by applying 0V to the control gate, the memory cell will turn on to indicate logic 1 is being stored. When the threshold voltage is positive and a read operation is attempted by applying 0V to the control gate, the memory cell will not turn on to indicate that logic 0 is being stored.
[0030] A memory cell can also store multiple bits of digital data by utilizing more than two ranges of threshold voltages to represent distinct memory states. The threshold voltage window can be divided into the number of desired memory states and multiple voltage breakpoint levels used to resolve the individual states. For example, if four states are used, there will be four threshold voltage ranges representing four distinct memory states which are assigned the data values 11, 10, 01, and 00. The specific relationship between the data programmed into the memory cell and the threshold voltage ranges of the cell depends upon the data encoding scheme adopted for the memory cells. For example, U.S. Patent No. 6,222,762 and U.S. Patent Application No. 10/461,244, "Tracking Cells For A Memory System," filed on June 13, 2003, both of which are incorporated herein by reference in their entirety, describe various data encoding schemes for multi-state flash memory cells.
[0031] There are many ways to measure the conduction current of a memory cell during a read or verify operation. In one example, the conduction current of a memory cell is measured by the rate it discharges a dedicated capacitor in the sense amplifier. In another example, the conduction current of the selected memory cell allows (or fails to allow) the NAND string that included the memory cell to discharge the bit line. The charge on the bit line is measured after a period of time to see whether it has been discharged or not.
[0032] Another type of memory cell useful in flash EEPROM systems utilizes a non-conductive dielectric material in place of a conductive floating gate to store charge in a non-volatile manner. Such a cell is described in an article by Chan et al, "A True Single-Transistor Oxide-Nitride-Oxide EEPROM Device," IEEE Electron Device Letters, Vol. EDL-8, No. 3, March 1987, pp. 93-95. A triple layer dielectric formed of silicon oxide, silicon nitride and silicon oxide ("ONO") is sandwiched between a conductive control gate and a surface of a semi-conductive substrate above the memory cell channel. The cell is programmed by injecting electrons from the cell channel into the nitride, where they are trapped and stored in a limited region. This stored charge then changes the threshold voltage of a portion of the channel of the cell in a manner that is detectable. The cell is erased by injecting hot holes into the nitride. See also Nozaki et al., "A 1-Mb EEPROM with MONOS Memory Cell for Semiconductor Disk Application," IEEE Journal of Solid-State Circuits, Vol. 26, No. 4, April 1991, pp. 497-501, which describes a similar cell in a split-gate configuration where a doped polysilicon gate extends over a portion of the memory cell channel to form a separate select transistor. The foregoing two articles are incorporated herein by reference in their entirety. The programming techniques mentioned in section 1.2 of "Non-volatile Semiconductor Memory Technology," edited by William D. Brown and Joe E. Brewer, IEEE Press, 1998, incorporated herein by reference, are also described in that section to be applicable to dielectric charge-trapping devices. The memory cells described in this paragraph can also be used. Thus, the technology described herein also applies to coupling between dielectric regions of different memory cells.
[0033] Another approach to storing two bits in each cell has been described by Eitan et al, "NROM: A Novel Localized Trapping, 2-Bit Non-volatile Memory Cell," IEEE Electron Device Letters, vol. 21, no. 11, November 2000, pp. 543-545. An ONO dielectric layer extends across the channel between source and drain diffusions. The charge for one data bit is localized in the dielectric layer adjacent to the drain, and the charge for the other data bit localized in the dielectric layer adjacent to the source. Multi-state data storage is obtained by separately reading binary states of the spatially separated charge storage regions within the dielectric. The memory cells described in this paragraph can also be used.
[0034] Figure 3 illustrates an exemplary array 100 of NAND strings 50, such as those shown in Figures 1-2. Along each column, a bit line 27 is coupled to a drain terminal 26 of the bit line select gate for the NAND string 50. Along each row of NAND strings, a source line 29 may connect all the source terminals 28 of the source line select gates of the NAND strings.
[0035] The array 100 of memory cells is divided into a large number of blocks of memory cells. As is common for flash EEPROM systems, the block is the unit of erase and may be referred to as an erase block or physical block. Each block can contain the minimum number of memory cells that are erased together. In Figure 3, a block such as block 30 includes all the cells connected to a common set of word lines WLO-WLi. Each block is typically divided into a number of pages. A page is typically a minimum unit of programming or reading, although more than one page may be programmed or read in a single operation. In another embodiment, the individual pages may be divided into segments and the segments may contain the fewest number of cells that are written at one time as a basic programming operation. One or more pages of data are typically stored in one row of memory cells. A page can store one or more sectors of data, the size of which is generally defined by a host system. A sector includes user data and overhead data. Overhead data typically includes an Error Correction Code (ECC) that has been calculated from the user data of the sector. A portion of the controller (described below) calculates the ECC when data is being programmed into the array, and also checks it when data is being read from the array. Alternatively, the ECCs and/or other overhead data are stored in different pages, or even different blocks, than those of the user data to which they pertain.
[0036] A sector of user data is typically 512 bytes, corresponding to the size of a sector commonly used in magnetic disk drives. Overhead data is typically an additional 16-20 bytes. A large number of pages form a block, anywhere from 8 pages, for example, up to 32, 64 or more pages. In some embodiments, a row of NAND strings comprises a block.
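To make the sector layout above concrete, the following Python sketch assembles and checks a sector; it is illustrative only, and the 16-byte overhead and the truncated SHA-1 digest (a detection-only placeholder, not a true error-correcting code) are assumptions rather than part of the embodiments described above.

    import hashlib

    SECTOR_USER_BYTES = 512   # typical user-data size noted above
    OVERHEAD_BYTES = 16       # assumed overhead size within the 16-20 byte range

    def make_sector(user_data: bytes) -> bytes:
        # Append overhead computed from the user data (placeholder for a real ECC).
        assert len(user_data) == SECTOR_USER_BYTES
        overhead = hashlib.sha1(user_data).digest()[:OVERHEAD_BYTES]
        return user_data + overhead

    def check_sector(sector: bytes) -> bool:
        # Recompute the overhead and compare it against what was stored with the sector.
        user_data, overhead = sector[:SECTOR_USER_BYTES], sector[SECTOR_USER_BYTES:]
        return hashlib.sha1(user_data).digest()[:OVERHEAD_BYTES] == overhead

    sector = make_sector(bytes(512))
    assert check_sector(sector)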
[0037] Memory cells are erased in one embodiment by raising the p-well to an erase voltage (e.g., 20 volts) for a sufficient period of time and grounding the word lines of a selected block while the source and bit lines are floating. A strong electric field is thus applied to the tunnel oxide layers of selected memory cells and the data of the selected memory cells are erased as electrons of the floating gates are emitted to the substrate side. As electrons are transferred from the floating gate to the p-well region, the threshold voltage of a selected cell is lowered. Those cells whose erase is to be inhibited have their word lines set to a floating condition. Due to capacitive coupling, the unselected word lines, bit lines, select lines, and common source lines are also raised to a significant fraction of the erase voltage, thus inhibiting erasing of the unselected cells. Erasing can be performed on the entire memory array, separate blocks, or another unit of cells. [0038] Each block of memory cells includes a set of bit lines forming columns and a set of word lines forming rows. In one embodiment, the bit lines are divided into odd bit lines and even bit lines. Memory cells along a common word line and connected to the odd bit lines are programmed at one time, while memory cells along a common word line and connected to even bit lines are programmed at another time ("odd/even programming"). In another embodiment, memory cells are programmed along a word line for all bit lines in the block ("all bit line programming"). In other embodiments, the bit lines or block can be broken up into other groupings (e.g., left and right, more than two groupings, etc.).
[0039] Figure 4 illustrates a memory device 110 having read/write circuits for reading and programming a page of memory cells in parallel. Memory device 110 may include one or more memory die or chips 112. Memory die 112 includes a two-dimensional array of memory cells 100, control circuitry 120, and read/write circuits 130A and 130B. In one embodiment, access to the memory array 100 by the various peripheral circuits is implemented in a symmetric fashion, on opposite sides of the array, so that the densities of access lines and circuitry on each side are reduced by half. The read/write circuits 130A and 130B include multiple sense blocks 200 which allow a page of memory cells to be read or programmed in parallel. The memory array 100 is addressable by word lines via row decoders 140A and 140B and by bit lines via column decoders 142A and 142B. In a typical embodiment a controller 144 is included in the same memory device 110 (e.g., a removable storage card or package) as the one or more memory die 112. Commands and data are transferred between the host and controller 144 via lines 132 and between the controller and the one or more memory die 112 via lines 134. The controller can include an optional RAM memory 131 in one embodiment to assist in data transfer. [0040] The control circuitry 120 cooperates with the read/write circuits 130A and 130B to perform memory operations on the memory array 100. The control circuitry 120 includes a state machine 122, an on-chip address decoder 124 and a power control module 126. The state machine 122 provides chip-level control of memory operations. The on-chip address decoder 124 provides an address interface between that used by the host or a memory controller to the hardware address used by the decoders 140A, 140B, 142A, and 142B. The power control module 126 controls the power and voltages supplied to the word lines and bit lines during memory operations. An optional RAM memory 133 is provided in one embodiment to assist in memory operations.
[0041] Figure 5 is a block diagram of an individual sense block 200 partitioned into a core portion, referred to as a sense module 210, and a common portion 220. In one embodiment, there will be a separate sense module 210 for each bit line and one common portion 220 for a set of multiple sense modules 210. In one example, a sense block will include one common portion 220 and eight sense modules 210. Each of the sense modules in a group will communicate with the associated common portion via a data bus 206. For further details refer to U.S. Patent Application 11/026,536 "Non-Volatile Memory & Method with Shared Processing for an Aggregate of Sense Amplifiers" filed on 12/29/04 which is incorporated by reference herein in its entirety.
[0042] Sense module 210 comprises sense circuitry 204 that determines whether a conduction current in a connected bit line is above or below a predetermined threshold level. Sense module 210 also includes a bit line latch 202 that is used to set a voltage condition on the connected bit line. For example, a predetermined state latched in bit line latch 202 will result in the connected bit line being pulled to a state designating program inhibit (e.g., Vdd). [0043] Common portion 220 includes a processor 212, a set of data latches 214 and an I/O Interface 216 coupled between the set of data latches 214 and data bus 134. Processor 212 performs computations. For example, one of its functions is to determine the data stored in the sensed memory cell and store the determined data in the set of data latches. The set of data latches 214 is used to store data bits determined by processor 212 during a read operation. It is also used to store data bits imported from the data bus 134 during a program operation. The imported data bits represent write data meant to be programmed into the memory. I/O interface 216 provides an interface between data latches 214 and the data bus 134.
[0044] During read or sensing, the operation of the system is under the control of state machine 122 of Figure 4 that controls the supply of different control gate voltages to the addressed cell via word lines. As it steps through the various predefined control gate voltages corresponding to the various memory states supported by the memory, the sense module 210 will trip at one of these voltages and an output will be provided from sense module 210 to processor 212 via bus 206. At that point, processor 212 determines the resultant memory state by consideration of the tripping event(s) of the sense module and the information about the applied control gate voltage from the state machine via input lines 208. It then computes a binary encoding for the memory state and stores the resultant data bits into data latches 214. In another embodiment of the core portion, bit line latch 202 serves double duty, both as a latch for latching the output of the sense module 210 and also as a bit line latch as described above.
[0045] During program or verify, the data to be programmed is stored in the set of data latches 214 from the data bus 134. The program operation, under the control of the state machine, comprises a series of programming voltage pulses applied to the control gates of the addressed memory cells. Each programming pulse is followed by a read back (verify) to determine if the cell has been programmed to the desired memory state target threshold voltage. Processor 212 monitors the read back memory state relative to the desired memory state. When the two are in agreement, the processor 212 sets the bit line latch 202 so as to cause the bit line to be pulled to a condition designating program inhibit (e.g., Vdd). This inhibits the cell coupled to the bit line from further programming even if programming pulses appear on its control gate. In other embodiments the processor initially loads the bit line latch 202 and the sense circuitry sets it to an inhibit value during the verify process.
[0046] Data latch stack 214 contains a stack of data latches corresponding to the sense module. In one embodiment, there are three data latches per sense module 210. In some implementations (but not required), the data latches are implemented as a shift register so that the parallel data stored therein is converted to serial data for data bus 134, and vice versa. In the preferred embodiment, all the data latches corresponding to the read/write block of memory cells can be linked together to form a block shift register so that a block of data can be input or output by serial transfer. In particular, the bank of read/write modules is adapted so that each of its set of data latches will shift data in to or out of the data bus in sequence as if they are part of a shift register for the entire read/write block.
[0047] In general, memory cells of a page are operated on in parallel. Therefore a corresponding number of sense modules 210 are in operation in parallel. In one embodiment, a page controller (not shown) expediently provides control and timing signals to the sense modules operated in parallel. For more details regarding sense modules 210 and their operation, see United States Patent Application Ser. No. 11/099,133, entitled "Compensating For Coupling During Read Operations Of Non-Volatile Memory," filed April 5, 2005, incorporated by reference in its entirety. Additional information about the structure and/or operations of various embodiments of non-volatile storage devices can be found in (1) United States Patent Application Pub. No. 2004/0057287, "Non-Volatile Memory And Method With Reduced Source Line Bias Errors," published on March 25, 2004; (2) United States Patent Application Pub. No. 2004/0109357, "Non-Volatile Memory And Method with Improved Sensing," published on June 10, 2004; (3) U.S. Patent Application No. 11/015,199 titled "Improved Memory Sensing Circuit And Method For Low Voltage Operation," Inventor Raul-Adrian Cernea, filed on 12/16/04; (4) U.S. Patent Application 11/099,133, titled "Compensating for Coupling During Read Operations of Non-Volatile Memory," Inventor Jian Chen, filed on April 5, 2005; and (5) U.S. Patent Application No. 11/321,953, titled "Reference Sense Amplifier For Non-Volatile Memory," Inventors Siu Lung Chan and Raul-Adrian Cernea, filed on December 28, 2005. All five of the immediately above-listed patent documents are incorporated herein by reference in their entirety.
[0048] In one embodiment, data is programmed to memory cells along a common word line. This word line may be referred to as a selected word line. The remaining word lines of a block are referred to as unselected word lines. The selected word line may have one or two neighboring word lines. If the selected word line has two neighboring word lines, then the neighboring word line on the drain side is referred to as the drain side neighboring word line and the neighboring word line on the source side is referred to as the source side neighboring word line. For example, if WL2 of Figure 2 is the selected word line, then WL1 is the source side neighboring word line and WL3 is the drain side neighboring word line.
[0049] At the end of a successful program process, the threshold voltages of the memory cells should be within one or more distributions of threshold voltages for programmed memory cells or within a distribution of threshold voltages for erased memory cells, as appropriate. Figure 6 illustrates threshold voltage distributions for a group of memory cells when each memory cell stores two bits of data. Figure 6 shows a first threshold voltage distribution E for erased memory cells and three threshold voltage distributions, A, B and C for programmed memory cells. In one embodiment, the threshold voltages in the E distribution are negative and the threshold voltages in the A, B and C distributions are positive.
[0050] Each distinct threshold voltage range of Figure 6 corresponds to predetermined values for the set of data bits. The specific relationship between the data programmed into the memory cell and the threshold voltage levels of the cell depends upon the data encoding scheme adopted for the cells. In one embodiment, data values are assigned to the threshold voltage ranges using a gray code assignment so that if the threshold voltage of a floating gate erroneously shifts to its neighboring physical state, only one bit will be affected. However, in other embodiments, gray coding is not used. One example assigns "11" to threshold voltage range E (state E), "10" to threshold voltage range A (state A), "00" to threshold voltage range B (state B) and "01" to threshold voltage range C (state C). Although Figure 6 shows four states, embodiments in accordance with the present disclosure can also be used with other binary or multi-state structures including those that include more or less than four states.
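As an illustration only of the example assignment just described, the following Python sketch records the mapping and checks its gray-code property; the dictionary and the bit ordering are assumptions used purely for demonstration.

    # Example assignment from above: "11"->E, "10"->A, "00"->B, "01"->C.
    STATE_TO_BITS = {"E": "11", "A": "10", "B": "00", "C": "01"}

    def bits_differ_by_one(a: str, b: str) -> bool:
        # True when the two bit patterns differ in exactly one position.
        return sum(x != y for x, y in zip(a, b)) == 1

    # Physically neighboring states differ by a single bit, so a one-state
    # misread corrupts only one bit of the stored data.
    for s1, s2 in [("E", "A"), ("A", "B"), ("B", "C")]:
        assert bits_differ_by_one(STATE_TO_BITS[s1], STATE_TO_BITS[s2])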
[0051] Figure 6 shows three read reference voltages, Vra, Vrb and Vrc, for reading data from the memory cells. By testing whether the threshold voltage of a given memory cell is above or below Vra, Vrb and Vrc, the system can determine what state the memory cell is in. If a memory cell conducts with Vra applied to its control gate, then the memory cell is in state E. If a memory cell conducts at Vrb and Vrc but not at Vra, then the memory cell is in state A. If the memory cell conducts at Vrc but not at Vra and Vrb, then the memory cell is in state B. If the memory cell does not conduct at Vra, Vrb or Vrc, then the memory cell is in state C. Figure 6 also shows three verify reference voltages, Vva, Vvb and Vvc. When programming memory cells to state A, the system tests whether those memory cells have a threshold voltage greater than or equal to Vva. When programming memory cells to state B, the system will test whether the memory cells have threshold voltages greater than or equal to Vvb. When programming memory cells to state C, the system will determine whether memory cells have their threshold voltage greater than or equal to Vvc.
[0052] Figure 6 also depicts a full sequence programming technique. In full sequence programming, memory cells are programmed from the erased state E directly to any of the programmed states A, B or C. A population of memory cells to be programmed may first be erased so that all the memory cells are in erased state E. A series of program voltage pulses is then applied to the control gates of the selected memory cells to program the memory cells directly into states A, B or C. While some memory cells are being programmed from state E to state A, other memory cells are being programmed from state E to state B and/or from state E to state C.
[0053] Figure 7 illustrates an example of a two-pass technique of programming multi-state memory cells that store data for two different pages: a lower page and an upper page. Four states are depicted. For state E, both pages store a "1." For state A, the lower page stores a 0 and the upper page stores a 1. For state B, both pages store 0. For state C, the lower page stores 1 and the upper page stores 0. Although specific bit patterns have been assigned to each of the states, different bit patterns may be assigned.
[0054] In a first programming pass, the cell's threshold voltage level is set according to the bit to be programmed into the lower logical page. If that bit is a logic 1, the threshold voltage is not changed since it is in the appropriate state as a result of having been earlier erased. However, if the bit to be programmed is a logic 0, the threshold level of the cell is increased to be state A, as shown by arrow 250. That concludes the first programming pass.
[0055] In a second programming pass, the cell's threshold voltage level is set according to the bit being programmed into the upper logical page. If the upper logical page bit is to store a logic 1, then no programming occurs since the cell is in one of the states E or A, depending upon the programming of the lower page bit, both of which carry an upper page bit of 1. If the upper page bit is to be a logic 0, then the threshold voltage is shifted. If the first pass resulted in the cell remaining in the erased state E, then in the second pass the cell is programmed so that the threshold voltage is increased to be within state C, as depicted by arrow 254. If the cell had been programmed into state A as a result of the first programming pass, then the memory cell is further programmed in the second pass so that the threshold voltage is increased to be within state B, as depicted by arrow 252. The result of the second pass is to program the cell into the state designated to store a logic "0" for the upper page without changing the data for the lower page.
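The two programming passes of Figure 7 can be summarized by the following illustrative Python sketch; the function names are hypothetical, and the state transitions follow the description above (state E: lower=1, upper=1; A: lower=0, upper=1; B: lower=0, upper=0; C: lower=1, upper=0).

    def first_pass(lower_bit: int) -> str:
        # Lower-page pass: a 0 raises the cell from E to A (arrow 250); a 1 leaves it erased.
        return "E" if lower_bit == 1 else "A"

    def second_pass(state_after_first: str, upper_bit: int) -> str:
        # Upper-page pass: a 1 leaves the cell alone; a 0 moves E->C (arrow 254) or A->B (arrow 252).
        if upper_bit == 1:
            return state_after_first
        return "C" if state_after_first == "E" else "B"

    # Every lower/upper bit combination lands in the intended state.
    assert second_pass(first_pass(1), 1) == "E"
    assert second_pass(first_pass(0), 1) == "A"
    assert second_pass(first_pass(0), 0) == "B"
    assert second_pass(first_pass(1), 0) == "C"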
[0056] Figures 8A-8C disclose a process for programming non-volatile memory that reduces floating gate to floating gate coupling by, for any particular memory cell, writing to that particular memory cell with respect to a particular page subsequent to writing to adjacent memory cells for previous pages. This technique may be referred to herein as the last first mode (LM) of programming. In the example of Figures 8A-8C, each cell stores two bits of data per memory cell, using four data states. Erased state E stores data 11, state A stores data 01, state B stores data 10, and state C stores data 00. Other encodings of data to physical data states can also be used. Each memory cell stores a portion of two logical pages of data. For reference purposes, these pages are called upper page and lower page but can be given other labels. State A is encoded to store bit 0 for the upper page and bit 1 for the lower page, state B is encoded to store bit 1 for the upper page and bit 0 for the lower page, and state C is encoded to store bit 0 for both pages. The lower page data for memory cells at a word line WLn are programmed in a first step depicted in Figure 8A and upper page data for the cells is programmed in a second step depicted in Figure 8C. If the lower page data is to remain data 1 for a cell, then the cell's threshold voltage remains at state E during the first step. If the lower page data is to be programmed to 0, then the threshold voltage of the memory cell is raised to state B'. State B' is an interim state B having a verify level Vvb', which is lower than Vvb.
[0057] In one embodiment, after the lower page data for the memory cell is programmed, the neighboring memory cells at adjacent word line WLn+1 will be programmed with respect to their lower page. For example, the lower page for memory cells at WL2 in Figures 1-3 may be programmed after the lower page for memory cells at WL1. Floating gate coupling may raise the apparent threshold voltage of memory cell 12 if the threshold voltage of memory cell 10 is raised from state E to state B' after programming memory cell 12. The cumulative coupling effect on the memory cells at WLn will widen the apparent distribution of threshold voltages for the cells, as depicted in Figure 8B. The apparent widening of the threshold voltage distribution can be remedied when programming the upper page for the word line of interest, as shown in Figure 8C.
[0058] Figure 8C depicts the process of programming the upper page for the cell at WLn. If a memory cell is in erased state E and its upper page bit is to remain at 1, the memory cell remains in state E. If the memory cell is in state E and its upper page data bit is to be programmed to 0, the threshold voltage of the memory cell is raised to be within the range for state A. If the memory cell was in intermediate threshold voltage distribution B' and its upper page data is to remain 1, the memory cell is programmed to final state B. If the memory cell is in intermediate threshold voltage distribution B' and its upper page data is to become data 0, the threshold voltage of the memory cell is raised to be within the range for state C. The process depicted by Figures 8A-8C reduces the effect of floating gate coupling because only the upper page programming of neighbor memory cells will affect the apparent threshold voltage of a given memory cell. An example of an alternate state coding for this technique is to move from intermediate state B' to state C when the upper page data is a 1, and to move to state B when the upper page data is 0. Although Figures 8A-8C provide an example with respect to four data states and two pages of data, the concepts can be applied to other implementations with more or fewer than four states and different numbers of pages.
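A minimal Python sketch of the two-step sequence of Figures 8A-8C, assuming the encoding given above (upper, lower: E=11, A=01, B=10, C=00); the helper names are hypothetical and B' denotes the interim state verified at Vvb'.

    def lower_page_step(lower_bit: int) -> str:
        # Figure 8A: a lower-page 0 raises the cell to interim state B'; a 1 leaves it erased.
        return "E" if lower_bit == 1 else "B'"

    def upper_page_step(interim_state: str, upper_bit: int) -> str:
        # Figure 8C: E stays E or moves to A; B' moves to final state B or C.
        if interim_state == "E":
            return "E" if upper_bit == 1 else "A"
        return "B" if upper_bit == 1 else "C"

    # Every (upper, lower) combination lands in the intended final state.
    assert upper_page_step(lower_page_step(1), 1) == "E"   # data 11
    assert upper_page_step(lower_page_step(1), 0) == "A"   # data 01
    assert upper_page_step(lower_page_step(0), 1) == "B"   # data 10
    assert upper_page_step(lower_page_step(0), 0) == "C"   # data 00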
[0059] Figure 9 is a timing diagram depicting the behavior of various signals during one iteration of a read or verify process. Each iteration of the process of Figure 9 represents a single sense operation for each memory cell. If the memory cells are binary memory cells, the process of Figure 9 may be performed once. If the memory cells are multi-state memory cells with four states (e.g., E, A, B, and C), the process of Figure 9 may be performed three times (three sense operations) for each memory cell.
[0060] In general, during the read and verify operations, the selected word line is connected to a read reference voltage, a level of which is specified for each read and verify operation in order to determine whether a threshold voltage of the concerned memory cell has reached such level. After applying the word line voltage, the conduction current of the memory cell is measured to determine whether the memory cell turned on in response to the voltage applied to the word line. If the conduction current is measured to be greater than a particular value, then it is assumed that the memory cell turned on and the voltage applied to the word line is greater than the threshold voltage of the memory cell. If the conduction current is not measured to be greater than the particular value, then it is assumed that the memory cell did not turn on and the voltage applied to the word line is not greater than the threshold voltage of the memory cell.
[0061] There are many ways to measure the conduction current of a memory cell during a read or verify operation. In one example, the conduction current of a memory cell is measured by the rate it discharges a dedicated capacitor in the sense amplifier. In another example, the conduction current of the selected memory cell allows (or fails to allow) the NAND string that includes the memory cell to discharge the bit line. The charge on the bit line is measured after a period of time to see whether it has been discharged or not.
[0062] Figure 9 shows signals SGD, WL_unsel, WLn+1, WLn, SGS, Selected BL, BLCLAMP, and Source starting at Vss (approximately 0 volts). SGD is the gate selection line of the drain side select gate. SGS is the gate selection line of the source side select gate. WLn is the word line selected for reading/verification. WLn+1 is the unselected word line that is the drain side neighboring word line to WLn. WL_unsel represents the unselected word lines other than the drain side neighboring word line. Selected BL is the bit line selected for reading/verification. Source is the source line for the memory cells (see Figure 3). BLCLAMP is an analog signal that sets the value of the bit line when charged from the sense amplifier. Note that there are two versions of SGS, Selected BL and BLCLAMP depicted. One set of these signals SGS (B), Selected BL (B) and BLCLAMP (B) depict a read/verify operation for an array of memory cells that measure the conduction current of a memory cell by determining whether the bit line has discharged. Another set of these signals SGS (C), Selected BL (C) and BLCLAMP (C) depict a read/verify operation for an array of memory cells that measure the conduction current of a memory cell by the rate it discharges a dedicated capacitor in the sense amplifier.
[0063] First, the behavior of the sensing circuits and the array of memory cells that are involved in measuring the conduction current of a memory cell by determining whether the bit line has discharged will be discussed with respect to SGS (B), Selected BL (B), and BLCLAMP (B). At time t1 of Figure 9, SGD is raised to Vdd (e.g., approximately 3.5 volts), the unselected word lines (WL_unsel) are raised to Vread (e.g., approximately 5.5 volts), the drain side neighboring word line (WLn+1) is raised to VreadX, the selected word line WLn is raised to Vcgr (e.g., Vra, Vrb, or Vrc of Figures 6, 7, and 8A-8C) for a read operation or a verify level (e.g., Vva, Vvb, or Vvc of Figure 11) for a verify operation, and BLCLAMP (B) is raised to a pre-charging voltage to pre-charge the selected bit line Selected BL (B) (e.g., to approximately 0.7V). The voltages Vread and VreadX act as pass voltages because they cause the unselected memory cells to turn on (regardless of physical state or threshold voltage) and act as pass gates. At time t2, BLCLAMP (B) is lowered to Vss so the NAND string can control the bit line. Also at time t2, the source side select gate is turned on by raising SGS (B) to Vdd. This provides a path to dissipate the charge on the bit line. If the threshold voltage of the memory cell selected for reading is greater than Vcgr or the verify level applied to the selected word line WLn, then the selected memory cell will not turn on and the bit line will not discharge, as depicted by signal line 260. If the threshold voltage of the memory cell selected for reading is below Vcgr or below the verify level applied to the selected word line WLn, then the memory cell selected for reading will turn on (conduct) and the bit line voltage will dissipate, as depicted by curve 262. At some point after time t2 and prior to time t3 (as determined by the particular implementation), the sense amplifier will determine whether the bit line has dissipated a sufficient amount. In between t2 and t3, BLCLAMP (B) is raised to let the sense amplifier measure the evaluated BL voltage and then lowered. At time t3, the depicted signals will be lowered to Vss (or another value for standby or recovery). Note that in other embodiments, the timing of some of the signals can be changed (e.g., shift the signal applied to the neighbor).
[0064] Next, the behavior of the sensing circuits and the array of memory cells that measure the conduction current of a memory cell by the rate it discharges a dedicated capacitor in the sense amplifier will be discussed with respect to SGS (C), Selected BL (C) and BLCLAMP (C). At time t1 of Figure 9, SGD is raised to Vdd (e.g., approximately 3.5 volts), the unselected word lines (WL_unsel) are raised to Vread (e.g., approximately 5.5 volts), the drain side neighboring word line (WLn+1) is raised to VreadX, the selected word line WLn is raised to Vcgr (e.g., Vra, Vrb, or Vrc of Figure 11) for a read operation or a verify level (e.g., Vva, Vvb, or Vvc of Figure 11) for a verify operation, and BLCLAMP (C) is raised. In this case, the sense amplifier holds the bit line voltage constant regardless of what the NAND string is doing, so the sense amplifier measures the current flowing with the bit line "clamped" to that voltage. Therefore, BLCLAMP (C) rises at t1 and does not change from t1 to t3. At some point after time t1 and prior to time t3 (as determined by the particular implementation), the sense amplifier will determine whether the capacitor in the sense amplifier has dissipated a sufficient amount. At time t3, the depicted signals will be lowered to Vss (or another value for standby or recovery). Note that in other embodiments, the timing of some of the signals can be changed.
[0065] Figure 10 is a flow chart describing one embodiment for reading data from non-volatile memory cells, such as those programmed according to the techniques of Figures 6, 7, or 8A-8C. Figure 10 provides the read process at the system level. At step 300, a request to read data is received. At step 302, a read operation is performed for a particular page in response to the request to read data. In one embodiment, when data for a page is programmed, the system will also create extra bits used for Error Correction Codes (ECCs) and write those ECC bits along with the page of data. ECC technologies are well known in the art. The ECC process used can include any suitable ECC process known in the art. When reading data from a page, the ECC bits will be used to determine whether there are any errors in the data at step 304. The ECC process can be performed by the controller, the state machine or elsewhere in the system. If there are no errors in the data, the data is reported to the user at step 306. If an error is found at step 304, it is determined whether the error is correctable at step 308. The error may be due to floating gate to floating gate coupling or other reasons. Various ECC methods have the ability to correct a predetermined number of errors in a set of data. If the ECC process can correct the data, then the ECC process is used to correct that data at step 310 and the data, as corrected, is reported to the user at step 312. If the data is not correctable by the ECC process, a data recovery process is performed at step 314. In some embodiments, an ECC process will be performed after step 314. More details about the data recovery process are described below. After the data is recovered, that data is reported at step 316. At step 318, the process loops back to step 302 if additional pages are to be read or ends at step 320. Note that the process of Figure 15 can be used with data programmed using all bit line programming or odd/even bit line programming.
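The system-level flow of Figure 10 can be sketched as follows. This is illustrative Python only, and read_page, ecc_check, ecc_correct and recover_page are hypothetical placeholders standing in for the page read of Figure 11, the controller's ECC routines, and a recovery process such as those of Figures 15 and 16.

    def read_with_ecc(pages, read_page, ecc_check, ecc_correct, recover_page):
        results = []
        for page in pages:                       # step 302: read each requested page
            data = read_page(page)
            ok, correctable = ecc_check(data)    # step 304: use the stored ECC bits
            if ok:
                results.append(data)             # step 306: report the data
            elif correctable:                    # steps 308-310: ECC can repair the data
                results.append(ecc_correct(data))
            else:                                # step 314: data recovery (e.g., compensated read)
                results.append(recover_page(page))
        return results                           # steps 306/312/316: data reported to the user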
[0066] Figure 11 is a flow chart describing one embodiment of a process for performing a read operation for a page (step 302 of Figure 10). The process of Figure 11 can be performed for a page that encompasses all bit lines of a block, only odd bit lines of a block, only even bit lines of a block, or other subsets of bit lines of a block. At step 340, read reference voltage Vra is applied to the appropriate word line associated with the page. At step 342, the bit lines associated with the page are sensed to determine whether the addressed memory cells turn on or do not turn on based on the application of Vra to their control gates. Bit lines that conduct indicate that the memory cells were turned on; therefore, the threshold voltages of those memory cells are below Vra (e.g., in state E). At step 344 the result of the sensing for the bit lines is stored in the appropriate latches (e.g., latches 202) for those bit lines. At step 346, read reference voltage Vrb is applied to the word lines associated with the page being read. At step 348, the bit lines are sensed as described above. At step 350, the results are stored in the appropriate latches for the bit lines. At step 352, read reference voltage Vrc is applied to the word lines associated with the page. At step 354, the bit lines are sensed to determine which memory cells turn on, as described above. At step 356, the results from the sensing step are stored in the appropriate latches for the bit lines. At step 358, the data values for each bit line are determined. For example, if a memory cell conducts at Vra, then the memory cell is in state E. If a memory cell conducts at Vrb and Vrc but not at Vra, then the memory cell is in state A. If the memory cell conducts at Vrc but not at Vra and Vrb, then the memory cell is in state B. If the memory cell does not conduct at Vra, Vrb or Vrc, then the memory cell is in state C. In one embodiment, the data values are determined by processor 212. At step 360, processor 212 will store the determined data values in the appropriate latches (e.g., latches 214) for each bit line. In other embodiments, sensing the various levels (Vra, Vrb, and Vrc) may occur in different orders.
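As an illustration of the sequence just described, the following Python sketch performs the three sensing passes and the decode of step 358; sense() is a hypothetical placeholder returning, for each bit line, whether the addressed cell conducts at the applied read reference voltage.

    def read_page(sense, vra, vrb, vrc):
        conducts_a = sense(vra)      # steps 340-344
        conducts_b = sense(vrb)      # steps 346-350
        conducts_c = sense(vrc)      # steps 352-356
        states = []
        for a, b, c in zip(conducts_a, conducts_b, conducts_c):   # step 358
            if a:
                states.append("E")   # conducts at Vra
            elif b:
                states.append("A")   # conducts at Vrb and Vrc but not Vra
            elif c:
                states.append("B")   # conducts at Vrc only
            else:
                states.append("C")   # conducts at none of the levels
        return states                # step 360: stored in the data latches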
[0067] Steps 340-344 can include performing the operation depicted in Figure 9, with Vcgr=Vra and VreadX=Vread. Steps 346-350 can include performing the operation depicted in Figure 9, with Vcgr=Vrb and VreadX=Vread. Steps 352-356 can include performing the operation depicted in Figure 9, with Vcgr=Vrc and VreadX=Vread.
[0068] Floating gate coupling can cause errors during read operations. The charge stored on the floating gate of a memory cell can undergo an apparent shift because of electrical field coupling associated with charge stored at a neighboring memory cell's floating gate or other charge storage region (e.g., dielectric charge storage region). While in theory the electric field from a charge on the floating gate of any memory cell in a memory array can couple to the floating gate of any other memory cell in the array, the effect is most pronounced and noticeable with adjacent memory cells. Adjacent memory cells may include neighboring memory cells that are on the same bit line, neighboring memory cells on the same word line, or neighboring memory cells that are on both a neighboring bit line and neighboring word line, and thus adjacent to each other in a diagonal direction. The apparent shift in charge can result in errors when reading the memory state of a memory cell.
[0069] The effects of floating gate coupling are most pronounced in situations where a memory cell adjacent a target memory cell is programmed subsequent to the target memory cell, although its effects may be seen in other situations as well. A charge placed on the floating gate of an adjacent memory cell, or a portion of the charge, will effectively be coupled to the target memory cell through electrical field coupling, resulting in an apparent shift of the threshold voltage of the target memory cell. A memory cell's apparent threshold voltage can be shifted to such a degree after being programmed that it will not turn on and off (conduct) under the applied read reference voltages as expected for a memory cell in the memory state to which it was intended to be programmed.
[0070] Typically, rows of memory cells are programmed starting with the word line (WL0) adjacent to the source side select gate line. Programming proceeds sequentially thereafter by word line (WL1, WL2, WL3, etc.) through the strings of cells such that at least one page of data is programmed in an adjacent word line (WLn+1) after completing programming (placing each cell of the word line into its final state) of the preceding word line (WLn). This pattern of programming results in an apparent shift of the threshold voltage of memory cells after being programmed due to floating gate coupling. For every word line except the last word line of a NAND string to be programmed, an adjacent word line is programmed subsequent to completing programming of the word line of interest. The negative charge added to the floating gates of memory cells on the adjacent, later programmed word line raises the apparent threshold voltage of the memory cells on the word line of interest.
[0071] Figure 12 graphically explains the concept of floating gate to floating gate coupling. Figure 12 depicts neighboring floating gates 372 and 374, which are on the same NAND string. Floating gates 372 and 374 are situated above NAND channel/substrate 376, which has source/drain regions 378, 380 and 382. Above floating gate 372 is control gate 384 that is connected to and part of word line WLn. Above floating gate 374 is control gate 386 that is connected to and part of word line WLn+1. Although floating gate 372 will likely be subject to coupling from multiple other floating gates, for simplicity Figure 12 only shows the effects from one neighboring memory cell. Figure 12 shows three components of coupling provided to floating gate 372 from its neighbor: r1, r2 and Cr. The component r1 is the coupling ratio between the neighboring floating gates (372 and 374), and is calculated as the capacitance between the neighboring floating gates divided by the sum of all capacitive couplings of floating gate 372 to all the other electrodes surrounding it. The component r2 is the coupling ratio between the floating gate 372 and the drain side neighbor control gate 386, and is calculated as the capacitance between floating gate 372 and control gate 386 divided by the sum of all capacitive couplings of floating gate 372 to all the other electrodes surrounding it. The component Cr is the control gate coupling ratio and is calculated as the capacitance between floating gate 374 and its corresponding control gate 386 divided by the sum of all capacitive couplings of floating gate 372 to all the other electrodes surrounding it.
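For illustration, the three ratios can be computed from capacitance values as in the Python sketch below; the capacitance values are hypothetical, and each ratio is normalized by the total capacitive coupling of floating gate 372, per the definitions above.

    def coupling_ratios(c_fg_fg, c_fg_cgn1, c_fgn1_cgn1, c_total_fg372):
        r1 = c_fg_fg / c_total_fg372       # floating gate 372 to neighboring floating gate 374
        r2 = c_fg_cgn1 / c_total_fg372     # floating gate 372 to neighboring control gate 386 (WLn+1)
        cr = c_fgn1_cgn1 / c_total_fg372   # floating gate 374 to its own control gate 386
        return r1, r2, cr

    # Example with made-up capacitances in arbitrary units.
    r1, r2, cr = coupling_ratios(c_fg_fg=0.05, c_fg_cgn1=0.02,
                                 c_fgn1_cgn1=0.60, c_total_fg372=1.0)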
[0072] Figure 13 shows the apparent threshold voltage distributions for a row of memory cells (e.g., WLn) before (solid curves) and after (dotted curves) its adjacent neighboring row (WLn+1) is programmed. Each distribution is widened as a result of adding negative charge to the floating gates of the memory cells of the adjacent word line. Because of floating gate coupling, the negative charge of a later programmed memory cell on WLn+1 will raise the apparent threshold voltage of a memory cell on WLn that is connected to the same bit line. Distributions 400 and 402 represent the cells of a selected word line WLn in state A before and after programming the adjacent word line WLn+1, respectively. Distributions 404 and 406 represent the cells of WLn in state B before and after programming WLn+1, respectively. Distributions 408 and 410 represent the cells of WLn in state C before and after programming WLn+1, respectively. Because the distributions are widened, memory cells may be read incorrectly as being in an adjacent state. Memory cells at the upper end of each distribution can have an apparent threshold voltage above a corresponding read compare point. For example, when applying reference voltage Vrb, certain memory cells programmed to state A may not conduct sufficiently because of the shift in their apparent threshold voltage. These cells may incorrectly be read as in state B, causing read errors. The later programmed cells can also affect the apparent threshold voltage of memory cells of WLn that are connected to different bit lines, such as those connected to adjacent bit lines.
[0073] Figure 14 graphically depicts a reading technique that can be used to address some of the apparent shifts in threshold voltages illustrated in Figure 13. Figure 15 is a flowchart describing this technique. When reading data on word line WLn, the data of word line WLn+1 can also be read (step 420), and if the data on word line WLn+1 has disturbed the data on WLn, the read process for WLn can compensate for that disturb. For example, when reading word line WLn, state or charge level information for the memory cells at word line WLn+1 can be determined in order to select appropriate read reference voltages for reading individual memory cells of word line WLn. The process of Figure 11 can be used to read WLn+1. Figure 14 depicts individual read reference voltages for reading WLn based on the state of an adjacent memory cell at word line WLn+1. In general, different offsets (e.g., 0V, 0.1V, 0.2V, 0.3V) to the nominal read reference voltages Vra, Vrb, and Vrc are used, and the results of sensing at different offsets are selected as a function of the state of a memory cell on a neighboring word line. In one embodiment, the memory cells at word line WLn are sensed using each of the different read reference voltages including offsets. For a given memory cell, the results from sensing at an appropriate one of the read reference voltages can be selected based upon the state of an adjacent memory cell at word line WLn+1.
[0074] In some embodiments, the read operation for WLn+1 determines the actual data stored at WLn+1, while in other embodiments, the read operation for WLn+1 only determines the charge levels of these cells, which may or may not accurately reflect the data stored at WLn+1. In some embodiments, the levels and/or the number of levels used to read WLn+1 may not be exactly the same as those used to read WLn. Some approximation of the floating gate threshold value can be sufficient for WLn correction purposes in some implementations. In one embodiment, the results of reading at WLn+ 1 can be stored in latches 214 at each bit line to be used when reading WLn (step 422).
[0075] A read operation can first be performed for the word line of interest WLn at the nominal read reference voltage levels Vra, Vrb, and Vrc, which do not compensate for any coupling effects (step 424). The results of reading at the nominal reference levels are stored in the appropriate latches for bit lines with memory cells where the neighboring cell at WLn+1 was determined to be in state E (step 426). For other bit lines, the data is disregarded and the WLn+1 data is maintained. A read operation is then performed for word line WLn using a first set of offsets to the read reference voltages at step 428. The read process can use Vra1 (Vra + 0.1V), Vrb1 (Vrb + 0.1V) and Vrc1 (Vrc + 0.1V) in the process of Figure 11. The results from using these reference values are stored for bit lines with memory cells having neighboring memory cells at WLn+1 in state A at step 430. Data for other bit lines is disregarded. A read operation is then performed at step 432 with a second set of offsets using read reference levels Vra2 (Vra + 0.2V), Vrb2 (Vrb + 0.2V) and Vrc2 (Vrc + 0.2V) in the process of Figure 11. The results are stored in latches for bit lines with memory cells having neighbors at WLn+1 in state B at step 434. Data for other bit lines is disregarded. A read operation is performed for word line WLn with a third set of offsets using reference levels Vra3 (Vra + 0.3V), Vrb3 (Vrb + 0.3V) and Vrc3 (Vrc + 0.3V) in the process of Figure 11 at step 436. The results are stored for those bit lines having memory cells with neighboring cells at WLn+1 in state C at step 438. In some embodiments, no offsets will be used at Vra because of the larger natural margin between state E and state A. Such an embodiment is depicted in Figure 14 where a single read reference voltage Vra is depicted at the state A level. Other embodiments may use offsets for this level as well. The process of Figure 15 can be used for recovering data (e.g., step 314 of Figure 10) or as an initial read process (e.g., step 302).
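The selection logic of Figures 14-15 can be sketched as follows (illustrative Python only); read_neighbor_states and read_page_at are hypothetical placeholders for the WLn+1 read and for a full WLn subread at a given offset, and the 0.1V increments are the example values used above.

    OFFSET_FOR_NEIGHBOR = {"E": 0.0, "A": 0.1, "B": 0.2, "C": 0.3}

    def compensated_read(read_page_at, read_neighbor_states):
        neighbor_states = read_neighbor_states()            # steps 420-422: read WLn+1 first
        results_by_offset = {off: read_page_at(off)         # steps 424, 428, 432, 436: four subreads
                             for off in (0.0, 0.1, 0.2, 0.3)}
        data = []
        for bl, neighbor in enumerate(neighbor_states):     # steps 426, 430, 434, 438: keep only
            off = OFFSET_FOR_NEIGHBOR[neighbor]              # the subread selected by the neighbor state
            data.append(results_by_offset[off][bl])
        return data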
[0076] The different offsets to the nominal read reference voltages can be selected as a function of the state of a memory cell on the adjacent word line. For example, a set of offset values could include a 0V offset corresponding to an adjacent cell in state E, a 0.1V offset corresponding to an adjacent cell in state A, a 0.2V offset corresponding to an adjacent cell in state B, and a 0.3V offset corresponding to an adjacent cell in state C. The offset values will vary according to implementation. In one embodiment, the offset values are equal to the amount of shift in apparent threshold voltage that results from an adjacent cell being programmed to a corresponding state. For example, 0.3V may represent the shift in apparent threshold voltage for a cell at WLn when an adjacent cell at WLn+1 is programmed to state C after programming WLn. The offset values need not be the same for every reference voltage. For example, the offset values for the Vrb reference voltage may be 0V, 0.1V, 0.2V, and 0.3V while those for the Vrc reference voltage may be 0V, 0.15V, 0.25V and 0.35V. Additionally, the increment in offsets need not be the same for every state. For example, a set of offsets in one embodiment could include 0V, 0.1V, 0.3V, and 0.4V for adjacent cells in states E, A, B, and C, respectively.
[0077] Another technique used to compensate for floating gate coupling effects provides compensation to a memory cell adjacent a selected memory cell in order to reduce the coupling effect that the adjacent memory cell has on the selected memory cell. One such embodiment includes setting up, during the verification process, the required conditions for later application of compensation to the adjacent memory cell. In such an embodiment the pass voltage, otherwise known as Vread, applied to WLn+1 is reduced from a typical value of, for example, 6V that is applied to each other unselected word line, down to, for example, 3V. The compensation consists of application of a higher voltage, as compared to the voltage used during the verify phase of program/verify operations, to WLn+1 during the read operation performed on WLn. The compensation can include a change/delta: ΔVread = {[Vread(WLn+1 during read of WLn)] - [Vread(WLn+1 during verify of WLn)]}. The advantage of using a lower Vread value during verify is that it allows the application of nominal values of Vread later during read operations, while maintaining the required ΔVread. Had it not been for the use of a smaller than nominal value of Vread during verify, the necessary value of Vread during read that would allow the application of sufficient ΔVread would have been, for example, 6 + 3 = 9V, which would have been a large voltage that could lead to read disturb conditions. One example of such setting up for later compensation is depicted in Figure 9 as the application of VreadX to the drain side neighboring word line while the other unselected word lines receive Vread. In many prior art devices, all of the unselected word lines would receive Vread. In the embodiment of Figure 9, all of the unselected word lines, except for the drain side neighbor, receive Vread, while the drain side neighbor receives VreadX.
[0078] For the verify process where memory cells are programmed from the source side to the drain side, it is guaranteed (in one embodiment) that when writing to word line WLn, all memory cells on word line WLn+1 are in the erased state (e.g., state E) (note that this is true for full sequence programming and not for LM mode, as explained above). Word line WLn+1 will receive a voltage level VreadX, where VreadX=Vread4 (discussed below). In one embodiment, Vread4 is equal to 3.7V. In another embodiment, VreadX=Vread. In other embodiments, other values can also be used. In different implementations, different values of Vread4 or VreadX can be determined based on device characterization, experimentation and/or simulation.
[0079] In one embodiment, the amount of required compensation, ΔVread, can be calculated as follows:
[0080] ΔVread = (ΔVTn+1) · r1 / (1 + r2 / ((r1)(Cr)))
[0081] Where ΔVTn+1 is the change in threshold voltage of the drain side neighbor memory cell between the time of program/verify of WLn and the present time. ΔVTn+1 and r1 are the root causes of the word line to word line parasitic coupling effect that is mitigated by the present method. ΔVread is the compensation that is brought to bear in order to combat this effect.
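Purely as an illustration, and using the relationship of [0080] as reconstructed above together with made-up coupling ratios, the compensation could be evaluated as follows in Python.

    def delta_vread(delta_vt_n1, r1, r2, cr):
        # Required compensation for a drain side neighbor whose threshold voltage
        # rose by delta_vt_n1 volts after WLn was program/verified.
        return (delta_vt_n1 * r1) / (1.0 + r2 / (r1 * cr))

    # Hypothetical values for illustration only.
    print(delta_vread(delta_vt_n1=2.0, r1=0.05, r2=0.02, cr=0.6))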
[0082] Figure 16 is a flow chart describing one embodiment of a process for performing an initial read operation (step 302) or recovering data (step 314) using the aforementioned technique. The process depicted in Figure 16 applies to the full sequence programming described above with respect to Figure 11 in which two bits of one logical page are stored in each cell and will be read and reported out together. A read operation for the neighboring word line WLn+ 1 is performed at step 450. This can include the process of Figure 11 for the neighboring word line. The results are stored in the appropriate latches at step 452.
[0083] At step 454, a read process is performed for the word line of interest WLn. This can include performing the process of Figure 11 with VreadX=Vread1 (Figure 9). In one embodiment, Vread1=Vread. Thus, all of the unselected word lines (see WL_unsel and WLn+1 of Figure 9) receive Vread. This provides the maximum compensation, as the compensation is determined by the difference between the Vread value used on WLn+1 now during read operations and the Vread value used earlier during the verify phase of program/verify. The compensation value, compC, can be defined as follows: compC = Vread1 - Vreadp = 5.5 - 3 = 2.5V, where Vreadp is the Vread value used during program/verify. The results of step 454 are stored at step 456 in the appropriate latches for bit lines with memory cells where the adjacent cell at WLn+1 was determined (at step 450) to be in state C. Therefore, the maximum compensation, compC, is engaged for cells whose drain side neighbors had experienced the highest change in threshold voltage by being programmed from state E to state C. Note that these drain side neighbors were in state E during program/verify of WLn, but now are in state C. What has to be compensated for under all circumstances is the change in state of the drain side neighbor on WLn+1 experienced between the time of write of WLn and the present time of read of WLn. For other bit lines whose drain side neighbors are not presently detected to be in state C, the data of this read of WLn, which used Vread1 on WLn+1, will be disregarded.
[0084] At step 458, a read process is performed for WLn with the drain side neighbor word line WLn+1 receiving Vread2 (VreadX=Vread2), where Vread2, as compared to Vread1, is closer in value to the Vreadp used during program verify. A smaller compensation appropriate for cells whose drain side neighbors are now in state B is delivered. One example of a compensation is compB = Vread2 - Vreadp = 4.9 - 3 = 1.9V. Thus, Vread2 differs from Vreadp by compB. At step 460, the results of step 458 are stored for bit lines with memory cells having neighboring memory cells at WLn+1 in state B. Data for other bit lines is disregarded.
[0085] At step 462, a read process is performed for WLn with word line WLn+1 receiving Vread3 (VreadX=Vread3), where Vread3, as compared to Vread2, is closer in value to Vreadp used during programming. A yet smaller compensation amount appropriate for cells whose drain side neighbors are now in state A is delivered. One example of a compensation amount is compA = Vread3 - Vreadp = 4.3 - 3 = 1.3V. Thus Vread3 differs from Vreadp by compA. At step 464, the results of step 462 are stored for bit lines with memory cells having neighboring memory cells at WLn+1 in state A. Data for other bit lines is disregarded. [0086] At step 466, a read process is performed for WLn with word line WLn+1 receiving Vread4 (VreadX=Vread4), where Vread4 is identical in value to Vreadp used during programming. This delivers no compensation, which is appropriate for cells whose drain side neighbors are now in state E as they were at the time of program/verify. This compensation amount is compE = Vread4 - Vreadp = 3 - 3 = 0.0V. At step 468, the results of step 466 are stored for bit lines with memory cells having neighboring memory cells at WLn+1 in state E. Data for other bit lines is disregarded. During the process of Figure 16, the neighboring word line will receive four voltages. However, each selected memory cell of WLn being read will only make use of or select the results when sensing at the one appropriate voltage corresponding to the state of its adjacent cell at WLn+1. In different implementations, different values of Vread1, Vread2, Vread3 and Vread4 can be determined based on device characterization, experimentation and/or simulation. For more information on the technique of Figure 16, see U.S. Patent Application No. 11/384,057, entitled "Read Operation for Non-Volatile Storage with Compensation for Coupling," by Nima Mokhlesi, incorporated by reference herein in its entirety.
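A minimal Python sketch of the selection performed in Figure 16, assuming the example voltages above (Vreadp = 3V and Vread1 through Vread4 of 5.5V, 4.9V, 4.3V and 3V); read_neighbor_states and read_wln_with_neighbor_at are hypothetical placeholders for the WLn+1 read and for a WLn subread taken with a given VreadX on WLn+1.

    VREADP = 3.0                                                    # Vread on WLn+1 during program/verify
    VREADX_FOR_NEIGHBOR = {"C": 5.5, "B": 4.9, "A": 4.3, "E": 3.0}  # Vread1..Vread4

    def compensated_read_fig16(read_wln_with_neighbor_at, read_neighbor_states):
        neighbor_states = read_neighbor_states()                    # steps 450-452: read WLn+1
        results = {v: read_wln_with_neighbor_at(v)                  # steps 454, 458, 462, 466
                   for v in set(VREADX_FOR_NEIGHBOR.values())}
        data = []
        for bl, neighbor in enumerate(neighbor_states):             # steps 456, 460, 464, 468
            data.append(results[VREADX_FOR_NEIGHBOR[neighbor]][bl])
        return data

    # Compensation delivered per neighbor state: C: 2.5V, B: 1.9V, A: 1.3V, E: 0.0V.
    COMP = {state: v - VREADP for state, v in VREADX_FOR_NEIGHBOR.items()}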
[0087] In typical operations involving access to a non-volatile memory system, host devices will request multiple pages of data that may span multiple word lines. Traditionally, memory systems read data from multiple word lines in the same order that the data was programmed to those word lines. Word lines are typically programmed beginning with the word line adjacent the source side select gates and ending at the word line adjacent the drain side select gates, although the opposite order can be used as well. Returning to Figure 3, the physical block comprised of the cells connected to word lines WL0 through WLi would be programmed beginning with word line WL0. After completing programming of the memory cells of word line WL0, programming would proceed to word line WL1, WL2, etc., ending with word line WLi. When a request is received to read all of the data from this physical block, the read operation will proceed in the same manner as the programming operation. Reading will begin with word line WL0 and proceed sequentially thereafter until word line WLi is reached.
[0088] Figure 17 is a flowchart depicting a method for reading a physical block of data in accordance with such traditional techniques using one or more compensations. In the example of Figure 17, a physical block is comprised of four word lines and it is assumed that programming begins at word line WL0 and ends with word line WL3. At step 500 a read operation is performed for word line WL1. No compensation is used when reading word line WL1 as this read operation is performed merely to acquire data or charge level information that can be used to more accurately read word line WL0. In one embodiment, the read operation for word line WL1 is performed according to the technique of Figure 11. After performing the read operation for word line WL1, the results (data values or charge level information) are stored at step 502. In one embodiment, step 502 corresponds to step 360 of Figure 11.
[0089] After storing results for word line WL1, WL0 is read using one or more compensations at step 504. The data values read from WL0 are stored at step 506 in the appropriate data latches 214. After performing the read operation for word line WL0 and storing the data values at step 506, the data from word line WL0 is reported to the host at step 508.
[0090] Steps 504 (and steps 514, 524 described below) can include performing a number of subreads at the selected word line in one embodiment. In one embodiment, applying one or more compensations at step 504 (514, 524) includes performing four subreads (each including three sense operations assuming four state memory cells) corresponding to steps 424, 428, 432, and 436 of Figure 15. In another embodiment, step 504 (514, 524) include performing four subreads (each including three sense operations assuming four state memory cells) corresponding to steps 454, 458, 462, and 466 of Figure 16. [0091] In other embodiments, other techniques for applying one or more compensations can be used. For example, a single subread can be used in some embodiments by applying a bit line specific compensation. A bit line specific compensation, as opposed to the word line based compensations shown in Figures 15 and 16, can address each memory cell individually so that each memory cell simultaneously receives its appropriate compensation in parallel with each other memory cell of a common word line being read. One set of conditions is applied to the word lines while each bit line is setup based on the state of the adjacent memory cell of that bit line on the neighboring word line. In this manner, a single subread can be used. For example, the trip point voltage used to determine if a cell is conductive or non-conductive under an applied read reference voltage can be tailored for each bit line, based on the state of the adjacent cell at WLn+ 1 for that bit line. Other parameters such as integration time and pre-charge voltage can also be used to provide bit line specific compensation so that a single subread can be utilized. Tailoring combinations of these various parameters can be used in one embodiment.
[0092] At step 510 a read operation for word line WL2 is performed. Again, step 510 can include performing the read method depicted in Figure 11 in one embodiment. No compensation is used when reading word line WL2 as this read operation is performed merely to acquire data or charge level information that can be used to more accurately read word line WL1. The data values or charge level information read from word line WL2 are stored at step 512. In one embodiment, step 512 corresponds to step 360 of Figure 11. After storing the data values or charge level information for word line WL2, a read operation using one or more compensations is performed for word line WL1 at step 514. The data values determined at step 514 are stored in the appropriate data latches at step 516 and reported to the host device at step 518.
[0093] At step 520 a read operation is performed for word line WL3 which can include performing the method of Figure 11. At step 522 the data values or charge level information are stored for word line WL3 in the appropriate latches. At step 524 a read operation including one or more compensations is performed for word line WL2. The data values for word line WL2 are stored in the appropriate latches at step 526 and reported to the host device at step 528. At step 530 a read operation is performed for word line WL3. In the example of Figure 17 word line WL3 is the last word line to be programmed and read for the memory system. Accordingly no compensations are used at word line WL3. The data values are stored for word line WL3 at step 532 and reported to the host at step 534.
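The overall ordering of Figure 17 can be sketched as follows (illustrative Python only); read_plain, read_compensated and report are hypothetical placeholders for the uncompensated read of Figure 11, a compensated read such as those of Figures 15 or 16, and reporting to the host.

    def read_block_traditional(num_word_lines, read_plain, read_compensated, report):
        # Read word lines in programming order; every word line except the last needs an
        # uncompensated read of its drain side neighbor before it can be read with compensation.
        for wl in range(num_word_lines):
            if wl < num_word_lines - 1:
                neighbor_info = read_plain(wl + 1)                # e.g., steps 500-502, 510-512, 520-522
                report(wl, read_compensated(wl, neighbor_info))   # e.g., steps 504-508, 514-518, 524-528
            else:
                report(wl, read_plain(wl))                        # last word line: no compensation (steps 530-534)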
[0094] As illustrated in Figure 17, these traditional techniques of reading in the same order in which the word lines were programmed necessitate an additional read operation at the neighboring word line when reading each selected word line except the last word line. In order to read word line WL0, for example, word line WL1 must first be read in order to ascertain its data state or charge level information so that the appropriate compensation can be applied while reading word line WL0. Before reading word line WL1 at step 514 for its actual data to be reported to the host, a read operation for word line WL2 must be performed. These additional read operations can affect system performance by requiring more time to complete read operations.
[0095] Figure 18 is a table depicting the number of sense operations that are necessary in such a memory system in some embodiments. In Figure 18, a 16 word line physical or erase block is depicted. Column 550 lists the word lines of the physical block in order, with word line WL15 - the word line adjacent the drain side select gates of the physical block - at the top of the page, and word line WL0 - the word line adjacent the source side select gates for the physical block - at the bottom of the page. The programming sequence for the memory cells of the word lines is depicted in column 552. Programming the physical block begins with word line WL0 and proceeds sequentially thereafter to word line WL15. The sequence for reading the physical block of data is depicted in column 554. Word line WL0 is read first, followed by word line WL1, and proceeding sequentially thereafter to word line WL15. Column 556 illustrates what is necessary to perform the read operation for each word line. In order to read word line WL0, for example, word line WL1 must first be read, as shown in column 556. After reading word line WL1 to determine its data values or charge level information, word line WL0 is read using the information from word line WL1 to apply (or select the results of) an appropriate compensation.
[0096] Column 558 sets forth the number of sense operations and subreads (SR) that are necessary for each read operation. Referring to word line WL0 again, for example, when first reading word line WL1, three sense operations are performed (assuming a four state memory cell). A first sense operation is performed using the read reference level Vra, a second sense operation is performed using the second read reference voltage level Vrb, and a third sense operation is performed using the third read reference voltage level Vrc. These three sense operations comprise one subread at word line WL1. After reading word line WL1, the word line of interest, word line WL0, is read using the data from word line WL1 to apply, or select the results of, the appropriate compensation. Reading word line WL0 involves 12 sense operations corresponding to four subreads at word line WL0. A first subread includes three sense operations at the levels Vra, Vrb and Vrc. A second subread includes three sense operations at the read reference levels Vra plus a first offset, Vrb plus a first offset, and Vrc plus a first offset. A third subread includes three sense operations at the levels Vra plus a second offset, Vrb plus a second offset and Vrc plus a second offset. Finally, the fourth subread will include another three sense operations at the levels Vra plus a third offset, Vrb plus a third offset and Vrc plus a third offset. Alternatively, in a scheme that uses the technique depicted in Figure 16, application of different levels of Vread to the neighboring word line while sensing the word line of interest can be used. In such a technique the first subread will include three sense operations at the levels Vra, Vrb and Vrc while word line WL1 receives Vread1. A second subread will also include application of the read reference voltage levels Vra, Vrb and Vrc, but word line WL1 will receive Vread2. The third subread will include three sense operations at the Vra, Vrb and Vrc levels while word line WL1 receives Vread3. Finally, the fourth subread will again include application of the read reference voltage levels Vra, Vrb and Vrc to word line WL0 while word line WL1 receives Vread4. In total, it can be seen that in order to read each word line of the physical block a total of 15 sense operations is necessary. This is true for each word line of the block except for the last word line of the block to be read. Figure 18 sets forth the total number of sense operations in box 610. It can be seen that in order to read a 16 word line physical block utilizing one of the compensation schemes of Figures 15 or 16, a total of 228 sense operations is necessary.
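The word line based scheme described above amounts to keeping, for each bit line, the result of one of the four subreads according to the neighbor state read earlier. A minimal Python sketch (illustrative only, with hypothetical state names) is:

# Illustrative only: four subreads are performed at the selected word line and,
# for each bit line, the result of one subread is kept according to the state
# read earlier from the adjacent cell on the neighboring word line.
NEIGHBOR_STATE_TO_SUBREAD = {"E": 0, "A": 1, "B": 2, "C": 3}  # hypothetical mapping

def select_results(subread_results, neighbor_states):
    """subread_results[k][i] is the value sensed for bit line i during subread k."""
    return [subread_results[NEIGHBOR_STATE_TO_SUBREAD[state]][i]
            for i, state in enumerate(neighbor_states)]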
[0097] If bit line specific compensations are utilized with a single subread at the word line of interest, the number of sense operations can be reduced, but reading will still include additional reads for the neighboring word line when reading a selected word line. Looking at word line WL0 again, reading word line WL1 will still be performed, but reading WL0 will only include 1 subread (3 sense operations). The total number of sense operations when reading all 16 word lines will be 93, as also shown in Figure 18.
[0098] Embodiments in accordance with the present disclosure seek to reduce the number of sense operations for read operations spanning multiple word lines, such as a read operation in response to a request for data from a physical or erase block. Figure 19 is a table similar to that of Figure 18, incorporating a reverse reading technique in accordance with one embodiment. Word lines WL0 through WL15 are listed in column 570. The programming sequence begins with word line WL0 and ends with WL15 as shown in column 572.
[0099] Column 574 sets forth the read sequence, beginning with word line WL15 and ending with word line WL0. The manner of selecting and reading the word lines to accomplish read sequence 574 is listed in column 576. For each of the word lines when selected for reading, no other word line must be read to accomplish a compensation technique. The information from an adjacent word line needed to apply or select the results of an appropriate read compensation for the selected word line is already known. By reverse reading, the adjacent word line data can be maintained from the already performed substantive read operation for that word line, thereby avoiding duplicative reads. For example, when word line WL0 is selected for reading during the 16th read operation of the sequence, the data from the WL1 read operation, performed 15th, can be utilized. This stands in contrast to the technique depicted in column 556 of Figure 18, where in order to read word line WL0 it is necessary to first read word line WL1, and then repeat reading at word line WL1 at a later time.
[00100] The number of sense operations is set forth in column 578. When reading word line WL15, one subread and three sense operations are performed since WL15 has no later programmed neighbor. Word line WL14 is then read. The data read from the memory cells of word line WL15 are already known, and maintained (e.g., in data latches 214) when reading word line WL14. Accordingly, four subreads at word line WL14, requiring 12 sense operations, are all that are necessary to accurately read the word line. Contrast this with Figure 18, where reading word line WL14 first requires reading WL15, and thus an extra subread and three sense operations. Box 580 sets forth the total number of sense operations when using the technique of Figure 19. Only 183 sense operations are required, compared with 228 using the technique of Figure 18. By reducing the number of sense operations from 228 to 183, performance improvements when reading the non-volatile memory system are obtained. If bit line specific compensations are utilized with a single subread at the word line of interest, the total number of sense operations would be 48. Each word line will be read in one subread using three sense operations. This too provides improvements over traditional techniques, reducing the number of sense operations from 93 to 48.
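The sense operation totals above can be reproduced with simple bookkeeping. The following Python sketch is illustrative only; it assumes four-state cells, three sense operations per subread, and four compensation subreads per word line unless bit line compensation is used, and recovers the 228/183 and 93/48 figures:

# Illustrative only: bookkeeping of the sense operation counts discussed above
# for a 16 word line block with four-state cells (3 sense operations per subread).
SENSES_PER_SUBREAD = 3

def forward_read(word_lines=16, comp_subreads=4):
    # Each word line except the last needs an extra uncompensated read of its
    # neighbor (one subread) plus comp_subreads compensated subreads of itself.
    per_word_line = (1 + comp_subreads) * SENSES_PER_SUBREAD
    last_word_line = 1 * SENSES_PER_SUBREAD      # last word line: no compensation
    return (word_lines - 1) * per_word_line + last_word_line

def reverse_read(word_lines=16, comp_subreads=4):
    # Neighbor data is already latched, so each word line except the last
    # programmed one needs only its own compensated subreads.
    return (word_lines - 1) * comp_subreads * SENSES_PER_SUBREAD + SENSES_PER_SUBREAD

print(forward_read())                  # 228 (Figure 18, word line compensation)
print(reverse_read())                  # 183 (Figure 19, word line compensation)
print(forward_read(comp_subreads=1))   # 93  (Figure 18, bit line compensation)
print(reverse_read(comp_subreads=1))   # 48  (Figure 19, bit line compensation)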
[00101] Figure 19 sets forth a read sequence that can be used for cells programmed by full-sequence programming, upper-lower page programming, and last first mode (LM) programming. The number of sense operations and subreads may vary for upper-lower page reading or LM reading, but will be apparent to those of ordinary skill in the art. While much of the discussion may be presented with respect to full-sequence programmed cells for exemplary purposes, the present disclosure is equally applicable to other processes for cells.
[00102] Figure 20 is a flowchart of a method in accordance with one embodiment for reverse reading a physical block of memory cells. A four word line, four state memory system is illustrated for exemplary purposes; the disclosed techniques may be applied to systems with other numbers of word lines and memory states. The last word line, WL3, to be programmed for the physical block is read at step 600. Because the last word line does not have a later programmed neighboring word line, no compensation is used and the method of Figure 11 can be applied. The data values determined for WL3 are stored at step 602 in the latches (e.g., latches 214) for each corresponding bit line. The data values are buffered at step 604, to be reported to the host device later, as will be described hereinafter. A read operation is performed for word line WL2 at step 606 using one or more compensations. Compensation subreads (e.g., Figure 15 or 16) can be performed in one embodiment and the appropriate read results for each bit line selected, based on the data values determined at step 600 for word line WL3. Alternatively, one or more bit line based compensations can be applied for each bit line and a single subread utilized. At step 608, the data values from the appropriate subread, or from the single subread when using bit line compensation, are stored in the data latches for each bit line. In one embodiment, the data values stored at step 602 are replaced with the WL2 values at step 608. As Figures 15 and 16 illustrate, step 608 may be performed in between or as part of the subreads at step 606. In one embodiment that utilizes the technique of Figure 15, steps 606 and 608 can include performing steps 424-438 of Figure 15. In one embodiment that utilizes different values of Vread as shown in Figure 16, steps 606 and 608 can include performing steps 454-468 of Figure 16. After storing data values for word line WL2 in the latches, the data values are buffered at step 610.
[00103] At step 612 a read operation is performed for word line WL1 using compensation subreads or bit line compensation. Because word line WL2 was read at step 606, reading word line WL1 does not require an additional read of word line WL2. The values stored in the latches for each bit line at step 608 can be used to select the appropriate read results or apply the appropriate bit line compensation. At step 614, the data values for word line WL1 are stored in the data latches. The data stored in the latches at step 608 is overwritten in one embodiment. After storing the data values for word line WL1, the data values are buffered at step 616. At step 618 a read operation is performed at word line WL0 utilizing compensation subreads or bit line compensation. The data values for the memory cells at word line WL0 are stored in the appropriate latches at step 620. At step 622, the data values stored at step 620 are buffered. At step 624, the data buffered at steps 604, 610, 616 and 622 is reported to the host or requesting device.
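The reverse read flow of Figure 20 can be summarized in the following Python sketch. It is illustrative only; read_uncompensated and read_compensated are hypothetical stand-ins for the sensing operations, the latches hold only the most recently read word line, and the buffer accumulates data for later reporting:

# Illustrative only: reverse read of a four word line block (Figure 20).
def reverse_read_block(read_uncompensated, read_compensated, word_lines=(0, 1, 2, 3)):
    buffered = {}
    last = word_lines[-1]
    latches = read_uncompensated(last)       # WL3: no later programmed neighbor, no compensation
    buffered[last] = latches
    for wl in reversed(word_lines[:-1]):     # WL2, then WL1, then WL0
        latches = read_compensated(wl, neighbor_data=latches)  # overwrites the latches
        buffered[wl] = latches               # buffered for later reporting
    return buffered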
[00104] Figure 21 is a flowchart depicting a method for reporting data to the host or requesting device, as can be performed at step 624 of Figure 20. The data buffered at steps 604, 610, 616, and 622 is reordered at step 630. Reordering the data can include putting the data from word line WL0 first, followed by the data from word line WL1, the data from word line WL2, and finally the data from word line WL3. The data was placed into the buffer in the opposite sequence to which it was programmed in steps 604, 610, 616, and 622. Generally, host devices will expect data in the order it was programmed. Thus, the data is reordered to coincide with the programming sequence. At step 632, the data from word line WL0 is reported to the host device. At step 634, the data from word line WL1 is reported to the host device. At step 636, the data from word line WL2 is reported to the host device, and finally at step 638, the data from word line WL3 is reported to the host device.
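A minimal sketch of the reordering performed at step 630, assuming (hypothetically) that the buffered data is keyed by word line number:

# Illustrative only: emit buffered data to the host in programming order.
def report(buffered, send_to_host):
    for wl in sorted(buffered):              # WL0, WL1, WL2, WL3
        send_to_host(buffered[wl])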
[00105] Buffering the data values and reporting them to the host can vary by implementation. For example, the data from each word line can be reported to the host after reading each individual word line. After reading word line WL3 and storing the data values in the latches, for example, the data could be immediately reported to the host. In such an embodiment, buffering the data values at step 604 may not be necessary. The host device will receive the data in the opposite order of programming and can reorder the data if necessary. The location and type of memory for buffering the data may vary by embodiment. In one embodiment, the data is buffered in a memory such as RAM 131 (Figure 4) in or accessible by controller 144. The controller can reorder the data and report it to the host. In another embodiment, the data is buffered in a memory such as RAM 133 in or accessible by control circuitry 120. The control circuitry or controller can reorder the data for delivery to the host.
[00106] In some implementations buffering large amounts of data associated with a physical block may not be practical or otherwise desirable. Accordingly, in one embodiment, physical blocks are divided into read blocks to decrease the amount of data required to be buffered at one time, and consequently, the size of memory for buffering. The memory system can reverse read a portion of the physical block (read block), buffer the data, and report that data to the host device. The memory system can read the next read block of the physical block as the data is reported, and buffer the data from the next read block in its place. This process can be performed as many times as necessary until the entire physical block requested by the host device has been read and reported. Various sizes and ways of partitioning the physical block into individual read blocks can be utilized in accordance with different embodiments.
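As an illustrative aid (not part of the disclosed embodiments), the partitioning of a physical block into read blocks can be expressed as follows; the four word line read block size matches the example of Figure 22 discussed next:

# Illustrative only: divide a physical block into fixed-size read blocks so that
# only one read block's worth of data must be buffered at a time.
def read_blocks(num_word_lines=16, block_size=4):
    return [list(range(start, start + block_size))
            for start in range(0, num_word_lines, block_size)]

print(read_blocks())
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]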
[00107] Figure 22 is a table illustrating the utilization of read blocks in accordance with one embodiment. A 16 word line physical block is illustrated; however, physical blocks of other sizes can be divided in a similar manner. The word lines, listed from WL15 to WL0 in column 650, are programmed beginning with word line WL0 and proceeding sequentially thereafter to word line WL15, as shown in column 652. The physical block of Figure 22 has been partitioned into four individual read blocks. Read block 1 includes word lines WL0, WL1, WL2 and WL3. Read block 2 includes word lines WL4, WL5, WL6 and WL7. Read block 3 includes word lines WL8, WL9, WL10 and WL11. Read block 4 includes word lines WL12, WL13, WL14 and WL15. More or fewer read blocks including different numbers of word lines can also be used.
[00108] The sequence for reading the physical block is set forth in column 654. When a request for data including the physical block is received, the memory system begins reading and reporting individual read blocks. The read sequence begins with word line WL3 in read block 1. Word line WL3 is the last to be programmed for this read block. The individual operations for reading the selected word line are set forth in column 656. In order to read word line WL3, its neighboring drain side word line WL4 is first read. Actual data values or charge level information can be read from the cells of word line WL4. When reading word line WL4, no compensations are utilized. A single subread and three sense operations (assuming a four state device) are used as shown in column 658. After reading word line WL4, word line WL3 is read. Reading word line WL3 includes four subreads and 12 sense operations when using a word line compensation scheme as shown in Figures 15 or 16. The data values for each subread will be selected for individual bit lines based on the state of the corresponding adjacent memory cell at word line WL4. An appropriate bit line compensation could alternatively be applied in one embodiment. After reading word line WL3 the read sequence proceeds to word line WL2. Reading word line WL2 includes only four subreads and 12 sense operations. An additional read operation at word line WL3 is not necessary since the data from word line WL3 is already known. After reading word line WL2, the read sequence proceeds to word line WL1 and word line WL0. After reading read block 1 and buffering the data, the data can be reported to the host device. Reporting the data can include reordering the data to begin with word line WL0 and end with WL3, as shown in Figure 21.
[00109] After the data from read block one has been reported to the host device, read block 2 can be read. In one embodiment, reading read block 2 can begin as the data from read block 1 is being reported. The data from read block 2 can replace the data from read block 1 as it is reported to the host from the buffer memory. Reading read block 2 begins at word line WL7, which includes first reading word line WL8 in order to determine data values or charge level information that can be used when reading word line WL7. Reading the remainder of read block 2 proceeds as described with respect to read block 1, buffering the data from each word line. The data can be reordered and reported to the host, beginning with the data from word line WL4, then word line WL5, then word line WL6, and finally word line WL7.
[00110] After reading and reporting the data from read block 2, the memory system will read the data from read block 3 beginning at word line WL11 as illustrated in column 654. Reading word line WL11 will include first reading word line WL12 to apply compensations when reading WL11, as shown in column 656. After reading word line WL11 and selecting the appropriate values based on the data from word line WL12, the data will be buffered in the memory system. Reading then proceeds to word lines WL10, WL9 and WL8 in a similar fashion to that already described. The memory system will reorder the data beginning with word line WL8 and ending with word line WL11 and report this data to the host device.
[00111] After reading read block 3, the memory system proceeds to read read block 4. Reading read block 4 begins at word line WL15 as shown in column 654. Because word line WL15 is the last to be programmed, it is not necessary to read any other word lines. After reading word line WL15, word line WL14 is read using the data from WL15 to apply an appropriate compensation. After reading and buffering the data from each of the remaining word lines of read block 4, the data can be reordered and reported to the host beginning with word line WL12 and ending with word line WL15. The read block division of Figure 22 utilizes a total of 192 sense operations as shown at 660 when a word line compensation scheme is used. If a bit line compensation scheme is used, 57 sense operations are used. As can be seen when looking at Figure 18, the read block technique of Figure 22 requires fewer sense operations while keeping the required buffer memory size down.
[00112] Figure 23 is a flowchart depicting a method for reading non-volatile memory in a reverse fashion using individual read blocks. The embodiment of Figure 23 is set forth utilizing exemplary conditions of four read blocks as was shown in Figure 22. However, other embodiments may include different numbers of read blocks. At step 700, a read operation is performed for the first word line of the second read block. The first word line refers to the first word line to be programmed for the second read block. Referring to Figure 22, step 700 would include reading word line WL4. The read operation at step 700 does not include the utilization of any compensation subreads (or bit line compensation) because the read operation is intended to obtain information that can be used when reading the source side neighboring word line. At step 702, the data values (or charge level information) for the first word line of the second read block are stored in the data latches. At step 704, a reverse read is performed for the first read block. Reverse reading the first read block will first include reading the last word line of the first read block using compensation based on the first word line of the second read block. Each bit line of the last word line will store the data from the appropriate subread, etc., based on the information determined from reading the first word line of the second read block. After reading the last word line of the first read block, the second to last word line of the first read block will be read and the data values selected based on the previous read of the last word line of the first read block. This will continue until the first word line of the first read block to be programmed has been read. Referring again to Figure 22, step 704 would include reading word line WL3, then word line WL2, then word line WL1 and finally word line WL0. At step 706, the memory system will buffer the data read from the first read block. In one embodiment, buffering will include buffering the data from each individual word line of the first read block after it has been read at step 704. After buffering the data and reading each word line of the first read block, the memory system will reorder the data to correspond to the order in which the data was programmed to the read block. Referring again to Figure 22, this will include reordering the data to begin with the data from word line WL0 and end with the data from word line WL3. After the data has been reordered at step 706, the memory system will report the data of the first read block to the host device at step 708.
[00113] At step 710, a read operation is performed for the first word line of the third read block. Referring to Figure 22 again, step 710 would include reading word line WL8 of read block 3. The read operation for the first word line of read block 3 will not include compensation. At step 712, the data values read from the first word line of read block 3 are stored in the latches. At step 714, a reverse read operation for the second read block is performed. This reverse read operation will first include reading the last word line of read block 2 utilizing compensation based on the information determined from reading the first word line of read block 3 at step 710. Referring again to Figure 22, performing step 714 can include reading word line WL7 (using data from WL8), followed by word line WL6 (using data from WL7), word line WL5 (using data from WL6), and finally word line WL4 (using data from WL5). At step 716, the data from reverse reading read block 2 is buffered. Again, buffering the data may include buffering the data after reading each individual word line of read block 2. After each word line has been read and the data buffered, the memory system will reorder the data of read block 2, also at step 716. After reordering the data, the data is reported to the host device at step 718. Referring again to Figure 22, step 718 can include reporting the data from word line WL4, then the data from word line WL5, then the data from word line WL6, and concluding with the data from word line WL7.
[00114] At step 720 a read operation for the first word line of the fourth read block is performed. Again, this may include reading word line WL12 as depicted in column 656 for word line WL11 in Figure 22. The data values for the first word line of the fourth read block are stored in the data latches at step 722. At step 724, a reverse read operation for the third read block is performed as described for read blocks 1 and 2. This will include reading the last word line of the third read block to be programmed first. At step 726 the memory system will buffer and reorder the data into the order in which it was originally programmed. At step 728, the reordered data will be reported to the host device. At step 730 a reverse read of the fourth read block is performed. Because the fourth read block is the last read block, and includes the last word line to be programmed for the physical block, it is not necessary to read any word lines outside of read block 4. Reverse reading read block 4 in the embodiment of Figure 22 can include reading word line WL15, then word line WL14, then word line WL13, and concluding with word line WL12. The data from the individual word lines will be buffered at step 732. Also at step 732, the memory system will reorder the data to begin with the data from word line WL12 and conclude with the data from word line WL15. This reordered data is then reported to the host device at step 734.
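The per read block procedure of Figures 22 and 23 can be generalized to any number of read blocks, as in the following Python sketch. It is illustrative only; read_uncompensated, read_compensated, and report are hypothetical stand-ins for the operations described above:

# Illustrative only: reverse read a physical block one read block at a time.
# For every read block except the last, the first word line of the next read
# block is read (without compensation) to seed compensation for the current
# block's last programmed word line.
def reverse_read_physical_block(blocks, read_uncompensated, read_compensated, report):
    for i, block in enumerate(blocks):
        if i + 1 < len(blocks):
            neighbor = read_uncompensated(blocks[i + 1][0])   # e.g., WL4 before read block 1
        else:
            neighbor = None                                   # last read block: no outside read
        buffered = {}
        for wl in reversed(block):                            # last programmed word line first
            if neighbor is None:
                data = read_uncompensated(wl)                 # last programmed word line of the physical block
            else:
                data = read_compensated(wl, neighbor_data=neighbor)
            buffered[wl] = data
            neighbor = data                                   # latched data serves the next word line
        report([buffered[wl] for wl in block])                # reorder into programming order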
[00115] In one embodiment, threshold sizes of requested data from host devices are utilized in order to determine whether and to what extent to perform reverse reading operations. Figure 24 depicts one embodiment utilizing two such thresholds. At step 740, a request to read data is received from the host device. The memory system determines the size of the requested data at step 742. The memory system will compare the size of the requested data to a first threshold at step 744. If the requested size of data is smaller than the first threshold size of data, no reverse read operations are performed. The host device will have requested a small amount of data and, in order to efficiently process this small amount of data, the memory system will quickly read the requested data and provide it to the host device. A standard read operation using compensation can be performed at step 750. In one embodiment, the operation performed at step 750 will be performed as depicted in Figure 15 or Figure 16. For example, if the size of the requested data corresponds to one word line or less, the memory system can read the appropriate neighboring word line as shown at step 420 of Figure 15 or step 450 of Figure 16. After reading the neighboring word line, the selected word line will be read utilizing compensation subreads as depicted in steps 422-438 of Figure 15 or steps 452-468 of Figure 16. Bit line compensation can also be used. If the size of the requested data corresponds to more than one word line, individual performances of the methods of Figure 15 or Figure 16 can be used to read those multiple word lines.
[00116] If the size of the requested data is greater than the first threshold, the memory system will compare the size of the requested data with a second threshold at step 746. The second threshold size is larger than the first. If the size of the requested data is larger than the first threshold but smaller than the second threshold, a reverse read operation of one or more read blocks is performed at step 752. For example, a reverse read of the read block containing the requested data (e.g., read block 1) can be performed at step 752. If the size of the requested data is larger than the second threshold, a reverse read operation for one or more physical blocks can be performed at step 748.
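A minimal sketch of the two threshold dispatch of Figure 24, with hypothetical handler names and threshold values:

# Illustrative only: choose a read strategy from the requested data size.
def handle_read_request(request_size, threshold_1, threshold_2,
                        standard_read, reverse_read_blocks, reverse_read_physical):
    if request_size < threshold_1:
        return standard_read()            # small request: compensated read, no reverse reading
    if request_size < threshold_2:
        return reverse_read_blocks()      # medium request: reverse read one or more read blocks
    return reverse_read_physical()        # large request: reverse read physical block(s)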
[00117] Numerous variations to the technique disclosed in Figure 24 can be utilized. For example, if the requested data is not larger than the first threshold, a reverse read of at least one read block could be performed in all cases. The host may later request the data and it may be efficient to read and buffer it to avoid reading it later. In such a case, reverse reading one or more read blocks at step 752 may always include reverse reading at least two read blocks. Another embodiment may utilize a single threshold value. In such a case if the requested data is smaller than a first threshold size, the memory system may perform standard read operations as depicted in Figures 15 and 16 without reverse reading. In another similar embodiment, if the requested data is smaller than the first threshold size, the memory system may reverse read the physical block beginning with the requested word line and then proceeding in the opposite direction of programming to the end of the physical block. Another option could include moving in the direction of programming by a few word lines before beginning the reverse read. Where one threshold is used, if the size of the requested data is larger than the threshold size, the memory system can reverse read the corresponding physical block(s).
[00118] The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto.

Claims

What is claimed is:
1. A method of operating non-volatile storage, comprising: programming non-volatile storage elements coupled to a plurality of word lines beginning with a first word line adjacent a first set of select gates and ending with a last word line adjacent a second set of select gates, said programming includes changing a threshold voltage of selected ones of said storage elements in accordance with a target memory state; and reading said non-volatile storage elements coupled to said plurality of word lines beginning with said last word line and ending with said first word line, said reading includes for each word line except said last word line, applying one or more compensations based on reading a word line adjacent to said each word line in a direction of said second set of select gates.
2. The method of claim 1, wherein applying one or more compensations includes, for each word line except said last word line: performing a plurality of subreads and selecting a result of one of said subreads for each storage element of said each word line based on reading said word line adjacent to said each word line in said direction of said second set of select gates.
3. The method of claim 2, wherein performing a plurality of subreads for each word line except said last word line includes: performing a first subread by applying a first set of read reference voltages to said each word line and sensing a conduction of said storage elements of said each word line; and performing a second subread by applying a second set of read reference voltages to said each word line and sensing a conduction of said storage elements of said each word line.
4. The method of claim 3, wherein selecting a result of one of said subreads includes: selecting a result of said first subread for each storage element of said each word line having an adjacent memory cell on said adjacent word line that was determined to be in a first physical state during said reading said word line adjacent to said each word line; and selecting a result of said second subread for each storage element of said each word line having an adjacent memory cell of said adjacent word line that was determined to be in a second physical state during said reading said word line adjacent to said each word line.
5. The method of claim 2, wherein performing a plurality of subreads for each word line except said last word line includes: performing a first subread by applying a set of read reference voltages to said each word line, applying a first voltage to said word line adjacent said each word line, and sensing a conduction of said storage elements of said each word line while applying said set of read reference voltages and said first voltage; and performing a second subread by applying said set of read reference voltages to said each word line, applying a second voltage to said word line adjacent said each word line, and sensing a conduction of said storage elements of said each word line while applying said set of read reference voltages and said second voltage.
6. The method of claim 5, wherein selecting a result of one of said subreads includes: selecting a result of said first subread for each storage element of said each word line having an adjacent memory cell of said adjacent word line that was determined to be in a first physical state during said reading said word line adjacent to said each word line; selecting a result of said second subread for each storage element of said each word line having an adjacent memory cell of said adjacent word line that was determined to be in a second physical state during said reading said word line adjacent to said each word line.
7. The method of claim 1, wherein said non-volatile storage elements coupled to said plurality of word lines are part of a memory device, said method further comprising: temporarily storing data from said plurality of non-volatile storage elements in said memory device beginning with data from non-volatile storage elements coupled to said last word line and ending with data from non-volatile storage elements coupled to said first word line.
8. The method of claim 7, further comprising: providing data from said non-volatile storage elements coupled to said plurality of word lines to an output of said memory device in a sequence beginning with data from non-volatile storage elements coupled to said first word line and ending with data from non-volatile storage elements coupled to said last word line.
9. The method of claim 7, wherein: said non-volatile storage elements coupled to said plurality of word lines are formed on one or more memory chips; and temporarily storing data from said plurality of non-volatile storage elements includes buffering said data at said one or more memory chips.
10. The method of claim 7, wherein: said non-volatile storage elements coupled to said plurality of word lines are formed on one or more memory chips, said one or more memory chips are in communication with a controller on a different chip; and temporarily storing data from said plurality of non-volatile storage elements includes buffering said data at said different chip.
11. The method of claim 7, wherein: temporarily storing data from said plurality of non-volatile storage elements includes buffering said data in a random access memory of said memory device.
12. The method of claim 1, wherein applying one or more compensations includes, for each word line except said last word line: performing a single subread while applying at least one bit line based compensation for each storage element coupled to said each word line based on reading said word line adjacent to said each word line.
13. The method of claim 12, wherein applying at least one bit line based compensation for each storage element includes: controlling at least one of an integration time, a pre-charge voltage, and a breakpoint voltage used by a sense module for reading said each storage element.
14. The method of claim 1, further comprising: after reading storage elements of each word line and before reading storage elements of another word line, storing data values for each storage element of said each word line in one or more data latches associated with a bit line of said each storage element; after storing said data values for each storage element of said each word line, buffering said data values in a different memory.
15. The method of claim 14, wherein: storing data values for each storage element of said each word line includes, for each word line except said last word line, overwriting one or more data values from a previously read storage element of an adjacent word line in said one or more data latches; buffering said data values includes, for each word line except said last word line, storing said data values with data values from previously read storage elements of an adjacent word line.
16. The method of claim 1, wherein programming said non-volatile storage elements coupled to said plurality of word lines includes programming said non-volatile storage elements by full sequence programming.
17. The method of claim 1, wherein programming said non-volatile storage elements coupled to said plurality of word lines includes programming said non-volatile storage elements using upper page/lower page programming.
18. The method of claim 1, wherein programming said non-volatile storage elements coupled to said plurality of word lines includes programming said non-volatile storage elements using last first mode programming.
19. The method of claim 1, wherein said non-volatile storage elements are multi-state non-volatile storage elements.
20. The method of claim 1, wherein said non-volatile storage elements are part of a NAND flash memory system.
21. A non-volatile memory system, comprising: a first set of select gates; a second set of select gates; a plurality of word lines including a first word line adjacent said first set of select gates and a last word line adjacent said second set of select gates; a plurality of non-volatile storage elements coupled to said plurality of word lines; and managing circuitry in communication with said plurality of non-volatile storage elements, said managing circuitry programs said storage elements beginning with said first word line and ending with said last word line by changing a threshold voltage of selected ones of said storage elements in accordance with a target memory state, said managing circuitry reads said non-volatile storage elements beginning with said last word line and ending with said first word line, said managing circuitry applies one or more compensations when reading each word line except said last word line based on reading a word line adjacent to said each word line in a direction of said second set of select gates.
22. The non-volatile memory system of claim 21, wherein said managing circuitry applies one or more compensations when reading each word line except said last word line by: performing a plurality of subreads and selecting a result of one of said subreads for each storage element of said each word line based on reading a word line adjacent to said each word line in a direction of said second set of select gates.
23. The non-volatile memory system of claim 22, wherein said managing circuitry performing a plurality of subreads for each word line except said last word line includes: performing a first subread by applying a first set of read reference voltages to said each word line and sensing a conduction of said storage elements of said each word line; and performing a second subread by applying a second set of read reference voltages to said each word line and sensing a conduction of said storage elements of said each word line.
24. The non-volatile memory system of claim 23, wherein said managing circuitry selecting a result of one of said subreads includes: selecting a result of said first subread for each storage element of said each word line having an adjacent memory cell on said adjacent word line that was determined to be in a first physical state during said reading said word line adjacent to said each word line; and selecting a result of said second subread for each storage element of said each word line having an adjacent memory cell of said adjacent word line that was determined to be in a second physical state during said reading said word line adjacent to said each word line.
25. The non-volatile memory system of claim 22, wherein said managing circuitry performing a plurality of subreads for each word line except said last word line includes: performing a first subread by applying a set of read reference voltages to said each word line, applying a first voltage to said word line adjacent said each word line, and sensing a conduction of said storage elements of said each word line while applying said set of read reference voltages and said first voltage; and performing a second subread by applying said set of read reference voltages to said each word line, applying a second voltage to said word line adjacent said each word line, and sensing a conduction of said storage elements of said each word line while applying said set of read reference voltages and said second voltage.
26. The non-volatile memory system of claim 25, wherein said managing circuitry selecting a result of one of said subreads includes: selecting a result of said first subread for each storage element of said each word line having an adjacent memory cell of said adjacent word line that was determined to be in a first physical state during said reading said word line adjacent to said each word line; and selecting a result of said second subread for each storage element of said each word line having an adjacent memory cell of said adjacent word line that was determined to be in a second physical state during said reading said word line adjacent to said each word line.
27. The non-volatile memory system of claim 21, wherein said managing circuitry: temporarily stores data from said plurality of non-volatile storage elements in a memory beginning with data from non-volatile storage elements coupled to said last word line and ending with data from non-volatile storage elements coupled to said first word line.
28. The non-volatile memory system of claim 27, further comprising: an output; wherein said managing circuitry provides data from said non-volatile storage elements coupled to said plurality of word lines to said output in a sequence beginning with data from non-volatile storage elements coupled to said first word line and ending with data from non-volatile storage elements coupled to said last word line.
29. The non-volatile memory system of claim 27, wherein: said non-volatile storage elements coupled to said plurality of word lines are formed on one or more memory chips; and said managing circuitry temporarily storing data from said plurality of non-volatile storage elements includes buffering said data at said one or more memory chips.
30. The non-volatile memory system of claim 27, wherein: said non-volatile storage elements coupled to said plurality of word lines are formed on one or more memory chips; at least a portion of said managing circuitry is on a different chip; and said managing circuitry temporarily storing data from said plurality of non-volatile storage elements includes buffering said data at a memory on said different chip.
31. The non-volatile memory system of claim 27, further comprising: a random access memory; said managing circuitry temporarily storing data from said plurality of non-volatile storage elements includes buffering said data in said random access memory.
32. The non-volatile memory system of claim 21, wherein said managing circuitry applies one or more compensations when reading each word line except said last word line by: performing a single subread while applying at least one bit line based compensation for each storage element coupled to said each word line based on reading said word line adjacent to said each word line.
33. The non-volatile memory system of claim 32, wherein said managing circuitry applying at least one bit line based compensation for each storage element includes: controlling at least one of an integration time, a pre-charge voltage, and a breakpoint voltage used by a sense module for reading said each storage element.
34. The non-volatile memory system of claim 21, further comprising: one or more data latches associated with a bit line of said each storage element; wherein said managing circuitry, after reading storage elements of each word line and before reading storage elements of another word line, stores data values for each storage element of said each word line in said one or more data latches associated with said each storage element; wherein said managing circuitry, after storing said data values for each storage element of said each word line, buffers said data values in a different memory.
35. The non-volatile memory system of claim 34, wherein: said managing circuitry storing data values for each storage element of said each word line includes, for each word line except said last word line, overwriting one or more data values from a previously read storage element of an adjacent word line in said one or more data latches; and said managing circuitry buffering said data values includes, for each word line except said last word line, storing said data values with data values from previously read storage elements of an adjacent word line.
36. The non-volatile memory system of claim 21, wherein said managing circuitry programs said non-volatile storage elements coupled to said plurality of word lines using full sequence programming.
37. The non-volatile memory system of claim 21, wherein said managing circuitry programs said non-volatile storage elements coupled to said plurality of word lines using upper page/lower page programming.
38. The non-volatile memory system of claim 21, wherein said managing circuitry programs said non-volatile storage elements coupled to said plurality of word lines using last first mode programming.
39. The non-volatile memory system of claim 21, wherein said non-volatile storage elements are multi-state non-volatile storage elements.
40. The non-volatile memory system of claim 21, wherein said managing circuitry includes at least one of a controller and a state machine.
41. The non-volatile memory system of claim 21, wherein said plurality of non-volatile storage elements are part of a NAND flash memory system.
PCT/US2007/078950 2006-09-29 2007-09-19 Reverse reading in non-volatile memory with compensation for coupling WO2008042605A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/537,556 US7447076B2 (en) 2006-09-29 2006-09-29 Systems for reverse reading in non-volatile memory with compensation for coupling
US11/537,548 US7684247B2 (en) 2006-09-29 2006-09-29 Reverse reading in non-volatile memory with compensation for coupling
US11/537,556 2006-09-29
US11/537,548 2006-09-29

Publications (1)

Publication Number Publication Date
WO2008042605A1 true WO2008042605A1 (en) 2008-04-10

Family

ID=39126210

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/078950 WO2008042605A1 (en) 2006-09-29 2007-09-19 Reverse reading in non-volatile memory with compensation for coupling

Country Status (2)

Country Link
TW (1) TWI356417B (en)
WO (1) WO2008042605A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI579853B (en) * 2014-09-12 2017-04-21 東芝股份有限公司 Semiconductor memory device
US10719272B2 (en) 2018-07-26 2020-07-21 Silicon Motion, Inc. Data storage device and control method for non-volatile memory

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6895002B1 (en) 2020-05-27 2021-06-30 ウィンボンド エレクトロニクス コーポレーション Semiconductor storage device and readout method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5867429A (en) * 1997-11-19 1999-02-02 Sandisk Corporation High density non-volatile flash memory without adverse effects of electric field coupling between adjacent floating gates
US20040213031A1 (en) * 2003-04-22 2004-10-28 Kabushiki Kaisha Toshiba Non-volatile semiconductor memory device and electric device with the same
US20050162913A1 (en) * 2004-01-26 2005-07-28 Jian Chen Method of reading NAND memory to compensate for coupling between storage elements
WO2006107731A1 (en) * 2005-04-05 2006-10-12 Sandisk Corporation Read operation for non-volatile storage that includes compensation for coupling

Also Published As

Publication number Publication date
TWI356417B (en) 2012-01-11
TW200822119A (en) 2008-05-16

Similar Documents

Publication Publication Date Title
USRE46279E1 (en) Read operation for non-volatile storage with compensation for coupling
US7436733B2 (en) System for performing read operation on non-volatile storage with compensation for coupling
US7684247B2 (en) Reverse reading in non-volatile memory with compensation for coupling
US7289344B2 (en) Reverse coupling effect with timing information for non-volatile memory
EP2161723B1 (en) Read operation for non-volatile storage with compensation for floating gate coupling
US7310272B1 (en) System for performing data pattern sensitivity compensation using different voltage
US7447076B2 (en) Systems for reverse reading in non-volatile memory with compensation for coupling
US7450421B2 (en) Data pattern sensitivity compensation using different voltage
US7492634B2 (en) Method for programming of multi-state non-volatile memory using smart verify
US7518923B2 (en) Margined neighbor reading for non-volatile memory read operations including coupling compensation
US8743615B2 (en) Read compensation for partially programmed blocks of non-volatile storage
US7606070B2 (en) Systems for margined neighbor reading for non-volatile memory read operations including coupling compensation
EP2100307B1 (en) Margined neighbor reading for non-volatile memory read operations including coupling compensation
US7583531B2 (en) Reverse coupling effect with timing information
WO2008042605A1 (en) Reverse reading in non-volatile memory with compensation for coupling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07842819

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 1520/DELNP/2009

Country of ref document: IN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07842819

Country of ref document: EP

Kind code of ref document: A1