TWI272487B - Non-volatile memory and method with memory planes alignment - Google Patents

Non-volatile memory and method with memory planes alignment

Info

Publication number
TWI272487B
Authority
TW
Taiwan
Prior art keywords
block
logical
memory
update
data
Prior art date
Application number
TW093141380A
Other languages
Chinese (zh)
Other versions
TW200601042A (en)
Inventor
Sergey Gorobets
Peter John Smith
Alan David Bennett
Original Assignee
Sandisk Corp
Priority date
Priority claimed from US 10/750,155 (US 7,139,864 B2)
Application filed by Sandisk Corp
Publication of TW200601042A
Application granted
Publication of TWI272487B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 16/00: Erasable programmable read-only memories
    • G11C 16/02: Erasable programmable read-only memories electrically programmable
    • G11C 16/06: Auxiliary circuits, e.g. for writing into memory
    • G11C 16/10: Programming or data input circuits
    • G11C 16/102: External programming circuits, e.g. EPROM programmers; In-circuit programming or reprogramming; EPROM emulators
    • G11C 16/105: Circuits or methods for updating contents of nonvolatile memory, especially with 'security' features to ensure reliable replacement, i.e. preventing that old data is lost before new data is reliably written
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 7/00: Arrangements for writing information into, or reading information out from, a digital store
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72: Details relating to flash memory management
    • G06F 2212/7202: Allocation control and policies
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72: Details relating to flash memory management
    • G06F 2212/7208: Multiple device management, e.g. distributing data over multiple flash devices
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/56: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C 11/5621: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency using charge storage in a floating gate
    • G11C 11/5628: Programming or writing circuits; Data input circuits
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 16/00: Erasable programmable read-only memories
    • G11C 16/02: Erasable programmable read-only memories electrically programmable
    • G11C 16/06: Auxiliary circuits, e.g. for writing into memory
    • G11C 16/10: Programming or data input circuits
    • G11C 16/102: External programming circuits, e.g. EPROM programmers; In-circuit programming or reprogramming; EPROM emulators

Abstract

A non-volatile memory is constituted from a set of memory planes, each having its own set of read/write circuits so that the memory planes can operate in parallel. The memory is further organized into erasable blocks, each for storing a logical group of logical units of data. When a logical unit is updated, all versions of it are maintained in the same plane as the original. Preferably, all versions of a logical unit are also aligned within the plane so that they are all serviced by the same set of sensing circuits. In a subsequent garbage collection operation, the latest version of the logical unit then need not be retrieved from a different plane or a different set of sensing circuits, which would otherwise reduce performance. In one embodiment, any gaps left by the alignment are padded by copying the latest versions of logical units into them in sequential order.
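The plane-alignment rule described in the abstract, where every version of a logical unit stays in its original plane and any gap before an aligned write is padded with the latest versions of the intervening logical units in sequential order, can be sketched as follows. This is a minimal illustrative model in Python; the class, the names (`UpdateBlock`, `NUM_PLANES`), and the assumption of a zero page tag are inventions for illustration, not the patent's actual firmware.

```python
NUM_PLANES = 4  # illustrative; real devices vary

class UpdateBlock:
    """Toy model of a plane-aligned update block: slot i of the block is
    serviced by plane i % NUM_PLANES, and logical unit k is "aligned" when
    it sits in plane k % NUM_PLANES (same plane as its original copy)."""

    def __init__(self, original):
        self.slots = []                 # recorded (logical_index, data, is_pad)
        self.original = list(original)  # current versions in the original block

    def latest(self, logical_index):
        # The most recently recorded version wins; otherwise fall back to
        # the copy still held in the original block.
        for idx, data, _ in reversed(self.slots):
            if idx == logical_index:
                return data
        return self.original[logical_index]

    def write_update(self, logical_index, data):
        n = len(self.original)
        target_plane = logical_index % NUM_PLANES
        # Next sequential logical unit after the last one recorded.
        pad = (self.slots[-1][0] + 1) % n if self.slots else 0
        # Pad intermediate slots with current versions, in sequential order,
        # until the next free slot lies in the target plane.
        while len(self.slots) % NUM_PLANES != target_plane:
            self.slots.append((pad, self.latest(pad), True))
            pad = (pad + 1) % n
        self.slots.append((logical_index, data, False))

original = ['v%d' % i for i in range(8)]
blk = UpdateBlock(original)
blk.write_update(2, 'new2')    # pads slots 0-1 with v0, v1, then writes new2
blk.write_update(2, 'newer2')  # pads slots 3-5 with v3, v4, v5, then slot 6
```

Because every slot is either an update or sequential padding, each recorded logical unit lands in its home plane, so a later consolidation can read one unit per plane in parallel and obtain them already in logical order.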

Description

1272487 九、發明說明: 【發明所屬之技術領域】 記憶體,尤其有關具有 導體記憶體。 本發明一般有關非揮發性半導體 記憶體區塊管理系統的非揮發性半 【先前技術】 此夠進仃電叙非揮發性儲存的固態記憶體,尤其是屬 於封衣成小型讀卡^EEPRqm及快閃卿⑽Μ的形式,近 來=成為各種行動及手持裝置選擇的儲存裝置,特別是資 ::電及消費性電子產品。不像也是固態記憶體的編(隨 广:取記憶體),快閃記憶體係為非揮發性,即使在關閉電 源後仍能保留其儲存的資料。還有,不像職(唯讀Μ 體夬閃記憶體和磁碟儲存裝置—樣都是可再寫。儘管成: =車又二彳-在大謂存應用中’快閃記憶體的使用漸增。 基於旋轉磁性媒體的羽 Β 、 、、白用大罝儲存裝置,如硬碟機及軟 ’不適"仃動及手持環境。這是因為磁碟機傾向於體 積龐大’I易機械故障且具有高等待時間及高功率需求。 這些不想要的屬性使得基於磁碟的儲存裝置無法在大 的行動及可攜式應用中使用。另一方面,既為内嵌式且為 式屺卡之形式的快閃記憶體在理想上極為適 動及手持環境,因其尺寸小、功率消祕、高速可: 性等特性。 η 了罪 項記憶 寫入或 二者均 快閃EEPROM和EEpR〇M(電子可抹除及可程式唯 :)的相似處在於’其為能夠被抹除且能將新的資料 式化」至其圮憶體單元中的非揮發性記憶體。 98680.doc 1272487 利用場效電晶體結構中,位在半導體基板中通道區之上且 在源極及汲極區之間的浮動(未連接)導電閘極。接著在浮動 閉極之上提供控制閘極。浮動閘極所保留的電荷量可控制 電晶體的定限電壓特性。也就是說,就浮動閘極上給定位 準的電荷而言’其中有必須在「開啟」電晶體之前施加於 控制閘極的對應電Μ(定限),以允許在其源極及汲極區之間 的導電。尤其,如快閃刪⑽的快閃記憶體允許同時抹 除記憶體單元的整個區塊。 —浮動閘極可以保持某個電荷範圍,因此,可經程式化為 定限電壓窗㈣任何定限電壓位準。定限電㈣的尺寸受 限於裝置的最小及最大定限位準,而裝置的最小及最大: 限位準對㈣可經程式化至浮動閘極上的電荷範圍。定限 窗一般根據記憶體裝置㈣性、操作條件及記錄而定。原 ^ 在此南Θ,各有所不同的、可解析的定限電;1位準 祀圍均可用於指定單元之明確的記憶體狀態。 。通㊉會藉由兩種機制之_,將作為記憶體單元的電晶體 程式化為:已程式化」&態。在「熱電子注人」中,施加 ;;及極的同電壓可加速橫跨在基板通道區上的電子。同 時,施加於控制閑極的高電壓可吸引熱電子通過薄閑才騎 電質到浮動閘極上。在「穿隨注人」巾,會相對於該基板 %加—南電壓給該控制閘極。依此方式,可將基板的電子 吸引到中間的浮動托 千 動閘極。雖然習慣使用用語「程式化,描 述藉由注入電子至記憶體單元的相始抹除電荷健存單元來 寫入記憶體以改變記憶體狀態…見已和較為常見的用語, 98680.doc 1272487 如「寫入」或「記錄」交換使用。 可以利用下面數種機制來抹除該記憶體裝置。對 EEPROM而言,藉由相對於該控制閘極施加一高電壓給該 基板致使可於該浮動閘極中誘發出電子,使其穿隧一薄 氧化物進入^亥基板通道區(也就是,穿隨效 應)便可電抹除一記憶體單元。一般來說,eepr〇m可以 逐個位元組的方式來抹除。對快閃EEPROM而言,可每次 電抹除所有的記憶體或是每次電抹除—個以上最小可抹除 區塊’其中—最小可抹除區塊可能係由—個以上的區段所 、、且成且母個區段均可儲存5丨2個位元組以上的資料。 該記憶體裝置通常包括可被安裝於一記憶卡上的一個以 上。己L、體曰曰片。各記憶體晶片包含為周邊電路(如解碼器和 抹除、寫入及讀取等電路)所支援之記憶體單元陣列。較為 精密的記憶㈣置還附有執行智慧型和較高階記憶體操作 及介面連接的控制器。 現今已有許多市售成功的非揮發性固態記憶體裝置。該 些記憶體裝置可能係快閃EEPROM,或是可採用其它類型 的非揮發性記憶體單元。快閃記憶體與系統及其製造方法 ㈣例揭示於美國專利案第5,G7M32E、第5,G95,344號、 第 5,315,541號、第 5,343,G6^、第以仏⑹號、第 5,3i3,42i 號、以及第6,222,762號。明確地說,具财仙線串結構的 決閃§己憶體裝置係描述於美國專利案第5,57〇,3丨$號、第 5,9〇3,495號、第M46,935號之中。另外,亦可利用具有一 用來儲存電荷之介面層的記憶體單元來製造非揮發性記憶 98680.doc 1272487 體裝置。其係利用-介電層來取代前面所述的導電浮動間 極70件。此等利用介電儲存元件的記憶體裝置已描述於 Ehan等人於2〇00年U月在IEEE m喻〇nw,第 21冊’第11號,第543_545頁中所發表的「NR〇M:AN〇vei Localized 
Trapping,2-Bit N〇nv〇UtUe Mem〇"⑽」一文 中有0N0介電層會延伸跨越源極與没極擴散區間的通 道其中個貝料位元的電荷會於靠近該汲極的介電層中 被局部化,而另-個資料位元的電荷會於靠近該源極的介 包層中被局4 4匕。舉例來說’美國專利案第5,7683%號及 第6,01U725號便揭示一種於兩層二氧化秒層間夾放―陷捕 介電質的非揮紐記㈣單元。藉由分開讀取該介電質内 空間分離的電荷儲存區域的二進制狀態,便可實現多重狀 態的資料儲存。 為了提高讀取及程式化效能,會平行讀取或程式化陣列 中的多個電何儲存元件或記憶體電晶體。因此,會一起讀 取或程式化記憶體元㈣「頁面」。在現有的記憶體架^ 中’-列通常含有若干交錯的頁面或一列可構成一個頁 面。一個1面的所有記憶體元件會被-起讀取或程式化。 在快閃記憶體系統中,抹除操作可能需要多達長於讀取 及程式化操作的數量級。因此,希望能夠具有大尺寸的抹 除區塊。依此方式,即可降低總數很大之記憶體單元上的1272487 IX. Description of the invention: [Technical field to which the invention pertains] Memory, in particular, has a conductor memory. The present invention generally relates to a non-volatile semi-volatile memory of a non-volatile semiconductor memory block management system. [Prior Art] This is a solid-state memory that can be used for non-volatile storage, especially for small-sized card readers and EEPRqm. The form of the flashing (10) , has recently become a storage device for a variety of mobile and handheld devices, in particular: electricity and consumer electronics. Unlike the compilation of solid-state memory (with the exception of memory), the flash memory system is non-volatile and retains its stored data even after the power is turned off. Also, it is not like the job (only reading 夬 夬 flash memory and disk storage device - all can be rewritten. Although: = car and two 彳 - in the big memory application 'flash memory use Increasingly. Based on rotating magnetic media, the feathers, and white storage devices, such as hard disk drives and soft 'discomfort' and swaying and handheld environments. This is because the disk drive tends to be bulky. Faults with high latency and high power requirements. These unwanted attributes make disk-based storage devices inaccessible for large mobile and portable applications. On the other hand, they are both inline and Leica. The flash memory in its form is ideally suited for mobile and handheld environments due to its small size, power-eliminating, high-speed performance, etc. 
η Crime writes or both flash EEPROM and EEpR〇 The similarity between M (electronic erasable and programmable:) is that it is a non-volatile memory that can be erased and can be used to convert new data into its memory unit. 98680.doc 1272487 In the field effect transistor structure, the bit is in the semiconductor substrate A floating (unconnected) conductive gate above the region and between the source and drain regions. A control gate is then provided over the floating closed pole. The amount of charge retained by the floating gate controls the limits of the transistor. Voltage characteristics. That is to say, in terms of the charge on the floating gate, the corresponding electric charge (limit) must be applied to the control gate before the transistor is "on" to allow its source and Conduction between the drain regions. In particular, flash memory such as flash erase (10) allows the entire block of memory cells to be erased at the same time. - The floating gate can maintain a certain charge range and, therefore, can be programmed into The limit voltage window (4) is the limit voltage level. The size of the limit voltage (4) is limited by the minimum and maximum limit of the device, and the minimum and maximum of the device: the limit level (4) can be programmed to the floating gate The range of charge on the pole. The limit window is generally determined according to the memory device (four), operating conditions and records. The original ^ in this south, each has a different, resolvable constant limit power; 1 position can be used Clear memory state in the specified unit The Tongshi will use the two mechanisms to program the transistor as a memory unit into a "programmed" & state. In "hot electron injection", apply;; and the same voltage of the pole It can accelerate the electrons across the channel area of the substrate. 
At the same time, the high voltage applied to the control idle pole can attract the hot electrons to ride the electricity to the floating gate through the thin air. The substrate % plus - south voltage is applied to the control gate. In this way, the electrons of the substrate can be attracted to the intermediate floating gate of the floating gate. Although it is customary to use the term "stylized, the description is by injecting electrons into the memory unit. The phase erases the charge-storing unit to write to the memory to change the state of the memory... See the more common terms, 98680.doc 1272487 for "write" or "record" exchange. The following devices can be used to erase the memory device. For the EEPROM, by applying a high voltage to the control gate, the substrate is induced to induce electrons in the floating gate to tunnel a thin oxide into the substrate channel region (ie, Wear and effect) can erase a memory unit. In general, eepr〇m can be erased on a byte by byte basis. For flash EEPROM, all memory can be erased every time or erased each time - more than one minimum erasable block 'where the smallest erasable block may be more than one area The segment, the sum, and the parent segment can store more than 5.2 bytes. The memory device typically includes more than one that can be mounted on a memory card. L, body tablets. Each memory chip includes a memory cell array supported by peripheral circuits such as decoders and circuits such as erase, write, and read. The more sophisticated memory (4) is accompanied by a controller that performs intelligent and higher-order memory operations and interface connections. There are many commercially available non-volatile solid state memory devices available today. These memory devices may be flash EEPROM or other types of non-volatile memory cells may be used. Examples of flash memory and systems and methods for fabricating the same are disclosed in U.S. Patent Nos. 
5, G7M32E, 5, G95, 344, 5, 315, 541, 5, 343, G6, 仏 (6), 5, 3i3. , 42i, and 6,222,762. Specifically, the device of the hexagram structure with the structure of the celestial string is described in US Patent Nos. 5, 57, 3, $5, 9, 3, 495, and M46, 935. . Alternatively, a non-volatile memory 98680.doc 1272487 body device can be fabricated using a memory cell having an interface layer for storing charge. It replaces the previously described conductive floating via 70 with a dielectric layer. Such memory devices utilizing dielectric storage elements have been described by Ehan et al. in U.S.A., U.S.A., IEEE m. 〇nw, vol. 21, No. 11, pp. 543-545. :AN〇vei Localized Trapping,2-Bit N〇nv〇UtUe Mem〇"(10)" in the article, the 0N0 dielectric layer will extend across the source and the non-polar diffusion interval, and the charge of one of the bayonet bits will The dielectric layer near the drain is localized, and the charge of the other data bit is localized in the dielectric layer near the source. For example, U.S. Patent Nos. 5,768, 3% and 6,01 U 725 disclose a non-heroic (four) cell that traps a dielectric between two layers of dioxide dioxide. Multiple state data storage can be achieved by separately reading the binary state of the charge storage region separated by the space within the dielectric. To improve read and program performance, multiple electrical storage elements or memory transistors in the array are read or programmed in parallel. Therefore, the memory element (4) "page" will be read or programmed together. In an existing memory shelf, the '-column usually contains a number of interlaced pages or a column to form a page. All memory elements on one side will be read or programmed. In flash memory systems, erase operations may require orders of magnitude greater than read and program operations. Therefore, it is desirable to have a large size erase block. In this way, it is possible to reduce the total number of memory cells

抹除時間。 W 依照快閃記憶體的性質’資料必須寫入已抹除的記憶體 位置。如果要更新主機之特定邏輯位址的資料,—個方式 98680.doc 1272487 是將更新資料再寫入相同的實體記憶體位置中。也就是 再循環,而這對此類型記憶體裝置的有限耐久性而古 理想。 ° "兒,邏輯對貫體位址映射不會變更。然而,這便表示必須 先抹除含有該實體位置的整個抹除區塊,然後再以更新資 料寫入。此更新方法很沒有效率’因其需要抹除及再寫入 整個抹除區塊,尤其在要更新的資料只佔用抹除區塊的一 小部分時。此外,還會造成比較頻繁之記憶體區塊的抹除 另-個管理快閃記憶體系統的問題是必須處理系統控制 及目錄資料。各種記憶體操作的過程期間會產生且存取該 資料。因&,其有效處理及迅速存取將直接影響效能。由 於快閃記憶體是用來儲存且是非揮發性,因此希望能夠在 快閃記憶體中維持此類型的資料。然%,因為在控制器及 快閃記憶體之間有中間檔㈣理系統,所以無法直接存取 育料m统控制及目錄資料傾向於成為作用中及成 為片段’而這對於具有大尺寸區塊抹除之系統中的儲存毫 無助益。照慣例,會在㈣器RAM中設定此類型的資料, 藉此允許由控制器直接存取。在開啟記憶體裝置的電源 後’初始化程序會掃描快閃記憶體’以編譯要放在控制器 Ram中的必要系統控制及目錄資訊。此程序既耗時又需要 控制态RAM谷里’對於持續增加的快閃記憶體容量更是如 此0 US 6,567,307揭露一種在大抹除區塊中處理區段更新的 方法’其包括在多個當作可用記憶區的抹除區塊中記錄更 98680.doc -10- 1272487 新資料’及最後在各種區塊中彙總有效區段,然後按邏輯 循序順序重新排列有效區段後再將其寫人。依此方式,在 每個最微小的更新時,不必抹除及再寫入區塊。 WO 〇3/027828及臀〇 00/49488均揭露一種處理大抹除區 塊中更新的記憶體系統,其中包括以區域為單位分割邏輯 區段位址。小區域的邏輯位址範圍係保留用於和另一個用 於使用者資料之區域分開的作用中系統控制資料。依此方 式,在其自已區域中的系統控制資料操控便不會和另一區 域中關聯的使用者資料互相作用。更新屬於邏輯區段層 級’而且寫人指標會指向要寫人之區塊中的對應實體區 段。映射資訊會被緩衝在RAM中,且最後被儲存在主記憶 體的區段配置表中。邏輯區段的最新版本會淘汰現有區塊 :的所有先前版本,這些區塊因此變成部分淘汰。執行廢 :項目收集可保持部分淘汰區塊為可接受的數量。 ^技術的系統傾向於使更新資料分布在許多區塊上或 更新貧料會使許多現有的區塊受到部分淘汰。結果通常是 大量為部分淘汰之區塊所需的廢棄項目收集’這很沒有效 率且會造成記憶體提早老化。還有,和非循序更新相比, 更缺少有系統及有效的方式來處理循序更新。 因此,普遍需要高容量及高效能的非揮發性記憶體。尤 其:更f要有能夠在大區塊中執行記憶體操作且沒有上述 問越的咼容量非揮發性記憶體。 【發明内容】 照貫體記憶體位置的實體 一種非揮發性記憶體系統係按 98680.doc -11 - 1272487 群組來組織。各實體群組(中繼區塊)為可抹除的單元且能用 來儲存-個邏輯群組的資料。記憶體管理系統允許藉由配 置記錄邏輯群組之更新資料專用的中繼區塊來更新一個邏 =群組的資料。更新中繼區塊按照所接收的順序記錄更新 貝料,且對記錄按照原始儲存的正確邏輯順序(循序)與否 (混亂)並沒有限制。最後會關閉更新中繼區塊以進行其他記 錄。會發生若干程序之―,但最終會以按照正確順序完全 填滿之取代原始中繼區塊的中繼區塊結束。在混亂的例子 中’會以有助於經常更新的方式將目錄資料維持在非揮發 性記憶體中。系統支援多個同時更新的邏輯群組。 本發明的一個特色允許以逐個邏輯群組的方式來更新資 料。因此,在更新邏輯群組時,會限制邏輯單元的分布 有更新/匈汰之記憶體單疋的散佈)範圍。這在邏輯群組通常 含在貝體區塊内時更是如此。 在邏輯群組的更新期間,通常必須指派一或兩個緩衝已 更新之邏輯單元的區塊。因此’只需要在相對較少數量的 區塊上執行廢棄項目收集。藉由彙總或壓縮即可執行混亂 區塊的廢棄項目收集。 相較於循序區塊來說,更新處理的經濟效能於更新區塊 的一般處理中會愈加明顯,因而便無須為混亂(非循序)更新 配置任何額夕卜的區土鬼。所有的更新區塊均被配置為循序的 更新區塊,而且任何的更新區塊均可被變更成混亂的更新 區塊。更確切地說,可任意地將一更新區塊從循序式變更 成混亂式。 98680.doc -12- 1272487 有效使用系統㈣允許同時進行更新多個邏輯群纪… 可進一步增加效率及減少過度耗用。 每 分散在多個|£憶趙平面上的記憶趙對齊 根據本發明的另一古;^ki JhK ., 方面,對於一被組織成複數個可抹除 
區塊且由夕個記憶體平面所構成(因而可平行地讀取 個邏輯單元或是將複數個邏輯單元平行地程式化至該等多 個平面之中)的記憶體陣列,當要更新儲存於特定記憶體二 面中第:區塊的原始邏輯單元時,會供應所需以將已更新 的邏輯單元保持在和原始相同的平面中。這可藉由以下方 式來完成:將已更新的邏輯單元記錄到仍在相同平面中之 第二區塊的下一個可用位置。較佳將邏輯單元儲存在平面 中和其中其他版本相同的位移位置,使得給定邏輯單元的 所有版本係由相同組的感測電路予以服務。 據此’在-項較佳具體實施例中,以邏輯單元的目前版 本來填補介於上—個程式化記憶體單元與下一個可用平面 對齊記憶體單元之間的任何中間間隙。將邏輯上位於該最 後被程式化之邏輯單元後面的該等邏輯單元的目前版本以 及邏輯上位於被儲存在該下一個可用的平面排列記憶體單 元中之邏輯單元前面的該等邏輯單元的目前版本填入間隙 中’便可完成該填補作業。 依此方式,可將邏輯單元的所有版本維持在具有和原始 相同位移的相同平面中,致使在廢棄項目收集操作中,不 必從不同平面擷取邏輯單元的最新版本,以免降低效能。 在一項較佳具體實施例中,可利用該等最新的版本來更新 98680.doc -13- 1272487 或填補該平面上的每個記憶體單元。因此,便可從每個平 面中平行地讀出-邏輯單元,其將會具有邏輯順序而無需 進一步重新排列。 ^此方案藉由允許平面上重新排列邏輯群組之邏輯單元的 最=版本’且不必收集不同記憶體平面的最新版本,而縮 短菜總混亂區塊的時間。這报有好處,其中主機介面的效 月b規格可定義由記憶體系統完成區段寫入操作的最大等待 時間。 、、 階段性程式錯誤處置 根據本發明的另一方面’在具有區塊管理系統的記憶體 中,在時間緊急的記憶體操作期間,區塊中的程式失敗可 藉由繼續中斷區塊(breakout Mock)中的程式化操作來處 置。稍後,在較不緊急的時間,可將中斷前記錄在失敗區 塊中的資料傳送到其他可能也是中斷區塊的區塊。接著即 可丢棄失敗的區塊。依此方式,在遇到缺陷區塊時,不會 因必須立刻傳送缺陷區塊中儲存的資料而損失資料及超過 指定的時間限制,即可加以處理。此錯誤處置對於廢棄項 目收集操作尤其重要,因此在緊急時間期間不需要對二薪 新的區塊重複進行整個作業。其後,在適宜的時間,藉由 重新配置到其他區i鬼’即可挽救缺$區塊的資料。 程式失敗處置在彙總操作期間尤其重要。正常的彙總操 作:將常駐在原始區塊及更新區塊中之邏輯群組的所有^邏 輯單元的目前版本彙總至彙總區塊。在囊總操作期間,如 果在彙總區塊中發生程式失敗,則會提供另一個當作中斷 98680.doc -14- 1272487 2總區塊的區塊,轉收其餘邏輯單元㈣總。依此方式, 的,… 人以上而仍可在正常彙總操作指定 右=内,成例外處理的操作。在適宜的時間,將群組所 处理完成之邏輯單元彙總至中斷區 總操作。適宜的時間蔣a /日、, 凡成果 1且的%間將疋在目珂主機寫入操作以外的一些 ::有時間執行囊總之期間的期間。-個此種適宜的時; =另-個其中有更新但無關聯之彙總操作之主機寫 朋間。 貫質上,可將程式失敗處置的彙總視為以多階段來實 施。在第一階段中,在發生程式失敗後,會將邏輯單元二 總至一個以上區塊中’以避免彙總各邏輯單元一次以上 ㈣宜的時間會完成最後階段,其中會將邏輯群㈣總至 個區塊中’較佳藉由按循序順序將所有料單元收华 中斷彙總區塊中。 $ 非循序更新區塊索引 根據本發明的另-方面,在具有支援具非循序邏輯單元 之更新區塊之區塊管理系統的非揮發性記憶體中,非循序 4區龙中_輯單元的索引被緩衝儲存在mm中,並定期 將其儲存至非揮發性記憶體中。在一項具體實施例中,將 索引儲存在專用於儲存索引的區塊中。在另一項具體實施 例中’將索引儲存在更新區塊本身中。在又另-項具體實 施例中,將索引儲存在各邏輯單元的標頭中。在另一方面 中’在上—個㈣更新之後但在下—個㈣更新之前寫入 的邏輯單元會將其㈣t訊儲存在各邏輯單元的標頭中。 98680.doc -15- 1272487 依此方式,在電 即可決定最近寫:_後’不必在初始化期間執行掃描, 將區塊管理成邛八:璉輯早70的位置。在又另-方面中, 群組。 ^序及部分非循序指Hx上邏輯子 控制資料完整性與管理 根據本發明的另— 資料如果被維持在”如部分或全部控制資料的關 ^ ^ 、禝製項中,則保證額外等級的可靠性。 稷製的執行方式對 Λ ^ 、知用兩次編碼過程(two_pass)mb 技衍以連績程式化相同組- 記怜俨糸絲; 、。己隐體早九之夕位凡的多重狀態 益Α Ί碼過程t的任何料化錯誤都 寫入中又1::人編瑪過程建立的資料。複製還有助糊 ''' 貞(即’兩個複本有良好的ECC但資料不 
且可增加額料級的可#性。㈣資料複製的技術均 巳考慮。 在一項具體實施例中,在稍早程式化編碼過程中程式化 給定貧料的兩個複本後,後續程式化編碼過程可避免程式 化用於儲存該等兩個複本中至少—個的記憶體單元。依此 方式’在後續程式化編碼過程在完成之前中止及毀損稍早 編碼過程的資料時,該等兩個複本中至少一個也不會受到 影響。 在另一項具體實施例中,某一給定資料的兩個複本會被 儲存於兩個不同的區塊中,而且該等兩個複本中至多僅有 其中一個的記憶體單元會於後面的程式化編碼過程中被程 式化。 98680.doc -16- 1272487 於另一具體實施例中,於一程式化編碼過程中儲存某一 給定資料的兩個複本之後,便不再對用於儲存該等兩個複 =的記憶體單元組實施任何進一步的程式化。於該記憶體 單疋組的最終程式化編碼過程中來程式化該等兩個複本便 可達成此目的。 在又另一項具體實施例中,可於二進制程式化模式中將 某—給定資料的該㈣個複本程式化至—多重狀態的記憶 體之中,致使不會對該等已程式化的記憶體單元進行任何 進一步的程式化。 在又另-項具體實施財,對於採用兩次編碼過程程式 :技術以連續程式化相同組記憶體單元之多位元的多重狀 態記憶體系統而言’會採用容錯碼以編碼多個記憶體狀 態,使稍早程式化編碼過程所建立的資料不會受到後續程 式化編碼過程中錯誤的影響。 根據本發明的另—方面,在具有區塊管理系統的非揮發 性記憶體中,可實施記憶體區塊的「控制廢棄項目收集」 ^先佔式重新配置,以避免發生大量的更新區塊均恰巧同 =需要進行重新配置的情形。例如,在更新用於控制區塊 官理糸統操作的控制資料時會發生此情況。控制資料類型 的層級可和不同程度的更新 厂〇高 斤人數共存,導致其關聯的更新 區塊需要不同速率的廢棄項目收集或重新配置。會有一個 以上控制貧料類型之廢棄項目收隼摔作π 队果絲作冋時發生的特定次 的情況中,所有控制資料類型之更新區塊的重 新配置階段會進行整頓,導 所有的更新區塊都需要同時 98680.doc 1272487 重新配置。 荟考本發明以下結合附圖之較佳具體實施例的說明,即 可瞭解本發明的其祕特色及優點。 【實施方式】 圖1以示意圖顯示適於實施本發明之記憶體系統的主要 更體組件。3己憶體糸統2 〇通常透過主機介面以主機1 〇操 作。圮憶體系統通常形式為記憶卡或内嵌式記憶體系統。 記憶體系統20包括由控制器100控制操作的記憶體2〇〇。記 憶體200包含分布於一或多個積體電路晶片上的一或多個 陣列的非揮發性記憶體單元。控制器1〇〇包括:介面11〇、 處理器120、選用性副處理器121、R〇M 122(唯讀記憶體)、 RAM 130(隨機存取記憶體)、及選用性可程式非揮發性記憶 體124。介面110有一個連接控制器和主機的組件及另一個 連接至記憶體200的組件。儲存於非揮發性R〇M 122及/或選 用丨生非揮發性圮憶體丨24中的韌體可提供處理器1程式碼 以實施控制器100的功能。處理器12〇或選用性副處理器m 可處理錯誤校正碼。在_項#代性具體實施例中,控制器 1〇〇可藉由狀態機器(未顯示)來實施。在又另一項具體實施 例中,控制器100可在主機内實施。 、 邏輯舆實體區塊結構 圖2為根據本發明一較佳具體實施例的記憶體,其係被 織成數個實體區段群組(或中繼區塊)並且由該控^的 憶體管理器來管理。該記憶體彻會被組織成數個“ 塊’其中每個中繼區塊係一群組之可一起抹除的實體區 98680.doc -18 - 1272487 S〇、…、Sn_io 主機10可在棺案系統或作業系統下執行應用程式時存取 記憶體200。一般來說,該主機會以邏輯區段為單位來定址 資料’其中’例如,各區段可含有512位元組的資料。還有, 主機通常亦會以邏輯叢集為單位來讀取或寫入該記憶體系 統,各邏輯叢集由一或多個邏輯區段所組成。在部分主機 系統中,可存在選用性主機端記憶體管理器,以執行主機 的較低階記憶體管理。在大部分的例子中,在讀取或寫入 ‘作期間,主機1 〇實質上會對記憶體系統2〇發出指令,以 讀取或寫入含有一串具連續位址之資料邏輯區段的程式 段。 一記憶體端記憶體管㈣被實施在該記憶體系統2〇的老 制器100之中,用以管理於該快閃記憶體200的複數個" 區塊中儲存與擷取主機邏輯區段的資料。於該較佳的具毙 實施例中,該記憶體管理器含有數個軟體模組,用於^ 該等中繼區塊的抹除作業、讀取作業、以及寫入作業。女 記憶體管理器還會於該快閃記憶體及該控制器RA; 13 0之中維濩和其作業相關的系統控制與目錄資料。 圖3A(i)_3 A(iii)為根據本發明—較佳具體實施例介於一 邏輯群組與一中繼區塊間之映射的概略示意圖。該實體气 憶體的該中繼區塊具有N個實體區段,用於儲存_邏二群: 
的N個邏輯區段資料。圖3A⑴所示的係來自邏輯群虹g 的資料’其中該等邏輯區段呈現連續的邏輯順序〇、卜 N-i。圖3A⑼所㈣係正以相同的邏輯順序被儲存於兮中 98680.doc -19- !272487 繼區塊中的相同資料。當依此方式儲存時,該中繼區塊便 系斤"月的循序式」。一般來說,該中繼區塊可能會具有以 不同順序儲存的資料,於該情況中,該中繼區塊則係所謂 的「非循序式」或「混亂式」。 明 在邏輯群組的最低位址及其映射之中繼區塊的最低位址 =間會有位移。此時,邏輯區段位址會於該中繼區塊内以 裱狀的方式從該邏輯群組的底部反繞回至頂端。例如,在 囷3 A(iii)中,該中繼區塊會在其始於邏輯區段&之資料的第 位置中進行儲存。在到達最後邏輯區段N-1時,中繼區塊 曰"0回至區段0,最後在其最後實體區段中儲存和邏輯區段 關聯的資料。在較佳具體實施例中,會使用頁面標記來 識別任何位移,例^,識別在中繼區塊之第-實體區段中 所儲存之資料的起始邏輯區段位址。當兩個區塊僅相差一 頁面私记日可,則會認為該等兩個區塊係以相同的順序來 儲存其邏輯區段。 圖3B為介於複數個邏輯群組與複數個中繼區塊間之映射 的概略不思圖。每個邏輯群組均映射至一唯一的中繼區 塊除了其中的資料正在被更新的少數邏輯群組以外。一 邏輯群組在被更新之後,其可能會映射至-不同的中繼區 鬼可將映射貝戒維持在一組邏輯對實體目錄中,猶後將 會詳細說明。 f考慮八匕類型的邏輯群組至中繼區塊映射關係。舉例 來兄由Alan Smclaim和本發明同一天提出之共同待審及 共同擁有的美國專利申請案,標題為「Adaptive Metablocks」 98680.doc -20- 1272487 之中便揭不具有可變大小的中繼區塊。本文以引用的方式 併入該共同待審巾請案全部的揭示内容。 本电明的-個特色在於系、統以單―邏輯分割操作,及記 十思體系統的整個邏輯位扯餘 平耳证址耗圍中的邏輯區段群組均以相同 的方式來處理。例如,可將含有系統資料的區段及含有使 用者資料的區段分布在邏輯位址空間中的任何地方。 不像先w技術的系統,並無系統區段(即,有關檔案配置 表、目錄或子目錄的區段)的特別分割或分區,以局部化在 可能含有高次數及小尺寸之更新資料的邏輯位址空間區段 中。而是,更新區段之邏輯群組的本方案會有效處理為系 統區段典型且為槽案資料典型的存取模式。 圖4顯示中繼區塊和實體記憶體中結構的對齊。快閃記憶 體包含可當作一個單元一起抹除之記憶體單元的區塊。2 種抹除區塊為快閃記憶體之最小的抹除單元或記憶體的最 小可抹除單元(MEU)。最小抹除單元是記憶體的硬體設計 參數,不過,在一些支援多個MEU抹除的記憶體系統中, 也可以設定包含一個以上MEU的「超級MEU」。對於快閃 EEPROM,一個MEU可包含一個區段,但較佳包含多個區 段。在所示的範例中,其具有M個區段。在較佳具體實施 例中’各區段可以儲存5 12位元組的資料且具有使用者資料 部分及用於儲存系統或附加項資料的標頭部分。如果中繼 區塊係以Ρ個MEU所構成且各MEU含有Μ個區段,則各中繼 區塊將具有N = Ρ*Μ個區段。 在系統層級,中繼區塊代表記憶體位置的群組,如,。 98680-doc -21 - 1272487 一起抹除的區段。快閃記憶體的實體位址空間會被處理、 -組中繼區塊,其中中繼區塊是最小的抹除單元。 說明書内,「中繼區塊」與「區塊」等詞語係同義詞,、用: 定義媒體管理於系統層級的最小抹除單位, " 口口 取小抹除 早位」或MEU-詞則係用來表示快閃記憶體的最小抹除二 位。 、干 連結數個最小抹除單位(MEU)以構成中繼區塊 為了最大化程式化速度及抹除速度,會儘可能利用平行 方式,其係藉由配置多個要平行程式化的頁面資訊(位在I 個MEU中)’及配置多個要平行抹除的meu。 在快閃記憶體中,一個頁面是可在單一操作中一起程弋 化之記憶體單元的群組。一個頁面可包含一或多個區段。 還有,可將記憶體陣列分割成一個以上平面,其中_ ^ 口 月b私式化或抹除*個平面内的"個MEU。最後,可在—戈 多個§己憶體晶片中分布各平面。 在快閃記憶體中,MEU可包含一或多個頁面。可將快閃 記憶體晶片内的數個MEU按平面來組織。由於可同時程式 化或抹除各平面的一個MEU,因此有利於從各平面選擇一 個MEU以形成多個MEU中繼區塊(請見下文圖5B)。 圖5 A顯示從連結不同平面之最小抹除單元所構成的中繼 區塊。各中繼區塊(例如MBO、MB 1、…)均係以記憶體系統 之不同平面的數個MEU所構成,其中不同平面可分布在一 或多個晶片中。圖2所示的中繼區塊連結管理器170可管理 各中繼區塊之MEU的連結。如果MEU之一沒有失敗,在初 98680.doc -22- 
1272487 始格式化程序期間設定各中 ....7 保留其組成的咖。中“塊’並在整個系統壽命中 _顯示從各平面選擇—最小抹除單元(meu)以連結成 中繼區塊的一項具體實施例。 =:示:中從各平面選擇一個以上_以連結成中繼 £塊的另—項具體實施例。在另-項具體實施例中,可從 =面選擇-個以上MEU以形成_超級卿。例如,舉例 〜兄’ -超級MEU可能係由兩個咖所構成的。此時,會 採取-次以上編碼過程以進行讀取或寫入操#。 日 由Cados Gonzales等人於和本發明同一天提出之此同待 審及共同擁有的美國專利申請案,標題為「场― Determine Grouping 〇f 心咖 _ 咖❹⑽Wipe time. W According to the nature of the flash memory, the data must be written to the erased memory location. If you want to update the data of a specific logical address of the host, the way 98680.doc 1272487 is to write the updated data to the same physical memory location. That is, recycling, which is ideal for the limited durability of this type of memory device. ° " Children, the logic does not change the mapping of the location of the body. However, this means that the entire erase block containing the physical location must be erased before being written with the updated data. This update method is inefficient 'because it needs to erase and rewrite the entire erase block, especially if the data to be updated only occupies a small portion of the erase block. In addition, it will cause more frequent memory block erase. Another problem with managing flash memory systems is that system control and directory data must be processed. This data is generated and accessed during the course of various memory operations. Due to &, its efficient processing and rapid access will directly affect performance. Since flash memory is used for storage and is non-volatile, it is desirable to be able to maintain this type of data in flash memory. However, because there is an intermediate file (four) system between the controller and the flash memory, it is impossible to directly access the feed control system and the catalog data tends to become active and become a segment' and this has a large size zone. Storage in a block erase system is not helpful. As is customary, this type of material is set in the (four) RAM, thereby allowing direct access by the controller. 
After the power of the memory device is turned on, the 'initialization program scans the flash memory' to compile the necessary system control and directory information to be placed in the controller Ram. This procedure is both time consuming and requires control state RAM. 'For the ever-increasing flash memory capacity, this is even more. US 6,567,307 discloses a method for processing sector updates in a large erasure block' which is included in multiple The erased block of the available memory area records the more 98880.doc -10- 1272487 new data 'and finally summarizes the valid sections in various blocks, and then rearranges the valid sections in logical sequential order before writing them . In this way, it is not necessary to erase and rewrite blocks at each of the tiniest updates. Both WO 〇 3/027828 and 〇 00/49488 disclose an updated memory system for processing large erase blocks, including dividing logical sector addresses in units of regions. The logical address range of a small area is reserved for active system control data that is separate from another area for user data. In this way, system control data manipulation in its own area does not interact with user data associated with another area. The update belongs to the logical segment level and the writer indicator points to the corresponding physical segment in the block to be written. The mapping information is buffered in RAM and finally stored in the section configuration table of the main memory. The latest version of the logical section will eliminate all previous versions of the existing block: these blocks thus become partially eliminated. Execution of waste: Project collection can maintain a partially eliminated block as an acceptable quantity. ^Technical systems tend to distribute updated data across many blocks or update poor materials, which can partially eliminate many existing blocks. The result is usually a large collection of obsolete items needed for partially eliminated blocks. 
This is very inefficient and leads to premature aging of the memory. Also, there is a lack of a systematic and efficient way to handle sequential updates compared to non-sequential updates. Therefore, high capacity and high performance non-volatile memory are generally required. In particular: more f must have a non-volatile memory that can perform memory operations in large blocks without the above-mentioned problem. SUMMARY OF THE INVENTION A solid non-volatile memory system is organized according to the group 98680.doc -11 - 1272487. Each entity group (relay block) is an erasable unit and can be used to store data of a logical group. The memory management system allows updating a logical group of data by configuring a relay block dedicated to the update data of the logical group. The update relay block records the updated feed in the order received, and there is no limit to whether the record is in the correct logical order (sequential) or not (chaotic) of the original storage. Finally, the update relay block is closed for additional recording. A number of procedures will occur, but will eventually end up with a relay block that replaces the original relay block that is completely filled in the correct order. In the case of confusion, the catalogue data will be maintained in non-volatile memory in a way that facilitates frequent updates. The system supports multiple logical groups that are updated simultaneously. One feature of the present invention allows data to be updated in a logical group by group manner. Therefore, when updating the logical group, the distribution of the logical unit is limited to the extent of the update/hungry memory unit. This is especially true when the logical group is usually contained within a shell block. During the update of a logical group, one or two blocks that buffer the updated logical unit must typically be assigned. 
Therefore, it is only necessary to perform obsolete project collection on a relatively small number of blocks. The collection of obsolete items in a chaotic block can be performed by summarization or compression. Compared with the sequential block, the economic efficiency of the update process will become more and more obvious in the general processing of the update block, so there is no need to configure any ambiguous ghosts for the chaotic (non-sequential) update. All update blocks are configured as sequential update blocks, and any update block can be changed to a chaotic update block. More specifically, an update block can be arbitrarily changed from a sequential to a chaotic. 98680.doc -12- 1272487 Efficient use of the system (4) Allows simultaneous updating of multiple logical groups... Further increase efficiency and reduce over-consumption. Each memory scattered on multiple planes is aligned according to another ancient method of the present invention; ^ki JhK., for a tissue that is organized into a plurality of erasable blocks and by a memory plane a memory array that is configured (and thus can read a logical unit in parallel or program a plurality of logical units in parallel into the plurality of planes) to be updated in a second region of a particular memory When the original logical unit of the block is supplied, it is supplied with the required logical unit to remain in the same plane as the original. This can be done by recording the updated logical unit to the next available location of the second block that is still in the same plane. Preferably, the logic cells are stored in a plane in the same displacement position as the other versions, such that all versions of a given logic unit are served by the same set of sensing circuits. According to this preferred embodiment, any intermediate gap between the upper stylized memory unit and the next available planar aligned memory unit is filled with the current version of the logical unit. 
The current version of the logical units logically located after the last programmed logical unit and the current logical unit located in front of the logical units stored in the next available planar aligned memory unit The version is filled in the gap to complete the filling operation. In this way, all versions of the logical unit can be maintained in the same plane with the same displacement as the original, so that in the waste project collection operation, it is not necessary to extract the latest version of the logical unit from different planes to avoid performance degradation. In a preferred embodiment, the latest version can be used to update 98680.doc -13 - 1272487 or to fill each memory cell on the plane. Thus, the logical units can be read out in parallel from each plane, which will have a logical sequence without further rearrangement. ^ This scheme shortens the time of the total chaotic block by allowing the most = version ' of the logical unit of the logical group to be rearranged on the plane and without having to collect the latest version of the different memory planes. This is advantageous in that the host interface's validity b specification defines the maximum wait time for the memory system to complete the sector write operation. Staged Program Error Handling According to another aspect of the present invention, in a memory having a block management system, during a time-critical memory operation, a program failure in the block can be continued by interrupting the block (breakout) Stylized operations in Mock) to deal with. Later, in less urgent times, the data recorded in the failed block before the interruption can be transferred to other blocks that may also be interrupt blocks. The failed block can then be discarded. In this way, when a defective block is encountered, it cannot be processed because it must lose the data stored in the defective block immediately and exceed the specified time limit. 
This error handling is especially important for garbage collection operations, so that the entire operation need not be repeated on a fresh block during an emergency. Later, at an opportune time, the data in the defective block can be salvaged by relocating it to another block. Program failure handling is especially important during a consolidation operation. A normal consolidation operation consolidates into a consolidation block the current versions of all the logical units of a logical group residing among the original block and the update block. During the consolidation operation, if a program failure occurs in the consolidation block, another block acting as a breakout consolidation block is provisioned to receive the remaining logical units to be transferred. In this way, no logical unit need be copied more than once, and the operation with exception handling can still be completed within the period specified for a normal consolidation operation. At an opportune time, the consolidation is completed by consolidating all outstanding logical units of the group into the breakout block. The opportune time will be during some period other than the current host write operation, when there is time to perform the consolidation. One such opportune time is during another host write where there is an update but no associated consolidation operation. In essence, the consolidation with program failure handling can be regarded as being implemented in multiple phases. In a first phase, after a program failure occurs, the logical units are consolidated into more than one block so as to avoid consolidating any logical unit more than once. The final phase is completed at an opportune time, where the logical group is consolidated into one block, preferably by copying all the logical units into the breakout consolidation block in sequential order.
Non-sequential Update Block Indexing According to another aspect of the invention, in a non-volatile memory with a block management system supporting update blocks with non-sequential logical units, an index of the logical units in a non-sequential update block is buffered in RAM and stored periodically into the non-volatile memory. In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next index update have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the locations of recently written logical units can be determined without having to perform a scan during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup. Control Data Integrity & Management According to another aspect of the invention, if data is maintained in duplicate, such as some or all of the control data, an additional level of reliability is guaranteed. The duplication is performed such that, for a multi-state memory system employing a two-pass programming process to program the same set of memory cells successively through multiple bits, any programming error in the second pass cannot corrupt the data established by the first pass. The duplication also helps with detection of write aborts, detection of misdetection (i.e., both copies have good ECC but the data differ), and adds an extra level of reliability. Several techniques for data duplication are contemplated.
In one embodiment, after two copies of given data are programmed in an earlier programming pass, a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, if the subsequent programming pass aborts before completion and corrupts the data of the earlier pass, at least one of the two copies remains unaffected. In another embodiment, the two copies of given data are stored in two different blocks, and at most one of the two copies has its memory cells programmed in a subsequent programming pass. In yet another embodiment, after two copies of given data are stored in one programming pass, no further programming is performed on the set of memory cells storing the two copies. This is achieved by programming the two copies in the final programming pass of the set of memory cells. In yet another embodiment, the two copies of given data are programmed into a multi-state memory in a binary programming mode, so that no further programming of the programmed memory cells will occur. In yet another embodiment, for a multi-state memory system that uses a two-pass programming process to successively program multiple bits into the same set of memory cells, a fault-tolerant code is employed to encode the multiple memory states, so that data established by an earlier programming pass is insensitive to errors in a subsequent programming pass. According to another aspect of the invention, in a non-volatile memory with a block management system, a "control garbage collection," or preemptive relocation, of a memory block is implemented.
Preemptive relocation avoids a situation in which a large number of update blocks all happen to need relocation at the same time. This situation can arise, for example, when updating the control data used to control the operation of the block management system. A hierarchy of control data types can coexist with varying degrees of update frequency, causing their associated update blocks to require garbage collection or relocation at different rates. There will be certain times when the garbage collection operations of more than one control data type coincide. In the extreme situation, the relocation phases of the update blocks of all control data types align, so that all of the update blocks need relocation at the same time. Other features and advantages of the invention will become apparent from the following description of its preferred embodiments, which description should be taken in conjunction with the accompanying drawings. [Embodiment] Figure 1 shows schematically the main hardware components of a memory system suitable for implementing the present invention. The memory system 20 typically operates with a host 10 through a host interface. The memory system is typically in the form of a memory card or an embedded memory system. The memory system 20 includes a memory 200 whose operations are controlled by a controller 100. The memory 200 comprises one or more arrays of non-volatile memory cells distributed over one or more integrated circuit chips. The controller 100 includes: an interface 110, a processor 120, an optional coprocessor 121, a ROM 122 (read-only memory), a RAM 130 (random access memory), and optionally a programmable non-volatile memory 124. The interface 110 has one component interfacing the controller to the host and another component interfacing to the memory 200. Firmware stored in the non-volatile ROM 122 and/or the optional non-volatile memory 124 provides code for the processor 120 to implement the functions of the controller 100.
The processor 120 or the optional coprocessor 121 can process error correction codes. In an alternative embodiment of the invention, the controller 100 is implemented by a state machine (not shown). In yet another embodiment, the controller 100 is implemented within the host. Logical and Physical Block Structures Figure 2 illustrates the memory, according to a preferred embodiment of the invention, organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller 100. The memory 200 is organized into a number of metablocks, where each metablock is a group of physical sectors S0, ..., SN-1 that are erasable together. The host 10 accesses the memory 200 when running an application under a file system or operating system. Typically, the host addresses data in units of logical sectors where, for example, each sector may contain 512 bytes of data. The host usually reads from or writes to the memory system in units of logical clusters, each consisting of one or more logical sectors. In some host systems, an optional host-side memory manager may exist to perform lower-level memory management at the host. In most cases, during read or write operations, the host 10 essentially issues a command to the memory system 20 to read or write a segment containing a string of logical sectors of data with contiguous addresses. A memory-side memory manager is implemented in the controller 100 of the memory system 20 to manage the storage and retrieval of the data of host logical sectors among the plurality of metablocks of the flash memory 200. In the preferred embodiment, the memory manager contains a number of software modules for managing erase, read and write operations of the metablocks. The memory manager also maintains system control and directory data associated with its operations among the flash memory 200 and the controller RAM 130.
Figures 3A(i)-3A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention. The metablock of the physical memory has N physical sectors for storing the N logical sectors of data of a logical group. Figure 3A(i) shows the data from a logical group LGi, where the logical sectors are in a contiguous logical order 0, 1, ..., N-1. Figure 3A(ii) shows the same data being stored in the metablock in the same logical order. The metablock, when stored in this manner, is said to be "sequential." In general, the metablock may have data stored in a different order, in which case the metablock is said to be "non-sequential" or "chaotic." There may be an offset between the lowest address of a logical group and the lowest address of the metablock to which it is mapped. In this case, the logical sector addresses wrap around as a loop from the bottom back to the top of the logical group within the metablock. For example, in Figure 3A(iii), the metablock stores in its first location the data beginning with logical sector k. When the last logical sector N-1 is reached, the metablock wraps around to sector 0 and finally stores the data associated with logical sector k-1 in its last physical sector. In the preferred embodiment, a page tag is used to identify any offset, by identifying the starting logical sector address of the data stored in the first physical sector of the metablock. Two blocks are considered to have their logical sectors stored in similar order when they differ only by a page tag. Figure 3B illustrates schematically the mapping between logical groups and metablocks. Each logical group is mapped to a unique metablock, except for a small number of logical groups in which data is currently being updated. After a logical group has been updated, it may be mapped to a different metablock.
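The page-tag wrap-around of Figure 3A(iii) can be captured in a short arithmetic sketch. This is purely illustrative (the function names are not from the patent), assuming only what the text states: the first physical sector holds logical sector k, and addresses wrap modulo the group size N.

```python
# Illustrative sketch of the page-tag mapping: logical sector addresses
# wrap around modulo the number of sectors N in the metablock.

def physical_index(logical_sector, page_tag, n_sectors):
    """Physical sector position of a logical sector within a metablock."""
    return (logical_sector - page_tag) % n_sectors


def logical_sector_at(physical_idx, page_tag, n_sectors):
    """Inverse mapping: which logical sector is stored at a given position."""
    return (physical_idx + page_tag) % n_sectors
```

With N = 8 and page tag k = 3, logical sector 3 occupies the first physical sector, and logical sector k-1 = 2 ends up in the last physical sector, exactly as in the figure.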
The mapping information is maintained in a set of logical-to-physical directories, which will be described in more detail later. Other types of logical group to metablock mapping are also contemplated. For example, metablocks of variable size are disclosed in the co-pending and co-owned United States patent application entitled "Adaptive Metablocks," filed by Alan Sinclair on the same day as the present application. The entire disclosure of that co-pending application is hereby incorporated herein by reference. One feature of the invention is that the system operates with a single logical partition, and groups of logical sectors throughout the logical address range of the memory system are treated identically. For example, sectors containing system data and sectors containing user data can be distributed anywhere in the logical address space. Unlike prior-art systems, there is no special partitioning or zoning of system sectors (i.e., sectors relating to file allocation tables, directories or sub-directories) in order to localize, in logical address space, sectors that are likely to contain data with high frequency and small size of updates. Instead, the present scheme of updating logical groups of sectors efficiently handles the access patterns typical of system sectors as well as those typical of file data. Figure 4 illustrates the alignment of a metablock with structures in physical memory. Flash memory comprises blocks of memory cells that are erasable together as a single unit. Such an erase block is the minimum unit of erase of the flash memory, or the minimum erasable unit (MEU) of the memory. The minimum erase unit is a hardware design parameter of the memory, although in some memory systems that support the erase of multiple MEUs, it is possible to configure a "super MEU" comprising more than one MEU. For flash EEPROM, a MEU may comprise one sector, but preferably multiple sectors. In the example shown, it has M sectors.
In the preferred embodiment, each sector can store 512 bytes of data and has a user data portion and a header portion for storing system or overhead data. If the metablock is constituted from P MEUs, and each MEU contains M sectors, then each metablock will have N = P*M sectors. At the system level, the metablock represents a group of memory locations, e.g., sectors, that are erasable together. The physical address space of the flash memory is treated as a set of metablocks, with a metablock being the minimum unit of erase. Within this specification, the terms "metablock" and "block" are used synonymously to define the minimum unit of erase at the system level for media management, and the term "minimum erase unit" or MEU is used to denote the minimum unit of erase of the flash memory. Linking of Minimum Erase Units (MEUs) to Form a Metablock In order to maximize programming speed and erase speed, parallelism is exploited as much as possible by arranging for multiple pages of information, located in multiple MEUs, to be programmed in parallel, and for multiple MEUs to be erased in parallel. In flash memory, a page is a grouping of memory cells that may be programmed together in a single operation. A page may comprise one or more sectors. Also, the memory array may be partitioned into more than one plane, where only one MEU within a plane may be programmed or erased at a time. Finally, the planes may be distributed among one or more memory chips. In flash memory, a MEU may comprise one or more pages. MEUs within a flash memory chip may be organized in planes. Since one MEU from each plane may be programmed or erased simultaneously, it is advantageous to form a multi-MEU metablock by selecting one MEU from each plane (see Figure 5B below). Figure 5A shows a metablock being constituted from the linking of minimum erase units of different planes.
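The metablock geometry stated above reduces to simple arithmetic; the following one-line helper (an illustrative name, not part of the patent) just restates the N = P*M relationship.

```python
# A metablock linking P MEUs, each holding M sectors, stores N = P * M
# sectors in total.

def metablock_sectors(p_meus, m_sectors_per_meu):
    return p_meus * m_sectors_per_meu
```

For instance, a metablock linking one MEU from each of 4 planes, with 64 sectors per MEU, holds 256 sectors.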
Each metablock, e.g., MB0, MB1, ..., is constituted from MEUs in distinct planes of the memory system, where the different planes may be distributed among one or more memory chips. The metablock link manager 170 shown in Figure 2 manages the linking of the MEUs of each metablock. Each metablock is configured during an initial formatting process, and retains its constituent MEUs throughout the life of the system, unless one of the MEUs fails. Figure 5B shows one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock. Figure 5C shows another embodiment in which more than one MEU is selected from each plane for linking into a metablock. In this other embodiment, more than one MEU may be selected from each plane to form a super MEU. For example, a super MEU may be formed from two MEUs. In this case, more than one pass may be taken for a read or write operation. The linking and re-linking of a plurality of MEUs into metablocks is also disclosed in the co-pending and co-owned United States patent application entitled "Adaptive Deterministic Grouping of Blocks into Multi-Block Structures," filed by Carlos Gonzalez et al. on the same day as the present application. The entire disclosure of that co-pending application is hereby incorporated herein by reference.

Metablock Management
Figure 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory. The metablock management system comprises various functional modules implemented in the controller 100, and maintains various control data (including directory data) in tables and lists hierarchically distributed in the flash memory 200 and the controller RAM 130. The functional modules implemented in the controller 100 include: an interface module 110, a logical-to-physical address translation module 140, an update block manager module 150, an erased block manager module 160, and a metablock link manager 170. The interface 110 allows the metablock management system to interface with a host system. The logical-to-physical address translation module 140 maps the logical address from the host to a physical memory location. The update block manager module 150 manages data update operations in memory for a given logical group of data. The erased block manager 160 manages the erase operations of the metablocks and their allocation for storage of new information. The metablock link manager 170 manages the linking of subgroups of minimum erasable blocks of sectors to constitute a given metablock. These modules will be described in detail in their respective sections.
During operation, the metablock management system generates and works with control data such as addresses, control and status information. Since much of the control data tends to be frequently changing data of small size, it cannot be readily stored and maintained efficiently in a flash memory with a large block structure. A hierarchical and distributed scheme is employed to store the more static control data in the non-volatile flash memory while locating the smaller amount of the more varying control data in the controller RAM for more efficient update and access. In the event of a power shutdown or failure, this scheme allows the control data in the volatile controller RAM to be rebuilt quickly by scanning a small set of control data in the non-volatile memory. This is possible because the invention restricts the number of blocks associated with the possible activity of a given logical group of data. In this way, the scanning is confined. In addition, some of the control data that requires persistence is stored in a non-volatile metablock that can be updated sector by sector, with each update resulting in a new sector being recorded that supersedes a previous one. A sector indexing scheme is employed for control data to keep track of the sector-by-sector updates in a metablock.
The non-volatile flash memory 200 stores the bulk of control data that is relatively static. This includes: group address tables (GAT) 210, chaotic block indices (CBI) 220, erased block lists (EBL) 230, and MAP 240. The GAT 210 keeps track of the mapping between logical groups of sectors and their corresponding metablocks. The mappings do not change except as a result of updates. The CBI 220 keeps track of the mapping of logically non-sequential sectors during an update. The EBL 230 keeps track of the pool of metablocks that have been erased. The MAP 240 is a bitmap showing the erase status of all metablocks in the flash memory. The volatile controller RAM 130 stores a small portion of control data that is frequently changing and accessed. This includes an allocation block list (ABL) 134 and a cleared block list (CBL) 136. The ABL 134 keeps track of the allocation of metablocks for recording update data, while the CBL 136 keeps track of metablocks that have been deallocated and erased. In the preferred embodiment, the RAM 130 acts as a cache for control data stored in the flash memory 200.

Update Block Manager
The update block manager 150 (shown in Figure 2) handles the update of logical groups. According to one aspect of the invention, each logical group of sectors undergoing updates is allocated a dedicated update metablock for recording the update data. In the preferred embodiment, any segment of one or more sectors of the logical group will be recorded in the update block. An update block can be managed to receive update data in either sequential order or non-sequential (also known as "chaotic") order. A chaotic update block allows sector data to be updated in any order within a logical group, and with any repetition of individual sectors. In particular, a sequential update block can become a chaotic update block without the need for relocation of any data sectors. No predetermined allocation of blocks is needed for chaotic data update; a non-sequential write at any logical address is automatically accommodated. Thus, unlike prior-art systems, there is no special treatment of whether the various update segments of the logical group are logically sequential or non-sequential. The generic update block is simply used to record the various segments in the order they are requested by the host. For example, even though host system data or system control data tends to be updated in chaotic fashion, regions of logical address space corresponding to host system data need not be treated differently from regions with host user data.
Preferably, the data of a complete logical group of sectors is stored in logically sequential order in a single metablock. In this way, the index to the stored logical sectors is predefined. When the metablock has all the sectors of a given logical group stored in a predetermined order, it is said to be "intact." As for an update block, when it eventually fills up with update data in logically sequential order, the update block becomes an updated intact metablock that readily replaces the original metablock. On the other hand, if the update block fills up with update data in a logically different order from that of the intact block, the update block is a non-sequential or chaotic update block, and the out-of-order segments must be further processed so that eventually the update data of the logical group is stored in the same order as that of the intact block.
In the preferred case, the update data in a single metablock is in logically sequential order. The further processing involves consolidating the updated sectors in the update block with the unchanged sectors in the original block into yet another update metablock. The consolidated update block will then be in logically sequential order and can be used to replace the original block. Under some predetermined conditions, the consolidation process is preceded by one or more compaction processes. The compaction process simply re-records the sectors of the chaotic update block into a replacing chaotic update block, while eliminating any duplicate logical sectors that have been rendered obsolete by a subsequent update of the same logical sector. The update scheme allows multiple update threads to run concurrently, up to a predefined maximum, each thread being a logical group undergoing updates using its dedicated update metablock.

Sequential Data Update
When data belonging to a logical group is first updated, a metablock is allocated and dedicated as an update block for the update data of the logical group. The update block is allocated when a command is received from the host to write a segment of one or more sectors of the logical group for which an existing metablock has been storing all its sectors intact. For the first host write operation, a first segment of data is recorded on the update block. Since each host write is a segment of one or more sectors with contiguous logical addresses, it follows that the first update is always sequential in nature. In subsequent host writes, update segments within the same logical group are recorded in the update block in the order received from the host. A block continues to be managed as a sequential update block whilst sectors updated by the host within the associated logical group remain logically sequential. All sectors updated in this logical group are written to this sequential update block, until the block is either closed or converted to a chaotic update block.
Figure 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations, whilst the corresponding sectors in the original block for the logical group become obsolete. In host write operation #1, the data in the logical sectors LS5-LS8 are being updated. The updated data as LS5'-LS8' are recorded in a newly allocated dedicated update block.
For expediency, the first sector to be updated in the logical group is recorded in the dedicated update block starting from the first physical sector location. In general, the first logical sector to be updated is not necessarily the logical first sector of the group, and there may therefore be an offset between the start of the logical group and the start of the update block. This offset is known as the "page tag," as described previously in connection with Figure 3A. Subsequent sectors are updated in logically sequential order. When the last sector of the logical group is written, the group addresses wrap around and the write sequence continues with the first sector of the group.
In host write operation #2, the segment of data in the logical sectors LS9-LS12 are being updated. The updated data as LS9'-LS12' are recorded in the dedicated update block in a location directly following where the last write ended. The two host writes are shown recorded in the update block in logically sequential order, namely LS5'-LS12'. The update block is regarded as a sequential update block since it has been filled in logically sequential order. The update data recorded in the update block obsoletes the corresponding data in the original block.

Chaotic Data Update
Chaotic update block management may be initiated for an existing sequential update block when any sector updated by the host within the associated logical group is logically non-sequential. A chaotic update block is a form of data update block in which logical sectors within an associated logical group may be updated in any order and with any amount of repetition. It is created by conversion from a sequential update block when a sector written by the host is logically non-sequential to the previously written sector within the logical group being updated. All sectors subsequently updated in this logical group are written to the next available sector location in the chaotic update block, whatever their logical sector address within the group.
Figure 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block as a result of five separate host write operations, whilst superseded sectors in the original block for the logical group and duplicated sectors in the chaotic update block become obsolete. In host write operation #1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10'-LS11' are stored in a newly allocated update block. At this point, the update block is a sequential one. In host write operation #2, the logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded in the update block in the location immediately following the last write. This converts the update block from a sequential to a chaotic one. In host write operation #3, the logical sector LS10 is being updated again and is recorded in the next location of the update block as LS10''. At this point, the LS10'' in the update block supersedes the LS10' in the previous recording, which in turn supersedes the LS10 in the original block. In host write operation #4, the data in the logical sector LS10 is yet again updated and is recorded in the next location of the update block as LS10'''. Thus, LS10''' is now the latest and only valid data for the logical sector LS10. In host write operation #5, the data in logical sector LS30 is updated and recorded in the update block as LS30'. Thus, the example illustrates that a plurality of logical sectors within a logical group can be written into a chaotic update block in any order and with any repetition.

Forced Sequential Update
Figure 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations that have a discontinuity in logical addresses.
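The sequential-to-chaotic conversion and the superseding of earlier sector versions illustrated in Figures 7A-7B can be modelled minimally as follows. This is an illustrative sketch, not the patent's implementation: the class and method names are assumptions, and the wrap-around case with page tag is deliberately ignored for brevity.

```python
# Minimal, hypothetical model of update-block behaviour: a host write that
# does not continue the logical sequence converts the block to chaotic;
# only the most recently written copy of a sector is valid.

class UpdateBlock:
    def __init__(self):
        self.records = []      # logical sector numbers in order of writing
        self.chaotic = False

    def write(self, sectors):
        """Record a host write of contiguous logical sectors."""
        # A write that does not continue from the last recorded sector
        # converts a sequential update block into a chaotic one
        # (wrap-around via page tag is ignored in this sketch).
        if self.records and sectors[0] != self.records[-1] + 1:
            self.chaotic = True
        self.records.extend(sectors)

    def valid_position(self, sector):
        """Position of the latest (only valid) copy of a sector, if any."""
        for pos in range(len(self.records) - 1, -1, -1):
            if self.records[pos] == sector:
                return pos
        return None
```

Replaying the five host writes of Figure 7B (LS10-11, LS5-6, LS10, LS10, LS30) leaves the block chaotic after write #2, with only the last copy of LS10 valid.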
In host write #1, the update data in logical sectors LS5-LS8 is recorded in a dedicated update block as LS5'-LS8'. In host write #2, the update data in logical sectors LS14-LS16 is recorded in the update block following the last write as LS14'-LS16'. However, there is an address jump between LS8 and LS14, and the host write #2 would normally render the update block non-sequential. Since the address jump is not substantial, one option is to first perform a padding operation (#2A) by copying the data of the intervening sectors from the original block to the update block before executing host write #2. In this way, the sequential nature of the update block is preserved.
Figure 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention. The update process comprises the following steps:
STEP 260: The memory is organized into a plurality of blocks, each block partitioned into a plurality of memory units that are erasable together, each memory unit for storing a logical unit of data.
STEP 262: The data is organized into a plurality of logical groups, each logical group partitioned into a plurality of logical units.
STEP 264: In the standard case, all logical units of a logical group are stored among the memory units of an original block according to a first prescribed order, preferably in logically sequential order. In this way, the index for accessing the individual logical units in the block is known.
STEP 270: For a given logical group (e.g., LGx) of data, a request is made to update a logical unit within LGx. (A logical unit update is given as an example. In general, the update will be a segment of one or more contiguous logical units within LGx.)
STEP 272: The requested update logical unit is to be stored in a second block, dedicated to recording the updates of LGx. The recording order is according to a second order, typically the order in which the updates are requested. One feature of the invention allows an update block to be set up initially to be generic for recording data in logically sequential or chaotic order. So, depending on the second order, the second block can be a sequential one or a chaotic one.
STEP 274: The second block continues to have logical units recorded as requested by the process looping back to STEP 270. The second block will be closed to receiving further updates when a predetermined condition for closure materializes. In that case, the process proceeds to STEP 276.
STEP 276: A determination is made whether the closed second block has its update logical units recorded in the same order as that of the original block. The two blocks are considered to have similar order when they record logical units differing by only a page tag, as described in connection with Figure 3A. If the two blocks have similar order, the process proceeds to STEP 280; otherwise, some sort of garbage collection needs to be performed in STEP 290.
STEP 280: Since the second block has the same order as the first block, it is used to replace the original, first block. The update process then ends at STEP 299.
STEP 290: The latest version of each logical unit of the given logical group is gathered from among the second block (update block) and the first block (original block). The consolidated logical units of the given logical group are then written to a third block in an order the same as that of the first block.
STEP 292: Since the third block (consolidated block) has the same order as the first block, it is used to replace the original, first block. The update process then ends at STEP 299.
STEP 299: When the closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group is terminated.
Figure 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention. The update process comprises the following steps:
STEP 310: For a given logical group (e.g., LGx) of data, a request is made to update a logical sector within LGx. (A sector update is given as an example. In general, the update will be a segment of one or more contiguous logical sectors within LGx.)
STEP 312: If an update block dedicated to LGx does not already exist, proceed to STEP 410 to initiate a new update thread for the logical group. This is accomplished by allocating an update block dedicated to recording update data of the logical group. If there is already an update block open, proceed to STEP 314 to begin recording the update sector onto the update block.
STEP 314: If the current update block is already chaotic (i.e., non-sequential), then simply proceed to STEP 510 to record the requested update sector onto the chaotic update block. If the current update block is sequential, proceed to STEP 316 for processing of a sequential update block.
STEP 316: One feature of the invention allows an update block to be set up initially to be generic for recording data in logically sequential or chaotic order. However, since the logical group ultimately has its data stored in a metablock in logically sequential order, it is desirable to keep the update block sequential as far as possible. Less processing will then be required when the update block is closed to further updates, since no garbage collection will be needed.
Thus, a determination is made whether the requested update will follow the current sequential order of the update block. If the update follows sequentially, then proceed to STEP 510 to perform a sequential update, and the update block will remain sequential. On the other hand, if the update does not follow sequentially (a chaotic update), it will convert the sequential update block to a chaotic one if no other action is taken.
In one embodiment, nothing further is done to salvage the situation, and the process proceeds directly to STEP 370, where the update is allowed to turn the update block into a chaotic one.
Optional Forced Sequential Process
In another embodiment, a forced sequential process STEP 320 is optionally performed to preserve the sequential update block as far as possible in view of a pending chaotic update. There are two situations, both of which require copying missing sectors from the original block to maintain the sequential order of the logical sectors recorded on the update block. The first situation is where the update creates a short address jump. The second situation is to prematurely close out the update block in order to keep it sequential. The forced sequential process STEP 320 comprises the following substeps:
STEP 330: If the update creates a logical address jump not greater than a predetermined amount CB, the process proceeds to the forced sequential update process in STEP 350; otherwise, the process proceeds to STEP 340 to consider whether it qualifies for a forced sequential closeout.
STEP 340: If the number of unfilled physical sectors exceeds a predetermined design parameter CC, whose typical value is half of the size of the update block, then the update block is relatively unused and will not be prematurely closed. The process proceeds to STEP 370 and the update block will become chaotic. On the other hand, if the update block is substantially filled, it is regarded as already well utilized, and the process is directed to STEP 360 for a forced sequential closeout.
STEP 350: Forced sequential update allows the current sequential update block to remain sequential as long as the address jump does not exceed the predetermined amount CB. Essentially, sectors from the associated original block of the update block are copied to fill the gap spanned by the address jump. Thus, the sequential update block will be padded with data in the intervening addresses before proceeding to STEP 510 to record the current update sequentially.
STEP 360: Forced sequential closeout allows the currently sequential update block to be closed out if it is already substantially filled, rather than being converted to a chaotic one by the pending chaotic update. A chaotic or non-sequential update is defined as one with a forward address transition not covered by the address jump exception above, a backward address transition, or an address repetition. To prevent the sequential update block from being converted by the chaotic update, the unwritten sector locations of the update block are filled by copying sectors from the associated, partially obsolete, original block. The original block is then fully obsolete and erased. The current update block now has the full complement of logical sectors, and is then closed out as an intact metablock replacing the original metablock. The process then proceeds to STEP 430 to have a new update block allocated in its place to accept the recording of the pending sector update first requested in STEP 310.
Conversion to Chaotic Update Block
STEP 370: When the pending update is not in sequential order and, optionally, the forced sequential conditions cannot be satisfied, the sequential update block is allowed to be converted to a chaotic one by virtue of allowing the pending update sector, with a non-sequential address, to be recorded on the update block when the process proceeds to STEP 510. If the maximum number of chaotic update blocks already exists, the least recently accessed chaotic update block must be closed before the conversion is allowed to proceed; thus the maximum number of chaotic blocks is not exceeded. The identification of the least recently accessed chaotic update block is the same as in the general case described in STEP 420, but is constrained to chaotic update blocks only. Closing a chaotic update block at this point is achieved by consolidation as described in STEP 550.
Allocation of a New Update Block Subject to System Restriction
STEP 410: The process of allocating an erased metablock as an update block begins with determining whether a predetermined system limitation is exceeded. Due to finite resources, the memory management system typically allows a predetermined maximum number CA of update blocks to exist concurrently. This limit is the aggregate of sequential update blocks and chaotic update blocks, and is a design parameter. In a preferred embodiment, the limit is, for example, a maximum of 8 update blocks. Also, due to the higher demand on system resources, there is a corresponding predetermined limit on the maximum number of chaotic update blocks that can be open at the same time (e.g., 4).
Thus, when CA update blocks have already been allocated, the next allocation request can only be satisfied after closing one of the existing allocated ones. The process proceeds to STEP 420. When the number of open update blocks is less than CA, the process proceeds directly to STEP 430.
STEP 420: When the maximum number of update blocks CA is exceeded, the least recently accessed update block is closed and garbage collection is performed. The least recently accessed update block is identified as the update block associated with the logical block that has been accessed least recently. For the purpose of determining the least recently accessed block, an access includes writes and, optionally, reads of logical sectors. A list of open update blocks is maintained in order of access; at initialization, no access order is assumed. The closure of the update block follows along the same process described in connection with STEP 360 and STEP 530 when the update block is sequential, and along the same process described in connection with STEP 540 when the update block is chaotic. The closure makes room for a new update block to be allocated in STEP 430.
STEP 430: The allocation request is fulfilled with the allocation of a new metablock as an update block dedicated to the given logical group LGx. The process then proceeds to STEP 510.
Record Update Data onto Update Block
STEP 510: The requested update sector is recorded onto the next available physical location of the update block. The process then proceeds to STEP 520 to determine whether the update block is ripe for closeout.
Update Block Closeout
STEP 520: If the update block still has room for accepting additional updates, proceed to STEP 522. Otherwise, proceed to STEP 570 to close out the update block. When a currently requested write attempts to write more logical sectors than the block has space for, there are two possible implementations of filling up the update block. In the first implementation, the write request is split into two portions, with the first portion writing up to the last physical sector of the block. The block is then closed, and the second portion of the write is treated as the next requested write. In the other implementation, the requested write is withheld while the block has its remaining sectors padded, and is then closed. The requested write is then treated as the next requested write.
STEP 522: If the update block is sequential, proceed to STEP 530 for a sequential closeout. If the update block is chaotic, proceed to STEP 540 for a chaotic closeout.
Sequential Update Block Closeout
STEP 530: Since the update block is sequential and fully filled, the logical group stored in it is intact. The metablock is intact and replaces the original one. At this time, the original block is fully obsolete and erased. The process then proceeds to STEP 570, where the update thread for the given logical group ends.
Chaotic Update Block Closeout
STEP 540: Since the update block is non-sequentially filled and may contain multiple updates of some logical sectors, garbage collection is performed to salvage the valid data in it. The chaotic update block will either be compacted or consolidated. The process to perform will be determined in STEP 542.
STEP 542: Whether to perform compaction or consolidation depends on the degeneracy of the update block. If a logical sector has been updated multiple times, its logical address is highly degenerate. There will be multiple versions of the same logical sector recorded on the update block, and only the last recorded version is the valid one for that logical sector. In an update block containing logical sectors with multiple versions, the number of distinct logical sectors will be much less than that of the logical group.
In the preferred embodiment, when the number of distinct logical sectors in the update block exceeds a predetermined design parameter CD, whose typical value is half of the size of the logical group, the closeout process performs a consolidation in STEP 550; otherwise, the process proceeds to a compaction in STEP 560.
STEP 550: If the chaotic update block is to be consolidated, the original block and the update block will be replaced by a new standard metablock containing the consolidated data. After consolidation, the update thread ends in STEP 570.
STEP 560: If the chaotic update block is to be compacted, it will be replaced by a new update block carrying the compacted data. After compaction, the processing of the compacted update block ends in STEP 570. Alternatively, the compaction can be delayed until the update block is written to again, thus removing the possibility of a compaction being followed by a consolidation without intervening updates. The new update block is then used in further updating of the given logical group when a next request for an update in LGx appears.
STEP 570: When the closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group is terminated. When the closeout process creates a new update block replacing an existing one, the new update block is used to record the next update requested for the given logical group. When an update block is not closed out, the processing continues when a next request for an update in LGx appears in STEP 310.
As can be seen from the process described above, when a chaotic update block is closed, the update data recorded on it is further processed. In particular, its valid data is garbage collected either by a process of compaction to another chaotic block, or by a process of consolidation with its associated original block to form a new standard sequential block.
Figure 11A is a flow diagram illustrating in more detail the consolidation process of closing a chaotic update block shown in Figure 10. Chaotic update block consolidation is one of two possible processes performed when the update block is being closed out, e.g., when the update block is full with its last physical sector location written. Consolidation is chosen when the number of distinct logical sectors written in the block exceeds the predetermined design parameter CD. The consolidation process STEP 550 shown in Figure 10 comprises the following substeps:
STEP 551: When the chaotic update block is being closed, a new metablock replacing it is allocated.
STEP 552: Gather the latest version of each logical sector among the chaotic update block and its associated original block, ignoring all obsolete sectors.
STEP 554: Record the gathered valid sectors onto the new metablock in logically sequential order to form an intact block, i.e., a block with all the logical sectors of the logical group recorded in sequential order.
STEP 556: Replace the original block with the new intact block.
STEP 558: Erase the closed-out update block and the original block.
Figure 11B is a flow diagram illustrating in more detail the compaction process of closing a chaotic update block shown in Figure 10. Compaction is chosen when the number of distinct logical sectors written in the block is below the predetermined design parameter CD. The compaction process STEP 560 shown in Figure 10 comprises the following substeps:
STEP 561: When the chaotic update block is being compacted, a new metablock replacing it is allocated.
STEP 562: Gather the latest version of each logical sector among the existing chaotic update block to be compacted.
STEP 564: Record the gathered sectors onto the new update block to form a new update block having compacted sectors.
STEP 566: Replace the existing update block with the new update block having compacted sectors.
STEP 568: Erase the closed-out update block.
Logical and Metablock States
Figure 12A illustrates all possible states of a logical group, and the possible transitions between them under various operations.
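The closeout decision of STEPS 542-560 can be sketched as follows. This is an illustrative rendering, not the patent's code: the function names are assumptions, while the threshold CD follows the text's representative value of half the logical-group size.

```python
# Hedged sketch of chaotic-block closeout: consolidate when the block holds
# many distinct logical sectors; merely compact when most records are
# repeated updates of a few sectors.

def closeout_action(records, group_size):
    """records: logical sector numbers written to the chaotic update block."""
    c_d = group_size // 2                 # design parameter CD (half the group)
    if len(set(records)) > c_d:
        return "consolidate"              # merge with original into an intact block
    return "compact"                      # rewrite latest versions to a new chaotic block


def compact(records):
    """Keep only the latest copy of each sector, preserving write order."""
    latest = {s: i for i, s in enumerate(records)}
    return [s for i, s in enumerate(records) if latest[s] == i]
```

For the write history of Figure 7B, compaction drops the two superseded copies of LS10, and with only 5 distinct sectors in a 64-sector group the block would be compacted rather than consolidated.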
Figure 12B is a table listing the possible states of a logical group. The logical group states are defined as follows:
1. Intact: All logical sectors in the logical group have been written in logically sequential order, possibly using page tag wrap-around, in a single metablock.
2. Unwritten: No logical sector in the logical group has ever been written. The logical group is marked as unwritten in a group address table and has no allocated metablock. A predefined data pattern is returned in response to a host read for every sector within this group.
3. Sequential Update: Some sectors within the logical group have been written in a metablock in logically sequential order, possibly using page tag, so that they supersede the corresponding logical sectors from any previous Intact state of the group.
4. Chaotic Update: Some sectors within the logical group have been written in a metablock in logically non-sequential order, possibly using page tag, so that they supersede the corresponding logical sectors from any previous Intact state of the group. A sector within the group may be written more than once, with the latest version superseding all previous versions.
Figure 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations.
Figure 13B is a table listing the possible states of a metablock. The metablock states are defined as follows:
1. Erased: All the sectors in the metablock are erased.
2. Sequential Update: The metablock is partially written with sectors in logically sequential order, possibly using page tag. All the sectors belong to the same logical group.
3. Chaotic Update: The metablock is partially or fully written with sectors in logically non-sequential order. Any sector can be written more than once. All sectors belong to the same logical group.
4. Intact: The metablock has been fully written in logically sequential order, possibly using page tag.
5. Original: The metablock was previously Intact, but at least one sector has been made obsolete by a host data update.
Figures 14(A)-14(J) are state diagrams showing the effect of various operations on the state of the logical group and also on the physical metablock.
Figure 14(A) shows state diagrams corresponding to the logical group and the metablock transitions for a first write operation. The host writes one or more sectors of a previously unwritten logical group, in logically sequential order, to a newly allocated erased metablock. The logical group and the metablock go to the Sequential Update state.
Figure 14(B) shows state diagrams corresponding to the logical group and the metablock transitions for a first intact operation. A previously unwritten Sequential Update logical group becomes Intact as all its sectors are written sequentially by the host. The transition can also happen if the memory system fills up the group by filling the remaining unwritten sectors with a predefined data pattern. The metablock becomes Intact.
Figure 14(C) shows state diagrams corresponding to the logical group and the metablock transitions for a first chaotic operation. A previously unwritten Sequential Update logical group becomes Chaotic when at least one sector is written non-sequentially by the host.
Figure 14(D) shows state diagrams corresponding to the logical group and the metablock transitions for a first compaction operation. All valid sectors within a previously unwritten Chaotic Update logical group are copied from the old block to a new chaotic metablock, and the old block is then erased.
Figure 14(E) shows state diagrams corresponding to the logical group and the metablock transitions for a first consolidation operation. All valid sectors within a previously unwritten Chaotic Update logical group are moved from the old chaotic block to fill a newly allocated erased block in logically sequential order. Sectors unwritten by the host are filled with a predefined data pattern. The old chaotic block is then erased.
Figure 14(F) shows state diagrams corresponding to the logical group and the metablock transitions for a sequential write operation. The host writes one or more sectors of an Intact logical group, in logically sequential order, to a newly allocated erased metablock. The logical group and the metablock go to the Sequential Update state. The previously Intact metablock becomes an Original metablock.
Figure 14(G) shows state diagrams corresponding to the logical group and the metablock transitions for a sequential fill operation. A Sequential Update logical group becomes Intact when all its sectors are written sequentially by the host. This may also occur during garbage collection when the Sequential Update logical group is filled with valid sectors from the original block in order to make it Intact, after which the original block is erased.
Figure 14(H) shows state diagrams corresponding to the logical group and the metablock transitions for a non-sequential write operation. A Sequential Update logical group becomes Chaotic when at least one sector is written non-sequentially by the host. The non-sequential sector writes may cause valid sectors in either the update block or the corresponding original block to become obsolete.
Figure 14(I) shows state diagrams corresponding to the logical group and the metablock transitions for a compaction operation. All valid sectors within a Chaotic Update logical group are copied from the old block into a new chaotic metablock, and the old block is then erased. The original block is unaffected.
Figure 14(J) shows state diagrams corresponding to the logical group and the metablock transitions for a consolidation operation. All valid sectors within a Chaotic Update logical group are copied from the old chaotic block to fill a newly allocated erased block in logically sequential order. The old chaotic block and the original block are then erased.
Update Block Tracking and Management
Figure 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and erased blocks for allocation. The allocation block list (ABL) 610 is held in the controller RAM 130, to allow management of the allocation of erased blocks, allocated update blocks, associated blocks and control structures, and to enable correct logical-to-physical address translation. In the preferred embodiment, the ABL includes: a list of erased blocks, an open update block list 614, and a closed update block list 616.
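The metablock states of Figure 13B and a few of the transitions of Figures 14(A)-14(J) can be encoded as a small lookup table. The table below is a partial, assumed summary for illustration only, not an exhaustive copy of the figures, and all names are hypothetical.

```python
# Illustrative, partial encoding of metablock states and transitions.

ERASED, SEQUENTIAL, CHAOTIC, INTACT, ORIGINAL = (
    "erased", "sequential update", "chaotic update", "intact", "original")

TRANSITIONS = {
    (ERASED, "first write"): SEQUENTIAL,            # Fig. 14(A)
    (SEQUENTIAL, "fill sequentially"): INTACT,      # Fig. 14(B)/(G)
    (SEQUENTIAL, "non-sequential write"): CHAOTIC,  # Fig. 14(C)/(H)
    (INTACT, "host update elsewhere"): ORIGINAL,    # block becomes the original
    (ORIGINAL, "consolidation"): ERASED,            # obsolete original erased
}


def next_state(state, event):
    # Events not in the table leave the state unchanged in this sketch.
    return TRANSITIONS.get((state, event), state)
```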
The open update block list 614 is the set of block entries in the ABL with the attribute of Open Update Block. The open update block list has one entry for each data update block currently open. Each entry holds the following information. LG is the logical group address to which the current update metablock is dedicated. Sequential/Chaotic is the status denoting whether the update block has been filled with sequential or chaotic update data. MB is the metablock address of the update block. Page tag is the starting logical sector recorded at the first physical location of the update block. Number of sectors written denotes the number of sectors currently written onto the update block. MB0 is the metablock address of the associated original block. Page Tag0 is the page tag of the associated original block. The closed update block list 616 is a subset of the allocation block list (ABL). It is the set of block entries in the ABL with the attribute of Closed Update Block.
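The fields of one open update block list entry described above can be rendered as a small record type. This is a hypothetical sketch: the field names mirror the fields in the text (LG, sequential/chaotic state, MB, page tag, number of sectors written, MB0, Page Tag0), but the class and the `find_entry` helper are illustrative, not the patent's data layout.

```python
from dataclasses import dataclass


@dataclass
class OpenUpdateBlockEntry:
    """One entry of the open update block list 614 (illustrative)."""
    lg: int                 # logical group address being updated
    is_sequential: bool     # sequential vs chaotic fill state
    mb: int                 # metablock address of the update block
    page_tag: int           # starting logical sector at the first physical slot
    sectors_written: int    # number of sectors recorded so far
    mb0: int                # metablock address of the associated original block
    page_tag0: int          # page tag of the associated original block


def find_entry(open_list, lg):
    """Look up the open update block entry for a logical group, if any."""
    for entry in open_list:
        if entry.lg == lg:
            return entry
    return None
```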
Since many control data tend to be small data that is frequently changed, it cannot be efficiently stored and maintained at any time in a flash memory having a large block structure. In order to store relatively static control data in non-volatile flash memory, and to find a small number of relatively variable control data in the controller RAM for more efficient update and access, hierarchical and Decentralized solution. In the event of a power outage or failure, 'this scheme allows scanning of non-volatile ticks in the group's control data to quickly build control data in the volatile controller. This is possible because the invention can be limited The number of blocks associated with the possible activity of the data for a given logical group. In this way, the scan can be restricted. In addition, some of the control data that requires persistence is stored in the non-volatile relay area updated by the segment. In the block, each update will record a new segment that replaces the previous segment. The control data will use the segment index II case to record the update by segment in the relay block. μ Non-volatile flash memory 2〇 _ A large amount of relatively static control data is stored. This includes · · Group Address Table (GAT) 210, mixed block block prime 98860. Doc -24- 1272487 (CBI) 220, erased block list (EBL) 23〇 and MAP 240. GAT 21 〇 can record the mapping between logical groups of segments and their corresponding relay blocks. These mappings will not change unless updated. The CBI 220 can record the mapping of logically non-sequential segments during the update. EBL 23 0 records the pool of relay blocks that have been erased. The MAP 240 is a bit map that displays the erased state of all of the relay blocks in the flash memory. The volatile controller RAM 130 stores a small portion of the control data that is frequently changed and accessed. This includes a configuration block list (ABL) 134 and a clear block list (CBL) 136. 
The ABL 134 keeps track of the allocation of metablocks for recording update data, while the CBL 136 keeps track of metablocks that have been deallocated and erased. In the preferred embodiment, the RAM 130 acts as a cache for control data stored in the flash memory 200.

Update Block Manager

The update block manager 150 (shown in FIG. 2) handles the update of logical groups of data. According to one aspect of the invention, each logical group of sectors undergoing an update is allocated a dedicated update metablock for recording the update data. In the preferred embodiment, any segment of one or more sectors of the logical group will be recorded in the update block. An update block can be managed to receive update data in either sequential order or non-sequential (also known as "chaotic") order. A chaotic update block allows sector data to be updated in any order within a logical group, and with any repetition of individual sectors. In particular, a sequential update block can become a chaotic update block without the need to relocate any data sectors. No predetermined allocation of blocks for chaotic data updates is required; a non-sequential write at any logical address is automatically accommodated. Thus, unlike prior art systems, there is no special treatment of whether the various update segments of a logical group are in logically sequential or non-sequential order. The generic update block is simply used to record the various segments in the order they are requested by the host. For example, even if host system data or system control data tends to be updated in a chaotic fashion, regions of logical address space corresponding to host system data need not be treated differently from regions with host user data. The data of a complete logical group of sectors is preferably stored in logically sequential order in a single metablock.
In this way, an index to the stored logical sectors is predefined. When a metablock has all the sectors of a given logical group stored in the predetermined order, it is said to be "intact." As for an update block, when it eventually fills up with update data in logically sequential order, the update block becomes an updated intact metablock that readily replaces the original metablock. On the other hand, if the update block fills up with update data in an order logically different from that of the intact block, the update block is a non-sequential or chaotic update block, and the out-of-order segments must be further processed so that the update data of the logical group is eventually stored in the same order as that of the intact block; in the preferred case, in logically sequential order in a single metablock. The further processing involves consolidating the updated sectors in the update block with the unchanged sectors in the original block into yet another update metablock. The consolidated update block will then be in logically sequential order and can be used to replace the original block. Under certain predetermined conditions, the consolidation process is preceded by one or more compaction processes. A compaction simply re-records the sectors of the chaotic update block into a replacing chaotic update block, while eliminating any duplicate logical sectors that have been rendered obsolete by a subsequent update of the same logical sector. The update scheme allows multiple update threads, up to a predetermined maximum, to run concurrently, each thread being a logical group undergoing updates with its dedicated update metablock.

Sequential Data Update

When data belonging to a logical group is first updated, a metablock is allocated and dedicated as an update block for the update data of the logical group.
The update block is allocated when a command is received from the host to write a segment of one or more sectors of a logical group for which a metablock already exists storing all its sectors intact. For the first host write operation, a first segment of data is recorded on the update block. Since each host write is a segment of one or more sectors with contiguous logical addresses, it follows that the first update is always sequential in nature. In subsequent host writes, update segments within the same logical group are recorded in the update block in the order received from the host. A block continues to be managed as a sequential update block whilst the sectors updated by the host within the associated logical group remain logically sequential. All sectors updated in this logical group are written to this sequential update block, until the block is either closed or converted to a chaotic update block.

FIG. 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations, whilst the corresponding sectors in the original block for the logical group become obsolete. In host write operation #1, the data in logical sectors LS5-LS8 is being updated. The data updated as LS5'-LS8' is recorded in a newly allocated dedicated update block. For expediency, the first sector to be updated in the logical group is recorded in the dedicated update block starting from the first physical sector location. In general, the first logical sector to be updated is not necessarily the first sector of the group, so there may be an offset between the start of the logical group and the start of the update block. This offset is known as the "page tag," as described previously in connection with FIG. 3A. Subsequent sectors are updated in logically sequential order.
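The page-tag arithmetic described above can be sketched as a small helper, assuming sector indices wrap modulo the group size (the function name and fixed parameters are illustrative, not from the patent):

```python
def physical_offset(logical_sector: int, page_tag: int, group_size: int) -> int:
    """Map a logical sector index (within its logical group) to the physical
    sector offset inside a sequential block.  page_tag is the logical sector
    recorded at physical offset 0; later indices wrap around the group end.
    Hypothetical helper illustrating the page-tag scheme."""
    return (logical_sector - page_tag) % group_size
```

With a 16-sector group whose first updated sector is LS5 (page tag 5), LS5 maps to physical offset 0 and LS8 to offset 3, while LS0 wraps to offset 11.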
When the last sector of the logical group has been written, the group address wraps around and the write sequence continues with the first sector of the group. In host write operation #2, the segment of data in logical sectors LS9-LS12 is being updated. The data updated as LS9'-LS12' is recorded in the dedicated update block in the location directly following where the last write ended. It can be seen that the two host writes are such that the update data is recorded in the update block in logically sequential order, namely LS5'-LS12'. The update block is regarded as a sequential update block since it has been filled in logically sequential order. The update data recorded in the update block obsoletes the corresponding data in the original block.

Chaotic Data Update

Chaotic update block management may be initiated for an existing sequential update block when any sector updated by the host within the associated logical group is logically non-sequential. A chaotic update block is a form of data update block in which logical sectors within an associated logical group may be updated in any order and with any amount of repetition. It is created by conversion from a sequential update block when a sector written by the host is logically non-sequential to the previously written sector within the logical group being updated. All sectors subsequently updated in this logical group are written in the next available sector location in the chaotic update block, whatever their logical sector address within the group.

FIG. 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block as a result of five separate host write operations, whilst the superseded sectors in the original block for the logical group and the duplicated sectors in the chaotic update block become obsolete.
In host write operation #1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10'-LS11' are stored in a newly allocated update block. At this point, the update block is a sequential one. In host write operation #2, the logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded in the update block in the location immediately following the last write. This converts the update block from a sequential to a chaotic one. In host write operation #3, the logical sector LS10 is being updated again and is recorded in the next location of the update block as LS10''. At this point, LS10'' in the update block supersedes LS10' in a previous recording, which in turn supersedes LS10 in the original block. In host write operation #4, the data in logical sector LS10 is again updated and is recorded in the next location of the update block as LS10'''. Thus, LS10''' is now the last and the only valid data for the logical sector LS10. In host write operation #5, the data in logical sector LS30 is being updated and recorded in the update block as LS30'. Thus, the example illustrates that sectors within a logical group can be written into a chaotic update block in any order and with any repetition.

Forced Sequential Update

FIG. 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two host write operations that have a discontinuity in logical addresses. In host write #1, the update data in logical sectors LS5-LS8 is recorded in a dedicated update block as LS5'-LS8'. In host write #2, the update data in logical sectors LS14-LS16 is being recorded in the update block following the last write as LS14'-LS16'. However, there is an address jump between LS8 and LS14, and the host write #2 would normally render the update block non-sequential.
Since the address jump is not substantial, one option is to first perform a padding operation (#2A) by copying the data of the intervening sectors from the original block to the update block before executing host write #2. In this way, the sequential nature of the update block is preserved.

FIG. 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention. The update process comprises the following steps:

Step 260: The memory is organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.

Step 262: The data is organized into logical groups, each logical group partitioned into logical units.

Step 264: In the standard case, all logical units of a given logical group are stored among the memory units of an original block according to a first prescribed order, preferably in logically sequential order. In this way, the index for accessing the individual logical units in the block is known.

Step 270: For a given logical group (e.g., LGx) of data, a request is made to update a logical unit within LGx. (A logical unit update is given as an example. In general, the update will be a segment of one or more contiguous logical units.)

Step 272: The requested update logical unit is to be stored in a second block, dedicated to recording the updates of LGx. The recording order is according to a second order, typically the order the updates are requested. One feature of the invention allows an update block to be set up initially generic to recording data in logically sequential or chaotic order. So, depending on the second order, the second block can be a sequential one or a chaotic one.
Step 274: The second block continues to have logical units recorded as the process loops back to step 270. The second block will be closed to receiving further updates when a predetermined condition for closure materializes. In that case, the process proceeds to step 276.

Step 276: A determination is made whether the closed second block has its update logical units recorded in the same order as that of the original block. The two blocks are deemed to have their logical units recorded in the same order when they differ by at most a page tag, as described in connection with FIG. 3A. If the two blocks have the same order, the process proceeds to step 280; otherwise, some sort of garbage collection must be performed in step 290.

Step 280: Since the second block has the same order as the first block, it is used to replace the original, first block. The update process then ends at step 299.

Step 290: The latest version of each logical unit of the given logical group is gathered from among the second block (update block) and the first block (original block). The consolidated logical units of the given logical group are then written to a third block in an order the same as that of the first block.

Step 292: Since the third block (consolidated block) has the same order as the first block, it is used to replace the original, first block. The update process then ends at step 299.

Step 299: The update process ends. When the closeout process creates an intact update block, it becomes the new standard block for the given logical group, and the update thread for the logical group is terminated.

FIG. 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention. The update process comprises the following steps:

Step 310: For a given logical group (e.g., LGx) of data, a request is made to update a logical sector within LGx. (A sector update is given as an example.
In general, the update will be a segment consisting of one or more contiguous logical sectors within LGx.)

Step 312: If an update block dedicated to LGx does not already exist, the process proceeds to step 410 to initiate a new update thread for the logical group. This is accomplished by allocating an update block dedicated to recording the update data of the logical group. If there is already an open update block, the process proceeds to step 314 to begin recording the update sectors onto the update block.

Step 314: If the current update block is already chaotic (i.e., non-sequential), the process proceeds directly to step 510 to record the requested update sectors onto the chaotic update block. If the current update block is sequential, the process proceeds to step 316 for processing of a sequential update block.

Step 316: One feature of the invention allows an update block to be set up initially generic to recording data in logically sequential or chaotic order. However, since the logical group ultimately has its data stored in a metablock in logically sequential order, it is desirable to keep the update block sequential as far as possible. Less processing will then be required when the update block is closed to further updates, since no garbage collection is needed. Thus, a determination is made whether the requested update will follow the current sequential order of the update block. If the update follows sequentially, the process proceeds to step 510 to perform a sequential update, and the update block remains sequential. On the other hand, if the update does not follow sequentially (a chaotic update), the sequential update block will be converted to a chaotic one if no other action is taken. In one embodiment, nothing further is done to salvage the situation, and the process proceeds directly to step 370, where the update is allowed to turn the update block into a chaotic one.
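Stepping back to the generic flow of FIG. 9, the closeout decision (steps 276-292) can be sketched as follows, representing each block by the list of logical unit indices in recorded order; the function names and the rotation test are an illustrative reading of the page-tag criterion, not the patent's implementation:

```python
def same_order(original, update):
    # Two blocks are deemed to record logical units in the same order when
    # they differ only by a page-tag rotation (cf. FIG. 3A).
    n = len(original)
    return len(update) == n and any(
        update == original[k:] + original[:k] for k in range(n))

def close_update_block(original, update):
    """Step 276: compare orders.  Returns 'replace' (step 280: the update
    block directly replaces the original) or 'consolidate' (step 290:
    gather latest versions into a third block in the original order)."""
    if same_order(original, update):
        return 'replace'
    return 'consolidate'
```

A rotated-but-sequential block such as [1, 2, 3, 0] against original [0, 1, 2, 3] replaces the original outright, while any other ordering triggers garbage collection.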
Optional Forced Sequential Process

In another embodiment, a forced sequential process step 320 is optionally performed, to preserve the sequential update block as far as possible in view of a pending chaotic update. There are two situations, both of which require copying missing sectors from the original block to maintain the sequential order of the logical sectors recorded on the update block. The first situation is where the update creates a short logical address jump. The second situation is to prematurely close out the update block in order to keep it sequential. The forced sequential process step 320 comprises the following substeps:

Step 330: If the update creates a logical address jump not greater than a predetermined amount CB, the process proceeds to the forced sequential update process in step 350; otherwise, the process proceeds to step 340 to consider whether the block qualifies for a forced sequential closeout. CB is a design parameter.

Step 340: If the number of unfilled physical sectors exceeds a predetermined design parameter CC (whose typical value is half of the size of the update block), the update block is regarded as relatively unused and will not be closed prematurely. The process proceeds to step 370, and the update block will become chaotic. On the other hand, if the update block is substantially filled, it is regarded as already well utilized, and the process therefore proceeds to step 360 for a forced sequential closeout.
Step 350: Forced sequential update allows the current sequential update block to remain sequential as long as the address jump does not exceed the predetermined amount CB. Essentially, sectors from the update block's associated original block are copied to fill the gap spanned by the address jump. Thus, the sequential update block is padded with data in the intervening addresses before proceeding to step 510 to record the current update sequentially.

Step 360: Forced sequential closeout allows the currently substantially filled sequential update block to be closed out, rather than converted to a chaotic one by the pending chaotic update. A chaotic or non-sequential update is defined as one with a forward address jump not covered by the address jump exception above, a backward address jump, or an address repetition. To prevent the sequential update block from being converted by the chaotic update, the unwritten sector locations of the update block are filled by copying sectors from the associated, partially obsolete original block. The original block is then fully obsolete and may be erased. The current update block now has the full complement of logical sectors and is closed out as an intact metablock replacing the original metablock. The process then proceeds to step 430 to have a new update block allocated in its place to accept the recording of the pending sector update first requested in step 310.

Conversion to Chaotic Update Block

Step 370: When the pending update is not in sequential order and, optionally, the forced sequential conditions cannot be met, the process proceeds to step 510 to allow the sequential update block to be converted to a chaotic one, with the pending update sector having a non-sequential address recorded on the update block. If the maximum number of chaotic update blocks already exists, the least recently accessed chaotic update block must be closed before the conversion is allowed to proceed; thus, the maximum number of chaotic blocks is not exceeded. The identification of the least recently accessed chaotic update block is the same as in the general case described in step 420, but is restricted to chaotic update blocks only.
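The decision among steps 350, 360 and 370 for a pending non-sequential write can be sketched as below; the threshold values and the function name are assumptions for illustration, not the patent's parameters:

```python
C_B = 8    # longest forward address jump bridgeable by padding (assumed value)
C_C = 16   # unfilled-sector threshold, roughly half a block (assumed value)

def on_nonsequential_write(jump, unfilled_sectors):
    """Return the action taken by a sequential update block for a pending
    non-sequential write: 'pad' (step 350), 'chaotic' (step 370) or
    'close' (step 360).  jump is the forward distance from the next
    sequential logical address; backward jumps and repeats are negative
    or zero and cannot be padded."""
    if 0 < jump <= C_B:
        return 'pad'            # step 330 -> 350: bridge the short jump
    if unfilled_sectors > C_C:
        return 'chaotic'        # step 340 -> 370: block under-utilized
    return 'close'              # step 340 -> 360: forced sequential closeout
```

A short forward jump is padded; a long jump on a mostly empty block lets the block go chaotic; a long jump (or a backward jump) on a nearly full block forces a sequential closeout.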
Closing the chaotic update block at this point is accomplished by consolidation, as described in step 550.

Allocation of New Update Block Subject to System Restriction

Step 410: The process of allocating an erased metablock as an update block begins with determining whether a predetermined system restriction is exceeded. Due to finite resources, the memory management system typically allows a predetermined maximum number CA of update blocks to exist concurrently. This limit is the aggregate of sequential update blocks and chaotic update blocks, and is a design parameter. In a preferred embodiment, the limit is, for example, a maximum of 8 update blocks. Also, due to the higher demand on system resources, there may be a corresponding predetermined limit on the maximum number of chaotic update blocks that can be open at the same time. Thus, when CA update blocks have already been allocated, the next allocation request can only be satisfied after one of the existing allocated blocks is closed. The process proceeds to step 420. When the number of open update blocks is less than CA, the process proceeds directly to step 430.

Step 420: In the event the maximum number CA of update blocks would be exceeded, the least recently accessed update block is closed and garbage collection is performed. The least recently accessed update block is identified as the update block associated with the logical block that has been accessed least recently. For the purpose of determining the least recently accessed block, an access includes writes and, optionally, reads of logical sectors. A list of open update blocks is maintained in order of access; at initialization, no access order is assumed. The closure of the update block follows the process described in connection with steps 360 and 530 when the update block is sequential, and in connection with step 540 when the update block is chaotic.
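The allocation discipline of steps 410-430, with a least-recently-accessed closure at the CA limit, can be sketched as follows (class and method names are hypothetical; closing is shown as a placeholder for the closeout/garbage-collection steps):

```python
from collections import OrderedDict

C_A = 8  # maximum number of concurrently open update blocks (from the text)

class UpdateBlockPool:
    """Sketch of steps 410-430: one dedicated update block per logical
    group, with the least recently accessed block closed first when the
    C_A limit is reached.  Illustrative bookkeeping only."""

    def __init__(self):
        self.open_blocks = OrderedDict()      # logical group -> block info

    def access(self, logical_group):
        # A write (or, optionally, a read) makes the group the most recent.
        self.open_blocks.move_to_end(logical_group)

    def allocate(self, logical_group):
        if len(self.open_blocks) >= C_A:      # step 410: limit reached
            oldest = next(iter(self.open_blocks))
            self.close(oldest)                # step 420: close LRU block
        self.open_blocks[logical_group] = {'sequential': True}  # step 430

    def close(self, logical_group):
        # Placeholder for the closeout processing of steps 530/540.
        del self.open_blocks[logical_group]
```

With eight groups open, a ninth allocation evicts the least recently accessed group while recently touched groups survive.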
The closure creates room for the allocation of a new update block in step 430.

Step 430: The allocation request is fulfilled with the allocation of a new metablock dedicated to the given logical group LGx as its update block. The process then proceeds to step 510.

Recording Update Data onto Update Block

Step 510: The requested update sectors are recorded onto the next available physical locations of the update block. The process then proceeds to step 520 to determine whether the update block is ripe for closeout.

Update Block Closeout

Step 520: If the update block still has room to accept additional updates, the process proceeds to step 522. Otherwise, the process proceeds to step 570 to close out the update block. There are two possible implementations for a pending write that attempts to write more logical sectors than the block has room for. In the first implementation, the write request is split into two portions, with the first portion writing up to the last physical sector of the block. The block is then closed, and the second portion of the write is treated as the next requested write. In the other implementation, the requested write is withheld while the block has its remaining sectors padded, and is then closed. The requested write is then treated as the next requested write.

Step 522: If the update block is sequential, the process proceeds to step 530 for a sequential closeout. If the update block is chaotic, the process proceeds to step 540 for a chaotic closeout.

Sequential Update Block Closeout

Step 530: Since the update block is sequential and fully filled, the logical group stored in it is intact. The metablock is intact and replaces the original one. At this time, the original block is fully obsolete and may be erased.
The process then proceeds to step 570, where the update thread for the given logical group ends.

Chaotic Update Block Closeout

Step 540: Since the update block is non-sequentially filled and may contain multiple updates of some logical sectors, garbage collection is performed to salvage the valid data in it. The chaotic update block will either be compacted or consolidated. Which process to perform is determined in step 542.

Step 542: Whether to perform compaction or consolidation depends on the degeneracy of the update block. If a logical sector has been updated multiple times, its logical address is highly degenerate. There will be multiple versions of the same logical sector recorded on the update block, and only the last recorded version is the valid one for that logical sector. In an update block containing logical sectors with multiple versions, the number of distinct logical sectors will be much less than that of a logical group. In the preferred embodiment, when the number of distinct logical sectors in the update block exceeds a predetermined design parameter CD (whose typical value is half the size of a logical group), the closeout process performs a consolidation in step 550; otherwise, the process proceeds to a compaction in step 560.

Step 550: If the chaotic update block is to be consolidated, the original block and the update block will be replaced by a new standard metablock containing the consolidated data.
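The chaotic closeout decision and the two garbage-collection outcomes can be sketched as one function; the function and argument names are hypothetical, and the data structures stand in for on-chip sector records:

```python
def close_chaotic_block(chaotic_writes, original_sectors, c_d):
    """Sketch of steps 542-560.  chaotic_writes is the ordered list of
    (logical sector, data) pairs recorded on the chaotic block;
    original_sectors maps every logical sector of the group to its data
    in the original block.  Returns ('consolidate', intact data in
    logically sequential order) or ('compact', surviving writes)."""
    latest = {}
    for sector, data in chaotic_writes:
        latest[sector] = data               # later copies supersede earlier

    if len(latest) > c_d:                   # step 542: many distinct sectors
        merged = dict(original_sectors)     # step 550: consolidate with the
        merged.update(latest)               # original block
        return 'consolidate', [merged[s] for s in sorted(merged)]
    return 'compact', list(latest.items())  # step 560: rewrite valid copies
```

A block with few distinct sectors is merely compacted (duplicates dropped), while a highly updated block is merged with its original into an intact, sequentially ordered replacement.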
After the consolidation, the update thread ends in step 570.

Step 560: If the chaotic update block is to be compacted, it will be replaced by a new update block carrying the compacted data. After the compaction, the processing of the compacted update block ends in step 570. Alternatively, the compaction can be delayed until the update block is written to again, thus removing the possibility of a compaction being followed by a consolidation with no intervening updates. The new update block is then used in further updating of the given logical group when a next request for update in LGx appears in step 502.

Step 570: When the closeout process creates an intact update block, it becomes the new standard block for the given logical group, and the update thread for the group is terminated. When the closeout process creates a new update block replacing an existing one, the new update block is used to record the next update requested for the given logical group. When an update block is not closed out, processing continues when a next request for update in LGx appears in step 310.

As can be seen from the process described above, when a chaotic update block is closed, the update data recorded on it is further processed. In particular, its valid data is garbage collected either by a process of compaction to another chaotic block, or by a process of consolidation with its associated original block to form a new standard sequential block.

FIG. 11A is a flow diagram illustrating in more detail the consolidation process of closing a chaotic update block shown in FIG. 10. Chaotic update block consolidation is one of two possible processes performed when the update block is being closed out, e.g., when the update block is full with its last physical sector location written. Consolidation is chosen when the number of distinct logical sectors written in the block exceeds the predetermined design parameter CD. The consolidation process step 550 shown in FIG. 10 comprises the following substeps:

Step 551: When a chaotic update block is being closed, a new metablock which will replace it is allocated.

Step 552: Gather the latest version of each logical sector among the chaotic update block and its associated original block, ignoring all the obsolete sectors.
Step 554: Record the gathered valid sectors onto the new metablock in logically sequential order to form an intact block, i.e., a block with all the logical sectors of the logical group recorded in sequential order.

Step 556: Replace the original block with the new intact block.

Step 558: Erase the closed-out update block and the original block.

FIG. 11B is a flow diagram illustrating in more detail the compaction process of closing a chaotic update block shown in FIG. 10. Compaction is chosen when the number of distinct logical sectors written in the block is below the predetermined design parameter CD. The compaction process step 560 shown in FIG. 10 comprises the following substeps:

Step 561: When a chaotic update block is being compacted, a new metablock which will replace it is allocated.

Step 562: Gather the latest version of each logical sector in the existing chaotic update block to be compacted.

Step 564: Record the gathered sectors onto the new update block to form a new update block having compacted sectors.

Step 566: Replace the existing update block with the new update block having compacted sectors.

Step 568: Erase the closed-out update block.

Logical and Metablock States

FIG. 12A illustrates all possible states of a logical group, and the possible transitions between them under various operations. FIG. 12B is a table listing the possible states of a logical group. The logical group states are defined as follows:

1. Intact: All logical sectors in the logical group have been written, in logically sequential order, possibly using page tag wraparound, in a single metablock.

2. Unwritten: No logical sector in the logical group has ever been written. The logical group is marked as unwritten in a group address table and has no allocated metablock. A predetermined data pattern is returned in response to a host read for every sector within this group.

3.
Sequential Update: Some sectors within the logical group have been written, in logically sequential order, possibly using page tag, in a metablock, so that they supersede the corresponding logical sectors from any previous Intact state of the group.

4. Chaotic Update: Some sectors within the logical group have been written, in logically non-sequential order, possibly using page tag, in a metablock, so that they supersede the corresponding logical sectors from any previous Intact state of the group. A sector within the group may be written more than once, with the latest version superseding all previous versions.

FIG. 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations. FIG. 13B is a table listing the possible states of a metablock. The metablock states are defined as follows:

1. Erased: All the sectors in the metablock are erased.

2. Sequential Update: The metablock has been partially written, with its sectors in logically sequential order, possibly using page tag. All the sectors belong to the same logical group.

3. Chaotic Update: The metablock has been partially or fully written, with its sectors in logically non-sequential order. Any sector may be written more than once. All the sectors belong to the same logical group.

4. Intact: The metablock has been fully written in logically sequential order, possibly using page tag.

5. Original: The metablock was previously Intact, but at least one sector has been made obsolete by a host data update.

FIGS. 14(A)-14(J) are state diagrams showing the effect of various operations on the state of the logical group and also on the physical metablock. FIG. 14(A) shows the state diagrams corresponding to the logical group and the metablock transitions for a first write operation. The host writes one or more sectors of a previously unwritten logical group, in logically sequential order, to a newly allocated erased metablock.
The logical group and the metablock go to the Sequential Update state.

FIG. 14(B) shows the state diagrams corresponding to the logical group and the metablock transitions for a first intact operation. A previously unwritten Sequential Update logical group becomes Intact as all its sectors are written sequentially by the host. The transition can also happen if the memory fills up the remaining unwritten sectors of the group with a predetermined data pattern. The metablock becomes Intact.

FIG. 14(C) shows the state diagrams corresponding to the logical group and the metablock transitions for a first chaotic operation. A previously unwritten Sequential Update logical group becomes Chaotic when at least one sector is written non-sequentially by the host.

FIG. 14(D) shows the state diagrams corresponding to the logical group and the metablock transitions for a first compaction operation. All valid sectors within a previously unwritten Chaotic Update logical group are copied from the old block to a new chaotic metablock, and the old block is then erased.

FIG. 14(E) shows the state diagrams corresponding to the logical group and the metablock transitions for a first consolidation operation. All valid sectors within a previously unwritten Chaotic Update logical group are moved from the old chaotic block to fill a newly allocated erased block in logically sequential order. Sectors unwritten by the host are padded with a predetermined data pattern. The old chaotic block is then erased.

FIG. 14(F) shows the state diagrams corresponding to the logical group and the metablock transitions for a sequential write operation. The host writes one or more sectors of an Intact logical group, in logically sequential order, to a newly allocated erased metablock. The logical group and the metablock go to the Sequential Update state. The previously Intact metablock becomes an Original metablock.

FIG.
14(G) shows the state diagrams corresponding to the logical group and metablock transitions for a sequential fill operation. A Sequential Update logical group becomes Intact when all its sectors have been written sequentially by the host. This may also occur when a Sequential Update logical group is filled with valid sectors from its original block in order to complete it, after which the original block is erased. Figure 14(H) shows the state diagrams corresponding to the logical group and metablock transitions for a non-sequential write operation. A Sequential Update logical group becomes Chaotic when the host writes at least one sector non-sequentially. The non-sequential sector writes may render obsolete valid sectors in either the update block or the corresponding original block. Figure 14(I) shows the state diagrams corresponding to the logical group and metablock transitions for a compaction operation. All valid sectors within a Chaotic Update logical group are copied from the old block to a new chaotic metablock, and the old block is then erased. The original block is not affected. Figure 14(J) shows the state diagrams corresponding to the logical group and metablock transitions for a consolidation operation. All valid sectors within a Chaotic Update logical group are copied from the old chaotic block to fill a newly allocated Erased block in logically sequential order. The old chaotic block and the original block are then erased. Update Block Tracking and Management Figure 15 shows a preferred embodiment of the structure of an Allocation Block List (ABL) for keeping track of opened and closed update blocks and of erased blocks for allocation.
The Allocation Block List (ABL) 610 is held in controller RAM 130, allowing management of the allocation of erased blocks, of allocated update blocks, of associated blocks, and of control structures, and enabling correct logical-to-physical address translation. In a preferred embodiment, the ABL includes a list of erased blocks, an open update block list 614, and a closed update block list 616. The open update block list 614 is the set of block entries in the ABL carrying the attributes of an open update block, with one entry for each data update block currently open. Each entry holds the following information. LG is the logical group address to which the update metablock is dedicated. Sequential/Chaotic is the state denoting whether the update block has been filled with sequential or with chaotic update data. MB is the metablock address of the update block. Page tag is the starting logical sector recorded at the first physical location of the update block. Number of sectors written denotes the number of sectors currently written onto the update block. MB0 is the metablock address of the associated original block. Page tag0 is the page tag of the associated original block. The closed update block list 616 is a subset of the Allocation Block List (ABL).
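The per-entry fields of the open update block list can be pictured as a small record. The field names below simply mirror the list just given (LG, MB, page tag, and so on) and are illustrative, not the patent's on-media layout.

```python
from dataclasses import dataclass

@dataclass
class OpenUpdateEntry:
    """One open-update-block entry in the ABL (illustrative field names)."""
    lg: int                # logical group address the update metablock serves
    is_sequential: bool    # True: sequential update data; False: chaotic
    mb: int                # metablock address of the update block
    page_tag: int          # starting logical sector at the first physical location
    sectors_written: int   # number of sectors currently written onto the block
    mb0: int               # metablock address of the associated original block
    page_tag0: int         # page tag of the associated original block

entry = OpenUpdateEntry(lg=7, is_sequential=True, mb=42,
                        page_tag=0, sectors_written=16, mb0=41, page_tag0=0)
```

Closed-update entries would drop the fields that only matter while a block is being filled, as the text goes on to describe.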

It is the set of block entries in the ABL carrying the attributes of a closed update block. The closed update block list has one entry for each data update block that has been closed but whose entry has not yet been updated in the logical-to-main-physical directory. Each entry holds the following information. LG is the logical group address to which the update block is dedicated. MB is the metablock address of the update block. Page tag is the starting logical sector recorded at the first physical location of the update block. MB0 is the metablock address of the associated original block. Chaotic Block Indexing A sequential update block has its data stored in logically sequential order, so any logical sector within the block can be located easily. A chaotic update block has its logical sectors stored out of order, and may also store multiple update generations of a single logical sector. Additional information must be maintained to record where each valid logical sector is located in the chaotic update block. In a preferred embodiment, chaotic block indexing data structures allow tracking of, and fast access to, all valid sectors in a chaotic block.
:藉由查詢所尋找到之CBI區段的混亂區塊索引 欄位’尋找混亂區塊或原始區塊中的給定邏輯區段。 圖16D顯示根據其中已將邏輯群組分割成子群組的替代, 性具體實施例,存取正在進行混亂更新之給定邏輯群組之 邏輯區段之資料的流程圖。CBI區段的有限容量只能記錄預 定最大數量的邏輯區段。當邏輯群組具有多於單一 CBI區段 所能處理的邏輯區段時,會將邏輯群組分割成多個具有指 派給各子群組之CBI區段的子群組。在一個範例中,各egI ^丰又具有足夠追縱由2 5 6區段所組成及多達8個混亂更新區 塊之邏輯群組的容量。如果邏輯群組具有超過256區段的尺 寸,則存在用於邏輯群組内各256區段子群組之分開的cm 區段。CBI區段可存在用於邏輯群組内多達8個子群組,以 支援尺寸多達2048區段的邏輯群組。 在較佳具體實施例中,會採用間接索引方案以促進索引 的管理。區段索引的各項目具有直接及間接攔位。 直接區段索引可定義CBI區塊内有關特定混亂更新區塊 之所有可能CBI區段所在的位移。此攔位中的資訊只在有關 5亥特定混亂更新區塊之最後寫入的C BI區段中有效。索弓丨中 98680.doc -47- 1272487 位移的保留值代表CBI區段並不存在,因為對應之有關混亂 更新區塊的邏輯子群組或是不存在,或是由於已配置更新 區塊而未更新。 間接區段索引可定義有關各許可的混亂更新區塊之最近 寫入之CBI區段所在之CBI區塊内的位移。索引中位移的保 留值代表許可的混亂更新區塊並不存在。 圖1 6D顯示在混亂更新下存取邏輯群組之邏輯區段的程 序,其步驟如下: 步驟670 :將各邏輯群組分割成多個子群組及指派cbi區-段給各子群組。 步驟680 :開始尋找給定邏輯群組之給定子群組的給定邏 輯區段。 步驟682 :在CBI區塊中尋找最後寫入的CBI區段。 步驟684 :藉由查詢最後寫入之CBI區段的混亂區塊資訊 欄位’尋找和給定子群組關聯的混亂更新區塊或原始區 塊。此步驟可在步驟696前的任何時間執行。 步驟686 :如果最後寫入的cbi區段係指向給定的邏輯群 、、且則知續進行至步驟69 1。否則,繼續進行至步驟690。 步驟690 :藉由查詢最後寫入之CBI區段的間接區段索引The starting logical section of an entity location record. The list of update blocks in the ABL with the attributes of the closed update block has a closed logical group address dedicated to the ghost. The MB page mark is the original block of the associated block in the update block, but its item has the following information in the logical vs. main entity directory. LG is the relay block address of the current secret animal. Chaotic Block Index The Sequential Update Block has data stored in a logically sequential order, so it is easy to find any logical section in the block. The hash update block has logical segments that are not stored in order and can store multiple update generations of one logical segment. Additional information must be maintained to record where each valid logical segment is placed in the chaotic update block. In a preferred embodiment, the chaotic block index data structure allows for tracking and fast access to all active segments in the chaotic block. 
Chaotic block indexing manages the logical address space independently for small regions, and handles hot regions of system data and user data efficiently. The indexing data structures essentially allow index information with infrequent update requirements to be maintained in flash memory, so that performance is not significantly impacted. On the other hand, lists of recently written sectors in chaotic blocks are held in a chaotic sector list in controller RAM. Also, a cache of index information from flash memory is held in controller RAM in order to minimize the number of flash sector accesses for address translation. The indices for each chaotic block are stored in chaotic block index (CBI) sectors in flash memory. Figure 16A shows the data fields of a chaotic block index (CBI) sector. A CBI sector contains an index for each sector of a logical group mapped to a chaotic update block, defining the location of each sector of the logical group within the chaotic update block or within its associated original block. A CBI sector includes: a chaotic block index field for keeping track of valid sectors within the chaotic block, a chaotic block info field for keeping track of address parameters for the chaotic block, and a sector index field for keeping track of the valid CBI sectors within the metablock (the CBI block) storing the CBI sectors. Figure 16B shows an example of chaotic block index (CBI) sectors being recorded in a dedicated metablock, which may be referred to as the CBI block 620. When a CBI sector is updated, it is written to the next available physical sector location in the CBI block 620. Multiple copies of a CBI sector may therefore exist in the CBI block, with only the last written copy being valid. For example, the CBI sector for logical group LG1 has been updated three times, the latest version being the valid one.
A set of indices in the last written CBI sector of the block identifies the location of each valid sector in the CBI block. In this example, the last written CBI sector in the block is the most recently updated one, and its index set is the valid index set, superseding all previous index sets. When the CBI block finally becomes fully filled with CBI sectors, all valid sectors are rewritten to a new block location during a control write operation, compacting the block; the full block is then erased. The chaotic block index field within a CBI sector contains an index entry for each logical sector within a logical group, or subgroup, mapped to a chaotic update block. Each index entry signifies an offset within the chaotic update block at which valid data for the corresponding logical sector is located. A reserved index value indicates that no valid data for the logical sector exists in the chaotic update block, and that the corresponding sector in the associated original block is valid. A cache of some chaotic block index field entries is held in controller RAM. The chaotic block info field within a CBI sector contains one entry for each chaotic update block existing in the system, recording address parameter information for the block. The information in this field is valid only in the last written sector of the CBI block; this information is also present in data structures in RAM. The entry for each chaotic update block includes three address parameters. The first is the logical address of the logical group (or logical group number) associated with the chaotic update block. The second is the metablock address of the chaotic update block. The third is the physical address offset of the last sector written in the chaotic update block.
The offset information sets the start point for scanning of the chaotic update block during initialization, in order to rebuild the data structures in RAM. The sector index field contains an entry for each valid CBI sector in the CBI block. It defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist. Figure 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update. During the update process, the update data is recorded in the chaotic update block, while the unchanged data remains in the original metablock associated with the logical group. The procedure for accessing a logical sector of the logical group under chaotic update is as follows: Step 650: Begin locating a given logical sector of a given logical group. Step 652: Locate the last written CBI sector in the CBI block. Step 654: Locate the chaotic update block or original block associated with the given logical group by looking up the chaotic block info field of the last written CBI sector. This step can be performed at any time before Step 662. Step 658: If the last written CBI sector is directed to the given logical group, the CBI sector has been located; proceed to Step 662. Otherwise, proceed to Step 660. Step 660: Locate the CBI sector for the given logical group by looking up the sector index field of the last written CBI sector. Step 662: Locate the given logical sector among either the chaotic block or the original block by looking up the chaotic block index field of the located CBI sector.
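Steps 650-662 amount to at most two CBI-sector reads followed by one index lookup. A minimal sketch, with the CBI block modeled as a list of sector dictionaries in write order; the dictionary field names are illustrative, not the patent's on-media layout:

```python
def locate_sector(cbi_block, group, logical_sector):
    """Return ('chaotic', offset) or ('original', logical_sector) for a sector
    of a logical group under chaotic update. cbi_block is a list of CBI
    sectors in write order; each carries 'group', 'sector_index'
    (group -> position of that group's CBI sector in cbi_block) and
    'chaotic_index' (logical sector -> offset in the chaotic block,
    or None as the reserved value)."""
    last = cbi_block[-1]                  # step 652: last written CBI sector
    if last["group"] == group:            # step 658
        cbi = last
    else:                                 # step 660: follow the sector index
        cbi = cbi_block[last["sector_index"][group]]
    offset = cbi["chaotic_index"][logical_sector]   # step 662
    if offset is None:                    # reserved value: data is in the original block
        return ("original", logical_sector)
    return ("chaotic", offset)

cbi_block = [
    {"group": 1, "sector_index": {1: 0}, "chaotic_index": {5: 2}},
    {"group": 2, "sector_index": {1: 0, 2: 1}, "chaotic_index": {5: None, 7: 0}},
]
```

The step-654 lookup of the block addresses themselves is omitted here; it reads the chaotic block info field of the same last-written sector.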
Figure 16D shows, in accordance with an alternative embodiment in which a logical group is partitioned into subgroups, a flow diagram for accessing the data of a logical sector of a given logical group undergoing chaotic update. The finite capacity of a CBI sector can keep track of only a predetermined maximum number of logical sectors. When a logical group has more logical sectors than a single CBI sector can handle, the logical group is partitioned into multiple subgroups, with a CBI sector assigned to each subgroup. In one example, each CBI sector has sufficient capacity to track a logical group consisting of 256 sectors and up to 8 chaotic update blocks. If a logical group has a size in excess of 256 sectors, a separate CBI sector exists for each 256-sector subgroup within the logical group. CBI sectors may exist for up to 8 subgroups within a logical group, supporting logical groups up to 2048 sectors in size. In a preferred embodiment, an indirect indexing scheme is employed to facilitate management of the index. Each entry of the sector index has direct and indirect fields. The direct sector index defines the offsets within the CBI block at which all possible CBI sectors relating to a specific chaotic update block are located. The information in this field is valid only in the CBI sector last written in relation to that specific chaotic update block. A reserved value of an offset in the index indicates that the CBI sector does not exist, because the corresponding logical subgroup relating to the chaotic update block either does not exist, or has not been updated since the update block was allocated. The indirect sector index defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
Figure 16D shows the procedure for accessing a logical sector of a logical group under chaotic update, with the following steps: Step 670: Partition each logical group into multiple subgroups and assign a CBI sector to each subgroup. Step 680: Begin locating a given logical sector of a given subgroup of a given logical group. Step 682: Locate the last written CBI sector in the CBI block. Step 684: Locate the chaotic update block or original block associated with the given subgroup by looking up the chaotic block info field of the last written CBI sector. This step can be performed at any time before Step 696. Step 686: If the last written CBI sector is directed to the given logical group, proceed to Step 691. Otherwise, proceed to Step 690. Step 690: By looking up the indirect sector index

攔位’尋找給定邏輯群組之多個CBI區段中最後寫入的CBI 區段。 步驟691 :已經尋找到和給定邏輯群組之子群組其中之一 關聯的至少一 CBI區段。繼續。 步驟692 :如果所尋找到的CBI區段指向給定子群組,則 98680.doc -48 - 1272487 可尋找給定子群組的CBI區段。繼續進行至步驟696。否則, 繼續進行至步驟694。 步驟694 :藉由查詢目前尋找之CBI區段的直接區段索引 欄位,尋找給定子群組的CBI區段。 步驟696 :藉由查詢給定子群組之CBI區段的混亂區塊索 引欄位,尋找混亂區塊或原始區塊中的給定邏輯區段。 圖16E顯示在其中將各邏輯群組分割成多個子群組的具 體實施例中,混亂區塊索引(CBI)區段及其功能的範例。邏 輯群組700原來將其完整的資料儲存在原始中繼區塊702 中。接著,邏輯群組配合配置專用的混亂更新區塊704進行 更新。在本範例中,將邏輯群組700分割成子群組,這些子 群組A、B、C、D各具有256個區段。 為了尋找子群組B中的第i個區段,會先尋找CBI區塊620 中最後寫入的CBI區段。最後寫入之CBI區段的混亂區塊資 訊欄位可提供尋找給定邏輯群組之混亂更新區塊704的位 址。同時,其還可提供寫入混亂區塊中之最後區段的位置。 此資訊在掃描及重建索引時很有用。 如果最後寫入之CBI區段結果是給定邏輯群組的四個 CBI區段之一,則會進一步決定其是否正是含有第i個邏輯 區段之給定子群組B的CBI區段。如果是,則CBI區段的混 亂區塊索引會指向儲存第i個邏輯區段之資料的中繼區塊 位置。區段位置會在混亂更新區塊704中或在原始區塊702 中〇 如果最後寫入之CBI區段結果是給定邏輯群組之四個 98680.doc -49- 1272487 CBI區段之一但卻非屬於子群組b,則會查詢其直接區段索 引’以哥找子群組B的CBI區段。在尋找此確切的CBI區段 後’會查詢混亂區塊索引,以在混亂更新區塊7〇4及原始區 塊702中尋找第丨個邏輯區段。 如果表後寫入之CBI區段結果不是給定邏輯群組之四個 CBI區段的任一個,則會查詢間接區段索引以尋找四個區段 中的一個。在圖16E所示的範例中,會尋找子群組c的CBI 區丰又。然後,子群組C的此CBI區段查詢其直接區段索引, 以尋找子群組B之確切的CBI區段。此範例顯示在查詢混亂 區塊索引時,將發現第i個邏輯區段為未變更及會在原始區 塊中尋找到其有效資料。 在給定邏輯群組的子群組C中尋找第j個邏輯區段時也會 做出同樣的考慮。此範例顯示最後寫入之CBI區段結果不是 給定邏輯群組之四個CBI區段的任何一個。其間接區段索引 指向給定群組之四個〇趴區段之一。所指向之四個中的最後 寫入結果也正是子群組C的CBI區段。在查詢其混亂區塊索 引打,將發現第j個邏輯區段被尋找在混亂更新區塊7〇4中 的指定位置。 控制器RAM中存在系統中之各混亂更新區塊的混亂區段 清單。每份清單均含有一份自快閃記憶體中最後被更新的 相關CB馳開始1目前區段為止被寫入該混亂更新區塊 之中的區段的記錄。特定混亂更新區塊之邏輯區段位址(可 保留在混亂區段清單中)的數量是8至16之代表值的設計參 數。清單的最佳尺寸可決定為其對混亂資料寫人操作之過 98680.doc -50- I272487 度耗用的作用及初始化期間之區段掃描時間之間的權衡。 在系統初始化期間,為了識別自其關聯之CBI區段之一的 先前更新後所寫入的有效區段,必須掃描各混亂更新區 塊。在控制器RAM中,會構成各混亂更新區塊的混亂區段 >月早。只需要在最後寫入的C BI區段中’從各區塊之混亂區 塊資訊欄位中定義的最後區段位址開始掃描各區塊即可。 在配置混亂更新區塊時,會寫入CBI區段以對應於所有的 更新缝輯子群組。混亂更新區塊的邏輯及實體位址會被寫 入區段中可用的混亂區塊資訊攔位,其中空值項目在混亂 區塊索引攔位中。會在控制器RAM中開啟混亂區段清單。 在關閉混亂更新區塊時,會以從區段中混亂區塊資訊欄 位移除的區塊邏輯及實體位址寫入CBI區段。RAM中對應 的混亂區段清單變成未使用。 可修改控制器RAM對應的混亂區段清單,以包括寫入混 亂更新區塊之區段的記錄。當控制器RAM中的混|L區段清 單沒有寫入混亂更新區塊之其他區段記錄的任何可用空間 時,會為有關清單中區段的邏輯子群組寫入已更新的CBI 區段,然後清除清單。 當CBI區塊620變滿時,會將有效的CBI區段複製到已配 置的已抹除區塊中,然後抹除先前的CBI區塊。 位址表 圖2所示的邏輯對實體位址轉譯模組14〇負責關聯快閃記 憶體中主機的邏輯位址和對應的實體位址。邏輯群組及實 體群組(中繼區塊)間的映射係儲存於非揮發性快閃記憶體 98680.doc -51 - 1272487 
200, and in a set of tables and lists distributed in the volatile but more agile RAM 130 (see Figure 1). An address table is maintained in flash memory containing a metablock address for every logical group in the memory system. In addition, logical-to-physical address records for recently written sectors are held temporarily in RAM. These volatile records can be reconstructed from block lists and data sector headers in flash memory when the system is initialized after power-up. The address table in flash memory therefore needs to be updated only infrequently, leading to a low percentage of overhead write operations for control data. The hierarchy of address records for logical groups includes the open update block list and the closed update block list in RAM, and the group address table (GAT) maintained in flash memory. The open update block list is a list in controller RAM of data update blocks currently open for writing updated host sector data. The entry for a block is moved to the closed update block list when the block is closed. The closed update block list is a list in controller RAM of data update blocks that have been closed. A subset of the entries in the list is moved to a sector in the group address table during a control write operation. The group address table (GAT) is a list of metablock addresses for all logical groups of host data in the memory system. The GAT contains one entry for each logical group, ordered sequentially according to logical address. The n-th entry in the GAT contains the metablock address of the logical group with address n. In a preferred embodiment, the GAT is a table in flash memory comprising a set of sectors (referred to as GAT sectors) with entries defining metablock addresses for every logical group in the memory system. The GAT sectors are located in one or more dedicated control blocks (referred to as GAT blocks) in flash memory. Figure 17A shows the data fields of a group address table (GAT) sector. A GAT sector may, for example, have sufficient capacity to contain GAT entries for a set of 128 contiguous logical groups. Each GAT sector includes two components, namely a set of GAT entries for the metablock address of each logical group within a range, and a GAT sector index. The first component contains information for locating the metablock associated with a logical address. The second component contains information for locating all valid GAT sectors within the GAT block. Each GAT entry has three fields, namely: the metablock number, the page tag as described earlier in connection with Figure 3A(iii), and a flag indicating whether the metablock has been relinked. The GAT sector index lists the positions of valid GAT sectors in the GAT block. This index is in every GAT sector, but is superseded by the version in the next GAT sector to be written in the GAT block; thus only the version in the last written GAT sector is valid. Figure 17B shows an example of group address table (GAT) sectors being recorded in one or more GAT blocks. A GAT block is a metablock dedicated to recording GAT sectors. When a GAT sector is updated, it is written to the next available physical sector location in the GAT block 720. Multiple copies of a GAT sector may therefore exist in the GAT block, with only the last written copy being valid. For example, GAT sector 255 (containing pointers for logical groups LG3968-LG4095) has been updated at least twice, the latest version being the valid one. A set of indices in the last written GAT sector of the block identifies the location of each valid sector in the GAT block. In this example, the last written GAT sector in the block is GAT sector 236, and its index set is the valid index set, superseding all previous index sets. When the GAT block finally becomes fully filled with GAT sectors, all valid sectors are rewritten to a new block location during a control write operation, compacting the block; the full block is then erased. As described above, a GAT block contains entries for a logically contiguous set of groups in a region of logical address space. GAT sectors within a GAT block each contain logical-to-physical mapping information for 128 contiguous logical groups. The number of GAT sectors required to store entries for all logical groups within the address range spanned by a GAT block occupies only a fraction of the total sector positions in the block. A GAT sector may therefore be updated by writing it at the next available sector position in the block. An index of all valid GAT sectors and their positions in the GAT block is maintained in an index field in the most recently written GAT sector. The fraction of the total sectors in a GAT block occupied by valid GAT sectors is a system design parameter, typically 25%; there is, however, a maximum of 64 valid GAT sectors per GAT block. In systems with large logical capacity, it may be necessary to store GAT sectors in more than one GAT block; in this case, each GAT block is associated with a fixed range of logical groups. A GAT update is performed as part of a control write operation, which is triggered when the ABL runs out of blocks for allocation (see Figure 18). It is performed concurrently with the ABL fill and CBL empty operations. During a GAT update operation, one GAT sector has entries updated with corresponding entries from the closed update block list; when a GAT entry is updated, any corresponding entry is removed from the closed update block list (CUBL). For example, the GAT sector to be updated may be selected on the basis of the first entry in the closed update block list. The updated sector is written to the next available sector location in the GAT block. A GAT rewrite operation occurs during a control write operation when no sector location is available for an updated GAT sector.
A new GAT block is allocated, and the valid GAT sectors defined by the GAT index are copied in sequential order from the full GAT block; the full GAT block is then erased. A GAT cache is a copy, in controller RAM 130, of the entries in a subdivision of the 128 entries of a GAT sector. The number of GAT cache entries is a system design parameter, with a typical value of 32. A GAT cache for the relevant sector subdivision may be created each time an entry is read from a GAT sector. Multiple GAT caches are maintained; their number is a design parameter with a typical value of 4. A GAT cache is overwritten with entries for a different sector subdivision on a least-recently-used basis. Erased Metablock Management The erase block manager 160 shown in Figure 2 manages erased blocks using a set of lists for maintaining directory and system control information. These lists are distributed among the controller RAM 130 and flash memory 200. When an erased metablock must be allocated for storage of user data or of a system control data structure, the next available metablock number in the allocation block list (ABL) held in controller RAM is selected (see Figure 15). Similarly, when a metablock is erased after retirement, its number is added to a cleared block list (CBL), also held in controller RAM. Relatively static directory and system control data are stored in flash memory. These include erased block lists and a bitmap (MAP) listing the erased status of all metablocks in flash memory. The erased block lists and MAP are stored in individual sectors, and are recorded in a dedicated metablock known as the MAP block. These lists, distributed among controller RAM and flash memory, provide a hierarchy of erased block records for efficiently managing the use of erased metablocks. Figure 18 is a schematic block diagram showing the distribution and flow of the control and directory information for the use and recycling of erased blocks. The control and directory data are maintained in lists held either in controller RAM 130 or in a MAP block 750 resident in flash memory 200. In a preferred embodiment, the controller RAM 130 holds the allocation block list (ABL) 610 and the cleared block list (CBL) 740. As described earlier in connection with Figure 15, the allocation block list (ABL) keeps track of which metablocks have recently been allocated for storage of user data or of system control data structures. When a new erased metablock must be allocated, the next available metablock number in the allocation block list (ABL) is selected. Similarly, the cleared block list (CBL) is used to keep track of update metablocks that have been deallocated and erased. The ABL and CBL are held in controller RAM 130 (see Figure 1) for speedy access and easy manipulation when tracking the relatively active update blocks. The allocation block list (ABL) keeps track of a pool of erased metablocks about to become update blocks, and of the allocation of erased metablocks. Each of these metablocks may therefore be described by an attribute designating it as an erased block pending allocation in the ABL, an open update block, or a closed update block. Figure 18 shows the ABL containing an erased ABL list 612, the open update block list 614, and the closed update block list 616. In addition, associated with the open update block list 614 is the associated original block list 615; similarly, associated with the closed update block list is the associated erased original block list 617. As shown earlier in Figure 15, these associated lists are subsets of the open update block list 614 and the closed update block list 616 respectively. The erased ABL block list 612, the open update block list 614, and the closed update block list 616 are all subsets of the allocation block list (ABL) 610, the entries in each having the respective attributes. The MAP block 750 is a metablock dedicated to storing erase management records in flash memory 200. The MAP block stores a time series of MAP block sectors, each MAP sector being either an erase block management (EBM) sector 760 or a MAP sector 780. As erased blocks are used up in allocation, and are recycled when metablocks are retired, the associated control and directory data are preferably contained in logical sectors that may be updated in the MAP block, with each instance of update data being recorded in a new block sector. Multiple copies of EBM sectors 760 and of MAP sectors 780 may exist in the MAP block 750, with only the latest version being valid. An index to the positions of valid MAP sectors is contained in a field in the EBM sector. A valid EBM sector is always written last in the MAP block during a control write operation. When the MAP block 750 is full, it is compacted during a control write operation by rewriting all valid sectors to a new block location; the full block is then erased. Each EBM sector 760 contains an erased block list (EBL) 770, which is a list of addresses of a subset of the population of erased blocks. The erased block list (EBL) 770 acts as a buffer containing erased metablock numbers, from which metablock numbers are periodically taken to refill the ABL, and to which metablock numbers are periodically added to re-empty the CBL. The EBL 770 serves as a buffer for the available block buffer (ABB) 772, the erased block buffer (EBB) 774, and the cleared block buffer (CBB) 776. The available block buffer (ABB) 772 contains a copy of the entries in the ABL
610 immediately following the previous ABL fill operation. It is effectively a backup copy of the ABL just after an ABL fill operation. The erased block buffer (EBB) 774 contains erased block addresses previously transferred from MAP sectors 780 or from the CBB list 776 (described below), which are available for transfer to the ABL 610 during an ABL fill operation. The cleared block buffer (CBB) 776 contains addresses of erased blocks that have been transferred from the CBL 740 during a CBL empty operation, and that will subsequently be transferred to MAP sectors 780 and to the EBB list 774. Each MAP sector 780 contains a bitmap structure referred to as the MAP. The MAP uses one bit for each metablock in flash memory, to indicate the erase status of each block. Bits corresponding to block addresses listed in the ABL, in the CBL, or in the erased block lists in the EBM sector are not set to the erased state in the MAP. Any block that does not contain a valid data structure, and that is not designated as an erased block within the MAP, the erased block lists, the ABL or the CBL, is never used by the block allocation algorithm, and such blocks are therefore inaccessible for storage of host or control data structures. This provides a simple mechanism for excluding blocks with defective locations from the accessible flash memory address space. The hierarchy shown in Figure 18 allows erased block records to be managed efficiently, and provides full security of the lists of block addresses stored in the controller's RAM. Erased block entries are exchanged between those block address lists and one or more MAP sectors 780 on an infrequent basis. The lists may be reconstructed during system initialization after power-down, via the information in the erased block lists and address translation tables stored in multiple sectors in flash memory, together with limited scanning of a small number of referenced data blocks in flash memory. The algorithms adopted for updating the hierarchy of erased metablock records result in erased blocks being allocated for use in an order that interleaves bursts of blocks from the MAP block 750 with bursts of block addresses from the CBL 740, the latter reflecting the order in which blocks were updated by the host. For most metablock sizes and system memory capacities, a single MAP sector can provide a bitmap for all metablocks in the system. In this case, erased blocks are always allocated for use in the same address order as that in which they are recorded in this MAP sector. Erase Block Management Operations As described above, the ABL 610 is a list with address entries for erased metablocks that may be allocated for use, and for metablocks recently allocated as data update blocks. The actual number of block addresses in the ABL lies between maximum and minimum limits, which are system design variables. The number of ABL entries formatted during manufacturing is a function of the card type and capacity. In addition, the number of entries in the ABL may be reduced near the end of life of the system, as the number of available erased blocks is reduced by block failures during the lifetime. For example, after a fill operation, entries in the ABL may designate blocks available for the following purposes: entries for partially written data update blocks, with one entry per block, not exceeding a system limit on the maximum number of simultaneously open update blocks; between one and twenty entries for erased blocks for allocation as data update blocks; and four entries for erased blocks for allocation as control blocks. ABL Fill Operation As the ABL 610 becomes depleted through allocations, it needs to be refilled. The operation of filling the ABL occurs during a control write operation. It is triggered when a block must be allocated, but the ABL contains insufficient erased block entries available for allocation as a data update block or as some other control data update block. During a control write, the ABL fill operation is performed concurrently with the GAT update operation. The following actions occur during an ABL fill operation. 1. ABL entries with the attributes of current data update blocks are retained. 2. ABL entries with the attributes of closed data update blocks are retained, unless an entry for the block is being written in the concurrent GAT update operation, in which case the entry is removed from the ABL. 3. ABL entries for unallocated erased blocks are retained. 4. The ABL is compacted to remove gaps created by the removal of entries, maintaining the order of entries. 5. The ABL is completely filled by appending the next available entries from the EBB list. CBL Empty Operation The CBL is a list of erased block addresses in controller RAM, with the same limit on the number of erased block entries as the ABL. The operation of emptying the CBL occurs during a control write operation, and is therefore performed concurrently with the ABL fill and GAT update operations; entries are removed from the CBL 740 and written to the CBB list 776. MAP Exchange Operation A MAP exchange operation between the erased block information in the MAP sectors 780 and the EBM sector 760 may occur periodically during a control write operation, when the EBB list 774 has been emptied. If all erased metablocks in the system are recorded in the EBM sector 760, no MAP sector 780 exists and no MAP exchange is performed. During a MAP exchange operation, the MAP sector feeding erased blocks to the EBB 774 is regarded as the source MAP sector 782. Conversely, the MAP sector receiving erased blocks from the CBB 776 is regarded as the destination MAP sector 784. If only one MAP sector exists, it acts as both source and destination MAP sector, as defined below. The following actions are performed during a MAP exchange. 1. A source MAP sector is selected on the basis of an incrementing pointer.
2. A destination MAP sector is selected on the basis of the block address in the first CBB entry that is not in the source MAP sector. 3. The destination MAP sector is updated as defined by the relevant entries in the CBB, and those entries are removed from the CBB. 4. The updated destination MAP sector is written in the MAP block, unless no separate source MAP sector exists. 5. The source MAP sector is updated as defined by the relevant entries in the CBB, and those entries are removed from the CBB. 6. The remaining entries in the CBB are appended to the EBB. 7. The EBB is filled, to the extent possible, with erased sector addresses defined from the source MAP sector. 8. The updated source MAP sector is written in the MAP block. 9. An updated EBM sector is written in the MAP block. List Management Figure 18 shows the distribution and flow of control and directory information between the various lists. For convenience, the operations that move entries between elements of the lists, or change the attributes of entries, identified in Figure 18 as [A] to [O], are as follows. [A] When an erased block is allocated as an update block for host data, the attributes of its entry in the ABL are changed from erased ABL block to open update block. [B] When an erased block is allocated as a control block, its entry in the ABL is removed. [C] When an ABL entry is created with open update block attributes, an associated original block field is added to the entry, to record the original metablock address for the logical group being updated. This information is obtained from the GAT. [D] When an update block is closed, the attributes of its entry in the ABL are changed from open update block to closed update block. [E] When an update block is closed, its associated original block is erased, and the attribute of the associated original block field in its ABL entry is changed to erased original block. [F] During an ABL fill operation, any closed update block whose address is updated in the GAT during the same control write has its entry removed from the ABL. [G] During an ABL fill operation, when the entry for a closed update block is removed from the ABL, the entry for its associated erased original block is moved to the CBL. [H] When a control block is erased, the entry for it is added to the CBL. [I] During an ABL fill operation, erased block entries are moved from the EBB list to the ABL, and are given the attributes of erased ABL blocks. [J] After all relevant ABL entries have been modified during an ABL fill operation, the block addresses in the ABL replace the block addresses in the ABB list. [K] Concurrently with the ABL fill operation during a control write, the entries for erased blocks in the CBL are moved to the CBB list. [L] During a MAP exchange operation, all relevant entries are moved from the CBB list to the MAP destination sector. [M] During a MAP exchange operation, all relevant entries are moved from the CBB list to the MAP source sector. [N] After [L] and [M] during a MAP exchange operation, all remaining entries are moved from the CBB list to the EBB list. [O] After [N] during a MAP exchange operation, entries other than those moved in [M] are moved, if possible, from the MAP source sector to fill the EBB list. Logical-to-Physical Address Translation In order to locate the physical location of a logical sector in flash memory, the logical-to-physical address translation module 140 shown in Figure 2 performs a logical-to-physical address translation. Except for logical groups that have recently been updated, the bulk of the translation may be performed using the group address table (GAT) residing in flash memory 200, or the GAT cache in controller RAM 130. Address translation for recently updated logical groups requires a lookup in the address lists for update blocks, which reside mainly in controller RAM 130. The procedure for logical-to-physical address translation of a logical sector address therefore depends on the type of block associated with the logical group within which the sector lies. The types of block are: intact block, sequential data update block, chaotic data update block, and closed data update block. Figure 19 is a flow diagram showing the logical-to-physical address translation process. Essentially, the corresponding metablock and physical sector are located by first looking up the various update directories (e.g., the open update block list and the closed update block list) using the logical sector address; if the associated metablock is not part of an update process, the directory information is provided by the GAT. The logical-to-physical address translation includes the following steps: Step 800: A logical sector address is given. Step 810: The given logical address is looked up in the open update block list 614 in controller RAM (see Figures 15 and 18). If the lookup fails, proceed to Step 820; otherwise proceed to Step 830. Step 820: The given logical address is looked up in the closed update block list 616. If the lookup fails, the given logical address is not part of any update process; proceed to Step 870 for GAT address translation. Otherwise proceed to Step 860 for closed update block address translation. Step 830: If the update block containing the given logical address is sequential, proceed to Step 840 for sequential update block address translation. Otherwise proceed to Step 850 for chaotic update block address translation. Step 840: Obtain the metablock address using sequential update block address translation. Proceed to Step 880.
Step 850: Obtain the metablock address using chaotic update block address translation. Proceed to Step 880. Step 860: Obtain the metablock address using closed update block address translation. Proceed to Step 880. Step 870: Obtain the metablock address using group address table (GAT) translation. Proceed to Step 880. Step 880: Convert the metablock address to a physical address. The translation method depends on whether the metablock has been relinked. Step 890: The physical sector address is obtained. The various address translation processes are described in more detail below. Sequential Update Block Address Translation (Step 840) Address translation for a target logical sector address in a logical group associated with a sequential update block can be accomplished directly from information in the open update block list 614 (Figures 15 and 18), as follows. 1. The "page tag" and "number of sectors written" fields of the list are used to determine whether the target logical sector has been allocated in the update block or in its associated original block. 2. The metablock address appropriate to the target logical sector is read from the list. 3. The sector address within the metablock is determined from the appropriate "page tag" field. Chaotic Update Block Address Translation (Step 850) The address translation sequence for a target logical sector address in a logical group associated with a chaotic update block is as follows. 1. If it is determined from the chaotic sector list in RAM that the sector is a recently written sector, address translation may be accomplished directly from its position in this list. 2. The most recently written sector in the CBI block contains, within its chaotic block data field, the physical address of the chaotic update block relevant to the target logical sector address. It also contains, within its indirect sector index field, the offset within the CBI block of the last written CBI sector relating to this chaotic update block (see Figures 16A-16E). 3. The information in these fields is cached in RAM, eliminating the need to read the sector during subsequent address translation. 4. The CBI sector identified by the indirect sector index field at step 3 is read. 5. The direct sector index field for the most recently accessed chaotic update subgroup is cached in RAM, eliminating the need to perform the read at step 4 for repeated accesses to the same chaotic update block. 6. The direct sector index field read at step 4 or step 5 identifies in turn the CBI sector relating to the logical subgroup containing the target logical sector address. 7. The chaotic block index entry for the target logical sector address is read from the CBI sector identified in step 6. 8. The most recently read chaotic block index field may be cached in controller RAM, eliminating the need to perform the reads at step 4 and step 7 for repeated accesses to the same logical subgroup. 9. The chaotic block index entry defines the location of the target logical sector either in the chaotic update block or in the associated original block. If the valid copy of the target logical sector is in the original block, it is located by use of the original metablock and page tag information. Closed Update Block Address Translation (Step 860) Address translation for a target logical sector address in a logical group associated with a closed update block can be accomplished directly from information in the closed update block list (see Figure 18), as follows. 1. The metablock address assigned to the target logical group is read from the list. 2. The sector address within the metablock is determined from the "page tag" field in the list. GAT Address Translation (Step 870) If a logical group is not referenced by either the open or the closed block update lists, its entry in the GAT is valid. The address translation sequence for a target logical sector address in a logical group referenced by the GAT is as follows. 1. The ranges of the available GAT caches in RAM are evaluated, to determine whether an entry for the target logical group is contained in a GAT cache. 2. If the target logical group is found at step 1, the GAT cache contains full group address information, including both the metablock address and the page tag, allowing translation of the target logical sector address. 3. If the target address is not in a GAT cache, the GAT index for the target GAT block must be read, to identify the location of the GAT sector relating to the target logical group address. 4. The GAT index for the last accessed GAT block is held in controller RAM, and may be accessed without a sector having to be read from flash memory. 5. A list comprising the metablock address for every GAT block, and the number of sectors written in each GAT block, is held in controller RAM. If the required GAT index is not available at step 4, it may therefore be read immediately from flash memory. 6. The GAT sector relating to the target logical group address is read from the sector location in the GAT block defined by the GAT index obtained at step 4 or step 5. A GAT cache is updated with the subdivision of the sector containing the target entry. 7. The target sector address is obtained from the metablock address and "page tag" fields within the target GAT entry. Metablock-to-Physical Address Translation (Step 880) If the flag associated with the metablock address indicates that the metablock has been relinked, the relevant LT sector is read from the BLM block, to determine the erase block address for the target sector address. Otherwise, the erase block address is determined directly from the metablock address. Control Data Management Figure 20 shows the hierarchy of operations performed on control data structures in the course of the operation of the memory management. Data update management operations act on the various lists resident in RAM. Control write operations act on the various
control data sectors and dedicated blocks in flash memory, and also exchange data with the lists in RAM.
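The dispatch of Figure 19 (Steps 800-880) reduces to checking the two RAM lists before falling back to the GAT. A compact sketch, with the lists modeled as dictionaries keyed by logical group; the names and return shape are illustrative assumptions.

```python
def translate(logical_group, open_updates, closed_updates, gat):
    """Return (translation kind, metablock) for a logical group, following
    the lookup order of Figure 19: open update block list, then closed
    update block list, then the GAT for non-updated groups."""
    if logical_group in open_updates:          # steps 810 / 830-850
        entry = open_updates[logical_group]
        kind = "sequential" if entry["sequential"] else "chaotic"
        return (kind, entry["mb"])
    if logical_group in closed_updates:        # steps 820 / 860
        return ("closed", closed_updates[logical_group]["mb"])
    return ("gat", gat[logical_group])         # step 870

open_updates = {3: {"sequential": True, "mb": 21}, 4: {"sequential": False, "mb": 22}}
closed_updates = {5: {"mb": 23}}
gat = {3: 11, 4: 12, 5: 13, 6: 14}
```

Step 880 (metablock-to-physical conversion, including the relinked-metablock case) would then be applied to whichever metablock address this dispatch returns.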
If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group, it is further determined whether it is exactly the CBI sector for the given subgroup B containing the i-th logical sector. If so, the chaotic block index of that CBI sector points to the metablock location where the data for the i-th logical sector is stored; the sector location will be either in the chaotic update block 704 or in the original block 702. If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group but does not belong to subgroup B, its direct sector index is looked up to locate the CBI sector for subgroup B. Once this exact CBI sector is located, its chaotic block index is looked up to locate the i-th logical sector among the chaotic update block 704 and the original block 702. If the last written CBI sector turns out not to be any of the four CBI sectors of the given logical group, the indirect sector index is looked up to locate one of the four. In the example shown in Figure 16E, the CBI sector for subgroup C is located. This CBI sector for subgroup C then has its direct sector index looked up, to locate the exact CBI sector for subgroup B. The example shows that when its chaotic block index is looked up, the i-th logical sector is found to be unchanged, and its valid data will be located in the original block. Similar considerations apply to locating the j-th logical sector in subgroup C of the given logical group. The example shows the last written CBI sector turning out not to be any of the four CBI sectors of the given logical group. Its indirect sector index points to one of the four; the last written of the four pointed to also turns out to be the CBI sector for subgroup C. When its chaotic block index is looked up, the j-th logical sector is found to be located at a designated location in the chaotic update block 704. A chaotic sector list exists in controller RAM for each chaotic update block in the system. Each list contains a record of the sectors written in the chaotic update block since the related CBI sector in flash memory was last updated.
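The per-block chaotic sector list in controller RAM behaves as a small bounded write log that triggers a CBI update and is cleared when it fills. A sketch under the 8-to-16-entry sizing stated below; the flush callback stands in for writing the updated CBI sector(s) and is illustrative.

```python
class ChaoticSectorList:
    """RAM-side log of sectors recently written to one chaotic update block.
    When the bounded list has no room for another record, updated CBI
    sector(s) are written (modeled by the flush callback) and the list
    is cleared, mirroring the flush rule described in the text."""

    def __init__(self, capacity=16, flush=None):
        self.capacity = capacity
        self.sectors = []
        self.flush = flush or (lambda sectors: None)

    def record_write(self, logical_sector):
        if len(self.sectors) >= self.capacity:
            self.flush(list(self.sectors))   # write updated CBI sector(s)
            self.sectors.clear()             # then clear the list
        self.sectors.append(logical_sector)

flushed = []
log = ChaoticSectorList(capacity=4, flush=flushed.append)
for s in range(9):
    log.record_write(s)
```

The capacity of 4 here is only to keep the example small; the text gives 8 to 16 as the representative range for this design parameter.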
The number of logical sector addresses for a particular chaotic update block that can be held in a chaotic sector list is a design parameter, with a representative value of 8 to 16. The optimum size of the list is determined as a trade-off between its effect on the overhead of chaotic data write operations and the sector scanning time during initialization. During system initialization, each chaotic update block must be scanned in order to identify the valid sectors written since the previous update of one of its associated CBI sectors. A chaotic sector list for each chaotic update block is constructed in controller RAM. Each block need be scanned only from the last sector address defined in its chaotic block info field in the last written CBI sector. When a chaotic update block is allocated, a CBI sector is written corresponding to all of the updated logical subgroups. The logical and physical addresses of the chaotic update block are written in an available chaotic block info field in the sector, with null entries in the chaotic block index field. A chaotic sector list is opened in controller RAM. When a chaotic update block is closed, a CBI sector is written with the logical and physical addresses of the block removed from the chaotic block info field in the sector. The corresponding chaotic sector list in RAM becomes unused. The corresponding chaotic sector list in controller RAM is modified to include records of sectors written to a chaotic update block. When a chaotic sector list in controller RAM has no available space for records of further sector writes to a chaotic update block, updated CBI sectors are written for the logical subgroups of the sectors in the list, and the list is cleared. When the CBI block 620 becomes full, the valid CBI sectors are copied to an allocated erased block, and the previous CBI block is erased.
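The fill-then-compact life cycle of the CBI block (and equally of the GAT and MAP blocks described next) follows one pattern: updated sectors are appended until the block is full, then only the last written copy of each sector is copied to a fresh block and the old block is erased. A generic sketch of the compaction step, with the block modeled as a write-ordered list:

```python
def compact(block, capacity):
    """Return the contents of the new block after compaction: for each
    logical sector id, keep only the last written copy, preserving the
    order of first appearance. 'block' is a list of (sector_id, payload)
    tuples in write order."""
    latest = {}
    for sector_id, payload in block:   # later writes supersede earlier ones
        latest[sector_id] = payload
    new_block = list(latest.items())   # dict preserves insertion order
    assert len(new_block) <= capacity  # valid sectors must fit the new block
    return new_block

old = [("LG1", "v1"), ("LG2", "v1"), ("LG1", "v2"), ("LG1", "v3")]
```

In the actual system the copy happens during a control write operation, and the old block is erased only after the valid sectors are safely rewritten.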
Address Tables. The logical-to-physical address translation module 140 shown in Figure 2 is responsible for relating a host's logical address to a corresponding physical address in flash memory. The mapping between logical groups and physical groups (metablocks) is stored in a set of tables and lists distributed among the nonvolatile flash memory 200 and the volatile but more agile RAM 130 (see Figure 1). An address table is maintained in flash memory, containing a metablock address for every logical group in the memory system. In addition, logical-to-physical address records for recently written sectors are temporarily held in RAM. These volatile records can be reconstructed from block lists and data sector headers in flash memory when the system is initialized after power-up. Thus, the address table in flash memory need be updated only infrequently, leading to a low percentage of overhead writes for control data. The hierarchy of address records for logical groups includes: an open update block list and a closed update block list held in RAM, and a group address table (GAT) maintained in flash memory. The open update block list is a list, in controller RAM, of data update blocks currently open for writing updated host sector data. A block's entry is moved to the closed update block list when the block is closed. The closed update block list is a list, in controller RAM, of data update blocks that have been closed. A subset of the entries in this list is moved to a sector in the group address table during a control write operation. The group address table (GAT) is a list of metablock addresses for all logical groups of host data in the memory system. The GAT contains one entry for each logical group, ordered sequentially according to logical address.
The nth entry in the GAT contains the metablock address of the logical group with address n. In a preferred embodiment, the GAT is a table in flash memory comprising a set of sectors (referred to as GAT sectors) with entries defining the metablock address of every logical group in the memory system. The GAT sectors are located in one or more dedicated control blocks in flash memory (referred to as GAT blocks). Figure 17A shows the data fields of a group address table (GAT) sector. A GAT sector may have sufficient capacity for GAT entries covering a set of 128 contiguous logical groups. Each GAT sector comprises two components, namely a set of GAT entries for the metablock address of each logical group within a range, and a GAT sector index. The first component contains information for locating the metablock associated with a logical address. The second component contains information for locating all valid GAT sectors within the GAT block. Each GAT entry has three fields, namely: the metablock number, the page tag as described earlier in connection with Figure 3A(iii), and a flag indicating whether the metablock has been relinked. The GAT sector index lists the positions of the valid GAT sectors in a GAT block. This index is present in every GAT sector, but is superseded by the version in the next written GAT sector in the GAT block. Thus only the version in the last written GAT sector is valid. Figure 17B shows an example of recording group address table (GAT) sectors in one or more GAT blocks. A GAT block is a metablock dedicated to recording GAT sectors. When a GAT sector is updated, it is written to the next available physical sector location in the GAT block 720. Multiple copies of a GAT sector may therefore exist in the GAT block, with only the last written copy being valid.
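A minimal sketch of the lookup implied by this layout, under the stated assumption of 128 entries per GAT sector; the entry tuple and all names are illustrative, not the patent's storage format.

```python
# Hypothetical sketch of locating a GAT entry; 128 entries per GAT sector
# as stated above. Names and the sample data are illustrative assumptions.
ENTRIES_PER_GAT_SECTOR = 128

def locate_gat_entry(logical_group, gat_sectors):
    """gat_sectors maps a GAT sector number to its list of entries, where
    each entry is (metablock_number, page_tag, relinked_flag)."""
    sector_no = logical_group // ENTRIES_PER_GAT_SECTOR
    offset = logical_group % ENTRIES_PER_GAT_SECTOR
    return gat_sectors[sector_no][offset]

# Example: logical group 130 falls in GAT sector 1, at offset 2.
gat_sectors = {1: [(0, 0, False)] * 128}
gat_sectors[1][2] = (57, 3, False)     # metablock 57, page tag 3, not relinked
print(locate_gat_entry(130, gat_sectors))   # → (57, 3, False)
```

In the real system the valid copy of GAT sector 1 would first be located via the GAT sector index of the last written GAT sector; this sketch assumes that step has already been done.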
For example, GAT sectors 4 and 255 (the latter containing entries for the logical groups LG3968-LG4095) have been updated at least twice, with only the latest version being valid. A set of indices in the last written GAT sector in the block identifies the location of every valid GAT sector in the GAT block. In this example, the last written GAT sector in the block is GAT sector 236, and its set of indices is the valid one, superseding all previous sets. When the GAT block eventually becomes fully filled with GAT sectors, the block is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased. As described above, a GAT block contains entries for a logically contiguous set of groups in a region of logical address space. GAT sectors within a GAT block each contain logical-to-physical mapping information for 128 contiguous logical groups. The number of GAT sectors required to store entries for all logical groups within the address range spanned by a GAT block occupies only a fraction of the total sector positions in the block. A GAT sector may therefore be updated by writing it to the next available sector position in the block. An index of all valid GAT sectors and their positions in the GAT block is maintained in an index field in the most recently written GAT sector. The fraction of the total sectors in a GAT block occupied by valid GAT sectors is a system design parameter, typically 25%. However, there is a maximum of 64 valid GAT sectors per GAT block. In systems with large logical capacity, it may be necessary to store GAT sectors in more than one GAT block. In this case, each GAT block is associated with a fixed range of logical groups. A GAT update is performed as part of a control write operation, which is triggered when the ABL runs out of blocks for allocation (see Figure 18).
It is performed concurrently with the ABL fill and CBL empty operations. During a GAT update operation, one GAT sector has its entries updated with corresponding entries from the closed update block list. When a GAT entry is updated, any corresponding entries are removed from the closed update block list. For example, the GAT sector to be updated is selected on the basis of the first entry in the closed update block list. The updated sector is written to the next available sector location in the GAT block. When no sector location is available for an updated GAT sector, a GAT rewrite operation occurs during the control write operation. A new GAT block is allocated, and the valid GAT sectors defined by the GAT index are copied in sequential order from the full GAT block. The full GAT block is then erased. A GAT cache is a copy, in controller RAM 130, of the entries of a subdivision of the 128 entries of a GAT sector. The number of GAT cache entries is a system design parameter, with a typical value of 32. A GAT cache for the relevant sector subdivision is created each time an entry is read from a GAT sector. Multiple GAT caches are maintained, their number being a design parameter. A GAT cache is overwritten with a different sector subdivision on a least-recently-used basis. Erased Metablock Management. The erased block manager 160 shown in Figure 2 manages erased blocks using a set of lists maintaining directory and system control information. These lists are distributed among the controller RAM 130 and flash memory 200. When an erased metablock must be allocated for storage of user data, or for storage of a system control data structure, the next available metablock number in the allocation block list (ABL) held in controller RAM is selected (see Figure 15).
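The least-recently-used replacement of GAT cache subdivisions described above can be sketched as follows. This is a simplified illustration with invented names; the real firmware caches fixed-size slices of GAT sectors rather than arbitrary Python values.

```python
from collections import OrderedDict

# Hypothetical sketch of LRU replacement among GAT caches. MAX_CACHES
# and the keying scheme are illustrative assumptions, not patent values.
MAX_CACHES = 4

class GatCaches:
    def __init__(self):
        self.caches = OrderedDict()   # subdivision key -> cached entries

    def access(self, subdivision, load_entries):
        if subdivision in self.caches:
            self.caches.move_to_end(subdivision)   # mark most recently used
        else:
            if len(self.caches) == MAX_CACHES:
                self.caches.popitem(last=False)    # evict least recently used
            self.caches[subdivision] = load_entries(subdivision)
        return self.caches[subdivision]

g = GatCaches()
for sub in [0, 1, 2, 3, 0, 4]:        # re-touching 0 makes 1 the LRU victim
    g.access(sub, lambda s: f"entries-{s}")
print(list(g.caches))                  # → [2, 3, 0, 4]
```

The point of the cache is the one made in the text: repeated translations within the same 32-entry subdivision avoid rereading the GAT sector from flash.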
Similarly, when a metablock is retired and erased, its number is added to a cleared block list (CBL), also held in controller RAM. Relatively static directory and system control data are stored in flash memory. These include an erased block list and a bitmap (MAP) listing the erased status of all metablocks in flash memory. The erased block list and MAP are stored in individual sectors and are recorded in a dedicated metablock, known as a MAP block. These lists, distributed among the controller RAM and flash memory, provide a hierarchy of erased block records for efficiently managing the use of erased metablocks. Figure 18 is a schematic block diagram showing the distribution and flow of the control and directory information for the use and recycling of erased blocks. The control and directory data are maintained in lists held either in controller RAM 130 or in a MAP block 750 residing in flash memory 200. In the preferred embodiment, the controller RAM 130 holds the allocation block list (ABL) 610 and the cleared block list (CBL) 740. As described earlier in connection with Figure 15, the allocation block list (ABL) keeps track of which metablocks have recently been allocated for storage of user data or for storage of system control data structures. When a new erased metablock must be allocated, the next available metablock number in the allocation block list (ABL) is selected. Similarly, the cleared block list (CBL) is used to keep track of update metablocks that have been deallocated and erased. The ABL and CBL are held in controller RAM 130 (see Figure 1) for speedy access and easy manipulation when tracking the relatively active update blocks. The allocation block list (ABL) keeps track of a pool of erased metablocks and the allocation of those erased metablocks to become update blocks.
Thus, each of these metablocks may be described by an attribute specifying whether it is an erased block in the ABL pending allocation, an open update block, or a closed update block. Figure 18 shows the ABL containing an erased ABL list 612, an open update block list 614, and a closed update block list 616. In addition, associated with the open update block list 614 is the associated original block list 615. Similarly, associated with the closed update block list 616 is the associated erased original block list 617. As shown previously in Figure 15, these associated lists are subsets of the open update block list 614 and the closed update block list 616, respectively. The erased ABL list 612, the open update block list 614, and the closed update block list 616 are all subsets of the allocation block list (ABL) 610, with the entries in each having the corresponding attribute. The MAP block 750 is a metablock dedicated to storing erase management records in flash memory 200. The MAP block stores a time series of MAP block sectors, where each MAP sector is either an erase block management (EBM) sector 760 or a MAP sector 780. As blocks are used up in allocation and recycled when metablocks are retired, the associated control and directory data are preferably contained in logical sectors that may be updated within the MAP block, with each instance of updated data recorded in a new block sector. Multiple copies of EBM sectors 760 and MAP sectors 780 may exist in the MAP block 750, with only the latest version being valid. An index to the positions of the valid MAP sectors is contained in a field of the EBM sector. A valid EBM sector is always written last to the MAP block during a control write operation. When the MAP block 750 is full, it is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.
Each EBM sector 760 contains an erased block list (EBL) 770, which is a list of addresses of a subset of the population of erased blocks. The erased block list (EBL) 770 acts as a buffer containing erased metablock numbers, from which metablock numbers are periodically taken to refill the ABL, and to which metablock numbers are periodically added to re-empty the CBL. The EBL 770 serves as a buffer for the following: an available block buffer (ABB) 772, an erased block buffer (EBB) 774, and a cleared block buffer (CBB) 776. The available block buffer (ABB) 772 contains a copy of the entries of the ABL 610 immediately following the previous ABL fill operation. It is in effect a backup copy of the ABL just after an ABL fill operation. The erased block buffer (EBB) 774 contains erased block addresses previously transferred either from MAP sectors 780 or from the CBB list 776 (described below), and which are available for transfer to the ABL 610 during an ABL fill operation. The cleared block buffer (CBB) 776 contains addresses of erased blocks that have been transferred from the CBL 740 during a CBL empty operation, and which will subsequently be transferred to MAP sectors 780 or to the EBB list 774. Each MAP sector 780 contains a bitmap structure referred to as MAP. The MAP uses one bit for each metablock in flash memory, which is used to indicate the erase status of each block. Bits corresponding to the block addresses listed in the ABL, CBL, or erased block list in the EBM sector are not set to the erased state in the MAP. Any block that does not contain a valid data structure and is not designated as an erased block within the MAP, erased block list, ABL, or CBL is never used by the block allocation algorithm, and such blocks are therefore inaccessible for storage of host or control data structures. This provides a simple mechanism for excluding blocks with defective locations from the accessible flash memory address space.
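A one-bit-per-metablock erase-status map like the MAP described above can be sketched with a plain bitmap. The helper names are invented for illustration; the actual on-flash bit ordering is not specified here.

```python
# Hypothetical sketch of the MAP bitmap: one bit per metablock, where a
# set bit marks the block as erased. Names are illustrative assumptions.
class EraseMap:
    def __init__(self, num_metablocks):
        self.bits = bytearray((num_metablocks + 7) // 8)

    def set_erased(self, block):
        self.bits[block // 8] |= 1 << (block % 8)

    def clear_erased(self, block):      # e.g. when listed in ABL/CBL/EBL
        self.bits[block // 8] &= ~(1 << (block % 8))

    def is_erased(self, block):
        return bool(self.bits[block // 8] & (1 << (block % 8)))

m = EraseMap(1024)
m.set_erased(300)
m.set_erased(301)
m.clear_erased(301)     # block 301 moved into a RAM list, so cleared in MAP
print(m.is_erased(300), m.is_erased(301))   # → True False
```

This mirrors the rule stated above: a block tracked by the ABL, CBL, or EBL has its MAP bit cleared, so it can only be allocated through those lists and never twice.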
The hierarchy shown in Figure 18 allows erased block records to be managed efficiently and provides full security of the block address lists stored in the controller's RAM. Erased block entries are exchanged between these block address lists and one or more MAP sectors 780 on an infrequent basis. The lists may be reconstructed during system initialization after a power-down, via the information in the erased block lists and address translation tables stored in sectors in flash memory, together with limited scanning of a small number of referenced data blocks in flash memory. The algorithms adopted for updating the hierarchy of erased metablock records result in erased blocks being allocated for use in an order that interleaves bursts of blocks in address order from the MAP block 750 with bursts of block addresses from the CBL 740 reflecting the order in which blocks were updated by the host. For most metablock sizes and system memory capacities, a single MAP sector can provide a bitmap for all metablocks in the system. In this case, erased blocks are always allocated for use in the same address order as recorded in this MAP sector. Erase Block Management Operations. As described above, the ABL 610 is a list with address entries for erased metablocks that may be allocated for use, and for metablocks that have recently been allocated as data update blocks. The actual number of block addresses in the ABL lies between maximum and minimum limits, which are system design variables. The number of ABL entries formatted during manufacturing is a function of the card type and capacity. In addition, the number of entries in the ABL may be reduced near the end of life of the system, as the number of available erased blocks is reduced by block failures during the lifetime. For example, after a fill operation, entries in the ABL may designate blocks available for the following purposes.
Entries for partially written data update blocks, with one entry per block, not exceeding the system limit for the maximum number of concurrently open update blocks; between one and twenty entries for erased blocks for allocation as data update blocks; and four entries for erased blocks for allocation as control blocks. ABL Fill Operation. As the ABL 610 becomes depleted through allocations, it needs to be refilled. The operation to fill the ABL occurs during a control write operation. It is triggered when a block must be allocated, but the ABL contains insufficient erased block entries available for allocation as a data update block or some other control data update block. During a control write, the ABL fill operation is performed concurrently with the GAT update operation. The following actions occur during an ABL fill operation. 1. ABL entries with attributes of current data update blocks are retained. 2. ABL entries with attributes of closed data update blocks are retained, unless an entry for the block is being written in the concurrent GAT update operation, in which case the entry is removed from the ABL. 3. ABL entries for unallocated erased blocks are retained. 4. The ABL is compacted to remove gaps created by the removal of entries, maintaining the order of entries. 5. The ABL is completely filled by appending the next available entries from the EBB list. CBL Empty Operation. The CBL is a list of erased block addresses in controller RAM, with the same limits on the number of block entries as the ABL. The operation to empty the CBL occurs during a control write operation. Entries are removed from the CBL 740 and written to the CBB list 776. MAP Exchange Operation. A MAP exchange operation between the erased block information in the MAP sectors 780 and the EBM sector 760 may occur periodically during a control write operation, when the EBB list 774 has been emptied.
If all erased metablocks in the system are recorded in the EBM sector 760, no MAP sector 780 exists and no MAP exchange is performed. During a MAP exchange operation, the MAP sector feeding erased blocks to the EBB 774 is regarded as a source MAP sector 782. Conversely, the MAP sector receiving erased blocks from the CBB 776 is regarded as a destination MAP sector 784. If only one MAP sector exists, it acts as both the source and the destination MAP sector, as defined below. The following actions are performed during a MAP exchange. 1. A source MAP sector is selected on the basis of an incrementing pointer. 2. A destination MAP sector is selected on the basis of the block address in the first CBB entry that is not in the source MAP sector. 3. The destination MAP sector is updated as defined by the relevant entries in the CBB, and those entries are removed from the CBB. 4. The updated destination MAP sector is written to the MAP block, unless no separate source MAP sector exists. 5. The source MAP sector is updated as defined by the relevant entries in the CBB, and those entries are removed from the CBB. 6. The residual entries in the CBB are appended to the EBB. 7. The EBB is filled to the extent possible with erased block addresses defined from the source MAP sector. 8. The updated source MAP sector is written to the MAP block. 9. An updated EBM sector is written to the MAP block. List Management. Figure 18 shows the distribution and flow of control and directory information between the various lists. For convenience, the operations moving entries between elements of the lists, or changing the attributes of entries, are identified in Figure 18 as [A] to [O], as explained below. [A] When an erased metablock is allocated as an update block for host data, the attribute of its entry in the ABL is changed from erased ABL block to open update block. [B] When an erased metablock is allocated as a control block, its entry in the ABL is removed.
[C] When an ABL entry with open update block attributes is created, an associated original block field is added to the entry, to record the original metablock address of the logical group being updated. This information is obtained from the GAT. [D] When an update block is closed, the attribute of its entry in the ABL is changed from open update block to closed update block. [E] When an update block is closed, its associated original block is erased, and the attribute of the associated original block field in its ABL entry is changed to erased original block. [F] During an ABL fill operation, any closed update block whose address is updated in the GAT during the same control write has its entry removed from the ABL. [G] During an ABL fill operation, when an entry for a closed update block is removed from the ABL, an entry for its associated erased original block is moved to the CBL. [H] When a control block is erased, an entry for it is added to the CBL. [I] During an ABL fill operation, erased block entries are moved from the EBB list to the ABL, and are given the attributes of erased ABL blocks. [J] After modification of all relevant ABL entries during an ABL fill operation, the block addresses in the ABL replace the block addresses in the ABB list. [K] Concurrently with the ABL fill operation during a control write, entries for erased blocks in the CBL are moved to the CBB list. [L] During a MAP exchange operation, all relevant entries are moved from the CBB list to the MAP destination sector. [M] During a MAP exchange operation, all relevant entries are moved from the CBB list to the MAP source sector. [N] Subsequent to [L] and [M] during a MAP exchange operation, all remaining entries are moved from the CBB list to the EBB list. [O] Subsequent to [N] during a MAP exchange operation, entries other than those moved in [M] are moved, if possible, from the MAP source sector to fill the EBB list.
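Several of the movements above ([F], [I]) occur inside the ABL fill operation, whose five actions can be sketched as a list transformation. The attribute strings, the capacity, and the tuple encoding are illustrative assumptions, not the patent's data layout.

```python
# Hypothetical sketch of the ABL fill actions. Entries are modeled as
# (block_number, attribute) pairs; all names are illustrative.
def abl_fill(abl, ebb, gat_written_blocks, capacity):
    kept = []
    for block, attr in abl:
        if attr in ("open_update", "erased"):
            kept.append((block, attr))             # actions 1 and 3: retain
        elif attr == "closed_update" and block not in gat_written_blocks:
            kept.append((block, attr))             # action 2: retain unless
                                                   # written in concurrent GAT update
        # dropped entries leave no gaps: rebuilding compacts the ABL (action 4)
    while len(kept) < capacity and ebb:
        kept.append((ebb.pop(0), "erased"))        # action 5: refill from EBB
    return kept

abl = [(10, "open_update"), (11, "closed_update"), (12, "closed_update"),
       (13, "erased")]
ebb = [20, 21, 22]
new_abl = abl_fill(abl, ebb, gat_written_blocks={11}, capacity=6)
print(new_abl)
# → [(10, 'open_update'), (12, 'closed_update'), (13, 'erased'),
#    (20, 'erased'), (21, 'erased'), (22, 'erased')]
```

After this transformation, per [J], the resulting block addresses would overwrite the ABB list as a backup copy.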
Logical-to-Physical Address Translation. To locate the physical location of a logical sector in flash memory, the logical-to-physical address translation module 140 shown in Figure 2 performs a logical-to-physical address translation. Except for logical groups that have recently been updated, the bulk of the translations can be performed using the group address table (GAT) residing in flash memory 200, or the GAT caches in controller RAM 130. Address translations for recently updated logical groups require looking up the address lists of update blocks, which reside mainly in controller RAM 130. The process of logical-to-physical address translation for a logical sector address therefore depends on the type of block associated with the logical group within which the sector lies. The types of blocks are: intact block, sequential data update block, chaotic data update block, and closed data update block. Figure 19 is a flow diagram showing the logical-to-physical address translation process. Essentially, the logical sector address is used to look up the various update directories (e.g., the open update block list and the closed update block list) to locate the corresponding metablock and physical sector. If the associated metablock is not part of an update process, the directory information is provided by the GAT. The logical-to-physical address translation includes the following steps: Step 800: A logical sector address is given. Step 810: The given logical address is looked up in the open update block list 614 in controller RAM (see Figures 15 and 18). If the lookup fails, proceed to Step 820; otherwise proceed to Step 830. Step 820: The given logical address is looked up in the closed update block list 616. If the lookup fails, the given logical address is not part of any update process; proceed to Step 870 for GAT address translation.
Otherwise proceed to Step 860 for closed update block address translation. Step 830: If the update block containing the given logical address is sequential, proceed to Step 840 for sequential update block address translation. Otherwise proceed to Step 850 for chaotic update block address translation. Step 840: Obtain the metablock address using sequential update block address translation. Proceed to Step 880. Step 850: Obtain the metablock address using chaotic update block address translation. Proceed to Step 880. Step 860: Obtain the metablock address using closed update block address translation. Proceed to Step 880. Step 870: Obtain the metablock address using group address table (GAT) translation. Proceed to Step 880. Step 880: Convert the metablock address to a physical address. The translation method depends on whether the metablock has been relinked. Step 890: The physical sector address is obtained. The various address translation processes are described in more detail below. Sequential Update Block Address Translation (Step 840). Address translation for a target logical sector address in a logical group associated with a sequential update block can be accomplished directly from information in the open update block list 614 (Figures 15 and 18), as follows. 1. It is determined from the "page tag" and "number of sectors written" fields in the list whether the target logical sector lies in the update block or in its associated original block. 2. The metablock address appropriate to the target logical sector is read from the list. 3. The sector address within the metablock is determined from the appropriate "page tag" field. Chaotic Update Block Address Translation (Step 850). The address translation sequence for a target logical sector address in a logical group associated with a chaotic update block is as follows. 1.
If it is determined from the chaotic sector list in RAM that the sector is a recently written sector, address translation can be accomplished directly from its position in this list. 2. The most recently written sector in the CBI block contains, within its chaotic block data field, the physical address of the chaotic update block associated with the target logical sector address. It also contains, within its indirect sector index field, the offset within the CBI block of the CBI sector last written for this chaotic update block (see Figures 16A-16E). 3. The information in these fields is cached in RAM, eliminating the need to read the sector during subsequent address translations. 4. The CBI sector identified by the indirect sector index field at Step 3 is read. 5. The direct sector index field for the most recently accessed chaotic update subgroup is cached in RAM, eliminating the need to perform the read at Step 4 for repeated accesses to the same chaotic update block. 6. The direct sector index field read at Step 4 or Step 5 in turn identifies the CBI sector relating to the logical subgroup containing the target logical sector address. 7. The chaotic block index entry for the target logical sector address is read from the CBI sector identified at Step 6. 8. The most recently read chaotic block index field may be cached in controller RAM, eliminating the need to perform the reads at Steps 4 and 7 for repeated accesses to the same logical subgroup. 9. The chaotic block index entry defines the location of the target logical sector either in the chaotic update block or in the associated original block. If the valid copy of the target logical sector is in the original block, it is located by use of the original metablock address and the page tag information.
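The overall dispatch of Figure 19 (Steps 800-890) can be sketched as follows. The lookup tables, the group size, and all names are illustrative assumptions; the chaotic and relink details above are collapsed into simple dictionary lookups.

```python
# Hypothetical sketch of the Figure 19 dispatch (Steps 800-890).
# All lookup structures and names are illustrative assumptions.
SECTORS_PER_GROUP = 4

def translate(logical_sector, open_updates, closed_updates, gat):
    group = logical_sector // SECTORS_PER_GROUP
    if group in open_updates:                     # Step 810
        block = open_updates[group]
        if block["sequential"]:                   # Step 830
            meta = block["metablock"]             # Step 840
        else:
            meta = block["chaotic_lookup"][logical_sector]   # Step 850
    elif group in closed_updates:                 # Step 820
        meta = closed_updates[group]              # Step 860
    else:
        meta = gat[group]                         # Step 870
    return meta   # Step 880 would then map metablock -> physical address

gat = {0: 100, 1: 101, 2: 102}
open_updates = {1: {"sequential": True, "metablock": 200}}
closed_updates = {2: 300}
print(translate(9, open_updates, closed_updates, gat))   # group 2 → 300
```

The ordering of the lookups matters: RAM-resident update lists shadow the GAT, so a stale GAT entry is never used for a group that is part of an update process.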
Closed Update Block Address Translation (Step 860). Address translation for a target logical sector address in a logical group associated with a closed update block can be accomplished directly from information in the closed update block list (see Figure 18), as follows. 1. The metablock address assigned to the target logical group is read from the list. 2. The sector address within the metablock is determined from the "page tag" field in the list. GAT Address Translation (Step 870). If a logical group is not referenced by either the open or the closed update block lists, its entry in the GAT is valid. The address translation sequence for a target logical sector address in a logical group referenced by the GAT is as follows. 1. The ranges of the available GAT caches in RAM are evaluated to determine whether an entry for the target logical group is contained in a GAT cache. 2. If the target logical group is found at Step 1, the GAT cache contains full group address information, including both the metablock address and the page tag, allowing translation of the target logical sector address. 3. If the target address is not in a GAT cache, the GAT index for the target GAT block must be read, to identify the location of the GAT sector relating to the target logical group address. 4. The GAT index for the last accessed GAT block is held in controller RAM, and may be accessed without a sector having to be read from flash memory. 5. A list of the metablock address of every GAT block, and the number of sectors written in each GAT block, is held in controller RAM. If the required GAT index is not available at Step 4, it may therefore be read immediately from flash memory. 6. The GAT sector relating to the target logical group address is read from the sector location in the GAT block defined by the GAT index obtained at Step 4 or Step 5.
The GAT cache is updated with the subdivision of the sector containing the target entry. 7. The target sector address is obtained from the metablock address and "page tag" fields within the target GAT entry. Metablock-to-Physical Address Translation (Step 880). If the flag associated with the metablock address indicates that the metablock has been relinked, the relevant LT sector is read from the BLM block, to determine the erase block address for the target sector address. Otherwise, the erase block address is determined directly from the metablock address. Control Data Management. Figure 20 illustrates the hierarchy of the operations performed on the control data structures in the course of the operation of the memory management. Update data management operations act on the various lists residing in RAM. Control write operations act on the various control data sectors and dedicated blocks in flash memory, and also exchange data with the lists in RAM.

Data update management operations are performed in RAM on the ABL, the CBL, and the chaotic sector list. The ABL is updated when an erased block is allocated as an update block or a control block, or when an update block is closed. The CBL is updated when a control block is erased, or when an entry for a closed update block is written to the GAT. The update chaotic sector list is updated when a sector is written to a chaotic update block. A control write operation causes information from control data structures in RAM to be written to control data structures in flash memory, with consequent update of other supporting control data structures in flash memory and RAM where necessary. It is triggered either when the ABL contains no further entries for erased blocks to be allocated as update blocks, or when the CBI block is rewritten. In the preferred embodiment, the ABL fill operation, the CBL empty operation, and the EBM sector update operation are performed during every control write operation. When the MAP block containing the EBM sector becomes full, the valid EBM and MAP sectors are copied to an allocated erased block, and the previous MAP block is erased. One GAT sector is written during every control write operation, and the closed update block list is modified accordingly. When a GAT block becomes full, a GAT rewrite operation is performed. A CBI sector is written, as described above, after certain chaotic sector write operations. When the CBI block becomes full, the valid CBI sectors are copied to an allocated erased block, and the previous CBI block is erased. A MAP exchange operation, as described above, is performed when there are no further erased block entries in the EBB list of the EBM sector. A MAP address (MAPA) sector, which records the current address of the MAP block, is written in a dedicated MAPA block each time the MAP block is rewritten. When the MAPA block becomes full, the valid MAPA sector is copied to an allocated erased block, and the previous MAPA block is erased. A boot sector is written in the current boot block each time the MAPA block is rewritten. When the boot block becomes full, the valid boot sector is copied from the current version of the boot block to the backup version, which then becomes the current version. The previous current version is erased and becomes the backup version, and the valid boot sector is written back to it. Alignment of Memory Distributed Over Multiple Memory Planes. As described earlier in connection with Figure 4 and Figures 5A-5C, multiple memory planes are operated in parallel in order to increase performance. Basically, each plane has its own set of sense amplifiers as part of the read and program circuits, to service in parallel a corresponding page of memory cells spanning the plane. When multiple planes are combined, multiple pages may be operated on in parallel, increasing performance further. According to another aspect of the invention, for a memory array organized into a plurality of erasable blocks and constituted from multiple memory planes (so that a plurality of logical units can be read from, or programmed into, the several planes in parallel), when an original logical unit stored in a first block of a particular memory plane is to be updated, provision is made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block, still in the same plane. Preferably, the logical unit is stored at the same offset position as its other versions in the plane, so that all versions of a given logical unit are serviced by the same set of sensing circuits. In a preferred embodiment, any intervening gap between the last programmed memory unit and the next available plane-aligned memory unit is padded with the current versions of logical units. The padding is accomplished by filling the gap with the current versions of the logical units logically following the last programmed logical unit, and with the current versions of the logical units logically preceding the logical unit stored in the next available plane-aligned memory unit. In this way, all versions of a logical unit are maintained in the same plane with the same offset as the original, so that in a garbage collection operation the latest version of a logical unit need not be retrieved from a different plane, which would lower performance. In a preferred embodiment, every memory unit across the plane is either updated or padded with these latest versions. A logical unit can therefore be read out of each plane in parallel, and the result will be in logical order without the need for further rearrangement. This scheme reduces the time needed to consolidate a chaotic block, by allowing the latest versions of the logical units of a logical group to be rearranged within their planes, without having to gather the latest versions from different memory planes. This is of benefit where the performance specification of the host interface defines a maximum latency for completion of a sector write operation by the memory system. Figure 21 shows a memory array constituted from multiple memory planes. The memory planes may come from the same memory chip or from multiple memory chips. Each plane 910 has its own read and program circuits 912 to service a page 914 of memory cells in parallel. Without loss of generality, in the example shown, the memory array has four planes operating in parallel. In general, a logical unit is the smallest unit of access by a host system. Typically, a logical unit is a sector of 512 bytes in size. A page is the largest unit of parallel read or program within a plane. Typically, a logical page contains one or more logical units. Thus, when multiple planes are combined, the largest aggregate unit of parallel read or program may be regarded as a metapage of memory cells, where the metapage is constituted by a page from each of the multiple planes.
如MP〇之中繼頁面具有四個頁面,即來自各平面抑、ρ】、 及P3的頁面’其中平行儲存邏輯頁面LPQ、LP,、LP2、 A。因此’和僅在_個平面中的操作相比,記憶體的讀取 及寫入效能增加四倍。 % 5己憶體陣列會進一步組織成複數個中繼區塊,如 ΜΒ〇、.·.、MBj ’其中各中繼區塊内的所有記憶體單元可成 為個早兀-起抹除。如MB〇的中繼區塊係以多個記憶體 :置所構成,以儲存資料的邏輯頁面914,如uvLIW甲 、塵區塊中的邏輯頁面係根據其填充於中繼區塊的順序,按 預定的序列分布於四個平面PG、pi、p2及㈣。例如,在 按邏輯上循序順序填充邏輯頁面時,會以第一平面中第一 頁面、第二平面中第二頁面等等的循環順序造訪平面。在 到達最後时面後,填充會以循環的方歧回,以從下一 個中繼頁面的第-平面重新開始。依此方式,即可在所有 平面均為平行操作時平行存取連續的邏輯頁面。 、:般而t,如果有鄕平面平行操作中及中繼區塊係按 邏輯上循序順序進行填充,則中繼區塊中第k個邏輯頁面將 常駐在平面X中,其中x = kM〇Dw。例如,有四個平面, W , 4結邏輯循序順序填充區塊時,第$個邏輯頁面as 將常駐在由5 MOD 4給定的平面中,即平Η,如圖Η所示。 各記憶體平面巾的記憶體操作係由_組讀取/寫入電路 912來執行。進出各讀取/寫入電路的資料係透過在控制哭 ㈣之㈣下的諸匯流排9料行傳送。控㈣似中的緩 衝器922可經由資料匯流排93〇協助緩衝資料的傳送。尤里 98680.doc -72- 1272487 在第一平面的操作需要存取第二平面的資料時,將需要兩 個步驟的程序。控制器會先讀出第二平面的資料,然後經 由資料匯流排及緩衝器傳送至第一平面。事實上,在大多 數的記憶體架構中’在兩個不同的位福之間傳送資料也 需要透過資料匯流排920交換資料。 至少,這涉及在-平面中從一組讀取/寫入電路傳送出 去,然後進入另一平面中的另一組讀取/寫入電路。在其中 平面係來自不同晶片的例子中,將需要在晶片之間傳送。 本發明可提供記憶體區塊管理的結構及方案,以避免一個 平面從另一個平面存取資料,以便將效能最大化。 如圖21所示,一中繼頁面係由多個邏輯頁(各位於其中一 個平面之中)所構成。每個邏輯頁可能係由一個以上的邏輯 單元所組成。當資料欲以逐個邏輯單元的方式被記錄於一 跨越該等平面的區塊中時,每個邏輯單元便將會落在該等 四個記憶體平面之一中。 在更新邏輯單元時會發生平面對齊的問題。在目前的範 例中,為了便於解說,將邏輯單元視為5 12位元組的邏輯區 段,一個邏輯頁面也是一個邏輯單元寬。由於快閃記憶體 不允弄未先抹除整個區塊而再寫入區塊的一部分,因此不 會將邏輯頁面的更新寫入現有的位置之上,而是將其記錄 在區塊未使用的位置中。然後會將邏輯單元的先前版本視 為淘汰。在一些更新後,區塊可含有一些由於已經更新因 此變成淘汰的邏輯單元。然後此區塊可以說是「不乾淨」, 而廢棄項目收集操作會忽略不乾淨的邏輯單元而收集各個 98680.doc -73- 1272487 邏輯單s的最新版本並按邏輯上循序順序將其重新記錄在 多個新的區塊中。㈣抹除及再循環不乾淨的區塊。 田-亥已更新邏輯單元被記錄於某一區塊中下個未被使用 的位置之中時’其通常不會被記錄於和先前版本相同的記 憶體平面之中。當要進行廢棄項目收集操作時,如彙總或 昼&’ —邏輯單元的最新版本便會被記錄於和原來相同的 平面之中’以維持原來的順序。然而,如果必須從另一個 平面擷取最新版本,效能將會降低。 因此,根據本發明的另一方面,給定平面之第一區塊的 原始邏輯單元。這可藉由以下方式來完成:將已更新的邏 軏單元記錄到仍在相同平面中之第二區塊的下—個可用位 置。在-項較佳具體實施例中,會以和原始區塊中原始 輯單元的相同相對位置之邏輯單元的目前版本,填補⑼, 藉由複製來填充)任何在上一個程式化記憶體單元和下一 個可用平面對齊記憶體單元之間的中間間隙。 圖22A顯示根據本發明的一般實施例,具有平面對齊之更 新之方法的流程圖。 步驟950 :於一被組織成複數個區塊的非揮發性記憶體之 中’每個區塊均被分割成可—起抹除的複數個記㈣單 元,每個§己憶體單元係用於儲存一邏輯單元的資料。 步驟952 :以多個記憶體平面構成記憶體,各平面具有一 組用於平行服務記憶體頁面的感測電路,該記憶體頁面含 有一或多個記憶體單元。 、 步驟954:依照第一順序將邏輯單元的第一版本儲存於一 98680.doc -74- 1272487 版本邏輯單 第一區塊的複數個記憶體單 元均被儲存於該等記憶體平面之一中 步驟队依照不同於第一順序的第二 :續版本儲存於-第二區塊之中,每個後續版: = 於和該第-版本相同的記憶體平面中下子 單元之中,以便可利用該組相同的感測:的"憶體 
面中來存取-邏輯單元的所有的版本。〜_相同的平 圖22Β顯示在圖22Α所示之流程圖中儲存 較佳具體實施例。 ’之V驟的 步驟956'包括步驟957、步驟958及步驟959。 步驟抓將各區塊分割成中繼頁面,各, CBL, and when the S erases a control block, when an item is written into the GA, it is written into a chaotic update block list. When the erased block is configured as an update area, the update block will be updated or the closed update block f will be updated. When a segment updates the update chaotic segment control write operation, the information 98880.doc -68 - 1272487 from the control data structure in the RAM is written into the control data structure in the flash memory, necessary The other supported control data structures in the flash memory and RAM are updated accordingly. Control write operations are triggered when the ABL does not contain any other items of the erased block that are to be configured to update the block, or when the CBI block is rewritten. In a preferred embodiment, the ABL fill operation, the CBL clear operation, and the EBM sector update operation are performed during each control write operation. When the MAP block containing the EBM segment is full, the valid EBM and MAP segments are copied to the configured erased block and the previous MAP block is erased. During each control write operation, writing a GAT segment also modifies the list of updated blocks that are closed. When the GAT block is full, a GAT rewrite operation will be performed. As described above, after a few chaotic section write jobs, a CBI section is written. When the CBI block becomes full, the valid CBI section is copied into the configured erase block and the previous CBI block is erased. As mentioned above, the MAP exchange operation is performed when there are no other erased block items in the EBB list of the EBM section. Each time the MAP block is rewritten, the MAP address (ΜΑΡΑ) section for recording the current address of the MAP block is written in the dedicated μAPA block. 
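The cyclic page-to-plane mapping described above, x = k MOD W for the kth logical page of a sequentially filled metablock, can be sketched as below; the helper names are illustrative only, not taken from the patent.

```python
# Round-robin mapping of logical pages onto W parallel planes for a
# sequentially filled metablock (the example of Figure 21 uses W = 4).

def plane_of_logical_page(k, num_planes=4):
    """Plane holding the kth logical page: x = k MOD W."""
    return k % num_planes

def metapage_of_logical_page(k, num_planes=4):
    """Filling wraps around to the next metapage after the last plane."""
    return k // num_planes

# The first eight logical pages cycle through the four planes twice.
print([plane_of_logical_page(k) for k in range(8)])  # -> [0, 1, 2, 3, 0, 1, 2, 3]
print(plane_of_logical_page(5))  # -> 1, as for LP5 in the text
```
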

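As a minimal model of the plane-aligned storing rule of steps 954-956 above, the sketch below records each new version of a logical unit at the next unwritten position, after the last programmed one, that falls in the unit's home plane. The flat position model of an update block is an assumption made for this sketch, not a structure from the patent.

```python
# Plane-aligned placement (steps 954-956): every version of logical
# unit k is appended at the next free position in plane k MOD W.

W = 4  # four planes, one memory unit per page

class UpdateBlock:
    def __init__(self):
        self.units = {}   # physical position -> (logical unit, data)
        self.last = -1    # flash is append-only: positions only grow

    def write(self, k, data):
        pos = self.last + 1
        while pos % W != k % W:   # seek the unit's home plane
            pos += 1
        self.units[pos] = (k, data)
        self.last = pos
        return pos // W, pos % W  # (metapage index, plane)

blk = UpdateBlock()
for k, v in [(10, "LS10'"), (11, "LS11'"), (5, "LS5'"), (6, "LS6'")]:
    blk.write(k, v)
# A further update of sector 10 lands in plane 2 again, never elsewhere:
print(blk.write(10, "LS10''"))  # -> (2, 2)
```

Note that positions skipped on the way to the home plane are simply left unwritten here; padding those gaps is the refinement of step 959.
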
This step may be performed before any of the storing steps.

Step 958: Store the subsequent versions of the logical units in the second block according to a second order different from the first order, each subsequent version being stored in a next available memory unit having the same offset within a metapage as the first version.

Step 959: Concurrently with the storing of the subsequent versions of the logical units, pad, metapage by metapage, any unused memory units preceding the next available memory unit, by copying thereto the current versions of logical units according to the first order.

Figure 23A shows an example of logical units being written, in sequential order, to a sequential update block without regard to plane alignment. In the example, each logical page is the size of one logical unit, a logical sector such as LS0, LS1, .... In the four-plane example, each block, such as MB0, can be regarded as being partitioned into metapages MP0, MP1, ..., where each metapage, such as MP0, contains four sectors, such as LS0, LS1, LS2 and LS3, one from each of the planes P0, P1, P2 and P3. The block is therefore filled with logical units, sector by sector, in the planes P0, P1, P2 and P3, in cyclic order.

In host write operation #1, the data in logical sectors LS5-LS8 is being updated. The data, updated as LS5'-LS8', is recorded in a newly allocated update block, starting from the first available location.

In host write operation #2, the data in the segment of logical sectors LS9-LS12 is being updated. The data, updated as LS9'-LS12', is recorded in the update block in the locations directly following where the last write ended. The two host writes are seen to have recorded the update data in the update block in logically sequential order, namely as LS5'-LS12'. The update block is regarded as a sequential update block, since it has been filled in logically sequential order. The update data recorded in the update block obsoletes the corresponding data in the original block.

However, the updated logical sectors have been recorded in the update block according to the next available location, without regard to plane alignment. For example, the sector LS5 was originally recorded in plane P1, but the updated LS5' is now recorded in P0. Similarly, all the other updated sectors are out of alignment.

Figure 23B shows an example of logical units being written, in nonsequential order, to a chaotic update block without regard to plane alignment.

In host write operation #1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10'-LS11' are stored in a newly allocated update block. At this point the update block is a sequential update block. In host write operation #2, the logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded in the update block in the locations immediately following the last write. This converts the sequential update block into a chaotic update block. In host write operation #3, the logical sector LS10' is updated again and is recorded in the next location of the update block as LS10''. At this point, LS10'' in the update block supersedes LS10' in the earlier recording, which in turn supersedes LS10 in the original block. In host write operation #4, the data of the logical sector LS10'' is updated again and is recorded in the next location of the update block as LS10'''. LS10''' is therefore now the last and only valid version of the logical sector LS10. All previous versions of LS10 are now obsolete. In host write operation #5, the data of the logical sector LS30 is updated and recorded in the update block as LS30'. The example illustrates that logical units within a logical group may be written to a chaotic update block in any order and with any repetition.

Again, the updated logical sectors have been recorded according to the next available location, without regard to plane alignment. For example, the sector LS10 was originally recorded in plane P2 (that is, in MP2, the third plane), but the updated LS10' is now recorded in P0 (that is, in MP0', the first plane). Similarly, in host write #3, the logical sector LS10' is updated again as LS10'' and placed in the next available location, which again happens to be in plane P0 (the first plane of MP1'). In general, then, it can be seen that recording an updated sector in the next available location of a block may leave the updated sector stored in a plane different from that of its previous versions.

Sequential Update Block with Plane Alignment and Padding of Intermediate Gaps

Figure 24A shows the sequential update example of Figure 23A with plane alignment and padding, according to a preferred embodiment of the invention.

In host write operation #1, the data, updated as LS5'-LS8', is recorded in a newly allocated update block, starting from the first available plane-aligned location. In this case LS5 is originally in P1, the second plane of a metapage. Therefore LS5'-LS7' are programmed in the corresponding planes of the first available metapage MP0' of the update block. At the same time, the gap of the unused first plane in MP0' is padded with the current version of the logical sector LS4, which precedes LS5 in the metapage of the original block. The original LS4 is then treated as obsolete data. The remaining LS8' is then recorded in the first plane of the next metapage MP1', and is plane-aligned.

In host write operation #2, the data, updated as LS9'-LS12', is recorded in the next available plane-aligned locations of the update block. LS9' is therefore recorded in the next available plane-aligned memory unit, namely the second plane of MP1'. In this case no gap results and no padding is required. The update block is a sequential update block, since it has been filled in logically sequential order. Moreover, it is plane-aligned, since each updated logical unit is in the same plane as its original.

Chaotic Update Block with Plane Alignment and Intermediate Gaps

Figure 24B shows the chaotic update example of Figure 23B with plane alignment but without any padding, according to a preferred embodiment of the invention.

In host write operation #1, the updated logical sectors LS10'-LS11' are stored in a newly allocated update block. Instead of being stored in the next available memory units, they are stored in the next available plane-aligned memory units. Since LS10' and LS11' were originally stored in planes P2 and P3 respectively (the third and fourth planes of MP2 of the original block), the next available plane-aligned memory units are in the third and fourth planes of MP0' of the update block. At this point the update block is nonsequential, the pages of the metapage MP0' being filled in the order "unfilled", "unfilled", LS10', LS11'.

In host write operation #2, the logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded in the next available plane-aligned locations of the update block. LS5' and LS6', whose originals occupy the second (P1) and third (P2) planes, or memory units, of MP1 in the original block, are therefore programmed into the corresponding planes of the next available metapage MP1' of the update block. This leaves a preceding, unused first plane in MP1'.

In host write operation #3, the logical sector LS10' is updated again and is recorded in the next plane-aligned location of the update block as LS10''. It is therefore written to the next available third plane, which is in MP2'. This leaves a preceding gap consisting of the last plane of MP1' and the first two planes of MP2'. It also obsoletes LS10' in MP0'.

In host write operation #4, the data of the logical sector LS10'' is updated again and is recorded in the next available third plane of the update block, in the metapage MP3', as LS10'''. LS10''' is therefore now the last and only valid version of the logical sector LS10. This leaves a gap consisting of the last plane of MP2' and the first two planes of MP3'.

In host write operation #5, the data of the logical sector LS30 is updated and recorded in the update block as LS30'. Since the original LS30 resides in P2, the third plane of its metapage, it is written to the next available third plane of the update block, which at this point is the third plane of MP4'. A gap results from the last plane of MP3' and the first two planes of MP4'. The example thus shows that logical sectors within a logical group may be written to a chaotic update block in any order and with any repetition, in a plane-aligned manner. In a subsequent garbage collection operation, all versions of a given logical sector, and in particular the latest version, will conveniently be serviced by the same set of sensing circuits.

Chaotic Update Block with Plane Alignment and Padding of Intermediate Gaps

Figure 24C shows the chaotic update example of Figure 23B with plane alignment and padding, according to another preferred embodiment of the invention.

The operation is the same as that shown in Figure 24B, except that the intermediate gaps are first filled with padding. In host write operation #1, the gap created by the first and second unused planes of the metapage MP0' is first padded with the current versions of LS8 and LS9, which reside in the original block. This renders LS8 and LS9 in the original block obsolete. At this point the update block is a sequential update block, the metapage MP0' being filled in the order LS8, LS9, LS10', LS11'.

In host write operation #2, a gap results from the preceding, unused first plane in MP1'; it is first padded with LS4. This renders LS4 in the original block obsolete. As before, the second write converts the sequential update block into a chaotic update block.

In host write operation #3, a gap results from the unused last plane of MP1' and the first two planes of MP2'. The last plane of MP1' is first padded with LS7, which follows the last programmed LS6', and the first two planes of MP2' are then padded with the logical units preceding LS10, namely LS8 and LS9. This obsoletes LS10' in MP0' and LS7-LS9 in the original block.

In host write operation #4, a gap consisting of the last plane of MP2' and the first two planes of MP3' is created. The last plane of MP2' is padded with LS11, the current version of the logical unit following the last written LS10'' in the metapage MP2'. The first two planes of MP3' are padded with LS8 and LS9 respectively, these being the logical units that precede LS10''' in the metapage MP3'.

In host write operation #5, the gap from the last plane of MP3' to the first two planes of MP4' is likewise padded, with LS11, LS28 and LS29 respectively. This example therefore shows that logical sectors within a logical group may be written to a chaotic update block in any order and with any repetition, in a plane-aligned manner.

In a preferred embodiment, a metapage contains a cyclic run of pages, one from each of the individual planes. Since a metapage can be read or programmed in parallel, it is convenient to implement host updates at the granularity of the metapage. Any padding, when present, can then be recorded together with the metapage's updated logical units.
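The padding of step 959, as exercised in the Figure 24C example above, can be sketched as follows. The flat position model, the `state` bookkeeping and the `current` lookup (the latest version of a sector, whether in the original block or earlier in the update block) are assumptions made for this sketch, not structures from the patent.

```python
# Plane-aligned chaotic write with padding (step 959 / Figure 24C):
# units skipped on the way to the plane-aligned target are filled with
# current versions, keeping each metapage logically sequential.

W = 4  # planes; one sector per page

def write_with_padding(block, state, k, data, current):
    """Record version `data` of sector k plane-aligned, padding the gap."""
    target = state["last_pos"] + 1
    while target % W != k % W:          # advance to k's home plane
        target += 1
    for g in range(state["last_pos"] + 1, target):
        if g // W == target // W:       # gap inside the target metapage
            base = k - target % W
        else:                           # gap trailing the previous write
            base = state["last_unit"] - state["last_pos"] % W
        pad = base + g % W              # sector that belongs at this slot
        block[g] = (pad, current(pad))  # copy its current version
    block[target] = (k, data)
    state["last_pos"], state["last_unit"] = target, k

block, state = {}, {"last_pos": -1, "last_unit": None}
current = lambda n: "LS%d" % n          # stand-in for the real lookup
for k, v in [(10, "LS10'"), (11, "LS11'"), (5, "LS5'"), (6, "LS6'")]:
    write_with_padding(block, state, k, v, current)
write_with_padding(block, state, 10, "LS10''", current)
# As in host write #3 of Figure 24C: LS7 pads the last plane of MP1',
# LS8 and LS9 pad the first two planes of MP2', then LS10'' is written.
print([block[p][0] for p in (7, 8, 9, 10)])  # -> [7, 8, 9, 10]
```

Padding is computed per metapage, mirroring the text's rule that a gap straddling two metapages is filled in the sequential order appropriate to each metapage separately.
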
In the embodiments shown in the examples of Figures 24A and 24C, padding is performed, during each host write, on the unused memory units that precede the plane-aligned memory unit about to be programmed with an update. The action on any unused memory units that follow the last programmed memory unit is deferred until the next host write. In general, any preceding unused memory units are padded within the boundary of each metapage. In other words, if a preceding gap straddles two metapages, padding is performed on each metapage in the logically sequential order appropriate to that metapage, without regard to continuity across the boundary. On consolidation of the block, the last metapage, if only partially written, is completed by padding.

In another embodiment, any partially filled metapage is fully padded before moving on to the next metapage.

The unit of read or program can vary, according to the flexibility supported by the individual memory architecture. The independent nature of the individual planes allows each page of a metapage to be read and programmed independently. The examples above have the page within each plane as the maximum unit of programming. Within a metapage, partial-metapage programming, of fewer than all its pages, is also possible. For example, the first three pages of a metapage may be programmed, and the fourth page programmed later.

Also, at the plane level, a physical page may contain one or more memory units. If each memory unit can store one sector of data, a physical page may store one or more sectors. Some memory architectures support partial-page programming, in which, by inhibiting the programming of selected memory units within a page, selected logical units may be programmed individually at different times over multiple programming passes.

Logical Unit Alignment for Chaotic Update of a Logical Group Within a Memory Plane

In a block memory management system, a logical group of logical units is stored in an original block in logically sequential order. When the logical group is updated, the subsequent versions of the logical units are stored in an update block. If the logical units are stored chaotically (that is, nonsequentially) in the update block, a garbage collection is eventually performed to collect the latest versions of the logical units in the original block and the update block, and to consolidate them sequentially into a new original block. The garbage collection operation is more efficient if the updated versions of a given logical unit are all stored in the update block in alignment with the original version in the original block, so that the same set of sensing circuits can access all the versions.

According to another aspect of the invention, in the block memory management system described above, when the memory is organized into a series of memory pages, where each page of memory units is serviced in parallel by a set of sensing circuits, all versions of a given logical unit are aligned if they all have the same offset position in the pages in which they are stored.

Figure 25 shows an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors. In the original block, since the logical sectors are stored in logically sequential order, the logical sectors LS0 and LS1 are stored in the page P0, the logical sectors LS2 and LS3 in the page P1, the logical sectors LS4 and LS5 in the page P2, and so on. It can be seen that in these two-sector pages the first sector from the left has page offset "0" and the second sector has page offset "1".

When the logical group of logical sectors stored sequentially in the original block is updated, the updated logical sectors are recorded in an update block. For example, the logical sector LS2 resides in the original block at page offset "0". If, in a first write, LS2 is updated to LS2', it is stored in the first available location of the update block having the same page offset "0". This will be in the first memory unit of the page P0'. If, in a second write, LS5 is updated to LS5', it is stored in the first available location of the update block having the same page offset "1". This will be in the memory unit with offset "1" of the page P1'. Before LS5' is stored, however, the unused memory units with offset "1" in P0' and offset "0" in P1' are first padded, by copying to them the latest versions of the logical sectors that will maintain a logically sequential order at least within each page. In this case, LS3 is copied to the offset "1" location of P0' and LS4 to the offset "0" location of P1'. If, in a third write, LS2' is updated again to LS2'', it is stored at offset "0" of P2'. If, in a fourth write, LS22 and LS23 are updated to LS22' and LS23', they are stored at offsets "0" and "1" of P3' respectively. Before that, however, the unused memory unit with offset "1" in P2' is padded with LS3.

The update sequence above assumes that individual sectors within a page can be programmed. For memory architectures in which partial-page programming is not supported, the sectors of a page must be programmed together. In that case, in the first write, LS2' and LS3 are programmed together into P0'. In the second write, LS4 and LS5' are programmed together into P1'. In the third write, LS2'' and LS3 are programmed together into P2', and so on.

Plane Alignment Within a Metapage

Alternatively, the unit of programming may have the granularity of a metapage. If the granularity of writes to a chaotic update block becomes the metapage, the entries of the CBI block described in connection with Figures 16A and 16B relate to metapages rather than to sectors. The increased granularity reduces the number of entries that must be recorded for a chaotic update block, and allows the index to be eliminated in favor of a single CBI sector per metablock.

Figure 26A has the same memory structure as Figure 21, except that each page contains two sectors rather than one. The metapage MP0 is thus seen to have pages each capable of storing the data of two logical units. If each logical unit is a sector, the logical sectors are stored sequentially, with LS0 and LS1 in plane P0 and LS2 and LS3 in plane P1, and so on, within MP0.

Figure 26B shows the metablock of Figure 26A with its memory units laid out linearly. In contrast with the single-sector pages of Figure 21, the logical sectors are stored cyclically across the four pages, with two sectors in each page.

In general, if there are W planes operating in parallel, with K memory units per page, and the metablock is being filled in logically sequential order, the kth logical sector in the metablock will reside in plane x, where x = k' MOD W, with k' = INT(k/K). For example, with four planes, W = 4, and two sectors per page, K = 2, then for k = 5, that is, the fifth logical sector LS5, the sector will reside in the plane given by 2 MOD 4, namely plane 2. The same principle applies generally to implementing the plane alignment described above.

The examples above concern the alignment of planes and pages in a multi-plane architecture. In the case of pages with multiple sectors, it is also advantageous to maintain sector alignment within a page. In this way, the same set of sensing circuits conveniently serves the different versions of the same logical sector, and operations such as relocation of a sector and "read-modify-write" can be performed efficiently.
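The mapping just described, x = k' MOD W with k' = INT(k/K), together with the page offset used for sector alignment, can be sketched as below; the helper name is illustrative only.

```python
# Plane and page offset of the kth logical sector when each page holds
# K sectors and W planes are filled in logically sequential order.

def plane_and_offset(k, W=4, K=2):
    k_prime = k // K                  # k' = INT(k/K)
    return k_prime % W, k % K         # (plane x = k' MOD W, offset in page)

# Fifth logical sector, four planes, two sectors per page:
print(plane_and_offset(5))  # -> (2, 1): plane 2, second slot of its page
print(plane_and_offset(2))  # -> (1, 0): LS2 sits at offset "0" in plane P1
```
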
The update will be LS5 and the _LS8% data will be recorded in the newly configured update block starting at the first available location. In master write operation #2, the block of the data in the logical section LS9-LS12 is being updated. The updated data is recorded in the update block immediately after the end of the last write. The figure shows that the way of two host writes is to record the update data in the update block in logical sequential mode, namely LS5, _LS12. The update block can be thought of as a sequential update block because it has been filled in a logically sequential order. The updated data recorded in the update block can be used to eliminate the corresponding data in the original block. However, the update logic segment is recorded in the update block based on the next available location but regardless of the planar alignment. For example, the segment LS5 is originally recorded in the plane P1 but the updated LS5 is now recorded. Similarly, all other update sections are not aligned. Figure 23B shows an example of a logical unit that writes a chaotic update block in a non-sequential order regardless of plane alignment. In host write operation #1, the logical segment LS1〇-LSU stored in the given logical group of the original relay block is updated. The updated logical section LS1〇'-Lsii' will be stored in the newly configured update block. At this point, update the 98680.doc -76- 1272487 block as a sequential update block. In the host write operation #2, the logical extents LS5-LS6 are updated to LS5'-LS6' and recorded in the update block immediately after the last write. This converts sequential update blocks into confusing update blocks. At host write operation #3, the logical segment LS 10' is updated again and recorded in the next position of the update block to become LS10". At this time, !^1〇" in the update block can replace the previous record. LS10' ' and LS10' can replace Lsl〇 in the original block. 
In the host write operation #4, the data of the logical section Ls1〇 is updated again and recorded in the next position of the update block to become LS10,,. Therefore, LS10,,, is now the last and only valid version of the logical segment LS10. All previous versions of Lsl〇 are now obsolete. In the host write operation #5, the data of the logical section LS30 is updated and recorded in the update block to become 1^3〇. In this example, it can be seen that logical units within a logical group are written to the chaotic update block in any order and with any repetition. Similarly, the update logic segment is recorded in the update block based on the next available location but regardless of the plane alignment. For example, the segment L § 1 〇 was originally recorded in plane P2 (i.e., MP2, third plane), but updated [si 〇, now recorded in P0 (i.e., MP〇', first plane). Similarly, in the host write #3 'the logical segment LS10 will be updated again to LS10' and placed in the next available position where the result is also in plane Ρ0 (the first plane of ΜΡΓ). Therefore, generally It can be seen from the figure that recording the update section to the next available position of the block causes the update section to be stored in a plane different from its previous version, with a sequential update of the plane alignment of the filled gap. Block 98680.doc -77- 1272487 Figure 24A shows a sequential update example of Figure 23A with plane alignment and padding in accordance with a preferred embodiment of the present invention. In host write operation #1, The data updated to LS5, -LS8, is recorded in the newly configured update block starting from the first available plane alignment position. In this example, LS5 is originally in P1, and ?1 is the second plane of the relay page. , the LS5'-LS7' will be programmed in the corresponding plane of the first available relay page Mp〇 of the update block. 
At the same time, the current version of the logical section LS4 in the front block will be relayed in the original block. Fill the unused number in ΜΙγ The gap of the plane. Then the original LS4 is processed into the eliminated data. Then the remaining LS8' is recorded in the first plane of the next relay page Mp〆 and is aligned in the plane. In the host write operation #2, The data updated to LS9, _LS12, is recorded in the update block of the next available + face alignment position. Therefore, the LS9 is recorded in the next available plane-aligned memory unit, that is, the first plane of Μ" At this point, no gaps are created and no padding is required. The update block can be thought of as a sequential update block because it has been filled in a logically sequential order. In addition, it will be updated by each logical unit and its original The same is in the same plane but planarly aligned. The chaotic update block with plane alignment with intermediate gaps Figure 24B shows the chaos of Figure 23B with plane alignment and without any padding in accordance with a preferred embodiment of the present invention. Update the example. In the host write operation #丨, the updated logical area is stored in the newly configured update block. It will not be stored in the next available one. In the unit, it is stored in the next available plane pair 98680.doc -78-1272487 memory unit. Since LS10, and LSI Γ are stored in planes Ρ2 and Ρ3 respectively (( of the original block> The third and fourth planes of 2), the next available plane-aligned memory unit will be in the third and fourth planes after the update block. At this time, the update block is non-sequential, which will be pressed The pages of "Unfilled", "Unfilled", LSI〇', and LSI 填充 are populated in the order of the relay page. In master write operation #2, the logical segment LS5-LS6 is updated to LS5 LS6 and recorded in the next available horizontally aligned update block. 
Therefore, the second (Ρ1) and third (Ρ2) planes of the original block or the LS5 and LS6 of the memory unit are programmed into the update block, and the available relay page MPi' Corresponding plane. This leaves the first unused plane in front of ΜΡι. At host write operation #3, update logical segment LS1〇 again, and record it in the next plane alignment position of the update block to become LS丨〇". Therefore, it will be written to the next available third. The plane, in Mp2, will leave a gap in the front in the first two planes of "B, the last plane and MP2'. This will eliminate the LS10 in the MP〇,. In host write operation #4, the data in the logical section 1^1〇, is updated again and recorded in the update block in the update block. , the next available third plane becomes LS10,,'. Therefore, LS10,,, is now the last and only valid version of the logical segment Lsi〇. This leaves a gap consisting of the last plane of MP2, and the two planes of MP/. In the host write operation #5, the data of the logical section LS3〇 is updated and recorded in the update block to become LS3〇. Since the original LS3〇 is resident in the P2 or 3rd plane of the relay page, it will be written to the next available plane in the update block 98680.doc -79 - 1272487. At this point, it will be the third plane of Μ". A gap will be created due to the last plane of MP3' and the first two planes of the MIV. Because the example shows that the plane alignment can be performed, in any order and any repetition, the multiple logical segments in the logical group are written into a chaotic _ _ ghost in the subsequent waste project collection operation, which will be conveniently All versions of a given logical section, especially the latest version, are serviced by the same sense and/or circuit. ~ Chaotic update block with plane alignment to fill the filled gaps. 
Figure 24C shows a chaotic update example of Figure 23A with planar alignment and padding in accordance with another preferred embodiment of the present invention. This operation is the same as that shown in Figure 24Β, but the intermediate gap is filled first with padding. In host write operation #1, the gap generated by the first and second unused planes of the relay page 填补ρ〇 is first filled with the current versions of ls8 and LS9 resident in the original block. This will eliminate 1^8 and 1^9 in the original block. At this time, the update block is a sequential update block, wherein the order of filling the relay page ΜΡ〇' is LS8, LS9, LS10, and LS11. In host write operation #2, a gap is created due to the first unused plane in the previous MPi, which will be padded with LS4 first. This will eliminate the LS4 in the original block. As before, the second write converts the sequential update block into a chaotic update block. In the host write operation #3, a gap is generated due to the last plane not used in MP!, and the first two planes of MP/. The last plane of MPi' is padded with LS7 after the last stylized LS6', and then the first two planes of MP2 are filled with the logical units (ie LS8 and LS9) before LS1. This will 98680.doc -80- 1272487 retire MP0, Lsl〇, and LS7_LS9 in the original block. In host write operation #4, a gap consisting of the last plane of the MIV and the first two planes of Μΐγ will be generated. The last plane of ΜΙγ can be filled by the last page written by the user uS1G in the relay page μρ2', and then the current version of the LS11 of the logic unit. The first two planes of Μιγ can be filled by lS9 respectively, and the relay page MIV A is the same as the previous logic unit. In the host write operation #5, it will also be followed by LSU, respectively. "And, come to the gap between the rear plane of the MP3 and the first two planes of the MP4'. 
Thus, this example shows that a plurality of logical sectors within a logical group can be written to a chaotic update block in any order and with any repetition, in a plane-aligned manner. In a preferred embodiment, a metapage consists of one page from each of the individual planes. Since the metapage can be read or programmed in parallel, it is expedient to implement host updates at metapage granularity, recording any padding together with the updated logical units of the metapage. In the specific embodiment shown in the examples of Figures 24A and 24C, during each host write the padding is performed on the unused memory units preceding the plane-aligned position of the updated unit, while the programming of any unused memory units following that position is deferred to the next host write. In general, any preceding unused memory units are filled within the boundary of each metapage: if a preceding gap spans two metapages, padding is performed on each metapage in its own logically sequential order, disregarding any wrap-around of logical addresses at the block boundary. When a metapage has only been partially written, its remaining pages may be completed by a subsequent write. In another embodiment, any partially filled metapage is instead padded to completion before moving on to the next metapage. Various alternatives are possible, depending on the flexibility supported by the individual memory architecture.

If the memory architecture supports independent reading and programming of each individual page of each plane, the pages of a metapage need not all be programmed together. The examples above have taken the maximum unit of programming to be a page; the metapage may instead be programmed in partial metapages.
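The plane-alignment rule walked through in Figures 24A-24C can be sketched in a few lines. This is only an illustrative model, not controller firmware; the function names and the representation of the fill pointer as a count of written pages are assumptions of the sketch. Given the plane in which a sector resides in its original block, the next write position in the update block is the first unfilled page whose plane index matches; the skipped pages are the gap that the Figure 24C embodiment pads with current versions of the intervening sectors, and that the Figure 24B embodiment leaves unused.

```python
W = 4  # planes operating in parallel; one page per plane per metapage

def plane_of(page_index, num_planes=W):
    """Plane in which the page at this fill position resides."""
    return page_index % num_planes

def next_aligned_position(fill_ptr, target_plane, num_planes=W):
    """First page position at or after fill_ptr that falls in target_plane.

    Returns (write_position, gap), where gap lists the skipped positions:
    padded with copies of the intervening sectors under Figure 24C,
    left unused under Figure 24B.
    """
    pos = fill_ptr
    while plane_of(pos, num_planes) != target_plane:
        pos += 1
    return pos, list(range(fill_ptr, pos))

# Host write #1 of Figure 24A: LS10 resides in plane 2 of its metapage,
# and the update block is empty (fill pointer 0).
pos, gap = next_aligned_position(0, 2)
assert pos == 2        # LS10' lands in the third plane of MP0'
assert gap == [0, 1]   # the planes padded with LS8 and LS9 in Figure 24C
```

Note how the same function also reproduces the later writes: a sector bound for plane 0 arriving when the fill pointer sits at plane 3 skips exactly one page, creating the single-plane gap the text describes.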
For example, the first three pages of a metapage may be programmed first, and the fourth page programmed later. Also, at the plane level, a physical page may contain one or more memory units. If each memory unit can store one sector of data, a physical page may store one or more sectors. Some memory architectures support partial-page programming, in which, by inhibiting selected memory units within a page, selected logical units can be programmed individually at different times over multiple programming passes.

Logical unit alignment for chaotic update of a logical group within a memory plane

In a block memory management system, a logical group of logical units is stored in logically sequential order in an original block. When the logical group is updated, and where partial-page programming is supported, subsequent versions of a logical unit may be stored in the update block by inhibiting selected memory units of a page. If the logical units are stored chaotically (that is, non-sequentially) in the update block, a garbage collection is eventually performed to collect the latest versions of the logical units from the original block and the update block and to consolidate them sequentially into a new original block. The garbage collection operation is more efficient if the updated versions of a given logical unit are all stored in the update block in alignment with the original version in the original block, so that the same set of sensing circuits can access all versions.

According to another aspect of the invention, in the block memory management system described above, in which the memory is organized into a series of memory pages with each page of memory units serviced in parallel by a set of sensing circuits, all versions of a given logical unit are aligned when they have identical offset positions in the pages in which they are stored.
Figure 25 shows an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors. In the original block, since the logical sectors are stored in logically sequential order, sectors LS0 and LS1 are stored in page P0, sectors LS2 and LS3 in page P1, and sectors LS4 and LS5 in page P2. In this two-sector page arrangement, the first sector from the left has page offset '0' and the second has page offset '1'.

When the logical group of sectors stored sequentially in the original block is updated, the updated sectors are recorded in an update block. For example, logical sector LS2 resides in page P1 of the original block with offset '0'. If LS2 is updated to LS2' in a first write, LS2' is stored in the first available location of the update block having the same page offset '0'; this is the first memory unit of page P0'. If LS5 is updated to LS5' in a second write, LS5' is stored in the first available location having the same page offset '1'; this is the second memory unit of page P1'. Before LS5' is stored, however, the unused memory unit with offset '1' in P0' and the one with offset '0' in P1' are first padded by copying the latest versions of the sectors that preserve logically sequential order at least within each page: LS3 is copied to the offset-'1' position of P0', and LS4 to the offset-'0' position of P1'. If LS2' is updated again to LS2'' in a third write, it is stored in the offset-'0' position of P2'.
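The write sequence of Figure 25 can be simulated with a short sketch. This is a simplified toy model (the class and its internal representation are assumptions of the sketch): each update is appended at its original in-page offset, and any skipped memory units are first padded with the sector that keeps the page logically sequential.

```python
K = 2  # memory units (sectors) per page

class OffsetAlignedBlock:
    """Toy model of the update block of Figure 25: each updated sector is
    stored at its original in-page offset, and skipped units are padded so
    that every page stays logically sequential."""

    def __init__(self):
        self.units = []  # flat list of (label, logical sector number)

    def write(self, sector, label):
        while True:
            p = len(self.units)                    # next free unit position
            page = self.units[p - p % K:]          # units already in this page
            anchor = page[0][1] if page else sector - sector % K
            wanted = anchor + p % K                # sector keeping page sequential
            if wanted == sector and p % K == sector % K:
                self.units.append((label, sector))
                return
            self.units.append(('pad', wanted))     # copy of the current version

blk = OffsetAlignedBlock()
blk.write(2, "LS2'")     # first write: offset 0, first unit of P0'
blk.write(5, "LS5'")     # second write: LS3 and LS4 are padded in first
blk.write(2, "LS2''")    # third write: offset 0 of P2'
assert blk.units == [("LS2'", 2), ('pad', 3), ('pad', 4),
                     ("LS5'", 5), ("LS2''", 2)]
```

The final layout matches the figure: P0' holds LS2' and padded LS3, P1' holds padded LS4 and LS5', and LS2'' opens P2'.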
If LS22 and LS23 are updated to LS22' and LS23' in a fourth write, they are stored in the offset-'0' and offset-'1' positions of P3' respectively. Before that, however, the unused memory unit with offset '1' in P2' is padded with LS3. This update sequence assumes that individual sectors can be programmed within a page. For memory architectures in which partial-page programming is not supported, all sectors of a page must be programmed together: in the first write LS2' and LS3 are programmed together into P0'; in the second write LS4 and LS5' are programmed together into P1'; in the third write LS2'' and LS3 are programmed together into P2'; and so on.

Plane alignment within the metapage

Alternatively, the unit of programming may have the granularity of a metapage. If the unit of writing for a chaotic update block becomes a metapage, the entries of the CBI block described in connection with Figures 16A and 16B relate to metapages rather than sectors. The increased granularity reduces the number of entries that must be recorded for a chaotic update block, and allows the index to be eliminated altogether, the metablock being indexed directly.

The memory structure of Figure 26A is the same as that of Figure 21, except that each page contains two sectors instead of one. Each page of metapage MP0 can thus now store the data of two logical units. If each logical unit is a sector, the logical sectors are stored sequentially, with LS0 and LS1 in the page of plane P0 and LS2 and LS3 in the page of plane P1 of MP0.

Figure 26B shows the metablock of Figure 26A with its memory units laid out schematically in linear fashion. In contrast to the single-sector pages of Figure 21, the logical sectors are stored cyclically across the four pages, with two sectors in each page.
In general, if there are W planes operating in parallel and each page holds K memory units, and a logical group fills a metablock sequentially, the k-th logical sector of the metablock resides in plane x, where x = k' MOD W and k' = INT(k/K). For example, with four planes, W = 4, and two sectors per page, K = 2, then for k = 5 the fifth logical sector LS5 resides in the plane given by INT(5/2) MOD 4 = 2, that is, plane 2, as can be seen in Figure 24A.

As before, the same principles apply to implementing plane alignment in the examples above for a memory architecture having both multiple planes and multi-unit pages. In such a case, maintaining the alignment of sectors within the page as well is also advantageous, so that the same set of sensing circuits services the different versions of a given logical sector. Operations such as sector relocation and 'read-modify-write' can then be performed efficiently. When aligning the order of sectors within a page, one can
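The residence formula just stated can be checked directly. A minimal sketch (the function name is an assumption) computing, for a sequentially filled metablock with W planes and K sectors per page, both the plane and the in-page offset of the k-th logical sector:

```python
def sector_location(k, W, K):
    """Return (plane, page_offset) of the k-th logical sector in a
    sequentially filled metablock with W planes and K sectors per page."""
    logical_page = k // K      # k' = INT(k / K)
    plane = logical_page % W   # x = k' MOD W
    offset = k % K             # position of the sector within its page
    return plane, offset

# The worked example in the text: W = 4 planes, K = 2 sectors per page.
assert sector_location(5, 4, 2) == (2, 1)   # LS5 resides in plane 2
```

With K = 1 the formula reduces to the single-sector-page case of the earlier figures, where the plane is simply k MOD W.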

employ the same techniques as are used for aligning pages with planes. Also, depending on the embodiment, any intervening gaps may or may not be padded.

Plane alignment of logical units without padding

Figure 27 shows an alternative scheme in which plane alignment in the update block is achieved without padding logical units copied from one location to another. The portions of the update block that intersect the four planes can be regarded as four buffers, each collecting plane-aligned updated logical units received from the host. Each logical unit received from the host is programmed into the next available memory unit of the appropriate buffer, without padding. Depending on the sequence of logical unit addresses received from the host, a different number of logical units may be programmed in each plane.

The chaotic update block MB'1 may contain updated versions of all the logical units of a logical metapage, as for MP'0. It may also contain fewer than all the logical units of a metapage, as for MP'1. In the case of MP'1, the missing logical unit LS4 can be obtained from the corresponding original block MB0.

This scheme is particularly effective when the memory architecture supports parallel reading of an arbitrary logical page from each plane. In this way all the logical pages of a metapage can be read in a single parallel read operation, even though the individual logical pages are not from the same row.

Phased program failure handling

When a program failure occurs in a block, all the data destined for the block is typically moved to another block and the failed block is marked bad. Depending on the timing specification of the operation in which the failure is encountered, there may not be enough time to additionally move the stored data to another block. The worst case is a program failure during a normal garbage collection operation, where another, identical garbage collection would be needed to relocate all the data to yet another block. In that case the write latency limit specified for a given host/memory device may be violated, since that limit is usually designed to accommodate one, not two, garbage collection operations.

Figure 28 shows a scheme in which, when a defective block suffers a program failure during a consolidation operation, the consolidation is repeated on another block. In this example, block 1 is the original block storing the complete logical units of a logical group in logically sequential order. For ease of illustration the original block contains sections A, B, C and D, each storing a subgroup of logical units. When the host updates certain logical units of the group, the newer versions of those units are recorded in an update block, block 2. As described earlier in connection with update blocks, this update may, depending on the host, record the logical units in sequential or non-sequential (chaotic) order. Eventually the update block is closed to further updates, because it is full or for some other reason. When the update block (block 2) is closed, the current versions of the logical units residing either on the update block or on the original block (block 1) are consolidated onto a new block (block 3) to form a new original block for the logical group. The example shows the update block containing newer versions of the logical units in sections B and D. For convenience, sections B and D are shown in block 2 not necessarily at the positions where they were recorded, but aligned with their original positions in block 1.

In the consolidation operation, the current versions of all the logical units of the logical group originally residing in block 1 are recorded in sequential order into the consolidation block (block 3). Thus the logical units of section A are first copied from block 1 to block 3, and then section B is copied from block 2 to block 3. In this example, when the logical units of section C are being copied from block 1 to block 3, a defect in block 3 causes the programming to fail.

One way of handling the program failure is to restart the consolidation on a fresh block (block 4). Sections A, B, C and D are then copied onto block 4 and defective block 3 is discarded. However, this amounts to performing two consolidation operations in succession, copying as many as two blocks full of logical units.

Memory devices have a specific time allowance for completing a given operation. For example, when a host writes to a memory device, it expects the write operation to complete within a specified time, known as the 'write latency'. While the memory device, such as a memory card, is busy writing the host's data, it signals a 'BUSY' state to the host. If the 'BUSY' state persists longer than the write latency, the host times out the write operation and registers an exception against it.

Figure 29 illustrates schematically a host write operation with a timing, or write latency, that allows enough time to complete a write (update) operation as well as a consolidation operation. The host write operation has a write latency TW that provides sufficient time for completion of an update operation 972 writing host data to an update block (Figure 29(A)). As described earlier for the block management system, a host write to an update block may trigger a consolidation operation. The timing therefore also allows for a consolidation operation 974 in addition to the update operation 972 (Figure 29(B)). However, having to restart the consolidation in response to a failed consolidation would take too long and exceed the specified write latency.

According to another aspect of the invention, in a memory with a block management system, a program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block before the interruption is transferred to another block, which may also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding the specified time limit, which would be violated by having to transfer all the data stored in the defective block on the spot. This error handling is especially important for garbage collection operations, so that the entire operation need not be repeated on a fresh block during a critical period. Subsequently, at an opportune time, the data of the defective block is salvaged by relocation to another block.

Figure 30 is a flow diagram of program failure handling according to a general scheme of the invention.

Step 1002: Organize a non-volatile memory into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.

Program failure handling (first phase)
Step 1012: Store a sequence of logical units of data in a first block.
Step 1014: In response to a storing failure at the first block after storing some of the logical units, store subsequent logical units in a second block serving as a breakout block for the first block.

Program failure handling (final phase)
Step 1020: In response to a predefined event, transfer the logical units stored in the first block to a third block, where the third block may be the same as or distinct from the second block.
Step 1022: Discard the first block.

Figure 31A shows one embodiment of program failure handling in which the third (final relocation) block is distinct from the second (breakout) block. During phase I, a sequence of logical units is recorded on a first block. If the logical units come from host writes, the first block may be regarded as an update block. If the logical units come from a consolidation during a compaction operation, the first block may be regarded as a relocation block. If at some point a program failure is encountered in block 1, a second block serving as a breakout block is provided. The logical unit that failed to be recorded in block 1, and the subsequent logical units, are recorded on the breakout block. In this way, no extra time is needed to replace the failed block 1 and the data resident on it.

In the intermediate phase II, all the recorded logical units of the sequence are obtainable between block 1 and block 2.

In the final phase III, the logical units are relocated to a block 3, serving as a relocation block, to replace the failed block and the data resident on it. The data of the failed block is thereby salvaged, and the failed block can then be discarded. The final phase is scheduled so that it does not conflict with the timing of any concurrent memory operations.

In this embodiment the relocation block 3 is distinct from the breakout block 2. This is convenient when the breakout block has been recorded with additional logical units during the intermediate phase. The breakout block has then turned into an update block and may be unsuitable for receiving the relocated logical units of the defective block 1.

Figure 31B shows another embodiment of program failure handling, in which the third (final relocation) block is the same as the second (breakout) block. Phases I and II are the same as in the first embodiment of Figure 31A. In phase III, however, the logical units of defective block 1 are relocated to the breakout block 2. This is convenient when the breakout block has not been recorded with additional logical units outside the original sequence of the earlier write operation. In this way, the number of blocks needed to store the logical units in question is minimal.

Embodiment of program failure handling during consolidation

Program failure handling is especially important during a consolidation operation. A normal consolidation consolidates the current versions of all the logical units of a logical group residing in the original block and the update block into a consolidation block. During the consolidation, if a program failure occurs in the consolidation block, another block serving as a breakout consolidation block is provided to receive the consolidation of the remaining logical units. In this way no logical unit need be copied more than once, and the operation, with its exception handling, can still be completed within the period specified for a normal consolidation. At an opportune time, the consolidation is completed by consolidating all outstanding logical units of the group into the breakout consolidation block. The opportune time will be some period outside the current host write during which there is time to perform the consolidation; one such time is during another host write in which there is an update but no associated consolidation operation.

In essence, the consolidation with program failure handling can be regarded as being implemented in multiple phases. In a first phase, after a program failure has occurred, the logical units are consolidated into more than one block, so as to avoid consolidating each logical unit more than once. The final phase is completed at an opportune time, when the logical group is consolidated into one block, preferably by collecting all the logical units in sequential order into the breakout consolidation block.

Figure 32A is a flow diagram of the initial update operation that results in a consolidation operation.

Step 1102: Organize the non-volatile memory into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.
Step 1104: Organize data into a plurality of logical groups, each logical group being a group of logical units storable in a block.
Step 1112: Receive host data packaged in logical units.
Step 1114: Create an original block of a logical group by storing, according to a first order, a first version of the logical units of the logical group in a first block.
Step 1116: Create an update block of the logical group by storing, according to a second order, subsequent versions of logical units of the logical group in a second block.
Step 1119: At a predefined event as described above, perform garbage collection to collect the current versions of the logical units among the various blocks and re-record them into a new block.

Figure 32B is a flow diagram of the multi-phase consolidation operation according to a preferred embodiment of the invention.

Consolidation failure handling (phase I)
The error-handled consolidation, phase I operation 1120, comprises steps 1122 and 1124.

Step 1122: Store the current versions of the logical units of the logical group in a third block, in the same order as the first order, to create a consolidation block for the logical group.
Step 1124: In response to a storing failure at the consolidation block, store, in the same order as the first order, the logical units of the logical group absent from the third block into a fourth block, to provide a breakout consolidation block.

Since the data of blocks 1 and 2 has now been transferred to blocks 3 and 4, blocks 1 and 2 can be erased to free space. In the preferred embodiment, block 2 can be released immediately to the EBL (Erased Block List, see Figure 18) for reuse. Block 1 can be released only on the condition that it is a closed update block and there is another block to which the corresponding GAT entry points.

In effect, block 3 becomes the original block of the logical group, and block 4 becomes a replacement sequential update block for block 3.

After completion of the phase I consolidation, the memory device signals the host by releasing the BUSY signal.

Intermediate operation (phase II)
Phase II, the intermediate operation 1130, can take place before the phase III consolidation operation 1140. A number of possible situations may arise, as set out in any of steps 1132, 1134 and 1136.

Step 1132: Either, in a write operation of the logical group, write to the fourth block (the breakout consolidation block) as the update block.

If the host writes to the logical group in question, block 4, which is the breakout consolidation block and which has hitherto been a replacement sequential update block, is used as a normal update block. Depending on the host writes, it may remain sequential or turn chaotic. As an update block, it will at some point trigger the closing of another chaotic block, as described in the earlier preferred embodiments.

If the host writes to another logical group, operation proceeds directly to phase III.

Step 1134: Or, in a read operation, read the memory with the third block as the original block of the logical group and the fourth block as the update block.

In that case, the logical units of sections A and B are read from block 3, the original block of the logical group, and the logical units of sections C and D are read from block 4, the update block of the group. Since only sections A and B are readable from block 3, the page where programming failed is not accessed, nor is the unwritten portion following it. Although the GAT directory in flash memory has not yet been updated and still points to block 1 as the original block, no data is read from it, and the block itself has been erased earlier.

Another possibility is that the host reads logical units within the logical group, in which case the logical units of sections A and B are read from block 3, the original block of the logical group, and those of sections C and D from block 4, the sequential block of the group.

Step 1136: Or, on power-up initialization, re-identify any of the first to fourth blocks by scanning their contents.

Another possibility for the intermediate phase is that the memory device is powered down and then restarted. As described above, during power-up initialization the blocks in the Allocation Block List (the erase pool blocks in use, see Figures 15 and 18) are scanned to identify the defective consolidation block that has become the special-status original block (block 3) of the logical group, together with the associated sequential update block (block 4). A flag in the first logical unit of the breakout block (block 4) indicates that the associated block is an original block that has suffered a program error (block 3). Block 3 can then be located by consulting the block directory (GAT).

In one embodiment, the flag is programmed into the first logical unit of the breakout consolidation block (block 4). This helps indicate the special status of the logical group: namely, that it has been consolidated into two blocks, block 3 and block 4.

An alternative to using a flag to identify the logical group with the defective block is to exploit the fact that, unlike an original block, the defective block is not full (unless the error occurred on the last page, and the last page has no ECC error), and so detect it during the scan. Also, depending on the implementation, a record of the failed group/block may be kept in a control data structure stored in flash memory, rather than as a flag in the header area of the first sector written to the breakout consolidation block (block 4).

Consolidation completion (phase III)

Step 1142: In response to a predefined event: for the first case, in which the fourth block has not been further recorded since phase I, store therein, in the same order as the first order, the current versions of all outstanding logical units of the logical group; for the second case, in which the fourth block has been further recorded since phase I, consolidate the third and fourth blocks into a fifth block.

Step 1144: Thereafter: for the first case, operate the memory with the consolidated fourth block as the original block of the logical group; for the second case, operate the memory with the fifth block as the original block of the logical group.

The final consolidation of phase III is performed whenever there is an opportunity that does not violate any specified time limit. A preferred case is to 'piggy-back' on the next host write time slot in which there is an update operation on another logical group without an accompanying consolidation. If the host write to the other logical group triggers a garbage collection of its own, the phase III consolidation is deferred.

Figure 33 shows example timings of the first and final phases of the multi-phase consolidation operation. The host write latency is the width of each host write time slot, of duration TW. Host write 1 is a simple update, and the current versions of a first set of logical units of logical group LG1 are recorded on the associated update block.

At host write 2, an update occurs on logical group LG1 such that the update block is closed (for example, it is full). A new update block is provided to record the remaining updates. Providing the new update block triggers garbage collection, leading to a consolidation operation on LG4 so that its blocks can be recycled for reuse. The current logical units of group LG4 are recorded in sequential order on a consolidation block. The consolidation proceeds until a defect is encountered in the consolidation block. The phase I consolidation is then invoked, with the consolidation continuing on a breakout consolidation block. Meanwhile, the final consolidation of LG4 (phase III) awaits the next opportunity.

At host write 3, a write of logical units of logical group LG2 also takes place, triggering a consolidation for LG2. This means the time slot is fully utilized.

At host write 4, the operation is merely the recording of some logical units of LG2 to its update block. The time remaining in the slot provides the opportunity to perform the final consolidation of LG4.

Embodiment in which the breakout consolidation block is not converted into an update block

Figures 34A and 34B show, respectively, the phase I and phase III operations of the multi-phase consolidation in a first case applicable to the examples of Figures 28 and 31.

Figure 34A shows the case in which the breakout consolidation block is used not as an update block but as a consolidation block whose consolidation has been interrupted. In particular, Figure 34A refers to host write #2 of Figure 33, in which the host writes updates of logical units belonging to logical group LG1, and in the course of which the operation also triggers a consolidation of a block associated with another logical group, LG4.

The original block (block 1) and update block (block 2) are formed in the same way as in the example of Figure 28. Likewise, during the consolidation operation the consolidation block (block 3) turns out to be defective while consolidating the logical units of section C. Unlike the re-consolidation scheme of Figure 28, however, the multi-phase scheme continues the consolidation on a newly provided block (block 4) serving as the breakout consolidation block. Thus, in the phase I consolidation, the logical units of sections A and B have already been consolidated into the consolidation block (block 3). When the program failure occurs in the consolidation block, the remaining logical units of sections C and D are copied sequentially to the breakout consolidation block (block 4).

If the host write of an update in a first logical group originally triggered the consolidation of a block associated with a second logical group, the updates of the first logical group are recorded to an update block of the first logical group (usually a new update block). The breakout consolidation block (block 4) is then not used to record any update data outside the consolidation operation, and remains a breakout consolidation block whose consolidation must be completed.

Since the data of blocks 1 and 2 is now entirely contained in other blocks (blocks 3 and 4), they can be erased for recycling. The address table (GAT) is updated to point to block 3 as the original block of the logical group. The directory information for the update block (in the ACL, see Figures 15 and 18) is also updated to point to block 4, which has become the sequential update block of the logical group (e.g., LG4).

As a result, the consolidated logical group is not confined to one block, but is distributed over the defective consolidation block (block 3) and the breakout consolidation block (block 4). An important feature of this scheme is that the logical units of the group are consolidated only once during this phase, at the expense of spreading the consolidation over more than one block. In this way the consolidation operation can be completed within the normally allotted time.

Figure 34B shows the third and final phase of the multi-phase consolidation begun in Figure 34A. As described in connection with Figure 33, the phase III consolidation is performed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation. In particular, Figure 34B refers to the time slot in which host write #4 of Figure 33 occurs. In that period the host write updates logical units belonging to logical group LG2 without triggering another, additional consolidation. The time remaining in the slot is therefore advantageously used for the phase III operation, completing the consolidation of logical group LG4.

This operation consolidates into the breakout block all outstanding logical units of LG4 not already in it. In this example, sections A and B are copied from block 3 to the breakout block (block 4) in logically sequential order. Owing to the wrap-around scheme for logical units in a block and the use of page tags (see Figure 3A), even though the example shows sections A and B being recorded after sections C and D in block 4, the recorded sequence is still regarded as equivalent to the sequential order A, B, C, D. Depending on the implementation, the current versions of the outstanding logical units to be copied are preferably obtained from block 3, since they are already in consolidated form, though they could also be collected from blocks 1 and 2 if those have not yet been erased.

After the final consolidation is completed on the breakout block (block 4), it is designated as the original block of the logical group, and the appropriate directory (e.g., GAT, see Figure 17A) is updated accordingly. Likewise, the failed physical block (block 3) is marked bad and excluded. The other blocks, block 1 and block 2, are erased and recycled. Meanwhile, the updates of LG2 are recorded in the update block associated with LG2.

Embodiment in which the breakout consolidation block becomes an update block

Figures 35A and 35B show, respectively, the phase I and phase III operations of the multi-phase consolidation in a second case applicable to the examples of Figures 28 and 33.

Figure 35A shows the case in which the breakout consolidation block is maintained as an update block receiving host writes, rather than as a consolidation block. This applies, for example, to a host write that updates logical group LG4 and in the process also triggers a consolidation of the same logical group.

As in the case of Figure 34A, the consolidation of blocks 1 and 2 into block 3 proceeds until a program failure is encountered while processing section C. The consolidation then continues on the breakout consolidation block (block 4). After the outstanding logical units (e.g., those in sections C and D) have been consolidated in the breakout block (block 4), rather than waiting for phase III to complete the consolidation of the logical group there, the breakout block is maintained as an update block. This case is particularly suitable where a host write updates a logical group and triggers a consolidation of that same group. In the example, this allows the records of the host updates of logical group LG4 to be recorded in the breakout consolidation block (block 4), instead of in a new update block. The update block (previously the breakout consolidation block, block 4) can be sequential, or can turn chaotic according to the host data recorded in it. In the example shown, block 4 has turned chaotic, because a subsequent newer version of a logical unit in section C has rendered the earlier version in block 4 obsolete.

During the intermediate phase, block 3 is regarded as the original block of LG4, and block 4 is the associated update block.

Figure 35B shows the third and final phase of the multi-phase consolidation begun in Figure 35A for the second case. As described in connection with Figure 33, the phase III consolidation is performed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation. In that period the host write updates logical units belonging to a logical group without triggering another consolidation, and the time remaining in the slot is advantageously used for the phase III operation to complete the consolidation of logical group LG4.

Logical group LG4 is then garbage-collected from blocks 3 and 4 into a new consolidation block (block 5). Block 3 is then marked bad, block 4 is recycled, and the new consolidation block (block 5) becomes the new original block of logical group LG4. The other blocks, block 1 and block 2, are also erased and recycled.

Other embodiments of phased program failure handling

The examples described in Figures 31A, 31B, 34A, 34B, 35A and 35B apply to the preferred block management system, in which each physical block (metablock) stores only logical units belonging to the same logical group. The invention is equally applicable to other block management systems in which there is no alignment of logical groups to physical blocks, such as those disclosed in WO 03/027828 and WO 00/49488. Some examples of implementing the phased program failure handling method in these other systems are shown in Figures 36A, 36B and 36C.

Figure 36A shows the phased program error handling method applied to the case where a host write triggers the closure of an update block and the update block is sequential. The closure in this example is accomplished by copying the remaining valid data (B and C) of original block 2 to sequential update block 3. In the example of a program failure at the start of programming data portion C, portion C is programmed to a reserved block 4. New host data can then be written to a new update block 5 (not shown). Phases II and III of the method are the same as for chaotic block closure.

Figure 36B shows the phased program error handling method applied, in the case of an update of an update block, to a partial-block system. In this example, the logical group is stored in original block 1 and other update blocks. The consolidation operation includes copying the data of original block 1 and another update block 2 to one of the update blocks (block 3 in the figure, selected according to certain rules). The difference from the main case already described is that block 3 has already been partially written.

Figure 36C shows the phased program error handling of a garbage collection operation, or cleanup, in a memory block management system that does not support logical groups mapped to metablocks. Such a memory block management (cyclic storage) system is described in WO 03/027828 A1. A distinguishing feature of the cyclic storage system is that blocks are not allocated for a single logical group; multiple logical groups of control data are supported in a metablock. Garbage collection involves taking valid data sectors, which may bear no relationship to one another (random logical block addresses), from a partially obsolete block to a relocation block that may already contain some data. If the relocation block becomes full during the operation, another one is opened.

Non-sequential update block indexing

In the sections above on chaotic block indexing, in connection with Figures 16A-16E, a CBI sector is used to store an index recording the positions of logical sectors stored randomly in a chaotic, or non-sequential, update block.

According to another aspect of the invention, in a non-volatile memory with a block management system supporting update blocks with non-sequential logical units, an index of the logical units in a non-sequential update block, buffered in RAM, is stored periodically in the non-volatile memory. In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the locations of the most recently written logical units can be determined without a scan during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.

Index pointers in CBI sectors stored in a CBI block after a predefined triggering event

According to the scheme described in connection with Figures 16A-16E, a list of the most recently written sectors of a chaotic block is held in controller RAM. A CBI sector containing the most current index information is written to flash memory (the CBI block 620) only after a predetermined number of writes to the logical group associated with a given chaotic block. In this way the number of CBI block updates is reduced.

Until the next update of the CBI sector for a logical group, the list of the most recently written sectors of the logical group is held in controller RAM. The list is lost if the memory device suffers a power shutdown, but can be rebuilt after power-up by scanning the update blocks during initialization.

Figure 37 shows an example schedule for writing a CBI sector to the associated chaotic index sector block after every N sector writes of the same logical group. The example shows two logical groups, LG3 and LG11, undergoing concurrent updates. Initially the logical sectors of LG3 are stored in sequential order in an original block. Updates of the group's logical sectors are recorded on the associated update block in the order dictated by the host; the example shows a chaotic update sequence. Concurrently, logical group LG11 is updated in the same manner on its own update block. After each logical sector write, its position in the update block is held in controller RAM. After each predefined triggering event, the current index of the logical sectors in the update block is written to the non-volatile chaotic index sector block in the form of a chaotic index sector. For example, the predefined triggering event may occur after every N writes, where N is, say, 3.

Although the examples given concern a logical unit of data that is a sector, those skilled in the art will recognize that the logical unit could also be some other aggregate, such as a page containing one sector or a group of sectors. Also, the first page of a sequential block need not be logical page 0, since a wrap-around page tag may be in place.

Index pointers in a CBI sector stored in the chaotic update block after a predefined triggering event

In another embodiment, the index pointers are stored in a dedicated CBI sector in the chaotic update block itself after every N writes to it. This scheme is similar to the embodiment above, in which the index is likewise stored in a CBI sector. The difference is that in the embodiment above the CBI sector is recorded in a CBI sector block, not in the update block itself.

The method is based on keeping all the chaotic block indexing information in the chaotic update block itself. Figures 38A, 38B and 38C show the states of an update block also storing CBI sectors at three different phases.

Figure 38A shows an update block up to the point where a CBI sector is recorded in it after a predetermined number of writes. In this example, after the host has written logical sectors 0-3 sequentially, it issues a command to write another version of logical sector 1, breaking the sequential order of the data writes. The update block is then converted to a chaotic update block, implementing the chaotic block index carried in a CBI sector. As described above, the CBI is an index containing the indices of all the logical sectors of the chaotic block: the 0th entry gives the offset in the update block of the 0th logical sector, and likewise the nth entry gives the offset of the nth logical sector. The CBI sector is written to the next available location in the update block. To avoid frequent flash accesses, it is written after every N data sector writes; in this example N is 4. If power is lost at this point, the last written sector is a CBI sector, and the block is regarded as a chaotic update block.

Figure 38B shows the update block of Figure 38A with logical sectors 1, 2 and 4 further recorded after the index sector. The newer versions of logical sectors 1 and 2 supersede the older versions previously recorded in the update block. In the event of a power cycle at this point, the last written sector must first be found, and then up to N sectors must be scanned to find the last written index sector and the most recently written data sectors.

Figure 38C shows the update block of Figure 38B with a further write of a logical sector triggering the next recording of an index sector. The same update block, after another N (N = 4) sector writes, records another current version of the CBI sector.

An advantage of this scheme is that no separate CBI block is needed. At the same time there is no concern over whether the overhead data area of a physical flash sector is large enough to hold the number of entries required by an index of the valid sectors of a chaotic update block. The chaotic update block then contains all the information, and address translation needs no external data. This allows a simpler algorithm, with a reduced number of control updates related to CBI block compaction, and shorter cascaded control updates. (See the section on CBI block management above.)

Information about the most recently written sectors stored in the data sector headers of the chaotic update block

According to another aspect of the invention, after every N writes an index of the logical units recorded in a block is stored in non-volatile memory, and current information about the logical units of the intermediate writes is stored in the overhead portion written with each logical unit. In this way, after a power restart, information about the logical units written since the last index update can be obtained quickly from the overhead portion of the last written logical unit in the block, without scanning the block.

Figure 39A shows the intermediate-write index stored in the header of each data sector in the chaotic update block.

Figure 39B shows an example of storing the intermediate-write index in the header of each written sector. In the example, after the four sectors LS0-LS3 are written, a CBI index is written as the next sector in the block. Thereafter, logical sectors LS'1, LS'2 and LS'3 are written to the block. Each time, the header stores the intermediate index of the logical units written since the last CBI index. Thus the header in LS'2 has an index providing the offset (i.e., position) of the last CBI index and of LS'1. Likewise, the header in LS'3 has an index providing the offset of the last CBI index and of LS'1 and LS'2.

The last written data sector always contains information about up to N of the last written pages (that is, back to the last written CBI sector). Whenever power is restored, the last CBI index provides the indexing information for the logical units written before that CBI index sector, and the indexing information for subsequently written logical units is found in the header of the last written data sector. This has the advantage that, at initialization, the block need not be scanned to determine the locations of the subsequently written sectors.

The scheme of storing intermediate index information in the data sector headers applies equally whether the CBI index sectors are stored in the update block itself or in a separate CBI sector block, as described above.

Index pointers stored in the data sector headers of the chaotic update block

In another embodiment, the entire CBI index is stored in the overhead portion of each data sector in the chaotic update block.

Figure 40 shows the information in the chaotic index field stored in the header of each data sector of the chaotic update block.

The information capacity of a sector header is limited, so the index range provided by any single sector may be designed as part of a hierarchical indexing scheme. For example, sectors within a specific plane of the memory may provide an index only to sectors within that plane. Also, the range of logical addresses may be divided into sub-ranges to allow an indirect indexing scheme. For example, if sectors with 64 logical addresses can be stored in a plane, each sector may have three fields for sector offset values, each field capable of storing four offset values. The first field defines the physical offset of the last written sector within each of the logical offset ranges 0-15, 16-31, 32-47 and 48-63. The second field defines the physical offset values of four sub-ranges of four sectors each within its relevant range. The third field defines the physical offset values of the four sectors within its relevant sub-range. The physical offset of a logical sector in the chaotic update block can thus be determined by reading the indirect offset values of up to three sectors.

An advantage of this scheme is that it, too, requires no separate CBI block or CBI sectors. It is, however, applicable only when the overhead data area of the physical flash sector is large enough to hold the number of entries required by an index of the valid sectors of the chaotic update block.

Limited logical range within the logical group of a chaotic update block

Within a logical group, the logical range of sectors that can be written non-sequentially can be reduced. The main advantages of this technique are the following. Since only one multi-sector page need be read (in the multi-chip case, pages can be read in parallel) to obtain all the data of a destination page (assuming source and destination are aligned; otherwise another read is needed), copying of sequentially written data can be completed more quickly, sectors outside the range remain sequentially written in the original block, and the garbage collection operation can be completed in a shorter time. Also, using the on-chip copy feature, sequential data can be copied from source to destination without transferring the data back and forth through the controller. If the source data is scattered, as happens in a chaotic block, up to one page read per sector may be needed to collect all the sectors to be written to the destination.

In one embodiment, rather than literally limiting the logical range to a certain number of sectors, the limit is implemented by limiting the number of CBIs (it is reasonable to limit the chaotic range only for large groups/metablocks, since multiple chaotic block indices are needed to cover the range of an entire logical group). For example, if a metablock/group has 2048 sectors, up to eight CBI sectors are needed, each covering a contiguous logical range of one subgroup of 256 sectors. If the number of CBIs is limited to four, the chaotic block can be used to write sectors of up to four subgroups (any four of them). The logical group is thus allowed up to four partially or fully chaotic subgroups, and at least four subgroups remain fully sequential. If a chaotic block has four valid CBI sectors associated with it, and the host writes a sector outside the ranges of these CBI sectors (outside the chaotic subgroups), the chaotic logical group should be consolidated and closed. This is extremely unlikely to happen, since in real applications the host does not need more than four chaotic ranges (subgroups) of 256 sectors within a range (logical group) of 2048 sectors. As a result, garbage collection is unaffected in the normal case, while the guard provided by the limiting rule covers the extreme case of a garbage collection running so long that it would trigger a host timeout.

Indexing of partially sequential chaotic update blocks

When a sequential update block has been partially written before the block is converted to chaotic management mode, all or part of the sequentially updated section of the logical group can continue to be treated as sequentially updated, and the chaotic update management can be applied to only a subset of the address range of the logical group.

Control data integrity and management

Data stored in a memory device may become corrupted owing to a power interruption or to a particular memory location becoming defective. When a memory block defect is encountered, the data is relocated to a different block and the defective block discarded. If an error is not extensive, it can be corrected on the fly by the error correction code (ECC) stored along with the data. There are, however, times when the ECC cannot correct the corrupted data, for example when the number of error bits exceeds the capacity of the ECC. This is unacceptable for critical data, such as the control data associated with the memory block management system.

Examples of control data are the directory information and block allocation information associated with the memory block management system, as described in connection with Figure 20. As noted above, the control data is maintained both in high-speed RAM and in the slower non-volatile memory blocks. Any frequently changing control data is maintained in RAM, with periodic control writes updating the equivalent information stored in the non-volatile metablocks. In this way the control data is stored in the non-volatile but slower flash memory, in control data structures such as GAT, CBI, MAP and MAPA shown in Figure 20, and a hierarchy of control data structures is maintained partly in flash memory and partly in controller RAM.

Critical data backup

According to another aspect of the invention, critical data, such as some or all of the control data, is guaranteed an extra level of reliability if maintained in duplicate. The duplication is performed in such a way that, for a multi-state memory system employing a two-pass programming technique to program the multiple bits of the same set of memory cells in succession, any programming error in the second pass cannot corrupt the data established by the first pass. Duplication also helps with the detection of write aborts and the detection of misdetections (i.e., both copies have good ECC but the data differ), and adds an extra level of reliability. Several techniques of data duplication are contemplated.

In one embodiment, after two copies of a given data have been programmed in an earlier programming pass, a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, if the subsequent programming pass is aborted before completion and corrupts the data of the earlier pass, at least one of the two copies remains unaffected.

In another embodiment, the two copies of a given data are stored in two different blocks, and the memory cells of at most one of the two copies are programmed in a later programming pass.

In yet another embodiment, after the two copies of a given data have been stored in a programming pass, no further programming is performed on the set of memory cells storing the two copies. This is achieved by programming the two copies in the final programming pass of that set of memory cells.

In yet another embodiment, the two copies of a given data are programmed into a multi-state memory in binary programming mode, so that no further programming of the programmed memory cells takes place.

In yet another embodiment, for a multi-state memory system employing a two-pass programming technique to program the multiple bits of the same set of memory cells in succession, a fault-tolerant code is employed to encode the multiple memory states, so that data established by an earlier programming pass is immune to errors in subsequent programming passes.

The complication of data duplication arises in multi-state memories in which each memory cell stores more than one bit of data. For example, a 4-state memory can be represented by two bits. One existing technique uses 2-pass programming to program such a memory. A first bit (the lower-page bit) is programmed by a first pass. The same cell is then programmed in a second pass to represent the desired second bit (the upper-page bit). In order not to change the value of the first bit in the second pass, the memory state representation of the first bit is made to depend on the value of the second bit. Therefore, during the programming of the second bit, if an error occurs because of a power interruption or other cause and results in an incorrect memory state, the value of the first bit is corrupted as well.
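The phased program-failure handling of Figures 30 and 31 described earlier can be sketched as a small simulation. This is an illustrative model only (the block objects, the `ProgramError` exception and the injected failure are assumptions of the sketch): phase I switches recording to the breakout block the moment a program step fails, without recopying anything, and the phase III relocation of the units left in the failed block is deferred to a later, non-time-critical moment.

```python
class ProgramError(Exception):
    pass

class Block:
    def __init__(self, name, fail_at=None):
        self.name, self.fail_at, self.units = name, fail_at, []

    def program(self, unit):
        # Simulated media: programming fails at one predetermined position.
        if self.fail_at is not None and len(self.units) == self.fail_at:
            raise ProgramError(self.name)
        self.units.append(unit)

def phase1_record(units, first, breakout):
    """Phase I: on a program failure, continue the sequence in the
    breakout block; no data is moved during the time-critical operation."""
    target = first
    for u in units:
        try:
            target.program(u)
        except ProgramError:
            target = breakout
            target.program(u)
    return target

def phase3_relocate(first, breakout, relocation):
    """Phase III (deferred): salvage the failed block's data sequentially."""
    for u in first.units + breakout.units:
        relocation.program(u)

blk1 = Block('block1', fail_at=2)          # fails while programming 3rd unit
blk2 = Block('block2')                     # breakout block
phase1_record(['LS0', 'LS1', 'LS2', 'LS3'], blk1, blk2)
assert blk1.units == ['LS0', 'LS1']        # recorded before the failure
assert blk2.units == ['LS2', 'LS3']        # sequence continued, nothing recopied

blk3 = Block('block3')                     # relocation block (Figure 31A case)
phase3_relocate(blk1, blk2, blk3)
assert blk3.units == ['LS0', 'LS1', 'LS2', 'LS3']
```

In the Figure 31B variant, `phase3_relocate` would target the breakout block itself instead of a third block, which is only possible while the breakout block has received no unrelated writes in phase II.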
Figure 41A shows the threshold voltage distributions of a 4-state memory array when each memory cell stores two bits of data. The four distributions represent the populations of the four memory states 'U', 'X', 'Y' and 'Z'. Before a memory cell is programmed, it is first erased to its 'U' or 'unwritten' state. As the cell is progressively programmed, it advances through the memory states 'X', 'Y' and 'Z'.

Figure 41B shows an existing 2-pass programming scheme using Gray code. The four states can be represented by two bits, a lower-page bit and an upper-page bit, written as (upper-page bit, lower-page bit). For a page of cells to be programmed in parallel there are really two logical pages: a logical lower page and a logical upper page. A first programming pass programs only the logical lower page. With suitable coding, a subsequent second programming pass on the same page of cells programs the logical upper page without resetting the logical lower page. The commonly used code is Gray code, in which only one bit changes on a transition to an adjacent state. This code has the advantage of placing fewer demands on error correction, since only one bit is involved.

A common convention with Gray code is to let '1' represent the 'unprogrammed' condition. The erased memory state 'U' is thus represented by (upper-page bit, lower-page bit) = (1, 1). In the first pass, which programs the logical lower page, any cell storing data '0' therefore has its logical state transition from (x, 1) to (x, 0), where 'x' is the 'don't care' value of the upper bit. However, since the upper bit has not yet been programmed, 'x' may be labeled '1' for consistency. The (1, 0) logical state is represented by programming the cell to memory state 'X'. That is, before the second programming pass, a lower bit value of '0' is represented by memory state 'X'.

A second-pass programming stores the bits of the logical upper page. Only those cells requiring an upper-page bit of '0' are programmed. After the first pass, the cells of the page are in logical state (1, 1) or (1, 0). To preserve the lower-page values in the second pass, the lower bit values '0' and '1' must be distinguished. For the transition from (1, 0) to (0, 0), the memory cell in question is programmed to memory state 'Y'. For the transition from (1, 1) to (0, 1), the memory cell is programmed to memory state 'Z'. In this way, during a read, both the lower- and upper-page bits can be decoded by determining the memory state programmed in the cell.

The Gray-code 2-pass programming scheme becomes problematic, however, when the second-pass programming is erroneous. For example, programming the upper-page bit to '0' while the lower bit is '1' causes the transition from (1, 1) to (0, 1). This requires the memory cell to be progressively programmed from 'U' through 'X' and 'Y' to 'Z'. If a power interruption occurs before the programming completes, the cell ends up in one of the transition memory states, say 'X'. When the cell is read, 'X' is decoded as logical state (1, 0). This gives incorrect results for both the upper and lower bits, since the state should have been (0, 1). Similarly, if the programming is interrupted when 'Y' has been reached, the read corresponds to (0, 0); the upper bit is now correct, but the lower bit is still wrong.

It can thus be seen that a problem in programming the upper page can corrupt data already in the lower page. In particular, whenever the second-pass programming involves passing over an intermediate memory state, a program abort may leave the programming ended in that state, causing an incorrect lower-page bit to be decoded.

Figure 42 shows one way of safeguarding critical data, namely by storing each sector in duplicate. For example, sectors A, B, C and D may each be stored in duplicate copies. If data is corrupted in one copy of a sector, the other copy can be read instead.

Figure 43 illustrates the non-robustness with which duplicate sectors are typically stored in a multi-state memory. As described above, in the example 4-state memory a multi-state page really comprises a logical lower page and a logical upper page, programmed in two separate passes. In the example shown, the page is four sectors wide. Sector A and its duplicate are thus programmed together into the logical lower page, and likewise sector B and its duplicate. Then, in the subsequent second pass programming the logical upper page, sectors C, C are programmed together, and likewise sectors D, D. If a program abort occurs in the middle of programming C, C, the sectors A, A in the lower page are corrupted. Unless the lower-page sectors are read and buffered before the upper page is programmed, such corruption is unrecoverable. Thus, storing two copies of critical data, such as sectors A, A, simultaneously cannot protect them from being corrupted by a problematic later storing of sectors C, C in their upper page.

Figure 44A shows one embodiment of storing staggered duplicate copies of critical data in a multi-state memory. The lower page is stored essentially as in Figure 43, namely sectors A, A and B, B. In the upper-page programming, however, sectors C and D are interleaved with their duplicates as C, D, C, D. If partial-page programming is supported, the two copies of sector C can be programmed at the same time, and likewise the two copies of sector D. If the programming of the two copies of C is aborted, the lower page is corrupted only in one copy of sector A and one copy of sector B; the other copy of each remains unaffected. Thus, if the critical data stored in the first pass exists in two copies, both copies are not exposed to a subsequent simultaneous second-pass programming.

Figure 44B shows another embodiment, in which duplicate copies of critical data are stored only in the logical upper pages of a multi-state memory. In this case the data of the lower pages is not used. The critical data and its duplicate, such as sectors A, A and B, B, are stored in a logical upper page. In this way, if there is a program abort, the critical data can be rewritten to another logical upper page, while any corruption of lower-page data is immaterial. This approach essentially uses half the storage capacity of each multi-state page.

Figure 44C shows yet another embodiment, in which duplicate copies of critical data are stored in binary mode in a multi-state memory. Here each memory cell is programmed in binary mode, with its threshold range divided into only two regions. There is thus only a single programming pass, and programming can be restarted in a different location if a program abort occurs. This approach also uses half the storage capacity of each multi-state page. Operating a multi-state memory in binary mode is described in US Patent No. 6,456,528, the entire disclosure of which is incorporated herein by reference.

Figure 45 shows yet another embodiment, in which duplicate copies of critical data are written concurrently to two different metablocks. If one of the blocks becomes unavailable, the data can be read from the other. For example, the critical data is contained in sectors A, B, C, D and E, F, G, H and I, J, K, L. Each sector is stored in duplicate, and the two copies are written concurrently to two different blocks, block 0 and block 1. If one copy is written to a logical lower page, the other copy is written to a logical upper page. In this way there is always a copy programmed to a logical upper page. If a program abort occurs, it can be reprogrammed to another logical upper page; at the same time, if a lower page is corrupted, there is always another upper-page copy in the other block.

Figure 46B shows yet another embodiment, in which duplicate copies of critical data are stored concurrently using a fault-tolerant code. Figure 46A, which is the same as Figure 41A, shows the threshold voltage distributions of a 4-state memory array and is shown as a reference for Figure 46B. The fault-tolerant code essentially avoids any upper-page programming that transits through any intermediate state. Thus, in the first pass, the lower-page programming takes the logical state (1, 1) to (1, 0), represented by programming the erased memory state 'U' to 'Y'. In the second-pass programming of an upper-page bit of '0', if the lower-page bit is '1', the logical state (1, 1) transitions to (0, 1), represented by programming the erased state 'U' to 'X'; if the lower-page bit is '0', the logical state (1, 0) transitions to (0, 0), represented by programming memory state 'Y' to 'Z'. Since the upper-page programming only involves programming to the next adjacent memory state, a program abort cannot change the lower-page bit.

Serial write

The duplicate copies of critical data are preferably written concurrently, as described above. Another way of avoiding corrupting both copies at the same time is to write the copies sequentially. This method is slower, but the copies themselves indicate whether the programming succeeded when the controller checks both of them.

Figure 47 is a table showing the possible states of two copies of data, and the data validity.

If the first and second copies have no ECC error, the programming of the data can be regarded as fully successful. Valid data can be obtained from either copy.

If the first copy has no ECC error but the second copy does, the programming was interrupted in the middle of programming the second copy. The first copy contains valid data. The data of the second copy is unreliable, even if the error is correctable.

If the first copy has no ECC error and the second copy is blank (erased), the programming was interrupted after the first copy was completed but before the second was begun. The first copy contains valid data.

If the first copy has an ECC error and the second copy is blank (erased), the programming was interrupted in the middle of programming the first copy. The first copy may contain invalid data, even if the error is correctable.

To read data maintained in duplicate, the following technique is preferred because it exploits the existence of the duplicate copies: both copies are read and compared. In that case the states of the two copies shown in Figure 47 can be used to ensure there is no error misdetection.

In another embodiment, where the controller reads only one copy, for speed and simplicity the copy read preferably alternates between the two copies. For example, when the controller reads control data it may read, say, copy 1; the next control read (any control read) should then be from copy 2, then copy 1 again, and so on. In this way both copies are read and their integrity checked (ECC check) regularly. This reduces the risk of failing to detect in time errors caused by deteriorating data retention. For example, if only copy 1 were normally read, copy 2 could gradually deteriorate to the point where its errors can no longer be salvaged by ECC, and the second copy could no longer be used.

Pre-emptive data relocation

As described in connection with Figure 20, the block management system maintains a set of control data in flash memory during its operation. This set of control data is stored in metablocks in the same way as host data. The control data is therefore itself subject to block management, and is consequently subject to updates, and hence also to garbage collection operations.
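The two-copy validity rules tabulated in Figure 47 reduce to a small decision function. The sketch below is illustrative only; the state encoding ('ok', 'ecc_error', 'erased') is an assumption of the sketch, not the controller's actual representation.

```python
def valid_copy(first, second):
    """Decide which copy holds valid data, per the table of Figure 47.

    Each argument is the read status of one copy, written in the order the
    copies were programmed: 'ok' (no ECC error), 'ecc_error', or 'erased'
    (still blank).
    """
    if first == 'ok' and second == 'ok':
        return 'either'        # programming fully successful
    if first == 'ok':
        # abort during the second copy's programming ('ecc_error'),
        # or between the two copies ('erased'): first copy is valid
        return 'first'
    if first == 'ecc_error' and second == 'erased':
        # abort in the middle of the first copy: its data is invalid
        # even if the ECC error happens to be correctable
        return 'none'
    return 'none'

assert valid_copy('ok', 'ok') == 'either'
assert valid_copy('ok', 'ecc_error') == 'first'
assert valid_copy('ok', 'erased') == 'first'
assert valid_copy('ecc_error', 'erased') == 'none'
```

Reading and comparing both copies, as the text recommends, supplies exactly the two inputs this function needs while also guarding against misdetection (both copies reading 'ok' but differing in content).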
In this case, 'may be violated—the write latency limit specified for a given host/memory device, which is typically designed to accommodate 98680. Doc -86 - 1272487 Non-two) waste project collection operations. Figure 28 shows a scheme in which a defective block repeats the summary operation on another block when a program failure occurs during the summary operation. In this example, block 1 is the original block that stores the complete logical unit of the logical group in a logically sequential order. For ease of illustration, the original block contains sections A, b, c, and D, each of which stores a logical unit of a subgroup. When the host updates the special logical unit of the group, a newer version of the logical unit is recorded in the update block, block 2. As previously described in connection with the update block, depending on the host, this update can record logical units in a sequential or non-sequential (chaotic) order. Finally, the update block is closed for the progress update because the update block is full or for some other reason. When the update block (block 2) is closed, the current version of the logical unit resident on the update block or the original block (block 丨) is summarized on the new block (block 3) to form The new original block of the logical group. This example shows that the update block contains a newer version of the logical unit in sections B and D. For convenience, the sections B and D are displayed in the block 2 not necessarily at the position where they are recorded, but are aligned with their original positions in the block 1. In the summary operation, the current version of all logical units originally resident in the logical group of block 1 is recorded in the summary block (block 3) in sequential order. Therefore, the logical unit of the sector A is copied from the block i to the block 3, and then the block B is copied from the block 2 to the block 3. 
In this example, a defect in block 3 causes a program failure when the logical units of section C are being copied from block 1 to block 3. One way of handling the program failure is to restart the consolidation process on a brand new block (block 4): sections A, B, C and D are copied to block 4, and the defective block 3 is discarded. However, this amounts to performing the consolidation operation twice, copying as many as two blocks' worth of logical units.

A memory device has a specific time allowance for completing a given operation. For example, when a host writes to a memory device, it expects the write operation to be completed within a specified time, known as the write latency. While the memory device, such as a memory card, is busy writing the host's data, it signals a BUSY state to the host. If the BUSY state lasts longer than the write latency allows, the host may abort the write operation and register an exception or error against it.

Figure 29 illustrates schematically a host write operation with a timing, or write latency, that allows enough time either for a write (update) operation, or for a write operation together with a consolidation operation. The host write operation has a write latency Tw that provides sufficient time for the completion of a write of update data to the update block (Figure 29(A)). As described above for the block management system, a host write to an update block may also trigger a consolidation operation; the timing therefore also allows for a consolidation operation in addition to the update operation (Figure 29(B)). However, having to restart a consolidation in response to a failed one would take too much time and exceed the specified write latency.

According to another aspect of the invention, in a memory with a block management system, a program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block before the interruption can be transferred to another block, which may also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding the specified time limit through having to transfer the data already stored in the defective block on the spot. This error handling is especially critical for a garbage collection operation, so that the entire operation need not be repeated on a brand new block during a critical time. Subsequently, at an opportune time, the data of the defective block can be salvaged by relocating it to another block.

Figure 30 is a flow chart illustrating program failure handling according to a general scheme of the invention.

Step 1002: Organize a non-volatile memory into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.

Program failure handling (Phase I)

Step 1012: Store a sequence of logical units of data into a first block.

Step 1014: In response to a storing failure at the first block after storing a number of the logical units, store subsequent logical units into a second block serving as a breakout block for the first block.

Program failure handling (final phase)

Step 1020: In response to a predefined event, transfer the logical units stored in the first block to a third block, where the third block may be the same as or distinct from the second block.

Step 1022: Discard the first block.

Figure 31A illustrates one embodiment of program failure handling in which the third (final relocation) block is distinct from the second (breakout) block. During Phase I, a sequence of logical units is recorded on the first block. If the logical units come from a host write, the first block can be regarded as an update block.
If the logical units come from a consolidation during a compaction operation, the first block can be regarded as a relocation block. If at some point a program failure occurs in block 1, a second block is provided as a breakout block. The logical unit that failed to be recorded in block 1, and any subsequent logical units, are recorded on the breakout block instead. In this way, no extra time is needed to replace the failed block 1 and the data residing on it.

In the intermediate Phase II, all the recorded logical units of the sequence are obtainable between block 1 and block 2.

In the final Phase III, the logical units are relocated to block 3, which serves as a relocation block, replacing the failed block 1 and the data residing on it; the data in the failed block is thereby salvaged. The failed block is then discarded. The final phase is scheduled so that it does not conflict with the timing of any concurrent memory operation. In this embodiment, the relocation block 3 is distinct from the breakout block 2. This is convenient when the breakout block has been recorded with additional logical units during the intermediate phase: the breakout block has then become an update block and may be unsuitable for receiving the relocated logical units of the defective block 1.

Figure 31B illustrates another embodiment of program failure handling, in which the third (final relocation) block is the same as the second (breakout) block. Phases I and II are the same as in the first embodiment shown in Figure 31A. In Phase III, however, the logical units of the failed block 1 are relocated to the breakout block 2. This is convenient when no additional logical units outside the original sequence of the earlier write operation have been recorded on the breakout block 2. In this way, the number of blocks needed to store the logical units in question is minimal.
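The phases above (on failure, continue recording in a breakout block; relocate later at leisure) can be sketched in miniature. This is a hypothetical illustration of the scheme, not the patented controller firmware; the `Block`, `phased_write` and `relocate` names are invented:

```python
# Hypothetical sketch of phased program-failure handling.

class Block:
    def __init__(self, size, bad_pages=()):
        self.pages = [None] * size
        self.bad_pages = set(bad_pages)   # pages that fail to program

    def program(self, page, data):
        if page in self.bad_pages:
            return False                  # program failure reported by the memory
        self.pages[page] = data
        return True

def phased_write(units, block1, breakout):
    """Phase I: on a program failure in block1, continue in the breakout block
    with the failed logical unit, without any time-costly retry or recopy."""
    active, page = block1, 0
    for unit in units:
        if not active.program(page, unit):
            active, page = breakout, 0    # switch to the breakout block
            active.program(page, unit)    # re-record the failed unit there
        page += 1
    return active

def relocate(block1, breakout, block3):
    """Final phase (deferred): gather the units salvaged from the failed block
    and the breakout block into block3; block1 can then be discarded."""
    units = [u for u in block1.pages + breakout.pages if u is not None]
    for page, unit in enumerate(units):
        block3.program(page, unit)
    return block3
```

In the Figure 31B variant, the relocation target would simply be the breakout block itself rather than a third block.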
Embodiment of program failure handling during consolidation

Program failure handling is especially important during a consolidation operation. A normal consolidation operation consolidates into a consolidation block the current versions of all the logical units of a logical group residing among the original block and the update block. During the consolidation operation, if a program failure occurs in the consolidation block, another block, serving as a breakout consolidation block, is provided to receive the remaining logical units to be consolidated. In this way, no logical unit need be copied more than once, and the operation, including the exception handling, can still be completed within the period specified for a normal consolidation operation. At an opportune time, the consolidation can be completed by consolidating all the outstanding logical units of the group into the breakout block. The opportune time will be during some other period, outside the current host write, when there is time to perform the consolidation. One such opportune time is during another host write that involves an update but no associated consolidation operation.

Essentially, the consolidation with program failure handling can be regarded as multi-phase. In the first phase, after a program failure occurs, the logical units are consolidated into more than one block in order to avoid consolidating any logical unit more than once. The final phase is completed at an opportune time, when the logical group is consolidated into one block, preferably by collecting all the logical units into the breakout consolidation block in sequential order.

Figure 32A is a flow chart of the initial update operation that results in a consolidation operation.

Step 1102: Organize a non-volatile memory into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.

Step 1104: Organize data into a plurality of logical groups, each logical group being a group of logical units storable in a block.
Step 1112: Receive host data packaged in logical units.

Step 1114: Store, according to a first order, a first version of the logical units of a logical group in a first block, thereby establishing the original block of the logical group.

Step 1116: Store, according to a second order, a second version of logical units of the logical group in a second block, thereby establishing the update block of the logical group.

Step 1119: At a predefined event, as in the garbage collection described above, collect the current versions of the logical units from among the various blocks and re-record them into a new block.

Figure 32B is a flow chart of a multi-phase consolidation operation according to a preferred embodiment of the invention.

Consolidation failure handling (Phase I)

The Phase I consolidation operation 1120 comprises steps 1122 and 1124.

Step 1122: Store the current versions of the logical units of the logical group in a third block, in the same order as the first order, thereby creating the consolidation block of the logical group.

Step 1124: In response to a storing failure at the consolidation block, store the logical units of the logical group that are absent from the third block into a fourth block, in the same order as the first order, to provide a breakout consolidation block.

Since the data of blocks 1 and 2 has been transferred to blocks 3 and 4, blocks 1 and 2 can be erased to free up space. In the preferred embodiment, block 2 can be released immediately to the EBL (erased block list, see Figure 18). Block 1 can be released only on the condition that it is a closed update block and that there is another block pointed to by the corresponding GAT entry.

In effect, block 3 becomes the original block of the logical group, and block 4 becomes a replacement sequential update block for block 3. After Phase I has been completed, the memory device signals the host by releasing the BUSY signal.

Intermediate operation (Phase II)

Phase II, the intermediate operation 1130, may take place before the Phase III final consolidation operation 1140. A number of scenarios are possible, as indicated by any one of steps 1132, 1134 and 1136.

Step 1132: Either, in a write operation of the logical group, write to the fourth block (the breakout consolidation block) as the update block of the group.

If the host writes to the logical group in question, block 4, which is the breakout consolidation block and has hitherto been a replacement sequential update block, is used as a normal update block. Depending on the host writes, it may remain sequential or be turned chaotic. As an update block, it will at some point trigger the closing of another chaotic block, as described in the preceding preferred embodiments. If the host writes to another logical group, operation proceeds directly to Phase III.

Step 1134: Or, in a read operation, read the memory with the third block as the original block of the logical group and the fourth block as its update block.

In that case, the logical units of sections A and B are read from block 3, the original block of the logical group, and the logical units of sections C and D are read from block 4, the update block of the group. Since only sections A and B are readable from block 3, the page whose programming failed is not accessed, nor is the unwritten portion that follows it. Although the GAT directory in flash memory has not yet been updated and still points to block 1 as the original block, no data is read from that block, and the block itself will in due course be erased.

Another possibility is for the host to read the logical units of the logical group.
In that case, the logical units of sections A and B are read from block 3, now serving as the original block of the logical group, and the logical units of sections C and D are read from block 4, now serving as the update block of the group.

Step 1136: Or, upon power-up initialization, re-identify any of the first through fourth blocks by scanning their contents.

Another possibility in the intermediate phase is that the memory device is powered down and then restarted. As described above, during power-up initialization the blocks in the allocation block list (the blocks to be used from the erase pool, see Figures 15 and 18) are scanned to identify the defective consolidation block (block 3), which has become a special-status original block of the logical group, together with the associated replacement sequential update block (block 4). A flag in the first logical unit of the breakout block (block 4) indicates that its associated block is an original block that has suffered a program error (block 3); block 3 can then be located through the block directory (GAT).

In one embodiment, the flag is programmed into the first logical unit of the breakout consolidation block (block 4). This helps to indicate the special status of the logical group, namely that it has been consolidated into two blocks, block 3 and block 4.

An alternative to using a flag to identify a logical group with a defective block is to exploit the feature that the defective block is not as full as an original block should be (unless the failure occurred on the last page, and the page following the failed one has no ECC error): the block can then be detected as a defective block. Also, depending on the embodiment, a record of information about the failed group/block may be kept in a control data structure stored in flash memory, rather than being written as a flag in the header area of the first sector of the breakout consolidation block (block 4).

Consolidation completion (Phase III)

Step 1142: In response to a predefined event: for the first case, in which the fourth block has not been further recorded since Phase I, consolidate into it, in the same order as the first order, the current versions of all outstanding logical units of the logical group; for the second case, in which the fourth block has been further recorded since Phase I, consolidate the third and fourth blocks into a fifth block.

Step 1144: Thereafter: for the first case, operate the memory with the consolidated fourth block as the original block of the logical group; for the second case, operate the memory with the fifth block as the original block of the logical group.

The final consolidation of Phase III may be performed whenever there is an opportunity that will not violate any specified time limit. Preferably it is performed at a next host-write time slot in which there is an update operation on some other logical group without an accompanying consolidation operation. If the host write to another logical group itself triggers a garbage collection, the Phase III consolidation is deferred.

Figure 33 shows an example timing of the first and final phases of a multi-phase consolidation operation. The host write latency is the width of each host-write time slot, of duration Tw. Host write 1 is a simple update in which the current versions of a first set of logical units of logical group LG1 are recorded on its associated update block. At host write 2, an update on logical group LG1 causes its update block to be closed (e.g., because it is full), and a new update block is provided to record the remaining updates. Providing the new update block triggers a garbage collection, which results in a consolidation operation on LG4 so that its blocks can be recycled for reuse. The current logical units of group LG4 are recorded onto a consolidation block in sequential order. The consolidation operation proceeds until a defect is encountered in the consolidation block. The Phase I consolidation is then invoked, and the consolidation continues on a breakout consolidation block; meanwhile, the final consolidation of LG4 (Phase III) awaits the next opportunity. At host write 3, a write to a logical unit of logical group LG2 occurs that also triggers a consolidation for LG2, so the time slot is fully used. At host write 4, the operation merely records some logical units of LG2 to its update block; the time remaining in the slot provides the opportunity to perform the final consolidation of LG4.

Example in which the breakout consolidation block is not used as an update block

Figures 34A and 34B illustrate, respectively, a first case of the Phase I and Phase III operations of the multi-phase consolidation, applied to the examples of Figures 28 and 33.

Figure 34A illustrates the case in which the breakout consolidation block is used not as an update block but as a consolidation block whose consolidation operation has been interrupted. In particular, Figure 34A refers to host write 2 shown in Figure 33, in which the host writes updates of logical units belonging to logical group LG1 and, in the course of doing so, also triggers a consolidation of the blocks associated with another logical group, LG4.

The original block (block 1) and the update block (block 2) are formed in the same manner as in the example of Figure 28. Similarly, during the consolidation operation, the consolidation block (block 3) is found to be defective while the logical units of section C are being consolidated. Unlike the re-consolidation scheme of Figure 28, however, the present multi-phase scheme continues the consolidation operation on a newly provided block (block 4) serving as a breakout consolidation block. Thus, in the Phase I consolidation, the logical units of sections A and B have already been consolidated into the consolidation block (block 3); when the program failure occurs in the consolidation block, the remaining logical units of sections C and D are copied sequentially into the breakout consolidation block (block 4).

If the host write that triggered the consolidation of the blocks associated with a second logical group was originally an update in a first logical group, the updates of the first logical group are recorded to an update block of the first logical group (typically a new update block). The breakout consolidation block (block 4) is then not used to record any update data outside the consolidation operation, and remains a breakout consolidation block whose consolidation must still be completed.

Since the data of blocks 1 and 2 is now fully contained in other blocks (blocks 3 and 4), blocks 1 and 2 can be erased for recycling. The address table (GAT) is updated to point to block 3 as the original block of the logical group. The directory information for the update block (in the ABL, see Figures 15 and 18) is also updated to point to block 4, which has become the sequential update block of the logical group (e.g., LG4).

As a result, the consolidated logical group is not confined to one block but is distributed over the defective consolidation block (block 3) and the breakout consolidation block (block 4). An important feature of this scheme is that the logical units of the group are consolidated only once during this phase, at the cost of the consolidation being spread over more than one block. In this way, the consolidation operation can be completed within the normally specified time.

Figure 34B illustrates the third and final phase of the multi-phase consolidation begun in Figure 34A. As described in connection with Figure 33, the Phase III consolidation is executed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation. In particular, Figure 34B refers to the time slot in which host write 4 of Figure 33 takes place.

During that period, the host write updates logical units belonging to logical group LG2 without triggering another, additional consolidation. The time remaining in the slot can therefore advantageously be used for the Phase III operation, completing the consolidation of logical group LG4. This operation consolidates into the breakout block all outstanding logical units of LG4 that are not already in it. In this example, this means that sections A and B are copied from block 3 into the breakout block (block 4), in logically sequential order. Owing to the wrap-around scheme for logical units in a block and the use of a page tag (see Figure 3A), even though the example shows sections A and B recorded in block 4 after sections C and D, the recorded sequence is treated as equivalent to the sequential order A, B, C, D. Depending on the embodiment, the current versions of the outstanding logical units to be copied are preferably obtained from block 3, since they are already in consolidated form there, but they may also be collected from blocks 1 and 2 if those blocks have not yet been erased.

After the final consolidation is completed on the breakout block (block 4), it is designated as the original block of the logical group and the appropriate directory (e.g., GAT, see Figure 17A) is updated accordingly. Likewise, the failed physical block (block 3) is marked bad and mapped out. The other blocks, block 1 and block 2, are erased and recycled. Meanwhile, the updates of LG2 are recorded in the update block associated with LG2.

Example in which the breakout consolidation block becomes an update block

Figures 35A and 35B illustrate, respectively, a second case of the Phase I and Phase III operations of the multi-phase consolidation, applied to the examples of Figures 28 and 33.
Figure 35A illustrates the case in which the breakout consolidation block is maintained as an update block receiving host writes, rather than as a consolidation block. This applies to a host write that, say, updates logical group LG4 and in so doing also triggers a consolidation of the same logical group.

As in the case of Figure 34A, the consolidation of blocks 1 and 2 into block 3 proceeds until the program failure occurs while section C is being processed. The consolidation then continues on the breakout consolidation block (block 4). After the outstanding logical units (e.g., those of sections C and D) have been consolidated into the breakout block (block 4), the scheme does not wait for Phase III to complete the consolidation of the logical group; instead, the breakout block is maintained as an update block. This case is particularly suitable for the situation in which a host write updates a logical group and thereby triggers a consolidation of that same logical group. In this example, this allows the record of the host updates of logical group LG4 to be made in the breakout consolidation block (block 4) instead of in a new update block. This update block (formerly the breakout consolidation block, block 4) may remain sequential or be turned chaotic, depending on the host data recorded in it. In the example shown, block 4 has become chaotic because a subsequent, newer version of a logical unit of section C renders the earlier version in block 4 obsolete. During the intermediate phase, block 3 is treated as the original block of LG4, and block 4 is its associated update block.

Figure 35B illustrates the third and final phase of the multi-phase consolidation begun in Figure 35A, for this second case. As described in connection with Figure 33, the Phase III consolidation is performed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation.
During that period, the host write updates logical units belonging to a logical group (LG2) without triggering another, additional consolidation. The time remaining in the slot can therefore advantageously be used for the Phase III operation, completing the consolidation of logical group LG4. The outstanding logical units of logical group LG4 are then garbage-collected from block 3 and block 4 into a new consolidation block (block 5). Block 3 is then marked bad, block 4 is recycled, and the new consolidation block (block 5) becomes the new original block of logical group LG4. The other blocks, block 1 and block 2, are likewise erased and recycled.

Other embodiments of phased program failure handling

The examples described in Figures 31A, 31B, 34A, 34B, 35A and 35B apply to a preferred block management system in which each physical block (metablock) stores only logical units belonging to the same logical group. The invention is equally applicable to block management systems with little or no alignment of logical groups to physical blocks, such as those disclosed in WO 03/027828 and WO 00/49488. Examples of implementing the phased program failure handling method in such other systems are shown in Figures 36A, 36B and 36C.

Figure 36A shows the phased program error handling applied to the case in which a host write triggers the closure of an update block, the update block being sequential. The closure in this example is accomplished by copying the remaining valid data (B and C) from the original block 2 to the sequential update block 3. If the programming fails at the start of the write of data portion C, portion C is programmed to a reserved block 4. New host data can then be written to a new update block 5 (not shown). Phases II and III of the method proceed as in the closed-block case.

Figure 36B shows the phased program error handling applied, in a partial-blocks system, to the case of an update of an update block. In this case, the logical group is stored in the original block 1 and in other update blocks. The consolidation operation includes copying the data of the original block 1 and of the other update block(s) 2 to one of the update blocks (block 3 in the figure, chosen according to some rule). The main difference from the case already described is that block 3 has already been partially written.

Figure 36C shows the phased program error handling applied to a garbage collection operation, or cleanup, in a memory block management system that does not support logical groups mapped to metablocks. Such a memory block management (cyclic storage) system is described in WO 03/027828 A1. A distinctive feature of the cyclic storage system is that it does not allocate blocks to single logical groups; it supports multiple logical groups of control data within a metablock. Garbage collection involves taking valid data sectors, which may bear no relation to one another (random logical block addresses), from a partially obsolete block to a relocation block that may already contain some data. If the relocation block becomes full during the operation, another one is opened.

Non-sequential update block indexing

As discussed in the earlier section on chaotic block indexing, and in connection with Figures 16A-16E, a CBI sector can be used to store an index keeping track of the logical sectors stored at random in a chaotic or non-sequential update block. According to another aspect of the invention, in a non-volatile memory with a block management system supporting update blocks with non-sequential logical units, an index of the logical units of a non-sequential update block, buffered in RAM, is stored periodically in the non-volatile memory.
In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next index update have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the locations of the recently written logical units can be determined during initialization without having to perform a scan. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.

Index pointers in CBI sectors stored in a CBI block after a predefined triggering event

According to the scheme described in connection with Figures 16A-16E, a list of the sectors recently written to a chaotic block is held in controller RAM. A CBI sector containing the latest index information is written to flash memory (CBI block 620) only after a predetermined number of writes to the logical group associated with the given chaotic block. In this way, the number of CBI block updates is reduced. Until the next update of the CBI sector for a logical group, the list of its recently written sectors is held in controller RAM. The list is lost if the memory device is powered down, but it can be rebuilt by scanning the update blocks during initialization after power-up.

Figure 37 shows an example of the timing for writing a CBI sector to an associated chaotic index sector block after every N sector writes of the same logical group. The example shows two logical groups, LG3 and LG11, undergoing concurrent updates. Initially, the logical sectors of LG3 were stored in sequential order in its original block.
Updates of the logical sectors of the group are recorded in the associated update block in the order dictated by the host; the example shows a chaotic update sequence. Concurrently, logical group LG11 is updated in the same manner at its own update block. After every logical sector write, its location in the update block is retained in controller RAM. After every predefined triggering event, the current index of the logical sectors in the update block is written, in the form of a chaotic index sector, to the non-volatile chaotic index sector block. For example, the predefined triggering event may occur after every N writes, where N is, say, 3.

While the examples given refer to the logical unit of data as a sector, those skilled in the art will recognize that the logical unit may be some other aggregate, such as a page containing a sector or a group of sectors. Also, the first page of a sequential block is not necessarily logical page 0, since a wrap-around page tag may be in place.

Index pointers in CBI sectors stored in the chaotic update block after a predefined triggering event

In another embodiment, the index pointer is stored in a CBI sector in the chaotic update block itself after every N writes. This scheme is similar to the embodiment above in which the index is also stored in a CBI sector; the difference is that there, the CBI sector is recorded in a CBI sector block and not in the update block itself. The present method is based on keeping all the chaotic block indexing information in the chaotic update block itself.

Figures 38A, 38B and 38C respectively illustrate, at three different stages, the state of an update block that also stores CBI sectors. Figure 38A shows an update block into which a CBI sector is recorded after a predetermined number of writes.
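The deferred-indexing idea common to both embodiments, keeping the index of recent writes in controller RAM and committing a CBI sector only every N writes, can be sketched as follows. This is a simplified, hypothetical model (the `ChaoticIndex` class and its method names are invented for illustration):

```python
# Sketch of deferred chaotic-block indexing: the RAM list is flushed to a
# CBI sector in flash only every n_flush host writes, and after a power loss
# it is rebuilt by scanning only the tail written since the last CBI sector.

class ChaoticIndex:
    def __init__(self, n_flush=3):
        self.n_flush = n_flush
        self.flash_index = {}     # last CBI sector committed to flash
        self.flushed_upto = 0     # write position covered by that CBI sector
        self.ram_index = {}       # writes since then; lost on power-down
        self.update_block = []    # (logical_sector, data) in write order

    def host_write(self, logical_sector, data):
        self.update_block.append((logical_sector, data))
        self.ram_index[logical_sector] = len(self.update_block) - 1
        if len(self.update_block) - self.flushed_upto == self.n_flush:
            self.flash_index.update(self.ram_index)   # write a CBI sector
            self.flushed_upto = len(self.update_block)
            self.ram_index.clear()

    def lookup(self, logical_sector):
        if logical_sector in self.ram_index:
            return self.ram_index[logical_sector]
        return self.flash_index.get(logical_sector)

    def rebuild_after_power_loss(self):
        """The RAM list is lost; rescan the update-block tail to rebuild it."""
        self.ram_index = {}
        for pos in range(self.flushed_upto, len(self.update_block)):
            sector, _ = self.update_block[pos]
            self.ram_index[sector] = pos
```

The same structure serves whether the CBI sector goes to a dedicated CBI block or into the update block itself; only the destination of the flush differs.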
In this example, after the host has written logical sectors 0-3 sequentially, it issues a command to write another version of logical sector 1, thereby breaking the sequential order of the data writes. The update block is then converted to a chaotic update block with the implementation of a chaotic block index carried in a CBI sector. As mentioned above, the CBI is an index that contains an entry for every logical sector of the chaotic block; for example, the 0th entry gives the offset in the update block of the 0th logical sector and, likewise, the nth entry gives the offset of the nth logical sector. The CBI sector is written at the next available location in the update block. To avoid frequent flash accesses, the CBI sector is written only after every N data-sector writes; in this example, N is 4. If power is lost at this point, the last written sector is a CBI sector, and the block is treated as a chaotic update block.

Figure 38B shows the update block of Figure 38A with logical sectors 1, 2 and 4 recorded after the index sector. The newer versions of logical sectors 1 and 2 supersede the older versions previously recorded in the update block. If the power is cycled at this point, the last written sector must be found first, and then up to N sectors must be scanned to find the last written index sector and the most recently written data sectors.

Figure 38C shows the update block of Figure 38B with the write of another logical sector triggering the next recording of an index sector. After another N (= 4) sector writes, a further current version of the CBI sector is recorded in the same update block.

An advantage of this scheme is that no separate CBI block is needed.
At the same time, there is no need to worry about whether the additional data area of the physical flash zone is large enough to accommodate the number of items required for the index of the valid section in the chaotic update block. The chaotic update block contains all the information, and the address translation does not require external data. This makes the algorithm simpler, which reduces the number of control updates for CBI block compression; it also makes the concatenation control update shorter. (See above for section (10) Block Management). ::There is a tribute to the most recent write section of the data section header in the chaotic update block: On the other hand, after the fine write, the index will be recorded in the A 单元 单元 unit. The current information stored in the non-volatile memory and the logical unit of the middle writer is stored in the additional item written in each logical unit. "In this way, after the power is restarted, it is not necessary to sweep the ghost" It can be written from the section of the port and then added to the additional part of the file. The information about the logical unit written after the index of the previous one is updated. The θ display is stored in the chaotic update area. An intermediate index written between the headers of each data section in the block, 98680, doc -104 - 1272487. Figure 39B shows an example of storing an intermediate index of intermediate writes in each section header of the writer. After writing four segments ls〇_lS3, the C index is written as the next _ segment in the block. Thereafter, the logical (four) segments LS, LS, 2, and Lb are written. Block. Each time, the header stores the index of the wipes of the logical unit of the writer since the last CBI index. The header in the call will have an index that provides the displacement (ie, position) of the upper CBI index and the LSS. Similarly, the header in 'ls4 will have the upper-cm index and 1^'1 and 1^. 
The last written data sector therefore always contains information about up to N of the most recently written sectors (i.e., those written since the last CBI sector). On initialization, the last CBI index supplies the index information for the logical units written before that CBI index sector, and the index information for the logical units written afterwards can be found in the header of the last written data sector. This has the advantage that, on initialization, the block need not be scanned to determine the positions of the subsequently written sectors. This scheme of storing intermediate index information in the data-sector headers applies equally whether the CBI index sector is stored in the update block itself or in a separate CBI sector block, as described above.

Index pointers stored in the headers of the data sectors of the chaotic update block: In another embodiment, the entire CBI index is stored in the overhead portion of each data sector of the chaotic update block. Figure 40 shows the chaotic-index information stored in the header of each data sector of the chaotic update block.

A sector header has limited information capacity, so the index range provided by any single sector may be designed as part of a hierarchical indexing scheme. For example, sectors within a particular plane of the memory may provide an index only to sectors within that plane. Also, the range of logical addresses can be divided into sub-ranges to allow an indirect indexing scheme. For example, if sectors with 64 logical addresses can be stored in a plane, each sector may have three fields for sector offset values, each field able to store four offset values. The first field defines the physical offset of the last written sector within each of the logical offset ranges 0-15, 16-31, 32-47 and 48-63.
The second field defines the physical offset values of the last written sectors of four sub-ranges of four sectors each within its associated range. The third field defines the physical offset values of the four individual sectors within its associated sub-range. By reading the indirect offset values of at most three sectors, the physical offset of any logical sector within the chaotic update block can therefore be determined.

The advantage of this scheme is that neither a separate CBI block nor a separate CBI sector is needed. It is only applicable, however, when the overhead data area of the physical flash sector is large enough to hold the number of entries needed to index the valid sectors of the chaotic update block.

Limited logical range within a logical group of a chaotic update block: Within a logical group, the logical range of sectors that may be written out of order can be reduced. The main advantages of this technique are as follows. Since only one multi-sector page need be read (in the multi-chip case, the pages can be read in parallel), all the data for a destination page can be obtained (assuming the source and destination are aligned; if they are not, another read is needed), so the copying of sequentially written data can be completed more quickly. Sectors outside the limited range remain in the original block, and the garbage collection operation can be performed in a shorter time. Also, an on-chip copy feature can be used to copy data from source to destination without transferring the data back and forth through the controller. If the source data is scattered, as happens in a chaotic block, up to one page per sector may have to be read in order to collect all the sectors to be written to the destination.
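The indirect indexing described above (64 logical sectors resolved through three 4-entry fields, reading at most three sectors) can be simulated as follows. The data structures and names are illustrative assumptions; a real implementation would keep the fields in the sector overhead areas, and the controller's RAM view (`_shadow`) is used only at write time.

```python
# Hypothetical simulation of the hierarchical indirect index: 64 logical
# sectors split as 4 ranges x 4 sub-ranges x 4 sectors, resolvable by
# reading at most three sector headers.

class HierarchicalIndex:
    def __init__(self):
        self.block = []                   # physical sectors with header fields
        self.last_in_range = [None] * 4   # top-level field (first field)
        self._shadow = {}                 # controller RAM view, write-time only

    def write(self, logical):
        r, s, p = logical // 16, (logical % 16) // 4, logical % 4
        offset = len(self.block)
        self._shadow[logical] = offset
        base = logical - p - 4 * s        # first logical sector of range r
        # second field: last-written offsets of the 4 sub-ranges in range r
        field2 = [self._last_written(base + 4 * i) for i in range(4)]
        sub_base = logical - p            # first logical sector of sub-range s
        # third field: offsets of the 4 individual sectors in sub-range s
        field3 = [self._shadow.get(sub_base + i) for i in range(4)]
        self.block.append({"logical": logical, "field2": field2, "field3": field3})
        self.last_in_range[r] = offset

    def _last_written(self, base):
        offsets = [self._shadow[base + i] for i in range(4) if base + i in self._shadow]
        return max(offsets) if offsets else None

    def lookup(self, logical):
        """Resolve a logical sector with at most three sector reads."""
        r, s, p = logical // 16, (logical % 16) // 4, logical % 4
        first = self.last_in_range[r]
        if first is None:
            return None
        second = self.block[first]["field2"][s]
        if second is None:
            return None
        return self.block[second]["field3"][p]
```

Because every write within a range refreshes that range's fields, the most recently written sector of a range always carries up-to-date second and third fields, which is what makes the three-hop resolution sound.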
In one embodiment, rather than actually limiting the logical range to a certain number of sectors, the limit is applied by restricting the number of CBIs (it is only reasonable to limit the chaos of large groups/metablocks, since multiple chaotic block indexes would be needed to cover the entire logical-group range). For example, if a metablock/group has 2048 sectors, up to 8 CBI sectors would be required, each covering the contiguous logical range of one 256-sector sub-group. If the number of CBIs is limited to 4, the chaotic block can be used to write the sectors of up to 4 sub-groups (any of them). Thus the logical group is allowed to have up to 4 partially or fully chaotic sub-groups, and at least 4 sub-groups will remain fully sequential. If a chaotic block has 4 valid CBI sectors associated with it and the host writes a sector outside the ranges of those CBI sectors (chaotic sub-groups), the chaotic logical group should be consolidated and closed. This is highly unlikely, however, because in real applications the host rarely needs more than 4 chaotic ranges (sub-groups) of 256 sectors within a 2048-sector range (logical group). As a result, garbage collection time is unaffected in normal circumstances, but the limiting rule guards against the extreme case of an overly long garbage collection that would trigger a host timeout.

Partially sequential chaotic update block indexing: When a sequential update block has been partially written before the block is converted to chaotic management mode, all or part of the sequentially updated sectors of the logical group may continue to be treated as sequentially updated, with chaotic update management applied to only a subset of the logical group's address range.

Control data integrity and management: Data stored in the memory device may become corrupted because of a power interruption or because a particular memory location becomes defective.
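The CBI-count limit described above (a 2048-sector logical group, 256-sector sub-groups, at most 4 chaotic sub-groups) reduces to a simple admission check, sketched here with illustrative names and constants taken from the example in the text.

```python
# Hypothetical sketch of the CBI-count limit: a 2048-sector logical group is
# covered by 256-sector sub-groups; at most MAX_CBI = 4 sub-groups may become
# chaotic before the chaotic logical group must be consolidated and closed.

GROUP_SECTORS, SUBGROUP_SECTORS, MAX_CBI = 2048, 256, 4

def chaotic_write_allowed(active_subgroups, logical_sector):
    """Return (allowed, updated_set). A disallowed write means the chaotic
    group has to be closed before the write can proceed."""
    sub = logical_sector // SUBGROUP_SECTORS
    if sub in active_subgroups or len(active_subgroups) < MAX_CBI:
        return True, active_subgroups | {sub}
    return False, active_subgroups
```

The check makes the bound explicit: chaotic writes are free within already-active sub-groups, and only the fifth distinct sub-group forces a consolidation.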
If a memory block defect is encountered, the data is relocated to a different block and the defective block is discarded. If the error is not extensive, it can be corrected on the fly by the error correction code (ECC) stored with the data. There will still be times, however, when the ECC cannot correct the corrupted data, for example when the number of error bits exceeds its capacity. This is unacceptable for critical data such as the control data associated with the memory block management system.

Examples of control data are the directory information and block allocation information of the memory block management system, as described in connection with Figure 20. As mentioned above, control data is maintained both in high-speed RAM and in the slower non-volatile memory blocks. Frequently changing control data is maintained in RAM, with periodic control writes to update the equivalent information stored in the non-volatile metablocks. In this way the control data is stored in the non-volatile but slower flash memory without the need for frequent access. A hierarchy of control data structures, such as GAT, CBI and MAP shown in Figure 20, is maintained in the flash memory.

Critical data duplication: According to another aspect of the invention, critical data such as some or all of the control data is guaranteed an extra level of reliability if it is maintained in duplicate. The duplication is performed in such a way that, for memory systems employing a multi-bit programming scheme in which the same set of memory cells is programmed in multiple passes, any programming error in a later pass will not corrupt the data established by an earlier pass. Duplication also helps with the detection of write aborts and of misdetections (i.e., both copies have good ECC but the data differ), and it adds an extra level of reliability.
Several techniques for data duplication are contemplated. In one embodiment, after two copies of a given data item have been programmed in an earlier programming pass, a subsequent pass avoids programming the memory cells storing at least one of the two copies. In this way, if the subsequent pass is aborted before completion and corrupts the data of the earlier pass, at least one of the two copies remains unaffected.

In another embodiment, the two copies of a given data item are stored in two different blocks, and at most one of the two copies has its memory cells programmed in a subsequent programming pass.

In yet another embodiment, after two copies of a given data item have been stored in a programming pass, no further programming is performed on the set of memory cells storing the two copies. This is achieved by programming the two copies in a final programming pass for that set of memory cells.

In yet another embodiment, the two copies of a given data item are programmed into a multi-state memory in binary programming mode, so that the programmed memory cells will not undergo any further programming.

In yet another embodiment, for memory systems employing a multi-pass programming scheme that programs multiple bits into the same set of memory cells in successive passes, a fault-tolerant code is used to encode the multiple memory states so that data established by an earlier programming pass is unaffected by errors in later passes.

A complication of data duplication arises in multi-state memories, in which each memory cell can store more than one bit of data.
For example, a 4-state memory can be represented by two bits. One existing technique programs such a memory in a 2-pass process. The first bit (the lower page bit) is programmed in the first pass. The same cell is then programmed in a second pass to represent the desired second bit (the upper page bit). In order not to change the value of the first bit in the second pass, the memory-state representation of the second bit is made dependent on the value of the first bit. Therefore, during the programming of the second bit, if an error occurs because of a power interruption or some other reason and results in an incorrect memory state, the value of the first bit may also be corrupted.

Figure 41A shows the threshold voltage distributions of a 4-state memory array when each memory cell stores two bits of data. The four distributions represent the four memory states 'U', 'X', 'Y' and 'Z'. Before a memory cell is programmed, it is first erased into its 'U' or 'unwritten' state. As the cell is progressively programmed, it reaches the memory states 'X', 'Y' and 'Z'.

Figure 41B shows an existing 2-pass programming scheme using Gray code. The four states can be represented by two bits, a lower page bit and an upper page bit, denoted (upper page bit, lower page bit). For a page of cells to be programmed in parallel, there are really two logical pages: a logical lower page and a logical upper page. A first programming pass programs only the logical lower page. With proper coding, a subsequent second programming pass on the same page of cells programs the logical upper page without resetting the logical lower page. A commonly used code is the Gray code, in which only one bit changes when transitioning to an adjacent state.
This code therefore has the advantage of placing fewer demands on error correction, since only one bit changes at a time. A general scheme using the Gray code is to let '1' represent the 'not programmed' condition. The erased memory state 'U' is thus represented by (upper page bit, lower page bit) = (1, 1). In the first pass, which programs the logical lower page, any cell that is to store data '0' has its logical state transition from (x, 1) to (x, 0), where 'x' represents the 'don't care' value of the as yet unprogrammed upper bit. For consistency, 'x' can be labeled '1'. The (1, 0) logical state is represented by programming the cell to the memory state 'X'; that is, prior to the second programming pass, a lower bit value of '0' is represented by the memory state 'X'.

A second programming pass programs the bits of the logical upper page. Only those cells requiring an upper page bit of '0' are programmed. After the first pass, a cell of the page is in logical state (1, 1) or (1, 0). In order to preserve the value of the lower page in the second pass, the lower bit values '0' and '1' must be distinguished. For the transition from (1, 0) to (0, 0), the memory cell is programmed to the memory state 'Y'. For the transition from (1, 1) to (0, 1), the memory cell is programmed to the memory state 'Z'. In this way, during a read, both the lower page bit and the upper page bit can be decoded by determining the memory state programmed in the cell.

However, the Gray-code 2-pass programming scheme can become a problem when the second pass is erroneous. For example, programming the upper page bit to '0' while the lower bit is at '1' causes the transition from (1, 1) to (0, 1). This requires the memory cell to be progressively programmed from 'U' through 'X' and 'Y' to 'Z'.
If a power interruption occurs before the programming is complete, the cell may end up in one of the transitional memory states, say 'X'. When the memory cell is read, 'X' is decoded as logical state (1, 0). This gives incorrect results for both the upper and the lower bit, since the state should have been (0, 1). Similarly, if the programming is interrupted when 'Y' has been reached, it corresponds to (0, 0); the upper bit is now correct, but the lower bit is still wrong.

It can thus be seen that a problem in programming the upper page can corrupt data already in the lower page. In particular, when the second pass programs across an intermediate memory state, a program abort can leave the cell stuck in that intermediate state, resulting in an incorrectly decoded lower page bit.

Figure 42 shows a scheme in which critical data is safeguarded by saving each sector in duplicate. For example, sectors A, B, C and D may each be stored in duplicate copies. If the data in one copy of a sector is corrupted, the other copy can be read instead.

Figure 43 shows the non-robustness that usually arises when the duplicate sectors are stored in a multi-state memory. As described above, in the exemplary 4-state memory a multi-state page really comprises a logical lower page and a logical upper page, programmed in two separate passes. In the example shown, the page is wide enough for several sectors: sector A and its duplicate are programmed simultaneously into the logical lower page, and likewise sector B and its duplicate. Then, in a subsequent second pass programming of the logical upper page, sectors C, C are programmed simultaneously, and likewise sectors D, D. If a program abort occurs in the middle of programming sectors C, C, it corrupts sectors A, A in the lower page underneath. Unless the lower-page sectors are read and buffered before the upper page is programmed, the corruption cannot be recovered.
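The 2-pass Gray-code behavior of Figures 41A-41B, including the lower-page corruption caused by an aborted upper-page program, can be modeled as follows. This is a sketch; the state names follow the figures, while the function names are illustrative.

```python
# Sketch of the 2-pass Gray-code scheme of Figure 41B. States in threshold
# order are U < X < Y < Z; each decodes to (upper_bit, lower_bit).

STATES = ["U", "X", "Y", "Z"]
DECODE = {"U": (1, 1), "X": (1, 0), "Y": (0, 0), "Z": (0, 1)}
ENCODE = {bits: s for s, bits in DECODE.items()}

def program_lower(state, lower_bit):
    # First pass: (1,1) -> (1,0) when writing a 0; writing a 1 leaves 'U'.
    return "X" if lower_bit == 0 else state

def program_upper(state, upper_bit, abort_at=None):
    """Second pass: step through adjacent states toward the target state.
    abort_at simulates a power loss after that many programming steps."""
    lower_bit = DECODE[state][1]
    target = ENCODE[(upper_bit, lower_bit)]
    steps = 0
    while state != target:
        state = STATES[STATES.index(state) + 1]  # cells only move upward
        steps += 1
        if abort_at is not None and steps >= abort_at:
            return state  # stuck in a transitional state
    return state
```

Programming (1, 1) to (0, 1) must traverse 'X' and 'Y' on the way to 'Z'; aborting mid-way leaves the cell in a state whose decoded lower bit no longer matches what the first pass established, which is exactly the hazard the text describes.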
Therefore, storing two copies of critical data simultaneously, such as sectors A, A, cannot protect them from being corrupted by a problematic subsequent programming of sectors C, C in the upper page above them.

Figure 44A shows one embodiment that staggers the duplicate copies of critical data when saving them into a multi-state memory. Basically, the lower page, i.e., sectors A, A and sectors B, B, is stored in the same way as in Figure 43. However, in the upper-page programming, sectors C and D are interleaved with their duplicates as C, D, C, D. If partial-page programming is supported, the two copies of sector C can be programmed simultaneously, and likewise the two copies of sector D. If the programming of the two sector-C copies is aborted, the lower page is corrupted only in one copy of sector A and one copy of sector B; the other copies remain unaffected. Thus, if two copies of critical data are stored in a first programming pass, they should not be exposed to a simultaneous subsequent programming pass above both of them.

Figure 44B shows another embodiment in which the duplicate copies of critical data are saved only in the logical upper page of a multi-state memory. In this case the data in the lower page is not used. The critical data and its duplicate, such as sectors A, A and sectors B, B, are saved to the logical upper page only. In this way, if there is a program abort, the critical data can be rewritten to another logical upper page, and any corruption of the lower-page data does not matter. This method essentially uses half the storage capacity of each multi-state page.

Figure 44C shows another embodiment in which the duplicate copies of critical data are saved in binary mode in a multi-state memory. In this case, each memory cell is programmed in binary mode, in which its threshold range is divided into only two regions. There is thus only a single programming pass, and programming can be restarted in a different location if a program abort occurs.
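The staggering of Figure 44A can be illustrated with a small model: the copies of C and D are interleaved in the upper page, so an abort while programming both copies of C corrupts at most one lower-page copy of A and one of B. The layout and names below are illustrative assumptions, not the patent's exact page geometry.

```python
# Sketch of the staggered arrangement of Figure 44A.
# Lower page: A, A, B, B ; upper page: C, D, C, D.

LOWER = ["A1", "A2", "B1", "B2"]    # first-pass (logical lower page) copies
UPPER = ["C1", "D1", "C2", "D2"]    # second-pass copies, staggered

def surviving_lower_copies(aborted_upper_sector):
    """An aborted upper-page write corrupts the lower-page data in the same
    cell positions; return the lower-page copies that remain intact."""
    bad_cols = [i for i, s in enumerate(UPPER) if s.startswith(aborted_upper_sector)]
    return [s for i, s in enumerate(LOWER) if i not in bad_cols]
```

With the non-staggered layout of Figure 43 (both copies of C above both copies of A), the same abort would destroy both copies of A; staggering guarantees one survivor of each lower-page sector.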
This method also uses half the storage capacity of each multi-state page. The operation of a multi-state memory in binary mode is described in U.S. Patent No. 6,456,528, the entire disclosure of which is incorporated herein by reference.

Figure 45 shows yet another embodiment in which the duplicate copies of critical data are saved simultaneously to two different metablocks. If one of the blocks becomes unavailable, the data can be read from the other. For example, the critical data is contained in a number of sectors (A through L in the figure), each sector being stored in duplicate. The two copies are written concurrently to two different blocks, Block 0 and Block 1. If one copy is written to the logical lower page, the other copy is written to the logical upper page. In this way, there is always a copy programmed into a logical upper page. If a program abort occurs, the data can be reprogrammed into another logical upper page; if the lower page is corrupted, there is always another upper-page copy in the other block.

Figure 46B illustrates yet another embodiment in which the duplicate copies of critical data are saved simultaneously using a fault-tolerant code. Figure 46A, like Figure 41A, shows the threshold voltage distributions of the 4-state memory array and is shown as a reference for Figure 46B. The fault-tolerant code essentially avoids any upper-page programming that transits through an intermediate memory state. Thus, in the first pass programming of the lower page, a lower bit value of '0' takes the logical state (1, 1) to (1, 0), represented by programming the erased memory state 'U' to 'Y'. In the second pass programming of the upper page bit to '0': if the lower page bit is '1', the logical state (1, 1) becomes (0, 1), represented by programming the erased memory state 'U' to 'X'; if the lower page bit is '0', the logical state (1, 0) becomes (0, 0), represented by programming the memory state 'Y' to 'Z'.
Since the upper-page programming only programs the cell to the next adjacent memory state, a program abort cannot change the lower page bit.

The duplicate copies of critical data are preferably written concurrently, as described above. Another way of avoiding corrupting both copies at the same time is to write them sequentially. This method is slower, but the copies themselves indicate whether their programming was successful when the controller checks them.

Figure 47 is a table showing the possible states of the two data copies and the validity of the data.

If neither the first nor the second copy has an ECC error, the programming of the data is regarded as completely successful, and valid data can be obtained from either copy.

If the first copy has no ECC error but the second copy does, programming was interrupted in the middle of the second-copy programming. The first copy contains valid data; the second copy is not to be relied upon, even if its error is correctable.

If the first copy has no ECC error and the second copy is empty (erased), programming was interrupted after the end of the first-copy programming but before the second copy was started. The first copy contains valid data.

If the first copy has an ECC error and the second copy is empty (erased), programming was interrupted in the middle of the first-copy programming. The first copy may contain invalid data even if the error is correctable.

The following technique for reading the duplicated data is preferred because it makes good use of the existence of the duplicate copies: read and compare both copies. In that case, the copy states shown in Figure 47 can be used to ensure that there is no misdetection of errors.

In another embodiment, the controller reads only one copy; for speed and simplicity, the copy read is preferably alternated between the two copies.
For example, when the controller performs a control read it may read copy 1, and the next control read (any control read) should then come from copy 2, then copy 1 again, and so on. In this way both copies are read, and their integrity (ECC check) is verified periodically. This reduces the risk of failing to detect, in time, errors caused by deteriorating data retention. For example, if only copy 1 were normally read, copy 2 could gradually deteriorate to the point where the errors can no longer be salvaged by ECC, and the second copy could no longer be used.

Preemptive data relocation: As described in connection with Figure 20, the block management system maintains a set of control data in flash memory during its operation. This control data is stored in metablocks in the same way as host data. The control data itself is therefore subject to block management, and hence to updates, and consequently to garbage collection operations.
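The replica-validity rules of Figure 47, together with the alternating control reads just described, might be sketched as follows. The encoding of copy states as strings is an assumption made for illustration only.

```python
# Hypothetical encoding of the replica states of Figure 47, plus the
# alternating ("round-robin") control reads described above.

import itertools

def replica_status(copy1, copy2):
    """Each argument is 'ok', 'ecc_error', or 'erased'.
    Returns which copy holds valid data, or None if neither can be trusted."""
    if copy1 == "ok" and copy2 == "ok":
        return "either"      # programming completely successful
    if copy1 == "ok" and copy2 == "ecc_error":
        return "copy1"       # abort in the middle of programming copy 2
    if copy1 == "ok" and copy2 == "erased":
        return "copy1"       # abort between the two copy programmings
    return None              # e.g. copy 1 in error, copy 2 erased: invalid

def alternating_reader():
    """Yield the copy to use for each successive control read, so that both
    copies are periodically read and ECC-checked for deteriorating retention."""
    return itertools.cycle(("copy1", "copy2"))
```

Note that, per the text, a correctable ECC error in the second copy still disqualifies it, since the error indicates an interrupted programming rather than ordinary bit decay.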

The control data also forms a hierarchy, in which the control data at lower levels is updated more frequently than that at higher levels. For example, assuming that each control block has N control sectors to be written, the following sequence of control updates and control block relocations normally occurs. Referring again to Figure 20, every N CBI updates fill up the CBI block and trigger a CBI relocation (rewrite) and a MAP update. If a chaotic block gets closed, it also triggers a GAT update. Every GAT update triggers a MAP update. Every N GAT updates fill up the block and trigger a GAT block relocation. In addition, when a MAP block becomes full, it triggers a MAP block relocation and an update of the MAPA block (if the MAPA block exists; otherwise the BOOT block points directly to the MAP). When the MAPA block becomes full, it triggers a MAPA block relocation, a BOOT block update and a MAP update. Finally, when the BOOT block becomes full, an active BOOT block relocation to another BOOT block is triggered.

Since the hierarchy is formed by the BOOT control data at the top, followed by MAPA, MAP and then GAT, in every N^3 GAT updates there will be a "cascade control update", in which all of the GAT, MAP, MAPA and BOOT blocks are relocated. When the GAT update is caused by the closure of a chaotic or sequential update block resulting from a host write, there will also be a garbage collection operation (i.e., relocation or rewrite). In the case of a chaotic update block garbage collection, the CBI is updated, and this can also trigger a CBI block relocation. In this extreme situation, therefore, a large number of metablocks must be garbage-collected at the same time.

It can be seen that each control data block of the hierarchy has its own periodicity in terms of getting filled and being relocated. If each proceeds normally, there will be times when the phases of a large number of the blocks line up, triggering a massive number of relocations or garbage collections involving all of those blocks at the same time. Relocating many control blocks would take a long time and should be avoided, since some hosts do not tolerate the long delays caused by massive control operations.

According to another aspect of the invention, in a non-volatile memory with a block management system, a "control garbage collection", or preemptive relocation, of a memory block is implemented to avoid the situation in which a large number of update blocks all happen to need relocation at the same time. This situation can arise, for example, when updating the control data used for controlling the operation of the block management system. A hierarchy of control data types can coexist with varying degrees of update frequency, resulting in their associated update blocks requiring garbage collection or relocation at different rates. There will be certain times when the garbage collection operations of more than one control data type coincide. In the extreme situation, the relocation phases of the update blocks for all the control data types line up, resulting in all of the update blocks requiring relocation at the same time.

The present invention avoids this undesirable situation: whenever the current memory operation can accommodate a voluntary garbage collection operation, an update block is preemptively relocated in advance of the block being totally filled. In particular, priority is given to the block with the data type highest up in the hierarchy, which has the slowest rate. In this way, once the slowest-rate block has been relocated, another relatively long garbage collection will not be needed again for a relatively long time. Also, the faster-rate blocks lower in the hierarchy have less of a relocation cascade to trigger. The inventive method can be regarded as introducing a sort of dithering into the overall mix of things, in order to avoid the phases of the various blocks becoming aligned. Thus, whenever an opportunity arises, a slow-filling block that falls slightly short of being totally filled is preemptively relocated.

In a system with a hierarchy of control data in which the control data lower in the hierarchy changes faster than the control data higher up, by virtue of the cascade effect, priority is given to the blocks holding control data higher up in the hierarchy. One example of an opportunity to perform a voluntary preemptive relocation is when a host write does not itself trigger a relocation, so that any remaining time within the host's write latency can be used for the preemptive relocation operation. Generally, the margin before a block must imperatively be relocated is a predetermined number of unwritten memory units short of the block being totally full. The margin considered is one sufficient to precipitate the relocation ahead of the block being totally filled, but not prematurely, so that resources are not wasted. In the preferred embodiment, the predetermined number of unwritten memory units is between one and six memory units.

Figure 48 is a flow chart illustrating the preemptive relocation of memory blocks storing control data.

Step 1202: Organize the non-volatile memory into blocks, each block partitioned into memory units that are erasable together.

Step 1204: Maintain different types of data.

Step 1206: Assign a ranking to the different types of data.

Step 1208: Store updates of the different types of data among a plurality of blocks, such that each block substantially stores data of the same type.

Step 1210: In response to a block having less than a predetermined number of empty memory units and having the highest-ranked data type among the plurality of blocks, relocate the current updates of that block's data to another block. If not interrupted, return to Step 1208.

An example algorithm for the preemptive relocation of the control data shown in Figure 20 is as follows:

If ((there is no garbage collection caused by user data) or (MAP has 6 or fewer unwritten sectors left) or (GAT has 3 or fewer unwritten sectors left))
Then
If (BOOT has 1 unwritten sector left)
Then relocate BOOT (i.e., relocate it to another block)
Else
If (MAPA has 1 unwritten sector left)
Then relocate MAPA and update MAP
Else
If (MAP has 1 unwritten sector left)

Then relocate MAP
Else
If (the last updated, or the biggest, GAT has 1 unwritten sector left)

Then relocate GAT
Else
If (CBI has 1 unwritten sector left)

Then relocate CBI
Else exit
Else exit

Thus, preemptive relocations are normally performed when no garbage collection due to user data takes place. In the worst case, when every host write triggers a user-data garbage collection but there is still enough time for the voluntary relocation of one block, one control block at a time can be preemptively relocated.

Since user-data garbage collection operations and control updates may coincide with physical errors, it is preferable to have a larger safety margin by carrying out the preemptive relocation, or controlled garbage collection, ahead of time, for example while a block still has two or more unwritten memory units (e.g., sectors) left.

Although the various aspects of the present invention have been described with respect to specific embodiments, it will be understood that the invention is entitled to protection within the full scope of the appended claims.

[BRIEF DESCRIPTION OF THE DRAWINGS]

Figure 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention.

Figure 2 illustrates the memory, organized into physical groups (or metablocks) and managed by the memory manager of the controller, according to a preferred embodiment of the invention.

Figures 3A(i)-3A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the invention.

Figure 3B illustrates schematically the mapping between logical groups and metablocks.

Figure 4 illustrates the alignment of a metablock with structures in physical memory.

Figure 5A illustrates metablocks constituted by linking minimum erase units of different planes.

Figure 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock.

Figure 5C illustrates another embodiment in which more than one MEU is selected from each plane for linking into a metablock.

Figure 6 is a schematic block diagram of the metablock management system as implemented in the controller and the flash memory.

Figure 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block.

Figure 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block.

Figure 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations with a discontinuity in logical addresses.

Figure 9 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a general embodiment of the invention.

Figure 10 is a flow diagram illustrating a process by the update block manager to update a logical group of data, according to a preferred embodiment of the invention.

Figure 11A is a flow diagram illustrating in more detail the consolidation process of closing the chaotic update block shown in Figure 10.

Figure 11B is a flow diagram illustrating in more detail the compaction process of closing the chaotic update block shown in Figure 10.

Figure 12A illustrates all the possible states of a logical group, and the possible transitions between them under various operations.

Figure 12B is a table listing the possible states of a logical group.

Figure 13A illustrates all the possible states of a metablock, and the possible transitions between them under various operations. A metablock is a physical group corresponding to a logical group.

Figure 13B is a table listing the possible states of a metablock.

Figures 14(A)-14(J) are state diagrams showing the effects of various operations on the state of a logical group and on the physical metablock.

Figure 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of open and closed update blocks and of erased blocks for allocation.

Figure 16A illustrates the data fields of a chaotic block index (CBI) sector.

Figure 16B illustrates an example of chaotic block index (CBI) sectors being recorded in a dedicated metablock.

Figure 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update.

Figure 16D is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which the logical groups have been partitioned into subgroups.

Figure 16E illustrates examples of chaotic block index (CBI) sectors and their functions in the embodiment in which each logical group is partitioned into multiple subgroups.

Figure 17A illustrates the data fields of a group address table (GAT) sector.

Figure 17B illustrates an example of group address table (GAT) sectors recorded in a GAT block.

Figure 18 is a schematic block diagram illustrating the distribution and flow of the control and directory information for the usage and recycling of erased blocks.

Figure 19 is a flow chart illustrating the process of logical-to-physical address translation.

Figure 20 illustrates the hierarchy of operations performed on control data structures in the course of the operation of the memory management.

Figure 21 illustrates a memory array constituted of multiple memory planes.

Figure 22A is a flow diagram illustrating a method of update with plane alignment, according to one embodiment of the invention.

Figure 22B illustrates a preferred embodiment of the step of storing updates in the flow diagram of Figure 22A.

Figure 23A illustrates an example of logical units being written sequentially to a sequential update block without regard to plane alignment.

Figure 23B illustrates an example of logical units being written non-sequentially to a chaotic update block without regard to plane alignment.

Figure 24A illustrates the sequential update example of Figure 23A with plane alignment and padding, according to a preferred embodiment of the invention.

Figure 24B illustrates the chaotic update example of Figure 23B with plane alignment and without any padding, according to a preferred embodiment of the invention.

Figure 24C illustrates the chaotic update example of Figure 23B with plane alignment and padding, according to another preferred embodiment of the invention.

Figure 25 illustrates an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors.
圖26A和圖21的記憶體結構相同,只是各頁面含有兩個區 段而非一個。 圖細顯示圖26A所示之具有以線性圖方式布局之記憶 體早兀的中繼區塊。 ,圖27顯示的替代性方案如下:不用填補要從一個位置複 製到另個的遊輯單凡,即可在更新區塊令進行平面對齊。 98680.doc -124- 1272487 圖28纟、、員不其中缺卩曰區塊在彙總操 a 士口 , h月間發生程式失敗時 會在另一個區塊上重複彙總操作的方案。 圖29以示意圖顯示具有允許足夠時間完成寫入(更新)操 作及彙總操作之時序或寫人等待時間的主機寫入操作。 圖30根據本發明的一般方案,顯示程式失敗處置的流程 圖0 圖31A顯示程式失敗處置的一項具體實施例,其中第三 (最後的重新配置)區塊和第二(中斷)區塊不同。 圖3 1B顯示程式失敗處置的另一項具體實施例,其中第三 (最後的重新配置)區塊和第二(中斷)區塊相同。 圖32A顯示造成彙總操作之初始更新操作的流程圖。 圖32B顯示根據本發明的一項較佳具體實施例,多階段彙 總操作的流程圖。 圖33顯示多階段彙總操作之第一及最後階段的範例時 序。 圖34A顯示其中中斷彙總區塊並非用作更新區塊而是用 作其彙總操作已經中斷之彙總區塊的例子。 圖34B顯示始於圖34A之多階段彙總的第三及最後階段。 圖35A顯示其中維持中斷彙總區塊為接收主機寫入之更 新區塊而非彙總區塊的例子。 圖35B顯示始於第二例中圖35A之多階段彙總的第三及 最後階段。 圖3 6 A顯示套用於主機寫入觸發關閉更新區塊且更新區 塊為循序時之情況的階段性程式錯誤處置方法。 98680.doc -125- 1272487 圖3 6B顯示在更新區塊之更新的例子中套用於(局部區塊 系統)的階段性程式錯誤處置方法。 圖36C顯示處理廢棄項目收集操作的階段性程式錯誤,或 不支援映射至中繼區塊之邏輯群組之記憶體區塊管理系統 中的清除。 圖3 7顯示在每N個區段寫入相同的邏輯群組後將cbi區 段寫入關聯之混亂索引區段區塊的排程範例。 圖38A顯示直到在預定數量的寫入後在其中記錄(^趴區 段時的更新區塊。 圖38B顯示圖38A之進一步在索引區段後在其中記錄資 料頁面1、2及4的更新區塊。 圖3 8C顯示圖3 8B之具有另一寫入以觸發索引區段下一 個記錄之邏輯區段的更新區塊。 圖3 9 A顯示儲存於混亂更新區塊中各資料區段標頭之中 間寫入的中間索引。 圖39B顯示在寫入的各區段標頭中儲存中間寫入之中間 索引的範例。 圖40顯示在混亂更新區塊之各資料區段標頭中儲存的混 亂索引棚位中的資訊。 圖41A顯示當各記憶體單元儲存兩個位元的資料時,々狀 怨記憶體陣列的定限電壓分布。 圖41B顯示現有使用格雷碼(Gray⑶㈣的2次編碼過程程 式化方案。 圖42顯示籍由儲存複製的各區段以防衛關鍵資料的方 98680.doc -126- 1272487 :。:列如’可將區段“、。,儲存在複 :3 = Γ有資料毀損’則可以讀取另-個來取代。 中通常將複製區段儲存在多重狀態記憶體的 重狀態 狀態記 ^圖44Α顯示將關鍵資料錯開之複製複本館存至多 記憶體的一項具體實施例。 圖4 4 B顯示只將關鍵資料之複製複本儲存至多重 憶體之邏輯上方頁面的另一項具體實施例。 圖44C顯示又另_項以多重狀態記憶體的二進制模式儲 存關鍵資料之複製複本的具體實施例。 圖45顯示同時將關鍵資料之複製複本儲存至兩個不同中 繼區塊的又另一項具體實施例。 圖46A和圖41A同在顯示4狀態記憶體陣列的定限電壓分 布並顯示作為圖46B的參考。 圖46B顯示使用容錯碼同時儲存關鍵資料之複製複本的 又另一項具體實施例。 圖47為顯示兩個資料複本之可能狀態及資料有效性的表 格。 圖48顯示先佔式重新配置儲存控制資料之記憶體區塊的 流程圖。 【主要元件符號說明】 1 缺陷區塊 2 中斷區塊 3 重新配置區塊 98680.doc -127 - 1272487 10 主機 20 記憶體糸統 100 控制器 110 介面 120 處理器 121 選用性副處理器 122 唯讀記憶體(ROM) 124 選用性可程式非揮發性記憶體 130 隨機存取記憶體(RAM) 132 快取記憶體 134、 610 配置區塊清單(ABL) 136、 740 清除區塊清單(CBL) 140 邏輯對實體位址轉譯模組 150 更新區塊管理器模組 152 循序更新 154 混亂更新 160 抹除區塊管理器模組 162 結束區塊管理器 170 中繼區塊連結管理器 180 控制資料互換 200 記憶體 210 群組位址表(GAT) 220 混亂區塊索引(CBI) 230、 770 已抹除的區塊清單(EBL) 98680.doc -128- 1272487 240 MAP 612 
已抹除的ABL區塊清單 614 開啟的更新區塊清單 615 關聯的原始區塊清單 616 關閉的更新區塊清單 617 已抹除的原始區塊清單 620 CBI區塊 700 邏輯群組 702 原始中繼區塊 704 混亂更新區塊 750 MAP區塊 760 抹除區塊管理(EBM)區段 772 可用的區塊緩衝器(ABB) 774 已抹除的區塊緩衝器(EBB) 776 已清除的區塊緩衝器(CBB) 780 MAP區段 782 來源MAP區段 784 目的地MAP區段 910 平面 912 讀取及程式電路 914 頁面 920 控制器 922 緩衝器 930 資料匯流排 98680.doc 129-Then reconfigure CBI or leave otherwise. Therefore, preemptive reconfiguration is usually done when no usage occurs. 98680.doc -120-1272487 When the data is discarded. Preemptive reconfiguration of a spontaneous reconfiguration of a block in the Trigger User Data Discard item. In the worst case, when each host writes the collection, but there is enough time to perform the day, one control block can be executed at a time. Since the user data is discarded and the control update may occur at the same time as the entity error, 'It is better to have a larger security margin, ^ by pre-emptive re-emptive if you have 2 or more unwritten memory files (such as 'segments) in the block beforehand. Collection or control of waste Xin project collection. While the various aspects of the invention have been described in terms of specific embodiments, it is understood that the invention is intended to be β [Simplified illustration of the drawings] Fig. 1 is a schematic view showing main hardware components suitable for implementing the memory system of the present invention. & Figure 2 shows a memory group managed by a memory manager of a controller in accordance with a preferred embodiment of the present invention, organized into zones or relay blocks. Figure 3(1)-3A(iii) illustrates a mapping between logical groups and relay blocks in accordance with a preferred embodiment of the present invention. Figure 3B is not intended to show the mapping between logical groups and relay blocks. Figure 4 shows the alignment of the structure in the relay block and the physical memory. Fig. 5A shows a relay block composed of a minimum erasing unit that connects different planes. 
98680.doc -121 - 1272487 Figure 5B shows a particular embodiment in which a minimum erase unit (MEU) is selected from each plane to link to a relay block. Figure 5C shows another embodiment in which more than one meu is selected from each plane to link to a relay block. Figure 6 is a schematic block diagram of a relay block management system implemented in a controller and flash memory. Fig. 7A shows an example of writing a section of a sequential update block in a sequential order in a logical group. Figure 7B shows an example of a logical group in which a segment of a chaotic update block is written in a chaotic order. Figure 8 shows an example of writing a segment of a sequential update block in a logical group in sequential order due to two separate host write operations with interrupts at logical addresses. Figure 9 is a flow diagram of a program for updating the data of a logical group for displaying an update block manager, in accordance with a general embodiment of the present invention. Figure 10 is a flow diagram showing the flow of updating the data of a logical group by the update block manager in accordance with a preferred embodiment of the present invention. Fig. 11A is a flow chart showing in detail a summary procedure for closing the chaotic update block shown in Fig. 10. Figure 11B is a flow chart showing in detail the compression procedure for closing the chaotic update block shown in Figure 10. Figure 12A shows all possible states of a logical group, with possible transitions between various operations. Figure 12B is a table listing the possible states of a logical group. 98680.doc -122- 1272487 Figure 13A shows all possible states of the relay block and their possible transitions under various operations. A relay block is an entity group corresponding to a logical group. Figure 13B is a table listing the possible states of a relay block. 
14(A)-14(J) are state diagrams showing various operational effects on the logical group state and on the physical relay block. Figure 15 shows a preferred embodiment of the configuration block list (ABL) structure for recording the open and closed update blocks and the configured erased blocks. Figure 16A shows the data block of the Chaotic Block Index (CBI) section. Figure 16] 3 shows an example of a chaotic block index (CBI) section recorded in a dedicated relay block. Figure 16C is a flow diagram showing the access to the logical section of a given logical group for a chaotic update. Figure 16D is a flow diagram of data for a logical section of a given logical group for which a display access is chaotically updated, based on an alternative embodiment in which the logical group has been partitioned into subgroups. Figure 16E shows an example of a chaotic block index (CBi) section and its functions in a specific embodiment in which each logical group is partitioned into a plurality of subgroups. Figure 17A shows the data block of the Group Address Table (GAT) section. Fig. 17B shows an example of a group address table (GAT) section recorded in a GAT block. Figure 18 is a schematic block diagram showing the distribution and flow of control and catalog information for the use and recycling of erased blocks. Figure 19 is a flow chart showing the physical address translation logic program. Figure 2 shows the level of operation performed on the control data structure 98680.doc -123- 1272487 during the memory management operation. Fig. 21 shows an array of memories composed of a plurality of memories. Figure 22A is a flow chart of a new method not in accordance with the present invention. An example of having a planar alignment Figure 22B shows a preferred embodiment of Figure 28. The step of storing the update step in the mind map = the example of the logical 7L of writing the sequential update block in the sequential order regardless of the plane alignment. 
Figure 23B shows. An example of writing a chaotic update area in a non-sequential order, regardless of plane alignment. Fig. 24A shows an example of a sequential update of Fig. 23A in accordance with the present invention, a rare earth, a yoke example having a plane and a fill. Figure 24B shows a chaotic update example of Figure 23B with planes & without any padding, in accordance with a specific embodiment of the present invention. Figure 24C shows an example of a chaotic update of Figure 23B with planar alignment and padding in accordance with another preferred embodiment of the present invention. Figure 25 shows an example memory organization in which each page contains two memory cells for storing two logical units (e.g., two logical segments). The memory of Fig. 26A and Fig. 21 has the same structure except that each page contains two sections instead of one. The figure shows the relay block shown in Fig. 26A with the memory arranged in a linear pattern. The alternative shown in Figure 27 is as follows: You can make a plane alignment in the update block without having to fill it from one location to another. 98680.doc -124- 1272487 Figure 28纟, the staff is not in the missing block in the summary operation a, the program will fail to repeat the summary operation on another block when the program fails. Figure 29 is a schematic diagram showing a host write operation with a timing or write latency that allows sufficient time to complete the write (update) operation and the summary operation. Figure 30 is a flow chart showing the failure of the program to be processed according to the general scheme of the present invention. Figure 31A shows a specific embodiment of the failure handling of the program, wherein the third (last reconfigured) block and the second (interrupted) block are different. . Figure 3B shows another embodiment of a program failure handling in which the third (last reconfiguration) block is the same as the second (interrupt) block. 
Figure 32A shows a flow chart for the initial update operation that caused the summary operation. Figure 32B shows a flow chart of a multi-stage summary operation in accordance with a preferred embodiment of the present invention. Figure 33 shows an example sequence of the first and final stages of a multi-stage rollup operation. Fig. 34A shows an example in which the interrupt summary block is not used as an update block but as a summary block whose summary operation has been interrupted. Figure 34B shows the third and final stages starting from the multi-stage summary of Figure 34A. Fig. 35A shows an example in which the interrupt summary block is maintained as a newer block written by the receiving host instead of the summary block. Figure 35B shows the third and final stages starting from the multi-stage summary of Figure 35A in the second example. Figure 3 6 A shows the set of program error handling methods used when the host write triggers the shutdown of the update block and the update block is sequential. 98680.doc -125- 1272487 Figure 3 6B shows the staged program error handling method applied to the (local block system) in the updated example of the update block. Figure 36C shows a staged program error handling a discarded item collection operation, or a cleanup in a memory block management system that does not support a logical group mapped to a relay block. Figure 37 shows an example of scheduling a cbi segment to write to an associated chaotic index sector block after writing the same logical group every N segments. Fig. 38A shows an update block in which a section is recorded until after a predetermined number of writes. Fig. 38B shows an update area in which the data pages 1, 2, and 4 are recorded further after the index section of Fig. 38A. Figure 3 8C shows the update block of Figure 3 8B with another write to trigger the logical segment of the next record of the index segment. 
Figure 3 9 A shows the header of each data segment stored in the chaotic update block. An intermediate index written in the middle. Figure 39B shows an example of storing an intermediate index of intermediate writes in each of the written sector headers. Figure 40 shows the confusion stored in the headers of each data section of the chaotic update block. Indexing information in the booth. Figure 41A shows the threshold voltage distribution of the array of memory cells when each memory unit stores two bits of data. Figure 41B shows the existing encoding process using Gray code (Gray(3)(4)) Stylized scheme. Figure 42 shows the side 98864.doc -126- 1272487: by means of storing the copied sections to protect the key data. The column can be stored in the complex: 3 = Γ Data corruption can read another one In the middle of the multi-state memory, the copy state is usually stored in the multi-state memory. Figure 44 shows a specific embodiment of storing the duplicated copy of the duplicated data in the multi-memory. Figure 4 4 B shows only the key Another embodiment of the copy of the data is stored in a logically upper page of the multi-memory. Figure 44C shows a further embodiment of storing a duplicate copy of the key material in a binary mode of multiple state memory. Another embodiment in which a copy of the key material is simultaneously stored to two different relay blocks. Figure 46A and Figure 41A show the limited voltage distribution of the 4-state memory array and are shown as a reference for Figure 46B. Figure 46B shows yet another embodiment of a copy of a duplicate of key data stored using a fault tolerant code. Figure 47 is a table showing the possible states of two data copies and the validity of the data. Figure 48 shows the preemptive reconfiguration storage. Flow chart of the memory block of the control data. 
[Main component symbol description] 1 Defective block 2 Interrupt block 3 Reconfigure block 98680 .doc -127 - 1272487 10 Host 20 Memory System 100 Controller 110 Interface 120 Processor 121 Selective Sub Processor 122 Read Only Memory (ROM) 124 Selective Programmable Nonvolatile Memory 130 Random Access Memory Body (RAM) 132 Cache Memory 134, 610 Configuration Block List (ABL) 136, 740 Clear Block List (CBL) 140 Logical Pair Physical Address Translation Module 150 Update Block Manager Module 152 Sequential Update 154 Chaos Update 160 Erase Block Manager Module 162 End Block Manager 170 Relay Block Link Manager 180 Control Data Exchange 200 Memory 210 Group Address Table (GAT) 220 Chaotic Block Index (CBI) 230 , 770 erased block list (EBL) 98680.doc -128- 1272487 240 MAP 612 erased ABL block list 614 open update block list 615 associated original block list 616 closed update block Listing 617 erased original block list 620 CBI block 700 logical group 702 original relay block 704 chaotic update block 750 MAP block 760 erase block Management (EBM) Section 772 Available Block Buffer (ABB) 774 Erased Block Buffer (EBB) 776 Cleared Block Buffer (CBB) 780 MAP Section 782 Source MAP Section 784 Purpose MAP segment 910 plane 912 read and program circuit 914 page 920 controller 922 buffer 930 data bus 98680.doc 129-
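The preemptive relocation flow described earlier in this section (relocate GAT when it has only one unwritten sector left, otherwise relocate CBI, otherwise exit) can be sketched in a few lines. This is an illustrative sketch only — the function name, the safety-margin constant, and the free-sector counters are invented here for the example and are not taken from the patent:

```python
# Hypothetical sketch of the preemptive relocation policy: a control block
# (GAT first, then CBI) is relocated ahead of time whenever it is within a
# small safety margin of being full, so the relocation never has to happen
# at the same moment as a user-data garbage collection triggered by a host
# write. The specification suggests a margin of two or more unwritten units
# for extra safety against coinciding physical errors.
SAFETY_MARGIN = 2

def preemptive_relocation(gat_free_sectors, cbi_free_sectors):
    """Return which control block to relocate preemptively, or None."""
    if gat_free_sectors <= SAFETY_MARGIN:   # GAT block nearly full
        return "GAT"
    if cbi_free_sectors <= SAFETY_MARGIN:   # CBI block nearly full
        return "CBI"
    return None                             # otherwise, exit without relocating
```

Because at most one control block is relocated per invocation, the worst-case cost per host write stays bounded, matching the observation that one voluntary control-block relocation at a time still fits within the write latency.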

Claims (1)

What is claimed is:

1. A method of storing update data in a non-volatile memory organized into a plurality of blocks, each block being partitioned into a plurality of memory units that are erasable together, each memory unit for storing a logical unit of data, the method comprising:
organizing the memory into a series of pages, wherein all memory units in each page are serviced in parallel by a set of sensing circuits;
organizing data into a plurality of logical groups, each logical group being partitioned into a plurality of logical units of data;
storing first versions of the logical units of a logical group, page by page and according to a first order, in a first block, such that each logical unit is stored in a memory unit having a given offset within its page; and
storing subsequent versions of the logical units of the logical group, page by page and according to a second order different from the first order, in a second block, such that each subsequent version is stored in a next available memory unit having the same offset within its page as that of the first version;
whereby all versions of a logical unit are accessible by the same set of sensing circuits.

2. The method of claim 1, further comprising:
when storing subsequent versions of logical units, padding any unused memory units within each page that precede the next available memory unit, by copying thereto current versions of logical units according to the first order.

3. The method of claim 1, further comprising:
constituting the memory from a plurality of memory planes, each plane being organized into a series of pages, wherein all memory units in each page are serviced in parallel by a set of sensing circuits; and
organizing the memory into a series of metapages, each metapage being constituted from a memory page of each plane, such that all memory units of a metapage are serviced in parallel by a plurality of sets of sensing circuits.

4. The method of claim 3, further comprising:
when storing subsequent versions of logical units, padding any unused memory units within each metapage that precede the next available memory unit, by copying thereto current versions of logical units according to the first order.

5. The method of any one of claims 1 to 4, wherein each logical unit is a sector of host data.

6. The method of any one of claims 1 to 4, wherein each page stores one logical unit of data.

7. The method of any one of claims 1 to 4, wherein each page stores more than one logical unit of data.

8. The method of any one of claims 1 to 4, wherein the non-volatile memory has a plurality of floating-gate memory cells.

9. The method of any one of claims 1 to 4, wherein the non-volatile memory is flash EEPROM.

10. The method of any one of claims 1 to 4, wherein the non-volatile memory is NROM.

11. The method of any one of claims 1 to 4, wherein the non-volatile memory is a memory card.

12. The method of any one of claims 1 to 4, wherein the non-volatile memory has a plurality of memory cells each storing one bit of data.

13. The method of any one of claims 1 to 4, wherein the non-volatile memory has a plurality of memory cells each storing more than one bit of data.

14. A non-volatile memory, comprising:
a memory organized into a plurality of blocks, each block being a plurality of memory units that are erasable together, each memory unit for storing a logical unit of data, and each block serving to store a logical group of logical units among a plurality of metapages, each metapage being constituted from a memory page in each plane;
a controller for controlling operations of the blocks;
the controller storing first versions of logical units of data, according to a first order, in a plurality of memory units of a first block, each first-version logical unit being stored in one of the memory planes; and
the controller storing subsequent versions of the logical units, according to a second order different from the first order, in a second block, each subsequent version being stored in a next available memory unit in the same memory plane as the first version, so that all versions of a logical unit are accessible from the same plane.

15. The non-volatile memory of claim 14, further comprising:
the controller, when storing subsequent versions of logical units, padding, metapage by metapage, any unused memory units that precede the next available memory unit with current versions of logical units according to the first order; and wherein
the next available memory unit has the same offset within its metapage as that of the first version.

16. The non-volatile memory of claim 14, wherein the non-volatile memory has a plurality of floating-gate memory cells.

17. The non-volatile memory of claim 14, wherein the non-volatile memory is flash EEPROM.

18. The non-volatile memory of claim 14, wherein the non-volatile memory is NROM.

19. The non-volatile memory of claim 14, wherein the non-volatile memory is a memory card.

20. The non-volatile memory of any one of claims 14 to 19, wherein the non-volatile memory has a plurality of memory cells each storing one bit of data.

21. The non-volatile memory of any one of claims 14 to 19, wherein the non-volatile memory has a plurality of memory cells each storing more than one bit of data.

22. A non-volatile memory, comprising:
a memory organized into a plurality of blocks, each block being a plurality of memory units that are erasable together, each memory unit for storing a logical unit of data, and each block serving to store a logical group of logical units among a plurality of metapages, each metapage being constituted from a memory page in each plane;
storage means for storing first versions of logical units of data, according to a first order, in a plurality of memory units of a first block, each first-version logical unit being stored in one of the memory planes; and
storage means for storing subsequent versions of the logical units, according to a second order different from the first order, in a second block, each subsequent version being stored in a next available memory unit in the same memory plane as the first version, so that all versions of a logical unit are accessible from the same plane.

23. The non-volatile memory of claim 22, further comprising:
means for padding, metapage by metapage, any unused memory units that precede the next available memory unit with current versions of logical units according to the first order; and wherein
the next available memory unit has the same offset within its metapage as that of the first version.
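The core alignment rule in the claims is that a later version of a logical unit must land in a memory unit with the same page (or plane) offset as its first version, with any skipped units padded by copying current versions of data. A minimal sketch under invented, simplified structures — an update block is modeled as a Python list with one logical unit per slot, and a caller-supplied lookup stands in for reading current data from the original block:

```python
# Sketch of plane-aligned writes to an update block (all names illustrative).
# A logical unit whose first version sits at offset k belongs to plane
# k % NUM_PLANES; every subsequent version must be written to the next free
# unit of that same plane, so all versions of the unit are readable by the
# same plane's sensing circuits.
NUM_PLANES = 4

def plane_of(offset):
    """Plane that services the memory unit at this offset."""
    return offset % NUM_PLANES

def write_update(update_block, logical_offset, data, current_version):
    """Pad forward until the next free slot falls in the unit's plane,
    then write the new version there. current_version(slot) supplies the
    current data of whatever logical unit maps to a padded slot."""
    while plane_of(len(update_block)) != plane_of(logical_offset):
        # intervening slot belongs to another plane: fill it with a copy
        update_block.append(("pad", current_version(len(update_block))))
    update_block.append(("data", data))
```

For example, updating the unit at logical offset 2 into an empty update block first pads slots 0 and 1 (planes 0 and 1) with current data, then writes the new version into slot 2, which lies in the same plane as the original. FIG. 24B's no-padding variant would instead leave those slots for later updates at the cost of a more complex index.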
TW093141380A 2003-12-30 2004-12-30 Non-volatile memory and method with memory planes alignment TWI272487B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/750,155 US7139864B2 (en) 2003-12-30 2003-12-30 Non-volatile memory and method with block management system
US10/917,888 US20050141313A1 (en) 2003-12-30 2004-08-13 Non-volatile memory and method with memory planes alignment

Publications (2)

Publication Number Publication Date
TW200601042A TW200601042A (en) 2006-01-01
TWI272487B true TWI272487B (en) 2007-02-01

Family

ID=34753195

Family Applications (1)

Application Number Title Priority Date Filing Date
TW093141380A TWI272487B (en) 2003-12-30 2004-12-30 Non-volatile memory and method with memory planes alignment

Country Status (4)

Country Link
EP (1) EP1704483A2 (en)
KR (1) KR20060134011A (en)
TW (1) TWI272487B (en)
WO (1) WO2005066792A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8898412B2 (en) 2007-03-21 2014-11-25 Hewlett-Packard Development Company, L.P. Methods and systems to selectively scrub a system memory
TWI747349B (en) * 2020-06-30 2021-11-21 大陸商合肥沛睿微電子股份有限公司 Low-level formatting method of storage device

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7139864B2 (en) 2003-12-30 2006-11-21 Sandisk Corporation Non-volatile memory and method with block management system
US9104315B2 (en) 2005-02-04 2015-08-11 Sandisk Technologies Inc. Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage
JP4751163B2 (en) * 2005-09-29 2011-08-17 株式会社東芝 Memory system
US7870231B2 (en) * 2006-07-21 2011-01-11 Qualcomm Incorporated Efficiently assigning precedence values to new and existing QoS filters
KR100825802B1 (en) * 2007-02-13 2008-04-29 삼성전자주식회사 Data write method of non-volatile memory device copying data having logical pages prior to logical page of write data from data block
US8634470B2 (en) 2007-07-24 2014-01-21 Samsung Electronics Co., Ltd. Multimedia decoding method and multimedia decoding apparatus based on multi-core processor
KR101297563B1 (en) 2007-11-15 2013-08-19 삼성전자주식회사 Storage management method and storage management system
KR100982440B1 (en) * 2008-06-12 2010-09-15 (주)명정보기술 System for managing data in single flash memory
US8285970B2 (en) 2008-11-06 2012-10-09 Silicon Motion Inc. Method for managing a memory apparatus, and associated memory apparatus thereof
JP4956593B2 (en) 2009-09-08 2012-06-20 株式会社東芝 Memory system
US8626989B2 (en) 2011-02-02 2014-01-07 Micron Technology, Inc. Control arrangements and methods for accessing block oriented nonvolatile memory
KR101419004B1 (en) * 2012-05-03 2014-07-11 주식회사 디에이아이오 Non-volatile memory system
WO2013171792A1 (en) * 2012-05-16 2013-11-21 Hitachi, Ltd. Storage control apparatus and storage control method
KR101987740B1 (en) 2012-07-09 2019-06-11 에스케이하이닉스 주식회사 Estimation method for channel characteristic of nonvolatile memory device
US9817593B1 (en) 2016-07-11 2017-11-14 Sandisk Technologies Llc Block management in non-volatile memory system with non-blocking control sync system
US10423353B2 (en) 2016-11-11 2019-09-24 Micron Technology, Inc. Apparatuses and methods for memory alignment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5860124A (en) * 1996-09-30 1999-01-12 Intel Corporation Method for performing a continuous over-write of a file in nonvolatile memory
US6763424B2 (en) * 2001-01-19 2004-07-13 Sandisk Corporation Partial block data programming and reading operations in a non-volatile memory

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8898412B2 (en) 2007-03-21 2014-11-25 Hewlett-Packard Development Company, L.P. Methods and systems to selectively scrub a system memory
TWI499911B (en) * 2007-03-21 2015-09-11 Hewlett Packard Development Co Methods and systems to selectively scrub a system memory
TWI747349B (en) * 2020-06-30 2021-11-21 大陸商合肥沛睿微電子股份有限公司 Low-level formatting method of storage device

Also Published As

Publication number Publication date
WO2005066792A3 (en) 2006-02-09
TW200601042A (en) 2006-01-01
WO2005066792A2 (en) 2005-07-21
EP1704483A2 (en) 2006-09-27
KR20060134011A (en) 2006-12-27

Similar Documents

Publication Publication Date Title
TWI288328B (en) Non-volatile memory and method with non-sequential update block management
TWI288327B (en) Non-volatile memory and method with control data management
TWI272487B (en) Non-volatile memory and method with memory planes alignment
JP4851344B2 (en) Non-volatile memory and method with nonsequential update block management
TWI294081B (en) Memory system and operating method thereof
US9942084B1 (en) Managing data stored in distributed buffer caches
CN1348191A (en) Method for driving remapping in flash memory and its flash memory system structure
TW201216058A (en) Use of guard bands and phased maintenance operations to avoid exceeding maximum latency requirements in non-volatile memory systems
TW200837562A (en) Non-volatile memory and method for class-based update block replacement rules
TWI269154B (en) Non-volatile memory and method of storing data in a non-volatile memory
Myers On the use of NAND flash memory in high-performance relational databases
US11307979B2 (en) Data storage device and non-volatile memory control method
TW200844999A (en) Non-volatile memory with worst-case control data management and methods therefor
TW201015563A (en) Block management and replacement method, flash memory storage system and controller using the same
TWI742698B (en) Data storage device and non-volatile memory control method
US11748023B2 (en) Data storage device and non-volatile memory control method
CN103098034B (en) The apparatus and method of operation are stored for condition and atom