TWI288328B - Non-volatile memory and method with non-sequential update block management


Info

Publication number
TWI288328B
TWI288328B
Authority
TW
Taiwan
Prior art keywords
block
logical
update
memory
data
Prior art date
Application number
TW093141426A
Other languages
Chinese (zh)
Other versions
TW200601043A (en)
Inventor
Alan Welsh Sinclair
Sergey Anatolievich Gorobets
Alan David Bennett
Original Assignee
Sandisk Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/750,155 (US7139864B2)
Application filed by Sandisk Corp
Publication of TW200601043A
Application granted
Publication of TWI288328B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/10 Programming or data input circuits
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7202 Allocation control and policies

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

In a non-volatile memory with a block management system that supports update blocks with non-sequential logical units, an index of the logical units in a non-sequential update block is buffered in RAM and stored periodically into the non-volatile memory. In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power outage, the location of recently written logical units can be determined without having to perform a scan during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.

Description

IX. Description of the Invention

[Technical Field]

This invention relates generally to non-volatile semiconductor memory, and more particularly to non-volatile semiconductor memory having a memory block management system.

[Prior Art]

Solid-state memory capable of non-volatile storage of charge, particularly in the form of EEPROM and flash EEPROM packaged as small memory cards, has recently become the storage of choice in a variety of mobile and handheld devices, notably information appliances and consumer electronics products. Unlike RAM (random access memory), which is also solid-state memory, flash memory is non-volatile: it retains its stored data even after power is turned off. Also, unlike ROM (read only memory), flash memory is rewritable, similar to disk storage. In spite of its higher cost, flash memory is increasingly being used in mass storage applications.

Conventional mass storage based on rotating magnetic media, such as hard drives and floppy disks, is unsuitable for the mobile and handheld environment. Disk drives tend to be bulky, are prone to mechanical failure, and have high latency and high power requirements. These undesirable attributes make disk-based storage impractical in most mobile and portable applications. Flash memory, on the other hand, both embedded and in the form of a removable card, is ideally suited to the mobile and handheld environment because of its small size, low power consumption, high speed, and high reliability.

Flash EEPROM is similar to EEPROM (electrically erasable and programmable read-only memory) in that it is non-volatile memory that can be erased and have new data written ("programmed") into its memory cells. Both utilize a floating (unconnected) conductive gate, in a field effect transistor structure, positioned over a channel region in a semiconductor substrate, between source and drain regions. A control gate is then provided over the floating gate. The threshold voltage characteristic of the transistor is controlled by the amount of charge retained on the floating gate. That is, for a given level of charge on the floating gate, there is a corresponding voltage (threshold) that must be applied to the control gate before the transistor is turned "on" to permit conduction between its source and drain regions. Flash memory such as flash EEPROM, in particular, allows entire blocks of memory cells to be erased at the same time.

The floating gate can hold a range of charges and can therefore be programmed to any threshold voltage level within a threshold voltage window. The size of the threshold voltage window is delimited by the minimum and maximum threshold levels of the device, which in turn correspond to the range of charges that can be programmed onto the floating gate. The threshold window generally depends on the memory device's characteristics, operating conditions, and history. In principle, each distinct, resolvable threshold voltage level range within the window may be used to designate a definite memory state of the cell.

The transistor serving as a memory cell is typically programmed to a "programmed" state by one of two mechanisms. In "hot electron injection," a high voltage applied to the drain accelerates electrons across the substrate channel region; at the same time, a high voltage applied to the control gate pulls the hot electrons through a thin gate dielectric onto the floating gate. In "tunneling injection," a high voltage is applied to the control gate relative to the substrate; in this way, electrons are pulled from the substrate to the intervening floating gate. While the term "program" has historically been used to describe writing to a memory by injecting electrons into an initially erased charge storage unit so as to alter the memory state, it is now used interchangeably with more common terms such as "write" or "record."

The memory device may be erased by a number of mechanisms. For EEPROM, a memory cell is electrically erasable by applying a high voltage to the substrate relative to the control gate so as to induce electrons in the floating gate to tunnel through a thin oxide to the substrate channel region (i.e., Fowler-Nordheim tunneling). Typically, EEPROM is erasable byte by byte. For flash EEPROM, the memory is electrically erasable either all at once or one or more minimum erasable blocks at a time, where a minimum erasable block may consist of one or more sectors and each sector may store 512 bytes or more of data.

The memory device typically comprises one or more memory chips that may be mounted on a card. Each memory chip comprises an array of memory cells supported by peripheral circuits such as decoders and erase, write, and read circuits. More sophisticated memory devices also come with a controller that performs intelligent and higher-level memory operations and interfacing.

There are many commercially successful non-volatile solid-state memory devices in use today. These memory devices may be flash EEPROM or may employ other types of non-volatile memory cells. Examples of flash memory and systems, and methods of manufacturing them, are given in U.S. Patent Nos. 5,070,032, 5,095,344, 5,315,541, 5,343,063, 5,661,053, 5,313,421, and 6,222,762. In particular, flash memory devices with NAND string structures are described in U.S. Patent Nos. 5,570,315 and 6,046,935. Non-volatile memory devices are also manufactured from memory cells having a dielectric layer for storing charge; instead of the conductive floating gate elements described earlier, a dielectric layer is used. Such memory devices utilizing dielectric storage elements have been described by Eitan et al., "NROM: A Novel Localized Trapping, 2-Bit Nonvolatile Memory Cell," IEEE Electron Device Letters, vol. 21, no. 11, November 2000, pp. 543-545.

An ONO dielectric layer extends across the channel between the source and drain diffusions. The charge for one data bit is localized in the dielectric layer adjacent to the drain, and the charge for the other data bit is localized in the dielectric layer adjacent to the source. For example, U.S. Patent Nos. 5,768,192 and 6,011,725 disclose a non-volatile memory cell having a trapping dielectric sandwiched between two silicon dioxide layers. Multi-state data storage is implemented by separately reading the binary states of the spatially separated charge storage regions within the dielectric.

To improve read and program performance, multiple charge storage elements or memory transistors in an array are read or programmed in parallel. Thus, a "page" of memory elements is read or programmed together. In existing memory architectures, a row typically contains several interleaved pages, or one row may constitute one page; all memory elements of a page are read or programmed together.

In flash memory systems, an erase operation may take as much as an order of magnitude longer than read and program operations. Thus, it is desirable to have erase blocks of substantial size. In this way, the erase time is amortized over a large aggregate of memory cells.

The nature of flash memory dictates that data must be written to an erased memory location. If data of a certain logical address from the host is to be updated, one way is to rewrite the updated data to the same physical memory location, so that the logical-to-physical address mapping is unchanged. However, this means the entire erase block containing that physical location must first be erased and then rewritten with the updated data. This method of update is inefficient, as it requires an entire erase block to be erased and rewritten, especially if the data to be updated occupies only a small portion of the block. It also results in more frequent erase recycling of the memory blocks, which is undesirable in view of the limited endurance of this type of memory device.

Another problem with managing flash memory systems is that system control and directory data must be maintained. Such data is produced and accessed during the course of various memory operations, so its efficient handling and ready access directly affect performance. Because flash memory is meant for storage and is non-volatile, it would be desirable to maintain this type of data in the flash memory itself; however, with an intervening file management system between the controller and the flash memory, the data cannot be accessed directly. Also, system control and directory data tend to be active and fragmented, which does not suit storage in a system with large-block erase. Conventionally, this type of data is set up in the controller RAM, allowing direct access by the controller. After the memory device is powered up, a process of initialization scans the flash memory in order to compile the necessary system control and directory information to be placed in the controller RAM. This process is time consuming and consumes controller RAM capacity, all the more so with ever-increasing flash memory capacities.

US 6,567,307 discloses a method of dealing with sector updates among large erase blocks, which includes recording the update data in multiple erase blocks acting as scratch pads and eventually consolidating the valid sectors among the various blocks, rewriting them after rearranging them in logically sequential order. In this way, a block need not be erased and rewritten at every slightest update.

WO 03/027828 and WO 00/49488 both disclose a memory system dealing with updates among large erase blocks, including partitioning the logical sector addresses into zones. A small zone of logical address range is reserved for active system control data, separate from another zone for user data, so that manipulation of the system control data in its own zone does not interact with the associated user data in the other zone. Updates are at the logical sector level, and a write pointer points to the corresponding physical sector in a block to be written. The mapping information is buffered in RAM and eventually stored in a sector allocation table in the main memory. The latest version of a logical sector obsoletes all previous versions in existing blocks, which thereby become partially obsolete. Garbage collection is performed to keep the number of partially obsolete blocks acceptable.

Prior art systems tend to have the update data distributed over many blocks, or the update data may render many existing blocks partially obsolete. The result is often a large amount of garbage collection for the partially obsolete blocks, which is inefficient and causes premature aging of the memory. Also, there is no systematic and efficient way of dealing with sequential updates as compared to non-sequential ones.

There is therefore a general need for high-capacity, high-performance non-volatile memory. In particular, there is a need for high-capacity non-volatile memory able to conduct memory operations in large blocks without the aforementioned problems.

[Summary of the Invention]

A non-volatile memory system is organized in physical groups of physical memory locations. Each physical group (metablock) is erasable as a unit and can be used to store a logical group of data. The memory management system allows a logical group of data to be updated by allocating a metablock dedicated to recording the update data of the logical group. The update metablock records update data in the order received and has no restriction on whether the recording is in the correct logical order as originally stored (sequential) or not (chaotic). Eventually the update metablock is closed to further recording. One of several processes then takes place, but the end result is a completely filled metablock, in the correct order, that replaces the original metablock. In the chaotic case, directory data is maintained in the non-volatile memory in a manner conducive to frequent updates. The system supports multiple logical groups being updated concurrently.

One feature of the invention allows data to be updated logical-group by logical-group. Thus, when a logical group is being updated, the distribution of logical units (and also the scatter of the memory units the updates obsolete) is limited in range. This is especially true when the logical group is normally contained within a physical block.

During updates of a logical group, typically only one or two blocks need to be assigned to buffer the updated logical units, so garbage collection need be performed over a relatively small number of blocks. Garbage collection of a chaotic block may be performed by consolidation or compaction.

The economy of the update process is further evident in the generic treatment of update blocks, so that no additional block need be allocated for chaotic (non-sequential) updates. All update blocks are allocated as sequential update blocks, and any update block can be changed to a chaotic update block. Indeed, the change of an update block from sequential to chaotic is discretionary.

The efficient use of system resources allows multiple logical groups to be updated concurrently, further increasing efficiency and reducing overhead.

Memory Alignment Across Multiple Memory Planes

According to another aspect of the invention, for a memory array organized into erasable blocks and constituted from multiple memory planes, such that multiple logical units can be read from, or programmed into, the multiple planes in parallel, when an original logical unit stored in a first block in a given memory plane is updated, provision is made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block in the same plane. Preferably, a logical unit is stored at the same offset position in the plane as its other versions, so that all versions of a given logical unit are serviced by an identical set of sensing circuits.

Accordingly, in a preferred embodiment, any intervening gap between the last programmed memory unit and the next available plane-aligned memory unit is padded with the current versions of logical units. The padding is accomplished by filling the gap with current versions of the logical units logically following the last programmed logical unit, and current versions of the logical units logically preceding the logical unit to be stored in the next available plane-aligned memory unit.

In this way, all versions of a logical unit are maintained in the same plane, at the same offset as the original, so that in a garbage collection operation the latest version of a logical unit need not be retrieved from a different plane, which would degrade performance. In a preferred embodiment, each memory unit across the plane is updated or padded with the latest versions. Thus, a logical unit may be read out from each plane in parallel, in logical order, without further rearrangement.

This scheme reduces the time to consolidate a chaotic block by allowing on-plane rearrangement of the latest versions of the logical units of a logical group, avoiding having to gather the latest versions from different memory planes. This is of benefit where the performance specification of the host interface defines a maximum latency for completion of a sector write operation by the memory system.

Phased Program Error Handling

According to another aspect of the invention, in a memory with a block management system, a program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block before the interruption is transferred to another block, which may also be the breakout block. The failed block can then be discarded. In this way, a defective block, when encountered, can be handled without loss of data and without exceeding a specified time limit from having to transfer its stored data on the spot. This error handling is especially critical for a garbage collection operation, so that the entire operation need not be repeated on a fresh block during a critical time. At an opportune time, the data from the defective block is salvaged by relocation to another block.

Program failure handling is especially important during a consolidation operation. A normal consolidation operation consolidates into a consolidation block the current versions of all the logical units of a logical group residing among the original block and the update block. During the consolidation operation, if a program failure occurs in the consolidation block, another block serving as a breakout consolidation block is provisioned to receive the consolidation of the remaining logical units. In this way, no logical unit need be copied more than once, and the exception-handling operation can still be completed within the period specified for a normal consolidation. At an opportune time, the consolidation is completed by consolidating all outstanding logical units of the group into the breakout block. The opportune time is during some period outside the current host write operation when there is time to perform the consolidation; one such time is during another host write where there is an update but no associated consolidation operation.

In essence, consolidation with program failure handling can be regarded as being implemented in multiple phases. In a first phase, after a program failure occurs, the logical units are consolidated into more than one block so as to avoid consolidating any logical unit more than once. The final phase is completed at an opportune time, in which the logical group is consolidated into one block, preferably by collecting all the logical units in sequential order into the breakout consolidation block.
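The two-phase behavior just described can be illustrated with a short sketch. The following Python fragment is illustrative only and is not the patented implementation; the names (Block, ProgramError, consolidate) and the list-based block model are assumptions made for the example. Phase one continues a failed consolidation in a breakout block without recopying anything; phase two merges the two partial blocks at a later, non-critical time.

```python
# Illustrative sketch of phased program-error handling; all names are
# invented for the example and are not the patent's implementation.

class ProgramError(Exception):
    """Raised when programming a sector into a defective block fails."""

class Block:
    def __init__(self, size, fail_at=None):
        self.sectors = []
        self.size = size
        self.fail_at = fail_at   # simulate a program failure at this fill level

    def program(self, sector):
        if self.fail_at is not None and len(self.sectors) == self.fail_at:
            raise ProgramError(sector)
        self.sectors.append(sector)

def consolidate(latest_sectors, target, spares, deferred):
    """Phase one: copy current sector versions into a consolidation block.
    On a program failure, continue in a breakout block so no sector is
    copied more than once, and defer the final merge."""
    first = target
    for sector in latest_sectors:
        try:
            target.program(sector)
        except ProgramError:
            target = spares.pop()            # breakout consolidation block
            target.program(sector)           # continue without recopying
    if target is not first:
        deferred.append((first, target))     # finish in phase two
    return target

def finish_consolidation(failed, breakout, spares):
    """Phase two, run at an opportune time: gather both partial blocks
    into one block, after which the failed block can be discarded."""
    final = spares.pop()
    for sector in failed.sectors + breakout.sectors:
        final.program(sector)
    return final

# Usage: the consolidation block fails after two sectors but still completes.
deferred = []
spares = [Block(8), Block(8)]
consolidate(["LS0'", "LS1'", "LS2", "LS3"], Block(8, fail_at=2), spares, deferred)
final = finish_consolidation(*deferred[0], spares)
print(final.sectors)    # ["LS0'", "LS1'", 'LS2', 'LS3']
```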

Non-Sequential Update Block Index

According to another aspect of the invention, in a non-volatile memory with a block management system that supports update blocks having non-sequential logical units, an index of the logical units in a non-sequential update block is buffered in RAM and stored periodically into the non-volatile memory. In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the location of recently written logical units can be determined without having to perform a scan during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.

Control Data Integrity and Management

According to another aspect of the invention, critical data, such as some or all of the control data, is guaranteed an extra level of reliability if maintained in duplicate. The duplication is performed in such a way that, for a multi-state memory system employing a two-pass programming technique to successively program multiple bits of the same set of memory cells, any programming error in the second pass cannot corrupt the data established by the first pass. Duplication also helps with the detection of write aborts and the detection of misdetection (i.e., both copies have good ECC but the data differ), and adds an extra level of reliability. Several techniques of data duplication are contemplated.

In one embodiment, after two copies of a given data have been programmed in an earlier programming pass, a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way, at least one of the two copies is unaffected if the subsequent programming pass is aborted before completion and corrupts the data of the earlier pass.

In another embodiment, the two copies of a given data are stored in two different blocks, and at most one of the two copies has its memory cells programmed in a subsequent programming pass.

In yet another embodiment, after the two copies of a given data have been stored in a programming pass, no further programming is performed on the set of memory cells used to store the two copies. This is accomplished by programming the two copies in a final programming pass for that set of memory cells.

In yet another embodiment, the two copies of a given data are programmed into a multi-state memory in binary programming mode, so that no further programming of the programmed memory cells will take place.
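A minimal sketch of the duplication idea, under assumed data structures (Python lists standing in for blocks, CRC32 standing in for ECC), might look as follows. It is not the patent's mechanism, only an illustration of how two copies permit detection both of a write abort and of the "both copies good but different" case mentioned above.

```python
# Illustrative only: two copies of a critical control record, each carrying
# its own CRC32 (standing in for ECC), written to two different blocks.
import zlib

def write_duplicates(block_a, block_b, payload):
    record = (payload, zlib.crc32(payload))
    block_a.append(record)      # copy 1
    block_b.append(record)      # copy 2, in a different block

def read_duplicates(block_a, block_b):
    good = []
    for block in (block_a, block_b):
        payload, crc = block[-1]
        if zlib.crc32(payload) == crc:   # this copy passes its own check
            good.append(payload)
    if len(good) == 2 and good[0] == good[1]:
        return good[0]                   # normal case: both copies agree
    if len(good) == 1:
        return good[0]                   # one copy corrupt: write abort survived
    raise IOError("no trustworthy copy, or two good copies that differ")

a, b = [], []
write_duplicates(a, b, b"GAT: logical group 7 -> metablock 120")
print(read_duplicates(a, b))
```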

In yet another embodiment, for a multi-state memory system employing a two-pass programming technique to successively program multiple bits of the same set of memory cells, a fault-tolerant code is employed to encode the multiple memory states, such that data established by an earlier programming pass is insensitive to errors in a subsequent programming pass.

According to another aspect of the invention, in a non-volatile memory with a block management system, a "control garbage collection," or preemptive relocation, of memory blocks is implemented to avoid the situation where a large number of update blocks all happen to need relocation at the same time. This situation can arise, for example, when updating the control data used to operate the block management system. A hierarchy of control data types can exist with varying degrees of update frequency, so their associated update blocks require garbage collection or relocation at different rates. There will be certain times when the garbage collection operations of more than one control data type coincide. In the extreme case, the relocation phases of the update blocks for all control data types could line up, resulting in all the update blocks needing relocation at the same time.

Additional features and advantages of the invention will be understood from the following description of its preferred embodiments, taken in conjunction with the accompanying drawings.

[Embodiments]

FIG. 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention. The memory system 20 typically operates with a host 10 through a host interface. The memory system is typically in the form of a memory card or an embedded memory system. The memory system 20 includes a memory 200 whose operations are controlled by a controller 100. The memory 200 comprises one or more arrays of non-volatile memory cells distributed over one or more integrated circuit chips. The controller 100 includes an interface 110, a processor 120, an optional coprocessor 121, ROM 122 (read only memory), RAM 130 (random access memory), and optionally programmable non-volatile memory 124. The interface 110 has one component interfacing the controller to the host and another component interfacing to the memory 200. Firmware stored in the non-volatile ROM 122 and/or the optional non-volatile memory 124 provides code for the processor 120 to implement the functions of the controller 100. Error correction codes may be processed by the processor 120 or the optional coprocessor 121. In an alternative embodiment, the controller 100 is implemented by a state machine (not shown). In yet another embodiment, the controller 100 is implemented within the host.

Logical and Physical Block Structures

FIG. 2 illustrates the memory, organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to a preferred embodiment of the invention. The memory 200 is organized into metablocks, where each metablock is a group of physical sectors S0, ..., SN-1 that are erasable together.

The host 10 accesses the memory 200 when running an application under a file system or operating system. Typically, the host addresses data in units of logical sectors where, for example, each sector may contain 512 bytes of data. It is also usual for the host to read from or write to the memory system in units of logical clusters, each consisting of one or more logical sectors. In some host systems, an optional host-side memory manager may exist to perform lower-level memory management at the host. In most cases, during read or write operations the host 10 essentially issues a command to the memory system 20 to read or write a segment containing a string of logical sectors of data with contiguous addresses.

A memory-side memory manager is implemented in the controller 100 of the memory system 20 to manage the storage and retrieval of the data of host logical sectors among the metablocks of the flash memory 200. In the preferred embodiment, the memory manager contains a number of software modules for managing erase, read, and write operations on the metablocks. The memory manager also maintains system control and directory data associated with its operations among the flash memory 200 and the controller RAM 130.

FIGS. 3A(i)-3A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention. The metablock of the physical memory has N physical sectors for storing the N logical sectors of data of a logical group. FIG. 3A(i) shows the data from a logical group LGi, where the logical sectors are in contiguous logical order 0, 1, ..., N-1. FIG. 3A(ii) shows the same data being stored in the metablock in the same logical order. The metablock, when stored in this manner, is said to be "sequential." In general, the metablock may instead have data stored in a different order, in which case the metablock is said to be "non-sequential" or "chaotic."

There may be an offset between the lowest address of a logical group and the lowest address of the metablock to which it is mapped. In this case, the logical sector address wraps around as a loop from the bottom back to the top of the logical group within the metablock. For example, in FIG. 3A(iii), the metablock stores in its first location the data beginning with logical sector k.

Upon reaching the last logical sector N-1, the metablock wraps around to sector 0 and finally stores the data associated with logical sector k-1 in its last physical sector. In the preferred embodiment, a page tag is used to identify any offset, such as by identifying the starting logical sector address of the data stored in the first physical sector of the metablock. Two blocks are considered to have their logical sectors stored in similar order when they differ only by a page tag.
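To make the page-tag arithmetic concrete, the following is a small illustrative calculation. The function names are invented; only the wrap-around addressing rule comes from the text.

```python
# Illustrative page-tag arithmetic for the wrap-around storage of Fig. 3A.

def physical_position(logical_offset, page_tag, group_size):
    """Where sector k of the logical group sits inside the metablock."""
    return (logical_offset - page_tag) % group_size

def stored_logical_offset(position, page_tag, group_size):
    """Inverse: which group sector occupies a given physical position."""
    return (position + page_tag) % group_size

N, tag = 8, 5    # a group of 8 sectors whose first recorded sector is LS5
print([stored_logical_offset(i, tag, N) for i in range(N)])
# [5, 6, 7, 0, 1, 2, 3, 4]: LS5 first, wrapping around through LS4 last
assert physical_position(5, tag, N) == 0
```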

FIG. 3B illustrates schematically the mapping between logical groups and metablocks. Each logical group is mapped to a unique metablock, except for a small number of logical groups in which data is currently being updated. After a logical group has been updated, it may be mapped to a different metablock. The mapping information is maintained in a set of logical-to-physical directories, described in more detail later.

Other types of logical group to metablock mapping are also contemplated. For example, metablocks with variable size are disclosed in the co-pending and commonly owned U.S. patent application entitled "Adaptive Metablocks," filed by Alan Sinclair on the same day as the present application. The entire disclosure of that co-pending application is hereby incorporated herein by reference.

One feature of the invention is that the system operates with a single logical partition, and groups of logical sectors throughout the logical address range of the memory system are treated identically. For example, sectors containing system data and sectors containing user data can be distributed anywhere in the logical address space.

Unlike prior art systems, there is no special partitioning or zoning of system sectors (i.e., sectors relating to file allocation tables, directories, or sub-directories) in order to localize, in logical address space, sectors whose data is likely to see high-frequency, small-size updates. Instead, the present scheme of updating logical groups of sectors efficiently handles the pattern of access that is typical of system sectors as well as of file data.

FIG. 4 illustrates the alignment of a metablock with structures in physical memory. Flash memory comprises blocks of memory cells that are erasable together as a unit. Such an erase block is the minimum unit of erase of the flash memory, or the minimum erasable unit (MEU) of the memory. The minimum erase unit is a hardware design parameter of the memory, although in some memory systems that support erasing of multiple MEUs, it is possible to configure a "super MEU" comprising more than one MEU. For flash EEPROM, a MEU may comprise one sector, but preferably multiple sectors; in the example shown, it has M sectors. In the preferred embodiment, each sector can store 512 bytes of data and has a user data portion and a header portion for storing system or overhead data. If the metablock is constituted from P MEUs, and each MEU contains M sectors, then each metablock will have N = P*M sectors.

At the system level, the metablock represents a group of memory locations, e.g., sectors, that are erasable together. The physical address space of the flash memory is treated as a set of metablocks, with a metablock being the minimum unit of erase. Within this specification, the terms "metablock" and "block" are used synonymously to define the minimum unit of erase at the system level for media management, and the term "minimum erase unit," or MEU, is used to denote the minimum unit of erase of the flash memory.

Linking of Minimum Erase Units (MEUs) to Form a Metablock

To maximize programming speed and erase speed, parallelism is exploited as much as possible by arranging for multiple pages of information, located in multiple MEUs, to be programmed in parallel, and for multiple MEUs to be erased in parallel.

In flash memory, a page is a grouping of memory cells that may be programmed together in a single operation. A page may comprise one or more sectors. Also, a memory array may be partitioned into more than one plane, where only one MEU within a plane may be programmed or erased at a time. Finally, the planes may be distributed among one or more memory chips.

In flash memory, the MEUs may comprise one or more pages. MEUs within a flash memory chip may be organized in planes. Since one MEU from each plane may be programmed or erased concurrently, it is expedient to form a multiple-MEU metablock by selecting one MEU from each plane (see FIG. 5B below).

FIG. 5A illustrates metablocks being constituted from the linking of minimum erase units of different planes. Each metablock, such as MB0, MB1, ..., is constituted from MEUs from different planes of the memory system, where the different planes may be distributed among one or more chips. The metablock link manager 170 shown in FIG. 2 manages the linking of the MEUs for each metablock. Each metablock is configured during an initial formatting process and retains its constituent MEUs throughout the life of the system, unless one of the MEUs fails.

FIG. 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock.

FIG. 5C illustrates another embodiment in which more than one MEU is selected from each plane for linking into a metablock. In another embodiment, more than one MEU may be selected from each plane to form a super MEU; for example, a super MEU may be formed from two MEUs. In this case, more than one pass may be taken for a read or write operation.
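A toy model of the per-plane linking, with an assumed free-list structure, is sketched below. It is not the metablock link manager 170 itself, merely an illustration of the one-MEU-per-plane rule.

```python
# Toy illustration of forming a metablock by taking one free minimum erase
# unit (MEU) from each plane; the free-list layout is an assumption.

def link_metablock(free_meus_by_plane):
    """Pop one free MEU per plane; a full-width metablock needs every plane."""
    if any(not free for free in free_meus_by_plane):
        raise RuntimeError("some plane has no free MEU to link")
    return tuple(free.pop(0) for free in free_meus_by_plane)

# Four planes, each with its own pool of free MEU numbers.
free_pool = [[3, 7], [1, 4], [2, 8], [5, 9]]
print(link_metablock(free_pool))   # (3, 1, 2, 5): one MEU from planes 0-3
print(free_pool)                   # [[7], [4], [8], [9]] remain available
```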

The linking and re-linking of MEUs into metablocks is also disclosed in the co-pending and commonly owned U.S. patent application entitled "Deterministic Grouping of Blocks into Multi-Block Structures," filed by Carlos Gonzales et al. on the same day as the present application. The entire disclosure of that co-pending application is hereby incorporated herein by reference.

Metablock Management

FIG. 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory. The metablock management system comprises various functional modules implemented in the controller 100 and maintains various control data (including directory data) in tables and lists hierarchically distributed over the flash memory 200 and the controller RAM 130. The functional modules implemented in the controller 100 include an interface module 110, a logical-to-physical address translation module 140, an update block manager module 150, an erased block manager module 160, and a metablock link manager 170.

The interface 110 allows the metablock management system to interface with the host system. The logical-to-physical address translation module 140 maps the logical address from the host to a physical memory location. The update block manager module 150 manages data update operations in memory for a given logical group of data. The erased block manager 160 manages the erase operation of the metablocks and their allocation for storage of new information. The metablock link manager 170 manages the linking of subgroups of minimum erasable blocks of sectors to constitute a given metablock. These modules are described in detail in their respective sections.

During operation, the metablock management system generates and works with control data such as addresses, control, and status information. Since much of the control data tends to be frequently changing data of small size, it cannot be readily stored and maintained efficiently in a flash memory with a large block structure. A hierarchical and distributed scheme is therefore employed: the more static control data is stored in the non-volatile flash memory, while the smaller amount of more frequently varying control data is located in controller RAM for more efficient update and access. In the event of a power shutdown or failure, the scheme allows the control data in the volatile controller RAM to be rebuilt quickly by scanning a small set of control data in the non-volatile memory. This is possible because the invention restricts the number of blocks associated with the possible activity of a given logical group of data; in this way, the scanning is confined. In addition, some of the control data that requires persistence is stored in a non-volatile metablock that can be updated sector by sector, with each update resulting in a new sector being recorded that supersedes a previous one. A sector indexing scheme is employed for control data to keep track of the sector-by-sector updates in a metablock.

The non-volatile flash memory 200 stores the bulk of the control data that is relatively static. This includes group address tables (GAT) 210, chaotic block indices (CBI) 220, an erased block list (EBL) 230, and MAP 240. The GAT 210 keeps track of the mapping between logical groups of sectors and their corresponding metablocks; the mappings do not change except upon update. The CBI 220 keeps track of the mapping of logically non-sequential sectors during an update. The EBL 230 keeps track of the pool of metablocks that have been erased. MAP 240 is a bitmap showing the erase status of all metablocks in the flash memory.

The volatile controller RAM 130 stores a small portion of control data that is frequently changing and accessed. This includes an allocation block list (ABL) 134 and a cleared block list (CBL) 136. The ABL 134 keeps track of the allocation of metablocks for recording update data, while the CBL 136 keeps track of metablocks that have been deallocated and erased. In the preferred embodiment, the RAM 130 acts as a cache for control data stored in the flash memory 200.
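The CBI policy of buffering the index in RAM and writing it out periodically can be sketched as follows. The flush interval, class name, and storage representation are assumptions for the example; the patent leaves these as implementation choices.

```python
# Minimal sketch (names assumed) of the chaotic-block index (CBI) policy:
# the index of logical sectors in a non-sequential update block is kept in
# controller RAM and flushed to non-volatile memory every few writes, so a
# power loss costs at most the last few index entries, recoverable from
# per-sector headers rather than from a full scan at initialization.

FLUSH_INTERVAL = 4          # assumed value; the patent leaves this a policy choice

class ChaoticBlockIndex:
    def __init__(self, nonvolatile_store):
        self.ram_index = {}             # logical sector -> position in update block
        self.writes_since_flush = 0
        self.store = nonvolatile_store  # stands in for the dedicated index block

    def record_write(self, logical_sector, position):
        self.ram_index[logical_sector] = position
        self.writes_since_flush += 1
        if self.writes_since_flush >= FLUSH_INTERVAL:
            self.store.append(dict(self.ram_index))   # periodic snapshot
            self.writes_since_flush = 0

index_block = []
cbi = ChaoticBlockIndex(index_block)
for pos, sector in enumerate([10, 11, 5, 6, 10, 30]):
    cbi.record_write(sector, pos)
print(len(index_block))     # 1: one snapshot, taken after the first four writes
print(cbi.ram_index)        # {10: 4, 11: 1, 5: 2, 6: 3, 30: 5}
```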
更新區塊為循序的更新區塊。在主機寫入操作#2中,會將 邏輯區段LS5-LS6更新成為LS5’-LS6’及將其記錄在緊接上 一個寫入之後之位置的更新區塊中。這可將循序的更新區 塊轉換為混亂的更新區塊。在主機寫入操作#3,再次更新 邏輯區段LS10及將其記錄在更新區塊的下一個位置成為 LSI0"。此時,更新區塊中的LSI0”可取代先前記錄中的 LSI0’,而LSI0’又可取代原始區塊中的LS10。在主機寫入 操作#4中,再次更新邏輯區段LS10的資料及將其記錄在更 新區塊的下一個位置中成為LSI 0’’’。因此,LS10,"現在是邏 輯區段LS10的最後且唯一有效的資料。在主機寫入操作#5 中,會更新邏輯區段LS30的資料及將其記錄在更新區塊中 成為L S 3 01。因此,此範例顯示可以按照任何順序及任意重 複,將一邏輯群組内的複數個邏輯區段寫入至一混亂更新 區塊中。 強制循序更新 圖8顯示由於兩個在邏輯位址有中斷之分開的主機寫入 操作而在邏輯群組中按循序順序寫入循序更新區塊之區段 的範例。在主機寫入#1中,將邏輯區段LS5-LS8的更新資料 §己錄在專用更新區塊中成為LS5’-LS8’。在主機寫入# 2中, 將邏輯區段LS14-LS16的更新資料記錄在上一個寫入之後 98704.doc -29- 1288328 的更新區塊中成為LS14,-LS16,。然而,在LS8及LS14之間 有位址跳躍,及主機寫入#2通常會使更新區塊成為非循 序。由於位址跳躍不是报多,一個選項是在執行主機寫入 #2之前將原始區塊之中間區段的資料複製到更新區塊,以 先執行填補操作(#2A)。依此方式,即可保存更新區塊的循 序特性。 圖9顯示根據本發明的一般具體實施例,更新區塊管理器 更新一個邏輯群組之資料的程序流程圖。更新程序包含以 下步驟: 步驟260:該記憶體被組織成複數個區塊,每個區塊均被 分割成可一起抹除的複數個記憶體單元,每個記憶體單元 係用於儲存一邏輯單元的資料。 步驟262:該資料被組織成複數個邏輯群組,每個邏輯群 組均被分割成複數個邏輯單元。 步驟264 :在標準的例子中,根據第一指定順序,較佳為Structures also reveals that multiple MEUs are linked and reconnected into relay blocks. The entire disclosure of this co-pending application is incorporated herein by reference. Trunk Block Management Figure 6 shows the Unintentional Blocks® of the Relay Block Management System implemented in the controller and flash memory. The middle block management system includes various functional modules implemented in the controller (10), and is transferred to various control data (including ^) in the form and the list of the flash memory 20 and the controller RAM 130. Directory information). The functional modules implemented in the controller (10) include an interface module, a logical entity address translation module 140, an update block manager module 150, and an erase block manager module 16 The relay block link manager 170. . 98704.doc • 23- 1288328 w-plane 110 allows the relay block management system to interface with the host system. The logical-to-real address translation module 14 maps the logical address of the host to the physical memory location. The update block manager module 150 manages the feed update operation for a given data logical group in the memory. The erased block manager (10) manages the erase operation of the relay block and its configuration for storing new information. The relay block link manager 170 manages the links of the sub-groups of the smallest erasable block of the segment to form a relay block for the cell. These modules will be explained in detail in their individual paragraphs. During the operation, the relay block management system generates and works with control data such as address, control and status information. Since many control materials tend to be small materials that are frequently changed, it is not possible to store and maintain flash memory towels having a large block structure at any time. In order to compare static control data in non-volatile flash memory, and to find a small number of relatively variable control data in the controller RAM for more efficient update and access, hierarchical and Decentralized solution. In the event of a power outage or failure, this scheme allows scanning of a small group of control data in non-volatile memory to quickly rebuild control data in the volatile controller RAM. This is possible because the present invention limits the number of blocks associated with a given logical group, and the > In this way, scanning can be limited. 
In addition, some of the control data that needs to be persistent is stored in the non-volatile relay block updated by the section, where each update will replace the new section of the previous section. The control data uses the segment indexing scheme to record updates by segment in the relay block. The non-volatile flash memory 200 stores a large amount of relatively static control data. This includes: Group Address Table (GAT) 210, chaotic block Suobeibei 98704.doc • 24· 1288328 (CBI) 220, erased block list (EBL) 23〇 and mAP 24〇. Gat 210 can record the mapping between logical groups of segments and their corresponding relay blocks. These mappings will not change unless updated. The CBI 220 can record the mapping of logically non-sequential segments during the update. EBl 230 can record the pool of relay blocks that have been erased. MAp 24〇 is a bit map showing the erase status of all the relay blocks in the flash memory. The volatile controller RAM 130 stores a small portion of the control data that is frequently changed and accessed. This includes a configuration block list (A]BL) 134 and a clear block list (CBL) 136. The ABL 134 can record the configuration of the relay block for recording updated data' while the CBL 136 can record the deconfigured and erased relay blocks. In the preferred embodiment, RAM 130 can be used as a cache memory for control data stored in flash memory 200. Update Block Manager The Update Block Manager 150 (shown in Figure 2) handles the updating of logical groups. In accordance with an aspect of the invention, each logical group of segments for updating is configured to record a dedicated update relay block for updating data. In a preferred embodiment, any block of one or more of the logical groups will be recorded in the update block. Update blocks can be managed to receive updated data in sequential or non-sequential (also known as "chaotic") order. The chaotic update block allows the section data to be updated in any order within the logical group, and individual sections can be arbitrarily repeated. In particular, it is not necessary to reconfigure any of the data sections, and the sequential update block can become a chaotic update block. Chaotic data updates do not require any predetermined block configuration; unscheduled writes of any logical address can be automatically incorporated. Therefore, unlike prior art systems, which differ from prior art systems, it is not necessary for 98704.doc -25 - 1288328 to specifically address whether each update block of the logical group is logically sequential or non-sequential. The general update block is only used to record various blocks in the order requested by the host. For example, even if the host, system data, or system control data tends to be updated in a chaotic manner, it is not necessary to process the logical address space corresponding to the host system data in a different manner than the host user data. Preferably, the data of the complete logical group of the segments is stored in a single relay block in a logical sequential order. In this way, the index of the stored logical section can be predefined. When a relay block stores all segments of a given logical group in a predetermined order, it can be said to be complete. As for the update block, when it finally fills the update data in a logically sequential order, the update block becomes an updated full relay block that can replace the original relay block at any time. 
On the other hand, if the update block is filled with update data in an order logically different from that of the intact block, the update block is a non-sequential or chaotic update block, and the out-of-order segments must be further processed so that the update data of the logical group is eventually stored in the same order as in the intact block, preferably in logically sequential order in a single metablock. The further processing involves consolidating the updated sectors in the update block with the unchanged sectors in the original block into yet another update metablock. The consolidated update block will then be in logically sequential order and can be used to replace the original block. Under certain predetermined conditions, the consolidation process is preceded by one or more compaction processes. A compaction simply re-records the sectors of the chaotic update block into a replacement chaotic update block, while eliminating any duplicate logical sectors that have been made obsolete by a subsequent update of the same logical sector.

The update scheme allows multiple update threads, up to a predetermined maximum, to run concurrently. Each thread is a logical group undergoing update through its dedicated update metablock.

Sequential Data Update

When data belonging to a logical group is first updated, a metablock is allocated and dedicated as an update block for the update data of the logical group. The update block is allocated when a command is received from the host to write a segment of one or more sectors of the logical group for which an existing metablock has been storing all its sectors intact. For the first host write operation, a first segment of data is recorded on the update block. Since each host write is a segment of one or more sectors with contiguous logical addresses, it follows that the first update is always sequential in nature. In subsequent host writes, update segments within the same logical group are recorded in the update block in the order received from the host. A block continues to be managed as a sequential update block while the sectors updated by the host within the associated logical group remain logically sequential. All sectors updated in this logical group are written to the sequential update block until the block is either closed or converted to a chaotic update block.

Figure 7A shows an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations, while the corresponding sectors in the original block of the logical group become obsolete. In host write operation #1, the data in the logical sectors LS5-LS8 is being updated. The updated data, as LS5'-LS8', is recorded in a newly allocated dedicated update block. For convenience, the first sector to be updated in the logical group is recorded in the dedicated update block starting from the first physical sector location. In general, the first logical sector to be recorded is not necessarily the first logical sector of the group, and there is therefore an offset between the start of the logical group and the start of the update block. This offset is known as the "page tag", as described previously in connection with Figure 3A. Subsequent sectors are updated in logically sequential order.
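The page-tag arithmetic can be illustrated with a few lines of Python. This is a minimal sketch under an assumed group size; the function name and the constant are not from the patent.

    SECTORS_PER_GROUP = 16  # assumed size of a logical group, for illustration

    def physical_offset(logical_offset: int, page_tag: int) -> int:
        # Physical sector position of a logical sector in a sequential
        # metablock whose first physical sector holds the logical offset
        # `page_tag`; the block wraps around modulo the group size.
        return (logical_offset - page_tag) % SECTORS_PER_GROUP

    # Example: the update block of Figure 7A starts with LS5, so page_tag = 5.
    assert physical_offset(5, page_tag=5) == 0    # LS5' at the first position
    assert physical_offset(12, page_tag=5) == 7   # LS12' right after LS9'-LS11'
    assert physical_offset(0, page_tag=5) == 11   # LS0 wraps past LS15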
When the last sector of the logical group has been written, the group addresses wrap around and the write sequence continues with the first sector of the group. In host write operation #2, the segment of data in the logical sectors LS9-LS12 is being updated. The updated data, as LS9'-LS12', is recorded in the dedicated update block at a location directly following where the last write ended. It can be seen that the two host writes have been recorded in the update block in logically sequential order, namely LS5'-LS12'. The update block is regarded as a sequential update block since it has been filled in logically sequential order. The update data recorded in the update block obsoletes the corresponding data in the original block.

Chaotic Data Update

Chaotic update block management is initiated for an existing sequential update block when any sector updated by the host within the associated logical group is logically non-sequential. A chaotic update block is a form of data update block in which logical sectors within an associated logical group can be updated in any order and with any amount of repetition. It is created by conversion from a sequential update block when a sector written by the host is logically non-sequential to the previously written sector within the logical group being updated. All sectors subsequently updated in this logical group are written at the next available sector location in the chaotic update block, whatever their logical sector address within the group.

Figure 7B shows an example of sectors in a logical group being written in chaotic order to a chaotic update block as a result of five separate host write operations, while superseded sectors in the original block of the logical group and duplicated sectors in the chaotic update block become obsolete. In host write operation #1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10'-LS11' are stored in a newly allocated update block. At this point, the update block is a sequential one. In host write operation #2, the logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded in the update block in the location immediately following the last write. This converts the update block from a sequential to a chaotic one. In host write operation #3, the logical sector LS10 is being updated again and is recorded in the next location of the update block as LS10''. At this point, LS10'' in the update block supersedes LS10' in a previous recording, which in turn supersedes LS10 in the original block. In host write operation #4, the data in the logical sector LS10 is again updated and is recorded in the next location of the update block as LS10'''. Thus, LS10''' is now the last and the only valid version of the logical sector LS10. In host write operation #5, the data in the logical sector LS30 is being updated and is recorded in the update block as LS30'. The example illustrates that sectors within a logical group can be written to a chaotic update block in any order and with any repetition.
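The bookkeeping Figure 7B implies, namely that the last recorded copy of a logical sector is the only valid one, can be reproduced in a short sketch (illustrative names; real firmware would track physical metablock positions rather than a Python list):

    def replay_chaotic_writes(writes):
        # Append each written sector to the chaotic block in arrival order and
        # return, per logical sector, the physical index of its valid copy.
        block = []   # physical sector positions, in write order
        valid = {}   # logical sector -> physical index of the valid copy
        for logical_sectors in writes:      # one host write = a run of sectors
            for ls in logical_sectors:
                valid[ls] = len(block)      # newest copy supersedes older ones
                block.append(ls)
        return block, valid

    # The five host writes of Figure 7B:
    block, valid = replay_chaotic_writes([[10, 11], [5, 6], [10], [10], [30]])
    assert block == [10, 11, 5, 6, 10, 10, 30]
    assert valid[10] == 5   # LS10''' is the third recording, at position 5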
Forced Sequential Update

Figure 8 shows an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations that have a discontinuity in their logical addresses. In host write #1, the update data in the logical sectors LS5-LS8 is recorded in a dedicated update block as LS5'-LS8'. In host write #2, the update data in the logical sectors LS14-LS16 is being recorded in the update block, following the last write, as LS14'-LS16'. However, there is an address jump between LS8 and LS14, and the host write #2 would normally render the update block non-sequential. Since the address jump is not substantial, one option is to perform a padding operation (#2A) first, by copying the data of the intervening sectors from the original block to the update block before executing host write #2. In this way, the sequential nature of the update block is preserved.

Figure 9 shows a flow diagram of a process by which the update block manager updates the data of one logical group, according to a general embodiment of the invention. The update process comprises the following steps:

Step 260: The memory is organized into blocks, each block partitioned into memory units that are erasable together, each memory unit for storing a logical unit of data.

Step 262: The data is organized into logical groups, each logical group partitioned into logical units.

Step 264: In the standard case, all the logical units of a logical group are stored among the memory units of an original block according to a first prescribed order, preferably in logically sequential order. In this way, the index for accessing the individual logical units in the block is known.

Step 270: For data belonging to a given logical group (e.g., LG_x), a request is made to update a logical unit within LG_x. (A logical unit update is given as an example; in general the update will be a segment of one or more contiguous logical units within LG_x.)

Step 272: The requested logical units to be updated are stored in a second block, dedicated to recording the updates of LG_x. The recording order is according to a second order, typically the order in which the updates are requested. One feature of the invention allows an update block to be set up initially to be generic to recording data in logically sequential or logically chaotic order. So depending on the second order, the second block can be a sequential update block or a chaotic update block.

Step 274: The second block continues to have logical units recorded in it as the process loops back to step 270. The second block will be closed to receiving further updates when a predetermined condition for closure materializes. In that case, the process proceeds to step 276.

Step 276: A determination is made whether the closed second block has its update logical units recorded in the same order as that of the original block. The two blocks are regarded as having recorded the logical units in the same order when they differ by no more than a page tag, as described in connection with Figure 3A. If the two blocks have the same order, the process proceeds to step 280; otherwise, garbage collection must be performed in step 290.

Step 280: Since the second block has the same order as the first block, it is used to replace the original first block. The update process then ends at step 299.

Step 290: The latest version of each logical unit of the given logical group is gathered from among the second block (the update block) and the first block (the original block). The consolidated logical units of the given logical group are then written into a third block in an order the same as that of the first block.

Step 292: Since the third block (the consolidated block) has the same order as the first block, it is used to replace the original first block. The update process then ends at step 299.

Step 299: When the closeout process creates an intact update block, it becomes the new standard block for the given logical group. The update thread for the logical group is terminated.

Figure 10 shows a flow diagram of a process by which the update block manager updates the data of one logical group, according to a preferred embodiment of the invention. The update process comprises the following steps:

Step 310: For data belonging to a given logical group (e.g., LG_x), a request is made to update a logical sector within LG_x. (A sector update is given as an example; in general the update will be a segment of one or more contiguous logical sectors within LG_x.)

Step 312: If an update block dedicated to LG_x does not already exist, proceed to step 410 to initiate a new update thread for the logical group. This is accomplished by allocating an update block dedicated to recording the update data of the logical group. If there is already an update block open, proceed to step 314 to begin recording the update sectors onto the update block.

Step 314: If the current update block is already chaotic (i.e., non-sequential), simply proceed to step 510 to record the requested update sectors onto the chaotic update block. If the current update block is sequential, proceed to step 316 for processing of a sequential update block.

Step 316: One feature of the invention allows an update block to be set up initially to be generic to recording data in logically sequential or logically chaotic order. Since the logical group ultimately has its data stored in a metablock in logically sequential order, it is desirable to keep the update block sequential as far as possible. Less processing will then be required when the update block is closed out to further updates, since no garbage collection will be needed.

Thus, a determination is made whether the requested update follows the current sequential order of the update block. If the update follows it sequentially, the process proceeds to step 510 to perform a sequential update, and the update block remains sequential. On the other hand, if the update does not follow sequentially (a chaotic update), the sequential update block would be converted to a chaotic update block if no other action is taken.
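Steps 310 through 316 amount to a small dispatcher. The sketch below is one possible rendering with hypothetical helper methods (allocate_update_block, record, handle_nonsequential); it is not the patent's code, only the decision structure described above:

    def route_update(update_blocks, lg, first_sector, mgr):
        # Route an update of logical group `lg` per Figure 10, steps 310-316.
        # `update_blocks` maps a logical group to its open update block.
        blk = update_blocks.get(lg)
        if blk is None:                        # step 312: no dedicated block yet
            blk = mgr.allocate_update_block(lg)    # new update thread (step 410)
            update_blocks[lg] = blk
        if blk.is_chaotic:                     # step 314: already chaotic
            return mgr.record(blk, first_sector)   # step 510
        if first_sector == blk.next_sequential_address():  # step 316
            return mgr.record(blk, first_sector)   # stays sequential (step 510)
        return mgr.handle_nonsequential(blk, first_sector)  # steps 320/370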

Optional Forced Sequential Process

In one embodiment, no action is taken to salvage the situation; the process proceeds directly to step 370, where the update is allowed to turn the update block into a chaotic update block.

In another embodiment, a forced sequential process, step 320, is optionally performed to preserve the sequential update block as far as possible in view of a pending chaotic update. There are two situations, both of which require copying missing sectors from the original block in order to maintain the sequential order of the logical sectors recorded in the update block. The first situation is one where the update creates a short address jump. The second situation is to close out the update block prematurely in order to keep it sequential. The forced sequential process, step 320, comprises the following substeps:

Step 330: If the update creates a logical address jump not greater than a predetermined amount C_B, the process proceeds to the forced sequential update process of step 350; otherwise the process proceeds to step 340 to consider whether the block qualifies for a forced sequential closeout.

Step 340: If the number of unfilled physical sectors exceeds a predetermined design parameter C_C, whose typical value is half the size of the update block, the update block is relatively unused and will not be closed prematurely. The process proceeds to step 370, and the update block becomes chaotic. On the other hand, if the update block is substantially filled, it is regarded as having been well utilized already, and the process is directed to step 360 for a forced sequential closeout.

Step 350: The forced sequential update allows the currently sequential update block to remain sequential as long as the address jump does not exceed the predetermined amount C_B. Essentially, sectors from the update block's associated original block are copied to fill the gap spanned by the address jump. Thus, the sequential update block is padded with data in the intervening addresses before the current update is recorded sequentially, and the process proceeds to step 510.

Step 360: The forced sequential closeout allows the currently sequential update block to be closed out, if it is already substantially filled, rather than be converted to a chaotic one by the pending chaotic update. A chaotic or non-sequential update is defined as one with a forward address transition not covered by the address-jump exception above, a backward address transition, or an address repetition. To prevent the sequential update block from being converted by the chaotic update, the unwritten sector locations of the update block are filled by copying sectors from the update block's associated, partially obsolete original block. The original block is then fully obsolete and is erased. The current update block now holds the full set of logical sectors and is closed out as an intact metablock replacing the original metablock. The process then proceeds to step 430 to have a new update block allocated in its place, so as to accept the recording of the pending sector update first requested in step 310.
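Steps 330 through 360 reduce to two threshold comparisons. The handle_nonsequential helper of the earlier sketch could be fleshed out as follows; the helper names remain assumptions, and C_B and C_C are the design parameters defined above:

    def on_nonsequential_write(blk, first_sector, C_B, C_C, mgr):
        # Optional forced-sequential handling (Figure 10, steps 320-360).
        jump = first_sector - blk.next_sequential_address()
        if 0 < jump <= C_B:                        # steps 330/350: short forward jump
            mgr.pad_from_original(blk, jump)       # copy intervening sectors (cf. #2A)
            return mgr.record(blk, first_sector)   # block remains sequential
        if blk.unfilled_sectors() > C_C:           # step 340: block barely used
            return mgr.convert_to_chaotic(blk, first_sector)      # step 370
        mgr.pad_from_original(blk, blk.unfilled_sectors())         # step 360: fill,
        mgr.close_as_intact(blk)                   # close out; original is obsolete
        new_blk = mgr.allocate_update_block(blk.lg)                # step 430
        return mgr.record(new_blk, first_sector)                   # step 510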

Conversion to Chaotic Update Block

Step 370: When the pending update is not in sequential order and, optionally, the forced sequential conditions cannot be satisfied, the sequential update block is allowed to be converted to a chaotic one by recording the pending update sector, with its non-sequential address, onto the update block when the process proceeds to step 510. If the maximum number of chaotic update blocks already exists, the least recently accessed chaotic update block must first be closed before the conversion is allowed to proceed; the maximum number of chaotic blocks is thus not exceeded. The identification of the least recently accessed chaotic update block is the same as in the general case described in step 420, but constrained to chaotic update blocks only. Closing a chaotic update block at this point is achieved by consolidation, as described in step 550.

Allocation of a New Update Block Subject to System Restriction

Step 410: The process of allocating an erased metablock as an update block begins with a determination of whether a predetermined system restriction is exceeded. Due to finite resources, the memory management system typically allows a predetermined maximum number C_A of update blocks to exist concurrently. This limit is on the aggregate of sequential and chaotic update blocks, and it is a design parameter. In a preferred embodiment, the limit is, for example, a maximum of eight update blocks. Also, owing to the higher demand on system resources, there may be a corresponding predetermined limit (e.g., four) on the maximum number of chaotic update blocks that can be open concurrently. Thus, when C_A update blocks have already been allocated, the next allocation request can be satisfied only after closing one of the existing allocated ones; the process proceeds to step 420. When the number of open update blocks is less than C_A, the process proceeds directly to step 430.

Step 420: When the maximum number C_A of update blocks would be exceeded, the least recently accessed update block is closed and garbage collection is performed. The least recently accessed update block is identified as the update block associated with the least recently accessed logical block. For the purpose of determining the least recently accessed blocks, an access includes writes and, optionally, reads of logical sectors. A list of open update blocks is maintained in order of access; at initialization, no access order is assumed. The closure of an update block follows the same process described in connection with steps 360 and 530 when the update block is sequential, and the same process described in connection with step 540 when it is chaotic. The closure makes room for an update block to be allocated in step 430.

Step 430: The allocation request is fulfilled by allocating a new metablock as an update block dedicated to the given logical group LG_x. The process then proceeds to step 510.

Recording Update Data onto the Update Block

Step 510: The requested update sector is recorded onto the next available physical location of the update block. The process then proceeds to step 520 to determine whether the update block is ripe for closeout.

Update Block Closeout

Step 520: A determination is made whether the update block has room for accepting additional updates. If it does, the update thread remains open, and processing continues when the next request for an update in LG_x appears in step 310. Otherwise, the update block is closed out, proceeding to step 522. There are two possible implementations for handling a currently requested write that attempts to write more logical sectors than the block has room for. In the first implementation, the write request is split into two portions, with the first portion written up to the last physical sector of the block. The block is then closed, and the second portion of the write is treated as the next requested write. In the other implementation, the requested write is withheld while the block has its remaining sectors padded, and is then closed. The requested write is then treated as the next requested write.

Step 522: If the update block is sequential, proceed to step 530 for a sequential closeout. If the update block is chaotic, proceed to step 540 for a chaotic closeout.

Sequential Update Block Closeout

Step 530: Since the update block is sequential and fully filled, the logical group stored in it is intact. The metablock is intact and replaces the original one. At this point, the original block is fully obsolete and is erased. The process then proceeds to step 570, where the update thread for the given logical group ends.

Chaotic Update Block Closeout

Step 540: Since the update block is filled non-sequentially and may contain multiple updates of the same logical sectors, garbage collection is performed to salvage the valid data in it. The chaotic update block is either compacted or consolidated. Which process to perform is determined in step 542.

Step 542: Whether to perform compaction or consolidation depends on the degeneracy of the update block. If a logical sector has been updated multiple times, its logical address is highly degenerate: multiple versions of the same logical sector are recorded on the update block, and only the last recorded version is the valid one for that logical sector. In an update block containing logical sectors with multiple versions, the number of distinct logical sectors is much less than that of the logical group. In the preferred embodiment, when the number of distinct logical sectors in the update block exceeds a predetermined design parameter C_D, whose typical value is half the size of the logical group, the closeout process performs a consolidation in step 550; otherwise the process proceeds to a compaction in step 560.

Step 550: If the chaotic update block is to be consolidated, the original block and the update block are replaced by a new standard metablock containing the consolidated data. After consolidation, the update thread ends at step 570.

Step 560: If the chaotic update block is to be compacted, it is replaced by a new update block carrying the compacted data. After compaction, the processing of the compacted update block ends at step 570. Alternatively, compaction can be delayed until the update block is written to again, thereby removing the possibility of a compaction being followed by a consolidation with no intervening updates. The new update block is then used in further updating of the given logical group when the next request for an update in LG_x appears at step 310.

Step 570: When the closeout process creates an intact update block, it becomes the new standard block for the given logical group, and the update thread for the logical group terminates. When the closeout process creates a new update block replacing an existing one, the new update block is used to record the next update requested for the given logical group. When an update block is not closed out, processing continues when the next request for an update in LG_x appears in step 310.

As can be seen from the process above, when a chaotic update block is closed, the update data recorded on it is further processed. In particular, its valid data is garbage-collected either by a compaction to another chaotic block, or by a consolidation with its associated original block to form a new standard sequential block.
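The choice made in step 542 reduces to a single count. A minimal sketch, assuming a recorded_sectors() accessor and the design parameter C_D from above:

    def close_out_chaotic(blk, C_D, mgr):
        # Step 542: count the distinct logical sectors recorded in the block.
        distinct = len({s.logical_address for s in blk.recorded_sectors()})
        if distinct > C_D:        # little duplication: worth making intact now
            mgr.consolidate(blk)  # step 550, detailed in Figure 11A
        else:                     # highly degenerate: just squeeze out duplicates
            mgr.compact(blk)      # step 560, detailed in Figure 11B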
Figure 11A is a flow diagram illustrating in more detail the consolidation process for closing a chaotic update block shown in Figure 10. Chaotic update block consolidation is one of two possible processes performed when the update block is being closed out, for example when the update block is full, with its last physical sector location written. Consolidation is chosen when the number of distinct logical sectors written in the block exceeds the predetermined design parameter C_D. The consolidation process, step 550 of Figure 10, comprises the following substeps:

Step 551: When the chaotic update block is being closed, a new metablock that will replace it is allocated.

Step 552: The latest version of each logical sector is gathered from among the chaotic update block and its associated original block, ignoring all obsolete sectors.

Step 554: The gathered valid sectors are recorded onto the new metablock in logically sequential order to form an intact block, i.e., a block with all the logical sectors of the logical group recorded in sequential order.

Step 556: The original block is replaced with the new intact block.

Step 558: The closed-out update block and the original block are erased.

Figure 11B is a flow diagram illustrating in more detail the compaction process for closing a chaotic update block shown in Figure 10. Compaction is chosen when the number of distinct logical sectors written in the block is below the predetermined design parameter C_D. The compaction process, step 560 of Figure 10, comprises the following substeps:

Step 561: When the chaotic update block is being compacted, a new metablock that will replace it is allocated.

Step 562: The latest version of each logical sector is gathered from the existing chaotic update block to be compacted.

Step 564: The gathered sectors are recorded onto the new update block to form a new update block with compacted sectors.

Step 566: The existing update block is replaced with the new update block having the compacted sectors.

Step 568: The closed-out update block is erased.
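Both garbage-collection variants start from the same gather of latest versions and differ only in what is written out and what is erased. A sketch under assumed block and manager interfaces:

    def latest_versions(chaotic_block):
        # Later copies of the same logical sector overwrite earlier ones.
        latest = {}
        for sector in chaotic_block.sectors_in_write_order():
            latest[sector.logical_address] = sector
        return latest

    def consolidate(chaotic_block, original_block, mgr):      # Figure 11A
        new_block = mgr.allocate_metablock()                  # step 551
        latest = latest_versions(chaotic_block)               # step 552
        for addr in original_block.logical_addresses():       # step 554: sequential
            sector = latest[addr] if addr in latest else original_block.read(addr)
            new_block.append(sector)
        mgr.replace_original(original_block, new_block)       # step 556
        mgr.erase(chaotic_block)                              # step 558
        mgr.erase(original_block)

    def compact(chaotic_block, mgr):                          # Figure 11B
        new_block = mgr.allocate_metablock()                  # step 561
        for sector in latest_versions(chaotic_block).values():  # step 562
            new_block.append(sector)                          # step 564
        mgr.replace_update_block(chaotic_block, new_block)    # step 566
        mgr.erase(chaotic_block)                              # step 568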
Logical and Metablock States

Figure 12A illustrates all possible states of a logical group, and the possible transitions between them under various operations. Figure 12B is a table listing the possible states of a logical group. The logical group states are defined as follows:

1. Intact: All logical sectors in the logical group have been written, in logically sequential order (possibly using page-tag wrap-around), in a single metablock.

2. Unwritten: No logical sector in the logical group has ever been written. The logical group is marked unwritten in a group address table and has no allocated metablock. A predefined data pattern is returned in response to a host read of every sector within this group.

3. Sequential Update: Some sectors within the logical group have been written, in logically sequential order (possibly using a page tag), in a metablock, so that they supersede the corresponding logical sectors from any previous Intact state of the group.

4. Chaotic Update: Some sectors within the logical group have been written, in logically non-sequential order (possibly using a page tag), in a metablock, so that they supersede the corresponding logical sectors from any previous Intact state of the group. A sector within the group may be written more than once, with the latest version superseding all previous versions.

Figure 13A illustrates all possible states of a metablock, and the possible transitions between them under various operations. Figure 13B is a table listing the possible states of a metablock. The metablock states are defined as follows:

1. Erased: All the sectors in the metablock are erased.

2. Sequential Update: The metablock is partially written with sectors in logically sequential order, possibly using a page tag. All the sectors belong to the same logical group.

3. Chaotic Update: The metablock is partially or fully written with sectors in logically non-sequential order. Any sector may be written more than once. All sectors belong to the same logical group.

4. Intact: The metablock is fully written in logically sequential order, possibly using a page tag.

5. Original: The metablock was previously Intact, but at least one sector has since been made obsolete by a host data update.
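For reference while reading the transition diagrams that follow, the two state sets can be written down as plain enumerations, a direct transcription of the tables above into Python:

    from enum import Enum, auto

    class LogicalGroupState(Enum):   # Figure 12B
        INTACT = auto()
        UNWRITTEN = auto()
        SEQUENTIAL_UPDATE = auto()
        CHAOTIC_UPDATE = auto()

    class MetablockState(Enum):      # Figure 13B
        ERASED = auto()
        SEQUENTIAL_UPDATE = auto()
        CHAOTIC_UPDATE = auto()
        INTACT = auto()
        ORIGINAL = auto()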
Figures 14(A) through 14(J) are state diagrams showing the effects of various operations on the state of a logical group and on the physical metablock.

Figure 14(A) shows the state diagram for the logical group and metablock transitions corresponding to a first write operation. The host writes one or more sectors of a previously unwritten logical group, in logically sequential order, to a newly allocated erased metablock. The logical group and the metablock go to the Sequential Update state.

Figure 14(B) shows the state diagram for the transitions corresponding to a first intact operation. A previously unwritten Sequential Update logical group becomes Intact as all its sectors are written sequentially by the host. The transition can also happen if the card fills out the group by filling the remaining unwritten sectors with a predefined data pattern. The metablock becomes Intact.

Figure 14(C) shows the state diagram for the transitions corresponding to a first chaotic operation. A previously unwritten Sequential Update logical group becomes Chaotic when at least one sector is written non-sequentially by the host.

Figure 14(D) shows the state diagram for the transitions corresponding to a first compaction operation. All valid sectors within a previously unwritten Chaotic Update logical group are copied from the old block to a new chaotic metablock, and the old block is then erased.

Figure 14(E) shows the state diagram for the transitions corresponding to a first consolidation operation. All valid sectors within a previously unwritten Chaotic Update logical group are moved from the old chaotic block to fill a newly allocated erased block in logically sequential order. Sectors not written by the host are filled with a predefined data pattern. The old chaotic block is then erased.

Figure 14(F) shows the state diagram for the transitions corresponding to a sequential write operation. The host writes one or more sectors of an Intact logical group, in logically sequential order, to a newly allocated erased metablock. The logical group and the metablock go to the Sequential Update state. The previously Intact metablock becomes an Original metablock.

Figure 14(G) shows the state diagram for the transitions corresponding to a sequential fill operation. A Sequential Update logical group becomes Intact when all its sectors are written sequentially by the host. This may also occur during garbage collection, when the Sequential Update logical group is filled with valid sectors from the original block in order to make it intact, after which the original block is erased.

Figure 14(H) shows the state diagram for the transitions corresponding to a non-sequential write operation. A Sequential Update logical group becomes Chaotic when at least one sector is written non-sequentially by the host. The non-sequential sector writes may cause valid sectors in either the update block or the corresponding original block to become obsolete.

Figure 14(I) shows the state diagram for the transitions corresponding to a compaction operation. All valid sectors within a Chaotic Update logical group are copied from the old block into a new chaotic metablock, and the old block is then erased. The original block is unaffected.

Figure 14(J) shows the state diagram for the transitions corresponding to a consolidation operation. All valid sectors within a Chaotic Update logical group are copied from the old chaotic block to fill a newly allocated erased block in logically sequential order. The old chaotic block and the original block are then erased.
Update Block Tracking and Management

Figure 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and of erased blocks for allocation. The allocation block list (ABL) 610 is held in controller RAM 130 to allow management of the allocation of erased blocks, of allocated update blocks, of associated blocks and of control structures, and to enable correct logical-to-physical address translation. In the preferred embodiment, the ABL includes a list of erased blocks, an open update block list 614, and a closed update block list 616.

The open update block list 614 is the set of block entries in the ABL having the attribute of an open update block. It has one entry for each data update block currently open. Each entry holds the following information. LG is the logical group address that the current update metablock is dedicated to. Sequential/Chaotic is the status indicating whether the update block is being filled with sequential or chaotic update data. MB is the metablock address of the update block. Page Tag is the starting logical sector recorded at the first physical location of the update block. Number of sectors written indicates the number of sectors currently written onto the update block. MB_0 is the metablock address of the associated original block. Page Tag_0 is the page tag of the associated original block.

The closed update block list 616 is a subset of the allocation block list (ABL): the set of block entries in the ABL having the attribute of a closed update block. It has one entry for each data update block that has been closed but whose entry has not yet been updated in the logical-to-physical directory. Each entry holds the following information. LG is the logical group address that the update block is dedicated to. MB is the metablock address of the update block. Page Tag is the starting logical sector recorded at the first physical location of the update block. MB_0 is the metablock address of the associated original block.
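The two kinds of list entries translate naturally into records. The sketch below mirrors the fields named above, with Python spellings assumed:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class OpenUpdateBlockEntry:        # one per open data update block
        lg: int                        # logical group being updated
        is_sequential: bool            # sequential vs. chaotic filling
        mb: int                        # metablock address of the update block
        page_tag: int                  # starting logical sector at physical position 0
        sectors_written: int           # sectors recorded so far
        mb0: Optional[int]             # metablock address of the original block
        page_tag0: Optional[int]       # page tag of the original block

    @dataclass
    class ClosedUpdateBlockEntry:      # closed, but directory not yet updated
        lg: int
        mb: int
        page_tag: int
        mb0: Optional[int]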
Chaotic Block Indexing

A sequential update block has its data stored in logically sequential order, so any logical sector in the block can be located easily. A chaotic update block has its logical sectors stored out of order, and may also store multiple update generations of a logical sector. Additional information must be maintained to keep track of where each valid logical sector is located in the chaotic update block.

In the preferred embodiment, chaotic block indexing data structures allow tracking and fast access of all valid sectors in a chaotic block. Chaotic block indexing independently manages small regions of logical address space, and efficiently handles hot regions of system data and of user data. The indexing data structures essentially allow index information to be maintained in flash memory with infrequent update requirements, so that performance is not significantly impacted. On the other hand, lists of recently written sectors in chaotic blocks are held in a chaotic sector list in the controller RAM. Also, a cache of index information from flash memory is held in the controller RAM to minimize the number of flash sector accesses for address translation. The indices for each chaotic block are stored in chaotic block index (CBI) sectors in the flash memory.

Figure 16A illustrates the data fields of a chaotic block index (CBI) sector. A chaotic block index sector (CBI sector) contains an index for each sector in a logical group mapped to a chaotic update block, defining the location of each sector of the logical group within the chaotic update block or within its associated original block. The CBI sector includes a chaotic block index field for keeping track of valid sectors within the chaotic block, a chaotic block info field for keeping track of address parameters of the chaotic block, and a sector index field for keeping track of the valid CBI sectors within the metablock storing the CBI sectors (the CBI block).
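One way to model the three fields of a CBI sector is sketched below; the sentinel IN_ORIGINAL stands in for the reserved index value whose meaning is detailed in the field descriptions that follow (all names are assumptions):

    from dataclasses import dataclass, field

    IN_ORIGINAL = -1   # reserved index value: valid data lives in the original block

    @dataclass
    class CbiSector:
        # Chaotic block index field: for each logical sector of the group (or
        # subgroup), the offset of its valid copy within the chaotic update block.
        chaotic_block_index: list = field(default_factory=list)
        # Chaotic block info field: one entry per chaotic update block in the
        # system, as (logical group address, metablock address,
        # offset of the last written sector).
        chaotic_block_info: list = field(default_factory=list)
        # Sector index field: offset, within the CBI block, of the most recently
        # written CBI sector for each permitted chaotic update block.
        sector_index: list = field(default_factory=list)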
Figure 16B shows an example of chaotic block index (CBI) sectors being recorded in a dedicated metablock. The dedicated metablock is referred to as a CBI block 620. When a CBI sector is updated, it is written to the next available physical sector location in the CBI block 620. Multiple copies of a CBI sector may therefore exist in the CBI block, with only the last written copy being valid. For example, the CBI sector for the logical group LG1 has been updated three times, the latest version being the valid one. The location of each valid sector in the CBI block is identified by a set of indices in the last written CBI sector in the block. In this example, the last written CBI sector in the block belongs to another logical group, and its set of indices is the valid one, superseding all previous index sets. When the CBI block eventually becomes fully filled with CBI sectors, the block is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.

The chaotic block index field within a CBI sector contains an index entry for each logical sector within a logical group, or subgroup, mapped to a chaotic update block. Each index entry signifies an offset within the chaotic update block at which the valid data for the corresponding logical sector is located. A reserved index value indicates that no valid data for the logical sector exists in the chaotic update block, and that the corresponding sector in the associated original block is valid. A cache of some chaotic block index field entries is held in the controller RAM.

The chaotic block info field within a CBI sector contains one entry for each chaotic update block existing in the system, recording address parameter information for the block. Information in this field is valid only in the last written sector in the CBI block. This information is also present in data structures in RAM. The entry for each chaotic update block includes three address parameters. The first is the logical address of the logical group (or logical group number) associated with the chaotic update block. The second is the metablock address of the chaotic update block. The third is the physical address offset of the last sector written in the chaotic update block. The offset information sets the start point for scanning the chaotic update block during initialization, in order to rebuild the data structures in RAM.

The sector index field contains an entry for each valid CBI sector in the CBI block. It defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
Figure 16C is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update. During the update process, the update data is recorded in the chaotic update block, while the unchanged data remains in the original metablock associated with the logical group. The process of accessing a logical sector of the logical group under chaotic update is as follows:

Step 650: Begin locating a given logical sector of a given logical group.

Step 652: Locate the last written CBI sector in the CBI block.

Step 654: Locate the chaotic update block or original block associated with the given logical group by looking up the chaotic block info field of the last written CBI sector. This step can be performed at any time before step 662.

Step 658: If the last written CBI sector is directed to the given logical group, the CBI sector has been located; proceed to step 662. Otherwise, proceed to step 660.

Step 660: Locate the CBI sector for the given logical group by looking up the sector index field of the last written CBI sector.

Step 662: Locate the given logical sector, in either the chaotic block or the original block, by looking up the chaotic block index field of the located CBI sector.
Figure 16D is a flow diagram illustrating access to the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which logical groups have been partitioned into subgroups. The finite capacity of a CBI sector can only keep track of a predetermined maximum number of logical sectors. When a logical group has more logical sectors than a single CBI sector can handle, the logical group is partitioned into multiple subgroups, with a CBI sector assigned to each subgroup. In one example, each CBI sector has sufficient capacity for tracking a logical group consisting of 256 sectors, and up to 8 chaotic update blocks. If a logical group has a size in excess of 256 sectors, a separate CBI sector exists for each 256-sector subgroup within the logical group. CBI sectors may exist for up to 8 subgroups within a logical group, giving support for logical groups up to 2048 sectors in size.

In the preferred embodiment, an indirect indexing scheme is employed to facilitate management of the indices. Each entry of the sector index has direct and indirect fields.

The direct sector index defines the offsets within the CBI block at which all possible CBI sectors relating to a specific chaotic update block are located. Information in this field is valid only in the last written CBI sector relating to that specific chaotic update block. A reserved value of an offset in the index indicates that the CBI sector does not exist, either because the corresponding logical subgroup relating to the chaotic update block does not exist, or because it has not been updated since the update block was allocated.

The indirect sector index defines the offsets within the CBI block at which the most recently written CBI sectors relating to each permitted chaotic update block are located. A reserved value of an offset in the index indicates that a permitted chaotic update block does not exist.
Figure 16D shows the process of accessing a logical sector of a logical group under chaotic update, with the following steps:

Step 670: Partition each logical group into multiple subgroups and assign a CBI sector to each subgroup.

Step 680: Begin locating a given logical sector of a given subgroup of a given logical group.

Step 682: Locate the last written CBI sector in the CBI block.

Step 684: Locate the chaotic update block or original block associated with the given subgroup by looking up the chaotic block info field of the last written CBI sector. This step can be performed at any time before step 696.

Step 686: If the last written CBI sector is directed to the given logical group, proceed to step 691. Otherwise, proceed to step 690.

Step 690: Locate the last written of the multiple CBI sectors for the given logical group by looking up the indirect sector index field of the last written CBI sector.

Step 691: At least one CBI sector associated with one of the subgroups of the given logical group has been located. Continue.

Step 692: If the CBI sector located so far is directed to the given subgroup, the CBI sector for the given subgroup has been located; proceed to step 696. Otherwise, proceed to step 694.

Step 694: Locate the CBI sector for the given subgroup by looking up the direct sector index field of the currently located CBI sector.

Step 696: Locate the given logical sector, in either the chaotic block or the original block, by looking up the chaotic block index field of the CBI sector for the given subgroup.

Figure 16E illustrates examples of chaotic block index (CBI) sectors and their functions, in the embodiment where each logical group is partitioned into multiple subgroups. A logical group 700 originally has its intact data stored in an original metablock 702. The logical group then undergoes updates with the allocation of a dedicated chaotic update block 704. In the present example, the logical group 700 is partitioned into subgroups A, B, C, D, each with 256 sectors.

In order to locate the ith sector in subgroup B, the last written CBI sector in the CBI block 620 is located first. The chaotic block info field of the last written CBI sector provides the address for locating the chaotic update block 704 for the given logical group, as well as the location of the last sector written in the chaotic block. This information is useful for scanning and for rebuilding indices.

If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group, it is further determined whether it is exactly the CBI sector for the given subgroup B that contains the ith logical sector. If it is, the chaotic block index of the CBI sector points to the metablock location storing the data for the ith logical sector. The sector location could be either in the chaotic update block 704 or in the original block 702.

If the last written CBI sector turns out to be one of the four CBI sectors of the given logical group but is not exactly the one for subgroup B, its direct sector index is looked up to locate the CBI sector for subgroup B. Once this exact CBI sector has been located, its chaotic block index is looked up to locate the ith logical sector among the chaotic update block 704 and the original block 702.

If the last written CBI sector turns out not to be any of the four CBI sectors of the given logical group, its indirect sector index is looked up to locate one of the four. In the example shown in Figure 16E, the CBI sector for subgroup C is located. This CBI sector for subgroup C then has its direct sector index looked up to locate the exact CBI sector for subgroup B. The example shows that when its chaotic block index is looked up, the ith logical sector is found to be unchanged, and its valid data is located in the original block.

Similar considerations apply to locating the jth logical sector in subgroup C of the given logical group. The example shows that the last written CBI sector turns out not to be any of the four CBI sectors of the given logical group. Its indirect sector index points to one of the four CBI sectors for the group. The last written of the four pointed to also happens to be exactly the CBI sector for subgroup C. When its chaotic block index is looked up, the jth logical sector is found to be located at a designated location in the chaotic update block 704.

A chaotic sector list exists in controller RAM for each chaotic update block in the system. Each list contains a record of the sectors written to the chaotic update block since the related CBI sector in flash memory was last updated, up to the current sector. The number of logical sector addresses for a specific chaotic update block that can be held in a chaotic sector list is a design parameter with a typical value of 8 to 16. The optimum size of the list is determined as a tradeoff between its effect on the overhead of chaotic data write operations and the sector scanning time during initialization.

During system initialization, each chaotic update block must be scanned in order to identify the valid sectors written since the previous update of one of its associated CBI sectors, and a chaotic sector list for each chaotic update block is constructed in controller RAM. Each block need only be scanned from the last sector address defined in the chaotic block info field of the last written CBI sector.

When a chaotic update block is allocated, a CBI sector is written to correspond to all updated logical subgroups. The logical and physical addresses of the chaotic update block are written into an available chaotic block info field in the sector, with null entries in the chaotic block index field. A chaotic sector list is opened in controller RAM.

When a chaotic update block is closed, a CBI sector is written with the logical and physical addresses of the block removed from the chaotic block info field. The corresponding chaotic sector list in RAM becomes unused.

The corresponding chaotic sector list in controller RAM is modified to include records of sectors written to the chaotic update block. When a chaotic sector list in controller RAM has no available space for a record of a further sector write to the chaotic update block, updated CBI sectors are written for the logical subgroups relating to the sectors in the list, and the list is cleared.

When the CBI block 620 becomes full, the valid CBI sectors are copied to an allocated erased block, and the previous CBI block is erased.
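Putting Figures 16D and 16E together, a lookup might proceed as sketched below, using the indirect and direct indices to reach the right CBI sector before consulting its chaotic block index. The helper names are assumptions, and the IN_ORIGINAL sentinel is reused from the earlier sketch.

    def locate_sector(cbi_block, lg, subgroup, offset_in_subgroup):
        # Find the valid copy of a logical sector of (lg, subgroup) undergoing
        # chaotic update (cf. Figure 16D, steps 680-696).
        last = cbi_block.last_written_sector()                  # step 682
        if not last.covers(lg):                                 # step 690:
            last = cbi_block.read(last.indirect_index(lg))      # hop via indirect index
        if not last.covers_subgroup(lg, subgroup):              # step 694:
            last = cbi_block.read(last.direct_index(subgroup))  # hop via direct index
        pos = last.chaotic_block_index[offset_in_subgroup]      # step 696
        if pos == IN_ORIGINAL:        # reserved value: sector never updated
            return ("original_block", offset_in_subgroup)
        return ("chaotic_update_block", pos)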
AT區塊中有效GAT區段所 佔用之總區段的一小部分係為系統設計參數,其通常為 25%。然而,每個GAT區塊中最多有64個有效GAT區段。在 大邏輯容量的系統中,可能必須在一個以上的GAT區塊中 儲存G AT區段。此時,各G AT區塊係和固定範圍的邏輯群組 關聯。 可將GAT更新執行作為控制寫入操作的部分,此操作會 在ABL用盡配置的區塊時受到觸發(見圖18)。其執行和abl 填充及CBL清空操作同時進行。在GAT更新操作期間,一個 GAT區段具有以關閉之更新區塊清單中對應項目所更新的 項目。GAT項目被更新時,會從關閉的更新區塊清單(cubl) 移除任何對應的項目。例如,會根據關閉的更新區塊清單 中的第-項目選擇要更新的GAT區段。可將更新區段寫入 G AT區塊中的下一個可用區段位置。 當沒有任何區段位置可供一已更新的GAT區段使用時, 98704.doc •54- 1288328 於控制寫入操作期間便會發生gat再寫入操作。將會配置 新的GAT區塊及從完整的GAT區塊按循序順序複製GAT索 引所定義的有效G AT區段。然後抹除整個GAT區塊。 GAT快取記憶體是控制器ram 130中,GAT區段之128個 項目之子分割之項目的複本。GAT快取記憶體項目的數量 是一項系統設計參數,代表值為32。每次從GAT區段讀取 一個項目時’即可建立相關區段子分割的gat快取記憶 體。將會維持多個GAT快取記憶體。其數量是代表值為4的 設計參數。GAT快取記憶體會根據最久未使用以不同區段 子分割的項目來覆寫。 抹除的中繼區塊管理 圖2所示的抹除區塊管理器ι6〇可使用一組維持目錄及系 統控制資訊的清單來管理抹除區塊。這些清單係分布於控 制器RAM 130及快閃記憶體200中。當必須配置已抹除的中 繼區塊以儲存使用者資料或儲存系統控制資料結構時,會 選擇保留在控制器RAM中的配置區塊清單(ABL)中的下一 個可用中繼區塊號碼(見圖15)。同樣地,在撤出中繼區塊後 而將其抹除時,會將其號碼新增至同樣保留在控制器rAM 中的清除區塊清單(CBL)。相對較靜態的目錄及系統控制資 料係儲存於快閃記憶體中。這些包括列出快閃記憶體中所 有中繼區塊之抹除狀態的已抹除區塊清單及位元對映 (MAP)。已抹除的區塊清單及MAp係儲存於個別區段中, 且會記錄在稱為「MAP區塊」的專用中繼區塊中。這些分 布於控制器RAM及快閃記憶體中的清單可提供已抹除區塊 98704.doc 1288328 記錄的層級以有效管理已抹除中繼區塊的使用。 圖18為顯示使用及再循環已抹除區塊之控制及目錄資訊 之分布及流程的示意方塊圖。控制及目錄資料係維持在被 保留在常駐於快閃記憶體2〇〇之控制器ram 130或在MAP 區塊750中的清單。 在較佳具體實施例中,控制器RAM 13 0會保留配置區塊 清單(ABL)610及清除區塊清單(Cbl)740。如先前結合圖15 所述’配置區塊清單(ABL)可記錄最近已配置哪個中繼區塊 以儲存使用者資料或儲存系統控制資料結構。在必須配置 新的已抹除中繼區塊時,會在配置區塊清單(ABL)中選擇下 一個可用的中繼區塊號碼。同樣地,會使用清除區塊清單 (CBL)記錄已經解除配置及抹除的更新中繼區塊。在控制器 RAM 13 0(見圖1)中會保留ABL及CBL以在追蹤相對作用中 更新區塊時進行快速存取及簡易操控。 配置區塊清單(ABL)可記錄即將成為更新區塊之已抹除 中繼區塊的集區及已抹除中繼區塊的配置。因此,各個這 些中繼區塊可由指定其是否為ABL懸置配置中的已抹除區 塊、開啟的更新區塊、或關閉的更新區塊之屬性來說明。 圖18顯示ABL含有:已抹除ABL清單612、開啟的更新區塊 清單614、及關閉的更新區塊清單616。此外,和開啟的更 新區塊清單614關聯的是關聯的原始區塊清單615。同樣 地,和關閉的更新區塊清單關聯的是關聯的已抹除原始區 塊清單617。如先前圖15所示’這些關聯的清單分別是開啟 的更新區塊清單614及關閉的更新區塊清單616的子集。已 98704.doc -56- 1288328 抹除的ABL區塊清單612、開啟的更新區塊清單614、及關 閉的更新區塊清單616均為配置區塊清單(ABL)610的子 集,各清單中的項目分別具有對應的屬性。 MAP區塊750是儲存快閃記憶體200中之抹除管理記錄專 用的中繼區塊。MAP區塊儲存MAP區塊區段的時間序列, 其中各MAP區段不是抹除區塊管理(EBM)區段760,就是 MAP區段780。當已抹除區塊在配置用盡且在撤出中繼區塊 時再循環時,關聯的控制及目錄資料較佳含在可在MAP區 塊更新的邏輯區段中,其中會將更新資料的各例項記錄在 新的區塊區段中。EBM區段760及MAP區段780的多個複本 可存在MAP區塊750中,其中只有最新版本為有效。有效 MAP區段之位置的索引係含在EMB區塊的攔位中。有效的 EMB區段總是在控制寫入操作期間最後被寫入MAP區塊 中。當MAP區塊750已滿時,會在控制寫入操作期間將所有 有效區段再寫入新的區塊位置而將其壓縮。然後抹除整個 區塊。 各EBM區段760含有已抹除的區塊清單(EBL)770,此清單 是已抹除區塊總體之子集位址的清單。已抹除的區塊清單 (EBL)770可當作含有已抹除之中繼區塊號碼的緩衝器,從 此緩衝器中會定期取用中繼區塊號碼以重新填充ABL,並 定期將中繼區塊號碼新增至此緩衝器中以重新清空CBL。 EBL 770可當作用於以下項目的缓衝器:可用的區塊緩衝器 (ABB)772、已抹除的區塊緩衝器(EBB)774及已清除的區塊 緩衝器(CBB)776。 98704.doc -57- 1288328 可用的區塊緩衝器(ABB)772含有緊接先前ABL填充操作 之後之ABL 610之項目的複本。其實際上是正在ABL填充操 作之後之ABL的備份複本。 已抹除的區塊緩衝器(EBB)774含有先前從MAP區段780 或CBB清單776傳送之已抹除的區塊位址(說明如下),且該 等位址可用在ABL填充操作期間傳送至ABL 610。 已清除的區塊緩衝器(CBB)776含有在CBL清空操作期間 已從CBL 740傳送及其後會被傳送至MAP區段780及EBB清 單774之已抹除區塊的位址。 各個MAP區段780含有稱為「MAP」的位元對映結構。 MAP會使用快閃記憶體中各中繼區塊的一位元,以用來表 示各區塊的抹除狀態。對應於EBM區段中ABL、CBL或已 抹除的區塊清單所列之區塊位址的位元在MAP中不會被設 為抹除狀態。 區塊配置演算法永遠不會使用在MAP、已抹除的區塊清 單、ABL或CBL内,任何未含有有效資料結構及未被指定 為已抹除區塊的區塊,因此無法存取此類區塊用於儲存主 機或控制資料結構。這可提供從可存取的快閃記憶體位址 空間排除具有缺陷位置之區塊的簡單機制。 圖1 8所示之階層可有效地管理已抹除區塊記錄,並且對 被儲存於該控制器之RAM中的該等區塊位址清單提供完整 的安全性。可以不頻繁的方式在該些區塊位址清單及一個 以上的MAP區段780之間交換已抹除的區塊項目。可於電源 關閉之後的系統初始化期間,透過被儲存於快閃記憶體中 98704.doc -58- 1288328 複數個區段中該等已抹除區塊清單及位址變換表中的資 訊,以及有限地掃描快閃記憶體中少量被參照的資料區 塊,來重建該些清單。 用於更新已抹除中繼區塊記錄之階層所採用的該等演管 法可以下面的順序來配置使用已抹除區塊:將來自該MAp 區塊750的區塊叢於位址順序中交錯來自該CBL 740的區塊 位址叢,其反映的係區塊被該主機更新的順序。對大部份 的中繼區塊大小與系統記憶體容量而言,單一 MAP區段可 針對該系統中的所有中繼區塊提供一位元對映。於此情況 中,已抹除的區塊必定會以和被記錄於此MAP區段中相同 的位址順序來配置使用。 抹除區塊管理操作 如上述,ABL 610是具有以下位址項目的清單··可經配置 使用的已抹除中繼區塊,及最近已配置為資料更新區塊之 中繼區塊。ABL·中區塊位址的實際數量介於為系統設計變 數之最大及最小限制之間。在製造期間,格式化之AbL項 目的數量是記憶卡類型及容量的函數。此外,由於可用之 已抹除區塊的數量會因壽命期間的區塊故障而縮減,也會 縮減ABL中項目的數量接近系統壽命終點。例如,在填充 操作後,ABL中的項目可指定可用於以下用途的區塊。每 個區塊具有一個項目之部分寫入資料更新區塊的項目不超 
過系統對同時開啟之最大更新區塊的限制。用於配置為資 料更新區塊之抹除區塊的一至二十個項目之間。配置為控 制區塊之已抹除區塊的四個項目。 98704.doc -59- 1288328 ABL填充操作 由於ABL 610會因為配置而變成耗盡,因此需要進行重新 填充。填充ABL的操作發生於控制寫入操作期間。此係觸 發於以下情況時··必須配置區塊,但ABL含有不足用於配 置為資料更新區塊或一些其他控制資料更新區塊的已抹除 區塊項目。在控制寫入期間,ABL填充操作係和GAT更新 操作同時進行。 在ABL填充操作期間會發生以下動作。 1. 保留具有目前資料更新區塊之屬性的ABL項目。 2. 保留已關閉資料更新區塊之屬性的ABL項目,除非該 區塊的某個項目正於該同時進行的GAT更新作業中被寫 入,於此情況中則會從該ABL中移除該項目。 3. 保留用於未配置之抹除區塊的ABL項目。 4_壓縮ABL以移除因移除項目所產生的間隙,以維持項 目的順序。 5·藉由附加來自該EBB清單中下次可用的項目,以完全 填充該ABL。 6.利用該ABL中該等目前的項目來覆寫該ABB清單。 CBL清空操作 CBL是控制器RAM中已抹除區塊位址的清單,對已抹除 區塊項目數量的限制和ABL相同。清空CBL的操作發生於 控制寫入操作期間。因此,其和ABL填充/GAT更新操作或 CBI區塊寫入操作同時進行。在CBL清空操作中,會從CBL 740移除項目並將其寫入CBB清單776。 98704.doc -60- 1288328 MAP交換操作 當EBB清單774已為清空時,在MAP區段78〇之抹除區塊 資訊及EBM區段760間的MAP交換操作可定期發生於控制 寫入操作期間。如果系統中的所有已抹除中繼區塊均記錄 在EBM區段760中,將無任何MAP區段780存在及不會執行 任何MAP交換。在MAP交換操作期間,用於將已抹除區塊 饋送給EBB 774的MAP區段被視為來源MAP區段782。相反 地’用於從CBB 776接收已抹除區塊的MAP區段被視為目的 地MAP區段784。如果只有一個MAP區段,則可當作來源及 目的地MAP區段,其定義如下。 在MAP交換期間會執行以下動作。 1 ·以遞增指標的方式為基礎,選擇一來源map區段。 2·以不在該來源MAP區段中之第一 CBB項目中的區塊位 址為基礎來選擇一目的map區段。 3_如該CBB中相關項目所定義的方式來更新該目的MAp 區段,並且從該CBB中移除該等項目。 4·將該已更新的目的MAP區段寫入至該MAP區塊之中, 除非沒有分離的來源MAP區段存在。 5.如該CBB中相關項目所定義的方式來更新該來源MAp 區段,並且從該CBB中移除該等項目。 6·將該CBB中剩餘的項目附加至該ebb之中。 7. 利用該來源MAP區段所定義之已抹除區段位址盡可能 地填充該EBB。 8. 將該已更新的來源MAP區段寫入至該MAP區塊中。 98704.doc -61 - 1288328 9·將一已更新的EBM區段寫入至該MAp區塊中。 清單管理 圖1 8顯不各種清單間控制及目錄資訊的分布與流程。為 了方便,在清單元件間移動項目或變更項目屬性的操作, 在圖18中識別為[A]至[〇],說明如下。 [A] 在將抹除區塊配置為主機資料的更新區塊時,會 將其在ABL中的項目屬性從已抹除的ABL區塊變更為開啟 的更新區塊。 [B] 在將已抹除的區塊配置為控制區塊時,會移除其在 ABL中的項目。 [C] 在建立一具有開啟更新區塊屬性的ABL項目時,會 將關聯的原始區塊攔位新增至項目,以記錄被更新之邏輯 群組的原始中繼區塊位址。從GAT可獲得此資訊。 [D] 關閉更新區塊時,其在ABL中的項目屬性會從開啟 的更新區塊變更為關閉的更新區塊。 [E] 關閉更新區塊時,會抹除其關聯的原始區塊,及會 將其在ABL中項目之關聯原始區塊欄位的屬性變更為已抹 除的原始區塊。 [F] 在ABL填充操作期間,任何其位址在相同控制寫入 操作期間於GAT中更新的已關閉更新區塊會從ab]l中移除 其項目。 [G] 在ABL填充操作期間,在從ABL移除已關閉更新區 塊的項目時,會將其關聯之已抹除原始區塊的項目移至 CBL。 98704.doc -62- 1288328 [Η] 在抹除控制區塊時,會將其所用項目新增至CBL。 [I] 在ABL填充操作期間,會從EBB清單將已抹除區塊 項目移至ABL,且被賦以已抹除之ABL區塊的屬性。 [J] 在ABL填充操作期間修改所有相關的ABL項目後, ABL中的區塊位址將取代ABB清單的區塊位址。 [K] 和控制寫入期間的ABL填充操作同時進行,將CBL 中已抹除區塊的項目移至CBB清單。 [L] 在MAP交換操作期間,從CBB清單將所有相關項目 移至MAP目的地區段。 [M] 在MAP交換操作期間,從CBB清單將所有相關項目 移至MAP來源區段。 [N] 在MAP交換操作期間的[L]與[M]之後,從CBB清單 將所有其餘項目移至EBB清單。 [O] 在MAP交換操作期間的[N]之後,如果可能,從MAP 來源區段移動除了在[Μ]中移動之項目以外的項目,以填充 EBB清單。 邏輯對實體位址轉譯 為了在快閃記憶體中尋找邏輯區段的實體位置,圖2所示 的邏輯對實體位址轉譯模組140可執行邏輯對實體位址轉 譯。除了最近已更新的邏輯群組外,可以使用常駐在控制 器RAM 130中快閃記憶體200或GAT快取記憶體的群組位址 表(GAT)執行大多數的轉譯。最近已更新之邏輯群組的位址 轉譯會需要查詢主要常駐在控制器RAM 130中之更新區塊 的位址清單。因此,邏輯區段位址之邏輯對實體位址轉譯 98704.doc -63- 1288328 的程序端視和區段所在之邏輯群組關聯之區塊的類型而 定。區塊的類型如下:完整區塊、循序資料更新區塊、混 亂資料更新區塊、關閉的資料更新區塊。 圖19為顯示邏輯對實體位址轉譯程序的流程圖。實質 上’先使用邏輯區段位址查詢各種更新目錄(例如,開啟的 更新區塊清單及關閉的更新區塊清單),即可尋找對應的中 繼區塊及實體區段。如果關聯的中繼區塊並不屬於更新程 序的部分,則由G AT提供目錄資訊。邏輯對實體位址轉譯 包括以下步驟: 步驟8 0 〇 :給定一邏輯區段位址。 步驟810:查詢控制器RAM中開啟之更新區塊清單614的 給定邏輯位址(見圖15與18)。如果查詢失敗,繼續進行至步 驟820,否則繼續進行至步驟83〇。 步驟820 ·在關閉的更新區塊清單616中查詢給定的邏輯 位址。如果查詢失敗,則給定的邏輯位址並不屬於任何更 新程序的部分;繼續進行至步驟87〇 ,以進行GAT位址轉 譯。否則繼續進行至步驟860,以進行關閉的更新區塊位址 轉譯。 步驟830 :如果含有給定邏輯位址的更新區塊為循序,則 繼縯進行至步驟84〇,以進行循序更新區塊位址轉譯。否則 繼績進行至步驟850,以進行混亂更新區塊位址轉譯。 步驟840:使用循序更新區塊位址轉譯來取得中繼區塊位 址。繼續進行至步驟880。 步驟850··使用混亂更新區塊位址轉譯來取得中繼區塊位 98704.doc 1288328 址。繼續進行至步驟880。 步驟860:使用關閉的更新區塊位址轉譯來取得中繼區塊 位址。繼續進行至步驟88〇。 步驟870 :使用群組位址表(GAT)轉譯來取得中繼區塊位 址。繼續進行至步驟88〇。 步驟880 :將中繼區塊位址轉換為實體位址。轉譯方法端 視中繼區塊是否已經重新連結而定。 步驟890:已取得實體區段位址。 下文將更詳細說明該等各種位址轉譯處理: 循序更新區塊位址轉譯(步驟84〇) 從開啟之更新區塊清單614 (圖15及18)的資訊即可直接 兀成和循序更新區塊關聯之邏輯群組中目標邏輯區段位址 的位址轉譯,說明如下。 1·從清單㈤「頁面標記」及「寫入的區段號媽」搁位可 決定目標邏輯區段是否已經配置在更新區塊或其關聯的原 始區塊中。 2·從清,單中可讀取適合目標邏輯區段的中繼區塊位址。 3·攸合適的「頁面標記」攔位可決定中繼區塊内的區段 位址。 混亂更新區塊位址轉譯(步驟85〇) 和混亂更新區塊關聯之邏輯群組中目標邏輯區段位址的 位址轉譯序列如下。 。=如果攸rAM中的混亂區段清單決定區段是最近寫入的 區段,則直接從其在此清單中的位置即可完成位址轉譯。 98704.doc 1288328 2·在CBI區塊中最近寫入的區段在其混亂區塊資料攔位 内含有和目標邏輯區段位址相關之混亂更新區塊的實體位 
址。其在間接區段索引欄位内也含有有關此混亂更新區塊 最後寫入之CBI區段之CBI區塊内的位移(見圖16A-16E)。 3 ·該些攔位中的資訊均被快取儲存於ram之中,而不需 要於後續的位址轉譯期間來讀取該區段。 4·讀取步驟3由間接區段索引攔位所識別的CBI區段。 5.將最近被存取之混亂更新子群的直接區段索引欄位快 取儲存於RAM之中,而不需要實施步驟4處的讀取以重複存 取相同的混亂更新區塊。 6·在步驟4或步驟5讀取的直接區段索引攔位接著可識別 有關含有目標邏輯區段位址之邏輯子群組的CBI區段。 7.從步驟6中識別的CBI區段讀取目標邏輯區段位址的混 亂區塊索引項目。 8 ·該最近被項取之混亂區塊索引桶位可被快取儲存於控 制器RAM之中,而不需要實施步驟4與步驟7處的讀取以重 複存取相同的邏輯子群。 9·混亂區塊索引項目可定義目標邏輯區段在混亂更新區 塊或關聯之原始區塊中的位置。如果目標邏輯區段的有效 複本係在原始區塊中,則可使用原始中繼區塊及頁面標記 資訊將其尋找。 關閉的更新區塊位址轉譯(步驟860) 從關閉之更新區塊清單的資訊即可直接完成和關閉之更 新區塊關聯之邏輯群組中目標邏輯區段位址的位址轉譯 98704.doc -66 - 1288328 (參見圖18),說明如下。 1. 從凊單中可讀取指派給目標邏輯群組的中繼區塊位 址。 2. 從清單中的「頁面標記」欄位可決定中繼區塊内的區 段位址。 GAT位址轉譯(步驟87〇) 如果邏輯群組不會受到開啟或關閉之區塊更新清單的參 考,則其在G AT中的項目為有效。由GAT所參考之邏輯群組 中目標邏輯區段位址的位址轉譯序列如下。 1·評估RAM中可用GAT快取記憶體的範圍,以決定目標 邏輯群組的項目是否含在GAT快取記憶體中。 2 ·如果在步驟1發現目標邏輯群組,則gat快取記憶體含 有完整的群組位址資訊,包括中繼區塊位址及頁面標記, 因此允許轉譯目標邏輯區段位址。 3. 如果目標位址不在G AT快取記憶體中,則必須讀取目 標GAT區塊的GAT索引,以識別有關目標邏輯群組位址之 G AT區段的位置。 4·最後存取之GAT區塊的GAT索引會保留在控制器ram 中,且不用從快閃記憶體讀取區段即可存取。 5 ·將一份由每個G AT區塊之中繼區塊位址及被寫入每個 GAT區塊之中的區段數量所組成的清單保存在控制器ram 之中。假使步驟4處無法取得必要的GAT索引,則可立刻從 快閃記憶體之中讀取。 6·從步驟4或步驟6處所獲得的GAT索引所定義的GA 丁區 98704.doc -67- 1288328 塊中的區段位置中讀取有關目標邏輯群組位址的gat區 段。以含有目標項目之區段的子分割來更新gat快取記憶 體。 7·從目標GAT項目内的中繼區塊位址及「頁面標記」攔 位取得目標區段位址。 中繼區塊對實體位址轉譯(步驟880) 如果和中繼區塊位址關聯的旗標代表中繼區塊已經被重 新連結,則會從BLM區塊讀取相關的LT區段,以決定目標 區段位址的抹除區塊位址。否則,會從中繼區塊位址決直 接疋抹除區塊位址。 控制資料管理 圖20顯示在記憶體管理的操作過程中,在控制資料結構 上執打的操作階層。資料更新管理操作可對常駐在^^“中 的各種清單發生作用。控制寫入操作可對快閃記憶體中各 種控制負料區段及專用區塊發生作用,並還能和中的 清單交換資料。 資料更新管理操作會於RAM中針對ABL、CBL、以及該 混亂區段清單來實施。當—已抹除區塊被配置為一更新區 鬼或控制區塊時’或是關閉—更新區塊時,便會更新該 ABL。當抹除一控制區壤時,或是將一已關閉的更新區塊 的某個項目寫入該GAT之中時,便會更新該cbl。當一區段 :寫入-混亂更新區塊之中時,便會更新該更新混亂區段 清單。 …、操作έ使知來自ram中之控制資料結構的資訊 98704.doc 68 - 1288328 被寫入快閃記憶體中的控制資料結構之中,必要時會隨之 更新快閃記憶體與RAM之中其它支援的控制資料結構。當 該ABL不含欲被配置為更新區塊的已抹除區塊的任何其它 項目時,或是再寫入該CBI區塊時,便會觸發控制寫入操作。 在較佳具體實施例中,會在每個控制寫入操作期間執行 ABL填充操作、CBL清空操作、及EBM區段更新操作。當 含有EBM區段的MAP區塊已滿時,會將有效的EBM及MAP 區段複製至已配置的已抹除區塊,然後抹除先前的MAP區 塊。 在每個控制寫入操作期間,寫入一個GAT區段,也會隨 著修改關閉的更新區塊清單。當GAT區塊已滿時,將執行 GAT再寫入操作。 如上述,經過幾次的混亂區段寫入作業之後,便會寫入 一 CBI區段。當CBI區塊變滿時,會將有效的CBI區段複製 到已配置的抹除區塊中,然後抹除先前的CBI區塊。 如上述,MAP交換操作係執行於EBM區段的EBB清單中 沒有其他已抹除的區塊項目時。 每次再寫入MAP區塊時,會在專用的MAPA區塊中寫入用 於記錄MAP區塊之目前位址的MAP位址(MAPA)區段。當 MAPA區塊已滿時,會將有效的MAPA區段複製至已配置的 已抹除區塊,然後抹除先前的MAPA區塊。 每次再寫入MAPA區塊時,會將啟動區段寫入目前的啟動 區塊中。當啟動區塊已滿時,會將目前版本之啟動區塊的 有效啟動區段複製至備份版本,然後該版本再變成目前的 98704.doc -69- 1288328 版本。先前的目前版本會被抹除並變成備份版本,並會將 有效的啟動區段寫回其中。 分散在多個記憶體平面上之記憶體的對齊 。如先前結合圖4及圖5A-5C所述,為了增加效能,會平行 操作多個記憶體平面。基本上,各平面有其自己的感测放 大器、,且作為項取及程式電路的部分,以平行服務跨越平面 之記憶體單元的對應頁面。在結合多個平面0夺,可平行操 作多個頁面,使得效能更為提高。A field that looks for the last written CBI section in multiple CBI sections of a given logical group. Step 69 1 : At least one CBI section associated with one of the subgroups of the given logical group has been found. carry on. Step 692: If the found CBI section points to a given subgroup, then 98704.doc 1288328 may look for a CBI section of the given subgroup. Proceed to step 696. Otherwise, proceed to step 694. Step 694: Find the CBI section of the given subgroup by querying the direct section index field of the CBI section currently being sought. Step 696: Find a chaotic block or a given logical segment in the original block by querying the chaotic block index field of the CBI section of the given subgroup. Figure 16E shows an example of a Chaotic Block Index (CBI) section and its functions in a specific embodiment in which each logical group is partitioned into a plurality of subgroups. The logical group 700 originally stores its complete data in the original relay block 7〇2. The logical group is then updated with the configuration-specific chaotic update block 704. 
In this example, logical grouping 700 is partitioned into subgroups, each of which has 256 segments. In order to find the i-th segment in subgroup B, the last written CBI segment in CBI block 620 is first looked for. The chaotic block information field of the last written CBI section can provide the address of the chaotic update block 704 looking for a given logical group. At the same time, it can also provide the location of the last segment written in the chaotic block. This information is useful when scanning and rebuilding indexes. If the last written CBI section result is one of the four CBI sections of a given logical group, it is further determined whether it is the CBI section of the given subgroup B containing the third logical section. If so, the chaotic block index of the cbi section will point to the location of the relay block storing the data of the next logical section. The segment location will be in the chaotic update block 704 or in the original block 702. If the last written CBI segment result is one of the four 98704.doc 1288328 CBI segments of the given logical group but not belong to Subgroup b will query its direct section index 'to find the CBI section of subgroup B. After looking for this exact CBI section *, the chaotic block index is queried to find the i-th logical section in the chaotic update block 704 and the original block 702. If the last written CBI section result is not one of the four CBI sections of a given logical group, the indirect section index is queried for one of the four sections. In the example shown in Figure 16E, the CBI section of subgroup c is looked for. This CBI section of subgroup C then queries its direct section index, _, to find the exact CBI section of subgroup B. This example shows that when querying a chaotic block index, it will find that the i th logical segment is unchanged and will find its valid data in the original block. The same considerations are also made when looking for the jth logical segment in a subgroup C of a given logical group. This example shows that the last written CBI section result is not any of the four CBI sections of a given logical group. Its indirect section index points to one of the four CBI sections of a given group. The last written result of the four pointed to is also the CBI section of subgroup C. When querying its chaotic block, it will find that the jth logical sector is found at the specified location in the chaotic update block 7〇4. There is a list of chaotic sections of the chaotic update blocks in the system in the controller RAM. Each list contains a record of the segments that were written into the chaotic update block from the beginning of the last updated CBI segment in the flash memory to the current segment. The number of logical sector addresses (which may remain in the chaotic section list) for a particular chaotic update block is a design parameter for a representative value of 8 to 16. The optimal size of the list determines the trade-off between the effect of the 98704.doc -50-1288328 degrees of consumption and the zone scan time during initialization. During system initialization, in order to identify the valid segments written since the previous update of one of its associated CBI segments, each chaotic update block must be scanned. In the controller RAM, a list of chaotic sections of each chaotic update block is formed. 
To construct these lists, each block need only be scanned from the last sector address defined in the chaotic block information field of its last written CBI sector.

When a chaotic update block is allocated, a CBI sector is written to correspond to all of the updated logical subgroups. The logical and physical addresses of the chaotic update block are written into an available chaotic block information field in the sector, with null entries in the chaotic block index field. A chaotic sector list is opened in controller RAM.

When a chaotic update block is closed, a CBI sector is written with the block's logical and physical addresses removed from the chaotic block information field in the sector. The corresponding chaotic sector list in RAM becomes unused.

The corresponding chaotic sector list in controller RAM may be modified to include records of sectors written to the chaotic update block. When the chaotic sector list in controller RAM has no available space for further records of sectors written to the chaotic update block, an updated CBI sector is written for the logical subgroups of the sectors in the list, and the list is then cleared.

When the CBI block 620 becomes full, the valid CBI sectors are copied to an allocated erased block, and the previous CBI block is then erased.

Address table

The logical-to-physical address translation module 140 shown in Figure 2 is responsible for relating the host's logical addresses to the corresponding physical addresses in flash memory. The mapping between logical groups and physical groups (metablocks) is stored in a set of tables and lists distributed among the nonvolatile flash memory 200 and the volatile but more agile RAM 130 (see Figure 1). An address table is maintained in flash memory containing a metablock address for every logical group in the memory system. In addition, logical-to-physical address records for recently written sectors are held temporarily in RAM. These volatile records can be reconstructed from block lists and data sector headers in flash memory when the system is initialized after power-up. The address table in flash memory therefore needs to be updated only infrequently, keeping the percentage of overhead write operations for control data low.

The hierarchy of address records for logical groups includes the open update block list and the closed update block list in RAM, and the group address table (GAT) maintained in flash memory.

The open update block list is a list in controller RAM of data update blocks that are currently open for writing updated host sector data. A block's entry is moved to the closed update block list when the block is closed. The closed update block list is a list in controller RAM of data update blocks that have been closed. A subset of the entries in the list is moved to a sector in the group address table during a control write operation.

The group address table (GAT) is a list of metablock addresses for all logical groups of host data in the memory system. The GAT contains one entry for each logical group, ordered sequentially according to logical address. The nth entry in the GAT contains the metablock address of the logical group with address n.
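Because GAT entries are stored in logical-address order, locating the entry for a group is pure arithmetic. A small sketch follows; the 128-groups-per-sector figure is from the text, while the function names are assumptions for illustration.

```c
#include <stdint.h>

#define GROUPS_PER_GAT_SECTOR 128  /* per the text: 128 contiguous logical groups */

/* Which GAT sector holds the entry for a logical group, and at what offset. */
static inline uint32_t gat_sector_of(uint32_t group) { return group / GROUPS_PER_GAT_SECTOR; }
static inline uint32_t gat_offset_of(uint32_t group) { return group % GROUPS_PER_GAT_SECTOR; }
```

For example, logical group 300 would be mapped by entry 44 of GAT sector 2 under this arrangement.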
In a preferred embodiment, the GAT is a table in flash memory comprising a set of sectors (referred to as GAT sectors), each containing a set of entries defining the metablock addresses of a range of logical groups in the memory system. The GAT sectors are located in one or more dedicated control blocks (referred to as GAT blocks) in flash memory.

Figure 17A shows the data fields of a group address table (GAT) sector. A GAT sector may, for example, have sufficient capacity to contain GAT entries for a set of 128 contiguous logical groups. Each GAT sector includes two components: a set of GAT entries for the metablock address of each logical group within the range, and a GAT sector index. The first component contains information for locating the metablock associated with a logical address. The second component contains information for locating all valid GAT sectors within the GAT block. Each GAT entry has three fields: the metablock number, the page tag as described previously in connection with Figure 3A(iii), and a flag indicating whether the metablock has been relinked. The GAT sector index lists the positions of the valid GAT sectors in a GAT block. This index is present in every GAT sector but is superseded by the version in the next GAT sector written in the GAT block. Only the version in the last written GAT sector is therefore valid.

Figure 17B shows an example of group address table (GAT) sectors recorded in one or more GAT blocks. A GAT block is a metablock dedicated to recording GAT sectors. When a GAT sector is updated, it is written at the next available physical sector location in the GAT block 720. Multiple copies of a GAT sector may therefore exist in the GAT block, with only the last written copy being valid. For example, GAT sector 255 (containing pointers for logical groups LG3968-LG4095) has been updated at least twice, with the latest version being the valid one. The location of every valid sector in the GAT block is identified by a set of indices in the last written GAT sector of the block. In this example, the last written GAT sector in the block is GAT sector 236, and its index set is the valid one, superseding all previous index sets. When the GAT block eventually becomes fully filled with GAT sectors, the block is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.

As described above, a GAT block contains entries for a logically contiguous set of groups within a region of logical address space. GAT sectors within a GAT block each contain logical-to-physical mapping information for 128 contiguous logical groups. The number of GAT sectors required to store entries for all logical groups within the address range spanned by a GAT block occupies only a fraction of the total sector positions in the block. A GAT sector may therefore be updated by writing it at the next available sector position in the block. An index of all valid GAT sectors and their positions in the GAT block is maintained in an index field in the most recently written GAT sector. The fraction of the total sectors in a GAT block occupied by valid GAT sectors is a system design parameter, which is typically 25%. However, there are at most 64 valid GAT sectors per GAT block.
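The on-flash layout of Figure 17A can be pictured with a short sketch. The three-field entry, the 128-entry capacity and the trailing index are from the text; the field widths and type names below are assumptions only.

```c
#include <stdint.h>

/* Hypothetical layout of one GAT sector (Figure 17A). */
typedef struct {
    uint32_t metablock;   /* metablock number for this logical group */
    uint16_t page_tag;    /* offset of the group's first logical sector in the metablock */
    uint16_t relinked;    /* flag: metablock has been re-linked */
} gat_entry_t;

#define GAT_ENTRIES     128  /* one GAT sector maps 128 contiguous logical groups */
#define GAT_SECTORS_MAX  64  /* at most 64 valid GAT sectors per GAT block */

typedef struct {
    gat_entry_t entry[GAT_ENTRIES];
    /* GAT sector index: positions of all valid GAT sectors in the GAT block.
     * Only the copy in the most recently written GAT sector is authoritative. */
    uint16_t valid_pos[GAT_SECTORS_MAX];
} gat_sector_t;
```

Keeping the index inside every sector is what makes the append-only update scheme work: whichever sector was written last carries the current map of all the others.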
In systems of large logical capacity, it may be necessary to store GAT sectors in more than one GAT block. In that case, each GAT block is associated with a fixed range of logical groups.

A GAT update is performed as part of a control write operation, which is triggered when the ABL runs out of blocks for allocation (see Figure 18). It is performed concurrently with the ABL fill and CBL empty operations. During a GAT update operation, one GAT sector has entries updated with corresponding entries from the closed update block list. When a GAT entry is updated, any corresponding entry is removed from the closed update block list (CUBL). For example, the GAT sector to be updated is selected on the basis of the first entry in the closed update block list. The updated sector is written at the next available sector location in the GAT block.

A GAT rewrite operation occurs during a control write operation when no sector location is available for an updated GAT sector. A new GAT block is allocated, and the valid GAT sectors defined by the GAT index are copied in sequential order from the full GAT block. The full GAT block is then erased.

A GAT cache is a copy in controller RAM 130 of the entries in a 128-entry subdivision of a GAT sector. The number of GAT cache entries is a system design parameter, with a typical value of 32. A GAT cache for the relevant sector subdivision is created each time an entry is read from a GAT sector. Multiple GAT caches are maintained; their number is a design parameter with a typical value of 4. A GAT cache is overwritten with entries for a different sector subdivision on a least-recently-used basis.

Erased metablock management

The erase block manager 160 shown in Figure 2 manages erased blocks using a set of lists maintaining directory and system control information. These lists are distributed among the controller RAM 130 and flash memory 200. When an erased metablock must be allocated for storage of user data or of a system control data structure, the next available metablock number in the allocation block list (ABL) held in controller RAM is selected (see Figure 15). Similarly, when a metablock is retired and erased, its number is added to a cleared block list (CBL), also held in controller RAM. Relatively static directory and system control data are stored in flash memory. These include erased block lists and a bitmap (MAP) listing the erased status of all metablocks in flash memory. The erased block lists and MAP are stored in individual sectors and are recorded in a dedicated metablock, known as a MAP block. These lists, distributed among controller RAM and flash memory, provide a hierarchy of erased block records that efficiently manages the use of erased metablocks.

Figure 18 is a schematic block diagram showing the distribution and flow of the control and directory information for the usage and recycling of erased blocks. The control and directory data are maintained in lists held either in controller RAM 130 or in a MAP block 750 residing in flash memory 200.

In a preferred embodiment, controller RAM 130 holds the allocation block list (ABL) 610 and the cleared block list (CBL) 740.
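The two RAM lists can be pictured with a minimal sketch. The capacity, structure and function names below are assumptions; only the roles (allocate from the ABL, retire into the CBL, and the triggers when either list is exhausted or full) come from the text.

```c
#include <stdint.h>

#define LIST_CAP 32  /* assumed capacity; the real limits are design variables */

typedef struct { uint32_t mb[LIST_CAP]; int n; } blocklist_t;

static blocklist_t abl;  /* allocation block list: erased blocks ready for use */
static blocklist_t cbl;  /* cleared block list: blocks deallocated and erased */

/* Allocate the next available erased metablock from the ABL. */
int abl_allocate(uint32_t *mb)
{
    if (abl.n == 0)
        return -1;                    /* triggers a control write with an ABL fill */
    *mb = abl.mb[0];
    for (int i = 1; i < abl.n; i++)   /* keep the remaining entries in order */
        abl.mb[i - 1] = abl.mb[i];
    abl.n--;
    return 0;
}

/* Record a metablock that has just been retired and erased. */
int cbl_add(uint32_t mb)
{
    if (cbl.n == LIST_CAP)
        return -1;                    /* triggers a CBL empty operation */
    cbl.mb[cbl.n++] = mb;
    return 0;
}
```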
As described previously in connection with Figure 15, the allocation block list (ABL) keeps track of which metablocks have recently been allocated for storage of user data or of system control data structures. When a new erased metablock must be allocated, the next available metablock number in the allocation block list (ABL) is selected. Similarly, the cleared block list (CBL) is used to keep track of update metablocks that have been deallocated and erased. The ABL and CBL are held in controller RAM 130 (see Figure 1) for speedy access and simple manipulation when tracking the relatively active update blocks.

The allocation block list (ABL) keeps track of a pool of erased metablocks about to become update blocks, and of the allocation of erased metablocks. Each of these metablocks may therefore be described by an attribute designating whether it is an erased block in the ABL pending allocation, an open update block, or a closed update block. Figure 18 shows the ABL containing an erased ABL list 612, an open update block list 614, and a closed update block list 616. In addition, associated with the open update block list 614 is an associated original block list 615. Similarly, associated with the closed update block list is an associated erased original block list 617. As shown previously in Figure 15, these associated lists are subsets of the open update block list 614 and the closed update block list 616, respectively. The erased ABL block list 612, the open update block list 614, and the closed update block list 616 are all subsets of the allocation block list (ABL) 610, the entries in each having the corresponding attribute.

The MAP block 750 is a metablock dedicated to storing erase management records in flash memory 200. The MAP block stores a time series of MAP block sectors, where each MAP sector is either an erase block management (EBM) sector 760 or a MAP sector 780. As erased blocks are used up in allocation and recycled when metablocks are retired, the associated control and directory data are preferably contained in logical sectors that may be updated within the MAP block, with each instance of updated data recorded in a new block sector. Multiple copies of EBM sectors 760 and of MAP sectors 780 may exist in the MAP block 750, with only the latest version being valid. An index to the positions of valid MAP sectors is contained in a field in the EBM sector. A valid EBM sector is always written last in the MAP block during a control write operation. When the MAP block 750 is full, it is compacted during a control write operation by rewriting all valid sectors to a new block location. The full block is then erased.

Each EBM sector 760 contains an erased block list (EBL) 770, which is a list of addresses of a subset of the population of erased blocks. The erased block list (EBL) 770 acts as a buffer containing erased metablock numbers, from which metablock numbers are periodically taken to refill the ABL, and to which metablock numbers are periodically added to re-empty the CBL. The EBL 770 serves as a buffer for the available block buffer (ABB) 772, the erased block buffer (EBB) 774, and the cleared block buffer (CBB) 776.

The available block buffer (ABB) 772 contains a copy of the entries of the ABL 610 immediately following the previous ABL fill operation. It is effectively a backup copy of the ABL just after an ABL fill operation.
The erased block buffer (EBB) 774 contains erased block addresses previously transferred either from MAP sectors 780 or from the CBB list 776 (described below), which are available for transfer to the ABL 610 during an ABL fill operation.

The cleared block buffer (CBB) 776 contains addresses of erased blocks that have been transferred from the CBL 740 during a CBL empty operation, and that will subsequently be transferred to MAP sectors 780 and to the EBB list 774.

Each MAP sector 780 contains a bitmap structure referred to as the MAP. The MAP uses one bit for each metablock in flash memory, which is used to indicate the erase status of each block. Bits corresponding to block addresses listed in the ABL, CBL, or erased block lists in the EBM sector are not set to the erased state in the MAP.

Any block that does not contain a valid data structure and that is not designated as an erased block within the MAP, the erased block lists, the ABL, or the CBL is never used by the block allocation algorithm, and such a block is therefore inaccessible for storage of host or control data structures. This provides a simple mechanism for excluding blocks with defective locations from the accessible flash memory address space.

The hierarchy shown in Figure 18 allows erased block records to be managed efficiently and provides full security of the block address lists stored in the controller's RAM. Erased block entries are exchanged between these block address lists and one or more MAP sectors 780 on an infrequent basis. The lists may be rebuilt during system initialization after a power-down, through the information in the erased block lists and address translation tables stored in sectors in flash memory, together with limited scanning of a small number of referenced data blocks in flash memory.

The algorithms adopted for updating the hierarchy of erased metablock records result in erased blocks being allocated for use in an order that interleaves bursts of blocks in address order from the MAP block 750 with bursts of block addresses from the CBL 740, which reflect the order in which blocks were updated by the host. For most metablock sizes and system memory capacities, a single MAP sector can provide a bitmap for all metablocks in the system. In this case, erased blocks are always allocated for use in the same address order as recorded in this MAP sector.

Erase block management operations

As described above, the ABL 610 is a list with address entries for erased metablocks that may be allocated for use, and for metablocks that have recently been allocated as data update blocks. The actual number of block addresses in the ABL lies between maximum and minimum limits, which are system design variables. The number of ABL entries formatted during manufacture is a function of the card type and capacity. In addition, the number of entries in the ABL may be reduced near the end of life of the system, as the number of available erased blocks is reduced by block failures during the lifetime. For example, after a fill operation, entries in the ABL may designate blocks available for the following purposes: entries for partially written data update blocks, with one entry per block, not exceeding the system limit on the maximum number of concurrently open update blocks; between one and twenty entries for erased blocks for allocation as data update blocks; and four entries for erased blocks for allocation as control blocks.
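The MAP bitmap described above is small enough to sketch directly. The block count and helper names are assumptions; the one-bit-per-metablock encoding, and the rule that blocks tracked by the ABL, CBL, or EBM-sector lists are deliberately not marked erased here, come from the text.

```c
#include <stdint.h>

#define NUM_METABLOCKS 8192  /* assumed array size */

/* One bit per metablock, set while the block is in the erased state.
 * A block is allocatable only if some record (MAP, EBL, ABL or CBL)
 * claims it as erased; everything else is excluded from use. */
static uint8_t map_bits[NUM_METABLOCKS / 8];

static inline void map_set_erased(uint32_t mb)   { map_bits[mb >> 3] |= (uint8_t)(1u << (mb & 7)); }
static inline void map_clear_erased(uint32_t mb) { map_bits[mb >> 3] &= (uint8_t)~(1u << (mb & 7)); }
static inline int  map_is_erased(uint32_t mb)    { return (map_bits[mb >> 3] >> (mb & 7)) & 1; }
```

The exclusion rule falls out naturally: a defective block simply never gets its bit set and never appears in any list, so the allocator can never reach it.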
ABL fill operation

Since the ABL 610 becomes depleted through allocations, it needs to be refilled. The operation of filling the ABL occurs during a control write operation. It is triggered when a block must be allocated, but the ABL contains insufficient erased block entries available for allocation as data update blocks or as some other control data update block. During a control write, the ABL fill operation is performed concurrently with a GAT update operation.

The following actions occur during an ABL fill operation:

1. ABL entries with the attribute of current data update blocks are retained.

2. ABL entries with the attribute of closed data update blocks are retained, unless an entry for the block is being written in the concurrent GAT update operation, in which case the entry is removed from the ABL.

3. ABL entries for unallocated erased blocks are retained.

4. The ABL is compacted to remove gaps created by the removal of entries, maintaining the order of entries.

5. The ABL is completely filled by appending the next available entries from the EBB list.

6. The ABB list is overwritten with the current entries in the ABL.

CBL empty operation

The CBL is a list of erased block addresses in controller RAM, with the same limit on the number of erased block entries as the ABL. The operation of emptying the CBL occurs during a control write operation. It is therefore performed concurrently with the ABL fill / GAT update operations, or with CBI block write operations. In a CBL empty operation, entries are removed from the CBL 740 and written to the CBB list 776.

MAP exchange operation

A MAP exchange operation between the erase block information in the MAP sectors 780 and the EBM sector 760 may occur periodically during a control write operation, when the EBB list 774 has been emptied. If all erased metablocks in the system are recorded in the EBM sector 760, no MAP sector 780 exists and no MAP exchange is performed. During a MAP exchange operation, the MAP sector used to feed erased blocks to the EBB 774 is regarded as the source MAP sector 782. Conversely, the MAP sector used to receive erased blocks from the CBB 776 is regarded as the destination MAP sector 784. If only one MAP sector exists, it acts as both the source and destination MAP sector, as defined below.

The following actions are performed during a MAP exchange:

1. A source MAP sector is selected on the basis of an incrementing pointer.

2. A destination MAP sector is selected on the basis of the block address in the first CBB entry that is not in the source MAP sector.

3. The destination MAP sector is updated as defined by the relevant entries in the CBB, and those entries are removed from the CBB.

4. The updated destination MAP sector is written into the MAP block, unless no separate source MAP sector exists.

5. The source MAP sector is updated as defined by the relevant entries in the CBB, and those entries are removed from the CBB.

6. The remaining entries in the CBB are appended to the EBB.

7. The EBB is filled as far as possible with erased block addresses defined from the source MAP sector.

8. The updated source MAP sector is written into the MAP block.

9. An updated EBM sector is written into the MAP block.
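Of these housekeeping sequences, the ABL fill lends itself to a compact sketch. The attribute enum, capacity, and helper names are assumptions; the six actions mirror the list above.

```c
#include <stdint.h>

#define ABL_SIZE 32  /* assumed capacity; the real limits are design variables */

typedef enum { ERASED, OPEN_UPDATE, CLOSED_UPDATE } attr_t;
typedef struct { uint32_t mb; attr_t attr; } abl_entry_t;
typedef struct { abl_entry_t e[ABL_SIZE]; int n; } abl_t;

extern int ebb_pop(uint32_t *mb);        /* assumed: next erased block from the EBB list */
extern int written_to_gat(uint32_t mb);  /* assumed: handled by the concurrent GAT update */

/* One ABL fill pass, following the six actions in the text. */
void abl_fill(abl_t *abl)
{
    int keep = 0;
    for (int i = 0; i < abl->n; i++) {
        abl_entry_t *e = &abl->e[i];
        /* Actions 1-3: keep open blocks, unallocated erased blocks, and closed
         * blocks whose entry is not being written by the concurrent GAT update. */
        if (e->attr == OPEN_UPDATE || e->attr == ERASED ||
            (e->attr == CLOSED_UPDATE && !written_to_gat(e->mb)))
            abl->e[keep++] = *e;          /* action 4: compact, preserving order */
    }
    abl->n = keep;
    uint32_t mb;
    while (abl->n < ABL_SIZE && ebb_pop(&mb) == 0) {
        abl->e[abl->n].mb = mb;           /* action 5: append entries from the EBB */
        abl->e[abl->n].attr = ERASED;
        abl->n++;
    }
    /* Action 6: the ABB list is then overwritten with this new ABL image (not shown). */
}
```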
List management

Figure 18 shows the distribution and flow of control and directory information among the various lists. For convenience, the operations that move entries between list elements, or that change the attributes of entries, are identified in Figure 18 as [A] to [O] and explained as follows:

[A] When an erased block is allocated as an update block for host data, the attribute of its entry in the ABL is changed from erased ABL block to open update block.

[B] When an erased block is allocated as a control block, its entry in the ABL is removed.

[C] When an ABL entry is created with the open update block attribute, an associated original block field is added to the entry, to record the original metablock address of the logical group being updated. This information is obtained from the GAT.

[D] When an update block is closed, the attribute of its entry in the ABL is changed from open update block to closed update block.

[E] When an update block is closed, its associated original block is erased, and the attribute of the associated original block field in its ABL entry is changed to erased original block.

[F] During an ABL fill operation, any closed update block whose address is updated in the GAT during the same control write operation has its entry removed from the ABL.

[G] During an ABL fill operation, when the entry for a closed update block is removed from the ABL, the entry for its associated erased original block is moved to the CBL.

[H] When a control block is erased, an entry for it is added to the CBL.

[I] During an ABL fill operation, erased block entries are moved from the EBB list to the ABL and are given the attribute of erased ABL blocks.

[J] After all relevant ABL entries have been modified during an ABL fill operation, the block addresses in the ABL replace the block addresses of the ABB list.

[K] Concurrently with the ABL fill operation during a control write, entries for erased blocks in the CBL are moved to the CBB list.

[L] During a MAP exchange operation, all relevant entries are moved from the CBB list to the destination MAP sector.

[M] During a MAP exchange operation, all relevant entries are moved from the CBB list to the source MAP sector.

[N] Following [L] and [M] during a MAP exchange operation, all remaining entries are moved from the CBB list to the EBB list.

[O] Following [N] during a MAP exchange operation, entries other than those moved in [M] are moved, if possible, from the source MAP sector to fill the EBB list.

Logical-to-physical address translation

To locate a logical sector's physical location in flash memory, the logical-to-physical address translation module 140 shown in Figure 2 performs a logical-to-physical address translation. Except for those logical groups that have recently been updated, the bulk of the translations can be performed using the group address table (GAT) residing in flash memory 200, or the GAT caches in controller RAM 130. Address translations for recently updated logical groups require consulting the address lists for update blocks, which reside mainly in controller RAM 130. The procedure for logical-to-physical address translation of a logical sector address therefore depends on the type of block associated with the logical group within which the sector lies. The types of blocks are: intact block, sequential data update block, chaotic data update block, and closed data update block.

Figure 19 is a flow chart showing the logical-to-physical address translation process.
Essentially, the corresponding metablock and physical sector are located by first consulting the various update directories (e.g., the open update block list and the closed update block list) using the logical sector address. If the associated metablock is not part of an update process, directory information is provided by the GAT. The logical-to-physical address translation includes the following steps:

Step 800: A logical sector address is given.

Step 810: Look up the given logical address in the open update block list 614 in controller RAM (see Figures 15 and 18). If the lookup fails, proceed to step 820; otherwise proceed to step 830.

Step 820: Look up the given logical address in the closed update block list 616. If the lookup fails, the given logical address is not part of any update process; proceed to step 870 for GAT address translation. Otherwise proceed to step 860 for closed update block address translation.

Step 830: If the update block containing the given logical address is sequential, proceed to step 840 for sequential update block address translation. Otherwise proceed to step 850 for chaotic update block address translation.

Step 840: Obtain the metablock address using sequential update block address translation. Proceed to step 880.

Step 850: Obtain the metablock address using chaotic update block address translation. Proceed to step 880.

Step 860: Obtain the metablock address using closed update block address translation. Proceed to step 880.

Step 870: Obtain the metablock address using group address table (GAT) translation. Proceed to step 880.

Step 880: Convert the metablock address to a physical address. The translation method depends on whether the metablock has been relinked.

Step 890: The physical sector address has been obtained.

The various address translation processes are described in more detail below.

Sequential update block address translation (step 840)

Address translation for a target logical sector address in a logical group associated with a sequential update block can be accomplished directly from information in the open update block list 614 (Figures 15 and 18), as follows:

1. The "page tag" and "number of sectors written" fields in the list determine whether the target logical sector is located in the update block or in its associated original block.

2. The metablock address appropriate to the target logical sector is read from the list.

3. The sector address within the metablock is determined from the appropriate "page tag" field.

Chaotic update block address translation (step 850)

The address translation sequence for a target logical sector address in a logical group associated with a chaotic update block is as follows:

1. If it is determined from the chaotic sector list in RAM that the sector is a recently written sector, address translation may be accomplished directly from its position in this list.

2. The most recently written sector in the CBI block contains, within its chaotic block data field, the physical address of the chaotic update block relevant to the target logical sector address. It also contains, within its indirect sector index field, the offset within the CBI block of the CBI sector last written for this chaotic update block (see Figures 16A-16E).
3. The information in these fields is cached in RAM, eliminating the need to read the sector during subsequent address translations.

4. The CBI sector identified by the indirect sector index field at step 2 is read.

5. The direct sector index field for the most recently accessed chaotic update subgroup is cached in RAM, eliminating the need to perform the read at step 4 for repeated accesses to the same chaotic update block.

6. The direct sector index field read at step 4 or step 5 identifies in turn the CBI sector relating to the logical subgroup containing the target logical sector address.

7. The chaotic block index entry for the target logical sector address is read from the CBI sector identified in step 6.

8. The most recently read chaotic block index field may be cached in controller RAM, eliminating the need to perform the reads at steps 4 and 7 for repeated accesses to the same logical subgroup.

9. The chaotic block index entry defines the location of the target logical sector either in the chaotic update block or in the associated original block. If the valid copy of the target logical sector is in the original block, it is located by use of the original metablock address and page tag information.

Closed update block address translation (step 860)

Address translation for a target logical sector address in a logical group associated with a closed update block can be accomplished directly from information in the closed update block list (see Figure 18), as follows:

1. The metablock address assigned to the target logical group is read from the list.

2. The sector address within the metablock is determined from the "page tag" field in the list.

GAT address translation (step 870)

If a logical group is not referenced by the open or closed block update lists, its entry in the GAT is valid. The address translation sequence for a target logical sector address in a logical group referenced by the GAT is as follows:

1. The ranges of the available GAT caches in RAM are evaluated to determine whether an entry for the target logical group is contained in a GAT cache.

2. If the target logical group is found at step 1, the GAT cache contains full group address information, including both metablock address and page tag, allowing translation of the target logical sector address.

3. If the target address is not in a GAT cache, the GAT index must be read for the target GAT block, to identify the location of the GAT sector relating to the target logical group address.

4. The GAT index for the last accessed GAT block is held in controller RAM, and may be accessed without a sector having to be read from flash memory.

5. A list comprising the metablock address of every GAT block, and the number of sectors written in each GAT block, is held in controller RAM. If the required GAT index is not available at step 4, it may therefore be read immediately from flash memory.

6. The GAT sector relating to the target logical group address is read from the sector location in the GAT block defined by the GAT index obtained at step 4 or step 5. A GAT cache is updated with the subdivision of the sector containing the target entry.

7. The target sector address is obtained from the metablock address and "page tag" fields within the target GAT entry.
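The overall dispatch of Figure 19, which ties the above procedures together, can be condensed into a sketch. The function names and signatures are assumptions standing in for the procedures just described; only the order of consultation is from the text. The final metablock-to-physical conversion (step 880) is described next.

```c
#include <stdint.h>

typedef enum { SEQUENTIAL, CHAOTIC } update_kind_t;

/* Assumed lookups into the RAM directories and the GAT; each yields a
 * metablock address (with its page tag folded in), or reports a miss. */
extern int  open_list_find(uint32_t lba, update_kind_t *kind, uint32_t *meta);
extern int  closed_list_find(uint32_t lba, uint32_t *meta);
extern void seq_translate(uint32_t lba, uint32_t *meta);
extern void chaotic_translate(uint32_t lba, uint32_t *meta);
extern void gat_translate(uint32_t lba, uint32_t *meta);
extern uint32_t metablock_to_physical(uint32_t meta, uint32_t lba);

/* Figure 19 as code: consult the update directories, then fall back to the GAT. */
uint32_t logical_to_physical(uint32_t lba)
{
    update_kind_t kind;
    uint32_t meta = 0;

    if (open_list_find(lba, &kind, &meta)) {        /* step 810: open update blocks */
        if (kind == SEQUENTIAL)
            seq_translate(lba, &meta);              /* step 840 */
        else
            chaotic_translate(lba, &meta);          /* step 850 */
    } else if (!closed_list_find(lba, &meta)) {     /* steps 820/860: closed blocks */
        gat_translate(lba, &meta);                  /* step 870: not under update */
    }
    return metablock_to_physical(meta, lba);        /* step 880: relink-aware */
}
```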
Metablock-to-physical address translation (step 880)

If the flag associated with the metablock address indicates that the metablock has been relinked, the relevant LT sector is read from the BLM block to determine the erase block address for the target sector address. Otherwise, the erase block address is determined directly from the metablock address.

Control data management

Figure 20 illustrates the hierarchy of operations performed on control data structures in the course of the operation of memory management. Data update management operations act on the various lists resident in RAM. Control write operations act on the various control data sectors and dedicated blocks in flash memory, and also exchange data with the lists in RAM.

Data update management operations are performed in RAM on the ABL, the CBL, and the chaotic sector list. The ABL is updated when an erased block is allocated as an update block or a control block, or when an update block is closed. The CBL is updated when a control block is erased, or when an entry for a closed update block is written to the GAT. The update chaotic sector list is updated when a sector is written to a chaotic update block.

A control write operation causes information from control data structures in RAM to be written to control data structures in flash memory, with consequent update of other supporting control data structures in flash memory and RAM, where necessary. A control write operation is triggered either when the ABL contains no further entries for erased blocks to be allocated as update blocks, or when the CBI block is rewritten.

In a preferred embodiment, the ABL fill operation, the CBL empty operation, and the EBM sector update operation are performed during every control write operation. When the MAP block containing the EBM sector becomes full, the valid EBM and MAP sectors are copied to an allocated erased block, and the previous MAP block is erased.

During every control write operation, one GAT sector is written, and the closed update block list is modified accordingly. When a GAT block becomes full, a GAT rewrite operation is performed.

As described above, a CBI sector is written after certain chaotic sector write operations. When the CBI block becomes full, the valid CBI sectors are copied to an allocated erased block, and the previous CBI block is then erased.

As described above, a MAP exchange operation is performed when there are no further erased block entries in the EBB list of the EBM sector.

A MAP address (MAPA) sector, which records the current address of the MAP block, is written in a dedicated MAPA block each time the MAP block is rewritten. When the MAPA block becomes full, the valid MAPA sector is copied to an allocated erased block, and the previous MAPA block is erased.

A boot sector is written in the current boot block each time the MAPA block is rewritten. When the boot block becomes full, the valid boot sector is copied from the current version of the boot block to the backup version, which then becomes the current version. The previous current version is erased and becomes the backup version, and the valid boot sector is written back to it.
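A sketch of how the control write bundles these maintenance operations is given below. The predicate and function names are assumptions; only the two triggers and the set of operations performed on every control write come from the text.

```c
#include <stdbool.h>

/* Assumed status predicates over the RAM and flash control structures. */
extern bool abl_out_of_erased_entries(void);
extern bool cbi_block_rewritten(void);
extern void abl_fill_op(void);
extern void cbl_empty_op(void);
extern void ebm_sector_update(void);
extern void gat_sector_write(void);

/* A control write bundles the maintenance operations named in the text. */
void maybe_control_write(void)
{
    if (!abl_out_of_erased_entries() && !cbi_block_rewritten())
        return;              /* neither trigger has fired */
    abl_fill_op();           /* performed on every control write */
    cbl_empty_op();
    ebm_sector_update();
    gat_sector_write();      /* one GAT sector per control write */
}
```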
Alignment of memory distributed over multiple memory planes

As described previously in connection with Figures 4 and 5A-5C, multiple memory planes are operated in parallel in order to increase performance. Basically, each plane has its own sense amplifiers as part of its read and program circuits, to service in parallel the corresponding page of memory cells spanning the plane. When multiple planes are combined, multiple pages may be operated on in parallel, increasing performance even further.

According to another aspect of the invention, for a memory array organized into erasable blocks and constituted from multiple memory planes, so that multiple logical units can be read in parallel or programmed in parallel into the multiple planes, when an original logical unit stored in a first block of a particular memory plane is to be updated, provision is made to keep the updated logical unit in the same plane as the original. This is accomplished by recording the updated logical unit to the next available location of a second block that is still in the same plane. Preferably, a logical unit is stored at the same offset position in the plane as its other versions, so that all versions of a given logical unit are serviced by the same set of sensing circuits.

Accordingly, in a preferred embodiment, any intervening gap between the last programmed memory unit and the next available plane-aligned memory unit is padded with the current versions of logical units. The padding is accomplished by filling the gap with the current versions of the logical units logically following the last programmed logical unit, and with the current versions of the logical units logically preceding the logical unit to be stored in the next available plane-aligned memory unit.

In this way, all versions of a logical unit are maintained in the same plane with the same offset as the original, so that in a garbage collection operation the latest version of a logical unit need not be retrieved from a different plane, which would degrade performance. In a preferred embodiment, every memory unit across the plane is either updated or padded with the latest versions. A logical unit can thus be read out from each plane in parallel, and the result will be in logical order with no need of further rearrangement.

This scheme reduces the time needed to consolidate a chaotic block, by allowing the latest versions of the logical units of a logical group to be rearranged within each plane, without having to gather the latest versions from different memory planes. This is of benefit where the performance specification of the host interface defines a maximum latency for the memory system to complete a sector write operation.

Figure 21 shows a memory array constituted from multiple memory planes. The memory planes may come from the same memory chip or from multiple memory chips. Each plane 910 has its own read and program circuits 912 to service a page of memory cells in parallel.

Without loss of generality, in the example shown, the memory array has four planes operating in parallel.

In general, a logical unit is the minimum unit of access by the host system. Typically a logical unit is a sector of size 512 bytes. A page is the maximum unit of parallel read or program within a plane. Typically a logical page contains one or more logical units. Consequently, when multiple planes are combined, the maximum aggregate unit of parallel read or program may be regarded as a metapage of memory cells, where the metapage is constituted from a page of each of the multiple planes. For example, a metapage such as MP0 has four pages, one from each of the planes P0, P1, P2 and P3, storing in parallel the logical pages LP0, LP1, LP2 and LP3. The read and write performance of the memory is thus increased fourfold as compared with operating in only one plane.

The memory array is further organized into metablocks, such as MB0, ..., MBj, where all the memory cells within each metablock are erasable together as a unit. A metablock such as MB0 is constituted from multiple memory locations for storing logical pages 914 of data, such as LP0-LPN-1. The logical pages in a metablock are distributed over the four planes P0, P1, P2 and P3 in a predetermined sequence according to the order in which the metablock is filled. For example, when logical pages are being filled in logically sequential order, the planes are visited in cyclic order, with the first page in the first plane, the second page in the second plane, and so on. After the last plane has been reached, the filling returns in a cyclic manner to start again from the first plane of the next metapage. In this way, a run of consecutive logical pages can be accessed in parallel when all planes are operated in parallel.
In general, if there are W planes operating in parallel and a metablock is being filled in logically sequential order, the kth logical page in the metablock will reside in plane x, where x = k MOD W. For example, with four planes, W = 4, and when a block is being filled in logically sequential order, the fifth logical page LP5 will reside in the plane given by 5 MOD 4, namely plane 1, as shown in Figure 21.

The memory operations in each memory plane are performed by a set of read/write circuits 912. Data into and out of each read/write circuit is transferred over a data bus 930 under the control of a controller 920. A buffer 922 in the controller 920 assists in buffering the transfer of data via the data bus 930. In particular, when an operation in a first plane requires access to data in a second plane, a two-step process is needed. The controller first reads the data out of the second plane, and then transfers it to the first plane via the data bus and the buffer. In fact, in most memory architectures, transferring data between two different bit lines also requires the data to be exchanged over the data bus 930.

At the very least, this involves transferring out of one set of read/write circuits in one plane and into another set of read/write circuits in another plane. In the case where the planes come from different chips, a transfer between chips is required. The present invention provides structures and schemes for memory block management that avoid a plane having to access data from another plane, so as to maximize performance.

As shown in Figure 21, a metapage is constituted from multiple logical pages, each located in one of the planes. Each logical page may consist of one or more logical units. When data is to be recorded, logical unit by logical unit, into a block spanning the planes, each logical unit will fall in one of the four memory planes.

The issue of plane alignment arises when logical units are updated. In the current example, for ease of explanation, a logical unit is taken to be a 512-byte logical sector, and a logical page is one logical unit wide. Since flash memory does not allow part of a block to be rewritten without first erasing the entire block, an update to a logical page is not written over the existing location, but is recorded in an unused location of a block. The previous version of the logical unit is then treated as obsolete. After a number of updates, a block may contain a number of logical units that have become obsolete through having been updated. The block may then be said to be "dirty", and a garbage collection operation ignores the dirty logical units, collects the latest version of each logical unit, and re-records them in logically sequential order in one or more new blocks. The dirty block is then erased and recycled.

When an updated logical unit is recorded at the next unused location in a block, it will generally not be recorded in the same memory plane as its previous version.
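The k MOD W rule above is trivial to state in code; the snippet below merely restates it and reproduces the worked example from the text (four planes, logical page 5). The function name is an assumption.

```c
#include <stdio.h>

/* With W planes filled in logically sequential order, logical page k of a
 * metablock resides in plane k mod W. */
static unsigned plane_of(unsigned k, unsigned W) { return k % W; }

int main(void)
{
    /* Four planes: logical page 5 lands in plane 5 mod 4 = 1, as in the text. */
    printf("page 5 -> plane %u\n", plane_of(5, 4));
    return 0;
}
```

The same arithmetic explains the misalignment problem: a version written at an arbitrary next-free position k' lands in plane k' mod W, which in general differs from the plane of the original.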

Vp T" y I 甲’以維持原來的順序。然而,如果必須從另一個 平面擷取最新版本,效能將會降低。Vp T" y I A' to maintain the original order. However, if you have to extract the latest version from another plane, performance will be reduced.
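As an illustration of the cyclic page-to-plane mapping just described, the following C sketch computes the plane in which the k-th logical page lands. This is not code from the patent; the constant W = 4 and the function name are assumptions made for the example.

```c
#include <assert.h>

#define W 4  /* number of planes operating in parallel (assumed, per the example) */

/* Plane holding the k-th logical page of a metablock that has been
 * filled in logically sequential order: x = k MOD W. */
static int plane_of_logical_page(int k) {
    return k % W;
}

int main(void) {
    /* The 5th logical page (k = 5) lands in plane 5 MOD 4 = 1. */
    assert(plane_of_logical_page(5) == 1);
    return 0;
}
```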

Therefore, according to another aspect of the invention, when a logical unit stored in a given plane of a first block is updated, the updated version is kept in the same plane as the original logical unit. This is accomplished by recording the updated logical unit in the next available location of a second block that is still in the same plane. In a preferred embodiment, any intervening gap between the last programmed memory unit and the next available plane-aligned memory unit is padded (that is, filled by copying) with the current versions of the logical units that occupy the same relative positions as in the original block.
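The next plane-aligned location, together with the intervening gap that must be padded, follows from simple modular arithmetic. The following is a minimal C sketch under the assumption of one logical unit per page and four planes; all names are illustrative rather than taken from the specification.

```c
#include <stdio.h>

#define W 4  /* planes per metapage (assumed) */

/* Given the offset of the last programmed unit in the update block
 * (or -1 if the block is empty) and the plane in which the updated
 * logical unit must land, return the next plane-aligned offset and
 * report the gap offsets that must be padded by copying. */
static int next_aligned_offset(int last_programmed, int target_plane) {
    int next = last_programmed + 1;
    while (next % W != target_plane)
        next++;  /* advance until the write pointer reaches the target plane */
    for (int gap = last_programmed + 1; gap < next; gap++)
        printf("pad offset %d with the current version of its logical unit\n", gap);
    return next;
}

int main(void) {
    /* As in Figure 24A, host write #1: block empty, LS5' must land in
     * plane 1. Offset 0 (plane 0) is padded (with LS4); LS5' goes to 1. */
    int off = next_aligned_offset(-1, 1);
    printf("program updated unit at offset %d\n", off);
    return 0;
}
```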

Figure 22A shows a flow diagram of a method of update with plane alignment, according to a general embodiment of the invention.

Step 950: In a nonvolatile memory organized into a plurality of blocks, each block is partitioned into a plurality of memory units that are erasable together, each memory unit being for storing a logical unit of data.

Step 952: The memory is constituted from multiple memory planes, each plane having a set of sensing circuits for servicing a memory page in parallel, said memory page containing one or more memory units.

Step 954: A first version of the logical units is stored, according to a first order, in a plurality of memory units of a first block, each first-version logical unit being stored in one of the memory planes.

Step 956: Subsequent versions of the logical units are stored in a second block according to a second order different from the first order, each subsequent version being stored in the next available memory unit in the same memory plane as the first version, so that all versions of a given logical unit are accessible from the same plane by the same set of sensing circuits.

Figure 22B shows a preferred embodiment of the update-storing step in the flow diagram of Figure 22A. Step 956' comprises step 957, step 958 and step 959.

Step 957: Each block is partitioned into metapages, each metapage being constituted by a page from each of the planes. This step may be performed before either of the storing steps.

Step 958: The subsequent versions of the logical units are stored to the second block according to a second order different from the first order, each subsequent version being stored in the next available memory unit having the same offset within a metapage as the first version.

Step 959: Concurrently with the storing of the subsequent versions, any unused memory units preceding the next available memory unit are padded, metapage by metapage, by copying the current versions of logical units according to the first order.

Figure 23A shows an example of logical units written in sequential order to a sequential update block without regard to plane alignment. In the example, each logical page is the size of one logical sector, such as LS0, LS1, .... In the four-plane example, each block, such as MB0, can be regarded as partitioned into metapages MP0, MP1, ..., where each metapage, such as MP0, contains four sectors, e.g., LS0, LS1, LS2 and LS3, one from each of the planes P0, P1, P2 and P3. The block is therefore filled, sector by sector in cyclic order, with logical units across the planes P0, P1, P2 and P3.

In host write operation #1, the data in logical sectors LS5-LS8 is being updated. The data, updated as LS5'-LS8', is recorded in a newly allocated update block starting at the first available location.

In host write operation #2, the segment of data in logical sectors LS9-LS12 is being updated. The data, updated as LS9'-LS12', is recorded in the update block at the location directly following where the last write ended. The figure shows that the two host writes have recorded the update data in the update block in logically sequential order, namely LS5'-LS12'. The update block can be regarded as a sequential update block, since it has been filled in logically sequential order. The update data recorded in the update block renders the corresponding data in the original block obsolete.

However, the updated logical sectors are recorded in the update block at the next available location, without regard to plane alignment. For example, sector LS5 was originally recorded in plane P1, but the updated LS5' is now recorded in P0. Similarly, all the other updated sectors are out of alignment.

Figure 23B shows an example of logical units written in non-sequential order to a chaotic update block without regard to plane alignment.

In host write operation #1, the logical sectors LS10-LS11 of a given logical group stored in an original metablock are updated. The updated logical sectors LS10'-LS11' are stored in a newly allocated update block. At this point, the update block is a sequential update block. In host write operation #2, logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded in the update block at the location directly following the last write. This converts the sequential update block into a chaotic one. In host write operation #3, logical sector LS10 is updated again and recorded in the next location of the update block as LS10''. At this point, LS10'' in the update block supersedes LS10' recorded earlier, which in turn supersedes LS10 in the original block. In host write operation #4, the data of logical sector LS10 is updated yet again and recorded in the next location of the update block as LS10'''. LS10''' is now the last and only valid version of logical sector LS10; all previous versions of LS10 are obsolete. In host write operation #5, the data of logical sector LS30 is updated and recorded in the update block as LS30'. The example illustrates that logical units within a logical group may be written to a chaotic update block in any order and with any repetition.

Again, the updated logical sectors are recorded in the update block at the next available location, without regard to plane alignment. For example, sector LS10 was originally recorded in plane P2 (that is, in MP2, the third plane), but the updated LS10' is now recorded in P0 (that is, in MP0', the first plane). Similarly, in host write #3, logical sector LS10' is updated again as LS10'' and placed in the next available location, which happens also to be in plane P0 (the first plane of MP1'). In general, it can be seen from the figure that recording an updated sector at the next available location of the block may store it in a plane different from that of its previous versions.

Sequential Update Block with Plane Alignment and Padded Intervening Gaps

Figure 24A shows the sequential update example of Figure 23A with plane alignment and padding, according to a preferred embodiment of the invention.

In host write operation #1, the data, updated as LS5'-LS8', is recorded in a newly allocated update block starting at the first available plane-aligned location. In this case, LS5 is originally in P1, the second plane of a metapage. Therefore, LS5'-LS7' are programmed in the corresponding planes of the update block's first available metapage MP0'. At the same time, the gap of the unused first plane in MP0' is padded with the current version of LS4, the logical sector preceding LS5 in the original block's metapage. The original LS4 is then treated as obsolete data. The remaining LS8' is then recorded in the first plane of the next metapage MP1', where it is plane-aligned.

In host write operation #2, the data, updated as LS9'-LS12', is recorded in the update block at the next available plane-aligned location. Thus, LS9' is recorded in the next available plane-aligned memory unit, namely the second plane of MP1'. No gap results in this case and no padding is needed. The update block can be regarded as a sequential update block, since it has been filled in logically sequential order. Moreover, it is plane-aligned, since each updated logical unit is in the same plane as its original.

Chaotic Update Block with Plane Alignment and Intervening Gaps

Figure 24B shows the chaotic update example of Figure 23B with plane alignment and without any padding, according to a preferred embodiment of the invention.

In host write operation #1, the updated logical sectors LS10'-LS11' are stored in a newly allocated update block. Instead of being stored in the next available memory units, they are stored in the next available plane-aligned memory units. Since LS10' and LS11' were originally stored in planes P2 and P3 respectively (the third and fourth planes of MP2 of the original block), the next available plane-aligned memory units are in the third and fourth planes of MP0' of the update block. At this point, the update block is non-sequential, with the pages of metapage MP0' filled in the order "unfilled", "unfilled", LS10', LS11'.

In host write operation #2, logical sectors LS5-LS6 are updated as LS5'-LS6' and recorded at the next available plane-aligned locations of the update block. Thus, LS5' and LS6', whose originals are in the second (P1) and third (P2) planes, or memory units, of MP1 of the original block, are programmed into the corresponding planes of the next available metapage MP1' of the update block. This leaves a preceding unused first plane in MP1'.

In host write operation #3, logical sector LS10' is updated again and recorded in the next plane-aligned location of the update block as LS10''. It is therefore written to the next available third plane, which is in MP2'. This leaves a preceding gap consisting of the last plane of MP1' and the first two planes of MP2'. It also renders LS10' in MP0' obsolete.

In host write operation #4, the data in logical sector LS10'' is updated again and recorded as LS10''' in the next available third plane of a metapage in the update block, namely of MP3'. LS10''' is now the last and only valid version of logical sector LS10. This leaves a gap consisting of the last plane of MP2' and the first two planes of MP3'.

In host write operation #5, the data of logical sector LS30 is updated and recorded in the update block as LS30'. Since the original LS30 resides in P2, the third plane of a metapage, it is written to the next available third plane in the update block. That will be the third plane of MP4'. A gap is created from the last plane of MP3' to the first two planes of MP4'. This example thus shows that a plurality of logical sectors within a logical group may be written to a chaotic update block, in any order and with any repetition, in a plane-aligned manner. In a subsequent garbage collection operation, all versions of a given logical sector, and in particular the latest version, will conveniently be serviced by the same set of sensing circuits.

Chaotic Update Block with Plane Alignment and Padded Intervening Gaps

Figure 24C shows the chaotic update example of Figure 23B with plane alignment and padding, according to another preferred embodiment of the invention.

The operation is the same as shown in Figure 24B, except that the intervening gaps are first filled with padding. In host write operation #1, the gap created by the unused first and second planes of metapage MP0' is first padded with the current versions of LS8 and LS9, which reside in the original block. This renders LS8 and LS9 in the original block obsolete. At this point, the update block is a sequential update block, with metapage MP0' filled in the order LS8, LS9, LS10', LS11'.

In host write operation #2, a gap arises from the preceding unused first plane of MP1'; it is first padded with LS4. This renders LS4 in the original block obsolete. As before, the second write converts the sequential update block into a chaotic update block.

In host write operation #3, a gap arises from the unused last plane of MP1' and the first two planes of MP2'. The last plane of MP1' is first padded with LS7, which follows the last-programmed LS6', and the first two planes of MP2' are then padded with the logical units preceding LS10, namely LS8 and LS9. This renders LS10' in MP0' and LS7-LS9 in the original block obsolete.

In host write operation #4, a gap arises consisting of the last plane of MP2' and the first two planes of MP3'. The last plane of MP2' is padded with LS11', the current version of the logical unit following the last-written LS10'' in metapage MP2'. The first two planes of MP3' are padded with LS8 and LS9 respectively, being the logical units preceding LS10''' in metapage MP3'.

In host write operation #5, the gap from the last plane of MP3' to the first two planes of MP4' is likewise padded with LS11', LS28 and LS29 respectively. This example thus shows that a plurality of logical sectors within a logical group may be written to a chaotic update block, in any order and with any repetition, in a plane-aligned manner.
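To make the examples of Figures 24A-24C concrete, the following C sketch simulates a plane-aligned chaotic update block with padding. The padding rule encoded here, filling each update-block metapage with current versions of the other sectors of the same logical metapage, is inferred from the figures rather than quoted from the specification, and the parameters (four planes, one sector per page) follow the examples. Running it on the host-write sequence of Figure 23B reproduces the layout of Figure 24C.

```c
#include <stdio.h>

#define W     4    /* planes (= sectors) per metapage, as in the figures */
#define SLOTS 24   /* capacity of the illustrative update block */

static int sect[SLOTS];     /* logical sector number stored at each offset */
static int is_pad[SLOTS];   /* 1 if the slot was filled by padding */
static int wp = 0;          /* next unprogrammed offset */
static int last_base = 0;   /* logical-metapage base of the last host write */

/* Record updated sector LSn at the next plane-aligned offset, padding
 * the skipped slots with current versions, as in Figure 24C. */
static void host_write(int n)
{
    int plane = n % W;
    int next = wp;
    while (next % W != plane)
        next++;
    for (int o = wp; o < next; o++) {
        /* A skipped slot in the same metapage as the incoming write is
         * padded from the incoming sector's logical metapage; a trailing
         * slot of an earlier metapage is padded from the logical metapage
         * of the previous host write. */
        int base = (o / W == next / W) ? n - plane : last_base;
        sect[o] = base + o % W;
        is_pad[o] = 1;
    }
    sect[next] = n;
    is_pad[next] = 0;
    last_base = n - plane;
    wp = next + 1;
}

int main(void)
{
    /* Host-write sequence of Figure 23B: LS10-11, LS5-6, LS10, LS10, LS30. */
    int writes[] = { 10, 11, 5, 6, 10, 10, 30 };
    for (unsigned i = 0; i < sizeof writes / sizeof *writes; i++)
        host_write(writes[i]);
    for (int o = 0; o < wp; o++)
        printf("MP%d' plane %d: LS%d%s\n", o / W, o % W, sect[o],
               is_pad[o] ? " (padding)" : " (new version)");
    return 0;
}
```

The printed layout shows MP0' as LS8, LS9, LS10', LS11', MP1' as LS4, LS5', LS6', LS7, and so on, matching the figure.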

In the preferred embodiments, a metapage contains a run of pages, one from each of the individual planes. Since the metapage can be read or programmed in parallel, it is convenient to implement host updates at metapage granularity. Any padding is then recorded together with the updated logical units, metapage by metapage.

In the embodiment shown in the examples of Figures 24A and 24C, during each host write, the padding is performed on the unused memory units preceding the plane-aligned memory units being programmed with the update. Any action on the unused memory units following the last programmed memory unit is deferred until the next host write. In general, any preceding unused memory units are padded within each metapage's own boundary. In other words, if a preceding gap straddles two metapages, padding is performed on each metapage in the logically sequential order appropriate to each, without regard to continuity across the boundary. On consolidation of the block, the last written metapage, if partially written, is fully filled by padding.

In another embodiment, any partially filled metapage is fully padded before moving on to the next metapage.

Memory Cell Granularity

Depending on the flexibility supported by the individual memory architecture, the unit of read or program can vary. The independent nature of the individual planes allows each page of the individual planes in a metapage to be read and programmed independently. The examples above have a page in each plane as the maximum unit of program. Within a metapage, partial-metapage programming of fewer than all of its pages is possible. For example, the first three pages of a metapage may be programmed, and the fourth page programmed later.

Also, at the plane level, a physical page may contain one or more memory units. If each memory unit can store one sector of data, a physical page may store one or more sectors. Some memory architectures support partial-page programming, in which, by inhibiting the programming of selected memory units within a page, selected logical units may be programmed individually at different times over multiple programming passes.

Logical Unit Alignment Within a Memory Plane for Chaotic Updates of a Logical Group

In a block memory management system, a logical group of logical units is stored in logically sequential order in an original block. When the logical group is updated, subsequent versions of the logical units are stored in an update block. If the logical units are stored chaotically (that is, non-sequentially) in the update block, a garbage collection is eventually performed to collect the latest versions of the logical units from the original block and the update block and to consolidate them sequentially into a new original block. The garbage collection operation is more efficient if the updated versions of a given logical unit are all stored in the update block in alignment with the original version in the original block, so that the same set of sensing circuits can access all the versions.

According to another aspect of the invention, in the block memory management system described above, when the memory is organized as a series of memory pages, in which each page of memory units is serviced in parallel by a set of sensing circuits, all versions of a given logical unit are aligned if they all have the same offset position in the pages in which they are stored.

Figure 25 shows an example memory organization in which each page contains two memory units for storing two logical units, such as two logical sectors. In the original block, since the logical sectors are stored in logically sequential order, logical sectors LS0 and LS1 are stored in page P0, logical sectors LS2 and LS3 in page P1, logical sectors LS4 and LS5 in page P2, and so on. It can be seen that, in these two-sector pages, the first sector from the left has a page offset of "0" and the second sector a page offset of "1".

When the logical group of logical sectors stored sequentially in the original block is updated, the updated logical sectors are recorded in the update block. For example, logical sector LS2 resides with offset "0" in page P1 of the original block. If, in a first write, LS2 is updated to LS2', it is stored in the first available location of the update block having the same page offset "0". That will be in the first memory unit of page P0'. If, in a second write, LS5 is updated to LS5', it is stored in the first available location of the update block having the same page offset "1". That will be in the second memory unit, with offset "1", of page P1'. However, before LS5' is stored, the unused memory unit with offset "1" in P0' and the one with offset "0" in P1' are first padded by copying the latest versions of logical sectors that will at least maintain the logically sequential order within each page. In this case, LS3 is copied to the offset "1" position of P0' and LS4 to the offset "0" position of P1'. If, in a third write, LS2' is updated again to LS2'', it is stored in the offset "0" position of P2'. If, in a fourth write, LS22 and LS23 are updated as LS22' and LS23' respectively, they are stored in the offset "0" and offset "1" positions of P3' respectively. Before that, however, the unused memory unit with offset "1" in P2' is padded with LS3.

The update sequence above assumes that individual sectors can be programmed within a page. For memory architectures in which partial-page programming is not supported, the sectors within a page must be programmed together, as illustrated in the sketch after this paragraph. In that case, in the first write, LS2' and LS3 are programmed together into P0'. In the second write, LS4 and LS5' are programmed together into P1'. In the third write, LS2'' and LS3 are programmed together into P2', and so on.
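The following C sketch illustrates the whole-page grouping just described: each write combines the updated sector with padded copies so that the full page is programmed in one pass. The sector numbers follow the Figure 25 example, while the function and constants are assumptions made for illustration.

```c
#include <stdio.h>

#define K 2  /* sectors per physical page (assumed, as in Figure 25) */

/* One page-program operation: the updated sector plus padded copies of
 * the current versions needed to fill the rest of the page, so that the
 * whole page is programmed in a single pass. */
static void program_page(int page, int first_sector, int updated_offset)
{
    printf("program P%d':", page);
    for (int off = 0; off < K; off++)
        printf(" LS%d%s", first_sector + off,
               off == updated_offset ? "' (new data)" : " (padded copy)");
    printf("\n");
}

int main(void)
{
    /* Figure 25 without partial-page programming:
     * write #1 updates LS2 (offset 0), write #2 updates LS5 (offset 1). */
    program_page(0, 2, 0);  /* P0' <- LS2' together with padded LS3 */
    program_page(1, 4, 1);  /* P1' <- padded LS4 together with LS5' */
    return 0;
}
```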
Plane Alignment Within a Metapage

Alternatively, the unit of program may have the granularity of a metapage. If the granularity of writes to a chaotic update block becomes the metapage, the entries of the CBI block described in connection with Figures 16A and 16B relate to metapages rather than to sectors. The increased granularity reduces the number of entries that must be recorded for a chaotic update block, allows the indices to be eliminated directly, and permits a single CBI sector to be used per metablock.

Figure 26A has the same memory structure as Figure 21, except that each page contains two sectors instead of one. Each metapage MP0 thus has pages that can each store the data of two logical units. If each logical unit is one sector, the logical sectors are stored sequentially, with LS0 and LS1 in plane P0 of MP0, LS2 and LS3 in plane P1, and so on.

Figure 26B shows the metablock of Figure 26A with its memory units laid out linearly. Compared with the single-sector pages of Figure 21, the logical sectors are stored cyclically across the four pages, with two sectors in each page.

In general, if there are W planes operating in parallel with K memory units per page, and the metablock is being filled in logically sequential order, the k-th logical unit of the metablock will reside in plane x, where x = k' MOD W and k' = INT(k/K). For example, with four planes, W = 4, and two sectors per page, K = 2, then for k = 5, referring to the fifth logical sector LS5, k' = INT(5/2) = 2 and the sector resides in the plane given by 2 MOD 4, namely plane 2, as shown in Figure 26A. In general, the same principles apply to implementing the plane alignment described above.
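The generalized mapping can be captured in a single function. The C sketch below encodes x = INT(k/K) MOD W and checks it against both worked examples from the text; it assumes 0-based indexing of logical units, as the examples do.

```c
#include <assert.h>

/* Plane of the k-th logical unit when a metablock is filled in
 * logically sequential order: x = INT(k / K) MOD W. */
static int plane_of(int k, int units_per_page, int num_planes)
{
    return (k / units_per_page) % num_planes;
}

int main(void)
{
    /* Worked example from the text: W = 4 planes, K = 2 sectors per
     * page; LS5 resides in INT(5/2) MOD 4 = 2, i.e. plane P2. */
    assert(plane_of(5, 2, 4) == 2);
    /* With single-sector pages (K = 1) this reduces to k MOD W,
     * matching the earlier example: LS5 in plane 5 MOD 4 = 1. */
    assert(plane_of(5, 1, 4) == 1);
    return 0;
}
```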

The examples above concern the alignment of planes and pages in a multi-plane architecture. In the case of pages with multiple sectors, it is also advantageous to maintain sector alignment within a page. In this way, the same set of sensing circuits can conveniently service different versions of the same logical sector, and operations such as relocation of a sector and "read-modify-write" writes can be performed efficiently. The techniques employed for aligning to a page and to a plane are similar and, depending on the embodiment, the intervening gaps may or may not be padded.

Logical Unit Plane Alignment Without Padding

Figure 27 shows an alternative scheme, as follows: plane alignment can be maintained in the update block without padding logical units that would otherwise have to be copied from one location to another. The portions of the update block intersecting the four planes can be regarded as four buffers collecting the plane-aligned, updated logical units received from the host. Each logical unit received from the host is programmed into the next available memory unit of the appropriate buffer. Depending on the sequence of logical unit addresses received from the host, a different number of logical units may end up programmed in each plane.

A chaotic update block may contain updated versions of all the logical units of a logical metapage, as for MP'0. It may also contain fewer than all the logical units of a metapage, as for MP'1. In the case of MP'1, the missing logical units can be obtained from the corresponding original block MB0.

This alternative scheme is particularly effective when the memory architecture supports parallel reads of an arbitrary logical page from each plane. In this way, all the logical pages of a metapage can be read in a single parallel read operation, even if the individual logical pages are not from the same row.

Phased Program Error Handling

When a program failure occurs in a block, all the data to be stored in the block is typically moved to another block and the failed block is marked as bad. Depending on the timing specification of the operation in which the failure is encountered, there may not be enough time to additionally move the stored data to another block. The worst case is a program failure during a normal garbage collection operation, in which another, similar garbage collection operation would be needed to relocate all the data to yet another block. In that case, the write latency limit prescribed for a given host/memory device could be violated, since that limit is usually designed to accommodate one garbage collection operation, not two.

Figure 28 shows a scheme in which, when a defective block suffers a program failure during a consolidation operation, the consolidation operation is repeated on another block. In this example, block 1 is the original block storing the complete logical units of a logical group in logically sequential order. For ease of illustration, the original block contains sections A, B, C and D, each storing a subgroup of the logical units. When the host updates certain logical units of the group, the newer versions of the logical units are recorded in an update block, namely block 2. As described earlier in connection with update blocks, depending on the host, the updates may record the logical units in sequential or in non-sequential (chaotic) order. Eventually, the update block is closed to further updates because it is full or for some other reason. When the update block (block 2) is closed, the current versions of the logical units residing in either the update block or the original block (block 1) are consolidated onto a new block (block 3) to form a new original block for the logical group. The example shows the update block containing newer versions of the logical units in sections B and D. For convenience, sections B and D are shown in block 2 not necessarily at the locations where they were recorded, but aligned with their original positions in block 1.

In the consolidation operation, the current versions of all the logical units of the logical group originally residing in block 1 are recorded in sequential order into the consolidation block (block 3). Thus, the logical units of section A are first copied from block 1 to block 3, followed by section B copied from block 2 to block 3. In the example, when the logical units of section C are being copied from block 1 to block 3, a defect in block 3 causes the programming to fail.

One way of handling the program failure is to restart the consolidation process on a brand-new block (block 4). Sections A, B, C and D are then copied onto block 4, and the defective block 3 is discarded. However, this amounts to performing two consolidation operations in tandem, which may result in copying as many as two full blocks of logical units.

Memory devices have a specific time allowance for completing certain operations. For example, when a host writes to a memory device, it expects the write operation to be completed within a specified time, known as the "write latency". While the memory device, such as a memory card, is busy writing the host's data, it signals a "BUSY" state to the host. If the "BUSY" state persists for longer than the write latency, the host times out the write operation and registers an exception or error against it.

Figure 29 illustrates schematically a host write operation whose timing, or write latency, allows enough time for the completion of a write (update) operation as well as a consolidation operation. The host write operation has a write latency that provides enough time for an update operation 972 of writing host data to an update block (Figure 29(A)). As described earlier in connection with the block management system, a host write to an update block may trigger a consolidation operation. The timing therefore also allows for a consolidation operation 974 in addition to the update operation 972 (Figure 29(B)). However, having to restart a consolidation operation in response to a failed consolidation would take too much time and exceed the specified write latency.

According to another aspect of the invention, in a memory with a block management system, a program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block before the interruption is transferred to another block, which may also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered, it can be handled without loss of data and without exceeding the specified time limit, as would otherwise result from having to transfer the data stored in the defective block on the spot. This error handling is especially important for garbage collection operations, so that the entire operation need not be repeated on a fresh block during a critical time. Subsequently, at an opportune time, the data of the defective block is salvaged by relocating it to another block.

Figure 30 shows a flow diagram of program failure handling according to a general scheme of the invention.

Step 1002: The nonvolatile memory is organized into blocks, each block partitioned into memory units that are erasable together, each memory unit being for storing a logical unit of data.

Program Failure Handling (First Phase)

Step 1012: A sequence of logical units of data is stored in a first block.

Step 1014: In response to a storing failure at the first block after a number of the logical units have been stored, subsequent logical units are stored in a second block serving as a breakout block for the first block.

Program Failure Handling (Final Phase)

Step 1020: In response to a predefined event, the logical units stored in the first block are transferred to a third block, where the third block may be the same as, or different from, the second block.

Step 1022: The first block is discarded.

Figure 31A shows one embodiment of program failure handling in which the third (final relocation) block is distinct from the second (breakout) block. During Phase I, a sequence of logical units is being recorded on a first block. If the logical units come from a host write, the first block may be regarded as an update block. If the logical units are being consolidated from a compaction operation, the first block may be regarded as a relocation block. If, at some point, a program failure is encountered in block 1, a second block serving as a breakout block is provided. The logical unit whose recording failed in block 1, and the subsequent logical units, are recorded on the breakout block. In this way, no extra time is needed to replace the failed block 1 and the data residing on it.

In the intermediate Phase II, all the recorded logical units of the sequence are obtainable between block 1 and block 2.

In the final Phase III, the logical units are relocated to block 3, which serves as a relocation block, to replace the failed block 1 and the data residing on it. The data in the failed block is thereby salvaged, and the failed block can then be discarded. The final phase is scheduled so that it does not conflict with the timing of any concurrent memory operations.

In this embodiment, the relocation block 3 is distinct from the breakout block 2. This is convenient when the breakout block has been recorded with additional logical units during the intermediate phase. The breakout block has then turned into an update block and may be unsuitable for having the logical units of the defective block 1 relocated into it.

Figure 31B shows another embodiment of program failure handling in which the third (final relocation) block is the same as the second (breakout) block. Phases I and II are the same as in the first embodiment shown in Figure 31A. However, in Phase III, the logical units of the defective block 1 are relocated to the breakout block 2. This is convenient when the breakout block 2 has not been recorded with additional logical units outside the original sequence of the earlier write operation. In this way, a minimum number of blocks is needed to store the logical units in question.

Embodiments of Program Failure Handling During Consolidation

Program failure handling is especially important during a consolidation operation. A normal consolidation operation consolidates onto a consolidation block the current versions of all the logical units of a logical group residing in the original block and the update block. If, during the consolidation operation, a program failure occurs in the consolidation block, another block serving as a breakout consolidation block is provided to receive the consolidation of the remaining logical units. In this way, no logical unit has to be copied more than once, and the operation, with its exception handling, can still be completed within the period specified for a normal consolidation operation. At an opportune time, the consolidation operation is completed by consolidating all the outstanding logical units of the group into the breakout block. The opportune time will be during some period, outside the current host write operation, in which there is time to perform the consolidation. One such opportune time is during another host write in which there is an update but no associated consolidation operation.

In essence, the consolidation with program failure handling can be regarded as being implemented in multiple phases. In the first phase, after a program failure has occurred, the logical units are consolidated into more than one block so as to avoid consolidating any logical unit more than once. The final phase is completed at an opportune time, when the logical group is consolidated into one block, preferably by collecting all the logical units, in sequential order, into the breakout consolidation block.
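The first phase of this handling can be outlined in a short C sketch. It models the behavior of Figure 31A only in caricature: the block structure, failure injection and names are contrived for illustration and are not part of the specification, and Phase III (the deferred relocation and discard of the failed block) is only noted in a comment.

```c
#include <stdio.h>

enum { OK = 0, PROGRAM_FAILURE = -1 };

typedef struct { const char *name; int fail_at; int written; } Block;

/* Pretend low-level page program that fails at a preconfigured page. */
static int program_page(Block *b, int unit)
{
    if (b->written == b->fail_at) return PROGRAM_FAILURE;
    printf("  %s <- logical unit %d\n", b->name, unit);
    b->written++;
    return OK;
}

/* Phase I: on a program failure, continue the sequence at once in a
 * breakout block, so the time-critical operation still finishes on time. */
static void store_sequence(Block *first, Block *breakout,
                           const int *units, int count)
{
    Block *dst = first;
    for (int i = 0; i < count; i++) {
        if (program_page(dst, units[i]) == PROGRAM_FAILURE) {
            printf("  %s failed; switching to breakout block\n", dst->name);
            dst = breakout;
            program_page(dst, units[i]);  /* retry the failed unit */
        }
    }
}

int main(void)
{
    Block blk1 = { "block 1 (consolidation)", 2, 0 };  /* fails on 3rd page */
    Block blk2 = { "block 2 (breakout)", -1, 0 };
    int units[] = { 0, 1, 2, 3 };
    store_sequence(&blk1, &blk2, units, 4);
    /* Phase III, at a non-critical time: relocate the units left in the
     * failed block (here units 0 and 1) to a relocation block, then
     * discard the failed block. Omitted in this sketch. */
    return 0;
}
```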

Figure 32A shows the flow diagram of the initial update operation that results in a consolidation operation.

Step 1102: The nonvolatile memory is organized into blocks, each block partitioned into memory units that are erasable together, each memory unit being for storing a logical unit of data.

Step 1104: The data is organized into a plurality of logical groups, each logical group being a group of logical units storable in a block.

Step 1112: Host data packaged in logical units is received.

Step 1114: A first version of the logical units is stored, according to a first order, in a first block, to create an original block of the logical group.

Step 1116: In response to host data carrying subsequent versions of those logical units, the newer versions are recorded to a new block serving as an update block.

Figure 32B shows the flow diagram of a multi-phase consolidation operation according to a preferred embodiment of the invention.

Consolidation Failure Handling (Phase I)

The Phase I consolidation operation 1120 with error handling comprises step 1122 and step 1124.

Step 1122: The current versions of the logical units of the logical group are stored in a third block, in the same order as the first order, to create a consolidation block for the logical group.

Step 1124: In response to a storing failure at the consolidation block, the logical units of the logical group absent from the third block are stored in a fourth block, in the same order as the first order, to provide a breakout consolidation block.

Since the data of block 1 and block 2 has been transferred to block 3 and block 4, blocks 1 and 2 can be erased to free up space. In the preferred embodiment, block 2 can be released immediately to the EBL (erased block list, see Figure 18) for reuse. Block 1 can be released only on the condition that it is a closed update block and that there is another block whose corresponding GAT entry points to it.

In essence, block 3 becomes the original block of the logical group, while block 4 becomes a replacement sequential update block for block 3.

After the Phase I consolidation is completed, the memory device signals the host by releasing the BUSY signal.

Intermediate Operation (Phase II)

Phase II, the intermediate operation 1130, may take place before the Phase III consolidation operation 1140. A number of possible situations exist, as set out in any of steps 1132, 1134 and 1136.

Step 1132: Either, in a write operation of the logical group, the fourth block (the breakout consolidation block) is written to as an update block.

If the host writes to the logical group in question, block 4, which is the breakout consolidation block and which has up to now been a replacement sequential update block, is used as a normal update block. Depending on the host writes, it may remain sequential or turn chaotic. As an update block, it will at some point trigger the closing of another chaotic block, as described in an earlier preferred embodiment.

If the host writes to another logical group, the operation proceeds directly to Phase III.

Step 1134: Or, in a read operation, the memory is read with the third block as the original block of the logical group and the fourth block as the update block.

In that case, the logical units of sections A and B are read from block 3, being the original block of the logical group, and the logical units of sections C and D are read from block 4, being the update block of the group. Since only sections A and B can be read from block 3, the page on which programming failed is inaccessible, as is the unwritten portion that follows. Although the GAT directory in flash memory has not yet been updated and still points to block 1 as the original block, no data is read from it, and the block itself will have been erased earlier.

Another possibility is a host read of logical units of the logical group. In that case, the logical units of sections A and B are read from block 3, being the original block of the logical group, and the logical units of sections C and D are read from block 4, being the sequential block of the group.

Step 1136: Or, in a power-up initialization, any of the first through fourth blocks is re-identified by scanning their contents.

A further possibility in the intermediate phase is for the memory device to be powered down and then restarted. As described above, during power-up initialization, the blocks in the allocation block list (the erase pool blocks to be used, see Figures 15 and 18) are scanned to identify the defective consolidation block that has become the special-status original block (block 3) of a logical group, together with the associated sequential update block (block 4). A flag in the first logical unit of the breakout block (block 4) indicates that the associated block is an original block that has suffered a program error (block 3). Block 3 can then be located by consulting the block directory (GAT).

In one embodiment, a flag is programmed into the first logical unit of the breakout consolidation block (block 4). This helps to indicate the special status of the logical group, namely that it has been consolidated into two blocks, block 3 and block 4.

An alternative to using a flag to identify the logical group with a defective block is to detect the defective block during the scan by exploiting the property that, unlike a proper original block, it is not full (unless the error happened on the last page and the last page has no ECC error). Also, depending on the implementation, there may be an information record of the failed group/block stored in a control data structure in flash memory, rather than just a flag in the header area of the first sector written to the breakout consolidation block (block 4).

Consolidation Completion (Phase III)

Step 1142: In response to a predefined event: for the first case, in which the fourth block has not been further recorded since Phase I, the current versions of all outstanding logical units of the logical group are stored into it in the same order as the first order; for the second case, in which the fourth block has been further recorded since Phase I, the third and fourth blocks are consolidated into a fifth block.

Step 1144: Thereafter: for the first case, the memory is operated with the consolidated fourth block as the original block of the logical group; for the second case, the memory is operated with the fifth block as the original block of the logical group.

The final consolidation of Phase III may be performed at any opportunity that does not violate any specified time limits. A preferred case is to "piggy-back" on the next host write time slot in which there is an update operation on another logical group without an accompanying consolidation operation. If the host write to another logical group triggers a garbage collection of its own, the Phase III consolidation is deferred.

Figure 33 shows an example timing of the first and final phases of a multi-phase consolidation operation. The host write latency is the width of each host write time slot of a given duration. Host write 1 is a simple update, in which the current versions of a first set of logical units of logical group LG1 are recorded on the associated update block.

In host write 2, an update occurs on logical group LG1 that causes the update block to be closed (e.g., because it is full). A new update block is provided to record the remaining updates. The provision of a new update block triggers a garbage collection, resulting in a consolidation operation on LG4 so as to recycle a block for reuse. The current logical units of the LG4 group are recorded on a consolidation block in sequential order. The consolidation proceeds until a defect is encountered in the consolidation block. The Phase I consolidation is then invoked, with the consolidation continuing on a breakout consolidation block. Meanwhile, the final consolidation of LG4 (Phase III) awaits the next opportunity.

In host write 3, a write of logical units of logical group LG2 also occurs, triggering a consolidation for LG2. This means the time slot is already fully utilized.

In host write 4, the operation merely records some logical units of LG2 to its update block. The time remaining in the slot provides the opportunity to perform the final consolidation of LG4.

Embodiments in Which the Breakout Consolidation Block Is Not Converted Into an Update Block

Figures 34A and 34B show, respectively, a first case of the Phase I and Phase III operations of the multi-phase consolidation, as applied to the examples of Figures 28 and 33.

Figure 34A shows the case in which the breakout consolidation block is used not as an update block, but as a consolidation block whose consolidation operation has been interrupted. In particular, Figure 34A refers to host write #2 shown in Figure 33, in which the host writes updates of logical units belonging to logical group LG1 and, in the process, the operation also triggers a consolidation of a block associated with another logical group, LG4.

The original block (block 1) and the update block (block 2) are formed in the same way as in the example of Figure 28. Similarly, during the consolidation operation, the consolidation block (block 3) turns out to be defective while the logical units of section C are being consolidated. However, unlike the re-consolidation scheme of Figure 28, the present multi-phase scheme continues the consolidation operation on a newly provided block (block 4) serving as a breakout consolidation block. Thus, in the Phase I consolidation operation, the logical units of sections A and B have already been consolidated in the consolidation block (block 3). When the program failure occurs in the consolidation block, the remaining logical units in sections C and D are copied sequentially to the breakout consolidation block (block 4).

If the host originally wrote an update in a first logical group, and that update triggered the consolidation of a block associated with a second logical group, the update of the first logical group is recorded to an update block of the first logical group (typically a new update block). In that case, the breakout consolidation block (block 4) is not used to record any update data outside the consolidation operation and remains a breakout consolidation block whose consolidation must be completed.

Since the data of blocks 1 and 2 is now entirely contained in other blocks (blocks 3 and 4), they can be erased for recycling. The address table (GAT) is updated to point to block 3 as the original block of the logical group. The directory information for the update block (in the ACL, see Figures 15 and 18) is also updated to point to block 4, which has become the sequential update block of the logical group (e.g., LG4).

As a result, the consolidated logical group is not confined to one block, but is distributed over the defective consolidation block (block 3) and the breakout consolidation block (block 4). An important feature of this scheme is that the logical units of the group are consolidated only once during this phase, at the cost of spreading the consolidation over more than one block. In this way, the consolidation operation can be completed within the normally specified time.

Figure 34B shows the third and final phase of the multi-phase consolidation begun in Figure 34A. As described in connection with Figure 33, the Phase III consolidation is performed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation. In particular, Figure 34B refers to the time slot in which host write #4 shown in Figure 33 takes place. During that period, the host write updates logical units belonging to logical group LG2 without triggering another, additional consolidation operation. The time remaining in the slot can thus conveniently be used for the Phase III operation, completing the consolidation of logical group LG4.

The operation consolidates into the breakout block all the outstanding logical units of LG4 not already in it. In the example, this means sections A and B are copied from block 3 to the breakout block (block 4) in logically sequential order. Because of the wrap-around scheme for the logical units of a block and the use of a page tag (see Figure 3A), even though the example shows sections A and B recorded in block 4 after sections C and D, the recorded sequence is still considered equivalent to the sequential order A, B, C, D. Depending on the implementation, the current versions of the outstanding logical units to be copied are preferably obtained from block 3, since they are already in consolidated form there, although they could also be collected from blocks 1 and 2 if those have not yet been erased.

After the final consolidation has been completed on the breakout block (block 4), it is designated as the original block of the logical group, and the appropriate directory (e.g., the GAT, see Figure 17A) is updated accordingly. Likewise, the failed physical block (block 3) is marked as bad and mapped out. The other blocks, block 1 and block 2, are erased and recycled. Meanwhile, the updates of LG2 are recorded in the update block associated with LG2.

Embodiments in Which the Breakout Consolidation Block Becomes an Update Block

Figures 35A and 35B show, respectively, a second case of the Phase I and Phase III operations of the multi-phase consolidation, as applied to the examples of Figures 28 and 33.

Figure 35A shows the case in which the breakout consolidation block is maintained as an update block receiving host writes, rather than as a consolidation block. This applies to a host write that, for example, updates logical group LG4 and, in the process, also triggers a consolidation of the same logical group.

As in the case of Figure 34A, the consolidation of block 1 and block 2 onto block 3 proceeds until a program failure is encountered while section C is being processed. The consolidation then continues on the breakout consolidation block (block 4). After the outstanding logical units (e.g., those in sections C and D) have been consolidated in the breakout block (block 4), rather than waiting in Phase III to complete the consolidation of the logical group, the breakout block is maintained as an update block. This case is particularly suited to the situation in which a host write updates a logical group and triggers a consolidation of that same logical group. In the example, this allows the recording of the host updates of logical group LG4 to be made in the breakout consolidation block (block 4), instead of in a new update block. The update block (previously the breakout consolidation block, block 4) may remain sequential or turn chaotic, depending on the host data recorded in it. In the example shown, block 4 has turned chaotic, since a subsequent newer version of a logical unit in section C has rendered the earlier version in block 4 obsolete.

During the intermediate phase, block 3 is regarded as the original block of LG4, and block 4 is the associated update block.

Figure 35B shows the third and final phase of the multi-phase consolidation begun in Figure 35A for the second case. As described in connection with Figure 33, the Phase III consolidation is performed at an opportune time after the first phase, such as during a subsequent host write that does not trigger an accompanying consolidation operation. During that period, the host write updates logical units belonging to a logical group without triggering another, additional consolidation operation. The time remaining in the slot can thus conveniently be used for the Phase III operation, completing the consolidation of logical group LG4.

Logical group LG4 is then garbage-collected from block 3 and block 4 into a new consolidation block (block 5). Block 3 is then marked as bad, block 4 is recycled, and the new consolidation block (block 5) becomes the new original block of logical group LG4. The other blocks, block 1 and block 2, are also erased and recycled.

Other Embodiments of Phased Program Failure Handling

The examples described in Figures 31A, 31B, 34A, 34B, 35A and 35B apply to the preferred block management system, in which each physical block (metablock) stores only logical units belonging to the same logical group. The invention is equally applicable to other block management systems in which there is no alignment of logical groups to physical blocks, such as those disclosed in WO 03/027828 and WO 00/49488. Some examples of implementing the phased program failure handling method in these other systems are shown in Figures 36A, 36B and 36C.

Figure 36A shows the phased program error handling method applied to the case where a host write triggers the closure of an update block, and the update block is sequential. The closure in this case is accomplished by copying the remaining valid data (B and C) of the original block 2 to the sequential update block 3. In the event of a program failure at the start of programming data portion C, portion C is programmed to a reserved block 4. New host data can then be written to a new update block 5 (not shown). Phases II and III of this method are the same as in the case of chaotic block closure.

Figure 36B shows the phased program error handling method applied, in a partial-block system, to the case of an update of an update block. In this case, the logical group is stored in the original block 1 and in other update blocks. The consolidation operation includes copying the data of the original block 1 and of another update block 2 to one of the update blocks (block 3 in the figure, selected according to certain rules). The difference from the main case already described is that block 3 has already been partially written.

Figure 36C shows the phased program error handling of a garbage collection operation, or clean-up, in a memory block management system that does not support logical groups mapped to metablocks. Such a memory block management (cyclic storage) system is described in WO 03/027828 A1. A distinctive feature of the cyclic storage system is that blocks are not allocated to a single logical group; multiple logical groups of control data are supported in a metablock. Garbage collection involves taking valid data sectors, which may bear no relationship to one another (having random logical block addresses), from a partially obsolete block to a relocation block that may already contain some data. If the relocation block becomes full during the operation, another one is opened.

Non-Sequential Update Block Indexing

In the discussion above on chaotic block indexing, in connection with Figures 16A-16E, a CBI sector is used to store an index that records the locations of logical sectors stored at random in a chaotic or non-sequential update block.

According to another aspect of the invention, in a nonvolatile memory with a block management system that supports update blocks with non-sequential logical units, an index of the logical units of a non-sequential update block, buffered in RAM, is stored periodically into the nonvolatile memory. In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the locations of the recently written logical units can be determined without having to perform a scan during initialization. In yet another aspect, a block is managed as partially sequential and partially non-sequential, directed to more than one logical subgroup.

Index Pointers Stored in a CBI Sector of a CBI Block After a Predefined Triggering Event

According to the scheme described in connection with Figures 16A-16E, a list of the recently written sectors of a chaotic block is held in controller RAM. The CBI sector containing the most current index information is written to flash memory (the CBI block 620) only after a predetermined number of writes to the logical group associated with a given chaotic block. In this way, the number of CBI block updates is reduced.

Until the next update of the CBI sector for a logical group, the list of recently written sectors of the logical group is held in controller RAM. The list will be lost if the memory device suffers a power shutdown, but it can be rebuilt by scanning the update block during initialization after a power-up.

Figure 37 shows an example schedule for writing a CBI sector to the associated chaotic index sector block after every N sector writes of the same logical group. The example shows two logical groups, LG3 and LG11, undergoing concurrent updates. Initially, the logical sectors of LG3 were stored in sequential order in an original block. Updates of the group's logical sectors are recorded on the associated update block in the order dictated by the host; the example shows a chaotic update sequence. Concurrently, logical group LG11 is also being updated at its own update block in the same manner. After every logical sector write, its location in the update block is kept in controller RAM. After every predefined triggering event, the current index of the logical sectors of the update block is written, in the form of a chaotic index sector, to the nonvolatile chaotic index sector block. For example, the predefined triggering event may occur after every N writes, where N may be 3.

While the examples given relate to logical units of data in terms of sectors, those skilled in the art will recognize that a logical unit could also be some other aggregate, such as a page containing one sector or a group of sectors. Also, the first page of a sequential block is not necessarily logical page 0, since a page tag with wrap-around may place the starting page elsewhere.

Index Pointers Stored in a CBI Sector of the Chaotic Update Block After a Predefined Triggering Event

In another embodiment, the index pointers are stored, after every N writes, in a dedicated CBI sector in the chaotic update block itself. This scheme is similar to the embodiment described above, in which the index is likewise stored in a CBI sector. The difference is that in the embodiment above, the CBI sector is recorded in a CBI sector block, rather than in the update block itself.

The method is based on keeping all the chaotic block indexing information in the chaotic update block itself. Figures 38A, 38B and 38C show, respectively, the states of an update block that likewise stores CBI sectors, at three different stages.

Figure 38A shows the update block up to the point when a CBI sector is recorded in it after a predetermined number of writes. In this example, after the host has written logical sectors 0-3 sequentially, it issues a command to write another version of logical sector 1, breaking the sequential progression of data writes. The update block is then converted into a chaotic update block, with the implementation of a chaotic block index carried in a CBI sector. As described above, the CBI is an index containing the indices of all the logical sectors of the chaotic block. For example, the 0th entry denotes the offset in the update block of the 0th logical sector and, likewise, the nth entry denotes the offset of the nth logical sector. The CBI sector is written to the next available location in the update block. To avoid frequent flash accesses, the CBI sector is written only after every N data sector writes; in this example, N is 4. If power is lost at this point, the last written sector is the CBI sector, and the block is regarded as a chaotic update block.

Figure 38B shows the update block of Figure 38A with logical sectors 1, 2 and 4 recorded in it after the index sector. The newer versions of logical sectors 1 and 2 supersede the older versions previously recorded in the update block. In the event of a power cycle at this point, the last written sector must first be found, and then up to N sectors must be scanned in order to find the last written index sector and the most recently written data sectors.

Figure 38C shows the update block of Figure 38B with a further write that triggers the next recording of an index sector. The same update block, after another N (N = 4) sector writes, records another current version of the CBI sector.

The advantage of this scheme is that no separate CBI block is required. At the same time, there is no concern over whether the overhead data area of a physical flash sector is large enough to accommodate the number of entries required for an index of the valid sectors of a chaotic update block. The chaotic update block then contains all the information, and no external data is needed for address translation. This results in a simpler algorithm, with a reduced number of control updates associated with CBI block compaction, and shorter cascaded control updates. (See the sections above on CBI block management.)

Information About Recently Written Sectors Stored in Data Sector Headers in the Chaotic Update Block

According to another aspect of the invention, an index of the logical units recorded in a block is stored in the nonvolatile memory after every N writes, while for the logical units written in between, current indexing information is stored in the overhead (header) portion written with each logical unit.
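The every-N indexing scheme can be sketched as follows in C. The sketch keeps the chaotic block index in (simulated) controller RAM, notes that each intermediate data sector's header carries its own index information, and emits a CBI sector into the update block after every N = 4 data writes, mirroring Figures 38A and 38B. The structures and trigger placement are simplifications; in particular, the specification ties the first CBI write to the block turning chaotic, which is not modeled here.

```c
#include <stdio.h>
#include <string.h>

#define GROUP_SECTORS 16   /* logical sectors per logical group (assumed) */
#define N              4   /* data-sector writes between CBI sectors      */

/* RAM copy of the chaotic block index: logical sector -> block offset. */
static int ram_index[GROUP_SECTORS];
static int next_slot = 0;
static int writes_since_cbi = 0;

static void write_cbi_sector(void)
{
    /* The CBI sector carries the whole index; here we only log it. */
    printf("slot %2d: CBI sector, index = {", next_slot);
    for (int ls = 0; ls < GROUP_SECTORS; ls++)
        if (ram_index[ls] >= 0)
            printf(" LS%d@%d", ls, ram_index[ls]);
    printf(" }\n");
    next_slot++;
    writes_since_cbi = 0;
}

static void write_data_sector(int ls)
{
    /* The header of each intermediate write carries its own index info,
     * so a scan after power loss is bounded to at most N sectors. */
    printf("slot %2d: LS%d (header carries LS%d -> %d)\n",
           next_slot, ls, ls, next_slot);
    ram_index[ls] = next_slot++;
    if (++writes_since_cbi == N)
        write_cbi_sector();
}

int main(void)
{
    memset(ram_index, -1, sizeof ram_index);
    /* Host order following Figures 38A-38B: 0,1,2,3 then 1,2,4 then 1. */
    int seq[] = { 0, 1, 2, 3, 1, 2, 4, 1 };
    for (unsigned i = 0; i < sizeof seq / sizeof *seq; i++)
        write_data_sector(seq[i]);
    return 0;
}
```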
Although the GAT directory in the flash memory has not been updated and it still points to block 1 of the original block 98704.doc -93 - 1288328, no data is read from it, and the block itself has been wiped out earlier. except. Another possibility is that the host reads the logical units in the logical group. At this time, the block 3 of the original block of the logical group reads the logical unit of the sections A and B and reads the sections C and D from the block 4 of the sequential block of the group. Logical unit. Step 1136: or in power-on initialization, by scanning the contents to re-identify any of the first to fourth blocks. Another possibility in the intermediate phase is to turn off the power to the memory device and then restart. As described above, during power-on initialization, the blocks in the configuration block list (the erase block blocks to be used, see Figures 15 and 18) are scanned to identify the original blocks that have become special states in the logical group ( Block 3) and associated defect update block (block 4) defect summary block. The flag in the first logical unit of the interrupt block (block 4) will indicate that the associated block is the original block that has encountered a program error (block 3). Block 3 can be found by looking up the block directory (GAT). In a specific embodiment, the flag is programmed to the first logical unit of the interrupt summary block (block 4). This can assist in indicating the special state of the logical group: that is, it has been aggregated into two blocks, namely, block 3 and block 4. An alternative to using flags to identify logical groups with defective blocks is to use features that are not as full as the original block (unless the error occurred on the last page, and the last page has no ECC error). The block is a defective block. Also, depending on the embodiment, there will be an information record about the failed group/block of the control data structure stored in the flash memory, and 98704.doc -94 - 1288328 is not just in the write interrupt summary area. The flag in the header area of the first segment of the block (block 4). Summary completion (stage III) v step 1142 · In response to the predetermined event, for the first case when the fourth block is not further recorded since the stage, the logic is stored therein in the same order as the first order The current version of all unprocessed logical orders = of the group; for the first case when the fourth block has been further recorded since the stage, the third and fourth blocks are summarized into the fifth block. Step 1144: After that, for the first case, when the memory is operated, the aggregated fourth block is used as the original block of the logical group; for the second case, when the memory is operated, the fifth block is used as the logic. The original block of the group. As long as there are any machines t that do not violate any of the specified time limits, the final summary in the phase out can be performed. A preferred case is when there is another update operation of the logical group without the summary operation, "attach (one (four)" on the next host write time slot. If another logical group host writes Triggering the collection of abandoned items will delay the aggregation of stage m. Figure 33 shows the sample sequence of the first and last stages of the multi-stage summary operation n ^1. The host writes the wait time and is right. , the width of each host that has a duration of 1 is written. 
Figure 33 shows an example timing of the first and last phases of the multi-phase summary operation; the host write latency is the width of each host write time slot, each of duration Tw.

Host write #1 is a simple update: the current version of a logical unit of logical group LG1 is recorded on the associated update block. At host write #2, another update on LG1 causes the update block to be closed (for example, because it is full), and a new update block is provided to record the remaining updates. Providing the new update block triggers obsolete item collection, which results in a summary operation on LG4 so that its blocks can be recycled for reuse. The current logical units of LG4 are recorded on the summary block in sequential order until a defect is encountered in that block. Phase I summary is then invoked, and the summary operation continues on an interrupt summary block. Meanwhile, the final summary of LG4 (Phase III) waits for a later opportunity. At host write #3, a logical unit of logical group LG2 is written, which happens to trigger a summary of LG2; that time slot is therefore fully used. At host write #4, the operation merely records some logical units of LG2 to their update block, and the time remaining in the slot provides the opportunity to perform the final summary of LG4.

Embodiment in which the interrupt summary block is not converted into an update block

Figures 34A and 34B show, respectively, a first case of the Phase I and Phase III operations of the multi-phase summary, applicable to the examples of Figures 28 and 31. Figure 34A shows an example in which the interrupt summary block is used not as an update block but as a summary block whose summary operation has been interrupted. In particular, Figure 34A refers to host write #2 of Figure 33, in which the host writes updates of logical units belonging to logical group LG1 and, during the same operation, a summary of the blocks associated with another logical group, LG4, is also triggered.

The original block (block 1) and the update block (block 2) are formed in the same way as in the example of Figure 28. Likewise, during the summary operation, the summary block (block 3) is found to be defective while summarizing the logical units of section C. Unlike the re-summary scheme described earlier, however, this multi-phase scheme continues the summary operation on a newly provided block (block 4) serving as the interrupt summary block. Thus, in the Phase I summary operation, the logical units of sections A and B have already been summarized into the summary block (block 3); when the program failure occurs there, the remaining logical units of sections C and D are copied sequentially to the interrupt summary block (block 4).

If the host write that originally triggered the operation is an update of a first logical group while the summarized blocks belong to a second logical group, the update of the first logical group is recorded to that group's own update block (typically a new update block). In that case the interrupt summary block (block 4) is not used to record any update data outside the summary operation, and it remains an interrupt summary block whose summary must still be completed.

Since the data of blocks 1 and 2 is now fully contained in other blocks (blocks 3 and 4), they can be erased and recycled. The address table (GAT) is updated to point to block 3 as the original block of the logical group.
The directory information for the update block (in the ABL; see Figures 15 and 18) is also updated, to point to block 4 as the sequential update block of the logical group (e.g., LG4). As a result, the summarized logical group is no longer confined to one block but is distributed over the defective summary block (block 3) and the interrupt summary block (block 4). An important feature of this scheme is that the logical units of the group are summarized only once during this phase, yet the summary is spread over more than one block. In this way the summary operation can be completed within the normally specified time.

Figure 34B shows the third and final phase of the multi-phase summary begun in Figure 34A. As described in connection with Figure 33, the Phase III summary is performed at a suitable time after the first phase, for example during a subsequent host write that does not trigger a summary operation of its own. In particular, Figure 34B refers to the time slot of host write #4 of Figure 33. In that slot the host write updates a logical unit belonging to logical group LG2 without triggering another summary operation, so the time remaining in the slot can conveniently be used for the Phase III operation to complete the summary of logical group LG4.

This operation summarizes into the interrupt block all outstanding logical units of LG4 not already in it. In the example, this means that sections A and B are copied from block 3, in logically sequential order, to the interrupt block (block 4). Owing to the wrap-around addressing of logical units within a block and the use of page tags (see Figure 3A), even though sections A and B are recorded after sections C and D in block 4, the recorded sequence is treated as equivalent to the sequential order A, B, C, D. Depending on the embodiment, the current versions of the outstanding logical units to be copied are preferably obtained from block 3, since they are already in summarized form there, but they may also be collected from blocks 1 and 2 if those have not yet been erased.

After the final summary is completed on the interrupt block (block 4), it is designated the original block of the logical group, and the appropriate directory (e.g., the GAT; see Figure 17A) is updated accordingly. Likewise, the failed physical block (block 3) is marked bad and retired. The other blocks, block 1 and block 2, are erased and recycled. Meanwhile, the LG2 update is recorded in the update block associated with LG2.

Embodiment in which the interrupt summary block is converted into an update block

Figures 35A and 35B show, respectively, a second case of the Phase I and Phase III operations of the multi-phase summary, applicable to the examples of Figures 28 and 33. Figure 35A shows an example in which the interrupt summary block is maintained as an update block receiving host writes, rather than as a summary block. This applies to a host write that, say, updates logical group LG4 and in doing so also triggers a summary of that same logical group. As in the case of Figure 34A, the summarizing of blocks 1 and 2 into block 3 proceeds until the program failure occurs while processing section C; the summary then continues on the interrupt summary block (block 4).
After the outstanding logical units (here, those of sections C and D) are summarized into the interrupt block (block 4), the scheme does not wait for the Phase III completion of the logical group summary; instead the interrupt block is maintained as an update block. This case is especially suitable when a host write updates a logical group and thereby triggers a summary of that same logical group. In the example, this allows the host updates of logical group LG4 to be recorded in the interrupt summary block (block 4) instead of in a new update block. The update block (formerly the interrupt summary block, block 4) can be sequential or chaotic, depending on the host data recorded in it. In the example shown, block 4 has become chaotic because a subsequent newer version of a logical unit of section C renders the earlier version in block 4 obsolete. During the intermediate phase, block 3 is regarded as the original block of LG4 and block 4 as its associated update block.

Figure 35B shows the third and final phase of the multi-phase summary begun in Figure 35A, for this second case. As described in connection with Figure 33, Phase III is performed at a suitable time after the first phase, such as during a subsequent host write that does not trigger a summary operation of its own. In such a slot the host write updates logical units of some logical group without triggering another summary operation, so the time remaining in the slot can be used for the Phase III operation to complete the summary of logical group LG4. LG4 is then garbage-collected from block 3 and block 4 into a new summary block (block 5). Block 3 is marked bad, block 4 is recycled, and the new summary block (block 5) becomes the new original block of logical group LG4. The other blocks, block 1 and block 2, are also erased and recycled.

Other Specific Embodiments of Phased Program Failure Handling

The examples described in Figures 31A, 31B, 34A, 34B, 35A and 35B apply to a preferred block management system in which each physical block (relay block) stores only logical units belonging to the same logical group. The invention is equally applicable to other block management systems in which logical groups are not aligned to physical blocks, such as those disclosed in WO 03/027828 and WO 00/49488. Some examples of implementing the phased program failure handling method in these other systems are shown in Figures 36A, 36B and 36C.

Figure 36A shows the phased program error handling method applied to the case where a host write triggers the closure of an update block and the update block is sequential. Closing is accomplished in this example by copying the remaining valid data (B and C) of original block 2 to the sequential update block 3. When the program fails at the starting point of programming data portion C, portion C is programmed to a reserved block 4 instead. New host data can then be written to a new update block 5 (not shown). Phases II and III of the method proceed as in the closed-block case.

Figure 36B shows the phased program error handling method applied, in a partial block system, to the case of an update of an update block. In this example the logical group is stored in the original block 1 and in other update blocks.
The summary operation includes copying the data of the original block 1 and the other update block 2 to one of the update blocks (selected according to some rule; block 3 in the figure). The main difference from the case already explained is that block 3 has already been partially written.

Figure 36C shows the phased program error handling applied to an obsolete item collection operation, or clearing, in a memory block management system that does not support logical groups mapped to relay blocks. Such a memory block management (cyclic storage) system is described in WO 03/027828 A1. A distinctive feature of the cyclic storage system is that blocks are not allocated to a single logical group; a relay block holds control data for multiple logical groups. Obsolete item collection there takes valid data sectors, possibly unrelated to one another (with random logical block addresses), out of a partially obsolete block and moves them to a relocation block that may already contain some data. If the relocation block becomes full during the operation, another one is opened.

Non-Sequential Update Block Indexing

As discussed above in the section on chaotic block indexing, in conjunction with Figures 16A-16E, a CBI sector stores an index recording the locations of logical sectors stored randomly in a chaotic, or non-sequential, update block.

According to another aspect of the invention, in a non-volatile memory with a block management system supporting update blocks with non-sequential logical units, an index of the logical units in a non-sequential update block is buffered in RAM and stored periodically into the non-volatile memory. In one embodiment, the index is stored in a block dedicated to storing indices. In another embodiment, the index is stored in the update block itself. In yet another embodiment, the index is stored in the header of each logical unit. In another aspect, the logical units written after the last index update but before the next have their indexing information stored in the header of each logical unit. In this way, after a power interruption, the locations of recently written logical units can be determined without scanning during initialization. In yet another aspect, a block is managed as partly sequential and partly non-sequential, directed to more than one logical subgroup.

Index pointer in a CBI sector recorded in a CBI block after a predetermined triggering event

According to the scheme described in connection with Figures 16A-16E, a list of the recently written sectors of a chaotic block is held in controller RAM. A CBI sector containing the latest index information is written to flash memory (into CBI block 620) only after a predetermined number of writes to the logical group associated with the given chaotic block. In this way the number of CBI block updates is reduced. Until the next update of the CBI sector for a logical group, the list of the group's recently written sectors is kept in controller RAM. The list is lost if the memory device loses power, but it can be rebuilt by scanning the update block during initialization after power-up.
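Purely for illustration, the C sketch below shows one way the RAM-buffered index and its periodic flush could look in controller firmware; the structure layout, the group size and the helper are assumptions, not part of the disclosure.

    #include <stdint.h>

    #define SECTORS_PER_GROUP 64u  /* assumed logical-group size            */
    #define FLUSH_INTERVAL_N   3u  /* N = 3, as in the example of Figure 37 */

    struct chaotic_index {
        uint16_t offset[SECTORS_PER_GROUP]; /* update-block offset of each
                                               logical sector's latest copy */
        uint8_t  writes_since_flush;        /* writes since the last CBI    */
    };

    /* Stub: write the index to flash as a CBI sector in the CBI block. */
    static void flush_cbi_sector(const struct chaotic_index *idx) { (void)idx; }

    void record_chaotic_write(struct chaotic_index *idx,
                              uint8_t logical_sector, uint16_t block_offset)
    {
        idx->offset[logical_sector] = block_offset;   /* update the RAM copy */
        if (++idx->writes_since_flush >= FLUSH_INTERVAL_N) {
            flush_cbi_sector(idx);                    /* persist every N writes */
            idx->writes_since_flush = 0;
        }
    }

Only the writes made after the most recent flush are at risk on power loss, which is what bounds the post-power-up rebuild scan to a short tail of the update block.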
Figure 37 shows an example schedule for writing a CBI sector to the associated chaotic index sector block after every N sector writes to the same logical group. The example shows two logical groups, LG3 and LGn, being updated concurrently. Initially the logical sectors of LG3 are stored in their original block in sequential order. Updates to logical sectors of the group are recorded on the associated update block in the order dictated by the host; the example shows a chaotic update sequence. At the same time, logical group LGn is updated in the same way on its own update block. After each logical sector write, its location in the update block is retained in controller RAM, and after each predetermined triggering event the current index of the logical sectors in the update block is written, in the form of a chaotic index sector, into the non-volatile chaotic index sector block. For example, the predetermined triggering event may occur after every N writes, with N equal to 3.

Although the examples given treat the logical unit of data as a sector, those skilled in the art will recognize that the logical unit could be some other aggregate, such as a page containing one sector or a group of sectors. Also, the first page of a sequential block need not be logical page 0, since a wrap-around page tag may be in place.

Index pointer in a CBI sector recorded in the chaotic update block after a predetermined triggering event

In another embodiment, the index pointer is stored in a CBI sector in the chaotic update block itself after every N writes. This scheme is similar to the embodiment above in which the index is likewise stored in a CBI sector; the difference is that there the CBI sector is recorded in a CBI sector block, not in the update block itself. The present method is based on keeping all chaotic block indexing information in the chaotic update block itself. Figures 38A, 38B and 38C show, at three different stages, the states of an update block that also stores CBI sectors.

Figure 38A shows an update block up to the point where a CBI sector is recorded in it after a predetermined number of writes. In this example, after the host has written logical sectors 0-3 sequentially, it issues another version of logical sector 1, breaking the sequential order of the data writes. The update block is then converted into a chaotic update block by means of a chaotic block index carried in a CBI sector. As mentioned before, the CBI contains entries for all logical sectors of the chaotic block: the nth entry gives the update-block offset of the nth logical sector. The CBI sector is written to the next available location in the update block. To avoid frequent flash accesses, it is written only after every N data sector writes; in this example N is 4. If power is lost at this point, the last written sector is a CBI sector, and the block is treated as a chaotic update block.

Figure 38B shows the update block of Figure 38A with logical sectors 1, 2 and 4 recorded after the index sector. The newer versions of logical sectors 1 and 2 supersede the older versions recorded earlier in the update block. If a power cycle occurred at this point, the last written sector would have to be located first, and then up to N sectors would have to be scanned to find the last written index sector and the most recently written data sectors.
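As an illustration only, the recovery scan just described can be sketched in a few lines of C; the helper routines and the slot model are assumptions made for the example, not the patent's implementation.

    #include <stdint.h>
    #include <stdbool.h>

    #define N_WRITES_PER_INDEX 4   /* N = 4, as in the Figure 38 example */

    /* Stubs standing in for low-level flash services (assumptions). */
    static int  last_written_slot(int block) { (void)block; return -1; }
    static bool slot_is_cbi(int block, int slot) { (void)block; (void)slot; return false; }

    /* After power-up: return the slot of the newest CBI sector in a chaotic
     * update block, or -1 if none is found within the scan window. */
    int find_latest_cbi(int block)
    {
        int last = last_written_slot(block);   /* highest programmed slot */
        int lowest = last - N_WRITES_PER_INDEX;

        /* Data written after the last index flush is at most N sectors deep,
         * so only that tail needs to be examined. */
        for (int slot = last; slot >= 0 && slot > lowest; slot--) {
            if (slot_is_cbi(block, slot))
                return slot;
        }
        return -1;  /* fall back to rebuilding the index by a fuller scan */
    }

The bounded window is the point of the scheme: the initialization cost is fixed by N rather than by the size of the block.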
Figure 38C shows the update block of Figure 38B after further logical sector writes trigger the next recording of an index sector: the same update block, after another N (N=4) sector writes, records another current version of the CBI sector.

The advantage of this scheme is that no separate CBI block is needed. Nor is there any concern about whether the overhead data area of a physical flash sector is large enough to hold the number of entries required to index the valid sectors of a chaotic update block. The chaotic update block then contains all the information, and address translation needs no external data. This makes the algorithm simpler: the number of control updates related to CBI block compaction is reduced, and cascaded control updates are shortened. (See the section on CBI block management above.)

Information about recently written sectors stored in the data sector headers of a chaotic update block

According to another aspect of the invention, the index of the logical units recorded in a block is stored in non-volatile memory after every N writes, and current information about the intermediately written logical units is stored in the overhead portion written with each logical unit.

In this way, after power is restored, information about the logical units written since the last index update can be obtained quickly from the overhead portion of the last written logical unit in the block, without having to scan the block.

Figure 39A shows the intermediate index for intermediate writes stored in the header of each data sector in a chaotic update block.
誤及造成*正確的記憶體狀態,則也會毀損第—位元的值 圖41Α顯示當各記憶體單元儲存兩個位元的資料時,· 態記憶體陣列的定限電壓分布。此四個分布代表四個記摘 體狀1一11」、「又」、「¥」及「2」的總體。在程式化記憶11 皁兀之前,會先將其抹除至其「U」3戈「未寫入」狀態。名 t憶體單元逐漸被程式化時,會累進達到記憶體狀態「X」 「Y」及「z」。 園4 1B顯示現有使Get the information of the logical unit that has Μ δ U—the index is written after the update. Figure 3 9 Α display; at 1 1 /, stored in the chaotic update block in the header of each data section 98704.doc 1288328 intermediate index written. Figure 3 9B shows an example of storing an intermediate index of intermediate writes in each of the written & In this case, 丨φ, A are both x. After writing four segments LS〇-LS3 in the dry U, the (10) index is written as the τ_ segments in the block. Thereafter, the logical segments LS, LS'2, and LS4 are written to the block. Each time, the header stores the intermediate index of the logical unit of the person written since the last cm index. Therefore, the header in ls, 2 will have an index that provides the previous CBI index and its displacement (ie, position). Similarly, the header in LS4 will have the previous CBI index and 1^1 and 1^'2. The index of the displacement (ie, position). The last written data section will always contain information about up to N last written pages (ie, the CBI section up to the last write). As long as the power is restarted, the previous CBI index can provide the index of the uranium write of the logical unit in the CBI* lead section. § Hole, and the index information of the logical unit of the post-write can be written in the last written data area. Found in the header of the segment. This has the advantage that it is not necessary to scan the block for its subsequent writes at initialization to determine its position. The scheme of storing intermediate index information in the header of the data section is equally applicable regardless of whether the CBI index section is stored in the update block itself or in a separate CBI section block, as previously described. Index Indicators for Data Section Headers Stored in Chaotic Update Blocks In another embodiment, the entire CBI index is stored in the add-on portion of each data section in the chaotic update block. Figure 40 shows the information in the scrambled index block stored in the header of each data section of the chaotic update block. The section header has a limited amount of information, so the index range provided by any single section 98704.doc -105 - 1288328 can be designed as part of the hierarchical indexing scheme. For example, a section within a particular plane of memory can provide an index to a section that is only within that plane. There is also the ability to divide the range of logical addresses into sub-ranges to allow for an indirect indexing scheme. For example, if a segment with 64 logical addresses can be stored in a plane, each segment can have 3 blocks for the segment shift value, and each block can store 4 displacement values. The first stop can define the physical displacement of the last written segment within the logical displacement range 〇-15, 15-31, 32-47, and 48-63. The field column defines the entity displacement values for the four sub-ranges of each of the four segments within its associated range. The third blocker defines the entity displacement values for the four segments within its associated sub-range. Therefore, by reading the indirect displacement values of up to 3 segments, the physical displacement of the logical segments within the chaotic update block can be determined. 
The advantage of this scheme is that separate CBI blocks or CBI sectors are not needed either. It applies, however, only when the overhead data area of the physical flash sector is large enough to hold the number of entries required to index the valid sectors of a chaotic update block.

Limited logical range within the logical group of a chaotic update block

Within a logical group, the logical range of sectors that may be written non-sequentially can be reduced. The main advantages of this technique are the following. Copying of sequentially written data completes faster, because reading a single multi-sector page (in the multi-chip case, pages can be read in parallel) yields all the data for a destination page, provided source and destination are aligned (if not, one more read is needed); sectors outside the restricted range remain sequentially written in the original block; and the obsolete item collection operation finishes in a shorter time. Furthermore, using the on-chip copy feature, sequential data can be copied from source to destination without transferring the data back and forth through the controller. If the source data is already scattered, as happens in a chaotic block, up to one page read per sector is needed to gather all the sectors to be written to the destination.

In one embodiment, the logical range is not literally restricted to some number of sectors; the restriction is instead accomplished by limiting the number of CBIs (limiting the chaotic range only for large groups/relay blocks is reasonable, since several chaotic block indices are needed to cover the range of an entire logical group). For example, if a relay block/group contains 2048 sectors, up to 8 CBI sectors are needed, each covering the contiguous logical range of one 256-sector subgroup. If the number of CBIs is limited to 4, the chaotic block can be used to write sectors of up to four subgroups (any four of them). The logical group is thus allowed at most four partially or fully chaotic subgroups, and at least four subgroups remain fully sequential. If a chaotic block already has four valid CBI sectors associated with it and the host writes a sector outside the ranges of those CBI sectors (the chaotic subgroups), the chaotic logical group should be summarized and closed. This is very unlikely to happen, however, because in real applications the host does not need more than four chaotic 256-sector ranges (subgroups) within a 2048-sector range (logical group). As a result, obsolete item collection is unaffected in the normal case, while the limiting rule guards against the extreme case of an obsolete item collection so long that it would trigger a host timeout.

Index for a partially sequential chaotic update block

When a sequential update block has been partially written before the block is converted to chaotic management, all or part of the sequentially updated portion of the logical group can continue to be treated as sequentially updated, and chaotic update management can be applied to only a subset of the logical group's address range.

Control Data Integrity and Management

Data stored in a memory device may become corrupted by a power interruption or because a particular memory location turns defective. If a memory block defect is encountered, the data is relocated to a different block and the defective block is discarded.
If the errors are not extensive, they can be corrected on the fly by the error correction code (ECC) stored with the data. There are, however, times when the ECC cannot salvage corrupted data, for example when the number of error bits exceeds its capacity. This is unacceptable for critical data such as the control data associated with the memory block management system.

Examples of control data are the directory information and the block allocation information associated with the memory block management system, as described in connection with Figure 20. As mentioned above, control data is maintained both in high-speed RAM and in the slower non-volatile memory blocks. Frequently changing control data is maintained in RAM, with periodic control writes updating the equivalent information stored in non-volatile relay blocks. In this way the control data is stored in the slower, but non-volatile, flash memory without requiring frequent access. As shown in Figure 20, a hierarchy of control data structures (e.g., GAT, CBI, MAP and MAPA) is maintained in flash memory; a control write operation causes the information of the control data structures in RAM to update their equivalents in flash memory.

Critical Data Backup

According to another aspect of the invention, if critical data, such as some or all of the control data, is maintained in duplicate, an extra level of reliability is guaranteed. The duplication is performed such that, for a multi-state memory system that uses a two-pass programming technique to program multiple bits into the same set of memory cells one pass after another, any program failure in the second pass cannot corrupt the data established by the first pass. Duplication also helps detect write aborts and mis-detections (i.e., both copies have good ECC but different data), and adds an extra level of reliability. Several data duplication techniques are contemplated.

In one embodiment, after two copies of a given data have been programmed in an earlier programming pass, a subsequent programming pass avoids programming the memory cells storing at least one of the two copies. In this way at least one of the two copies remains unaffected if the subsequent pass aborts before completion and corrupts the data of the earlier pass.

In another embodiment, the two copies of a given data are stored in two different blocks, and at most one of the two copies has its memory cells programmed in a later programming pass.

In a further embodiment, after the two copies of a given data have been stored in a programming pass, no further programming is performed on the set of memory cells storing the two copies. This is achieved by programming the two copies in the final programming pass for that set of cells.

In yet another embodiment, the two copies of a given data are programmed into a multi-state memory in binary programming mode, so that no further programming of the programmed memory cells takes place.

In yet another embodiment, for a multi-state memory system that uses a two-pass programming technique to program multiple bits into the same set of memory cells one pass after another, a fault-tolerant code is employed to encode the memory states, so that data established in the earlier programming pass is unaffected by errors in the later programming pass.

Data duplication is complicated in a multi-state memory in which each memory cell stores more than one bit of data.
For example, a four-state memory cell can represent two bits. One existing technique programs such a memory in two passes. The first bit (the lower page bit) is programmed in the first pass. The same cell is then programmed in a second pass to represent the desired second bit (the upper page bit). So as not to change the value of the first bit in the second pass, the memory-state representation of the first bit is made to depend on the value of the second bit. Consequently, if an error occurs during the programming of the second bit, because of a power interruption or otherwise, and produces an incorrect memory state, the value of the first bit can be corrupted as well.

Figure 41A shows the threshold voltage distributions of a four-state memory array when each memory cell stores two bits of data. The four distributions represent the populations of the four memory states "U", "X", "Y" and "Z". Before a memory cell is programmed, it is first erased to its "U", or "unwritten", state. As the cell is progressively programmed, it reaches the memory states "X", "Y" and "Z" in turn.

Figure 41B shows the existing two-pass programming scheme using a Gray code.

The four states can be represented by two bits, a lower page bit and an upper page bit, written as (upper page bit, lower page bit). For a page of cells to be programmed in parallel there are in effect two logical pages: the logical lower page and the logical upper page. A first programming pass programs only the logical lower page. With suitable coding, a subsequent second programming pass on the same page of cells programs the logical upper page without disturbing the logical lower page. The commonly used code is a Gray code, in which only one bit changes on a transition to an adjacent state. This code has the advantage of placing fewer demands on error correction, since only one bit is involved.

Schemes using a Gray code generally take "1" to represent the "unprogrammed" condition, so the erased memory state "U" is represented by (upper page bit, lower page bit) = (1, 1). In the first pass, which programs the logical lower page, any cell to store the bit "0" therefore undergoes the logical state transition (x, 1) to (x, 0), where "x" is the "don't care" value of the upper bit. However, since the upper bit has not yet been programmed, "x" may be labeled "1" for consistency. The (1, 0) logical state is represented by programming the cell to memory state "X". That is, before the second programming pass, the lower bit value "0" is represented by memory state "X".

The second programming pass stores the bits of the logical upper page. Only cells requiring an upper page bit value of "0" are programmed. After the first pass, the cells of the page are in logical state (1, 1) or (1, 0). To preserve the lower page values through the second pass, the lower bit values "0" and "1" must be distinguished. For the transition from (1, 0) to (0, 0), the memory cell in question is programmed to memory state "Y". For the transition from (1, 1) to (0, 1), the cell is programmed to memory state "Z". In this way, during a read, both the lower and upper page bits can be decoded by determining the memory state programmed into the cell.

The Gray-code two-pass programming scheme becomes a problem, however, when the second-pass programming goes wrong. For example, programming the upper page bit to "0" while the lower bit is "1" causes the transition from (1, 1) to (0, 1). This requires the memory cell to be progressively programmed from "U" through "X" and "Y" to "Z". If a power interruption occurs before the programming completes, the cell ends up in one of the transitional memory states, say "X". When the cell is read, "X" is decoded as logical state (1, 0). This gives incorrect results for both the upper and the lower bits, since the state should have been (0, 1). Similarly, if the programming is interrupted when "Y" is reached, it corresponds to (0, 0); the upper bit is now correct, but the lower bit is still wrong.

It can thus be seen that a problem in programming the upper page can corrupt data already in the lower page. In particular, when the second-pass programming passes over an intermediate memory state, a program abort can leave the programming stranded in that state, causing an incorrectly decoded lower page bit.

Figure 42 shows a way of safeguarding critical data by storing each sector in duplicate. For example, sectors A, B, C and D may each be stored in duplicate copies. If the data in one copy of a sector is corrupted, the other can be read instead.

Figure 43 shows the non-robustness of simply storing the duplicate sectors in a multi-state memory. As described above, in the example four-state memory a multi-state page actually comprises a logical lower page and a logical upper page, programmed in two separate passes. In the example shown, the page is four sectors wide. Sector A and its duplicate are thus programmed together into the logical lower page, and likewise sector B and its duplicate. Then, in the subsequent second-pass programming of the logical upper page, sectors C, C are programmed together, as are sectors D, D. If a program abort occurs in the middle of programming sectors C, C, the sectors A, A in the lower page are corrupted. Unless the lower page sectors are read and buffered before the upper page is programmed, they cannot be recovered if corrupted. Storing two copies of critical data simultaneously, as with sectors A, A, therefore does not protect them from a problematic later store of sectors C, C in the upper page above them.

Figure 44A shows one embodiment of storing duplicate copies of critical data to multi-state memory in a staggered fashion. The lower page is stored in the same way as in Figure 43, i.e., sectors A, A and sectors B, B. In the upper page programming, however, sectors C and D are interleaved with their duplicates as C, D, C, D. If partial page programming is supported, the two copies of sector C can be programmed together, and likewise the two copies of sector D. If the programming of the two sectors C is aborted, the lower page is corrupted only under one copy of sector A and one copy of sector B; the other copies remain unaffected. Thus, if critical data stored in the first pass is kept in two copies, it is not vulnerable to a simultaneous corruption by the subsequent second pass.

Figure 44B shows another embodiment in which duplicate copies of critical data are stored only in the logical upper pages of the multi-state memory. The lower page data is left unused. The critical data and its duplicate, such as sectors A, A and sectors B, B, are stored only to the logical upper page. In this way, if a program abort occurs, the critical data can be rewritten to another logical upper page, and any corruption of lower page data is immaterial. This approach essentially uses half the storage capacity of each multi-state page.

Figure 44C shows yet another embodiment, storing duplicate copies of critical data in binary mode of the multi-state memory. Here each memory cell is programmed in binary mode, with its threshold range divided into only two regions. There is therefore only a single programming pass, and if a program abort occurs, the programming can be restarted at a different location. This approach also uses half the storage capacity of each multi-state page. Operating a multi-state memory in binary mode is described in U.S. Patent No. 6,456,528 B1, the entire disclosure of which is hereby incorporated herein by reference.

Figure 45 shows yet another embodiment in which duplicate copies of critical data are stored simultaneously in two different relay blocks. If one of the blocks becomes unavailable, the data can be read from the other. For example, the critical data is contained in sectors A, B, C, D and E, F, G, H and I, J, K, L, each sector stored in duplicate. The two copies are written simultaneously to two different blocks, block 0 and block 1. If one copy is written to a logical lower page, the other copy is written to a logical upper page. In this way there is always a copy programmed to a logical upper page. If a program abort occurs, it can be reprogrammed to another logical upper page; and if the lower page is corrupted, there is always another upper page copy in the other block.

Figure 46B shows yet another embodiment, storing duplicate copies of critical data simultaneously using a fault-tolerant code. Figure 46A, like Figure 41A, shows the threshold voltage distributions of the four-state memory array and is provided as a reference for Figure 46B. The fault-tolerant code essentially avoids any upper page programming that transits through an intermediate state. Thus, in the first pass, programming the lower page, the logical state (1, 1) transitions to (1, 0), represented by programming the erased memory state "U" to "Y". In the second pass, programming the upper page bit to "0", if the lower page bit is "1" the logical state (1, 1) transitions to (0, 1), represented by programming the erased state "U" to "X"; if the lower page bit is "0", the logical state (1, 0) transitions to (0, 0), represented by programming memory state "Y" to "Z". Since the upper page programming involves programming only to the next adjacent memory state, a program abort cannot alter the lower page bit.

Serial write

The duplicate copies of critical data are preferably written simultaneously, as described above. Another way of avoiding simultaneous corruption of both copies is to write them sequentially. This method is slower, but the copies themselves indicate whether the programming succeeded when the controller checks both of them.

Figure 47 is a table showing the possible states of the two data copies and the validity of the data.

If neither the first nor the second copy has an ECC error, the programming of the data is deemed fully successful; valid data can be obtained from either copy.

If the first copy has no ECC error but the second copy does, the programming was interrupted in the middle of programming the second copy. The first copy contains valid data; the second copy's data is unreliable, even if its errors are correctable.

If the first copy has no ECC error and the second copy is empty (erased), the programming was interrupted after the first copy completed but before the second copy began. The first copy contains valid data.

If the first copy has an ECC error and the second copy is empty (erased), the programming was interrupted in the middle of programming the first copy. The first copy contains invalid data, even if its errors are correctable.

For reading data maintained in duplicate, the following technique is preferred because it exploits the existence of the duplicate copies: read and compare both copies. In that case, the states of the two copies as laid out in Figure 47 can be used to ensure that there are no false detections of errors.
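As an illustration only, the decision table of Figure 47 can be captured in a few lines of C; the enum and function names below are assumptions introduced for the example.

    enum copy_state { GOOD_ECC, ECC_ERROR, ERASED };
    enum verdict    { BOTH_VALID, USE_FIRST, DATA_INVALID };

    enum verdict judge_copies(enum copy_state first, enum copy_state second)
    {
        if (first == GOOD_ECC && second == GOOD_ECC)
            return BOTH_VALID;    /* programming fully successful          */
        if (first == GOOD_ECC && second == ECC_ERROR)
            return USE_FIRST;     /* aborted while writing the second copy */
        if (first == GOOD_ECC && second == ERASED)
            return USE_FIRST;     /* aborted before the second copy began  */
        if (first == ECC_ERROR && second == ERASED)
            return DATA_INVALID;  /* aborted mid-way through the first copy */
        return DATA_INVALID;      /* outside Figure 47's table: treat
                                     conservatively (an assumption)        */
    }

The table is decisive precisely because the copies are written in order: the second copy can never be in a better state than the first unless programming ran to completion.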
In another embodiment, the controller reads only one copy; for speed and simplicity, the copy read preferably alternates between the two copies. For example, when the controller reads control data, it may read, say, copy 1; the next control read (whatever it is) should then come from copy 2, then copy 1 again, and so on. In this way both copies are read, and their integrity is checked (by ECC) regularly. This reduces the risk of failing to detect errors that build up over time through degraded data retention. If, for example, only copy 1 were normally read, copy 2 could gradually degrade to the point where its errors can no longer be salvaged by ECC, and the second copy could no longer be used.

Preemptive Data Relocation

As described in connection with Figure 20, the block management system maintains a set of control data in flash memory during its operation. This control data is stored in relay blocks, just like host data. The control data is therefore itself subject to block management, and hence to updates and, in turn, to obsolete item collection operations.

A hierarchy of control data was also described, in which control data lower in the hierarchy is updated more often than control data higher up. Assume, for example, that every control block has N control sectors to write; then the following sequence of control updates and control block relocations normally occurs. Referring again to Figure 20, every N CBI updates fill the CBI block and trigger a CBI block relocation (rewrite) and a MAP update. If a chaotic block is closed, it also triggers a GAT update. Every GAT update can trigger a MAP update. Every N GAT updates fill a block and trigger a GAT block relocation. In addition, when the MAP block becomes full, it triggers a MAP block relocation and an update of the MAPA block (if one exists; otherwise the BOOT block points directly to the MAP). When the MAPA block becomes full, it in turn triggers a MAPA block relocation, a BOOT block update and a MAP update. Finally, when the BOOT block becomes full, it triggers relocation of the active BOOT block to another block.

Since the hierarchy is formed with the BOOT control data at the top, followed by MAPA, MAP and then GAT, every N^3 GAT updates there is a "cascade control update" in which all of the GAT, MAP, MAPA and BOOT blocks are relocated. When the GAT update is caused by the closure of a chaotic or sequential update block on a host write, there is also an obsolete item collection operation (i.e., relocation or rewrite) in the same slot; and in the chaotic update block case the CBI is updated too, which can also trigger a CBI block relocation. In this extreme case, therefore, the obsolete items of a large number of relay blocks must be collected at the same time.

It can be seen that every control data block of the hierarchy has its own periodicity of filling up and being relocated. If each proceeds normally, there will be times when the phases of a large number of blocks line up, triggering a massive relocation or obsolete item collection involving all of these blocks at once. Relocating many control blocks takes a long time and should be avoided, because some hosts do not tolerate the long delays caused by massive control operations.

According to another aspect of the invention, in a non-volatile memory with a block management system, a "controlled obsolete item collection", or preemptive relocation, of memory blocks is implemented to avoid the situation in which a large number of update blocks all happen to need relocation at the same time. This situation can arise, for example, when updating the control data used to run the block management system itself. A hierarchy of control data types can coexist with differing update frequencies, so that the associated update blocks require obsolete item collection or relocation at different rates. There will be certain times at which the obsolete item collection operations of more than one control data type coincide. In the extreme case, the relocation phases of the update blocks for all control data types line up, so that all of the update blocks require relocation simultaneously.

The invention avoids this undesirable situation: whenever the current memory operation can accommodate a voluntary obsolete item collection, a preemptive relocation of an update block takes place in advance of the block being completely filled. In particular, priority is given to the block holding the highest-level data type, which fills at the slowest rate. In this way, once the slowest-rate block has been relocated, another relatively long obsolete item collection will not be needed for some time. Moreover, the slower-rate blocks higher up the hierarchy have little relocation cascade to trigger. The method of the invention may be regarded as introducing a kind of dither into the overall mix in order to avoid phase alignment among the various blocks concerned. Whenever an opportunity arises, a slowly filling block that is within a small margin of being completely filled is therefore preemptively relocated.

In a system with a control data hierarchy in which lower control data, through the cascade effect, changes faster than higher control data, priority is given to the blocks holding the higher control data. An example of an opportunity to perform a voluntary preemptive relocation arises when a host write cannot itself trigger a relocation, so that any time remaining in the write latency budget can be used for the preemptive relocation operation. In general, the margin at which a block must be relocated is a predetermined number of unwritten memory units before the block is completely full. The margin chosen is enough to bring the relocation forward ahead of a completely filled block, yet not so premature that resources are wasted. In the preferred embodiment, the predetermined number of unwritten memory units is between one and six memory units.

Figure 48 shows a flow chart for preemptively relocating a memory block that stores control data.

Step 1202: Organize the non-volatile memory into blocks, each block partitioned into memory units that are erasable together.

Step 1204: Maintain different types of data.

Step 1206: Assign a ranking to the different types of data.

Step 1208: Store updates of the different types of data among a plurality of blocks, such that each block stores substantially one type of data.

Step 1210: In response to a block having fewer than a predetermined number of empty memory units and holding the highest-ranked data type among the plurality of blocks, relocate the current updates of that block's data to another block. If not interrupted, return to step 1208.

An example algorithm implementing preemptive relocation of the control data of Figure 20 is as follows:

    If ((there is no obsolete item collection due to user data) or
        (MAP has 6 or fewer unwritten sectors left) or
        (GAT has 3 or fewer unwritten sectors left)) then
        If (BOOT has 1 unwritten sector left) then
            relocate BOOT (i.e., rewrite it to a new block)
        else if (MAPA has 1 unwritten sector left) then
則重新配置MAPA及更新MAP 否貝J 如果(MAP留有1個未寫入區段)Then reconfigure MAPA and update MAP No Bay J if (MAP has 1 unwritten segment)

則重新配置MAP 否貝U 如果(上一個更新或最大的GAT留有1個未寫入區段)Then reconfigure MAP No Bay U if (the last update or the largest GAT has 1 unwritten segment)
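The decision logic above translates directly into a short routine. The following C sketch keeps only what the algorithm specifies, namely the priority order (BOOT down to CBI) and the thresholds of 6, 3 and 1 unwritten sectors; the structure and field names are assumptions made for illustration.

#include <stddef.h>

typedef struct {
    int free_sectors;            /* unwritten control sectors remaining */
} CtrlBlock;

typedef struct {
    CtrlBlock boot, mapa, map, gat_last, cbi;
    int user_gc_pending;         /* nonzero if a user-data garbage
                                    collection must run this cycle */
} ControlState;

/* Returns the control block to relocate preemptively, or NULL if none.
 * Priority runs from the top of the hierarchy (BOOT, slowest to fill)
 * down to CBI (fastest to fill). */
CtrlBlock *pick_preemptive_relocation(ControlState *s) {
    /* opportunity test: an idle cycle, or MAP/GAT dangerously close to full */
    if (!(s->user_gc_pending == 0 ||
          s->map.free_sectors <= 6 ||
          s->gat_last.free_sectors <= 3))
        return NULL;
    if (s->boot.free_sectors == 1)     return &s->boot;
    if (s->mapa.free_sectors == 1)     return &s->mapa;  /* then update MAP */
    if (s->map.free_sectors == 1)      return &s->map;
    if (s->gat_last.free_sectors == 1) return &s->gat_last;
    if (s->cbi.free_sectors == 1)      return &s->cbi;
    return NULL;                       /* nothing within the margin */
}

A caller would invoke pick_preemptive_relocation() once per host-write latency period and, when a block is returned, rewrite it to a fresh block (also updating MAP when MAPA is the one relocated).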

Preemptive relocation is therefore normally performed at times when no user-data garbage collection is taking place. In the worst case, where every host write triggers a user-data garbage collection but there is still enough time for a voluntary relocation of one block, the preemptive relocation of one control block at a time can still be carried out.

Since user-data garbage collection operations and control updates may coincide with physical errors, it is prudent to keep a larger safety margin by performing the preemptive relocation, or control garbage collection, earlier, for example while the block still has two or more unwritten memory units (e.g., sectors).

Although the various aspects of the present invention have been described with respect to specific embodiments, it will be understood that the invention is entitled to protection within the full scope of the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates schematically the main hardware components of a memory system suitable for implementing the present invention.
Figure 2 illustrates the memory, organized into physical groups of sectors (or metablocks) and managed by the memory manager of the controller, according to a preferred embodiment of the invention.
Figures 3A(i)-3A(iii) illustrate schematically the mapping between a logical group and a metablock, according to a preferred embodiment of the present invention.
Figure 3B illustrates schematically the mapping between logical groups and metablocks.
Figure 4 illustrates the alignment of a metablock with structures in physical memory.
Figure 5A illustrates metablocks being constituted from the linking of minimum erase units of different planes.
Figure 5B illustrates one embodiment in which one minimum erase unit (MEU) is selected from each plane for linking into a metablock.
Figure 5C illustrates another embodiment in which more than one MEU is selected from each plane for linking into a metablock.
Figure 6 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory.
Figure 7A illustrates an example of sectors in a logical group being written in sequential order to a sequential update block.
Figure 7B illustrates an example of sectors in a logical group being written in chaotic order to a chaotic update block.
Figure 8 illustrates an example of sectors in a logical group being written in sequential order to a sequential update block as a result of two separate host write operations having a discontinuity in logical addresses.
Figure 9 is a flow diagram of a process by which the update block manager updates the data of a logical group, according to a general embodiment of the invention.
Figure 10 is a flow diagram of a process by which the update block manager updates the data of a logical group, according to a preferred embodiment of the invention.
Figure 11A is a flow diagram showing in more detail the consolidation process of closing a chaotic update block shown in Figure 10.
Figure 11B is a flow diagram showing in more detail the compaction process for closing a chaotic update block shown in Figure 10.
Figure 12A illustrates all the possible states of a logical group, and the possible transitions between them under various operations.
Figure 12B is a table listing the possible states of a logical group.
Figure 13A illustrates all the possible states of a metablock, and the possible transitions between them under various operations; a metablock is a physical group corresponding to a logical group.
Figure 13B is a table listing the possible states of a metablock.
Figures 14(A)-14(J) are state diagrams showing the effect of various operations on the state of a logical group and on the physical metablock.
Figure 15 illustrates a preferred embodiment of the structure of an allocation block list (ABL) for keeping track of opened and closed update blocks and of the erased blocks allocated.
Figure 16A illustrates the data fields of a chaotic block index (CBI) sector.
Figure 16B illustrates an example of CBI sectors being recorded in a dedicated metablock.
Figure 16C is a flow diagram of accessing the data of a logical sector of a given logical group undergoing chaotic update.
Figure 16D is a flow diagram of accessing the data of a logical sector of a given logical group undergoing chaotic update, according to an alternative embodiment in which the logical group has been partitioned into subgroups.
Figure 16E illustrates examples of a chaotic block index (CBI) sector and its functions, for the embodiment in which each logical group is partitioned into multiple subgroups.
Figure 17A illustrates the data fields of a group address table (GAT) sector.
Figure 17B illustrates an example of GAT sectors being recorded in a GAT block.
Figure 18 is a schematic block diagram showing the distribution and flow of the control and directory information used for the usage and recycling of erased blocks.
Figure 19 is a flow chart of the process of logical-to-physical address translation.
Figure 20 illustrates the hierarchy of operations performed on control data structures in the course of memory management operations.
Figure 21 illustrates a memory array constituted from multiple memory planes.
Figure 22A is a flow diagram of a method of update with plane alignment, according to a general embodiment of the invention.
Figure 22B illustrates a preferred embodiment of the step of storing updates in the flow diagram shown in Figure 22A.
Figure 23A illustrates an example of logical units being written to a sequential update block in sequential order, without regard to plane alignment.
Figure 23B illustrates an example of logical units being written to a chaotic update block in non-sequential order, without regard to plane alignment.
Figure 24A illustrates the sequential update example of Figure 23A with plane alignment and padding, according to a preferred embodiment of the invention.
Figure 24B illustrates the chaotic update example of Figure 23B with plane alignment and without any padding, according to a preferred embodiment of the invention.
Figure 24C illustrates the chaotic update example of Figure 23B with plane alignment and padding, according to another preferred embodiment of the invention.
Figure 25 illustrates an example memory organization in which each page contains two memory units for storing two logical units (e.g., two logical sectors).
Figure 26A is similar to the memory structure of Figure 21, except that each page contains two sectors instead of one.
Figure 26B illustrates the metablock of Figure 26A with its memory units laid out in a schematic linear fashion.
Figure 27 illustrates an alternative scheme in which plane alignment is achieved in an update block without padding the logical units to be copied from one location to another.
Figure 28 illustrates a scheme in which, when a program failure occurs in a defective block during a consolidation operation, the consolidation operation is repeated on another block.
Figure 29 illustrates schematically a host write operation whose timing, or write latency, allows enough time to complete a write (update) operation as well as a consolidation operation.
Figure 30 is a flow chart of program failure handling, according to a general scheme of the invention.
Figure 31A illustrates one embodiment of program failure handling in which the third (final relocation) block is distinct from the second (breakout) block.
Figure 31B illustrates another embodiment of program failure handling in which the third (final relocation) block is the same as the second (breakout) block.
Figure 32A is a flow chart of an initial update operation that results in a consolidation operation.
Figure 32B is a flow chart of a multiple-phase consolidation operation, according to a preferred embodiment of the invention.
Figure 33 illustrates an example timing of the first and final phases of a multiple-phase consolidation operation.
Figure 34A illustrates the case where the breakout consolidation block is used not as an update block but as a consolidation block whose consolidation operation has been interrupted.
Figure 34B illustrates the third and final phase of the multiple-phase consolidation begun in Figure 34A.
Figure 35A illustrates the case where the breakout consolidation block is maintained as an update block receiving host writes, rather than as a consolidation block.
Figure 35B illustrates the third and final phase of the multiple-phase consolidation begun in Figure 35A, for this second case.
Figure 36A illustrates a phased program error handling method applied to the case where a host write triggers the closure of an update block and the update block is sequential.
Figure 36B illustrates a phased program error handling method as applied to the case of an update of an update block (partial-block system).
Figure 36C illustrates phased program error handling for a garbage collection operation, or clean-up, in a memory block management system that does not support logical groups mapped to metablocks.
Figure 37 illustrates an example of a schedule in which a CBI sector is written to the associated chaotic index sector block after every N sector writes to the same logical group.
Figure 38A illustrates an update block up to the point where a CBI sector is recorded in it after a predetermined number of writes.
Figure 38B illustrates the update block of Figure 38A with data pages 1, 2 and 4 further recorded after the index sector.
Figure 38C illustrates the update block of Figure 38B with another logical sector written, triggering the next recording of an index sector.
Figure 39A illustrates an intermediate index for intermediate writes stored in the header of each data sector in a chaotic update block.
Figure 39B illustrates an example of storing an intermediate index for intermediate writes in the header of each sector written.
Figure 40 illustrates the information in the chaotic index field stored in the header of each data sector of a chaotic update block.
Figure 41A illustrates the threshold voltage distributions of a four-state memory array when each memory cell stores two bits of data.
Figure 41B illustrates an existing two-pass programming scheme using Gray code.
Figure 42 illustrates a scheme of safeguarding critical data by storing each sector in duplicate. For example, sectors A, B, C and D may be stored in duplicate copies; if the data in one copy of a sector is corrupted, the other copy can be read instead.
Figure 43 illustrates the non-robustness of the way duplicate sectors are typically stored in a multi-state memory.
Figure 44A illustrates one embodiment of storing staggered duplicate copies of critical data in a multi-state memory.
Figure 44B illustrates another embodiment in which duplicate copies of critical data are stored only in the logical upper page of a multi-state memory.
Figure 44C illustrates yet another embodiment in which duplicate copies of critical data are stored in the binary mode of a multi-state memory.
Figure 45 illustrates yet another embodiment in which duplicate copies of critical data are stored concurrently in two different metablocks.
Figure 46A, like Figure 41A, illustrates the threshold voltage distributions of a four-state memory array, and is shown as a reference for Figure 46B.
Figure 46B illustrates yet another embodiment in which duplicate copies of critical data are stored concurrently using a fault-tolerant code.
Figure 47 is a table showing the possible states of the two copies of data and the validity of the data.
Figure 48 is a flow chart of preemptive relocation of a memory block storing control data.

DESCRIPTION OF MAIN COMPONENT SYMBOLS

1 defective block
2 breakout block
3 relocation block
10 host
20 memory system
100 controller
110 interface
120 processor
121 optional coprocessor
122 read-only memory (ROM)
124 optional programmable non-volatile memory
130 random access memory (RAM)
132 cache memory
134, 610 allocation block list (ABL)
136, 740 cleared block list (CBL)
140 logical-to-physical address translation module
150 update block manager module
152 sequential update
154 chaotic update
160 erased block manager module
162 closed update block manager
170 metablock link manager
180 control data exchange
200 memory
210 group address table (GAT)
220 chaotic block index (CBI)
230, 770 erased block list (EBL)
MAP
erased ABL block list
open update block list
associated original block list
closed update block list
erased original block list
CBI block
logical group
original metablock
chaotic update block
MAP block
erase block management (EBM) sector
available block buffer (ABB)
erased block buffer (EBB)
cleared block buffer (CBB)
MAP sector
source MAP sector
destination MAP sector
plane
read and program circuits
page
controller
buffer
data bus

Claims (1)

1. A method for storing and updating data in a non-volatile memory organized into blocks, wherein each block is partitioned into memory units erasable together, each memory unit for storing a logical unit of data, the method comprising:
organizing data into a plurality of logical groups, each logical group being a group of logical units;
receiving host data packaged in logical units;
storing a first version of the logical units of a logical group in a first block according to a first order;
storing the latest versions of logical units, including subsequent versions of said logical units, in a second block according to a second order different from the first order; and
in response to a predetermined triggering event, storing in a third block an index of the logical units stored in the second block since the latest triggering event.

2. The method of claim 1, further comprising:
providing a header portion for each logical unit; and
for the intermediate logical units stored in the second block since the latest predetermined triggering event, storing an index of said intermediate logical units in the header portion of each of said intermediate logical units.

3. The method of claim 1, wherein the non-volatile memory has floating-gate memory cells.

4. The method of claim 1, wherein the non-volatile memory is flash EEPROM.

5. The method of claim 1, wherein the non-volatile memory is NROM.

6. The method of claim 1, wherein the non-volatile memory is in a memory card.

7. The method of any one of claims 1 to 6, wherein the non-volatile memory has memory cells each storing one bit of data.

8. The method of any one of claims 1 to 6, wherein the non-volatile memory has memory cells each storing more than one bit of data.

9. A method for storing and updating data in a non-volatile memory organized into blocks, wherein each block is partitioned into memory units erasable together, each memory unit for storing a logical unit of data, the method comprising:
organizing data into a plurality of logical groups, each logical group being a group of logical units;
receiving host data packaged in logical units;
storing a first version of logical units in a first block according to a first order;
storing the latest versions of logical units, including subsequent versions of said logical units, in a second block according to a second order different from the first order; and
in response to a predetermined triggering event, storing, in an available portion of the second block, an index of the logical units stored in the second block since the latest triggering event.

10. The method of claim 9, further comprising:
providing a header portion for each logical unit; and
for the intermediate logical units stored in the second block since the latest predetermined triggering event, storing an index of said intermediate logical units in the header portion of each of said intermediate logical units.

11. The method of claim 9, wherein the non-volatile memory has floating-gate memory cells.

12. The method of claim 9, wherein the non-volatile memory is flash EEPROM.

13. The method of claim 9, wherein the non-volatile memory is NROM.

14. The method of claim 9, wherein the non-volatile memory is in a memory card.

15. The method of any one of claims 9 to 14, wherein the non-volatile memory has memory cells each storing one bit of data.

16. The method of any one of claims 9 to 14, wherein the non-volatile memory has memory cells each storing more than one bit of data.

17. A method for storing and updating data in a non-volatile memory organized into blocks, wherein each block is partitioned into memory units erasable together, each memory unit for storing a logical unit of data, the method comprising:
storing logical units in a first block;
in response to a predetermined triggering event, storing, in a portion of the non-volatile memory, an index of the logical units stored so far in the first block;
providing a header portion for each logical unit; and
for the intermediate logical units stored in the block since the latest predetermined triggering event, storing an index of said intermediate logical units in the header portion of each of said intermediate logical units.

18. The method of claim 17, wherein the non-volatile memory has floating-gate memory cells.

19. The method of claim 17, wherein the non-volatile memory is flash EEPROM.

20. The method of claim 17, wherein the non-volatile memory is NROM.

21. The method of claim 17, wherein the non-volatile memory is in a memory card.

22. The method of any one of claims 17 to 21, wherein the non-volatile memory has memory cells each storing one bit of data.

23. The method of any one of claims 17 to 21, wherein the non-volatile memory has memory cells each storing more than one bit of data.

24. A method for storing and updating data in a non-volatile memory organized into blocks, wherein each block is partitioned into memory units erasable together, each memory unit for storing a logical unit of data, the method comprising:
organizing data into a plurality of logical groups, each logical group being a group of logical units;
receiving host data packaged in logical units;
storing a first version of logical units in a first block according to a first order;
storing the latest versions of logical units, including subsequent versions of said logical units, in a second block according to a second order different from the first order;
providing a header portion for each logical unit; and
storing, in the header portion of each logical unit stored in the second block, index data usable for the logical units stored in the second block.

25. The method of claim 24, wherein the non-volatile memory has floating-gate memory cells.

26. The method of claim 24, wherein the non-volatile memory is flash EEPROM.

27. The method of claim 24, wherein the non-volatile memory is NROM.

28. The method of claim 24, wherein the non-volatile memory is in a memory card.

29. The method of any one of claims 24 to 28, wherein the non-volatile memory has memory cells each storing one bit of data.

30. The method of any one of claims 24 to 28, wherein the non-volatile memory has memory cells each storing more than one bit of data.

31. A method for storing and updating data in a non-volatile memory organized into blocks, wherein each block is partitioned into memory units erasable together, each memory unit for storing a logical unit of data, the method comprising:
organizing data into a plurality of logical groups, each logical group being a group of logical units;
receiving host data packaged in logical units;
storing a first version of the logical units of a logical group in a first block according to a first order;
storing the latest versions of logical units, including subsequent versions of said logical units, in a second block, wherein the stored logical units comprise a first subgroup in the same order as the first order and a second subgroup in a second order different from the first order; and
maintaining, in a portion of the non-volatile memory, an index of the logical units of the second subgroup.

32. The method of claim 31, wherein the non-volatile memory has floating-gate memory cells.

33. The method of claim 31, wherein the non-volatile memory is flash EEPROM.

34. The method of claim 31, wherein the non-volatile memory is NROM.

35. The method of claim 31, wherein the non-volatile memory is in a memory card.

36. The method of any one of claims 31 to 35, wherein the non-volatile memory has memory cells each storing one bit of data.

37. The method of any one of claims 31 to 35, wherein the non-volatile memory has memory cells each storing more than one bit of data.
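The indexing recited in claims 1, 9 and 17 can be illustrated compactly. The following C sketch is not the patent's implementation; the sector counts, structure layouts and helper names are assumptions chosen for clarity. A chaotic update block keeps its logical-to-physical index buffered in RAM, flushes it to non-volatile storage on every Nth write (the predetermined triggering event), and writes into each sector header an index of the sectors programmed since the last flush, so that after a power loss the most recent writes can be located without scanning the whole block.

#include <string.h>

#define GROUP_SECTORS 64   /* assumed logical sectors per logical group */
#define TRIGGER_N      8   /* assumed writes between index flushes */

typedef struct {
    int recent[TRIGGER_N]; /* logical numbers written since the last flush */
    int recent_count;
} Header;

typedef struct {
    Header hdr;
    unsigned char data[512];
} Sector;

typedef struct {
    Sector slots[GROUP_SECTORS];     /* physical slots, filled in write order */
    int next_slot;
    int ram_index[GROUP_SECTORS];    /* logical -> physical slot, RAM buffer */
    int writes_since_flush;
    int stored_index[GROUP_SECTORS]; /* stands in for the flushed index sector */
} ChaoticUpdateBlock;

void flush_index(ChaoticUpdateBlock *b) {
    /* the predetermined triggering event: store the RAM index non-volatilely */
    memcpy(b->stored_index, b->ram_index, sizeof b->stored_index);
    b->writes_since_flush = 0;
}

void write_sector(ChaoticUpdateBlock *b, int logical, const unsigned char *src) {
    if (b->next_slot == GROUP_SECTORS)
        return;                       /* block full: closure/consolidation omitted */
    Sector *s = &b->slots[b->next_slot];
    memcpy(s->data, src, sizeof s->data);

    b->ram_index[logical] = b->next_slot;

    /* header carries the intermediate index: all sectors written since the
     * last flush, including this one (cf. claims 2, 10 and 17) */
    Header *h = &s->hdr;
    if (b->writes_since_flush > 0)
        *h = b->slots[b->next_slot - 1].hdr;  /* previous header plus this write */
    else
        h->recent_count = 0;
    h->recent[h->recent_count++] = logical;

    b->next_slot++;
    if (++b->writes_since_flush == TRIGGER_N)
        flush_index(b);
}

Block closure and consolidation are left out of the sketch. In the claims, the flushed index goes either to a separate, third block (claim 1) or to an available portion of the update block itself (claim 9), while claim 17 pairs a stored index with the per-header intermediate indices, as sketched here.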
TW093141426A 2003-12-30 2004-12-30 Non-volatile memory and method with non-sequential update block management TWI288328B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/750,155 US7139864B2 (en) 2003-12-30 2003-12-30 Non-volatile memory and method with block management system
US10/917,867 US20050141312A1 (en) 2003-12-30 2004-08-13 Non-volatile memory and method with non-sequential update block management

Publications (2)

Publication Number Publication Date
TW200601043A TW200601043A (en) 2006-01-01
TWI288328B true TWI288328B (en) 2007-10-11

Family

ID=34753194

Family Applications (1)

Application Number Title Priority Date Filing Date
TW093141426A TWI288328B (en) 2003-12-30 2004-12-30 Non-volatile memory and method with non-sequential update block management

Country Status (4)

Country Link
EP (1) EP1704484A2 (en)
KR (1) KR20070007264A (en)
TW (1) TWI288328B (en)
WO (1) WO2005066793A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI417889B (en) * 2009-12-30 2013-12-01 Silicon Motion Inc Write timeout methods for a flash memory and memory device using the same
TWI424438B (en) * 2009-12-30 2014-01-21 Asolid Technology Co Ltd Nonvolatile memory control apparatus and multi-stage resorting method thereof
US9058253B2 (en) 2007-07-04 2015-06-16 Samsung Electronics Co., Ltd. Data tree storage methods, systems and computer program products using page structure of flash memory
US9396103B2 (en) 2007-06-08 2016-07-19 Sandisk Technologies Llc Method and system for storage address re-mapping for a memory device
US10089225B2 (en) 2014-10-31 2018-10-02 Silicon Motion, Inc. Improving garbage collection efficiency by reducing page table lookups

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7139864B2 (en) 2003-12-30 2006-11-21 Sandisk Corporation Non-volatile memory and method with block management system
EP1746510A4 (en) * 2004-04-28 2008-08-27 Matsushita Electric Ind Co Ltd Nonvolatile storage device and data write method
US9104315B2 (en) 2005-02-04 2015-08-11 Sandisk Technologies Inc. Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage
US7984084B2 (en) * 2005-08-03 2011-07-19 SanDisk Technologies, Inc. Non-volatile memory with scheduled reclaim operations
JP2009503743A (en) * 2005-08-03 2009-01-29 SanDisk Corporation Managing memory blocks that store data files directly
DE602006019263D1 (en) * 2005-08-03 2011-02-10 Sandisk Corp NON-VOLATILE MEMORY WITH BLOCK ADMINISTRATION
JP4533956B2 (en) * 2005-08-03 2010-09-01 サンディスク コーポレイション Free up data storage capacity of flash memory system
US7747837B2 (en) 2005-12-21 2010-06-29 Sandisk Corporation Method and system for accessing non-volatile storage devices
US7769978B2 (en) 2005-12-21 2010-08-03 Sandisk Corporation Method and system for accessing non-volatile storage devices
US7793068B2 (en) 2005-12-21 2010-09-07 Sandisk Corporation Dual mode access for non-volatile storage devices
EP2097825B1 (en) 2006-12-26 2013-09-04 SanDisk Technologies Inc. Use of a direct data file system with a continuous logical address space interface
US8046522B2 (en) 2006-12-26 2011-10-25 SanDisk Technologies, Inc. Use of a direct data file system with a continuous logical address space interface and control of file address storage in logical blocks
US7917686B2 (en) 2006-12-26 2011-03-29 Sandisk Corporation Host system with direct data file interface configurability
US7739444B2 (en) 2006-12-26 2010-06-15 Sandisk Corporation System using a direct data file system with a continuous logical address space interface
US8166267B2 (en) 2006-12-26 2012-04-24 Sandisk Technologies Inc. Managing a LBA interface in a direct data file memory system
US8209461B2 (en) 2006-12-26 2012-06-26 Sandisk Technologies Inc. Configuration of host LBA interface with flash memory
KR100907477B1 (en) * 2007-07-16 2009-07-10 Hanyang University Industry-University Cooperation Foundation Apparatus and method for managing index of data stored in flash memory
JP2009211192A (en) * 2008-02-29 2009-09-17 Toshiba Corp Memory system
KR101565975B1 (en) 2009-02-27 2015-11-04 Samsung Electronics Co., Ltd. User device including flash memory storing index and index accessing method thereof
KR101543246B1 (en) 2009-04-24 2015-08-11 Samsung Electronics Co., Ltd. Method for driving of data storage device and data storage device thereof
US9817593B1 (en) 2016-07-11 2017-11-14 Sandisk Technologies Llc Block management in non-volatile memory system with non-blocking control sync system
US10261876B2 (en) * 2016-11-08 2019-04-16 Micron Technology, Inc. Memory management
CN108959280B (en) * 2017-05-17 2021-08-06 中国移动通信有限公司研究院 Method and device for storing virtual resource associated information

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3604466B2 (en) * 1995-09-13 2004-12-22 Renesas Technology Corp. Flash disk card
JP3072722B2 (en) * 1997-06-20 2000-08-07 Sony Corp. Data management device and data management method using flash memory and storage medium using flash memory
JP4085478B2 (en) * 1998-07-28 2008-05-14 Sony Corp. Storage medium and electronic device system
JP3967121B2 (en) * 2001-12-11 2007-08-29 Renesas Technology Corp. File system, file system control method, and program for controlling file system
US6771536B2 (en) * 2002-02-27 2004-08-03 Sandisk Corporation Operating techniques for reducing program and read disturbs of a non-volatile memory

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396103B2 (en) 2007-06-08 2016-07-19 Sandisk Technologies Llc Method and system for storage address re-mapping for a memory device
US9058253B2 (en) 2007-07-04 2015-06-16 Samsung Electronics Co., Ltd. Data tree storage methods, systems and computer program products using page structure of flash memory
TWI417889B (en) * 2009-12-30 2013-12-01 Silicon Motion Inc Write timeout methods for a flash memory and memory device using the same
TWI424438B (en) * 2009-12-30 2014-01-21 Asolid Technology Co Ltd Nonvolatile memory control apparatus and multi-stage resorting method thereof
US10089225B2 (en) 2014-10-31 2018-10-02 Silicon Motion, Inc. Improving garbage collection efficiency by reducing page table lookups

Also Published As

Publication number Publication date
TW200601043A (en) 2006-01-01
WO2005066793A3 (en) 2006-06-15
EP1704484A2 (en) 2006-09-27
WO2005066793A2 (en) 2005-07-21
KR20070007264A (en) 2007-01-15

Similar Documents

Publication Publication Date Title
TWI288328B (en) Non-volatile memory and method with non-sequential update block management
TWI288327B (en) Non-volatile memory and method with control data management
JP4898457B2 (en) Nonvolatile memory and method with control data management
TWI272487B (en) Non-volatile memory and method with memory planes alignment
US9942084B1 (en) Managing data stored in distributed buffer caches
US10942656B2 (en) System data storage mechanism providing coherency and segmented data loading
US20130097369A1 (en) Apparatus, system, and method for auto-commit memory management
TW200951979A (en) Data writing method for flash memory and storage system and controller using the same
TWI634426B (en) Managing backup of logical-to-physical translation information to control boot-time and write amplification
TW201027347A (en) Solid state drive operation
TWI269154B (en) Non-volatile memory and method of storing data in a non-volatile memory
TW200837562A (en) Non-volatile memory and method for class-based update block replacement rules
TW201015563A (en) Block management and replacement method, flash memory storage system and controller using the same
US9977612B1 (en) System data management using garbage collection and logs

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees