WO2008042594A1 - Managing file allocation table information - Google Patents

Managing file allocation table information

Info

Publication number
WO2008042594A1
WO2008042594A1 (PCT/US2007/078830)
Authority
WO
WIPO (PCT)
Prior art keywords
host
nonvolatile memory
allocation
data
controller
Prior art date
Application number
PCT/US2007/078830
Other languages
English (en)
French (fr)
Inventor
Andrew Tomlin
Sergey Anatolievich Gorobets
Original Assignee
Sandisk Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/537,216 (US7681008B2)
Priority claimed from US11/537,243 (US7752412B2)
Application filed by Sandisk Corporation filed Critical Sandisk Corporation
Publication of WO2008042594A1 publication Critical patent/WO2008042594A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0643 Management of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/0292 User address space allocation, e.g. contiguous or non contiguous base addressing using tables or multilevel address translation means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/20 Employing a main memory using a specific memory technology
    • G06F 2212/202 Non-volatile memory
    • G06F 2212/2022 Flash memory

Definitions

  • This invention relates to nonvolatile memory systems and to methods of operating nonvolatile memory systems.
  • Modular, portable, non-volatile memory devices are available that can be readily connected to and disconnected from host devices such as digital cameras, digital audio recorders, and personal computers.
  • Traditional memory such as flash cards used in these devices is rewritable, allowing a memory address to be erased and rewritten for system or user purposes.
  • a host allocates clusters from one end of the address range of the memory and the controller allocates clusters from the other end of the address range.
  • Such allocation by a controller allows the host to send an update to previously written data, even though overwriting or erasing the data is not possible.
  • the allocation by the host is recorded in a File Allocation Table (FAT).
  • the allocation by the controller is not recorded in the FAT, but instead is recorded in volatile memory.
  • when the host requests FAT information, the controller modifies the FAT read from the nonvolatile memory according to the record in volatile memory.
  • the host records allocation of stored data in a File Allocation Table that is also stored in the nonvolatile memory.
  • a request is received from the host for a portion of the File Allocation Table that is stored in the nonvolatile memory array, and the portion of the File Allocation Table is read from the nonvolatile memory array.
  • the portion of the File Allocation Table is modified according to a record that indicates at least one memory location used by the memory controller to store data received from the host and the modified portion of the File Allocation Table is sent to the host.
  • a cluster map is maintained in a volatile memory that indicates an allocation/deallocation state for each of a plurality of clusters.
  • it is determined from the cluster map whether the plurality of clusters are allocated. If the plurality of clusters are not allocated, then allocation information is generated from the cluster map and returned to the host. For example, a FAT sector may be generated showing no allocated clusters, without accessing the nonvolatile memory (a sketch of this quick path follows below). If the cluster map indicates at least one allocated cluster, the allocation information may be read from the nonvolatile memory.
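  • As an illustration only, this quick path might look like the following C sketch; the names (cluster_map, try_quick_fat_sector) and the FAT16 sector geometry are assumptions of the sketch, not the patented implementation.

```c
/* Minimal sketch: if the volatile cluster map shows no allocated clusters
 * in the requested range, a FAT16 sector of "free" entries (0x0000) is
 * synthesized without reading nonvolatile memory. Names and sizes assumed. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ENTRIES_PER_FAT_SECTOR 256   /* 512-byte sector / 2-byte FAT16 entry */

static uint8_t cluster_map[4096];    /* one bit per cluster; covers 32768 clusters */

static bool cluster_allocated(uint32_t c)
{
    return (cluster_map[c / 8] >> (c % 8)) & 1;
}

/* Returns true if the sector could be generated from the map alone. */
bool try_quick_fat_sector(uint32_t first_cluster, uint8_t out[512])
{
    for (uint32_t c = first_cluster; c < first_cluster + ENTRIES_PER_FAT_SECTOR; c++)
        if (cluster_allocated(c))
            return false;            /* fall back to reading the stored FAT */
    memset(out, 0x00, 512);          /* every FAT16 entry 0x0000 = free cluster */
    return true;
}
```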
  • Figure 1 shows a host device in communication with a memory device that includes a write-once memory array.
  • Figure 2 shows a memory map of the write-once memory array.
  • Figure 3 shows a memory map including usage of space within the write-once memory array where clusters are allocated by the host from one end and clusters are allocated by the controller from the other end of the address range.
  • Figure 4A shows a memory map of the logical address space of a write-once memory, including a File Allocation Table (FAT) that reflects both allocation by the host and allocation by the controller.
  • Figure 4B shows a map of the physical address space of the memory of Figure 4A prior to storing host data.
  • Figure 4C shows a map of the physical address space of the memory of Figure 4B after host data is stored and control data is updated to reflect the stored host data.
  • Figure 4D shows a map of the physical address space of the memory of Figure 4C after an update to the host data is received and stored, updated control data also being stored.
  • Figure 5 shows an alternative embodiment where a FAT stored in nonvolatile memory reflects allocation by the host and a record stored in volatile memory reflects allocation by the controller, the information from the FAT and the record being combined in response to a host request for FAT information.
  • Figure 6 shows an embodiment where a host allocates space in a memory non-sequentially and a controller allocates space in a manner that adapts to the host allocated space.
  • Figure 7 shows an embodiment where a host allocates updated data that is reflected in a record in nonvolatile memory.
  • Figure 8 shows an example where a host deallocates a number of clusters, and the record is updated to indicate use of an equal number of clusters.
  • Figure 9 shows a flowchart of a technique for using a record in volatile memory to reduce the number of reads of nonvolatile memory.
  • Certain embodiments described herein can be used to enable one-time or few-time programmable memories to work with existing consumer electronic devices (such as those that work with flash—an erasable, re-programmable non-volatile memory) without requiring a firmware upgrade, thereby providing backwards compatibility while minimizing user impact. As such, these embodiments are a viable way to bridge one-time or few-time programmable memories with existing consumer electronic devices that have flash card slots. These embodiments also allow future consumer electronic devices to be designed without updating firmware to include a file system customized for a one-time or few-time programmable memory. Certain embodiments described may also be applied to nonvolatile memories that are many-time programmable. It is not required that a memory be one-time programmable to employ the techniques described.
  • FIG. 1 is a block diagram of a host device 5 connected to a memory device 10. Both the host device 5 and the memory device 10 comprise electrical connectors that mate with one another to electrically couple the host device 5 with the memory device 10 through an interface 15. As used herein, the term “coupled with” means directly coupled with or indirectly coupled with through one or more intervening components.
  • the host device 5 can take the form of a consumer electronic device such as, but not limited to, a digital still or moving camera, a personal digital assistant, a cellular phone, a digital audio player, or a personal computer (such as those with USB reader/writers or PCMCIA card adapters).
  • the host device 5 contains a write-many file system 7, such as the DOS-FAT file system.
  • Other controllers are also described that are not “backwards compatible controllers," for example controllers used with erasable nonvolatile memory.
  • the memory device 10 can take the form of a modular, compact, handheld unit, such as a memory card or stick.
  • the memory device 10 comprises a controller 20 and a write-once memory array 30.
  • the interface 15 between host 5 and memory device 10 can be configured for MultiMedia, Secure Digital, Memory Stick, Compact Flash, Smart Media, xD, USB, HS-MMC, or any of the many portable storage media available.
  • controller 20 allows the memory device 10 to be backwards compatible with a host device using a write-many file system.
  • the controller 20 will sometimes be referred to herein as a "backwards compatible controller" or "BCC.”
  • controller 20 is an ASIC using a finite state machine combined with standard combinatorial logic.
  • the controller 20 can be implemented in a variety of other forms, such as, but not limited to, a microcontroller or a microprocessor with firmware.
  • the controller 20 is separated from the memory array 30 in this embodiment, a controller and a memory array may be integrated into a single die to save cost.
  • the design of the controller 20 in this embodiment is very similar to a controller used for rewritable memory technologies. Examples of differences are that this controller 20 does not need any wear leveling or other logic associated with erasing non-volatile memory. While the design presented here could include these additional blocks and functions, they would probably not be cost optimal.
  • the write-once memory array 30 comprises a plurality of field-programmable write- once memory cells.
  • Field-programmable write-once memory cells are memory cells that are fabricated in an initial, un-programmed digital state and can be switched to an alternative, programmed digital state at a time after fabrication.
  • the original, un-programmed digital state can be identified as the Logic 1 (or Logic 0) state
  • the programmed digital state can be identified as the Logic 0 (or Logic 1) state.
  • the memory cells are a write- once type, an original, un-programmed digital state of a storage location (e.g., the Logic 1 state) cannot be restored once switched to a programmed digital state (e.g., the Logic 0 state).
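  • This write-once behavior can be modeled in a few lines of C: programming may only switch bits from the un-programmed state (taken here as Logic 1) to the programmed state (Logic 0), and a write needing the reverse transition must be refused. A hedged sketch; otp_write is an invented name, not part of the patent.

```c
/* Illustrative model of write-once cells: a write may only clear bits
 * (Logic 1 -> Logic 0); any write needing a 0 -> 1 transition is refused. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool otp_write(uint8_t *page, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (data[i] & ~page[i])   /* asks a programmed (0) bit to return to 1 */
            return false;
    for (size_t i = 0; i < len; i++)
        page[i] &= data[i];       /* 1 -> 0 transitions only */
    return true;
}
```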
  • the memory array 30 can take the form of a few-time programmable (FTP) memory array, which is a memory array that can be written to more than once but not as many times as a write-many memory array.
  • the memory device 10 can contain additional memory arrays (write-once, few-time programmable, or write-many).
  • the write-once memory array 30 can take any suitable form, such as a solid-state memory device (i.e., a memory device that responds to electrical read and write signals), a magnetic storage device (such as a hard drive), or an optical storage device (such as a CD or DVD).
  • the memory cells in the memory array 30 can be organized in a two-dimensional or three-dimensional fashion.
  • the memory array 30 is a three-dimensional array, such as an array described in U.S. Patent No. 6,034,882 to Johnson et al., and U.S. Patent No. 5,835,396 to Zhang.
  • erasable nonvolatile memory such as erasable flash memory is used.
  • FIG. 2 is a memory map 50 of the logical address space of a typical FAT12/16-based storage card showing various file system structures.
  • a "file system structure" refers to any data that describes a partition in memory, the memory space within the partition, and/or the type of commands that a file system can or cannot use with that partition.
  • file system structures include, but are not limited to, a master boot record, a partition boot record, a file allocation table ("FAT" or "FAT table"), a root directory, and a sub-directory.
  • the master boot record (MBR) is used to store boot information for the card (i.e., the memory device 10) as well as the starting location of each logical partition. Most memory cards, such as the one shown here, only contain one logical partition.
  • FAT1 corresponds to FAT table one, which contains a linked list of cluster numbers for each file, directory, and other information on the card 10.
  • FAT2 corresponds to FAT table two.
  • the root directory and file data area begin at physical address 0C200. This area is divided into clusters, corresponding to the cluster numbers in each FAT table. Note that this is only an example of a FAT File System and in no way intended to limit the invention. Different locations for each structure, different cluster sizes, and even different file systems can be associated with the embodiments described herewith.
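  • For readers unfamiliar with FAT arithmetic, the sector address of a cluster follows from a layout like the one above by the standard FAT rule that data-area clusters are numbered from 2. A sketch; the data-area start and cluster size below are chosen only for illustration.

```c
/* Standard FAT cluster-to-sector arithmetic (cluster numbering starts at 2). */
#include <stdint.h>
#include <stdio.h>

static uint32_t first_sector_of_cluster(uint32_t cluster,
                                        uint32_t data_start_sector,
                                        uint32_t sectors_per_cluster)
{
    return data_start_sector + (cluster - 2) * sectors_per_cluster;
}

int main(void)
{
    /* Assumed values: data area at byte address 0C200 (hex) with 512-byte
     * sectors (= sector 0x61), and 32 sectors per cluster. */
    uint32_t s = first_sector_of_cluster(3, 0x0C200 / 512, 32);
    printf("cluster 3 begins at sector %u\n", (unsigned)s);
    return 0;
}
```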
  • a FAT file system may be used with both logically-addressed and physically-addressed memories.
  • a logically-addressed memory maps data provided by the host to locations in memory and records the mapping. Addresses allocated to the data by the host are treated as logical addresses and these addresses are not tied to particular locations in the physical memory.
  • One common method is to use a physical-to-logical-address table that is read or calculated at startup and then stored in volatile memory in the controller 20.
  • In an OTP memory, updated data is allocated by the host to clusters that were previously allocated to old data. Thus, the same logical address range is occupied, but additional space must be occupied in the physical memory. This discrepancy between the logical address range that is indicated to be available in the FAT and the physical address range that is actually available in the memory may cause problems. For example, the host may attempt to write to a full memory.
  • the memory array is organized in 528-byte pages that each store one sector, but the host sends data in a 512-byte sector, leaving 16 bytes of extra space.
  • these extra 16 bytes are referred to as the sideband and can be used by the controller to store extra information including, but not limited to, ECC data, data and address validity flags, and remapping pointers.
  • the controller 20 uses part of this sideband to store a physical address if the page has been remapped. If the page is written for the first time, no remap pointer will be needed, as the physical and logical address will be the same.
  • the controller will write the new data to a new location and store the physical address of this location into the sideband of the old location.
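  • The remap scheme just described amounts to a pointer chase through sideband fields. A minimal sketch, assuming an un-programmed OTP sideband field reads as all ones (so 0xFFFFFFFF means "no remap"); the struct layout and function names are invented for illustration.

```c
/* Follow sideband remap pointers from a physical page to its newest copy. */
#include <stdint.h>

#define NO_REMAP 0xFFFFFFFFu   /* unwritten OTP sideband field reads all 1s */

struct page {
    uint8_t  data[512];        /* one 512-byte host sector                  */
    uint32_t remap_to;         /* sideband: physical address of newer copy  */
};

uint32_t resolve_page(const struct page *pages, uint32_t phys)
{
    while (pages[phys].remap_to != NO_REMAP)
        phys = pages[phys].remap_to;   /* follow chain to the latest version */
    return phys;
}
```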
  • a host does not know whether a memory system is logically addressed or physically-addressed and the embodiments described here may apply to either physically- addressed or logically-addressed memories.
  • the description refers to logically-addressed memory, but it will be understood that a physically-addressed memory may generally be treated as a logically-addressed memory with a particular predetermined logical-to-physical mapping.
  • If a host updates or deallocates data in an OTP memory, the host may see more space available in the memory (through the FAT) than is actually available in the physical memory.
  • One simple way to solve the problem of accounting for controller usage of physical memory is to reserve memory space for controller use before the time of sale of the memory device 10. Such space is not made available to the host. This static amount of memory space can be used by the controller 20 until there is no more reserved space left. At this time, the controller 20 can signal a card error, allow any write that is not a remap, or many other operations.
  • this method, referred to as static allocation, may be undesirable, as it limits the number and size of operations that a user can perform.
  • the host and controller both allocate clusters from the same logical address space 301.
  • a host allocates clusters from the top of the available address space 301, while the controller allocates clusters from the bottom of the available address space as shown in Figure 3.
  • the host allocates space 303 for new data, while the controller allocates space 305 for updated data.
  • the usage of space by the controller must be reported to the host in some manner.
  • the space used by the controller is communicated to the host through FAT information sent to the host.
  • the controller 20 informs the host 5 that space has been used for remapping by writing to the FAT table 407 stored in the nonvolatile memory. Examples of memory systems using a system such as this are provided in U.S. Patent Application Publication No. 2006/0047920. As additional memory is needed, the controller 20 can allocate a new cluster for its own use just as the host 5 would allocate a cluster for file data.
  • this scheme dynamically updates the FAT table 407 as additional space is needed for remapping.
  • the benefit of this implementation is that the host 5 and user are not limited in the number of remaps/file modifications they can perform.
  • the controller 20 will allow them to remap until the card is completely full.
  • the controller 20 allocates memory from the bottom up, as most hosts allocate memory from the top down (from the beginning of logical space, as shown in Figure 2).
  • This allocation scheme will allow the controller 20 and host 5 to independently allocate memory. This also allows the controller 20 to acknowledge that the card 10 is full whenever the two allocation zones collide. At the time the card is full, the controller 20 may choose to set its permanent and temporary write protect registers, signaling to the host 5 that no more writes to the card 10 are possible.
  • a controller may use some other system to prevent additional writes when the card is full.
  • the host will not try to write to a full card because the FAT indicates that the logical address space is full (if the host caches FAT data, the cached FAT may not show a full logical address space).
  • a non-cached system will know how much memory is left, while a cached system will read the FAT table at startup to see how much remap space was used prior to that particular session.
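  • The two-ended allocation and collision-based card-full detection described above can be sketched as follows: host clusters are taken from the start of the logical space and controller clusters from the end, and the card is reported full when the two frontiers meet. All names are illustrative assumptions.

```c
/* Two-ended cluster allocation. Initialize as {0, n_clusters - 1, n_clusters}.
 * When remaining hits zero the zones have collided; the controller could then
 * set its write-protect registers to signal that no more writes are possible. */
#include <stdbool.h>
#include <stdint.h>

struct alloc_state {
    uint32_t next_host;   /* lowest never-allocated cluster        */
    uint32_t next_ctrl;   /* highest never-allocated cluster       */
    uint32_t remaining;   /* clusters not yet taken by either side */
};

bool alloc_host_cluster(struct alloc_state *s, uint32_t *out)
{
    if (s->remaining == 0)
        return false;     /* zones collided: card is full */
    s->remaining--;
    *out = s->next_host++;
    return true;
}

bool alloc_ctrl_cluster(struct alloc_state *s, uint32_t *out)
{
    if (s->remaining == 0)
        return false;
    s->remaining--;
    *out = s->next_ctrl--;
    return true;
}
```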
  • Figures 4B-4D illustrate how physical memory space is used for storage of data as the controller and host store data and allocate logical address space as shown in Figure 4A.
  • Control data may include data generated by the controller that is used for managing data in the memory array, for example data used by the controller for logical-physical mapping. Generally, such control data is not visible to the host. Data supplied by the host, including the MBR and FAT, are considered host data in this example. Control data is generated during an initial formatting operation in this example.
  • Figure 4C shows the physical memory after the host sends host data to be stored in physical memory 409 for the first time.
  • the host data is written to the next available physical location 413 after the control data.
  • at least a portion of the control data must be updated to reflect the newly written host data.
  • This updated control data is written to the next physical location 415 available after the host data.
  • a portion 417 of the previously stored control data is obsolete.
  • Figure 4D shows the physical memory 409 after the host sends updated data to replace a portion of the previously programmed host data. For example, the host may send a number of sectors of host data with the same logical addresses as previously stored sectors. The updated host data is stored in the next available physical location 419.
  • Control data related to the updated host data is then stored in the next available physical location 421.
  • Updated host data makes a portion 423 of the original host data obsolete.
  • Updated control data makes a portion 425 of the first updated control data obsolete.
  • portions of the physical memory space are occupied by obsolete data portions 417, 423, 425, which include obsolete host data and obsolete control data.
  • the controller may allocate logical space to account for this loss of physical space to obsolete host data and control data. It should be noted that static and dynamic remapping are not mutually exclusive. One may choose to implement both in a single product. For example, a user may want to delete or modify files even though the card is full, and no additional files can be added.
  • Additional data modification can be allowed if a set amount of memory is pre-allocated and not used under normal operation.
  • the card 10 can have 500 KB of static remap set aside that the controller 20 does not use until the card 10 appears full due to host and controller allocation collisions.
  • the controller 20 can allow the host's additional data to be written to the static allocation zone until the desired operation is complete.
  • smart filters can be used to allow some user operations such as delete and renames to occur in the static area, while other operations would result in an error, as the card 10 is essentially full for most use.
  • Figure 5 shows an alternative scheme to that of Figure 4.
  • the host 427 allocates clusters from the top and the controller 429 allocates clusters from the bottom as before.
  • the allocation of clusters by the host 427 is recorded in the FAT 431 stored in the nonvolatile memory 433.
  • the copy of FAT 431 stored in nonvolatile memory 433 shows clusters 0-2 being allocated and shows clusters 3-9 being available for storage of additional data.
  • Allocation of clusters 7-9 by the controller 429 is not recorded in FAT 431 but is recorded in a record 435 that is stored within a volatile memory 437.
  • the volatile memory 437 is a Static Random Access Memory (SRAM) in a controller 429, though other volatile memories may also be used.
  • the record shows both clusters allocated by the host and clusters allocated by the controller.
  • the record in this example records only a cluster's state as allocated or deallocated. No additional information is stored in the record.
  • the FAT records not only whether a cluster is allocated but also provides additional information such as the next cluster in a chain or an end-of-file indicator.
  • the record 435 may have a single bit (indicating allocation/deallocation) for each cluster and may be considered a bitmap or a cluster map.
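  • Such a bitmap is compact: at one bit per cluster, 8192 clusters fit in 1 KB of controller SRAM. A minimal sketch of the record's operations; the names and sizes are assumptions for illustration.

```c
/* One bit per cluster, as in record 435: bit set = allocated. */
#include <stdbool.h>
#include <stdint.h>

#define N_CLUSTERS 8192u                 /* assumed card geometry */
static uint8_t record[N_CLUSTERS / 8];   /* lives in volatile SRAM */

static void mark_allocated(uint32_t c)   { record[c >> 3] |=  (uint8_t)(1u << (c & 7)); }
static void mark_deallocated(uint32_t c) { record[c >> 3] &= (uint8_t)~(1u << (c & 7)); }
static bool is_allocated(uint32_t c)     { return (record[c >> 3] >> (c & 7)) & 1; }
```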
  • When the host 427 requests FAT information (generally by sending a read request for one or more FAT sectors), controller 429 provides FAT information that reflects both the FAT information stored in FAT 431 in the memory 433 and the allocation information stored in record 435 in volatile memory 437.
  • the requested portion of FAT information is read from the nonvolatile memory 433 and sent to an editor 439 in the controller 429.
  • the record 435 is also read from volatile memory 437 and is used by the editor 439 to edit or modify the FAT information from the nonvolatile memory 433.
  • the editor 439 makes changes to the FAT information to reflect allocation by the controller 429 in addition to the allocation by the host 427 that is already recorded in FAT 431.
  • the allocation by the controller 429 may be indicated by marking allocated clusters in any suitable manner that is compatible with the FAT scheme of the host 427. For example, the clusters may be marked as bad clusters, reserved clusters or used clusters.
  • the modified FAT information 441 is then sent to the host 427. In this way, the host 427 is informed of unavailable space in the nonvolatile memory that is unavailable as a result of remap operations by the controller 429. This prevents the host 427 from attempting to allocate clusters that are already used by the controller 429.
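  • The editor's modification step might look like the following for a FAT16 volume, marking each controller-allocated cluster with a value the host will not allocate; 0xFFF7 (the standard FAT16 bad-cluster marker) is one choice the text allows, reserved or used markers being others. The helper is_ctrl_allocated() stands in for a lookup in the volatile record, and the sketch assumes a little-endian controller so FAT entries can be treated as uint16_t values directly.

```c
/* Sketch of the editor 439: overlay controller allocations onto a FAT16
 * sector read from nonvolatile memory before returning it to the host. */
#include <stdbool.h>
#include <stdint.h>

#define FAT16_BAD_CLUSTER  0xFFF7u   /* a marker hosts will not allocate */
#define ENTRIES_PER_SECTOR 256       /* 512-byte sector / 2-byte entry   */

extern bool is_ctrl_allocated(uint32_t cluster);  /* from the volatile record */

void edit_fat_sector(uint16_t entries[ENTRIES_PER_SECTOR], uint32_t first_cluster)
{
    for (uint32_t i = 0; i < ENTRIES_PER_SECTOR; i++)
        if (is_ctrl_allocated(first_cluster + i))
            entries[i] = FAT16_BAD_CLUSTER;   /* or a reserved/used marker */
}
```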
  • clusters allocated by the controller are shown as being allocated to a file created by the controller. The file may have a filename such as "unused capacity," to indicate that it is not a host file.
  • Such a file is shown by a directory entry that is provided when the host requests directory information stored in the memory.
  • the directory entry may be added to a directory stored in memory in a similar manner to modification of FAT information. This allows a user to see how much capacity is unused due to controller operations. In particular, this allows a user to account for any apparent discrepancy between the stated capacity of a memory card and the amount of user data stored before a card-full condition is reached.
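  • For illustration, such a directory entry could use the standard 32-byte FAT short-name format; the name "UNUSED CAP", the read-only attribute, and the field grouping below are assumptions of this sketch, not the patent's specification.

```c
/* Synthesized 8.3 directory entry for a controller-owned placeholder file,
 * so a user can see capacity consumed by remap operations. The layout is
 * the standard 32-byte FAT directory entry (packed attribute is GCC/Clang). */
#include <stdint.h>
#include <string.h>

struct __attribute__((packed)) fat_dirent {
    char     name[11];        /* "UNUSED  CAP" displays as UNUSED.CAP      */
    uint8_t  attr;            /* 0x01 = read-only                          */
    uint8_t  reserved[10];    /* creation/access fields, unused here       */
    uint16_t wrt_time, wrt_date;
    uint16_t first_cluster;   /* first controller-allocated cluster        */
    uint32_t size_bytes;      /* capacity consumed by controller remapping */
};

void make_placeholder(struct fat_dirent *e, uint16_t first, uint32_t bytes)
{
    memset(e, 0, sizeof *e);
    memcpy(e->name, "UNUSED  CAP", 11);
    e->attr = 0x01;           /* read-only, so hosts will not modify it */
    e->first_cluster = first;
    e->size_bytes = bytes;
}
```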
  • the editor may be formed as a dedicated circuit in the controller or may be implemented through firmware or some combination of hardware and firmware.
  • a record may initially be generated when the memory system is first turned on.
  • a controller may scan the memory to build the initial record.
  • Clusters allocated by the host may be detected simply by reading the FAT.
  • Clusters allocated by the controller may be obtained by scanning the memory itself or by scanning management structures that indicate usage of memory.
  • As the controller carries out remap operations, additional clusters are used by the controller and this use is reflected in the record.
  • Initially, the controller may use some space that is dedicated to the controller. Later, when such dedicated space is all used, the controller may start to use the logical space that is also used by the host, and at that point the controller also starts to record the use of this space in the record.
  • the controller generally starts using logical space that is available to the host when the amount of obsolete data reaches a limit.
  • This limit is determined by the amount of spare physical capacity available. In one example the limit is the total physical capacity of the memory, minus the logical capacity of the memory (as seen by the host), minus some minimum space required for control data and some operational space.
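  • In round numbers, that limit might be computed as in this sketch; every value below is invented for illustration.

```c
/* The point at which the controller must start claiming host-visible logical
 * space: spare capacity = physical - logical - reserved overhead. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t physical_sectors = 270336;  /* e.g. 132 MB in 512-byte sectors */
    uint64_t logical_sectors  = 262144;  /* 128 MB reported to the host     */
    uint64_t control_sectors  = 2048;    /* control data + operational room */

    uint64_t obsolete_limit = physical_sectors - logical_sectors - control_sectors;
    printf("controller may accumulate %llu obsolete sectors before it must\n"
           "allocate logical clusters the host can see\n",
           (unsigned long long)obsolete_limit);
    return 0;
}
```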
  • the controller may indicate use of logical space in a record before a corresponding portion of the physical memory is used. For example, where a controller has scheduled some operation that will require some space in physical memory, the controller may reserve this space, making it unavailable to the host, by indicating that it is unavailable in the record. Allocation by the host may also be reflected in the record, but this is not necessary because allocation by the host is recorded in the FAT. The number of writes to the nonvolatile memory is reduced in this example compared to the example where both host and controller allocation are recorded in the FAT. Because there are fewer FAT writes to the nonvolatile memory, less space is needed for storing the FAT.
  • Figure 6 shows an example where a host does not allocate clusters sequentially.
  • a host allocates clusters in sequential order in an unwritten memory. However, this is not always the case.
  • Where a host allocates clusters nonsequentially, it is generally preferable that a card-full condition not be generated when host-allocated clusters and controller-allocated clusters meet.
  • an initially allocated cluster map 651 is obtained from the FAT 653 stored in the memory (other sources may also be used to generate the cluster map). This map 651 indicates clusters allocated by the host nonsequentially.
  • the initially allocated cluster map 651 is then modified to reflect clusters allocated by the controller to provide a complete cluster map 655.
  • the controller-allocated clusters extend from the bottom of the address range to an area of host-allocated clusters and also extend above this area of host-allocated clusters. Thus, in this scheme, a memory-full condition is not generated when host-allocated clusters and controller-allocated clusters meet. Instead, the controller seeks the next available cluster for a remap operation (see the sketch after this discussion). A memory-full condition may occur only when some other condition is met, such as all clusters in the address range being allocated. Whenever the host requests FAT information, the FAT information is read from the nonvolatile memory and modified before being sent to the host.
  • Figure 6 shows a first map 651 of cluster allocation as it is indicated by the FAT 653 stored in the nonvolatile memory.
  • Figure 6 shows a second map 655 of cluster allocation as it is indicated to the host.
  • the second map includes clusters allocated by the controller as well as clusters allocated by the host. In this scheme, it is not necessary that the controller allocate clusters sequentially from one end of the address range. A different scheme may be used that mixes host allocated clusters and controller allocated clusters.
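  • In this mixed scheme, remap allocation reduces to a search of the complete cluster map for any free cluster, with the memory declared full only when none exists. A hedged sketch; cluster_in_use() stands in for a lookup in the complete cluster map 655.

```c
/* Remap allocation that skips host-allocated clusters: scan the combined
 * cluster map (FAT-derived plus controller record) for a free cluster. */
#include <stdbool.h>
#include <stdint.h>

extern bool cluster_in_use(uint32_t c);   /* consult the complete cluster map */

/* Returns true and the chosen cluster, or false for a genuine memory-full. */
bool next_free_cluster(uint32_t n_clusters, uint32_t *out)
{
    for (uint32_t c = 0; c < n_clusters; c++) {
        if (!cluster_in_use(c)) {
            *out = c;
            return true;
        }
    }
    return false;                         /* every cluster is allocated */
}
```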
  • Figure 7 shows a first example of a host operation that results in a change in a record.
  • Prior to the host operation, the FAT 761 shows clusters 0-2 occupied by data A. Both the FAT 761 and the record 763 of Figure 7 show use of logical space, not physical space. The record 763 also shows clusters 0-2 as allocated. No other clusters are shown as allocated at this point.
  • the host then allocates data B to clusters 0-2. Data B is written to the physical memory at a new location that is recorded in a physical-to-logical address translation table.
  • the FAT 761 is updated to show data B allocated to clusters 0-2.
  • the allocated space indicated by the FAT 761 is the same as before. However, there is more space used in the physical memory because data B was written to a new location and data A still occupies another location.
  • the record 763 is updated to show use of clusters 7-9.
  • the record indicates use of six clusters, which corresponds to the amount of physical memory used. If the host requested FAT information for clusters 0-9 at this point, the FAT information returned to the host would indicate that clusters 0-2 and 7-9 are unavailable.
  • Figure 8 shows a second example of a host operation that results in a change in a record. Prior to this host operation, the FAT 871 shows clusters 0-2 occupied by data. The record 873 also shows these clusters as allocated. The host then deallocates clusters 0-2.
  • the host may attempt to delete data that occupies clusters 0-2, for example by marking clusters 0-2 as available and by removing the file that is allocated to clusters 0-2 from the directory (in the case where a file is shortened, clusters may be marked as available and the directory entry is updated with the new length).
  • This makes clusters 0-2 available for storage of additional data in the FAT 871.
  • no more space is made available in the physical memory by the host's operation.
  • the record is updated to show clusters 7-9 as used. If the host requested FAT information for clusters 0-9 at this point, clusters 7-9 would be shown as unavailable.
  • the record indicates clusters 0-2 are allocated, even after clusters 0-2 are deallocated by the host and recorded as deallocated in FAT. Subsequently, when the host requests FAT information for clusters 0-2, they are indicated to be allocated. In some cases this may cause problems because the host may have a separate record (such as a cached portion of FAT) that indicates clusters 0-2 are deallocated.
  • The examples above relate to controller use of memory space for storage of host data.
  • In other examples, the controller may use memory space for other purposes and account for the used space in memory using a record as described.
  • Either erasable or OTP memory may be used.
  • an embedded application running in a memory card may use space in the nonvolatile memory. Use of space in a nonvolatile memory by such an application may be tracked and communicated to the host through a cluster map in volatile memory.
  • a cluster map in volatile memory may be used to reflect allocation by a host that operates according to a different interface.
  • US Patent Application No. 11/196,869 discloses memory systems where two host interfaces share the same physical area in memory. A first host maintains a FAT indicating its use of logical address space, while a second host uses a different management system that does not require FAT. However, the FAT is modified to show use of logical space by the second host, corresponding to the space that the second host uses to store data in the physical memory.
  • a cluster map or similar record may be maintained that shows such use. When the first host requests FAT information, the FAT may be read and modified according to the record to reflect use of memory by both the first and second hosts.
  • a record maintained in volatile memory may be used to reduce the number of reads of FAT stored in nonvolatile memory.
  • when the host requests FAT information for a range of clusters, the memory controller may examine the record for that range to determine 892 if any clusters in the range are allocated. If there are no allocated clusters in the range, then the controller can generate FAT information to reflect this by creating one or more FAT sectors 893; thus, no access to the nonvolatile memory is necessary. If clusters in the requested address range are indicated by the record to be allocated, then the FAT stored in nonvolatile memory is generally read 894 to obtain full FAT information.
  • the FAT read from the nonvolatile memory may be modified 895 according to the record before being sent to the host 896.
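  • Putting the steps of Figure 9 together, a sketch of the whole read path follows, with the flowchart's reference numerals in comments; the helper functions are placeholders for the mechanisms described above, not names from the patent.

```c
/* Flow of Figure 9: consult the record (892); synthesize an all-free FAT
 * sector if nothing in range is allocated (893); otherwise read the stored
 * FAT (894), overlay the record (895), and return the result to the host. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

extern bool range_has_allocations(uint32_t first_cluster, uint32_t count);
extern void read_fat_from_nvm(uint32_t fat_sector, uint8_t out[512]);
extern void overlay_record(uint32_t fat_sector, uint8_t buf[512]);

void serve_fat_sector(uint32_t fat_sector, uint32_t first_cluster, uint8_t out[512])
{
    if (!range_has_allocations(first_cluster, 256)) {  /* 892 */
        memset(out, 0x00, 512);                        /* 893: all entries free */
        return;                                        /* no NVM access needed  */
    }
    read_fat_from_nvm(fat_sector, out);                /* 894 */
    overlay_record(fat_sector, out);                   /* 895 */
}                                                      /* 896: send to host */
```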
  • the record of cluster allocation by the controller may be maintained as a file.
  • this file is maintained in volatile memory only and is not stored in the nonvolatile memory.
  • such a file may be separately stored in nonvolatile memory as part of a power-down routine or otherwise. This may reduce the time needed to generate a record in volatile memory when the memory system is initially powered on.
  • clusters allocated by the host and clusters allocated by the controller are distinguished during failure analysis by comparing the allocation indicated by the FAT with actual memory use.
  • Memory space is allocated in units of a cluster.
  • Cluster size generally depends on the memory system used and may vary from one memory system to another. In one example, a cluster consists of 32 sectors of data. Where the nonvolatile memory is erasable, the size of a block (the minimum unit of erase) may also be 32 sectors. In other examples, a block may contain two or more clusters.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
PCT/US2007/078830 2006-09-29 2007-09-19 Managing file allocation table information WO2008042594A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/537,216 US7681008B2 (en) 2006-09-29 2006-09-29 Systems for managing file allocation table information
US11/537,216 2006-09-29
US11/537,243 2006-09-29
US11/537,243 US7752412B2 (en) 2006-09-29 2006-09-29 Methods of managing file allocation table information

Publications (1)

Publication Number Publication Date
WO2008042594A1 (en) 2008-04-10

Family

ID=38988085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/078830 WO2008042594A1 (en) 2006-09-29 2007-09-19 Managing file allocation table information

Country Status (2)

Country Link
TW (1) TWI376599B (zh)
WO (1) WO2008042594A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7681008B2 (en) 2006-09-29 2010-03-16 Sandisk Corporation Systems for managing file allocation table information
US7752412B2 (en) 2006-09-29 2010-07-06 Sandisk Corporation Methods of managing file allocation table information

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI452467B (zh) * 2008-10-13 2014-09-11 A Data Technology Co Ltd Memory system and control method thereof
TWI425514B (zh) * 2009-10-29 2014-02-01 Hon Hai Prec Ind Co Ltd NAND flash memory and data update management method thereof
TWI449414B (zh) 2010-12-29 2014-08-11 Altek Corp Image capture device and booting method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6996660B1 (en) * 2001-04-09 2006-02-07 Matrix Semiconductor, Inc. Memory device and method for storing and reading data in a write-once memory array
WO2005008499A1 (ja) * 2003-07-16 2005-01-27 Matsushita Electric Industrial Co., Ltd. Data area management method in an information recording medium, and information processing device using the data area management method
US20070174550A1 (en) * 2003-07-16 2007-07-26 Matsushita Electric Industrial Co., Ltd. Data area managing method in information recording medium and information processor employing data area managing method
US20060047920A1 (en) * 2004-08-24 2006-03-02 Matrix Semiconductor, Inc. Method and apparatus for using a one-time or few-time programmable memory with a host device designed for erasable/rewriteable memory

Also Published As

Publication number Publication date
TWI376599B (en) 2012-11-11
TW200832132A (en) 2008-08-01

Similar Documents

Publication Publication Date Title
US7752412B2 (en) Methods of managing file allocation table information
US7681008B2 (en) Systems for managing file allocation table information
JP4238514B2 (ja) Data storage device
KR100980905B1 (ko) Removable data storage device, host device, data recording system, and data management method of a removable data storage device
US7395384B2 (en) Method and apparatus for maintaining data on non-volatile memory systems
KR101139224B1 (ko) Method and apparatus for a one-time or few-time programmable memory usable with a host device designed for erasable/rewritable memory
CN107273058B (zh) Logical address offset
US9116791B2 (en) Method for flash-memory management
US7877569B2 (en) Reduction of fragmentation in nonvolatile memory using alternate address mapping
US7669003B2 (en) Reprogrammable non-volatile memory systems with indexing of directly stored data files
US7949845B2 (en) Indexing of file data in reprogrammable non-volatile memories that directly store data files
US8312554B2 (en) Method of hiding file at data protecting mode for non-volatile memory module, memory controller and portable memory storage apparatus
KR100987241B1 (ko) Memory device and recording/reproducing apparatus using the memory device
US20060020745A1 (en) Fat analysis for optimized sequential cluster management
KR20110107857A (ko) Solid state memory formatting
KR20080038368A (ko) Indexing of file data in reprogrammable non-volatile memories that directly store data files
WO2008042594A1 (en) Managing file allocation table information
JP2007018528A (ja) Memory device, file management method, and recording/reproducing apparatus
JP2005149620A (ja) Storage device and file system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07842740

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07842740

Country of ref document: EP

Kind code of ref document: A1