WO2014055445A1 - Methods, devices and systems for physical-to-logical mapping in solid state drives - Google Patents

Methods, devices and systems for physical-to-logical mapping in solid state drives Download PDF

Info

Publication number
WO2014055445A1
WO2014055445A1 (PCT/US2013/062723)
Authority
WO
WIPO (PCT)
Prior art keywords
page
pages
physical
journal
entries
Prior art date
Application number
PCT/US2013/062723
Other languages
French (fr)
Inventor
Andrew J. Tomlin
Justin Jones
Radoslav Danilak
Rodney N. Mullendore
Original Assignee
Western Digital Technologies, Inc.
Skyera, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Western Digital Technologies, Inc. and Skyera, Inc.
Priority to EP13844022.7A (published as EP2904496A4)
Priority to AU2013327582A (published as AU2013327582B2)
Priority to KR1020157011769A (published as KR101911589B1)
Priority to CN201380063439.2A (published as CN105027090B)
Priority to JP2015535725A (published as JP6210570B2)
Publication of WO2014055445A1
Priority to HK16104405.3A (published as HK1216443A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7201Logical to physical mapping or translation of blocks or pages

Definitions

  • each of the flash blocks 206 comprises a plurality of flash pages (F-Pages) 208.
  • Each F-Page may be of a fixed size such as, for example, 16 KB.
  • the F-Page is the size of the minimum unit of program for a given flash device.
  • each F-Page 208 may be configured to accommodate a plurality of physical pages, hereinafter referred to as E-Pages 210.
  • E-Page refers to a data structure stored in flash memory on which an error correcting code (ECC) has been applied.
  • E-Page 210 may form the basis for physical addressing within the data storage device and may constitute the minimum unit of flash read data transfer.
  • the E-Page 210 may be (but need not be) of a predetermined fixed size (such as 2 KB, for example) and determine the size of the payload (e.g., host data) of the ECC system.
  • each F-Page 208 may be configured to fit a predetermined plurality of E-Pages 210 within its boundaries. For example, given 16 KB size F-Pages 208 and a fixed size of 2 KB per E-Page 210, eight E-Pages 210 fit within a single F-Page 208, as shown in Fig. 2.
  • a power-of-2 multiple of E-Pages 210, including ECC, may be configured to fit into an F-Page 208.
  • Each E-Page 210 may comprise a data portion 214 and, depending on where the E-Page 210 is located, may also comprise an ECC portion 216. Neither the data portion 214 nor the ECC portion 216 need be fixed in size.
  • the address of an E-Page uniquely identifies the location of the E-Page within the flash memory. For example, the E-Page’s address may specify the flash channel, a particular die within the identified flash channel, a particular block within the die, a particular F-Page and, finally, the E-Page within the identified F-Page.
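  • To make the E-Page addressing concrete, the sketch below (Python) unpacks a packed 32-bit E-Page address into the fields just described. The individual field widths are illustrative assumptions only; the text does not fix them.

      # Hypothetical field widths, most-significant field first; they need only
      # sum to the 32-bit E-Page address width used elsewhere in this document.
      FIELDS = [
          ("channel", 5),   # flash channel
          ("die", 3),       # die within the channel
          ("block", 12),    # flash block within the die
          ("f_page", 9),    # F-Page within the block (e.g., 512 F-Pages)
          ("e_page", 3),    # E-Page within the F-Page (8 x 2 KB per 16 KB F-Page)
      ]

      def decode_e_page_address(addr: int) -> dict:
          """Split a packed 32-bit E-Page address into its constituent fields."""
          fields, shift = {}, 32
          for name, bits in FIELDS:
              shift -= bits
              fields[name] = (addr >> shift) & ((1 << bits) - 1)
          return fields

      print(decode_e_page_address(0x004A_B3C5))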
  • An L-Page, denoted in Fig. 2 at reference numeral 212, may comprise the minimum unit of address translation used by the FMS.
  • Each L-Page may be associated with an L-Page number.
  • the L-Page numbers of L-Pages 212 therefore, may be configured to enable the controller 202 to logically reference host data stored in one or more of the physical pages, such as the E-Pages 210.
  • the L-Page 212 may also be utilized as the basic unit of compression.
  • L-Pages 212 are not fixed in size and may vary, due to variability in the compression of data to be stored. Since the compressibility of data varies, a 4 KB amount of data of one type may be compressed into a 2 KB L-Page while a 4 KB amount of data of a different type may be compressed into a 1 KB L-Page, for example. Due to such compression, therefore, the size of L-Pages may vary within a range defined by a minimum compressed size of, for example, 24 bytes to a maximum uncompressed size of, for example, 4 KB or 4 KB+. Other sizes and ranges may be implemented.
  • As shown in Fig. 2, L-Pages 212 need not be aligned with the boundaries of E-Pages 210. Indeed, L-Pages 212 may be configured to have a starting address that is aligned with an F-Page 208 and/or E-Page 210 boundary, but also may be configured to be unaligned with either of the boundaries of an F-Page 208 or E-Page 210. That is, an L-Page starting address may be located at a non-zero offset from either the start or ending addresses of the F-Pages 208 or the start or ending addresses of the E-Pages 210, as shown in Fig. 2.
  • Since L-Pages 212 are not fixed in size and may be smaller than the fixed-size E-Pages 210, more than one L-Page 212 may fit within a single E-Page 210. Similarly, as the L-Pages 212 may be larger in size than the E-Pages 210, L-Pages 212 may span more than one E-Page, and may even cross the boundaries of F-Pages 208, as shown in Fig. 2 at numeral 217.
  • Where the LBA size is 512 or 512+ bytes, a maximum of, for example, eight sequential LBAs may be packed into a 4 KB L-Page 212, given that an uncompressed L-Page 212 may be 4 KB to 4 KB+.
  • the exact logical size of an L-Page 212 is unimportant as, after compression, the physical size may span from a few bytes at minimum size to thousands of bytes at full size. For example, for a 4 TB SSD, 30 bits of addressing may be used to address each L-Page 212, covering the number of L-Pages that could potentially be present in such an SSD.
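  • As a quick check of that figure (a worked example; only the 4 TB and 4 KB values come from the text): a 4 TB drive divided into maximum-size 4 KB L-Pages yields 2^30 L-Pages, hence 30 bits of L-Page addressing.

      import math

      capacity_bytes = 4 * 2**40      # 4 TB drive
      max_l_page_bytes = 4 * 2**10    # 4 KB maximum (uncompressed) L-Page
      l_pages = capacity_bytes // max_l_page_bytes
      assert l_pages == 2**30                # ~1.07 billion L-Pages
      print(math.ceil(math.log2(l_pages)))   # -> 30 bits of addressing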
  • Fig. 3 shows a logical-to-physical address translation map and illustrative entries thereof, according to one embodiment.
  • a logical-to-physical address translation map is required to enable the controller 202 to associate an L-Page number of an L-Page 212 to one or more E-Pages 210.
  • Such a logical-to-physical address translation map is shown in Fig. 3 at 302 and, in one embodiment, is a linear array having one entry per L-Page 212.
  • Such a logical-to-physical address translation map 302 may be stored in a volatile memory, such as a DRAM or SRAM.
  • Fig. 3 also shows the entries in the logical-to-physical address translation map for four different L-Pages 212, which L-Pages 212 in Fig. 3 are associated with L-Page numbers denoted as L-Page 1, L-Page 2, L-Page 3 and L-Page 4.
  • each L-Page stored in the data storage device may be pointed to by a single and unique entry in the logical-to-physical address translation map 302. Accordingly, in the example being developed herewith, four entries are shown.
  • each entry in the map 302 may comprise an L-Page number, which may comprise an identification of the physical page (e.g., E-Page) containing the start address of the L-Page being referenced, the offset of the start address within the physical page (e.g., E-Page) and the length of the L-Page.
  • a plurality of ECC bits may provide error correction functionality for the map entry. For example, and as shown in Fig. 3, and assuming an E-Page size of 2 KB, L-Page 1 may be referenced in the logical-to-physical address translation map 302 as follows: E-Page 1003, offset 800, length 1624, followed by a predetermined number of ECC bits (not shown).
  • the start of L-Page 1 is within (not aligned with) E-Page 1003, and is located at an offset from the starting physical location of the E-Page 1003 that is equal to 800 bytes.
  • Compressed L-Page 1, furthermore, extends 1,624 bytes, thereby crossing an E-Page boundary to E-Page 1004. Therefore, E-Pages 1003 and 1004 each store a portion of the L-Page 212 denoted by L-Page number L-Page 1.
  • the compressed L-Page referenced by L-Page number L-Page 2 is stored entirely within E-Page 1004, and begins at an offset therein of 400 bytes and extends only 696 bytes within E-Page 1004.
  • the compressed L-Page associated with L-Page number L-Page 3 starts within E-Page 1004 at an offset of 1,120 bytes (just 24 bytes away from the boundary of L-Page 2) and extends 4,096 bytes past E-Page 1005 and into E-Page 1006. Therefore, the L-Page associated with L-Page number L-Page 3 spans a portion of E-Page 1004, all of E-Page 1005 and a portion of E-Page 1006.
  • the L-Page associated with L-Page number L-Page 4 begins within E-Page 1006 at an offset of 1,144 bytes, and extends 3,128 bytes to fully span E-Page 1007, crossing an F-Page boundary into E-Page 1008 of the next F-Page.
  • each of these constituent identifier fields (E-Page, offset, length and ECC) making up each entry of the logical-to-physical address translation map 302 may be, for example, 8 bytes in size. That is, for an exemplary 4 TB drive, the address of the E-Page may be 32 bits in size, the offset may be 12 bits (for E-Page data portions up to 4 KB) in size, the length may be 10 bits in size and the ECC field may be provided. Other organizations and bit-widths are possible. Such an 8-byte entry may be created each time an L-Page is written or modified, to enable the controller 202 to keep track of the host data, written in L-Pages, within the flash storage.
  • This 8-byte entry in the logical-to-physical address translation map may be indexed by an L-Page number or LPN.
  • the L-Page number functions as an index into the logical-to-physical address translation map 302.
  • the LBA is the same as the LPN.
  • the LPN therefore, may constitute the address of the entry within the volatile memory.
  • When the controller 202 receives a read command from the host 218, the LPN may be derived from the supplied LBA and used to index into the logical-to-physical address translation map 302 to extract the location of the data to be read in the flash memory.
  • When the controller 202 receives a write command from the host, the LPN may be constructed from the LBA and the logical-to-physical address translation map 302 may be modified. For example, a new entry therein may be created. Depending upon the size of the volatile memory storing the logical-to-physical address translation map 302, the LPN may be stored in a single entry or broken into, for example, a first entry identifying the E-Page containing the starting address of the L-Page in question (plus ECC bits) and a second entry identifying the offset and length (plus ECC bits). According to one embodiment, therefore, these two entries may together correspond and point to a single L-Page within the flash memory. In other embodiments, the specific format of the logical-to-physical address translation map entries may be different from the examples shown above.
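  • The sketch below packs and unpacks one such 8-byte map entry using the example widths above (32-bit E-Page address, 12-bit offset, 10-bit length, with the remaining bits reserved for ECC). Two labeled assumptions: the 10-bit length is stored in 4-byte units so that a 4 KB+ L-Page fits, and the ECC bits are stubbed to zero.

      E_PAGE_BITS, OFFSET_BITS, LENGTH_BITS = 32, 12, 10
      ECC_BITS = 64 - (E_PAGE_BITS + OFFSET_BITS + LENGTH_BITS)   # 10 bits left

      def pack_map_entry(e_page: int, offset: int, length_bytes: int) -> int:
          """Pack one 64-bit L2P map entry; the ECC field is stubbed to zero."""
          assert length_bytes % 4 == 0          # assumed 4-byte length units
          entry = (e_page << OFFSET_BITS) | offset
          entry = (entry << LENGTH_BITS) | (length_bytes // 4)
          return entry << ECC_BITS

      def unpack_map_entry(entry: int) -> tuple:
          entry >>= ECC_BITS                    # discard the ECC stub
          length_bytes = (entry & ((1 << LENGTH_BITS) - 1)) * 4
          entry >>= LENGTH_BITS
          offset = entry & ((1 << OFFSET_BITS) - 1)
          return entry >> OFFSET_BITS, offset, length_bytes

      # L-Page 1 of Fig. 3: E-Page 1003, offset 800, length 1,624 bytes.
      assert unpack_map_entry(pack_map_entry(1003, 800, 1624)) == (1003, 800, 1624)

    A linear array of such entries, indexed by LPN, then constitutes the in-DRAM map of Fig. 3.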
  • Because the logical-to-physical address translation map 302 may be stored in a volatile memory, it may need to be rebuilt upon startup or any other loss of power to the volatile memory. This, therefore, requires some mechanism and information to be stored in a non-volatile memory that will enable the controller 202 to reconstruct the logical-to-physical address translation map 302 before the controller can “know” where the L-Pages are stored in the non-volatile memory after startup or after a power-fail event. According to one embodiment, such mechanism and information are embodied in a construct that may be called a System Journal, or S-Journal.
  • the controller 202 may be configured to maintain, in the plurality of non-volatile memory devices (e.g., in one or more of the blocks 206 in one or more die, channel or plane), a plurality of S-Journals defining physical-to-logical address correspondences.
  • each S-Journal covers a predetermined range of physical pages (e.g., E-Pages).
  • each S-Journal may comprise a plurality of journal entries, with each entry being configured to associate one or more physical pages, such as E-Pages, to the L-Page number of each L-Page.
  • Fig. 4 shows aspects of a method for updating a logical-to-physical address translation map and for creating an S-Journal entry, according to one embodiment.
  • the logical-to-physical address translation map 302 may be updated as shown at B42.
  • an S-Journal entry may also be created, storing therein information pointing to the location of the updated L-Page.
  • both the logical-to-physical address translation map 302 and the S-Journals are updated when new writes occur (e.g., as the host issues writes to non-volatile memory, as garbage collection/wear leveling occurs, etc.).
  • Write operations to the non-volatile memory devices to maintain a power-safe copy of address translation data may be configured, therefore, to be triggered by newly created journal entries (which may be just a few bytes in size) instead of re-saving all or a portion of the logical-to-physical address translation map, such that Write Amplification (WA) is reduced.
  • the updating of the S-Journals ensures that the controller 202 can access a newly updated L-Page and that the logical-to-physical address translation map 302 may be reconstructed upon restart or another information-erasing power event affecting the volatile memory in which the logical-to-physical address translation map is stored.
  • the S-Journals are useful in enabling effective Garbage Collection (GC). Indeed, the S-Journals may contain the last-in-time update to all L-Page numbers, and also may contain stale entries; that is, entries that do not point to a valid L-Page.
  • the S-Journals may be the main flash management data written to the non-volatile memory.
  • S-Journals may contain the mapping information for a given S-Block; that is, the Physical-to-Logical (P2L) information for a given S-Block.
  • Fig. 5 is a block diagram showing aspects of an S-Journal, according to one embodiment. As shown therein, each S-Journal 502 covers a predetermined physical region of the non-volatile memory such as, for example, 32 E-Pages as shown at 506, which are addressable using 5 bits.
  • Each S-Journal 502 may be identified by an S-Journal Number, which may be part of a header 504 that could include other information about the S-Journal.
  • the S-Journal Number may comprise a portion of the address of the first physical page covered by the S-Journal.
  • the S-Journal Number of S-Journal 502 may comprise, for example, the 27 Most Significant Bits (MSb) of the first E-Page address covered by this S-Journal 502.
  • Fig. 6 shows an exemplary organization of one entry 602 of an S-Journal 502, according to one embodiment.
  • Each entry 602 of the S-Journal 502 may point to the starting address of one L-Page, which is physically addressed in E-Pages.
  • Each entry 602 may comprise, for example, a number (5, for example) of Least Significant Bits (LSbs) of the address of the E-Page containing the start of the L-Page. The full E-Page address is obtained by concatenating these 5 LSbs with the 27 MSbs of the S-Journal Number in the header 504.
  • the entry 602 may comprise the L-Page number, its offset within the identified E-Page and its size.
  • each entry 602 of an S-Journal may comprise the 5 LSbs of the address of the first E-Page covered by this S-Journal entry, 30 bits of L-Page number, 9 bits of E-Page offset and 10 bits of L-Page size, adding up to an overall size of about 7 bytes.
  • Various other internal journal entry formats may be used in other embodiments.
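  • A sketch of one such ~7-byte entry, and of rebuilding the full E-Page address from the journal header (the field sizes are those given above; the field order within the packed word is an assumption):

      E_LSB_BITS, LPN_BITS, OFF_BITS, SIZE_BITS = 5, 30, 9, 10   # 54 bits total

      def pack_s_journal_entry(e_page_lsb5: int, lpn: int, offset: int, size: int) -> int:
          entry = (e_page_lsb5 << LPN_BITS) | lpn
          entry = (entry << OFF_BITS) | offset
          return (entry << SIZE_BITS) | size

      def full_e_page_address(s_journal_number: int, e_page_lsb5: int) -> int:
          """Concatenate the header's 27 MSbs with an entry's 5 LSbs."""
          return (s_journal_number << 5) | e_page_lsb5

      # An S-Journal covering 32 E-Pages starting at E-Page address 0x0001_2340:
      header_number = 0x0001_2340 >> 5   # the 27-bit S-Journal Number in the header
      entry = pack_s_journal_entry(e_page_lsb5=3, lpn=100, offset=136, size=1016)
      assert full_e_page_address(header_number, 3) == 0x0001_2343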
  • a variable number of L-Pages may be stored in a physical area, such as a physical area equal to 32 E-Pages, as shown at 506.
  • S-Journals may comprise a variable number of entries.
  • an L-Page may be 24 bytes in size and an S-Journal may comprise over 2,500 entries, referencing an equal number of L-Pages, one L-Page per S-Journal entry 602.
  • an S-Journal may be configured to contain mapping information for a given S-Block. More precisely, according to one embodiment, S- Journals contain the mapping information for a predetermined range of E-Pages within a given S-Block.
  • Fig. 7 is a block diagram of an S-Block, according to one embodiment. As shown therein, an S-Block 702 may comprise one flash block (F-Block) 704 (as also shown at 206 in Fig. 2) per die. An S-Block, therefore, may be thought of as a collection of F-Blocks, one F-Block per die, that are combined together to form a unit of the Flash Management System.
  • allocation, erasure and GC may be managed at the S-Block level.
  • Each F-Block 704, as shown in Fig. 7, may comprise a plurality of flash pages (F-Page) such as, for example, 256 or 512 F-Pages.
  • An F-Page may be the size of the minimum unit of program for a given non-volatile memory device.
  • Fig. 8 shows a super page (S-Page), according to one embodiment.
  • an S-Page 802 may comprise one F-Page per F-Block of an S-Block, meaning that an S-Page spans across an entire S-Block.
  • Fig. 9A shows relationships among the logical-to-physical address translation map, the S-Journal map and S-Blocks, according to one embodiment.
  • Reference 902 denotes an entry in the logical-to-physical address translation map (stored in DRAM in one embodiment).
  • the logical-to-physical address translation map may be indexed by L-Page number, in that there may be one entry 902 per L-Page in the logical-to-physical address translation map.
  • the physical address of the start of the L-Page in the flash memory and the size thereof may be given in the map entry 902; namely by E-Page address, offset within the E-Page and the size of the L-Page.
  • the L-Page, depending upon its size, may span one or more E-Pages and may span F-Pages and F-Blocks as well.
  • the volatile memory may also store a System Journal (S-Journal) map.
  • An entry 904 in the S-Journal map stores information related to where an S-Journal is physically located in the non-volatile memory. For example, the 27 MSbs of the E-Page physical address where the start of the L-Page is stored may constitute the S-Journal Number (as previously shown in Fig. 5).
  • the S-Journal map entry 904 in the volatile memory may also include the address of the S-Journal in non-volatile memory, referenced in system E-Pages.
  • System S-Block Information 908 may be extracted.
  • the System S-Block Information 908 may be indexed by System S-Block (S-Block in the System Band) and may comprise, among other information regarding the S-Block, the size of any free or used space in the System S-Block.
  • the physical location (expressed in terms of E-Pages in the System Band) of the referenced S-Journal in non-volatile memory 910 may be extracted.
  • the System Band does not contain L-Page data and may contain Flash Management System (FMS) meta-data and information.
  • the System Band may be configured as lower page only for reliability and power fail simplification. During normal operation, the System Band need not be read except during Garbage Collection.
  • the System Band may be provided with significantly higher overprovisioning than the data band for overall WA optimization.
  • Other bands include the Hot Band, which may contain L-Page data and is frequently updated, and the Cold Band, which may be less frequently updated and may comprise more static data, such as data that may have been collected as a result of GC.
  • the System, Hot and Cold Bands may be allocated on an S-Block basis.
  • each of these S-Journals in non-volatile memory may comprise a collection of S-Journal entries and cover, for example, 32 E-Pages worth of data.
  • These S-Journals in non-volatile memory 910 enable the controller 202 to rebuild not only the logical-to-physical address translation map in volatile memory, but also the S-Journal map, the User S-Block Information 906, and the System S-Block Information 908, in volatile memory.
  • Fig. 9B is a block diagram of an S-Journal Map 912, according to one embodiment.
  • the S-Journal Map 912 may be indexed by S-Block number and each entry thereof may point to the start of the first S-Journal for that S-Block which, in turn, may cover a predetermined number of E-Pages (e.g., 32) of that S-Block.
  • the controller 202 may be further configured to build or rebuild a map of the S-Journals and store the resulting S-Journal Map in volatile memory.
  • the controller 202 may read the plurality of S-Journals in a predetermined sequential order, build a map of the S-Journals stored in the non-volatile memory devices based upon the sequentially read plurality of S-Journals, and store the built S- Journal Map 912 in the volatile memory.
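  • A high-level sketch of that scan follows; read_s_journals_in_allocation_order() is a hypothetical helper that streams journals from the System Band. Because journals are replayed in allocation order, simply overwriting map entries as they are encountered leaves last-writer-wins state in both maps.

      def rebuild_maps(read_s_journals_in_allocation_order):
          l2p = {}            # L-Page number -> (E-Page, offset, length)
          s_journal_map = {}  # S-Journal Number -> location in the System Band
          for number, location, entries in read_s_journals_in_allocation_order():
              s_journal_map[number] = location          # newest journal copy wins
              for lpn, e_page, offset, length in entries:
                  l2p[lpn] = (e_page, offset, length)   # newest L-Page write wins
          return l2p, s_journal_map

      def demo_journals():   # two successive journal entries for the same L-Page 100
          yield 0x100, ("System S-Block 3", 42), [(100, 0x2003, 800, 1624)]
          yield 0x200, ("System S-Block 1", 19), [(100, 0x4007, 0, 1534)]

      l2p, sj_map = rebuild_maps(demo_journals)
      assert l2p[100] == (0x4007, 0, 1534)   # the later journal entry prevailed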
  • Figs. 10-13 are block diagrams illustrating aspects of a method of updating the logical-to-physical address translation map, according to one embodiment.
  • a logical-to-physical address translation map 1002 contains an entry (e.g., a location of an L-Page) for L-Page 100, which has a length of 3,012 bytes.
  • L-Page 100 is stored in S-Block 15, as shown at 1006.
  • a buffer 1004 (such as a static random access memory (SRAM)) in or coupled to the controller 202 may store the S-Journal that contains the P2L information for S-Block 15 in which the L-Page 100 resides.
  • the User S-Block Info 906, whose entries are indexed by S-Block, may comprise, for each S-Block, among other information regarding the S-Block, the (e.g., exact or approximate) size of the free or used space in the S-Block. The entry in the User S-Block Info 906 for S-Block 15 is shown in Fig. 10. Fig. 10 shows an illustrative state of these constituent functional blocks before an update to L-Page 100 is processed by the controller 202.
  • an updated L-Page 100 is received, with a new length of 1,534 bytes.
  • the L-Page 100 information may be fetched from the logical-to-physical address translation map 1002 and the length (3,012 bytes) of the (now obsolete) L-Page 100 may be extracted therefrom and used to correspondingly and precisely increase the tracked free space value of S-Block 15.
  • the User S-Block Information 906 may be updated with data indicating that S-Block 15 now has 3,012 additional bytes of free space, now that the original data for L-Page 100 is now stale (e.g., obsolete).
  • the logical-to-physical address translation map 1002 may be updated to accommodate the updated L-Page.
  • the length information is now 1,534 bytes.
  • the S-Journal in the buffer 1004 for the particular portion of S-Block 15 where the update occurs may now be updated with the new L-Page information, including length information, and the L-Page newly received from, for example, a compressor may be read into the buffer 1004.
  • the entry for L-Page 100 in the logical-to-physical address translation map 1002 may temporarily reflect its location in the buffer, as shown by the arrow labeled “1”.
  • the updated L-Page 100 may be flushed to the Hot S-Block 1008, at the E-Page address specified by the newly-created entry in the now updated S-Journal still in the buffer 1004.
  • the mapping table entry in the logical-to-physical address translation map 1002 is updated to reflect the physical E-Page address and offset of the L-Page’s final destination, as suggested by arrow “2”.
  • the S-Journal in the volatile memory buffer 1004 may be written out to non-volatile memory, such as to the System S-Block 1010.
  • the System S-Block 1010 may be a portion of the flash memory allocated by the controller firmware to store S-Journals. The saving of the S-Journal in non-volatile memory enables the later reconstruction of the logical-to-physical address translation map, as needed.
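  • The Figs. 10-13 sequence condenses to roughly the following sketch, with plain dicts standing in for the DRAM tables. write_to_open_hot_s_block() and s_block_of() are hypothetical helpers: the former returns where the L-Page was actually programmed, and the latter derives the owning S-Block from an E-Page address (which, per the above, identifies the block).

      def update_l_page(lpn, new_len, l2p, user_s_block_info, open_s_journal,
                        write_to_open_hot_s_block, s_block_of):
          # 1. Fetch the obsolete entry and credit its exact length back as
          #    free space in the S-Block holding the stale copy (Figs. 10-11).
          old_e_page, _old_offset, old_len = l2p[lpn]
          user_s_block_info[s_block_of(old_e_page)]["free"] += old_len
          # 2. Program the updated L-Page into the open Hot S-Block (Fig. 12).
          new_e_page, new_offset = write_to_open_hot_s_block(lpn, new_len)
          # 3. Point the translation map at the L-Page's new home.
          l2p[lpn] = (new_e_page, new_offset, new_len)
          # 4. Journal the new P2L correspondence; the S-Journal itself is
          #    flushed to the System Band later (Fig. 13), keeping writes small.
          open_s_journal.append((new_e_page & 0x1F, lpn, new_offset, new_len))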
  • Figs. 14-17 are block diagrams illustrating aspects of a method of updating S-Journals and the S-Journal Map in the System Band, according to one embodiment.
  • the figures illustrate additional details relating to the update mechanisms of S-Journals triggered by the same example update to L-Page 100.
  • an S-Journal Map 1402 in volatile memory (e.g., synchronous dynamic random-access memory (SDRAM)) contains an entry indicating that the relevant S-Journal (i.e., the S-Journal for the particular portion of S-Block 15 storing L-Page 100) is stored in System S-Block 3 at 1408, starting at E-Page 42.
  • L-Page 100 is updated, with a length of 1,534 bytes.
  • the information (e.g., E-Page address, offset and length) for L-Page 100 may be fetched from the logical-to-physical address translation map 1002, and the address of the E-Page storing L-Page 100 may be used to extract the S-Journal Number used to locate the S-Journal entry within the S-Journal Map 1402.
  • the S-Journal Number may comprise the 27 MSbs of the first 32-bit E-Page address.
  • This S-Journal number may then be used to index into the S-Journal Map 1402 to obtain the location, which may then be used to identify the System S-Block that contains the S-Journal of interest.
  • the System S-Block Information 908 may be updated to reflect the fact that the journal entry for L-Page 100 in the S-Journal at S-Block 3 is now invalid, as the space previously occupied by that entry has now been de-allocated, in view of the recent update to L-Page 100. That de-allocated space is now free space, and must be accounted for. Alternatively, used space may be accounted for and the free space derived therefrom.
  • the entry in the System S-Block information 908 for S-Block 3 may, therefore, be incremented by the space taken by one S-Journal entry; namely, 7 bytes in the exemplary implementation being developed herein. This empty space may thereafter be taken into account during GC of the System Band.
  • L-Page 100 may then be written to the buffer 1004, as shown at 1005, whereupon the logical-to-physical address translation map 1002 may be updated to point to the buffer 1004 (indicating the physical location where L-Page 100 is stored) and to store the length of the updated L-Page 100 (1,534 in the example being developed herein).
  • System S-Block 3 still includes an S-Journal entry that points to S-Block 15 at reference 1006 as containing L-Page 100.
  • the System S-Block Info for S-Block 3 has been updated so as to indicate that the space presently occupied by the now outdated S-Journal entry within the System S-Block 3 at reference 1408 is, in fact, free space.
  • Fig. 16 also shows a different S-Journal in the buffer.
  • the S-Journal in the buffer 1004 is for recording new L-Pages written to the Open Hot S-Block 7.
  • the S-Journal in the buffer 1004 (which has not yet been flushed to the non-volatile memory) is updated with the L-Page number of updated L-Page 100 (now in the buffer also) and its length information, as indicated by the arrow at 1007 in Fig. 16.
  • L-Page 100 may now be flushed from the buffer 1004 to the currently open Hot S-Block 7, as referenced at 1404 in Fig. 16.
  • the S-Journal in the buffer is updated to reflect that updated L-Page 100 has been written to the non-volatile memory.
  • the logical-to-physical address translation map 1002 and the S- Journal may be updated with the address of the updated L-Page 100 prior to the updated L-Page 100 being written to the non-volatile memory.
  • an updated entry in that S-Journal will contain the E-Page pointed to by the new L-Page 100’s entry in the logical-to-physical address translation map 1002 and additional information about L-Page 100, as previously shown in Fig. 6.
  • the S-Journal in the buffer 1004 may be written out to an open System S-Block 1 (referenced at 1406).
  • this S-Journal is written out to E-Page 19 of open System S-Block 1.
  • the S-Journal Map 1402 now comprises one entry indicating that the S-Journal covering a portion of S-Block 15 is stored in System S-Block 3, E-Page 42. However, the entry within that S-Journal which pointed to the old location of L-Page 100 within S-Block 15 is no longer valid.
  • the S-Journal Map 1402 also comprises one entry indicating that the S-Journal for S-Block 7 is stored in System S-Block 1, E-Page 19. That S-Journal includes a valid entry that points to open Hot S-Block 7 at 1404 as the location in non-volatile memory of the updated L-Page 100. In this manner, the S-Journals containing the P2L information are updated in the buffer 1004, and are eventually written out to non-volatile memory. In addition, the S-Journal Map 1402 may be suitably updated to point to such newly updated S-Journals. Moreover, the logical-to-physical address translation map 1002 may be suitably updated to point to the correct open Hot S-Block.
  • the S-Journal construct as shown and described herein minimizes write overhead.
  • a new write operation necessitates writing a new S-Journal entry in the non-volatile memory. Since there is only a small amount of S-Journal entry data per L-Page generated (e.g., 7 bytes as described herein), the WA due to system data writes is reduced as compared to conventional systems. S-Journal data is effectively a massive command history. Updated S-Journal data is pooled up and written out sequentially to the System Band.
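  • A back-of-the-envelope comparison (the 7-byte entry size is from the text; the 32 KB map-section size is an assumed example of what a conventional design might re-save per update):

      journal_bytes_per_update = 7       # one S-Journal entry per L-Page write
      map_section_bytes = 32 * 1024      # hypothetical map section re-saved per update
      print(map_section_bytes // journal_bytes_per_update)   # ~4,681x less system data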
  • the mapping information that is generated corresponds exactly to the data actually written, with no additional unchanged entries, as only that which has changed is written to non-volatile (e.g., flash) memory, as opposed to all or a portion of a logical-to-physical map.
  • upon startup, all the S-Journal data in the System Band may be read and processed into DRAM to rebuild the logical-to-physical address translation map in volatile memory. According to one embodiment, this is done in the order in which the S-Blocks are allocated to the System Band.
  • hardware support is used to enable this large data load to occur quickly, to meet power-up timing requirements (this may not be practical without hardware, since this is essentially generating the logical-to-physical address translation map from the command history).
  • an S-Journal Map is also constructed at power-up to point to valid S-Journals stored in the System Band.
  • Figs. 18-21 are block diagrams illustrating aspects of garbage collection, according to one embodiment.
  • the data in the User S-Block Information 906 may be scanned to select the “best” S-Block to garbage collect.
  • the best S-Block to garbage collect may be that S-Block having the largest amount of free space and the lowest Program Erase (PE) count.
  • these and/or other criteria may be weighted to select the S-Block to be garbage collected.
  • S-Block 15 is referenced by information entry 15 within the User S-Block Information 906, showing some amount + 3,012 bytes of free space (the additional 3,012 bytes to account for the recently obsoleted L-Page 100).
  • the User S-Block Information 906 may comprise, among other items of information, a running count of the number of PE cycles undergone by each tracked S-Block, which may be evaluated in deciding which S-Block to garbage collect. As shown at 1006, S-Block 15 has a mix of valid data (hashed blocks) and invalid data (non-hashed blocks).
  • the S-Journal Map (see 912 in Fig. 9B) may be consulted (e.g., indexed into by the S-Block number) to find the location in non-volatile memory (e.g., E-Page address) of the corresponding S-Journals for that S-Block.
  • one S-Journal pointed to by the S-Journal Map 912 is located using the S-Journal Number (27 MSb of E-Page Address), and read into the buffer 1004, as shown in Fig. 18.
  • the one or more E-Pages in the System S-Block 1804 pointed to by the S-Journal Map 912 are accessed and the S-Journal stored beginning at that location may be read into the buffer 1004.
  • the S-Journal Map also contains the length of the S-Journal, since an S-Journal may span one or more E-Pages.
  • An S-Journal may be quite large and, therefore, may be read in pieces and processed as available.
  • each P2L entry in the S-Journal in the buffer 1004 may then be compared to the corresponding entry in the logical-to-physical address translation map 1802. For each entry in the S-Journal in the buffer 1004, it may be determined whether the physical address for the L-Page of that entry matches the physical address of the same L-Page in the corresponding entry in the logical-to-physical address translation map 1802. If the two match, that entry in the S-Journal is valid. Conversely, if the address for the L-Page in the S-Journal does not match the entry for that L-Page in the logical-to-physical address translation map, that entry in the S-Journal is not valid.
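  • That validity test is small enough to state directly (a sketch; as discussed for Figs. 22-26 below, the offset may be compared along with the E-Page address):

      def s_journal_entry_is_valid(entry: dict, l2p: dict) -> bool:
          """Valid iff the map still points at the entry's physical location."""
          current = l2p.get(entry["lpn"])   # (E-Page, offset, length) or None
          return current is not None and current[:2] == (entry["e_page"], entry["offset"])

      l2p = {100: (0x4007, 0, 1534)}
      assert s_journal_entry_is_valid({"lpn": 100, "e_page": 0x4007, "offset": 0}, l2p)
      assert not s_journal_entry_is_valid({"lpn": 100, "e_page": 0x2003, "offset": 800}, l2p)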
  • the referenced L-Pages may be read out of S-Block 15 and written to the buffer 1004, as shown in Fig. 19. The same process may be used for other S-Journals covering S-Block 15 until the entire S-Block is processed. As also shown in Fig. 19 at reference 1006, S-Block 15 now contains only invalid data. This is because the data indicated as valid by entries in S-Journals for S-Block 15 have been preserved and will soon be moved to a new S-Block.
  • the valid data may then be written out to the Cold S-Block 1801 (the Hot S-Block being used for recently written host data, not garbage collected data), as shown at Fig. 20.
  • S-Journal 1005 may be written out to the System S-Block 1804 in the System Band.
  • S-Block 15 has now been garbage collected and User S-Block Info 906 now indicates that the entire S-Block 15 is free space.
  • S-Block 15 may thereafter be erased, its PE count updated and made available for new writes. It is to be noted that an invalid S-Journal is still present in System S-Block 1804. The space in flash memory in the System Band occupied by this invalid S-Journal may be garbage collected at some later time.
  • Figs. 22-26 are block diagrams illustrating aspects of garbage collecting a system block, according to one embodiment.
  • System S-Block 3, shown at reference 2208 in Fig. 22, has been picked for garbage collection.
  • all of the E-pages (and by extension, all S-Journals contained therein) within the picked System S-Block may be read (sequentially or non-sequentially) into the buffer 1004.
  • the S-Journal numbers for one or more of the S-Journals read into the buffer 1004 may then be extracted from the headers of the S-Journals. Each such System S-Journal number may then be used to look up, in the S-Journal Map 2202, whether the corresponding S-Journal is still valid. According to one embodiment, invalid S-Journals are those S-Journals whose S-Journal number is not matched by a corresponding entry in the S-Journal Map 2202, which has the most updated information on where S-Journals are physically stored.
  • For example, if the entry in the S-Journal Map 2202 for S-Journal Number “12345” points to an E-Page within System S-Block 120, the copy of S-Journal “12345” in S-Block 3 (the S-Block being garbage collected) is obsolete. Likewise, if the S-Journal Map entry instead points to the E-Page from S-Block 3 where S-Journal “12345” currently resides, S-Journal “12345” is still valid.
  • a valid S-Journal being garbage collected may include a mix of valid and invalid entries; thus, individual checking of the entries is needed.
  • each entry in each valid S-Journal 2402 may then be matched with a corresponding entry in the logical-to-physical address translation map 1802 in memory. That is, the E-Page address of the L-Page referenced by each S-Journal entry may be compared with the E-Page address of the L-Page specified in the logical-to-physical address translation map 1802. If the two match, that S-Journal entry is valid.
  • in some embodiments, it may be necessary to compare the E-Page address and the offset within the E-Page in the S-Journal with the E-Page address and the offset within the E-Page of the L-Page specified in the logical-to-physical address translation map 1802. Conversely, if the E-Page address (or E-Page address and offset) for the L-Page in the S-Journal does not match the E-Page address (or E-Page address and offset) in the entry for that L-Page in the address translation map 1802, that entry in the S-Journal is not valid.
  • each S-Journal loaded into the buffer 1004 from the S-Block picked for GC may first be determined to be valid or invalid at the journal level, and then, if determined valid, compacted to contain only valid entries.
  • invalid S-Journal entries are simply not copied to the new version of the S-Journal 2502.
  • the new version 2502 of the S-Journal will be smaller (i.e., comprise fewer entries) than the older version 2402 thereof, presuming that the S-Journal 2402 had one or more obsolete entries. It is to be noted that if an S-Journal has all valid entries, the size of the new version thereof will remain the same.
  • the S-Journal 2502 (the new version of the S-Journal 2402, which is now invalid or obsolete) may then be written to the current open System S-Block which, in Fig. 26, is Open System S-Block 1, reference numeral 2206. Thereafter, the S-Journal Map 2202 may be updated with the new location of the S-Journal 2502 in Open System S-Block 1. At the end of this process, the S-Journals in System S-Block 3 (2208) have been garbage collected and System S-Block 3 may then be erased and made available for future programming.
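  • The whole system-block GC pass condenses to roughly the following sketch (dicts again stand in for the DRAM structures; s_journal_map is assumed to map an S-Journal Number to a (System S-Block, location) pair): journal-level validity is checked against the S-Journal Map 2202, then surviving journals are compacted entry by entry against the translation map.

      def gc_system_s_block(this_block, journals, s_journal_map, l2p):
          compacted = []
          for number, location, entries in journals:   # journals read from this_block
              if s_journal_map.get(number) != (this_block, location):
                  continue                     # an obsolete journal copy: drop it whole
              kept = [e for e in entries       # keep only still-valid entries
                      if l2p.get(e["lpn"], ())[:2] == (e["e_page"], e["offset"])]
              if kept:
                  compacted.append((number, kept))   # new, smaller journal version
          # The compacted journals are then written to the open System S-Block,
          # the S-Journal Map is updated, and this_block can be erased.
          return compacted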

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Memory System (AREA)

Abstract

A data storage device comprises a plurality of non-volatile memory devices storing physical pages, each stored at a predetermined physical location. A controller may be coupled to the memory devices and configured to access data stored in a plurality of logical pages (L-Pages), each associated with an L-Page number that enables the controller to logically reference data stored in the physical pages. A volatile memory may comprise a logical-to-physical address translation map that enables the controller to determine a physical location, within the physical pages, of data stored in each L-Page. The controller may be configured to maintain, in the memory devices, journals defining physical-to-logical correspondences, each journal covering a predetermined range of physical pages and comprising a plurality of entries that associate one or more physical pages to each L-Page. The controller may read the journals upon startup and rebuild the address translation map from the read journals.

Description

METHODS, DEVICES AND SYSTEMS FOR PHYSICAL-TO-LOGICAL MAPPING IN SOLID STATE DRIVES

BACKGROUND
[0001] Due to the nature of flash memory in solid state drives (SSDs), data is typically programmed by pages and erased by blocks. A page in an SSD is typically 8-16 kilobytes (KB) in size and a block consists of a large number of pages (e.g., 256 or 512). Thus, a particular physical location in an SSD (e.g., a page) cannot be directly overwritten without overwriting data in pages within the same block, as is possible in a magnetic hard disk drive. As such, address indirection is needed. Conventional data storage device controllers, which manage the flash memory on data storage devices such as SSDs and interface with the host system, use a Logical to Physical (L2P) mapping system known as Logical Block Addressing (LBA) that is part of the flash translation layer (FTL). When new data comes in replacing older data already written, the data storage device controller causes the new data to be written in a new location and updates the logical mapping to point to the new physical location. Since the old physical location no longer holds valid data, it will eventually need to be erased before it can be written again.
[0002] Conventionally, a large L2P map table maps logical entries to physical address locations on an SSD. This large L2P map table, which may reside in a volatile memory such as dynamic random access memory (DRAM), is usually updated as writes come in, and saved to non-volatile memory in small sections. For example, if random writing occurs, although the system may have to update only one entry, it may nonetheless have to save to the non-volatile memory the entire table or a portion thereof, including entries that have not been updated, which is inherently inefficient.
[0003] Fig. 1 shows aspects of a conventional Logical Block Addressing (LBA) scheme for an SSD. As shown therein, a map table 104 contains one entry for every logical block 102 defined for the data storage device’s flash memory 106. For example, a 64 GB SSD that supports 512 byte logical blocks may present itself to the host as having 125,000,000 logical blocks. One entry in the map table 104 contains the current location of each of the 125,000,000 logical blocks in the flash memory 106. In a conventional SSD, a flash page holds an integer number of logical blocks (i.e., a logical block does not span across flash pages). In this conventional example, an 8 KB flash page would hold 16 logical blocks (of size 512 bytes). Therefore, each entry in the logical-to-physical map table 104 contains a field 108 identifying the flash die on which the logical block is stored, a field 110 identifying the flash block on which the logical block is stored, another field 112 identifying the flash page within the flash block and a field 114 identifying the offset within the flash page that identifies where the logical block data begins in the identified flash page. The large size of the map table 104 prevents the table from being held inside the SSD controller. Conventionally, the large map table 104 is held in an external DRAM connected to the SSD controller. As the map table 104 is stored in volatile DRAM, it must be restored when the SSD powers up, which can take a long time, due to the large size of the table.
[0004] When a logical block is read, the corresponding entry in the map table 104 is read to determine the location in flash memory to be read. A read is then performed to the flash page specified in the corresponding entry in the map table 104. When the read data is available for the flash page, the data at the offset specified by the map entry is transferred from the SSD to the host. When a logical block is written, the corresponding entry in the map table 104 is updated to reflect the new location of the logical block. It is to be noted that when a logical block is written, the flash memory will initially contain at least two versions of the logical block; namely, the valid, most recently written version (pointed to by the map table 104) and at least one other, older version thereof that is stale and is no longer pointed to by any entry in the map table 104. These “stale” data are referred to as garbage, which occupies space that must be accounted for, collected, erased and made available for future use.

BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Fig. 1 shows aspects of a conventional Logical Block Addressing (LBA) scheme for SSDs.
[0006] Fig. 2 is a diagram showing aspects of the physical and logical data organization of a data storage device according to one embodiment.
[0007] Fig. 3 shows a logical-to-physical address translation map and illustrative entries thereof, according to one embodiment.
[0008] Fig. 4 shows aspects of a method for updating a logical-to-physical address translation map and for creating an S-Journal entry, according to one embodiment.
[0009] Fig. 5 is a block diagram of an S-Journal, according to one embodiment.
[0010] Fig. 6 shows an exemplary organization of one entry of an S-Journal, according to one embodiment.
[0011] Fig. 7 is a block diagram of a superblock (S-Block), according to one embodiment.
[0012] Fig. 8 shows another view of a super page (S-Page), according to one embodiment.
[0013] Fig. 9A shows relationships among the logical-to-physical address translation map, S-Journals and S-Blocks, according to one embodiment.
[0014] Fig. 9B is a block diagram of an S-Journal Map, according to one embodiment.
[0015] Fig. 10 is a block diagram illustrating aspects of a method of updating the logical-to-physical address translation map, according to one embodiment.
[0016] Fig. 11 is a block diagram illustrating further aspects of a method of updating the logical-to-physical address translation map, according to one embodiment.
[0017] Fig. 12 is a block diagram illustrating still further aspects of a method of updating the logical-to-physical address translation map, according to one embodiment.
[0018] Fig. 13 is a block diagram illustrating yet further aspects of a method of updating the logical-to-physical address translation map, according to one embodiment.
[0019] Fig. 14 is a block diagram illustrating aspects of a method of updating S- Journals and the S-Journal Map, according to one embodiment.
[0020] Fig. 15 is a block diagram illustrating further aspects of a method of updating S-Journals and the S-Journal Map, according to one embodiment.
[0021] Fig. 16 is a block diagram illustrating still further aspects of a method of updating S-Journals and the S-Journal Map, according to one embodiment.
[0022] Fig. 17 is a block diagram illustrating yet further aspects of a method of updating S-Journals and the S-Journal Map, according to one embodiment.
[0023] Fig. 18 is a block diagram illustrating aspects of garbage collection, according to one embodiment.
[0024] Fig. 19 is a block diagram illustrating further aspects of garbage collection, according to one embodiment.
[0025] Fig. 20 is a block diagram illustrating still further aspects of garbage collection, according to one embodiment.
[0026] Fig. 21 is a block diagram illustrating yet further aspects of garbage collection, according to one embodiment.
[0027] Fig. 22 is a block diagram illustrating aspects of garbage collecting a system block, according to one embodiment.
[0028] Fig. 23 is a block diagram illustrating further aspects of garbage collecting a system block, according to one embodiment.
[0029] Fig. 24 is a block diagram illustrating still further aspects of garbage collecting a system block, according to one embodiment.
[0030] Fig. 25 is a block diagram illustrating other aspects of garbage collecting a system block, according to one embodiment.
[0031] Fig. 26 is a block diagram illustrating still other aspects of garbage collecting a system block, according to one embodiment.

DETAILED DESCRIPTION
System Overview
[0032] Fig. 2 is a diagram showing aspects of the physical and logical data organization of a data storage device according to one embodiment. In one embodiment, the data storage device is an SSD. In another embodiment, the data storage device is a hybrid drive including flash memory and rotating magnetic storage media. The disclosure is applicable to both SSD and hybrid implementations, but for the sake of simplicity the various embodiments are described with reference to SSD-based implementations. A data storage device controller 202 according to one embodiment may be configured to be coupled to a host, as shown at reference numeral 218. The host 218 may utilize a logical block addressing (LBA) scheme. While the LBA size is normally fixed, the host can vary the size of the LBA dynamically. For example, the LBA size may vary by interface and interface mode. Indeed, while 512 bytes is most common, 4 KB is also becoming more common, as are 512+ (520, 528 etc.) and 4 KB+ (4 KB+8, 4K+16 etc.) formats. As shown therein, the data storage device controller 202 may comprise or be coupled to a page register 204. The page register 204 may be configured to enable the controller 202 to read data from and store data to the data storage device. The controller 202 may be configured to program and read data from an array of flash memory devices responsive to data access commands from the host 218. While the description herein refers to flash memory generally, it is understood that the array of memory devices may comprise one or more of various types of non-volatile memory devices such as flash integrated circuits, Chalcogenide RAM (C-RAM), Phase Change Memory (PC-RAM or PRAM), Programmable Metallization Cell RAM (PMC-RAM or PMCm), Ovonic Unified Memory (OUM), Resistance RAM (RRAM), NAND memory (e.g., single-level cell (SLC) memory, multi-level cell (MLC) memory, or any combination thereof), NOR memory, EEPROM, Ferroelectric Memory (FeRAM), Magnetoresistive RAM (MRAM), other discrete NVM (non-volatile memory) chips, or any combination thereof.
[0033] The page register 204 may be configured to enable the controller 202 to read data from and store data to the array. According to one embodiment, the array of flash memory devices may comprise a plurality of non-volatile memory devices in die (e.g., 128 dies), each of which comprises a plurality of blocks, such as shown at 206 in Fig. 2. Other page registers 204 (not shown) may be coupled to blocks on other die. A combination of flash blocks, grouped together, may be called a Superblock or S-Block. In some embodiments, the individual blocks that form an S-Block may be chosen from one or more dies, planes or other levels of granularity. An S-Block, therefore, may comprise a plurality of flash blocks, spread across one or more die, that are combined together. In this manner, the S-Block may form a unit on which the Flash Management System (FMS) operates. In some embodiments, the individual blocks that form an S-Block may be chosen according to a different granularity than at the die level, such as the case when the memory devices include dies that are sub-divided into structures such as planes (i.e., blocks may be taken from individual planes). According to one embodiment, allocation, erasure and garbage collection may be carried out at the S-Block level. In other embodiments, the FMS may perform data operations according to other logical groupings such as pages, blocks, planes, dies, etc.
[0034] In turn, each of the flash blocks 206 comprises a plurality of flash pages (F-Pages) 208. Each F-Page may be of a fixed size such as, for example, 16 KB. The F-Page, according to one embodiment, is the size of the minimum unit of program for a given flash device. As also shown in Fig. 2, each F-Page 208 may be configured to accommodate a plurality of physical pages, hereinafter referred to as E-Pages 210. The term “E-Page” refers to a data structure stored in flash memory on which an error correcting code (ECC) has been applied. According to one embodiment, the E-Page 210 may form the basis for physical addressing within the data storage device and may constitute the minimum unit of flash read data transfer. The E-Page 210, therefore, may be (but need not be) of a predetermined fixed size (such as 2 KB, for example) and determine the size of the payload (e.g., host data) of the ECC system. According to one embodiment, each F-Page 208 may be configured to fit a predetermined plurality of E-Pages 210 within its boundaries. For example, given 16 KB size F-Pages 208 and a fixed size of 2 KB per E-Page 210, eight E-Pages 210 fit within a single F-Page 208, as shown in Fig. 2. In any event, according to one embodiment, a power of 2 multiple of E-Pages 210, including ECC, may be configured to fit into an F-Page 208. Each E-Page 210 may comprise a data portion 214 and, depending on where the E-Page 210 is located, may also comprise an ECC portion 216. Neither the data portion 214 nor the ECC portion 216 need be fixed in size. The address of an E-Page uniquely identifies the location of the E-Page within the flash memory. For example, the E-Page’s address may specify the flash channel, a particular die within the identified flash channel, a particular block within the die, a particular F-Page and, finally, the E-Page within the identified F-Page.
[0035] To bridge between physical addressing on the data storage device and logical block addressing by the host, a logical page (L-Page) construct is introduced. An L-Page, denoted in Fig. 2 at reference numeral 212, may comprise the minimum unit of address translation used by the FMS. Each L-Page, according to one embodiment, may be associated with an L-Page number. The L-Page numbers of L-Pages 212, therefore, may be configured to enable the controller 202 to logically reference host data stored in one or more of the physical pages, such as the E-Pages 210. The L-Page 212 may also be utilized as the basic unit of compression. According to one embodiment, unlike F-Pages 208 and E-Pages 210, L-Pages 212 are not fixed in size and may vary in size, due to variability in the compression of data to be stored. Since the compressibility of data varies, a 4 KB amount of data of one type may be compressed into a 2 KB L-Page while a 4 KB amount of data of a different type may be compressed into a 1 KB L-Page, for example. Due to such compression, therefore, the size of L-Pages may vary within a range defined by a minimum compressed size of, for example, 24 bytes to a maximum uncompressed size of, for example, 4 KB or 4 KB+. Other sizes and ranges may be implemented. As shown in Fig. 2, L-Pages 212 need not be aligned with the boundaries of E-Pages 210. Indeed, L-Pages 212 may be configured to have a starting address that is aligned with an F-Page 208 and/or E-Page 210 boundary, but also may be configured to be unaligned with either of the boundaries of an F-Page 208 or E-Page 210. That is, an L-Page starting address may be located at a non-zero offset from either the start or ending addresses of the F-Pages 208 or the start or ending addresses of the E-Pages 210, as shown in Fig. 2. As the L-Pages 212 are not fixed in size and may be smaller than the fixed-size E-Pages 210, more than one L-Page 212 may fit within a single E-Page 210. Similarly, as the L-Pages 212 may be larger in size than the E-Pages 210, L-Pages 212 may span more than one E-Page, and may even cross the boundaries of F-Pages 208, as shown in Fig. 2 at numeral 217.
[0036] For example, where the LBA size is 512 or 512+ bytes, a maximum of, for example, eight sequential LBAs may be packed into a 4 KB L-Page 212, given that an uncompressed L-Page 212 may be 4 KB to 4 KB+. It is to be noted that, according to one embodiment, the exact logical size of an L-Page 212 is unimportant as, after compression, the physical size may span from a few bytes at minimum size to thousands of bytes at full size. For example, for a 4 TB SSD device, 30 bits of addressing may be used to address each L-Page 212 to cover the number of L-Pages that could potentially be present in such an SSD.
[0037] Fig. 3 shows a logical-to-physical address translation map and illustrative entries thereof, according to one embodiment. As the host data is referenced by the host in L-Pages 212 and as the data storage device stores the L-Pages 212 in one or more contiguous E-Pages 210, a logical-to-physical address translation map is required to enable the controller 202 to associate an L-Page number of an L-Page 212 to one or more E-Pages 210. Such a logical-to-physical address translation map is shown in Fig. 3 at 302 and, in one embodiment, is a linear array having one entry per L-Page 212. Such a logical-to-physical address translation map 302 may be stored in a volatile memory, such as a DRAM or SRAM. Fig. 3 also shows the entries in the logical-to-physical address translation map for four different L-Pages 212, which L-Pages 212 in Fig. 3 are associated with L-Page numbers denoted as L-Page 1, L-Page 2, L-Page 3 and L-Page 4. According to one embodiment, each L-Page stored in the data storage device may be pointed to by a single and unique entry in the logical-to-physical address translation map 302. Accordingly, in the example being developed herewith, four entries are shown. As shown at 302, each entry in the map 302 may comprise an L-Page number, which may comprise an identification of the physical page (e.g., E-Page) containing the start address of the L-Page being referenced, the offset of the start address within the physical page (e.g., E-Page) and the length of the L-Page. In addition, a plurality of ECC bits may provide error correction functionality for the map entry. For example, and as shown in Fig. 3, and assuming an E-Page size of 2 KB, L-Page 1 may be referenced in the logical-to-physical address translation map 302 as follows: E-Page 1003, offset 800, length 1624, followed by a predetermined number of ECC bits (not shown). That is, in physical address terms, the start of L-Page 1 is within (not aligned with) E-Page 1003, and is located at an offset from the starting physical location of the E-Page 1003 that is equal to 800 bytes. Compressed L-Page 1, furthermore, extends 1,624 bytes, thereby crossing an E-Page boundary to E-Page 1004. Therefore, E-Pages 1003 and 1004 each store a portion of the L-Page 212 denoted by L-Page number L-Page 1. Similarly, the compressed L-Page referenced by L-Page number L-Page 2 is stored entirely within E-Page 1004, and begins at an offset therein of 400 bytes and extends only 696 bytes within E-Page 1004. The compressed L-Page associated with L-Page number L-Page 3 starts within E-Page 1004 at an offset of 1,120 bytes (just 24 bytes away from the boundary of L-Page 2) and extends 4,096 bytes past E-Page 1005 and into E-Page 1006. Therefore, the L-Page associated with L-Page number L-Page 3 spans a portion of E-Page 1004, all of E-Page 1005 and a portion of E-Page 1006. Finally, the L-Page associated with L-Page number L-Page 4 begins within E-Page 1006 at an offset of 1,144 bytes, and extends 3,128 bytes to fully span E-Page 1007, crossing an F-Page boundary into E-Page 1008 of the next F-Page.
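As an illustration of how such map entries resolve to physical locations, the following minimal Python sketch reproduces the four entries of Fig. 3 and computes which E-Pages each L-Page spans. The numbers and the 2 KB E-Page size come from the example above; the dictionary layout is an assumption made here for illustration, not the disclosed in-memory format.

```python
E_PAGE_SIZE = 2048  # bytes, per the 2 KB E-Page example above

def e_pages_spanned(e_page: int, offset: int, length: int) -> list:
    """E-Page numbers touched by an L-Page starting at `offset` bytes
    into `e_page` and extending `length` bytes."""
    last_byte = offset + length - 1
    return list(range(e_page, e_page + last_byte // E_PAGE_SIZE + 1))

# The four illustrative map entries of Fig. 3: LPN -> (E-Page, offset, length)
l2p = {1: (1003, 800, 1624), 2: (1004, 400, 696),
       3: (1004, 1120, 4096), 4: (1006, 1144, 3128)}

for lpn, (ep, off, ln) in l2p.items():
    print(f"L-Page {lpn} spans E-Pages {e_pages_spanned(ep, off, ln)}")
# L-Page 1 -> [1003, 1004]; L-Page 2 -> [1004];
# L-Page 3 -> [1004, 1005, 1006]; L-Page 4 -> [1006, 1007, 1008]
```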
[0038] Collectively, each of these constituent identifier fields (E-Page, offset, length and ECC) making up each entry of the logical-to-physical address translation map 302 may be, for example, 8 bytes in size. That is, for an exemplary 4 TB drive, the address of the E-Page may be 32 bits in size, the offset may be 12 bits (for E-Page data portions up to 4 KB) in size, the length may be 10 bits in size and the ECC field may be provided. Other organizations and bit-widths are possible. Such an 8-byte entry may be created each time an L-Page is written or modified, to enable the controller 202 to keep track of the host data, written in L-Pages, within the flash storage. This 8-byte entry in the logical-to-physical address translation map may be indexed by an L-Page number, or LPN. In other words, according to one embodiment, the L-Page number functions as an index into the logical-to-physical address translation map 302. It is to be noted that, in the case of a 4 KB sector size, the LBA is the same as the LPN. The LPN, therefore, may constitute the address of the entry within the volatile memory. When the controller 202 receives a read command from the host 218, the LPN may be derived from the supplied LBA and used to index into the logical-to-physical address translation map 302 to extract the location of the data to be read in the flash memory. When the controller 202 receives a write command from the host, the LPN may be constructed from the LBA and the logical-to-physical address translation map 302 may be modified. For example, a new entry therein may be created. Depending upon the size of the volatile memory storing the logical-to-physical address translation map 302, the LPN may be stored in a single entry or broken into, for example, a first entry identifying the E-Page containing the starting address of the L-Page in question (plus ECC bits) and a second entry identifying the offset and length (plus ECC bits). According to one embodiment, therefore, these two entries may together correspond and point to a single L-Page within the flash memory. In other embodiments, the specific format of the logical-to-physical address translation map entries may be different from the examples shown above.
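A hedged sketch of how such an 8-byte entry might be packed follows, using the example bit-widths above (32-bit E-Page address, 12-bit offset, 10-bit length). Allocating the remaining 10 bits to the ECC field, and the field ordering itself, are assumptions made here purely for illustration.

```python
def pack_entry(e_page: int, offset: int, length: int, ecc: int = 0) -> int:
    """Pack one map entry into a 64-bit integer (illustrative layout only)."""
    assert e_page < (1 << 32) and offset < (1 << 12) and length < (1 << 10)
    return (ecc << 54) | (e_page << 22) | (offset << 10) | length

def unpack_entry(entry: int) -> tuple:
    return ((entry >> 22) & 0xFFFFFFFF,  # E-Page address
            (entry >> 10) & 0xFFF,       # offset within the E-Page
            entry & 0x3FF)               # L-Page length (encoding units for
                                         # lengths over 1023 are unspecified here)

assert unpack_entry(pack_entry(1003, 800, 1000)) == (1003, 800, 1000)
```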
S-Journals and S-Journal Map
[0039] As the logical-to-physical address translation map 302 may be stored in a volatile memory, it may need to be rebuilt upon startup or any other loss of power to the volatile memory. This, therefore, requires some mechanism and information to be stored in a non-volatile memory that will enable the controller 202 to reconstruct the logical-to-physical address translation map 302 before the controller can “know” where the L-Pages are stored in the non-volatile memory after startup or after a power-fail event. According to one embodiment, such mechanism and information are embodied in a construct that may be called a System Journal, or S-Journal. According to one embodiment, the controller 202 may be configured to maintain, in the plurality of non-volatile memory devices (e.g., in one or more of the blocks 206 in one or more die, channel or plane), a plurality of S-Journals defining physical-to-logical address correspondences. According to one embodiment, each S-Journal covers a pre-determined range of physical pages (e.g., E-Pages). According to one embodiment, each S-Journal may comprise a plurality of journal entries, with each entry being configured to associate one or more physical pages, such as E-Pages, to the L-Page number of each L-Page. According to one embodiment, each time the controller 202 restarts or whenever the logical-to-physical address translation map 302 is to be rebuilt either partially or entirely, the controller 202 reads the S-Journals and, from the information read from the S-Journal entries, rebuilds the logical-to-physical address translation map 302.
[0040] Fig. 4 shows aspects of a method for updating a logical-to-physical address translation map and for creating an S-Journal entry, according to one embodiment. As shown therein, to ensure that the logical-to-physical address translation map 302 is kept up-to-date, whenever an L-Page is written or otherwise updated as shown at block B41, the logical-to-physical address translation map 302 may be updated as shown at B42. As shown at B43, an S-Journal entry may also be created, storing therein information pointing to the location of the updated L-Page. In this manner, both the logical-to-physical address translation map 302 and the S-Journals are updated when new writes occur (e.g., as the host issues writes to non-volatile memory, as garbage collection/wear leveling occurs, etc.). Write operations to the non-volatile memory devices to maintain a power-safe copy of address translation data may be configured, therefore, to be triggered by newly created journal entries (which may be just a few bytes in size) instead of re-saving all or a portion of the logical-to-physical address translation map, such that Write Amplification (WA) is reduced. The updating of the S-Journals ensures that the controller 202 can access a newly updated L-Page and that the logical-to-physical address translation map 302 may be reconstructed upon restart or another information-erasing power event affecting the volatile memory in which the logical-to-physical address translation map is stored. Moreover, in addition to their utility in rebuilding the logical-to-physical address translation map 302, the S-Journals are useful in enabling effective Garbage Collection (GC). Indeed, the S-Journals may contain the last-in-time update to all L-Page numbers, and also may contain stale entries, i.e., entries that no longer point to a valid L-Page.
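The flow of Fig. 4 can be summarized in a few lines of Python. This is a schematic sketch only; the structure names (l2p_map, open_s_journal) are invented here for illustration and do not appear in the disclosure.

```python
l2p_map = {}         # L-Page number -> (E-Page, offset, length), in volatile memory
open_s_journal = []  # journal entries pooled in a buffer before being flushed

def on_l_page_update(lpn: int, e_page: int, offset: int, length: int) -> None:
    # B42: update the logical-to-physical map for the newly written L-Page
    l2p_map[lpn] = (e_page, offset, length)
    # B43: create a small journal entry pointing to the updated L-Page; only
    # this entry (a few bytes), not the map itself, triggers a flash write
    open_s_journal.append((e_page, lpn, offset, length))

on_l_page_update(lpn=100, e_page=5000, offset=0, length=1534)
```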
[0041] According to one embodiment, the S-Journals may be the main flash management data written to the non-volatile memory. S-Journals may contain the Physical-to-Logical (P2L) mapping information for a given S-Block. Fig. 5 is a block diagram showing aspects of an S-Journal, according to one embodiment. As shown therein, each S-Journal 502 covers a predetermined physical region of the non-volatile memory such as, for example, 32 E-Pages as shown at 506, which are addressable using 5 bits. Each S-Journal 502 may be identified by an S-Journal Number, which may be part of a header 504 that could include other information about the S-Journal. The S-Journal Number may comprise a portion of the address of the first physical page covered by the S-Journal. For example, the S-Journal Number of S-Journal 502 may comprise the 27 Most Significant Bits (MSb) of the first E-Page address covered by this S-Journal 502.
[0042] Fig. 6 shows an exemplary organization of one entry 602 of an S-Journal 502, according to one embodiment. Each entry 602 of the S-Journal 502 may point to the starting address of one L-Page, which is physically addressed in E-Pages. Each entry 602 may comprise, for example, a number (5, for example) of Least Significant Bits (LSbs) of the address of the E-Page containing the start of the L-Page. The full E-Page address is obtained by concatenating these 5 LSbs with the 27 MSbs of the S-Journal Number in the header 504. In addition, the entry 602 may comprise the L-Page number, its offset within the identified E-Page and its size. For example, each entry 602 of an S-Journal may comprise the 5 LSbs of the address of the first E-Page covered by this S-Journal entry, 30 bits of L-Page number, 9 bits of E-Page offset and 10 bits of L-Page size, adding up to an overall size of about 7 bytes. Various other internal journal entry formats may be used in other embodiments.
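A small sketch of the address split just described: the 27 MSbs stored once in the header and the 5 LSbs stored per entry reassemble into the full 32-bit E-Page address. The function names are illustrative only.

```python
def s_journal_number(e_page_addr: int) -> int:
    # 27 MSbs of the 32-bit E-Page address; one S-Journal covers 32 E-Pages
    return e_page_addr >> 5

def full_e_page_addr(journal_number: int, entry_lsbs: int) -> int:
    # Concatenate the header's 27 MSbs with the entry's 5 LSbs
    return (journal_number << 5) | (entry_lsbs & 0x1F)

addr = 0x0ABC_DE42
assert full_e_page_addr(s_journal_number(addr), addr & 0x1F) == addr
```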
[0043] According to one embodiment, due to the variability in the compression or the host configuration of the data stored in L-Pages, a variable number of L-Pages may be stored in a physical area, such as a physical area equal to 32 E-Pages, as shown at 506. As a result of the use of compression and the consequent variability in the sizes of L-Pages, S-Journals may comprise a variable number of entries. For example, according to one embodiment, at maximum compression, an L-Page may be 24 bytes in size and an S-Journal may comprise over 2,500 entries, referencing an equal number of L-Pages, one L-Page per S-Journal entry 602.
[0044] As noted above, an S-Journal may be configured to contain mapping information for a given S-Block. More precisely, according to one embodiment, S-Journals contain the mapping information for a predetermined range of E-Pages within a given S-Block. Fig. 7 is a block diagram of an S-Block, according to one embodiment. As shown therein, an S-Block 702 may comprise one flash block (F-Block) 704 (as also shown at 206 in Fig. 2) per die. An S-Block, therefore, may be thought of as a collection of F-Blocks, one F-Block per die, that are combined together to form a unit of the Flash Management System. According to one embodiment, allocation, erasure and GC may be managed at the S-Block level. Each F-Block 704, as shown in Fig. 7, may comprise a plurality of flash pages (F-Pages) such as, for example, 256 or 512 F-Pages. An F-Page, according to one embodiment, may be the size of the minimum unit of program for a given non-volatile memory device. Fig. 8 shows a super page (S-Page), according to one embodiment. As shown therein, an S-Page 802 may comprise one F-Page per F-Block of an S-Block, meaning that an S-Page spans across an entire S-Block.
Relationships Among Various Data Structures
[0045] Fig. 9A shows relationships among the logical-to-physical address translation map, the S-Journal map and S-Blocks, according to one embodiment. Reference 902 denotes an entry in the logical-to-physical address translation map (stored in DRAM in one embodiment). According to one embodiment, the logical-to-physical address translation map may be indexed by L-Page number, in that there may be one entry 902 per L-Page in the logical-to-physical address translation map. The physical address of the start of the L-Page in the flash memory and the size thereof may be given in the map entry 902; namely by E-Page address, offset within the E-Page and the size of the L-Page. As noted earlier, the L-Page, depending upon its size, may span one or more E-Pages and may span F-Pages and F-Blocks as well.
[0046] As shown at 904, the volatile memory (e.g., DRAM) may also store a System Journal (S-Journal) map. An entry 904 in the S-Journal map stores information related to where an S-Journal is physically located in the non-volatile memory. For example, the 27 MSbs of the E-Page physical address where the start of the L-Page is stored may constitute the S-Journal Number (as previously shown in Fig. 5). The S-Journal map entry 904 in the volatile memory may also include the address of the S-Journal in non-volatile memory, referenced in system E-Pages. From the S-Journal map entry 904 in volatile memory, System S-Block Information 908 may be extracted. The System S-Block Information 908 may be indexed by System S-Block (S-Block in the System Band) and may comprise, among other information regarding the S-Block, the size of any free or used space in the System S-Block. Also from the S-Journal map entry 904, the physical location (expressed in terms of E-Pages in the System Band) of the referenced S-Journal in non-volatile memory 910 may be extracted.
[0047] The System Band, according to one embodiment, does not contain L-Page data and may contain Flash Management System (FMS) meta-data and information. The System Band may be configured as lower page only for reliability and power fail simplification. During normal operation, the System Band need not be read except during Garbage Collection. The System Band may be provided with significantly higher overprovisioning than the data band for overall WA optimization. Other bands include the Hot Band, which may contain L-Page data and is frequently updated, and the Cold Band, which may be less frequently updated and may comprise more static data, such as data that may have been collected as a result of GC. According to one embodiment, the System, Hot and Cold Bands may be allocated on an S-Block basis.
[0048] As noted above, each of these S-Journals in non-volatile memory may comprise a collection of S-Journal entries and cover, for example, 32 E-Pages worth of data. These S-Journals in non-volatile memory 910 enable the controller 202 to rebuild not only the logical-to-physical address translation map in volatile memory, but also the S-Journal map, the User S-Block Information 906, and the System S-Block Information 908, in volatile memory.
[0049] Fig. 9B is a block diagram of an S-Journal Map 912, according to one embodiment. The S-Journal Map 912 may be indexed by S-Block number and each entry thereof may point to the start of the first S-Journal for that S-Block which, in turn, may cover a predetermined number of E-Pages (e.g., 32) of that S-Block. The controller 202 may be further configured to build or rebuild a map of the S-Journals and store the resulting S-Journal Map in volatile memory. That is, upon restart, upon the occurrence of an event in which power fails, or after a restart subsequent to error recovery, the controller 202 may read the plurality of S-Journals in a predetermined sequential order, build a map of the S-Journals stored in the non-volatile memory devices based upon the sequentially read plurality of S-Journals, and store the built S-Journal Map 912 in the volatile memory.
Updating the Logical-to-Physical Address Translation Map
[0050] Figs. 10-13 are block diagrams illustrating aspects of a method of updating the logical-to-physical address translation map, according to one embodiment. As shown in Fig. 10, a logical-to-physical address translation map 1002 contains an entry (e.g., a location of an L-Page) for L-Page 100, which has a length of 3,012 bytes. In this example, L-Page 100 is stored in S-Block 15, as shown at 1006. A buffer 1004 (such as a static random access memory (SRAM)) in or coupled to the controller 202 may store the S-Journal that contains the P2L information for S-Block 15 in which the L-Page 100 resides. What is shown in the buffer 1004 may actually reside in DRAM in some embodiments. The User S-Block Info 906, whose entries are indexed by S-Block, may comprise for each S-Block, among other information regarding the S-Block, the (e.g., exact or approximate) size of the free or used space in the S-Block. As shown in Fig. 10, the entry in the User S-Block Info 906 for S-Block 15 is shown. Fig. 10 shows an illustrative state of these constituent functional blocks before an update to L-Page 100 is processed by the controller 202.
[0051] As shown in Fig. 11 at 1102, an updated L-Page 100 is received, with a new length of 1,534 bytes. Responsive to the receipt of the updated L-Page 100, the L-Page 100 information may be fetched from the logical-to-physical address translation map 1002 and the length (3,012 bytes) of the (now obsolete) L-Page 100 may be extracted therefrom and used to correspondingly and precisely increase the tracked free space value of S-Block 15. In particular, the User S-Block Information 906 may be updated with data indicating that S-Block 15 now has 3,012 additional bytes of free space, now that the original data for L-Page 100 is stale (e.g., obsolete).
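The free-space bookkeeping step can be sketched as follows. This is illustrative only: s_block_of() is a placeholder for whatever derivation maps an E-Page address to its S-Block, and the table layout is an assumption, not the disclosed format.

```python
user_s_block_info = {15: {"free_bytes": 0, "pe_count": 7}}
l2p_map = {100: (4810, 800, 3012)}  # stale location of L-Page 100 in S-Block 15

def s_block_of(e_page_addr: int) -> int:
    return 15  # placeholder: real code derives the S-Block from the address

def credit_stale_space(lpn: int) -> None:
    e_page, _offset, old_length = l2p_map[lpn]  # fetch the obsolete length
    user_s_block_info[s_block_of(e_page)]["free_bytes"] += old_length

credit_stale_space(100)
assert user_s_block_info[15]["free_bytes"] == 3012  # S-Block 15 gains 3,012 bytes
```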
[0052] As shown in Fig. 12, the logical-to-physical address translation map 1002 may be updated to accommodate the updated L-Page. For example, the length information is now 1,534 bytes. Thereafter, the S-Journal in the buffer 1004 for the particular portion of S-Block 15 where the update occurs may now be updated with the new L-Page information, including length information, and the L-Page newly received from, for example, a compressor may be read into the buffer 1004. While the new L-Page is in the buffer, the entry for L-Page 100 in the logical-to-physical address translation map 1002 may temporarily reflect its location in the buffer, as shown by the arrow labeled “1”. Later, the updated L-Page 100 may be flushed to the Hot S-Block 1008, at the E-Page address specified by the newly-created entry in the now updated S-Journal still in the buffer 1004. The mapping table entry in the logical-to-physical address translation map 1002 is updated to reflect the physical E-Page address and offset of the L-Page’s final destination, as suggested by arrow “2”.
[0053] As shown in Fig. 13, at some later point in time such as, for example, after accumulating a sufficient number of new entries, the S-Journal in the volatile memory buffer 1004 may be written out to non-volatile memory, such as to the System S-Block 1010. The System S-Block 1010 may be a portion of the flash memory allocated by the controller firmware to store S-Journals. The saving of the S-Journal in non-volatile memory enables the later reconstruction of the logical-to-physical address translation map, as needed.
Updating the S-Journals and S-Journal Map
[0054] Figs. 14-17 are block diagrams illustrating aspects of a method of updating S-Journals and the S-Journal Map in the System Band, according to one embodiment. The figures illustrate additional details relating to the update mechanisms of the S-Journals, triggered by the same example update to L-Page 100. As shown therein, an S-Journal Map 1402 in volatile memory (e.g., synchronous dynamic random-access memory (SDRAM)) may contain an entry that points to the physical location, in non-volatile memory, of the S-Journal that contains the P2L mapping data associated with L-Page 100. In the illustrative example, the relevant S-Journal (i.e., the S-Journal for a particular portion of S-Block 15 storing L-Page 100) is located in System S-Block 3 (at 1408), starting at E-Page 42.
[0055] Thereafter, as shown in Fig. 15, L-Page 100 is updated, with a length of 1,534 bytes. The information (e.g., E-Page address, offset and length) regarding the current L-Page 100 may then be fetched from the logical-to-physical address translation map 1002. Thereafter, the address of the E-Page storing L-Page 100 may be used to extract the S-Journal Number used to locate the S-Journal entry within the S-Journal Map 1402. For example, as shown in Figs. 5 and 6, the S-Journal Number may comprise the 27 MSbs of the 32-bit E-Page address. This S-Journal Number may then be used to index into the S-Journal Map 1402 to obtain the location, which may then be used to identify the System S-Block that contains the S-Journal of interest. Thereafter, as shown in Fig. 15, the System S-Block Information 908 may be updated to reflect the fact that the journal entry for L-Page 100 in the S-Journal at S-Block 3 is now invalid, as the space previously occupied by that entry has now been de-allocated, in view of the recent update to L-Page 100. That de-allocated space is now free space, and must be accounted for. Alternatively, used space may be accounted for and the free space derived therefrom. The entry in the System S-Block Information 908 for S-Block 3 may, therefore, be incremented by the space taken by one S-Journal entry; namely, 7 bytes in the exemplary implementation being developed herein. This empty space may thereafter be taken into account during GC of the System Band.
[0056] As shown in Fig. 16, L-Page 100 may then be written to the buffer 1004, as shown at 1005, whereupon the logical-to-physical address translation map 1002 may be updated to point to the buffer 1004 (indicating the physical location where L-Page 100 is stored) and to store the length of the updated L-Page 100 (1,534 in the example being developed herein). It is to be noted that, at this point, System S-Block 3 still includes an S-Journal entry that points to S-Block 15 at reference 1006 as containing L-Page 100. However, the System S-Block Info for S-Block 3 has been updated so as to indicate that the space presently occupied by the now outdated S-Journal entry within the System S-Block 3 at reference 1408 is, in fact, free space.
[0057] Fig. 16 also shows a different S-Journal in the buffer. In one embodiment, the S-Journal in the buffer 1004 is for recording new L-Pages written to the Open Hot S-Block 7. In this example, the S-Journal in the buffer 1004 (which has not yet been flushed to the non-volatile memory) is updated with the L-Page number of updated L-Page 100 (now in the buffer also) and its length information, as indicated by the arrow at 1007 in Fig. 16. L-Page 100 may now be flushed from the buffer 1004 to the currently open Hot S-Block 7, as referenced at 1404 in Fig. 16. At that point, the S-Journal in the buffer is updated to reflect that updated L-Page 100 has been written to the non-volatile memory. Alternatively, the logical-to-physical address translation map 1002 and the S-Journal may be updated with the address of the updated L-Page 100 prior to the updated L-Page 100 being written to the non-volatile memory. In one embodiment, an updated entry in that S-Journal will contain the E-Page pointed to by the new L-Page 100’s entry in the logical-to-physical address translation map 1002 and additional information about L-Page 100, as previously shown in Fig. 6.
[0058] As shown in Fig. 17, after accumulating sufficient data, the S-Journal in the buffer 1004 may be written out to open System S-Block 1 (referenced at 1406). In this example, this S-Journal is written out to E-Page 19 of open System S-Block 1. The S-Journal Map 1402 now comprises one entry indicating that a portion of S-Block 15 is stored in System S-Block 3, E-Page 42. However, the entry within that S-Journal which pointed to the old location of L-Page 100 within S-Block 15 is no longer valid. The S-Journal Map 1402 also comprises one entry indicating that the S-Journal for S-Block 7 is stored in System S-Block 1, E-Page 19. That S-Journal includes a valid entry that points to open Hot S-Block 7 at 1404 as the location in non-volatile memory of the updated L-Page 100. In this manner, the S-Journals containing the P2L information are updated in the buffer 1004, and are eventually written out to non-volatile memory. In addition, the S-Journal Map 1402 may be suitably updated to point to such newly updated S-Journals. Moreover, the logical-to-physical address translation map 1002 may be suitably updated to point to the correct open Hot S-Block.
[0059] Advantageously, on a new write, the S-Journal construct as shown and described herein minimizes write overhead. According to one embodiment, a new write operation necessitates writing a new S-Journal entry in the non-volatile memory. Since there is only a small amount of S-Journal entry data per L-Page generated (e.g., 7 bytes as described herein), the WA due to system data writes is reduced as compared to conventional systems. S-Journal data is effectively a massive command history. Updated S-Journal data is pooled up and written out sequentially to the System Band. The S-Journal system shown and described herein is efficient, as the mapping information that is generated is the same as the data to be written, with no additional unchanged entries, as only that which is changed is written to non-volatile (e.g., flash) memory, as opposed to all or a portion of a logical-to-physical map.
[0060] On power up, all the S-Journal data in the System Band may be read and processed into DRAM to rebuild the logical-to-physical address translation map in volatile memory. According to one embodiment, this is done in the order in which the S-Blocks are allocated to the System Band. In one embodiment, hardware support is used to enable this large data load to occur quickly enough to meet power-up timing requirements (this may not be practical without hardware, since this is essentially generating the logical-to-physical address translation map from the command history). According to one embodiment, an S-Journal Map is also constructed at power-up to point to valid S-Journals stored in the System Band.
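A schematic, software-only replay loop for the rebuild is sketched below (the text notes that hardware support may be needed to do this fast enough in practice; the journal layout is the illustrative one used in the earlier sketches). Because journals are replayed in allocation order, a later entry for the same L-Page number simply overwrites the earlier one, leaving only the last-in-time mapping.

```python
def rebuild_l2p(s_journals_in_allocation_order):
    """Rebuild the L2P map by replaying the command history in the System Band."""
    l2p_map = {}
    for journal in s_journals_in_allocation_order:
        base = journal["number"] << 5  # first E-Page covered (27 MSbs of address)
        for lsbs, lpn, offset, length in journal["entries"]:
            l2p_map[lpn] = (base | lsbs, offset, length)  # newest entry wins
    return l2p_map
```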
Garbage Collection of User Data S-Blocks
[0061] Figs. 18-21 are block diagrams illustrating aspects of garbage collection, according to one embodiment. As shown therein, the data in the User S-Block Information 906 may be scanned to select the “best” S-Block to garbage collect. There are a number of criteria that may be evaluated to select which S-Block to garbage collect. For example, the best S-Block to garbage collect may be that S-Block having the largest amount of free space and the lowest Program Erase (PE) count. Alternatively, these and/or other criteria may be weighted to select the S-Block to be garbage collected. For purposes of example, the S-Block selected to be garbage collected in Figs. 18-21 is S-Block 15, which is referenced by information entry 15 within the User S-Block Information 906, showing some amount + 3,012 bytes of free space (the additional 3,012 bytes to account for the recently obsoleted L-Page 100). It is to be noted that the User S-Block Information 906 may comprise, among other items of information, a running count of the number of PE cycles undergone by each tracked S-Block, which may be evaluated in deciding which S-Block to garbage collect. As shown at 1006, S-Block 15 has a mix of valid data (hashed blocks) and invalid data (non-hashed blocks).
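A victim-selection sketch weighting free space against wear is shown below. The weights and the linear scoring form are arbitrary assumptions made for illustration; the disclosure only states that such criteria may be weighted.

```python
def pick_gc_victim(user_s_block_info, w_free=1.0, w_pe=-0.1):
    """Pick the S-Block with the best weighted combination of free space
    (more is better) and program/erase count (fewer cycles is better)."""
    def score(item):
        _s_block, info = item
        return w_free * info["free_bytes"] + w_pe * info["pe_count"]
    victim, _info = max(user_s_block_info.items(), key=score)
    return victim

# Example: S-Block 15 wins on free space despite a slightly higher PE count
blocks = {15: {"free_bytes": 3012, "pe_count": 7},
          16: {"free_bytes": 100, "pe_count": 2}}
assert pick_gc_victim(blocks) == 15
```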
[0062] Now that S-Block 15 has been selected for GC, the S-Journal Map (see 912 in Fig. 9B) may be consulted (e.g., indexed into by the S-Block number) to find the location in non-volatile memory (e.g., E-Page address) of the corresponding S-Journals for that S-Block. To illustrate an example, one S-Journal pointed to by the S-Journal Map 912 is located using the S-Journal Number (27 MSbs of the E-Page address), and read into the buffer 1004, as shown in Fig. 18. That is, the one or more E-Pages in the System S-Block 1804 pointed to by the S-Journal Map 912 are accessed and the S-Journal stored beginning at that location may be read into the buffer 1004. In one embodiment, the S-Journal Map also contains the length of the S-Journal, since an S-Journal may span one or more E-Pages. An S-Journal may be quite large and, therefore, may be read in pieces and processed as available.
[0063] Thereafter, each P2L entry in the S-Journal in the buffer 1004 may then be compared to the corresponding entry in the logical-to-physical address translation map 1802. For each entry in the S-Journal in the buffer 1004, it may be determined whether the physical address for the L-Page of that entry matches the physical address of the same L-Page in the corresponding entry in the logical-to-physical address translation map 1802. If the two match, that entry in the S-Journal is valid. Conversely, if the address for the L-Page in the S-Journal does not match the entry for that L-Page in the logical-to-physical address translation map, that entry in the S-Journal is not valid.
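The per-entry validity test amounts to a comparison against the live map, as in this sketch. For simplicity the sketch stores the full E-Page address per entry rather than the 5-LSb form, and comparing the offset as well as the E-Page address follows the refinement noted later for system-band GC; both are illustrative choices.

```python
def entry_is_valid(entry, l2p_map) -> bool:
    """An S-Journal entry is valid iff the live map still points to the
    physical location (E-Page, offset) that the entry records."""
    e_page, lpn, offset, _length = entry
    current = l2p_map.get(lpn)
    return current is not None and current[0] == e_page and current[1] == offset
```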
[0064] According to one embodiment, as valid entries are found in the S-Journal whose entries are being parsed and compared, the referenced L-Pages may be read out of S-Block 15 and written to the buffer 1004, as shown in Fig. 19. The same process may be used for other S-Journals covering S-Block 15 until the entire S-Block is processed. As also shown in Fig. 19 at reference 1006, S-Block 15 now contains only invalid data. This is because the data indicated as valid by entries in S-Journals for S-Block 15 have been preserved and will soon be moved to a new S-Block. In the illustrated example, as the entries in the example S-Journal of S-Block 15 in System S-Block 1804 point to such invalid data, that S-Journal is shown as being hashed, indicating that it is now stale. The logical-to-physical address translation map 1802 may then be updated, generating a new E-Page starting address for the valid data read into the buffer 1004. It is to be noted that during the update of the logical-to-physical address translation map, the map may be rechecked for valid entries and may be locked during the map update process to guarantee atomicity. The valid data will also necessitate that new S-Journal entries be generated in S-Journal 1005, which in one embodiment is for the Cold S-Block 1801.
[0065] In one embodiment, the valid data may then be written out to the Cold S-Block 1801 (the Hot S-Block being used for recently written host data, not garbage collected data), as shown in Fig. 20. At some later time (e.g., after a sufficient number of entries have been populated), S-Journal 1005 may be written out to the System S-Block 1804 in the System Band. S-Block 15 has now been garbage collected and User S-Block Info 906 now indicates that the entire S-Block 15 is free space. S-Block 15 may thereafter be erased, its PE count updated and made available for new writes. It is to be noted that an invalid S-Journal is still present in System S-Block 1804. The space in flash memory in the System Band occupied by this invalid S-Journal may be garbage collected at some later time.
Garbage Collection on System S-Blocks
[0066] Figs. 22-26 are block diagrams illustrating aspects of garbage collecting a system block, according to one embodiment. In the example being developed in Figs. 22-26, it is assumed that System S-Block 3, shown at reference 2208 in Fig. 22, has been picked for garbage collection. Once a System S-Block has been picked, all of the E-Pages (and by extension, all S-Journals contained therein) within the picked System S-Block may be read (sequentially or non-sequentially) into the buffer 1004.
[0067] As suggested in Fig. 23, the S-Journal numbers for one or more of the S-Journals read into the buffer 1004 may then be extracted from the headers of the S-Journals. Each such System S-Journal number may then be used to look up in the S-Journal Map 2202 to determine whether the corresponding S-Journal is still valid. According to one embodiment, invalid S-Journals are those S-Journals whose S-Journal number is not matched by a corresponding entry in the S-Journal Map 2202, which has the most updated information on where S-Journals are physically stored. For example, if the entry in the S-Journal Map 2202 for S-Journal Number “12345” points to an E-Page within System S-Block 120, the copy of S-Journal “12345” in S-Block 3 (the S-Block being garbage collected) is obsolete. Likewise, if the S-Journal Map entry instead points to the E-Page from S-Block 3 where S-Journal “12345” currently resides, S-Journal “12345” is still valid.
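Journal-level validity thus reduces to a lookup in the S-Journal Map, as this sketch shows. The map is modeled here as a dictionary from S-Journal number to the current physical location of that journal, an assumption made for illustration.

```python
def journal_copy_is_valid(journal_number: int, copy_location: int,
                          s_journal_map: dict) -> bool:
    """The copy being examined is valid only if the S-Journal Map still
    records this copy's location as the current home of that journal."""
    return s_journal_map.get(journal_number) == copy_location

# S-Journal 12345 now lives elsewhere, so the copy at location 0x3_002A is stale
assert not journal_copy_is_valid(12345, 0x3_002A, {12345: 0x78_0010})
```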
[0068] In one embodiment, a valid S-Journal being garbage collected may include a mix of valid and invalid entries, thus individual checking of the entries is needed. As shown in Fig. 24, each entry in each valid S-Journal 2402 may then be matched with a corresponding entry in the logical-to-physical address translation map 1802 in memory. That is, the E-Page address of the L-Page referenced by each S-Journal entry may be compared with the E-Page address of the L-Page specified in the logical-to-physical address translation map 1802. If the two match, that S-Journal entry is valid. According to one embodiment, it may be necessary to compare the E-Page address and the offset within the E-Page in the S-Journal with the E-Page address and the offset within the E-Page of the L-Page specified in the logical-to-physical address translation map 1802. Conversely, if the E-Page address (or E-Page address and offset) for the L-Page in the S-Journal does not match the E-Page address (or E-Page address and offset) in the entry for that L-Page in the address translation map 1802, that entry in the S-Journal is not valid.
[0069] According to one embodiment, as valid entries are identified in the S-Journal 2402 (whose entries are being parsed and compared), they may be copied to a new version 2502 of the S-Journal 2402, as shown in Fig. 25. The S-Journal 2502, according to one embodiment, may have the same S-Journal number as that of the S-Journal 2402. In this manner, each S-Journal loaded into the buffer 1004 from the S-Block picked for GC may first be determined to be valid or invalid at the journal level, and then, if determined valid, compacted to contain only valid entries. According to one embodiment, invalid S-Journal entries are simply not copied to the new version of the S-Journal 2502. Accordingly, it is expected that the new version 2502 of the S-Journal will be smaller (i.e., comprise fewer entries) than the older version 2402 thereof, presuming that the S-Journal 2402 had one or more obsolete entries. It is to be noted that if an S-Journal has all valid entries, the size of the new version thereof will remain the same.
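Compaction can then be sketched as filtering a valid journal down to its still-valid entries; the new version keeps the same S-Journal number, and dropping obsolete entries is what shrinks it. Entry and journal layouts are the illustrative ones used in the earlier sketches, not disclosed formats.

```python
def compact_journal(journal: dict, l2p_map: dict) -> dict:
    """Copy only still-valid entries into a new journal with the same number."""
    def valid(entry) -> bool:
        e_page, lpn, offset, _length = entry
        return l2p_map.get(lpn, (None, None, None))[:2] == (e_page, offset)
    return {"number": journal["number"],
            "entries": [e for e in journal["entries"] if valid(e)]}
```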
[0070] As shown in Fig. 26, the S-Journal 2502 (the new version of the S-Journal 2402, which is now invalid or obsolete) may then be written to the current open System S-Block which, in Fig. 26, is Open System S-Block 1, reference numeral 2206. Thereafter, the S-Journal Map 2202 may be updated with the new location of the S-Journal 2502 in Open System S-Block 1. At the end of this process, the S-Journals in System S-Block 3 (2208) have been garbage collected and System S-Block 3 may then be erased and made available for future programming.
[0071] While certain embodiments of the disclosure have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel methods, devices and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure. For example, those skilled in the art will appreciate that in various embodiments, the actual physical and logical structures may differ from those shown in the figures. Depending on the embodiment, certain steps described in the example above may be removed, others may be added. Also, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Although the present disclosure provides certain preferred embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.

Claims

CLAIMS:
1. A data storage device, comprising:
a plurality of non-volatile memory devices, each configured to store a plurality of physical pages, each of the plurality of physical pages being stored at a predetermined physical location within the plurality of non-volatile devices;
a controller coupled to the plurality of memory devices and configured to program data to and read data from the plurality of memory devices, the data being stored in a plurality of logical pages (L-Pages), each of the plurality of L-Pages being associated with an L-Page number that is configured to enable the controller to logically reference data stored in one or more of the physical pages; and
a volatile memory comprising a logical-to-physical address translation map configured to enable the controller to determine a physical location, within one or more physical pages, of the data stored in each L-Page;
wherein the controller is configured to maintain, in the plurality of non-volatile memory devices, a plurality of journals defining physical-to-logical correspondences, each of the plurality of journals being associated with a journal number, each journal covering a pre-determined range of physical pages and comprising a plurality of journal entries, each entry being configured to associate one or more physical pages to each L-Page, wherein the controller is configured to read the plurality of journals upon startup and rebuild the address translation map stored in the volatile memory from the read plurality of journals.
2. The data storage device of claim 1, wherein the controller is further configured to, upon an update to one of the plurality of L-Pages, create a new entry in one of the plurality of journals.
3. The data storage device of claim 2, wherein write operations to the non-volatile memory devices to maintain a power-safe copy of translation data are configured to be triggered by newly created journal entries instead of saving at least portions of the translation map, such that write amplification is reduced.
4. The data storage device of claim 2, wherein the new entry indicates a physical location, within a physical page, of a start of the updated L-Page.
5. The data storage device of claim 2, wherein the controller is further configured to update a free space accounting by an amount corresponding to a length of the L-Page prior to being updated.
6. The data storage device of claim 1, wherein at least one of the plurality of L-Pages is unaligned with physical page boundaries.
7. The data storage device of claim 1, wherein the physical pages are implemented as Error Correcting Code (ECC) pages (E-Pages) and wherein the plurality of devices comprises a plurality of flash memory blocks, each flash memory block comprising a plurality of flash memory pages (F-Pages), each of the F-Pages comprising a plurality of the E-Pages, each of the plurality of E-Pages being stored at a predetermined physical location within the plurality of devices.
8. The data storage device of claim 2, wherein the controller is further configured to update the translation map with the physical location, within one or more physical pages, of the data referenced by the L-Page number of the updated L-Page.
9. The data storage device of claim 1, further comprising a plurality of super blocks (S-Blocks), each comprising one or more flash memory blocks per device and wherein each of the plurality of journal entries is configured to associate one or more of the physical pages of the S-Block to each L-Page.
10. The data storage device of claim 9, wherein the controller is further configured to garbage collect by at least:
selecting one of the plurality of S-Blocks to garbage collect;
comparing each entry in a journal for the selected S-Block to entries in the translation map and designating entries that match as valid and entries that do not match as invalid;
reading the L-Pages corresponding to the valid entries;
writing the read L-Pages to respective physical addresses within the plurality of non-volatile memory devices;
updating the translation map for the valid entries to point to the respective physical addresses; and
generating new journal entries for the entries for which the translation map was updated.
11. The data storage device of claim 9, wherein the controller is further configured to garbage collect by at least:
selecting one of the plurality of S-Blocks to garbage collect;
reading the physical pages of the selected S-Block;
comparing L-Page numbers in the read physical pages of the selected S-Block to entries in the translation map and designating entries that match as valid and entries that do not match as invalid;
writing the L-Pages corresponding to the valid entries to respective physical addresses within the plurality of non-volatile memory devices;
updating the translation map for the valid entries to point to the respective physical addresses; and
generating new journal entries for the entries for which the translation map was updated.
12. The data storage device of claim 10, wherein selecting comprises weighing free space and program erase (PE) count in determining which S-Block to select.
13. The data storage device of claim 1, wherein each journal number comprises a predetermined number of most significant bits of an address of a first physical page covered by the journal.
14. The data storage device of claim 1, wherein each of the plurality of journal entries comprises:
an L-Page number, and
a physical address location.
15. The data storage device of claim 1, wherein each of the plurality of journal entries comprises:
an L-Page number;
a physical address location of a physical page, and
an L-Page size.
16. The data storage device of claim 1, wherein each of the plurality of journal entries comprises:
a predetermined number of least significant bits of an address of a physical page that includes a start of an L-Page;
an address;
an L-Page size; and
an offset into the physical page.
17. The data storage device of claim 1, wherein the plurality of L-Pages are configured to be compressed and to vary in size, and wherein the plurality of journal numbers are configured to reference a greater number of L-Pages of smaller size or a lesser number of L-Pages of greater size.
18. The data storage device of claim 1, wherein the controller is further configured to read the plurality of journals upon startup in a predetermined sequential order and rebuild the translation map stored in volatile memory based upon the sequentially read plurality of journals.
19. The data storage device of claim 1, wherein the controller is further configured to build a journal map based on the plurality of journals.
20. The data storage device of claim 1, wherein the controller is further configured to:
read the plurality of journals upon startup in a predetermined sequential order;
build a map of the journals stored in the non-volatile memory devices based upon the sequentially read plurality of journals, and
store the built map of the journals in the volatile memory.
21. The data storage device of claim 1, wherein, among journal entries associated with a given L-Page, only a last-in-time updated journal entry associated with the given L-Page is valid.
22. The data storage device of claim 1, wherein the controller is further configured to maintain a system journal map of the plurality of journals in the volatile memory, each entry in the system journal map pointing to a location in the non-volatile memory devices where one of the plurality of journals is stored.
23. A method of controlling a data storage device comprising a volatile memory and a plurality of non-volatile memory devices, each of the plurality of non-volatile devices being configured to store a plurality of physical pages, each of the plurality of physical pages being stored at a predetermined physical location within the plurality of non-volatile devices, the method comprising:
storing data in a plurality of logical pages (L-Pages), each of the plurality of L-Pages being associated with an L-Page number that is configured to enable the controller to logically reference data stored in one or more of the physical pages;
maintaining a logical-to-physical address translation map in the volatile memory, the translation map being configured to enable determination of a physical location, within one or more of the physical pages, of the data stored in each L-Page;
maintaining a plurality of journals defining physical-to-logical correspondences in the plurality of non-volatile memory devices, each of the plurality of journals being associated with a journal number, each journal covering a pre-determined range of physical pages and each comprising a plurality of journal entries, each entry being configured to associate one or more physical pages to each L-Page; and
reading the plurality of journals upon startup and rebuilding the translation map stored in volatile memory based upon the read entries in the plurality of journals.
24. The method of claim 23, further comprising creating a new entry in one of the plurality of journals upon an update to one of the plurality of L-Pages.
25. The method of claim 24, further comprising triggering write operations to the non-volatile memory devices to maintain a power-safe copy of translation data based on newly created journal entries instead of saving at least portions of the translation map, such that write amplification is reduced.
26. The method of claim 24, wherein the new entry indicates a physical location, within a physical page, of a start of the updated L-Page.
27. The method of claim 24, further comprising updating a free space accounting by an amount corresponding to a length of the L-Page prior to being updated.
28. The method of claim 23, wherein storing comprises storing at least one of the plurality of L-Pages at a location that is unaligned with physical page boundaries.
29. The method of claim 23, wherein the physical pages are implemented as Error Correcting Code (ECC) pages (E-Pages) and wherein the plurality of devices comprises a plurality of flash memory blocks, each flash memory block comprising a plurality of flash memory pages (F-Pages), each of the F-Pages comprising a plurality of the E-Pages, each of the plurality of E-Pages being stored at a predetermined physical location within the plurality of devices.
30. The method of claim 23, further comprising updating the translation map with the physical location, within one or more physical pages, of the data referenced by the L-Page number of the updated L-Page.
31. The method of claim 23, wherein the plurality of devices comprises a plurality of super blocks (S-Blocks), each comprising one or more flash memory blocks per device and wherein each of the plurality of journal entries is configured to associate one or more of the physical pages of the S-Block to each L-Page.
32. The method of claim 31, further comprising:
selecting an S-Block to garbage collect;
comparing each entry in a journal for the selected S-Block to entries in the translation map and designating entries that match as valid and entries that do not match as invalid;
reading the L-Pages corresponding to the valid entries;
writing the read L-Pages to respective physical addresses within the plurality of non-volatile memory devices;
updating the translation map for the valid entries to point to the respective physical addresses; and
generating new journal entries for the entries for which the translation map was updated.
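(Editorial illustration, not part of the claims.) The journal-driven garbage collection of claim 32 might be sketched as below, under the assumption, consistent with claims 21 and 43, that a journal entry is valid exactly when the translation map still points at the physical address it records. The flash I/O helpers are stubs and all identifiers are hypothetical:

#include <stdbool.h>
#include <stdint.h>

#define NUM_L_PAGES 8 /* toy capacity; assumption */

typedef struct {
    uint32_t l_page_number;
    uint64_t physical_address;
} journal_entry_t;

typedef struct {
    uint32_t entry_count;
    const journal_entry_t *entries;
} journal_t;

static uint64_t l2p_map[NUM_L_PAGES]; /* volatile translation map */

/* Stubs standing in for real flash I/O and allocation. */
static uint64_t allocate_destination(void) { static uint64_t next = 0x100000; return next++; }
static void relocate_l_page(uint64_t src, uint64_t dst) { (void)src; (void)dst; }
static void journal_append(uint32_t l_page, uint64_t addr) { (void)l_page; (void)addr; }

/* An entry is valid iff the translation map still points at the address
   it records; any later update superseded it (cf. claims 21 and 43). */
static bool entry_is_valid(const journal_entry_t *je)
{
    return l2p_map[je->l_page_number] == je->physical_address;
}

void garbage_collect_s_block(const journal_t *jnl)
{
    for (uint32_t i = 0; i < jnl->entry_count; i++) {
        const journal_entry_t *je = &jnl->entries[i];
        if (!entry_is_valid(je))
            continue;                               /* stale: leave behind */
        uint64_t dst = allocate_destination();
        relocate_l_page(je->physical_address, dst); /* read then rewrite */
        l2p_map[je->l_page_number] = dst;           /* repoint the map */
        journal_append(je->l_page_number, dst);     /* record new location */
    }
}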
33. The method of claim 31, further comprising:
selecting one of the plurality of S-Blocks to garbage collect;
reading the physical pages of the selected S-Block;
comparing L-Page numbers in the read physical pages of the selected S-Block to entries in the translation map and designating entries that match as valid and entries that do not match as invalid;
writing the L-Pages corresponding to the valid entries to respective physical addresses within the plurality of non-volatile memory devices;
updating the translation map for the valid entries to point to the respective physical addresses; and
generating new journal entries for the entries for which the translation map was updated.
34. The method of claim 32, wherein selecting comprises weighing free space and program erase (PE) count in determining which S-Block to select.
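(Editorial illustration, not part of the claims.) Claim 34 leaves the weighting open; one hedged reading is a linear score that favors reclaimable space and penalizes accumulated wear, as in this C sketch, where the weight of 4 is an arbitrary illustrative choice, not a value from the patent:

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t free_space; /* reclaimable bytes in the S-Block */
    uint32_t pe_count;   /* program/erase cycles already consumed */
} s_block_stats_t;

/* Return the index of the S-Block whose weighted score best balances
   the space garbage collection would reclaim against wear incurred. */
size_t select_s_block(const s_block_stats_t *s, size_t n)
{
    size_t best = 0;
    int64_t best_score = INT64_MIN;
    for (size_t i = 0; i < n; i++) {
        int64_t score = 4 * (int64_t)s[i].free_space - (int64_t)s[i].pe_count;
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}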
35. The method of claim 34, wherein the journal number comprises a predetermined number of most significant bits of an address of a first physical page covered by the journal.
36. The method of claim 23, wherein each of the plurality of journal entries comprises:
an L-Page number; and
a physical address location.
37. The method of claim 23, wherein each of the plurality of journal entries comprises:
an L-Page number;
a physical address location of a physical page; and
an L-Page size.
38. The method of claim 23, wherein each of the plurality of journal entries comprises:
a predetermined number of least significant bits of an address of a physical page that includes a start of an L-Page;
an address;
an L-Page size; and
an offset into the physical page.
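(Editorial illustration, not part of the claims.) Claims 35 through 38 together suggest a compact entry encoding: because the journal number already carries the most significant bits of the covered physical-page range, each entry need only store the least significant bits of the page address, the offset into that page, and the L-Page size. A hedged packing sketch in C, with all field widths assumed and the claim's separate "address" element not modeled:

#include <assert.h>
#include <stdint.h>

#define LSB_BITS    10 /* assumed widths; 10 + 11 + 11 bits pack into 32 */
#define OFFSET_BITS 11
#define SIZE_BITS   11

/* Pack the per-entry fields: LSBs of the physical page holding the
   start of the L-Page, the offset into that page, and the L-Page size. */
uint32_t pack_entry(uint32_t page_lsbs, uint32_t offset, uint32_t size)
{
    assert(page_lsbs < (1u << LSB_BITS));
    assert(offset < (1u << OFFSET_BITS));
    assert(size < (1u << SIZE_BITS));
    return page_lsbs | (offset << LSB_BITS) | (size << (LSB_BITS + OFFSET_BITS));
}

/* On replay, the journal number supplies the most significant bits of the
   covered range (cf. claim 35), so the full page address is recoverable
   from the journal number plus the entry's stored LSBs. */
uint64_t full_page_address(uint32_t journal_number, uint32_t page_lsbs)
{
    return ((uint64_t)journal_number << LSB_BITS) | page_lsbs;
}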
39. The method of claim 23, further comprising selectively compressing the plurality of L-Pages such that the plurality of L-Pages vary in size and wherein the plurality of journals are configured to reference a greater number of L-Pages of smaller size or a lesser number of L-Pages of greater size.
40. The method of claim 23, wherein reading the plurality of journals upon startup and rebuilding the translation map comprises reading the plurality of journals upon startup in a predetermined sequential order and rebuilding the translation map stored in volatile memory based upon the read plurality of journals.
41. The method of claim 23, wherein the controller is further configured to build a journal map based on the plurality of journals.
42. The method of claim 23, further comprising:
reading the plurality of journals upon startup in a predetermined sequential order; building a map of the journals stored in the non-volatile memory devices based upon the sequentially read plurality of journals; and
storing the built map of the journals in the volatile memory.
43. The method of claim 23, wherein, among journal entries associated with a given L-Page, only a last-in-time updated journal entry associated with the given L-Page is valid.
44. The method of claim 23, further comprising maintaining a system journal map of the plurality of journals in the volatile memory, each entry in the system journal map pointing to a location in the non-volatile memory devices where one of the plurality of journals is stored.
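(Editorial illustration, not part of the claims.) Claims 42 and 44 describe a volatile system journal map keyed by journal number; a minimal C sketch, assuming a flat array and 64-bit non-volatile addresses, both assumptions for illustration:

#include <stdint.h>

#define NUM_JOURNALS 8 /* toy figure; assumption */

/* Volatile map from journal number to the non-volatile location where
   that journal is stored, so a journal can be located without scanning. */
static uint64_t system_journal_map[NUM_JOURNALS];

void record_journal_location(uint32_t journal_number, uint64_t nvm_address)
{
    system_journal_map[journal_number] = nvm_address;
}

uint64_t journal_location(uint32_t journal_number)
{
    return system_journal_map[journal_number];
}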
45. A data storage device controller, comprising:
a processor configured to couple to a volatile memory and to a plurality of memory devices, each of the plurality of memory devices being configured to store a plurality of physical pages at a predetermined physical location within the plurality of non-volatile devices, the processor being further configured to program data to and read data from the plurality of memory devices, the data being stored in a plurality of logical pages (L-Pages), each of the plurality of L-Pages being associated with an L-Page number that is configured to enable the processor to logically reference data stored in one or more of the physical pages, the volatile memory being configured to store a logical-to-physical address translation map configured to enable the processor to determine a physical location, within one or more physical pages, of the data stored in each L-Page,
wherein the processor is configured to maintain, in the plurality of non-volatile memory devices, a plurality of journals defining physical-to-logical correspondences, each of the plurality of journals being associated with a journal number, each journal covering a pre-determined range of physical pages and comprising a plurality of journal entries, each entry being configured to associate one or more physical pages to each L-Page, wherein the processor is further configured to read the plurality of journals upon startup and rebuild the address translation map stored in the volatile memory from the read plurality of journals.
PCT/US2013/062723 2012-10-05 2013-09-30 Methods, devices and systems for physical-to-logical mapping in solid state drives WO2014055445A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP13844022.7A EP2904496A4 (en) 2012-10-05 2013-09-30 Methods, devices and systems for physical-to-logical mapping in solid state drives
AU2013327582A AU2013327582B2 (en) 2012-10-05 2013-09-30 Methods, devices and systems for physical-to-logical mapping in solid state drives
KR1020157011769A KR101911589B1 (en) 2012-10-05 2013-09-30 Methods, devices and systems for physical-to-logical mapping in solid state drives
CN201380063439.2A CN105027090B (en) 2012-10-05 2013-09-30 Method, apparatus and system for the physics in solid-state drive to logical mappings
JP2015535725A JP6210570B2 (en) 2012-10-05 2013-09-30 Method, apparatus and system for physical logical mapping in solid state drives
HK16104405.3A HK1216443A1 (en) 2012-10-05 2016-04-18 Methods, devices and systems for physical-to-logical mapping in solid state drives

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/645,822 2012-10-05
US13/645,822 US9268682B2 (en) 2012-10-05 2012-10-05 Methods, devices and systems for physical-to-logical mapping in solid state drives

Publications (1)

Publication Number Publication Date
WO2014055445A1 true WO2014055445A1 (en) 2014-04-10

Family

ID=50433685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/062723 WO2014055445A1 (en) 2012-10-05 2013-09-30 Methods, devices and systems for physical-to-logical mapping in solid state drives

Country Status (8)

Country Link
US (1) US9268682B2 (en)
EP (1) EP2904496A4 (en)
JP (1) JP6210570B2 (en)
KR (1) KR101911589B1 (en)
CN (1) CN105027090B (en)
AU (1) AU2013327582B2 (en)
HK (1) HK1216443A1 (en)
WO (1) WO2014055445A1 (en)

Families Citing this family (171)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9176859B2 (en) 2009-01-07 2015-11-03 Siliconsystems, Inc. Systems and methods for improving the performance of non-volatile memory operations
US10079048B2 (en) 2009-03-24 2018-09-18 Western Digital Technologies, Inc. Adjusting access of non-volatile semiconductor memory based on access time
US9753847B2 (en) 2009-10-27 2017-09-05 Western Digital Technologies, Inc. Non-volatile semiconductor memory segregating sequential, random, and system data to reduce garbage collection for page based mapping
US8782327B1 (en) 2010-05-11 2014-07-15 Western Digital Technologies, Inc. System and method for managing execution of internal commands and host commands in a solid-state memory
US9026716B2 (en) 2010-05-12 2015-05-05 Western Digital Technologies, Inc. System and method for managing garbage collection in solid-state memory
US8959284B1 (en) 2010-06-28 2015-02-17 Western Digital Technologies, Inc. Disk drive steering write data to write cache based on workload
US9058280B1 (en) 2010-08-13 2015-06-16 Western Digital Technologies, Inc. Hybrid drive migrating data from disk to non-volatile semiconductor memory based on accumulated access time
US8769190B1 (en) 2010-09-15 2014-07-01 Western Digital Technologies, Inc. System and method for reducing contentions in solid-state memory access
US8788779B1 (en) 2010-09-17 2014-07-22 Western Digital Technologies, Inc. Non-volatile storage subsystem with energy-based performance throttling
US9164886B1 (en) 2010-09-21 2015-10-20 Western Digital Technologies, Inc. System and method for multistage processing in a memory storage subsystem
US9021192B1 (en) 2010-09-21 2015-04-28 Western Digital Technologies, Inc. System and method for enhancing processing of memory access requests
US9069475B1 (en) 2010-10-26 2015-06-30 Western Digital Technologies, Inc. Hybrid drive selectively spinning up disk when powered on
US8700950B1 (en) 2011-02-11 2014-04-15 Western Digital Technologies, Inc. System and method for data error recovery in a solid state subsystem
US8700951B1 (en) 2011-03-09 2014-04-15 Western Digital Technologies, Inc. System and method for improving a data redundancy scheme in a solid state subsystem with additional metadata
US8700834B2 (en) 2011-09-06 2014-04-15 Western Digital Technologies, Inc. Systems and methods for an enhanced controller architecture in data storage systems
US9195530B1 (en) 2011-09-06 2015-11-24 Western Digital Technologies, Inc. Systems and methods for improved data management in data storage systems
US8707104B1 (en) 2011-09-06 2014-04-22 Western Digital Technologies, Inc. Systems and methods for error injection in data storage systems
US8713357B1 (en) 2011-09-06 2014-04-29 Western Digital Technologies, Inc. Systems and methods for detailed error reporting in data storage systems
US9268701B1 (en) 2011-11-21 2016-02-23 Western Digital Technologies, Inc. Caching of data in data storage systems by managing the size of read and write cache based on a measurement of cache reliability
US8959416B1 (en) 2011-12-16 2015-02-17 Western Digital Technologies, Inc. Memory defect management using signature identification
US9348741B1 (en) 2011-12-19 2016-05-24 Western Digital Technologies, Inc. Systems and methods for handling write data access requests in data storage devices
US9053008B1 (en) 2012-03-26 2015-06-09 Western Digital Technologies, Inc. Systems and methods for providing inline parameter service in data storage devices
US9977612B1 (en) 2012-05-11 2018-05-22 Western Digital Technologies, Inc. System data management using garbage collection and logs
US9170932B1 (en) 2012-05-22 2015-10-27 Western Digital Technologies, Inc. System data storage mechanism providing coherency and segmented data loading
US8954653B1 (en) 2012-06-26 2015-02-10 Western Digital Technologies, Inc. Mechanisms for efficient management of system data in data storage systems
US8924832B1 (en) 2012-06-26 2014-12-30 Western Digital Technologies, Inc. Efficient error handling mechanisms in data storage systems
US8966343B2 (en) 2012-08-21 2015-02-24 Western Digital Technologies, Inc. Solid-state drive retention monitor using reference blocks
US9507523B1 (en) 2012-10-12 2016-11-29 Western Digital Technologies, Inc. Methods, devices and systems for variable size logical page management in a solid state drive
US9489296B1 (en) 2012-10-17 2016-11-08 Western Digital Technologies, Inc. Methods, devices and systems for hardware-based garbage collection in solid state drives
US8972826B2 (en) 2012-10-24 2015-03-03 Western Digital Technologies, Inc. Adaptive error correction codes for data storage systems
US9177638B2 (en) 2012-11-13 2015-11-03 Western Digital Technologies, Inc. Methods and devices for avoiding lower page corruption in data storage devices
US8954694B2 (en) 2012-11-15 2015-02-10 Western Digital Technologies, Inc. Methods, data storage devices and systems for fragmented firmware table rebuild in a solid state drive
US9021339B2 (en) 2012-11-29 2015-04-28 Western Digital Technologies, Inc. Data reliability schemes for data storage systems
US9059736B2 (en) 2012-12-03 2015-06-16 Western Digital Technologies, Inc. Methods, solid state drive controllers and data storage devices having a runtime variable raid protection scheme
US9032271B2 (en) 2012-12-07 2015-05-12 Western Digital Technologies, Inc. System and method for lower page data recovery in a solid state drive
US8954655B2 (en) 2013-01-14 2015-02-10 Western Digital Technologies, Inc. Systems and methods of configuring a mode of operation in a solid-state memory
US8972655B2 (en) 2013-01-21 2015-03-03 Western Digital Technologies, Inc. Initialization of a storage device
US9495288B2 (en) * 2013-01-22 2016-11-15 Seagate Technology Llc Variable-size flash translation layer
US9274966B1 (en) 2013-02-20 2016-03-01 Western Digital Technologies, Inc. Dynamically throttling host commands to disk drives
US9454474B2 (en) 2013-03-05 2016-09-27 Western Digital Technologies, Inc. Methods, devices and systems for two stage power-on map rebuild with free space accounting in a solid state drive
US8924824B1 (en) 2013-03-12 2014-12-30 Western Digital Technologies, Inc. Soft-decision input generation for data storage systems
US8990668B2 (en) 2013-03-14 2015-03-24 Western Digital Technologies, Inc. Decoding data stored in solid-state memory
US9218279B2 (en) 2013-03-15 2015-12-22 Western Digital Technologies, Inc. Atomic write command support in a solid state drive
US9448738B2 (en) 2013-03-15 2016-09-20 Western Digital Technologies, Inc. Compression and formatting of data for data storage systems
US9335950B2 (en) 2013-03-15 2016-05-10 Western Digital Technologies, Inc. Multiple stream compression and formatting of data for data storage systems
US9338927B2 (en) 2013-05-02 2016-05-10 Western Digital Technologies, Inc. Thermal interface material pad and method of forming the same
US9195293B1 (en) 2013-05-03 2015-11-24 Western Digital Technologies, Inc. User controlled data storage device power and performance settings
US10417123B1 (en) 2013-05-16 2019-09-17 Western Digital Technologies, Inc. Systems and methods for improving garbage collection and wear leveling performance in data storage systems
US9081700B2 (en) 2013-05-16 2015-07-14 Western Digital Technologies, Inc. High performance read-modify-write system providing line-rate merging of dataframe segments in hardware
US9170938B1 (en) 2013-05-17 2015-10-27 Western Digital Technologies, Inc. Method and system for atomically writing scattered information in a solid state storage device
US9280200B1 (en) 2013-05-20 2016-03-08 Western Digital Technologies, Inc. Automatic peak current throttle of tiered storage elements
US9740248B2 (en) 2013-06-07 2017-08-22 Western Digital Technologies, Inc. Component placement within a solid state drive
US9274978B2 (en) 2013-06-10 2016-03-01 Western Digital Technologies, Inc. Migration of encrypted data for data storage systems
US9436630B2 (en) 2013-06-11 2016-09-06 Western Digital Technologies, Inc. Using dual phys to support multiple PCIe link widths
US9830257B1 (en) 2013-06-12 2017-11-28 Western Digital Technologies, Inc. Fast saving of data during power interruption in data storage systems
US9665501B1 (en) 2013-06-18 2017-05-30 Western Digital Technologies, Inc. Self-encrypting data storage device supporting object-level encryption
US9304560B2 (en) 2013-06-19 2016-04-05 Western Digital Technologies, Inc. Backup power for reducing host current transients
GB2515539A (en) 2013-06-27 2014-12-31 Samsung Electronics Co Ltd Data structure for physical layer encapsulation
US9208101B2 (en) 2013-06-26 2015-12-08 Western Digital Technologies, Inc. Virtual NAND capacity extension in a hybrid drive
US9583153B1 (en) 2013-06-28 2017-02-28 Western Digital Technologies, Inc. Memory card placement within a solid state drive
US9042197B2 (en) 2013-07-23 2015-05-26 Western Digital Technologies, Inc. Power fail protection and recovery using low power states in a data storage device/system
US9141176B1 (en) 2013-07-29 2015-09-22 Western Digital Technologies, Inc. Power management for data storage device
US9442668B1 (en) 2013-08-29 2016-09-13 Western Digital Technologies, Inc. Adaptive power management control with performance feedback
US9263136B1 (en) 2013-09-04 2016-02-16 Western Digital Technologies, Inc. Data retention flags in solid-state drives
US9304709B2 (en) 2013-09-06 2016-04-05 Western Digital Technologies, Inc. High performance system providing selective merging of dataframe segments in hardware
US9007841B1 (en) 2013-10-24 2015-04-14 Western Digital Technologies, Inc. Programming scheme for improved voltage distribution in solid-state memory
US9330143B2 (en) 2013-10-24 2016-05-03 Western Digital Technologies, Inc. Data storage device supporting accelerated database operations
US10444998B1 (en) 2013-10-24 2019-10-15 Western Digital Technologies, Inc. Data storage device providing data maintenance services
US8917471B1 (en) 2013-10-29 2014-12-23 Western Digital Technologies, Inc. Power management for data storage device
US9323467B2 (en) 2013-10-29 2016-04-26 Western Digital Technologies, Inc. Data storage device startup
US9286176B1 (en) 2013-11-08 2016-03-15 Western Digital Technologies, Inc. Selective skipping of blocks in an SSD
US9270296B1 (en) 2013-11-13 2016-02-23 Western Digital Technologies, Inc. Method and system for soft decoding through single read
US9529710B1 (en) 2013-12-06 2016-12-27 Western Digital Technologies, Inc. Interleaved channels in a solid-state drive
US10140067B1 (en) 2013-12-19 2018-11-27 Western Digital Technologies, Inc. Data management for data storage device with multiple types of non-volatile memory media
US9684568B2 (en) * 2013-12-26 2017-06-20 Silicon Motion, Inc. Data storage device and flash memory control method
US9036283B1 (en) 2014-01-22 2015-05-19 Western Digital Technologies, Inc. Data storage device with selective write to a first storage media or a second storage media
US9337864B1 (en) 2014-01-29 2016-05-10 Western Digital Technologies, Inc. Non-binary LDPC decoder using binary subgroup processing
US9250994B1 (en) 2014-02-05 2016-02-02 Western Digital Technologies, Inc. Non-binary low-density parity check (LDPC) decoding using trellis maximization
US9384088B1 (en) 2014-02-24 2016-07-05 Western Digital Technologies, Inc. Double writing map table entries in a data storage system to guard against silent corruption
US9354955B1 (en) 2014-03-19 2016-05-31 Western Digital Technologies, Inc. Partial garbage collection for fast error handling and optimized garbage collection for the invisible band
US9268487B2 (en) 2014-03-24 2016-02-23 Western Digital Technologies, Inc. Method and apparatus for restricting writes to solid state memory when an end-of life condition is reached
US9348520B2 (en) 2014-03-24 2016-05-24 Western Digital Technologies, Inc. Lifetime extension of non-volatile semiconductor memory for data storage device
US9448742B2 (en) 2014-03-27 2016-09-20 Western Digital Technologies, Inc. Communication between a host and a secondary storage device
TWI522804B (en) 2014-04-23 2016-02-21 威盛電子股份有限公司 Flash memory controller and data storage device and flash memory control method
US9564212B2 (en) 2014-05-06 2017-02-07 Western Digital Technologies, Inc. Solid-state memory corruption mitigation
US9690696B1 (en) 2014-05-14 2017-06-27 Western Digital Technologies, Inc. Lifetime extension of memory for data storage system
US9472222B2 (en) 2014-05-16 2016-10-18 Western Digital Technologies, Inc. Vibration mitigation for a data storage device
US9275741B1 (en) 2014-09-10 2016-03-01 Western Digital Technologies, Inc. Temperature compensation management in solid-state memory
US9418699B1 (en) 2014-10-09 2016-08-16 Western Digital Technologies, Inc. Management of sequentially written data
US9405356B1 (en) 2014-10-21 2016-08-02 Western Digital Technologies, Inc. Temperature compensation in data storage device
US9823859B2 (en) 2014-11-06 2017-11-21 Western Digital Technologies, Inc. Mechanical shock mitigation for data storage
US10915256B2 (en) * 2015-02-25 2021-02-09 SK Hynix Inc. Efficient mapping scheme with deterministic power transition times for flash storage devices
US9857995B1 (en) 2015-03-09 2018-01-02 Western Digital Technologies, Inc. Data storage device and method providing non-volatile memory buffer for real-time primary non-volatile memory protection
US9785563B1 (en) 2015-08-13 2017-10-10 Western Digital Technologies, Inc. Read command processing for data storage system based on previous writes
US9668337B2 (en) 2015-09-08 2017-05-30 Western Digital Technologies, Inc. Temperature management in data storage devices
US9727261B2 (en) 2015-09-24 2017-08-08 Western Digital Technologies, Inc. Weighted programming patterns in solid-state data storage systems
US9836232B1 (en) 2015-09-30 2017-12-05 Western Digital Technologies, Inc. Data storage device and method for using secondary non-volatile memory for temporary metadata storage
US10013174B2 (en) 2015-09-30 2018-07-03 Western Digital Technologies, Inc. Mapping system selection for data storage device
US9620226B1 (en) 2015-10-30 2017-04-11 Western Digital Technologies, Inc. Data retention charge loss and read disturb compensation in solid-state data storage systems
CN106843744B (en) * 2015-12-03 2020-05-26 群联电子股份有限公司 Data programming method and memory storage device
US10126981B1 (en) 2015-12-14 2018-11-13 Western Digital Technologies, Inc. Tiered storage using storage class memory
US9940034B2 (en) 2016-01-25 2018-04-10 International Business Machines Corporation Reducing read access latency by straddling pages across non-volatile memory channels
US10162561B2 (en) 2016-03-21 2018-12-25 Apple Inc. Managing backup of logical-to-physical translation information to control boot-time and write amplification
US10157004B2 (en) * 2016-04-14 2018-12-18 Sandisk Technologies Llc Storage system and method for recovering data corrupted in a host memory buffer
CN107544913B (en) * 2016-06-29 2021-09-28 北京忆恒创源科技股份有限公司 FTL table rapid reconstruction method and device
CN113590504B (en) * 2016-06-29 2024-09-03 北京忆恒创源科技股份有限公司 Solid state disk for storing log frames and log entries
CN107544866B (en) * 2016-06-29 2021-01-05 北京忆恒创源科技有限公司 Log updating method and device
CN106201778B (en) * 2016-06-30 2019-06-25 联想(北京)有限公司 Information processing method and storage equipment
US9946463B2 (en) * 2016-07-12 2018-04-17 Western Digital Technologies, Inc. Compression of indirection tables
CN106155919B (en) * 2016-07-26 2019-06-11 深圳市瑞耐斯技术有限公司 A kind of control method and control system of 3D flash memory
KR20180019419A (en) 2016-08-16 2018-02-26 삼성전자주식회사 Memory Controller, Memory System and Operating Method thereof
US10387303B2 (en) 2016-08-16 2019-08-20 Western Digital Technologies, Inc. Non-volatile storage system with compute engine to accelerate big data applications
CN107870870B (en) * 2016-09-28 2021-12-14 北京忆芯科技有限公司 Accessing memory space beyond address bus width
CN107870867B (en) * 2016-09-28 2021-12-14 北京忆芯科技有限公司 Method and device for 32-bit CPU to access memory space larger than 4GB
US10489289B1 (en) 2016-09-30 2019-11-26 Amazon Technologies, Inc. Physical media aware spacially coupled journaling and trim
US10747678B2 (en) 2016-10-27 2020-08-18 Seagate Technology Llc Storage tier with compressed forward map
US10459644B2 (en) 2016-10-28 2019-10-29 Western Digital Techologies, Inc. Non-volatile storage system with integrated compute engine and optimized use of local fast memory
KR20180051706A (en) 2016-11-07 2018-05-17 삼성전자주식회사 Memory system performing error correction of address mapping table
TWI591533B (en) * 2016-11-25 2017-07-11 慧榮科技股份有限公司 Data storage method and data recovery method for data storage device, and data storage device using the same methods
US10613973B1 (en) 2016-12-28 2020-04-07 Amazon Technologies, Inc. Garbage collection in solid state drives
US10565123B2 (en) 2017-04-10 2020-02-18 Western Digital Technologies, Inc. Hybrid logical to physical address translation for non-volatile storage devices with integrated compute module
CN108733575B (en) * 2017-04-20 2022-12-27 深圳市得一微电子有限责任公司 Method for reconstructing physical mapping table by logic after power-off restart and solid state disk
KR102458312B1 (en) 2017-06-09 2022-10-24 삼성전자주식회사 Storage device and operating method thereof
US10845866B2 (en) * 2017-06-22 2020-11-24 Micron Technology, Inc. Non-volatile memory system or sub-system
US10649661B2 (en) * 2017-06-26 2020-05-12 Western Digital Technologies, Inc. Dynamically resizing logical storage blocks
WO2019000355A1 (en) * 2017-06-30 2019-01-03 Intel Corporation Log structure with compressed keys
US10379948B2 (en) 2017-10-02 2019-08-13 Western Digital Technologies, Inc. Redundancy coding stripe based on internal addresses of storage devices
US10474528B2 (en) 2017-10-02 2019-11-12 Western Digital Technologies, Inc. Redundancy coding stripe based on coordinated internal address scheme across multiple devices
US10877898B2 (en) * 2017-11-16 2020-12-29 Alibaba Group Holding Limited Method and system for enhancing flash translation layer mapping flexibility for performance and lifespan improvements
CN109800180B (en) * 2017-11-17 2023-06-27 爱思开海力士有限公司 Method and memory system for address mapping
KR20190083051A (en) * 2018-01-03 2019-07-11 에스케이하이닉스 주식회사 Controller and operation method thereof
TWI670594B (en) * 2018-01-18 2019-09-01 慧榮科技股份有限公司 Data storage device
KR102527132B1 (en) * 2018-01-19 2023-05-02 에스케이하이닉스 주식회사 Memory system and operating method of memory system
CN110096452B (en) * 2018-01-31 2024-05-28 北京忆恒创源科技股份有限公司 Nonvolatile random access memory and method for providing the same
TWI679538B (en) * 2018-03-31 2019-12-11 慧榮科技股份有限公司 Control unit for data storage system and method for updating logical-to-physical mapping table
US10725941B2 (en) 2018-06-30 2020-07-28 Western Digital Technologies, Inc. Multi-device storage system with hosted services on peer storage devices
US10409511B1 (en) 2018-06-30 2019-09-10 Western Digital Technologies, Inc. Multi-device storage system with distributed read/write processing
KR102612918B1 (en) * 2018-07-27 2023-12-13 에스케이하이닉스 주식회사 Controller and operation method thereof
US10592144B2 (en) 2018-08-03 2020-03-17 Western Digital Technologies, Inc. Storage system fabric with multichannel compute complex
US10649843B2 (en) 2018-08-03 2020-05-12 Western Digital Technologies, Inc. Storage systems with peer data scrub
US10901848B2 (en) 2018-08-03 2021-01-26 Western Digital Technologies, Inc. Storage systems with peer data recovery
US10824526B2 (en) 2018-08-03 2020-11-03 Western Digital Technologies, Inc. Using failed storage device in peer-to-peer storage system to perform storage-centric task
US10831603B2 (en) 2018-08-03 2020-11-10 Western Digital Technologies, Inc. Rebuild assist using failed storage device
US10877810B2 (en) 2018-09-29 2020-12-29 Western Digital Technologies, Inc. Object storage system with metadata operation priority processing
US10769062B2 (en) 2018-10-01 2020-09-08 Western Digital Technologies, Inc. Fine granularity translation layer for data storage devices
US10956071B2 (en) 2018-10-01 2021-03-23 Western Digital Technologies, Inc. Container key value store for data storage devices
TWI709042B (en) * 2018-11-08 2020-11-01 慧榮科技股份有限公司 Method and apparatus for performing mapping information management regarding redundant array of independent disks, and associated storage system
US10740231B2 (en) 2018-11-20 2020-08-11 Western Digital Technologies, Inc. Data access in data storage device including storage class memory
US11182258B2 (en) 2019-01-04 2021-11-23 Western Digital Technologies, Inc. Data rebuild using dynamic peer work allocation
US11061598B2 (en) * 2019-03-25 2021-07-13 Western Digital Technologies, Inc. Optimized handling of multiple copies in storage management
TWI698744B (en) * 2019-04-10 2020-07-11 慧榮科技股份有限公司 Data storage device and method for updating logical-to-physical mapping table
US11226904B2 (en) * 2019-04-26 2022-01-18 Hewlett Packard Enterprise Development Lp Cache data location system
US11237893B2 (en) * 2019-06-26 2022-02-01 Western Digital Technologies, Inc. Use of error correction-based metric for identifying poorly performing data storage devices
US10825535B1 (en) 2019-08-28 2020-11-03 Micron Technology, Inc. Intra-code word wear leveling techniques
US10977189B2 (en) 2019-09-06 2021-04-13 Seagate Technology Llc Reducing forward mapping table size using hashing
TWI724550B (en) * 2019-09-19 2021-04-11 慧榮科技股份有限公司 Data storage device and non-volatile memory control method
US11016905B1 (en) 2019-11-13 2021-05-25 Western Digital Technologies, Inc. Storage class memory access
US11194709B2 (en) * 2019-12-30 2021-12-07 Micron Technology, Inc. Asynchronous power loss recovery for memory devices
CN113127376B (en) * 2019-12-30 2024-02-27 阿里巴巴集团控股有限公司 Control method, device and equipment for solid state drive
CN111338990B (en) * 2020-02-12 2021-01-12 合肥康芯威存储技术有限公司 Data storage device, data storage method and storage system
JP2021149549A (en) 2020-03-19 2021-09-27 キオクシア株式会社 Storage device and cache control method of address translation table
US11249921B2 (en) 2020-05-06 2022-02-15 Western Digital Technologies, Inc. Page modification encoding and caching
US11726921B2 (en) * 2020-05-21 2023-08-15 Seagate Technology Llc Combined page footer for parallel metadata storage
CN113885778B (en) * 2020-07-02 2024-03-08 慧荣科技股份有限公司 Data processing method and corresponding data storage device
CN113961140B (en) 2020-07-02 2024-06-11 慧荣科技股份有限公司 Data processing method and corresponding data storage device
US11360671B2 (en) * 2020-07-22 2022-06-14 Seagate Technology Llc Region-specific directed offline scan for hard disk drive
US11550724B2 (en) 2020-08-14 2023-01-10 Samsung Electronics Co., Ltd. Method and system for logical to physical (L2P) mapping for data-storage device comprising nonvolatile memory
KR20220032826A (en) 2020-09-08 2022-03-15 에스케이하이닉스 주식회사 Apparatus and method for controlling and storing map data in a memory system
US20210223979A1 (en) * 2021-03-16 2021-07-22 Intel Corporation On-ssd-copy techniques using copy-on-write
JP2024043063A (en) * 2022-09-16 2024-03-29 キオクシア株式会社 Memory system and control method
CN115563026B (en) * 2022-12-07 2023-04-14 合肥康芯威存储技术有限公司 Mapping table reconstruction method and data storage device

Family Cites Families (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7886108B2 (en) 2000-01-06 2011-02-08 Super Talent Electronics, Inc. Methods and systems of managing memory addresses in a large capacity multi-level cell (MLC) based flash memory device
US7660941B2 (en) 2003-09-10 2010-02-09 Super Talent Electronics, Inc. Two-level RAM lookup table for block and page allocation and wear-leveling in limited-write flash-memories
US7610438B2 (en) 2000-01-06 2009-10-27 Super Talent Electronics, Inc. Flash-memory card for caching a hard disk drive with data-area toggling of pointers stored in a RAM lookup table
US8266367B2 (en) 2003-12-02 2012-09-11 Super Talent Electronics, Inc. Multi-level striping and truncation channel-equalization for flash-memory system
US7526599B2 (en) * 2002-10-28 2009-04-28 Sandisk Corporation Method and apparatus for effectively enabling an out of sequence write process within a non-volatile memory system
US20040088474A1 (en) 2002-10-30 2004-05-06 Lin Jin Shin NAND type flash memory disk device and method for detecting the logical address
US20040109376A1 (en) 2002-12-09 2004-06-10 Jin-Shin Lin Method for detecting logical address of flash memory
US7139864B2 (en) 2003-12-30 2006-11-21 Sandisk Corporation Non-volatile memory and method with block management system
JP2005293774A (en) 2004-04-02 2005-10-20 Hitachi Global Storage Technologies Netherlands Bv Control method of disk unit
US7441067B2 (en) 2004-11-15 2008-10-21 Sandisk Corporation Cyclic flash memory wear leveling
KR101404083B1 (en) 2007-11-06 2014-06-09 삼성전자주식회사 Solid state disk and operating method thereof
US7363421B2 (en) 2005-01-13 2008-04-22 Stmicroelectronics S.R.L. Optimizing write/erase operations in memory devices
US8452929B2 (en) 2005-04-21 2013-05-28 Violin Memory Inc. Method and system for storage of data in non-volatile media
US20070016721A1 (en) 2005-07-18 2007-01-18 Wyse Technology Inc. Flash file system power-up by using sequential sector allocation
US20070094445A1 (en) 2005-10-20 2007-04-26 Trika Sanjeev N Method to enable fast disk caching and efficient operations on solid state disks
US7509471B2 (en) 2005-10-27 2009-03-24 Sandisk Corporation Methods for adaptively handling data writes in non-volatile memories
US7711923B2 (en) 2006-06-23 2010-05-04 Microsoft Corporation Persistent flash memory mapping table
KR100843543B1 (en) 2006-10-25 2008-07-04 삼성전자주식회사 System comprising flash memory device and data recovery method thereof
US8316206B2 (en) * 2007-02-12 2012-11-20 Marvell World Trade Ltd. Pilot placement for non-volatile memory
JP4939234B2 (en) * 2007-01-11 2012-05-23 株式会社日立製作所 Flash memory module, storage device using the flash memory module as a recording medium, and address conversion table verification method for the flash memory module
US20080288712A1 (en) 2007-04-25 2008-11-20 Cornwell Michael J Accessing metadata with an external host
US20080282024A1 (en) 2007-05-09 2008-11-13 Sudeep Biswas Management of erase operations in storage devices based on flash memories
US8095851B2 (en) 2007-09-06 2012-01-10 Siliconsystems, Inc. Storage subsystem capable of adjusting ECC settings based on monitored conditions
KR101436505B1 (en) 2008-01-03 2014-09-02 삼성전자주식회사 Memory device
TWI375956B (en) 2008-02-29 2012-11-01 Phison Electronics Corp Block management methnd for flash memory, controller and storage sysetm thereof
KR101398200B1 (en) 2008-03-18 2014-05-26 삼성전자주식회사 Memory device and encoding and/or decoding method
KR101398212B1 (en) 2008-03-18 2014-05-26 삼성전자주식회사 Memory device and encoding and/or decoding method
US8180954B2 (en) 2008-04-15 2012-05-15 SMART Storage Systems, Inc. Flash management using logical page size
KR101518199B1 (en) 2008-05-23 2015-05-06 삼성전자주식회사 Error correction apparatus method there-of and memory device comprising the apparatus
US8412880B2 (en) 2009-01-08 2013-04-02 Micron Technology, Inc. Memory system controller to manage wear leveling across a plurality of storage nodes
US8255774B2 (en) 2009-02-17 2012-08-28 Seagate Technology Data storage system with non-volatile memory for error correction
KR20100104623A (en) 2009-03-18 2010-09-29 삼성전자주식회사 Data processing system and code rate controlling scheme thereof
KR101571693B1 (en) 2009-04-15 2015-11-26 삼성전자주식회사 Non-volatile semiconductor memory controller for processing one request first before completing another request Memory system having the same and Method there-of
US20100306451A1 (en) 2009-06-01 2010-12-02 Joshua Johnson Architecture for nand flash constraint enforcement
US20110004720A1 (en) * 2009-07-02 2011-01-06 Chun-Ying Chiang Method and apparatus for performing full range random writing on a non-volatile memory
US8688894B2 (en) 2009-09-03 2014-04-01 Pioneer Chip Technology Ltd. Page based management of flash storage
US8463983B2 (en) 2009-09-15 2013-06-11 International Business Machines Corporation Container marker scheme for reducing write amplification in solid state devices
US20110072333A1 (en) 2009-09-24 2011-03-24 Innostor Technology Corporation Control method for flash memory based on variable length ecc
US8364929B2 (en) 2009-10-23 2013-01-29 Seagate Technology Llc Enabling spanning for a storage device
US8745353B2 (en) 2009-10-23 2014-06-03 Seagate Technology Llc Block boundary resolution for mismatched logical and physical block sizes
US8255661B2 (en) 2009-11-13 2012-08-28 Western Digital Technologies, Inc. Data storage system comprising a mapping bridge for aligning host block size with physical block size of a data storage device
JP4738536B1 (en) 2010-01-29 2011-08-03 株式会社東芝 Nonvolatile memory controller and nonvolatile memory control method
US8327226B2 (en) 2010-02-03 2012-12-04 Seagate Technology Llc Adjustable error correction code length in an electrical storage device
US8407449B1 (en) 2010-02-26 2013-03-26 Western Digital Technologies, Inc. Non-volatile semiconductor memory storing an inverse map for rebuilding a translation table
US8458417B2 (en) 2010-03-10 2013-06-04 Seagate Technology Llc Garbage collection in a storage device
US20110252289A1 (en) 2010-04-08 2011-10-13 Seagate Technology Llc Adjusting storage device parameters based on reliability sensing
US9026716B2 (en) 2010-05-12 2015-05-05 Western Digital Technologies, Inc. System and method for managing garbage collection in solid-state memory
US8533550B2 (en) 2010-06-29 2013-09-10 Intel Corporation Method and system to improve the performance and/or reliability of a solid-state drive
TWI455144B (en) 2010-07-22 2014-10-01 Silicon Motion Inc Controlling methods and controllers utilized in flash memory device
CN102567221B (en) * 2010-12-29 2015-06-10 群联电子股份有限公司 Data management method, memory controller and memory storage device
US8458133B2 (en) * 2011-01-24 2013-06-04 Apple Inc. Coordinating sync points between a non-volatile memory and a file system
TWI432962B (en) * 2011-10-06 2014-04-01 Mstar Semiconductor Inc Electronic system and memory managing method thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090049229A1 (en) * 2005-12-09 2009-02-19 Matsushita Electric Industrial Co., Ltd. Nonvolatile memory device, method of writing data,and method of reading out data
US20090216943A1 (en) 2008-02-21 2009-08-27 Hitachi Global Storage Technologies Netherlands B. V. Data storage device and data management method in data storage device
US20100030999A1 (en) * 2008-08-01 2010-02-04 Torsten Hinz Process and Method for Logical-to-Physical Address Mapping in Solid Sate Disks
US20110072194A1 (en) * 2009-09-23 2011-03-24 Lsi Corporation Logical-to-Physical Address Translation for Solid State Disks
US20120173795A1 (en) * 2010-05-25 2012-07-05 Ocz Technology Group, Inc. Solid state drive with low write amplification
US20120226887A1 (en) * 2011-03-06 2012-09-06 Micron Technology, Inc. Logical address translation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2904496A4 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9959203B2 (en) 2014-06-23 2018-05-01 Google Llc Managing storage devices
US11042478B2 (en) 2014-06-23 2021-06-22 Google Llc Managing storage devices
US11797434B2 (en) 2014-06-23 2023-10-24 Google Llc Managing storage devices
US11481121B2 (en) 2016-09-30 2022-10-25 Amazon Technologies, Inc. Physical media aware spacially coupled journaling and replay
WO2019040320A1 (en) * 2017-08-21 2019-02-28 Micron Technology, Inc. Logical to physical mapping
US10628326B2 (en) 2017-08-21 2020-04-21 Micron Technology, Inc. Logical to physical mapping
US11055230B2 (en) * 2017-08-21 2021-07-06 Micron Technology, Inc. Logical to physical mapping
US11650931B2 (en) 2018-12-31 2023-05-16 Micron Technology, Inc. Hybrid logical to physical caching scheme of L2P cache and L2P changelog
US10997085B2 (en) 2019-06-03 2021-05-04 International Business Machines Corporation Compression for flash translation layer

Also Published As

Publication number Publication date
AU2013327582A1 (en) 2015-05-14
JP2015530685A (en) 2015-10-15
CN105027090B (en) 2018-03-09
EP2904496A1 (en) 2015-08-12
US20140101369A1 (en) 2014-04-10
JP6210570B2 (en) 2017-10-11
AU2013327582B2 (en) 2019-01-17
HK1216443A1 (en) 2016-11-11
CN105027090A (en) 2015-11-04
KR20150084817A (en) 2015-07-22
US9268682B2 (en) 2016-02-23
KR101911589B1 (en) 2018-10-24
EP2904496A4 (en) 2016-06-01

Similar Documents

Publication Publication Date Title
US10055345B2 (en) Methods, devices and systems for solid state drive control
US9268682B2 (en) Methods, devices and systems for physical-to-logical mapping in solid state drives
US9817577B2 (en) Methods, devices and systems for two stage power-on map rebuild with free space accounting in a solid state drive
US8954694B2 (en) Methods, data storage devices and systems for fragmented firmware table rebuild in a solid state drive
US10254983B2 (en) Atomic write command support in a solid state drive
US9792067B2 (en) Trim command processing in a solid state drive
US9513831B2 (en) Method and system for atomically writing scattered information in a solid state storage device
US9507523B1 (en) Methods, devices and systems for variable size logical page management in a solid state drive

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201380063439.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13844022

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015535725

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013844022

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20157011769

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2013327582

Country of ref document: AU

Date of ref document: 20130930

Kind code of ref document: A