WO2012021847A2 - Apparatus, system, and method for caching data - Google Patents

Apparatus, system, and method for caching data

Info

Publication number
WO2012021847A2
Authority
WO
WIPO (PCT)
Prior art keywords
cache
data
request
module
storage
Application number
PCT/US2011/047659
Other languages
English (en)
Other versions
WO2012021847A3 (French)
Inventor
David Flynn
Original Assignee
Fusion-Io, Inc.
Application filed by Fusion-Io, Inc.
Publication of WO2012021847A2
Publication of WO2012021847A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 - Caches characterised by their organisation or structure

Definitions

  • This invention relates to caching data and more particularly relates to caching data using solid-state storage media.
  • Data storage caches are typically direct mapped, fully associative, or set associative.
  • In direct mapped caches, each storage block of a backing store is directly mapped to a single cache block; because a cache typically has a smaller capacity than its associated backing store, several storage blocks often share the same cache block, causing cache collisions.
  • Direct mapped caches usually address a cache collision for a cache block by overwriting the cache block with the most recently accessed data.
  • In fully associative caches, storage blocks typically are not mapped to a specific cache block, but can be cached in any cache block.
  • The processing overhead for locating cached data in a fully associative cache is typically greater than for a direct mapped cache, because a cache map, cache index, cache tags, or another separate cache translation layer is used to locate the cached data.
  • Set associative caches typically divide cache storage into sets, where each storage block of a backing store is mapped to a set and can be stored in any cache block in the set.
  • Set associative caches typically have more cache collision issues than fully associative caches and more processing overhead for locating cached data than direct mapped caches.
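
To make the collision behavior concrete, the following sketch (not taken from the patent; the block count and names are illustrative) shows how a direct mapped cache derives its single candidate cache block from a storage block address, so that distinct storage blocks contend for the same cache block:

```python
# Illustrative sketch of direct-mapped placement (not from the patent).
# A backing store with many blocks maps onto a small cache by modulo,
# so distinct storage blocks can collide on the same cache block.

CACHE_BLOCKS = 8          # assumed small cache for illustration

def direct_mapped_index(storage_block: int) -> int:
    """Every storage block has exactly one candidate cache block."""
    return storage_block % CACHE_BLOCKS

# Storage blocks 3, 11, and 19 all collide on cache block 3; the most
# recently accessed one overwrites the previous occupant.
for blk in (3, 11, 19):
    print(blk, "->", direct_mapped_index(blk))
```
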
  • the present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available data storage caches. Accordingly, the present invention has been developed to provide an apparatus, system, and method for caching data that overcome many or all of the above-discussed shortcomings in the art.
  • a method of the present invention is presented for caching data.
  • the method in the disclosed embodiments substantially includes the steps necessary to carry out the functions presented below with respect to the operation of the described apparatus and system.
  • the method includes detecting an input/output ("I/O") request for a storage device cached by solid-state storage media of a cache.
  • the method in a further embodiment, may include referencing a single mapping structure to determine that the cache comprises data of the I/O request.
  • the method includes satisfying the I/O request using the cache in response to determining that the cache comprises at least one data block of the I/O request.
  • the single mapping structure maps each logical block address of the storage device directly to a logical block address of the cache.
  • the single mapping structure in a further embodiment, comprises or maintains a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media.
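
A minimal sketch of the claimed flow, assuming a hypothetical `mapping` structure keyed by logical block address (the names and helper callables are illustrative, not the patent's implementation): the same structure both answers whether the cache holds data for an I/O request and yields the physical location on the solid-state media.

```python
# Hedged sketch: one mapping structure serves as both cache lookup and
# address translation. Keys are storage-device LBAs (which are also the
# cache's logical addresses); values are physical locations on the
# solid-state storage media. All names here are illustrative.

mapping = {}  # lba -> physical address on the solid-state media

def handle_read(lba: int, read_cache, read_backing):
    """Detect a read I/O request and satisfy it from the cache if the
    single mapping structure says the cache holds the block."""
    phys = mapping.get(lba)
    if phys is not None:            # cache hit: no separate cache index or tags
        return read_cache(phys)
    data = read_backing(lba)        # cache miss: fall through to the backing store
    # (a real implementation would also admit the block into the cache)
    return data
```
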
  • Figure 1 is a schematic block diagram illustrating one embodiment of a system for caching data in accordance with the present invention
  • Figure 2 is a schematic block diagram illustrating one embodiment of a host device in accordance with the present invention.
  • Figure 3 is a schematic block diagram illustrating one embodiment of a direct cache module in accordance with the present invention.
  • Figure 4 is a schematic block diagram illustrating another embodiment of a direct cache module in accordance with the present invention.
  • Figure 5 is a schematic block diagram illustrating one embodiment of a storage controller in accordance with the present invention
  • Figure 6 is a schematic block diagram illustrating another embodiment of a storage controller in accordance with the present invention
  • Figure 7 is a schematic block diagram illustrating one embodiment of a forward map and a reverse map in accordance with the present invention.
  • Figure 8 is a schematic block diagram illustrating one embodiment of a mapping structure, a logical address space of a cache, a sequential, log-based, append-only writing structure, and an address space of a storage device in accordance with the present invention
  • Figure 9 is a schematic flow chart diagram illustrating one embodiment of a method for caching data in accordance with the present invention.
  • Figure 10 is a schematic flow chart diagram illustrating another embodiment of a method for caching data in accordance with the present invention.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • FIG. 1 depicts one embodiment of a system 100 for caching data in accordance with the present invention.
  • The system 100, in the depicted embodiment, includes a cache 102, a host device 114, a direct cache module 116, and a storage device 118.
  • the cache 102 in the depicted embodiment, includes a solid-state storage controller 104, a write data pipeline 106, a read data pipeline 108, and a solid-state storage media 110.
  • the system 100 caches data for the storage device 118 in the cache 102.
  • the system 100 includes a single cache 102.
  • the system 100 may include two or more caches 102.
  • the system 100 may mirror cached data between several caches 102, may virtually stripe cached data across multiple caches 102, or otherwise cache data in more than one cache 102.
  • the cache 102 serves as a read and/or a write cache for the storage device 118 and the storage device 118 is a backing store for the cache 102.
  • The cache 102 is embodied by a non-volatile, solid-state storage device, with a solid-state storage controller 104 and non-volatile, solid-state storage media 110.
  • The non-volatile, solid-state storage media 110 may include flash memory, nano random access memory ("nano RAM" or "NRAM"), magneto-resistive RAM ("MRAM"), phase change RAM ("PRAM"), etc.
  • the cache 102 may include other types of non-volatile and/or volatile data storage, such as dynamic RAM (“DRAM”), static RAM (“SRAM”), magnetic data storage, optical data storage, and/or other data storage technologies.
  • the solid-state storage controller 104 may mask differences in latency for storage operations performed on the solid-state storage media 110 by grouping erase blocks by access time, wear level, and/or health, by queuing storage operations based on expected completion times, by splitting storage operations, by coordinating storage operation execution in parallel among multiple buses, or the like.
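
One way to picture the latency-masking behavior described above is a scheduler that orders queued storage operations by their expected completion time; the sketch below is purely illustrative (the grouping criteria and time estimates are assumptions, not the controller's actual policy).

```python
import heapq

# Illustrative sketch: queue storage operations by expected completion
# time so operations on faster erase-block groups are serviced without
# waiting behind slower ones. Time estimates are invented for the example.

class OperationQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0

    def submit(self, op, expected_ms: float):
        heapq.heappush(self._heap, (expected_ms, self._seq, op))
        self._seq += 1

    def next_operation(self):
        if not self._heap:
            return None
        _, _, op = heapq.heappop(self._heap)
        return op

q = OperationQueue()
q.submit("program fast erase block", expected_ms=1.2)
q.submit("erase worn block", expected_ms=3.5)
print(q.next_operation())   # the operation expected to finish soonest
```
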
  • the cache 102 caches data for the storage device 118.
  • the storage device 118 in one embodiment, is a backing store associated with the cache 102 and/or with the direct cache module 116.
  • the storage device 118 may include a hard disk drive, an optical drive with optical media, a magnetic tape drive, or another type of storage device.
  • the storage device 118 may have a greater data storage capacity than the cache 102.
  • the storage device 118 may have a higher latency, a lower throughput, or the like, than the cache 102.
  • the storage device 118 may have a higher latency, a lower throughput, or the like due to properties of the storage device 118 itself, or due to properties of a connection to the storage device 118.
  • The cache 102 and the storage device 118 may each include non-volatile, solid-state storage media 110 with similar properties, but the storage device 118 may be in communication with the host device 114 over a data network, while the cache 102 may be directly connected to the host device 114, causing the storage device 118 to have a higher latency relative to the host device 114 than the cache 102.
  • the cache 102 and the storage device 118 are in communication with the host device 114 through the direct cache module 116.
  • the cache 102 and/or the storage device 118 may be direct attached storage ("DAS”) of the host device 114.
  • DAS is data storage that is connected to a device, either internally or externally, without a storage network in between.
  • the cache 102 and/or the storage device 118 are internal to the host device 114 and are connected using a system bus, such as a peripheral component interconnect express ("PCI-e") bus, a Serial Advanced Technology Attachment (“SATA”) bus, or the like.
  • the cache 102 and/or the storage device 118 may be external to the host device 114 and may be connected using a universal serial bus (“USB”) connection, an Institute of Electrical and Electronics Engineers (“IEEE”) 1394 bus (“FireWire”), an external SATA (“eSATA”) connection, or the like.
  • the cache 102 and/or the storage device 118 may be connected to the host device 114 using a peripheral component interconnect ("PCI") express bus using external electrical or optical bus extension or bus networking solution such as Infiniband or PCI Express Advanced Switching (“PCIe-AS”), or the like.
  • the cache 102 and/or the storage device 118 may be in the form of a dual-inline memory module ("DIMM"), a daughter card, or a micro-module.
  • the cache 102 and/or the storage device 118 may be elements within a rackmounted blade.
  • The cache 102 and/or the storage device 118 may be contained within packages that are integrated directly onto a higher level assembly (e.g. motherboard, laptop, graphics processor).
  • individual components comprising the cache 102 and/or the storage device 118 are integrated directly onto a higher level assembly without intermediate packaging.
  • the cache 102 includes one or more solid-state storage controllers 104 with a write data pipeline 106 and a read data pipeline 108, and a solid-state storage media 110.
  • the cache 102 and/or the storage device 118 may be connected to the host device 114 over a data network.
  • the cache 102 and/or the storage device 118 may include a storage area network (“SAN") storage device, a network attached storage (“NAS”) device, a network share, or the like.
  • the system 100 may include a data network, such as the Internet, a wide area network (“WAN”), a metropolitan area network (“MAN”), a local area network (“LAN”), a token ring, a wireless network, a fiber channel network, a SAN, a NAS, ESCON, or the like, or any combination of networks.
  • A data network may also include a network from the IEEE 802 family of network technologies, such as Ethernet, token ring, Wi-Fi, Wi-Max, and the like.
  • a data network may include servers, switches, routers, cabling, radios, and other equipment used to facilitate networking between the host device 114 and the cache 102 and/or the storage device 118.
  • the cache 102 is connected directly to the host device 114 as a DAS device.
  • the cache 102 is directly connected to the host device 114 as a DAS device and the storage device 118 is directly connected to the cache 102.
  • the cache 102 may be connected directly to the host device 114, and the storage device 118 may be connected directly to the cache 102 using a direct, wire-line connection, such as a PCI express bus, an SATA bus, a USB connection, an IEEE 1394 connection, an eSATA connection, a proprietary direct connection, an external electrical or optical bus extension or bus networking solution such as Infiniband or PCIe-AS, or the like.
  • the system 100 includes the host device 114 in communication with the cache 102 and the storage device 118 through the direct cache module 116.
  • a host device 114 may be a host, a server, a storage controller of a SAN, a workstation, a personal computer, a laptop computer, a handheld computer, a supercomputer, a computer cluster, a network switch, router, or appliance, a database or storage appliance, a data acquisition or data capture system, a diagnostic system, a test system, a robot, a portable electronic device, a wireless device, or the like.
  • the host device 114 is in communication with the direct cache module 116.
  • the direct cache module 116 receives or otherwise detects read and write requests from the host device 114 for the storage device 118 and manages the caching of data in the cache 102.
  • the direct cache module 116 comprises a software application, file system filter driver, or the like.
  • the direct cache module 116 may include one or more software drivers on the host device 114, one or more storage controllers, such as the solid-state storage controllers 104 of the cache 102, a combination of one or more software drivers and storage controllers, or the like.
  • hardware and/or software of the direct cache module 116 comprises a cache controller that is in communication with the solid-state storage controller 104 to manage operation of the cache 102.
  • The storage controller 104 sequentially writes data on the solid-state storage media 110 in a log-structured format; within one or more physical structures of the storage elements, the data is stored sequentially on the solid-state storage media 110. Sequentially writing data involves the storage controller 104 streaming data packets into storage write buffers for storage elements, such as a chip (a package of one or more dies) or a die on a circuit board. When the storage write buffers are full, the data packets are programmed to a designated virtual or logical page ("LP"). Data packets then refill the storage write buffers and, when full, the data packets are written to the next LP. The next virtual page may be in the same bank or another bank. This process continues, LP after LP, typically until a virtual or logical erase block ("LEB") is filled.
  • the streaming may continue across LEB boundaries with the process continuing, LEB after LEB.
  • the storage controller 104 sequentially stores data packets in an LEB by order of processing.
  • the storage controller 104 stores packets in the order that they come out of the write data pipeline 106. This order may be a result of data segments arriving from a requesting device mixed with packets of valid data that are being read from another storage location as valid data is being recovered from another LEB during a recovery operation.
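
The sequential, log-structured write path can be pictured with the following sketch, which buffers packets, "programs" them to the current logical page when the buffer fills, and moves on LP after LP until a logical erase block is full (the page and block sizes and the in-memory lists are illustrative assumptions, not the controller's actual geometry).

```python
# Illustrative sketch of log-structured, append-only writing:
# packets fill a write buffer; a full buffer is programmed to the
# current logical page (LP); full LPs fill a logical erase block (LEB).

PACKETS_PER_LP = 4      # assumed logical page size, in packets
LPS_PER_LEB = 2         # assumed logical erase block size, in pages

class LogWriter:
    def __init__(self):
        self.buffer = []
        self.current_leb = []     # list of programmed LPs
        self.lebs = []            # completed LEBs, in write order

    def append(self, packet):
        self.buffer.append(packet)
        if len(self.buffer) == PACKETS_PER_LP:
            self._program_lp()

    def _program_lp(self):
        self.current_leb.append(list(self.buffer))   # "program" the LP
        self.buffer.clear()
        if len(self.current_leb) == LPS_PER_LEB:     # LEB filled: start the next one
            self.lebs.append(self.current_leb)
            self.current_leb = []

writer = LogWriter()
for i in range(10):
    writer.append({"lba": i, "seq": i, "data": b"..."})
```
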
  • the sequentially stored data can serve as a log to reconstruct data indexes and other metadata using information from data packet headers.
  • the storage controller 104 may reconstruct a storage index by reading headers to determine the data structure to which each packet belongs and sequence information to determine where in the data structure the data or metadata belongs.
  • the storage controller 104 uses physical address information for each packet and timestamp or sequence information to create a mapping between the physical locations of the packets and the data structure identifier and data segment sequence. Timestamp or sequence information is used by the storage controller 104 to replay the sequence of changes made to the index and thereby reestablish the most recent state.
  • erase blocks are time stamped or given a sequence number as packets are written and the timestamp or sequence information of an erase block is used along with information gathered from container headers and packet headers to reconstruct the storage index.
  • timestamp or sequence information is written to an erase block when the erase block is recovered.
  • In a read-modify-write operation, data packets associated with the logical structure are located and read in a read operation.
  • Data segments of the modified structure that have been modified are not written to the location from which they are read. Instead, the modified data segments are again converted to data packets and then written to the next available location in the virtual page currently being written.
  • Index entries for the respective data packets are modified to point to the packets that contain the modified data segments.
  • The entry or entries in the index for data packets associated with the same logical structure that have not been modified will include pointers to the original locations of the unmodified data packets.
  • If the original logical structure is maintained, for example to preserve a previous version of the logical structure, the original logical structure will have pointers in the index to all data packets as originally written.
  • the new logical structure will have pointers in the index to some of the original data packets and pointers to the modified data packets in the virtual page that is currently being written.
  • the index includes an entry for the original logical structure mapped to a number of packets stored on the solid-state storage media 110.
  • a new logical structure is created and a new entry is created in the index mapping the new logical structure to the original packets.
  • the new logical structure is also written to the solid-state storage media 110 with its location mapped to the new entry in the index.
  • the new logical structure packets may be used to identify the packets within the original logical structure that are referenced in case changes have been made in the original logical structure that have not been propagated to the copy and the index is lost or corrupted.
  • the index includes a logical entry for a logical block.
  • Sequentially writing packets facilitates a more even use of the solid-state storage media 110 and allows a solid-state storage device controller to monitor storage hot spots and level usage of the various virtual pages in the solid-state storage media 110. Sequentially writing packets also facilitates a powerful, efficient garbage collection system, which is described in detail below. One of skill in the art will recognize other benefits of sequential storage of data packets.
  • the system 100 may comprise a log-structured storage system or log-structured array similar to a log-structured file system and the order that data is stored may be used to recreate an index.
  • an index that includes a logical-to-physical mapping is stored in volatile memory. If the index is corrupted or lost, the index may be reconstructed by addressing the solid-state storage media 110 in the order that the data was written.
  • data is typically stored sequentially by filling a first logical page, then a second logical page, etc. until the LEB is filled.
  • the solid-state storage controller 104 then chooses another LEB and the process repeats.
  • The index can be rebuilt by traversing the solid-state storage media 110 in order from beginning to end.
  • The solid-state storage controller 104 may only need to replay a portion of the solid-state storage media 110 to rebuild a portion of the index that was not stored in non-volatile memory.
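
A sketch of the rebuild idea, assuming each packet carries a header with its logical address and a monotonically increasing sequence number (the field names are illustrative): traversing the media in written order and replaying the headers reproduces the most recent logical-to-physical mapping.

```python
# Illustrative sketch: rebuild a logical-to-physical index by replaying
# packets in the order they were written. Later sequence numbers win,
# so the rebuilt index reflects the most recent state.

def rebuild_index(packets_in_write_order):
    """packets_in_write_order yields (physical_addr, header) pairs, where
    header carries the logical address and a sequence number."""
    index = {}      # logical address -> (physical address, sequence)
    for phys, header in packets_in_write_order:
        lba, seq = header["lba"], header["seq"]
        known = index.get(lba)
        if known is None or seq > known[1]:
            index[lba] = (phys, seq)
    return {lba: phys for lba, (phys, seq) in index.items()}

# Example: LBA 7 was written twice; the replay keeps the newer location.
log = [(100, {"lba": 7, "seq": 1}), (101, {"lba": 9, "seq": 2}),
       (102, {"lba": 7, "seq": 3})]
print(rebuild_index(log))   # {7: 102, 9: 101}
```
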
  • In one embodiment, the host device 114 loads one or more device drivers for the cache 102, and the direct cache module 116 communicates with the one or more device drivers on the host device 114.
  • the direct cache module 116 may communicate directly with a hardware interface of the cache 102 and/or the storage device 118.
  • the direct cache module 116 may be integrated with the cache 102 and/or the storage device 118.
  • the cache 102 and/or the storage device 118 have block device interfaces that support block device commands.
  • The cache 102 and/or the storage device 118 may support the standard block device interface, the ATA interface standard, the ATA Packet Interface ("ATAPI") standard, the small computer system interface ("SCSI") standard, and/or the Fibre Channel standard, which are maintained by the InterNational Committee for Information Technology Standards ("INCITS").
  • the direct cache module 116 may interact with the cache 102 and/or the storage device 118 using block device commands to read, write, and clear (or trim) data.
  • the direct cache module 116 serves as a proxy for the storage device 118, receiving read and write requests for the storage device 118 directly from the host device 114.
  • the direct cache module 116 may represent itself to the host device 114 as a storage device having a capacity similar to and/or matching the capacity of the storage device 118.
  • the direct cache module 116 and/or the solid-state storage controller 104 dynamically reduce a cache size for the cache 102 in response to an age characteristic for the solid-state storage media 110 of the cache 102. For example, as storage elements of the cache 102 age, the direct cache module 116 and/or the solid-state storage controller 104 may remove the storage elements from operation, thereby reducing the cache size for the cache 102. Examples of age characteristics, in various embodiments, may include a program/erase count, a bit error rate, an uncorrectable bit error rate, or the like that satisfies a predefined age threshold.
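
The dynamic cache-size reduction can be sketched as a periodic sweep that retires storage elements whose age characteristic crosses a threshold; the thresholds and element records below are invented for illustration and are not values from the patent.

```python
# Illustrative sketch: retire aged storage elements to shrink the cache.
# The age characteristics and thresholds here are example values only.

MAX_PE_CYCLES = 3000          # assumed program/erase count threshold
MAX_UBER = 1e-15              # assumed uncorrectable bit error rate threshold

def usable_elements(elements):
    """Keep only elements that have not satisfied a predefined age threshold."""
    keep = []
    for e in elements:
        too_old = (e["pe_cycles"] >= MAX_PE_CYCLES or
                   e["uncorrectable_ber"] >= MAX_UBER)
        if not too_old:
            keep.append(e)
    return keep

elements = [
    {"id": 0, "pe_cycles": 1200, "uncorrectable_ber": 1e-17, "capacity_gb": 8},
    {"id": 1, "pe_cycles": 3100, "uncorrectable_ber": 1e-17, "capacity_gb": 8},
]
remaining = usable_elements(elements)
cache_size_gb = sum(e["capacity_gb"] for e in remaining)   # reduced cache size
```
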
  • the direct cache module 116 upon receiving a read request or write request from the host device 114, in one embodiment, fulfills the request by caching write data in the cache 102 or by retrieving read data from one of the cache 102 and the storage device 118 and returning the read data to the host device 114.
  • Data caches are typically organized into cache lines, which divide up the physical capacity of the cache; these cache lines may be grouped into several sets.
  • a cache line is typically larger than a block or sector of a backing store associated with a data cache, to provide for prefetching of additional blocks or sectors and to reduce cache misses and increase the cache hit rate.
  • Data caches also typically evict an entire, fixed size, cache line at a time to make room for newly requested data in satisfying a cache miss.
  • Data caches may be direct mapped, fully associative, N-way set associative, or the like.
  • each block or sector of a backing store has a one-to-one mapping to a cache line in the direct mapped cache. For example, if a direct mapped cache has T number of cache lines, the backing store associated with the direct mapped cache may be divided into T sections, and the direct mapped cache caches data from a section exclusively in the cache line corresponding to the section. Because a direct mapped cache always caches a block or sector in the same location or cache line, the mapping between a block or sector address and a cache line can be a simple manipulation of an address of the block or sector.
  • any cache line can store data from any block or sector of a backing store.
  • a fully associative cache typically has lower cache miss rates than a direct mapped cache, but has longer hit times (i.e. it takes longer to locate data in the cache) than a direct mapped cache.
  • To locate data in a fully associative cache, either the cache tags of the entire cache can be searched, a separate cache index can be used, or the like.
  • each sector or block of a backing store may be cached in any of a set of N different cache lines.
  • In a 2-way set associative cache, either of two different cache lines may cache data for a given sector or block.
  • both the cache and the backing store are typically divided into sections or sets, with one or more sets of sectors or blocks of the backing store assigned to a set of N cache lines.
  • To locate data in an N-way set associative cache, a block or sector address is typically mapped to a set of cache lines, and the cache tags of the set of cache lines are searched, a separate cache index is searched, or the like, to determine which cache line in the set is storing data for the block or sector.
  • An N-way set associative cache typically has miss rates and hit rates between those of a direct mapped cache and those of a fully associative cache.
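
For contrast with the direct mapped sketch earlier, the following illustrative lookup shows the extra work an N-way set associative cache does: map the block address to a set, then search the tags of that set's cache lines (the sizes and data structures are assumptions for the example).

```python
# Illustrative N-way set associative lookup (example sizes only).
NUM_SETS = 4
WAYS = 2                                  # 2-way set associative

# sets[s] is a list of up to WAYS (tag, cache_line_data) entries.
sets = [[] for _ in range(NUM_SETS)]

def lookup(block_addr: int):
    s = block_addr % NUM_SETS             # map the address to a set
    for tag, data in sets[s]:             # search the tags within the set
        if tag == block_addr:
            return data                   # hit
    return None                           # miss: any way in set s may be filled

def insert(block_addr: int, data):
    s = block_addr % NUM_SETS
    if len(sets[s]) == WAYS:              # set full: evict one line (here, the oldest)
        sets[s].pop(0)
    sets[s].append((block_addr, data))
```
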
  • the cache 102 has characteristics of both a directly mapped cache and a fully associative cache.
  • a logical address space of the cache 102 in one embodiment, is directly mapped to an address space of the storage device 118 while the physical storage media 110 of the cache 102 is fully associative with regard to the storage device 118.
  • each block or sector of the storage device 118 in one embodiment, is directly mapped to a single logical address of the cache 102 while any portion of the physical storage media 110 of the cache 102 may store data for any block or sector of the storage device 118.
  • a logical address is an identifier of a block of data and is distinct from a physical address of the block of data, but may be mapped to the physical address of the block of data.
  • Examples of logical addresses include logical block addresses ("LBAs"), logical identifiers, object identifiers, pointers, references, and the like.
  • the cache 102 has logical or physical cache data blocks associated with each logical address that are equal in size to a block or sector of the storage device 118.
  • the cache 102 caches ranges and/or sets of ranges of blocks or sectors for the storage device 118 at a time, providing dynamic or variable length cache line functionality.
  • a range or set of ranges of blocks or sectors may include a mixture of contiguous and/or noncontiguous blocks.
  • the cache 102 in one embodiment, supports block device requests that include a mixture of contiguous and/or noncontiguous blocks and that may include "holes" or intervening blocks that the cache 102 does not cache or otherwise store.
  • one or more groups of addresses of the storage device 118 are directly mapped to corresponding logical addresses of the cache 102.
  • the addresses of the storage device 118 may comprise physical addresses or logical addresses.
  • Directly mapping logical addresses of the storage device 118 to logical addresses of the cache 102 in one embodiment, provides a one-to-one relationship between the logical addresses of the storage device 118 and the logical addresses of the cache 102.
  • Directly mapping logical or physical address space of the storage device 118 to logical addresses of the cache 102 precludes the use of an extra translation layer in the direct cache module 116, such as the use of cache tags, a cache index, the maintenance of a translation data structure, or the like.
  • both logical address spaces include at least logical addresses 0-N.
  • at least a portion of the logical address space of the cache 102 represents or appears as the logical address space of the storage device 118 to a client, such as the host device 114.
  • At least a portion of logical addresses in a logical address space of the cache 102 may be mapped to physical addresses of the storage device 118. At least a portion of the logical address space of the cache 102, in one embodiment, may correspond to the physical address space of the storage device 118. At least a subset of the logical addresses of the cache 102, in this embodiment, are directly mapped to corresponding physical addresses of the storage device 118.
  • the logical address space of the cache 102 is a sparse address space that is either as large as or is larger than the physical storage capacity of the cache 102. This allows the storage device 118 to have a larger storage capacity than the cache 102, while maintaining a direct mapping between the logical addresses of the cache 102 and logical or physical addresses of the storage device 118.
  • the sparse logical address space may be thinly provisioned, in one embodiment.
  • the cache 102 directly maps the logical addresses to distinct physical addresses or locations on the solid-state storage media 110 of the cache 102, such that the physical addresses or locations of data on the solid-state storage media 110 are fully associative with the storage device 118.
  • the direct cache module 116 and/or the cache 102 use the same single mapping structure to map addresses (either logical or physical) of the storage device 118 to logical addresses of the cache 102 and to map logical addresses of the cache 102 to locations/physical addresses of a block or sector (or range of blocks or sectors) on the physical solid state storage media 110.
  • using a single mapping structure for both functions eliminates the need for a separate cache map, cache index, cache tags, or the like, decreasing access times of the cache 102.
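
The following sketch, with invented names, illustrates the point: because a storage device LBA is used directly as the cache's logical address, a single structure maps straight to physical locations in the append-only log on the solid-state media, and no cache tag array or separate cache index is consulted on a write.

```python
# Illustrative sketch (names invented): one structure maps storage-device
# LBAs, used directly as cache logical addresses, to physical locations
# in the append-only log on the solid-state media.

forward_map = {}        # lba -> physical address in the log
append_point = 0        # next free physical location in the log
invalid = set()         # physical addresses whose data has been superseded

def cache_write(lba: int, data, program):
    """Append data at the log's append point and update the single map.
    Any previously mapped location for this LBA becomes invalid and is
    left for garbage collection to reclaim."""
    global append_point
    old = forward_map.get(lba)
    if old is not None:
        invalid.add(old)                  # superseded data stays in the log for now
    program(append_point, data)           # physical placement is fully associative
    forward_map[lba] = append_point
    append_point += 1
```
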
  • Storage locations on the solid-state storage media 110, in the depicted embodiment, are freed to store data for other logical addresses.
  • The solid-state storage controller 104 stores the data at the physical addresses using a log-based, append-only writing structure such that data evicted from the cache 102 or overwritten by a subsequent write request invalidates other data in the log. Consequently, a garbage collection process recovers the physical capacity of the invalid data in the log.
  • The log-based, append-only writing structure is a logically ring-like, cyclic data structure; as new data is appended to the log-based writing structure, previously used physical capacity is reused in a circular, theoretically infinite manner.
  • FIG. 2 depicts one embodiment of a host device 114.
  • the host device 114 may be similar, in certain embodiments, to the host device 114 depicted in Figure 1.
  • the depicted embodiment includes a user application 502 in communication with a storage client 504.
  • the storage client 504 is in communication with a direct cache module 116, which, in one embodiment, is substantially similar to the direct cache module 116 of Figure 1, described above.
  • the direct cache module 116 in the depicted embodiment, is in communication with the cache 102 and the storage device 118.
  • the user application 502 is a software application operating on or in conjunction with the storage client 504.
  • the storage client 504 manages file systems, files, data, and the like and utilizes the functions and features of the direct cache module 116, the cache 102, and the storage device 118.
  • Representative examples of storage clients include, but are not limited to, a server, a file system, an operating system, a database management system ("DBMS"), a volume manager, and the like.
  • DBMS database management system
  • the storage client 504 is in communication with the direct cache module 116.
  • the storage client 504 may also be in communication with the cache 102 and/or the storage device 118 directly.
  • the storage client 504 reads data from and writes data to the storage device 118 through the direct cache module 116, which uses the cache 102 to cache read data and write data for the storage device 118.
  • the direct cache module 116 caches data in a manner that is substantially transparent to the storage client 504, with the storage client 504 sending read requests and write requests directly to the direct cache module 116.
  • the direct cache module 116 has exclusive access to, and/or control over the cache 102 and the storage device 118.
  • the direct cache module 116 may represent itself to the storage client 504 as a storage device.
  • the direct cache module 116 may represent itself as a conventional block storage device.
  • the direct cache module 116 may represent itself to the storage client 504 as a storage device having the same number of logical blocks (0 to N) as the storage device 118.
  • the direct cache module 116 may be embodied by one or more of a storage controller of the cache 102 and/or a storage controller of the storage device 118; a separate hardware controller device that interfaces with the cache 102 and the storage device 118; a device driver/software controller loaded on the host device 114; and the like.
  • The host device 114 loads a device driver for the direct cache module 116.
  • the host device 114 loads device drivers for the cache 102 and/or the storage device 118.
  • the direct cache module 116 may communicate with the cache 102 and/or the storage device 118 through device drivers loaded on the host device 114, through a storage controller of the cache 102 and/or through a storage controller of the storage device 118, or the like.
  • Hardware and/or software elements of the direct cache module 116 may form a cache controller for the cache 102 and may be in communication with the solid-state storage controller 104, sending commands to the solid-state storage controller 104 to manage operation of the cache 102.
  • the storage client 504 communicates with the direct cache module 116 through an input/output ("I/O") interface represented by a block I/O emulation layer 506.
  • the fact that the direct cache module 116 is providing caching services in front of one or more caches 102, and/or one or more backing stores, such as the storage device 118, may be transparent to the storage client 504.
  • the direct cache module 116 may present (i.e. identify itself as) a conventional block device to the storage client 504.
  • the cache 102 and/or the storage device 118 either include a distinct block I/O emulation layer 506 or are conventional block storage devices.
  • Certain conventional block storage devices divide the storage media into volumes or partitions. Each volume or partition may include a plurality of sectors. One or more sectors are organized into a logical block.
  • the logical blocks are referred to as clusters.
  • the logical blocks are referred to simply as blocks.
  • a logical block or cluster represents a smallest physical amount of storage space on the storage media that is addressable by the storage client 504.
  • a block storage device may associate n logical blocks available for user data storage across the storage media with a logical block address, numbered from 0 to n.
  • the logical block addresses may range from 0 to n per volume or partition.
  • a logical block address maps directly to a particular logical block.
  • each logical block maps to a particular set of physical sectors on the storage media.
  • the direct cache module 116, the cache 102 and/or the storage device 118 may not directly or necessarily associate logical block addresses with particular physical blocks.
  • the direct cache module 116, the cache 102, and/or the storage device 118 may emulate a conventional block storage interface to maintain compatibility with block storage clients 504 and with conventional block storage commands and protocols.
  • the direct cache module 116 appears to the storage client 504 as a conventional block storage device.
  • the direct cache module 116 provides the block I/O emulation layer 506 which serves as a block device interface, or API.
  • the storage client 504 communicates with the direct cache module 116 through this block device interface.
  • the block I/O emulation layer 506 receives commands and logical block addresses from the storage client 504 in accordance with this block device interface.
  • the block I/O emulation layer 506 provides the direct cache module 116 compatibility with block storage clients 504.
  • the direct cache module 116 may communicate with the cache 102 and/or the storage device 118 using corresponding block device interfaces.
  • a storage client 504 communicates with the direct cache module 116 through a direct interface layer 508.
  • the direct cache module 116 directly exchanges information specific to the cache 102 and/or the storage device 118 with the storage client 504.
  • the direct cache module 116 in one embodiment, may communicate with the cache 102 and/or the storage device 118 through direct interface layers 508.
  • a direct cache module 116 using the direct interface 508 may store data on the cache 102 and/or the storage device 118 as blocks, sectors, pages, logical blocks, logical pages, erase blocks, logical erase blocks, ECC chunks or in any other format or structure advantageous to the technical characteristics of the cache 102 and/or the storage device 118.
  • the storage device 118 comprises a hard disk drive and the direct cache module 116 stores data on the storage device 118 as contiguous sectors of 512 bytes, or the like, using physical cylinder-head-sector addresses for each sector, logical block addresses for each sector, or the like.
  • the direct cache module 116 may receive a logical address and a command from the storage client 504 and perform the corresponding operation in relation to the cache 102, and/or the storage device 118.
  • the direct cache module 116, the cache 102, and/or the storage device 118 may support a block I/O emulation layer 506, a direct interface 508, or both a block I/O emulation layer 506 and a direct interface 508.
  • Certain storage devices, while appearing to a storage client 504 to be a block storage device, do not directly associate particular logical block addresses with particular physical blocks, also referred to in the art as sectors.
  • Such storage devices may use a logical-to-physical translation layer 510.
  • The cache 102 includes a logical-to-physical translation layer 510.
  • the storage device 118 may also include a logical-to-physical translation layer 510.
  • the direct cache module 116 maintains a single logical-to-physical translation layer 510 for the cache 102 and the storage device 118.
  • The direct cache module 116 maintains a distinct logical-to-physical translation layer 510 for each of the cache 102 and the storage device 118.
  • the logical-to-physical translation layer 510 provides a level of abstraction between the logical block addresses used by the storage client 504 and the physical block addresses at which the cache 102 and/or the storage device 118 store the data.
  • the logical-to-physical translation layer 510 maps logical block addresses to physical block addresses of data stored on the media of the cache 102. This mapping allows data to be referenced in a logical address space using logical identifiers, such as a logical block address. A logical identifier does not indicate the physical location of data in the cache 102, but is an abstract reference to the data.
  • Further examples of a logical-to-physical translation layer 510 include the direct mapping module 606 of Figures 3and 4, the forward mapping module 802 of Figure 5, and the reverse mapping module 804 of Figure 5, each of which are discussed below.
  • the cache 102 and the storage device 118 separately manage the physical block addresses in the distinct, separate physical address spaces of the cache 102 and the storage device 118.
  • contiguous logical block addresses may in fact be stored in non-contiguous physical block addresses as the logical-to-physical translation layer 510 determines the location on the physical media of the cache 102 at which to perform data operations.
  • the logical address space of the cache 102 is substantially larger than the physical address space or storage capacity of the cache 102.
  • This "thinly provisioned" or “sparse address space” embodiment allows the number of logical addresses for data references to greatly exceed the number of possible physical addresses.
  • a thinly provisioned and/or sparse address space also allows the cache 102 to cache data for a storage device 118 with a larger address space (i.e. a larger storage capacity) than the physical address space of the cache 102.
  • the logical-to-physical translation layer 510 includes a map or index that maps logical block addresses to physical block addresses.
  • the map or index may be in the form of a B-tree, a content addressable memory ("CAM”), a binary tree, and/or a hash table, and the like.
  • The logical-to-physical translation layer 510 is a tree with nodes that represent logical block addresses and include references to corresponding physical block addresses. Example embodiments of a B-tree mapping structure are described below with regard to Figures 7 and 8.
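
As a rough stand-in for the tree-based mapping structure described here, the sketch below keeps nodes for ranges of logical block addresses in sorted order and binary-searches them; it is a simplification of a B-tree, with invented node contents, meant only to show how a logical block address resolves to a physical location in a sparse address space.

```python
import bisect

# Simplified stand-in for a tree-based logical-to-physical map: each node
# covers a contiguous range of logical block addresses and records where
# that range starts on the physical media. Node layout is illustrative.

# Nodes sorted by starting LBA: (start_lba, block_count, physical_start)
nodes = [(0, 4, 1000), (10, 2, 2000), (50, 8, 3000)]
starts = [n[0] for n in nodes]

def lookup(lba: int):
    """Return the physical address for lba, or None if it is not mapped."""
    i = bisect.bisect_right(starts, lba) - 1
    if i < 0:
        return None
    start, count, phys = nodes[i]
    if lba < start + count:
        return phys + (lba - start)
    return None                            # falls in a hole: not cached

print(lookup(11))   # 2001
print(lookup(20))   # None (unmapped "hole" in the sparse address space)
```
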
  • a logical block address maps directly to a particular physical block.
  • the storage client 504 may note that the particular logical block address is deleted and can re-use the physical block associated with that deleted logical block address without the need to perform any other action.
  • When a storage client 504 communicating with a storage controller 104 or device driver that has a logical-to-physical translation layer 510 (a storage controller 104 or device driver that does not map a logical block address directly to a particular physical block) deletes data of a logical block address, the corresponding physical block address may remain allocated because the storage client 504 may not communicate the change in used blocks to the storage controller 104 or device driver.
  • the storage client 504 may not be configured to communicate changes in used blocks (also referred to herein as "data block usage information").
  • the storage client 504 may erroneously believe that the direct cache module 116, the cache 102, and/or the storage device 118 is a conventional block storage device that would not utilize the data block usage information. Or, in certain embodiments, other software layers between the storage client 504 and the direct cache module 116, the cache 102, and/or the storage device 118 may fail to pass on data block usage information.
  • the storage controller 104 or device driver may preserve the relationship between the logical block address and a physical address and the data on the cache 102 and/or the storage device 118 corresponding to the physical block. As the number of allocated blocks increases, the performance of the cache 102 and/or the storage device 118 may suffer depending on the configuration of the cache 102 and/or the storage device 118.
  • the cache 102, and/or the storage device 118 are configured to store data sequentially, using an append-only writing process, and use a storage space recovery process that re-uses non- volatile storage media storing deallocated/unused logical blocks.
  • the cache 102, and/or the storage device 118 may sequentially write data on the solid-state storage media 110 in a log structured format and within one or more physical structures of the storage elements, the data is sequentially stored on the solid-state storage media 110.
  • Those of skill in the art will recognize that other embodiments that include several caches 102 can use the same append-only writing process and storage space recovery process.
  • the cache 102 and/or the storage device 118 achieve a high write throughput and a high number of I/O operations per second ("IOPS").
  • the cache 102 and/or the storage device 118 may include a storage space recovery, or garbage collection process that re-uses data storage cells to provide sufficient storage capacity.
  • The storage space recovery process reuses storage cells for logical blocks marked as deallocated, invalid, unused, or otherwise designated as available for storage space recovery in the logical-to-physical translation layer 510.
  • the direct cache module 116 marks logical blocks as deallocated or invalid based on a cache eviction policy, to recover storage capacity for caching additional data for the storage device 118.
  • the storage space recovery process is described in greater detail below with regard to the garbage collection module 710 of Figure 4.
  • the storage space recovery process determines that a particular section of storage may be recovered. Once a section of storage has been marked for recovery, the cache 102 and/or the storage device 118 may relocate valid blocks in the section. The storage space recovery process, when relocating valid blocks, copies the packets and writes them to another location so that the particular section of storage may be reused as available storage space, typically after an erase operation on the particular section. The cache 102 and/or the storage device 118 may then use the available storage space to continue sequentially writing data in an append-only fashion. Consequently, the storage controller 104 expends resources and overhead in preserving data in valid blocks. Therefore, physical blocks corresponding to deleted logical blocks may be unnecessarily preserved by the storage controller 104, which expends unnecessary resources in relocating the physical blocks during storage space recovery.
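
The storage space recovery idea can be sketched as follows (the function and structure names are invented and are not the patent's implementation): valid packets in a section selected for recovery are re-appended to the log and their entries in the single mapping structure are updated, after which the section holds no valid data and can be erased and reused.

```python
# Illustrative garbage collection sketch: relocate still-valid packets
# out of a section chosen for recovery, update the single map to their
# new locations, then the section can be erased and reused.

def recover_section(section_addrs, forward_map, read_phys, append):
    """section_addrs: physical addresses in the section being recovered.
    forward_map: lba -> physical address (the single mapping structure).
    read_phys(addr) returns (lba, data); append(data) returns a new address."""
    still_valid = {phys: lba for lba, phys in forward_map.items()}
    for addr in section_addrs:
        lba = still_valid.get(addr)
        if lba is None:
            continue                      # invalid/superseded data: just drop it
        _, data = read_phys(addr)
        new_addr = append(data)           # copy forward to the append point
        forward_map[lba] = new_addr       # map now points at the relocated copy
    # after relocation the section holds no valid data and may be erased
```
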
  • Some storage devices are configured to receive messages or commands notifying the storage device of these unused logical blocks so that the storage device may deallocate the corresponding physical blocks.
  • to deallocate a physical block includes marking the physical block as invalid, unused, or otherwise designating the physical block as available for storage space recovery, its contents on storage media no longer needing to be preserved by the storage device.
  • Data block usage information may also refer to information maintained by a storage device regarding which physical blocks are allocated and/or deallocated/unallocated and changes in the allocation of physical blocks and/or logical-to-physical block mapping information.
  • Data block usage information may also refer to information maintained by a storage device regarding which blocks are in use and which blocks are not in use by a storage client 504. Use of a block may include storing of data in the block on behalf of the storage client 504, reserving the block for use by the storage client 504, and the like.
  • While physical blocks may be deallocated, in certain embodiments, the cache 102 and/or the storage device 118 may not immediately erase the data on the storage media. An erase operation may be performed later in time. In certain embodiments, the data in a deallocated physical block may be marked as unavailable by the cache 102 and/or the storage device 118 such that subsequent requests for data in the physical block return a null result or an empty set of data.
  • a storage device upon receiving a TRIM command, may deallocate physical blocks for logical blocks whose data is no longer needed by the storage client 504.
  • a storage device that deallocates physical blocks may achieve better performance and increased storage space, especially storage devices that write data using certain processes and/or use a similar data storage recovery process as that described above.
  • the performance of the storage device is enhanced as physical blocks are deallocated when they are no longer needed such as through the TRIM command or other similar deallocation commands issued to the cache 102 and/or the storage device 118.
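
A sketch of the deallocation path, with invented structures: a TRIM-style command removes the logical blocks from the mapping and marks their physical media as recoverable, so reads of those blocks return an empty result and storage space recovery need not preserve them; no immediate erase is required.

```python
# Illustrative TRIM/deallocation sketch (invented structures): unmap the
# logical blocks and mark their media recoverable; no immediate erase.

def trim(lbas, forward_map, recoverable):
    for lba in lbas:
        phys = forward_map.pop(lba, None)
        if phys is not None:
            recoverable.add(phys)     # storage space recovery may now reuse it

def read(lba, forward_map, read_phys):
    phys = forward_map.get(lba)
    if phys is None:
        return b""                    # deallocated/unmapped blocks return empty data
    return read_phys(phys)
```
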
  • The direct cache module 116 clears, trims, and/or evicts cached data from the cache 102 based on a cache eviction policy, or the like.
  • clearing, trimming, or evicting data includes deallocating physical media associated with the data, marking the data as invalid or unused (using either a logical or physical address of the data), erasing physical media associated with the data, overwriting the data with different data, issuing a TRIM command or other deallocation command relative to the data, or otherwise recovering storage capacity of physical storage media corresponding to the data.
  • Clearing cached data from the cache 102 based on a cache eviction policy frees storage capacity in the cache 102 to cache more data for the storage device 118.
  • the direct cache module 116 may represent itself, the cache 102, and the storage device 118 to the storage client 504 in different configurations.
  • the direct cache module 116 may represent itself to the storage client 504 as a single storage device (i.e. as the storage device 118, as a storage device with a similar physical capacity as the storage device 118, or the like) and the cache 102 may be transparent or invisible to the storage client 504.
  • In another configuration, the direct cache module 116 may represent itself to the storage client 504 as a cache device (i.e. as the cache 102).
  • the direct cache module 116 may represent itself to the storage client 504 as a hybrid cache/storage device including both the cache 102 and the storage device 118.
  • the direct cache module 116 may pass certain commands down to the cache 102 and/or to the storage device 118 and may not pass down other commands.
  • the direct cache module 116 may support certain custom or new block I/O commands.
  • the direct cache module 116 supports a deallocation or trim command that clears corresponding data from both the cache 102 and the storage device 118, i.e. the direct cache module 116 passes the command to both the cache 102 and the storage device 118.
  • The direct cache module 116 supports a flush type trim or deallocation command that ensures that corresponding data is stored in the storage device 118 (i.e. that any dirty data is destaged to the storage device 118) before the data is cleared from the cache 102.
  • the direct cache module 116 supports an evict type trim or deallocation command that evicts corresponding data from the cache 102, marks corresponding data for eviction in the cache 102, or the like, without clearing the corresponding data from the storage device 118.
  • the direct cache module 116 may receive, detect, and/or intercept one or more predefined commands that a storage client 504 or another storage manager sent to the storage device 118, that a storage manager sends to a storage client 504, or the like.
  • the direct cache module 116 or a portion of the direct cache module 116 may be part of a filter driver that receives or detects the predefined commands, the direct cache module 116 may register with an event server to receive a notification of the predefined commands, or the like.
  • the direct cache module 116 may present an API through which the direct cache module 116 receives predefined commands.
  • the direct cache module 116 performs one or more actions on the cache 102 in response to detecting or receiving one or more predefined commands for the storage device 118, such as writing or flushing data related to a command from the cache 102 to the storage device 118, evicting data related to a command from the cache 102, switching from a write back policy to a write through policy for data related to a command, or the like.
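
The different deallocation behaviors described above might be dispatched as in the sketch below; the command labels and handler hooks are illustrative assumptions, not an actual interface of the direct cache module 116.

```python
# Illustrative dispatch of trim/deallocation variants (names invented).

def handle_trim(kind, lbas, cache, backing):
    """kind: 'both', 'flush', or 'evict' (illustrative labels).
    cache and backing expose trim/flush hooks assumed for this sketch."""
    if kind == "both":
        cache.trim(lbas)              # clear from the cache ...
        backing.trim(lbas)            # ... and pass the command to the backing store
    elif kind == "flush":
        cache.flush(lbas)             # destage any dirty data to the backing store
        cache.trim(lbas)              # then clear it from the cache only
    elif kind == "evict":
        cache.trim(lbas)              # evict or mark for eviction in the cache only
    else:
        raise ValueError(kind)
```
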
  • One example of predefined commands that the direct cache module 116 may intercept or respond to, in one embodiment, includes "freeze/thaw" commands.
  • "Freeze/thaw" commands are used in SANs, storage arrays, and the like, to suspend storage access, such as access to the storage device 118 or the like, to take a snapshot or backup of the storage without interrupting operation of the applications using the storage.
  • Freeze/thaw commands alert a storage client 504 that a snapshot is about to take place; the storage client 504 flushes pending operations, for example in-flight transactions or data cached in volatile memory; the snapshot takes place while the storage client 504's use of the storage is in a "frozen" or ready state; and once the snapshot is complete the storage client 504 resumes normal use of the storage in response to a thaw command.
  • the direct cache module 116 flushes or cleans dirty data from the cache 102 to the storage device 118 in response to detecting a "freeze/thaw" command.
  • The direct cache module 116 suspends access to the storage device 118 during a snapshot or other backup associated with a detected "freeze" command and resumes access in response to completion of the snapshot or other backup.
  • the direct cache module 116 may cache data for the storage device 118 during a snapshot or other backup without interrupting the snapshot or other backup procedure. In other words, rather than the backup/snapshot software signaling the application to quiesce I/O operations, the direct cache module 116 receives and responds to the freeze/thaw commands.
  • Other embodiments of predefined commands may include one or more of a read command, a write command, a TRIM command, an erase command, a flush command, a pin command, an unpin command, and the like.
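
A sketch of how a caching layer might react to the freeze/thaw commands described here (the class, method, and flag names are assumptions): on freeze it flushes dirty data to the backing store and writes through while frozen, and on thaw it resumes normal caching.

```python
# Illustrative freeze/thaw handling (method and flag names invented).

class FreezeAwareCache:
    def __init__(self, cache, backing):
        self.cache = cache
        self.backing = backing
        self.frozen = False

    def on_freeze(self):
        self.cache.flush_dirty(self.backing)   # clean the cache before the snapshot
        self.frozen = True                     # write through while frozen

    def on_thaw(self):
        self.frozen = False                    # resume normal (e.g. write-back) caching

    def write(self, lba, data):
        if self.frozen:
            self.backing.write(lba, data)      # keep the backing store consistent
        self.cache.write(lba, data)
```
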
  • Figure 3 depicts one embodiment of the direct cache module 116.
  • the direct cache module 116 includes a storage request module 602, a cache fulfillment module 604, and a direct mapping module 606.
  • the direct cache module 116 of Figure 3 is substantially similar to the direct cache module 116 described above with regard to Figure 1 and/or Figure 2.
  • the direct cache module 116 caches data for the storage device 118 without an extra cache mapping layer.
  • the direct cache module 116 directly maps logical addresses of the storage device 118 to logical addresses of the cache 102 using the same mapping structure that maps the logical addresses of the cache 102 to the physical storage media 110 of the cache 102.
  • the storage request module 602 detects input/output ("I/O") requests for the storage device 118, such as read requests, write requests, erase requests, TRIM requests, and/or other I/O requests for the storage device 118.
  • the storage request module 602 may detect an I/O request by receiving the I/O request directly, detecting an I/O request sent to a different module or entity (such as detecting an I/O request sent directly to the storage device 118), or the like.
  • The host device 114 sends the I/O request.
  • The direct cache module 116, in one embodiment, represents itself to the host device 114 as a storage device, and the host device 114 sends I/O requests directly to the storage request module 602.
  • An I/O request may include or may request data that is not stored on the cache 102.
  • Data that is not stored on the cache 102 may include new data not yet stored on the storage device 118, modifications to data that is stored on the storage device 118, data that is stored on the storage device 118 but not currently stored in the cache 102, or the like.
  • An I/O request, in various embodiments, may directly include data, may include a reference, a pointer, or an address for data, or the like.
  • an I/O request (such as a write request or the like) may include a range of addresses indicating data to be stored on the storage device 118 by way of a Direct Memory Access (“DMA”) or Remote DMA (“RDMA”) operation.
  • DMA Direct Memory Access
  • RDMA Remote DMA
  • A single I/O request may include several different contiguous and/or noncontiguous ranges of addresses or blocks.
  • an I/O request may include one or more destination addresses for data, such as logical and/or physical addresses for the data on the cache 102 and/or on the storage device 118.
  • the storage request module 602 and/or another cooperating module may retrieve the data of an I/O request directly from an I/O request itself, from a storage location referenced by an I/O request (i.e. from a location in system memory or other data storage referenced in a DMA or RDMA request), or the like.
  • the direct mapping module 606 in one embodiment, directly maps logical or physical addresses of the storage device 118 to logical addresses of the cache 102 and directly maps logical addresses of the cache 102 to logical addresses of the storage device 118.
  • direct mapping of addresses means that for a given address in a first address space there is exactly one corresponding address in a second address space with no translation or manipulation of the address to get from an address in the first address space to the corresponding address in the second address space.
• the direct mapping module 606, in a further embodiment, maps addresses of the storage device 118 to logical addresses of the cache 102 such that each storage device 118 address has a one-to-one relationship with a logical address of the cache 102.
  • logical addresses of the cache 102 are independent of physical storage addresses of the solid-state storage media 110 for the cache 102, making the physical storage addresses of the solid-state storage media 110 fully associative with the storage device 118. Because the solid-state storage media 110 is fully associative with the storage device 118, any physical storage block of the cache 102 may store data associated with any storage device address of the storage device 118.
  • the cache 102 in one embodiment, is logically directly mapped and physically fully associative, combining the benefits of both cache types.
• the direct mapping module 606 maps each storage block of the storage device 118 to a distinct unique logical address of the cache 102 and an associated distinct unique entry in the mapping structure, which may be associated with any distinct storage address of the solid-state storage media 110. This means that the direct mapping module 606 maps a storage block of the storage device 118 (represented by an LBA or other address) consistently to the same distinct unique logical address of the cache 102 while any distinct storage address of the solid-state storage media 110 may store the associated data, depending on a location of an append point of a sequential log-based writing structure, or the like.
  • the combination of logical direct mapping and full physical associativity that the direct mapping module 606 provides precludes cache collisions from occurring because logical addresses of the cache 102 are not shared and any storage block of the solid-state storage media 110 may store data for any address of the storage device 118, providing caching flexibility and optimal cache performance.
  • a garbage collection module 710 and/or an eviction module 712 clear invalid or old data from the cache 102 to free storage capacity for caching data.
  • the direct mapping module 606 maps storage device addresses to logical addresses of the cache 102 directly, in certain embodiments, the cache 102 provides fully associative physical storage media 110 without the processing overhead and memory consumption of a separate cache map, cache index, cache tags, or other lookup means traditionally associated with fully associative caches, eliminating a cache translation layer.
  • the direct mapping module 606 (which may be embodied by the logical-to-physical translation layer 510 described above and/or the forward mapping module 802 described below) and the associated single mapping structure serve as both a cache index or lookup structure and as a storage address mapping layer.
• the direct mapping module 606 maps addresses of the storage device 118 directly to logical addresses of the cache 102 so that the addresses of the storage device 118 and the logical addresses of the cache 102 are equal or equivalent. In one example of this embodiment, the addresses of the storage device 118 and the logical addresses of the cache 102 share a lower range of the logical address space of the cache 102, such as 0-2^32, or the like. In embodiments where the direct mapping module 606 maps addresses of the storage device 118 as equivalents of logical addresses of the cache 102, the direct mapping module 606 may use the addresses of the storage device 118 and the logical addresses of the cache 102 interchangeably, substituting one for the other without translating between them.
  • the direct mapping module 606 directly maps addresses of the storage device 118 to logical addresses of the cache 102, an address of an I/O request directly identifies both an entry in the mapping structure for a logical address of the cache 102 and an associated address of the storage device 118.
  • logical block addresses of the storage device 118 are used to index both the logical address space of the cache 102 and the logical address space of the storage device 118. This is enabled by the direct mapping module 606 presenting an address space to the host device 114 that is the same size or larger than the address space of the storage device 118.
  • the direct mapping module 606 maps logical addresses of the cache 102 (and associated addresses of the storage device 118) to physical addresses and/or locations on the physical storage media 110 of the cache 102. In a further embodiment, the direct mapping module 606 uses a single mapping structure to map addresses of the storage device 118 to logical addresses of the cache 102 and to map logical addresses of the cache 102 to locations on the physical storage media 110 of the cache 102. The direct mapping module 606 references the single mapping structure to determine whether or not the cache 102 stores data associated with an address of an I/O request. An address of an I/O request may comprise an address of the storage device 118, a logical address of the cache 102, or the like.
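A minimal sketch of the idea that a single mapping structure serves as both the cache index and the storage address map follows. It assumes a plain dictionary keyed directly by the backing store's logical block address in place of the B-tree or similar structure discussed below; presence of a key stands for a cache hit, and the value stands for the physical location on the cache media.

```python
# Sketch: one mapping structure keyed directly by backing-store logical block
# address (LBA). Membership in the map == membership in the cache; the value is
# the physical location on the cache's solid-state media. Names are illustrative.

mapping = {}          # storage-device LBA -> physical address on cache media
append_point = 0      # next free physical location (log-structured media)

def cache_store(lba, length):
    """Record that data for `lba` now lives at the current append point."""
    global append_point
    phys = append_point
    mapping[lba] = phys
    append_point += length
    return phys

def cache_lookup(lba):
    """Return the physical cache address for `lba`, or None on a cache miss."""
    return mapping.get(lba)

cache_store(182, length=3)
assert cache_lookup(182) is not None     # hit: cache stores data for LBA 182
assert cache_lookup(999) is None         # miss: data resides only on the backing store
```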
• the single mapping structure may include a B-tree, B*-tree, B+-tree, a CAM, a binary tree, a hash table, an index, an array, a linked-list, a look-up table, or another mapping data structure.
• Use of a B-tree as the mapping structure, in certain embodiments, is particularly advantageous where the logical address space presented to the client is a very large address space (2^64 addressable blocks, which may or may not be sparsely populated). Because B-trees maintain an ordered structure, searching such a large space remains very fast.
  • Example embodiments of a B-tree as a mapping structure are described in greater detail with regard to Figures 7 and 8.
  • the mapping structure includes a B-tree with multiple nodes and each node may store several entries.
  • each entry may map a variable sized range or ranges of logical addresses of the cache 102 to a location on the physical storage media 110 of the cache 102.
  • the number of nodes in the B-tree may vary as the B-tree grows wider and/or deeper.
  • Caching variable sized ranges of data associated with contiguous and/or non-contiguous ranges of storage device addresses is more efficient than caching fixed size cache lines, as the cache 102 may more closely match data use patterns without restrictions imposed by fixed size cache lines.
  • the mapping structure of the direct mapping module 606 only includes a node or entry for logical addresses of the cache 102 that are associated with currently cached data in the cache 102.
  • membership in the mapping structure represents membership in the cache 102.
  • membership in the mapping structure may represent valid allocated blocks on the solid-state storage media 110.
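To picture entries that map variable-sized ranges of logical addresses, where membership in the structure means the range is currently cached, the sketch below keeps range entries sorted by starting address and searches them with a binary search. This is only an illustration under that assumption; the described embodiments may use a B-tree or other structure, and the specific ranges and physical addresses are invented.

```python
import bisect

# Each entry maps a variable-sized range [start, start + length) of logical
# addresses to a physical location on the cache media. Entries are kept sorted
# by starting address so lookup is a binary search, loosely mimicking a B-tree.
entries = [
    (5, 24, 1000),     # logical 005-028 -> physical 1000
    (72, 12, 2100),    # logical 072-083 -> physical 2100
    (178, 15, 3420),   # logical 178-192 -> physical 3420
]
starts = [e[0] for e in entries]

def lookup(logical):
    """Return the physical address for `logical`, or None if not cached."""
    i = bisect.bisect_right(starts, logical) - 1
    if i < 0:
        return None
    start, length, phys = entries[i]
    if start <= logical < start + length:
        return phys + (logical - start)   # offset within the cached range
    return None                           # falls in a gap: cache miss

assert lookup(182) == 3420 + (182 - 178)  # hit within the 178-192 range
assert lookup(100) is None                # no entry covers 100: cache miss
```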
• the solid-state storage controller 104 adds entries, nodes, and the like to the mapping structure as data is stored on the solid-state storage media 110 and removes entries, nodes, and the like from the mapping structure in response to data being invalidated, cleared, trimmed, or otherwise removed from the solid-state storage media 110.
• Because the mapping structure is shared for both cache management and data storage management on the solid-state storage media 110, the present invention also tracks whether the data is dirty to determine whether the data is persisted on the storage device 118.
  • the mapping structure of the direct mapping module 606 may include one or more nodes or entries for logical addresses of the cache 102 that are not associated with currently stored data in the cache 102, but that are mapped to addresses of the storage device 118 that currently store data.
  • the nodes or entries for logical addresses of the cache 102 that are not associated with currently stored data in the cache 102 in one embodiment, are not mapped to locations on the physical storage media 110 of the cache 102, but include an indicator that the cache 102 does not store data corresponding to the logical addresses.
  • the nodes or entries in a further embodiment, may include information that the data resides in the storage device 118.
  • Nodes, entries, records, or the like of the mapping structure may include information (such as physical addresses, offsets, indicators, etc.) directly, as part of the mapping structure, or may include pointers, references, or the like for locating information in memory, in a table, or in another data structure.
  • the mapping of addresses of the storage device 118 to the logical addresses of the cache 102 and/or the mapping of the logical addresses of the cache 102 to locations on the physical storage media 110 of the cache 102 are persistent, even if the cache 102 is subsequently paired with a different host device 114, the cache 102 undergoes an unexpected or improper shutdown, the cache 102 undergoes a power loss, or the like.
  • the storage device 118 is also subsequently paired with the different host device 114 along with the cache 102.
  • the cache 102 rebuilds or restores at least a portion of data from the storage device 118 on a new storage device associated with the different host device 114, based on the mapping structure and data stored on the cache 102.
  • the direct mapping module 606 reconstructs the mapping structure and included entries by scanning data on the solid-state storage media 110, such as a sequential log-based writing structure or the like, and extracting logical addresses, sequence indicators, and the like from data at physical locations on the solid-state storage media 110.
  • the cache fulfillment module 604 stores data of I/O requests in a format that associates the data with sequence indicators for the data and with respective logical addresses of the cache 102 for the data. If the mapping structure becomes lost or corrupted, the direct mapping module 606 may use the physical address or location of data on the solid-state storage media 110 with the associated sequence indicators, logical addresses, and/or other metadata stored with the data, to reconstruct entries of the mapping structure.
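A small sketch of rebuilding the mapping structure from metadata stored with the data follows. The list of (physical address, logical address, sequence indicator) records stands in for the result of scanning the log-based writing structure; the record layout is an assumption for the example, not the on-media format.

```python
# Sketch: rebuild a logical-to-physical map by replaying metadata stored with
# the data. When the same logical address appears more than once, the entry
# with the highest sequence indicator wins (it is the most recent write).

scanned_packets = [
    # (physical_address, logical_address, sequence_indicator)
    (0,   182, 1),
    (8,   500, 2),
    (16,  182, 3),   # a newer version of logical address 182
]

def rebuild(packets):
    mapping = {}
    best_seq = {}
    for phys, lba, seq in packets:
        if lba not in best_seq or seq > best_seq[lba]:
            best_seq[lba] = seq
            mapping[lba] = phys
    return mapping

rebuilt = rebuild(scanned_packets)
assert rebuilt[182] == 16    # the most recent copy of LBA 182
assert rebuilt[500] == 8
```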
• the forward mapping module 802 described below with regard to Figures 5 and 6 is another embodiment of the direct mapping module 606.
• the direct mapping module 606 receives one or more addresses of an I/O request and references the mapping structure to determine whether or not the cache 102 stores data associated with the I/O request.
  • the direct mapping module 606 in response to referencing the mapping structure, may provide information from the mapping structure to the cache fulfillment module 604, such as a determination whether the cache 102 stores data of the I/O request, a physical storage address on the solid-state storage media 110 for data of the I/O request, or the like to assist the cache fulfillment module 604 in satisfying the I/O request.
  • the direct mapping module 606 updates the mapping structure to reflect changes or updates to the cache 102 that the cache fulfillment module 604 made to satisfy the I/O request.
  • the cache fulfillment module 604 satisfies I/O requests that the storage request module 602 detects.
  • the direct mapping module 606 determines that the cache 102 stores data of an I/O request, such as storing at least one data block of the I/O request or the like
  • the cache fulfillment module 604 satisfies the I/O request at least partially using the cache 102.
  • the cache fulfillment module 604 satisfies an I/O request based on the type of I/O request. For example, the cache fulfillment module 604 may satisfy a write I/O request by storing data of the I/O request to the cache 102, may satisfy a read I/O request by reading data of the I/O request from the cache 102, and the like.
• An embodiment of the cache fulfillment module 604 that includes a write request module 703 for fulfilling write I/O requests and a read request module 704 for fulfilling read I/O requests is described below in greater detail with regard to Figure 4.
  • the cache fulfillment module 604 stores data of the I/O request to the cache 102.
  • the cache fulfillment module 604 in response to a write I/O request, a cache miss, or the like, in certain embodiments, stores data of an I/O request to the solid-state storage media 110 of the cache 102 sequentially to preserve an ordered sequence of I/O operations performed on the solid-state storage media 110.
• the cache fulfillment module 604 may store the data of I/O requests to the cache 102 sequentially by appending the data to an append point of a sequential, log-based, cyclic writing structure of the solid-state storage media 110, in the order that the storage request module 602 receives the I/O requests.
  • a sequential, log-based, cyclic writing structure is described below with regard to Figure 8.
  • the cache fulfillment module 604 stores data in a manner that associates the data with a sequence indicator for the data.
  • the cache fulfillment module 604 may store a numerical sequence indicator as metadata with data of an I/O request, may use the sequential order of a log-based writing structure as a sequence indicator, or the like.
  • the cache fulfillment module 604 stores data in a manner that associates the data with respective logical addresses of the data, storing one or more logical block addresses of the data with the data in a sequential, log-based writing structure or the like.
  • the cache fulfillment module 604 enables the direct mapping module 606 to reconstruct, rebuild, and/or recover entries in the mapping structure using the stored sequence indicators and logical addresses, as described above.
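The write side of that arrangement can be sketched as appending each block of data at the append point together with its logical address and a sequence indicator, which is what makes the rebuild shown earlier possible. The in-memory list standing in for the log and the field names are illustrative assumptions.

```python
# Sketch: append data to a log together with the metadata (logical address and
# sequence indicator) needed to rebuild the mapping structure after a crash.

log = []            # stands in for the sequential, log-based writing structure
mapping = {}        # logical address -> index (physical location) in the log
sequence = 0

def append_write(lba, data):
    """Append `data` for logical address `lba` at the append point."""
    global sequence
    sequence += 1
    phys = len(log)                       # append point = end of the log
    log.append({"lba": lba, "seq": sequence, "data": data})
    mapping[lba] = phys                   # update the single mapping structure
    return phys

append_write(182, b"AAA")
append_write(182, b"BBB")                 # newer data is appended, not overwritten in place
assert log[mapping[182]]["data"] == b"BBB"
```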
  • Figure 4 depicts another embodiment of the direct cache module 116.
  • the direct cache module 116 includes the block I/O emulation layer 506, the direct interface layer 508, the storage request module 602, the cache fulfillment module 604, and the direct mapping module 606, substantially as described above with regard to Figures 2 and 3.
  • the direct cache module 116 in the depicted embodiment, further includes a storage device interface module 702, a write acknowledgement module 706, a cleaner module 708, a garbage collection module 710, and an eviction module 712.
  • the cache fulfillment module 604 in the depicted embodiment, includes a write request module 703 and a read request module 704.
• the write request module 703 services and satisfies write I/O requests that the storage request module 602 detects.
  • a write request includes data that is not stored on the storage device 118, such as new data not yet stored on the storage device 118, modifications to data that is stored on the storage device 118, and the like.
  • a write request in various embodiments, may directly include the data, may include a reference, a pointer, or an address for the data, or the like.
  • a write request includes a range of addresses indicating data to be stored on the storage device 118 by way of a Direct Memory Access (“DMA”) or Remote DMA (“RDMA”) operation.
  • a single write request may include several different contiguous and/or noncontiguous ranges of addresses or blocks.
  • a write request includes one or more destination addresses for the associated data, such as logical and/or physical addresses for the data on the storage device 118.
  • the write request module 703 and/or another cooperating module may retrieve the data of a write request directly from the write request itself, from a storage location referenced by a write request (i.e. from a location in system memory or other data storage referenced in a DMA or RDMA request), or the like to service the write request.
  • the write request module 703, in one embodiment, writes data of a write request to the cache 102 at one or more logical addresses of the cache 102 corresponding to the addresses of the write request as mapped by the direct mapping module 606.
  • the write request module 703 writes the data of the write request to the cache 102 by appending the data to a sequential, log-based, cyclic writing structure of the physical solid-state storage media 110 of the cache 102 at an append point.
  • the write request module 703, in one embodiment returns one or more physical addresses or locations corresponding to the append point and the direct mapping module 606 maps the one or more logical addresses of the cache 102 to the one or more physical addresses corresponding to the append point.
  • the read request module 704 services and satisfies read I/O requests that the storage request module 602 detects for data stored in the cache 102 and/or the storage device 118.
  • a read request is a read command with an indicator, such as a logical address or range of logical addresses, of the data being requested.
  • the read request module 704 supports read requests with several contiguous and/or noncontiguous ranges of logical addresses, as discussed above with regard to the storage request module 602.
• the read request module 704 includes a read miss module 718 and a read retrieve module 720.
  • the read miss module 718 determines whether or not requested data is stored in the cache 102, in cooperation with the direct mapping module 606 or the like.
  • the read miss module 718 may query the cache 102 directly, query the direct mapping module 606, query the mapping structure of the direct mapping module 606, or the like to determine whether or not requested data is stored in the cache 102.
  • the read retrieve module 720 returns requested data to the requesting entity, such as the host device 114. If the read miss module 718 and/or the direct mapping module 606 determine that the cache 102 stores the requested data, in one embodiment, the read retrieve module 720 reads the requested data from the cache 102 and returns the data to the requesting entity.
  • the read retrieve module 720 reads the requested data from the storage device 118, stores the requested data to the cache 102, and returns the requested data to the requesting entity to satisfy the associated read request. In one embodiment, the read retrieve module 720 writes the requested data to the cache 102 by appending the requested data to an append point of a sequential, log-based, cyclic writing structure of the cache 102.
  • the read retrieve module 720 provides one or more physical addresses corresponding to the append point to the direct mapping module 606 with the one or more logical addresses of the requested data and the direct mapping module 606 adds and/or updates the mapping structure with the mapping of logical and physical addresses for the requested data.
  • the read retrieve module 720 in one embodiment, writes the requested data to the cache 102 using and/or in conjunction with the cache fulfillment module 604.
  • the read miss module 718 detects a partial miss, where the cache 102 stores one portion of the requested data but does not store another.
  • a partial miss in various embodiments, may be the result of eviction of the unstored data, a block I/O request for noncontiguous data, or the like.
  • the read miss module 718 in one embodiment, reads the missing data or "hole" data from the storage device 118 and returns both the portion of the requested data from the cache 102 and the portion of the requested data from the storage device 118 to the requesting entity. In one embodiment, the read miss module 718 stores the missing data retrieved from the storage device 118 in the cache 102.
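The read path described above, covering hits, misses, and partial misses, might look roughly like the following sketch. The block-granular loop and the helper names are assumptions for illustration; the point is only that missing blocks are read from the backing store, returned to the requester, and also stored in the cache.

```python
# Sketch of read fulfillment with hit, miss, and partial-miss handling.
# `cache` and `backing` are plain dicts standing in for the cache media and
# the backing storage device; names are illustrative.

cache = {10: b"c10", 11: b"c11"}                 # cached blocks
backing = {10: b"b10", 11: b"b11", 12: b"b12"}   # backing store holds everything

def read(lba, count):
    """Return `count` blocks starting at `lba`, filling the cache on misses."""
    result = []
    for block in range(lba, lba + count):
        data = cache.get(block)
        if data is None:                         # miss, or the "hole" of a partial miss
            data = backing[block]                # read the missing data from the backing store
            cache[block] = data                  # populate the cache for future hits
        result.append(data)
    return result

assert read(10, 2) == [b"c10", b"c11"]           # full hit, served from the cache
assert read(11, 2) == [b"c11", b"b12"]           # partial miss: block 12 came from backing
assert 12 in cache                               # and is now cached
```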
  • the write acknowledgement module 706 acknowledges, to a requesting entity such as the host device 114, a write request that the storage request module 602 receives.
  • the write acknowledgement module 706 implements a particular data integrity policy.
  • embodiments of the present invention permit variations in the data integrity policy that is implemented.
  • the write acknowledgement module 706 acknowledges the write request in response to the cleaner module 708 writing data of the write request to the storage device 118, as described below.
  • the cleaner module 708 writes data from the cache 102 to the storage device 118, destaging or cleaning the data.
• Data that is stored in the cache 102 that is not yet stored in the storage device 118 is referred to as "dirty" data.
• Once dirty data has been written to the storage device 118, the data is referred to as "clean."
  • the cleaner module 708 cleans data in the cache 102 by writing the data to the storage device 118.
  • the cleaner module 708, in one embodiment, may determine an address for the data in the storage device 118 based on a write request corresponding to the data.
  • the cleaner module 708 determines an address for the data in the storage device 118 based on a logical address of the data in the cache 102, based on the mapping structure of the direct mapping module 606, or the like. In another embodiment, the cleaner module 708 uses the reverse mapping module 804 to determine an address for the data in the storage device 118 based on a physical address of the data in the cache 102.
  • the cleaner module 708, in one embodiment, writes data to the storage device 118 based on a write policy.
  • the cleaner module 708 uses a write-back write policy, and does not immediately write data of a write request to the storage device 118 upon receiving the write request. Instead, the cleaner module 708, in one embodiment, performs an opportunistic or "lazy" write, writing data to the storage device 118 when the data is evicted from the cache 102, when the cache 102 and/or the direct cache module 116 has a light load, when available storage capacity in the cache 102 falls below a threshold, or the like.
  • the cleaner module 708 reads data from the cache 102, writes the data to the storage device 118, and sets an indicator that the storage device 118 stores the data, in response to successfully writing the data to the storage device 118. Setting the indicator that the storage device 118 stores the data alerts the garbage collection module 710 that the data may be cleared from the cache 102 and/or alerts the eviction module 712 that the data may be evicted from the cache 102.
  • the cleaner module 708 sets an indicator that the storage device 118 stores data by marking the data as clean in the cache 102. In a further embodiment, the cleaner module 708 may set an indicator that the storage device 118 stores data by communicating an address of the data to the direct mapping module 606, sending a request to the direct mapping module 606 to update an indicator in a logical to physical mapping or other mapping structure, or the like.
  • the cleaner module 708 maintains a separate data structure indicating which data in the cache 102 is clean and which data is dirty. In another embodiment, the cleaner module 708 references indicators in a mapping of logical addresses to physical media addresses, such as a mapping structure maintained by the direct mapping module 606, to determine which data in the cache 102 is clean and which data is dirty.
  • the cleaner module 708 uses a write-through policy, performing a synchronous write to the storage device 118 for each write request that the storage request module 602 receives.
  • the cleaner module 708, in one embodiment, transitions from a write-back to a write-through write policy in response to a predefined error condition, such as an error or failure of the cache 102, or the like.
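A compact sketch of a cleaner using a write-back policy with a write-through fallback is shown below. The per-entry dirty flag, the dirty-count threshold that triggers lazy cleaning, and the error flag that switches policies are assumptions made for the example.

```python
# Sketch of a cleaner using a write-back policy with a write-through fallback.

backing = {}                 # backing storage device
entries = {}                 # lba -> {"data": ..., "dirty": bool}
write_through = False        # flips to True on a predefined error condition

def write(lba, data):
    entries[lba] = {"data": data, "dirty": True}
    if write_through:        # synchronous write for each request
        clean(lba)

def clean(lba):
    """Destage one entry: write it to the backing store and mark it clean."""
    entry = entries[lba]
    backing[lba] = entry["data"]
    entry["dirty"] = False   # indicator that the storage device now stores the data

def clean_opportunistically(dirty_threshold=2):
    """Lazy cleaning: only destage once enough dirty data has accumulated."""
    dirty = [lba for lba, e in entries.items() if e["dirty"]]
    if len(dirty) >= dirty_threshold:
        for lba in dirty:
            clean(lba)

write(1, b"a"); write(2, b"b")
clean_opportunistically()
assert backing == {1: b"a", 2: b"b"} and not entries[1]["dirty"]
```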
  • the garbage collection module 710 recovers storage capacity of physical storage media corresponding to data that is marked as invalid, such as data cleaned by the cleaner module 708 and/or evicted by the eviction module 712.
  • the garbage collection module 710 recovers storage capacity of physical storage media corresponding to data that the cleaner module 708 has cleaned and that the eviction module 712 has evicted, or that has been otherwise marked as invalid.
  • the garbage collection module 710 allows clean data to remain in the cache 102 as long as possible until the eviction module 712 evicts the data or the data is otherwise marked as invalid, to decrease the number of cache misses.
  • the garbage collection module 710 recovers storage capacity of physical storage media corresponding to invalid data opportunistically. For example, the garbage collection module 710 may recover storage capacity in response to a lack of available storage capacity, a percentage of data marked as invalid reaching a predefined threshold level, a consolidation of valid data, an error detection rate for a section of physical storage media reaching a threshold value, performance crossing a threshold value, a scheduled garbage collection cycle, identifying a section of the physical storage media 110 with a high amount of invalid data, identifying a section of the physical storage media 110 with a low amount of wear, or the like.
  • the garbage collection module 710 relocates valid data that is in a section of the physical storage media 110 in the cache 102 that the garbage collection module 710 is recovering to preserve the valid data.
• the garbage collection module 710 is part of an autonomous garbage collector system that operates within the cache 102. This allows the cache 102 to manage data so that data is systematically spread throughout the solid-state storage media 110, or other physical storage media, to improve performance and data reliability, to avoid overuse and underuse of any one location or area of the solid-state storage media 110, and to lengthen the useful life of the solid-state storage media 110.
• the garbage collection module 710, upon recovering a section of the physical storage media 110, allows the cache 102 to re-use the section of the physical storage media 110 to store different data. In one embodiment, the garbage collection module 710 adds the recovered section of physical storage media to an available storage pool for the cache 102, or the like. The garbage collection module 710, in one embodiment, erases existing data in a recovered section. In a further embodiment, the garbage collection module 710 allows the cache 102 to overwrite existing data in a recovered section. Whether or not the garbage collection module 710, in one embodiment, erases existing data in a recovered section may depend on the nature of the physical storage media. For example, Flash media requires that cells be erased prior to reuse, whereas magnetic media such as hard drives do not have that requirement.
  • the garbage collection module 710 may mark the data in the recovered section as unavailable to service read requests so that subsequent requests for data in the recovered section return a null result or an empty set of data until the cache 102 overwrites the data.
  • the garbage collection module 710 recovers storage capacity of the cache 102 one or more storage divisions at a time.
  • a storage division in one embodiment, is an erase block or other predefined division. For flash memory, an erase operation on an erase block writes ones to every bit in the erase block. This is a lengthy process compared to a program operation which starts with a location being all ones, and as data is written, some bits are changed to zero.
  • the eviction module 712 may erase the data of a storage division as it evicts data, instead of the garbage collection module 710.
  • allowing the eviction module 712 to mark data as invalid rather than actually erasing the data and allowing the garbage collection module 710 to recover the physical media associated with invalid data increases efficiency because, as mentioned above, for flash memory and other similar storage an erase operation takes a significant amount of time. Allowing the garbage collection module 710 to operate autonomously and opportunistically within the cache 102 provides a way to separate erase operations from reads, writes, and other faster operations so that the cache 102 operates very efficiently.
  • the garbage collection module 710 is integrated with and/or works in conjunction with the cleaner module 708 and/or the eviction module 712.
  • the garbage collection module 710 clears data from the cache 102 in response to an indicator that the storage device stores the data (i.e. that the cleaner module 708 has cleaned the data) based on a cache eviction policy (i.e. in response to the eviction module 712 evicting the data).
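The cooperation between cleaning, eviction, and garbage collection can be illustrated with the sketch below: when a storage division is recovered, valid data that should remain cached is copied forward to the append point, while invalid data, and clean data selected for eviction, is simply dropped. The erase-block layout and the eviction predicate are assumptions for the example.

```python
# Sketch: recover one erase block. Valid data that should stay cached is copied
# forward to the append point; invalid data and clean data chosen for eviction
# is dropped, freeing the block without an extra copy (less write amplification).

append_log = []   # destination log; appending here models the append point

def recover_erase_block(blocks, evict):
    """`blocks` is a list of dicts with keys lba/data/valid/clean.
    `evict` is a predicate deciding whether a clean block should be evicted."""
    kept = []
    for blk in blocks:
        if not blk["valid"]:
            continue                            # already invalid: nothing to preserve
        if blk["clean"] and evict(blk):
            continue                            # evict: the backing store already has it
        append_log.append(blk)                  # copy valid, retained data forward
        kept.append(blk["lba"])
    return kept                                 # the erase block can now be erased/reused

erase_block = [
    {"lba": 1, "data": b"x", "valid": True,  "clean": True},
    {"lba": 2, "data": b"y", "valid": False, "clean": True},   # stale copy
    {"lba": 3, "data": b"z", "valid": True,  "clean": False},  # dirty: must be kept
]
kept = recover_erase_block(erase_block, evict=lambda blk: blk["lba"] == 1)
assert kept == [3]          # LBA 1 evicted, LBA 2 invalid, LBA 3 copied forward
```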
  • the eviction module 712 in one embodiment, evicts data by marking the data as invalid. In other embodiments, the eviction module 712 may evict data by erasing the data, overwriting the data, trimming the data, deallocating physical storage media associated with the data, or otherwise clearing the data from the cache 102.
  • the eviction module 712 evicts data from the cache 102 based on a cache eviction policy.
  • the cache eviction policy in one embodiment, is based on a combination or a comparison of one or more cache eviction factors.
  • the cache eviction factors include wear leveling of the physical storage media 110.
  • the cache eviction factors include a determined reliability of a section of the physical storage media 110.
  • the cache eviction factors include a failure of a section of the physical storage media 110.
  • the cache eviction factors in one embodiment, include a least recently used ("LRU") block of data.
  • the cache eviction factors include a frequency of access of a block of data, i.e. how "hot” or “cold” a block of data is. In one embodiment, the cache eviction factors include a position of a block of data in the physical storage media 110 relative to other "hot” data.
  • the direct mapping module 606 determines one or more of the cache eviction factors based on a history of access to the mapping structure.
  • the direct mapping module 606, in a further embodiment, identifies areas of high frequency, "hot,” use and/or low frequency, "cold,” use by monitoring accesses of branches or nodes in the mapping structure.
  • the direct mapping module 606, in a further embodiment, determines a count or frequency of access to a branch, directed edge, or node in the mapping structure. In one embodiment, a count associated with each node of a b-tree like mapping structure may be incremented for each I/O read operation and/or each I/O write operation that visits the node in a traversal of the mapping structure.
  • the eviction module 712 evicts data from the cache 102 intelligently and/or opportunistically based on activity in the mapping structure monitored by the direct mapping module 606, based on information about the physical storage media 110, and/or based on other cache eviction factors.
  • the direct mapping module 606, the eviction module 712, and/or the garbage collection module 710 in one embodiment, share information to increase the efficiency of the cache 102, to reduce cache misses, to make intelligent eviction decisions, and the like.
  • the direct mapping module 606 tracks or monitors a frequency that I/O requests access logical addresses in the mapping structure.
  • the direct mapping module 606, in a further embodiment, stores the access frequency information in the mapping structure, communicates the access frequency information to the eviction module 712 and/or to the garbage collection module 710, or the like.
  • the direct mapping module 606, in another embodiment, may track, collect, or monitor other usage/access statistics relating to the logical to physical mapping of addresses for the cache 102 and/or relating to the mapping between the logical address space of the cache 102 and the address space of the storage device 118, and may share that data with the eviction module 712 and/or with the garbage collection module 710.
  • One example of a benefit of sharing information between the direct mapping module 606, the eviction module 712, and the garbage collection module 710, in certain embodiments, is that write amplification can be reduced.
  • the garbage collection module 710 copies any valid data in an erase block forward to the current append point of the log-based append-only writing structure of the cache 102 before recovering the physical storage capacity of the erase block.
  • the garbage collection module 710 may clear certain valid data from an erase block without copying the data forward (for example because the replacement algorithm for the eviction module 712 indicates that the valid data is unlikely to be re-requested soon), reducing write amplification, increasing available physical storage capacity and efficiency.
  • the garbage collection module 710 preserves valid data with an access frequency in the mapping structure that is above a predefined threshold, and clears valid data from an erase block if the valid data has an access frequency below the predefined threshold.
  • the eviction module 712 may mark certain data as conditionally evictable, conditionally invalid, or the like, and the garbage collection module 710 may evict the conditionally invalid data based on an access frequency or other data that the direct mapping module 606 provides.
  • the direct mapping module 606, the eviction module 712, and the garbage collection module 710 cooperate such that valid data that is in the cache 102 and is dirty gets stored on the storage device 118 by the garbage collection module 710 rather than copied to the front of the log, because the eviction module 712 indicated that it is more advantageous to do so.
  • modules responsible for managing the non-volatile storage media that uses a log-based append-only writing structure can leverage the information available in the direct cache module 116.
  • modules responsible for managing the cache 102 can leverage the information available in solid-state controller 104 regarding the condition of the non- volatile storage media.
  • the direct mapping module 606, the eviction module 712, and the garbage collection module 710 cooperate such that selection of one or more blocks of data by the eviction module 712 is influenced by the Uncorrectable Bit Error Rates (UBER), Correctable Bit Error Rates (BER), Program / Erase (PE) cycle counts, read frequency, or other non- volatile solid state storage specific attributes of the region of the solid-state storage media 110 in the cache 102 that presently holds the valid data.
• High BER, UBER, or PE cycle counts may be used as factors to increase the likelihood that the eviction module 712 will evict a particular block range stored on media having those characteristics.
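One way to picture how access frequency and media-health attributes might feed an eviction decision is a simple scoring function in which colder data and data stored on more worn or error-prone media become more likely eviction candidates. The weights and the linear combination below are purely illustrative assumptions, not part of the described eviction policy.

```python
# Sketch: rank candidate blocks for eviction using cache eviction factors.
# Higher score == better eviction candidate. Weights are arbitrary examples.

def eviction_score(block):
    cold_factor = 1.0 / (1 + block["access_count"])   # colder data scores higher
    wear_factor = block["pe_cycles"] / 10_000          # more worn media scores higher
    error_factor = block["ber"] * 1_000                # error-prone media scores higher
    return cold_factor + wear_factor + error_factor

candidates = [
    {"lba": 10, "access_count": 50, "pe_cycles": 1_000, "ber": 1e-6},
    {"lba": 11, "access_count": 1,  "pe_cycles": 9_000, "ber": 1e-4},
    {"lba": 12, "access_count": 5,  "pe_cycles": 2_000, "ber": 1e-5},
]

victim = max(candidates, key=eviction_score)
assert victim["lba"] == 11    # cold data on worn, error-prone media is evicted first
```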
  • the storage device interface module 702 provides an interface between the direct cache module 116, the cache 102, and/or the storage device 118.
  • the direct cache module 116 may interact with the cache 102 and/or the storage device 118 through a block device interface, a direct interface, a device driver on the host device 114, a storage controller, or the like.
  • the storage device interface module 702 provides the direct cache module 116 with access to one or more of these interfaces.
  • the storage device interface module 702 may receive read commands, write commands, and clear (or TRIM) commands from one or more of the cache fulfillment module 604, the direct mapping module 606, the read request module 704, the cleaner module 708, the garbage collection module 710, and the like and relay the commands to the cache 102 and/or the storage device 118.
  • the storage device interface module 702 may translate or format a command into a format compatible with an interface for the cache 102 and/or the storage device 118.
• the storage device interface module 702 has exclusive ownership over the storage device 118 and the direct cache module 116 is an exclusive gateway to accessing the storage device 118. Providing the storage device interface module 702 with exclusive ownership over the storage device 118 and preventing access to the storage device 118 by other routes obviates stale data issues and cache coherency requirements, because all changes to data in the storage device 118 are processed by the direct cache module 116.
  • the storage device interface module 702 does not have exclusive ownership of the storage device 118, and the storage device interface module 702 manages cache coherency for the cache 102.
  • the storage device interface module 702 may access a common directory with other users of the storage device 118 to maintain coherency, may monitor write operations from other users of the storage device 118, may participate in a predefined coherency protocol with other users of the storage device 118, or the like.
  • FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus 800 to efficiently map physical and logical addresses in accordance with the present invention.
  • the apparatus 800 includes a forward mapping module 802, a reverse mapping module 804, and a storage space recovery module 806, which are described below. At least a portion of one or more of the forward mapping module 802, the reverse mapping module 804, and the storage space recovery module 806 is located within one or more of a requesting device that transmits the storage request, the solid-state storage media 110, the storage controller 104, and a computing device separate from the requesting device, the solid-state storage media 110, and the storage controller 104.
• the forward mapping module 802 and the reverse mapping module 804, in one embodiment, are identical to the forward mapping module 802 and the reverse mapping module 804 described above.
  • the forward mapping module 802 and the reverse mapping module 804 work in conjunction with the direct mapping module 606.
  • the forward mapping module 802 and the reverse mapping module 804 may be part of the direct mapping module 606, may be separate and work together with the direct mapping module 606, or the like.
  • the apparatus 800 includes a forward mapping module 802 that uses a forward map to identify one or more physical addresses of data of a data segment.
  • the physical addresses are identified from one or more logical addresses of the data segment, which are identified in a storage request directed to the solid-state storage media 110.
  • a storage request may include a request to read data stored in the solid-state storage media 110.
  • the storage request to read data includes a logical address or logical identifier associated with the data stored on the solid-state storage media 110.
• the read request may include a logical or virtual address of a file from which the data segment originated, which may be interpreted to mean that the read request is a request to read an entire data segment associated with the logical or virtual address.
• the read request, in another example, includes a logical address along with an offset as well as a data length of the data requested in the read request. For example, if a data segment is 20 blocks, a read request may include an offset of 16 blocks (i.e. start at block 16 of 20) and a data length of 5 so that the read request reads the last 5 blocks of the data segment.
  • the read request may include an offset and data length also in a request to read an entire data segment or to read from the beginning of a data segment.
  • Other requests may also be included in a storage request, such as a status request. Other types and other forms of storage requests are contemplated within the scope of the present invention and will be recognized by one of skill in the art.
• the apparatus 800 includes a forward map that maps one or more logical addresses to one or more physical addresses of data stored in the solid-state storage media 110.
  • the logical addresses correspond to one or more data segments relating to the data stored in the solid-state storage media 110.
  • the one or more logical addresses typically include discrete addresses within a logical address space where the logical addresses sparsely populate the logical address space.
  • data length information may also be associated with the logical address and may also be included in the forward map.
  • the data length typically corresponds to the size of the data segment. Combining a logical address and data length information associated with the logical address may be used to facilitate reading a particular portion within a data segment.
  • the forward map is typically a data structure that facilitates quickly traversing the forward map to find a physical address based on a logical address.
  • the forward map may include a B-tree, a content addressable memory ("CAM”), a binary tree, a hash table, or other data structure that facilitates quickly searching a sparsely populated space or range.
  • the apparatus 800 includes a reverse mapping module 804 that uses a reverse map to determine a logical address of a data segment from a physical address.
  • the reverse map is used to map the one or more physical addresses to one or more logical addresses and can be used by the reverse mapping module 804 or other process to determine a logical address from a physical address.
  • the reverse map beneficially maps the solid-state storage media 110 into erase regions such that a portion of the reverse map spans an erase region of the solid-state storage media 110 erased together during a storage space recovery operation.
  • the storage space recovery operation (or garbage collection operation) recovers erase regions for future storage of data. By organizing the reverse map by erase region, the storage space recovery module 806 can efficiently identify an erase region for storage space recovery and identify valid data.
  • the storage space recovery module 806 is discussed in more detail below.
  • the physical addresses in the reverse map are associated or linked with the forward map so that if logical address A is mapped to physical address B in the forward map, physical address B is mapped to logical address A in the reverse map.
  • the forward map includes physical addresses that are linked to entries in the reverse map.
  • the forward map includes pointers to physical addresses in the reverse map or some other intermediate list, table, etc.
  • the reverse map includes one or more source parameters.
  • the source parameters are typically received in conjunction with a storage request and include at least one or more logical addresses.
  • the source parameters may also include data lengths associated with data of a data segment received in conjunction with a storage request.
• the reverse map does not include source parameters in the form of logical addresses or data lengths, and the source parameters are stored with data of the data segment stored on the solid-state storage media 110.
• the source parameters may be discovered from a physical address in the reverse map which leads to the source parameters stored with the data. Said differently, the reverse map may use the primary logical-to-physical map rather than the secondary logical-to-physical map.
  • Storing the source parameters with the data is advantageous in a sequential storage device because the data stored in the solid-state storage media 110 becomes a log that can be replayed to rebuild the forward and reverse maps. This is due to the fact that the data is stored in a sequence matching when storage requests are received, and thus the source data serves a dual role; rebuilding the forward and reverse maps and determining a logical address from a physical address.
  • the apparatus 800 includes a storage space recovery module 806 that uses the reverse map to identify valid data in an erase region prior to an operation to recover the erase region. The identified valid data is moved to another erase region prior to the recovery operation.
  • the storage space recovery module 806 can scan through a portion of the reverse map corresponding to an erase region to quickly identify valid data or to determine a quantity of valid data in the erase region.
  • An erase region may include an erase block, a fixed number of pages, etc. erased together.
  • the reverse map may be organized so that once the entries for a particular erase region are scanned, the contents of the erase region are known.
• the reverse map may include a table, database, or other structure that allows entries for data of an erase region to be stored together to facilitate operations on data of an erase region.
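The relationship between the forward map and a reverse map organized by erase region can be pictured with the sketch below. The dictionary layout and the per-entry fields (physical address, data length, valid flag, logical address) are assumptions for illustration, loosely following the source parameters discussed above.

```python
# Sketch: a forward map that points into a reverse map organized by erase block,
# so entries for one erase region can be scanned together during space recovery.

reverse_map = {
    # erase block number -> list of entries for data stored in that block
    "n": [
        {"phys": 3420, "length": 5, "valid": True,  "lba": 182},
        {"phys": 3425, "length": 8, "valid": False, "lba": 500},   # stale data
    ],
}
forward_map = {182: ("n", 0)}    # logical address -> (erase block, entry index)

def physical_for(lba):
    """Forward lookup: follow the forward-map link into the reverse map."""
    block, idx = forward_map[lba]
    return reverse_map[block][idx]["phys"]

def valid_length(block):
    """Reverse-map scan: how much valid data an erase block still holds."""
    return sum(e["length"] for e in reverse_map[block] if e["valid"])

assert physical_for(182) == 3420
assert valid_length("n") == 5     # only the entry for LBA 182 is still valid
```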
  • the forward map and the reverse map are independent of a file structure, a name space, a directory, etc. that organize data for the requesting device transmitting the storage request, such as a file server or client operating in a server or the host device 114.
  • the apparatus 800 is able to emulate a random access, logical block storage device storing data as requested by the storage request.
  • the apparatus 800 uses the forward map and reverse map to appear to be storing data in specific locations as directed by a storage request while actually storing data sequentially in the solid-state storage media 110.
  • the apparatus 800 overcomes problems that random access causes for solid-state storage, such as flash memory, by emulating logical block storage while actually storing data sequentially.
  • the apparatus 800 also allows flexibility because one storage request may be a logical block storage request while a second storage request may be an object storage request, file storage request, etc. Maintaining independence from file structures, namespaces, etc. of the requesting device provides great flexibility as to which type of storage requests may be serviced by the apparatus 800.
  • FIG. 6 is a schematic block diagram illustrating another embodiment of an apparatus 900 for efficient mapping of logical and physical addresses in accordance with the present invention.
  • the apparatus 900 includes a forward mapping module 802, a reverse mapping module 804, and a storage space recovery module 806, which are substantially similar to those described above in relation to the apparatus 800 of Figure 5.
  • the apparatus 900 also includes a map rebuild module 902, a checkpoint module 904, a map sync module 906, an invalidate module 908, and a map update module 910, which are described below.
  • the apparatus 900 includes a map rebuild module 902 that rebuilds the forward map and the reverse map using the source parameters stored with the data.
• Because data is stored on the solid-state storage media 110 sequentially, by keeping track of the order in which erase regions or erase blocks in the solid-state storage media 110 were filled and by storing source parameters with the data, the solid-state storage media 110 becomes a sequential log.
  • the map rebuild module 902 replays the log by sequentially reading data packets stored on the solid-state storage media 110. Each physical address and data packet length is paired with the source parameters found in each data packet to recreate the forward and reverse maps.
  • the apparatus 900 includes a checkpoint module 904 that stores information related to the forward map and the reverse map where the checkpoint is related to a point in time or state of the data storage device.
  • the stored information is sufficient to restore the forward map and the reverse map to a status related to the checkpoint.
  • the stored information may include storing the forward and reverse maps in non-volatile storage, such as on the data storage device, along with some identifier indicating a state or time checkpoint.
  • a timestamp could be stored with the checkpoint information. The timestamp could then be correlated with a location in the solid-state storage media 110 where data packets were currently being stored at the checkpoint.
  • state information is stored with the checkpoint information, such as a location in the solid-state storage media 110 where data is currently being stored.
  • the apparatus 900 includes a map sync module 906 that updates the forward map and the reverse map from the status related to the checkpoint to a current status by sequentially applying source parameters and physical addresses.
  • the source parameters applied are stored with data that was sequentially stored after the checkpoint.
  • the physical addresses are derived from a location of the data on the solid-state storage media 110.
• the map sync module 906 restores the forward and reverse maps to a current state from a checkpoint rather than starting from scratch and replaying the entire contents of the solid-state storage media 110.
  • the map sync module 906 uses the checkpoint to go to the data packet stored just after the checkpoint and then replays data packets from that point to a current state where data packets are currently being stored on the solid-state storage media 110.
  • the map sync module 906 typically takes less time to restore the forward and reverse maps than the map rebuild module 902.
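A minimal sketch of the checkpoint and map sync behavior follows: the forward map is restored from a checkpoint and only the records written after the checkpoint are replayed, rather than the entire log. The tuple-based log records and the stored log position are illustrative assumptions.

```python
# Sketch: restore the forward map from a checkpoint, then apply only the log
# records written after the checkpoint instead of replaying the whole log.

log = [
    # (physical_address, logical_address)
    (0, 10), (8, 11), (16, 10),     # written before the checkpoint
    (24, 12), (32, 11),             # written after the checkpoint
]

checkpoint = {"map": {10: 16, 11: 8}, "log_position": 3}   # state at record index 3

def map_sync(checkpoint, log):
    forward = dict(checkpoint["map"])                 # start from the checkpointed map
    for phys, lba in log[checkpoint["log_position"]:]:
        forward[lba] = phys                           # apply only post-checkpoint records
    return forward

forward = map_sync(checkpoint, log)
assert forward == {10: 16, 11: 32, 12: 24}
```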
  • the forward and reverse maps are stored on the solid-state storage media 110 and another set of forward and reverse maps are created to map the stored forward and reverse maps.
  • data packets may be stored on a first storage channel while the forward and reverse maps for the stored data packets may be stored as data on a second storage channel; the forward and reverse maps for the data on the second storage channel may be stored as data on a third storage channel, and so forth. This recursive process may continue as needed for additional forward and reverse maps.
  • the storage channels may be on a single element of solid-state storage media 110 or on separate elements of solid-state storage media 110.
  • the apparatus 900 includes an invalidate module 908 that marks an entry for data in the reverse map indicating that data referenced by the entry is invalid in response to an operation resulting in the data being invalidated.
  • the invalidate module 908 may mark an entry invalid as a result of a delete request, a read-modify-write request, and the like.
  • the reverse map includes some type of invalid marker or tag that may be changed by the invalidate module 908 to indicate data associated with an entry in the reverse map is invalid.
  • the reverse map may include a bit that is set by the invalidate module 908 when data is invalid.
• the reverse map includes information for valid data and invalid data stored in the solid-state storage media 110 and the forward map includes information for valid data stored in the solid-state storage media 110. Since the reverse map is useful for storage space recovery operations, information indicating which data in an erase block is invalid is included in the reverse map. By maintaining the information indicating invalid data in the reverse map, the forward map, in one embodiment, need only maintain information related to valid data stored on the solid-state storage media 110, thus improving the efficiency and speed of forward lookup.
  • the storage space recovery module 806 may then use the invalid marker to determine a quantity of invalid data in an erase region by scanning the reverse map for the erase region to determine a quantity of invalid data in relation to a storage capacity of the erase region. The storage space recovery module 806 can then use the determined quantity of invalid data in the erase region to select an erase region for recovery. By scanning several erase regions, or even all available erase regions, the storage space recovery module 806 can use selection criteria, such as highest amount of invalid data in an erase region, to then select an erase region for recovery. Once an erase region is selected for recovery, in one embodiment the storage space recovery module 806 may then write valid data from the selected erase region to a new location in the solid-state storage media 110.
  • the new location is typically within a page of an erase region where data is currently being stored sequentially.
• the storage space recovery module 806 may write the valid data using a data pipeline as described in U.S. Patent Application No. 11/952,091, entitled "Apparatus, System, and Method for Managing Data Using a Data Pipeline," filed December 6, 2007 for David Flynn et al., which is incorporated herein by reference.
  • the storage space recovery module 806 also updates the reverse map to indicate that the valid data written to the new location is invalid in the selected erase region and updates the forward and reverse maps based on the valid data written to the new location. In another embodiment, the storage space recovery module 806 coordinates with the map update module 910 (described below) to update the forward and reverse maps.
  • the storage space recovery module 806 operates autonomously with respect to data storage and retrieval associated with storage requests and other commands. Storage space recovery operations that may be incorporated in the storage space recovery module 806 are described in more detail in the Storage Space Recovery Application referenced above.
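Selecting an erase region for recovery based on the quantity of invalid data it holds, as described above, can be sketched as follows. The reverse-map layout mirrors the earlier illustrative sketch and is likewise an assumption, not the described data structure.

```python
# Sketch: pick the erase region with the most invalid data, move its valid data
# to a new location, and mark the moved data invalid in the selected region.

reverse_map = {
    "eb0": [{"lba": 1, "length": 4, "valid": True},
            {"lba": 2, "length": 8, "valid": False}],
    "eb1": [{"lba": 3, "length": 2, "valid": True},
            {"lba": 4, "length": 9, "valid": False}],
}

def invalid_length(block):
    return sum(e["length"] for e in reverse_map[block] if not e["valid"])

def recover_one():
    victim = max(reverse_map, key=invalid_length)      # region with most invalid data
    moved = [e for e in reverse_map[victim] if e["valid"]]
    for entry in moved:
        entry["valid"] = False      # the old copy is now invalid in the selected region
    return victim, [e["lba"] for e in moved]           # moved LBAs get new locations

victim, moved = recover_one()
assert victim == "eb1" and moved == [3]   # eb1 held the most invalid data; LBA 3 relocated
```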
  • the apparatus 900 includes a map update module 910 that updates the forward map and/or the reverse map in response to contents of the solid-state storage media 110 being altered.
  • the map update module 910 receives information linking a physical address of stored data to a logical address from the data storage device based on a location where the data storage device stored the data. In the embodiment, the location where a data packet is stored may not be available until the solid-state storage media 110 stores the data packet.
  • the size of each data packet may be unknown until after compression.
• Because the solid-state storage media 110 stores data sequentially, once a data packet is compressed and stored, an append point is set to a location after the stored data packet and a next data packet is stored. Once the append point is known, the solid-state storage media 110 may then report back the physical address corresponding to the append point where the next data packet is stored.
  • the map update module 910 uses the reported physical address and associated data length of the stored data packet to update the forward and reverse maps.
  • One of skill in the art will recognize other embodiments of a map update module 910 to update the forward and reverse maps based on physical addresses and associated data lengths of data stored on the solid-state storage media 110.
  • FIG. 7 is a schematic block diagram of an example of a forward map 1004 and a reverse map 1022 in accordance with the present invention.
  • the apparatus 800, 900 receives a storage request, such as storage request to read an address.
• the apparatus 800, 900 may receive a logical block storage request 1002 to start reading at read address "182" and read 3 blocks.
  • the forward map 1004 stores logical block addresses as virtual/logical addresses along with other virtual/logical addresses so the forward mapping module 802 uses forward map 1004 to identify a physical address from the virtual/logical address "182" of the storage request 1002.
  • a forward map 1004 in other embodiments, may include alpha-numerical characters, hexadecimal characters, and the like.
  • the forward map 1004 is a simple B-tree.
  • the forward map 1004 may be a content addressable memory ("CAM"), a binary tree, a hash table, or other data structure known to those of skill in the art.
  • a B-Tree includes nodes (e.g. the root node 1008) that may include entries of two logical addresses. Each entry, in one embodiment, may include a range of logical addresses.
  • a logical address may be in the form of a logical identifier with a range (e.g. offset and length) or may represent a range using a first and a last address or location.
  • Where a single logical address is included at a particular node, such as the root node 1008, and a logical address 1006 being searched is lower than the logical address of the node, the search continues down a directed edge 1010 to the left of the node 1008. If the searched logical address 1006 matches the current node 1008 (i.e. is located within the range identified in the node), the search stops and the pointer, link, physical address, etc. at the current node 1008 is identified. If the searched logical address 1006 is greater than the range of the current node 1008, the search continues down directed edge 1012 to the right of the current node 1008.
  • Where a node includes two logical addresses and a searched logical address 1006 falls between the listed logical addresses of the node, the search continues down a center directed edge (not shown) to nodes with logical addresses that fall between the two logical addresses of the current node 1008.
  • a search continues down the B-tree until either locating a desired logical address or determining that the searched logical address 1006 does not exist in the B-tree.
  • membership in the B-tree denotes membership in the cache 102, and determining that the searched logical address 1006 is not in the B-tree is a cache miss.
  • the forward mapping module 802 searches for logical address "182" 1006 starting at the root node 1008.
  • Because the searched logical address "182" 1006 is lower than the logical address range of the root node 1008, the forward mapping module 802 searches down the directed edge 1010 to the left to the next node 1014.
  • the searched logical address "182" 1006 is greater than the logical address range (072-083) stored in the next node 1014, so the forward mapping module 802 searches down a directed edge 1016 to the right of the node 1014 to the next node 1018.
  • the next node 1018 includes a logical address range of 178-192, so the searched logical address "182" 1006 matches this node 1018 because it falls within the range 178-192 of the node 1018.
  • Once the forward mapping module 802 determines a match in the forward map 1004, the forward mapping module 802 returns a physical address, either found within the node 1018 or linked to the node 1018.
  • the node 1018 identified by the forward mapping module 802 as containing the searched logical address 1006 includes a link "f" that maps to an entry 1020 in the reverse map 1022.
  • the reverse map 1022 includes an entry ID 1024, a physical address 1026, a data length 1028 associated with the data stored at the physical address 1026 on the solid-state storage media 110 (in this case the data is compressed), a valid tag 1030, a logical address 1032 (optional), a data length 1034 (optional) associated with the logical address 1032, and other miscellaneous data 1036.
  • the reverse map 1022 is organized into erase blocks (erase regions). In this example, the entry 1020 that corresponds to the selected node 1018 is located in erase block n 1038.
  • Erase block n 1038 is preceded by erase block n-1 1040 and followed by erase block n+1 1042 (the contents of erase blocks n-1 and n+1 are not shown).
  • An erase block may be some erase region that includes a predetermined number of pages.
  • An erase region is an area in the solid-state storage media 110 erased together in a storage recovery operation.
  • While the entry ID 1024 is shown as being part of the reverse map 1022, the entry ID 1024 may be an address, a virtual link, or other means to tie an entry in the reverse map 1022 to a node in the forward map 1004.
  • the physical address 1026 is an address in the solid-state storage media 110 where data that corresponds to the searched logical address 1006 resides.
  • the data length 1028 associated with the physical address 1026 identifies a length of the data packet stored at the physical address 1026.
  • the physical address 1026 and data length 1028 may be called destination parameters 1044 and the logical address 1032 and associated data length 1034 may be called source parameters 1046 for convenience.
  • the data length 1028 of the destination parameters 1044 is different from the data length 1034 of the source parameters 1046 in this embodiment because the data packet stored on the solid-state storage media 110 was compressed prior to storage. For the data associated with the entry 1020, the data was highly compressible and was compressed from 64 blocks to 1 block.
  • the valid tag 1030 indicates if the data mapped to the entry 1020 is valid or not. In this case, the data associated with the entry 1020 is valid and is depicted in Figure 7 as a "Y" in the row of the entry 1020.
  • the reverse map 1022 tracks both valid and invalid data and the forward map 1004 tracks valid data.
  • entry "c" 1048 indicates that data associated with the entry 1048 is invalid.
  • the forward map 1004 does not include logical addresses associated with entry "c" 1048.
  • the reverse map 1022 typically maintains entries for invalid data so that valid and invalid data can be quickly distinguished during a storage recovery operation.
  • the depicted reverse map 1022 includes source parameters 1046 for convenience, but the reverse map 1022 may or may not include the source parameters 1046.
  • the reverse map 1022 could identify a logical address indirectly by including a physical address 1026 associated with the data and the source parameters 1046 could be identified from the stored data.
  • One of skill in the art will recognize when storing source parameters 1046 in a reverse map 1022 would be beneficial.
  • the reverse map 1022 may also include other miscellaneous data 1036, such as a file name, object name, source data, etc.
  • While physical addresses 1026 are depicted in the reverse map 1022, in other embodiments, physical addresses 1026, or other destination parameters 1044, may be included in other locations, such as in the forward map 1004, an intermediate table or data structure, etc.
  • the reverse map 1022 is arranged by erase block or erase region so that traversing a section of the map associated with an erase block (e.g. erase block n 1038) allows the storage space recovery module 806 to identify valid data in the erase block 1038 and to quantify an amount of valid data, or conversely invalid data, in the erase block 1038.
  • Arranging an index into a forward map 1004 that can be quickly searched to identify a physical address 1026 from a logical address 1006 and a reverse map 1022 that can be quickly searched to identify valid data and quantity of valid data in an erase block 1038 is beneficial because the index may be optimized for searches and storage recovery operations.
  • One of skill in the art will recognize other benefits of an index with a forward map 1004 and a reverse map 1022.
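  • The interplay of the two maps can be sketched briefly in code. The following is only a minimal, hypothetical model (a sorted list of ranges stands in for the B-tree, and all class and method names are assumptions, not the patent's implementation): the forward map resolves a searched logical address to a reverse-map entry, and the reverse map, organized by erase block, carries destination parameters and valid tags so that valid data in an erase block can be quantified for storage space recovery.

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class ReverseEntry:
    physical_addr: int    # destination parameter: where the packet resides on the media
    data_length: int      # destination parameter: (possibly compressed) stored length
    logical_addr: int     # optional source parameter
    logical_length: int   # optional source parameter
    valid: bool = True    # valid tag: distinguishes valid from invalid (stale) data

class ReverseMap:
    """Reverse map organized by erase block, as in Figure 7."""
    def __init__(self):
        self.erase_blocks = {}                    # erase block number -> list of entries

    def add(self, erase_block, entry):
        entries = self.erase_blocks.setdefault(erase_block, [])
        entries.append(entry)
        return (erase_block, len(entries) - 1)    # entry id linking back from the forward map

    def valid_length(self, erase_block):
        # Traversing only one erase block's section quantifies its valid data,
        # which guides selection of erase blocks for storage space recovery.
        return sum(e.data_length for e in self.erase_blocks.get(erase_block, []) if e.valid)

class ForwardMap:
    """Forward map from logical address ranges to reverse-map entry ids.

    Membership in the map denotes membership in the cache; a failed lookup is a cache miss."""
    def __init__(self):
        self.ranges = []                          # sorted list of (first, last, entry_id)

    def insert(self, first, last, entry_id):
        keys = [r[0] for r in self.ranges]
        self.ranges.insert(bisect_right(keys, first), (first, last, entry_id))

    def lookup(self, logical_addr):
        keys = [r[0] for r in self.ranges]
        i = bisect_right(keys, logical_addr) - 1
        if i >= 0 and self.ranges[i][0] <= logical_addr <= self.ranges[i][1]:
            return self.ranges[i][2]              # hit: follow the link into the reverse map
        return None                               # miss: the address is not cached

# Resolving logical address 182, mirroring the Figure 7 walk-through:
forward, reverse = ForwardMap(), ReverseMap()
entry_id = reverse.add(38, ReverseEntry(physical_addr=0x1F00, data_length=1,
                                        logical_addr=178, logical_length=64))
forward.insert(178, 192, entry_id)
assert forward.lookup(182) == entry_id            # "182" falls within the range 178-192
```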
  • Figure 8 depicts one embodiment of a mapping structure 1100, a logical address space 1120 of the cache 102, a combined logical address space 1119 that is accessible to a storage client, a sequential, log-based, append-only writing structure 1140, and a storage device address space 1170 of the storage device 118.
  • the mapping structure 1100, in one embodiment, is maintained by the direct mapping module 606.
  • the mapping structure 1100, in the depicted embodiment, is a B-tree that is substantially similar to the forward map 1004 described above with regard to Figure 7, with several additional entries. Further, instead of links that map to entries in a reverse map 1022, the nodes of the mapping structure 1100 include direct references to physical locations in the cache 102.
  • the mapping structure 1100 may be used either with or without a reverse map 1022.
  • the references in the mapping structure 1100 may include alpha-numerical characters, hexadecimal characters, pointers, links, and the like.
  • the mapping structure 1100 includes a plurality of nodes.
  • Each node, in the depicted embodiment, is capable of storing two entries. In other embodiments, each node may be capable of storing a greater number of entries, the number of entries at each level may change as the mapping structure 1100 grows or shrinks through use, or the like.
  • Each entry maps a variable length range of logical addresses of the cache 102 to a physical location in the storage media 110 for the cache 102.
  • variable length ranges of logical addresses, in the depicted embodiment, are represented by a starting address and an ending address; in other embodiments, a variable length range of addresses may be represented by a starting address and a length, or the like.
  • the capital letters 'A' through 'M' represent a logical or physical erase block in the physical storage media 110 of the cache 102 that stores the data of the corresponding range of logical addresses. In other embodiments, the capital letters may represent other physical addresses or locations of the cache 102. In the depicted embodiment, the capital letters 'A' through 'M' are also depicted in the writing structure 1140 which represents the physical storage media 110 of the cache 102.
  • membership in the mapping structure 1100 denotes membership (or storage) in the cache 102.
  • an entry may further include an indicator of whether the cache 102 stores data corresponding to a logical block within the range of logical addresses, data of the reverse map 1022 described above, and/or other data.
  • the mapping structure 1100 may also map logical addresses of the storage device 118 to physical addresses or locations within the storage device 118, and an entry may include an indicator that the cache 102 does not store the data and a physical address or location for the data on the storage device 118.
  • the mapping structure 1100, in the depicted embodiment, is accessed and traversed in a manner similar to that described above with regard to the forward map 1004.
  • the root node 1008 includes entries 1102, 1104 with noncontiguous ranges of logical addresses.
  • a "hole” exists at logical address "208" between the two entries 1102, 1104 of the root node.
  • a "hole” indicates that the cache 102 does not store data corresponding to one or more logical addresses corresponding to the "hole.”
  • a "hole” may exist because the eviction module 712 evicted data corresponding to the "hole” from the cache 102.
  • the storage device 118 still stores data corresponding to the "hole.”
  • the cache 102 and/or the storage device 118 supports block I/O requests (read, write, trim, etc.) with multiple contiguous and/or noncontiguous ranges of addresses (i.e. ranges that include one or more "holes" in them).
  • a "hole,” in one embodiment, may be the result of a single block I/O request with two or more noncontiguous ranges of addresses.
  • a "hole” may be the result of several different block I/O requests with address ranges bordering the "hole.”
  • Without the hole at "208," the root node 1008 would include a single entry with a logical address range of "205-212." If the entry of the root node 1008 were a fixed size cache line of a traditional cache, the entire range of logical addresses "205-212" would be evicted together. Instead, in the embodiment depicted in Figure 8, the eviction module 712 evicts data of a single logical address "208" and splits the range of logical addresses into two separate entries 1102, 1104.
  • the direct mapping module 606 may rebalance the mapping structure 1100, adjust the location of a directed edge, root node, or child node, or the like in response to splitting a range of logical addresses.
  • each range of logical addresses may have a dynamic and/or variable length, allowing the cache 102 to store dynamically selected and/or variable lengths of logical block ranges.
  • similar "holes” or noncontiguous ranges of logical addresses exist between the entries 1106, 1108 of the node 1014, between the entries 1110, 1112 of the left child node of the node 1014, between entries 1114, 1116 of the node 1018, and between entries of the node 1118.
  • similar "holes” may also exist between entries in parent nodes and child nodes.
  • a "hole” of logical addresses "060-071" exists between the left entry 1106 of the node 1014 and the right entry 1112 of the left child node of the node 1014.
  • the "hole” at logical address "003,” in the depicted embodiment, can also be seen in the logical address space 1120 of the cache 102 at logical address "003" 1130.
  • the hash marks at logical address "003" 1130 represent an empty location, or a location for which the cache 102 does not store data.
  • storage device address "003" 1180 of the storage device address space 1170 does store data (identified as 'b'), indicating that the eviction module 712 evicted data from logical address "003" 1130 of the cache 102.
  • the "hole” at logical address 1134 in the logical address space 1120 has no corresponding data in storage device address 1184, indicating that the "hole” is due to one or more block I/O requests with noncontiguous ranges, a trim or other deallocation command to both the cache 102 and the storage device 118, or the like.
  • the "hole” at logical address "003" 1130 of the logical address space 1120 is not viewable or detectable to a storage client.
  • the combined logical address space 1119 represents the data that is available to a storage client, with data that is stored in the cache 102 and data that is stored in the storage device 118 but not in the cache 102.
  • the read miss module 718 of Figure 4 handles misses and returns requested data to a requesting entity.
  • the read miss module 718 will retrieve the data from the storage device 118, as depicted at address "003" 1180 of the storage device address space 1170, and return the requested data to the storage client.
  • the requested data at logical address "003" 1130 may then also be placed back in the cache 102 and thus logical address 1130 would indicate 'b' as present in the cache 102.
  • the read miss module 718 may return a combination of data from both the cache 102 and the storage device 118. For this reason, the combined logical address space 1119 includes data 'b' at logical address "003" 1130, and the "hole" in the logical address space 1120 of the cache 102 is transparent to a storage client.
  • the combined logical address space 1119 is the size of the logical address space 1120 of the cache 102 and is larger than the storage device address space 1170.
  • In other embodiments, the direct cache module 116 may size the combined logical address space 1119 as the size of the storage device address space 1170, or as another size.
  • the logical address space 1120 of the cache 102, in the depicted embodiment, is larger than the physical storage capacity and corresponding storage device address space 1170 of the storage device 118.
  • the cache 102 has a 64 bit logical address space
  • the storage device address space 1170 begins at storage device address "0" 1172 and extends to storage device address "N” 1174.
  • Storage device address "N" 1174, in the depicted embodiment, corresponds to logical address "N" 1124 in the logical address space 1120 of the cache 102.
  • Because the storage device address space 1170 corresponds to only a subset of the logical address space 1120 of the cache 102, the rest of the logical address space 1120 may be shared with an additional cache 102, may be mapped to a different storage device 118, may store data in the cache 102 (such as a non-volatile memory cache) that is not stored in the storage device 118, or the like.
  • the first range of logical addresses "000-002" 1128 stores data corresponding to the first range of storage device addresses "000-002" 1178.
  • the second range of logical addresses "004-059” 1132 corresponds to the second range of storage device addresses "004-059” 1182.
  • the final range of logical addresses 1136 extending from logical address "N" 1124 extends beyond storage device address "N" 1174. No storage device address in the storage device address space 1170 corresponds to the final range of logical addresses 1136.
  • the cache 102 may store the data corresponding to the final range of logical addresses 1136 until the data storage device 118 is replaced with larger storage or is expanded logically, until an additional data storage device 118 is added, or may simply use the non-volatile storage capability of the cache 102 to indefinitely provide storage capacity directly to a storage client 504 independent of a storage device 118, or the like.
  • the direct cache module 116 alerts a storage client 504, an operating system, a user application 502, or the like in response to detecting a write request with a range of addresses, such as the final range of logical addresses 1136, that extends beyond the storage device address space 1170. The user may then perform some maintenance or other remedial operation to address the situation. Depending on the nature of the data, no further action may be taken. For example, the data may represent temporary data which if lost would cause no ill effects.
  • the sequential, log-based, append-only writing structure 1140 is a logical representation of the physical storage media 110 of the cache 102.
  • the storage device 118 may use a substantially similar sequential, log-based, append-only writing structure 1140.
  • the cache 102 stores data sequentially, appending data to the writing structure 1140 at an append point 1144.
  • the cache 102 uses a storage space recovery process, such as the garbage collection module 710 and/or the storage space recovery module 806 that re-uses non-volatile storage media storing deallocated/unused logical blocks.
  • Non-volatile storage media storing deallocated/unused logical blocks, in the depicted embodiment, is added to an available storage pool 1146 for the cache 102.
  • the writing structure 1140 is cyclic, ring-like, and has a theoretically infinite capacity.
  • the append point 1144 progresses around the log-based, append-only writing structure 1140 in a circular pattern 1142.
  • the circular pattern 1142 wear balances the solid-state storage media 110, increasing a usable life of the solid-state storage media 110.
  • the eviction module 712 and/or the garbage collection module 710 have marked several blocks 1148, 1150, 1152, 1154 as invalid, represented by an "X" marking on the blocks 1148, 1150, 1152, 1154.
  • the garbage collection module 710, in one embodiment, will recover the physical storage capacity of the invalid blocks 1148, 1150, 1152, 1154 and add the recovered capacity to the available storage pool 1146.
  • modified versions of the blocks 1148, 1150, 1152, 1154 have been appended to the writing structure 1140 as new blocks 1156, 1158, 1160, 1162 in a read, modify, write operation or the like, allowing the original blocks 1148, 1150, 1152, 1154 to be recovered.
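  • The cyclic, append-only writing structure and its recovery of invalidated blocks can be modeled in a few lines. This is a simplified, hypothetical sketch rather than the patent's code: a ring of erase blocks, an append point that advances sequentially, invalidation of superseded copies on overwrite, and a recovery step that copies still-valid packets forward and returns the erase block to the available storage pool.

```python
class LogStructure:
    """Toy model of a sequential, log-based, append-only writing structure."""
    def __init__(self, num_erase_blocks, packets_per_block):
        self.blocks = [[] for _ in range(num_erase_blocks)]   # physical erase blocks
        self.capacity = packets_per_block
        self.append_block = 0                                 # append point: current erase block
        self.available = set(range(1, num_erase_blocks))      # available storage pool 1146
        self.locations = {}                                   # logical address -> (block, slot)

    def append(self, logical_addr, payload):
        if len(self.blocks[self.append_block]) >= self.capacity:
            self.append_block = self.available.pop()          # advance around the ring
        block = self.blocks[self.append_block]
        block.append({"lba": logical_addr, "data": payload, "valid": True})
        old = self.locations.get(logical_addr)
        if old is not None:                                   # overwrite: older copy is now invalid
            old_block, old_slot = old
            self.blocks[old_block][old_slot]["valid"] = False
        self.locations[logical_addr] = (self.append_block, len(block) - 1)
        return self.locations[logical_addr]                   # reported physical location

    def recover(self, block_index):
        """Storage space recovery: rewrite valid packets, then reclaim the erase block.
        Assumes block_index is a full block, not the current append block."""
        for packet in self.blocks[block_index]:
            if packet["valid"]:
                self.append(packet["lba"], packet["data"])    # copy valid data forward
        self.blocks[block_index] = []                         # erase the region
        self.available.add(block_index)                       # return it to the pool
```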
  • Figure 9 depicts one embodiment of a method 1200 for caching data.
  • the method 1200 begins and the storage request module 602 detects 1202 an I/O request for a storage device 118 cached by solid-state storage media 110 of a cache 102.
  • the direct mapping module 606 references 1204 a single mapping structure to determine whether the cache 102 comprises data of the detected 1202 I/O request.
  • the single mapping structure maps each logical block address of the storage device 118 directly to a logical block address of the cache 102 and also comprises a fully associative relationship between logical block addresses of the storage device 118 and physical storage addresses of the solid-state storage media 110.
  • the cache fulfillment module 604 satisfies 1206 the detected 1202 I/O request using the cache 102 in response to the direct mapping module 606 determining 1204 that the cache 102 comprises at least one data block of the detected 1202 I/O request.
  • the storage request module 602 continues to detect 1202 I/O requests and the method 1200 repeats.
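  • Method 1200 reduces to three steps, sketched below in hypothetical form (the request, cache, and mapping-structure interfaces are assumptions, not the patent's API): detect the I/O request, reference the single mapping structure, and satisfy the request from the cache on a hit.

```python
def method_1200(io_request, mapping_structure, cache, backing_store):
    """Sketch of Figure 9: detect, reference a single mapping structure, satisfy."""
    lba = io_request["lba"]                           # step 1202: detect the I/O request
    location = mapping_structure.lookup(lba)          # step 1204: one fully associative lookup
    if location is not None:                          # the cache comprises data of the request
        return cache.read(location)                   # step 1206: satisfy using the cache
    return backing_store.read(lba)                    # miss: fall through to the storage device
```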
  • Figure 10 depicts another embodiment of a method 1300 for caching data.
  • the method 1300 begins and the storage request module 602 determines 1302 whether there are any I/O requests for a storage device 118 cached by solid-state storage media 110 of a cache 102. If the storage request module 602 does not detect 1302 an I/O request, the storage request module 602 continues to monitor 1302 I/O requests. If the storage request module 602 detects 1302 an I/O request, the storage request module 602 determines 1304 a storage device logical block address for the I/O request.
  • the direct mapping module 606 references 1306 a single mapping structure using the determined 1304 storage device logical block address to determine 1308 whether the cache 102 comprises/stores data of the I/O request. If the direct mapping module 606 determines 1308 that the cache 102 does not comprise data of the I/O request, the cache fulfillment module 604 stores 1310 data of the I/O request to the cache 102 in a manner that associates the data with the determined 1304 logical block address and a sequence indicator for the I/O request, to satisfy the I/O request.
  • the cache fulfillment module 604 satisfies 1312 the I/O request, at least partially, using the cache 102.
  • the cache fulfillment module 604 may satisfy 1312 the I/O request by storing data of the I/O request to the cache 102 sequentially on the solid-state storage media 110 to preserve an ordered sequence of storage operations.
  • the cache fulfillment module 604 may satisfy 1312 the I/O request by reading data of the I/O request from the cache 102 using a physical storage address of the solid-state storage media 110 associated with the determined 1304 logical block address of the I/O request.
  • the direct mapping module 606 determines 1314 whether to update the mapping structure to maintain an entry in the mapping structure associating the determined 1304 logical block address and physical storage locations or addresses on the solid-state storage media 110. For example, the direct mapping module 606 may determine 1314 to update the mapping structure if storing 1310 data of the I/O request to the cache 102 or otherwise satisfying 1312 the I/O request changed the state of data on the cache 102, such as for a write I/O request, a cache miss, a TRIM request, an erase request, or the like.
  • If the direct mapping module 606 determines 1314 to update the mapping structure, the direct mapping module 606 updates 1316 the mapping structure to map the determined 1304 storage device logical block address for the I/O request directly to a logical block address of the cache 102 and to a physical storage address or location of data associated with the I/O request on the solid-state storage media 110 of the cache 102. If the direct mapping module 606 determines 1314 not to update the mapping structure, for a read I/O request resulting in a cache hit or the like, the method 1300 continues without the direct mapping module 606 updating 1316 the mapping structure.
  • the direct mapping module 606 determines 1318 whether to reconstruct the mapping structure, in response to a reconstruction event such as a power failure, a corruption of the mapping structure, an improper shutdown, or the like. If the direct mapping module 606 determines 1318 to reconstruct the mapping structure, the direct mapping module 606 reconstructs 1320 the mapping structure using the logical block addresses and sequence indicators associated with data on the solid-state storage media 110 of the cache 102, scanning a sequential, log-based, cyclic writing structure or the like. If the direct mapping module 606 determines 1318 not to reconstruct the mapping structure, the method 1300 skips the reconstruction step 1320 and the storage request module 602 continues to monitor 1302 I/O requests for the storage device 118.
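  • The extra steps of method 1300 (storing a miss together with a sequence indicator, conditionally updating the mapping structure, and reconstructing the mapping structure from the log after a reconstruction event) can be summarized in the same hedged style; the monotonic counter used as a sequence indicator and all interface names are assumptions for illustration only.

```python
import itertools

_sequence = itertools.count()   # assumed sequence indicator: a monotonically increasing counter

def method_1300(io_request, mapping_structure, cache, backing_store):
    """Sketch of Figure 10: lookup, store on miss with a sequence indicator, update the map."""
    lba = io_request["lba"]                                     # step 1304
    location = mapping_structure.lookup(lba)                    # steps 1306/1308
    if location is None:                                        # cache miss
        data = io_request.get("data") or backing_store.read(lba)
        location = cache.append(lba, data, next(_sequence))     # step 1310: sequential store
        mapping_structure.update(lba, location)                 # step 1316: map LBA to new location
        return data
    if io_request["op"] == "write":                             # a write hit also changes cache state
        location = cache.append(lba, io_request["data"], next(_sequence))
        mapping_structure.update(lba, location)                 # steps 1314/1316
        return io_request["data"]
    return cache.read(location)                                 # step 1312: read hit, no map update

def reconstruct_mapping_structure(mapping_structure, cache):
    """Step 1320: rebuild the map by scanning the log in sequence-indicator order."""
    for lba, location, _seq in sorted(cache.scan_log(), key=lambda record: record[2]):
        mapping_structure.update(lba, location)                 # later entries supersede earlier ones
```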

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to an apparatus, a system, and a method for caching data. A storage request module 602 detects an input/output ("I/O") request for a storage device 118 cached by solid-state storage media 110 of a cache 102. A direct mapping module 606 references a single mapping structure 1100 to determine whether the cache 102 contains data of the I/O request. The single mapping structure 1100 maps each logical block address of the storage device 118 directly to a logical block address of the cache 102. The single mapping structure 1100 maintains a fully associative relationship between logical block addresses of the storage device 118 and physical storage addresses on the solid-state storage media 110. A cache fulfillment module 604 satisfies the I/O request using the cache 102 in response to the direct mapping module 606 determining that the cache 102 contains at least one data block of the I/O request.
PCT/US2011/047659 2010-08-12 2011-08-12 Dispositif, système et procédé pour mettre des données en mémoire cache WO2012021847A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US37327110P 2010-08-12 2010-08-12
US61/373,271 2010-08-12

Publications (2)

Publication Number Publication Date
WO2012021847A2 true WO2012021847A2 (fr) 2012-02-16
WO2012021847A3 WO2012021847A3 (fr) 2012-05-31

Family

ID=45568226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/047659 WO2012021847A2 (fr) 2010-08-12 2011-08-12 Dispositif, système et procédé pour mettre des données en mémoire cache

Country Status (1)

Country Link
WO (1) WO2012021847A2 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6745292B1 (en) * 1995-12-08 2004-06-01 Ncr Corporation Apparatus and method for selectively allocating cache lines in a partitioned cache shared by multiprocessors
US6334173B1 (en) * 1997-11-17 2001-12-25 Hyundai Electronics Industries Co. Ltd. Combined cache with main memory and a control method thereof
US20080195801A1 (en) * 2007-02-13 2008-08-14 Cheon Won-Moon Method for operating buffer cache of storage device including flash memory
KR20100022811A (ko) * 2008-08-20 2010-03-03 주식회사 셀픽 Flash memory storage device and management method therefor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9141528B2 (en) 2011-05-17 2015-09-22 Sandisk Technologies Inc. Tracking and handling of super-hot data in non-volatile memory systems
US9176864B2 (en) 2011-05-17 2015-11-03 SanDisk Technologies, Inc. Non-volatile memory and method having block management with hot/cold data sorting
US10175896B2 (en) 2016-06-29 2019-01-08 Western Digital Technologies, Inc. Incremental snapshot based technique on paged translation systems
US10229048B2 (en) 2016-06-29 2019-03-12 Western Digital Technologies, Inc. Unified paging scheme for dense and sparse translation tables on flash storage systems
US10235287B2 (en) 2016-06-29 2019-03-19 Western Digital Technologies, Inc. Efficient management of paged translation maps in memory and flash
US10353813B2 (en) 2016-06-29 2019-07-16 Western Digital Technologies, Inc. Checkpoint based technique for bootstrapping forward map under constrained memory for flash devices
US10725669B2 (en) 2016-06-29 2020-07-28 Western Digital Technologies, Inc. Incremental snapshot based technique on paged translation systems
US10725903B2 (en) 2016-06-29 2020-07-28 Western Digital Technologies, Inc. Unified paging scheme for dense and sparse translation tables on flash storage systems
US11216361B2 (en) 2016-06-29 2022-01-04 Western Digital Technologies, Inc. Translation lookup and garbage collection optimizations on storage system with paged translation table
US11816027B2 (en) 2016-06-29 2023-11-14 Western Digital Technologies, Inc. Translation lookup and garbage collection optimizations on storage system with paged translation table

Also Published As

Publication number Publication date
WO2012021847A3 (fr) 2012-05-31

Similar Documents

Publication Publication Date Title
US8756375B2 (en) Non-volatile cache
US9251086B2 (en) Apparatus, system, and method for managing a cache
WO2012116369A2 (fr) Appareil, système et procédé de gestion du contenu d'une antémémoire
US9678874B2 (en) Apparatus, system, and method for managing eviction of data
US10366002B2 (en) Apparatus, system, and method for destaging cached data
KR101717644B1 (ko) Apparatus, system, and method for caching data on a solid-state storage device
US9715519B2 (en) Managing updates to multiple sets of metadata pertaining to a memory
US8880787B1 (en) Extent metadata update logging and checkpointing
US9842053B2 (en) Systems and methods for persistent cache logging
WO2012016209A2 (fr) Appareil, système et procédé de mise en antémémoire d'écriture redondante
US9519540B2 (en) Apparatus, system, and method for destaging cached data
US9021222B1 (en) Managing incremental cache backup and restore
WO2012109679A2 (fr) Appareil, système et procédé de gestion de mémoire virtuelle directe d'application
WO2012021847A2 (fr) Dispositif, système et procédé pour mettre des données en mémoire cache
US20240319876A1 (en) Caching techniques using a unified cache of metadata leaf objects with mixed pointer types and lazy content resolution
US12093183B1 (en) Caching techniques using a mapping cache and maintaining cache coherency using physical to logical address mapping
US20240303200A1 (en) Caching techniques using a mapping cache and maintaining cache coherency using hash values
US11782842B1 (en) Techniques for reclaiming dirty cache pages
US20240176741A1 (en) Caching techniques using a two-level read cache

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11817139

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11817139

Country of ref document: EP

Kind code of ref document: A2