US20190235925A1 - Systems, methods, and interfaces for vector input/output operations - Google Patents
- Publication number
- US20190235925A1 (U.S. application Ser. No. 16/371,110)
- Authority
- US
- United States
- Prior art keywords
- storage
- request
- atomic
- data
- vector
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0608—Saving storage space on storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/1734—Details of monitoring file system events, e.g. by the use of hooks, filter drivers, logs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/907—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7205—Cleaning, compaction, garbage collection, erase control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
Abstract
Description
- The Application Data Sheet (“ADS”) filed with this application is incorporated by reference herein. Any applications claimed on the ADS for priority under 35 U.S.C. §§ 119, 120, 121, or 365(c), and any and all parent, grandparent, great-grandparent, etc., applications of such applications, are also incorporated by reference, including any priority claims made in those applications and any material incorporated by reference, to the extent such subject matter is not inconsistent herewith.
- This application claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Priority Applications”), if any, listed below (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 U.S.C. § 119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc., applications of the Priority Application(s)).
- Priority Applications: this application is a continuation of, and claims priority to, U.S. patent application Ser. No. 13/725,728 filed Dec. 21, 2012, which claims priority to: U.S. Provisional Application No. 61/579,627, filed Dec. 22, 2011; U.S. Provisional Application No. 61/625,475 filed Apr. 17, 2012; U.S. Provisional Patent Application Ser. No. 61/637,155 filed Apr. 23, 2012; U.S. patent application Ser. No. 13/539,235 filed Jun. 29, 2012; and U.S. patent application Ser. No. 13/335,922 filed Dec. 22, 2011, each of which is hereby incorporated by reference.
- The disclosure relates to input/output (IO) operations and, more particularly, to IO operations configured to operate on one or more IO vectors.
- This disclosure includes and references the accompanying drawings. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made to these exemplary embodiments, without departing from the scope of the disclosure.
- FIG. 1 is a block diagram of a storage system comprising a storage controller;
- FIG. 2 is a block diagram of another embodiment of a storage controller;
- FIG. 3 is a block diagram of another embodiment of a storage controller;
- FIG. 4 depicts one embodiment of a forward index;
- FIG. 5 depicts one embodiment of a reverse index;
- FIGS. 6A-B depict embodiments of storage metadata for log storage;
- FIG. 7 depicts one embodiment of a contextual data format;
- FIGS. 8A-B depict embodiments of data of disjoint, non-adjacent, and/or non-contiguous vectors stored contiguously within a log on a non-volatile storage medium;
- FIGS. 9A-E depict one embodiment of a forward index and an inflight index;
- FIG. 10 depicts one embodiment of data of an incomplete atomic storage operation;
- FIGS. 11A-C depict one embodiment of persistent metadata;
- FIG. 12A depicts another embodiment of persistent metadata;
- FIG. 12B depicts another embodiment of persistent metadata;
- FIG. 13A depicts one embodiment of data of an atomic storage request spanning erase blocks of a non-volatile storage medium;
- FIG. 13B depicts one embodiment of persistent notes for managing atomic storage operations;
- FIG. 14 depicts a failed atomic write that spans an erase block boundary of a non-volatile storage medium;
- FIG. 15 depicts one embodiment of a restart recovery process;
- FIG. 16A depicts embodiments of interfaces for storage requests;
- FIG. 16B depicts one embodiment of an atomic vector storage operation;
- FIG. 16C depicts another embodiment of an atomic vector storage operation;
- FIG. 17A is a block diagram of another embodiment of a storage controller;
- FIGS. 17B-17D depict storage request consolidation in a request buffer;
- FIG. 18 is a flow diagram of one embodiment of a method for servicing an atomic storage request;
- FIG. 19 is a flow diagram of one embodiment of a method for restart recovery;
- FIG. 20 is a flow diagram of one embodiment of a method for consolidating storage requests; and
- FIG. 21 is a flow diagram of another embodiment of a method for servicing a vector storage request.
- A storage controller may be configured to perform input/output (IO) operations in response to requests from one or more storage clients. The storage controller may be configured to implement vector storage operations on respective logical identifier ranges. The vector storage operations may be atomic, such that the storage operation completes for every I/O vector or for none of the I/O vectors.
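The disclosure describes a vector storage request as a set of sub-requests, each addressing its own logical identifier range, optionally with all-or-nothing semantics. A minimal sketch of how such a request might be modeled follows; the names OpType, IOVector, and VectorStorageRequest are illustrative assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class OpType(Enum):
    WRITE = "write"
    TRIM = "trim"

@dataclass
class IOVector:
    """One sub-request: an operation on a contiguous range of logical
    identifiers (LIDs). Different sub-requests of one vector request
    may carry different operation types."""
    op: OpType
    lid_start: int
    lid_count: int
    data: bytes = b""  # payload for WRITE sub-requests

@dataclass
class VectorStorageRequest:
    """A set of sub-requests over disjoint, possibly non-contiguous LID
    ranges. With atomic=True, either every sub-request takes effect or
    none of them does."""
    vectors: List[IOVector]
    atomic: bool = True
```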
- Disclosed herein are systems and apparatuses configured to service vector storage requests, which may include a request consolidation module configured to modify one or more storage requests of a vector storage request in response to one or more other pending storage requests, wherein the storage requests correspond to respective logical identifier ranges of the vector storage request, and a storage controller configured to store one or more data packets pertaining to the vector storage request on a non-volatile storage medium.
- The request consolidation module may be configured to combine two or more storage requests, including a storage request of the vector storage request. The two or more storage requests may pertain to logical identifiers that are adjacent and/or overlapping. The two or more storage requests may comprise trim storage requests that pertain to overlapping and/or adjacent logical identifier ranges in a logical address space. The request consolidation module may be further configured to remove one or more of the storage requests of the vector storage request in response to determining that the one or more storage requests are obviated by one or more pending storage requests. The request consolidation module may be configured to remove a storage request to trim one or more logical identifiers in response to a pending storage request to write data to the one or more logical identifiers.
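As a rough illustration of the consolidation behavior described above, the following sketch merges trim sub-requests whose logical identifier ranges overlap or are adjacent and drops a trim fully covered by a pending write (which obviates it). It builds on the hypothetical IOVector type above and simplifies away request ordering:

```python
def consolidate_trims(requests):
    """Consolidation sketch over the IOVector type above: merge TRIMs
    whose LID ranges overlap or are adjacent, and drop any TRIM fully
    covered by a pending WRITE (the write obviates the trim)."""
    writes = [r for r in requests if r.op is OpType.WRITE]
    trims = sorted((r for r in requests if r.op is OpType.TRIM),
                   key=lambda r: r.lid_start)
    merged = []
    for t in trims:
        covered = any(
            w.lid_start <= t.lid_start and
            t.lid_start + t.lid_count <= w.lid_start + w.lid_count
            for w in writes)
        if covered:
            continue  # obviated by a pending write to the same LIDs
        if merged and t.lid_start <= merged[-1].lid_start + merged[-1].lid_count:
            last = merged[-1]  # ranges touch or overlap: combine them
            end = max(last.lid_start + last.lid_count,
                      t.lid_start + t.lid_count)
            merged[-1] = IOVector(OpType.TRIM, last.lid_start,
                                  end - last.lid_start)
        else:
            merged.append(t)
    return writes + merged
```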
- The apparatus may further comprise a log storage module configured to append the one or more data packets pertaining to an atomic vector storage request contiguously within a log on the non-volatile storage medium, and an atomic storage module configured to include a persistent indicator in one or more of the data packets of the atomic vector storage request to indicate that the one or more data packets pertain to an atomic storage operation that is incomplete. The atomic storage module may be configured to include a persistent indicator in a last one of the data packets of the atomic vector storage request to indicate that the atomic storage request is complete.
- Disclosed herein are systems and apparatus configured to service atomic vector storage requests, which may comprise a non-volatile storage medium, a log storage module configured to append one or more data packets pertaining to an atomic vector storage request in a contiguous log format on the non-volatile storage medium, and an atomic storage module configured to include respective persistent metadata flags in one or more of the data packets of the atomic storage request within the log on the non-volatile storage medium to indicate that the one or more data packets correspond to an atomic storage request that is in process. The atomic storage module may be configured to include a persistent metadata flag in one of the data packets of the atomic vector storage request to indicate that the atomic storage request is complete. The persistent metadata flags may comprise single bits. The log storage module may be configured to append the one or more data packets to non-contiguous physical storage locations within a physical address space of the non-volatile storage medium. The log storage module may be configured to append data packets sequentially from an append point within a physical address space of the non-volatile storage medium and to associate the data packets with respective sequence indicators, wherein the sequential order and the sequence indicators of the data packets determine a log order of the data packets.
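One plausible encoding of these persistent metadata flags, assuming one flag byte per packet (the disclosure says the flags may be single bits but does not fix a layout, so the constants below are assumptions):

```python
FLAG_ATOMIC_IN_PROCESS = 0x1  # packet belongs to an atomic request in flight
FLAG_ATOMIC_COMPLETE = 0x2    # final packet: the atomic request is complete

def flag_packets(num_packets: int):
    """Flag bytes for the packets of one atomic vector storage request:
    every packet is marked in-process, and the last packet additionally
    carries the completion indicator."""
    flags = [FLAG_ATOMIC_IN_PROCESS] * num_packets
    flags[-1] |= FLAG_ATOMIC_COMPLETE
    return flags
```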
- The atomic vector storage request may comprise a plurality of sub-requests, each sub-request comprising an operation pertaining to a respective set of one or more logical identifiers, wherein the storage controller is configured to defer updating a forward index comprising any-to-any mappings between logical identifiers and physical storage locations until each of the sub-requests of the atomic vector storage operation is complete.
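A sketch of the deferred-update pattern this paragraph describes: per-sub-request mappings accumulate in a staging structure (what the disclosure later calls an inflight index) and are folded into the forward index only once every sub-request completes. Plain dicts stand in for the patent's index structures:

```python
class AtomicUpdateContext:
    """Deferred index update for an atomic vector request: mappings from
    completed sub-requests accumulate in an inflight index and are folded
    into the forward index only when every sub-request is done."""

    def __init__(self, forward_index: dict, num_subrequests: int):
        self.forward_index = forward_index  # LID -> physical address
        self.inflight = {}                  # staged LID -> physical address
        self.remaining = num_subrequests

    def subrequest_done(self, lid_to_phys: dict):
        self.inflight.update(lid_to_phys)
        self.remaining -= 1
        if self.remaining == 0:
            # All sub-requests complete: publish the updates together.
            self.forward_index.update(self.inflight)
            self.inflight.clear()

    def abort(self):
        # Failure mid-request: discard staged mappings, leaving the
        # forward index untouched.
        self.inflight.clear()
```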
- The atomic vector storage request may comprise a plurality of sub-requests, each sub-request comprising an operation pertaining to a respective set of one or more logical identifiers, wherein two or more of the sub-requests comprise different types of storage operations.
- A restart recovery module may be configured to reconstruct a forward index comprising mappings between logical identifiers of a logical address space and physical storage locations of the non-volatile storage medium. The restart recovery module may be configured to identify a data packet of an incomplete atomic vector storage request in response to accessing, at an append point, a data packet that comprises a persistent metadata flag indicating that the data packet corresponds to an atomic vector storage request that is in process.
- The storage controller may be configured to update an inflight index in response to completing a subcommand of the atomic vector storage operation, and to update the forward index with the inflight index in response to completing each of the subcommands of the atomic vector storage operation.
- Subcommands of the atomic vector storage request may be queued in an ordered queue configured to complete the subcommands and the other storage requests according to an order in which the subcommands and the other storage requests were received at the ordered queue.
- A request consolidation module may be configured to modify one of the subcommands based on one or more other subcommands of the atomic vector storage request. The request consolidation module may delete a subcommand in response to determining that the subcommand is overridden by one or more other subcommands of the atomic vector storage request and/or combine one or more subcommands into a single composite subcommand.
- Disclosed herein are systems and apparatus for consolidating storage requests, comprising a request buffer configured to buffer and/or queue one or more storage requests, a request consolidation module configured to modify one or more of the storage requests in the request buffer based on one or more other storage requests in the request buffer, and a storage controller configured to service storage requests in the request buffer. The request consolidation module may be configured to delete a storage request to trim one or more logical identifiers from the request buffer in response to receiving, at the storage controller, a storage request configured to store data to the one or more logical identifiers. The request consolidation module may be further configured to consolidate two or more storage requests to trim logical identifiers that overlap and/or are contiguous in a logical address space.
- FIG. 1 is a block diagram illustrating one embodiment of a storage system 100. The system 100 may comprise a computing device 110, which may comprise a personal computer, server, blade, laptop, notebook, smart phone, embedded system, virtualized computing device, or the like. The computing device 110 may comprise a processor 112, volatile memory 113, non-transitory storage medium 114, and/or communication interface 115. The processor 112 may comprise one or more general and/or special purpose processing elements and/or cores. The processor 112 may be configured to execute instructions loaded from the non-transitory storage medium 114. Portions of the modules and/or methods disclosed herein may be embodied as machine-readable instructions stored on the non-transitory storage medium 114.
- The system 100 may further comprise a storage controller 120. The storage controller 120 may comprise a storage management layer 130, logical-to-physical translation module 132, storage metadata 135, log storage module 136, media interface 122, and/or one or more media controllers 123. Portions of the storage controller 120 may operate on, or in conjunction with, the computing device 110. Portions of the storage controller 120 may be implemented separately from the computing device; for example, portions of the storage controller 120 may be connected using a system bus, such as a peripheral component interconnect express (PCI-e) bus, a Serial Advanced Technology Attachment (serial ATA) bus, universal serial bus (USB) connection, an Institute of Electrical and Electronics Engineers (IEEE) 1394 bus (FireWire), an external PCI bus, Infiniband, or the like.
- The storage controller 120 may comprise a media interface 122 configured to couple the storage controller 120 to a non-volatile storage media 140 (by use of one or more media controllers 123 and bus 127). The non-volatile storage media 140 may comprise any suitable storage medium including, but not limited to, flash memory, nano random access memory (nano RAM or NRAM), nanocrystal wire-based memory, silicon-oxide based sub-10 nanometer process memory, graphene memory, Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive Random-Access Memory (RRAM), Programmable Metallization Cell (PMC), Conductive-Bridging RAM (CBRAM), Magneto-Resistive RAM (MRAM), Dynamic RAM (DRAM), Phase change RAM (PRAM), magnetic media (e.g., one or more hard disks), optical media, or the like.
- The media controller(s) 123 may be configured to write data to and/or read data from the non-volatile storage media 140 via a bus 127. The bus 127 may comprise a storage I/O bus for communicating data to and from the non-volatile storage media 140, and may further comprise a control I/O bus for communicating addressing and other command and control information to the non-volatile storage media 140.
- The storage controller 120 may be configured to service storage requests for one or more storage clients 118A-N. The storage clients 118A-N may include, but are not limited to, operating systems 118A, file systems 118B, databases 118C, user applications 118D, and so on. The storage clients 118A-N may operate locally on the computing device and/or may operate on other, remote computing devices 111 (e.g., remote storage client(s) 118E).
- The storage clients 118A-N may access services provided by the storage controller 120 via the storage management layer 130. The storage management layer 130 may comprise one or more drivers, libraries, modules, interfaces, block device interfaces, interface extensions (e.g., input/output control (IOCTL) interfaces), Application Programming Interfaces (APIs), application binary interfaces (ABIs), object classes, remote interfaces (e.g., Remote Procedure Call, Simple Object Access Protocol, or the like), and so on.
- The storage management layer 130 may be configured to present and/or expose a logical address space 134 to the storage clients 118A-N. As used herein, a logical address space refers to a logical representation of I/O resources, such as storage resources. The logical address space 134 may comprise a plurality (e.g., range) of logical identifiers. As used herein, a logical identifier refers to any identifier for referencing an I/O resource (e.g., data stored on the non-volatile storage media 140), including, but not limited to, a logical block address (LBA), cylinder/head/sector (CHS) address, a file name, an object identifier, an inode, a Universally Unique Identifier (UUID), a Globally Unique Identifier (GUID), a hash code, a signature, an index entry, a range, an extent, or the like.
- The storage management layer 130 may comprise a logical-to-physical translation layer configured to map and/or associate logical identifiers in the logical address space 134 (and referenced by the storage clients 118A-N) with physical storage locations (e.g., physical addresses) on the non-volatile storage media 140. The mappings may be "any-to-any," such that any logical identifier can be associated with any physical storage location (and vice versa). As used herein, a physical address refers to an address (or other reference) of one or more physical storage location(s) on the non-volatile storage media 140. Accordingly, a physical address may be a "media address." As used herein, physical storage locations include, but are not limited to, sectors, pages, logical pages, storage divisions (e.g., erase blocks, logical erase blocks, and so on), or the like.
- In some embodiments, the logical address space 134 maintained by the storage management layer 130 may be thinly provisioned or "sparse." As used herein, a thinly provisioned or sparse logical address space refers to a logical address space having a logical capacity that is independent of the physical address space of the non-volatile storage media 140. For example, the storage management layer 130 may present a very large logical address space 134 (e.g., 2^64 logical identifiers) to the storage clients 118A-N, which exceeds the physical address space of the non-volatile storage media 140.
- The storage management layer 130 may be configured to maintain storage metadata 135 pertaining to the non-volatile storage media 140 including, but not limited to, a forward index comprising any-to-any mappings between logical identifiers of the logical address space 134 and storage resources, a reverse index pertaining to the non-volatile storage media 140, one or more validity bitmaps, atomicity and/or transactional metadata, and so on. Portions of the storage metadata 135 may be stored in the volatile memory 113 and/or may be periodically stored on a persistent storage medium, such as the non-transitory storage medium 114 and/or non-volatile storage media 140.
- In some embodiments, the storage controller 120 may leverage the arbitrary, any-to-any mappings of the logical-to-physical translation module 132 to store data in a log format, such that data is updated and/or modified "out-of-place" on the non-volatile storage media 140. As used herein, writing data "out-of-place" refers to writing data to different media storage location(s) rather than overwriting the data "in-place" (e.g., overwriting the original physical location of the data). Storing data in a log format may result in obsolete and/or invalid data remaining on the non-volatile storage media 140. For example, overwriting data of logical identifier "A" out-of-place may result in writing data to new physical storage location(s) and updating the storage metadata 135 to associate A with the new physical storage location(s) (e.g., in a forward index, described below). The original physical storage location(s) associated with A are not overwritten and comprise invalid, out-of-date data. Similarly, when data of a logical identifier "X" is deleted or trimmed, the physical storage location(s) assigned to X may not be immediately erased, but may remain on the non-volatile storage media 140 as invalid data.
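The out-of-place update described above can be sketched as follows; `log.append` and the dict-based forward and reverse indexes are illustrative stand-ins rather than the patent's modules:

```python
def write_out_of_place(forward_index, reverse_index, lid, data, log):
    """Out-of-place update: append the new data at the log's append point,
    repoint the forward index, and mark the old physical location invalid
    so it can be reclaimed later by grooming."""
    new_addr = log.append(lid, data)  # data lands at the append point
    old_addr = forward_index.get(lid)
    forward_index[lid] = new_addr     # LID now maps to the new location
    reverse_index[new_addr] = {"lid": lid, "valid": True}
    if old_addr is not None:
        reverse_index[old_addr]["valid"] = False  # obsolete copy remains
    return new_addr
```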
- The storage controller 120 may comprise a groomer module 138 configured to "groom" the non-volatile storage media 140, which may comprise reclaiming physical storage location(s) comprising invalid, obsolete, or "trimmed" data, as described above. As used herein, "grooming" the non-volatile storage media 140 may include, but is not limited to, wear leveling, removing invalid and/or obsolete data from the non-volatile storage media 140, removing deleted (e.g., trimmed) data from the non-volatile storage media 140, refreshing and/or relocating valid data stored on the non-volatile storage media 140, reclaiming physical storage locations (e.g., erase blocks), identifying physical storage locations for reclamation, and so on. The groomer module 138 may be configured to operate autonomously and in the background, apart from servicing other storage requests. Accordingly, grooming operations may be deferred while other storage requests are being processed. Alternatively, the groomer module 138 may operate in the foreground while other storage operations are being serviced. Reclaiming a physical storage location may comprise erasing invalid data from the physical storage location so that the physical storage location can be reused to store valid data. For example, reclaiming a storage division (e.g., an erase block or logical erase block) may comprise relocating valid data from the storage division, erasing the storage division, and initializing the storage division for storage operations (e.g., marking the storage division with a sequence indicator). The groomer module 138 may wear-level the non-volatile storage media 140, such that data is systematically spread throughout different physical storage locations, which may improve performance and data reliability and avoid overuse and/or underuse of particular physical storage locations. Embodiments of systems and methods for grooming non-volatile storage media are disclosed in U.S. Pat. No. 8,074,011, issued Dec. 6, 2011, and entitled "Apparatus, System, and Method for Storage Space Recovery After Reaching a Read Count Limit," which is hereby incorporated by reference.
- In some embodiments, the storage controller 120 may be configured to manage asymmetric, write-once non-volatile storage media 140, such as solid-state storage media. As used herein, "write once" refers to storage media that is reinitialized (e.g., erased) each time new data is written or programmed thereon. As used herein, "asymmetric" refers to storage media having different latencies and/or execution times for different types of storage operations. For example, read operations on asymmetric solid-state non-volatile storage media 140 may be much faster than write/program operations, and write/program operations may be much faster than erase operations. The solid-state non-volatile storage media 140 may be partitioned into storage divisions that can be erased as a group (e.g., erase blocks) in order to, inter alia, account for these asymmetric properties. As such, modifying a single data segment "in-place" may require erasing an entire erase block and rewriting the modified data on the erase block, along with the original, unchanged data (if any). This may result in inefficient "write amplification," which may cause excessive wear. Writing data out-of-place, as described above, may avoid these issues, since the storage controller 120 can defer erasure of the obsolete data (e.g., the physical storage location(s) comprising the obsolete data may be reclaimed in background grooming operations).
- FIG. 4 depicts one embodiment of a forward index 404 configured to maintain arbitrary, any-to-any mappings between logical identifiers and physical storage locations on a non-volatile storage media 140. In the FIG. 4 example, the forward index 404 is implemented as a range-encoded B-tree. The disclosure is not limited in this regard, however; the forward index 404 may be implemented using any suitable data structure including, but not limited to, a tree, a B-tree, a range-encoded B-tree, a radix tree, a map, a content addressable map (CAM), a table, a hash table, or other suitable data structure (or combination of data structures).
- The forward index 404 comprises a plurality of entries 405A-N, each representing one or more logical identifiers in the logical address space 134: entry 405A references logical identifiers 205-212; entry 405B references logical identifiers 72-83; entry 405C references logical identifiers 5-59; and so on. The logical-to-physical translation module 132 may enable independence between logical identifiers and physical storage locations, such that data may be stored sequentially, in a log-based format, and/or updated "out-of-place" on the non-volatile storage media 140. As such, there may be no correspondence between logical identifiers and the physical storage locations.
- The entries 405A-N may comprise assignments between logical identifiers and physical storage locations on the non-volatile storage media 140. Accordingly, one or more of the entries 405A-N may reference respective physical storage locations; for example, entry 405A assigns logical identifiers 205-212 to physical addresses 930-937; entry 405B assigns logical identifiers 072-083 to physical addresses 132-143; and so on. In some embodiments, references to the physical storage locations may be indirect, as depicted in entries
- The physical address(es) of the entries 405A-N may be updated in response to changes to the physical storage location(s) associated with the corresponding logical identifiers due to, inter alia, grooming, data refresh, modification, overwrite, or the like. In some embodiments, one or more of the entries 405A-N may represent logical identifiers that have been allocated to a storage client 118A-N, but have not been assigned to any particular physical storage locations (e.g., the storage client has not caused data to be written to the logical identifiers, as depicted in entry 405E).
- The entries 405A-N may be indexed to provide for fast and efficient lookup by logical identifier. For clarity, the FIG. 4 example depicts entries 405A-N comprising numeric logical identifiers. However, the disclosure is not limited in this regard, and the entries 405A-N could be adapted to include any suitable logical identifier representation, including, but not limited to, alpha-numeric characters, hexadecimal characters, binary values, text identifiers, hash codes, or the like.
- The entries 405A-N of the index 404 may reference ranges or vectors of logical identifiers of variable size and/or length; a single entry 405A may reference a plurality of logical identifiers (e.g., a set of logical identifiers, a logical identifier range, a disjoint, non-adjacent, and/or non-contiguous set of logical identifiers, or the like). For example, the entry 405B represents a contiguous range of logical identifiers 072-083. Other entries of the index 404 may represent non-contiguous sets or vectors of logical identifiers; entry 405G represents a non-contiguous, disjoint logical identifier range 454-477 and 535-598, each range being assigned to respective physical storage locations by respective references G1 and G2. The forward index 404 may represent logical identifiers using any suitable technique; for example, the entry 405D references a logical identifier range by starting point and length (logical identifier 178 and length 15), which corresponds to a range of logical identifiers 178-192.
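A simplified model of range-encoded entries such as 405D (start 178, length 15): each entry maps a contiguous run of logical identifiers to a physical base address. A sorted list with binary search stands in for the patent's range-encoded B-tree; the class name and structure are illustrative assumptions:

```python
import bisect

class ForwardIndexSketch:
    """Each entry maps a contiguous run of LIDs (start, length) to a
    physical base address; ranges are assumed non-overlapping. A sorted
    list with binary search stands in for a range-encoded B-tree."""

    def __init__(self):
        self.starts = []   # sorted range starts
        self.entries = []  # parallel (length, physical_base) tuples

    def insert(self, lid_start, length, phys_base):
        i = bisect.bisect_left(self.starts, lid_start)
        self.starts.insert(i, lid_start)
        self.entries.insert(i, (length, phys_base))

    def lookup(self, lid):
        """Physical address for `lid`, or None if unassigned."""
        i = bisect.bisect_right(self.starts, lid) - 1
        if i < 0:
            return None
        length, phys_base = self.entries[i]
        if lid < self.starts[i] + length:
            return phys_base + (lid - self.starts[i])  # offset in range
        return None
```

Under this sketch, an entry inserted as (178, 15, phys_base) resolves a lookup of LID 180 to phys_base + 2.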
- The index 404 may be used to efficiently determine whether particular logical identifiers are assigned to physical storage location(s) and/or are allocated to one or more storage clients 118A-N. The storage controller 120 may determine that logical identifiers that are not included in the index 404 are available to be allocated to a storage client 118A-N. Similarly, the storage controller 120 may determine that physical storage locations that are not associated with a logical identifier in the index 404 do not comprise valid data, and can be reclaimed. For example, modifying data of the logical identifiers 5-59 may result in associating the entry 405C with a new set of physical storage location(s) (e.g., the storage locations comprising the data as modified "out-of-place" on the non-volatile storage media 140). As a result, the old physical addresses 734-788 are no longer associated with an entry 405A-N in the index 404, and may be identified as "invalid" and ready for reclamation.
- FIG. 5 depicts one example of a reverse index 506 for maintaining metadata pertaining to physical storage locations of a non-volatile storage media 140. In the FIG. 5 example, the reverse index 506 is implemented as a table data structure. The disclosure is not limited in this regard, however, and could be adapted to implement the reverse index 506 using any suitable data structure. For example, in some embodiments, the reverse index 506 is implemented using a tree data structure similar to the forward index 404, described above.
- The reverse index 506 comprises a plurality of entries 507 (depicted as rows in the table data structure of the reverse index 506), each of which corresponds to one or more physical storage locations on the non-volatile storage media 140. Accordingly, each entry 507 may correspond to one or more physical addresses 526. In some embodiments, the entries 507 may be of variable length and/or may comprise compressed and/or encrypted data. As such, one or more of the entries 507 may comprise a data length 528. A valid tag 530 indicates whether the physical address(es) 526 of the entry 507 comprise valid or invalid data (e.g., obsolete or trimmed data).
- The reverse index 506 may further comprise references and/or links to the forward index, such as a logical identifier field 532, a data length 534 from the perspective of the storage clients 118A-N (e.g., uncompressed and/or decrypted data length), and the like (e.g., miscellaneous data 536). In some embodiments, the reverse index 506 may include an indicator of whether the physical address 526 stores dirty or clean data, or the like.
- The reverse index 506 may be organized according to the configuration and/or layout of a particular non-volatile storage media 140. In embodiments comprising solid-state non-volatile storage media 140, the reverse index 506 may be arranged by storage divisions (e.g., erase blocks), physical storage locations (e.g., pages), logical storage locations, or the like. In the FIG. 5 example, the reverse index 506 is arranged into a plurality of erase blocks (540, 538, and 542), each comprising a plurality of physical storage locations (e.g., pages, logical pages, or the like).
- The entry ID 524 may comprise an address, reference, virtual link, or other data to associate entries in the reverse index 506 with entries in the forward index 404 (or other storage metadata 135). The physical address 526 indicates a physical address on the non-volatile storage media 140. Together, the physical address 526 and data length 528 may be referred to as destination parameters 544 (e.g., parameters pertaining to the physical storage location(s) of the entries 507). The logical identifier 532 and data length 534 may be referred to as source parameters 546. The logical identifier 532 associates entries 507 with respective logical identifier(s) of the logical address space 134 (e.g., in the forward index 404).
- The valid tag 530 indicates whether the data of the entry 507 is valid (e.g., whether the physical storage location(s) of the entry 507 comprise valid, up-to-date data of a logical identifier). Entries marked invalid in tag 530 may comprise invalid, obsolete, and/or deleted (e.g., trimmed) data. The reverse index 506 may track the validity status of each physical storage location of the non-volatile storage device. The groomer module 138 may use the reverse index 506 to identify physical storage locations to reclaim and/or to distinguish data that needs to be retained from data that can be removed from the non-volatile storage media 140.
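The validity tracking described above is what lets a groomer choose reclamation candidates. A greedy sketch follows, assuming a dict-based reverse index and a block_of() address-to-erase-block mapping (both illustrative stand-ins, not the patent's structures):

```python
from collections import defaultdict

def pick_reclaim_candidate(reverse_index, block_of):
    """Greedy groomer policy: reclaim the erase block holding the least
    valid data, since it is cheapest to relocate before erasure.
    reverse_index maps physical address -> {"valid": bool, ...}."""
    valid_count = defaultdict(int)
    for addr, entry in reverse_index.items():
        valid_count[block_of(addr)] += 1 if entry["valid"] else 0
    if not valid_count:
        return None
    return min(valid_count, key=valid_count.get)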
- The reverse index 506 may also include other miscellaneous data 536, such as a file name, object name, source data, storage client, security flags, atomicity flag, transaction identifier, or the like. While physical addresses 526 are depicted in the reverse index 506, in other embodiments, physical addresses 526, or other destination parameters 544, may be included in other locations, such as in the forward index 404, an intermediate table or data structure, or the like.
- The reverse index 506 may be adapted to the characteristics and/or partitioning of the non-volatile storage media 140. In the FIG. 5 example, the reverse index 506 is adapted for use with solid-state storage media 140 that is partitioned into a plurality of erase blocks. The groomer module 138 may traverse the index 506 to identify valid data in a particular erase block (or logical erase block) and to quantify an amount of valid data, or conversely invalid data, therein. The groomer may select storage divisions for recovery based, in part, on the amount of valid and/or invalid data in each erase block.
- In some embodiments, the groomer module 138 is restricted to operating within certain portions of the non-volatile storage media 140. For example, portions of the storage metadata 135 may be periodically persisted on the non-volatile storage media 140 (or other persistent storage), and the groomer module 138 may be limited to operating on physical storage locations corresponding to the persisted storage metadata 135. In some embodiments, storage metadata 135 is persisted by relative age (e.g., sequence), with older portions being persisted while more current portions are retained in volatile memory 113. Accordingly, the groomer module 138 may be restricted to operating in older portions of the physical address space and, as such, is less likely to affect data of ongoing storage operations. Therefore, in some embodiments, the groomer module 138 may continue to operate while vector and/or atomic storage requests are being serviced. Alternatively, or in addition, the groomer module 138 may access the storage metadata 135 and/or inflight index (disclosed in further detail below) to prevent interference with atomic storage operations. Further embodiments of systems, methods, and interfaces for managing a logical address space, such as the logical address space 134, and/or storing data in a log-based format are disclosed in U.S. patent application Ser. No. 12/986,117, filed on Jan. 6, 2011, entitled "Apparatus, System, and Method for a Virtual Storage Layer," and published as United States Patent Application Publication No. 2012/0011340 on Jan. 12, 2012, and U.S. patent application Ser. No. 13/424,333, filed on Mar. 19, 2012, and entitled "Logical Interface for Contextual Storage," each of which is hereby incorporated by reference.
- Referring back to FIG. 1, the storage controller 120 may be configured to leverage the arbitrary, any-to-any mappings maintained by the logical-to-physical translation module 132 to manage data on the non-volatile storage media 140 independent of the logical interface of the data (e.g., independent of the logical identifier(s) associated with the data). For example, the storage controller 120 may leverage the logical-to-physical translation module 132 to store data on the non-volatile storage media 140 in a "log format," as described below.
- The storage controller 120 may comprise a log storage module 136 configured to store data on the non-volatile storage media 140 in a log format (e.g., an "event log"). As used herein, a log format refers to a data storage format that defines an ordered sequence of storage operations performed on the non-volatile storage media 140. Accordingly, the log format may define an "event log" of storage operations performed on the non-volatile storage media 140. In some embodiments, the log storage module 136 is configured to store data sequentially, from an append point, on the non-volatile storage media 140. The log storage module 136 may be further configured to associate data (and/or physical storage locations on the non-volatile storage media 140) with respective sequence indicators. The sequence indicators may be applied to individual data segments, packets, and/or physical storage locations and/or may be applied to groups of data and/or physical storage locations (e.g., erase blocks). In some embodiments, sequence indicators may be applied to physical storage locations when the storage locations are reclaimed (e.g., erased) in a grooming operation and/or when the storage locations are first used to store data.
- In some embodiments, the log storage module 136 may be configured to store data according to an "append only" paradigm. The storage controller 120 may maintain a current append point within a physical address space of the non-volatile storage media 140. As used herein, an "append point" refers to a pointer or reference to a particular physical storage location (e.g., sector, page, storage division, offset, or the like). The log storage module 136 may be configured to append data sequentially from the append point. As data is stored at the append point, the append point moves to the next available physical storage location on the non-volatile storage media 140. The log order of data stored on the non-volatile storage media 140 may, therefore, be determined based upon the sequence indicator associated with the data and/or the sequential order of the data on the non-volatile storage media 140. The log storage module 136 may identify the "next" available storage location by traversing the physical address space of the non-volatile storage media 140 (e.g., in a reverse index, as described above) to identify the next available physical storage location.
- FIG. 6A depicts a physical address space 600 of a non-volatile storage media 140. The physical address space 600 is arranged into storage divisions (e.g., erase blocks 612), each of which can be initialized (e.g., erased) in a single operation. Each storage division comprises a plurality of physical storage locations (e.g., pages or logical pages) capable of storing data. Alternatively, the storage divisions 612 may represent sectors of a random-access storage media 140, such as a magnetic hard disk, or the like.
- Each physical storage location may be assigned a respective physical address ranging from zero (0) to N. The log storage module 136 may be configured to store data sequentially 621 from an append point 620 within the physical address space 600. The append point 620 moves sequentially through the physical address space 600. After storing data at the append point 620, the append point advances sequentially 621 to the next available physical storage location. As used herein, an available physical storage location refers to a physical storage location that has been initialized and is ready to store data (e.g., has been erased). Some non-volatile storage media 140, such as solid-state storage media, can only be programmed once after erasure. Accordingly, as used herein, an available physical storage location may refer to a storage location that is in an initialized (or erased) state. If the next storage division in the sequence is unavailable (e.g., comprises valid data, has not been erased or initialized, is out of service, etc.), the append point 620 selects the next available physical storage location. In the FIG. 6A embodiment, after storing data on the physical storage location 616, the append point 620 may skip the unavailable physical storage locations of storage division 613 and continue at the next available physical storage location (e.g., physical storage location 617 of storage division 614).
- After storing data on the "last" storage location (e.g., storage location N 618 of storage division 615), the append point 620 wraps back to the first division 612 (or the next available storage division, if 612 is unavailable). Accordingly, the append point 620 may treat the physical address space as a loop or cycle.
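The append-point behavior of FIG. 6A, skipping unavailable locations and wrapping at the end of the physical address space, can be sketched as follows (media.capacity and media.is_available are assumed helpers, not part of the disclosure):

```python
def advance_append_point(current, media):
    """Move the append point to the next available (initialized) physical
    storage location, skipping unavailable locations and wrapping to the
    start of the physical address space, which is treated as a cycle."""
    addr = current
    for _ in range(media.capacity):
        addr = (addr + 1) % media.capacity
        if media.is_available(addr):
            return addr
    raise RuntimeError("no available physical storage locations")
```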
- FIG. 6B depicts an append point 620 within the physical address space 601 of a non-volatile storage media 140. As depicted in FIG. 6B, the log storage module 136 may be configured to cycle the append point 620 sequentially through the physical address space 601. As disclosed above, data stored at the append point 620 may be associated with (e.g., assigned to) any logical identifier of the logical address space 134. As such, the storage controller 120 may implement a "write anywhere" storage paradigm. Storing data sequentially at the append point 620 (with the any-to-any mappings) may provide performance benefits; rather than searching for a particular physical storage location to be used with a particular logical identifier (and/or initializing the particular physical storage location), data may be stored at available physical storage locations at the append point 620. Accordingly, data may be stored without first searching for and/or initializing particular storage locations. Moreover, sequential storage at the append point 620 may prevent write amplification and other issues related to write-once, asymmetric storage media, as described above.
- Referring back to FIG. 1, the log-based format of the storage controller 120 may further comprise storing data in a "contextual" format. As used herein, "contextual" data refers to a self-describing data format from which the logical interface of the data may be determined. As used herein, the "logical interface" of data may include, but is not limited to: a logical identifier of the data, a range and/or extent of logical identifiers, a set of logical identifiers, a name for the data (e.g., file name, object name, or the like), or the like. Accordingly, the contextual format may comprise storing self-descriptive, persistent metadata with the data on the non-volatile storage media 140; the persistent metadata may comprise the logical identifier(s) associated with the data and/or provide sequence information pertaining to the sequential ordering of storage operations performed on the non-volatile storage media 140. In some embodiments, contextual data may be stored in data packets on the non-volatile storage media 140. As used herein, a data packet refers to any data structure configured to associate a data segment, and/or other quantum of data, with metadata pertaining to the data segment. A data packet may comprise one or more fields configured for storage as a contiguous unit on the non-volatile storage media 140. Alternatively, a data packet may comprise a plurality of different portions and/or fragments stored at different, noncontiguous storage locations of one or more non-volatile storage media 140.
- FIG. 7 depicts one embodiment of a contextual data format (packet 710). Each data packet 710 may comprise a respective data segment 712 comprising data associated with one or more logical identifiers. The data segment 712 may correspond to data of a storage client 118A-N and may include, but is not limited to, operating system data, file data, application data, or the like. In some embodiments, the data of the data segment 712 may be processed by a write data pipeline (described below), which may include, but is not limited to, compression, encryption, whitening, error-correction encoding, and so on. The data segment 712 may be of a predetermined size (e.g., a fixed "block" or "segment" size). Alternatively, the data segment 712 may have a variable size.
- In certain embodiments, the packet 710 may include persistent metadata 714 that is stored on the non-volatile storage media 140 with the data segment 712. In some embodiments, the persistent metadata 714 is stored with the data segment 712 as a packet header, footer, or other packet field. The persistent metadata 714 may include a logical identifier indicator 715 that identifies the logical identifier(s) to which the data segment 712 pertains. As described below, the persistent metadata 714 (and the logical identifier indicator 715) may be used to reconstruct the storage metadata 135, such as the forward index 404 and/or reverse index 506. The persistent metadata 714 may further comprise one or more persistent metadata flags 717. As disclosed below, the persistent metadata flags 717 may be used to support atomic storage operations, transactions, or the like.
- In some embodiments, the packet 710 may comprise and/or be associated with a sequence indicator 718. The sequence indicator 718 may be persisted with the packet 710 on the non-volatile storage media 140; for example, the sequence indicator 718 may be stored on the same storage division as the packet 710. Alternatively, the sequence indicator 718 may be persisted in a separate storage location. In some embodiments, a sequence indicator 718 is applied when a storage division is made available for use (e.g., when erased, when the first or last storage location is programmed, or the like). The sequence indicator 718 may be used to determine the log order of the packet 710 relative to other packets 710 on the non-volatile storage media 140.
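A toy serialization of the packet 710 layout: a logical identifier indicator (715), flag bits (717), and a sequence indicator (718) stored ahead of the data segment (712). The field widths below are assumptions for illustration; the disclosure does not specify them:

```python
import struct

# Assumed layout: 8-byte LID indicator, 1-byte flags, 8-byte sequence
# indicator, 4-byte segment length, then the data segment itself.
HEADER = struct.Struct("<QBQI")

def pack_packet(lid, flags, sequence, segment: bytes) -> bytes:
    """Serialize persistent metadata ahead of the data segment, making
    the stored packet self-describing."""
    return HEADER.pack(lid, flags, sequence, len(segment)) + segment

def unpack_packet(raw: bytes):
    """Recover (lid, flags, sequence, segment) from a stored packet."""
    lid, flags, sequence, length = HEADER.unpack_from(raw)
    segment = raw[HEADER.size:HEADER.size + length]
    return lid, flags, sequence, segment
```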
- The letters A-L of FIG. 6B may represent data stored on physical storage locations of the non-volatile storage media 140. Data A is initially stored at a physical storage location 650. When the data A is persisted at location 650, the physical storage location reference 626 in the forward index (entry 605) is updated to reference the physical storage location 650. In addition, a reverse index entry 607 may be updated to indicate that the physical storage location 650 comprises valid data and/or to associate the physical storage location 650 with logical identifiers 205-212 (not shown). (For clarity, other portions of the forward index and/or reverse index are omitted from FIG. 6B.)
- Data A may be modified and/or overwritten out-of-place, such that the updated data is not stored on the original physical storage location 650. Instead, the updated data A′ is stored sequentially (out-of-place) at storage location 651, which may correspond to the current position of the append point 620 at the time data A was modified. The storage metadata is updated accordingly. The forward index entry 605 is updated to associate the logical identifiers 205-212 with the physical storage location 651 comprising A′. The entry 607 of the reverse index is updated to mark physical storage location 650 as invalid and to indicate that the physical storage location 651 comprises valid data. Marking the physical storage location 650 as invalid may allow the storage location 650 to be reclaimed by the groomer module 138, as described above.
forward index entry 605 is updated to associate the entry with the physical storage location 652, and a reverse index entry 609 is updated to indicate that the physical storage address 652 comprises valid data (and that the physical address 651 comprises invalid data). The “obsolete” versions A and A′ may be retained on the non-volatile storage media 140 until the corresponding physical storage locations 650 and/or 651 are reclaimed (e.g., erased) in a grooming operation. - The data A, A′, and A″ may be stored in the sequential, log-based format (an “event-log” format) described above. Referring back to
FIG. 1, the storage controller 120 may be configured to reconstruct the storage metadata 135 from the contents of the non-volatile storage media 140 (e.g., from the contextual, log format of the data). The storage controller 120 may access the persistent metadata 714 of the packets 710 to identify the logical identifier(s) associated with the corresponding data segments 712. The storage controller 120 may be further configured to distinguish valid, up-to-date data from obsolete, out-of-date versions based on the log-order of the data on the non-volatile storage medium (e.g., based on sequence indicator(s) 718 associated with the data and/or the relative order of the data within the physical address space of the non-volatile storage media 140). - In
FIG. 6B, the logical identifier indicator of the persistent metadata stored with data A, A′, and/or A″ may indicate that the data stored at the physical storage locations 650, 651, and 652 pertains to the logical identifiers 205-212. The log-order of the data may indicate that the physical storage location 652 comprises the current, valid copy of the data. Therefore, the forward index entry 605 may be reconstructed to associate the logical identifiers 205-212 with the physical storage location 652. In addition, the reverse index entries may be reconstructed to indicate that the physical storage locations 650 and 651 comprise invalid data and that the physical storage location 652 comprises valid data. Further embodiments of systems and methods for crash recovery and/or data integrity despite invalid shutdown conditions are described in U.S. patent application Ser. No. 13/330,554, filed Dec. 19, 2011, and entitled, “Apparatus, System, and Method for Persistent Data Management on a Non-Volatile Storage Media,” which is hereby incorporated by reference.
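- A minimal sketch of the reconstruction logic just described, assuming the log is represented as a list of (physical address, logical identifier, sequence) records: replaying the records in log order makes the newest version win, so the forward index ends up pointing at the current copy and the reverse index marks superseded locations invalid. The record shape and names are assumptions for illustration.

```python
def rebuild_indexes(log_records):
    """Replay a log of (phys_addr, lid, sequence) records in log order.

    Returns (forward, reverse): forward maps lid -> phys_addr of the newest
    version; reverse maps phys_addr -> validity (superseded locations become
    invalid, so the groomer can reclaim them).
    """
    forward, reverse = {}, {}
    for phys_addr, lid, sequence in sorted(log_records, key=lambda r: r[2]):
        if lid in forward:
            reverse[forward[lid]] = False  # older version is now obsolete
        forward[lid] = phys_addr
        reverse[phys_addr] = True
    return forward, reverse

# Data "A" at 650 is overwritten by A' at 651, then A'' at 652.
forward, reverse = rebuild_indexes([(650, 205, 1), (651, 205, 2), (652, 205, 3)])
assert forward[205] == 652
assert reverse[650] is False and reverse[651] is False and reverse[652] is True
```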
- FIG. 2 is a block diagram of another embodiment of a storage controller 120 configured to implement vector I/O operations and/or service vector storage requests. The storage controller 120 may further comprise a restart recovery module 139, which may be configured to reconstruct the storage metadata 135 from the contents of the non-volatile storage media 140, as described above. - In the
FIG. 2 embodiment, the non-volatile storage media 140 may comprise one or more non-volatile storage devices, such as one or more hard disks, one or more solid-state storage elements, or the like. The non-volatile storage media 140 (and/or corresponding devices) may be selectively coupled to the media controller 123 via the bus 127 and/or multiplexer 249. Alternatively, or in addition, one or more of the non-volatile storage media 140 (or devices) may be a remote storage device accessible via a network (e.g., network 116). - The
media controller 123 may comprise a storage request receiver module 231 configured to receive storage requests from the storage controller 120 and/or other storage clients 118A-N. The request module 231 may be configured to perform storage operations on the non-volatile storage media 140 in response to the requests, which may comprise transferring data to and from the storage controller 120 and/or storage clients 118A-N. Accordingly, the request module 231 may comprise one or more direct memory access (DMA) modules, remote DMA modules, controllers, bridges, buffers, and the like. - The
media controller 123 may comprise a write pipeline 240 that is configured to process data for storage on the non-volatile storage media 140. In some embodiments, the write pipeline 240 comprises one or more write processing stages, which may include, but are not limited to, compression, encryption, packetization, media encryption, error encoding, and so on. - Packetization may comprise encapsulating data in a contextual data format, such as the self-describing
packet format 710 described above. Accordingly, the write pipeline 240 may be configured to store data with persistent metadata 714, which may include indicators of the logical identifier(s) associated with the data. As described above, the restart recovery module 139 may leverage the contextual data format to reconstruct the storage metadata 135. As used herein, restart recovery comprises the act of a system, apparatus, or computing device commencing processing after an event that can cause the loss of data stored within volatile memory of the system, apparatus, or computing device (e.g., a power loss, reset, hardware failure, software fault, or the like). Restart recovery may also comprise power cycle recovery, such as commencing processing after an invalid shutdown, a hard reset, or a disconnection or separation of the powered device from a power supply (such as physically disconnecting a power supply for the device). - Error encoding may comprise encoding data packets (or other data containers) in an error-correcting code (ECC). The ECC encoding may comprise generating ECC codewords, each of which may comprise a data segment of length N and a syndrome of length S. For example, the
write pipeline 240 may be configured to encode data segments into 240-byte ECC chunks, each ECC chunk comprising 224 bytes of data and 16 bytes of ECC data. In other embodiments, the write pipeline 240 may be configured to encode data in a symbolic ECC encoding, such that each data segment of length N produces a symbol of length X. The write pipeline 240 may encode data according to a selected ECC “strength.” As used herein, the “strength” of an error-correcting code refers to the number of errors that can be detected and/or corrected by use of the error-correcting code. In some embodiments, the strength of the ECC encoding may be adaptive and/or configurable; the strength of the ECC encoding may be selected according to the reliability and/or error rate of the non-volatile storage media 140.
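- The 240-byte chunk example above implies a fixed data-to-syndrome ratio; a small Python sketch of the resulting layout arithmetic follows. The helper names are illustrative, and a real implementation would compute an actual ECC syndrome rather than the checksum placeholder used here:

```python
ECC_CHUNK = 240    # total chunk size in bytes (from the example above)
ECC_DATA = 224     # data bytes per chunk (N)
ECC_SYNDROME = 16  # syndrome bytes per chunk (S)

def chunk_layout(payload_len: int):
    """Return (chunks, media_bytes) needed to store payload_len bytes."""
    chunks = -(-payload_len // ECC_DATA)  # ceiling division
    return chunks, chunks * ECC_CHUNK

def encode(payload: bytes) -> bytes:
    """Split the payload into 224-byte segments, appending a 16-byte syndrome
    to each. A repeated one-byte checksum stands in for a real ECC code."""
    out = bytearray()
    for i in range(0, len(payload), ECC_DATA):
        seg = payload[i:i + ECC_DATA].ljust(ECC_DATA, b"\0")
        syndrome = (sum(seg) % 256).to_bytes(1, "little") * ECC_SYNDROME
        out += seg + syndrome
    return bytes(out)

chunks, media = chunk_layout(1024)
print(chunks, media)                      # 5 chunks, 1200 media bytes
assert len(encode(b"x" * 1024)) == media
```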
- The write buffer 244 may be configured to buffer data for storage on the non-volatile storage media 140. In some embodiments, the write buffer 244 may comprise one or more synchronization buffers to synchronize a clock domain of the media controller 123 with a clock domain of the non-volatile storage media 140 (and/or bus 127). - As described above, the
log storage module 136 may be configured to store data in a log format on the non-volatile storage media 140. The log storage module 136 may be configured to store data sequentially from an append point within the physical address space of the non-volatile storage media 140, as described above. The log storage module 136 may, therefore, select physical storage location(s) for data to maintain a log order on the non-volatile storage media 140, which may comprise providing addressing and/or control information to the media controller 123 and/or write pipeline 240. - The
media controller 123 may further comprise a read pipeline 241 that is configured to read data from the non-volatile storage media 140 in response to requests received via the request module 231. The requests may comprise and/or reference the logical interface of the requested data, such as a logical identifier, a range and/or extent of logical identifiers, a set of logical identifiers, or the like. The physical addresses associated with data of a read request may be determined based, at least in part, upon the logical-to-physical translation layer 132 (and/or storage metadata 135) maintained by the storage controller 120. Data may stream into the read pipeline 241 via the read buffer 245 and in response to addressing and/or control signals provided via the bus 127. The read buffer 245 may comprise one or more read synchronization buffers for clock domain synchronization, as described above. - The read
pipeline 241 may be configured to process data read from the non-volatile storage media 140 and provide the processed data to the storage controller 120 and/or a storage client 118A-N. The read pipeline 241 may comprise one or more data processing stages, which may include, but are not limited to, error correction, media decryption, depacketization, decryption, decompression, and so on. Data processed by the read pipeline 241 may flow to the storage controller 120 and/or storage client 118A-N via the request module 231 and/or another interface or communication channel (e.g., the data may flow directly to and from a storage client via a DMA or remote DMA module of the storage controller 120). - The read
pipeline 241 may be configured to detect and/or correct errors in data read from the non-volatile storage media 140 using, inter alia, the ECC encoding of the data (e.g., as encoded by the write pipeline 240), parity data (e.g., using parity substitution), and so on. The ECC encoding may be capable of detecting and/or correcting a pre-determined number of bit errors, in accordance with the strength of the ECC encoding. Further embodiments of apparatus, systems, and methods for detecting and/or correcting data errors are disclosed in U.S. Pat. No. 8,195,978, issued on Apr. 5, 2012, and entitled “Apparatus, System, and Method for Detecting and Replacing a Failed Data Storage,” which is hereby incorporated by reference. -
FIG. 3 depicts another embodiment of a storage controller 120. In the FIG. 3 embodiment, the non-volatile storage media 140 may comprise a plurality of solid-state storage elements 316 (elements 316-0 through 316-N). The solid-state storage elements 316 may be embodied on separate chips, packages, die, or the like. Alternatively, or in addition, one or more of the solid-state storage elements 316 may share the same package and/or chip (e.g., be separate die and/or planes on the same chip). The solid-state storage elements 316 may be partitioned into respective storage divisions 330 (e.g., erase blocks), each comprising a plurality of storage units 332 (e.g., pages). However, the disclosure could be adapted to use different types of non-volatile storage media 140 comprising different media partitioning schemes and, as such, should not be read as limited in this regard. The solid-state storage elements 316 may be communicatively coupled to the media controller 123 in parallel (via the bus 127). Accordingly, the media controller 123 may be configured to manage the solid-state storage elements 316 as a “logical storage element” 315. - The
logical storage element 315 may comprise 25 solid-state storage elements 316 connected in parallel by the bus 127. The logical storage element 315 may be partitioned into logical storage units, such as logical storage divisions (logical erase blocks) 340 and/or logical storage units (logical pages) 342. Each logical erase block 340 comprises an erase block 330 of a respective storage element 316 (25 erase blocks 330), and each logical page 342 comprises a page 332 of a respective storage element 316 (25 pages). - Storage operations performed on the
logical storage element 315 may operate across the constituent solid-state storage elements 316: an operation to read a logical page 342 comprises reading from as many as 25 physical pages 332 (e.g., one storage unit per solid-state storage element 316), an operation to program a logical page 342 comprises programming as many as 25 physical pages 332, an operation to erase a logical erase block 340 comprises erasing as many as 25 physical erase blocks 330, and so on.
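- The fan-out just described reduces to simple arithmetic. The following minimal Python sketch assumes a per-element page size (the 2048-byte value and the helper names are assumptions for illustration) and shows that one logical page operation touches one physical page on each of the 25 elements:

```python
N_ELEMENTS = 25   # solid-state storage elements 316 in parallel
PAGE_SIZE = 2048  # assumed per-element page size, in bytes

def logical_page_targets(logical_page: int):
    """A logical page 342 spans one physical page 332 on each element."""
    return [(element, logical_page) for element in range(N_ELEMENTS)]

def logical_page_capacity() -> int:
    """Capacity of one logical page across the parallel elements."""
    return N_ELEMENTS * PAGE_SIZE

# Programming logical page 7 touches page 7 on each of the 25 elements.
assert len(logical_page_targets(7)) == 25
print(logical_page_capacity())  # 51200 bytes per logical page
```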
- As disclosed above, the groomer module 138 may be configured to reclaim storage resources on the non-volatile storage media 140. In some embodiments, the groomer module 138 may be configured to interleave grooming operations with other storage operations and/or requests. For example, reclaiming a storage resource, such as a physical erase block (PEB) 330 or logical erase block 340 (e.g., a set of two or more physical erase blocks), may comprise relocating valid data to another storage location on the non-volatile storage media 140. The groomer write bypass module 264 and groomer read bypass module 265 may allow the data being relocated to be read into the read pipeline 241 and then transferred directly to the write pipeline 240 without being routed out of the media controller 123. - The groomer read
bypass module 265 may coordinate reading data to be relocated from a storage resource that is being reclaimed (e.g., an erase block, logical erase block, or the like). The groomer module 138 may be configured to interleave the relocation data with other data being written to the non-volatile storage media 140 via the groomer write bypass 264. Accordingly, data may be relocated without leaving the media controller 123. In some embodiments, the groomer module 138 may be configured to fill the remainder of the write buffer 244 with relocation data, which may improve groomer efficiency, while minimizing the performance impact of grooming operations. - The
media controller 123 may further comprise a multiplexer 249 that is configured to selectively route data and/or commands between the write pipeline 240 and read pipeline 241, and the non-volatile storage media 140. In some embodiments, the media controller 123 may be configured to read data while filling the write buffer 244 and/or may interleave one or more storage operations on one or more banks of solid-state storage elements 316. Further embodiments of write and/or read pipelines are disclosed in U.S. patent application Ser. No. 11/952,091, filed Dec. 6, 2007, entitled “Apparatus, System, and Method for Managing Data Using a Data Pipeline,” and published as United States Patent Application Publication No. 2008/0141043 on Jun. 12, 2008, which is hereby incorporated by reference. -
Many storage clients 118A-N rely on atomic storage operations. As used herein, an atomic operation refers to an operation that either completes or fails as a whole. Accordingly, if any portion of an atomic storage operation does not complete successfully, the atomic storage operation is incomplete (or failed), and other portions of the atomic storage operation are invalidated or “rolled back.” As used herein, rolling back an incomplete atomic storage operation refers to undoing any completed portions of the atomic storage operation. For example, an atomic storage operation may comprise storing six data packets on the non-volatile storage media 140; five of the packets may be stored successfully, but storage of the sixth data packet may fail. Rolling back the incomplete storage operation may comprise ignoring and/or excluding the five packets, as described below. - Some atomic operations may be limited to relatively small, fixed-sized data (e.g., a single sector within a block storage device). Atomic storage operations may require a “copy on write” operation to ensure consistency (e.g., to allow the atomic storage operation to be rolled back, if necessary), which may significantly impact the performance of the atomic storage operations. Moreover, support for atomic storage operations may typically be provided by a layer that maintains its own, separate metadata pertaining to atomic storage operations, resulting in duplicative effort, increased overhead, and/or decreased performance. Some atomic operations may be more complex and may involve multiple storage operations, “sub-requests,” or “subcommands” (e.g., may involve storing a plurality of data packets on the non-volatile storage media 140). The
storage controller 120 may be configured to efficiently service complex atomic storage operations, such that the atomic operations are crash safe and packets of incomplete (failed) atomic operations can be identified and rolled back. - In some embodiments, the
storage controller 120 is configured to leverage and/or extend the storage metadata 135 to provide efficient atomic storage operations through the storage management layer 130. Consistency of the storage metadata 135 may be maintained by deferring updates to the storage metadata 135 until the one or more storage operations comprising the atomic storage request are complete. In some embodiments, the atomic storage module 172 maintains metadata pertaining to atomic storage operations that are “in process” (e.g., ongoing operations that are not yet complete) in separate “inflight” metadata 175. Accordingly, in certain embodiments, the state of the storage metadata 135 is maintained until the atomic operation successfully completes, obviating the need for extensive rollback processing. In response to completion of the atomic storage operation, the atomic storage module 172 updates the storage metadata 135 with the corresponding contents of the inflight metadata 175. - Alternatively, or in addition, the
atomic storage module 172 may comprise an ordered queue 173 that is configured to maintain ordering of storage requests directed to the storage controller 120. The ordered queue 173 may be configured to queue both atomic storage requests and non-atomic storage requests. In some embodiments, the ordered queue 173 may be configured to retain the order in which the storage requests were received (e.g., in a first-in-first-out configuration). The ordering may prevent data hazards, such as read before write, or the like. The ordered queue 173 may, therefore, simplify processing of storage requests and/or obviate the need, for example, for the separate inflight metadata 175 (disclosed below in connection with FIGS. 9A-E). Consequently, certain embodiments may include an ordered queue 173 and not inflight metadata 175 (or vice versa). In addition, some embodiments may leverage the ordered queue 173 to avoid potential problems that may be caused by interleaving of data packets, which may occur if multiple atomic requests are processed simultaneously. As will be explained below in connection with FIGS. 8B and 11A-C, if data packets for each atomic request are stored contiguously in the log (without interleaving packets associated with other write requests), a single bit within each data packet may be utilized to identify whether an atomic write was successfully completed. Accordingly, in certain embodiments, the ordered queue 173 may provide significant advantages by reducing the persistent metadata overhead associated with atomic storage operations. In alternative embodiments, the ordered queue 173 may process either atomic storage requests or non-atomic storage requests, but not both, and/or the storage controller 120 may comprise separate queues for atomic storage requests and non-atomic storage requests.
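- A minimal sketch of the ordered-queue behavior: requests (atomic or not) are retired strictly in arrival order, so the packets of one atomic request are never interleaved with another request's packets. The class and method names are assumptions, not the patent's interfaces.

```python
from collections import deque

class OrderedRequestQueue:
    """FIFO queue that retires storage requests in arrival order (cf. 173).

    Because an atomic request's sub-requests are executed back-to-back, its
    packets land contiguously in the log, and a single per-packet bit then
    suffices to mark completion.
    """
    def __init__(self):
        self._queue = deque()

    def enqueue(self, request):
        self._queue.append(request)

    def drain(self, execute):
        # Each request runs to completion before the next one starts,
        # preventing read-before-write hazards between queued requests.
        while self._queue:
            execute(self._queue.popleft())

log = []
q = OrderedRequestQueue()
q.enqueue(("atomic", ["A1", "A2", "A3"]))  # sub-requests of one atomic op
q.enqueue(("write", ["B"]))
q.drain(lambda req: log.extend(req[1]))
assert log == ["A1", "A2", "A3", "B"]      # no interleaving
```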
- The storage management layer 130 may comprise a vector module 170 configured to perform vector I/O operations (e.g., service vector storage requests). As used herein, a vector I/O operation (or vector storage request) refers to an I/O operation pertaining to one or more vectors. A vector may comprise one or more parameters, which may include, but are not limited to: one or more source identifiers pertaining to a source of an I/O operation, one or more destination identifiers pertaining to a destination of the I/O operation, one or more flags to indicate a type of I/O operation and/or properties of the I/O operation, and so on. Accordingly, as used herein, a “vector” may define an I/O operation (e.g., a storage request) pertaining to a set of disjoint and/or non-contiguous identifiers, a range of identifiers, an extent of identifiers, or the like. The identifiers of a vector may include, but are not limited to: memory addresses, memory references, physical storage locations, logical identifiers, names, offsets, or the like. A vector may specify a storage request and/or I/O operation. As such, as used herein, a vector may be referred to as a “storage request,” “storage vector,” and/or “I/O vector.” A vector storage request may comprise a plurality of vectors and may, therefore, define a plurality of storage requests (e.g., a separate I/O vector and/or storage request for each vector of the vector storage request). The storage requests of a vector storage request may be referred to as “subcommands” or “sub-requests,” each of which may correspond to a respective vector of the vector storage request. Servicing and/or executing a vector storage request comprising a plurality of vectors may comprise servicing and/or executing the subcommands and/or sub-requests of the vector storage request. Accordingly, in certain embodiments, servicing and/or executing a vector storage request may comprise generating and/or determining storage requests corresponding to each vector of the vector storage request (generating and/or determining the subcommands and/or sub-requests of the vector storage request). Servicing and/or executing an atomic vector storage request may comprise successfully completing all of the storage requests of the atomic vector storage request or none of the storage requests of the atomic vector storage request (e.g., rolling back and/or excluding completed portions of a failed atomic vector storage request). - As disclosed above, a vector storage request refers to a request to perform I/O operation(s) on one or more vectors. The vector(s) of a vector storage request may pertain to logical identifier sets and/or ranges that are contiguous or non-contiguous with respect to the
logical address space 134. For example, an operation to TRIM one or more logical identifier ranges in the logical address space 134 may be implemented as a single vector storage request (e.g., a vector storage request to TRIM logical identifiers 2 through 45, 1032 through 1032, and 32134 through 32445).
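- A vector storage request can be modeled as a list of (operation, identifier range) vectors that the layer decomposes into sub-requests. The sketch below is an assumed in-memory shape (the patent does not prescribe these names), using the TRIM example just given:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class IOVector:
    """One vector of a vector storage request: an operation over a LID range."""
    op: str         # e.g., "write", "trim" (flag indicating the I/O type)
    lid_first: int  # first logical identifier of the range/extent
    lid_last: int   # last logical identifier (inclusive)

def decompose(vectors: List[IOVector]) -> List[Tuple[str, int, int]]:
    """Generate one sub-request (subcommand) per vector of the request."""
    return [(v.op, v.lid_first, v.lid_last) for v in vectors]

# The TRIM example above, expressed as a single vector storage request:
trim_request = [
    IOVector("trim", 2, 45),
    IOVector("trim", 1032, 1032),
    IOVector("trim", 32134, 32445),
]
for sub in decompose(trim_request):
    print(sub)  # each tuple would be queued and serviced as its own sub-request
```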
- The storage layer 130 may further comprise an atomic module 172 configured to implement atomic operations. As described in additional detail below, the storage controller 120 may leverage the log format implemented by the log storage module 136, and the independence between logical identifiers and physical storage locations, to efficiently service vector and/or atomic operations. - As disclosed above, the logical-to-
physical translation module 132 may enable arbitrary, any-to-any mappings between logical identifiers and physical storage locations. The storage controller 120 may leverage the flexibility provided by these mappings to store data “out-of-place” and in a log-based format, and to efficiently manage vector storage requests. A vector storage request may comprise a request to perform I/O operation(s) on two or more vectors, which may be disjoint, non-adjacent, and/or non-contiguous with respect to the logical address space 134. However, due to the independence between logical identifiers and physical storage locations, the storage controller 120 may store data pertaining to the vector storage operations contiguously in the log on the non-volatile storage media 140 (e.g., by use of the log storage module 136, as described above). -
FIG. 8A depicts one embodiment of data packets of a vector storage operation stored contiguously in a log 800. The vector storage request 803 of FIG. 8A may comprise a request to write to a plurality of disjoint, non-adjacent, and/or non-contiguous vectors: 1024-1027, 5-6 . . . and 4096-4099. The vector storage module 170, and the log storage module 136, may be configured to store data packets 880 of vector storage operations contiguously within the log 800, which may comprise storing data packets 880 pertaining to disjoint, non-adjacent, and/or non-contiguous vectors contiguously within the log 800 (e.g., storing data packets 880 sequentially from the starting append point 820A to the completion append point 820B). Storing the data packets 880 contiguously within the log 800 may comprise the vector storage module 170 decomposing the vector storage request 803 into one or more sub-requests or subcommands (e.g., separate write commands for each logical identifier range of the vector storage request 803). The sub-requests may be queued for execution by the storage controller 120 (e.g., in an ordered queue 173, request buffer (described in further detail below), or the like). The log storage module 136 may be configured to service each of the sub-requests in order and/or without interleaving other data packets therebetween. Accordingly, the log storage module 136 may store data packets 880 pertaining to the first logical identifier range 882A, second logical identifier range 882B, and Nth logical identifier range 882N, which may be disjoint, non-adjacent, and/or non-contiguous with respect to the logical address space 134, contiguously within the log 800 on the non-volatile storage media 140. Servicing the vector storage request 803 may further comprise updating the storage metadata 135 (e.g., forward index 204) to associate the disjoint, non-adjacent, and/or non-contiguous vectors 882A-N with the physical storage location(s) of the data packets 880 in the log 800, as described above. - Storing data contiguously within the
log 800 may simplify atomic storage operations, including atomic vector storage operations. Referring to FIG. 8B, an atomic vector storage request 804 may comprise a request to write data to two or more disjoint, non-adjacent, and/or non-contiguous vectors, such that either all of the write requests complete successfully, or none of the write requests complete (e.g., any partial sub-requests are rolled back). The atomic storage module 172 may be configured to decompose the atomic vector storage request 804 into sub-requests (e.g., a separate write request for each logical identifier range) to store data packets 885 and 887 contiguously within the log 801 from a starting append point 821A to an end append point 821B, as described above (e.g., by use of the vector storage module 170 and/or the log storage module 136). - The
storage controller 120 may leverage the persistent metadata 714 of the packet format 710 (or other suitable data format) to identify data that pertains to atomic storage operations. In some embodiments, the persistent metadata 714 may be used to identify and exclude data packets pertaining to incomplete, failed atomic storage operations (e.g., during reconstruction of the storage metadata 135 by the restart recovery module 139). The persistent metadata 714 may ensure that atomic storage operations (including atomic vector storage operations) are crash safe, such that data packets of failed atomic operations can be identified and rolled back during restart and/or recovery processing. - In some embodiments, data pertaining to atomic operations may be identified by use of persistent indicators stored on the
non-volatile storage media 140. For example, data pertaining to an “incomplete” and/or “in process” atomic storage operation may be identified by use of a persistent metadata indicator in a first state. As used herein, data of an “incomplete” or “in process” atomic storage request refers to data pertaining to an ongoing atomic storage operation, such as data stored on the non-volatile storage media 140 as part of one or more sub-requests of an atomic vector operation and/or other multi-packet operation. Persistent metadata in a second state may be used to signify completion of the atomic storage operation. The indicators may be stored in a pre-determined order within the log, which, as disclosed in further detail herein, may allow data of failed atomic storage operations to be detected, excluded, and/or rolled back. - In some embodiments, the
packet format 710 of FIG. 7 may be leveraged to identify data packets of atomic storage operations. Data packets pertaining to incomplete and/or in-process atomic storage operations may comprise a persistent metadata flag 717 in a first state. Data packets pertaining to non-atomic operations and/or data packets that represent completion of an atomic storage operation may comprise a persistent metadata flag 717 in a second state. The metadata flag 717 may comprise a single bit; the first state may be a “0” and the second state may be a “1” (or vice versa). - In the
FIG. 8B example, the atomic storage module 172 may configure the write pipeline 240 to store the data packets 885 with the persistent metadata flag 717 in the first state (e.g., the state indicating that the data packets 885 are part of an in-progress atomic storage request 804). The atomic storage module 172 may further configure the write pipeline 240 to set the persistent metadata flag 717 of the data packet 887 of the atomic vector storage request 804 to the second state (e.g., the non-atomic or “closed” state), indicating that the atomic storage operation was successfully completed. The data packet 887 comprising the persistent metadata flag 717 in the second state may be the “last,” “final,” and/or “terminating” data packet of the atomic storage request within the log 801. This data packet may be configured to signify completion of the atomic storage operation. As such, the “last” data packet may be stored at the head of the log with respect to the other packets 885 of the atomic storage operation. Accordingly, when traversing the log in reverse log order 823 from the completion append point 821B, the first packet 887 encountered will indicate that the atomic vector storage request 804 is complete (and that the other data packets 885 of the atomic storage request 804 should be retained).
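- In code, the single-bit scheme reduces to marking every packet of the atomic request with the first state except the last, which carries the second (“closed”) state. A sketch under the assumption that the log is an append-only list of (data, flag) pairs:

```python
FLAG_IN_PROCESS = 0  # first state: packet belongs to an in-process atomic op
FLAG_CLOSED = 1      # second state: non-atomic data / atomic completion marker

def append_atomic(log, packets):
    """Append an atomic request's packets contiguously; only the final packet
    carries FLAG_CLOSED, signifying that the whole request completed."""
    for i, data in enumerate(packets):
        is_last = (i == len(packets) - 1)
        log.append((data, FLAG_CLOSED if is_last else FLAG_IN_PROCESS))

log = []
append_atomic(log, ["885-1", "885-2", "887"])
assert [flag for _, flag in log] == [0, 0, 1]
```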
- The storage controller 120 may be configured to identify data pertaining to incomplete atomic storage operations using the persistent metadata flags 717, which certain embodiments may include in the packets 885 and 887. The restart recovery module 139 may be configured to identify data of an incomplete atomic storage operation in response to identifying one or more data packets comprising a persistent metadata flag 717 in the first state that do not have corresponding data packets with a persistent metadata flag 717 in the second state (e.g., the log 801 ends with packets comprising persistent metadata flags 717 in the first state). In the FIG. 8B embodiment, a failure condition may occur at the append point 821C, before the data packet 887 was stored in the log 801. The restart recovery module 139 may be configured to traverse the log 801 from the failure append point 821C (in reverse log sequence 823), which results in encountering packets 885 comprising a persistent metadata flag 717 in the first state (without first encountering a packet having a persistent metadata flag 717 in the second state), indicating that the packets 885 are part of an incomplete atomic vector storage request 804 and should be ignored and/or invalidated (as described below).
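- The corresponding recovery check is a reverse traversal from the append point: trailing packets in the first state, with no closing packet after them, belong to a failed atomic request and are rolled back. A sketch continuing the (data, flag) log model from above:

```python
FLAG_IN_PROCESS, FLAG_CLOSED = 0, 1

def roll_back_incomplete(log):
    """Scan from the head of the log (the append point) in reverse log order.

    Packets in the first state encountered before any FLAG_CLOSED packet
    belong to an incomplete atomic request; drop them. A FLAG_CLOSED packet
    proves the most recent atomic request completed, so scanning can stop.
    """
    invalid = 0
    for _, flag in reversed(log):
        if flag == FLAG_CLOSED:
            break
        invalid += 1
    return log[:len(log) - invalid] if invalid else log

# Failure occurred before the closing packet 887 was written:
crashed = [("old", 1), ("885-1", 0), ("885-2", 0)]
assert roll_back_incomplete(crashed) == [("old", 1)]
# A completed atomic request survives intact:
complete = [("old", 1), ("885-1", 0), ("885-2", 0), ("887", 1)]
assert roll_back_incomplete(complete) == complete
```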
- Although FIGS. 8A-B depict the logs 800 and 801 as contiguous sequences of physical storage locations, as in FIG. 6A, in some embodiments, the logs 800 and/or 801 may not be contiguous in the physical address space of the non-volatile storage media 140. Referring to FIG. 6A, as the log storage module 136 appends data sequentially from the append point 620, the log storage module 136 may skip over certain physical storage locations that are not available for storing data (e.g., the erase block 613 of FIG. 6A). A physical storage location may be unavailable for a number of different reasons including, but not limited to: the physical storage location is currently being used to store other valid data, the physical storage location is not ready to store data (e.g., has not been reclaimed or erased by the groomer module 138), a failure condition (e.g., the physical storage location has been taken out of service), or the like. However, notwithstanding any non-contiguity in the physical address space 600, the log format of the log storage module 136 generates a contiguous log of storage operations, as defined by the sequence indicators and sequential storage order of data on the non-volatile storage media 140. Therefore, referring back to FIGS. 8A and 8B, the logs 800 and 801 may be contiguous with respect to the log order on the non-volatile storage media 140, regardless of whether the data packets 880, 885, and/or 887 are stored on contiguous physical storage locations of the non-volatile storage media 140. - As described above, the
storage controller 120 may leverage the contiguous log format to ensure that atomic storage operations are crash safe with minimal persistent metadata overhead on the non-volatile storage media 140. For example, if a data packet of a non-atomic storage operation were interleaved within the data packets 885 in the log 801, one or more of the data packets 885 could be misidentified as being part of a completed atomic storage operation. However, the log format of the storage controller 120 may ensure that data of atomic storage operations is stored contiguously within the log 801 (without interleaving other packets therein), which may ensure that incomplete atomic operations are crash safe and can be accurately identified and rolled back. - As described above, in some embodiments, the
storage controller 120 may be configured to defer updates to the storage metadata 135 pertaining to an atomic storage operation until completion of the atomic storage operation. Metadata pertaining to storage operations that are in process may be maintained in separate inflight metadata 175. Accordingly, in certain embodiments, the state of the storage metadata 135 is maintained until the atomic storage operation successfully completes, obviating the need for extensive post-failure “rollback” operations. - Metadata pertaining to in-process atomic storage operations may be maintained in the inflight metadata 175, which may be separate from
other storage metadata 135. The inflight metadata 175 may be accessed to identify read and/or write hazards pertaining to the atomic storage request. -
FIG. 9A depicts one example of storage metadata 135 that comprises a forward index 904 and a separate inflight index 950. Like the forward index 504 described above, the index 904 is a range-encoded B-tree that tracks allocations of logical identifiers within the logical address space 134. Accordingly, the index 904 may comprise a plurality of entries (e.g., entries 905A-F) to associate logical identifiers with corresponding physical storage locations. The forward index 904 may also track the available logical capacity 930 of the logical address space 134 and/or may include an unallocated index (not shown) to track unallocated portions of the logical address space 134. - An atomic
vector storage request 901 may comprise and/or reference one or more vectors pertaining to one or more disjoint, non-adjacent, and/or non-contiguous ranges of logical identifiers (e.g., an atomic vector storage request). In the FIG. 9A example, the atomic vector storage request 901 comprises a request to store data pertaining to two logical identifier ranges (072-120 and 291-347), portions of which overwrite existing data in the forward index 904. The existing data is referenced by entries of the forward index 904 (e.g., the entries 905B and 905E). The entries reference the physical storage locations of the existing data, which may be tracked in the reverse index 922 (for clarity, only a portion of the forward index 904, reverse index 922, and reverse index entries is depicted). As illustrated in FIG. 9A, the atomic vector storage request 901 expands the logical identifier range of 072-083 to 072-120. Servicing the atomic storage request may, therefore, comprise allocating additional logical identifiers in the logical address space 134. Completion of the atomic vector storage request 901 may be predicated on the availability of the additional logical identifiers. The new logical identifiers may be allocated in the forward index 904 (in an unassigned entry (not shown)) or, as depicted in FIGS. 9A-9C, in the inflight index 950. - As disclosed above, the
storage metadata 135 may be updated as data is stored on the non-volatile storage media 140, which may comprise updating entries in the forward index 904 to assign logical identifiers to updated physical storage locations, adding and/or removing entries. Updating the storage metadata 135 may further comprise updating the reverse index 922 to invalidate previous versions of overwritten/modified data and to track the physical storage locations of the updated data. These updates modify the state of the storage metadata 135, which may make it difficult to “roll back” a failed atomic storage operation. Moreover, the updates may cause previous versions of the data to be removed from the non-volatile storage media 140 by the groomer module 138 (or another process, such as a cache manager or the like). Removal of the previous version of data overwritten by data of an atomic storage request may make it difficult or impossible to roll back the atomic storage request in the event of a failure. - Use of the
inflight index 950 may provide additional advantages over tracking in-process storage operations using the forward index 904 alone. For example, as a storage request is performed, the inflight index 950 may be updated via an “exclusive” or “locked” operation. If these updates were performed in the forward index 904 (or other shared storage metadata 135), the lock may preclude other storage requests from being completed. Isolating these updates in a separate data structure may free the storage metadata 135 for use in servicing other, potentially concurrent, storage requests. In addition, the inflight index 950 may track in-process operations that may be rolled back in the event of failure (e.g., atomic storage operations). Furthermore, isolating the in-process metadata within the inflight index 950 allows the storage metadata 135 (e.g., forward index 904) to be maintained in a consistent state until the storage request is fully complete, and may allow for more efficient rollback of failed and/or incomplete storage requests. - In some embodiments, the state of the
storage metadata 135 is preserved until completion of an atomic storage request. The progress of the atomic vector storage request 901 may be tracked in the inflight index 950. Modifications to the inflight index 950 may be applied to the storage metadata 135 (forward index 904 and/or reverse index 922) upon completion of the atomic storage request (and/or upon reaching a point after which the atomic storage operation is guaranteed to complete). -
Entries 906B and 906E may be added to the inflight index 950 in response to the atomic vector storage request 901. The entries 906B and 906E identify the logical identifiers of the atomic vector storage request 901. As illustrated in FIG. 9A, the atomic vector storage request 901 comprises writing data to two vectors pertaining to respective disjoint, non-adjacent, and/or non-contiguous logical identifier ranges (072-120 and 291-347). The inflight index 950 comprises respective entries 906B and 906E for each vector of the atomic vector storage request 901, and so on. - The
inflight index 950 is updated in response to completion of one or more portions of the atomic vector storage request 901. FIG. 9B depicts the inflight index 950 after storing a first portion of the data of the atomic vector storage request 901. The entry 906E indicates that the data corresponding to logical identifiers 291-347 has been successfully stored at physical storage locations 972-1028. Alternatively, or in addition, the physical storage locations may be referenced using a secondary data structure, such as a separate reverse index 922 or the like. The forward index 904 and reverse index 922 of the storage metadata 135 remain unchanged. The inflight index 950 is further updated in response to completion of other portions of the atomic vector storage request 901. FIG. 9C depicts the inflight index 950 as the atomic storage request is completed. The inflight index entry 906B is updated to assign physical storage locations to the logical identifiers 072-083. The forward index 904 and/or reverse index 922 remain unchanged. - The
storage metadata 135 may be updated in response to detecting completion of the atomic vector storage request 901 and/or determining that the atomic vector storage request 901 will successfully complete (e.g., data of the atomic vector storage request has been received within a crash/power-safe domain, such as within the write pipeline 240 or the write buffer 244). -
FIG. 9D depicts the updated storage metadata 135 following completion of the atomic vector storage request 901. As shown in FIG. 9D, the entries 906B and 906E of the inflight index 950 are folded into the forward index 904. In addition, the reverse index 922 may be updated to invalidate data overwritten and/or modified by the atomic vector storage request 901 (e.g., invalidate entries 924 and 925) and to add entries associating the updated data with its physical storage locations. The entries of the forward index 904 are updated to assign the logical identifiers of the atomic vector storage request 901 to the updated physical storage locations, which may comprise expanding the entry 905B from a logical identifier range of 072-083 to 072-120. The forward index 904 and/or portions thereof may be locked during the updating. The lock may prevent potential read/write hazards due to concurrent storage requests.
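- A compact sketch of the defer-then-fold pattern: updates accumulate in an inflight index while the atomic request runs, and are applied to the forward index in one step on completion (or simply discarded on failure). The dictionary-based representation and the physical location 1040 used below are assumptions; the patent's indexes are range-encoded B-trees.

```python
class AtomicUpdate:
    """Stage assignments in an inflight index; fold them into the forward
    index only when the whole atomic request has completed (cf. 950 -> 904)."""
    def __init__(self, forward_index):
        self.forward = forward_index
        self.inflight = {}  # lid -> physical storage location

    def record(self, lid, phys_addr):
        self.inflight[lid] = phys_addr  # forward index untouched so far

    def commit(self):
        self.forward.update(self.inflight)  # fold inflight entries in
        self.inflight.clear()

    def abort(self):
        self.inflight.clear()  # forward index was never modified: no rollback

forward = {72: 924}        # existing assignment (cf. entry 905B)
txn = AtomicUpdate(forward)
txn.record(72, 1040)       # updated data stored out-of-place
assert forward[72] == 924  # storage metadata still consistent mid-request
txn.commit()
assert forward[72] == 1040 # new assignment visible only after completion
```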
- In some embodiments, the inflight index 950 is used to avoid write and/or read hazards. As shown in FIG. 9E, a storage request 902 pertaining to a logical identifier of an atomic vector storage request may be received after or concurrently with the atomic vector storage request 901, but before completion of the atomic vector storage request 901. For example, the subsequent storage request 902 may pertain to logical identifiers 072-083 that are to be overwritten by the atomic vector storage request 901. If the subsequent storage request 902 is to read data of 072-083, the request 902 may pose a read hazard (e.g., read before write), since reading the physical storage location 924 of the entry 905B will return obsolete data. The read hazard may be identified in the inflight index 950, which indicates that the target of the request 902 is in the process of being modified. The storage management layer 130 may be configured to delay and/or defer the subsequent storage request 902 until completion or failure of the atomic vector storage request 901 (and removal of the in-process entry 906B from the inflight index 950). Write hazards may also be detected and addressed by use of the inflight index 950. - The
inflight index 950 may also be used to prevent a subsequent storage request from writing data to the logical identifiers of the atomic vector storage request 901. For example, the entry 906B of the inflight index 950 may be accessed to prevent another storage client from allocating logical identifiers 084-120.
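- Hazard detection against the inflight index amounts to an overlap test between an incoming request's logical identifiers and the in-process entries, deferring the request on a hit. A sketch with assumed names:

```python
def overlaps(range_a, range_b):
    """True if two inclusive LID ranges intersect."""
    return range_a[0] <= range_b[1] and range_b[0] <= range_a[1]

def check_hazard(inflight_ranges, request_range):
    """Return True (defer the request) if it touches any in-process range.

    Reading would return obsolete data (read-before-write); writing could
    collide with the atomic request's allocations (write hazard).
    """
    return any(overlaps(r, request_range) for r in inflight_ranges)

inflight = [(72, 120), (291, 347)]             # cf. entries 906B and 906E
assert check_hazard(inflight, (72, 83))        # defer: overlaps the atomic op
assert not check_hazard(inflight, (400, 410))  # safe to service immediately
```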
- As described above, the storage controller 120 may be configured to mark data packets pertaining to atomic storage operations that are in process (vectored or otherwise). Accordingly, atomic storage operations may be crash safe, such that data of incomplete storage operations can be identified within the log (the log format stored on the non-volatile storage media 140). Absent these indicators, data packets pertaining to a failed atomic storage operation may appear to be valid. This potential issue is illustrated in FIG. 10. Data A, B, and C are stored on physical storage locations of the log 1002 (A at packet 1080 and B at packet 1081). The data A, B, and C are modified (overwritten) in a subsequent atomic storage request. The atomic storage request stores a portion of the atomic storage request: updated data A′ is stored in packet 1090 and updated B′ is stored in packet 1091. A failure occurs (with the append point 1020 at physical storage location 1092) before the atomic storage operation is complete, for example, before writing C′ to packet 1092. The failure may require the storage metadata (e.g., the forward index and/or reverse index, lost through power loss or data corruption) to be reconstructed from the log 1002. - The
restart recovery module 139 may be configured to reconstruct the storage metadata (e.g., forward index) from data stored on the non-volatile storage media 140 in the self-describing log format described above. The restart recovery module 139 may be configured to access the log 1002 from the last known append point 1020, which corresponds to the most recent operations in the log 1002. In some embodiments, the append point 1020 location is periodically stored to the non-volatile storage media 140 (or another non-transitory storage medium). Alternatively, or in addition, the append point 1020 may be determined using sequence indicators within the log 1002 (e.g., sequence indicators on erase blocks or other physical storage locations of the non-volatile storage media 140). The storage metadata 135 may be reconstructed by traversing the log 1002 in a pre-determined order (e.g., from the storage operation performed furthest in the past to the most recent storage operations (tail to head), or from the most recent storage operations to older storage operations (head to tail)). - As disclosed above, the
storage controller 120 may be configured to store data of atomic storage requests contiguously in the log. The storage controller 120 may be further configured to mark data packets with persistent metadata flags 717 to identify data pertaining to in-process atomic storage operations (e.g., by use of the atomic storage module 172). The log order of the data A′ at 1090 and B′ at 1091 of the failed atomic storage request in the log 1002 may indicate that the data packets 1090 and 1091 comprise the current, valid versions of A and B. Reconstructing the storage metadata from the log 1002 could, therefore, result in invalid entries in the forward index 1004 that associate A and B with data of the failed atomic storage request (e.g., data packets 1090 and/or 1091). The reverse index 1022 may comprise entries that mark the data packets 1090 and 1091 as valid and that invalidate the previous versions of A and B (packets 1080 and 1081). -
storage metadata 135. As used herein, a persistent indicator refers to an indicator that is stored (persisted) on a non-volatile storage medium (e.g., the non-volatile storage media 140). A persistent indicator may be associated with the data to which the indicator pertains. In some embodiments, the persistent indicators are persisted with the data in a packet format, such as thepacket format 710 described above. The persistent indicators may be stored with the data in a single storage operation and/or in the smallest write unit supported by thenon-volatile storage media 140. Accordingly, persistent storage indicators will be available when thestorage metadata 135 is reconstructed from thelog 1002. The persistent indicators may identify incomplete and/or failed atomic storage requests despite an invalid shutdown and/or loss ofstorage metadata 135. For example, and as described above, thepackets persistent metadata flags 717 in the first state, indicating that thepackets packet 1092 comprising themetadata flag 717 in the second state was not stored in thelog 1002; therefore, when traversing thelog 1002 from theappend point 1020, therestart recovery module 139 may determine that thepackets packet 1090 and B and packet 1091 (reverting to the associations to 1080 and 1081, respectively), and invalidatingpackets reverse index 1022. -
FIG. 11A depicts another embodiment of persistent indicators within a log 1103. In FIG. 11A, the log 1103 comprises data pertaining to logical identifiers 3-8 stored on respective physical storage locations 20-25. The append point 1120A is prepared to store data at the next sequential physical storage location 26. A forward index 1104 associates the logical identifiers 3-8 with the physical storage locations 20-25. The forward index 1104 may include other entries, which are not shown here for clarity. - An
atomic storage request 1101 is received to store data in association with one or more disjoint, non-adjacent, and/or non-contiguous logical identifiers (LIDs 4, 6, and 8). In some embodiments, the atomic storage request 1101 is formed by combining one or more storage requests, as described above; for example, the storage requests may be combined into a single atomic vector storage request that is implemented as a whole. - In some embodiments, data of the
atomic storage request 1101 is stored contiguously in the log 1103, such that data that does not pertain to the atomic storage request 1101 is not interleaved with data of the atomic storage request 1101. The logical identifiers of the atomic storage request 1101, however, may be disjoint, non-adjacent, non-contiguous, out of order, or the like. Accordingly, while data of the atomic storage request 1101 is being appended to the log 1103, other data that does not pertain to the request 1101, such as groomer bypass data, data of other storage requests, and the like, may be suspended. In some embodiments, suspension is not required if write requests, including grooming, are processed utilizing the ordered queue 173, described above. -
FIG. 11B depicts the state of the storage metadata 1134, inflight index 1150, and log 1103 while the atomic storage request 1101 is in process. In FIG. 11B, data of the logical identifiers 4 and 6 has been appended to the log 1103 at the physical storage locations 26 and 27, and the inflight index 1150 tracks the progress of the atomic storage request 1101 (e.g., assigns the logical identifiers 4 and 6 to the physical storage locations 26 and 27). - The
persistent metadata flag 1117 stored with the data on the physical storage locations 26 and 27 indicates that the data pertains to an atomic storage request that is in process. The data on the physical storage locations 26 and 27 may be identified as pertaining to an incomplete atomic storage request because the persistent metadata flag 1117 is a “0” rather than a “1,” reading in reverse log order (reading to the left from the append point 1120A, as illustrated in FIG. 11B). If the first persistent metadata flag 1117 preceding the append point 1120A is set to a “1” (as shown in FIG. 11C), this indicates that the atomic storage operation was successfully completed. The persistent metadata flag 1117 may be stored with the data on the physical storage locations 26 and 27. - If a failure were to occur, the
persistent metadata flags 1117 are used, together with the contiguous placement of data for the atomic storage request 1101 in the log 1103, to identify data pertaining to the incomplete atomic storage request 1101. When the event log 1103 of FIG. 11B is traversed in reverse log order (e.g., right to left as shown in FIG. 11B or, in other words, from the tail to the head of the sequence), the first persistent metadata flag 1117 will be a “0,” indicating that the data pertains to a failed atomic storage request. The data at storage location 27 may, therefore, be invalidated and may not result in reconstructing invalid storage metadata 1134. The data may continue to be invalidated or ignored until a “1” flag is encountered at physical storage location 25. This approach relies on data of the atomic storage request 1101 being stored contiguously in the log 1103. If data comprising a “1” persistent metadata flag 1117 were interleaved with the atomic storage data (before completion of the atomic storage request 1101), the data at 26 and/or 27 could be misidentified as being valid (e.g., pertaining to a complete atomic storage request 1101). -
FIG. 11C depicts one embodiment of completion of the atomic storage request 1101. The final storage operation of the atomic storage request 1101 comprises a “1” flag indicating that the atomic storage request 1101 is complete. The forward index 1104 is updated to assign the logical identifiers 4, 6, and 8 to the physical storage locations 26, 27, and 28, and the inflight index 1150 is updated to indicate that the atomic storage request 1101 is no longer in process (e.g., is complete). - If a failure were to occur subsequent to persisting the data at
physical storage location 28, the storage metadata 1134 could be correctly reconstructed. When traversing the event log 1103 in reverse sequence (e.g., moving left from the append point), the first persistent metadata flag 1117 encountered would be the “1” flag on the physical storage location 28, indicating that the data at the physical storage locations 26, 27, and 28 pertains to a completed atomic storage request.
persistent metadata flag 1117 may comprise an identifier, which may allow data to be interleaved with atomic storage requests and/or allow atomic storage requests to be serviced concurrently. In some embodiments, data of atomic storage operations may be allowed to cross storage boundaries, as described below in conjunction withFIGS. 13-16C . - In some embodiments, the
persistent metadata flags 1117 of data packets pertaining to atomic storage operations may be modified in response to grooming operations. For example, a grooming operation on the storage division comprising the physical addresses 26 and 27 may comprise relocating the valid data stored thereon (e.g., the data of the logical identifiers 4 and 6). When the data is relocated after completion of the atomic storage operation, the persistent metadata flags 1117 of the corresponding data packets may be modified to indicate that the data is part of a complete atomic operation and/or a non-atomic operation, which may comprise updating the persistent metadata flags 1117 of the data packets to a “1” state. Accordingly, when the storage metadata 135 is reconstructed from an updated append point 1120B, the relocated data on the storage division 1142 will not be misidentified as being part of a failed and/or incomplete atomic storage operation.
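- The grooming interaction reduces to rewriting the flag as part of relocation. A sketch under the same (data, flag) packet model used above: relocating a packet of a completed atomic operation rewrites its flag to the closed state, so a later reverse scan cannot misread the relocated copy.

```python
FLAG_IN_PROCESS, FLAG_CLOSED = 0, 1

def relocate(packet, atomic_complete: bool):
    """Copy a valid packet to a new storage division during grooming.

    If the packet's atomic request is known to have completed, its flag is
    rewritten to FLAG_CLOSED so the relocated copy can never be mistaken
    for part of a failed atomic operation after a restart.
    """
    data, flag = packet
    if atomic_complete and flag == FLAG_IN_PROCESS:
        flag = FLAG_CLOSED
    return (data, flag)

assert relocate(("lid4", FLAG_IN_PROCESS), atomic_complete=True) == ("lid4", 1)
assert relocate(("lid4", FLAG_IN_PROCESS), atomic_complete=False) == ("lid4", 0)
```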
- In some embodiments, the groomer module 138 may be configured to control grooming operations on storage divisions that comprise persistent metadata indicating completion of atomic storage operation(s). The groomer module 138 may be configured to prevent such storage divisions from being groomed until other storage divisions comprising data of the corresponding atomic storage operation(s) have been relocated and/or updated to indicate that the atomic storage operation(s) are complete. As described in further detail below (in conjunction with FIGS. 13-16C), prohibiting grooming operations on such storage divisions may, inter alia, prevent loss of the completion indicators due to grooming failures. - The
storage management layer 130 may be configured to manage subsequent storage operations pertaining to data of atomic storage operations. For example, an operation to TRIM data of logical identifier 8 may result in trimming (e.g., invalidating) the data packet at physical address 28, which indicates completion of the atomic storage request 1101. If the data packet at physical address 28 were to be completely invalidated and/or erased, the corresponding persistent metadata flag 1117 indicating completion of the atomic storage request 1101 may also be lost, which may allow the data at physical addresses 26 and/or 27 to be misidentified as being part of a failed and/or incomplete atomic storage operation. The storage layer 130 may be configured to implement TRIM operations while preserving information pertaining to atomic storage operations (e.g., persistent metadata flags 1117). In response to the TRIM request, the storage management layer 130 may be configured to invalidate the data at physical address 28, while retaining the completion indicator (e.g., the persistent metadata flag 1117). The storage management layer 130 may be configured to invalidate the data within the index 404 and/or reverse index 506, while retaining storage metadata 135 indicating successful completion of the atomic storage operation. Accordingly, the storage management layer 130 may invalidate the data of logical identifier 8 while retaining the effect of the persistent metadata flag 1117 associated with the data. - In some embodiments, an operation trimming data comprises storing a persistent indicator corresponding to the trim operation (e.g., a persistent TRIM note, packet, or the like). During a restart and recovery operation, the
restart recovery module 139 may be configured to exclude trimmed data in response to such indicators (e.g., exclude data stored at physical address 28 in response to a persistent indicator that the data was trimmed). The restart recovery module 139 may be further configured to preserve the persistent metadata of the invalidated data (e.g., apply and/or effectuate the persistent metadata flag 1117), such that the data of logical identifiers 4 and 6 (at physical addresses 26 and 27) is not misidentified as being part of a failed and/or incomplete atomic storage operation. Accordingly, the restart recovery module 139 may utilize the persistent metadata flag 1117 of the invalidated data, while excluding the data itself. - The disclosure is not limited to preserving
persistent metadata 1117 through TRIM operations. As disclosed herein, a data packet may be invalidated in response to a number of different storage operations including, but not limited to: overwriting, modifying, and/or erasing the data. As disclosed above, performing any of these types of operations in relation to logical identifier 8 may result in invalidating the data stored at physical address 28 (e.g., the data comprising the persistent metadata flag 1117 indicating completion of the atomic storage request 1101). In response to any such operation, the storage management layer 130 and/or restart recovery module 139 may be configured to preserve the effect of the persistent metadata flag(s) 1117, while invalidating the corresponding data. As described above, preserving the persistent metadata flag(s) 1117 may comprise retaining storage metadata 135 indicating that the data at physical address 28 is invalid, but that the corresponding atomic storage operation was successfully completed, excluding the data at physical address 28 while preserving and/or applying the persistent metadata flag(s) at physical address 28, and so on. Accordingly, the storage management layer may be configured to invalidate a portion of data comprising persistent metadata flags 1117 indicating completion of the atomic storage request (a particular data packet, data segment, or the like), and to utilize the persistent metadata flags 1117 of the invalidated data despite the invalidation operation(s). Preserving the persistent metadata flags 1117 of the invalidated data may comprise identifying other data of the atomic storage request (e.g., other portions of data, such as data packets, data segments, or the like) as being part of a completed atomic storage request (or non-atomic storage request). Preserving the persistent metadata flags 1117 may further comprise the restart recovery module 139 excluding the invalidated portion of data, while identifying other portions of the corresponding atomic storage request as valid (e.g., by applying the persistent metadata flags 1117 of the invalidated data portion). -
- FIG. 12A depicts one example of a log 1203 comprising persistent metadata 1217A (e.g., persistent metadata flags). The log 1203 comprises data pertaining to two atomic operations having respective identifiers ID1 and ID2; ID1 corresponds to an atomic storage request pertaining to a first set of logical identifiers, and ID2 corresponds to a separate atomic storage request pertaining to a different set of logical identifiers.
- The ID1_0 persistent metadata flag 1217A on a number of physical storage locations identifies data pertaining to the atomic storage operation ID1, and the persistent metadata flag 1217A ID1_1 on the physical storage location 26 indicates successful completion of the atomic storage operation ID1. Another persistent metadata flag 1217A ID2_0 identifies data pertaining to a different, interleaved atomic storage operation. The persistent metadata flag 1217A ID2_1 of physical storage location 24 indicates successful completion of the atomic storage request ID2. Data that does not pertain to an atomic storage operation may comprise a "1" persistent metadata flag 1217A or another pre-determined identifier. When reconstructing storage metadata from the event log 1203 (at the append point 1220A), if an atomic storage request identifier comprising a "0" flag (e.g., ID1_0) is encountered before (or without) encountering a completion persistent metadata flag 1217A (e.g., ID1_1), all data associated with the persistent metadata flag 1217A ID1 may be invalidated. By contrast, after encountering the ID1_1 flag, all data associated with the ID1 persistent metadata flag 1217A may be identified as pertaining to a completed atomic storage request. The persistent metadata 1217A of data pertaining to atomic storage operations may be updated in response to grooming operations, as described above. Accordingly, relocating data of the logical identifiers to another storage division 1242 after completion of the atomic storage operation ID2 may comprise updating the respective persistent metadata flags 1217A of the corresponding data packets to indicate that the data is part of a completed atomic storage operation (or non-atomic storage operation). Although the extended persistent metadata flags 1217A of FIG. 12A may provide more robust support for atomic storage operations, they may impose additional overhead.
- FIG. 12B depicts another embodiment of persistent metadata. As described above in conjunction with FIG. 12A, the log 1203 may comprise data pertaining to two atomic operations having respective identifiers ID1 and ID2, wherein ID1 corresponds to an atomic storage request pertaining to a first set of logical identifiers and ID2 corresponds to a separate atomic storage request pertaining to a different set of logical identifiers.
- As indicated in FIG. 12B, data associated with the logical identifiers of the atomic storage operation ID1 may comprise persistent metadata 1217B that indicates that the data pertains to the atomic storage operation ID1. In some embodiments, the persistent metadata 1217B may comprise persistent metadata flag(s) within a packet header. The disclosure is not limited in this regard, however; the persistent metadata 1217B may be embodied in other forms. In some embodiments, for example, the persistent metadata 1217B may be embodied in a persistent index, reverse index, separate data packet or segment, or the like.
- In the
FIG. 12B embodiment, completion of the atomic storage operations ID1 and ID2 may be indicated by persistent metadata 1218_1 and 1218_2. The persistent metadata 1218_1 and/or 1218_2 may be embodied as persistent metadata within the log 1203, such as separate data packets, data segments, persistent flags within other data packets, or the like. The completion indicators 1218_1 and/or 1218_2 may be configured to indicate completion of one or more atomic storage operations; the completion indicator 1218_1 may indicate completion of the atomic storage operation ID1, and the completion indicator 1218_2 may indicate completion of the atomic storage operation ID2. Accordingly, the completion indicators 1218_1 and/or 1218_2 may comprise and/or reference the identifier(s) of one or more completed atomic storage operations ID1 and ID2. Data of a failed and/or incomplete atomic storage operation may be detected in response to identifying data comprising an atomic storage operation identifier that does not have a corresponding completion indicator.
- In some embodiments, the completion indicators 1218_1 and/or 1218_2 may be configured to indicate completion of an atomic storage operation regardless of the log order of the indicator(s) 1218_1 and/or 1218_2 within the log 1203. The atomic storage module 172 may be configured to append the persistent metadata 1218_1 and/or 1218_2 to the log 1203 in response to completing the respective atomic storage operations ID1 and/or ID2. Completion of an atomic storage operation may comprise transferring data of the atomic storage operation into a powercut- and/or crash-safe domain, such as the media controller 123, write buffer 244, media write buffer, queue 173 (described below), request buffer 1780 (described below), or the like. Accordingly, an atomic storage operation may be considered complete before all of the data pertaining to the atomic storage operation has actually been written to the non-volatile storage medium 140, which may result in storing the completion indicator(s) 1218_1 and/or 1218_2 before data of the corresponding atomic operations within the log 1203. The restart recovery module 139 may be configured to apply and/or effectuate the completion indicators 1218_1 and/or 1218_2 regardless of their order within the log 1203.
- In some embodiments, the completion indicators 1218_1 and/or 1218_2 may be consolidated. As described above, grooming data pertaining to an atomic operation may comprise modifying persistent metadata of the data, which may comprise updating
persistent metadata flags 1217B to indicate that the data packets are part of a completed atomic storage operation and/or a non-atomic storage operation. Grooming may further comprise combining and/or coalescing the persistent metadata 1218_1 and/or 1218_2. For example, the persistent metadata 1218_1 and 1218_2 may be combined into a single persistent metadata entry (persistent note or data packet) 1218_N that indicates completion of a plurality of atomic storage operations (e.g., the atomic storage operations ID1 and ID2). The persistent indicator(s) 1218_1, 1218_2, and/or 1218_N may be removed from the log 1203 in response to updating the persistent metadata 1217B of the data corresponding to the atomic storage operations (e.g., updating the respective persistent metadata flags 1217B of the data packets in grooming operation(s), as described above), such that the persistent indicator(s) are no longer required to determine that the corresponding atomic storage operations were successfully completed.
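- The order independence of the completion indicators of FIG. 12B lends itself to a two-pass recovery scan, sketched below in C. The log-entry layout and names are assumptions for illustration: a first pass collects the identifiers of completed atomic operations wherever their completion notes appear, and a second pass excludes any data whose atomic identifier was never completed.

    /* Sketch (assumed structures): order-independent completion notes. */
    #include <stdbool.h>
    #include <stddef.h>

    enum { ENTRY_DATA, ENTRY_COMPLETION_NOTE };

    typedef struct {
        int      kind;      /* ENTRY_DATA or ENTRY_COMPLETION_NOTE */
        unsigned atomic_id; /* 0 = not part of an atomic operation */
    } log_entry_t;

    #define MAX_IDS 32      /* assumed bound on atomic operation ids */

    void recover(const log_entry_t *log, size_t n, bool valid[/* n */])
    {
        bool complete[MAX_IDS] = { false };

        /* Pass 1: a completion note validates its operation wherever it
         * appears; it may even precede some of the operation's data. */
        for (size_t i = 0; i < n; i++)
            if (log[i].kind == ENTRY_COMPLETION_NOTE)
                complete[log[i].atomic_id] = true;

        /* Pass 2: admit non-atomic data and data of completed operations. */
        for (size_t i = 0; i < n; i++)
            valid[i] = (log[i].kind == ENTRY_DATA) &&
                       (log[i].atomic_id == 0 || complete[log[i].atomic_id]);
    }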
- FIG. 13A is a diagram illustrating data of an atomic storage operation stored within multiple logical erase blocks 1340 a-b of a non-volatile storage media 1302 in response to an atomic storage request. It should be noted that, in connection with FIGS. 13-15, certain components are marked with the same fill pattern to identify these components throughout these figures, although, for simplicity and clarity, a reference number has not been placed on each such area.
- As illustrated in FIG. 13A, two data packets 1310 a-b are stored in a first logical erase block 1340 a and two different data packets 1310 c-d are stored in a second logical erase block 1340 b. In the illustrated embodiment, all four of the data packets 1310 a-d are stored as a result of a single atomic storage request (e.g., an atomic vector storage request). As indicated above, the append point 1320 indicates where additional data may be written to the storage media 1302.
- Each logical erase block 1340 a-b comprises two or more physical erase blocks (e.g., erase
blocks 330, as depicted in FIG. 3). A logical erase block boundary 1342 separates each logical erase block 1340 a-b. The logical erase block boundary 1342 may comprise a virtual or logical boundary between each logical erase block 1340 a-b.
- As illustrated in the embodiment of FIG. 13A, each data packet 1310 a-d includes a header 1314 a-b. Each header 1314 a-b may comprise persistent metadata related to data 1312 within each packet 1310 a-d. The data 1312 may comprise user data to be stored on, and potentially retrieved from, the storage media 1302 in response to requests by, for example, storage clients 118A-N. In some embodiments, a header 1314 a and its associated data 1312 are both stored to the storage media 1302 in a single write operation (e.g., in a packet format 710).
- In FIG. 13A, a header 1314 a of a first data packet 1310 a is illustrated. The header 1314 a may comprise persistent metadata including various flags 1317 a-c. For example, one or more bits of the header 1314 a may comprise a data packet flag 1317 c that, when set to a particular value, indicates that an associated data packet 1310 a-d comprises user data. The position and number of the bits for each data packet flag 1317 c within the header 1314 a may be varied within the scope of the disclosed subject matter. Also, in one embodiment, the data packet flag 1317 c may be located in the same position (i.e., the same bit position) within each header 1314 a-b of each data packet 1310 a-d.
- The illustrated headers 1314 a-b also include either a first persistent metadata flag in a
first state 1317 a or the first persistent metadata flag in a second state 1317 b. The first persistent metadata flag 1317 a-b may comprise a single bit within each header 1314 a-b. For example, the first persistent metadata flag in the first state 1317 a may comprise a particular bit position (such as the 56th bit) within a header 1314 a set to a high value (a "1"), while the first persistent metadata flag in the second state 1317 b may comprise the same bit position in a different header 1314 b set to a low value (a "0"). Alternatively, the first persistent metadata flag in the first state 1317 a may comprise a particular bit position within the header 1314 a set to a low value, while the first persistent metadata flag in the second state 1317 b may comprise the same bit position in a different header 1314 b set to a high value. In one embodiment, the first persistent metadata flag in the first or second state 1317 a-b may each comprise a pattern of multiple bits or separate and distinct bit positions. When data packets 1310 a-d associated with an atomic storage request are stored contiguously, the use of a single bit within each packet 1310 a-d provides the advantage that a very small amount of data is used on the storage media 1302 to indicate whether an atomic write operation failed or succeeded.
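- A minimal C sketch of this single-bit encoding follows; the 56th-bit position is the example given above, and the header layout is otherwise an assumption made for illustration.

    /* Sketch of a packet header word carrying the single-bit persistent
     * metadata flag (bit position and layout assumed for illustration). */
    #include <stdbool.h>
    #include <stdint.h>

    #define ATOMIC_FLAG_BIT  56                        /* example position */
    #define ATOMIC_FLAG_MASK (UINT64_C(1) << ATOMIC_FLAG_BIT)

    /* First state ("atomic request open"): bit set high.
     * Second state ("final packet; request completed"): bit cleared. */
    static inline void set_first_state(uint64_t *hdr)  { *hdr |=  ATOMIC_FLAG_MASK; }
    static inline void set_second_state(uint64_t *hdr) { *hdr &= ~ATOMIC_FLAG_MASK; }
    static inline bool in_first_state(uint64_t hdr)    { return (hdr & ATOMIC_FLAG_MASK) != 0; }

- Under this assumed encoding, every packet of an atomic request except the last is written with the bit in the first state; the final packet carries the second state, marking successful completion.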
- As illustrated in FIG. 13A, each header 1314 a of the first three data packets 1310 a-c comprises the first persistent metadata flag in the first state 1317 a, while the last data packet 1310 d comprises the first persistent metadata flag in the second state 1317 b. In one embodiment, each data packet 1310 a-c stored on the storage media 1302 pursuant to an atomic storage request, except the last data packet 1310 d, comprises the first persistent metadata flag in the first state 1317 a. As illustrated, the last packet 1310 d includes the first persistent metadata flag in the second state 1317 b, which signals the end, or successful completion, of the data written pursuant to the atomic write request. This embodiment is advantageous in that only one bit within each packet 1310 a-d is needed to signal whether an atomic storage request was completed successfully. The first persistent metadata flags in the first and second states 1317 a-b indicate not only that the data 1312 of these packets 1310 a-d pertains to an atomic storage request, but also identify a beginning and an end, or successful completion, of the data associated with the atomic storage request.
- However, a problem may arise if the third and fourth data packets 1310 c-d of the second logical erase block 1340 b are erased. Some background information may be helpful in understanding this problem. For example, during a recovery or other process, the event log (e.g., the data stored sequentially together with persistent metadata, as illustrated in the event log 1103 of FIG. 11) may be accessed to reconstruct a logical sequence of logical erase blocks 1340 a-b (e.g., from head to tail). This may be achieved through a scan of the erase blocks 1340 a-b and, in particular, through examination and processing of metadata and sequence indicators stored in the erase block headers 1319 a-b of the event log 1103. The logical sequence of erase blocks 1340 a-b may be formulated before performing recovery following an invalid shutdown or a restart operation (such as a shutdown resulting from a power failure) using either a forward or reverse sequence scan of the logical erase blocks 1340 a-b stored on the media 1302. After the logical sequence of erase blocks 1340 a-b has been formulated, reverse sequence scanning of the event log 1103 (or of the logical sequence of erase blocks 1340 a-b specified by the event log 1103) is initiated, in certain embodiments, from the append point 1320 (i.e., the tail) toward the head or beginning of the log 1103 to identify failed atomic requests. In such a case (if the third and fourth data packets 1310 c-d of the second logical erase block 1340 b are erased), the reverse sequence scanning from the append point 1320 could erroneously identify the first and second data packets 1310 a-b as being associated with a failed atomic storage request, because the first encountered packet 1310 b does not include the first persistent metadata flag in the second state 1317 b. Accordingly, in one embodiment, grooming or deletion of a logical erase block 1340 b that includes an endpoint 1321 is prohibited.
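- The reverse scan described above can be sketched in C as follows; the packet representation is assumed, and the scan simply walks back from the append point while packets remain in the first state.

    /* Sketch (assumed representation) of the reverse scan from the append
     * point under the single-bit flag scheme of FIG. 13A: if the packet
     * nearest the append point is still in the first state, every contiguous
     * preceding first-state packet belongs to a failed atomic write. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        bool atomic_first_state; /* flag 1317a set: atomic write still open */
    } packet_t;

    /* Returns the number of trailing packets to exclude as a failed atomic
     * write; 0 if the log ends with a completed (second-state) packet. */
    size_t failed_tail_len(const packet_t *log, size_t n /* append point */)
    {
        size_t failed = 0;
        while (failed < n && log[n - 1 - failed].atomic_first_state)
            failed++; /* keep walking back while packets remain "open" */
        return failed;
    }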
- As used in this application, an endpoint 1321 may comprise the point immediately after the last packet 1310 d, which may be stored or identified in a volatile memory. Alternatively, the final or last packet 1310 d of an atomic write operation may comprise the endpoint.
- As an alternative to prohibiting grooming or deletion of a logical erase block 1340 b that includes an endpoint 1321, an incorrect determination that the first and second data packets 1310 a-b relate to a failed atomic storage request may be avoided by reference to sequence indicators (such as the sequence indicators 718 illustrated in FIG. 7). As noted above, the sequence indicators 718 identify or specify a log order of physical storage locations (e.g., erase blocks 1340 a-b). In particular, in one embodiment, the sequence indicators 1318 a-b of each erase block header 1319 a-b comprise monotonically increasing numbers spaced at regular intervals. In view of the foregoing, if a sequence indicator 1318 b for a next logical erase block 1340 b in the event log 1103, moving from left to right (from the head to the tail of the logical chain of erase blocks, as specified by the event log 1103), is not the next sequence number in the sequence, then the storage management layer 130 recognizes that the prior logical erase block 1340 a does not end with a failed atomic request, i.e., that the first and second packets 1310 a-b do not comprise a part of a failed atomic write.
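- In C, the succession check implied by regularly spaced sequence indicators reduces to a single comparison; the interval value below is an assumed example spacing (an interval of 64 is used as an example later in this disclosure).

    /* Sketch: with sequence numbers spaced at a regular interval, a gap
     * between consecutive erase blocks in the log reveals that the earlier
     * block's trailing packets are not part of a failed atomic write. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SEQ_INTERVAL 64 /* example spacing between sequence numbers */

    /* True if 'next' is the expected successor of 'prior' in the log. */
    static bool is_next_in_sequence(uint64_t prior, uint64_t next)
    {
        return next == prior + SEQ_INTERVAL;
    }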
- FIG. 14 illustrates a failed atomic write to a non-volatile solid-state storage media 1402 that spans a logical erase block boundary 1442. As indicated in FIG. 14, the atomic write request, in the illustrated case, failed because of a power failure 1488. A power failure 1488 may comprise any event that can cause the loss of data stored within volatile memory of a system, apparatus, or computing device (e.g., a hard reset or other interruption of power). The power failure 1488 may comprise a failure of a primary power source of a computing device 110 and/or the storage controller 120. Alternatively, the atomic write may have failed for other reasons. As shown in FIG. 14, the first and second data packets 1410 a-b may be stored in the first logical erase block 1440 a, and a third data packet 1410 c may be stored in a second logical erase block 1440 b. Each of the data packets 1410 a-c comprises a persistent metadata flag in a first state 1417 a; FIG. 14 illustrates a persistent metadata flag 1417 a in the header 1414 a of packet 1410 a. The last packet 1410 c shown in FIG. 14 does not include a persistent metadata flag in a second state 1317 b, indicating that the atomic write at issue was not successfully completed. As a consequence, if a reverse sequence scan of the storage media 1402 is initiated from, or based on, the append point 1420 during a restart recovery, the packets 1410 a-c will be identified as comprising part of a failed atomic write. Accordingly, the data packets 1410 a-c will be excluded from (i.e., removed from or otherwise not included in) a logical or forward index 1404 that maps logical identifiers 1415 to physical locations or addresses 1423 of the data packets 1410 a-c on the storage media 1402. As indicated above, the index 1404 may be contained in or derived from the metadata 1434 stored on the non-volatile solid-state storage media 1402.
- In some embodiments, excluding from the index 1404 may comprise bypassing each data packet 1410 a-c associated with the failed atomic storage request during a scan of a log-based structure (e.g., the event log 1103 illustrated in FIGS. 11A-C or the ordered sequence of logical erase blocks 1440 a-b specified by the log 1103) used to create the index 1404. In another embodiment, excluding from the index 1404 may further comprise removing each logical identifier 1415 that maps to a data packet 1410 a-c associated with the failed atomic storage request from the index 1404 created by way of a scan of the log-based structure. In yet another embodiment, excluding from the index 1404 may further comprise erasing each data packet 1410 a-c associated with the failed atomic storage request from the storage media 1402 by way of a storage space recovery operation (which will be explained further below). Of course, one or more of the foregoing embodiments may be combined or used with other embodiments for excluding the data packets 1410 a-c from the index 1404.
- FIG. 15 comprises a diagram illustrating a restart recovery process related to a first power failure 1588 a and a second power failure 1588 b. As illustrated in FIG. 15, a first power failure 1588 a interrupts an atomic write operation such that data packets 1510 d-e and 1510 f-i associated with the failed atomic write are stored on the non-volatile solid-state storage media 1502. During a restart recovery operation, such as during a subsequent power-on operation, an ordered sequence of logical erase blocks 1540 a-c (e.g., the ordered sequence of erase blocks in the log) is formulated using metadata 1534 stored on the storage media 1502. An append point 1520 is identified at the end of the ordered sequence of logical erase blocks 1540 a-c. Thereafter, reverse sequence scanning of the ordered sequence of logical erase blocks 1540 a-b (or of the log 1103) is initiated from the append point 1520 to identify data packets 1510 d-e and 1510 f-i associated with a failed atomic request. As a consequence, data packets 1510 d-e of the first logical erase block 1540 a and data packets 1510 f-i of the second logical erase block 1540 b will be identified as being associated with a failed atomic write operation. As indicated above, this may occur, for example, by determining that the first packet found in the reverse sequence scan (i.e., data packet 1510 i) satisfies a failed atomic write criterion (e.g., includes a first persistent metadata flag in a first state 1417 a, as explained in connection with FIG. 14). Thereafter, the remaining data packets 1510 d-e and 1510 f-h of the failed atomic storage request will be identified as being associated with the failed atomic storage request because, for example, each of these packets 1510 d-e and 1510 f-h also includes the first persistent metadata flag in the first state 1417 a.
- Thereafter, a recovery grooming operation 1589 may be initiated to transfer the valid data packets 1510 a-c (but not the invalid data packets 1510 d-e) from the first logical erase block 1540 a to the third logical erase block 1540 c. More specifically, the grooming operation 1589 may involve transferring the valid packets 1510 a-c from the first logical erase block 1540 a to a third logical erase block with a newly assigned sequence number (e.g., a logical erase block immediately after the append point 1520), while the data packets 1510 d-e and 1510 f-i that are associated with a failed atomic write are not transferred to the logical erase block with the newly assigned sequence number. The recovery grooming operation 1589 may be performed as part of a storage recovery operation, in response to a storage request (e.g., a request to TRIM and/or erase data on the erase block 1540 a), or the like.
- As noted above, a sequence number 1518 a-b may be assigned to each erase block 1540 a-c. The sequence numbers 1518 a-b may be stored in logical erase
block headers 1519 a-b, as illustrated in FIG. 15, or at another location on the non-volatile solid-state storage media 1502. The sequence numbers 1518 a-b are utilized to create an ordered sequence of the logical erase blocks 1540 a-c. The ordered sequence may be identified or specified by the log 1103. The sequence numbers 1518 a-b for each logical erase block 1540 a-c, in one embodiment, are spaced at regular intervals. For example, a consecutive series of logical erase blocks 1540 a-c may be assigned the following sequence numbers: 1, 65, 129, 193, 257, 321, 385, and 449. When it is determined that a new logical erase block 1540 c needs to be utilized for the storage of data, the new logical erase block 1540 c may be assigned the next available sequence number 1518 a-b in the series of sequence numbers 1518 a-b. Accordingly, in such an embodiment, if the last sequence number assigned to a logical erase block is the sequence number 385, a newly assigned erase block 1540 c may be assigned the sequence number 449. Of course, in alternative embodiments, the spacing between the sequence numbers 1518 a-b may be at an interval other than 64 (such as 32) or at irregular or varying intervals. Also, the sequence numbers 1518 a-b may be assigned in a cyclic fashion, such that when the highest sequence number is utilized (given the number of bits of metadata 1534 allocated for the sequence numbers 1518 a-b), the lowest sequence number no longer in use may be assigned to a newly identified erase block 1540 c.
- In view of this background, as illustrated in FIG. 15, during the recovery grooming operation 1589, which is intended to transfer the valid data packets 1510 a-c from the first logical erase block 1540 a to the third logical erase block, a second power failure 1588 b may occur, resulting in a failure of the grooming operation 1589. Accordingly, a technique for identifying such a failure would be helpful to prevent use of the invalid or partially written data 1510 a-c saved in the third logical erase block 1540 c, or confusion as to whether the data in the first logical erase block 1540 a or the third logical erase block 1540 c should be utilized.
- One such technique involves assigning a subsequence number 1519 (rather than a sequence number 1518 a-b) to the logical erase
block 1540 c to which the valid data 1510 a-c will be, or is intended to be, transferred. As indicated above, in one embodiment, the sequence numbers 1518 a-b are spaced at regular intervals, such as at intervals of 64 or at intervals of 32, as illustrated in FIG. 15. For example, consecutive sequence numbers may increment the most significant bits 1590 a-b of a fixed-size sequence number by a particular increment, while leaving the least significant bits 1592 a-b unchanged. The subsequence number 1519 may be derived from a sequence number 1518 a by incorporating the most significant bits 1590 a of the sequence number 1518 a from which the subsequence number 1519 is derived and altering (such as incrementing or decrementing) the least significant bits 1592 a of the sequence number 1518 a. As illustrated in FIG. 15, the subsequence number 1519 may incorporate the most significant bits 1590 a of the first sequence number 1518 a and increment the least significant bits 1592 a of the first sequence number 1518 a, to yield the subsequence number 1519 (e.g., 1010001000001) comprising the same high-order bits 1590 c and incremented low-order bits 1592 c. By assigning the subsequence number 1519 to the third logical erase block 1540 c, the sequencing order of the erase blocks 1540 a-c is maintained, because the subsequence number 1519 is greater than the first sequence number 1518 a from which it is derived and less than the next sequence number 1518 b. Accordingly, the subsequence number 1519 maintains an ordered sequence among the logical erase blocks 1540 a-c of the log-based structure (e.g., the log 1103 illustrated in FIGS. 11A-C), such that an ordered sequence of storage operations completed on the storage media 1502 is preserved on the storage media 1502.
- It should also be noted that a subsequence number 1519 may be derived in various ways from a sequence number 1518 a. For example, a subsequence number 1519 could decrement the most significant bits 1590 a of the first sequence number 1518 a from which the subsequence number 1519 is derived and increment the least significant bits 1592 a of that sequence number 1518 a.
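- A C sketch of this derivation follows; the field width is an assumption tied to the example spacing of 64 used above, and the only requirement is that the derived number sorts between the source sequence number and the next regular sequence number.

    /* Sketch: derive a subsequence number by keeping the high-order bits
     * of the source sequence number and incrementing its low-order bits. */
    #include <stdint.h>

    #define SEQ_INTERVAL_BITS 6 /* spacing of 64 leaves the low 6 bits free */
    #define SEQ_LSB_MASK      ((UINT64_C(1) << SEQ_INTERVAL_BITS) - 1)

    static uint64_t derive_subsequence(uint64_t seq)
    {
        uint64_t lsb = (seq & SEQ_LSB_MASK) + 1;     /* increment low bits */
        return (seq & ~SEQ_LSB_MASK) | (lsb & SEQ_LSB_MASK);
    }

- For example, under the 1, 65, 129, . . . numbering above, derive_subsequence(385) yields 386, which orders after 385 and before the next regular sequence number 449, preserving the log order of the erase blocks.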
- In due course, all of the data packets 1510 a-c and 1510 d-e of the first logical erase block 1540 a, including the erase block header 1519 a, will be erased from the storage media 1502 if the grooming operation 1589 is completed successfully. However, erasure of the data packets 1510 a-c and 1510 d-e and the erase block header 1519 a of the first logical erase block 1540 a may not occur immediately, even if the grooming operation 1589 is completed successfully. Moreover, if the second power failure 1588 b occurs during grooming (e.g., while relocating the valid data 1510 a-c from the first logical erase block 1540 a to the third logical erase block 1540 c), the data packets 1510 a-c in the third logical erase block 1540 c could potentially be corrupt or incomplete.
- Accordingly, during a power-on operation following the second power failure 1588 b, a restart recovery process may be initiated. During the restart recovery process, the log will be created to formulate an ordered sequence of the logical erase blocks 1540 a-c. During this process, it may be determined that the first logical erase block 1540 a has been assigned the first sequence number 1518 a and the third logical erase block 1540 c has been assigned the subsequence number 1519 derived from the first sequence number 1518 a. As explained above, this may indicate either that the data of the first logical erase block 1540 a was not erased or that a grooming operation was interrupted. In either case, the data packets 1510 a-c of the third logical erase block 1540 c are potentially corrupted or incomplete and should not be relied on as being valid. As a result, the data packets 1510 a-c, the erase block header 1519 c, and any other data stored in the third logical erase block 1540 c should be erased or scheduled for erasure and should be excluded from the index 1504. (As indicated previously, the index 1504 maps logical identifiers 1515 to physical locations or addresses 1523 and may comprise or be based on metadata 1534 stored on the media 1502.)
- Thereafter, the
append point 1520 would be positioned immediately to the right of the invalid data packet 1510 i, as shown in FIG. 15. Reverse sequence scanning of the non-volatile storage media 1502 from the append point 1520 would be commenced and would identify data packets 1510 d-e of the first logical erase block 1540 a and data packets 1510 f-i of the second logical erase block 1540 b as comprising a portion of a failed atomic write operation resulting from the first power failure 1588 a. The valid data packets 1510 a-c of the first logical erase block 1540 a will be groomed 1589 to the third logical erase block 1540 c without transferring the invalid data packets 1510 d-e to the third logical erase block 1540 c. In one embodiment, when the valid data packets 1510 a-c are groomed 1589 to the third logical erase block 1540 c, the first persistent metadata flag for each of the valid data packets 1510 a-c is set to the second state 1317 b.
- In view of the foregoing, it should also be observed that excluding from the forward or logical index 1504 during a restart recovery may comprise erasing each logical erase block 1540 a-b of the non-volatile solid-state storage media 1502 comprising one or more data packets 1510 d-e and 1510 f-i associated with the failed atomic storage request, and transferring the valid data packets 1510 a-c from each such logical erase block 1540 a-b to a different location or logical erase block 1540 c on the storage media 1502. Also, erasing each logical erase block during restart recovery may comprise assigning a subsequence number 1519 to a destination logical erase block 1540 c configured to store the transferred data packets 1510 a-c (i.e., the valid data 1510 a-c). Further, erasing each logical erase block 1540 a-c during a restart recovery process may comprise, in response to identifying a first logical erase block 1540 a having a sequence number 1518 a and a third logical erase block 1540 c having a subsequence number 1519, grooming 1589 the first logical erase block 1540 a and, as described above, excluding each data packet 1510 d-e of the first logical erase block 1540 a associated with the failed atomic storage request from the index 1504. Again, the invalid data packets 1510 d-e of the first logical erase block 1540 a may immediately or eventually be erased from the media 1502 after the grooming operation 1589 is performed.
- The
recovery grooming operation 1589, if completed before normal input-output operations commence, in one embodiment, avoids a scenario in which data packets 1510 d-e and 1510 f-i associated with a failed atomic write operation could be considered valid, because those data packets are removed from the media 1502 by the recovery grooming operation 1589. The following example illustrates this point.
- First, a failed atomic write operation commences and is interrupted, resulting in the invalid data packets 1510 d-e and 1510 f-i being stored on the storage media 1502. Second, a power-on operation is performed and, through a scan, the event log 1103 is formulated without engaging in the recovery grooming operation 1589, such that the invalid data packets 1510 d-e and 1510 f-i are included in the event log 1103 and the forward index 1504. Third, a second atomic write operation is commenced and successfully completed. Finally, a reverse sequence scan from the append point 1520 (which is positioned after the data packets associated with the second, successful atomic write operation) is subsequently initiated to identify packets associated with a failed atomic write operation. In this scenario, the invalid packets 1510 d-e and 1510 f-i will not be identified and removed from the storage media 1502. This is because the reverse sequence scanning from the append point 1520 will encounter the packets associated with the second successful atomic write operation and determine that the second atomic write operation was successfully completed. In certain embodiments, identifying the second successful atomic write operation may result in termination of the reverse sequence scanning, and the invalid data packets 1510 d-e and 1510 f-i will not be identified as being associated with a failed atomic write operation. Accordingly, the invalid data packets 1510 d-e and 1510 f-i will not be removed, or otherwise excluded, from the forward index 1504 or from the storage media 1502.
- Although
FIGS. 8B, 13A, 14, and 15 depict embodiments for managing atomic storage operations using, inter alia, persistent metadata flags (e.g., the persistent metadata flags described above), the disclosure is not limited in this regard. FIG. 13B depicts one embodiment of persistent notes for managing an atomic storage operation. The persistent note 1327 a identifies the beginning of an atomic storage operation on the non-volatile storage medium (log) 1302. Accordingly, the packets 1311 a-n following the open persistent note 1327 a are identified as part of an atomic storage operation. A close persistent note 1327 b may be stored on the non-volatile storage medium 1302 in response to completion of the atomic storage operation. If an open persistent note 1327 a is not closed with a corresponding close persistent note 1327 b, the packets 1311 a-n may be identified as being part of an incomplete atomic storage operation and excluded, as described above.
- In some embodiments, the packets 1311 a-n may comprise respective headers, as described above (e.g., headers 1314 a-b). The headers may comprise persistent metadata indicating that the packets 1311 a-n are part of an atomic storage operation. Alternatively, persistent flags indicating membership in an atomic storage operation may be omitted, since this information may be determined based upon the open persistent note 1327 a. However, in some embodiments, a persistent flag indicating membership in the atomic storage operation may be included (e.g., a persistent metadata flag in a first state 1317 a). Other packets that are not part of the atomic storage operation may be interleaved with the packets 1311 a-n. These packets may comprise respective persistent metadata flags to indicate that the packets are not part of the atomic storage operation (e.g., persistent metadata flags in a second state 1317 b). Accordingly, when excluding packets due to a failed or incomplete atomic storage request, the interleaved packets that were not part of the atomic storage operation may be retained (not excluded, as described above).
- The embodiments disclosed herein may be configured to efficiently process vector storage requests. As disclosed herein, a vector storage request refers to a storage request pertaining to one or more vectors (I/O vectors). A vector may pertain to a group, set, and/or range of identifiers (e.g., logical identifiers, physical addresses, buffer addresses, or the like). A vector may be defined in terms of a base identifier (e.g., a starting point) and a length, range, and/or extent. Alternatively, a vector may be defined in set notation (e.g., a set of one or more identifiers or ranges of identifiers). A vector storage request may, therefore, refer to a storage request comprising a plurality of "sub-requests" or "subcommands," each of which pertains to a respective one of the vectors. For example, a vector write operation may comprise writing data to each of a plurality of vectors, each vector pertaining to a respective logical identifier range or extent. As described above in conjunction with
FIGS. 8A and 8B, the storage controller 120 may be configured to store data of vector storage requests contiguously within a log on the non-volatile storage media 140. Therefore, data packets pertaining to disjoint, non-adjacent, and/or non-contiguous vectors with respect to the logical address space 134 may be stored contiguously within the log on the non-volatile storage media 140.
- The storage management layer 130 may provide an interface through which storage clients may issue vector storage requests. In some embodiments, the vector storage request interface provided by the storage management layer 130 may include, but is not limited to, an API, library, remote procedure call, user-space API, kernel-space API, block storage interface or extension (e.g., IOCTL commands and/or extensions), or the like. A vector may be defined as a data structure, such as:
    struct iovect {
        uint64 iov_base; // Base address of memory region for input or output
        uint32 iov_len;  // Size of the memory referenced by iov_base
        uint64 dest_lid; // Destination logical identifier
    };
- A vector storage request to write data to one or more vectors may, therefore, be defined as follows:
-
    vector_write(int fileids, const struct iovect *iov, uint32 iov_cnt, uint32 flag)
- As illustrated above, a vector storage request may comprise performing the same operation on each of a plurality of vectors (e.g., implicitly perform a write operation pertaining to one or more different vectors). In some embodiments, a vector storage request may specify different I/O operations for each constituent vector. Accordingly, each iovect data structure may comprise a respective operation indicator. In some embodiments, the iovect structure may be extended as follows:
-
    struct iovect {
        uint64 iov_base; // Base address of memory region for input or output
        uint32 iov_len;  // Size of the memory referenced by iov_base
        uint32 iov_flag; // Vector operation flag
        uint64 dest_lid; // Destination logical identifier
    };
-
    vector_request(int fileids, const struct iovect *iov, uint32 iov_cnt, uint32 flag)
-
- FIG. 16A depicts exemplary interfaces 1694 a-b for vector storage requests. One or more of the interfaces 1694 a and/or 1694 b may be utilized by storage clients 118A-N to request vector storage operations via the storage management layer 130. The parameters 1696 a-d of the interfaces 1694 a-b may be arranged in any suitable order, may be provided in any suitable format, and may be adapted for use with any suitable programming language and/or interface. Moreover, the interfaces 1694 a-b may include other parameters not specifically identified in FIG. 16A. The interfaces 1694 a-b may be implemented within one or more existing interfaces (e.g., a block device interface) or may be provided as extensions to an existing application program interface and/or as part of a separate application program interface. A descriptor parameter 1696 a may comprise a reference and/or handle to a storage entity pertaining to a request. The descriptor 1696 a may comprise and/or reference a file descriptor, file identifier, file name, database entity identifier, or the like. The IO_Vector(s) parameter 1696 b may reference one or more vector storage operations. The IO_Vector(s) parameter 1696 b may comprise and/or reference a set or list of vector identifiers 1697 a. The vector identifiers 1697 a may specify memory and/or buffer addresses pertaining to a vector storage operation using, for example, a base identifier, "V_Base," which may comprise a source address, source LID, or the like, and a length, "V_Length," which may comprise a range, extent, or other length and/or size indicator. The LID_Dest parameter may specify the destination of the vector operation (e.g., write the data of V_Length from V_Base starting at LID_Dest). Accordingly, each IO_Vector 1696 b may define a vector storage request, as described above (e.g., a subcommand or sub-operation of a vector storage request).
- The IO_Count parameter 1696 c may specify the number of vector storage operations encapsulated within the IO_Vector 1696 b (e.g., the number of vector identifiers 1697 a). The flag parameter 1696 d may identify the storage operation to be performed on the IO_Vector(s) 1696 b. The flag parameter 1696 d may specify any storage operation, including, but not limited to: a write, a read, an atomic write, a trim or discard request, a delete request, a format request, a patterned write request (e.g., a request to write a specified pattern), a write zero request, an atomic write operation with verification request, an allocation request, or the like. The atomic write operation with verification request completes the atomic write operation and then verifies that the data of the request was successfully written to the storage media. As illustrated above, the flag parameter 1696 d may specify either atomic or non-atomic storage operations.
- The storage operation specified by the flag 1696 d may be implemented on each of the IO_Vector(s) 1696 b. Accordingly, the interface 1694 a may be used to minimize the number of calls needed to perform a particular set of operations. For example, an operation to store data pertaining to several contiguous or disjoint, non-adjacent, and/or non-contiguous ranges may be encapsulated into a single vector storage request through the interface 1694 a. Moreover, the use of a flag parameter 1696 d provides flexibility, such that the interface 1694 a may be utilized for various purposes, such as atomic writes, a trim or discard request, a delete request, a format request, a patterned write request, a write zero request, or an atomic write operation with verification request.
- In some embodiments, an interface 1694 b may provide for specifying a different storage operation for each IO_Vector 1696 b. The interface 1694 b may include vector identifier(s) 1697 b comprising respective flag parameters 1698 a-n. The flag parameter(s) 1698 a-n may specify a storage operation to perform on a particular IO_Vector 1696 b; the flag parameters 1698 a-n may be different for each IO_Vector 1696 b. Accordingly, the interface 1694 b may be configured to implement vector storage operations such that each sub-request and/or sub-operation of the vector storage request may involve a different type of storage operation. For example, the flag 1698 a of a first IO_Vector 1696 b may specify a TRIM operation, the flag 1698 b of a second IO_Vector 1696 b may specify a write operation, and so on. The interface 1694 b may comprise a top-level flag parameter 1696 d, which may be used to specify default and/or global storage flag parameters (e.g., specify that the vector storage request is to be performed atomically, as described above).
- In some embodiments, one or more of the operations of a vector storage request may comprise operations that do not directly correspond to storage operations on the
non-volatile storage media 140. For example, the vector storage request may comprise a request to allocate one or more logical identifiers in the logical address space 134 (e.g., expand a file), deallocate logical identifiers (e.g., TRIM or delete data), and so on. If the vector storage request is atomic, the allocation/deallocation operation(s) may not be reflected in the storage metadata 135 until the other operations of the atomic vector storage request are complete. In another example, a TRIM subcommand may comprise modifying the storage metadata 135 to indicate that data of one or more logical identifiers no longer needs to be preserved on the non-volatile storage media 140. Modifying the storage metadata 135 may comprise removing one or more entries from a forward index, invalidating one or more packets, and so on. These metadata operations may not be reflected in the storage metadata 135 until the other operations of the request are complete (e.g., index entries may not be removed until the other operations of the atomic storage request are complete). In some embodiments, the allocation, deallocation, and/or TRIM operations may be maintained in inflight metadata 175 until completion of the atomic vector storage request, as described above.
- In some embodiments, flags 1696 d and/or 1698 a-n may specify an ordering for the vector storage request. For example, the flags 1696 d and/or 1698 a-n may indicate that the operations of the vector storage request are to be completed in a particular order and/or may be completed out of order. Ordering of the vector storage requests may be enforced by the storage management layer 130 by use of the ordered queue 173, the request buffer (described below), or the like.
- As described above in conjunction with
FIGS. 8A and 8B, the storage controller 120 may be configured to store data packets pertaining to disjoint, non-adjacent, and/or non-contiguous logical identifier ranges (vectors) contiguously within a log on the non-volatile storage media 140. FIG. 16B depicts execution of an atomic vector storage request 1601, which comprises appending data packets to a log on a non-volatile storage media 140. In the FIG. 16B example, an atomic vector storage request 1601 may specify atomic write operations pertaining to a plurality of vectors, including a vector at LID 2, length 1; a vector at LID 179, length 2; a vector at LID 412, length 1; and a vector at LID 512, length 1. As illustrated in the index 1604, the vectors of the request 1601 correspond to disjoint, non-adjacent, and non-contiguous ranges with respect to the logical address space 134.
- In response to the request 1601, the storage management layer 130 may queue the sub-requests of the atomic vector storage request 1601, which may comprise a TRIM storage request, a write storage request, and/or a zero storage request. The storage requests may be queued in an ordered queue 173 and/or in a request buffer (described below). Alternatively, if the request 1601 is not an atomic operation (or is being managed using an inflight index, as described above), the ordered queue 173 may not be used.
- The
storage controller 120 may be configured to service the atomic vector storage request 1601 by executing the sub-requests of the vector storage request 1601. The log storage module 136 may be configured to append data packets 1610 a-e pertaining to the vector storage request 1601 to the log 1603 on the non-volatile storage medium 1640.
- For clarity of illustration, in the FIG. 16B example, each logical identifier corresponds to data of a respective data packet 1610 a-e (e.g., each logical identifier references the same or less data as stored in a data packet segment 712, described above). The disclosure, however, is not limited in this regard, and could be adapted to implement any fixed and/or variable mapping between logical identifiers and data segment size.
- The logical-to-
physical translation module 132 may be configured to associate the physical storage locations of the data packets 1610 a-e with respective logical identifiers in the index 1604. The index 1604 may comprise entries 1605A-D corresponding to the vectors of the request 1601. The any-to-any mappings between logical identifiers and physical storage locations may allow data of the disjoint, non-adjacent, non-contiguous vectors to be stored contiguously within the log 1603; as illustrated in FIG. 16B, the entries 1605A-D may comprise respective mappings to arbitrary physical storage locations on the non-volatile media 1640, such that the logical identifier ranges map to packets 1610 a-e that are arranged contiguously within the log 1603. The packets 1610 a-e may comprise self-describing, persistent metadata (e.g., headers) to persist the association between the logical identifier(s) and the packets 1610 a-e, such that the any-to-any mappings of the entries 1605A-D can be reconstructed.
- The contiguous log format of the packets 1610 a-e may facilitate tracking completion of the atomic vector storage request 1601. As described above, the packets 1610 a-d may comprise a persistent metadata flag in a first state indicating that the packets are part of an "incomplete" or "in process" atomic storage request. The last, final, or termination packet 1610 e written as part of the atomic vector storage request 1601 may comprise a persistent metadata flag in a second state indicating successful completion of the atomic vector storage request 1601. As disclosed above, the "last" packet 1610 e may be the final data packet pertaining to the atomic vector storage request 1601 within the log 1603. In some embodiments, the packet 1610 e may be the "termination" data packet of the atomic storage request 1601 (e.g., the final packet written to the non-volatile storage medium as part of the atomic vector storage request 1601). Accordingly, the packet 1610 e may be the "last" packet pertaining to the atomic vector storage request 1601 with respect to the log order of the packets 1610 a-e. Alternatively, or in addition, the data packet 1610 e may comprise separate persistent metadata, such as a persistent note, data packet, and/or data segment configured to indicate completion of the atomic vector storage request 1601, as described above in conjunction with FIGS. 12A and 12B.
- As described above, the contiguous layout of the packets 1610 a-e (and the corresponding flags) in the log 1603 may allow incomplete atomic storage requests to be identified and rolled back, such that data pertaining to the incomplete atomic storage requests is excluded from the storage metadata 135 (e.g., excluded from the index 1604). For example, if the persistent metadata flag in the second state 1614 e is not found on the non-volatile storage media 1640, the entries 1605A-D may be removed (or omitted) from the index 1604 and the packets 1610 a-e may be invalidated, as described above. The persistent metadata may be further leveraged to allow atomic storage operations to cross media boundaries (e.g., erase block boundaries), to allow TRIM and/or grooming operations, and so on, as described herein.
- FIG. 16C depicts another embodiment of an atomic vector storage request 1602. The atomic vector storage request 1602 of FIG. 16C may comprise a plurality of vectors, each comprising a respective operation flag. The atomic vector storage request 1602 may comprise a vector comprising an atomic TRIM operation at LID 2, length 1; an atomic write to LID 179, length 2; an atomic ZERO fill to LID 412, length 1; and an atomic TRIM at LID 512, length 1. In response to the request 1602, the storage controller 120 may queue the individual storage requests of the atomic vector storage request 1602 in an ordered queue 173 (or request buffer), and may append data packets pertaining to the atomic vector storage request 1602 onto the log 1603, as described above. Performing an atomic TRIM operation may comprise modifying the storage metadata 135, which may comprise removing the entry 1605 from the index 1604, invalidating one or more packets comprising data associated with the entry 1605, and so on. The modifications to the storage metadata 135 may be deferred until after the other atomic operations of the request 1602 are complete. Performing the atomic TRIM may further comprise appending a persistent note 1611 a to the log 1603. The persistent note 1611 a may indicate that the data of LID 2 does not need to be preserved on the non-volatile storage medium 1640. Therefore, if the index 1604 is reconstructed from the contents of the non-volatile storage media 1620, the persistent note 1611 a may be used to invalidate the data of LID 2 (e.g., exclude the entry 1605A from the index 1604) and/or invalidate one or more packets comprising the data. For example, while reconstructing the storage metadata 135 (e.g., the index 1604), a packet 1630 corresponding to LID 2 may be identified and, in response, the entry 1605A may be added to the index 1604. In the absence of the persistent note 1611 a, the entry 1605A would remain in the index 1604 (and the packet 1630 would remain on the medium 1620), negating the TRIM operation. The persistent note 1611 a on the non-volatile storage medium 1620, however, indicates that LID 2 was TRIMed and, as such, the entry 1605A may be removed from the index 1604 and the packet 1630 may be invalidated.
- The
persistent note 1611 a (and other persistent notes and/or data of the atomic vector storage request 1602) may comprise and/or reference persistent metadata flags which, as described above, may indicate that the persistent note (and/or data) is part of an atomic storage operation. If a corresponding persistent metadata flag in a state indicative of completing the atomic storage operation is not found (e.g., the persistent flag 1614 e does not exist on the medium 1620), the TRIM operation of the persistent note 1611 a (as well as the other operations) may be rolled back or excluded. Accordingly, in the absence of the persistent metadata flag 1614 e in the appropriate state (or another condition indicating closure of the atomic storage operation), the entry 1605 may not be removed from the index 1604, and the data packet 1630 may not be invalidated (e.g., the TRIM operation will be rolled back).
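- The interplay between persistent TRIM notes and atomic completion can be sketched in C as follows. The log-entry model and names are assumptions for illustration; a TRIM note clears the mapping for its logical identifier, but only if the atomic operation containing the note was completed.

    /* Sketch (assumed log-entry model): apply persistent TRIM notes while
     * rebuilding the index, gated on completion of their atomic request. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    enum { ENTRY_DATA, ENTRY_TRIM_NOTE };

    typedef struct {
        int      kind;
        uint64_t lid;
        unsigned atomic_id;  /* 0 = not part of an atomic request */
    } log_entry_t;

    #define MAX_LIDS 1024    /* assumed size of the logical address range */

    void rebuild_index(const log_entry_t *log, size_t n,
                       const bool op_complete[], long index[MAX_LIDS])
    {
        for (size_t i = 0; i < n; i++) {
            /* Roll back entries of atomic requests that never completed. */
            if (log[i].atomic_id != 0 && !op_complete[log[i].atomic_id])
                continue;
            if (log[i].kind == ENTRY_DATA)
                index[log[i].lid] = (long)i; /* map LID to its newest packet */
            else
                index[log[i].lid] = -1;      /* TRIM note invalidates the LID */
        }
    }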
- The other storage operations of the atomic vector storage request 1602 may proceed as described above. The "ZERO" operation may comprise associating LID 412 with a particular data pattern (e.g., zeros) by storing the data pattern in one or more packets on the log 1603 and/or storing an indicator of the pattern (e.g., a persistent note), as described above. Completion of the composite, atomic storage request 1602 may comprise storing a packet (or other persistent data) comprising a persistent metadata flag indicating completion of the request 1602, as described above.
- FIG. 17A is a block diagram of another embodiment of a storage controller 1720. The storage controller 1720 may comprise a logical-to-physical translation module 132, logical address space 134, storage metadata 135, log storage module 136, groomer 138, and restart recovery module 139, as described above. The storage management layer 1730 may further comprise a request buffer 1780 configured to buffer requests directed to the storage controller 1720 from the storage clients 118A-N. In some embodiments, the request buffer 1780 may comprise an ordered queue 173, as described above. The request buffer 1780 may be configured to buffer and/or cache storage requests, vector storage requests, atomic storage requests, atomic vector storage requests, and so on. The request buffer 1780 may be configured to buffer storage requests for execution in the order in which the requests were received (e.g., using a first-in-first-out buffer or the like). Alternatively, the request buffer 1780 may comprise a plurality of different request buffers and/or queues that may, or may not, be ordered.
- The
storage management layer 130 may be configured to modify a storage request within the request buffer 1780 in response to one or more other storage requests by use of a request consolidation module 1782. The consolidation module 1782 may be configured to selectively modify storage requests in response to other pending storage requests (e.g., other storage requests in the request buffer 1780). In some embodiments, modifying a storage request comprises consolidating and/or combining two or more storage requests, removing or deleting one or more storage requests, modifying the range, extent, and/or set of logical identifiers pertaining to a storage request, or the like. Modifying a vector storage request may comprise modifying one or more vectors provided in the vector storage request in response to other pending storage requests within the request buffer 1780 and/or in response to other vectors within the vector storage request itself. The storage request consolidation module 1782 may improve efficiency by consolidating and/or removing certain storage requests. For example, certain storage clients 118A-N, such as file system storage clients 118B, may make heavy use of certain types of storage requests (e.g., TRIM storage requests). The storage requests may pertain to adjacent and/or overlapping logical identifier ranges in the logical address space 134. Accordingly, one or more storage requests (and/or portions thereof) may be overridden, subsumed, made obsolete, and/or made redundant by other pending storage requests within the same logical address range or namespace (e.g., other pending storage requests within the request buffer 1780). The request consolidation module 1782 may modify the storage requests in the request buffer 1780 (e.g., join, combine, and/or remove buffered storage requests), to thereby reduce the overall number of storage requests processed by the storage controller 120, which may improve performance and reduce wear on the non-volatile storage media 140. In some embodiments, modifying a storage request comprises acknowledging completion of the storage request without actually performing and/or implementing the storage request (e.g., acknowledging a TRIM storage request made redundant by one or more other pending storage requests without actually implementing the redundant TRIM request).
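- One plausible consolidation strategy for buffered TRIM requests is sketched below in C; the data layout and merging policy are assumptions, as the disclosure does not prescribe a particular algorithm. Overlapping or adjacent LID ranges are merged so that a single consolidated request can be serviced in place of several, and the subsumed requests can be acknowledged without being executed individually.

    /* Sketch: merge buffered TRIM requests over overlapping/adjacent
     * LID ranges (assumed representation; illustrative only). */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint64_t lid; /* first logical identifier of the range */
        uint64_t len; /* number of logical identifiers */
    } trim_req_t;

    static int by_lid(const void *a, const void *b)
    {
        const trim_req_t *x = a, *y = b;
        return (x->lid > y->lid) - (x->lid < y->lid);
    }

    /* Merge in place; returns the consolidated request count. */
    size_t consolidate_trims(trim_req_t *reqs, size_t n)
    {
        if (n < 2)
            return n;
        qsort(reqs, n, sizeof *reqs, by_lid);

        size_t out = 0;
        for (size_t i = 1; i < n; i++) {
            uint64_t end = reqs[out].lid + reqs[out].len;
            if (reqs[i].lid <= end) {                 /* overlap or adjacency */
                uint64_t iend = reqs[i].lid + reqs[i].len;
                if (iend > end)
                    reqs[out].len = iend - reqs[out].lid; /* extend range */
            } else {
                reqs[++out] = reqs[i];                /* disjoint: keep */
            }
        }
        return out + 1;
    }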
The storage management layer 1730 may be configured to selectively buffer and/or modify storage requests. In some embodiments, the storage management layer 1730 may be configured to receive storage requests from different storage clients 118A-N (within the same host and/or on other hosts). The storage management layer 1730 may be configured to buffer and/or modify the storage requests of select storage client(s) 118A-N (to the extent that the storage client(s) 118A-N are configured to operate using the same logical identifier namespace, and/or the like). Storage requests of other, unselected storage clients (e.g., the filesystem storage client 118B) may not be buffered in the request buffer 1780 and/or modified by the request consolidation module 1782. In some embodiments, the storage management layer 1730 may be configured to selectively buffer storage requests of a particular type. For example, the request buffer 1780 may be configured to only buffer TRIM storage requests. Alternatively, or in addition, the request buffer 1780 may comprise a plurality of separate request buffers 1780 for different storage client(s) 118A-N and/or different types of storage requests. For example, the request buffer 1780 may be configured to buffer sub-requests or subcommands of vector storage requests and/or atomic vector storage requests. The request consolidation module 1782 may be configured to consolidate the sub-requests and/or subcommands as described herein.
In some embodiments, the request consolidation module 1782 may be configured to modify a vector storage request and/or one or more vectors of a vector storage request (e.g., one or more sub-requests and/or subcommands of the vector storage request). The request consolidation module 1782 may be configured to identify and/or analyze the respective vectors of the vector storage request by use of the vector storage module 1770 and/or atomic storage module 1772. The storage requests corresponding to the vector storage request may be buffered in the request buffer 1780 along with, or separately from, other, similar non-vector storage requests and/or storage requests of other vector storage requests. Buffering a vector storage request may, therefore, comprise generating sub-requests and/or subcommands (separate storage requests) corresponding to each of the vectors of the vector storage request. For example, a vector storage request to TRIM data in vectors 1 . . . N may correspond to N separate storage requests, wherein each of the N storage requests is configured to TRIM a range of logical identifiers specified in a respective one of the 1 . . . N vectors, as illustrated in the sketch below. The constituent storage requests of atomic vector storage requests may be similarly buffered in the request buffer 1780. The storage requests of an atomic vector storage request may be buffered in an ordered queue 173 (and/or ordered buffer), as described above.
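Continuing the hypothetical sketch above, decomposing a vector storage request into buffered sub-requests may look as follows (illustrative only; buffer_vector_request is not an interface of the disclosure):

```python
from typing import List, Tuple

def buffer_vector_request(buf: RequestBuffer, op: str,
                          vectors: List[Tuple[int, int]]) -> None:
    """Buffer one sub-request per vector: a vector TRIM over vectors
    1 . . . N becomes N separate TRIM sub-requests, each covering the
    logical identifier range of a respective vector."""
    for lid_range in vectors:
        buf.push(StorageRequest(op=op, vectors=[lid_range]))

# Example: one vector TRIM request over three disjoint LID ranges.
buf = RequestBuffer()
buffer_vector_request(buf, "TRIM", [(0, 255), (512, 767), (1024, 1024)])
assert len(buf) == 3
```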
The request consolidation module 1782 may be configured to modify one or more storage requests in the request buffer 1780 based on one or more other storage requests within the request buffer 1780. The storage requests may comprise storage requests of vector storage requests and/or non-vector storage requests. Modifying a storage request may comprise combining and/or coalescing two or more of the storage requests. For example, individual storage requests pertaining to overlapping and/or contiguous sets of logical identifiers in the logical address space 134 may be combined into a single storage request, which may include and/or combine the overlapping ranges. FIG. 17B depicts one embodiment of a request buffer 1780. The request buffer 1780 may be ordered, such that storage requests are executed and/or serviced by the request execution module 1784 (described below) in the order in which the storage requests were received (e.g., in a first-in-first-out (FIFO) configuration in which storage requests are pushed into the request buffer 1780 at the incoming end 1783 of the request buffer 1780 and are popped for execution at the outgoing end 1785 of the request buffer 1780).
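The coalescing of overlapping and/or contiguous ranges may be sketched as a simple interval merge (again a minimal, non-limiting sketch; coalesce_ranges is a hypothetical name):

```python
from typing import List, Tuple

def coalesce_ranges(ranges: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Merge overlapping or logically adjacent inclusive LID ranges;
    e.g., (2, 4) and (5, 6) are adjacent and combine into (2, 6)."""
    merged: List[Tuple[int, int]] = []
    for first, last in sorted(ranges):
        if merged and first <= merged[-1][1] + 1:  # overlap or adjacency
            merged[-1] = (merged[-1][0], max(merged[-1][1], last))
        else:
            merged.append((first, last))
    return merged
```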
Storage requests may be added to the request buffer 1780 as they are received at the storage controller 1720. Adding a vector storage request to the request buffer 1780 may comprise adding storage requests corresponding to each of a plurality of vectors of the vector storage request to the request buffer 1780. The storage controller 1720 may be configured to execute and/or service the storage requests, as described herein, which may comprise appending one or more data packets to a log on the non-volatile storage media 140, modifying the storage metadata 135, and so on. In some embodiments, the storage controller 1720 comprises a request execution module 1784 configured to service and/or execute storage requests in the request buffer 1780. The request execution module 1784 may be configured to execute buffered storage requests in a particular order (e.g., in the order in which the storage requests were received); for example, the request execution module 1784 may be configured to pop buffered storage requests from an end of an ordered queue 173, FIFO, or the like. Alternatively, or in addition, the request execution module 1784 may be configured to service and/or execute storage requests out of order. Alternatively, or in addition, the request execution module 1784 may be configured to change the order of storage requests within the request buffer 1780 based on criteria that optimize use of the storage media 140 and preserve the integrity of the storage operations. Executing or servicing a storage request may comprise performing one or more storage operations specified by the storage request, which, as described herein, may comprise appending one or more data packets to a log on the non-volatile storage medium 140 (by use of the log storage module 136), reading portions of the non-volatile storage medium 140, transferring data pertaining to a storage request, updating the storage metadata 135, and so on. The request execution module 1784 may be further configured to execute and/or service atomic storage requests by use of the atomic storage module 1772, which may comprise storing persistent metadata on the non-volatile storage medium to track completion of the atomic storage request(s), as described herein.
In some embodiments, the request execution module 1784 is configured to execute storage requests according to a particular interval and/or schedule. The scheduling may be adaptive according to operating conditions of the storage controller 120 and/or in response to trigger conditions, such as filling the request buffer 1780 (and/or ordered queue 173), buffering a threshold number of storage requests, and so on.
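One possible trigger condition, flushing once a threshold number of requests has accumulated, may be sketched as follows (an assumption for illustration, building on the hypothetical RequestBuffer above; the adaptive interval and schedule variants would drive flush() from a timer instead):

```python
from typing import Callable

class ThresholdedBuffer(RequestBuffer):
    """Request buffer that flushes once a threshold number of
    buffered requests has accumulated."""
    def __init__(self, execute: Callable[[StorageRequest], None],
                 threshold: int = 64) -> None:
        super().__init__()
        self._execute = execute      # callable that services one request
        self._threshold = threshold

    def push(self, request: StorageRequest) -> None:
        super().push(request)
        if len(self) >= self._threshold:
            self.flush()

    def flush(self) -> None:
        while len(self):
            self._execute(self.pop())
```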
As disclosed above, the request consolidation module 1782 may be configured to modify one or more of the storage requests within the request buffer 1780. The request consolidation module 1782 may be configured to modify the storage requests in response to other pending storage requests within the request buffer 1780, which may comprise combining and/or joining two or more storage requests into a single storage request that operates on a logical union of the overlapping and/or adjacent set(s) of logical identifiers. In the FIG. 17B example, the request buffer 1780 comprises TRIM storage requests pertaining to logical identifiers 2 . . . 6. The request consolidation module 1782 may be configured to aggregate the TRIM storage requests in the request buffer 1780 to form a single, combined TRIM storage request 1786. The storage request to TRIM logical identifier 1023 is not adjacent to and does not overlap the logical identifiers 1 . . . 6 and, as such, may remain as a separate storage request. Coalescing the TRIM storage requests as described herein may reduce wear on the non-volatile storage media 140. For example, if the TRIM storage requests are persistent (e.g., comprise storing a persistent note on the non-volatile storage media 140), forming the aggregate TRIM storage request 1786 may reduce the total number of persistent notes stored on the non-volatile storage medium 140. In some embodiments, a persistent TRIM note may be configured to TRIM one or more disjoint, non-adjacent, and/or non-contiguous logical identifier ranges or vectors. Accordingly, the storage request consolidation module 1782 may be configured to join the TRIM storage request pertaining to logical identifier 1023 into a vector TRIM storage request (e.g., a request to TRIM logical identifiers 1 . . . 6 and 1023, not shown in FIG. 17B).
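Using the coalesce_ranges sketch above, and assuming purely for illustration that the buffered TRIM requests of the FIG. 17B example cover the ranges (2, 3), (3, 5), and (4, 6) (the constituent requests are not enumerated here):

```python
# The TRIM requests for LIDs 2 . . . 6 coalesce into a single request,
# while LID 1023 is neither adjacent nor overlapping and stays separate.
print(coalesce_ranges([(2, 3), (3, 5), (4, 6), (1023, 1023)]))
# -> [(2, 6), (1023, 1023)]
```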
The request consolidation module 1782 may be configured to modify storage requests in the request buffer 1780 such that the modifications do not affect other pending storage requests. As illustrated in FIG. 17C, the request buffer 1780 may comprise a storage request to read data of logical identifier 7. The request consolidation module 1782 may be configured to schedule the read storage request before the combined storage request to TRIM logical identifiers 2 . . . 7 such that the read storage request can be completed; scheduling the read storage request after the combined TRIM storage request would result in losing access to the data of logical identifier 7.
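The ordering constraint may be expressed as a simple overlap test between a read and a TRIM (hypothetical helper names, building on the StorageRequest sketch above):

```python
from typing import Tuple

def overlaps(a: Tuple[int, int], b: Tuple[int, int]) -> bool:
    """True when two inclusive (first, last) LID ranges share any LID."""
    return a[0] <= b[1] and b[0] <= a[1]

def must_precede(read_req: StorageRequest, trim_req: StorageRequest) -> bool:
    """A buffered READ must be serviced before a TRIM that would remove
    access to any LID the READ touches (cf. the FIG. 17C example)."""
    return any(overlaps(r, t)
               for r in read_req.vectors
               for t in trim_req.vectors)

# The READ of LID 7 must precede the combined TRIM of LIDs 2 . . . 7.
assert must_precede(StorageRequest("READ", [(7, 7)]),
                    StorageRequest("TRIM", [(2, 7)]))
```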
The request consolidation module 1782 may be further configured to remove and/or delete one or more storage requests from the request buffer 1780. A storage request may be removed and/or deleted from the request buffer 1780 in response to determining that the storage request(s) would be obviated by one or more other pending storage requests in the request buffer 1780. As illustrated in FIG. 17D, the request buffer 1780 comprises a plurality of storage requests to TRIM and write to various logical identifiers in the logical address space 134. The request consolidation module 1782 may determine that one or more of the TRIM and/or write storage requests are obviated by other pending storage requests in the request buffer 1780; the write request to logical identifiers 2 . . . 10 overlaps several of the TRIM storage requests and the write request to logical identifiers 3 . . . 5. The request consolidation module 1782 may be configured to remove and/or delete the storage requests that are obviated by the write storage request. Storage requests that are not obviated by the write storage request may be retained and/or modified (e.g., the storage request to TRIM logical identifiers 1 . . . 5 may be modified to TRIM only logical identifier 1, which is not obviated by the write storage request). As described above, the request consolidation module 1782 may configure the modification such that other pending storage requests are not affected. For example, the write operation to logical identifiers 3 . . . 5 may not be deleted if there is a storage request to read data of one or more of the logical identifiers 3 . . . 5 before the write to 2 . . . 10 in the request buffer 1780. Removing a storage request may further comprise acknowledging completion of the storage request. The storage request may be acknowledged even if the storage request is not actually implemented (e.g., is obviated by another storage request in the request buffer 1780).
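The obviation logic may be sketched as range subtraction (subtract_range is a hypothetical helper; it reuses the overlaps test above):

```python
from typing import List, Tuple

def subtract_range(victim: Tuple[int, int],
                   writer: Tuple[int, int]) -> List[Tuple[int, int]]:
    """Return what remains of a buffered TRIM range after removing the
    portion obviated by a later WRITE range (0, 1, or 2 residual
    ranges). A fully covered TRIM yields no remainder: it can be
    removed from the buffer and acknowledged without ever touching
    the storage media."""
    if not overlaps(victim, writer):
        return [victim]
    first, last = victim
    w_first, w_last = writer
    remainder = []
    if first < w_first:
        remainder.append((first, w_first - 1))
    if last > w_last:
        remainder.append((w_last + 1, last))
    return remainder

# FIG. 17D example: a WRITE to LIDs 2 . . . 10 obviates all but LID 1
# of a buffered TRIM of LIDs 1 . . . 5.
assert subtract_range((1, 5), (2, 10)) == [(1, 1)]
```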
As described above, the request buffer 1780 may be configured to buffer storage requests received from one or more storage clients 118A-N, including vector storage requests and/or atomic vector storage requests. The request consolidation module 1782 may be configured to modify an atomic vector storage request (and/or the constituent storage requests thereof) in response to other pending storage requests in the request buffer 1780 (and/or within the atomic vector storage request itself). In some embodiments, however, the request consolidation module 1782 may only modify storage requests within respective atomic vector storage operations, without regard to other non-atomic storage requests in the request buffer 1780. For example, the request consolidation module 1782 may consolidate adjacent and/or overlapping write and/or TRIM requests within an atomic vector storage request, as described above. However, the request consolidation module 1782 may not modify the sub-requests of the atomic vector storage request in response to other storage requests in the request buffer 1780 that are not part of the atomic vector storage request.
FIG. 18 is a flowchart of one embodiment of a method 1800 for servicing an atomic storage request 1101. The method 1800 may start and be initialized, which may include, but is not limited to, loading one or more machine-readable instructions from a non-transitory, machine-readable storage medium, accessing and/or initializing resources, such as a non-volatile storage device, communication interfaces, and so on.
As the method begins, an atomic storage request 1101 is received 1810, for example, at the storage management layer 1730. The atomic storage request 1101 may be received 1810 through an interface of the storage management layer 130, such as one or more of the interfaces 1694 a-b. The atomic storage request 1101 may involve a single atomic storage operation or a plurality of vector storage operations. The storage request 1101 may pertain to disjoint, non-adjacent, and/or non-contiguous ranges and/or sets of logical identifiers in the logical address space 134.
Step 1820 may comprise storing and/or appending data pertaining to the atomic storage request contiguously to a log on the non-volatile storage media 140. In some embodiments, the data may be appended in a packet format, such as the packet format 710 described above in conjunction with FIG. 7. Step 1820 may further comprise storing the data with persistent metadata (e.g., persistent metadata flags 717) to track completion of the atomic storage request, as illustrated, for example, in FIGS. 13A and 16B-C. The persistent metadata may comprise persistent metadata flags configured to identify data that is part of an incomplete atomic storage operation. The persistent metadata may comprise persistent metadata flags 717 of one or more data packets. The persistent metadata may further comprise one or more persistent indicators that the atomic storage request is complete. In some embodiments, a completion indicator may comprise storing a persistent metadata flag 717 in a last data packet stored as part of the atomic vector storage request (e.g., the final data packet within the log), wherein the persistent metadata flag 717 is configured to indicate completion of the atomic storage request. In some embodiments, the atomic storage request 1101 may involve a plurality of storage operations, each of which may encompass storage operations in a plurality of different logical erase blocks 1340 a-b. The log storage module 136 may be configured to store persistent metadata (such as a header 1314 a) and associated user data 1312 within a data packet 1310 a-d (or other persistent note) in one or more write operations performed on the storage media 1302.
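The tracking scheme may be sketched as follows; the flag values are hypothetical stand-ins for the format-specific persistent metadata flags 717 and are illustrative only:

```python
from typing import List, Tuple

FLAG_ATOMIC_IN_PROCESS = 0x1  # packet belongs to an atomic request
FLAG_ATOMIC_COMPLETE   = 0x2  # final packet: the atomic request completed

def append_atomic(log: List[dict], packets: List[Tuple[int, bytes]]) -> None:
    """Append the packets of an atomic request contiguously to the log,
    marking every packet in-process and only the last packet complete."""
    for i, (lid, data) in enumerate(packets):
        flags = FLAG_ATOMIC_IN_PROCESS
        if i == len(packets) - 1:
            flags |= FLAG_ATOMIC_COMPLETE
        log.append({"lid": lid, "data": data, "flags": flags})
```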
Step 1830 may comprise acknowledging completion of the atomic storage request to a storage client 118A-N or the like. The atomic storage module 172 may be configured to send the acknowledgment asynchronously via a callback or other mechanism. Alternatively, the atomic storage request 1101 may be synchronous, and the atomic storage module 172 may transmit the acknowledgment by a return from a synchronous function or method call.
In some embodiments, acknowledgment is provided as soon as it can be assured that the data of the atomic storage request 1101 will be persisted to the non-volatile storage media 140, but before the data is actually stored thereon. For example, the atomic storage module 172 may send the acknowledgment upon transferring data of the atomic storage request 1101 into a buffer of the non-volatile storage device 1302 or into a write data pipeline, transferring the data to a storage controller 120 (e.g., within a protection domain of a storage controller 120), or the like. Alternatively, the acknowledgment 1830 may be performed after the data of the atomic storage request 1101 has been persisted on the non-volatile storage media 140.
FIG. 19 illustrates a method 1900 for restart recovery to reconstruct storage metadata 135 (e.g., the forward index 204). As shown in FIG. 19, the storage controller 120 may be configured to access an append point on the non-volatile storage media 140. The non-volatile storage media 1502 may comprise a plurality of data packets 1510 a-c, 1510 d-e, 1510 f-i in a log format; the data packets 1510 a-c, 1510 d-e, 1510 f-i may be appended to the log from the append point 1520 and/or may be associated with respective sequence indicators, as described above. The data packets 1510 a-c, 1510 d-e, 1510 f-i may be associated with different logical identifiers 1515 of the logical address space 134; the logical identifiers may be independent of physical storage locations 1523 on the non-volatile storage media 1502.
The restart recovery module 139 may be configured to identify 1920 data packets of incomplete atomic storage requests in response to a data packet 1510 i preceding the append point 1520 comprising a persistent indicator that satisfies an incomplete atomic write criterion. For example, the persistent indicator may satisfy the incomplete atomic write criterion if the preceding data packet comprises the first persistent metadata flag in the first state 1417 a (e.g., a state indicating that the packet is part of an incomplete or in-process atomic storage request).
The restart recovery module may be further configured to identify 1930 one or more data packets 1510 d-e, 1510 f-i associated with the incomplete atomic storage request by, for example, identifying data packets including the first persistent metadata flag in the first state 1417 a. The one or more data packets 1510 d-e, 1510 f-i associated with the incomplete atomic storage request may be positioned sequentially within the log-based structure 1103. One example of an incomplete atomic storage request involving sequentially positioned packets is illustrated in FIG. 15; the data packets 1510 d-e, 1510 f-i of FIG. 15 are associated with the incomplete atomic storage request and are positioned sequentially in the log-based structure 1103. It should be noted that identifying 1920 the incomplete atomic storage request and identifying 1930 one or more packets associated with the incomplete atomic storage request may be performed consecutively or concurrently.
Step 1940 comprises excluding the data packets 1510 d-e, 1510 f-i associated with the incomplete atomic storage request from an index, such as a forward index 1504 or a reverse index 1022. The restart recovery module 139 may exclude 1940 by bypassing each data packet 1510 d-e, 1510 f-i associated with the incomplete atomic storage request during a scan of the log-based structure 1103 used to create the index 1504. In addition, the exclusion module 1745 may exclude 1940 by removing each logical identifier 1515 that maps to a data packet 1510 d-e, 1510 f-i associated with the incomplete atomic storage request from the index 1504 created by way of a scan of the log-based structure 1103.
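A minimal sketch of this exclusion during index reconstruction, assuming the hypothetical packet flags introduced above (a real implementation would also honor sequence indicators and physical TRIM notes):

```python
from typing import Dict, List

def rebuild_forward_index(log: List[dict]) -> Dict[int, int]:
    """Rebuild a forward index (LID -> log offset) from the packet log,
    excluding the tail of an incomplete atomic request: if the packet
    preceding the append point is marked in-process without a completion
    flag, every contiguous trailing packet in that state belongs to the
    failed request and is bypassed by the scan."""
    def incomplete(pkt: dict) -> bool:
        return bool(pkt["flags"] & FLAG_ATOMIC_IN_PROCESS) and \
            not (pkt["flags"] & FLAG_ATOMIC_COMPLETE)

    end = len(log)
    while end and incomplete(log[end - 1]):
        end -= 1
    index: Dict[int, int] = {}
    for offset in range(end):               # scan in log order
        index[log[offset]["lid"]] = offset  # later packets supersede
    return index
```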
Step 1940 may further comprise grooming (e.g., erasing) the data packets 1510 d-e, 1510 f-i associated with the incomplete atomic storage request by way of a storage space recovery operation. The groomer module 138 may be further configured to exclude 1940 by erasing each logical erase block 1540 a-b of the solid-state storage media comprising one or more data packets 1510 d-e, 1510 f-i associated with the incomplete atomic storage request and transferring the data packets 1510 a-c from each logical erase block 1540 a to a different location 1540 c on the non-volatile storage media 1502, as illustrated, for example, in FIG. 15. The groomer module 138 may also assign a subsequence number 1519 to the destination logical erase block 1540 c configured to store the preserved data packets 1510 a-c, as is also illustrated in FIG. 15. During a power-on operation of the storage device, the groomer module 138 may identify a first logical erase block 1540 a having a sequence number 1518 a and another logical erase block 1540 c having a subsequence number 1519 derived from the sequence number 1518 a, groom the first logical erase block 1540 a, as illustrated in FIG. 15, and exclude each data packet 1510 d-e, 1510 f-i associated with the failed atomic storage request from the index 1504. Excluding may further comprise storing a physical TRIM note identifying the data packet(s) of the incomplete atomic storage request.
Step 1950 may comprise resuming input-output operations after restart recovery is complete. Performing the exclusion 1940 before commencing 1950 normal input-output operations, in one embodiment, simplifies the restart recovery process by preventing normal input-output operations from interfering with the restart recovery process and/or propagating errors in data stored on the media 1502.
As disclosed above, a vector storage request may comprise a request to perform one or more operations on one or more vectors, which may pertain to respective sets and/or ranges within a logical address space 134. A portion of one or more of the vectors may overlap (and/or may be logically adjacent), and/or one or more operations may negate (e.g., overlay) one or more other operations. For example, a vector storage request may comprise a request to perform a TRIM operation on two vectors. The vectors may pertain to overlapping and/or adjacent sets of logical identifiers (e.g., the operations may TRIM logical identifiers 256-1024 and 759-1052, respectively). The request consolidation module 1782 may identify the overlapping TRIM operations within the vector storage request and, in response, may modify the vector storage request. Modifying the vector storage request may comprise modifying one or more of the vectors of the vector storage request (e.g., combining the TRIM requests into a single request to TRIM logical identifiers 256-1052). In another example, a vector storage request may comprise requests to TRIM the same set of logical identifiers; the request consolidation module 1782 may be configured to remove one or more of the overlapping vectors of the vector storage request. For example, a vector storage request comprising multiple requests to TRIM logical identifiers 0-256 may be combined into a single TRIM request comprising the vector 0-256. The request consolidation module 1782 may be configured to consolidate or join logically adjacent requests and/or vectors. For example, a vector storage request may comprise requests to TRIM logical identifiers 0-256 and 257-512; the request consolidation module 1782 may be configured to consolidate these two separate vectors into a single vector 0-512.
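Applied to these ranges, the coalesce_ranges sketch above yields:

```python
# Overlapping TRIM vectors combine: 256-1024 and 759-1052 -> 256-1052.
assert coalesce_ranges([(256, 1024), (759, 1052)]) == [(256, 1052)]
# Logically adjacent vectors join: 0-256 and 257-512 -> 0-512.
assert coalesce_ranges([(0, 256), (257, 512)]) == [(0, 512)]
```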
The request consolidation module 1782 may be further configured to consolidate atomic vector storage requests (e.g., requests received via the interface 1694 b described above). For example, an atomic vector storage request may comprise a vector configured to TRIM a particular range of logical identifiers followed by a vector configured to write to the same vector (or a portion of the same vector). The request consolidation module 1782 may be configured to detect that the vector pertaining to the TRIM operation is obviated by the vector pertaining to the write operation and, in response, may omit the storage request(s) of the TRIM vector (and/or omit the portion of the TRIM operation that is obviated by the write).
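In terms of the subtract_range sketch above, a TRIM vector wholly covered by a later write vector within the same atomic request leaves nothing to perform, while a partial overlap leaves only the uncovered portion:

```python
# TRIM of LIDs 100 . . . 199 followed by a WRITE to the same range:
assert subtract_range((100, 199), (100, 199)) == []
# A WRITE covering only part of the range leaves the rest of the TRIM:
assert subtract_range((100, 199), (150, 199)) == [(100, 149)]
```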
The request consolidation module 1782 may be configured to modify storage requests by examining the vectors within respective vector storage requests, comparing vectors of different vector storage requests, examining storage requests in a storage request buffer 1780, identifying I/O vectors for consolidation, modifying the buffered storage requests, and so on, as described above.
FIG. 20 is a flow diagram of one embodiment of a method 2000 for managing storage operations. The method 2000 may start and initialize, as described above.
Step 2020 may comprise buffering one or more storage requests. As described above, buffering storage requests may comprise adding the storage requests to a buffer (the request buffer 1780), queuing the storage requests (e.g., adding storage requests to an ordered queue 173), holding storage requests, delaying storage requests, and/or the like. Step 2020 may comprise buffering storage requests, buffering vector storage requests, buffering atomic vector storage requests, and so on. Buffering a vector storage request and/or atomic vector storage request may comprise extracting one or more vector(s) from the storage request and/or generating storage requests corresponding to each of the vectors within the vector storage request (e.g., buffering a storage request for each vector within the vector storage request). Step 2020 may comprise retaining an order of the storage requests within the buffer, queue, or other data structure. Accordingly, the buffering of step 2020 may be configured to maintain the storage requests in the same (or equivalent) order as the storage requests were received. For example, in some embodiments, the request buffer 1780 comprises an ordered queue 173, such as a first-in-first-out (FIFO) queue or the like. Storage requests may flow through the ordered queue 173 (e.g., by first-in-first-out processing), as disclosed above.
Step 2030 may comprise modifying one or more of the storage requests, vector storage requests, and/or vectors. The modification of step 2030 may comprise removing, joining, combining, and/or modifying one or more storage requests, vector storage requests, and/or vectors, as described above. Step 2030 may comprise identifying storage requests and/or vectors that pertain to overlapping and/or adjacent ranges of logical identifiers within the logical address space 134. Accordingly, step 2030 may comprise comparing pending storage requests and/or vectors of pending vector storage requests (atomic and/or otherwise) to other pending storage requests and/or vectors within the request buffer 1780. Step 2030 may further comprise identifying storage requests and/or vectors that can be combined, modified, and/or removed. As disclosed above, storage requests that pertain to overlapping ranges of logical identifiers may be combined, which may comprise modifying a storage request to reference a vector and/or modifying the set, range, extent, and/or logical identifiers of one or more vectors. Step 2030 may further comprise identifying storage requests and/or vectors that are made redundant by one or more other pending storage requests and/or vectors, as disclosed above.
In some embodiments, the modification of step 2030 may operate within the vectors of a particular vector storage request. Accordingly, the buffering of step 2020 may be omitted, and step 2030 may operate within an individual vector storage request (and/or an individual atomic vector storage request). Alternatively, or in addition, the request consolidation module 1782 may treat some storage requests separately. For example, atomic vector storage requests may be buffered and/or consolidated separately from other storage requests. In other embodiments, steps 2020 and/or 2030 may comprise buffering and/or modifying storage requests of a particular storage client 118A-N (e.g., storage requests of a filesystem storage client 118B), buffering and/or modifying storage requests of a particular type (e.g., only TRIM storage requests), or the like.
Step 2040 may comprise servicing the buffered storage requests. Step 2040 may comprise servicing one or more of the storage requests and/or vectors modified at step 2030. Step 2040 may be performed at a predetermined time and/or operation interval. In some embodiments, step 2040 is performed in response to a trigger condition, which may include, but is not limited to: filling the request buffer 1780 (e.g., a FIFO, ordered queue 173, or the like), buffering a predetermined number of storage requests, a user request to flush the request buffer 1780, or the like. Step 2040 may further comprise acknowledging completion of one or more storage requests. The request(s) may be acknowledged after all of the storage requests of a particular vector storage request (or atomic vector storage request) are complete. In some embodiments, step 2040 may comprise acknowledging completion of a storage request that was modified at step 2030. The acknowledgement may pertain to a storage request and/or vector that was removed or omitted at step 2030.
FIG. 21 is a flow diagram of one embodiment of a method 2100 for servicing vector storage requests. The method 2100 may start and initialize, as described above.
Step 2110 may comprise identifying a plurality of storage requests of a vector storage request (e.g., a plurality of sub-requests or sub-operations of the vector storage request). The vector storage request may pertain to a plurality of vectors, each vector corresponding to a range of one or more logical identifiers of a logical address space 134. Two or more of the vectors may pertain to logical identifiers that are disjoint, non-adjacent, and/or non-contiguous with respect to the logical address space 134. The storage requests identified at step 2110 may correspond to respective vectors of the vector storage request and/or may comprise different types of storage operations (e.g., in accordance with a vector flag parameter 1698 n or vector storage request flag parameter 1696 d).
Step 2120 may comprise modifying one or more of the storage requests of the vector storage request based on and/or in response to other pending storage requests (by use of the request consolidation module 1782, described above). Step 2120 may comprise buffering the identified storage requests in a request buffer 1780, which may comprise other storage requests of other storage clients 118A-N (in addition to the storage requests identified at step 2110). Alternatively, step 2120 may comprise modifying the storage requests in response to the vector storage request as identified at step 2110, without regard to other storage requests (buffered or otherwise). Accordingly, the other storage requests may comprise other storage requests within the vector storage request (as identified at step 2110) and/or other storage requests buffered in the request buffer 1780 that are independent of the vector storage request (e.g., in addition to the storage requests of the vector storage request of step 2110).

Modifying a storage request may comprise joining and/or combining two or more storage requests, removing or deleting one or more storage requests that are obviated (e.g., negated) by one or more other pending storage requests, modifying the logical identifier(s) and/or vector of the storage request, and so on, as described above.
The modifications of step 2120 may be configured to maintain consistency with other storage requests; as described above, the request consolidation module 1782 may be configured to modify and/or order the storage requests such that the modifications do not affect other pending storage requests.
Step 2130 may comprise servicing the storage requests of the vector storage request (as modified at step 2120). Step 2130 may comprise storing data packets of the vector storage request contiguously within a log on the non-volatile storage media 140 (e.g., by use of the log storage module 136). Storing the data packets contiguously may comprise appending the data packets at an append point, storing the data packets sequentially from the append point, and/or associating the data packets with respective sequence indicators on the non-volatile storage media 140, such that a log order of the data packets is retained on the non-volatile storage media 140.
In some embodiments, the vector storage request of step 2110 may be an atomic vector storage request. Accordingly, step 2130 may further comprise storing one or more persistent indicators on the non-volatile storage media 140 to identify data pertaining to the atomic vector storage request and/or to indicate completion of the atomic vector storage request. Step 2130 may comprise configuring one or more data packets of the atomic vector storage request to include respective persistent indicators (e.g., persistent metadata flags 717) that indicate that the one or more data packets pertain to an atomic storage request that is incomplete and/or in process. Step 2130 may further comprise configuring a last data packet of the atomic storage request to include a persistent indicator (e.g., persistent metadata flag 717) that indicates that the atomic storage operation is complete.

- Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized are included in any single embodiment. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
- Furthermore, the features, advantages, and characteristics described herein may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the disclosed embodiments may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments. These features and advantages of the disclosed embodiments will become more fully apparent from the following description and appended claims, or may be learned by the practice of the embodiments as set forth hereinafter.
- Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
- Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
- Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. Where a module or portions of a module are implemented in software, the software portions are stored on one or more computer readable media.
- Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
- Reference to a computer readable medium may take any form capable of storing machine-readable instructions on a digital processing apparatus. A computer readable medium may be embodied by a compact disk, digital-video disk, a magnetic tape, a Bernoulli drive, a magnetic disk, a punch card, flash memory, integrated circuits, or other digital processing apparatus memory device.
- Furthermore, the features, structures, or characteristics disclosed herein may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of the disclosed embodiments. One skilled in the relevant art will recognize, however, that the teachings of the disclosure may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosed embodiments.
- The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/371,110 US11182212B2 (en) | 2011-12-22 | 2019-04-01 | Systems, methods, and interfaces for vector input/output operations |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161579627P | 2011-12-22 | 2011-12-22 | |
US13/335,922 US8725934B2 (en) | 2011-12-22 | 2011-12-22 | Methods and appratuses for atomic storage operations |
US201261625475P | 2012-04-17 | 2012-04-17 | |
US201261637155P | 2012-04-23 | 2012-04-23 | |
US13/539,235 US10133662B2 (en) | 2012-06-29 | 2012-06-29 | Systems, methods, and interfaces for managing persistent data of atomic storage operations |
US13/725,728 US9274937B2 (en) | 2011-12-22 | 2012-12-21 | Systems, methods, and interfaces for vector input/output operations |
US15/000,995 US10296220B2 (en) | 2011-12-22 | 2016-01-19 | Systems, methods, and interfaces for vector input/output operations |
US16/371,110 US11182212B2 (en) | 2011-12-22 | 2019-04-01 | Systems, methods, and interfaces for vector input/output operations |
Related Parent Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/000,995 Division US10296220B2 (en) | 2011-12-22 | 2016-01-19 | Systems, methods, and interfaces for vector input/output operations |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190235925A1 (en) | 2019-08-01 |
US11182212B2 (en) | 2021-11-23 |
Family
ID=48655728
Family Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/725,728 Active 2032-10-18 US9274937B2 (en) | 2011-12-22 | 2012-12-21 | Systems, methods, and interfaces for vector input/output operations |
US15/000,995 Active 2032-07-09 US10296220B2 (en) | 2011-12-22 | 2016-01-19 | Systems, methods, and interfaces for vector input/output operations |
US16/371,110 Active 2032-03-11 US11182212B2 (en) | 2011-12-22 | 2019-04-01 | Systems, methods, and interfaces for vector input/output operations |
Family Applications Before (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/725,728 Active 2032-10-18 US9274937B2 (en) | 2011-12-22 | 2012-12-21 | Systems, methods, and interfaces for vector input/output operations |
US15/000,995 Active 2032-07-09 US10296220B2 (en) | 2011-12-22 | 2016-01-19 | Systems, methods, and interfaces for vector input/output operations |
Country Status (1)
Country | Link |
---|---|
US (3) | US9274937B2 (en) |
Families Citing this family (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8019940B2 (en) | 2006-12-06 | 2011-09-13 | Fusion-Io, Inc. | Apparatus, system, and method for a front-end, distributed raid |
WO2008106686A1 (en) * | 2007-03-01 | 2008-09-04 | Douglas Dumitru | Fast block device and methodology |
US9274937B2 (en) | 2011-12-22 | 2016-03-01 | Longitude Enterprise Flash S.A.R.L. | Systems, methods, and interfaces for vector input/output operations |
US9767676B2 (en) * | 2012-01-11 | 2017-09-19 | Honeywell International Inc. | Security system storage of persistent data |
US9805052B2 (en) * | 2013-01-28 | 2017-10-31 | Netapp, Inc. | Coalescing metadata for mirroring to a remote storage node in a cluster storage system |
US10248670B1 (en) | 2013-03-14 | 2019-04-02 | Open Text Corporation | Method and system for migrating content between enterprise content management systems |
US10127166B2 (en) | 2013-05-21 | 2018-11-13 | Sandisk Technologies Llc | Data storage controller with multiple pipelines |
US9483397B2 (en) * | 2013-07-16 | 2016-11-01 | Intel Corporation | Erase management in memory systems |
US20150032982A1 (en) * | 2013-07-26 | 2015-01-29 | Fusion-Io, Inc. | Systems and methods for storage consistency |
US20150032845A1 (en) * | 2013-07-26 | 2015-01-29 | Samsung Electronics Co., Ltd. | Packet transmission protocol supporting downloading and streaming |
US9842128B2 (en) * | 2013-08-01 | 2017-12-12 | Sandisk Technologies Llc | Systems and methods for atomic storage operations |
US10223208B2 (en) * | 2013-08-13 | 2019-03-05 | Sandisk Technologies Llc | Annotated atomic write |
CN105745627B (en) | 2013-08-14 | 2019-03-15 | 西部数据技术公司 | Address conversion for non-volatile memory storage device |
US10763752B1 (en) | 2019-06-25 | 2020-09-01 | Chengdu Monolithic Power Systems Co., Ltd. | Zero-voltage-switching flyback converter |
CN103473185B (en) * | 2013-09-06 | 2016-08-10 | 华为数字技术(苏州)有限公司 | Method, buffer storage and the storage system of caching write |
US10019320B2 (en) | 2013-10-18 | 2018-07-10 | Sandisk Technologies Llc | Systems and methods for distributed atomic storage operations |
US10073630B2 (en) * | 2013-11-08 | 2018-09-11 | Sandisk Technologies Llc | Systems and methods for log coordination |
US9465820B2 (en) | 2013-11-13 | 2016-10-11 | Cellco Partnership | Method and system for unified technological stack management for relational databases |
TWI524179B (en) * | 2014-04-22 | 2016-03-01 | 新唐科技股份有限公司 | Storage unit controller and control method thereof, and storage device |
US10114576B2 (en) * | 2014-07-24 | 2018-10-30 | Sandisk Technologies Llc | Storage device metadata synchronization |
KR102282006B1 (en) * | 2014-08-19 | 2021-07-28 | 삼성전자주식회사 | Computer device and storage device |
US9244858B1 (en) * | 2014-08-25 | 2016-01-26 | Sandisk Technologies Inc. | System and method of separating read intensive addresses from non-read intensive addresses |
US9952805B2 (en) | 2014-09-11 | 2018-04-24 | Hitachi, Ltd. | Storage system and data write method using a logical volume to either store data successfully onto a first memory or send a failure response to a server computer if the storage attempt fails |
KR102238652B1 (en) * | 2014-11-12 | 2021-04-09 | 삼성전자주식회사 | Data storage devce, method thereof, and method for operating data processing system having the same |
US10706041B1 (en) * | 2015-02-11 | 2020-07-07 | Gravic, Inc. | Systems and methods to profile transactions for end-state determination and latency reduction |
US9946607B2 (en) | 2015-03-04 | 2018-04-17 | Sandisk Technologies Llc | Systems and methods for storage error management |
US9933955B1 (en) * | 2015-03-05 | 2018-04-03 | Western Digital Technologies, Inc. | Power safe write buffer for data storage device |
US11556396B2 (en) * | 2015-05-08 | 2023-01-17 | Seth Lytle | Structure linked native query database management system and methods |
US10198208B2 (en) * | 2015-11-13 | 2019-02-05 | International Business Machines Corporation | Performing collective I/O operations within operating system processes |
EP3376394B1 (en) | 2015-12-30 | 2022-09-28 | Huawei Technologies Co., Ltd. | Method and device for processing access request, and computer system |
CN108431784B (en) * | 2015-12-30 | 2020-12-04 | 华为技术有限公司 | Access request processing method and device and computer system |
US10176216B2 (en) * | 2016-02-01 | 2019-01-08 | International Business Machines Corporation | Verifying data consistency |
US10671572B2 (en) * | 2016-06-14 | 2020-06-02 | Sap Se | Storage of log-structured data |
US10146454B1 (en) | 2016-06-30 | 2018-12-04 | EMC IP Holding Company LLC | Techniques for performing data storage copy operations in an integrated manner |
US10061540B1 (en) * | 2016-06-30 | 2018-08-28 | EMC IP Holding Company LLC | Pairing of data storage requests |
US10353588B1 (en) | 2016-06-30 | 2019-07-16 | EMC IP Holding Company LLC | Managing dynamic resource reservation for host I/O requests |
US20180059990A1 (en) | 2016-08-25 | 2018-03-01 | Microsoft Technology Licensing, Llc | Storage Virtualization For Files |
US10162752B2 (en) | 2016-09-22 | 2018-12-25 | Qualcomm Incorporated | Data storage at contiguous memory addresses |
TWI616807B (en) * | 2016-11-17 | 2018-03-01 | 英屬維京群島商大心電子(英屬維京群島)股份有限公司 | Data writing method and storage controller |
KR20180062062A (en) * | 2016-11-30 | 2018-06-08 | 에스케이하이닉스 주식회사 | Memory system and operating method thereof |
US10037778B1 (en) * | 2017-02-27 | 2018-07-31 | Amazon Technologies, Inc. | Indexing zones for storage devices |
US10996857B1 (en) * | 2017-02-28 | 2021-05-04 | Veritas Technologies Llc | Extent map performance |
US11507534B2 (en) | 2017-05-11 | 2022-11-22 | Microsoft Technology Licensing, Llc | Metadata storage for placeholders in a storage virtualization system |
US10147501B1 (en) * | 2017-05-30 | 2018-12-04 | Seagate Technology Llc | Data storage device with rewriteable in-place memory |
US11681667B2 (en) * | 2017-07-30 | 2023-06-20 | International Business Machines Corporation | Persisting distributed data sets into eventually consistent storage systems |
US10970204B2 (en) * | 2017-08-29 | 2021-04-06 | Samsung Electronics Co., Ltd. | Reducing read-write interference by adaptive scheduling in NAND flash SSDs |
US11221958B2 (en) | 2017-08-29 | 2022-01-11 | Samsung Electronics Co., Ltd. | System and method for LBA-based RAID |
US10621086B2 (en) * | 2017-10-30 | 2020-04-14 | International Business Machines Corporation | Dynamic resource visibility tracking to avoid atomic reference counting |
US10419265B2 (en) | 2017-11-29 | 2019-09-17 | Bank Of America Corporation | Request processing system using a combining engine |
KR102485812B1 (en) * | 2017-12-19 | 2023-01-09 | 에스케이하이닉스 주식회사 | Memory system and operating method thereof and data processing system including memory system |
KR102407128B1 (en) * | 2018-01-29 | 2022-06-10 | 마이크론 테크놀로지, 인크. | memory controller |
US11036596B1 (en) * | 2018-02-18 | 2021-06-15 | Pure Storage, Inc. | System for delaying acknowledgements on open NAND locations until durability has been confirmed |
US11099778B2 (en) * | 2018-08-08 | 2021-08-24 | Micron Technology, Inc. | Controller command scheduling in a memory system to increase command bus utilization |
US10725931B2 (en) | 2018-08-22 | 2020-07-28 | Western Digital Technologies, Inc. | Logical and physical address field size reduction by alignment-constrained writing technique |
US11061751B2 (en) * | 2018-09-06 | 2021-07-13 | Micron Technology, Inc. | Providing bandwidth expansion for a memory sub-system including a sequencer separate from a controller |
US11080210B2 (en) | 2018-09-06 | 2021-08-03 | Micron Technology, Inc. | Memory sub-system including an in package sequencer separate from a controller |
TW202020664A (en) * | 2018-11-26 | 2020-06-01 | 深圳大心電子科技有限公司 | Read data sorting method and storage device |
KR20200085967A (en) * | 2019-01-07 | 2020-07-16 | 에스케이하이닉스 주식회사 | Data storage device and operating method thereof |
US11068191B2 (en) | 2019-01-23 | 2021-07-20 | EMC IP Holding Company LLC | Adaptive replication modes in a storage system |
TWI737189B (en) * | 2019-02-23 | 2021-08-21 | 國立清華大學 | Method for facilitating recovery from crash of solid-state storage device, computer system, and solid-state storage device |
US11726851B2 (en) * | 2019-11-05 | 2023-08-15 | EMC IP Holding Company, LLC | Storage management system and method |
JP7347157B2 (en) * | 2019-11-22 | 2023-09-20 | 富士通株式会社 | Information processing system, storage control program, and storage control device |
US11748023B2 (en) | 2019-11-29 | 2023-09-05 | Silicon Motion, Inc. | Data storage device and non-volatile memory control method |
CN112882649B (en) | 2019-11-29 | 2024-04-02 | 慧荣科技股份有限公司 | Data storage device and non-volatile memory control method |
US11397669B2 (en) | 2019-11-29 | 2022-07-26 | Silicon Motion, Inc. | Data storage device and non-volatile memory control method |
TWI745986B (en) * | 2019-11-29 | 2021-11-11 | 慧榮科技股份有限公司 | Data storage device and non-volatile memory control method |
US11663043B2 (en) * | 2019-12-02 | 2023-05-30 | Meta Platforms, Inc. | High bandwidth memory system with dynamically programmable distribution scheme |
US11301370B2 (en) | 2020-03-24 | 2022-04-12 | Samsung Electronics Co., Ltd. | Parallel overlap management for commands with overlapping ranges |
US11593141B2 (en) * | 2020-06-29 | 2023-02-28 | Dell Products L.P. | Atomic groups for configuring HCI systems |
US20230195351A1 (en) * | 2021-12-17 | 2023-06-22 | Samsung Electronics Co., Ltd. | Automatic deletion in a persistent storage device |
Family Cites Families (289)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4571674A (en) | 1982-09-27 | 1986-02-18 | International Business Machines Corporation | Peripheral storage system having multiple data transfer rates |
US5359726A (en) | 1988-12-22 | 1994-10-25 | Thomas Michael E | Ferroelectric storage device used in place of a rotating disk drive unit in a computer system |
US5247658A (en) | 1989-10-31 | 1993-09-21 | Microsoft Corporation | Method and system for traversing linked list record based upon write-once predetermined bit value of secondary pointers |
US5261068A (en) | 1990-05-25 | 1993-11-09 | Dell Usa L.P. | Dual path memory retrieval system for an interleaved dynamic RAM memory unit |
US5193184A (en) | 1990-06-18 | 1993-03-09 | Storage Technology Corporation | Deleted data file space release system for a dynamically mapped virtual data storage subsystem |
US5307497A (en) | 1990-06-25 | 1994-04-26 | International Business Machines Corp. | Disk operating system loadable from read only memory using installable file system interface |
US5325509A (en) | 1991-03-05 | 1994-06-28 | Zitel Corporation | Method of operating a cache memory including determining desirability of cache ahead or cache behind based on a number of available I/O operations |
US5438671A (en) | 1991-07-19 | 1995-08-01 | Dell U.S.A., L.P. | Method and system for transferring compressed bytes of information between separate hard disk drive units |
US5469555A (en) | 1991-12-19 | 1995-11-21 | Opti, Inc. | Adaptive write-back method and apparatus wherein the cache system operates in a combination of write-back and write-through modes for a cache-based microprocessor system |
US6256642B1 (en) | 1992-01-29 | 2001-07-03 | Microsoft Corporation | Method and system for file system management using a flash-erasable, programmable, read-only memory |
US5414840A (en) | 1992-06-25 | 1995-05-09 | Digital Equipment Corporation | Method and system for decreasing recovery time for failed atomic transactions by keeping copies of altered control structures in main memory |
US5596736A (en) | 1992-07-22 | 1997-01-21 | Fujitsu Limited | Data transfers to a backing store of a dynamically mapped data storage system in which data has nonsequential logical addresses |
US5845329A (en) | 1993-01-29 | 1998-12-01 | Sanyo Electric Co., Ltd. | Parallel computer |
JP2856621B2 (en) | 1993-02-24 | 1999-02-10 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Batch erase nonvolatile memory and semiconductor disk device using the same |
US5404485A (en) | 1993-03-08 | 1995-04-04 | M-Systems Flash Disk Pioneers Ltd. | Flash file system |
JP2784440B2 (en) | 1993-04-14 | 1998-08-06 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Data page transfer control method |
CA2121852A1 (en) | 1993-04-29 | 1994-10-30 | Larry T. Jost | Disk meshing and flexible storage mapping with enhanced flexible caching |
US5499354A (en) | 1993-05-19 | 1996-03-12 | International Business Machines Corporation | Method and means for dynamic cache management by variable space and time binding and rebinding of cache extents to DASD cylinders |
US5682497A (en) | 1993-09-28 | 1997-10-28 | Intel Corporation | Managing file structures for a flash memory file system in a computer |
US5535399A (en) | 1993-09-30 | 1996-07-09 | Quantum Corporation | Solid state disk drive unit having on-board backup non-volatile memory |
US5809527A (en) | 1993-12-23 | 1998-09-15 | Unisys Corporation | Outboard file cache system |
JPH086854A (en) | 1993-12-23 | 1996-01-12 | Unisys Corp | Outboard-file-cache external processing complex |
GB9326499D0 (en) | 1993-12-24 | 1994-03-02 | Deas Alexander R | Flash memory system with arbitrary block size |
US5553261A (en) | 1994-04-01 | 1996-09-03 | Intel Corporation | Method of performing clean-up of a solid state disk while executing a read command |
US5696917A (en) | 1994-06-03 | 1997-12-09 | Intel Corporation | Method and apparatus for performing burst read operations in an asynchronous nonvolatile memory |
US5504882A (en) | 1994-06-20 | 1996-04-02 | International Business Machines Corporation | Fault tolerant data storage subsystem employing hierarchically arranged controllers |
DE19540915A1 (en) | 1994-11-10 | 1996-05-15 | Raymond Engineering | Redundant arrangement of solid state memory modules |
US6170047B1 (en) | 1994-11-16 | 2001-01-02 | Interactive Silicon, Inc. | System and method for managing system memory and/or non-volatile memory using a memory controller with integrated compression and decompression capabilities |
US6002411A (en) | 1994-11-16 | 1999-12-14 | Interactive Silicon, Inc. | Integrated video and memory controller with data processing and graphical processing capabilities |
US5586291A (en) | 1994-12-23 | 1996-12-17 | Emc Corporation | Disk controller with volatile and non-volatile cache memories |
US5651133A (en) | 1995-02-01 | 1997-07-22 | Hewlett-Packard Company | Methods for avoiding over-commitment of virtual capacity in a redundant hierarchic data storage system |
US5701434A (en) | 1995-03-16 | 1997-12-23 | Hitachi, Ltd. | Interleave memory controller with a common access queue |
US5682499A (en) | 1995-06-06 | 1997-10-28 | International Business Machines Corporation | Directory rebuild method and apparatus for maintaining and rebuilding directory information for compressed data on direct access storage device (DASD) |
EP0747825B1 (en) | 1995-06-06 | 2001-09-19 | Hewlett-Packard Company, A Delaware Corporation | SDRAM data allocation system and method |
US5845313A (en) | 1995-07-31 | 1998-12-01 | Lexar | Direct logical block addressing flash memory mass storage architecture |
US5930815A (en) | 1995-07-31 | 1999-07-27 | Lexar Media, Inc. | Moving sequential sectors within a block of information in a flash memory mass storage architecture |
US5754563A (en) | 1995-09-11 | 1998-05-19 | Ecc Technologies, Inc. | Byte-parallel system for implementing reed-solomon error-correcting codes |
US6014724A (en) | 1995-10-27 | 2000-01-11 | Scm Microsystems (U.S.) Inc. | Flash translation layer block indication map revision system and method |
US6330688B1 (en) | 1995-10-31 | 2001-12-11 | Intel Corporation | On chip error correction for devices in a solid state drive |
US5787486A (en) | 1995-12-15 | 1998-07-28 | International Business Machines Corporation | Bus protocol for locked cycle cache hit |
US5757567A (en) | 1996-02-08 | 1998-05-26 | International Business Machines Corporation | Method and apparatus for servo control with high efficiency gray code for servo track ID |
US6385710B1 (en) | 1996-02-23 | 2002-05-07 | Sun Microsystems, Inc. | Multiple-mode external cache subsystem |
US5832515A (en) | 1996-09-12 | 1998-11-03 | Veritas Software | Log device layered transparently within a filesystem paradigm |
US5960462A (en) | 1996-09-26 | 1999-09-28 | Intel Corporation | Method and apparatus for analyzing a main memory configuration to program a memory controller |
US5754567A (en) | 1996-10-15 | 1998-05-19 | Micron Quantum Devices, Inc. | Write reduction in flash memory systems through ECC usage |
TW349196B (en) | 1996-10-18 | 1999-01-01 | IBM | Cached synchronous DRAM architecture having a mode register programmable cache policy |
US6279069B1 (en) | 1996-12-26 | 2001-08-21 | Intel Corporation | Interface for flash EEPROM memory arrays |
US5802602A (en) | 1997-01-17 | 1998-09-01 | Intel Corporation | Method and apparatus for performing reads of related data from a set-associative cache memory |
US6311290B1 (en) | 1997-02-14 | 2001-10-30 | Intel Corporation | Methods of reliably allocating, de-allocating, re-allocating, and reclaiming objects in a symmetrically blocked nonvolatile memory having a bifurcated storage architecture |
US6073232A (en) | 1997-02-25 | 2000-06-06 | International Business Machines Corporation | Method for minimizing a computer's initial program load time after a system reset or a power-on using non-volatile storage |
JP3459868B2 (en) | 1997-05-16 | 2003-10-27 | NEC Corporation | Group replacement method in case of memory failure |
US6418478B1 (en) | 1997-10-30 | 2002-07-09 | Commvault Systems, Inc. | Pipelined high speed data transfer mechanism |
US6101601A (en) | 1998-04-20 | 2000-08-08 | International Business Machines Corporation | Method and apparatus for hibernation within a distributed data processing system |
US5957158A (en) | 1998-05-11 | 1999-09-28 | Automatic Switch Company | Visual position indicator |
US6185654B1 (en) | 1998-07-17 | 2001-02-06 | Compaq Computer Corporation | Phantom resource memory address mapping system |
US6507911B1 (en) | 1998-07-22 | 2003-01-14 | Entrust Technologies Limited | System and method for securely deleting plaintext data |
US6209088B1 (en) | 1998-09-21 | 2001-03-27 | Microsoft Corporation | Computer hibernation implemented by a computer operating system |
FR2785693B1 (en) | 1998-11-06 | 2000-12-15 | Bull Sa | Device and method for secure disk writing for hard disks in a mass memory subsystem |
US6629112B1 (en) | 1998-12-31 | 2003-09-30 | Nortel Networks Limited | Resource management for CORBA-based applications |
US6412080B1 (en) | 1999-02-23 | 2002-06-25 | Microsoft Corporation | Lightweight persistent storage system for flash memory devices |
KR100330164B1 (en) | 1999-04-27 | 2002-03-28 | Yun Jong-yong | A method for simultaneously programming plural flash memories having invalid blocks |
US7194740B1 (en) | 1999-05-28 | 2007-03-20 | Oracle International Corporation | System for extending an addressable range of memory |
US7660941B2 (en) | 2003-09-10 | 2010-02-09 | Super Talent Electronics, Inc. | Two-level RAM lookup table for block and page allocation and wear-leveling in limited-write flash-memories |
US6336174B1 (en) | 1999-08-09 | 2002-01-01 | Maxtor Corporation | Hardware assisted memory backup system and method |
KR100577380B1 (en) | 1999-09-29 | 2006-05-09 | Samsung Electronics Co., Ltd. | A flash memory and its controlling method |
US20080195798A1 (en) | 2000-01-06 | 2008-08-14 | Super Talent Electronics, Inc. | Non-Volatile Memory Based Computer Systems and Methods Thereof |
US20080320209A1 (en) * | 2000-01-06 | 2008-12-25 | Super Talent Electronics, Inc. | High Performance and Endurance Non-volatile Memory Based Storage Systems |
US8171204B2 (en) | 2000-01-06 | 2012-05-01 | Super Talent Electronics, Inc. | Intelligent solid-state non-volatile memory device (NVMD) system with multi-level caching of multiple channels |
US6671757B1 (en) | 2000-01-26 | 2003-12-30 | Fusionone, Inc. | Data transfer and synchronization system |
US6785785B2 (en) | 2000-01-25 | 2004-08-31 | Hewlett-Packard Development Company, L.P. | Method for supporting multi-level striping of non-homogeneous memory to maximize concurrency |
US7089391B2 (en) | 2000-04-14 | 2006-08-08 | Quickshift, Inc. | Managing a codec engine for memory compression/decompression operations using a data movement engine |
US6523102B1 (en) | 2000-04-14 | 2003-02-18 | Interactive Silicon, Inc. | Parallel compression/decompression system and method for implementation of in-memory compressed cache improving storage density and access speed for industry standard memory subsystems and in-line memory modules |
CN1295623C (en) | 2000-06-23 | 2007-01-17 | Intel Corporation | Non-volatile cache |
US6813686B1 (en) | 2000-06-27 | 2004-11-02 | Emc Corporation | Method and apparatus for identifying logical volumes in multiple element computer storage domains |
US6981070B1 (en) | 2000-07-12 | 2005-12-27 | Shun Hang Luk | Network storage device having solid-state non-volatile memory |
US6658438B1 (en) | 2000-08-14 | 2003-12-02 | Matrix Semiconductor, Inc. | Method for deleting stored digital data from write-once memory device |
US6636879B1 (en) | 2000-08-18 | 2003-10-21 | Network Appliance, Inc. | Space allocation in a write anywhere file system |
US6404647B1 (en) | 2000-08-24 | 2002-06-11 | Hewlett-Packard Co. | Solid-state mass memory storage device |
US6883079B1 (en) | 2000-09-01 | 2005-04-19 | Maxtor Corporation | Method and apparatus for using data compression as a means of increasing buffer bandwidth |
US20040236798A1 (en) | 2001-09-11 | 2004-11-25 | Sudhir Srinivasan | Migration of control in a distributed segmented file system |
US6625685B1 (en) | 2000-09-20 | 2003-09-23 | Broadcom Corporation | Memory controller with programmable configuration |
US7039727B2 (en) | 2000-10-17 | 2006-05-02 | Microsoft Corporation | System and method for controlling mass storage class digital imaging devices |
US6779088B1 (en) | 2000-10-24 | 2004-08-17 | International Business Machines Corporation | Virtual uncompressed cache size control in compressed memory systems |
US7113507B2 (en) | 2000-11-22 | 2006-09-26 | Silicon Image | Method and system for communicating control information via out-of-band symbols |
US6745310B2 (en) | 2000-12-01 | 2004-06-01 | Yan Chiew Chow | Real time local and remote management of data files and directories and method of operating the same |
US6976060B2 (en) | 2000-12-05 | 2005-12-13 | Agami Systems, Inc. | Symmetric shared file storage system |
US20020103819A1 (en) | 2000-12-12 | 2002-08-01 | Fresher Information Corporation | Technique for stabilizing data in a non-log based information storage and retrieval system |
US7013376B2 (en) | 2000-12-20 | 2006-03-14 | Hewlett-Packard Development Company, L.P. | Method and system for data block sparing in a solid-state storage device |
KR100365725B1 (en) | 2000-12-27 | 2002-12-26 | Electronics and Telecommunications Research Institute | Ranked Cleaning Policy and Error Recovery Method for File Systems Using Flash Memory |
JP4818812B2 (en) | 2006-05-31 | 2011-11-16 | Hitachi, Ltd. | Flash memory storage system |
US6731447B2 (en) | 2001-06-04 | 2004-05-04 | Xerox Corporation | Secure data file erasure |
US6839808B2 (en) | 2001-07-06 | 2005-01-04 | Juniper Networks, Inc. | Processing cluster having multiple compute engines and shared tier one caches |
US6996668B2 (en) | 2001-08-06 | 2006-02-07 | Seagate Technology Llc | Synchronized mirrored data in a data storage device |
US7275135B2 (en) | 2001-08-31 | 2007-09-25 | Intel Corporation | Hardware updated metadata for non-volatile mass storage cache |
US20030061296A1 (en) | 2001-09-24 | 2003-03-27 | International Business Machines Corporation | Memory semantic storage I/O |
GB0123416D0 (en) | 2001-09-28 | 2001-11-21 | Memquest Ltd | Non-volatile memory control |
US6938133B2 (en) | 2001-09-28 | 2005-08-30 | Hewlett-Packard Development Company, L.P. | Memory latency and bandwidth optimizations |
US6892264B2 (en) | 2001-10-05 | 2005-05-10 | International Business Machines Corporation | Storage area network methods and apparatus for associating a logical identification with a physical identification |
US7013379B1 (en) | 2001-12-10 | 2006-03-14 | Incipient, Inc. | I/O primitives |
US7173929B1 (en) | 2001-12-10 | 2007-02-06 | Incipient, Inc. | Fast path for performing data operations |
CN1278239C (en) | 2002-01-09 | 2006-10-04 | Renesas Technology Corp. | Storage system and storage card |
JP4154893B2 (en) | 2002-01-23 | 2008-09-24 | Hitachi, Ltd. | Network storage virtualization method |
US20030145230A1 (en) | 2002-01-31 | 2003-07-31 | Huimin Chiu | System for exchanging data utilizing remote direct memory access |
US7533214B2 (en) | 2002-02-27 | 2009-05-12 | Microsoft Corporation | Open architecture flash driver |
US7010662B2 (en) | 2002-02-27 | 2006-03-07 | Microsoft Corporation | Dynamic data structures for tracking file system free space in a flash memory device |
US7085879B2 (en) | 2002-02-27 | 2006-08-01 | Microsoft Corporation | Dynamic data structures for tracking data stored in a flash memory device |
JP2003281071A (en) | 2002-03-20 | 2003-10-03 | Seiko Epson Corp | Data transfer controller, electronic equipment and data transfer control method |
JP4050548B2 (en) | 2002-04-18 | 2008-02-20 | Renesas Technology Corp. | Semiconductor memory device |
US7043599B1 (en) | 2002-06-20 | 2006-05-09 | Rambus Inc. | Dynamic memory supporting simultaneous refresh and data-access transactions |
US7562089B2 (en) | 2002-06-26 | 2009-07-14 | Seagate Technology Llc | Systems and methods for storing information to allow users to manage files |
US7082495B2 (en) | 2002-06-27 | 2006-07-25 | Microsoft Corporation | Method and apparatus to reduce power consumption and improve read/write performance of hard disk drives using non-volatile memory |
US7051152B1 (en) | 2002-08-07 | 2006-05-23 | Nvidia Corporation | Method and system of improving disk access time by compression |
KR100505638B1 (en) | 2002-08-28 | 2005-08-03 | Samsung Electronics Co., Ltd. | Apparatus and method for saving and restoring a working context |
US7130979B2 (en) | 2002-08-29 | 2006-10-31 | Micron Technology, Inc. | Dynamic volume management |
US7340566B2 (en) | 2002-10-21 | 2008-03-04 | Microsoft Corporation | System and method for initializing a memory device from block oriented NAND flash |
US7171536B2 (en) | 2002-10-28 | 2007-01-30 | Sandisk Corporation | Unusable block management within a non-volatile memory system |
US7035974B2 (en) | 2002-11-06 | 2006-04-25 | Synology Inc. | RAID-5 disk having cache memory implemented using non-volatile RAM |
US6996676B2 (en) | 2002-11-14 | 2006-02-07 | International Business Machines Corporation | System and method for implementing an adaptive replacement cache policy |
US7093101B2 (en) | 2002-11-21 | 2006-08-15 | Microsoft Corporation | Dynamic data structures for tracking file system free space in a flash memory device |
US7660998B2 (en) | 2002-12-02 | 2010-02-09 | Silverbrook Research Pty Ltd | Relatively unique ID in integrated circuit |
US6957158B1 (en) | 2002-12-23 | 2005-10-18 | Power Measurement Ltd. | High density random access memory in an intelligent electric device |
US7010645B2 (en) | 2002-12-27 | 2006-03-07 | International Business Machines Corporation | System and method for sequentially staging received data to a write cache in advance of storing the received data |
US6973551B1 (en) | 2002-12-30 | 2005-12-06 | Emc Corporation | Data storage system having atomic memory operation |
US20040148360A1 (en) | 2003-01-24 | 2004-07-29 | Hewlett-Packard Development Company | Communication-link-attached persistent memory device |
US6959369B1 (en) | 2003-03-06 | 2005-10-25 | International Business Machines Corporation | Method, system, and program for data backup |
US8041878B2 (en) | 2003-03-19 | 2011-10-18 | Samsung Electronics Co., Ltd. | Flash file system |
US7610348B2 (en) | 2003-05-07 | 2009-10-27 | International Business Machines | Distributed file serving architecture system with metadata storage virtualization and data access at the data server connection speed |
JP2004348818A (en) | 2003-05-20 | 2004-12-09 | Sharp Corp | Method and system for controlling writing in semiconductor memory device, and portable electronic device |
US7243203B2 (en) | 2003-06-13 | 2007-07-10 | Sandisk 3D Llc | Pipeline circuit for low latency memory |
US7047366B1 (en) | 2003-06-17 | 2006-05-16 | Emc Corporation | QOS feature knobs |
EP1639443A1 (en) | 2003-06-23 | 2006-03-29 | Koninklijke Philips Electronics N.V. | Device and method for recording information |
US20040268359A1 (en) | 2003-06-27 | 2004-12-30 | Hanes David H. | Computer-readable medium, method and computer system for processing input/output requests |
US7487235B2 (en) | 2003-09-24 | 2009-02-03 | Dell Products L.P. | Dynamically varying a raid cache policy in order to optimize throughput |
US7173852B2 (en) | 2003-10-03 | 2007-02-06 | Sandisk Corporation | Corrected data storage and handling methods |
US7096321B2 (en) | 2003-10-21 | 2006-08-22 | International Business Machines Corporation | Method and system for a cache replacement technique with adaptive skipping |
CA2544063C (en) | 2003-11-13 | 2013-09-10 | Commvault Systems, Inc. | System and method for combining data streams in pipelined storage operations in a storage network |
CN100543702C (en) | 2003-11-18 | 2009-09-23 | Matsushita Electric Industrial Co., Ltd. | File recording device, control method thereof, and manner of execution |
US7139864B2 (en) | 2003-12-30 | 2006-11-21 | Sandisk Corporation | Non-volatile memory and method with block management system |
US7328307B2 (en) | 2004-01-22 | 2008-02-05 | Tquist, Llc | Method and apparatus for improving update performance of non-uniform access time persistent storage media |
US7305520B2 (en) | 2004-01-30 | 2007-12-04 | Hewlett-Packard Development Company, L.P. | Storage system with capability to allocate virtual storage segments among a plurality of controllers |
US7356651B2 (en) | 2004-01-30 | 2008-04-08 | Piurata Technologies, Llc | Data-aware cache state machine |
US7130956B2 (en) | 2004-02-10 | 2006-10-31 | Sun Microsystems, Inc. | Storage system including hierarchical cache metadata |
US7130957B2 (en) | 2004-02-10 | 2006-10-31 | Sun Microsystems, Inc. | Storage system structure for storing relational cache metadata |
US7725628B1 (en) | 2004-04-20 | 2010-05-25 | Lexar Media, Inc. | Direct secondary device interface by a host |
US20050240713A1 (en) | 2004-04-22 | 2005-10-27 | V-Da Technology | Flash memory device with ATA/ATAPI/SCSI or proprietary programming interface on PCI express |
EP1745394B1 (en) | 2004-04-26 | 2009-07-15 | Storewiz, Inc. | Method and system for compression of files for storage and operation on compressed files |
US7430571B2 (en) | 2004-04-30 | 2008-09-30 | Network Appliance, Inc. | Extension of write anywhere file layout write allocation |
US7644239B2 (en) | 2004-05-03 | 2010-01-05 | Microsoft Corporation | Non-volatile memory cache performance improvement |
US7360015B2 (en) | 2004-05-04 | 2008-04-15 | Intel Corporation | Preventing storage of streaming accesses in a cache |
US7386663B2 (en) | 2004-05-13 | 2008-06-10 | Cousins Robert E | Transaction-based storage system and method that uses variable sized objects to store data |
US20050257017A1 (en) | 2004-05-14 | 2005-11-17 | Hideki Yagi | Method and apparatus to erase hidden memory in a memory card |
US7831561B2 (en) | 2004-05-18 | 2010-11-09 | Oracle International Corporation | Automated disk-oriented backups |
US7904181B2 (en) | 2004-06-01 | 2011-03-08 | Ils Technology Llc | Model for communication between manufacturing and enterprise levels |
US7447847B2 (en) | 2004-07-19 | 2008-11-04 | Micron Technology, Inc. | Memory device trims |
US7395384B2 (en) | 2004-07-21 | 2008-07-01 | Sandisk Corporation | Method and apparatus for maintaining data on non-volatile memory systems |
US7203815B2 (en) | 2004-07-30 | 2007-04-10 | International Business Machines Corporation | Multi-level page cache for enhanced file system performance via read ahead |
US8407396B2 (en) | 2004-07-30 | 2013-03-26 | Hewlett-Packard Development Company, L.P. | Providing block data access for an operating system using solid-state memory |
US7664239B2 (en) | 2004-08-09 | 2010-02-16 | Cox Communications, Inc. | Methods and computer-readable media for managing and configuring options for the real-time notification and disposition of voice services in a cable services network |
US7398348B2 (en) | 2004-08-24 | 2008-07-08 | Sandisk 3D Llc | Method and apparatus for using a one-time or few-time programmable memory with a host device designed for erasable/rewritable memory |
WO2006025322A1 (en) | 2004-08-30 | 2006-03-09 | Matsushita Electric Industrial Co., Ltd. | Recorder |
US20060075057A1 (en) | 2004-08-30 | 2006-04-06 | International Business Machines Corporation | Remote direct memory access system and method |
US7603532B2 (en) | 2004-10-15 | 2009-10-13 | Netapp, Inc. | System and method for reclaiming unused space from a thinly provisioned data container |
US8131969B2 (en) | 2004-10-20 | 2012-03-06 | Seagate Technology Llc | Updating system configuration information |
US7310711B2 (en) | 2004-10-29 | 2007-12-18 | Hitachi Global Storage Technologies Netherlands B.V. | Hard disk drive with support for atomic transactions |
US7873782B2 (en) | 2004-11-05 | 2011-01-18 | Data Robotics, Inc. | Filesystem-aware block storage system, apparatus, and method |
EP1839154A4 (en) | 2004-12-06 | 2008-07-09 | Teac Aerospace Technologies Inc | System and method of erasing non-volatile recording media |
US8074041B2 (en) | 2004-12-09 | 2011-12-06 | International Business Machines Corporation | Apparatus, system, and method for managing storage space allocation |
US7581118B2 (en) | 2004-12-14 | 2009-08-25 | Netapp, Inc. | Disk sanitization using encryption |
US7487320B2 (en) | 2004-12-15 | 2009-02-03 | International Business Machines Corporation | Apparatus and system for dynamically allocating main memory among a plurality of applications |
KR100684887B1 (en) | 2005-02-04 | 2007-02-20 | Samsung Electronics Co., Ltd. | Data storing device including flash memory and merge method thereof |
US20060136657A1 (en) | 2004-12-22 | 2006-06-22 | Intel Corporation | Embedding a filesystem into a non-volatile device |
US20060143396A1 (en) | 2004-12-29 | 2006-06-29 | Mason Cabot | Method for programmer-controlled cache line eviction policy |
US7246195B2 (en) | 2004-12-30 | 2007-07-17 | Intel Corporation | Data storage management for flash memory devices |
US20060184719A1 (en) | 2005-02-16 | 2006-08-17 | Sinclair Alan W | Direct data file storage implementation techniques in flash memories |
US9104315B2 (en) | 2005-02-04 | 2015-08-11 | Sandisk Technologies Inc. | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US20060190552A1 (en) | 2005-02-24 | 2006-08-24 | Henze Richard H | Data retention system with a plurality of access protocols |
US7254686B2 (en) | 2005-03-31 | 2007-08-07 | International Business Machines Corporation | Switching between mirrored and non-mirrored volumes |
US7620773B2 (en) | 2005-04-15 | 2009-11-17 | Microsoft Corporation | In-line non volatile memory disk read cache and write buffer |
US20060236061A1 (en) | 2005-04-18 | 2006-10-19 | Creek Path Systems | Systems and methods for adaptively deriving storage policy and configuration rules |
US8452929B2 (en) | 2005-04-21 | 2013-05-28 | Violin Memory Inc. | Method and system for storage of data in non-volatile media |
US7702873B2 (en) | 2005-04-25 | 2010-04-20 | Network Appliance, Inc. | Managing common storage by allowing delayed allocation of storage after reclaiming reclaimable space in a logical volume |
US7743210B1 (en) | 2005-04-29 | 2010-06-22 | Netapp, Inc. | System and method for implementing atomic cross-stripe write operations in a striped volume set |
US20060265636A1 (en) | 2005-05-19 | 2006-11-23 | Klaus Hummler | Optimized testing of on-chip error correction circuit |
US20060294300A1 (en) | 2005-06-22 | 2006-12-28 | Seagate Technology Llc | Atomic cache transactions in a distributed storage system |
US7457910B2 (en) | 2005-06-29 | 2008-11-25 | Sandisk Corporation | Method and system for managing partitions in a storage device |
US7716387B2 (en) | 2005-07-14 | 2010-05-11 | Canon Kabushiki Kaisha | Memory control apparatus and method |
US7409489B2 (en) | 2005-08-03 | 2008-08-05 | Sandisk Corporation | Scheduling of reclaim operations in non-volatile memory |
US7552271B2 (en) | 2005-08-03 | 2009-06-23 | Sandisk Corporation | Nonvolatile memory with block management |
US7480771B2 (en) | 2005-08-17 | 2009-01-20 | Sun Microsystems, Inc. | Conditional synchronization mechanisms allowing multiple store operations to become visible while a flagged memory location is owned and remains unchanged |
KR100739722B1 (en) | 2005-08-20 | 2007-07-13 | Samsung Electronics Co., Ltd. | A method for managing a flash memory and a flash memory system |
JP5008845B2 (en) | 2005-09-01 | 2012-08-22 | Hitachi, Ltd. | Storage system, storage apparatus and control method thereof |
US7580287B2 (en) | 2005-09-01 | 2009-08-25 | Micron Technology, Inc. | Program and read trim setting |
US20070061508A1 (en) | 2005-09-13 | 2007-03-15 | Quantum Corporation | Data storage cartridge with built-in tamper-resistant clock |
US7437510B2 (en) | 2005-09-30 | 2008-10-14 | Intel Corporation | Instruction-assisted cache management for efficient use of cache and memory |
US8078588B2 (en) | 2005-10-10 | 2011-12-13 | Oracle International Corporation | Recoverable execution |
US7529905B2 (en) | 2005-10-13 | 2009-05-05 | Sandisk Corporation | Method of storing transformed units of data in a memory system having fixed sized storage blocks |
US7516267B2 (en) | 2005-11-03 | 2009-04-07 | Intel Corporation | Recovering from a non-volatile memory failure |
US7739472B2 (en) | 2005-11-22 | 2010-06-15 | Sandisk Corporation | Memory system for legacy hosts |
US7366808B2 (en) | 2005-11-23 | 2008-04-29 | Hitachi, Ltd. | System, method and apparatus for multiple-protocol-accessible OSD storage subsystem |
US7526614B2 (en) | 2005-11-30 | 2009-04-28 | Red Hat, Inc. | Method for tuning a cache |
US8799882B2 (en) | 2005-12-07 | 2014-08-05 | Microsoft Corporation | Compiler support for optimizing decomposed software transactional memory operations |
US7877540B2 (en) | 2005-12-13 | 2011-01-25 | Sandisk Corporation | Logically-addressed file storage methods |
US20070143560A1 (en) | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Non-volatile memories with memory allocation for a directly mapped file storage system |
US20070143566A1 (en) | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Non-volatile memories with data alignment in a directly mapped file storage system |
US20070156998A1 (en) | 2005-12-21 | 2007-07-05 | Gorobets Sergey A | Methods for memory allocation in non-volatile memories with a directly mapped file storage system |
US7747837B2 (en) | 2005-12-21 | 2010-06-29 | Sandisk Corporation | Method and system for accessing non-volatile storage devices |
US20070143561A1 (en) | 2005-12-21 | 2007-06-21 | Gorobets Sergey A | Methods for adaptive file data handling in non-volatile memories with a directly mapped file storage system |
US7831783B2 (en) | 2005-12-22 | 2010-11-09 | Honeywell International Inc. | Effective wear-leveling and concurrent reclamation method for embedded linear flash file systems |
US20070150663A1 (en) | 2005-12-27 | 2007-06-28 | Abraham Mendelson | Device, system and method of multi-state cache coherence scheme |
JP4392049B2 (en) | 2006-02-27 | 2009-12-24 | Fujitsu Limited | Cache control device and cache control program |
US20070208790A1 (en) | 2006-03-06 | 2007-09-06 | Reuter James M | Distributed data-storage system |
US7676628B1 (en) | 2006-03-31 | 2010-03-09 | Emc Corporation | Methods, systems, and computer program products for providing access to shared storage by computing grids and clusters with large numbers of nodes |
US20070233937A1 (en) | 2006-03-31 | 2007-10-04 | Coulson Richard L | Reliability of write operations to a non-volatile memory |
US7636829B2 (en) | 2006-05-02 | 2009-12-22 | Intel Corporation | System and method for allocating and deallocating memory within transactional code |
US20070261030A1 (en) | 2006-05-04 | 2007-11-08 | Gaurav Wadhwa | Method and system for tracking and prioritizing applications |
US7424587B2 (en) | 2006-05-23 | 2008-09-09 | Dataram, Inc. | Methods for managing data writes and reads to a hybrid solid-state disk drive |
US7558913B2 (en) | 2006-06-20 | 2009-07-07 | Microsoft Corporation | Atomic commit of cache transfer with staging area |
US8307148B2 (en) | 2006-06-23 | 2012-11-06 | Microsoft Corporation | Flash management techniques |
US7721059B2 (en) | 2006-07-06 | 2010-05-18 | Nokia Corporation | Performance optimization in solid-state media |
US20080052377A1 (en) | 2006-07-11 | 2008-02-28 | Robert Light | Web-Based User-Dependent Customer Service Interaction with Co-Browsing |
KR101128234B1 (en) | 2006-08-23 | 2012-03-23 | LG Electronics Inc. | Apparatus and method for controlling access to memory |
US7870306B2 (en) | 2006-08-31 | 2011-01-11 | Cisco Technology, Inc. | Shared memory message switch and cache |
JP4452261B2 (en) | 2006-09-12 | 2010-04-21 | Hitachi, Ltd. | Storage system logical volume management method, logical volume management program, and storage system |
JP4942446B2 (en) | 2006-10-11 | 2012-05-30 | Hitachi, Ltd. | Storage apparatus and control method thereof |
US7685178B2 (en) | 2006-10-31 | 2010-03-23 | Netapp, Inc. | System and method for examining client generated content stored on a data container exported by a storage system |
US20080120469A1 (en) | 2006-11-22 | 2008-05-22 | International Business Machines Corporation | Systems and Arrangements for Cache Management |
US7904647B2 (en) | 2006-11-27 | 2011-03-08 | Lsi Corporation | System for optimizing the performance and reliability of a storage controller cache offload circuit |
US8151082B2 (en) | 2007-12-06 | 2012-04-03 | Fusion-Io, Inc. | Apparatus, system, and method for converting a storage request into an append data storage command |
US8935302B2 (en) | 2006-12-06 | 2015-01-13 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for data block usage information synchronization for a non-volatile storage volume |
US8019940B2 (en) | 2006-12-06 | 2011-09-13 | Fusion-Io, Inc. | Apparatus, system, and method for a front-end, distributed raid |
US20080140737A1 (en) | 2006-12-08 | 2008-06-12 | Apple Computer, Inc. | Dynamic memory management |
US20080140918A1 (en) | 2006-12-11 | 2008-06-12 | Pantas Sutardja | Hybrid non-volatile solid state memory system |
US7660911B2 (en) | 2006-12-20 | 2010-02-09 | Smart Modular Technologies, Inc. | Block-based data striping to flash memory |
US7913051B1 (en) | 2006-12-22 | 2011-03-22 | Emc Corporation | Methods and apparatus for increasing the storage capacity of a zone of a storage system |
US8060482B2 (en) | 2006-12-28 | 2011-11-15 | Intel Corporation | Efficient and consistent software transactional memory |
WO2008106686A1 (en) | 2007-03-01 | 2008-09-04 | Douglas Dumitru | Fast block device and methodology |
US20080229045A1 (en) | 2007-03-16 | 2008-09-18 | Lsi Logic Corporation | Storage system provisioning architecture |
US8135900B2 (en) | 2007-03-28 | 2012-03-13 | Kabushiki Kaisha Toshiba | Integrated memory management and memory management method |
US20080243966A1 (en) | 2007-04-02 | 2008-10-02 | Croisettier Ramanakumari M | System and method for managing temporary storage space of a database management system |
US9207876B2 (en) | 2007-04-19 | 2015-12-08 | Microsoft Technology Licensing, Llc | Remove-on-delete technologies for solid state drive optimization |
US8429677B2 (en) | 2007-04-19 | 2013-04-23 | Microsoft Corporation | Composite solid state drive identification and optimization technologies |
US7853759B2 (en) | 2007-04-23 | 2010-12-14 | Microsoft Corporation | Hints model for optimization of storage devices connected to host and write optimization schema for storage devices |
JP2008276646A (en) | 2007-05-02 | 2008-11-13 | Hitachi Ltd | Storage device and data management method for storage device |
US9009452B2 (en) | 2007-05-14 | 2015-04-14 | International Business Machines Corporation | Computing system with transactional memory using millicode assists |
US20080320253A1 (en) | 2007-06-19 | 2008-12-25 | Andrew Tomlin | Memory device with circuitry for writing data of an atomic transaction |
US8850154B2 (en) | 2007-09-11 | 2014-09-30 | 2236008 Ontario Inc. | Processing system having memory partitioning |
US20090070526A1 (en) | 2007-09-12 | 2009-03-12 | Tetrick R Scott | Using explicit disk block cacheability attributes to enhance i/o caching efficiency |
US7873803B2 (en) | 2007-09-25 | 2011-01-18 | Sandisk Corporation | Nonvolatile memory with self recovery |
TWI366828B (en) | 2007-09-27 | 2012-06-21 | Phison Electronics Corp | Wear leveling method and controller using the same |
JP5552431B2 (en) | 2007-11-05 | 2014-07-16 | Cellular Communications Equipment LLC | Buffer status reporting apparatus, system and method |
US8195912B2 (en) | 2007-12-06 | 2012-06-05 | Fusion-io, Inc. | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
KR101086855B1 (en) | 2008-03-10 | 2011-11-25 | Paxdisk Co., Ltd. | Solid State Storage System with High Speed and Controlling Method thereof |
US20090276654A1 (en) | 2008-05-02 | 2009-11-05 | International Business Machines Corporation | Systems and methods for implementing fault tolerant data processing services |
US8266114B2 (en) | 2008-09-22 | 2012-09-11 | Riverbed Technology, Inc. | Log structured content addressable deduplicating storage |
JP5159421B2 (en) | 2008-05-14 | 2013-03-06 | Hitachi, Ltd. | Storage system and storage system management method using management device |
US8775718B2 (en) | 2008-05-23 | 2014-07-08 | Netapp, Inc. | Use of RDMA to access non-volatile solid-state memory in a network storage system |
US8554983B2 (en) | 2008-05-27 | 2013-10-08 | Micron Technology, Inc. | Devices and methods for operating a solid state drive |
WO2010011428A1 (en) | 2008-06-06 | 2010-01-28 | Pivot3 | Method and system for data migration in a distributed raid implementation |
US7917803B2 (en) | 2008-06-17 | 2011-03-29 | Seagate Technology Llc | Data conflict resolution for solid-state memory devices |
US8843691B2 (en) | 2008-06-25 | 2014-09-23 | Stec, Inc. | Prioritized erasure of data blocks in a flash storage device |
US8135907B2 (en) | 2008-06-30 | 2012-03-13 | Oracle America, Inc. | Method and system for managing wear-level aware file systems |
US8019953B2 (en) | 2008-07-01 | 2011-09-13 | Lsi Corporation | Method for providing atomicity for host write input/outputs (I/Os) in a continuous data protection (CDP)-enabled volume using intent log |
JP5242264B2 (en) | 2008-07-07 | 2013-07-24 | Toshiba Corporation | Data control apparatus, storage system, and program |
US20100017556A1 (en) | 2008-07-19 | 2010-01-21 | Nanostar Corporation, U.S.A. | Non-volatile memory storage system with two-stage controller architecture |
KR101086857B1 (en) | 2008-07-25 | 2011-11-25 | Paxdisk Co., Ltd. | Control Method of Solid State Storage System for Data Merging |
US7941591B2 (en) | 2008-07-28 | 2011-05-10 | CacheIQ, Inc. | Flash DIMM in a standalone cache appliance system and methodology |
JP5216463B2 (en) | 2008-07-30 | 2013-06-19 | Hitachi, Ltd. | Storage device, storage area management method thereof, and flash memory package |
US8205060B2 (en) | 2008-12-16 | 2012-06-19 | Sandisk Il Ltd. | Discardable files |
US9015209B2 (en) | 2008-12-16 | 2015-04-21 | Sandisk Il Ltd. | Download management of discardable files |
US8266365B2 (en) | 2008-12-17 | 2012-09-11 | Sandisk Il Ltd. | Ruggedized memory device |
US8607028B2 (en) * | 2008-12-30 | 2013-12-10 | Micron Technology, Inc. | Enhanced addressability for serial non-volatile memory |
US8205063B2 (en) | 2008-12-30 | 2012-06-19 | Sandisk Technologies Inc. | Dynamic mapping of logical ranges to write blocks |
US20100235597A1 (en) | 2009-03-10 | 2010-09-16 | Hiroshi Arakawa | Method and apparatus for conversion between conventional volumes and thin provisioning with automated tier management |
US8447918B2 (en) | 2009-04-08 | 2013-05-21 | Google Inc. | Garbage collection for failure prediction and repartitioning |
US8205037B2 (en) | 2009-04-08 | 2012-06-19 | Google Inc. | Data storage device capable of recognizing and controlling multiple types of memory chips operating at different voltages |
US20100262979A1 (en) | 2009-04-08 | 2010-10-14 | Google Inc. | Circular command queues for communication between a host and a data storage device |
US8055816B2 (en) * | 2009-04-09 | 2011-11-08 | Micron Technology, Inc. | Memory controllers, memory systems, solid state drives and methods for processing a number of commands |
US8516219B2 (en) | 2009-07-24 | 2013-08-20 | Apple Inc. | Index cache tree |
US8601222B2 (en) | 2010-05-13 | 2013-12-03 | Fusion-Io, Inc. | Apparatus, system, and method for conditional and atomic storage operations |
TW201111986A (en) * | 2009-09-29 | 2011-04-01 | Silicon Motion Inc | Memory apparatus and data access method for memories |
US8103910B2 (en) | 2009-11-13 | 2012-01-24 | International Business Machines Corporation | Local rollback for fault-tolerance in parallel computing systems |
US8285937B2 (en) | 2010-02-24 | 2012-10-09 | Apple Inc. | Fused store exclusive/memory barrier operation |
US8738724B2 (en) * | 2010-05-25 | 2014-05-27 | Microsoft Corporation | Totally ordered log on appendable storage |
WO2012016089A2 (en) | 2010-07-28 | 2012-02-02 | Fusion-Io, Inc. | Apparatus, system, and method for conditional and atomic storage operations |
EP2604067A1 (en) | 2010-08-09 | 2013-06-19 | Nokia Siemens Networks Oy | Increasing efficiency of admission control in a network |
US8850114B2 (en) * | 2010-09-07 | 2014-09-30 | Daniel L Rosenband | Storage array controller for flash-based storage devices |
US8904091B1 (en) * | 2011-12-22 | 2014-12-02 | Western Digital Technologies, Inc. | High performance media transport manager architecture for data storage systems |
US10133662B2 (en) | 2012-06-29 | 2018-11-20 | Sandisk Technologies Llc | Systems, methods, and interfaces for managing persistent data of atomic storage operations |
US9274937B2 (en) | 2011-12-22 | 2016-03-01 | Longitude Enterprise Flash S.A.R.L. | Systems, methods, and interfaces for vector input/output operations |
2012
- 2012-12-21 US US13/725,728 patent/US9274937B2/en active Active
2016
- 2016-01-19 US US15/000,995 patent/US10296220B2/en active Active
2019
- 2019-04-01 US US16/371,110 patent/US11182212B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US9274937B2 (en) | 2016-03-01 |
US20130166855A1 (en) | 2013-06-27 |
US10296220B2 (en) | 2019-05-21 |
US20160132243A1 (en) | 2016-05-12 |
US11182212B2 (en) | 2021-11-23 |
Similar Documents
Publication | Title |
---|---|
US11182212B2 (en) | Systems, methods, and interfaces for vector input/output operations |
US10133662B2 (en) | Systems, methods, and interfaces for managing persistent data of atomic storage operations |
US8725934B2 (en) | Methods and apparatuses for atomic storage operations |
US10013354B2 (en) | Apparatus, system, and method for atomic storage operations |
US9075557B2 (en) | Virtual channel for data transfers between devices |
US9983993B2 (en) | Apparatus, system, and method for conditional and atomic storage operations |
US10073630B2 (en) | Systems and methods for log coordination |
US9442844B2 (en) | Apparatus, system, and method for a storage layer |
US9251058B2 (en) | Servicing non-block storage requests |
US10019320B2 (en) | Systems and methods for distributed atomic storage operations |
US10102144B2 (en) | Systems, methods and interfaces for data virtualization |
US8898376B2 (en) | Apparatus, system, and method for grouping data stored on an array of solid-state storage elements |
US9519647B2 (en) | Data expiry in a non-volatile device |
US11030156B2 (en) | Key-value store with partial data access |
US20150058547A1 (en) | Apparatus, system, and method for allocating storage |
WO2015112634A1 (en) | Systems, methods and interfaces for data virtualization |
Legal Events
Code | Title | Description |
---|---|---|
FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment | Owner name: SANDISK TECHNOLOGIES, INC., TEXAS; ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: LONGITUDE ENTERPRISE FLASH SARL; REEL/FRAME: 050909/0015; Effective date: 20160318 |
AS | Assignment | Owner name: SANDISK TECHNOLOGIES LLC, TEXAS; ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SANDISK TECHNOLOGIES, INC.; REEL/FRAME: 050926/0420; Effective date: 20160516 |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant | PATENTED CASE |