US20180314627A1 - Systems and Methods for Referencing Data on a Storage Medium - Google Patents
- Publication number
- US20180314627A1 (U.S. application Ser. No. 16/030,232)
- Authority
- US
- United States
- Prior art keywords
- storage
- data
- module
- solid
- ecc
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/108—Parity data distribution in semiconductor storages, e.g. in SSD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0608—Saving storage space on storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4604—LAN interconnection over a backbone network, e.g. Internet, Frame Relay
- H04L12/462—LAN interconnection over a bridge based backbone
- H04L12/4625—Single bridge functionality, e.g. connection of two networks over a single bridge
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/64—Hybrid switching systems
- H04L12/6418—Hybrid transport
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/66—Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/14—Mounting supporting structure in casing or on frame or rack
- H05K7/1438—Back panels or connecting means therefor; Terminals; Coding means to avoid wrong insertion
- H05K7/1439—Back panel mother boards
- H05K7/1444—Complex or three-dimensional-arrangements; Stepped or dual mother boards
-
- G06F2003/0694—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2211/00—Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
- G06F2211/10—Indexing scheme relating to G06F11/10
- G06F2211/1002—Indexing scheme relating to G06F11/1076
- G06F2211/109—Sector level checksum or ECC, i.e. sector or stripe level checksum or ECC in addition to the RAID parity calculation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/40—Specific encoding of data in memory or cache
- G06F2212/401—Compressed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7208—Multiple device management, e.g. distributing data over multiple flash devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0664—Virtualisation aspects at device level, e.g. emulation of a storage device or system
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Definitions
- This disclosure relates to data storage and, in particular, to systems and methods for efficiently referencing data stored on a non-volatile storage medium.
- a storage system may map logical addresses to storage locations of a storage device.
- Physical addressing metadata used to reference the storage locations may consume significant memory resources.
- the size of the physical addressing metadata may limit the size of the storage resources the system is capable of referencing.
- the method may comprise arranging a plurality of data segments for storage at respective offsets within a storage location of a solid-state storage medium, mapping front-end addresses of the data segments to an address of the storage location in a first index, and generating a second index configured for storage on the solid-state storage medium, wherein the second index is configured to associate the front-end addresses of the data segments with respective offsets of the data segments within the storage location.
- the method further includes compressing one or more of the data segments for storage on the solid-state storage medium such that a compressed size of the compressed data segments differs from an uncompressed size of the data segments, wherein the offsets of the data segments within the storage location are based on the compressed size of the one or more data segments.
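The dependency of offsets on compressed sizes can be sketched as follows (a minimal Python illustration, with `zlib` standing in for whatever compression the storage layer actually uses; the front-end addresses and segment contents are hypothetical):

```python
import zlib

def layout_segments(segments):
    """Pack (front-end address, data) pairs into one storage location.
    Each segment's offset depends on the *compressed* sizes of the
    segments packed before it, so offsets cannot be derived from the
    front-end addresses alone and must be recorded in an index."""
    layout = []  # (front_end_address, offset, compressed_bytes)
    offset = 0
    for addr, data in segments:
        compressed = zlib.compress(data)
        layout.append((addr, offset, compressed))
        offset += len(compressed)
    return layout

# Hypothetical front-end addresses 10-12, each a 4 KiB uncompressed segment
layout = layout_segments([(10, b"a" * 4096), (11, b"ab" * 2048), (12, b"abc" * 1365)])
```

Because the per-location offsets are written alongside the data as the second index, the in-memory first index never needs updating when compression ratios change.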
- the disclosed method may further comprise storing the second index on the storage medium.
- the second index may be stored on the storage location that comprises the plurality of data segments.
- the offsets may be omitted from the first index, which may reduce the overhead of the first index and/or allow the first index to reference a larger storage address space.
- the storage address of a data segment associated with a particular front-end address may be determined by use of a storage location address mapped to the particular front-end address in the first index and a data segment offset associated with the particular front-end address of the second index stored on the storage location.
- Accessing a requested data segment of a specified front-end address may include accessing a physical address of a storage location mapped to the specified front-end address in the first index, and reading the second index stored on the storage location to determine an offset of the requested data segment within the storage location.
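A minimal sketch of this two-level lookup (the location names, offsets, and front-end addresses below are hypothetical; in practice the second index would be read from a pre-determined location within the storage location itself):

```python
# First index (kept in volatile memory): front-end address -> storage
# location address only; intra-location offsets are omitted.
first_index = {10: "loc_7", 11: "loc_7", 12: "loc_9"}

# Second indexes (stored on the medium, one per storage location):
# front-end address -> offset of that segment within the location.
on_media = {
    "loc_7": {10: 0, 11: 512},
    "loc_9": {12: 0},
}

def resolve(front_end_addr):
    """Two-step lookup: the in-memory first index yields only the
    storage location; the offset comes from the second index read
    back from that same location."""
    location = first_index[front_end_addr]  # step 1: RAM lookup
    offset_index = on_media[location]       # step 2: read from the medium
    return location, offset_index[front_end_addr]
```

The trade-off is one extra media access on the read path in exchange for a smaller in-memory index.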
- the apparatus may include a storage layer configured to store data packets within storage units of a non-volatile storage medium, wherein the storage units are configured to store a plurality of data packets, a data layout module configured to determine relative locations of the stored data packets within the storage units, and an offset index module configured to generate offset indexes for the storage units based on the determined relative locations of the data packets stored within the storage units, wherein the offset index of a storage unit is configured to associate logical identifiers of data packets stored within the storage unit with the determined relative locations of the data packets within the storage unit.
- the disclosed apparatus further includes a compression module configured to compress data of one or more of the data packets, such that a compressed size of the data differs from an uncompressed size of the data, wherein the offset index module is configured to determine the offsets of the data packets based on the compressed size of the data.
- the apparatus may further comprise a translation module which may be used to associate logical identifiers with media addresses of storage units comprising data packets corresponding to the logical identifiers, wherein the storage layer is configured to access a data packet corresponding to a logical identifier by use of a media address of a storage unit associated with the logical identifier by the translation module, and an offset index indicating a relative location of the data packet within the storage unit, wherein the offset index is stored at a pre-determined location within the storage unit.
- the storage layer may be configured to store the offset indexes of the storage units at pre-determined locations within the storage units.
- the storage layer may be further configured to store each offset index within the storage unit that comprises data packets indexed by the offset index.
- the storage medium may comprise a solid-state storage array comprising a plurality of columns, each column comprising a respective solid-state storage element, and wherein each of the storage units comprises physical storage units on two or more columns of the solid-state storage array.
- the solid-state storage array may comprise a plurality of columns, each column comprising a respective solid-state storage element.
- the offset indexes may indicate a relative location of a data packet within a column of the solid-state storage array.
- the storage medium is a solid-state storage array comprising a plurality of independent channels, each channel comprising a plurality of solid-state storage elements, and wherein the offset indexes indicate relative locations of data packets within respective independent channels.
- the method may further comprise compressing the data for storage on the solid-state storage device, wherein the data offsets within respective storage units are based on a compressed size of the data.
- Data corresponding to a logical address may be accessed by combining a first portion of the physical address mapped to the logical address with a second portion of the physical address stored on a storage unit corresponding to the first portion of the physical address.
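The combination can be sketched as simple address arithmetic, assuming a hypothetical 4 KiB storage unit; because the second portion lives on the medium, the in-memory map saves the offset bits of every entry:

```python
STORAGE_UNIT_SIZE = 4096  # hypothetical storage-unit size in bytes

def physical_address(first_portion, second_portion):
    """Combine the two portions of a physical address: the storage-unit
    number (first portion, mapped to the logical address in volatile
    memory) and the intra-unit byte offset (second portion, stored on
    the storage unit itself)."""
    assert 0 <= second_portion < STORAGE_UNIT_SIZE
    return first_portion * STORAGE_UNIT_SIZE + second_portion
```

Under this assumed unit size, each in-memory entry is 12 bits smaller than a full byte address, which is what lets the first index cover a larger storage address space.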
- each storage unit may comprise a plurality of physical storage units corresponding to respective solid-state storage elements.
- a storage unit may comprise a page on a solid-state storage element, and the second portions of the physical addresses may correspond to data offsets within the pages.
- FIG. 1A is a block diagram of one embodiment of a computing system comprising a storage layer
- FIG. 1B depicts embodiments of any-to-any mappings
- FIG. 1C depicts one embodiment of a solid-state storage array
- FIG. 1D depicts one embodiment of a storage log
- FIG. 2 is a block diagram of another embodiment of a storage layer
- FIG. 3 depicts one embodiment of a packet format
- FIG. 4 depicts one embodiment of ECC codewords comprising one or more data segments
- FIG. 5A is a block diagram depicting one embodiment of a solid-state storage array
- FIG. 5B is a block diagram depicting another embodiment of a solid-state storage array
- FIG. 5C is a block diagram depicting another embodiment of banks of solid-state storage arrays
- FIG. 5D depicts one embodiment of sequential bank interleave
- FIG. 5E depicts another embodiment of sequential bank interleave
- FIG. 6A is a block diagram of another embodiment of a storage controller
- FIG. 6B depicts one embodiment of a horizontal data storage configuration
- FIG. 7A depicts one embodiment of storage metadata for referencing data stored on a storage medium
- FIG. 7B depicts another embodiment of storage metadata for referencing data stored on a storage medium
- FIG. 7C depicts another embodiment of storage metadata for referencing data stored on a storage medium
- FIG. 8A depicts one embodiment of a vertical data layout
- FIG. 8B depicts another embodiment of a vertical data layout
- FIG. 8C depicts one embodiment of a system for referencing data stored on a storage medium in a vertical data layout
- FIG. 9A is a block diagram of one embodiment of a system for referencing data stored in an independent column layout on a storage medium
- FIG. 9B is a block diagram of another embodiment of a system for referencing data stored in an independent column layout on a storage medium
- FIG. 9C is a block diagram of another embodiment of a system for referencing data stored in an independent column layout on a storage medium
- FIG. 10A is a block diagram of one embodiment of data stored in a vertical stripe configuration
- FIG. 10B is a block diagram of one embodiment of a system for referencing data stored in a vertical stripe configuration
- FIG. 10C is a block diagram of another embodiment of a system for referencing data stored in a vertical stripe configuration
- FIG. 10D is a block diagram of another embodiment of a system for referencing data stored in a vertical stripe configuration
- FIG. 11 is a flow diagram of one embodiment of a method for referencing data stored on a storage medium.
- FIG. 12 is a flow diagram of another embodiment of a method for referencing data stored on a storage medium.
- FIG. 1A is a block diagram of one embodiment of a computing system 100 comprising a storage layer 130 configured to provide storage services to one or more storage clients 106 .
- the computing system 100 may comprise any suitable computing device, including, but not limited to: a server, desktop, laptop, embedded system, mobile device, and/or the like. In some embodiments, computing system 100 may include multiple computing devices, such as a cluster of server computing devices.
- the computing system 100 may comprise processing resources 101 , volatile memory resources 102 (e.g., random access memory (RAM)), non-volatile storage resources 103 , and a communication interface 104 .
- the processing resources 101 may include, but are not limited to, general purpose central processing units (CPUs), application-specific integrated circuits (ASICs), programmable logic elements, such as field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), and the like.
- the non-volatile storage 103 may comprise a non-transitory machine-readable storage medium, such as a magnetic hard disk, solid-state storage medium, optical storage medium, and/or the like.
- the communication interface 104 may be configured to communicatively couple the computing system 100 to a network 105 .
- the network 105 may comprise any suitable communication network including, but not limited to: a Transmission Control Protocol/Internet Protocol (TCP/IP) network, a Local Area Network (LAN), a Wide Area Network (WAN), a Virtual Private Network (VPN), a Storage Area Network (SAN), a Public Switched Telephone Network (PSTN), the Internet, and/or the like.
- the computing system 100 may comprise a storage layer 130 , which may be configured to provide storage services to one or more storage clients 106 .
- the storage clients 106 may include, but are not limited to: operating systems (including bare metal operating systems, guest operating systems, virtual machines, virtualization environments, and the like), file systems, database systems, remote storage clients (e.g., storage clients communicatively coupled to the computing system 100 and/or storage layer 130 through the network 105 ), and/or the like.
- the storage layer 130 may be implemented in software, hardware and/or a combination thereof.
- portions of the storage layer 130 are embodied as executable instructions, such as computer program code, which may be stored on a persistent, non-transitory storage medium, such as the non-volatile storage resources 103 .
- the instructions and/or computer program code may be configured for execution by the processing resources 101 .
- portions of the storage layer 130 may be embodied as machine components, such as general and/or application-specific components, programmable hardware, FPGAs, ASICs, hardware controllers, storage controllers, and/or the like.
- the storage layer 130 may be configured to perform storage operations on a storage medium 140 .
- the storage medium 140 may comprise any storage medium capable of storing data persistently.
- “persistent” data storage refers to storing information on a persistent, non-volatile storage medium.
- the storage medium 140 may include non-volatile storage media such as solid-state storage media in one or more solid-state storage devices or drives (SSD), hard disk drives (e.g., Integrated Drive Electronics (IDE) drives, Small Computer System Interface (SCSI) drives, Serial Attached SCSI (SAS) drives, Serial AT Attachment (SATA) drives, etc.), tape drives, writable optical drives (e.g., CD drives, DVD drives, Blu-ray drives, etc.), and/or the like.
- the storage medium 140 comprises non-volatile solid-state memory, which may include, but is not limited to, NAND flash memory, NOR flash memory, nano RAM (NRAM), magneto-resistive RAM (MRAM), phase change RAM (PRAM), Racetrack memory, Memristor memory, nanocrystal wire-based memory, silicon-oxide based sub-10 nanometer process memory, graphene memory, Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive random-access memory (RRAM), programmable metallization cell (PMC), conductive-bridging RAM (CBRAM), and/or the like.
- the teachings of this disclosure could be applied to any suitable form of memory including both non-volatile and volatile forms. Accordingly, although particular embodiments of the storage layer 130 are disclosed in the context of non-volatile, solid-state storage devices 140 , the storage layer 130 may be used with other storage devices and/or storage media.
- the storage medium 140 may include volatile memory, including, but not limited to: RAM, dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc.
- the storage medium 140 may correspond to memory of the processing resources 101 , such as a CPU cache (e.g., L1, L2, L3 cache, etc.), graphics memory, and/or the like.
- the storage medium 140 is communicatively coupled to the storage layer 130 by use of an interconnect 127 .
- the interconnect 127 may include, but is not limited to peripheral component interconnect (PCI), PCI express (PCI-e), serial advanced technology attachment (serial ATA or SATA), parallel ATA (PATA), small computer system interface (SCSI), IEEE 1394 (FireWire), Fiber Channel, universal serial bus (USB), and/or the like.
- the storage medium 140 may be a remote storage device that is communicatively coupled to the storage layer 130 through the network 105 (and/or other communication interface, such as a Storage Area Network (SAN), a Virtual Storage Area Network (VSAN), or the like).
- the interconnect 127 may, therefore, comprise a remote bus, such as a PCI-e bus, a network connection (e.g., Infiniband), a storage network, a Fibre Channel Protocol (FCP) network, HyperSCSI, and/or the like.
- the storage layer 130 may be configured to manage storage operations on the storage medium 140 by use of, inter alia, a storage controller 139 .
- the storage controller 139 may comprise software and/or hardware components including, but not limited to: one or more drivers and/or other software modules operating on the computing system 100, such as storage drivers, I/O drivers, filter drivers, and/or the like; hardware components, such as hardware controllers, communication interfaces, and/or the like; and so on.
- the storage medium 140 may be embodied on a storage device 141 . Portions of the storage layer 130 (e.g., the storage controller 139 ) may be implemented as hardware and/or software components (e.g., firmware) of the storage device 141 .
- the storage controller 139 may be configured to implement storage operations at particular storage locations of the storage medium 140 .
- a storage location refers to a unit of storage of a storage resource (e.g., a storage medium and/or device) that is capable of storing data persistently; storage locations may include, but are not limited to: pages, groups of pages (e.g., logical pages and/or offsets within a logical page), storage divisions (e.g., physical erase blocks, logical erase blocks, etc.), sectors, locations on a magnetic disk, battery-backed memory locations, and/or the like.
- the storage locations may be addressable within a storage address space 144 of the storage medium 140 .
- Storage addresses may correspond to physical addresses, media addresses, back-end addresses, address offsets, and/or the like. Storage addresses may correspond to any suitable storage address space 144 , storage addressing scheme and/or arrangement of storage locations.
- the storage layer 130 may comprise an interface 131 through which storage clients 106 may access storage services provided by the storage layer.
- the storage interface 131 may include one or more of: a block device interface, a virtualized storage interface, an object storage interface, a database storage interface, and/or other suitable interface and/or Application Programming Interface (API).
- the storage layer 130 may provide for referencing storage resources through a front-end interface.
- a front-end interface refers to the identifiers used by the storage clients 106 to reference storage resources and/or services of the storage layer 130 .
- a front-end interface may correspond to a front-end address space 132 that comprises a set, range, and/or extent of front-end addresses or identifiers.
- a front-end address refers to an identifier used to reference data and/or storage resources; front-end addresses may include, but are not limited to: names (e.g., file names, distinguished names, etc.), data identifiers, logical identifiers (LIDs), logical addresses, logical block addresses (LBAs), logical unit number (LUN) addresses, virtual storage addresses, storage addresses, physical addresses, media addresses, and/or the like.
- the front-end address space 132 comprises a logical address space, comprising a plurality of logical identifiers, LBAs, and/or the like.
- the translation module 134 may be configured to map front-end identifiers of the front-end address space 132 to storage resources (e.g., data stored within the storage address space 144 of the storage medium 140 ).
- the front-end address space 132 may be independent of the back-end storage resources (e.g., the storage medium 140 ); accordingly, there may be no set or pre-determined mappings between front-end addresses of the front-end address space 132 and the storage addresses of the storage address space 144 of the storage medium 140 .
- the front-end address space 132 is sparse, thinly provisioned, and/or over-provisioned, such that the size of the front-end address space 132 differs from the storage address space 144 of the storage medium 140 .
- the storage layer 130 may be configured to maintain storage metadata 135 pertaining to storage operations performed on the storage medium 140 .
- the storage metadata 135 may include, but is not limited to: a forward index comprising any-to-any mappings between front-end identifiers of the front-end address space 132 and storage addresses within the storage address space 144 of the storage medium 140 ; a reverse index pertaining to the contents of the storage locations of the storage medium 140 ; one or more validity bitmaps; reliability testing and/or status metadata; status information (e.g., error rate, retirement status, and so on); and/or the like.
- Portions of the storage metadata 135 may be maintained within the volatile memory resources 102 of the computing system 100 .
- portions of the storage metadata 135 may be stored on non-volatile storage resources 103 and/or the storage medium 140 .
- FIG. 1B depicts one embodiment of any-to-any mappings 150 between front-end identifiers of the front-end address space 132 and back-end identifiers (e.g., storage addresses) within the storage address space 144 .
- the any-to-any mappings 150 may be maintained in one or more data structures of the storage metadata 135 .
- the translation module 134 may be configured to map any front-end address to any back-end storage location.
- the front-end address space 132 may be sized differently than the underlying storage address space 144 .
- the front-end address space 132 may be thinly provisioned, and, as such, may comprise a larger range of front-end identifiers than the range of storage addresses in the storage address space 144 .
- the storage layer 130 may be configured to maintain the any-to-any mappings in a forward map 152 .
- the forward map 152 may comprise any suitable data structure, including, but not limited to: an index, a map, a hash map, a hash table, an extended-range tree, a b-tree, and/or the like.
- the forward map 152 may comprise entries 153 corresponding to front-end identifiers that have been allocated for use to reference data stored on the storage medium 140 .
- the entries 153 of the forward map 152 may associate front-end identifiers 154 A-D with respective storage addresses 156 A-D within the storage address space 144 .
- the forward map 152 may be sparsely populated, and as such, may omit entries corresponding to front-end identifiers that are not currently allocated by a storage client 106 and/or are not currently in use to reference valid data stored on the storage medium 140 .
- the forward map 152 comprises a range-encoded data structure, such that one or more of the entries 153 may correspond to a plurality of front-end identifiers (e.g., a range, extent, and/or set of front-end identifiers).
- the forward map 152 includes an entry 153 corresponding to a range of front-end identifiers 154 A mapped to a corresponding range of storage addresses 156 A.
- the entries 153 may be indexed by front-end identifiers.
- the entries 153 are arranged into a tree data structure by respective links.
- the disclosure is not limited in this regard, however, and could be adapted to use any suitable data structure and/or indexing mechanism.
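To make the forward-map concept above concrete, the following Python sketch (hypothetical names and structure, not the patent's implementation) shows a sparse, range-encoded map: each entry covers a range of front-end identifiers, and identifiers with no entry simply resolve to nothing.

```python
import bisect

class ForwardMap:
    """Sketch of a sparse, range-encoded forward map: each entry maps a
    contiguous range of front-end identifiers to a run of storage addresses."""
    def __init__(self):
        self._starts = []   # sorted list of range-start front-end identifiers
        self._entries = {}  # start identifier -> (range length, storage address)

    def map_range(self, front_end_id, length, storage_address):
        # Insert an entry covering [front_end_id, front_end_id + length).
        bisect.insort(self._starts, front_end_id)
        self._entries[front_end_id] = (length, storage_address)

    def lookup(self, front_end_id):
        # Find the entry whose range covers front_end_id, if any. The map is
        # sparsely populated: unallocated identifiers return None.
        i = bisect.bisect_right(self._starts, front_end_id) - 1
        if i < 0:
            return None
        start = self._starts[i]
        length, addr = self._entries[start]
        if front_end_id < start + length:
            return addr + (front_end_id - start)
        return None

fm = ForwardMap()
fm.map_range(1024, 8, 3072)   # front-end IDs 1024-1031 -> addresses 3072-3079
print(fm.lookup(1027))        # 3075
print(fm.lookup(9999))        # None (no entry: identifier not allocated)
```

A production forward map would also support splitting and merging entries on partial overwrites; this sketch only shows the any-to-any, range-encoded lookup itself.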
- the solid-state storage medium 140 may comprise a solid-state storage array 115 comprising a plurality of solid-state storage elements 116 A-Y.
- a solid-state storage array (or array) 115 refers to a set of two or more independent columns 118 .
- a column 118 may comprise one or more solid-state storage elements 116 A-Y that are communicatively coupled to the storage layer 130 in parallel using, inter alia, the interconnect 127 .
- Rows 117 of the array 115 may comprise physical storage units of the respective columns 118 (solid-state storage elements 116 A-Y).
- a solid-state storage element 116 A-Y includes, but is not limited to, solid-state storage resources embodied as: a package, chip, die, plane, printed circuit board, and/or the like.
- the solid-state storage elements 116 A-Y comprising the array 115 may be capable of independent operation. Accordingly, a first one of the solid-state storage elements 116 A may be capable of performing a first storage operation while a second solid-state storage element 116 B performs a different storage operation.
- the solid-state storage element 116 A may be configured to read data at a first physical address, while another solid-state storage element 116 B reads data at a different physical address.
- a solid-state storage array 115 may also be referred to as a logical storage element (LSE).
- the solid-state storage array 115 may comprise logical storage units (rows 117 ).
- a “logical storage unit” or row 117 refers to a logical construct combining two or more physical storage units, each physical storage unit on a respective column 118 of the array 115 .
- a logical erase block refers to a set of two or more physical erase blocks.
- a logical page refers to a set of two or more pages, and so on.
- a logical erase block may comprise erase blocks within respective logical storage elements 115 and/or banks.
- a logical erase block may comprise erase blocks within a plurality of different arrays 115 and/or may span multiple banks of solid-state storage elements.
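The row/column arithmetic behind logical storage units can be sketched as follows (the page size and column count are assumptions for illustration, not values from the disclosure). A logical page combines one physical page from each column, so a byte offset within the logical page resolves to a particular column and an offset within that column's physical page:

```python
# Assumed geometry: 24 independent columns 118, 2048-byte physical pages.
PHYSICAL_PAGE_SIZE = 2048
NUM_COLUMNS = 24

# A logical page (row 117) combines one physical page per column.
LOGICAL_PAGE_SIZE = PHYSICAL_PAGE_SIZE * NUM_COLUMNS

def locate(byte_offset):
    """Map a byte offset within a logical page to (column, offset-in-column),
    assuming bytes are laid out page-by-page across the columns."""
    column = byte_offset // PHYSICAL_PAGE_SIZE
    return column, byte_offset % PHYSICAL_PAGE_SIZE

print(LOGICAL_PAGE_SIZE)   # 49152
print(locate(5000))        # (2, 904)
```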
- the storage layer 130 may further comprise a log storage module 136 configured to store data on the storage medium 140 in a log-structured storage configuration (e.g., in a storage log).
- a “storage log” or “log structure” refers to an ordered arrangement of data within the storage address space 144 of the storage medium 140 .
- the log storage module 136 may be configured to append data sequentially within the storage address space 144 of the storage medium 140 .
- FIG. 1D depicts one embodiment of the storage address space 144 of the storage medium 140 .
- the storage address space 144 comprises a plurality of storage divisions (e.g., erase blocks, logical erase blocks, or the like), each of which can be initialized (e.g., erased) for use in storing data.
- the storage divisions 160 A-N may comprise respective storage locations, which may correspond to pages, logical pages and/or the like.
- the storage locations may be assigned respective storage addresses (e.g., storage address 0 to storage address N).
- the log storage module 136 may be configured to store data sequentially at an append point 180 within the physical address space 144 . Data may be appended at the append point 180 and, when the storage location 182 is filled, the append point 180 may advance 181 to a next available storage location.
- an “available” logical page refers to a logical page that has been initialized (e.g., erased) and has not yet been programmed. Some types of storage media can only be reliably programmed once after erasure. Accordingly, an available storage location may refer to a storage division 160 A-N that is in an initialized (or erased) state.
- Storage divisions 160 A-N may be reclaimed for use in a storage recovery process, which may comprise relocating valid data (if any) on the storage division 160 A-N that is being reclaimed to other storage division(s) 160 A-N and erasing the storage division 160 A-N.
- the logical erase block 160 B may be unavailable for storage due to, inter alia, not being in an erased state (e.g., comprising valid data), being out of service due to high error rates, and so on. Therefore, after filling the storage location 182 , the log storage module 136 may skip the unavailable storage division 160 B, and advance the append point 180 to the next available storage division 160 C.
- the log storage module 136 may be configured to continue appending data to storage locations 183 - 185 , at which point the append point 180 continues at a next available storage division 160 A-N, as disclosed above.
- After storing data on the "last" storage location within the storage address space 144 (e.g., storage location N 189 of storage division 160 N), the append point 180 wraps back to the first storage division 160 A (or the next available storage division, if storage division 160 A is unavailable). Accordingly, the log storage module 136 may treat the storage address space 144 as a loop or cycle.
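The append-point behavior described above can be sketched in a few lines of Python (hypothetical class, not the patent's implementation): the append point advances sequentially, skips divisions that are not in an erased/available state, and wraps back to the first division after the last. The sketch assumes at least one division is always available.

```python
class AppendPoint:
    """Sketch of append-point advancement over storage divisions, treating the
    storage address space as a cycle and skipping unavailable divisions."""
    def __init__(self, num_divisions, locations_per_division, unavailable):
        self.num_divisions = num_divisions
        self.locations_per_division = locations_per_division
        self.unavailable = set(unavailable)  # e.g., not erased, out of service
        self.division = 0
        self.location = 0

    def advance(self):
        # Move to the next storage location; on filling a division, skip any
        # unavailable divisions and wrap to division 0 after the last one.
        self.location += 1
        if self.location == self.locations_per_division:
            self.location = 0
            while True:
                self.division = (self.division + 1) % self.num_divisions
                if self.division not in self.unavailable:
                    break

ap = AppendPoint(num_divisions=4, locations_per_division=2, unavailable={1})
ap.advance()        # moves to the last location of division 0
ap.advance()        # division 0 now full: skip unavailable division 1
print(ap.division)  # 2
```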
- the storage layer 130 may be configured to modify and/or overwrite data out-of-place.
- modifying and/or overwriting data “out-of-place” refers to performing storage operations at different storage addresses rather than modifying and/or overwriting the data at its current storage location (e.g., overwriting the original physical location of the data “in-place”).
- Performing storage operations out-of-place may avoid write amplification, since existing, valid data on the storage division 160 A-N comprising the data that is being modified need not be erased and/or recopied.
- writing data "out-of-place" may remove erasure from the latency path of many storage operations (the erasure latency is no longer part of the "critical path" of a write operation).
- in the FIG. 1D embodiment, a storage operation to overwrite and/or modify the data corresponding to front-end address A (denoted A0) stored at storage location 191 may comprise storing the modified data A1 out-of-place at a different location (media address 193 ) within the storage address space 144 .
- Storing the data A1 may comprise updating the storage metadata 135 to associate the front-end address A with the storage address of storage location 193 and/or to invalidate the obsolete data A0 at storage address 191 .
- updating the storage metadata 135 may comprise updating an entry of the forward map 152 to associate the front-end address A 154 E with the storage address of the modified data A1 .
- the storage layer 130 is configured to scan the storage address space 144 of the storage medium 140 to identify storage divisions 160 A-N to reclaim. As disclosed above, reclaiming a storage division 160 A-N may comprise relocating valid data on the storage division 160 A-N (if any) and erasing the storage division 160 A-N.
- the storage layer 130 may be further configured to store data in association with persistent metadata (e.g., in a self-describing format).
- the persistent metadata may comprise information about the data, such as the front-end identifier(s) associated with the data, data size, data length, and the like.
- Embodiments of a packet format comprising persistent, contextual metadata pertaining to data stored within the storage log are disclosed in further detail below in conjunction with FIG. 3 .
- the storage layer 130 may be configured to reconstruct the storage metadata 135 by use of contents of the storage medium 140 .
- the current version of the data associated with front-end identifier A, stored at storage location 193 , may be distinguished from the obsolete version of the data A stored at storage location 191 based on the log order of the packets at storage locations 191 and 193 , respectively. Since the data packet at 193 is ordered after the data packet at 191 , the storage layer 130 may determine that storage location 193 comprises the most recent, up-to-date version of the data A. Accordingly, the reconstructed forward map 152 may associate front-end identifier A with the data stored at storage location 193 (rather than the obsolete data at storage location 191 ).
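The reconstruction logic described above can be sketched as a simple log replay (hypothetical data representation, assuming each packet carries its front-end identifier and a sequence indicator, as the self-describing format provides): replaying packets in log order makes later versions supersede obsolete ones automatically.

```python
def rebuild_forward_map(log):
    """Sketch of metadata reconstruction: replay self-describing packets in
    log order so that later packets for a front-end identifier supersede
    earlier, obsolete versions. `log` is an unordered list of
    (sequence_indicator, front_end_id, storage_address) tuples."""
    forward_map = {}
    for sequence, front_end_id, storage_addr in sorted(log):
        forward_map[front_end_id] = storage_addr  # the later sequence wins
    return forward_map

# Front-end identifier 'A' written at address 191, then overwritten
# out-of-place at address 193; replay keeps only the most recent version.
log = [(7, 'A', 191), (12, 'A', 193), (9, 'B', 205)]
print(rebuild_forward_map(log))   # {'A': 193, 'B': 205}
```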
- FIG. 2 is a block diagram of a system 200 comprising another embodiment of a storage layer 130 configured to manage data storage operations on a storage medium 140 .
- the storage medium 140 may comprise one or more independent banks 119 A-N of solid-state storage arrays 115 A-N.
- Each of the solid-state storage arrays 115 A-N may comprise a plurality of solid-state storage elements (columns 118 ) communicatively coupled in parallel via the interconnect 127 , as disclosed herein.
- the storage controller 139 may comprise a request module 231 configured to receive storage requests from the storage layer 130 and/or storage clients 106 .
- the request module 231 may be configured to transfer data to/from the storage controller 139 in response to the requests.
- the request module 231 may comprise and/or be communicatively coupled to one or more direct memory access (DMA) modules, remote DMA modules, interconnect controllers, bus controllers, bridges, buffers, network interfaces, and the like.
- the storage controller 139 may comprise a write module 240 configured to process data for storage on the storage medium 140 .
- the write module 240 comprises one or more stages configured to process and/or format data for storage on the storage medium 140 , which may include, but are not limited to: a compression module 242 , a packet module 244 , an ECC write module 246 , and a write buffer 250 .
- the write module 240 may further comprise a whitening module, configured to whiten data for storage on the storage medium 140 , one or more encryption modules configured to encrypt data for storage on the storage medium 140 , and so on.
- the read module 241 may comprise one or more modules configured to process and/or format data read from the storage medium 140 , which may include, but are not limited to: a read buffer 251 , the data layout module 248 , an ECC read module 247 , a depacket module 245 , and a decompression module 243 .
- the write module 240 comprises a write pipeline configured to process data for storage in a plurality of pipeline stages or modules, as disclosed herein.
- the read module 241 may comprise a read pipeline configured to process data read from the solid-state storage array 115 in a plurality of pipeline stages or modules, as disclosed herein.
- the compression module 242 may be configured to compress data for storage on the storage medium 140 .
- Data may be compressed using any suitable compression algorithm and/or technique.
- the data compression module 242 may be configured to compress the data, such that a compressed size of the data stored on the storage medium 140 differs from the original, uncompressed size of the data.
- the compression module 242 may be configured to compress data using different compression algorithms and/or compression levels, which may result in variable compression ratios between the original, uncompressed size of certain data segments and the size of the compressed data segments.
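The variable-ratio behavior above can be demonstrated with a short sketch; zlib merely stands in for "any suitable compression algorithm" of the disclosure, and the sample data is illustrative:

```python
import zlib

# Compressed size differs from the original size, and the ratio varies with
# both the data's compressibility and the selected compression level.
repetitive = b'abcd' * 1024          # highly compressible segment
mixed = bytes(range(256)) * 16       # less compressible segment

for label, segment in (('repetitive', repetitive), ('mixed', mixed)):
    for level in (1, 9):
        compressed = zlib.compress(segment, level)
        print(label, level, len(segment), len(compressed))
```

The round trip `zlib.decompress(zlib.compress(segment))` restores the original bytes, which is the property the depacket/decompression path of the read module relies on.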
- the compression module 242 may be further configured to perform one or more whitening transformations on the data segments and/or data packets generated by the packet module 244 (disclosed in further detail below).
- the data whitening transformations may comprise decorrelating the data, which may provide wear-leveling benefits for certain types of storage media.
- the compression module 242 may be further configured to encrypt data for storage on the storage medium 140 by use of one or more of a media encryption key, a user encryption key, and/or the like.
- the packet module 244 may be configured to generate data packets comprising data to be stored on the storage medium 140 .
- the write module 240 may be configured to store data in a storage log, in which data segments are stored in association with self-describing metadata in a packet format as illustrated in FIG. 3 .
- the packet module 244 may be configured to generate packets comprising a data segment 312 and persistent metadata 314 .
- the persistent metadata 314 may include one or more front-end addresses 315 associated with the data segment 312 .
- the data packets 310 may be associated with sequence information, such as a sequence indicator 318 , to define, inter alia, a log-order of the data packets 310 within the storage log on the storage medium 140 .
- the sequence indicator 318 may comprise one or more sequence numbers, timestamps, or other indicators from which a relative order of the data packets 310 stored on the storage medium 140 can be determined.
- the storage layer 130 may use the data packets 310 stored within the storage log on the storage medium 140 to reconstruct portions of the storage metadata 135 , which may include, but is not limited to: reconstructing any-to-any mappings 150 between front-end addresses and storage addresses (e.g., the forward map 152 ), a reverse map, and/or the like.
- the packet module 244 may be configured to generate packets of arbitrary lengths and/or sizes in accordance with the size of storage requests received via the request module 231 , data compression performed by the compression module 242 , configuration, preferences, and so on.
- the packet module 244 may be configured to generate packets of one or more pre-determined sizes.
- in response to a request to write 24 k of data to the solid-state storage medium 110 , the packet module 244 may be configured to generate six packets, each packet comprising 4 k of the data; in another embodiment, the packet module 244 may be configured to generate a single packet comprising 24 k of data in response to the request.
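The two packetization policies just described (fixed pre-determined packet size versus one packet per request) can be sketched as follows; the function name and interface are illustrative only:

```python
def packetize(data, packet_data_size=None):
    """Sketch of two packetization policies: split `data` into packets of a
    pre-determined data size, or emit a single packet for the whole request
    when no size is given."""
    if packet_data_size is None:
        return [data]
    return [data[i:i + packet_data_size]
            for i in range(0, len(data), packet_data_size)]

request = bytes(24 * 1024)                # a 24 k write request
print(len(packetize(request, 4 * 1024)))  # 6 packets of 4 k each
print(len(packetize(request)))            # 1 packet comprising all 24 k
```

In the full write path each of these data segments would additionally be wrapped with persistent metadata 314 (front-end identifiers, sequence information) before ECC encoding.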
- the persistent metadata 314 may comprise the front-end identifier(s) 315 corresponding to the packet data segment 312 . Accordingly, the persistent metadata 314 may be configured to associate the packet data segment 312 with one or more LIDs, LBAs, and/or the like. The persistent metadata 314 may be used to associate the packet data segment 312 with the front-end identifier(s) independently of the storage metadata 135 . Accordingly, the storage layer 130 may be capable of reconstructing the storage metadata 135 (e.g., the forward map 152 ) by use of the storage log stored on the storage medium 140 .
- the persistent metadata 314 may comprise other persistent metadata, which may include, but is not limited to, data attributes (e.g., an access control list), data segment delimiters, signatures, links, data layout metadata, and/or the like.
- the data packet 170 may be associated with a log sequence indicator 318 .
- the log sequence indicator 318 may be persisted on the storage division 160 A-N comprising the data packet 310 . Alternatively, the sequence indicator 318 may be persisted elsewhere on the storage medium 140 .
- the sequence indicator 318 is applied to the storage divisions 160 A-N when the storage divisions 160 A-N are reclaimed (e.g., erased, when the first or last storage unit is programmed, etc.).
- the log sequence indicator 318 may be used to determine the log-order of packets 310 within the storage log stored on the storage medium 140 (e.g., determine an ordered sequence of data packets 170 ).
- the ECC write module 246 may be configured to encode data packets 310 generated by the packet module 244 into respective ECC codewords.
- an ECC codeword refers to data and corresponding error detection and/or correction information.
- the ECC write module 246 may be configured to implement any suitable ECC algorithm and may be configured to generate corresponding ECC information (e.g., ECC codewords), which may include, but are not limited to: data segments and corresponding ECC syndromes, ECC symbols, ECC chunks, and/or other structured and/or unstructured ECC information.
- ECC codewords may comprise any suitable error-correcting encoding, including, but not limited to: block ECC encoding, convolutional ECC encoding, Low-Density Parity-Check (LDPC) encoding, Gallager encoding, Reed-Solomon encoding, Hamming codes, Multidimensional parity encoding, cyclic error-correcting codes, BCH codes, and/or the like.
- the ECC write module 246 may be configured to generate ECC codewords of a pre-determined size. Accordingly, a single packet may be encoded into a plurality of different ECC codewords and/or a single ECC codeword may comprise portions of two or more packets.
- the ECC write module 246 is configured to generate ECC codewords, each of which may comprise data of length N and a syndrome of length S.
- the ECC write module 246 may be configured to encode data segments into 240-byte ECC codewords, each ECC codeword comprising 224 bytes of data and 16 bytes of ECC syndrome information.
- the ECC encoding may be capable of correcting more bit errors than the manufacturer of the storage medium 140 requires.
- the ECC write module 246 may be configured to encode data in a symbolic ECC encoding, such that each data segment of length N produces a symbol of length X.
- the ECC write module 246 may encode data according to a selected ECC strength.
- the “strength” of an error-correcting code refers to the number of errors that can be detected and/or corrected by use of the error correcting code.
- the strength of the ECC encoding implemented by the ECC write module 246 may be adaptive and/or configurable. The strength of the ECC encoding may be selected according to the reliability and/or error rate of the storage medium 140 .
- the strength of the ECC encoding may be independent of the partitioning and/or data layout on the storage medium 140 , which may allow the storage layer 130 to select a suitable ECC encoding strength based on the conditions of the storage medium 140 , user requirements, and the like, as opposed to static and/or pre-determined ECC settings imposed by the manufacturer of the storage medium 140 .
- FIG. 4 depicts one embodiment of data flow 400 between the packet module 244 and an ECC write module 246 .
- the packet module 244 may be configured to generate packets 310 A- 310 N in response to one or more requests to store data on the storage medium 140 .
- the packets 310 A-N may comprise respective packet data segments 312 A, 312 B, and 312 N.
- the packets 310 A-N may further comprise persistent metadata embodied in respective packet headers 314 A, 314 B, and 314 N.
- the packets 310 A-N may be processed by, inter alia, the ECC write module 246 to generate ECC codewords 420 A-Z.
- the ECC codewords comprise ECC codewords 420 A- 420 Z, each of which may comprise a portion of one or more of the packets 310 A-N and a syndrome (not shown). In other embodiments, the ECC codewords may comprise ECC symbols or the like.
- the packets 310 A-N may vary in size in accordance with the size of the respective packet data segments 312 A-N and/or header information 314 A-N.
- the packet module 244 may be configured to generate packets 310 A-N of a fixed, uniform size.
- the ECC write module 246 may be configured to generate ECC codewords 420 A-N having a uniform, fixed size; each ECC codeword 420 A-N may comprise N bytes of packet data and S syndrome bytes, such that each ECC codeword 420 A-N comprises N+S bytes.
- each ECC codeword comprises 240 bytes, and includes 224 bytes of packet data (N) and 16 bytes of error correction code (S).
- the disclosed embodiments are not limited in this regard, however, and could be adapted to generate ECC codewords 420 A-N of any suitable size, having any suitable ratio between N and S.
- the ECC write module 246 may be further adapted to generate ECC symbols, or other ECC codewords, comprising any suitable ratio between data and ECC information.
- the ECC codewords 420 A-N may comprise portions of one or more packets 310 A-N; ECC codeword 420 D comprises data of packets 310 A and 310 B.
- the packets 310 A-N may be spread between a plurality of different ECC codewords 420 A-N: ECC codewords 420 A-D comprise data of packet 310 A; ECC codewords 420 D-H comprise data of packet 310 B; and ECC codewords 420 X-Z comprise data of packet 310 N.
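The relationship between variably sized packets and fixed-size ECC codewords can be sketched with a little arithmetic (the 224-byte data / 16-byte syndrome split matches the example above; the packet sizes and function are illustrative). Because packets are packed back-to-back into the codeword stream, one codeword may hold the tail of one packet and the head of the next:

```python
CODEWORD_DATA = 224      # N bytes of packet data per codeword
CODEWORD_SYNDROME = 16   # S bytes of ECC syndrome (240-byte codewords total)

def codeword_span(packet_sizes, index):
    """Return the (first, last) fixed-size ECC codewords holding packet
    `index` when variably sized packets are packed back-to-back into the
    codeword data stream."""
    start = sum(packet_sizes[:index])       # byte offset where the packet begins
    end = start + packet_sizes[index]       # byte offset where it ends
    return start // CODEWORD_DATA, (end - 1) // CODEWORD_DATA

# Three packets of 520, 600, and 200 bytes:
sizes = [520, 600, 200]
print(codeword_span(sizes, 0))   # (0, 2) -- packet 0 spans codewords 0-2
print(codeword_span(sizes, 1))   # (2, 4) -- codeword 2 holds parts of both
print(codeword_span(sizes, 2))   # (5, 5)
```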
- the write module 240 may further comprise a data layout module 248 configured to buffer data for storage on one or more of the solid-state storage arrays 115 A-N.
- the data layout module 248 may be configured to store data within one or more columns 118 of a solid-state storage array 115 .
- the data layout module 248 may be further configured to generate parity data corresponding to the layout and/or arrangement of the data on the storage medium 140 .
- the parity data may be configured to protect data stored within respective rows 117 of the solid-state storage array 115 A-N, and may be generated in accordance with the data layout implemented by the storage controller 139 .
- the write module 240 further comprises a write buffer 250 configured to buffer data for storage within respective page write buffers of the storage medium 140 .
- the write buffer 250 may comprise one or more synchronization buffers to synchronize a clock domain of the storage controller 139 with a clock domain of the storage medium 140 (and/or interconnect 127 ).
- the log storage module 136 may be configured to select storage location(s) for data storage operations and/or may provide addressing and/or control information to the storage controller 139 . Accordingly, the log storage module 136 may provide for storing data sequentially at an append point 180 within the storage address space 144 of the storage medium 140 .
- the storage address at which a particular data segment is stored may be independent of the front-end identifier(s) associated with the data segment.
- the translation module 134 may be configured to associate the front-end interface of data segments (e.g., front-end identifiers of the data segments) with the storage address(es) of the data segments on the storage medium 140 .
- the translation module 134 may leverage storage metadata 135 to perform logical-to-physical translations; the storage metadata 135 may include, but is not limited to: a forward map 152 comprising arbitrary, any-to-any mappings 150 between front-end identifiers and storage addresses; a reverse map comprising storage address validity indicators and/or any-to-any mappings between storage addresses and front-end identifiers; and so on.
- the storage metadata 135 may be maintained in volatile memory, such as the volatile memory 102 of the computing system 100 .
- the storage layer 130 is configured to periodically store portions of the storage metadata 135 on a persistent storage medium, such as the storage medium 140 , non-volatile storage resources 103 , and/or the like.
- the storage controller 139 may further comprise a read module 241 that is configured to read data from the storage medium 140 in response to requests received via the request module 231 .
- the read module 241 may be configured to process data read from the storage medium 140 , and provide the processed data to the storage layer 130 and/or a storage client 106 (by use of the request module 231 ).
- the read module 241 may comprise one or more modules configured to process and/or format data read from the storage medium 140 , which may include, but is not limited to: the read buffer 251 , the data layout module 248 , the ECC read module 247 , a depacket module 245 , and a decompression module 243 .
- the read module 241 further includes a dewhiten module configured to perform one or more dewhitening transforms on the data, a decryption module configured to decrypt encrypted data stored on the storage medium 140 , and so on.
- Data processed by the read module 241 may flow to the storage layer 130 and/or directly to the storage client 106 via the request module 231 , and/or other interface or communication channel (e.g., the data may flow directly to/from a storage client via a DMA or remote DMA module of the storage layer 130 ).
- Read requests may comprise and/or reference the data using the front-end interface of the data, such as a front-end identifier (e.g., a logical identifier, an LBA, a range and/or extent of identifiers, and/or the like).
- the back-end addresses associated with data of the request may be determined based, inter alia, on the any-to-any mappings 150 maintained by the translation module 134 (e.g., forward map 152 ), metadata pertaining to the layout of the data on the storage medium 140 , and so on.
- Data may stream into the read module 241 via a read buffer 251 .
- the read buffer 251 may correspond to page read buffers of one or more of the solid-state storage arrays 115 A-N.
- the read buffer 251 may comprise one or more synchronization buffers configured to synchronize a clock domain of the read buffer 251 with a clock domain of the storage medium 140 (and/or interconnect 127 ).
- the data layout module 248 may be configured to reconstruct one or more data segments from the contents of the read buffer 251 .
- Reconstructing the data segments may comprise recombining and/or reordering contents of the read buffer (e.g., ECC codewords) read from various columns 118 in accordance with a layout of the data on the solid-state storage arrays 115 A-N as indicated by the storage metadata 135 .
- reconstructing the data may comprise stripping data associated with one or more columns 118 from the read buffer 251 , reordering data of one or more columns 118 , and so on.
- the read module 241 may comprise an ECC read module 247 configured to detect and/or correct errors in data read from the solid-state storage medium 110 using, inter alia, the ECC encoding of the data (e.g., as encoded by the ECC write module 246 ), parity data (e.g., using parity substitution), and so on.
- the ECC encoding may be capable of detecting and/or correcting a pre-determined number of bit errors, in accordance with the strength of the ECC encoding.
- the ECC read module 247 may be capable of detecting more bit errors than can be corrected.
- the ECC read module 247 may be configured to correct any “correctable” errors using the ECC encoding. In some embodiments, the ECC read module 247 may attempt to correct errors that cannot be corrected by use of the ECC encoding using other techniques, such as parity substitution, or the like. Alternatively, or in addition, the ECC read module 247 may attempt to recover data comprising uncorrectable errors from another source. For example, in some embodiments, data may be stored in a RAID configuration. In response to detecting an uncorrectable error, the ECC read module 247 may attempt to recover the data from the RAID, or other source of redundant data (e.g., a mirror, backup copy, or the like).
- the ECC read module 247 may be configured to generate an interrupt in response to reading data comprising uncorrectable errors.
- the interrupt may comprise a message indicating that the requested data is in error, and may indicate that the ECC read module 247 cannot correct the error using the ECC encoding.
- the message may comprise the data that includes the error (e.g., the “corrupted data”).
- the interrupt may be caught by the storage layer 130 or other process, which, in response, may be configured to reconstruct the data using parity substitution, or other reconstruction technique, as disclosed herein.
- Parity substitution may comprise iteratively replacing portions of the corrupted data with a “parity mask” (e.g., all ones) until a parity calculation associated with the data is satisfied.
- the masked data may comprise the uncorrectable errors, and may be reconstructed using other portions of the data in conjunction with the parity data.
- Parity substitution may further comprise reading one or more ECC codewords from the solid-state storage array 115 A-N (in accordance with an adaptive data structure layout on the array 115 ), correcting errors within the ECC codewords (e.g., decoding the ECC codewords), and reconstructing the data by use of the corrected ECC codewords and/or parity data.
- the corrupted data may be reconstructed without first decoding and/or correcting errors within the ECC codewords.
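One concrete realization of the parity-based reconstruction described above can be sketched with XOR row parity (illustrative only; the disclosure's iterative parity-mask procedure and adaptive layouts are more general). A column whose data is uncorrectable is rebuilt from the surviving columns and the parity data:

```python
def xor(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A row of three data columns plus a parity column (parity = XOR of the data).
columns = [b'\x11\x22', b'\x33\x44', b'\x55\x66']
parity = xor(columns)

# Reconstruction sketch: discard the corrupted column and rebuild it from the
# remaining columns together with the parity data.
corrupted_index = 1
survivors = [c for i, c in enumerate(columns) if i != corrupted_index]
recovered = xor(survivors + [parity])
print(recovered == columns[corrupted_index])   # True
```

This works because XOR-ing every column with the parity cancels all terms except the missing one; in the array of FIG. 2 the same principle applies per row 117, after ECC decoding of the surviving codewords (or, as noted above, without first decoding them).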
- uncorrectable data may be replaced with another copy of the data, such as a backup or mirror copy.
- the storage layer 130 stores data in a RAID configuration, from which the corrupted data may be recovered.
- the solid-state storage medium 140 may be arranged into a plurality of independent banks 119 A-N.
- Each bank may comprise a plurality of solid-state storage elements arranged into respective solid-state storage arrays 115 A-N.
- the banks 119 A-N may be configured to operate independently; the storage controller 139 may configure a first bank 119 A to perform a first storage operation while a second bank 119 B is configured to perform a different storage operation.
- the storage controller 139 may further comprise a bank controller 252 configured to selectively route data and/or commands to respective banks 119 A-N.
- storage controller 139 is configured to read data from a bank 119 A while filling the write buffer 250 for storage on another bank 119 B and/or may interleave one or more storage operations between one or more banks 119 A-N.
- U.S. patent application Ser. No. 11/952,095 (Publication No. 2008/0229079 ), FIG. 6 .
- the storage layer 130 may further comprise a groomer module 138 configured to reclaim storage resources of the storage medium 140 .
- the groomer module 138 may operate as an autonomous, background process, which may be suspended and/or deferred while other storage operations are in process.
- the log storage module 136 and groomer module 138 may manage storage operations so that data is spread throughout the storage address space 144 of the storage medium 140 , which may improve performance and data reliability, and avoid overuse and underuse of any particular storage locations, thereby lengthening the useful life of the storage medium 140 (e.g., wear-leveling, etc.).
- data may be sequentially appended to a storage log within the storage address space 144 at an append point 180 , which may correspond to a particular storage address within one or more of the banks 119 A-N (e.g., physical address 0 of bank 119 A).
- upon reaching the end of the storage address space 144 , the append point 180 may revert to the initial position (or the next available storage location).
- operations to overwrite and/or modify data stored on the storage medium 140 may be performed “out-of-place.”
- the obsolete version of overwritten and/or modified data may remain on the storage medium 140 while the updated version of the data is appended at a different storage location (e.g., at the current append point 180 ).
- an operation to delete, erase, or TRIM data from the storage medium 140 may comprise indicating that the data is invalid (e.g., does not need to be retained on the storage medium 140 ).
- Marking data as invalid may comprise modifying a mapping between the front-end identifier(s) of the data and the storage address(es) comprising the invalid data, marking the storage address as invalid in a reverse map, and/or the like.
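The invalidation step just described can be sketched as follows (a minimal sketch with hypothetical map representations: the forward map as a dict of front-end identifier to storage address, and the reverse map as a dict of storage address to a validity flag):

```python
def trim(forward_map, reverse_map, front_end_id):
    """Sketch of a delete/erase/TRIM operation: remove the front-end mapping
    and mark the storage address invalid in the reverse map, so the groomer
    can later reclaim the location without relocating its data."""
    storage_addr = forward_map.pop(front_end_id, None)
    if storage_addr is not None:
        reverse_map[storage_addr] = False   # False == invalid
    return storage_addr

forward_map = {'A': 191}
reverse_map = {191: True}
trim(forward_map, reverse_map, 'A')
print(forward_map, reverse_map)   # {} {191: False}
```

Note that no storage operation touches the medium here: the data at address 191 simply stops being referenced, and physical reclamation is deferred to grooming.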
- the groomer module 138 may be configured to select sections of the solid-state storage medium 140 for grooming operations.
- a “section” of the storage medium 140 may include, but is not limited to: an erase block, a logical erase block, a die, a plane, one or more pages, a portion of a solid-state storage element 116 A-Y, a portion of a row 117 of a solid-state storage array 115 , a portion of a column 118 of a solid-state storage array 115 , and/or the like.
- a section may be selected for grooming operations in response to various criteria, which may include, but are not limited to: age criteria (e.g., data refresh), error metrics, reliability metrics, wear metrics, resource availability criteria, an invalid data threshold, and/or the like.
- a grooming operation may comprise relocating valid data on the selected section (if any).
- the operation may further comprise preparing the section for reuse, which may comprise erasing the section, marking the section with a sequence indicator, such as the sequence indicator 318 , and/or placing the section into a queue of storage sections that are available to store data.
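The grooming sequence just described (relocate valid data, erase, mark with a sequence indicator, queue for reuse) can be sketched as below. The function and field names are assumptions for illustration only.

```python
from collections import deque

# Illustrative grooming sketch; names are assumptions, not the patent's API.
def groom(section, valid_data, append, free_queue):
    """Relocate valid data from `section`, erase it, and queue it for reuse."""
    for item in valid_data:          # relocate valid data (if any) to the log
        append(item)
    section["erased"] = True         # prepare the section for reuse
    section["sequence"] = section.get("sequence", 0) + 1  # sequence indicator
    free_queue.append(section)       # now available to store new data

relocated = []
free_sections = deque()
sec = {"id": 3, "erased": False}
groom(sec, ["pkt-a", "pkt-b"], relocated.append, free_sections)
```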
- the groomer module 138 may be configured to schedule grooming operations with other storage operations and/or requests.
- the storage controller 139 may comprise a groomer bypass (not shown) configured to relocate data from a storage section by transferring data read from the section directly from the read module 241 into the write module 240 , without routing the data out of the storage controller 139 .
- the storage layer 130 may be further configured to manage out-of-service conditions on the storage medium 140 .
- a section of the storage medium 140 that is “out-of-service” (OOS) refers to a section that is not currently being used to store valid data.
- the storage layer 130 may be configured to monitor storage operations performed on the storage medium 140 and/or actively scan the solid-state storage medium 140 to identify sections that should be taken out of service.
- the storage metadata 135 may comprise OOS metadata that identifies OOS sections of the solid-state storage medium 140 .
- the storage layer 130 may be configured to avoid OOS sections by, inter alia, streaming padding (and/or nonce) data to the write buffer 250 such that padding data will map to the identified OOS sections.
- the storage layer 130 may be configured to manage OOS conditions by replacing OOS sections of the storage medium 140 with replacement sections.
- a hybrid OOS approach may be used that combines adaptive padding and replacement techniques; the padding approach to managing OOS conditions may be used in portions of the storage medium 140 comprising a relatively small number of OOS sections; as the number of OOS sections increases, the storage layer 130 may replace one or more of the OOS sections with replacement sections.
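The hybrid OOS policy can be sketched as a simple threshold decision. The threshold value and function names here are assumptions; the patent does not specify a particular cutoff.

```python
# Hypothetical hybrid OOS policy: pad around a few OOS sections,
# switch to replacement sections when OOS sections become numerous.
OOS_REPLACE_THRESHOLD = 4  # assumed tunable; not specified by the patent

def handle_oos(oos_sections, replacements):
    if len(oos_sections) < OOS_REPLACE_THRESHOLD:
        # few OOS sections: stream padding so that it maps onto them
        return [("pad", s) for s in oos_sections]
    # many OOS sections: remap each to a replacement section
    return [("replace", s, r) for s, r in zip(oos_sections, replacements)]

actions = handle_oos([11, 12], replacements=[])
```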
- the storage medium 140 may comprise one or more solid-state storage arrays 115 A-N.
- a solid-state storage array 115 A-N may comprise a plurality of independent columns 118 (respective solid-state storage elements 116 A-Y), which may be coupled to the storage layer 130 in parallel via the interconnect 127 . Accordingly, storage operations performed on an array 115 A-N may be performed on a plurality of solid-state storage elements 116 A-Y.
- Performing a storage operation on a solid-state storage array 115 A-N may comprise performing the storage operation on each of the plurality of solid-state storage elements 116 A-Y comprising the array 115 A-N: a read operation may comprise reading a physical storage unit (e.g., page) from a plurality of solid-state storage elements 116 A-Y; a program operation may comprise programming a physical storage unit (e.g., page) on a plurality of solid-state storage elements 116 A-Y; an erase operation may comprise erasing a section (e.g., erase block) on a plurality of solid-state storage elements 116 A-Y; and so on.
- a program operation may comprise the write module 240 streaming data to program buffers of a plurality of solid-state storage elements 116 A-Y (via the write buffer 250 and interconnect 127 ) and, when the respective program buffers are sufficiently full, issuing a program command to the solid-state storage elements 116 A-Y.
- the program command may cause one or more storage units on each of the storage elements 116 A-Y to be programmed in parallel.
- FIG. 5A depicts another embodiment 500 of a solid-state storage array 115 .
- the solid-state storage array 115 may comprise a plurality of independent columns 118 , each of which may correspond to a respective set of one or more solid-state storage elements 116 A-Y.
- the solid-state storage array 115 comprises 25 columns 118 (e.g., solid-state storage element 0 116 A through solid-state storage element 24 116 Y).
- the solid-state storage elements 116 A-Y comprising the array may be communicatively coupled to the storage layer 130 in parallel by the interconnect 127 .
- the interconnect 127 may be capable of communicating data, addressing, and/or control information to each of the solid-state storage elements 116 A-Y.
- the parallel connection may allow the storage controller 139 to manage the solid-state storage elements 116 A-Y in parallel, as a single, logical storage element.
- the solid-state storage elements 116 A-Y may be partitioned into sections, such as physical storage divisions 530 (e.g., physical erase blocks). Each erase block may comprise a plurality of physical storage units 532 , such as pages. The physical storage units 532 within a physical storage division 530 may be erased as a group.
- Although FIG. 5A depicts a particular partitioning scheme, the disclosed embodiments are not limited in this regard, and could be adapted to use solid-state storage elements 116 A-Y partitioned in any suitable manner.
- the columns 118 of the array 115 may correspond to respective solid-state storage elements 116 A-Y. Accordingly, the array 115 of FIG. 5A comprises 25 columns 118 . Rows 117 of the array may correspond to physical storage units 532 and/or physical storage divisions 530 of a plurality of the columns 118 . In other embodiments, the columns 118 may comprise multiple solid-state storage elements.
- FIG. 5B is a block diagram 501 of another embodiment of a solid-state storage array 115 .
- the solid-state storage array 115 may comprise a plurality of rows 117 , which may correspond to storage units on a plurality of different columns 118 within the array 115 .
- the rows 117 of the solid-state storage array 115 may include logical storage divisions 540 , which may comprise physical storage divisions on a plurality of the solid-state storage elements 116 A-Y.
- a logical storage division 540 may comprise a logical erase block, comprising physical erase blocks of the solid-state storage elements 116 A-Y within the array 115 .
- a logical page 542 may comprise physical storage units (e.g., pages) on a plurality of the solid-state storage elements 116 A-Y.
- Storage operations performed on the solid-state storage array 115 may operate on multiple solid-state storage elements 116 A-Y: an operation to program data to a logical storage unit 542 may comprise programming data to each of 25 physical storage units (e.g., one storage unit per non-volatile storage element 116 A-Y); an operation to read data from a logical storage unit 542 may comprise reading data from 25 physical storage units (e.g., pages); an operation to erase a logical storage division 540 may comprise erasing 25 physical storage divisions (e.g., erase blocks); and so on. Since the columns 118 are independent, storage operations may be performed across different sets and/or portions of the array 115 .
- a read operation on the array 115 may comprise reading data from a physical storage unit 532 at a first physical address of solid-state storage element 116 A and reading data from a physical storage unit 532 at a different physical address of one or more other solid-state storage elements 116 B-Y.
- Arranging solid-state storage elements 116 A-Y into a solid-state storage array 115 may be used to address certain properties of the storage medium 140 .
- Some embodiments may comprise an asymmetric storage medium 140 , in which it takes longer to program data onto the solid-state storage elements 116 A-Y than it takes to read data therefrom (e.g., 10 times as long).
- data may only be programmed to physical storage divisions 530 that have first been initialized (e.g., erased). Initialization operations may take longer than program operations (e.g., 10 times as long as a program, and by extension 100 times as long as a read operation).
- Managing groups of solid-state storage elements 116 A-Y in an array 115 may allow the storage layer 130 to perform storage operations more efficiently, despite the asymmetric properties of the storage medium 140 .
- the asymmetry in read, program, and/or erase operations is addressed by performing these operations on multiple solid-state storage elements 116 A-Y in parallel.
- programming asymmetry may be addressed by programming 25 storage units in a logical storage unit 542 in parallel.
- Initialization operations may also be performed in parallel.
- Physical storage divisions 530 on each of the solid-state storage elements 116 A-Y may be initialized as a group (e.g., as logical storage divisions 540 ), which may comprise erasing 25 physical erase blocks in parallel.
- portions of the solid-state storage array 115 may be configured to store data and other portions of the array 115 may be configured to store error detection and/or recovery information.
- Columns 118 used for data storage may be referred to as “data columns” and/or “data solid-state storage elements.”
- Columns used to store data error detection and/or recovery information may be referred to as a “parity column” and/or “recovery column.”
- the array 115 may be configured in an operational mode in which one of the solid-state storage elements 116 Y is used to store parity data, whereas other solid-state storage elements 116 A-X are used to store data. Accordingly, the array 115 may comprise data solid-state storage elements 116 A-X and a recovery solid-state storage element 116 Y.
- the effective storage capacity of the rows may be reduced by one physical storage unit (e.g., reduced from 25 physical pages to 24 physical pages).
- the “effective storage capacity” of a storage unit refers to the number of storage units or divisions that are available to store data and/or the total amount of data that can be stored on a logical storage unit.
- the operational mode described above may be referred to as a “24+1” configuration, denoting that twenty-four (24) physical storage units 532 are available to store data, and one (1) of the physical storage units 532 is used for parity.
- the disclosed embodiments are not limited to any particular operational mode and/or configuration, and could be adapted to use any number of the solid-state storage elements 116 A-Y to store error detection and/or recovery data.
- FIG. 5C is a block diagram of a system 502 comprising a storage controller 139 configured to manage storage divisions (logical erase blocks 540 ) that span multiple arrays 115 A-N of multiple banks 119 A-N.
- Each bank 119 A-N may comprise one or more solid-state storage arrays 115 A-N, which, as disclosed herein, may comprise a plurality of solid-state storage elements 116 A-Y coupled in parallel by a respective bus 127 A-N.
- the storage controller 139 may be configured to perform storage operations on the storage elements 116 A-Y of the arrays 115 A-N in parallel and/or in response to a single command and/or signal.
- the storage controller 139 may be configured to manage groups of logical erase blocks 540 that include erase blocks of multiple arrays 115 A-N within different respective banks 119 A-N. Each group of logical erase blocks 540 may comprise erase blocks 531 A-N on each of the arrays 115 A-N. The erase blocks 531 A-N comprising the logical erase block group 540 may be erased together (e.g., in response to a single erase command and/or signal or in response to a plurality of separate erase commands and/or signals). Performing erase operations on logical erase block groups 540 comprising large numbers of erase blocks 531 A-N within multiple arrays 115 A-N may further mask the asymmetric properties of the solid-state storage medium 140 , as disclosed herein.
- the storage controller 139 may be configured to perform some storage operations within boundaries of the arrays 115 A-N and/or banks 119 A-N.
- the read, write, and/or program operations may be performed within rows 117 of the solid-state storage arrays 115 A-N (e.g., on logical pages 542 A-N within arrays 115 A-N of respective banks 119 A-N).
- the logical pages 542 A-N of the arrays 115 A-N may not extend beyond single arrays 115 A-N and/or banks 119 A-N.
- the log storage module 136 and/or bank interleave module 252 may be configured to append data to the storage medium 140 by interleaving and/or scheduling storage operations sequentially between the arrays 115 A-N of the banks 119 A-N.
- FIG. 5D depicts one embodiment of storage operations that are interleaved between solid-state storage arrays 115 A-N of respective banks 119 A-N.
- the bank interleave module 252 is configured to interleave programming operations between logical pages 542 A-N (rows 117 ) of the arrays 115 A-N within the banks 119 A-N.
- the write module 240 may comprise a write buffer 250 , which may have sufficient capacity to fill one or more logical pages 542 A-N of an array 115 A-N.
- the storage controller 139 may be configured to stream the contents of the write buffer 250 to program buffers of the solid-state storage elements 116 A-Y comprising one of the banks 119 A-N.
- the write module 240 may then issue a program command and/or signal to the solid-state storage array 115 A-N to store the contents of the program buffers to a specified logical page 542 A-N.
- the log storage module 136 and/or bank interleave module 252 may be configured to provide control and addressing information to the solid-state storage elements 116 A-Y of the array 115 A-N using a bus 127 A-N, as disclosed above.
- the bank interleave module 252 may be configured to append data to the solid-state storage medium 140 by programming data to the arrays 115 A-N in accordance with a sequential interleave pattern.
- the sequential interleave pattern may comprise programming data to a first logical page (LP_0) of array 115 A within bank 119 A, followed by the first logical page (LP_0) of array 115 B within the next bank 119 B, and so on, until data is programmed to the first logical page LP_0 of each array 115 A-N within each of the banks 119 A-N.
- data may be programmed to the first logical page LP_0 of array 115 A in bank 119 A in a program operation 243 A.
- the bank interleave module 252 may then stream data to the first logical page (LP_0) of the array 115 B in the next bank 119 B.
- the data may then be programmed to LP_0 of array 115 B in bank 119 B in a program operation 243 B.
- the program operation 243 B may be performed concurrently with the program operation 243 A on array 115 A of bank 119 A; the data write module 240 may stream data to array 115 B and/or issue a command and/or signal for the program operation 243 B, while the program operation 243 A is being performed on the array 115 A.
- Data may be streamed to and/or programmed on the first logical page (LP_0) of the arrays 115 C-N of the other banks 119 C- 119 N following the same sequential interleave pattern (e.g., after data is streamed and/or programmed to LP_0 of array 115 B of bank 119 B, data is streamed and/or programmed to LP_0 of array 115 C of bank 119 C in program operation 243 C, and so on).
- the bank interleave controller 252 may be configured to begin streaming and/or programming data to the next logical page (LP_1) of array 115 A within the first bank 119 A, and the interleave pattern may continue accordingly (e.g., program LP_1 of array 115 B bank 119 B, followed by LP_1 of array 115 C bank 119 C through LP_1 of array 115 N bank 119 N, followed by LP_2 of array 115 A bank 119 A, and so on).
- Sequentially interleaving programming operations as disclosed herein may increase the time between concurrent programming operations on the same array 115 A-N and/or bank 119 A-N, which may reduce the likelihood that the storage controller 139 will have to stall storage operations while waiting for a programming operation to complete.
- programming operations may take significantly longer than other operations, such as read and/or data streaming operations (e.g., operations to stream the contents of the write buffer 250 to an array 115 A-N via the bus 127 A-N).
- the interleave pattern of FIG. 5D may be configured to avoid consecutive program operations on the same array 115 A-N and/or bank 119 A-N; programming operations on a particular array 115 A-N may be separated by N-1 programming operations on other banks (e.g., programming operations on array 115 A are separated by programming operations on arrays 115 B-N). As such, a programming operation on array 115 A is likely to be complete before another programming operation needs to be performed on that array.
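The sequential interleave pattern described above (LP_0 across each bank in turn, then LP_1 across each bank, and so on) can be sketched as a simple generator. The function name and parameters are illustrative, not the patent's API.

```python
# Sketch of the sequential interleave pattern: program LP_0 on banks 0..N-1,
# then LP_1 on banks 0..N-1, and so on. Names are assumptions for illustration.
def interleave(num_banks, num_pages):
    for lp in range(num_pages):
        for bank in range(num_banks):
            yield (bank, lp)

order = list(interleave(num_banks=4, num_pages=2))
# Consecutive program operations on the same bank are separated by
# operations on the other num_banks - 1 banks.
```

With four banks, any two consecutive operations on the same bank are four positions apart, giving the earlier program operation time to complete.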
- the interleave pattern for programming operations may comprise programming data sequentially across rows 117 (e.g., logical pages 542 A-N) of a plurality of arrays 115 A-N.
- the interleave pattern may result in interleaving programming operations between arrays 115 A-N of banks 119 A-N, such that the erase blocks of each array 115 A-N (erase block groups EBG_0-N) are filled at the same rate.
- the sequential interleave pattern programs data to the logical pages of the first erase block group (EBG_0) in each array 115 A-N before programming data to logical pages LP_0 through LP_N of the next erase block group (EBG_1), and so on (e.g., wherein each erase block comprises 0-N pages).
- the interleave pattern continues until the last erase block group EBG_N is filled, at which point the interleave pattern continues back at the first erase block group EBG_0.
- the erase block groups of the arrays 115 A-N may, therefore, be managed as logical erase blocks 540 A-N that span the arrays 115 A-N.
- a logical erase block group 540 may comprise erase blocks 531 A-N on each of the arrays 115 A-N within the banks 119 A-N.
- managing groups of erase blocks (e.g., logical erase block group 540 ) may comprise erasing corresponding erase block groups across each of the arrays 115 A-N in the banks 119 A-N.
- erasing the logical erase block group 540 A may comprise erasing EBG_0 of arrays 115 A-N in banks 119 A-N
- erasing a logical erase block group 540 B may comprise erasing EBG_1 of arrays 115 A-N in banks 119 A-N
- erasing logical erase block group 540 C may comprise erasing EBG_2 of arrays 115 A-N in banks 119 A-N
- erasing logical erase block group 540 N may comprise erasing EBG_N of arrays 115 A-N in banks 119 A-N.
- recovering the logical erase block group 540 A may comprise relocating valid data (if any) stored on EBG_0 on arrays 115 A-N in banks 119 A-N, erasing the erase blocks of each EBG_0 in arrays 115 A-N, and so on.
- in an embodiment comprising four banks 119 A-N, each comprising a respective solid-state storage array 115 A-N of 25 storage elements 116 A-Y, erasing, grooming, and/or recovering a logical erase block group 540 comprises erasing, grooming, and/or recovering 100 physical erase blocks 530 .
- although particular multi-bank embodiments are described herein, the disclosure is not limited in this regard and could be configured using any multi-bank architecture comprising any number of banks 119 A-N of arrays 115 A-N comprising any number of solid-state storage elements 116 A-Y.
- the storage layer 130 may be configured to store data segments in one or more different configurations, arrangements and/or layouts within a solid-state storage array 115 A-N (by use of the data layout module 248 ).
- the data layout module 248 may be configured to buffer and/or arrange data in the write module 240 for storage in a particular arrangement within one or more of the solid-state storage arrays 115 A-N.
- the data layout module 248 may configure data for “horizontal” storage within rows 117 of the array 115 (e.g., horizontally within logical storage units 542 of the array 115 ).
- a datastructure such as an ECC codeword, packet, or the like, may be spread across a plurality of the storage elements 116 A-Y comprising the logical storage unit 542 .
- data may be stored horizontally within one or more independent “channels” of the array 115 .
- an independent channel or “channel” refers to a subset of one or more columns 118 of the array 115 (e.g., respective subsets of solid-state storage elements 116 A-Y).
- Data may be arranged for storage within respective independent channels.
- An array 115 comprising N columns 118 may be divided into a configurable number of independent channels X, each comprising Y columns 118 of the array 115 . In embodiments comprising 24 data columns 118 , the channel configurations may include, but are not limited to: 24 channels each comprising a single column 118 ; 12 channels each comprising two solid-state storage elements; eight channels each comprising three solid-state storage elements; six channels each comprising four columns 118 ; and so on.
- the array 115 may be divided into heterogeneous channels, such as a first channel comprising 12 columns 118 and six other channels each comprising two columns 118 .
- the data layout module 248 may be configured to arrange data for storage in a vertical code word configuration (disclosed in further detail below).
- FIG. 6A is a block diagram of a system 600 comprising one embodiment of a storage controller 139 comprising a data layout module 248 configured to arrange data for storage on a solid-state storage array 115 in a horizontal configuration.
- the solid-state storage array 115 comprises 25 solid-state storage elements 116 A-Y operating in a “24+1” configuration, in which 24 of the solid-state storage elements 116 A-X are used to store data, and one storage element ( 116 Y) is used to store parity data.
- the write module 240 may comprise a packet module 244 configured to generate data packets comprising data segments for storage on the array 115 , as disclosed above.
- the packet module 244 is configured to format data into a packet format 610 , comprising a packet data segment 612 and persistent metadata 614 (e.g., header).
- the header 614 may comprise a front-end identifier of the packet data segment 612 , a sequence number, and/or the like, as disclosed above.
- the packet module 244 is configured to generate packets 610 of a fixed size (520-byte packet data segment 612 and 8 bytes of metadata 614 ).
- the ECC write module 246 is configured to generate ECC datastructures (ECC codewords 620 ) comprising portions of one or more packets 610 , as disclosed above.
- the ECC codewords 620 may be of a fixed size. In the FIG. 6A example, each ECC codeword 620 comprises 224 bytes of packet data and a 16-byte error-correcting code or syndrome. Although particular sizes and/or configurations of packets 610 and ECC codewords 620 are disclosed herein, the disclosure is not limited in this regard and could be adapted to use any size packets 610 and/or ECC codewords 620 .
- the size of the datastructures may vary.
- the size and/or contents of the packets 610 and/or ECC codewords 620 may be adapted according to out-of-service conditions, as disclosed above.
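The fixed-size packet and ECC codeword formats above (528-byte packets; 240-byte codewords of 224 data bytes plus a 16-byte code) can be sketched as a packing routine. The 16-byte syndrome here is a zero-filled placeholder, not a real ECC computation, and the function names are assumptions.

```python
# Illustrative packing of fixed-size packets into fixed-size ECC codewords,
# using the sizes given in the text. The "code" bytes are placeholders only.
PACKET_SIZE = 520 + 8          # packet data segment + persistent metadata
ECC_DATA, ECC_CODE = 224, 16   # 240-byte codewords

def to_codewords(packets):
    stream = b"".join(packets)
    codewords = []
    for i in range(0, len(stream), ECC_DATA):
        chunk = stream[i:i + ECC_DATA].ljust(ECC_DATA, b"\0")
        codewords.append(chunk + b"\0" * ECC_CODE)  # placeholder syndrome
    return codewords

cws = to_codewords([bytes(PACKET_SIZE), bytes(PACKET_SIZE)])
```

Two 528-byte packets yield 1056 bytes of packet data, which spans five 240-byte codewords, with the last codeword partially padded; this matches the text's point that codeword boundaries need not align with packet boundaries.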
- the data layout module 248 may be configured to lay out data for horizontal storage within rows 117 of the array 115 .
- the data layout module 248 may be configured to buffer and/or arrange data segments (e.g., the ECC codewords 621 , 622 , and 623 ) into data rows 667 comprising 24 bytes of data.
- the data layout module 248 may be capable of buffering one or more ECC codewords 620 (by use of the write buffer 251 ). In the FIG. 6A embodiment, data layout module 248 may be configured to buffer 10 24-byte data rows, which is sufficient to buffer a full 240-byte ECC codeword 620 .
- the data layout module 248 may be further configured to stream 24-byte data rows to a parity module 637 , which may be configured to generate a parity byte for each 24-byte group.
- the data layout module 248 streams the resulting 25-byte data rows 667 to the array 115 via the bank controller 252 and interconnect 127 (and/or write buffer 250 , as disclosed above).
- the storage controller 139 may be configured to stream the data rows 667 to respective program buffers of the solid-state storage array 115 (e.g., stream to program buffers of respective solid-state storage elements 116 A-Y). Accordingly, each cycle of the interconnect 127 may comprise transferring a byte of a data row 667 to a program buffer of a respective solid-state storage element 116 A-Y.
- the solid-state storage elements 116 A-X receive data bytes of a data row 667 and solid-state storage element 116 Y receives the parity byte of the data row 667 .
- data of the ECC codewords 620 may be byte-wise interleaved between the solid-state storage elements 116 A-X of the array 115 ; each solid-state storage element 116 A-X receives 10 bytes of each 240 byte ECC codeword 620 .
- a data row 667 refers to a data set comprising data for each of a plurality of columns 118 within the array 115 .
- the data row 667 may comprise a byte of data for each column 0-23.
- the data row 667 may further comprise a parity byte corresponding to the data bytes (e.g., a parity byte corresponding to the data bytes for columns 0-23).
- Data rows 667 may be streamed to respective program buffers of the solid-state storage elements 116 A-Y via the interconnect 127 .
- streaming a 240-byte ECC codeword 620 to the array 115 may comprise streaming 10 separate data rows 667 to the array 115 , each data row comprising 24 data bytes (one for each data solid-state storage element 116 A-X) and a corresponding parity byte.
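The horizontal layout above can be sketched as follows: a 240-byte codeword is split into ten 24-byte data rows, each row gains a parity byte, and column k of the array receives byte k of every row (10 bytes per element). XOR parity is assumed here for illustration; the text says only "parity byte" without specifying the computation.

```python
import functools
import operator

# Sketch of horizontal striping with a per-row parity byte (XOR assumed).
def stripe(codeword, data_cols=24):
    rows = []
    for i in range(0, len(codeword), data_cols):
        row = codeword[i:i + data_cols]
        parity = functools.reduce(operator.xor, row, 0)
        rows.append(bytes(row) + bytes([parity]))   # 24 data bytes + 1 parity
    return rows

rows = stripe(bytes(range(240)))
```

Each 25-byte row maps one byte to each of the 25 storage elements, so element 0 receives bytes 0, 24, 48, ... of the codeword, consistent with the byte-wise interleaving described above.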
- the storage locations of the solid-state storage array 115 may be capable of storing a large number of ECC codewords 620 and/or packets 610 .
- the solid-state storage elements may comprise 8 KB pages, such that the storage capacity of a storage location (row 117 ) is 192 KB.
- each storage location within the array 115 may be capable of storing approximately 819 240-byte ECC codewords 620 (352 packets 610 ).
- the storage address of a data segment may, therefore, comprise: a) the address of the storage location on which the ECC codewords 620 and/or packets 610 comprising the data segment are stored, and b) an offset of the ECC codewords 620 and/or packets 610 within the row 117 .
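The capacity figures above follow from simple arithmetic, sketched below under the stated assumptions (24 data columns with 8 KB pages, 240-byte codewords).

```python
# Arithmetic behind the capacity example above (8 KB pages assumed).
PAGE_BYTES = 8 * 1024
DATA_COLUMNS = 24
CODEWORD_BYTES = 240

row_capacity = DATA_COLUMNS * PAGE_BYTES            # bytes per row 117
codewords_per_row = row_capacity // CODEWORD_BYTES  # whole ECC codewords per row
```

24 pages of 8 KB give 196,608 bytes per row, which holds 819 whole 240-byte codewords; the offset portion of a storage address must therefore distinguish on the order of 800 codeword positions within a row.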
- the storage location or offset 636 of the packet 610 A within the logical page 542 A may be determined based on the horizontal layout of the data packet 610 A.
- the offset 636 may identify the location of the ECC codewords 621 , 622 , and/or 623 comprising the packet 610 A (and/or may identify the location of the last ECC codeword 623 comprising data of the packet 610 A). Accordingly, in some embodiments, the offset may be relative to one or more datastructures on the solid-state storage array 115 (e.g., a packet offset and/or ECC codeword offset).
- Another offset 638 may identify the location of the last ECC codeword of a next packet 610 (e.g., packet 610 B), and so on.
- each of the ECC codewords 621 , 622 , and 623 is horizontally spread across the storage elements 116 A-Y comprising the logical page 542 A (e.g., 10 bytes of each of the ECC codewords 621 , 622 , and 623 are stored on each solid-state storage element 116 A-X). Accessing the packet 610 A may, therefore, comprise accessing each of the ECC codewords 621 , 622 , and 623 (and each of the storage elements 116 A-X).
- FIG. 6B is a block diagram of a system 601 depicting one embodiment of a storage controller 139 configured to store data in a horizontal storage configuration.
- the FIG. 6B embodiment depicts a horizontal layout of an ECC codeword 621 on the array 115 of FIG. 6A .
- Data D 0 denotes a first byte of the ECC codeword 621
- data D 239 denotes the last byte (byte 240) of the ECC codeword 621 .
- each column 118 of the solid-state storage array 115 comprises 10 bytes of the ECC codeword 621 , and the data of the ECC codeword 621 is horizontally spread across a row 117 of the array 115 (e.g., horizontally spread across solid-state storage elements 116 A-X of the array 115 ).
- FIG. 6B also depicts a data row 667 as streamed to (and stored on) the solid-state storage array 115 . As illustrated in FIG. 6B , the data row 667 comprises bytes D 0 through D 23 of the ECC codeword 621 , each stored on a respective one of the columns 118 .
- the data row 667 further comprises a parity byte 668 corresponding to the contents of the data row 667 (bytes D 0 through D 23 ).
- reading data of the ECC codeword 621 may require accessing a plurality of columns 118 .
- the smallest read unit may be an ECC codeword 620 (and/or packet 610 ).
- reading a data segment may comprise determining the storage address of the data by use of, inter alia, the translation module 134 (e.g., the forward map 152 ).
- the storage address may comprise a) the address of the storage location (logical page) on which the ECC codewords and/or packets comprising the requested data are stored, and b) the offset of the ECC codewords and/or packets within the particular storage location.
- the translation module 134 may be configured to maintain a forward map 152 configured to index front-end identifiers to storage addresses on the storage medium 140 .
- the storage address of data may comprise a) the address of the storage location (logical page) comprising the data and b) an offset of the data within the storage location.
- the storage addresses 156 A-D of the entries 153 within forward map 152 may be segmented into a first portion comprising an address of a storage location and a second portion comprising the offset of the data within the storage location.
- Portions of the storage metadata 135 may be stored in volatile memory of the computing system 100 and/or storage layer 130 .
- the memory footprint of the storage metadata 135 may grow in proportion to the number of entries 153 that are included in the forward map 152 , as well as the size of the entries 153 themselves.
- the memory footprint of the forward map 152 may be related to the size (e.g., number of bits) used to represent the storage address of each entry 153 .
- the memory footprint of the forward map 152 may impact the performance of the computing system 100 hosting the storage layer 130 .
- the computing device 100 may exhaust its volatile memory resources 102 , and be forced to page swap memory to non-volatile storage resources 103 , or the like. Even small reductions in the size of the entries 153 may have a significant impact on the overall memory footprint of the storage metadata 135 when scaled to a large number of entries 153 .
- the size (e.g., number of bits) of the storage addresses 154 A-D may also determine the storage capacity that the forward map 152 is capable of referencing (e.g., may determine the number of unique storage locations that can be referenced by the entries 153 of the forward map 152 ).
- the entries 153 may comprise 32-bit storage addresses 154 A-D. As disclosed above, a portion of each 32-bit storage address 154 A-D may be used to address a specific storage location (e.g., logical page), and another portion of the storage address 154 A-D may determine the offset within the storage location. If 4 bits are needed to represent storage location offsets, the 32-bit storage addresses 154 A-D may only be capable of addressing 2^28 unique storage locations.
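The address split above can be sketched with simple bit arithmetic. This is an illustrative sketch, not code from the specification; the 4-bit offset width is the example value from the preceding paragraph, and real widths depend on the storage configuration.

```python
# Sketch of splitting a 32-bit storage address into a storage-location
# portion and an offset portion (4 offset bits, per the example above).
ADDRESS_BITS = 32
OFFSET_BITS = 4

def split_address(addr: int) -> tuple:
    """Return (storage location, offset) for a combined storage address."""
    offset = addr & ((1 << OFFSET_BITS) - 1)   # low bits: offset within the logical page
    location = addr >> OFFSET_BITS             # high bits: logical page address
    return location, offset

# With 4 offset bits, only 2**(32 - 4) = 2**28 unique storage locations remain.
print(1 << (ADDRESS_BITS - OFFSET_BITS))  # 268435456
```

Shrinking or eliminating the offset portion in the in-memory map is what frees address bits to reference more storage locations.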
- the storage layer 130 comprises an offset index module 249 configured to determine the offsets of data segments within storage locations of the storage medium 140 .
- the offset index module 249 may be further configured to generate an offset index configured to map front-end identifiers of the data segments to respective offsets within the storage locations.
- the offset index may be configured for storage on the storage medium 140 .
- the offset index module 249 may, therefore, segment storage addresses into a first portion configured to address a storage location (logical page) on the storage medium 140 , and a second portion corresponding to an offset within the storage location.
- the storage controller 139 may be configured to store the offset index (the second portion of the storage addresses) on the storage medium 140 .
- the translation module 134 may be configured to index front-end addresses of the data using the first portion of the storage addresses.
- the second portion of the storage addresses may be omitted from the forward map 152 , which may reduce the memory overhead of the forward map 152 and/or enable the forward map 152 to reference a larger storage address space 144 .
- FIG. 7A depicts one embodiment of a system 700 for referencing data on a storage medium.
- the system 700 comprises a forward map 152 that includes an entry 153 configured to associate a front-end address 754 D with a storage address 756 .
- Other entries 153 of the forward map 152 are omitted from FIG. 7A to avoid obscuring the details of the depicted embodiments.
- the offset index module 249 may segment the storage address into a first portion 757 and a second portion 759 D.
- the first portion 757 may correspond to an address of a storage location and the second portion 759 D may identify an offset of the data segment within the storage location (e.g., within a logical page 542 ).
- the relative size of the offset portion 759 D of the storage address 756 to the storage location portion 757 may be based on the size of the data packets 610 A-N stored on the solid-state storage array 115 , the size of the logical page 542 , and/or the layout of the packets 610 A-N within the array 115 .
- the logical pages 542 may be used in a “24+1” horizontal storage configuration, comprising 24 data columns and a parity column, such that the physical storage capacity of the logical pages 542 within the array 115 is 24 times larger than the page size of the solid-state storage elements 116 A-Y (e.g., 192 kb for solid-state storage elements 116 A-Y comprising 8 kb pages).
- each logical page 542 may be capable of storing a relatively large number of data segments and/or packets 610 A-N.
- the disclosure is not limited in this regard, however, and could be adapted for use with any number of solid-state storage elements 116 A-Y having any suitable page size, storage configuration, and/or data layout.
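The capacity arithmetic of the "24+1" example can be shown directly. The values below are the example figures from the text (24 data columns, one parity column, 8 kb pages); other configurations scale the same way.

```python
# Arithmetic behind the "24+1" horizontal configuration example:
# 24 data columns plus one parity column, with 8 kb pages per element.
data_columns = 24
parity_columns = 1   # parity column adds no usable data capacity
page_size_kb = 8

logical_page_kb = data_columns * page_size_kb
print(logical_page_kb)  # 192
```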
- the data segment mapped to the front-end address 754 may be stored in the packet 610 D.
- the storage location address 757 (first portion of the storage address 756 ) comprises the media address of the logical page 542 within the array 115 .
- the offset 759 D indicates an offset of the packet 610 D within the logical page 542 .
- the offset index module 249 may be configured to determine the offset of the packet 610 D within the logical page 542 (as the packet 610 D is stored on the storage medium 140 ).
- the offset index module 249 may be further configured to generate an offset index 749 configured for storage on the storage medium 140 .
- the offset index 749 may comprise mappings between front-end identifiers 754 A-N of the data segments stored on the logical page 542 and the respective offsets of the data segments within the logical page 542 (e.g., the offsets of the data packets 610 A-N comprising the data segments).
- the storage layer 130 may be configured to store the offset index 749 on the storage medium 140 . As illustrated in FIG.
- the offset index 749 is stored on the corresponding storage location 542 (on the same logical page 542 comprising packets 610 A-N indexed by the offset index 749 ).
- the offset index 749 may be stored on a different storage location.
- the storage layer 130 may be configured to leverage the on-media offset index 749 to reduce the size of the entries 153 in the forward map 152 and/or enable the entries 153 to reference larger storage address spaces 144 .
- the entry 153 may include only the first portion (storage location address 757 ) of the storage address 756 .
- the storage layer 130 may be configured to omit and/or exclude the second portion of the address (the offset portion 759 D) from the index entries 153 .
- the storage layer 130 may determine the full storage address of a data segment by use of the storage location address 757 maintained within the forward map 152 and the offset index 749 stored on the storage medium 140 . Accordingly, accessing data associated with the front-end address 754 D may comprise a) accessing the storage location address 757 within the entry 153 corresponding to the front-end address 754 D in the forward map 152 , b) reading the offset index 749 from the logical page 542 at the specified storage location address 757 , and c) accessing the packet 610 D comprising the data segment at offset 759 D by use of the offset index 749 .
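The three-step access described above can be sketched as follows. All names and data structures here are hypothetical stand-ins (a dict for the forward map, a dict for the storage medium); the point is only the order of the lookups: forward map first, then the on-media offset index, then the packet.

```python
# Hypothetical sketch of the a)-b)-c) lookup: the forward map holds only
# the storage-location portion of the address; the offset index lives on
# the storage location (logical page) itself.
forward_map = {"754D": 0x1000}            # front-end address -> logical page address

storage_medium = {                         # logical page address -> page contents
    0x1000: {
        "packets": {0: "610A", 1: "610B", 2: "610C", 3: "610D"},
        "offset_index": {"754A": 0, "754B": 1, "754C": 2, "754D": 3},
    }
}

def read(front_end_address):
    location = forward_map[front_end_address]          # a) forward map lookup
    page = storage_medium[location]                    # b) read the logical page
    offset = page["offset_index"][front_end_address]   #    parse on-media offset index
    return page["packets"][offset]                     # c) access packet at the offset

print(read("754D"))  # 610D
```

Note the trade-off this sketch makes visible: the in-memory map stores one fewer field per entry, at the cost of parsing the offset index during each read.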
- the storage layer 130 may be configured to store data packets 610 A-N that are of a fixed, predetermined size. Accordingly, the offset of a particular data packet 610 A-N may be determined based on its sequential order within the logical page 542 .
- the offset index module 249 may generate an offset index 749 comprising an ordered list of front-end identifiers 754 A-N, which omits the specific offsets of the corresponding data packets 610 A-N.
- the offsets of the fixed-sized data packets 610 A-N may be determined based on the order of the front-end identifiers 754 A-N.
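With fixed-size packets, the derivation of an offset from the ordered list can be sketched as below. The 512-byte packet size is an assumed illustrative value, not a figure from the specification.

```python
# Sketch: with fixed-size packets, the on-media offset index may store only
# an ordered list of front-end identifiers; byte offsets follow from order.
PACKET_SIZE = 512  # bytes; illustrative value (assumption)

offset_index = ["754A", "754B", "754C", "754D"]  # order of packets in the page

def offset_of(front_end_id):
    """Derive a packet's byte offset from its position in the ordered index."""
    return offset_index.index(front_end_id) * PACKET_SIZE

print(offset_of("754C"))  # 1024
```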
- the offset index 749 may comprise an offset of the first data packet 610 A in the logical page 542 , and may omit offsets of the subsequent packets 610 B-N. In other embodiments, the offset index 749 may comprise offsets to other data structures within the storage location, such as the offset of particular ECC codewords 620 , as disclosed herein. The offsets may be derived from the offset index 749 using any suitable mechanism. In some embodiments, for example, the logical page 542 may store data structures having a variable size; the offset index 749 may be configured to list the front-end identifiers of the data structures along with a length or size of each data structure.
- the logical page 542 may be segmented into a plurality of fixed-sized “chunks,” and the data of a front-end identifier may occupy one or more of the chunks.
- the offset index 749 may comprise a bitmap (or other suitable data structure) indicating which chunks are occupied by data of which front-end identifiers.
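A chunk-occupancy index of this kind can be sketched as below. The chunk size and the list-of-owners representation are illustrative assumptions; an actual embodiment might pack the same information into a bitmap per front-end identifier.

```python
# Sketch of a chunk-occupancy index: the logical page is divided into
# fixed-sized chunks, and the index records which front-end identifier
# occupies each chunk. Variable-sized data may span several chunks.
CHUNK_SIZE = 512  # bytes; illustrative value (assumption)

# chunk_map[i] names the owner of chunk i (None = unoccupied).
chunk_map = ["754A", "754A", "754B", None, "754C", "754C", "754C", None]

def extent_of(front_end_id):
    """Return (byte offset, byte length) of an identifier's data in the page."""
    chunks = [i for i, owner in enumerate(chunk_map) if owner == front_end_id]
    return chunks[0] * CHUNK_SIZE, len(chunks) * CHUNK_SIZE

print(extent_of("754C"))  # (2048, 1536)
```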
- the storage controller 139 may be configured to append the offset index to the “tail” of the logical page 542 .
- the disclosure is not limited in this regard, however, and could be adapted to store the offset index 749 at any suitable location within the logical page 542 and/or on another storage location of the storage medium 140 .
- the offset index module 249 may be configured to determine the offset of data segments as the data segments are stored on the storage medium 140 . Determining offsets of the data segments may comprise determining the offset of one or more data packets 610 and/or ECC codewords 620 comprising the segments, as disclosed above. Determining the offsets may further comprise monitoring the status of the write buffer 250 , OOS (out-of-service) conditions within one or more of the solid-state storage arrays 115 A-N, and so on. The offset index module 249 may be further configured to generate an offset index 749 for storage on the storage medium 140 . The offset index 749 may be stored at a predetermined location (e.g., offset) within the storage location that the offset index 749 describes.
- the offset index 749 may flow into the write buffer 250 and onto program buffers of a corresponding solid-state storage array 115 A-N, as disclosed herein.
- the data segments (data packets 610 and/or ECC codewords 620 ) and the offset index 749 may be written onto a storage location within one of the arrays 115 A-N in response to a program command, as disclosed herein.
- the translation module 134 may be configured to omit offset information from the index 152 , as disclosed herein.
- Reading data corresponding to a front-end address may comprise accessing an entry 153 associated with the front-end address to determine the physical address of the storage location comprising the requested data.
- the read module 241 may be configured to read the storage location by, inter alia, issuing a read command to one of the solid-state storage arrays 115 A-N, which may cause the storage elements 116 A-Y comprising the array 115 A-N to transfer the contents of a particular page into a read buffer.
- the offset index module 249 may be configured to determine the offset of the requested data by a) streaming the portion of the read buffer 251 comprising the offset index 749 into the read module 241 and b) parsing the offset index 749 to determine the offset of the requested data. The read module 241 may then access the portions of the read buffer 251 comprising the requested data by use of the determined offset.
- the packet module 244 may be configured to store data segments 312 in a packet format 310 that comprises persistent metadata 314 .
- the persistent metadata 314 may comprise one or more front-end identifiers 315 corresponding to the data segment 312 . Inclusion of the front-end interface metadata 315 may increase the on-media overhead imposed by the packet format 310 .
- the offset index 749 generated by the offset index module 249 which, in some embodiments, is stored with the corresponding data packets, may also include the front-end interface of the data segment 312 . Accordingly, in some embodiments, the packet format 310 may be modified to omit front-end interface metadata from the persistent metadata 314 .
- the horizontal data configuration implemented by the data layout module 248 may spread ECC codewords 620 (and the corresponding packets 610 and/or data segments) across the columns 0-23 (solid-state storage elements 116 A-X). As such, reading data of the ECC codeword 621 may require accessing a plurality of columns 118 . Moreover, the smallest read unit may be an ECC codeword 620 (and/or packet 610 ). Reading a packet 310 stored horizontally on the solid-state storage array 115 may, therefore, incur significant overhead. Referring back to FIG.
- reading the packet 610 A may require transferring data of the logical page 542 A into respective read buffers of the storage elements 116 A-X (e.g., storage elements 0 through 23). Transferring the contents of a page into the read buffer may incur a latency of Tr (read latency).
- read time or read latency Tr refers to the time needed to transfer the contents of a physical storage unit (e.g., physical page) into a read buffer of a solid-state storage element 116 A-Y.
- the read time Tr may, therefore, refer to the time required to transfer a physical page of each of the solid-state storage elements 116 A-X into a respective read buffer.
- the read time Tr of a logical storage unit 650 may correspond to the “slowest” read time of the constituent storage elements 116 A-X.
- the read module 241 may be configured to perform a read operation to read a storage location of one of the solid-state storage arrays 115 A, transfer the contents of the storage location into respective read buffers of the solid-state storage elements 116 A-Y, and stream the data into the read buffer 251 by use of the 24-byte interconnect 127 and/or bank controller 252 .
- the stream time (Ts) of the read operation may refer to the time required to stream the ECC codewords 620 (and/or packets 610 ) into the read module 241 .
- the stream time Ts may be 10 cycles of the interconnect 127 because, as disclosed above, each column 118 of the array 115 comprises 10 bytes of the ECC codeword 620 . Therefore, although the horizontal arrangement may incur a relatively high retrieval overhead, the stream overhead is relatively low (only 10 cycles).
- an input/output operations per second (IOPS) metric may be quantified.
- the IOPS to read an ECC codeword 620 may be expressed as:
- IOPS_r = C / (Tr + Ts)    Eq. 1
- In Equation 1, Tr is the read time of the solid-state storage elements 116 A-Y, Ts is the stream time (e.g., the clock speed times the number of cycles required), and C is the number of independent columns 118 used to store the data. Equation 1 may be scaled by the number of independent banks 119 A-N available to the storage layer 130 . In the horizontal data structure layout of FIGS. 6A and 6B , Equation 1 may be expressed as:
- IOPS_r = 24 / (Tr + 10 * Sc)    Eq. 2
- In Equation 2, the number of columns is twenty-four (24), and Sc is the cycle time of the bus 127 .
- the cycle time is scaled by 10 since, as disclosed above, a horizontal 240-byte ECC codeword 620 may be streamed in 10 cycles of the interconnect 127 .
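Plugging illustrative numbers into the Equation 1 form for the horizontal layout (C = 24 columns, 10 bus cycles of stream time per 240-byte ECC codeword) gives a concrete feel for the metric. The timing values below are assumptions chosen only for the sketch, not figures from the specification.

```python
# Numeric sketch of the horizontal IOPS expression, Equation 1 form:
# IOPS_r = C / (Tr + Ts), with C = 24 and Ts = 10 bus cycles.
Tr = 50.0      # page read latency, microseconds (assumed value)
Sc = 0.01      # bus cycle time, microseconds (assumed value)

def iops_horizontal(columns=24, stream_cycles=10):
    """Horizontal layout: 24 columns, 10-cycle stream per 240-byte codeword."""
    return columns / (Tr + stream_cycles * Sc)

# Result is in operations per microsecond under the assumed timings.
print(iops_horizontal())
```

Because Tr dominates Sc under these assumptions, the short 10-cycle stream time contributes almost nothing to the denominator, matching the observation above that horizontal stream overhead is relatively low.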
- the storage layer 130 may be configured to store data in different configurations, layouts, and/or arrangements within a solid-state storage array 115 .
- the data layout module 248 is configured to arrange data within respective independent columns, each comprising a subset of the columns 118 of the array 115 (e.g., subsets of the solid-state storage elements 116 A-Y).
- the data layout module 248 may be configured to store data vertically within respective “vertical stripes.” The vertical stripes may have a configurable depth, which may be a factor of the page size of the solid-state storage elements 116 A-Y comprising the array 115 .
- FIG. 8A depicts another embodiment of a system 800 for referencing data on a storage medium 140 .
- the data layout module 248 is configured to store data in a vertical layout within the array 115 .
- the data write module 240 may be configured to buffer ECC codewords 620 for storage on respective columns 118 of the solid-state storage array 115 (including the ECC codewords 621 , 622 , and 623 disclosed herein).
- the ECC codewords 620 may be streamed to respective columns 118 of the array 115 through a write buffer 250 , as disclosed above. Accordingly, each cycle of the interconnect 127 may comprise streaming a byte of a different respective ECC codeword 620 to each of the columns 118 (solid-state storage elements 116 A-X).
- the write module 240 may be further configured to generate parity data 637 corresponding to the different ECC codewords 620 for storage on a parity column (e.g., solid-state storage element 116 Y). Accordingly, each stream cycle may comprise streaming a byte of a respective ECC codeword 620 to a respective column 118 along with a corresponding parity byte to a parity column 118 .
- the data layout module 248 may be configured to buffer and rotate ECC codewords for vertical storage within respective columns 118 of the array 115 : the ECC codewords 621 , 622 , and 623 comprising the data segment 612 A may stream to (and be stored vertically on) column 0 (solid-state storage element 116 A), other ECC codewords 620 comprising other data segments may be stored vertically within other columns 118 of the array 115 .
- Solid-state storage element 116 Y may be configured to store parity data corresponding to the ECC codewords, as disclosed above. Alternatively, the parity column 24 may be used to store additional ECC codeword data.
- the storage controller 139 may comprise a plurality of packet modules 242 and/or ECC write modules 246 (e.g., multiple, independent write modules 240 ) configured to operate in parallel. Data of the parallel write modules 240 may flow into the data layout module 248 in a checkerboard pattern such that the data is arranged in the vertical format disclosed herein.
- the vertical arrangement of FIG. 8A may comprise the data layout module 248 arranging ECC codewords 620 for storage within respective columns 118 of the array 115 .
- each data row 667 streamed to the array 115 may comprise a byte corresponding to a respective ECC codeword 620 .
- the data row 667 may further comprise a corresponding parity byte; the data rows 667 may be configured to stream data of respective ECC codewords 620 to program buffers of respective data columns (e.g., solid-state storage elements 116 A-Y), and a corresponding parity byte to a parity column (e.g., column 116 Y).
- since the data rows 667 are stored with byte-wise parity information, each byte of a row 667 stored within the solid-state storage elements 116 A-X may be reconstructed by use of the other bytes in the row 667 (stored in the other solid-state storage elements 116 A-X) and the corresponding parity byte.
- FIG. 8B depicts another embodiment of system 801 for referencing data on a storage medium.
- FIG. 8B depicts one embodiment of a vertical data arrangement within a solid-state storage array 115 .
- data D 0 through D 239 of the ECC codeword 621 is stored vertically in column 0
- Data O 0 through O 239 of another ECC codeword 620 is stored vertically in column 1
- Data Q 0 through Q 239 of another ECC codeword 620 is stored vertically in column 2
- data Z 0 through Z 239 of another ECC codeword 620 is stored vertically in column 23.
- the vertical storage configuration of other data of other ECC codewords 620 (R-Y) is also depicted.
- FIG. 8B also depicts one embodiment of a data row 667 as streamed to, and stored on, the solid-state storage array 115 .
- the data row 667 comprises a byte of each of a plurality of ECC codewords 620 (ECC codewords D, O, R, S, T, U . . . V, W, X, Y, and Z), each of which is streamed to, and stored within, a respective column 118 (respective solid-state storage element 116 A-X).
- the data row 667 further comprises a parity byte 668 corresponding to the data within the data row 667 . Accordingly, the parity byte 668 corresponds to byte 0 of ECC codewords D, O, R, S, T, U . . . V, W, X, Y, and Z.
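Byte-wise parity of this kind is typically an XOR across the row, which allows any single lost byte to be recovered. The sketch below uses arbitrary example byte values; the XOR construction is the standard single-parity technique and is an assumption here, since the specification does not name the parity function.

```python
# Sketch of byte-wise row parity: the parity byte 668 is the XOR of one
# byte from each ECC codeword in the data row 667, so any single missing
# byte can be reconstructed from the survivors plus the parity byte.
from functools import reduce
from operator import xor

row = [0x44, 0x4F, 0x52, 0x53, 0x54]      # byte 0 of codewords D, O, R, S, T (example)
parity = reduce(xor, row)                  # parity byte 668

# Reconstruct the byte from codeword O (index 1) as if its column were lost:
survivors = row[:1] + row[2:]
reconstructed = reduce(xor, survivors) ^ parity
print(hex(reconstructed))  # 0x4f
```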
- the vertical data layout of FIGS. 8A-B may result in a different IOPS metric.
- the vertical arrangement of the ECC codewords 620 may reduce overhead due to read time Tr, but may increase the stream overhead Ts.
- each byte on the bus 127 may correspond to a different, respective data segment (e.g., different ECC codeword 620 ).
- 24 different ECC codewords 620 may be streamed in parallel (as opposed to streaming a single ECC codeword 620 as in the horizontal arrangement example).
- each transferred logical page may comprise data of a separate request (e.g., may represent data of 24 different read requests).
- because each ECC codeword 620 is arranged vertically, the stream time Ts for an ECC codeword 620 may be increased; the stream time of 240-byte ECC codewords 620 in a vertical configuration may be 240 cycles, as opposed to 10 cycles in the fully horizontal layout of FIGS. 6A and 6B .
- the IOPS metric for a single ECC codeword 620 may be represented as:
- IOPS_r = 1 / (Tr + 240 * Sc)    Eq. 3
- the reduced IOPS metric may be offset by the increased throughput (reduced read overhead) and/or different Tr and Ts latency times. These considerations may vary from device to device and/or application to application. Moreover, the IOPS metric may be ameliorated by the fact that multiple, independent ECC codewords 620 can be streamed simultaneously. Therefore, in some embodiments, the data layout used by the storage layer 130 (and data layout module 248 ) may be configurable (e.g., by a user setting or preference, firmware update, or the like).
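The trade-off described above (lower per-codeword IOPS, offset by 24-way parallelism) can be checked numerically. The timing values are the same illustrative assumptions as before, not figures from the specification.

```python
# Sketch comparing the horizontal and vertical IOPS expressions under
# assumed timings: 10 stream cycles per codeword horizontally versus
# 240 cycles vertically, with 24 vertical codewords streamed in parallel.
Tr = 50.0   # page read latency, microseconds (assumed value)
Sc = 0.01   # bus cycle time, microseconds (assumed value)

horizontal = 24 / (Tr + 10 * Sc)       # Equation 1 form, C = 24, Ts = 10 cycles
vertical_single = 1 / (Tr + 240 * Sc)  # single vertical codeword, Ts = 240 cycles
vertical_aggregate = 24 * vertical_single  # 24 independent codewords in parallel

print(vertical_single < horizontal)    # True: per-codeword IOPS drops
print(vertical_aggregate > vertical_single)  # True: parallelism offsets the drop
```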
- the pages of the solid-state storage elements 116 A-Y may be capable of storing a large number of ECC codewords 620 and/or data packets 610 . Accordingly, the vertical data arrangement of FIGS. 8A-B may comprise storing ECC codewords 620 and/or data packets 610 corresponding to different front-end addresses within the same columns 118 of the array.
- FIG. 8C depicts one embodiment of a system 802 for referencing data stored in a vertical data layout.
- the offset index module 249 may be configured to segment the storage addresses into a first portion 1057 that identifies the vertical column 118 comprising the data (e.g., the particular page(s) comprising the data segment) and a second portion that identifies the offset of the data segments within the vertical column 118 .
- the packet 810 C comprising the data segment corresponding to front-end address 854 B is stored in a vertical data arrangement within a page of solid-state storage element 116 B.
- the offset index module 249 may be configured to determine the offsets of the packets stored within the page, and to generate an offset index 749 that maps the front-end identifiers of the packets 810 A-N to respective offsets 859 A-N of the packets within the vertical data arrangement within the page.
- the storage controller 139 may be configured to store the offset index 749 within the page comprising the packets 810 A-N indexed thereby.
- the packets 810 A-N are of variable size and, as such, the offset index 749 may associate front-end identifiers 854 A-N with respective offsets 859 A-N.
- the offsets 859 A-N may be inferred from the order of the packets within the vertical column arrangement.
- the forward map 152 may be configured to index front-end identifiers to pages of respective solid-state storage elements 116 A-Y. Accordingly, the forward map 152 may include a subset of the full storage address 1057 (the portion of the address that identifies the particular page comprising the data segment), and may omit addressing information pertaining to the offset of the data segment within the page.
- the storage layer 130 may be configured to access the data segment corresponding to front-end address 854 B by: a) identifying the page comprising the data segment associated with the front-end address 854 B by use of the forward map 152 ; b) reading the identified page; c) determining the offset of the data packet 810 B by use of the offset index 749 stored on the identified page; and d) reading the packet 810 B at the determined offset.
- the data layout module 248 may be configured to lay out and/or arrange data in an adaptive channel configuration.
- an adaptive channel configuration refers to a data layout in which the columns 118 of the array 115 are divided into a plurality of independent channels, each channel comprising a set of columns 118 of the solid-state storage array 115 .
- the channels may comprise subsets of the solid-state storage elements 116 A-Y.
- an adaptive channel configuration may comprise a fully horizontal data layout, in which data segments are stored within a channel comprising 24 columns 118 of the array 115 , as disclosed in conjunction with FIGS. 6A-B and 7 A-C.
- the adaptive channel configuration may comprise a vertical configuration, in which data segments are stored within one of 24 different channels, each comprising a single column 118 of the array 115 , as disclosed in conjunction with FIGS. 10A-C .
- the data layout module 248 may be configured to store data in other adaptive channel configurations and/or layouts on the solid-state storage array 115 .
- FIG. 9A depicts another embodiment of a system 900 for adaptive data storage.
- the data layout module 248 is configured to store data structures in adaptive channels comprising six solid-state storage elements 116 A-Y (six independent columns 118 per channel). Accordingly, data segments may be stored within respective independent channels, each comprising six columns 118 of the array 115 .
- the data layout module 248 may be configured to buffer four ECC codewords 620 to stream to the array 115 .
- Each of the four ECC codewords 621 , 622 , 623 , and 624 may stream to a respective set of six columns 118 within the array 115 .
- the data layout module 248 may be configured to buffer 24/N ECC codewords 620 , where N corresponds to the configuration of the adaptive channels used for each ECC codeword 620 .
- ECC codewords 620 may be stored within independent channels comprising N columns 118 (e.g., N solid-state storage elements 116 A-Y). Accordingly, the horizontal arrangement of FIGS. 6A-B could be referred to as an adaptive channel configuration comprising 24 column independent channels, and the vertical data structure configuration of FIGS. 8A-C could be referred to as an adaptive channel configuration comprising independent channels comprising a single column 118 .
- the storage controller 139 may be configured to arrange data in any suitable hybrid arrangement, including heterogeneous sets of independent channels.
- the data layout module 248 may be configured to buffer two ECC codewords 620 in a 12-column adaptive channel configuration (e.g., store ECC codewords 620 across each of 12 columns 118 ), buffer six ECC codewords 620 in a four-column adaptive channel configuration (e.g., store ECC codewords 620 across each of four columns 118 ), and so on.
- data segments may be arranged in adjacent columns 118 within the array 115 (e.g., a data structure may be stored in columns 0-4). Alternatively, columns may be non-adjacent and/or interleaved with other data segments (e.g., a data segment may be stored on columns 0, 2, 4, and 6 and another data segment may be stored on columns 2, 3, 5, and 7).
- the data layout module 248 may be configured to adapt the data layout in accordance with out-of-service conditions within the array 115 ; if a column 118 (or portion thereof) is out of service, the data layout module 248 may be configured to adapt the data layout accordingly (e.g., arrange data to avoid the out-of-service portions of the array 115 , as disclosed above).
- FIG. 9B depicts another embodiment 901 of a six column independent channel data layout.
- data of an ECC codeword (data D 0-239 ) may be stored within a channel comprising columns 0-5 of the array 115 and data of another ECC codeword (data Z 0-239 ) may be stored within an independent channel comprising columns 20-23, and so on.
- FIG. 9B further depicts a data row 667 , which includes six bytes of four different ECC codewords, including D and Z (bytes D0-5 and Z0-5).
- the data row 667 may further comprise a parity byte 668 corresponding to the contents of the data row 667 , as disclosed above.
- the stream time Ts of an ECC codeword 620 in the independent channel embodiments of FIGS. 9A-B may be 40 cycles of the bus 127 (e.g., 240/N cycles).
- An IOPS metric of a six independent column data layout may be represented as: IOPS_r = 6 / (Tr + 40 * Sc)    Eq. 4
- the IOPS metric may be modified according to a number of data segments that can be read in parallel.
- the six-column independent channel configuration may enable four different ECC codewords (and/or packets) to be read from the array 115 concurrently.
- FIG. 9C depicts another embodiment of a system 902 for referencing data stored in an adaptive, independent channel layout.
- data packets 910 A-N comprising respective data segments are stored in independent channels comprising six columns 118 of the array 115 , such that each independent channel comprises six solid-state storage elements 116 A-Y.
- the offset index module 249 may be configured to segment the storage addresses of the data packets 910 A-N into a first portion comprising the physical address 957 of an independent channel, which may correspond to a page address on each of six solid-state storage elements.
- the independent channel address 957 corresponds to a page on solid-state storage elements 0-5.
- the second portion of the storage address may correspond to offsets of the data packets 910 A-N within the independent channel.
- the offset index module 249 may be configured to generate an offset index 749 configured to map front-end addresses 954 A-N to corresponding offsets within the independent channel 957 .
- the data packets 910 A-N may be of fixed size and, as such, the offset index 749 may indicate the order of the data packets 910 A-N within the independent channel as opposed to specifying particular offsets.
- the storage layer 130 may be configured to store data in an adaptive vertical stripe configuration.
- a vertical stripe configuration refers to storing data structures vertically within vertical stripes having a predetermined depth within the columns 118 of the solid-state storage array. Multiple vertical stripes may be stored within rows 117 of the array 115 . The depth of the vertical stripes may, therefore, determine read-level parallelism, whereas the vertical ECC configuration may provide error detection, correction, and/or reconstruction benefits.
- FIG. 10A depicts one embodiment of a vertical stripe data configuration 1000 within a logical page 542 (row 117 ) of a solid-state storage array 115 .
- a vertical stripe may comprise vertically arranged data structures within respective columns 118 of the array 115 .
- the vertical stripes 646 A-N have a configurable depth or length.
- the vertical stripes 646 A-N are configured to have a depth sufficient to store four ECC codewords.
- the depth of the vertical stripes 646 A-N corresponds to an integral factor of ECC codeword size relative to a page size of the solid-state storage elements 116 comprising the array 115 .
- the page size of the solid-state storage elements 116 may be 16 kb, each page may be configured to hold four vertical stripes 646 A-N, and each vertical stripe may be configured to hold four 1 kb vertically aligned ECC codewords.
- the disclosed embodiments are not limited in this regard, however, and could be adapted to use any storage medium 140 having any page size in conjunction with any ECC codeword size and/or vertical stripe depth.
- the depth of the vertical stripes 646 A-N and the size of typical read operations may determine, inter alia, the number of channels (columns) needed to perform read operations (e.g., determine the number of channels used to perform a read operation, stream time Ts, and so on).
- a 4 kb data packet may be contained within 5 ECC codewords, including ECC codewords 3 through 7. Reading the 4 kb packet from the array 115 may, therefore, comprise reading data from two columns (columns 0 and 1).
- a larger 8 kb data structure may span 10 ECC codewords (ECC codewords 98-107), and as such, reading the 8 kb data structure may comprise reading data from three columns of the array (columns 0, 1, and 2).
- Configuring the vertical stripes 646 A-N with an increased depth may decrease the number of columns needed to perform a read operation, which may increase the stream time Ts for the individual read, but may allow other independent read operations to be performed in parallel. Decreasing the depth may increase the number of columns needed for read operations, which may decrease the stream time Ts, but may decrease the number of other, independent read operations that can be performed in parallel.
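The column counts in the examples above follow from simple division of codeword positions by the stripe depth. The sketch below reproduces them (1 kb codewords, depth expressed in codewords per column); the function name is illustrative.

```python
# Sketch of the depth/parallelism trade-off: with a stripe depth of
# `depth` ECC codewords per column, a read spanning a contiguous run of
# codewords touches the following number of columns.
def columns_spanned(first_codeword, codeword_count, depth):
    first_col = first_codeword // depth
    last_col = (first_codeword + codeword_count - 1) // depth
    return last_col - first_col + 1

# Examples from the text: codewords 3-7 at depth 4 touch two columns;
# codewords 98-107 at depth 4 touch three columns.
print(columns_spanned(3, 5, 4))    # 2
print(columns_spanned(98, 10, 4))  # 3
# Increasing the depth narrows the column span of the same run:
print(columns_spanned(98, 10, 8))  # 2
```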
- FIG. 10B depicts embodiments of vertical stripes 1001 , each having a different respective depth.
- the vertical stripes 607 may comprise 1 kb, vertically aligned ECC codewords as disclosed above in conjunction with FIG. 8A-C .
- a 16 kb data structure 610 (packet) may be stored within a 4 kb deep vertical stripe 746 A.
- the data structure 610 may be contained within 17 separate ECC codewords spanning five columns of the array 115 (columns 0 through 4). Accordingly, reading the data structure 610 may comprise reading data from an independent channel comprising five columns.
- the stream time Ts of the read operation may correspond to the depth of the vertical stripe 746 A (e.g., the stream time of four ECC codewords).
- the depth of the vertical stripe 746 B may be increased to 8 kb, which may be sufficient to hold eight vertically aligned ECC codewords.
- the data structure 610 may be stored within 17 ECC codewords, as disclosed above. However, the modified depth of the vertical stripe 746 B may result in the data structure occupying three columns (columns 0 through 2) rather than six. Accordingly, reading the data structure 610 may comprise reading data from an independent channel comprising three columns, which may increase the number of other, independent read operations that can occur in parallel on other columns (e.g., columns 3 and 4).
- the stream time Ts of the read operation may double as compared to the stream time of the vertical stripe 746 A.
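The trade-off above can be sketched with a little arithmetic (a minimal model, not drawn from the specification itself; the function name is illustrative, and one "stream unit" is taken to be the time to stream one ECC codeword from a column):

```python
import math

def read_footprint(num_codewords, stripe_depth):
    """Return (columns, stream_units) for reading num_codewords
    vertically aligned ECC codewords at the given stripe depth.

    columns: independent-channel columns the read touches
    stream_units: codewords streamed per column (relative stream time Ts)
    """
    columns = math.ceil(num_codewords / stripe_depth)
    return columns, stripe_depth
```

For example, doubling the depth from four to eight codewords reduces a 17-codeword read from five columns (⌈17/4⌉) to three (⌈17/8⌉), while doubling the per-column stream time.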
- FIG. 10C is a block diagram of another embodiment of a system 1002 for referencing data on a storage medium.
- the data layout module 248 may be configured to store data in a vertical stripe configuration within logical pages 542 of the solid-state storage array 115 .
- the write module 240 may comprise one or more processing modules, which, as disclosed above, may include, but are not limited to, a packet module 244 and an ECC write module 246 .
- the ECC write module 246 may be configured to generate ECC codewords 620 (ECC codewords 0 through Z) in response to data for storage on the solid-state storage array 115 , as disclosed above.
- the ECC codewords 620 may flow into the data layout module 248 serially via a 128 bit data path of the write module 240 .
- the ECC write module 246 may further comprise a relational module 646 configured to include relational information in one or more of the ECC codewords 620 .
- the data layout module 248 may be configured to buffer the ECC codewords 620 for storage in vertical stripes, as disclosed herein.
- the data layout module 248 may comprise a fill module 660 that is configured to rotate the serial stream of ECC codewords 620 into vertical stripes by use of, inter alia, one or more cross point switches, FIFO buffers 662 A-X, and the like.
- the FIFO buffers 662 A-X may each correspond to a respective column of the array 115 .
- the fill module 660 may be configured to rotate and/or buffer the ECC codewords 620 according to a particular vertical code word depth, which may be based on the ECC codeword 620 size and/or size of physical storage units of the array 115 .
- the data layout module 248 may be further configured to manage OOS conditions within the solid-state storage array 115 .
- an OOS condition may indicate that one or more columns 118 of the array are not currently in use to store data.
- the storage metadata 135 may identify columns 118 that are out of service within various portions of the solid-state storage array 115 (e.g., rows 117 , logical erase blocks 540 , or the like).
- the storage metadata 135 may indicate that column 2, of the current logical page 542 , is out of service.
- the fill module 660 may be configured to avoid column 2 by, inter alia, injecting padding data into the FIFO buffer of the OOS column (e.g., FIFO buffer 662 C).
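A simplified sketch of this fill behavior (rotating the serial codeword stream into per-column FIFOs while injecting padding for any out-of-service column) might look as follows; the function name is illustrative and None stands in for padding data:

```python
def fill_stripes(codewords, depth, num_columns, oos=frozenset()):
    """Distribute a serial stream of ECC codewords into per-column
    FIFOs: each live column receives `depth` consecutive codewords per
    stripe pass, while columns in the `oos` set receive padding."""
    fifos = [[] for _ in range(num_columns)]
    i = 0
    while i < len(codewords):
        for col in range(num_columns):
            if col in oos:
                fifos[col].extend([None] * depth)  # padding for OOS column
            else:
                fifos[col].extend(codewords[i:i + depth])
                i += depth
                if i >= len(codewords):
                    break
    return fifos
```

With column 2 out of service, six codewords at depth 2 across four columns land on columns 0, 1, and 3, with padding occupying column 2.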
- the data layout module 248 may comprise a parity module 637 that is configured to generate parity data in accordance with the vertical stripe data configuration.
- the parity data may be generated horizontally, on a byte-by-byte basis within rows 117 of the array 115 as disclosed above.
- the parity data P 0 may correspond to ECC codewords 0, 4, and so on through 88; the parity data P 1 may correspond to ECC codewords 1, 5, and so on through 89; and so on.
- the data layout module 248 may include a parity control FIFO 662 Y configured to manage OOS conditions for parity calculations (e.g., ignore data within OOS columns for the purposes of the parity calculation).
- the vertical stripe data configuration generated by the data layout module 248 may flow to write buffers of the solid-state storage elements 116 A-Y within the array 115 through the write buffer and/or bank controller 252 , as disclosed above.
- data rows 667 generated by write module 240 may comprise one byte for each data column in the array 115 (columns 116 A-X). Each byte in a data row 667 may correspond to a respective ECC codeword 620 and may include a corresponding parity byte. Accordingly, each data row 667 may comprise horizontal byte-wise parity information from which any of the bytes within the row 667 may be reconstructed, as disclosed herein.
- a data row 667 A may comprise a byte of ECC codeword 0 for storage on column 0, a byte of ECC codeword 4 for storage on column 1, padding data for column 2, a byte of ECC codeword 88 for storage on column 23, and so on.
- the data row 667 may further comprise a parity byte 668 A for storage on column 24 (or other column), as disclosed above.
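The byte-wise parity scheme can be sketched as a simple XOR across each data row; a byte lost from any one column is then recoverable from the surviving bytes and the parity byte (function names are illustrative):

```python
from functools import reduce
from operator import xor

def row_parity(row):
    """Parity byte for one data row: XOR of one byte per data column."""
    return reduce(xor, row, 0)

def rebuild_byte(surviving, parity):
    """Reconstruct the byte of a failed column from the surviving
    bytes of the row and the row's parity byte."""
    return reduce(xor, surviving, parity)
```

Because XOR is its own inverse, XORing the full row together with its parity byte always yields zero, which is what makes single-column reconstruction possible.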
- the data may be programmed onto the solid-state storage array 115 as a plurality of vertical stripes 646 A-N within a logical page 542 , as disclosed above (e.g., by programming the contents of program buffers to physical storage units of the solid-state storage elements 116 A-Y within the array 115 ).
- the indexing S*N may correspond to vertical stripes configured to hold S ECC codewords in an array 115 comprising N columns for storing data.
- FIG. 10D depicts another embodiment of a system 1003 configured to reference data stored in a vertical stripe configuration on a solid state storage array.
- the offset index module 249 may be configured to segment the storage address of packets 1010 A-N into a first portion corresponding to an address of the vertical stripe, which may correspond to a particular offset within a page of one or more storage elements (e.g., storage element 116 C), and a second portion corresponding to offsets of the packets 1010 A-N within the vertical stripe.
- the offset index module 249 may generate an offset index 749 C configured for storage within the vertical stripe, as disclosed above.
- the offset index 749 C may map front-end identifiers 1054 A-N of the packets 1010 A-N stored within the vertical stripe to respective offsets 1059 A-N of the packets.
- packets may span vertical stripes.
- the packet 1010 N is stored within vertical stripes on storage elements 116 C and 116 D.
- the offset index entry 1059 N corresponding to the packet 1010 N may indicate that the packet 1010 N continues within the next stripe.
- the offset index 749 D of the next vertical stripe may also include an entry associated with the front-end address 1054 of the packet 1010 N and may indicate the offset and/or length of the remaining data of the packet 1010 N within the column 116 D.
- the offset index module 249 may be configured to link the offset index 749 C to the offset index 749 D, as illustrated in FIG. 10D.
- the forward map 152 may only include references to the vertical stripe on column 116 C that comprises the “head” of the packet 1010 N. Moreover, the translation module 134 may omit the second portion of the storage addresses (the offsets 1059 A-N and 1069 N) from the entries 153 to reduce the memory overhead of the forward map 152 and/or allow the forward map 152 to reference larger storage address spaces 144 , as disclosed herein.
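A read of a packet that spans vertical stripes, starting from only the "head" reference in the forward map, might be sketched as follows (assumptions not stated in the disclosure: each stripe's offset index entry carries a continuation flag, and the continuation lives in the next stripe; all names and structures are illustrative):

```python
def read_spanning_packet(front_addr, forward_map, stripes):
    """stripes: list of (offset_index, data) pairs, where offset_index
    maps front-end addresses to (offset, length, continues)."""
    stripe_no = forward_map[front_addr]  # only the head stripe is mapped
    out = b""
    while True:
        index, data = stripes[stripe_no]
        offset, length, continues = index[front_addr]
        out += data[offset:offset + length]
        if not continues:
            return out
        stripe_no += 1  # remainder assumed to live in the next stripe
```

The forward map stays small because it never records where the tail of the packet lives; that information is recovered from the on-media indexes at read time.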
- FIG. 11 is a flow diagram of one embodiment of a method 1100 for referencing data on a storage medium.
- Step 1110 may comprise arranging data segments for storage at respective offsets within a storage location of a storage medium 140 .
- step 1110 comprises formatting the data segments into one or more packets 610 and/or encoding the packets 610 into one or more ECC codewords 620 , as disclosed herein.
- Step 1110 may further comprise streaming the packets 610 and/or ECC codewords 620 to program buffers of a solid-state storage array 115 via the interconnect 127 .
- Step 1110 may further include generating parity data for each of a plurality of data rows 667 comprising the data segments, as disclosed herein.
- step 1110 may further comprise compressing one or more of the data segments such that a compressed size of the data segments differs from the original, uncompressed size of the data segments.
- Step 1110 may further include encrypting and/or whitening the data segments, as disclosed herein.
- Step 1120 may comprise mapping front-end addresses of the data segments using, inter alia, a forward map 152 , as disclosed herein.
- Step 1120 may comprise segmenting the storage addresses of the data segments into a first portion that addresses the storage location comprising the data segments (e.g., the physical address of the logical page 542 comprising the data segments), and second portions comprising the respective offsets of the data segments within the storage location.
- Step 1120 may further comprise indexing the front-end addresses to the first portion of the storage address, and omitting the second portion of the storage address from the entries 153 of the forward index 152 .
- Step 1120 may comprise determining the data segment offsets based on a compressed size of the data segments, as disclosed herein. Accordingly, the offsets determined at step 1120 may differ from offsets based on the original, uncompressed size of the data segments.
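The dependence of offsets on compressed size can be illustrated with a small sketch (zlib is used purely as a stand-in compressor; nothing here is specified by the disclosure):

```python
import zlib

def layout_compressed(segments):
    """Place compressed segments back to back within a storage
    location; the recorded offsets reflect the compressed (not the
    original, uncompressed) sizes of the segments."""
    offsets, pos, blob = [], 0, b""
    for seg in segments:
        compressed = zlib.compress(seg)
        offsets.append(pos)
        pos += len(compressed)
        blob += compressed
    return offsets, blob
```

A second segment's offset therefore falls well short of the first segment's uncompressed length whenever the first segment compresses well.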
- Step 1130 may comprise generating an offset index for the storage location by use of the offset index module 249 , as disclosed herein.
- Step 1130 may comprise generating an offset index 749 data structure that is configured for storage on the storage medium 140 .
- the offset index 749 may be configured for storage at a predetermined offset and/or location within the storage location comprising the indexed data segments.
- the offset index 749 may be configured to map front-end addresses of the data segments stored within the storage location to respective offsets of the data segments within the storage location, as disclosed herein.
- step 1130 further comprises storing the offset index 749 on the storage medium 140 , which may comprise streaming the offset index 749 to program buffers of the storage elements 116 A-Y comprising a solid-state storage array 115 A-N and/or issuing a program command to the solid-state storage elements 116 A-Y, as disclosed herein.
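Steps 1110-1130 can be summarized in a small sketch: the in-memory forward map retains only the first portion of each storage address (the storage-location address), while the second portions (the offsets) are gathered into an offset index destined for storage with the data (names and structures are illustrative):

```python
forward_map = {}  # front-end address -> storage-location address only
media = {}        # storage-location address -> on-media offset index

def index_segments(location_addr, segments):
    """segments: list of (front_end_addr, offset) pairs describing
    where each data segment sits within the storage location."""
    offset_index = {}
    for front_addr, offset in segments:
        forward_map[front_addr] = location_addr  # first portion only
        offset_index[front_addr] = offset        # second portion
    media[location_addr] = offset_index          # stored with the data
```

Note that the offsets never enter the forward map, which is what reduces its memory overhead and lets it reference a larger storage address space.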
- FIG. 12 is a flow diagram of another embodiment of a method 1200 for referencing data stored on a storage medium 140 .
- Step 1210 may comprise identifying a storage location comprising data corresponding to a specified front-end address.
- Step 1210 may be implemented in response to a storage request pertaining to the front-end address.
- the storage request may include one or more of: a read request, a read-modify-write request, a copy request, and/or the like.
- Step 1210 may comprise accessing an entry 153 in the forward map 152 using, inter alia, the specified front-end address.
- the entry 153 may comprise the first portion of the full storage address of the requested data.
- the first portion may identify the storage location (e.g., logical page 542 ) comprising the requested data.
- the second portion of the full storage address may be maintained in a second index that is stored on the storage medium 140 and, as such, may be omitted from the forward map 152 .
- Step 1220 may comprise determining an offset of the requested data within the identified storage location.
- Step 1220 may comprise a) reading the identified storage location, b) accessing an offset index 749 at a predetermined location within the identified storage location, and c) determining the offset of data corresponding to the front-end address by use of the offset index.
- step 1220 may comprise forming the full storage address of the requested data by combining the address of the storage location maintained in the forward map 152 with the offset maintained in the on-media offset index 749 .
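The address reconstruction of steps 1210-1220 might be sketched as follows (names are illustrative; a callable stands in for reading the on-media offset index from the storage location):

```python
def resolve(front_addr, forward_map, read_location):
    """Return (location_addr, offset), the full storage address of the
    data mapped to front_addr: the first portion comes from the
    in-memory forward map, the second from the on-media offset index."""
    location_addr = forward_map[front_addr]      # step 1210
    offset_index = read_location(location_addr)  # step 1220
    return location_addr, offset_index[front_addr]
```

One extra media read per lookup is the price paid for keeping the offsets out of memory.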
- Step 1230 may comprise accessing the requested data.
- Step 1230 may include streaming one or more ECC codewords 620 comprising the data packets 610 in which the requested data was stored from read buffers of the storage elements 116 A-Y comprising a storage array 115 A-N.
- Step 1230 may comprise streaming the data from the offset determined at step 1220 .
- Step 1230 may further include processing the ECC codeword(s) 620 and/or packet(s) 610 comprising the requested data, as disclosed herein (e.g., by use of the ECC read module 247 and/or depacket module 245 ).
- Step 1230 may further comprise decompressing the requested data by use of the decompression module 243 , decrypting the data, dewhitening the data, and so on, as disclosed herein.
- Embodiments may include various steps, which may be embodied in machine-executable instructions to be executed by a general-purpose or special-purpose computer (or other electronic device). Alternatively, the steps may be performed by hardware components that include specific logic for performing the steps, or by a combination of hardware, software, and/or firmware.
- Embodiments may also be provided as a computer program product including a computer-readable storage medium having stored instructions thereon that may be used to program a computer (or other electronic device) to perform processes described herein.
- the computer-readable storage medium may include, but is not limited to: hard drives, floppy diskettes, optical disks, CD-ROMs, DVD-ROMs, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, solid-state memory devices, or other types of medium/machine-readable medium suitable for storing electronic instructions.
- a software module or component may include any type of computer instruction or computer executable code located within a memory device and/or computer-readable storage medium.
- a software module may, for instance, comprise one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that performs one or more tasks or implements particular abstract data types.
- a particular software module may comprise disparate instructions stored in different locations of a memory device, which together implement the described functionality of the module.
- a module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices.
- Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network.
- software modules may be located in local and/or remote memory storage devices.
- data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/925,410, filed Jun. 24, 2013, entitled “Systems and Methods for Referencing Data on a Storage Medium,” now U.S. Pat. No. 10,019,353, issued Jul. 10, 2018, which is a continuation-in-part of, and claims priority to, U.S. patent application Ser. No. 13/784,705, entitled “Systems and Methods for Adaptive Data Storage,” filed Mar. 4, 2013, now U.S. Pat. No. 9,495,241, issued Nov. 15, 2016, which claims priority to U.S. Provisional Patent Application No. 61/606,253, entitled “Adaptive Data Arrangement,” filed Mar. 2, 2012, and to U.S. Provisional Patent Application No. 61/606,755, entitled “Adaptive Data Arrangement,” filed Mar. 5, 2012, and to U.S. Provisional Patent Application No. 61/663,464, filed Jun. 22, 2012, entitled “Systems and Methods for Referencing Data on a Non-Volatile Storage Medium,” each of which is hereby incorporated by reference.
- This disclosure relates to data storage and, in particular, to systems and methods for efficiently referencing data stored on a non-volatile storage medium.
- A storage system may map logical addresses to storage locations of a storage device. Physical addressing metadata used to reference the storage locations may consume significant memory resources. Moreover, the size of the physical addressing metadata may limit the size of the storage resources the system is capable of referencing.
- Disclosed herein are embodiments of a method for referencing data on a storage medium. The method may comprise arranging a plurality of data segments for storage at respective offsets within a storage location of a solid-state storage medium, mapping front-end addresses of the data segments to an address of the storage location in a first index, and generating a second index configured for storage on the solid-state storage medium, wherein the second index is configured to associate the front-end addresses of the data segments with respective offsets of the data segments within the storage location. In some embodiments, the method further includes compressing one or more of the data segments for storage on the solid-state storage medium such that a compressed size of the compressed data segments differs from an uncompressed size of the data segments, wherein the offsets of the data segments within the storage location are based on the compressed size of the one or more data segments.
- The disclosed method may further comprise storing the second index on the storage medium. The second index may be stored on the storage location that comprises the plurality of data segments. The offsets may be omitted from the first index, which may reduce the overhead of the first index and/or allow the first index to reference a larger storage address space. The storage address of a data segment associated with a particular front-end address may be determined by use of a storage location address mapped to the particular front-end address in the first index and a data segment offset associated with the particular front-end address of the second index stored on the storage location. Accessing a requested data segment of a specified front-end address may include accessing a physical address of a storage location mapped to the specified front-end address in the first index, and reading the second index stored on the storage location to determine an offset of the requested data segment within the storage location.
- Disclosed herein are embodiments of an apparatus for referencing data stored on a storage medium. The apparatus may include a storage layer configured to store data packets within storage units of a non-volatile storage medium, wherein the storage units are configured to store a plurality of data packets, a data layout module configured to determine relative locations of the stored data packets within the storage units, and an offset index module configured to generate offset indexes for the storage units based on the determined relative locations of the data packets stored within the storage units, wherein the offset index of a storage unit is configured to associate logical identifiers of data packets stored within the storage unit with the determined relative locations of the data packets within the storage unit.
- In some embodiments, the disclosed apparatus further includes a compression module configured to compress data of one or more of the data packets, such that a compressed size of the data differs from an uncompressed size of the data, wherein the offset index module is configured to determine the offsets of the data packets based on the compressed size of the data. The apparatus may further comprise a translation module which may be used to associate logical identifiers with media addresses of storage units comprising data packets corresponding to the logical identifiers, wherein the storage layer is configured to access a data packet corresponding to a logical identifier by use of a media address of a storage unit associated with the logical identifier by the translation module, and an offset index indicating a relative location of the data packet within the storage unit, wherein the offset index is stored at a pre-determined location within the storage unit.
- The storage layer may be configured to store the offset indexes of the storage units at pre-determined locations within the storage units. The storage layer may be further configured to store each offset index within the storage unit that comprises data packets indexed by the offset index.
- The storage medium may comprise a solid-state storage array comprising a plurality of columns, each column comprising a respective solid-state storage element, and wherein each of the storage units comprises physical storage units on two or more columns of the solid-state storage array. The solid-state storage array may comprise a plurality of columns, each column comprising a respective solid-state storage element. The offset indexes may indicate a relative location of a data packet within a column of the solid-state storage array. In some embodiments, the storage medium is a solid-state storage array comprising a plurality of independent channels, each channel comprising a plurality of solid-state storage elements, and wherein the offset indexes indicate relative locations of data packets within respective independent channels.
- Disclosed herein are further embodiments of a method for referencing data stored on a storage medium, by: segmenting physical addresses of data stored on a solid-state storage array into respective first portions and second portions, wherein the first portions of the physical addresses correspond to storage unit addresses, and wherein the second portions correspond to data offsets within respective storage units, mapping logical addresses of the data to respective first portions of the physical addresses, and storing the second portions of the physical addresses within respective storage units. The method may further comprise compressing the data for storage on the solid-state storage device, wherein the data offsets within respective storage units are based on a compressed size of the data.
- Data corresponding to a logical address may be accessed by combining a first portion of the physical address mapped to the logical address with a second portion of the physical address stored on a storage unit corresponding to the first portion of the physical address. In some embodiments, each storage unit comprises a plurality of physical storage units corresponding to respective solid-state storage elements. Alternatively, or in addition, the storage unit may comprise a page on a solid-state storage element, and the second portions of the physical addresses may correspond to data offsets within the pages.
- FIG. 1A is a block diagram of one embodiment of a computing system comprising a storage layer;
- FIG. 1B depicts embodiments of any-to-any mappings;
- FIG. 1C depicts one embodiment of a solid-state storage array;
- FIG. 1D depicts one embodiment of a storage log;
- FIG. 2 is a block diagram of another embodiment of a storage layer;
- FIG. 3 depicts one embodiment of a packet format;
- FIG. 4 depicts one embodiment of ECC codewords comprising one or more data segments;
- FIG. 5A is a block diagram depicting one embodiment of a solid-state storage array;
- FIG. 5B is a block diagram depicting another embodiment of a solid-state storage array;
- FIG. 5C is a block diagram depicting another embodiment of banks of solid-state storage arrays;
- FIG. 5D depicts one embodiment of sequential bank interleave;
- FIG. 5E depicts another embodiment of sequential bank interleave;
- FIG. 6A is a block diagram of another embodiment of a storage controller;
- FIG. 6B depicts one embodiment of a horizontal data storage configuration;
- FIG. 7A depicts one embodiment of storage metadata for referencing data stored on a storage medium;
- FIG. 7B depicts another embodiment of storage metadata for referencing data stored on a storage medium;
- FIG. 7C depicts another embodiment of storage metadata for referencing data stored on a storage medium;
- FIG. 8A depicts one embodiment of a vertical data layout;
- FIG. 8B depicts another embodiment of a vertical data layout;
- FIG. 8C depicts one embodiment of a system for referencing data stored on a storage medium in a vertical data layout;
- FIG. 9A is a block diagram of one embodiment of a system for referencing data stored in an independent column layout on a storage medium;
- FIG. 9B is a block diagram of another embodiment of a system for referencing data stored in an independent column layout on a storage medium;
- FIG. 9C is a block diagram of another embodiment of a system for referencing data stored in an independent column layout on a storage medium;
- FIG. 10A is a block diagram of one embodiment of data stored in a vertical stripe configuration;
- FIG. 10B is a block diagram of one embodiment of a system for referencing data stored in a vertical stripe configuration;
- FIG. 10C is a block diagram of another embodiment of a system for referencing data stored in a vertical stripe configuration;
- FIG. 10D is a block diagram of another embodiment of a system for referencing data stored in a vertical stripe configuration;
- FIG. 11 is a flow diagram of one embodiment of a method for referencing data stored on a storage medium; and
- FIG. 12 is a flow diagram of another embodiment of a method for referencing data stored on a storage medium.
FIG. 1A is a block diagram of one embodiment of a computing system 100 comprising a storage layer 130 configured to provide storage services to one or more storage clients 106. The computing system 100 may comprise any suitable computing device, including, but not limited to: a server, desktop, laptop, embedded system, mobile device, and/or the like. In some embodiments, the computing system 100 may include multiple computing devices, such as a cluster of server computing devices. The computing system 100 may comprise processing resources 101, volatile memory resources 102 (e.g., random access memory (RAM)), non-volatile storage resources 103, and a communication interface 104. The processing resources 101 may include, but are not limited to, general purpose central processing units (CPUs), application-specific integrated circuits (ASICs), programmable logic elements, such as field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), and the like. The non-volatile storage 103 may comprise a non-transitory machine-readable storage medium, such as a magnetic hard disk, solid-state storage medium, optical storage medium, and/or the like. The communication interface 104 may be configured to communicatively couple the computing system 100 to a network 105. The network 105 may comprise any suitable communication network including, but not limited to: a Transmission Control Protocol/Internet Protocol (TCP/IP) network, a Local Area Network (LAN), a Wide Area Network (WAN), a Virtual Private Network (VPN), a Storage Area Network (SAN), a Public Switched Telephone Network (PSTN), the Internet, and/or the like. - The
computing system 100 may comprise a storage layer 130, which may be configured to provide storage services to one or more storage clients 106. The storage clients 106 may include, but are not limited to: operating systems (including bare metal operating systems, guest operating systems, virtual machines, virtualization environments, and the like), file systems, database systems, remote storage clients (e.g., storage clients communicatively coupled to the computing system 100 and/or storage layer 130 through the network 105), and/or the like. - The storage layer 130 (and/or modules thereof) may be implemented in software, hardware and/or a combination thereof. In some embodiments, portions of the
storage layer 130 are embodied as executable instructions, such as computer program code, which may be stored on a persistent, non-transitory storage medium, such as the non-volatile storage resources 103. The instructions and/or computer program code may be configured for execution by the processing resources 101. Alternatively, or in addition, portions of the storage layer 130 may be embodied as machine components, such as general and/or application-specific components, programmable hardware, FPGAs, ASICs, hardware controllers, storage controllers, and/or the like. - The
storage layer 130 may be configured to perform storage operations on a storage medium 140. The storage medium 140 may comprise any storage medium capable of storing data persistently. As used herein, “persistent” data storage refers to storing information on a persistent, non-volatile storage medium. The storage medium 140 may include non-volatile storage media such as solid-state storage media in one or more solid-state storage devices or drives (SSD), hard disk drives (e.g., Integrated Drive Electronics (IDE) drives, Small Computer System Interface (SCSI) drives, Serial Attached SCSI (SAS) drives, Serial AT Attachment (SATA) drives, etc.), tape drives, writable optical drives (e.g., CD drives, DVD drives, Blu-ray drives, etc.), and/or the like. - In some embodiments, the
storage medium 140 comprises non-volatile solid-state memory, which may include, but is not limited to, NAND flash memory, NOR flash memory, nano RAM (NRAM), magneto-resistive RAM (MRAM), phase change RAM (PRAM), Racetrack memory, Memristor memory, nanocrystal wire-based memory, silicon-oxide based sub-10 nanometer process memory, graphene memory, Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive random-access memory (RRAM), programmable metallization cell (PMC), conductive-bridging RAM (CBRAM), and/or the like. Although particular embodiments of the storage medium 140 are disclosed herein, the teachings of this disclosure could be applied to any suitable form of memory, including both non-volatile and volatile forms. Accordingly, although particular embodiments of the storage layer 130 are disclosed in the context of non-volatile, solid-state storage devices 140, the storage layer 130 may be used with other storage devices and/or storage media. - In some embodiments, the
storage medium 140 includes volatile memory, which may include, but is not limited to, RAM, dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc. The storage medium 140 may correspond to memory of the processing resources 101, such as a CPU cache (e.g., L1, L2, L3 cache, etc.), graphics memory, and/or the like. In some embodiments, the storage medium 140 is communicatively coupled to the storage layer 130 by use of an interconnect 127. The interconnect 127 may include, but is not limited to, peripheral component interconnect (PCI), PCI express (PCI-e), serial advanced technology attachment (serial ATA or SATA), parallel ATA (PATA), small computer system interface (SCSI), IEEE 1394 (FireWire), Fiber Channel, universal serial bus (USB), and/or the like. Alternatively, the storage medium 140 may be a remote storage device that is communicatively coupled to the storage layer 130 through the network 105 (and/or other communication interface, such as a Storage Area Network (SAN), a Virtual Storage Area Network (VSAN), or the like). The interconnect 127 may, therefore, comprise a remote bus, such as a PCI-e bus, a network connection (e.g., Infiniband), a storage network, Fibre Channel Protocol (FCP) network, HyperSCSI, and/or the like. - The
storage layer 130 may be configured to manage storage operations on the storage medium 140 by use of, inter alia, a storage controller 139. The storage controller 139 may comprise software and/or hardware components including, but not limited to: one or more drivers and/or other software modules operating on the computing system 100, such as storage drivers, I/O drivers, filter drivers, and/or the like; hardware components, such as hardware controllers, communication interfaces, and/or the like; and so on. The storage medium 140 may be embodied on a storage device 141. Portions of the storage layer 130 (e.g., the storage controller 139) may be implemented as hardware and/or software components (e.g., firmware) of the storage device 141. - The
storage controller 139 may be configured to implement storage operations at particular storage locations of the storage medium 140. As used herein, a storage location refers to a unit of storage of a storage resource (e.g., a storage medium and/or device) that is capable of storing data persistently; storage locations may include, but are not limited to: pages, groups of pages (e.g., logical pages and/or offsets within a logical page), storage divisions (e.g., physical erase blocks, logical erase blocks, etc.), sectors, locations on a magnetic disk, battery-backed memory locations, and/or the like. The storage locations may be addressable within a storage address space 144 of the storage medium 140. Storage addresses may correspond to physical addresses, media addresses, back-end addresses, address offsets, and/or the like. Storage addresses may correspond to any suitable storage address space 144, storage addressing scheme, and/or arrangement of storage locations. - The
storage layer 130 may comprise an interface 131 through which storage clients 106 may access storage services provided by the storage layer. The storage interface 131 may include one or more of: a block device interface, a virtualized storage interface, an object storage interface, a database storage interface, and/or other suitable interface and/or Application Programming Interface (API). - The
storage layer 130 may provide for referencing storage resources through a front-end interface. As used herein, a front-end interface refers to the identifiers used by the storage clients 106 to reference storage resources and/or services of the storage layer 130. A front-end interface may correspond to a front-end address space 132 that comprises a set, range, and/or extent of front-end addresses or identifiers. As used herein, a front-end address refers to an identifier used to reference data and/or storage resources; front-end addresses may include, but are not limited to: names (e.g., file names, distinguished names, etc.), data identifiers, logical identifiers (LIDs), logical addresses, logical block addresses (LBAs), logical unit number (LUN) addresses, virtual storage addresses, storage addresses, physical addresses, media addresses, and/or the like. In some embodiments, the front-end address space 132 comprises a logical address space comprising a plurality of logical identifiers, LBAs, and/or the like. - The
translation module 134 may be configured to map front-end identifiers of the front-end address space 132 to storage resources (e.g., data stored within the storage address space 144 of the storage medium 140). The front-end address space 132 may be independent of the back-end storage resources (e.g., the storage medium 140); accordingly, there may be no set or pre-determined mappings between front-end addresses of the front-end address space 132 and the storage addresses of the storage address space 144 of the storage medium 140. In some embodiments, the front-end address space 132 is sparse, thinly provisioned, and/or over-provisioned, such that the size of the front-end address space 132 differs from that of the storage address space 144 of the storage medium 140. - The
storage layer 130 may be configured to maintain storage metadata 135 pertaining to storage operations performed on the storage medium 140. The storage metadata 135 may include, but is not limited to: a forward index comprising any-to-any mappings between front-end identifiers of the front-end address space 132 and storage addresses within the storage address space 144 of the storage medium 140, a reverse index pertaining to the contents of the storage locations of the storage medium 140, one or more validity bitmaps, reliability testing and/or status metadata, status information (e.g., error rate, retirement status, and so on), and/or the like. Portions of the storage metadata 135 may be maintained within the volatile memory resources 102 of the computing system 100. Alternatively, or in addition, portions of the storage metadata 135 may be stored on non-volatile storage resources 103 and/or the storage medium 140. -
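The forward index described above can be sketched as a sparse, range-encoded map. The following minimal Python sketch is illustrative only (the class and method names are assumptions, not the patented implementation): each entry maps a contiguous run of front-end identifiers (LIDs) onto a run of storage addresses, and LIDs with no entry are simply unmapped.

```python
from bisect import bisect_right

class ForwardMap:
    """Sparse, range-encoded forward index: each entry maps a contiguous
    run of front-end identifiers (LIDs) onto a run of storage addresses.
    Overlapping ranges are not handled in this sketch."""

    def __init__(self):
        self._starts = []    # sorted first LID of each entry
        self._entries = {}   # first LID -> (run length, first storage address)

    def map_range(self, lid, length, storage_addr):
        """Record an any-to-any mapping for LIDs [lid, lid + length)."""
        self._starts.insert(bisect_right(self._starts, lid), lid)
        self._entries[lid] = (length, storage_addr)

    def lookup(self, lid):
        """Translate a front-end LID to a storage address, or None if the
        LID is not allocated (the map is sparsely populated)."""
        i = bisect_right(self._starts, lid) - 1
        if i >= 0:
            start = self._starts[i]
            length, addr = self._entries[start]
            if lid < start + length:
                return addr + (lid - start)
        return None
```

A b-tree or extent tree would replace the sorted list in practice; the point here is only that one entry covers a whole range of identifiers and that unallocated identifiers consume no entry at all.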
FIG. 1B depicts one embodiment of any-to-any mappings 150 between front-end identifiers of the front-end address space 132 and back-end identifiers (e.g., storage addresses) within the storage address space 144. The any-to-any mappings 150 may be maintained in one or more data structures of the storage metadata 135. As illustrated in FIG. 1B, the translation module 134 may be configured to map any front-end address to any back-end storage location. As further illustrated, the front-end address space 132 may be sized differently than the underlying storage address space 144. In the FIG. 1B embodiment, the front-end address space 132 may be thinly provisioned, and, as such, may comprise a larger range of front-end identifiers than the range of storage addresses in the storage address space 144. - The
storage layer 130 may be configured to maintain the any-to-any mappings in a forward map 152. The forward map 152 may comprise any suitable data structure, including, but not limited to: an index, a map, a hash map, a hash table, an extended-range tree, a b-tree, and/or the like. The forward map 152 may comprise entries 153 corresponding to front-end identifiers that have been allocated for use to reference data stored on the storage medium 140. The entries 153 of the forward map 152 may associate front-end identifiers 154A-D with respective storage addresses 156A-D within the storage address space 144. The forward map 152 may be sparsely populated and, as such, may omit entries corresponding to front-end identifiers that are not currently allocated by a storage client 106 and/or are not currently in use to reference valid data stored on the storage medium 140. In some embodiments, the forward map 152 comprises a range-encoded data structure, such that one or more of the entries 153 may correspond to a plurality of front-end identifiers (e.g., a range, extent, and/or set of front-end identifiers). In the FIG. 1B embodiment, the forward map 152 includes an entry 153 corresponding to a range of front-end identifiers 154A mapped to a corresponding range of storage addresses 156A. The entries 153 may be indexed by front-end identifiers. In the FIG. 1B embodiment, the entries 153 are arranged into a tree data structure by respective links. The disclosure is not limited in this regard, however, and could be adapted to use any suitable data structure and/or indexing mechanism. - Referring to
FIG. 1C, in some embodiments, the solid-state storage medium 140 may comprise a solid-state storage array 115 comprising a plurality of solid-state storage elements 116A-Y. As used herein, a solid-state storage array (or array) 115 refers to a set of two or more independent columns 118. A column 118 may comprise one or more solid-state storage elements 116A-Y that are communicatively coupled to the storage layer 130 in parallel using, inter alia, the interconnect 127. Rows 117 of the array 115 may comprise physical storage units of the respective columns 118 (solid-state storage elements 116A-Y). As used herein, a solid-state storage element 116A-Y includes, but is not limited to, solid-state storage resources embodied as: a package, chip, die, plane, printed circuit board, and/or the like. The solid-state storage elements 116A-Y comprising the array 115 may be capable of independent operation. Accordingly, a first one of the solid-state storage elements 116A may be capable of performing a first storage operation while a second solid-state storage element 116B performs a different storage operation. For example, the solid-state storage element 116A may be configured to read data at a first physical address, while another solid-state storage element 116B reads data at a different physical address. - A solid-
state storage array 115 may also be referred to as a logical storage element (LSE). As disclosed in further detail herein, the solid-state storage array 115 may comprise logical storage units (rows 117). As used herein, a "logical storage unit" or row 117 refers to a logical construct combining two or more physical storage units, each physical storage unit on a respective column 118 of the array 115. A logical erase block refers to a set of two or more physical erase blocks, a logical page refers to a set of two or more pages, and so on. In some embodiments, a logical erase block may comprise erase blocks within respective logical storage elements 115 and/or banks. Alternatively, a logical erase block may comprise erase blocks within a plurality of different arrays 115 and/or may span multiple banks of solid-state storage elements. - Referring back to
FIG. 1A, the storage layer 130 may further comprise a log storage module 136 configured to store data on the storage medium 140 in a log-structured storage configuration (e.g., in a storage log). As used herein, a "storage log" or "log structure" refers to an ordered arrangement of data within the storage address space 144 of the storage medium 140. In the FIG. 1A embodiment, the log storage module 136 may be configured to append data sequentially within the storage address space 144 of the storage medium 140. -
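The sequential append behavior described above, and elaborated in the paragraphs that follow (advancing through storage locations, skipping unavailable storage divisions, and wrapping from the last division back to the first), can be sketched as a cursor. This is a hedged illustration; the division count, locations per division, and availability set are hypothetical inputs, not values from the disclosure.

```python
class AppendPoint:
    """Log append cursor: writes advance sequentially through the storage
    locations of a division, skip divisions that are not initialized
    (e.g., unerased or retired), and wrap from the last division back to
    the first, treating the storage address space as a loop or cycle."""

    def __init__(self, num_divisions, locations_per_division, available):
        self.num_divisions = num_divisions
        self.locations_per_division = locations_per_division
        self.available = available          # indices of initialized divisions
        self.location = 0
        self.division = self._next_division(0)

    def _next_division(self, start):
        # Scan forward (wrapping) for the next initialized division.
        for step in range(self.num_divisions):
            candidate = (start + step) % self.num_divisions
            if candidate in self.available:
                self.location = 0
                return candidate
        raise RuntimeError("no initialized storage division available")

    def append(self):
        """Return the storage address (division, location) for this write,
        then advance the append point."""
        addr = (self.division, self.location)
        self.location += 1
        if self.location == self.locations_per_division:
            self.division = self._next_division(self.division + 1)
        return addr
```

For example, with four divisions of two locations each and division 1 unavailable, successive appends fill divisions 0, 2, and 3, then wrap back to division 0, mirroring the append-point discussion below.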
FIG. 1D depicts one embodiment of the storage address space 144 of the storage medium 140. The storage address space 144 comprises a plurality of storage divisions (e.g., erase blocks, logical erase blocks, or the like), each of which can be initialized (e.g., erased) for use in storing data. The storage divisions 160A-N may comprise respective storage locations, which may correspond to pages, logical pages, and/or the like. The storage locations may be assigned respective storage addresses (e.g., storage address 0 to storage address N). - The
log storage module 136 may be configured to store data sequentially at an append point 180 within the storage address space 144. Data may be appended at the append point 180 and, when the storage location 182 is filled, the append point 180 may advance 181 to a next available storage location. As used herein, an "available" storage location refers to a storage location that has been initialized (e.g., erased) and has not yet been programmed. Some types of storage media can only be reliably programmed once after erasure. Accordingly, an available storage location may refer to a storage location within a storage division 160A-N that is in an initialized (or erased) state. Storage divisions 160A-N may be reclaimed for use in a storage recovery process, which may comprise relocating valid data (if any) on the storage division 160A-N that is being reclaimed to other storage division(s) 160A-N and erasing the storage division 160A-N. - In the
FIG. 1D embodiment, the logical erase block 160B may be unavailable for storage due to, inter alia, not being in an erased state (e.g., comprising valid data), being out-of-service due to high error rates, or the like, and so on. Therefore, after filling the storage location 182, the log storage module 136 may skip the unavailable storage division 160B and advance the append point 180 to the next available storage division 160C. The log storage module 136 may be configured to continue appending data to storage locations 183-185, at which point the append point 180 continues at a next available storage division 160A-N, as disclosed above. - After storing data on the "last" storage location within the storage address space 144 (e.g.,
storage location N 189 of storage division 160N), the append point 180 wraps back to the first storage division 160A (or the next available storage division, if storage division 160A is unavailable). Accordingly, the log storage module 136 may treat the storage address space 144 as a loop or cycle. - The
storage layer 130 may be configured to modify and/or overwrite data out-of-place. As used herein, modifying and/or overwriting data "out-of-place" refers to performing storage operations at different storage addresses rather than modifying and/or overwriting the data at its current storage location (e.g., overwriting the original physical location of the data "in-place"). Performing storage operations out-of-place may avoid write amplification, since existing, valid data on the storage division 160A-N comprising the data that is being modified need not be erased and/or recopied. Moreover, writing data "out-of-place" may remove erasure from the latency path of many storage operations (the erasure latency is no longer part of the "critical path" of a write operation). In the FIG. 1D embodiment, a storage operation to overwrite and/or modify the data corresponding to front-end address A (denoted A0), stored at physical storage location 191, with modified data A1 may comprise storing the modified data A1 out-of-place at a different location (storage address 193) within the storage address space 144. Storing the data A1 may comprise updating the storage metadata 135 to associate the front-end address A with the storage address of storage location 193 and/or to invalidate the obsolete data A0 at storage address 191. As illustrated in FIG. 1D, updating the storage metadata 135 may comprise updating an entry of the forward map 152 to associate the front-end address A 154E with the storage address of the modified data A1. - In some embodiments, the
storage layer 130 is configured to scan the storage address space 144 of the storage medium 140 to identify storage divisions 160A-N to reclaim. As disclosed above, reclaiming a storage division 160A-N may comprise relocating valid data on the storage division 160A-N (if any) and erasing the storage division 160A-N. The storage layer 130 may be further configured to store data in association with persistent metadata (e.g., in a self-describing format). The persistent metadata may comprise information about the data, such as the front-end identifier(s) associated with the data, data size, data length, and the like. Embodiments of a packet format comprising persistent, contextual metadata pertaining to data stored within the storage log are disclosed in further detail below in conjunction with FIG. 3. - Referring back to
FIG. 1D, the storage layer 130 may be configured to reconstruct the storage metadata 135 by use of contents of the storage medium 140. In the FIG. 1D embodiment, the current version of the data associated with front-end identifier A, stored at storage location 193, may be distinguished from the obsolete version of the data A stored at storage location 191 based on the log order of the packets at storage locations 191 and 193. Based on the log order, the storage layer 130 may determine that storage location 193 comprises the most recent, up-to-date version of the data A. Accordingly, the reconstructed forward map 152 may associate front-end identifier A with the data stored at storage location 193 (rather than the obsolete data at storage location 191). -
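The out-of-place update and log-order reconstruction described above can be sketched together. In this illustrative Python fragment (the packet tuple layout and function names are assumptions, not the patented format), overwriting a front-end identifier appends a new packet rather than reprogramming the old location, and the forward map is rebuilt from the log alone by replaying packets oldest-first so that the newest mapping for each identifier wins.

```python
def append_packet(log, lid, storage_addr):
    """Out-of-place write: persist (sequence, LID, storage address) at the
    append point; the superseded packet for `lid` (if any) simply becomes
    obsolete rather than being erased in place."""
    sequence = len(log)          # stand-in for a log sequence indicator
    log.append((sequence, lid, storage_addr))

def reconstruct_forward_map(log):
    """Rebuild LID -> storage-address mappings from the storage log:
    replay packets in log order; a later packet for the same LID
    supersedes the earlier, obsolete one."""
    forward_map = {}
    for sequence, lid, storage_addr in sorted(log):
        forward_map[lid] = storage_addr
    return forward_map
```

Appending data for A at address 191 and then an updated version of A at address 193 leaves the reconstructed map pointing at 193, mirroring the discussion above.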
FIG. 2 is a block diagram of a system 200 comprising another embodiment of a storage layer 130 configured to manage data storage operations on a storage medium 140. In some embodiments, the storage medium 140 may comprise one or more independent banks 119A-N of solid-state storage arrays 115A-N. Each of the solid-state storage arrays 115A-N may comprise a plurality of solid-state storage elements (columns 118) communicatively coupled in parallel via the interconnect 127, as disclosed herein. - The
storage controller 139 may comprise a request module 231 configured to receive storage requests from the storage layer 130 and/or storage clients 106. The request module 231 may be configured to transfer data to/from the storage controller 139 in response to the requests. Accordingly, the request module 231 may comprise and/or be communicatively coupled to one or more direct memory access (DMA) modules, remote DMA modules, interconnect controllers, bus controllers, bridges, buffers, network interfaces, and the like. - The
storage controller 139 may comprise a write module 240 configured to process data for storage on the storage medium 140. In some embodiments, the write module 240 comprises one or more stages configured to process and/or format data for storage on the storage medium 140, which may include, but are not limited to: a compression module 242, a packet module 244, an ECC write module 246, and a write buffer 250. In some embodiments, the write module 240 may further comprise a whitening module configured to whiten data for storage on the storage medium 140, one or more encryption modules configured to encrypt data for storage on the storage medium 140, and so on. The read module 241 may comprise one or more modules configured to process and/or format data read from the storage medium 140, which may include, but are not limited to: a read buffer 251, the data layout module 248, an ECC read module 247, a depacket module 245, and a decompression module 243. - In some embodiments, the
write module 240 comprises a write pipeline configured to process data for storage in a plurality of pipeline stages or modules, as disclosed herein. Similarly, in some embodiments, the read module 241 may comprise a read pipeline configured to process data read from the solid-state storage array 115 in a plurality of pipeline stages or modules, as disclosed herein. - The
compression module 242 may be configured to compress data for storage on the storage medium 140. Data may be compressed using any suitable compression algorithm and/or technique. The data compression module 242 may be configured to compress the data such that a compressed size of the data stored on the storage medium 140 differs from the original, uncompressed size of the data. The compression module 242 may be configured to compress data using different compression algorithms and/or compression levels, which may result in variable compression ratios between the original, uncompressed size of certain data segments and the size of the compressed data segments. The compression module 242 may be further configured to perform one or more whitening transformations on the data segments and/or data packets generated by the packet module 244 (disclosed in further detail below). The data whitening transformations may comprise decorrelating the data, which may provide wear-leveling benefits for certain types of storage media. The compression module 242 may be further configured to encrypt data for storage on the storage medium 140 by use of one or more of a media encryption key, a user encryption key, and/or the like. - The
packet module 244 may be configured to generate data packets comprising data to be stored on the storage medium 140. As disclosed above, the write module 240 may be configured to store data in a storage log, in which data segments are stored in association with self-describing metadata in a packet format, as illustrated in FIG. 3. Referring to FIG. 3, the packet module 244 may be configured to generate packets comprising a data segment 312 and persistent metadata 314. The persistent metadata 314 may include one or more front-end addresses 315 associated with the data segment 312. The data packets 310 may be associated with sequence information, such as a sequence indicator 318, to define, inter alia, a log-order of the data packets 310 within the storage log on the storage medium 140. The sequence indicator 318 may comprise one or more sequence numbers, timestamps, or other indicators from which a relative order of the data packets 310 stored on the storage medium 140 can be determined. The storage layer 130 may use the data packets 310 stored within the storage log on the storage medium 140 to reconstruct portions of the storage metadata 135, which may include, but is not limited to: reconstructing any-to-any mappings 150 between front-end addresses and storage addresses (e.g., the forward map 152), a reverse map, and/or the like. - In some embodiments, the
packet module 244 may be configured to generate packets of arbitrary lengths and/or sizes in accordance with the size of storage requests received via the request module 231, data compression performed by the compression module 242, configuration, preferences, and so on. The packet module 244 may be configured to generate packets of one or more pre-determined sizes. In one embodiment, in response to a request to write 24 k of data to the storage medium 140, the packet module 244 may be configured to generate six packets, each packet comprising 4 k of the data; in another embodiment, the packet module 244 may be configured to generate a single packet comprising 24 k of data in response to the request. - The persistent metadata 314 may comprise the front-end identifier(s) 315 corresponding to the
packet data segment 312. Accordingly, the persistent metadata 314 may be configured to associate the packet data segment 312 with one or more LIDs, LBAs, and/or the like. The persistent metadata 314 may be used to associate the packet data segment 312 with the front-end identifier(s) independently of the storage metadata 135. Accordingly, the storage layer 130 may be capable of reconstructing the storage metadata 135 (e.g., the forward map 152) by use of the storage log stored on the storage medium 140. The persistent metadata 314 may comprise other persistent metadata, which may include, but is not limited to, data attributes (e.g., an access control list), data segment delimiters, signatures, links, data layout metadata, and/or the like. - In some embodiments, the data packet 310 may be associated with a
log sequence indicator 318. The log sequence indicator 318 may be persisted on the storage division 160A-N comprising the data packet 310. Alternatively, the sequence indicator 318 may be persisted elsewhere on the storage medium 140. In some embodiments, the sequence indicator 318 is applied to the storage divisions 160A-N when the storage divisions 160A-N are reclaimed (e.g., erased, when the first or last storage unit is programmed, etc.). The log sequence indicator 318 may be used to determine the log-order of packets 310 within the storage log stored on the storage medium 140 (e.g., to determine an ordered sequence of data packets 310). - Referring back to
FIG. 2, the ECC write module 246 may be configured to encode data packets 310 generated by the packet module 244 into respective ECC codewords. As used herein, an ECC codeword refers to data and corresponding error detection and/or correction information. The ECC write module 246 may be configured to implement any suitable ECC algorithm and may be configured to generate corresponding ECC information (e.g., ECC codewords), which may include, but is not limited to: data segments and corresponding ECC syndromes, ECC symbols, ECC chunks, and/or other structured and/or unstructured ECC information. ECC codewords may comprise any suitable error-correcting encoding, including, but not limited to: block ECC encoding, convolutional ECC encoding, Low-Density Parity-Check (LDPC) encoding, Gallager encoding, Reed-Solomon encoding, Hamming codes, multidimensional parity encoding, cyclic error-correcting codes, BCH codes, and/or the like. The ECC write module 246 may be configured to generate ECC codewords of a pre-determined size. Accordingly, a single packet may be encoded into a plurality of different ECC codewords and/or a single ECC codeword may comprise portions of two or more packets. - In some embodiments, the
ECC write module 246 is configured to generate ECC codewords, each of which may comprise data of length N and a syndrome of length S. For example, the ECC write module 246 may be configured to encode data segments into 240-byte ECC codewords, each ECC codeword comprising 224 bytes of data and 16 bytes of ECC syndrome information. In this embodiment, the ECC encoding may be capable of correcting more bit errors than the manufacturer of the storage medium 140 requires. In other embodiments, the ECC write module 246 may be configured to encode data in a symbolic ECC encoding, such that each data segment of length N produces a symbol of length X. The ECC write module 246 may encode data according to a selected ECC strength. As used herein, the "strength" of an error-correcting code refers to the number of errors that can be detected and/or corrected by use of the error-correcting code. In some embodiments, the strength of the ECC encoding implemented by the ECC write module 246 may be adaptive and/or configurable. The strength of the ECC encoding may be selected according to the reliability and/or error rate of the storage medium 140. As disclosed in further detail herein, the strength of the ECC encoding may be independent of the partitioning and/or data layout on the storage medium 140, which may allow the storage layer 130 to select a suitable ECC encoding strength based on the conditions of the storage medium 140, user requirements, and the like, as opposed to static and/or pre-determined ECC settings imposed by the manufacturer of the storage medium 140. -
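The fixed-size codeword framing described above (N = 224 data bytes plus S = 16 syndrome bytes per 240-byte codeword) can be sketched as follows. One caution on the hedge: a real ECC write module would compute a BCH, LDPC, or Reed-Solomon syndrome; in this sketch a zero-padded CRC-32 stands in purely to show the framing and is an error-*detecting* placeholder, not an error-correcting code.

```python
import zlib

N_DATA, S_SYNDROME = 224, 16   # bytes per 240-byte codeword: 224 data + 16 syndrome

def encode_codewords(packet_stream):
    """Split a stream of packet bytes into fixed 240-byte codewords.
    Packet boundaries are ignored, so a single packet may span several
    codewords and a single codeword may hold parts of two or more packets.
    The CRC-32 "syndrome" below is a placeholder for real ECC."""
    codewords = []
    for offset in range(0, len(packet_stream), N_DATA):
        chunk = packet_stream[offset:offset + N_DATA].ljust(N_DATA, b"\x00")
        syndrome = zlib.crc32(chunk).to_bytes(4, "big").rjust(S_SYNDROME, b"\x00")
        codewords.append(chunk + syndrome)
    return codewords
```

Because the framing is independent of packet boundaries, a 500-byte packet stream yields three 240-byte codewords, with the tail chunk padded; this mirrors the FIG. 4 discussion below, where codeword 420D holds data from two adjacent packets.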
FIG. 4 depicts one embodiment of data flow 400 between the packet module 244 and an ECC write module 246. For clarity, and to avoid obscuring the details of the depicted embodiment, other portions of the write module 240 are omitted. The packet module 244 may be configured to generate packets 310A-310N in response to one or more requests to store data on the storage medium 140. The packets 310A-N may comprise respective packet data segments 312A-N. The packets 310A-N may further comprise persistent metadata embodied in respective packet headers 314A-N. The packets 310A-N may be processed by, inter alia, the ECC write module 246 to generate ECC codewords 420A-Z. In the FIG. 4 embodiment, the ECC codewords comprise ECC codewords 420A-420Z, each of which may comprise a portion of one or more of the packets 310A-N and a syndrome (not shown). In other embodiments, the ECC codewords may comprise ECC symbols or the like. - As illustrated in
FIG. 4, the packets 310A-N may vary in size in accordance with the size of the respective packet data segments 312A-N and/or header information 314A-N. Alternatively, in some embodiments, the packet module 244 may be configured to generate packets 310A-N of a fixed, uniform size. - The
ECC write module 246 may be configured to generate ECC codewords 420A-N having a uniform, fixed size; each ECC codeword 420A-N may comprise N bytes of packet data and S syndrome bytes, such that each ECC codeword 420A-N comprises N+S bytes. In some embodiments, each ECC codeword comprises 240 bytes, and includes 224 bytes of packet data (N) and 16 bytes of error correction code (S). The disclosed embodiments are not limited in this regard, however, and could be adapted to generate ECC codewords 420A-N of any suitable size, having any suitable ratio between N and S. Moreover, the ECC write module 246 may be further adapted to generate ECC symbols, or other ECC codewords, comprising any suitable ratio between data and ECC information. - As depicted in
FIG. 4, the ECC codewords 420A-N may comprise portions of one or more packets 310A-N; ECC codeword 420D, for example, comprises data of packets 310A and 310B. Data of the packets 310A-N may be spread between a plurality of different ECC codewords 420A-N: ECC codewords 420A-D comprise data of packet 310A; ECC codewords 420D-H comprise data of packet 310B; and ECC codewords 420X-Z comprise data of packet 310N. - Referring back to
FIG. 2, the write module 240 may further comprise a data layout module 248 configured to buffer data for storage on one or more of the solid-state storage arrays 115A-N. As disclosed in further detail herein, the data layout module 248 may be configured to store data within one or more columns 118 of a solid-state storage array 115. The data layout module 248 may be further configured to generate parity data corresponding to the layout and/or arrangement of the data on the storage medium 140. The parity data may be configured to protect data stored within respective rows 117 of the solid-state storage arrays 115A-N, and may be generated in accordance with the data layout implemented by the storage controller 139. - In some embodiments, the
write module 240 further comprises a write buffer 250 configured to buffer data for storage within respective page write buffers of the storage medium 140. The write buffer 250 may comprise one or more synchronization buffers to synchronize a clock domain of the storage controller 139 with a clock domain of the storage medium 140 (and/or interconnect 127). - The
log storage module 136 may be configured to select storage location(s) for data storage operations and/or may provide addressing and/or control information to the storage controller 139. Accordingly, the log storage module 136 may provide for storing data sequentially at an append point 180 within the storage address space 144 of the storage medium 140. The storage address at which a particular data segment is stored may be independent of the front-end identifier(s) associated with the data segment. As disclosed above, the translation module 134 may be configured to associate the front-end interface of data segments (e.g., front-end identifiers of the data segments) with the storage address(es) of the data segments on the storage medium 140. In some embodiments, the translation module 134 may leverage storage metadata 135 to perform logical-to-physical translations; the storage metadata 135 may include, but is not limited to: a forward map 152 comprising arbitrary, any-to-any mappings 150 between front-end identifiers and storage addresses; a reverse map comprising storage address validity indicators and/or any-to-any mappings between storage addresses and front-end identifiers; and so on. The storage metadata 135 may be maintained in volatile memory, such as the volatile memory 102 of the computing system 100. In some embodiments, the storage layer 130 is configured to periodically store portions of the storage metadata 135 on a persistent storage medium, such as the storage medium 140, non-volatile storage resources 103, and/or the like. - The
storage controller 139 may further comprise a read module 241 that is configured to read data from the storage medium 140 in response to requests received via the request module 231. The read module 241 may be configured to process data read from the storage medium 140 and provide the processed data to the storage layer 130 and/or a storage client 106 (by use of the request module 231). The read module 241 may comprise one or more modules configured to process and/or format data read from the storage medium 140, which may include, but are not limited to: a read buffer 251, the data layout module 248, an ECC read module 247, a depacket module 245, and a decompression module 243. In some embodiments, the read module 241 further includes a dewhiten module configured to perform one or more dewhitening transforms on the data, a decryption module configured to decrypt encrypted data stored on the storage medium 140, and so on. Data processed by the read module 241 may flow to the storage layer 130 and/or directly to the storage client 106 via the request module 231 and/or other interface or communication channel (e.g., the data may flow directly to/from a storage client via a DMA or remote DMA module of the storage layer 130). - Read requests may comprise and/or reference the data using the front-end interface of the data, such as a front-end identifier (e.g., a logical identifier, an LBA, a range and/or extent of identifiers, and/or the like). The back-end addresses associated with data of the request may be determined based, inter alia, on the any-to-any
mappings 150 maintained by the translation module 134 (e.g., the forward map 152), metadata pertaining to the layout of the data on the storage medium 140, and so on. Data may stream into the read module 241 via a read buffer 251. The read buffer 251 may correspond to page read buffers of one or more of the solid-state storage arrays 115A-N. The read buffer 251 may comprise one or more synchronization buffers configured to synchronize a clock domain of the read buffer 251 with a clock domain of the storage medium 140 (and/or interconnect 127). - The
data layout module 248 may be configured to reconstruct one or more data segments from the contents of the read buffer 251. Reconstructing the data segments may comprise recombining and/or reordering contents of the read buffer (e.g., ECC codewords) read from various columns 118 in accordance with a layout of the data on the solid-state storage arrays 115A-N as indicated by the storage metadata 135. As disclosed in further detail herein, in some embodiments, reconstructing the data may comprise stripping data associated with one or more columns 118 from the read buffer 251, reordering data of one or more columns 118, and so on. - The
read module 241 may comprise an ECC read module 247 configured to detect and/or correct errors in data read from the storage medium 140 using, inter alia, the ECC encoding of the data (e.g., as encoded by the ECC write module 246), parity data (e.g., using parity substitution), and so on. As disclosed above, the ECC encoding may be capable of detecting and/or correcting a pre-determined number of bit errors, in accordance with the strength of the ECC encoding. The ECC read module 247 may be capable of detecting more bit errors than can be corrected. - The ECC read
module 247 may be configured to correct any “correctable” errors using the ECC encoding. In some embodiments, the ECC read module 247 may attempt to correct errors that cannot be corrected by use of the ECC encoding using other techniques, such as parity substitution, or the like. Alternatively, or in addition, the ECC read module 247 may attempt to recover data comprising uncorrectable errors from another source. For example, in some embodiments, data may be stored in a RAID configuration. In response to detecting an uncorrectable error, the ECC read module 247 may attempt to recover the data from the RAID, or other source of redundant data (e.g., a mirror, backup copy, or the like). - In some embodiments, the ECC read
module 247 may be configured to generate an interrupt in response to reading data comprising uncorrectable errors. The interrupt may comprise a message indicating that the requested data is in error, and may indicate that the ECC read module 247 cannot correct the error using the ECC encoding. The message may comprise the data that includes the error (e.g., the “corrupted data”). - The interrupt may be caught by the
storage layer 130 or other process, which, in response, may be configured to reconstruct the data using parity substitution, or other reconstruction technique, as disclosed herein. Parity substitution may comprise iteratively replacing portions of the corrupted data with a “parity mask” (e.g., all ones) until a parity calculation associated with the data is satisfied. The masked data may comprise the uncorrectable errors, and may be reconstructed using other portions of the data in conjunction with the parity data. Parity substitution may further comprise reading one or more ECC codewords from the solid-state storage array 115A-N (in accordance with an adaptive data structure layout on the array 115), correcting errors within the ECC codewords (e.g., decoding the ECC codewords), and reconstructing the data by use of the corrected ECC codewords and/or parity data. In some embodiments, the corrupted data may be reconstructed without first decoding and/or correcting errors within the ECC codewords. Alternatively, uncorrectable data may be replaced with another copy of the data, such as a backup or mirror copy. In another embodiment, the storage layer 130 stores data in a RAID configuration, from which the corrupted data may be recovered. - As depicted in
FIG. 2, the solid-state storage medium 140 may be arranged into a plurality of independent banks 119A-N. Each bank may comprise a plurality of solid-state storage elements arranged into respective solid-state storage arrays 115A-N. The banks 119A-N may be configured to operate independently; the storage controller 139 may configure a first bank 119A to perform a first storage operation while a second bank 119B is configured to perform a different storage operation. The storage controller 139 may further comprise a bank controller 252 configured to selectively route data and/or commands to respective banks 119A-N. In some embodiments, the storage controller 139 is configured to read data from a bank 119A while filling the write buffer 250 for storage on another bank 119B and/or may interleave one or more storage operations between one or more banks 119A-N. Further embodiments of multi-bank storage operations and data pipelines are disclosed in U.S. Patent Application Publication No. 2008/0229079 (U.S. patent application Ser. No. 11/952,095), entitled “Apparatus, System, and Method for Managing Commands of Solid-State Storage Using Bank Interleave,” filed Dec. 6, 2007 for David Flynn et al., which is hereby incorporated by reference in its entirety. - The
storage layer 130 may further comprise a groomer module 138 configured to reclaim storage resources of the storage medium 140. The groomer module 138 may operate as an autonomous, background process, which may be suspended and/or deferred while other storage operations are in process. The log storage module 136 and groomer module 138 may manage storage operations so that data is spread throughout the storage address space 144 of the storage medium 140, which may improve performance and data reliability, and avoid overuse and underuse of any particular storage locations, thereby lengthening the useful life of the storage medium 140 (e.g., wear-leveling, etc.). As disclosed above, data may be sequentially appended to a storage log within the storage address space 144 at an append point 180, which may correspond to a particular storage address within one or more of the banks 119A-N (e.g., physical address 0 of bank 119A). Upon reaching the end of the storage address space 144 (e.g., physical address N of bank 119N), the append point 180 may revert to the initial position (or next available storage location). - As disclosed above, operations to overwrite and/or modify data stored on the
storage medium 140 may be performed “out-of-place.” The obsolete version of overwritten and/or modified data may remain on the storage medium 140 while the updated version of the data is appended at a different storage location (e.g., at the current append point 180). Similarly, an operation to delete, erase, or TRIM data from the storage medium 140 may comprise indicating that the data is invalid (e.g., does not need to be retained on the storage medium 140). Marking data as invalid may comprise modifying a mapping between the front-end identifier(s) of the data and the storage address(es) comprising the invalid data, marking the storage address as invalid in a reverse map, and/or the like. - The
groomer module 138 may be configured to select sections of the solid-state storage medium 140 for grooming operations. As used herein, a “section” of the storage medium 140 may include, but is not limited to: an erase block, a logical erase block, a die, a plane, one or more pages, a portion of a solid-state storage element 116A-Y, a portion of a row 117 of a solid-state storage array 115, a portion of a column 118 of a solid-state storage array 115, and/or the like. A section may be selected for grooming operations in response to various criteria, which may include, but are not limited to: age criteria (e.g., data refresh), error metrics, reliability metrics, wear metrics, resource availability criteria, an invalid data threshold, and/or the like. A grooming operation may comprise relocating valid data on the selected section (if any). The operation may further comprise preparing the section for reuse, which may comprise erasing the section, marking the section with a sequence indicator, such as the sequence indicator 318, and/or placing the section into a queue of storage sections that are available to store data. The groomer module 138 may be configured to schedule grooming operations with other storage operations and/or requests. In some embodiments, the storage controller 139 may comprise a groomer bypass (not shown) configured to relocate data from a storage section by transferring data read from the section from the read module 241 directly into the write module 240 without being routed out of the storage controller 139. - The
storage layer 130 may be further configured to manage out-of-service conditions on the storage medium 140. As used herein, a section of the storage medium 140 that is “out-of-service” (OOS) refers to a section that is not currently being used to store valid data. The storage layer 130 may be configured to monitor storage operations performed on the storage medium 140 and/or actively scan the solid-state storage medium 140 to identify sections that should be taken out of service. The storage metadata 135 may comprise OOS metadata that identifies OOS sections of the solid-state storage medium 140. The storage layer 130 may be configured to avoid OOS sections by, inter alia, streaming padding (and/or nonce) data to the write buffer 250 such that the padding data will map to the identified OOS sections. In some embodiments, the storage layer 130 may be configured to manage OOS conditions by replacing OOS sections of the storage medium 140 with replacement sections. Alternatively, or in addition, a hybrid OOS approach may be used that combines adaptive padding and replacement techniques; the padding approach to managing OOS conditions may be used in portions of the storage medium 140 comprising a relatively small number of OOS sections; as the number of OOS sections increases, the storage layer 130 may replace one or more of the OOS sections with replacement sections. Further embodiments of apparatus, systems, and methods for detecting and/or correcting data errors, and managing OOS conditions, are disclosed in U.S. Patent Application Publication No. 2009/0287956 (U.S. application Ser. No. 12/467,914), entitled “Apparatus, System, and Method for Detecting and Replacing a Failed Data Storage,” filed May 18, 2009, and U.S. Patent Application Publication No. 2013/0019072 (U.S. application Ser. No. 13/354,215), entitled “Apparatus, System, and Method for Managing Out-of-Service Conditions,” filed Jan.
19, 2012 for John Strasser et al., each of which is hereby incorporated by reference in its entirety. - As disclosed above, the
storage medium 140 may comprise one or more solid-state storage arrays 115A-N. A solid-state storage array 115A-N may comprise a plurality of independent columns 118 (respective solid-state storage elements 116A-Y), which may be coupled to the storage layer 130 in parallel via the interconnect 127. Accordingly, storage operations performed on an array 115A-N may be performed on a plurality of solid-state storage elements 116A-Y. Performing a storage operation on a solid-state storage array 115A-N may comprise performing the storage operation on each of the plurality of solid-state storage elements 116A-Y comprising the array 115A-N: a read operation may comprise reading a physical storage unit (e.g., page) from a plurality of solid-state storage elements 116A-Y; a program operation may comprise programming a physical storage unit (e.g., page) on a plurality of solid-state storage elements 116A-Y; an erase operation may comprise erasing a section (e.g., erase block) on a plurality of solid-state storage elements 116A-Y; and so on. Accordingly, a program operation may comprise the write module 240 streaming data to program buffers of a plurality of solid-state storage elements 116A-Y (via the write buffer 250 and interconnect 127) and, when the respective program buffers are sufficiently full, issuing a program command to the solid-state storage elements 116A-Y. The program command may cause one or more storage units on each of the storage elements 116A-Y to be programmed in parallel. -
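The stream-then-program flow described above can be sketched as follows. This is an illustrative model only, not the disclosed implementation; the class and method names (Element, Array, stream_row, program_all) are hypothetical.

```python
# Hypothetical model of a 25-element array: streaming fills per-element
# program buffers; a single program command then commits every buffer
# in parallel as one page per element.
class Element:
    def __init__(self):
        self.buffer = bytearray()   # program buffer
        self.pages = []             # programmed physical storage units

    def program(self):
        # Commit the buffered bytes as one page and clear the buffer.
        self.pages.append(bytes(self.buffer))
        self.buffer.clear()

class Array:
    def __init__(self, width=25):
        self.elements = [Element() for _ in range(width)]

    def stream_row(self, row):
        # One interconnect cycle: one byte lands on each element.
        for elem, byte in zip(self.elements, row):
            elem.buffer.append(byte)

    def program_all(self):
        # One program command fans out to every element.
        for elem in self.elements:
            elem.program()

array = Array()
array.stream_row(bytes(range(25)))
array.program_all()
```

After the program command, each of the 25 elements holds one newly programmed page, mirroring how a single command programs storage units on every element in parallel.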
FIG. 5A depicts another embodiment 500 of a solid-state storage array 115. As disclosed above, the solid-state storage array 115 may comprise a plurality of independent columns 118, each of which may correspond to a respective set of one or more solid-state storage elements 116A-Y. In the FIG. 5A embodiment, the solid-state storage array 115 comprises 25 columns 118 (e.g., solid-state storage element 0 116A through solid-state storage element 24 116Y). The solid-state storage elements 116A-Y comprising the array may be communicatively coupled to the storage layer 130 in parallel by the interconnect 127. The interconnect 127 may be capable of communicating data, addressing, and/or control information to each of the solid-state storage elements 116A-Y. The parallel connection may allow the storage controller 139 to manage the solid-state storage elements 116A-Y in parallel, as a single, logical storage element. - The solid-
state storage elements 116A-Y may be partitioned into sections, such as physical storage divisions 530 (e.g., physical erase blocks). Each erase block may comprise a plurality of physical storage units 532, such as pages. The physical storage units 532 within a physical storage division 530 may be erased as a group. Although FIG. 5A depicts a particular partitioning scheme, the disclosed embodiments are not limited in this regard, and could be adapted to use solid-state storage elements 116A-Y partitioned in any suitable manner. - As depicted in
FIG. 5A, the columns 118 of the array 115 may correspond to respective solid-state storage elements 116A-Y. Accordingly, the array 115 of FIG. 5A comprises 25 columns 118. Rows 117 of the array 115 may correspond to physical storage units 532 and/or physical storage divisions 530 of a plurality of the columns 118. In other embodiments, the columns 118 may comprise multiple solid-state storage elements. -
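The row/column arrangement can be pictured with a small sketch (hypothetical names, not the disclosed implementation): each column holds its own physical pages, and a row-oriented access gathers one physical page from every column, at addresses that need not match across columns.

```python
# Toy model: 25 columns, each with its own address space of pages.
# Page contents are stand-in strings identifying (column, page).
columns = [{0: f"c{c}p0", 1: f"c{c}p1"} for c in range(25)]

def read_logical_page(addresses):
    # addresses: one physical page address per column (may differ
    # per column, since the columns are independent).
    return [columns[c][addr] for c, addr in enumerate(addresses)]

uniform = read_logical_page([0] * 25)        # same row on every column
mixed = read_logical_page([0] + [1] * 24)    # per-column addresses
```

The uniform case corresponds to a row 117 spanning every column; the mixed case illustrates that independent columns allow different physical addresses per element in one operation.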
FIG. 5B is a block diagram 501 of another embodiment of a solid-state storage array 115. The solid-state storage array 115 may comprise a plurality of rows 117, which may correspond to storage units on a plurality of different columns 118 within the array 115. The rows 117 of the solid-state storage array 115 may include logical storage divisions 540, which may comprise physical storage divisions on a plurality of the solid-state storage elements 116A-Y. In some embodiments, a logical storage division 540 may comprise a logical erase block, comprising physical erase blocks of the solid-state storage elements 116A-Y within the array 115. A logical page 542 may comprise physical storage units (e.g., pages) on a plurality of the solid-state storage elements 116A-Y. - Storage operations performed on the solid-
state storage array 115 may operate on multiple solid-state storage elements 116A-Y: an operation to program data to a logical storage unit 542 may comprise programming data to each of 25 physical storage units (e.g., one storage unit per non-volatile storage element 116A-Y); an operation to read data from a logical storage unit 542 may comprise reading data from 25 physical storage units (e.g., pages); an operation to erase a logical storage division 540 may comprise erasing 25 physical storage divisions (e.g., erase blocks); and so on. Since the columns 118 are independent, storage operations may be performed across different sets and/or portions of the array 115. For example, a read operation on the array 115 may comprise reading data from a physical storage unit 532 at a first physical address of solid-state storage element 116A and reading data from a physical storage unit 532 at a different physical address of one or more other solid-state storage elements 116B-N. - Arranging solid-
state storage elements 116A-Y into a solid-state storage array 115 may be used to address certain properties of the storage medium 140. Some embodiments may comprise an asymmetric storage medium 140, in which it takes longer to program data onto the solid-state storage elements 116A-Y than it takes to read data therefrom (e.g., 10 times as long). Moreover, in some cases, data may only be programmed to physical storage divisions 530 that have first been initialized (e.g., erased). Initialization operations may take longer than program operations (e.g., 10 times as long as a program operation, and by extension 100 times as long as a read operation). Managing groups of solid-state storage elements 116A-Y in an array 115 (and/or in independent banks 119A-N as disclosed herein) may allow the storage layer 130 to perform storage operations more efficiently, despite the asymmetric properties of the storage medium 140. In some embodiments, the asymmetry in read, program, and/or erase operations is addressed by performing these operations on multiple solid-state storage elements 116A-Y in parallel. In the embodiment depicted in FIG. 5B, programming asymmetry may be addressed by programming 25 storage units in a logical storage unit 542 in parallel. Initialization operations may also be performed in parallel. Physical storage divisions 530 on each of the solid-state storage elements 116A-Y may be initialized as a group (e.g., as logical storage divisions 540), which may comprise erasing 25 physical erase blocks in parallel. - In some embodiments, portions of the solid-
state storage array 115 may be configured to store data and other portions of the array 115 may be configured to store error detection and/or recovery information. Columns 118 used for data storage may be referred to as “data columns” and/or “data solid-state storage elements.” Columns used to store data error detection and/or recovery information may be referred to as a “parity column” and/or “recovery column.” The array 115 may be configured in an operational mode in which one of the solid-state storage elements 116Y is used to store parity data, whereas the other solid-state storage elements 116A-X are used to store data. Accordingly, the array 115 may comprise data solid-state storage elements 116A-X and a recovery solid-state storage element 116Y. In this operational mode, the effective storage capacity of the rows (e.g., logical pages 542) may be reduced by one physical storage unit (e.g., reduced from 25 physical pages to 24 physical pages). As used herein, the “effective storage capacity” of a storage unit refers to the number of storage units or divisions that are available to store data and/or the total amount of data that can be stored on a logical storage unit. The operational mode described above may be referred to as a “24+1” configuration, denoting that twenty-four (24) physical storage units 532 are available to store data, and one (1) of the physical storage units 532 is used for parity. The disclosed embodiments are not limited to any particular operational mode and/or configuration, and could be adapted to use any number of the solid-state storage elements 116A-Y to store error detection and/or recovery data. - As disclosed above, the
storage controller 139 may be configured to interleave storage operations between a plurality of independent banks 119A-N of solid-state storage arrays 115A-N, which may further ameliorate asymmetry between erase, program, and read operations. FIG. 5C is a block diagram of a system 502 comprising a storage controller 139 configured to manage storage divisions (logical erase blocks 540) that span multiple arrays 115A-N of multiple banks 119A-N. Each bank 119A-N may comprise one or more solid-state storage arrays 115A-N, which, as disclosed herein, may comprise a plurality of solid-state storage elements 116A-Y coupled in parallel by a respective bus 127A-N. The storage controller 139 may be configured to perform storage operations on the storage elements 116A-Y of the arrays 115A-N in parallel and/or in response to a single command and/or signal. - Some operations performed by the
storage controller 139 may cross bank boundaries. The storage controller 139 may be configured to manage groups of logical erase blocks 540 that include erase blocks of multiple arrays 115A-N within different respective banks 119A-N. Each group of logical erase blocks 540 may comprise erase blocks 531A-N on each of the arrays 115A-N. The erase blocks 531A-N comprising the logical erase block group 540 may be erased together (e.g., in response to a single erase command and/or signal or in response to a plurality of separate erase commands and/or signals). Performing erase operations on logical erase block groups 540 comprising large numbers of erase blocks 531A-N within multiple arrays 115A-N may further mask the asymmetric properties of the solid-state storage medium 140, as disclosed herein. - The
storage controller 139 may be configured to perform some storage operations within boundaries of the arrays 115A-N and/or banks 119A-N. In some embodiments, the read, write, and/or program operations may be performed within rows 117 of the solid-state storage arrays 115A-N (e.g., on logical pages 542A-N within arrays 115A-N of respective banks 119A-N). As depicted in FIG. 5C, the logical pages 542A-N of the arrays 115A-N may not extend beyond single arrays 115A-N and/or banks 119A-N. The log storage module 136 and/or bank interleave module 252 may be configured to append data to the storage medium 140 by interleaving and/or scheduling storage operations sequentially between the arrays 115A-N of the banks 119A-N. -
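To make the span of a logical erase block group concrete, here is a hedged sketch (the function name is hypothetical) in which a single logical erase fans out to one physical erase block per storage element in every bank:

```python
# Sanity-check the group arithmetic: a logical erase block group that
# spans 4 banks, each a 25-element array, touches 4 * 25 = 100
# physical erase blocks.
def erase_group(banks, elements_per_array, erase_block):
    erased = []
    for bank in range(banks):
        for elem in range(elements_per_array):
            # One physical erase block per element, in every bank.
            erased.append((bank, elem, erase_block))
    return erased

erased = erase_group(banks=4, elements_per_array=25, erase_block=0)
```

This matches the example given later in the disclosure, where erasing, grooming, or recovering one logical erase block group in a four-bank, 25-element configuration touches 100 physical erase blocks.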
FIG. 5D depicts one embodiment of storage operations that are interleaved between solid-state storage arrays 115A-N of respective banks 119A-N. In the FIG. 5D embodiment, the bank interleave module 252 is configured to interleave programming operations between logical pages 542A-N (rows 117) of the arrays 115A-N within the banks 119A-N. As disclosed above, the write module 240 may comprise a write buffer 250, which may have sufficient capacity to fill one or more logical pages 542A-N of an array 115A-N. In response to filling the write buffer 250 (e.g., buffering data sufficient to fill a portion of a logical page 542A-N), the storage controller 139 may be configured to stream the contents of the write buffer 250 to program buffers of the solid-state storage elements 116A-Y comprising one of the banks 119A-N. The write module 240 may then issue a program command and/or signal to the solid-state storage array 115A-N to store the contents of the program buffers to a specified logical page 542A-N. The log storage module 136 and/or bank interleave module 252 may be configured to provide control and addressing information to the solid-state storage elements 116A-Y of the array 115A-N using a bus 127A-N, as disclosed above. - The
bank interleave module 252 may be configured to append data to the solid-state storage medium 140 by programming data to the arrays 115A-N in accordance with a sequential interleave pattern. The sequential interleave pattern may comprise programming data to a first logical page (LP_0) of array 115A within bank 119A, followed by the first logical page (LP_0) of array 115B within the next bank 119B, and so on, until data is programmed to the first logical page LP_0 of each array 115A-N within each of the banks 119A-N. As depicted in FIG. 5D, data may be programmed to the first logical page LP_0 of array 115A in bank 119A in a program operation 243A. The bank interleave module 252 may then stream data to the first logical page (LP_0) of the array 115B in the next bank 119B. The data may then be programmed to LP_0 of array 115B in bank 119B in a program operation 243B. The program operation 243B may be performed concurrently with the program operation 243A on array 115A of bank 119A; the write module 240 may stream data to array 115B and/or issue a command and/or signal for the program operation 243B while the program operation 243A is being performed on the array 115A. Data may be streamed to and/or programmed on the first logical page (LP_0) of the arrays 115C-N of the other banks 119C-119N following the same sequential interleave pattern (e.g., after data is streamed and/or programmed to LP_0 of array 115B of bank 119B, data is streamed and/or programmed to LP_0 of array 115C of bank 119C in program operation 243C, and so on).
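The sequential interleave pattern can be sketched as a generator (illustrative only; names are hypothetical): LP_0 is programmed on the array of every bank before LP_1 is programmed on any bank.

```python
# Yield (bank, logical_page) in the sequential interleave order:
# page 0 on banks 0..N-1, then page 1 on banks 0..N-1, and so on.
def interleave(num_banks, pages_per_bank):
    for page in range(pages_per_bank):
        for bank in range(num_banks):
            yield (bank, page)

order = list(interleave(num_banks=4, pages_per_bank=2))
```

Because consecutive programs to the same bank are separated by programs on every other bank, an in-flight program operation on one bank is likely to complete before that bank is addressed again.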
Following the programming operation 243N on LP_0 of array 115N within the last bank 119N, the bank interleave module 252 may be configured to begin streaming and/or programming data to the next logical page (LP_1) of array 115A within the first bank 119A, and the interleave pattern may continue accordingly (e.g., program LP_1 of array 115B in bank 119B, followed by LP_1 of array 115C in bank 119C through LP_1 of array 115N in bank 119N, followed by LP_2 of array 115A in bank 119A, and so on). - Sequentially interleaving programming operations as disclosed herein may increase the time between consecutive programming operations on the
same array 115A-N and/or bank 119A-N, which may reduce the likelihood that the storage controller 139 will have to stall storage operations while waiting for a programming operation to complete. As disclosed above, programming operations may take significantly longer than other operations, such as read and/or data streaming operations (e.g., operations to stream the contents of the write buffer 250 to an array 115A-N via the bus 127A-N). The interleave pattern of FIG. 5D may be configured to avoid consecutive program operations on the same array 115A-N and/or bank 119A-N; programming operations on a particular array 115A-N may be separated by N−1 programming operations on other banks (e.g., programming operations on array 115A are separated by programming operations on arrays 115B-N). As such, programming operations on array 115A are likely to be complete before another programming operation needs to be performed on the array 115A. - As depicted in
FIG. 5D, the interleave pattern for programming operations may comprise programming data sequentially across rows 117 (e.g., logical pages 542A-N) of a plurality of arrays 115A-N. As depicted in FIG. 5E, the interleave pattern may result in interleaving programming operations between arrays 115A-N of banks 119A-N, such that the erase blocks of each array 115A-N (erase block groups EBG_0-N) are filled at the same rate. The sequential interleave pattern programs data to the logical pages of the first erase block group (EBG_0) in each array 115A-N before programming data to logical pages LP_0 through LP_N of the next erase block group (EBG_1), and so on (e.g., wherein each erase block comprises 0-N pages). The interleave pattern continues until the last erase block group EBG_N is filled, at which point the interleave pattern continues back at the first erase block group EBG_0. - The erase block groups of the
arrays 115A-N may, therefore, be managed as logical erase blocks 540A-N that span the arrays 115A-N. Referring to FIG. 5C, a logical erase block group 540 may comprise erase blocks 531A-N on each of the arrays 115A-N within the banks 119A-N. As disclosed above, managing groups of erase blocks (e.g., the logical erase block group 540) may comprise erasing each of the erase blocks 531A-N included in the group 540. In the FIG. 5E embodiment, erasing the logical erase block group 540A may comprise erasing EBG_0 of arrays 115A-N in banks 119A-N, erasing a logical erase block group 540B may comprise erasing EBG_1 of arrays 115A-N in banks 119A-N, erasing logical erase block group 540C may comprise erasing EBG_2 of arrays 115A-N in banks 119A-N, and erasing logical erase block group 540N may comprise erasing EBG_N of arrays 115A-N in banks 119A-N. Other operations, such as grooming, recovery, and the like, may be performed at the granularity of the logical erase block groups 540A-N; recovering the logical erase block group 540A may comprise relocating valid data (if any) stored on EBG_0 of arrays 115A-N in banks 119A-N, erasing the erase blocks of each EBG_0 in arrays A-N, and so on. Accordingly, in embodiments comprising four banks 119A-N, each bank 119A-N comprising a respective solid-state storage array 115A-N comprising 25 storage elements 116A-Y, erasing, grooming, and/or recovering a logical erase block group 540 comprises erasing, grooming, and/or recovering 100 physical erase blocks 530. Although particular multi-bank embodiments are described herein, the disclosure is not limited in this regard and could be configured using any multi-bank architecture comprising any number of banks 119A-N of arrays 115A-N comprising any number of solid-state storage elements 116A-Y. - Referring back to
FIG. 2, the storage layer 130 may be configured to store data segments in one or more different configurations, arrangements, and/or layouts within a solid-state storage array 115A-N (by use of the data layout module 248). The data layout module 248 may be configured to buffer and/or arrange data in the write module 240 for storage in a particular arrangement within one or more of the solid-state storage arrays 115A-N. Referring to FIG. 5B, in some embodiments, the data layout module 248 may configure data for “horizontal” storage within rows 117 of the array 115 (e.g., horizontally within logical storage units 542 of the array 115). Accordingly, a data structure, such as an ECC codeword, packet, or the like, may be spread across a plurality of the storage elements 116A-Y comprising the logical storage unit 542. In some embodiments, data may be stored horizontally within one or more independent “channels” of the array 115. As used herein, an independent channel (or “channel”) refers to a subset of one or more columns 118 of the array 115 (e.g., respective subsets of solid-state storage elements 116A-Y). Data may be arranged for storage within respective independent channels. An array 115 comprising N columns 118 may be divided into a configurable number of independent channels X, each comprising Y columns 118 of the array 115. In the FIG. 5B embodiment having a “24+1” configuration that comprises 24 columns 118 for storing data, the channel configurations may include, but are not limited to: 24 channels, each comprising a single column 118; 12 channels, each comprising two solid-state storage elements; eight channels, each comprising three solid-state storage elements; six channels, each comprising four columns 118; and so on. In some embodiments, the array 115 may be divided into heterogeneous channels, such as a first channel comprising 12 columns 118 and six other channels each comprising two columns 118.
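The homogeneous channel arithmetic above reduces to enumerating the divisor pairs of the 24 data columns; a brief sketch (function name is hypothetical):

```python
DATA_COLUMNS = 24

def channel_configs(columns=DATA_COLUMNS):
    # (channels X, columns per channel Y) pairs with X * Y == columns.
    return [(columns // y, y) for y in range(1, columns + 1)
            if columns % y == 0]

configs = channel_configs()
```

Every configuration listed in the text (24×1, 12×2, 8×3, 6×4) appears among the divisor pairs, and each pair covers the full set of data columns exactly once.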
In other embodiments, the data layout module 248 may be configured to arrange data for storage in a vertical code word configuration (disclosed in further detail below). -
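A hedged sketch of the horizontal layout with a parity column, using the sizes from the example that follows (240-byte ECC codewords, 24 data columns plus one parity column): the codeword is split into ten 24-byte data rows, each extended with an XOR parity byte, so any single lost byte in a row can be rebuilt from the surviving 24. This is a simplified single-byte-parity illustration, not the disclosed implementation; the helper names are hypothetical.

```python
def make_rows(codeword):
    # Split a 240-byte codeword into ten 24-byte rows, appending an
    # XOR parity byte to each (one byte per array column: 24 + 1).
    rows = []
    for i in range(0, len(codeword), 24):
        data = codeword[i:i + 24]
        parity = 0
        for b in data:
            parity ^= b
        rows.append(data + bytes([parity]))
    return rows

def rebuild(row, lost):
    # Simplified parity substitution for one byte: XOR the 24
    # surviving bytes of the row to recover the lost one.
    value = 0
    for i, b in enumerate(row):
        if i != lost:
            value ^= b
    return value

rows = make_rows(bytes(range(240)))
```

Because the parity byte is the XOR of the 24 data bytes, the XOR of all 25 bytes in a row is zero, which is what makes the single-byte reconstruction work.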
FIG. 6A is a block diagram of a system 600 comprising one embodiment of a storage controller 139 comprising a data layout module 248 configured to arrange data for storage on a solid-state storage array 115 in a horizontal configuration. The solid-state storage array 115 comprises 25 solid-state storage elements 116A-Y operating in a “24+1” configuration, in which 24 of the solid-state storage elements 116A-X are used to store data, and one storage element (116Y) is used to store parity data. - The
write module 240 may comprise a packet module 244 configured to generate data packets comprising data segments for storage on the array 115, as disclosed above. In the FIG. 6A embodiment, the packet module 244 is configured to format data into a packet format 610, comprising a packet data segment 612 and persistent metadata 614 (e.g., a header). The header 614 may comprise a front-end interface of the packet data segment 612, a sequence number, and/or the like, as disclosed above. In the FIG. 6A embodiment, the packet module 244 is configured to generate packets 610 of a fixed size (e.g., a 520-byte packet data segment 612). - The
ECC write module 246 is configured to generate ECC data structures (ECC codewords 620) comprising portions of one or more packets 610, as disclosed above. The ECC codewords 620 may be of a fixed size. In the FIG. 6A example, each ECC codeword 620 comprises 224 bytes of packet data and a 16-byte error-correcting code or syndrome. Although particular sizes and/or configurations of packets 610 and ECC codewords 620 are disclosed herein, the disclosure is not limited in this regard and could be adapted to use packets 610 and/or ECC codewords 620 of any size. Moreover, in some embodiments, the size of the data structures (e.g., packets 610 and/or ECC codewords 620) may vary. For example, the size and/or contents of the packets 610 and/or ECC codewords 620 may be adapted according to out-of-service conditions, as disclosed above. - The
data layout module 248 may be configured to lay out data for horizontal storage within rows 117 of the array 115. The data layout module 248 may be configured to buffer and/or arrange data segments (e.g., the ECC codewords 620) into data rows 667 comprising 24 bytes of data. The data layout module 248 may be capable of buffering one or more ECC codewords 620 (by use of the write buffer 250). In the FIG. 6A embodiment, the data layout module 248 may be configured to buffer 10 24-byte data rows, which is sufficient to buffer a full 240-byte ECC codeword 620. - The
data layout module 248 may be further configured to stream 24-byte data rows to a parity module 637, which may be configured to generate a parity byte for each 24-byte group. The data layout module 248 streams the resulting 25-byte data rows 667 to the array 115 via the bank controller 252 and interconnect 127 (and/or write buffer 250, as disclosed above). The storage controller 139 may be configured to stream the data rows 667 to respective program buffers of the solid-state storage array 115 (e.g., stream to program buffers of respective solid-state storage elements 116A-Y). Accordingly, each cycle of the interconnect 127 may comprise transferring a byte of a data row 667 to a program buffer of a respective solid-state storage element 116A-Y. In the FIG. 6A embodiment, on each cycle of the interconnect 127, the solid-state storage elements 116A-X receive data bytes of a data row 667 and solid-state storage element 116Y receives the parity byte of the data row 667. - As illustrated in
FIG. 6A, data of the ECC codewords 620 (and packets 610) may be byte-wise interleaved between the solid-state storage elements 116A-X of the array 115; each solid-state storage element 116A-X receives 10 bytes of each 240-byte ECC codeword 620. As used herein, a data row 667 refers to a data set comprising data for each of a plurality of columns 118 within the array 115. The data row 667 may comprise a byte of data for each column 0-23. The data row 667 may further comprise a parity byte corresponding to the data bytes (e.g., a parity byte corresponding to the data bytes for columns 0-23). Data rows 667 may be streamed to respective program buffers of the solid-state storage elements 116A-Y via the interconnect 127. In the horizontal data configuration illustrated in FIG. 6A, streaming a 240-byte ECC codeword 620 to the array 115 may comprise streaming 10 separate data rows 667 to the array 115, each data row comprising 24 data bytes (one for each data solid-state storage element 116A-X) and a corresponding parity byte. - The storage locations of the solid-
state storage array 115 may be capable of storing a large number of ECC codewords 620 and/or packets 610. For example, the solid-state storage elements may comprise 8 kb pages, such that the storage capacity of a storage location (row 117) is 192 kb. Accordingly, each storage location within the array 115 may be capable of storing approximately 819 240B ECC codewords (352 packets 610). The storage address of a data segment may, therefore, comprise: a) the address of the storage location on which the ECC codewords 620 and/or packets 610 comprising the data segment are stored, and b) an offset of the ECC codewords 620 and/or packets 610 within the row 117. The storage location or offset 636 of the packet 610A within the logical page 542A may be determined based on the horizontal layout of the data packet 610A. The offset 636 may identify the location of the ECC codewords 620 comprising the packet 610A (and/or may identify the location of the last ECC codeword 623 comprising data of the packet 610A). Accordingly, in some embodiments, the offset may be relative to one or more data structures on the solid-state storage array 115 (e.g., a packet offset and/or ECC codeword offset). Another offset 638 may identify the location of the last ECC codeword of a next packet (e.g., packet 610B), and so on. - As depicted in
FIG. 6A, each of the ECC codewords 620 comprising the packet 610A may be spread across the storage elements 116A-Y comprising the logical page 542A (e.g., 10 bytes of the ECC codewords 620 on each solid-state storage element 116A-X). Accessing the packet 610A may, therefore, comprise accessing each of the ECC codewords 620 (and, as such, each of the storage elements 116A-X). -
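The byte-wise horizontal interleave described above can be sketched as follows. This is a minimal illustration: the XOR parity calculation and the helper name `to_data_rows` are assumptions (the disclosure refers only generically to a parity byte generated by the parity module 637), and the sketch models the data layout rather than the hardware streaming path:

```python
def to_data_rows(codeword: bytes, data_columns: int = 24) -> list[bytes]:
    """Interleave a 240-byte ECC codeword across 24 data columns, yielding
    10 data rows of 24 data bytes plus one parity byte (column 24) each."""
    assert len(codeword) % data_columns == 0
    rows = []
    for i in range(0, len(codeword), data_columns):
        data = codeword[i:i + data_columns]   # one byte per column 0-23
        parity = 0
        for b in data:
            parity ^= b                       # parity byte (assumed XOR) for column 24
        rows.append(data + bytes([parity]))
    return rows

# A 240-byte codeword streams as 10 rows of 25 bytes (24 data + 1 parity).
rows = to_data_rows(bytes(range(240)))
```

Each returned row corresponds to one cycle group streamed to the program buffers: 24 data bytes for storage elements 116A-X plus the parity byte for element 116Y.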
FIG. 6B is a block diagram of a system 601 depicting one embodiment of a storage controller 139 configured to store data in a horizontal storage configuration. The FIG. 6B embodiment depicts a horizontal layout of an ECC codeword 621 on the array 115 of FIG. 6A. Data D0 denotes a first byte of the ECC codeword 621, and data D239 denotes the last byte (byte 240) of the ECC codeword 621. As illustrated in FIG. 6B, each column 118 of the solid-state storage array 115 comprises 10 bytes of the ECC codeword 621, and the data of the ECC codeword 621 is horizontally spread across a row 117 of the array 115 (e.g., horizontally spread across solid-state storage elements 116A-X of the array 115). FIG. 6B also depicts a data row 667 as streamed to (and stored on) the solid-state storage array 115. As illustrated in FIG. 6B, the data row 667 comprises bytes 0 through 23 of the ECC codeword D, each stored on a respective one of the columns 118. The data row 667 further comprises a parity byte 668 corresponding to the contents of the data row 667 (bytes D0 through D23). - Since the data is spread across the columns 0-23 (solid-
state storage elements 116A-X), reading data of the ECC codeword 621 may require accessing a plurality of columns 118. Moreover, the smallest read unit may be an ECC codeword 620 (and/or packet 610). Referring back to FIG. 2, reading a data segment may comprise determining the storage address of the data by use of, inter alia, the translation module 134 (e.g., the forward map 152). The storage address may comprise a) the address of the storage location (logical page) on which the ECC codewords and/or packets comprising the requested data are stored, and b) the offset of the ECC codewords and/or packets within the particular storage location. Referring to FIG. 1B, the translation module 134 may be configured to maintain a forward map 152 configured to index front-end identifiers to storage addresses on the storage medium 140. As disclosed above, the storage address of data may comprise a) the address of the storage location (logical page) comprising the data and b) an offset of the data within the storage location. Accordingly, the storage addresses 156A-D of the entries 153 within the forward map 152 may be segmented into a first portion comprising an address of a storage location and a second portion comprising the offset of the data within the storage location. - Portions of the
storage metadata 135, including portions of the forward map 152, may be stored in volatile memory of the computing system 100 and/or storage layer 130. The memory footprint of the storage metadata 135 may grow in proportion to the number of entries 153 that are included in the forward map 152, as well as the size of the entries 153 themselves. The memory footprint of the forward map 152 may be related to the size (e.g., number of bits) used to represent the storage address of each entry 153. The memory footprint of the forward map 152 may impact the performance of the computing system 100 hosting the storage layer 130. For example, the computing device 100 may exhaust its volatile memory resources 102, and be forced to page swap memory to non-volatile storage resources 103, or the like. Even small reductions in the size of the entries 153 may have a significant impact on the overall memory footprint of the storage metadata 135 when scaled to a large number of entries 153. - The number of the storage addresses 154A-D may also determine the storage capacity that the
forward map 152 is capable of referencing (e.g., may determine the number of unique storage locations that can be referenced by the entries 153 of the forward map 152). In one embodiment, for example, the entries 153 may comprise 32-bit storage addresses 154A-D. As disclosed above, a portion of each 32-bit storage address 154A-D may be used to address a specific storage location (e.g., logical page), and other portions of the storage addresses 154A-D may determine the offset within the storage location. If 4 bits are needed to represent storage location offsets, the 32-bit storage addresses 154A-D may only be capable of addressing 2^28 unique storage locations. However, if offset information is stored on non-volatile storage media (e.g., on the logical pages themselves), the full 32 bits of the physical address may be used to reference unique logical pages. Therefore, a 32-bit address may address 2^32 unique logical pages rather than only 2^28 logical pages. Accordingly, segmenting storage addresses may effectively increase the number of unique storage locations that can be referenced by the forward map 152. - Referring to
FIG. 2, in some embodiments, the storage layer 130 comprises an offset index module 249 configured to determine the offsets of data segments within storage locations of the storage medium 140. The offset index module 249 may be further configured to generate an offset index configured to map front-end identifiers of the data segments to respective offsets within the storage locations. The offset index may be configured for storage on the storage medium 140. The offset index module 249 may, therefore, segment storage addresses into a first portion configured to address a storage location (logical page) on the storage medium 140, and a second portion corresponding to an offset within the storage location. The storage controller 139 may be configured to store the offset index (the second portion of the storage addresses) on the storage medium 140. The translation module 134 may be configured to index front-end addresses of the data using the first portion of the storage addresses. The second portion of the storage addresses may be omitted from the forward map 152, which may reduce the memory overhead of the forward map 152 and/or enable the forward map 152 to reference a larger storage address space 144. -
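The address segmentation described above can be illustrated with a short sketch. The 28/4-bit split follows the example in the preceding paragraphs; the function and variable names are hypothetical:

```python
ADDRESS_BITS = 32
OFFSET_BITS = 4    # offset-within-storage-location portion (example above)

def split_address(addr: int) -> tuple[int, int]:
    """Segment a 32-bit storage address into (storage location, offset)."""
    return addr >> OFFSET_BITS, addr & ((1 << OFFSET_BITS) - 1)

# Keeping the 4 offset bits in the in-memory forward map leaves 2**28
# addressable storage locations; moving the offset onto the media frees
# the full 2**32 addresses for unique logical pages.
in_memory_locations = 1 << (ADDRESS_BITS - OFFSET_BITS)
on_media_locations = 1 << ADDRESS_BITS
```

Omitting the 4-bit offset from each entry 153 thus multiplies the number of referenceable storage locations by 16 for the same address width.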
FIG. 7A depicts one embodiment of a system 700 for referencing data on a storage medium. The system 700 comprises a forward map 152 that includes an entry 153 configured to associate a front-end address 754D with a storage address 756. Other entries 153 of the forward map 152 are omitted from FIG. 7A to avoid obscuring the details of the depicted embodiments. The offset index module 249 may segment the storage address into a first portion 757 and a second portion 759D. The first portion 757 may correspond to an address of a storage location and the second portion 759D may identify an offset of the data segment within the storage location (e.g., within a logical page 542). The relative size of the offset portion 759D of the storage address 756 to the storage location portion 757 may be based on the size of the data packets 610A-N stored on the solid-state storage array 115, the size of the logical page 542, and/or the layout of the packets 610A-N within the array 115. In the FIG. 7A embodiment, the logical pages 542 may be used in a “24+1” horizontal storage configuration, comprising 24 data columns and a parity column, such that the physical storage capacity of the logical pages 542 within the array 115 is 24 times larger than the page size of the solid-state storage elements 116A-Y (e.g., 192 kb for solid-state storage elements 116A-Y comprising 8 kb pages). Accordingly, each logical page 542 may be capable of storing a relatively large number of data segments and/or packets 610A-N. The disclosure is not limited in this regard, however, and could be adapted for use with any number of solid-state storage elements 116A-Y having any suitable page size, storage configuration, and/or data layout. - The data segment mapped to the front-end address 754 may be stored in the
packet 610D. The storage location address 757 (first portion of the storage address 756) comprises the media address of the logical page 542 within the array 115. The offset 759D indicates an offset of the packet 610D within the logical page 542. - Referring to the
system 701 of FIG. 7B, the offset index module 249 may be configured to determine the offset of the packet 610D within the logical page 542 (as the packet 610D is stored on the storage medium 140). The offset index module 249 may be further configured to generate an offset index 749 configured for storage on the storage medium 140. The offset index 749 may comprise mappings between front-end identifiers 754A-N of the data segments stored on the logical page 542 and the respective offsets of the data segments within the logical page 542 (e.g., the offsets of the data packets 610A-N comprising the data segments). The storage layer 130 may be configured to store the offset index 749 on the storage medium 140. As illustrated in FIG. 7B, the offset index 749 is stored on the corresponding storage location 542 (on the same logical page 542 comprising the packets 610A-N indexed by the offset index 749). Alternatively, the offset index 749 may be stored on a different storage location. - The
storage layer 130 may be configured to leverage the on-media offset index 749 to reduce the size of the entries 153 in the forward map 152 and/or enable the entries 153 to reference larger storage address spaces 144. As illustrated in FIG. 7B, the entry 153 may include only the first portion (storage location address 757) of the storage address 756. The storage layer 130 may be configured to omit and/or exclude the second portion of the address (the offset portion 759D) from the index entries 153. - The
storage layer 130 may determine the full storage address of a data segment by use of the storage location address 757 maintained within the forward map 152 and the offset index 749 stored on the storage medium 140. Accordingly, accessing data associated with the front-end address 754D may comprise a) accessing the storage location address 757 within the entry 153 corresponding to the front-end address 754D in the forward map 152, b) reading the offset index 749 from the logical page 542 at the specified storage location address 757, and c) accessing the packet 610D comprising the data segment at offset 759D by use of the offset index 749. - Referring to the
system 702 of FIG. 7C, in some embodiments, the storage layer 130 may be configured to store data packets 610A-N that are of a fixed, predetermined size. Accordingly, the offset of a particular data packet 610A-N may be determined based on its sequential order within the logical page 542. In such embodiments, the offset index module 249 may generate an offset index 749 comprising an ordered list of front-end identifiers 754A-N, which omits the specific offsets of the corresponding data packets 610A-N. The offsets of the fixed-sized data packets 610A-N may be determined based on the order of the front-end identifiers 754A-N. In another embodiment, the offset index 749 may comprise an offset of the first data packet 610A in the logical page 542, and may omit offsets of the subsequent packets 610B-N. In other embodiments, the offset index 749 may comprise offsets to other data structures within the storage location, such as the offset of particular ECC codewords 620, as disclosed herein. The offsets may be derived from the offset index 749 using any suitable mechanism. In some embodiments, for example, the logical page 542 may store data structures having a variable size; the offset index 749 may be configured to list the front-end identifiers of the data structures along with a length or size of each data structure. In another embodiment, the logical page 542 may be segmented into a plurality of fixed-sized “chunks,” and the data of a front-end identifier may occupy one or more of the chunks. In such embodiments, the offset index 749 may comprise a bitmap (or other suitable data structure) indicating which chunks are occupied by data of which front-end identifiers. - As illustrated in
FIGS. 7B and 7C, the storage controller 139 may be configured to append the offset index to the “tail” of the logical page 542. The disclosure is not limited in this regard, however, and could be adapted to store the offset index 749 at any suitable location within the logical page 542 and/or on another storage location of the storage medium 140. - Referring back to
FIG. 2, the offset index module 249 may be configured to determine the offsets of data segments as the data segments are stored on the storage medium 140. Determining offsets of the data segments may comprise determining the offset of one or more data packets 610 and/or ECC codewords 620 comprising the segments, as disclosed above. Determining the offsets may further comprise monitoring the status of the write buffer 250, OOS (out-of-service) conditions within one or more of the solid-state storage arrays 115A-N, and so on. The offset index module 249 may be further configured to generate an offset index 749 for storage on the storage medium 140. The offset index 749 may be stored at a predetermined location (e.g., offset) within the storage location that the offset index 749 describes. The offset index 749 may flow into the write buffer 250 and onto program buffers of a corresponding solid-state storage array 115A-N, as disclosed herein. The data segments (data packets 610 and/or ECC codewords 620) and the offset index 749 may be written onto a storage location within one of the arrays 115A-N in response to a program command, as disclosed herein. The translation module 134 may be configured to omit offset information from the forward map 152, as disclosed herein. - Reading data corresponding to a front-end address may comprise accessing an
entry 153 associated with the front-end address to determine the physical address of the storage location comprising the requested data. The read module 241 may be configured to read the storage location by, inter alia, issuing a read command to one of the solid-state storage arrays 115A-N, which may cause the storage elements 116A-Y comprising the array 115A-N to transfer the contents of a particular page into a read buffer. The offset index module 249 may be configured to determine the offset of the requested data by a) streaming the portion of the read buffer 251 comprising the offset index 749 into the read module 241 and b) parsing the offset index 749 to determine the offset of the requested data. The read module 241 may then access the portions of the read buffer 251 comprising the requested data by use of the determined offset. - As disclosed herein, the
packet module 244 may be configured to store data segments 312 in a packet format 310 that comprises persistent metadata 314. The persistent metadata 314 may comprise one or more front-end identifiers 315 corresponding to the data segment 312. Inclusion of the front-end identifier metadata 315 may increase the on-media overhead imposed by the packet format 310. The offset index 749 generated by the offset index module 249, which, in some embodiments, is stored with the corresponding data packets, may also include the front-end identifier of the data segment 312. Accordingly, in some embodiments, the packet format 310 may be modified to omit front-end identifier metadata from the persistent metadata 314. - Referring back to
FIGS. 6A and 6B, the horizontal data configuration implemented by the data layout module 248 may spread ECC codewords 620 (and the corresponding packets 610 and/or data segments) across the columns 0-23 (solid-state storage elements 116A-X). As such, reading data of the ECC codeword 621 may require accessing a plurality of columns 118. Moreover, the smallest read unit may be an ECC codeword 620 (and/or packet 610). Reading a packet 310 stored horizontally on the solid-state storage array 115 may, therefore, incur significant overhead. Referring back to FIG. 6A, reading the packet 610A may require transferring data of the logical page 542A into respective read buffers of the storage elements 116A-X (e.g., storage elements 0 through 23). Transferring the contents of a page into the read buffer may incur a latency of Tr (read latency). As used herein, read time or read latency Tr refers to the time needed to transfer the contents of a physical storage unit (e.g., physical page) into a read buffer of a solid-state storage element 116A-Y. In the FIG. 6A embodiment, the read time Tr may, therefore, refer to the time required to transfer a physical page of each of the solid-state storage elements 116A-X into a respective read buffer. Accordingly, the read time Tr of a logical storage unit 650 may correspond to the “slowest” read time of the constituent storage elements 116A-X. - The
read module 241 may be configured to perform a read operation to read a storage location of one of the solid-state storage arrays 115A, transfer the contents of the storage location into respective read buffers of the solid-state storage elements 116A-Y, and stream the data into the read buffer 251 by use of the 24-byte interconnect 127 and/or bank controller 252. The stream time (Ts) of the read operation may refer to the time required to stream the ECC codewords 620 (and/or packets 610) into the read module 241. In the horizontal data layout of FIG. 6A, the stream time Ts may be 10 cycles of the interconnect 127 because, as disclosed above, each column 118 of the array 115 comprises 10 bytes of the ECC codeword 620. Therefore, although the horizontal arrangement may incur a relatively high retrieval overhead, the stream overhead is relatively low (only 10 cycles). - Given the horizontal data arrangement within the solid-
state storage array 115, and the latencies disclosed herein, an input/output operations per second (IOPS) metric may be quantified. The IOPS to read an ECC codeword 620 may be expressed as: -
IOPS = C / (Tr + Ts)  (Equation 1) - In
Equation 1, Tr is the read time of the solid-state storage elements 116A-Y, Ts is the stream time (e.g., the clock speed times the number of cycles required), and C is the number of independent columns 118 used to store the data. Equation 1 may be scaled by the number of independent banks 119A-N available to the storage layer 130. In the horizontal data structure layout of FIGS. 6A and 6B, Equation 1 may be expressed as: -
IOPS = 24 / (Tr + 10·Sc)  (Equation 2) - In
Equation 2, the number of columns is twenty-four (24), and Sc is the cycle time of the bus 127. The cycle time is scaled by 10 since, as disclosed above, a horizontal 240-byte ECC codeword 620 may be streamed in 10 cycles of the interconnect 127. - The
storage layer 130 may be configured to store data in different configurations, layouts, and/or arrangements within a solid-state storage array 115. As disclosed above, in some embodiments, the data layout module 248 is configured to arrange data within respective independent columns, each comprising a subset of the columns 118 of the array 115 (e.g., subsets of the solid-state storage elements 116A-Y). Alternatively, or in addition, the data layout module 248 may be configured to store data vertically within respective “vertical stripes.” The vertical stripes may have a configurable depth, which may be a factor of the page size of the solid-state storage elements 116A-Y comprising the array 115. -
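The IOPS trade-off between layouts can be sketched numerically from the relation described above (Equation 1, with Ts expressed as a cycle count times the bus cycle time Sc). The closed form and the latency values below are illustrative assumptions, not figures from the disclosure; the vertical case uses the 240-cycle stream time discussed further below:

```python
def iops(tr: float, sc: float, columns: int, stream_cycles: int) -> float:
    """IOPS per Equation 1: C independent columns divided by the sum of
    the page read time Tr and the stream time Ts = stream_cycles * Sc."""
    return columns / (tr + stream_cycles * sc)

# Illustrative (assumed) latencies: 50 us page read, 25 ns bus cycle.
tr, sc = 50e-6, 25e-9
horizontal = iops(tr, sc, columns=24, stream_cycles=10)    # Equation 2 form
vertical = iops(tr, sc, columns=24, stream_cycles=240)     # vertical layout
```

With these example latencies the horizontal layout yields a higher per-codeword IOPS, since its 10-cycle stream time adds far less to the dominant page read time than the 240-cycle vertical stream time.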
FIG. 8A depicts another embodiment of a system 800 for referencing data on a storage medium 140. In the FIG. 8A embodiment, the data layout module 248 is configured to store data in a vertical layout within the array 115. The data write module 240 may be configured to buffer ECC codewords 620 for storage on respective columns 118 of the solid-state storage array 115; the ECC codewords 620 may be streamed to respective columns 118 of the array 115 through a write buffer 250, as disclosed above. Accordingly, each cycle of the interconnect 127 may comprise streaming a byte of a different respective ECC codeword 620 to each of the solid-state storage elements 116A-X. The write module 240 may be further configured to generate parity data 637 corresponding to the different ECC codewords 620 for storage on a parity column (e.g., solid-state storage element 116Y). Accordingly, each stream cycle may comprise streaming a byte of a respective ECC codeword 620 to a respective column 118 along with a corresponding parity byte to a parity column 118. - As depicted in
FIG. 8A, the data layout module 248 may be configured to buffer and rotate ECC codewords for vertical storage within respective columns 118 of the array 115: the ECC codewords 620 comprising the data segment 612A may stream to (and be stored vertically on) column 0 (solid-state storage element 116A), and other ECC codewords 620 comprising other data segments may be stored vertically within other columns 118 of the array 115. Solid-state storage element 116Y may be configured to store parity data corresponding to the ECC codewords, as disclosed above. Alternatively, the parity column 24 may be used to store additional ECC codeword data. - In some embodiments, the
storage controller 139 may comprise a plurality of packet modules 244 and/or ECC write modules 246 (e.g., multiple, independent write modules 240) configured to operate in parallel. Data of the parallel write modules 240 may flow into the data layout module 248 in a checkerboard pattern such that the data is arranged in the vertical format disclosed herein. - The vertical arrangement of
FIG. 8A may comprise the data layout module 248 arranging ECC codewords 620 for storage within respective columns 118 of the array 115. Accordingly, each data row 667 streamed to the array 115 may comprise a byte corresponding to a respective ECC codeword 620. The data row 667 may further comprise a corresponding parity byte; the data rows 667 may be configured to stream data of respective ECC codewords 620 to program buffers of respective data columns (e.g., solid-state storage elements 116A-X), and a corresponding parity byte to a parity column (e.g., solid-state storage element 116Y). Accordingly, the data rows 667 may be stored with byte-wise parity information; each byte of a row 667 stored within the solid-state storage elements 116A-X may be reconstructed by use of the other bytes in the row 667 (stored in the other solid-state storage elements 116A-X) and the corresponding parity byte. -
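The byte-wise reconstruction described above can be sketched as follows. XOR parity and the helper name `recover_byte` are assumptions (the disclosure refers only generically to parity data); the four-column row stands in for the 24 data columns of the array:

```python
def recover_byte(row: list[int], parity: int, failed_col: int) -> int:
    """Reconstruct the byte of a failed column from the surviving bytes of
    the same data row 667 and the row's parity byte (XOR parity assumed)."""
    value = parity
    for col, b in enumerate(row):
        if col != failed_col:
            value ^= b
    return value

# Byte 0 of four vertically stored codewords (one per data column).
row = [0x44, 0x4F, 0x52, 0x5A]
parity = 0
for b in row:
    parity ^= b        # parity byte stored on the parity column (element 116Y)
```

Given the parity byte and the bytes from the surviving columns, the byte stored on any single failed storage element can be recovered exactly.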
FIG. 8B depicts another embodiment of a system 801 for referencing data on a storage medium, illustrating one embodiment of a vertical data arrangement within a solid-state storage array 115. As illustrated in FIG. 8B, data D0 through D239 of the ECC codeword 621 is stored vertically in column 0, data O0 through O239 of another ECC codeword 620 is stored vertically in column 1, data Q0 through Q239 of another ECC codeword 620 is stored vertically in column 2, and data Z0 through Z239 of another ECC codeword 620 is stored vertically in column 23. The vertical storage configuration of other data of other ECC codewords 620 (R-Y) is also depicted. -
FIG. 8B also depicts one embodiment of a data row 667 as streamed to, and stored on, the solid-state storage array 115. As illustrated in FIG. 8B, the data row 667 comprises a byte of each of a plurality of ECC codewords 620 (ECC codewords D, O, R, S, T, U . . . V, W, X, Y, and Z), each of which is streamed to, and stored within, a respective column 118 (respective solid-state storage element 116A-X). The data row 667 further comprises a parity byte 668 corresponding to the data within the data row 667. Accordingly, the parity byte 668 corresponds to byte 0 of ECC codewords D, O, R, S, T, U . . . V, W, X, Y, and Z. - The vertical data layout of
FIGS. 8A-B may result in a different IOPS metric. The vertical arrangement of the ECC codewords 620 may reduce overhead due to read time Tr, but may increase the stream overhead Ts. As data is streamed from a logical storage element 116A-Y, each byte on the bus 127 may correspond to a different, respective data segment (e.g., a different ECC codeword 620). As such, 24 different ECC codewords 620 may be streamed in parallel (as opposed to streaming a single ECC codeword 620 as in the horizontal arrangement example). Moreover, since each column may be independently addressable, each transferred logical page may comprise data of a separate request (e.g., may represent data of 24 different read requests). However, since each ECC codeword 620 is arranged vertically, the stream time Ts for an ECC codeword 620 may be increased; the stream time of 240-byte ECC codewords 620 in a vertical configuration may be 240 cycles, as opposed to 10 cycles in the fully horizontal layout of FIGS. 6A and 6B. The IOPS metric for a single ECC codeword 620, therefore, may be represented as: -
IOPS = 24 / (Tr + 240·Sc)  (Equation 3) - The reduced IOPS metric may be offset by the increased throughput (reduced read overhead) and/or different Tr and Ts latency times. These considerations may vary from device to device and/or application to application. Moreover, the IOPS metric may be ameliorated by the fact that multiple,
independent ECC codewords 620 can be streamed simultaneously. Therefore, in some embodiments, the data layout used by the storage layer 130 (and data layout module 248) may be configurable (e.g., by a user setting or preference, firmware update, or the like). - The pages of the solid-
state storage elements 116A-Y may be capable of storing a large number of ECC codewords 620 and/or data packets 610. Accordingly, the vertical data arrangement of FIGS. 8A-B may comprise storing ECC codewords 620 and/or data packets 610 corresponding to different front-end addresses within the same columns 118 of the array. FIG. 8C depicts one embodiment of a system 802 for referencing data stored in a vertical data layout. The offset index module 249 may be configured to segment the storage addresses into a first portion 1057 that identifies the vertical column 118 comprising the data (e.g., the particular page(s) comprising the data segment) and a second portion that identifies the offset of the data segments within the vertical column 118. As illustrated in FIG. 8C, the packet 810B comprising the data segment corresponding to front-end address 854B is stored in a vertical data arrangement within a page of solid-state storage element 116B. The offset index module 249 may be configured to determine the offsets of the packets stored within the page, and to generate an offset index 749 that maps the front-end identifiers of the packets 810A-N to respective offsets 859A-N of the packets within the vertical data arrangement within the page. The storage controller 139 may be configured to store the offset index 749 within the page comprising the packets 810A-N indexed thereby. In the FIG. 8C embodiment, the packets 810A-N are of variable size and, as such, the offset index 749 may associate front-end identifiers 854A-N with respective offsets 859A-N. In other embodiments comprising packets 810A-N that are of a fixed size, the offsets 859A-N may be inferred from the order of the packets within the vertical column arrangement. - The
forward map 152 may be configured to index front-end identifiers to pages of respective solid-state storage elements 116A-Y. Accordingly, the forward map 152 may include a subset of the full storage address 1057 (the portion of the address that identifies the particular page comprising the data segment), and may omit addressing information pertaining to the offset of the data segment within the page. The storage layer 130 may be configured to access the data segment corresponding to front-end address 854B by: a) identifying the page comprising the data segment associated with the front-end address 854B by use of the forward map 152; b) reading the identified page; c) determining the offset of the data packet 810B by use of the offset index 749 stored on the identified page; and d) reading the packet 810B at the determined offset. - In some embodiments, the
data layout module 248 may be configured to lay out and/or arrange data in an adaptive channel configuration. As used herein, an adaptive channel configuration refers to a data layout in which the columns 118 of the array 115 are divided into a plurality of independent channels, each channel comprising a set of columns 118 of the solid-state storage array 115. The channels may comprise subsets of the solid-state storage elements 116A-Y. In some embodiments, an adaptive channel configuration may comprise a fully horizontal data layout, in which data segments are stored within a channel comprising 24 columns 118 of the array 115, as disclosed in conjunction with FIGS. 6A-B and 7A-C. In other embodiments, the adaptive channel configuration may comprise a vertical configuration, in which data segments are stored within one of 24 different channels, each comprising a single column 118 of the array 115, as disclosed in conjunction with FIGS. 10A-C. In other embodiments, the data layout module 248 may be configured to store data in other adaptive channel configurations and/or layouts on the solid-state storage array 115. FIG. 9A depicts another embodiment of a system 900 for adaptive data storage. In the FIG. 9A embodiment, the data layout module 248 is configured to store data structures in adaptive channels comprising six solid-state storage elements 116A-Y (six independent columns 118 per channel). Accordingly, data segments may be stored within respective independent channels, each comprising six columns 118 of the array 115. In the FIG. 9A embodiment, the data layout module 248 may be configured to buffer four ECC codewords 620 to stream to the array 115. Each of the four ECC codewords 620 may be streamed to a respective independent channel of six columns 118 within the array 115. - In alternative adaptive channel configurations, the
data layout module 248 may be configured to buffer 24/N ECC codewords 620, where N is the number of columns 118 in each of the adaptive channels used to store the ECC codewords 620. ECC codewords 620 may be stored within independent channels comprising N columns 118 (e.g., N solid-state storage elements 116A-Y). Accordingly, the horizontal arrangement of FIGS. 6A-B could be referred to as an adaptive channel configuration comprising a single 24-column independent channel, and the vertical data structure configuration of FIGS. 8A-C could be referred to as an adaptive channel configuration comprising independent channels of a single column 118 each. The storage controller 139 may be configured to arrange data in any suitable hybrid arrangement, including heterogeneous sets of independent channels. In some embodiments, for example, the data layout module 248 may be configured to buffer two ECC codewords 620 in a 12-column adaptive channel configuration (e.g., store the ECC codewords 620 across each of 12 columns 118), buffer six ECC codewords 620 in a four-column adaptive channel configuration (e.g., store the ECC codewords 620 across each of four columns 118), and so on. - In some embodiments, data segments may be arranged in
adjacent columns 118 within the array 115 (e.g., a data structure may be stored in columns 0-4). Alternatively, columns may be non-adjacent and/or interleaved with other data segments (e.g., a data segment may be stored on non-adjacent columns 118 and/or interleaved with data segments stored on other columns 118). The data layout module 248 may be configured to adapt the data layout in accordance with out-of-service conditions within the array 115; if a column 118 (or portion thereof) is out of service, the data layout module 248 may be configured to adapt the data layout accordingly (e.g., arrange data to avoid the out-of-service portions of the array 115, as disclosed above). -
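The adaptive channel arithmetic above can be sketched as follows. This is an illustrative model only (the names are not from the patent): for N-column channels in a 24-column array, 24/N ECC codewords are buffered concurrently.

```python
TOTAL_COLUMNS = 24  # data columns 118 in the array 115, per the examples above

def codewords_buffered(channel_width):
    """Number of ECC codewords buffered concurrently: 24/N for
    N-column adaptive channels (24 must be divisible by N)."""
    if TOTAL_COLUMNS % channel_width:
        raise ValueError("channel width must divide the column count")
    return TOTAL_COLUMNS // channel_width

# Horizontal layout: one 24-column channel (1 codeword buffered);
# 12-column channels: 2; six-column: 4; four-column: 6; vertical: 24.
configs = {n: codewords_buffered(n) for n in (24, 12, 6, 4, 1)}
```

The fully horizontal and fully vertical layouts fall out as the two extremes of the same parameter.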
FIG. 9B depicts another embodiment 901 of a six-column independent channel data layout. As illustrated in FIG. 9B, data of an ECC codeword (data D0-239) may be stored within a channel comprising columns 0-5 of the array 115, data of another ECC codeword (data Z0-239) may be stored within an independent channel comprising columns 18-23, and so on. FIG. 9B further depicts a data row 667, which includes six bytes of each of four different ECC codewords, including D and Z (bytes D0-5 and Z0-5). The data row 667 may further comprise a parity byte 668 corresponding to the contents of the data row 667, as disclosed above. - The stream time Ts of an
ECC codeword 620 in the independent channel embodiments of FIGS. 9A-B may be 40 cycles of the bus 127 (e.g., 240/N cycles, where N=6 columns per channel). An IOPS metric of a six-column independent channel data layout may be represented as a function of this stream time and the number of reads that can be performed in parallel. -
- The IOPS metric may be modified according to a number of data segments that can be read in parallel. The six-column independent channel configuration may enable four different ECC codewords (and/or packets) to be read from the
array 115 concurrently. -
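The stream-time and parallelism figures above can be illustrated numerically. This is a sketch; the 240-cycle full-codeword stream time is taken from the 240/N expression above, and the function names are illustrative:

```python
def stream_time_cycles(channel_width, full_codeword_cycles=240):
    """Cycles of the bus 127 to stream one ECC codeword through an
    N-column independent channel (240/N, per the text above)."""
    return full_codeword_cycles // channel_width

def concurrent_reads(channel_width, total_columns=24):
    """Independent channels, and hence codewords, readable in parallel."""
    return total_columns // channel_width

# Six-column channels: Ts = 40 bus cycles, four concurrent reads.
ts = stream_time_cycles(6)
reads = concurrent_reads(6)
```

Narrower channels trade a longer stream time per codeword for more reads in flight, which is the trade-off the IOPS discussion above describes.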
FIG. 9C depicts another embodiment of a system 902 for referencing data stored in an adaptive, independent channel layout. In the FIG. 9C embodiment, data packets 910A-N comprising respective data segments are stored in independent channels comprising six columns 118 of the array 115, such that each independent channel comprises six solid-state storage elements 116A-Y. The offset index module 249 may be configured to segment the storage addresses of the data packets 910A-N into a first portion comprising the physical address 957 of an independent channel, which may correspond to a page address on each of six solid-state storage elements. In the FIG. 9C embodiment, the independent channel address 957 corresponds to a page on solid-state storage elements 0-5. The second portion of the storage address may correspond to offsets of the data packets 910A-N within the independent channel. The offset index module 249 may be configured to generate an offset index 749 configured to map front-end addresses 954A-N to corresponding offsets within the independent channel 957. The data packets 910A-N may be of fixed size and, as such, the offset index 749 may indicate the order of the data packets 910A-N within the independent channel as opposed to specifying particular offsets. - In some embodiments, the
storage layer 130 may be configured to store data in an adaptive vertical stripe configuration. As used herein, a vertical stripe configuration refers to storing data structures vertically within vertical stripes having a predetermined depth within the columns 118 of the solid-state storage array. Multiple vertical stripes may be stored within rows 117 of the array 115. The depth of the vertical stripes may, therefore, determine read-level parallelism, whereas the vertical ECC configuration may provide error detection, correction, and/or reconstruction benefits. -
FIG. 10A depicts one embodiment of a vertical stripe data configuration 1000 within a logical page 542 (row 117) of a solid-state storage array 115. As disclosed above, a vertical stripe may comprise vertically arranged data structures within respective columns 118 of the array 115. The vertical stripes 646A-N have a configurable depth or length. In the FIG. 10A embodiment, the vertical stripes 646A-N are configured to have a depth sufficient to store four ECC codewords. In some embodiments, the depth of the vertical stripes 646A-N corresponds to an integral factor of ECC codeword size relative to a page size of the solid-state storage elements 116 comprising the array 115. For example, the page size of the solid-state storage elements 116 may be 16 kb, each page may be configured to hold four vertical stripes 646A-N, and each vertical stripe may be configured to hold four 1 kb vertically aligned ECC codewords. The disclosed embodiments are not limited in this regard, however, and could be adapted to use any storage medium 140 having any page size in conjunction with any ECC codeword size and/or vertical stripe depth. - The depth of the
vertical stripes 646A-N and the size of typical read operations may determine, inter alia, the number of channels (columns) needed to perform read operations (e.g., determine the number of channels used to perform a read operation, the stream time Ts, and so on). For example, a 4 kb data packet may be contained within 5 ECC codewords (ECC codewords 3 through 7). Reading the 4 kb packet from the array 115 may, therefore, comprise reading data from two columns (columns 0 and 1). A larger 8 kb data structure may span 10 ECC codewords (ECC codewords 98-107) and, as such, reading the 8 kb data structure may comprise reading data from three columns of the array 115. Configuring the vertical stripes 646A-N with an increased depth may decrease the number of columns needed for a read operation, which may increase the stream time Ts of the individual read, but may allow other, independent read operations to be performed in parallel. Decreasing the depth may increase the number of columns needed for read operations, which may decrease the stream time Ts, but may reduce the number of other, independent read operations that can be performed in parallel. -
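The page geometry in the 16 kb example above (1 kb codewords, four codewords per stripe, four stripes per page) can be checked with a small sketch; the function and parameter names are illustrative, not the patent's:

```python
def stripes_per_page(page_kib=16, codeword_kib=1, codewords_per_stripe=4):
    """Vertical stripes that fit in one page when the stripe depth is an
    integral factor of the page size (the 16 kb / 1 kb example above)."""
    stripe_kib = codeword_kib * codewords_per_stripe
    if page_kib % stripe_kib:
        raise ValueError("stripe depth must be an integral factor of page size")
    return page_kib // stripe_kib

n_stripes = stripes_per_page()  # four 4 kb stripes per 16 kb page
```

Doubling the depth to eight codewords per stripe halves the stripe count per page, which is the depth trade-off discussed above.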
FIG. 10B depicts embodiments of vertical stripes 1001, each having a different respective depth. The vertical stripes 607 may comprise 1 kb, vertically aligned ECC codewords, as disclosed above in conjunction with FIGS. 8A-C. A 16 kb data structure 610 (packet) may be stored within a 4 kb deep vertical stripe 746A. The data structure 610 may be contained within 17 separate ECC codewords spanning five columns of the array 115 (columns 0 through 4). Accordingly, reading the data structure 610 may comprise reading data from an independent channel comprising five columns. The stream time Ts of the read operation may correspond to the depth of the vertical stripe 746A (e.g., the stream time of four ECC codewords). - The depth of the vertical stripe 746B may be increased to 8 kb, which may be sufficient to hold eight vertically aligned ECC codewords. The
data structure 610 may be stored within 17 ECC codewords, as disclosed above. However, the modified depth of the vertical stripe 746B may result in the data structure occupying three columns (columns 0 through 2) rather than five. Accordingly, reading the data structure 610 may comprise reading data from an independent channel comprising three columns, which may increase the number of other, independent read operations that can occur in parallel on other columns (e.g., columns 3 and 4). The stream time Ts of the read operation may double as compared to the stream time of the vertical stripe 746A. -
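The column counts in the examples above follow from the stripe depth. A sketch of the arithmetic (an illustrative model with hypothetical names, assuming codewords fill stripes contiguously):

```python
def columns_spanned(first_codeword, n_codewords, stripe_depth):
    """Columns touched by a read of n consecutive codewords, where each
    column holds `stripe_depth` vertically aligned codewords per stripe."""
    start = first_codeword % stripe_depth  # position within the first stripe
    return (start + n_codewords + stripe_depth - 1) // stripe_depth

two_cols = columns_spanned(3, 5, 4)      # codewords 3-7, depth 4 -> 2 columns
three_cols = columns_spanned(98, 10, 4)  # codewords 98-107 -> 3 columns
deep_cols = columns_spanned(0, 17, 8)    # 17 codewords, depth 8 -> 3 columns
```

Increasing the depth shrinks the column footprint of a single read, freeing the remaining columns for independent reads, exactly as described above.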
FIG. 10C is a block diagram of another embodiment of a system 1002 for referencing data on a storage medium. In the FIG. 10C embodiment, the data layout module 248 may be configured to store data in a vertical stripe configuration within logical pages 542 of the solid-state storage array 115. The write module 240 may comprise one or more processing modules, which, as disclosed above, may include, but are not limited to, a packet module 244 and an ECC write module 246. The ECC write module 246 may be configured to generate ECC codewords 620 (ECC codewords 0 through Z) in response to data for storage on the solid-state storage array 115, as disclosed above. The ECC codewords 620 may flow into the data layout module 248 serially via a 128-bit data path of the write module 240. As disclosed in further detail herein, the ECC write module 246 may further comprise a relational module 646 configured to include relational information in one or more of the ECC codewords 620. - The
data layout module 248 may be configured to buffer the ECC codewords 620 for storage in vertical stripes, as disclosed herein. The data layout module 248 may comprise a fill module 660 that is configured to rotate the serial stream of ECC codewords 620 into vertical stripes by use of, inter alia, one or more cross-point switches, FIFO buffers 662A-X, and the like. The FIFO buffers 662A-X may each correspond to a respective column of the array 115. The fill module 660 may be configured to rotate and/or buffer the ECC codewords 620 according to a particular vertical codeword depth, which may be based on the ECC codeword 620 size and/or the size of the physical storage units of the array 115. - The
data layout module 248 may be further configured to manage OOS conditions within the solid-state storage array 115. As disclosed above, an OOS condition may indicate that one or more columns 118 of the array are not currently in use to store data. The storage metadata 135 may identify columns 118 that are out of service within various portions of the solid-state storage array 115 (e.g., rows 117, logical erase blocks 540, or the like). In the FIG. 10C embodiment, the storage metadata 135 may indicate that column 2 of the current logical page 542 is out of service. In response, the fill module 660 may be configured to avoid column 2 by, inter alia, injecting padding data into the FIFO buffer of the OOS column (e.g., FIFO buffer 662C). - In some embodiments, the
data layout module 248 may comprise a parity module 637 that is configured to generate parity data in accordance with the vertical stripe data configuration. The parity data may be generated horizontally, on a byte-by-byte basis within rows 117 of the array 115, as disclosed above. The parity data P0 may correspond to bytes of the ECC codewords 620 stored within the corresponding data row 667. The data layout module 248 may include a parity control FIFO 662Y configured to manage OOS conditions for parity calculations (e.g., ignore data within OOS columns for the purposes of the parity calculation). - The vertical stripe data configuration generated by the data layout module 248 (and parity module 637) may flow to write buffers of the solid-
state storage elements 116A-Y within the array 115 through the write buffer and/or bank controller 252, as disclosed above. In some embodiments, data rows 667 generated by the write module 240 may comprise one byte for each data column in the array 115 (e.g., solid-state storage elements 116A-X). Each byte in a data row 667 may correspond to a respective ECC codeword 620, and each data row 667 may include a corresponding parity byte. Accordingly, each data row 667 may comprise horizontal byte-wise parity information from which any of the bytes within the row 667 may be reconstructed, as disclosed herein. A data row 667A may comprise a byte of ECC codeword 0 for storage on column 0, a byte of ECC codeword 4 for storage on column 1, padding data for column 2, a byte of ECC codeword 88 for storage on column 23, and so on. The data row 667 may further comprise a parity byte 668A for storage on column 24 (or another column), as disclosed above. - The data may be programmed onto the solid-
state storage array 115 as a plurality of vertical stripes 646A-N within a logical page 542, as disclosed above (e.g., by programming the contents of the program buffers to physical storage units of the solid-state storage elements 116A-Y within the array 115). In the FIG. 10C embodiment, the indexing S*N may correspond to vertical stripes configured to hold S ECC codewords in an array 115 comprising N columns for storing data. -
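The row construction described above can be modeled as follows. This is an assumption-laden sketch (the padding value, names, and data layout are illustrative, not the hardware design): one byte per data column, padding injected for the out-of-service column, and a trailing byte-wise XOR parity byte from which any in-service byte can be reconstructed.

```python
PAD = 0x00  # assumed padding value for an out-of-service column

def build_data_row(data_bytes, oos_columns):
    """Build one data row 667: one byte per data column plus a trailing
    horizontal parity byte; bytes in OOS columns are replaced with
    padding and ignored by the parity calculation, as described above."""
    row, parity = [], 0
    for col, byte in enumerate(data_bytes):
        if col in oos_columns:
            row.append(PAD)    # padding injected for the OOS column
        else:
            row.append(byte)
            parity ^= byte     # horizontal byte-wise parity over in-service bytes
    row.append(parity)         # parity byte (cf. 668A) appended to the row
    return row

row = build_data_row([0x11, 0x22, 0x33], oos_columns={1})
# Reconstruct the first in-service byte from the parity and the other one:
recovered = row[-1] ^ row[2]
```

The same XOR property is what allows any single byte within the row to be rebuilt if its column fails.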
FIG. 10D depicts another embodiment of a system 1003 configured to reference data stored in a vertical stripe configuration on a solid-state storage array. The offset index module 249 may be configured to segment the storage address of the packets 1010A-N into a first portion corresponding to an address of the vertical stripe, which may correspond to a particular offset within a page of one or more storage elements (e.g., storage element 116C), and a second portion corresponding to offsets of the packets 1010A-N within the vertical stripe. The offset index module 249 may generate an offset index 749C configured for storage within the vertical stripe, as disclosed above. The offset index 749C may map front-end identifiers 1054A-N of the packets 1010A-N stored within the vertical stripe to respective offsets 1059A-N of the packets. - As disclosed above, packets may span vertical stripes. In the
FIG. 10D embodiment, the packet 1010N is stored within vertical stripes on storage elements 116C and 116D. The index entry 1059N corresponding to the packet 1010N may indicate that the packet 1010N continues within the next stripe. The offset index 749D of the next vertical stripe may also include an entry associated with the front-end address 1054N of the packet 1010N, and may indicate the offset and/or length of the remaining data of the packet 1010N within the column 116D. Accordingly, the offset index module 249 may be configured to link the offset index 749C to the offset index 749D. As illustrated in FIG. 10D, the forward map 152 may only include references to the vertical stripe on column 116C that comprises the “head” of the packet 1010N. Moreover, the translation module 134 may omit the second portion of the storage addresses (the offsets 1059A-N and 1069N) from the entries 153 to reduce the memory overhead of the forward map 152 and/or to allow the forward map 152 to reference larger storage address spaces 144, as disclosed herein. -
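A packet spanning stripes could be reassembled by following the linked offset indexes, sketched here with hypothetical entry fields (`offset`, `length`, `next`) and a toy stripe layout; these names are assumptions for illustration, not the patent's data structures:

```python
def gather_packet(front_end_id, head_stripe_addr, read_stripe):
    """Reassemble a packet whose head stripe is referenced by the
    forward map, following continuation links between linked offset
    indexes (a model of the 749C -> 749D linkage described above)."""
    parts, stripe_addr = [], head_stripe_addr
    while stripe_addr is not None:
        stripe = read_stripe(stripe_addr)
        entry = stripe["index"][front_end_id]
        parts.append(stripe["data"][entry["offset"]:entry["offset"] + entry["length"]])
        stripe_addr = entry.get("next")  # link to the next stripe, if any
    return b"".join(parts)

# Toy stripes: the head on "C" continues onto "D", as in FIG. 10D.
stripes = {
    "C": {"index": {"1054N": {"offset": 2, "length": 3, "next": "D"}}, "data": b"xxHEA"},
    "D": {"index": {"1054N": {"offset": 0, "length": 2, "next": None}}, "data": b"D!yy"},
}
packet = gather_packet("1054N", "C", stripes.__getitem__)
```

Note that only the head stripe address needs to appear in the forward map; the continuation is discovered on-media.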
FIG. 11 is a flow diagram of one embodiment of a method 1100 for referencing data on a storage medium. Step 1110 may comprise arranging data segments for storage at respective offsets within a storage location of a storage medium 140. In some embodiments, step 1110 comprises formatting the data segments into one or more packets 610 and/or encoding the packets 610 into one or more ECC codewords 620, as disclosed herein. Step 1110 may further comprise streaming the packets 610 and/or ECC codewords 620 to program buffers of a solid-state storage array 115 via the interconnect 127. Step 1110 may further include generating parity data for each of a plurality of data rows 667 comprising the data segments, as disclosed herein. - In some embodiments,
step 1110 may further comprise compressing one or more of the data segments, such that a compressed size of the data segments differs from the original, uncompressed size of the data segments. Step 1110 may further include encrypting and/or whitening the data segments, as disclosed herein. -
Step 1120 may comprise mapping front-end addresses of the data segments using, inter alia, a forward map 152, as disclosed herein. Step 1120 may comprise segmenting the storage addresses of the data segments into a first portion that addresses the storage location comprising the data segments (e.g., the physical address of the logical page 542 comprising the data segments), and second portions comprising the respective offsets of the data segments within the storage location. Step 1120 may further comprise indexing the front-end addresses to the first portion of the storage address, and omitting the second portion of the storage address from the entries 153 of the forward index 152. Step 1120 may comprise determining the data segment offsets based on the compressed size of the data segments, as disclosed herein. Accordingly, the offsets determined at step 1120 may differ from offsets based on the original, uncompressed size of the data segments. -
Step 1130 may comprise generating an offset index for the storage location by use of the offset index module 249, as disclosed herein. Step 1130 may comprise generating an offset index 749 data structure that is configured for storage on the storage medium 140. The offset index 749 may be configured for storage at a predetermined offset and/or location within the storage location comprising the indexed data segments. The offset index 749 may be configured to map front-end addresses of the data segments stored within the storage location to respective offsets of the data segments within the storage location, as disclosed herein. In some embodiments, step 1130 further comprises storing the offset index 749 on the storage medium 140, which may comprise streaming the offset index 749 to program buffers of the storage elements 116A-Y comprising a solid-state storage array 115A-N and/or issuing a program command to the solid-state storage elements 116A-Y, as disclosed herein. -
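As an illustrative sketch of step 1130 (names and on-media layout are assumptions, not the patent's implementation), an offset index for variable-size data segments stored back-to-back within a storage location might be generated as:

```python
def build_offset_index(packets):
    """Map each data segment's front-end address to its byte offset
    within the storage location, for variable-size packets stored
    back-to-back (a model of the offset index 749)."""
    index = {}
    offset = 0
    for front_end_addr, payload in packets:
        index[front_end_addr] = offset  # front-end address -> offset
        offset += len(payload)          # next packet begins where this one ends
    return index

# Three variable-size segments; the second begins at offset 512.
index = build_offset_index([("854A", b"a" * 512), ("854B", b"b" * 300), ("854C", b"c" * 700)])
```

With fixed-size packets, as noted earlier, the offsets could instead be inferred from packet order rather than stored explicitly.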
FIG. 12 is a flow diagram of another embodiment of a method 1200 for referencing data stored on a storage medium 140. Step 1210 may comprise identifying a storage location comprising data corresponding to a specified front-end address. Step 1210 may be implemented in response to a storage request pertaining to the front-end address. The storage request may include one or more of: a read request, a read-modify-write request, a copy request, and/or the like. Step 1210 may comprise accessing an entry 153 in the forward map 152 using, inter alia, the specified front-end address. The entry 153 may comprise the first portion of the full storage address of the requested data. The first portion may identify the storage location (e.g., logical page 542) comprising the requested data. The second portion of the full storage address may be maintained in a second index that is stored on the storage medium 140 and, as such, may be omitted from the forward map 152. -
Step 1220 may comprise determining an offset of the requested data within the identified storage location. Step 1220 may comprise a) reading the identified storage location, b) accessing an offset index 749 at a predetermined location within the identified storage location, and c) determining the offset of the data corresponding to the front-end address by use of the offset index. Accordingly, step 1220 may comprise forming the full storage address of the requested data by combining the address of the storage location maintained in the forward map 152 with the offset maintained in the on-media offset index 749. -
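Steps 1210 and 1220 can be modeled together as below; a hypothetical sketch (the storage-location layout and names are assumptions) in which the forward map supplies only the first address portion and the on-media index supplies the second:

```python
def resolve_address(front_end_addr, forward_map, read_location):
    """Form the full storage address from its two portions: the forward
    map entry supplies the storage-location address (step 1210), and the
    offset index read from that location supplies the offset (step 1220)."""
    location_addr = forward_map[front_end_addr]        # first portion (entry 153)
    location = read_location(location_addr)            # a) read the storage location
    offset = location["offset_index"][front_end_addr]  # b), c) on-media offset index
    return location_addr, offset

# Toy medium: storage location 7 carries its own offset index.
media = {7: {"offset_index": {"854B": 512}}}
full_addr = resolve_address("854B", {"854B": 7}, media.__getitem__)
```

The returned pair is the full storage address; step 1230 would then stream the data from that offset.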
Step 1230 may comprise accessing the requested data. Step 1230 may include streaming one or more ECC codewords 620 comprising the data packets 610 in which the requested data was stored from read buffers of the storage elements 116A-Y comprising a storage array 115A-N. Step 1230 may comprise streaming the data from the offset determined at step 1220. Step 1230 may further include processing the ECC codeword(s) 620 and/or packet(s) 610 comprising the requested data, as disclosed herein (e.g., by use of the ECC read module 247 and/or depacket module 245). Step 1230 may further comprise decompressing the requested data by use of the decompression module 243, decrypting the data, dewhitening the data, and so on, as disclosed herein. - The above description provides numerous specific details for a thorough understanding of the embodiments described herein. However, those of skill in the art will recognize that one or more of the specific details may be omitted, or other methods, components, or materials may be used. In some cases, operations are not shown or described in detail.
- Furthermore, the described features, operations, or characteristics may be combined in any suitable manner in one or more embodiments. It will also be readily understood that the order of the steps or actions of the methods described in connection with the embodiments disclosed may be changed as would be apparent to those skilled in the art. Thus, any order in the drawings or Detailed Description is for illustrative purposes only and is not meant to imply a required order, unless specified to require an order.
- Embodiments may include various steps, which may be embodied in machine-executable instructions to be executed by a general-purpose or special-purpose computer (or other electronic device). Alternatively, the steps may be performed by hardware components that include specific logic for performing the steps, or by a combination of hardware, software, and/or firmware.
- Embodiments may also be provided as a computer program product including a computer-readable storage medium having stored instructions thereon that may be used to program a computer (or other electronic device) to perform processes described herein. The computer-readable storage medium may include, but is not limited to: hard drives, floppy diskettes, optical disks, CD-ROMs, DVD-ROMs, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, solid-state memory devices, or other types of medium/machine-readable medium suitable for storing electronic instructions.
- As used herein, a software module or component may include any type of computer instruction or computer executable code located within a memory device and/or computer-readable storage medium. A software module may, for instance, comprise one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that performs one or more tasks or implements particular abstract data types.
- In certain embodiments, a particular software module may comprise disparate instructions stored in different locations of a memory device, which together implement the described functionality of the module. Indeed, a module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices. Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network. In a distributed computing environment, software modules may be located in local and/or remote memory storage devices. In addition, data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.
- It will be understood by those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the disclosure.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/030,232 US20180314627A1 (en) | 2012-03-02 | 2018-07-09 | Systems and Methods for Referencing Data on a Storage Medium |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261606253P | 2012-03-02 | 2012-03-02 | |
US201261606755P | 2012-03-05 | 2012-03-05 | |
US201261663464P | 2012-06-22 | 2012-06-22 | |
US13/784,705 US9495241B2 (en) | 2006-12-06 | 2013-03-04 | Systems and methods for adaptive data storage |
US13/925,410 US10019353B2 (en) | 2012-03-02 | 2013-06-24 | Systems and methods for referencing data on a storage medium |
US16/030,232 US20180314627A1 (en) | 2012-03-02 | 2018-07-09 | Systems and Methods for Referencing Data on a Storage Medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/925,410 Continuation US10019353B2 (en) | 2012-03-02 | 2013-06-24 | Systems and methods for referencing data on a storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180314627A1 true US20180314627A1 (en) | 2018-11-01 |
Family
ID=49381226
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/925,410 Active 2035-04-05 US10019353B2 (en) | 2012-03-02 | 2013-06-24 | Systems and methods for referencing data on a storage medium |
US16/030,232 Abandoned US20180314627A1 (en) | 2012-03-02 | 2018-07-09 | Systems and Methods for Referencing Data on a Storage Medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/925,410 Active 2035-04-05 US10019353B2 (en) | 2012-03-02 | 2013-06-24 | Systems and methods for referencing data on a storage medium |
Country Status (1)
Country | Link |
---|---|
US (2) | US10019353B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10846588B2 (en) * | 2018-09-27 | 2020-11-24 | Deepmind Technologies Limited | Scalable and compressive neural network data storage system |
US11169877B2 (en) * | 2020-03-17 | 2021-11-09 | Allegro Microsystems, Llc | Non-volatile memory data and address encoding for safety coverage |
FR3136100A1 (en) * | 2022-05-25 | 2023-12-01 | STMicroelectronics (Alps) SAS | Data memory emulation in Flash memory |
Families Citing this family (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2546304A1 (en) | 2003-11-13 | 2005-05-26 | Commvault Systems, Inc. | System and method for performing an image level snapshot and for restoring partial volume data |
US9471578B2 (en) | 2012-03-07 | 2016-10-18 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9298715B2 (en) | 2012-03-07 | 2016-03-29 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9342537B2 (en) | 2012-04-23 | 2016-05-17 | Commvault Systems, Inc. | Integrated snapshot interface for a data storage system |
CN102868631B (en) * | 2012-09-28 | 2016-09-21 | 华为技术有限公司 | Load sharing method and device |
TWI486963B (en) * | 2012-11-08 | 2015-06-01 | Jmicron Technology Corp | Mehtod of error checking and correction and error checking and correction circuit thereof |
US9665973B2 (en) * | 2012-11-20 | 2017-05-30 | Intel Corporation | Depth buffering |
US9886346B2 (en) | 2013-01-11 | 2018-02-06 | Commvault Systems, Inc. | Single snapshot for multiple agents |
US10728171B2 (en) * | 2013-04-30 | 2020-07-28 | Hewlett Packard Enterprise Development Lp | Governing bare metal guests |
US20150012801A1 (en) * | 2013-07-03 | 2015-01-08 | Chih-Nan YEN | Method of detecting and correcting errors with bch and ldpc engines for flash storage systems |
WO2015047334A1 (en) * | 2013-09-27 | 2015-04-02 | Intel Corporation | Error correction in non_volatile memory |
US10444998B1 (en) | 2013-10-24 | 2019-10-15 | Western Digital Technologies, Inc. | Data storage device providing data maintenance services |
US9330143B2 (en) * | 2013-10-24 | 2016-05-03 | Western Digital Technologies, Inc. | Data storage device supporting accelerated database operations |
US10365858B2 (en) * | 2013-11-06 | 2019-07-30 | Pure Storage, Inc. | Thin provisioning in a storage device |
US9753812B2 (en) | 2014-01-24 | 2017-09-05 | Commvault Systems, Inc. | Generating mapping information for single snapshot for multiple applications |
US9632874B2 (en) | 2014-01-24 | 2017-04-25 | Commvault Systems, Inc. | Database application backup in single snapshot for multiple applications |
US9639426B2 (en) | 2014-01-24 | 2017-05-02 | Commvault Systems, Inc. | Single snapshot for multiple applications |
US9495251B2 (en) | 2014-01-24 | 2016-11-15 | Commvault Systems, Inc. | Snapshot readiness checking and reporting |
KR102318478B1 (en) | 2014-04-21 | 2021-10-27 | 삼성전자주식회사 | Storage controller, storage system and method of operation of the storage controller |
US10042716B2 (en) | 2014-09-03 | 2018-08-07 | Commvault Systems, Inc. | Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent |
US9774672B2 (en) | 2014-09-03 | 2017-09-26 | Commvault Systems, Inc. | Consolidated processing of storage-array commands by a snapshot-control media agent |
US9448731B2 (en) | 2014-11-14 | 2016-09-20 | Commvault Systems, Inc. | Unified snapshot storage management |
US9648105B2 (en) | 2014-11-14 | 2017-05-09 | Commvault Systems, Inc. | Unified snapshot storage management, using an enhanced storage manager and enhanced media agents |
US9281009B1 (en) * | 2014-12-18 | 2016-03-08 | Western Digital Technologies, Inc. | Data storage device employing variable size interleave written track segments |
US20180101434A1 (en) * | 2014-12-31 | 2018-04-12 | International Business Machines Corporation | Listing types in a distributed storage system |
US9900027B1 (en) * | 2015-04-22 | 2018-02-20 | Xilinx, Inc. | Method and apparatus for detecting and correcting errors in a communication channel |
US11983138B2 (en) | 2015-07-26 | 2024-05-14 | Samsung Electronics Co., Ltd. | Self-configuring SSD multi-protocol support in host-less environment |
US20180032471A1 (en) * | 2016-07-26 | 2018-02-01 | Samsung Electronics Co., Ltd. | Self-configuring ssd multi-protocol support in host-less environment |
US10503753B2 (en) | 2016-03-10 | 2019-12-10 | Commvault Systems, Inc. | Snapshot replication operations based on incremental block change tracking |
US10037245B2 (en) | 2016-03-29 | 2018-07-31 | International Business Machines Corporation | Raid system performance enhancement using compressed data and byte addressable storage devices |
US10437667B2 (en) * | 2016-03-29 | 2019-10-08 | International Business Machines Corporation | Raid system performance enhancement using compressed data |
WO2017185322A1 (en) * | 2016-04-29 | 2017-11-02 | Huawei Technologies Co., Ltd. | Storage network element discovery method and device |
US10263638B2 (en) * | 2016-05-31 | 2019-04-16 | Texas Instruments Incorporated | Lossless compression method for graph traversal |
US11144496B2 (en) | 2016-07-26 | 2021-10-12 | Samsung Electronics Co., Ltd. | Self-configuring SSD multi-protocol support in host-less environment |
US10210123B2 (en) | 2016-07-26 | 2019-02-19 | Samsung Electronics Co., Ltd. | System and method for supporting multi-path and/or multi-mode NVMe over fabrics devices |
US20190109720A1 (en) | 2016-07-26 | 2019-04-11 | Samsung Electronics Co., Ltd. | Modular system (switch boards and mid-plane) for supporting 50G or 100G Ethernet speeds of FPGA+SSD |
US10346041B2 (en) | 2016-09-14 | 2019-07-09 | Samsung Electronics Co., Ltd. | Method for using BMC as proxy NVMeoF discovery controller to provide NVM subsystems to host |
US11461258B2 (en) | 2016-09-14 | 2022-10-04 | Samsung Electronics Co., Ltd. | Self-configuring baseboard management controller (BMC) |
US10372659B2 (en) | 2016-07-26 | 2019-08-06 | Samsung Electronics Co., Ltd. | Multi-mode NVMe over fabrics devices |
CN106445589A (en) * | 2016-09-08 | 2017-02-22 | 百富计算机技术(深圳)有限公司 | Application loading method and apparatus for small embedded system |
US10698616B2 (en) * | 2016-11-15 | 2020-06-30 | Quantum Corporation | Efficient data storage across multiple storage volumes each representing a track of a larger storage volume |
US10394468B2 (en) * | 2017-02-23 | 2019-08-27 | International Business Machines Corporation | Handling data slice revisions in a dispersed storage network |
US11531767B1 (en) * | 2017-09-30 | 2022-12-20 | Superpowered Inc. | Strategic digital media encryption |
US10417088B2 (en) | 2017-11-09 | 2019-09-17 | International Business Machines Corporation | Data protection techniques for a non-volatile memory array |
US10942845B2 (en) * | 2018-01-30 | 2021-03-09 | EMC IP Holding Company LLC | Inline coalescing of file system free space |
US10732885B2 (en) | 2018-02-14 | 2020-08-04 | Commvault Systems, Inc. | Block-level live browsing and private writable snapshots using an ISCSI server |
US11397532B2 (en) | 2018-10-15 | 2022-07-26 | Quantum Corporation | Data storage across simplified storage volumes |
KR20200099882A (en) * | 2019-02-15 | 2020-08-25 | 에스케이하이닉스 주식회사 | Memory controller and operating method thereof |
US11061598B2 (en) * | 2019-03-25 | 2021-07-13 | Western Digital Technologies, Inc. | Optimized handling of multiple copies in storage management |
WO2021010088A1 (en) * | 2019-07-18 | 2021-01-21 | NEC Corporation | Memory control method, memory control device, and program |
US11593026B2 (en) | 2020-03-06 | 2023-02-28 | International Business Machines Corporation | Zone storage optimization using predictive protocol patterns |
US11775391B2 (en) | 2020-07-13 | 2023-10-03 | Samsung Electronics Co., Ltd. | RAID system with fault resilient storage devices |
US11606422B2 (en) * | 2021-01-20 | 2023-03-14 | Samsung Electronics Co., Ltd. | Server for controlling data transmission through data pipeline and operation method thereof |
US11809274B2 (en) * | 2021-04-21 | 2023-11-07 | EMC IP Holding Company LLC | Recovery from partial device error in data storage system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060206603A1 (en) * | 2005-03-08 | 2006-09-14 | Vijayan Rajan | Integrated storage virtualization and switch system |
US20100011150A1 (en) * | 2008-07-10 | 2010-01-14 | Dean Klein | Data collection and compression in a solid state storage device |
US20110126045A1 (en) * | 2007-03-29 | 2011-05-26 | Bennett Jon C R | Memory system with multiple striping of raid groups and method for performing the same |
US20120036309A1 (en) * | 2010-08-05 | 2012-02-09 | Ut-Battelle, Llc | Coordinated garbage collection for raid array of solid state disks |
US20150268864A1 (en) * | 2014-03-20 | 2015-09-24 | Pure Storage, Inc. | Remote replication using mediums |
Family Cites Families (258)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB123416A (en) | 1918-02-28 | 1919-02-27 | John Buchanan | Composite Valve for all Classes of Internal Combustion Engines. |
US4571674A (en) | 1982-09-27 | 1986-02-18 | International Business Machines Corporation | Peripheral storage system having multiple data transfer rates |
US5359726A (en) | 1988-12-22 | 1994-10-25 | Thomas Michael E | Ferroelectric storage device used in place of a rotating disk drive unit in a computer system |
US5247658A (en) | 1989-10-31 | 1993-09-21 | Microsoft Corporation | Method and system for traversing linked list record based upon write-once predetermined bit value of secondary pointers |
US5261068A (en) | 1990-05-25 | 1993-11-09 | Dell Usa L.P. | Dual path memory retrieval system for an interleaved dynamic RAM memory unit |
US5307497A (en) | 1990-06-25 | 1994-04-26 | International Business Machines Corp. | Disk operating system loadable from read only memory using installable file system interface |
US5291496A (en) | 1990-10-18 | 1994-03-01 | The United States Of America As Represented By The United States Department Of Energy | Fault-tolerant corrector/detector chip for high-speed data processing |
JP3227707B2 (en) | 1990-12-29 | 2001-11-12 | 日本電気株式会社 | Cache memory control method for each driving mode |
US5325509A (en) | 1991-03-05 | 1994-06-28 | Zitel Corporation | Method of operating a cache memory including determining desirability of cache ahead or cache behind based on a number of available I/O operations |
US5438671A (en) | 1991-07-19 | 1995-08-01 | Dell U.S.A., L.P. | Method and system for transferring compressed bytes of information between separate hard disk drive units |
US5313475A (en) | 1991-10-31 | 1994-05-17 | International Business Machines Corporation | ECC function with self-contained high performance partial write or read/modify/write and parity look-ahead interface scheme |
US5469555A (en) | 1991-12-19 | 1995-11-21 | Opti, Inc. | Adaptive write-back method and apparatus wherein the cache system operates in a combination of write-back and write-through modes for a cache-based microprocessor system |
US5596736A (en) | 1992-07-22 | 1997-01-21 | Fujitsu Limited | Data transfers to a backing store of a dynamically mapped data storage system in which data has nonsequential logical addresses |
US5337275A (en) | 1992-10-30 | 1994-08-09 | Intel Corporation | Method for releasing space in flash EEPROM memory array to allow the storage of compressed data |
US5416915A (en) | 1992-12-11 | 1995-05-16 | International Business Machines Corporation | Method and system for minimizing seek affinity and enhancing write sensitivity in a DASD array |
US5845329A (en) | 1993-01-29 | 1998-12-01 | Sanyo Electric Co., Ltd. | Parallel computer |
US5459850A (en) | 1993-02-19 | 1995-10-17 | Conner Peripherals, Inc. | Flash solid state drive that emulates a disk drive and stores variable length and fixed length data blocks |
US5404485A (en) | 1993-03-08 | 1995-04-04 | M-Systems Flash Disk Pioneers Ltd. | Flash file system |
JP2784440B2 (en) | 1993-04-14 | 1998-08-06 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Data page transfer control method |
CA2121852A1 (en) | 1993-04-29 | 1994-10-30 | Larry T. Jost | Disk meshing and flexible storage mapping with enhanced flexible caching |
US5499354A (en) | 1993-05-19 | 1996-03-12 | International Business Machines Corporation | Method and means for dynamic cache management by variable space and time binding and rebinding of cache extents to DASD cylinders |
JPH086854A (en) | 1993-12-23 | 1996-01-12 | Unisys Corp | Outboard-file-cache external processing complex |
US5809527A (en) | 1993-12-23 | 1998-09-15 | Unisys Corporation | Outboard file cache system |
GB9326499D0 (en) | 1993-12-24 | 1994-03-02 | Deas Alexander R | Flash memory system with arbitrary block size |
US5559988A (en) | 1993-12-30 | 1996-09-24 | Intel Corporation | Method and circuitry for queuing snooping, prioritizing and suspending commands |
US5603001A (en) | 1994-05-09 | 1997-02-11 | Kabushiki Kaisha Toshiba | Semiconductor disk system having a plurality of flash memories |
US5504882A (en) | 1994-06-20 | 1996-04-02 | International Business Machines Corporation | Fault tolerant data storage subsystem employing hierarchically arranged controllers |
DE19540915A1 (en) | 1994-11-10 | 1996-05-15 | Raymond Engineering | Redundant arrangement of solid state memory modules |
US6170047B1 (en) | 1994-11-16 | 2001-01-02 | Interactive Silicon, Inc. | System and method for managing system memory and/or non-volatile memory using a memory controller with integrated compression and decompression capabilities |
US6002411A (en) | 1994-11-16 | 1999-12-14 | Interactive Silicon, Inc. | Integrated video and memory controller with data processing and graphical processing capabilities |
US5701434A (en) | 1995-03-16 | 1997-12-23 | Hitachi, Ltd. | Interleave memory controller with a common access queue |
DE69615278T2 (en) | 1995-06-06 | 2002-06-27 | Hewlett Packard Co | SDRAM data allocation arrangement and method |
US6757800B1 (en) | 1995-07-31 | 2004-06-29 | Lexar Media, Inc. | Increasing the memory performance of flash memory devices by writing sectors simultaneously to multiple flash memory devices |
US8171203B2 (en) | 1995-07-31 | 2012-05-01 | Micron Technology, Inc. | Faster write operations to nonvolatile memory using FSInfo sector manipulation |
US6801979B1 (en) | 1995-07-31 | 2004-10-05 | Lexar Media, Inc. | Method and apparatus for memory control circuit |
US5930815A (en) | 1995-07-31 | 1999-07-27 | Lexar Media, Inc. | Moving sequential sectors within a block of information in a flash memory mass storage architecture |
US5838614A (en) | 1995-07-31 | 1998-11-17 | Lexar Microsystems, Inc. | Identification and verification of a sector within a block of mass storage flash memory |
US6081878A (en) | 1997-03-31 | 2000-06-27 | Lexar Media, Inc. | Increasing the memory performance of flash memory devices by writing sectors simultaneously to multiple flash memory devices |
US5845313A (en) | 1995-07-31 | 1998-12-01 | Lexar | Direct logical block addressing flash memory mass storage architecture |
US5907856A (en) | 1995-07-31 | 1999-05-25 | Lexar Media, Inc. | Moving sectors within a block of information in a flash memory mass storage architecture |
US6728851B1 (en) | 1995-07-31 | 2004-04-27 | Lexar Media, Inc. | Increasing the memory performance of flash memory devices by writing sectors simultaneously to multiple flash memory devices |
US6978342B1 (en) | 1995-07-31 | 2005-12-20 | Lexar Media, Inc. | Moving sectors within a block of information in a flash memory mass storage architecture |
US5754563A (en) | 1995-09-11 | 1998-05-19 | Ecc Technologies, Inc. | Byte-parallel system for implementing reed-solomon error-correcting codes |
GB2291991A (en) | 1995-09-27 | 1996-02-07 | Memory Corp Plc | Disk drive emulation with a block-erasable memory |
US5933847A (en) | 1995-09-28 | 1999-08-03 | Canon Kabushiki Kaisha | Selecting erase method based on type of power supply for flash EEPROM |
US6330688B1 (en) | 1995-10-31 | 2001-12-11 | Intel Corporation | On chip error correction for devices in a solid state drive |
US5787486A (en) | 1995-12-15 | 1998-07-28 | International Business Machines Corporation | Bus protocol for locked cycle cache hit |
US6385710B1 (en) | 1996-02-23 | 2002-05-07 | Sun Microsystems, Inc. | Multiple-mode external cache subsystem |
US5798968A (en) | 1996-09-24 | 1998-08-25 | Sandisk Corporation | Plane decode/virtual sector architecture |
US5960462A (en) | 1996-09-26 | 1999-09-28 | Intel Corporation | Method and apparatus for analyzing a main memory configuration to program a memory controller |
US5754567A (en) | 1996-10-15 | 1998-05-19 | Micron Quantum Devices, Inc. | Write reduction in flash memory systems through ECC usage |
US5890192A (en) | 1996-11-05 | 1999-03-30 | Sandisk Corporation | Concurrent write of multiple chunks of data into multiple subarrays of flash EEPROM |
JPH10154101A (en) | 1996-11-26 | 1998-06-09 | Toshiba Corp | Data storage system and cache controlling method applying to the system |
US6182188B1 (en) | 1997-04-06 | 2001-01-30 | Intel Corporation | Method of performing reliable updates in a symmetrically blocked nonvolatile memory having a bifurcated storage architecture |
US6073232A (en) | 1997-02-25 | 2000-06-06 | International Business Machines Corporation | Method for minimizing a computer's initial program load time after a system reset or a power-on using non-volatile storage |
US5961660A (en) | 1997-03-03 | 1999-10-05 | International Business Machines Corporation | Method and apparatus for optimizing ECC memory performance |
US5953737A (en) | 1997-03-31 | 1999-09-14 | Lexar Media, Inc. | Method and apparatus for performing erase operations transparent to a solid state storage system |
JP3459868B2 (en) | 1997-05-16 | 2003-10-27 | 日本電気株式会社 | Group replacement method in case of memory failure |
US6311256B2 (en) | 1997-06-30 | 2001-10-30 | Emc Corporation | Command insertion and reordering at the same storage controller |
US6418478B1 (en) | 1997-10-30 | 2002-07-09 | Commvault Systems, Inc. | Pipelined high speed data transfer mechanism |
US6567889B1 (en) | 1997-12-19 | 2003-05-20 | Lsi Logic Corporation | Apparatus and method to provide virtual solid state disk in cache memory in a storage controller |
US6209003B1 (en) | 1998-04-15 | 2001-03-27 | Inktomi Corporation | Garbage collection in an object cache |
US6101601A (en) | 1998-04-20 | 2000-08-08 | International Business Machines Corporation | Method and apparatus for hibernation within a distributed data processing system |
US7233977B2 (en) | 1998-12-18 | 2007-06-19 | Emc Corporation | Messaging mechanism employing mailboxes for inter processor communications |
GB9903490D0 (en) | 1999-02-17 | 1999-04-07 | Memory Corp Plc | Memory system |
US6412080B1 (en) | 1999-02-23 | 2002-06-25 | Microsoft Corporation | Lightweight persistent storage system for flash memory devices |
US6141249A (en) | 1999-04-01 | 2000-10-31 | Lexar Media, Inc. | Organization of blocks within a nonvolatile memory unit to effectively decrease sector write operation time |
DE19929751A1 (en) | 1999-06-30 | 2001-01-18 | Siemens Ag | System and method for the transmission of data, in particular between a user program and a server program in the field of automation technology with distributed objects |
US7660941B2 (en) * | 2003-09-10 | 2010-02-09 | Super Talent Electronics, Inc. | Two-level RAM lookup table for block and page allocation and wear-leveling in limited-write flash-memories |
US7620769B2 (en) | 2000-01-06 | 2009-11-17 | Super Talent Electronics, Inc. | Recycling partially-stale flash blocks using a sliding window for multi-level-cell (MLC) flash memory |
US8078794B2 (en) | 2000-01-06 | 2011-12-13 | Super Talent Electronics, Inc. | Hybrid SSD using a combination of SLC and MLC flash memory arrays |
KR100577380B1 (en) | 1999-09-29 | 2006-05-09 | Samsung Electronics Co., Ltd. | A flash memory and its control method |
WO2001031512A2 (en) | 1999-10-25 | 2001-05-03 | Infolibria, Inc. | Fast indexing of web objects |
ATE247296T1 (en) | 1999-10-25 | 2003-08-15 | Sun Microsystems Inc | STORAGE SYSTEM SUPPORTING FILE LEVEL AND BLOCK LEVEL ACCESS |
US8452912B2 (en) * | 2007-10-11 | 2013-05-28 | Super Talent Electronics, Inc. | Flash-memory system with enhanced smart-storage switch and packed meta-data cache for mitigating write amplification by delaying and merging writes until a host read |
US8171204B2 (en) | 2000-01-06 | 2012-05-01 | Super Talent Electronics, Inc. | Intelligent solid-state non-volatile memory device (NVMD) system with multi-level caching of multiple channels |
US6671757B1 (en) | 2000-01-26 | 2003-12-30 | Fusionone, Inc. | Data transfer and synchronization system |
US6785835B2 (en) | 2000-01-25 | 2004-08-31 | Hewlett-Packard Development Company, L.P. | Raid memory |
US6240040B1 (en) | 2000-03-15 | 2001-05-29 | Advanced Micro Devices, Inc. | Multiple bank simultaneous operation for a flash memory |
US6523102B1 (en) | 2000-04-14 | 2003-02-18 | Interactive Silicon, Inc. | Parallel compression/decompression system and method for implementation of in-memory compressed cache improving storage density and access speed for industry standard memory subsystems and in-line memory modules |
US7089391B2 (en) | 2000-04-14 | 2006-08-08 | Quickshift, Inc. | Managing a codec engine for memory compression/decompression operations using a data movement engine |
US6675349B1 (en) | 2000-05-11 | 2004-01-06 | International Business Machines Corporation | Error correction coding of data blocks with included parity bits |
US6779094B2 (en) | 2000-06-19 | 2004-08-17 | Storage Technology Corporation | Apparatus and method for instant copy of data by writing new data to an additional physical storage area |
US6804755B2 (en) | 2000-06-19 | 2004-10-12 | Storage Technology Corporation | Apparatus and method for performing an instant copy of data based on a dynamically changeable virtual mapping scheme |
US6912537B2 (en) | 2000-06-20 | 2005-06-28 | Storage Technology Corporation | Dynamically changeable virtual mapping scheme |
US6981070B1 (en) | 2000-07-12 | 2005-12-27 | Shun Hang Luk | Network storage device having solid-state non-volatile memory |
JP3671138B2 (en) | 2000-08-17 | 2005-07-13 | ジャパンコンポジット株式会社 | Breathable waterproof covering structure and construction method thereof |
US6404647B1 (en) | 2000-08-24 | 2002-06-11 | Hewlett-Packard Co. | Solid-state mass memory storage device |
US6883079B1 (en) | 2000-09-01 | 2005-04-19 | Maxtor Corporation | Method and apparatus for using data compression as a means of increasing buffer bandwidth |
US6625685B1 (en) | 2000-09-20 | 2003-09-23 | Broadcom Corporation | Memory controller with programmable configuration |
US7039727B2 (en) | 2000-10-17 | 2006-05-02 | Microsoft Corporation | System and method for controlling mass storage class digital imaging devices |
US6779088B1 (en) | 2000-10-24 | 2004-08-17 | International Business Machines Corporation | Virtual uncompressed cache size control in compressed memory systems |
US20020069317A1 (en) | 2000-12-01 | 2002-06-06 | Chow Yan Chiew | E-RAID system and method of operating the same |
US7013376B2 (en) * | 2000-12-20 | 2006-03-14 | Hewlett-Packard Development Company, L.P. | Method and system for data block sparing in a solid-state storage device |
US6611836B2 (en) | 2000-12-26 | 2003-08-26 | Simdesk Technologies, Inc. | Server-side recycle bin system |
KR100708475B1 (en) | 2001-01-08 | 2007-04-18 | 삼성전자주식회사 | Pre-Decoder for recovering a punctured turbo code and a method therefor |
JP4818812B2 (en) | 2006-05-31 | 2011-11-16 | 株式会社日立製作所 | Flash memory storage system |
US6516380B2 (en) | 2001-02-05 | 2003-02-04 | International Business Machines Corporation | System and method for a log-based non-volatile write cache in a storage controller |
WO2002091586A2 (en) | 2001-05-08 | 2002-11-14 | International Business Machines Corporation | 8b/10b encoding and decoding for high speed applications |
JP4256600B2 (en) | 2001-06-19 | 2009-04-22 | Tdk株式会社 | MEMORY CONTROLLER, FLASH MEMORY SYSTEM PROVIDED WITH MEMORY CONTROLLER, AND FLASH MEMORY CONTROL METHOD |
US6839808B2 (en) | 2001-07-06 | 2005-01-04 | Juniper Networks, Inc. | Processing cluster having multiple compute engines and shared tier one caches |
US20030061296A1 (en) | 2001-09-24 | 2003-03-27 | International Business Machines Corporation | Memory semantic storage I/O |
US20030058681A1 (en) | 2001-09-27 | 2003-03-27 | Intel Corporation | Mechanism for efficient wearout counters in destructive readout memory |
GB0123415D0 (en) | 2001-09-28 | 2001-11-21 | Memquest Ltd | Method of writing data to non-volatile memory |
US6938133B2 (en) | 2001-09-28 | 2005-08-30 | Hewlett-Packard Development Company, L.P. | Memory latency and bandwidth optimizations |
GB0123416D0 (en) | 2001-09-28 | 2001-11-21 | Memquest Ltd | Non-volatile memory control |
US20030093741A1 (en) | 2001-11-14 | 2003-05-15 | Cenk Argon | Parallel decoder for product codes |
US6715046B1 (en) | 2001-11-29 | 2004-03-30 | Cisco Technology, Inc. | Method and apparatus for reading from and writing to storage using acknowledged phases of sets of data |
US7013379B1 (en) | 2001-12-10 | 2006-03-14 | Incipient, Inc. | I/O primitives |
US7173929B1 (en) | 2001-12-10 | 2007-02-06 | Incipient, Inc. | Fast path for performing data operations |
CN1278239C (en) | 2002-01-09 | 2006-10-04 | 株式会社瑞萨科技 | Storage system and storage card |
TWI257085B (en) | 2002-01-21 | 2006-06-21 | Koninkl Philips Electronics Nv | Method of encoding and decoding |
US7010662B2 (en) | 2002-02-27 | 2006-03-07 | Microsoft Corporation | Dynamic data structures for tracking file system free space in a flash memory device |
US6901499B2 (en) | 2002-02-27 | 2005-05-31 | Microsoft Corp. | System and method for tracking data stored in a flash memory device |
US7085879B2 (en) | 2002-02-27 | 2006-08-01 | Microsoft Corporation | Dynamic data structures for tracking data stored in a flash memory device |
JP2003281071A (en) | 2002-03-20 | 2003-10-03 | Seiko Epson Corp | Data transfer controller, electronic equipment and data transfer control method |
JP4050548B2 (en) | 2002-04-18 | 2008-02-20 | 株式会社ルネサステクノロジ | Semiconductor memory device |
US7043599B1 (en) | 2002-06-20 | 2006-05-09 | Rambus Inc. | Dynamic memory supporting simultaneous refresh and data-access transactions |
JP4001516B2 (en) | 2002-07-05 | 2007-10-31 | 富士通株式会社 | Degeneration control device and method |
US7051152B1 (en) | 2002-08-07 | 2006-05-23 | Nvidia Corporation | Method and system of improving disk access time by compression |
US7340566B2 (en) | 2002-10-21 | 2008-03-04 | Microsoft Corporation | System and method for initializing a memory device from block oriented NAND flash |
US7171536B2 (en) | 2002-10-28 | 2007-01-30 | Sandisk Corporation | Unusable block management within a non-volatile memory system |
US6973531B1 (en) | 2002-10-28 | 2005-12-06 | Sandisk Corporation | Tracking the most frequently erased blocks in non-volatile memory systems |
US7035974B2 (en) | 2002-11-06 | 2006-04-25 | Synology Inc. | RAID-5 disk having cache memory implemented using non-volatile RAM |
US6996676B2 (en) | 2002-11-14 | 2006-02-07 | International Business Machines Corporation | System and method for implementing an adaptive replacement cache policy |
US7082512B2 (en) | 2002-11-21 | 2006-07-25 | Microsoft Corporation | Dynamic data structures for tracking file system free space in a flash memory device |
ATE504446T1 (en) | 2002-12-02 | 2011-04-15 | Silverbrook Res Pty Ltd | DEAD NOZZLE COMPENSATION |
KR100502608B1 (en) | 2002-12-24 | 2005-07-20 | 한국전자통신연구원 | A Simplified Message-Passing Decoder for Low-Density Parity-Check Codes |
US7076723B2 (en) | 2003-03-14 | 2006-07-11 | Quantum Corporation | Error correction codes |
US8041878B2 (en) | 2003-03-19 | 2011-10-18 | Samsung Electronics Co., Ltd. | Flash file system |
JP2004280752A (en) | 2003-03-19 | 2004-10-07 | Sony Corp | Data storage device, management information updating method for data storage device, and computer program |
US7197657B1 (en) | 2003-04-03 | 2007-03-27 | Advanced Micro Devices, Inc. | BMC-hosted real-time clock and non-volatile RAM replacement |
JP2004348818A (en) | 2003-05-20 | 2004-12-09 | Sharp Corp | Method and system for controlling writing in semiconductor memory device, and portable electronic device |
US7243203B2 (en) | 2003-06-13 | 2007-07-10 | Sandisk 3D Llc | Pipeline circuit for low latency memory |
US7047366B1 (en) | 2003-06-17 | 2006-05-16 | Emc Corporation | QOS feature knobs |
US20040268359A1 (en) | 2003-06-27 | 2004-12-30 | Hanes David H. | Computer-readable medium, method and computer system for processing input/output requests |
US7149947B1 (en) | 2003-09-04 | 2006-12-12 | Emc Corporation | Method of and system for validating an error correction code and parity information associated with a data word |
US7483974B2 (en) | 2003-09-24 | 2009-01-27 | Intel Corporation | Virtual management controller to coordinate processing blade management in a blade server environment |
US7487235B2 (en) | 2003-09-24 | 2009-02-03 | Dell Products L.P. | Dynamically varying a raid cache policy in order to optimize throughput |
US7337201B1 (en) | 2003-10-08 | 2008-02-26 | Sun Microsystems, Inc. | System and method to increase memory allocation efficiency |
TWI238325B (en) | 2003-10-09 | 2005-08-21 | Quanta Comp Inc | Apparatus of remote server console redirection |
US7096321B2 (en) | 2003-10-21 | 2006-08-22 | International Business Machines Corporation | Method and system for a cache replacement technique with adaptive skipping |
WO2005065084A2 (en) | 2003-11-13 | 2005-07-21 | Commvault Systems, Inc. | System and method for providing encryption in pipelined storage operations in a storage network |
US8112574B2 (en) | 2004-02-26 | 2012-02-07 | Super Talent Electronics, Inc. | Swappable sets of partial-mapping tables in a flash-memory system with a command queue for combining flash writes |
US7350127B2 (en) | 2003-12-12 | 2008-03-25 | Hewlett-Packard Development Company, L.P. | Error correction method and system |
US20050149819A1 (en) | 2003-12-15 | 2005-07-07 | Daewoo Electronics Corporation | Three-dimensional error correction method |
US7500000B2 (en) | 2003-12-17 | 2009-03-03 | International Business Machines Corporation | Method and system for assigning or creating a resource |
US20050149618A1 (en) | 2003-12-23 | 2005-07-07 | Mobile Action Technology Inc. | System and method of transmitting electronic files over to a mobile phone |
US7631138B2 (en) | 2003-12-30 | 2009-12-08 | Sandisk Corporation | Adaptive mode switching of flash memory address mapping based on host usage characteristics |
US7356651B2 (en) | 2004-01-30 | 2008-04-08 | Piurata Technologies, Llc | Data-aware cache state machine |
US7305520B2 (en) | 2004-01-30 | 2007-12-04 | Hewlett-Packard Development Company, L.P. | Storage system with capability to allocate virtual storage segments among a plurality of controllers |
US7130957B2 (en) | 2004-02-10 | 2006-10-31 | Sun Microsystems, Inc. | Storage system structure for storing relational cache metadata |
US7130956B2 (en) | 2004-02-10 | 2006-10-31 | Sun Microsystems, Inc. | Storage system including hierarchical cache metadata |
US7231590B2 (en) | 2004-02-11 | 2007-06-12 | Microsoft Corporation | Method and apparatus for visually emphasizing numerical data contained within an electronic document |
JP2005250938A (en) | 2004-03-05 | 2005-09-15 | Hitachi Ltd | Storage control system and method |
US7281192B2 (en) | 2004-04-05 | 2007-10-09 | Broadcom Corporation | LDPC (Low Density Parity Check) coded signal decoding using parallel and simultaneous bit node and check node processing |
US7725628B1 (en) | 2004-04-20 | 2010-05-25 | Lexar Media, Inc. | Direct secondary device interface by a host |
US20050240713A1 (en) | 2004-04-22 | 2005-10-27 | V-Da Technology | Flash memory device with ATA/ATAPI/SCSI or proprietary programming interface on PCI express |
US7644239B2 (en) | 2004-05-03 | 2010-01-05 | Microsoft Corporation | Non-volatile memory cache performance improvement |
US7360015B2 (en) | 2004-05-04 | 2008-04-15 | Intel Corporation | Preventing storage of streaming accesses in a cache |
US7512830B2 (en) | 2004-05-14 | 2009-03-31 | International Business Machines Corporation | Management module failover across multiple blade center chassis |
US7590522B2 (en) | 2004-06-14 | 2009-09-15 | Hewlett-Packard Development Company, L.P. | Virtual mass storage device for server management information |
US7734643B1 (en) | 2004-06-30 | 2010-06-08 | Oracle America, Inc. | Method for distributed storage of data |
US7447847B2 (en) | 2004-07-19 | 2008-11-04 | Micron Technology, Inc. | Memory device trims |
US7203815B2 (en) | 2004-07-30 | 2007-04-10 | International Business Machines Corporation | Multi-level page cache for enhanced file system performance via read ahead |
US8407396B2 (en) | 2004-07-30 | 2013-03-26 | Hewlett-Packard Development Company, L.P. | Providing block data access for an operating system using solid-state memory |
US7340487B2 (en) | 2004-08-18 | 2008-03-04 | International Business Machines Corporation | Delayed deletion of extended attributes |
US20060075057A1 (en) | 2004-08-30 | 2006-04-06 | International Business Machines Corporation | Remote direct memory access system and method |
JP4648674B2 (en) | 2004-10-01 | 2011-03-09 | 株式会社日立製作所 | Storage control device, storage control system, and storage control method |
JP2006127028A (en) | 2004-10-27 | 2006-05-18 | Hitachi Ltd | Memory system and storage controller |
US20060106968A1 (en) | 2004-11-15 | 2006-05-18 | Wooi Teoh Gary C | Intelligent platform management bus switch system |
US7487320B2 (en) | 2004-12-15 | 2009-02-03 | International Business Machines Corporation | Apparatus and system for dynamically allocating main memory among a plurality of applications |
US8122193B2 (en) | 2004-12-21 | 2012-02-21 | Samsung Electronics Co., Ltd. | Storage device and user device including the same |
KR100876084B1 (en) | 2007-02-13 | 2008-12-26 | 삼성전자주식회사 | Computing system capable of delivering deletion information to flash storage |
US20060143396A1 (en) | 2004-12-29 | 2006-06-29 | Mason Cabot | Method for programmer-controlled cache line eviction policy |
KR100725390B1 (en) | 2005-01-06 | 2007-06-07 | 삼성전자주식회사 | Apparatus and method for storing data in nonvolatile cache memory considering update ratio |
KR100621631B1 (en) | 2005-01-11 | 2006-09-13 | 삼성전자주식회사 | Solid state disk controller apparatus |
US8745011B2 (en) | 2005-03-22 | 2014-06-03 | International Business Machines Corporation | Method and system for scrubbing data within a data storage subsystem |
US7254686B2 (en) | 2005-03-31 | 2007-08-07 | International Business Machines Corporation | Switching between mirrored and non-mirrored volumes |
US7620773B2 (en) | 2005-04-15 | 2009-11-17 | Microsoft Corporation | In-line non volatile memory disk read cache and write buffer |
US9286198B2 (en) | 2005-04-21 | 2016-03-15 | Violin Memory | Method and system for storage of data in non-volatile media |
US7130960B1 (en) | 2005-04-21 | 2006-10-31 | Hitachi, Ltd. | System and method for managing disk space in a thin-provisioned storage subsystem |
US7716387B2 (en) | 2005-07-14 | 2010-05-11 | Canon Kabushiki Kaisha | Memory control apparatus and method |
US7409489B2 (en) | 2005-08-03 | 2008-08-05 | Sandisk Corporation | Scheduling of reclaim operations in non-volatile memory |
US7552271B2 (en) | 2005-08-03 | 2009-06-23 | Sandisk Corporation | Nonvolatile memory with block management |
JP5008845B2 (en) | 2005-09-01 | 2012-08-22 | 株式会社日立製作所 | Storage system, storage apparatus and control method thereof |
US7580287B2 (en) | 2005-09-01 | 2009-08-25 | Micron Technology, Inc. | Program and read trim setting |
US7979394B2 (en) | 2005-09-20 | 2011-07-12 | Teradata Us, Inc. | Method of managing storage and retrieval of data objects |
US7437510B2 (en) | 2005-09-30 | 2008-10-14 | Intel Corporation | Instruction-assisted cache management for efficient use of cache and memory |
US7529905B2 (en) | 2005-10-13 | 2009-05-05 | Sandisk Corporation | Method of storing transformed units of data in a memory system having fixed sized storage blocks |
US7631162B2 (en) | 2005-10-27 | 2009-12-08 | Sandisk Corporation | Non-volatile memory with adaptive handling of data writes |
US7366808B2 (en) | 2005-11-23 | 2008-04-29 | Hitachi, Ltd. | System, method and apparatus for multiple-protocol-accessible OSD storage subsystem |
US7526614B2 (en) | 2005-11-30 | 2009-04-28 | Red Hat, Inc. | Method for tuning a cache |
US8112513B2 (en) | 2005-11-30 | 2012-02-07 | Microsoft Corporation | Multi-user display proxy server |
JP4807063B2 (en) | 2005-12-20 | 2011-11-02 | ソニー株式会社 | Decoding device, control method, and program |
US7831783B2 (en) | 2005-12-22 | 2010-11-09 | Honeywell International Inc. | Effective wear-leveling and concurrent reclamation method for embedded linear flash file systems |
US20070150663A1 (en) | 2005-12-27 | 2007-06-28 | Abraham Mendelson | Device, system and method of multi-state cache coherence scheme |
JP2007240904A (en) | 2006-03-09 | 2007-09-20 | Hitachi Ltd | Plasma display device |
US20070245217A1 (en) | 2006-03-28 | 2007-10-18 | Stmicroelectronics S.R.L. | Low-density parity check decoding |
US7840398B2 (en) | 2006-03-28 | 2010-11-23 | Intel Corporation | Techniques for unified management communication for virtualization systems |
US7676628B1 (en) | 2006-03-31 | 2010-03-09 | Emc Corporation | Methods, systems, and computer program products for providing access to shared storage by computing grids and clusters with large numbers of nodes |
US20070233937A1 (en) | 2006-03-31 | 2007-10-04 | Coulson Richard L | Reliability of write operations to a non-volatile memory |
JP4787055B2 (en) | 2006-04-12 | 2011-10-05 | 富士通株式会社 | Information processing apparatus with information division recording function |
US7395377B2 (en) | 2006-04-20 | 2008-07-01 | International Business Machines Corporation | Method and system for adaptive back-off and advance for non-volatile storage (NVS) occupancy level management |
US20070271468A1 (en) | 2006-05-05 | 2007-11-22 | Mckenney Paul E | Method and Apparatus for Maintaining Data Integrity When Switching Between Different Data Protection Methods |
JP4681505B2 (en) | 2006-05-23 | 2011-05-11 | 株式会社日立製作所 | Computer system, management computer, and program distribution method |
US8307148B2 (en) | 2006-06-23 | 2012-11-06 | Microsoft Corporation | Flash management techniques |
US7853958B2 (en) | 2006-06-28 | 2010-12-14 | Intel Corporation | Virtual machine monitor management from a management service processor in the host processing platform |
GB0613192D0 (en) | 2006-07-01 | 2006-08-09 | Ibm | Methods, apparatus and computer programs for managing persistence |
US7721059B2 (en) | 2006-07-06 | 2010-05-18 | Nokia Corporation | Performance optimization in solid-state media |
US7594144B2 (en) | 2006-08-14 | 2009-09-22 | International Business Machines Corporation | Handling fatal computer hardware errors |
US20080043769A1 (en) | 2006-08-16 | 2008-02-21 | Tyan Computer Corporation | Clustering system and system management architecture thereof |
JP4932390B2 (en) | 2006-08-31 | 2012-05-16 | 株式会社日立製作所 | Virtualization system and area allocation control method |
US7774392B2 (en) | 2006-09-15 | 2010-08-10 | Sandisk Corporation | Non-volatile memory with management of a pool of update memory blocks based on each block's activity and data order |
WO2008040080A1 (en) | 2006-10-05 | 2008-04-10 | Waratek Pty Limited | Silent memory reclamation |
JP4942446B2 (en) | 2006-10-11 | 2012-05-30 | 株式会社日立製作所 | Storage apparatus and control method thereof |
KR100771519B1 (en) | 2006-10-23 | 2007-10-30 | 삼성전자주식회사 | Memory system including flash memory and merge method of thereof |
KR100843543B1 (en) | 2006-10-25 | 2008-07-04 | 삼성전자주식회사 | System comprising flash memory device and data recovery method thereof |
ES2431863T5 (en) | 2006-11-03 | 2017-07-27 | Air Products And Chemicals, Inc. | System and method for process monitoring |
US20080120469A1 (en) | 2006-11-22 | 2008-05-22 | International Business Machines Corporation | Systems and Arrangements for Cache Management |
US7904647B2 (en) | 2006-11-27 | 2011-03-08 | Lsi Corporation | System for optimizing the performance and reliability of a storage controller cache offload circuit |
US7783830B2 (en) | 2006-11-29 | 2010-08-24 | Seagate Technology Llc | Solid state device pattern for non-solid state storage media |
JP4923990B2 (en) | 2006-12-04 | 2012-04-25 | 株式会社日立製作所 | Failover method and its computer system. |
WO2008070173A1 (en) | 2006-12-06 | 2008-06-12 | Fusion Multisystems, Inc. (Dba Fusion-Io) | Apparatus, system, and method for solid-state storage as cache for high-capacity, non-volatile storage |
TW200825762A (en) | 2006-12-06 | 2008-06-16 | Inventec Corp | Apparatus and method for computer management |
US7930425B2 (en) | 2006-12-11 | 2011-04-19 | International Business Machines Corporation | Method of effectively establishing and maintaining communication linkages with a network interface controller |
US20080140918A1 (en) | 2006-12-11 | 2008-06-12 | Pantas Sutardja | Hybrid non-volatile solid state memory system |
US7660911B2 (en) | 2006-12-20 | 2010-02-09 | Smart Modular Technologies, Inc. | Block-based data striping to flash memory |
US8510533B2 (en) * | 2006-12-27 | 2013-08-13 | Intel Corporation | Method of managing data on a non-volatile memory |
JP4813385B2 (en) | 2007-01-29 | 2011-11-09 | 株式会社日立製作所 | Control device that controls multiple logical resources of a storage system |
US20080201535A1 (en) | 2007-02-21 | 2008-08-21 | Hitachi, Ltd. | Method and Apparatus for Provisioning Storage Volumes |
US20080205286A1 (en) | 2007-02-26 | 2008-08-28 | Inventec Corporation | Test system using local loop to establish connection to baseboard management control and method therefor |
US20080229046A1 (en) | 2007-03-13 | 2008-09-18 | Microsoft Corporation | Unified support for solid state storage |
US9152349B2 (en) | 2007-03-23 | 2015-10-06 | Emc Corporation | Automated information life-cycle management with thin provisioning |
US8135900B2 (en) | 2007-03-28 | 2012-03-13 | Kabushiki Kaisha Toshiba | Integrated memory management and memory management method |
JP2008276646A (en) | 2007-05-02 | 2008-11-13 | Hitachi Ltd | Storage device and data management method for storage device |
US7970919B1 (en) | 2007-08-13 | 2011-06-28 | Duran Paul A | Apparatus and system for object-based storage solid-state drive and method for configuring same |
US7873803B2 (en) | 2007-09-25 | 2011-01-18 | Sandisk Corporation | Nonvolatile memory with self recovery |
US7934072B2 (en) | 2007-09-28 | 2011-04-26 | Lenovo (Singapore) Pte. Ltd. | Solid state storage reclamation apparatus and method |
KR101433859B1 (en) | 2007-10-12 | 2014-08-27 | 삼성전자주식회사 | Nonvolatile memory system and method managing file data thereof |
US8055820B2 (en) | 2007-11-05 | 2011-11-08 | Nokia Siemens Networks Oy | Apparatus, system, and method for designating a buffer status reporting format based on detected pre-selected buffer conditions |
US8572310B2 (en) | 2007-11-06 | 2013-10-29 | Samsung Electronics Co., Ltd. | Invalidating storage area of non-volatile storage medium based on metadata |
JP2009122850A (en) | 2007-11-13 | 2009-06-04 | Toshiba Corp | Block device control device and access range management method |
US8131927B2 (en) | 2007-11-30 | 2012-03-06 | Hitachi, Ltd. | Fast accessible compressed thin provisioning volume |
US8738841B2 (en) | 2007-12-27 | 2014-05-27 | Sandisk Enterprise IP LLC | Flash memory controller and system including data pipelines incorporating multiple buffers |
KR101086855B1 (en) | 2008-03-10 | 2011-11-25 | 주식회사 팍스디스크 | Solid State Storage System with High Speed and Controlling Method thereof |
US20090276654A1 (en) | 2008-05-02 | 2009-11-05 | International Business Machines Corporation | Systems and methods for implementing fault tolerant data processing services |
US8554983B2 (en) | 2008-05-27 | 2013-10-08 | Micron Technology, Inc. | Devices and methods for operating a solid state drive |
US7917803B2 (en) | 2008-06-17 | 2011-03-29 | Seagate Technology Llc | Data conflict resolution for solid-state memory devices |
US8843691B2 (en) | 2008-06-25 | 2014-09-23 | Stec, Inc. | Prioritized erasure of data blocks in a flash storage device |
US8135907B2 (en) | 2008-06-30 | 2012-03-13 | Oracle America, Inc. | Method and system for managing wear-level aware file systems |
US20100017556A1 (en) | 2008-07-19 | 2010-01-21 | Nanostar Corporation U.S.A. | Non-volatile memory storage system with two-stage controller architecture |
KR101086857B1 (en) | 2008-07-25 | 2011-11-25 | 주식회사 팍스디스크 | Control Method of Solid State Storage System for Data Merging |
US7941591B2 (en) | 2008-07-28 | 2011-05-10 | CacheIQ, Inc. | Flash DIMM in a standalone cache appliance system and methodology |
JP5216463B2 (en) | 2008-07-30 | 2013-06-19 | 株式会社日立製作所 | Storage device, storage area management method thereof, and flash memory package |
KR101487190B1 (en) * | 2008-09-11 | 2015-01-28 | 삼성전자주식회사 | Flash memory integrated circuit with compression/decompression codec |
US8417928B2 (en) | 2008-09-24 | 2013-04-09 | Marvell International Ltd. | Turbo boot systems and methods for subsequent booting from a captured data stored in a non-volatile semiconductor memory |
KR101573722B1 (en) * | 2009-04-20 | 2015-12-03 | 삼성전자주식회사 | Memory system including nonvolatile memory device and controller |
- 2013
  - 2013-06-24 US US13/925,410 patent/US10019353B2/en active Active
- 2018
  - 2018-07-09 US US16/030,232 patent/US20180314627A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060206603A1 (en) * | 2005-03-08 | 2006-09-14 | Vijayan Rajan | Integrated storage virtualization and switch system |
US20110126045A1 (en) * | 2007-03-29 | 2011-05-26 | Bennett Jon C R | Memory system with multiple striping of raid groups and method for performing the same |
US20100011150A1 (en) * | 2008-07-10 | 2010-01-14 | Dean Klein | Data collection and compression in a solid state storage device |
US20120036309A1 (en) * | 2010-08-05 | 2012-02-09 | Ut-Battelle, Llc | Coordinated garbage collection for raid array of solid state disks |
US20150268864A1 (en) * | 2014-03-20 | 2015-09-24 | Pure Storage, Inc. | Remote replication using mediums |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10846588B2 (en) * | 2018-09-27 | 2020-11-24 | Deepmind Technologies Limited | Scalable and compressive neural network data storage system |
US11983617B2 (en) | 2018-09-27 | 2024-05-14 | Deepmind Technologies Limited | Scalable and compressive neural network data storage system |
US11169877B2 (en) * | 2020-03-17 | 2021-11-09 | Allegro Microsystems, Llc | Non-volatile memory data and address encoding for safety coverage |
FR3136100A1 (en) * | 2022-05-25 | 2023-12-01 | STMicroelectronics (Alps) SAS | Data memory emulation in Flash memory |
Also Published As
Publication number | Publication date |
---|---|
US10019353B2 (en) | 2018-07-10 |
US20130282953A1 (en) | 2013-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180314627A1 (en) | Systems and Methods for Referencing Data on a Storage Medium | |
US10956258B2 (en) | Systems and methods for adaptive data storage | |
US20210342223A1 (en) | Systems and methods for adaptive error-correction coding | |
US10127166B2 (en) | Data storage controller with multiple pipelines | |
US9645758B2 (en) | Apparatus, system, and method for indexing data of an append-only, log-based structure | |
US9798620B2 (en) | Systems and methods for non-blocking solid-state memory | |
US20190073296A1 (en) | Systems and Methods for Persistent Address Space Management | |
US9875180B2 (en) | Systems and methods for managing storage compression operations | |
US9176810B2 (en) | Bit error reduction through varied data positioning | |
US8782344B2 (en) | Systems and methods for managing cache admission | |
US8725934B2 (en) | Methods and appratuses for atomic storage operations | |
US8898376B2 (en) | Apparatus, system, and method for grouping data stored on an array of solid-state storage elements | |
US10013354B2 (en) | Apparatus, system, and method for atomic storage operations | |
US9075710B2 (en) | Non-volatile key-value store | |
US10073630B2 (en) | Systems and methods for log coordination | |
US8806111B2 (en) | Apparatus, system, and method for backing data of a non-volatile storage device using a backing store | |
US8892980B2 (en) | Apparatus, system, and method for providing error correction | |
US20130205114A1 (en) | Object-based memory storage | |
US20090265578A1 (en) | Full Stripe Processing for a Redundant Array of Disk Drives | |
US11138071B1 (en) | On-chip parity buffer management for storage block combining in non-volatile memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
AS | Assignment | Owner name: FIO SEMICONDUCTOR TECHNOLOGIES LIMITED, TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LONGITUDE ENTERPRISE FLASH S.A.R.I.;REEL/FRAME:047702/0413. Effective date: 20181116 |
AS | Assignment | Owner name: FIO SEMICONDUCTOR TECHNOLOGIES, LLC, TEXAS. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE INCORRECT DOCUMENT FILED PREVIOUSLY RECORDED ON REEL 047702 FRAME 0413. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:LONGITUDE ENTERPRISE FLASH S.A.R.I.;REEL/FRAME:048918/0035. Effective date: 20181116 |
AS | Assignment | Owner name: FIO SEMICONDUCTOR TECHNOLOGIES, LLC, TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LONGITUDE ENTERPRISE FLASH S.A.R.I.;REEL/FRAME:050786/0961. Effective date: 20181116 |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
AS | Assignment | Owner name: UNIFICATION TECHNOLOGIES LLC, TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACACIA RESEARCH GROUP LLC;REEL/FRAME:052096/0225. Effective date: 20200227. Owner name: ACACIA RESEARCH GROUP LLC, TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FIO SEMICONDUCTOR TECHNOLOGIES, LLC;REEL/FRAME:052095/0903. Effective date: 20200217 |
AS | Assignment | Owner name: STARBOARD VALUE INTERMEDIATE FUND LP, AS COLLATERAL AGENT, NEW YORK. Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:ACACIA RESEARCH GROUP LLC;AMERICAN VEHICULAR SCIENCES LLC;BONUTTI SKELETAL INNOVATIONS LLC;AND OTHERS;REEL/FRAME:052853/0153. Effective date: 20200604 |
AS | Assignment | Free format text: RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:STARBOARD VALUE INTERMEDIATE FUND LP;REEL/FRAME:053654/0254. Effective date: 20200630. Owner names: SAINT LAWRENCE COMMUNICATIONS LLC, TEXAS; SUPER INTERCONNECT TECHNOLOGIES LLC, TEXAS; STINGRAY IP SOLUTIONS LLC, TEXAS; LIMESTONE MEMORY SYSTEMS LLC, CALIFORNIA; CELLULAR COMMUNICATIONS EQUIPMENT LLC, TEXAS; PARTHENON UNIFIED MEMORY ARCHITECTURE LLC, TEXAS; R2 SOLUTIONS LLC, TEXAS; INNOVATIVE DISPLAY TECHNOLOGIES LLC, TEXAS; UNIFICATION TECHNOLOGIES LLC, TEXAS; AMERICAN VEHICULAR SCIENCES LLC, TEXAS; MONARCH NETWORKING SOLUTIONS LLC, CALIFORNIA; LIFEPORT SCIENCES LLC, TEXAS; TELECONFERENCE SYSTEMS LLC, TEXAS; BONUTTI SKELETAL INNOVATIONS LLC, TEXAS; MOBILE ENHANCEMENT SOLUTIONS LLC, TEXAS; ACACIA RESEARCH GROUP LLC, NEW YORK; NEXUS DISPLAY TECHNOLOGIES LLC, TEXAS |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment | Owner name: STARBOARD VALUE INTERMEDIATE FUND LP, AS COLLATERAL AGENT, NEW YORK. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR NAME PREVIOUSLY RECORDED AT REEL: 052853 FRAME: 0153. ASSIGNOR(S) HEREBY CONFIRMS THE PATENT SECURITY AGREEMENT;ASSIGNOR:UNIFICATION TECHNOLOGIES LLC;REEL/FRAME:058223/0001. Effective date: 20200604. Owner name: UNIFICATION TECHNOLOGIES LLC, TEXAS. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED AT REEL: 053654 FRAME: 0254. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:STARBOARD VALUE INTERMEDIATE FUND LP, AS COLLATERAL AGENT;REEL/FRAME:058134/0001. Effective date: 20200630 |