WO2013142673A1 - System and methods for storing data using table of contents entries


Info

Publication number
WO2013142673A1
Authority
WO
WIPO (PCT)
Prior art keywords
page
block
datum
physical address
memory
Prior art date
Application number
PCT/US2013/033276
Other languages
English (en)
Inventor
Jeffrey S. BONWICK
Michael W. SHAPIRO
Original Assignee
DSSD, Inc.
Priority date
Filing date
Publication date
Application filed by DSSD, Inc. filed Critical DSSD, Inc.
Priority to CN201380015085.4A (CN104246724B)
Priority to JP2015501909A (JP6211579B2)
Priority to EP13716554.4A (EP2828757B1)
Publication of WO2013142673A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0688 Non-volatile semiconductor memory arrays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1009 Address translation using page tables, e.g. page table structures
    • G06F 12/1018 Address translation using page tables, e.g. page table structures involving hashing techniques, e.g. inverted page tables

Definitions

  • a method for storing data including receiving a request to write a first datum to persistent storage, wherein the first datum is defined using a first logical address, determining a first physical address in the persistent storage, wherein the first physical address comprises a first block ID and a first sub block ID, writing the first datum to the first physical address, generating a first table of contents entry (TE) comprising the first logical address and the first sub block ID, and writing the first TE to a second physical address in the persistent storage, wherein the second physical address comprises the first block ID and a second sub block ID, wherein a second sub block corresponds to the second sub block ID, and wherein the second sub block is located within a first block corresponding to the first block ID.
  • TE table of contents entry
  • the invention relates to a method for storing data, comprising receiving a request to write a first datum to persistent storage, wherein the first datum is defined using a first logical address, determining a first physical address in persistent storage, wherein the first physical address comprises a first block ID and a first page ID, writing a first frag comprising a copy of the first datum to the first physical address, generating a first table of contents entry (TE) comprising the first logical address and the first page ID, receiving a request to write a second datum to the persistent storage, wherein the second datum is defined using a second logical address, determining a second physical address in the persistent storage, wherein the second physical address comprises the first block ID and a second page ID, writing a second frag comprising a copy of the second datum to the second physical address, generating a second TE comprising the second logical address and the second page ID, and generating a table of contents (TOC) page comprising the first TE and the second TE.
  • TOC table of contents
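  • [Illustrative sketch; not part of the patent text] The write path claimed above can be summarized in a few lines of C. All names (toc_entry, block, write_frag, PAGE_SIZE) are hypothetical stand-ins, and the sketch assumes fixed-size pages and one frag per page for brevity; the key point is that the frag and the TOC entry describing it end up at two physical addresses within the same block:

      #include <stdint.h>
      #include <string.h>

      #define PAGE_SIZE       4096u
      #define PAGES_PER_BLOCK 256u

      /* One TOC entry: metadata stored in the same block as its frag. */
      struct toc_entry {
          uint64_t object_id;   /* logical address: object being stored  */
          uint64_t offset_id;   /* logical address: offset within object */
          uint32_t page_id;     /* page, within this block, holding frag */
      };

      struct block {
          uint8_t  pages[PAGES_PER_BLOCK][PAGE_SIZE];
          uint32_t next_free_page;            /* fills "top" to "bottom" */
      };

      /* Write one frag into the block and return the TOC entry that
       * describes it; the caller accumulates TOC entries and writes
       * them, as a TOC page, to a second physical address in the same
       * block. */
      static struct toc_entry write_frag(struct block *blk,
                                         uint64_t object_id, uint64_t offset_id,
                                         const void *data, size_t len)
      {
          uint32_t page_id = blk->next_free_page++;  /* 1st physical addr */
          memcpy(blk->pages[page_id], data, len);    /* write the datum   */
          struct toc_entry te = { object_id, offset_id, page_id };
          return te;
      }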
  • the invention, in general, in one aspect, relates to a method for populating an in-memory data structure.
  • the method includes (a) selecting a first block in a persistent storage, (b) extracting a last page in the first block, wherein the first block is associated with a first block ID, (c) extracting a first table of contents entry (TE) from the last page in the first block, wherein the first TE comprises a first logical address for a first datum, and a first page ID corresponding to a page in the first block in which the first datum is located, (d) generating a first physical address for the first datum using the first block ID, and the first page ID, (e) hashing the first logical address to obtain a first hash value, and (f) populating the in-memory data structure with a first mapping between the first hash value and the first physical address.
  • TE table of contents entry
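  • [Illustrative sketch; not part of the patent text] Steps (a)-(f) above amount to a recovery scan. In the sketch below, map_insert is an assumed in-memory hash table, FNV-1a stands in for the hash function (the patent only requires some hash, e.g., SHA-1 or MD-5), and the physical-address encoding is hypothetical:

      #include <stddef.h>
      #include <stdint.h>

      struct toc_entry { uint64_t object_id, offset_id; uint32_t page_id; };

      /* Hypothetical in-memory map; any hash table would do. */
      extern void map_insert(uint64_t hash, uint64_t physical_address);

      static uint64_t hash_logical(uint64_t object_id, uint64_t offset_id)
      {
          uint64_t h = 14695981039346656037ULL;      /* FNV-1a, for example */
          uint64_t key[2] = { object_id, offset_id };
          const uint8_t *p = (const uint8_t *)key;
          for (size_t i = 0; i < sizeof key; i++) {
              h ^= p[i];
              h *= 1099511628211ULL;
          }
          return h;
      }

      /* Given the TOC entries extracted from the last page of a block,
       * rebuild the mapping hash(logical address) -> physical address. */
      static void populate_from_last_page(uint32_t block_id,
                                          const struct toc_entry *te, size_t n)
      {
          for (size_t i = 0; i < n; i++) {
              uint64_t phys = ((uint64_t)block_id << 32) | te[i].page_id;
              map_insert(hash_logical(te[i].object_id, te[i].offset_id), phys);
          }
      }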
  • FIGS. 1A-1E show systems in accordance with one or more embodiments of the invention.
  • FIGS. 2A-2D show storage appliances in accordance with one or more embodiments of the invention.
  • FIG. 3 shows a storage module in accordance with one or more embodiments of the invention.
  • FIG. 4A shows a storage module in accordance with one or more embodiments of the invention.
  • FIG. 4B shows a block in accordance with one or more embodiments of the invention.
  • FIG. 4C shows a frag page in accordance with one or more embodiments of the invention.
  • FIG. 4D shows a TOC page in accordance with one or more embodiments of the invention.
  • FIG. 4E shows a block in accordance with one or more embodiments of the invention.
  • FIG. 4F shows a table of contents (TOC) entry in accordance with one or more embodiments of the invention.
  • FIG. 5 shows data structures in accordance with one or more embodiments of the invention.
  • FIGS. 6A-6C show flowcharts in accordance with one or more embodiments of the invention.
  • FIGS. 7A-7E show examples in accordance with one or more embodiments of the invention.
  • FIG. 8 shows a flowchart in accordance with one or more embodiments of the invention.
  • each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments of the invention, any description of the components of a figure is to be interpreted as an optional embodiment which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.
  • embodiments of the invention relate to a storage system. More specifically, embodiments of the invention relate to a storage system that includes self-describing data. Further, embodiments of the invention relate to a storage system in which all metadata required to access the user data stored in the storage system is located with the user data it describes. Additionally, the metadata is used to populate an in-memory data structure that allows the storage system to directly access the user data using only the in-memory data structure.
  • FIGS. 1A-1E show systems in accordance with one or more embodiments of the invention.
  • the system includes one or more clients (client A (100A), client M (100M)) operatively connected to a storage appliance (102).
  • clients (100A, 100M) correspond to any system that includes functionality to issue a read request to the storage appliance (102) and/or issue a write request to the storage appliance (102).
  • each of the clients (100A, 100M) may include a client processor and client memory. Additional details about components in a client are described in FIG. 1D below.
  • the clients (100A, 100M) are configured to communicate with the storage appliance (102) using one or more of the following protocols: Peripheral Component Interconnect (PCI), PCI-Express (PCIe), PCI-eXtended (PCI-X), Non-Volatile Memory Express (NVMe), NVMe over a PCI-Express fabric, NVMe over an Ethernet fabric, and NVMe over an Infiniband fabric.
  • PCI Peripheral Component Interconnect
  • PCIe PCI-Express
  • PCI-X PCI-eXtended
  • NVMe Non-Volatile Memory Express
  • the client includes a root complex (not shown).
  • the root complex is a device that connects the client processor and client memory to the PCIe Fabric.
  • the root complex is integrated into the client processor.
  • the PCIe Fabric includes root complexes and endpoints which are connected via switches (e.g., client switch (116) in FIG. 1D and switches within the switch fabric, e.g., switch fabric (206) in FIG. 2A).
  • an endpoint is a device other than a root complex or a switch that can originate PCI transactions (e.g., read request, write request) or that is a target of PCI transactions.
  • a single client and a single storage appliance may be considered part of a single PCIe Fabric.
  • any combination of one or more clients and one or more storage appliances may be considered part of a single PCIe Fabric.
  • the individual components within the storage appliance communicate using PCIe
  • For the individual components in the client, see FIG. 1D.
  • all the components in the storage appliance and the client may be considered part of a single PCIe Fabric.
  • the storage appliance (102) is a system that includes volatile and persistent storage and is configured to service read requests and/or write requests from one or more clients (100A, 100M).
  • FIG. 1B shows a system in which clients (100A, 100M) are connected to multiple storage appliances (104A, 104B, 104C, 104D) arranged in a mesh configuration (denoted as storage appliance mesh (104) in FIG. 1B).
  • the storage appliance mesh (104) is shown in a fully-connected mesh configuration - that is, every storage appliance (104A, 104B, 104C, 104D) in the storage appliance mesh (104) is directly connected to every other storage appliance (104A, 104B, 104C, 104D) in the storage appliance mesh (104).
  • each of the clients (100A, 100M) may be directly connected to one or more storage appliances (104A, 104B, 104C, 104D) in the storage appliance mesh (104).
  • the storage appliance mesh may be implemented using other mesh configurations (e.g., partially connected mesh) without departing from the invention.
  • FIG. 1C shows a system in which clients (100A, 100M) are connected to multiple storage appliances (104A, 104B, 104C, 104D) arranged in a fan-out configuration.
  • each client (100A, 100M) is connected to one or more of the storage appliances (104A, 104B, 104C, 104D); however, there is no communication between the individual storage appliances (104A, 104B, 104C, 104D).
  • FIG. 1D shows a client in accordance with one or more embodiments of the invention.
  • the client (110) includes a client processor (112), client memory (114), and a client switch (116). Each of these components is described below.
  • the client processor (112) is a group of electronic circuits with a single core or multiple cores that are configured to execute instructions.
  • the client processor (112) may be implemented using a Complex Instruction Set (CISC) Architecture or a Reduced Instruction Set (RISC) Architecture.
  • the client processor (112) includes a root complex (as defined by the PCIe protocol) (not shown).
  • If the client (110) includes a root complex (which may be integrated into the client processor (112)), then the client memory (114) is connected to the client processor (112) via the root complex.
  • the client memory (114) is directly connected to the client processor (112) using another point-to-point connection mechanism.
  • the client memory (114) corresponds to any volatile memory including, but not limited to, Dynamic Random-Access Memory (DRAM), Synchronous DRAM, SDR SDRAM, and DDR SDRAM.
  • DRAM Dynamic Random-Access Memory
  • SDR SDRAM Single Data Rate Synchronous DRAM
  • DDR SDRAM Double Data Rate Synchronous DRAM
  • the client memory (114) includes one or more of the following: a submission queue for the client processor and a completion queue for the client processor.
  • the storage appliance memory includes one or more submission queues for client processors visible to a client through the fabric, and the client memory includes one or more completion queues for the client processor visible to the storage appliance through the fabric.
  • the submission queue for the client processor is used to send commands (e.g., read request, write request) to the client processor.
  • the completion queue for the client processor is used to signal the client processor that a command it issued to another entity has been completed. Embodiments of the invention may be implemented using other notification mechanisms without departing from the invention.
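  • [Illustrative sketch; not part of the patent text] The submission/completion queue handshake described above follows an NVMe-style pattern. A minimal C model, with hypothetical field names and a simplified 64-bit entry format:

      #include <stdint.h>

      struct queue {
          uint64_t *slots;             /* command or completion entries   */
          uint32_t  depth, head, tail;
          volatile uint32_t *doorbell; /* memory-mapped doorbell register */
      };

      /* Producer side: place a command in the SQ, then ring the SQ Tail
       * doorbell so the consumer knows a new command is ready. */
      static void sq_submit(struct queue *sq, uint64_t cmd)
      {
          sq->slots[sq->tail] = cmd;
          sq->tail = (sq->tail + 1) % sq->depth;
          *sq->doorbell = sq->tail;
      }

      /* Consumer side: pop a completion from the CQ, then write the new
       * CQ head to the CQ Head doorbell (as in steps 622-624 below). */
      static uint64_t cq_reap(struct queue *cq)
      {
          uint64_t entry = cq->slots[cq->head];
          cq->head = (cq->head + 1) % cq->depth;
          *cq->doorbell = cq->head;
          return entry;
      }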
  • the client switch (116) includes only a single switch. In another embodiment of the invention, the client switch (116) includes multiple interconnected switches. If the client switch (116) includes multiple switches, each switch may be connected to every other switch, may be connected to a subset of the switches in the switch fabric, or may only be connected to one other switch. In one embodiment of the invention, each of the switches in the client switch (116) is a combination of hardware and logic (implemented, for example, using integrated circuits) (as defined by the protocol(s) the switch fabric implements) that is configured to permit data and messages to be transferred between the client (110) and the storage appliances (not shown).
  • When the clients (100A, 100M) implement one or more of the following protocols: PCI, PCIe, or PCI-X, the client switch (116) is a PCI switch.
  • the client switch (116) includes a number of ports, where each port may be configured as a transparent bridge or a non-transparent bridge. Ports implemented as transparent bridges allow the root complex to continue discovery of devices (which may be other root complexes, switches, PCI bridges, or endpoints) connected (directly or indirectly) to the port. In contrast, when a root complex encounters a port implemented as a non-transparent bridge, the root complex is not able to continue discovery of devices connected to the port - rather, the root complex treats such a port as an endpoint.
  • When a port is implemented as a non-transparent bridge, devices on either side of the non-transparent bridge may only communicate using a mailbox system and doorbell interrupts (implemented by the client switch).
  • the doorbell interrupts allow a processor on one side of the non-transparent bridge to issue an interrupt to a processor on the other side of the non-transparent bridge.
  • the mailbox system includes one or more registers that are readable and writeable by processors on either side of the switch fabric. The aforementioned registers enable processors on either side of the client switch to pass control and status information across the non-transparent bridge.
  • In order to send a PCI transaction from a device on one side of the non-transparent bridge to a device on the other side of the non-transparent bridge, the PCI transaction must be addressed to the port implementing the non-transparent bridge.
  • Upon receipt of the PCI transaction, the client switch performs an address translation (either using a direct address translation mechanism or a look-up table based translation mechanism). The resulting address is then used to route the packet towards the appropriate device on the other side of the non-transparent bridge.
  • the client switch (116) is configured such that at least a portion of the client memory (114) is directly accessible to the storage appliance. Said another way, a storage appliance on one side of the client switch may directly access, via the client switch, client memory on the other side of the client switch.
  • the client switch (116) includes a DMA engine (118). In one embodiment of the invention, the DMA engine (118) may be programmed by either the client processor or a storage appliance connected to the client switch. As discussed above, the client switch (116) is configured such that at least a portion of the client memory (114) is accessible to the storage appliance or storage modules.
  • the DMA engine (118) may be programmed to read data from an address in the portion of the client memory that is accessible to the storage appliance and directly write a copy of such data to memory in the storage appliance or storage modules. Further, the DMA engine (118) may be programmed to read data from the storage appliance and directly write a copy of such data to an address in the portion of the client memory that is accessible to the storage appliance.
  • the DMA engine (118) supports multicasting.
  • a processor in the storage appliance may create a multicast group, where each member of the multicast group corresponds to a unique destination address in memory on the storage appliance.
  • Each member of the multicast group is associated with a descriptor that specifies: (i) the destination address; (ii) the source address; (iii) the transfer size field; and (iv) a control field.
  • the source address for each of the descriptors remains constant while the destination address changes for each descriptor.
  • any data transfer through the switch targeting the multicast group address, including a transfer initiated by a DMA engine, places an identical copy of the data in all of the destination ports associated with the multicast group.
  • the switch processes all of the multicast group descriptors in parallel.
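  • [Illustrative sketch; not part of the patent text] One way to model the per-member descriptors enumerated above, using hypothetical names; the sequential loop stands in for the parallel descriptor processing performed by the switch:

      #include <stddef.h>
      #include <stdint.h>

      struct mcast_descriptor {
          uint64_t dst_addr;  /* (i)   unique destination per member   */
          uint64_t src_addr;  /* (ii)  constant across all descriptors */
          uint32_t size;      /* (iii) transfer size field             */
          uint32_t control;   /* (iv)  control field                   */
      };

      /* Assumed DMA primitive; not a real library call. */
      extern void dma_copy(uint64_t dst, uint64_t src, uint32_t len);

      /* A single transfer targeting the multicast group places an
       * identical copy of the data at every member's destination. */
      static void multicast_write(const struct mcast_descriptor *d, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              dma_copy(d[i].dst_addr, d[i].src_addr, d[i].size);
      }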
  • While FIG. 1D shows a client switch (116) located in the client (110), the client switch (116) may be located external to the client without departing from the invention.
  • the DMA engine (118) may be located external to the client switch (116) without departing from the invention.
  • FIG. 1E shows a system in which clients (100A, 100M) are connected, via a client switch (108), to multiple storage appliances (104A, 104B, 104C, 104D) arranged in a mesh configuration (denoted as storage appliance mesh (104) in FIG. 1E).
  • each client (100A, 100M) does not include its own client switch - rather, all of the clients share a client switch (108).
  • the storage appliance mesh (104) is shown in a fully-connected mesh configuration - that is, every storage appliance (104A, 104B, 104C, 104D) in the storage appliance mesh (104) is directly connected to every other storage appliance (104A, 104B, 104C, 104D) in the storage appliance mesh (104).
  • the client switch (108) may be directly connected to one or more storage appliances (104A, 104B, 104C, 104D) in the storage appliance mesh (104).
  • storage appliance mesh may be implemented using other mesh configurations (e.g., partially connected mesh) without departing from the invention.
  • each client may include its own client switch
  • While FIGs. 1A-1E show storage appliances connected to a limited number of clients, the storage appliances may be connected to any number of clients without departing from the invention.
  • While FIGs. 1A-1E show various system configurations, the invention is not limited to the aforementioned system configurations.
  • the clients regardless of the configuration of the system
  • FIGS. 2A-2D show embodiments of storage appliances in accordance with one or more embodiments of the invention.
  • the storage appliance includes a control module (200) and a storage module group (202). Each of these components is described below.
  • the control module (200) is configured to manage the servicing of read and write requests from one or more clients.
  • the control module is configured to receive requests from one or more clients via the IOM (discussed below), to process the request (which may include sending the request to the storage module), and to provide a response to the client after the request has been serviced. Additional details about the components in the control module are included below. Further, the operation of the control module with respect to servicing read and write requests is described below with reference to FIGS. 4A-7C.
  • the control module (200) includes an Input/Output Module (IOM) (204), a switch fabric (206), a processor (208), a memory (210), and, optionally, a Field Programmable Gate Array (FPGA) (212).
  • IOM Input/Output Module
  • the IOM (204) is the physical interface between the clients (100A, 100M in FIGs. 1A-1E) and the other components in the storage appliance.
  • the IOM supports one or more of the following protocols: PCI, PCIe, PCI-X, Ethernet (including, but not limited to, the various standards defined under IEEE 802.3a-802.3bj), Infiniband, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE).
  • PCI Peripheral Component Interconnect
  • PCIe Peripheral Component Interconnect Express
  • PCI-X PCI-eXtended
  • Ethernet including, but not limited to, the various standards defined under IEEE 802.3a-802.3bj
  • RDMA Remote Direct Memory Access
  • RoCE RDMA over Converged Ethernet
  • the switch fabric (206) includes only a single switch. In another embodiment of the invention, the switch fabric (206) includes multiple interconnected switches. If the switch fabric (206) includes multiple switches, each switch may be connected to every other switch, may be connected to a subset of switches in the switch fabric, or may only be connected to one other switch in the switch fabric. In one embodiment of the invention, each of the switches in the switch fabric (206) is a combination of hardware and logic (implemented, for example, using integrated circuits) (as defined by the protocol(s) the switch fabric implements) that is configured to connect various components together in the storage appliance and to route packets (using the logic) between the various connected components.
  • the switch fabric (206) is physically connected to the IOM (204), processor (208), storage module group (202), and, if present, the FPGA (212).
  • all inter-component communication in the control module (200) passes through the switch fabric (206).
  • all communication between the control module (200) and the storage module group (202) passes through the switch fabric (206).
  • the switch fabric (206) is implemented using a PCI protocol (e.g., PCI, PCIe, PCI-X, or another PCI protocol). In such embodiments, all communication that passes through the switch fabric (206) uses the corresponding PCI protocol.
  • the switch fabric (206) includes a port for the processor (or, more specifically, a port for the root complex integrated in the processor (208) or for the root complex connected to the processor), one or more ports for storage modules (214A, 214N) (see FIG. 3) in the storage module group (202), a port for the FPGA (212) (if present), and a port for the IOM (204).
  • each of the aforementioned ports may be configured as a transparent bridge or a non-transparent bridge (as discussed above).
  • At least one switch in the switch fabric is selected from
  • the switch fabric (206) is configured to implement multicasting. More specifically, in one embodiment of the invention, the processor (208) is configured to generate a multicast group, where the multicast group includes two or more members, with each member specifying an address in the memory (210) and/or in the storage modules (214A, 214N). When the multicast group is created, the multicast group is associated with a multicast address. In order to implement the multicasting, at least one switch in the switch fabric is configured such that, when a write specifying the multicast address as the destination address is received, the switch generates a new write for each member in the multicast group and issues the writes to the appropriate addresses in the storage appliance. In one embodiment of the invention, the address for each write generated by the switch is determined by adding a particular offset to the multicast address.
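  • [Illustrative sketch; not part of the patent text] The switch behavior just described reduces to a small fan-out routine: one incoming write to the multicast address becomes one write per member, each target computed by adding a per-member offset to the multicast address. The names, the offset table, and issue_write are hypothetical:

      #include <stddef.h>
      #include <stdint.h>

      extern void issue_write(uint64_t dst_addr, const void *data, size_t len);

      static void fan_out_multicast(uint64_t mcast_addr,
                                    const uint64_t *member_offset,
                                    size_t members,
                                    const void *data, size_t len)
      {
          /* Generate a new write for each member of the multicast group. */
          for (size_t i = 0; i < members; i++)
              issue_write(mcast_addr + member_offset[i], data, len);
      }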
  • the processor (208) is a group of electronic circuits with a single core or multiple cores that are configured to execute instructions.
  • the processor (208) may be implemented using a Complex Instruction Set (CISC) Architecture or a Reduced Instruction Set (RISC) Architecture.
  • the processor (208) includes a root complex (as defined by the PCIe protocol).
  • If the control module (200) includes a root complex (which may be integrated into the processor (208)), then the memory (210) is connected to the processor (208) via the root complex. Alternatively, the memory (210) is directly connected to the processor (208) using another point-to-point connection mechanism.
  • the memory (210) corresponds to any volatile memory including, but not limited to, Dynamic Random-Access Memory (DRAM), Synchronous DRAM, SDR SDRAM, and DDR SDRAM.
  • the processor (208) is configured to create and update an in-memory data structure (not shown), where the in-memory data structure is stored in the memory (210).
  • the in-memory data structure includes mappings (direct or indirect) between logical addresses and physical storage addresses in the set of storage modules.
  • the logical address is an address at which the data appears to reside from the perspective of the client.
  • the logical address is (or includes) a hash value generated by applying a hash function (e.g. SHA-1, MD-5, etc.) to an n-tuple.
  • the n-tuple is ⁇ object ID, offset ID>, where the object ID defines a file and the offset ID defines a location relative to the starting address of the file.
  • the n-tuple is ⁇ object ID, offset ID, birth time>, where the birth time corresponds to the time when the file (identified using the object ID) was created.
  • the logical address may include a logical object ID and a logical byte address, or a logical object ID and a logical address offset.
  • the logical address includes an object ID and an offset ID.
  • the physical address may correspond to a location in the memory (210), a location in vaulted memory (see FIG. 3), or a location in a solid state memory module.
  • the in-memory data structure may map a single hash value to multiple physical addresses if there are multiple copies of the data in the storage appliance.
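  • [Illustrative sketch; not part of the patent text] A minimal shape for such a mapping, assuming (hypothetically) up to three copies per datum, as in the multicast write described later; map_find is an assumed hash-table lookup:

      #include <stdint.h>

      #define MAX_COPIES 3   /* e.g., three copies after a multicast write */

      struct mapping {
          uint64_t hash;               /* hash of <object ID, offset ID> */
          uint64_t phys[MAX_COPIES];   /* one physical address per copy  */
          uint32_t ncopies;
      };

      extern struct mapping *map_find(uint64_t hash);

      /* Return the physical address of any stored copy, or 0 if absent. */
      static uint64_t locate(uint64_t hash)
      {
          struct mapping *m = map_find(hash);
          return (m != 0 && m->ncopies > 0) ? m->phys[0] : 0;
      }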
  • the memory (210) includes one or more of the following: a submission queue for the processor, a completion queue for the processor, a submission queue for each of the storage modules in the storage appliance and a completion queue for each of the storage modules in the storage appliance.
  • the submission queue for the processor is used to send commands (e.g., read request, write request) to the processor.
  • the completion queue for the processor is used to signal the processor that a command it issued to another entity has been completed.
  • the submission and completion queues for the storage modules function in a similar manner.
  • the processor is configured to offload various types of processing, via the switch fabric, to the FPGA (212).
  • the FPGA (212) includes functionality to calculate checksums for data that is being written to the storage module(s) and/or data that is being read from the storage module(s). Further, the FPGA (212) may include functionality to calculate P and/or Q parity information for purposes of storing data in the storage module(s) using a RAID scheme (e.g., RAID 2 - RAID 6) and/or functionality to perform various calculations necessary to recover corrupted data stored using a RAID scheme (e.g., RAID 2 - RAID 6).
  • the storage module group (202) includes one or more storage modules (214A, 214N) each configured to store data. Storage modules are described below in FIG. 3.
  • the processor (208) is configured to program one or more DMA engines in the system.
  • the processor (208) is configured to program the DMA engine in the client switch (see FIG. 1D).
  • the processor (208) may also be configured to program the DMA engine in the storage module (see FIG. 3).
  • programming the DMA engine in the client switch may include creating a multicast group and generating descriptors for each of the members in the multicast group.
  • FIG. 2B shows a storage appliance in accordance with one or more embodiments of the invention.
  • the storage appliance includes a control module (216) and at least two storage module groups (236, 238).
  • the control module (216) includes a switch fabric (234), which is directly connected to IOM A (218), IOM B (220), processor A (222), processor B (224), (if present) FPGA A (230), (if present) FPGA B (232), storage modules (236A, 236N) in storage module group A (236) and storage modules (238A, 238N) in storage module group B (238). All communication between the aforementioned components (except between processor A (222) and processor B (224)) passes through the switch fabric (234).
  • processors (222, 224) within the control module (216) are able to directly communicate using, for example, point-to-point interconnect such as Intel® QuickPath Interconnect.
  • Other point-to-point communication mechanisms may be used to permit direct communication between the processors (222, 224) without departing from the invention.
  • the control module (216) is substantially similar to the control module (200) in FIG. 2A.
  • the switch fabric (234) is substantially similar to the switch fabric (206) in FIG. 2A.
  • each processor (222, 224) is substantially similar to the processor (208) in FIG. 2A.
  • the memory (226, 228) is substantially similar to the memory (210) in FIG. 2A.
  • the IOMs (218, 220) are substantially similar to the IOM (204) in FIG. 2A.
  • the FPGAs (230, 232) are substantially similar to the FPGA (212) in FIG. 2A.
  • the storage module groups (236, 238) are substantially similar to the storage module group (202) in FIG. 2A.
  • the two IOMs (218, 220) in the control module (216) double the I/O bandwidth for the control module (216) (over the I/O bandwidth of a control module with a single IOM).
  • the addition of a second IOM (or additional IOMs) increases the number of clients that may be connected to a given control module and, by extension, the number of clients that can be connected to a storage appliance.
  • the use of the switch fabric (234) to handle communication between the various connected components allows each of the processors (222, 224) to directly access (via the switch fabric (234)) all FPGAs (230, 232) and all storage modules (236A, 236N, 238A, 238N) connected to the switch fabric (234).
  • FIG. 2C shows a storage appliance that includes a control module (240) connected (via a switch fabric (246)) to multiple storage modules (not shown) in the storage module groups (256, 258, 260, 262).
  • the control module (240) includes two IOMs (242, 244), two processors (248, 250), and memory (252, 254).
  • all components in the control module (240) communicate via the switch fabric (246).
  • the processors (248, 250) may communicate with each other using the switch fabric (246) or a direct connection (as shown in FIG. 2C).
  • the processors (248, 250) within the control module (240) are able to directly communicate using, for example, a point-to-point interconnect such as Intel® QuickPath Interconnect.
  • Other point-to-point communication mechanisms may be used to permit direct communication between the processors (248, 250) without departing from the invention.
  • processor A (248) is configured to primarily handle requests related to the storage and retrieval of data from storage module groups A and B (256, 258) while processor B (250) is configured to primarily handle requests related to the storage and retrieval of data from storage module groups C and D (260, 262).
  • the processors (248, 250) are configured to communicate (via the switch fabric (246)) with all of the storage module groups (256, 258, 260, 262). This configuration enables the control module (240) to spread the processing of I/O requests between the processors and/or provides built-in redundancy to handle the scenario in which one of the processors fails.
  • the control module (240) is substantially similar to the control module (200) in FIG. 2A.
  • the switch fabric (246) is substantially similar to the switch fabric (206) in FIG. 2A.
  • each processor (248, 250) is substantially similar to the processor (208) in FIG. 2A.
  • the memory (252, 254) is substantially similar to the memory (210) in FIG. 2A.
  • the IOMs (242, 244) are substantially similar to the IOM (204) in FIG. 2A.
  • the storage module groups (256, 258, 260, 262) are substantially similar to the storage module group (202) in FIG. 2A.
  • FIG. 2D shows a storage appliance that includes two control modules (264, 266).
  • Each control module includes IOMs (296, 298, 300, 302), processors (268, 270, 272, 274), memory (276, 278, 280, 282), and FPGAs (if present) (288, 290, 292, 294).
  • Each of the control modules (264, 266) includes a switch fabric (284, 286) through which components within the control modules communicate.
  • processors (268, 270, 272, 274) within a control module may directly communicate with each other using, for example, a point-to-point interconnect such as Intel® QuickPath Interconnect.
  • processors (268, 270) in control module A may communicate with components in control module B via a direct connection to the switch fabric (286) in control module B.
  • processors (272, 274) in control module B may communicate with components in control module A via a direct connection to the switch fabric (284) in control module A.
  • each of the control modules is connected to various storage modules (denoted by storage module groups (304, 306, 308, 310)). As shown in FIG. 2D, each control module may communicate with storage modules connected to the switch fabric in the control module. Further, processors in control module A (264) may communicate with storage modules connected to control module B (266) using switch fabric B (286). Similarly, processors in control module B (266) may communicate with storage modules connected to control module A (264) using switch fabric A (284).
  • The interconnected control modules allow the storage appliance to distribute I/O load across the storage appliance regardless of which control module receives the I/O request. Further, the interconnection of control modules enables the storage appliance to process a larger number of I/O requests. Moreover, the interconnection of control modules provides built-in redundancy in the event that a control module (or one or more components therein) fails.
  • the in-memory data structure is mirrored across the memories in the control modules.
  • the processors in the control modules issue the necessary commands to update all memories within the storage appliance such that the in-memory data structure is mirrored across all the memories.
  • any processor may use its own memory to determine the location of a datum (as defined by an n-tuple, discussed above) in the storage appliance. This functionality allows any processor to service any I/O request with respect to the location of the datum within the storage module. Further, by mirroring the in-memory data structures, the storage appliance may continue to operate when one of the memories fails.
  • FIG. 3 shows a storage module in accordance with one or more embodiments of the invention.
  • the storage module (320) includes a storage module controller (322), memory (324), and one or more solid state memory modules (330A, 330N). Each of these components is described below.
  • the storage module controller (322) is configured to receive requests to read from and/or write data to one or more control modules. Further, the storage module controller (322) is configured to service the read and write requests using the memory (324) and/or the solid state memory modules (330A, 330N). Though not shown in FIG. 3, the storage module controller (322) may include a DMA engine, where the DMA engine is configured to read data from the memory (324) or from one of the solid state memory modules (330A, 330N) and write a copy of the data to a physical address in client memory (114 in FIG. 1D). Further, the DMA engine may be configured to write data from the memory (324) to one or more of the solid state memory modules.
  • the DMA engine is configured to be programmed by the processor (e.g., 208 in FIG. 2A).
  • the storage module may include a DMA engine that is external to the storage module controller without departing from the invention.
  • the memory (324) corresponds to any volatile memory including, but not limited to, Dynamic Random-Access Memory (DRAM), Synchronous DRAM, SDR SDRAM, and DDR SDRAM.
  • the memory (324) may be logically or physically partitioned into vaulted memory (326) and cache (328).
  • the storage module controller (322) is configured to write out the entire contents of the vaulted memory (326) to one or more of the solid state memory modules (330A, 330N) in the event of notification of a power failure (or another event in which the storage module may lose power) in the storage module.
  • the storage module controller (322) is configured to write the entire contents of the vaulted memory (326) to one or more of the solid state memory modules (330A, 330N) between the time of the notification of the power failure and the actual loss of power to the storage module.
  • the content of the cache (328) is lost in the event of a power failure (or another event in which the storage module may lose power).
  • the solid state memory modules correspond to any data storage device that uses solid-state memory to store persistent data.
  • solid-state memory may include, but is not limited to, NAND Flash memory, NOR Flash memory, Magnetic RAM Memory (M-RAM), Spin Torque Magnetic RAM Memory (ST-MRAM), Phase Change Memory (PCM), or any other memory defined as a non-volatile Storage Class Memory (SCM).
  • M-RAM Magnetic RAM Memory
  • ST-MRAM Spin Torque Magnetic RAM Memory
  • PCM Phase Change Memory
  • SCM Storage Class Memory
  • the following storage locations are part of a unified address space: (i) the portion of the client memory accessible via the client switch, (ii) the memory in the control module, (iii) the memory in the storage modules, and (iv) the solid state memory modules. Accordingly, from the perspective of the processor in the storage appliance, the aforementioned storage locations (while physically separate) appear as a single pool of physical addresses. Said another way, the processor may issue read and/or write requests for data stored at any of the physical addresses in the unified address space.
  • the aforementioned storage locations may be referred to as storage fabric that is accessible using the unified address space.
  • a unified address space is created, in part, by the non-transparent bridge in the client switch, which allows the processor in the control module to "see" a portion of the client memory. Accordingly, the processor in the control module may perform read and/or write requests in the portion of the client memory that it can "see".
  • FIG. 4A shows a storage module in accordance with one or more embodiments of the invention.
  • the solid state memory module (400) includes one or more blocks.
  • a block is the smallest erasable unit of storage within the solid state memory module (400).
  • FIG. 4B shows a block in accordance with one or more embodiments of the invention. More specifically, each block (402) includes one or more pages.
  • a page is the smallest addressable unit for read and program operations (including the initial writing to a page) in the solid state memory module.
  • rewriting a page within a block requires the entire block to be rewritten.
  • each page within a block is either a Frag Page (see FIG. 4C) or a TOC Page (see FIG. 4D).
  • FIG. 4C shows a frag page in accordance with one or more embodiments of the invention.
  • the frag page includes one or more frags.
  • a frag corresponds to a finite amount of user data.
  • the frags within a given page may be of a uniform size or of a non-uniform size.
  • frags within a given block may be of a uniform size or of a non-uniform size.
  • a given frag may be less than the size of a page, may be exactly the size of a page, or may extend over one or more pages.
  • a frag page only includes frags.
  • each frag includes user data (i.e., data provided by the client for storage in the storage appliance).
  • The terms "frag" and "user data" are used interchangeably.
  • FIG. 4D shows a TOC page in accordance with one or more embodiments of the invention.
  • the TOC page (406) includes one or more TOC entries, where each of the TOC entries includes metadata for a given frag.
  • the TOC page (406) may include a reference to another TOC page in the block (402).
  • a TOC page only includes TOC entries (and, optionally, a reference to another TOC page in the block), but does not include any frags.
  • each TOC entry corresponds to a frag (see FIG. 4C) in the block (402).
  • the TOC entries only correspond to frags within the block.
  • the TOC page is associated with a block and only includes TOC entries for frags in that block.
  • the last page that is not defective in each block within each of the solid state memory modules is a TOC page.
  • FIG. 4E shows a block in accordance with one or more embodiments of the invention. More specifically, FIG. 4E shows a block (408) that includes TOC pages (410, 412, 414) and frag pages (416, 418, 420, 422, 424, 426). In one embodiment of the invention, the block (408) is conceptually filled from "top" to "bottom." Further, TOC pages are generated and stored once the accumulated size of the TOC entries for the frags in the frag pages equals the size of a page. Turning to FIG. 4E, for example, frag page 0 (416) and frag page 1 (418) are stored in the block (408); once the accumulated size of their TOC entries equals the size of a page, TOC page (414) is created and stored in the block (408).
  • TOC page (412) is then created and stored in the block (408). Further, because there is already a TOC page in the block (408), TOC page (412) also includes a reference to TOC page (414).
  • TOC page (410) is created and stored in the last page of the block (408).
  • the TOC page may include padding to address the difference between the cumulative size of the TOC entries and the page size.
  • TOC page (410) includes a reference to one other TOC page (412).
  • the TOC pages are linked from the "bottom" of the block to the "top" of the block, such that a TOC page may be obtained by following a reference from the TOC page below it.
  • TOC page (412) may be accessed using the reference in TOC page (410).
  • block (408) may include pages (e.g., a page that includes parity data) other than frag pages and TOC pages without departing from the invention.
  • Such other pages may be located within the block and, depending on the implementation, interleaved between the TOC pages and the frag pages.
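  • [Illustrative sketch; not part of the patent text] Reading a block's metadata back therefore starts at the last page (always a TOC page) and follows each TOC page's reference to the TOC page stored earlier in the block. The layout, sentinel value, and helper functions below are hypothetical:

      #include <stdint.h>

      #define NO_REF 0xFFFFFFFFu       /* first TOC page has no reference */

      struct toc_page {
          uint32_t ref_page_id;        /* page ID of the previous TOC page
                                          in the block, or NO_REF          */
          /* ... remainder of the page holds TOC entries ... */
      };

      extern const struct toc_page *read_page(uint32_t block_id,
                                              uint32_t page_id);
      extern void process_toc_entries(const struct toc_page *tp);

      static void walk_toc_chain(uint32_t block_id, uint32_t last_page_id)
      {
          uint32_t page_id = last_page_id; /* last good page is a TOC page */
          for (;;) {
              const struct toc_page *tp = read_page(block_id, page_id);
              process_toc_entries(tp);     /* e.g., populate in-memory map */
              if (tp->ref_page_id == NO_REF)
                  break;                   /* reached the first TOC page   */
              page_id = tp->ref_page_id;
          }
      }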
  • each TOC entry (430) includes metadata for a frag (and in particular the user data in the frag) and may include one or more of the following fields: (i) object ID (432), which identifies the object (e.g., file) being stored; (ii) the birth time (434), which specifies the time (e.g., the processor clock value of the processor in the control module) at which the frag corresponding to the TOC entry was written to the vaulted memory; (iii) offset ID (436), which identifies the starting point of the user data in the frag relative to the beginning of the object (identified by the object ID); (iv) fragment size (438), which specifies the size of the frag; (v) page ID (440), which identifies the page in the block in which the frag is stored; and (vi) byte (442), which identifies the starting location of the frag in the page (identified by the page ID).
  • the ⁇ object ID, offset ID> or ⁇ object ID, offset ID, birth time> identify user data that is provided by the client. Further, the ⁇ object ID, offset ID> or ⁇ object ID, offset ID, birth time> are used by the client to identify particular user data, while the storage appliance uses a physical address(es) to identify user data within the storage appliance. Those skilled in the art will appreciate that the client may provide a logical address instead of the object ID and offset ID.
  • the TOC entry may include additional or fewer fields than shown in FIG. 4F without departing from the invention. Further, the fields in the TOC entry may be arranged in a different order and/or combined without departing from the invention. In addition, while the fields in the TOC entry shown in FIG. 4F appear to all be of the same size, the size of various fields in the TOC entry may be non-uniform, with the size of any given field varying based on the implementation of the TOC entry.
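  • [Illustrative sketch; not part of the patent text] The fields enumerated for FIG. 4F map naturally onto a C structure; the widths below are hypothetical, consistent with the note above that field sizes may be non-uniform and implementation-dependent:

      #include <stdint.h>

      struct toc_entry {
          uint64_t object_id;   /* (i)   object (e.g., file) being stored  */
          uint64_t birth_time;  /* (ii)  processor clock value when the
                                         frag was written to vaulted memory */
          uint64_t offset_id;   /* (iii) start of the user data relative
                                         to the beginning of the object     */
          uint32_t frag_size;   /* (iv)  size of the frag                   */
          uint32_t page_id;     /* (v)   page in the block holding the frag */
          uint32_t byte;        /* (vi)  starting byte of the frag in
                                         that page                          */
      };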
  • FIG. 5 shows data structures in accordance with one or more embodiments of the invention.
  • the memory in the control module includes an in- memory data structure.
  • the in-memory data structure includes a mapping between an n-tuple (e.g., ⁇ object ID, offset ID> (500), ⁇ object ID, offset ID, birth time> (not shown)) and a physical address (502) of the frag in a solid state memory module.
  • the mapping is between a hash of the n-tuple and the physical address.
  • the physical address for a frag is defined as the following n-tuple: ⁇ storage module, channel, chip enable, LUN, plane, block, page, byte>.
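  • [Illustrative sketch; not part of the patent text] The physical-address n-tuple above can be represented as a structure; the bit widths are hypothetical, since the text does not fix an encoding:

      #include <stdint.h>

      struct physical_address {
          uint16_t storage_module;  /* which storage module            */
          uint8_t  channel;         /* flash channel within the module */
          uint8_t  chip_enable;     /* chip-enable line on the channel */
          uint8_t  lun;             /* logical unit within the chip    */
          uint8_t  plane;           /* plane within the LUN            */
          uint16_t block;           /* erase block                     */
          uint16_t page;            /* page within the block           */
          uint16_t byte;            /* byte offset within the page     */
      };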
  • the control module also tracks the number of TOC entries (506) per block (504). More specifically, each time a frag is written to vaulted memory, a TOC entry for the frag is created. The control module tracks the block with which the newly created TOC entry is associated and uses this information to generate TOC pages. For example, the control module uses the aforementioned information to determine whether the cumulative size of all TOC entries associated with a given block, which have not been written to a TOC page, equals a page size in the block.
  • the control module may generate a TOC page using the aforementioned entries and initiate the writing of the TOC page to a storage module.
  • FIGS. 6A-6C show flowcharts in accordance with one or more embodiments of the invention. More specifically, FIGS. 6A-6C show a method for storing user data in a storage appliance in accordance with one or more embodiments of the invention. While the various steps in the flowchart are presented and described sequentially, one of ordinary skill will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all of the steps may be executed in parallel. In one embodiment of the invention, the steps shown in FIG. 6A may be performed in parallel with the steps shown in FIG. 6B and the steps shown in FIG. 6C. Further, the steps shown in FIG. 6B may be performed in parallel with the steps shown in FIG. 6C.
  • the client writes a write command (write request) to the submission queue (SQ) of the processor in a control module (208 in FIG. 2A).
  • the write command specifies the logical address (which may also be referred to as a "source address") of the user data in the client memory.
  • the write command may specify the user data using ⁇ object ID, offset ID>.
  • the write command passes through at least the client switch and the switch fabric prior to reaching the SQ of the processor.
  • In step 602, the client writes a new SQ tail to the SQ Tail doorbell register.
  • By writing to the SQ Tail doorbell register, the client notifies the processor that there is a new command to process in its SQ.
  • In step 604, the processor obtains the write command from the SQ.
  • In step 606, the processor determines the physical address(es) at which to write the user data (as part of a frag). In one embodiment of the invention, the physical address(es) corresponds to a location in the solid state memory module. In one embodiment of the invention, the processor selects two physical addresses in which to write copies of the user data, where each of the physical addresses is in a separate solid state memory module.
  • the processor programs the DMA engine to issue a write to a multicast address.
  • the multicast address is associated with a multicast group, where the multicast group specifies a first memory location in the memory in the control module, a second memory location in a first vaulted memory, and a third memory location in a second vaulted memory.
  • the first vaulted memory is located in the same storage module as the solid state memory module that includes the physical address specified by the processor.
  • the second vaulted memory is determined in a similar manner.
  • the DMA engine reads the user data from the source address in client memory, and writes the data to the multicast address as directed by the control module.
  • a switch in the switch fabric is associated with the multicast address. Upon receipt of the address, the switch performs the necessary translation on the multicast address to obtain three addresses - one to each of the aforementioned memory locations. The switch subsequently sends copies of the user data to the three memory locations.
  • the particular switch which implements multicast may vary based on the implementation of the switch fabric. In this embodiment, there is only one write issued between the client and the storage appliance.
  • In step 608, the processor programs the DMA engine to issue three write requests in parallel, one to each of the aforementioned memory locations.
  • In step 610, the DMA engine issues the three write requests in parallel. In this embodiment, there are three writes issued between the client and the storage appliance.
  • In step 612, a TOC entry is created for each copy of user data stored in vaulted memory.
  • the page and byte specified in each TOC entry corresponds to the page and byte portions of the corresponding physical address identified in step 606. Accordingly, while the frag is not written to the physical address in the solid state memory module at the time the corresponding TOC entry is created, the frag (as part of a frag page) is intended to be written to the physical address at a later point in time.
  • each of the TOC entries is stored in a TOC page and the TOC page is eventually written to a solid state memory module. However, prior to the creation of the TOC page, the TOC entries are created and temporarily stored in the memory in the control module and in vaulted memory on one of the solid state storage modules.
  • the TOC entries created in step 612 are stored in vaulted memory. More specifically, each TOC entry is stored in the vaulted memory of the storage module and includes the physical address at which the corresponding frag will be written at a later point in time.
  • In step 616, the processor updates the in-memory data structure to reflect that three copies of the user data are stored in the storage appliance.
  • the processor may also update the data structure that tracks the TOC entries per block (see FIG. 5).
  • In step 618, the processor writes the SQ Identifier (which identifies the SQ of the processor) and a Write Command Identifier (which identifies the particular write command the client issued to the processor) to the completion queue (CQ) of the client.
  • In step 620, the processor generates an interrupt for the client processor.
  • the processor uses the doorbell interrupts provided by the non-transparent bridge to issue an interrupt to the client processor.
  • In step 622, the client processes the data in its CQ.
  • In step 624, once the client has processed the data at the head of the completion queue, the client writes a new CQ head to the CQ head doorbell. This signifies to the processor the next location in the CQ to use in future notifications to the client.
  • In step 626, the processor in the control module initiates the writing of the copies of the user data from the vaulted memory to the physical address identified in step 606.
  • the processor in the control module programs a DMA engine in the storage module controller to read user data from the vaulted memory and to write a copy of this user data to a physical address in the solid state memory module.
  • the physical address to which the copy of user data is written is the physical address previously determined by the processor in Step 606.
  • In step 628, following step 626, the processor in the control module requests that all copies of the user data in vaulted memory that correspond to the user data written to the solid state memory module in step 626 be removed.
  • In step 630, a confirmation of the removal is sent to the processor in the control module by each of the storage modules that included a copy of the user data (written in step 626) in their respective vaulted memories.
  • FIG. 6C shows a method that is performed each time a TOC entry is created.
  • In step 632, a determination is made about whether there is more than one empty page remaining in the block. Said another way, a determination is made about whether user data has been written to all other pages except the last page in the block. If there is more than one empty page remaining in the block, the process proceeds to step 634; otherwise the process proceeds to step 636. As discussed above, if there is only one empty page left in the block in which to write user data, then a TOC page must be written to the last page in the block.
  • In step 634, a determination is made about whether the cumulative size of the TOC entries associated with the block (which have not yet been written to a TOC page in the block) is greater than or equal to a page size. If so, the process proceeds to step 636; otherwise the process ends.
  • In step 636, the TOC entries for the block (which have not yet been written to a TOC page in the block) are combined to create a TOC page.
  • The TOC page created in this scenario may include padding (as discussed above).
  • In step 638, a determination is made about whether the block includes another TOC page. If the block includes another TOC page, the process proceeds to step 640; otherwise the process proceeds to step 642.
  • In step 640, a reference to the most recently stored TOC page in the block is included in the TOC page created in step 636 (e.g., TOC page (410) references TOC page (412) in FIG. 4E).
  • In step 642, the processor initiates the writing of the TOC page to a solid state memory module. More specifically, a DMA engine programmed by the processor writes a copy of the TOC page to the block in the solid state memory module that includes the frags corresponding to the TOC entries in the TOC page.
  • In step 644, the processor requests that all storage modules holding TOC entries that were included in the TOC page written to the solid state memory module in step 642 remove those TOC entries from their respective vaulted memories.
  • In step 646, the processor receives confirmation from the storage modules that the aforementioned TOC entries have been removed.
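  • Taken together, steps 632-646 amount to a small flush policy that runs each time a TOC entry is created. The sketch below is one possible rendering under stated assumptions (fixed-size TOC entries, an illustrative page size, and a hypothetical write_page callback); it is not the claimed implementation:

```python
PAGE_SIZE = 4096   # assumption: the actual page size is device-specific
ENTRY_SIZE = 64    # assumption: fixed-size TOC entries

class BlockState:
    """Illustrative per-block bookkeeping kept by the control module."""
    def __init__(self, n_pages):
        self.empty_pages = n_pages   # pages not yet written
        self.pending = []            # TOC entries not yet in a TOC page
        self.last_toc_page = None    # page ID of the newest TOC page

def on_toc_entry_created(block, entry, write_page):
    """Sketch of FIG. 6C; write_page programs the DMA write of a TOC
    page into the block and returns the page ID it was written to."""
    block.pending.append(entry)

    # Step 632: if only one empty page remains, a TOC page must be
    # written now (the last page of every block is a TOC page).
    must_flush = block.empty_pages <= 1
    # Step 634: otherwise, flush only once a full page of entries exists.
    if not must_flush and len(block.pending) * ENTRY_SIZE < PAGE_SIZE:
        return

    # Step 636: combine the pending entries into a TOC page (padded if
    # they do not fill the page exactly).
    toc_page = {"entries": list(block.pending)}
    # Steps 638-640: reference the most recently stored TOC page, if
    # any, so the TOC pages in the block form a walkable chain.
    toc_page["prev_toc_page"] = block.last_toc_page

    # Step 642: write the TOC page into the same block as its frags.
    block.last_toc_page = write_page(block, toc_page)
    block.empty_pages -= 1
    block.pending.clear()
    # Steps 644-646 (not shown): the vaulted copies of these entries
    # are then removed and the removals are confirmed.
```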
  • FIGS. 7A-7E show an example of storing user data in a storage appliance in accordance with one or more embodiments of the invention. The example is not intended to limit the scope of the invention.
  • In FIG. 7A, consider the scenario in which the client (700) issues a request to write user data (denoted by the black circle) to the storage appliance.
  • The processor (714) in the control module (704) determines that a first copy of the user data should be written to a first physical location in solid state memory module A (726) in storage module A (718) and that a second copy of the user data should be written to a second physical location in solid state memory module B (728) in storage module B (720).
  • Prior to receiving the write request, the processor (714) creates a multicast group with three members:
  • a first member has a destination address in vaulted memory A (722)
  • the second member has a destination address in vaulted memory B (724)
  • the third member has a destination address in memory (712).
  • the processor (714) subsequently programs a switch (not shown) in the switch fabric (716) to implement the multicast group.
  • The DMA engine proceeds to issue a write to a multicast address associated with the multicast group.
  • The write is transmitted to the switch fabric and ultimately reaches the switch (not shown) that implements the multicast group.
  • The switch subsequently creates three writes (each to one of the destinations specified by the multicast group) and issues the writes to the target memory locations. In one embodiment of the invention, the three writes occur in parallel.
  • The frags to be written at the various destination addresses pass through the switch fabric (716). Once the writes are complete, there are three copies of the user data in the storage appliance, and the in-memory data structure (not shown) in the memory (712) is updated to reflect that the user data is stored in three locations within the storage appliance. Further, the client (700) is notified that the write is complete.
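  • The fan-out described above can be sketched as follows, modelling the memories as dictionaries and the switch's multicast table as an address-to-destinations map (all names are assumptions; the real fabric is a PCIe switch):

```python
class MulticastSwitch:
    """Illustrative switch-fabric multicast: one inbound write fans out
    into one outbound write per programmed destination."""
    def __init__(self):
        self.groups = {}  # multicast address -> list of destinations

    def program_group(self, mcast_addr, destinations):
        # The processor programs the group before any write arrives.
        self.groups[mcast_addr] = destinations

    def write(self, mcast_addr, data, memories):
        # One inbound write becomes N outbound writes (in the appliance
        # these proceed in parallel).
        for memory_name, dest_addr in self.groups[mcast_addr]:
            memories[memory_name][dest_addr] = data

# Usage mirroring FIG. 7A: one DMA write lands in vaulted memory A,
# vaulted memory B, and the memory of the control module.
switch = MulticastSwitch()
memories = {"vault_a": {}, "vault_b": {}, "ctrl_mem": {}}
switch.program_group(0x100, [("vault_a", 0), ("vault_b", 0), ("ctrl_mem", 0)])
switch.write(0x100, b"user data frag", memories)
```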
  • the processor generates a TOC entry (TE 1, TE 2) in memory (712) for each of the frags stored in vaulted memory.
  • TE 1 is the TOC entry for the frag stored in vaulted memory A (722), and TE 2 is the TOC entry for the frag stored in vaulted memory B (724).
  • The processor (via a DMA engine, not shown) subsequently writes a copy of TE 1 to vaulted memory A (722) and a copy of TE 2 to vaulted memory B (724).
  • The TOC entries (TE 1 and TE 2) are temporarily stored in the aforementioned vaulted memories until they are added to a TOC page and written to the appropriate solid state memory module.
  • the client (700) may remove the user data (which has already been written to the storage appliance) from the client memory (708).
  • The processor (714) issues a request to storage module A (718) to write a copy of the user data currently in vaulted memory A (722) to the physical address in solid state memory module A (726).
  • the storage module controller (not shown) writes a copy of the user data in vaulted memory A (722) to solid state memory module A (726).
  • the processor (714) is notified once the write is complete.
  • the processor (714) may update the in-memory data structure upon receipt of the notification from storage module A (718).
  • the processor determines that the cumulative total size of TE 1 and other TOC entries (not shown) for frags in the same block (i.e., the block in which the frag corresponding to TE 1 is stored) equals a page size. Based on this determination, the processor creates a TOC page and subsequently (via a DMA engine (not shown)) writes the TOC page to the block (not shown) in the solid state memory module that includes the frag to which TE 1 corresponds.
  • The processor (714) issues a request to all storage modules that include a copy of the user data in vaulted memory to remove the copy of the user data from their respective vaulted memories.
  • the processor (714) issues a request to all storage modules that include a copy of any TOC entry written in the aforementioned TOC page to remove such TOC entries from their respective vaulted memories.
  • the storage modules each notify the control module upon completion of these requests.
  • FIG. 7E shows the state of the system after all storage modules have completed the above requests.
  • The processor (714) may update the in-memory data structure upon receipt of the notification from the storage modules that all copies of the user data in vaulted memory have been removed.
  • a TOC entry is created for each copy of user data and stored in vaulted memory such that each copy of user data can be accessed in the event that one of the TOC entries is corrupted, lost, or otherwise unavailable. Further, in the event of a power failure, all TOC entries within the vaulted memory are written to the corresponding solid state memory module. Further, the frags corresponding to the aforementioned TOC entries are written to the physical addresses that the processor originally determined at the time the write request for the client was processed.
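  • As a minimal sketch of the power-fail path described above (all helper names are assumptions):

```python
def on_power_failure(storage_modules):
    """Illustrative power-fail handler: every frag and TOC entry still
    in vaulted memory is written to the physical address chosen when
    the original write request was processed."""
    for module in storage_modules:
        for frag, phys_addr in module.vaulted_frags():
            module.flash_write(phys_addr, frag)
        for toc_entry, phys_addr in module.vaulted_toc_entries():
            module.flash_write(phys_addr, toc_entry)
```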
  • FIG. 8 shows a flowchart in accordance with one or more embodiments of the invention. More specifically, FIG. 8 shows a method for generating an in-memory data structure in accordance with one or more embodiments of the invention. While the various steps in the flowchart are presented and described sequentially, one of ordinary skill will appreciate that some or all of the steps may be executed in different orders, combined, omitted, or executed in parallel.
  • In step 800, a block is selected.
  • In step 802, the last page in the block is obtained; more specifically, the processor reads the contents of the last page.
  • The last page of every block in the solid state memory modules within the storage appliance is a TOC page.
  • In step 804, the TOC entries are extracted from the TOC page.
  • In step 806, each of the TOC entries obtained in step 804 is processed to populate the in-memory data structure. More specifically, processing each TOC entry may include one or more of the following: (i) extracting the page ID and byte information from the TOC entry; (ii) combining the information in (i) with <storage module, channel, chip enable, LUN, plane, block> to obtain a physical address; (iii) extracting the object ID and offset ID (and, optionally, the birth time) from the TOC entry; (iv) applying a hash function to <object ID, offset ID> (or, optionally, <object ID, offset ID, birth time>) to generate a hash value; and (v) populating the in-memory data structure with a mapping from the hash value to the physical address.
  • The processor already has the <storage module, channel, chip enable, LUN, plane, block> information because the processor needed this information to obtain the last page of the block.
  • The processor may also use the Type field in the TOC entry to determine whether the frag is stored in a bad page. If the frag is stored in a bad page, the processor may not generate a mapping in the in-memory data structure for that TOC entry.
  • In step 808, once all TOC entries in the TOC page have been processed, a determination is made about whether the TOC page includes a reference to another TOC page in the block (i.e., the block selected in step 800). If the TOC page includes such a reference, the process proceeds to step 810; otherwise the process ends. In step 810, the referenced TOC page is obtained. In step 812, the TOC entries are extracted from that TOC page. The process then proceeds to step 806.
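  • Under the same assumptions as the earlier sketches (a dict-based in-memory map, a hypothetical read_page helper, and Python's built-in hash standing in for the hash function described above), the per-block walk of FIG. 8 might look like this:

```python
TYPE_BAD_PAGE = 1  # assumption: Type value marking a frag in a bad page

def rebuild_block_mappings(block_addr, last_page_id, read_page, in_memory):
    """Sketch of FIG. 8 for one block: start at the last page (always a
    TOC page), map each entry, and follow references to earlier TOC
    pages. block_addr stands in for <storage module, channel, chip
    enable, LUN, plane, block>, which the processor already knows."""
    page_id = last_page_id
    while True:
        toc_page = read_page(block_addr, page_id)   # steps 802/810
        for e in toc_page["entries"]:               # steps 804-806
            if e.type == TYPE_BAD_PAGE:             # no mapping for frags
                continue                            # stored in bad pages
            physical_address = (block_addr, e.page_id, e.byte)
            key = hash((e.object_id, e.offset_id))  # optionally + birth time
            in_memory[key] = physical_address
        if toc_page.get("prev_toc_page") is None:   # step 808
            return in_memory
        page_id = toc_page["prev_toc_page"]         # steps 810-812
```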
  • the method in FIG. 8 may be performed in parallel for all blocks (or a subset of blocks) within the storage appliance when the system is powered on. Following this process, the resulting in-memory data structure may be updated by the processor as new user data is written to the storage appliance.
  • The in-memory data structure is generated prior to any operations (e.g., a read operation, a write operation, and/or an erase operation) being performed on any datum stored in the solid state memory modules.
  • One or more embodiments of the invention provide a system and method in which all user data stored in the storage appliance is co-located with its metadata. In this manner, all user data stored in the storage appliance is self-describing.
  • By co-locating user data and metadata, the storage appliance is better protected against failure of a given solid state memory module (or a subset thereof). Said another way, if a given solid state memory module (or subset thereof) fails, the user data in the other solid state memory modules in the system is still accessible because the metadata required to access that user data is itself located in those same solid state memory modules.
  • embodiments of the invention enable the creation of an in-memory data structure, which allows the control module to access user data in a single look-up step. Said another way, the control module may use the in-memory data structure to directly ascertain the physical address(es) of the user data in the storage appliance. Using this information, the control module is able to directly access the user data and does not need to traverse any intermediate metadata hierarchy in order to obtain the user data.
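  • A single look-up read, under the same illustrative assumptions, reduces to one map access followed by one media read:

```python
def read_datum(in_memory, read_flash, object_id, offset_id):
    """Illustrative single look-up read path: hash <object ID, offset
    ID>, obtain the physical address directly, and read the medium; no
    intermediate metadata hierarchy is traversed."""
    physical_address = in_memory[hash((object_id, offset_id))]
    return read_flash(physical_address)
```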
  • One or more embodiments of the invention may be implemented using instructions executed by one or more processors in the system. Further, such instructions may correspond to computer-readable instructions that are stored on one or more non-transitory computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Multi Processors (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

A method for storing data is disclosed. The method includes receiving a request to write a first datum, defined using a first object ID and a first offset ID, to persistent storage. The method also includes determining a first physical address in the persistent storage, the first physical address including a first block ID and a first sub-block ID. The method further includes writing the first datum to the first physical address, generating a first table of contents entry (TE) including the first object ID, the first offset ID, and the first sub-block ID, and writing the first TE to a second physical address in the persistent storage, the second physical address including the first block ID and a second sub-block ID, where the second sub-block corresponding to the second sub-block ID is located within a first block corresponding to the first block ID.
PCT/US2013/033276 2012-03-23 2013-03-21 Système et procédés pour stocker des données au moyen d'entrées de table des matières WO2013142673A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201380015085.4A CN104246724B (zh) 2012-03-23 2013-03-21 用于用内容表格条目存储数据的系统和方法
JP2015501909A JP6211579B2 (ja) 2012-03-23 2013-03-21 テーブル・オブ・コンテンツエントリを使用してデータを格納するためのシステムおよび方法
EP13716554.4A EP2828757B1 (fr) 2012-03-23 2013-03-21 Système et procédés pour stocker des données au moyen d'entrées de table des matières

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/428,771 US8370567B1 (en) 2012-03-23 2012-03-23 Storage system with self describing data
US13/428,771 2012-03-23

Publications (1)

Publication Number Publication Date
WO2013142673A1 true WO2013142673A1 (fr) 2013-09-26

Family

ID=47604714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/033276 WO2013142673A1 (fr) 2012-03-23 2013-03-21 Système et procédés pour stocker des données au moyen d'entrées de table des matières

Country Status (5)

Country Link
US (1) US8370567B1 (fr)
EP (1) EP2828757B1 (fr)
JP (2) JP6211579B2 (fr)
CN (1) CN104246724B (fr)
WO (1) WO2013142673A1 (fr)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8601206B1 (en) * 2013-03-14 2013-12-03 DSSD, Inc. Method and system for object-based transactions in a storage system
US9348537B2 (en) * 2013-09-10 2016-05-24 Qualcomm Incorporated Ascertaining command completion in flash memories
AU2014348774A1 (en) * 2013-11-12 2016-06-30 Western Digital Technologies, Inc. Apparatus and method for routing information in a non-volatile memory-based storage device
US9430156B1 (en) * 2014-06-12 2016-08-30 Emc Corporation Method to increase random I/O performance with low memory overheads
US10180889B2 (en) * 2014-06-23 2019-01-15 Liqid Inc. Network failover handling in modular switched fabric based data storage systems
US9378149B1 (en) * 2014-08-29 2016-06-28 Emc Corporation Method and system for tracking modification times of data in a storage system
KR20160075165A (ko) * 2014-12-19 2016-06-29 에스케이하이닉스 주식회사 메모리 시스템 및 메모리 시스템의 동작 방법
US10409514B2 (en) * 2015-11-30 2019-09-10 International Business Machines Corporation IP multicast message transmission for event notifications
US9858003B2 (en) 2016-05-02 2018-01-02 Toshiba Memory Corporation Storage system that reliably stores lower page data
US9996471B2 (en) * 2016-06-28 2018-06-12 Arm Limited Cache with compressed data and tag
KR102653389B1 (ko) * 2016-06-30 2024-04-03 에스케이하이닉스 주식회사 메모리 시스템 및 메모리 시스템의 동작 방법
US10277677B2 (en) * 2016-09-12 2019-04-30 Intel Corporation Mechanism for disaggregated storage class memory over fabric
KR20180031851A (ko) * 2016-09-19 2018-03-29 에스케이하이닉스 주식회사 메모리 시스템 및 메모리 시스템의 동작 방법

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090150641A1 (en) * 2007-12-06 2009-06-11 David Flynn Apparatus, system, and method for efficient mapping of virtual and physical addresses
US20090198902A1 (en) * 2008-02-04 2009-08-06 Apple Inc. Memory mapping techniques
US20100030999 A1 * 2008-08-01 2010-02-04 Torsten Hinz Process and Method for Logical-to-Physical Address Mapping in Solid State Disks

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04323748A (ja) * 1991-04-24 1992-11-12 Fujitsu Ltd アドレス変換方法および装置
US5600821A (en) * 1993-07-28 1997-02-04 National Semiconductor Corporation Distributed directory for information stored on audio quality memory devices
JPH1185589A (ja) * 1997-09-12 1999-03-30 Toshiba Corp 情報記憶装置および同装置に適用される管理データ再構築方法
US7610438B2 (en) * 2000-01-06 2009-10-27 Super Talent Electronics, Inc. Flash-memory card for caching a hard disk drive with data-area toggling of pointers stored in a RAM lookup table
US7543100B2 (en) 2001-06-18 2009-06-02 3Par, Inc. Node controller for a data storage system
US7685126B2 (en) * 2001-08-03 2010-03-23 Isilon Systems, Inc. System and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system
US6850969 B2 * 2002-03-27 2005-02-01 International Business Machines Corporation Lock-free file system
US6996682B1 (en) * 2002-12-27 2006-02-07 Storage Technology Corporation System and method for cascading data updates through a virtual copy hierarchy
JP2005085011A (ja) * 2003-09-09 2005-03-31 Renesas Technology Corp 不揮発性メモリ制御装置
JP2008511886A (ja) * 2004-09-03 2008-04-17 ノキア コーポレイション メモリ媒体へのデータの記憶及びそこからの読み取り
US7366825B2 (en) * 2005-04-26 2008-04-29 Microsoft Corporation NAND flash memory management
JP4547028B2 (ja) * 2005-08-03 2010-09-22 サンディスク コーポレイション ブロック管理を伴う不揮発性メモリ
US7634627B1 (en) * 2005-08-19 2009-12-15 Symantec Operating Corporation System and method for performing extent level backups that support single file restores
US20070073989A1 (en) * 2005-08-31 2007-03-29 Interdigital Technology Corporation Method and apparatus for efficient data storage and management
JP2006114064A (ja) * 2005-12-28 2006-04-27 Hitachi Ltd 記憶サブシステム
US7702870B2 (en) * 2006-01-19 2010-04-20 Network Appliance Inc. Method and apparatus for defragmentation and for detection of relocated blocks
US10303783B2 (en) * 2006-02-16 2019-05-28 Callplex, Inc. Distributed virtual storage of portable media files
US7650458B2 (en) * 2006-06-23 2010-01-19 Microsoft Corporation Flash memory driver
US7694091B2 (en) * 2006-10-23 2010-04-06 Hewlett-Packard Development Company, L.P. Non-volatile storage for backing up volatile storage
KR100816761B1 (ko) * 2006-12-04 2008-03-25 삼성전자주식회사 낸드 플래시 메모리 및 에스램/노어 플래시 메모리를포함하는 메모리 카드 및 그것의 데이터 저장 방법
US8074011B2 (en) * 2006-12-06 2011-12-06 Fusion-Io, Inc. Apparatus, system, and method for storage space recovery after reaching a read count limit
JP4897524B2 (ja) * 2007-03-15 2012-03-14 株式会社日立製作所 ストレージシステム及びストレージシステムのライト性能低下防止方法
US7870327B1 (en) * 2007-04-25 2011-01-11 Apple Inc. Controlling memory operations using a driver and flash memory type tables
US7913032B1 (en) * 2007-04-25 2011-03-22 Apple Inc. Initiating memory wear leveling
US7739312B2 (en) * 2007-04-27 2010-06-15 Network Appliance, Inc. Data containerization for reducing unused space in a file system
US20090019245A1 (en) * 2007-07-10 2009-01-15 Prostor Systems, Inc. Methods for implementation of data formats on a removable disk drive storage system
US7836018B2 (en) * 2007-10-24 2010-11-16 Emc Corporation Simultaneously accessing file objects through web services and file services
US8762620B2 (en) * 2007-12-27 2014-06-24 Sandisk Enterprise Ip Llc Multiprocessor storage controller
JP4498426B2 (ja) * 2008-03-01 2010-07-07 株式会社東芝 メモリシステム
US7917803B2 (en) * 2008-06-17 2011-03-29 Seagate Technology Llc Data conflict resolution for solid-state memory devices
US8086799B2 (en) * 2008-08-12 2011-12-27 Netapp, Inc. Scalable deduplication of stored data
US8732388B2 (en) * 2008-09-16 2014-05-20 Micron Technology, Inc. Embedded mapping information for memory devices
US8219741B2 (en) * 2008-10-24 2012-07-10 Microsoft Corporation Hardware and operating system support for persistent memory on a memory bus
US7987162B2 (en) * 2009-03-06 2011-07-26 Bluearc Uk Limited Data compression in a file storage system
WO2010114006A1 (fr) * 2009-03-31 2010-10-07 日本電気株式会社 Système de mémoire et procédé et programme d'accès à une mémoire
US8321645B2 (en) * 2009-04-29 2012-11-27 Netapp, Inc. Mechanisms for moving data in a hybrid aggregate
US8468293B2 (en) * 2009-07-24 2013-06-18 Apple Inc. Restore index page
JP5066209B2 (ja) * 2010-03-18 2012-11-07 株式会社東芝 コントローラ、データ記憶装置、及びプログラム
US8812816B2 (en) * 2010-03-23 2014-08-19 Apple Inc. Garbage collection schemes for index block

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959119A (zh) * 2014-08-29 2018-12-07 Emc知识产权控股有限公司 存储系统中垃圾收集的方法和系统
CN108959119B (zh) * 2014-08-29 2023-06-23 Emc知识产权控股有限公司 存储系统中垃圾收集的方法和系统
JP2016219013A (ja) * 2015-05-19 2016-12-22 イーエムシー コーポレイションEmc Corporation データを格納するための方法およびデータを格納するための一時的でないコンピュータ読取可能媒体
JP2017004506A (ja) * 2015-05-19 2017-01-05 イーエムシー コーポレイションEmc Corporation 永続ストレージにデータを格納するための方法およびストレージ機器
US9911487B2 (en) 2015-05-19 2018-03-06 EMC IP Holding Company LLC Method and system for storing and recovering data from flash memory
US10019168B2 (en) 2015-05-19 2018-07-10 EMC IP Holding Company LLC Method and system for multicasting data to persistent memory
US10229734B1 (en) 2015-05-19 2019-03-12 EMC IP Holding Company LLC Method and system for storing and recovering data from flash memory

Also Published As

Publication number Publication date
CN104246724B (zh) 2017-12-01
EP2828757A1 (fr) 2015-01-28
US8370567B1 (en) 2013-02-05
JP6211579B2 (ja) 2017-10-11
JP2017016691A (ja) 2017-01-19
CN104246724A (zh) 2014-12-24
JP6385995B2 (ja) 2018-09-05
JP2015515678A (ja) 2015-05-28
EP2828757B1 (fr) 2018-05-09

Similar Documents

Publication Publication Date Title
US10229734B1 (en) Method and system for storing and recovering data from flash memory
EP2828757B1 (fr) Système et procédés pour stocker des données au moyen d'entrées de table des matières
EP2845098B1 (fr) Système de stockage à dma à diffusion groupée et espace d'adressage unifié
US8301832B1 (en) Storage system with guaranteed read latency
US9921756B2 (en) Method and system for synchronizing an index of data blocks stored in a storage system using a shared storage module
US10445018B2 (en) Switch and memory device
US8341342B1 (en) Storage system with incremental multi-dimensional RAID
US8392428B1 (en) Method and system for hash fragment representation
CN106933504B (zh) 用于提供存储系统的访问的方法和系统
US10289550B1 (en) Method and system for dynamic write-back cache sizing in solid state memory storage
US20220187992A1 (en) Systems and methods for data copy offload for storage devices
EP4303711A1 (fr) Systèmes, procédés et appareil de placement de données dans un dispositif de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13716554

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2013716554

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2015501909

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE