US20080201335A1 - Method and Apparatus for Storing Data in a Peer to Peer Network - Google Patents


Info

Publication number
US20080201335A1
US20080201335A1 (US 2008/0201335 A1); application US 12/023,133 (US 2313308 A)
Authority
US
United States
Prior art keywords
data
physical nodes
fragments
peer
slots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/023,133
Other languages
English (en)
Inventor
Cezary Dubnicki
Leszek Gryz
Krzysztof Lichota
Cristian Ungureanu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Laboratories America Inc
Original Assignee
NEC Laboratories America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Laboratories America Inc filed Critical NEC Laboratories America Inc
Priority to US12/023,133 priority Critical patent/US20080201335A1/en
Priority to PCT/US2008/053564 priority patent/WO2008103568A1/en
Priority to PCT/US2008/053568 priority patent/WO2008103569A1/en
Priority to TW097105753A priority patent/TWI433504B/zh
Assigned to NEC LABORATORIES AMERICA, INC. reassignment NEC LABORATORIES AMERICA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: UNGUREANU, CRISTIAN, DUBNICKI, CEZARY, GRYZ, LESZEK K, LICHOTA, KRZYSZTOF
Priority to US12/038,296 priority patent/US8090792B2/en
Priority to TW097108198A priority patent/TWI437487B/zh
Publication of US20080201335A1 publication Critical patent/US20080201335A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers

Definitions

  • the present invention relates generally to peer to peer networking and more particularly to storing data in peer to peer networks.
  • Peer to peer networks for storing data may be overlay networks that allow data to be distributively stored in the network (e.g., at nodes).
  • In peer to peer networks, there are links between any two peers (e.g., nodes) that communicate with each other. That is, nodes in the peer to peer network may be considered as being connected by virtual or logical links, each of which corresponds to a path in the underlying network (e.g., a path of physical links).
  • Such a structured peer to peer network employs a globally consistent protocol to ensure that any node can efficiently route a search to some peer that has desired data (e.g., a file, piece of data, packet, etc.).
  • a common type of structured peer to peer network uses a distributed hash table (DHT) in which a variant of consistent hashing is used to assign ownership of each file or piece of data to a particular peer in a way analogous to a traditional hash table's assignment of each key to a particular array slot.
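The hash-table analogy above can be sketched in Python. This is an illustrative sketch only; `hash_key`, `owner_of`, and the node names are hypothetical, not from the patent:

```python
import hashlib

def hash_key(data: bytes, bits: int = 5) -> str:
    """Map data to a fixed-size hash key (a bit string of length `bits`)."""
    digest = hashlib.sha256(data).digest()
    value = int.from_bytes(digest, "big")
    return format(value, "0256b")[:bits]

def owner_of(key: str, peers_by_prefix: dict) -> str:
    """Route a key to the peer owning the longest matching prefix,
    analogous to a hash table assigning a key to an array slot."""
    for prefix in sorted(peers_by_prefix, key=len, reverse=True):
        if key.startswith(prefix):
            return peers_by_prefix[prefix]
    raise KeyError(f"no peer covers key {key}")

# Hypothetical peers, each owning a subspace of the 5-bit key space.
peers = {"0": "node-A", "10": "node-B", "11": "node-C"}
key = hash_key(b"some file contents")
print(key, "->", owner_of(key, peers))
```

Any node holding the prefix table can route a lookup to the responsible peer without global coordination.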
  • the present invention generally provides a method of storing data in a fixed prefix peer to peer network having a plurality of physical nodes.
  • a plurality of data fragments are generated by erasure coding a block of data and each of the data fragments are then stored in different physical nodes.
  • the erasure coding divides the block of data into a number of original fragments, and a number of redundant fragments is created, where the number of redundant fragments is equal to a predetermined network cardinality minus the number of original fragments.
  • the physical nodes in the peer to peer network are logically divided into storage slots and the data fragments are stored in different slots on different physical nodes.
  • the storage locations of the fragments (e.g., the slots) are logically organized into a virtual node.
  • a network cardinality is determined, the block of data is divided into a number of original fragments, and a number of redundant fragments are created wherein the number of redundant fragments is equal to the network cardinality minus the number of original data fragments.
  • the storage locations of the plurality of data fragments are mapped in a data structure in which the storage locations are the physical nodes in which the plurality of data fragments are stored.
  • the data structure is a distributed hash table.
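A minimal stand-in for such a data structure, assuming an in-memory dict in place of a real distributed hash table, with hypothetical node and slot names:

```python
# Hypothetical mapping: block hash key -> list of (physical node, slot)
# pairs recording where each fragment of the block is stored.
locations: dict = {}

def record_fragment_locations(block_key: str, placements: list) -> None:
    """Map a block's fragments to the physical nodes/slots holding them."""
    locations[block_key] = list(placements)

def lookup(block_key: str) -> list:
    """Return every storage location needed to reassemble the block."""
    return locations[block_key]

record_fragment_locations("01101", [("node-1", "slot-b"), ("node-2", "slot-c"),
                                    ("node-3", "slot-b"), ("node-4", "slot-d")])
```

In the patent's setting this mapping would itself be distributed across the peers; the dict only illustrates the key-to-locations relationship.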
  • FIG. 1 is a diagram of an exemplary peer to peer network according to an embodiment of the invention
  • FIG. 2 is a diagram of an exemplary peer to peer network according to an embodiment of the invention.
  • FIG. 3 is a diagram of an exemplary peer to peer network according to an embodiment of the invention.
  • FIG. 4 is an exemplary supernode composition and component description table 400 according to an embodiment of the present invention.
  • FIG. 5 is a depiction of data to be stored in a peer to peer network
  • FIG. 6 is a flowchart of a method of storing data in a fixed prefix peer to peer network according to an embodiment of the present invention.
  • FIG. 7 is a schematic drawing of a controller according to an embodiment of the invention.
  • the present invention extends the concept of Distributed Hash Tables (DHTs) to create a more robust peer to peer network.
  • the improved methods of storing data described herein allow for a simple DHT organization with built-in support for multiple classes of data redundancy, which has a smaller storage overhead than previous DHTs.
  • Embodiments of the invention also support automatic monitoring of data resilience and automatic reconstruction of lost and/or damaged data.
  • the present invention provides greater robustness and resiliency to the DHT-based peer to peer network known as a Fixed Prefix Network (FPN) disclosed in U.S. patent application Ser. No. 10/813,504, filed Mar. 30, 2004 and incorporated herein by reference.
  • FPNs and networks according to the present invention are constructed such that the contributed resources (e.g., nodes) are dedicated to the peer to peer system and the systems are accordingly significantly more stable and scalable.
  • FIGS. 1-3 depict various illustrative embodiments of peer to peer networks utilizing FPN/SNs.
  • FIGS. 1-3 are exemplary diagrams to illustrate the various structures and relationships described below and are not meant to limit the invention to the specific network layouts shown.
  • FIG. 1 is a diagram of an exemplary peer to peer network 100 for use with an embodiment of the present invention.
  • the peer to peer network 100 has a plurality of physical nodes 102 , 104 , 106 , and 108 that communicate with each other through an underlying transport network 110 as is known.
  • Though depicted in FIG. 1 as four physical nodes 102 - 108 , it is understood that any number of nodes in any arrangement may be utilized.
  • the physical nodes 102 - 108 may vary in actual storage space, processing power, and/or other resources.
  • Physical nodes 102 - 108 each have associated memories and/or storage areas (not shown) as is known.
  • the memories and/or storage areas of physical nodes 102 - 108 are each logically divided into a plurality of slots approximately proportional to the amount of storage available to each physical node.
  • the memory and/or storage area of physical node 102 is logically divided into approximately equivalent-sized slots 112 a , 112 b , 112 c , and 112 d
  • the memory and/or storage area of physical node 104 is logically divided into approximately equivalent-sized slots 114 a , 114 b , 114 c , and 114 d
  • the memory and/or storage area of physical node 106 is logically divided into approximately equivalent-sized slots 116 a , 116 b , 116 c , and 116 d
  • the memory and/or storage area of physical node 108 is logically divided into approximately equivalent-sized (e.g., in terms of storage capacity) slots 118 a , 118 b , 118 c , and 118 d .
  • a physical node may be logically divided in that its memory and/or storage allocation may be allocated as different storage areas (e.g., slots).
  • Physical nodes 102 - 108 may be divided into any appropriate number of slots, the slots being representative of an amount of storage space in the node. In other words, data may be stored in the nodes 102 - 108 in a sectorized or otherwise compartmentalized manner.
  • any appropriate division of the storage and/or memory of physical nodes 102 - 108 may be used and slots 112 a - d , 114 a - d , 116 a - d , and 118 a - d may be of unequal size.
  • slot size may not be static and may grow or shrink and slots may be split and/or may be merged with other slots.
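The proportional slot division described above might be sketched as follows (the 250 GB target slot size is an assumed example value, not from the patent):

```python
def divide_into_slots(capacity_gb: float, target_slot_gb: float = 250.0) -> list:
    """Logically divide a node's storage into approximately equal-sized slots,
    with the slot count roughly proportional to the node's capacity."""
    count = max(1, round(capacity_gb / target_slot_gb))
    return [capacity_gb / count] * count

# A 1000 GB node yields four ~250 GB slots; a 500 GB node yields two.
print(divide_into_slots(1000))
```

Since slot sizes may grow, shrink, split, or merge over time, a real implementation would treat this only as the initial layout.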
  • Each physical node 102 - 108 is responsible for the storage and retrieval of one or more objects (e.g., files, data, pieces of data, data fragments, etc.) in the slots 112 a - d , 114 a - d , 116 a - d , and 118 a - d , respectively.
  • Each object may be associated with a preferably fixed-size hash key of a hash function.
  • one or more clients 120 may communicate with one or more of physical nodes 102 - 108 and issue a request for a particular object using a hash key.
  • Slots 112 a - d , 114 a - d , 116 a - d , and 118 a - d may also each be associated with a component of a virtual (e.g., logical) node (discussed in further detail below with respect to FIGS. 2 and 3 ).
  • components are not physical entities, but representations of a portion of a virtual node. That is, components may be logical representations of and/or directions to or addresses for a set or subset of data that is hosted in a particular location in a node (e.g., hosted in a slot). Storage locations of data fragments (e.g., data fragments discussed below with respect to FIG. 5 ) are logically organized into a virtual node.
  • FIG. 2 is a diagram of a portion of an exemplary peer to peer network 200 for use with an embodiment of the present invention.
  • the peer to peer network 200 is similar to peer to peer network 100 and has a plurality of physical nodes 202 , 204 , 206 , 208 , 210 , and 212 similar to physical nodes 102 - 108 .
  • Physical nodes 202 - 212 are each logically divided into a plurality of slots approximately proportional to the amount of storage available to each physical node.
  • physical node 202 is divided logically into slots 214 a , 214 b , 214 c , and 214 d
  • physical node 204 is divided logically into slots 216 a , 216 b , 216 c , and 216 d
  • physical node 206 is divided logically into slots 218 a , 218 b , 218 c , and 218 d
  • physical node 208 is divided logically into slots 220 a , 220 b , 220 c , and 220 d
  • physical node 210 is divided logically into slots 222 a , 222 b , 222 c , and 222 d
  • physical node 212 is divided logically into slots 224 a , 224 b , 224 c , and 224 d .
  • each slot 214 a - d , 216 a - d , 218 a - d , 220 a - d , 222 a - d , and 224 a - d hosts a component
  • the component corresponding to its host slot is referred to herein with the same reference numeral.
  • the component hosted in slot 214 c of physical node 202 is referred to as component 214 c.
  • a grouping of multiple components is referred to as a virtual node (e.g., a “supernode”).
  • supernode 226 comprises components 214 b , 216 c , 218 b , 220 d , 222 a , and 224 a .
  • a virtual node (e.g., supernode) is thus a logical grouping of a plurality of storage locations on multiple physical nodes.
  • the supernode may have any number of components (the number of components being the supernode cardinality) associated with any number of physical nodes in a network, and a supernode need not have components from every physical node. However, each component of a supernode must be hosted in a slot on a different physical node. That is, no two components in a supernode should be hosted at the same physical node.
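The invariant that no two components of a supernode share a physical node can be expressed as a small check (the function name and host names are hypothetical):

```python
def is_valid_composition(host_nodes: list, cardinality: int) -> bool:
    """Check the supernode invariant: exactly `cardinality` components,
    with no two components hosted on the same physical node."""
    return (len(host_nodes) == cardinality
            and len(set(host_nodes)) == len(host_nodes))

# The hosts of supernode 226's six components, one per physical node.
print(is_valid_composition(["n202", "n204", "n206", "n208", "n210", "n212"], 6))
```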
  • the total number of components in a supernode may be given by a predetermined constant, the supernode cardinality. In some embodiments, the supernode cardinality may be in the range of 4 to 32.
  • the supernode cardinality may be a predetermined (e.g., desired, designed, etc.) number of data fragments.
  • a larger supernode cardinality is chosen to increase flexibility in choosing data classes.
  • a smaller supernode cardinality is chosen to provide greater access to storage locations (e.g., disks) in read/write operations.
  • data classes define a level of redundancy where lower data classes (e.g., data class low) have less redundancy and higher data classes (e.g., data class high) have more redundancy.
  • data class low may refer to a single redundant fragment and data class high may refer to four redundant fragments.
  • data blocks that are classified by a user as data class low will be divided into a number of fragments equal to the supernode cardinality, where there are (supernode cardinality − 1) original fragments and one redundant fragment. Accordingly, one fragment may be lost and the data block may still be recreated.
  • for data class high (e.g., four redundant fragments), a block of data will be divided into fragments such that four of them are redundant. Thus, four fragments may be lost and the original block of data may still be recreated. Fragmentation, especially redundant fragments, is discussed in further detail below with respect to FIG. 5 .
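The fragment arithmetic for the example data classes can be sketched as follows (the class names and redundant-fragment counts follow the examples in the text; the function name is hypothetical):

```python
# Redundant fragments per data class, per the examples in the text:
# one for "low", four for "high".
REDUNDANT_FRAGMENTS = {"low": 1, "high": 4}

def fragment_counts(data_class: str, supernode_cardinality: int) -> tuple:
    """Split a supernode's cardinality into (original, redundant) fragments."""
    redundant = REDUNDANT_FRAGMENTS[data_class]
    return supernode_cardinality - redundant, redundant

# With cardinality 6: class low -> 5 original + 1 redundant,
# class high -> 2 original + 4 redundant.
print(fragment_counts("low", 6), fragment_counts("high", 6))
```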
  • Components of the supernode may be considered peers and may similarly be associated (e.g., in a hash table, etc.), addressed, and/or contacted as peer nodes in a traditional peer to peer network.
  • FIG. 3 depicts a high level abstraction of an exemplary peer to peer network 300 according to an embodiment of the invention.
  • Peer to peer network 300 is similar to peer to peer networks 100 and 200 and has multiple physical nodes 302 , 304 , 306 , and 308 .
  • Each of the physical nodes 302 - 308 is divided into multiple slots as described above.
  • each of the physical nodes 302 - 308 has eight slots.
  • each slot 310 , 312 , 314 , 316 , 318 , 320 , 322 , or 324 hosts a component 310 , 312 , 314 , 316 , 318 , 320 , 322 , or 324 .
  • Components 310 - 324 are each associated with a corresponding supernode and are distributed among the physical nodes 302 - 308 . In this way, eight supernodes are formed, each with one component 310 - 324 on each of the four physical nodes 302 - 308 .
  • a first supernode is formed with four components—component 310 hosted on physical node 302 (e.g., in a slot 310 ), component 310 hosted in physical node 304 (e.g., in a slot 310 ), component 310 hosted in physical node 306 (e.g., in a slot 310 ), and component 310 hosted in physical node 308 (e.g., in a slot 310 ).
  • the first supernode comprising components 310 , is shown as dashed boxes.
  • a second supernode comprises the four components 312 hosted in physical nodes 302 - 308 and is shown as a trapezoid.
  • these are merely graphical representations to highlight the different components comprising different supernodes and are not meant to be literal representations of what a slot, component, node, or supernode might look like.
  • the remaining six supernodes are formed similarly.
  • the fixed prefix network model of DHTs may be extended to use supernodes.
  • Any advantageous hashing function that maps data (e.g., objects, files, etc.) to a fixed-size hash key may be utilized in the context of the present invention.
  • the hash keys may be understood to be fixed-size bit strings (e.g., 5 bits, 6 bits, etc.) in the space containing all possible combinations of such strings.
  • a subspace of the hash key space is associated with a group of leading bits (e.g., a prefix) of the larger bit string, as is known.
  • a group of hash keys beginning with 110 in a 5 bit string would include only the hash keys 11000, 11001, 11010, and 11011, excluding all keys beginning with 000, 001, 010, 011, 100, 101, and 111. That is, the prefix is 110.
  • Such a subspace of the hash key space may correspond to a supernode, and a further specification may identify a component of the supernode.
  • the prefix may be fixed for the life of a supernode and/or component.
  • the peer to peer network is referred to as a fixed-prefix peer to peer network. Other methods of hashing may be used as appropriate.
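The prefix arithmetic from the example above (prefix 110 in a 5-bit key space) can be checked with a short sketch (the function names are hypothetical):

```python
from itertools import product

def prefix_subspace(prefix: str, bits: int = 5) -> list:
    """Enumerate every hash key in a `bits`-bit key space sharing a prefix."""
    tail = bits - len(prefix)
    return [prefix + "".join(rest) for rest in product("01", repeat=tail)]

def belongs_to(hash_key: str, fixed_prefix: str) -> bool:
    """A hash key is handled by the supernode whose fixed prefix it begins with."""
    return hash_key.startswith(fixed_prefix)

print(prefix_subspace("110"))  # the four 5-bit keys with prefix 110
```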
  • FIG. 4 is an exemplary supernode composition and component description table 400 according to an embodiment of the present invention.
  • the supernode composition and component description table 400 may be used in conjunction with the peer to peer network 200 , for example.
  • Each supernode (e.g., supernode 226 ) is described by a supernode composition (e.g., in supernode composition and component description table 400 ) comprising a fixed prefix 402 , an array 404 of components, and a supernode version 406 . The size of the array 404 is equal to the supernode cardinality.
  • the supernode version 406 is a sequence number corresponding to the current incarnation of the supernode.
  • Each supernode is identified by a fixed prefix 402 as described above and in U.S. patent application Ser. No. 10/813,504.
  • the supernode 226 has a fixed prefix of 01101. Therefore, any data that has a hash key beginning with 01101 will be associated with supernode 226 .
  • each component (e.g., 214 b , 216 c , 218 b , 220 d , 222 a , 224 a , etc.) has a component description comprising a fixed prefix 408 , a component index 410 , and a component version 412 .
  • All components of the supernode (e.g., in array 404 ) share the supernode's fixed prefix, and the component index 410 of each component corresponds to a location in the supernode array.
  • a component's index is fixed for the component's lifetime and is an identification number pointing to the particular component.
  • a component index is a number between 0 and (supernode cardinality − 1).
  • a component's version is a version number sequentially increased whenever the component changes hosts (e.g., nodes). For example, a component may be split or moved from one physical node to another, and its version is increased in such instances.
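The component and supernode descriptions of table 400 might be modeled as plain records (an assumed Python rendering; the field and method names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ComponentDescription:
    fixed_prefix: str  # shared with the supernode, fixed for life
    index: int         # 0 .. cardinality - 1, fixed for the component's lifetime
    version: int = 0   # sequentially increased whenever the component moves

    def change_host(self) -> None:
        """Record a move/split to a new physical node by bumping the version."""
        self.version += 1

@dataclass
class SupernodeComposition:
    fixed_prefix: str        # e.g. "01101" for supernode 226
    version: int             # current incarnation of the supernode
    components: list = field(default_factory=list)  # size == cardinality
```

Keeping versions per component lets peers detect stale composition information when a component has moved between hosts.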
  • Supernode composition and component description table 400 is an example of an organization of the information related to physical nodes, supernodes, and their respective components.
  • one skilled in the art would recognize other methods of organizing and providing such information, such as storing the information locally on physical nodes in a database, storing the information at a remote location in a communal database, etc.
  • Updated indications of the supernode composition are maintained (e.g., in supernode composition and component description table 400 , etc.) to facilitate communication amongst peers.
  • physical nodes associated with the components maintain compositions of neighboring physical and/or virtual nodes.
  • physical nodes associated with components ping peers and neighbors as is known.
  • a physical node associated with a component may internally ping physical nodes associated with peers in the component's supernode to determine virtual node health and/or current composition.
  • a physical node associated with a component may externally ping physical nodes associated with neighbors (e.g., components with the same index, but belonging to a different supernode) to propagate and/or collect composition information.
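The internal/external ping target selection can be sketched by representing each component as a (supernode prefix, index) pair, an assumed simplification of the component descriptions above:

```python
def internal_ping_targets(component, supernode_members):
    """Peers: the other components of the same supernode."""
    return [c for c in supernode_members if c != component]

def external_ping_targets(component, all_components):
    """Neighbors: components with the same index in a different supernode."""
    prefix, index = component
    return [c for c in all_components if c[1] == index and c[0] != prefix]
```

Internal pings monitor supernode health; external pings propagate and collect composition information across supernodes.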
  • FIG. 5 is a generalized drawing of data that may be stored in peer to peer networks 100 , 200 , and/or 300 .
  • a block 502 of data may be divided into multiple pieces 504 of data according to any conventional manner.
  • the block of data 502 may be fragmented into multiple original pieces (e.g., fragments) 506 and a number of redundant fragments 508 may also be generated.
  • Such fragmentation and/or fragment generation may be accomplished by erasure coding, replication, and/or other fragmentation means.
  • FIG. 6 depicts a flowchart of a method 600 of organizing data in a fixed prefix peer to peer network according to an embodiment of the present invention with particular reference to FIGS. 2 and 5 above. Though discussed with reference to the peer to peer network 200 of FIG. 2 , the method steps described herein also may be used in peer to peer networks 100 and 300 , as appropriate. The method begins at step 602 .
  • a network cardinality is determined.
  • Network cardinality may be a predetermined constant for an entire system and may be determined in any appropriate fashion.
  • a plurality of data fragments 506 - 508 are generated.
  • the data fragments 506 - 508 are generated from a block of data 502 by utilizing an erasure code.
  • Using the erasure code transforms a block 502 of n (here, four) original pieces of data 504 into more than n fragments of data 506 - 508 (here, four original fragments and two redundant fragments) such that the original block 502 of n pieces (e.g., fragments) of data 504 can be recovered from a subset of those fragments (e.g., fragments 506 - 508 ).
  • the fraction of the fragments 506 - 508 required to recover the original n pieces of data 504 is called the rate r.
  • optimal erasure codes may be used.
  • An optimal erasure code produces n/r fragments of data where any n fragments may be used to recover the original n pieces of data.
  • near optimal erasure codes may be used to conserve system resources.
  • the erasure coding and creation of redundant fragments 508 allows recreation of the original block of data 502 from any half plus one of the fragments (original fragments 506 and/or redundant fragments 508 ).
  • only four total fragments from the group of fragments 506 - 508 are needed to reconstruct original block of data 502 .
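As a much-simplified stand-in for the erasure coding described above, a single XOR parity fragment illustrates the data-class-low case (one redundant fragment; any one lost fragment is recoverable). Schemes tolerating more losses, such as Reed-Solomon codes, generalize this idea; the function names here are hypothetical:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block: bytes, n: int) -> list:
    """Split a block into n original fragments plus one XOR parity fragment."""
    size = -(-len(block) // n)  # ceiling division
    originals = [block[i * size:(i + 1) * size].ljust(size, b"\0")
                 for i in range(n)]
    parity = originals[0]
    for frag in originals[1:]:
        parity = xor_bytes(parity, frag)
    return originals + [parity]

def rebuild(fragments: list, lost: int) -> bytes:
    """Recreate any single missing fragment as the XOR of the survivors."""
    survivors = [f for i, f in enumerate(fragments) if i != lost]
    out = survivors[0]
    for f in survivors[1:]:
        out = xor_bytes(out, f)
    return out
```

Because the XOR of all n + 1 fragments is zero, any one fragment equals the XOR of the rest, so losing any single fragment (original or parity) is survivable.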
  • any other erasure coding scheme may be used.
  • the data fragments 506 - 508 are stored in different physical nodes 202 - 212 .
  • Each of the data fragments 506 , representing the original pieces of the data block 502 , and the redundant fragments 508 are stored in separate physical nodes 202 - 212 using any appropriate methods of storing data in a peer to peer network.
  • data fragments 506 - 508 are stored in separate slots 214 a - d , 216 a - d , 218 a - d , 220 a - d , 222 a - d , 224 a - d of the physical nodes 202 - 212 .
  • one fragment from fragments 506 and 508 may be stored in each of slots 214 b , 216 c , 218 b , 220 d , 222 a , and 224 a.
  • a hash may be computed based on the original block of data 502 .
  • a virtual node (e.g., virtual node 226 , which comprises components 214 b , 216 c , 218 b , 220 d , 222 a , and 224 a ) may then be determined based on the hash.
  • the data fragments 506 - 508 are then stored in the slots 214 b , 216 c , 218 b , 220 d , 222 a , and 224 a corresponding to components 214 b , 216 c , 218 b , 220 d , 222 a , and 224 a.
  • the storage locations of the data fragments 506 - 508 are recorded (e.g., mapped, etc.) in a data structure.
  • the data structure may be a hash table, a DHT, a DHT according to the FPN referenced above, the data structures described in co-pending and concurrently filed U.S. patent application Ser. No. ______, entitled “Methods for Operating a Fixed Prefix Peer to Peer Network”, Attorney Docket No. 06083 A, incorporated by reference herein, or any other appropriate data structure.
  • the data structure may facilitate organization, routing, look-ups, and other functions of peer to peer networks 100 , 200 , and 300 .
  • Fragments 506 - 508 may be numbered (e.g., from 0 to a supernode cardinality minus one) and fragments of the same number may be stored (e.g., grouped, arranged, etc.) in a logical entity (e.g., a virtual node component).
  • the data structure facilitates organization of information about the data fragments 506 - 508 into virtual nodes (e.g., supernode 226 , supernodes 310 - 324 , etc.). That is, the storage locations (e.g., the slots in the physical nodes) storing each of the original fragments 506 and each of the redundant fragments 508 are organized into and/or recorded as a grouping (e.g., a virtual node/supernode as described above). Accordingly, the fragments 506 - 508 may be organized into and hosted in supernode 226 as described above so that location, index, and version information about the fragments of data 506 - 508 may be organized as components of supernode 226 .
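The grouping of same-numbered fragments into same-indexed components can be sketched as follows (the function name is hypothetical; the slot labels follow FIG. 2 ):

```python
def place_fragments(fragments: list, component_slots: list) -> dict:
    """Store fragment i in the slot hosting component i of the supernode.

    component_slots: (physical_node, slot) pairs ordered by component index;
    its length is the supernode cardinality."""
    if len(fragments) != len(component_slots):
        raise ValueError("fragment count must equal the supernode cardinality")
    return dict(zip(component_slots, fragments))

slots = [("n202", "214b"), ("n204", "216c"), ("n206", "218b")]
placement = place_fragments([b"f0", b"f1", b"f2"], slots)
```

Because each component lives on a different physical node, this placement automatically spreads the fragments across nodes.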
  • the method ends at step 614 .
  • FIG. 7 is a schematic drawing of a controller 700 according to an embodiment of the invention.
  • Controller 700 contains a processor 702 that controls the overall operation of the controller 700 by executing computer program instructions that define such operation.
  • the computer program instructions may be stored in a storage device 704 (e.g., magnetic disk, database, etc.) and loaded into memory 706 when execution of the computer program instructions is desired.
  • applications for performing the herein-described method steps, such as erasure coding, storing data, and DHT organization, in method 600 are defined by the computer program instructions stored in the memory 706 and/or storage 704 and controlled by the processor 702 executing the computer program instructions.
  • the controller 700 may also include one or more network interfaces 708 for communicating with other devices via a network (e.g., a peer to peer network, etc.).
  • the controller 700 also includes input/output devices 710 (e.g., display, keyboard, mouse, speakers, buttons, etc.) that enable user interaction with the controller 700 .
  • Controller 700 and/or processor 702 may include one or more central processing units, read only memory (ROM) devices and/or random access memory (RAM) devices.
  • instructions of a program may be read into memory 706 , such as from a ROM device to a RAM device or from a LAN adapter to a RAM device. Execution of sequences of the instructions in the program may cause the controller 700 to perform one or more of the method steps described herein, such as those described above with respect to method 600 and/or erasure coding as described above with respect to FIG. 5 .
  • hard-wired circuitry or integrated circuits may be used in place of, or in combination with, software instructions for implementation of the processes of the present invention.
  • embodiments of the present invention are not limited to any specific combination of hardware, firmware, and/or software.
  • the memory 706 may store the software for the controller 700 , which may be adapted to execute the software program and thereby operate in accordance with the present invention and particularly in accordance with the methods described in detail above.
  • the invention as described herein could be implemented in many different ways using a wide range of programming techniques as well as general purpose hardware sub-systems or dedicated controllers.
  • Such programs may be stored in a compressed, uncompiled and/or encrypted format.
  • the programs furthermore may include program elements that may be generally useful, such as an operating system, a database management system, and device drivers for allowing the controller to interface with computer peripheral devices, and other equipment/components.
  • Appropriate general purpose program elements are known to those skilled in the art, and need not be described in detail herein.
  • each supernode includes the fragments derived from an original block of data (e.g., by erasure coding) and each of the fragments is thus stored on a separate physical node, the network is less susceptible to failure due to network changes. That is, changes to the peer physical nodes such as failures and node departures are less likely to affect the peer to peer network because of the distributed nature of the data.
  • inventive methods may be employed on a peer to peer network.
  • a controller (e.g., controller 700 ) may perform hashing functions to store and/or look up one or more pieces of data in the peer to peer network.
  • the controller may further be configured to recover the stored data should one or more of the physical nodes be lost (e.g., through failure, inability to communicate, etc.).
  • the physical nodes in the peer to peer network may be configured to perform one or more of the functions of the controller instead.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer And Data Communications (AREA)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US12/023,133 US20080201335A1 (en) 2007-02-20 2008-01-31 Method and Apparatus for Storing Data in a Peer to Peer Network
PCT/US2008/053564 WO2008103568A1 (en) 2007-02-20 2008-02-11 Method and apparatus for storing data in a peer to peer network
PCT/US2008/053568 WO2008103569A1 (en) 2007-02-20 2008-02-11 Methods for operating a fixed prefix peer to peer network
TW097105753A TWI433504B (zh) 2007-02-20 2008-02-19 Method and apparatus for storing data in a peer-to-peer network
US12/038,296 US8090792B2 (en) 2007-03-08 2008-02-27 Method and system for a self managing and scalable grid storage
TW097108198A TWI437487B (zh) 2007-03-08 2008-03-07 System and method for self-managing and scalable grid storage

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US89066107P 2007-02-20 2007-02-20
US12/023,133 US20080201335A1 (en) 2007-02-20 2008-01-31 Method and Apparatus for Storing Data in a Peer to Peer Network

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/038,296 Continuation-In-Part US8090792B2 (en) 2007-03-08 2008-02-27 Method and system for a self managing and scalable grid storage

Publications (1)

Publication Number Publication Date
US20080201335A1 true US20080201335A1 (en) 2008-08-21

Family

ID=39707530

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/023,133 Abandoned US20080201335A1 (en) 2007-02-20 2008-01-31 Method and Apparatus for Storing Data in a Peer to Peer Network
US12/023,141 Expired - Fee Related US8140625B2 (en) 2007-02-20 2008-01-31 Method for operating a fixed prefix peer to peer network

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/023,141 Expired - Fee Related US8140625B2 (en) 2007-02-20 2008-01-31 Method for operating a fixed prefix peer to peer network

Country Status (4)

Country Link
US (2) US20080201335A1 (zh)
AR (1) AR076255A1 (zh)
TW (2) TWI432968B (zh)
WO (2) WO2008103569A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100064166A1 (en) * 2008-09-11 2010-03-11 Nec Laboratories America, Inc. Scalable secondary storage systems and methods
US20100070698A1 (en) * 2008-09-11 2010-03-18 Nec Laboratories America, Inc. Content addressable storage systems and methods employing searchable blocks
US7716179B1 (en) 2009-10-29 2010-05-11 Wowd, Inc. DHT-based distributed file system for simultaneous use by millions of frequently disconnected, world-wide users
US20100174968A1 (en) * 2009-01-02 2010-07-08 Microsoft Corporation Heirarchical erasure coding
US20110099200A1 (en) * 2009-10-28 2011-04-28 Sun Microsystems, Inc. Data sharing and recovery within a network of untrusted storage devices using data object fingerprinting
US20140129881A1 (en) * 2010-12-27 2014-05-08 Amplidata Nv Object storage system for an unreliable storage medium
US20180165154A1 (en) * 2014-08-07 2018-06-14 Pure Storage, Inc. Die-level monitoring in a storage cluster
WO2019170004A1 (zh) * 2018-03-09 2019-09-12 Hangzhou Hikvision System Technology Co., Ltd. Data storage system, method, and apparatus

Families Citing this family (178)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5176835B2 (ja) * 2008-09-29 2013-04-03 Brother Industries, Ltd. Monitoring device, information processing device, information processing method, and program
US8051205B2 (en) * 2008-10-13 2011-11-01 Applied Micro Circuits Corporation Peer-to-peer distributed storage
US8478799B2 (en) 2009-06-26 2013-07-02 Simplivity Corporation Namespace file system accessing an object store
JP2012531674A (ja) 2009-06-26 2012-12-10 SimpliVity Corporation Scalable indexing in non-uniform access memory
AU2010318464B2 (en) * 2009-11-13 2015-07-09 Panasonic Intellectual Property Corporation Of America Encoding method, decoding method, coder and decoder
US12008266B2 (en) 2010-09-15 2024-06-11 Pure Storage, Inc. Efficient read by reconstruction
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US9436748B2 (en) 2011-06-23 2016-09-06 Simplivity Corporation Method and apparatus for distributed configuration management
US8589640B2 (en) 2011-10-14 2013-11-19 Pure Storage, Inc. Method for maintaining multiple fingerprint tables in a deduplicating storage system
WO2013078611A1 (zh) * 2011-11-29 2013-06-06 Huawei Technologies Co., Ltd. Data processing method and device in a distributed storage system, and client
US8886993B2 (en) * 2012-02-10 2014-11-11 Hitachi, Ltd. Storage device replacement method, and storage sub-system adopting storage device replacement method
US9032183B2 (en) 2012-02-24 2015-05-12 Simplivity Corp. Method and apparatus for content derived data placement in memory
US9043576B2 (en) 2013-08-21 2015-05-26 Simplivity Corporation System and method for virtual machine conversion
EP2863566B1 (en) 2013-10-18 2020-09-02 Université de Nantes Method and apparatus for reconstructing a data block
US8874835B1 (en) 2014-01-16 2014-10-28 Pure Storage, Inc. Data placement based on data properties in a tiered storage device system
US9612952B2 (en) 2014-06-04 2017-04-04 Pure Storage, Inc. Automatically reconfiguring a storage memory topology
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US8850108B1 (en) 2014-06-04 2014-09-30 Pure Storage, Inc. Storage cluster
US11960371B2 (en) 2014-06-04 2024-04-16 Pure Storage, Inc. Message persistence in a zoned system
US9367243B1 (en) 2014-06-04 2016-06-14 Pure Storage, Inc. Scalable non-uniform storage sizes
US9003144B1 (en) 2014-06-04 2015-04-07 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US11068363B1 (en) 2014-06-04 2021-07-20 Pure Storage, Inc. Proactively rebuilding data in a storage cluster
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US9836234B2 (en) 2014-06-04 2017-12-05 Pure Storage, Inc. Storage cluster
US9213485B1 (en) 2014-06-04 2015-12-15 Pure Storage, Inc. Storage system architecture
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US9021297B1 (en) 2014-07-02 2015-04-28 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US8868825B1 (en) 2014-07-02 2014-10-21 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10853311B1 (en) 2014-07-03 2020-12-01 Pure Storage, Inc. Administration through files in a storage system
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US9811677B2 (en) 2014-07-03 2017-11-07 Pure Storage, Inc. Secure data replication in a storage grid
US8874836B1 (en) 2014-07-03 2014-10-28 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US9558069B2 (en) 2014-08-07 2017-01-31 Pure Storage, Inc. Failure mapping in a storage array
US10983859B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Adjustable error correction based on memory health in a storage unit
US9483346B2 (en) 2014-08-07 2016-11-01 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US9495255B2 (en) * 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US9766972B2 (en) * 2014-08-07 2017-09-19 Pure Storage, Inc. Masking defective bits in a storage array
US10079711B1 (en) 2014-08-20 2018-09-18 Pure Storage, Inc. Virtual file server with preserved MAC address
US10021181B2 (en) * 2014-12-22 2018-07-10 Dropbox, Inc. System and method for discovering a LAN synchronization candidate for a synchronized content management system
JP2018506088A (ja) 2015-01-13 2018-03-01 Hewlett Packard Enterprise Development LP Systems and methods for optimized signature comparison and data replication
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10178169B2 (en) 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US10846275B2 (en) 2015-06-26 2020-11-24 Pure Storage, Inc. Key management in a storage device
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
US20170060700A1 (en) * 2015-08-28 2017-03-02 Qualcomm Incorporated Systems and methods for verification of code resiliency for data storage
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US11341136B2 (en) 2015-09-04 2022-05-24 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US11269884B2 (en) 2015-09-04 2022-03-08 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
TWI584617B (zh) 2015-11-18 2017-05-21 Walton Advanced Eng Inc Auxiliary data transmission
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US10762069B2 (en) 2015-09-30 2020-09-01 Pure Storage, Inc. Mechanism for a system where data and metadata are located closely together
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
US11231858B2 (en) 2016-05-19 2022-01-25 Pure Storage, Inc. Dynamically configuring a storage system to facilitate independent scaling of resources
US10691567B2 (en) 2016-06-03 2020-06-23 Pure Storage, Inc. Dynamically forming a failure domain in a storage system that includes a plurality of blades
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US11706895B2 (en) 2016-07-19 2023-07-18 Pure Storage, Inc. Independent scaling of compute resources and storage resources in a storage system
US9672905B1 (en) 2016-07-22 2017-06-06 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US11422719B2 (en) 2016-09-15 2022-08-23 Pure Storage, Inc. Distributed file deletion and truncation
US10303458B2 (en) 2016-09-29 2019-05-28 Hewlett Packard Enterprise Development Lp Multi-platform installer
US10756816B1 (en) 2016-10-04 2020-08-25 Pure Storage, Inc. Optimized fibre channel and non-volatile memory express access
US10545861B2 (en) 2016-10-04 2020-01-28 Pure Storage, Inc. Distributed integrated high-speed solid-state non-volatile random-access memory
US9747039B1 (en) 2016-10-04 2017-08-29 Pure Storage, Inc. Reservations over multiple paths on NVMe over fabrics
US10481798B2 (en) 2016-10-28 2019-11-19 Pure Storage, Inc. Efficient flash management for multiple controllers
EP3556063B1 (en) * 2016-12-16 2021-10-27 Telefonaktiebolaget LM Ericsson (publ) Method and request router for dynamically pooling resources in a content delivery network (cdn), for efficient delivery of live and on-demand content
US11550481B2 (en) 2016-12-19 2023-01-10 Pure Storage, Inc. Efficiently writing data in a zoned drive storage system
US11307998B2 (en) 2017-01-09 2022-04-19 Pure Storage, Inc. Storage efficiency of encrypted host system data
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US9747158B1 (en) 2017-01-13 2017-08-29 Pure Storage, Inc. Intelligent refresh of 3D NAND
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10887176B2 (en) 2017-03-30 2021-01-05 Hewlett Packard Enterprise Development Lp Predicting resource demand in computing environments
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10516645B1 (en) 2017-04-27 2019-12-24 Pure Storage, Inc. Address resolution broadcasting in a networked device
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US11010300B2 (en) 2017-05-04 2021-05-18 Hewlett Packard Enterprise Development Lp Optimized record lookups
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11138103B1 (en) 2017-06-11 2021-10-05 Pure Storage, Inc. Resiliency groups
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US20200153702A1 (en) * 2017-06-20 2020-05-14 Telefonaktiebolaget Lm Ericsson (Publ) Methods and network nodes enabling a content delivery network to handle unexpected surges of traffic
US10425473B1 (en) 2017-07-03 2019-09-24 Pure Storage, Inc. Stateful connection reset in a storage cluster with a stateless load balancer
US10402266B1 (en) 2017-07-31 2019-09-03 Pure Storage, Inc. Redundant array of independent disks in a direct-mapped flash storage system
US10831935B2 (en) 2017-08-31 2020-11-10 Pure Storage, Inc. Encryption management with host-side data reduction
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US12032848B2 (en) 2021-06-21 2024-07-09 Pure Storage, Inc. Intelligent block allocation in a heterogeneous storage system
US12067274B2 (en) 2018-09-06 2024-08-20 Pure Storage, Inc. Writing segments and erase blocks based on ordering
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US10719265B1 (en) 2017-12-08 2020-07-21 Pure Storage, Inc. Centralized, quorum-aware handling of device reservation requests in a storage system
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US11126755B2 (en) 2018-01-30 2021-09-21 Hewlett Packard Enterprise Development Lp Object signatures in object stores
US10860738B2 (en) 2018-01-30 2020-12-08 Hewlett Packard Enterprise Development Lp Augmented metadata and signatures for objects in object stores
US10587454B2 (en) 2018-01-30 2020-03-10 Hewlett Packard Enterprise Development Lp Object counts persistence for object stores
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US11036596B1 (en) 2018-02-18 2021-06-15 Pure Storage, Inc. System for delaying acknowledgements on open NAND locations until durability has been confirmed
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US10997153B2 (en) 2018-04-20 2021-05-04 Hewlett Packard Enterprise Development Lp Transaction encoding and transaction persistence according to type of persistent storage
US12001688B2 (en) 2019-04-29 2024-06-04 Pure Storage, Inc. Utilizing data views to optimize secure data access in a storage system
US11995336B2 (en) 2018-04-25 2024-05-28 Pure Storage, Inc. Bucket views
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US11243703B2 (en) 2018-04-27 2022-02-08 Hewlett Packard Enterprise Development Lp Expandable index with pages to store object records
US11385792B2 (en) 2018-04-27 2022-07-12 Pure Storage, Inc. High availability controller pair transitioning
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US12079494B2 (en) 2018-04-27 2024-09-03 Pure Storage, Inc. Optimizing storage system upgrades to preserve resources
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US12087382B2 (en) 2019-04-11 2024-09-10 Pure Storage, Inc. Adaptive threshold for bad flash memory blocks
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11487665B2 (en) 2019-06-05 2022-11-01 Pure Storage, Inc. Tiered caching of data in a storage system
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US12001684B2 (en) 2019-12-12 2024-06-04 Pure Storage, Inc. Optimizing dynamic power loss protection adjustment in a storage system
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US12056365B2 (en) 2020-04-24 2024-08-06 Pure Storage, Inc. Resiliency for a storage system
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US12093545B2 (en) 2020-12-31 2024-09-17 Pure Storage, Inc. Storage system with selectable write modes
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
US12067282B2 (en) 2020-12-31 2024-08-20 Pure Storage, Inc. Write path selection
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US12061814B2 (en) 2021-01-25 2024-08-13 Pure Storage, Inc. Using data similarity to select segments for garbage collection
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US12099742B2 (en) 2021-03-15 2024-09-24 Pure Storage, Inc. Utilizing programming page size granularity to optimize data segment storage in a storage system
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US11994723B2 (en) 2021-12-30 2024-05-28 Pure Storage, Inc. Ribbon cable alignment apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040215622A1 (en) * 2003-04-09 2004-10-28 Nec Laboratories America, Inc. Peer-to-peer system and method with improved utilization
US20050187946A1 (en) * 2004-02-19 2005-08-25 Microsoft Corporation Data overlay, self-organized metadata overlay, and associated methods
US20070208748A1 (en) * 2006-02-22 2007-09-06 Microsoft Corporation Reliable, efficient peer-to-peer storage
US20080005334A1 (en) * 2004-11-26 2008-01-03 Universite De Picardie Jules Verne System and method for perennial distributed back up
US7466810B1 (en) * 2004-12-20 2008-12-16 Neltura Technology, Inc. Distributed system for sharing of communication service resources between devices and users

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7130921B2 (en) * 2002-03-15 2006-10-31 International Business Machines Corporation Centrally enhanced peer-to-peer resource sharing method and apparatus
KR20040084530A (ko) * 2003-03-28 2004-10-06 LG Electronics Inc. Software upgrade method using infrared in a mobile communication terminal
US7418454B2 (en) * 2004-04-16 2008-08-26 Microsoft Corporation Data overlay, self-organized metadata overlay, and application level multicasting
JP2006319909A (ja) * 2005-05-16 2006-11-24 Konica Minolta Holdings Inc Data communication method, peer-to-peer network, and information processing apparatus
US8060648B2 (en) * 2005-08-31 2011-11-15 Cable Television Laboratories, Inc. Method and system of allocating data for subsequent retrieval

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040215622A1 (en) * 2003-04-09 2004-10-28 Nec Laboratories America, Inc. Peer-to-peer system and method with improved utilization
US20050135381A1 (en) * 2003-04-09 2005-06-23 Nec Laboratories America, Inc. Peer-to-peer system and method with prefix-based distributed hash table
US20050187946A1 (en) * 2004-02-19 2005-08-25 Microsoft Corporation Data overlay, self-organized metadata overlay, and associated methods
US20080005334A1 (en) * 2004-11-26 2008-01-03 Universite De Picardie Jules Verne System and method for perennial distributed back up
US7466810B1 (en) * 2004-12-20 2008-12-16 Neltura Technology, Inc. Distributed system for sharing of communication service resources between devices and users
US20070208748A1 (en) * 2006-02-22 2007-09-06 Microsoft Corporation Reliable, efficient peer-to-peer storage

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7992037B2 (en) 2008-09-11 2011-08-02 Nec Laboratories America, Inc. Scalable secondary storage systems and methods
US20100070698A1 (en) * 2008-09-11 2010-03-18 Nec Laboratories America, Inc. Content addressable storage systems and methods employing searchable blocks
CN101676855A (zh) * 2008-09-11 2010-03-24 NEC Laboratories America, Inc. Scalable secondary storage system and method
US8335889B2 (en) 2008-09-11 2012-12-18 Nec Laboratories America, Inc. Content addressable storage systems and methods employing searchable blocks
US20100064166A1 (en) * 2008-09-11 2010-03-11 Nec Laboratories America, Inc. Scalable secondary storage systems and methods
US20100174968A1 (en) * 2009-01-02 2010-07-08 Microsoft Corporation Heirarchical erasure coding
US8121993B2 (en) * 2009-10-28 2012-02-21 Oracle America, Inc. Data sharing and recovery within a network of untrusted storage devices using data object fingerprinting
US20110099200A1 (en) * 2009-10-28 2011-04-28 Sun Microsystems, Inc. Data sharing and recovery within a network of untrusted storage devices using data object fingerprinting
US20110106758A1 (en) * 2009-10-29 2011-05-05 Borislav Agapiev Dht-based distributed file system for simultaneous use by millions of frequently disconnected, world-wide users
US8296283B2 (en) 2009-10-29 2012-10-23 Google Inc. DHT-based distributed file system for simultaneous use by millions of frequently disconnected, world-wide users
US7716179B1 (en) 2009-10-29 2010-05-11 Wowd, Inc. DHT-based distributed file system for simultaneous use by millions of frequently disconnected, world-wide users
US20140129881A1 (en) * 2010-12-27 2014-05-08 Amplidata Nv Object storage system for an unreliable storage medium
US9135136B2 (en) * 2010-12-27 2015-09-15 Amplidata Nv Object storage system for an unreliable storage medium
US10725884B2 (en) 2010-12-27 2020-07-28 Western Digital Technologies, Inc. Object storage system for an unreliable storage medium
US20180165154A1 (en) * 2014-08-07 2018-06-14 Pure Storage, Inc. Die-level monitoring in a storage cluster
US10579474B2 (en) * 2014-08-07 2020-03-03 Pure Storage, Inc. Die-level monitoring in a storage cluster
WO2019170004A1 (zh) * 2018-03-09 2019-09-12 Hangzhou Hikvision System Technology Co., Ltd. Data storage system, method, and apparatus

Also Published As

Publication number Publication date
AR076255A1 (es) 2011-06-01
US8140625B2 (en) 2012-03-20
TW200843410A (en) 2008-11-01
TWI433504B (zh) 2014-04-01
TWI432968B (zh) 2014-04-01
WO2008103568A1 (en) 2008-08-28
TW200847689A (en) 2008-12-01
US20080201428A1 (en) 2008-08-21
WO2008103569A1 (en) 2008-08-28

Similar Documents

Publication Publication Date Title
US20080201335A1 (en) Method and Apparatus for Storing Data in a Peer to Peer Network
US8090792B2 (en) Method and system for a self managing and scalable grid storage
US10990479B2 (en) Efficient packing of compressed data in storage system implementing data striping
JP5539683B2 (ja) Scalable secondary storage system and method
US9823980B2 (en) Prioritizing data reconstruction in distributed storage systems
CN106708425B (zh) Distributed multimode storage management
US9442673B2 (en) Method and apparatus for storing data using a data mapping algorithm
US9747155B2 (en) Efficient data reads from distributed storage systems
JP5500257B2 (ja) Storage system
CN110096891B (zh) Object signatures in object stores
US20200117362A1 (en) Erasure coding content driven distribution of data blocks
US20170126805A1 (en) Allocating delegates for modification of an index structure
CN102609446B (zh) Distributed Bloom filter system and method of using same
CN112230861B (zh) Data storage method and terminal based on a consistent hashing algorithm
CN103067525A (zh) Cloud storage data backup method based on feature codes
CN110147203B (zh) File management method and apparatus, electronic device, and storage medium
CN110046160B (zh) Stripe-based consistent hashing storage system construction method
US11157186B2 (en) Distributed object storage system with dynamic spreading
WO2018235132A1 (en) DISTRIBUTED STORAGE SYSTEM
JP4891657B2 (ja) Data storage system, file search device, and program
US11163642B2 (en) Methods, devices and computer readable medium for managing a redundant array of independent disks
Klein et al. Dxram: A persistent in-memory storage for billions of small objects
Xiong et al. ECCH: Erasure Coded Consistent Hashing for Distributed Storage Systems
JP2014154087A (ja) Method for even distribution and placement of data
CN115827560A (zh) Distributed storage method and system for massive small industrial files

Legal Events

Date Code Title Description
AS Assignment

Owner name: NEC LABORATORIES AMERICA, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUBNICKI, CEZARY;GRYZ, LESZEK K;LICHOTA, KRZYSZTOF;AND OTHERS;REEL/FRAME:020560/0407;SIGNING DATES FROM 20080218 TO 20080225

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION