WO2005043389A2 - Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices - Google Patents

Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices

Info

Publication number
WO2005043389A2
WO2005043389A2 (PCT/EP2004/012314)
Authority
WO
WIPO (PCT)
Prior art keywords
data
redundancy
storage
areas
sub
Prior art date
Application number
PCT/EP2004/012314
Other languages
French (fr)
Other versions
WO2005043389A3 (en)
Inventor
Volker Lindenstruth
Arne Wiebalck
Original Assignee
Certon Systems Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Certon Systems Gmbh
Priority to EP04803099A (EP1685489A2)
Publication of WO2005043389A2
Publication of WO2005043389A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08: Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10: Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076: Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00: Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10: Indexing scheme relating to G06F11/10
    • G06F2211/1002: Indexing scheme relating to G06F11/1076
    • G06F2211/1028: Distributed, i.e. distributed RAID systems with parity
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00: Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10: Indexing scheme relating to G06F11/10
    • G06F2211/1002: Indexing scheme relating to G06F11/1076
    • G06F2211/1047: No striping, i.e. parity calculation on a RAID involving no stripes, where a stripe is an independent set of data

Definitions

  • Write transactions of a node to a local mass storage device may be intercepted by the read-write module for user data and the appropriate redundancy information may be computed and distributed appropriately in the cluster prior to writing the data block to the local mass storage.
  • This redundancy information may be used to restore data in the case of a device failure.
  • the approach of the invention to serve read requests from the local device and to only update remote redundancy information for write requests is fundamentally different from other distributed RAID systems.
  • the architecture of the invention allows for a reduction of the network load to a minimum and imposes minimal additional load on the processor for read requests as compared to a stand-alone computer.
  • a desired level of fault tolerance and data security can be freely chosen by defining the number of redundancy blocks per group of data blocks in an ensemble, allowing optimization of the redundancy data overhead while maintaining a very high reliability.
  • the invention affords an efficient and reliable storage system based upon unreliable components.
  • Simple considerations show that a cluster with about 1000 PCs, each equipped with 1 terabyte of disk storage, can easily be incorporated into a distributed mass storage system with a capacity of about 1 petabyte and a mean time to data loss by disk failure of several tens of thousands of years.
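  • The order of magnitude of this claim can be checked with a standard mean-time-to-data-loss estimate. The sketch below is illustrative only: the disk MTBF, repair time and redundancy group size are assumed values, not figures from this document, and the formula is the classic single-fault-tolerant approximation.

```python
# Rough mean-time-to-data-loss (MTTDL) estimate for an array split into
# redundancy groups that each survive one disk failure. All numbers are
# illustrative assumptions (1,000,000 h disk MTBF, 24 h repair time).
def mttdl_single_fault(n_disks, group_size, mtbf_h, mttr_h):
    """Classic approximation: MTBF^2 / (G*(G-1)*MTTR) per group,
    divided by the number of groups in the system."""
    n_groups = n_disks / group_size
    per_group = mtbf_h ** 2 / (group_size * (group_size - 1) * mttr_h)
    return per_group / n_groups

hours = mttdl_single_fault(n_disks=1000, group_size=5,
                           mtbf_h=1_000_000, mttr_h=24)
print(f"MTTDL about {hours / (24 * 365):,.0f} years")
```

With more than one redundancy block per group, as the invention allows, the estimate improves by further factors of roughly MTBF/MTTR per tolerated failure, which is how figures of tens of thousands of years become plausible.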
  • typical applications of such systems are research institutes operating PC farms with a high demand for reliable data storage (for instance genome databases or high energy physics experiments), TV and radio stations for storage of digitized multimedia data, or service providers like the internet search engines and the like.
  • the present inventive architecture is useful and advantageous for these and other applications requiring highly-reliable mass storage.
  • Fig. 1 depicts schematically the functional architecture of an embodiment of a cluster computer system in accordance with the invention
  • Fig. 2 shows a first embodiment of the physical distribution of data and redundancy blocks in the system where some nodes only store data, while others only store redundancy information;
  • Fig. 3 illustrates another embodiment of the physical distribution of data and redundancy blocks in the system, where all nodes store data as well as redundancy information in an interleaved fashion
  • Fig. 4 shows yet another embodiment of the physical distribution of data and redundancy blocks where the nodes store data as well as redundancy information in disjoint areas
  • Fig. 5 is a flow chart of a process for servicing requests in the system and its behavior in the case of errors.
  • Fig. 1 shows schematically the simplified functional architecture of a preferred embodiment of a distributed storage cluster computer system in accordance with the invention.
  • the system may comprise a plurality of nodes (1), each embodying a mass storage area for user data (MSAD) (4) and a plurality of nodes (2), each embodying a mass storage area for redundancy data (MSAR) (5).
  • Each node (1, 2) may be an independent computer having one or more associated data storage devices that provide one or more of the mass storage areas.
  • the system has at least one mass storage area of each kind.
  • the mass storage areas for user data and those for redundancy information may be assigned to dedicated nodes, or nodes may implement both functionalities. All nodes in the system are connected by a network (3).
  • All nodes contain at least one mass storage area MSAD, MSAR (4, 5) which is part of the system. All storage areas are block-oriented: they are subdivided into blocks, which are preferably of equal size. The individual mass storage areas may be distributed over one or several block devices on one node supporting the same block size. Access to a block device is only possible in multiples of a block. Hard disks, floppies, and CDROMs are examples of such block-oriented devices which may be employed. In this sense, a node's main memory is also block-oriented, its byte-wise access corresponding to a block size of one byte. All nodes contain read-write modules, either for user data, RWMD (6), or for redundancy information, RWMR (7), or both.
  • the data path also may contain a redundancy encoder, E (8), which generates redundancy information for write requests, and a redundancy decoder, D (9), which decodes the original user data if the local disk is not operational.
  • the redundancy encoder and decoder which may be embodied in a single unit and are referred to herein as CODECs or redundancy modules, (8, 9) can reside on any node.
  • the set of blocks on storage devices is divided into the two groups of logical mass storage areas (data and redundancy). There is preferably a well-defined mapping between all data and redundancy blocks in the various mass storage areas.
  • a redundancy block stores the redundancy information of all data blocks that are associated to it.
  • a set of associated data and redundancy blocks is defined herein as a redundant block ensemble. No two blocks within one redundant block ensemble may reside on the same node. Therefore, although a node may embody user data and redundancy areas, within a given redundant block ensemble a node serves exclusively either as a data node (1), holding a data block, or as a redundancy node (2) holding a redundancy block. They will be referenced as such in the description below.
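  • The placement rule above (no two blocks of one redundant block ensemble on the same node) can be sketched as a simple assignment function. The round-robin layout below is an illustrative choice, not the mapping required by this document:

```python
# Illustrative round-robin placement: assign the blocks of each redundant
# block ensemble to distinct nodes, rotating the start node per ensemble
# so that redundancy blocks do not pile up on a single node.
def place_ensemble(ensemble_idx, n_data, n_redundancy, n_nodes):
    size = n_data + n_redundancy
    assert size <= n_nodes, "an ensemble may use each node at most once"
    start = ensemble_idx % n_nodes
    nodes = [(start + k) % n_nodes for k in range(size)]
    return nodes[:n_data], nodes[n_data:]   # (data nodes, redundancy nodes)

data_nodes, red_nodes = place_ensemble(3, n_data=4, n_redundancy=1, n_nodes=5)
# All blocks of this ensemble land on pairwise distinct nodes:
assert len(set(data_nodes + red_nodes)) == len(data_nodes + red_nodes)
```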
  • Each access to the storage devices (4,5) may be intercepted by the appropriate read-write modules (6,7).
  • an unchanged read request is forwarded to the underlying local device and the result of the operation is sent back to the requesting application, A (10).
  • the interception is necessary to check if the transaction succeeded or failed.
  • the ability to determine the completion status of an operation is a required feature of the underlying device.
  • in the case of a read error, the read-write module (6) will reconstruct the data and forward it to the application (10).
  • for write requests, the transaction is also intercepted by the read-write modules (6).
  • the difference between the old and new data is computed by first reading the old data blocks and comparing the old data with the new data.
  • the redundancy encoder (8) uses this difference as its input to calculate any changes to be made on the corresponding redundancy blocks. For example, this difference can be calculated by applying a logical exclusive-OR (“XOR”) operation to the two data sets.
  • the difference data is sent over the network (3) to all nodes holding redundancy blocks of the given redundant block ensemble.
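  • For a parity-based CODEC, the write path just described reduces to two XOR operations: the data node computes the difference between old and new data, and each redundancy node folds that difference into its parity block. A minimal sketch with illustrative block contents:

```python
# Sketch of a parity-delta write: the data node XORs old and new data and
# ships only the difference; the redundancy node folds it into its parity.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_data = bytes([0x11, 0x22, 0x33, 0x44])   # block currently on disk
new_data = bytes([0x11, 0x22, 0xFF, 0x44])   # pending write
parity   = bytes([0xA0, 0xB0, 0xC0, 0xD0])   # remote redundancy block

delta = xor_blocks(old_data, new_data)       # sent over the network
new_parity = xor_blocks(parity, delta)       # applied on the redundancy node

# Folding the same delta in again recovers the old parity (XOR is its
# own inverse), confirming the update is consistent:
assert xor_blocks(new_parity, delta) == parity
```

Note that only the difference block crosses the network; the user data itself is written locally, which is the source of the reduced network load claimed above.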
  • the remote storage may be made visible on the local machine using virtual mass storage areas, vMSA (11).
  • a virtual storage area masquerades a remote storage area as being local.
  • an access to the remote area does not differ from any other access to a local device, from the read-write module's point of view.
  • all read-write requests from a virtual device are served from the appropriate remote device.
  • the computed difference between stored data and the pending write transaction is used as the basis for the calculation of updated redundancy information using the redundancy encoder (8).
  • the error-correcting encoder returns the difference between the old and the new redundancy data.
  • the position of the encoder in the data path may be flexible and arbitrary. It is not necessary to compute the redundancy information on the redundancy node (2) holding the redundancy block. It is possible to determine the change in the redundancy information on the data node (1) and to send the difference in the redundancy information to the redundancy node (2). For better load balancing, it is therefore possible to install an appropriate redundancy encoder on or otherwise associate an appropriate redundancy encoder with every node in the system. Depending on the complexity of the redundancy algorithm used, and in order to off-load operations from the host CPU, a hardware- supported CODEC can be instantiated to accelerate and improve the overall system performance.
  • FPGA: field programmable gate array
  • the location independence of the CODEC is advantageous in allowing only a few nodes in the system to be provided with such a hardware accelerator, if desired, while providing various trade-offs between cost and performance.
  • Fig. 2 is an embodiment that shows an example of the distribution of blocks with dedicated data nodes (1) and redundancy nodes (2).
  • every node in the cluster system embodies exclusively either an MSAD (4) or MSAR (5) and, therefore, serves exclusively as a computer with data mass storage entity (1) or a computer with redundancy mass storage entity (2).
  • the blocks ai, bi, ci, di and pi form a redundant block ensemble (13), where i is an index uniquely identifying the ensemble, a, b, c, d are data blocks, and p is a checksum block, in this case parity.
  • the unused blocks (15) may be initialized to a defined value, such as 0 as indicated in the figure.
  • the data can be read from the local devices independently and asynchronously with respect to all other nodes in the system. For write accesses, however, the steps as described above are followed. For instance, writing to data block a1 (12) triggers the computation of redundancy information which is added to the information on the associated (13) redundancy block p1 (14). In addition, all associated redundancy blocks on all other nodes (2), embodying MSAR (5), are also updated.
  • the redundancy information in block p1 and all other redundancy blocks in the redundant block ensemble may be calculated from the data in the data blocks a1, b1, c1, and d1.
  • the assignment of blocks to logical structures, such as files, is entirely independent from the assignment of blocks to redundant blocks ensembles.
  • the blocks a1 through a5 in Figure 2 could contain the data of a file on an associated node, thus requiring five data blocks for the nodes shown.
  • all logical data objects (file system, files and the like) of one node may be stored within the data mass storage area (4) of that given node, the data storage therefore remaining completely local while being redundantly encoded remotely due to the existence of the remote redundancy mass storage areas (5). Consequently, read accesses will remain completely local and independent. Only in the case of read errors would the system have to access the remote storage areas (data (4) and redundancy (5)) in order to reconstruct the lost information.
  • some nodes have exclusively redundancy mass storage areas connected and cannot be used for user data. Those nodes do not have locally attached redundant mass storage for user data, and are, therefore, less useful for application processing as all related mass storage accesses are remote.
  • Fig. 3 is another embodiment that shows an example how the blocks may be redistributed over the various nodes in the cluster system so that every node now embodies mass storage areas for data (4) and for redundancy information (5), and the associated entities (1, 2).
  • the redundancy blocks ensembles are now ai, bi, ci, pi and Pi or ai, bi, ei, pi, Pi or ai, di, ei, pi, Pi and so forth.
  • any assignment of data and redundancy blocks within a redundancy blocks ensemble to nodes is possible, provided that no two blocks reside on the same node.
  • the number of blocks in the redundancy block ensemble does not have to match the number of available nodes, as sketched in this example. All other aspects of the system as discussed in the context of Fig. 2 remain valid.
  • each write access to data block a4 leads to a redundancy update of the redundancy blocks p4 and P4.
  • the same redundancy blocks are updated when the data of block e4 is changed.
  • all data blocks ai are stored locally on one given node and can be used for storing user data for independent and direct access on the local node.
  • all nodes have their private and redundantly encoded data area a, b, c, d, e. They also all store appropriate redundancy information in their MSAR (5).
  • the blocks of the MSAD and MSAR are preferably interlaced physically on the mass storage devices. Assuming the available physical storage space to be of equal size on all nodes, in this example, this results in each node using 60% of its physical storage for MSAD and 40% for MSAR. In the previous example, by contrast, some nodes used 100% of their physical mass storage for MSAD and others used their 100% for MSAR.
  • the data and redundancy information and their corresponding mass storage areas MSAD and MSAR are interlaced on the physical devices. This makes no difference with respect to the invention. Any block arrangement on a given node is possible and may be used. However, the arrangement has to be known globally in order to allow any node to determine which redundancy blocks to access in case of local write transactions.
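  • The globally known arrangement can be as simple as a pure function, shared by all nodes, from a physical block number to its role. The sketch below reproduces the 60%/40% interlaced split of Fig. 3 under the illustrative assumption of ensembles with three data and two redundancy blocks:

```python
# Illustrative global layout function for the interlaced arrangement of
# Fig. 3: with ensembles of 3 data + 2 redundancy blocks, block roles
# repeat with period 5, yielding the 60% MSAD / 40% MSAR split. Because
# every node evaluates the same function, any node can locate the
# redundancy blocks it must update for a local write.
def block_role(physical_block, n_data=3, n_red=2):
    period = n_data + n_red
    return "MSAD" if physical_block % period < n_data else "MSAR"

roles = [block_role(i) for i in range(100)]
assert roles.count("MSAD") / len(roles) == 0.6   # 60% user data
```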
  • Fig. 4 shows another example of a possible physical block distribution using a similar organization to that described for Fig. 3.
  • the user data and redundancy data may be stored adjacently in different areas of a local physical mass storage.
  • the first 60% of the local physical mass storage (for example hard drive(s)) may be used for the data blocks MSAD (4) and the remaining 40% may be used for MSAR (5).
  • the data and redundancy mass storage can be distributed to independent local devices.
  • a node may employ a 300 GB disk for data and a 100 GB disk for redundancy information, allowing completely independent operation of the local data mass storage entity (1) and the redundancy mass storage entity (2). Therefore, write transactions on remote nodes resulting in update transactions to the local redundancy area MSAR (5) would not affect at all any of the potential local accesses to the MSAD (4).
  • Figs. 2, 3 and 4 represent a few examples, ranging from redundancy data concentrated on dedicated nodes to redundancy data distributed equally over all nodes in the system.
  • the number of blocks in a redundant block ensemble matches the number of nodes, all having the same physical storage capacity.
  • the number of blocks in a redundant blocks ensemble may be smaller than the number of nodes, and the physical storage capacity of the nodes may vary.
  • the number of data and redundancy blocks can also vary from node to node. One reason for such a variation could be the fact that some nodes may require less storage space than others and, thus, can host more redundancy blocks.
  • the nodes in the system may not necessarily be built from the same type of hardware, but may differ in age or quality of their components. This results in different failure probabilities. Accordingly, such non- homogeneity may also be a reason to choose a different distribution arrangement than the homogenous arrangements described above.
  • Fig. 5 is a process flow chart showing an example of the operation of a cluster system of the invention to read or write service requests and to error scenarios.
  • the read-write module checks all requests for the occurrence of an error before they are handed back to the requester.
  • the notification of an error is generated by a storage device and provided to the read-write module. Once the read-write module receives the error notification, reconstruction of the missing information is triggered for all following requests.
  • the requested data may be reconstructed by decoding the given redundancy and user data in the redundant blocks ensemble.
  • the redundancy decoder uses a coding algorithm inverse to that used by the encoder to compute the requested data. An example of how this may be done is described by Hankerson et al. (D. R. Hankerson: Coding Theory and Cryptography: The Essentials).
  • the requested data may be returned to the requesting application and may be stored in a reallocated area in the mass storage system. If the reallocation fails, the mass storage device has to be replaced and an appropriate error operation may be initiated by the cluster system. In the extremely unlikely case of the reconstruction failing, as for instance if the number of failing devices exceeds the redundancy limits of the algorithm, the cluster system will have to return an I/O error.
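  • For the single-parity case, the decoding algorithm is again a byte-wise XOR over the surviving blocks of the redundant block ensemble. A minimal sketch with illustrative block contents:

```python
from functools import reduce

def xor_all(*blocks: bytes) -> bytes:
    """Byte-wise XOR over any number of equal-length blocks."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

# Ensemble a1..d1 with parity p1 = a1 ^ b1 ^ c1 ^ d1 (illustrative contents).
a1, b1 = b"\x01\x02", b"\x10\x20"
c1, d1 = b"\x03\x30", b"\x44\x55"
p1 = xor_all(a1, b1, c1, d1)

# The node holding c1 fails: rebuild c1 from the surviving data blocks
# plus the parity block, exactly the inverse of the encoding step.
rebuilt = xor_all(a1, b1, d1, p1)
assert rebuilt == c1
```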
  • An error during a write operation is more complicated because different scenarios may occur. Since every write request is preceded by a read request, an error could happen during this initial reading of the data block. However, such read errors can be handled in the same way as discussed above. If the data cannot be reconstructed, an I/O error has to be reported back to the requester. Using the reconstructed data, the write request can be served as in any other case. If a write fails, the specific device may be marked as faulty, and should be replaced by a new one. Furthermore, errors can occur during completion of the write requests for the corresponding redundancy blocks, i.e. during the read or write of the redundancy information. Read errors for redundancy blocks can trigger the recalculation of the redundancy information from the corresponding data blocks.
  • If the reconstruction fails, the recalculation ends with an I/O error.
  • the reconstruction can only fail if the number of failing devices exceeds the number of errors tolerable by the chosen algorithm. It is of course also possible to mark the device as faulty immediately, without the recalculation of the redundancy information. In this scenario, the write request to this device can be tagged as failed.
  • the difference with respect to the new redundancy information may be determined and the result written back to the device. If the device has spare blocks, the write of the reconstructed redundancy information can succeed, but, of course, an error can also occur during this last write operation. If this happens, the device is marked faulty, just as in the case of a write error for a data device above, to enable it to be replaced. The status of all pending operations can be reported back to the read-write module, which checks whether or not the new data and the corresponding redundancy data have been stored on a sufficient number of devices, guaranteeing a defined minimum level of fault tolerance. An insufficient number of successful write operations constitutes an error.
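  • The write-path error handling described in the preceding bullets can be summarized as a control-flow sketch. Every helper on the `io` object (read, reconstruct, write, update_redundancy, mark_faulty) is an illustrative placeholder, not an interface from this document:

```python
# Control-flow sketch of a redundant write, following the error handling
# described above: read-before-write, reconstruction on read error,
# faulty-device marking, and a minimum-success check at the end.
def redundant_write(block, new_data, ensemble, min_ok, io):
    old = io.read(block)                 # every write is preceded by a read
    if old is None:                      # read error: try reconstruction
        old = io.reconstruct(block, ensemble)
        if old is None:                  # reconstruction failed: I/O error
            raise IOError("data unrecoverable")
    ok = 1 if io.write(block, new_data) else 0
    if ok == 0:
        io.mark_faulty(block.device)     # failed device must be replaced
    delta = bytes(a ^ b for a, b in zip(old, new_data))
    for red in ensemble.redundancy_blocks:
        if io.update_redundancy(red, delta):
            ok += 1
        else:
            io.mark_faulty(red.device)
    if ok < min_ok:                      # fault-tolerance floor violated
        raise IOError("insufficient successful writes")
    return ok
```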
  • a device is able to recognize faulty blocks and remaps them to spare blocks. Only if there are no more spare blocks left is an error reported back to the application.
  • This remapping typically is done by the driver or the controller of the device and is transparent to applications. In Fig. 5 this remapping is introduced as Reallocation.
  • An enhanced device controller capable of performing the read-modify-write (RMW) transaction locally on the device, for instance, can reduce the data rate between the device and its host.
  • RMW: read-modify-write
  • the device may calculate the difference.
  • for a data write, it stores the new data and hands the result back to the module for further calculations.
  • for a redundancy write, it applies the received update information to the local redundancy block. This approach relieves the load on the host processor, since the calculation is offloaded to the device hardware.
  • the available bandwidth to the device is increased, since part of the computation now takes place very close to the device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Hardware Redundancy (AREA)

Abstract

A cluster computer system and method for distributed data storage enables distributed, reliable, low overhead mass storage systems. In particular, the system and method contemplate a plurality of computers connected in a network, each computer comprising a node having mass storage devices providing storage areas for data and redundancy information. Computer nodes access their locally attached mass storage devices independently and asynchronously, with a minimum of network transactions and storage overhead. The information stored on the mass storage devices is highly reliable due to the distribution of redundant information within the computer cluster. Redundancy information is used to reconstruct data associated with storage area access failures.

Description

Method and Apparatus for Enabling High-Reliability Storage of Distributed Data on a Plurality of Independent Storage Devices
Background of the Invention
Large scale mass storage systems are driven by many emerging applications in research and industry. For instance, particle physics experiments generate petabytes of data per annum. Many commercial applications, for instance digital video or medical imaging, require highly reliable, distributed mass storage for on-line parallel access. Mass storage systems of petabyte scale have to be built in a modular fashion as no single computer can deliver such scalability.
Large farms of standard PCs have become a commodity and replace traditional supercomputers due to their comparable compute power and their much lower prices. The maximum capacity of standard disk drives, such as installed in commodity PCs, exceeds 1 terabyte per node. Thus, a cluster installation with 1000 commodity PCs and disks would provide a distributed mass storage capacity, exceeding 1 petabyte at a minimal cost. The reason why this type of distributed mass storage paradigm has not been adopted yet is its inherent unreliability.
Local disks, connected to a central server, can be protected against data loss by using RAID technology (RAID: "Redundant Array of Independent/Inexpensive Disks"). Proposed by Patterson et al. (D. A. Patterson, G. Gibson, and R. H. Katz: "A Case for Redundant Arrays of Inexpensive Disks", SIGMOD International Conference on Data Management, Chicago, pp. 109-116, 1988), RAID aims at improving performance and reliability of single large disks by assembling them into one virtual device, while maintaining distributed parity information within this device. The cited paper introduces five RAID strategies, often quoted as RAID levels 1 through 5, which differ in terms of performance and reliability. In addition to these five levels, the RAID Advisory Board defined four more levels, referred to as levels 0, 6, 10 and 53. All these RAID schemes are defined for local disk arrays. They are widely used in order to enhance the data rate or to protect from data loss by a disk failure, within one RAID ensemble. A next step was to apply the RAID concept to a distributed computer farm. Distributed RAID on a block level (as opposed to a file system level) was first proposed by Stonebraker and Schloss (M. Stonebraker and G. A. Schloss: Distributed RAID - A new Multicopy Algorithm, Proceedings of the International Conference on Data Engineering, pp. 430-437, 1990) and patented, for instance, in JP 110 25 022 A. This approach often suffers from several drawbacks: reliability, space overhead, computational overhead and network load. Most of these systems can only tolerate a single disk failure. Simple calculations show, however, that larger systems must inevitably be able to cope with simultaneous errors of multiple components. This applies, in particular, to clusters of commodity components such as mentioned above, since the quality of standard components may be worse than that of high-end products.
However, given the potential scale of the discussed systems, no compute node is reliable enough to provide appropriate reliability to support scalability to thousands of nodes. In addition, the space overhead, defined as the ratio of space required for redundant data to the space available for user data, induced by these systems is in most cases not optimal with respect to the Singleton bound (D. R. Hankerson: Coding Theory and Cryptography: The Essentials, ISBN 0824704657). Codes that attain this bound are able to tolerate a disk failure for every redundancy region that is available within the system. It can easily be shown that, as the minimal requirement for tolerating N disk failures, N redundancy regions are required. Distributed data mirroring, as for instance proposed by Hwang et al. (K. Hwang, H. Jin, and R. Ho: Orthogonal Striping and Mirroring in Distributed RAID for I/O-centric Cluster Computing, IEEE Transactions on Parallel and Distributed Systems, Vol. 13, no. 1, January 2002), is very inefficient, using only half of the total capacity for user data. In addition, the whole system can only tolerate a single disk error. For larger installations, the probability of a data loss scales linearly with the system size, approaching 1 during a period of a few years for the named systems, even if highly reliable components are being used.
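The space-overhead argument can be made concrete: tolerating N device failures requires at least N redundancy regions, so a code attaining the Singleton bound spends N/(G+N) of the raw capacity on redundancy for a group of G data blocks, whereas mirroring always spends one half. A quick check (group sizes are illustrative):

```python
# Redundancy overhead of a code meeting the Singleton bound: N redundancy
# blocks guarding G data blocks tolerate N failures at cost N/(G+N).
def redundancy_overhead(g_data, n_red):
    return n_red / (g_data + n_red)

assert redundancy_overhead(1, 1) == 0.5   # mirroring: half the capacity
assert redundancy_overhead(8, 2) == 0.2   # 2-fault tolerance at 20% cost
```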
All these systems have in common that they stripe logical data objects over several physical devices. For instance, logically adjacent blocks of a file are distributed over several disks in the case of a distributed system on multiple nodes. For distributed systems, this distribution of data blocks has a major drawback: it requires network transactions for any read/write access to the logical data object. For example, in the case of a read access to a large file on an N-node distributed RAID system, the fraction 1-1/(N-P) of all read accesses has to be performed across the network from remote nodes, where P is the number of redundancy blocks in a stripe group (usually 1). This traffic increases both network and CPU overhead.
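The remote-read fraction 1-1/(N-P) approaches 1 quickly as the cluster grows, which is the quantitative core of this striping drawback; a quick check with illustrative values:

```python
# Fraction of reads that must cross the network when files are striped
# over an N-node distributed RAID with P redundancy blocks per stripe.
def remote_read_fraction(n_nodes, p_red=1):
    return 1 - 1 / (n_nodes - p_red)

assert remote_read_fraction(5) == 0.75              # small cluster
assert round(remote_read_fraction(101), 2) == 0.99  # large cluster
```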
Other distributed systems use network-capable RAID controllers (e.g., N. Peleg: Apparatus and Method for a Distributed RAID, U.S. Patent Application 2003/0084397) or are meant for use in wide area networks (e.g., G. H. Moulton: System and Method for Data Protection with Multidimensional Parity, U.S. Patent Application 2002/0048284). Data striping also applies to systems that are able to tolerate multiple failures by using multidimensional parity (e.g., D. J. Stephenson: RAID architecture with two-drive fault-tolerance, U.S. Patent No. 6,353,895).
PC clusters traditionally have centralized file servers and use the known RAID technology for their local devices. In addition, backup systems are provided to protect from data loss in the case of an unrecoverable server error. However, such backup systems may require substantial time for the recovery process. It is desirable to avoid the expensive installations of centralized file servers with their associated disadvantages of poor scalability, low network throughput and high cost by building a reliable mass storage system based on the unreliable components of the cluster.
Summary of the Invention
The present invention is embodied in a cluster computer system providing scalable, highly reliable data storage on multiple distributed, independent storage devices, with adjustable fault tolerance and corresponding overhead and minimal network communication requirements. It enables independent and asynchronous read/write access of all nodes in the computer cluster to local mass storage devices without particular knowledge of the cluster. The invention provides fault tolerance with respect to the partial or the complete loss of a node and its storage devices, by affording a method and apparatus for reconstructing the lost data on a spare node based on available redundancy information distributed in the cluster.
Read accesses of the computer nodes in the cluster to their local mass storage devices may be serviced directly by a read-write module for user data by forwarding the access requests to an underlying physical mass storage device without the necessity for interaction with any other node in the cluster, unless the read returned an error. Local read error detection, such as may be accomplished, for instance, by verifying a Cyclic Redundancy Check (CRC) that may be automatically attached to any data block, may be employed by the mass storage devices. This enables device failures and data transmission errors to be easily detected by a node itself.
Write transactions of a node to a local mass storage device may be intercepted by the read-write module for user data and the appropriate redundancy information may be computed and distributed appropriately in the cluster prior to writing the data block to the local mass storage. This redundancy information may be used to restore data in the case of a device failure. The approach of the invention to serve read requests from the local device and to only update remote redundancy information for write requests is fundamentally different from other distributed RAID systems. During normal operation, the architecture of the invention allows for a reduction of the network load to a minimum and imposes minimal additional load on the processor for read requests as compared to a stand-alone computer. A desired level of fault tolerance and data security can be freely chosen by defining the number of redundancy blocks per group of data blocks in an ensemble, allowing optimization of the redundancy data overhead while maintaining a very high reliability.
The invention affords an efficient and reliable storage system based upon unreliable components. Simple considerations show that a cluster with about 1000 PCs, each equipped with 1 terabyte of disk storage, can easily be incorporated into a distributed mass storage system with a capacity of about 1 petabyte and a mean time to data loss by disk failure of several tens of thousands of years. Among typical applications of such systems are research institutes operating PC farms with a high demand for reliable data storage (for instance genome databases or high energy physics experiments), TV and radio stations for storage of digitized multimedia data, or service providers like internet search engines and the like. The present inventive architecture is useful and advantageous for these and other applications requiring highly-reliable mass storage.
Brief Description of the Drawings
Fig. 1 depicts schematically the functional architecture of an embodiment of a cluster computer system in accordance with the invention;
Fig. 2 shows a first embodiment of the physical distribution of data and redundancy blocks in the system where some nodes only store data, while others only store redundancy information;
Fig. 3 illustrates another embodiment of the physical distribution of data and redundancy blocks in the system, where all nodes store data as well as redundancy information in an interleaved fashion;
Fig. 4 shows yet another embodiment of the physical distribution of data and redundancy blocks where the nodes store data as well as redundancy information in disjoint areas; and
Fig. 5 is a flow chart of a process for servicing requests in the system and its behavior in the case of errors.
Detailed Description of the Preferred Embodiments
Fig. 1 shows schematically the simplified functional architecture of a preferred embodiment of a distributed storage cluster computer system in accordance with the invention. As shown, the system may comprise a plurality of nodes (1), each embodying a mass storage area for user data (MSAD) (4), and a plurality of nodes (2), each embodying a mass storage area for redundancy data (MSAR) (5). Each node (1, 2) may be an independent computer having one or more associated data storage devices that provide one or more of the mass storage areas. The system has at least one mass storage area of each kind. In the preferred embodiment, the mass storage areas for user data and those for redundancy information may be assigned to dedicated nodes, or nodes may implement both functionalities. All nodes in the system are connected by a network (3). All nodes contain at least one mass storage area MSAD, MSAR (4,5) which is part of the system. All storage areas are block-oriented. They are subdivided into blocks, which are preferably of equal size. The individual mass storage areas may be distributed over one or several block devices on one node supporting the same block size. Access to a block device is only possible in multiples of a block. Hard disks, floppies, or CDROMs are examples of such block-oriented devices which may be employed. In this sense, a node's main memory is also block-oriented, the byte-wise access corresponding to a block size of one byte. All nodes contain read-write modules, either for user data, RWMD (6), or for redundancy information, RWMR (7), or both. The data path also may contain a redundancy encoder, E (8), which generates redundancy information for write requests, and a redundancy decoder, D (9), which decodes the original user data if the local disk is not operational. The redundancy encoder and decoder (8, 9), which may be embodied in a single unit and are referred to herein as CODECs or redundancy modules, can reside on any node.
The set of blocks on storage devices is divided into the two groups of logical mass storage areas (data and redundancy). There is preferably a well-defined mapping between all data and redundancy blocks in the various mass storage areas. A redundancy block stores the redundancy information of all data blocks that are associated with it. A set of associated data and redundancy blocks is defined herein as a redundant block ensemble. No two blocks within one redundant block ensemble may reside on the same node. Therefore, although a node may embody user data and redundancy areas, within a given redundant block ensemble a node serves exclusively either as a data node (1), holding a data block, or as a redundancy node (2), holding a redundancy block. They will be referenced as such in the description below.
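The placement constraint — no two blocks of one redundant block ensemble on the same node — can be checked mechanically. A minimal sketch (Python; illustrative only, the mapping representation and function name are assumptions, not part of the disclosure):

```python
def valid_placement(block_to_node: dict) -> bool:
    """True iff no two blocks of one redundant block ensemble
    are placed on the same node. `block_to_node` maps a block
    identifier to the node that stores it."""
    nodes = list(block_to_node.values())
    return len(nodes) == len(set(nodes))
```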
Each access to the storage devices (4,5) may be intercepted by the appropriate read-write modules (6,7). In the case of a read access, an unchanged read request is forwarded to the underlying local device and the result of the operation is sent back to the requesting application, A (10). The interception is necessary to check if the transaction succeeded or failed. The ability to determine the completion status of an operation is a required feature of the underlying device. In the case of a read error, the read-write module (6) will reconstruct the data and forward it to the application (10). In the case of a write access, the transaction is also intercepted by the read-write module (6). However, before the actual data is written to the local device, the difference between the old and new data is computed by first reading the old data blocks and comparing the old data with the new data. The redundancy encoder (8) uses this difference as its input to calculate any changes to be made on the corresponding redundancy blocks. For example, this difference can be calculated by applying a logical exclusive-OR ("XOR") operation to the two data sets. The difference data is sent over the network (3) to all nodes holding redundancy blocks of the given redundant block ensemble.
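The XOR-based difference computation described above can be sketched as follows (Python; an illustrative sketch of the scheme, with function names that are assumptions, not the patent's implementation):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity_after_write(old_data: bytes, new_data: bytes,
                       old_parity: bytes) -> bytes:
    # Read-modify-write on the data node: the difference between the
    # old and the new data block is all that must leave the node.
    delta = xor_blocks(old_data, new_data)
    # The redundancy node adds the received difference to its stored
    # parity block (a second read-modify-write transaction).
    return xor_blocks(old_parity, delta)
```

Because XOR is associative, the updated parity equals the parity recomputed from scratch over all data blocks of the ensemble, while only the small difference block crosses the network.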
In order to provide a simple interface for the network transfer of this differential data, the remote storage may be made visible on the local machine using virtual mass storage areas, vMSA (11). A virtual storage area masquerades a remote storage area as being local. Thus, the remote area does not differ from any other access to a local device from the read-write module's point of view. However, all read-write requests from a virtual device are served from the appropriate remote device. The computed difference between stored data and the pending write transaction is used as the basis for the calculation of updated redundancy information using the redundancy encoder (8). The error-correcting encoder returns the difference between the old and the new redundancy data. Therefore, the result cannot simply be written to the corresponding redundancy block in the redundancy mass storage area (5), but has to be added to the existing redundancy information, making this access a read-modify-write block transaction, also. One example of an appropriate class of error-correcting codes is Reed-Solomon codes (I. S. Reed and G. Solomon: Polynomial codes over certain finite fields, Journal of the Society of Applied Mathematics, 8:300-304, 1960). However, other error-correcting codes can also be used in the invention.
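The same differential update carries over to Reed-Solomon-style checksums, because multiplication in GF(2^8) distributes over XOR. The following sketch (Python; illustrative only — the generator, the reduction polynomial and all names are assumptions, not the patent's concrete coder) updates a single checksum byte from a data difference:

```python
def gf_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two field elements in GF(2^8), reducing by an
    irreducible polynomial commonly used for Reed-Solomon codes."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def checksum_byte(data: list) -> int:
    """Reed-Solomon-style checksum: XOR-sum of g^j * d_j over GF(2^8),
    with generator g = 2 and d_j the byte stored on data node j."""
    q, g = 0, 1
    for d in data:
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return q

def checksum_update(q: int, j: int, old: int, new: int) -> int:
    """Apply the data difference of node j to the stored checksum:
    Q' = Q + g^j * (old + new), all arithmetic in GF(2^8)."""
    coeff = 1
    for _ in range(j):
        coeff = gf_mul(coeff, 2)
    return q ^ gf_mul(coeff, old ^ new)
```

The incremental update yields the same checksum as recomputing it from all data blocks, which is exactly what makes the read-modify-write transaction on the redundancy node possible.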
The position of the encoder in the data path may be flexible and arbitrary. It is not necessary to compute the redundancy information on the redundancy node (2) holding the redundancy block. It is possible to determine the change in the redundancy information on the data node (1) and to send the difference in the redundancy information to the redundancy node (2). For better load balancing, it is therefore possible to install an appropriate redundancy encoder on, or otherwise associate an appropriate redundancy encoder with, every node in the system. Depending on the complexity of the redundancy algorithm used, and in order to off-load operations from the host CPU, a hardware-supported CODEC can be instantiated to accelerate and improve the overall system performance. FPGA (field programmable gate array) technology is a suitable candidate for such a hardware-supported implementation. All operations needed, in particular the above-mentioned XOR operation, can easily be implemented in massively parallel hardware. The location independence of the CODEC is advantageous in allowing only a few nodes in the system to be provided with such a hardware accelerator, if desired, while providing various trade-offs between cost and performance.
Fig. 2 is an embodiment that shows an example of the distribution of blocks with dedicated data nodes (1) and redundancy nodes (2). In this example, every node in the cluster system embodies exclusively either an MSAD (4) or MSAR (5) and, therefore, serves exclusively as a computer with data mass storage entity (1) or a computer with redundancy mass storage entity (2). In this example, the blocks ai, bi, ci, di and pi form a redundant blocks ensemble (13), where i is an index uniquely identifying the ensemble, a, b, c, d are data blocks, and p is a checksum block, in this case parity. In order to start the system in a defined state with correct redundancy information, the unused blocks (15) may be initialized to a defined value, such as 0 as indicated in the figure.
The data can be read from the local devices independently and asynchronously with respect to all other nodes in the system. For write accesses, however, the steps as described above are followed. For instance, writing to data block a1 (12) triggers the computation of redundancy information which is added to the information on the associated (13) redundancy block p1 (14). In addition, all associated redundancy blocks on all other nodes (2), embodying MSAR (5), are also updated. The redundancy information in block p1 and all other redundancy blocks in the redundant blocks ensemble may be calculated from the data in the data blocks a1, b1, c1, and d1.
The assignment of blocks to logical structures, such as files, is entirely independent from the assignment of blocks to redundant blocks ensembles. For instance, the blocks a1 through a5 in Figure 2 could contain the data of a file on an associated node, thus requiring five data blocks for the nodes shown. In the preferred embodiment of the invention, all logical data objects (file system, files and the like) of one node may be stored within the data mass storage area (4) of that given node, the data storage therefore remaining completely local while being redundantly encoded remotely due to the existence of the remote redundancy mass storage areas (5). Consequently, read accesses will remain completely local and independent. Only in the case of read errors would the system have to access the remote storage areas (data (4) and redundancy (5)) in order to reconstruct the lost information. In prior art RAID systems, the blocks would have been distributed over all devices so that the (logically adjacent) blocks a1 through a5, and thus the contents of the file they describe, would not reside on a single device. Read access to the data would, therefore, necessarily induce network transactions in any case to all associated data mass storage areas in the system.
In the example shown in Fig. 2, some nodes have exclusively redundancy mass storage areas connected and cannot be used for user data. Those nodes do not have locally attached redundant mass storage for user data, and are, therefore, less useful for application processing as all related mass storage accesses are remote.
Fig. 3 is another embodiment that shows an example of how the blocks may be redistributed over the various nodes in the cluster system so that every node now embodies mass storage areas for data (4) and for redundancy information (5), and the associated entities (1, 2). In the embodiment of Figure 3, there are two redundancy regions pi and Pi within each redundant blocks ensemble that provide error protection against double failures, indicated as logical connections between the appropriate blocks (13), in accordance with the Singleton bound. The redundancy blocks ensembles are now ai, bi, ci, pi and Pi or ai, bi, ei, pi, Pi or ai, di, ei, pi, Pi and so forth. In principle, any assignment of data and redundancy blocks within a redundancy blocks ensemble to nodes is possible, provided that no two blocks reside on the same node. The number of blocks in the redundancy block ensemble does not have to match the number of available nodes, as sketched in this example. All other aspects of the system as discussed in the context of Fig. 2 remain valid. For example, in the given scenario, each write access to data block a4 leads to a redundancy update of the redundancy blocks p4 and P4. The same redundancy blocks are updated when the data of block e4 is changed. Again, all data blocks ai are stored locally on one given node and can be used for storing user data for independent and direct access on the local node. In this example, all nodes have their private and redundantly encoded data areas a, b, c, d, e. They also all store appropriate redundancy information in their MSAR (5). However, the blocks of the MSAD and MSAR are preferably interlaced physically on the mass storage devices. Assuming the available physical storage space to be of equal size on all nodes, in this example, this results in each node using 60% of its physical storage for MSAD and 40% for MSAR.
In the previous example, by contrast, some nodes used 100% of their physical mass storage for MSAD and others used their 100% for MSAR. In the above scenario shown in Figure 3, the data and redundancy information and their corresponding mass storage areas MSAD and MSAR are interlaced on the physical devices. This makes no difference with respect to the invention. Any block arrangement on a given node is possible and may be used. However, the arrangement has to be known globally in order to allow any node to determine which redundancy blocks to access in case of local write transactions.
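Since the block arrangement must be known globally, every node can evaluate the same deterministic mapping from an ensemble index to the nodes holding its blocks. A possible rotation scheme is sketched below (Python; the rotation rule and all names are hypothetical illustrations, not taken from the patent):

```python
def ensemble_nodes(i: int, n_nodes: int, n_data: int, n_red: int):
    """Map redundant-blocks-ensemble i to its data and redundancy
    nodes by rotating a fixed pattern (requires n_data + n_red
    <= n_nodes so that no node holds two blocks of one ensemble)."""
    start = i % n_nodes
    nodes = [(start + k) % n_nodes for k in range(n_data + n_red)]
    return nodes[:n_data], nodes[n_data:]
```

Rotating the pattern with the ensemble index spreads the redundancy load evenly over all nodes, as in the interleaved arrangement of Fig. 3.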
Fig. 4 shows another example of a possible physical block distribution using a similar organization to that described for Fig. 3. Here, the user data and redundancy data may be stored adjacently in different areas of a local physical mass storage. For instance the first 60% of the local physical mass storage (for example hard drive(s)) may be used for the data blocks MSAD (4) and the remaining 40% may be used for MSAR (5). In the case where the local physical mass storage of a given node in the cluster is composed of several independent disks, the data and redundancy mass storage can be distributed to independent local devices. For example, a node may employ a 300 GB disk for data and a 100 GB disk for redundancy information, allowing completely independent operation of the local data mass storage entity (1) and the redundancy mass storage entity (2). Therefore, write transactions on remote nodes resulting in update transactions to the local redundancy area MSAR (5) would not affect at all any of the potential local accesses to the MSAD (4).
Figs. 2, 3 and 4 represent a few examples, in which the redundancy data is either concentrated on dedicated nodes or distributed equally over all nodes in the system. In all of these embodiments, the number of blocks in a redundant blocks ensemble matches the number of nodes, all having the same physical storage capacity. However, there are many other possible ways to organize the user data and redundancy data on the local physical storage that the invention may employ. For example, the number of blocks in a redundant blocks ensemble may be smaller than the number of nodes, and the physical storage capacity of the nodes may vary. The number of data and redundancy blocks can also vary from node to node. One reason for such a variation could be the fact that some nodes may require less storage space than others and, thus, can host more redundancy blocks. Moreover, the nodes in the system may not necessarily be built from the same type of hardware, but may differ in age or quality of their components. This results in different failure probabilities. Accordingly, such non-homogeneity may also be a reason to choose a different distribution arrangement than the homogenous arrangements described above.
Fig. 5 is a process flow chart showing an example of how a cluster system of the invention services read and write requests and responds to error scenarios. The read-write module checks all requests for the occurrence of an error before they are handed back to the requester. The notification of an error is generated by a storage device and provided to the read-write module. Once the read-write module receives the error notification, reconstruction of the missing information is triggered for all following requests. In the case of read accesses, the requested data may be reconstructed by decoding the given redundancy and user data in the redundant blocks ensemble. The redundancy decoder uses a coding algorithm inverse to that used by the encoder to compute the requested data. An example of how this may be done is described by Hankerson et al. (D. R. Hankerson: Coding Theory and Cryptography: The Essentials, ISBN 0824704657). After reconstruction, the requested data may be returned to the requesting application and may be stored in a reallocated area in the mass storage system. If the reallocation fails, the mass storage device has to be replaced and an appropriate error operation may be initiated by the cluster system. In the extremely unlikely case of the reconstruction failing, as for instance if the number of failing devices exceeds the redundancy limits of the algorithm, the cluster system will have to return an I/O error.
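For the single-parity case, the decoding step reduces to XOR-ing the parity block with all surviving data blocks of the ensemble. A minimal sketch (Python; illustrative of the reconstruction principle only, not the patent's decoder):

```python
def reconstruct_missing(blocks: list, parity: bytes) -> bytes:
    """Recover the one data block marked as None by XOR-ing the
    parity block with every surviving data block of the ensemble."""
    missing = blocks.index(None)
    acc = bytearray(parity)
    for j, blk in enumerate(blocks):
        if j != missing:
            for k, byte in enumerate(blk):
                acc[k] ^= byte
    return bytes(acc)
```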
An error during a write operation is more complicated because different scenarios may occur. Since every write request is preceded by a read request, an error could happen during this initial reading of the data block. However, such read errors can be handled in the same way as discussed above. If the data cannot be reconstructed, an I/O error has to be reported back to the requester. Using the reconstructed data, the write request can be served as in any other case. If a write fails, the specific device may be marked as faulty, and should be replaced by a new one. Furthermore, errors can occur during completion of the write requests for the corresponding redundancy blocks, i.e. during the read or write of the redundancy information. Read errors for redundancy blocks can trigger the recalculation of the redundancy information from the corresponding data blocks. This can be done using the redundancy encoder (8). If the reconstruction fails, the recalculation ends with an I/O error. The reconstruction can only fail if the number of failing devices exceeds the number of errors tolerable by the chosen algorithm. It is of course also possible to mark the device as faulty immediately, without the recalculation of the redundancy information. In this scenario, the write request to this device can be tagged as failed.
If the redundancy data to be overwritten can be recalculated, the difference with respect to the new redundancy information may be determined and the result written back to the device. If the device has spare blocks, the write of the reconstructed redundancy information can succeed, but, of course, an error can also occur during this last write operation. If this happens, the device is marked faulty, just as in the case of a write error for a data device above, to enable it to be replaced. The status of all pending operations can be reported back to the read-write module, which checks whether or not the new data and the corresponding redundancy data have been stored on a sufficient number of devices, thereby guaranteeing a defined minimum level of fault tolerance. An insufficient number of successful write operations constitutes an error.
Usually, a device is able to recognize faulty blocks and remaps them to spare blocks. Only if there are no more spare blocks left is an error reported back to the application. This remapping typically is done by the driver or the controller of the device and is transparent to applications. In Fig. 5 this remapping is introduced as Reallocation.
Since all write requests are preceded by reads in order to determine the difference between old and new data, it is suggested to relocate the computation from the read-write module to the device itself. An enhanced device controller capable of performing the read-modify-write (RMW) transaction locally on the device, for instance, can reduce the data rate between the device and its host. In this scenario, the write request would be directly forwarded to the device as a special RMW request. The device may calculate the difference. In case of a data write, it stores the new data and hands the result back to the module for further calculations. In case of a redundancy write, it applies the received update information to the local redundancy block. This approach relieves the load on the host processor, since the calculation is offloaded to the device hardware. In addition, the available bandwidth to the device is increased, since part of the computation now takes place very close to the device.
While the foregoing description of the invention has been with reference to preferred embodiments, it will be appreciated by those skilled in the art that changes to these embodiments may be made without departing from the principles and spirit of the invention, the scope of which is defined by the claims.
List of reference numerals
1 Computer with data mass storage entity
2 Computer with redundancy mass storage entity
3 Network
4 Mass storage area for user data
5 Mass storage area for redundancy data
6 Read-write module for user data
7 Read-write module for redundancy data
8 Redundancy encoder
9 Redundancy decoder
10 Application
11 Virtual mass storage area
12 Data block
13 Redundant blocks ensemble
14 Redundancy block
15 Unused block

Claims

What is claimed is:
1. A computer architecture or apparatus for distributed data storage, comprising: a plurality of independent computers (1,2) forming nodes and having associated storage devices (4,5); a network (3) connecting said computers; a plurality of storage areas for data (4) and for redundancy (5) information distributed over said storage devices (4,5) such that each said data storage area (4) is assigned to a computer (1) for independent and asynchronous access, said storage areas (4,5) having sub-areas, and the sub-area of a data storage area (4) being assigned to a sub-area in at least one redundancy storage area (5); and read-write modules (6,7) for each of the said computers (1,2), the modules (6,7) providing an interface to access said storage areas (4,5) and providing mapping between sub-areas of the data and redundancy storage areas (4,5), the read-write modules (6,7) generating redundancy information for write accesses to reconstruct data for an access failure, and initiating reconstruction of data for a failed access to a storage device (4,5).
2. A computer architecture or apparatus according to claim 1, wherein said independent computers (1) locally process successful read accesses to assigned local data storage areas (4) without requiring network transactions.
3. A computer architecture or apparatus according to claim 1 or 2, wherein said redundancy information for data reconstruction is derived from differences between data being written and data already stored, said differences being processed by a redundancy module (8,9) and being stored in assigned sub-areas of the redundancy storage areas (5) in a well-defined manner.
4. A computer architecture or apparatus according to claim 1 through 3, wherein the redundancy module (8,9) comprises an encoder (8) that generates redundancy information for a data block to be written, and the system uses such redundancy information to correct previous redundancy information stored in the assigned sub-areas of the redundancy storage areas (5).
5. A computer architecture or apparatus according to claim 1 through 4, wherein the redundancy module (8,9) comprises a decoder (9) that uses an inverse redundancy algorithm for reconstruction of the data in the corresponding sub-area, and the system stores the reconstructed data in a sub-area of a spare storage area.
6. A computer architecture or apparatus according to claim 1 through 5, further comprising virtual storage devices (11) providing an interface to storage areas (4,5) on remote nodes.
7. A computer architecture or apparatus according to claim 1 through 6, wherein said data storage areas (4) and assigned redundancy areas (5) reside on different storage devices (4,5).
8. A computer architecture or apparatus according to claim 1 through 7, wherein the system provides more than one redundancy area (5).
9. A computer architecture or apparatus according to claim 1 through 8, wherein the storage device comprises the memory of the computer.
10. A method for data storage in a system comprising a plurality of independent computers (1,2) forming nodes incorporating storage devices (4,5) and being interconnected by a network (3), said storage devices having storage areas (4,5) and sub-areas, comprising: distributing storage areas for data (4) and storage areas for redundancy information (5) over said storage devices (4,5); assigning each data storage area (4) to one of the said computers (1) for independent and asynchronous access; assigning each sub-area in the data storage areas (4) to an assigned sub-area in the redundancy storage areas (5); generating redundancy information for write accesses to reconstruct data for access failures by using a redundancy module (8,9); and initiating data reconstruction using said redundancy information for a failed access to a storage device (4,5).
11. A method for data storage as recited in claim 10 further comprising processing locally successful read requests to local storage areas (4,5) without requiring remote data exchange.
12. A method for data storage as recited in claim 11, wherein said redundancy module (8,9) reconstructs data of faulty storage devices.
13. A method for data storage as recited in claim 12 further comprising implementing said redundancy module (8,9) in hardware and software.
14. A method for data storage as recited in claim 13 further comprising interfacing to remote storage areas using virtual storage devices (11) associated with said nodes.
15. A method for data storage as recited in claim 14, wherein said distributing storage areas for data (4) and for redundancy information (5) comprises allocating a first group of sub-areas of said storage areas on each node for data storage, and allocating a second group of sub-areas on such nodes for redundancy information, and wherein certain sub-sets of the redundancy sub-areas are allocated so as to be associated with multiple data sub-areas such that all redundancy sub-areas of the sub-set are updated with redundancy information for an access to any one of the associated data sub-areas.
16. A method for data storage as recited in claim 15, wherein said distributing storage areas (4,5) comprises interlacing the data and redundancy storage areas (4,5) across said storage devices on different nodes (1,2).
17. A method for data storage as recited in claim 16, wherein the number of redundancy sub-areas is different from the number of nodes, and the method further comprises mapping the redundancy sub-areas to associated data storage sub-areas.
18. The use of a storage architecture according to claims 1 through 9, or the use of a method according to claims 10 through 17 for the operation of a network of a plurality of independent computers with a minimal number of network transfers.
19. The use of a storage architecture according to claims 1 through 9, or the use of a method according to claims 10 through 17 for the operation of computer clusters, for data base management systems, for reliable storage of multimedia data and/or Internet search engines.
PCT/EP2004/012314 2003-10-30 2004-10-29 Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices WO2005043389A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP04803099A EP1685489A2 (en) 2003-10-30 2004-10-29 Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE10350590A DE10350590A1 (en) 2003-10-30 2003-10-30 Method and device for saving data in several independent read-write memories
DE10350590.3 2003-10-30

Publications (2)

Publication Number Publication Date
WO2005043389A2 true WO2005043389A2 (en) 2005-05-12
WO2005043389A3 WO2005043389A3 (en) 2006-05-04

Family

ID=34529903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2004/012314 WO2005043389A2 (en) 2003-10-30 2004-10-29 Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices

Country Status (4)

Country Link
US (1) US7386757B2 (en)
EP (1) EP1685489A2 (en)
DE (1) DE10350590A1 (en)
WO (1) WO2005043389A2 (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7549077B2 (en) * 2005-04-22 2009-06-16 The United States Of America As Represented By The Secretary Of The Army Automated self-forming, self-healing configuration permitting substitution of software agents to effect a live repair of a system implemented on hardware processors
US9027080B2 (en) * 2008-03-31 2015-05-05 Cleversafe, Inc. Proxy access to a dispersed storage network
KR100772385B1 (en) * 2005-12-07 2007-11-01 삼성전자주식회사 Method and apparatus for transmitting and receiving content on distributed storage system
US7661058B1 (en) * 2006-04-17 2010-02-09 Marvell International Ltd. Efficient raid ECC controller for raid systems
US8195712B1 (en) 2008-04-17 2012-06-05 Lattice Engines, Inc. Lattice data set-based methods and apparatus for information storage and retrieval
US8244998B1 (en) 2008-12-11 2012-08-14 Symantec Corporation Optimized backup for a clustered storage system
US8065556B2 (en) * 2009-02-13 2011-11-22 International Business Machines Corporation Apparatus and method to manage redundant non-volatile storage backup in a multi-cluster data storage system
US20110173344A1 (en) * 2010-01-12 2011-07-14 Mihaly Attila System and method of reducing intranet traffic on bottleneck links in a telecommunications network
US8132044B1 (en) * 2010-02-05 2012-03-06 Symantec Corporation Concurrent and incremental repair of a failed component in an object based storage system for high availability
US8103904B2 (en) * 2010-02-22 2012-01-24 International Business Machines Corporation Read-other protocol for maintaining parity coherency in a write-back distributed redundancy data storage system
US8156368B2 (en) * 2010-02-22 2012-04-10 International Business Machines Corporation Rebuilding lost data in a distributed redundancy data storage system
US8583866B2 (en) * 2010-02-22 2013-11-12 International Business Machines Corporation Full-stripe-write protocol for maintaining parity coherency in a write-back distributed redundancy data storage system
US8103903B2 (en) * 2010-02-22 2012-01-24 International Business Machines Corporation Read-modify-write protocol for maintaining parity coherency in a write-back distributed redundancy data storage system
US20110238936A1 (en) * 2010-03-29 2011-09-29 Hayden Mark G Method and system for efficient snapshotting of data-objects
JP2012033169A (en) * 2010-07-29 2012-02-16 Ntt Docomo Inc Method and device for supporting live check pointing, synchronization, and/or recovery using coding in backup system
US8473778B2 (en) * 2010-09-08 2013-06-25 Microsoft Corporation Erasure coding immutable data
US8688660B1 (en) * 2010-09-28 2014-04-01 Amazon Technologies, Inc. System and method for providing enhancements of block-level storage
EP2793130B1 (en) 2010-12-27 2015-12-23 Amplidata NV Apparatus for storage or retrieval of a data object on a storage medium, which is unreliable
CN103019614B (en) 2011-09-23 2015-11-25 阿里巴巴集团控股有限公司 Distributed memory system management devices and method
US9110797B1 (en) 2012-06-27 2015-08-18 Amazon Technologies, Inc. Correlated failure zones for data storage
US8806296B1 (en) 2012-06-27 2014-08-12 Amazon Technologies, Inc. Scheduled or gradual redundancy encoding schemes for data storage
US8850288B1 (en) 2012-06-27 2014-09-30 Amazon Technologies, Inc. Throughput-sensitive redundancy encoding schemes for data storage
US8869001B1 (en) 2012-06-27 2014-10-21 Amazon Technologies, Inc. Layered redundancy encoding schemes for data storage
US10423491B2 (en) 2013-01-04 2019-09-24 Pure Storage, Inc. Preventing multiple round trips when writing to target widths
US20190250823A1 (en) 2013-01-04 2019-08-15 International Business Machines Corporation Efficient computation of only the required slices
US9558067B2 (en) * 2013-01-04 2017-01-31 International Business Machines Corporation Mapping storage of data in a dispersed storage network
US10402270B2 (en) 2013-01-04 2019-09-03 Pure Storage, Inc. Deterministically determining affinity for a source name range
KR102297541B1 (en) 2014-12-18 2021-09-06 삼성전자주식회사 Storage device and storage system storing data based on reliability of memory area
US10171314B2 (en) * 2015-12-01 2019-01-01 Here Global B.V. Methods, apparatuses and computer program products to derive quality data from an eventually consistent system
CN105530294A (en) * 2015-12-04 2016-04-27 中科院成都信息技术股份有限公司 Mass data distributed storage method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5271012A (en) * 1991-02-11 1993-12-14 International Business Machines Corporation Method and means for encoding and rebuilding data contents of up to two unavailable DASDs in an array of DASDs
JPH1125022A (en) 1997-07-02 1999-01-29 Brother Ind Ltd Client server system
US6353895B1 (en) * 1998-02-19 2002-03-05 Adaptec, Inc. RAID architecture with two-drive fault tolerance
US6223323B1 (en) * 1998-07-17 2001-04-24 Ncr Corporation Method for storing parity information in a disk array storage system
US6826711B2 (en) * 2000-02-18 2004-11-30 Avamar Technologies, Inc. System and method for data protection with multidimensional parity
WO2002088961A1 (en) * 2001-05-01 2002-11-07 The Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations Distributed raid and location independence caching system
US6871263B2 (en) * 2001-08-28 2005-03-22 Sedna Patent Services, Llc Method and apparatus for striping data onto a plurality of disk drives
US20030084397A1 (en) * 2001-10-31 2003-05-01 Exanet Co. Apparatus and method for a distributed raid
US7024586B2 (en) * 2002-06-24 2006-04-04 Network Appliance, Inc. Using file system information in raid data reconstruction and migration
US7200770B2 (en) * 2003-12-31 2007-04-03 Hewlett-Packard Development Company, L.P. Restoring access to a failed data storage device in a redundant memory system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALESSANDRO DI MARCO ET AL.: "Providing Single I/O Space and Multiple Fault Tolerance in a Distributed RAID", 27 May 2003 (2003-05-27), XP002369730, Retrieved from the Internet: <URL:http://www.disi.unige.it/person/CiaccioG/draid_tr0306.pdf> [retrieved on 2006-02-24] *
FAY CHANG ET AL.: "Myriad: cost-effective disaster tolerance", CONFERENCE ON FILE AND STORAGE TECHNOLOGIES, 28 January 2002 (2002-01-28), pages 1-13, XP002369729, Monterey, CA, USA *
HAI JIN ET AL.: "Adaptive Sector Grouping to Reduce False Sharing in Distributed RAID", 26 September 2000 (2000-09-26), pages 1-22, XP002369731, Retrieved from the Internet: <URL:http://gridsec.usc.edu/files/publications/FalseSharing903.pdf> [retrieved on 2006-02-24] *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702988B2 (en) 2005-10-24 2010-04-20 Platform Computing Corporation Systems and methods for message encoding and decoding
US8316274B2 (en) 2005-10-24 2012-11-20 International Business Machines Corporation Systems and methods for message encoding and decoding
EP1858167A2 (en) * 2006-05-19 2007-11-21 Scali AS Transmission of data using the difference between two messages
EP1858167A3 (en) * 2006-05-19 2008-03-12 Scali AS Transmission of data using the difference between two messages
US7751486B2 (en) 2006-05-19 2010-07-06 Platform Computing Corporation Systems and methods for transmitting data
US20100257403A1 (en) * 2009-04-03 2010-10-07 Microsoft Corporation Restoration of a system from a set of full and partial delta system snapshots across a distributed system

Also Published As

Publication number Publication date
WO2005043389A3 (en) 2006-05-04
US20050102548A1 (en) 2005-05-12
EP1685489A2 (en) 2006-08-02
US7386757B2 (en) 2008-06-10
DE10350590A1 (en) 2005-06-16

Similar Documents

Publication Publication Date Title
US7386757B2 (en) Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices
US6453428B1 (en) Dual-drive fault tolerant method and system for assigning data chunks to column parity sets
US10049008B2 (en) Storing raid data as encoded data slices in a dispersed storage network
US8839028B1 (en) Managing data availability in storage systems
US7529970B2 (en) System and method for improving the performance of operations requiring parity reads in a storage array system
US7093182B2 (en) Data redundancy methods and apparatus
US6282671B1 (en) Method and system for improved efficiency of parity calculation in RAID system
US6557123B1 (en) Data redundancy methods and apparatus
US7979779B1 (en) System and method for symmetric triple parity for failing storage devices
US20120192037A1 (en) Data storage systems and methods having block group error correction for repairing unrecoverable read errors
JP5124792B2 (en) File server for RAID (Redundant Array of Independent Disks) system
JP4516846B2 (en) Disk array system
US6950901B2 (en) Method and apparatus for supporting parity protection in a RAID clustered environment
US5007053A (en) Method and apparatus for checksum address generation in a fail-safe modular memory
US20040049632A1 (en) Memory controller interface with XOR operations on memory read to accelerate RAID operations
US7743308B2 (en) Method and system for wire-speed parity generation and data rebuild in RAID systems
US6871317B1 (en) Technique for efficiently organizing and distributing parity blocks among storage devices of a storage array
US6343343B1 (en) Disk arrays using non-standard sector sizes
Reddy et al. Gracefully degradable disk arrays
US7398460B1 (en) Technique for efficiently organizing and distributing parity blocks among storage devices of a storage array
Mishra et al. Dual crosshatch disk array: A highly reliable disk array system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004803099

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2004803099

Country of ref document: EP