
Semi-static distribution technique

Info

Publication number
CA2546242C
Authority
CA
Grant status
Grant
Prior art keywords
parity, disk, blocks, disks, array
Prior art date
2003-11-24
Legal status
Active
Application number
CA 2546242
Other languages
French (fr)
Other versions
CA2546242A1 (en)
Inventor
Peter F. Corbett
Robert M. English
Steven R. Kleiman
Current Assignee
NetApp Inc
Original Assignee
NetApp Inc
Priority date
2003-11-24
Filing date
2004-11-24
Publication date
2011-07-26
Grant date
2011-07-26

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/1096 Parity calculation or recalculation after configuration or reconfiguration of the system

Abstract

A semi-static distribution technique distributes parity across disks of an array. According to the technique, parity is distributed (assigned) across the disks of the array in a manner that maintains a fixed pattern of parity blocks among the stripes of the disks. When one or more disks are added to the array, the semi-static technique redistributes parity in a way that does not require recalculation of parity or moving of any data blocks. Notably, the parity information is not actually moved; the technique merely involves a change in the assignment (or reservation) for some of the parity blocks of each pre-existing disk to the newly added disk.

Description

SEMI-STATIC DISTRIBUTION TECHNIQUE

FIELD OF THE INVENTION

The present invention relates to arrays of storage systems and, more specifically, to a system that efficiently assigns parity blocks within storage devices of a storage array.

BACKGROUND OF THE INVENTION

A storage system typically comprises one or more storage devices into which information may be entered, and from which information may be obtained, as desired. The storage system includes a storage operating system that functionally organizes the system by, inter alia, invoking storage operations in support of a storage service implemented by the system. The storage system may be implemented in accordance with a variety of storage architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client or host computer. The storage devices are typically disk drives organized as a disk array, wherein the term "disk" commonly describes a self-contained rotating magnetic media storage device. The term disk in this context is synonymous with hard disk drive (HDD) or direct access storage device (DASD).

Storage of information on the disk array is preferably implemented as one or more storage "volumes" that comprise a cluster of physical disks, defining an overall logical arrangement of disk space. The disks within a volume are typically organized as one or more groups, wherein each group is operated as a Redundant Array of Independent (or Inexpensive) Disks (RAID). In this context, a RAID group is defined as a number of disks and an address/block space associated with those disks. The term "RAID" and its various implementations are well-known and disclosed in A Case for Redundant Arrays of Inexpensive Disks (RAID), by D. A. Patterson, G. A. Gibson and R. H. Katz, Proceedings of the International Conference on Management of Data (SIGMOD), June 1988.

The storage operating system of the storage system may implement a file system to logically organize the information as a hierarchical structure of directories, files and blocks on the disks. For example, each "on-disk" file may be implemented as a set of data structures, i.e., disk blocks, configured to store information, such as the actual data for the file. The storage operating system may also implement a RAID system that manages the storage and retrieval of the information to and from the disks in accordance with write and read operations. There is typically a one-to-one mapping between the information stored on the disks in, e.g., a disk block number space, and the information organized by the file system in, e.g., volume block number space.

A common type of file system is a "write in-place" file system, an example of which is the conventional Berkeley fast file system. In a write in-place file system, the locations of the data structures, such as data blocks, on disk are typically fixed. Changes to the data blocks are made "in-place"; if an update to a file extends the quantity of data for the file, an additional data block is allocated. Another type of file system is a write-anywhere file system that does not overwrite data on disks. If a data block on disk is retrieved (read) from disk into a memory of the storage system and "dirtied" with new data, the data block is stored (written) to a new location on disk to thereby optimize write performance. A write-anywhere file system may initially assume an optimal layout such that the data is substantially contiguously arranged on disks. The optimal disk layout results in efficient access operations, particularly for sequential read operations, directed to the disks. An example of a write-anywhere file system that is configured to operate on a storage system is the Write Anywhere File Layout (WAFL™) file system available from Network Appliance, Inc., Sunnyvale, California.

Most RAID implementations enhance the reliability/integrity of data storage through the redundant writing of data "stripes" across a given number of physical disks in the RAID group, and the appropriate storing of redundant information with respect to the striped data. The redundant information, e.g., parity information, enables recovery of data lost when a disk fails. A parity value may be computed by summing (usually modulo 2) data of a particular word size (usually one bit) across a number of similar disks holding different data and then storing the results on an additional similar disk. That is, parity may be computed on vectors 1-bit wide, composed of bits in corresponding positions on each of the disks. When computed on vectors 1-bit wide, the parity can be either the computed sum or its complement; these are referred to as even and odd parity, respectively. Addition and subtraction on 1-bit vectors are both equivalent to exclusive-OR (XOR) logical operations. The data is then protected against the loss of any one of the disks, or of any portion of the data on any one of the disks. If the disk storing the parity is lost, the parity can be regenerated from the data. If one of the data disks is lost, the data can be regenerated by adding the contents of the surviving data disks together and then subtracting the result from the stored parity.
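
Because addition and subtraction over 1-bit fields reduce to XOR, parity generation and data regeneration are the same operation. The following sketch (illustrative Python; the helper name and sample blocks are not from the patent) computes even parity over a stripe and rebuilds a lost data block from the survivors:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# A stripe of three data blocks plus one parity block (even parity).
data = [b"\x0f\x10", b"\x21\x42", b"\x33\x04"]
parity = xor_blocks(data)                       # parity = XOR of all data

# Lose data[1]; decode it by XOR-ing the surviving blocks with parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```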

Typically, the disks are divided into parity groups, each of which comprises one or more data disks and a parity disk. A parity set is a set of blocks, including several data blocks and one parity block, where the parity block is the XOR of all the data blocks. A parity group is a set of disks from which one or more parity sets are selected. The disk space is divided into stripes, with each stripe containing one block from each disk. The blocks of a stripe are usually at the same locations on each disk in the parity group. Within a stripe, all but one block contains data ("data blocks"), while the one block contains parity ("parity block") computed by the XOR of all the data.

As used herein, the term "encoding" means the computation of a redundancy value over a predetermined subset of data blocks, whereas the term "decoding" means the reconstruction of a data or parity block by the same process as the redundancy computation using a subset of data blocks and redundancy values. If one disk fails in the parity group, the contents of that disk can be decoded (reconstructed) on a spare disk or disks by adding all the contents of the remaining data blocks and subtracting the result from the parity block. Since two's complement addition and subtraction over 1-bit fields are both equivalent to XOR operations, this reconstruction consists of the XOR of all the surviving data and parity blocks. Similarly, if the parity disk is lost, it can be recomputed in the same way from the surviving data.

If the parity blocks are all stored on one disk, thereby providing a single disk that contains all (and only) parity information, a RAID-4 level implementation is provided. The RAID-4 implementation is conceptually the simplest form of advanced RAID (i.e., more than striping and mirroring) since it fixes the position of the parity information in each RAID group. In particular, a RAID-4 implementation provides protection from single disk errors with a single additional disk, while making it easy to incrementally add data disks to a RAID group.

If the parity blocks are contained within different disks in each stripe, in a rotating pattern, then the implementation is RAID-5. Most commercial implementations that use advanced RAID techniques use RAID-5 level implementations, which distribute the parity information. A motivation for choosing a RAID-5 implementation is that, for most static file systems, using a RAID-4 implementation would limit write throughput. Such static file systems tend to scatter write data across many stripes in the disk array, causing the parity disks to seek for each stripe written. However, a write-anywhere file system, such as the WAFL file system, does not have this issue since it concentrates write data on a few nearby stripes.

Use of a RAID-4 level implementation in a write-anywhere file system is a desirable way of allowing incremental capacity increase while retaining performance; however, there are some "hidden" downsides. First, where all the disks in a RAID group are available for servicing read traffic in a RAID-5 implementation, one of the disks (the parity disk) does not participate in such traffic in the RAID-4 implementation. Although this effect is insignificant for large RAID group sizes, those group sizes have been decreasing because of, e.g., a limited number of available disks or increasing reconstruction times of larger disks. As disks continue to increase in size, smaller RAID group configurations become more attractive. But this increases the fraction of disks unavailable to service read operations in a RAID-4 configuration. The use of a RAID-4 level implementation may therefore result in significant loss of read operations per second. Second, when a new disk is added to a full volume, the write anywhere file system tends to direct most of the write data traffic to the new disk, which is where most of the free space is located.

The RAID system typically keeps track of allocated data in a RAID-5 level implementation of the disk array. To that end, the RAID system reserves parity blocks in a fixed pattern that is simple to compute and that allows efficient identification of the non-data (parity) blocks. However, adding new individual disks to a RAID group of a RAID-5 level implementation typically requires repositioning of the parity information across the old and new disks in each stripe of the array to maintain the fixed pattern. Repositioning of the parity information typically requires use of a complex (and costly) parity block redistribution scheme that "sweeps through" the old and new disks, copying both parity and data blocks to conform to the new distribution. The parity redistribution scheme further requires a mechanism to identify which blocks contain data and to ensure, per stripe, that there are not too many data blocks allocated so that there is sufficient space for the parity information. As a result of the complexity and cost of such a scheme, most RAID-5 implementations relinquish the ability to add individual disks to a RAID group and, instead, use a fixed RAID group size (usually in the 4-8 disk range). Disk capacity is then increased a full RAID group at a time. Yet, the use of small RAID groups translates to high parity overhead, whereas the use of larger RAID groups means having a high cost for incremental capacity.

Therefore, it is desirable to provide a distribution system that enables a storage system to distribute parity evenly, or nearly evenly, among disks of the system, while retaining the capability of incremental disk addition.

In addition, it is desirable to provide a distribution system that enables a write anywhere file system of a storage system to run with better performance in smaller (RAID group) configurations.

SUMMARY OF THE INVENTION

The present invention overcomes the disadvantages of the prior art by providing a semi-static distribution technique that distributes parity across disks of an array. According to an illustrative embodiment of the technique, parity is distributed (assigned) across the disks of the array in a manner that maintains a fixed pattern of parity blocks among stripes of the disks. When one or more disks are added to the array, the semi-static technique redistributes parity in a way that does not require recalculation of parity or moving of any data blocks. Notably, the parity information is not actually moved; the technique merely involves a change in the assignment (or reservation) for some of the parity blocks of each pre-existing disk to the newly added disk. For example, a pre-existing block that stored parity on, e.g., a first pre-existing disk, may continue to store parity; alternatively, a block on the newly added disk can be assigned to store parity for the stripe, which "frees up" the pre-existing parity block on the first disk to store file system data.

Advantageously, semi-static distribution allows those blocks that hold parity (in the stripe) to change when disks are added to the array. Reassignment occurs among blocks of a stripe to rebalance parity to avoid the case where a disk with a preponderance of parity gets "hot", i.e., more heavily utilized than other disks, during write traffic. The novel distribution technique applies to single disk failure correction and can be extended to apply to double (or greater) disk loss protection. In addition, the semi-static distribution technique has the potential to improve performance in disk-bound configurations while retaining the capability to add disks to a volume one or more disks at a time.

According to one aspect of the present invention, there is provided a method for distributing parity blocks across a disk array, the method comprising: adding a new disk to a number of pre-existing disks of the array, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern; dividing each disk into blocks, the blocks being organized into stripes such that each stripe contains one block from each disk; and distributing parity among blocks of the new and pre-existing disks without recalculation of the parity or moving of any blocks containing data by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.

According to another aspect of the present invention, there is provided a system adapted to distribute parity across disks of a storage system, the system comprising: a disk array comprising a number of pre-existing disks and at least one new disk, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern; and a storage module configured to compute parity in blocks of stripes across the disks and reconstruct blocks of disks lost as a result of failure, the storage module further configured to assign the parity among the blocks of the new and pre-existing disks without recalculation of the parity or moving of any data blocks by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.

According to still another aspect of the present invention, there is provided apparatus for distributing parity across a disk array, the apparatus comprising: means for adding a new disk to a number of pre-existing disks of the array, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern; means for dividing each disk into blocks, the blocks being organized into stripes such that each stripe contains one block from each disk; and means for distributing parity among blocks of the new and pre-existing disks without recalculation of the parity or moving of any blocks containing data by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.

According to yet another aspect of the present invention, there is provided a computer readable medium containing executable program instructions for distributing parity across a disk array, the executable instructions comprising one or more program instructions for: adding a new disk to a number of pre-existing disks of the array, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern; dividing each disk into blocks, the blocks being organized into stripes such that each stripe contains one block from each disk; and distributing parity among blocks of the new and pre-existing disks without recalculation of the parity or moving of any blocks containing data by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements:
Fig. 1 is a schematic block diagram of a storage system that may be advantageously used with the present invention;
Fig. 2 is a schematic diagram of a disk array illustrating parity assignments according to a semi-static distribution technique of the present invention;
Fig. 3 is a flowchart illustrating a sequence of steps for distributing parity among disks of an array in accordance with an illustrative embodiment of the semi-static distribution technique; and
Fig. 4 is a diagram of a parity assignment table illustrating a repeat interval for various group sizes in accordance with the semi-static distribution technique.

Fig. 1 is a schematic block diagram of a storage system 100 that may be advantageously used with the present invention. In the illustrative embodiment, the storage system 100 comprises a processor 122, a memory 124 and a storage adapter 128 interconnected by a system bus 125. The memory 124 comprises storage locations that are addressable by the processor and adapter for storing software program code and data structures associated with the present invention. The processor and adapter may, in turn, comprise processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the inventive technique described herein.

A storage operating system 150, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the system 100 by, inter alia, invoking storage operations executed by the storage system. The storage operating system implements a high-level module to logically organize the information as a hierarchical structure of directories, files and blocks on disks of an array. The operating system 150 further implements a storage module that manages the storage and retrieval of the information to and from the disks in accordance with write and read operations. It should be noted that the high-level and storage modules can be implemented in software, hardware, firmware, or a combination thereof.

Specifically, the high-level module may comprise a file system 160 or other module, such as a database, that allocates storage space for itself in the disk array and that controls the layout of data on that array. In addition, the storage module may comprise a disk array control system or RAID system 170 configured to compute redundant (e.g., parity) information using a redundant storage algorithm and recover from disk failures. The disk array control system ("disk array controller") or RAID system may further compute the redundant information using algebraic and algorithmic calculations in response to the placement of fixed data on the array. It should be noted that the term "RAID system" is synonymous with "disk array control system" or "disk array controller" and, as such, use of the term "RAID system" does not imply employment of one of the known RAID techniques. Rather, the RAID system of the invention employs the inventive semi-static parity distribution technique. As described herein, the file system or database makes decisions about where to place data on the array and forwards those decisions to the RAID system.

In the illustrative embodiment, the storage operating system is preferably the NetApp Data ONTAP™ operating system available from Network Appliance, Inc., Sunnyvale, California that implements a Write Anywhere File Layout (WAFL™) file system having an on-disk format representation that is block-based using, e.g., 4 kilobyte (kB) WAFL blocks. However, it is expressly contemplated that any appropriate storage operating system, including, for example, a write in-place file system, may be enhanced for use in accordance with the inventive principles described herein. As such, where the term "WAFL" is employed, it should be taken broadly to refer to any storage operating system that is otherwise adaptable to the teachings of this invention.

As used herein, the term "storage operating system" generally refers to the computer-executable code operable to perform a storage function in a storage system, e.g., that manages file semantics and may, in the case of a file server, implement file system semantics and manage data access. In this sense, the ONTAP software is an example of such a storage operating system implemented as a microkernel and including a WAFL layer to implement the WAFL file system semantics and manage data access. The storage operating system can also be implemented as an application program operating over a general-purpose operating system, such as UNIX or Windows NT, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.

The storage adapter 128 cooperates with the storage operating system 150 executing on the system 100 to access information requested by a user (or client). The information may be stored on any type of attached array of writeable storage device media such as video tape, optical, DVD, magnetic tape, bubble memory, electronic random access memory, micro-electromechanical and any other similar media adapted to store information, including data and parity information. However, as illustratively described herein, the information is preferably stored on the disks, such as HDD and/or DASD, of array 200. The storage adapter includes input/output (I/O) interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a conventional high-performance, Fibre Channel serial link topology.

Storage of information on array 200 is preferably implemented as one or more storage "volumes" (e.g., VOL 1-2 140) that comprise a cluster of physical storage disks, generally shown at 130 and defining an overall logical arrangement of disk space. Each volume is generally, although not necessarily, associated with its own file system. The disks within a volume/file system are typically organized as one or more groups, wherein each group is comparable to a RAID group. Most RAID implementations enhance the reliability/integrity of data storage through the redundant writing of data "stripes" across a given number of physical disks in the RAID group, and the appropriate storing of parity information with respect to the striped data.

Specifically, each volume 140 is constructed from an array of physical disks 130 that are divided into blocks, with the blocks being organized into stripes. The disks are organized as groups 132, 134, and 136. Although these groups are comparable to RAID groups, a semi-static distribution technique described herein is used within each group. Each stripe in each group has one or more parity blocks, depending on the degree of failure tolerance required of the group. The selection of which disk(s) in each stripe contains parity is not determined by the RAID configuration, as it would be in a conventional RAID-4 or RAID-5 array.

The present invention relates to the semi-static distribution technique that distributes parity across disks of an array. The inventive technique is preferably implemented by the RAID system 170 that, among other things, computes parity in stripes across the disks, distributes the parity among those stripes as described herein and reconstructs disks lost as a result of failure. The semi-static distribution technique does not require the participation of the file system 160 and, as such, is also suitable for deployment in RAID code embodied as, e.g., a RAID controller that may be internally or externally coupled to the storage system 100.

According to the technique, parity is distributed (assigned) across the disks of the array in a manner that maintains a fixed pattern of parity blocks among stripes of the disks. When one or more disks are added to the array, the semi-static technique redistributes parity in a way that does not require recalculation of parity or moving of any data blocks. Notably, the parity information is not actually moved; the technique merely involves a change in the assignment (or reservation) for some of the parity blocks of each pre-existing disk to the newly added disk. For example, a pre-existing block that stored parity on, e.g., a first pre-existing disk, may continue to store parity; alternatively, a block on the newly added disk can be assigned to store parity for the stripe, which "frees up" the pre-existing parity block on the first disk to store file system data. Note that references to the file system data do not preclude data generated by other high-level modules, such as databases.

Assuming data is allocated densely across the disks of array 200, the storage operating system 150 can choose to assign parity evenly across the disks in a fixed pattern. However, the fixed pattern changes when one or more disks are added to the array. In response, the semi-static distribution technique redistributes (reassigns) parity in a manner that maintains a fixed pattern of parity blocks among the stripes of the disks. Note that each newly added disk is initialized to a predetermined and fixed value, e.g., zeroed, so as to not affect the fixed parity of the stripes. It should be further noted that the fixed parity may be even or odd, as long as the parity value is known (predetermined); the following description herein is directed to the use of even parity. In addition, initializing of the newly added disk allows reassignment of parity blocks in some stripes (e.g., 1/N of the stripes, where N is equal to the number of disks) to the new disk without any calculation or writing of parity.
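
Zero-initialization works because a block of zeroes is the XOR identity: appending a zeroed disk leaves every stripe's even parity intact. A short self-contained check (illustrative Python; the sample values are invented):

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

stripe = [b"\x5a\x21", b"\x3c\x42", b"\x66\x63"]   # data + parity, XOR == 0
assert xor_blocks(stripe) == b"\x00\x00"           # even parity holds
stripe.append(b"\x00\x00")                         # zeroed block on the new disk
assert xor_blocks(stripe) == b"\x00\x00"           # parity still holds, untouched
```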

According to the invention, the reassignment algorithm only ever changes a parity block to a data block and never changes a data block to a parity block. For example, in response to adding a new Nth disk to a group 132-136, the file system 160 can reassign every Nth parity block of each existing disk to the new disk. Such reassignment does not require any re-computation or data movement as the new disk only contains free blocks and parity blocks, so existing parity blocks can get reassigned for use for data, but not vice versa. This reassignment (construction) algorithm forms a pattern of parity that is deterministic for each group size and evenly distributes parity among all the disks in the group.
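
A compact sketch of this reassignment rule follows (illustrative Python; the patent specifies the rule, not this code, and the names are hypothetical). Each stripe records which disk holds its parity, and adding a disk reassigns every Nth parity block of each pre-existing disk to the new one:

```python
def add_disk(parity_of_stripe, n_old):
    """Grow an n_old-disk array by one disk (index n_old), reassigning
    every Nth parity block of each pre-existing disk to the new disk,
    where N = n_old + 1. Only parity assignments change; no data block
    is ever turned into a parity block."""
    n = n_old + 1
    new_disk = n_old            # the added disk is zeroed, so no parity I/O
    for old in range(n_old):
        seen = 0
        for s, p in enumerate(parity_of_stripe):
            if p == old:
                seen += 1
                if seen % n == 0:       # every Nth parity block of 'old'
                    parity_of_stripe[s] = new_disk
    return parity_of_stripe
```

Each pre-existing disk keeps (N-1)/N of its former P/(N-1) parity blocks, i.e., P/N, and the new disk collects the remaining P/N, so balance is preserved without computing or writing any parity.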

Fig. 2 is a schematic diagram of disk array 200 illustrating parity assignments according to the semi-static distribution technique of the present invention. Assume the array 200 initially comprises one disk 202 and it is desirable to store redundant (parity) information; therefore, each block on the disk stores parity (P) information. When a second disk 204 is added to expand the array, the parity blocks may be distributed between the two disks. Likewise, when a third disk 206 and, thereafter, a fourth disk 208 are added to the expanded array, the parity blocks may be distributed among those disks. As disks are added to the array 200, parity is not stored in a block that contains file system data. The semi-static distribution technique is directed to only reassigning parity blocks, which frees up blocks to use for data. In other words, the technique never reassigns a data block, which is in contrast to the expansion of conventional RAID-5 level implementations.

Parity may be distributed among the disks in accordance with a construction algorithm of the inventive technique that reassigns one of N parity blocks from each pre-existing disk to the new disk, wherein N is equal to the number of disks in the expanded array. Overall, one of N parity blocks is reassigned to the new disk, with each pre-existing disk continuing to hold exactly 1/N of the parity blocks in the expanded array. For a 2-disk array, every other parity block on the first disk 202 is moved to the second disk 204. When the third disk 206 is added to the expanded array 200, thereby creating a 3-disk array, every third remaining parity block on the first disk 202, as well as every third parity block on the second disk 204, is moved to the third disk 206. When the fourth disk 208 is added to the array, creating a 4-disk array, every fourth remaining parity block from each disk (disks 1-3) is moved to the fourth disk 208. As a result of this reassignment, the amount of parity on each disk is substantially the same. The location of the parity block also changes from stripe to stripe across the disks of the array in a predictable and deterministic pattern.
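
The growth sequence of Fig. 2 can be simulated end to end. This self-contained sketch (illustrative Python; the stripe count is an assumption) grows a one-disk, all-parity array to four disks and confirms that each disk ends up holding exactly 1/N of the parity blocks:

```python
def grow(stripes, final_disks):
    """Simulate semi-static growth from 1 disk up to final_disks disks.
    Returns parity_of[s], the disk index holding parity in stripe s."""
    parity_of = [0] * stripes            # 1-disk array: every block is parity
    for n in range(2, final_disks + 1):  # add the 2nd, 3rd, ... disk
        new_disk = n - 1
        for old in range(new_disk):
            seen = 0
            for s in range(stripes):
                if parity_of[s] == old:
                    seen += 1
                    if seen % n == 0:    # move every nth remaining parity block
                        parity_of[s] = new_disk
    return parity_of

assignment = grow(stripes=60, final_disks=4)
print([assignment.count(d) for d in range(4)])   # [15, 15, 15, 15]
```

With 60 stripes, a multiple of the twelve-stripe repeat interval for four disks (see the discussion of table 400 below), the parity counts come out exactly equal.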

Fig. 3 is a flowchart illustrating a sequence of steps for distributing parity among disks of an array in accordance with an illustrative embodiment of the semi-static distribution technique of the present invention. Here, a new Nth disk is added to a group 132-136 of the array and, as described above, one out of every N parity blocks is assigned to the new disk, wherein N is equal to the number of disks in the array. As noted, there is no need to actually move the parity information among the disks; the inventive semi-static distribution technique contemplates merely a change in the assignment (or reservation) for each parity block on the newly added disk.

The sequence starts in Step 300 and proceeds to Step 302 where the new disk is added to the group of N disks in the array. In Step 304, the new disk is initialized (e.g., zeroed) to ensure that the parity of the blocks on each stripe is unaffected. There may be multiple blocks within a stripe that do not contain data (i.e., unallocated data blocks) and that could potentially store parity. The stripe will contain at least one unallocated block, which is the parity block, and one or more unallocated blocks that are freed data blocks. All blocks contribute to, e.g., even parity, so the parity block(s) and the freed data blocks are all equivalent. The file system (or high-level module, if there is no file system) determines which disks contain free blocks in the stripe in response to a write request to store write data in the stripe. In Step 306, the file system 160 arbitrarily reserves as many free blocks as required by the redundant storage algorithm to store parity. For example, a pre-existing block that stored parity on, e.g., a first pre-existing disk, may continue to store parity; alternatively, a block on the newly added disk can be assigned to store parity for the stripe, which "frees up" the pre-existing parity block on the first disk to store the data.

Note that any parity algorithm that protects against two (or more) disk failures may be used with the semi-static distribution technique, as long as the algorithm allows any two (or more) blocks in the stripe to store the parity. An example of a double failure correcting algorithm that may be advantageously used with the present invention is uniform and symmetric row-diagonal (SRD) parity described in U.S. Patent No. 7,263,629 titled Uniform and Symmetric Double Failure Correcting Technique for Protecting against Two Disk Failures in a Disk Array, by Peter F. Corbett et al. Here, the inventive technique is not dependent upon the uniformity or symmetry of the parity algorithm, although it can take advantage of it. When using a double failure correcting algorithm with the semi-static distribution technique, the file system reserves two unallocated data blocks to be assigned to store parity. A non-uniform double or higher failure correcting algorithm can be used since the location of the parity blocks is known deterministically. However, using such an algorithm may sacrifice the advantage that parity need not be recalculated when a disk is added to the array.

Another technique is to employ the non-uniform algorithm such that data blocks are written to any of the blocks of the array, even those that typically would be used to store redundant information. Since the multiple failure correcting algorithm can restore the contents of any missing disks, the remaining blocks can be used to store redundant information, even if they are constructed using the technique usually intended to reconstruct lost data blocks. Using a non-uniform algorithm in this way may result in an implementation that is much more complex than can be achieved by using a uniform and symmetric algorithm, such as SRD.

In Step 308, the write allocator 165 of the file system arranges the write data for storage on the disks in the stripe. In Step 310, the file system provides an indication of the reserved block(s) to the RAID system (storage module) via a write request message issued by the file system. In Step 312, the RAID system provides the parity information (and write data) to the disk driver system for storage on the disks. In particular, in Step 314, the parity is distributed among the blocks of the disks such that 1/N of the parity blocks is stored on each disk to thereby balance the data across the disks of the array. Moreover, the locations of the parity blocks "move" among the stripes of the array in a predictable pattern that appears complicated, but is easy to compute. The sequence then ends at Step 316.

Additional techniques by which a balanced semi-static distribution of redundant or parity blocks can be achieved in a double failure correcting array that has two redundant blocks per stripe include a technique that simply replaces each single disk in a single failure correcting semi-static array with a pair of disks in the double failure correcting array. Here, the role of each pair of disks is identical to the role of the corresponding single disk in the single failure-correcting array. Balance is maintained by using the same number of rows used in the single failure-correcting array; however, this technique is limited to adding disks to the array in multiples of two.

Another technique constructs a balanced or nearly balanced array by starting with two initial ("old") disks that are completely filled with parity blocks, then adding a third disk and moving every third parity block from each of the two initial disks to the new disk. This technique distributes one-third of the parity blocks to each disk, occupying two-thirds of the space on each disk. When reassigning parity blocks from an old disk, it may be discovered that the block on the new disk has already been designated as parity. In this case, the next possible parity block is reassigned from the old disk to the new disk, at the next row where the new disk does not yet contain parity and the old disk does.
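
One possible reading of this construction is sketched below (illustrative Python; the collision rule is paraphrased from the text above, and the two-parity-per-stripe representation and names are assumptions). Each stripe carries a set of two parity positions; a move that collides with parity already on the new disk is deferred to the next eligible row:

```python
def add_disk_double(parity_sets, n_old):
    """Grow a double-parity array by one disk (index n_old). Every
    (n_old + 1)-th parity block of each old disk moves to the new disk;
    on collision (new disk already holds parity in that stripe) the move
    is deferred to the next stripe where the old disk holds parity and
    the new disk does not."""
    n, new = n_old + 1, n_old
    for old in range(n_old):
        seen = owed = 0
        for holders in parity_sets:
            if old not in holders:
                continue
            seen += 1
            due = (seen % n == 0)
            if new in holders:        # collision: owe this move to a later row
                owed += due
                continue
            if due or owed:
                holders.remove(old)
                holders.add(new)
                owed -= (not due)
    return parity_sets

# Two initial disks entirely filled with parity, then a third disk added.
stripes = [{0, 1} for _ in range(12)]
add_disk_double(stripes, 2)
print([sum(d in h for h in stripes) for d in range(3)])  # [8, 9, 7]
```

Over twelve stripes the counts come out nearly balanced (the ideal is 8 each), consistent with the "balanced or nearly balanced" property claimed for this construction.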

This latter technique can be further extrapolated to build a deterministic set of parity assignments for any number of disks, with two redundant (e.g., parity) blocks per stripe and with the redundant blocks balanced or nearly balanced across the array. Similarly, for three or greater numbers of redundant blocks per stripe, the same technique can be employed to determine a placement of redundant blocks in a larger array of any size, in such a way that the number of redundant blocks per disk is balanced or nearly balanced. Moreover, the technique allows any number of disks to be added without ever changing a data block into a parity block, while continuing to keep the number of redundant blocks per disk balanced or nearly balanced.

Other similar techniques can be developed to determine the roles of blocks as data blocks or redundant blocks in any size array, while preserving the property that the array can be expanded incrementally as the distribution of both data and redundant blocks is kept balanced or nearly balanced, and without ever changing a data block into a redundant block. Any of these assignment techniques can be implemented by storing or generating a data structure (e.g., a table) in memory containing the assignments for a specific number of rows in an array of specific size. It is also possible to store in a single table all possible assignments of redundant blocks for any array size up to a certain limit. Here, for example, the table may store a bitmap for each row, where the one (or more) highest numbered bit set is selected that is less than N, wherein N is the number of disks in the array. In general, any table-based parity assignment that maintains balance of distributed data and redundant blocks, while allowing expansion without changing data blocks to redundant (parity) blocks, is contemplated by the present invention, regardless of the number of redundant blocks per row (i.e., the number of failures the array can tolerate).

The parity assignments for the semi-static distribution technique are calculated for a known size of a group 132-136 of the disk array or for a maximum group size of the array; either way, as noted, the calculated parity assignments may be stored in a table. A parity distribution pattern defined by the stored assignments and, in particular, a repeat interval of the pattern can be used to determine the location of parity storage on any disk in the array for a given group size and for a given stripe. That is, the pattern can be used to indicate which block in each stripe is used for parity or a different pattern can be used for several stripes.

Fig. 4 is a diagram of a parity assignment table 400 illustrating the repeat interval for various group sizes in accordance with the semi-static distribution technique. The parity distribution pattern repeats at a repetition interval dependent upon the group size of the array. If a group of size N repeats every K stripes, then the group of size (N+1) will repeat in the smallest number that both K and (N+1) evenly divide. Notably, the content of the table does not repeat until it reaches a number (repeat interval) dependent on the value of N, where N equals the number of disks. For example, in a 2-disk array (i.e., a group size of two), the parity distribution pattern repeats every two stripes. When a third disk is added (for a group size of three), the parity pattern repeats every six stripes. When a fourth disk is added (for a group size of four), the parity pattern repeats every twelve stripes. It can be seen from table 400 that for a group size of five (and six), the parity pattern repeats every sixty stripes.
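
In other words, the repeat interval for N disks is lcm(1, 2, ..., N), since each group size in turn must evenly divide the new interval. A few lines of illustrative Python (assuming Python 3.9+ for math.lcm) reproduce the intervals of table 400:

```python
from math import lcm   # available in Python 3.9+

k = 1
for n in range(2, 11):
    k = lcm(k, n)          # smallest number both k and n evenly divide
    print(n, k)            # 2 2, 3 6, 4 12, 5 60, 6 60, ..., 10 2520
```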

The repeat interval as a function of group size is determined in accordance with the set of unique prime factors ("primes") up to N, where N equals the number of disks. The repeat interval (which is equivalent to the number of entries in table 400) is less than N factorial and, in fact, is equal to the product of all primes less than or equal to N, with each prime raised to the largest power possible such that the result is less than or equal to N. As some of the numbers between one and N are prime numbers, it is clear that the repeat interval may get large, making the table large. For example, for N = 10, the table size is 2^3 x 3^2 x 5^1 x 7^1 = 8 x 9 x 5 x 7 = 2520. Similarly, for N = 32, the table size is 2^5 x 3^3 x 5^2 x 7^1 x 11^1 x 13^1 x 17^1 x 19^1 x 23^1 x 29^1 x 31^1 = 32 x 27 x 25 x 7 x 11 x 13 x 17 x 19 x 23 x 29 x 31 ≈ 144 x 10^12.
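
The prime-power product can be checked directly with a short sketch (illustrative Python; the trial-division primality test is used only for brevity):

```python
def table_size(n):
    """Product, over each prime p <= n, of the largest power of p <= n."""
    size = 1
    for p in range(2, n + 1):
        if all(p % d for d in range(2, p)):   # p is prime
            pk = p
            while pk * p <= n:                # raise p to the largest power <= n
                pk *= p
            size *= pk
    return size

print(table_size(10))   # 2520
print(table_size(32))   # 144403553893600, i.e. about 144 x 10^12
```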

A tradeoff may then be made between the table size of the pattern and precision of balancing; the table can be terminated at a reasonable point and the group size at that particular repeat interval can be used. Thereafter, even if there are more disks than the group size, the technique can continue to repeat the pattern and still realize nearly uniform balance of data across the array within, e.g., a half percent. For example, as noted above, a group size of ten translates into a parity distribution pattern that repeats every 2,520 stripes. A table of this size (i.e., 2,520 entries) is relatively compact in memory 124 and can be computed relatively quickly at start-up using appropriate software code. In contrast, the table for a group size of 32 (i.e., 144 x 10^12 entries) is too large to store in memory.

The 2,520 entry table works well with any reasonable number of disks to provide good data balance; however, it should be noted that this size table is not the only choice and other sized tables may also be used. The 2,520 entry pattern is perfectly balanced for N disks up to ten; for N greater than 10, the pattern provides good data balance even though the pattern has not repeated. In other words, although the parity assignment table for a 17-disk group is rather large (7.7 MB with 5 bits per pattern), if only a fraction of the table is used, good parity balance can still be achieved. Cutting off the pattern at 2,520, for example, yields perfect balance for all group sizes up to 10 disks, and less than 1% imbalance for larger groups, while limiting the table size to 2520 x 4 bits = 1260 bytes for N = 11 and 5 x 2520 bits = 1,575 bytes for N = 17 to 32.

The parity assignment table 400 can be encoded as a single number indicating a bit position of parity for a particular value of N. The table could also be coded as a bit vector, with one or two (or more) bits set indicating the position of a single or double (or greater) parity block providing single or double (or greater) disk failure protection. Moreover, the table can be encoded as a single table indicating (for all disk array sizes up to some limit, e.g., 32 disks) which disks possibly contain parity in each stripe. The determination of which disk actually contains parity for a specific value of N is then made by masking off the high order 32-N bits and selecting the highest order remaining one or two (or more) bits.
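
A sketch of that last encoding (illustrative Python; the 32-disk limit matches the example above, and the function and parameter names are hypothetical): each table row is a bitmap of candidate parity positions, and the lookup masks off the high-order 32-N bits before taking the highest surviving bit(s):

```python
def parity_disks(row_bitmap, n_disks, redundancy=1):
    """Pick the disk(s) holding parity for one stripe. row_bitmap marks
    candidate positions for the maximum array size (32 here); bits at or
    above n_disks are masked off and the highest remaining bits win."""
    masked = row_bitmap & ((1 << n_disks) - 1)
    picks = []
    for _ in range(redundancy):
        assert masked, "no candidate parity position for this array size"
        bit = masked.bit_length() - 1       # highest-order remaining set bit
        picks.append(bit)
        masked &= ~(1 << bit)
    return picks

# Candidates on disks 3, 9 and 20; in a 10-disk array the bit for disk 20
# is masked away, so disk 9 holds the stripe's parity.
print(parity_disks((1 << 20) | (1 << 9) | (1 << 3), n_disks=10))  # [9]
```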

In sum, semi-static distribution strives to keep the number of data blocks per disk roughly matched across the array to thereby "spread" the read load across all disks of the array. As a result, the technique eliminates any "bottleneck" in the array caused by throughput of any single disk in the array, while also eliminating the parity disk(s) as hot spot(s) for write operations. The general technique can be applied using a symmetric algorithm, such as SRD parity, or an asymmetric double failure-correcting algorithm, such as Row-Diagonal (RD) parity. The RD parity technique is described in U.S. Patent No. 6,993,701 titled Row-Diagonal Parity Technique for Enabling Efficient Recovery from Double Failures in a Storage Array, by Peter F. Corbett et al., filed on December 28, 2001.

When employing a non-uniform algorithm, such as RD parity, the role of the disk in storing either data or redundant blocks in any particular block might be ignored with respect to the typical role of the disk in the asymmetric parity algorithm. Since any double failure correcting algorithm can construct missing "data" for any two missing disks of an array, the contents of all the blocks in the row that are assigned the role of storing data are fixed and the contents of the two redundant blocks are computed using the double failure correcting algorithm, which is applied differently depending on the positions of the disks in the row. Having stored two redundant blocks in each row, the array can tolerate two disk failures, recovering the lost data or redundant blocks regardless of the roles of the lost blocks in any particular stripe.

Alternatively, since the roles of the disks are deterministically defined, any algorithm that allows any two or more disks in the array to contain the redundant information can be employed. Using such an algorithm may require the recomputation of parity in stripes where the parity blocks move, but it does preserve the advantage of the invention that no data blocks are moved. SRD has the additional advantage that no parity blocks need be recomputed when parity block(s) are assigned to the newly added disk(s).

The distribution technique described herein is particularly useful for systems having fewer disks yet that want to utilize all read operations per second (ops) that are available from those disks. Performance of smaller arrays is bounded by the ops that are achievable from disks (disk-bound). Yet even in large arrays where disks get larger, because of reconstruction times, the tendency is to reduce the number of disks per group 132-136. This results in an increase in redundancy overhead (the percentage of disks in a group devoted to redundancy increases). Therefore, it is desirable to take advantage of the read ops available in those redundant disks. Another advantage of the distribution technique is that reconstruction and/or recovery occurs "blindly" (i.e., without knowing the roles of the disks).

Semi-static distribution may be advantageously used with arrays having low numbers of large disks, since the technique balances data across the array. Using larger disks is required to get reasonable capacity, but that also means using smaller groups to limit reconstruction time. If a 14-disk configuration uses two groups and one spare, then over 20% of the disks are unavailable for use in storing or retrieving data. Configurations with eight disks are even worse.

As noted, the semi-static distribution technique allows incremental addition of disks to a distributed parity implementation of a disk array. An advantage of the inventive distribution technique over a RAID-5 level implementation is that it allows easy expansion of the array, avoiding the need to add an entire group to the array or to perform an expensive RAID-5 reorganization. The semi-static distribution technique may be used in connection with single/double failure error correction. In addition, the technique allows use of multiple disk sizes in the same group 132-136.

While illustrative embodiments of a semi-static distribution technique that distributes parity across disks have been shown and described, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. For example, the distribution technique described herein may apply to block-based RAID arrays to, e.g., allow easy addition of disks to RAID groups. Block-based RAID arrays generally are not aware of which of the blocks they are asked to store contain file system data. Instead, the arrays must assume that all blocks not previously designated as parity blocks contain file system data. Therefore, they usually pre-allocate which blocks will be used for parity. For a given array, these pre-allocated blocks remain fixed. Normally this is done according to some predetermined algorithm so that the system does not have to keep track of each parity block.

According to the invention, the RAID system may move the parity designation of some of the blocks in the existing disks to the new disks using the semi-static distribution technique. The RAID system must also ensure that logical unit number (lun) block offsets of non-parity blocks in the existing disks are not changed. The new space will then be distributed among all the disks. This non-linear mapping is usually not desirable in block-based arrays, as file systems cannot compensate for it. However, this effect can be mitigated if the parity blocks are allocated contiguously in large chunks (e.g., at least a track size).

It will be understood to those skilled in the art that the inventive technique described herein may apply to any type of special-purpose (e.g., file server, filer or multi-protocol storage appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system 100. An example of a multi-protocol storage appliance that may be advantageously used with the present invention is described in U.S. Patent Application Publication No. 2004/0030668 titled Multi-Protocol Storage Appliance that provides Integrated Support for File and Block Access Protocols, filed on August 8, 2002. Moreover, the teachings of this invention can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client or host computer. The term "storage system" should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems.

The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, the semi-static distribution technique can be generalized to other applications involving the distribution of data structures among persistent storage, e.g., disks, or non-persistent storage, e.g., memory, of a system. Broadly, the technique may apply to the redistribution of any commodity over any set of containers as more containers are added to the system. As an example, the semi-static technique may apply to a system having units and containers, wherein the units are distributed uniformly over the containers and wherein it is desirable to maintain a balanced rate of assignment of units to containers along some numbered dimension. When a new container is added to the system, the technique may be employed to transfer some of the existing units to the new container in such a way that overall and localized balance is maintained.

More specifically, the semi-static technique can be applied to distribution of data structures, such as inode file blocks, among persistent storage devices, such as disks, of an array coupled to a plurality of storage entities, such as storage "heads". Note that a "head" is defined as all parts of a storage system, excluding the disks. An example of such an application involves distributing existing inode file blocks over the plurality of (N) storage heads, which includes one or more newly added storage heads. Here, the inventive semi-static distribution technique may be used to move only 1/N of any existing inode file blocks to the newly added storage head.

It is expressly contemplated that the teachings of this invention can be implemented as software, including a computer-readable medium having program instructions executing on a computer, hardware, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

What is claimed is:

Claims (16)

1. A method for distributing parity blocks across a disk array, the method comprising:

adding a new disk to a number of pre-existing disks of the array, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern;

dividing each disk into blocks, the blocks being organized into stripes such that each stripe contains one block from each disk; and distributing parity among blocks of the new and pre-existing disks without recalculation of the parity or moving of any blocks containing data by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.
2. The method of claim 1 wherein the step of distributing comprises the step of distributing parity among blocks of the new and pre-existing disks in a manner that maintains a predictable pattern of parity blocks among stripes of the disks.
3. The method of claim 2 wherein the predictable pattern appears complicated but repeats in a set repeat interval.
4. The method of claim 3 wherein the repeat interval is equal to a smallest number that both K and N evenly divide, where K is equal to how often N-1 disks repeat.
5. The method of claim 1 wherein the step of distributing comprises the step of changing an assignment for one or more blocks containing parity of each pre-existing disk to the newly added disk.
6. The method of claim 2 wherein the step of adding comprises the step of initializing the added disk so as to not affect parity of the stripes.
7. The method of claim 6 wherein the step of initializing comprises the step of reassigning blocks containing parity in certain stripes to the new disk without calculation or writing of parity.
8. The method of claim 7 wherein the step of reassigning comprises the step of changing a block containing parity (parity block) to a block containing data (data block) and not changing a data block to a parity block.
9. A system adapted to distribute parity across disks of a storage system, the system comprising:

a disk array comprising a number of pre-existing disks and at least one new disk, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern; and a storage module configured to compute parity in blocks of stripes across the disks and reconstruct blocks of disks lost as a result of failure, the storage module further configured to assign the parity among the blocks of the new and pre-existing disks without recalculation of the parity or moving of any data blocks by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.
10. The system of claim 9 further comprising a table configured to store parity assignments calculated for one of a known group size of the disk array and a maximum group size of the array, the stored parity assignments defining a repeat interval of a parity distribution pattern used to determine locations of parity storage on any disk in the array.
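One hedged way to realize such a table is to precompute assignments over one repeat interval by replaying the semi-static growth from one disk up to the maximum group size, then index modulo the interval; the Python below is an illustrative sketch under those assumptions, not the patented code:

def build_parity_table(max_disks, interval):
    """Precompute parity ownership for one repeat interval by replaying
    the semi-static growth from a 1-disk array up to max_disks disks."""
    table = [0] * interval                       # one disk owns all parity
    for n in range(2, max_disks + 1):            # add disks one at a time
        new_disk, seen = n - 1, {}
        for s in range(interval):
            owner = table[s]
            seen[owner] = seen.get(owner, 0) + 1
            if seen[owner] % n == 0:             # every Nth parity block moves
                table[s] = new_disk
    return table

# interval chosen so each growth step divides evenly; lcm(1..5) = 60 suffices
TABLE = build_parity_table(max_disks=5, interval=60)

def parity_disk(stripe):
    # disk holding parity for any stripe, via the repeat interval of the pattern
    return TABLE[stripe % len(TABLE)]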
11. The system of claim 9 wherein the storage module is embodied as a RAID system on an operating system of the storage system.
12. The system of claim 9 wherein the storage module is embodied as an internal disk array controller of the storage system.
13. The system of claim 9 wherein the storage module is embodied as a disk array control system externally coupled to the storage system.
14. The system of claim 9 wherein the disk array is a block-based RAID array.
15. Apparatus for distributing parity across a disk array, the apparatus comprising:

means for adding a new disk to a number of pre-existing disks of the array, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern;

means for dividing each disk into blocks, the blocks being organized into stripes such that each stripe contains one block from each disk; and

means for distributing parity among blocks of the new and pre-existing disks without recalculation of the parity or moving of any blocks containing data by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.
16. A computer readable medium containing executable program instructions for distributing parity across a disk array, the executable instructions comprising one or more program instructions for:

adding a new disk to a number of pre-existing disks of the array, wherein each pre-existing disk stores P/(N-1) parity blocks, wherein P is equal to a total number of parity blocks stored across the pre-existing disks and the parity blocks are stored in a non-uniform pattern;

dividing each disk into blocks, the blocks being organized into stripes such that each stripe contains one block from each disk; and

distributing parity among blocks of the new and pre-existing disks without recalculation of the parity or moving of any blocks containing data by moving every Nth parity block to the new disk to arrange each disk of the array with approximately 1/N parity blocks, where N is equal to the number of pre-existing disks plus the new disk.
CA 2546242 2003-11-24 2004-11-24 Semi-static distribution technique Active CA2546242C (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10720364 US7185144B2 (en) 2003-11-24 2003-11-24 Semi-static distribution technique
US10/720,364 2003-11-24
PCT/US2004/039618 WO2005052784A3 (en) 2003-11-24 2004-11-24 Semi-static parity distribution technique

Publications (2)

Publication Number Publication Date
CA2546242A1 true CA2546242A1 (en) 2005-06-09
CA2546242C true CA2546242C (en) 2011-07-26

Family

ID=34591531

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2546242 Active CA2546242C (en) 2003-11-24 2004-11-24 Semi-static distribution technique

Country Status (7)

Country Link
US (2) US7185144B2 (en)
JP (1) JP2007516524A (en)
KR (1) KR101148697B1 (en)
CN (1) CN101023412B (en)
CA (1) CA2546242C (en)
EP (1) EP1687707B1 (en)
WO (1) WO2005052784A3 (en)

Families Citing this family (259)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7647451B1 (en) 2003-11-24 2010-01-12 Netapp, Inc. Data placement technique for striping data containers across volumes of a storage system cluster
US7698501B1 (en) * 2005-04-29 2010-04-13 Netapp, Inc. System and method for utilizing sparse data containers in a striped volume set
US7698289B2 (en) * 2003-12-02 2010-04-13 Netapp, Inc. Storage system architecture for striping data container content across volumes of a cluster
JP4324088B2 (en) * 2004-12-17 2009-09-02 富士通株式会社 Data replication control device
US7941602B2 (en) 2005-02-10 2011-05-10 Xiotech Corporation Method, apparatus and program storage device for providing geographically isolated failover using instant RAID swapping in mirrored virtual disks
US20060218360A1 (en) * 2005-03-22 2006-09-28 Burkey Todd R Method, apparatus and program storage device for providing an optimized read methodology for synchronously mirrored virtual disk pairs
US7698334B2 (en) * 2005-04-29 2010-04-13 Netapp, Inc. System and method for multi-tiered meta-data caching and distribution in a clustered computer environment
US7904649B2 (en) 2005-04-29 2011-03-08 Netapp, Inc. System and method for restriping data across a plurality of volumes
JP4473175B2 (en) * 2005-05-13 2010-06-02 富士通株式会社 Storage control method, program and apparatus
US8819011B2 (en) * 2008-07-16 2014-08-26 Cleversafe, Inc. Command line interpreter for accessing a data object stored in a distributed storage network
US8489915B2 (en) * 2009-07-30 2013-07-16 Cleversafe, Inc. Method and apparatus for storage integrity processing based on error types in a dispersed storage network
US8694668B2 (en) * 2005-09-30 2014-04-08 Cleversafe, Inc. Streaming media software interface to a dispersed data storage network
US9632722B2 (en) 2010-05-19 2017-04-25 International Business Machines Corporation Balancing storage unit utilization within a dispersed storage network
US8706980B2 (en) * 2009-07-30 2014-04-22 Cleversafe, Inc. Method and apparatus for slice partial rebuilding in a dispersed storage network
US9009575B2 (en) 2009-07-30 2015-04-14 Cleversafe, Inc. Rebuilding a data revision in a dispersed storage network
US8555109B2 (en) * 2009-07-30 2013-10-08 Cleversafe, Inc. Method and apparatus for distributed storage integrity processing
US7953937B2 (en) 2005-09-30 2011-05-31 Cleversafe, Inc. Systems, methods, and apparatus for subdividing data for storage in a dispersed data storage grid
US8630987B2 (en) * 2008-07-16 2014-01-14 Cleversafe, Inc. System and method for accessing a data object stored in a distributed storage network
US9558059B2 (en) 2009-07-30 2017-01-31 International Business Machines Corporation Detecting data requiring rebuilding in a dispersed storage network
US7574579B2 (en) * 2005-09-30 2009-08-11 Cleversafe, Inc. Metadata management system for an information dispersed storage system
US9774684B2 (en) 2005-09-30 2017-09-26 International Business Machines Corporation Storing data in a dispersed storage network
US8352782B2 (en) * 2005-09-30 2013-01-08 Cleversafe, Inc. Range based rebuilder for use with a dispersed data storage network
US9027080B2 (en) * 2008-03-31 2015-05-05 Cleversafe, Inc. Proxy access to a dispersed storage network
US9501355B2 (en) 2008-03-31 2016-11-22 International Business Machines Corporation Storing data and directory information in a distributed storage network
US8880799B2 (en) * 2005-09-30 2014-11-04 Cleversafe, Inc. Rebuilding data on a dispersed storage network
US8171101B2 (en) * 2005-09-30 2012-05-01 Cleversafe, Inc. Smart access to a dispersed data storage network
US7546427B2 (en) * 2005-09-30 2009-06-09 Cleversafe, Inc. System for rebuilding dispersed data
US8856552B2 (en) * 2008-03-31 2014-10-07 Cleversafe, Inc. Directory synchronization of a dispersed storage network
US7574570B2 (en) * 2005-09-30 2009-08-11 Cleversafe, Inc. Billing system for information dispersal system
EP1949214B1 (en) 2005-10-28 2012-12-19 Network Appliance, Inc. System and method for optimizing multi-pathing support in a distributed storage system environment
US8255425B1 (en) 2005-11-01 2012-08-28 Netapp, Inc. System and method for event notification using an event routing table
US7730258B1 (en) 2005-11-01 2010-06-01 Netapp, Inc. System and method for managing hard and soft lock state information in a distributed storage system environment
US8012542B2 (en) * 2005-12-30 2011-09-06 E.I. Du Pont De Nemours And Company Fluoropolymer coating compositions containing adhesive polymers and substrate coating process
US20070180300A1 (en) * 2006-01-02 2007-08-02 Via Technologies, Inc. Raid and related access method
US7934120B2 (en) * 2006-09-11 2011-04-26 International Business Machines Corporation Storing data redundantly
US8301673B2 (en) * 2006-12-29 2012-10-30 Netapp, Inc. System and method for performing distributed consistency verification of a clustered file system
US8489811B1 (en) 2006-12-29 2013-07-16 Netapp, Inc. System and method for addressing data containers using data set identifiers
US8639656B2 (en) 2007-02-02 2014-01-28 International Business Machines Corporation Method for implementing persistent file pre-allocation
US8312046B1 (en) 2007-02-28 2012-11-13 Netapp, Inc. System and method for enabling a data container to appear in a plurality of locations in a super-namespace
US7827350B1 (en) 2007-04-27 2010-11-02 Netapp, Inc. Method and system for promoting a snapshot in a distributed file system
US7797489B1 (en) 2007-06-01 2010-09-14 Netapp, Inc. System and method for providing space availability notification in a distributed striped volume set
US7975102B1 (en) 2007-08-06 2011-07-05 Netapp, Inc. Technique to avoid cascaded hot spotting
US8549351B2 (en) * 2007-10-09 2013-10-01 Cleversafe, Inc. Pessimistic data reading in a dispersed storage network
US8533256B2 (en) * 2007-10-09 2013-09-10 Cleversafe, Inc. Object interface to a dispersed data storage network
US8572429B2 (en) * 2007-10-09 2013-10-29 Cleversafe, Inc. Optimistic data writing in a dispersed storage network
US8185614B2 (en) * 2007-10-09 2012-05-22 Cleversafe, Inc. Systems, methods, and apparatus for identifying accessible dispersed digital storage vaults utilizing a centralized registry
US8819179B2 (en) 2007-10-09 2014-08-26 Cleversafe, Inc. Data revision synchronization in a dispersed storage network
US8209363B2 (en) 2007-10-09 2012-06-26 Cleversafe, Inc. File system adapted for use with a dispersed data storage network
US8965956B2 (en) * 2007-10-09 2015-02-24 Cleversafe, Inc. Integrated client for use with a dispersed data storage network
US7904475B2 (en) * 2007-10-09 2011-03-08 Cleversafe, Inc. Virtualized data storage vaults on a dispersed data storage network
US20090094250A1 (en) * 2007-10-09 2009-04-09 Greg Dhuse Ensuring data integrity on a dispersed storage grid
US8478865B2 (en) * 2007-10-09 2013-07-02 Cleversafe, Inc. Systems, methods, and apparatus for matching a connection request with a network interface adapted for use with a dispersed data storage network
US9697171B2 (en) 2007-10-09 2017-07-04 International Business Machines Corporation Multi-writer revision synchronization in a dispersed storage network
US8285878B2 (en) * 2007-10-09 2012-10-09 Cleversafe, Inc. Block based access to a dispersed data storage network
US8156405B1 (en) * 2007-12-07 2012-04-10 Emc Corporation Efficient redundant memory unit array
US7996607B1 (en) 2008-01-28 2011-08-09 Netapp, Inc. Distributing lookup operations in a striped storage system
US7971013B2 (en) 2008-04-30 2011-06-28 Xiotech Corporation Compensating for write speed differences between mirroring storage devices by striping
US8429514B1 (en) * 2008-09-24 2013-04-23 Network Appliance, Inc. Dynamic load balancing of distributed parity in a RAID array
US7992055B1 (en) 2008-11-07 2011-08-02 Netapp, Inc. System and method for providing autosupport for a security system
US8161076B1 (en) * 2009-04-02 2012-04-17 Netapp, Inc. Generation and use of a data structure for distributing responsibilities among multiple resources in a network storage system
US8656187B2 (en) * 2009-04-20 2014-02-18 Cleversafe, Inc. Dispersed storage secure data decoding
US8819781B2 (en) * 2009-04-20 2014-08-26 Cleversafe, Inc. Management of network devices within a dispersed data storage network
US8504847B2 (en) * 2009-04-20 2013-08-06 Cleversafe, Inc. Securing data in a dispersed storage network using shared secret slices
US9483656B2 (en) 2009-04-20 2016-11-01 International Business Machines Corporation Efficient and secure data storage utilizing a dispersed data storage system
US8601259B2 (en) * 2009-04-20 2013-12-03 Cleversafe, Inc. Securing data in a dispersed storage network using security sentinel value
US9092294B2 (en) * 2009-04-20 2015-07-28 Cleversafe, Inc. Systems, apparatus, and methods for utilizing a reachability set to manage a network upgrade
US8744071B2 (en) * 2009-04-20 2014-06-03 Cleversafe, Inc. Dispersed data storage system data encryption and encoding
US20100269008A1 (en) * 2009-04-20 2010-10-21 Cleversafe, Inc. Dispersed data storage system data decoding and decryption
US20100268692A1 (en) 2009-04-20 2010-10-21 Cleversafe, Inc. Verifying data security in a dispersed storage network
US8117388B2 (en) * 2009-04-30 2012-02-14 Netapp, Inc. Data distribution through capacity leveling in a striped file system
US20100332751A1 (en) * 2009-06-30 2010-12-30 Cleversafe, Inc. Distributed storage processing module
US8595435B2 (en) * 2009-07-30 2013-11-26 Cleversafe, Inc. Dispersed storage write process
US8914669B2 (en) 2010-04-26 2014-12-16 Cleversafe, Inc. Secure rebuilding of an encoded data slice in a dispersed storage network
US8909858B2 (en) 2010-06-09 2014-12-09 Cleversafe, Inc. Storing encoded data slices in a dispersed storage network
US9208025B2 (en) 2009-07-30 2015-12-08 Cleversafe, Inc. Virtual memory mapping in a dispersed storage network
US9207870B2 (en) 2009-07-30 2015-12-08 Cleversafe, Inc. Allocating storage units in a dispersed storage network
US8275744B2 (en) * 2009-07-30 2012-09-25 Cleversafe, Inc. Dispersed storage network virtual address fields
US8527838B2 (en) * 2009-07-31 2013-09-03 Cleversafe, Inc. Memory controller utilizing an error coding dispersal function
US9167277B2 (en) * 2009-08-03 2015-10-20 Cleversafe, Inc. Dispersed storage network data manipulation
US9772791B2 (en) * 2009-08-27 2017-09-26 International Business Machines Corporation Dispersed storage processing unit and methods with geographical diversity for use in a dispersed storage system
US9411810B2 (en) * 2009-08-27 2016-08-09 International Business Machines Corporation Method and apparatus for identifying data inconsistency in a dispersed storage network
US8560855B2 (en) * 2009-08-27 2013-10-15 Cleversafe, Inc. Verification of dispersed storage network access control information
US8949695B2 (en) 2009-08-27 2015-02-03 Cleversafe, Inc. Method and apparatus for nested dispersed storage
US8357048B2 (en) * 2009-09-29 2013-01-22 Cleversafe, Inc. Interactive gaming utilizing a dispersed storage network
US8548913B2 (en) 2009-09-29 2013-10-01 Cleversafe, Inc. Method and apparatus to secure an electronic commerce transaction
US8554994B2 (en) * 2009-09-29 2013-10-08 Cleversafe, Inc. Distributed storage network utilizing memory stripes
US8281181B2 (en) * 2009-09-30 2012-10-02 Cleversafe, Inc. Method and apparatus for selectively active dispersed storage memory device utilization
US8438456B2 (en) * 2009-10-05 2013-05-07 Cleversafe, Inc. Method and apparatus for dispersed storage of streaming data
US9015431B2 (en) * 2009-10-29 2015-04-21 Cleversafe, Inc. Distributed storage revision rollbacks
US9661356B2 (en) 2009-10-29 2017-05-23 International Business Machines Corporation Distribution of unique copies of broadcast data utilizing fault-tolerant retrieval from dispersed storage
US8291277B2 (en) * 2009-10-29 2012-10-16 Cleversafe, Inc. Data distribution utilizing unique write parameters in a dispersed storage system
US9774678B2 (en) 2009-10-29 2017-09-26 International Business Machines Corporation Temporarily storing data in a dispersed storage network
US9413529B2 (en) 2009-10-30 2016-08-09 International Business Machines Corporation Distributed storage network and method for storing and retrieving encryption keys
US9667701B2 (en) 2009-10-30 2017-05-30 International Business Machines Corporation Robust reception of data utilizing encoded data slices
US9098376B2 (en) 2009-10-30 2015-08-04 Cleversafe, Inc. Distributed storage network for modification of a data object
US9311185B2 (en) 2009-10-30 2016-04-12 Cleversafe, Inc. Dispersed storage unit solicitation method and apparatus
US9195408B2 (en) 2009-10-30 2015-11-24 Cleversafe, Inc. Highly autonomous dispersed storage system retrieval method
US8589637B2 (en) * 2009-10-30 2013-11-19 Cleversafe, Inc. Concurrent set storage in distributed storage network
US8464133B2 (en) * 2009-10-30 2013-06-11 Cleversafe, Inc. Media content distribution in a social network utilizing dispersed storage
US8769035B2 (en) 2009-10-30 2014-07-01 Cleversafe, Inc. Distributed storage network for storing a data object based on storage requirements
US20110102546A1 (en) * 2009-10-30 2011-05-05 Cleversafe, Inc. Dispersed storage camera device and method of operation
US9842222B2 (en) 2010-08-25 2017-12-12 International Business Machines Corporation Securely rebuilding an encoded data slice
US9152514B2 (en) 2009-11-24 2015-10-06 Cleversafe, Inc. Rebuilding a data segment in a dispersed storage network
US9501349B2 (en) 2009-11-24 2016-11-22 International Business Machines Corporation Changing dispersed storage error encoding parameters
US8918897B2 (en) 2009-11-24 2014-12-23 Cleversafe, Inc. Dispersed storage network data slice integrity verification
US9270298B2 (en) 2009-11-24 2016-02-23 International Business Machines Corporation Selecting storage units to rebuild an encoded data slice
US9836352B2 (en) 2009-11-25 2017-12-05 International Business Machines Corporation Detecting a utilization imbalance between dispersed storage network storage units
US8819452B2 (en) 2009-11-25 2014-08-26 Cleversafe, Inc. Efficient storage of encrypted data in a dispersed storage network
US8527807B2 (en) * 2009-11-25 2013-09-03 Cleversafe, Inc. Localized dispersed storage memory system
US9626248B2 (en) 2009-11-25 2017-04-18 International Business Machines Corporation Likelihood based rebuilding of missing encoded data slices
US9489264B2 (en) 2009-11-25 2016-11-08 International Business Machines Corporation Storing an encoded data slice as a set of sub-slices
US9672109B2 (en) 2009-11-25 2017-06-06 International Business Machines Corporation Adaptive dispersed storage network (DSN) and system
US8688907B2 (en) * 2009-11-25 2014-04-01 Cleversafe, Inc. Large scale subscription based dispersed storage network
US8621268B2 (en) * 2009-11-25 2013-12-31 Cleversafe, Inc. Write threshold utilization in a dispersed storage system
US8762343B2 (en) * 2009-12-29 2014-06-24 Cleversafe, Inc. Dispersed storage of software
US9727266B2 (en) 2009-12-29 2017-08-08 International Business Machines Corporation Selecting storage units in a dispersed storage network
US9866595B2 (en) 2009-12-29 2018-01-09 International Business Machines Corporation Policy based slice deletion in a dispersed storage network
US8352831B2 (en) * 2009-12-29 2013-01-08 Cleversafe, Inc. Digital content distribution utilizing dispersed storage
US8468368B2 (en) * 2009-12-29 2013-06-18 Cleversafe, Inc. Data encryption parameter dispersal
US8990585B2 (en) * 2009-12-29 2015-03-24 Cleversafe, Inc. Time based dispersed storage access
US9672108B2 (en) 2009-12-29 2017-06-06 International Business Machines Corporation Dispersed storage network (DSN) and system with improved security
US9413393B2 (en) 2009-12-29 2016-08-09 International Business Machines Corporation Encoding multi-media content for a centralized digital video storage system
US9369526B2 (en) 2009-12-29 2016-06-14 International Business Machines Corporation Distributed storage time synchronization based on retrieval delay
US20130246812A1 (en) 2009-12-29 2013-09-19 Cleversafe, Inc. Secure storage of secret data in a dispersed storage network
US9798467B2 (en) 2009-12-29 2017-10-24 International Business Machines Corporation Security checks for proxied requests
US9305597B2 (en) 2009-12-29 2016-04-05 Cleversafe, Inc. Accessing stored multi-media content based on a subscription priority level
US9330241B2 (en) 2009-12-29 2016-05-03 International Business Machines Corporation Applying digital rights management to multi-media file playback
US9507735B2 (en) 2009-12-29 2016-11-29 International Business Machines Corporation Digital content retrieval utilizing dispersed storage
US20110184997A1 (en) * 2010-01-28 2011-07-28 Cleversafe, Inc. Selecting storage facilities in a plurality of dispersed storage networks
US8959366B2 (en) * 2010-01-28 2015-02-17 Cleversafe, Inc. De-sequencing encoded data slices
US8352501B2 (en) 2010-01-28 2013-01-08 Cleversafe, Inc. Dispersed storage network utilizing revision snapshots
US9760440B2 (en) 2010-01-28 2017-09-12 International Business Machines Corporation Site-based namespace allocation
US9043548B2 (en) 2010-01-28 2015-05-26 Cleversafe, Inc. Streaming content storage
US9201732B2 (en) 2010-01-28 2015-12-01 Cleversafe, Inc. Selective activation of memory to retrieve data in a dispersed storage network
US8954667B2 (en) * 2010-01-28 2015-02-10 Cleversafe, Inc. Data migration in a dispersed storage network
US9311184B2 (en) * 2010-02-27 2016-04-12 Cleversafe, Inc. Storing raid data as encoded data slices in a dispersed storage network
US9135115B2 (en) 2010-02-27 2015-09-15 Cleversafe, Inc. Storing data in multiple formats including a dispersed storage format
US8347169B1 (en) * 2010-03-01 2013-01-01 Applied Micro Circuits Corporation System and method for encoding using common partial parity products
US8566552B2 (en) * 2010-03-12 2013-10-22 Cleversafe, Inc. Dispersed storage network resource allocation
US8707091B2 (en) * 2010-03-15 2014-04-22 Cleversafe, Inc. Failsafe directory file system in a dispersed storage network
US9170884B2 (en) 2010-03-16 2015-10-27 Cleversafe, Inc. Utilizing cached encoded data slices in a dispersed storage network
US9229824B2 (en) 2010-03-16 2016-01-05 International Business Machines Corporation Caching rebuilt encoded data slices in a dispersed storage network
US8495466B2 (en) * 2010-03-16 2013-07-23 Cleversafe, Inc. Adjusting data dispersal in a dispersed storage network
US9606858B2 (en) 2010-04-26 2017-03-28 International Business Machines Corporation Temporarily storing an encoded data slice
US9077734B2 (en) 2010-08-02 2015-07-07 Cleversafe, Inc. Authentication of devices of a dispersed storage network
US9495117B2 (en) 2010-04-26 2016-11-15 International Business Machines Corporation Storing data in a dispersed storage network
US9047218B2 (en) 2010-04-26 2015-06-02 Cleversafe, Inc. Dispersed storage network slice name verification
US8938552B2 (en) 2010-08-02 2015-01-20 Cleversafe, Inc. Resolving a protocol issue within a dispersed storage network
US9092386B2 (en) 2010-04-26 2015-07-28 Cleversafe, Inc. Indicating an error within a dispersed storage network
US8625635B2 (en) 2010-04-26 2014-01-07 Cleversafe, Inc. Dispersed storage network frame protocol header
US8959597B2 (en) 2010-05-19 2015-02-17 Cleversafe, Inc. Entity registration in multiple dispersed storage networks
US8448044B2 (en) 2010-05-19 2013-05-21 Cleversafe, Inc. Retrieving data from a dispersed storage network in accordance with a retrieval threshold
US8621580B2 (en) 2010-05-19 2013-12-31 Cleversafe, Inc. Retrieving access information in a dispersed storage network
US9231768B2 (en) 2010-06-22 2016-01-05 International Business Machines Corporation Utilizing a deterministic all or nothing transformation in a dispersed storage network
US8612831B2 (en) 2010-06-22 2013-12-17 Cleversafe, Inc. Accessing data stored in a dispersed storage memory
US9063968B2 (en) 2010-08-02 2015-06-23 Cleversafe, Inc. Identifying a compromised encoded data slice
US8842746B2 (en) 2010-08-02 2014-09-23 Cleversafe, Inc. Receiving encoded data slices via wireless communication
US8762793B2 (en) 2010-08-26 2014-06-24 Cleversafe, Inc. Migrating encoded data slices from a re-provisioned memory device of a dispersed storage network memory
US9116831B2 (en) 2010-10-06 2015-08-25 Cleversafe, Inc. Correcting an errant encoded data slice
US9843412B2 (en) 2010-10-06 2017-12-12 International Business Machines Corporation Optimizing routing of data across a communications network
US9571230B2 (en) 2010-10-06 2017-02-14 International Business Machines Corporation Adjusting routing of data within a network path
US8612821B2 (en) 2010-10-06 2013-12-17 Cleversafe, Inc. Data transmission utilizing route selection and dispersed storage error encoding
US8707105B2 (en) 2010-11-01 2014-04-22 Cleversafe, Inc. Updating a set of memory devices in a dispersed storage network
US9015499B2 (en) 2010-11-01 2015-04-21 Cleversafe, Inc. Verifying data integrity utilizing dispersed storage
US9274977B2 (en) 2010-11-01 2016-03-01 International Business Machines Corporation Storing data integrity information utilizing dispersed storage
US8627065B2 (en) 2010-11-09 2014-01-07 Cleversafe, Inc. Validating a certificate chain in a dispersed storage network
US9590838B2 (en) 2010-11-09 2017-03-07 International Business Machines Corporation Transferring data of a dispersed storage network
US9454431B2 (en) 2010-11-29 2016-09-27 International Business Machines Corporation Memory selection for slice storage in a dispersed storage network
US9336139B2 (en) 2010-11-29 2016-05-10 Cleversafe, Inc. Selecting a memory for storage of an encoded data slice in a dispersed storage network
US8892845B2 (en) 2010-12-22 2014-11-18 Cleversafe, Inc. Segmenting data for storage in a dispersed storage network
US8897443B2 (en) 2010-12-27 2014-11-25 Cleversafe, Inc. Watermarking slices stored in a dispersed storage network
US8688949B2 (en) 2011-02-01 2014-04-01 Cleversafe, Inc. Modifying data storage in response to detection of a memory system imbalance
US20120198066A1 (en) 2011-02-01 2012-08-02 Cleversafe, Inc. Utilizing a dispersed storage network access token module to acquire digital content from a digital content provider
US20120226772A1 (en) 2011-03-02 2012-09-06 Cleversafe, Inc. Transferring data utilizing a transfer token module
US20120226667A1 (en) 2011-03-02 2012-09-06 Cleversafe, Inc. Determining a staleness state of a dispersed storage network local directory
US8874991B2 (en) 2011-04-01 2014-10-28 Cleversafe, Inc. Appending data to existing data stored in a dispersed storage network
US8880978B2 (en) 2011-04-01 2014-11-04 Cleversafe, Inc. Utilizing a local area network memory and a dispersed storage network memory to access data
US9219604B2 (en) 2011-05-09 2015-12-22 Cleversafe, Inc. Generating an encrypted message for storage
US9298550B2 (en) 2011-05-09 2016-03-29 Cleversafe, Inc. Assigning a dispersed storage network address range in a maintenance free storage container
US9141458B2 (en) 2011-05-09 2015-09-22 Cleversafe, Inc. Adjusting a data storage address mapping in a maintenance free storage container
US8707393B2 (en) 2011-05-09 2014-04-22 Cleversafe, Inc. Providing dispersed storage network location information of a hypertext markup language file
US8762479B2 (en) 2011-06-06 2014-06-24 Cleversafe, Inc. Distributing multi-media content to a plurality of potential accessing devices
US20130013798A1 (en) 2011-07-06 2013-01-10 Cleversafe, Inc. Distribution of multi-media content to a user device
US8924770B2 (en) 2011-07-06 2014-12-30 Cleversafe, Inc. Rebuilding a data slice of a maintenance free storage container
US9135098B2 (en) 2011-07-27 2015-09-15 Cleversafe, Inc. Modifying dispersed storage network event records
US9229823B2 (en) 2011-08-17 2016-01-05 International Business Machines Corporation Storage and retrieval of dispersed storage network access information
US8751894B2 (en) 2011-09-06 2014-06-10 Cleversafe, Inc. Concurrent decoding of data streams
US8555130B2 (en) 2011-10-04 2013-10-08 Cleversafe, Inc. Storing encoded data slices in a dispersed storage unit
US9274864B2 (en) 2011-10-04 2016-03-01 International Business Machines Corporation Accessing large amounts of data in a dispersed storage network
US8856617B2 (en) 2011-10-04 2014-10-07 Cleversafe, Inc. Sending a zero information gain formatted encoded data slice
US8607122B2 (en) 2011-11-01 2013-12-10 Cleversafe, Inc. Accessing a large data object in a dispersed storage network
US9798616B2 (en) 2011-11-01 2017-10-24 International Business Machines Corporation Wireless sending a set of encoded data slices
US8627066B2 (en) 2011-11-03 2014-01-07 Cleversafe, Inc. Processing a dispersed storage network access request utilizing certificate chain validation information
US9584326B2 (en) 2011-11-28 2017-02-28 International Business Machines Corporation Creating a new file for a dispersed storage network
US8848906B2 (en) 2011-11-28 2014-09-30 Cleversafe, Inc. Encrypting data for storage in a dispersed storage network
US9817701B2 (en) 2011-12-12 2017-11-14 International Business Machines Corporation Threshold computing in a distributed computing system
US9304857B2 (en) 2011-12-12 2016-04-05 Cleversafe, Inc. Retrieving data from a distributed storage network
US9430286B2 (en) 2011-12-12 2016-08-30 International Business Machines Corporation Authorizing distributed task processing in a distributed storage network
US9584359B2 (en) 2011-12-12 2017-02-28 International Business Machines Corporation Distributed storage and computing of interim data
US9674155B2 (en) 2011-12-12 2017-06-06 International Business Machines Corporation Encrypting segmented data in a distributed computing system
US9009567B2 (en) 2011-12-12 2015-04-14 Cleversafe, Inc. Encrypting distributed computing data
US9141468B2 (en) 2011-12-12 2015-09-22 Cleversafe, Inc. Managing memory utilization in a distributed storage and task network
US9514132B2 (en) 2012-01-31 2016-12-06 International Business Machines Corporation Secure data migration in a dispersed storage network
US9203901B2 (en) 2012-01-31 2015-12-01 Cleversafe, Inc. Efficiently storing data in a dispersed storage network
US9146810B2 (en) 2012-01-31 2015-09-29 Cleversafe, Inc. Identifying a potentially compromised encoded data slice
US9465861B2 (en) 2012-01-31 2016-10-11 International Business Machines Corporation Retrieving indexed data from a dispersed storage network
US9588994B2 (en) 2012-03-02 2017-03-07 International Business Machines Corporation Transferring task execution in a distributed storage and task network
US20130232153A1 (en) 2012-03-02 2013-09-05 Cleversafe, Inc. Modifying an index node of a hierarchical dispersed storage index
US9380032B2 (en) 2012-04-25 2016-06-28 International Business Machines Corporation Encrypting data for storage in a dispersed storage network
US9632872B2 (en) 2012-06-05 2017-04-25 International Business Machines Corporation Reprioritizing pending dispersed storage network requests
US9613052B2 (en) 2012-06-05 2017-04-04 International Business Machines Corporation Establishing trust within a cloud computing system
US9292212B2 (en) 2012-06-25 2016-03-22 International Business Machines Corporation Detecting storage errors in a dispersed storage network
US8935761B2 (en) 2012-06-25 2015-01-13 Cleversafe, Inc. Accessing storage nodes in an on-line media storage system
US9537609B2 (en) 2012-08-02 2017-01-03 International Business Machines Corporation Storing a stream of data in a dispersed storage network
US9176822B2 (en) 2012-08-31 2015-11-03 Cleversafe, Inc. Adjusting dispersed storage error encoding parameters
US9424326B2 (en) 2012-09-13 2016-08-23 International Business Machines Corporation Writing data avoiding write conflicts in a dispersed storage network
US20140123316A1 (en) 2012-10-30 2014-05-01 Cleversafe, Inc. Access control of data in a dispersed storage network
US9298542B2 (en) 2012-10-30 2016-03-29 Cleversafe, Inc. Recovering data from corrupted encoded data slices
US9811533B2 (en) 2012-12-05 2017-11-07 International Business Machines Corporation Accessing distributed computing functions in a distributed computing system
US9521197B2 (en) 2012-12-05 2016-12-13 International Business Machines Corporation Utilizing data object storage tracking in a dispersed storage network
US20140181455A1 (en) * 2012-12-20 2014-06-26 Apple Inc. Category based space allocation for multiple storage devices
US9558067B2 (en) 2013-01-04 2017-01-31 International Business Machines Corporation Mapping storage of data in a dispersed storage network
US9311187B2 (en) 2013-01-04 2016-04-12 Cleversafe, Inc. Achieving storage compliance in a dispersed storage network
US9043499B2 (en) 2013-02-05 2015-05-26 Cleversafe, Inc. Modifying a dispersed storage network memory data access response plan
US9274908B2 (en) 2013-02-26 2016-03-01 International Business Machines Corporation Resolving write conflicts in a dispersed storage network
JP6135226B2 (en) * 2013-03-21 2017-05-31 日本電気株式会社 Information processing apparatus, information processing method, storage system, and computer program
US9456035B2 (en) 2013-05-03 2016-09-27 International Business Machines Corporation Storing related data in a dispersed storage network
US9405609B2 (en) 2013-05-22 2016-08-02 International Business Machines Corporation Storing data in accordance with a performance threshold
US9432341B2 (en) 2013-05-30 2016-08-30 International Business Machines Corporation Securing data in a dispersed storage network
US9424132B2 (en) 2013-05-30 2016-08-23 International Business Machines Corporation Adjusting dispersed storage network traffic due to rebuilding
US9652470B2 (en) 2013-07-01 2017-05-16 International Business Machines Corporation Storing data in a dispersed storage network
US9501360B2 (en) 2013-07-01 2016-11-22 International Business Machines Corporation Rebuilding data while reading data in a dispersed storage network
US20150039660A1 (en) 2013-07-31 2015-02-05 Cleversafe, Inc. Co-locate objects request
US20150039666A1 (en) 2013-07-31 2015-02-05 Cleversafe, Inc. Distributed storage network with client subsets and methods for use therewith
US20150067164A1 (en) 2013-08-29 2015-03-05 Cleversafe, Inc. Dispersed storage with coordinated execution and methods for use therewith
US9661074B2 (en) 2013-08-29 2017-05-23 International Business Machines Corporation Updating de-duplication tracking data for a dispersed storage network
US9857974B2 (en) 2013-10-03 2018-01-02 International Business Machines Corporation Session execution decision
US9781208B2 (en) 2013-11-01 2017-10-03 International Business Machines Corporation Obtaining dispersed storage network system registry information
US9594639B2 (en) 2014-01-06 2017-03-14 International Business Machines Corporation Configuring storage resources of a dispersed storage network
US8949692B1 (en) 2014-01-23 2015-02-03 DSSD, Inc. Method and system for service-aware parity placement in a storage system
US20150205667A1 (en) * 2014-01-23 2015-07-23 DSSD, Inc. Method and system for service-aware data placement in a storage system
US9778987B2 (en) 2014-01-31 2017-10-03 International Business Machines Corporation Writing encoded data slices in a dispersed storage network
US9552261B2 (en) 2014-01-31 2017-01-24 International Business Machines Corporation Recovering data from microslices in a dispersed storage network
US9529834B2 (en) 2014-02-26 2016-12-27 International Business Machines Corporation Concatenating data objects for storage in a dispersed storage network
US9665429B2 (en) 2014-02-26 2017-05-30 International Business Machines Corporation Storage of data with verification in a dispersed storage network
US9390283B2 (en) 2014-04-02 2016-07-12 International Business Machines Corporation Controlling access in a dispersed storage network
US9762395B2 (en) 2014-04-30 2017-09-12 International Business Machines Corporation Adjusting a number of dispersed storage units
US9542239B2 (en) 2014-04-30 2017-01-10 International Business Machines Corporation Resolving write request conflicts in a dispersed storage network
US9606867B2 (en) 2014-06-05 2017-03-28 International Business Machines Corporation Maintaining data storage in accordance with an access metric
US9690520B2 (en) 2014-06-30 2017-06-27 International Business Machines Corporation Recovering an encoded data slice in a dispersed storage network
US9838478B2 (en) 2014-06-30 2017-12-05 International Business Machines Corporation Identifying a task execution resource of a dispersed storage network
US9841925B2 (en) 2014-06-30 2017-12-12 International Business Machines Corporation Adjusting timing of storing data in a dispersed storage network
CN104156276B (en) * 2014-08-14 2017-06-09 浪潮电子信息产业股份有限公司 Method for protecting a RAID against damage to two disks
US9591076B2 (en) 2014-09-08 2017-03-07 International Business Machines Corporation Maintaining a desired number of storage units
US9727275B2 (en) 2014-12-02 2017-08-08 International Business Machines Corporation Coordinating storage of data in dispersed storage networks
US9727427B2 (en) 2014-12-31 2017-08-08 International Business Machines Corporation Synchronizing storage of data copies in a dispersed storage network
US9740547B2 (en) 2015-01-30 2017-08-22 International Business Machines Corporation Storing data using a dual path storage approach
US9826038B2 (en) 2015-01-30 2017-11-21 International Business Machines Corporation Selecting a data storage resource of a dispersed storage network
US20160259574A1 (en) * 2015-03-03 2016-09-08 International Business Machines Corporation Incremental replication of a source data set

Family Cites Families (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US34100A (en) * 1862-01-07 Improved step-ladder
US3876978A (en) 1973-06-04 1975-04-08 Ibm Archival data protection
US4092732A (en) 1977-05-31 1978-05-30 International Business Machines Corporation System for recovering data stored in failed memory unit
US4201976A (en) 1977-12-23 1980-05-06 International Business Machines Corporation Plural channel error correcting methods and means using adaptive reallocation of redundant channels among groups of channels
US4205324A (en) 1977-12-23 1980-05-27 International Business Machines Corporation Methods and means for simultaneously correcting several channels in error in a parallel multi channel data system using continuously modifiable syndromes and selective generation of internal channel pointers
JPS6353636B2 (en) 1979-09-04 1988-10-25 Fanuc Ltd
US4467421A (en) 1979-10-18 1984-08-21 Storage Technology Corporation Virtual storage system and method
GB2061575B (en) 1979-10-24 1984-09-19 Matsushita Electric Ind Co Ltd Method and apparatus for encoding low redundancy check words from source data
US4825403A (en) 1983-05-16 1989-04-25 Data General Corporation Apparatus guaranteeing that a controller in a disk drive system receives at least some data from an invalid track sector
JPS60142418A (en) 1983-12-28 1985-07-27 Hitachi Ltd Input/output error recovery system
FR2561428B1 (en) 1984-03-16 1986-09-12 Bull Sa Method for recording in a disk memory and disk memory system
US4667326A (en) 1984-12-20 1987-05-19 Advanced Micro Devices, Inc. Method and apparatus for error detection and correction in systems comprising floppy and/or hard disk drives
US5202979A (en) 1985-05-08 1993-04-13 Thinking Machines Corporation Storage system using multiple independently mechanically-driven storage units
US4722085A (en) 1986-02-03 1988-01-26 Unisys Corp. High capacity disk storage system having unusually high fault tolerance level and bandpass
JPH0675329B2 (en) 1986-02-18 1994-09-21 ソニー株式会社 Disc player
US4761785B1 (en) 1986-06-12 1996-03-12 Ibm Parity spreading to enhance storage access
US4775978A (en) 1987-01-12 1988-10-04 Magnetic Peripherals Inc. Data error correction system
USRE34100E (en) 1987-01-12 1992-10-13 Seagate Technology, Inc. Data error correction system
US4796260A (en) 1987-03-30 1989-01-03 Scs Telecom, Inc. Schilling-Manela forward error correction and detection code method and apparatus
US5257367A (en) 1987-06-02 1993-10-26 Cab-Tek, Inc. Data storage system with asynchronous host operating system communication link
US4849974A (en) 1987-08-03 1989-07-18 Scs Telecom, Inc. PASM and TASM forward error correction and detection code method and apparatus
US4849976A (en) 1987-08-03 1989-07-18 Scs Telecom, Inc. PASM and TASM forward error correction and detection code method and apparatus
US4837680A (en) 1987-08-28 1989-06-06 International Business Machines Corporation Controlling asynchronously operating peripherals
US4870643A (en) 1987-11-06 1989-09-26 Micropolis Corporation Parallel drive array storage system
US4847842A (en) 1987-11-19 1989-07-11 Scs Telecom, Inc. SM codec method and apparatus
US4899342A (en) 1988-02-01 1990-02-06 Thinking Machines Corporation Method and apparatus for operating multi-unit array of memories
US5077736A (en) 1988-06-28 1991-12-31 Storage Technology Corporation Disk drive memory
US4989205A (en) 1988-06-28 1991-01-29 Storage Technology Corporation Disk drive memory
US4989206A (en) 1988-06-28 1991-01-29 Storage Technology Corporation Disk drive memory
US5128810A (en) 1988-08-02 1992-07-07 Cray Research, Inc. Single disk emulation interface for an array of synchronous spindle disk drives
US5218689A (en) 1988-08-16 1993-06-08 Cray Research, Inc. Single disk emulation interface for an array of asynchronously operating disk drives
US5148432A (en) 1988-11-14 1992-09-15 Array Technology Corporation Arrayed disk drive system and method
US5163131A (en) 1989-09-08 1992-11-10 Auspex Systems, Inc. Parallel i/o network file server architecture
US5101492A (en) 1989-11-03 1992-03-31 Compaq Computer Corporation Data redundancy and recovery protection
US5233618A (en) 1990-03-02 1993-08-03 Micro Technology, Inc. Data correcting applicable to redundant arrays of independent disks
US5088081A (en) 1990-03-28 1992-02-11 Prime Computer, Inc. Method and apparatus for improved disk access
US5166936A (en) 1990-07-20 1992-11-24 Compaq Computer Corporation Automatic hard disk bad sector remapping
US5210860A (en) 1990-07-20 1993-05-11 Compaq Computer Corporation Intelligent disk array controller
US5208813A (en) 1990-10-23 1993-05-04 Array Technology Corporation On-line reconstruction of a failed redundant array system
US5235601A (en) 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5274799A (en) 1991-01-04 1993-12-28 Array Technology Corporation Storage device array architecture with copyback cache
US5579475A (en) 1991-02-11 1996-11-26 International Business Machines Corporation Method and means for encoding and rebuilding the data contents of up to two unavailable DASDS in a DASD array using simple non-recursive diagonal and row parity
US5179704A (en) 1991-03-13 1993-01-12 Ncr Corporation Method and apparatus for generating disk array interrupt signals
EP0519669A3 (en) 1991-06-21 1994-07-06 Ibm Encoding and rebuilding data for a dasd array
US5237658A (en) 1991-10-01 1993-08-17 Tandem Computers Incorporated Linear and orthogonal expansion of array storage in multiprocessor computing systems
US5305326A (en) 1992-03-06 1994-04-19 Data General Corporation High availability disk arrays
US5410667A (en) 1992-04-17 1995-04-25 Storage Technology Corporation Data record copy system for a disk drive array data storage subsystem
US5537567A (en) 1994-03-14 1996-07-16 International Business Machines Corporation Parity block configuration in an array of storage devices
US5623595A (en) 1994-09-26 1997-04-22 Oracle Corporation Method and apparatus for transparent, real time reconstruction of corrupted data in a redundant array data storage system
US5615352A (en) * 1994-10-05 1997-03-25 Hewlett-Packard Company Methods for adding storage disks to a hierarchic disk array while maintaining data availability
US5812753A (en) 1995-10-13 1998-09-22 Eccs, Inc. Method for initializing or reconstructing data consistency within an array of storage elements
US5862158A (en) 1995-11-08 1999-01-19 International Business Machines Corporation Efficient method for providing fault tolerance against double device failures in multiple device systems
US5758118A (en) * 1995-12-08 1998-05-26 International Business Machines Corporation Methods and data storage devices for RAID expansion by on-line addition of new DASDs
US5884098A (en) 1996-04-18 1999-03-16 Emc Corporation RAID controller system utilizing front end and back end caching systems including communication path connecting two caching systems and synchronizing allocation of blocks in caching systems
US5805788A (en) 1996-05-20 1998-09-08 Cray Research, Inc. Raid-5 parity generation and data reconstruction
US6000010A (en) * 1997-05-09 1999-12-07 Unisys Corporation Method of increasing the storage capacity of a level five RAID disk array by adding, in a single step, a new parity block and N-1 new data blocks which respectively reside in new columns, where N is at least two
KR100267366B1 (en) 1997-07-15 2000-10-16 Samsung Electronics Co Ltd Method for recording parity and restoring data of failed disks in an external storage subsystem and apparatus therefor
US6092215A (en) 1997-09-29 2000-07-18 International Business Machines Corporation System and method for reconstructing data in a storage array system
JP3616487B2 (en) 1997-11-21 2005-02-02 アルプス電気株式会社 Disk array device
US6138201A (en) 1998-04-15 2000-10-24 Sony Corporation Redundant array of inexpensive tape drives using data compression and data allocation ratios
US6442649B1 (en) * 1999-08-18 2002-08-27 Intel Corporation Dynamic expansion of storage device array
US6532548B1 (en) 1999-09-21 2003-03-11 Storage Technology Corporation System and method for handling temporary errors on a redundant array of independent tapes (RAIT)
US6581185B1 (en) 2000-01-24 2003-06-17 Storage Technology Corporation Apparatus and method for reconstructing data using cross-parity stripes on storage media
US7080278B1 (en) * 2002-03-08 2006-07-18 Network Appliance, Inc. Technique for correcting multiple storage device failures in a storage array

Also Published As

Publication number Publication date Type
EP1687707B1 (en) 2012-08-22 grant
US7185144B2 (en) 2007-02-27 grant
CN101023412B (en) 2012-10-31 grant
JP2007516524A (en) 2007-06-21 application
US7257676B2 (en) 2007-08-14 grant
EP1687707A2 (en) 2006-08-09 application
CA2546242A1 (en) 2005-06-09 application
US20070083710A1 (en) 2007-04-12 application
CN101023412A (en) 2007-08-22 application
WO2005052784A2 (en) 2005-06-09 application
KR101148697B1 (en) 2012-07-05 grant
US20050114594A1 (en) 2005-05-26 application
WO2005052784A3 (en) 2007-02-08 application
KR20060120143A (en) 2006-11-24 application

Similar Documents

Publication Publication Date Title
US5875456A (en) Storage device array and methods for striping and unstriping data and for adding and removing disks online to/from a raid storage array
US5258984A (en) Method and means for distributed sparing in DASD arrays
US5257362A (en) Method and means for ensuring single pass small read/write access to variable length records stored on selected DASDs in a DASD array
US6298415B1 (en) Method and system for minimizing writes and reducing parity updates in a raid system
US7080278B1 (en) Technique for correcting multiple storage device failures in a storage array
US6067635A (en) Preservation of data integrity in a raid storage device
US5546558A (en) Memory system with hierarchic disk array and memory map store for persistent storage of virtual mapping information
US5872906A (en) Method and apparatus for taking countermeasure for failure of disk array
US20130173955A1 (en) Data protection in a random access disk array
US20040123032A1 (en) Method for storing integrity metadata in redundant data layouts
US4761785A (en) Parity spreading to enhance storage access
US6532548B1 (en) System and method for handling temporary errors on a redundant array of independent tapes (RAIT)
US7577866B1 (en) Techniques for fault tolerant data storage
US8099623B1 (en) Efficient distributed hot sparing scheme in a parity declustered RAID organization
US20080168225A1 (en) Providing enhanced tolerance of data loss in a disk array system
US5862313A (en) Raid system using I/O buffer segment to temporary store striped and parity data and connecting all disk drives via a single time multiplexed network
US6195727B1 (en) Coalescing raid commands accessing contiguous data in write-through mode
US5809516A (en) Allocation method of physical regions of a disc array to a plurality of logically-sequential data, adapted for increased parallel access to data
US20110126045A1 (en) Memory system with multiple striping of raid groups and method for performing the same
US5375128A (en) Fast updating of DASD arrays using selective shadow writing of parity and data blocks, tracks, or cylinders
US20040064638A1 (en) Integration of a RAID controller with a disk drive module
US5392244A (en) Memory systems with data storage redundancy management
US5805788A (en) Raid-5 parity generation and data reconstruction
US6334168B1 (en) Method and system for updating data in a data storage system
US5799140A (en) Disk array system and method for storing data

Legal Events

Date Code Title Description
EEER Examination request