US20010018729A1 - System and method for storage media group parity protection - Google Patents

System and method for storage media group parity protection

Info

Publication number
US20010018729A1
Authority
US
United States
Prior art keywords
storage medium
data
storage
protection group
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/852,328
Other versions
US6393516B2 (en)
Inventor
Theodore Johnson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rakuten Group Inc
AT&T Properties LLC
Original Assignee
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp filed Critical AT&T Corp
Priority to US09/852,328 priority Critical patent/US6393516B2/en
Publication of US20010018729A1 publication Critical patent/US20010018729A1/en
Application granted granted Critical
Publication of US6393516B2 publication Critical patent/US6393516B2/en
Assigned to AT&T CORP. reassignment AT&T CORP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JOHNSON, THEODORE
Assigned to AT&T PROPERTIES, LLC reassignment AT&T PROPERTIES, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T CORP.
Assigned to AT&T INTELLECTUAL PROPERTY II, L.P. reassignment AT&T INTELLECTUAL PROPERTY II, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T PROPERTIES, LLC
Assigned to RAKUTEN, INC. reassignment RAKUTEN, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T INTELLECTUAL PROPERTY II, L.P.
Assigned to RAKUTEN, INC. reassignment RAKUTEN, INC. CHANGE OF ADDRESS Assignors: RAKUTEN, INC.
Anticipated expiration legal-status Critical
Assigned to RAKUTEN GROUP, INC. reassignment RAKUTEN GROUP, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: RAKUTEN, INC.
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08 - Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076 - Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00 - Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10 - Indexing scheme relating to G06F11/10
    • G06F2211/1002 - Indexing scheme relating to G06F11/1076
    • G06F2211/1019 - Fast writes, i.e. signaling the host that a write is done before data is written to disk
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00 - Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10 - Indexing scheme relating to G06F11/10
    • G06F2211/1002 - Indexing scheme relating to G06F11/1076
    • G06F2211/108 - RAIT, i.e. RAID on tape drive

Abstract

A system and method for storage medium group parity protection stores data files and related parity information asynchronously on an array of storage media. Data files can be stored asynchronously, or synchronously in stripes as in RAIT technology, but related parity information is stored asynchronously with respect to the data files. Regions of the storage media are preferably organized into protection groups for which parity information is generated. Parity information is generated on line as data files are stored and maintained in active memory. Once a protection group is filled, the parity information is migrated to more permanent backup storage. As one example, regions of an array of N storage media can constitute a protection group, and once the regions in the protection group are filled with data, parity data for the protection group is migrated from active memory to more permanent backup storage.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of Invention [0001]
  • This invention relates to systems and methods for storing information. [0002]
  • 2. Description of Related Art [0003]
  • Tape storage is often used as an inexpensive backup for on-line storage, increasing the reliability of computer-stored data by providing a redundant storage location. Additionally, hierarchical storage management (HSM) systems use tape storage to greatly expand the capacity of a fixed disk-based file system. Files are migrated from the disk-resident file system to tape storage when the disk-resident file system runs out of space, and files are migrated from tape to fixed disk when they are referenced. Most files in an HSM system are stored only on tape, and no redundant copy is stored on disk. [0004]
  • One method of storing backup information in an HSM system is to store two copies of the information, i.e., data mirroring. This way, stored information can be reconstructed even if a primary and one backup information source are damaged or lost. [0005]
  • Another method for storing backup information is the Redundant Arrays of Inexpensive Tapes (RAIT) technology. In a RAIT system, a collection of N+1 tapes are aggregated to act as a single virtual tape. In a typical implementation, data files are simultaneously written to blocks of the N tapes in stripes, and the parity information, which is the bit-wise exclusive-OR of data written in the blocks, is stored on the additional tape drive simultaneous with storing the data files on the N tapes. The RAIT system has a higher performance compared to systems that store duplicate copies of information because writing data in parallel to multiple tapes results in high speed storage. However, because data is stored in stripes across multiple tapes in the RAIT system, all of the tapes in a RAIT stripe, i.e., a group of tapes storing a particular set of data, must be mounted and read synchronously to reconstruct a file stored on the tapes. Because data must be synchronously read from tapes in the RAIT stripe, special hardware, or software emulation, for reading the tapes is typically required, and if one of the tape drives is not operating properly, data cannot be properly read from any of the tapes. That is, the system must wait until all of the tapes and associated tape drives are operating properly before any data can be read from the tapes. [0006]
  • SUMMARY OF THE INVENTION
  • The invention provides a system and method for storing information using a plurality of storage media, such as magnetic tapes, that can be used as part of an HSM system. According to at least one aspect of the invention, storage media that store data files and related parity information are written to asynchronously. That is, data files can be stored in a group of storage media synchronously in stripes similar to that in RAIT, or asynchronously unlike RAIT, but parity data is stored asynchronously with respect to storage of the data files. Thus, data files and related parity data are stored independently of each other. [0007]
  • Protection groups are preferably formed for the storage media, or regions of the storage media, to organize how data is stored on the storage media and how parity information is generated. For example, a protection group can be a collection of N regions from N storage media, one region per storage medium, and parity information is generated and stored for data in each protection group. Parity information is stored so that if one storage medium in a protection group is lost or damaged, data stored on the lost or damaged storage medium can be reconstructed from the remaining storage media and the parity information in the protection group. Preferably, parity information is determined as the exclusive-OR of data in a protection group, but other methods for generating parity information are possible. When a protection group is created, each region in the group is empty. As data is written to a region of a storage medium, the region and the corresponding protection group become filled, and parity information is generated and stored in active memory for the protection group. When the regions in a protection group are completely filled and closed, the protection group is closed and parity information stored in active memory for the protection group can be migrated to more permanent backup storage. Thus, parity data for a protection group can be stored asynchronously with respect to storage of data files for which the parity data is generated. [0008]
  • When a data file is received for storage, a storage medium, or region of a storage medium, can be selected to store the data file. Selection of the storage medium or region can be done in many different ways, including using a “round-robin” allocation scheme or by selecting a storage medium that has the largest number of open regions. Once a storage medium or region is selected, the data file is stored and parity data related to the data file is generated. Parity information can be generated before, during or after the data file is stored. [0009]
  • Since in accordance with one aspect of the invention, data files can be stored within a single region or storage medium, or in a relatively small number of regions or storage media, a file can be restored by accessing a single or relatively small number of storage media. This is in contrast to RAIT storage systems, which store single files in stripes across multiple tapes. In addition, since data files can be stored in an asynchronous fashion with respect to each other, data files can be read from appropriate storage media using commonly available equipment, unlike RAIT storage systems that require multiple tapes be synchronously accessed to restore a data file. That is, according to the invention data can be stored asynchronously, and even in parallel, to multiple storage media. Although data files can be written asynchronously according to the invention, the invention is not limited to asynchronous data file storage. That is, data files can be stored in a stripe across two or more storage media similar to RAIT systems. [0010]
  • Various different storage media management strategies can be used to achieve different goals, such as minimizing parity information active memory storage overhead, minimizing the number of open storage media, minimizing data recovery or reconstruction time, etc. To achieve these goals, adjustments to region size, protection group forming policy and/or parity information storage policy can be made. [0011]
  • These and other aspects of the invention will be appreciated and/or are obvious in view of the following description of the invention. [0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention is described in connection with the following drawings where reference numerals indicate like elements and wherein: [0013]
  • FIG. 1 is a flowchart of steps for a method of storing information in accordance with the invention; [0014]
  • FIG. 2 is a schematic block diagram of storage media and a database; and [0015]
  • FIG. 3 is a schematic block diagram of an information storage system in accordance with the invention. [0016]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • As discussed above, the invention provides an information storage system and method that allows rapid and asynchronous access to stored information while providing information loss protection with minimal storage space requirements. In a preferred embodiment of the invention, a protection group is created to organize the storage of information on a plurality of different storage media. The storage media are preferably magnetic tape media, but can be other information storage media, such as optical disk storage, magnetic disk storage, or any other volatile or non-volatile storage media. The protection group is a collection of regions, each of length B bytes, one on each of N different storage media. When a data file is to be stored, a region, regions, a storage medium or storage media in the protection group are selected, and the entire data file, or a portion of the data file, is stored in the selected region(s) or storage medium(s). As used herein, a region can be a portion of the total storage space in a storage medium, or the entire storage space on the storage medium. When the data file is stored, parity information, or a parity block, is generated for the protection group associated with the region to provide additional information loss protection. The parity block is preferably computed to be the exclusive-OR of data stored in the N regions in the protection group and is maintained in active memory until the protection group is closed. When the protection group is closed, the parity block can be stored in more permanent storage. Thus, if data in one of the N regions is lost, the data can be reconstructed by taking the exclusive-OR of data in the unaffected regions in the protection group with the parity block for the protection group. [0017]
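  • For illustration only, the exclusive-OR parity computation and reconstruction described above can be sketched in a few lines of Python; the region size, region contents and function names below are hypothetical and are not part of the patent disclosure:

```python
from functools import reduce

REGION_SIZE_B = 4  # the B bytes per region; a tiny value chosen for illustration

def parity_block(regions):
    """Bit-wise exclusive-OR of the regions in a protection group.
    Regions shorter than B bytes are treated as zero-padded."""
    padded = [r.ljust(REGION_SIZE_B, b"\x00") for r in regions]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

def reconstruct(surviving_regions, parity):
    """Rebuild a lost region by XOR-ing the unaffected regions with the parity block."""
    return parity_block(surviving_regions + [parity])

# A protection group of N = 3 regions, one region per storage medium.
regions = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
p = parity_block(regions)
assert reconstruct([regions[0], regions[2]], p) == regions[1]  # medium 2 lost
```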
  • Thus, the invention provides a kind of “double backup” system that requires much less storage space than systems that store duplicate copies of data files. In addition, since data files can be stored in a single region or storage medium, or asynchronously across two or more regions or storage media, storage media used to restore a data file can be read asynchronously without specialized equipment. This is in contrast to the RAIT storage systems that synchronously store single data files in a stripe across multiple storage media, and therefore require synchronous access to the multiple storage media to reconstruct a single data file. Thus, the asynchronous nature of the invention allows faster and more convenient data read/write capabilities that do not require specialized synchronous data read/write systems. Another advantage over the RAIT systems is that if two tapes in a RAIT data stripe are lost or damaged, all data in the set of RAIT tapes is lost. In contrast, when data is stored in accordance with at least one aspect of the invention, not all data is lost if two or more storage media in a protection group are lost or damaged, because data files can be stored asynchronously on one or more storage media. [0018]
  • FIG. 1 is a flowchart of steps of a method for storing information in accordance with the invention. In step 100, at least one storage medium or region of a storage medium is selected for storing a data file. Selection of the region(s) or storage medium(s) can be done in many different ways, including a “round-robin” allocation scheme, where the first through Nth regions are used to store the first through Nth data files, respectively, then the (N+1)th through (N+N)th data files, respectively, and so on. A region can also be selected by selecting the region or storage medium that has the largest amount of open storage space, or by selecting a storage medium that has the largest number of open regions. Each storage medium can have one or more regions, as desired. If the data file to be stored is larger than a single region, e.g., is larger than B bytes, a set of regions located on a single storage medium or on more than one storage medium is selected. Alternately, if a data file is to be stored synchronously in a stripe across several storage media, several storage media can be selected. [0019]
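  • As a sketch of the selection policies mentioned for step 100, both a round-robin scheme and a most-open-regions scheme can be expressed compactly; the media names and open-region counts below are hypothetical:

```python
import itertools

# Hypothetical catalog: number of open regions per storage medium.
open_regions = {"tape-01": 5, "tape-02": 2, "tape-03": 7}

# Round-robin: the first through Nth files go to the first through Nth media,
# then the cycle repeats for the (N+1)th file, and so on.
round_robin = itertools.cycle(sorted(open_regions))

def select_round_robin():
    return next(round_robin)

def select_most_open_regions():
    # Pick the storage medium with the largest number of open regions.
    return max(open_regions, key=open_regions.get)

print([select_round_robin() for _ in range(4)])  # tape-01, tape-02, tape-03, tape-01
print(select_most_open_regions())                # tape-03
```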
  • In step 200, the data file is stored in the selected region(s) or storage medium(s). As discussed above, the data file can be stored in a single region and/or in a single storage medium. However, the data file can be stored in more than one storage medium in a synchronous or asynchronous manner. For example, if a data file is stored in a stripe similar to that used in RAIT technology, the data file is stored synchronously on several storage media. [0020]
  • In step 300, parity data is generated for the regions and/or storage media used to store the data file. The parity data can be generated either before, during or after the storage of each data file. Preferably, the parity data is generated before the data file is stored. This scheme provides maximum data backup protection since parity data is generated for each data file before it is written into storage. Alternately, parity data can be generated after the data file is stored, e.g., after a protection group in which the data file is stored is closed. Preferably, the parity data is generated by determining the exclusive-OR for the regions in a protection group. For purposes of determining the parity data, empty regions or portions of regions are considered to be filled with “zeroes”. This process of generating parity data is similar to that in RAIT systems and is well known. [0021]
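  • One way to realize this, sketched below under assumed data layouts, is to keep a running parity buffer in active memory and fold each newly written byte into it; because unwritten space XORs as zero, this matches treating empty regions as filled with zeroes:

```python
class OpenProtectionGroup:
    """Running parity for an open protection group, held in active memory."""

    def __init__(self, region_size):
        self.region_size = region_size
        self.parity = bytearray(region_size)  # all zeros until data is written

    def write(self, data, offset=0):
        # Fold newly written bytes into the parity; which region the bytes land
        # in does not matter, only the byte offset within that region.
        for i, b in enumerate(data):
            self.parity[offset + i] ^= b

group = OpenProtectionGroup(region_size=8)
group.write(b"\x0f\x0f", offset=0)  # fragment written to one region now
group.write(b"\xf0", offset=4)      # another region written later, asynchronously
```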
  • In step 400, the parity data is stored. That is, the parity data is stored asynchronously with respect to the storage of data files related to the parity data. Preferably, the parity data and other information related to the identification and location of regions and parity groups are stored in a database. The parity data can be stored on a medium different from that used to store the other information related to the identification and location of regions and parity information, and can be stored on the same type of media used to store the parity-protected data. Preferably, neither the parity data nor its related region location information is stored on the same medium as the data used to generate that piece of parity data. However, the parity information could be stored on one of the storage media that includes regions used to generate the parity data, if desired. [0022]
  • If protection groups are used to organize how data files are stored and how parity information is generated, additional steps can be performed to manage the protection groups. For example, FIG. 2 shows a schematic diagram of an example set of storage media 6-1 through 6-4 and a database 7 that stores parity information. When a data file is received for storage, a region 61, a group of regions 61, a storage medium 6, or group of storage media 6 are selected, and the data file is stored. In this example, the data file is stored in a single region 61-1 in the storage medium 6-1 and is shown as a shaded area in the region 61-1. Of course, the data file could completely fill the region 61-1 in the storage medium 6-1, or completely fill the region 61-1 in the storage medium 6-1 and another region 61 in the storage medium 6-1 or another storage medium 6. Likewise, the data file could be written synchronously in a stripe across multiple storage media 6, e.g., across regions 61 on storage media 6-1 through 6-4. [0023]
  • In this example, a protection group is formed that includes the regions 61-1 in the storage media 6-1 through 6-4. Thus, as data is stored in the regions 61-1, parity information for the protection group is generated and stored in active memory (not shown), e.g., active disk storage. When all of the regions 61-1 in the protection group are filled and are closed, the protection group is closed and parity data is migrated to the database 7, which can be any type of storage media, including magnetic tape. When the protection group is closed, a new set of storage media 6, and/or regions 61, can be opened and a new protection group generated. Alternately, a new storage medium 6 can be opened whenever an existing storage medium 6 in a protection group is closed. When a new storage medium 6 is opened, the current protection group is preferably closed and a new protection group including the new storage medium 6 is opened. This strategy usually reduces the number of parity blocks that are created because storage media 6 typically have varying capacity due to unwritable portions in the storage medium 6 and because of data file size uncertainty caused by data compression. [0024]
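  • The policy of closing the current protection group and opening a new one whenever a storage medium closes can be sketched as follows; the Medium and ProtectionGroup classes and the dictionary standing in for the database 7 are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Medium:
    name: str
    closed: bool = False

@dataclass
class ProtectionGroup:
    media: list                                   # one open region per storage medium
    parity: bytearray = field(default_factory=lambda: bytearray(8))
    closed: bool = False

parity_db = {}  # stands in for the database 7 that holds migrated parity blocks

def on_medium_closed(group, new_medium):
    """Close the current protection group, migrate its parity out of active
    memory, and open a new group that includes the newly opened medium."""
    group.closed = True
    parity_db[id(group)] = bytes(group.parity)            # migrate to backup storage
    survivors = [m for m in group.media if not m.closed]  # still-open media
    return ProtectionGroup(media=survivors + [new_medium])

g = ProtectionGroup(media=[Medium("tape-01"), Medium("tape-02")])
g.media[0].closed = True                                  # tape-01 fills up and closes
g2 = on_medium_closed(g, Medium("tape-03"))
```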
  • Other or additional storage media management schemes can be used to achieve different goals such as minimizing parity information active memory storage overhead, minimizing the number of open storage media, minimizing data recovery or reconstruction time, etc. To achieve these goals, adjustments to region size, protection group forming policy and/or parity information storage policy can be made. The following are four example strategies that use storage medium groups to manage data file storage. (A storage medium group is a collection of storage media dedicated to storing related data files.) [0025]
  • 1. Immediate Strategy [0026]
  • This strategy is designed to fill open protection groups as quickly as possible. The number of regions in a protection group, or the protection width, is set to k and a list of protection groups having less than k regions is maintained. When a new region is opened in response to a region on a storage medium closing, the new region is assigned to an open protection group such that no other region from the same storage medium is a member of the protection group. If no such protection group exists, a new protection group is created. When a protection group is closed, parity information for the protection group is migrated from active storage to a backup storage medium. [0027]
  • This strategy minimizes the active memory storage overhead that is needed to store parity information for open protection groups, since protection groups are closed relatively quickly. However, the time needed to reconstruct a damaged storage medium can be relatively large, since a large number of storage media must be used to read and reconstruct the lost data. Also, media management can be difficult because if a single storage medium is exported and retired, the parity information for a relatively large number of storage media is invalidated. Thus, retiring a single storage medium can require that all potentially invalidated parity blocks be updated. [0028]
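  • A minimal sketch of the immediate strategy's assignment rule, under assumed bookkeeping structures, is shown below: a newly opened region joins any open protection group of width less than k that has no other region from the same storage medium, and a full group is closed so its parity can be migrated:

```python
K = 4  # protection width

open_groups = []  # each open protection group is a list of medium names

def assign_new_region(medium):
    for group in open_groups:
        if medium not in group and len(group) < K:
            group.append(medium)
            break
    else:
        group = [medium]           # no suitable open group exists: create a new one
        open_groups.append(group)
    if len(group) == K:
        open_groups.remove(group)  # group is full: close it and migrate its parity
    return group

for m in ["tape-01", "tape-02", "tape-01", "tape-03"]:
    assign_new_region(m)           # the second tape-01 region starts a second group
```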
  • 2. Protection Set Strategy [0029]
  • This strategy attempts to minimize the amount of time needed to rebuild a lost or damaged storage medium by minimizing the number of storage media that must be accessed in the rebuild process. For example, a protection set including k storage media can be constructed so that regions from the k storage media only participate in protection groups with other storage media in the protection set. Thus, if a single storage medium is lost or damaged, only the other k-1 storage media and the parity information need be read to reconstruct the lost or damaged storage medium. [0030]
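  • A small sketch of the rebuild cost under this strategy (media names hypothetical): because regions of the k media in a protection set only form protection groups with one another, rebuilding one lost medium reads at most the other k-1 media plus the stored parity:

```python
def media_to_read_for_rebuild(protection_set, lost_medium):
    """Media that must be mounted to rebuild the lost medium: the k-1 survivors."""
    assert lost_medium in protection_set
    return protection_set - {lost_medium}

print(media_to_read_for_rebuild({"t1", "t2", "t3", "t4"}, "t2"))  # 3 media, plus parity
```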
  • 3. Single Storage Medium Group Strategy [0031]
  • This strategy attempts to simplify storage medium management. In this strategy, every protection set contains storage media from a single storage medium group. The storage medium group is written to in a slice of storage media of width s, where s evenly divides the protection width, k. Storage media in a slice should be filled at about the same rate, whether by tape striping techniques, or by writing files to storage media in the slice in parallel streams and allocating new data files to the least-filled storage medium. If the slice width is equal to the protection width (s=k), each storage medium in the protection set should be written to at about the same rate. Thus, protection groups will close shortly after being opened, and parity information can be quickly migrated to more permanent storage. Moreover, setting the slice width equal to the protection width (s=k) increases the likelihood that all storage media in the protection set will be exported or retired at the same time. [0032]
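  • A sketch of slice writing in a single storage medium group, using hypothetical fill counters: with slice width s evenly dividing the protection width k, each new data file is steered to the least-filled medium in the slice so the media fill at about the same rate:

```python
K, S = 4, 2
assert K % S == 0  # the slice width s must evenly divide the protection width k

slice_fill = {"tape-01": 120, "tape-02": 95}  # bytes written so far (illustrative)

def allocate(file_size):
    target = min(slice_fill, key=slice_fill.get)  # least-filled medium in the slice
    slice_fill[target] += file_size
    return target

print(allocate(30))  # tape-02 (currently least filled)
print(allocate(30))  # tape-01 (now the least filled)
```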
  • 4. Multiple Storage Medium Groups [0033]
  • Using multiple storage media groups can have the disadvantage that some storage medium groups do not generate enough migratable data to justify keeping multiple storage media open for the groups. In this case, slices of several storage medium groups can be combined into a single protection set. Some storage medium groups can be constrained to enter protection groups only with each other. For example, slow filling storage medium groups can form protection sets only with other slow filling groups. [0034]
  • In addition to these strategies, the size for individual regions can be adjusted to manage data file storage. Large regions create protection groups that take a long time to fill, but small regions increase the amount of metadata (information about data file location, region location, etc.) that must be managed. Protection set policy can also be modified to organize fractions of a storage medium, rather than whole storage media. While error recovery times can increase, open regions in a protection group might close faster, reducing active storage overhead. [0035]
  • Data file migration (storage in backup storage media) policies can also be used to manage protection groups. For example, data files can be gathered and migrated as a group to minimize the number of open protection groups, close a particular protection group, or organize data files in a desired way with respect to protection groups, etc. [0036]
  • FIG. 3 is a schematic block diagram of an information storage system 10 in accordance with the invention. The information storage system 10 includes a data processing system 1, which can be a general purpose computer, or network of general purpose computers, that are programmed to operate in accordance with the invention. [0037]
  • The data processing system 1 also includes at least one controller 2 that can be implemented, at least in part, as a single special purpose integrated circuit, e.g., an application-specific integrated circuit (ASIC) or an array of ASICs, each having a main or central processor section for overall, system-level control, and separate sections dedicated to performing various different specific computations, functions and other processes under the control of the central processor section. The controller 2 can also be implemented using a plurality of separate dedicated programmable integrated or other electronic circuits or devices, e.g., hard wired electronic or logic circuits such as discrete element circuits or programmable logic devices. The controller 2 also preferably includes other devices, such as volatile or non-volatile memory devices, communications devices, relays and/or other circuitry or components necessary to perform the desired input/output or other functions. [0038]
  • The data processing system 1 also includes a storage medium selector 3 and a parity information generator 4 for selecting a region/storage medium and generating parity information for a protection group, respectively. The storage medium selector 3 and parity information generator 4 can be implemented as software modules that are executed by the controller 2 or any other suitable data processing apparatus. Alternately, the storage medium selector 3 and/or the parity information generator 4 can be implemented as hardwired electronic circuits or other programmed integrated or other electronic circuits or devices, e.g., hardwired electronic or logic circuits such as discrete element circuits or programmable logic devices. [0039]
  • The data processing system 1 communicates with a plurality of storage devices 9 through a bus 5. The storage devices 9 store data on associated storage media 6 and a database 7, which stores parity information and other information, such as region and parity group location information. The storage devices 9 can be any type of well known storage devices that store information on storage media such as magnetic tape, optical disk, magnetic disk, or other storage media, and can be part of a robotic storage media library, for example. Thus, the type of storage devices 9 depends on the type of storage media 6 used to store data. The storage media 6 and the database 7 can be any type of volatile or non-volatile storage medium, but preferably are magnetic tape storage media. The database 7 is used in this example embodiment to more clearly distinguish where data files and parity information are stored. However, information stored on the database 7 can be stored in the storage media 6. Preferably, however, parity data is stored on a storage medium that does not include a region for which the parity data was generated. [0040]
  • The controller 2 receives data, such as data files for storage, and/or control information on a data input line 8. When the controller 2 receives a data file for storage, the controller 2 requests the storage medium selector 3 to indicate regions and/or a storage medium 6 within a protection group for storing the data file. As discussed above, the storage medium selector 3 can determine a region, set of regions and/or a storage medium 6 by using a “round-robin” file allocation algorithm, by identifying a storage medium 6 that has the largest number of open regions, identifying regions on multiple storage media 6 for a data stripe, or by some alternate method. Once the location where the data file will be stored is identified, the controller 2 controls the storage of the data file on the selected storage medium 6. Before, during or after the data file is stored, the parity information generator 4 generates parity information for the protection group(s) in which the data file is stored. Other information, including region and media group location can also be stored in the database 7. The parity information generated by the parity information generator 4 preferably is determined as the exclusive-OR of the regions in the protection group(s). However, other parity information or similar information can be generated. The parity information is preferably stored in active memory, e.g., in the controller 2 or any other storage medium, and migrated to the database 7 when a protection group closes. [0041]
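  • Tying the components of FIG. 3 together, the controller's handling of an incoming data file can be sketched as follows; the Region and Group classes, the round-robin selector and the dictionary standing in for the database 7 are hypothetical glue, not the patent's implementation:

```python
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Region:
    data: bytearray = field(default_factory=bytearray)
    capacity: int = 4
    def filled(self):
        return len(self.data) >= self.capacity

@dataclass
class Group:
    regions: list
    parity: bytearray = field(default_factory=lambda: bytearray(4))

database = {}  # stands in for the database 7, which receives migrated parity

def handle_file(data, group, selector):
    region = next(selector)            # storage medium selector 3 (round-robin here)
    offset = len(region.data)
    region.data += data                # controller 2 stores the data file
    for i, b in enumerate(data):       # parity information generator 4 updates parity
        group.parity[offset + i] ^= b
    if all(r.filled() for r in group.regions):
        database[id(group)] = bytes(group.parity)  # protection group closes: migrate

g = Group(regions=[Region(), Region()])
sel = cycle(g.regions)
handle_file(b"\x01\x02\x03\x04", g, sel)  # fills region 1
handle_file(b"\x05\x06\x07\x08", g, sel)  # fills region 2; parity migrates to database
```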
  • Any of the data management techniques described above can be used by the data processing system 1. For example, protection groups can remain open until all of the regions and/or storage media 6 in the protection groups are filled. Thus, the controller 2 could close a current protection group whenever a first region becomes filled, and create a new protection group that includes the storage media 6 that are not yet filled and at least one new storage medium 6. Alternately, the controller 2 could close a current protection group only when all regions and/or the storage media 6 within the protection group are filled, and create a new protection group that does not include any of the storage media 6 that were included in the earlier closed protection group. Other protection group management schemes can be used depending upon the desired data storage structure. For example, closing a current protection group only when all regions for all storage media 6 in the current protection group are filled minimizes the number of storage media 6 that must be read to reconstruct information stored on a damaged storage medium 6. Adding new storage media to a protection group whenever a storage medium 6 in the protection group closes reduces the number of parity blocks that are created. [0042]
  • While the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, preferred embodiments of the invention as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention. [0043]

Claims (19)

What is claimed is:
1. A method for storing data, comprising:
storing at least one data file in at least one storage medium in a first group of N storage media;
generating parity data for data stored in the storage media in the first group; and
storing the parity data on a storage medium asynchronously with respect to the storing of the at least one data file.
2. The method of claim 1, further comprising:
establishing a protection group including N regions of the N storage media in the first group.
3. The method of claim 2, wherein the step of storing at least one data file comprises:
selecting at least one region or storage medium to store a data file.
4. The method of claim 3, wherein the step of selecting at least one region or storage medium comprises:
selecting a storage medium to store the data file using a round-robin file allocation scheme.
5. The method of claim 3, wherein the step of selecting at least one region or storage medium comprises:
selecting a storage medium for storing the data file based on an amount of open storage space on the storage medium.
6. The method of claim 2, wherein the step of generating parity data comprises:
generating parity data for data stored in the protection group.
7. The method of claim 1, wherein the step of storing at least one data file comprises:
storing data asynchronously in the storage media of the first group.
8. The method of claim 1, wherein the step of storing at least one data file comprises:
storing data synchronously in the storage media of the first group.
9. The method of claim 1, wherein the step of storing parity data comprises:
storing parity data when a protection group is closed.
10. The method of claim 1, wherein the step of generating parity data comprises:
generating parity data before, during, or after each data file is stored; and
storing the parity data in active memory.
11. The method of claim 1, further comprising:
determining that one storage medium in a first protection group is filled;
closing the first protection group; and
creating a second protection group that includes unfilled storage media from the first protection group and at least one new storage medium.
12. The method of claim 1, further comprising:
determining that at least one storage medium in a first protection group is filled;
closing all storage media in the first protection group; and
creating a second protection group that includes storage media that are not included in the first protection group.
13. An information storage system comprising:
a plurality of storage media that each have at least one region that stores information;
a storage medium selector that selects at least one region of a storage medium in a protection group for storing a data file;
a parity information generator that generates parity information for regions in a protection group; and
a controller that controls a storage medium selected by the storage medium selector to store a data file and that controls a storage medium to store parity information asynchronously with respect to storing data related to the parity information.
14. The system of claim 13, wherein the storage medium selector uses a round-robin allocation scheme to select regions for storing a data file.
15. The system of claim 13, wherein the storage medium selector selects a storage medium having a largest number of open regions for storing a data file.
16. The system of claim 13, wherein the parity information generator determines parity information for a protection group which includes a plurality of regions of the storage media by generating an exclusive-OR for the regions in the protection group.
17. The system of claim 13, wherein the controller adds at least one new storage medium to a storage medium group only when no open storage medium exists for the storage medium group.
18. The system of claim 13, wherein the controller adds at least one new storage medium to a storage medium group whenever a storage medium in the storage medium group closes.
19. The system of claim 13, wherein the controller closes a protection group when a new storage medium is added to the storage medium group.
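
The parity computation recited in claims 1, 13, and 16 is an exclusive-OR over the regions of a protection group, with the parity written asynchronously with respect to the data. The following is a minimal illustrative sketch, not a statement of the claims; the fixed region size, the function names, and the in-memory parity buffer are assumptions made only for the example.

```python
# Illustrative sketch of exclusive-OR parity over the regions of a protection group
# (cf. claims 1, 13, and 16). Region size and function names are assumptions for the
# example only; the parity may be held in active memory and written to a storage
# medium asynchronously with respect to the data writes.
import os

REGION_SIZE = 4096  # bytes; assumed fixed size for the sketch


def xor_parity(regions):
    """Return the byte-wise exclusive-OR of equal-sized regions."""
    parity = bytearray(REGION_SIZE)
    for region in regions:
        for i, byte in enumerate(region):
            parity[i] ^= byte
    return bytes(parity)


def reconstruct_region(parity, surviving_regions):
    """Recover a lost region as the XOR of the parity with every surviving region."""
    return xor_parity([parity] + list(surviving_regions))


if __name__ == "__main__":
    regions = [os.urandom(REGION_SIZE) for _ in range(4)]   # one region per storage medium
    parity = xor_parity(regions)                             # generated before/during/after writes
    lost = regions.pop(2)                                    # simulate a damaged storage medium
    assert reconstruct_region(parity, regions) == lost       # remaining media plus parity recover it
```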
US09/852,328 1998-12-23 2001-05-10 System and method for storage media group parity protection Expired - Lifetime US6393516B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/852,328 US6393516B2 (en) 1998-12-23 2001-05-10 System and method for storage media group parity protection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/219,830 US6289415B1 (en) 1998-12-23 1998-12-23 System and method for storage media group parity protection
US09/852,328 US6393516B2 (en) 1998-12-23 2001-05-10 System and method for storage media group parity protection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/219,830 Continuation US6289415B1 (en) 1998-12-23 1998-12-23 System and method for storage media group parity protection

Publications (2)

Publication Number Publication Date
US20010018729A1 true US20010018729A1 (en) 2001-08-30
US6393516B2 US6393516B2 (en) 2002-05-21

Family

ID=22820958

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/219,830 Expired - Lifetime US6289415B1 (en) 1998-12-23 1998-12-23 System and method for storage media group parity protection
US09/852,328 Expired - Lifetime US6393516B2 (en) 1998-12-23 2001-05-10 System and method for storage media group parity protection

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/219,830 Expired - Lifetime US6289415B1 (en) 1998-12-23 1998-12-23 System and method for storage media group parity protection

Country Status (1)

Country Link
US (2) US6289415B1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030169527A1 (en) * 2002-03-06 2003-09-11 Nec Corporation Magnetic tape apparatus, control method therefor, and magnetic tape apparatus control program
US20030217246A1 (en) * 2002-05-17 2003-11-20 Kenichi Kubota Memory control apparatus, method and program
US20070189153A1 (en) * 2005-07-27 2007-08-16 Archivas, Inc. Method for improving mean time to data loss (MTDL) in a fixed content distributed data storage
WO2007128417A1 (en) * 2006-05-10 2007-11-15 Nero Ag Apparatus for writing data having a data amount on a storage medium
US20080253256A1 (en) * 2007-04-13 2008-10-16 Andreas Eckleder Apparatus for writing data and redundancy data on a storage medium
US20080256365A1 (en) * 2006-05-10 2008-10-16 Andreas Eckleder Apparatus for writing information on a data content on a storage medium
EP2105928A1 (en) * 2008-03-27 2009-09-30 Deutsche Thomson OHG Method and device for saving files on a storage medium, method and device for correcting errors encountered when reading files from a storage medium and storage medium
WO2010049928A1 (en) * 2008-10-27 2010-05-06 Kaminario Tehnologies Ltd. System and methods for raid writing and asynchronous parity computation
US10013166B2 (en) 2012-12-20 2018-07-03 Amazon Technologies, Inc. Virtual tape library system
US10146652B2 (en) * 2016-02-11 2018-12-04 International Business Machines Corporation Resilient distributed storage system
US10795856B1 (en) * 2014-12-29 2020-10-06 EMC IP Holding Company LLC Methods, systems, and computer readable mediums for implementing a data protection policy for a transferred enterprise application
US20210157673A1 (en) * 2016-02-25 2021-05-27 Micron Technology, Inc. Redundant array of independent nand for a three-dimensional memory array
US20210257040A1 (en) * 2020-02-18 2021-08-19 SK Hynix Inc. Memory device and test method thereof

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6499039B1 (en) * 1999-09-23 2002-12-24 Emc Corporation Reorganization of striped data during file system expansion in a data storage system
US6546458B2 (en) * 2000-12-29 2003-04-08 Storage Technology Corporation Method and apparatus for arbitrarily large capacity removable media
US7024586B2 (en) * 2002-06-24 2006-04-04 Network Appliance, Inc. Using file system information in raid data reconstruction and migration
CA2497305A1 (en) 2002-09-10 2004-03-25 Exagrid Systems, Inc. Primary and remote data backup with nodal failover
US7155634B1 (en) * 2002-10-25 2006-12-26 Storage Technology Corporation Process for generating and reconstructing variable number of parity for byte streams independent of host block size
US7350101B1 (en) * 2002-12-23 2008-03-25 Storage Technology Corporation Simultaneous writing and reconstruction of a redundant array of independent limited performance storage devices
US7032126B2 (en) * 2003-07-08 2006-04-18 Softek Storage Solutions Corporation Method and apparatus for creating a storage pool by dynamically mapping replication schema to provisioned storage volumes
JP4354233B2 (en) * 2003-09-05 2009-10-28 株式会社日立製作所 Backup system and method
US8429253B1 (en) 2004-01-27 2013-04-23 Symantec Corporation Method and system for detecting changes in computer files and settings and automating the migration of settings and files to computers
US20070130232A1 (en) * 2005-11-22 2007-06-07 Therrien David G Method and apparatus for efficiently storing and managing historical versions and replicas of computer data files
JP2008197779A (en) * 2007-02-09 2008-08-28 Fujitsu Ltd Hierarchical storage management system, hierarchical controller, inter-hierarchy file moving method, and program
US8316441B2 (en) * 2007-11-14 2012-11-20 Lockheed Martin Corporation System for protecting information
US8453155B2 (en) 2010-11-19 2013-05-28 At&T Intellectual Property I, L.P. Method for scheduling updates in a streaming data warehouse
US9378098B2 (en) 2012-06-06 2016-06-28 Qualcomm Incorporated Methods and systems for redundant data storage in a register
US20180336097A1 (en) 2012-06-25 2018-11-22 International Business Machines Corporation Namespace affinity and failover for processing units in a dispersed storage network
US9495247B2 (en) * 2014-10-27 2016-11-15 International Business Machines Corporation Time multiplexed redundant array of independent tapes
US9792178B2 (en) 2015-01-26 2017-10-17 Spectra Logic, Corporation Progressive parity
US10372334B2 (en) 2016-02-11 2019-08-06 International Business Machines Corporation Reclaiming free space in a storage system
US10534678B2 (en) * 2016-04-04 2020-01-14 Brilliant Points, Inc. Data storage backup management method
US11238107B2 (en) 2020-01-06 2022-02-01 International Business Machines Corporation Migrating data files to magnetic tape according to a query having one or more predefined criterion and one or more query expansion profiles

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2550239B2 (en) * 1991-09-12 1996-11-06 株式会社日立製作所 External storage system
US5802264A (en) 1991-11-15 1998-09-01 Fujitsu Limited Background data reconstruction in a storage device array system
US5487160A (en) * 1992-12-04 1996-01-23 At&T Global Information Solutions Company Concurrent image backup for disk storage system
US5557770A (en) * 1993-03-24 1996-09-17 International Business Machines Corporation Disk storage apparatus and method for converting random writes to sequential writes while retaining physical clustering on disk
US5598549A (en) 1993-06-11 1997-01-28 At&T Global Information Solutions Company Array storage system for returning an I/O complete signal to a virtual I/O daemon that is separated from software array driver and physical device driver
JP3687111B2 (en) * 1994-08-18 2005-08-24 株式会社日立製作所 Storage device system and storage device control method
US5974503A (en) * 1997-04-25 1999-10-26 Emc Corporation Storage and access of continuous media files indexed as lists of raid stripe sets associated with file names
US6076143A (en) * 1997-09-02 2000-06-13 Emc Corporation Method and apparatus for managing the physical storage locations for blocks of information in a storage system to increase system performance

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030169527A1 (en) * 2002-03-06 2003-09-11 Nec Corporation Magnetic tape apparatus, control method therefor, and magnetic tape apparatus control program
US6940666B2 (en) * 2002-03-06 2005-09-06 Nec Corporation Magnetic tape apparatus that duplicates data and stores the duplicated data in plural magnetic tape drives
US20030217246A1 (en) * 2002-05-17 2003-11-20 Kenichi Kubota Memory control apparatus, method and program
US20070189153A1 (en) * 2005-07-27 2007-08-16 Archivas, Inc. Method for improving mean time to data loss (MTDL) in a fixed content distributed data storage
US9672372B2 (en) 2005-07-27 2017-06-06 Hitachi Data Systems Corporation Method for improving mean time to data loss (MTDL) in a fixed content distributed data storage
US9305011B2 (en) * 2005-07-27 2016-04-05 Hitachi Data Systems Corporation Method for improving mean time to data loss (MTDL) in a fixed content distributed data storage
WO2007128417A1 (en) * 2006-05-10 2007-11-15 Nero Ag Apparatus for writing data having a data amount on a storage medium
US20080256365A1 (en) * 2006-05-10 2008-10-16 Andreas Eckleder Apparatus for writing information on a data content on a storage medium
US8301906B2 (en) 2006-05-10 2012-10-30 Nero Ag Apparatus for writing information on a data content on a storage medium
US20080253256A1 (en) * 2007-04-13 2008-10-16 Andreas Eckleder Apparatus for writing data and redundancy data on a storage medium
EP2105928A1 (en) * 2008-03-27 2009-09-30 Deutsche Thomson OHG Method and device for saving files on a storage medium, method and device for correcting errors encountered when reading files from a storage medium and storage medium
WO2009118252A1 (en) * 2008-03-27 2009-10-01 Thomson Licensing Method and apparatus for storing data to a storage medium, method and apparatus for correcting errors occurring while data is read from a storage medium, and storage medium
US8943357B2 (en) * 2008-10-27 2015-01-27 Kaminario Technologies Ltd. System and methods for RAID writing and asynchronous parity computation
US20110202792A1 (en) * 2008-10-27 2011-08-18 Kaminario Technologies Ltd. System and Methods for RAID Writing and Asynchronous Parity Computation
WO2010049928A1 (en) * 2008-10-27 2010-05-06 Kaminario Tehnologies Ltd. System and methods for raid writing and asynchronous parity computation
US10013166B2 (en) 2012-12-20 2018-07-03 Amazon Technologies, Inc. Virtual tape library system
US10795856B1 (en) * 2014-12-29 2020-10-06 EMC IP Holding Company LLC Methods, systems, and computer readable mediums for implementing a data protection policy for a transferred enterprise application
US20200401556A1 (en) * 2014-12-29 2020-12-24 EMC IP Holding Company LLC Methods, systems, and computer readable mediums for implementing a data protection policy for a transferred enterprise application
US11593302B2 (en) * 2014-12-29 2023-02-28 EMC IP Holding Company LLC Methods, systems, and computer readable mediums for implementing a data protection policy for a transferred enterprise application
US10146652B2 (en) * 2016-02-11 2018-12-04 International Business Machines Corporation Resilient distributed storage system
US20210157673A1 (en) * 2016-02-25 2021-05-27 Micron Technology, Inc. Redundant array of independent nand for a three-dimensional memory array
US11797383B2 (en) * 2016-02-25 2023-10-24 Micron Technology, Inc. Redundant array of independent NAND for a three-dimensional memory array
US20210257040A1 (en) * 2020-02-18 2021-08-19 SK Hynix Inc. Memory device and test method thereof
US11501844B2 (en) * 2020-02-18 2022-11-15 SK Hynix Inc. Memory device and test method thereof

Also Published As

Publication number Publication date
US6393516B2 (en) 2002-05-21
US6289415B1 (en) 2001-09-11

Similar Documents

Publication Publication Date Title
US6289415B1 (en) System and method for storage media group parity protection
US7281089B2 (en) System and method for reorganizing data in a raid storage system
US7774643B2 (en) Method and apparatus for preventing permanent data loss due to single failure of a fault tolerant array
US9021335B2 (en) Data recovery for failed memory device of memory device array
JP3505093B2 (en) File management system
US5696934A (en) Method of utilizing storage disks of differing capacity in a single storage volume in a hierarchial disk array
WO2001013236A1 (en) Object oriented fault tolerance
US7386758B2 (en) Method and apparatus for reconstructing data in object-based storage arrays
KR0130008B1 (en) A file system for a plurality of storage classes
US6421767B1 (en) Method and apparatus for managing a storage system using snapshot copy operations with snap groups
EP0726520A2 (en) Disk array having redundant storage and methods for incrementally generating redundancy as data is written to the disk array
JPH04230512A (en) Method and apparatus for updating record for dasd array
US7882420B2 (en) Method and system for data replication
WO2005052784A2 (en) Semi-static parity distribution technique
CA2717549A1 (en) Dynamically quantifying and improving the reliability of distributed data storage systems
US7500054B2 (en) Adaptive grouping in object RAID
US8495010B2 (en) Method and system for adaptive metadata replication
JP7140688B2 (en) Data storage system and method of accessing key-value pair objects
US7865673B2 (en) Multiple replication levels with pooled devices
US20070106925A1 (en) Method and system using checksums to repair data
US7653829B2 (en) Method of data placement and control in block-divided distributed parity disk array
CN116339644B (en) Method, device, equipment and medium for creating redundant array of independent disk
US7873799B2 (en) Method and system supporting per-file and per-block replication
WO2023241783A1 (en) Device and method for improved redundant storing of sequential access data
MXPA99003253A (en) Expansion of the number of drives in a raid set while maintaining integrity of migrated data

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: AT&T CORP., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JOHNSON, THEODORE;REEL/FRAME:028342/0743

Effective date: 19990312

AS Assignment

Owner name: AT&T PROPERTIES, LLC, NEVADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T CORP.;REEL/FRAME:028369/0046

Effective date: 20120529

AS Assignment

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T PROPERTIES, LLC;REEL/FRAME:028378/0961

Effective date: 20120529

AS Assignment

Owner name: RAKUTEN, INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T INTELLECTUAL PROPERTY II, L.P.;REEL/FRAME:029195/0519

Effective date: 20120719

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: RAKUTEN, INC., JAPAN

Free format text: CHANGE OF ADDRESS;ASSIGNOR:RAKUTEN, INC.;REEL/FRAME:037751/0006

Effective date: 20150824

AS Assignment

Owner name: RAKUTEN GROUP, INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:RAKUTEN, INC.;REEL/FRAME:058314/0657

Effective date: 20210901