US20080168224A1 - Data protection via software configuration of multiple disk drives - Google Patents

Data protection via software configuration of multiple disk drives

Info

Publication number
US20080168224A1
US20080168224A1 (Application US11/621,504)
Authority
US
United States
Prior art keywords
storage
data
logical disk
control node
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/621,504
Inventor
James M. Davison
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US11/621,504
Assigned to INTERNATIONAL BUSINESS MACHINES (IBM) CORPORATION. Assignors: DAVISON, JAMES M.
Publication of US20080168224A1
Application status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 11/20: Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053: where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056: where persistent mass storage functionality or control functionality is redundant by mirroring
    • G06F 11/2058: by mirroring using more than 2 mirrored copies
    • G06F 11/2071: by mirroring using a plurality of controllers
    • G06F 11/2087: by mirroring with a common controller

Abstract

A data storage system and a method for managing a data storage system are provided. A storage controller is programmed with a disk configuration for each of one or more logical disk arrays and a protection level k. The available storage space from one or more disk drives in the data storage system is merged into a single virtual address space and the merged storage space is divided into storage segments. Next, the storage segments are allocated among the logical disk arrays and a configuration table is generated indicating the number of storage segments in each logical disk array and the physical location of each storage segment on a disk drive. The configuration table is stored in the storage controller and k copies of data may then be stored on the logical disk arrays. Multiple storage controller nodes may be accommodated to provide at least primary and secondary storage.

Description

    RELATED APPLICATION DATA
  • The present application is related to commonly-assigned and co-pending U.S. application Ser. No. 11/______ [IBM Docket #TUC920050154US1], entitled DATA PROTECTION VIA SOFTWARE CONFIGURATION OF MULTIPLE DISK DRIVES, filed on the filing date hereof, which application is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • The present invention relates generally to data storage and, in particular, to the configuration of multiple hard disks through software instructions.
  • BACKGROUND ART
  • Hard disk drives are becoming more powerful in terms of speed and capacity. And, arrays of disk drives, such as RAID (redundant array of independent/inexpensive drives) arrays are becoming more powerful in terms of their ability to protect the stored data. The various levels of RAID are well known in the industry and various new and more complex levels or combinations of levels are being developed to further improve data protection and fault tolerance. However, hard drives do fail and, even though such failures are rare on a percentage basis, due to the huge number of drives in use, the number of drive failures is, in fact, significant. Moreover, in certain critical applications or installations, any failure is significant. Due to the high use of data backups of various kinds, the risk of a loss of data has declined while a major concern has become loss of data availability during the recovery from a failure.
  • FIG. 1 illustrates a RAID system 100 in which data stored at a primary site 110 is replicated through a peer-to-peer remote copy (PPRC) operation to a secondary site 120. In the system 100 of FIG. 1, the user establishes a fixed, logical relationship between physical storage locations in the primary and secondary storage sites 110, 120.
  • A RAID controller is programmed with instructions for the RAID level of the array and all drives in the array are dedicated to the array, resulting in a fixed configuration which can support only one RAID level. Moreover, in most installations, all of the drives in the array must be on the same backplane. Thus, a typical RAID system is inflexible. And, the new RAID levels are using increasingly complex RAID algorithms and are requiring more complicated controllers.
  • Some companies have also developed “software RAID” but such systems merely emulate hardware RAID and retain all of the limitations of hardware RAID, including the predefined, fixed disk arrays and the predefined, single RAID level.
  • However, it would be preferable if, rather than continue to increase the complexity of algorithms and hardware, existing resources could be used more efficiently and in a more flexible manner.
  • SUMMARY OF THE INVENTION
  • The present invention provides a method for managing a data storage system. A storage controller is programmed with a disk configuration for each of one or more logical disk arrays and the available storage space from one or more disk drives. The sum of all disk arrays and available storage space in the data storage system is merged into a single virtual address space, and the merged storage space is divided into segments. Next, the segments are allocated logically to devices according to storage demands from the user. These logical storage devices are organized using a configuration table indicating the number of storage segments in each device. The configuration table is stored in the storage controller; data may then be stored on the logical disk arrays, with the segments mapped to physical locations by the drive controller.
  • The present invention also includes a data storage system having a drive controller, a plurality of disk drives coupled to and managed by the drive controller, and a storage controller to which the drive controller is coupled. The storage controller includes a host adapter through which a host device transmits/receives instructions and data to/from the storage controller, a processor and a configuration table accessible to the processor. The processor is programmed for receiving disk configuration instructions, merging available storage space on the plurality of disk drives into a single virtual address space, dividing the merged storage space into segments, and allocating the storage segments among one or more logical storage devices in accordance with configuration instructions stored in the configuration table; the segments are mapped to physical locations by the drive controller.
  • The present invention may further include multiple storage controller nodes, coupled to provide at least primary and secondary data storage. Each controller node stores data in accordance with the algorithm described above, thereby further improving data availability and fault tolerance.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a RAID system in which data is replicated using a PPRC operation;
  • FIG. 2 is a block diagram of a storage system of the present invention;
  • FIG. 3 illustrates a segment pool into which storage space of some or all of the drives is merged;
  • FIG. 4 illustrates hard drives within the storage loops of the system of FIG. 2; and
  • FIG. 5 is a block diagram of a multi-node storage system of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • FIG. 2 is a block diagram of a storage system 200 of the present invention. The system 200 includes a storage controller 210, one or more drive controllers 220A, 220B and an equal number of drive arrays 230A, 230B (collectively referred to as 230). One pair of a drive/array controller 220A and a drive array (having multiple disk drives) 230A form a first storage loop 240A, while another pair of a drive/array controller 220B and a drive array (also having multiple disk drives) 230B form a second storage loop 240B. Two drive controllers, two drive arrays and two loops are illustrated by way of an example and not by way of a limitation. The storage controller 210 includes a processor 212 and a memory in which a configuration table 214 is stored, as described hereinbelow. A host system 10 connects to the storage controller 210 through a host adapter (HA) 216 and communicates with the storage controller 210 using a storage protocol. The protocol allows data to be stored and retrieved, and the storage controller 210 ensures that the correct data is sent and received. The storage controller 210 manages logical devices and makes them visible to the host system 10 through the protocol.
  • At this point, the disk drives within the drive arrays 230A, 230B have not been configured for any particular RAID level or non-RAID arrangement. They merely represent raw storage space having a total of X Gigabytes. As illustrated in FIG. 3, in accordance with the present invention, all of the physical space 300 on the disk drives 230 is merged into a single virtual address space. The storage controller 210 receives configuration instructions and the merged space 300 is divided into segments, preferably but not necessarily of equal size. The number of segments on each controller loop depends on the amount of physical storage space available on that loop, which is determined by the size and formatting of the drives on the loop; larger drives yield more segments for a given loop. The merged space 300 may be thought of as a segment pool. Thus, for example, the virtual space 300 may be divided into four segments, with two segments allocated to each of the two loops 240A, 240B. A logical storage subsystem (LSS), device or disk array may then be created, as illustrated in FIG. 4 and represented in TABLE I.
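The merge-and-divide step above can be sketched in a few lines. This is an illustrative model only: the `Loop` class, `SEGMENT_SIZE_GB` and the `(loop_id, index)` segment naming are assumptions made for the example, not structures described in the patent.

```python
from dataclasses import dataclass

SEGMENT_SIZE_GB = 100  # assumed uniform segment size, for illustration only

@dataclass
class Loop:
    """One storage loop: a drive controller plus its attached drive array."""
    loop_id: int
    drive_sizes_gb: list  # formatted capacity of each drive on this loop

    def segment_count(self) -> int:
        # Larger drives yield more segments for a given loop.
        return sum(size // SEGMENT_SIZE_GB for size in self.drive_sizes_gb)

def build_segment_pool(loops):
    """Merge all physical space into one virtual pool of (loop_id, index) segments."""
    return [(loop.loop_id, i) for loop in loops for i in range(loop.segment_count())]

# Two loops of two 100 GB drives each yield a four-segment pool, two segments
# per loop, matching the example in the text.
pool = build_segment_pool([Loop(1, [100, 100]), Loop(2, [100, 100])])
```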
  • TABLE I
    LSS 1/Device 1: Segments 1–2 on Loop 1; Segments 1–2 on Loop 2
  • Logical devices managed by the storage controller 210 comprise segments from the pool 300 of available segments taken from the contiguous address space described previously. Users may specify the size of a desired logical device and the storage controller 210 will allocate the number of segments required to match the specified size. The storage controller 210 also allows the user to specify the level of protection afforded the data stored by the host system 10. The size of the address space available for allocation will depend on the organization of the logical disk arrays that make up the storage system: JBOD, RAID 5 or other. Using the letter ‘k’ to specify the minimum level of protection given to data stored on the storage system 200, k=1 indicates that there is no specific protection beyond storage of the data on the system 200. At this level, data loss is possible if the drive on which the data is stored fails. k=2 indicates that two copies of the data will be resident on the storage system 200. The storage controller 210, having previously split the contiguous address space under its management into segments, allocates k segments per required segment to the logical device. This algorithm allows the data to be stored by the host 10 at a level of duplication of at least k, with higher levels possible. For example, in a system with logical disk arrays formatted as RAID 5 and k=2, protection is provided which is equivalent to a storage controller system using RAID 10. When the arrays are configured as JBOD and k=2, protection is provided which is equivalent to RAID 1, or mirroring. The locations of the allocated segments are flexible, but preferably they are physically separated onto independent physical resources, such as the separate storage loops to which logical devices are assigned.
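The allocation rule just described, k physical segments per required logical segment, preferably on independent physical resources, might look like the following sketch. The loop-spreading heuristic is an assumption for illustration; the patent leaves the placement policy flexible.

```python
from collections import defaultdict

def allocate_device(pool, segments_needed, k):
    """Allocate k physical segments per logical segment, spreading the k
    copies across distinct storage loops where possible."""
    free = defaultdict(list)            # loop_id -> free segments on that loop
    for loop_id, idx in pool:
        free[loop_id].append((loop_id, idx))
    device = []
    for _ in range(segments_needed):
        # Prefer the k loops with the most free segments, one copy per loop.
        loops = sorted(free, key=lambda l: len(free[l]), reverse=True)[:k]
        copies = [free[l].pop() for l in loops if free[l]]
        if len(copies) < k:
            raise RuntimeError("not enough independent segments for protection level k")
        device.append(copies)
    return device
```

With the four-segment, two-loop pool from the earlier example and k=2, each logical segment receives one copy on each loop, which is the mirrored arrangement the text equates with RAID 1 protection on JBOD.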
  • TABLE II illustrates a user-designated configuration of one logical device (such as a disk array) having two segments with a protection level of 3 (that is, a copy of the data on each of three loops). Segments may be similarly allocated to additional logical devices as indicated by the ellipses. It will be appreciated that more than two segments may be allocated to each device and that higher levels of protection (k>3) may be designated.
  • TABLE II
    LSS 1/Device 1: Segments 1–2 on Loop 1; Segments 1–2 on Loop 2; Segments 1–2 on Loop 3
    . . .
  • The configuration table 214 is populated with the identity of the logical device(s), the number of segments allocated to each and the physical location of each segment on a disk drive 230. When the system 200 is in normal operation, the host 10 transmits data to the storage controller 210 which then directs that the data be stored to the logical device. In the example of FIGS. 2 and 4, two copies of the data are kept.
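One plausible in-memory shape for configuration table 214, together with the write path that fans each host write out to every recorded physical copy, is sketched below. The dictionary layout and the drive/offset values are hypothetical, chosen only to mirror TABLE I's two-segment, two-loop example.

```python
# Hypothetical layout for configuration table 214: one entry per logical
# device, with a per-segment map from storage loop to physical location.
config_table = {
    "LSS1/Device1": {
        "protection_k": 2,
        "placement": [
            {"loop1": ("drive0", 0), "loop2": ("drive0", 0)},  # logical segment 0
            {"loop1": ("drive0", 1), "loop2": ("drive0", 1)},  # logical segment 1
        ],
    },
}

def write_segment(table, device, segment, data, backing):
    """Direct a host write to every physical copy recorded for the segment."""
    placement = table[device]["placement"][segment]
    for loop_id, location in placement.items():
        backing[(loop_id, location)] = data  # one physical copy per loop
    return len(placement)                    # number of copies written
```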
  • The instructions received by the storage controller 210 may include instructions for configuring the virtual space 300 as a RAID array or as just a bunch of disks (JBOD). If the virtual space 300 is to be configured as JBOD, the instructions received by the storage controller 210 may include the number of copies the user wants to keep. If the virtual space 300 is to be configured as a RAID array, the instructions will include the RAID level. Moreover, because the present invention is not constrained by the limitations of hardware RAID, the virtual space 300 may be configured as multiple RAID arrays having the same or different levels, and the space allocated to a logical device need not be contiguous. Additionally, unlike hardware RAID, the system of the present invention does not require that copies of data be stored in the same corresponding logical locations on two arrays; the copies may be anywhere. Nor is it necessary that all of the physical storage space be used.
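The two instruction shapes described above, RAID with a level or JBOD with a copy count, could be modeled as follows. The `ConfigInstruction` name and its fields are assumptions for illustration, not an interface defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConfigInstruction:
    """Hypothetical instruction accepted by the storage controller."""
    layout: str                       # "RAID" or "JBOD"
    raid_level: Optional[int] = None  # required when layout == "RAID"
    copies: int = 1                   # copies kept when layout == "JBOD"

    def validate(self) -> bool:
        if self.layout == "RAID" and self.raid_level is None:
            raise ValueError("a RAID configuration must include the RAID level")
        return True
```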
  • FIG. 5 illustrates a further embodiment of the present invention in which a storage system 500 includes two storage controllers configured as separate nodes 510, 520. The first controller node 510 includes a primary device 512 which is coupled to a host 502. The second controller node 520 includes a secondary device 522 which is coupled to the primary device 512 through any appropriate network. The second node 520 may be in the same facility as the first node 510 or, for additional data security, may be located geographically remote from the first node 510. Data is stored in the first controller node 510 in accordance with the algorithm described above with respect to FIGS. 2-4. The data is then transferred and replicated to the second controller node 520 through, for example, a peer-to-peer remote copy (PPRC) function and stored in the second controller node 520, also in accordance with the algorithm described above with respect to FIGS. 2-4. In the event that the primary data set (that is, all copies of the data in the first controller node 510) is lost or damaged, the secondary data set stored in the second controller node 520 remains protected and may be used. Optionally, the host 502 may be configured to recognize and use the data stored on both the first and second controller nodes 510, 520. Thus, data availability and fault tolerance are improved. It will be appreciated that the system 500 may be configured with more than two controller nodes and that the configuration illustrated in FIG. 5 is for illustrative purposes only and not by way of limitation. The storage of data on the secondary controller 520 is controlled through an algorithm in order to maximize data protection. For example, if k=3 and two copies are stored at the primary controller 510, the primary controller 510 instructs the secondary controller 520 to reserve the necessary space and transfers the data to the secondary controller 520. 
Once the storage controllers 510, 520 are established as cooperating nodes, this reservation and transfer occurs without further input required from the user.
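The cooperating-node arrangement can be sketched as below. The class names are invented for the example, and the rule of k local copies per node is a simplification of the behavior described above (the patent also allows copies to be split across nodes):

```python
class ControllerNode:
    """Toy storage control node: keeps k local copies of each write."""
    def __init__(self, name, k):
        self.name, self.k = name, k
        self.store = {}

    def put(self, key, data):
        # One entry per copy, standing in for k physical segment copies.
        for copy in range(self.k):
            self.store[(key, copy)] = data

class PrimaryNode(ControllerNode):
    """Primary node that also forwards each write to its peer, in the
    spirit of a peer-to-peer remote copy (PPRC) function."""
    def __init__(self, name, k, peer):
        super().__init__(name, k)
        self.peer = peer

    def put(self, key, data):
        super().put(key, data)    # k copies on the primary's logical arrays
        self.peer.put(key, data)  # the peer keeps its own k copies, no user input needed
```

Once the peer relationship is set up, every write to the primary is replicated automatically, which models the "without further input required from the user" behavior of the text.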
  • The system of the present invention provides a user with the ability to easily tailor the system to the user's needs, even as those needs change. For example, a user may want only a single copy of data, such as for a temporary dataset for data mining, and may not want the overhead required by redundancy. If a disk fails, the dataset may be lost but the job may be re-run and the dataset recreated. On the other hand, a user may want the security of mirroring provided by a RAID 10 system, in which case the storage controller may be instructed to configure the storage space as two RAID 5 arrays. Of course, as the user's needs change, the configuration may be changed. Consequently, a user's hardware resources may be used more efficiently and in a way that better meets the user's needs.
  • The algorithm of the present invention provides several benefits over previous methods of data protection. There is complete flexibility as to where the data is located on the system, and the algorithm may be optimized within the system, or within a set of systems organized as cooperating nodes, to achieve performance improvements. In addition, the disk controllers may be simplified because the storage controller now manages more of the complexity associated with data protection, a function previously pushed onto disk controllers that are expensive to design and build.
  • It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media such as a floppy disk, a hard disk drive, a RAM, and CD-ROMs and transmission-type media such as digital and analog communication links.
  • The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. Moreover, although described above with respect to methods and systems, the need in the art may also be met with a computer program product containing instructions for method of managing a data storage system or a method for deploying computing infrastructure comprising integrating computer readable code into a computing system for method of managing a data storage system.

Claims (18)

1. A method of managing a data storage system, comprising:
configuring first and second storage controllers in the data storage system as first and second storage control nodes;
for each storage controller node:
programming the storage controller with a disk configuration for each of one or more logical disk arrays and a level of protection k;
merging available storage space from one or more disk drives in the storage control node into a single virtual address space;
dividing the merged storage space into storage segments;
allocating the storage segments among the logical disk arrays in the storage control node;
generating a configuration table indicating the number of storage segments in each logical disk array and the physical location of each storage segment on a disk drive; and
storing the configuration table in the storage controller;
receiving in the first storage control node data to be stored;
storing k copies of the data on the logical disk arrays in the first storage control node;
transferring the data from the first storage control node to the second storage control node; and
storing k copies of the data on the logical disk arrays in the second storage control node.
2. The method of claim 1, wherein programming each storage controller comprises establishing at least one RAID level.
3. The method of claim 1, wherein programming each storage controller comprises:
establishing at least one storage loop; and
assigning each logical disk array to one of the storage loops.
4. The method of claim 1, wherein allocating the storage segments comprises configuring the one or more logical disk arrays as non-RAID arrays.
5. The method of claim 1, wherein allocating the storage segments among the logical disk arrays of a storage control node comprises allocating fewer than all of the storage segments.
6. The method of claim 1, wherein transferring the data from the first storage control node to the second storage control node comprises transferring the data through a peer-to-peer remote copy function.
7. A data storage system, comprising:
first and second storage controllers, each configured as a storage control node and each comprising:
a processor; and
a configuration table accessible to the processor;
the processor being programmed for:
receiving disk configuration instructions, including a level of protection k;
merging available storage space on the plurality of disk drives into a single virtual address space;
dividing the merged storage space into storage segments;
allocating the storage segments among one or more logical disk arrays in accordance with the configuration instructions; and
entering into the configuration table the number of storage segments in each logical disk array and the physical location of each storage segment on the disk drives;
the first storage controller further comprising a host adapter through which a host transmits/receives instructions and data to/from the first storage controller;
the processor of the first storage controller being further programmed for, upon receipt of data from the host,
directing that k copies of data be stored on the logical disk arrays of the first storage control node; and
directing that the data be transferred to the second storage controller node; and
the processor of the second storage controller being further programmed for upon receipt of data from the first storage controller, directing that k copies of data be stored on the logical disk arrays of the second storage control node.
8. The data storage system of claim 7, wherein:
the configuration instructions include at least one assigned RAID level; and
the logical disk arrays of the first and second storage control nodes are configured as a RAID array in accordance with the at least one assigned RAID level.
9. The data storage system of claim 7, wherein:
the configuration instructions include at least one storage loop; and
each logical disk array is assigned to one of the storage loops.
10. The data storage system of claim 7, wherein:
the configuration instructions include a non-RAID configuration; and
the logical disk arrays of the first and second storage control nodes are configured as just a bunch of disks in accordance with the non-RAID configuration.
11. The data storage system of claim 7, wherein the logical disk arrays comprise fewer than all of the storage segments.
12. The data storage system of claim 7, wherein the processor of the first storage controller is further programmed for transferring the data to the second storage control node through a peer-to-peer remote copy function.
13. A computer program product of a computer readable medium usable with a programmable computer, the computer program product having computer-readable code embodied therein for managing a data storage system, the computer-readable code comprising instructions for:
configuring first and second storage controllers in the data storage system as first and second storage control nodes;
for each storage controller node:
programming the storage controller with a disk configuration for each of one or more logical disk arrays and a level of protection k;
merging available storage space from one or more disk drives in the storage control node into a single virtual address space;
dividing the merged storage space into storage segments;
allocating the storage segments among the logical disk arrays in the storage control node;
generating a configuration table indicating the number of storage segments in each logical disk array and the physical location of each storage segment on a disk drive; and
storing the configuration table in the storage controller;
storing k copies of the data on the logical disk arrays in the first storage control node;
transferring the data from the first storage control node to the second storage control node; and
storing k copies of the data on the logical disk arrays in the second storage control node.
14. The computer program product of claim 13, wherein the instructions for programming each storage controller comprise instructions for establishing at least one RAID level.
15. The computer program product of claim 13, wherein the instructions for programming each storage controller further comprise instructions for:
establishing at least one storage loop; and
assigning each logical disk array to one of the storage loops.
16. The computer program product of claim 13, wherein the instructions for allocating the storage segments comprise instructions for configuring the one or more logical disk arrays as non-RAID arrays.
17. The computer program product of claim 13, wherein the instructions for allocating the storage segments among the logical disk arrays of a storage control node comprise instructions for allocating fewer than all of the storage segments.
18. The computer program product of claim 13, wherein the instructions for transferring the data from the first storage control node to the second storage control node comprise instructions for transferring the data through a peer-to-peer remote copy function.
US11/621,504 2007-01-09 2007-01-09 Data protection via software configuration of multiple disk drives Abandoned US20080168224A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/621,504 US20080168224A1 (en) 2007-01-09 2007-01-09 Data protection via software configuration of multiple disk drives

Publications (1)

Publication Number Publication Date
US20080168224A1 (en) 2008-07-10

Family

ID=39595256

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/621,504 Abandoned US20080168224A1 (en) 2007-01-09 2007-01-09 Data protection via software configuration of multiple disk drives

Country Status (1)

Country Link
US (1) US20080168224A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5754756A (en) * 1995-03-13 1998-05-19 Hitachi, Ltd. Disk array system having adjustable parity group sizes based on storage unit capacities
US20010051955A1 (en) * 2000-03-17 2001-12-13 Wong John P. Mirror file system
US6363462B1 (en) * 1997-03-31 2002-03-26 Lsi Logic Corporation Storage controller providing automatic retention and deletion of synchronous back-up data
US20020174296A1 (en) * 2001-01-29 2002-11-21 Ulrich Thomas R. Disk replacement via hot swapping with variable parity
US20030115218A1 (en) * 2001-12-19 2003-06-19 Bobbitt Jared E. Virtual file system
US20040059869A1 (en) * 2002-09-20 2004-03-25 Tim Orsley Accelerated RAID with rewind capability
US20040117438A1 (en) * 2000-11-02 2004-06-17 John Considine Switching system
US20050005044A1 (en) * 2003-07-02 2005-01-06 Ling-Yi Liu Storage virtualization computer system and external controller therefor
US20050027938A1 (en) * 2003-07-29 2005-02-03 Xiotech Corporation Method, apparatus and program storage device for dynamically resizing mirrored virtual disks in a RAID storage system
US20050055603A1 (en) * 2003-08-14 2005-03-10 Soran Philip E. Virtual disk drive system and method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120079424A1 (en) * 2010-09-24 2012-03-29 Hitachi Data Systems Corporation System and method for optimizing protection levels when replicating data in an object storage system
US8700571B2 (en) * 2010-09-24 2014-04-15 Hitachi Data Systems Corporation System and method for optimizing protection levels when replicating data in an object storage system
US9146689B2 (en) 2010-09-24 2015-09-29 Hitachi Data Systems Corporation System and method for optimizing protection levels when replicating data in an object storage system
US9754003B2 (en) 2010-09-24 2017-09-05 Hitachi Data Systems Corporation System and method for optimizing protection levels when replicating data in an object storage system
US20140380242A1 (en) * 2013-06-24 2014-12-25 International Business Machines Corporation Displaying data protection levels
US10140041B1 (en) * 2017-07-28 2018-11-27 EMC IP Holding Company LLC Mapped RAID (redundant array of independent disks) in a data storage system with RAID extent sub-groups that are used to perform drive extent allocation and data striping for sequential data accesses to a storage object
US10146447B1 (en) * 2017-07-28 2018-12-04 EMC IP Holding Company LLC Mapped RAID (redundant array of independent disks) in a data storage system with drive extents allocated to individual RAID extents from individual sub-groups of storage made up of ranges of logical block addresses defined across a group of hard disk drives

Similar Documents

Publication Publication Date Title
US6678788B1 (en) Data type and topological data categorization and ordering for a mass storage system
US7836272B1 (en) Methods and apparatus for interfacing to a data storage system
US9135120B1 (en) Consistency group moving
KR100490723B1 (en) Apparatus and method for file-level striping
US7296126B2 (en) Storage system and data processing system
US6915397B2 (en) System and method for generating point in time storage copy
US9575857B1 (en) Active/active replication
US6704839B2 (en) Data storage system and method of storing data
CN102405460B (en) Virtual storage system and method of operation
US9575851B1 (en) Volume hot migration
US9507845B1 (en) Virtual splitter
US6978324B1 (en) Method and apparatus for controlling read and write accesses to a logical entity
US6304942B1 (en) Providing an upgrade path for an existing data storage system
JP4516846B2 (en) Disk array system
US5819310A (en) Method and apparatus for reading data from mirrored logical volumes on physical disk drives
JP3187730B2 (en) Method and apparatus for creating snapshot copies of data in a RAID storage subsystem
JP4175788B2 (en) Volume control device
US6895467B2 (en) System and method for atomizing storage
US20030145086A1 (en) Scalable network-attached storage system
US7594137B2 (en) Storage system and data restoration method thereof
US7127557B2 (en) RAID apparatus and logical device expansion method thereof
US6947981B2 (en) Flexible data replication mechanism
US6389459B1 (en) Virtualized storage devices for network disk mirroring applications
JP3085085B2 (en) Data access apparatus and a distributed database system
US8572330B2 (en) Systems and methods for granular resource management in a storage network

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES (IBM) CORPORATION,

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DAVISON, JAMES M.;REEL/FRAME:018734/0377

Effective date: 20061113

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION